You are viewing a single comment's thread from:
RE: Utopian bot VP analysis - 2 more rounds are available per year :)
I am pretty sure Elear set the cron job to run every minute instead of every 5 minutes. Also, I don't think the droplet Elear is hosting it on has any problems, or that he actually did anything to fix the problem today - the delays were mostly caused by nodes not working properly, me fucking up something in the code, or someone messing up the spreadsheet we use, which in turn messes with the database and the bot. Honestly I am surprised that there were only 6 delays in the last 3 months, as I recall there being more lol.
See the following start times. Of course the time sync might be slightly off, but these start times are basically impossible with an every-minute cron job - the minutes are all multiples of 5 (see the quick check below the list). I'll add this to the post. By the way, I'm very sorry that you received so many mention notifications. Since I had already cited your article, I just mentioned you directly, since you'd get a notification from the citation anyway.
Again, thanks a lot for your comment!
2019-02-05 16:15:09
2019-02-04 17:55:09
2019-02-03 20:00:09
2019-02-02 22:05:12
2019-02-02 12:55:54
2019-02-02 01:15:15
2019-02-01 05:25:15
2019-01-31 08:05:12
2019-01-30 08:30:12
2019-01-29 12:05:09
2019-01-28 13:05:09
2019-01-27 13:25:09
2019-01-26 13:30:09
2019-01-25 13:50:09
2019-01-24 13:55:09
2019-01-23 14:35:09
2019-01-22 15:05:12
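Just to show the pattern explicitly, here is a quick check in Python using only the timestamps above (the crontab expressions in the comments are just my guess at how the job might be scheduled, not taken from the actual setup):

```python
from datetime import datetime

# The bot's recorded start times from the list above.
start_times = [
    "2019-02-05 16:15:09", "2019-02-04 17:55:09", "2019-02-03 20:00:09",
    "2019-02-02 22:05:12", "2019-02-02 12:55:54", "2019-02-02 01:15:15",
    "2019-02-01 05:25:15", "2019-01-31 08:05:12", "2019-01-30 08:30:12",
    "2019-01-29 12:05:09", "2019-01-28 13:05:09", "2019-01-27 13:25:09",
    "2019-01-26 13:30:09", "2019-01-25 13:50:09", "2019-01-24 13:55:09",
    "2019-01-23 14:35:09", "2019-01-22 15:05:12",
]

# With a "* * * * *" (every minute) schedule, start minutes would be spread
# over all values 0-59; with "*/5 * * * *" they can only be multiples of 5.
for ts in start_times:
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    assert dt.minute % 5 == 0, f"{ts} does not fit a 5-minute schedule"

print("All start minutes are multiples of 5 -> consistent with a */5 cron job.")
```

Every observed start minute is a multiple of 5; over 17 runs you would essentially never see that if the job were fired every minute.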
I guess you are right then, but I could've sworn that he said he set it to 1 minute. Also, I don't mind the mentions at all, so don't worry about it! Should've said this earlier as well: cool article, are you planning on posting more analysis contributions?
Thanks a lot! Yes, I'd actually like to post more analysis contributions. I have some good ideas, but I've always postponed them for other things. And I sometimes feel a bit of pressure when writing analysis, since I'm actually an economist in academia now - how rigorous should I be, for instance. For other categories it's much more like a hobby to me :), so in some sense it's a lot of fun! I was also an engineering student in college and worked as a SWE before, so I know how to program. I also worked at Facebook, where I did a lot of data analysis. As you know, for most analyses we kind of already know the result even before doing them :) - the analysis is for the details. (To be honest, this one too. I knew about the delays themselves, but it was somehow very interesting to me, so I wanted to show it to others. I sometimes like very tiny details.) Of course we also often find unexpected results - for instance, in this case, 6 failures within 3 months was a bit surprising to me. That's why analysis is also very important and challenging. I'll try to post more analysis contributions, but please don't expect very polished and broad posts. I'll make sure to post unique ones, though :) Many thanks!
Haha :) To me, 6 in 3 months already seems like a lot. If your recollection is right (and I believe it is), then the failures I didn't catch fortunately may have happened while the VP hadn't reached full yet.
As I expected, the sheet (not shXt) part might be the problem - data fetching may have issues sometimes. By any chance, is the sheet filled out by humans? If so, it's very prone to errors. If it's publicly accessible (I guess not), I'd like to see it.
About the cron job: if it runs every 1 minute, it may spawn duplicate instances in the worst case, so it needs care. I mean, if the preparation steps before voting somehow take long, another instance can be started before the first one finishes. Actually, I thought that might be why you set it to every 5 minutes - to guarantee enough margin.
I actually examined the start times, and they all fall on minutes that are multiples of 5 (00, 05, 10, 15, 20, 25, ...). So I think it runs every 5 minutes - also backed by the powerful LLN (law of large numbers) :)
It's not publicly accessible, and most of the stuff in it is fetched by a bot. The mistakes that are made are mostly people spelling their name wrong, or just being careless and messing something up. I could probably make the code more robust so this has no effect, but I'm lazy!
I've had problems with this in the past, and it's the reason why the code for pulling contributions into our spreadsheet only runs every ~2 minutes. It used to be 1 minute before, and because of the time it sometimes takes to connect to a node, it would result in duplicate contributions being pulled in, which was quite annoying. I might've recommended that Elear do the same with the bot, but I can't remember, so you may well be correct.
Haha, I actually noticed that there are not many error checks in the code, but I tried not to mention it in the post :) And as long as the problem doesn't occur during data fetching, it seems okay in general. You're a very honest person :) I also realized, when I wrote my own bot, that there are so many unexpected errors from the Steemit API :( - everything should be caught properly :) And I totally understand, since I believe I'm even lazier than you :)
I guess it probably was every 1 minute earlier, and due to that dup problem it may have been changed to every 5 minutes. But to be honest, that still isn't a perfect solution - ultimately it probably needs a sleep & dup-check routine. If I have some time, I'll submit a PR, but it may take a while; I've already spent too much time on Steemit :) But it's very enjoyable thanks to Utopian and people like you. These days are kind of my second stage of Steemit life. Thanks!
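P.S. A minimal sketch of the kind of guard I have in mind, assuming the bot is a plain script started by cron on Linux; `LOCK_PATH` and `do_voting_round` are made-up names for illustration, not from the actual repo:

```python
import fcntl
import sys
import time

# Hypothetical lock file location, just for illustration.
LOCK_PATH = "/tmp/utopian-bot.lock"

def do_voting_round():
    # Placeholder for the real logic: fetch the sheet, skip posts that were
    # already voted on (the "dup check"), then vote on the rest.
    time.sleep(1)

def main():
    with open(LOCK_PATH, "w") as lock_file:
        try:
            # Non-blocking exclusive lock: if a previous cron run is still
            # going, this raises instead of starting a duplicate round.
            fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            print("Previous run still in progress, exiting.")
            sys.exit(0)
        do_voting_round()

if __name__ == "__main__":
    main()
```

With something like this, even an every-minute cron entry can't start two overlapping voting rounds, and the actual dup check (skipping already-voted posts) would live inside the voting routine itself.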