
I don't know what that has to do with reviewing the code before deploying it. Maybe you can explain.

The new code can be deployed and the fork accepted by a super-majority of the top-20 witnesses. The rest of the witnesses do not factor into it. If those witnesses outside of the top-20 do not then upgrade to the new fork, they will effectively become defunct.
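As a rough sketch of that acceptance mechanic (the exact threshold and version strings below are illustrative assumptions, not taken from this thread), the fork activates once enough of the top-20 witnesses are signalling the new version:

```python
def fork_accepted(top_witness_versions, new_version, supermajority=0.75):
    """Hypothetical sketch: a hardfork activates once a super-majority
    of the top witnesses signal the new version. The 0.75 threshold
    is an illustrative assumption, not Steem's actual parameter."""
    signalling = sum(1 for v in top_witness_versions if v == new_version)
    return signalling / len(top_witness_versions) >= supermajority

# 16 of the top 20 witnesses on the new fork -> accepted under this assumption
print(fork_accepted(["0.20.0"] * 16 + ["0.19.6"] * 4, "0.20.0"))  # True
```

Witnesses outside that top set never enter the count, which is exactly why they can only follow the fork or shut down.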

So, sure - anyone can review the code. But if the people in the top-20 are going to blindly accept it anyway, there's nothing that the rest of the witnesses can do. You either go along with it or shut down your node.

And if you're in the secret Slack, you play by Ned's rules or you're ousted. So...guess what those people decide to do?

Actually, I've mentioned you twice this morning. I saw that you actually posted and discussed your concerns. I think that was the right thing to do.

Even if I don't always agree with all you say and do, you acted with integrity on this one.

Yes, @ats-david had a very in-depth post on his blog about his concerns that I really didn't see elsewhere. I was surprised he was one of the tiny few.

There are too many lines of code to look for oddities... It's humanly impossible to study.

The only way to test it is to run data through it. Not just "live" data, as @timcliff mentioned in a previous reply to me.

(Apparently the testnet gets live data sent to it, the same data flowing through the Steem mainnet.)

...but tainted data. Horribly corrupt data. Throw everything at it.
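To illustrate the idea (this is not Steem's actual test harness), a minimal fuzz-style sketch in Python: take a valid transaction, flip random bytes to produce "tainted" copies, and feed them to a validator, which must cleanly reject bad input rather than crash. `validate_tx` here is a hypothetical stand-in for a node's real validation path.

```python
import json
import random

def validate_tx(raw: bytes) -> bool:
    """Hypothetical stand-in for a node's transaction validator.
    Accepts only well-formed JSON objects carrying an 'op' field."""
    try:
        tx = json.loads(raw)
    except (ValueError, UnicodeDecodeError):
        return False
    return isinstance(tx, dict) and "op" in tx

def corrupt(raw: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes to simulate horribly corrupt data."""
    data = bytearray(raw)
    for _ in range(rng.randint(1, 5)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(valid_tx: dict, trials: int = 1000) -> None:
    rng = random.Random(42)
    raw = json.dumps(valid_tx).encode()
    for _ in range(trials):
        # The validator must never raise: it either accepts or rejects.
        validate_tx(corrupt(raw, rng))

fuzz({"op": "vote", "voter": "alice", "weight": 10000})
```

The point of "throw everything at it" is the invariant in the loop: no input, however mangled, should take the node down.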

There are too many lines of code to look for oddities... It's humanly impossible to study.

This is correct when there are massive updates that have been backed up for over 18 months and pushed out by the devs as a blob. It is not the case when there are smaller, targeted updates (such as the one released a few months ago to address a JSON spamming attack). The latter are more closely scrutinized by witnesses (not all of whom, but some of whom, have a software development background and are capable of reviewing code to a reasonable degree). Carefully reviewing and second-guessing 18 months of design and development work that occurs largely behind a private and opaque process just isn't possible.

Either we flat-out reject the release and insist that it be (re)packaged in small, bite-size pieces to be individually approved (and indeed a minority of witnesses strongly favors this approach), or we reject it for some known, identified reason (for example, a minority of witnesses saw the stability concerns prompted by the recent crash as such a reason), or we pass it through on the assumption that the dev team is competent. (If they are not, then the Steem community ought to be seriously working to replace or restructure it.) All of this has to be weighed as a tradeoff between conservatism and 'best practices' on the one hand and the practical availability of upgraded features on the other (and numerous devs and community members were telling witnesses how important many of these upgrades were perceived to be, which was a clear incentive to get them rolled out).

Mostly, things are working now, and the upgrade glitches lasted about one day (exchange downtime is fully up to the exchanges, and nothing prevents them from being up; the necessary fixes were released to them yesterday). Whether getting the feature improvements deployed was the right call relative to the stability risk is something that time will tell.

There are too many lines of code to look for oddities... It's humanly impossible to study.

Exactly what I thought when I read that last pre-HF20 post by Steemitblog.

Cg