
Is this the same person who owns the steemitmarket account and was offering to sell votes a few months back? I had a chat with them on Facebook, and I don't consider them ethical actors. I appreciate them stress testing things, but that can be done on test chains with input from the dev team. To me, hacking "for the money" isn't ethical. There are always going to be potential exploits of shared resources, and no system is perfect. It's always a balance between protecting the system and keeping it useful to ethical actors. I hope there's an easy fix here, but I fear there is not.

Thanks for keeping us informed.

Steem is a posting-rate-limited blockchain. Instead of paying fees to post, every user can post for free based on how much Steem Power they have. The more SP a user has, the more frequently they can post.

https://bytemaster.github.io/article/2016/02/10/How-to-build-a-decentralized-application-without-fees/

If the blockchain starts to grow too fast, we can simply tighten the limits. Normal users won't be affected much because they don't really need to post very often. And if they do, they can buy or earn more SP.
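A rough sketch of the idea behind SP-proportional rate limiting (hypothetical names and numbers, not the actual Steem implementation): each account gets a share of the network's bandwidth proportional to its share of all SP.

```python
# Hypothetical sketch of SP-proportional bandwidth limiting.
# Names and values are illustrative, not Steem's actual algorithm.

def can_post(user_sp, total_sp, user_bytes_used, network_capacity_bytes):
    """A user may consume bandwidth proportional to their share of all SP."""
    allowance = network_capacity_bytes * (user_sp / total_sp)
    return user_bytes_used < allowance

# A tiny account (10 SP of 1M total) gets ~10 KB of a 1 GB window,
# so after spamming 50 KB it is throttled, while a large account is not:
print(can_post(10, 1_000_000, 50_000, 1_000_000_000))
print(can_post(100_000, 1_000_000, 50_000, 1_000_000_000))
```

This is why "just make more accounts" doesn't scale the way plain fee-free systems would: splitting the same SP across many accounts splits the allowance with it.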

Easy: if an account makes x posts per hour, have the node stop accepting its posts for some amount of time? Is that possible?

This guy is testing the system in vitro. Thanks for updating us.

I would rather say in vivo than in vitro!

OK, he didn't intend to ruin anything, so it looks like a "laboratory try" to me. But since it's done without any security measures, your note is valid. We should be thankful for such tests, before things go too far.

Did anyone ask why he slowed down? Perhaps the protocol cut him off. Also this attack happened in July before we fixed a bug.

With ChainBase the majority of the data stays on disk.

My concern is more about blockchain size than memory.

I know you are working to reduce nodes' RAM consumption. But a small number of bots being able to stuff the blockchain with approximately 140 MB in 10 hours, even if eventually cut off by the protocol, is a problem for me. Such an attack, if repeated, would generate about 10 GB of data in a month. That's the current size of the blockchain.
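The extrapolation above checks out as plain arithmetic:

```python
# Extrapolate the observed spam rate (140 MB in 10 hours) to one month.
mb_per_10h = 140
mb_per_hour = mb_per_10h / 10              # 14 MB per hour
mb_per_month = mb_per_hour * 24 * 30       # 10,080 MB
print(round(mb_per_month / 1024, 1), "GB per month")
```

Roughly 10 GB per month, i.e. a sustained attack at this rate would double the blockchain's size (as quoted above) in about a month.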

Being efficient at spamming is only a matter of how many accounts you have to perform it with, and we all know how easy it is to create fake accounts, especially if you have some mining power.

You may have read here that I added language detection to SteemSQL. This feature relies on a paid third-party service whose fee is based on the number of requests and the volume of data sent for language detection.
The spam attack made SteemSQL exhaust its quota and I had to upgrade it (pay more) to preserve this functionality. This is a direct impact of such an attack.

When you set up a new witness/seed node, you have to download and replay the whole blockchain, which takes a considerable amount of time. Even if you are working on faster replays, managing gigabytes of data has a cost.

I think this deserves more of an answer than just "don't worry" (that's how I read your answer; I might be wrong).

Could you explain/describe what mechanism is (or will be) put in place to prevent this from occurring again, even at a larger scale?

Being efficient at spamming is only a matter of how many accounts you have to perform it

This is not the case. It is a function of SP, which rate-limits posting once blocks start to fill up. Small free accounts, and especially even smaller mined accounts, will not be able to do much.

You may be right that the allowable degree of bloating is still too high. It was reduced once and maybe should be again.

IMHO the allowable degree of bloating is working by design, which showed the real capacity of the chain/network. When the chain gets busier, the available bandwidth per VESTS will be less. By the way, the block size limit has already caused some non-ideal user experience; for example, long articles have to be posted in several parts (and the API doesn't return overly long content, though that's not due to the block size limit).

Aren't there two parameters, one controlling the maximum block size and another controlling the sensitivity of throttling when blocks start to fill? If I recall correctly, last summer both were reduced. It may be that it is better to leave the block size larger (for the reason you state) but further reduce the sensitivity to filling. (By which I mean allow less filling before bandwidth limits apply; technically that would be increased "sensitivity".)
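The two knobs being discussed can be pictured like this (hypothetical names and values, not Steem's actual parameters or code):

```python
# Hypothetical sketch of the two parameters discussed: a maximum block
# size, and a fill ratio at which per-account bandwidth limits kick in.
# Values are illustrative only.

MAX_BLOCK_SIZE = 65_536        # bytes; adjustable by witness vote
THROTTLE_FILL_RATIO = 0.25     # throttle once blocks are 25% full

def throttling_active(recent_avg_block_size):
    """Bandwidth limits apply once recent blocks exceed the fill threshold."""
    return recent_avg_block_size > MAX_BLOCK_SIZE * THROTTLE_FILL_RATIO

print(throttling_active(10_000))   # well below the 16 KB threshold
print(throttling_active(30_000))   # above it: limits apply
```

Lowering THROTTLE_FILL_RATIO while leaving MAX_BLOCK_SIZE alone is the "keep blocks large, but throttle sooner" option described above.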

The block size limit can be dynamically adjusted by witnesses. The other one, although dynamic as well, is hard-coded in an algorithm.

If you look at the account used by steemitmarket to perform this "test" attack, it doesn't have that much SP and was still able to produce 86K posts in 10 hours.

Yes, that is my point. The coefficients may be suboptimal for the current case of a lightly used network, which still allows low-SP accounts to abuse it.

My attack slowed down because I did not use a script, and with lots of comments the RAM overflowed. Then I stopped the attack; my goal was just to get approximate figures and gauge the threat.

Don't worry, he is a white-hat hacker. :)

Maybe he is, but not all people out there are.
Even if it's not a security concern, such an attack performed with malicious intent could harm the system, especially when you have to deal with huge amounts of data.

I think everything will be fine.

I want to start an attack with one account on different posts. I can count how many MB I managed to spam, and based on that figure we will know whether there is a problem and how dangerous it is.

Why not run your test on a testnet rather than the live network?
Then publish your findings.

One account does not bring any harm, and comments will be closed on the site. But the results are illustrative.


