You are viewing a single comment's thread from:

RE: Introducing MIRA

in #steem • 6 years ago

I think direction is critical. I am curious about a few things.

  1. Can you throw more RAM at it than required to speed up a node? Say this takes RAM requirements down to 16GB: would having a 64GB node allow for caching, or will that only be Jussi's job?

  2. How close to the 4GB RAM target do you think the first iteration will take us?

  3. On a node with significant RAM (256GB-512GB), would this be slower than the current system?

 6 years ago (edited)

These are all fair questions - while a lot of this project is already completed, it will be much easier to answer these questions once we get a little bit closer to completion. I believe the answer to #1 is 'probably not in the first iteration'. This is also being implemented as a plugin, so it can still be done the 'old' way.

Good to hear of this progress Justin.

Is development still on the schedule Ned outlined a couple of weeks back, i.e. ready to roll out by end of January-ish?

Tomorrow, Thursday, 8pm - 11pm on MSP Waves we have the second State of Steem Forum - this week focusing on Technology, including nodes, RocksDB, etc.

Would you or anyone else from Steemit Inc be able to pop along for 20 minutes to give an update and maybe answer a few questions?

Thank you



Correct me if I’m wrong, but wouldn’t the extra RAM be used by the OS as buffer cache for the disk-backed RocksDB files, like every other disk-backed database?

Yes, RocksDB itself does technically do this (or not, if allow_os_buffer is set to false) - certain things could also be supported through runtime/compile-time options. I believe the OP may have been asking about something more like using different storage methods for portions of the DB, not necessarily OS-level caching.
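For anyone unfamiliar with the pattern being debated here, a minimal sketch of the general idea (this is illustrative only, not MIRA's or RocksDB's actual code; all names are hypothetical): an application-level LRU cache sits in front of a slow disk-backed store, so a node with spare RAM can hold a larger cache and serve repeated reads from memory instead of disk.

```python
from collections import OrderedDict

class CachedStore:
    """Hypothetical disk-backed key/value store with an LRU cache in front.

    Illustrates the trade-off discussed above: more RAM buys a larger
    cache, so fewer reads have to fall through to the slow backing store.
    """

    def __init__(self, backing, cache_entries):
        self.backing = backing          # stands in for a RocksDB-like store on disk
        self.cache = OrderedDict()      # LRU cache: most recently used at the end
        self.cache_entries = cache_entries
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]        # slow path: would be a disk read
        self.cache[key] = value
        if len(self.cache) > self.cache_entries:
            self.cache.popitem(last=False)  # evict the least recently used entry
        return value

# Usage: with a cache big enough to hold the working set, only the
# first pass over the keys misses; later passes hit memory.
disk = {f"account:{i}": i for i in range(1000)}
store = CachedStore(disk, cache_entries=100)
for _ in range(3):
    for i in range(50):
        store.get(f"account:{i}")
print(store.hits, store.misses)  # → 100 50
```

Whether that caching lives in the application (as sketched), in RocksDB's own block cache, or simply in the OS page cache over the database files is exactly the distinction being made in this sub-thread.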

Some really good points there @themarkymark