
RE: Calibrae Day 2 Progress Report


We need users!

Also, I intend to make running a witness for Calibrae pretty much point-and-click. There will be binaries, and scripts to make things easier. I had fun with the endless puzzle of getting steemd to work, but it is such a high barrier to entry that I don't think it's reasonable.

I also intend to eventually alter the witness scheduling system so that there is no flat payout cycle for the top 19, but instead a steadily declining rate of block allocations, so that the competition for votes is fiercer and the churn on positions is higher as new users join the election.
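
To give a concrete feel for what I mean, here is a minimal sketch of a declining allocation curve - the number of slots per round and the decay factor are made-up illustration values, not the actual Calibrae parameters:

```python
# Hypothetical sketch of a declining block-allocation schedule.
# num_witnesses, slots_per_round and decay are illustrative values only,
# not the actual Calibrae parameters.

def declining_allocations(num_witnesses=19, slots_per_round=63, decay=0.9):
    """Give rank 1 the most blocks per round, decaying by `decay` per rank."""
    weights = [decay ** rank for rank in range(num_witnesses)]
    total = sum(weights)
    return [round(slots_per_round * w / total) for w in weights]

if __name__ == "__main__":
    for rank, slots in enumerate(declining_allocations(), start=1):
        print(f"rank {rank:2d}: {slots} blocks per round")
```

The point is simply that a lower rank earns proportionally fewer block slots, so holding a top position is worth actively campaigning for.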


Looks like you are really putting some thought into this. I've brought your project to the attention of a few others.

What OSs will you support with your binaries? I'm a Linux man.

Well, Linux is easy. I am still tinkering with how to persuade it to build static binaries. It can probably be made to run on most distros without the shared libraries integrated into it, but I'd like to have the option of download-and-run, portable and all (it's pretty much written that way already, if you launch it next to the binary via ./steemd).

I'm working on that today, actually... I have the snapshot here on my PC now (it took some time to download 23 GB), so I can finish the changes to the CMake files to make it produce static binaries.

As for Windows and Mac, well, I'll leave the Mac binaries to Mac people; I haven't the faintest idea how to do that. Hopefully we can get Windows binaries going too - there is absolutely no reason why they can't be made. Of course Docker could be used, but I'd rather not force people to use that; handy as it can be, it's just another layer of complexity.

Have you heard of the AppImage project? Many software projects are offering these standalone packages for Linux now.

http://appimage.org/

Thanks for the tip... if that works like it looks, this is gonna solve a lotta problems.

So, from what I am reading about it so far, this does require an Ubuntu 14.04 environment, which I know it builds on. That is not a difficult constraint to satisfy: it sounds like I should be able to just build an Ubuntu 14.04 chroot environment (this may be a little tricky, but doable in an automated way, for example using debootstrap), then write the scripts that do the build inside that, and voila. Binaries for everyone! Well, every Linux anyway :)

How does this differ from just using Docker, which I'm fairly familiar with? You can do that with Docker, right?

I've been using https://github.com/phusion/baseimage-docker and love it. You can run one or multiple processes, plus scheduled tasks. It's like a whole PC in a Docker container, and it's very widely supported by VPS providers, for really simple deployment.

I'm running the same Docker container on my laptop, and when I make a change, I just replace the Docker image on the remote machine. It saves state - it's great!

Am I missing some point?

https://www.reddit.com/r/docker/comments/5eptm9/are_docker_containers_cross_platform_or_no/

Docker also lets you run on any x86-64 system. I will for sure be updating and uploading a Docker image to Docker Hub as well, so it can be as easy as docker run calibrae-project/calibrd. The image can thus be built on Ubuntu 14.04, same as the AppImage, and run on any Linux, or within a Docker container.

There has recently been a privilege escalation exploit that can let code escape from the container though - https://threatpost.com/docker-patches-container-escape-vulnerability/123161/ - but since this project won't involve running arbitrary code, it should not be a problem. The AppImage version won't have this problem anyway, as it runs outside a container.

So it will be entirely up to the operator how to deal with this.

Thanks - I hadn't heard about that Docker exploit. That's bad for bare-metal Docker providers I guess, but it won't affect most people, who would use a VPS or their own PCs.

I see - as long as there remains a Docker container, and preferably an easier-to-use one. It should not be anything like as difficult to set up a witness as it currently seems!

It absolutely should not! It should be just fill in a form, point, and shoot. Bam. Target destroyed!

I will be adding this feature too - thanks for making me think of it. It is very easy to add: a form that builds the steemd configuration automatically. We can then also use a central repository of all available seed nodes on the network to fill in that important information automatically as well.
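
As a rough sketch of what the form's back end could do (the key names follow the steemd config.ini style - witness, private-key, seed-node - but should be checked against the actual build, and the seed-list URL is a made-up placeholder):

```python
# Hypothetical sketch: generate a witness config.ini from form fields.
# Key names mirror the steemd config style but may differ per version;
# the seed-list URL is a placeholder, not a real service.
import urllib.request

SEED_LIST_URL = "https://example.org/calibrae-seeds.txt"  # hypothetical

def fetch_seed_nodes(url=SEED_LIST_URL):
    """Pull one host:port per line from a central seed-node repository."""
    with urllib.request.urlopen(url) as resp:
        return [line.strip() for line in resp.read().decode().splitlines()
                if line.strip()]

def build_config(witness_name, signing_key_wif, seeds):
    """Assemble the minimal witness section of a config.ini."""
    lines = [f'witness = "{witness_name}"', f"private-key = {signing_key_wif}"]
    lines += [f"seed-node = {seed}" for seed in seeds]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    config = build_config("someuser", "5J...exampleWIF", fetch_seed_nodes())
    with open("config.ini", "w") as f:
        f.write(config)
```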

A little note to @sneak, @ned, vandenberg, et al, this is how you do user consultations to determine necessary changes to a software project.

Excellent! =)

Even popular software such as GIMP is offering .AppImage packages along with all of its other standard ones these days. Even Linus Torvalds thinks it's a good idea, and he's hard to please.

There are quite a few bits of software that I run this way now, especially if I want to give new development versions a spin, since they may not have landed in the repositories yet. I'm an openSUSE user.

I'm looking forward to getting involved when you have something up and running.

BTW, will you establish your own coin for this, or will you use Steem/SBD?

We will be removing all the abstractions and SBD from the ledger - everything will be a renamed version of VESTS. The ratio of VESTS per SBD and STEEM can be found by querying the chain at any given moment, and it changes according to the median of the witness price feeds. The snapshot captures a specific state of these ratios, and every balance will be converted to its VESTS base: liquid VESTS will become JUICE, and Steem Power (vested VESTS) will become Stake.
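
For example, the current ratio can be read straight off the chain's dynamic global properties. A quick sketch of the idea, assuming a steemd RPC endpoint on localhost and the classic database_api call (field names may differ between versions):

```python
# Sketch: query a steemd node for the current VESTS-per-STEEM ratio.
# Assumes an RPC endpoint at localhost:8090 and the classic database_api
# "get_dynamic_global_properties" call; field names may vary by version.
import json
import urllib.request

def get_dynamic_global_properties(url="http://127.0.0.1:8090"):
    payload = {
        "jsonrpc": "2.0",
        "method": "call",
        "params": ["database_api", "get_dynamic_global_properties", []],
        "id": 1,
    }
    req = urllib.request.Request(url, json.dumps(payload).encode(),
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]

if __name__ == "__main__":
    props = get_dynamic_global_properties()
    steem = float(props["total_vesting_fund_steem"].split()[0])  # "... STEEM"
    vests = float(props["total_vesting_shares"].split()[0])      # "... VESTS"
    print(f"VESTS per STEEM: {vests / steem:.6f}")
```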

We should have a testnet up and running within a couple of weeks I think.

Which accounts will be carried over? All or start from scratch? That also has implications for existing posts and comments.

The opt in thread is here:

https://steemit.com/calibrae/@elfspice/calibrae-strategy-change-opt-in-rather-than-algorithmically-determined-opt-out

Drop a comment in saying you want in, to be included.

The accounts will be copied over from a snapshot of the chain dated the 5th of August, including the public keys (enabling you to log in with the password that was current on that day), your JSON metadata (location, website, comment, display name), and the account balances, which will all be converted to their VESTS equivalent and added to the relevant liquid Juice and Stake, and, importantly, your reputation score.

Reputation scores will be calculated as a coefficient against your stake in Calibrae (your rank against the highest account on the platform, as a percentage, limiting your maximum vote power and your ability to dispense rewards or punishments). This will enable community suppression of high-stake mischief-makers and bots, and when a reputation is smashed down to zero, the account will only be able to make one transaction per day (either a post, a vote, a transfer, or a profile data change). This is a community-based behaviour regulation system. Here on Steemit, you can self-vote up your reputation, and people who have negative reputations, like berniesanders, can still destroy your rewards, which is entirely unjust.
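
To make the coefficient idea concrete, here is a toy sketch - the one-transaction-a-day limit at zero is as described above, but the exact scaling is an illustrative guess, not the final formula:

```python
# Toy sketch of reputation acting as a coefficient on stake.
# The linear scaling here is illustrative, not the final Calibrae formula.

def effective_vote_power(stake, highest_stake, reputation, max_reputation):
    """Rank against the largest account (as a fraction), scaled by reputation."""
    stake_rank = stake / highest_stake                      # 0..1
    rep_coeff = max(0.0, min(1.0, reputation / max_reputation))
    return stake_rank * rep_coeff

def daily_transaction_limit(reputation):
    """A reputation smashed to zero limits the account to one transaction a day."""
    return 1 if reputation <= 0 else None                   # None = normal limits

if __name__ == "__main__":
    # A mid-sized account at 60% reputation votes at a fraction of its stake rank.
    power = effective_vote_power(stake=5_000, highest_stake=100_000,
                                 reputation=60, max_reputation=100)
    print(f"vote power coefficient: {power:.2f}")   # 0.03
    print(daily_transaction_limit(0))               # 1
```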

We are not copying the forum data at all, only the latest state of each account as at the 5th of August. It will be a clean slate, except that your account balances will be transferred. It's sorta like the BCC fork, except only including those who want to be included.

What does the blockchain JSON compress to? I'd like to do some analysis, including what takes up space, so we can model how much proposed changes would affect blockchain size/performance.

I dunno. It should be protobuf, for space reasons, but I have no idea, to be honest. Let me less the block_log.

It's probably some kind of native Boost binary serialization format. Probably, knowing Boost, the format changes after some version and before some version... IDK, but I would use protocol buffers, personally. Likely it's not a lot different.
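
If you want a rough number without waiting for me, something like this sketch will tell you how well a sample of the block_log compresses (the path is the default steemd data dir layout - adjust it to your setup):

```python
# Rough sketch: measure how well a chunk of the block_log compresses.
# Path and sample size are placeholders; adjust them to your node.
import gzip

BLOCK_LOG = "witness_node_data_dir/blockchain/block_log"  # adjust to your setup
SAMPLE_BYTES = 256 * 1024 * 1024  # 256 MB sample, not the whole 23 GB

def sample_compression_ratio(path=BLOCK_LOG, sample=SAMPLE_BYTES):
    with open(path, "rb") as f:
        data = f.read(sample)
    compressed = gzip.compress(data, compresslevel=6)
    return len(data), len(compressed)

if __name__ == "__main__":
    raw, packed = sample_compression_ratio()
    print(f"{raw / 2**20:.0f} MB -> {packed / 2**20:.0f} MB "
          f"({packed / raw:.2%} of original)")
```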

Ok. My home PC doesn't have enough RAM really, so I may just have a go at running a witness node on your network on an hourly-billed VPS for a little while, once you've simplified it a bit, and get it that way.

To run a witness, especially on a brand new chain, would only require about 2 GB of memory, even with a slow spinning disk, at least for backup. But if it was getting a block every minute you might find you want it to have an SSD, and the connection has to be super reliable.

I take a bit of a different angle on the reliability of witness servers - so long as there are 22 witnesses, I figure, it doesn't matter if half of them miss blocks a few times a day. I may look into it down the track, having the scheduling system account for this by deprioritising failing nodes automatically and slowly re-adding their slots in the schedule.

There is no real issue if the witnesses recalculate the schedule every few minutes; in reality it's not that complex a process, and I don't believe it impacts block production. The way it works here, and with the culture around it, the job is made very rigid. If 8 out of 10 blocks make it on, then the total capacity of the network is only reduced by 20%, and you don't start to see a ceiling on this until it reaches something over 3000 tx per block, or so. I forget exactly how fast it goes, but the network has never been saturated yet, at 95% or so uptime on nodes.
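
Back-of-the-envelope, using the figures above (the per-block transaction count is a rough number from memory, as I said):

```python
# Back-of-the-envelope capacity check using the rough figures above.
blocks_scheduled = 10
blocks_produced = 8          # 8 out of 10 scheduled blocks make it on
tx_per_block = 3000          # rough ceiling, from memory

capacity = blocks_produced / blocks_scheduled   # 0.8, i.e. a 20% reduction
tx_per_stretch = blocks_produced * tx_per_block
print(f"capacity: {capacity:.0%}, ~{tx_per_stretch} tx per 10-block stretch")
```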

As a rule, nodes only have something like 6 seconds to prepare to make a block. The schedule is not mapped out very far ahead, for the obvious reason that this makes it difficult to attack a specific node via its schedule.

I personally don't think the scheduling system is the optimal solution, at all, but it seems to work so far.

By the way, for running a witness you would not want to rent by the hour, as it has to be running every 3 seconds to grab new blocks. I can show you VPS services that will cost you under 10 euros a month and be ridiculously ample.

I meant for the purpose of getting the old chain data, and to learn about the current issues, not adding new blocks, but maybe you're not planning to run the old chain at all? I'm still some way from fully understanding how all these parts fit together too.

On my dev machine I have an SSD drive, but only 20 GB of space on it currently, and the machine only has 8 GB of RAM. I've tended to use VPS or cloud HPC when I've needed more, but I'd be pleased to see the cheap VPS options.

I've got some real-world work to do for a client, but I hope to get on your discord soon. I've never used that before either, so all this is a pretty steep learning curve for me! But I see that as part of the appeal of the opportunity. :)

Oh, well, I say why not first make the legacy steemd build this way, and yes, I'll seed the chain snapshot and mention it in the README.md, and people can then do their own configuration/forensics. These are probably, basically, first cabs off the rank, since getting this part done is probably one of the easiest and most important tasks.

Sure, I'll divvy out the juice on VPS when you pop into the chat...