RE: Introducing @calibrae, @elfspice/@l0k1's new and final Steem account.

in #calibrae, 7 years ago (edited)

It will be bot-proof, not through automation but by enabling humans to 'curate' user accounts via the various mechanisms.

All accounts on Steemit will be migrated, every last one, as current at the day prior to launch. They will all be reset to default starting values, and then everyone can have at it. Basically, you will just take your username and password from here, and it will unlock your new, clean, baseline-level beginner account.

The initial users will even have the power to blot out all the bots, premine whales and trolls right from the first day by muting them en masse, or at least as many as they have the power to make transactions for (I'm thinking each initial account will start with a grant of 12 transactions per day). Even if they show up, they will be stuck with one transaction a day if enough people mute them. Then, if they want to stay, they can work their way up slowly by being good.
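The allowance-plus-mass-mute mechanic described above can be sketched roughly as follows. This is a minimal illustration, not the actual Calibrae design: the names (`Account`, `daily_tx_allowance`) and the numbers (`MUTE_THRESHOLD`, the linear scaling) are assumptions chosen just to show the shape of the idea.

```python
from dataclasses import dataclass

# Illustrative constants, not actual Calibrae parameters.
DEFAULT_DAILY_TX = 12   # new accounts start with 12 transactions per day
MIN_DAILY_TX = 1        # heavily muted accounts fall to one per day
MUTE_THRESHOLD = 100    # mutes needed to drop an account to the floor

@dataclass
class Account:
    name: str
    mutes_received: int = 0

    def daily_tx_allowance(self) -> int:
        # Scale the allowance down linearly as mutes accumulate,
        # never dropping below the one-transaction floor.
        if self.mutes_received >= MUTE_THRESHOLD:
            return MIN_DAILY_TX
        scale = 1 - self.mutes_received / MUTE_THRESHOLD
        return max(MIN_DAILY_TX, round(DEFAULT_DAILY_TX * scale))

newcomer = Account("alice")
bot = Account("spambot", mutes_received=250)
print(newcomer.daily_tx_allowance())  # 12
print(bot.daily_tx_allowance())       # 1
```

The key property is that the penalty is community-driven: no heuristic decides who is a bot, the allowance simply shrinks as more humans mute the account.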

I like it :) New game, same player shoots again :)

Yes, a limited number of transactions should favour humans over automated behaviour.
Do you think comments should be more recognised as a signal of a "quality" post worthy of a share of the reward pool?

Reading your comment again made me think about how this method could be used to create markets that can't be manipulated by HFT bots.

It requires a reputation system to stop the bots!

Steemit's model is based on financial markets, which deprecate the influence of social factors. But they put a forum in, which means eventually the social will have to rise to equal the power of stake... Or someone will fork it so it does.

Yes, I guess in the end bots can be programmed to replicate any human behaviour.
I naively thought you could score a post as human interaction by counting the comments, subcomments, upvoted comments and the number of unique commenters, assuming flame wars would not be voted or flagged down.
But I guess once the bot programmers figure that out, and they have access to thousands of bots, that could be defeated as well... sigh....

Yup. It's a win for the community, and a loss for antisocial moneygrubbers :)

I am modelling this after polycentric legal systems like those of the ancient Jews and the medieval Icelanders. Judges are subject to judgement by their peers, so they can't become tyrants.

I am sure it won't take long for this system of security by human-driven judgement to send a clear message to the scammer/bot/asshole community: the whole platform can turn on them in a matter of days, render all their efforts fruitless, and even lock up their deposits for days. I am also thinking of adding a transfer limitation when an account hits zero reputation, so the asshole has to choose between trickling out their stake or making good so they can get more of it at once. For any liquid assets they have, they face the same choice: trickle it out, or set a power-down on their stake... you see what I mean, it puts the dilemma on the miscreants and gives clarity to the good people.
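The "trickle or make good" dilemma above could look something like this in code. Again, this is a hypothetical sketch: the `TRICKLE_CAP` value and the function name are illustrative assumptions, not a description of any implemented rule.

```python
# Illustrative daily cap for zero-reputation accounts, not a real parameter.
TRICKLE_CAP = 10.0

def allowed_transfer(reputation: float, requested: float) -> float:
    """Return how much of a requested transfer actually goes through.

    Accounts in good standing transfer freely; accounts at or below
    zero reputation are capped to a daily trickle, so a miscreant must
    either drain funds slowly or rebuild reputation first.
    """
    if reputation > 0:
        return requested
    return min(requested, TRICKLE_CAP)

print(allowed_transfer(50.0, 1000.0))  # 1000.0, good standing
print(allowed_transfer(0.0, 1000.0))   # 10.0, zero rep, trickle only
```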

I am working on the right formulas for mute/follow effects on holding or releasing stake power, via modulating the effective reputation score, to avoid flamewars. Some of it is going to be hard to model because of its complexity, but that complexity comes from concurrency rather than complex sequencing.

It should not be sequence-dependent, so as individual accounts change the parameters, only one operation is required per change. Virtual operations, I guess they are called: a transient state derived deterministically from the database state.
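The order-independence point above can be made concrete with a small sketch: if effective reputation is a pure function of the current follow/mute sets, the result is the same no matter what sequence those edges arrived in. The weights and names here are illustrative assumptions only.

```python
def effective_reputation(base: float,
                         followers: set[str],
                         muters: set[str]) -> float:
    # A commutative aggregate over set members gives the same answer
    # regardless of insertion order: a "virtual operation" derived
    # from database state rather than from replayed history.
    # Weights (+1 per follower, -2 per muter) are made up for the sketch.
    return base + 1.0 * len(followers) - 2.0 * len(muters)

rep_a = effective_reputation(10.0, {"bob", "carol"}, {"mallory"})
rep_b = effective_reputation(10.0, {"carol", "bob"}, {"mallory"})
assert rep_a == rep_b  # sets are unordered, so sequencing cannot matter
```

Because the derived value depends only on the current state, a change to one follow or mute edge needs just one recomputation, with no concern for the history of operations that produced the sets.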

I don't see how any patterning discovered in comments can positively identify humans. The best Turing test is other humans, and the best Turing test of all is many other humans. That's why this model puts the tools to submit these judgements in the hands of the users, rather than trying to guess with heuristics.

Well, I'm not sure about that; there isn't really an opposite to commenting, unlike follow/mute or up/down vote. Whether a comment is supportive or not isn't what comments are about. In a debate, the merit, or lack thereof, can only be judged by humans. I'll be interested to hear proposals to this effect, but bots can comment the shit out of a post, and it means nothing.