RE: Introducing @calibrae, @elfspice/@l0k1's new and final Steem account.
Reading your comment again made me think about how this method could be used to create markets that can't be manipulated by HFT bots.
It requires a reputation system to stop the bots!
Steemit's model is based on financial markets, which deprecate the influence of social factors. But they put a forum in, which means eventually the social will have to rise to equal the power of stake... Or someone will fork it so it does.
Yes, I guess in the end bots can be programmed to replicate any human behaviour.
I naively thought that you could score a post as human interaction by counting the comments, subcomments, upvoted comments, and the number of unique commenters.
Assuming flame wars would not be voted or flagged down.
But I guess once the bot programmers figure that out, and they have access to thousands of bots, that could be defeated as well... sigh....
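Roughly what I had in mind, as a sketch only (the field names and weights here are invented, not any real API):

```python
# Naive "human interaction" score for a post. Field names and weights are
# invented for illustration; this is not any real Steem/Calibrae API.

def interaction_score(comments):
    """comments: list of dicts like
    {"author": str, "is_reply_to_comment": bool, "net_votes": int}"""
    total = len(comments)
    subcomments = sum(1 for c in comments if c["is_reply_to_comment"])
    upvoted = sum(1 for c in comments if c["net_votes"] > 0)
    unique_commenters = len({c["author"] for c in comments})
    # Weight unique commenters highest, since a swarm of sockpuppets is
    # cheaper to script than many distinct established accounts.
    return total + subcomments + upvoted + 3 * unique_commenters
```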
Yup. It's a win for the community, and a loss for antisocial moneygrubbers :)
I am modelling this after polycentric legal systems like those of the ancient Jewish courts and the medieval Icelanders. Judges are subject to judgement by their peers, so they can't become tyrants.
I am sure that it won't take long for the power of security driven by human judgement to send a clear message to the scammer/bot/asshole community: the whole platform can turn on them in a matter of days, render all their efforts fruitless, and even lock up their deposits for days. I am also thinking of adding a transfer limitation when the account hits zero reputation, so the asshole has to choose between trickling out their stake, or making good so they can get more of it at once. Any liquid assets they have, they have the choice between trickling out, or setting a power down on their stake... you see what I mean, it puts the dilemma on the miscreants, and gives clarity to the good people.
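As a rough sketch of what that transfer limitation could look like (the constants and field names are placeholders, not a spec):

```python
# Rough sketch of the zero-reputation transfer throttle; constants and
# field names are placeholders, not a spec.

TRICKLE_FRACTION = 0.01      # max share of the liquid balance per window
WINDOW_SECONDS = 24 * 3600   # one throttled withdrawal per day

def max_transfer(account, now):
    """Return how much the account may transfer out at time `now` (unix seconds)."""
    if account["reputation"] > 0:
        return account["liquid_balance"]              # unrestricted
    if now - account["last_transfer_time"] < WINDOW_SECONDS:
        return 0.0                                    # window already used
    # At zero reputation, only a trickle per window until they make good.
    return account["liquid_balance"] * TRICKLE_FRACTION
```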
I am working on the right formulas for mute/follow effects on holding or releasing stake power via modulating effective reputation score, to avoid flamewars. Some of it is going to be hard to model because of its complexity, but its complexity is due to concurrency rather than complex sequencing.
It should not be sequence dependent, so as individual accounts change the parameters, only one operation is required per change. Virtual operations, I guess they are called: storing a transient state derived deterministically from the database state.
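For example, effective reputation could be recomputed purely from the current follow/mute sets, so the result is the same no matter what order the changes arrived in (the weighting below is invented for illustration):

```python
# Illustrative only: an order-independent recomputation of effective
# reputation and stake power from the current follow/mute sets. The
# weighting and the 100-point normalisation are invented; the point is
# that the result depends only on current database state, never on the
# sequence of operations that produced it.

def effective_reputation(base_rep, followers, muters):
    """followers/muters: lists of the accounts currently following or
    muting this account, each carrying its own reputation so that
    judgements are weighted by the standing of the judge."""
    boost = sum(f["reputation"] for f in followers if f["reputation"] > 0)
    penalty = sum(m["reputation"] for m in muters if m["reputation"] > 0)
    return base_rep + boost - penalty

def effective_stake_power(stake, eff_rep):
    # Stake power is held or released in proportion to effective reputation,
    # clamped so flamewar feedback can't drive it below zero.
    return stake * max(0.0, min(1.0, eff_rep / 100.0))
```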
I don't see how there can be any patterning discovered in comments that can positively identify humans. The best Turing test is other humans, and the best Turing test of all is many other humans. That's why this model puts the tools to submit these judgements in the hands of the users, rather than trying to guess it with heuristics.