
RE: Steemit and the Web of Trust: A Potential Love Story


Cool ideas.

To support a version of this, you could use the votes that have already been cast to assign trust on behalf of the user. That is, you could create another front end to the blockchain that calculates this metadata from the last 30 days of votes or so, or even longer if a user wants. But using the upvote to assign trust would tempt you to use the downvote to decrease it, and that runs contrary to the general guidelines. So perhaps store the new weights in the new system and add additional UI elements that let the user decrease trust directly. The first time the user opens the new interface, it could assign initial weights based on their voting history. Maybe they could even periodically choose to rebalance based on votes... then flagged users could still be penalized with lower trust values, but flagging wouldn't necessarily be how you decrease your trust.
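
Something like this, maybe, for the bootstrapping step. The vote records and field names here are stand-ins for illustration, not the actual Steem API:

```python
# A rough sketch of bootstrapping trust weights from recent vote history.
# The vote record shape is an assumption, not the real Steem API.
from collections import defaultdict
from datetime import datetime, timedelta

def initial_trust(votes, window_days=30, now=None):
    """Sum a user's positive vote weights per author over a recent
    window, then scale so the most-trusted author sits at 1.0."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    raw = defaultdict(float)
    for vote in votes:  # assumed: {"author": str, "weight": float, "time": datetime}
        if vote["time"] >= cutoff and vote["weight"] > 0:
            raw[vote["author"]] += vote["weight"]
    top = max(raw.values(), default=1.0)
    return {author: weight / top for author, weight in raw.items()}
```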

The drawback of using a new front end to accomplish this is that the old one continues to exist, and people would keep trying to exploit it for profit. On the plus side, if it proved popular enough (and given its advantages for new users, it probably would), it could help realign user goals with platform goals: people would aim to create higher-quality content because that would be more profitable than gaming the system.

Of course, if the ability to game the system still exists, and if the effort required to game it is lower than the effort of creating better content, it might remain a problem, because nothing will have been done to make gaming more expensive for the gamers.

Once users have assigned trust levels, will they ever see random new content again from users with no established trust? I'm sure you addressed this, but my brain must be too tired to recall. I'll have to come back and re-read it later.

Again, cool ideas.


The problem with using extant votes is that votes are associated with a given post, not associated with a given provider. Who should the trust from an upvote go to? The curator who voted it up? The original author? All of the curators who voted it up? All of the above? While any given answer is relatively arbitrary, it really should be the decision of the user. (That also means that "why am I seeing this thing?" needs to be a more transparent and answerable question.)
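
One hedged way to make it the user's decision is to store an attribution policy per user. A minimal Python sketch, with every name here invented for illustration:

```python
# Minimal sketch: a per-user policy deciding who an upvote's trust
# accrues to. The enum values and the even-split rule are assumptions.
from enum import Enum

class TrustAttribution(Enum):
    AUTHOR_ONLY = "author"        # credit only the post's author
    CURATORS_ONLY = "curators"    # credit the earlier upvoters
    BOTH = "both"                 # split credit across everyone

def attribute_trust(policy, author, curators, amount=1.0):
    """Return {account: trust_delta} for one upvote under the policy."""
    if policy is TrustAttribution.AUTHOR_ONLY:
        return {author: amount}
    if policy is TrustAttribution.CURATORS_ONLY:
        share = amount / max(len(curators), 1)
        return {c: share for c in curators}
    share = amount / (len(curators) + 1)
    return {author: share, **{c: share for c in curators}}
```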

If you continuously rescale Trust values to be between 0 and 1, normalizing to a float based on a moving window, you can integrate some sort of decay function over time. This also solves the problem of your interest in a given provider changing over time. More recent Trust investments will be larger than older ones.
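
As a sketch of that, an exponential half-life can stand in for "some sort of decay function"; the 30-day half-life is just an assumed knob, and the event shapes are illustrative:

```python
# Sketch of decayed, normalized trust: recent trust investments count
# for more, older ones fade, and everything rescales into [0, 1].
import math
from datetime import datetime

def decayed_trust(events, half_life_days=30.0, now=None):
    """events: list of (timestamp, trust_delta). Recent deltas count
    for more; older ones decay toward zero."""
    now = now or datetime.utcnow()
    total = 0.0
    for when, delta in events:
        age_days = (now - when).total_seconds() / 86400.0
        total += delta * math.exp(-math.log(2) * age_days / half_life_days)
    return total

def normalize(scores):
    """Min-max rescale raw decayed scores into floats in [0, 1]."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {k: (v - lo) / span for k, v in scores.items()}
```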

Every system is exploitable. Every single one. Web of trust systems are vulnerable to actual interpersonal negotiation exploitation. That's a problem anytime you have humans making decisions. Nature of the beast.

One assumes that in a system with a nontrivial number of users creating content, people should see some random new content from users with no connection to their personal web of trust once in a while. Of course, you can make the option to do that a deliberate choice, going to a specific page or tab, as part of the site design. I suspect it wouldn't be terribly popular, however. (See the current New tab as it is.)
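
If you did want it blended into the main feed rather than a separate tab, the mixing could be as simple as this sketch; the discovery rate and post lists are assumptions, not anything the platform actually exposes:

```python
# Illustrative only: blend a small, fixed fraction of posts from
# accounts outside the user's web of trust into the feed.
import random

def build_feed(trusted_posts, unknown_posts, discovery_rate=0.1, rng=None):
    """Mostly trusted posts, plus a sampled slice from accounts the
    user has no trust edge to, shuffled together."""
    rng = rng or random.Random()
    n_unknown = int(len(trusted_posts) * discovery_rate)
    sampled = rng.sample(unknown_posts, min(n_unknown, len(unknown_posts)))
    feed = trusted_posts + sampled
    rng.shuffle(feed)
    return feed
```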

10-year-old technology. It's amazing.

> The problem with using extant votes is that votes are associated with a given post, not associated with a given provider.

The votes in question would be the ones that the first party cast for the second party, thus elevating the level of trust that the first party has for the second party.

> The curator who voted it up?

Oh, I hadn't considered that, but maybe you could also increment the trust the first party has for third parties that also upvoted the second party. But that might be too easily exploited.
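
Something along these lines, maybe, with a heavy damping factor as an assumed hedge against collusion rings (all names are illustrative):

```python
# Hedged sketch of the co-voter increment: third parties who also
# upvoted the second party get a small fraction of the trust the vote
# confers. The damping value is an assumption, not a tuned number.
def credit_covoters(trust, covoters, base=1.0, damping=0.1):
    """Mutate trust (account -> float), crediting each prior upvoter
    a damped share of the vote's trust value."""
    for account in covoters:
        trust[account] = trust.get(account, 0.0) + base * damping
    return trust
```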

> That also means that "why am I seeing this thing?" needs to be a more transparent and answerable question.

It could be displayed graphically, like those annoying tag clouds... nodes with larger trust values are drawn larger, and the edges in the graph are colored to show which parties were responsible for the post arriving in the user's feed.
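
The data behind that view could be a list of weighted edges per post, something like this sketch (all shapes and names assumed, not any real Steem structure):

```python
# Minimal sketch of answering "why am I seeing this?": collect the
# trust edges that pulled a post into the feed so the UI can size
# nodes by trust weight and color the responsible edges.
def provenance_edges(author, upvoters, user_trust):
    """Return (source, target, weight) edges explaining why a post
    surfaced: trust in the author, plus trust in third parties whose
    upvotes helped pull it in."""
    edges = []
    if author in user_trust:
        edges.append(("you", author, user_trust[author]))
    for voter in upvoters:
        if voter in user_trust:
            edges.append(("you", voter, user_trust[voter]))
    return edges
```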

> If you continuously rescale Trust values to be between 0 and 1, normalizing to a float based on a moving window, you can integrate some sort of decay function over time. This also solves the problem of your interest in a given provider changing over time. More recent Trust investments will be larger than older ones.

I like it.

> Every system is exploitable.

Sure is.

> I suspect it wouldn't be terribly popular, however. (See the current New tab as it is.)

I generally visit that after I've digested my feed. But then it doesn't show me the tags I want to see, so I end up manually editing the URL...