Towards an Incentive-Compatible Magnitude Distribution

in #gridcoin · 7 years ago (edited)


Hi GridCoin Community:   

I am writing this post to address what I see as incentive problems inherent in the current magnitude distribution. I describe some of the problems, and touch on Total Credit Delta, ranking projects, and other potential solutions. Primarily, I would like to contribute to the ongoing discussion, and perhaps begin some new discussions as well. If all of this sounds like old stuff, please forgive me; I’m relatively new here.   

The Problems  


Of these, problems 1-3 are incentive related, while 4 and 5 are other issues also addressed by one of the proposed solutions.

  1. One of the purposes of GridCoin is to reward people for contributing their idle processing power to scientific and other computations of interest. Currently, a person often has a financial incentive to contribute to a project they support less than another. For example, I want to use my GPU to run computations for Milkyway@home, but I might make more GRC if I run it for Collatz Conjecture instead. While some may consider this a feature and not a bug, since it incentivizes users to contribute to projects that may not otherwise receive much support, I consider it a major problem. Ideally, a person should be incentivized to contribute to the project they deem most worthy of their support.  
  2. It is possible to think that a project deserves to be whitelisted, and at the same time believe that other projects on the whitelist are more deserving of computational power. Currently, there is no way to reflect this in magnitude distribution, as all projects are given equal magnitude.  
  3. If a project runs out of work units, users still receive GRC based on their Recent Average Credit (RAC), meaning that users can receive GRC for doing no work.   
  4. Short term projects that have a large, but finite amount of work units – for example, one month of work units given the current computational power devoted to SETI@home – are not possible to include under current work unit whitelisting requirements.   
  5. Distributed computing projects outside of BOINC currently cannot be included in the GridCoin network. I am not sure whether this is a priority for the GridCoin community, but I have seen it mentioned here and there, so I am including it in this list.

Proposed Solutions  


  • As outlined thoroughly by @jringo here and here, the Total Credit Delta (TCD) solves problem 3, as users only receive credit for the work they’ve actually done since the last superblock. In any overall structure change, I think TCD, or something similar, should be implemented, as it seems like a much fairer way of assessing someone’s contribution to a project.  
  • Unfortunately, TCD does not address problems 1 and 2. As mentioned in previous threads, most recently by @donkeykong9000 here, ranking projects by certain factors such as publications and work units is another possibility. This is a good way to address problem 2, and to a lesser extent problem 1.  

I think that, combined, the two solutions above address problems 2 and 3 very well. Project ranking can also address, but perhaps not solve, problem 1, depending on how it is implemented. Thus, even with ranked projects, there may still be a financial incentive to crunch for projects that are not your favorite.   

  • Basing magnitude on hardware contribution/FLOPS alone, regardless of the project being crunched. The idea here is simple: ignoring the CPU/GPU difference for a moment, your hardware performs some amount of work in some amount of time. You are rewarded for that work proportionally to the sum total of the entire network work, regardless of which project you’re crunching.   
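To make this proportional idea concrete, here is a minimal Python sketch of a FLOPS-to-magnitude split. Everything in it (the cruncher names, FLOPS figures, and the total-magnitude constant) is invented for illustration; this is not how the GridCoin client actually computes magnitude.

```python
# Hypothetical sketch of a FLOPS-proportional reward: each cruncher's
# magnitude is their share of the total work done on the network,
# regardless of which project they crunched. All names and numbers
# below are invented for illustration.

def flops_based_magnitudes(contributions, total_magnitude=115000.0):
    """Split total network magnitude in proportion to raw work done.

    contributions: dict mapping cruncher -> FLOPS contributed since
    the last superblock.
    """
    network_total = sum(contributions.values())
    return {who: total_magnitude * flops / network_total
            for who, flops in contributions.items()}

# alice did half of all the work, so she receives half the magnitude,
# whichever projects the three of them chose to crunch
mags = flops_based_magnitudes({"alice": 2.0e12,
                               "bob": 1.0e12,
                               "carol": 1.0e12})
```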

There are several problems with this last proposal, one of them quite serious, but first I will point out the benefits.  

  • a) Your hardware would be guaranteed to produce a certain amount of GRC, all else being equal. The project to which you contribute would have no bearing on how much you earn, so you can contribute to your favorite project with no financial incentive pushing you to contribute to another project. This solves problem 1. 
  • b) Projects would automatically be ranked in a – using the term liberally – “free-market” way, meaning that the most popular projects will simply receive the most computational power. This addresses problem 2, although this presents a major problem I mention later.  
  • c) This solves problem 3 as well as TCD does. 
  • d) It paves the way for short-term projects and projects outside of BOINC to be included in the GridCoin network, since GRC awarded is based on work done by hardware, not on information provided only by BOINC, thus addressing problems 4 and 5.  

Now the issues:

  • e) This would require agreed-upon standards and benchmarks for different projects. For example, how do we compare a contribution to Universe@home with a contribution to VGTU? One way would be to take a CPU, run it on both projects to find out how much work that hardware accomplishes on each project within a given time frame, and then equate the work units crunched for one project with those of the other. Complicating this immensely is the fact that different hardware architectures are better suited to different projects. Another complicating factor is that some projects are bundles of other projects, so the same CPU might perform differently on different subprojects. Perhaps this could be fairly addressed by taking a weighted average over different kinds of hardware. We already have a lot of information from many GridCoiners here on Steemit regarding how much work particular hardware can accomplish. Regardless, this is not a trivial task.  
  • f) The biggest problem: a project already owning a massive amount of computing power could take advantage of this. For example, a project manager who needs to perform a lot of computations, but already has an enormous amount of computing power, could get their project whitelisted, and then point all of their existing hardware at the project. I’m talking about big players here. They could theoretically start receiving some enormous portion of minted GRC for something they had the budget to do anyway – in effect, some organization that does not need volunteer computing power using GridCoin as a subsidy for their project. The current magnitude distribution prevents something like this from happening on an enormous scale – only one project could be monopolized in such a way. I have thought about ways to prevent this from happening, but I wanted to receive feedback from the community before continuing.   

There might be more issues associated with this last proposal; those are just the ones I thought of.
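To sketch what the benchmarking in (e) might look like, here is a hypothetical weighted-average normalization across hardware classes. The project names are real, but every throughput figure and hardware weight below is made up for illustration.

```python
# Hypothetical sketch of the cross-project benchmarking in (e): equate
# one project's credits with another's by averaging throughput over a
# basket of hardware, weighted by how common each machine class is on
# the network. Every figure below is invented for illustration.

throughput = {  # credits per hour each machine earns on each project
    "Universe@home": {"i5": 100.0, "xeon_24c": 600.0},
    "VGTU":          {"i5": 150.0, "xeon_24c": 450.0},
}
weights = {"i5": 0.7, "xeon_24c": 0.3}  # assumed network hardware mix

def normalized_rate(project):
    """Network-weighted credits per hour for one project."""
    return sum(weights[hw] * rate
               for hw, rate in throughput[project].items())

# conversion factor between the two projects' credit scales
factor = normalized_rate("Universe@home") / normalized_rate("VGTU")
```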

In conclusion, I think TCD + ranking projects in some manner will greatly improve the current magnitude distribution, but there are also other avenues that should be explored, if not to adopt them, then to learn from them.   



Funny you posted this, considering what I just posted minutes before you.

Either way, your idea is flawed for a variety of reasons: not all hardware behaves equally on the same project. For example, and you can find lots of forum threads about it, TN-Grid brutally devours RAM bandwidth, and you get situations where, to the surprise of a contributor, his 24-core Xeon is only crunching about 20% more than his i5. On the other hand, most math-related projects make very efficient use of a CPU's capability due to their greater simplicity, allowing them to stay in the cache.

What we need is magnitude tiers, which multiply the amount of GRC distributed. For example, something like this:

- 1st-3rd most crunched project: 3x magnitude

- Open-source application (which implies it is available for many platforms): 1.25x magnitude

- Often runs out of tasks: 0.75x magnitude

But I don't know how hard something like that would be to implement; my gut says it shouldn't be too hard, though.
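If it helps, the multiplier scheme above could be sketched roughly like this in Python. The multipliers (3x, 1.25x, 0.75x) follow the comment; the project names, tags, and total-magnitude constant are invented placeholders.

```python
# Hypothetical sketch of the tier multipliers above: scale each
# project's base weight by its tier factors, then renormalize so the
# network-wide total magnitude stays fixed. Only the multipliers come
# from the comment; everything else is invented.

MULTIPLIERS = {"top3": 3.0, "open_source": 1.25, "often_dry": 0.75}

def tiered_magnitudes(projects, total_magnitude=115000.0):
    """projects: dict mapping project name -> list of applicable tags."""
    weights = {}
    for name, tags in projects.items():
        w = 1.0
        for tag in tags:
            w *= MULTIPLIERS[tag]  # multipliers stack if several apply
        weights[name] = w
    scale = total_magnitude / sum(weights.values())
    return {name: w * scale for name, w in weights.items()}

mags = tiered_magnitudes({
    "Project A": ["top3"],         # weight 3.0
    "Project B": ["open_source"],  # weight 1.25
    "Project C": ["often_dry"],    # weight 0.75
})
```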

  1. I hesitated to take credit for the idea as I wasn't sure if I was the first person to suggest this. Also, I began writing this yesterday ;)

  2. I think I addressed your concern in part (e). I agree it's very complicated, but I don't think it's beyond the community to arrive at a reasonable and equitable way of devising standardized benchmarks. I brought this up to stimulate conversation - maybe others already have some ideas.

  3. I'm not sure that the magnitude tiers solution you mentioned adequately addresses problems 1 and 3, and I think it closely mirrors the thread @donkeykong9000 wrote yesterday that I mentioned in regards to problem 2. I guess problems 4 and 5 aren't incentive related problems, but I still think they're worth keeping in mind.

Interesting article.

If a project runs out of work units, users still receive GRC based on their Recent Average Credit (RAC), meaning that users can receive GRC for doing no work.

This is not valid. RAC grows slowly and decays slowly. First month users are underpaid; thus, even if there are no WU, it's not payment for nothing but rather reimbursement. TCD solves other problems, like initial latency - it will allow reaching maximum magnitude much faster than RAC. Still, some WU in some projects are validated after weeks, so it's impossible to completely resolve payment/magnitude delay problems.

There are plenty of complex problems to be solved and, as for the solutions, to be implemented. Welcome on board!


Thank you, I just read your post and jefpatat's as well.

First month users are underpaid; thus, even if there are no WU, it's not payment for nothing but rather reimbursement.

What if, for example, a project runs out of WU for an extended period of time? Assuming all WU are validated, if the project remains whitelisted, every user's share of the total RAC will remain the same, meaning they will maintain the same magnitude as they did in the last superblock in which the last WU was validated.

TCD solves other problems, like initial latency - it will allow reaching maximum magnitude much faster than RAC. Still, some WU in some projects are validated after weeks, so it's impossible to completely resolve payment/magnitude delay problems.

Thank you for pointing this out. I think the last proposal might be able to address this, but it effectively assumes immediate validation, which is clearly not the case and so presents a lot of problems.

Thanks for the kind words and reply!

[...] runs out of WU for an extended period of time? Assuming all WU validated, if the project remains whitelisted [...]

I see. In such a case, the project is removed from the whitelist. Have a look at the listing proposal.

One of my concerns is how long that process can take. Sourcefinder has been out of WU for almost the entirety of its being whitelisted, as I (in addition to many others) pointed out here.

I think a project ranking system may be the way to go. Perhaps it is better than a tier system, since we could score projects on a variety of different criteria and then generate some type of aggregate score. We could, for example, have the community vote on specific criteria for each project and then generate a ranking. These criteria could reflect science (for example, publication output), work unit availability, popularity (number of active hosts), security/stats updates, GPU/CPU inclusion, area of focus/benefit to humanity, etc. Based on these metrics, an aggregate score could then be used to generate a ranking so that top-performing projects receive a greater share of newly minted GRC.
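A rough sketch of how such an aggregate score and ranking might be computed. The criterion names follow the comment above; every weight and score below is an invented placeholder, not a community-voted value.

```python
# Hypothetical sketch of the aggregate-score ranking: the community
# votes a weight for each criterion, each project gets per-criterion
# scores, and a weighted sum ranks them. All numbers are invented.

CRITERIA_WEIGHTS = {
    "publications": 0.30,     # science output
    "wu_availability": 0.25,  # work unit availability
    "active_hosts": 0.20,     # popularity
    "security": 0.15,         # security / stats updates
    "benefit": 0.10,          # area of focus / benefit to humanity
}

def aggregate_score(scores):
    """scores: dict mapping criterion -> community score from 0 to 10."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

def rank_projects(projects):
    """Return project names sorted best-first by aggregate score."""
    return sorted(projects, key=lambda p: aggregate_score(projects[p]),
                  reverse=True)

ranking = rank_projects({
    "Project X": {"publications": 9, "wu_availability": 8,
                  "active_hosts": 5, "security": 7, "benefit": 6},
    "Project Y": {"publications": 5, "wu_availability": 9,
                  "active_hosts": 9, "security": 6, "benefit": 8},
})
```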

This is definitely an improvement over the current system, in fact, it may even be the optimal solution. Some concerns of mine are:

  1. What happens to new users who want to participate but disagree with the current ranking? Will there be regular voting?

  2. Is there a way we could include smaller, short-term projects?

  3. It doesn't address the incentive problem. However, what you just described would definitely reduce it, maybe to the point of irrelevance, as I pointed out in the original post.

The question is, how centralized do we want the reward mechanism to be? The last suggestion in my post is completely decentralized, as the top-ranked projects would just be the most crunched. On the other hand, I share the concerns of @jringo (and you, I think) that there are projects that are more worthy of being crunched than others.

Regular voting seems like it would be a necessity in order to keep up with new projects/retiring old projects. I think once every 3 - 6 months would be a decent time frame.

That's a good point regarding centralization. From the ranking, maybe there could still be tiers for the top projects so that within a single tier, no project is treated any better than another. For example, Tier 1 could consist of the top 10 projects and be given 50% of all newly minted GRC. Tier 2 could consist of the projects ranked 11-20 and be given 35% of all newly minted GRC. A third tier could consist of the remaining projects and be given the remaining share of GRC.*

*All numbers are examples only

This would make monopolization difficult since one would need to dominate the entire 1st Tier of projects.
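As a quick sanity check of the arithmetic, here is a hypothetical sketch of the tier split described above. The 50/35/15 shares and tier boundaries are the example numbers from the comment; the project list itself is invented.

```python
# Hypothetical sketch of the tier split: Tier 1 (top 10 projects)
# shares 50% of newly minted GRC, Tier 2 (ranks 11-20) shares 35%, and
# all remaining projects share the final 15%. Within a tier, every
# project is treated identically. Example numbers only.

TIER_SHARES = [0.50, 0.35, 0.15]

def per_project_share(ranked):
    """Split minted GRC equally within each tier of a best-first ranking."""
    tiers = [ranked[:10], ranked[10:20], ranked[20:]]
    shares = {}
    for tier, share in zip(tiers, TIER_SHARES):
        for project in tier:
            shares[project] = share / len(tier)
    return shares

shares = per_project_share([f"project_{i}" for i in range(25)])
# each Tier 1 project gets 5% of minted GRC; each of the five Tier 3
# projects gets 3%
```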

maybe there could still be tiers to include the top projects so that within a single tier, no project is treated any better than another

The current magnitude distribution is highly centralized, and it can cause users on very popular projects like SETI@home to lose potential magnitude by crunching a project they prefer less. The question is, is it fair that some users receive more than others for the same hardware (and thus roughly the same energy costs)? I think eliminating this incentive entirely with the direct FLOPS -> magnitude distribution that @jringo mentioned above would make the process a lot cleaner.

Can we predict how this would incentivize people to move to higher- or lower-tiered projects? I think it would depend on the input in either direction, which can vary over time. The more finely tiered the ranking, the less important this becomes, like taking a Riemann sum with smaller and smaller widths, converging to less overlap (in positive or negative directions) with the actual curve. The more finely grained, the more decentralized. We hit a stopping point when the tiers are so fine that each contains only one project - a singleton tier - i.e., all projects are ranked in order, not in tiers. This system would have the lowest amount of centralization, assuming we can't chop projects up into pieces*. What we have right now is the exact opposite - all projects are in the same tier, so they all carry equal magnitude.

There is a good argument for centralization that benefits particular projects more than others. However, even this tiered system is not guaranteed to prevent incentivizing users to point IPP to another, lower tiered project.

*If we can implement the FLOPS -> magnitude distribution, accounting for different hardware, then this is possible. Since you mentioned treating projects in each tier equally, I think this would be a good way to do it. If we make the entire distribution decentralized, i.e., normalize all performance, it is possible the incentive problem disappears (I think).

All that said, this would overall be less centralized than the current distribution, it would benefit the many users who like to crunch more popular projects, and it leaves room for more decentralization.

This would make monopolization difficult since one would need to dominate the entire 1st Tier of projects.

Which is exactly against the interests of any such organization.
