
RE: Researching a FLOP and Energy Based Model for GridCoin Reward Mechanism

in #gridcoin · 7 years ago

Under the current model, a researcher running a BOINC project awards BOINC credit according to the value he or she perceives in completing a given work unit. Maybe that is tied to the number of FLOPs needed to compute the work unit, but essentially it is up to the researcher to distribute those credits equitably based on the 'value' being provided to them. On the Gridcoin end, RAC is used to distribute GRC in a way that matches the distribution of BOINC credit made by that researcher (within one project). Right now, we treat all projects on the whitelist equally in terms of GRC per project per day.
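To make that concrete, here is a minimal sketch of those two rules together (the project names, RAC figures, and daily pool size are invented for illustration): each whitelisted project gets an equal share of the daily GRC, and each cruncher's share within a project is proportional to their RAC.

```python
# Hypothetical illustration of the current RAC-based distribution.
# All numbers are invented; only the proportionality logic reflects the model.

DAILY_GRC = 48000            # total GRC distributed for research per day (made up)
projects = {                 # project -> {cruncher: RAC}
    "popular_project":       {"alice": 9000, "bob": 1000},
    "undercrunched_project": {"carol": 400, "dave": 100},
}

pool_per_project = DAILY_GRC / len(projects)   # whitelisted projects treated equally

for name, racs in projects.items():
    total_rac = sum(racs.values())
    for user, rac in racs.items():
        grc = pool_per_project * rac / total_rac
        print(f"{name:23s} {user:6s} RAC={rac:5d} -> {grc:8.1f} GRC/day")
```

Note how the same RAC is worth far more GRC on the under-crunched project in this toy setup; that is the profit motive described next.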

There is a profit motive in GRC which pushes users towards under-crunched projects. No one is being 'punished' for crunching popular projects. A person makes a choice based on a mixture of motivations; monetary reward is only one of them. If you are crunching a project which makes you feel good about taking part in a particular field of research, then that feeling is part of your reward. If someone else is purely optimizing for GRC reward, then that is their prerogative.

I don't understand why you want to down-rate people who are using more capable or more efficient hardware. If GRC can motivate people to use more efficient computation to provide more 'value' to a project's researchers, isn't that a good thing? It's nice to put old or otherwise inefficient hardware to use for scientific computing, but is it necessary to demand that people get paid based on how much heat they produce rather than on how much scientific 'value' they provide?

This sounds to me like a lot of extra work to ask of the system (especially when you consider the importance of growth). It would require some further degree of centralization and could create new security holes or avenues for abuse (how hard would it be to spoof what hardware I'm using, reporting something with a higher TDP?). This could be the basis for an interesting statistical analysis of how much energy is used by different projects, which could provide value to the BOINC/GRC community, but I don't think it has any place in the rewards mechanism whatsoever.

If the real issue is a perceived difference in the value of different projects, and we as a community think that different projects should have different amounts of GRC distributed amongst their crunchers, there are simpler and more robust ways of addressing that.

Thanks for thinking about the rewards mechanism and its inherent questions of fairness, and for bringing forward a new idea which tackles the frustrations and issues you see. This conversation is essential to improving the system.


I think it is going to be important to develop a system that moves away from credits in the future, for two reasons:

  1. Different BOINC projects award credits differently.
  2. Any project outside of BOINC that wants to use Gridcoin is going to want to know how it should measure contributions.

I think that any approach which gets us a direct FLOP -> GRC relationship would be ideal.

There is a profit motive in GRC which pushes users towards under-crunched projects. No one is being 'punished' for crunching popular projects.

I have to disagree. If you make less GRC by crunching a more popular project, I see that as the reward mechanism effectively punishing you. Why should you make less if you did the same amount of FLOP on a different project?
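A small worked example of the effect I mean, with invented numbers and assuming credit tracks FLOP within each project: under equal per-project pools, the GRC earned for a fixed amount of FLOP depends on how heavily everyone else crunches that project, whereas a direct FLOP -> GRC rate would pay the same either way.

```python
# Invented numbers; compares equal per-project pools with a flat FLOP -> GRC rate.

POOL_PER_PROJECT = 24000.0       # GRC/day per whitelisted project (made up)
my_flop = 1e15                   # FLOP I contribute in a day

network_flop = {"popular": 100e15, "undercrunched": 5e15}   # project totals, me included

for project, total_flop in network_flop.items():
    grc = POOL_PER_PROJECT * my_flop / total_flop
    print(f"{project:13s}: {grc:7.1f} GRC under per-project pools")

FLAT_RATE = 2e-13                # GRC per FLOP (made up)
print(f"flat rate    : {FLAT_RATE * my_flop:7.1f} GRC regardless of project")
```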

I don't understand why you want to down-rate people who are using more capable or more efficient hardware... is it necessary to demand they get paid based on how much heat they produce rather than how much scientific 'value'?

This is exactly the opposite of what I'm proposing. If you take a look at the 7970 vs. 1080 comparison, that's a good example of how better, more efficient hardware earns more GRC.

(how hard would it be to spoof what hardware I'm using for something with a higher TDP?)

That's a good question. I don't know how BOINC specifically collects the data on my hardware, but they do know all of the specs of the machines I run BOINC on.

That being said, the point of this model is that you are only rewarded for the FLOP you do, nothing else. The weighted average over all the hardware is just used to calculate the equivalence between CPUs and GPUs. Theoretically a person could try to manipulate this balance, but since it is averaged over many thousands of users, the attempt would likely be ineffective, and if it were effective, it would probably be quite noticeable. Maybe some sort of verification/alert would be needed, but good point.
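As a rough, made-up illustration of why a single spoofed host barely moves a network-wide average: with thousands of hosts contributing to a class's average FLOP/Joule, one outlier shifts the average by only about 1/N of its deviation.

```python
# Invented numbers: effect of one spoofed high-TDP host on a network-averaged FLOP/Joule.

n_honest = 20000            # honest hosts in the class (made up)
honest_avg = 5.0e9          # their average FLOP/Joule (made up)
spoofed_value = 0.5e9       # spoofer reports a very inefficient machine

new_avg = (n_honest * honest_avg + spoofed_value) / (n_honest + 1)
shift = (honest_avg - new_avg) / honest_avg
print(f"class average moves by {shift:.4%}")   # roughly 0.0045%
```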

This sounds to me like a lot of extra work to ask of the system

I'm not proposing that we implement this right now at all. There might be a lot of problems with this proposal, and it might not even be the best way of approaching the problem. Besides that, there are other, more important things to focus on right now. But I think that in the long term, investigating a direct FLOP -> Magnitude relationship could be useful for GridCoin's growth, as it would give a clear, FLOP-measured value to a currency that is based on doing computation.

I don't understand why you want to down-rate people who are using more capable or more efficient hardware... is it necessary to demand they get paid based on how much heat they produce rather than how much scientific 'value'?

Wanted to comment on this too. As I understand the proposal, the network-averaged energy efficiency (FLOP/Joule) for each of the three hardware classes (CPU, FP32, FP64) is used only as a means to define the conversion from FLOP --> GRC within each class. The reason for defining the conversion factor this way is to allocate rewards more equitably between CPUs and GPUs. Within each of the three classes, the GRC rewarded remains proportional to the FLOP contributed.
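A minimal sketch of that reading, with placeholder class averages and an arbitrary normalisation constant (none of these figures come from the proposal): each class's conversion factor is derived from its network-averaged FLOP/Joule, so the same FLOP is weighted by what it costs that class on average in energy, while within a class GRC stays strictly proportional to FLOP.

```python
# Placeholder numbers; only the structure of the conversion is illustrated.

avg_flop_per_joule = {       # hypothetical network-averaged efficiency per class
    "CPU":  2.0e9,
    "FP32": 4.0e10,
    "FP64": 5.0e9,
}
GRC_PER_JOULE = 1.0e-6       # arbitrary normalisation constant

def grc_reward(hw_class: str, flop_done: float) -> float:
    """GRC for flop_done FLOP in hw_class: proportional to FLOP within the class,
    with the cross-class rate set by the class-averaged energy per FLOP."""
    joules_equivalent = flop_done / avg_flop_per_joule[hw_class]
    return GRC_PER_JOULE * joules_equivalent

# Within a class, doubling the FLOP doubles the GRC, so efficient hardware still earns more:
print(grc_reward("FP32", 1e15), grc_reward("FP32", 2e15))
# Across classes, the same FLOP maps to different GRC via the class averages:
print(grc_reward("CPU", 1e15), grc_reward("FP64", 1e15))
```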

Hi @hownixt:

Just to be clear, I agree with you that a better/more efficient CPU should receive more GRC than a less efficient CPU, and a better/more efficient GPU should receive more GRC than a less efficient GPU. That's a basic principle I had in mind when thinking about this. Unfortunately, that's not how it currently works: within a single project a better CPU/GPU will receive more, but across projects that's not the case, and that's what I'm trying to address.