[Part 2] Everything is about real-world deployment: why did I design PP.io in three stages?
In a previous article, I described the three phases of PP.io: "Strong Center", "Weak Center", and "Decenter". Why do I plan to roll out the PP.io decentralized storage network gradually, in three steps?
Simply put, within the blockchain impossible triangle, I temporarily gave up decentralization in exchange for scalability and consistency.
Let me first explain the impossible triangle theory. Of the three properties Scalable, Decentralized, and Consistent, you cannot have all three at once; at most two can be chosen. Bitcoin and Ethereum sacrificed scalability because they were built as cryptocurrencies.
When designing PP.io, I sacrificed decentralization to meet the real needs of the use case.
What was my detailed reasoning? I considered three main aspects.
The very complicated proof mechanism
The proof mechanisms of cryptocurrencies like Bitcoin and Ethereum are very simple. Nodes solve a math puzzle by guessing an integer: whoever first finds a number whose hash result starts with enough leading zeros wins, and receives the reward for the new block.
See how simple the algorithm is. Precisely because of this simplicity, it is mathematically rigorous: as long as the calculation cannot be reversed, it is safe and has no loopholes. It is only a mathematical game, so of course the algorithm is simple. However, the calculation itself is meaningless: the more people mine Bitcoin, the more resources humanity wastes.
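To show just how simple this game is, here is a minimal sketch of Bitcoin-style proof of work; the difficulty value and block data are made up for illustration, not real network parameters.

```python
import hashlib

# Keep guessing a nonce until the hash of (block data + nonce) starts
# with enough leading zeros. Whoever finds such a nonce first "wins" the block.
def mine(block_data: bytes, difficulty: int = 4) -> int:
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce  # the winning guess
        nonce += 1

print(mine(b"example block header"))
```

The whole security argument rests on one fact: the hash cannot be computed backward, so there is no shortcut other than guessing.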
One of the great missions of a decentralized storage public chain is to turn Bitcoin's wasteful style of mining into a service that actually benefits people. I followed this methodology when designing PP.io. How, then, do we measure the service provided? The most basic way is to measure by storage and by traffic, the same two factors traditional cloud services bill for.
What is storage? It is how large a file is and how long it must be kept, which together form one measurement factor. What is traffic? It is how many bytes are transferred, another measurement factor. Proving these two factors cannot be done by a single-machine algorithm alone; it requires network communication, with third-party nodes witnessing both sides to confirm that useful service was provided. And if a third-party node may be dishonest, more third-party nodes are needed to witness together, so Byzantine consensus must be reached among those witnesses to complete the proof. This process is far more complicated than Bitcoin's stand-alone algorithm.
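As a rough illustration of these two measurement factors, here is a minimal sketch of metering storage and traffic; the units and field names are my own assumptions, not PP.io's actual billing design.

```python
from dataclasses import dataclass

@dataclass
class ServiceRecord:
    stored_bytes: int       # size of the file kept by the miner
    stored_hours: float     # how long the file was kept
    transferred_bytes: int  # bytes served to downloaders

def billable_units(rec: ServiceRecord) -> tuple[float, float]:
    """Convert raw service into the two factors: GB-hours stored and GB transferred."""
    gb = 1024 ** 3
    storage_units = (rec.stored_bytes / gb) * rec.stored_hours
    traffic_units = rec.transferred_bytes / gb
    return storage_units, traffic_units

print(billable_units(ServiceRecord(stored_bytes=5 * 1024**3,
                                   stored_hours=24,
                                   transferred_bytes=2 * 1024**3)))
```

The hard part is not the arithmetic; it is proving, with untrusted witnesses, that these numbers are true.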
Another point: each time Bitcoin generates a block, only one node gets the reward, the first to solve the puzzle, which is very simple. A decentralized storage system, however, has to reward every node that provided service during the period, in proportion to its contribution, which is much more complicated. (If you have mined Bitcoin, you may mistakenly think Bitcoin distributes rewards according to computing power. Actually it does not; what makes it look that way is a centralized actor called the mining pool. The pool collects the block rewards won by a very few nodes and, acting like an insurance scheme, redistributes them across everyone who participated in mining.)
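The contrast can be sketched in a few lines; the pro-rata split below is just an illustration of "reward everyone who served", not PP.io's actual formula.

```python
def distribute_rewards(block_reward: float,
                       contributions: dict[str, float]) -> dict[str, float]:
    """Split one reward across all serving nodes in proportion to their contribution."""
    total = sum(contributions.values())
    return {node: block_reward * c / total for node, c in contributions.items()}

# Bitcoin: winner takes all.
print(distribute_rewards(100.0, {"minerA": 1.0}))
# Storage network: every node that served during the period earns a share.
print(distribute_rewards(100.0, {"minerA": 40.0, "minerB": 35.0, "minerC": 25.0}))
```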
So the proof mechanism of decentralized storage is far harder than Bitcoin's. How hard? Take Filecoin, developed by the IPFS team, which released its white paper in mid-2017. The PoSt and PoRep algorithms described there account for roughly 80% of the entire white paper, and in the fall of 2018 the team published a further 50-page paper devoted to PoRep alone, which shows how complicated PoRep is. Filecoin wants to reach the ultimate goal of decentralization in one step. Yet as the end of 2018 approaches, apart from two papers and several demo videos there is no other substantive output. They are said to be a team of scientists; I believe they have run into considerable challenges.
I considered this when designing PP.io. In a fully decentralized environment, every node can do evil, and under the premise that every node may do evil, designing any mechanism becomes very complicated, especially the proof mechanism. The more complex the proof mechanism, the more security holes it can hide. If, instead, you initially give up the assumption of a completely untrusted environment and make some of the roles trusted, so that only the storage-providing miners can do evil, the whole algorithm becomes much simpler.
In the "Strong Center" phase of PP.io, the Indexer Node, the Verifier Node, and the settlement center are run as centralized services; only the storage miners are decentralized, so they are the only parties able to do evil. In this phase we first implement the proof mechanism covering the storage miners and get the whole system running, because other factors matter more at this stage: performance, QoS, the economic model, and so on.
Then, in the "Weak Center" phase, the previously centralized services become nodes that can be deployed independently. Indexer Nodes and Verifier Nodes can be deployed by authorized partners, constrained by offline commercial terms that guarantee they will not do evil. During this period we work out the proof mechanism for these authorized nodes on the technical side. Once we are impeccable both in engineering and in mathematics, we can remove the barrier to entry and move to full decentralization.
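A minimal sketch of this phased trust model is below. The role names follow the article, but the exact grouping of which roles are trusted in which phase is my own illustration, not an official specification.

```python
from enum import Enum

class Role(Enum):
    INDEXER = "Indexer Node"
    VERIFIER = "Verifier Node"
    SETTLEMENT = "Settlement Center"
    STORAGE_MINER = "Storage Miner"

# Roles assumed trusted (unable to do evil) in each phase; everything else
# must be covered by the proof mechanism. Assignments here are illustrative.
TRUSTED_ROLES = {
    "Strong Center": {Role.INDEXER, Role.VERIFIER, Role.SETTLEMENT},
    "Weak Center":   {Role.INDEXER, Role.VERIFIER, Role.SETTLEMENT},  # authorized, bound by commercial terms
    "Decenter":      set(),                                           # nobody is trusted
}

for phase, trusted in TRUSTED_ROLES.items():
    untrusted = [r.value for r in Role if r not in trusted]
    print(f"{phase}: proof mechanism must defend against {untrusted}")
```

The point of the staging is visible in the output: each phase enlarges the set of roles the proof mechanism must defend against, instead of tackling all of them at once.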
Iteration and optimization of quality of service (QoS)
I have previously written about the importance of quality of service (QoS); see https://medium.com/@omnigeeker/what-is-the-qos-of-decentralized-storage-9d330457f390
Let me start with a story from my time building technology at PPTV. How did we achieve our QoS? I had worked on P2P technology for ten years, starting in 2004, covering P2P live streaming, P2P video-on-demand (VOD), and P2P on embedded devices. For P2P live streaming, when millions of people watched the same program online, the average start-play time was 1.2 seconds, playback was interrupted 1.6 times per half hour on average, and the worst-case delay from the broadcast source across the whole network was 90 seconds. For VOD, we achieved a 90% bandwidth saving ratio, an average start-play time of 1.5 seconds, 2.2 interruptions per half hour on average, and an average of 0.9 seconds to resume playback after a seek.
That level of QoS gave us a rock-solid foundation for reaching 500 million users worldwide.
We did not achieve those results from day one; they came from long-term, day-by-day optimization and countless iterative upgrades. In the process we refined at least 100 QoS indicators, built a big-data analytics system, and optimized region by region and country by country. We also developed an A/B testing mechanism: the A network served the majority of users on a stable kernel, while the B network consisted of a small number of volunteer users running the latest kernel, letting us evaluate the QoS impact of new algorithms quickly. The B network's P2P kernel was upgraded frequently until we were sure the new kernel's QoS was better and very stable; only then did we roll the upgrade out to all users on the A network, so that the majority always got the best experience.
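As a minimal sketch of that promote-or-not decision, the snippet below compares the kind of QoS indicators mentioned above between the stable A kernel and the experimental B kernel. The field names and the simple "must be no worse on both metrics" rule are my assumptions, not PPTV's actual criteria.

```python
from dataclasses import dataclass

@dataclass
class QosStats:
    start_play_sec: float           # average time to start playing
    interruptions_per_30min: float  # average interruption count per half hour

def should_promote(a_stable: QosStats, b_experimental: QosStats) -> bool:
    """Roll the B-network kernel out to all users only if it beats the stable kernel."""
    return (b_experimental.start_play_sec <= a_stable.start_play_sec and
            b_experimental.interruptions_per_30min <= a_stable.interruptions_per_30min)

print(should_promote(QosStats(1.2, 1.6), QosStats(1.1, 1.4)))  # True: upgrade the A network
```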
In a completely decentralized environment, however, an upgrade needs the consensus of most nodes, as in Bitcoin and Ethereum. Bitcoin upgrades have gone through many soft forks and hard forks, and major upgrades drag on for a long time. That kind of upgrade efficiency is very unfavorable for QoS work: good QoS is iterated into existence, not achieved in one shot. If a project adopts fully decentralized upgrades from the start, the cost of each upgrade becomes extraordinarily high and the iteration cycle is greatly prolonged.
With good QoS you have a product; with bad QoS you have a toy. There are some so-called internationally renowned public chain projects whose story sounds "very good" and which have many "devout believers", but in my view they are toys, because they are very hard to actually use.
That is why I chose a "Strong Center" for the early stage of PP.io: it lets us tune QoS very efficiently. Good QoS brings in more users, more users mean more data stored, more data attracts more miners, and a virtuous circle forms. Once QoS is solid, we move toward decentralization, because as QoS improves and the user base grows, public trust becomes more and more critical.
Economic model
For PP.io I designed an incentive mechanism under which miners earn rewards for the services they provide on the network. The quality of this economic model directly determines the success or failure of the project.
The economic mechanism looks simple, but in actual operation it is very complicated. The security problem of decentralized storage discussed above illustrates this: heavyweight mathematical proofs alone cannot completely solve the problem of nodes doing evil; economic penalties are needed as well.
Let me first pose a couple of questions. What would you choose?
Should miners be required to stake collateral before they can mine?
Against: miners should be able to mine without collateral. If collateral is required, the barrier to entry for miners becomes very high.
For: miners must stake collateral before mining. With collateral at risk, a miner can be punished for breaking the rules, which keeps miners stable. If miners can go online and offline at will, the P2P network becomes very unstable, which lowers the stability of the entire service.
Second question: if a miner suddenly goes offline, but not deliberately, say because of a power outage, should it be punished?
Against: it should not be punished. The algorithm should include a fault-tolerance mechanism that allows miners to go offline occasionally; as long as a miner is not doing evil, it should not be punished.
For: a program cannot tell whether a miner went offline deliberately, so the incentive design must be consistent: any offline event is punished. This weeds out low-quality miners and retains high-quality ones, and the better the miners, the better the quality of service across the whole network.
What is your opinion on these two questions? They are primarily questions of economics: whichever way you choose, the results are hard to predict. Economics teaches that what you hope will happen is not necessarily what actually happens; it is a discipline that has to be studied seriously.
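To make the "for" positions above concrete, here is a minimal sketch of a collateral-and-slashing rule: miners must stake before serving, and any offline event is penalized regardless of intent. All amounts and rates are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass

MIN_COLLATERAL = 1000.0      # hypothetical entry threshold
OFFLINE_PENALTY_RATE = 0.05  # hypothetical slash per offline event

@dataclass
class Miner:
    node_id: str
    collateral: float  # stake deposited before the miner may serve

def can_mine(miner: Miner) -> bool:
    return miner.collateral >= MIN_COLLATERAL

def punish_offline(miner: Miner) -> float:
    """Slash a fixed fraction of collateral; the program cannot judge intent."""
    penalty = miner.collateral * OFFLINE_PENALTY_RATE
    miner.collateral -= penalty
    return penalty

m = Miner("miner-1", 1200.0)
print(can_mine(m), punish_offline(m), m.collateral)
```

Whether such a rule helps or hurts the network is exactly the kind of question that cannot be answered by intuition alone.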
How did I solve this problem when designing PP.io? I first drew up an economic incentive plan. This initial plan was not decided on gut feeling, but through a series of data models and conditional assumptions. I will later open-source the economic modeling program on the official PP.io GitHub.
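The kind of modeling involved can be sketched very roughly as below: simulate miner churn under different penalty rates and compare how many miners the network retains. Every parameter here is a made-up assumption for illustration; it is not the actual PP.io model, which will be published separately.

```python
import random

def simulate(penalty_rate: float, months: int = 12, miners: int = 1000,
             monthly_outage_prob: float = 0.1, quit_threshold: float = 0.5) -> int:
    """Return how many miners remain after `months`, given a per-outage penalty."""
    collateral = [1.0] * miners
    for _ in range(months):
        for i in range(len(collateral)):
            if random.random() < monthly_outage_prob:
                collateral[i] -= penalty_rate  # slash on every outage, deliberate or not
        collateral = [c for c in collateral if c > quit_threshold]  # discouraged miners leave
    return len(collateral)

random.seed(42)
for rate in (0.01, 0.05, 0.20):
    print(f"penalty {rate:.2f}: {simulate(rate)} miners remain")
```

Running many such scenarios under different assumptions is what replaces gut feeling when setting the initial parameters.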
The factors influencing every aspect of the economic model are very complex, and if you start out fully decentralized, upgrades become difficult: miners are likely to refuse an upgrade that does not benefit them, just as Bitmain ignored the Bitcoin Core team and initiated the BCH hard fork.
When designing PP.io, I realized that a good economic mechanism is critical and that a good economic model is tuned in practice. Premature decentralization makes that tuning harder, so adopting a "Strong Center" early on is better for adjusting the economic model. As the economic model gradually becomes stable and reasonable, PP.io will move step by step toward decentralization.
These three reasons are why I designed PP.io to go through the three phases of "Strong Center", "Weak Center", and "Decenter".
Article author: Wayne Wong
If you want to reprint this article, please indicate the source.
If you would like to discuss blockchain topics, you can contact me in the following ways:
Github: https://github.com/omnigeeker