Nakamoto's blockchain is intended to enable consensus to be reached in a permissionless setting.
Although anyone can join or leave the protocol, the protocol must still prevent “sybil attacks”. To do this it relies upon solving a computational puzzle: the so-called proof of work. However, proofs that this is sufficient to prevent attacks have always assumed that the network on which the protocol operates is synchronous, something that is not quite true in the real world. Likewise, analyses of possible attacks have typically made the same assumption, such as those by Yonatan Sompolinsky and Aviv Zohar.
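For readers unfamiliar with the mechanism, the proof of work amounts to searching for a nonce that makes a block's hash fall below a target value. The sketch below is a minimal illustration of that idea, not Bitcoin's actual header format or difficulty encoding:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 digest, read as an integer,
    falls below the target. More leading zero bits = harder puzzle."""
    target = 1 << (256 - difficulty_bits)  # smaller target, harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# A deliberately easy puzzle (8 leading zero bits) solves almost instantly;
# real networks tune the difficulty so solving takes minutes of global effort.
nonce = mine(b"example block header", 8)
```

Verifying a solution is cheap (one hash), while finding one is expensive; that asymmetry is what makes sybil attacks costly.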
Not surprising, then, that I was interested when a paper entitled Analysis of the Blockchain Protocol in Asynchronous Networks was published. Thankfully for all the Bitcoin fans, it shows a degree of robustness in networks with limited delays, but outside of certain bounds it also demonstrates a simple attack, which shows that the proof of work needs to be made harder.
If the blockchain is to work it has to achieve a strong degree of "consistency" (also known as T-consistency), by which we mean that all blocks are agreed upon by honest players except for a few, as yet unconfirmed, blocks at the end of the chain. If we are to say that a blockchain is secure, it must satisfy several further properties, which the researchers state thus:
- Consistency: with overwhelming probability (in T), at any point, the chains of two honest players can differ only in the last T blocks.
- Future self-consistency: with overwhelming probability (in T), at any two points r, s, the chains of any honest user at r and s differ only in blocks within the last T blocks.
- g-chain-growth: with overwhelming probability (in T), at any point in the execution, the chain of honest players grew by at least T messages in the last T/g rounds, where g is called the chain-growth of the protocol.
- µ-chain quality: with overwhelming probability (in T), for any T consecutive messages in any chain held by some honest player, the fraction of messages that were “contributed by honest players” is at least µ.
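The first of these properties can be made concrete with a small check: drop the last T (unconfirmed) blocks from each honest player's chain and require that what remains agrees. This is only a sketch of the deterministic core of the definition; the paper's actual properties are probabilistic statements over the randomness of the execution:

```python
def consistent(chain_a: list, chain_b: list, T: int) -> bool:
    """T-consistency sketch: after truncating the last T blocks from each
    player's chain, one truncated chain must be a prefix of the other."""
    a = chain_a[:-T] if T > 0 else chain_a
    b = chain_b[:-T] if T > 0 else chain_b
    shorter, longer = sorted((a, b), key=len)
    return longer[:len(shorter)] == shorter

# Two honest players whose chains disagree only in the final block
# satisfy consistency for T=1 but not for T=0.
assert consistent(["gen", "b1", "b2", "x"], ["gen", "b1", "b2", "y"], T=1)
assert not consistent(["gen", "b1", "b2", "x"], ["gen", "b1", "b2", "y"], T=0)
```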
The central question the researchers pose is: "Does Nakamoto’s blockchain protocol satisfy consistency when executed in asynchronous networks with ∆-bounded delays?"
It's important to note that Nakamoto's protocol does vary the hardness of the proof of work, depending on the proportion of the mining network's computational power that might be controlled by an adversary, but obviously this affects the time taken for an agreed block to be added.
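Bitcoin's deployed version of this tuning is its periodic difficulty retarget: every 2016 blocks the target is rescaled so that blocks keep arriving roughly every ten minutes. A simplified sketch (ignoring Bitcoin's exact compact-integer target encoding):

```python
def retarget(old_target: int, actual_seconds: int) -> int:
    """Simplified Bitcoin-style retarget: scale the target by how long the
    last 2016 blocks actually took versus the expected two weeks.
    A larger target means an easier puzzle."""
    expected = 2016 * 600  # 2016 blocks at one block per 600 seconds
    # Bitcoin clamps the adjustment to a factor of 4 in either direction.
    actual = max(expected // 4, min(actual_seconds, expected * 4))
    return old_target * actual // expected

# Blocks arrived twice as fast as intended, so the target halves
# and the puzzle becomes harder.
harder = retarget(1 << 224, 2016 * 300)
assert harder == 1 << 223
```

The paper's point is that this knob trades security for speed: coping with larger network delays means a harder puzzle, and therefore slower confirmation.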
Interestingly, the most effective attack mounted by the researchers came when they controlled 49.79% of the network. Whilst everyone understands the problems of a dishonest party controlling 51% of the network, 49.79% would seem to set a lower limit on what we should be wary of. But the results show that, in a fully asynchronous network, attacks are possible with only a tiny fraction of the network's computational power: the formal proofs presented show that Nakamoto's protocol is not consistent in such a network without an upper bound ∆ on the network delay.
The results also show that positive chain quality, one of the four properties of a secure blockchain listed above, is likewise insufficient on its own.
So, if you have to make the proof of work harder to cope with longer delays in networks, you will inevitably slow down transaction times. This is something that the whole Bitcoin community is desperate to avoid, as it is already seen as a significant impediment to Bitcoin becoming a payment system that could rival Visa, PayPal, etc. Furthermore, as I've written about recently, researchers are finding ways to reduce the amount of work required to solve the computational puzzles inherent in the blockchain protocol.
These researchers do not stop at Nakamoto's protocol. They have extended their thinking to a more generic analysis of a variety of blockchain protocols, and, of particular interest in view of the UK government's suggestion that public ledgers might play a significant part in future government systems, they have analysed just such a system.
I suspect this is the start of further research on blockchains in general, but it is encouraging to see that researchers have grappled with how the technology behaves under what could be real network conditions. This type of analysis has long been avoided, as it is well known to be fiendishly difficult. However, unless real-world conditions are analysed we will never prove whether or not we can truly rely upon these technologies.