

Trust models in distributed ledgers

Consensus, getting distributed processes to agree on a single value, is a fundamental problem in computer science. Distributed processing is difficult. In fact, there are logical proofs showing fairly conclusively that no single perfect algorithm can handle consensus in an asynchronous system made of imperfect nodes. As long as there is the possibility of a fault, there is a certainty of a problem. Cryptocurrencies such as Bitcoin and smart contract platforms like Ethereum are recent examples of the evolution in consensus algorithms that blockchain represents.

Consensus Algorithms

All consensus algorithms address termination, agreement and integrity properties. Termination, a liveness property, means all correct (non-faulty) processes eventually produce a value. Agreement, a safety property, means that all correct processes select the same value, and that the value is valid within the dictates of the protocol. In other words, being “correct” is determined by how many other processes share your value (all, some quorum, or even one). Integrity addresses the ability to cope with the failure of a participating node. When a process simply stops communicating on failure, this is called a fail-stop. A byzantine fault refers to a process that continues to communicate but sends faulty, possibly malicious, data. Until this point, consensus algorithms had largely been designed for private, permissioned networks, where a malicious actor was an insider threat: mostly an HR concern rather than a protocol concern. The Satoshi Nakamoto proposal was the first practical framework for managing byzantine fault tolerance in a public, trustless, decentralized, asynchronous network using existing technologies. The core insight was a novel implementation of the Proof-of-Work algorithm to reach consensus in a peer-to-peer network.
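As a rough illustration of the agreement-by-quorum idea, here is a minimal Go sketch (hypothetical, not taken from any particular protocol): tally the values reported by a set of processes and check whether a simple majority agrees, treating any process reporting a different value as faulty.

```go
package main

import "fmt"

// agreedValue returns the value reported by a quorum (here, a strict
// majority) of processes, or ok=false if no quorum exists. Faulty
// processes are modeled simply as reporting a different value.
func agreedValue(reports map[string]string, quorum int) (value string, ok bool) {
	counts := make(map[string]int)
	for _, v := range reports {
		counts[v]++
		if counts[v] >= quorum {
			return v, true
		}
	}
	return "", false
}

func main() {
	// Four processes report a ledger state; one is faulty.
	reports := map[string]string{
		"p1": "state-A",
		"p2": "state-A",
		"p3": "state-A",
		"p4": "state-B", // faulty or malicious process
	}
	quorum := len(reports)/2 + 1 // simple majority
	if v, ok := agreedValue(reports, quorum); ok {
		fmt.Printf("quorum of %d agreed on %s\n", quorum, v)
	} else {
		fmt.Println("no quorum reached")
	}
}
```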

Imagine a distributed ledger of accounts available to everyone on the internet. This is a completely trustless model; anyone could technically create a fraudulent ledger. Imagine that a malicious entity creates a ledger state showing itself as the sender of some transaction, receives compensation, and then presents another ledger state that negates the earlier transaction. We need not just to agree that a state is valid; we must also agree that this state will not be changed at a later time. Satoshi proposed that if there are two conflicting states, the state that was the hardest to generate computationally wins. Satoshi had a number of existing consensus algorithms to choose from, but Proof of Work fit his proposal most directly.
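The “hardest to generate computationally wins” rule can be sketched in a few lines of Go. This is a simplified illustration under my own assumptions, not Bitcoin’s actual implementation: each block carries a number standing in for the work implied by its difficulty, and between two conflicting chains the one with more cumulative work is preferred.

```go
package main

import "fmt"

// Block is a simplified block carrying only the work it took to produce.
type Block struct {
	Work uint64 // stand-in for the work implied by the block's difficulty
}

// totalWork sums the work across a chain of blocks.
func totalWork(chain []Block) uint64 {
	var sum uint64
	for _, b := range chain {
		sum += b.Work
	}
	return sum
}

// preferredChain returns whichever of two conflicting chains required
// more cumulative work to produce; ties keep the current chain.
func preferredChain(current, candidate []Block) []Block {
	if totalWork(candidate) > totalWork(current) {
		return candidate
	}
	return current
}

func main() {
	honest := []Block{{Work: 10}, {Work: 12}, {Work: 11}}
	forged := []Block{{Work: 10}, {Work: 3}} // cheaper to fabricate, so it loses
	best := preferredChain(honest, forged)
	fmt.Println("preferred chain length:", len(best), "total work:", totalWork(best))
}
```

The point of the rule is economic: rewriting history requires redoing more work than the rest of the network has already done.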

Proof of Work

Proof of Work provides a mechanism for injecting artificial computational complexity into the process of adding to a ledger. In Bitcoin, that unit of work involves finding a hash value that is less than a number set by the network, called the difficulty target. The first node to solve the puzzle wins: it gets to add its proposed block to the blockchain and receives a mining reward. In the case of competing blocks, the longest chain ultimately wins, providing eventual consistency.
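Here is a minimal Go sketch of the hash puzzle, assuming a toy difficulty expressed as a numeric target: hash the block data with an incrementing nonce until the SHA-256 digest, interpreted as a 256-bit integer, falls below the target. Real Bitcoin uses double SHA-256 over a structured block header, but the principle is the same.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/big"
)

// mine searches for a nonce such that sha256(data || nonce), read as a
// 256-bit integer, is below the difficulty target. Lower targets mean
// more expected work.
func mine(data []byte, target *big.Int) (nonce uint64, digest [32]byte) {
	for {
		buf := make([]byte, len(data)+8)
		copy(buf, data)
		binary.BigEndian.PutUint64(buf[len(data):], nonce)
		digest = sha256.Sum256(buf)
		if new(big.Int).SetBytes(digest[:]).Cmp(target) < 0 {
			return nonce, digest
		}
		nonce++
	}
}

func main() {
	// Toy target: any hash with roughly 20 leading zero bits qualifies.
	target := new(big.Int).Lsh(big.NewInt(1), 256-20)
	nonce, digest := mine([]byte("block: Alice pays Bob 1 BTC"), target)
	fmt.Printf("nonce=%d hash=%x\n", nonce, digest)
}
```

Note the asymmetry: finding a qualifying nonce may take millions of hash attempts, but any peer can verify the result with a single hash.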

There are two key assumptions in the Bitcoin implementation of blockchain: an adversary can neither break the cryptography that binds the chain together nor muster the computational power necessary to compete with the entire existing network. A threat to the integrity of a peer-to-peer system will likely focus on one of these two vectors. There is also the possibility of an exploitable gap in the consensus algorithm itself, because, as it turns out, no consensus algorithm in an asynchronous system can be mathematically perfect.

In 1985, Michael Fischer, Nancy Lynch and Michael Paterson published a cheery little paper showing that in an asynchronous network where messages may be delayed but not lost, there is no consensus algorithm that is guaranteed to terminate in every execution for all starting conditions if even one node may fail-stop. Since there is no upper bound on how long a message may be delayed, no process can ever be certain that a failure has occurred; the silent node may just be very slow. In other words, no deterministic consensus protocol can guarantee safety, liveness and fault tolerance in an asynchronous system where fail-stop is a possibility. The FLP result further implies that no agreement can be guaranteed in an asynchronous system with byzantine faults, since those are strictly harder to handle than fail-stops. There has been an enormous amount of work in this area in the decades since, and you’ll see all the good solutions referencing back to this limitation.

Limitations

Most practical considerations focus on technical failures and bad actors. A distributed peer-to-peer system like blockchain is relatively resilient to system failures, since individual node failures are expected and designed for. A technical failure in blockchain would more likely involve the cryptography that binds the individual transactions and blocks together. There is also the assumption that no bad actor could muster the computational power necessary to promote a bad ledger. Both of these are assumptions, though.

As a Bitcoin owner, you have a private key that proves ownership and authorizes transactions. A private key can never be reset or recovered if lost. For every private key in the Bitcoin network there exists exactly one public key. In Bitcoin, public keys are derived from private keys using the Elliptic Curve Digital Signature Algorithm (ECDSA). ECDSA guarantees that the same private key will always produce the same public key, but the private key cannot be reverse-engineered from the public key.
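A minimal Go sketch of that key relationship, using the standard library’s crypto/ecdsa package. This uses the P-256 curve for convenience; Bitcoin itself uses the secp256k1 curve, which is not in Go’s standard library, but the one-way private-to-public derivation and the sign/verify flow illustrate the same idea.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Generate a private key; the public key is derived from it
	// deterministically and cannot be reversed back to the private key.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	pub := &priv.PublicKey

	// Sign a transaction hash with the private key...
	txHash := sha256.Sum256([]byte("Alice pays Bob 1 BTC"))
	sig, err := ecdsa.SignASN1(rand.Reader, priv, txHash[:])
	if err != nil {
		panic(err)
	}

	// ...and anyone holding only the public key can verify it.
	fmt.Println("signature valid:", ecdsa.VerifyASN1(pub, txHash[:], sig))
}
```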

The Bitcoin private key is a 256-bit number, often generated by running random data through the SHA-256 hash algorithm. Keys of this size have been considered practically unbreakable. Bruce Schneier in Applied Cryptography, citing thermodynamic limitations, implied that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space. Google’s quantum computing claims have raised the prospect that such encryption could eventually be broken, but that remains impractical for now. Qubits are the basic units of quantum information. Researchers estimate that it would take on the order of 1,500 qubits to even begin attacking the problem, while Google’s quantum computer has only around 50 qubits.
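To make the scale concrete, here is a back-of-envelope Go calculation using my own illustrative numbers (not figures from Schneier or Google): even if an attacker could test a very generous 10^18 keys per second, exhausting a 2^256 keyspace would still take an astronomically long time.

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	keyspace := new(big.Int).Lsh(big.NewInt(1), 256) // 2^256 possible keys

	// Assumed attacker speed: 10^18 guesses per second (deliberately generous).
	guessesPerSecond := new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil)
	secondsPerYear := big.NewInt(31_536_000)

	years := new(big.Int).Div(keyspace, new(big.Int).Mul(guessesPerSecond, secondsPerYear))
	fmt.Printf("years to exhaust the keyspace: %s (about 10^%d years)\n",
		years.String(), len(years.String())-1)
}
```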

Conclusion

Distributed processing is inherently difficult, and agreeing on a shared truth always involves tradeoffs. Different consensus algorithms approach the problem by making different tradeoffs. Knowing your business problem allows you to make an informed choice among the available algorithms. Since there is no single correct answer, you need to understand the tradeoffs made by the algorithm you select.

David Callaghan, Solutions Architect

As a solutions architect with Perficient, I bring twenty years of development experience and I'm currently hands-on with Hadoop/Spark, blockchain and cloud, coding in Java, Scala and Go. I'm certified in and work extensively with Hadoop, Cassandra, Spark, AWS, MongoDB and Pentaho. Most recently, I've been bringing integrated blockchain (particularly Hyperledger and Ethereum) and big data solutions to the cloud with an emphasis on integrating Modern Data products such as HBase, Cassandra and Neo4J as the off-blockchain repository.
