IOST: A Scalable & Secure Blockchain App Platform For Your Next Big Idea
The infrastructure of IOSChain is similar to that of existing well-known blockchains such as Bitcoin and Ethereum, where nodes disseminate data through a gossip protocol. The system splits its data and state into shards, and each node is responsible for one shard of the system. Unspent transaction outputs (UTXOs) are stored in the memory of the nodes in the corresponding shards. This design raises several new challenges.
- How to divide the system into shards.
- How to reach consensus in each shard.
- How to perform inter-shard transactions.
In order to solve the above challenges in a fair and secure manner, we have to perform many random operations, for example, assigning nodes to shards and electing leaders in each shard. As a result, we first have to design an unforgeable and unbiased (uniformly random) distributed random number generation protocol. With such a protocol in place, the above challenges can be addressed one by one.
In the rest of this paper, we present the techniques and methods used to address these challenges.
- In Section 4, we present the Distributed Randomness Protocol (DRP), which is unforgeable and unbiased as long as the ratio of malicious nodes remains below a predefined threshold. The random numbers generated by DRP are used to divide the system into shards, assign nodes to different shards, and elect leaders within each shard.
- In Section 5, we present Efficient Distributed Sharding (EDS) - a novel scheme to form shards (subsets of validators that record state and process transactions) that are both sufficiently large and strongly bias-resistant, using a combination of DRP and VRF-based leader election via cryptographic sortition.
- In Section 6, we present TransEpoch - a secure validators-to-shards assignment protocol for epoch transitions that maintains system operability.
- In Section 7, we present Atomix - a novel two-step inter-shard atomic commit protocol that guarantees transaction atomicity in a Byzantine setting.
- In Section 8, we present Proof-of-Believability (PoB) - a novel Byzantine consensus protocol with a Believable-First approach that guarantees the safety and liveness of the system while largely maximizing transaction throughput through size-one shards.
- In Section 9, we present Micro State Blocks (MSB) - a novel mechanism to minimize storage and bootstrapping costs for validators.
Traditional approaches to generating randomness, such as Proof-of-Work-based mechanisms or a trusted beacon, suffer from computational waste and centralization concerns. It is desirable to use cryptographic tools to generate distributed random numbers, which not only saves resources but is also provably secure.
There are multiple algorithms for generating distributed random numbers for the purpose of node-shard assignment and leader election in IOSChain. Here we present the one that, to our knowledge, best fits the requirements of the IOSChain scenario. In IOSChain, the distributed random number generator has the following requirements.
- It has to be resistant to dishonest participants (including clients and servers) within a certain ratio. Specifically, the system must be able to make progress when the ratio of dishonest participants is below the threshold, and nothing bad may happen while progress is made.
- The final random number must be unforgeable and unbiased (uniformly random), except with negligible probability.
- A dishonest participant must not be able to retry the protocol multiple times to generate a random number that favors it, even when multiple dishonest participants collude.
- Third parties must be able to verify that the output was generated by faithfully running the protocol (i.e., that it satisfies all the requirements above).
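To see why the third requirement is nontrivial, consider a naive commit-reveal beacon (a strawman, not part of DRP): every participant commits to a random contribution, then reveals it, and the output is the XOR of all contributions. The Python sketch below, with hypothetical helper names, shows how the last participant to reveal can bias the output simply by withholding. DRP's PVSS shares close exactly this gap, since withheld secrets remain reconstructible by the group.

```python
import hashlib

def commit(nonce: bytes, value: bytes) -> bytes:
    """Binding hash commitment to a random contribution."""
    return hashlib.sha256(nonce + value).digest()

def combine(contributions: list) -> int:
    """Combined beacon output: XOR of all revealed contributions."""
    out = 0
    for c in contributions:
        out ^= int.from_bytes(c, "big")
    return out

# Naive commit-reveal: commit phase, then reveal phase. The last
# participant to reveal already sees every other contribution, so it
# can predict the would-be output and simply withhold its own reveal
# whenever the result is unfavorable -- one bit of bias per run.
others = [b"\x11" * 32, b"\x22" * 32, b"\x44" * 32]
mine = b"\x08" * 32
predicted = combine(others + [mine])   # attacker predicts the output
attacker_aborts = predicted % 2 == 0   # withhold if parity is "wrong"
```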
In order to achieve these requirements, we propose a client-server protocol, called the Distributed Randomness Protocol (DRP), where a client communicates with a set of servers to generate an unforgeable, uniformly random value through non-interactive zero-knowledge proofs (NIZK) and publicly verifiable secret sharing (PVSS). In any protocol run, before the protocol finishes and the final random output is revealed, no entity in the protocol is able to learn any information about the final output, which ensures that a dishonest client cannot retry multiple runs to obtain a random number that favors it.
The protocol consists of two phases, randomness generation and randomness verification, and works as follows. Initially, the client starts a protocol run by broadcasting to all servers a message that includes a randomly generated balanced grouping. In the first phase, each server generates a random input value and creates shares of it only for the other members of its group using PVSS. Upon receiving the encrypted shares with their NIZK proofs from all servers (or upon timeout), the client chooses a subset of server inputs from each group. This allows the client to fix each group's secret and thus the output of the protocol. In the second phase, the servers decrypt and send their shares to the client once they receive a sign-off on the chosen input values, produced in a global run of collective signing (CoSi). The client then combines the recovered group secrets to reveal the final random output.
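The mechanics of one DRP group can be sketched with plain Shamir secret sharing standing in for PVSS (the NIZK proofs and CoSi sign-off are omitted, and the field, threshold, and variable names below are illustrative assumptions, not the protocol's actual parameters):

```python
import random

P = 2**127 - 1  # a Mersenne prime; toy field for Shamir sharing

def share(secret: int, t: int, n: int, rng: random.Random):
    """Split secret into n Shamir shares; any t of them recover it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares) -> int:
    """Lagrange interpolation at x=0 to recover the shared secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# One group of n servers with threshold t: each server samples a random
# input and shares it within its group; the client then fixes the group
# secret from its chosen subset of inputs. Even if a chosen server later
# refuses to decrypt, any t honest shares reconstruct its input.
rng = random.Random(42)
n, t = 5, 3
inputs = [rng.randrange(P) for _ in range(n)]
group_secret = sum(inputs[:3]) % P          # client's chosen subset
shares0 = share(inputs[0], t, n, rng)       # a withholding server's input
assert recover(shares0[:t]) == inputs[0]    # ...is still recoverable
```

The final output would then combine the recovered secrets of all groups, so no single group (and no single client) controls the result.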
With the distributed randomness protocol (DRP) presented above, it is not difficult to implement efficient distributed sharding. However, since the protocol is performed collectively by the validators, it only works well in the absence of malicious or failed nodes. We therefore have to design backup protocols for scenarios with malicious or failed nodes. To address this problem, we propose a solution that borrows ideas from Algorand and OmniLedger to elect a leader.
Let v be a view counter, i a validator, sk_i the private key of validator i, e the current epoch, and Δ the synchrony bound. To elect a leader, i.e., the validator holding the minimum-value valid lottery, to run DRP, each validator i computes a lottery value using a Verifiable Random Function over its view with the node's private key sk_i. After waiting for Δ, the validators fix the minimum-value valid lottery they have seen so far, and the winner broadcasts its lottery value to all other validators with its correctness proof. The validators can then use the DRP output to compute a permutation and divide the result into buckets of the same size, thus determining the mapping from nodes to shards.
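The minimum-value lottery and the seed-driven shard assignment can be sketched as follows; a keyed hash (HMAC) stands in for a real VRF, which would additionally produce a publicly checkable correctness proof, and all function and variable names here are illustrative assumptions:

```python
import hashlib
import hmac

def lottery(sk: bytes, view: int, epoch: int) -> bytes:
    """Toy stand-in for a VRF: keyed hash of (view, epoch).
    A real VRF also yields a proof that anyone can verify."""
    msg = view.to_bytes(8, "big") + epoch.to_bytes(8, "big")
    return hmac.new(sk, msg, hashlib.sha256).digest()

def elect_leader(keys: dict, view: int, epoch: int) -> str:
    """Leader = the validator whose lottery value is minimal."""
    return min(keys, key=lambda v: lottery(keys[v], view, epoch))

def shard_assignment(seed: bytes, nodes: list, n_shards: int):
    """Derive a deterministic permutation of the nodes from the DRP
    output (seed), then cut it into equal-size buckets. Assumes
    len(nodes) is divisible by n_shards."""
    order = sorted(nodes,
                   key=lambda nd: hashlib.sha256(seed + nd.encode()).digest())
    size = len(order) // n_shards
    return [order[k * size:(k + 1) * size] for k in range(n_shards)]
```

Because both the lottery and the permutation are deterministic functions of public inputs (plus each validator's key), every node derives the same leader and the same node-to-shard mapping independently.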