The Consensus module provides an interface for consensus mechanisms.
Currently, the following consensus engines are available:
IBFT PoS
ZChains aims to remain modular and pluggable. This is why the core consensus logic has been abstracted away, so that new consensus mechanisms can be built on top without compromising usability and ease of use.
The Consensus interface is the core of the mentioned abstraction.
The VerifyHeader method is a helper function that the consensus layer exposes to the blockchain layer. It is there to handle header verification.
The Start method starts the consensus process and everything associated with it, including synchronization and sealing.
The Close method closes the consensus connection.
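For reference, here is a minimal sketch of this interface, assuming only the three methods described above (the actual interface in the codebase may expose more methods and different signatures):

```go
// Consensus is a hedged sketch of the consensus abstraction described above.
type Consensus interface {
	// VerifyHeader is the helper exposed to the blockchain layer,
	// used for header verification
	VerifyHeader(parent, header *types.Header) error

	// Start starts the consensus process and everything associated
	// with it (synchronization, sealing, ...)
	Start() error

	// Close closes the consensus connection
	Close() error
}
```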
At times, you may want to pass in a custom location for the consensus protocol to store data, or a custom key-value map for the consensus mechanism to use. This can be achieved through the Config struct, which is read when a new consensus instance is created.
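A simplified sketch of such a Config struct, assuming just the two fields mentioned above (a custom key-value map and a data path):

```go
// Config is a sketch of the consensus configuration described above.
type Config struct {
	// Config is a custom key-value parameter map for the consensus mechanism
	Config map[string]interface{}

	// Path is a custom location where the consensus protocol stores its data
	Path string
}
```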
The blockchain header object, among other fields, has a field called ExtraData. To review the fields present in the block header, please check out the State in Ethereum section.
IBFT uses this extra field to store operational information regarding the block, answering questions like:
"Who signed this block?"
"Who are the validators for this block?"
These extra fields for IBFT are defined as follows:
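As a hedged illustration, the extra payload can be thought of as a struct along these lines (field names are indicative, not taken verbatim from the codebase):

```go
// IstanbulExtra sketches the IBFT payload stored in the header's ExtraData field.
type IstanbulExtra struct {
	// Validators is the validator set for this block
	// ("Who are the validators for this block?")
	Validators []types.Address

	// Seal is the proposer's signature over the block
	// ("Who signed this block?")
	Seal []byte

	// CommittedSeal holds the commit signatures gathered from validators
	CommittedSeal [][]byte
}
```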
In order for the node to sign information in IBFT, it leverages the signHash method:
Another notable method is the VerifyCommittedFields method, which verifies that the committed seals are from valid validators:
Snapshots, as the name implies, capture the state of the system at a given block height (number).
Snapshots contain a set of nodes who are validators, as well as voting information (validators can vote for other validators). Validators include voting information in the Miner header field, and change the value of the nonce (see the sketch after this list):
Nonce is all 1s if the node wants to remove a validator
Nonce is all 0s if the node wants to add a validator
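The two sentinel nonce values can be pictured like this (an illustrative sketch; the constant names are assumptions, not taken from the codebase):

```go
var (
	// nonceDropVote (all 1s) signals a vote to remove a validator
	nonceDropVote = [8]byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff}

	// nonceAuthVote (all 0s) signals a vote to add a validator
	nonceAuthVote = [8]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
)
```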
Snapshots are calculated using the processHeaders method:
This method is usually called with a single header, but the flow is the same even with multiple headers. For each passed-in header, IBFT needs to verify that the proposer of the header is a validator. This is done by grabbing the latest snapshot and checking whether the proposer is in the validator set.
Next, the nonce is checked. The vote it carries is tallied, and if there are enough votes, a node is added to or removed from the validator set, after which the new snapshot is saved.
Snapshot Store
The snapshot service manages and updates an entity called the snapshotStore, which stores the list of all available snapshots. Using it, the service is able to quickly figure out which snapshot is associated with which block height.
To start up IBFT, ZChains first needs to set up the IBFT transport:
It essentially creates a new topic for the IBFT protocol, with a new protobuf message type. The messages are meant to be used by validators. ZChains then subscribes to the topic and handles incoming messages accordingly.
MessageReq
The message exchanged by validators:
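A hedged sketch of what the message carries (the real message is a protobuf type, so the field names and types here are only illustrative):

```go
// View is the current node position inside the chain.
type View struct {
	// Round is the proposer round for the height
	Round uint64

	// Sequence is the height of the blockchain
	Sequence uint64
}

// MessageReq sketches the message exchanged by validators.
type MessageReq struct {
	// Type is the kind of consensus message (preprepare, prepare, commit, ...)
	Type string

	// From is the validator that sent the message
	From string

	// View is the position (sequence and round) the message refers to
	View *View
}
```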
The View field in the MessageReq represents the current node position inside the chain. It has a round, and a sequence attribute.
round represents the proposer round for the height
sequence represents the height of the blockchain
The msgQueue field in the IBFT implementation has the purpose of storing message requests. It orders messages by the View (first by sequence, then by round). The IBFT implementation also maintains different queues for different states in the system.
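Using the View sketch from above, the queue ordering can be expressed as a simple comparison, first by sequence and then by round (an illustrative helper, not the codebase's actual implementation):

```go
// lessView reports whether view a should be processed before view b.
func lessView(a, b *View) bool {
	if a.Sequence != b.Sequence {
		return a.Sequence < b.Sequence
	}
	return a.Round < b.Round
}
```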
After the consensus mechanism is started using the Start method, it enters an infinite loop that simulates a state machine:
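A minimal, self-contained sketch of such a loop, assuming three states and placeholder handlers (the actual implementation has additional states and logic):

```go
type ibftState int

const (
	syncState ibftState = iota
	acceptState
	validateState
)

// Placeholder handlers; the real per-state logic is described below.
func runSyncState() ibftState     { return acceptState }
func runAcceptState() ibftState   { return validateState }
func runValidateState() ibftState { return acceptState }

// runLoop simulates the state machine until the consensus is closed.
func runLoop(closeCh <-chan struct{}) {
	state := syncState // all nodes initially start in the Sync state
	for {
		select {
		case <-closeCh:
			return
		default:
		}

		switch state {
		case syncState:
			state = runSyncState()
		case acceptState:
			state = runAcceptState()
		case validateState:
			state = runValidateState()
		}
	}
}
```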
SyncState
All nodes initially start in the Sync state.
This is because fresh data needs to be fetched from the blockchain. The client needs to find out whether it is a validator and fetch the current snapshot. This state also resolves any pending blocks.
After the sync finishes, and the client determines it is indeed a validator, it transitions to AcceptState. If the client is not a validator, it keeps syncing and stays in SyncState.
AcceptState
The Accept state always checks the snapshot and the validator set. If the current node is not in the validator set, it moves back to the Sync state.
On the other hand, if the node is a validator, it calculates the proposer. If it turns out that the current node is the proposer, it builds a block, and sends preprepare and then prepare messages.
Preprepare messages - messages sent by proposers to validators, to let them know about the proposal
Prepare messages - messages where validators agree on a proposal. All nodes receive all prepare messages
Commit messages - messages containing commit information for the proposal
If the current node is not a validator, it uses the getNextMessage method to read a message from the previously shown queue. It waits for the preprepare messages. Once it is confirmed everything is correct, the node moves to the Validate state.
ValidateState
The Validate state is rather simple - all nodes do in this state is read messages and add them to their local snapshot state.
Two of the main modules of ZChains are Blockchain and State.
Blockchain is the powerhouse that deals with block reorganizations. This means that it deals with all the logic that happens when a new block is included in the blockchain.
State represents the state transition object. It deals with how the state changes when a new block is included. Among other things, State handles:
Executing transactions
Executing the EVM
Changing the Merkle tries
Much more, which is covered in the corresponding State section 🙂
The key takeaway is that these 2 parts are very connected, and they work closely together in order for the client to function. For example, when the Blockchain layer receives a new block (and no reorganization occurred), it calls the State to perform a state transition.
Blockchain also deals with some consensus-related checks (e.g. is this ethHash correct? is this PoW correct?). In short, it is the core logic through which all blocks are included.
One of the most important parts relating to the Blockchain layer is the WriteBlocks method:
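A hedged sketch of the method's shape and flow (the exact signature and internals in the codebase may differ):

```go
// WriteBlocks writes a range of blocks to the blockchain.
func (b *Blockchain) WriteBlocks(blocks []*types.Block) error {
	// For each block in the range:
	//   1. Validate the block (header, consensus rules, ...)
	//   2. Perform the state transition (processBlock)
	//   3. Write the block and the resulting state to the chain
	for range blocks {
		// (details elided in this sketch)
	}
	return nil
}
```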
The WriteBlocks method is the entry point to write blocks into the blockchain. It takes a range of blocks as a parameter. First, the blocks are validated; after that, they are written to the chain.
The actual state transition is performed by calling the processBlock method within WriteBlocks.
It is worth mentioning that, because it is the entry point for writing blocks to the blockchain, other modules (such as the Sealer) utilize this method.
There needs to be a way to monitor blockchain-related changes. This is where Subscriptions come in.
Subscriptions are a way to tap into blockchain event streams and instantly receive meaningful data.
The Blockchain Events contain information regarding any changes made to the actual chain. This includes reorganizations, as well as new blocks:
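A hedged sketch of what such an event can carry (field names are illustrative):

```go
// EventType marks the kind of blockchain event (new head, reorganization, fork, ...).
type EventType int

// Event describes a change made to the canonical chain.
type Event struct {
	// OldChain holds the headers that were removed during a reorganization
	OldChain []*types.Header

	// NewChain holds the headers that were added to the chain
	NewChain []*types.Header

	// Type is the kind of event
	Type EventType
}
```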
:::tip Refresher Do you remember when we mentioned the monitor command in the CLI Commands?
The Blockchain Events are the original events that happen in ZChains, and they're later mapped to a Protocol Buffers message format for easy transfer. :::
As mentioned before, ZChains is a set of different modules, all connected to each other. The Blockchain is connected to the State, as well as to Synchronization, which pipes new blocks into the Blockchain.
Minimal is the cornerstone for these inter-connected modules. It acts as a central hub for all the services that run on the ZChains.
Among other things, Minimal is responsible for:
Setting up data directories
Creating a keystore for libp2p communication
Creating storage
Setting up consensus
Setting up the blockchain object with GRPC, JSON RPC, and Synchronization
The JSON RPC module implements the JSON RPC API layer, something that dApp developers use to interact with the blockchain.
It includes support for standard json-rpc endpoints, as well as websocket endpoints.
ZChains uses the blockchain interface to define all the methods that the JSON RPC module needs in order to deliver its endpoints.
The blockchain interface is implemented by the Minimal server. It is the base implementation that's passed into the JSON RPC layer.
All the standard JSON RPC endpoints are implemented in:
The Filter Manager is a service that runs alongside the JSON RPC server.
It provides support for filtering blocks on the blockchain. Specifically, it includes both a log and a block level filter.
The Filter Manager relies heavily on Subscription Events, mentioned in the Blockchain section.
Filter Manager events get dispatched in the Run method:
A node has to communicate with other nodes on the network, in order to exchange useful information. To accomplish this task, the ZChains leverages the battle-tested libp2p framework.
The choice to go with libp2p is primarily focused on:
Speed - libp2p has a significant performance improvement over devp2p (used in GETH and other clients)
Extensibility - it serves as a great foundation for other features of the system
Modularity - libp2p is modular by nature, just like ZChains. This gives greater flexibility, especially when parts of ZChains need to be swappable
On top of libp2p, ZChains uses the GRPC protocol. Technically, ZChains uses several GRPC protocols, which will be covered later on.
The GRPC layer helps abstract all the request/reply protocols and simplifies the streaming protocols needed for ZChains to function.
GRPC relies on Protocol Buffers to define services and message structures. The services and structures are defined in .proto files, which are compiled and are language-agnostic.
Earlier, we mentioned that ZChains leverages several GRPC protocols. This was done to boost the overall UX for the node operator, something which often lags with clients like GETH and Parity.
The node operator has a better overview of what is going on with the system by calling the GRPC service, instead of sifting through logs to find the information they're looking for.
The following section might seem familiar because it was briefly covered in the CLI Commands section.
The GRPC service that is intended to be used by node operators is defined like so:
:::tip The CLI commands actually call the implementations of these service methods.
These methods are implemented in minimal/system_service.go. :::
The ZChains also implements several service methods that are used by other nodes on the network. The mentioned service is described in the Protocol section.
The Crypto module contains crypto utility functions.
The Chain module contains chain parameters (active forks, consensus engine, etc.)
chains - Predefined chain configurations (mainnet, goerli, ibft)
The Helper module contains helper packages.
dao - Dao utils
enode - Enode encoding/decoding functions
hex - Hex encoding/decoding functions
ipc - IPC connection functions
keccak - Keccak functions
rlputil - RLP encoding/decoding helper functions
The Command module contains interfaces for CLI commands.
We started with the idea of making software that is modular.
This is something that is present in almost all parts of the ZChains. Below, you will find a brief overview of the built architecture and its layering.
It all starts at the base networking layer, which utilizes libp2p. We decided to go with this technology because it fits into the design philosophy of ZChains. Libp2p is:
Modular
Extensible
Fast
Most importantly, it provides a great foundation for more advanced features, which we'll cover later on.
The separation of the synchronization and consensus protocols allows for modularity and implementation of custom sync and consensus mechanisms - depending on how the client is being run.
ZChains is designed to offer off-the-shelf pluggable consensus algorithms.
The current list of supported consensus algorithms:
IBFT PoS
The Blockchain layer is the central layer that coordinates everything in the ZChains system. It is covered in depth in the corresponding Modules section.
The State inner layer contains state transition logic. It deals with how the state changes when a new block is included. It is covered in depth in the corresponding Modules section.
The JSON RPC layer is an API layer that dApp developers use to interact with the blockchain. It is covered in depth in the corresponding Modules section.
The TxPool layer represents the transaction pool, and it is closely linked with other modules in the system, as transactions can be added from multiple entry points.
The GRPC layer is vital for operator interactions. Through it, node operators can easily interact with the client, providing an enjoyable UX.
The Sealer is an entity that gathers the transactions, and creates a new block. Then, that block is sent to the Consensus module to seal it.
The final sealing logic is located within the Consensus module.
:::caution Work in progress The Sealer and the Consensus modules will be combined into a single entity in the near future.
The new module will incorporate modular logic for different kinds of consensus mechanisms, which require different sealing implementations:
PoS (Proof of Stake)
PoA (Proof of Authority)
Currently, the Sealer and the Consensus modules work with PoW (Proof of Work). :::
To truly understand how State works, you must understand some basic Ethereum concepts.
We highly recommend reading the .
Now that we've familiarized ourselves with basic Ethereum concepts, the next overview should be easy.
We mentioned that the World state trie has all the Ethereum accounts that exist. These accounts are the leaves of the Merkle trie. Each leaf has encoded Account State information.
This enables ZChains to get a specific Merkle trie for a specific point in time. For example, we can get the hash of the state at block 10.
The Merkle trie, at any point in time, is called a Snapshot.
We can have Snapshots for the state trie, or for the storage trie - they are basically the same. The only difference is in what the leaves represent:
In the case of the storage trie, the leaves contain arbitrary state data, whose contents we cannot interpret
In the case of the state trie, the leaves represent accounts
The Snapshot interface is defined as such:
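A hedged sketch of the interface, assuming a simple get/commit shape (the codebase's version may differ):

```go
// Snapshot is a sketch of the state snapshot interface described above.
type Snapshot interface {
	// Get returns the value stored under the given key, if present
	Get(k []byte) ([]byte, bool)

	// Commit applies a batch of modified objects and returns the new
	// snapshot together with the new state root
	Commit(objs []*Object) (Snapshot, []byte)
}
```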
The information that can be committed is defined by the Object struct:
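A trimmed, illustrative sketch of such an Object, carrying the account-state fields you would expect from the Ethereum account model (the actual struct contains more bookkeeping fields):

```go
// Object sketches a modified account that can be committed to a snapshot.
type Object struct {
	// Address of the account
	Address types.Address

	// Nonce and Balance of the account
	Nonce   uint64
	Balance *big.Int

	// Root is the root of the account's storage trie
	Root types.Hash

	// CodeHash is the hash of the account's EVM bytecode
	CodeHash types.Hash

	// Deleted marks the account as removed
	Deleted bool
}
```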
The implementation for the Merkle trie is in the state/immutable-trie folder. state/immutable-trie/state.go implements the State interface.
state/immutable-trie/trie.go is the main Merkle trie object. It represents an optimized version of the Merkle trie, which reuses as much memory as possible.
state/executor.go includes all the information needed for ZChains to decide how a block changes the current state. The implementation of ProcessBlock is located here.
The apply method does the actual state transition. The executor calls the EVM.
When a state transition is executed, the main module that executes the state transition is the EVM (located in state/runtime/evm).
The dispatch table does a match between the opcode and the instruction.
The core logic that powers the EVM is the Run loop.
This is the main entry point for the EVM. It loops over the bytecode: it reads the current opcode, fetches the corresponding instruction, checks whether it can be executed, consumes gas, and executes the instruction, until execution either fails or stops.
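A minimal, self-contained sketch of this dispatch pattern; the names and the gas model are simplified, but the loop structure mirrors what is described above:

```go
// instruction couples an opcode's gas cost with its execution logic.
type instruction struct {
	gas  uint64
	exec func(s *execState)
}

// execState is a minimal execution context for this sketch.
type execState struct {
	code    []byte
	ip      int
	gas     uint64
	stopped bool
	err     error
}

// dispatchTable maps each opcode to its instruction.
var dispatchTable [256]*instruction

// run fetches the opcode, looks up the instruction, consumes gas,
// executes it, and repeats until execution fails or stops.
func run(s *execState) error {
	for !s.stopped && s.err == nil && s.ip < len(s.code) {
		op := s.code[s.ip]

		inst := dispatchTable[op]
		if inst == nil {
			return fmt.Errorf("invalid opcode 0x%x", op)
		}

		if s.gas < inst.gas {
			return fmt.Errorf("out of gas")
		}
		s.gas -= inst.gas

		inst.exec(s)
		s.ip++
	}
	return s.err
}
```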
The TxPool module represents the transaction pool implementation, where transactions are added from different parts of the system. The module also exposes several useful features for node operators, which are covered below.
Node operators can query these GRPC endpoints, as described in the section.
The addImpl method is the bread and butter of the TxPool module. It is the central place where transactions are added in the system, being called from the GRPC service, JSON RPC endpoints, and whenever the client receives a transaction through the gossip protocol.
It takes in as an argument ctx, which just denotes the context from which the transactions are being added (GRPC, JSON RPC...). The other parameter is the list of transactions to be added to the pool.
The key thing to note here is the check for the From field within the transaction:
If the From field is empty, it is regarded as an unencrypted/unsigned transaction. These kinds of transactions are only accepted in development mode
If the From field is not empty, that means that it's a signed transaction, so signature verification takes place
After all these validations, the transactions are considered to be valid.
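A hedged sketch of this check; the Signer interface and the helper name are assumptions made for illustration, not the codebase's actual API:

```go
// Signer recovers the sender of a signed transaction (assumed interface).
type Signer interface {
	Sender(tx *types.Transaction) (types.Address, error)
}

// validateFrom mirrors the From-field check described above.
func validateFrom(tx *types.Transaction, signer Signer, devMode bool) error {
	if tx.From == (types.Address{}) {
		// Empty From field: an unsigned transaction,
		// only accepted in development mode
		if !devMode {
			return fmt.Errorf("unsigned transactions are only accepted in dev mode")
		}
		return nil
	}

	// Non-empty From field: a signed transaction, so the signature
	// is verified by recovering the sender
	sender, err := signer.Sender(tx)
	if err != nil {
		return err
	}
	if sender != tx.From {
		return fmt.Errorf("invalid transaction signature")
	}
	return nil
}
```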
The fields in the TxPool object that can cause confusion are the queue and sorted lists.
queue - Heap implementation of a sorted list of account transactions (by nonce)
sorted - Sorted list for all the current promoted transactions (all executable transactions). Sorted by gas price
Whenever you submit a transaction, there are three ways it can be processed by the TxPool.
All pending transactions can fit in a block
One or more pending transactions can not fit in a block
One or more pending transactions will never fit in a block
Here, the word fit means that the transaction has a gas limit that is lower than the remaining gas in the TxPool.
The first scenario does not produce any error.
The TxPool remaining gas is set to the gas limit of the last block, let's say 5000
A first transaction is processed and consumes 3000 gas of the TxPool
The remaining gas of the TxPool is now 2000
A second transaction, identical to the first one (it also consumes 3000 units of gas), is submitted
Since the remaining gas of the TxPool is lower than the transaction gas, it cannot be processed in the current block
It is put back into a pending transaction queue so that it can be processed in the next block
The first block is written, let's call it block #1
The TxPool remaining gas is set to the parent block - block #1's gas limit
The transaction which was put back into the TxPool pending transaction queue is now processed and written in the block
The TxPool remaining gas is now 2000
The second block is written
...
The TxPool remaining gas is set to the gas limit of the last block, let's say 5000
A first transaction is processed and consumes 3000 gas of the TxPool
The remaining gas of the TxPool is now 2000
A second transaction, with a gas field set to 6000, is submitted
Since the block gas limit is lower than the transaction gas, this transaction is discarded
It will never be able to fit in a block
The first block is written
...
This happens whenever you get the following error:
There are situations when nodes want to keep the block gas limit below or at a certain target on a running chain.
The node operator can set the target gas limit on a specific node, which will try to apply this limit to newly created blocks. If the majority of the other nodes also have a similar (or the same) target gas limit set, then the block gas limit will always hover around that block gas target, slowly progressing towards it (at most 1/1024 * parent block gas limit per block) as new blocks are created.
The node operator sets the target gas limit for a single node to 5000
Other nodes are configured with a target of 5000 as well, apart from a single node which is configured with 7000
When the nodes that have their gas target set to 5000 become proposers, they check whether the gas limit is already at the target
If the gas limit is not at the target (it is greater or lower), the proposer node will move the block gas limit by at most 1/1024 of the parent gas limit in the direction of the target
Ex: parentGasLimit = 4500 and blockGasTarget = 5000, the proposer will calculate the gas limit for the new block as 4504.39453125 (4500 + 4500/1024)
Ex: parentGasLimit = 5500 and blockGasTarget = 5000, the proposer will calculate the gas limit for the new block as 5494.62890625 (5500 - 5500/1024)
This ensures that the block gas limit in the chain is kept close to the target, because the single proposer that has the target configured to 7000 cannot advance the limit much, and the majority of the nodes that have it set to 5000 will always pull it back towards the target
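The adjustment rule from the example above can be sketched as a small helper; note that the example uses exact fractional values, while this sketch works with integer gas units (so 4500 + 4500/1024 rounds down to 4504):

```go
// calcNextGasLimit moves the gas limit from the parent's value towards the
// configured block gas target, by at most parentGasLimit/1024 per block.
func calcNextGasLimit(parentGasLimit, blockGasTarget uint64) uint64 {
	delta := parentGasLimit / 1024

	switch {
	case parentGasLimit < blockGasTarget:
		// increase, but never overshoot the target
		if parentGasLimit+delta > blockGasTarget {
			return blockGasTarget
		}
		return parentGasLimit + delta
	case parentGasLimit > blockGasTarget:
		// decrease, but never undershoot the target
		if parentGasLimit-delta < blockGasTarget {
			return blockGasTarget
		}
		return parentGasLimit - delta
	default:
		// already at the target
		return blockGasTarget
	}
}
```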
The ZChains currently utilizes LevelDB for data storage, as well as an in-memory data store.
Throughout the ZChains, when modules need to interact with the underlying data store, they don't need to know which DB engine or service they're speaking to.
The DB layer is abstracted away behind a module called Storage, which exports interfaces that other modules query.
Each DB layer, for now only LevelDB, implements these methods separately, making sure they fit in with their implementation.
In order to make querying the LevelDB storage deterministic, and to avoid key storage clashing, the ZChains leverages prefixes and sub-prefixes when storing data.
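For illustration, prefixed keys could look like this (the prefix values here are hypothetical, not the ones actually used by the storage layer):

```go
var (
	headerPrefix   = []byte("h") // block headers
	bodyPrefix     = []byte("b") // block bodies
	receiptsPrefix = []byte("r") // transaction receipts
)

// headerKey namespaces a header lookup by prepending the prefix to the block
// hash, keeping different object types from clashing in the same key-value store.
func headerKey(hash []byte) []byte {
	return append(append([]byte{}, headerPrefix...), hash...)
}
```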
The plans for the near future include adding some of the most popular DB solutions, such as:
PostgreSQL
MySQL
The Types module implements core object types, such as:
Address
Hash
Header
lots of helper functions
Unlike clients such as GETH, the ZChains doesn't use reflection for the encoding. The preference was not to use reflection because it introduces new problems, such as performance degradation and harder scaling.
The Types module provides an easy-to-use interface for RLP marshaling and unmarshalling, using the FastRLP package.
Marshaling is done through the MarshalRLPWith and MarshalRLPTo methods. The analogous methods exist for unmarshalling.
By manually defining these methods, the ZChains doesn't need to use reflection. In rlp_marshal.go, you can find methods for marshaling the following (see the sketch after this list):
Bodies
Blocks
Headers
Receipts
Logs
Transactions
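To illustrate the pattern, here is a hedged sketch of a manual MarshalRLPWith method using the FastRLP arena; the Log type shown here is simplified, not the exact structure from the codebase:

```go
import "github.com/umbracle/fastrlp"

// Log is a simplified log record used only for this sketch.
type Log struct {
	Address [20]byte
	Topics  [][32]byte
	Data    []byte
}

// MarshalRLPWith builds the RLP representation of the log using the arena,
// without any reflection.
func (l *Log) MarshalRLPWith(arena *fastrlp.Arena) *fastrlp.Value {
	v := arena.NewArray()

	v.Set(arena.NewBytes(l.Address[:]))

	topics := arena.NewArray()
	for _, t := range l.Topics {
		topics.Set(arena.NewBytes(t[:]))
	}
	v.Set(topics)

	v.Set(arena.NewBytes(l.Data))

	return v
}
```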