Blockchain

Overview

Two of the main modules of the ZChains are Blockchain and State.

Blockchain is the powerhouse that handles block reorganizations. This means it contains all the logic that runs when a new block is included in the blockchain.

State represents the state transition object. It deals with how the state changes when a new block is included. Among other things, State handles:

  • Executing transactions

  • Executing the EVM

  • Changing the Merkle tries

  • Much more, which is covered in the corresponding State section 🙂

The key takeaway is that these two parts are tightly connected, and they work closely together in order for the client to function. For example, when the Blockchain layer receives a new block (and no reorganization occurred), it calls the State to perform a state transition.

Blockchain also has to deal with some parts relating to consensus (e.g. is this ethash correct? is this PoW valid?). In one sentence, it is the core of logic through which all blocks are included.

WriteBlocks

One of the most important parts relating to the Blockchain layer is the WriteBlocks method:

The WriteBlocks method is the entry point to write blocks into the blockchain. As a parameter, it takes in a range of blocks. Firstly, the blocks are validated. After that, they are written to the chain.

The actual state transition is performed by calling the processBlock method within WriteBlocks.

It is worth mentioning that, because it is the entry point for writing blocks to the blockchain, other modules (such as the Sealer) utilize this method.

Blockchain Subscriptions

There needs to be a way to monitor blockchain-related changes. This is where Subscriptions come in.

Subscriptions are a way to tap into blockchain event streams and instantly receive meaningful data.

The Blockchain Events contain information regarding any changes made to the actual chain. This includes reorganizations, as well as new blocks:

:::tip Refresher Do you remember when we mentioned the monitor command in the CLI Commands section?

The Blockchain Events are the original events that happen in ZChains, and they're later mapped to a Protocol Buffers message format for easy transfer. :::

// WriteBlocks writes a batch of blocks
func (b *Blockchain) WriteBlocks(blocks []*types.Block) error {
	if len(blocks) == 0 {
		return fmt.Errorf("no headers found to insert")
	}

	parent, ok := b.readHeader(blocks[0].ParentHash())
	if !ok {
		return fmt.Errorf("parent of %s (%d) not found: %s", blocks[0].Hash().String(), blocks[0].Number(), blocks[0].ParentHash())
	}

	// validate chain
	for i := 0; i < len(blocks); i++ {
		block := blocks[i]

		if block.Number()-1 != parent.Number {
			return fmt.Errorf("number sequence not correct at %d, %d and %d", i, block.Number(), parent.Number)
		}
		if block.ParentHash() != parent.Hash {
			return fmt.Errorf("parent hash not correct")
		}
		if err := b.consensus.VerifyHeader(parent, block.Header, false, true); err != nil {
			return fmt.Errorf("failed to verify the header: %v", err)
		}

		// verify body data
		if hash := buildroot.CalculateUncleRoot(block.Uncles); hash != block.Header.Sha3Uncles {
			return fmt.Errorf("uncle root hash mismatch: have %s, want %s", hash, block.Header.Sha3Uncles)
		}
		
		if hash := buildroot.CalculateTransactionsRoot(block.Transactions); hash != block.Header.TxRoot {
			return fmt.Errorf("transaction root hash mismatch: have %s, want %s", hash, block.Header.TxRoot)
		}
		parent = block.Header
	}

	// Write chain
	for indx, block := range blocks {
		header := block.Header

		body := block.Body()
		if err := b.db.WriteBody(header.Hash, block.Body()); err != nil {
			return err
		}
		b.bodiesCache.Add(header.Hash, body)

		// Verify uncles. It requires to have the bodies on memory
		if err := b.VerifyUncles(block); err != nil {
			return err
		}
		// Process and validate the block
		if err := b.processBlock(blocks[indx]); err != nil {
			return err
		}

		// Write the header to the chain
		evnt := &Event{}
		if err := b.writeHeaderImpl(evnt, header); err != nil {
			return err
		}
		b.dispatchEvent(evnt)

		// Update the average gas price
		b.UpdateGasPriceAvg(new(big.Int).SetUint64(header.GasUsed))
	}

	return nil
}
type Subscription interface {
    // Returns a Blockchain Event channel
	GetEventCh() chan *Event
	
	// Returns the latest event (blocking)
	GetEvent() *Event
	
	// Closes the subscription
	Close()
}
type Event struct {
	// Old chain removed if there was a reorg
	OldChain []*types.Header

	// New part of the chain (or a fork)
	NewChain []*types.Header

	// Difficulty is the new difficulty created with this event
	Difficulty *big.Int

	// Type is the type of event
	Type EventType

	// Source is the source that generated the blocks for the event
	// right now it can be either the Sealer or the Syncer. TODO
	Source string
}
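
For illustration, here is a minimal sketch of how another module might consume these events through the Subscription interface shown above. The SubscribeEvents call mirrors how the Sealer taps into the Blockchain layer; the surrounding function and package names are assumptions, not the actual implementation.

// watchChain is a hypothetical consumer of Blockchain Events.
func watchChain(b *blockchain.Blockchain) {
	sub := b.SubscribeEvents()
	defer sub.Close()

	// GetEventCh returns the raw event channel; GetEvent would block instead
	for ev := range sub.GetEventCh() {
		if len(ev.OldChain) > 0 {
			// a reorganization happened: OldChain holds the removed headers
		}

		for _, header := range ev.NewChain {
			_ = header // process the newly included (or forked) headers
		}
	}
}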

Architecture Overview

We started with the idea of making software that is modular.

This is something that is present in almost all parts of the ZChains. Below, you will find a brief overview of the built architecture and its layering.

ZChains Layering

Polygon Edge Architecture

Libp2p

It all starts at the base networking layer, which utilizes libp2p. We decided to go with this technology because it fits the design philosophy of ZChains. Libp2p is:

  • Modular

  • Extensible

  • Fast

Most importantly, it provides a great foundation for more advanced features, which we'll cover later on.

Synchronization & Consensus

The separation of the synchronization and consensus protocols allows for modularity and implementation of custom sync and consensus mechanisms - depending on how the client is being run.

ZChains is designed to offer off-the-shelf pluggable consensus algorithms.

The current list of supported consensus algorithms:

  • IBFT PoS

Blockchain

The Blockchain layer is the central layer that coordinates everything in the ZChains system. It is covered in depth in the corresponding Modules section.

State

The State inner layer contains state transition logic. It deals with how the state changes when a new block is included. It is covered in depth in the corresponding Modules section.

JSON RPC

The JSON RPC layer is an API layer that dApp developers use to interact with the blockchain. It is covered in depth in the corresponding Modules section.

TxPool

The TxPool layer represents the transaction pool, and it is closely linked with other modules in the system, as transactions can be added from multiple entry points.

GRPC

The GRPC layer is vital for operator interactions. Through it, node operators can easily interact with the client, providing an enjoyable UX.

Sealer

Overview

The Sealer is an entity that gathers the transactions, and creates a new block. Then, that block is sent to the Consensus module to seal it.

The final sealing logic is located within the Consensus module.

Run Method

:::caution Work in progress The Sealer and the Consensus modules will be combined into a single entity in the near future.

The new module will incorporate modular logic for different kinds of consensus mechanisms, which require different sealing implementations:

  • PoS (Proof of Stake)

  • PoA (Proof of Authority)

Currently, the Sealer and the Consensus modules work with PoW (Proof of Work). :::

Networking

Overview

A node has to communicate with other nodes on the network, in order to exchange useful information. To accomplish this task, the ZChains leverages the battle-tested libp2p framework.

The choice to go with libp2p is primarily focused on:

  • Speed - libp2p has a significant performance improvement over devp2p (used in GETH and other clients)

  • Extensibility - it serves as a great foundation for other features of the system

  • Modularity - libp2p is modular by nature, just like the Polygon Edge. This gives greater flexibility, especially when parts of the Polygon Edge need to be swappable

GRPC

On top of libp2p, the Polygon Edge uses the GRPC protocol. Technically, the Polygon Edge uses several GRPC protocols, which will be covered later on.

The GRPC layer helps abstract all the request/reply protocols and simplifies the streaming protocols needed for the Polygon Edge to function.

GRPC relies on Protocol Buffers to define services and message structures. The services and structures are defined in .proto files, which are compiled and are language-agnostic.

Earlier, we mentioned that the Polygon Edge leverages several GRPC protocols. This was done to boost the overall UX for the node operator, something which often lags with clients like GETH and Parity.

The node operator has a better overview of what is going on with the system by calling the GRPC service, instead of sifting through logs to find the information they're looking for.

GRPC for Node Operators

The following section might seem familiar because it was briefly covered in the CLI Commands section.

The GRPC service that is intended to be used by node operators is defined like so:

:::tip The CLI commands actually call the implementations of these service methods.

These methods are implemented in minimal/system_service.go. :::
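
As a rough illustration, an operator tool could call this service through the generated Go stubs. Everything below is a sketch: the import path of the generated package, the client constructor name, and the listen address are assumptions, not values taken from the codebase.

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/protobuf/types/known/emptypb"

	// hypothetical path to the stubs generated from the System service .proto
	proto "example.com/zchains/server/proto"
)

func main() {
	// assumed operator GRPC address of a locally running node
	conn, err := grpc.Dial("127.0.0.1:9632", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := proto.NewSystemClient(conn) // generated client for the System service
	status, err := client.GetStatus(context.Background(), &emptypb.Empty{})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("server status: %+v\n", status)
}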

GRPC for Other Nodes

The ZChains also implements several service methods that are used by other nodes on the network. The mentioned service is described in the Protocol section.

📜 Resources

CLI Commands
Protocol
Protocol Buffers
libp2p
gRPC
func (s *Sealer) run(ctx context.Context) {
	sub := s.blockchain.SubscribeEvents()
	eventCh := sub.GetEventCh()

	for {
		if s.config.DevMode {
			// In dev-mode we wait for new transactions to seal blocks
			select {
			case <-s.wakeCh:
			case <-ctx.Done():
				return
			}
		}

		// start sealing
		subCtx, cancel := context.WithCancel(ctx)
		done := s.sealAsync(subCtx)

		// wait for the sealing to be done
		select {
		case <-done:
			// the sealing process has finished
		case <-ctx.Done():
			// the sealing routine has been canceled
		case <-eventCh:
			// there is a new head, reset sealer
		}

		// cancel the sealing process context
		cancel()

		if ctx.Err() != nil {
			return
		}
	}
}
service System {
    // GetInfo returns info about the client
    rpc GetStatus(google.protobuf.Empty) returns (ServerStatus);

    // PeersAdd adds a new peer
    rpc PeersAdd(PeersAddRequest) returns (google.protobuf.Empty);

    // PeersList returns the list of peers
    rpc PeersList(google.protobuf.Empty) returns (PeersListResponse);

    // PeersInfo returns the info of a peer
    rpc PeersStatus(PeersStatusRequest) returns (Peer);

    // Subscribe subscribes to blockchain events
    rpc Subscribe(google.protobuf.Empty) returns (stream BlockchainEvent);
}

Storage

Overview

The ZChains currently utilizes LevelDB for data storage, as well as an in-memory data store.

Throughout the ZChains, when modules need to interact with the underlying data store, they don't need to know which DB engine or service they're speaking to.

The DB layer is abstracted away behind a module called Storage, which exports interfaces that the other modules query.

Each DB layer, for now only LevelDB, implements these methods separately, making sure they fit in with their implementation.
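
As a small sketch of what that looks like from a consumer's point of view, a module can read the current head without caring about the engine behind the interface. The storage package path is an assumption; only the method names come from the Storage interface shown further down.

// currentHeader reads the head hash and resolves it to a header,
// regardless of which DB engine implements the Storage interface.
func currentHeader(s storage.Storage) (*types.Header, error) {
	hash, ok := s.ReadHeadHash()
	if !ok {
		return nil, fmt.Errorf("no head hash stored")
	}

	return s.ReadHeader(hash)
}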

LevelDB

Prefixes

In order to make querying the LevelDB storage deterministic, and to avoid key clashes, the ZChains leverages prefixes and sub-prefixes when storing data.
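
A sketch of how such prefixed keys could be composed is shown below, written as if it lived inside the storage package (the exact key layout is defined by the LevelDB implementation, so treat the helpers as illustrative):

// headerKey builds the lookup key for a header: HEADER prefix + block hash
func headerKey(hash types.Hash) []byte {
	return append(append([]byte{}, HEADER...), hash.Bytes()...)
}

// canonicalHashKey builds the lookup key for a canonical number:
// CANONICAL prefix + NUMBER sub-prefix + big-endian block number
func canonicalHashKey(number uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, number)

	return append(append(append([]byte{}, CANONICAL...), NUMBER...), buf...)
}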

Future Plans

The plans for the near future include adding some of the most popular DB solutions, such as:

  • PostgreSQL

  • MySQL

📜 Resources

LevelDB
// Storage is a generic blockchain storage
type Storage interface {
	ReadCanonicalHash(n uint64) (types.Hash, bool)
	WriteCanonicalHash(n uint64, hash types.Hash) error

	ReadHeadHash() (types.Hash, bool)
	ReadHeadNumber() (uint64, bool)
	WriteHeadHash(h types.Hash) error
	WriteHeadNumber(uint64) error

	WriteForks(forks []types.Hash) error
	ReadForks() ([]types.Hash, error)

	WriteDiff(hash types.Hash, diff *big.Int) error
	ReadDiff(hash types.Hash) (*big.Int, bool)

	WriteHeader(h *types.Header) error
	ReadHeader(hash types.Hash) (*types.Header, error)

	WriteCanonicalHeader(h *types.Header, diff *big.Int) error

	WriteBody(hash types.Hash, body *types.Body) error
	ReadBody(hash types.Hash) (*types.Body, error)

	WriteSnapshot(hash types.Hash, blob []byte) error
	ReadSnapshot(hash types.Hash) ([]byte, bool)

	WriteReceipts(hash types.Hash, receipts []*types.Receipt) error
	ReadReceipts(hash types.Hash) ([]*types.Receipt, error)

	WriteTxLookup(hash types.Hash, blockHash types.Hash) error
	ReadTxLookup(hash types.Hash) (types.Hash, bool)

	Close() error
}
// Prefixes for the key-value store
var (
	// DIFFICULTY is the difficulty prefix
	DIFFICULTY = []byte("d")

	// HEADER is the header prefix
	HEADER = []byte("h")

	// HEAD is the chain head prefix
	HEAD = []byte("o")

	// FORK is the entry to store forks
	FORK = []byte("f")

	// CANONICAL is the prefix for the canonical chain numbers
	CANONICAL = []byte("c")

	// BODY is the prefix for bodies
	BODY = []byte("b")

	// RECEIPTS is the prefix for receipts
	RECEIPTS = []byte("r")

	// SNAPSHOTS is the prefix for snapshots
	SNAPSHOTS = []byte("s")

	// TX_LOOKUP_PREFIX is the prefix for transaction lookups
	TX_LOOKUP_PREFIX = []byte("l")
)

// Sub-prefixes
var (
	HASH   = []byte("hash")
	NUMBER = []byte("number")
	EMPTY  = []byte("empty")
)

Consensus

Overview

The Consensus module provides an interface for consensus mechanisms.

Currently, the following consensus engines are available:

  • IBFT PoS

The ZChains aims to stay modular and pluggable. This is why the core consensus logic has been abstracted away, so new consensus mechanisms can be built on top without compromising usability and ease of use.

Consensus Interface

The Consensus interface is the core of the mentioned abstraction.

  • The VerifyHeader method is a helper function that the consensus layer exposes to the blockchain layer. It handles header verification

  • The Start method simply starts the consensus process, and everything associated with it. This includes synchronization, sealing, everything that needs to be done

  • The Close method closes the consensus connection
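
To make the abstraction concrete, a do-nothing engine that satisfies the interface could look like the sketch below; everything apart from the three interface methods is hypothetical and would only be useful for local experiments.

// noopConsensus is a hypothetical engine that accepts every header.
type noopConsensus struct{}

// VerifyHeader approves any header - fine for a throwaway dev chain, never for production
func (c *noopConsensus) VerifyHeader(parent, header *types.Header) error {
	return nil
}

// Start would normally launch syncing and sealing; nothing to do here
func (c *noopConsensus) Start() error {
	return nil
}

// Close releases any resources held by the engine
func (c *noopConsensus) Close() error {
	return nil
}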

Consensus Configuration

There may be times when you might want to pass in a custom location for the consensus protocol to store data, or perhaps a custom key-value map that you want the consensus mechanism to use. This can be achieved through the Config struct, which gets read when a new consensus instance is created.
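
A sketch of what such a configuration might look like is shown below; the engine-specific keys, the path, and the surrounding variables are assumptions for illustration, and only the struct fields come from the Config definition further down.

// newConsensusConfig builds an illustrative Config for a consensus instance.
func newConsensusConfig(chainParams *chain.Params, dataDir string) *consensus.Config {
	return &consensus.Config{
		Logger: log.New(os.Stdout, "consensus: ", log.LstdFlags),
		Params: chainParams, // *chain.Params, typically loaded from the genesis file
		Config: map[string]interface{}{
			"epochSize": 100000, // hypothetical engine-specific parameter
		},
		Path: filepath.Join(dataDir, "consensus"), // where the engine may persist data (e.g. snapshots)
	}
}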

IBFT

ExtraData

The blockchain header object, among other fields, has a field called ExtraData. To review the fields present in the block header, please check out the State in Ethereum section.

IBFT uses this extra field to store operational information regarding the block, answering questions like:

  • "Who signed this block?"

  • "Who are the validators for this block?"

These extra fields for IBFT are defined as follows:

Signing Data

In order for the node to sign information in IBFT, it leverages the signHash method:

Another notable method is the VerifyCommittedFields method, which verifies that the committed seals are from valid validators:

Snapshots

Snapshots, as the name implies, are there to provide a snapshot, or the state of a system at any block height (number).

Snapshots contain a set of nodes who are validators, as well as voting information (validators can vote for other validators). Validators include voting information in the Miner header field, and change the value of the nonce:

  • Nonce is all 1s if the node wants to remove a validator

  • Nonce is all 0s if the node wants to add a validator

Snapshots are calculated using the processHeaders method:

This method is usually called with one header, but the flow is the same even with multiple headers. For each passed-in header, IBFT needs to verify that the proposer of the header is a validator. This can be done easily by grabbing the latest snapshot and checking whether the node is in the validator set.

Next, the nonce is checked. The vote is included, and tallied - and if there are enough votes a node is added/removed from the validator set, following which the new snapshot is saved.

Snapshot Store

The snapshot service manages and updates an entity called the snapshotStore, which stores the list of all available snapshots. Using it, the service is able to quickly figure out which snapshot is associated with which block height.
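
A lookup sketch is shown below, assuming the list is a slice kept sorted in ascending order by snapshot number; the real method on the snapshotStore may differ in name and details.

// find returns the snapshot in effect at the given block height.
func (s *snapshotStore) find(number uint64) *Snapshot {
	s.lock.Lock()
	defer s.lock.Unlock()

	// keep the last snapshot taken at or below the requested height
	var res *Snapshot
	for _, snap := range s.list {
		if snap.Number > number {
			break
		}
		res = snap
	}

	return res
}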

IBFT Startup

To start up IBFT, the Polygon Edge firstly needs to set up the IBFT transport:

It essentially creates a new topic for the IBFT protocol, using the protobuf MessageReq message. The messages are meant to be used by validators. The Polygon Edge then subscribes to the topic and handles messages accordingly.

MessageReq

The message exchanged by validators:

The View field in the MessageReq represents the current node position inside the chain. It has a round, and a sequence attribute.

  • round represents the proposer round for the height

  • sequence represents the height of the blockchain

The msgQueue field in the IBFT implementation has the purpose of storing message requests. It orders messages by the View (first by sequence, then by round). The IBFT implementation also keeps separate queues for different states in the system.
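
The ordering the queue relies on can be expressed as a simple comparison, sketched below (the helper name is illustrative; the View fields come from the protobuf message shown further down):

// viewLess orders two views first by sequence (height), then by round.
func viewLess(a, b *proto.View) bool {
	if a.Sequence != b.Sequence {
		return a.Sequence < b.Sequence
	}

	return a.Round < b.Round
}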

IBFT States

After the consensus mechanism is started using the Start method, it runs an infinite loop that simulates a state machine:

SyncState

All nodes initially start in the Sync state.

This is because fresh data needs to be fetched from the blockchain. The client needs to find out whether it is a validator, and it needs the current snapshot. This state also resolves any pending blocks.

After the sync finishes, and the client determines it is indeed a validator, it needs to transfer to AcceptState. If the client is not a validator, it will continue syncing and stay in SyncState.

AcceptState

The Accept state always checks the snapshot and the validator set. If the current node is not in the validator set, it moves back to the Sync state.

On the other hand, if the node is a validator, it calculates the proposer. If it turns out that the current node is the proposer, it builds a block, and sends preprepare and then prepare messages.

  • Preprepare messages - messages sent by proposers to validators, to let them know about the proposal

  • Prepare messages - messages where validators agree on a proposal. All nodes receive all prepare messages

  • Commit messages - messages containing commit information for the proposal

If the current node is not a validator, it uses the getNextMessage method to read a message from the previously shown queue. It waits for the preprepare messages. Once it is confirmed everything is correct, the node moves to the Validate state.

ValidateState

The Validate state is rather simple - all nodes do in this state is read messages and add them to their local snapshot state.

// Consensus is the interface for consensus
type Consensus interface {
	// VerifyHeader verifies the header is correct
	VerifyHeader(parent, header *types.Header) error

	// Start starts the consensus
	Start() error

	// Close closes the connection
	Close() error
}
// Config is the configuration for the consensus
type Config struct {
	// Logger to be used by the backend
	Logger *log.Logger

	// Params are the params of the chain and the consensus
	Params *chain.Params

	// Specific configuration parameters for the backend
	Config map[string]interface{}

	// Path for the consensus protocol to store information
	Path string
}
type IstanbulExtra struct {
	Validators    []types.Address
	Seal          []byte
	CommittedSeal [][]byte
}
func signHash(h *types.Header) ([]byte, error) {
	//hash := istambulHeaderHash(h)
	//return hash.Bytes(), nil

	h = h.Copy() // make a copy since we update the extra field

	arena := fastrlp.DefaultArenaPool.Get()
	defer fastrlp.DefaultArenaPool.Put(arena)

	// when hashing the block for signing we have to remove from
	// the extra field the seal and committed seal items
	extra, err := getIbftExtra(h)
	if err != nil {
		return nil, err
	}
	putIbftExtraValidators(h, extra.Validators)

	vv := arena.NewArray()
	vv.Set(arena.NewBytes(h.ParentHash.Bytes()))
	vv.Set(arena.NewBytes(h.Sha3Uncles.Bytes()))
	vv.Set(arena.NewBytes(h.Miner.Bytes()))
	vv.Set(arena.NewBytes(h.StateRoot.Bytes()))
	vv.Set(arena.NewBytes(h.TxRoot.Bytes()))
	vv.Set(arena.NewBytes(h.ReceiptsRoot.Bytes()))
	vv.Set(arena.NewBytes(h.LogsBloom[:]))
	vv.Set(arena.NewUint(h.Difficulty))
	vv.Set(arena.NewUint(h.Number))
	vv.Set(arena.NewUint(h.GasLimit))
	vv.Set(arena.NewUint(h.GasUsed))
	vv.Set(arena.NewUint(h.Timestamp))
	vv.Set(arena.NewCopyBytes(h.ExtraData))

	buf := keccak.Keccak256Rlp(nil, vv)
	return buf, nil
}
func verifyCommitedFields(snap *Snapshot, header *types.Header) error {
	extra, err := getIbftExtra(header)
	if err != nil {
		return err
	}
	if len(extra.CommittedSeal) == 0 {
		return fmt.Errorf("empty committed seals")
	}

	// get the message that needs to be signed
	signMsg, err := signHash(header)
	if err != nil {
		return err
	}
	signMsg = commitMsg(signMsg)

	visited := map[types.Address]struct{}{}
	for _, seal := range extra.CommittedSeal {
		addr, err := ecrecoverImpl(seal, signMsg)
		if err != nil {
			return err
		}

		if _, ok := visited[addr]; ok {
			return fmt.Errorf("repeated seal")
		} else {
			if !snap.Set.Includes(addr) {
				return fmt.Errorf("signed by non validator")
			}
			visited[addr] = struct{}{}
		}
	}

	validSeals := len(visited)
	if validSeals <= 2*snap.Set.MinFaultyNodes() {
		return fmt.Errorf("not enough seals to seal block")
	}
	return nil
}
func (i *Ibft) processHeaders(headers []*types.Header) error {
	if len(headers) == 0 {
		return nil
	}

	parentSnap, err := i.getSnapshot(headers[0].Number - 1)
	if err != nil {
		return err
	}
	snap := parentSnap.Copy()

	saveSnap := func(h *types.Header) error {
		if snap.Equal(parentSnap) {
			return nil
		}

		snap.Number = h.Number
		snap.Hash = h.Hash.String()

		i.store.add(snap)

		parentSnap = snap
		snap = parentSnap.Copy()
		return nil
	}

	for _, h := range headers {
		number := h.Number

		validator, err := ecrecoverFromHeader(h)
		if err != nil {
			return err
		}
		if !snap.Set.Includes(validator) {
			return fmt.Errorf("unauthroized validator")
		}

		if number%i.epochSize == 0 {
			// during a checkpoint block, we reset the votes
			// and there cannot be any proposals
			snap.Votes = nil
			if err := saveSnap(h); err != nil {
				return err
			}

			// remove in-memory snapshots from two epochs before this one
			epoch := int(number/i.epochSize) - 2
			if epoch > 0 {
				purgeBlock := uint64(epoch) * i.epochSize
				i.store.deleteLower(purgeBlock)
			}
			continue
		}

		// if we have a miner address, this might be a vote
		if h.Miner == types.ZeroAddress {
			continue
		}

		// the nonce selects the action
		var authorize bool
		if h.Nonce == nonceAuthVote {
			authorize = true
		} else if h.Nonce == nonceDropVote {
			authorize = false
		} else {
			return fmt.Errorf("incorrect vote nonce")
		}

		// validate the vote
		if authorize {
			// we can only authorize if they are not on the validators list
			if snap.Set.Includes(h.Miner) {
				continue
			}
		} else {
			// we can only remove if they are part of the validators list
			if !snap.Set.Includes(h.Miner) {
				continue
			}
		}

		count := snap.Count(func(v *Vote) bool {
			return v.Validator == validator && v.Address == h.Miner
		})
		if count > 1 {
			// there can only be one vote per validator per address
			return fmt.Errorf("more than one proposal per validator per address found")
		}
		if count == 0 {
			// cast the new vote since there is no one yet
			snap.Votes = append(snap.Votes, &Vote{
				Validator: validator,
				Address:   h.Miner,
				Authorize: authorize,
			})
		}

		// check the tally for the proposed validator
		tally := snap.Count(func(v *Vote) bool {
			return v.Address == h.Miner
		})

		if tally > snap.Set.Len()/2 {
			if authorize {
				// add the proposal to the validator list
				snap.Set.Add(h.Miner)
			} else {
				// remove the proposal from the validators list
				snap.Set.Del(h.Miner)

				// remove any votes casted by the removed validator
				snap.RemoveVotes(func(v *Vote) bool {
					return v.Validator == h.Miner
				})
			}

			// remove all the votes that promoted this validator
			snap.RemoveVotes(func(v *Vote) bool {
				return v.Address == h.Miner
			})
		}

		if err := saveSnap(h); err != nil {
			return nil
		}
	}

	// update the metadata
	i.store.updateLastBlock(headers[len(headers)-1].Number)
	return nil
}
type snapshotStore struct {
	lastNumber uint64
	lock       sync.Mutex
	list       snapshotSortedList
}
func (i *Ibft) setupTransport() error {
	// use a gossip protocol
	topic, err := i.network.NewTopic(ibftProto, &proto.MessageReq{})
	if err != nil {
		return err
	}

	err = topic.Subscribe(func(obj interface{}) {
		msg := obj.(*proto.MessageReq)

		if !i.isSealing() {
			// if we are not sealing we do not care about the messages
			// but we need to subscribe to propagate the messages
			return
		}

		// decode sender
		if err := validateMsg(msg); err != nil {
			i.logger.Error("failed to validate msg", "err", err)
			return
		}

		if msg.From == i.validatorKeyAddr.String() {
			// we are the sender, skip this message since we already
			// relay our own messages internally.
			return
		}
		i.pushMessage(msg)
	})
	if err != nil {
		return err
	}

	i.transport = &gossipTransport{topic: topic}
	return nil
}
message MessageReq {
    // type is the type of the message
    Type type = 1;

    // from is the address of the sender
    string from = 2;

    // seal is the committed seal if message is commit
    string seal = 3;

    // signature is the crypto signature of the message
    string signature = 4;

    // view is the view assigned to the message
    View view = 5;

    // hash of the locked block
    string digest = 6;

    // proposal is the rlp encoded block in preprepare messages
    google.protobuf.Any proposal = 7;

    enum Type {
        Preprepare = 0;
        Prepare = 1;
        Commit = 2;
        RoundChange = 3;
    }
}

message View {
    uint64 round = 1;
    uint64 sequence = 2;
}
func (i *Ibft) start() {
	// consensus always starts in SyncState mode in case it needs
	// to sync with other nodes.
	i.setState(SyncState)

	header := i.blockchain.Header()
	i.logger.Debug("current sequence", "sequence", header.Number+1)

	for {
		select {
		case <-i.closeCh:
			return
		default:
		}

		i.runCycle()
	}
}

func (i *Ibft) runCycle() {
	if i.state.view != nil {
		i.logger.Debug(
		    "cycle", 
		    "state", 
		    i.getState(), 
		    "sequence", 
		    i.state.view.Sequence, 
		    "round", 
		    i.state.view.Round,
	    )
	}

	switch i.getState() {
	case AcceptState:
		i.runAcceptState()

	case ValidateState:
		i.runValidateState()

	case RoundChangeState:
		i.runRoundChangeState()

	case SyncState:
		i.runSyncState()
	}
}

State

To truly understand how State works, you must understand some basic Ethereum concepts.

We highly recommend reading the State in Ethereum guide.

Overview

Now that we've familiarized ourselves with basic Ethereum concepts, the next overview should be easy.

We mentioned that the World state trie has all the Ethereum accounts that exist. These accounts are the leaves of the Merkle trie. Each leaf has encoded Account State information.

This enables the ZChains to get a specific Merkle trie for a specific point in time. For example, we can get the hash of the state at block 10.

The Merkle trie, at any point in time, is called a Snapshot.

We can have Snapshots for the state trie, or for the storage trie - they are basically the same. The only difference is in what the leaves represent:

  • In the case of the storage trie, the leaves contain arbitrary storage values, which the client cannot interpret further

  • In the case of the state trie, the leaves represent accounts

The Snapshot interface is defined as such:

The information that can be committed is defined by the Object struct:

The implementation for the Merkle trie is in the state/immutable-trie folder. state/immutable-trie/state.go implements the State interface.

state/immutable-trie/trie.go is the main Merkle trie object. It represents an optimized version of the Merkle trie, which reuses as much memory as possible.
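
For orientation, the flow of reading and committing data through these interfaces is sketched below; key hashing and account encoding are glossed over, and the parameter names are placeholders rather than real API surface.

// exampleStateFlow shows how a snapshot is opened at a root, queried, and committed.
func exampleStateFlow(st state.State, root types.Hash, key []byte, addr types.Address) error {
	// open the state at a specific root, e.g. the state root of block 10
	snap, err := st.NewSnapshotAt(root)
	if err != nil {
		return err
	}

	// a leaf lookup returns the raw encoded value stored under the key
	if raw, ok := snap.Get(key); ok {
		_ = raw
	}

	// committing a batch of modified accounts yields a new snapshot and state root
	newSnap, newRoot := snap.Commit([]*state.Object{
		{Address: addr, Balance: big.NewInt(1), Nonce: 1},
	})
	_, _ = newSnap, newRoot

	return nil
}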

Executor

state/executor.go includes all the information needed for the ZChains to decide how a block changes the current state. The implementation of ProcessBlock is located here.

The apply method does the actual state transition. The executor calls the EVM.

Runtime

When a state transition is executed, the main module that executes the state transition is the EVM (located in state/runtime/evm).

The dispatch table maps each opcode to its instruction.

The core logic that powers the EVM is the Run loop.

This is the main entry point for the EVM. It loops over the bytecode: it checks the current opcode, fetches the instruction, checks whether it can be executed, consumes gas, and executes the instruction, until it either fails or stops.

JSON RPC

Overview

The JSON RPC module implements the JSON RPC API layer, something that dApp developers use to interact with the blockchain.

It includes support for standard JSON-RPC endpoints, as well as WebSocket endpoints.

Blockchain Interface

The ZChains uses the blockchain interface to define all the methods that the JSON RPC module needs to use, in order to deliver its endpoints.

The blockchain interface is implemented by the server. It is the base implementation that's passed into the JSON RPC layer.

ETH Endpoints

All the standard JSON RPC endpoints are implemented in jsonrpc/eth_endpoint.go:

Filter Manager

The Filter Manager is a service that runs alongside the JSON RPC server.

It provides support for filtering blocks on the blockchain. Specifically, it includes both a log and a block level filter.

The Filter Manager relies heavily on Subscription Events, mentioned in the Blockchain section.

Filter Manager events get dispatched in the Run method:

📜 Resources

Ethereum JSON-RPC

Protocol

Overview

The Protocol module contains the logic for the synchronization protocol.

The ZChains uses libp2p as the networking layer, and on top of that runs gRPC.
GRPC for Other Nodes

Status Object

service V1 {
    // Returns status information regarding the specific point in time
    rpc GetCurrent(google.protobuf.Empty) returns (V1Status);
    
    // Returns any type of object (Header, Body, Receipts...)
    rpc GetObjectsByHash(HashRequest) returns (Response);
    
    // Returns a range of headers
    rpc GetHeaders(GetHeadersRequest) returns (Response);
    
    // Watches what new blocks get included
    rpc Watch(google.protobuf.Empty) returns (stream V1Status);
}
message V1Status {
    string difficulty = 1;
    string hash = 2;
    int64 number = 3;
}
type State interface {
    // Gets a snapshot for a specific hash
	NewSnapshotAt(types.Hash) (Snapshot, error)
	
	// Gets the latest snapshot
	NewSnapshot() Snapshot
	
	// Gets the codeHash
	GetCode(hash types.Hash) ([]byte, bool)
}
type Snapshot interface {
    // Gets a specific value for a leaf
	Get(k []byte) ([]byte, bool)
	
	// Commits new information
	Commit(objs []*Object) (Snapshot, []byte)
}
// Object is the serialization of the radix object
type Object struct {
	Address  types.Address
	CodeHash types.Hash
	Balance  *big.Int
	Root     types.Hash
	Nonce    uint64
	Deleted  bool

	DirtyCode bool
	Code      []byte

	Storage []*StorageObject
}
func (t *Transition) apply(msg *types.Transaction) ([]byte, uint64, bool, error) {
	// check if there is enough gas in the pool
	if err := t.subGasPool(msg.Gas); err != nil {
		return nil, 0, false, err
	}

	txn := t.state
	s := txn.Snapshot()

	gas, err := t.preCheck(msg)
	if err != nil {
		return nil, 0, false, err
	}
	if gas > msg.Gas {
		return nil, 0, false, errorVMOutOfGas
	}

	gasPrice := new(big.Int).SetBytes(msg.GetGasPrice())
	value := new(big.Int).SetBytes(msg.Value)

	// Set the specific transaction fields in the context
	t.ctx.GasPrice = types.BytesToHash(msg.GetGasPrice())
	t.ctx.Origin = msg.From

	var subErr error
	var gasLeft uint64
	var returnValue []byte

	if msg.IsContractCreation() {
		_, gasLeft, subErr = t.Create2(msg.From, msg.Input, value, gas)
	} else {
		txn.IncrNonce(msg.From)
		returnValue, gasLeft, subErr = t.Call2(msg.From, *msg.To, msg.Input, value, gas)
	}
	
	if subErr != nil {
		if subErr == runtime.ErrNotEnoughFunds {
			txn.RevertToSnapshot(s)
			return nil, 0, false, subErr
		}
	}

	gasUsed := msg.Gas - gasLeft
	refund := gasUsed / 2
	if refund > txn.GetRefund() {
		refund = txn.GetRefund()
	}

	gasLeft += refund
	gasUsed -= refund

	// refund the sender
	remaining := new(big.Int).Mul(new(big.Int).SetUint64(gasLeft), gasPrice)
	txn.AddBalance(msg.From, remaining)

	// pay the coinbase
	coinbaseFee := new(big.Int).Mul(new(big.Int).SetUint64(gasUsed), gasPrice)
	txn.AddBalance(t.ctx.Coinbase, coinbaseFee)

	// return gas to the pool
	t.addGasPool(gasLeft)

	return returnValue, gasUsed, subErr != nil, nil
}
func init() {
	// unsigned arithmetic operations
	register(STOP, handler{opStop, 0, 0})
	register(ADD, handler{opAdd, 2, 3})
	register(SUB, handler{opSub, 2, 3})
	register(MUL, handler{opMul, 2, 5})
	register(DIV, handler{opDiv, 2, 5})
	register(SDIV, handler{opSDiv, 2, 5})
	register(MOD, handler{opMod, 2, 5})
	register(SMOD, handler{opSMod, 2, 5})
	register(EXP, handler{opExp, 2, 10})

	...

	// jumps
	register(JUMP, handler{opJump, 1, 8})
	register(JUMPI, handler{opJumpi, 2, 10})
	register(JUMPDEST, handler{opJumpDest, 0, 1})
}

// Run executes the virtual machine
func (c *state) Run() ([]byte, error) {
	var vmerr error

	codeSize := len(c.code)
	
	for !c.stop {
		if c.ip >= codeSize {
			c.halt()
			break
		}

		op := OpCode(c.code[c.ip])

		inst := dispatchTable[op]
		
		if inst.inst == nil {
			c.exit(errOpCodeNotFound)
			break
		}
		
		// check if the depth of the stack is enough for the instruction
		if c.sp < inst.stack {
			c.exit(errStackUnderflow)
			break
		}
		
		// consume the gas of the instruction
		if !c.consumeGas(inst.gas) {
			c.exit(errOutOfGas)
			break
		}

		// execute the instruction
		inst.inst(c)

		// check if stack size exceeds the max size
		if c.sp > stackSize {
			c.exit(errStackOverflow)
			break
		}
		
		c.ip++
	}

	if err := c.err; err != nil {
		vmerr = err
	}
	
	return c.ret, vmerr
}
type blockchainInterface interface {
	// Header returns the current header of the chain (genesis if empty)
	Header() *types.Header

	// GetReceiptsByHash returns the receipts for a hash
	GetReceiptsByHash(hash types.Hash) ([]*types.Receipt, error)

	// Subscribe subscribes for chain head events
	SubscribeEvents() blockchain.Subscription

	// GetHeaderByNumber returns the header by number
	GetHeaderByNumber(block uint64) (*types.Header, bool)

	// GetAvgGasPrice returns the average gas price
	GetAvgGasPrice() *big.Int

	// AddTx adds a new transaction to the tx pool
	AddTx(tx *types.Transaction) error

	// State returns a reference to the state
	State() state.State

	// BeginTxn starts a transition object
	BeginTxn(parentRoot types.Hash, header *types.Header) (*state.Transition, error)

	// GetBlockByHash gets a block using the provided hash
	GetBlockByHash(hash types.Hash, full bool) (*types.Block, bool)

	// ApplyTxn applies a transaction object to the blockchain
	ApplyTxn(header *types.Header, txn *types.Transaction) ([]byte, bool, error)

	stateHelperInterface
}
jsonrpc/eth_endpoint.go
type Filter struct {
	id string

	// block filter
	block *headElem

	// log cache
	logs []*Log

	// log filter
	logFilter *LogFilter

	// index of the filter in the timer array
	index int

	// next time to timeout
	timestamp time.Time

	// websocket connection
	ws wsConn
}


type FilterManager struct {
	logger hclog.Logger

	store   blockchainInterface
	closeCh chan struct{}

	subscription blockchain.Subscription

	filters map[string]*Filter
	lock    sync.Mutex

	updateCh chan struct{}
	timer    timeHeapImpl
	timeout  time.Duration

	blockStream *blockStream
}
func (f *FilterManager) Run() {

	// watch for new events in the blockchain
	watchCh := make(chan *blockchain.Event)
	go func() {
		for {
			evnt := f.subscription.GetEvent()
			if evnt == nil {
				return
			}
			watchCh <- evnt
		}
	}()

	var timeoutCh <-chan time.Time
	for {
		// check for the next filter to be removed
		filter := f.nextTimeoutFilter()
		if filter == nil {
			timeoutCh = nil
		} else {
			timeoutCh = time.After(filter.timestamp.Sub(time.Now()))
		}

		select {
		case evnt := <-watchCh:
			// new blockchain event
			if err := f.dispatchEvent(evnt); err != nil {
				f.logger.Error("failed to dispatch event", "err", err)
			}

		case <-timeoutCh:
			// timeout for filter
			if !f.Uninstall(filter.id) {
				f.logger.Error("failed to uninstall filter", "id", filter.id)
			}

		case <-f.updateCh:
			// there is a new filter, reset the loop to start the timeout timer

		case <-f.closeCh:
			// stop the filter manager
			return
		}
	}
}

Types

Overview

The Types module implements core object types, such as:

  • Address

  • Hash

  • Header

  • lots of helper functions

RLP Encoding / Decoding

Unlike clients such as GETH, the ZChains doesn't use reflection for RLP encoding. The preference was to avoid reflection because it introduces new problems, such as performance degradation and harder scaling.

The Types module provides an easy-to-use interface for RLP marshaling and unmarshalling, using the FastRLP package.

Marshaling is done through the MarshalRLPWith and MarshalRLPTo methods. The analogous methods exist for unmarshalling.
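
The general shape of these methods is sketched below with a simplified stand-in type; the real methods in rlp_marshal.go cover the full field sets of the actual types.

// exampleHeader is a stripped-down stand-in for the real Header type.
type exampleHeader struct {
	ParentHash [32]byte
	Number     uint64
	GasLimit   uint64
}

// MarshalRLPWith appends the fields to an arena-allocated RLP array,
// avoiding reflection entirely.
func (h *exampleHeader) MarshalRLPWith(arena *fastrlp.Arena) *fastrlp.Value {
	v := arena.NewArray()
	v.Set(arena.NewBytes(h.ParentHash[:]))
	v.Set(arena.NewUint(h.Number))
	v.Set(arena.NewUint(h.GasLimit))
	// ... remaining fields are appended in the same way

	return v
}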

By manually defining these methods, the ZChains doesn't need to use reflection. In rlp_marshal.go, you can find methods for marshaling:

  • Bodies

  • Blocks

  • Headers

  • Receipts

  • Logs

  • Transactions

Other modules

Crypto

The Crypto module contains crypto utility functions.

Chain

The Chain module contains chain parameters (active forks, consensus engine, etc.)

  • chains - Predefined chain configurations (mainnet, goerli, ibft)

Helper

The Helper module contains helper packages.

  • dao - Dao utils

  • enode - Enode encoding/decoding function

  • hex - Hex encoding/decoding functions

  • ipc - IPC connection functions

  • keccak - Keccak functions

  • rlputil - Rlp encoding/decoding helper function

Command

The Command module contains interfaces for CLI commands.

TxPool

Overview

The TxPool module represents the transaction pool implementation, where transactions are added from different parts of the system. The module also exposes several useful features for node operators, which are covered below.

Operator Commands

Node operators can query these GRPC endpoints, as described in the CLI Commands section.

    Processing Transactions

    The addImpl method is the bread and butter of the TxPool module. It is the central place where transactions are added in the system, being called from the GRPC service, JSON RPC endpoints, and whenever the client receives a transaction through the gossip protocol.

    It takes in as an argument ctx, which just denotes the context from which the transactions are being added (GRPC, JSON RPC...). The other parameter is the list of transactions to be added to the pool.

    The key thing to note here is the check for the From field within the transaction:

    • If the From field is empty, it is regarded as an unencrypted/unsigned transaction. These kinds of transactions are only accepted in development mode

    • If the From field is not empty, that means that it's a signed transaction, so signature verification takes place

    After all these validations, the transactions are considered to be valid.

    Data structures

    The fields in the TxPool object that can cause confusion are the queue and sorted lists.

    • queue - Heap implementation of a sorted list of account transactions (by nonce)

    • sorted - Sorted list for all the current promoted transactions (all executable transactions). Sorted by gas price

    Gas limit error management

    Whenever you submit a transaction, there are three ways it can be processed by the TxPool.

    1. All pending transactions can fit in a block

    2. One or more pending transactions can not fit in a block

    3. One or more pending transactions will never fit in a block

    Here, the word fit means that the transaction has a gas limit that is lower than the remaining gas in the TxPool.

    The first scenario does not produce any error.

    Second scenario

    • The TxPool remaining gas is set to the gas limit of the last block, let's say 5000

    • A first transaction is processed and consumes 3000 gas of the TxPool

      • The remaining gas of the TxPool is now 2000

    • A second transaction, identical to the first one (it also consumes 3000 units of gas), is submitted

    • Since the remaining gas of the TxPool is lower than the transaction gas, it cannot be processed in the current block

      • It is put back into a pending transaction queue so that it can be processed in the next block

    • The first block is written, let's call it block #1

    • The TxPool remaining gas is set to the parent block - block #1's gas limit

    • The transaction which was put back into the TxPool pending transaction queue is now processed and written in the block

      • The TxPool remaining gas is now 2000

    • The second block is written

    • ...

    TxPool Error scenario #1

    Third scenario

    • The TxPool remaining gas is set to the gas limit of the last block, let's say 5000

    • A first transaction is processed and consumes 3000 gas of the TxPool

      • The remaining gas of the TxPool is now 2000

    • A second transaction, with a gas field set to 6000 is submitted

    • Since the block gas limit is lower than the transaction gas, this transaction is discarded

      • It will never be able to fit in a block

    • The first block is written

    • ...

    TxPool Error scenario #2

    This happens whenever you get the following error:

    Block Gas Target

    There are situations when nodes want to keep the block gas limit below or at a certain target on a running chain.

    The node operator can set the target gas limit on a specific node, which will try to apply this limit to newly created blocks. If the majority of the other nodes also have a similar (or same) target gas limit set, then the block gas limit will always hover around that block gas target, slowly progressing towards it (at max 1/1024 * parent block gas limit) as new blocks are created.

    Example scenario

    • The node operator sets the block gas limit for a single node to be 5000

    • Other nodes are configured to be 5000 as well, apart from a single node which is configured to be 7000

    • When the nodes who have their gas target set to 5000 become proposers, they will check to see if the gas limit is already at the target

    • If the gas limit is not at the target (it is greater / lower), the proposer node will move the block gas limit by at most (1/1024 of the parent gas limit) in the direction of the target, as sketched in the code after this list

      1. Ex: parentGasLimit = 4500 and blockGasTarget = 5000, the proposer will calculate the gas limit for the new block as 4504.39453125 (4500 + 4500/1024)

      2. Ex: parentGasLimit = 5500 and blockGasTarget = 5000, the proposer will calculate the gas limit for the new block as 5494.62890625 (5500 - 5500/1024)

    • This ensures that the block gas limit in the chain will be kept at the target, because the single proposer who has the target configured to 7000 cannot advance the limit much, and the majority of the nodes who have it set at 5000 will always attempt to keep it there
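
    A rough sketch of that adjustment is shown below, using integer arithmetic (the examples above use exact division, so the numbers differ slightly); the function name is illustrative and not taken from the codebase.

    // nextGasLimit moves the parent gas limit toward the configured target
    // by at most parentGasLimit/1024 per block.
    func nextGasLimit(parentGasLimit, blockGasTarget uint64) uint64 {
    	delta := parentGasLimit / 1024

    	switch {
    	case parentGasLimit < blockGasTarget:
    		if parentGasLimit+delta > blockGasTarget {
    			return blockGasTarget
    		}
    		return parentGasLimit + delta
    	case parentGasLimit > blockGasTarget:
    		if parentGasLimit-delta < blockGasTarget {
    			return blockGasTarget
    		}
    		return parentGasLimit - delta
    	default:
    		return parentGasLimit
    	}
    }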

    Minimal

    Overview

    As mentioned before, ZChains is a set of different modules, all connected to each other. The Blockchain module, for example, is connected to the State, as well as to Synchronization, which pipes new blocks into the Blockchain.

    Minimal is the cornerstone for these inter-connected modules. It acts as a central hub for all the services that run on the ZChains.

    Startup Magic

    Among other things, Minimal is responsible for:

    • Setting up data directories

    • Creating a keystore for libp2p communication

    • Creating storage

    • Setting up consensus

    • Setting up the blockchain object with GRPC, JSON RPC, and Synchronization

    2021-11-04T15:41:07.665+0100 [ERROR] polygon.consensus.dev: failed to write transaction: transaction's gas limit exceeds block gas limit
    service TxnPoolOperator {
        // Status returns the current status of the pool
        rpc Status(google.protobuf.Empty) returns (TxnPoolStatusResp);
    
        // AddTxn adds a local transaction to the pool
        rpc AddTxn(AddTxnReq) returns (google.protobuf.Empty);
    
        // Subscribe subscribes for new events in the txpool
        rpc Subscribe(google.protobuf.Empty) returns (stream TxPoolEvent);
    }
    
    // AddTx adds a new transaction to the pool
    func (t *TxPool) AddTx(tx *types.Transaction) error {
    	if err := t.addImpl("addTxn", tx); err != nil {
    		return err
    	}
    
    	// broadcast the transaction only if network is enabled
    	// and we are not in dev mode
    	if t.topic != nil && !t.dev {
    		txn := &proto.Txn{
    			Raw: &any.Any{
    				Value: tx.MarshalRLP(),
    			},
    		}
    		if err := t.topic.Publish(txn); err != nil {
    			t.logger.Error("failed to topic txn", "err", err)
    		}
    	}
    
    	if t.NotifyCh != nil {
    		select {
    		case t.NotifyCh <- struct{}{}:
    		default:
    		}
    	}
    	return nil
    }
    
    func (t *TxPool) addImpl(ctx string, txns ...*types.Transaction) error {
    	if len(txns) == 0 {
    		return nil
    	}
    
    	from := txns[0].From
    	for _, txn := range txns {
    		// Since this is a single point of inclusion for new transactions both
    		// to the promoted queue and pending queue we use this point to calculate the hash
    		txn.ComputeHash()
    
    		err := t.validateTx(txn)
    		if err != nil {
    			return err
    		}
    
    		if txn.From == types.ZeroAddress {
    			txn.From, err = t.signer.Sender(txn)
    			if err != nil {
    				return fmt.Errorf("invalid sender")
    			}
    			from = txn.From
    		} else {
    			// only if we are in dev mode we can accept
    			// a transaction without validation
    			if !t.dev {
    				return fmt.Errorf("cannot accept non-encrypted txn")
    			}
    		}
    
    		t.logger.Debug("add txn", "ctx", ctx, "hash", txn.Hash, "from", from)
    	}
    
    	txnsQueue, ok := t.queue[from]
    	if !ok {
    		stateRoot := t.store.Header().StateRoot
    
    		// initialize the txn queue for the account
    		txnsQueue = newTxQueue()
    		txnsQueue.nextNonce = t.store.GetNonce(stateRoot, from)
    		t.queue[from] = txnsQueue
    	}
    	for _, txn := range txns {
    		txnsQueue.Add(txn)
    	}
    
    	for _, promoted := range txnsQueue.Promote() {
    		t.sorted.Push(promoted)
    	}
    	return nil
    }
    // TxPool is a pool of transactions
    type TxPool struct {
    	logger hclog.Logger
    	signer signer
    
    	store      store
    	idlePeriod time.Duration
    
    	queue map[types.Address]*txQueue
    	sorted *txPriceHeap
    
    	// network stack
    	network *network.Server
    	topic   *network.Topic
    
    	sealing  bool
    	dev      bool
    	NotifyCh chan struct{}
    
    	proto.UnimplementedTxnPoolOperatorServer
    }
    func NewServer(logger hclog.Logger, config *Config) (*Server, error) {
    	m := &Server{
    		logger: logger,
    		config: config,
    		chain:      config.Chain,
    		grpcServer: grpc.NewServer(),
    	}
    
    	m.logger.Info("Data dir", "path", config.DataDir)
    
    	// Generate all the paths in the dataDir
    	if err := setupDataDir(config.DataDir, dirPaths); err != nil {
    		return nil, fmt.Errorf("failed to create data directories: %v", err)
    	}
    
    	// Get the private key for the node
    	keystore := keystore.NewLocalKeystore(filepath.Join(config.DataDir, "keystore"))
    	key, err := keystore.Get()
    	if err != nil {
    		return nil, fmt.Errorf("failed to read private key: %v", err)
    	}
    	m.key = key
    
    	storage, err := leveldb.NewLevelDBStorage(filepath.Join(config.DataDir, "blockchain"), logger)
    	if err != nil {
    		return nil, err
    	}
    	m.storage = storage
    
    	// Setup consensus
    	if err := m.setupConsensus(); err != nil {
    		return nil, err
    	}
    
    	stateStorage, err := itrie.NewLevelDBStorage(filepath.Join(m.config.DataDir, "trie"), logger)
    	if err != nil {
    		return nil, err
    	}
    
    	st := itrie.NewState(stateStorage)
    	m.state = st
    
    	executor := state.NewExecutor(config.Chain.Params, st)
    	executor.SetRuntime(precompiled.NewPrecompiled())
    	executor.SetRuntime(evm.NewEVM())
    
    	// Blockchain object
    	m.blockchain, err = blockchain.NewBlockchain(logger, storage, config.Chain, m.consensus, executor)
    	if err != nil {
    		return nil, err
    	}
    
    	executor.GetHash = m.blockchain.GetHashHelper
    
    	// Setup sealer
    	sealerConfig := &sealer.Config{
    		Coinbase: crypto.PubKeyToAddress(&m.key.PublicKey),
    	}
    	m.Sealer = sealer.NewSealer(sealerConfig, logger, m.blockchain, m.consensus, executor)
    	m.Sealer.SetEnabled(m.config.Seal)
    
    	// Setup the libp2p server
    	if err := m.setupLibP2P(); err != nil {
    		return nil, err
    	}
    
    	// Setup the GRPC server
    	if err := m.setupGRPC(); err != nil {
    		return nil, err
    	}
    
    	// Setup jsonrpc
    	if err := m.setupJSONRPC(); err != nil {
    		return nil, err
    	}
    
    	// Setup the syncer protocol
    	m.syncer = protocol.NewSyncer(logger, m.blockchain)
    	m.syncer.Register(m.libp2pServer.GetGRPCServer())
    	m.syncer.Start()
    
    	// Register the libp2p GRPC endpoints
    	proto.RegisterHandshakeServer(m.libp2pServer.GetGRPCServer(), &handshakeService{s: m})
    
    	m.libp2pServer.Serve()
    	return m, nil
    }