Things to Know Before You Select A Crypto Wallet

For centuries, we have used physical wallets to carry identity cards, money in the form of gold, silver, and other metal coins, and fiat paper currency; more recently, plastic cards joined them. Now digital modes are gaining traction. Banks and other financial institutions offer digital wallets to ease peer-to-peer transactions, bill payments, and money transfers. A closer look reveals that digital wallets are changing how people move money and assets. A wallet is essentially a vendor-defined identity mechanism that lets us maintain our cash and assets, making access to them both easier and more secure.

Crypto wallets do the same thing. They help us identify ourselves in the blockchain world and maintain digital assets like coins and NFTs. We have been developing blockchain networks and applications at Talentica for the past few years and have gained significant market understanding through research and hands-on experience. You can read about the blockchain framework in our blog, Simple Blockchain Framework: An Introduction to Block & Transaction Structure – Talentica.

The blockchain ecosystem is incomplete without a crypto wallet. Let's look at crypto wallets in detail.


What Is a Crypto Wallet?

A crypto wallet holds the identity and information we use to connect with decentralized applications and assets. To interact with blockchain networks, DApps, or cryptocurrencies, we need a crypto wallet. Its software runs each time you interact with a blockchain application, whether that is receiving or sending a coin, updating or fetching an NFT, and so on.

Types Of Crypto Wallets

There are two main categories of crypto wallets: the “cold wallet” and the “hot wallet.”

Cold Wallet 

A cold wallet is a wallet with no connection to the internet. It is a hardware device that stores our identity and network connection details. To use it, we connect the device to the application; the details stored inside the device never leave it. To make a transaction, you create the transaction and pass it to the wallet, which returns the signed transaction. This is the most secure form of crypto wallet implementation, but it is costly, offers a poorer user experience, and requires keeping the device physically safe.

Hot Wallet

A hot wallet is a wallet that is connected to the internet. It is software that holds our identity and network connection details, and we run this code whenever we interact with the application. A software wallet can be built in many ways to support blockchain applications; the most common implementations are the custodial wallet and the non-custodial wallet. Hot wallets can be used through different clients, such as a web browser, a desktop client, or a mobile client.

Hot wallets have pros and cons relative to cold wallets. Because they are connected to the internet, they are more vulnerable to hacks; on the other hand, they are easier to access and more user-friendly.

Now let's dive deeper into hot wallets. As we saw earlier, they can be built as either custodial or non-custodial wallets, so let's understand each in detail.


Custodial Wallet vs. Non-Custodial Wallet

Custodial Wallet 

A custodial wallet is a crypto wallet where the vendor keeps the private keys. The third party has complete control over the keys: it grants users the right to transact on the application but actually transacts on their behalf.

In a custodial wallet, the private key is secured by the vendor, so this design has a single point of failure. If a malicious hacker gains access to the application database, they could obtain the information of every single user, which makes this implementation highly prone to attack. The vendor maintains the mapping between the private key and the end-user's belongings, so each action can be linked back to the user. Users receive login credentials in a standard format to access the application, and if they lose those credentials, the vendor provides recovery functionality. The end-user does not have to maintain strict security measures for their credentials. The result is a better user experience with less freedom over the data.

The custodial wallet is appropriate for use cases like crypto exchanges: the vendor offers a better user experience, earns brokerage on each transaction, and retains users by holding their wallet key pairs.

Non-Custodial Wallet 

A non-custodial wallet is a wallet where the end-user holds the private key. The third party has no control over the user's identity: it cannot restrict the user's actions or transact on the user's behalf. Users are solely responsible for all security measures around the private key, such as storing it safely and never sharing it with anyone.

To make a malicious transaction, a hacker now has to attack the user directly; there is no single point of failure, which makes this a more secure implementation than the custodial wallet. Users can transact anonymously, since they own the private-public key pair and their real-world identity is not linked to the public key, and they do not depend on the application to make transactions. This freedom comes with great responsibility: if the end-user loses the private key, the vendor has no way to help recover it. Losing the private key means losing all of your assets.

The non-custodial wallet is the proper form for a decentralized application, and almost every application supports it. MetaMask, Bitski, WalletConnect, and Fortmatic are live examples.

Some applications support non-custodial wallets for identity management but still need to know who owns a public key, usually because of their use case. For example, an application in the supply chain management domain may require the transactor's real-world identity and their role or designation in the organization.

Developers can use a digital certificate to bind user information to the public key. X.509 is a standard format for public-key certificates; such a certificate binds the public key to a real-world user identity. A user can create any number of identities, but all of them link back to the user via certificates. Trust and ownership are established through a chain of digital certificates signed by well-known trusted third parties (certificate authorities), whose root certificate is self-signed. This is useful in use cases where we need to build an audit system on top of DApps.
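As a rough illustration of how a certificate chain links identities back to a trusted root, here is a toy Python sketch. Real X.509 verification uses asymmetric signatures and a certificate parser; hashes stand in for signatures here, and every name is made up.

```python
import hashlib

def cert_fp(subject: str, pubkey: str, issuer_fp: str) -> str:
    """Toy certificate fingerprint: binds a subject and public key to its issuer."""
    return hashlib.sha256(f"{subject}|{pubkey}|{issuer_fp}".encode()).hexdigest()

# Build a chain: self-signed root -> organization CA -> end-user identity.
root = cert_fp("RootCA", "root-pub", "self")            # self-signed trust anchor
org = cert_fp("OrgCA", "org-pub", root)
alice = cert_fp("alice@example.org", "alice-pub", org)

def verify(chain, trusted_root_fp):
    """Recompute fingerprints from the trusted root down to the leaf."""
    fp = trusted_root_fp
    for subject, pubkey in chain:
        fp = cert_fp(subject, pubkey, fp)
    return fp

# The leaf checks out only if every link derives from the trusted root.
assert verify([("OrgCA", "org-pub"), ("alice@example.org", "alice-pub")], root) == alice
```

Tampering with any intermediate link (say, swapping the org CA's key) changes every fingerprint below it, so the leaf no longer verifies against the root.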

A typical pattern has emerged across blockchain-based enterprises. They generally adopt consortium-based private blockchain networks to improve security and keep data out of the public domain. During the early stages of development, they create a custodial wallet in their ecosystem and then gradually transition to non-custodial hot wallets. This process also helps them understand end-user preferences, which they can use to develop the preferred wallet functionality over time.


Final Words

Each wallet type offers a different balance of security and freedom for application development and the end-user experience. Choosing a wallet type for an application depends on its use case, and since the wallet directly impacts end-users, we must consider their point of view when selecting a specific implementation.

In my next blog, I will talk about the custodial wallet’s implementation details. Till then, stay connected and stay safe!

Blockchain Interoperability Solution: How Chainbridge Can Be A Way Out?

Blockchain technology offers promising results. Its potential for improving business processes, providing transactional transparency and security in the value chain, and reducing operational costs is obvious.

The past few years have seen continuous growth in blockchain-related projects, which signifies that developers are leveraging blockchain's capabilities by thinking outside the box. At the same time, we have to understand that no single solution addresses all blockchain needs at once.

Every day, the number of solutions relying on blockchain technology increases, but the technology's evolution is taking a hit due to the lack of interoperability among blockchain solutions. Many interoperability solutions are available, each with its pros and cons. I have used one such solution, Chainbridge, an extensible cross-chain communication protocol that currently supports bridging between EVM- and Substrate-based chains.

In this blog, we'll discuss a specific supply chain management use case that relies on blockchain interoperability between Substrate and Ethereum chains.

The Use Case

Blockchain holds great promise in the area of supply chain management. It can improve supply chains by enabling faster and more cost-efficient product delivery, enhancing product traceability, improving coordination between partners, and aiding access to financing.

The use case is a simplified version of champagne bottle supply chain management, built to learn and showcase Substrate development and the interoperability between Substrate and Ethereum blockchains. Using this application, the end consumer can track and verify the authenticity of a champagne bottle.

Every bottle created will have a unique id, which we will use to track it further down the process. The use-case source code can be found here. Now let’s look at the list of actors and their respective roles in the system.

Actors and Roles

For simplicity, I will go with four types of actors. Their roles are as follows.

Manufacturer
  • Bottle Creation – Creates and registers new bottles in the system.
  • Shipment Registration – Creates a new shipment, assigns a carrier to complete the delivery, and provides the retailer details and the bottles to be delivered.

Carrier
  • Pickup Shipment – Picks up the shipment from the manufacturer.
  • Deliver Shipment – Delivers the shipment to the retailer.

Retailer
  • Sell Bottle – Sells the bottles to the end customer.

Customer
  • Buy Bottles – Buys bottles from the retailer.

System Modules or Substrate Pallets

The entire supply chain process of the application is built with two modules.

Registrar: The registrar pallet is responsible for registering and keeping a record of the various actors and bottles in the system. It exposes functions like registerManufacturer(), registerCarrier(), etc., to register members of a particular type. A manufacturer can invoke the registerBottle() function to register a new bottle with a unique id in the system.

Bottle Tracking: This module tracks the bottle shipment process. The functions registerShipment() and trackShipment() track a bottle from shipment registration to delivery to the retailer. For the final sale, the sellToCustomer() function is called, which transfers the bottle's ownership to the end customer.
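The two pallets' responsibilities can be mirrored in a short, illustrative Python model. This is not the actual Substrate code; names and types are simplified for the sketch.

```python
class SupplyChain:
    """Toy model of the registrar and bottle-tracking pallets."""
    def __init__(self):
        self.owners = {}     # bottle_id -> current owner account
        self.shipments = {}  # shipment_id -> shipment record

    def register_bottle(self, manufacturer, bottle_id):
        assert bottle_id not in self.owners, "bottle id must be unique"
        self.owners[bottle_id] = manufacturer

    def register_shipment(self, shipment_id, carrier, retailer, bottles):
        self.shipments[shipment_id] = {
            "carrier": carrier, "retailer": retailer,
            "bottles": bottles, "status": "Registered",
        }

    def track_shipment(self, shipment_id, operation):
        s = self.shipments[shipment_id]
        if operation == "Pickup":
            s["status"] = "InTransit"
        elif operation == "Deliver":
            s["status"] = "Delivered"
            for b in s["bottles"]:          # ownership moves to the retailer
                self.owners[b] = s["retailer"]

    def sell_to_customer(self, retailer, customer, bottles):
        for b in bottles:
            assert self.owners[b] == retailer, "only the owner can sell"
            self.owners[b] = customer

# Walk one bottle through the whole flow.
chain = SupplyChain()
chain.register_bottle("manu", "bottle-1")
chain.register_shipment("ship-1", "carrier", "retailer", ["bottle-1"])
chain.track_shipment("ship-1", "Pickup")
chain.track_shipment("ship-1", "Deliver")
chain.sell_to_customer("retailer", "customer", ["bottle-1"])
assert chain.owners["bottle-1"] == "customer"
```

Each method corresponds to one of the pallet extrinsics described above, and the ownership map plays the role of on-chain storage.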

Chainbridge pallets –

Chainbridge, example-pallet, and example-erc721: Chainbridge provides these three pallets for interchain communication. Follow Chainbridge-substrate for their documentation.

Process Flow

Let's discuss the entire process from bottle creation to the end customer sale, step by step. For each step, we'll look at the external function used to interact with the application.

Member Registration

First of all, we have to register the various actors in the system. There are four types of actors, so we use four functions from the registrar pallet: registerManufacturer(), registerCarrier(), registerRetailer(), and registerCustomer(). All four functions have the same signature; one is explained below.

Function Signature:

registerManufacturer()

It takes no arguments. The caller of the function is registered as a manufacturer. There can be any number of manufacturers, carriers, etc. If there are multiple manufacturers, each can invoke this function separately to register themselves. The same goes for carriers, retailers, and customers.

Bottle Creation

A manufacturer can register a new bottle.

Function Signature:

registerBottle(id: BottleId)

The manufacturer will invoke this method from the registrar pallet, providing the BottleId to be registered.

Shipment Registration

The shipment will be registered by the manufacturer.

Function Signature:

registerShipment(id: ShipmentId, carrier: AccountId, retailer: AccountId, bottles: Vec<BottleId>)

To register a shipment, the manufacturer provides a unique ShipmentId, the account id of the carrier who will carry the package, the account id of the retailer to which the shipment is to be delivered, and the list of bottle ids to be delivered.

Shipment Pickup and Delivery

The assigned carrier will pick up the shipment from the manufacturer and deliver it to the retailer.

Function Signature:

trackShipment(id: ShipmentId, operation: ShipmentOperation)

To track a shipment, the carrier provides the ShipmentId and the operation it wants to perform on the shipment. ShipmentOperation is an enum with two values: Pickup and Deliver. After the Deliver operation completes, the bottles are owned by the retailer.

End Customer Sell and Payments

In all the previous steps, it is assumed that payments are made off-chain. To sell bottles to the end customer, I assume the process is initiated off-chain, where both parties (customer and retailer) agree on how many bottles will be sold and the total amount. Once the customer transfers the agreed amount to the retailer's Substrate account, the off-chain system automatically invokes a method on the Substrate chain, sellToCustomer(), which only transfers ownership of the bottles to the customer.

Function Signature:

sellToCustomer(customer: AccountId, bottles: Vec<BottleId>)

This function will be invoked using the retailer’s account, providing the customer’s account id and the bottles to be sold.

The customer can transfer the amount to the retailer's Substrate account by any means. But for our interchain operability use case, let us assume the customer holds some Ethereum smart contract tokens that carry a value equivalent to the Substrate native token, and wants to transfer these Ethereum tokens directly to the retailer's Substrate account. This interchain communication can be achieved using Chainbridge, a cross-chain communication protocol.


Chainbridge is a modular multi-directional blockchain bridge. Currently, it supports interoperability between Ethereum and Substrate-based chains. There are three main roles:

    • Listener: extracts events from a source chain and constructs a message.
    • Router: passes the message from the Listener to the Writer.
    • Writer: interprets messages and submits transactions to the target chain.

Both sides of the bridge have a set of smart contracts (or pallets in the substrate), where each has a specific function:

    • Bridge – Users and relayers interact through the bridge. It starts a transaction on the source chain, executes proposals on the target chain, and delegates calls to the handler contracts for deposits.
    • Handler – Validates the parameters provided by the user and creates a deposit/execution record.
    • Target – As the name suggests, this is the contract we interact with on each side of the bridge.

The diagram below summarizes the Chainbridge workflow:

Chainbridge currently relies on trusted relayers. However, it has mechanisms to stop power abuse and mishandling of funds by any single relayer.
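The listener/router/writer division of labor can be sketched as a toy Python pipeline. This is only an illustration of the three roles, not Chainbridge's actual implementation; all event fields are hypothetical.

```python
# Pretend source-chain log: one fungible deposit event.
source_events = [{"resource": "erc20", "recipient": "alice", "amount": 100}]
target_balances = {}  # stand-in for target-chain state

def listener(events):
    # Listener: turn raw source-chain events into bridge messages.
    return [{"kind": "fungible", **e} for e in events]

def router(messages, write):
    # Router: pass each message from the Listener to the Writer.
    for msg in messages:
        write(msg)

def writer(msg):
    # Writer: interpret the message and apply it on the target chain.
    if msg["kind"] == "fungible":
        target_balances[msg["recipient"]] = (
            target_balances.get(msg["recipient"], 0) + msg["amount"])

router(listener(source_events), writer)
assert target_balances["alice"] == 100
```

In the real bridge, the writer submits a signed transaction to the target chain and a relayer threshold gates execution; here the state update stands in for that step.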

Chainbridge Setup


Starting Local Chains

Follow the instructions at the provenance usecase repo to start the substrate chain.

The command below will start the geth instance.

docker run -p 8545:8545 chainsafe/chainbridge-geth:20200505131100-5586a65

Ethereum Chain Setup

Deploy Contracts

To deploy the contracts onto the Ethereum chain, run the following:

cb-sol-cli deploy --all --relayerThreshold 1

Register fungible resource

cb-sol-cli bridge register-resource --resourceId "0x000000000000000000000000000000c76ebe4a02bbc34786d860b355f5a5ce00" --targetContract "0x21605f71845f372A9ed84253d2D024B7B10999f4"

Specify Token Semantics

# Register the erc20 contract as mintable/burnable

cb-sol-cli bridge set-burn --tokenContract "0x21605f71845f372A9ed84253d2D024B7B10999f4"

# Register the associated handler as a minter

cb-sol-cli erc20 add-minter --minter "0x3167776db165D8eA0f51790CA2bbf44Db5105ADF"

Substrate Chain Setup

Register Relayer

Select the Sudo tab in the PolkadotJS UI. Choose the addRelayer method of chainBridge, and select Alice as the relayer.

Register Resources For Fungible Transfer

Select the Sudo tab and call chainBridge.setResource with the below method parameters:

Id: 0x000000000000000000000000000000c76ebe4a02bbc34786d860b355f5a5ce00

Method: 0x4578616d706c652e7472616e73666572 (utf-8 encoding of “Example.transfer”)
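As a sanity check, the Method value is just the hex-encoded UTF-8 bytes of the call name, which is easy to reproduce:

```python
# The Method parameter is the UTF-8 bytes of "Example.transfer", hex-encoded.
method = "Example.transfer".encode("utf-8").hex()
assert "0x" + method == "0x4578616d706c652e7472616e73666572"
```

The same recipe produces the Method value for any other pallet call name you register.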

Whitelist Chains

Using the Sudo tab, call chainBridge.whitelistChain, specifying 0 for ethereum chain ID.

Running A Relayer

Here is an example configuration for a single relayer ("Alice") using the contracts we've deployed. Save this JSON in a file named config.json.


{
  "chains": [
    {
      "name": "eth",
      "type": "ethereum",
      "id": "0",
      "endpoint": "ws://localhost:8545",
      "from": "0xff93B45308FD417dF303D6515aB04D9e89a750Ca",
      "opts": {
        "bridge": "0x62877dDCd49aD22f5eDfc6ac108e9a4b5D2bD88B",
        "erc20Handler": "0x3167776db165D8eA0f51790CA2bbf44Db5105ADF",
        "erc721Handler": "0x3f709398808af36ADBA86ACC617FeB7F5B7B193E",
        "genericHandler": "0x2B6Ab4b880A45a07d83Cf4d664Df4Ab85705Bc07",
        "gasLimit": "1000000",
        "maxGasPrice": "20000000"
      }
    },
    {
      "name": "sub",
      "type": "substrate",
      "id": "1",
      "endpoint": "ws://localhost:9944",
      "from": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
      "opts": {
        "useExtendedCall": "true"
      }
    }
  ]
}

First, pull the Chainbridge docker image.

docker pull chainsafe/chainbridge:latest

Then start the relayer as a docker container.

docker run -v $(pwd)/config.json:/config.json --network host chainsafe/chainbridge --testkey alice --latest

With this setup complete, we should now be able to do fungible transfers between the two chains.

Interchain token transfer

Substrate Native Token ⇒ Ethereum ERC 20

In the Polkadot JS UI select the Developer -> Extrinsics tab and call example.transferNative with these parameters:

    • Amount: 1000 Unit
    • Recipient: 0xff93B45308FD417dF303D6515aB04D9e89a750Ca
    • Dest Id: 0

To query the recipient's balance on Ethereum, use this:

cb-sol-cli erc20 balance --address "0xff93B45308FD417dF303D6515aB04D9e89a750Ca"

Ethereum ERC20 ⇒ Substrate Native Token

If necessary, tokens can be minted:

cb-sol-cli erc20 mint --amount 1000

Before initiating the transfer we have to approve the bridge to take ownership of the tokens:

cb-sol-cli erc20 approve --amount 1000 --recipient "0x3167776db165D8eA0f51790CA2bbf44Db5105ADF"

To initiate a transfer on the Ethereum chain, use this command (note: there will be a 10-block delay before the relayer processes the transfer):

cb-sol-cli erc20 deposit --amount 1 --dest 1 --recipient "0xd43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d" --resourceId "0x000000000000000000000000000000c76ebe4a02bbc34786d860b355f5a5ce00"

Chainbridge Findings

    • At the time of writing, the Chainbridge-substrate pallet is not supported by the substrate-parachain-template, which makes it hard to use the bridge with Polkadot parachains.
    • On the substrate chain, the provided Chainbridge-substrate handlers do not mint/burn substrate native tokens, which leads to inconsistency. Example case:
      • When sending tokens from Substrate -> Ethereum, tokens on the substrate chain are transferred to the Chainbridge account, while on the Ethereum chain tokens are minted/released to the receiver's account.
      • After this transaction completes, whatever amount the bridge account received on the substrate chain becomes available for transfer during a later Ethereum -> Substrate transaction.
      • This implies that if our first transaction transfers tokens from Ethereum -> Substrate, we will receive nothing on the substrate chain, because the bridge account has no tokens to transfer. In such cases, the transaction is not reverted, and inconsistent balances are stored on both sides.
    • It is a fully trusted system; admin users are in control of reviewed components. For instance, minting an infinite amount of tokens or withdrawing all the tokens from the Bridge contract on the original chain is a risk.
    • Users should fully trust the admin and relayer-based consensus, not only when using the Bridge but also when using tokens created by the Bridge (tokens on a foreign blockchain that represent the original tokens).
    • Efficient operation of the whole system depends on a significant amount of configuration and additional manual work.
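The lock-versus-mint asymmetry described in the findings can be made concrete with a toy Python model. The balances and account names are illustrative only; the real handlers operate on chain state.

```python
# Toy model: the Substrate side locks/releases from a bridge account,
# while the Ethereum side mints/burns.
sub = {"alice": 1000, "bridge": 0, "bob": 0}   # substrate-side balances
eth = {"carol": 50}                            # ethereum-side balances

def sub_to_eth(sender, recipient, amount):
    sub[sender] -= amount
    sub["bridge"] += amount                    # locked, not burned
    eth[recipient] = eth.get(recipient, 0) + amount  # minted on Ethereum

def eth_to_sub(sender, recipient, amount):
    eth[sender] -= amount                      # burned on Ethereum
    released = min(amount, sub["bridge"])      # bridge releases only what it holds
    sub["bridge"] -= released
    sub[recipient] += released                 # any shortfall is simply lost

# If the very first transfer is Ethereum -> Substrate, the bridge holds nothing:
eth_to_sub("carol", "bob", 50)
assert sub["bob"] == 0 and eth["carol"] == 0   # tokens burned, nothing received
```

This is exactly the inconsistency noted above: the burn succeeds on one side while the release silently fails on the other, and nothing reverts.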


Blockchain-based networks are being built to offer specific capabilities. To make the most of these capabilities, the different networks should be able to share their data and talk to each other, which makes blockchain interoperability a must.

Chainbridge offers an excellent solution with its modular and multi-directional design. They have been continuously working towards enhancing the bridge system and their community channels are very active and supportive. With version 1.0, Chainbridge has built the scaffoldings necessary for message passing functionality. The upcoming versions of Chainbridge can achieve this functionality in a decentralized and trustless manner to ensure there are no central points of failure.

Cryptography – How to Get Started?

Cryptography is the ancient art of securing communication in the presence of adversaries. From early hieroglyphic inscriptions to the digital age, it has evolved in step with communication itself. It is no longer constrained to idea sharing between individuals; with the advent of the Internet, it has become an integral part of commercial activities and personal information sharing.

However, this has also triggered a need for solutions around secrecy, authentication, integrity, and protection against dishonesty. And it has inspired me to explore cryptography as a subject.

From day one, that is, my first encounter with the concept of cryptography, I was intrigued by its structure, nuances, and applications. Learning has become easier now: as a senior engineer working on cryptography projects, I learn a lot from first-hand experience. But the early days were not easy.

Before delving deep into the discussion about the best ways to learn cryptography, let’s see what makes it so crucial.

Why It Is Important

Designing cryptographic algorithms and protocols, and defining the standards, is best left to the cryptography community and institutions like NIST. A developer working on the security aspects of an application, on the other hand, needs an understanding of the core concepts and implementation experience to make the system robust and hack-proof. The developer should know how to use the best available technique to provide the desired level of secrecy along with the expected performance, and should be aware of common implementation mistakes, security tools to verify the application, and the value of a good test-case suite.

I recently came across an article from Columbia University discussing a new tool developed by a university team. With the tool, they discovered basic cryptographic misuse and unsafe security practices in popular Android apps on the Google Play Store.

Not every misuse results in a feasible attack, but it exposes the lack of proper security guidelines in the software community. Most developers are unaware of cryptography fundamentals and treat it as a black box, like any other software library. The problem with cryptographic code is that working perfectly does not mean it is secure.
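A small Python example of the "works but insecure" trap: both snippets below run perfectly and produce a stored credential, but only one follows accepted password-storage practice. This is an illustration of the point, not a reference implementation.

```python
import hashlib
import os

password = b"hunter2"

# Misuse: a bare, unsalted fast hash "works" but is trivially attackable
# with precomputed tables and GPU brute force.
weak = hashlib.md5(password).hexdigest()

# Better: a salted, deliberately slow key-derivation function from the stdlib.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=600_000)

# Both approaches "work perfectly"; only the second is sound.
assert len(weak) == 32 and len(strong) == 32
```

Functionally, nothing distinguishes the two at the call site, which is precisely why tools that scan for cryptographic misuse are valuable.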

Where to Get Started

If a developer is motivated to learn cryptography, there are not many structured resources that cover the basics from a developer's perspective. Below are the resources that helped me the most when I started exploring the area.

  • Cryptography I: This online course is offered on Coursera by Stanford Prof. Dan Boneh, and it covers most of the basic topics. A colleague recommended it, so I took it up to understand cryptography behind the scenes. From a developer's point of view, some highlights of this course are:
    1. It covers most of the core topics, such as block ciphers, MACs, hashing, authenticated encryption, public-key cryptography, and PRGs, PRPs, and PRFs.
    2. It gives a quick revision of number theory and discrete probability, with in-depth treatment and links to relevant textbooks for further reading to absorb the concepts better.
    3. Concepts are explained with drawings and examples; even the security holes are explained using simplified examples.
    4. A generic approach to proving any theorem is given, so the same approach can be applied to variations as well.
    5. It is fast-paced; I had to pause and review most of the videos.
    6. The programming assignments are really interesting, though they are optional to keep the course duration under control.

A capstone project with hands-on experience would certainly help more, but this course provides good insight into cryptographic algorithms and includes hands-on assignments to keep it interesting within a reasonable time for developers.

  • Video lecture series: If you are looking for a more thorough treatment of the subject, something like a graduate course, there is a YouTube video lecture series, 'Introduction to Cryptography by Christof Paar', that I found very useful. The book "Understanding Cryptography" by the same author is an excellent companion to the lecture series.
  • Online reading: To devise simple encryption, set up a certificate authority, or exchange signed documents, you don't need to be a cryptography expert. This site is a good resource for understanding the common tools and techniques you can use to implement these in your system and enhance security. Code examples are given for better understanding.

Depending on your involvement and time availability, you can pick from these resources. They are also useful for DevOps and QA engineers who deal with crypto algorithm setup and testing.

Some of the libraries I used for learning are CryptoPP (C++), gmpy2 (Python), and libsodium (Rust).


I would recommend these resources to anyone who is either starting a career in the cybersecurity domain or deals with cryptography daily.

If you find these resources useful, feel free to share them. You can also post your queries for further discussion.

Powering New-Age Blockchain Development with Polkadot


Polkadot is a fast-growing ecosystem that enables cross-chain communication within its parachains. Such interoperability and scalability can take blockchain technology to the next level and solve multiple problems like low TPS, high transaction fees, hard forks, and more.

“Polkadot is a sharded blockchain, meaning it connects several chains in a single network, allowing them to process transactions in parallel and exchange data between chains with security guarantees”

Polkadot Litepaper

Talentica's blockchain team has extensive experience building blockchain-based solutions, and we constantly track evolving technologies to grow our blockchain expertise. Among the multiple blockchain frameworks we have worked on, Polkadot has been gaining a lot of traction recently. We started exploring the protocol and did some hands-on work; our prior experience with blockchain technology and the Rust programming language helped us gain a good grasp of the framework.

In this blog, I'll help you understand how to develop and launch a simple parachain project on Polkadot. The article does not explore the basics of Polkadot in detail; rather, it will help you set up a working local environment for Polkadot parachain development and get familiar with the development process.

Local Setup

Polkadot provides PDKs (Parachain Development Kits) to ease development. Currently, there are two functioning PDKs: Substrate and Cumulus.

Substrate is the underlying framework on which Polkadot itself is built. It is a toolset for blockchain innovators that provides the necessary building blocks for constructing a chain.

Cumulus provides the consensus implementation for a Parachain and the block production logic. It has the interfaces and extensions to convert a Substrate FRAME runtime into a Parachain runtime.

We will do a significant amount of compiling in the steps below, as everything has to be built locally. Compiling can take noticeable time (depending on your system configuration) and storage space.

Install Substrate Prerequisites

Follow the official Substrate instructions for setting up a local development environment for Substrate.

Compile the Relay Chain

# Compile Polkadot with the real overseer feature

git clone

cd polkadot

git fetch

git checkout rococo-v1

cargo build --release --features=real-overseer

# Generate a raw chain spec

./target/release/polkadot build-spec --chain rococo-local --disable-default-bootnode --raw > rococo-local-cfde-real-overseer.json

Clone the Substrate Parachain Template

The Substrate Parachain Template internally uses Cumulus to convert a Substrate sovereign chain into a Polkadot parachain. We will use this template as the starting point for our parachain development.

# Clone substrate-parachain-template repo

git clone

Building a Simple Parachain on Polkadot

In this section, we will create a custom "Proof of Existence" chain using the Substrate blockchain development framework and FRAME runtime libraries, following the tutorials provided by the Substrate dev team.

We will use Substrate to create our runtime logic, which will then be compiled to a Wasm executable. This Wasm blob contains the entire state transition function of the chain and is what we need to deploy our project to Polkadot as a parachain.

About Proof of Existence

Proof of existence is a service that enables identifying the real owner of a computer file. A user submits a file to the application, and a hash value is calculated from it; this hash can safely be assumed to be unique to an individual file. The hash is then mapped to some unique properties of the user for identification. A user with the original file can prove ownership by simply recomputing the hash and matching it with the one stored on the blockchain. With this mechanism, we can certify the existence, ownership, and integrity of a document without a central authority.

Interface and Design

Our PoE API will expose two callable functions:

  • create_claim - allows a user to claim the existence of a file by uploading a file digest.
  • revoke_claim - allows the current owner of a claim to revoke their ownership.

To implement this, we will only need to store information about the proofs that have been claimed, and who made those claims.

Building a Custom Pallet

The Substrate Parachain Template has a FRAME-based runtime. FRAME is a library of code that allows you to build a Substrate runtime by composing modules called "pallets". You can think of these pallets as individual pieces of logic that define what your blockchain can do! Substrate provides multiple pre-built pallets for use in FRAME-based runtimes.

For example, FRAME includes a Balances pallet that controls the underlying currency of your blockchain by managing the balance of all the accounts in your system.

File Structure

Most of our changes will be made in the pallet source file under pallets/template/src/ inside the substrate parachain template. Open the Substrate Parachain Template in any code editor, then open that file.

There will be some pre-written code that acts as a template for a new pallet. You can read over this file if you'd like, and then delete the contents since we will start from scratch for full transparency.

Imports and Dependencies

Add the below imports to the file.

#![cfg_attr(not(feature = "std"), no_std)]

use frame_support::{
    decl_module, decl_storage, decl_event, decl_error, ensure, StorageMap,
};
use frame_system::ensure_signed;
use sp_std::vec::Vec;

Most of these imports are already available because they were used in the template pallet whose code we just deleted. However, sp_std is not available and we need to list it as a dependency.

Add this block to your pallets/template/Cargo.toml file.


[dependencies.sp-std]
default-features = false
version = '2.0.0'

Then, update the existing [features] block to look like this.

[features]
default = ['std']
std = [
    # ...existing entries...
    'sp-std/std',    # <-- This line is new
]
Every pallet has a component called Trait that is used for configuration.

/// Configure the pallet by specifying the parameters and types on which it depends.
pub trait Trait: frame_system::Trait {
    /// Because this pallet emits events, it depends on the runtime's definition of an event.
    type Event: From<Event<Self>> + Into<<Self as frame_system::Trait>::Event>;
}

Our pallet will only emit an event in two circumstances:

  • When a new proof is added to the blockchain.
  • When proof is removed.

// Pallets use events to inform users when important changes are made.

// Event documentation should end with an array that provides descriptive names for parameters.

decl_event! {
    pub enum Event<T> where AccountId = <T as frame_system::Trait>::AccountId {
        /// Event emitted when a proof has been claimed. [who, claim]
        ClaimCreated(AccountId, Vec<u8>),
        /// Event emitted when a claim is revoked by the owner. [who, claim]
        ClaimRevoked(AccountId, Vec<u8>),
    }
}

An error can occur when attempting to claim or revoke proof.

// App errors are declared here

decl_error! {
    pub enum Error for Module<T: Trait> {
        /// The proof has already been claimed.
        ProofAlreadyClaimed,
        /// The proof does not exist, so it cannot be revoked.
        NoSuchProof,
        /// The proof is claimed by another account, so the caller can't revoke it.
        NotProofOwner,
    }
}


To add a new proof to the blockchain, we will simply store that proof in our pallet's storage. To do so, we will create a hashmap from the proof to the owner of that proof and the block number at which the proof was made.

// The pallet's runtime storage items.

decl_storage! {
    trait Store for Module<T: Trait> as TemplateModule {
        /// The storage item for our proofs.
        /// It maps a proof to the user who made the claim and when they made it.
        Proofs: map hasher(blake2_128_concat) Vec<u8> => (T::AccountId, T::BlockNumber);
    }
}

If proof has an owner and a block number, then we know that it has been claimed! Otherwise, the proof is available to be claimed.

Callable Functions

As implied by our pallet's events and errors, we will have two "dispatchable functions" the user can call in this FRAME pallet:

  • create_claim(): Allow a user to claim the existence of a file with proof.
  • revoke_claim(): Allow the owner of a claim to revoke their claim.

// Dispatchable functions allow users to interact with the pallet and invoke state changes.

// These functions materialize as "extrinsics", which are often compared to transactions.

// Dispatchable functions must be annotated with weight and must return a DispatchResult.

decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        // Errors must be initialized if they are used by the pallet.
        type Error = Error<T>;

        // Events must be initialized if they are used by the pallet.
        fn deposit_event() = default;

        /// Allow a user to claim ownership of an unclaimed proof.
        #[weight = 10_000]
        fn create_claim(origin, proof: Vec<u8>) {
            // Check that the extrinsic was signed and get the signer.
            // This function will return an error if the extrinsic is not signed.
            let sender = ensure_signed(origin)?;

            // Verify that the specified proof has not already been claimed.
            ensure!(!Proofs::<T>::contains_key(&proof), Error::<T>::ProofAlreadyClaimed);

            // Get the block number from the FRAME System module.
            let current_block = <frame_system::Module<T>>::block_number();

            // Store the proof with the sender and block number.
            Proofs::<T>::insert(&proof, (&sender, current_block));

            // Emit an event that the claim was created.
            Self::deposit_event(RawEvent::ClaimCreated(sender, proof));
        }

        /// Allow the owner to revoke their claim.
        #[weight = 10_000]
        fn revoke_claim(origin, proof: Vec<u8>) {
            // Check that the extrinsic was signed and get the signer.
            // This function will return an error if the extrinsic is not signed.
            let sender = ensure_signed(origin)?;

            // Verify that the specified proof has been claimed.
            ensure!(Proofs::<T>::contains_key(&proof), Error::<T>::NoSuchProof);

            // Get the owner of the claim.
            let (owner, _) = Proofs::<T>::get(&proof);

            // Verify that the sender of the current call is the claim owner.
            ensure!(sender == owner, Error::<T>::NotProofOwner);

            // Remove claim from storage.
            Proofs::<T>::remove(&proof);

            // Emit an event that the claim was erased.
            Self::deposit_event(RawEvent::ClaimRevoked(sender, proof));
        }
    }
}


Compiling the Parachain

After you've copied all of the parts of this pallet correctly into the pallet file under pallets/template/, you should be able to compile your node without warnings or errors. Run this command in the root directory of the substrate-parachain-template repository to build the node:

# Build the parachain template collator

cargo build --release

# Print the help page to ensure the node built correctly

./target/release/parachain-collator --help

Starting the Nodes

Launch Relay Chain

Run these commands inside the polkadot directory.

# Alice

./target/release/polkadot --chain rococo-local-cfde-real-overseer.json --alice --tmp

# Bob (In a separate terminal)

./target/release/polkadot --chain rococo-local-cfde-real-overseer.json --bob --tmp --port 30334

After starting Bob’s node, Bob’s terminal log should display 1 peer. If it does not, your local nodes are not discovering each other; in that case, add the --discover-local flag at the end of Bob’s launch command.

# Bob - If local nodes failed to discover each other

./target/release/polkadot --chain rococo-local-cfde-real-overseer.json --bob --tmp --port 30334 --discover-local

If the problem persists, we have to specify the bootnodes explicitly by adding the --bootnodes /ip4/<Node IP>/tcp/<Node p2p port>/p2p/<Node Peer ID> flag. To make Alice a boot node for Bob, we have to provide Alice’s node details with the --bootnodes flag.

Alice Node IP: (all the nodes are running locally).

Alice Node p2p port: 30333 (by default, if not specified, a node’s p2p service runs on port 30333).

Alice Node Peer ID: Check the Local node identity in Alice Node’s terminal log.

# Bob - If local nodes failed to discover each other

./target/release/polkadot --chain rococo-local-cfde-real-overseer.json --bob --tmp --port 30334 --bootnodes /ip4/

Export Parachain Genesis State and Wasm

Run these commands inside the substrate-parachain-template directory.

# Export genesis state

# --parachain-id 200 is an example and can be chosen freely. Make sure to use the same parachain ID everywhere

./target/release/parachain-collator export-genesis-state --parachain-id 200 > genesis-state


# Export genesis wasm

./target/release/parachain-collator export-genesis-wasm > genesis-wasm

Launch the Parachain

Run these commands inside the substrate-parachain-template directory.

# Replace <parachain_id_u32_type_range> with the parachain id


# Collator 1

./target/release/parachain-collator --collator --tmp --parachain-id <parachain_id_u32_type_range> --port 40335 --ws-port 9946 -- --execution wasm --chain ../polkadot/rococo-local-cfde-real-overseer.json --port 30335


# Collator 2

./target/release/parachain-collator --collator --tmp --parachain-id <parachain_id_u32_type_range> --port 40336 --ws-port 9947 -- --execution wasm --chain ../polkadot/rococo-local-cfde-real-overseer.json --port 30336


# Parachain Full Node 1

./target/release/parachain-collator --tmp --parachain-id <parachain_id_u32_type_range> --port 40337 --ws-port 9948 -- --execution wasm --chain ../polkadot/rococo-local-cfde-real-overseer.json --port 30337

A collator node maintains a full node for both the parachain and the relay chain. Notice in the commands above that several arguments are passed before the lone --, and several more after it. The arguments before -- are for the collator (parachain) node itself, and the arguments after -- are for the embedded relay chain node.

Similar to the relay chain, if your local nodes are not able to detect each other, you can use the --bootnodes flag. In the parachain case, you have to provide the parachain boot node (Collator 1 - use the parachain’s Local node identity) before -- and the relay chain boot node (Alice) after --.

# Collator 2 - If local nodes failed to discover each other

./target/release/parachain-collator --collator --tmp --parachain-id <parachain_id_u32_type_range> --port 40336 --ws-port 9947 --bootnodes /ip4/ -- --execution wasm --chain ../polkadot/rococo-local-cfde-real-overseer.json --port 30336 --discover-local --bootnodes /ip4/

Register the parachain

Open the Polkadot-js App and connect to your local relay chain node (Alice/Bob). After a successful connection, go to Developer→Sudo and fill in the registration details (the parachain ID, the exported genesis state, and the genesis Wasm).

Once the parachain is registered, you can explore more on the Polkadot-js to get familiar with the app.

Interacting with Proof of Existence Pallet

After successfully registering the parachain, we should now be able to use our parachain to create/revoke a claim on a file.

The Polkadot-js app allows users to interact with all the available pallets in the node. Connect your parachain node with the Polkadot-js app and go to Developer→Extrinsic. Here, select the templateModule pallet and createClaim function. Then select a file to be claimed from your computer and submit the transaction.

If all went well, once the block is finalized you should see a success message on the screen. Remember, only the owner can revoke the claim! If you select another user account and try to claim the same file, it will throw an error saying the proof has already been claimed.

Note - The Polkadot-js app can be used as an initial testing platform for your parachain. Making a complete application will require a custom UI. You can follow Polkadot's documentation on tools, utilities, and libraries, which will help your front-end JavaScript application interact with the Polkadot network.


Smart Contracts Security Analysis with Manticore

Powered by blockchain technology, smart contracts have become an important factor in business transparency. They eliminate the need for intermediary services like brokers and agents to facilitate a transaction, are less time-consuming, and enable neutrality and automation in signing deals.

The steady adoption of smart contracts on the Ethereum blockchain has led to millions of contracts holding large sums in digital currencies, and tiny mistakes made while developing smart contracts on an immutable blockchain have caused substantial losses and raise the risk of future incidents. Hence, the secure development of smart contracts is a crucial topic today; a variety of attacks and incidents associated with vulnerable smart contracts could have been avoided.

Smart Contract Attacks

The following is the list of known attacks one should be aware of and defend against while writing a smart contract.


Reentrancy

Reentrancy occurs when a function makes an external call to another untrusted contract before it resolves the effects that should have been resolved. This can have unexpected consequences. In this attack, the attack surface is large in terms of fund loss: the attacker can simply call a function of your smart contract, re-enter the same piece of code, and eventually drain the funds.

Real-World Impact: On 17-Jun-2016, The DAO (Decentralized Autonomous Organization) hack was one of the major attacks in which reentrancy played a central role. While analyzing DAO.sol, the attacker noticed a function that updates user balances and totals at the end. He then found a way to recursively call that function before it finished execution, allowing him to move as many funds as he wanted. This resulted in the transfer of around one-third (3.6 million) of the ether that had been committed to The DAO.

Access Control

Access control is a very common issue. It occurs when someone accesses functionality in a smart contract for which they are not authorized. For functionality to be callable from outside a contract, the method must be declared external or public. While insecure visibility settings give attackers straightforward ways to access a contract’s private values or logic, access control bypasses are sometimes more subtle.

Solidity has a global variable, tx.origin (deprecated), which returns the address of the account that originally sent the call. Using this variable for authentication leaves the contract susceptible to a phishing-like attack.
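The tx.origin pitfall can be modelled with a small Python sketch (a toy analogy, not real Solidity or EVM semantics; all class and method names are illustrative). The wallet authenticates the transaction originator instead of its direct caller, so a contract the victim is tricked into calling can spend on the victim's behalf:

```python
# Toy model: authenticating tx.origin instead of the direct caller.
class Wallet:
    def __init__(self, owner: str, funds: int):
        self.owner = owner
        self.funds = funds

    def withdraw(self, tx_origin: str, to: str, amount: int) -> str:
        # BUG: checks tx.origin (the original sender of the whole
        # call chain), not the immediate caller.
        if tx_origin != self.owner:
            raise PermissionError("not the owner")
        self.funds -= amount
        return to  # funds sent to `to`

class PhishingContract:
    """Malicious contract the victim is lured into calling."""
    def lure(self, wallet: Wallet, tx_origin: str) -> str:
        # tx.origin is still the victim, so the owner check passes
        # even though the victim never intended this withdrawal.
        return wallet.withdraw(tx_origin, "attacker", 50)

w = Wallet("victim", funds=100)
print(PhishingContract().lure(w, tx_origin="victim"))  # attacker
print(w.funds)  # 50
```

Authenticating against the immediate caller (msg.sender in Solidity) instead would make the intermediary contract, not the victim, the subject of the ownership check.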

Real-World Impact: Rubixi’s (Ponzi scheme) Fees stolen because the constructor function had an incorrect name (used DynamicPyramid instead of Rubixi), allowing anyone to become the owner.


Arithmetic Over/Underflow

The data types available in Solidity to store an integer can only hold numbers within a specific range. An over/underflow occurs when an operation needs a fixed-size variable to store a number that lies outside the range of the variable's data type. A uint8, for instance, can only store numbers in the range [0, 255]; trying to store 256 into a uint8 results in 0. This can be exploited if user input is unchecked and calculations produce numbers outside the range of the data type that stores them.
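The wraparound behaviour of a uint8 can be mimicked in Python by reducing every arithmetic result modulo 2^8 (a sketch of the fixed-width semantics, not Solidity itself):

```python
# Mimic Solidity's fixed-width unsigned arithmetic: every result
# is truncated modulo 2**bits (here uint8, so modulo 256).
def uint8_add(a: int, b: int) -> int:
    return (a + b) % 256

def uint8_sub(a: int, b: int) -> int:
    return (a - b) % 256

print(uint8_add(255, 1))  # 0   (overflow: 256 wraps to 0)
print(uint8_sub(0, 1))    # 255 (underflow: -1 wraps to 255)
```

The same wraparound happens with uint256 in the EVM, just with a far larger modulus, which is exactly what the exploits below abused.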

Real-World Impact: The 4chan group that built Proof of Weak Hands Coin (PoWHC, a Ponzi scheme) lost $800k overnight because of over/underflow issues. The problem was in PoWHC’s implementation of ERC-20: the attacker exploited an underflow to gain an exceedingly large balance of PoWHC coins.

Similarly, on 22-Apr-2018, an unusual BEC token transaction was recorded in which someone transferred an extremely large amount of BEC tokens. Later analysis proved it to be a classic integer overflow issue.

Unchecked Low-Level Calls

In Solidity, you can either use low-level calls such as address.call(), address.callcode(), address.delegatecall(), and address.send(), or you can use contract calls such as ExternalContract.doSomething(). Low-level calls never throw an exception; instead they return false if they encounter an exception, whereas contract calls automatically throw.

If the return value of a low-level call is not checked, execution resumes even though the called function failed. This can cause unexpected behavior and break the program logic. A failed call can even be caused deliberately by an attacker, who may be able to further exploit the application.

Real-World Impact: In the King of the Ether game, an unchecked failed send() caused some monarch compensation payments and over/underpayment refunds to fail to be sent.

Denial of Service

Typically, there are three types of DoS attacks that can happen on a smart contract.

One of them is the Unexpected Revert, where a transaction you did not expect to revert does revert. Even though the functionality written in your smart contracts mirrors your business requirements, a different type of user (not an individual Ethereum address, but another malicious smart contract) can interact with your system and prevent the complete execution of the algorithm you have written.

The next one is related to the Block Gas Limit, the maximum amount of gas that can be consumed in a single block. The more complex a transaction is, the more gas it requires, so heavy computation can cause your transaction to exceed the gas limit. This is often the case in systems that loop over an array or mapping that can be enlarged by users at little cost.

Last is Block Stuffing. In this attack, the attacker places a transaction and then clogs the Ethereum network with a large number of transactions so that nobody else can interact with the smart contract. To ensure their transactions are processed by miners, the attacker can pay higher transaction fees. By controlling the quantity of gas consumed by their transactions, the attacker can influence the number of transactions that get mined and included in the block.

Real-World Impact: At one time, GovernMental (an old Ponzi scheme) had accumulated 1100 ether. This Reddit post describes how the contract required the deletion of a large mapping in order to withdraw the ether. The deletion of this mapping had a gas cost that exceeded the block gas limit at the time, so it was not possible to withdraw the 1100 ether.

Code Injection via delegatecall

There exists a special variant of a message call named delegatecall. The DELEGATECALL opcode is just like a standard message call, except that the code executed at the targeted address runs in the context of the calling contract, and msg.sender and msg.value remain unchanged. This allows a smart contract to dynamically load code from a different address at runtime; storage, the current address, and the balance still refer to the calling contract. Calling into untrusted contracts is extremely dangerous because the code at the target address can change any storage value of the caller and has full control over the caller’s balance.

Real-World Impact: About $31M worth of ether was stolen in the second Parity multi-sig attack, primarily from 3 wallets. The initWallet() method, which was supposed to initialize the wallet, was not given proper visibility and was left public. This allowed the attacker to call it on deployed contracts, resetting the ownership to the attacker’s address, and then call the kill() method to self-destruct the contract. As a result, all Parity multi-sig wallets became useless and all funds or tokens in them were frozen forever.

Signature Replay

The basic idea of signature replay is that the same signature can be used to execute a transaction multiple times. If the communication channel is accessible, the attacker can listen to it, copy the signed message, and resubmit it to the message receiver. The receiver cannot tell the difference unless something in the message identifies whether it has been sent before. Typically, a cryptographic signature only attests to the signer and the message integrity, nothing more; it carries no information on whether the signature has already been used or the message has been sent several times.
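A common defence is to bind each signature to a single-use nonce. The sketch below uses a symmetric HMAC in place of a real asymmetric signature scheme purely for illustration; the key, function names, and message format are all assumptions, not part of any real protocol:

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # hypothetical key shared by both parties
seen_nonces = set()        # receiver-side record of used nonces

def sign(message: bytes, nonce: int) -> bytes:
    # The nonce is part of the signed payload, binding the
    # signature to a single use.
    payload = message + nonce.to_bytes(8, "big")
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def accept(message: bytes, nonce: int, sig: bytes) -> bool:
    if not hmac.compare_digest(sign(message, nonce), sig):
        return False  # forged or corrupted message
    if nonce in seen_nonces:
        return False  # replay: this signature was already used
    seen_nonces.add(nonce)
    return True

sig = sign(b"pay 10 tokens", nonce=1)
print(accept(b"pay 10 tokens", 1, sig))  # True  (first delivery)
print(accept(b"pay 10 tokens", 1, sig))  # False (replay rejected)
```

Without the nonce in the signed payload, the second `accept` call would succeed, which is exactly the replay scenario described above.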

Time Manipulation

The basic idea of this exploit is that miners can manipulate block.timestamp within some constraints: the time must be after the previous block’s timestamp and cannot be too far in the future. Miners have the ability to adjust timestamps slightly, which can prove quite dangerous if block timestamps are used incorrectly in smart contracts.

Real-World Impact: GovernMental (a Ponzi scheme, also discussed under the DoS attack) was also exposed to a timestamp attack. In the scheme, the last player to join a round who stayed for at least a minute got paid. Thus a miner who was also a player could adjust the timestamp to a future time, making it appear that they had been the last to join for over a minute, even though this was not true in reality.

Front Running

Front-running is a course of action where someone benefits from early access to market information about upcoming transactions and trades, typically because of a privileged position in the transmission of this information.

Every new blockchain transaction first relays around the network, then it is selected by a miner and put into a valid block, and finally the block is incorporated well enough into the blockchain that it is unlikely to be changed. Front-running is an attack where a malicious node observes a transaction after it is broadcast but before it is finalized, and attempts to have its own transaction confirmed before or instead of the observed transaction.

Real-World Impact: Bancor is an ICO that spectacularly raised over $150M in funding over a few minutes. Researchers at Cornell revealed that Bancor was vulnerable to front-running. They pointed out that miners would be able to front-run any transactions on Bancor since miners are free to re-order transactions within a block they’ve mined.

Other Vulnerabilities

The Smart Contract Weakness Classification Registry offers a complete and up-to-date catalog of known smart contract vulnerabilities and anti-patterns along with real-world examples. Browsing the registry is a good way of keeping up-to-date with the latest attacks.

All these vulnerabilities suggest that, despite their potential, repeated security concerns have shaken trust in letting smart contracts handle billions of dollars. All security issues should be addressed before deploying a contract; otherwise, the cost of a vulnerability is much higher. Another attack like The DAO, which almost brought down the world’s second-largest blockchain, would be a catastrophe.

In the next section, we will discuss Manticore, a tool based on the symbolic execution of smart contracts for analyzing and detecting various types of vulnerabilities. A symbolic execution tool tries to explore all possible paths of your contract and generate reproducible inputs for each case. Symbolic execution has remarkable potential for programmatically detecting broad classes of security vulnerabilities in modern software.


What Is Manticore?

Manticore is a symbolic execution tool for the analysis of smart contracts and binaries. It enables the exploration of a large number of execution paths by replacing program inputs with symbolic parameters and studying the conditions on these parameters that determine the execution of each element of the program. It is pure Python with minimal dependencies.

According to the official documentation, these are the features of Manticore:

  • Program Exploration: Manticore can execute a program with symbolic inputs and explore all the possible states it can reach.
  • Input Generation: Manticore can automatically produce concrete inputs that result in a given program state.
  • Error Discovery: Manticore can detect crashes and other failure cases in binaries and smart contracts.
  • Instrumentation: Manticore provides fine-grained control of state exploration via event callbacks and instruction hooks.
  • Programmatic Interface: Manticore exposes programmatic access to its analysis engine via a Python API.

Not only can it analyze Ethereum smart contracts, it can also analyze Linux ELF binaries (x86, x86_64, aarch64, and ARMv7) and WASM modules. Manticore is notably slow because it works through different sections of code with a variety of attack scenarios, but the end result is worth the wait.

In this blog, our main focus is Manticore’s CLI tool, although it also offers an expressive and scriptable Python API for custom analyses and application-specific optimizations. Anyone with experience in exploitation or reversing can use the API to create specialized binary analysis tools and answer a range of questions, such as:

“What is a program input that will cause the execution of this code?”

“Can the program reach code X at runtime?”

“At point X in execution, is it possible for variable Y to be a specified value?”

“Is user input ever used as a parameter to libc function X?”

“How many times does the program execute function X?”

“How many instructions does the program execute if given input X?”

Please follow the installation instructions in the official Manticore documentation.

Detecting Vulnerabilities in Smart Contracts

Manticore comes with an easy-to-use command-line (CLI) tool that lets you quickly generate new program test cases with symbolic execution. It is capable of input generation, crash discovery, execution tracing, etc.

Let’s take VulnerableToken.sol as an example.

In this contract, users can buy a token by calling the fund method and destroy the token by calling the burn method. A user can take control of the contract by calling the takeOwnership method, which requires the user to have more tokens than the current owner.

Since the fund method automatically grants more tokens to the current owner if the purchased tokens are less than the owner’s balance, it should only be possible to take over the contract by buying more tokens than the current owner has in a single transaction.

Manticore by default creates an attacker account with a balance of 1000, which is not enough to surpass the owner’s starting balance of 1 million tokens, so it should be impossible to take over the contract. However, there is a bug in the burn function: if an attacker tries to burn more tokens than they currently own, they may overflow their balance and gain an absurd number of tokens. Let’s run Manticore on this contract and see if it can find the bug.
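Before handing the contract to Manticore, the bug can be reproduced off-chain with a tiny Python model of the token (a sketch: the method names follow the example contract's description, the balances are the ones stated above, and the modulo mimics unchecked 256-bit EVM subtraction):

```python
# Off-chain model of the vulnerable token's balance arithmetic.
MAX = 2**256

class VulnerableTokenModel:
    def __init__(self):
        self.balances = {"owner": 1_000_000, "attacker": 1_000}

    def burn(self, who: str, amount: int) -> None:
        # BUG: no check that amount <= balance, so the unchecked
        # subtraction wraps around modulo 2**256.
        self.balances[who] = (self.balances[who] - amount) % MAX

    def take_ownership(self, who: str) -> bool:
        # Succeeds only if the caller holds more than the owner.
        return self.balances[who] > self.balances["owner"]

t = VulnerableTokenModel()
t.burn("attacker", 2_000)               # burn more than we own
print(t.balances["attacker"] > 10**70)  # True (absurd balance)
print(t.take_ownership("attacker"))     # True (takeover succeeds)
```

Manticore finds this same transaction sequence automatically by treating the burn amount as a symbolic value and solving for inputs that reach the takeover state.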

Use the below command to start the symbolic execution.

$ manticore VulnerableToken.sol

Once started, it will automatically generate various test cases with detailed output.

All the generated result files will be stored in a separate folder. We will mainly focus on user_xxxxxxxx.tx and global.findings files in this blog. Details about each file can be found on this link.

The user_xxxxxxxx.tx file contains the details of the transactions that happened in a single test case and global.findings has the results of any detectors that were triggered during execution.

We can see in the CLI that Manticore’s overflow detector first warns of an overflow after the second transaction.

It must be calling the fund method immediately followed by burn. By looking through the resulting test cases, we can find the test case which allowed the attacker to take ownership of the contract.

In this test case, after the owner creates the contract, the attacker buys a symbolic number of tokens (902637). Then it calls the burn function with a massive symbolic argument, which overflows the balance and grants them an enormous number of tokens. After that, they are able to call takeOwnership and have it complete successfully instead of throwing a revert.

As we have just seen, Manticore was able to symbolically evaluate sequences of transactions, including one that revealed the bug in the contract.

Manticore Detectors

The Manticore CLI comes with a pack of default detectors turned on (e.g. integer overflow). These print warnings as soon as they suspect that a state behaves in a certain way, and at the end the findings are collected in global.findings.

If nothing is printed in global.findings, we can say that no “bug” has been detected: the implemented detectors did not find any contract path that matches their specific property. You can check the obtained coverage as a measure of exploration completeness.

Manticore makes a great effort to exercise all possible contract paths, though there are certain limitations (dynamically sized arguments, calldata size, number of symbolic transactions, etc.).

In the output folder, you will find the transaction trace for every contract state that Manticore found. You can inspect account balances and other details to check manually that nothing harmful happened.

The findings depend on the default detectors enabled. The best part of the analysis is the set of high-coverage test cases you’ll find in the output folder. Ideally, you should check all of them to see if any of them breaks a contract invariant (whatever that is for you).

Below is the list of detectors available and activated by default by Manticore.

Detector: Description

DetectEnvInstruction: Detects the usage of instructions that query environmental/block information (e.g. BLOCKHASH, COINBASE, TIMESTAMP, NUMBER, DIFFICULTY, GASLIMIT, ORIGIN, GASPRICE). Sometimes environmental information can be manipulated, so contracts should avoid using it except in special situations.

DetectSuicidal: Reachable self-destruct instructions.

DetectExternalCallAndLeak: Reachable external call or ether leak to sender or arbitrary address.

DetectInvalid: Invalid instruction detection.

DetectReentrancySimple: Simple detector for reentrancy bugs. Alerts if the contract changes the state of storage (does a write) after a call with >2300 gas to a user-controlled/symbolic external address or the msg.sender address.

DetectReentrancyAdvanced: Detector for reentrancy bugs. Given an optional concrete list of attacker addresses, warns on the following conditions: 1) a successful call to an attacker address (an address in the attacker list), or to any human account address if no list is given, with enough gas (>2300); 2) an SSTORE after the execution of the CALL; 3) the storage slot of the SSTORE must be used in some path to control flow.

DetectIntegerOverflow: Detects potential overflow and underflow conditions on ADD and SUB instructions.

DetectUnusedRetVal: Detects unused return values from internal transactions.

DetectDelegatecall: Detects DELEGATECALLs to controlled addresses and/or with a controlled function ID. This detector reports any delegatecall instruction for which either of the following propositions holds: the destination address can be controlled by the caller, or the first 4 bytes of the calldata are controlled by the caller.

DetectUninitializedMemory: Detects uses of uninitialized memory.

DetectUninitializedStorage: Detects uses of uninitialized storage.

DetectRaceCondition: Detects possible transaction race conditions (transaction order dependencies). This detector might not work properly for contracts that have only a fallback function.

DetectManipulableBalance: Detects the use of a manipulable balance in a strict compare.


Manticore is a great tool for the security analysis of smart contracts; it is very flexible and covers a variety of bug types. These bugs, if not detected, could result in significant losses. Because of the way Manticore works, it takes a long time to analyze smart contracts, and in some cases it even times out; it also consumes significant memory. Apart from these two issues, it is an excellent tool for finding bugs that gets even better with the scriptable Python API.


Simple Blockchain Framework: An Introduction to Block & Transaction Structure

SimpleBlockchain is a modular, developer-friendly, and open-source framework to develop blockchain applications. In this article, I will walk you through the Block and the Transaction structure of the SimpleBlockchain framework. Keep following the GitHub repository for updates.

The SimpleBlockchain framework is modular enough to integrate different consensus algorithms without changing its other core components. It is also generic enough to support multiple applications simultaneously using its generic Block and Transaction structures. We are using the Rust language to develop the SimpleBlockchain framework, so this article may contain Rust-specific code snippets as I explain the block and transaction structures and how they support these capabilities.


A Blockchain is a chain of blocks where each block is linked with the previous block (the parent block) by including the previous block's hash. Generally, a block contains the previous block hash, the miner's id, a transactions list, a creation timestamp, state headers, the block height, and a signature. Figure (1) shows the structure of a block.

Figure (1): – Block Structure in Blockchain

A root block is the topmost block of the blockchain. A peer or an active miner node gathers transactions, executes them on the updated global state from the root block, and then includes the other header details to forge a new block. In Blockchain, "parent hash" and "previous block hash" are interchangeable terms; both represent the hash value of the block at index n-1 for the block at index n. Since each block holds a hash of its parent block, a malicious peer that tries to modify the data of any previously appended block must re-compute and update the parent hash of every block up to the latest one. That is why data tampering in the blockchain is near to impossible. Figure (2) shows three blocks: Block 101, Block 102, and Block 103. Block 102 is the child of Block 101, and Block 103 is the child of Block 102. Each block has only one child; in the case of two children, one child will eventually be discarded by the blockchain. The question that now emerges is: who is the parent of the first block? Each Blockchain creates a genesis block (the first block) using a predefined set of values known to everyone in the network.

Figure (2): – Simplified Blockchain
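The parent-hash linking described above can be sketched in a few lines: each block stores the hash of its parent, so modifying any old block invalidates every later link. This is a minimal illustration; the field names are assumptions, not the framework's actual structures.

```python
# Minimal hash-chain sketch: tampering with an old block breaks the chain.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(parent_hash, txns):
    return {"parent_hash": parent_hash, "txns": txns}

# Genesis block built from predefined values known to everyone.
genesis = make_block("0" * 64, ["genesis"])
b101 = make_block(block_hash(genesis), ["tx1"])
b102 = make_block(block_hash(b101), ["tx2"])

def chain_is_valid(chain):
    # Every block's stored parent_hash must match its parent's actual hash.
    return all(chain[i]["parent_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid([genesis, b101, b102]))  # True
b101["txns"] = ["tampered"]                   # modify an old block...
print(chain_is_valid([genesis, b101, b102]))  # False: b102's link no longer matches
```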

How is the block structure generic enough to support different consensus algorithms?

The block structure shown in figure (1) is imprecise. In an actual implementation, the block structure may contain various other fields depending on the blockchain consensus and the blockchain permission level. For example, the PoW consensus needs extra fields in the block structure, such as a nonce, a block difficulty unit, and a block reward. The Gosig consensus needs extra fields such as a signers' list, a block reward, and a round number.

There is one more thing we need to consider: not every field in the block structure is used to generate the block signature. For example, the signers' list in the Gosig consensus is used for the authentication process and is excluded while generating the signature. We can call these extra fields authentication headers. In contrast, the nonce, the block difficulty unit, and the block reward are extra fields that are included while generating the block signature. We can call these extra fields custom headers. A blockchain may also require only one of these two header types.
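The header split described above can be sketched as follows: custom headers feed into the signed payload, while authentication headers (such as a Gosig-style signers' list) are excluded from it. The field names and the hashing stand-in for a real signature are illustrative assumptions.

```python
# Sketch: custom headers are signed; authentication headers are not.
import hashlib
import json

def signing_payload(block):
    # Everything except the authentication headers contributes to the signature.
    signed_part = {k: v for k, v in block.items() if k != "authentication_headers"}
    return hashlib.sha256(json.dumps(signed_part, sort_keys=True).encode()).hexdigest()

block = {
    "parent_hash": "ab" * 32,
    "txns": ["tx1", "tx2"],
    "custom_headers": {"nonce": 42, "difficulty": 3},
    "authentication_headers": {"signers": ["peer-1", "peer-2"]},
}

h1 = signing_payload(block)
block["authentication_headers"]["signers"].append("peer-3")
h2 = signing_payload(block)
print(h1 == h2)  # True: changing authentication headers leaves the signed payload intact
```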

While working on the SimpleBlockchain Framework, we addressed this generic block structure issue, so that developers can integrate different consensus with the SimpleBlockchain framework without doing any extra work on Block Structure. Figure (3) shows the generic block structure of the SimpleBlockchain framework.

Figure (3): – Generic Block Structure

Figure (4) shows an example of the custom headers in the case of Aura Consensus

Figure (4): – Consensus Specific Custom Header


A transaction is an activity that tries to modify the blockchain's global state.

A peer executes transactions to forge a new block. When a transaction is executed, it invokes a function of a smart contract. Typically, a transaction structure contains the From Account, the smart contract, the function, headers, the function payload, and a signature.

Figure (5): – Transaction Structure in Blockchain

Figure (5) shows the general structure of a transaction. From Account is the transaction invoker's identity, and this identity is used to authenticate the transaction's digital signature. The smart-contract and function fields contain the application information that will validate and handle the payload data. The function payload is the list of input parameters to the function call. The header field can contain various fields, such as a nonce, a timestamp, and a transaction fee.

The transaction structure depends on the blockchain consensus and the application it is currently supporting. That is why we need to make sure the transaction structure is generic enough to support these modifications.

Let me show you how the blockchain consensus and applications affect the transaction structure. Assume a user wants to build an application on top of the SimpleBlockchain framework that supports multi-signature. In that case, the framework must have enough structural flexibility to support it. On the consensus side, a consensus can add fields such as a gas price or a transaction fee. To resolve the challenges mentioned above, we created the generic transaction structure shown in Figure (6).

Figure (6): – Generic Signed Transaction Structure

As shown in Figure (6), the txn field holds the serialized data of the user-defined internal transaction details. The app_name field is an application identifier. The header may hold consensus-defined values and a timestamp in key-value pair format. The signature field, as the name suggests, holds the digital signature of the transaction. This signature can be multi-signed or single-signed, and its validation process is defined accordingly by the application itself. The developer needs to take care of transaction data sanitization and the other validations.

Figure (7): – User-defined Transaction Structure for Cryptocurrency Use Case

Figure (8): – User-defined Transaction Structure for Document Review Use Case

How does Transaction Structure support multiple applications?

As shown in Figure (6), the “txn” field contains the serialized transaction data of the user-defined application. That means the application developer has a free hand to develop the application business flow, the validation mechanism, the state management, etc. The only constraint is that the developer must implement the traits shown in lines 2 & 3 of figure (7) on its Transaction structure. Figure (7) shows a user-defined transaction structure for a cryptocurrency use case where one can trade money with others. Figure (8) shows a user-defined transaction structure for the Document Review use case.
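The wrapper pattern of Figure (6) can be sketched like this: an application-specific inner transaction is serialized into the txn field and paired with app_name, header, and signature. The serialization format and the hash-based "signature" here are stand-ins for the framework's real mechanisms, used only to show the shape of the structure.

```python
# Illustrative sketch of a generic signed-transaction wrapper.
import hashlib
import json

def make_signed_transaction(app_name, inner_txn, header, key):
    txn = json.dumps(inner_txn, sort_keys=True)  # serialized user-defined transaction
    # Stand-in "signature" binding the payload to the app and a secret.
    digest = hashlib.sha256((txn + app_name + key).encode()).hexdigest()
    return {"app_name": app_name, "txn": txn, "header": header, "signature": digest}

# A cryptocurrency-style inner transaction (user-defined payload).
signed = make_signed_transaction(
    app_name="cryptocurrency",
    inner_txn={"from": "alice", "to": "bob", "amount": 10},
    header={"timestamp": 1700000000},
    key="alice-secret",
)
print(sorted(signed))  # ['app_name', 'header', 'signature', 'txn']
```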

You can find both applications for your reference under the simpleblockchain/src/user module. If you find a bug or have a new idea, feel free to open a new issue.

State Channels: Use Cases and Applications

As I explained in the previous blog, State Channels address blockchain scalability, transaction fee, and privacy concerns. In this blog, we will look at potential use cases that can be implemented as decentralized applications (DApps) using State Channels, and the approach involved.

For a quick recap of the previous blog: a State Channel is a technique designed to allow users to make multiple transactions without committing all of them to the Blockchain. In the traditional State Channel, a smart contract defines the initial state of the State Channel. Users can carry out an effectively unlimited number of transactions outside the Blockchain by making continuous changes starting from the initial state. Any user can send the latest state as a closing statement to the smart contract on the Blockchain. Users can also verify the validity of each state transition using the smart contract on the Blockchain.

Off-chain, State Channels give instant finality and data privacy with negligible transaction fees. This goes a long way toward establishing their importance for DApp scalability. Gaming DApps can use the scalability potential of State Channels, wherein participants either buy or sell game artefacts or gamble, as in card games. Likewise, video streaming DApps can use State Channels to facilitate pay-per-use applications. Domains such as supply chain management and P2P micro-payments can also benefit from the scalability and privacy potential of State Channels.

Use Case of State Channels: Supply Chain Management

Businesses have expanded globally, complicating the whole ecosystem of supply chain management. Let us consider the case of the food supply chain. Have we ever wondered where our food comes from?

The supply chain in the food industry is defined by associating the following:

  • Crop Origination
  • Food Processing at Refineries
  • Distribution of processed food to retailers
  • Selling of Food Items to Consumers

Since the food supply chain involves millions of people worldwide and tons of raw materials and food crops, it becomes challenging for food manufacturers and consumers to know where the different components of a food item come from.

The supply chain’s contracts can be quite complicated, costly, and susceptible to errors due to the involvement of paper-based trails for the change of ownership, letters of credit, bills of lading, and complicated payment terms. Therefore, we can use State Channels to bring in transparency and real-time traceability in the whole supply chain system.

How do State Channels Help?

Supply chain management involves raw materials, manufacturing, distribution, and consumers. The quality of each raw material, the manufacturing processes, and the distributors are tagged uniquely using hashing algorithms.

Any product includes various raw materials and goes through different manufacturing processes. At the end of the production cycle, they are delivered to the consumer. Therefore, the product off-chain state should keep a record of raw materials, manufacturing processes, and shipping processes.

Figure (1): – State Channel between Supplier and Manufacturer

A supplier supplies different raw materials to the manufacturer. This process runs in its own State Channel. By the end of the purchasing process, the manufacturer has received raw materials from different suppliers. Since these states were already finalized, they are updated on the Blockchain and represented by non-fungible tokens.

Figure (2): – State Channel for manufacturing Process

The manufacturing step includes multiple processes and consumes a variety of raw materials. When the product is ready, its state will consist of the token tags of the raw materials used (from the previously minted tokens) and the tags of the manufacturing processes. The distribution process delivers products to retailers, and the end consumer can get them from the retailer.

Figure (3): – State Channel for the delivery process

Figure (4): – State Channel for Retailer

In each process, the off-chain states can include different details and can be verified using a specific verification pattern. These details can be the product's raw materials, manufacturing process information, or payment information. The final state then updates the ownership of the product and transfers the equivalent coins to the respective wallets on the Blockchain.

Here, we do not need to attach the original letters of credit or bills of lading on the Blockchain. Instead, we should attach a hash of each document, so that users can compare a document's hash with the hash recorded on the Blockchain and verify the authenticity of the original bills.
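The document-hashing idea above can be sketched in a few lines: only the hash of the bill is anchored on-chain, and any off-chain copy can later be checked against it. The example document text is, of course, made up.

```python
# Anchor a document's hash on-chain; verify off-chain copies against it.
import hashlib

def document_hash(doc_bytes):
    return hashlib.sha256(doc_bytes).hexdigest()

original = b"Bill of lading #123: 40 tons of wheat"
on_chain_hash = document_hash(original)          # this hash is stored on the Blockchain

tampered = b"Bill of lading #123: 45 tons of wheat"
print(document_hash(original) == on_chain_hash)  # True: authentic copy verifies
print(document_hash(tampered) == on_chain_hash)  # False: tampering is detected
```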

In Figure (4), the consumer can verify the product manufacturing details and the delivery process using the finalized state of the Figure (3) State Channel. The finalized state from the Figure (2) State Channel can be linked to the verification process of the State Channel in Figure (3). The finalized state of the raw-materials purchasing process from the Figure (1) State Channel is attached to the product being made in the State Channel in Figure (2).

This will bring transparency, privacy, and cost reduction in the whole supply chain management.

DApp Components

DApps built using State Channels are based on a similar set of components: every DApp consists of a user DB component that secures the off-chain states, and a client application responsible for interacting with the smart contract on the Blockchain and with the other participants.

The smart contract defines the details that become part of the state structure. The same state structure is used in the digital signature and verification process. For example, a State Channel state structure for a chess game must hold the chessboard state, the chessboard id, and the smart-contract address.

The client application opens a State Channel on the Blockchain, and the smart contract initializes the State Channel using the given inputs and predefined business logic. This business logic could require an on-chain confirmation from other participants before the State Channel goes into the active state. Once the State Channel is activated, the initial state is updated and available to all participants on the Blockchain. The client applications use this initial state and start off-chain transactions among themselves. The client applications must keep the latest off-chain state safe; for that, they can use separate secure DB components. An untampered latest state will prevent malicious actions by other participants.

Figure (5) shows a simplified architectural overview for a State Channel DApp.

Figure (5): – State Channel Generic Architecture


State Channels trade a small amount of security for speed, finality, and low transaction cost. We kept off-chain transaction states in a separate DB component to minimize this security trade-off. We also ensure the uniqueness of the state details across the smart contracts on the Blockchain network to prevent replay attacks.

I hope you find this article useful and insightful. To deep dive into some of our work around State Channels, you can check out the GitHub repository. Happy reading!


State Channels: An Introduction to Off-chain Transactions

In recent years, Blockchain technology has become a running theme, although its worldwide acceptance is still inconclusive due to scalability, anonymity, and transaction costs. In this article, I will explain how these issues are restricting Blockchain adoption across everyday applications. Let us assume that Alice and Bob are playing a game of chess built on top of Blockchain technology. For each move, a player has to pay the transaction fee and wait for the move's confirmation on the Blockchain, since a chess move requires state changes and state changes need to be committed on the Blockchain. Such confirmation times and validation fees put Blockchain technology out of reach for small, everyday transactions. Even if we set the transaction fee issue aside, current Blockchain solutions are not scalable for decentralized applications (DApps). State Channels address these concerns without significantly increasing the risk of any participant.

What is the State Channel?

A State Channel is a technique designed to allow users to make multiple Blockchain transactions, such as state changes or money transfers, without committing all of the transactions to the Blockchain. In a traditional State Channel, only two transactions are added to the Blockchain, but an infinite, or almost infinite, number of transactions can be made between the participants. For example, in a chess game built on top of State Channels, only the opening and closing moves need to be committed on the Blockchain. All other moves can be made off-chain without involving the Blockchain. These off-chain transactions require no fee and have instant finality.

A payment channel is an implementation of State Channels that deals with money transfers. A State Channel is a smart contract that enforces predefined rules for off-chain transactions. Each transaction creates a new state based on the previous state, signed by each party, which is cryptographically provable on the blockchain. Every new state invalidates the previous one, since the smart contract acknowledges only the highest state as valid.

State channels don’t have a “direction” because they are a generalization and a more powerful version of payment channels. Consider a unidirectional channel as one whose state is simply one state value: “Alice’s payment to Bob”. Consider a bidirectional channel as one that has two state values: “Alice’s balance” and “Bob’s balance”.

Working of State channels

In a State Channel application, each party must sign an initial (opening) channel transaction and deposit money according to the application's business logic. Users pay a predefined transaction cost each time they open a new channel or deposit money into an active channel. A deposit transaction deducts money from the depositor's account and transfers it to the smart-contract address. This deposit mechanism ensures that no double-spend can occur in the on-chain or off-chain network. The smart contract is not authorized to mint or destroy money; therefore, in each valid state, the participants' combined money equals the total deposited money, no more and no less. Figure (1) demonstrates the generic idea of State Channels.


Let us reconsider the example cited above. Alice and Bob want to open a payment channel since they are playing a tic-tac-toe game, and after each game they want to transfer money. Initially, they both sign the opening transaction and deposit 100 coins each. Alice and Bob pay transaction fees only at the time of channel opening, and they can then play an unlimited number of game rounds without paying transaction fees, with instant transaction finality. Assume they decide to leave the game after the nth round, and the latest state was Alice 75 and Bob 125. Either Alice or Bob can send a channel-closing transaction with the latest valid cryptographic state. It takes some time and a transaction cost to validate this closing transaction, which in turn sends the coins back to the respective wallets.
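The Alice-and-Bob channel above can be modeled as a toy sequence of off-chain states: each state carries a nonce, both parties "sign" it, and the contract honors the highest-nonce state at closing. The hash-based signatures and party secrets are simplifications standing in for real cryptographic signatures.

```python
# Toy payment-channel model: deposits are fixed, each off-chain state is
# signed by both parties, and the highest nonce wins at closing time.
import hashlib
import json

DEPOSITS = {"alice": 100, "bob": 100}

def sign(state, party_secret):
    blob = json.dumps(state, sort_keys=True) + party_secret
    return hashlib.sha256(blob.encode()).hexdigest()

def make_state(nonce, balances):
    # The contract cannot mint or destroy money: balances must sum to deposits.
    assert sum(balances.values()) == sum(DEPOSITS.values())
    state = {"nonce": nonce, "balances": balances}
    state["sigs"] = {p: sign({"nonce": nonce, "balances": balances}, p + "-secret")
                     for p in balances}
    return state

states = [make_state(n, b) for n, b in [
    (1, {"alice": 90, "bob": 110}),
    (2, {"alice": 75, "bob": 125}),   # latest state after the nth round
]]
closing = max(states, key=lambda s: s["nonce"])   # contract honors the highest nonce
print(closing["balances"])  # {'alice': 75, 'bob': 125}
```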

Payment Channel benefits over on-chain transactions


Negligible Transaction Fees

Participants pay validator fees only at the time of opening and closing channels. All other transactions are free, even if they number in the hundreds or thousands.

Instant Finality

On average, Bitcoin takes about 10 minutes to complete a transaction, and Ethereum takes 15 seconds to 5 minutes if you pay the regular gas price. That means if Alice made a move, the game would stop until the move is confirmed on the chain. In contrast, payment channel transaction finality depends only on the bandwidth of the network: the more the bandwidth, the faster the finality.


Privacy

All on-chain transactions are registered in the Blockchain ledger and are available in the public domain. Anyone can analyze this Blockchain data and get insights into individuals. In contrast, State Channel off-chain transactions are not committed to the Blockchain, except for the opening and closing transactions, which gives participants a considerable degree of privacy.


Scalability

Off-chain transactions do not change the on-chain state; therefore, payment channel apps are scalable. And if we build a network of payment channels, like the Raiden Network or the Lightning Network, we do not even have to open a direct channel between two parties if some indirect route exists, which further improves scalability.


Security

The security of payment channel states depends on how the smart contract validates the states and what information is included in them, such as (1) a state nonce, (2) the smart-contract address, (3) the channel id, and (4) state and stakeholder status. Each participant must digitally sign the current state to validate it. The aim of including this information in the state is to make each state universally unique, like UUIDs. The smart-contract address and channel id together prevent cross-contract and in-contract replay attacks.
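The uniqueness rule above can be sketched by building a state digest that includes the nonce, smart-contract address, and channel id: the same balances signed for one channel produce a different digest for any other channel, so a signature cannot be replayed. Field names and values are illustrative.

```python
# Replay-resistant state digest: the same payload is only valid in the
# exact channel and contract it was signed for.
import hashlib
import json

def state_digest(nonce, contract_addr, channel_id, payload):
    blob = json.dumps({"nonce": nonce, "contract": contract_addr,
                       "channel": channel_id, "payload": payload}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

d1 = state_digest(5, "0xContractA", "chan-1", {"alice": 75, "bob": 125})
d2 = state_digest(5, "0xContractA", "chan-2", {"alice": 75, "bob": 125})
print(d1 == d2)  # False: the same state is not valid in a different channel
```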

Remaining Challenges

A payment channel locks the deposited money in the smart contract and releases it only after the channel has closed. No one wants to lock up a massive amount of capital in a smart contract, which is why payment channels are best suited for micro-payments. Also, every state requires all participants to sign, which is why a single offline participant can stall the payment channel.

I hope you found this article to be insightful and useful. To deep dive into some of our work around state channels, you can check out the GitHub repository.

Should a blockchain node save all the transaction logs?


Blockchain is the technology that drives all cryptocurrencies. In each of them, a set of validator nodes is responsible for validating all the transactions. The validators are assumed to be rational and self-interested, i.e. they are only interested in making as much money as possible for themselves. Under such assumptions, it is generally assumed that a required majority of the validators will agree on the sequence of transactions that have ever happened on the blockchain.

However, such blockchain validator nodes are generally expensive in terms of the disk space they need. The oldest and most popular cryptocurrency, Bitcoin, for example, needs about 200 GB of disk space to store the entire transaction log. This makes a high-speed connection and a lot of time necessary to even get started on mining or validation. This problem prompted researchers to suggest sharding as a solution, i.e. storing only part of the log in each node while the network as a whole stores the entire log. Sharding comes with its own challenges when it comes to validating transactions.

But, does it make sense for a node to store the transaction log starting all the way from the genesis block? This is an important question that needs to be answered before such solutions are crafted.

To understand whether saving the entire transaction log is possible or necessary, we consider the following points –

1. Is it necessary to store the entire transaction log to validate transactions?
2. Is it more secure to store the transaction log than not storing it?
3. Is it possible to incentivize the validator nodes to store the log?

We take on these points in the rest of the article.

Is it necessary to store the entire transaction log to validate transactions?

To validate a transaction, the only thing a node needs to know is the state of the blockchain right before the transaction. It is immaterial how that state was achieved. So, it is enough to store the state after each block. In fact, we can go even further – since blocks that were mined long before the current time are hardly ever going to be undone, it is safe to delete all the earlier blocks.

The natural question that comes to mind is whether this would somehow compromise the safety of the blockchain, i.e. would it somehow make the blockchain approve a transaction that is not correct based on its current state? To answer this question, we move on to the next section.

Is it more secure to store the transaction log than not storing it?

When thinking about this, we need to think in terms of the security of whole transactions, not just the part on the blockchain. Most transactions have two parts – a payment on the blockchain and the receipt of something of value in exchange. The second part of the transaction is not stored in the blockchain. This means the seller who sells a product or service in exchange for some cryptocurrency relies on the blockchain not to revert the transaction after he/she provides the product or service. This, in turn, means that there has to be a reasonable time after which the transaction becomes completely immutable, requiring the block in which it is included to be immutable too. In other words, we need some finality of the transaction and the block. When a block is finalized in some form or other, it is okay to forget what happened before it and simply continue as if that block were the genesis block. In general, this means that the transaction history needs to be stored only for a short period of time. In the case of Bitcoin, people normally assume that a block is pretty much finalized after an hour, so it makes sense to delete all the history before that. That means, in the case of Bitcoin, a validator only needs the list of UTXOs at the end of each of the last few blocks.
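The pruning argument above can be sketched with a heavily simplified UTXO model: a validator holding only the current UTXO set can both accept valid spends and reject double-spends, with no access to historical blocks. The transaction format here is a made-up simplification of Bitcoin's.

```python
# A validator with only the current UTXO set can fully validate new spends.
utxo_set = {("tx1", 0): 5, ("tx2", 1): 3}   # (txid, output index) -> amount

def validate(tx):
    """tx spends some UTXOs and creates new outputs; inputs must exist and cover outputs."""
    if not all(i in utxo_set for i in tx["inputs"]):
        return False                          # spends an unknown/already-spent output
    spent = sum(utxo_set[i] for i in tx["inputs"])
    return spent >= sum(tx["outputs"].values())

tx = {"inputs": [("tx1", 0)], "outputs": {("tx3", 0): 4}}
print(validate(tx))       # True: valid against the current UTXO set alone

# Spending an output absent from the set fails without consulting any history.
tx_bad = {"inputs": [("tx0", 0)], "outputs": {("tx4", 0): 1}}
print(validate(tx_bad))   # False
```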

We now turn towards our final point.

Is it possible to incentivize the validator nodes to store the log?

As stated earlier, the validators are assumed to be selfish and rational, which means they need to be paid or rewarded for doing anything. Since validators only get paid in cryptocurrency for mining blocks in all public blockchains, that is the only thing they should be doing. We have also seen that storing the data is not necessary for the job of validating. Therefore, there is no incentive for any validator to store the entire transaction log starting from the genesis. If we indeed want a rational validator to store the entire history of transactions, we must sufficiently incentivize it to do so. We may want to require proof of storage of all the transactions in a block for it to be considered valid. Is it possible to do this? Yes indeed, if we change the proof-of-work consensus protocol as follows –
1. Let a block being proposed be B and the corresponding proof-of-work be p. This means that p is the nonce such that hash(B||p) < threshold.
2. Let the number of transactions before that block be N.
3. Let h = hash(B||hash(p)), where || means concatenation.
4. Compute the remainder r when h is divided by N. Since h is typically a 256-bit number, it is almost always much greater than N, so r is less than N and can be considered a random sample from the set of natural numbers less than N.
5. Let the transaction with the sequence number r be tr. The transactions are indexed from 0 to N-1.
6. Now, the block proposal must be (B||p||tr) for it to be considered valid.
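The six steps above can be sketched as a toy miner: after finding the proof-of-work p, the miner derives r from hash(B||hash(p)) and must attach transaction tr from the full history. The difficulty threshold and transaction contents are illustrative; a real deployment would use actual block serialization and much harder difficulty.

```python
# Toy proof-of-storage mining: the proposal must embed a transaction t_r
# chosen pseudo-randomly from the entire history, so pruning miners must
# redo the proof-of-work whenever they lack t_r.
import hashlib

THRESHOLD = 2 ** 244                          # toy difficulty (~1 in 4096 hashes)
history = [f"tx-{i}" for i in range(1000)]    # all N transactions since genesis

def h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def mine(block: bytes):
    p = 0
    while h(block + str(p).encode()) >= THRESHOLD:   # step 1: find p with hash(B||p) < threshold
        p += 1
    # steps 3-5: h = hash(B||hash(p)), r = h mod N, pick transaction t_r
    r = h(block + hashlib.sha256(str(p).encode()).digest()) % len(history)
    return p, history[r]                             # step 6: proposal is (B||p||t_r)

p, t_r = mine(b"block-data")
print(t_r in history)  # True: a full-history miner always has t_r available
```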

It is easy to see that if a validator does not store the entire transaction log, it needs more effort to generate valid blocks, since it has to throw away the proof-of-work and start over whenever the corresponding transaction is not stored by it. More specifically, if a validator stores only a fraction f of the history, it has to throw away the proof-of-work 1/f times on average, significantly reducing its average mining reward for the same amount of processing power. It is, therefore, most efficient for any validator to simply store all the transactions from the genesis block. But such a system is not currently used, so the validators are really doing a social service by still storing all the transactions. However, from our discussion on security, it is clear that such a modification may be quite unnecessary.

Another issue is that when a new validator joins, it must download the entire transaction log from its peers. The peers, however, are not incentivized at all to provide this data to the new peer; they are simply adding competing mining power while burdening their own bandwidth to broadcast the information. The least we can do is relieve them from having to broadcast the entire history of transactions.


We can see that it is possible to mandate the storage of the entire transaction history if we choose to design the blockchain that way, although it seems quite unnecessary and cumbersome. It may, though, provide more motivation for the existing miners to provide the needed data to any new joiners. We have also seen that storing the transaction history provides no advantage in terms of the security of the blockchain, so we might as well not do so.