Database Partitioning – A Divide and Rule Strategy

https://github.com/priyakartalentica/postgresqlPartitioningTable.git

Introduction

Imagine a dump yard full of scrapped cars. If you have to find a particular Ford Mustang there, you might spend days before locating the right one. Now think of a trip to Walmart. If you have to find a needle there, it will take hardly a few minutes. Why? The answer is proper partitioning: everything sits in a clearly labelled aisle and shelf. Such segregation is a must for effective operation.

The same is true for data and databases. A high volume of data leads to slower reads and writes; both improve when you implement partitioning well.

 

What is Partitioning?

Partitioning is the practice of dividing a large table into smaller physical pieces. Depending on the situation, a table can be split either vertically or horizontally.

When to Partition a table

The precise point at which partitioning starts to pay off depends on the application design, but two general guidelines for taking the decision are:

  1. The table has grown very large.
  2. As a rule of thumb, partition when the table's size exceeds the database server's physical memory.

Types of Partitioning

  1. Vertical Partitioning

If your table has grown fat, i.e., it has too many columns (often a major cause of slower writes), consider whether all the columns belong in one table or whether some can be normalized out into separate tables. This form of partitioning is also known as “row splitting”, since each row is split across tables by its columns.

  2. Horizontal Partitioning

If your table has grown tall, with a huge number of records, table scans become slow. For such cases, indexing might be a good solution: an index stores pointers to rows so that you can reach the data quickly, just like the index section of a book, where you look up a keyword to get the page number. It speeds up getting your hands on the content you want. But as the number of records grows, even indexed operations slow down. Consider a hypothetical situation where the index of a book grows so huge that it starts to consume more of your reading time; a possible way out would be dividing the book into logical volumes. Similarly, when your table grows massive, think of sharding, which is part of the horizontal partitioning strategy.

Sharding creates replicas of the schema and then divides the data across partitions based on a key. The load should be distributed evenly across shards based on data-access patterns and space considerations. Horizontal partitioning thus classifies different rows into different tables, and can be further classified as follows:

If you have user data, as Facebook or LinkedIn do, you might partition it based on regions, or on a list of cities within a region; this is a list-based partitioning strategy.

In the case of a table storing sales data, you can partition it month-wise; this is range-based partitioning, which maps data to partitions based on ranges of values of the partitioning key that you establish for each partition.
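To make the two strategies concrete, here is a toy, application-side sketch of how a partitioning key selects a partition. In PostgreSQL itself the server performs this routing from your declarative partition definitions; the partition names and city lists below are invented for illustration.

```cpp
#include <map>
#include <string>
#include <vector>

// List-based: each partition owns an explicit list of key values.
std::string ListPartitionFor(
    const std::string& city,
    const std::map<std::string, std::vector<std::string>>& lists) {
    for (const auto& partition : lists)
        for (const auto& c : partition.second)
            if (c == city) return partition.first;
    return "default";  // fallback partition for unlisted values
}

// Range-based: each partition owns a contiguous range of the key
// (e.g. month-wise sales data grouped into quarters).
std::string RangePartitionFor(int month) {
    if (month <= 3) return "sales_q1";
    if (month <= 6) return "sales_q2";
    if (month <= 9) return "sales_q3";
    return "sales_q4";
}
```

The database's query planner uses exactly this kind of mapping in reverse: given a WHERE clause on the partitioning key, it prunes every partition whose list or range cannot match.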

Benefits of Partitioning

    1. You can radically improve query performance when frequently accessed rows are concentrated in a single partition or a small number of partitions.
    2. When a query or update touches a large percentage of a single partition, performance can improve by using a sequential scan of that partition instead of an index plus random-access reads scattered across the whole table.
    3. Bulk loads and deletes (aligned with the partitioning criteria) can be done efficiently by adding or removing whole partitions.
    4. Rarely used data can be migrated to cheaper, slower storage media, saving cost.

Problem Statement / POC for Horizontal List-Based Partitioning

Recently, we faced a situation where reads and writes were extremely slow on a PostgreSQL database. On examining it, we found that the table had around 10 million rows. We replicated the scenario and carried out a benchmarking exercise to see how each use-case behaves, given that each table has 10 million records. The use-cases we tested are:

    1. Normal table without indexes
    2. Table with indexes
    3. Partitioned table without Indexes

Steps to Carry out the Benchmarking Exercise

    1. Started a Docker instance with PostgreSQL and pgAdmin running. A docker-compose.yml file is present in the GitHub repo to spawn a similar instance for you.
    2. Created 3 tables (normal, indexed, and partitioned), each with 10 million records. Scripts to create the tables and insert the records are in the GitHub repo linked below.
    3. Each table had the following fields:
      1. item_id
      2. item_name
      3. store_id
      4. category_id
      5. country_id
      6. retailor_id
      7. score
    4. Data Distribution in Table: 10 million items were evenly distributed across 10 categories, 50 stores, and 20 retailers across 5 countries.
    5. Background of the client – Our client had a chain of stores wherein customers can enquire about items in a store’s inventory.
    6. Partitioning strategy – We observed that the data was fairly evenly distributed across stores, and customers visiting a store would query the data only for that store. So, in our scenario, store_id was chosen as the partition key.
    7. Ran the following queries against each table using pgAdmin. The queries are as follows:
        1. select avg(score) where store_id=?
        2. insert 100 records for a store_id
        3. update 100 records for a store_id
        4. select avg(score) where category_id=?
        5. insert 1000 records for a store_id
        6. update 1000 records for a store_id
        7. insert 10000 records for a store_id

        8. update 10000 records for a store_id

Benchmarking Observations

From the graph and the readings in the table, it is clear that partitioned tables performed better than indexed tables under higher data loads. An eye-opening observation is that indexing can actually slow your system down as the data load increases, and may even trigger a full scan itself.

Read Operations

For instance, when every store has around 20K records and every category has around 2 lakh (200K) records, performing an aggregate function on the normal table takes 655 ms and on the indexed table 125 ms. The partitioned table outperforms both, completing the task in 97 ms.

While performing an aggregate function over 2 lakh records, the normal table takes 279 ms and the indexed table fetches the result in 62 ms. Here the partitioned table scores marginally better, returning the output within 57 ms. These numbers grow in sync with the data load.

So, for reads in the given scenario, a partitioned table is the best choice, followed by the indexed table, with the plain unindexed table the least preferred.

Inserts and Updates (Write Operations)

We carried out the write operations, consisting of inserts and updates, in batches of 100, 1000, and 10000 items. Each insert/update used random categories and random scores so that PostgreSQL would not introduce its own optimizations.

After carrying out the inserts, we observed that the normal table performed better than the indexed table, while the partitioned table outperformed both, with lower response times for bulk or higher loads. Writes were faster because partitioning was done by store: all the data in a batch was written to a single store, i.e., one partition, without the overhead of maintaining indexes.

In short, partitioning performed way better than the indexed table in terms of reads and writes with the growing data load.

Conclusion

Partitioning divides logical data sets into multiple entities to improve performance, availability, and maintainability. Make the decision based on your application type, and go for it if your tables grow too large and none of the other optimization techniques works anymore. Under heavy data load, partitioning will improve read response times and perform better for writes.

GitHub link for the scripts

https://github.com/priyakartalentica/postgresqlPartitioningTable

References:

https://dzone.com/articles/how-to-handle-huge-database-tables

https://www.postgresql.org/docs/current/ddl-partitioning.html

Blockchain Interoperability Solution: How Chainbridge Can Be A Way Out?

Blockchain technology offers promising results. Its potential for improving business processes, providing transactional transparency and security in the value chain, and reducing operational costs is obvious.

The past few years have seen continuous growth in blockchain-related projects. It signifies that developers are leveraging blockchain’s capabilities by thinking outside the box. Besides, we have to understand there is no perfect solution to address all blockchain needs at once.

Each day, the number of solutions that rely on blockchain technology increases, but the technology’s evolution is taking a hit due to the lack of interoperability among blockchain solutions. Many solutions for blockchain interoperability are available, each with its pros and cons. I have used one such solution, Chainbridge, an extensible cross-chain communication protocol that currently supports bridging between EVM- and Substrate-based chains.

In this blog, we’ll discuss a specific supply-chain-management use case that uses blockchain interoperability between Substrate and Ethereum chains.

The Use Case

Blockchain holds great promise in the area of supply chain management. It can improve supply chains by enabling faster and more cost-efficient product delivery, enhancing product traceability, improving coordination between partners, and aiding access to financing.

The use case I have built is a simplified version of champagne-bottle supply-chain management. I did so to learn and showcase Substrate development and the interoperability between the Substrate and Ethereum blockchains. Using this application, the end consumer can track and verify the authenticity of a champagne bottle.

Every bottle created will have a unique id, which we will use to track it further down the process. The use-case source code can be found here. Now let’s look at the list of actors and their respective roles in the system.

Actors and Roles

For simplicity, I will go with these four types of actors. Their roles are as follows.

Manufacturer

  • Bottle Creation – Creates and registers new bottles in the system.
  • Shipment Registration – Create a new shipment, assign a carrier to complete the delivery, and provide the retailer details and bottles to be delivered.

Carrier

  • Pickup Shipment – Picks up the shipment from the manufacturer.
  • Deliver Shipment – Delivers the shipment to the retailer.

Retailer

  • Sell Bottle – Sells the bottles to the end customer.

Customer

  • Buy Bottles – Buys bottles from the retailer.

System Modules or Substrate Pallets

The entire supply chain process of the application is built with two modules.

Registrar: The registrar pallet is responsible for registering and keeping a record of the various actors and bottles in the system. It exposes functions like registerManufacturer(), registerCarrier(), etc., to register members of a particular type. A manufacturer can invoke the registerBottle() function to register a new bottle with a unique id in the system.

Bottle Tracking: The bottle shipment process is tracked using this module. The functions registerShipment() and trackShipment() track a bottle from shipment registration to delivery to the retailer. For the final sale, the sellToCustomer() function is called, which transfers the bottle’s ownership to the end customer.

Chainbridge pallets –

Chainbridge, example-pallet, and example-erc721: Chainbridge provides these three pallets for interchain communication. Follow Chainbridge-substrate for their documentation.

Process Flow

Let’s discuss the entire process from bottle creation to end customer sales, step by step. We’ll have a look at the external function used to interact with the application for each step.

Member Registration

First of all, we have to register the various actors in the system. There are four types of actors, so we use four functions of the registrar pallet: registerManufacturer(), registerCarrier(), registerRetailer() and registerCustomer(). All four have the same signature; one is explained below.

Function Signature:

registerManufacturer()

It takes no arguments; the caller of the function is registered as a manufacturer. There can be any number of manufacturers, carriers, etc. If there are multiple manufacturers, each can invoke this function separately to register themselves. The same goes for carriers, retailers, and customers.

Bottle Creation

A manufacturer can register a new bottle.

Function Signature:

registerBottle(id: BottleId)

The manufacturer invokes this method from the registrar pallet, providing the BottleId to be registered.

Shipment Registration

The shipment will be registered by the manufacturer.

Function Signature:

registerShipment(id: ShipmentId, carrier: AccountId, retailer: AccountId, bottles: Vec<BottleId>)

To register a shipment, the manufacturer provides a unique ShipmentId, the account id of the carrier who will carry the package, the account id of the retailer to whom the shipment has to be delivered, and the list of bottle ids to be delivered.

Shipment Pickup and Delivery

The assigned carrier will pick up the shipment from the manufacturer and deliver it to the retailer.

Function Signature:

trackShipment(id: ShipmentId, operation: ShipmentOperation)

To track a shipment, the carrier provides the ShipmentId and the operation to perform on it. ShipmentOperation is an enum that holds two values: Pickup and Deliver. After the Deliver operation completes, the bottles are in the ownership of the retailer.

End Customer Sell and Payments

In all the steps performed so far, it is assumed that payments are made off-chain. To sell bottles to the end customer, I have assumed that the process is initiated off-chain, where both parties (customer and retailer) agree on how many bottles are to be sold and the total amount. Once the customer transfers the agreed amount to the retailer’s Substrate account, the off-chain system automatically invokes a method on the Substrate chain, sellToCustomer(), which only transfers ownership of the bottles to the customer.

Function Signature:

sellToCustomer(customer: AccountId, bottles: Vec<BottleId>)

This function will be invoked using the retailer’s account, providing the customer’s account id and the bottles to be sold.

The customer can transfer the amount to the retailer’s Substrate account by any means. But from our interchain-interoperability perspective, let us assume the customer holds some Ethereum smart-contract tokens, which hold value equivalent to the Substrate native token. Now the customer wants to transfer these Ethereum tokens directly to the retailer’s Substrate account. This interchain communication can be achieved using Chainbridge, a cross-chain communication protocol.

Chainbridge

Chainbridge is a modular multi-directional blockchain bridge. Currently, it supports interoperability between Ethereum and Substrate-based chains. There are three main roles:

    • Listener: extracts events from a source chain and constructs a message.
    • Router: passes the message from the Listener to the Writer.
    • Writer: interprets the message and submits a transaction to the target chain.
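As a rough, illustrative sketch of these three roles wired together (the real Chainbridge relayer is written in Go and is considerably more involved; every name below is invented):

```cpp
#include <functional>
#include <string>
#include <vector>

struct Message {            // carries a deposit event between chains
    int sourceChain;
    int destChain;
    std::string payload;
};

struct Listener {           // extracts events and constructs messages
    Message Extract(const std::string& event) const {
        return Message{0, 1, event};  // chain ids are placeholders
    }
};

struct Writer {             // submits transactions to the target chain
    std::vector<std::string> submitted;  // stands in for target-chain txs
    void Write(const Message& m) { submitted.push_back(m.payload); }
};

struct Router {             // hands messages from the Listener to the Writer
    std::function<void(const Message&)> writer;
    void Route(const Message& m) { writer(m); }
};

// Wiring the pipeline end to end: event in, target-chain tx out.
std::string Relay(const std::string& event) {
    Writer w;
    Router r{[&w](const Message& m) { w.Write(m); }};
    r.Route(Listener().Extract(event));
    return w.submitted.front();
}
```

The point of the split is that each role can be swapped independently: a relayer can listen on one chain type and write to a completely different one.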

Both sides of the bridge have a set of smart contracts (or pallets, on the Substrate side), each with a specific function:

    • Bridge – users and relayers interact through the bridge contract. It initiates transactions on the source chain, executes proposals on the target chain, and delegates calls to the handler contracts for deposits.
    • Handler – validates the parameters provided by the user and creates a deposit/execution record.
    • Target – as the name suggests, the contract we interact with on each side of the bridge.

The diagram below summarizes the workflow of Chainbridge:

Chainbridge currently relies on trusted relayers. However, it has mechanisms to stop power abuse and mishandling of funds by any single relayer.

Chainbridge Setup

Prerequisites

Starting Local Chains

Follow the instructions at the provenance usecase repo to start the Substrate chain.

The command below will start the geth instance.

docker run -p 8545:8545 chainsafe/chainbridge-geth:20200505131100-5586a65

Ethereum Chain Setup

Deploy Contracts

To deploy the contracts onto the Ethereum chain, run the following:

cb-sol-cli deploy --all --relayerThreshold 1

Register fungible resource

cb-sol-cli bridge register-resource --resourceId "0x000000000000000000000000000000c76ebe4a02bbc34786d860b355f5a5ce00" --targetContract "0x21605f71845f372A9ed84253d2D024B7B10999f4"

Specify Token Semantics

# Register the erc20 contract as mintable/burnable

cb-sol-cli bridge set-burn --tokenContract "0x21605f71845f372A9ed84253d2D024B7B10999f4"

# Register the associated handler as a minter

cb-sol-cli erc20 add-minter --minter "0x3167776db165D8eA0f51790CA2bbf44Db5105ADF"

Substrate Chain Setup

Register Relayer

Select the Sudo tab in the PolkadotJS UI. Choose the addRelayer method of chainBridge, and select Alice as the relayer.

Register Resources For Fungible Transfer

Select the Sudo tab and call chainBridge.setResource with the below method parameters:

Id: 0x000000000000000000000000000000c76ebe4a02bbc34786d860b355f5a5ce00

Method: 0x4578616d706c652e7472616e73666572 (utf-8 encoding of “Example.transfer”)

Whitelist Chains

Using the Sudo tab, call chainBridge.whitelistChain, specifying 0 for the Ethereum chain ID.

Running A Relayer

Here is an example configuration for a single relayer (“Alice”) using the contracts we’ve deployed. Save this JSON inside a file and name it config.json.

{
  "chains": [
    {
      "name": "eth",
      "type": "ethereum",
      "id": "0",
      "endpoint": "ws://localhost:8545",
      "from": "0xff93B45308FD417dF303D6515aB04D9e89a750Ca",
      "opts": {
        "bridge": "0x62877dDCd49aD22f5eDfc6ac108e9a4b5D2bD88B",
        "erc20Handler": "0x3167776db165D8eA0f51790CA2bbf44Db5105ADF",
        "erc721Handler": "0x3f709398808af36ADBA86ACC617FeB7F5B7B193E",
        "genericHandler": "0x2B6Ab4b880A45a07d83Cf4d664Df4Ab85705Bc07",
        "gasLimit": "1000000",
        "maxGasPrice": "20000000"
      }
    },
    {
      "name": "sub",
      "type": "substrate",
      "id": "1",
      "endpoint": "ws://localhost:9944",
      "from": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY",
      "opts": {
        "useExtendedCall": "true"
      }
    }
  ]
}

First, pull the Chainbridge docker image.

docker pull chainsafe/chainbridge:latest

Then start the relayer as a docker container.

docker run -v $(pwd)/config.json:/config.json --network host chainsafe/chainbridge --testkey alice --latest

With this setup complete, now we should be able to do fungible transfers over the two chains.

Interchain token transfer

Substrate Native Token ⇒ Ethereum ERC 20

In the Polkadot JS UI select the Developer -> Extrinsics tab and call example.transferNative with these parameters:

    • Amount: 1000 Unit
    • Recipient: 0xff93B45308FD417dF303D6515aB04D9e89a750Ca
    • Dest Id: 0

To query the recipient’s balance on Ethereum, use this:

cb-sol-cli erc20 balance --address "0xff93B45308FD417dF303D6515aB04D9e89a750Ca"

Ethereum ERC20 ⇒ Substrate Native Token

If necessary, tokens can be minted:

cb-sol-cli erc20 mint --amount 1000

Before initiating the transfer we have to approve the bridge to take ownership of the tokens:

cb-sol-cli erc20 approve --amount 1000 --recipient "0x3167776db165D8eA0f51790CA2bbf44Db5105ADF"

To initiate a transfer on the Ethereum chain, use this command (note: there will be a 10-block delay before the relayer processes the transfer):

cb-sol-cli erc20 deposit --amount 1 --dest 1 --recipient "0xd43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d" --resourceId "0x000000000000000000000000000000c76ebe4a02bbc34786d860b355f5a5ce00"

Chainbridge Findings

    • At the time of writing this blog, the Chainbridge-substrate pallet is not supported by the substrate-parachain-template, which makes it hard to use the bridge with Polkadot parachains.
    • On the Substrate chain, the provided Chainbridge-substrate handlers do not mint/burn Substrate native tokens, which leads to inconsistency. Example case:
      • When sending tokens from Substrate -> Ethereum, on the Substrate chain the tokens are transferred to the Chainbridge account, and on the Ethereum chain tokens are minted/released to the receiver’s account.
      • After this transaction completes, whatever amount the bridge account received on the Substrate chain becomes available for transfer in a later Ethereum -> Substrate transaction.
      • This implies that if our first transaction transfers tokens from Ethereum -> Substrate, we receive nothing on the Substrate chain, because the bridge account has no tokens to transfer. In such cases, the transaction is not reverted, and inconsistent balances are stored on both sides.
    • It is a fully trusted system; admin users are in control of critical components. For instance, minting an infinite amount of tokens or withdrawing all the tokens from the Bridge contract on the original chain is a risk.
    • Users must fully trust the admin and the relayer-based consensus, not only when using the Bridge but also when using tokens created by the Bridge (tokens on a foreign blockchain that represent the original tokens).
    • Efficient operation of the whole system depends on a significant amount of configuration and additional manual work.

Conclusion

Blockchain-based networks are being built to offer specific capabilities. These different networks should be able to share their data and talk to each other to make the most of those capabilities, which makes blockchain interoperability a must.

Chainbridge offers an excellent solution with its modular and multi-directional design. The team has been continuously working on enhancing the bridge system, and their community channels are very active and supportive. With version 1.0, Chainbridge has built the scaffolding necessary for message-passing functionality. Upcoming versions can deliver this functionality in a decentralized and trustless manner, ensuring there are no central points of failure.

How to Break and Test Large Classes with a Friendly Builder Pattern?

Before we dive into breaking and testing large classes, we need to understand what a large class actually is. A common maxim for avoiding unnecessary problems is “prevention is better than cure,” so we will first discuss how to avoid them.

I will then discuss why it is difficult to avoid large classes in a fast-paced development environment, especially at the initial development phase. We will then focus on fixing and refactoring it.

If you ask me, I would say the “friendly builder” is the right pattern for tackling large classes. I have experience handling it in one of our projects at Talentica. Since we implemented it in a C++ project, I will discuss it in C++ here. However, the pattern can be used in C# and Java as well, via inner-class constructs.

What is a large class?

A large class is one that has more than one responsibility. Most of the time, while developing a product, we tweak existing features a bit, and developers add the new code to the existing classes.

Gradually, the classes become bulky. This happens in line with the Law of Least Effort: in simpler words, when deciding between similar options, people naturally gravitate towards the one that requires the least effort. After all, they just need to add a little code, and maybe a few more methods, which appear harmless at the time.

All these do not happen overnight. It is a slow process and equally difficult to manage as the definition of the large class is as nebulous as the Single Responsibility Principle.

What is the problem with so-called large classes?

Well, when a class has many methods (hence many responsibilities) and member variables, understanding the effects of changing a particular method or variable becomes complicated. Sometimes this unsettles a developer, especially when the required changes are invasive and demand a thorough understanding. On top of that, these bulky classes are inherently difficult to test.

How to avoid leading to a large class?

As we said, prevention is better than cure. There are two main methods, described by Michael C. Feathers in his book Working Effectively with Legacy Code:

Sprout Method: If you have added a feature to a system, which can be entirely formulated as new code, take the pain to write the code anew. Call it from the places where the new functionality needs to be. Assuming that you can create objects of that class in a test harness easily, you can write tests for the new code. This technique works as long as the new code is not an added responsibility to the existing class.

Sprout Class: Consider a case in which you have to make changes to a class, but there is no way to create objects of that bulky class in a test harness in a reasonable amount of time. It means there is no way to sprout a method and write tests for it on that class. Maybe you have a large set of creational dependencies, things that make it hard to instantiate your class, or you have many hidden dependencies.

To get rid of them, you need to do many invasive refactorings to separate them well enough to compile the class in a test harness. In these cases, you can create another class to hold your changes and use it from the source class. Moreover, if the added code is a piled-on responsibility, it is always better to delegate it to a new class. The good side is that we can test the newly added code since it resides in a separate class.
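As an illustration with hypothetical names (the duplicate-entry idea echoes an example from Feathers’s book, but this sketch is mine, not from any specific codebase): the new responsibility gets its own small, testable class, and the legacy class merely delegates to it.

```cpp
#include <set>
#include <vector>

// Sprout class: the new responsibility (filtering out duplicate entry
// ids) lives in its own small class instead of being piled onto the
// existing bulky class.
class DuplicateFilter {
    std::set<int> seen_;
public:
    // Returns only the ids not seen before; trivially testable in isolation.
    std::vector<int> Filter(const std::vector<int>& ids) {
        std::vector<int> fresh;
        for (int id : ids)
            if (seen_.insert(id).second)  // insert() reports whether id was new
                fresh.push_back(id);
        return fresh;
    }
};

// The existing large class only delegates to the sprout:
class TransactionGate {
    DuplicateFilter filter_;
public:
    void PostEntries(const std::vector<int>& ids) {
        for (int id : filter_.Filter(ids)) {
            (void)id;  // the existing, hard-to-test posting logic runs here
        }
    }
};
```

DuplicateFilter can be unit-tested without ever instantiating TransactionGate, which is the whole point of sprouting.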

Are these two methods sufficient?

When the product is in the MVP stage or a very early phase, very few developers work on the product idea, and deadlines are tight. In this phase, the goals are different: the aim is to enter the market quickly, get early feedback, understand the need, and improve rapidly to achieve market fit.

Moreover, the so-called large classes hurt most when new developers, or several developers at once, work on them. This typically happens in the growth phase, because companies start hiring more people then. During this phase, the focus shifts to maintaining development speed without breaking existing functionality. We need to set the quality bar very high at this stage to ensure customers don’t have a negative experience.

People love to argue about doing MVPs with TDD vs. without TDD, but we will suspend judgment for now. We will explore systematic ways to handle it if we have to work with large classes. Believe me, most often, we do end up with large classes in our code.

How to break a Large Class?

Let’s discuss what tools are available to break a large class. The one I find most useful is the “feature sketches” method (again from Michael C. Feathers’s book). It is a great tool for finding the different responsibilities in a class. Let’s take an example.

The first step is to draw circles for each method and member variable and then draw lines from each method to the instance variables and methods it accesses or modifies. For the example taken, we will have the following feature sketch.

As you can see, a few clusters are forming; take at least two. These clusters can help us find the different responsibilities. But beyond that, feature sketches let us see the dependency structure inside a class, and that can often be just as important as responsibility when deciding what and how to extract or refactor.

The Dilemma

All this is good, but the bigger question is: should we break the class into little bits now? If you have time (especially at the start of a sprint), do it. Most often, that is not the case. Moreover, after any large refactoring exercise, the product will be in a broken state for a few days, even if developers refactor very carefully.

What is the way forward once you identify the responsibility and you are short of time to refactor or don’t want to take the risk of large refactoring, at least for the time being?

It would be good if we could test these identified responsibilities without refactoring while still making the required changes. That way, your code is tested, and things become easier when you later decide to refactor, which is inevitable.

Testing responsibilities using Friendly Builder Pattern

The goal is to make very few changes to the large class but still be able to test the identified responsibilities. A common issue while testing large classes is that they are difficult to create/instantiate in the test harness because of multiple creational dependencies; being large, the class is bound to have many. Consider the following code.

Here is the ProgressReport class.

Below is the classroom class.

If you want to test the GetAverageMarksInEnglish method, you don’t need the complete ProgressReport object; you need only the English marks in it. Similarly, to test the GetStudentNameWithHighestMarkInEnglish method, we need the English marks and the student names. You require the full ProgressReport object only if you want to calculate the grade. So we may not need to populate the whole object to test a particular function.

Also, the “Classroom” class holds both the daily class routine and the progress reports of all students. To test GetAverageMarksInEnglish, we don’t need the daily-class-routine object; it can be a null or empty object. So suppose we somehow get access to the private methods and variables of a class. In that case, we can populate the object partially, as per the testing requirement of the identified responsibility.

A friend in need is a friend indeed

In C++, the friend concept is indeed your real friend here: it provides full access to a class’s internals. You must have heard about the builder pattern too; it is a creational design pattern for the step-by-step construction of complex objects (here, the large class). The pattern lets you produce different representations of an object using the same construction code. The different representations here are the partially populated objects of the large class needed to test the different targeted responsibilities.

In a nutshell, here is what you need.

  • Every class should have an empty constructor (if not, then constructor with minimal parameters for aggregation/dependency injection pattern)
  • Define the builder class as its friend to get access to the class internals.

You can create different representations (i.e., Create_R1, Create_R2, etc.) to test the different responsibilities, i.e., GetAverageMarksInEnglish and GetStudentNameWithHighestMarkInEnglish. You may already have figured out that you could have a class that just calculates average marks given some marks as input; that is, calculating the average can move to a different class. Until you refactor your code, you can keep one create method per responsibility in the builder class and test them.

You may argue that the ProgressReport class is too simple. What if the large class has an aggregation relation with some other classes? Aggregation uses references (in C++) as the member object is not owned by the referring class, and their lifetimes are not bound, unlike in a composition. So you won’t be able to create an empty constructor. For example, “Classroom” cannot have a parameterless constructor because ProgressReport exists independently. In this case, we need to create a constructor with a minimum number of parameters, i.e., at least a “ProgressReport”. If you implement a similar pattern for these referenced member-object classes (as we did for ProgressReport), it is not that difficult to bring them into the test harness.

Test code:

The above approach should work for composition and aggregation. Still, if the large class uses inheritance and you need access to some base class’s private variable, you may have to change it from private to protected, because a friend class cannot access the private members of the base class.

The real limitation arises when your class depends on a third-party library object, as you can’t go and modify that class. Here, you have to extract the dependency behind an interface so that you can inject/initialize a fake test object in your builder class.

What Next?

Suppose we never refactor our code and keep implementing this pattern again and again as we do new development. In that case, we introduce the overhead of maintaining the builder class, which will eventually become bulky over time. The next step is to break the large class along the identified responsibilities, which you can refactor safely because you have the unit test cases at your disposal. Since this refactoring is not very risky, you can do it whenever you have time.

Conclusion

As you make changes to your large class, you should always try to apply the Sprout Method and Sprout Class for the new changes to avoid making the class bulky. But suppose the class is already bulky and caters to a lot of responsibilities. In that case, you may need to draw the feature sketch diagram to figure out the responsibilities and their dependencies. If you want to avoid the risks of refactoring the code, you can apply the friendly builder pattern to test your impacted code. Refactoring can be a choice, but unit testing cannot be, especially if the goal is to have good quality with speedy development.

Over time, the different clusters/responsibilities of the large class will be covered in your test cases. Later, when the team has time to work on technical debt (especially at the start of the sprint), they can easily refactor some of these classes. They can just concentrate on the client code, which uses this large class.

The friend construct is not available in languages other than C++, but inner classes (in Java and C#) do allow accessing the private variables of the outer class. So in C# and Java, you write your builder class as an inner class.
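As a hedged illustration of that last point (class and method names invented here, mirroring the earlier C++ example), this is how the friendly-builder idea translates to Java using a static nested class:

```java
// Hypothetical sketch -- names invented for illustration.
public class ProgressReport {
    private double englishMarks; // private state of the outer class

    public double getEnglishMarks() {
        return englishMarks;
    }

    // A static nested class plays the role of the C++ friend builder:
    // it may read and write the outer class's private fields directly.
    public static class Builder {
        public ProgressReport createForEnglishTests(double marks) {
            ProgressReport r = new ProgressReport();
            r.englishMarks = marks; // legal: nested classes see private members
            return r;
        }
    }

    public static void main(String[] args) {
        ProgressReport r = new ProgressReport.Builder().createForEnglishTests(85);
        System.out.println(r.getEnglishMarks()); // prints 85.0
    }
}
```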

Angular — How to render HTML containing Angular Components dynamically at run-time

Introduction

Say, you had a set of components defined in your Angular application, <your-component-1> <your-component-2> <your-component-3> …. Wouldn’t it be awesome if you could create standalone templates consisting of <your-components> and other “Angular syntaxes” (Inputs, Outputs, Interpolation, Directives, etc.), load them at run-time from back-end, and Angular would render them all — all <your-components> created and all Angular syntaxes evaluated? Wouldn’t that be perfect!?

You can’t have everything, but at least something will work.

Alas! We have to accept the fact that Angular does not support rendering templates dynamically by itself. Additionally, there may be some security considerations too. Do make sure you understand these important factors first.

But, all is not lost! So, let us explore an approach to achieve, at least, one small but significant subset of supporting dynamic template rendering: dynamic HTML containing Angular Components, as the title of this article suggests.

Terminology Notes

    1. I have used terms like “dynamic HTML” or “dynamic template” or “dynamic template HTML” quite frequently in this article. They all mean the same: the description of “standalone templates” in the introduction above.
    2. The term “dynamic components” means all <your-components> put/used in a dynamic HTML; please do not confuse it for “components defined or compiled dynamically at run-time” — we won’t be doing that in this article.
    3. “Host Component”. For a component or projected component in a dynamic HTML, its Host Component is the component that created it, e.g. in the “Sample dynamic HTML” gist below, the Host Component of [yourComponent6] is the component that will create that dynamic HTML, not <your-component-3> inside which [yourComponent6] is placed in the dynamic HTML.

Knowledge Prerequisites

As a prerequisite to fully understand the proposed solution, I recommend that you get an idea about the following topics if not aware of them already.

    1. Dynamic component loader using ComponentFactoryResolver.
    2. Content Projection in Angular — pick your favorite article from Google.

Sample dynamic components

Let us define a few of <your-components> that we will be using in our sample “dynamic template HTML” in the next section.

@Component({
    selector: 'your-component-1',
    template: `
        <div>This is your component 1.</div>
        <div [ngStyle]="{ 'color': status }">My name is: {{ name }}</div>
    `,
})
export class YourComponent1 {
    @Input() name: string = '';
    @Input() status: string = 'green';
}


@Component({
    selector: 'your-component-2',
    template: `
        <div>This is your component 2 - {{ name }}.</div>
        <div *ngIf="filtering === 'true'">Filtered Id: {{ id }}.</div>
    `,
})
export class YourComponent2 {
    @Input() id: string = '0';
    @Input() name: string = '';
    @Input() filtering: 'true' | 'false' = 'false';
}


@Component({
    selector: 'your-component-3',
    template: `
        <div>This is your component 3 - {{ name }} ({{ ghostName || 'Ghost' }}).</div>

        <ng-content></ng-content>

        <div>End of your component 3</div>
    `,
})
export class YourComponent3 implements OnInit {

    @Input() id: number = 0; // Beware! `number` data-type

    @Input() name: string = ''; // Initialized - Will work
    @Input() ghostName: string; // Not initialized - Will not be available in `anyComp` for-in loop

    ngOnInit(): void {
        console.log(this.id === 45); // prints false
        console.log(this.id === '45'); // prints true
        console.log(typeof this.id === 'number'); // prints false
        console.log(typeof this.id === 'string'); // prints true
    }
}


@Component({
    selector: '[yourComponent6]', // Attribute selector based component
    template: `
    <div *ngIf="!offSide || !strongSide">
        <div [hidden]="!offSide">This is your component 6.</div>
        <div [ngStyle]="{ 'color': status }">The official motto is: {{ offSide }} - {{ strongSide }}.</div>
    </div>`,
})
export class YourComponent6 {
    @Input() offSide: string = '';
    @Input() strongSide: string = 'green field';
}

 

Take a quick look at YourComponent3 and the comments against name, ghostName, and ngOnInit. They translate to the first restriction of my proposed solution: an @Input property must be initialized to a string value. There are two parts here.

  1. Inputs must be of string type. I impose this restriction because the value of any attribute of an HTML Element in a dynamic HTML is going to be of string type. So, better to keep your components’ inputs’ data types consistent with that of the value you will be setting on them.
  2. Inputs must be initialized. Otherwise, TypeScript removes the property during transpilation, which causes problems for setComponentAttrs() (see later in the solution) — it cannot find the input property of that component at run-time, and hence won’t set that property even if the dynamic HTML has the appropriate HTML Element attribute defined.
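To see why initialization matters, here is a small standalone TypeScript sketch (the class and property names are invented for illustration) of the for-in/hasOwnProperty loop that setComponentAttrs() relies on. With the compiler settings typical of that era of Angular (class fields erased during transpilation), a declared-but-uninitialized property never becomes an own property of the instance:

```typescript
class Demo {
    name = '';          // initialized: an own property exists at run-time
    ghostName?: string; // declared only: may be erased by the compiler
}

const d = new Demo() as any;
const keys: string[] = [];
for (const key in d) {
    // the same own-property check setComponentAttrs() performs
    if (Object.prototype.hasOwnProperty.call(d, key)) {
        keys.push(key);
    }
}
console.log(keys); // 'name' is always listed; 'ghostName' is absent when fields are erased
```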

Sample dynamic HTML

Let us also define a dynamic HTML. All syntaxes mentioned here will work. The process will not support Angular syntaxes that I have NOT covered here.

<div>
    <p>Any HTML Element</p>

    <!-- simple component -->
    <your-component-1></your-component-1>

    <!-- simple component with "string" inputs -->
    <your-component-2 id="111" name="Krishnan" filtering="true"></your-component-2>

    <!-- component containing another content projected component -->
    <your-component-3 id="45">
        <your-component-1 name="George"></your-component-1>
        <div yourComponent6 offSide="hello" strongSide="world">...</div>
    </your-component-3>

    <!-- simple component with string inputs -->
    <your-component-4 ...></your-component-4>

    <!-- simple component with string inputs -->
    <your-component-5 ...></your-component-5>
</div>

 

Let me clarify again the second restriction of my proposed solution: no support for Directives, Pipes, interpolation, two-way data-binding, [variable] data-binding, ng-template, ng-container, etc. in a dynamic template HTML.

To clarify further the “@Input string” restriction, only hard-coded string values are supported, i.e. no variables like [attrBinding]="stringVariable". This is because, to support such object binding, we would need to parse the HTML attributes and evaluate their values at run-time. Easier said than done!

Alternatives for unsupported syntaxes

  1. Directives.
    If you really want to use an attribute directive, the best alternative here is to create a @Component({ selector: '[attrName]' }) instead. In other words, you can create your component with any Angular-supported selector — tag-name selector, [attribute] selector, .class-name selector, or even a combination of them, e.g. a[href].
  2. Object/Variable data-binding, Interpolation, etc.
    Once you attach your dynamic HTML to the DOM, you can easily find the target element using hostElement.querySelector/querySelectorAll (or getElementsByTagName and friends) and set its attribute value before you create the components. Alternatively, you could manipulate the HTML string itself before attaching it to the DOM. (This will become clearer in the next section.)
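As a sketch of the second alternative, the raw template string can be pre-processed to substitute variable values before it ever reaches the DOM. The placeholder syntax and the resolvePlaceholders function below are my own invention for illustration, not part of the proposed factory:

```typescript
// Hypothetical pre-processing step: substitute variables into the raw
// template string, since run-time interpolation is unsupported.
function resolvePlaceholders(tpl: string, vars: Record<string, string>): string {
    // replace every {{ name }} occurrence with the corresponding value
    return tpl.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name) => vars[name] ?? '');
}

const raw = '<your-component-2 name="{{ userName }}" filtering="true"></your-component-2>';
console.log(resolvePlaceholders(raw, { userName: 'Krishnan' }));
// → <your-component-2 name="Krishnan" filtering="true"></your-component-2>
```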

Attach dynamic template HTML to DOM

It is now time to make the dynamic HTML available in the DOM. There are multiple ways to achieve this: using ElementRef, @ViewChild, the [innerHTML] binding, etc. The snippet below shows a couple of examples that subscribe to an Observable<string> representing a template HTML stream and attach it to the DOM on resolution.

import { AfterViewInit, Component, ElementRef, ViewChild } from '@angular/core';
import { Observable, of } from 'rxjs';
import { delay } from 'rxjs/operators';

@Component({
    selector: 'app-binder',
    template: `<div #divBinder></div>`,
})
export class BinderComponent implements AfterViewInit {

    templateHtml$: Observable<string> = of('load supported-features.html here').pipe(delay(1000));

    @ViewChild('divBinder') divBinder!: ElementRef<HTMLElement>;

    constructor(private elementRef: ElementRef) { }

    ngAfterViewInit(): void {
        // Pick ONE of the following techniques:
        // Technique #1: attach to the component's own host element
        this.templateHtml$.subscribe(tpl => this.elementRef.nativeElement.innerHTML = tpl);
        // Technique #2: attach to a child element captured via @ViewChild
        this.templateHtml$.subscribe(tpl => this.divBinder.nativeElement.innerHTML = tpl);
    }

}

 

The dynamic components rendering factory

What do we need to achieve here?

    1. Find Components’ HTML elements in DOM (of the dynamic HTML).
    2. Create the appropriate Angular Components and set their @Input properties.
    3. Wire them up into the Angular application.

That is exactly what DynamicComponentFactory<T>.create() does below.

import { Injectable, ComponentRef, Injector, Type, ComponentFactory, ComponentFactoryResolver, ApplicationRef } from '@angular/core';

export class DynamicComponentFactory<T> {

    private embeddedComponents: ComponentRef<T>[] = [];

    constructor(
        private appRef: ApplicationRef,
        private factory: ComponentFactory<T>,
        private injector: Injector,
    ) { }

    //#region Creation process
    /**
     * Creates components (of type `T`) as detected inside `hostElement`.
     * @param hostElement The host/parent Dom element inside which component selector needs to be searched.
     * _rearrange_ components rendering order in Dom, and also remove any not present in this list.
     */
    create(hostElement: Element): ComponentRef<T>[] {
        // Find elements of the given Component selector type and put them into an Array (slice.call).
        const htmlEls = Array.prototype.slice.call(hostElement.querySelectorAll(this.factory.selector)) as Element[];
        // Create components
        const compRefs = htmlEls.map(el => this.createComponent(el));
        // Add to list
        this.embeddedComponents.push(...compRefs);
        // Attach created components to the ApplicationRef to include them in change-detection cycles.
        compRefs.forEach(compRef => this.appRef.attachView(compRef.hostView));
        // Return newly created components in case required outside
        return compRefs;
    }

    private createComponent(el: Element): ComponentRef<T> {
        // Convert the NodeList into an Array, because Angular doesn't accept a NodeList for projectableNodes.
        const projectableNodes = [Array.prototype.slice.call(el.childNodes)];

        // Create component
        const compRef = this.factory.create(this.injector, projectableNodes, el);
        const comp = compRef.instance;

        // Apply ALL attribute inputs to the dynamic component. (NOTE: This is a generic function;
        // it is not required when you are sure of the component's input requirements.
        // Also note that only static property values work here, since this is the only time they're set.)
        this.setComponentAttrs(comp, el);

        return compRef;
    }

    private setComponentAttrs(comp: T, el: Element): void {
        const anyComp = (comp as any);
        for (const key in anyComp) {
            if (
                Object.prototype.hasOwnProperty.call(anyComp, key)
                && el.hasAttribute(key)
            ) {
                anyComp[key] = el.getAttribute(key);
                // console.log(el.getAttribute('name'), key, el.getAttribute(key));
            }
        }
    }
    //#endregion

    //#region Destroy process
    destroy(): void {
        this.embeddedComponents.forEach(compRef => this.appRef.detachView(compRef.hostView));
        this.embeddedComponents.forEach(compRef => compRef.destroy());
    }
    //#endregion
}

/**
 * Use this Factory class to create `DynamicComponentFactory<T>` instances.
 *
 * @tutorial PROVIDERS: This class should be "provided" in _each individual component_ (a.k.a. Host component)
 * that wants to use it. Also, you will want to inject this class with `@Self` decorator.
 *
 * **Reason**: You could instead use `providedIn: 'root'` (and skip `@Self`), but that causes the following issues:
 *  1. Routing does not work correctly - you don't get the correct instance of ActivatedRoute.
 */
@Injectable()
export class DynamicComponentFactoryFactory {

    constructor(
        private appRef: ApplicationRef,
        private injector: Injector,
        private resolver: ComponentFactoryResolver,
    ) { }

    create<T>(componentType: Type<T>): DynamicComponentFactory<T> {
        const factory = this.resolver.resolveComponentFactory(componentType);
        return new DynamicComponentFactory<T>(this.appRef, factory, this.injector);
    }

}

 

I hope that the code and comments are self-explanatory. So, let me cover only certain parts that require additional explanation.

    1. this.factory.create: This is the heart of this solution — the API provided by Angular to create a component from code.
    2. The first argument injector is required by Angular to inject dependencies into the instances being created.
    3. The second argument projectableNodes is an array of all “Projectable Nodes” of the component to be created, e.g. in “Sample dynamic HTML” gist, <your-component-1> and <div yourComponent6> are the projectable nodes of <your-component-3>. If this argument is not provided, then these Nodes inside <your-component-3> will not be rendered in the final view.
    4. setComponentAttrs(): This function loops through all public properties of the created component’s instance and sets their values to corresponding attributes’ values of the Host Element el, but only if found, otherwise the input holds its default value defined in the component.
    5. this.appRef.attachView(): This makes Angular aware of the components created and includes them in its change detection cycle.
    6. destroy(): Angular will not dispose of any dynamically created component for us automatically. Hence, we need to do it explicitly when the Host Component is being destroyed. In our current example, our Host Component is going to be BinderComponent explained in the next section.
    7. Note that DynamicComponentFactory<T> works for only one component type <T> per instance of that factory class. So, to bind multiple types of Components, you must create multiple such factory instances, one per Component Type. To make this process easier, we use the DynamicComponentFactoryFactory class. (Sorry, couldn’t think of a better name.) Apart from that, the other reason to have this wrapper class is that you cannot directly inject Angular’s ComponentFactory<T>, which is the second constructor dependency of DynamicComponentFactory<T>. (There must be better ways to manage the factory creation process. Open to suggestions.)

We are now ready to use this factory class to create dynamic components.

Create Angular Components in dynamic HTML

Finally, we create one instance of DynamicComponentFactory<T> per “dynamic component” type using DynamicComponentFactoryFactory and call its create(element) method in a loop, where element is the HTML node that contains the dynamic HTML. We may also perform custom “initialization” operations on the newly created components, as shown in createAllComponents() below.

import { AfterViewInit, ComponentRef, Component, ElementRef, OnDestroy, Self } from '@angular/core';
import { Observable, of } from 'rxjs';
import { delay } from 'rxjs/operators';
import { DynamicComponentFactory, DynamicComponentFactoryFactory } from './dynamic-component-factory';

@Component({
    selector: 'app-binder',
    template: ``,
    providers: [
        DynamicComponentFactoryFactory, // IMPORTANT!
    ],
})
export class BinderComponent implements AfterViewInit, OnDestroy {

    private readonly factories: DynamicComponentFactory<any>[] = [];
    private readonly hostElement: Element;

    templateHtml$: Observable<string> = of('load supported-features.html here').pipe(delay(1000));

    components: any[] = [
        YourComponent1,
        YourComponent2,
        YourComponent3,
        // ... add others
    ];

    constructor(
        // @Self - best practice; to avoid potential bugs if you forgot to `provide` it here
        @Self() private cmpFactory: DynamicComponentFactoryFactory,
        elementRef: ElementRef,
    ) {
        this.hostElement = elementRef.nativeElement;
    }

    ngAfterViewInit(): void {
        this.templateHtml$.subscribe(tpl => {
            this.hostElement.innerHTML = tpl;
            this.initFactories();
            this.createAllComponents();
        });
    }

    private initFactories(): void {
        this.components.forEach(c => {
            const f = this.cmpFactory.create(c);
            this.factories.push(f);
        });
    }

    // Create components dynamically
    private createAllComponents(): void {
        const el = this.hostElement;
        const compRefs: ComponentRef<any>[] = [];
        this.factories.forEach(f => {
            const comps = f.create(el);
            compRefs.push(...comps);
            // Here you can make use of compRefs, filter them, etc.
            // to perform any custom operations, if required.
            compRefs
                .filter(c => c.instance instanceof YourComponent2)
                .forEach(c => {
                    c.instance.name = 'hello';
                    c.instance.filtering = 'false';
                    // c.instance.someFoo('welcome'); // call any custom method your component defines
                });
        });
    }

    private removeAllComponents(): void {
        this.factories.forEach(f => f.destroy());
    }

    ngOnDestroy(): void {
        this.removeAllComponents();
    }

}

 

DynamicComponentFactoryFactory Provider (Important!)

Notice in BinderComponent that DynamicComponentFactoryFactory has been provided in its own @Component decorator and is injected using @Self. As mentioned in its JSDoc comments, this is important because we want the correct instance of Injector to be used for creating components dynamically. If the factory class is not provided at the Host Component level and instead uses providedIn: 'root' or some ParentModule, then the Injector instance will be of that level, which may have unintended consequences, e.g. a relative link in [routerLink]="['.', '..', 'about-us']" used in, say, YourComponent1 may not work correctly.

That’s it!

Conclusion

If you have made it this far, you may be thinking, “Meh! This is a completely stripped-down version of Angular templates’ capabilities. That’s no good for me!”. Yes, I will not deny that. But, believe me, it is still quite a “power-up”! I have been able to create a full-fledged Website that renders dynamic, user-defined templates using this approach, and it works perfectly well.

Even though we cannot render fully-loaded dynamic templates at run-time, we have seen in this article how we can render at least “components with static string inputs”. This may seem like a crippled solution if you compare it with all the wonderful features that Angular provides at compile-time. But, practically, this may still solve a lot of use cases requiring dynamic template rendering.

Let’s consider this a partial success.

Hope you found the article useful.

Part-3: Building a bidirectional-streaming gRPC service using Golang

If you have been through part-2 of this blog series, then you know that the gRPC framework supports uni-directional streaming RPCs. But that is not the end: gRPC supports bi-directional streaming RPCs as well. In other words, a gRPC client and a gRPC server can stream requests and responses simultaneously over the same TCP connection.

Objective

In this blog, you’ll get to know what bi-directional streaming RPCs are and how to implement, test, and run them using a live, fully functional example.

Previously, in part-2 of this blog series, we learned the basics of uni-directional streaming gRPCs: how to implement those RPCs, how to write unit tests for them, and how to launch the server & client. Part-2 walks you through a step-by-step guide to implementing a Stack Machine server & client leveraging uni-directional streaming RPCs.

If you’ve missed that, I would recommend going through it to get familiar with the basics of the gRPC framework & streaming RPCs.

Introduction

Let’s understand how bi-directional streaming RPCs work at a very high level.

In bidirectional streaming RPCs:

    • both sides send a sequence of messages using a read-write stream
    • the two streams operate independently, so clients and servers can read and write in whatever order they like
    • for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes

The best thing is, the order of messages in each stream is preserved.
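To make the bullet points above concrete, here is a small, self-contained Go sketch of the simplest bidirectional pattern: read one message, immediately write one back, until the client closes its side. The Msg type and fakeStream are stand-ins invented for this illustration; they are not part of this article's generated machine.proto code.

```go
package main

import (
	"fmt"
	"io"
)

// Msg stands in for a generated protobuf message (illustrative only).
type Msg struct{ Text string }

// bidiStream mimics the Recv/Send surface of a generated gRPC stream.
type bidiStream interface {
	Recv() (*Msg, error)
	Send(*Msg) error
}

// pingPong alternates: read one message, immediately write one back.
func pingPong(stream bidiStream) error {
	for {
		in, err := stream.Recv()
		if err == io.EOF {
			return nil // client finished sending
		}
		if err != nil {
			return err
		}
		if err := stream.Send(&Msg{Text: "echo: " + in.Text}); err != nil {
			return err
		}
	}
}

// fakeStream feeds queued inputs and records outputs, standing in for a
// real network stream in this sketch.
type fakeStream struct {
	in  []*Msg
	out []*Msg
}

func (f *fakeStream) Recv() (*Msg, error) {
	if len(f.in) == 0 {
		return nil, io.EOF
	}
	m := f.in[0]
	f.in = f.in[1:]
	return m, nil
}

func (f *fakeStream) Send(m *Msg) error {
	f.out = append(f.out, m)
	return nil
}

func main() {
	s := &fakeStream{in: []*Msg{{Text: "a"}, {Text: "b"}}}
	if err := pingPong(s); err != nil {
		panic(err)
	}
	for _, m := range s.out {
		fmt.Println(m.Text) // echoed in the same order they were received
	}
}
```

A real handler is free to buffer, batch, or interleave reads and writes differently; the point is only that both directions share one stream and message order is preserved per direction.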

Now let’s improve the “Stack Machine” server & client codes to support bidirectional streaming.

Implementing Bidirectional Streaming RPC

We already have Server-streaming RPC ServerStreamingExecute() to handle FIB operation which streams the numbers from Fibonacci series, and Client-streaming RPC Execute() to handle the stream of instructions to perform basic ADD/SUB/MUL/DIV operations and return a single response.

In this blog, we’ll merge both functionalities to make Execute() a bidirectional streaming RPC.

Update the protobuf

Let’s update the machine/machine.proto to make Execute() a bi-directional (server & client streaming) RPC, so that the client can stream the instructions rather than sending a set of instructions to the server & the server can respond with a stream of results. Doing so, we’re getting rid of InstructionSet and ServerStreamingExecute().

The updated machine/machine.proto now looks like:

 service Machine {
-    rpc Execute(stream Instruction) returns (Result) {}
-    rpc ServerStreamingExecute(InstructionSet) returns (stream Result) {}
+    rpc Execute(stream Instruction) returns (stream Result) {}
 }

 

source: machine/machine.proto

Notice the stream keyword at two places in the Execute() RPC declaration.

Generating the updated client and server interface Go code

Now let’s generate the updated Go code from machine/machine.proto by running:

~/disk/E/workspace/grpc-eg-go
$ SRC_DIR=./
$ DST_DIR=$SRC_DIR
$ protoc \
  -I=$SRC_DIR \
  --go_out=plugins=grpc:$DST_DIR \
  $SRC_DIR/machine/machine.proto

 

You’ll notice that the declaration of ServerStreamingExecute() is no longer present in the MachineServer & MachineClient interfaces, while the signature of Execute() is intact.

...
 type MachineServer interface {
     Execute(Machine_ExecuteServer) error
-    ServerStreamingExecute(*InstructionSet, Machine_ServerStreamingExecuteServer) error
 }
...
 type MachineClient interface {
     Execute(ctx context.Context, opts ...grpc.CallOption) (Machine_ExecuteClient, error)
-    ServerStreamingExecute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (Machine_ServerStreamingExecuteClient, error)
 }
...

 

source: machine/machine.pb.go

Update the Server

Now we need to update the server code to make Execute() a bi-directional (server & client streaming) RPC, so that it can accept a stream of instructions from the client and, at the same time, respond with a stream of results.

func (s *MachineServer) Execute(stream machine.Machine_ExecuteServer) error {
	var stack stack.Stack
	for {
		instruction, err := stream.Recv()
		if err == io.EOF {
			log.Println("EOF")
			return nil
		}
		if err != nil {
			return err
		}

		operand := instruction.GetOperand()
		operator := instruction.GetOperator()
		op_type := OperatorType(operator)

		fmt.Printf("Operand: %v, Operator: %v\n", operand, operator)

		switch op_type {
		case PUSH:
			stack.Push(float32(operand))
		case POP:
			stack.Pop()
		case ADD, SUB, MUL, DIV:
			item2, popped2 := stack.Pop()
			item1, popped1 := stack.Pop()
			if !popped1 || !popped2 {
				return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")
			}

			var res float32
			if op_type == ADD {
				res = item1 + item2
			} else if op_type == SUB {
				res = item1 - item2
			} else if op_type == MUL {
				res = item1 * item2
			} else if op_type == DIV {
				res = item1 / item2
			}

			stack.Push(res)
			if err := stream.Send(&machine.Result{Output: res}); err != nil {
				return err
			}
		case FIB:
			n, popped := stack.Pop()
			if !popped {
				return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")
			}
			for f := range utils.FibonacciRange(int(n)) {
				if err := stream.Send(&machine.Result{Output: float32(f)}); err != nil {
					return err
				}
			}
		default:
			return status.Errorf(codes.Unimplemented, "Operation '%s' not implemented yet", operator)
		}
	}
}

 

source: server/machine.go

Update the Client

Let’s update the client code to make client.Execute() a bi-directional streaming RPC so that the client can stream the instructions to the server and can receive a stream of results at the same time.

func runExecute(client machine.MachineClient, instructions []*machine.Instruction) {
	log.Printf("Streaming %v", instructions)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	stream, err := client.Execute(ctx)
	if err != nil {
		log.Fatalf("%v.Execute(ctx) = %v, %v: ", client, stream, err)
	}
	waitc := make(chan struct{})
	go func() {
		for {
			result, err := stream.Recv()
			if err == io.EOF {
				log.Println("EOF")
				close(waitc)
				return
			}
			if err != nil {
				log.Printf("Err: %v", err)
				close(waitc)
				return
			}
			log.Printf("output: %v", result.GetOutput())
		}
	}()

	for _, instruction := range instructions {
		if err := stream.Send(instruction); err != nil {
			log.Fatalf("%v.Send(%v) = %v: ", stream, instruction, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	if err := stream.CloseSend(); err != nil {
		log.Fatalf("%v.CloseSend() got error %v, want %v", stream, err, nil)
	}
	<-waitc
}

 

source: client/machine.go

Test

Before we start updating the unit test, let’s generate mocks for MachineClient, Machine_ExecuteClient, and Machine_ExecuteServer interfaces to mock the stream type while testing the bidirectional streaming RPC Execute().

~/disk/E/workspace/grpc-eg-go

$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient,Machine_ExecuteServer,Machine_ExecuteClient > mock_machine/machine_mock.go

 

The updated mock_machine/machine_mock.go should look like this.

Now, we’re good to write a unit test for bidirectional streaming RPC Execute().

Server

Let’s update the unit test to test the server-side logic of Execute() RPC for bidirectional streaming using mock:

func TestExecute(t *testing.T) {
	s := MachineServer{}

	ctrl := gomock.NewController(t)
	defer ctrl.Finish()
	mockServerStream := mock_machine.NewMockMachine_ExecuteServer(ctrl)

	mockResults := []*machine.Result{}
	callRecv1 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 1, Operator: "PUSH"}, nil)
	callRecv2 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 2, Operator: "PUSH"}, nil).After(callRecv1)
	callRecv3 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operator: "MUL"}, nil).After(callRecv2)
	callRecv4 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 3, Operator: "PUSH"}, nil).After(callRecv3)
	callRecv5 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operator: "ADD"}, nil).After(callRecv4)
	callRecv6 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operator: "FIB"}, nil).After(callRecv5)
	mockServerStream.EXPECT().Recv().Return(nil, io.EOF).After(callRecv6)
	mockServerStream.EXPECT().Send(gomock.Any()).DoAndReturn(
		func(result *machine.Result) error {
			mockResults = append(mockResults, result)
			return nil
		}).AnyTimes()
	wants := []float32{2, 5, 0, 1, 1, 2, 3, 5}

	err := s.Execute(mockServerStream)
	if err != nil {
		t.Errorf("Execute(%v) got unexpected error: %v", mockServerStream, err)
	}
	for i, result := range mockResults {
		got := result.GetOutput()
		want := wants[i]
		if got != want {
			t.Errorf("got %v, want %v", got, want)
		}
	}
}

 

Please refer to the server/machine_test.go for detailed content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test server/machine.go server/machine_test.go

ok      command-line-arguments  0.004s

 

Client

Now, let’s add a unit test to test the client-side logic of the Execute() RPC for bidirectional streaming using mock:

func testExecute(t *testing.T, client machine.MachineClient) {

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    defer cancel()


    instructions := []*machine.Instruction{}

    instructions = append(instructions, &machine.Instruction{Operand: 5, Operator: "PUSH"})

    instructions = append(instructions, &machine.Instruction{Operand: 6, Operator: "PUSH"})

    instructions = append(instructions, &machine.Instruction{Operator: "MUL"})


    stream, err := client.Execute(ctx)

    if err != nil {

        log.Fatalf("%v.Execute(%v) = _, %v: ", client, ctx, err)

    }

    for _, instruction := range instructions {

        if err := stream.Send(instruction); err != nil {

            log.Fatalf("%v.Send(%v) = %v: ", stream, instruction, err)

        }

    }

    result, err := stream.Recv()

    if err != nil {

        log.Fatalf("%v.Recv() got error %v, want %v", stream, err, nil)

    }


    got := result.GetOutput()

    want := float32(30)

    if got != want {

        t.Errorf("got %v, want %v", got, want)

    }

}


func TestExecute(t *testing.T) {

    ctrl := gomock.NewController(t)

    defer ctrl.Finish()

    mockMachineClient := mock_machine.NewMockMachineClient(ctrl)


    mockClientStream := mock_machine.NewMockMachine_ExecuteClient(ctrl)

    mockClientStream.EXPECT().Send(gomock.Any()).Return(nil).AnyTimes()

    mockClientStream.EXPECT().Recv().Return(&machine.Result{Output: 30}, nil)


    mockMachineClient.EXPECT().Execute(

        gomock.Any(), // context

    ).Return(mockClientStream, nil)


    testExecute(t, mockMachineClient)

}

 

Please refer to the mock_machine/machine_mock_test.go for detailed content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test mock_machine/machine_mock.go mock_machine/machine_mock_test.go

ok      command-line-arguments  0.003s

 

Run

Unit tests assure us that the business logic of the server & client codes is working as expected. Let’s try running the server and communicating to it via our client code.

Server

To spin up the server we need to run the previously created cmd/run_machine_server.go file.

~/disk/E/workspace/grpc-eg-go

$ go run cmd/run_machine_server.go

 

Client

Now, let’s run the client code client/machine.go.

~/disk/E/workspace/grpc-eg-go

$ go run client/machine.go

Streaming [operator:"PUSH" operand:1  operator:"PUSH" operand:2  operator:"ADD"  operator:"PUSH" operand:3  operator:"DIV"  operator:"PUSH" operand:4  operator:"MUL"  operator:"FIB"  operator:"PUSH" operand:5  operator:"PUSH" operand:6  operator:"SUB" ]

output: 3

output: 1

output: 4

output: 0

output: 1

output: 1

output: 2

output: 3

output: -1

EOF

 

Bonus

There are situations when one has to choose between mocking a dependency and incorporating the dependency into the test environment & running it live.

The decision of whether or not to mock could be based on:

    1. how many dependencies there are
    2. which dependencies are essential & most used
    3. whether it is feasible to install the dependencies in the test (and even the developer’s) environment, etc.

At one extreme, we can mock everything. But the mocking effort should pay off.

For the gRPC framework, we can run the gRPC server live & write client code to test against the business logic.

But spinning up a server from the test file has a catch: it requires allocating a real TCP port, which can conflict when tests run in parallel or when multiple runs share the same CI server.

To solve this, the gRPC community has introduced a package called bufconn under gRPC’s testing utilities. bufconn provides a Listener that implements net.Listener on top of an in-memory buffer instead of a real port. We can hand this listener to a gRPC server, allowing us to spin up a full-fledged server for testing that talks over the in-memory buffer rather than the network.
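The idea can be shown in miniature using only the standard library: net.Pipe gives two connected in-memory net.Conn endpoints, so a “client” and a “server” can talk without any TCP port. This is an illustrative sketch (the function name and echo handler are invented for the example), not part of the blog’s repository:

```go
package main

import (
	"fmt"
	"net"
)

// pingPong sends msg from the client end of an in-memory pipe and
// returns what the echoing server end sends back. net.Pipe is the
// standard-library miniature of bufconn's idea: connected net.Conn
// endpoints with no real TCP port involved.
func pingPong(msg string) string {
	client, server := net.Pipe()

	go func() {
		defer server.Close()
		buf := make([]byte, 64)
		n, _ := server.Read(buf)
		server.Write(buf[:n]) // echo, as a stand-in for a real handler
	}()

	client.Write([]byte(msg))
	reply := make([]byte, 64)
	n, _ := client.Read(reply)
	client.Close()
	return string(reply[:n])
}

func main() {
	fmt.Println(pingPong("ping")) // prints: ping
}
```

bufconn works the same way one level up: it satisfies net.Listener, so grpc.Server.Serve accepts it just like a TCP listener.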

As bufconn already comes with the grpc go module – which we already have installed, we don’t need to install it explicitly.

So, let’s create a new test file server/machine_live_test.go, write the following test code to launch the gRPC server live using bufconn, and add a client to test the bidirectional RPC Execute().

const bufSize = 1024 * 1024


var lis *bufconn.Listener


func init() {

    lis = bufconn.Listen(bufSize)

    s := grpc.NewServer()

    machine.RegisterMachineServer(s, &MachineServer{})

    go func() {

        if err := s.Serve(lis); err != nil {

            log.Fatalf("Server exited with error: %v", err)

        }

    }()

}


func bufDialer(context.Context, string) (net.Conn, error) {

    return lis.Dial()

}


func testExecute_Live(t *testing.T, client machine.MachineClient, instructions []*machine.Instruction, wants []float32) {

    log.Printf("Streaming %v", instructions)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    defer cancel()

    stream, err := client.Execute(ctx)

    if err != nil {

        log.Fatalf("%v.Execute(ctx) = %v, %v: ", client, stream, err)

    }

    waitc := make(chan struct{})

    go func() {

        i := 0

        for {

            result, err := stream.Recv()

            if err == io.EOF {

                log.Println("EOF")

                close(waitc)

                return

            }

            if err != nil {

                log.Printf("Err: %v", err)

            }

            log.Printf("output: %v", result.GetOutput())

            got := result.GetOutput()

            want := wants[i]

            if got != want {

                t.Errorf("got %v, want %v", got, want)

            }

            i++

        }

    }()


    for _, instruction := range instructions {

        if err := stream.Send(instruction); err != nil {

            log.Fatalf("%v.Send(%v) = %v: ", stream, instruction, err)

        }

    }

    if err := stream.CloseSend(); err != nil {

        log.Fatalf("%v.CloseSend() got error %v, want %v", stream, err, nil)

    }

    <-waitc

}


func TestExecute_Live(t *testing.T) {

    ctx := context.Background()

    conn, err := grpc.DialContext(ctx, "bufnet", grpc.WithContextDialer(bufDialer), grpc.WithInsecure())

    if err != nil {

        t.Fatalf("Failed to dial bufnet: %v", err)

    }

    defer conn.Close()

    client := machine.NewMachineClient(conn)



    // try Execute()

    instructions := []*machine.Instruction{

        {Operand: 1, Operator: "PUSH"},

        {Operand: 2, Operator: "PUSH"},

        {Operator: "ADD"},

        {Operand: 3, Operator: "PUSH"},

        {Operator: "DIV"},

        {Operand: 4, Operator: "PUSH"},

        {Operator: "MUL"},

        {Operator: "FIB"},

        {Operand: 5, Operator: "PUSH"},

        {Operand: 6, Operator: "PUSH"},

        {Operator: "SUB"},

    }

    wants := []float32{3, 1, 4, 0, 1, 1, 2, 3, -1}

    testExecute_Live(t, client, instructions, wants)

}

 

source: server/machine_live_test.go

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go 

$ go test server/machine.go server/machine_live_test.go 

ok      command-line-arguments  0.005s

 

Fantastic!!! Everything worked as expected.

At the end of this blog, we’ve learned:

    • How to define an interface for bidirectional streaming RPC using protobuf
    • How to write gRPC server & client logic for bidirectional streaming RPC
    • How to write and run unit tests for bidirectional streaming RPC
    • How to write and run unit tests against a live server by leveraging the bufconn package
    • How to run the gRPC server and communicate with it from a client

The source code of this example is available at toransahu/grpc-eg-go.

Feel free to write your queries or opinions. Let’s have a healthy discussion.

A Step-by-Step Guide to Easy Android in-App Review Setup.

Introduction

Like any other developer, I always want to see how people react to my Android apps. After all, it determines the path ahead, where you can improve and what you can discard. A user can provide feedback in various ways, including via email, social media networks like Twitter, and so on. However, a rating and review on the Play Store can make or break an app’s deal. It does not just help us gather user feedback, but it also helps users decide which app to download from a similar category based on ratings.

The need for in-app review

We know how essential app reviews are for the app development team. To get feedback, we have to rely on the user, who would probably have to visit the Google Play Store to leave a review. Another option is to add some kind of button or indicator that, when clicked, redirects users to the app’s details page.

However, redirecting users away from your app feels like opening a door to app abandonment.

Google finally came up with a new In-App Review API, an easy way to get feedback from users without them even leaving the app.

The good thing is we can plan when to prompt this review request. We can wait for the user to get familiar with the app, and then we can ask them when they reach a certain level of a game or unlock a new feature on the app.

Getting Started

In this blog, we will learn how to implement the In-App review and how we can test it.

Device Requirements

Only the following devices allow in-app reviews:

    • Android devices (phones and tablets) that have the Google Play Store installed and are running Android 5.0 (API level 21) or higher.
    • Chrome OS devices that have the Google Play Store installed.

Play Core Library Requirements

Your app must use version 1.8.0 or higher of the Play Core library to support in-app reviews.

implementation 'com.google.android.play:core:1.9.1'

Design Suggestions

    • Avoid modifying the design of the Review card (popup).
    • There should be no overlay on top of or around the card.
    • Don’t remove the review dialogue programmatically (It will automatically be removed based on user actions)

Implementation

Add Play core dependency

Add the play-core library to your build.gradle file as a dependency.

implementation 'com.google.android.play:core:1.9.1'

Create the ReviewManager

The ReviewManager is the interface that lets your app start an in-app review flow. Obtain an instance of it via the ReviewManagerFactory.

ReviewManager reviewManager = ReviewManagerFactory.create(this);

Get a ReviewInfo object

Use the ReviewManager instance to create a request task. If the request is successful, the API returns a ReviewInfo object, which is needed to begin the in-app review flow.

Launch the in-app review flow

Use the ReviewInfo instance to start the in-app review flow. Wait until the user has finished the flow before continuing with the regular user experience.


Testing In-App review

Testing the in-app review is not as straightforward as running the app from Android Studio and submitting a review.
To protect user privacy and prevent API misuse, this API has a limited quota per user.

We will test our implementation using the Internal Testing track.

Internal testing

This is a tool to create and manage internal testing releases to make our app available to up to 100 internal testers.

We can create internal testing releases from the play console.

    • In the console, select the app for which you want to test the in-app review

    • Go to “Internal testing” under Testing, under Release, in the left pane

    • Click “Create new release”

    • Upload your signed APK; after a successful upload you will see something like this

    • Add the release details and save them

    • Click “Review release”

    • Click “Start rollout to Internal testing”

    • Add testers and save the changes

    • After adding testers and saving successfully, get the link via “Copy link” and share the invite through whatever channel is convenient for you

Testing Internal Release

Before starting with the internal build, make sure you have installed the production build from the Play Store.

    • After opening the link, we have to accept the invitation

    • After accepting the invitation, if we go to the app’s detail page on Google Play, we will see that we are an internal tester and that a new update is available

    • Update the app and then open it; you will see the review request in the bottom half of the screen, at the point from which you requested the review.

    • Add the review and rating (I give mine 5 stars 😛 )

    • Submit review

    • Let’s check Google Play to see whether our review can be seen there

Hurray! We have done it.

 

Best Practices

    • Never ask the user for feedback right at the start of their journey with your application; wait until they are familiar with your app, then request a review. This way you will get accurate, detailed feedback you can act on to improve the user experience.
    • Do not bombard the user with review requests. This reduces user annoyance while also limiting API use.
    • Never ask a question before or while asking for the review.
    • Never customize the review screen; use the default one, as suggested and required by Google.

 

Note: since we are testing with an internal release, we can get the review dialog multiple times, as there is no quota for internal testing, and the rating and review are simply overridden by the latest submission. The same is not true for the production environment: there the quota is one per user, so the dialog will not be shown if the user has already submitted a review.

Thanks for reading till the end; I hope you have learned about the in-app review and will incorporate it into your apps.

 

Reference: https://developer.android.com/guide/playcore/in-app-review

Part -2: Building a unidirectional-streaming gRPC service using Golang

Have you ever wished, while developing a REST API, that the server could stream responses over the same TCP connection? Or, conversely, that the REST client could stream its requests to the server? That could have saved the cost of bringing up another service (like WebSocket) just to fulfill such a requirement.

For such cases, REST isn’t the only API architecture available. People can now bank on the gRPC model as it has begun to play a crucial role. gRPC’s unidirectional-streaming RPC feature could be the perfect choice to meet those requirements.

Objective

In this blog, you’ll get to know what client streaming & server streaming uni-directional RPCs are. I will also discuss how to implement, test, and run them using a live, fully functional example.

Previously, in Part-1 of this blog series, we learned the basics of gRPC: how to implement a Simple/Unary RPC, how to write unit tests, and how to launch the server & client. Part-1 is a step-by-step guide to implementing a Stack Machine server & client leveraging Simple/Unary RPC.

If you’ve missed that, it is highly recommended to go through it to get familiar with the basics of the gRPC framework.

Introduction

Let’s understand how Client streaming & Server streaming RPCs work at a very high level.

In client streaming RPCs:

    • the client writes a sequence of messages and sends them to the server using a provided stream
    • once the client has finished writing the messages, it waits for the server to read them and return its response

In server streaming RPCs:

    • the client sends a request to the server and gets back a stream to read a sequence of messages
    • the client reads from the returned stream until there are no more messages

The best thing is gRPC guarantees message ordering within an individual RPC call.

Now let’s improve the “Stack Machine” server & client codes to support unidirectional streaming.

Implementing Server Streaming RPC

We’ll see an example of Server Streaming first by implementing the FIB operation.

Where the FIB RPC will:

    • perform a Fibonacci operation
    • accept an integer input N, i.e., a request to generate the first N numbers of the Fibonacci series
    • respond with a stream of integers, i.e., the first N numbers of the Fibonacci series

And later we’ll see how Client Streaming can be implemented so that a client can input a stream of instructions to the Stack Machine in real-time rather than sending a single request consisting of a set of instructions.

Update Protobuf

We already defined the gRPC service Machine and a Simple (Unary) RPC method Execute inside our service definition in Part-1 of the blog series. Now, let’s update the service definition to add one server streaming RPC called ServerStreamingExecute.

    • In a server streaming RPC, the client sends a request to the server using the stub and waits for the response to come back as a stream of results
    • To specify a server-side streaming method, place the stream keyword before the response type

// ServerStreamingExecute accepts a set of Instructions from client and returns a stream of Result.

rpc ServerStreamingExecute(InstructionSet) returns (stream Result) {}

source: machine/machine.proto

Generating Updated Client & Server Interface Go Code

We need to generate the gRPC client and server interfaces from our machine/machine.proto service definition.

~/disk/E/workspace/grpc-eg-go

$ SRC_DIR=./

$ DST_DIR=$SRC_DIR

$ protoc \

  -I=$SRC_DIR \

  --go_out=plugins=grpc:$DST_DIR \

  $SRC_DIR/machine/machine.proto

 

You can observe that the declaration of ServerStreamingExecute() in the MachineClient and MachineServer interface has been auto-generated:

...

 type MachineClient interface {

    Execute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (*Result, error)

+   ServerStreamingExecute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (Machine_ServerStreamingExecuteClient, error)

 }

...

 type MachineServer interface {

    Execute(context.Context, *InstructionSet) (*Result, error)

+   ServerStreamingExecute(*InstructionSet, Machine_ServerStreamingExecuteServer) error

 }

 

source: machine/machine.pb.go

Update the Server

Just in case you’re wondering what happens if your service doesn’t implement some of the RPCs declared in the machine.pb.go file: you’ll encounter the following error while launching your gRPC server.

~/disk/E/workspace/grpc-eg-go

$ go run cmd/run_machine_server.go

# command-line-arguments

cmd/run_machine_server.go:32:44: cannot use &server.MachineServer literal (type *server.MachineServer) as type machine.MachineServer in argument to machine.RegisterMachineServer:

        *server.MachineServer does not implement machine.MachineServer (missing ServerStreamingExecute method)

 

So, it’s always best practice to keep your service in sync with the service definition, i.e., machine/machine.proto & machine/machine.pb.go. If you don’t want to support a particular RPC, or its implementation isn’t ready yet, just respond with an Unimplemented error status. Example:

// ServerStreamingExecute runs the set of instructions given and streams a sequence of Results.

func (s *MachineServer) ServerStreamingExecute(instructions *machine.InstructionSet, stream machine.Machine_ServerStreamingExecuteServer) error {

    return status.Error(codes.Unimplemented, "ServerStreamingExecute() not implemented yet")

}

 

source: server/machine.go

Before we implement the ServerStreamingExecute() RPC, let’s write a Fibonacci series generator called FibonacciRange().

package utils

func FibonacciRange(n int) <-chan int {

    ch := make(chan int)

    fn := make([]int, n+2) // n+2 so that fn[1] is addressable even when n == 0

    fn[0] = 0

    fn[1] = 1

    go func() {

        defer close(ch)

        for i := 0; i <= n; i++ {

            var f int

            if i < 2 {

                f = fn[i]

            } else {

                f = fn[i-1] + fn[i-2]

            }

            fn[i] = f

            ch <- f

        }

    }()

    return ch

}

source: utils/fibonacci.go

This blog series assumes that you’re familiar with Golang basics & its concurrency paradigms and concepts like channels. You can read more about channels in the official documentation.

This function yields the numbers of the Fibonacci series up to the Nth position.
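As a quick, self-contained illustration of the same generator pattern, the function below mirrors utils.FibonacciRange from the text (it computes the series iteratively with two running values instead of a backing slice; the lowercase name is just for this sketch):

```go
package main

import "fmt"

// fibonacciRange sends the Fibonacci numbers at positions 0..n over a
// channel and then closes the channel, so callers can consume it with
// a plain range loop, exactly as the blog's unit test does.
func fibonacciRange(n int) <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch)
		a, b := 0, 1
		for i := 0; i <= n; i++ {
			ch <- a
			a, b = b, a+b
		}
	}()
	return ch
}

func main() {
	for f := range fibonacciRange(5) {
		fmt.Print(f, " ") // prints: 0 1 1 2 3 5
	}
	fmt.Println()
}
```

Because the goroutine closes the channel after the last value, the range loop terminates on its own; no sentinel value is needed.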

Let’s also add a small unit test to validate the FibonacciRange() generator.

package utils

import (

    "testing"

)

func TestFibonacciRange(t *testing.T) {

    fibOf5 := []int{0, 1, 1, 2, 3, 5}

    i := 0

    for f := range FibonacciRange(5) {

        if f != fibOf5[i] {

            t.Errorf("got %d, want %d", f, fibOf5[i])

        }

        i++

    }

}

source: utils/fibonacci_test.go

Let’s implement ServerStreamingExecute() to handle the basic instructions PUSH/POP and FIB with proper error handling. For FIB, it pops N from the stack and streams the first numbers of the Fibonacci series back to the client, each wrapped in a Result object.

func (s *MachineServer) ServerStreamingExecute(instructions *machine.InstructionSet, stream machine.Machine_ServerStreamingExecuteServer) error {

    if len(instructions.GetInstructions()) == 0 {

        return status.Error(codes.InvalidArgument, "No valid instructions received")

    }

    var stack stack.Stack

    for _, instruction := range instructions.GetInstructions() {

        operand := instruction.GetOperand()

        operator := instruction.GetOperator()

        op_type := OperatorType(operator)

        log.Printf("Operand: %v, Operator: %v\n", operand, operator)

        switch op_type {

        case PUSH:

            stack.Push(float32(operand))

        case POP:

            stack.Pop()

        case FIB:

            n, popped := stack.Pop()

            if !popped {

                return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")

            }

            // op_type is already FIB in this case arm, so no extra check is needed.

            for f := range utils.FibonacciRange(int(n)) {

                log.Println(float32(f))

                if err := stream.Send(&machine.Result{Output: float32(f)}); err != nil {

                    return err

                }

            }

        default:

            return status.Errorf(codes.Unimplemented, "Operation '%s' not implemented yet", operator)

        }

    }

    return nil

}

 

source: server/machine.go
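The handler above leans on a stack.Stack helper that isn’t shown in this excerpt. Judging from its usage (Push takes a float32; Pop returns the top value plus an ok flag), a minimal sketch could look like the following; the real repository may differ in detail:

```go
package main

import "fmt"

// Stack is a minimal sketch of the stack.Stack helper the server code
// relies on: Push adds a value on top, Pop returns the top value and
// whether anything was there to pop.
type Stack struct {
	items []float32
}

// Push places v on top of the stack.
func (s *Stack) Push(v float32) {
	s.items = append(s.items, v)
}

// Pop removes and returns the top value; the second result is false
// when the stack is empty, mirroring the popped flag in the handler.
func (s *Stack) Pop() (float32, bool) {
	if len(s.items) == 0 {
		return 0, false
	}
	top := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return top, true
}

func main() {
	var s Stack
	s.Push(5)
	s.Push(6)
	a, _ := s.Pop() // 6, last in first out
	b, _ := s.Pop() // 5
	fmt.Println(a * b) // prints: 30, matching the PUSH 5, PUSH 6, MUL example
}
```

Note the last-in-first-out order: after PUSH 5 and PUSH 6, the first Pop yields 6, which is why the server assigns it to item2 before item1.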

Update the Client

Now, update the client code to call ServerStreamingExecute(), where the client will receive the numbers of the Fibonacci series through the stream and print them.

func runServerStreamingExecute(client machine.MachineClient, instructions *machine.InstructionSet) {

    log.Printf("Executing %v", instructions)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    defer cancel()

    stream, err := client.ServerStreamingExecute(ctx, instructions)

    if err != nil {

        log.Fatalf("%v.Execute(_) = _, %v: ", client, err)

    }

    for {

        result, err := stream.Recv()

        if err == io.EOF {

            log.Println("EOF")

            break

        }

        if err != nil {

            log.Printf("Err: %v", err)

            break

        }

        log.Printf("output: %v", result.GetOutput())

    }

    log.Println("DONE!")

}

source: client/machine.go

Test

To write the unit test we’ll have to generate the mock of multiple interfaces as required.

mockgen is the ready-to-go framework for mocking in Golang, so we’ll be leveraging it in our unit tests.
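As an aside, the essence of mocking a server stream can be shown without any framework: define the narrow interface the handler needs and hand it a recording fake, which is the same trick the gomock-based tests play with DoAndReturn. This is a self-contained sketch with invented names (Result, resultStream, streamSquares), not the repository’s code:

```go
package main

import "fmt"

// Result stands in for the generated machine.Result message.
type Result struct{ Output float32 }

// resultStream is the minimal slice of the generated server-stream
// interface that a handler needs; both a real gRPC stream and the
// fake below satisfy it.
type resultStream interface {
	Send(*Result) error
}

// fakeStream records everything sent, so a test can assert on it
// afterwards instead of touching the network.
type fakeStream struct{ sent []*Result }

func (f *fakeStream) Send(r *Result) error {
	f.sent = append(f.sent, r)
	return nil
}

// streamSquares is a toy handler that streams n squared values.
func streamSquares(n int, stream resultStream) error {
	for i := 1; i <= n; i++ {
		if err := stream.Send(&Result{Output: float32(i * i)}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fake := &fakeStream{}
	if err := streamSquares(3, fake); err != nil {
		panic(err)
	}
	for _, r := range fake.sent {
		fmt.Println(r.Output) // prints 1, 4, 9 on separate lines
	}
}
```

mockgen automates exactly this kind of fake for the full generated interfaces, plus expectation checking and call ordering, which is why the blog leans on it below.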

Server

As we’ve updated our interface, i.e., machine/machine.pb.go, let’s regenerate the mock for the MachineClient interface. And as we’ve introduced a new RPC ServerStreamingExecute(), let’s generate a mock for the server-stream interface Machine_ServerStreamingExecuteServer as well.

~/disk/E/workspace/grpc-eg-go

$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient,Machine_ServerStreamingExecuteServer > mock_machine/machine_mock.go

The updated mock_machine/machine_mock.go should look like this.

Now, we’re good to write a unit test for the server-side streaming RPC ServerStreamingExecute():

func TestServerStreamingExecute(t *testing.T) {

    s := MachineServer{}


    // set up test table

    tests := []struct {

        instructions []*machine.Instruction

        want         []float32

    }{

        {

            instructions: []*machine.Instruction{

                {Operand: 5, Operator: "PUSH"},

                {Operator: "FIB"},

            },

            want: []float32{0, 1, 1, 2, 3, 5},

        },

        {

            instructions: []*machine.Instruction{

                {Operand: 6, Operator: "PUSH"},

                {Operator: "FIB"},

            },

            want: []float32{0, 1, 1, 2, 3, 5, 8},

        },

    }


    ctrl := gomock.NewController(t)

    defer ctrl.Finish()

    mockServerStream := mock_machine.NewMockMachine_ServerStreamingExecuteServer(ctrl)

    for _, tt := range tests {

        mockResults := []*machine.Result{}

        mockServerStream.EXPECT().Send(gomock.Any()).DoAndReturn(

            func(result *machine.Result) error {

                mockResults = append(mockResults, result)

                return nil

            }).AnyTimes()


        req := &machine.InstructionSet{Instructions: tt.instructions}


        err := s.ServerStreamingExecute(req, mockServerStream)

        if err != nil {

            t.Errorf("ServerStreamingExecute(%v) got unexpected error: %v", req, err)

        }

        for i, result := range mockResults {

            got := result.GetOutput()

            want := tt.want[i]

            if got != want {

                t.Errorf("got %v, want %v", got, want)

            }

        }

    }

}
 

Please refer to the server/machine_test.go for detailed content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test server/machine.go server/machine_test.go

ok      command-line-arguments  0.003s

 

Client

For our new RPC ServerStreamingExecute(), let’s add the mock for ClientStream interface Machine_ServerStreamingExecuteClient as well.

~/disk/E/workspace/grpc-eg-go

$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient,Machine_ServerStreamingExecuteServer,Machine_ServerStreamingExecuteClient > mock_machine/machine_mock.go

 

source: mock_machine/machine_mock.go

Let’s add a unit test to test the client-side logic of the server-streaming RPC ServerStreamingExecute() using the mock MockMachine_ServerStreamingExecuteClient:

func TestServerStreamingExecute(t *testing.T) {

    instructions := []*machine.Instruction{}

    instructions = append(instructions, &machine.Instruction{Operand: 1, Operator: "PUSH"})

    instructions = append(instructions, &machine.Instruction{Operator: "FIB"})

    instructionSet := &machine.InstructionSet{Instructions: instructions}


    ctrl := gomock.NewController(t)

    defer ctrl.Finish()

    mockMachineClient := mock_machine.NewMockMachineClient(ctrl)

    clientStream := mock_machine.NewMockMachine_ServerStreamingExecuteClient(ctrl)


    clientStream.EXPECT().Recv().Return(&machine.Result{Output: 0}, nil)


    mockMachineClient.EXPECT().ServerStreamingExecute(

        gomock.Any(),   // context

        instructionSet, // rpc unary message

    ).Return(clientStream, nil)


    if err := testServerStreamingExecute(t, mockMachineClient, instructionSet); err != nil {

        t.Fatalf("Test failed: %v", err)

    }

}

Please refer to the mock_machine/machine_mock_test.go for detailed content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test mock_machine/machine_mock.go mock_machine/machine_mock_test.go

ok      command-line-arguments  0.003s

 

Run

Unit tests assure us that the business logic of the server & client codes is working as expected. Let’s try running the server and communicating with it via our client code.

Server

To start the server we need to run the previously created cmd/run_machine_server.go file.

~/disk/E/workspace/grpc-eg-go

$ go run cmd/run_machine_server.go

 

Client

Now, let’s run the client code client/machine.go.

~/disk/E/workspace/grpc-eg-go

$ go run client/machine.go

Executing instructions:<operator:"PUSH" operand:5 > instructions:<operator:"PUSH" operand:6 > instructions:<operator:"MUL" >

output:30

Executing instructions:<operator:"PUSH" operand:6 > instructions:<operator:"FIB" >

output: 0

output: 1

output: 1

output: 2

output: 3

output: 5

output: 8

EOF

DONE!

 

Awesome! A Server Streaming RPC has been successfully implemented.

Implementing Client Streaming RPC

We have learned how to implement a Server Streaming RPC, now it’s time to explore the Client Streaming RPC.

To do so, we’ll not introduce another RPC; rather, we’ll update the existing Execute() RPC to accept a stream of Instructions from the client in real-time, rather than a single request comprising a set of Instructions.

Update the protobuf

So, let’s update the interface:

service Machine {

-     rpc Execute(InstructionSet) returns (Result) {}

+     rpc Execute(stream Instruction) returns (Result) {}

      rpc ServerStreamingExecute(InstructionSet) returns (stream Result) {}

 }

 

source: machine/machine.proto

Generating the updated client and server interface Go code

Now let’s generate the updated Go code from machine/machine.proto by running:

~/disk/E/workspace/grpc-eg-go

$ SRC_DIR=./

$ DST_DIR=$SRC_DIR

$ protoc \

  -I=$SRC_DIR \

  --go_out=plugins=grpc:$DST_DIR \

  $SRC_DIR/machine/machine.proto

 

You’ll notice that the declaration of Execute() has been updated from MachineServer & MachineClient interfaces.

type MachineServer interface {

-   Execute(context.Context, *InstructionSet) (*Result, error)

+   Execute(Machine_ExecuteServer) error

    ServerStreamingExecute(*InstructionSet, Machine_ServerStreamingExecuteServer) error

 }

 type MachineClient interface {

-    Execute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (*Result, error)

+    Execute(ctx context.Context, opts ...grpc.CallOption) (Machine_ExecuteClient, error)

     ServerStreamingExecute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (Machine_ServerStreamingExecuteClient, error)

 }

 

source: machine/machine.pb.go

Update the Server

Let’s update the server code to make Execute() a client-streaming unidirectional RPC, so that it can accept a stream of instructions from the client and respond with a Result struct.

func (s *MachineServer) Execute(stream machine.Machine_ExecuteServer) error {

    var stack stack.Stack

    for {

        instruction, err := stream.Recv()

        if err == io.EOF {

            log.Println("EOF")

            output, popped := stack.Pop()

            if !popped {

                return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")

            }


            if err := stream.SendAndClose(&machine.Result{

                Output: output,

            }); err != nil {

                return err

            }


            return nil

        }

        if err != nil {

            return err

        }


        operand := instruction.GetOperand()

        operator := instruction.GetOperator()

        op_type := OperatorType(operator)


        fmt.Printf("Operand: %v, Operator: %v\n", operand, operator)


        switch op_type {

        case PUSH:

            stack.Push(float32(operand))

        case POP:

            stack.Pop()

        case ADD, SUB, MUL, DIV:

            item2, popped2 := stack.Pop()

            item1, popped1 := stack.Pop()


            if !popped1 || !popped2 {

                return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")

            }


            if op_type == ADD {

                stack.Push(item1 + item2)

            } else if op_type == SUB {

                stack.Push(item1 - item2)

            } else if op_type == MUL {

                stack.Push(item1 * item2)

            } else if op_type == DIV {

                stack.Push(item1 / item2)

            }


        default:

            return status.Errorf(codes.Unimplemented, "Operation '%s' not implemented yet", operator)

        }

    }

}

source: server/machine.go

Update the Client

Now update the client code to make client.Execute() a client-streaming RPC, so that the client can stream the instructions to the server and receive a Result struct once the streaming completes.

func runExecute(client machine.MachineClient, instructions *machine.InstructionSet) {

    log.Printf("Streaming %v", instructions)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    defer cancel()

    stream, err := client.Execute(ctx)

    if err != nil {

        log.Fatalf("%v.Execute(ctx) = %v, %v: ", client, stream, err)

    }

    for _, instruction := range instructions.GetInstructions() {

        if err := stream.Send(instruction); err != nil {

            log.Fatalf("%v.Send(%v) = %v: ", stream, instruction, err)

        }

    }

    result, err := stream.CloseAndRecv()

    if err != nil {

        log.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil)

    }

    log.Println(result)

}

 

source: client/machine.go

Test

Generate mocks for the Machine_ExecuteClient and Machine_ExecuteServer interfaces to test the client-streaming RPC Execute():

~/disk/E/workspace/grpc-eg-go

$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient,Machine_ServerStreamingExecuteClient,Machine_ServerStreamingExecuteServer,Machine_ExecuteServer,Machine_ExecuteClient > mock_machine/machine_mock.go

 

The updated mock_machine/machine_mock.go should look like this.

Server

Let’s update the unit test to test the server-side logic of client streaming Execute() RPC using mock:

func TestExecute(t *testing.T) {

    s := MachineServer{}

    ctrl := gomock.NewController(t)

    defer ctrl.Finish()

    mockServerStream := mock_machine.NewMockMachine_ExecuteServer(ctrl)

    mockResult := &machine.Result{}

    callRecv1 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 5, Operator: "PUSH"}, nil)

    callRecv2 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 6, Operator: "PUSH"}, nil).After(callRecv1)

    callRecv3 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operator: "MUL"}, nil).After(callRecv2)

    mockServerStream.EXPECT().Recv().Return(nil, io.EOF).After(callRecv3)

    mockServerStream.EXPECT().SendAndClose(gomock.Any()).DoAndReturn(

        func(result *machine.Result) error {

            mockResult = result

            return nil

        })



    err := s.Execute(mockServerStream)

    if err != nil {

        t.Errorf("Execute(%v) got unexpected error: %v", mockServerStream, err)

    }

    got := mockResult.GetOutput()

    want := float32(30)

    if got != want {

        t.Errorf("got %v, wanted %v", got, want)

    }

}

 

Please refer to the server/machine_test.go for detailed content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test server/machine.go server/machine_test.go

ok      command-line-arguments  0.003s

 

Client

Now, add unit test to test client-side logic of client streaming Execute() RPC using mock:

func TestExecute(t *testing.T) {

    ctrl := gomock.NewController(t)

    defer ctrl.Finish()

    mockMachineClient := mock_machine.NewMockMachineClient(ctrl)


    mockClientStream := mock_machine.NewMockMachine_ExecuteClient(ctrl)

    mockClientStream.EXPECT().Send(gomock.Any()).Return(nil).AnyTimes()

    mockClientStream.EXPECT().CloseAndRecv().Return(&machine.Result{Output: 30}, nil)


    mockMachineClient.EXPECT().Execute(

        gomock.Any(), // context

    ).Return(mockClientStream, nil)


    testExecute(t, mockMachineClient)

}

Please refer to the mock_machine/machine_mock_test.go for detailed content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test mock_machine/machine_mock.go mock_machine/machine_mock_test.go

ok      command-line-arguments  0.003s

 

Run all the unit tests at once:

~/disk/E/workspace/grpc-eg-go

$ go test ./...

?       github.com/toransahu/grpc-eg-go/client  [no test files]

?       github.com/toransahu/grpc-eg-go/cmd     [no test files]

?       github.com/toransahu/grpc-eg-go/machine [no test files]

ok      github.com/toransahu/grpc-eg-go/mock_machine    (cached)

ok      github.com/toransahu/grpc-eg-go/server  (cached)

ok      github.com/toransahu/grpc-eg-go/utils   (cached)

?       github.com/toransahu/grpc-eg-go/utils/stack     [no test files]

 

Run

Now that we are assured through unit tests that the business logic of the server & client code is working as expected, let’s try running the server and communicating with it via our client code.

Server

To launch the server we need to run the previously created cmd/run_machine_server.go file.

~/disk/E/workspace/grpc-eg-go

$ go run cmd/run_machine_server.go

 

Client

Now, let’s run the client code client/machine.go.

~/disk/E/workspace/grpc-eg-go

$ go run client/machine.go

Streaming instructions:<operator:"PUSH" operand:5 > instructions:<operator:"PUSH" operand:6 > instructions:<operator:"MUL" >

output:30

Executing instructions:<operator:"PUSH" operand:6 > instructions:<operator:"FIB" >

output: 0

output: 1

output: 1

output: 2

output: 3

output: 5

output: 8

EOF

DONE!

 

Awesome!! We have successfully transformed a Unary RPC into Server-side and Client-side streaming RPCs.

At the end of this blog, we’ve learned:

      • How to define an interface for uni-directional streaming RPCs using protobuf
      • How to write gRPC server & client logic for uni-directional streaming RPCs
      • How to write and run unit tests for server-streaming & client-streaming RPCs
      • How to run the gRPC server and communicate with it from a client

The source code of this example is available at toransahu/grpc-eg-go.

You can also git checkout to this commit SHA for Part-2(a) and to this commit SHA for Part-2(b).

See you in the next part of this blog series.

 

How to Integrate Firebase Authentication for Google Sign-in Functionality?

Introduction

Personalized experience is now a buzzword. Industries have started realizing the importance of learning what each individual thinks while making a decision: it helps in optimizing resources and expanding product reach. Consequently, most applications now ask for a user identity to create a premise for a personalized experience. However, it’s not easy to build your own authentication/identity solution. When such an occasion strikes, Firebase Authentication enters the frame to save the day.

What is Firebase authentication?

Firebase Authentication is a tool to rapidly and securely authenticate users. It offers a very clear authentication and login flow and is easy to use.

Why one should use Firebase authentication

    • Time, cost, security, and stability are advantages of using authentication as a service instead of constructing it yourself. Firebase Authentication has already done the legwork for you in terms of safe account storage, email/phone authentication flows, and so on.
    • Firebase Authentication integrates tightly with other Firebase products like Cloud Firestore, Realtime Database, and Cloud Storage. Declarative security rules are used to protect these products, and Firebase Authentication is used to introduce granular per-user security.
    • You can easily sign in from any platform. It provides an end-to-end identity solution, supporting email and password accounts, phone authentication, and Google, Twitter, Facebook, and GitHub login, and more.

Getting Started

In this blog, we will focus on integrating the Firebase Authentication for Google Sign-In functionality in our Android application using Google and Firebase APIs.

Create an Android Project

  • Create a new project and choose the template of your choice.

I chose “Empty Activity”.

  • Add name and click finish

  • Let’s change the layout of the main activity to show some user info, like name and profile photo. Also, add a sign-out button so that we cover the sign-out flow as well.

Create a Firebase Project

To use any Firebase tool, we first need to create a Firebase project.

Let’s create one for our app RemoteConfigSample

  • Give any name for your project and click `Continue`

  • Select your Analytics location and create the project.

Add Android App to Our Firebase Project

 

  • Click on the Android icon to create an Android app in the firebase project

  • We have to add the SHA-1 setting for Google Sign-In. Run the command below in the project directory to determine the SHA-1 of your debug key:

./gradlew signingReport

  • Use the SHA1 from above in the app registration on firebase
  • After filling in the relevant info, click on Register app

  • Download and add google-services.json to your project as per the instruction provided.

  • Click Next after following the instructions to connect the Firebase SDK to your project.

Add Firebase Authentication Dependencies

  • Go to Authentication Setting under Build in Left Pane

  • Click on “Get started”

  • Select the Sign In Method tab
  • Toggle the Google switch to enabled (blue)

  • Set a support email and Save it.

  • Go to Project Setting

  • And download the latest google-services.json, which now includes the authentication settings for the Google sign-in we enabled, and replace the old JSON file with the new one.
  • Add Firebase authentication dependency in build.gradle
    implementation 'com.google.firebase:firebase-auth'
  • Add Google sign in dependency in build.gradle
    implementation 'com.google.android.gms:play-services-auth:19.0.0'

Create a Sign-in Flow

To begin authentication, a simple Sign-In button is used. In this stage, you’ll implement the logic for signing in with Google and then authenticating with Firebase using that Google account.

  • Create a new sign in activity with a button to launch sign in flow
  • Initialize FirebaseAuth

mFirebaseAuth = FirebaseAuth.getInstance();

  • Initialize GoogleSignInClient

  • Create a SignIn method which we can call with the click of the SignIn button

  • Sign-in results need to be handled in onActivityResult. If the Google Sign-In result was successful, use the account to authenticate with Firebase:
  • Authenticate GoogleSignInAccount with firebase to get Firebase user

Extract User Data

On successful authentication of the Google account with Firebase Authentication, we can extract the relevant user information.

Get User Name

Get User Profile photo URL

SignOut

We’ve finished the login process. If the user is still logged in, our next goal is to let them log out. To do this, we create a method called signOut().

See App in Action

We can see the user also got created in the Firebase Authentication “Users” tab.

 

Hurray, we have done it! Now we know how to build an authentication solution, with no server of our own, to obtain user identity and serve a personalized user experience.

For more details and an in-depth view, you can find the code here

 

References: https://firebase.google.com/products/auth , https://firebase.google.com/docs/auth

 

 

Part-1: A Quick Overview of gRPC in Golang

REST is now the most popular approach among developers for building web APIs, as it is very easy to use. It serves both to expose business application services to the outer world and for communication among internal microservices. However, ease and flexibility come with some pitfalls: REST requires a stringent human agreement and relies on documentation, and in cases like internal communication and real-time applications, it has certain limitations. In 2015, gRPC kicked in. gRPC, initially developed at Google, is now disrupting the industry. It is a modern open-source high-performance RPC framework that comes with a simple, language-agnostic Interface Definition Language (IDL) system, leveraging Protocol Buffers.

Objective

This blog aims to get you started with gRPC in Go with a simple working example. The blog covers basic information like What, Why, When/Where, and How about the gRPC. We’ll majorly focus on the How section, to establish a connection between the client and server and write unit tests for testing the client and server code separately. We’ll also run the code to establish a client-server communication.

What is gRPC?

gRPC – Remote Procedure Call

    • gRPC is a high performance, open-source universal RPC Framework
    • It enables the server and client applications to communicate transparently and build connected systems
    • gRPC is developed and open-sourced by Google (but no, the g doesn’t stand for Google)

Why Use gRPC?

    1. Better Design
      • With gRPC, we can define our service once in a .proto file and implement clients and servers in any of gRPC’s supported languages
      • Ability to auto-generate and publish SDKs as opposed to publishing the APIs for services
    2. High Performance
      • Advantages of working with protocol buffers, including efficient serialization, a simple IDL, and easy interface updating
      • Advantages of improved features of HTTP/2
      • Multiplexing: the client can use a single TCP connection to handle multiple requests simultaneously
      • Binary Framing and Compression
    3. Multi-way communication
      • Simple/Unary RPC
      • Server-side streaming RPC
      • Client-side streaming RPC
      • Bidirectional streaming RPC

Where to Use gRPC?

The “where” is pretty easy: we can leverage gRPC almost anywhere. We just need two computers communicating over a network:

    • Microservices
    • Client-Server Applications
    • Integrations and APIs
    • Browser-based Web Applications

How to Use gRPC?

Our example is a simple “Stack Machine” as a service that lets clients perform operations like PUSH, ADD, SUB, MUL, DIV, FIB, AP, GP.
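Before wiring this into gRPC, it may help to see the stack-machine idea in isolation. The following standalone Go sketch is illustrative only (the `instruction` struct and `eval` function are assumptions of mine, not the generated `machine` package types); it evaluates PUSH 5, PUSH 6, MUL the way the service will:

```go
package main

import "fmt"

// instruction loosely mirrors the shape of the proto message defined later;
// the field names here are illustrative assumptions, not generated types.
type instruction struct {
	Operator string
	Operand  float32
}

// eval runs the instructions against a float32 stack and
// returns whatever is left on top.
func eval(ins []instruction) (float32, error) {
	var stack []float32
	for _, in := range ins {
		switch in.Operator {
		case "PUSH":
			stack = append(stack, in.Operand)
		case "ADD", "SUB", "MUL", "DIV":
			if len(stack) < 2 {
				return 0, fmt.Errorf("not enough operands for %s", in.Operator)
			}
			b := stack[len(stack)-1]
			a := stack[len(stack)-2]
			stack = stack[:len(stack)-2]
			switch in.Operator {
			case "ADD":
				stack = append(stack, a+b)
			case "SUB":
				stack = append(stack, a-b)
			case "MUL":
				stack = append(stack, a*b)
			case "DIV":
				stack = append(stack, a/b)
			}
		default:
			return 0, fmt.Errorf("operation %q not supported", in.Operator)
		}
	}
	if len(stack) == 0 {
		return 0, fmt.Errorf("nothing left on the stack")
	}
	return stack[len(stack)-1], nil
}

func main() {
	out, err := eval([]instruction{{"PUSH", 5}, {"PUSH", 6}, {"MUL", 0}})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // 30
}
```

The operands 5 and 6 are pushed, then MUL pops both and pushes 30, which is the final value the service would return in a Result.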

In Part-1, we’ll focus on Simple RPC implementation. In Part-2, we’ll focus on Server-side & Client-side streaming RPC, and in Part-3, we’ll implement Bidirectional streaming RPC.

Let’s get started with installing the prerequisites of the development.

Prerequisites

Go

    • Version 1.6 or higher.
    • For installation instructions, see Go’s Getting Started guide.

gRPC

Use the following command to install gRPC.

~/disk/E/workspace/grpc-eg-go
$ go get -u google.golang.org/grpc

Protocol Buffers v3

~/disk/E/workspace/grpc-eg-go
$ go get -u github.com/golang/protobuf/proto

 

  • Update the environment variable PATH to include the path to the protoc binary file.
  • Install the protoc plugin for Go
~/disk/E/workspace/grpc-eg-go
$ go get -u github.com/golang/protobuf/protoc-gen-go

Setting Project Structure

~/disk/E/workspace/grpc-eg-go
$ go mod init github.com/toransahu/grpc-eg-go
$ mkdir machine
$ mkdir server
$ mkdir client
$ tree
.
├── client/
├── go.mod
├── machine/
└── server/

Defining the service

Our first step is to define the gRPC service and the method request and response types using protocol buffers.

To define a service, we specify a named service in our machine/machine.proto file:

service Machine {

}

Then we define a Simple RPC method inside our service definition, specifying its request and response types.

  • A simple RPC where the client sends a request to the server using the stub and waits for a response to come back
// Execute accepts a set of Instructions from the client and returns a Result.
rpc Execute(InstructionSet) returns (Result) {}

 

  • machine/machine.proto file also contains protocol buffer message type definitions for all the request and response types used in our service methods.
// Result represents the output of execution of the instruction(s).
message Result {
  float output = 1;
}
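The request side needs message types too. Judging from the accessors the Go code uses later (GetInstructions(), GetOperator(), GetOperand()), the request messages look roughly like the sketch below; the field numbers and the operand’s scalar type are assumptions of mine, so check the repository’s machine/machine.proto for the authoritative definition.

```protobuf
// Instruction represents a single stack-machine operation.
// NOTE: field numbers and the operand type are illustrative assumptions.
message Instruction {
  string operator = 1;  // e.g. "PUSH", "POP", "ADD", "MUL"
  int32 operand = 2;    // used by PUSH; ignored by the other operators
}

// InstructionSet is the batch of instructions the client sends.
message InstructionSet {
  repeated Instruction instructions = 1;
}
```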

 

Our machine/machine.proto file should look like this considering Part-1 of this blog series.

Generating client and server code

We need to generate the gRPC client and server interfaces from the machine/machine.proto service definition.

~/disk/E/workspace/grpc-eg-go
$ SRC_DIR=./
$ DST_DIR=$SRC_DIR
$ protoc \
-I=$SRC_DIR \
--go_out=plugins=grpc:$DST_DIR \
$SRC_DIR/machine/machine.proto

 

Running this command generates the machine.pb.go file in the machine directory under the repository:

~/disk/E/workspace/grpc-eg-go
$ tree machine/
.
├── machine/
│   ├── machine.pb.go
│   └── machine.proto

Server

Let’s create the server.

There are two parts to making our Machine service do its job:

  • Create server/machine.go: Implementing the service interface generated from our service definition; writing our service’s business logic.
  • Running the Machine gRPC server: Run the server to listen for clients’ requests and dispatch them to the right service implementation.

Take a look at how our MachineServer interface should appear: grpc-eg-go/server/machine.go

type MachineServer struct{}

// Execute runs the set of instructions given.
func (s *MachineServer) Execute(ctx context.Context, instructions *machine.InstructionSet) (*machine.Result, error) {
    return nil, status.Error(codes.Unimplemented, "Execute() not implemented yet")
}

Implementing Simple RPC

MachineServer implements only Execute() service method as of now – as per Part-1 of this blog series.

Execute() simply receives an InstructionSet from the client, executes every Instruction in it on our Stack Machine, and returns the final value as a Result.

Before implementing Execute(), let’s implement a basic Stack. It should look like this.

type Stack []float32

func (s *Stack) IsEmpty() bool {
    return len(*s) == 0
}

func (s *Stack) Push(input float32) {
    *s = append(*s, input)
}

func (s *Stack) Pop() (float32, bool) {
    if s.IsEmpty() {
        return -1.0, false
    }

    item := (*s)[len(*s)-1]
    *s = (*s)[:len(*s)-1]
    return item, true
}
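A quick sanity check of the stack’s LIFO behavior; the type is reproduced here so the snippet compiles on its own:

```go
package main

import "fmt"

// Stack is the float32 slice-backed stack from above, reproduced
// so this snippet is self-contained.
type Stack []float32

func (s *Stack) IsEmpty() bool { return len(*s) == 0 }

func (s *Stack) Push(input float32) { *s = append(*s, input) }

func (s *Stack) Pop() (float32, bool) {
	if s.IsEmpty() {
		return -1.0, false
	}
	item := (*s)[len(*s)-1]
	*s = (*s)[:len(*s)-1]
	return item, true
}

func main() {
	var s Stack
	s.Push(5)
	s.Push(6)

	top, ok := s.Pop() // LIFO: the last value pushed comes out first
	fmt.Println(top, ok) // 6 true

	s.Pop()         // pops 5, leaving the stack empty
	_, ok = s.Pop() // Pop on an empty stack reports failure
	fmt.Println(ok) // false
}
```

This (value, ok) pattern is what Execute() relies on to detect an invalid instruction set: a false from Pop() means the stack underflowed.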

 

Now, let’s implement the Execute(). It should look like this.

type OperatorType string

const (
    PUSH OperatorType = "PUSH"
    POP  OperatorType = "POP"
    ADD  OperatorType = "ADD"
    SUB  OperatorType = "SUB"
    MUL  OperatorType = "MUL"
    DIV  OperatorType = "DIV"
)

type MachineServer struct{}

// Execute runs the set of instructions given.
func (s *MachineServer) Execute(ctx context.Context, instructions *machine.InstructionSet) (*machine.Result, error) {
    if len(instructions.GetInstructions()) == 0 {
        return nil, status.Error(codes.InvalidArgument, "No valid instructions received")
    }

    var stack stack.Stack

    for _, instruction := range instructions.GetInstructions() {
        operand := instruction.GetOperand()
        operator := instruction.GetOperator()
        op_type := OperatorType(operator)
        fmt.Printf("Operand: %v, Operator: %v\n", operand, operator)

        switch op_type {
        case PUSH:
            stack.Push(float32(operand))
        case POP:
            stack.Pop()
        case ADD, SUB, MUL, DIV:
            item2, popped2 := stack.Pop()
            item1, popped1 := stack.Pop()
            if !popped1 || !popped2 {
                return &machine.Result{}, status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")
            }
            if op_type == ADD {
                stack.Push(item1 + item2)
            } else if op_type == SUB {
                stack.Push(item1 - item2)
            } else if op_type == MUL {
                stack.Push(item1 * item2)
            } else if op_type == DIV {
                stack.Push(item1 / item2)
            }
        default:
            return nil, status.Errorf(codes.Unimplemented, "Operation '%s' not implemented yet", operator)
        }
    }

    item, popped := stack.Pop()
    if !popped {
        return &machine.Result{}, status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")
    }
    return &machine.Result{Output: item}, nil
}

 

We have implemented the Execute() to handle basic instructions like PUSH, POP, ADD, SUB, MUL, and DIV with proper error handling. On completion of the instructions set’s execution, it pops the result from Stack and returns as a Result object to the client.

Code to run the gRPC server

To run the gRPC server we need to:

  • Create a new instance of the gRPC server and make it listen on a TCP port at our localhost address. In this example we’ll use port 9111.
  • To serve our StackMachine service over the gRPC server, we need to register the service with the newly created gRPC server.

For the development purpose, the basic insecure code to run the gRPC server should look like this.

var (
    port = flag.Int("port", 9111, "Port on which gRPC server should listen TCP conn.")
)

func main() {
    flag.Parse()
    lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port))
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    grpcServer := grpc.NewServer()
    machine.RegisterMachineServer(grpcServer, &server.MachineServer{})

    // Serve blocks, so log before calling it.
    log.Printf("Initializing gRPC server on port %d", *port)
    grpcServer.Serve(lis)
}

 

We must consider strong TLS-based security for our production environment. I plan to include an example of a TLS implementation later in this blog series.

Client

As we already know, the same machine/machine.proto file, our IDL (Interface Definition Language), can generate interfaces for clients as well; one just has to implement those interfaces to communicate with the gRPC server.

With a .proto, either the service provider can implement an SDK, or the consumer of the service itself can implement a client in the desired programming language.

Let’s implement our version of a basic client code, which will call the Execute() method of the service. The client should look like this.

var (
    serverAddr = flag.String("server_addr", "localhost:9111", "The server address in the format of host:port")
)

func runExecute(client machine.MachineClient, instructions *machine.InstructionSet) {
    log.Printf("Executing %v", instructions)
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    result, err := client.Execute(ctx, instructions)
    if err != nil {
        log.Fatalf("%v.Execute(_) = _, %v: ", client, err)
    }
    log.Println(result)
}

func main() {
    flag.Parse()
    var opts []grpc.DialOption
    opts = append(opts, grpc.WithInsecure())
    opts = append(opts, grpc.WithBlock())
    conn, err := grpc.Dial(*serverAddr, opts...)
    if err != nil {
        log.Fatalf("fail to dial: %v", err)
    }
    defer conn.Close()

    client := machine.NewMachineClient(conn)

    // try Execute()
    instructions := []*machine.Instruction{}
    instructions = append(instructions, &machine.Instruction{Operand: 5, Operator: "PUSH"})
    instructions = append(instructions, &machine.Instruction{Operand: 6, Operator: "PUSH"})
    instructions = append(instructions, &machine.Instruction{Operator: "MUL"})
    runExecute(client, &machine.InstructionSet{Instructions: instructions})
}

Test

Server

Let’s write a unit test to validate our business logic of Execute() method.

    • Create a test file server/machine_test.go
    • Write the unit test, it should look like this.

Run the test file.

~/disk/E/workspace/grpc-eg-go
$ go test server/machine.go server/machine_test.go -v

=== RUN   TestExecute

--- PASS: TestExecute (0.00s)
PASS
ok      command-line-arguments    0.004s

Client

To test the client-side code without the overhead of connecting to a real server, we’ll use mocks. Mocking enables us to write lightweight unit tests that check client-side functionality without invoking RPC calls to a server.

To write a unit test to validate client side business logic of calling the Execute() method:

    • Install golang/mock package
    • Generate mock for MachineClient
    • Create a test file mock/machine_mock_test.go
    • Write the unit test

As we are leveraging the golang/mock package, to install the package we need to run the following command:

~/disk/E/workspace/grpc-eg-go
$ go get github.com/golang/mock/mockgen@latest

 

To generate a mock of the MachineClient run the following command, the file should look like this.

~/disk/E/workspace/grpc-eg-go
$ mkdir mock_machine && cd mock_machine
$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient > machine_mock.go

 

Write the unit test, it should look like this.

Run the test file.

~/disk/E/workspace/grpc-eg-go
$ go test mock_machine/machine_mock.go  mock_machine/machine_mock_test.go -v

=== RUN   TestExecute

output:30
--- PASS: TestExecute (0.00s)
PASS
ok      command-line-arguments    0.004s

Run

Now that we are assured through unit tests that the business logic of the server & client code is working as expected, let’s try running the server and communicating with it via our client code.

Server

To turn on the server we need to run the previously created cmd/run_machine_server.go file.

~/disk/E/workspace/grpc-eg-go
$ go run cmd/run_machine_server.go

Client

Now, let’s run the client code client/machine.go.

~/disk/E/workspace/grpc-eg-go
$ go run client/machine.go

Executing instructions:<operator:"PUSH" operand:5 > instructions:<operator:"PUSH" operand:6 > instructions:<operator:"MUL" >

output:30

 

Hurray!!! It worked.

At the end of this blog, we’ve learned:

    • Importance of gRPC – What, Why, Where
    • How to install all the prerequisites
    • How to define an interface using protobuf
    • How to write gRPC server & client logic for Simple RPC
    • How to write and run the unit test for server & client logic
    • How to run the gRPC server and communicate with it from a client

The source code of this example is available at toransahu/grpc-eg-go.

You can also git checkout to this commit SHA to walk through the source code specific to this Part-1 of the blog series.

See you in the next part of this blog series.