Angular — How to render HTML containing Angular Components dynamically at run-time

Introduction

Say you had a set of components defined in your Angular application: <your-component-1>, <your-component-2>, <your-component-3>, and so on. Wouldn’t it be awesome if you could create standalone templates consisting of <your-components> and other “Angular syntaxes” (Inputs, Outputs, interpolation, directives, etc.), load them at run-time from the back-end, and have Angular render them all — every <your-component> created and every Angular syntax evaluated? Wouldn’t that be perfect!?

You can’t have everything, but at least something will work.

Alas! We have to accept the fact that Angular does not support rendering templates dynamically by itself. There are also security considerations to keep in mind. Do make sure you understand these important factors first.

But all is not lost! Let us explore an approach that achieves at least one small but significant subset of dynamic template rendering: dynamic HTML containing Angular Components, as the title of this article suggests.

Terminology Notes

    1. I have used terms like “dynamic HTML” or “dynamic template” or “dynamic template HTML” quite frequently in this article. They all mean the same: the description of “standalone templates” in the introduction above.
    2. The term “dynamic components” means all <your-components> put/used in a dynamic HTML; please do not confuse it for “components defined or compiled dynamically at run-time” — we won’t be doing that in this article.
    3. “Host Component”. For a component or projected component in a dynamic HTML, its Host Component is the component that created it, e.g. in the “Sample dynamic HTML” gist below, the Host Component of [yourComponent6] is the component that will create that dynamic HTML, not the <your-component-3> inside which [yourComponent6] is placed in the dynamic HTML.

Knowledge Prerequisites

As a prerequisite to fully understand the proposed solution, I recommend that you get an idea about the following topics if not aware of them already.

    1. Dynamic component loader using ComponentFactoryResolver.
    2. Content Projection in Angular — pick your favorite article from Google.

Sample dynamic components

Let us define a few of <your-components> that we will be using in our sample “dynamic template HTML” in the next section.

import { Component, Input, OnInit } from '@angular/core';

@Component({
    selector: 'your-component-1',
    template: `
        <div>This is your component 1.</div>
        <div [ngStyle]="{ 'color': status }">My name is: {{ name }}</div>
    `,
})
export class YourComponent1 {
    @Input() name: string = '';
    @Input() status: string = 'green';
}


@Component({
    selector: 'your-component-2',
    template: `
        <div>This is your component 2 - {{ name }}.</div>
        <div *ngIf="filtering === 'true'">Filtered Id: {{ id }}.</div>
    `,
})
export class YourComponent2 {
    @Input() id: string = '0';
    @Input() name: string = '';
    @Input() filtering: 'true' | 'false' = 'false';
}


@Component({
    selector: 'your-component-3',
    template: `
        <div>This is your component 3 - {{ name }} ({{ ghostName || 'Ghost' }}).</div>

        <ng-content></ng-content>

        <div>End of your component 3</div>
    `,
})
export class YourComponent3 implements OnInit {

    @Input() id: number = 0; // Beware! `number` data-type

    @Input() name: string = ''; // Initialized - Will work
    @Input() ghostName: string; // Not initialized - Will not be available in `anyComp` for-in loop

    ngOnInit(): void {
        console.log(this.id === 45); // prints false
        console.log((this.id as any) === '45'); // prints true — the attribute value arrives as a string
        console.log(typeof this.id === 'number'); // prints false
        console.log(typeof this.id === 'string'); // prints true
    }
}


@Component({
    selector: '[yourComponent6]', // Attribute selector based component
    template: `
    <div *ngIf="!offSide || !strongSide">
        <div [hidden]="!offSide">This is your component 6.</div>
        <div [ngStyle]="{ 'color': status }">The official motto is: {{ offSide }} - {{ strongSide }}.</div>
    </div>`,
})
export class YourComponent6 {
    @Input() offSide: string = '';
    @Input() strongSide: string = 'green field';
}

 

Take a quick look at YourComponent3 — the comments against name, ghostName, and ngOnInit. They translate to the first restriction of my proposed solution: an @Input property must be initialized to a string value. There are two parts here.

  1. Inputs must be of string type. I impose this restriction because the value of any attribute of an HTML element in a dynamic HTML is always a string. So, it is better to keep your components’ input data types consistent with the values you will be setting on them.
  2. Inputs must be initialized. Otherwise, TypeScript may remove that property during transpilation, which causes problems for setComponentAttrs() (see later in the solution) — it cannot find the input property on the component instance at run-time, and hence won’t set it even if the dynamic HTML has the corresponding attribute defined.
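The initialization requirement can be checked in plain TypeScript, outside Angular entirely. The sketch below (class and property names are illustrative, not from the article's components) shows that an initialized field is always an own, enumerable property after construction — which is exactly what a for-in loop like the one in setComponentAttrs() relies on. Whether an uninitialized declaration survives transpilation depends on your compiler settings, so do not rely on it:

```typescript
// Minimal sketch; DemoComponent is a stand-in, not one of the article's components.
class DemoComponent {
    name = '';          // initialized: guaranteed own, enumerable property
    ghostName!: string; // declared only: may be absent at run-time
}

const instance: any = new DemoComponent();
const ownKeys: string[] = [];
for (const key in instance) {
    if (Object.prototype.hasOwnProperty.call(instance, key)) {
        ownKeys.push(key);
    }
}
console.log(ownKeys); // 'name' is always present; 'ghostName' depends on compiler flags
```

This is why the article's components initialize every string input explicitly.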

Sample dynamic HTML

Let us also define a dynamic HTML. All of the syntaxes shown here will work; the process will not support any Angular syntax that is NOT covered here.

<div>
    <p>Any HTML Element</p>

    <!-- simple component -->
    <your-component-1></your-component-1>

    <!-- simple component with "string" inputs -->
    <your-component-2 id="111" name="Krishnan" filtering="true"></your-component-2>

    <!-- component containing another content projected component -->
    <your-component-3 id="45">
        <your-component-1 name="George"></your-component-1>
        <div yourComponent6 offSide="hello" strongSide="world">...</div>
    </your-component-3>

    <!-- simple component with string inputs -->
    <your-component-4 ...></your-component-4>

    <!-- simple component with string inputs -->
    <your-component-5 ...></your-component-5>
</div>

 

Let me clarify again the second restriction of my proposed solution: there is no support for directives, pipes, interpolation, two-way data-binding, [variable] data-binding, ng-template, ng-container, etc. in a dynamic template HTML.

To clarify the “@Input string” restriction further: only hard-coded string values are supported, i.e. no variable bindings like [attrBinding]="stringVariable". This is because, to support such object binding, we would need to parse the HTML attributes and evaluate their values at run-time. Easier said than done!

Alternatives for unsupported syntaxes

  1. Directives.
    If you really want to use an attribute directive, the best alternative here is to create a @Component({ selector: '[attrName]' }) instead. In other words, you can create your component with any Angular-supported selector — tag-name selector, [attribute] selector, .class-name selector, or even a combination of them, e.g. a[href].
  2. Object/Variable data-binding, Interpolation, etc.
    Once you attach your dynamic HTML to the DOM, you can easily find the target element using hostElement.querySelector/querySelectorAll (or getElementsByTagName, etc.) and set its attribute value before you create the components. Alternatively, you could manipulate the HTML string itself before attaching it to the DOM. (This will become clearer in the next section.)
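As a concrete illustration of the second alternative, here is a minimal sketch that substitutes variables into the HTML string before it is attached to the DOM. The placeholder syntax and helper name are my own invention for this example, not part of the proposed solution:

```typescript
// Hypothetical helper: replace {{placeholder}}-style tokens in a dynamic
// HTML string with hard-coded string values before attaching it to the DOM.
function resolvePlaceholders(tpl: string, vars: Record<string, string>): string {
    // Each {{ key }} token is swapped for its value; unknown keys are left as-is.
    return tpl.replace(/\{\{\s*(\w+)\s*\}\}/g, (match: string, key: string) =>
        key in vars ? vars[key] : match);
}

const dynamicHtml = '<your-component-1 name="{{ userName }}"></your-component-1>';
const resolved = resolvePlaceholders(dynamicHtml, { userName: 'Krishnan' });
// resolved === '<your-component-1 name="Krishnan"></your-component-1>'
```

After this substitution, the attribute carries a plain hard-coded string, which is exactly the form the “@Input string” restriction supports.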

Attach dynamic template HTML to DOM

It is now time to make the dynamic HTML available in the DOM. There are multiple ways to achieve this: using ElementRef, @ViewChild, an [innerHTML] binding, an attribute directive, etc. The snippet below shows a couple of examples that subscribe to an Observable<string> representing a template HTML stream and attach it to the DOM on resolution.

import { AfterViewInit, Component, ElementRef, ViewChild } from '@angular/core';
import { Observable, of } from 'rxjs';
import { delay } from 'rxjs/operators';

@Component({
    selector: 'app-binder',
    template: `<div #divBinder></div>`,
})
export class BinderComponent implements AfterViewInit {

    templateHtml$: Observable<string> = of('load supported-features.html here').pipe(delay(1000));

    @ViewChild('divBinder') divBinder!: ElementRef<HTMLElement>;

    constructor(private elementRef: ElementRef) { }

    ngAfterViewInit(): void {
        // Technique #1: attach to the component's own host element
        this.templateHtml$.subscribe(tpl => this.elementRef.nativeElement.innerHTML = tpl);
        // Technique #2: attach to an element referenced via @ViewChild
        // (pick one technique in practice; both are shown for illustration)
        this.templateHtml$.subscribe(tpl => this.divBinder.nativeElement.innerHTML = tpl);
    }

}

 

The dynamic components rendering factory

What do we need to achieve here?

    1. Find Components’ HTML elements in DOM (of the dynamic HTML).
    2. Create appropriate Angular Components and set their @Input properties.
    3. Wire them up into the Angular application.

That is exactly what DynamicComponentFactory<T>.create() does below.

import { Injectable, ComponentRef, Injector, Type, ComponentFactory, ComponentFactoryResolver, ApplicationRef } from '@angular/core';

export class DynamicComponentFactory<T> {

    private embeddedComponents: ComponentRef<T>[] = [];

    constructor(
        private appRef: ApplicationRef,
        private factory: ComponentFactory<T>,
        private injector: Injector,
    ) { }

    //#region Creation process
    /**
     * Creates components (of type `T`) as detected inside `hostElement`.
     * @param hostElement The host/parent DOM element within which the component's selector is searched.
     */
    create(hostElement: Element): ComponentRef<T>[] {
        // Find elements of given Component selector type and put it into an Array (slice.call).
        const htmlEls = Array.prototype.slice.call(hostElement.querySelectorAll(this.factory.selector)) as Element[];
        // Create components
        const compRefs = htmlEls.map(el => this.createComponent(el));
        // Add to list
        this.embeddedComponents.push(...compRefs);
        // Attach created components to ApplicationRef to include them in change-detection cycles.
        compRefs.forEach(compRef => this.appRef.attachView(compRef.hostView));
        // Return newly created components in case required outside
        return compRefs;
    }

    private createComponent(el: Element): ComponentRef<T> {
        // Convert the NodeList into an Array, because Angular doesn't accept a NodeList for projectableNodes.
        const projectableNodes = [Array.prototype.slice.call(el.childNodes)];

        // Create component
        const compRef = this.factory.create(this.injector, projectableNodes, el);
        const comp = compRef.instance;

        // Apply ALL attribute inputs to the dynamic component. (NOTE: This is a generic function; it is
        // not required when you are sure of the component's input requirements. Also note that only
        // static property values work here, since this is the only time they are set.)
        this.setComponentAttrs(comp, el);

        return compRef;
    }

    private setComponentAttrs(comp: T, el: Element): void {
        const anyComp = (comp as any);
        for (const key in anyComp) {
            if (
                Object.prototype.hasOwnProperty.call(anyComp, key)
                && el.hasAttribute(key)
            ) {
                anyComp[key] = el.getAttribute(key);
                // console.log(el.getAttribute('name'), key, el.getAttribute(key));
            }
        }
    }
    //#endregion

    //#region Destroy process
    destroy(): void {
        this.embeddedComponents.forEach(compRef => this.appRef.detachView(compRef.hostView));
        this.embeddedComponents.forEach(compRef => compRef.destroy());
    }
    //#endregion
}

/**
 * Use this Factory class to create `DynamicComponentFactory<T>` instances.
 *
 * @tutorial PROVIDERS: This class should be "provided" in _each individual component_ (a.k.a. Host Component)
 * that wants to use it. Also, you will want to inject this class with the `@Self` decorator.
 *
 * **Reason**: You could instead use `providedIn: 'root'` (and skip `@Self`), but that causes the following issue:
 *  1. Routing does not work correctly - you don't get the correct instance of `ActivatedRoute`.
 */
@Injectable()
export class DynamicComponentFactoryFactory {

    constructor(
        private appRef: ApplicationRef,
        private injector: Injector,
        private resolver: ComponentFactoryResolver,
    ) { }

    create<T>(componentType: Type<T>): DynamicComponentFactory<T> {
        const factory = this.resolver.resolveComponentFactory(componentType);
        return new DynamicComponentFactory<T>(this.appRef, factory, this.injector);
    }

}

 

I hope that the code and comments are self-explanatory. So, let me cover only certain parts that require additional explanation.

    1. this.factory.create: This is the heart of this solution — the API provided by Angular to create a component from code.
    2. The first argument injector is required by Angular to inject dependencies into the instances being created.
    3. The second argument projectableNodes is an array of all “Projectable Nodes” of the component to be created, e.g. in “Sample dynamic HTML” gist, <your-component-1> and <div yourComponent6> are the projectable nodes of <your-component-3>. If this argument is not provided, then these Nodes inside <your-component-3> will not be rendered in the final view.
    4. setComponentAttrs(): This function loops through all public properties of the created component’s instance and sets their values to corresponding attributes’ values of the Host Element el, but only if found, otherwise the input holds its default value defined in the component.
    5. this.appRef.attachView(): This makes Angular aware of the components created and includes them in its change detection cycle.
    6. destroy(): Angular will not dispose of any dynamically created component for us automatically. Hence, we need to do it explicitly when the Host Component is being destroyed. In our current example, our Host Component is going to be BinderComponent explained in the next section.
    7. Note that DynamicComponentFactory<T> works for only one component type <T> per instance of that factory class. So, to bind multiple types of components, you must create one such factory instance per component type. To make this process easier, we make use of the DynamicComponentFactoryFactory class (sorry, couldn’t think of a better name). Apart from that, the other reason to have this wrapper class is that you cannot directly inject Angular’s ComponentFactory<T>, which is the second constructor dependency of DynamicComponentFactory<T>. (There may be better ways to manage the factory creation process; open to suggestions.)

We are now ready to use this factory class to create dynamic components.

Create Angular Components in dynamic HTML

Finally, we create one instance of DynamicComponentFactory<T> per “dynamic component” type using DynamicComponentFactoryFactory and call its create(element) method in a loop, where element is the HTML node that contains the dynamic HTML. We may also perform custom “initialization” operations on the newly created components; see the filtering logic in createAllComponents() below.

import { AfterViewInit, ComponentRef, Component, ElementRef, OnDestroy, Self } from '@angular/core';
import { Observable, of } from 'rxjs';
import { delay } from 'rxjs/operators';
import { DynamicComponentFactory, DynamicComponentFactoryFactory } from './dynamic-component-factory';
// YourComponent1/2/3 are imported from wherever they are declared in your app.

@Component({
    selector: 'app-binder',
    template: ``,
    providers: [
        DynamicComponentFactoryFactory, // IMPORTANT!
    ],
})
export class BinderComponent implements AfterViewInit, OnDestroy {

    private readonly factories: DynamicComponentFactory<any>[] = [];
    private readonly hostElement: Element;

    templateHtml$: Observable<string> = of('load supported-features.html here').pipe(delay(1000));

    components: any[] = [
        YourComponent1,
        YourComponent2,
        YourComponent3,
        // ... add others
    ];

    constructor(
        // @Self - best practice; to avoid potential bugs if you forgot to `provide` it here
        @Self() private cmpFactory: DynamicComponentFactoryFactory,
        elementRef: ElementRef,
    ) {
        this.hostElement = elementRef.nativeElement;
    }

    ngAfterViewInit(): void {
        this.templateHtml$.subscribe(tpl => {
            this.hostElement.innerHTML = tpl;
            this.initFactories();
            this.createAllComponents();
        });
    }

    private initFactories(): void {
        this.components.forEach(c => {
            const f = this.cmpFactory.create(c);
            this.factories.push(f);
        });
    }

    // Create components dynamically
    private createAllComponents(): void {
        const el = this.hostElement;
        const compRefs: ComponentRef<any>[] = [];
        this.factories.forEach(f => {
            const comps = f.create(el);
            compRefs.push(...comps);
        });
        // Here you can make use of compRefs, filter them, etc.
        // to perform any custom operations, if required.
        compRefs
            .filter(c => c.instance instanceof YourComponent2)
            .forEach(c => {
                c.instance.name = 'hello';
                c.instance.filtering = 'false';
            });
    }

    private removeAllComponents(): void {
        this.factories.forEach(f => f.destroy());
    }

    ngOnDestroy(): void {
        this.removeAllComponents();
    }

}

 

DynamicComponentFactoryFactory Provider (Important!)

Notice in BinderComponent that DynamicComponentFactoryFactory is provided in the component's own @Component decorator and is injected using @Self. As mentioned in its JSDoc comments, this is important because we want the correct instance of Injector to be used for creating components dynamically. If the factory class is not provided at the Host Component level but instead via providedIn: 'root' or some ParentModule, then the Injector instance will be of that level, which may have unintended consequences, e.g. a relative link such as [routerLink]="['.', '..', 'about-us']" used in, say, YourComponent1 may not work correctly.

That’s it!

Conclusion

If you have made it this far, you may be thinking, “Meh! This is a completely stripped-down version of Angular templates’ capabilities. That’s no good for me!”. Yes, I will not deny that. But, believe me, it is still quite a “power-up”! I have been able to create a full-fledged Website that renders dynamic, user-defined templates using this approach, and it works perfectly well.

Even though we cannot render fully-loaded dynamic templates at run-time, we have seen in this article how we can render at least “components with static string inputs”. This may seem like a crippled solution if you compare it with all the wonderful features that Angular provides at compile-time. But, practically, this may still solve a lot of use cases requiring dynamic template rendering.

Let’s consider this a partial success.

Hope you found the article useful.

Part-3: Building a bidirectional-streaming gRPC service using Golang

If you have been through part-2 of this blog series, then you know that the gRPC framework has support for uni-directional streaming RPCs. But that is not all: gRPC supports bi-directional streaming RPCs as well. That means a gRPC client and a gRPC server can stream requests and responses simultaneously over the same TCP connection.

Objective

In this blog, you’ll get to know what bi-directional streaming RPCs are, and how to implement, test, and run them using a live, fully functional example.

Previously, in part-2 of this blog series, we learned the basics of uni-directional streaming RPCs: how to implement them, how to write unit tests, and how to launch the server & client. Part-2 walks you through a step-by-step guide to implementing a Stack Machine server & client leveraging uni-directional streaming RPCs.

If you’ve missed that, I recommend going through it to get familiar with the basics of the gRPC framework & streaming RPCs.

Introduction

Let’s understand how bi-directional streaming RPCs work at a very high level.

Bidirectional streaming RPCs are those where:

    • both sides send a sequence of messages using a read-write stream
    • the two streams operate independently, so clients and servers can read and write in whatever order they like
    • for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes

The best thing is, the order of messages in each stream is preserved.
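To see the shape of this interaction without any gRPC machinery, here is a stdlib-only Go sketch in which channels stand in for the two stream directions (all names are illustrative). Each direction is an independent, ordered queue: the "server" reads and writes at its own pace, yet within each direction the message order is preserved:

```go
package main

import (
	"fmt"
	"sync"
)

// runSession models one bidirectional session: requests flow client->server,
// responses flow server->client, and the two directions are independent.
func runSession(requests []string) []string {
	toServer := make(chan string, len(requests))
	toClient := make(chan string, len(requests))

	var wg sync.WaitGroup
	wg.Add(1)
	// "Server": read each request and write one response per request.
	go func() {
		defer wg.Done()
		for msg := range toServer {
			toClient <- "ack:" + msg
		}
		close(toClient) // no more responses once the request stream ends
	}()

	// "Client": stream all requests, then close its send direction
	// (the moral equivalent of stream.CloseSend()).
	for _, m := range requests {
		toServer <- m
	}
	close(toServer)

	// Drain responses; their order matches the request order.
	var responses []string
	for r := range toClient {
		responses = append(responses, r)
	}
	wg.Wait()
	return responses
}

func main() {
	fmt.Println(runSession([]string{"PUSH 1", "PUSH 2", "ADD"}))
}
```

gRPC gives you the same guarantees per stream, plus flow control and transport over a single TCP connection.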

Now let’s improve the “Stack Machine” server & client codes to support bidirectional streaming.

Implementing Bidirectional Streaming RPC

We already have the server-streaming RPC ServerStreamingExecute() to handle the FIB operation, which streams numbers from the Fibonacci series, and the client-streaming RPC Execute() to handle a stream of instructions for the basic ADD/SUB/MUL/DIV operations and return a single response.

In this blog we’ll merge both functionalities to make the Execute() RPC a bidirectional streaming one.

Update the protobuf

Let’s update machine/machine.proto to make Execute() a bi-directional (server & client streaming) RPC, so that the client can stream instructions rather than sending a fixed set of instructions, and the server can respond with a stream of results. In doing so, we get rid of InstructionSet and ServerStreamingExecute().

The updated machine/machine.proto now looks like:

service Machine {

-     rpc Execute(stream Instruction) returns (Result) {}

-     rpc ServerStreamingExecute(InstructionSet) returns (stream Result) {}

+     rpc Execute(stream Instruction) returns (stream Result) {}

 }

 

source: machine/machine.proto

Notice the stream keyword at two places in the Execute() RPC declaration.

Generating the updated client and server interface Go code

Now let’s generate the updated Go code from machine/machine.proto by running:

~/disk/E/workspace/grpc-eg-go

$ SRC_DIR=./

$ DST_DIR=$SRC_DIR

$ protoc \

  -I=$SRC_DIR \

  --go_out=plugins=grpc:$DST_DIR \

  $SRC_DIR/machine/machine.proto

 

You’ll notice that the declaration of ServerStreamingExecute() is gone from the MachineServer & MachineClient interfaces, while the signature of Execute() remains intact.

...


 type MachineServer interface {

    Execute(Machine_ExecuteServer) error

-   ServerStreamingExecute(*InstructionSet, Machine_ServerStreamingExecuteServer) error

 }


...


 type MachineClient interface {

    Execute(ctx context.Context, opts ...grpc.CallOption) (Machine_ExecuteClient, error)

-   ServerStreamingExecute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (Machine_ServerStreamingExecuteClient, error)

 }

...

 

source: machine/machine.pb.go

Update the Server

Now we need to update the server code to make Execute() a bi-directional (server & client streaming) RPC, so that it can accept a stream of instructions from the client and, at the same time, respond with a stream of results.

func (s *MachineServer) Execute(stream machine.Machine_ExecuteServer) error {

    var stack stack.Stack

    for {

        instruction, err := stream.Recv()

        if err == io.EOF {

            log.Println("EOF")

            return nil

        }

        if err != nil {

            return err

        }



        operand := instruction.GetOperand()

        operator := instruction.GetOperator()

        op_type := OperatorType(operator)


        fmt.Printf("Operand: %v, Operator: %v\n", operand, operator)


        switch op_type {

        case PUSH:

            stack.Push(float32(operand))

        case POP:

            stack.Pop()

        case ADD, SUB, MUL, DIV:

            item2, popped2 := stack.Pop()

            item1, popped1 := stack.Pop()


            if !popped1 || !popped2 {

                return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")

            }


            var res float32

            if op_type == ADD {

                res = item1 + item2

            } else if op_type == SUB {

                res = item1 - item2

            } else if op_type == MUL {

                res = item1 * item2

            } else if op_type == DIV {

                res = item1 / item2

            }


            stack.Push(res)

            if err := stream.Send(&machine.Result{Output: float32(res)}); err != nil {

                return err

            }

        case FIB:

            n, popped := stack.Pop()


            if !popped {

                return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")

            }


            if op_type == FIB {

                for f := range utils.FibonacciRange(int(n)) {

                    if err := stream.Send(&machine.Result{Output: float32(f)}); err != nil {

                        return err

                    }

                }

            }

        default:

            return status.Errorf(codes.Unimplemented, "Operation '%s' not implemented yet", operator)

        }

    }

}

 

source: server/machine.go

Update the Client

Let’s update the client code to make client.Execute() a bi-directional streaming RPC so that the client can stream the instructions to the server and can receive a stream of results at the same time.

func runExecute(client machine.MachineClient, instructions []*machine.Instruction) {

    log.Printf("Streaming %v", instructions)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    defer cancel()

    stream, err := client.Execute(ctx)

    if err != nil {

        log.Fatalf("%v.Execute(ctx) = %v, %v: ", client, stream, err)

    }

    waitc := make(chan struct{})

    go func() {

        for {

            result, err := stream.Recv()

            if err == io.EOF {

                log.Println("EOF")

                close(waitc)

                return

            }

            if err != nil {

                log.Fatalf("Err: %v", err) // a non-EOF error is fatal; don't keep looping on it

            }

            log.Printf("output: %v", result.GetOutput())

        }

    }()


    for _, instruction := range instructions {

        if err := stream.Send(instruction); err != nil {

            log.Fatalf("%v.Send(%v) = %v: ", stream, instruction, err)

        }

        time.Sleep(500 * time.Millisecond)

    }

    if err := stream.CloseSend(); err != nil {

        log.Fatalf("%v.CloseSend() got error %v, want %v", stream, err, nil)

    }

    <-waitc

}

 

source: client/machine.go

Test

Before we start updating the unit test, let’s generate mocks for MachineClient, Machine_ExecuteClient, and Machine_ExecuteServer interfaces to mock the stream type while testing the bidirectional streaming RPC Execute().

~/disk/E/workspace/grpc-eg-go

$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient,Machine_ExecuteServer,Machine_ExecuteClient > mock_machine/machine_mock.go

 

The updated mock_machine/machine_mock.go should look like this.

Now, we’re good to write a unit test for bidirectional streaming RPC Execute().

Server

Let’s update the unit test to test the server-side logic of Execute() RPC for bidirectional streaming using mock:

func TestExecute(t *testing.T) {

    s := MachineServer{}


    ctrl := gomock.NewController(t)

    defer ctrl.Finish()

    mockServerStream := mock_machine.NewMockMachine_ExecuteServer(ctrl)


    mockResults := []*machine.Result{}

    callRecv1 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 1, Operator: "PUSH"}, nil)

    callRecv2 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 2, Operator: "PUSH"}, nil).After(callRecv1)

    callRecv3 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operator: "MUL"}, nil).After(callRecv2)

    callRecv4 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 3, Operator: "PUSH"}, nil).After(callRecv3)

    callRecv5 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operator: "ADD"}, nil).After(callRecv4)

    callRecv6 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operator: "FIB"}, nil).After(callRecv5)

    mockServerStream.EXPECT().Recv().Return(nil, io.EOF).After(callRecv6)

    mockServerStream.EXPECT().Send(gomock.Any()).DoAndReturn(

        func(result *machine.Result) error {

            mockResults = append(mockResults, result)

            return nil

        }).AnyTimes()

    wants := []float32{2, 5, 0, 1, 1, 2, 3, 5}


    err := s.Execute(mockServerStream)

    if err != nil {

        t.Errorf("Execute(%v) got unexpected error: %v", mockServerStream, err)

    }

    for i, result := range mockResults {

        got := result.GetOutput()

        want := wants[i]

        if got != want {

            t.Errorf("got %v, want %v", got, want)

        }

    }

}

 

Please refer to server/machine_test.go for the complete content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test server/machine.go server/machine_test.go

ok      command-line-arguments  0.004s

 

Client

Now, let’s add a unit test for the client-side logic of the bidirectional streaming RPC Execute(), using the mocks:

func testExecute(t *testing.T, client machine.MachineClient) {

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    defer cancel()


    instructions := []*machine.Instruction{}

    instructions = append(instructions, &machine.Instruction{Operand: 5, Operator: "PUSH"})

    instructions = append(instructions, &machine.Instruction{Operand: 6, Operator: "PUSH"})

    instructions = append(instructions, &machine.Instruction{Operator: "MUL"})


    stream, err := client.Execute(ctx)

    if err != nil {

        log.Fatalf("%v.Execute(%v) = _, %v: ", client, ctx, err)

    }

    for _, instruction := range instructions {

        if err := stream.Send(instruction); err != nil {

            log.Fatalf("%v.Send(%v) = %v: ", stream, instruction, err)

        }

    }

    result, err := stream.Recv()

    if err != nil {

        log.Fatalf("%v.Recv() got error %v, want %v", stream, err, nil)

    }


    got := result.GetOutput()

    want := float32(30)

    if got != want {

        t.Errorf("got %v, want %v", got, want)

    }

}


func TestExecute(t *testing.T) {

    ctrl := gomock.NewController(t)

    defer ctrl.Finish()

    mockMachineClient := mock_machine.NewMockMachineClient(ctrl)


    mockClientStream := mock_machine.NewMockMachine_ExecuteClient(ctrl)

    mockClientStream.EXPECT().Send(gomock.Any()).Return(nil).AnyTimes()

    mockClientStream.EXPECT().Recv().Return(&machine.Result{Output: 30}, nil)


    mockMachineClient.EXPECT().Execute(

        gomock.Any(), // context

    ).Return(mockClientStream, nil)


    testExecute(t, mockMachineClient)

}

 

Please refer to mock_machine/machine_mock_test.go for the complete content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test mock_machine/machine_mock.go mock_machine/machine_mock_test.go

ok      command-line-arguments  0.003s

 

Run

Unit tests assure us that the business logic of the server & client code is working as expected. Now let’s try running the server and communicating with it via our client code.

Server

To spin up the server we need to run the previously created cmd/run_machine_server.go file.

~/disk/E/workspace/grpc-eg-go

$ go run cmd/run_machine_server.go

 

Client

Now, let’s run the client code client/machine.go.

~/disk/E/workspace/grpc-eg-go

$ go run client/machine.go

Streaming [operator:"PUSH" operand:1  operator:"PUSH" operand:2  operator:"ADD"  operator:"PUSH" operand:3  operator:"DIV"  operator:"PUSH" operand:4  operator:"MUL"  operator:"FIB"  operator:"PUSH" operand:5  operator:"PUSH" operand:6  operator:"SUB" ]

output: 3

output: 1

output: 4

output: 0

output: 1

output: 1

output: 2

output: 3

output: -1

EOF

 

Bonus

There are situations when one has to choose between mocking a dependency and incorporating it into the test environment & running it live.

The decision of whether or not to mock can be based on:

    1. how many dependencies there are
    2. which dependencies are essential & most used
    3. whether it is feasible to install the dependencies in the test (and even the developer’s) environment, etc.

At one extreme, we can mock everything. But the mocking effort should pay off.
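To make the trade-off concrete, here is a dependency-free sketch of what a mock buys you: a hand-written stand-in for a Recv-style stream interface (hypothetical names; gomock generates the equivalent for the real generated interfaces):

```go
package main

import (
	"fmt"
	"io"
)

// ResultStream is a hypothetical subset of a gRPC client stream:
// Recv returns the next value, or io.EOF once the stream is exhausted.
type ResultStream interface {
	Recv() (float32, error)
}

// mockStream is a hand-written mock: it replays canned values and
// then reports io.EOF, with no server or network involved.
type mockStream struct {
	values []float32
	pos    int
}

func (m *mockStream) Recv() (float32, error) {
	if m.pos >= len(m.values) {
		return 0, io.EOF
	}
	v := m.values[m.pos]
	m.pos++
	return v, nil
}

// sum drains a ResultStream; this stands in for the business logic under test.
func sum(s ResultStream) (float32, error) {
	var total float32
	for {
		v, err := s.Recv()
		if err == io.EOF {
			return total, nil
		}
		if err != nil {
			return 0, err
		}
		total += v
	}
}

func main() {
	total, err := sum(&mockStream{values: []float32{3, 1, 4}})
	fmt.Println(total, err) // prints: 8 <nil>
}
```

The same idea scales up: gomock simply generates such stand-ins for every method of an interface and adds expectation bookkeeping on top.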

For the gRPC framework, we can run the gRPC server live & write client code to test against the business logic.

But spinning up a live server from a test file has its own complications: it requires allocating a real TCP port, which can clash during parallel runs or multiple runs on the same CI server.

To solve this, the gRPC community has introduced a package called bufconn under gRPC’s testing packages. bufconn provides an in-memory Listener: it implements net.Listener, and the connections dialed through it satisfy net.Conn without ever touching the network stack. We can substitute this listener into a gRPC server, allowing us to spin up a full-fledged server for testing that talks over an in-memory buffer instead of a real port.

As bufconn already comes with the grpc Go module, which we have already installed, we don’t need to install it explicitly.

So, let’s create a new test file server/machine_live_test.go, write the following test code to launch the gRPC server live using bufconn, and write a client to test the bidirectional RPC Execute().

const bufSize = 1024 * 1024


var lis *bufconn.Listener


func init() {

    lis = bufconn.Listen(bufSize)

    s := grpc.NewServer()

    machine.RegisterMachineServer(s, &MachineServer{})

    go func() {

        if err := s.Serve(lis); err != nil {

            log.Fatalf("Server exited with error: %v", err)

        }

    }()

}


func bufDialer(context.Context, string) (net.Conn, error) {

    return lis.Dial()

}


func testExecute_Live(t *testing.T, client machine.MachineClient, instructions []*machine.Instruction, wants []float32) {

    log.Printf("Streaming %v", instructions)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    defer cancel()

    stream, err := client.Execute(ctx)

    if err != nil {

        log.Fatalf("%v.Execute(ctx) = %v, %v: ", client, stream, err)

    }

    waitc := make(chan struct{})

    go func() {

        i := 0

        for {

            result, err := stream.Recv()

            if err == io.EOF {

                log.Println("EOF")

                close(waitc)

                return

            }

            if err != nil {

                log.Printf("Err: %v", err)

            }

            log.Printf("output: %v", result.GetOutput())

            got := result.GetOutput()

            want := wants[i]

            if got != want {

                t.Errorf("got %v, want %v", got, want)

            }

            i++

        }

    }()


    for _, instruction := range instructions {

        if err := stream.Send(instruction); err != nil {

            log.Fatalf("%v.Send(%v) = %v: ", stream, instruction, err)

        }

    }

    if err := stream.CloseSend(); err != nil {

        log.Fatalf("%v.CloseSend() got error %v, want %v", stream, err, nil)

    }

    <-waitc

}


func TestExecute_Live(t *testing.T) {

    ctx := context.Background()

    conn, err := grpc.DialContext(ctx, "bufnet", grpc.WithContextDialer(bufDialer), grpc.WithInsecure())

    if err != nil {

        t.Fatalf("Failed to dial bufnet: %v", err)

    }

    defer conn.Close()

    client := machine.NewMachineClient(conn)



    // try Execute()

    instructions := []*machine.Instruction{

        {Operand: 1, Operator: "PUSH"},

        {Operand: 2, Operator: "PUSH"},

        {Operator: "ADD"},

        {Operand: 3, Operator: "PUSH"},

        {Operator: "DIV"},

        {Operand: 4, Operator: "PUSH"},

        {Operator: "MUL"},

        {Operator: "FIB"},

        {Operand: 5, Operator: "PUSH"},

        {Operand: 6, Operator: "PUSH"},

        {Operator: "SUB"},

    }

    wants := []float32{3, 1, 4, 0, 1, 1, 2, 3, -1}

    testExecute_Live(t, client, instructions, wants)

}

 

source: server/machine_live_test.go
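The receive loop in testExecute_Live above relies on a common Go idiom: a consumer goroutine that closes a “done” channel when the stream ends, while the sender blocks on `<-waitc` until everything has been drained. Stripped of gRPC, the idiom looks like this (a plain channel stands in for the stream; toy names, not repo code):

```go
package main

import "fmt"

// consume reads every value from in, then closes done to signal completion,
// mirroring the Recv loop that closes waitc on io.EOF.
func consume(in <-chan int, done chan<- struct{}) {
	defer close(done)
	for v := range in {
		fmt.Println("output:", v)
	}
}

func main() {
	in := make(chan int)
	done := make(chan struct{})
	go consume(in, done) // receiver runs concurrently, like the Recv goroutine

	for _, v := range []int{3, 1, 4} { // send values, like stream.Send
		in <- v
	}
	close(in) // like stream.CloseSend: no more messages will follow

	<-done // block until the receiver has drained everything
	fmt.Println("DONE!")
}
```

Closing `done` (rather than sending on it) is deliberate: a closed channel unblocks every waiter, so the pattern still works if several goroutines wait for completion.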

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go 

$ go test server/machine.go server/machine_live_test.go 

ok      command-line-arguments  0.005s

 

Fantastic!!! Everything worked as expected.

At the end of this blog, we’ve learned:

    • How to define an interface for bi-directional streaming RPC using protobuf
    • How to write gRPC server & client logic for bi-directional streaming RPC
    • How to write and run the unit test for bi-directional streaming RPC
    • How to write and run the unit test for bi-directional streaming RPC by running the server live leveraging the bufconn package
    • How to run the gRPC server and communicate with it from a client

The source code of this example is available at toransahu/grpc-eg-go.

Feel free to write your queries or opinions. Let’s have a healthy discussion.

A Step-by-Step Guide to Easy Android In-App Review Setup

Introduction

Like any other developer, I always want to see how people react to my Android apps. After all, it determines the path ahead: where you can improve and what you can discard. A user can provide feedback in various ways, including via email or social media networks like Twitter. However, a rating and review on the Play Store can make or break an app. It does not just help us gather user feedback; it also helps users decide which app to download from a similar category, based on ratings.

The need for in-app review

We know how essential app reviews are for the app development team. To get feedback, we have to rely on the user, who would have to visit the Google Play Store to review the app. Another method is to add some kind of button or indicator, a click on which redirects them to the app’s details page.

However, redirecting users away from your app feels like inviting app abandonment.

Google finally came up with the new In-App Review API, an easy way to get feedback from the user without them even leaving the app.

The good thing is that we can plan when to prompt this review request. We can wait for the user to get familiar with the app, and then ask them when they reach a certain level of a game or unlock a new feature in the app.

Getting Started

In this blog, we will learn how to implement the In-App review and how we can test it.

Device Requirements

Only the following devices allow in-app reviews:

    • Android devices (phones and tablets) that have the Google Play Store installed and are running Android 5.0 (API level 21) or higher.
    • Chrome OS devices that have the Google Play Store installed.

Play Core Library Requirements

Your app must use version 1.8.0 or higher of the Play Core library to allow in-app reviews.

implementation 'com.google.android.play:core:1.9.1'

Design Suggestions

    • Avoid modifying the design of the Review card (popup).
    • There should be no overlay on top of or around the card.
    • Don’t remove the review dialog programmatically (it will be removed automatically based on user actions)

Implementation

Add Play core dependency

Add the play-core library to your build.gradle file as a dependency.

implementation 'com.google.android.play:core:1.9.1'

Create the ReviewManager

The ReviewManager is the interface that lets your app start an in-app review flow. Obtain an instance from the ReviewManagerFactory.

ReviewManager reviewManager = ReviewManagerFactory.create(this);

Get a ReviewInfo object

To create a request task, use the ReviewManager instance. If it is successful, the API returns the ReviewInfo object, which is necessary to begin the in-app review flow.

Launch the in-app review flow

To start the in-app review flow, use the ReviewInfo instance. Wait until the user has finished the in-app review flow before continuing with the usual user experience.

For example, assuming this code runs inside an Activity (the variable names are illustrative):

Task<ReviewInfo> request = reviewManager.requestReviewFlow();
request.addOnCompleteListener(task -> {
    if (task.isSuccessful()) {
        ReviewInfo reviewInfo = task.getResult();
        Task<Void> flow = reviewManager.launchReviewFlow(this, reviewInfo);
        flow.addOnCompleteListener(flowTask -> {
            // The flow has finished. The API does not indicate whether the user
            // reviewed or even saw the dialog, so continue the normal flow here.
        });
    }
});

Testing In-App review

Testing the In-App review is not as straightforward as running the app from Android Studio and submitting a review.
To protect user privacy and avoid API misuse, this API enforces a limited quota per user.

We will be testing our implementation using Internal Testing Track.

Internal testing

This is a tool to create and manage internal testing releases to make our app available to up to 100 internal testers.

We can create internal testing releases from the play console.

    • Select your app in the console for which you want to test In-App Review

    • Go to “Internal testing” under “Testing”, in the “Release” section on the left pane

    • Click on “Create new release”

    • Upload your signed APK; after a successful upload, it will appear in the release list

    • Add Release details and Save it

    • Click on Review release

    • Click on “Start rollout to Internal testing”

    • Add Testers and Save changes

    • After adding testers and saving the changes successfully, get the link from “Copy link” and share the invite link via whatever channel is convenient for you

Testing Internal Release

Before starting with the internal build, make sure you have installed the production build from the Play Store.

    • After opening the link, we have to accept the invitation

    • After accepting the invitation, if we go to the app’s detail page on Google Play, we will see that we are an internal tester and a new update is available

    • Update the app and then open it; you will see the review request in the bottom half of the screen, at the point where you requested the review.

    • Add the review and rating (I give mine 5 stars 😛 )

    • Submit review

    • Let’s check Google Play to see whether our review is visible there

Hurray! We have done it.

 

Best Practices

    • Never ask the user for feedback at the start of their journey through your application; wait until they are familiar with your app and then request it. This way you will get accurate and detailed feedback you can act on to improve the user experience.
    • Do not bombard the user with review requests relentlessly. This reduces user annoyance while also limiting API use.
    • Never ask a question before or while asking for the review.
    • Never customize the review screen; use the default one as suggested and required by Google.

 

Note: as we are testing this with an internal release, we can get the review dialog multiple times, since there is no quota for internal testing, and the rating & review will be overridden by the latest one. The same is not valid for the production environment: the quota is 1 per user, so the dialog will not be shown to a user who has already submitted a review.

Thanks for reading till the end, hope you have learned about the In-App review and will incorporate it in your apps.

 

Reference: https://developer.android.com/guide/playcore/in-app-review

Part -2: Building a unidirectional-streaming gRPC service using Golang

Have you ever wished, while developing a REST API, that the server could stream responses over the same TCP connection? Or, conversely, that the REST client could stream requests to the server? This could save the cost of bringing up another service (like WebSocket) just to fulfill such a requirement.

For such cases, REST isn’t the only API architecture available. People can now bank on the gRPC model as it has begun to play a crucial role. gRPC’s unidirectional-streaming RPC feature could be the perfect choice to meet those requirements.

Objective

In this blog, you’ll get to know what client streaming & server streaming uni-directional RPCs are. I will also discuss how to implement, test, and run them using a live, fully functional example.

Previously, in Part-1 of this blog series, we’ve learned the basics of gRPC, how to implement a Simple/Unary gRPC, how to write unit tests, how to launch the server & client. Part-1 is a step-by-step guide to implement a Stack Machine server & client leveraging Simple/Unary RPC.

If you’ve missed that, it is highly recommended to go through it to get familiar with the basics of the gRPC framework.

Introduction

Let’s understand how Client streaming & Server streaming RPCs work at a very high level.

Client streaming RPCs are those where:

    • the client writes a sequence of messages and sends them to the server using a provided stream
    • once the client has finished writing the messages, it waits for the server to read them and return its response

Server streaming RPCs are those where:

    • the client sends a request to the server and gets a stream to read a sequence of messages back
    • the client reads from the returned stream until there are no more messages

The best thing is that gRPC guarantees message ordering within an individual RPC call.

Now let’s improve the “Stack Machine” server & client code to support unidirectional streaming.

Implementing Server Streaming RPC

We’ll see an example of Server Streaming first by implementing the FIB operation.

The FIB RPC will:

    • perform a Fibonacci operation
    • accept an integer input N
    • respond with a stream of integers: the first N numbers of the Fibonacci series

And later we’ll see how Client Streaming can be implemented so that a client can input a stream of instructions to the Stack Machine in real-time rather than sending a single request consisting of a set of instructions.

Update Protobuf

We already have defined the gRPC service Machine and a Simple (Unary) RPC method Execute inside our service definition in part-1 of the blog series. Now, let’s update the service definition to add one server streaming RPC called ServerStreamingExecute.

    • A server streaming RPC is one where the client sends a request to the server using the stub and waits for the response to come back as a stream of results
    • To specify a server-side streaming method, you need to place the stream keyword before the response type

// ServerStreamingExecute accepts a set of Instructions from client and returns a stream of Result.

rpc ServerStreamingExecute(InstructionSet) returns (stream Result) {}

source: machine/machine.proto

Generating Updated Client & Server Interface Go Code

We need to generate the gRPC client and server interfaces from our machine/machine.proto service definition.

~/disk/E/workspace/grpc-eg-go

$ SRC_DIR=./

$ DST_DIR=$SRC_DIR

$ protoc \

  -I=$SRC_DIR \

  --go_out=plugins=grpc:$DST_DIR \

  $SRC_DIR/machine/machine.proto

 

You can observe that the declaration of ServerStreamingExecute() in the MachineClient and MachineServer interface has been auto-generated:

...

 type MachineClient interface {

    Execute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (*Result, error)

+   ServerStreamingExecute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (Machine_ServerStreamingExecuteClient, error)

 }

...

 type MachineServer interface {

    Execute(context.Context, *InstructionSet) (*Result, error)

+   ServerStreamingExecute(*InstructionSet, Machine_ServerStreamingExecuteServer) error

 }

 

source: machine/machine.pb.go

Update the Server

Just in case you’re wondering what happens if your service doesn’t implement some of the RPCs declared in the machine.pb.go file: you’ll encounter the following error while launching your gRPC server.

~/disk/E/workspace/grpc-eg-go

$ go run cmd/run_machine_server.go

# command-line-arguments

cmd/run_machine_server.go:32:44: cannot use &server.MachineServer literal (type *server.MachineServer) as type machine.MachineServer in argument to machine.RegisterMachineServer:

        *server.MachineServer does not implement machine.MachineServer (missing ServerStreamingExecute method)

 

So, it’s always best practice to keep your service in sync with the service definition, i.e. machine/machine.proto & machine/machine.pb.go. If you do not want to support a particular RPC, or its implementation is not ready yet, just respond with an Unimplemented error status. Example:

// ServerStreamingExecute runs the set of instructions given and streams a sequence of Results.

func (s *MachineServer) ServerStreamingExecute(instructions *machine.InstructionSet, stream machine.Machine_ServerStreamingExecuteServer) error {

    return status.Error(codes.Unimplemented, "ServerStreamingExecute() not implemented yet")

}

 

source: server/machine.go

Before we implement the ServerStreamingExecute() RPC, let’s write a Fibonacci series generator called FibonacciRange().

package utils

// FibonacciRange yields the Fibonacci numbers at positions 0..n on the returned channel.
func FibonacciRange(n int) <-chan int {

    ch := make(chan int)

    go func() {

        defer close(ch)

        a, b := 0, 1

        for i := 0; i <= n; i++ {

            ch <- a

            a, b = b, a+b

        }

    }()

    return ch

}

source: utils/fibonacci.go

The blog series assumes that you’re familiar with Golang basics & its concurrency paradigms & concepts like channels. You can read more about channels in the official documentation.

This function yields the numbers of the Fibonacci series up to the Nth position.
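One caveat of this channel-generator pattern is worth noting: the goroutine blocks on its channel send, so a consumer that stops ranging early would leave the goroutine blocked forever (a goroutine leak). Here is a sketch of a cancellable variant using a quit channel (a hypothetical helper for illustration, not part of the repo):

```go
package main

import "fmt"

// fibonacci streams the Fibonacci numbers at positions 0..n, but also
// stops early if the caller closes quit, avoiding a goroutine leak
// when the consumer does not drain the channel.
func fibonacci(n int, quit <-chan struct{}) <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch)
		a, b := 0, 1
		for i := 0; i <= n; i++ {
			select {
			case ch <- a: // normal case: deliver the next number
				a, b = b, a+b
			case <-quit: // consumer gave up: exit instead of blocking forever
				return
			}
		}
	}()
	return ch
}

func main() {
	quit := make(chan struct{})
	defer close(quit)
	for f := range fibonacci(5, quit) {
		fmt.Println(f)
	}
}
```

For the server code in this series the point is moot, because every consumer ranges over the whole channel; the quit channel only matters when a consumer may break out of the loop early.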

Let’s also add a small unit test to validate the FibonacciRange() generator.

package utils

import (

    "testing"

)

func TestFibonacciRange(t *testing.T) {

    fibOf5 := []int{0, 1, 1, 2, 3, 5}

    i := 0

    for f := range FibonacciRange(5) {

        if f != fibOf5[i] {

            t.Errorf("got %d, want %d", f, fibOf5[i])

        }

        i++

    }

    if i != len(fibOf5) {

        t.Errorf("got %d values, want %d", i, len(fibOf5))

    }

}

source: utils/fibonacci_test.go

Let’s implement ServerStreamingExecute() to handle the basic instructions PUSH/POP, and FIB, with proper error handling. For FIB, it pops N from the stack and streams the Fibonacci numbers up to the Nth position back to the client as Result objects.

func (s *MachineServer) ServerStreamingExecute(instructions *machine.InstructionSet, stream machine.Machine_ServerStreamingExecuteServer) error {

    if len(instructions.GetInstructions()) == 0 {

        return status.Error(codes.InvalidArgument, "No valid instructions received")

    }

    var stack stack.Stack

    for _, instruction := range instructions.GetInstructions() {

        operand := instruction.GetOperand()

        operator := instruction.GetOperator()

        op_type := OperatorType(operator)

        log.Printf("Operand: %v, Operator: %v\n", operand, operator)

        switch op_type {

        case PUSH:

            stack.Push(float32(operand))

        case POP:

            stack.Pop()

        case FIB:

            n, popped := stack.Pop()

            if !popped {

                return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")

            }

            for f := range utils.FibonacciRange(int(n)) {

                log.Println(float32(f))

                if err := stream.Send(&machine.Result{Output: float32(f)}); err != nil {

                    return err

                }

            }

        default:

            return status.Errorf(codes.Unimplemented, "Operation '%s' not implemented yet", operator)

        }

    }

    return nil

}

 

source: server/machine.go

Update the Client

Now, update the client code to call ServerStreamingExecute(), where the client will receive the numbers of the Fibonacci series through the stream and print them.

func runServerStreamingExecute(client machine.MachineClient, instructions *machine.InstructionSet) {

    log.Printf("Executing %v", instructions)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    defer cancel()

    stream, err := client.ServerStreamingExecute(ctx, instructions)

    if err != nil {

        log.Fatalf("%v.Execute(_) = _, %v: ", client, err)

    }

    for {

        result, err := stream.Recv()

        if err == io.EOF {

            log.Println("EOF")

            break

        }

        if err != nil {

            log.Printf("Err: %v", err)

            break

        }

        log.Printf("output: %v", result.GetOutput())

    }

    log.Println("DONE!")

}

source: client/machine.go

Test

To write the unit test we’ll have to generate the mock of multiple interfaces as required.

mockgen is the ready-to-go framework for mocking in Golang, so we’ll be leveraging it in our unit tests.

Server

As we’ve updated our interface, i.e. machine/machine.pb.go, let’s update the mock for the MachineClient interface. And as we’ve introduced a new RPC ServerStreamingExecute(), let’s generate the mock for the server stream interface Machine_ServerStreamingExecuteServer as well.

~/disk/E/workspace/grpc-eg-go

$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient,Machine_ServerStreamingExecuteServer > mock_machine/machine_mock.go

The updated mock_machine/machine_mock.go should look like this.

Now, we’re good to write a unit test for the server-side streaming RPC ServerStreamingExecute():

func TestServerStreamingExecute(t *testing.T) {

    s := MachineServer{}


    // set up test table

    tests := []struct {

        instructions []*machine.Instruction

        want         []float32

    }{

        {

            instructions: []*machine.Instruction{

                {Operand: 5, Operator: "PUSH"},

                {Operator: "FIB"},

            },

            want: []float32{0, 1, 1, 2, 3, 5},

        },

        {

            instructions: []*machine.Instruction{

                {Operand: 6, Operator: "PUSH"},

                {Operator: "FIB"},

            },

            want: []float32{0, 1, 1, 2, 3, 5, 8},

        },

    }


    ctrl := gomock.NewController(t)

    defer ctrl.Finish()

    mockServerStream := mock_machine.NewMockMachine_ServerStreamingExecuteServer(ctrl)

    for _, tt := range tests {

        mockResults := []*machine.Result{}

        mockServerStream.EXPECT().Send(gomock.Any()).DoAndReturn(

            func(result *machine.Result) error {

                mockResults = append(mockResults, result)

                return nil

            }).AnyTimes()


        req := &machine.InstructionSet{Instructions: tt.instructions}


        err := s.ServerStreamingExecute(req, mockServerStream)

        if err != nil {

            t.Errorf("ServerStreamingExecute(%v) got unexpected error: %v", req, err)

        }

        for i, result := range mockResults {

            got := result.GetOutput()

            want := tt.want[i]

            if got != want {

                t.Errorf("got %v, want %v", got, want)

            }

        }

    }

}
 

Please refer to the server/machine_test.go for detailed content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test server/machine.go server/machine_test.go

ok      command-line-arguments  0.003s

 

Client

For our new RPC ServerStreamingExecute(), let’s add the mock for the client stream interface Machine_ServerStreamingExecuteClient as well.

~/disk/E/workspace/grpc-eg-go

$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient,Machine_ServerStreamingExecuteServer,Machine_ServerStreamingExecuteClient > mock_machine/machine_mock.go

 

source: mock_machine/machine_mock.go

Let’s add a unit test for the client-side logic of the server-side streaming RPC ServerStreamingExecute(), using the mock MockMachine_ServerStreamingExecuteClient:

func TestServerStreamingExecute(t *testing.T) {

    instructions := []*machine.Instruction{}

    instructions = append(instructions, &machine.Instruction{Operand: 1, Operator: "PUSH"})

    instructions = append(instructions, &machine.Instruction{Operator: "FIB"})

    instructionSet := &machine.InstructionSet{Instructions: instructions}


    ctrl := gomock.NewController(t)

    defer ctrl.Finish()

    mockMachineClient := mock_machine.NewMockMachineClient(ctrl)

    clientStream := mock_machine.NewMockMachine_ServerStreamingExecuteClient(ctrl)


    clientStream.EXPECT().Recv().Return(&machine.Result{Output: 0}, nil)


    mockMachineClient.EXPECT().ServerStreamingExecute(

        gomock.Any(),   // context

        instructionSet, // rpc unary message

    ).Return(clientStream, nil)


    if err := testServerStreamingExecute(t, mockMachineClient, instructionSet); err != nil {

        t.Fatalf("Test failed: %v", err)

    }

}

Please refer to the mock_machine/machine_mock_test.go for detailed content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test mock_machine/machine_mock.go mock_machine/machine_mock_test.go

ok      command-line-arguments  0.003s

 

Run

Unit tests assure us that the business logic of the server & client code is working as expected. Let’s try running the server and communicating with it via our client code.

Server

To start the server we need to run the previously created cmd/run_machine_server.go file.

~/disk/E/workspace/grpc-eg-go

$ go run cmd/run_machine_server.go

 

Client

Now, let’s run the client code client/machine.go.

~/disk/E/workspace/grpc-eg-go

$ go run client/machine.go

Executing instructions:<operator:"PUSH" operand:5 > instructions:<operator:"PUSH" operand:6 > instructions:<operator:"MUL" >

output:30

Executing instructions:<operator:"PUSH" operand:6 > instructions:<operator:"FIB" >

output: 0

output: 1

output: 1

output: 2

output: 3

output: 5

output: 8

EOF

DONE!

 

Awesome! A Server Streaming RPC has been successfully implemented.

Implementing Client Streaming RPC

We have learned how to implement a Server Streaming RPC, now it’s time to explore the Client Streaming RPC.

To do so, we’ll not introduce another RPC; rather, we’ll update the existing Execute() RPC to accept a stream of Instructions from the client in real-time, rather than a single request comprising a set of Instructions.

Update the protobuf

So, let’s update the interface:

service Machine {

-     rpc Execute(InstructionSet) returns (Result) {}

+     rpc Execute(stream Instruction) returns (Result) {}

      rpc ServerStreamingExecute(InstructionSet) returns (stream Result) {}

 }

 

source: machine/machine.proto

Generating the updated client and server interface Go code

Now let’s generate an updated golang code from the machine/machine.proto by running:

~/disk/E/workspace/grpc-eg-go

$ SRC_DIR=./

$ DST_DIR=$SRC_DIR

$ protoc \

  -I=$SRC_DIR \

  --go_out=plugins=grpc:$DST_DIR \

  $SRC_DIR/machine/machine.proto

 

You’ll notice that the declaration of Execute() has been updated in the MachineServer & MachineClient interfaces.

type MachineServer interface {

-   Execute(context.Context, *InstructionSet) (*Result, error)

+   Execute(Machine_ExecuteServer) error

    ServerStreamingExecute(*InstructionSet, Machine_ServerStreamingExecuteServer) error

 }

 type MachineClient interface {

-    Execute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (*Result, error)

+    Execute(ctx context.Context, opts ...grpc.CallOption) (Machine_ExecuteClient, error)

     ServerStreamingExecute(ctx context.Context, in *InstructionSet, opts ...grpc.CallOption) (Machine_ServerStreamingExecuteClient, error)

 }

 

source: machine/machine.pb.go

Update the Server

Let’s update the server code to make Execute() a client-streaming uni-directional RPC, so that it can accept a stream of Instructions from the client and respond with a Result struct.

func (s *MachineServer) Execute(stream machine.Machine_ExecuteServer) error {

    var stack stack.Stack

    for {

        instruction, err := stream.Recv()

        if err == io.EOF {

            log.Println("EOF")

            output, popped := stack.Pop()

            if !popped {

                return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")

            }


            if err := stream.SendAndClose(&machine.Result{

                Output: output,

            }); err != nil {

                return err

            }


            return nil

        }

        if err != nil {

            return err

        }


        operand := instruction.GetOperand()

        operator := instruction.GetOperator()

        op_type := OperatorType(operator)


        fmt.Printf("Operand: %v, Operator: %v\n", operand, operator)


        switch op_type {

        case PUSH:

            stack.Push(float32(operand))

        case POP:

            stack.Pop()

        case ADD, SUB, MUL, DIV:

            item2, popped2 := stack.Pop()

            item1, popped1 := stack.Pop()


            if !popped1 || !popped2 {

                return status.Error(codes.Aborted, "Invalid sets of instructions. Execution aborted")

            }


            if op_type == ADD {

                stack.Push(item1 + item2)

            } else if op_type == SUB {

                stack.Push(item1 - item2)

            } else if op_type == MUL {

                stack.Push(item1 * item2)

            } else if op_type == DIV {

                stack.Push(item1 / item2)

            }


        default:

            return status.Errorf(codes.Unimplemented, "Operation '%s' not implemented yet", operator)

        }

    }

}

source: server/machine.go
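Setting the gRPC plumbing aside, the heart of Execute() is a small stack evaluator. As a self-contained illustration (hypothetical types standing in for the generated machine package and the repo’s stack), the same logic over a plain slice of instructions looks like:

```go
package main

import (
	"errors"
	"fmt"
)

// instruction mirrors the shape of machine.Instruction (a hypothetical stand-in).
type instruction struct {
	Operator string
	Operand  float32
}

// evaluate runs the PUSH/ADD/SUB/MUL/DIV subset over a slice,
// the same logic Execute() applies to each streamed message.
func evaluate(instructions []instruction) (float32, error) {
	var stack []float32
	pop := func() (float32, bool) {
		if len(stack) == 0 {
			return 0, false
		}
		v := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		return v, true
	}
	for _, ins := range instructions {
		switch ins.Operator {
		case "PUSH":
			stack = append(stack, ins.Operand)
		case "ADD", "SUB", "MUL", "DIV":
			item2, ok2 := pop() // top of stack is the right-hand operand
			item1, ok1 := pop()
			if !ok1 || !ok2 {
				return 0, errors.New("invalid set of instructions")
			}
			switch ins.Operator {
			case "ADD":
				stack = append(stack, item1+item2)
			case "SUB":
				stack = append(stack, item1-item2)
			case "MUL":
				stack = append(stack, item1*item2)
			case "DIV":
				stack = append(stack, item1/item2)
			}
		default:
			return 0, fmt.Errorf("operation %q not implemented", ins.Operator)
		}
	}
	// Like SendAndClose on io.EOF: the final result is the top of the stack.
	result, ok := pop()
	if !ok {
		return 0, errors.New("empty stack at end of execution")
	}
	return result, nil
}

func main() {
	out, err := evaluate([]instruction{
		{Operator: "PUSH", Operand: 5},
		{Operator: "PUSH", Operand: 6},
		{Operator: "MUL"},
	})
	fmt.Println(out, err) // prints: 30 <nil>
}
```

Popping both operands and checking both pops is the important detail: an ADD against a near-empty stack must abort rather than compute with garbage.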

Update the Client

Now update the client code to make client.Execute() a uni-directional streaming RPC, so that the client can stream the instructions to the server and receive a Result struct once the streaming completes.

func runExecute(client machine.MachineClient, instructions *machine.InstructionSet) {

    log.Printf("Streaming %v", instructions)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)

    defer cancel()

    stream, err := client.Execute(ctx)

    if err != nil {

        log.Fatalf("%v.Execute(ctx) = %v, %v: ", client, stream, err)

    }

    for _, instruction := range instructions.GetInstructions() {

        if err := stream.Send(instruction); err != nil {

            log.Fatalf("%v.Send(%v) = %v: ", stream, instruction, err)

        }

    }

    result, err := stream.CloseAndRecv()

    if err != nil {

        log.Fatalf("%v.CloseAndRecv() got error %v, want %v", stream, err, nil)

    }

    log.Println(result)

}

 

source: client/machine.go

Test

Generate mocks for the Machine_ExecuteClient and Machine_ExecuteServer interfaces to test the client-streaming RPC Execute():

~/disk/E/workspace/grpc-eg-go

$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient,Machine_ServerStreamingExecuteClient,Machine_ServerStreamingExecuteServer,Machine_ExecuteServer,Machine_ExecuteClient > mock_machine/machine_mock.go

 

The updated mock_machine/machine_mock.go should look like this.

Server

Let’s update the unit test to test the server-side logic of client streaming Execute() RPC using mock:

func TestExecute(t *testing.T) {
    s := MachineServer{}
    ctrl := gomock.NewController(t)
    defer ctrl.Finish()

    mockServerStream := mock_machine.NewMockMachine_ExecuteServer(ctrl)
    mockResult := &machine.Result{}
    callRecv1 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 5, Operator: "PUSH"}, nil)
    callRecv2 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operand: 6, Operator: "PUSH"}, nil).After(callRecv1)
    callRecv3 := mockServerStream.EXPECT().Recv().Return(&machine.Instruction{Operator: "MUL"}, nil).After(callRecv2)
    mockServerStream.EXPECT().Recv().Return(nil, io.EOF).After(callRecv3)
    mockServerStream.EXPECT().SendAndClose(gomock.Any()).DoAndReturn(
        func(result *machine.Result) error {
            mockResult = result
            return nil
        })

    err := s.Execute(mockServerStream)
    if err != nil {
        t.Errorf("Execute(%v) got unexpected error: %v", mockServerStream, err)
    }
    got := mockResult.GetOutput()
    want := float32(30)
    if got != want {
        t.Errorf("got %v, wanted %v", got, want)
    }
}

 

Please refer to the server/machine_test.go for detailed content.
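For context, the server-side Execute() under test follows the standard client-streaming pattern: call Recv() in a loop until io.EOF, then reply once with SendAndClose(). The sketch below mirrors that loop using simplified stand-in types (Instruction, Result, and the executeStream interface here are illustrative, not the generated gRPC ones) and a fakeStream that feeds canned instructions, much like the gomock expectations above:

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// Simplified stand-ins for the generated protobuf types.
type Instruction struct {
	Operator string
	Operand  int32
}
type Result struct{ Output float32 }

// executeStream mirrors the Recv/SendAndClose surface of the generated stream.
type executeStream interface {
	Recv() (*Instruction, error)
	SendAndClose(*Result) error
}

// execute consumes instructions until io.EOF, then replies once with the result.
func execute(stream executeStream) error {
	var stack []float32
	for {
		in, err := stream.Recv()
		if err == io.EOF { // the client finished streaming
			if len(stack) == 0 {
				return errors.New("empty result")
			}
			return stream.SendAndClose(&Result{Output: stack[len(stack)-1]})
		}
		if err != nil {
			return err
		}
		switch in.Operator {
		case "PUSH":
			stack = append(stack, float32(in.Operand))
		case "MUL":
			if len(stack) < 2 {
				return errors.New("not enough operands")
			}
			a, b := stack[len(stack)-2], stack[len(stack)-1]
			stack = append(stack[:len(stack)-2], a*b)
		}
	}
}

// fakeStream feeds canned instructions, then io.EOF.
type fakeStream struct {
	instrs []Instruction
	i      int
	result *Result
}

func (f *fakeStream) Recv() (*Instruction, error) {
	if f.i >= len(f.instrs) {
		return nil, io.EOF
	}
	in := f.instrs[f.i]
	f.i++
	return &in, nil
}

func (f *fakeStream) SendAndClose(r *Result) error { f.result = r; return nil }

func main() {
	s := &fakeStream{instrs: []Instruction{{"PUSH", 5}, {"PUSH", 6}, {"MUL", 0}}}
	if err := execute(s); err != nil {
		panic(err)
	}
	fmt.Println(s.result.Output) // 30
}
```

The gomock-based test above does the same thing: it scripts a Recv() sequence ending in io.EOF and captures the value passed to SendAndClose().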

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test server/machine.go server/machine_test.go

ok      command-line-arguments  0.003s

 

Client

Now, add unit test to test client-side logic of client streaming Execute() RPC using mock:

func TestExecute(t *testing.T) {
    ctrl := gomock.NewController(t)
    defer ctrl.Finish()

    mockMachineClient := mock_machine.NewMockMachineClient(ctrl)

    mockClientStream := mock_machine.NewMockMachine_ExecuteClient(ctrl)
    mockClientStream.EXPECT().Send(gomock.Any()).Return(nil).AnyTimes()
    mockClientStream.EXPECT().CloseAndRecv().Return(&machine.Result{Output: 30}, nil)

    mockMachineClient.EXPECT().Execute(
        gomock.Any(), // context
    ).Return(mockClientStream, nil)

    testExecute(t, mockMachineClient)
}

Please refer to the mock_machine/machine_mock_test.go for detailed content.

Let’s run the unit test:

~/disk/E/workspace/grpc-eg-go

$ go test mock_machine/machine_mock.go mock_machine/machine_mock_test.go

ok      command-line-arguments  0.003s

 

Run all the unit tests at once:

~/disk/E/workspace/grpc-eg-go

$ go test ./...

?       github.com/toransahu/grpc-eg-go/client  [no test files]

?       github.com/toransahu/grpc-eg-go/cmd     [no test files]

?       github.com/toransahu/grpc-eg-go/machine [no test files]

ok      github.com/toransahu/grpc-eg-go/mock_machine    (cached)

ok      github.com/toransahu/grpc-eg-go/server  (cached)

ok      github.com/toransahu/grpc-eg-go/utils   (cached)

?       github.com/toransahu/grpc-eg-go/utils/stack     [no test files]

 

Run

Now we are assured through unit tests that the business logic of the server & client codes is working as expected. Let’s try running the server and communicating with it via our client code.

Server

To launch the server we need to run the previously created cmd/run_machine_server.go file.

~/disk/E/workspace/grpc-eg-go

$ go run cmd/run_machine_server.go

 

Client

Now, let’s run the client code client/machine.go.

~/disk/E/workspace/grpc-eg-go

$ go run client/machine.go

Streaming instructions:<operator:"PUSH" operand:5 > instructions:<operator:"PUSH" operand:6 > instructions:<operator:"MUL" >

output:30

Executing instructions:<operator:"PUSH" operand:6 > instructions:<operator:"FIB" >

output: 0

output: 1

output: 1

output: 2

output: 3

output: 5

output: 8

EOF

DONE!
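The FIB lines in the output above illustrate the server-streaming pattern: the server sends each Fibonacci term as it is computed rather than collecting them all first. Stripped of the gRPC plumbing, the core loop can be sketched as follows (fibStream and the send callback are illustrative stand-ins for the handler and stream.Send, not the actual repo code):

```go
package main

import "fmt"

// fibStream emits the first n Fibonacci numbers one at a time via send,
// mirroring how a server-streaming handler calls stream.Send per result.
func fibStream(n int, send func(float32) error) error {
	a, b := float32(0), float32(1)
	for i := 0; i < n; i++ {
		if err := send(a); err != nil {
			return err // e.g. the client went away
		}
		a, b = b, a+b
	}
	return nil
}

func main() {
	// Streaming 7 terms matches the client output shown above: 0 1 1 2 3 5 8.
	_ = fibStream(7, func(v float32) error {
		fmt.Println("output:", v)
		return nil
	})
}
```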

 

Awesome!! We have successfully transformed our Unary RPC into uni-directional (server-side and client-side) streaming RPCs.

At the end of this blog, we’ve learned:

      • How to define an interface for uni-directional streaming RPCs using protobuf
      • How to write gRPC server & client logic for uni-directional streaming RPCs
      • How to write and run unit tests for server-streaming & client-streaming RPCs
      • How to run the gRPC server and communicate with it from a client

The source code of this example is available at toransahu/grpc-eg-go.

You can also git checkout to this commit SHA for Part-2(a) and to this commit SHA for Part-2(b).

See you in the next part of this blog series.

 

How to Integrate Firebase Authentication for Google Sign-in Functionality?

Introduction

Personalized experience is now a buzzword. Industries have started realizing the importance of learning what each individual thinks while making a decision. It helps in optimizing resources and expanding product reach. Consequently, most applications now ask for a user identity to create a premise for a personalized experience. However, it’s not easy to build your own authentication/identity solution. When such an occasion strikes, Firebase Authentication enters the frame to save the day for us.

What is Firebase authentication?

Firebase authentication is a tool to rapidly and securely authenticate users. It offers a very clear flow for the authentication and login method and is easy to use.

Why one should use Firebase authentication

    • Time, cost, security, and stability are advantages of using authentication as a service instead of constructing it yourself. Firebase Authentication has already done the legwork for you in terms of safe account storage, email/phone authentication flows, and so on.
    • Firebase Authentication has many integrations with other Firebase products like Cloud Firestore, Realtime Database, and Cloud Storage. Declarative security rules are used to protect these products, and Firebase Authentication is used to introduce granular per-user security.
    • You can easily let users sign in from any platform. It provides an end-to-end identity solution, supporting email and password accounts, phone authentication, and Google, Twitter, Facebook, and GitHub login, and more.

Getting Started

In this blog, we will focus on integrating the Firebase Authentication for Google Sign-In functionality in our Android application using Google and Firebase APIs.

Create an Android Project

  • Create a new project and choose the template of your choice.

I chose “Empty Activity”.

  • Add name and click finish

  • Let’s change the layout of the main activity to show some user info like name and profile photo. Also, add a sign-out button so that we understand the sign-out flow as well.

Create a Firebase Project

To use any firebase tool we need to create a firebase project for the same.

Let’s create one for our app RemoteConfigSample

  • Give any name for your project and click `Continue`

  • Select your Analytics location and create the project.

Add Android App to Our Firebase Project

 

  • Click on the Android icon to create an Android app in the firebase project

  • We have to add the SHA-1 setting for Google Sign-in. Run the command below in the project directory to determine the SHA-1 of your debug key:

./gradlew signingReport

  • Use the SHA-1 from above in the app registration on Firebase
  • After filling in the relevant info, click on Register app

  • Download and add google-services.json to your project as per the instruction provided.

  • Click Next after following the instructions to connect the Firebase SDK to your project.

Add Firebase Authentication Dependencies

  • Go to Authentication Setting under Build in Left Pane

  • Click on “Get started”

  • Select the Sign In Method tab
  • Toggle the Google switch to enabled (blue)

  • Set a support email and Save it.

  • Go to Project Setting

  • And download the latest google-services.json, which contains the authentication settings for the Google sign-in we enabled, and replace the old JSON file with the new one.
  • Add the Firebase authentication dependency in build.gradle
    implementation 'com.google.firebase:firebase-auth'
  • Add the Google sign-in dependency in build.gradle
    implementation 'com.google.android.gms:play-services-auth:19.0.0'

Create a Sign-in Flow

To begin authentication, a simple Sign-In button is used. In this stage, you’ll implement the logic for signing in with Google and then authenticating with Firebase using that Google account.

  • Create a new sign in activity with a button to launch sign in flow
  • Initialize FirebaseAuth

mFirebaseAuth = FirebaseAuth.getInstance();

  • Initialize GoogleSignInClient

  • Create a SignIn method which we can call with the click of the SignIn button

  • Sign in results need to be handled in onActivityResult. If the result of the Google Sign-In was successful, use the account to authenticate with Firebase:
  • Authenticate GoogleSignInAccount with firebase to get Firebase user

Extract User Data

On successful authentication of google account with firebase authentication, we can extract relevant user information.

Get User Name

Get User Profile photo URL

SignOut

We’ve finished the login process. If the user is logged in, our next goal is to let them sign out. To do this, we create a method called signOut().

See App in Action

We can see that the user also got created in the Firebase Authentication “Users” tab.

 

Hurray, we have done it! Now we know how to get a user identity and serve a personalized user experience with an authentication solution that requires no server of our own.

For more details and an in-depth view, you can find the code here

 

References: https://firebase.google.com/products/auth , https://firebase.google.com/docs/auth

 

 

Part-1: A Quick Overview of gRPC in Golang

REST is now the most popular framework among developers for web application development, as it is very easy to use. REST is used both to provide business application services to the outer world and for communication among internal microservices. However, ease and flexibility come with some pitfalls: REST requires stringent human agreement between teams and relies heavily on documentation. Also, in cases like internal communication and real-time applications, it has certain limitations. In 2015, gRPC kicked in. gRPC, initially developed at Google, is now disrupting the industry. gRPC is a modern open-source high-performance RPC framework, which comes with a simple language-agnostic Interface Definition Language (IDL) system, leveraging Protocol Buffers.

Objective

This blog aims to get you started with gRPC in Go with a simple working example. The blog covers basic information like What, Why, When/Where, and How about the gRPC. We’ll majorly focus on the How section, to establish a connection between the client and server and write unit tests for testing the client and server code separately. We’ll also run the code to establish a client-server communication.

What is gRPC?

gRPC – Remote Procedure Call

    • gRPC is a high performance, open-source universal RPC Framework
    • It enables the server and client applications to communicate transparently and build connected systems
    • gRPC is developed and open-sourced by Google (but no, the g doesn’t stand for Google)

Why Use gRPC?

    1. Better Design
      • With gRPC, we can define our service once in a .proto file and implement clients and servers in any of gRPC’s supported languages
      • Ability to auto-generate and publish SDKs as opposed to publishing the APIs for services
    2. High Performance
      • Advantages of working with protocol buffers, including efficient serialization, a simple IDL, and easy interface updating
      • Advantages of improved features of HTTP/2
      • Multiplexing: the client can handle multiple requests simultaneously over a single TCP connection
      • Binary Framing and Compression
    3. Multi-way communication
      • Simple/Unary RPC
      • Server-side streaming RPC
      • Client-side streaming RPC
      • Bidirectional streaming RPC

Where to Use gRPC?

The “where” is pretty easy: we can leverage gRPC almost anywhere. We just need two computers communicating over a network:

    • Microservices
    • Client-Server Applications
    • Integrations and APIs
    • Browser-based Web Applications

How to Use gRPC?

Our example is a simple “Stack Machine” service that lets clients perform operations like PUSH, ADD, SUB, MUL, DIV, FIB, AP, GP.
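To make the idea concrete before we wire up gRPC: the machine evaluates a postfix instruction list against a stack, so PUSH 5, PUSH 6, MUL leaves 30 on top. Here is a minimal plain-Go sketch of that evaluation (the instr and eval names are illustrative, not the generated types we build later):

```go
package main

import "fmt"

// instr is an illustrative instruction: an operator plus an optional operand.
type instr struct {
	op      string
	operand float32
}

// eval runs the instructions against a float32 stack and returns the top value.
func eval(instrs []instr) (float32, error) {
	var stack []float32
	pop := func() (float32, error) {
		if len(stack) == 0 {
			return 0, fmt.Errorf("pop on empty stack")
		}
		v := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		return v, nil
	}
	for _, in := range instrs {
		switch in.op {
		case "PUSH":
			stack = append(stack, in.operand)
		case "ADD", "SUB", "MUL", "DIV":
			b, err := pop() // second operand is on top
			if err != nil {
				return 0, err
			}
			a, err := pop()
			if err != nil {
				return 0, err
			}
			switch in.op {
			case "ADD":
				stack = append(stack, a+b)
			case "SUB":
				stack = append(stack, a-b)
			case "MUL":
				stack = append(stack, a*b)
			case "DIV":
				stack = append(stack, a/b)
			}
		default:
			return 0, fmt.Errorf("unknown operator %q", in.op)
		}
	}
	return pop()
}

func main() {
	out, err := eval([]instr{{"PUSH", 5}, {"PUSH", 6}, {"MUL", 0}})
	fmt.Println(out, err) // 30 <nil>
}
```

The gRPC service we define next simply puts this evaluation behind an RPC interface.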

In Part-1, we’ll focus on Simple RPC implementation. In Part-2, we’ll focus on Server-side & Client-side streaming RPC, and in Part-3, we’ll implement Bidirectional streaming RPC.

Let’s get started with installing the prerequisites of the development.

Prerequisites

Go

    • Version 1.6 or higher.
    • For installation instructions, see Go’s Getting Started guide.

gRPC

Use the following command to install gRPC.

~/disk/E/workspace/grpc-eg-go
$ go get -u google.golang.org/grpc

Protocol Buffers v3

~/disk/E/workspace/grpc-eg-go
$ go get -u github.com/golang/protobuf/proto

 

  • Update the environment variable PATH to include the path to the protoc binary file.
  • Install the protoc plugin for Go
~/disk/E/workspace/grpc-eg-go
$ go get -u github.com/golang/protobuf/protoc-gen-go

Setting Project Structure

~/disk/E/workspace/grpc-eg-go
$ go mod init github.com/toransahu/grpc-eg-go
$ mkdir machine
$ mkdir server
$ mkdir client
$ tree
.
├── client/
├── go.mod
├── machine/
└── server/

Defining the service

Our first step is to define the gRPC service and the method request and response types using protocol buffers.

To define a service, we specify a named service in our machine/machine.proto file:

service Machine {

}

Then we define a Simple RPC method inside our service definition, specifying its request and response types.

  • A simple RPC where the client sends a request to the server using the stub and waits for a response to come back
// Execute accepts a set of Instructions from the client and returns a Result.
rpc Execute(InstructionSet) returns (Result) {}

 

  • machine/machine.proto file also contains protocol buffer message type definitions for all the request and response types used in our service methods.
// Result represents the output of execution of the instruction(s).
message Result {
  float output = 1;
}
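For completeness, the request-side messages that Execute() refers to can be sketched like this — the field names, types, and numbers here are illustrative and may differ slightly from the actual machine.proto in the repo:

```proto
// Instruction is a single operation, e.g. {operator: "PUSH", operand: 5}.
message Instruction {
  string operator = 1;  // PUSH, POP, ADD, SUB, MUL, DIV, ...
  int32 operand = 2;    // used by PUSH; ignored by the other operators
}

// InstructionSet is the ordered list of instructions the client sends.
message InstructionSet {
  repeated Instruction instructions = 1;
}
```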

 

Our machine/machine.proto file should look like this considering Part-1 of this blog series.

Generating client and server code

We need to generate the gRPC client and server interfaces from the machine/machine.proto service definition.

~/disk/E/workspace/grpc-eg-go
$ SRC_DIR=./
$ DST_DIR=$SRC_DIR
$ protoc \
-I=$SRC_DIR \
--go_out=plugins=grpc:$DST_DIR \
$SRC_DIR/machine/machine.proto

 

Running this command generates the machine.pb.go file in the machine directory under the repository:

~/disk/E/workspace/grpc-eg-go
$ tree machine/
.
├── machine/
│   ├── machine.pb.go
│   └── machine.proto

Server

Let’s create the server.

There are two parts to making our Machine service do its job:

  • Create server/machine.go: Implementing the service interface generated from our service definition; writing our service’s business logic.
  • Running the Machine gRPC server: Run the server to listen for clients’ requests and dispatch them to the right service implementation.

Take a look at how our MachineServer interface should appear: grpc-eg-go/server/machine.go

type MachineServer struct{}

// Execute runs the set of instructions given.
func (s *MachineServer) Execute(ctx context.Context, instructions *machine.InstructionSet) (*machine.Result, error) {
    return nil, status.Error(codes.Unimplemented, "Execute() not implemented yet")
}

Implementing Simple RPC

MachineServer implements only Execute() service method as of now – as per Part-1 of this blog series.

Execute() simply receives an InstructionSet from the client, executes every Instruction in it on our Stack Machine, and returns the final value in a Result.

Before implementing Execute(), let’s implement a basic Stack. It should look like this.

type Stack []float32

func (s *Stack) IsEmpty() bool {
    return len(*s) == 0
}

func (s *Stack) Push(input float32) {
    *s = append(*s, input)
}

func (s *Stack) Pop() (float32, bool) {
    if s.IsEmpty() {
        return -1.0, false
    }
    item := (*s)[len(*s)-1]
    *s = (*s)[:len(*s)-1]
    return item, true
}
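A quick sanity check of the stack's LIFO behavior (the type from above is repeated here so the snippet compiles on its own):

```go
package main

import "fmt"

type Stack []float32

func (s *Stack) IsEmpty() bool { return len(*s) == 0 }

func (s *Stack) Push(input float32) { *s = append(*s, input) }

func (s *Stack) Pop() (float32, bool) {
	if s.IsEmpty() {
		return -1.0, false
	}
	item := (*s)[len(*s)-1]
	*s = (*s)[:len(*s)-1]
	return item, true
}

func main() {
	var s Stack
	s.Push(5)
	s.Push(6)
	top, ok := s.Pop() // LIFO: 6 comes off first
	fmt.Println(top, ok)
	top, ok = s.Pop() // then 5
	fmt.Println(top, ok)
	_, ok = s.Pop() // popping an empty stack reports false
	fmt.Println(ok)
}
```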

 

Now, let’s implement the Execute(). It should look like this.

type OperatorType string

const (
    PUSH OperatorType = "PUSH"
    POP  OperatorType = "POP"
    ADD  OperatorType = "ADD"
    SUB  OperatorType = "SUB"
    MUL  OperatorType = "MUL"
    DIV  OperatorType = "DIV"
)

type MachineServer struct{}

// Execute runs the set of instructions given.
func (s *MachineServer) Execute(ctx context.Context, instructions *machine.InstructionSet) (*machine.Result, error) {
    if len(instructions.GetInstructions()) == 0 {
        return nil, status.Error(codes.InvalidArgument, "No valid instructions received")
    }

    var stack stack.Stack

    for _, instruction := range instructions.GetInstructions() {
        operand := instruction.GetOperand()
        operator := instruction.GetOperator()
        opType := OperatorType(operator)
        fmt.Printf("Operand: %v, Operator: %v\n", operand, operator)

        switch opType {
        case PUSH:
            stack.Push(float32(operand))
        case POP:
            stack.Pop()
        case ADD, SUB, MUL, DIV:
            // Check both pops: a binary operator needs two operands on the stack.
            item2, popped2 := stack.Pop()
            item1, popped1 := stack.Pop()
            if !popped1 || !popped2 {
                return &machine.Result{}, status.Error(codes.Aborted, "Invalid set of instructions. Execution aborted")
            }
            switch opType {
            case ADD:
                stack.Push(item1 + item2)
            case SUB:
                stack.Push(item1 - item2)
            case MUL:
                stack.Push(item1 * item2)
            case DIV:
                stack.Push(item1 / item2)
            }
        default:
            return nil, status.Errorf(codes.Unimplemented, "Operation '%s' not implemented yet", operator)
        }
    }

    item, popped := stack.Pop()
    if !popped {
        return &machine.Result{}, status.Error(codes.Aborted, "Invalid set of instructions. Execution aborted")
    }
    return &machine.Result{Output: item}, nil
}

 

We have implemented the Execute() to handle basic instructions like PUSH, POP, ADD, SUB, MUL, and DIV with proper error handling. On completion of the instructions set’s execution, it pops the result from Stack and returns as a Result object to the client.

Code to run the gRPC server

To run the gRPC server we need to:

  • Create a new instance of the gRPC server and make it listen on one of the TCP ports at our localhost address. We’ll use port 9111 in this example.
  • To serve our StackMachine service over the gRPC server, we need to register the service with the newly created gRPC server.

For the development purpose, the basic insecure code to run the gRPC server should look like this.

var (
    port = flag.Int("port", 9111, "Port on which gRPC server should listen TCP conn.")
)

func main() {
    flag.Parse()
    lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port))
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    grpcServer := grpc.NewServer()
    machine.RegisterMachineServer(grpcServer, &server.MachineServer{})
    log.Printf("Initializing gRPC server on port %d", *port)
    // Serve blocks, so log before calling it and check its error.
    if err := grpcServer.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}

 

We must consider strong TLS-based security for our production environment. I plan to include an example of a TLS implementation later in this blog series.

Client

As we already know, the same machine/machine.proto file, which is our IDL (Interface Definition Language), can generate interfaces for clients as well; one just has to implement those interfaces to communicate with the gRPC server.

With a .proto, either the service provider can implement an SDK, or the consumer of the service itself can implement a client in the desired programming language.

Let’s implement our version of a basic client code, which will call the Execute() method of the service. The client should look like this.

var (
    serverAddr = flag.String("server_addr", "localhost:9111", "The server address in the format of host:port")
)

func runExecute(client machine.MachineClient, instructions *machine.InstructionSet) {
    log.Printf("Executing %v", instructions)
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    result, err := client.Execute(ctx, instructions)
    if err != nil {
        log.Fatalf("%v.Execute(_) = _, %v: ", client, err)
    }
    log.Println(result)
}

func main() {
    flag.Parse()
    var opts []grpc.DialOption
    opts = append(opts, grpc.WithInsecure())
    opts = append(opts, grpc.WithBlock())
    conn, err := grpc.Dial(*serverAddr, opts...)
    if err != nil {
        log.Fatalf("fail to dial: %v", err)
    }
    defer conn.Close()

    client := machine.NewMachineClient(conn)

    // try Execute()
    instructions := []*machine.Instruction{}
    instructions = append(instructions, &machine.Instruction{Operand: 5, Operator: "PUSH"})
    instructions = append(instructions, &machine.Instruction{Operand: 6, Operator: "PUSH"})
    instructions = append(instructions, &machine.Instruction{Operator: "MUL"})
    runExecute(client, &machine.InstructionSet{Instructions: instructions})
}

Test

Server

Let’s write a unit test to validate our business logic of Execute() method.

    • Create a test file server/machine_test.go
    • Write the unit test, it should look like this.

Run the test file.

~/disk/E/workspace/grpc-eg-go
$ go test server/machine.go server/machine_test.go -v

=== RUN   TestExecute

--- PASS: TestExecute (0.00s)
PASS
ok      command-line-arguments    0.004s

Client

To test the client-side code without the overhead of connecting to a real server, we’ll use mocks. Mocking enables users to write lightweight unit tests that check client-side functionality without invoking RPC calls to a server.

To write a unit test to validate client side business logic of calling the Execute() method:

    • Install golang/mock package
    • Generate mock for MachineClient
    • Create a test file mock/machine_mock_test.go
    • Write the unit test

As we are leveraging the golang/mock package, to install the package we need to run the following command:

~/disk/E/workspace/grpc-eg-go
$ go get github.com/golang/mock/mockgen@latest

 

To generate a mock of the MachineClient, run the following command; the generated file should look like this.

~/disk/E/workspace/grpc-eg-go
$ mkdir mock_machine && cd mock_machine
$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient > machine_mock.go

 

Write the unit test, it should look like this.

Run the test file.

~/disk/E/workspace/grpc-eg-go
$ go test mock_machine/machine_mock.go  mock_machine/machine_mock_test.go -v

=== RUN   TestExecute

output:30
--- PASS: TestExecute (0.00s)
PASS
ok      command-line-arguments    0.004s

Run

Now that unit tests assure us that the business logic of the server & client code works as expected, let’s try running the server and communicating with it via our client code.

Server

To launch the server we need to run the previously created cmd/run_machine_server.go file.

~/disk/E/workspace/grpc-eg-go
$ go run cmd/run_machine_server.go

Client

Now, let’s run the client code client/machine.go.

~/disk/E/workspace/grpc-eg-go
$ go run client/machine.go

Executing instructions:<operator:"PUSH" operand:5 > instructions:<operator:"PUSH" operand:6 > instructions:<operator:"MUL" >

output:30

 

Hurray!!! It worked.

At the end of this blog, we’ve learned:

    • Importance of gRPC – What, Why, Where
    • How to install all the prerequisites
    • How to define an interface using protobuf
    • How to write gRPC server & client logic for Simple RPC
    • How to write and run the unit test for server & client logic
    • How to run the gRPC server and communicate with it from a client

The source code of this example is available at toransahu/grpc-eg-go.

You can also git checkout to this commit SHA to walk through the source code specific to this Part-1 of the blog series.

See you in the next part of this blog series.

Publish Your Android Library on JitPack for Better Reachability

Introduction

Reusable code has existed since the beginning of computing, and Android is no exception. Any library developer’s primary aim is to abstract away code complexity and bundle the code for others to reuse in their projects.

Android libraries are one of the important parts of any Android application development. We use them as per application needs, like network APIs (Retrofit, OkHttp, etc.), image downloading and caching (Picasso, Glide), and many more.

When a piece of code is repeated or can be reused:

    • In a class: we move it to a method
    • In an application: we move it to a utility class
    • In multiple applications: we create a library out of it

What is Android Library

The layout of an Android library is the same as that of an Android app module. Anything required to create an app, including source code, resource files, and a manifest for Android, can be included. However, an Android library gets compiled into an Android Archive (AAR) file that you can use as a dependency for an Android app module, instead of getting compiled into an APK that runs on a device.

When to create Android Library

There are specific times when you should opt for Android Library. They are-

    • When you’re creating different apps that share common components like events, utilities, or UI templates.
    • If you’re making an app that has several APK iterations, such as a free and premium version, and the main elements are the same in both.
    • When you want to share your solution publicly like a funky loader, or creative views (Button, Dropdown, etc.)
    • Or when you want to create and reuse or distribute some common solution that can be used in different apps like caching, image downloading, network APIs etc.

Getting Started

To learn how to publish an Android library, we first need to briefly talk about creating one. We will also check its usage in a basic app flow, and then see how to publish our library.

Creating the Android Library

    • Create a new library module.

    • Select Android Library and click Next.

    • Enter the module name.

    • The project structure should look something like below:

    • Let’s create an email validation class in our library.

Our email validation library is ready. Let’s test this library by integrating it into our sample app.

Adding Library in Our App

Add the following in dependencies of build.gradle of the app module and sync project

implementation project(':email-validator')

Now the EmailValidate class of the library is accessible in our app, and we can use the library’s validation API to validate emails.

We successfully used the library in our sample application, now the next step will be to publish it for others to be able to use it.

What is Jitpack?

JitPack is a unique package repository for Java Virtual Machine and Android projects. It builds Git projects on demand and offers ready-to-use artifacts (jar, aar).

If you want your library to be available to the world, there is no need to go through project build and upload steps. All you need to do is push your project to GitHub and JitPack will take care of the rest. That’s it!

Why Jitpack?

There are reasons that give Jitpack an edge over others. They are-

    • It builds a specific commit or the latest one, and works on any branch or pull request
    • Library javadocs are published and hosted automatically
    • You can track your downloads. Weekly and monthly stats are also available for maintainers.
    • Artifacts are served via a global CDN, which allows fast downloads for you and your users
    • Private builds remain private. You can share them when needed.
    • Custom domains: match artifact names with your domain
    • JitPack works with GitHub, GitLab, and Bitbucket

Publishing Library

To publish the library for this tutorial, we first need to push our Android project to a public repo on our GitHub account. Create a public repo and push all the project files to it.

Right now only we can use this library, since the library module is accessible only within our project. To make the email-validator library accessible to everyone, we’ll have to publish it on JitPack.

Building for JitPack

    • Add the JitPack maven repository to the list of repositories in the root-level build.gradle

    • Enable maven to publish plugin by adding the following in build.gradle of the library module

    • The code sample below produces a publication for an AAR library’s release. Add it to the root level of the build.gradle of the library module.

Check Maven Publish Plugin

Let’s check whether all the changes we have done are correctly configured in the maven publish plugin or not. Check that your library can be installed to mavenLocal ($HOME/.m2/repository):

    • Run following command in Android Studio Terminal
      ./gradlew publishReleasePublicationToMavenLocal

    • Let’s see the $HOME/.m2/repository directory

Publish Library on JitPack

    • Commit the changes we have done in the library module
    • If everything goes well in the previous step, your library is ready to be released! Create a GitHub release or add a git tag and you’re done!
    • Create a release tag in the GitHub repo

    • If there is any error, you can see details of what went wrong in the “Log” tab. For instance, I can check what went wrong for version 1.3

It says the build was successful, but when JitPack tried to upload the artifacts, they were nowhere to be found. The reason was that, in 1.3, I forgot to commit the maven-publish settings in the build.gradle of the library module.

Installing Library

    • For installing the library in any app, we have to add the following in the project build.gradle
      maven { url 'https://jitpack.io' }

    • Add library dependency in the app module build.gradle
      implementation 'com.github.dk19121991:EmailValidator:1.5'

    • Sync the project and voila! We can now use our publicly published library in the app

 

Congratulations! You can now publish your library for public usage.

For more details and an in-depth view, you can find the code here

References: https://jitpack.io/ , https://github.com/jitpack/jitpack.io

 

How to Use Firebase Remote Config Efficiently?

Introduction

Assume it’s a holiday, such as Holi, and you want to change your app’s theme to match it. The easiest approach is to upload a new build with a new theme to the Google Play Store. But that does not guarantee that all of your users will download the upgrade. It is also inefficient to upload a new build only to modify the style. It is doable one time, but not if you intend to do the same for all the major festivals.

Firebase Remote Config is perfect for handling these kinds of scenarios. It makes all this possible without wasting time creating a new build every time and waiting for the app to be available on the Play Store.

In this blog, I will create a sample app and discuss Firebase Remote Config and how it works.

What is Firebase Remote Config

The Firebase Remote Config service is a cloud-based service. It modifies your app’s behaviour and appearance without forcing all current users to download an update from the Play Store. Essentially, Remote Config helps you maintain parameters in the cloud, and it controls the actions and appearance of your app depending on these parameters.

Why Firebase Remote Config

    • Make modifications without having to republish
    • Customize every aspect of your app
    • Customize the app for different types of users.
    • You can test new functionalities on a small number of people.

Getting started

We build in-app default values in Remote Config that govern the app’s actions and appearance (such as text, color, and pictures, among other things). We then retrieve parameters from the Firebase Remote Config and change the default value using Firebase Remote Config.

States of Remote Config

We can divide the state of remote config into two categories:

    • Default
    • Fetched

Default

In the default state, the config values are the in-app defaults specified in your app. If there is no matching key on the Remote Config server, these defaults are copied into the active config and returned to the client, and the app uses them.

Fetched

The fetched config is the most recent configuration downloaded from the cloud but not yet activated. You must first activate these config parameters; then all values are copied into the active config and become available to the app.

The below image gives the pictorial representation of how the system prioritizes parameter values in the Remote Config backend and the app:

Firebase Remote Config in Action

Let’s create a small app with an image view that will get the image URL from the remote config.

Create a Basic Sample app

 

      • Create a new project and choose the template of your choice.

I chose “Empty Activity”.

      • Add name and click finish

      • Add an ImageView in the activity layout

Create a Firebase Project

To use any firebase tool, we need to create a firebase project for the same.

Let’s create one for our app RemoteConfigSample


      • Give any name for your project and click `Continue`




    • Select your Analytics location and create the project.


Add Android App to Our Firebase Project

    • Click on the Android icon to create an Android app in the firebase project

      • After filling in the relevant info, click Register app


    • Download and add google-services.json to your project as per the instruction provided.

    • Click Next after following the instructions to connect the Firebase SDK to your project.


Add Firebase Remote Config Dependencies

Add the following line to the app module’s build.gradle dependencies and then sync the project (with the Firebase BoM on the classpath the version can be omitted; otherwise append an explicit version):

implementation 'com.google.firebase:firebase-config'

Create a new folder named xml inside the res folder, and in it create a resource file named config_defaults. Set the default values for your Remote Config parameters there. If you later need to change a value, use the Firebase Console to set a modified value for the exact key you wish to change.
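As a sketch, a minimal res/xml/config_defaults.xml could look like this. The key name image_url is an illustrative assumption (use your own parameter names); the URL is the default value used later in this article.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/config_defaults.xml: in-app default Remote Config values -->
<defaultsMap>
    <entry>
        <!-- "image_url" is an illustrative key, not mandated by Firebase -->
        <key>image_url</key>
        <value>https://tinyurl.com/25sskvem</value>
    </entry>
</defaultsMap>
```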

Add parameters to the Remote Config section of the Firebase console: go to Remote Config under Engage in the left pane.

Add a sample parameter

Implementation for Fetching Remote Config

  • Get the singleton object of Remote Config.
  • Set the in-app default parameter value
    mFirebaseRemoteConfig.setDefaultsAsync(R.xml.config_defaults);
  • Create a method that fetches values from the Firebase backend
  • Create a button in the activity layout to trigger the fetch call

App in Action

The default parameter in-app is “https://tinyurl.com/25sskvem”

When the app first runs on the device, it loads the image from the default parameter.

When the user clicks Fetch, the app fetches the parameter value “https://tinyurl.com/2dh9dsxa” from the Firebase backend and loads it into the image view.

MainActivity 
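A minimal MainActivity along the lines of the steps above might look like the sketch below. The image_url key, the fetchButton and imageView ids, and the use of Glide for image loading are illustrative assumptions, not part of the original article.

```kotlin
import android.os.Bundle
import android.widget.Button
import android.widget.ImageView
import androidx.appcompat.app.AppCompatActivity
import com.bumptech.glide.Glide
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

// Sketch only: assumes firebase-config and Glide are on the classpath.
class MainActivity : AppCompatActivity() {

    private lateinit var remoteConfig: FirebaseRemoteConfig

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // 1. Get the singleton Remote Config object
        remoteConfig = FirebaseRemoteConfig.getInstance()

        // 2. Set in-app defaults from res/xml/config_defaults.xml
        remoteConfig.setDefaultsAsync(R.xml.config_defaults)

        // 3. Load the current (default) value, then fetch on button click
        loadImage()
        findViewById<Button>(R.id.fetchButton).setOnClickListener { fetchConfig() }
    }

    // 4. Fetch from the Firebase backend and activate the fetched values
    private fun fetchConfig() {
        remoteConfig.fetchAndActivate().addOnCompleteListener(this) { task ->
            if (task.isSuccessful) loadImage()
        }
    }

    private fun loadImage() {
        val imageUrl = remoteConfig.getString("image_url") // hypothetical key
        Glide.with(this).load(imageUrl).into(findViewById<ImageView>(R.id.imageView))
    }
}
```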

Best Practices

  • Don’t update or switch aspects of the UI while the user is viewing or interacting with it, unless you have a strong app or business reason for doing so, like removing options related to a promotion that has just ended.
  • Don’t send mass numbers of simultaneous fetch requests, as the server may throttle your app. The risk of this is low in most production scenarios; however, it can be an issue during active development.
  • Do not store confidential or sensitive data in Remote Config parameters.

 

Awesome, you made it till the end.

So we have seen how to change app configuration without creating a new build. There are many use cases for this: for example, fetching configuration to change the app theme, or showing a splash screen whose image/GIF changes for different events, all without uploading a new build.

We can even add a version check to notify users that a new version is available, or enforce an upgrade when the old version is fully deprecated.

Firebase Remote Config is convenient. In a future blog, we will cover a few more use cases with Firebase Remote Config.

For more details and an in-depth view, you can find the code here

References: https://firebase.google.com/products/remote-config, https://firebase.google.com/docs/remote-config/ 

How to simplify Android app distribution with Fastlane and improve workflow?

Introduction

Making releases, taking screenshots, and updating metadata in the Google Play Store are all part of Android app development. You can automate these tasks to save time for meaningful work like building features and fixing bugs.

Fastlane lets you do all of that repeatably and efficiently. It’s an open-source tool that simplifies Android and iOS app distribution and helps you streamline every part of your development and release workflow.

In this blog, we’ll learn how to use Fastlane to distribute an app through Firebase App Distribution and add testers to try out the build.

Why Fastlane

    • It automatically takes screenshots.
    • You can easily distribute new beta builds to testers so that you can get useful feedback quickly.
    • You can streamline the whole app store rollout phase.
    • Avoid the inconvenience of keeping track of code signing identities.
    • You can easily integrate Fastlane into existing CI services such as Bitrise, CircleCI, Jenkins, and Travis CI.

Getting Started

We will use our BLE Scanner app (you can read more about it here) to automate a few tasks.
Check out the project and make sure it works: build it and run it on a device.

Installing Fastlane 

There are many ways to install Fastlane, for more details visit here.

We will install it using Homebrew 

brew install fastlane

Setting up Fastlane

Open the terminal and navigate to your project directory and then run the following command.

fastlane init

It will ask you to confirm a few details.

    • When prompted, enter the package name of your app (e.g. com.dinkar.blescanner)

      You can get the package name of your application from the AndroidManifest.xml file


    • When asked for the path to your JSON secret file, press Enter (we can set this up later)
    • When asked if you intend to upload details to Google Play through Fastlane, answer ‘n’ (we can set this up later)

You’ll get a couple more prompts after that; press Enter to proceed. When you’re done, run the following command to test the new fastlane configuration:

fastlane test

After the ‘test’ command completes successfully, you will see output like the example below.

The following files can be found in the newly created `fastlane` directory:

Appfile: specifies global configuration data for your Android app
Fastfile: defines the “lanes” that govern how fastlane behaves

Configuring Fastlane

To store its automation configuration, fastlane uses a Fastfile. When you open the Fastfile, you’ll see the following:
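For reference, a freshly generated Android Fastfile looks roughly like this (your comments and task names may differ slightly depending on the fastlane version):

```ruby
default_platform(:android)

platform :android do
  desc "Runs all the tests"
  lane :test do
    gradle(task: "test")
  end

  desc "Submit a new Beta Build to Crashlytics Beta"
  lane :beta do
    gradle(task: "clean assembleRelease")
    crashlytics
    # sh "your_script.sh"
  end

  desc "Deploy a new version to the Google Play"
  lane :deploy do
    gradle(task: "clean assembleRelease")
    upload_to_play_store
  end
end
```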

Fastlane groups individual actions into lanes. A lane begins with `lane :name`, where name is the name given to the lane. You can see three separate lanes inside the file: test, beta and deploy.

The following is a list of the actions that each lane performs:

    • test: Runs all the project tests (unit and instrumented) using the gradle action.
    • beta: Uses the gradle action followed by the crashlytics action to send a beta build to Firebase App Distribution.
    • deploy: Uses the gradle action followed by the upload_to_play_store action to deploy a new release to Google Play.

Create new Lane

Let’s edit the Fastfile and add a new lane that clean-builds our app.
New lanes go inside the `platform :android do` block.
Add a cleanBuild lane like the one below.
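For example, inside the `platform :android do` block you could add something like this sketch (the choice of assembleDebug is illustrative):

```ruby
  desc "Clean and build the app"
  lane :cleanBuild do
    # Runs `gradle clean assembleDebug` via fastlane's gradle action
    gradle(task: "clean assembleDebug")
  end
```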

Open the terminal and run the following command

fastlane cleanBuild

You will get the following message after successful execution.

This shows how to create a lane and how to run it. Now let’s automate beta app distribution using Firebase App Distribution.

Automating Firebase App Distribution

Once you are done with a new feature, you’ll want to share it with beta testers or your QA to test the stability of the build and gather feedback before releasing it on the Play Store.
For this kind of distribution, we use the highly recommended Firebase App Distribution tool.

Create a Firebase Project

To use any of the Firebase tools in our app, we need a Firebase project. Let’s create one for our BLE Scanner app.

    • Give any name for your project and click `Continue`

      • Select your Analytics location and create the project.

Add Android app to our Firebase project

    • Click on the Android icon to create an Android app in the firebase project

    • After filling in the relevant info, click Register app

    • Download and add google-services.json to your project as per the instruction provided.

    • Click Next after following the instructions to connect the Firebase SDK to your project.

    • Go to project settings

    • Get the App ID; Fastlane will ask you for it.

  

Install FirebaseCLI

Fastlane needs FirebaseCLI to connect to the Firebase server for uploading the build.

  • Open a terminal and run the command `curl -sL https://firebase.tools | bash`
  • After successful installation login to firebase CLI by running the command in terminal

    firebase login
  • After a successful login, verify that the CLI is properly installed by listing your Firebase projects: run `firebase projects:list` and check that the project you created appears.

Install the Fastlane plugin for Firebase App Distribution

Run the following command in terminal

fastlane add_plugin firebase_app_distribution

Select `Y` for the prompt `Should fastlane modify the Gemfile at path`

You’re now ready to submit various versions of the app to different groups of testers using Firebase.

Enabling Firebase App Distribution for your app

    • Select App Distribution from the left pane of the console

    • Accept TnC and click on `Get started`


    • Now let’s create a test group with a few testers, whom we will send our build for testing.


Distribute app for testing

Open the Fastfile and overwrite the beta lane with the following, replacing the `app` value with the App ID you copied earlier:
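A beta lane using the firebase_app_distribution plugin could look like this sketch; YOUR_APP_ID and YOUR_GROUP_NAME are placeholders, and the release notes text is illustrative:

```ruby
  desc "Submit a new beta build to Firebase App Distribution"
  lane :beta do
    gradle(task: "clean assembleRelease")
    firebase_app_distribution(
      app: "YOUR_APP_ID",            # App ID from the Firebase project settings
      groups: "YOUR_GROUP_NAME",     # tester group created in the console
      release_notes: "New beta build" # illustrative release notes
    )
  end
```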

The whole Fastfile will then look like this:

Replace YOUR_APP_ID and YOUR_GROUP_NAME with correct values.

After editing Fastfile run the following command on the terminal.

fastlane beta

After the successful execution of the above command, you will see something like below.

Testers will receive an email notification about the build.

You can also check the Firebase console to verify that the new build was uploaded with its release notes and that testers were added to it.

Handle Flavors

We can have multiple flavors of our app depending on different criteria; for example, different flavors for environments like production, staging, QA, and development.

To run a specific flavor we can run the following command

fastlane beta app:flavorName

Here flavorName can be prod, staging, qa, or dev.

If you provide a flavor with the app key, the Fastfile will run gradle assemble<Flavor>Release (for example, assembleProdRelease). Otherwise, it will run gradle assembleRelease, which builds all flavors.
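The task-selection rule above can be sketched as a plain Ruby helper. This is a hypothetical illustration, not fastlane code: fastlane's gradle action composes the task internally when you pass flavor and build_type.

```ruby
# Hypothetical helper showing which Gradle task the lane ends up running.
# With a flavor, Android generates assemble<Flavor><BuildType> tasks;
# without one, assembleRelease builds every flavor.
def gradle_task(flavor = nil, build_type = "Release")
  return "assemble#{build_type}" if flavor.nil? || flavor.empty?
  "assemble#{flavor[0].upcase}#{flavor[1..-1]}#{build_type}"
end

puts gradle_task("prod")  # => assembleProdRelease
puts gradle_task          # => assembleRelease
```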

Best Practices

    • If possible, do not keep Fastlane configuration files in the repository.
    • To exclude generated and temporary files from being committed, add the following lines to the repository’s .gitignore file:
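Per fastlane’s documentation, the commonly ignored generated and temporary files are the following (adjust to your project):

```
fastlane/report.xml
fastlane/Preview.html
fastlane/screenshots/**/*.png
fastlane/test_output
```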

    • It’s also a smart idea to keep screenshots and other distribution items out of the repository. If you have to re-generate, use Fastlane.
    • It is recommended that you use Git Tags or custom triggers rather than commit for App deployment.

Congratulations!
We have successfully automated building an Android app and distributing it to QA. Now it’s as simple as executing one command, “fastlane beta”, which even a non-technical person can do.

For more details and an in-depth view, you can find the code here

References: https://fastlane.tools/, https://docs.fastlane.tools/