How to Integrate Firebase Authentication for Google Sign-in Functionality?

Introduction

Personalized experience has become a buzzword. Industries have realized the value of learning what each individual thinks while making a decision: it helps them optimize resources and expand product reach. Consequently, most applications now ask for a user identity to create a premise for a personalized experience. However, it's not easy to build your own authentication/identity solution. When such an occasion strikes, Firebase Authentication enters the frame to save the day.

What is Firebase authentication?

Firebase Authentication is a tool to rapidly and securely authenticate users. It offers a very clear flow for authentication and login and is easy to use.

Why one should use Firebase authentication

    • Time, cost, security, and stability are the advantages of using authentication as a service instead of constructing it yourself. Firebase Authentication has already done the legwork for you in terms of safe account storage, email/phone authentication flows, and so on.
    • Firebase Authentication integrates with other Firebase products like Cloud Firestore, Realtime Database, and Cloud Storage. Declarative security rules are used to protect these products, and Firebase Authentication is used to introduce granular per-user security.
    • Users can easily sign in on any platform. It provides an end-to-end identity solution, supporting email and password accounts, phone authentication, and Google, Twitter, Facebook, and GitHub login, among others.

Getting Started

In this blog, we will focus on integrating the Firebase Authentication for Google Sign-In functionality in our Android application using Google and Firebase APIs.

Create an Android Project

  • Create a new project and choose the template of your choice.

I chose “Empty Activity”.

  • Add a name and click Finish

Create a Firebase Project

To use any Firebase tool, we need to create a Firebase project for it.

Let's create one for our sample app.

  • Give any name for your project and click `Continue`

  • Select your Analytics location and create the project.

Add Android App to Our Firebase Project

 

  • Click on the Android icon to create an Android app in the firebase project

  • We have to add the SHA-1 fingerprint for Google Sign-In. Run the command below in the project directory to determine the SHA-1 of your debug key:

./gradlew signingReport

  • Use the SHA-1 from above in the app registration on Firebase
  • After filling in the relevant info, click Register app

  • Download and add google-services.json to your project as per the instruction provided.

  • Click Next after following the instructions to connect the Firebase SDK to your project.

Add Firebase Authentication Dependencies

  • Go to the Authentication section under Build in the left pane

  • Click on “Get started”

  • Select the Sign In Method tab
  • Toggle the Google switch to enabled (blue)

  • Set a support email and Save it.

  • Go to Project Settings

  • Download the latest google-services.json, which now contains the authentication settings for the Google sign-in we enabled, and replace the old JSON file with the new one.
  • Add Firebase authentication dependency in build.gradle
    implementation 'com.google.firebase:firebase-auth'
  • Add Google sign in dependency in build.gradle
    implementation 'com.google.android.gms:play-services-auth:19.0.0'

Create a Sign-in Flow

To begin authentication, a simple Sign-In button is used. In this stage, you’ll implement the logic for signing in with Google and then authenticating with Firebase using that Google account.

mFirebaseAuth = FirebaseAuth.getInstance();

  • Initialize GoogleSignInClient

  • Create a signIn method that we can call on the click of the Sign-In button (both are sketched below)
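Since the original gist embeds are unavailable here, the following is a minimal Java sketch of both steps, assumed to live inside the sign-in activity. The request code is arbitrary, and default_web_client_id is the resource generated by the google-services plugin:

private static final int RC_SIGN_IN = 9001; // arbitrary request code
private GoogleSignInClient mGoogleSignInClient;

private void initGoogleSignInClient() {
    // Request an ID token so we can authenticate with Firebase afterwards
    GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
            .requestIdToken(getString(R.string.default_web_client_id))
            .requestEmail()
            .build();
    mGoogleSignInClient = GoogleSignIn.getClient(this, gso);
}

private void signIn() {
    startActivityForResult(mGoogleSignInClient.getSignInIntent(), RC_SIGN_IN);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RC_SIGN_IN) {
        try {
            // Google sign-in succeeded; now authenticate with Firebase using the ID token
            GoogleSignInAccount account = GoogleSignIn.getSignedInAccountFromIntent(data)
                    .getResult(ApiException.class);
            AuthCredential credential = GoogleAuthProvider.getCredential(account.getIdToken(), null);
            mFirebaseAuth.signInWithCredential(credential)
                    .addOnCompleteListener(this, task -> {
                        if (task.isSuccessful()) {
                            FirebaseUser user = mFirebaseAuth.getCurrentUser();
                            // proceed with the signed-in user
                        }
                    });
        } catch (ApiException e) {
            Log.w("GoogleSignIn", "Google sign-in failed", e);
        }
    }
}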

Extract User Data

On successful authentication of google account with firebase authentication, we can extract relevant user information.

Get User Name

Get User Profile photo URL
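The original snippets are embedded gists; a minimal Java sketch of reading both fields from the signed-in FirebaseUser:

FirebaseUser user = mFirebaseAuth.getCurrentUser();
if (user != null) {
    String name = user.getDisplayName(); // user name
    Uri photoUrl = user.getPhotoUrl();   // profile photo URL
}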

SignOut

We've finished the login process. If the user is logged in, our next goal is to let them log out. To do this, we create a method called signOut().
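A minimal Java sketch, signing out of both Firebase and the Google client:

private void signOut() {
    mFirebaseAuth.signOut();      // Firebase sign-out
    mGoogleSignInClient.signOut() // Google client sign-out
            .addOnCompleteListener(this, task -> {
                // update the UI for the signed-out state
            });
}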

See App in Action

We can see that the user also got created in the Firebase Authentication “Users” tab.

 

Hurray, we have done it! Now we know how to build an authentication solution, with no server of our own, to get a user identity and serve a personalized user experience.

For more details and an in-depth view, you can find the code here

 

References: https://firebase.google.com/products/auth, https://firebase.google.com/docs/auth

 

 

A Quick Overview of gRPC in Golang

REST is now the most popular framework among developers for web application development, as it is very easy to use. REST is used to expose business application services to the outside world and for communication among internal microservices. However, ease and flexibility come with some pitfalls: REST requires a stringent human agreement between teams and relies on documentation, and in cases like internal communication and real-time applications it has certain limitations. In 2015, gRPC kicked in. gRPC, initially developed at Google, is now disrupting the industry. It is a modern open-source, high-performance RPC framework that comes with a simple, language-agnostic Interface Definition Language (IDL) system, leveraging Protocol Buffers.

Objective

This blog aims to get you started with gRPC in Go with a simple working example. The blog covers basic information like What, Why, When/Where, and How about the gRPC. We’ll majorly focus on the How section, to establish a connection between the client and server and write unit tests for testing the client and server code separately. We’ll also run the code to establish a client-server communication.

What is gRPC?

gRPC – Remote Procedure Call

    • gRPC is a high performance, open-source universal RPC Framework
    • It enables the server and client applications to communicate transparently and build connected systems
    • gRPC is developed and open-sourced by Google (but no, the g doesn’t stand for Google)

Why Use gRPC?

    1. Better Design
      • With gRPC, we can define our service once in a .proto file and implement clients and servers in any of gRPC’s supported languages
      • Ability to auto-generate and publish SDKs as opposed to publishing the APIs for services
    2. High Performance
      • Advantages of working with protocol buffers, including efficient serialization, a simple IDL, and easy interface updating
      • Advantages of improved features of HTTP/2
      • Multiplexing: the client can use a single TCP connection to handle multiple requests simultaneously
      • Binary Framing and Compression
    3. Multi-way communication
      • Simple/Unary RPC
      • Server-side streaming RPC
      • Client-side streaming RPC
      • Bidirectional streaming RPC

Where to Use gRPC?

The “where” is pretty easy: we can leverage gRPC almost anywhere. We just need two computers communicating over a network:

    • Microservices
    • Client-Server Applications
    • Integrations and APIs
    • Browser-based Web Applications

How to Use gRPC?

Our example is a simple “Stack Machine” as a service that lets clients perform operations like PUSH, ADD, SUB, MUL, DIV, FIBB, AP, GP.

In Part-1, we’ll focus on Simple RPC implementation. In Part-2, we’ll focus on Server-side & Client-side streaming RPC, and in Part-3, we’ll implement Bidirectional streaming RPC.

Let’s get started with installing the prerequisites of the development.

Prerequisites

Go

    • Version 1.6 or higher.
    • For installation instructions, see Go’s Getting Started guide.

gRPC

Use the following command to install gRPC.

~/disk/E/workspace/grpc-eg-go
$ go get -u google.golang.org/grpc

Protocol Buffers v3

~/disk/E/workspace/grpc-eg-go
$ go get -u github.com/golang/protobuf/proto

 

  • Update the environment variable PATH to include the path to the protoc binary file.
  • Install the protoc plugin for Go
~/disk/E/workspace/grpc-eg-go
$ go get -u github.com/golang/protobuf/protoc-gen-go

Setting Project Structure

~/disk/E/workspace/grpc-eg-go
$ go mod init github.com/toransahu/grpc-eg-go
$ mkdir machine
$ mkdir server
$ mkdir client
$ tree
.
├── client/
├── go.mod
├── machine/
└── server/

Defining the service

Our first step is to define the gRPC service and the method request and response types using protocol buffers.

To define a service, we specify a named service in our machine/machine.proto file:

service Machine {

}

Then we define a simple RPC method inside our service definition, specifying its request and response types.

  • A simple RPC where the client sends a request to the server using the stub and waits for a response to come back
// Execute accepts a set of Instructions from the client and returns a Result.
rpc Execute(InstructionSet) returns (Result) {}

 

  • machine/machine.proto file also contains protocol buffer message type definitions for all the request and response types used in our service methods.
// Result represents the output of execution of the instruction(s).
message Result {
  float output = 1;
}

 

Considering Part-1 of this blog series, our machine/machine.proto file should look like the sketch below.
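A hedged reconstruction assembled from the fragments above (the message field types are assumptions; the authoritative version lives in the linked repository):

syntax = "proto3";

package machine;

// Machine is our Stack Machine service.
service Machine {
  // Execute accepts a set of Instructions from the client and returns a Result.
  rpc Execute(InstructionSet) returns (Result) {}
}

// Instruction represents a single operation and its optional operand.
message Instruction {
  string operator = 1; // e.g. PUSH, POP, ADD, SUB, MUL, DIV
  float operand = 2;
}

// InstructionSet represents an ordered list of instructions.
message InstructionSet {
  repeated Instruction instructions = 1;
}

// Result represents the output of execution of the instruction(s).
message Result {
  float output = 1;
}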

Generating client and server code

We need to generate the gRPC client and server interfaces from the machine/machine.proto service definition.

~/disk/E/workspace/grpc-eg-go
$ SRC_DIR=./
$ DST_DIR=$SRC_DIR
$ protoc \
-I=$SRC_DIR \
--go_out=plugins=grpc:$DST_DIR \
$SRC_DIR/machine/machine.proto

 

Running this command generates the machine.pb.go file in the machine directory under the repository:

~/disk/E/workspace/grpc-eg-go
$ tree machine/
.
├── machine/
│   ├── machine.pb.go
│   └── machine.proto

Server

Let’s create the server.

There are two parts to making our Machine service do its job:

  • Create server/machine.go: Implementing the service interface generated from our service definition; writing our service’s business logic.
  • Running the Machine gRPC server: Run the server to listen for clients’ requests and dispatch them to the right service implementation.

Take a look at how our MachineServer interface should appear: grpc-eg-go/server/machine.go

type MachineServer struct{}

// Execute runs the set of instructions given.
func (s *MachineServer) Execute(ctx context.Context, instructions *machine.InstructionSet) (*machine.Result, error) {
	return nil, status.Error(codes.Unimplemented, "Execute() not implemented yet")
}

Implementing Simple RPC

MachineServer implements only the Execute() service method as of now, as per Part-1 of this blog series.

Execute() gets an InstructionSet from the client, executes every Instruction in it on our stack machine, and returns the resulting value in a Result.

Before implementing Execute(), let’s implement a basic Stack. It should look like this.

type Stack []float32

func (s *Stack) IsEmpty() bool {
	return len(*s) == 0
}

func (s *Stack) Push(input float32) {
	*s = append(*s, input)
}

func (s *Stack) Pop() (float32, bool) {
	if s.IsEmpty() {
		return -1.0, false
	}
	item := (*s)[len(*s)-1]
	*s = (*s)[:len(*s)-1]
	return item, true
}

 

Now, let’s implement the Execute(). It should look like this.

type OperatorType string

const (
	PUSH OperatorType = "PUSH"
	POP  OperatorType = "POP"
	ADD  OperatorType = "ADD"
	SUB  OperatorType = "SUB"
	MUL  OperatorType = "MUL"
	DIV  OperatorType = "DIV"
)

type MachineServer struct{}

// Execute runs the set of instructions given.
func (s *MachineServer) Execute(ctx context.Context, instructions *machine.InstructionSet) (*machine.Result, error) {
	if len(instructions.GetInstructions()) == 0 {
		return nil, status.Error(codes.InvalidArgument, "No valid instructions received")
	}

	var stack stack.Stack

	for _, instruction := range instructions.GetInstructions() {
		operand := instruction.GetOperand()
		operator := instruction.GetOperator()
		opType := OperatorType(operator)
		fmt.Printf("Operand: %v, Operator: %v\n", operand, operator)

		switch opType {
		case PUSH:
			stack.Push(float32(operand))
		case POP:
			stack.Pop()
		case ADD, SUB, MUL, DIV:
			item2, popped2 := stack.Pop()
			item1, popped1 := stack.Pop()
			// Both pops must succeed, otherwise the instruction set is invalid
			if !popped1 || !popped2 {
				return &machine.Result{}, status.Error(codes.Aborted, "Invalid set of instructions. Execution aborted")
			}
			if opType == ADD {
				stack.Push(item1 + item2)
			} else if opType == SUB {
				stack.Push(item1 - item2)
			} else if opType == MUL {
				stack.Push(item1 * item2)
			} else if opType == DIV {
				stack.Push(item1 / item2)
			}
		default:
			return nil, status.Errorf(codes.Unimplemented, "Operation '%s' not implemented yet", operator)
		}
	}

	item, popped := stack.Pop()
	if !popped {
		return &machine.Result{}, status.Error(codes.Aborted, "Invalid set of instructions. Execution aborted")
	}
	return &machine.Result{Output: item}, nil
}

 

We have implemented Execute() to handle basic instructions like PUSH, POP, ADD, SUB, MUL, and DIV, with proper error handling. On completing the execution of the instruction set, it pops the result from the stack and returns it to the client as a Result object.

Code to run the gRPC server

To run the gRPC server we need to:

  • Create a new instance of the gRPC server and make it listen on a TCP port at our localhost address. This example uses port 9111.
  • To serve our StackMachine service over the gRPC server, we need to register the service with the newly created gRPC server.

For the development purpose, the basic insecure code to run the gRPC server should look like this.

var (
	port = flag.Int("port", 9111, "Port on which gRPC server should listen TCP conn.")
)

func main() {
	flag.Parse()
	lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port))
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	grpcServer := grpc.NewServer()
	machine.RegisterMachineServer(grpcServer, &server.MachineServer{})
	log.Printf("Initializing gRPC server on port %d", *port)
	// Serve blocks until the server is stopped, so log before calling it
	grpcServer.Serve(lis)
}

 

We must consider strong TLS-based security for our production environment. I plan to include an example of a TLS implementation later in this blog series.

Client

As we already know, the same machine/machine.proto file, our IDL (Interface Definition Language), can generate interfaces for clients as well; one just has to implement against those interfaces to communicate with the gRPC server.

With a .proto, either the service provider can implement an SDK, or the consumer of the service itself can implement a client in the desired programming language.

Let’s implement our version of a basic client code, which will call the Execute() method of the service. The client should look like this.

var (
serverAddr = flag.String("server_addr", "localhost:9111", "The server address in the format of host:port")
)

func runExecute(client machine.MachineClient, instructions *machine.InstructionSet) {

log.Printf("Executing %v", instructions)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
result, err := client.Execute(ctx, instructions)
if err != nil {
log.Fatalf("%v.Execute(_) = _, %v: ", client, err)
}
log.Println(result)
}

func main() {

flag.Parse()
var opts []grpc.DialOption

opts = append(opts, grpc.WithInsecure())
opts = append(opts, grpc.WithBlock())
conn, err := grpc.Dial(*serverAddr, opts...)

if err != nil {
log.Fatalf("fail to dial: %v", err)
}

defer conn.Close()
client := machine.NewMachineClient(conn)

// try Execute()

instructions := []*machine.Instruction{}
instructions = append(instructions, &machine.Instruction{Operand: 5, Operator: "PUSH"})
instructions = append(instructions, &machine.Instruction{Operand: 6, Operator: "PUSH"})
instructions = append(instructions, &machine.Instruction{Operator: "MUL"})
runExecute(client, &machine.InstructionSet{Instructions: instructions})
}

Test

Server

Let's write a unit test to validate the business logic of the Execute() method.

    • Create a test file server/machine_test.go
    • Write the unit test; a hedged sketch follows.
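Assuming the test file sits in the server package alongside machine.go, it might look like this:

package server

import (
	"context"
	"testing"

	"github.com/toransahu/grpc-eg-go/machine"
)

func TestExecute(t *testing.T) {
	s := MachineServer{}
	// PUSH 5, PUSH 6, MUL should leave 30 on the stack
	req := &machine.InstructionSet{Instructions: []*machine.Instruction{
		{Operand: 5, Operator: "PUSH"},
		{Operand: 6, Operator: "PUSH"},
		{Operator: "MUL"},
	}}
	result, err := s.Execute(context.Background(), req)
	if err != nil {
		t.Fatalf("Execute(%v) returned unexpected error: %v", req, err)
	}
	if result.Output != 30 {
		t.Errorf("Execute(%v) = %v, want 30", req, result.Output)
	}
}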

Run the test file.

~/disk/E/workspace/grpc-eg-go
$ go test server/machine.go server/machine_test.go -v

=== RUN   TestExecute

--- PASS: TestExecute (0.00s)
PASS
ok      command-line-arguments    0.004s

Client

To test client-side code without the overhead of connecting to a real server, we'll use a mock. Mocking enables users to write lightweight unit tests that check client-side functionality without invoking RPC calls to a server.

To write a unit test to validate client side business logic of calling the Execute() method:

    • Install golang/mock package
    • Generate mock for MachineClient
    • Create a test file mock/machine_mock_test.go
    • Write the unit test

As we are leveraging the golang/mock package, to install the package we need to run the following command:

~/disk/E/workspace/grpc-eg-go
$ go get github.com/golang/mock/mockgen@latest

 

To generate a mock of MachineClient, run the following commands; the generated file should look like this.

~/disk/E/workspace/grpc-eg-go
$ mkdir mock_machine && cd mock_machine
$ mockgen github.com/toransahu/grpc-eg-go/machine MachineClient > machine_mock.go

 

Write the unit test; a hedged sketch follows.
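A sketch assuming the generated mock lives in package mock_machine:

package mock_machine

import (
	"context"
	"testing"

	"github.com/golang/mock/gomock"
	"github.com/toransahu/grpc-eg-go/machine"
)

func TestExecute(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()
	mockMachineClient := NewMockMachineClient(ctrl)

	req := &machine.InstructionSet{Instructions: []*machine.Instruction{
		{Operand: 5, Operator: "PUSH"},
		{Operand: 6, Operator: "PUSH"},
		{Operator: "MUL"},
	}}

	// Expect exactly one Execute call with our request and stub the reply
	mockMachineClient.EXPECT().Execute(gomock.Any(), req).Return(&machine.Result{Output: 30}, nil)

	result, err := mockMachineClient.Execute(context.Background(), req)
	if err != nil {
		t.Fatalf("Execute returned unexpected error: %v", err)
	}
	if result.Output != 30 {
		t.Errorf("got %v, want 30", result.Output)
	}
}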

Run the test file.

~/disk/E/workspace/grpc-eg-go
$ go test mock_machine/machine_mock.go  mock_machine/machine_mock_test.go -v

=== RUN   TestExecute

output:30
--- PASS: TestExecute (0.00s)
PASS
ok      command-line-arguments    0.004s

Run

Now that unit tests assure us that the business logic of the server and client code works as expected, let's try running the server and communicating with it via our client code.

Server

To start the server, we need to run the previously created cmd/run_machine_server.go file.

~/disk/E/workspace/grpc-eg-go
$ go run cmd/run_machine_server.go

Client

Now, let’s run the client code client/machine.go.

~/disk/E/workspace/grpc-eg-go
$ go run client/machine.go

Executing instructions:<operator:"PUSH" operand:5 > instructions:<operator:"PUSH" operand:6 > instructions:<operator:"MUL" >

output:30

 

Hurray!!! It worked.

At the end of this blog, we’ve learned:

    • Importance of gRPC – What, Why, Where
    • How to install all the prerequisites
    • How to define an interface using protobuf
    • How to write gRPC server & client logic for Simple RPC
    • How to write and run the unit test for server & client logic
    • How to run the gRPC server and communicate with it from a client

The source code of this example is available at toransahu/grpc-eg-go.

You can also git checkout to this commit SHA to walk through the source code specific to this Part-1 of the blog series.

See you in the next part of this blog series.

Publish Your Android Library on JitPack for Better Reachability

Introduction

Reusable code has existed since the beginning of computing, and Android is no exception. A library developer's primary aim is to simplify and abstract away code complications and bundle the code for others to reuse in their projects.

Android libraries are one of the most important parts of Android application development. We use them as per application needs, like network APIs (Retrofit, OkHttp, etc.), image downloading and caching (Picasso, Glide), and many more.

When a piece of code is repeated or can be reused:

    • In a class: we move it to a method
    • In an application: we move it to a utility class
    • In multiple applications: we create a library out of it

What is an Android Library

The layout of an Android library is the same as that of an Android app module. Anything required to create an app, including source code, resource files, and a manifest for Android, can be included. However, an Android library gets compiled into an Android Archive (AAR) file that you can use as a dependency for an Android app module, instead of getting compiled into an APK that runs on a device.

When to Create an Android Library

There are specific times when you should opt for an Android library:

    • When you’re creating different apps that share common components like events, utilities, or UI templates.
    • If you’re making an app that has several APK iterations, such as a free and premium version, and the main elements are the same in both.
    • When you want to share your solution publicly like a funky loader, or creative views (Button, Dropdown, etc.)
    • Or when you want to create and reuse or distribute some common solution that can be used in different apps like caching, image downloading, network APIs etc.

Getting Started

To learn how to publish an Android library, we need to briefly cover creating one. We will also check its usage in a basic app flow, and then see how to publish our library.

Creating the Android Library

    • Create a new library module.

    • Select Android Library and click Next.

    • Enter the module name.

    • The project structure should now show the new library module alongside the app module.
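The library's validator class itself is embedded as a gist in the original post; as a hedged sketch, an email validator might look like this (the class name EmailValidate comes from the post, the method name is assumed):

import java.util.regex.Pattern;

public final class EmailValidate {

    // A common email pattern; adjust to your validation rules
    private static final Pattern EMAIL_PATTERN =
            Pattern.compile("[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}");

    private EmailValidate() {}

    public static boolean isValidEmail(String email) {
        return email != null && EMAIL_PATTERN.matcher(email).matches();
    }
}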

Our email validation library is ready. Let’s test this library by integrating it into our sample app.

Adding Library in Our App

Add the following in dependencies of build.gradle of the app module and sync project

implementation project(':email-validator')

Now the EmailValidate class of the library is accessible in our app, and we can use its validation API to check whether an email is valid.

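A hedged usage sketch (emailEditText and the isValidEmail method are assumptions consistent with the sketch above):

String email = emailEditText.getText().toString().trim();
if (EmailValidate.isValidEmail(email)) {
    // valid email, proceed
} else {
    emailEditText.setError("Invalid email address");
}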

We successfully used the library in our sample application, now the next step will be to publish it for others to be able to use it.

What is Jitpack?

JitPack is a package repository for JVM and Android projects. It builds Git projects on demand and offers ready-to-use artifacts (jar, aar).

If you want your library to be available to the world, there is no need to go through project build and upload steps. All you need to do is push your project to GitHub and JitPack will take care of the rest. That’s it!

Why Jitpack?

There are several reasons that give JitPack an edge over others:

    • It builds a specific commit or the latest one, and works on any branch or pull request
    • Library javadocs are published and hosted automatically
    • You can track your downloads. Weekly and monthly stats are also available for maintainers.
    • Artifacts are served via a global CDN, which allows fast downloads for you and your users
    • Private builds remain private. You can share them when needed.
    • Custom domains: match artifact names with your domain
    • JitPack works with GitHub, GitLab, and Bitbucket

Publishing Library

To publish the library for this tutorial, we first need to bring our Android project into a public repo on our GitHub account: create a public repo and push all the files to it.

Right now, only our own project can use this library, since the library module is accessible only within it. We'll have to publish the library on JitPack to make the email-validator library accessible to everyone.

Building for JitPack

    • Add the JitPack maven repository to the list of repositories in the root-level build.gradle

    • Enable the maven-publish plugin by adding publishing configuration to the library module's build.gradle (both are sketched below)
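The post's original snippets are screenshots; a hedged Gradle sketch, with publishing coordinates assumed from the install instructions later in this post:

// root-level build.gradle
allprojects {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://jitpack.io' }
    }
}

// library module build.gradle
apply plugin: 'maven-publish'

afterEvaluate {
    publishing {
        publications {
            // "release" matches the publishReleasePublicationToMavenLocal task used below
            release(MavenPublication) {
                from components.release
                groupId = 'com.github.dk19121991'
                artifactId = 'email-validator'
                version = '1.0'
            }
        }
    }
}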

Check Maven Publish Plugin

Let's check whether all the changes we made are correctly configured for the maven-publish plugin. Check that your library can be installed to mavenLocal ($HOME/.m2/repository):

    • Run the following command in the Android Studio terminal
      ./gradlew publishReleasePublicationToMavenLocal

    • Let’s see the $HOME/.m2/repository directory

Publish Library on JitPack

    • Commit the changes we made in the library module
    • If everything went well in the previous step, your library is ready to be released! Create a GitHub release or add a git tag and you're done!
    • Create a release tag in the GitHub repo

    • If there is any error, you can see the details of what went wrong in the "Log" tab. For instance, I can check what went wrong for version 1.3.

It says the build was successful, but when JitPack tried to upload the artifacts, they were nowhere to be found. The reason was that, in 1.3, I forgot to commit the maven-publish settings in the library module's build.gradle.

Installing Library

    • To install the library in any app, we have to add the following to the repositories in the project build.gradle
      maven { url 'https://jitpack.io' }

    • Add library dependency in the app module build.gradle
      implementation 'com.github.dk19121991:EmailValidator:1.5'

    • Sync the project, and voila! We can use our publicly published library in the app

 

Congratulations! You can now publish your library for public usage.

For more details and an in-depth view, you can find the code here

References: https://jitpack.io/ , https://github.com/jitpack/jitpack.io

 

How to Use Firebase Remote Config Efficiently?

Introduction

Assume it's a holiday, such as Holi, and you want to change your app's theme to match it. The easiest approach is to upload a new build with a new theme to the Google Play Store. But that does not guarantee that all of your users will download the upgrade, and it is inefficient to upload a new build only to modify the style. It is doable once, but not if you intend to do the same for every major festival.

Firebase Remote Config is perfect for handling these kinds of scenarios. It makes all of this possible without wasting time creating a new build every time and waiting for the app to be available on the Play Store.

In this blog, I will create a sample app and discuss Firebase Remote Config and how it works.

What is Firebase Remote Config

Firebase Remote Config is a cloud-based service. It modifies your app's behaviour and appearance without forcing all current users to download an update from the Play Store. Essentially, Remote Config lets you maintain parameters in the cloud, and it controls the behaviour and appearance of your app depending on these parameters.

Why Firebase Remote Config

    • Make modifications without having to republish
    • Customize every aspect of your app
    • Customize the app for different types of users.
    • You can test new functionalities on a small number of people.

Getting started

In Remote Config, we set in-app default values that govern the app's behaviour and appearance (such as text, color, and pictures, among other things). We can then fetch parameters from the Firebase Remote Config backend and override the default values.

States of Remote Config

We can divide the state of Remote Config into two categories:

    • Default 
    • Fetched

Default

In the default state, the default values are specified in your app. If there is no matching key on the Remote Config server, the default value is copied into the active config and returned to the client, and the app uses it.

Fetched

The fetched state holds the most recent configuration downloaded from the cloud but not yet activated. You must first activate these config parameters; activation copies the fetched values into the active config and makes them available to the app.

The system prioritizes parameter values as follows: activated values fetched from the Remote Config backend take precedence over the in-app default values.

Firebase Remote Config in Action

Let’s create a small app with an image view that will get the image URL from the remote config.

Create a Basic Sample app

 

      • Create a new project and choose the template of your choice.

I chose “Empty Activity”.

      • Add a name and click Finish

      • Add an ImageView in the activity layout

Create a Firebase Project

To use any Firebase tool, we need to create a Firebase project for it.

Let’s create one for our app RemoteConfigSample


      • Give any name for your project and click `Continue`




    • Select your Analytics location and create the project.


Add Android App to Our Firebase Project

    • Click on the Android icon to create an Android app in the firebase project

    • After filling in the relevant info, click Register app


    • Download and add google-services.json to your project as per the instruction provided.

    • Click Next after following the instructions to connect the Firebase SDK to your project.


Add Firebase Remote Config Dependencies

Add the following line to the app module's build.gradle dependencies and then sync the project.

implementation 'com.google.firebase:firebase-config'

Create a new folder called xml in the res folder, and in it create a resource file named config_defaults.xml. Set the default values for the Remote Config parameters there. If you need to adjust a value later, you can use the Firebase Console to set a modified value for the same key.
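A hedged sketch of res/xml/config_defaults.xml, using an assumed parameter key image_url and the default image URL shown in the App in Action section below:

<?xml version="1.0" encoding="utf-8"?>
<defaultsMap>
    <entry>
        <key>image_url</key>
        <value>https://tinyurl.com/25sskvem</value>
    </entry>
</defaultsMap>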

Add parameters to Remote Config in the Firebase console:
Go to the "Remote Config" section under Engage in the left pane.

Add a sample parameter

Implementation for Fetching Remote Config

  • Get the singleton object of Remote Config.
  • Set the in-app default parameter values:
    mFirebaseRemoteConfig.setDefaultsAsync(R.xml.config_defaults);
  • Create a method for fetching the parameters from the Firebase remote server (see the sketch below).
  • Create a button in the activity layout to trigger the fetch call.
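The original gists are unavailable; a minimal Java sketch of the first three steps, assuming the image_url key from the defaults file above:

private FirebaseRemoteConfig mFirebaseRemoteConfig;

private void initRemoteConfig() {
    mFirebaseRemoteConfig = FirebaseRemoteConfig.getInstance();
    // In-app defaults from res/xml/config_defaults.xml
    mFirebaseRemoteConfig.setDefaultsAsync(R.xml.config_defaults);
}

// Fetch the latest values from the Remote Config backend and activate them
private void fetchRemoteConfig() {
    mFirebaseRemoteConfig.fetchAndActivate()
            .addOnCompleteListener(this, task -> {
                if (task.isSuccessful()) {
                    String imageUrl = mFirebaseRemoteConfig.getString("image_url");
                    // load imageUrl into the ImageView
                }
            });
}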

App in Action

The default parameter in-app is “https://tinyurl.com/25sskvem”

When the app initially ran on the device, it loaded the image from the default parameter.

When the user clicks Fetch, the app fetches the parameter from the Firebase remote server with the value "https://tinyurl.com/2dh9dsxa" and loads it into the image view.


Best Practices

  • Don't update or switch aspects of the UI while the user is viewing or interacting with it, unless you have strong app or business reasons for doing so, like removing options related to a promotion that has just ended.
  • Don't send mass numbers of simultaneous fetch requests, as the server may throttle your app. The risk of this is low in most production scenarios; however, it can be an issue during active development.
  • Do not store confidential or sensitive data in Remote Config parameters.

 

Awesome, you made it till the end.

So we have seen how we can change app configurations without creating a new build. There can be many use cases around this: for example, fetching configuration to change the app theme, or showing a splash screen whose image/GIF changes for different events, all without uploading a new build.

We can even have a version check to notify users about the availability of a new version, or even force them to upgrade when the old version is fully deprecated.

Firebase Remote Config is convenient. In a future blog, we will cover a few more use cases with Firebase Remote Config.

For more details and an in-depth view, you can find the code here

References: https://firebase.google.com/products/remote-config, https://firebase.google.com/docs/remote-config/

When and How to Use CSS Animation

Introduction

CSS animation is quickly becoming an essential tool for web developers. While working on a project, I got a requirement from the client to create a complex, eye-catching animation for a splash screen consisting of four informative cards, animating from left to right on page load. To put it another way, it had to reveal the elements one by one, playfully and creatively, making things much more interesting.


To get this animation right, we had two options: Javascript or CSS.

Both can produce impressive animations. However, we had to choose the one that suits the purpose, so we considered some key differences as parameters to find the winner of Javascript vs. CSS animations:

Resilience

CSS rules are easy to write and maintain compared to Javascript. One broken CSS rule will not break the whole layout, whereas a single syntax error in Javascript may crash the complete web application or force the user to reload the page.

Functionality

In terms of functionality, CSS and Javascript are reasonably similar, although Javascript animations provide more control: you can pause, stop, revert, run animations asynchronously one after another, place them on a timeline, and schedule them.

Performance

Performance is another important consideration when you plan to target mobile platforms. All in all, CSS has relatively good performance: it offloads animation logic onto the browser itself, which lets the browser optimize DOM interaction and memory consumption and, most importantly, use the GPU to improve performance. On the other hand, Javascript performance can range from reasonably fast to much slower than CSS; it depends on the library used and puts the burden of optimization on the developer.

Optimization

CSS animations are better from an optimization perspective. In fact, they run on the GPU, so the frame rate is much higher than that of Javascript animations. On the positive side, CSS animations do not cause reflows and redraws, unlike modifying elements via Javascript.

These factors were my primary concerns. CSS animation was more suitable for my project because it provides complete control over the animation using just keyframes.

Moreover, to work with keyframes, we needed support from the following CSS properties:

Transform

CSS Transform allows elements to be transformed in two-dimensional or three-dimensional space: from moving an element to resizing it, from rotating it to tilting it, all without changing the document flow. A transform is typically triggered when an element changes state, such as on mouse-hover or mouse-click.

There are four significant aspects of transforms:

translate: The translate() function allows us to move an element across the plane (on the x and y-axis).

scale: The scale() function allows us to resize an element. We can either expand or shrink it.

rotate: The rotate() function allows us to make an element revolve around a fixed point. By default, it revolves around the element’s center.

skew: The skew() function allows us to distort an element by dragging its sides along a line.

Transition

As the name suggests, transition lets you control the transformation of elements. It helps in making the process smooth and gradual. On the flip side, if you don’t have it, the element being transformed would change abruptly from one form to another. What’s more, it is widely used for simple animations and can be applied to most of the CSS properties.

A complete list of CSS properties that can be animated using transition can be found here: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_animated_properties

Animation Properties

Transitions animate the transformation of a CSS element from one state to another. But for more complex animations which require animating multiple elements with state dependency, we can use animation properties.

Creating CSS animations using animation properties is a two-step process, and requires keyframes and animation properties.

The @keyframes At-Rule

Keyframes are used to specify the values for animating properties at various stages of the animation. Keyframes are specified using a specialized CSS-at-rule — @keyframes.

/* Standard syntax */
@keyframes animate {
  from {top: 0px; background: green; width: 100px;}
  to   {top: 200px; background: black; width: 300px;}
}

To ensure optimal browser support for your CSS keyframes, you should define both 0% and 100% selectors:

/* Standard syntax */
@keyframes animate {
  0%   {top: 0px;}
  25%  {top: 200px;}
  75%  {top: 50px;}
  100% {top: 100px;}
}

We can create complex animations using @keyframes. A simple animation has two keyframes, while a complex animation has several.

The complete list of sub-animation properties can be found at: https://developer.mozilla.org/en-US/docs/Web/CSS/animation

Let’s start working on our requirements. To start with, let’s create a basic card-based layout.

Step 1

We will create a simple two-card layout, and we will add the animation in Step 2 using keyframes. Since we can easily apply a transition to the CSS offset properties (left, top), we can use them to place the elements as required.

https://codepen.io/pen/NWbBrRb

Step 2

Now that we have the card-based layout ready, let's add some animation to it.
First, we need to hide the cards using opacity: 0 so that they are not visible before the animation.

Now, let’s add keyframes for the animation name.

@keyframes card-animation {
  from {
    opacity: 0;
    left: 0;
  }
  to {
    opacity: 1;
  }
}

Since we have already defined the left property in the .card class, we can skip it in the card-animation keyframe.

Step 3

Now that our keyframe is ready, let's add it to the .card class along with the animation-fill-mode.

animation: card-animation 1s;

animation-fill-mode: forwards;

We will have to add delay to animate the cards one by one.

.card-1 {
  animation-delay: 2s;
}

.card-2 {
  animation-delay: 3s;
}

That’s it! With the above simple code, we have with us the working card animation.

https://codepen.io/pen/ExNpyBz

Step 4

To update it as per our requirement, let’s add 2 more cards to it and apply the same animation-delay as we have used in the previous step.

.card-3 {
  animation-delay: 4s;
}

.card-4 {
  animation-delay: 5s;
}

Let’s see how it works.

https://codepen.io/pen/zYoLOYo

Conclusion

To sum up, use CSS animations for simpler "one-shot" transitions, like toggling UI element states. CSS is easier to use than JavaScript and still lets you build impressive animations from the user's point of view. Use JavaScript animations when you want advanced effects like bouncing, stop, pause, rewind, or slow down; JavaScript gives you more control than CSS.

The W3C is working on a new spec called Web Animations that aims to solve a lot of the deficiencies in CSS Animations and CSS Transitions, providing better runtime controls and extra features. We'll have to wait and see how things come together.

How to simplify Android app distribution with Fastlane and improve workflow?

Introduction

Making releases, taking screenshots, and updating metadata in the Google Play Store are all facets of Android app development. You can automate them to save time for meaningful things like building functionality and fixing bugs.

Fastlane lets you do all that repeatedly and efficiently. It's an open-source tool that makes Android and iOS app distribution simpler, helping you streamline every part of the development and publication workflow.

In this blog, we’ll learn how to use Fastlane to distribute an app on the firebase app distribution platform and add testers to test the build.

Why Fastlane

    • It automatically takes screenshots.
    • You can easily distribute new beta builds to testers so that you can get useful feedback quickly.
    • You can streamline the whole app store rollout phase.
    • Avoid the inconvenience of keeping track of code signing identities.
    • You can easily integrate Fastlane into existing CI services, including Bitrise, Circle CI, Jenkins, Travis CI. 

Getting Started

We will use our BLE Scanner app (you can read more about this app here) to automate a few tasks.
Check out the project and make sure it works fine; you should be able to build it and run it on a device.

Installing Fastlane 

There are many ways to install Fastlane; for more details, visit here.

We will install it using Homebrew:

brew install fastlane

Setting up Fastlane

Open the terminal, navigate to your project directory, and run the following command.

fastlane init

 It will ask you to confirm a few specifics.

    • When prompted, provide the package name of your application (e.g. com.dinkar.blescanner)

      You can get the package name of your application from the AndroidManifest.xml file


    • When asked for the path to your JSON secret file, press Enter (we can set this up later)
    • When asked if you plan to upload info to Google Play via Fastlane, reply 'n' (we can set this up later)

You'll get a couple more prompts after that; press Enter to proceed. When you're done, run the following command to test the new fastlane configuration:

fastlane test

After successfully running the ‘test’ command you will see something like below.

The following files can be found in the newly created `fastlane` directory:

Appfile: specifies configuration data that is global to your Android app
Fastfile: specifies the “lanes” that govern how fastlane acts

Configuring Fastlane

To store the automation setup, Fastlane uses a Fastfile. When you open Fastfile, you’ll see the following:

Fastlane groups multiple actions into lanes. A lane begins with `lane :name`, where name is the name given to the lane. You can see three separate lanes inside the file: test, beta, and deploy.

The following is a list of the actions that each lane performs:

    • test: performs all the project tests (unit, instrumented) using the gradle action.
    • beta: uses the gradle action followed by the crashlytics action to send a beta build to Firebase App Distribution.
    • deploy: uses the gradle action followed by the upload_to_play_store action to deploy a new update to Google Play.

Create new Lane

Let's edit the Fastfile and add a new lane to clean-build our app.
To add a new lane, we edit the `platform :android do` block.
Add a cleanBuild lane like the sketch below.
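A minimal sketch of such a lane, using fastlane's built-in gradle action:

desc "Clean the project build"
lane :cleanBuild do
  gradle(task: "clean")
end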

Open the terminal and run the following command

fastlane cleanBuild

You will get the following message after successful execution.

This shows us how to create a lane and how to use it. Now let's automate beta app distribution using Firebase App Distribution.

Automating Firebase App Distribution

Once you are done with a new feature, you’ll want to share it with beta testers or your QA to test the stability of the build and gather feedback before releasing it on the Play Store.
For this kind of distribution, we use the highly recommended Firebase app distribution tool. 

Create a Firebase Project

To use any of the Firebase tools in our app, we need a Firebase project for it. Let's create one for our BLE Scanner app.

    • Give any name for your project and click `Continue`

      • Select your Analytics location and create the project.

Add Android app to our Firebase project

    • Click on the Android icon to create an Android app in the firebase project

    • After filling in the relevant info, click Register app

    • Download and add google-services.json to your project as per the instruction provided.

    • Click Next after following the instructions to connect the Firebase SDK to your project.

    • Go to project settings

    • Get the App ID; Fastlane will ask you for it.

  

Install FirebaseCLI

Fastlane needs FirebaseCLI to connect to the Firebase server for uploading the build.

  • Open a terminal and run the command `curl -sL https://firebase.tools | bash`
  • After successful installation, log in to the Firebase CLI by running the following command in the terminal:

    firebase login
  • After a successful login, verify that the CLI is properly installed by listing your Firebase projects: run `firebase projects:list` and check for the project you created.

Install the Fastlane plugin for Firebase App Distribution

Run the following command in terminal

fastlane add_plugin firebase_app_distribution

Select `Y` for the prompt `Should fastlane modify the Gemfile at path`

You’re now ready to submit various versions of the app to different groups of testers using Firebase.

Enabling Firebase App Distribution for your app

    • Select App Distribution from the left pane of the console

    • Accept TnC and click on `Get started`


    • Now let’s create a test group with a few testers, whom we will send our build for testing.


Distribute app for testing

Open the Fastfile and overwrite the beta lane so that it builds a release and uploads it to Firebase App Distribution, replacing YOUR_APP_ID with the App ID you copied earlier and YOUR_GROUP_NAME with your tester group. A hedged sketch of what the whole lane might look like follows.
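This sketch uses the firebase_app_distribution action installed above; the release notes text is a placeholder:

desc "Distribute a beta build via Firebase App Distribution"
lane :beta do
  gradle(task: "clean assembleRelease")
  firebase_app_distribution(
    app: "YOUR_APP_ID",             # App ID from Firebase project settings
    groups: "YOUR_GROUP_NAME",      # tester group created earlier
    release_notes: "New beta build" # placeholder text
  )
end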

After editing Fastfile run the following command on the terminal.

fastlane beta

After the successful execution of the above command, you will see something like below.

The tester will receive an email notification for the build.

You can also check the Firebase console to verify that the new build was uploaded with release notes and that testers were added to it.

Handle Flavors

We can have multiple flavors of our app depending on different criteria; for example, different flavors for environments like production, staging, QA, and development.

To run a specific flavor we can run the following command

fastlane beta app:flavorName

Here flavorName can be prod, staging, qa, or dev.

If you provide a flavor via the app key, the Fastfile will run gradle assemble<Flavor>Release. Otherwise, it will run gradle assembleRelease to build all build flavors.

Best Practices

    • If possible, do not keep Fastlane configuration files in the repository.
    • To exclude generated and temporary files from getting committed to the repo, add the following lines to the repository's .gitignore file:
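These are the entries suggested by the fastlane documentation:

fastlane/report.xml
fastlane/Preview.html
fastlane/screenshots/**/*.png
fastlane/test_output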

    • It's also a smart idea to keep screenshots and other distribution artifacts out of the repository; if you need them again, re-generate them with Fastlane.
    • It is recommended that you use Git Tags or custom triggers rather than commit for App deployment.

Congratulations!
We have successfully automated the tasks of building an Android app and distributing it to QA. Now it's as simple as one command, "fastlane beta", which even a non-technical person can run.

For more details and an in-depth view, you can find the code here

References: https://fastlane.tools/, https://docs.fastlane.tools/

ArangoDB vs MySQL Performance Benchmarking


Introduction

Recently, we got into a project where our client was interested in implementing the application with a multi-model database called ArangoDB, even though it could easily have been built on any relational database. This prompted us to carry out a comparison between MySQL and ArangoDB. So, here we go!

Overview of ArangoDB

ArangoDB is a multi-model, distributed, open-source database with flexible data models for documents, graphs, and key-values. High-performance applications can be built using a convenient SQL-like query language or JavaScript extensions. It can be scaled horizontally or vertically.

MySQL

MySQL is an open-source relational database management system (RDBMS). It delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.

Arango vs MySQL: Performance Benchmarking

Configurations:

We used the following configurations to carry out the performance test.

  1. For Test 1, we spawned 4 VMs in Azure, each having 2 cores, 4 GB RAM, and a 4 GB disk.
  2. VM1 hosted ArangoDB and MySQL over Docker.
  3. VM2 hosted our Pizza Delivery Spring Boot application with 2048 MB memory and pool size 20.
  4. VM3 and VM4 were used to execute the Artillery test.
  5. For Test 2, we spawned 2 VMs in Azure, each having 4 cores, 8 GB RAM, and an 8 GB disk.
  6. VM1 hosted ArangoDB and MySQL over Docker.
  7. VM2 hosted our Pizza Delivery Spring Boot application with 2048 MB memory and pool size 20.
  8. We used 2 VMs from the Talentica network (TestBox1, TestBox2), and my laptop served as TestBox3; these were used to execute the Artillery test.
  9. Both ArangoDB and MySQL were loaded with 50,000+ records/documents.
  10. Each record or document was of size 10 KB.
  11. The Spring Boot applications were configured to fetch the top 50 orders of a given city.
  12. We used Artillery to load test our applications and set a timeout of 120 seconds for each scenario to be tested.
  13. For the load test, we used 3 boxes: 2 Azure VMs and a laptop. The arrival-rate distribution is mentioned in the data table under the "Machine wise Arrival rate" column.

Test Cases:

Tests for the following scenarios were carried out for both ArangoDB and MySQL:

  1. Create Order
  2. Find an Order by Order ID
  3. Find Top 50 Orders by City

Test Set:

Test1:

  1. MySQL used joins: the Order Details table was joined with the Pizza table, whereas ArangoDB stored the data in a nested structure.
  2. The test was conducted with 50K records.
  3. Tests were conducted with a 2-core, 4 GB RAM, 4 GB disk configuration.

 

Test 2:

  1. ArangoDB and MySQL used a flat table structure.
  2. The test was conducted with 100K records.
  3. Tests were conducted with a 4-core, 8 GB RAM, 8 GB disk configuration.

Test Results:

After executing the above test cases, the results were recorded in terms of latency (ms), DB and application CPU and memory utilization (%), and success rate. The recorded data is as follows:

  • Latency is in milliseconds
  • CPU and Memory Utilization is in %

Test 1

Test 2

Latency

Test 1

Test 2

CPU and Memory Utilization for ArangoDB and MySQL

Test 1

Test 2

Success Rate for ArangoDB vs MySQL

Test 1

Test 2

Analysis

Performance

    • Read Queries

While conducting this exercise, we found that the response time for read queries is about the same for both the ArangoDB and MySQL databases (even taking latency into account). This is derived from the results of Fetch/Find one.

  • Write Queries

While conducting this exercise, we observed that ArangoDB performed far better than MySQL in terms of write operations. ArangoDB can provide an edge over MySQL for write-heavy applications and bulk reads where the size of data per document/record is high.

  • Success Rate

ArangoDB is better in terms of successful responses within the given time of 120 seconds. MySQL's success rate improved when using the flat structure.

When to Choose MySQL

Though ArangoDB seems to be faster than MySQL in the various tests conducted, MySQL has an edge in reliability, stability, and ease of data export and import. It is a good DB for mission-critical applications and suitable for dealing with core/financial data.

When to choose ArangoDB

ArangoDB is a good choice for applications that need frequent fetching of bulk details and for write-heavy applications. That said, we observed that the ArangoDB application needed more restarts than the MySQL application.

In short, both ArangoDB and MySQL perform almost the same in terms of Read operation. ArangoDB is better than MySQL in terms of write operations.

Pros and Cons/Limitations

Feature Comparison

Conclusion

After testing our application with both ArangoDB and MySQL, we found ArangoDB is faster for writes and bulk reads. MySQL is in a good position from a stability point of view and has better community support than ArangoDB. ArangoDB is suitable for applications that need a graph DB, faster responses, and heavy data loads. However, MySQL is well suited for applications that deal with monetary details and need transaction support, such as banking.

GitHub Details For ArangoDB and MySQL Spring Boot Projects

https://github.com/priyakartalentica/ArangoDBvsMySQLBenchmarking.git

References

https://www.arangodb.com/

https://www.mysql.com/

https://db-engines.com/en/system/ArangoDB%3BMongoDB%3BMySQL

 

 

Scanning iBeacon and Eddystone Using Android BLE Scanner

Introduction

This blog will introduce you to Bluetooth Low Energy and will cover all the end-use application areas where it is used. Furthermore, the blog will also walk you through different kinds of BLE beacons and popular beacon protocols.

In the latter half of the article, we will create a demo Android application to test out two BLE beacon protocols, one from Apple and one from Google. But let us first go through some basic definitions and an overview of BLE and beacons before jumping into the coding part.

Bluetooth Low Energy

Bluetooth Low Energy is a wireless personal area network technology designed by Bluetooth SIG. The Bluetooth SIG identifies different markets for low energy technology, particularly in the field of smart home, health, sport, and fitness sectors. Some of the key advantages include:

  • low power requirements can run for months/years on a button cell
  • small size and low cost
  • compatibility with mobile phones, tablets, and computers

Bluetooth Low Energy (BLE), available from Android API 18 (4.3, Jelly Bean) and later, creates short connections between devices to transfer bursts of data. When not connected, BLE remains in sleep mode; compared to Classic Bluetooth, it utilizes less power by providing lower bandwidth. It is ideal for applications such as a heart-rate monitor or a wireless keyboard. To use BLE, devices need to have a chipset that supports it. As for BLE beacons: Bluetooth beacons are physical transmitters, a class of BLE devices that broadcast their identifiers to nearby electronic devices.

Use Cases of BLE Beacons

These beacons can be used for many proximity-related applications such as –

  • Proximity alerts: apps can raise alerts when these beacons are in the vicinity
  • Indoor navigation/location: by placing a proper number of beacons in a room and properly utilizing the signal strength of all of them, we can create a working solution for indoor navigation or indoor location.
  • Interactions: these beacons can be placed on the poster/banner of a movie in a theatre, and as soon as a device comes into proximity, the app can launch its trailer or teaser. The same can be done for museums, where beacons placed on an art piece let people get the details of the painting as a notification, along with video/audio/text info for the piece.
  • Healthcare: beacons can be used for tracking patient movement and activities

It can be used for many other use cases as well. For instance, you can place a BLE tag in your key and then can use your mobile phone to search for it if it’s inside a cupboard or just lying under the sofa.

Beacon Protocols

  • iBeacon: Apple's standard for Bluetooth beacons
  • AltBeacon: an open-source alternative to iBeacon created by Radius Networks
  • URIBeacon: directly broadcasts a URL which can be understood immediately
  • Eddystone: Google's standard for Bluetooth beacons; it supports three types of packets: Eddystone-UID, Eddystone-URL, and Eddystone-TLM.

Now, we will see how we can scan for Apple’s iBeacon and Google’s Eddystone-UID by creating a demo android application.

Getting Started

Create a new project and choose the template of your choice.

I chose “Empty Activity”.

BLE Dependency

There is no extra BLE library dependency as such for scanning for BLE beacons.

Open AndroidManifest.xml and add the following inside the manifest element.
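The post's original snippet is an embedded image; a hedged sketch of the standard declarations described below:

<uses-feature
    android:name="android.hardware.bluetooth_le"
    android:required="true" />

<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />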

The <uses-feature> tag with required set to "true" means that this app requires BLE hardware to work; Google Play will therefore make sure the app is only visible to devices that have BLE hardware available.

The <uses-permission> tag is required to get permission to use the Bluetooth hardware, with coarse location, in low-energy mode.

Check for Permission

Coarse location permission is needed for the Bluetooth Low Energy scanning mode, so we should make sure the user has granted the required permission.

Check whether we already have the permission; otherwise, show a dialog letting the user know why we need it.
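A minimal Java sketch of that check (REQUEST_COARSE_LOCATION is an app-defined request code):

if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_COARSE_LOCATION)
        != PackageManager.PERMISSION_GRANTED) {
    // Optionally explain why the permission is needed, then request it
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.ACCESS_COARSE_LOCATION},
            REQUEST_COARSE_LOCATION);
}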

Setting up Bluetooth API

Initialize the BluetoothManager to get the instance of BluetoothAdapter, which provides the BluetoothLeScanner required to perform scan-related operations for Bluetooth LE devices.

  • BluetoothManager: a high-level manager used to obtain an instance of a BluetoothAdapter and to conduct overall Bluetooth management.
  • BluetoothAdapter: represents the local device's Bluetooth adapter. The BluetoothAdapter lets you perform fundamental Bluetooth tasks, such as initiating device discovery, querying the list of bonded (paired) devices, instantiating a BluetoothDevice from a known MAC address, creating a BluetoothServerSocket to listen for connection requests from other devices, and starting a scan for Bluetooth LE devices.
  • BluetoothLeScanner: provides methods to perform scan-related operations for Bluetooth LE devices (see the sketch below).
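A minimal Java sketch of this setup:

BluetoothManager bluetoothManager =
        (BluetoothManager) getSystemService(Context.BLUETOOTH_SERVICE);
BluetoothAdapter bluetoothAdapter = bluetoothManager.getAdapter();
BluetoothLeScanner bleScanner = bluetoothAdapter.getBluetoothLeScanner();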

BLE Scan Callbacks
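The scanner delivers results through a ScanCallback. A minimal sketch (the parsing of each result is covered further below):

    import android.bluetooth.le.ScanCallback
    import android.bluetooth.le.ScanResult

    private val scanCallback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            // Each result carries the device, rssi, and raw scanRecord to parse.
        }

        override fun onScanFailed(errorCode: Int) {
            // e.g. SCAN_FAILED_ALREADY_STARTED; surface the error to the user.
        }
    }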

BLE Start/Stop Scanner

We can have a button to control starting and stopping the BLE scanner.
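A minimal sketch of such a toggle, reusing the bleScanner and scanCallback from the sketches above:

    private var scanning = false

    private fun toggleScan() {
        val scanner = bleScanner ?: return
        if (scanning) {
            scanner.stopScan(scanCallback)
        } else {
            scanner.startScan(scanCallback)
        }
        scanning = !scanning
    }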

Parse ScanResult to Get Relevant Data

We should create a Beacon class to hold the different pieces of information we will parse from the ScanResult delivered to the onScanResult callback.
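A minimal sketch of such a holder (the field names here are illustrative, not from the original code):

    data class Beacon(
        val address: String,               // device hardware address
        val rssi: Int,                     // signal strength in dBm
        val type: String,                  // "Eddystone-UID" or "iBeacon"
        val namespace: String? = null,     // Eddystone-UID: 10-byte namespace (hex)
        val instanceId: String? = null,    // Eddystone-UID: 6-byte instance (hex)
        val proximityUuid: String? = null, // iBeacon: 16-byte UUID (hex)
        val major: Int? = null,            // iBeacon major
        val minor: Int? = null             // iBeacon minor
    )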

Extracting the Eddystone-UID packet info, if present.

Eddystone UID: A unique, static ID with a 10-byte Namespace component and a 6-byte Instance component.

  • scanRecord: a combination of the advertisement and the scan response
  • device.address: the hardware address of the Bluetooth device, for example “00:11:22:AA:BB:CC”
  • rssi: received signal strength in dBm; the valid range is [-127, 126]
  • serviceUuids: the list of service UUIDs within the advertisement, used to identify the Bluetooth GATT services
  • eddystoneServiceId: the service UUID for Eddystone-UID, which is “0000FEAA-0000-1000-8000-00805F9B34FB”
  • serviceData: the service data byte array associated with a service UUID, in our case eddystoneServiceId
  • the Eddystone-UID packet info sits in serviceData from index 2 to 18; we convert this byte array to a hex string using a utility method

  • namespace is 10 bytes, i.e. the first 20 characters of the Eddystone-UID hex string
  • instanceId is 6 bytes, i.e. the remaining 12 characters (both are put together in the sketch below)
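Putting the above together, a minimal Kotlin sketch of the Eddystone-UID extraction (frame type 0x00 at byte 0 identifies a UID frame):

    import android.bluetooth.le.ScanResult
    import android.os.ParcelUuid

    val eddystoneServiceId: ParcelUuid =
        ParcelUuid.fromString("0000FEAA-0000-1000-8000-00805F9B34FB")

    // Returns (namespace, instanceId) as hex strings, or null if not a UID frame.
    fun parseEddystoneUid(result: ScanResult): Pair<String, String>? {
        val serviceData = result.scanRecord?.getServiceData(eddystoneServiceId) ?: return null
        if (serviceData.size < 18 || serviceData[0] != 0x00.toByte()) return null
        // Bytes 2..17 hold the 16-byte UID; convert to a 32-character hex string.
        val hex = serviceData.copyOfRange(2, 18)
            .joinToString("") { "%02X".format(it.toInt() and 0xFF) }
        return hex.substring(0, 20) to hex.substring(20, 32)
    }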

Extracting the iBeacon packet info, if present.

iBeacon: A unique, static ID with a 16-byte Proximity UUID component, a 2-byte Major component, and a 2-byte Minor component.

  • iBeaconManufactureData: the manufacturer-specific data associated with the manufacturer ID; for iBeacon the manufacturer ID is 0x004C (Apple)
  • the iBeacon UUID (Proximity UUID) is 16 bytes, extracted from iBeaconManufactureData from index 2 to 18 and converted from bytes to a hex string using a utility method
  • major is 2 bytes with a range of 1 to 65535, extracted from iBeaconManufactureData from index 18 to 20, converted to a hex string, and then to an Integer
  • minor is 2 bytes with a range of 1 to 65535, extracted from iBeaconManufactureData from index 20 to 22, converted to a hex string, and then to an Integer (see the sketch below)
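And a matching sketch for the iBeacon fields (after the manufacturer ID is stripped, the payload starts with the bytes 0x02 0x15):

    import android.bluetooth.le.ScanResult

    // Returns (proximityUuid, major, minor), or null if not an iBeacon frame.
    fun parseIBeacon(result: ScanResult): Triple<String, Int, Int>? {
        val data = result.scanRecord?.getManufacturerSpecificData(0x004C) ?: return null
        if (data.size < 22 || data[0] != 0x02.toByte() || data[1] != 0x15.toByte()) return null
        // Bytes 2..17: UUID; 18..19: major; 20..21: minor (big-endian).
        val uuid = data.copyOfRange(2, 18)
            .joinToString("") { "%02X".format(it.toInt() and 0xFF) }
        val major = ((data[18].toInt() and 0xFF) shl 8) or (data[19].toInt() and 0xFF)
        val minor = ((data[20].toInt() and 0xFF) shl 8) or (data[21].toInt() and 0xFF)
        return Triple(uuid, major, minor)
    }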

Let’s see the code in action.

Figure- Start screen

Figure- Start screen with the options

Figure- Result screen with Eddystone UID, iBeacon, generic BLE devices

The BLE beacons iBeacon and Eddystone-UID differ from each other, yet either can be used for any of the proximity-related applications: at the application level, both solve similar problems using different Bluetooth profiles.

Eddystone does have different packet types to solve different problems:

  • Eddystone-URL: for broadcasting a URL
  • Eddystone-TLM: broadcasts telemetry about the beacon itself, such as battery level and voltage, sensor data, beacon temperature, the number of packets sent since the last startup, beacon uptime, and other information relevant to beacon administrators

For more details and an in-depth view, you can find the code here.

 

The Recipe for Performance Tuning

Recently, I got a chance to work on the scaling of a project scheduling application. Typical projects have somewhere around 100 to 1000 activities. These activities have predecessor and successor relationships between them, collectively forming a project network. Further, the activities have their durations, resources, and work calendars. We wanted to scale this to a level where a user can schedule a very large project network (such as the repair of an aircraft carrier) with 100K+ tasks, while staying within defined time and space boundaries and transferring heavy data to the server.

Improving the performance of such a complex application can be a daunting task at first because the very heart of the application needs a revamp. If the project does not have unit testing in place, it may even look like a non-starter. To accomplish such an endeavor, one needs a foolproof strategy aligned with successful outcomes. Even though we ended up changing many data structures and putting a new scheduling algorithm in place, this article will not focus on algorithmic solutions but on design plans that can drive meaningful incremental changes with testability in mind. The following strategy applies to small performance tunings as well and can go a long way toward meaningful results.

Naive Approach

One might get tempted to take a naive approach and optimize the time and space of all the code equally. But be prepared for the fact that roughly 80 percent of that optimization is destined to be wasted: there is a high chance you will optimize code that does not run long enough to matter. All the time spent making that code fast, and the clarity lost in doing so, will be wasted. Hence, we suggest an approach distilled from our experience with successful performance tunings.

Suggested Approach

Define time and space constraints

The time and space constraints are driven by user experience, application types (web app vs desktop app), application need, and hardware configuration. There should be enough clarity on the API request payload and target response time before you start on a new assignment.

Identifying the Hotspots

The interesting thing about performance is that if you analyze most programs, you’ll find they waste most of their time in a small fraction of the code. You begin by running the program under a profiler that monitors it and tells you where it is consuming time and space, which reveals the parts of the program where the performance hotspots lie. There can be more than one hotspot, and hence two approaches: fix the low-hanging ones up front and then move to the more complex ones, or vice versa.

Refactor before fixing

Fixing these hotspots without refactoring is not a good strategy. While encapsulating the hotspot, the developer also gains more understanding of the underlying code. If the hotspot is simple, a function may be sufficient for the job; if some invariant properties must be kept uncompromised, class encapsulation is called for. A very useful design pattern here is the strategy pattern, which lets you keep different variants of an algorithm reflecting different space/time trade-offs.

Strategy combined with the factory pattern provides great flexibility: the factory takes config params so that we can control which implementation is instantiated at run time, as sketched below. Refactoring is important because switching between implementations will help you test the new implementation. Note that, as mentioned, there can be many hotspots, so the above pattern may need to be repeated in many places in the program, each instance with the specialized purpose of encapsulating its hotspot.
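A minimal sketch of that shape, with illustrative names (Scheduler, LegacyScheduler, OptimizedScheduler, and the placeholder domain types are ours, not the actual project code):

    // Placeholder domain types, for illustration only.
    class ProjectNetwork
    data class Schedule(val makespan: Int = 0)

    // Strategy: one interface, several interchangeable implementations.
    interface Scheduler {
        fun schedule(network: ProjectNetwork): Schedule
    }

    class LegacyScheduler : Scheduler {
        override fun schedule(network: ProjectNetwork) = Schedule() // existing algorithm
    }

    class OptimizedScheduler : Scheduler {
        override fun schedule(network: ProjectNetwork) = Schedule() // new algorithm
    }

    // Factory: a config param decides which implementation runs, at run time.
    object SchedulerFactory {
        fun create(useOptimized: Boolean): Scheduler =
            if (useOptimized) OptimizedScheduler() else LegacyScheduler()
    }

Because the choice is driven by a config param, flipping back to the legacy implementation requires no code change, which matters again at deployment time.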

Functional Verification and Plugging the Alternative Optimized Implementation

Once the refactoring is done, the next step is to write unit test cases against the refactored code to ensure the program is still functionally correct. You are now ready to implement and plug in your new optimized algorithm. The good part is that, while refactoring and writing unit tests, the developer gains enough understanding of the code and of the different considerations and cases the new optimized algorithm must take into account. Note that the same unit test cases written for functional verification must also pass against the new optimized implementation.

Even if you write many unit test cases, you may miss some scenarios and data-related edge cases. To overcome that and establish the correctness of the new implementation, there is a powerful technique called stress testing. In general, a stress test is a program that generates random inputs in a loop, executes the existing algorithm and the new algorithm on the same inputs, and compares the results, waiting for a test case where the two solutions differ. The assumption here is that the existing algorithm is correct, even though it is not time- or space-optimized. All of this is possible when the abstractions and class interfaces are well designed and the hotspot is refactored well.
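A minimal, generic sketch of such a stress test (gen, old, and new are supplied by the caller; none of these names come from the original code):

    import kotlin.random.Random

    // Runs both implementations on the same random inputs and compares results.
    fun <I, O> stressTest(gen: () -> I, old: (I) -> O, new: (I) -> O, runs: Int = 100_000) {
        repeat(runs) {
            val input = gen()
            val expected = old(input)
            val actual = new(input)
            check(expected == actual) { "Implementations disagree on input: $input" }
        }
    }

    // Example usage with a trivial pair of implementations:
    fun main() {
        stressTest(
            gen = { List(100) { Random.nextInt(1000) } },
            old = { it.sorted() },
            new = { it.sortedBy { x -> x } } // stand-in for the "new" algorithm
        )
    }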

Nonetheless, unit testing cannot entirely replace whole-system integration and manual testing. But when changing a critical area, unit tests must be written at least for the changes already made (refactoring + new implementation).

Given below is the strategy blueprint for your reference:

PROCEDURE PERFORMANCE-TUNING-STRATEGY(desired time and space constraints)

    while (measured time and space > desired)
        run profiler
        pick the top hotspot
        refactor the hotspot (strategy + factory pattern)
        write unit tests for the refactored code (existing impl/algo)
        implement the new impl/algo
        passed = run the same unit tests against the new impl/algo
        if (passed)
            STRESS-TEST()

    run whole-system integration test
    deploy in production

END

PROCEDURE STRESS-TEST()

    while (unsatisfied)
        generate random input
        call existing impl/algo
        call new impl/algo
        isEqual = compare new vs existing
        if (!isEqual)
            dump the difference
            break  // fix the issue, then repeat STRESS-TEST

END

The above procedure assumes the developer does not have much knowledge of the code, hence the refactoring and unit tests upfront. Even someone with a good understanding of the code who is tempted to implement the solution directly should avoid doing so. Be mindful that new developers will work on the same code in the future when the original developer is not around; this refactoring and these unit test cases will help the product immensely in later stages, all the while enabling the team to incorporate future changes easily.

Deploying the Solution

In a cloud setup with server-side applications, you may want to perform a rolling upgrade: deploy the new version to a few nodes at a time, check whether it is running smoothly, and gradually work your way through all the nodes. In a private cloud where access is limited, being able to switch between the new and existing implementations through configuration is very handy if anything goes wrong before you can get access to the server.

Conclusion

While making algorithmic changes, paying too little attention to the cleanliness of code and the structure of the design can cause system complications later. Moreover, a complex system is inherently difficult to justify in terms of accuracy at the system level when multiple components are at play. Mathematical proof can justify the correctness of algorithms, but formally proving every function is a laborious process. Hence, the only meaningful way to establish correctness is the scientific way.

Scientific theories cannot be proven correct but can be supported by experiments (unit test cases at the function level). Likewise, software needs to be tested to demonstrate its correctness, and a well-thought-out plan must be in place. If your project does not have unit testing, don’t be tempted to write unit tests equally for all the code; follow the Eisenhower Matrix instead. Unit testing of your refactored code (mentioned above) belongs to quadrant 1 (a point to take note of). Dijkstra once said, “Testing shows the presence, not the absence of bugs.” So, for some peace of mind, give stress testing a stab and see what wonders it does for performance tuning.