Scala code analysis and coverage reports on SonarQube using sbt

Introduction

This blog is all about configuring the scoverage plugin with SonarQube to track statement coverage as well as perform static code analysis for a Scala project. SonarQube supports many languages, but it does not support Scala out of the box, so this blog will guide you through configuring the sonar-scala and scoverage plugins to generate code analysis and code coverage reports.

The scoverage plugin for SonarQube reads the coverage reports generated by `sbt coverage test` and displays them on the SonarQube dashboard.

Here are the steps to configure Scala projects with SonarQube for code coverage as well as static code analysis.

  1. Install SonarQube and start the server.
  2. Go to the SonarQube Marketplace and install the `SonarScala` plugin.

This plugin provides a static code analyzer for the Scala language. It supports all the standard metrics implemented by SonarQube, including cognitive complexity.

  3. Add the `Scoverage` plugin to SonarQube from the Marketplace.

This plugin provides the ability to import statement coverage generated by Scoverage for Scala projects. It reads the XML report generated by Scoverage and populates several metrics in Sonar.

Requirements:

i.  SonarQube 5.1

ii. Scoverage 1.1.0

4. Now add the `sbt-sonar` plugin dependency to your Scala project (in project/plugins.sbt):

addSbtPlugin("com.github.mwz" % "sbt-sonar" % "1.6.0")

This sbt plugin can be used to run sonar-scanner launcher to analyze a Scala project with SonarQube.

Requirements:

i.  sbt 0.13.5+

ii. Scala 2.11/2.12

iii. SonarQube server.

iv. sonar-scanner (See point#5 for installation)
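For the `sbt coverage test` command used later to work, the project also needs the sbt-scoverage plugin. A minimal project/plugins.sbt combining both plugins could look like the sketch below; the sbt-scoverage version shown is an assumption, so pick whichever recent release matches your sbt version.

// project/plugins.sbt
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.5.1") // generates the scoverage reports read by SonarQube
addSbtPlugin("com.github.mwz" % "sbt-sonar" % "1.6.0")    // runs sonar-scanner from sbt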

5. Configure the `sonar-scanner` executable. Download the sonar-scanner CLI from the SonarQube documentation and make it available to the sbt-sonar plugin (typically by adding its bin directory to your PATH).

6. Now, configure the sonar properties in your project. This can be done in two ways:

  • Use a sonar-project.properties file:

This file has to be placed in your project's root directory. To use the external config file, set sonarUseExternalConfig to true:

import sbtsonar.SonarPlugin.autoImport.sonarUseExternalConfig

sonarUseExternalConfig := true

  • Configure the sonar properties in the build file:

By default, the plugin expects the properties to be defined in the sonarProperties setting key in sbt:
import sbtsonar.SonarPlugin.autoImport.sonarProperties

sonarProperties ++= Map(
  "sonar.sources" -> "src/main/scala",
  "sonar.tests" -> "src/test/scala",
  "sonar.modules" -> "module1,module2")
  7. Now run the commands below to publish the code analysis and code coverage reports to your SonarQube server:
  • sbt coverage test
  • sbt coverageReport
  • sbt sonarScan

 

SonarQube integration is very useful for performing automatic code reviews to detect bugs, code smells, and security vulnerabilities. SonarQube can also track history and provide a visual representation of it.

Introduction to Akka Streams

Why Streams?

In software development, there are cases where we need to handle a potentially large amount of data. Handling all of it at once can cause issues such as `out of memory` exceptions, so we should divide the data into chunks and handle each chunk independently.

Akka Streams comes to the rescue here, letting us do this in a more predictable and less chaotic manner.

Introduction

Akka Streams consists of three major components – Source, Flow, and Sink. Any non-cyclical stream consists of at least two components, a Source and a Sink, plus any number of Flow elements. In a sense, Source and Sink are special cases of Flow.

  • Source – this is the Source of data. It has exactly one output. We can think of Source as Publisher.
  • Sink – this is the Receiver of data. It has exactly one input. We can think of Sink as Receiver.
  • Flow – this is the Transformation that acts on the Source. It has exactly one input and one output.

A Flow sits between the Source and the Sink, as it represents the transformations applied to the Source data.
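Expressed in Scala types, the three shapes look roughly like this (a minimal sketch; the second type parameter is the materialized/auxiliary value discussed later):

import akka.{Done, NotUsed}
import akka.stream.scaladsl.{Flow, Sink, Source}
import scala.concurrent.Future

val source: Source[Int, NotUsed] = Source(1 to 3)                     // exactly one output (Int)
val flow: Flow[Int, String, NotUsed] = Flow[Int].map(_.toString)      // one input (Int), one output (String)
val sink: Sink[String, Future[Done]] = Sink.foreach[String](println)  // exactly one input (String)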

 

 

A very useful property is that we can combine these elements to obtain new ones, e.g. combine a Source and a Flow to obtain another Source.
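A small sketch of such compositions (not from the original post; the values are illustrative):

import akka.NotUsed
import akka.stream.scaladsl.{Flow, Sink, Source}

val numbers: Source[Int, NotUsed] = Source(1 to 10)
val doubler: Flow[Int, Int, NotUsed] = Flow[Int].map(_ * 2)

// Source + Flow gives another Source
val doubled: Source[Int, NotUsed] = numbers.via(doubler)

// Flow + Sink gives another Sink
val printingSink: Sink[Int, NotUsed] = doubler.to(Sink.foreach[Int](println))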

Akka Streams is called a reactive stream implementation because of its backpressure handling capabilities.

What are Reactive Streams?

Applications developed using streams can run into problems if the Source generates data faster than the Sink can handle. This causes the Sink to buffer the data – but if the data is too large, the Sink's buffer will also grow and can lead to memory issues.

To handle this, the Sink needs to communicate with the Source to slow down data generation until it has finished handling the current data. This communication between Publisher and Receiver is called backpressure handling, and streams that implement this mechanism are called Reactive Streams.
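As a small illustration (a sketch, not from the original post), a bounded buffer with OverflowStrategy.backpressure makes a fast source slow down to the pace of a deliberately throttled downstream stage:

import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, OverflowStrategy, ThrottleMode}
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._

object BackpressureDemo {
  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("backpressure-demo")
    implicit val materializer = ActorMaterializer()

    Source(1 to 1000000)                                 // fast producer
      .buffer(16, OverflowStrategy.backpressure)         // bounded buffer; when full, upstream is slowed down
      .throttle(10, 1.second, 10, ThrottleMode.shaping)  // deliberately slow downstream: 10 elements per second
      .runWith(Sink.foreach(println))
  }
}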

Example using Akka Stream:

In this example, let’s find the prime numbers between 1 and 10000 using Akka Streams. The Akka Streams version used is 2.5.11.
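To compile and run the example, the only library dependency needed is akka-stream; in build.sbt (version matching the one mentioned above):

libraryDependencies += "com.typesafe.akka" %% "akka-stream" % "2.5.11"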

 

package example.akka

import akka.{Done, NotUsed}
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl._

import scala.concurrent.Future

object AkkaStreamExample {

  def isPrime(i: Int): Boolean = {
    if (i <= 1) false
    else if (i == 2) true
    else !(2 until i).exists(x => i % x == 0)
  }

  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("actor-system")
    implicit val materializer = ActorMaterializer()

    val numbers = 1 to 10000

    // Source that will iterate over the number sequence
    val numberSource: Source[Int, NotUsed] = Source.fromIterator(() => numbers.iterator)

    // Flow for prime number detection
    val isPrimeFlow: Flow[Int, Int, NotUsed] = Flow[Int].filter(num => isPrime(num))

    // Source from the original Source with the Flow applied
    val primeNumbersSource: Source[Int, NotUsed] = numberSource.via(isPrimeFlow)

    // Sink to print the numbers
    val consoleSink: Sink[Int, Future[Done]] = Sink.foreach[Int](println)

    // Connect the Source with the Sink and run it using the materializer
    primeNumbersSource.runWith(consoleSink)
  }
}

 

Above example illustrated as a diagram:

 

 

  1. `Source` – based on the number iterator

`Source`, as explained already, represents a stream. Source takes two type parameters. The first one represents the type of data it emits and the second one is the type of the auxiliary value it can produce when run/materialized. If we don’t produce any, we use the NotUsed type provided by Akka.

The static methods to create a Source include (see the sketch after this list):

  • fromIterator – accepts elements until the iterator is empty
  • fromPublisher – uses an object that provides Publisher functionality
  • fromFuture – creates a new Source from a given Future
  • fromGraph – a Graph is also a Source.
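A quick sketch of the first and third of these (hypothetical values, assuming the imports from the main example):

val fromIteratorSource: Source[Int, NotUsed] = Source.fromIterator(() => Iterator.range(1, 100))
val fromFutureSource: Source[Int, NotUsed] = Source.fromFuture(Future.successful(42))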
  2. `Flow` – filters out only prime numbers

Basically, `Flow` is an ordered set of transformations applied to the provided input. It takes three type parameters – the input data type, the output data type, and the auxiliary (materialized value) data type.

We can create a Source by combining an existing one with a Flow, as used in the code:

val primeNumbersSource: Source[Int, NotUsed] = numberSource.via(isPrimeFlow)

  3. `Sink` – prints numbers to the console

It is basically the subscriber of the data and the last element of the stream's steps.

The Sink is basically a Flow which uses a foreach or fold function to run a procedure over its input elements and propagates the auxiliary value.

As with Source and Flow, the companion object provides methods for creating an instance of it. The main ones are (see the sketch after this list):

  • foreach – runs the given function for each received element
  • foreachParallel – same as foreach, except it runs in parallel
  • fold – runs the given function for each received element, propagating the resulting value to the next iteration.
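For example (a sketch, assuming the imports from the main example; fold materializes the final accumulated value):

val printSink: Sink[Int, Future[Done]] = Sink.foreach[Int](println)  // run a function per element
val sumSink: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _)  // fold the elements, materializing the sum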

The runWith method produces a Future that will be completed when the Source is empty and the Sink has finished processing the elements. If processing fails, the Future completes with a Failure.

We can also create a RunnableGraph instance and run it manually using toMat (or viaMat).
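A sketch of that, reusing the source and sink from the example above:

import akka.stream.scaladsl.{Keep, RunnableGraph}

val graph: RunnableGraph[Future[Done]] = primeNumbersSource.toMat(consoleSink)(Keep.right)
val done: Future[Done] = graph.run()  // runs the stream; the Future completes when the Sink is done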

  4. `ActorSystem` and `ActorMaterializer` are needed, as Akka Streams uses the Akka actor model.

An `ActorMaterializer` instance is needed to materialize a Flow into a Processor, a processing-stage construct from the Reactive Streams standard that Akka Streams implements.

In fact, Akka Streams employs back-pressure as described in the Reactive Streams standard mentioned above. Source, Flow, and Sink eventually get transformed into low-level Reactive Streams constructs through the process of materialization.

Journey from JSP to React JS SPA

This guide is going to help you when migrating from a JSP-based web application to a React.js single-page application. It describes the available tools that make the migration fast and easy. It will also help in understanding various conventions that differ between a client-server architecture and a traditional server-rendered application.

Development Environment Setup

Setting up a new React project from scratch can be a painful experience, so there are tools available to alleviate this pain.

  • Webpack vs Create-react-app

A React app does not need webpack and can run on its own, but there are many advantages of using webpack that cannot be ignored. Webpack is a module bundler and runs only at development/build time, not when the page actually loads in the browser. Here’s a list of tasks that webpack performs during development.

  1. Bundle resources: it bundles all the resources, including all the CSS and JS files, allowing the use of ‘require’ or ‘import’ statements in JavaScript code.
  2. Babel transpilation: it transpiles ES6 JavaScript code into ES5, allowing you to use the latest JavaScript features without worrying about older browser support.
  3. Development server: webpack provides its own development server so that development can be done inside an actual server-like environment. Eventually, everything will run inside a server.
  4. Hot Module Replacement: add or remove modules while the application is running, without a full reload.

Webpack is not simple. It can be confusing at times. Create-react-app comes to the rescue in such cases. It’s very handy during the initial setup of a new React app. It also provides an abstraction over the entire webpack configuration, so that you don’t have to configure webpack settings manually. The only drawback is that you don’t have control over what is happening behind the scenes. But again, you can eject at any time and switch to manual configuration. For beginners, it is always good to start with create-react-app and then later switch to webpack once you get familiar with all the webpack configuration details.

Getting started with create-react-app is pretty simple. You need to have npm (node package manager) or yarn installed. Then run the following command:

  1. “npm install -g create-react-app”: this will install the create-react-app script on your machine.
  2. “create-react-app <project_name>”: this will create the project with the default configuration and folder structure.
  3. Switch to your project root and then run “npm start”. This starts the React app in the development server.
  4. In order to deploy the app to a production server, run “npm run build”. This creates a build folder containing a deployable version of the app, where the JavaScript and CSS files are all minified and compressed. Deploy the contents of the build folder to an actual production server.
  5. Note that the entire webpack configuration is done behind the scenes and you have no control over it. In order to take control of the webpack configuration, run “npm run eject”. This is a one-time command and cannot be reverted. It will create all the webpack configuration files and you can change them as per your requirements.

Deployment

There are various ways a React app can be deployed to production. A React build consists of only static JavaScript (and CSS) files, which can be hosted and accessed from anywhere in the cloud.

  • The JavaScript bundle can be deployed inside the same war where the application is running. This can be achieved by running the npm script during the maven build, so that the build folder gets created during the maven build and gets packaged inside the war. The war can then be deployed to any server over the cloud.
  • The JavaScript bundle can be deployed outside the war but in the same Tomcat instance. This way it does not need to be built during the maven build process and can be handled separately.
  • The third and most popular way is to host these JavaScript files on S3 and serve them using AWS CloudFront. This way there is no need to deploy them separately on all the servers.

Session Maintenance

Unlike a JSP-based application, where sessions are maintained on the server side, a React app is a single-page application and maintains session state on the client side; server-side communication happens through stateless REST APIs, so the server does not need to maintain any session. For a simple application, you can use a React component’s ‘state’ to store the user’s session. But things can get messy when the state needs to be shared among different components. It also gets cumbersome when the state has to be passed down through many child components, and each component re-renders whenever the state changes. For such cases, you can go for Redux. Redux is a state management library for JavaScript applications. The main concept behind Redux is that the entire state of the application is stored in one central location and every component can access it from anywhere.

Convert Spring controller to RESTful API

Unlike Spring MVC, where the view is also served from the server side, a React.js app follows a client-server architecture where the entire view resides on the client side and business data is fetched from the server. So the first thing that needs to be done is to change the server-side controllers to return JSON responses rather than JSP view pages. The client will make AJAX calls to get the data from the server.

WebRTC – Basics of web real-time communication

WebRTC is a free open source standard for real-time, plugin-free video, audio and data communication between peers. Many solutions like Skype, Facebook, Google Hangout offer RTC but they need downloads, native apps or plugins. The guiding principles of the WebRTC project are that its APIs should be open source, free, standardized, built into web browsers and more efficient than existing technologies.

How does it work

  • Obtain a Video, Audio or Data stream from the current client.
  • Gather network information and exchange it with peer WebRTC enabled client.
  • Exchange metadata about the data to be transferred.
  • Stream audio, video or data.

That’s it! Well, almost: that’s a simplified version of what actually happens. Now that you have an overall picture, let’s dig into the details.

How it really works

WebRTC provides three basic APIs to achieve all of this.

  • MediaStream: Allowing the client to access a stream from a WebCam or microphone.
  • RTCPeerConnection: Enabling audio or video data transfer, with support for encryption and bandwidth management.
  • RTCDataChannel: Allowing peer-to-peer communication for any generic data.

Along with these capabilities, we will need a server (yes, we still need a server!) to identify the remote peer and to do the initial handshake. Once the peer has been identified, we can transfer data directly between the two peers if possible, or relay the information via a server.

Let’s look at each of these steps in detail.

MediaStream

MediaStream has a getUserMedia() method to get access to an audio or video stream, and it takes success and failure handlers.

 

navigator.getUserMedia(constraints, successCallback, errorCallback);

 

The constraints parameter is a JSON object that specifies whether audio or video access is required. In addition, we can specify some metadata about the constraints, like video width and height. Example:

 

navigator.getUserMedia({ audio: true, video: true}, successCallback, errorCallback);

 

RTCPeerConnection

This interface represents the connection between the local WebRTC client and a remote peer. It is used for efficient transfer of data between the peers. Both peers need to set up an RTCPeerConnection at their end. In general, we use the RTCPeerConnection onaddstream event callback to take care of the audio/video stream.

  • The initiator of the call (the caller) needs to create an offer and send it to the callee, with the help of a signalling server.
  • Callee which receives the offer needs to create an answer and send it back to the caller using the signalling server.
ICE

It is a framework that allows web browsers to connect with peers. There are many reasons why a straight-up connection from Peer A to Peer B simply won’t work. Most clients won’t have a public IP address, as they are usually sitting behind a firewall and a NAT. Given the involvement of NAT, our client has to figure out the IP address of the peer machine. This is where Session Traversal Utilities for NAT (STUN) and Traversal Using Relays around NAT (TURN) servers come into the picture.

STUN

A STUN server allows clients to discover their public IP address and the type of NAT they are behind. This information is used to establish a media connection. In most cases, a STUN server is only used during the connection setup and once that session has been established, media will flow directly between clients.

TURN

If a STUN server cannot establish the connection, ICE can switch to TURN. Traversal Using Relays around NAT (TURN) is an extension to STUN that allows media traversal over a NAT that does not allow the peer-to-peer connection required by STUN traffic. TURN servers are often used in the case of a symmetric NAT.

Unlike STUN, a TURN server remains in the media path after the connection has been established. That is why the term “relay” is used to define TURN. A TURN server literally relays the media between the WebRTC peers.

RTCDataChannel

The RTCDataChannel interface represents a bi-directional data channel between two peers of a connection. Objects of this type can be created using

 

RTCPeerConnection.createDataChannel()

 

Data channel capabilities make use of event-based communication:

var peerConn = new RTCPeerConnection(),
    dc = peerConn.createDataChannel("my channel");

dc.onmessage = function (event) {
  console.log("received: " + event.data);
};


Getting started with progressive React Web Apps using Firebase

 Introduction

Sending notifications is one of the best ways to increase your app usage. Of the many websites/apps a user visits, they remember only a few. Sometimes users install an app and then forget about it. Push notifications come to your help: they are a quick and simple way to notify the user without spamming their inbox. Push notifications are widely used by news and shopping apps. Apps built in such a way that they can display notifications and keep track of user activity are known as Progressive Apps. In this article, we will be discussing only React applications.

React is a JavaScript library for building user interfaces.

  • Declarative: React makes it painless to create interactive UIs. Design simple views for each state in your application, and React will efficiently update and render just the right components when your data changes. Declarative views make your code more predictable, simpler to understand, and easier to debug.
  • Component-Based: Build encapsulated components that manage their own state, then compose them to make complex UIs. Since component logic is written in JavaScript instead of templates, you can easily pass rich data through your app and keep the state out of the DOM.
  • Learn Once, Write Anywhere: We don’t make assumptions about the rest of your technology stack, so you can develop new features in React without rewriting existing code. React can also render on the server using Node and power mobile apps using React Native.

Firebase is Google’s mobile platform that helps you quickly develop high-quality apps and grow your business

As per Google Developers, Progressive Web Apps are

  • Reliable – Load instantly and never show the downasaur, even in uncertain network conditions.
  • Fast – Respond quickly to user interactions with silky smooth animations and no janky scrolling.
  • Engaging – Feel like a natural app on the device, with an immersive user experience.

Prerequisites:

To turn your app into a Progressive App you need:

  • Working React App.
  • React 12.0 or above
  • Node 6.0 or above
  • Chrome(50+) or Firefox(48+)
  • Google Cloud / Firebase Account (Even free trial will suffice)

Steps to implement Push Notifications using Cloud Messaging in React App

Step 1:

Log in to the Firebase console at https://console.firebase.google.com and create a project. Then go to Project Overview and get started by adding Firebase to your app.

Click on the platform on which you want to implement Cloud Messaging.

In our case, click on the web icon and you will see a config variable with an API key and sender ID. Copy and keep this object for use in our app.

Step 2:

Install Firebase SDK.

npm install firebase --save

Step 3:

Add the below code to your App.js.

In this code, we are asking the user for permission to send notifications. If the user allows it, we register a worker in the user’s browser which will listen for incoming push messages.

Step 4:

Add “gcm_sender_id”: “103953800507” to your manifest.json. (Note: 103953800507 is a hard-coded value; do not replace it with your own sender ID.)

Step 5:

Create a file named firebase-messaging-sw.js and add the below code.

This is the code for the worker which runs in the background in the browser, even if the user closes the app. We have added two event listeners: one to receive notifications and the other to handle clicks on the notification.

That’s it, we are done with the changes in the app; this setup will receive push notifications in the user’s browser. Now we need a setup to send push notifications to the user.

Sending Push Notifications to App from Firebase

To send push notifications, you need to store the token every time a new worker is registered or an existing worker is refreshed.

With the help of this token, you can send a unicast push notification to that user.

To send a message you need to send a POST Request

URL: https://fcm.googleapis.com/fcm/send

Body:

Headers:

Content-Type: “application/json”

Authorization: “key=AIzaSyD0TOmt….upinUwueESEYI”

To generate this key go to https://console.firebase.google.com/project/<your project>/settings/cloudmessaging/ and generate a key pair.

Use Public Key in Authorization Header.

There are a few other ways to send push messages, such as using the Firebase SDK. The Firebase tools can be installed via npm:

npm install -g firebase-tools

Then log in to Firebase:

firebase login

firebase init

Check docs here https://firebase.google.com/docs/cli/

Conclusion:

This is just a start with Progressive Apps. There are a lot of possibilities in the world of Progressive Apps. We can leverage the local resources available and minimize the use of REST calls. We can also give users a native-app-like experience in web apps when the user is offline. You can make use of Service Workers: they are great tools when the user is offline or away from the app.

Drawbacks of Progressive Web Apps

PWAs are not supported by iOS Safari; they only operate on Chrome, Firefox, or Opera. But surveys reveal that they perform better than mobile websites even where the web browser does not support them.

iOS Build Management using Custom Build Scheme

Introduction

One of the best practices in iOS development is being able to manage multiple environments during the development of a project. We often have to jump between DEV, QA, STAGE, and Production environments. Clients, as product owners, often request to have both the development version of the app and the production version, i.e. the App Store released version, on the same device.

If you have ever faced or might face this situation, then you need a custom build scheme.

Objective

This blog explains the significance of custom build schemes and build configurations in Xcode. We will see how we can leverage these to configure an iOS project to support multiple build environments without the need to duplicate targets, while keeping the same code base.

Prerequisites

  • Xcode 8.0 onwards
  • A Mac machine running macOS Sierra

Advantages of Custom Builds

  • Write code that only runs in a particular environment. For example, on DEV you might want to have different values for constants in the app than on Production.
  • Switch between different environments easily to deliver a build that talks to the production server after testing your app in a development environment.

Difference Between Build Schemes & Build Configurations

Before we start actual changes on XCode, let’s understand the difference between build schemes & build configurations first.

A build scheme is a blueprint for an entire build process. It is a way of telling Xcode what build configurations you want to use to create the development, unit test, and production builds for a given target (framework or app bundle).

A build configuration is a specific group of build settings that can be applied to any target.

Most app projects come with two build configurations and one build scheme. You get the debug and release build configurations, along with a build scheme that runs the debug configuration for debugging purposes and the release configuration for archiving/submission.

For most projects, this is perfectly fine and requires no tweaking. However, if you want to offer both a DEV and a PRODUCTION version of the same app, it’s not quite enough. You must add a new build configuration to achieve this.

Adding a new build configuration

Whenever you wish to support multiple environments in the app, you need to start by adding a new build configuration. There are some important steps involved which sometimes seems confusing at first, so follow every step carefully.

  1. Open Xcode and select the project file.

vj1

2. Go to Editor → Add Configuration → Duplicate Debug Configuration.

vj2

Repeat steps 1 and 2 for the Release configuration.
NOTE: Remember that for every environment you must duplicate both the Debug and the Release configurations. Thus, if you want to support DEV, QA, STAGE, and PRO, you should have the following configurations:

  • DEV-Debug, DEV-Release
  • QA-Debug, QA-Release
  • STAGE-Debug, STAGE-Release
  • PRO-Debug, PRO-Release

vj3

Creating a separate build scheme for every environment

We’re going to take our new build configurations and create a build scheme that runs them.

  1. Tap on the currently active scheme.
  2. In the dropdown, select New Scheme.

vj4

3. Provide a name for the new build scheme. I usually follow <Name of the app>-<Environment>. For example, MultipleEnvApp-QA.

vj5

Once you’ve done this, notice that your new build scheme is selected.

vj6

We’re not done yet. We have a build scheme, but it isn’t using our new build configurations yet.

4. Click on your build scheme and select Edit Scheme.

vj7

5. Select the appropriate build configuration as per the environment. For example, our selected scheme is MultipleEnvApp-QA, hence choose respective QA build configurations.

vj8

That’s it. In the same way, you can create and configure schemes for the STAGE and PRO environments. You can rename the default scheme to MultipleEnvApp-DEV.

Writing code that runs on a particular environment of your app

Unfortunately, having separate build schemes isn’t quite enough. We also need a way to selectively run blocks of code on a particular environment. To do that, we are going to add a custom Swift flag that only applies to the particular build configurations we just created.

  1. Select the target and then Go to Build Settings, and scroll down to Other Swift Flags.
  2. You must add the flags for every configuration. For example, add the flag “-DQA” to both of the QA build configurations.

vj9

-D is the namespace for custom flags that can be passed into a build command.

You can ignore the “-D” for now.

3. Go to any of your source files. For example, AppDelegate and add these lines of code.

https://gist.github.com/vkhemnar/7b4c38bb8f8597f2ac4792208dab3f2c 

We have created a global variable SOME_SERVICE_KEY and used a unique value for each environment. In this way, you can actually use different service keys, constants for different environments.

Different bundle identifiers for different build configurations

Optionally, if you want to use different bundle ID for different configurations, do the following:

  1. Create two app IDs on your Apple Developer portal.
  2. Go to your project settings and set the appropriate bundle identifiers for different build configurations.

vj10

Conclusion

That’s all there is to it. Now you are set up to deliver a configurable app for different environments using the same shared codebase. Here is the GitHub project that contains all the configurations we followed in this blog.

Happy Coding!

Comparing productivity of node.js frameworks

Our mission is to compare the node.js frameworks on productivity.

In one of my previous blogs, I benchmarked the performance of various Node.js frameworks against the native http module and native MongoDB driver, and the native combination was the clear winner in terms of performance.

https://blog.talentica.com/2017/11/14/comparing-performance-of-node-js-frameworks/

So, why not use only the native http module and native MongoDB driver? Well, one of the key aspects and USPs of Node.js frameworks is that they provide a lot of abstractions, and as a developer you don’t have to write boilerplate, repetitive code. So let’s see what our research has come up with against this concept. Continue reading Comparing productivity of node.js frameworks

Comparing performance of node.js frameworks

Our mission is to compare the node.js frameworks on performance (number of completed requests per second).

Node.js performance tests were performed on the Ubuntu subsystem (2 cores, 2 GB RAM) on a VM provisioned from DigitalOcean. The tests only utilize the most basic capabilities of the frameworks in question; therefore, the main goal was to show the relative overhead these frameworks add to the handling of a request. This is not a test of absolute performance, as that will vary greatly depending on the environment and network conditions. This test also doesn’t cover the utility each framework provides and how this enables complex applications to be built with them. Continue reading Comparing performance of node.js frameworks