Lite Sites & Apps with “Prefers Reduced Data”

One of the interesting experiments going on in CSS media queries is the prefers-reduced-data feature. It detects whether the user has requested content that consumes less internet data, so that you can serve a lighter version of your pages.

This feature is mainly useful for people who do not have access to fast or unlimited internet data. If we want more people to keep accessing our websites and web applications, this is the perfect tool to give them the information they need with minimal data usage.

Why do we need this?

In many parts of the world, people either have very limited access to the internet or none at all. In some places, the internet even comes with a data cap, which keeps users from browsing without worry. They have to be careful when accessing websites and web applications that are heavy and take more data to load.

By using the reduced-data feature you can reduce the page size, resulting in faster load times on slower internet connections. And, of course, people will be able to access your website or web application in the first place.

Current support

Officially, no browser supports this feature yet, though you can enable it in Chromium browsers behind a simple flag.
Various operating systems already expose a reduced-data setting, and if prefers-reduced-data ever gets implemented in CSS, it will mainly rely on the setting provided by the operating system.

The examples below will give you a better idea of this feature.
But before going through them, let's prepare your browser!
Make the following change in Google Chrome.

Note: This feature is not supported by any user agent by default and can only be used in Chromium browsers after enabling the correct flag. The feature is defined in the Media Queries Level 5 spec.

Enable the experimental-web-platform-features flag at chrome://flags.



Once done with this, do the following.

To access the "prefers-reduced-data" emulation, you need the Rendering panel. Open developer tools with Command + Option + I (Control + Shift + I on Windows and Linux). While it's open, press Command + Shift + P (Control + Shift + P on Windows and Linux), type rendering in the search bar, and press Enter or click the Show Rendering option.

Now in the Rendering tab, scroll down to the middle section to find "Emulate CSS media feature prefers-reduced-data".

You can change the prefers-reduced-data value from the available dropdown. Toggle these values to see their effect on the examples below.
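Once you can emulate the feature, you can also observe it from JavaScript. Below is a small sketch (the asset names are illustrative) that reacts whenever the emulated preference is toggled in the Rendering tab:

```javascript
// Decide which asset variant to serve based on the user's data preference.
function assetFor(prefersReducedData) {
  return prefersReducedData ? "small.jpg" : "large.jpg";
}

// Browser-only wiring, guarded so the helper above stays testable anywhere:
// re-run whenever the (emulated) preference changes.
if (typeof window !== "undefined" && window.matchMedia) {
  const query = window.matchMedia("(prefers-reduced-data: reduce)");
  const update = () => console.log("serving:", assetFor(query.matches));
  update();
  query.addEventListener("change", update);
}
```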

Possible use cases of prefers-reduced-data:

  1. Conditionally load fonts

    See the Pen "Render google fonts using prefers-reduced-data" by Vinil (@vinil) on CodePen.

Declare the custom font's @font-face rule under the no-preference option.

@media (prefers-reduced-data: no-preference) {
  @font-face {
    font-family: 'Roboto';
    font-style: normal;
    font-weight: 100;
    font-display: swap;
    src: url(roboto.woff2) format('woff2'); /* placeholder path */
  }
}

body {
  font-family: Roboto, system-ui, -apple-system, BlinkMacSystemFont, Segoe UI, Ubuntu, Cantarell, Noto Sans, sans-serif;
}
This will load the Roboto font only if prefers-reduced-data is set to no-preference. In other words, a user who has selected reduced data will not get the custom font and will see your content in the default, lighter font stack.

  2. Background images

    See the Pen "Render background images using prefers-reduced-data" by Vinil (@vinil) on CodePen.

You can load high-resolution background images under the no-preference media query and small images under the reduce value.

@media (prefers-reduced-data: reduce) {
	body {
		background-image: url(/images/small-image.webp);
	}
}

@media (prefers-reduced-data: no-preference) {
	body {
		background-image: url(/images/large-image.png);
	}
}
Here, if the user has selected reduced data, they will get the small image.

  3. Smaller images in HTML

    See the Pen "Loading different images based on prefers-reduced-data" by Vinil (@vinil) on CodePen.

One more way to use prefers-reduced-data is in the HTML <picture> tag. As you know, the <source> elements inside <picture> already support the media attribute.

	<picture>
		<source srcset="small.jpg" media="(prefers-reduced-data: reduce)" />
		<img src="large.jpg" alt="large image" srcset="large2x.jpg 2x" />
	</picture>

This code will serve small.jpg to users who have set their data preference to reduced.

  4. Conditionally preload and auto-play videos

    See the Pen "Loading videos using prefers-reduced-data" by Vinil (@vinil) on CodePen.

Just like HTML, we can use prefers-reduced-data in JavaScript too. Simply use the window.matchMedia function to decide the autoplay and preload behaviour of a video tag.

const video = document.createElement('video');
const canAutoPlayAndPreload = window.matchMedia('(prefers-reduced-data: no-preference)').matches;

video.autoplay = canAutoPlayAndPreload;
video.preload = canAutoPlayAndPreload ? 'auto' : 'none';
  5. Ditch infinite scrolling (don't auto-load additional data)

Following recent trends, many developers have replaced pagination with infinite scrolling, giving the end user access to more data without taking any action. But this change also comes with an additional data cost. To avoid this cost for people with prefers-reduced-data: reduce, you can simply give them a "Load More" button instead.

const button = document.querySelector(".loadmore-button");
const list = document.querySelector(".my-list");

const showButton = window.matchMedia("(prefers-reduced-data: reduce)").matches;

if (showButton) {
    button.addEventListener("click", loadMoreData);
} else {
    list.addEventListener("scroll", function () {
        const shouldLoadData = /* code that checks the scroll position */ false;
        if (shouldLoadData) {
            loadMoreData();
        }
    });
}

In a world where internet speeds reach 1 Tbps, there are still places where people don't have access to the internet, or have it only with many limitations. If you want to sell your product or application to a large audience, you need to consider all of these scenarios. You can't serve the same heavy content to everyone, especially to people with the limitations above. prefers-reduced-data can help you with this problem in many ways. Let's hope this feature gets the green flag and is implemented in all browsers.


Fluid Typography with Clamp: Usage and Benefits

When it comes to making applications or websites that support a range of devices, from the smallest screen resolution of 320×768 to large monitors of 2560×1440, a responsive or fluid layout is the first thing that comes to a developer's mind.

Fluid typography gives immense scope to redefine reading experiences on the web. However, it introduces the problems of uncontrolled font scaling and potential accessibility issues. Traditionally, media queries come to the rescue for fluid typography layouts.

/* Smartphones (portrait and landscape) */
@media only screen and (min-device-width: 320px) and (max-device-width: 480px) {
	/* Styles */
}

/* iPads (landscape) */
@media only screen and (min-device-width: 768px) and (max-device-width: 1024px) and (orientation: landscape) {
	/* Styles */
}

/* Desktops and laptops */
@media only screen and (min-width: 1224px) {
	/* Styles */
}

/* Large screens */
@media only screen and (min-width: 1824px) {
	/* Styles */
}

The Rise of CSS Clamp

Imagine developing an app for book readers, where there is a lot of text to highlight and different font sizes have to be handled for all screen sizes. We define font sizes for every range of devices and the code becomes bulky with all the media queries. With recent advancements, you can now replace multiple media queries with a single line of code that handles all screen sizes: the CSS clamp() function, which takes three parameters – a minimum value, a preferred value, and a maximum value.

For example: font-size: clamp(1rem, 2vw, 5rem);

In the given example, the property limits the font size to the range 1rem–5rem, varying with the viewport. It is recommended to pass the preferred value in vw (viewport width; 1vw is 1% of the viewport width) so that the font scales in line with the viewport without any extra mathematical logic.
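To see what the browser computes, clamp(min, preferred, max) simply resolves the preferred value constrained between min and max. The helper below is an illustration only (it assumes the default 16px root font size), not a browser API:

```javascript
// Emulate font-size: clamp(1rem, 2vw, 5rem) in px for a given viewport width.
function clampFontSizePx(viewportWidthPx, rootFontSizePx = 16) {
  const min = 1 * rootFontSizePx;           // 1rem -> 16px by default
  const preferred = 0.02 * viewportWidthPx; // 2vw  -> 2% of the viewport width
  const max = 5 * rootFontSizePx;           // 5rem -> 80px by default
  return Math.min(Math.max(preferred, min), max);
}
```

On a 320px viewport the preferred value (6.4px) is clamped up to 16px; on a 2000px viewport it resolves to 40px; beyond 4000px it stays capped at 80px.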

clamp() is supported in all the latest browsers, which makes it desirable among developers. As a fallback, there is one giant formula that serves the same purpose: font-size: calc([minimum size] + ([maximum size] - [minimum size]) * ((100vw - [minimum viewport width]) / ([maximum viewport width] - [minimum viewport width])));

For example: font-size: calc(14px + (26 - 14) * ((100vw - 300px) / (1600 - 300)));

Where 14px – Minimum font size

26px – Maximum font size

300px – Minimum screen size

1600px – Maximum screen size
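The calc() fallback is plain linear interpolation between the two endpoints; sketched in JavaScript for the example values above (illustrative only):

```javascript
// fontSize = min + (max - min) * (vw - minVw) / (maxVw - minVw)
function fluidFontSizePx(viewportWidthPx, minPx = 14, maxPx = 26, minVw = 300, maxVw = 1600) {
  const t = (viewportWidthPx - minVw) / (maxVw - minVw);
  return minPx + (maxPx - minPx) * t;
}
```

Note that, unlike clamp(), this formula keeps extrapolating outside the 300px–1600px range unless you cap it with media queries.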


Bootstrap 5 is Here: What’s New for Us

Bootstrap, the most popular and widely used CSS framework for developing modern, dynamic, responsive web interfaces, came up with a major update: Bootstrap 5. Bootstrap is well known for its grid system and predefined components, which give you the ability to reuse code instead of writing it from scratch. The last major Bootstrap update brought us Flexbox, an improved grid system, and a lot of utility classes. With Bootstrap 5 in the picture, we can expect the following changes:

  • No jQuery

Bootstrap has depended on jQuery since its early days. But with JS frameworks like Vue, React, and Angular, jQuery has been losing popularity. Downloading jQuery for a web application was not an ideal choice for developers, considering its size relative to its actual use in the application.

  • Switching to Vanilla JS:

Known as the universal programming language of the web, JavaScript is used almost everywhere: in browsers on desktops, tablets, and mobile phones, and in games. Unlike jQuery, which comes with the additional burden of unwanted functions and adds the $ object to the global scope, developers can write vanilla JavaScript code without worrying about size or pulling in other non-essential functions.

  • Bye Bye IE 10 and 11:

With Bootstrap 5, the team decided to drop support for Internet Explorer 10 and 11. Internet Explorer was always a pain for developers, as they couldn't use all the modern tools and features. With this change, developers can now focus on building modern web applications without worrying about code breaking on old browsers.

  • Responsive Font Sizes:

One of the most challenging parts is managing responsive font sizes based on the viewport. Media queries have been a great tool for these typography problems: developers can easily apply different font sizes for different devices based on their viewport. But with Bootstrap 5, the framework will enable responsive font sizes by default. Bootstrap will automatically resize typography elements using its Responsive Font Sizes (RFS) engine and render font sizes based on the device viewport.

  • Gutter Width Unit:

There are multiple size and length units we can use in CSS; some of the most commonly used include px, em, rem, %, vw, and vh. Bootstrap used px for its gutter width for quite a long time, and it is finally moving to rem (root em), which is relative to the computed font-size of the root element.
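The px-to-rem relationship is simple division by the root font size; a tiny helper (illustrative only, assuming the browser default of 16px) makes the conversion concrete:

```javascript
// Convert a px length to rem, relative to the root element's font-size.
function pxToRem(px, rootFontSizePx = 16) {
  return px / rootFontSizePx;
}
```

For example, a classic 30px gutter becomes 1.875rem, and it now scales automatically if the user changes their root font size.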

  • Remove Card Decks:

To set equal width and height for cards that aren't attached to each other, Bootstrap used to offer the card deck (card-deck class). With Bootstrap 5, the card deck has been removed, as the new grid system has more responsive controls.

  • SVG Icon Library:

Bootstrap 5 is coming with a brand new SVG icon library that removes the dependency on third-party icon libraries.

  • Class Updates:

As always, every Bootstrap version comes with some new classes and removes a few.

    List of removed classes:
    form-row, form-inline, list-inline, card-deck

    List of newly added classes (most of them gutter-related):
    g-*, gx-*, gy-*


Reinventing base HTML, JavaScript, and CSS continues to be one of the most frustrating experiences for developers. While some prefer writing everything themselves, it is still sensible to use an existing framework like Bootstrap and harness the benefits that come along. With all the new things coming in Bootstrap 5, it can't be denied that the Bootstrap team is taking big steps to make the framework simpler, lighter, more useful, and faster for the benefit of developers.

Scala code analysis and coverage report on Sonarqube using SBT


This blog is all about configuring the scoverage plugin with SonarQube for tracking statement coverage as well as static code analysis for a Scala project. SonarQube supports many languages out of the box, but Scala isn't one of them, so this blog will guide you through configuring the sonar-scala and scoverage plugins to generate code analysis and code coverage reports.

The scoverage plugin for SonarQube reads the coverage reports generated by sbt coverage test and displays them on the Sonar dashboard.

Here are the steps to configure Scala projects with SonarQube for code coverage as well as static code analysis.

  1. Install SonarQube and start the server.
  2. Go to the SonarQube marketplace and install the `SonarScala` plugin.

This plugin provides a static code analyzer for the Scala language. It supports all the standard metrics implemented by SonarQube, including cognitive complexity.

  3. Add the `Scoverage` plugin to SonarQube from the marketplace.

This plugin provides the ability to import statement coverage generated by Scoverage for Scala projects. It reads the XML report generated by Scoverage and populates several metrics in Sonar.


Prerequisites:

i.  SonarQube 5.1

ii. Scoverage 1.1.0

4. Now add the `sbt-sonar` plugin dependency to your Scala project:

addSbtPlugin("com.github.mwz" % "sbt-sonar" % "1.6.0")

This sbt plugin can be used to run sonar-scanner launcher to analyze a Scala project with SonarQube.


Prerequisites:

i.  sbt 0.13.5+

ii. Scala 2.11/2.12

iii. SonarQube server.

iv. sonar-scanner (See point#5 for installation)

5. Configure the `sonar-scanner` executable.


6. Now, configure the sonar properties in your project. This can be done in two ways:

  • Use an external config file:

This file (sonar-project.properties) has to be placed in your root directory. To use an external config file, set sonarUseExternalConfig to true.

import sbtsonar.SonarPlugin.autoImport.sonarUseExternalConfig

sonarUseExternalConfig := true

  • Configure the sonar properties in the build file:

By default, the plugin expects the properties to be defined in the sonarProperties setting key in sbt:
import sbtsonar.SonarPlugin.autoImport.sonarProperties

sonarProperties ++= Map(
  "sonar.sources" -> "src/main/scala",
  "sonar.tests" -> "src/test/scala",
  "sonar.modules" -> "module1,module2")
  7. Now run the below commands to publish the code analysis and code coverage reports to your SonarQube server.
  • sbt coverage test
  • sbt coverageReport
  • sbt sonarScan


SonarQube integration is really useful for performing automatic reviews of code to detect bugs, code smells, and security vulnerabilities. SonarQube can also track history and provide a visual representation of it.

Introduction to Akka Streams

Why Streams?

In software development, there are cases where we need to handle potentially large amounts of data. In these scenarios we can run into issues such as out-of-memory exceptions, so we should divide the data into chunks and handle the chunks independently.

Akka Streams comes to the rescue here, letting us do this in a more predictable and less chaotic manner.


Akka Streams consists of three major components – Source, Flow, and Sink – and any non-cyclical stream consists of at least two components (a Source and a Sink) and any number of Flow elements. In a way, Source and Sink are special cases of Flow.

  • Source – this is the Source of data. It has exactly one output. We can think of Source as Publisher.
  • Sink – this is the Receiver of data. It has exactly one input. We can think of Sink as Receiver.
  • Flow – this is the Transformation that acts on the Source. It has exactly one input and one output.

Here Flow sits in between the Source and Sink as they are the Transformations applied on the Source data.



A very good thing is that we can combine these elements to obtain another one e.g combine Source and Flow to obtain another Source.

Akka Streams is a Reactive Streams implementation because of its backpressure handling capabilities.

What are Reactive Streams?

Applications developed using streams can run into problems if the Source generates data faster than the Sink can handle. This causes the Sink to buffer the data, but if the data is too large the Sink's buffer will also grow and can lead to memory issues.

To handle this, the Sink needs to communicate with the Source, telling it to slow down the generation of data until the Sink has finished handling the current data. This communication between Publisher and Receiver is called backpressure handling, and streams that implement this mechanism are called Reactive Streams.

Example using Akka Stream:

In this example, let's try to find the prime numbers between 1 and 10000 using an Akka stream. The Akka Streams version used is 2.5.11.


package example.akka

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.{Done, NotUsed}

import scala.concurrent.Future

object AkkaStreamExample {

 def isPrime(i: Int): Boolean = {
  if (i <= 1) false
  else if (i == 2) true
  else !(2 until i).exists(x => i % x == 0)
 }

 def main(args: Array[String]): Unit = {
  implicit val system = ActorSystem("actor-system")
  implicit val materializer = ActorMaterializer()

  val numbers = 1 to 10000

  //Source that will iterate over the number sequence
  val numberSource: Source[Int, NotUsed] = Source.fromIterator(() => numbers.iterator)

  //Flow for prime number detection
  val isPrimeFlow: Flow[Int, Int, NotUsed] = Flow[Int].filter(num => isPrime(num))

  //Source from the original Source with the Flow applied
  val primeNumbersSource: Source[Int, NotUsed] = numberSource.via(isPrimeFlow)

  //Sink to print the numbers
  val consoleSink: Sink[Int, Future[Done]] = Sink.foreach[Int](println)

  //Connect the Source with the Sink and run it using the materializer
  primeNumbersSource.runWith(consoleSink)
 }
}

Above example illustrated as a diagram:



  1. `Source` – based on the number iterator

`Source`, as explained already, represents a stream. Source takes two type parameters: the first represents the type of data it emits, and the second is the type of the auxiliary value it can produce when run/materialized. If we don't produce any, we use the NotUsed type provided by Akka.

The static methods to create Source are

  • fromIterator – accepts elements until the iterator is empty
  • fromPublisher – uses an object that provides publisher functionality
  • fromFuture – creates a new Source from a given Future
  • fromGraph – a Graph is also a Source
  2. `Flow` – filters out only prime numbers

Basically, a `Flow` is an ordered set of transformations applied to the provided input. It takes three type parameters: the input datatype, the output datatype, and the auxiliary datatype.

We can create a Source by combining an existing one with a Flow, as used in the code:

val primeNumbersSource: Source[Int, NotUsed] = numberSource.via(isPrimeFlow)

  3. `Sink` – prints numbers to the console

It is basically the subscriber of the data and the last element of the stream's steps.

A Sink is basically a Flow which uses a foreach or fold function to run a procedure over its input elements and propagate the auxiliary value.

As with Source and Flow, the companion object provides methods for creating an instance. The main ones are:

  • foreach – runs the given function for each received element
  • foreachParallel – same as foreach, except it runs in parallel
  • fold – runs the given function for each received element, propagating the resulting value to the next iteration

The runWith method produces a Future that will be completed when the Source is empty and the Sink has finished processing the elements. If processing fails, it returns a Failure.

We can also create a RunnableGraph instance and run it manually using toMat (or viaMat).

  4. `ActorSystem` and `ActorMaterializer` are needed as Akka Streams uses the Akka actor model.

An `ActorMaterializer` instance is needed to materialize a Flow into a Processor, which represents a processing stage – a construct from the Reactive Streams standard, which Akka Streams implements.

In fact, Akka Streams employs back-pressure as described in the Reactive Streams standard mentioned above. Source, Flow, Sink get eventually transformed into low-level Reactive Streams constructs via the process of materialization.

Journey from JSP to React JS SPA

This guide is going to help you when migrating from a JSP-based web application to a React single-page application. It details the available tools that make the migration fast and easy, and it will help you understand the conventions of client-server architecture that differ from a traditional server-rendered application.

Development Environment Setup

Starting from scratch and setting up a new React project is a painful experience, so there are tools available to alleviate this pain.

  • Webpack vs Create-react-app

A React app does not strictly need webpack and can run on its own, but there are many advantages of using webpack that cannot be ignored. Webpack is a module bundler and runs only during development, not when the page actually loads in the browser. Here's a list of tasks that webpack performs during development.

  1. Bundle resources: it bundles all the resources, including all the CSS and JS files, allowing you to use 'require' or 'import' statements in JavaScript code.
  2. Babel transpilation: it transpiles ES6 JavaScript code into ES5, allowing you to use the latest JavaScript features without worrying about older browser support.
  3. Development server: webpack provides its own development server so that development can happen in an actual server-like environment; eventually, everything will run inside a server.
  4. Hot module replacement: add or remove modules while the application is running, without a full reload.

Webpack is not simple and can be confusing at times. Create React App comes to the rescue in such cases. It's very handy during the initial setup of a new React app and provides an abstraction over the entire webpack configuration, so you don't have to configure webpack manually. The only drawback is that you don't have control over what happens behind the scenes, but you can eject at any time and switch to manual configuration. For beginners, it is always good to start with Create React App and later switch to webpack once you are familiar with all the webpack configuration.

Getting started with create-react-app is pretty simple. You need to have npm (node package manager) or yarn installed. Then run the following command:

  1. "npm install -g create-react-app": installs the react scripts on your machine.
  2. "create-react-app <project_name>": creates the project with the default configuration and folder structure.
  3. Switch to your project root and run "npm start". This starts the React app on the development server.
  4. To deploy the app to a production server, run "npm run build". This creates a build folder containing a deployable version of the app, where the JavaScript and CSS files are minified and compressed. Deploy the contents of the build folder to an actual production server.
  5. Note that the entire webpack configuration is handled behind the scenes and you have no control over it. To take control of the webpack configuration, run "npm run eject". This is a one-time command and cannot be reverted; it creates all the webpack configuration files, which you can change as required.


There are various ways a React app can be deployed to production. A React build consists only of static JavaScript and CSS files, which can be hosted and served from anywhere in the cloud.

  • The JavaScript bundle can be deployed inside the same war where the application is running. This can be achieved by running the npm script during the maven build so that the build folder gets created and packaged inside the war. The war can then be deployed to any server in the cloud.
  • The JavaScript bundle can be deployed outside the war but in the same Tomcat instance. This way it does not need to be built during the maven build process and can be handled separately.
  • The third and most popular way is to host these JavaScript files on S3 and serve them using AWS CloudFront. This way there is no need to deploy them separately on every server.

Session Maintenance

Unlike a JSP-based application, where sessions are maintained on the server side, a React app is a single-page application that maintains session state on the client side; server communication happens through stateless REST APIs, so the server does not need to maintain any session. For a simple application, you can use a React component's 'state' to store the user's session. But things can get messy when state needs to be shared among different components, and it gets cumbersome when state has to be passed down through many child components, each of which re-renders when the state changes. For such cases, you can go for Redux. Redux is a state management tool for JavaScript applications. The main concept behind Redux is that the entire state of the application is stored in one central location, and every component can access it from anywhere.
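The central-store idea behind Redux can be sketched in a few lines (a conceptual illustration, not the actual Redux library):

```javascript
// A minimal Redux-style store: one state object, updated only by dispatching
// actions through a reducer, with subscriptions for interested components.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);
      listeners.forEach((listener) => listener());
    },
    subscribe(listener) {
      listeners.push(listener);
    },
  };
}

// Example: session state that any component can read after a login action.
const sessionReducer = (state, action) =>
  action.type === "LOGIN" ? { ...state, user: action.user } : state;

const store = createStore(sessionReducer, { user: null });
store.dispatch({ type: "LOGIN", user: "alice" });
```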

Convert Spring controller to RESTful API

Unlike Spring MVC, where the view is also served from the server side, a React app follows client-server architecture: the view lives on the client and business data is fetched from the server. So the first thing to do is change the server-side controllers to return JSON responses instead of JSP view pages. The client will then make Ajax calls to get the data from the server.

WebRTC – Basics of web real-time communication

WebRTC is a free, open-source standard for real-time, plugin-free video, audio, and data communication between peers. Many solutions like Skype, Facebook, and Google Hangouts offer RTC, but they need downloads, native apps, or plugins. The guiding principles of the WebRTC project are that its APIs should be open source, free, standardized, built into web browsers, and more efficient than existing technologies.

How does it work

  • Obtain a Video, Audio or Data stream from the current client.
  • Gather network information and exchange it with peer WebRTC enabled client.
  • Exchange metadata about the data to be transferred.
  • Stream audio, video or data.

That's it! Well, almost – that's a dumbed-down version of what actually happens. Now that you have the overall picture, let's dig into the details.

How it really works

WebRTC provides the implementation of three basic APIs to achieve everything.

  • MediaStream: Allowing the client to access a stream from a WebCam or microphone.
  • RTCPeerConnection: Enabling audio or video data transfer, with support for encryption and bandwidth management.
  • RTCDataChannel: Allowing peer-to-peer communication for any generic data.

Along with these capabilities, we will need a server (yes, we still need a server!) to identify the remote peer and to do the initial handshake. Once the peer has been identified, we can transfer data directly between the two peers if possible, or relay it through a server.

Let’s look at each of these steps in detail.


MediaStream

MediaStream has a getUserMedia() method to get access to an audio, video, or data stream and provide success and failure handlers.


navigator.getUserMedia(constraints, successCallback, errorCallback);


The constraints argument is a JSON object which specifies whether audio or video access is required. In addition, we can specify some metadata about the constraints, like video width and height. Example:


navigator.getUserMedia({ audio: true, video: true}, successCallback, errorCallback);



RTCPeerConnection

This interface represents the connection between the local WebRTC client and a remote peer. It is used to transfer data efficiently between the peers. Both peers need to set up an RTCPeerConnection at their end. In general, we use the RTCPeerConnection onaddstream event callback to take care of the audio/video stream.

  • The initiator of the call (the caller) needs to create an offer and send it to the callee, with the help of a signalling server.
  • Callee which receives the offer needs to create an answer and send it back to the caller using the signalling server.
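The caller's half of that exchange can be sketched as follows; the peer connection and signalling transport are passed in as parameters because WebRTC deliberately does not prescribe a signalling mechanism (both parameters are stand-ins here):

```javascript
// Caller side of the offer/answer handshake. In a browser, `pc` would be
// `new RTCPeerConnection(config)` and `sendToSignallingServer` would post
// the message over e.g. a WebSocket.
async function startCall(pc, sendToSignallingServer) {
  const offer = await pc.createOffer();  // SDP describing our media
  await pc.setLocalDescription(offer);   // apply it locally first
  sendToSignallingServer({ type: "offer", sdp: offer.sdp });
  return offer;
}
```

The callee mirrors this with setRemoteDescription(offer), createAnswer(), and setLocalDescription(answer).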

ICE

Interactive Connectivity Establishment (ICE) is a framework that allows web browsers to connect with peers. There are many reasons why a straight-up connection from peer A to peer B simply won't work. Most clients won't have a public IP address, as they are usually sitting behind a firewall and a NAT. Given the involvement of NAT, our client has to figure out the IP address of the peer machine. This is where Session Traversal Utilities for NAT (STUN) and Traversal Using Relays around NAT (TURN) servers come into the picture.


STUN

A STUN server allows clients to discover their public IP address and the type of NAT they are behind. This information is used to establish the media connection. In most cases, a STUN server is only used during connection setup; once the session has been established, media flows directly between the clients.


TURN

If a STUN server cannot establish the connection, ICE can switch to TURN. Traversal Using Relays around NAT (TURN) is an extension of STUN that allows media traversal over a NAT that does not allow the peer-to-peer connection required by STUN traffic. TURN servers are often used in the case of a symmetric NAT.

Unlike STUN, a TURN server remains in the media path after the connection has been established. That is why the term “relay” is used to define TURN. A TURN server literally relays the media between the WebRTC peers.


RTCDataChannel

The RTCDataChannel interface represents a bi-directional data channel between two peers of a connection. Objects of this type are created using RTCPeerConnection.createDataChannel().

Data channel capabilities make use of event-based communication:

var peerConn = new RTCPeerConnection(),
    dc = peerConn.createDataChannel("my channel");

dc.onmessage = function (event) {
    console.log("received: " + event.data);
};


Getting started with progressive React Web Apps using Firebase


Sending notifications is one of the best ways to increase your app usage. Out of the many websites and apps a user visits, they can remember only a few; sometimes users install an app and forget about it. Push notifications come to your help: they are a quick and simple way to notify the user without spamming their inbox. Push notifications are used widely by news and shopping apps. Apps built in such a way that they can display notifications and keep track of user activity are known as progressive apps. In this article, we will be discussing only React applications.

React is a JavaScript library for building user interfaces.

  • Declarative: React makes it painless to create interactive UIs. Design simple views for each state in your application, and React will efficiently update and render just the right components when your data changes. Declarative views make your code more predictable, simpler to understand, and easier to debug.
  • Component-Based: Build encapsulated components that manage their own state, then compose them to make complex UIs. Since component logic is written in JavaScript instead of templates, you can easily pass rich data through your app and keep the state out of the DOM.
  • Learn Once, Write Anywhere: We don’t make assumptions about the rest of your technology stack, so you can develop new features in React without rewriting existing code. React can also render on the server using Node and power mobile apps using React Native.

Firebase is Google’s mobile platform that helps you quickly develop high-quality apps and grow your business.

As per Google Developers, Progressive Web Apps are

  • Reliable – Load instantly and never show the downasaur, even in uncertain network conditions.
  • Fast – Respond quickly to user interactions with silky smooth animations and no janky scrolling.
  • Engaging – Feel like a natural app on the device, with an immersive user experience.


To turn your app into a Progressive App you need:

  • A working React app
  • React 12.0 or above
  • Node 6.0 or above
  • Chrome (50+) or Firefox (48+)
  • A Google Cloud / Firebase account (even the free tier will suffice)

Steps to implement Push Notifications using Cloud Messaging in React App

Step 1:

Log in to the Firebase console and create a project. Then go to Project Overview and get started by adding Firebase to your app.

Click on the platform on which you want to implement Cloud Messaging.

In our case, click on the web icon and you will see a config variable with an API key and sender ID. Copy and keep this object for use in our app.

Step 2:

Install Firebase SDK.

npm install firebase --save

Step 3:

Add the below code to your App.js.

In this code, we ask the user for permission to send notifications. If the user allows it, we start a worker in the user’s browser which will listen for incoming push messages.
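The snippet itself did not survive in this copy of the post; a minimal sketch, assuming the pre-v9 Firebase SDK (firebase.initializeApp, firebase.messaging()) and the config object copied in Step 1 (values are placeholders):

```javascript
import firebase from "firebase";

// Config object from Step 1 (placeholder values).
firebase.initializeApp({
  apiKey: "YOUR_API_KEY",
  messagingSenderId: "YOUR_SENDER_ID"
});

const messaging = firebase.messaging();

// Ask the user for permission, then obtain the registration token
// that identifies this browser to Cloud Messaging.
messaging.requestPermission()
  .then(() => messaging.getToken())
  .then(token => {
    // Send this token to your server so it can target this user later.
    console.log("FCM token:", token);
  })
  .catch(err => console.log("Permission denied", err));

// Messages received while the page is in the foreground arrive here.
messaging.onMessage(payload => console.log("Message received:", payload));
```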

Step 4:

Add "gcm_sender_id": "103953800507" to your manifest.json (note: 103953800507 is a hard-coded value; do not replace it with your own sender ID).
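For reference, the resulting manifest.json would look something like this (the fields other than gcm_sender_id are placeholders standing in for your existing manifest):

```json
{
  "short_name": "MyApp",
  "name": "My Progressive App",
  "start_url": "/",
  "gcm_sender_id": "103953800507"
}
```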

Step 5:

Create a file firebase-messaging-sw.js and add the below code.

This is the code for the worker, which runs in the background in the browser even if the user closes the app. We have added two event listeners: one to receive notifications and the other to handle clicks on the notification.
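This file also did not survive in this copy; a minimal sketch, again assuming the pre-v9 SDK (the CDN version number, sender ID, and notification payload fields are illustrative):

```javascript
// firebase-messaging-sw.js — runs as a service worker, even when the tab
// is closed.
importScripts("https://www.gstatic.com/firebasejs/7.14.2/firebase-app.js");
importScripts("https://www.gstatic.com/firebasejs/7.14.2/firebase-messaging.js");

firebase.initializeApp({ messagingSenderId: "YOUR_SENDER_ID" });
const messaging = firebase.messaging();

// Listener 1: show a notification for messages received in the background.
messaging.setBackgroundMessageHandler(payload =>
  self.registration.showNotification(payload.data.title, {
    body: payload.data.body
  })
);

// Listener 2: open the app when the user clicks the notification.
self.addEventListener("notificationclick", event => {
  event.notification.close();
  event.waitUntil(clients.openWindow("/"));
});
```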

That’s it, we are done with the changes in the app; this setup will receive push notifications in the user’s browser. Now we need a setup to send push notifications to the user.

Sending Push Notifications to App from Firebase

To send push notifications, you also need to store the token every time a new worker is registered or an existing worker is refreshed.

With the help of this token, you can send a unicast push notification to that user.

To send a message, you need to send a POST request to the Cloud Messaging endpoint (https://fcm.googleapis.com/fcm/send for the legacy HTTP API) with the following headers:

Content-Type: "application/json"

Authorization: "key=AIzaSyD0TOmt….upinUwueESEYI"

To generate this key, go to <your project>/settings/cloudmessaging/ and generate a key pair.

Use the server key from this page in the Authorization header.
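As a sketch, the request above can be assembled in Node; SERVER_KEY, TOKEN, and buildPushRequest are illustrative names, not part of any SDK, and the endpoint and header names are those of the legacy FCM HTTP API:

```javascript
const SERVER_KEY = "YOUR_SERVER_KEY";          // from Cloud Messaging settings
const TOKEN = "device-registration-token";     // stored when the worker registered

// Build the legacy FCM send request without dispatching it.
function buildPushRequest(serverKey, token, title, body) {
  return {
    url: "https://fcm.googleapis.com/fcm/send",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "key=" + serverKey
    },
    body: JSON.stringify({
      to: token,
      notification: { title: title, body: body }
    })
  };
}

const req = buildPushRequest(SERVER_KEY, TOKEN, "Hello", "Push from Firebase");
console.log(req.method + " " + req.url);
// → POST https://fcm.googleapis.com/fcm/send
```

In Node 18+, the resulting object could be dispatched with fetch(req.url, req).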

There are a few other ways to send push messages, such as using the Firebase SDK. The Firebase CLI tools can be installed via npm:

npm install -g firebase-tools

Then log in to Firebase and initialize your project:

firebase login

firebase init

Check docs here


This is just a start with Progressive Apps. There are a lot of possibilities in the world of Progressive Apps. We can leverage locally available resources and minimize the use of REST calls. We can also give users a native-app-like experience in web apps when they are offline, by making use of Service Workers. Service Workers are great tools when the user is offline or away from the app.
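As an illustration of that idea, a minimal cache-first service worker might look like this (the cache name and file list are placeholders):

```javascript
const CACHE = "offline-v1";

// Pre-cache the app shell at install time.
self.addEventListener("install", event => {
  event.waitUntil(
    caches.open(CACHE).then(cache => cache.addAll(["/", "/index.html"]))
  );
});

// Serve from the cache first, falling back to the network,
// so the app keeps working when the user is offline.
self.addEventListener("fetch", event => {
  event.respondWith(
    caches.match(event.request).then(hit => hit || fetch(event.request))
  );
});
```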

Drawbacks of Progressive Web Apps

PWAs are not supported by iOS Safari; they operate only on Chrome, Firefox, or Opera. But surveys reveal that PWAs perform better than mobile websites even where the browser does not support them.

iOS Build Management using Custom Build Scheme


One of the best practices in iOS development is to be able to manage multiple environments during the development of a project. Many a time we might have to jump between DEV, QA, STAGE, and Production environments. Clients, as product owners, often request to have both the development version and the production (App Store) version of the app on the same device.

If you have ever faced or might face this situation, then you need a custom build scheme.


This blog explains the significance of custom build schemes and build configurations in Xcode. We will see how we can leverage these to configure an iOS project to support multiple build environments without duplicating targets, while keeping the same code base.


  • Xcode 8.0 onwards
  • A Mac running macOS Sierra or later

Advantages of Custom Builds

  • Write code that only runs in a particular environment. For example, on DEV you might want different values for constants in the app than on Production.
  • Switch between different environments easily to deliver a build that talks to the production server after testing your app in a development environment.

Difference Between Build Schemes & Build Configurations

Before we start actual changes in Xcode, let’s understand the difference between build schemes and build configurations first.

A build scheme is a blueprint for an entire build process. It is a way of telling Xcode what build configurations you want to use to create the development, unit test, and production builds for a given target (framework or app bundle).

A build configuration is a specific group of build settings that can be applied to any target.

Most app projects come with two build configurations and one build scheme. You get the debug and release build configurations along with a build scheme that runs the debug configuration for debugging purposes and the release configuration for archiving/submission.

For most projects, this is perfectly fine and requires no tweaking. However, if you want to offer both a DEV and a PRODUCTION version of the same app, it’s not quite enough. You must add a new build configuration to achieve this.

Adding a new build configuration

Whenever you wish to support multiple environments in the app, you need to start by adding a new build configuration. There are some important steps involved which may seem confusing at first, so follow every step carefully.

  1. Open Xcode and select the project file.


2. Go to Editor → Add Configuration → Duplicate Debug Configuration.


Repeat Steps 1 and 2 for the Release configuration.
NOTE: Remember that for every environment you must duplicate the Debug and Release configurations. Thus, if you want to support DEV, QA, STAGE, and PRO, you should have the following configurations:

  • DEV-Debug, DEV-Release
  • QA-Debug, QA-Release
  • STAGE-Debug, STAGE-Release
  • PRO-Debug, PRO-Release


Creating a separate build scheme for every environment

We’re going to take our new build configurations and create a build scheme that runs them.

  1. Tap on the currently active scheme.
  2. In the dropdown, select New Scheme.


3. Provide a name for the new build scheme. I usually follow <Name of the app>-<Environment>. For example, MultipleEnvApp-QA.


Once you’ve done this, notice that your new build scheme is selected.


We’re not done yet. We have a build scheme, but it isn’t using our new build configurations yet.

4. Click on your build scheme and select Edit Scheme.


5. Select the appropriate build configuration as per the environment. For example, our selected scheme is MultipleEnvApp-QA, hence choose the respective QA build configurations.


That’s it. In the same way, you can create and configure schemes for the STAGE and PRO environments. You can rename the default scheme to MultipleEnvApp-DEV.
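If you also build from the command line or CI, the new schemes and configurations can be selected there too; a sketch using the names from this example:

```
# Build the QA variant: -scheme picks the scheme,
# -configuration the matching build configuration created above.
xcodebuild -scheme MultipleEnvApp-QA -configuration QA-Release build
```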

Writing code that runs on a particular environment of your app

Unfortunately, having separate build schemes isn’t quite enough. We also need a way to selectively run blocks of code in a particular environment. To do that, we are going to add a custom Swift flag that only applies to the particular build configurations we just created.

  1. Select the target and then Go to Build Settings, and scroll down to Other Swift Flags.
  2. You must add the flags for every configuration. For example, add the flag “-DQA” to both of the QA build configurations.


“-D” is the prefix for custom conditional-compilation flags passed to the Swift compiler. In code, you refer to the flag without the “-D” prefix (for example, QA).

3. Go to any of your source files, for example AppDelegate, and add these lines of code.

We have created a global variable SOME_SERVICE_KEY and used a unique value for each environment. In this way, you can use different service keys and constants for different environments.
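The referenced lines are missing from this copy of the post; a minimal sketch, assuming the -DDEV, -DQA, and -DSTAGE flags created above (the key values are placeholders):

```swift
// AppDelegate.swift — compile-time selection driven by the custom Swift flags.
#if DEV
let SOME_SERVICE_KEY = "dev-service-key"
#elseif QA
let SOME_SERVICE_KEY = "qa-service-key"
#elseif STAGE
let SOME_SERVICE_KEY = "stage-service-key"
#else
let SOME_SERVICE_KEY = "production-service-key"
#endif
```

Because the check happens at compile time, only the branch for the active build configuration ends up in the binary.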

Different bundle identifiers for different build configurations

Optionally, if you want to use different bundle IDs for different configurations, do the following:

  1. Create two app IDs on your Apple Developer portal.
  2. Go to your project settings and set the appropriate bundle identifiers for different build configurations.



That’s all there is to it. Now you are set up to deliver a configurable app for different environments using the same shared codebase. Here is the GitHub project that contains all the configurations which we followed in this blog.

Happy Coding!