Scanning iBeacon and Eddystone Using Android BLE Scanner

Introduction

This blog will introduce you to Bluetooth Low Energy and cover the common end-use application areas where it is used. Furthermore, it will walk you through different kinds of BLE beacons and popular beacon protocols.

In the latter half of the article, we will create a demo Android application to test out two BLE beacon protocols, one from Apple and one from Google. But let us first go through some basic definitions and an overview of BLE and beacons before jumping into the coding part.

Bluetooth Low Energy

Bluetooth Low Energy is a wireless personal area network technology designed by Bluetooth SIG. The Bluetooth SIG identifies different markets for low energy technology, particularly in the field of smart home, health, sport, and fitness sectors. Some of the key advantages include:

  • low power requirements: can run for months or years on a button cell
  • small size and low cost
  • compatibility with mobile phones, tablets, and computers

Bluetooth Low Energy (BLE), available from Android API 18 (4.3, Jelly Bean) onwards, creates short connections between devices to transfer bursts of data. When not connected, BLE remains in sleep mode; compared to Classic Bluetooth, it consumes less power by providing lower bandwidth. It is ideal for applications such as a heart-rate monitor or a wireless keyboard. To use BLE, devices need a chipset that supports it. As for BLE beacons: Bluetooth beacons are physical transmitters, a class of BLE devices that broadcast their identifiers to nearby electronic devices.

Use Cases of BLE Beacons

These beacons can be used for many proximity-related applications such as –

  • Proximity Alerts: an app can raise an alert when the device comes into a beacon’s vicinity
  • Indoor Navigation/Location: by placing a suitable number of beacons in a room and properly utilizing the signal strength of all of them, we can create a working solution for indoor navigation or indoor location.
  • Interactions: beacons can be placed on the poster/banner of a movie in a movie theatre, and as soon as a device comes into proximity, the app can launch its trailer or teaser. The same can be done in museums, where beacons placed next to art pieces let visitors get the details of a painting as a notification, along with video/audio/text info for the piece.
  • Healthcare: beacons can be used for tracking patient movement and activities

BLE beacons can serve many other use cases as well. For instance, you can attach a BLE tag to your keys and then use your mobile phone to find them, whether they are inside a cupboard or just lying under the sofa.

Beacon Protocols

  • iBeacon: Apple’s standard for Bluetooth beacon
  • AltBeacon: It is an open-source alternative to iBeacon created by Radius Networks
  • URIBeacon: It directly broadcasts a URL that nearby devices can act on immediately
  • Eddystone: Google’s standard for Bluetooth beacons. It supports three types of packets: Eddystone-UID, Eddystone-URL, and Eddystone-TLM.

Now, we will see how we can scan for Apple’s iBeacon and Google’s Eddystone-UID by creating a demo android application.

Getting Started

Create a new project and choose the template of your choice.

I chose “Empty Activity”.

BLE Dependency

No extra BLE library dependency is needed for scanning for BLE beacons.

Open AndroidManifest.xml and add the following in the manifest element.
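A minimal sketch of the entries described below (these are the standard BLE feature and permission declarations; your manifest may differ):

<uses-feature
    android:name="android.hardware.bluetooth_le"
    android:required="true" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />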

The <uses-feature> tag with required set to “true” means that this app requires BLE hardware to work; Google Play will therefore make the app visible only to devices that have BLE hardware available.

The <uses-permission> tags are required to get permission to use the Bluetooth hardware, together with coarse location, in low energy mode.

Check for Permission

Coarse location permission is needed for Bluetooth Low Energy scanning. Hence we should make sure that the user has granted the required permission.

Check whether we already have the permission; otherwise, show a dialog letting the user know why we need it.
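A minimal sketch in Java, assuming the code lives in the scanning Activity (the request code constant is our own):

import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

private static final int REQUEST_COARSE_LOCATION = 1; // hypothetical request code

private void checkLocationPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_COARSE_LOCATION)
            != PackageManager.PERMISSION_GRANTED) {
        // Show a rationale dialog here if needed, then request the permission
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.ACCESS_COARSE_LOCATION},
                REQUEST_COARSE_LOCATION);
    }
}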

Setting up Bluetooth API

Initialize the BluetoothManager to get the instance of BluetoothAdapter, which in turn provides the BluetoothLeScanner required to perform scan-related operations for Bluetooth LE devices (a sketch follows the list below).

  • BluetoothManager: High-level manager used to obtain an instance of a BluetoothAdapter and to conduct overall Bluetooth Management.
  • BluetoothAdapter: Represents the local device Bluetooth adapter. The BluetoothAdapter lets you perform fundamental Bluetooth tasks, such as initiating device discovery, querying the list of bonded (paired) devices, instantiating a BluetoothDevice using a known MAC address, creating a BluetoothServerSocket to listen for connection requests from other devices, and starting a scan for Bluetooth LE devices.
  • BluetoothLeScanner: This class provides methods to perform scan related operations for Bluetooth LE devices.
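Putting those pieces together, a minimal sketch (again assuming an Activity, with error handling abbreviated):

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothManager;
import android.bluetooth.le.BluetoothLeScanner;
import android.content.Context;

private BluetoothLeScanner bleScanner;

private void setupBluetooth() {
    BluetoothManager bluetoothManager =
            (BluetoothManager) getSystemService(Context.BLUETOOTH_SERVICE);
    BluetoothAdapter bluetoothAdapter = bluetoothManager.getAdapter();
    if (bluetoothAdapter != null && bluetoothAdapter.isEnabled()) {
        bleScanner = bluetoothAdapter.getBluetoothLeScanner();
    } // else: prompt the user to enable Bluetooth
}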

BLE Scan Callbacks
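The scanner delivers results through a ScanCallback. A sketch (parseScanResult is our own helper, fleshed out in the parsing section below):

import android.bluetooth.le.ScanCallback;
import android.bluetooth.le.ScanResult;
import android.util.Log;

private final ScanCallback scanCallback = new ScanCallback() {
    @Override
    public void onScanResult(int callbackType, ScanResult result) {
        // Each result carries the device, the RSSI and the raw advertisement record
        parseScanResult(result);
    }

    @Override
    public void onScanFailed(int errorCode) {
        Log.e("BLEScanner", "Scan failed with error code " + errorCode);
    }
};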

BLE Start/Stop Scanner

We can have a button to control starting and stopping the BLE scanner.
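A sketch of such a toggle, using the scanner and callback set up above:

private boolean scanning = false;

private void toggleScan() {
    if (!scanning) {
        bleScanner.startScan(scanCallback);
        scanning = true;
    } else {
        bleScanner.stopScan(scanCallback);
        scanning = false;
    }
}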

Parse ScanResult to Get Relevant Data

We should create a Beacon class to hold the different info we will parse out of the ScanResult delivered to the onScanResult callback.
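A sketch of such a holder class (the field names are our own, not necessarily the original post’s):

public class Beacon {
    public String deviceAddress; // hardware address, e.g. "00:11:22:AA:BB:CC"
    public int rssi;             // signal strength in dBm
    public String namespace;     // Eddystone-UID namespace (10 bytes)
    public String instanceId;    // Eddystone-UID instance (6 bytes)
    public String uuid;          // iBeacon proximity UUID (16 bytes)
    public Integer major;        // iBeacon major (2 bytes)
    public Integer minor;        // iBeacon minor (2 bytes)
}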

Extracting Eddystone UID packet info if there is any.

Eddystone UID: A unique, static ID with a 10-byte Namespace component and a 6-byte Instance component.

  • scanRecord: a combination of advertisement and scan response
  • device.address: hardware address of this Bluetooth device. For example, “00:11:22:AA:BB:CC”.
  • rssi: received signal strength in dBm. The valid range is [-127, 126].
  • serviceUuids: list of service UUIDs within the advertisement that are used to identify the Bluetooth GATT services.
  • eddystoneServiceId: the service UUID for Eddystone, which is “0000FEAA-0000-1000-8000-00805F9B34FB”
  • serviceData: the service data byte array associated with the serviceUuid, in our case eddystoneServiceId
  • the Eddystone-UID packet info sits in serviceData at indices 2 to 18; we need to convert this byte array to a hex string using a utility method (see the sketch below)

  • the namespace is 10 bytes, i.e. the first 20 characters of the eddystoneUID hex string
  • the instanceId is 6 bytes, i.e. the remaining 12 characters of the eddystoneUID hex string
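A parsing sketch following the steps above (toHexString is the utility method the post refers to; our version is included):

import android.bluetooth.le.ScanRecord;
import android.os.ParcelUuid;
import java.util.Arrays;

private static final ParcelUuid EDDYSTONE_SERVICE_ID =
        ParcelUuid.fromString("0000FEAA-0000-1000-8000-00805F9B34FB");

private void parseEddystone(ScanResult result, Beacon beacon) {
    ScanRecord scanRecord = result.getScanRecord();
    if (scanRecord == null) return;
    byte[] serviceData = scanRecord.getServiceData(EDDYSTONE_SERVICE_ID);
    // Frame type 0x00 identifies an Eddystone-UID packet
    if (serviceData != null && serviceData.length >= 18 && serviceData[0] == 0x00) {
        String eddystoneUID = toHexString(Arrays.copyOfRange(serviceData, 2, 18));
        beacon.namespace = eddystoneUID.substring(0, 20); // first 10 bytes
        beacon.instanceId = eddystoneUID.substring(20);   // remaining 6 bytes
    }
}

private static String toHexString(byte[] bytes) {
    StringBuilder sb = new StringBuilder();
    for (byte b : bytes) sb.append(String.format("%02x", b));
    return sb.toString();
}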

Extracting iBeacon packet info if there is any.

iBeacon: A unique, static ID with a 16-byte Proximity UUID component, a 2-byte Major component, and a 2-byte Minor component.

  • iBeaconManufactureData: the manufacturer-specific data associated with the manufacturer id; for iBeacon the manufacturer id is 0x004C (Apple).
  • the iBeacon UUID, or Proximity UUID, is 16 bytes, extracted from iBeaconManufactureData at indices 2 to 18 and converted to a hex string using the utility method
  • major is 2 bytes, ranging between 1 and 65535, extracted from iBeaconManufactureData at indices 18 to 20, converted to a hex string and then to an Integer
  • minor is 2 bytes, ranging between 1 and 65535, extracted from iBeaconManufactureData at indices 20 to 22, converted to a hex string and then to an Integer (see the sketch below)
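And the corresponding iBeacon sketch (the 0x02/0x15 prefix is the standard iBeacon type/length header):

private void parseIBeacon(ScanResult result, Beacon beacon) {
    ScanRecord scanRecord = result.getScanRecord();
    if (scanRecord == null) return;
    // 0x004C is Apple's manufacturer id
    byte[] data = scanRecord.getManufacturerSpecificData(0x004C);
    if (data != null && data.length >= 23 && data[0] == 0x02 && data[1] == 0x15) {
        beacon.uuid = toHexString(Arrays.copyOfRange(data, 2, 18));
        beacon.major = Integer.parseInt(toHexString(Arrays.copyOfRange(data, 18, 20)), 16);
        beacon.minor = Integer.parseInt(toHexString(Arrays.copyOfRange(data, 20, 22)), 16);
    }
}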

Let’s see the code in Action

Figure- Start screen

Figure- Start screen with the options

Figure- Result screen with Eddystone UID, iBeacon, generic BLE devices

The BLE beacons iBeacon and Eddystone-UID differ from each other; however, either can be used for any of the proximity-related applications. This is because, at the application level, both solve similar problems using different packet formats.

Eddystone does have different packet types that solve different problems:

  • Eddystone-URL: for broadcasting a URL
  • Eddystone-TLM: broadcasts telemetry about the beacon itself. This can include battery level, battery voltage, sensor data, beacon temperature, the number of packets sent since the last startup, beacon uptime, and other information relevant to beacon administrators

For more details and an in-depth view, you can find the code here

 

The Recipe for Performance Tuning

Recently, I got a chance to work on the scaling of a project scheduling application. Typical projects have somewhere around 100 to 1000 activities. These activities have predecessor and successor relationships between them, collectively forming a project network. Further, the activities have their durations, resources, and work calendars. We wanted to scale this to a level where a user can schedule a very large project network (such as the repair of an aircraft carrier) with 100K+ tasks, all the while staying within defined time and space boundaries and transferring heavy data to the server.

Improving the performance of such a complex application can be a daunting task at first, because the very heart of the application needs a revamp. If the project does not have unit testing in place, it may even look like a non-starter. To accomplish such an endeavor, one needs to adopt a foolproof strategy aligned with successful outcomes. Even though we ended up changing a lot of data structures along with placing a new scheduling algorithm, in this article we will not focus on algorithmic solutions but on design plans that can drive meaningful incremental changes with testability in mind. The following strategy can be applied to small performance tunings as well and can go a long way toward meaningful results.

Naive Approach

One might get tempted to take a naive approach and optimize the time and space of all the code equally. But be prepared for the fact that 80 percent of that optimization effort is destined to be wasted, as there is a high chance you will optimize code that doesn’t run often enough to matter. All the time spent making the program fast, and the clarity lost along the way, will surely be wasted. Hence, we suggest an approach distilled from our experience enabling successful performance tunings.

Suggested Approach

Define time and space constraints

The time and space constraints are driven by user experience, application type (web app vs desktop app), application needs, and hardware configuration. There should be enough clarity on the API request payload and target response time before you start on a new assignment.

Identifying the Hotspots

The interesting thing about performance is that if you analyze most programs, you’ll find that they waste most of their time in a small fraction of the code. You begin by running the program under a profiler that monitors it and tells you where it is consuming time and space. This way, you can find the parts of the program where the performance hotspots lie. There can be more than one hotspot, and hence two approaches: fix the low-hanging ones up front and then move to the more complex ones, or vice versa.

Refactor before fixing

Fixing these hotspots without refactoring first is not a good strategy. While encapsulating the hotspot, the developer also gains more understanding of the underlying code. If the hotspot is simple, a function may be sufficient for the job; however, the need to keep some invariant properties uncompromised calls for encapsulation in a class. A very useful design pattern here is the strategy pattern, which lets you have different variants of an algorithm reflecting different space/time trade-offs.

Strategy along with the factory pattern provides great flexibility: the factory takes config params so that we can control the instantiation of the desired implementation at run time (see the sketch below). Refactoring is important because switching between implementations will help you in testing the new implementation. Note that, as mentioned, there can be many hotspots, and hence the above pattern may need to be repeated in many places in the program, each with a specialized purpose, i.e. encapsulating its hotspot.
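A minimal sketch in Java (all names here are illustrative, not the project’s real classes):

// The hotspot is hidden behind a strategy interface.
interface SchedulingStrategy {
    long schedule(int[] projectNetwork); // placeholder signature
}

class ExistingSchedulingStrategy implements SchedulingStrategy {
    public long schedule(int[] projectNetwork) {
        // current, known-correct but slow implementation
        return 0;
    }
}

class OptimizedSchedulingStrategy implements SchedulingStrategy {
    public long schedule(int[] projectNetwork) {
        // new implementation with a different space/time trade-off
        return 0;
    }
}

// The factory reads a config param so the implementation can be swapped at run time.
class SchedulingStrategyFactory {
    static SchedulingStrategy create(boolean useOptimized) {
        return useOptimized ? new OptimizedSchedulingStrategy()
                            : new ExistingSchedulingStrategy();
    }
}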

Functional Verification and Plugging the Alternative Optimized Implementation

Once the refactoring is done, the next step is to write unit test cases against the refactored code to ensure that the program is still functionally correct. You are now ready to implement and plug in your new optimized algorithm. The good part is that while refactoring and writing unit test cases, the developer gains enough understanding of the code, as well as of the different considerations and cases the new optimized algorithm needs to take into account. Please note that the same unit test cases written for functional verification need to pass against the new optimized implementation as well.

Even if you write many unit test cases, you may miss some scenarios and data-related edge cases. To overcome that and establish the correctness of the new implementation, there is a powerful technique called stress testing. In general, a stress test is a program that generates random tests (with random inputs) in an infinite loop, executes the existing algorithm and the new algorithm on the same inputs, and compares the results, waiting for a test case where the solutions differ. The assumption here is that your existing algorithm is correct even though it is not time- or space-optimized. All this is possible if the abstractions and class interfaces are well designed and the hotspot is refactored well.
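A sketch of that loop, reusing the illustrative strategy classes from above (randomNetwork is a hypothetical random-input generator):

import java.util.Arrays;
import java.util.Random;

SchedulingStrategy existing = new ExistingSchedulingStrategy();
SchedulingStrategy optimized = new OptimizedSchedulingStrategy();
Random random = new Random();

while (true) {
    int[] input = randomNetwork(random);       // generate a random test
    long expected = existing.schedule(input);  // trusted result
    long actual = optimized.schedule(input);   // result under test
    if (expected != actual) {
        System.out.println("Mismatch for input " + Arrays.toString(input));
        break; // fix the issue, then rerun the stress test
    }
}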

Nonetheless, unit testing cannot entirely replace whole-system integration and manual testing. But while making changes in a critical area, unit tests must be written at least for the changes being made (refactoring + new implementation).

Given below is the strategy blueprint for your reference:

PROCEDURE:

PERFORMANCE-TUNING-STRATEGY(desired time and space constraints)
    while (measured time and space > desired)
        run profiler
        pick the top hotspot
        refactor the hotspot (using strategy + factory pattern)
        write unit tests for the refactored code / existing impl/algo
        implement the new impl/algo
        passed = run the same unit tests against the new impl/algo
        if (passed)
            STRESS-TEST()
    run whole-system integration test
    deploy in production
END

PROCEDURE:

STRESS-TEST()
    while (unsatisfied)
        generate random input
        call existing impl/algo
        call new impl/algo
        isEqual = compare new vs existing
        if (!isEqual)
            dump the difference
            break (fix the issue + repeat STRESS-TEST)
END

The above procedure assumes that the developer does not have much knowledge of the code, hence the refactoring and unit tests upfront. Even someone with a good understanding of the code, who may be tempted to implement the solution directly, should avoid doing so. Be mindful that new developers will work on the same code in the future when the original developer is not around. The refactoring and unit test cases will help the product immensely in later stages, all the while enabling the team to incorporate future changes easily.

Deploying the Solution

In a cloud setup with server-side applications, you may want to perform a rolling upgrade: deploy the new version to a few nodes at a time, check whether it is running smoothly, and gradually work your way through all the nodes. In a private cloud setup where access is limited, being able to switch between the new and existing implementations through configuration can be very handy (if anything goes wrong) until you get access to the server.

Conclusion

While making algorithmic changes, too little focus on the cleanliness of code and the structure of the design can result in system complications down the road. Moreover, a complex system is inherently difficult to justify in terms of accuracy at the system level when multiple components are at play. We can use mathematical proof to justify the correctness of algorithms, but formally proving every function is a laborious process. Hence, the only meaningful way is the scientific way of proving and establishing things.

Scientific theories cannot be proven correct but can be supported with experiments (unit test cases at the function level). Likewise, software needs to be tested to demonstrate its correctness, and a well-thought-out plan is required to be in place. If your project does not have unit testing in place, don’t get tempted to write unit test cases equally for all the code; follow the Eisenhower Matrix. Unit testing of your refactored code (mentioned above) belongs to quadrant 1 (a point to take note of). Dijkstra once said, “Testing shows the presence, not the absence of bugs”. So, for some peace of mind, you can give stress testing a stab and see what wonders it does for performance tuning.

Jenkins Pipeline: Features & Configurations

Jenkins, as we know, is an automation server that helps automate the build, test, and deploy process and implement the CI/CD workflows needed for every piece of software nowadays. It also helps QAs either integrate smoke/sanity automation test scripts right after the build flow or run regression suites separately by creating and scheduling Jenkins jobs.

Considering we are all familiar with the basics of Jenkins, in this blog we talk about one of the Jenkins project types, ‘Pipeline’, and some features and configurations related to it:

  • What is Jenkins Pipeline?
  • Why Jenkins Pipeline?
  • Jenkins Pipeline Implementation
  • Jenkins pipeline with master-slave build execution
  • Blue ocean Jenkins UI
  • Summary

What is Jenkins Pipeline?

Pipelines are Jenkins jobs enabled by the Pipeline plugin and built with simple text scripts that use a Pipeline DSL (domain-specific language) based on the Groovy programming language. The Jenkins pipeline allows us to write Jenkins build steps in code (Refer Figure 1).

This is a better alternative to the generally used freestyle project, where every step of CI/CD is configured step by step in the Jenkins UI itself.

Figure 1: Jenkins Project type ‘Pipeline’

Why Jenkins Pipeline?

The traditional approach of maintaining jobs has some limitations like:

  • Managing a huge number of jobs and their maintenance is tough
  • Making changes to Jenkins jobs through UI is very time consuming
  • Auditing and tracking the jobs is tough

Thus, the Jenkins pipeline provides a solution: it captures the job steps in a simple Groovy script, and the pipeline file itself becomes part of our codebase in SCM. This helps in the following ways:

  • Automating the job, since the build steps are written as code in a simple Jenkins text file
  • The Jenkinsfile can be part of our source code and checked into a version control system
  • Better audit logging and tracking of changes in the job
  • Deploying a similar job in any other environment is easy, since we don’t need to install plugins and set up individual job steps again
  • Good visualization of the execution status of each stage of the job

Jenkins pipeline implementation

To write a Jenkins pipeline file using a Groovy script for our Jenkins job, we need to understand some of these terms.

A Jenkinsfile can be written using two types of syntax: Declarative and Scripted. Since Declarative is more recent and designed to make writing and reading Pipeline code easier, we’ll take it as our example (Refer Figure 2).

  • Pipeline – A Pipeline’s code defines your entire build process, which typically includes stages for building an application, testing it, and then delivering it.
  • Agent/Node – Defines the Jenkins worker where we want this job to run. If left blank, it is treated as ‘any node’.
  • Stage – Structures your script into a high-level sequence and defines a phase of the job like build, deploy, test, etc. Users can define stages based on how they want to divide the job conceptually.
  • Step – A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time.

Figure 2: Basic Building Blocks Declarative Pipeline (numbering in the image shows execution sequence)
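In code, those building blocks line up roughly like this minimal declarative Jenkinsfile (the stage names and echo steps are illustrative):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building the application..'
            }
        }
        stage('Test') {
            steps {
                echo 'Running the tests..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying..'
            }
        }
    }
}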

Let’s take the example of Postman collection execution through the Newman command, as we did in our project and explained in a previous blog of the series, ‘API automation using Postman – Simplified’.

To execute Postman API tests using Jenkins and store the test results, we need to pull the Postman collection from git, install some dependencies like npm, run the Newman command to execute the collection, process the output to fetch important data, and commit the report to git.

All of this can be done in a freestyle project by configuring each stage step by step, and some steps require a plugin installation first.

But if we want to achieve the same thing using a Jenkins pipeline, we just need to write a simple text file using the syntax shown in Figure 2. Thus, every required set of actions can be added in the different stages of the Jenkinsfile, as shown in Figure 3. Steps can include commands similar to Windows batch commands, Linux shell scripts, etc., with Groovy syntax. When this pipeline job executes, there are visualization plugins to see the execution details for each stage, like which stage caused a failure or how much time a stage took to complete (Refer Figure 4).

Figure 3: Pipeline text file template for one example case of Newman execution of postman collection
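A sketch along the lines of Figure 3 (the repository URL and file names are placeholders, not the project’s real ones):

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Pull the Postman collection from git
                git 'https://github.com/example/postman-collection.git'
            }
        }
        stage('Install dependencies') {
            steps {
                sh 'npm install -g newman'
            }
        }
        stage('Run collection') {
            steps {
                // Execute the collection and export a JSON report
                sh 'newman run collection.json -r cli,json --reporter-json-export report.json'
            }
        }
    }
}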

Figure 4: Pipeline Stage view after execution

Jenkins pipeline with master-slave build execution

If we use distributed execution of Jenkins, have slave nodes or agents for running Jenkins jobs, and want to run a Jenkins pipeline on a specific agent, the agent name can be specified as

agent { label 'my-defined-label' }

rather than specifying the agent as ‘any’ in the declarative pipeline.

To understand the flow of Jenkins slave execution better, let’s briefly go through how to create a slave node in the master and establish a connection with it.

In your master Jenkins, go to Manage Jenkins -> Manage Nodes, where the master node already exists.

Click on ‘New Node’ and configure the details (Refer Figure 5).

Figure 5: Add a Node

There are two ways to establish a connection:

  1. Master Node connects to slave node over SSH

For this method of establishing a connection between master and slave, we need to configure the slave agent’s host IP, port, and private SSH key in the slave configuration modal (Refer Figure 6). The private SSH key of the agent needs to be added under the Jenkins credentials manager using the Add Credential option, after which it can be selected from the Credentials dropdown.

Another point to note here is that the user has to save this SSH key as an allowed host to verify it as a known host, if using the ‘Known hosts file Verification Strategy’ shown in the image.

The master is then able to connect to the slave agent.

Figure 6: Slave Node Configuration using a method where Master connects to Slave (SSH method)

  2. Slave node connects to master node, also called a connection through JNLP (useful when the slave is behind a firewall)

Setting up the slave node using this method requires some Jenkins settings first:

  • Under Manage Jenkins->Configure Global Security, Specify TCP port for Agents (Refer Figure 7)

Figure 7: Setting TCP port

  • Under Manage Jenkins->Configure System, Specify Jenkins location to which Agents can connect (Refer Figure 8)

By default, the location is set to localhost, which needs to be modified to the Jenkins server’s local IP and TCP port.

Figure 8: Setting Jenkins Location

After the above settings are done, go to Manage Jenkins -> Manage Nodes and create a New Node (similar to Figure 5).

In the slave node configuration (Refer Figure 9), under Launch method, we can see the option ‘Launch agent by connecting it to the master’ (in older versions also referred to as ‘Launch agent via Java Web Start’).

Figure 9: Slave Node Configuration using a method where Slave connects to master (JNLP method)

After providing the ‘Remote root directory’, click Save to create the slave node.

For Windows, a direct option to launch the agent is provided: on the slave machine, hit the Jenkins server URL and click on the Launch option.

For Linux, a command is provided to run (Refer Figure 10).

Figure 10: Launch options after Slave node creation

Using the above methods, one can set up slave nodes and execute Jenkins jobs. One can also make use of Docker to run a Jenkins container to achieve the same.

Blue Ocean UI

Blue Ocean is a new frontend for Jenkins, built from the ground up for the Jenkins pipeline. This modern visual design aims to improve clarity and reduce clutter and navigational depth, making the user experience very concise.

To enable Blue Ocean UI, the user first needs to install the Blue Ocean plugin. (Refer to Figure 11)

Figure 11: Installing the Blue Ocean plugin

Once the Plugin is installed, the user can see the option ‘Open Blue Ocean’ in the left pane to switch to Blue Ocean UI (Refer Figure 12). Similarly while in Blue Ocean UI, the user sees the option to switch to classic UI (Refer Figure 13)

Figure 12: ‘Open Blue Ocean’ option in the classic UI

Figure 13: Option to switch back to the classic UI from Blue Ocean

Blue Ocean UI provides:

  • Sophisticated visualization of the pipeline
  • A pipeline editor
  • Personalization
  • More precision to quickly find what’s wrong during an intervention
  • Native integration to branch and pull requests

The images below give a high-level idea of what the new pipeline creation view looks like (Refer Figure 14) and how pipeline execution is visualized (Refer Figure 15) in Blue Ocean.

Figure 14: New Pipeline Creation in Blue Ocean

Figure 15: Pipeline Execution Visualization in Blue Ocean

Pipelines are visualized on the screen along with the steps and logs to allow simplified comprehension of the continuous delivery pipeline, from the simple to the most sophisticated scenarios. Scrolling through 10,000-line log files is no longer required, as Blue Ocean breaks the log down per step and calls out where your build failed.

Conclusion

Although pipelines have a few limitations in terms of plugin compatibility and the additional effort needed to manage and maintain pipeline scripts as applications and technologies change, they still have multiple benefits. Pipelines are a fantastic way to move beyond traditional jobs, giving us a new view backed by years of traditional CI power. Blue Ocean builds upon the solid foundation of the Jenkins CI server by providing both a cosmetic and a functional overhaul for the modern process.

 

Enabling Support for Dark Mode in your Web Application

Introduction to Dark Mode

macOS introduced Dark Mode, wherein dark colors are used instead of light colors for the user interface. Dark mode is now everywhere: Mac, Windows, Android, and now the iPhone and iPad. It is a system-level setting that switches the UI to dark surfaces. Dark mode, with light text on a dark background, is mostly used to reduce eye strain and blue-light exposure in low-light conditions.

The impact of dark mode is felt less in Safari, because almost all the websites and web apps built to date are not designed to support it. In Apple’s browser, the title bar at the top turns black, but web pages are displayed the same way they are in regular, light mode. All that whiteness and brightness can be jarring against the dark elements of dark mode.

What can we do?

We should build our websites and applications to be compatible with dark mode too. But first, let’s take a look at how to enable dark mode on Windows, macOS, and iOS devices.

  • Enable Dark Mode in Windows 10: Go to Settings > Personalization > Colors > Choose your default Windows mode > Dark.
  • Enable Dark Mode in macOS (MacBook/Mac mini): Go to System Preferences > General > Dark.
  • Enable Dark Mode on iPhone/iPad: Go to Settings > Display & Brightness > Dark.

How can we do that?

There is a new media query based on the user’s operating-system theme, called prefers-color-scheme. Inside it you can add any CSS you want to apply, just like a regular device-responsive media query.

Method 1: The straightforward approach, with variables defined for both modes.

“prefers-color-scheme: dark”

To check if this media query works, change your theme preference to ‘dark’.

@media (prefers-color-scheme: dark) {
  body {
    background-color: $dark-mode-background;
    color: $dark-mode-text-color;
  }
}

“prefers-color-scheme: light”

Similarly, for the light theme, we have another media query.

@media (prefers-color-scheme: light) {
  body {
    background-color: $light-mode-background;
    color: $light-mode-text-color;
  }
}

Method 2: We can set CSS variables for changing themes. CSS variables start with two dashes (--) and are case-sensitive.

As we are changing the theme, we can define the CSS variables in the global scope, i.e. in :root or in the body selector.

:root {
  --bg: #fff;
}

@media (prefers-color-scheme: dark) {
  :root {
    --bg: #000;
  }
}
/* change your OS to a dark theme to see the color scheme switch */
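Any element can then read the variable with var(); a minimal example:

body {
  background: var(--bg);
}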

This CodePen demonstrates the media query; try toggling your OS dark mode for a better understanding: https://codepen.io/shwetabhagre/pen/yLyNXOd

Do You Use Operating System’s Dark Mode?

Because this is a very new feature, people get confused about its usage. We found a survey that helps answer whether people really use dark mode, and when they do and do not like to use it.

Conclusion:

  • As a new feature, dark mode is supported only by updated browsers. For older browser versions, we need to keep traditional CSS as a fallback
  • The chart below shows browser support for prefers-color-scheme

Working with Kafka Consumers

What is Kafka

Apache Kafka is an open-source, distributed streaming platform used to publish and subscribe to streams of records. It is a fast, scalable, fault-tolerant, durable, pub-sub messaging system. Kafka is reliable, has high throughput, and offers good replication management.

Kafka works with Flume, Spark Streaming, Storm, HBase, and Flink for real-time ingestion, analysis, and processing of streaming data. Kafka data can be unloaded to data lakes like S3 or Hadoop HDFS. Kafka brokers work with low-latency tools like Spark, Storm, and Flink to do real-time data analysis.

Topics and Partitions

All data in Kafka is written to topics. A topic is the name of a category/feed under which records are stored and published. Producers write data to Kafka topics, and consumers read data/messages from them. Multiple topics are created in Kafka as per requirements.

Each topic is divided into multiple partitions, which means the messages of a single topic can live in several partitions. Each partition can have replicas, which are identical copies.

Consumers

A consumer is the process that reads from Kafka. It can be a simple Java, Python, or Go program, or any distributed processing framework like Spark Streaming, Storm, Flink, or similar.

There are two types of Kafka consumers:

Low-level Consumer

In the case of low-level consumers, partitions and topics are specified along with the offset from which to read: a fixed position, the beginning, or the end. It can, of course, be cumbersome to keep track of which offsets have been consumed, so that the same records aren’t read more than once.

High-Level Consumer

The high-level consumer (more commonly known as a consumer group) comprises one or more consumers. A consumer group is built by adding the property “group.id” to a consumer; giving the same group id to any new consumer makes it join the same group.

Consumer Group

A consumer group is a mechanism for grouping multiple consumers, where consumers within the same group share the same group id. Data is then divided among the consumers in a group, with no two consumers from the same group receiving the same data.

When you write a Kafka consumer, you add properties like the following:

props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "test");

So consumers with the same group id are part of the same consumer group and will share the data from the Kafka topic. Each consumer reads only from those partitions of the topic that the Kafka cluster itself assigns to it.
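Wired into a complete consumer, that looks roughly like the sketch below (the topic name "my-topic" is hypothetical):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "test");
props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic"));
while (true) {
    // poll() fetches records from the partitions assigned to this consumer
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("partition=%d offset=%d value=%s%n",
                record.partition(), record.offset(), record.value());
    }
}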

Partitions and Consumer Group

What happens when we start a consumer with some consumer group? First, Kafka checks whether a consumer is already running with the same consumer group id.

If it is a new consumer group id, Kafka assigns all the partitions of that topic to this new consumer. If there is more than one consumer with the same group id, Kafka divides the partitions among the available consumers.

If we start a new consumer group with a new group id, Kafka sends the data to that consumer as well. The data is not shared here: two consumers with different group ids get the same data. This is usually done when you have multiple pieces of business logic to run on the data in Kafka.

In the example below, consumer z is a new consumer with a different group id. Here, only a single instance of that consumer is running, so all four partitions will be assigned to it.

When to use the same consumer group?

When you want to increase parallel processing in your consumer application, all the individual consumers should be part of the same group. Consumers that are part of a common group are assigned different partitions, which leads to parallel processing. This is how parallelism is achieved on the consumer side.

When you write a Storm/Spark application, the application uses a consumer group id. When you increase the workers for your application, it adds more consumers and increases parallel processing. But you can only add as many consumers as there are partitions; you can’t have more active consumers than partitions in the same consumer group. Basically, each partition is assigned to exactly one consumer within a group.

When to use different consumer groups?

When you want to run different applications/business logic, the consumers should not be part of the same consumer group. Some consumers update the database, while another set might carry out aggregations and computations on the consumed data. In this case, we should register these consumers with different group ids. They will work independently, and Kafka will manage the data sharing for them.

Offset management

Each message in a partition has a unique index specific to that partition, called the offset. A consumer’s position, i.e. how far it has read, is maintained per consumer group id and partition. Consumers belonging to different groups can resume or pause independently of the other groups, creating no dependency among consumers from different groups.

auto.offset.reset

This property controls the behavior of a consumer whenever it starts reading a partition it doesn’t have a committed offset for, or when the committed offset is invalid (for example, it aged out while the consumer was inactive). The default is “latest”, which means that on restart (or for a new application) the consumer starts reading from the newest Kafka records. The alternative is “earliest”, which means that on restart (or for a new application) the consumer reads all data from the beginning of the Kafka partitions.

enable.auto.commit

This parameter decides whether the consumer commits offsets automatically. The default value is true, which means Kafka commits offsets on its own. If the value is false, the developer decides when to commit offsets back to Kafka; this is essential to minimize duplicates and avoid missing data. If you set enable.auto.commit to true, you might also want to control how frequently offsets are committed, using auto.commit.interval.ms.

Automatic Commit

The easiest way to commit offsets is to let your consumer do it for you. If you set enable.auto.commit to true, then the Kafka consumer commits the largest offset returned by the poll() function every five seconds.

The five-second interval is the default and is controlled by auto.commit.interval.ms.

But if your consumer restarts within those five seconds, chances are you will process some records again. Automatic commits are easy to implement, but they don’t give developers enough flexibility to avoid duplicate messages.

Manual Commit

Developers may want to control when they commit offsets back to Kafka. Typically, an application reads some data from Kafka, does some processing, and saves the data to a database, files, etc. So it wants to commit back to Kafka only when its processing is successful.

When you set enable.auto.commit=false, the application explicitly chooses when to commit offsets to Kafka. The simplest and most reliable of the commit APIs is commitSync(). It commits the latest offset returned by poll() and returns as soon as the offset is committed, throwing an exception in case of a commit failure.

One drawback of the synchronous commit is that the application is blocked until the broker responds to the commit request, which limits the application’s throughput. To avoid this, the asynchronous commit API comes into the picture: instead of waiting for the broker’s response to a commit, we send the request and continue on. The trade-off is that while commitSync() keeps retrying the commit until it either succeeds or encounters a non-retriable failure, commitAsync() will not retry. Sometimes commitSync() and commitAsync() are used together to avoid retry problems when a commit fails, as in the sketch below.
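A sketch of that combination, reusing the consumer from the earlier example (running and process() are hypothetical):

try {
    while (running) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records) {
            process(record); // hypothetical business logic
        }
        consumer.commitAsync(); // fast, non-blocking commit between polls
    }
} finally {
    try {
        consumer.commitSync(); // retried commit on shutdown, before closing
    } finally {
        consumer.close();
    }
}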

Conclusion

We have seen what a consumer is, how consumer groups work, and how we can parallelize consumers by using the same consumer group id. We have also looked at offset management when working with an application.

Setup of FreeRADIUS Server Using Raspberry Pi3

This blog will take you through the basics of 802.1X authentication and the steps to configure FreeRADIUS using a Raspberry Pi. Quite recently, I got the opportunity to work with the FreeRADIUS server for a customer requirement to test their product (access points) against the 802.1X standard. To achieve this, I had to set up my own RADIUS server.

What is 802.1X and How Does it Work?

In a wireless network, 802.1X is used by an access point to authenticate client requests to connect to the Wi-Fi. Whenever a wireless client tries to connect to a WLAN, the client passes user information (username/password) to the access point, and the access point forwards this information to the designated RADIUS server. RADIUS servers receive user connection requests, authenticate the user, and then return the configuration information required for the client to connect to the Wi-Fi.

802.1X authentication comprises three main parts:

1) Supplicant – a client or end-user waiting for authentication.

2) Authentication Server (usually a RADIUS server) – decides whether to accept the end user’s request for full network access.

3) Authenticator – an access point or a switch that sits between the supplicant and the authentication server. It acts as a proxy for the end-user and restricts the end-user’s communication with the authentication server.

To implement 802.1X, we need an external server called a Remote Authentication Dial-In User Service (RADIUS) or Authentication, Authorization, and Accounting (AAA) server, which is used across a variety of network protocols and environments, including ISPs.

It is a client-server protocol that enables remote servers (Network Access Servers, NAS) to communicate with central servers (such as Active Directory) to authenticate and authorize dial-in users (Wi-Fi/wired clients) and grant them access to the requested resources.

It provides security and helps companies maintain a central location for managing client credentials, with easy-to-execute policies that can be applied to a vast range of users from a single administered network point.

It helps companies maintain the privacy and security of the system and individual users. There are many free RADIUS servers available that you can configure on your machine. One of them is FreeRADIUS, a daemon for Unix and Unix-like operating systems which allows one to set up a RADIUS protocol server that can be used for authenticating and accounting various types of network access.

Installation and Configuration of FreeRADIUS Server Using Terminal in Raspberry

Given below are the steps to install FreeRADIUS:

Open a terminal window. To get into the root directory, type the command given below:

sudo su -

You will get into the root.

To start the installation of FreeRADIUS:

apt-get install freeradius  -y

The steps to configure FreeRADIUS:

To add users that need to be authenticated by the server, you need to edit the /etc/freeradius/3.0/users file.

The format is: "user name" Cleartext-Password := "Password"

For example: "John Doe" Cleartext-Password := "hello"

To add Clients (client is the access point IP/Subnet which needs to direct messages to RADIUS server for authentication):

You need to edit /etc/freeradius/3.0/clients.conf.

In the example given below, I am allowing access points having IP in subnet 192.168.0.0/16

# Allow any address that starts with 192.168
client 192.168.0.0/16 {
    secret = helloworld
    shortname = office-network
}

or to allow any device with any IP:

client 0.0.0.0/0 {
    secret = helloworld
    shortname = office-network
}

Quick Steps to Test FreeRADIUS

Now make sure that FreeRADIUS initializes successfully using the following commands. You should see “Info: Ready to process requests” at the end of the initialization process.

#service freeradius stop

# freeradius -XXX

If FreeRADIUS starts with no hassle, you can type Ctrl-C to exit the program and restart it with:

#service freeradius start

There is a command-line tool called radtest that is used to exercise the RADIUS server. Type:

radtest "username" "password" localhost 1812 testing123

For example:

radtest "John Doe" hello localhost 1812 testing123

You should receive a response that says “Access-Accept”.

By using the steps mentioned above, you will be able to set up a FreeRADIUS server. We also covered adding a subnet range whose access requests the server will accept. Please note that if the AP subnet is not configured correctly, the server will still be pingable, but access requests will never reach it. In the current example, we added only one user entry to the users file; however, you can add as many users as you need.

Whenever a wireless client tries to connect to a WLAN, the client passes user information (username/password) to the access point. The access point forwards the info to the FreeRADIUS server, which authenticates the user and returns the configuration information essential for the client to connect to the Wi-Fi. If the credentials don’t match the database on the server, the server sends an ‘Access-Reject’ to the access point and the client’s request is declined.

We can also configure MAC-based authentication on the server, where the server authenticates devices based on a configured list of allowed MAC addresses. If the MAC address matches, the server sends an ‘Access-Accept’ message. If a suspicious machine whose MAC is not configured tries to connect to the network, an ‘Access-Reject’ message is sent.

To configure MAC address authentication on FreeRADIUS, you need to edit the /etc/freeradius/3.0/users file.

To add users, use the command given below:

"user name" Cleartext-Password := "Password"

For MAC authentication, use the same format, but in place of the user name and password write the MAC address of the device to be authenticated by the RADIUS server, all in lowercase and without colons (:).

E.g. "453a345e56ed" Cleartext-Password := "453a345e56ed"

Summary

This can go a long way in helping companies implement security protocols and allow only verified devices to connect to the network. I hope this article helps you with an easy setup of a FreeRADIUS server using a Raspberry Pi 3.

iOS 13 Dark Mode Support

Introduction:

What is the dark mode?

Dark mode is a color scheme that uses light-colored text, icons, and graphical user interface elements on a dark background. It is an inversion of the default color scheme on iOS and other platforms, which is generally black or dark text and icons on a white background.

Dark Mode was introduced in iOS 13 and announced at WWDC 2019. It adds a darker theme to iOS and allows you to do the same for your app. It’s a great addition for your users to experience your app in a darker design.

Benefits of using Dark Mode

Dark mode is ideally suited to low-light environments, where it not only avoids disturbing your surroundings with the light emitted by your phone but also helps prevent eye strain. Whether you’re using Mail, Books, or Safari, the text appears white on a black background, making it easy to read in the dark. Using dark mode can often extend the battery life of your device as well, as less power is needed to light the screen; however, Apple doesn’t explicitly list this as an advantage.

Opt-out and disable Dark Mode

If you wish to opt-out your entire application:

  • If you don’t have the time to add support for dark mode, you can simply disable it by adding the UIUserInterfaceStyle key to your Info.plist and setting it to Light.

<key>UIUserInterfaceStyle</key>
<string>Light</string>

 

  • You can set overrideUserInterfaceStyle on the app’s window variable. Depending on how your project was created, this may be in the AppDelegate or the SceneDelegate file.

if #available(iOS 13.0, *) {
    window?.overrideUserInterfaceStyle = .light
}

If you wish to opt-out your UIViewController on an individual basis:

override func viewDidLoad() {
    super.viewDidLoad()
    // overrideUserInterfaceStyle is available with iOS 13
    if #available(iOS 13.0, *) {
        // Always adopt a light interface style.
        overrideUserInterfaceStyle = .light
    }
}

Overriding Dark Mode per View Controller

  • You can override the user interface style per view controller and set it to light or dark using the following code:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        overrideUserInterfaceStyle = .dark
    }
}

Overriding Dark Mode per view:

  • You can do the same for a single UIView instance:
let view = UIView()
view.overrideUserInterfaceStyle = .dark

Overriding Dark Mode per window:

  • Overriding the user interface style per window can be handy, if you want to disable Dark Mode programmatically:
UIApplication.shared.windows.forEach { window in
    window.overrideUserInterfaceStyle = .dark
}

Please note that we’re making use of the windows array here, as the key window property on the shared UIApplication is deprecated starting from iOS 13. Its use is discouraged because applications can now support multiple scenes that all have an attached window.

Enabling Dark Mode for testing:

If you start implementing a darker appearance in your app, it’s important to have a good way of testing.

Enabling Dark Mode in the Simulator:

Navigate to the Developer page in the Settings app on your simulator and turn on the switch for Dark Appearance:

Enabling Dark Mode on the Simulator

Enabling Dark Mode on a device:

On a device, you can enable Dark Mode by navigating to the Display & Brightness page in the Settings app. However, it’s a lot easier during development to add an option to the Control Centre for easy switching between dark and light mode:

Switching Dark Mode from the debug menu:

While working in Xcode with the simulator open, you might want to use the Environment Override window instead. This allows you to quickly switch appearance during debugging:

The Environment Overrides window allows changing the Interface Style

Enabling Dark Mode in storyboards:

While working on your views inside a Storyboard, it can be useful to set the appearance to dark inside the Storyboard. You can find this option next to the device selection towards the bottom:

Updating the appearance of a Storyboard to dark

Adjusting colors for Dark Mode:

With Dark Mode on iOS 13, Apple also introduced adaptive and semantic colors. These colors adjust automatically based on several influences like being in a modal presentation or not.

Adaptive colors explained:

Adaptive colors automatically adapt to the current appearance. An adaptive color returns a different value for different interface styles and can also be influenced by presentation styles like a modal presentation style in a sheet.

Semantic colors explained:

Semantic colors describe their intentions and are adaptive as well. An example is the label semantic color which should be used for labels. Simple, isn’t it?

When you use them for their intended purpose, they will render correctly for the current appearance. The label example will automatically change the text color to black for light mode and white for dark.

It’s best to explore all available colors and make use of the ones you really need.

Exploring adaptive and semantic colors:

It will be a lot easier to adopt Dark Mode if you’re able to implement semantic and adaptive colors in your project. For this, I would highly recommend the SemanticUI app by Aaron Brethorst which allows you to see an overview of all available colors in both appearances.

The SemanticUI app by Aaron Brethorst helps in exploring Semantic and adaptable colors

Supporting iOS 12 and lower with semantic colors:

As soon as you start using semantic colors, you will realize that they only support iOS 13 and up. To solve this, we can create our own custom UIColor wrapper using the UIColor.init(dynamicProvider: @escaping (UITraitCollection) -> UIColor) initializer. This allows you to return a different color for iOS 12 and lower.

public enum DefaultStyle {
    public enum Colors {
        public static let label: UIColor = {
            if #available(iOS 13.0, *) {
                return UIColor.label
            } else {
                return .black
            }
        }()
    }
}

public let Style = DefaultStyle.self

let label = UILabel()
label.textColor = Style.Colors.label

Another benefit of this approach is that you’ll be able to define your own custom style object. This allows theming but also makes your color usage throughout the app more consistent when forced to use this new style configuration.

Creation of a custom semantic color

A custom semantic color can be created by using the earlier explained UIColor.init(dynamicProvider: @escaping (UITraitCollection) -> UIColor) method.

Oftentimes, your app has its own identity tint color. It could be that this color works great in light mode but not in dark mode. For that, you can return a different color based on the current interface style.

public static var tint: UIColor = {
    if #available(iOS 13, *) {
        return UIColor { (traitCollection: UITraitCollection) -> UIColor in
            if traitCollection.userInterfaceStyle == .dark {
                /// Return the color for Dark Mode
                return Colors.osloGray
            } else {
                /// Return the color for Light Mode
                return Colors.dataRock
            }
        }
    } else {
        /// Return a fallback color for iOS 12 and lower.
        return Colors.dataRock
    }
}()

Updating assets and images for Dark Mode:

The easiest way to do this is by using an Image Asset Catalog. You can add an extra image per appearance.

Adding an extra appearance to an image asset.

Conclusion:

Now, if you have finally decided to adopt iOS 13 dark mode, then here’s a simple checklist to follow:

  • Download and Install Xcode 11.0 or latest
  • Build and Run your app when dark mode is enabled
  • Fix all the errors that you have found
  • Add dark variants to all your properties
  • Adapt Dark Mode one screen at a time:
    • Start from the xib’s files
    • Shift to storyboards
    • Shift to code
    • Repeat all the screens one by one
  • Ensure you set the foreground color key when drawing attributed text
  • Shift all your appearance logic into the “draw time” functions
  • Test your app in both modes, light and dark mode
  • Don’t forget to change your LaunchScreen storyboard

By following the above process, you will be able to implement iOS 13 dark mode in your app with ease and confidence.

Data Synchronization in Real Time: An Evolution

 

HyperText Transfer Protocol (HTTP) is the most widely used application layer protocol in the Open Systems Interconnection (OSI) model. Traditionally, it was built to transfer text or media containing links to other similar resources, between a client that common users could interact with and a server that provided the resources. Clicking on a link usually resulted in the unmounting of the present page from the client and the loading of an entirely new page. Gradually, as the content across pages became repetitive with minute differences, engineers started looking for a solution to update only part of the content instead of loading the entire page.

This was when XMLHttpRequest, or AJAX, was born, which supported the transfer of data in formats like XML or JSON that differed from traditional HTML pages. But all along, HTTP remained a stateless protocol where the onus lay on the client to initiate a request to the server for any data it required.

Real-time data

When exponential growth in the volume of data exchanged on the internet led to applications spanning multiple business use cases, the need arose to fetch this data on a real-time basis rather than waiting for the user to request a page refresh. This is the topic we address here. There are different protocols and solutions available for syncing data between client and server, and for keeping data updated between a third-party server and our own server. We are limiting the scope to real-time synchronization between a client application and a data server.

Without loss of generality, we assume that our server is on a cloud platform, with several instances running behind a load balancer. Without going into the details of how this distributed system maintains a single source of new data, we assume that whenever real-time data occurs, all servers are aware of it and access it from the same source. We will now examine four technologies that solve the real-time data problem: Polling, Long Polling, Server-Sent Events, and WebSockets. We will also compare them in terms of ease of implementation on the client side as well as the server side.

Polling

Polling is a mechanism in which a client application, like a web browser, constantly asks the server for new data, via traditional HTTP requests that pull data using XMLHttpRequest objects. The only difference from a normal page interaction is that we don’t rely on the user to perform any action to trigger the request: we periodically push requests to the server, separated by a certain time window. As soon as any new data is available on the server, the next request is responded to with this data.

Figure 1: Polling

Ease of Implementation on client

  • Easiest implementation
  • Simply set up an interval timer that triggers the XMLHttpRequest, as in the sketch below
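A minimal browser-side sketch (the /latest endpoint and render() function are hypothetical):

setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/latest');
  xhr.onload = function () {
    if (xhr.status === 200) {
      render(JSON.parse(xhr.responseText)); // update the page with new data
    }
  };
  xhr.send();
}, 5000); // with a 5-second window, data is at best 5 seconds fresh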

Ease of Implementation on Server

  • Easiest to implement
  • As soon as the request arrives, provide the new data if available
  • Else send a response indicating null data
  • Typically Server can close a connection after the response
  • Since HTTP 1.1, all connections are by default kept alive until a threshold time or a certain number of requests, and modern browsers behind the scenes multiplex requests among parallel connections to a server

Critical Drawbacks

  • Depending on the interval of requests, the data may not actually be real-time
  • The server must keep new data available for at least as long as the request interval; otherwise we risk some clients never being provided with the data
  • Results in heavy network load on the server
  • When the interval time is reached, the client does not care whether the earlier request has been responded to or not; it simply makes another periodic request
  • It may throttle other client requests as all of the connections that a browser is limited to for a domain may be consumed for polling

Long Polling

As the name suggests, long polling is mostly equivalent to the basic polling described above: it is a client pull of data that makes an HTTP request to the server using an XMLHttpRequest object. The difference is that the client now expects the server to keep the connection alive as long as it has not responded with new data or the TCP connection timeout has not been reached. The client does not initiate a new request until the previous one has been responded to.

Figure 2: Long Polling

Ease of Implementation on Client

  • Still easy to implement
  • The client simply has to provide a Keep-Alive header in the request with a parameter indicating the maximum connection timeout (note that browsers implementing HTTP/1.1 provide this header by default, i.e. the connection is kept alive by default)
  • When the previous request is responded to, initiate a new request (see the sketch after this list)
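
A minimal sketch of the client loop, again assuming a hypothetical /api/updates endpoint that holds the request open until new data arrives or the connection times out:

```javascript
function longPoll() {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/updates'); // hypothetical endpoint
  xhr.timeout = 60000; // allow the server to hold the connection open

  xhr.onload = () => {
    if (xhr.status === 200) {
      console.log('New data:', JSON.parse(xhr.responseText));
    }
    // Only after the previous request completes do we start the next one.
    longPoll();
  };

  // On a timeout or network error, back off briefly and reconnect.
  xhr.ontimeout = xhr.onerror = () => setTimeout(longPoll, 1000);

  xhr.send();
}

longPoll();
```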

Ease of Implementation on Server

  • More difficult to implement than traditional request-response cycle
  • The onus is on the server to keep the connection alive
  • The server has to periodically check for new data while keeping the connection open, which consumes memory and resources
  • It is difficult to estimate how long new data should be kept on the server, because a client's connection may time out at the very moment new data arrives, and that client can then never be provided with the data

Critical Drawbacks

  • If data changes are frequent, this is virtually equivalent to polling, because the client will keep making new requests just as frequently
  • If data changes are not that frequent, this results in lots of connection timeouts
  • So a connection that could have been used for other requests is tied up by a single request for a very long time
  • Caching proxies on the network between the client and the server can serve stale data if a proper Cache-Control header is not provided
  • As mentioned earlier, it is possible that some connections may not be provided with new data at all

Server-Sent Events

Server-Sent Events, or SSE, follows the principle of server push of data rather than the client polling for data. The communication still follows the standard HTTP protocol. A client initiates a request to the server; after the TCP handshake is done, the server informs the client that it will be providing a stream of text data. Both the browser and the server agree to keep the connection alive for as long as possible; in fact, the server never closes the connection on its own. The client can close the connection if it no longer needs new data. Whenever new data occurs on the server, it is pushed down the stream in text format as a new event. If the SSE connection is ever interrupted because of network issues, the browser immediately initiates a new SSE request.

Figure 3: Server-Sent Events

Ease of Implementation on Client

  • Modern browsers provide a JavaScript class called EventSource, which abstracts a lot of the overhead for the client
  • The client simply has to instantiate the EventSource class with the server endpoint
  • It will then receive an event callback whenever the server pushes a stream of text data
  • The EventSource instance itself handles re-establishing an interrupted connection (see the sketch after this list)
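
A minimal sketch of the client side (the /api/stream endpoint is an assumption):

```javascript
// Subscribe to a hypothetical SSE endpoint.
const source = new EventSource('/api/stream');

// Fired for every event the server pushes down the stream.
source.onmessage = (event) => {
  console.log('New data:', event.data);
};

source.onerror = () => {
  // The browser reconnects automatically; this is only for visibility.
  console.log('Connection interrupted, browser will retry...');
};

// Close the stream once new data is no longer needed:
// source.close();
```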

Ease of Implementation on Server

  • In addition to the traditional HTTP response headers, the server must set the Content-Type header to ‘text/event-stream’ and the Connection header to ‘keep-alive’
  • The server has to remember the pool of connections with SSE properties
  • The server has to periodically check for new data, which consumes memory via an asynchronously running thread
  • Since a consistent connection is almost guaranteed by all clients, the server can push new data to all connections in the pool and flush the now stale data immediately (see the sketch after this list)
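
A minimal server-side sketch in Node.js with Express (the framework choice and the endpoint name are assumptions, not prescribed by SSE itself):

```javascript
const express = require('express');
const app = express();

// Pool of open SSE connections the server must remember.
const clients = new Set();

app.get('/api/stream', (req, res) => {
  // The headers that mark this response as an event stream.
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Connection': 'keep-alive',
    'Cache-Control': 'no-cache',
  });
  clients.add(res);

  // Remove failed or closed connections from the pool.
  req.on('close', () => clients.delete(res));
});

// Call this whenever new data occurs: push it to every open connection,
// after which the now stale data can be flushed immediately.
function broadcast(data) {
  for (const res of clients) {
    res.write(`data: ${JSON.stringify(data)}\n\n`);
  }
}

app.listen(3000);
```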

Critical Drawbacks

  • EventSource class is not supported by Internet Explorer
  • The server must ensure to remove failed connections from the SSE pool to optimize resources

WebSockets

Unlike the three technologies above, which follow the HTTP protocol throughout, WebSockets use HTTP only to bootstrap the connection. The client initiates a normal HTTP request to the server but includes a couple of special headers – Connection: Upgrade and Upgrade: websocket. These headers instruct the server to first establish a TCP connection with the client, after which both the server and the client agree to use this now active TCP connection for a protocol that is an upgrade over the plain TCP transport layer. The handshake that happens over this connection follows the WebSocket protocol, and the two sides agree on a payload structure (JSON, XML, MQTT, etc.) that both the browser and the server support, via the Sec-WebSocket-Protocol request and response headers respectively. Once the handshake is complete, the client can push data to the server and the server can push data to the client without waiting for the client to initiate any request. Thus a bi-directional flow of data is established over a single connection.

Figure 4: WebSockets

Ease of Implementation on Client

  • Modern browsers provide a JavaScript class called WebSocket, which abstracts a lot of the overhead for the client
  • The client simply has to instantiate the WebSocket class with the server URL
  • Note that the http scheme in the URL (e.g. http://example.com) must be replaced with the ws scheme (e.g. ws://example.com)
  • Similarly, https must be replaced with wss
  • The WebSocket class provides a connection-closed callback when a connection is interrupted, so the client can initiate a new WebSocket connection
  • The WebSocket class provides a message-received callback whenever the server pushes any data
  • The WebSocket class also provides a method to send data to the server (see the sketch after this list)
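
A minimal client-side sketch (the URL and message shapes are assumptions):

```javascript
// Open a WebSocket connection -- note ws://, not http://.
const socket = new WebSocket('ws://example.com/updates');

socket.onopen = () => {
  // Bi-directional: the client can push data to the server as well.
  socket.send(JSON.stringify({ type: 'subscribe', topic: 'news' }));
};

// Fired whenever the server pushes data, without any client request.
socket.onmessage = (event) => {
  console.log('New data:', event.data);
};

// No automatic reconnection: the client must open a new socket itself.
socket.onclose = () => {
  console.log('Connection closed; schedule a reconnect here.');
};
```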

Ease of Implementation on Server

  • On receiving an HTTP request from the client to upgrade the protocol, the server must respond with the HTTP 101 status code, indicating the switch of protocol to WebSocket
  • The server also returns, via the Sec-WebSocket-Accept response header, a base64-encoded SHA-1 hash of the Sec-WebSocket-Key value that each client provides in its handshake request
  • The response also includes the agreed data format protocol via the Sec-WebSocket-Protocol header (see the sketch after this list)
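
A minimal server-side sketch in Node.js using the popular ws package (an assumption; the 101 response and Sec-WebSocket-Accept hashing described above are handled internally by the library):

```javascript
const WebSocket = require('ws');

// The ws library performs the HTTP 101 upgrade handshake for us.
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  // Receive data pushed by the client.
  socket.on('message', (message) => {
    console.log('Received:', message.toString());
  });

  // Push data to the client without waiting for a request.
  socket.send(JSON.stringify({ hello: 'client' }));
});

// Broadcast new data to every connected client.
function broadcast(data) {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(JSON.stringify(data));
    }
  }
}
```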

Critical Drawbacks

  • Though there are libraries available, like websockify, which make it possible for a server running on a single port to support both the HTTP and WebSocket protocols, it is generally preferred to have a separate server for WebSockets
  • Since WebSockets don't follow HTTP, browsers don't multiplex requests to the server
  • This implies that each WebSocket class instance in the browser opens a new connection to the server, so connecting and reconnecting need to be managed optimally by both the server and the client

Below is a table summarising all the parameters:

|  | Polling | Long Polling | SSE | WebSockets |
| --- | --- | --- | --- | --- |
| Protocol | HTTP | HTTP | HTTP | HTTP upgraded to WebSocket |
| Mechanism | Client pull | Client pull | Server push | Server push |
| Bi-directional | No | No | No | Yes |
| Ease of Implementation on Client | Easy via XMLHttpRequest | Easy via XMLHttpRequest | Manageable via the EventSource interface | Manageable via the WebSocket interface |
| Browser Support | All | All | Not supported in IE; can be overcome with a polyfill library | All |
| Automatic Reconnection | Inherent | Inherent | Yes | No |
| Ease of Implementation on Server | Easy via the traditional HTTP request-response cycle | Logic for holding a connection open per session needed | Standard HTTP endpoint with specific headers and a pool of client connections | Requires effort; mostly needs a separate server |
| Secured Connection | HTTPS | HTTPS | HTTPS | WSS |
| Risk of Network Saturation | Yes | No | No | Browser multiplexing is not supported, so connections must be optimized on both ends |
| Latency | Maximum | Acceptable | Minimal | Minimal |
| Issue of Caching | Yes; needs appropriate Cache-Control headers | Yes; needs appropriate Cache-Control headers | No | No |

Conclusion

Polling and Long Polling are client pull mechanisms that adhere to the standard HTTP request-response protocol. Both are relatively easy to implement on the server and the client, yet both pose a threat of request throttling on the client and the server respectively. Latency is also significant in both implementations, which somewhat defeats the purpose of providing real-time data. Server-Sent Events and WebSockets are better candidates for providing real-time data. If the data flow is unidirectional and only the server needs to provide updates, it is advisable to use SSE, which follows the HTTP protocol. But if both the client and the server need to provide real-time data to each other, as in scenarios like a chat application, it is advisable to go for WebSockets.

React Context API vs Redux

Whenever there is a requirement for state management, the first name that pops into one's head is Redux. With approximately 18M downloads per month, it has been the most prominent and largely unmatched state management tool.

But the newer React Context API is giving Redux healthy competition and trying to replace it.

I will first give a brief explanation of both, and then we can dive deeper into the details.

What is Redux?

Redux is most commonly used to manage the state or data of a React app. It is not limited to React apps; it can be used with Angular and other frameworks as well. But when using React, the most common and obvious choice is Redux.

Redux provides a centralized store (state) that can connect with various React containers/components.

This state is not directly mutable or accessible; to change the state data, we need to dispatch actions, and the reducers then update the data in the centralized state.

What is React’s Context API?

The Context API provides a way to solve a problem you will face in almost all React apps: how to manage state or pass data between components that are not directly connected.

Let's first look at a sample application that uses Redux for state management.

The state is always changed by dispatching an action.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_actions_example_1-jsx 
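
For reference, a representative action file might look like the following sketch (the todo-app names are illustrative, not necessarily those used in the gist):

```jsx
// actions.js -- plain objects describing *what* happened
export const ADD_TODO = 'ADD_TODO';
export const TOGGLE_TODO = 'TOGGLE_TODO';

export const addTodo = (text) => ({
  type: ADD_TODO,
  payload: { text },
});

export const toggleTodo = (id) => ({
  type: TOGGLE_TODO,
  payload: { id },
});
```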

Then a reducer is present to update the global state of the app. Below is a sample reducer.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_reducer_example_2-jsx
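
A minimal reducer along the same lines (illustrative names again):

```jsx
// reducer.js -- a pure function: (state, action) => newState
import { ADD_TODO, TOGGLE_TODO } from './actions';

const initialState = { todos: [] };

export default function todoReducer(state = initialState, action) {
  switch (action.type) {
    case ADD_TODO:
      return {
        ...state,
        todos: [
          ...state.todos,
          { id: Date.now(), text: action.payload.text, done: false },
        ],
      };
    case TOGGLE_TODO:
      return {
        ...state,
        todos: state.todos.map((todo) =>
          todo.id === action.payload.id ? { ...todo, done: !todo.done } : todo
        ),
      };
    default:
      return state;
  }
}
```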

Below would be a sample app.js file.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_app_example_3-jsx
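
Typically, app.js creates the store from the reducer and makes it available to the component tree via react-redux's Provider. A sketch under the same illustrative names:

```jsx
// App.js -- create the store and provide it to the component tree
import React from 'react';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import todoReducer from './reducer';
import TodoList from './TodoList';

const store = createStore(todoReducer);

export default function App() {
  return (
    <Provider store={store}>
      <TodoList />
    </Provider>
  );
}
```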

The last step is to connect the React component to the store; the component subscribes to the global state and automatically receives the updated data as props.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_todo_component_4-jsx
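
A sketch of such a connected component (illustrative names):

```jsx
// TodoList.jsx -- a component connected to the Redux store
import React from 'react';
import { connect } from 'react-redux';
import { addTodo, toggleTodo } from './actions';

const TodoList = ({ todos, addTodo, toggleTodo }) => (
  <div>
    <ul>
      {todos.map((todo) => (
        <li key={todo.id} onClick={() => toggleTodo(todo.id)}>
          {todo.done ? <s>{todo.text}</s> : todo.text}
        </li>
      ))}
    </ul>
    <button onClick={() => addTodo('New task')}>Add</button>
  </div>
);

// Subscribe to the global state; updated data arrives as props.
const mapStateToProps = (state) => ({ todos: state.todos });

export default connect(mapStateToProps, { addTodo, toggleTodo })(TodoList);
```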

This is a basic, minimal implementation of the react-redux setup, but there is quite a lot of boilerplate code to take care of.

Now, let's see how React's Context API works. We will update the same code to use the Context API and remove Redux.

Context API consists of three things:

  • Context Object
  • Context Provider
  • Context Consumer

First of all, we will create a context object.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_create_context_5-js
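
Creating a context is a one-liner (the TodoContext name and default value are illustrative):

```jsx
// TodoContext.js -- create a context object with an optional default value
import React from 'react';

const TodoContext = React.createContext({ todos: [] });

export default TodoContext;
```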

We can create contexts in various forms, either in a separate file or in the component itself, and we can create multiple contexts as well. But what exactly is this context?

Well, a context is just a plain JavaScript object that holds some data (it can hold functions as well).

Now let's provide this newly created context to our app. Ideally, the component that wraps all the child components should provide the context. In our case, we provide the context in the App itself. The value prop set on <TodoContext.Provider> is passed down to all the child components.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_provider_example_6-jsx
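
A sketch of providing the context at the top of the tree (illustrative names):

```jsx
// App.js -- wrap the child components in the Provider
import React from 'react';
import TodoContext from './TodoContext';
import TodoList from './TodoList';

export default function App() {
  // Whatever is passed to `value` becomes visible to every consumer below.
  return (
    <TodoContext.Provider value={{ todos: [] }}>
      <TodoList />
    </TodoContext.Provider>
  );
}
```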

Here is how we can consume our provided context in the child components.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_consumer_example_7-jsx
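
A sketch of a consuming component, using the render-prop pattern that <TodoContext.Consumer> expects (illustrative names):

```jsx
// TodoList.jsx -- read data from the context instead of props
import React from 'react';
import TodoContext from './TodoContext';

const TodoList = () => (
  <TodoContext.Consumer>
    {(context) => (
      <ul>
        {context.todos.map((todo) => (
          <li key={todo.id}>{todo.text}</li>
        ))}
      </ul>
    )}
  </TodoContext.Consumer>
);

export default TodoList;
```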

The special component <TodoContext.Consumer> receives the context that was provided. The context is the same object that was passed to the value prop of <TodoContext.Provider>, so if the value changes there, the context object in the consumer is updated as well.

But how do we update the values? Do we need actions?

So here we can use standard React state management to help us. We can create the state in our App.js file itself and pass the state object to the Provider. The example given below will give you a little more context. 🙂

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_context_management_example_8-jsx-jsx
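
A sketch of this pattern: the state lives in App, and both the data and an updater function are exposed through the Provider's value prop (illustrative names):

```jsx
// App.js -- class-based state exposed through the context
import React from 'react';
import TodoContext from './TodoContext';
import TodoList from './TodoList';

export default class App extends React.Component {
  state = { todos: [] };

  // Passed down via context so any consumer can update the global state.
  addTodo = (text) => {
    this.setState((prev) => ({
      todos: [...prev.todos, { id: Date.now(), text, done: false }],
    }));
  };

  render() {
    return (
      <TodoContext.Provider
        value={{ todos: this.state.todos, addTodo: this.addTodo }}
      >
        <TodoList />
      </TodoContext.Provider>
    );
  }
}
```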

In the above code, we update the state just as we would in a normal class-based React component. We also pass method references in the value prop, so any component consuming the context has access to these functions and can easily update the global state.

So that is how we can achieve global state management using the React Context API instead of Redux.

So, should you get rid of Redux completely?

Let’s look into a little comparison listed down below:

|  | Redux | React Context API |
| --- | --- | --- |
| Learning Curve | Redux is a whole new package that needs to be integrated into an app; it takes some time to learn the basic concepts and the standard code practices needed to have React and Redux working together smoothly. Knowing React certainly helps speed up learning and implementing Redux. | React Context, on the other hand, works on the state principle, which is already part of React; we only need to understand the additions to the API and how to use the providers and consumers. In my opinion, a React developer can get familiar with the concept in a short while. |
| Refactoring Effort | Refactoring code to the Redux API depends on the project itself: a small-scale app can easily be converted in 3 to 4 days, but a big app that needs to be converted can take longer. | – |
| Code Size | When using Redux, the code size of the web app increases quite a bit, since we include several packages just to bind everything together: redux – 7.3 kB, react-redux – 14.4 kB. | The Context API, on the other hand, is baked into the react package, so no additional dependencies are required. |
| Scale | Redux is known for its scaling capabilities; in fact, while building a large-scale app, Redux is the first choice. It provides modularity (separating out reducers and actions) and a well-defined flow that scales easily. | The same cannot be said for the React Context API: everything is managed through React's state, and while we can create a global higher-order component holding the whole app state, the result is not really maintainable and the code is not easy to read. |

In my opinion, a small-scale app can easily adapt to the React Context API. To integrate Redux, we need three to four separate packages, which adds to the final build: a bigger bundle size and a lot more code to process, which increases render times.

On the other hand, the React Context API is built in, and no further package is required to use it.

However, when we talk about large-scale apps, where numerous components and containers are involved, I believe the preferred way to go is Redux, as it provides maintainability and the ability to debug your code. Its various middlewares help to write efficient code, handle async flows, and debug better. We can separate the action dispatchers and reducers in Redux, which gives us an easier, well-defined coding pattern.

A last approach can be to use both, though I have not tried it myself: connect containers with Redux, and if the containers have deep child component trees, pass data down to the children using context objects.