Jenkins Pipeline: Features & Configurations

Jenkins, as we know, is an automation server that helps automate building, testing, and deploying software and implements the CI/CD workflows that nearly every project needs today. It also helps QA engineers either integrate smoke/sanity automation test scripts right after the build flow or run regression suites separately by creating and scheduling Jenkins jobs.

Assuming we are all familiar with the basics of Jenkins, this blog covers one of the Jenkins project types, 'Jenkins Pipeline', and some features and configurations related to it:

  • What is Jenkins Pipeline?
  • Why Jenkins Pipeline?
  • Jenkins Pipeline Implementation
  • Jenkins pipeline with master-slave build execution
  • Blue ocean Jenkins UI
  • Summary

What is Jenkins Pipeline?

Pipelines are Jenkins jobs enabled by the Pipeline plugin and built with simple text scripts that use a Pipeline DSL (domain-specific language) based on the Groovy programming language. Jenkins Pipeline allows us to write Jenkins build steps as code (refer Figure 1).

This is a better alternative to the commonly used freestyle project, where every CI/CD step is configured one by one in the Jenkins UI itself.

Figure 1: Jenkins Project type ‘Pipeline’

Why Jenkins Pipeline?

The traditional approach of maintaining jobs has some limitations like:

  • Managing and maintaining a huge number of jobs is tough
  • Making changes to Jenkins jobs through the UI is very time-consuming
  • Auditing and tracking the jobs is tough

Jenkins Pipeline addresses these limitations: the job steps are automated in a simple Groovy script, and the pipeline file itself becomes part of our codebase in SCM. This helps in the following ways:

  • Automating the job, since the build steps are written as code in a simple Jenkins text file
  • The Jenkinsfile can be part of our source code and checked into a version control system
  • Better audit logging and tracking of changes to the job
  • Deploying a similar job in any other environment is easy, since we don't need to install plugins and set up individual job steps again
  • The execution status of each stage of the job can be visualized clearly

Jenkins pipeline implementation

To write a Jenkins pipeline file in Groovy for our Jenkins job, we need to understand a few terms.

A Jenkinsfile can be written using two types of syntax: Declarative and Scripted. Since Declarative syntax is more recent and is designed to make writing and reading Pipeline code easier, we will use it in our example (refer Figure 2).

  • Pipeline – A Pipeline's code defines your entire build process, which typically includes stages for building an application, testing it, and then delivering it
  • Agent/Node – Defines the Jenkins worker where we want the job to run. If left blank, it is treated as 'any node'
  • Stage – A stage structures your script into a high-level sequence and defines a phase of the job such as build, deploy, or test. Users can define stages based on how they want to divide the job conceptually
  • Step – A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time

Figure 2: Basic Building Blocks Declarative Pipeline (numbering in the image shows execution sequence)
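To make these building blocks concrete, a minimal declarative Jenkinsfile might look like the following sketch (the stage names and echo commands are placeholders, not taken from the original example):

pipeline {
    agent any                  // run on any available node
    stages {
        stage('Build') {
            steps {
                echo 'Building the application'
            }
        }
        stage('Test') {
            steps {
                echo 'Running tests'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying the build'
            }
        }
    }
}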

Let's take the example of executing a Postman collection through the Newman command, as we did in our project and as explained in an earlier blog in this series, 'API automation using Postman – Simplified'.

To execute Postman API tests using Jenkins and store the test results, we need to pull the Postman collection from Git, install dependencies such as Newman via npm, run the Newman command to execute the collection, process the output to fetch the important data, and commit the report back to Git.

All of this can be done in a freestyle project by configuring each stage step by step, and some steps require plugins to be installed first.

But if we want to achieve the same thing using a Jenkins pipeline, we just need to write a simple text file using the syntax shown in Figure 2. Every required set of actions can be added to a different stage of the Jenkinsfile, as shown in Figure 3. Steps can include commands similar to Windows batch commands, Linux shell scripts, etc., wrapped in Groovy syntax. When this pipeline job executes, visualization plugins show the execution details for each stage, such as which stage caused a failure or how long a stage took to complete (refer Figure 4).

Figure 3: Pipeline text file template for one example case of Newman execution of postman collection
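As a rough sketch of what such a Jenkinsfile could look like for the Newman use case described above (the repository URL, file names, and the use of bat steps for a Windows agent are assumptions, not the exact template shown in Figure 3; use sh on a Linux agent):

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/example/postman-tests.git'   // hypothetical repository
            }
        }
        stage('Install dependencies') {
            steps {
                bat 'npm install -g newman newman-reporter-htmlextra'
            }
        }
        stage('Run collection') {
            steps {
                bat 'newman run collection.json -e staging-env.json --reporters cli,htmlextra --reporter-htmlextra-export newman/report.html'
            }
        }
        stage('Publish report') {
            steps {
                archiveArtifacts artifacts: 'newman/report.html'
            }
        }
    }
}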

Figure 4: Pipeline Stage view after execution

Jenkins pipeline with master-slave build execution

If we use distributed execution in Jenkins and have slave nodes (agents) for running Jenkins jobs, and we want to run the pipeline on a specific agent, the agent can be specified as

agent { label 'my-defined-label' } rather than specifying the agent as 'any' in the declarative pipeline.

To understand the flow of Jenkins slave execution better, let's briefly go through how to create a slave node on the master and establish a connection with it.

In your Jenkins master, go to Manage Jenkins -> Manage Nodes, where the master node already exists.

Click on 'New Node' and configure the details (refer Figure 5).

Figure 5: Add a Node

There are two ways to establish a connection:

  1. Master Node connects to slave node over SSH

For this method of establishing a connection between master and slave, we need to configure the slave agent's host IP, port, and private SSH key in the slave configuration modal (refer Figure 6). The private SSH key of the agent needs to be added under the Jenkins credentials manager using the 'Add Credential' option, and it can then be selected from the Credentials dropdown.

Another point to note: if the host key verification strategy is set to 'Known hosts file Verification Strategy', as shown in the image, the user has to add the agent's host key to the known hosts file so that it is verified as a known host.

The master is then able to connect to the slave agent.

Figure 6: Slave Node Configuration using a method where Master connects to Slave (SSH method)

  2. Slave node connects to the master node – also called a connection through JNLP (useful when a slave is behind a firewall)

Setting up the slave node using this method requires some Jenkins settings first:

  • Under Manage Jenkins -> Configure Global Security, specify the TCP port for agents (refer Figure 7)

Figure 7: Setting TCP port

  • Under Manage Jenkins -> Configure System, specify the Jenkins location to which agents can connect (refer Figure 8)

By default, the location is set to localhost; it needs to be modified to the Jenkins server's local IP and TCP port.

Figure 8: Setting Jenkins Location

After the above settings are done, go to Manage Jenkins -> Manage Nodes and create a new node (similar to Figure 5).

In the slave node configuration (refer Figure 9), under Launch method, we can see the option 'Launch agent by connecting it to the master' (in older versions also referred to as 'Launch agent via Java Web Start').

Figure 9: Slave Node Configuration using a method where Slave connects to master (JNLP method)

After providing the 'Remote root directory', click on Save to create the slave node.

For Windows, a direct option to launch the agent is provided: on the slave machine, open the Jenkins server URL and click the Launch option.

For Linux, a command is provided that must be run on the slave machine (refer Figure 10).

Figure 10: Launch options after Slave node creation

Using the above methods, one can set up slave nodes and execute Jenkins jobs. Users can also run Jenkins in a Docker container to achieve the same.

Blue Ocean UI

Blue Ocean is a new frontend for Jenkins, built from the ground up for Jenkins Pipeline. Its modern visual design aims to improve clarity and reduce clutter and navigational depth, making the user experience concise.

To enable Blue Ocean UI, the user first needs to install the Blue Ocean plugin. (Refer to Figure 11)

Figure 11: Installing the Blue Ocean plugin

Once the plugin is installed, the user can see the 'Open Blue Ocean' option in the left pane to switch to the Blue Ocean UI (refer Figure 12). Similarly, while in the Blue Ocean UI, the user sees an option to switch back to the classic UI (refer Figure 13).

Figure 12: 'Open Blue Ocean' option in the classic UI

Figure 13: Option to switch back to the classic UI from Blue Ocean

Blue Ocean UI provides:

  • Sophisticated visualization of the pipeline
  • A pipeline editor
  • Personalization
  • Precision to quickly find what's wrong when intervention is needed
  • Native integration for branches and pull requests

The images below give a high-level idea of what the new pipeline creation view looks like (refer Figure 14) and how pipeline execution is visualized (refer Figure 15) in Blue Ocean.

Figure 14: New Pipeline Creation in Blue Ocean

Figure 15: Pipeline Execution Visualization in Blue Ocean

Pipelines are visualized on screen along with their steps and logs, allowing simplified comprehension of the continuous delivery pipeline, from the simplest to the most sophisticated scenarios. Scrolling through 10,000-line log files is no longer required, as Blue Ocean breaks the log down per step and calls out where your build failed.

Conclusion

Although pipelines have a few limitations in terms of plugin compatibility and the additional effort needed to manage and maintain pipeline scripts when applications and technologies change, their benefits outweigh these drawbacks: pipelines are a fantastic way to model traditional jobs as code, giving us a new view backed by years of traditional CI power. Blue Ocean builds upon the solid foundation of the Jenkins CI server by providing both a cosmetic and functional overhaul for the modern process.

 

Enabling Support for Dark Mode in your Web Application

Introduction to Dark Mode

macOS introduced Dark Mode, wherein dark colors are used instead of light colors for the user interface. Dark mode is now everywhere: Mac, Windows, Android, and now the iPhone and iPad. It is a display option that renders dark surfaces in the UI. With light text on a dark background, dark mode is mostly used to reduce eye strain and blue light exposure in low-light conditions.

The impact of dark mode is felt less in Safari, because almost all websites and web apps built to date are not designed to support it. In Apple's browser, the title bar at the top turns black, but web pages are displayed the same way they are in regular, light mode. All that whiteness and brightness can be jarring against the dark elements of dark mode.

What can we do?

We should build our websites and applications to be compatible with dark mode as well. But first, let's take a look at how we can enable dark mode on Windows, macOS, or iOS devices.

  • Enable Dark Mode in Windows 10: Go to Settings > Personalization > Colors > Choose your default Windows mode > Dark.

  • Enable Dark Mode in macOS (MacBook/Mac mini): Go to System Preferences > General > Appearance > Dark.

  • Enable Dark Mode on iPhone/iPad: Go to Settings > Display & Brightness > Dark.

How can we do that?

There is a new media query based on the theme set in the user's operating system, called prefers-color-scheme. Inside it you can add any CSS you want to apply, just like a regular responsive media query.

Method 1: Apply the media query directly, with (Sass) variables defined for each mode.

“prefers-color-scheme: dark”

To check if this media query works, change your theme preference to ‘dark’.

@media (prefers-color-scheme: dark) {
  body {
    background-color: $dark-mode-background;
    color: $dark-mode-text-color;
  }
}

“prefers-color-scheme: light”

Similarly, for the light theme, we have another media query.

@media (prefers-color-scheme: light) {
  body {
    background-color: $light-mode-background;
    color: $light-mode-text-color;
  }
}

Method 2: We can use CSS custom properties (CSS variables) for changing themes. CSS variables start with two dashes (--) and are case-sensitive.

Since we are changing the theme globally, we can define the CSS variables in the global scope, i.e. on the :root or body selector.

:root {
  --bg: #fff;
}

@media (prefers-color-scheme: dark) {
  :root {
    --bg: #000;
  }
}

/* switch your OS to the dark theme to see the magic of prefers-color-scheme */
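Any element can then consume the variable with var(), so the same rule automatically picks up whichever value is active. A minimal usage sketch:

body {
  background-color: var(--bg);
}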

The CodePen below demonstrates this media query; try toggling your OS dark mode for a better understanding: https://codepen.io/shwetabhagre/pen/yLyNXOd

Do You Use Operating System’s Dark Mode?

Because this is a very new feature, people are often unsure about its usage. We found a survey that helps answer whether people really use dark mode, and when they do and do not like to use it.

Conclusion:

  • As a new feature, dark mode is supported only in updated browsers. For older browser versions, we need to provide traditional CSS as a fallback to avoid broken styling
  • The chart below shows browser support for prefers-color-scheme

Working with Kafka Consumers

What is Kafka

Apache Kafka is an open-source, distributed streaming platform used to publish and subscribe to streams of records. It is a fast, scalable, fault-tolerant, durable, pub-sub messaging system. Kafka is reliable, has high throughput, and has good replication management.

Kafka works with Flume, Spark Streaming, Storm, HBase, and Flink for real-time ingestion, analysis, and processing of streaming data. Kafka data can be unloaded to data lakes such as S3 or Hadoop HDFS. Kafka brokers work with low-latency tools like Spark, Storm, and Flink for real-time data analysis.

Topics and Partitions

All data in Kafka is written to topics. A topic is the name of a category or feed to which records are stored and published. Producers write data to Kafka topics, and consumers read data/messages from Kafka topics. Multiple topics are created in Kafka as per requirements.

Each topic is divided into multiple partitions, which means the messages of a single topic can be spread across several partitions. Each partition can have replicas, which are identical copies of it.

Consumers

A consumer is the process that reads from Kafka. It can be a simple Java, Python, or Go program, or any distributed processing framework such as Spark Streaming, Storm, or Flink.

There are two types of Kafka consumers:

Low-level Consumer

In the case of a low-level consumer, the topic, partition, and offset from which to read are specified explicitly: a fixed position, the beginning, or the end. It can, of course, be cumbersome to keep track of which offsets have been consumed so that the same records aren't read more than once.

High-Level Consumer

The high-level consumer (more commonly known as a consumer group) comprises one or more consumers. A consumer group is formed by adding the property "group.id" to a consumer. Giving the same group id to any new consumer allows it to join the same group.

Consumer Group

A consumer group is a mechanism for grouping multiple consumers, where consumers within the same group share the same group id. Data is then divided equally among the consumers in a group, with no two consumers from the same group receiving the same data.

When you write a Kafka consumer, you add properties like the ones below:

props.setProperty("bootstrap.servers", "localhost:9092");

props.setProperty("group.id", "test");

Consumers with the same group id are part of the same consumer group and will share the data from the Kafka topic. Each consumer reads only from those partitions of the topic that the Kafka cluster assigns to it.
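To make this concrete, below is a minimal sketch of a Java consumer joining the "test" group (the topic name and the String deserializers are illustrative assumptions, not from the original article):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleGroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "test");   // consumers sharing this id form one group
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));   // hypothetical topic name

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}

Running a second instance of this program with the same group.id splits the topic's partitions between the two instances.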

Partitions and Consumer Group

What happens when we start a consumer with some consumer group? First, Kafka checks whether a consumer is already running with the same consumer group id.

If it is a new consumer group ID, it will assign all the partitions of that topic to this new consumer. If there is more than one consumer with the same group ID, Kafka will divide partitions among available consumers.

If we write a new consumer with a new group id, Kafka sends the data to that consumer as well. The data is not divided in this case: two consumers with different group ids will each get the same data. This is usually done when you have multiple pieces of business logic to run on the data in Kafka.

In the example below, consumer Z is a new consumer with a different group id. Here, only a single instance of that consumer is running, so all four partitions will be assigned to it.

When to use the same consumer group?

When you want to increase parallel processing in your consumer application, all the individual consumers should be part of the same group. Consumers belonging to a common group are assigned different partitions, which is how parallel processing is achieved.

When you write a Storm/Spark application, the application uses a consumer group id. When you increase the number of workers for your application, it adds more consumers and increases parallel processing. But you can only add as many consumers as there are partitions; you can't have more consumers than partitions in the same consumer group, because each partition is assigned to exactly one consumer within a group.

When to use different consumer groups?

When you want to run different applications/business logic, the consumers should not be part of the same consumer group. Some consumers might update a database, while another set of consumers might carry out aggregations and computations on the consumed data. In this case, we should register these consumers with different group ids. They will work independently, and Kafka will manage data delivery to each of them.

Offset management

Each message in a partition has a unique index specific to that partition, called the offset. The position up to which a consumer has read is tracked per consumer group id and partition. Consumers belonging to different groups can resume or pause independently of the other groups, so there is no dependency among consumers from different groups.

auto.offset.reset

This property controls the behavior of a consumer when it starts reading a partition for which it has no committed offset, or when the committed offset is invalid (for example, it has aged out because the consumer was inactive). The default is "latest", which means a restarted or new application will start reading from the newest Kafka records. The alternative is "earliest", which means a restarted or new application will read all data from the beginning of the Kafka partitions.

enable.auto.commit

This parameter decides whether the consumer commits offsets automatically or not. The default value is true, which means Kafka commits offsets on its own. If the value is false, the developer decides when to commit offsets back to Kafka; this is essential for minimizing duplicates and avoiding missing data. If you set it to true, you may also want to control how frequently offsets are committed using auto.commit.interval.ms.

Automatic Commit

The easiest way to commit offsets is to let the consumer do it for you. If enable.auto.commit is set to true, the Kafka consumer commits the largest offset returned by poll() every five seconds.

Five seconds is the default interval and is controlled by auto.commit.interval.ms.

But if your consumer restarts before the next commit, there is a chance you will process some records again. Automatic commits are easy to implement, but they don't give developers enough flexibility to avoid duplicate messages.

Manual Commit

Sometimes developers want to control when the offset is committed back to Kafka. Typically, applications read data from Kafka, do some processing, and save the results to a database, files, etc., so they want to commit back to Kafka only when their processing succeeds.

When you set enable.auto.commit=false, the application explicitly chooses when to commit offsets to Kafka. The simplest and most reliable of the commit APIs is commitSync(). This API commits the latest offset returned by poll() and returns once the offset is committed, throwing an exception if the commit fails.

One drawback of synchronous manual commit is that the application is blocked until the broker responds to the commit request, which limits the application's throughput. To avoid this, the asynchronous commit API comes into the picture: instead of waiting for the broker's response, we send the request and continue. The trade-off is that while commitSync() keeps retrying the commit until it succeeds or hits a non-retriable failure, commitAsync() does not retry. Sometimes commitSync() and commitAsync() are used together to balance throughput and reliability when a commit fails.
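A minimal sketch of manual offset management, assuming the same consumer setup as the earlier example with enable.auto.commit set to false (process() is a hypothetical placeholder for your business logic):

props.setProperty("enable.auto.commit", "false");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic"));   // hypothetical topic name

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        process(record);        // e.g. save the result to a database
    }
    consumer.commitAsync();     // non-blocking commit on the happy path
}
// On shutdown, a blocking commit ensures the last offsets are not lost:
// try { consumer.commitSync(); } finally { consumer.close(); }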

Conclusion

We have covered what a consumer is, how consumer groups work, and how we can parallelize consumers by using the same consumer group id. We have also looked at how to manage offsets when working with an application.

Setup of FreeRADIUS Server Using Raspberry Pi3

This blog will take you through the basics of 802.1X authentication and the steps to configure FreeRADIUS on a Raspberry Pi. Quite recently, I got the opportunity to work with the FreeRADIUS server for a customer requirement to test their product (access points) against the 802.1X standard. To achieve this, I had to set up my own RADIUS server.

What is 802.1X and How Does it Work?

In a wireless network, 802.1X is used by an access point to authenticate a client's request to connect to the Wi-Fi. Whenever a wireless client tries to connect to a WLAN, it passes user information (username/password) to the access point, and the access point forwards this information to the designated RADIUS server. The RADIUS server receives the connection request, authenticates the user, and then returns the configuration information required for the client to connect to the Wi-Fi.

802.1X authentication comprises three main parts:

1) Supplicant – the client or end user waiting to be authenticated.

2) Authentication Server (usually a RADIUS server) – the server that decides whether to accept the end user's request for full network access.

3) Authenticator – an access point or a switch that sits between the supplicant and the authentication server. It acts as a proxy for the end user and restricts the end user's communication with the authentication server.

To implement 802.1X, we need an external server called a Remote Authentication Dial-In User Service (RADIUS) or Authentication, Authorization, and Accounting (AAA) server, which is used in a variety of network protocols and environments, including ISPs.

It is a client-server protocol that enables remote servers (Network Access Servers, NAS) to communicate with central servers (such as Active Directory) to authenticate and authorize dial-in users (Wi-Fi/wired clients) and provide them access to the requested resources.

It provides security and helps companies maintain a central location for managing client credentials, with easy-to-apply policies that can cover a vast range of users from a single administered network point.

It helps companies maintain the privacy and security of the system and of individual users. There are many RADIUS servers available for free which you can configure on your own machine. One of them is FreeRADIUS, a daemon for Unix and Unix-like operating systems that allows you to set up a RADIUS protocol server, which can be used for authentication and accounting of various types of network access.

Installation and Configuration of FreeRADIUS Server Using the Terminal on a Raspberry Pi

Given below are the steps to install FreeRADIUS:

Open a terminal window. To get into the root directory, type the command given below:

sudo su -

You will be logged in as root.

To start the installation of FreeRADIUS:

apt-get install freeradius  -y

The steps to configure FreeRADIUS:

To add users that need to be authenticated by the server, you need to edit the /etc/freeradius/3.0/users file.

The syntax is: "user name" Cleartext-Password := "Password"

For example, "John Doe" Cleartext-Password := "hello"

To add clients (a client is the access point IP/subnet that needs to send messages to the RADIUS server for authentication):

You need to edit /etc/freeradius/3.0/clients.conf.

In the example given below, I am allowing access points with IPs in the subnet 192.168.0.0/16:

# Allow any address that starts with 192.168

client 192.168.0.0/16 {

secret = helloworld

shortname = office-network

}

or to allow any device with any IP:

client 0.0.0.0/0 {

secret = helloworld

shortname = office-network

}

Quick Steps to Test FreeRADIUS

Now make sure that FreeRADIUS initializes successfully using the following commands. You should see “Info: Ready to process requests” at the end of the initialization process.

#service freeradius stop

# freeradius -XXX

If FreeRADIUS starts without a hassle, you can type Ctrl-C to exit the program and restart it with:

#service freeradius start

There is a command-line tool called radtest that is used to exercise the RADIUS server. Type:

radtest "username" "password" localhost 1812 testing123

Example,

radtest "John Doe" hello localhost 1812 testing123

You should receive a response that says “Access-Accept”.

By using the steps mentioned above, you will be able to set up a FreeRADIUS server. We also covered adding a subnet range that is allowed to send access requests to the server. Please note that if the AP subnet is not configured correctly, the server will still be pingable, but access requests will never reach the server. In the current example, we added only one user entry in the users file; however, you can add as many users as needed.

Whenever a wireless client tries to connect to a WLAN, the client will pass user information (username/password) to access points. Then, the access points forward info to the FreeRADIUS server, which then authenticates the users and returns configuration information essential for the client to connect to WiFi. In cases wherein the credentials don’t match the database created on the server, the server sends across ‘Access-Reject’ to the access point and the client’s request is declined.

We can also configure MAC-based authentication on the server, where the server authenticates the user based on a configured list of allowed MAC addresses. If the MAC address matches, the server sends an 'Access-Accept' message. If a suspicious machine whose MAC is not configured tries to connect to the network, an 'Access-Reject' message is sent.

To configure MAC address authentication in FreeRADIUS, you need to edit the /etc/freeradius/3.0/users file.

To add users, use the command given below:

"user name" Cleartext-Password := "Password"

For MAC authentication, in the same entry you write the MAC address of the device that you want the RADIUS server to authenticate, all in lowercase and without colons (:), in place of both the user name and the password.

E.g. "453a345e56ed" Cleartext-Password := "453a345e56ed"

Summary

This can go a long way in helping companies implement security protocols and only allow verified devices to connect to the network. I hope this article helps you with the easy setup of FreeRADIUS Server Using Raspberry Pi3.

IOS – 13 Dark Mode Support

Introduction:

What is the dark mode?

Dark mode is a color scheme that uses light-colored text, icons, and graphical user interface elements on a dark background. It is an inversion of the default color scheme on iOS and in most apps, which is generally black or dark text and icons on a white background.

Dark Mode was introduced in iOS 13 and announced at WWDC 2019. It adds a darker theme to iOS and allows you to do the same for your app. It’s a great addition for your users to experience your app in a darker design.

Benefits of using Dark Mode

Dark Mode is ideally suited to low-light environments, where it not only avoids disturbing your surroundings with the light emitted by your phone but also helps prevent eye strain. Whether you're using Mail, Books, or Safari, the text will appear white on a black background, making it easy to read in the dark. Using dark mode can often extend the battery life of your device as well, as less power is needed to light the screen. However, Apple doesn't explicitly list this as an advantage.

Opt-out and disable Dark Mode

If you wish to opt-out your entire application:

  • If you don't have the time to add support for dark mode, you can simply disable it by adding the UIUserInterfaceStyle key to your Info.plist and setting it to Light.
<key>UIUserInterfaceStyle</key> <string>Light</string>

 

  • You can set overrideUserInterfaceStyle against the app’s window variable. Based on how your project was created, this may be in the AppDelegate file or the SceneDelegate.
if #available(iOS 13.0, *) {
window?.overrideUserInterfaceStyle = .light
}

If you wish to opt-out your UIViewController on an individual basis:

override func viewDidLoad() {
super.viewDidLoad()
// overrideUserInterfaceStyle is available with iOS 13
if #available(iOS 13.0, *) {
// Always adopt a light interface style.
overrideUserInterfaceStyle = .light
}
}

Overriding Dark Mode per View Controller

  • You can override the user interface style per view controller and set it to light or dark using the following code:
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
overrideUserInterfaceStyle = .dark
}
}

Overriding Dark Mode per view:

  • You can do the same for a single UIView instance:
let view = UIView()
view.overrideUserInterfaceStyle = .dark

Overriding Dark Mode per window:

  • Overriding the user interface style per window can be handy if you want to set the appearance for every window programmatically (use .light here to opt out of Dark Mode):
UIApplication.shared.windows.forEach { window in
window.overrideUserInterfaceStyle = .dark
}

Please note that we’re making use of the windows array here as the key window property on the shared UIApplication is deprecated starting from iOS 13. It’s discouraged to use it because applications can now support multiple scenes that all have an attached window.

Enabling Dark Mode for testing:

If you start implementing a darker appearance in your app, it’s important to have a good way of testing.

Enabling Dark Mode in the Simulator:

Navigate to the Developer page in the Settings app on your simulator and turn on the switch for Dark Appearance:

Enabling Dark Mode on the Simulator

Enabling Dark Mode on a device:

On a device, you can enable Dark Mode by navigating to the Display & Brightness page in the Settings app. However, it’s a lot easier during development to add an option to the Control Centre for easy switching between dark and light mode:

Switching Dark Mode from the debug menu:

While working in Xcode with the simulator open, you might want to use the Environment Override window instead. This allows you to quickly switch appearance during debugging:

The Environment Overrides window allows changing the Interface Style

Enabling Dark Mode in storyboards:

While working on your views inside a Storyboard, it can be useful to set the appearance to dark inside the Storyboard. You can find this option next to the device selection towards the bottom:

Updating the appearance of a Storyboard to dark

Adjusting colors for Dark Mode:

With Dark Mode on iOS 13, Apple also introduced adaptive and semantic colors. These colors adjust automatically based on several influences like being in a modal presentation or not.

Adaptive colors explained:

Adaptive colors automatically adapt to the current appearance. An adaptive color returns a different value for different interface styles and can also be influenced by presentation styles like a modal presentation style in a sheet.

Semantic colors explained:

Semantic colors describe their intentions and are adaptive as well. An example is the label semantic color which should be used for labels. Simple, isn’t it?

When you use them for their intended purpose, they will render correctly for the current appearance. The label example will automatically change the text color to black for light mode and white for dark.

It’s best to explore all available colors and make use of the ones you really need.

Exploring adaptive and semantic colors:

It will be a lot easier to adopt Dark Mode if you’re able to implement semantic and adaptive colors in your project. For this, I would highly recommend the SemanticUI app by Aaron Brethorst which allows you to see an overview of all available colors in both appearances.

The SemanticUI app by Aaron Brethorst helps in exploring Semantic and adaptable colors

Supporting iOS 12 and lower with semantic colors:

As soon as you start using semantic colors, you will realize that they only support iOS 13 and up. To solve this, we can create our own custom UIColor wrapper by making use of the UIColor.init(dynamicProvider: @escaping (UITraitCollection) -> UIColor) method. This allows you to return a different color for iOS 12 and lower.

public enum DefaultStyle {

public enum Colors {

public static let label: UIColor = {
if #available(iOS 13.0, *) {
return UIColor.label
} else {
return .black
}
}()
}
}

public let Style = DefaultStyle.self

let label = UILabel()
label.textColor = Style.Colors.label

Another benefit of this approach is that you’ll be able to define your own custom style object. This allows theming but also makes your color usage throughout the app more consistent when forced to use this new style configuration.

Creation of a custom semantic color

A custom semantic color can be created by using the earlier explained UIColor.init(dynamicProvider: @escaping (UITraitCollection) -> UIColor) method.

Oftentimes, your app has its own identity tint color. It could be that this color works great in light mode but not so well in dark mode. For that, you can return a different color based on the current interface style.

public static var tint: UIColor = {
if #available(iOS 13, *) {
return UIColor { (UITraitCollection: UITraitCollection) -> UIColor in
if UITraitCollection.userInterfaceStyle == .dark {
/// Return the color for Dark Mode
return Colors.osloGray
} else {
/// Return the color for Light Mode
return Colors.dataRock
}
}
} else {
/// Return a fallback color for iOS 12 and lower.
return Colors.dataRock
}
}()

Updating assets and images for Dark Mode:

The easiest way to do this is by using an Image Asset Catalog. You can add an extra image per appearance.

Adding an extra appearance to an image asset.

Conclusion:

Now, if you have finally decided to adopt iOS 13 dark mode, then here’s a simple checklist to follow:

  • Download and install Xcode 11.0 or later
  • Build and Run your app when dark mode is enabled
  • Fix all the errors that you have found
  • Add dark variants to all your properties
  • Adapt Dark Mode one screen at a time:
    • Start from the xib’s files
    • Shift to storyboards
    • Shift to code
    • Repeat all the screens one by one
  • Ensure to set the foreground key while drawing attributed text
  • Shift all your appearance logic into the "draw time" functions
  • Test your app in both modes, light and dark mode
  • Don’t forget to change your LaunchScreen storyboard

By following the above process, you will be able to implement iOS 13 dark mode in your app with ease and confidence.

Data Synchronization in Real Time: An Evolution

 

HyperText Transfer Protocol (HTTP) is the most widely used application layer protocol in the Open Systems Interconnection (OSI) model. Traditionally, it was built to transfer text or media containing links to other similar resources, between a client that common users could interact with and a server that provided the resources. Clicking on a link usually resulted in the unmounting of the present page from the client and the loading of an entirely different page. Gradually, as the content across pages became repetitive with only minute differences, engineers started looking for a way to update only part of the content instead of reloading the entire page.

This was when XMLHttpRequest (AJAX) was born, which supported the transfer of data in formats like XML or JSON rather than full HTML pages. But throughout, HTTP remained a stateless protocol where the onus lay on the client to initiate a request to the server for any data it required.

Real-time data

When exponential growth in the volume of data exchanged on the internet led to applications spanning multiple business use cases, the need arose to fetch this data on a real-time basis rather than waiting for the user to request a page refresh. This is the topic we are trying to address here. There are different protocols and solutions available for syncing data between client and server, and for keeping data updated between a third-party server and our own server. We are limiting the scope to real-time synchronization between a client application and a data server.

Without loss of generality, we assume that our server is on a cloud platform, with several instances of the server running behind a load balancer. Without going into the details of how this distributed system maintains a single source of new data, we assume that whenever real-time data occurs, all servers are aware of it and access it from the same source. We will now examine four technologies that solve the real-time data problem, namely Polling, Long Polling, Server-Sent Events, and WebSockets, and compare them in terms of ease of implementation on the client side as well as the server side.

Polling

Polling is a mechanism in which a client application, like a web browser, constantly asks the server for new data. These are traditional HTTP requests that pull data from the server via XMLHttpRequest objects. The only difference is that we don't rely on the user to perform any action to trigger the request; we periodically send requests to the server, separated by a certain time window. As soon as new data is available on the server, the next request is answered with this data.

Figure 1: Polling

Ease of Implementation on client

  • Easiest implementation
  • Simply set up an interval timer that triggers the XMLHttpRequest
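For illustration, a minimal polling sketch in browser JavaScript might look like this (the /api/updates endpoint, the 5-second interval, and the render() helper are hypothetical):

setInterval(() => {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/updates');
  xhr.onload = () => {
    if (xhr.status === 200 && xhr.responseText) {
      render(JSON.parse(xhr.responseText));   // hypothetical UI update function
    }
  };
  xhr.send();
}, 5000);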

Ease of Implementation on Server

  • Easiest to implement
  • As soon as the request arrives, provide the new data if available
  • Else send a response indicating null data
  • Typically Server can close a connection after the response
  • Since HTTP 1.1, all connections are by default kept alive until a threshold time or a certain number of requests, and modern browsers transparently multiplex requests among parallel connections to a server

Critical Drawbacks

  • Depending on the interval of requests, the data may not actually be real-time
  • The server must keep the new data available for at least as long as the request interval, or else some clients risk never receiving it
  • Results in heavy network load on the server
  • When the interval time is reached, it does not care whether the request made earlier has been responded to or not; it simply makes another periodic request
  • It may throttle other client requests as all of the connections that a browser is limited to for a domain may be consumed for polling

Long Polling

As the name suggests, long polling is mostly equivalent to the basic polling described above: it is a client pull of data and makes an HTTP request to the server using the XMLHttpRequest object. The only difference is that it expects the server to keep the connection alive until it either responds with new data or the TCP connection timeout is reached. The client does not initiate a new request until the previous request has been responded to.

Figure 2: Long Polling

Ease of Implementation on client

  • Still easy to implement
  • The client simply has to provide a Keep-Alive header in the request with a parameter indicating the maximum connection timeout (note that modern browsers implementing HTTP 1.1 provide this header by default, i.e. the connection is kept alive by default)
  • When the previous request is responded to, initiate a new request (see the sketch below)
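A minimal long-polling sketch under the same assumptions (hypothetical /api/updates endpoint and render() helper); note that a new request is issued only after the previous one completes:

function longPoll() {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/updates');
  xhr.onload = () => {
    if (xhr.status === 200 && xhr.responseText) {
      render(JSON.parse(xhr.responseText));   // hypothetical UI update function
    }
    longPoll();                               // immediately issue the next request
  };
  xhr.onerror = () => setTimeout(longPoll, 1000);   // back off briefly on network errors
  xhr.send();
}
longPoll();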

Ease of Implementation on Server

  • More difficult to implement than traditional request-response cycle
  • The onus is on the server to keep the connection alive
  • The server has to periodically check for new data while keeping the connection open which results in the consumption of memory and resources
  • It is difficult to estimate how long the new data should be kept on the server, because a client's connection may have timed out at the very moment the new data arrives, and that client then cannot be provided with the data

Critical Drawbacks

  • If data changes are frequent, this is virtually equivalent to polling, because the client will keep making requests very frequently too
  • If data changes are not that frequent, this results in lots of connection timeouts
  • So a connection that could have been used for other requests is tied up by a single request for a very long time
  • Caching servers over the network between client and server can result in providing stale data if proper Cache-Control header is not provided
  • As mentioned earlier, it is possible that some connections may not be provided with new data at all

Server-Sent Events

Server-Sent Events (SSE) follow the principle of a server push of data rather than the client polling for data. The communication still follows the standard HTTP protocol. The client initiates a request to the server; after the TCP handshake is done, the server informs the client that it will be providing streams of text data, and both the browser and the server agree to keep the connection alive for as long as possible. The server in fact never closes the connection on its own; the client can close the connection if it no longer needs new data. Whenever new data occurs on the server, it is pushed on the stream in text format as a new event. If the SSE connection is ever interrupted because of network issues, the browser immediately initiates a new SSE request.

Figure 3: Server-Sent Events

Ease of Implementation on client

  • Modern browsers provide a JavaScript class called EventSource which abstracts away a lot of the overhead for the client
  • The client simply has to instantiate the EventSource class with the server endpoint
  • It will then receive an event callback whenever a stream of text data is pushed by the server
  • The EventSource instance itself handles re-establishing an interrupted connection
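A minimal client sketch using EventSource (the /events endpoint and render() helper are hypothetical):

const source = new EventSource('/events');
source.onmessage = (event) => {
  render(JSON.parse(event.data));   // hypothetical UI update with the pushed payload
};
source.onerror = () => {
  console.log('SSE connection interrupted; the browser will reconnect automatically');
};
// Call source.close() when real-time updates are no longer needed.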

Ease of Implementation on Server

  • Over the traditional HTTP response headers, the server must provide Content-Type header as ‘text/event-stream’ and Connection header as ‘Keep-Alive’
  • Each server has to remember the pool of connections with SSE properties
  • The server has to periodically check for new data which results in the consumption of memory via an asynchronously running thread
  • Since a consistent connection is almost guaranteed by all clients, the server can push new data to all connections from the pool and flush the now stale data immediately

Critical Drawbacks

  • EventSource class is not supported by Internet Explorer
  • The server must ensure to remove failed connections from the SSE pool to optimize resources

WebSockets

Unlike the three technologies above, which follow the HTTP protocol, WebSockets can be described as something built on top of HTTP. The client initiates a normal HTTP request to the server but includes a couple of special headers: Connection: Upgrade and Upgrade: websocket. These headers instruct the server to first establish a TCP connection with the client. Then both the server and the client agree to use this now-active TCP connection for a protocol that is an upgrade over the TCP transport layer. The handshake that now happens over this active TCP connection follows the WebSocket protocol, and the two sides agree on a payload format (JSON, XML, MQTT, etc.) that both the browser and the server can support, via the Sec-WebSocket-Protocol request and response headers respectively. Once the handshake is complete, the client can push data to the server and the server can push data to the client without waiting for the client to initiate any request. Thus a bi-directional flow of data is established over a single connection.

Figure 4: WebSockets

Ease of Implementation on client

  • The modern browser provides a JavaScript class called WebSocket which abstracts a lot of overhead functionality for client
  • The client simply has to instantiate the WebSocket class with server URL
  • Note that the http scheme in the URL (e.g. http://example.com) must be replaced with the ws scheme (e.g. ws://example.com)
  • Similarly, https must be replaced with wss
  • WebSocket class provides a connection closed callback when a connection is interrupted and hence the client can initialize a new WebSockets request
  • WebSocket class provides a message received callback whenever the server pushes any data
  • WebSocket class also provides a method to send data to the server
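A minimal client sketch using the WebSocket interface (the URL, message shape, and render() helper are hypothetical):

const socket = new WebSocket('wss://example.com/socket');
socket.onopen = () => {
  socket.send(JSON.stringify({ type: 'subscribe', channel: 'updates' }));   // hypothetical client-to-server message
};
socket.onmessage = (event) => {
  render(JSON.parse(event.data));   // hypothetical UI update with server-pushed data
};
socket.onclose = () => {
  console.log('WebSocket closed');   // no automatic reconnection: re-instantiate here if needed
};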

Ease of Implementation on Server

  • On receiving an HTTP request from the client to upgrade the protocol, the server must provide HTTP 101 status code indicating the switch of the protocol to Web Socket
  • The server also returns a base64-encoded SHA-1 hash derived from the secure WebSocket key provided by each client in the handshake request, via the Sec-WebSocket-Accept response header
  • The response also includes the agreed data format protocol via the Sec-WebSocket-Protocol header

Critical Drawbacks

  • Though there are libraries available like websockify which make it possible for the server running on a single port to support both HTTP and WebSocket protocol, it is generally preferred to have a separate server for WebSockets
  • Since WebSockets don’t follow HTTP, browsers don’t provide multiplexing of requests to the server
  • This implies that each WebSocket class instance from the browser will open a new connection to the server and hence connecting and reconnecting need to be maintained optimally by both the servers and the client

Below is a table summarising all the parameters:

| Parameter | Polling | Long Polling | SSE | WebSockets |
|---|---|---|---|---|
| Protocol | HTTP | HTTP | HTTP | HTTP upgraded to WebSocket |
| Mechanism | Client pull | Client pull | Server push | Server push |
| Bi-directional | No | No | No | Yes |
| Ease of implementation on client | Easy via XMLHttpRequest | Easy via XMLHttpRequest | Manageable via the EventSource interface | Manageable via the WebSocket interface |
| Browser support | All | All | Not supported in IE; can be overcome with a polyfill library | All |
| Automatic reconnection | Inherent | Inherent | Yes | No |
| Ease of implementation on server | Easy via the traditional HTTP request-response cycle | Logic for holding the connection for a session is needed | Standard HTTP endpoint with specific headers and a pool of client connections | Requires effort and usually a separate server |
| Secured connection | HTTPS | HTTPS | HTTPS | WSS |
| Risk of network saturation | Yes | No | No | Browser multiplexing is not supported, so connections must be optimized on both ends |
| Latency | Maximum | Acceptable | Minimal | Minimal |
| Issue of caching | Yes, needs appropriate Cache-Control headers | Yes, needs appropriate Cache-Control headers | No | No |

Conclusion

Polling and Long Polling are client pull mechanisms that adhere to the standard HTTP request-response protocol. Both are relatively easy to implement on the server and the client, yet both pose the threat of request throttling on the client and server respectively. Latency is also noticeable in both implementations, which somewhat contradicts the purpose of providing real-time data. Server-Sent Events and WebSockets are better candidates for providing real-time data. If the data flow is unidirectional and only the server needs to provide updates, it is advisable to use SSE, which follows the HTTP protocol. But if both the client and the server need to provide real-time data to each other, as in scenarios like a chat application, it is advisable to go for WebSockets.

React Context API vs Redux

Whenever there is a requirement for state management, the first name that pops into your head is Redux. With approximately 18M downloads per month, it has been the most obvious and, for a long time, unmatched state management tool.

But the new React Context API is giving Redux healthy competition and, in some cases, can replace it.

I will first give a brief explanation of both, and then we can dive deep into the details.

What is Redux?

Redux is most commonly used to manage the state or data of a React app. It is not limited to React apps; it can be used with Angular and other frameworks as well. But when using React, the most common and obvious choice is Redux.

Redux provides a centralized store (state) that can connect with various React containers/components.

This state is not directly mutable or accessible; to change the state data, we need to dispatch actions, and then the reducers update the data in the centralized store.

What is React’s Context API?

The Context API provides a way to solve a simple problem that you will face in almost all React apps: how to manage state or pass data to components that are not directly connected.

Let’s first see an example of a sample application with redux used for state management.

The state is always changed by dispatching an action.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_actions_example_1-jsx 

Then a reducer is present to update the global state of the app. Below is a sample reducer.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_reducer_example_2-jsx

Below would be a sample app.js file.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_app_example_3-jsx

The last step is to connect the React component to the store, which subscribes it to the global state and automatically updates the data passed as props.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_todo_component_4-jsx

This is a fundamental and trivial implementation of the react-redux setup. But there is a lot of boilerplate code that needs to be taken care of.

Now, let's see how React's Context API works. We will update the same code to use the Context API and remove Redux.

Context API consists of three things:

  • Context Object
  • Context Provider
  • Context Consumer

First of all, we will create a context object.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_create_context_5-js

We can create contexts in various ways: either in a separate file or in the component itself. We can create multiple contexts as well. But what is this context?

Well, a context is just a plain object that holds some data (it can hold functions as well).

Now let's provide this newly created context to our app. Ideally, the component that wraps all the child components should provide the context. In our case, we are providing the context in our App itself. The value prop set on <TodoContext.Provider> here is passed down to all the child components.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_provider_example_6-jsx

Here is how we can consume our provided context in the child components.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_consumer_example_7-jsx

The special component <TodoContext.Consumer> receives the provided context. The context is the same object that is passed to the value prop of <TodoContext.Provider>, so if the value changes there, the context object in the consumer is updated as well.

But how do we update the values? Do we need actions?

So here we can use the standard React State management to help us. We can create the state in our App.js file itself and pass the state object to the Provider. The example given below would provide you with a little more context. 🙂

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_context_management_example_8-jsx-jsx

In the above code, we are updating the state just as we would in any class-based React component. We are also passing methods as references to the value prop, so any component consuming the context has access to these functions and can easily update the global state.
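Pulling the pieces together, here is a condensed sketch of this wiring (the TodoContext name and the todos shape are illustrative and only roughly mirror the linked gists):

import React, { Component, createContext } from 'react';

export const TodoContext = createContext({ todos: [], addTodo: () => {} });

class App extends Component {
  state = {
    todos: [],
    addTodo: (todo) => this.setState((prev) => ({ todos: [...prev.todos, todo] })),
  };

  render() {
    return (
      <TodoContext.Provider value={this.state}>
        <TodoList />
      </TodoContext.Provider>
    );
  }
}

const TodoList = () => (
  <TodoContext.Consumer>
    {({ todos, addTodo }) => (
      <div>
        {todos.map((todo) => <p key={todo}>{todo}</p>)}
        <button onClick={() => addTodo('A new todo')}>Add</button>
      </div>
    )}
  </TodoContext.Consumer>
);

export default App;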

So that is how we can achieve global state management using the React Context API instead of using redux.

So should you get rid of redux completely?

Let’s look into a little comparison listed down below:

Learning Curve

  • Redux: Redux is a whole new package that needs to be integrated into an app; it takes some time to learn the basic concepts and the standard code practices we need to follow to have React and Redux working together smoothly. Knowing React certainly helps speed up learning and implementing Redux.
  • React Context API: React Context, on the other hand, works on the state principle that is already a part of React; we only need to understand the additions to the API and how to use providers and consumers. In my opinion, a React developer can get familiar with the concept in a short while.

Refactoring Effort

  • Refactoring the code to the Redux API depends on the project itself: a small-scale app can easily be converted in three to four days, but a big app can take considerably longer.

Code Size

  • Redux: When using Redux, the code size of the web app increases quite a bit, as we include several packages just to bind everything together (redux – 7.3 kB, react-redux – 14.4 kB).
  • React Context API: The Context API is baked into the React package, so no additional dependencies are required.

Scale

  • Redux: Redux is known for its scaling capabilities; in fact, while building a large-scale app, Redux is the first choice. It provides modularity (separating out reducers and actions) and a proper flow that can easily be scaled.
  • React Context API: The same cannot be said for the Context API: everything is managed by React's state, and while we can create a global higher-order component that contains the whole app state, this is not really maintainable and does not make for easy-to-read code.

In my opinion, a small-scale app can easily adopt the React Context API. To integrate Redux, we need three or four separate packages, which adds to the final build a bigger bundle size and a lot more code to process, increasing render times.

On the other hand, React context API is built-in, and no further package is required to use it.

However, for large-scale apps with numerous components and containers involved, I believe the preferred way to go is Redux, as it provides maintainability and the ability to debug your code. Its various middlewares help to write efficient code, handle async logic, and debug better. We can separate the action dispatchers and reducers in Redux, which gives us an easier, well-defined coding pattern.

A final approach can be to use both, though I have not tried it: connect containers with Redux, and if the containers have deep child component trees, pass the data down to the children using context objects.

API test automation using Postman simplified: Part 2

In the previous blog of this series, we talked about the deciding factors for tool selection, the POC, suite creation, and suite testing.

Moving one step further, we'll now talk about the next steps: command-line execution of Postman collections, integration with a CI tool, monitoring, etc.

I have structured this blog into the following phases:

  • Command-line execution of postman collection
  • Integration with Jenkins and Report generation
  • Monitoring

Command-line execution of postman collection

Postman has a command-line interface called Newman. Newman makes it easy to run a collection of tests right from the command line. This easily enables running Postman tests on systems that don’t have a GUI, but it also gives us the ability to run a collection of tests written in Postman right from within most build tools. Jenkins, for example, allows you to execute commands within the build job itself, with the job either passing or failing depending on the test results.

The easiest way to install Newman is via NPM. If you have Node.js installed, you most likely have NPM installed as well.

$ npm install -g newman

A sample Windows batch command to run a Postman collection for a given environment:

newman run https://www.getpostman.com/collections/b3809277c54561718f1a -e Staging-Environment-SLE-API-Automation.postman_environment.json --reporters cli,htmlextra --reporter-htmlextra-export "newman/report.html" --disable-unicode -x

The above command uses the cloud URL of the collection under test. If you don't want to use the cloud version, you can export the collection as a JSON file and pass its path in the command, as in the sample below. The generated report file clearly shows the passed/failed/skipped tests along with requests, responses and other useful information. In the post-build actions, we can add a step to email the report to the intended recipients.
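
A hedged sample using a locally exported collection file (the collection filename here is only a placeholder) could look like:

newman run SLE-API-Automation.postman_collection.json -e Staging-Environment-SLE-API-Automation.postman_environment.json --reporters cli,htmlextra --reporter-htmlextra-export "newman/report.html"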

Integration with Jenkins and Report generation

Scheduling and executing a Postman collection through Jenkins is a pretty easy job. First, install the necessary plugins.

E.g. We installed the following plugins:

  • NodeJS – For Newman
  • Email Extension – For sending emails
  • S3 Publisher – For storing the report files in an AWS S3 bucket

Once you have all the required plugins, you just need to create a job and do the necessary configuration for:

  • Build triggers – For scheduling the job (time and frequency); a sample schedule follows this list
  • Build – The command to execute the Postman collection
  • Post-build actions – Like storing the reports at the required location, sending emails, etc.
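
For example, assuming the job should run once every morning, the Build periodically field under Build triggers can take a cron-style schedule like the one below (the hour chosen is purely illustrative; H lets Jenkins pick a stable minute within that hour to spread load):

H 6 * * *

The Build section then simply contains the Newman command shown earlier, and the post-build actions store and email the generated report.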

If you notice, the Newman test execution report generated after the Jenkins build looks something like the one shown in Figure 1:

Figure 1: Report in plain format due to Jenkins’s default security policy

This is due to one of the security features of Jenkins: it sends Content-Security-Policy (CSP) headers which describe how certain resources are allowed to behave. The default policy blocks pretty much everything: no JavaScript, no inline CSS, and not even CSS from external websites. This can cause problems with content added to Jenkins via build processes, such as HTML reports published by plugins. Thus, with the default policy, our report renders as plain text, as shown in Figure 1.

Therefore, we need to modify the CSP to see the visually appealing version of the Newman report. While turning this policy off completely is not recommended, it can be beneficial to relax it a little, allowing the use of external reports without compromising security. After making the change, our report looks like the one shown in Figure 2:

Figure 2: Properly formatted Newman report after modifying Jenkins’s Content Security Policy

One way to achieve this, when Jenkins is running as a Windows service, is to edit the jenkins.xml file located in your main Jenkins installation directory and permanently change the Content Security Policy. Simply add the new argument to the arguments element, as shown in Figure 3 (an illustrative example is given after the figure caption), save the file and restart Jenkins.

Figure 3: Sample snippet of Jenkins.xml showing modified argument for relaxing Content Security Policy
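
For illustration only, the added argument typically sets the hudson.model.DirectoryBrowserSupport.CSP system property; the property name is Jenkins's standard CSP switch, while the exact policy value below is just an assumed example and should be kept as restrictive as your report allows:

-Dhudson.model.DirectoryBrowserSupport.CSP="default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline';"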

Monitoring

As we saw in the Jenkins integration, we fixed the job frequency using the Jenkins scheduler, which means our build runs only at particular times of the day. This solution works for us for now, but what if stakeholders should be informed only when there is a failure that needs attention, rather than spamming everyone with regular mails even when everything passes?

One of the best ways is to integrate the framework with the code repository management system, trigger the automation whenever a new code change related to the feature is pushed, and send a report mail only when the automation script detects a failure.

Postman provides a better solution in the form of monitors, which let you stay up to date on the health and performance of your APIs. We have not used this utility, since we are on the free version and it has a limit of 1,000 free monitoring calls every month. You can create a monitor by navigating to New -> Monitor in Postman (refer to Figure 4).

Postman monitors are based on collections. Monitors can be scheduled as frequently as every five minutes and will run through each request in your collection, similar to the collection runner. You can also attach a corresponding environment with variables you’d like to utilize during the collection run.

The value of monitors lies in your test scripts. When running your collection, a monitor uses your tests to validate the responses it receives. When one of these tests fails, you can automatically receive an email notification or configure the available integrations to receive alerts in tools like Slack, PagerDuty or HipChat.

Figure 4: Adding Monitor in Postman

Here we come to the end of this blog, having discussed all the phases of end-to-end Postman usage in terms of what we explored and implemented in our project. I hope this information helps in setting up an automation framework with Postman in other projects as well.

API test automation using Postman simplified: Part 1

Every application you build today relies on APIs. This means it's crucial to thoroughly verify APIs before rolling out your product to the client or end users. Although multiple tools for automating API testing are available and known to QAs, we have to decide on the tool that best suits our project requirements, can scale, and can run without much maintenance or upgrades while creating a test suite. And once the tool is finalized, we have to design an end-to-end flow with it that is easy to use, so that we get the most benefit out of our automation efforts.

I am writing this two-blog series to talk about the end-to-end flow of API automation with Postman, from deciding on the tool and implementing the suite to integration with a CI tool and report generation. Thus, the content of these blogs is entirely based on our experience and learning while setting it up in our project.

In the first blog of the series, we'll talk about the phases up to implementation, while in the next blog we'll discuss integration with Jenkins, monitoring, etc.

Thus I have structured this blog into the following phases:

  • Doing POC and Deciding on the tool depending on its suitability to meet project requirement
  • Checking tool’s scalability
  • Suite creation with basic components
  • Testing the Suite using Mock servers

Before moving on to these topics in detail, a brief note for those who are new to Postman: Postman is one of the most renowned tools for testing APIs and is commonly used by developers and testers. It allows for repeatable, reliable tests that can be automated and used in a variety of environments like Dev, Staging and Production. It presents a friendly GUI for constructing requests and reading responses, making it easy for anyone to get started without prior knowledge of any scripting language, since Postman also has a feature called 'Snippets'. By default these are in JavaScript, but you can use them to generate code snippets in a variety of languages and frameworks such as Java, Python, C, cURL and many others.

Let’s now move to each phase one by one wherein I’ll talk about our project specific criteria and examples in detail.

Doing POC and Deciding on the tool

A few months back, when we came up with a plan to automate the API tests of our application, the first question in mind was: which tool will best suit our requirements?

The team was mostly familiar, at least briefly, with Postman, JMeter, REST Assured and FitNesse for API automation. The main criteria for selecting the tool were an open-source option that helps us get started quickly with the test automation task, is easy to use, gives nice and detailed reporting, and is easy to integrate with a CI tool.

We could quickly create a POC of the complete end-to-end flow using Postman, and a similar POC in JMeter for comparison. Postman came out as the better option in terms of reporting and user-friendliness, since it does not require much scripting knowledge, and hence anyone in the team can pitch in anytime and contribute to the automation effort.

Checking tool’s scalability

Now that we liked the tool and wanted to go ahead with it to build a complete automation suite, the next set of questions on our mind was related to the limitations and scalability of the Postman free version.

This was important to evaluate first and foremost, before starting the actual automation effort, as we wanted to avoid unnecessary rework. Thus we started finding answers to our questions. While we could find a few answers through web searches, for some of the clarifications we had to reach out to Postman customer support to be doubly sure about the availability and limitations of the tool.

As a gist, it is important to know that if you are using postman free version then:

  • While using a personal workspace there is no upper limit on the number of collections, variables, environments, assertions and collection runs, but if you want to use a shared/team workspace then there is a limit of 25 requests.
  • If you are using Postman's API for any purpose (for example to add/update collections, update environments, or add and run monitors), then a limit of 1,000 requests and a rate limit of 60 apply.
  • Postman's execution performance does not really depend on the number of requests; it mainly depends on how heavy the computations performed in the scripts are.

This helped us understand whether the free version would suffice for our requirements. Since we were not planning to use the Postman APIs or monitoring services, we were good to go ahead with the free version.

Suite creation with basic components

Creating an automation suite with Postman requires an understanding of the following building blocks (refer to Figure 1):

  • Collections & Folders: Postman collections are groups of saved requests that you can organize into folders. This helps in achieving nice, readable hierarchies of requests.
  • Global/Environment variables: An environment is a set of key-value pairs. It lets you customize requests using variables so you can easily switch between different setups without changing your requests. Global variables allow you to access data across collections, requests, test scripts and environments, while environment variables have a narrower scope and apply only to the selected environment. For instance, we have multiple test environments like Integration, Staging and Production, so we can run the same collection in all three environments without any change to the collection, just by maintaining three environments with environment-specific values for the same keys.
  • Authentication options: APIs use authorization to ensure that client requests access data securely. Postman is equipped with various authorization methods, from simple Basic Auth to the special AWS signature, OAuth and NTLM authentication.
  • Pre-request scripts: Snippets of code associated with a request that are executed before the request is sent. Common use cases are generating values and injecting them into requests through environment variables, converting data types/formats before passing them to the test script, etc.
  • Tests: Scripts written in JavaScript that are executed after a response is received. Tests can run as part of a single request or with a collection of requests.
  • Postman's built-in JS snippets for creating assertions: Postman allows us to write JavaScript code that asserts on the response and automatically checks it. We can use the Snippets feature of the Tests tab to write assertions; a short hedged example follows this list.
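
For instance, a couple of simple assertions written in the Tests tab could look like the snippet below; the pm.* calls are Postman's standard scripting API, while the 'id' field is just a placeholder for whatever your response actually returns:

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response contains a non-null id", function () {
    var jsonData = pm.response.json();
    pm.expect(jsonData.id).to.not.eql(null); // 'id' is a placeholder field
});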

Figure 1: Basic Building Blocks of Postman

Testing the Suite using Mock servers

Using the base framework that we created during the POC, we were able to extend it by adding multiple requests and multiple tests around each request's response, giving us a full-fledged automation suite ready and running daily.

In our case, the first problem statement to be addressed in API automation was a set of APIs for a reporting module.

Since the report contains dynamic data, and generating a fixed set of test data is also very tough due to multiple environmental factors, it was not possible for us to apply fixed assertions to validate data accuracy. That's why we had to come up with other ways to test that don't exactly match the correctness of the data, but are still thorough enough to check the validity of the data and report actual failures.

While doing this exercise, what we followed, and what turned out to be really beneficial for us, was to clearly list down in detail exactly what we wanted to assert before starting to write the tests in the tool itself.

For simple APIs with static responses, this list might be pretty straightforward to define. But in our example, it required a good amount of brainstorming to come up with a list of assertions that could actually check the validity of the response without knowing the data values themselves.

So we thoroughly studied the API responses, came up with our PASS/FAIL criteria, listed down each assertion in our own words in our plan, and then went ahead with converting them into actual Postman assertions (a hedged sketch of a couple of these follows the list). For example:

-> Response Code 200 OK
-> Schema Validation
-> Not Null check for applicable values
-> Exact value check for a set of values
-> Match Request start/end time with response start/end time
-> Range validation for a set of values (between 0-1)
-> Data Validation Logic: Detailed logic in terms of response objects/data with if/else criteria for defined PASS/FAIL cases (details removed)
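
As a hedged sketch, a couple of the checks above could translate into Postman tests roughly as follows; the 'score' field and the schema's required keys are placeholders, and tv4 is the JSON schema validator available in the Postman sandbox:

// Range validation for a placeholder 'score' field (between 0 and 1)
pm.test("Score lies between 0 and 1", function () {
    var jsonData = pm.response.json();
    pm.expect(jsonData.score).to.be.within(0, 1);
});

// Schema validation with placeholder required keys
var schema = { "type": "object", "required": ["id", "startTime", "endTime"] };
pm.test("Response matches the expected schema", function () {
    pm.expect(tv4.validate(pm.response.json(), schema)).to.be.true;
});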

As we can see in the above list, we have a number of positive and negative tests covered. But even with such assertions in place in Postman, we can't say they will work when such a response is actually generated until we test them thoroughly, and if we only run the collection against actual environment responses, we might never see each type of failed response.

Thus, to test this we need a way to mock the API request and response: something very similar to the actual response but with some values modified to invalid ones, so we can check whether our script and assertions catch them as failures. This is possible in Postman through mock servers. You can add a mock server for a new or existing collection by navigating to New -> Mock Server in Postman (refer to Figure 2).

A Postman mock server lets you mock a server response, allowing a team to develop or write tests against a service that is not yet complete or is unstable. Instead of hitting the actual endpoint URL, the request is made to the request path specified in the mock server, and the corresponding mocked test responses are returned, so we can see how our script behaves for such requests and responses. Thus, during actual execution against live endpoints, if similar scenarios occur, we already know how our script is going to handle them. A small sketch of this setup follows the figure.

Figure 2: Adding mocked request/response using Mock Server in Postman
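
One lightweight way to wire this up, sketched here with placeholder URLs and paths based on keeping the base URL in an environment variable, is to point that variable at the mock server so the very same requests and tests run against the mocked responses:

base_url (actual environment)  = https://api.example.com
base_url (mock environment)    = https://<your-mock-id>.mock.pstmn.io
Request in the collection      = GET {{base_url}}/reports/summary

Switching environments then switches between live and mocked responses without touching the collection itself.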

Now, once we have our test suite ready with the required cases added and tested, we are good to start scheduling it to run daily against our test environments so that it checks the API health and reports failures.

In our next blog we will discuss these phases in detail. Stay tuned.