Setup of FreeRADIUS Server Using Raspberry Pi 3

This blog will take you through the basics of 802.1X authentication and the steps to configure FreeRADIUS on a Raspberry Pi. Quite recently, I got the opportunity to work with a FreeRADIUS server for a customer requirement: testing their product (access points) against the 802.1X standard. To achieve this, I had to set up my own RADIUS server.

What is 802.1X and How Does it Work?

In a wireless network, 802.1X is used by an access point to authenticate a client's request to connect to the Wi-Fi. Whenever a wireless client tries to connect to a WLAN, the client passes user information (username/password) to the access point, and the access point forwards this information to the designated RADIUS server. The RADIUS server receives the connection request, authenticates the user, and then returns the configuration information required for the client to connect to the Wi-Fi.

802.1X authentication comprises three main parts:

1) Supplicant – the client or end user waiting to be authenticated.

2) Authentication Server – usually a RADIUS server; it decides whether to accept the end user's request for full network access.

3) Authenticator – an access point or a switch that sits between the supplicant and the authentication server. It acts as a proxy for the end user and restricts the end user's communication with the authentication server.

To implement 802.1X, we need an external server called a Remote Authentication Dial-In User Service (RADIUS) or Authentication, Authorization, and Accounting (AAA) server, which is used in a variety of network protocols and environments, including ISPs.

RADIUS is a client-server protocol that enables remote servers (Network Access Servers, or NAS) to communicate with central servers (e.g. Active Directory) to authenticate and authorize dial-in users (Wi-Fi/wired clients) and grant them access to the requested resources.

It provides security and helps companies maintain a central location for managing client credentials, with easy-to-apply policies that can cover a vast range of users from a single administered point in the network.

It helps companies maintain the privacy and security of the system and of individual users. There are many RADIUS servers available for free that you can configure on your machine. One of them is FreeRADIUS, a daemon for Unix and Unix-like operating systems that lets you set up a RADIUS protocol server, which can be used for authentication and accounting of various types of network access.

Installation and Configuration of FreeRADIUS Server Using the Terminal on a Raspberry Pi

Given below are the steps to install FreeRADIUS:

Open a terminal window. To switch to the root user, type the command given below:

sudo su -

You will now be running as root.

To start the installation of FreeRADIUS:

apt-get install freeradius -y

The steps to configure FreeRADIUS:

To add users that need to be authenticated by the server, you need to edit the /etc/freeradius/3.0/users file.

The format is: "username" Cleartext-Password := "password"

For example: "John Doe" Cleartext-Password := "hello"

To add clients (a client is the access point IP/subnet that needs to direct messages to the RADIUS server for authentication):

You need to edit /etc/freeradius/3.0/clients.conf.

In the example given below, I am allowing access points having an IP in the 192.168.0.0/16 subnet:

# Allow any address that starts with 192.168
client 192.168.0.0/16 {
    secret = helloworld
    shortname = office-network
}


or to allow any device with any IP:

client 0.0.0.0/0 {
    secret = helloworld
    shortname = office-network
}


Quick Steps to Test FreeRADIUS

Now make sure that FreeRADIUS initializes successfully using the following commands. You should see “Info: Ready to process requests” at the end of the initialization process.

service freeradius stop

freeradius -X

If FreeRADIUS starts with no hassle, you can type Ctrl-C to exit the program and restart it with:

service freeradius start

There is a command-line tool called radtest that is used to exercise the RADIUS server. Type:

radtest "username" "password" localhost 1812 testing123

For example:

radtest "John Doe" hello localhost 1812 testing123

You should receive a response that says “Access-Accept”.

By using the steps mentioned above, you will be able to set up a FreeRADIUS server. We also covered how to add a subnet range from which the server will accept access requests. Please note that if the AP subnet is not entered correctly, the server will still be pingable, but access requests will never reach it. In the current example, we added only one user to the users file; however, you can add as many users as you need.

Whenever a wireless client tries to connect to a WLAN, the client passes user information (username/password) to the access point. The access point forwards this information to the FreeRADIUS server, which authenticates the user and returns the configuration information essential for the client to connect to the Wi-Fi. If the credentials don't match the database created on the server, the server sends an 'Access-Reject' to the access point and the client's request is declined.

We can also configure MAC-based authentication on the server, where the server authenticates a device against a configured list of allowed MAC addresses. If the MAC address matches, the server sends an 'Access-Accept'. If a suspicious machine whose MAC is not configured tries to connect to the network, an 'Access-Reject' is sent.

To configure MAC address authentication on FreeRADIUS, you need to edit the /etc/freeradius/3.0/users file.

To add users, use the format given below:

"username" Cleartext-Password := "password"

For MAC authentication, use the same format, but in place of the username and password write the MAC address of the device you want the RADIUS server to authenticate, all in lowercase and without colons (:).

For example: "453a345e56ed" Cleartext-Password := "453a345e56ed"
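If you are adding many devices, it helps to normalize MAC addresses into this lowercase, colon-free form first. A minimal sketch (the `normalizeMac` helper is hypothetical, not part of FreeRADIUS):

```typescript
// Hypothetical helper: normalize a MAC address such as "45:3A:34:5E:56:ED" or
// "45-3a-34-5e-56-ed" into the lowercase, separator-free form that the users
// file entry above expects.
function normalizeMac(mac: string): string {
  const normalized = mac.replace(/[:\-]/g, "").toLowerCase();
  if (!/^[0-9a-f]{12}$/.test(normalized)) {
    throw new Error(`Invalid MAC address: ${mac}`);
  }
  return normalized;
}

console.log(normalizeMac("45:3A:34:5E:56:ED")); // "453a345e56ed"
```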


This can go a long way in helping companies implement security protocols and only allow verified devices to connect to the network. I hope this article helps you with the easy setup of a FreeRADIUS server using a Raspberry Pi 3.

Distributed Transactions are Not Microservices

(A quick note for the readers: this is purely an opinion-based article distilled out of my experiences.)

I’ve been a part of many architecture discussions, reviews, and implementations, and have shipped many microservices-based systems to production. I pretty much agree with Martin Fowler’s ‘Monolith First’ approach. However, I’ve seen many people go in the opposite direction and justify the premature optimization, which can lead to an unstable and chaotic system.

It’s very important to understand that if you are building microservices just for the purpose of distributed transactions, you’re going to land in serious trouble.

What is a Distributed System?

Let’s go with an example. In an e-commerce app, the order flow in the monolithic version is a single in-process transaction against a single database.

In the microservice version, the same flow is split up: the transaction is divided into two separate transactions by two services, and the atomicity now needs to be managed by the API controller.

You need to avoid distributed transactions while building microservices. If you’re spreading your transactions across multiple microservices, or calling multiple REST APIs or PUB/SUB flows for something that could easily be done in-process with a single service and a single database, then there’s a high chance you’re doing it the wrong way.
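To make the atomicity problem concrete, here is a minimal sketch under assumed names (`createOrder`, `chargePayment`, and `cancelOrder` are hypothetical stand-ins for calls to two separate services):

```typescript
// The API controller must now hand-roll the atomicity that a single database
// transaction used to provide for free.
type Result = { ok: boolean };

async function createOrder(orderId: string): Promise<Result> { return { ok: true }; }
async function chargePayment(orderId: string): Promise<Result> { return { ok: false }; } // simulate a failure
async function cancelOrder(orderId: string): Promise<Result> { return { ok: true }; }

async function placeOrder(orderId: string): Promise<string> {
  const order = await createOrder(orderId);     // transaction 1 (order service)
  if (!order.ok) return "order-failed";

  const payment = await chargePayment(orderId); // transaction 2 (payment service)
  if (!payment.ok) {
    // No shared transaction to roll back: the controller must compensate
    // manually, and this compensating call can itself fail or time out.
    await cancelOrder(orderId);
    return "rolled-back";
  }
  return "placed";
}

placeOrder("o-1").then(console.log); // "rolled-back" in this simulated failure
```

A single in-process transaction would give you this rollback for free; here the controller owns it, and every new failure mode is yours to test.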

Challenges in Using Microservices to Implement Distributed Transactions

  1. Chaotic testing, as compared to in-process transactions. It’s really hard to stabilize features written in a distributed fashion, as you must test not only the happy cases but also cases like a service being down, timeouts, and error handling of REST APIs.
  2. Unstable and intermittent bugs, which you will start seeing in production.
  3. Sequencing: in the real world everyone needs some kind of sequencing when it comes to transactions, but it’s not easy to stabilize a system that is asynchronous (like Node.js) and distributed.
  4. Performance, which is a big one and a by-product of premature optimization. Initially your transactions might not handle big JSONs, but they might appear later. In-process, the same memory is accessible to subsequent code and transactions; in the microservice world, where a transaction is distributed, it can be painful (now every microservice will load the data, serialize and deserialize it, or make the same large DB calls multiple times).
  5. Refactoring: every time you make design-level changes, you will end up reintroducing problems 1-3, which pushes the engineering team into a mode of “resistance to change”.
  6. Slow features: the whole idea behind microservices is to “build and deploy features independently and fast”, but now you may need to build, test, stabilize, and deploy a bunch of services, and that slows everything down.
  7. Unoptimized hardware utilization: there is a high chance that most of the hardware will be under-utilized, so you might start shipping many services in the same container or the same VMs, resulting in high I/O. If some big request suddenly comes into the system, it can become over-utilized, which will make you separate that component out, leaving the system under-utilized again if those requests stop coming. Now there is a team handling an infinite oscillation that could have been avoided.

Do’s & Don’ts for Building Microservices from Scratch

  1. Don’t think of microservices as an exercise similar to refactoring code into different directories. If some code files seem logically separate, it’s always a good idea to separate them into one package; however, creating a microservice out of them is nothing but premature optimization.
  2. If you need to call REST APIs to complete a request, think twice about it (I would rather recommend avoiding it completely). The same goes for messaging-based systems: before creating new producers and consumers, try not to have them at all.
  3. Always focus on the different user experiences and their diverse scaling requirements. For example, in e-commerce, vendor APIs are bulky and transactional compared to consumer APIs; this is a good way of identifying components.
  4. Avoid integration tests (yes, you heard that right). If you create 10 services and write hundreds of integration tests, you’re creating a chaotic situation altogether. Instead, start with 2-4 services, write hundreds of unit tests, and write 5 integration tests; I’m sure you won’t regret it later.
  5. Consider batch processing, as this design tends to perform well and be less chaotic. For instance, in e-commerce you may have products in both the vendor and consumer databases. Instead of writing distributed transactions to create new products in both DBs, you can write only to the vendor DB and run a batch process to pick up, say, 100 new products and insert them into the consumer DB.
  6. Consider setting up an auditor (or creating your own), so that when an atomic operation fails you can easily debug and fix it instead of digging through different databases. If you wish to reduce your late-night intermittent bug fixes, set this up early and use it everywhere.
  7. I would recommend not forcing synchronization. I have seen many people try to use it as a way to stabilize the ecosystem, but it introduces new problems (like timeouts) rather than fixing them. In the end, services should remain scalable.
  8. Don’t partition your database early. If possible every microservice should have its own database, but not all of them need databases. Create the persistent microservices first, and then use them inside other microservices. If most or all of your microservices are connecting to databases, that’s a design smell; scale the persistent microservices horizontally with more instances instead.
  9. Don’t create a new git repository for every new microservice. First create well unit-tested core components, reuse (don’t copy) them in higher-level components, and from a single repository you may be able to spawn many microservices. Every time you need the same code in another repository, don’t copy it; move it to a core component, write a quick unit test, and reuse it in all microservices.
  10. Async programming: this can be a real problem if transactions are not written with proper sequence handling. A fire-and-forget call may have no visible impact under normal conditions, but under regression testing or heavy load those fire-and-forget calls might never execute, leading to inconsistent scenarios.

In the example above, the developer thought the call to the sendOTP service didn’t need to be synchronized and did a classic “fire and forget”. Under normal testing and low load the OTP will always be sent, but under heavy load sendOTP sometimes won’t get a chance to execute.
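The pitfall can be sketched like this (Node.js-style TypeScript; `sendOTP` is a hypothetical stand-in for the real OTP call):

```typescript
// Sketch of the fire-and-forget pitfall: work left dangling behind the
// response is not guaranteed to run before the process moves on.
let otpSent = false;

async function sendOTP(user: string): Promise<void> {
  await new Promise(resolve => setTimeout(resolve, 10)); // simulate a slow SMS-gateway call
  otpSent = true;
}

async function loginFireAndForget(user: string): Promise<string> {
  sendOTP(user); // classic "fire and forget": the promise is never awaited
  return "logged-in"; // we respond before the OTP is guaranteed to be sent
}

async function loginAwaited(user: string): Promise<string> {
  await sendOTP(user); // properly sequenced: the OTP is sent before we respond
  return "logged-in";
}
```

In the first version the response races the OTP; under load or during shutdown the dangling promise can simply never complete.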

Microservices Out of Monolithic: A Cheatsheet

  1. Points 1-5 above still apply.
  2. Forget the big bang: you have a stable production system (though it might not be scalable) and still have to use 50-70% of the existing system in the new one.
  3. Start collecting data and figuring out the pain points in the system, like tables, non-scalable APIs, performance bottlenecks, intermittent performance issues, and load-testing results.
  4. Make a call between scaling by adding hardware vs. optimization; there is cost involved in both cases, and you’ll have to decide which is lower. Many a time it’s easier to add more nodes to solve a problem (optimizing the system involves development and testing cost, which might be way higher than just adding nodes).
  5. Consider using the incremental approach. For example, let’s say I’ve an ecommerce app that is monolith (vendor and consumer both), and I come to know that we will be scaling with more new vendors in the coming six months. The first intuition would be to re-architect, however, in case of incremental approach you will determine that your biggest request hit will be from consumer side and product search. The product catalogue will need to be refactored, so you will not change anything in the existing app and it will work as it is for all vendors APIs and consumer transactions. Only for the new problems you will be creating another microservice and another db, replicate the data using batch processing from primary DB, and redirect all search and product catalogue APIs to new microservice.
  6. Optimization: you’ll have to shift your focus to optimizing the problematic components (scaling by adding more hardware might not work here).
  7. Partition your DB to fix problems (don’t ignore this). Many people might not agree, but you need to fix the core design problems instead of adding a counter-mechanism like caching.
  8. Don’t rush into new tech and tools; adopt them only when you have enough expertise and readiness in your team. Always pick stable, small open-source projects over the new, trendy library or framework promising too many things.

Still Distributed Transactions in Microservices? Here’s the Way Forward

  1. Composition: if you think you should merge a couple of microservices, or consolidate the transactions into one service, it’s never too late to do that exercise.
  2. Build consistent and useful audits for transactions, and make sure you always capture audits even when your service gets timed out. A simple example is setting up an ELK stack with structured logs, transaction IDs, entity IDs, and the ability to define policies; this lets your data-operations teams trace failed transactions and fix them (this is supercritical). You need to enable them to fix these; if it falls back to the engineering team, then your audit setup has failed.
  3. Redesign your process for chaos testing. Don’t test with hypothetical scenarios (like killing a service and seeing how other components behave); instead try to produce the situations, data, or sequences that can kill or time out a service, and then see how resiliency/retry works in the other services.
  4. For new requirements, always do estimates, impact analysis, and an action plan based on your testing time, not your development time (since you will now spend most of the time testing).
  5. Integrate a circuit breaker into your ecosystem, so that you can check whether all the services that are going to participate in a transaction are live and healthy. This way you can avoid half-cooked transactions even before starting them.
  6. Adopt batch processing, wherein you convert some critical transactions to batch and offline processing to make the system more stable and consistent. For the e-commerce example mentioned above, products can be written to the vendor DB and synced to the consumer DB in batches.

Here you will still get scaling, isolation, and independent deployment, but the batch process will make the system far more consistent.
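As a rough sketch of the idea, with in-memory arrays standing in for the vendor and consumer databases:

```typescript
// Instead of a distributed transaction on every insert, a periodic batch job
// copies new products across in bounded chunks.
interface Product { id: number; name: string; }

const vendorDb: Product[] = [];
const consumerDb: Product[] = [];

function addProduct(p: Product): void {
  vendorDb.push(p); // single local transaction: write to the vendor DB only
}

function runBatchSync(batchSize = 100): number {
  const known = new Set(consumerDb.map(p => p.id));
  const pending = vendorDb.filter(p => !known.has(p.id)).slice(0, batchSize);
  consumerDb.push(...pending); // idempotent copy of up to `batchSize` new products
  return pending.length;
}

addProduct({ id: 1, name: "mug" });
addProduct({ id: 2, name: "lamp" });
console.log(runBatchSync()); // 2 products copied
```

Because the copy is idempotent and bounded, a failed batch run can simply be retried on the next schedule instead of being compensated transaction by transaction.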

  7. Don’t try to build two-phase commit; instead go for an arbitrator pattern, which essentially supports resiliency, retry, error handling, timeout handling, and rollback. This applies to PUB/SUB as well: you don’t need to make every service robust, just ensure that the arbitrator is capable of handling most of the scenarios.
  8. For performance, you can use IPC, memory sharing across processes, and TCP. If there are chatty microservices, check out gRPC or WebSockets as an alternative to REST APIs.
  9. Configuration can become a real nightmare if not handled properly. If your apps fail in production due to missing configuration and you are busy rolling back, fixing, and redeploying, you need something better here. It’s very hard to make every microservice configuration-savvy, and you can never figure out all the missing configurations before shipping to production. So, follow this order:

Hard code → config files → databases → API → discovery

  10. Enable service discovery, in case you haven’t already.
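A minimal sketch of the retry-and-rollback core of such an arbitrator (the helper and its names are illustrative, not a library API):

```typescript
// Retry a transaction step a bounded number of times, and run a compensating
// action if it never succeeds. A real arbitrator would add timeouts, backoff,
// and audit logging around this core.
async function withRetry<T>(
  step: () => Promise<T>,
  retries: number,
  rollback: () => Promise<void>,
): Promise<T | null> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await step();
    } catch {
      // swallow and retry; a real arbitrator would also log the attempt
    }
  }
  await rollback(); // all attempts failed: compensate
  return null;
}
```

Centralizing this logic in one place means individual services can stay simple, which is exactly the appeal of the arbitrator over ad-hoc two-phase commit.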


You can use microservices, but keep the pitfalls in the back of your mind. Avoid premature optimization; your target should be building stable and scalable products, not building microservices. A monolith is never bad, and SOA is versatile and capable. You don’t need a system where everything is a microservice; rather, a well-built system combining monoliths, microservices, and SOA can fly really high.

iOS 13 – Dark Mode Support


What is Dark Mode?

Dark Mode is a color scheme that uses light-colored text, icons, and graphical user interface elements on a dark background. This is an inversion of the default color scheme on iOS and other apps, which is generally black or dark text and icons on a white background.

Dark Mode was introduced in iOS 13 and announced at WWDC 2019. It adds a darker theme to iOS and allows you to do the same for your app. It’s a great addition for your users to experience your app in a darker design.

Benefits of using Dark Mode

Dark Mode is ideally suited to low-light environments, where it not only avoids disturbing your surroundings with the light emitted by your phone but also helps prevent eye strain. Whether you’re using Mail, Books, or Safari, the text will appear white on a black background, making it easy to read in the dark. Using Dark Mode can often extend the battery life of your device as well, as less power is needed to light the screen; however, Apple doesn’t explicitly list this as an advantage.

Opt-out and disable Dark Mode

If you wish to opt out your entire application:

  • If you don’t have the time to add support for Dark Mode, you can simply disable it by adding the UIUserInterfaceStyle key to your Info.plist and setting it to Light.

<key>UIUserInterfaceStyle</key>
<string>Light</string>


  • You can set overrideUserInterfaceStyle on the app’s window variable. Depending on how your project was created, this may be in the AppDelegate file or the SceneDelegate.

if #available(iOS 13.0, *) {
    window?.overrideUserInterfaceStyle = .light
}

If you wish to opt out individual UIViewControllers:

override func viewDidLoad() {
    super.viewDidLoad()
    // overrideUserInterfaceStyle is available with iOS 13
    if #available(iOS 13.0, *) {
        // Always adopt a light interface style.
        overrideUserInterfaceStyle = .light
    }
}

Overriding Dark Mode per View Controller

  • You can override the user interface style per view controller and set it to light or dark using the following code:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        overrideUserInterfaceStyle = .dark
    }
}

Overriding Dark Mode per view:

  • You can do the same for a single UIView instance:
let view = UIView()
view.overrideUserInterfaceStyle = .dark

Overriding Dark Mode per window:

  • Overriding the user interface style per window can be handy if you want to disable Dark Mode programmatically:

UIApplication.shared.windows.forEach { window in
    window.overrideUserInterfaceStyle = .dark
}

Please note that we’re making use of the windows array here as the key window property on the shared UIApplication is deprecated starting from iOS 13. It’s discouraged to use it because applications can now support multiple scenes that all have an attached window.

Enabling Dark Mode for testing:

If you start implementing a darker appearance in your app, it’s important to have a good way of testing.

Enabling Dark Mode in the Simulator:

Navigate to the Developer page in the Settings app on your simulator and turn on the switch for Dark Appearance:

Enabling Dark Mode on the Simulator

Enabling Dark Mode on a device:

On a device, you can enable Dark Mode by navigating to the Display & Brightness page in the Settings app. However, it’s a lot easier during development to add an option to the Control Centre for easy switching between dark and light mode:

Switching Dark Mode from the debug menu:

While working in Xcode with the simulator open, you might want to use the Environment Override window instead. This allows you to quickly switch appearance during debugging:

The Environment Overrides window allows changing the Interface Style

Enabling Dark Mode in storyboards:

While working on your views inside a Storyboard, it can be useful to set the appearance to dark inside the Storyboard. You can find this option next to the device selection towards the bottom:

Updating the appearance of a Storyboard to dark

Adjusting colors for Dark Mode:

With Dark Mode on iOS 13, Apple also introduced adaptive and semantic colors. These colors adjust automatically based on several influences like being in a modal presentation or not.

Adaptive colors explained:

Adaptive colors automatically adapt to the current appearance. An adaptive color returns a different value for different interface styles and can also be influenced by presentation styles like a modal presentation style in a sheet.

Semantic colors explained:

Semantic colors describe their intentions and are adaptive as well. An example is the label semantic color which should be used for labels. Simple, isn’t it?

When you use them for their intended purpose, they will render correctly for the current appearance. The label example will automatically change the text color to black for light mode and white for dark.

It’s best to explore all available colors and make use of the ones you really need.

Exploring adaptive and semantic colors:

It will be a lot easier to adopt Dark Mode if you’re able to implement semantic and adaptive colors in your project. For this, I would highly recommend the SemanticUI app by Aaron Brethorst which allows you to see an overview of all available colors in both appearances.

The SemanticUI app by Aaron Brethorst helps in exploring Semantic and adaptable colors

Supporting iOS 12 and lower with semantic colors:

As soon as you start using semantic colors, you will realize that they only support iOS 13 and up. To solve this, we can create our own custom UIColor wrapper by making use of the UIColor.init(dynamicProvider: @escaping (UITraitCollection) -> UIColor) method. This allows you to return a different color for iOS 12 and lower.

public enum DefaultStyle {

    public enum Colors {

        public static let label: UIColor = {
            if #available(iOS 13.0, *) {
                return UIColor.label
            } else {
                return .black
            }
        }()
    }
}

public let Style = DefaultStyle.self

let label = UILabel()
label.textColor = Style.Colors.label

Another benefit of this approach is that you’ll be able to define your own custom style object. This allows theming but also makes your color usage throughout the app more consistent when forced to use this new style configuration.

Creation of a custom semantic color

A custom semantic color can be created by using the earlier explained UIColor.init(dynamicProvider: @escaping (UITraitCollection) -> UIColor) method.

Oftentimes, your app has its own identity tint color. It could be that this color works great in light mode but not in dark mode. For that, you can return a different color based on the current interface style.

public static var tint: UIColor = {
    if #available(iOS 13, *) {
        return UIColor { (traitCollection: UITraitCollection) -> UIColor in
            if traitCollection.userInterfaceStyle == .dark {
                /// Return the color for Dark Mode
                return Colors.osloGray
            } else {
                /// Return the color for Light Mode
                return Colors.dataRock
            }
        }
    } else {
        /// Return a fallback color for iOS 12 and lower.
        return Colors.dataRock
    }
}()

Updating assets and images for Dark Mode:

The easiest way to do this is by using an Image Asset Catalog. You can add an extra image per appearance.

Adding an extra appearance to an image asset.


Now, if you have finally decided to adopt iOS 13 dark mode, then here’s a simple checklist to follow:

  • Download and install Xcode 11.0 or later
  • Build and run your app with Dark Mode enabled
  • Fix all the issues that you find
  • Add dark variants to all your properties
  • Adopt Dark Mode one screen at a time:
    • Start with the xib files
    • Move on to storyboards
    • Then to code
    • Repeat for all the screens one by one
  • Be sure to set the foreground key while drawing attributed text
  • Shift all your appearance logic into the “draw time” functions
  • Test your app in both light and dark mode
  • Don’t forget to change your LaunchScreen storyboard

By following the above process, you will be able to implement iOS 13 dark mode in your app with ease and confidence.

Data Synchronization in Real Time: An Evolution


HyperText Transfer Protocol (HTTP) is the most widely used application layer protocol in the Open Systems Interconnection (OSI) model. Traditionally, it was built to transfer text or media containing links to other similar resources, between a client that common users could interact with and a server that provided the resources. Clicking on a link usually resulted in the unmounting of the present page from the client and the loading of an entirely different page. Gradually, as the content across pages became repetitive with only minute differences, engineers started looking for a way to update only part of the content instead of loading the entire page.

This was when XMLHttpRequest, or AJAX, was born, which supported the transfer of data in formats like XML or JSON rather than traditional HTML pages. But all along, HTTP remained a stateless protocol where the onus lay on the client to initiate a request to the server for any data it required.

Real-time data

When exponential growth in the volume of data exchanged on the internet led to applications spanning multiple business use cases, the need arose to fetch this data in real time rather than waiting for the user to request a page refresh. This is the topic we are trying to address here. There are different protocols and solutions available for syncing data between a client and a server, and for keeping data updated between a third-party server and our own server. We are limiting the scope to real-time synchronization between a client application and a data server.

Without loss of generality, we assume that our server is on a cloud platform, with several instances of the server running behind a load balancer. Without going into the details of how this distributed system maintains a single source of new data, we assume that whenever new real-time data occurs, all servers are aware of it and access it from the same source. We will now examine four technologies that solve the real-time data problem, namely Polling, Long Polling, Server-Sent Events, and WebSockets. We will also compare them in terms of ease of implementation on the client side as well as the server side.


Polling

Polling is a mechanism in which a client application, like a web browser, constantly asks the server for new data. These are traditional HTTP requests that pull data from the server via XMLHttpRequest objects. The only difference is that we don’t rely on the user to perform any action to trigger the request: we periodically keep sending requests to the server, separated by a certain time window. As soon as any new data is available on the server, the next request is responded to with this data.

Figure 1: Polling

Ease of Implementation on client

  • Easiest implementation
  • Simply set up an interval timer that triggers the XMLHttpRequest
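A client-side sketch of this, with the data-fetching call injected so the endpoint stays an assumption rather than a fixed URL:

```typescript
// Polling sketch: ask the server for new data on a fixed interval, regardless
// of whether anything changed. `getData` is assumed to wrap an XMLHttpRequest
// or fetch call that resolves with new data, or with null when there is none.
function startPolling(
  getData: () => Promise<string | null>,
  onData: (data: string) => void,
  intervalMs = 2000,
): ReturnType<typeof setInterval> {
  return setInterval(async () => {
    const data = await getData(); // server answers immediately, possibly with null
    if (data !== null) onData(data);
  }, intervalMs);
}
```

Note that the timer fires whether or not the previous request has completed, which is exactly the drawback listed below.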

Ease of Implementation on Server

  • Easiest to implement
  • As soon as the request arrives, provide the new data if available
  • Else send a response indicating null data
  • Typically the server can close the connection after the response
  • Since HTTP 1.1, all connections are by default kept alive until a threshold time or a certain number of requests, and modern browsers behind the scenes multiplex requests among parallel connections to a server

Critical Drawbacks

  • Depending on the interval between requests, the data may not actually be real-time
  • We must not remove the new data on the server for at least as long as the request interval, else we risk some clients not being provided with the data
  • Results in heavy network load on the server
  • When the interval elapses, the client does not care whether the earlier request has been responded to or not; it simply makes another periodic request
  • It may throttle other client requests, as all of the connections a browser is allowed for a domain may be consumed by polling

Long Polling

As the name suggests, long polling is mostly equivalent to the basic polling described above: it is a client pull of data that makes an HTTP request to the server using the XMLHttpRequest object. The only difference is that the client now expects the server to keep the connection alive until it responds with new data or the TCP connection timeout is reached. The client does not initiate a new request until the previous request has been answered.

Figure 2: Long Polling

Ease of Implementation on client

  • Still easy to implement
  • The client simply has to provide a Keep-Alive header in the request with a parameter indicating the maximum connection timeout (note that modern browsers implementing HTTP 1.1 provide this header by default, i.e. the connection is kept alive by default)
  • When the previous request is answered, initiate a new request
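The client loop can be sketched as follows (the `request` function is an assumed wrapper around an XMLHttpRequest/fetch call that resolves with data, or with null when the connection times out):

```typescript
// Long-polling sketch: the next request is issued only after the previous one
// completes, because the server holds each request open until it has data or
// the connection times out.
async function longPoll(
  request: () => Promise<string | null>,
  onData: (data: string) => void,
  shouldContinue: () => boolean,
): Promise<void> {
  while (shouldContinue()) {
    const data = await request(); // resolves with data, or null on timeout
    if (data !== null) onData(data);
    // immediately re-issue the request; no artificial interval is needed
  }
}
```

Compared with plain polling, the request rate now adapts to how often data actually arrives, at the cost of the server holding connections open.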

Ease of Implementation on Server

  • More difficult to implement than the traditional request-response cycle
  • The onus is on the server to keep the connection alive
  • The server has to periodically check for new data while keeping the connection open, which consumes memory and resources
  • It is difficult to estimate how long the new data should be kept on the server, because connections with some clients may have timed out at the moment new data arrives, and those clients cannot be provided with this data

Critical Drawbacks

  • If data changes are frequent, this is virtually equivalent to polling, because the client will keep making requests just as frequently
  • If data changes are infrequent, this results in lots of connection timeouts
  • So a connection that could have been used for other requests is tied to a single request for a very long time
  • Caching servers on the network between client and server can serve stale data if a proper Cache-Control header is not provided
  • As mentioned earlier, it is possible that some connections may not be provided with new data at all

Server-Sent Events

Server-Sent Events (SSE) follow the principle of a server push of data rather than the client polling for data. The communication still follows the standard HTTP protocol. A client initiates a request with the server, and after the TCP handshake is done, the server informs the client that it will be providing a stream of text data. Both the browser and the server agree to keep the connection alive for as long as possible; in fact, the server never closes the connection on its own. The client can close the connection if it no longer needs new data. Whenever new data becomes available on the server, it is pushed as a new event in the text stream. If the SSE connection is ever interrupted because of network issues, the browser immediately initiates a new SSE request.

Figure 3: Server-Sent Events

Ease of Implementation on Client

  • Modern browsers provide a JavaScript class called EventSource which abstracts a lot of overhead functionality for the client
  • The client simply has to instantiate the EventSource class with the server endpoint
  • It will then receive an event callback whenever a stream of text data is pushed by the server
  • EventSource instance itself handles re-establishing an interrupted connection

Ease of Implementation on Server

  • In addition to the traditional HTTP response headers, the server must set the Content-Type header to ‘text/event-stream’ and the Connection header to ‘Keep-Alive’
  • The server has to remember the pool of connections with SSE properties
  • The server has to periodically check for new data, which consumes memory and resources via an asynchronously running thread
  • Since a consistent connection is almost guaranteed by all clients, the server can push new data to all connections from the pool and flush the now stale data immediately

Critical Drawbacks

  • EventSource class is not supported by Internet Explorer
  • The server must ensure to remove failed connections from the SSE pool to optimize resources

WebSockets

Unlike the three technologies above, which follow the HTTP protocol, WebSockets can be described as something built on top of HTTP. The client initiates a normal HTTP request to the server but includes a couple of special headers – Connection: Upgrade and Upgrade: websocket. These headers instruct the server to first establish a TCP connection with the client; the server and client then agree to use this active TCP connection for a protocol that is an upgrade over the TCP transport layer. The handshake that now happens over this connection follows the WebSocket protocol, and the two sides agree on a payload structure (JSON, XML, MQTT, etc.) that both the browser and the server support, via the Sec-WebSocket-Protocol request and response headers respectively. Once the handshake is complete, the client can push data to the server and the server can push data to the client without waiting for the client to initiate any request. Thus a bi-directional flow of data is established over a single connection.

Figure 4: WebSockets

Ease of Implementation on Client

  • The modern browser provides a JavaScript class called WebSocket which abstracts a lot of overhead functionality for client
  • The client simply has to instantiate the WebSocket class with server URL
  • Note that the http:// scheme in the server URL must be replaced with the ws:// scheme
  • Similarly, https:// must be replaced with wss://
  • WebSocket class provides a connection closed callback when a connection is interrupted and hence the client can initialize a new WebSockets request
  • WebSocket class provides a message received callback whenever the server pushes any data
  • WebSocket class also provides a method to send data to the server

Ease of Implementation on Server

  • On receiving an HTTP request from the client to upgrade the protocol, the server must respond with HTTP status code 101, indicating the switch of protocol to WebSocket
  • The server also returns a base64-encoded SHA-1 hash of the Sec-WebSocket-Key provided by the client in the handshake request, via the Sec-WebSocket-Accept response header
  • The response headers also include the agreed data format protocol via the Sec-WebSocket-Protocol header

Critical Drawbacks

  • Though there are libraries available like websockify which make it possible for the server running on a single port to support both HTTP and WebSocket protocol, it is generally preferred to have a separate server for WebSockets
  • Since WebSockets don’t follow HTTP, browsers don’t provide multiplexing of requests to the server
  • This implies that each WebSocket class instance from the browser will open a new connection to the server and hence connecting and reconnecting need to be maintained optimally by both the servers and the client

Below is a table summarising all the parameters:

|   | Polling | Long Polling | SSE | WebSockets |
| --- | --- | --- | --- | --- |
| Protocol | HTTP | HTTP | HTTP | HTTP upgraded to WebSocket |
| Mechanism | Client pull | Client pull | Server push | Server push |
| Bi-directional | No | No | No | Yes |
| Ease of implementation on client | Easy via XMLHttpRequest | Easy via XMLHttpRequest | Manageable via the EventSource interface | Manageable via the WebSocket interface |
| Browser support | All | All | Not supported in IE; can be overcome with a polyfill library | All |
| Automatic reconnection | Inherent | Inherent | Yes | No |
| Ease of implementation on server | Easy via the traditional HTTP request-response cycle | Logic needed to keep a connection alive per session | Standard HTTP endpoint with specific headers and a pool of client connections | Requires effort; usually needs a separate server |
| Secured connection | HTTPS | HTTPS | HTTPS | WSS |
| Risk of network saturation | Yes | No | No | Browser multiplexing not supported, so connections must be optimized on both ends |
| Latency | Maximum | Acceptable | Minimal | Minimal |
| Issue of caching | Yes; needs appropriate Cache-Control headers | Yes; needs appropriate Cache-Control headers | No | No |


Polling and long polling are client pull mechanisms that adhere to the standard HTTP request-response protocol. Both are relatively easy to implement on the server and the client, yet both pose the threat of request throttling on the client and server respectively. Latency is also noticeable in both implementations, which is somewhat self-defeating for the purpose of providing real-time data. Server-Sent Events and WebSockets are better candidates for providing real-time data. If the data flow is unidirectional and only the server needs to provide updates, it is advisable to use SSE, which follows the HTTP protocol. But if both client and server need to provide real-time data to each other, as in scenarios like a chat application, it is advisable to go for WebSockets.

React Context API vs Redux

Whenever there is a requirement for state management, the first name that pops into one's head is Redux. With approximately 18M downloads per month, it has long been the most obvious and unmatched state-management tool.

But the new React Context API is giving Redux healthy competition and, in some scenarios, can replace it.

I will first give a brief explanation for both, and then we can deep dive into the details.

What is Redux?

Redux is most commonly used to manage the state or data of a React app. It is not limited to React apps; it can be used with Angular and other frameworks as well. But when using React, the most common and obvious choice is Redux.

Redux provides a centralized store (state) that can connect with various React containers/components.

This state is not directly mutable or accessible; to change the state data, we dispatch actions, and reducers then update the data in the centralized store.

What is React’s Context API?

The Context API provides a way to solve a problem you will face in almost all React apps: how to manage state and pass data to components that are not directly connected.

Let’s first see an example of a sample application with redux used for state management.

The state is always changed by dispatching an action. 

Then a reducer is present to update the global state of the app. Below is a sample reducer.

Below would be a sample app.js file.

The last step is to connect the React component to the store; the component subscribes to the global state, and updated data is automatically passed down as props.

This is a fundamental and trivial implementation of the react-redux setup. But there is a lot of boilerplate code that needs to be taken care of.

Now, let's see how React's Context API works. We will update the same code to use the Context API and remove Redux.

Context API consists of three things:

  • Context Object
  • Context Provider
  • Context Consumer

First of all, we will create a context object.

We can create contexts in various ways: either in a separate file or in the component itself. We can create multiple contexts as well. But what exactly is a context?

Well, a context is just a JavaScript object that holds some data (it can hold functions as well).

Now let's provide this newly created context to our app. Ideally, the component that wraps all the child components should provide the context; in our case, we provide it from the App itself. The value prop set on <TodoContext.Provider> is passed down to all the child components.

Here is how we can consume our provided context in the child components.

The special component <TodoContext.Consumer> subscribes to the provided context. The context is the same object that is passed to the value prop of <TodoContext.Provider>, so if the value changes there, the context object in the consumer is updated as well.

But how do we update the values? Do we need actions?

So here the standard React state management can help us. We can create the state in our App.js file itself and pass the state object to the Provider. The example given below should provide you with a little more context. 🙂
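A hedged sketch of how the pieces fit together (names like TodoContext and addTodo are illustrative; this is JSX and requires React 16.3+, so it is not runnable standalone):

```jsx
import React from 'react';

// 1. The context object, with a default value.
const TodoContext = React.createContext({ todos: [] });

// 3. The consumer: receives the value prop of the nearest provider.
const TodoList = () => (
  <TodoContext.Consumer>
    {({ todos, addTodo }) => (
      <ul onClick={() => addTodo('new item')}>
        {todos.map((t) => <li key={t}>{t}</li>)}
      </ul>
    )}
  </TodoContext.Consumer>
);

class App extends React.Component {
  state = { todos: [] };

  // Passed down via the provider so consumers can update the global state.
  addTodo = (todo) =>
    this.setState((prev) => ({ todos: [...prev.todos, todo] }));

  render() {
    return (
      // 2. The provider: every descendant can consume this value.
      <TodoContext.Provider
        value={{ todos: this.state.todos, addTodo: this.addTodo }}
      >
        <TodoList />
      </TodoContext.Provider>
    );
  }
}
```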

In the above code, we update the state just as we would in a normal class-based React component. We also pass method references into the value prop, so any component consuming the context has access to these functions and can easily update the global state.

So that is how we can achieve global state management using the React Context API instead of using redux.

So should you get rid of redux completely?

Let’s look into a little comparison listed down below:

Learning Curve: Redux is a whole new package that needs to be integrated into an app; it takes some time to learn its basic concepts and the standard code practices needed to make React and Redux work together smoothly. Knowing React certainly helps speed up learning and implementing Redux. React Context, on the other hand, works on the state principle that is already part of React; we only need to understand the additions to the API and how to use the providers and consumers. In my opinion, a React developer can get familiar with the concept in a short while.

Refactoring Effort: The effort needed to refactor the code depends on the project itself; a small-scale app can easily be converted in three to four days, but converting a big app can take considerably longer.

Code Size: With Redux, the code size of the web app increases quite a bit, as we include several packages just to bind everything together (redux – 7.3 kB, react-redux – 14.4 kB). The Context API, on the other hand, is baked into the react package, so no additional dependencies are required.

Scale: Redux is known for its scaling capabilities; in fact, when building a large-scale app, Redux is the first choice. It provides modularity (separating out reducers and actions) and a proper flow that can easily scale. The same cannot be said for the React Context API: everything is managed by React's state property, and while we can create a global higher-order component containing the whole app state, such code is not really maintainable or easy to read.

In my opinion, a small-scale app can easily adapt to the React Context API. To integrate Redux, we need three to four separate packages, which means a bigger final bundle and a lot more code to process, which can increase render times.

On the other hand, React context API is built-in, and no further package is required to use it.

However, for large-scale apps with numerous components and containers involved, I believe the preferred way to go is Redux, as it provides maintainability and the ability to debug your code. Its various middlewares help us write efficient code, handle async logic, and debug better. We can separate action dispatchers and reducers in Redux, which gives us an easier, well-defined coding pattern.

A final approach could be to use both, though I have not tried it: connect containers with Redux and, where containers have deep child component trees, pass data down to the children using context objects.

API test automation using Postman simplified: Part 2

In the previous blog of this series, we talked about the deciding factors for tool selection, the POC, suite creation, and suite testing.

Moving one step further, we'll now talk about the next steps: command-line execution of Postman collections, integration with a CI tool, monitoring, etc.

Thus I have structured this blog into the following phases:

  • Command-line execution of postman collection
  • Integration with Jenkins and Report generation
  • Monitoring

Command-line execution of postman collection

Postman has a command-line interface called Newman. Newman makes it easy to run a collection of tests right from the command line. This not only enables running Postman tests on systems that don't have a GUI, but also gives us the ability to run Postman collections from within most build tools. Jenkins, for example, allows you to execute commands within the build job itself, with the job passing or failing depending on the test results.

The easiest way to install Newman is via NPM. If you have Node.js installed, you most likely have NPM installed as well.

$ npm install -g newman

A sample Windows batch command to run a Postman collection for a given environment (the collection URL is replaced with a placeholder here):

newman run <collection-url-or-file> -e Staging-Environment-SLE-API-Automation.postman_environment.json --reporters cli,htmlextra --reporter-htmlextra-export "newman/report.html" --disable-unicode -x

The above command uses the cloud URL of the collection under test. If you don't want to use the cloud version, you can export the collection JSON and pass its file path in the command instead. The generated report file clearly shows passed/failed/skipped tests along with requests, responses, and other useful information. In the post-build actions, we can add steps to email the report to the intended recipients.

Integration with Jenkins and Report generation

Scheduling and executing a Postman collection through Jenkins is a pretty easy job. First, it requires you to install the necessary plugins.

E.g. we installed the following plugins:

  • NodeJS – for Newman
  • Email Extension – for sending mails
  • S3 publisher – for storing the report files in an AWS S3 bucket

Once you have all the required plugins, you just need to create a job and do the necessary configuration for:

  • Build triggers – for scheduling the job (time and frequency)
  • Build – the command to execute the Postman collection
  • Post-build actions – like storing the reports at the required location, sending emails, etc.

The Newman test report generated after the Jenkins build execution initially looks as shown in Figure 1:

Figure 1: Report in plain format due to Jenkins’s default security policy

This is due to one of Jenkins's security features: it sends Content-Security-Policy (CSP) headers that describe how certain resources may behave. The default policy blocks pretty much everything – no JavaScript, no inline CSS, not even CSS from external websites. This can cause problems with content added to Jenkins via build processes, typically using the Plugin.

Therefore, we need to modify the CSP to see the visually appealing version of the Newman report. While turning this policy off completely is not recommended, it can be beneficial to make it less restrictive, allowing the use of external reports without compromising security. After making the changes, our report looks as shown in Figure 2:

Figure 2: Properly formatted Newman report after modifying Jenkins’s Content Security Policy

One way to achieve this, when Jenkins runs as a Windows service, is to edit the jenkins.xml file located in your main Jenkins installation directory, which permanently changes the Content Security Policy. Simply add the new argument to the arguments element, as shown in Figure 3, save the file, and restart Jenkins.
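For illustration, the modified arguments element might look like the following. The system property is hudson.model.DirectoryBrowserSupport.CSP; the surrounding arguments and the relaxed policy value shown here are only examples, to be tuned to your own installation and security needs:

```xml
<arguments>-Xrs -Xmx256m
  -Dhudson.model.DirectoryBrowserSupport.CSP="default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline';"
  -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>
```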

Figure 3: Sample snippet of Jenkins.xml showing modified argument for relaxing Content Security Policy


Monitoring

As we saw in the Jenkins integration, we fixed the job frequency using the Jenkins scheduler, which means our build runs only at particular times of day. This solution works for us for now, but what if stakeholders should be informed only when there is a failure that needs attention, rather than being spammed with regular mails even when everything passes?

One of the best ways is to integrate the framework with the code repository management system, trigger the automation whenever a new code change related to the feature is pushed, and send a report mail when the automation script detects any failure.

Postman provides a better solution in the form of monitors, which let you stay up to date on the health and performance of your APIs. We have not used this utility, though, since we are on the free version, which has a limit of 1,000 free monitoring calls every month. You can create a monitor by navigating to New -> Monitor in Postman (refer to Figure 4).

Postman monitors are based on collections. Monitors can be scheduled as frequently as every five minutes and will run through each request in your collection, similar to the collection runner. You can also attach a corresponding environment with variables you’d like to utilize during the collection run.

The value of monitors lies in your test scripts. When running your collection, a monitor uses your tests to validate the responses it receives. When one of these tests fails, you can automatically receive an email notification or configure the available integrations to receive alerts in tools like Slack, PagerDuty, or HipChat.

Figure 4: Adding Monitor in Postman

Here we come to the end of this blog, as we have discussed all the phases of end-to-end Postman usage in terms of what we explored and implemented in our project. I hope this information helps in setting up an automation framework with Postman in other projects as well.

API test automation using Postman simplified: Part 1

Every application you build today relies on APIs. This means it's crucial to thoroughly verify APIs before rolling out your product to the client or end-users. Although multiple tools for automating API testing are available and known to QAs, while creating a test suite we have to choose the tool that best suits our project requirements and can scale and run without much maintenance or upgrade effort. And once the tool is finalized, we have to design an end-to-end flow with it that is easy to use and gets the most benefit out of the automation efforts.

I am writing this two-blog series to talk about the end-to-end flow of API automation with Postman, from deciding on the tool to implementing the suite to integration with a CI tool and report generation. The content of these blogs is based entirely on our experience and learnings while setting this up in our project.

In the first blog of the series, we'll talk about the phases up to implementation, while in the next blog we'll discuss integration with Jenkins, monitoring, etc.

Thus I have structured this blog into the following phases:

  • Doing a POC and deciding on the tool depending on its suitability to meet the project requirements
  • Checking the tool's scalability
  • Suite creation with basic components
  • Testing the suite using mock servers

Before moving on to these topics in detail, for those who are new to Postman: in brief, Postman is one of the most renowned tools for testing APIs and is most commonly used by developers and testers. It allows for repeatable, reliable tests that can be automated and used in a variety of environments like Dev, Staging, and Production. It presents a friendly GUI for constructing requests and reading responses, making it easy for anyone to get started without prior knowledge of any scripting language, since Postman also has a feature called ‘Snippets’. By default these are provided in JavaScript, and you can use them to generate code snippets in a variety of languages and frameworks such as Java, Python, C, cURL and many others.

Let’s now move to each phase one by one wherein I’ll talk about our project specific criteria and examples in detail.

Doing POC and Deciding on the tool

A few months back, when we came up with a plan to automate the API tests of our application, the first question in mind was: which tool will best suit our requirements?

The team was briefly familiar with the Postman, JMeter, REST Assured and FitNesse tools/frameworks for API automation. The main criteria for selecting the tool were to have an open-source option that helps us quickly get started with the test-automation task, is easy to use, gives nice, detailed reporting, and is easy to integrate with a CI tool.

We could quickly create a POC of the complete end-to-end flow using Postman, and a similar POC for comparison in JMeter. Postman came out as the better option in terms of reporting and user-friendliness, since it does not require much scripting knowledge, so anyone in the team can pitch in anytime and contribute to the automation effort.

Checking tool’s scalability

Since we liked the tool and wanted to go ahead with it to build the complete automation suite, the next set of questions was related to the limitations and scalability of the Postman free version.

This was important to evaluate first, before starting the actual automation effort, as we wanted to avoid any unnecessary rework. Thus we started finding answers to our questions. While we could find a few answers through web searches, for some clarifications we had to reach out to Postman customer support to be doubly sure about the availability and limitations of the tool.

As a gist, it is important to know that if you are using postman free version then:

  • While using a personal workspace there is no upper limit on the number of collections, variables, environments, assertions and collection runs, but if you want to use a shared/team workspace then there is a limit of 25 requests.
  • If you are using Postman's API for any purpose (e.g. to add/update collections, update environments, or add and run monitors) then a limit of 1,000 requests per month and a rate limit of 60 per minute apply.
  • Postman's execution performance does not really depend on the number of requests, but mainly on how heavy the computations performed in your scripts are.

This helped us understand whether the free version would suffice for our requirements. Since we were not planning to use Postman APIs or monitoring services, we were good to go ahead with the free version.

Suite creation with basic components

Creating an automation suite with Postman requires an understanding of the following building blocks (refer to Figure 1):

  • Collections & Folders: Postman Collections are a group of saved requests you can organize into folders. This helps in achieving the nice readable hierarchies of requests.
  • Global/Environment variables: An environment is a set of key-value pairs. It lets you customize requests using variables, so you can easily switch between different setups without changing your requests. Global variables allow you to access data between collections, requests, test scripts, and environments. Environment variables have a narrower scope and apply only to the selected environment. For instance, we have multiple test environments like Integration, Staging, and Production, so we can run the same collection in all three environments without any changes to the collection, just by maintaining three environments with environment-specific values for the same keys.
  • Authentication options: APIs use authorization to ensure that client requests access data securely. Postman is equipped with various authorization methods, from simple Basic Auth to the special AWS signature, OAuth and NTLM authentication.
  • Pre-request scripts: Pre-request scripts are snippets of code associated with a request that are executed before the request is sent. Common use cases include generating values and injecting them into requests through environment variables, or converting data types/formats before passing them to the test script.
  • Tests: Tests are scripts written in JavaScript that are executed after a response is received. Tests can be run as part of a single request or run with a collection of requests.
  • Postman in-built js snippets for creating assertions: Postman allows us to write JavaScript code which can assert on the responses and automatically check the response. We can use the Snippets feature of the Tests tab to write assertions.

Figure 1: Basic Building Blocks of Postman

Testing the Suite using Mock servers

Using the base framework we created during the POC, we extended it to add multiple requests and multiple tests around each request's response, giving us a full-fledged automation suite running daily.

In our case, the first problem statement to be tackled in API automation was a set of APIs for a reporting module.

Since the report contains dynamic data, and generating a fixed set of test data is very difficult due to multiple environmental factors, it was not possible for us to apply fixed assertions to validate data accuracy. We therefore had to come up with checks that do not match the exact data values but are still thorough enough to verify the validity of the data and report genuine failures.

One practice during this exercise that turned out quite beneficial for us: before starting to write the tests in the tool itself, clearly list down, in detail, exactly what we want to assert.

For simple APIs with static responses, this list might be pretty straightforward to define. But in our example, it required a good amount of brainstorming to come up with a list of assertions that could actually check the validity of a response without knowing the data values themselves.

So we thoroughly studied the API responses, came up with our PASS/FAIL criteria, listed down each assertion in our own words in our plan, and then converted them into actual Postman assertions. For example:

-> Response Code 200 OK
-> Schema Validation
-> Not Null check for applicable values
-> Exact value check for a set of values
-> Match Request start/end time with response start/end time
-> Range validation for a set of values (between 0-1)
-> Data Validation Logic: Detailed logic in terms of response objects/data with if/else criteria for defined PASS/FAIL cases (details removed)
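A few of the assertions above, sketched as Postman test scripts (these run only inside Postman's sandbox, and field names like startTime and score are placeholders for our actual response schema):

```javascript
// Runs in Postman's Tests tab after the response is received.
pm.test('Status code is 200', () => {
    pm.response.to.have.status(200);
});

const body = pm.response.json();

pm.test('Mandatory fields are not null', () => {
    pm.expect(body.startTime).to.not.be.null;   // placeholder field
    pm.expect(body.endTime).to.not.be.null;     // placeholder field
});

pm.test('Score values lie between 0 and 1', () => {
    pm.expect(body.score).to.be.within(0, 1);   // placeholder field
});
```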

As we see in the above list, we have a number of positive and negative tests covered. But even with such assertions in place in Postman, we can't say they will work when such a response is actually generated until we test them thoroughly, and if we test the collection only against actual environment responses, we might never see each type of failed response.

To test this, we need a way to mock API requests and responses that are very similar to the actual ones but with some values modified to invalid values, to check whether our script and assertions catch them as failures. This is possible in Postman through mock servers. You can add a mock server for a new or existing collection by navigating to New -> Mock Server in Postman (refer to Figure 2).

A Postman mock server lets you mock a server response, allowing a team to develop or write tests against a service that is not yet complete or is unstable. Instead of hitting the actual endpoint URL, requests are made to the request path specified in the mock server, and the corresponding mocked test responses are returned, so we can see how our script behaves for such requests and responses. If similar scenarios occur during execution against the live endpoints, we already know how our script will handle them.

Figure 2: Adding mocked request/response using Mock Server in Postman

Once our test suite is ready, with the required cases added and tested, we are good to start scheduling it to run daily against our test environments so that it checks API health and reports failures.

In the next blog, we will discuss the remaining phases in detail. Stay tuned.

Defence against Rising Bot Attacks

Bots are software programs that perform automated tasks over the Internet. They can be used for productive tasks, but they are frequently used for malicious activities, and are accordingly categorised as good and bad bots.

Good bots are used for positive purposes, like chatbots that solve customer queries and web crawlers that index pages for search engines. A plain-text file, robots.txt, can be placed in the root of a site, and rules can be configured in this file to allow or deny access to different site URLs. This way, good bots can be controlled and allowed to access only certain site resources.

Then come the bad bots: malicious programs that perform activities in the background on a victim's machine without the user's knowledge. Such activities include accessing websites without the user's knowledge or stealing the user's confidential information. They are also spread across the Internet to perform DDoS (Distributed Denial of Service) attacks on target websites. The following techniques can be used to deter malicious bots from accessing the resource-intensive APIs of web applications.

  1. Canvas Fingerprint:

Canvas fingerprinting works on the HTML5 canvas element. A small image is drawn on a canvas element of 1 x 1 pixel size. Each device generates a different hash of this image based on the browser, operating system and installed graphics card. This technique alone is not sufficient to uniquely identify users, because there will be groups of users sharing the same configuration and device. However, it has been observed that when a bot scans through web pages, it tries to access every link present on the landing page, so the time taken by the bot to click a link on that page will always be similar. These two techniques, canvas fingerprint and time-to-click, can be combined to decide whether to grant access to the application or ask the user to prove they are genuine by showing a captcha; the user is given access to the site after successfully validating the captcha. So if requests are observed coming from the same device (same fingerprint) with the same time-to-click interval, they fall into the bot category and have to be validated by showing the user a captcha. This improves the user experience, as the captcha is shown only for suspect requests rather than to all users.

  2. Honey Trap:

A honey trap works by placing a few hidden links on the landing page alongside the actual links. Since bots try every link on the page while sniffing it, they get trapped by the hidden links, which point to 404 pages. This data can be collected and used to identify the source of the requests, and those sources can later be blocked from accessing the website. A genuine user only sees the actual links and can access the site normally.
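A minimal sketch of the bookkeeping side of this idea (the trap URLs and class name are hypothetical; in practice the hidden links would be rendered invisibly in the page HTML):

```python
class HoneyTrap:
    """Records clients that follow hidden 'trap' links no human would see."""

    TRAP_PATHS = {"/promo-x9/", "/hidden-offer/"}  # hypothetical hidden URLs

    def __init__(self):
        self.flagged = set()  # client IPs that hit a trap link

    def record_request(self, client_ip, path):
        """Call for every incoming request; trap hits mark the client."""
        if path in self.TRAP_PATHS:
            self.flagged.add(client_ip)

    def is_bot(self, client_ip):
        return client_ip in self.flagged
```

The `flagged` set doubles as the per-partner report data: grouping trap hits by traffic source shows which partners send bot traffic.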

  3. Blacklist IP address:

This is not the best solution on its own, because bots are smart enough to change their IP address with each request. However, it helps reduce some bot traffic by providing one layer of protection.

  4. Blacklist X-Requested-with:

Some malicious apps click links on the user's device in the background without the user knowing. Those requests carry an X-Requested-With header containing the app's package name. The web application can be configured to block all requests whose X-Requested-With field matches a known fraudulent value.
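The check itself is a simple header lookup. A sketch (the package name in the blacklist is made up for illustration):

```python
# hypothetical blacklist of app package names seen in fraudulent requests
BLACKLISTED_PACKAGES = {"com.example.clickbot"}

def should_block(headers):
    """Return True when the X-Requested-With header names a blacklisted app."""
    package = headers.get("X-Requested-With")
    return package in BLACKLISTED_PACKAGES
```

Requests without the header, such as ordinary browser traffic, pass through untouched.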

  5. Blacklist User-Agent:

Many third-party service providers maintain lists of User-Agent strings that bots use, and the website can be configured to block all requests coming from these blacklisted user agents. As with IP addresses, this field can be changed by bot owners with each request, so it does not provide foolproof protection.
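Because User-Agent strings vary in version numbers and platform details, matching is usually done on substrings rather than exact values. A sketch with illustrative tokens (a real deployment would load a maintained third-party list):

```python
# illustrative substrings; real lists come from third-party providers
BLACKLISTED_UA_TOKENS = ("scrapy", "curl", "python-requests")

def is_blacklisted_agent(user_agent):
    """Return True when the User-Agent contains a blacklisted token."""
    ua = (user_agent or "").lower()
    return any(token in ua for token in BLACKLISTED_UA_TOKENS)
```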


Different defences suit different cases. A few advertising partners try to convert the same user again and again just to increase their share. For such cases, canvas fingerprinting is the most suitable solution: it identifies requests coming from the same device several times within a time interval, marks them as bot traffic, and asks the user to validate before proceeding.

A honey trap is the ideal defence when many advertising partners are working to bring in traffic and we need to report which partners provide the most bot traffic. From this report the bot sources can be identified, and requests coming from them can later be blocked. The other defences work on blacklisting, e.g. IP address, User-Agent, and X-Requested-With. These are part of the request headers and can easily be changed by bots from time to time. When using these blacklisting defences, the application owner needs to make sure they are using the most up-to-date list of fraud-causing agents; given the pace at which new frauds appear daily, keeping that list current is a challenge.

So blacklisting can be used as the first line of defence to filter the most notorious bots, and canvas fingerprinting can then filter the more advanced ones.


Building a basic REST API using Django REST Framework

An API (Application Programming Interface) is software that allows two applications to talk to each other.

In this tutorial, we will explore the Django REST Framework (DRF) by building an API. We will build a Django REST application with Django 2.x that allows users to create, edit, and delete records through the API.

Why DRF:

Django REST framework is a powerful and flexible toolkit for building Web APIs.

Some reasons you might want to use REST framework:

  • The Web browsable API is a huge usability win for your developers.
  • Authentication policies including packages for OAuth1a and OAuth2.
  • Serialization that supports both ORM and non-ORM data sources.
  • Customizable all the way down – just use regular function-based views if you don’t need the more powerful features.
  • Extensive documentation, and great community support.
  • Used and trusted by internationally recognized companies including Mozilla, Red Hat, Heroku, and Eventbrite.

Traditionally, Django is known to many developers as an MVC web framework, but it can also be used to build a backend, which in this case is an API. We shall see how you can build a backend with it.

Let’s get started

In this blog, you will be building a simple API for a simple employee management service.

Setup your Dev environment:

Please install Python 3. I am using Python 3.7.3 here.

You can check your python version using the command

$ python -V

Python 3.7.3

After installing python, you can go ahead and create a working directory for your API and then set up a virtual environment.

You can set up a virtual environment with the commands below:

$ pip install virtualenv

Create directory employee-management and use that directory

$ mkdir employee-management  && cd employee-management
# creates virtual environment named drf_api
 employee-management $ virtualenv --python=python3 drf_api
# activate the virtual environment named drf_api
 employee-management $ source drf_api/bin/activate


This will activate the virtual env that you have just created.

Let’s install Django and djangorestframework in your virtual env.

I will be installing Django 2.2.3 and djangorestframework 3.9.4

(drf_api) employee-management $ pip install Django==2.2.3
(drf_api) employee-management $ pip install djangorestframework==3.9.4

Start Project:

After setting up your dev environment, let’s start a Django project. I am creating a project named api.


(drf_api) employee-management $ django-admin startproject api

(drf_api) employee-management $ cd api


Now create a Django app. I am creating the employees app.


(drf_api) employee-management $ python manage.py startapp employees


Now you will have a directory structure like this:
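The original screenshot is gone, but a project generated by the commands above typically looks roughly like this (cache and migration files omitted):

```
employee-management/
└── api/
    ├── manage.py
    ├── api/
    │   ├── __init__.py
    │   ├── settings.py
    │   ├── urls.py
    │   └── wsgi.py
    └── employees/
        ├── __init__.py
        ├── admin.py
        ├── apps.py
        ├── migrations/
        ├── models.py
        ├── tests.py
        └── views.py
```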

The app and project are now created. We will now sync the database.  By default, Django uses sqlite3 as a database.

If you open api/settings.py you will notice this:


 DATABASES = {
     'default': {
         'ENGINE': 'django.db.backends.sqlite3',
         'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
     }
 }


You can change the DB engine as per your need, e.g. PostgreSQL.

We will create an initial admin user and set a password for the user.


(drf_api) employee-management $ python manage.py migrate

(drf_api) employee-management $ python manage.py createsuperuser --username admin


Let’s register the apps for the API: open the api/settings.py file and add the rest_framework and employees apps to INSTALLED_APPS.
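Assuming the default generated settings, the result would look something like this:

```python
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',   # Django REST Framework
    'employees',        # our app
]
```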



open the api/urls.py file and add URLs for the employees app;

 from django.contrib import admin
 from django.urls import path, include

 urlpatterns = [
     path('admin/', admin.site.urls),
     path('', include('employees.urls')),
 ]


This makes your basic setup ready and now you can start adding code to your employees’ service API.

TDD – Test Driven Development

Before we write the business logic of our API, we will write a test. So this is what we are doing: write a unit test for a view, then update the code until the test passes.

Let’s Write a test for the GET employees/ endpoint

Let’s create a test for the endpoint that returns all employees: GET employees/.

Open the employees/tests.py file and add the following lines of code;
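The code screenshot did not survive, so here is a minimal sketch of what employees/tests.py could contain. It assumes the Employee model with name and designation fields, the EmployeeSerializer, and a URL named employees-all, all of which are added in the later steps of this tutorial:

```python
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase

from .models import Employee
from .serializers import EmployeeSerializer


class GetAllEmployeesTest(APITestCase):
    def setUp(self):
        # seed a couple of employees to list
        Employee.objects.create(name='Alice', designation='Engineer')
        Employee.objects.create(name='Bob', designation='Manager')

    def test_get_all_employees(self):
        # hit the list endpoint and compare against the serialized queryset
        response = self.client.get(reverse('employees-all'))
        expected = EmployeeSerializer(Employee.objects.all(), many=True).data
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data, expected)
```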


Do not try to run this code yet. We have not added the view or model code yet. Let’s add the view now.

Add the View for GET employees/ endpoint

Now we will add the view code that will respond to the request GET employees/.

Model: First, add a model that will store the data about the employees to be returned in the response. Open the employees/models.py file and add the following lines of code.
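The screenshot with the model code is missing; a minimal sketch, assuming the employee data consists of a name and a designation (the field names are illustrative):

```python
from django.db import models


class Employee(models.Model):
    # illustrative fields; adjust to your own employee data
    name = models.CharField(max_length=255)
    designation = models.CharField(max_length=255)

    def __str__(self):
        return self.name
```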



We will add our model to the admin. This helps in managing employees from the admin UI, e.g. adding or removing an employee. Let's add the following lines of code to the employees/admin.py file.
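Again the screenshot is missing; registering the model is a one-liner (assuming the Employee model from the previous step):

```python
from django.contrib import admin

from .models import Employee

# makes the Employee model editable in the Django admin UI
admin.site.register(Employee)
```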



Now run makemigrations from the command line:

(drf_api) employee-management $ python manage.py makemigrations

Now run the migrate command. This will create the employees table in your DB.

(drf_api) employee-management $ python manage.py migrate


Serializer: Add a serializer. Serializers allow complex data such as query sets and model instances to be converted to native Python datatypes that can then be easily rendered into JSON, XML or other content types.

Add a new file employees/serializers.py and add the following lines of code;
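The screenshot is lost; a minimal ModelSerializer sketch, assuming the Employee model with name and designation fields:

```python
from rest_framework import serializers

from .models import Employee


class EmployeeSerializer(serializers.ModelSerializer):
    class Meta:
        model = Employee
        # expose the illustrative fields defined on the model
        fields = ('name', 'designation')
```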



Serializers also provide deserialization, allowing parsed data to be converted back into complex types, after first validating the incoming data. The serializers in REST framework work very similarly to Django’s Form and ModelForm classes.


View: Finally, add a view that returns all employees. Open the employees/views.py file and add the following lines of code;
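The view screenshot is also missing; a minimal sketch using DRF's generic list view (assuming the Employee model and EmployeeSerializer from the earlier steps):

```python
from rest_framework import generics

from .models import Employee
from .serializers import EmployeeSerializer


class ListEmployeesView(generics.ListAPIView):
    """Handles GET employees/ by returning all employees."""
    queryset = Employee.objects.all()
    serializer_class = EmployeeSerializer
```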



Here we have specified how to get the objects from the database by setting the queryset attribute of the class, and specified the serializer that will be used to serialize and deserialize the data.

The view in this code inherits from DRF's generic ListAPIView class.

Connect the views

Before you can run the tests, you will have to link the views by configuring the URLs.

Open the api/urls.py file and confirm it includes the employees URLs, as configured earlier;



Now go to employees/urls.py and add the code below;
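The screenshot is missing; a sketch of employees/urls.py, assuming the ListEmployeesView from the views step and the employees-all URL name used by the test:

```python
from django.urls import path

from .views import ListEmployeesView

urlpatterns = [
    path('employees/', ListEmployeesView.as_view(), name='employees-all'),
]
```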


Let’s run the test!

First, let’s run automated tests. Run the command;

 (drf_api) employee-management $ python manage.py test


The output in your shell should be similar to this;


How to test this endpoint manually?

From your command line, run the command below:


(drf_api) employee-management $ nohup python manage.py runserver & disown


Now open http://127.0.0.1:8000/admin/ (the dev server’s default address) in your browser. You will be prompted for a username and password; enter the admin username and password we created in the createsuperuser step.

The screen will look like below once you log in :



Let’s add a few employees using the Add button.


Once you have added the employees, let’s test our list-employees API by opening the employees/ endpoint.


If you are able to see the employee list in the response, your API works.

Congrats! Your first API using DRF is live.