React Context API vs Redux

Whenever there is a requirement for state management, the first name that pops into one's head is Redux. With approximately 18M downloads per month, it has been the most obvious and, so far, unmatched state management tool.

But the new React Context API is giving Redux healthy competition and, in many apps, can replace it.

I will first give a brief explanation for both, and then we can deep dive into the details.

What is Redux?

Redux is most commonly used to manage the state or data of a React app. It is not limited to React apps; it can be used with Angular and other frameworks as well. But when using React, Redux is the most common and obvious choice.

Redux provides a centralized store (state) that can connect with various React containers/components.

This state is not directly mutable or accessible; to change the state data, we need to dispatch actions, and the reducers then update the data in the centralized store.

What is React’s Context API?

The Context API provides a way to solve a simple problem that you will face in almost all React apps: how to manage state or pass data to components that are not directly connected.

Let's first look at a sample application that uses Redux for state management.

The state is always changed by dispatching an action.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_actions_example_1-jsx 

Then a reducer is present to update the global state of the app. Below is a sample reducer.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_reducer_example_2-jsx
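Since the gists are only linked above, here is a minimal sketch of what an action creator and reducer of this kind typically look like (the action type and state shape are illustrative, not the exact code from the gists):

// Action creator (illustrative)
const ADD_TODO = 'ADD_TODO';

const addTodo = (todo) => ({
  type: ADD_TODO,
  payload: todo,
});

// Reducer that updates the centralized state (illustrative)
const initialState = { todos: [] };

const todoReducer = (state = initialState, action) => {
  switch (action.type) {
    case ADD_TODO:
      return { ...state, todos: [...state.todos, action.payload] };
    default:
      return state;
  }
};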

Below would be a sample app.js file.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_app_example_3-jsx

The last step is to connect the React component to the reducer, which subscribes it to the global state and automatically updates the data passed as props.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_todo_component_4-jsx

This is a fundamental and trivial implementation of the react-redux setup. But there is a lot of boilerplate code that needs to be taken care of.

Now, let's see how React's Context API works. We will update the same code to use the Context API and remove Redux.

Context API consists of three things:

  • Context Object
  • Context Provider
  • Context Consumer

First of all, we will create a context object.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_create_context_5-js

We can create a context in various places, either in a separate file or in the component itself, and we can create multiple contexts as well. But what is this context?

Well, a context is just a plain JavaScript object that holds some data (it can hold functions as well).

Now let's provide this newly created context to our app. Ideally, the component that wraps all the child components should be the one provided with the context. In our case, we are providing the context to the app itself. The value prop set on <TodoContext.Provider> here is passed down to all the child components.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_provider_example_6-jsx

Here is how we can consume our provided context in the child components.

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_consumer_example_7-jsx

The special <TodoContext.Consumer> component receives the provided context. The context is the same object that is passed to the value prop of <TodoContext.Provider>, so if the value changes over there, the context object in the consumer is updated as well.

But how do we update the values? Do we need actions?

Here we can use standard React state management to help us. We can create the state in our App.js file itself and pass the state object to the Provider. The example given below should give you a little more context. 🙂

https://gist.github.com/hch2904/c1e90dd49c8143ea562dc05676b502c7#file-rvr_context_management_example_8-jsx-jsx

In the above code, we are updating the state just as we would in any class-based React component. We are also passing methods as references to the value prop, so any component that consumes the context has access to these functions and can easily update the global state.
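To make the pattern concrete, here is a minimal sketch, assuming TodoContext was created with React.createContext as shown earlier (the component and field names are illustrative, not the exact gist code):

// App.js (illustrative): the state lives in the component and is exposed via the Provider
class App extends React.Component {
  state = { todos: [] };

  addTodo = (todo) => {
    this.setState((prev) => ({ todos: [...prev.todos, todo] }));
  };

  render() {
    return (
      <TodoContext.Provider value={{ todos: this.state.todos, addTodo: this.addTodo }}>
        <TodoList />
      </TodoContext.Provider>
    );
  }
}

// Any descendant can read the todos or call addTodo through the Consumer
const TodoList = () => (
  <TodoContext.Consumer>
    {({ todos, addTodo }) => (
      <div>
        <ul>
          {todos.map((todo) => (
            <li key={todo}>{todo}</li>
          ))}
        </ul>
        <button onClick={() => addTodo('A new todo')}>Add todo</button>
      </div>
    )}
  </TodoContext.Consumer>
);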

So that is how we can achieve global state management using the React Context API instead of using redux.

So should you get rid of redux completely?

Let’s look into a little comparison listed down below:

  • Learning Curve: Redux is a whole new package that needs to be integrated into an app. It takes some time to learn the basic concepts and the standard code practices we need to follow to have React and Redux working together smoothly; knowing React certainly helps speed up learning and implementing Redux. React Context, on the other hand, works on the state principle that is already a part of React, so we only need to understand the additions to the API and how to use providers and consumers. In my opinion, a React developer can get familiar with the concept in a short while.
  • Refactoring Effort: The effort to refactor code to the Redux API depends on the project itself. A small-scale app can easily be converted in three to four days, but a big app that needs to be converted can take some time.
  • Code Size: With Redux, the code size of the web app increases quite a bit, as we include several packages just to bind everything together (redux – 7.3 kB, react-redux – 14.4 kB). The Context API, on the other hand, is baked into the React package, so no additional dependencies are required.
  • Scale: Redux is known for its scaling capabilities; in fact, while building a large-scale app, Redux is often the first choice, as it provides modularity (separating out reducers and actions) and a well-defined flow that scales easily. The same cannot be said for the React Context API, as everything is managed by React's state. While we can create a global higher-order component that contains the whole app state, that code is not really maintainable or easy to read.

In my opinion, a small-scale app can easily adopt the React Context API. To integrate Redux, we need three to four separate packages, which means a bigger bundle and a lot more code to process, increasing load and render times.

On the other hand, React context API is built-in, and no further package is required to use it.

However, when we talk about large-scale apps with numerous components and containers involved, I believe the preferred way to go is Redux, as it provides maintainability and easier debugging. Its various middlewares help to write efficient code, handle async flows and debug better. We can separate the action dispatchers and reducers in Redux, which gives us an easier, well-defined coding pattern.

The last approach can be to use both of them, though I have not tried it: connect containers with Redux, and if the containers have deep child component trees, pass the data down to the children using context objects.

API test automation using Postman simplified: Part 2

In the previous blog of this series, we talked about the deciding factors for tool selection, the POC, suite creation and suite testing.

Moving one step further, we will now talk about the next steps: command-line execution of Postman collections, integration with a CI tool, monitoring, etc.

I have structured this blog into the following phases:

  • Command-line execution of postman collection
  • Integration with Jenkins and Report generation
  • Monitoring

Command-line execution of postman collection

Postman has a command-line interface called Newman. Newman makes it easy to run a collection of tests right from the command line. This easily enables running Postman tests on systems that don’t have a GUI, but it also gives us the ability to run a collection of tests written in Postman right from within most build tools. Jenkins, for example, allows you to execute commands within the build job itself, with the job either passing or failing depending on the test results.

The easiest way to install Newman is via the use of NPM. If you have Node.js installed, it is most likely that you have NPM installed as well.

$ npm install -g newman

A sample Windows batch command to run a Postman collection for a given environment:

newman run https://www.getpostman.com/collections/b3809277c54561718f1a -e Staging-Environment-SLE-API-Automation.postman_environment.json --reporters cli,htmlextra --reporter-htmlextra-export "newman/report.html" --disable-unicode -x

The above command uses the cloud URL of the collection under test. If you don't want to use the cloud version, you can export the collection JSON and pass its file path in the command instead. The generated report file clearly shows the passed/failed/skipped tests along with requests, responses and other useful information. In the post-build actions, we can add a step to email the report to the intended recipients.
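If you prefer to drive Newman from a Node.js script instead of a shell command (for example inside a build step), here is a minimal sketch, assuming the newman and newman-reporter-htmlextra packages are installed and the file paths are illustrative:

const newman = require('newman');

newman.run({
    collection: require('./SLE-API-Automation.postman_collection.json'),                 // illustrative path
    environment: require('./Staging-Environment-SLE-API-Automation.postman_environment.json'),
    reporters: ['cli', 'htmlextra'],
    reporter: { htmlextra: { export: './newman/report.html' } }
}, function (err) {
    if (err) { throw err; }
    console.log('Collection run complete.');
});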

Integration with Jenkins and Report generation

Scheduling and executing postman collection through Jenkins is a pretty easy job. First, it requires you to install all the necessary plugins as needed.

E.g. We installed the following plugins:

  • NodeJS – for Newman
  • Email Extension – for sending mails
  • S3 publisher – for storing the report files in an AWS S3 bucket

Once you have all the required plugins, you just need to create a job and do necessary configurations for:

  • Build triggers – For scheduling the job (time and frequency)
  • Build – Command to execute the postman collection.
  • Post build actions –Like storing the reports at the required location, sending emails etc.

If you notice, the test execution Newman report generated after Jenkins build execution looks something as shown in Figure 1:

Figure 1: Report in plain format due to Jenkins’s default security policy

This is due to one of the security features of Jenkins: it sends Content-Security-Policy (CSP) headers that describe how certain resources may behave. The default policy blocks pretty much everything: no JavaScript, no inline CSS, not even CSS from external websites. This can cause problems with content added to Jenkins via build processes, typically through plugins. Thus, with the default policy, our report looks like the plain version shown above.

Therefore, we need to modify the CSP to see the visually appealing version of the Newman report. While turning this policy off completely is not recommended, it can be beneficial to make it less restrictive, allowing the use of external reports without compromising security. After making the changes, our report looks as shown in Figure 2:

Figure 2: Properly formatted Newman report after modifying Jenkins’s Content Security Policy

One way to achieve this, when Jenkins is running as a Windows service, is to edit the jenkins.xml file located in your main Jenkins installation directory and permanently change the Content Security Policy, which is controlled by the hudson.model.DirectoryBrowserSupport.CSP system property. Simply add the new argument to the arguments element, as shown in Figure 3, save the file and restart Jenkins.

Figure 3: Sample snippet of Jenkins.xml showing modified argument for relaxing Content Security Policy

Monitoring

As we saw in the Jenkins integration, we fixed the job frequency using the Jenkins scheduler, which means our build runs only at particular times of the day. This solution works for us for now, but what if stakeholders should be informed only when there is a failure that needs attention, rather than being spammed with regular mails even when everything passes?

One of the best ways is to integrate the framework with the code repository management system, trigger the automation whenever a new code change related to the feature is pushed, and send a report mail only when the automation script detects a failure.

Postman provides a better solution in the form of monitors, which let you stay up to date on the health and performance of your APIs. We have not used this utility, since we are on the free version, which has a limit of 1,000 free monitoring calls per month. You can create a monitor by navigating to New -> Monitor in Postman (refer to Figure 4).

Postman monitors are based on collections. Monitors can be scheduled as frequently as every five minutes and will run through each request in your collection, similar to the collection runner. You can also attach a corresponding environment with variables you’d like to utilize during the collection run.

The value of monitors lies in your test scripts. When running your collection, a monitor uses your tests to validate the responses it receives. When one of these tests fails, you can automatically receive an email notification, or configure the available integrations to receive alerts in tools like Slack, PagerDuty or HipChat.

Figure 4: Adding Monitor in Postman

Here we come to the end of this blog, having discussed all the phases of end-to-end Postman usage in terms of what we explored and implemented in our project. I hope this information helps in setting up an automation framework with Postman in other projects as well.

API test automation using Postman simplified: Part 1

Every application you build today relies on APIs. This means it's crucial to thoroughly verify APIs before rolling out your product to the client or end-users. Although multiple tools for automating API testing are available and known to QAs, we have to decide on the tool that best suits our project requirements and can scale and run without much maintenance or upgrade effort while creating a test suite. And once the tool is finalized, we have to design an end-to-end flow with the tool that is easy to use, so we get the most benefit out of our automation efforts.

I am writing this two-blog series to talk about the end-to-end flow of API automation with Postman, from deciding on the tool to implementing the suite, integrating with a CI tool and generating reports. The content of these blogs is based entirely on our experience and learning while setting it up in our project.

In the first blog of the series, we’ll talk about the phases till implementation while in the next blog we’ll discuss integration with Jenkins, monitoring etc.

I have structured this blog into the following phases:

  • Doing POC and Deciding on the tool depending on its suitability to meet project requirement
  • Checking tool’s scalability
  • Suite creation with basic components
  • Testing the Suite using Mock servers

Before moving on to these topics in detail, a brief introduction for those who are new to Postman: Postman is one of the most renowned tools for testing APIs and is most commonly used by developers and testers. It allows for repeatable, reliable tests that can be automated and used in a variety of environments like Dev, Staging and Production. It presents a friendly GUI for constructing requests and reading responses, making it easy for anyone to get started without prior knowledge of any scripting language, since Postman also has a feature called 'Snippets'. By default, these are in JavaScript, but using them you can generate code snippets in a variety of languages and frameworks such as Java, Python, C, cURL and many others.

Let’s now move to each phase one by one wherein I’ll talk about our project specific criteria and examples in detail.

Doing POC and Deciding on the tool

A few months back, when we came up with a plan to automate the API tests of our application, the first question in mind was: which tool will best suit our requirements?

The team was briefly familiar with Postman, JMeter, REST Assured and FitNesse for API automation. The main criteria for selecting the tool were to have an open-source option which helps to quickly get started with the test automation task, is easy to use, gives nice and detailed reporting, and is easy to integrate with a CI tool.

We could quickly create a POC of the complete end-to-end flow using Postman and a similar POC for comparison in JMeter. Postman came out as the better option in terms of reporting and user-friendliness, since it does not require much scripting knowledge and hence anyone in the team can pitch in anytime and contribute to the automation effort.

Checking tool’s scalability

Now that we liked the tool and wanted to go ahead with it to build a complete automation suite, the next set of questions on our mind was related to the limitations and scalability of the Postman free version.

This was important to evaluate first and foremost before starting the actual automation effort, as we wanted to avoid any unnecessary rework. Thus we started finding answers to our questions. While we could find a few answers through web searches, for some of the clarifications we had to reach out to Postman customer support to be doubly sure about the availability and limitations of the tool.

As a gist, it is important to know that if you are using postman free version then:

  • While using a personal workspace there is no upper limit on the number of collections, variables, environments, assertions and collection runs, but if you want to use a shared/team workspace then there is a limit of 25 requests.
  • If you are using Postman's API for any purpose (for example to add/update collections, update environments, or add and run monitors) then a limit of 1,000 requests and a rate limit of 60 apply.
  • Postman's execution performance does not really depend on the number of requests, but mainly on how heavy the computations performed in the scripts are.

This helped us understand whether the free version suffices for our requirements or not. Since we were not planning to use Postman APIs or monitoring services, we were good to go ahead with the free version.

Suite creation with basic components

Creating an automation suite with Postman requires an understanding of the following building blocks (refer to Figure 1):

  • Collections & Folders: Postman Collections are a group of saved requests you can organize into folders. This helps in achieving the nice readable hierarchies of requests.
  • Global/Environment variables: An environment is a set of key-value pairs. It lets you customize requests using variables, so you can easily switch between different setups without changing your requests. Global variables allow you to access data across collections, requests, test scripts, and environments, while environment variables have a narrower scope and apply only to the selected environment. For instance, we have multiple test environments like Integration, Staging and Production, so we can run the same collection in all three environments without any changes to the collection, just by maintaining three environments with environment-specific values for the same keys (a small script sketch follows Figure 1).
  • Authentication options: APIs use authorization to ensure that client requests access data securely. Postman is equipped with various authorization methods, from simple Basic Auth to the special AWS signature to OAuth and NTLM authentication.
  • Pre-Request: Pre-request scripts are snippets of code associated with a collection request that are executed before the request is sent. Some common use cases for pre-request scripts are generating values and injecting them into requests through environment variables, or converting a data type/format before passing it to the test script.
  • Tests: Tests are scripts written in JavaScript that are executed after a response is received. Tests can be run as part of a single request or run with a collection of requests.
  • Postman in-built js snippets for creating assertions: Postman allows us to write JavaScript code which can assert on the responses and automatically check the response. We can use the Snippets feature of the Tests tab to write assertions.

Figure 1: Basic Building Blocks of Postman
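As a small illustration of how scripts interact with these variables, here is a sketch using Postman's scripting API (the keys and values are illustrative):

// In a pre-request or test script (key names are illustrative)
pm.environment.set("baseUrl", "https://staging.example.com");
const authToken = pm.environment.get("authToken");
pm.globals.set("lastRunAt", new Date().toISOString());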

Testing the Suite using Mock servers

Using the base framework that we created during the POC, we were able to extend it with multiple requests and multiple tests around each request's response, giving us a full-fledged automation suite that runs daily.

In our case, the first problem statement to be tackled with API automation was a set of APIs for a reporting module.

Since the report contains dynamic data, and generating a fixed set of test data is very tough due to multiple environmental factors, it was not possible for us to apply fixed assertions to validate data accuracy. That's why we had to come up with checks that don't match the exact data values but are still thorough enough to verify the validity of the data and report actual failures.

One practice we followed during this exercise that turned out to be really beneficial: before starting to write the tests in the tool itself, clearly list down, in detail, exactly what you want to assert.

For simple APIs with static responses, this list might be pretty straightforward to define. But in our example, it required a good amount of brainstorming to come up with a list of assertions that can actually check the validity of the response without knowing the data values themselves.

So we thoroughly studied the API responses, came up with our PASS/FAIL criteria, listed down each assertion in our own words in our plan, and then went ahead with converting them into actual Postman assertions (a sample of how a few of these translate into test scripts follows the list). For example:

-> Response Code 200 OK
-> Schema Validation
-> Not Null check for applicable values
-> Exact value check for a set of values
-> Match Request start/end time with response start/end time
-> Range validation for a set of values (between 0-1)
-> Data Validation Logic: Detailed logic in terms of response   objects/data with if/else criteria for defined PASS/FAIL cases (details removed)
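As an illustration, here is a minimal sketch of how a few of the checks above can be expressed as Postman test-script assertions (the response field names are hypothetical; the real assertions depend on the actual response schema):

pm.test("Response code is 200 OK", function () {
    pm.response.to.have.status(200);
});

var jsonData = pm.response.json();

pm.test("Applicable values are not null", function () {
    pm.expect(jsonData.reportId).to.not.be.null; // hypothetical field
});

pm.test("Exact value check", function () {
    pm.expect(jsonData.status).to.eql("COMPLETED"); // hypothetical field and value
});

pm.test("Values lie in the expected range (0-1)", function () {
    pm.expect(jsonData.score).to.be.within(0, 1); // hypothetical field
});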

As we see in the above list, we have a number of positive and negative tests covered. While we have such assertions in place in Postman, we can't be sure they will work when such responses are actually generated until we test them thoroughly. If we only test the Postman collection against actual environment responses, we might never see each type of failed response.

To test this, we need a way to mock the API request and response so that it is very similar to the actual response but has some values modified to invalid ones, to check whether our scripts and assertions catch them as failures. This is possible in Postman through mock servers. You can add a mock server for a new or existing collection by navigating to New -> Mock Server in Postman (refer to Figure 2).

A Postman mock server lets you mock a server response, allowing a team to develop or write tests against a service that is not yet complete or is unstable. Instead of hitting the actual endpoint URL, the request is made to the request path specified in the mock server, and the corresponding mocked test responses are returned, so we can see how our script behaves for such requests and responses. Thus, during actual execution against live endpoints, if similar scenarios occur, we already know how our script is going to handle them.

Figure 2: Adding mocked request/response using Mock Server in Postman

Now once we have our test suite ready with required cases added and tested, we are good to start scheduling it to run daily for our test environments so that it checks the API health and reports failures.

In our next blog we will discuss these phases in detail. Stay tuned.

Defence against Rising Bot Attacks

Bots are software programs that perform automated tasks over the Internet. They can be used for productive tasks, but they are frequently used for malicious activities as well. They are commonly categorised as good bots and bad bots.

Good bots are used for positive purposes, like chatbots that solve customer queries and web crawlers that index pages for search engines. A plain text file, robots.txt, can be placed in the root of a site, and rules can be configured in this file to allow or deny access to different site URLs. This way, good bots can be controlled and allowed to access only certain site resources.

Then come the bad bots, which are malicious programs that perform activities in the background on the victim's machine without the user's knowledge. Such activities include accessing certain websites without the user's knowledge or stealing the user's confidential information. Bad bots are also spread across the Internet to perform DDoS (Distributed Denial of Service) attacks on target websites. The following techniques can be used to deter malicious bots from accessing the resource-intensive APIs of web applications.

  1. Canvas Fingerprint:

Canvas fingerprinting works on the HTML5 canvas element. A small image is drawn on a tiny canvas element. Each device generates a slightly different hash of this image based on the browser, operating system and installed graphics card. This technique alone is not sufficient to uniquely identify users, because there will be groups of users sharing the same configuration and device. But it has been observed that when a bot scans through web pages it tries to access every link present on the landing page, so the time the bot takes to click a link on that page is always similar. These two signals, the canvas fingerprint and the time to click, can be combined to decide whether to grant access to the application or to ask the user to validate that they are genuine by showing a captcha. The user is given access to the site after successfully validating the captcha. So if requests keep coming from the same device (same fingerprint) with the same time-to-click interval, they fall into the bot category and have to be validated by showing a captcha to the user. This improves the user experience, as the captcha is not shown to all users but only for suspected requests.
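A minimal sketch of how such a canvas fingerprint can be computed in the browser (the drawing parameters are illustrative and the hashing step is simplified; a real implementation would use a proper hash function or a fingerprinting library):

// Draw a small image on an off-screen canvas and derive a fingerprint from the pixels
function getCanvasFingerprint() {
  var canvas = document.createElement('canvas');
  canvas.width = 200;
  canvas.height = 50;
  var ctx = canvas.getContext('2d');
  ctx.textBaseline = 'top';
  ctx.font = '14px Arial';
  ctx.fillStyle = '#f60';
  ctx.fillRect(0, 0, 100, 25);
  ctx.fillStyle = '#069';
  ctx.fillText('fingerprint-sample', 2, 15);
  // The rendered pixels differ slightly per browser/OS/graphics card combination
  var dataUrl = canvas.toDataURL();
  // Simplified hash of the image data: use a real hash (e.g. SHA-256) in practice
  var hash = 0;
  for (var i = 0; i < dataUrl.length; i++) {
    hash = (hash * 31 + dataUrl.charCodeAt(i)) | 0;
  }
  return hash.toString(16);
}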

  2. Honey Trap:

The honey trap works by placing a few hidden links on the landing pages along with the actual links. Since bots try every link on the page when sniffing the landing page, they get trapped by the hidden links, which point to 404 pages. This data can be collected and used to identify the source of the requests, and later those sources can be blocked from accessing the website. A genuine user only sees the actual links and can access the site normally.
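As a rough sketch, assuming a Node.js/Express backend, a honey-trap endpoint could record and block such sources like this (the hidden path and blocking policy are illustrative):

const express = require('express');
const app = express();
const trappedSources = new Set();

// Block any source that has previously hit the honey-trap link
app.use((req, res, next) => {
  if (trappedSources.has(req.ip)) {
    return res.status(403).send('Forbidden');
  }
  next();
});

// Hidden link target: genuine users never see this link, so any hit is treated as a bot
app.get('/hidden-promo-link', (req, res) => { // path name is illustrative
  trappedSources.add(req.ip);
  res.status(404).send('Not Found'); // behaves like an ordinary 404 page
});

app.listen(3000);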

  3. Blacklist IP address:

This is not the best solution on its own, because bots are smart enough to change their IP address with each request. However, it helps reduce some of the bot traffic by providing one layer of protection.

  4. Blacklist X-Requested-With:

There are some malicious apps that click certain links on the user's device in the background without the user knowing about it. Those requests carry an X-Requested-With header containing the app's package name. The web application can be configured to block all requests that contain a fraudulent X-Requested-With value.

  5. Blacklist User-Agent:

There are many third-party service providers that maintain lists of User-Agents that bots use. The website can be configured to block all requests coming from these blacklisted user agents. As with IP addresses, this field can also be changed by bot owners with each request, hence it does not provide foolproof protection.
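A minimal sketch of such header-based blacklisting as an Express middleware (the blacklist entries are illustrative; in practice they would come from an up-to-date feed):

const blockedPackages = new Set(['com.example.fraudapp']); // illustrative package names
const blockedUserAgents = [/BadBot/i, /EvilCrawler/i];     // illustrative patterns

function blacklistFilter(req, res, next) {
  const requestedWith = req.get('X-Requested-With');
  const userAgent = req.get('User-Agent') || '';
  if ((requestedWith && blockedPackages.has(requestedWith)) ||
      blockedUserAgents.some((pattern) => pattern.test(userAgent))) {
    return res.status(403).send('Forbidden');
  }
  next();
}

// app.use(blacklistFilter);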

Conclusion

Different defences suit different cases. There are a few advertising partners that try to convert the same user again and again just to increase their share. For such cases, canvas fingerprinting is the most suitable solution: it identifies requests coming from the same device several times within a time interval, marks them as bot traffic and asks the user to validate before proceeding further.

The honey trap is an ideal defence when there are many advertising partners working to bring in more traffic and we need to report which partners provide more bot traffic. From this report, bot sources can be identified, and requests coming from that traffic can later be blocked. The other defences work on blacklisting, e.g. IP address, User-Agent and X-Requested-With. These are part of the request headers and can easily be changed by bots from time to time. When using these blacklisting defences, the application owner needs to make sure they are using the most up-to-date list of fraud-causing agents. Given the pace at which new frauds appear daily, keeping track of updated fraud-causing agents is a challenge.

So blacklisting can be used as the first line of defence to filter the most notorious bots, and canvas fingerprinting can then filter the more advanced ones.

 

Building a basic REST API using Django REST Framework

An API (Application Programming Interface) is a piece of software that allows two applications to talk to each other.

In this tutorial, we will explore how to create a Django REST Framework (DRF) API. We will build a Django REST application with Django 2.x that allows users to create, edit, and delete records through the API.

Why DRF:

Django REST framework is a powerful and flexible toolkit for building Web APIs.

Some reasons you might want to use REST framework:

  • The Web browsable API is a huge usability win for your developers.
  • Authentication policies including packages for OAuth1a and OAuth2.
  • Serialization that supports both ORM and non-ORM data sources.
  • Customizable all the way down – just use regular function-based views if you don’t need the more powerful features.
  • Extensive documentation, and great community support.
  • Used and trusted by internationally recognized companies including Mozilla, Red Hat, Heroku, and Eventbrite.

Traditionally, Django is known to many developers as an MVC Web Framework, but it can also be used to build a backend, which in this case is an API. We shall see how you can build a backend with it.

Let’s get started

In this blog, you will be building a simple API for a simple employee management service.

Setup your Dev environment:

Please install python 3. I am using python 3.7.3 here

You can check your python version using the command

$ python -V

Python 3.7.3

After installing python, you can go ahead and create a working directory for your API and then set up a virtual environment.

You can set up a virtual environment with the command below:

$ pip install virtualenv

Create directory employee-management and use that directory

$ mkdir employee-management && cd employee-management
# creates a virtual environment named drf_api
employee-management $ virtualenv --python=python3 drf_api
# activate the virtual environment named drf_api
employee-management $ source drf_api/bin/activate

 

This will activate the virtual env that you have just created.

Let’s install Django and djangorestframework in your virtual env.

I will be installing Django 2.2.3 and djangorestframework 3.9.4

(drf_api) employee-management $ pip install Django==2.2.3
(drf_api) employee-management $ pip install djangorestframework==3.9.4

Start Project:

After setting up your Dev environment, let’s start a Django Project. I am creating a project with name API

 

(drf_api) employee-management $ django-admin.py startproject api

(drf_api) employee-management $ cd api

 

Now create a Django app. I am creating employees app

 

(drf_api) employee-management $ django-admin.py startapp employees

 

Now you will have a directory structure like this:

 

api/

    manage.py

    api/

        __init__.py

        settings.py

        urls.py

        wsgi.py

    employees/

        migrations/

             __init__.py

        __init__.py

        admin.py

        apps.py

        models.py

        tests.py

        views.py

    drf_api/

 

The app and project are now created. We will now sync the database.  By default, Django uses sqlite3 as a database.

If you open api/settings.py you will notice this:

 

DATABASES = {
     'default': {
         'ENGINE': 'django.db.backends.sqlite3',
         'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
     }
 }

 

You can change the DB engine as per your need, e.g. PostgreSQL.

We will create an initial admin user and set a password for the user.

 

(drf_api) employee-management $ python manage.py migrate

(drf_api) employee-management $ python manage.py createsuperuser --email superhuman@blabla.com --username admin

 

Let's add our apps to the API. Open the api/settings.py file and add the rest_framework and employees apps to INSTALLED_APPS.

 

INSTALLED_APPS = [
    ...
    'rest_framework',
    'employees',
]


Open the api/urls.py file and add URLs for the 'employees' app:

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('employees.urls')),
]

 

This makes your basic setup ready and now you can start adding code to your employees’ service API.

TDD – Test Driven Development

Before we write the business logic of our API, we will need to write a test. So this is what we are doing: write a unit test for a view and then update the code so that the test case passes.

Let's write a test for the GET employees/ endpoint

Let's create a test for the endpoint that returns all employees: GET employees/.

Open the employees/tests.py file and add the following lines of code;


Do not try to run this code yet. We have not added the view or model code yet. Let’s add the view now.

Add the View for GET employees/ endpoint

Now we will add the code for the view that will respond to the request GET employees/.

Model: First, add a model that will store the data about the employees that will be returned in the response. Open the employees/models.py file and add the following lines of code.

 

 

We will add our model to the admin. This will help in managing employees through the admin UI, for example adding or removing an employee. Let's add the following lines of code to the employees/admin.py file.

 

 

Now run make migrations from the command line

(drf_api) employee-management $ python manage.py makemigrations

Now run migrate command. This will create the employee’s table in your DB.

(drf_api) employee-management $ python manage.py migrate

 

Serializer: Add a serializer. Serializers allow complex data such as query sets and model instances to be converted to native Python datatypes that can then be easily rendered into JSON, XML or other content types.

Add a new file employees/serializers.py and add the following lines of code;

 

 

Serializers also provide deserialization, allowing parsed data to be converted back into complex types, after first validating the incoming data. The serializers in REST framework work very similarly to Django’s Form and ModelForm classes.

 

View: Finally, add a view that returns all employees. Open the employees/views.py file and add the following lines of code;

 

 

Here we have specified how to get the objects from the database by setting the queryset attribute of the class, and we have specified a serializer that will be used in serializing and deserializing the data.

The view in this code inherits from a generic viewset ListViewSet

Connect the views

Before you can run the tests, you will have to link the views by configuring the URLs.

Open the api/urls.py file and add the following lines of code;

 

 

Now go to employees/urls.py and add below code;

 

Let’s run the test!

First, let’s run automated tests. Run the command;

 (drf_api) employee-management $ python manage.py test

 

The output in your shell should be similar to this;

 

How to test this endpoint manually?

From your command line, run the command below:

 

(drf_api) employee-management $  nohup python manage.py runserver & disown

 

Now type http://127.0.0.1:8000/admin/ in your browser. You will be prompted for a username and password. Enter the admin username and password which we created in the createsuperuser step.

The screen will look like below once you log in :

 

 

Let's add a few employees using the Add button.

 

Once you have added the employees, let's test our view-employees API by hitting the URL below:

 

http://127.0.0.1:8000/api/v1/employees/

 

If you are able to see the above screen, your API works.

Congrats! Your first API using DRF is live.

Scala code analysis and coverage report on Sonarqube using SBT

Introduction

This blog is all about configuring the scoverage plugin with SonarQube for tracking statement coverage as well as static code analysis for a Scala project. SonarQube has support for many languages, but it doesn't have built-in support for Scala, so this blog will guide you through configuring the sonar-scala and scoverage plugins to generate code analysis and code coverage reports.

The scoverage plugin for SonarQube reads the coverage reports generated by sbt coverage test and displays those in sonar dashboard.

Here are the steps to configure Scala projects with SonarQube for code coverage as well as static code analysis.

  1. Install sonarqube and start the server.
  2. Go to the Sonarqube marketplace and install `SonarScala` plugin.

This plugin provides static code analyzer for Scala language. It supports all the standard metrics implemented by sonarQube including Cognitive complexity.

  3. Add the `Scoverage` plugin to Sonarqube from the marketplace

This plugin provides the ability to import statement coverage generated by Scoverage for scala projects. Also, this plugin reads XML report generated by Scoverage and populates several metrics in Sonar.

Requirements:

i.  SonarQube 5.1

ii. Scoverage 1.1.0

4. Now add the `sbt-sonar` plugin dependency to your Scala project: addSbtPlugin("com.github.mwz" % "sbt-sonar" % "1.6.0")

This sbt plugin can be used to run sonar-scanner launcher to analyze a Scala project with SonarQube.

Requirements:

i.  sbt 0.13.5+

ii. Scala 2.11/2.12

iii. SonarQube server.

iv. sonar-scanner (See point#5 for installation)

5. Configure `sonar-scanner` executable

 

6. Now, configure the sonar-properties in your project. This can be done in 2 ways

  • Use sonar-project.properties file:

This file has to be placed in your root directory. To use an external config file you can set the sonarUseExternalConfig to true.

import sbtsonar.SonarPlugin.autoImport.sonarUseExternalConfig

sonarUseExternalConfig := true

  • Configure Sonar-properties in build file:
  • By default, the plugin expects the properties to be defined in the sonarProperties setting key in sbt
import sbtsonar.SonarPlugin.autoImport.sonarProperties

sonarProperties ++= Map(

"sonar.sources" -> "src/main/scala",

"sonar.tests" -> "src/test/scala",

"sonar.modules" -> "module1,module2")
  7. Now run the below commands to publish code analysis and code coverage reports to your SonarQube server.
  • sbt coverage test
  • sbt coverageReport
  • sbt sonarScan

 

SonarQube integration is really useful for performing an automatic review of code to detect bugs, code smells and security vulnerabilities. SonarQube can also track history and provide a visual representation of it.

Introduction to Akka Streams

Why Streams?

In software development, there can be cases where we need to handle a potentially large amount of data. Handling these kinds of scenarios can lead to issues such as `out of memory` exceptions, so we should divide the data into chunks and handle each chunk independently.

This is where Akka Streams come to the rescue, helping us do this in a more predictable and less chaotic manner.

Introduction

Akka Streams consist of three major components: Source, Flow and Sink. Any non-cyclical stream consists of at least two components (a Source and a Sink) and any number of Flow elements. In a sense, Source and Sink are special cases of Flow.

  • Source – this is the Source of data. It has exactly one output. We can think of Source as Publisher.
  • Sink – this is the Receiver of data. It has exactly one input. We can think of Sink as Receiver.
  • Flow – this is the Transformation that acts on the Source. It has exactly one input and one output.

The Flow sits between the Source and the Sink, as it represents the transformations applied to the Source data.

 

 

A very good thing is that we can combine these elements to obtain new ones, e.g. combining a Source and a Flow yields another Source.

Akka Streams are called reactive streams because of their backpressure handling capabilities.

What are Reactive Streams?

Applications developed using streams can run into problems if the Source generates data faster than the Sink can handle. This causes the Sink to buffer the data, but if the data is too large, the Sink's buffer will also grow and can lead to memory issues.

To handle this, the Sink needs to communicate with the Source to slow down the generation of data until it has finished handling the current data. This communication between Publisher and Receiver is called backpressure handling, and streams that implement this mechanism are called Reactive Streams.

Example using Akka Stream:

In this example, let’s try to find out prime numbers between 1 to 10000 using Akka stream. Akka stream version used is 2.5.11.

 

package example.akka

import akka.{Done, NotUsed}
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl._

import scala.concurrent.Future
object AkkaStreamExample {

def isPrime(i :Int) : Boolean = {
 if (i <= 1) false
 else if (i == 2) true
 else !(2 until i).exists(x => i % x == 0)
 }

def main(args: Array[String]): Unit = {
 implicit val system = ActorSystem("actor-system")
 implicit val materializer = ActorMaterializer()

val numbers = 1 to 10000

//Source that will iterate over the number sequence
 val numberSource: Source[Int, NotUsed] = Source.fromIterator(() => numbers.iterator)

//Flow for Prime number detection
 val isPrimeFlow: Flow[Int, Int, NotUsed] = Flow[Int].filter(num => isPrime(num))

//Source from original Source with Flow applied
 val primeNumbersSource: Source[Int, NotUsed] = numberSource.via(isPrimeFlow)

//Sink to print the numbers
 val consoleSink: Sink[Int, Future[Done]] = Sink.foreach[Int](println)

//Connect the Source with the Sink and run it using the materializer
 primeNumbersSource.runWith(consoleSink)
 }
}

 

Above example illustrated as a diagram:

 

 

  1. `Source` – based on the number iterator

`Source`, as explained already, represents a stream. Source takes two type parameters. The first one represents the type of data it emits and the second one is the type of the auxiliary value it can produce when ran/materialized. If we don’t produce any we use the NotUsed type provided by Akka.

The static methods to create Source are

  • fromIterator – emits elements until the iterator is empty
  • fromPublisher – uses an object that provides Reactive Streams Publisher functionality
  • fromFuture – creates a new Source from a given Future
  • fromGraph – creates a Source from a Graph
  2. `Flow` – filters out only prime numbers

Basically, `Flow` is an ordered set of transformations to the provided input. It takes 3 type parameters – input datatype, output datatype & auxiliary datatype.

We can create a Source by combining an existing one with a Flow, as used in the code:

val primeNumbersSource: Source[Int, NotUsed] = numberSource.via(isPrimeFlow)

  3. `Sink` – prints numbers to the console

It is basically the subscriber of the data and the last element of the stream.

The Sink is basically a Flow which uses a foreach or fold function to run a procedure over its input elements and propagate the auxiliary value.

As with Source and Flow, the companion object provides methods for creating an instance of it. The main ones are:

  • foreach – run the given function for each received element
  • foreachParallel – same as foreach, except it runs in parallel
  • fold – run the given function for each received element, propagating the resulting value to the next iteration.

The runWith method produces a Future that will be completed when the Source is empty and Sink is finished with the processing of elements. If processing fails it returns Failure.

We can also create a RunnableGraph instance and run it manually using toMat (or viaMat).

  4. `ActorSystem` and `ActorMaterializer` are needed because Akka Streams use the Akka actor model.

The `ActorMaterializer` class instance is needed to materialize a Flow into a Processor, which represents a processing stage, a construct from the Reactive Streams standard that Akka Streams implements.

In fact, Akka Streams employs back-pressure as described in the Reactive Streams standard mentioned above. Source, Flow, Sink get eventually transformed into low-level Reactive Streams constructs via the process of materialization.

App Store Connect API To Automate TestFlight Workflow

TestFlight

Most mobile application developers try to automate the build sharing process, as it is one of the most tedious tasks in the app development cycle. However, it has always remained difficult, especially for iOS developers, because of Apple's code signing requirements. So when iOS developers start thinking about automating build sharing, the first option that comes to mind is TestFlight.

Before the TestFlight acquisition by Apple, it was easy to automate the build sharing process. TestFlight had its own public APIs (http://testflightapp.com/api) to upload and share builds from the command line, and developers used these APIs to write automation scripts. After Apple's acquisition, TestFlight became part of App Store Connect and the old APIs were invalidated. Therefore, to upload or share builds, developers had to rely on third-party tools like Fastlane.

App Store Connect API

In WWDC 2018, Apple announced the new App Store Connect API and made it publicly available in November 2018. By using the App Store Connect API, developers can now automate the TestFlight workflow without relying on any third-party tool.

In this short post, we will see a use case example of the App Store Connect API for TestFlight.

Authentication

App Store Connect API is a REST API to access data from the Apple server. Use of this API requires authorization via a JSON Web Token (JWT); an API request without this token results in the error "NOT_AUTHORIZED". Generating the JWT token is a tedious task. We need to follow the steps below to use the App Store Connect API:

  1. Create an API key in the App Store Connect portal
  2. Generate JWT token using above API key
  3. Send JWT token with API call

Let’s now deep dive into each step.

Creating the API Key

The API key is a pair of public and private keys. You can download the private key from App Store Connect, and the public key is stored on the Apple server. To create the private key, follow the steps below:

  1. Login to app store connect portal
  2. Go to ‘Users and Access’ section
  3. Then select ‘Keys’ section

The account holder (Legal role) needs to request access to generate the API key.

Once you get access, you can generate an API key.

There are different access levels for keys like Admin, App Manager, developer etc. Key with the ‘Admin’ access can be used for all App Store Connect API.

Once you generate the API key, you can download it. The key can be downloaded only once, so make sure to keep it secure once downloaded.

The API key never expires; you can use it as long as it's valid. In case you lose it or it is compromised, remember to revoke it immediately, because anyone who has this key can access your App Store Connect records.

Generate JWT Token

Now we have the private key required to generate the JWT token. To generate the token, we also need the below-mentioned parameters:

  1. Private key Id: You can find it on the Keys tab (KEY ID).
  2. Issuer Id: Once you generate the private key, you will get an Issuer_ID. It is also available on the top of the Keys tab.
  3. Token Expiry: The generated token can be used for a maximum of 20 minutes; it expires after the specified time.
  4. Audience: As of now it is “appstoreconnect-v1”
  5. Algorithm: The ES256 JWT algorithm is used to generate a token.

Once all the parameters are in place, we can generate the JWT token. To generate it, there is a Ruby script which is used in the WWDC demo.

require "base64"
require "jwt"
ISSUER_ID = "ISSUER_ID"
KEY_ID = "PRIVATE_KEY_ID"
private_key = OpenSSL::PKey.read(File.read("path_to_private_key/AuthKey_#{KEY_ID}.p8"))
token = JWT.encode(
 {
    iss: ISSUER_ID,
    exp: Time.now.to_i + 20 * 60,
    aud: "appstoreconnect-v1"
 },
 private_key,
 "ES256",
 header_fields={
 kid: KEY_ID }
)
puts token

 

Let’s take a look at the steps to generate a token:

  1. Create a new file with the name jwt.rb and copy the above script in this file.
  2. Replace the Issuer_Id, Key_Id and private key file path values in the script with your actual values.
  3. To run this script, you need to install the jwt Ruby gem on your machine. Use the following command to install it: $ sudo gem install jwt
  4. After installing the ruby gem, run the above script by using the command: $ ruby jwt.rb

You will get a token as an output of the above script. You can use this token along with the API call! Please note that the generated token remains valid for 20 minutes. If you want to continue using it after 20 minutes, then don’t forget to create another.

Send JWT token with API call

Now that we have a token, let’s see a few examples of App Store Connect API for TestFlight. There are many APIs available to automate TestFlight workflow. We will see an example of getting information about builds available on App Store Connect. We will also look at an example of submitting a build to review process. This will give you an idea of how to use the App Store Connect API.

Example 1: Get build information:

Below is the API for getting the build information. If you hit this API without the JWT token, it will respond with an error:

$ curl https://api.appstoreconnect.apple.com/v1/builds
{
 "errors": [{
 "status": "401",
 "code": "NOT_AUTHORIZED",
 "title": "Authentication credentials are missing or invalid.",
 "detail": "Provide a properly configured and signed bearer token, and make sure that it has not expired. Learn more about Generating Tokens for API Requests https://developer.apple.com/go/?id=api-generating-tokens"
 }]
}

So you need to pass the above-generated JWT token in the request:

$ curl https://api.appstoreconnect.apple.com/v1/builds --header "Authorization: Bearer your_jwt_token"
{
"data": [], // Array of builds available in your app store connect account
"links": {
"self": "https://api.appstoreconnect.apple.com/v1/builds"
},
"meta": {
"paging": {
"total": 2,
"limit": 50
}
}
}

 

Example 2: Submit build for review process:

By using the above build API, you can get an ID for the build. Use this ID to submit a build for the review process. You can send the build information in a request body like:

{
 "data": {
 "type": "betaAppReviewSubmissions",
 "relationships": {
 "build": {
 "data": {
 "type": "builds",
 "id": “your_build_Id"
 }
 }
 }
 }
}

In the above request body, you just need to replace your build ID. So the final request will look like:

$ curl -X POST -H "Content-Type: application/json" --data '{"data":{"type":"betaAppReviewSubmissions","relationships":{"build":{"data":{"type":"builds","id":"your_build_Id"}}}}}' https://api.appstoreconnect.apple.com/v1/betaAppReviewSubmissions --header "Authorization: Bearer your_jwt_token"

That’s it. The above API call will submit the build for the review process. This way you can use any other App Store Connect API like getting a list of beta testers or to manage beta groups.

Conclusion

We have seen the end-to-end flow for the App Store Connect API. By using these APIs, you can automate the TestFlight workflow. You can also develop tools to automate the release process without relying on any third-party tool. You can find the documentation for the App Store Connect API here. I hope you'll find this post useful. Good luck and have fun.

 

 

 

 

 

WebRTC – Basics of web real-time communication

WebRTC is a free, open-source standard for real-time, plugin-free video, audio and data communication between peers. Many solutions like Skype, Facebook and Google Hangouts offer RTC, but they need downloads, native apps or plugins. The guiding principles of the WebRTC project are that its APIs should be open source, free, standardized, built into web browsers and more efficient than existing technologies.

How does it work

  • Obtain a Video, Audio or Data stream from the current client.
  • Gather network information and exchange it with peer WebRTC enabled client.
  • Exchange metadata about the data to be transferred.
  • Stream audio, video or data.

That's it! Well, almost: that is a simplified version of what actually happens. Now that you have an overall picture, let's dig into the details.

How it really works

WebRTC provides the implementation of 3 basic APIs to achieve everything.

  • MediaStream: Allowing the client to access a stream from a WebCam or microphone.
  • RTCPeerConnection: Enabling audio or video data transfer, with support for encryption and bandwidth management.
  • RTCDataChannel: Allowing peer-to-peer communication for any generic data.

Along with these capabilities, we still need a server (yes, we still need a server!) to identify the remote peer and to do the initial handshake. Once the peer has been identified, we can transfer data directly between the two peers if possible, or relay the information through a server.

Let’s look at each of these steps in detail.

MediaStream

MediaStream has a getUserMedia() method to get access to an audio or video stream and provide success and failure handlers.

 

navigator.getUserMedia(constraints, successCallback, errorCallback);

 

The constraints parameter is an object which specifies whether audio or video access is required. In addition, we can specify some metadata about the constraints, like video width and height. Example:

 

navigator.getUserMedia({ audio: true, video: true}, successCallback, errorCallback);

 

RTCPeerConnection

This interface represents the connection between the local WebRTC client and a remote peer. It is used to do the efficient transfer of data between the peers. Both peers need to set up an RTCPeerConnection at their end. In general, we use the RTCPeerConnection onaddstream event callback to take care of the audio/video stream.

  • The initiator of the call (the caller) needs to create an offer and send it to the callee, with the help of a signalling server.
  • The callee, which receives the offer, needs to create an answer and send it back to the caller using the signalling server (a sketch of this exchange follows below).
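A minimal sketch of this offer/answer exchange (sendToSignallingServer is a hypothetical helper standing in for whatever signalling transport you use):

// Caller side: create an offer and send it via the signalling channel
const callerPc = new RTCPeerConnection();

callerPc.createOffer()
  .then((offer) => callerPc.setLocalDescription(offer))
  .then(() => sendToSignallingServer({ type: 'offer', sdp: callerPc.localDescription }));

// Callee side: receive the offer, create an answer and send it back
function onOfferReceived(remoteOffer) {
  const calleePc = new RTCPeerConnection();
  calleePc.setRemoteDescription(new RTCSessionDescription(remoteOffer))
    .then(() => calleePc.createAnswer())
    .then((answer) => calleePc.setLocalDescription(answer))
    .then(() => sendToSignallingServer({ type: 'answer', sdp: calleePc.localDescription }));
}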
ICE

It is a framework that allows web browsers to connect with peers. There are many reasons why a straight-up connection from Peer A to Peer B simply won't work: most clients won't have a public IP address, as they are usually sitting behind a firewall and a NAT. Given the involvement of NAT, our client has to figure out the IP address of the peer machine. This is where Session Traversal Utilities for NAT (STUN) and Traversal Using Relays around NAT (TURN) servers come into the picture.

STUN

A STUN server allows clients to discover their public IP address and the type of NAT they are behind. This information is used to establish a media connection. In most cases, a STUN server is only used during the connection setup and once that session has been established, media will flow directly between clients.

TURN

If a STUN server cannot establish the connection, ICE can switch to TURN. Traversal Using Relays around NAT (TURN) is an extension to STUN that allows media traversal over a NAT that does not allow the peer-to-peer connection required by STUN traffic. TURN servers are often used in the case of a symmetric NAT.

Unlike STUN, a TURN server remains in the media path after the connection has been established. That is why the term “relay” is used to define TURN. A TURN server literally relays the media between the WebRTC peers.
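A minimal sketch of configuring STUN/TURN servers and relaying ICE candidates through the signalling channel (the server URLs, credentials and sendToSignallingServer helper are illustrative):

const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },                                    // public STUN server
    { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'pass' } // illustrative TURN entry
  ]
});

// Each discovered candidate is sent to the remote peer via the signalling server
pc.onicecandidate = (event) => {
  if (event.candidate) {
    sendToSignallingServer({ type: 'candidate', candidate: event.candidate });
  }
};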

RTCDataChannel

The RTCDataChannel interface represents a bi-directional data channel between two peers of a connection. Objects of this type can be created using

 

RTCPeerConnection.createDataChannel()

 

Data channel capabilities make use of event-based communication:

var peerConn= new RTCPeerConnection(),
     dc = peerConn.createDataChannel("my channel");
 
 dc.onmessage = function (event) {
   console.log("received: " + event.data);
 };

Links and References