Google Play Instant Apps Integration

What is an instant app?

A Google Android instant app is a small program that lets end users try out a portion of a native Android app without installing it on a device. Although an instant app runs like a locally installed app, it is native code with access to the device’s hardware. Because end users do not install them, instant apps do not take up storage on the device.

Why do we need an instant app?

One of the main benefits of instant apps is discoverability. App developers often struggle to increase the visibility of an app because it is buried within the app store. With instant apps, a Google search or company website can lead the end-user to an instant app.

Instant apps require less effort on an end user’s part, which reduces friction and may decrease the likelihood of a dissatisfied user leaving a negative review.

Android instant apps are particularly powerful for e-commerce organizations and game developers. Game developers benefit because end users can play a particular level of the game; once that level piques their interest, they can download the full app.

Developing an instant app

Modularize and refactor your application

To add support for Instant Apps, developers need to modify their project structure. Most apps today are single-module builds, whereas supporting Instant Apps requires splitting the build into multiple modules called features.

Each feature represents a part of the application that can be downloaded on demand.

The following modules make up a simple Instant App project structure:

  • Base feature module

The base feature module can be thought of as the root of your project. It contains the shared code and resources used by the other modules. The base feature is distinguished from other features by the baseFeature property set to true.

 

apply plugin: 'com.android.feature'

android {
    baseFeature true
    ...
}

dependencies {
    feature project(':feature')
    application project(':app')
    ...
}
  • Feature modules

These are modules that apply the new com.android.feature Gradle plugin. They are essentially library projects, producing an AAR when consumed by other modules. It’s worth noting that they do not have application IDs, because they are just library projects.

 

apply plugin: 'com.android.feature'

android {
    ...
}

dependencies {
    implementation project(':base')
    ...
}
  • APK module

This is the normal application module we are all familiar with. It is now set up to consume the base and feature modules in order to output an APK that can be installed on user devices. Since it produces an installable artifact, this module does have an application ID.

 

apply plugin: 'com.android.application'

android {
    defaultConfig {
        applicationId "com.example.instantappsdemo"
        ...
    }
    ...
}

dependencies {
    implementation project(':my-base-feature')
    implementation project(':my-feature')
}
  • Instant App module

This module applies the com.android.instantapp plugin. It consumes the feature modules and produces a zip of split APKs containing all of the features that go into the Instant App. It is essentially an empty shell of a project, without a manifest, that only depends on the feature modules in the project.

 

apply plugin: 'com.android.instantapp'

dependencies {
    implementation project(':my-base-feature')
    implementation project(':my-feature')
}

Support deep linking and App links

If you have built complex apps that support multiple user flows, you have likely implemented deep linking. Deep linking allows anyone to create a URL that links directly into a particular screen or flow in your app. Since Instant Apps run on URLs, deep links and app links are now a requirement. One major difference from regular deep links is that custom URI schemes are not supported; Instant App links must use http or https.

Every feature in your Instant App must have at least one entry point that is defined as a deep link. This defines what Activity users will see when they click an Instant App URL, or if they navigate to the feature from a different feature in your Instant App. Here is an example of an intent filter that binds a deep link pattern to an Activity.

 

<activity android:name=".InstantDemoActivity">
    <intent-filter
        android:autoVerify="true"
        android:order="1">
        <action android:name="android.intent.action.VIEW"/>

        <category android:name="android.intent.category.BROWSABLE"/>
        <category android:name="android.intent.category.DEFAULT"/>

        <data android:host="example.com"/>
        <data android:pathPattern="/instantadplay/.*"/>
        <data android:scheme="https"/>
        <data android:scheme="http"/>
    </intent-filter>
</activity>

 

App Links

You will also need to associate your web domain with your Instant App’s package name. This binding, known as Android App Links, proves to Google that you own and control the web domain that you wish to associate with your app.

Now, by setting up App Links for your Instant App, users without your installed app will be routed seamlessly to your Instant App.

App Links support is a requirement for Instant Apps to work. To set it up, you simply need to host a single JSON file at the root of your domain or subdomain at <my-domain>/.well-known/assetlinks.json:

 

[
  {
    "relation": [
      "delegate_permission/common.handle_all_urls"
    ],
    "target": {
      "namespace": "android_app",
      "package_name": "com.myapp.packagename",
      "sha256_cert_fingerprints": [
        "96:14:26:30:CC:E3:C0:9B:05:12:7B:9A:31:9E:88:36:82:12:84:27:4C:52:2F:05:FE:66:A8:AB:B9:F0:F5:F0"
      ]
    }
  }
]
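If you want to sanity-check the file you host, its required shape can be verified programmatically. Below is a small illustrative Python helper (my own sketch, not official tooling) that checks whether a parsed assetlinks.json grants handle_all_urls to a given package:

```python
import json

# Illustrative helper (an assumption, not official tooling): check that a
# parsed assetlinks.json grants handle_all_urls to the given package name.
def grants_handle_all_urls(assetlinks, package_name):
    for statement in assetlinks:
        target = statement.get("target", {})
        if (target.get("namespace") == "android_app"
                and target.get("package_name") == package_name
                and "delegate_permission/common.handle_all_urls"
                    in statement.get("relation", [])):
            return True
    return False

doc = json.loads("""
[{"relation": ["delegate_permission/common.handle_all_urls"],
  "target": {"namespace": "android_app",
             "package_name": "com.myapp.packagename",
             "sha256_cert_fingerprints": ["AA:BB"]}}]
""")
assert grants_handle_all_urls(doc, "com.myapp.packagename")
assert not grants_handle_all_urls(doc, "com.other.app")
```

Google’s Digital Asset Links tester remains the authoritative check; this is only a quick local sanity test.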
  • Each feature of your Instant App must be under 4MB in size.
  • Every project that uses feature modules must have exactly one base module and every feature module must depend on the base module.

 

Actual implementation in our SampleApp project

Here is how we structured each module in this test app:

  • Installed Module / APK Module: contains the main app source code.

  • Base Feature Module: in our case, this module does not contain any code or shared source.

  • Feature Module: in our case, this module contains the code that runs as the instant experience, i.e. when the user clicks the “Try Now” button on the Play Store.

  • Instant App Module: the feature modules declared in its build.gradle file make up the instant experience. This module contains no code and no AndroidManifest.xml file.

During rollout you may run into some errors; here are solutions to the most common ones.

Error

  • Your site ‘www.mywebsitename.com’ has not been linked through the Digital Assets Link protocol to your app. Please link your site through the Digital Assets Link protocol to your app.

 

Solution

  • Check whether Google Play App Signing is enabled on the Google Play Console. If it is, Google Play replaces your app’s signing key with a release key, and the key you configured in Android Studio is treated as an upload key. In that case you need to update the sha256_cert_fingerprints in your assetlinks.json with the release key’s certificate fingerprint.

 

Error

  • You should have at least one active APK / installed app that is mapped to site ‘www.mywebsitename.com’ via a web ‘intent-filter’.

 

Solution

  • Upload an installable APK with a web intent-filter for the same host: your app’s main activity should be mapped with the same web intent-filter as the instant app module’s activity.

 

Error

  • Some users of this Instant App APKs will not be eligible for any of the APKs in your installed app.

 

Solution

  • Ensure that the targeting of your Instant App APKs matches the targeting of your installed APKs: keep the targetSdkVersion the same as your previously released APK, and keep it the same across all modules.

 

Error

  • All of your APKs require the following device features: android.hardware.camera, android.hardware.screen.portrait, and android.hardware.location

Solution

  • Add the lines below to your installed module’s AndroidManifest.xml file. Note that android:required is an attribute of <uses-feature>, not <uses-permission>; permissions such as CAMERA and the location permissions implicitly require hardware features, so each implied feature must be explicitly marked as not required:

<uses-feature android:name="android.hardware.screen.portrait"
    android:required="false" />

<uses-feature android:name="android.hardware.location"
    android:required="false" />

<uses-feature android:name="android.hardware.camera"
    android:required="false" />

Data Auditing using Javers

Almost every application deals with data, and by using an application, users modify that data: records are created, updated, or deleted. Eventually, we start looking for ways to audit changes in the data, such as who changed it and when, and sometimes we want to know more, like the previous value before the modification. All these requirements call for a version-control system for data, similar to the version-control system we use for source code.

 

The magic of the log is that if it is a complete log of changes, it holds not only the contents of the final version of the table but also allows recreating all other versions that might have existed. It is, effectively, a sort of backup of every previous state of the table. But there is a complexity involved in terms of space and querying over the growing size of audit data, which will eventually outgrow the live application data. So, we should look out for options where we can separate the live data from audit data for scalability, have efficient indexing techniques available etc.

 

Javers provides answers to all these questions. It is a versatile open-source Java framework for data auditing. It provides options to store audit data in a separate database, even to the extent that audit data for an application whose live data lives in a relational database such as MySQL can be stored in a non-relational database such as MongoDB. We can easily integrate this framework into our application to maintain and browse the history of changes in our data.

 

Features:

 

  1. Object diff: Javers is built on top of an object diff engine. It can be used as a standalone tool to compare two object graphs and get the difference between them as a list of atomic changes.
  2. Javers Repository: the Javers repository is the central part of the Javers data auditing engine. It tracks every change made to audited data, so we can easily identify what changed, when it was made, and who made it. It provides three views on object history: changes, shadows, and snapshots. It also provides a powerful query language, JQL (JaVers Query Language), to browse the detailed history of a given class, object, or property.
    At the moment, Javers provides a MongoDB implementation and SQL implementations for the following dialects: H2, PostgreSQL, MySQL/MariaDB, Oracle, and Microsoft SQL Server.
  3. JSON serialization: Javers persists each state of an object as a snapshot in JSON format. It has a well-designed and customizable JSON serialization and deserialization module based on GSON and Java reflection. The mapping of domain objects to the persistent format (JSON) is done by javers-core, and this common JSON format is used by the Javers Repository implementations.
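To make “a list of atomic changes” concrete, here is a tiny illustrative sketch in Python (Javers itself is a Java library; the change names here only loosely mirror Javers’s vocabulary):

```python
# A conceptual sketch of diffing two object snapshots (dicts) into a
# list of atomic property changes, in the spirit of an object diff engine.
def diff(old, new):
    """Compare two dict snapshots and return atomic property changes."""
    changes = []
    for key in sorted(set(old) | set(new)):
        if key not in old:
            changes.append(("PropertyAdded", key, new[key]))
        elif key not in new:
            changes.append(("PropertyRemoved", key, old[key]))
        elif old[key] != new[key]:
            changes.append(("ValueChanged", key, old[key], new[key]))
    return changes

v1 = {"name": "Laptop", "price": 999}
v2 = {"name": "Laptop", "price": 899, "stock": 5}
assert diff(v1, v2) == [
    ("ValueChanged", "price", 999, 899),
    ("PropertyAdded", "stock", 5),
]
```

Javers applies the same idea recursively over whole object graphs, not just flat dictionaries.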

Let’s see Javers in action now. We will use a Spring Boot REST API project.

For brevity, we will have a single resource, product, for which we will create the following REST APIs secured by basic authentication.

CRUD APIs:

  1. POST API to create a new product
  2. GET API to get the list of products
  3. PUT API to update a product

Audit APIs:

  1. GET API to get the list of versions of a product
  2. GET API to get the list of changes/diff between two versions of a product
  3. PUT API to roll the current version back or forward

We will use the in-memory H2 database to store our data, and Javers will automatically create the tables required for the Javers repository, as follows:

  • jv_global_id — domain object identifiers,
  • jv_commit — Javers commits metadata,
  • jv_commit_property — commit properties,
  • jv_snapshot — domain object snapshots.

You can find the complete project at https://github.com/Rizwanmisger/data-version-control

 

To get started, we will add a Javers dependency to pom.xml
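For example, with Spring Boot and an SQL-backed repository (the version below is an assumption; use the latest javers-spring-boot-starter-sql release):

```xml
<dependency>
    <groupId>org.javers</groupId>
    <artifactId>javers-spring-boot-starter-sql</artifactId>
    <version>5.6.3</version>
</dependency>
```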

 

 

Next, we will create a generic service class, ‘AuditService’, which will handle all the operations related to auditing our entities. It performs the following operations:

  1. Commit: this operation saves a snapshot of an entity along with the author/user making the change. Every new commit creates a new version, also called a shadow.
  2. Get a version of an entity: this operation uses JQL to query for the version of an entity identified by its type, id, and version number. We can then compare the current version with the retrieved version, or even set it as the current version, which amounts to a roll back or a roll forward.
  3. Get all versions of an entity: this operation uses JQL to query for all the versions of an entity. Javers does maintain a chronological order for versions, but since we are providing the ability to roll the current version back and forward, we cannot rely on this order to identify the current version of an entity. However, we can take advantage of Javers’s diff tool to compare the current state of the entity in the live data with the list of available versions to identify the current version.
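The idea in step 3, identifying the current version by comparing the live state against stored versions, can be sketched in a few lines (Python used purely for illustration; Javers does this with snapshots and its diff engine):

```python
# Find the version whose stored snapshot matches the live entity state.
def find_current_version(live_state, snapshots):
    """Return the version number whose snapshot equals the live state."""
    for version, snapshot in snapshots.items():
        if snapshot == live_state:
            return version
    return None

snapshots = {
    1: {"name": "Laptop", "price": 999},
    2: {"name": "Laptop", "price": 899},
}
live = {"name": "Laptop", "price": 899}
assert find_current_version(live, snapshots) == 2
```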

 

 

Finally, we will add a controller to expose the data and audit logs using REST APIs. This controller will perform the following functions:

  1. Whenever a user hits POST API to create a new product, we will call the audit service’s commit method, which will save its snapshot as the first version of this product.
  2. Whenever a user requests change in a product by hitting PUT API, we will update the product and call audit service’s commit method to track this change and save its snapshot as a new version for this product.
  3. Users can view the audit data through the product audit APIs, which use the audit service’s functions to get the list of versions, compare different versions, and switch the current version.

 

 

Conclusion

Javers is lightweight and versatile. Since it uses JSON for object serialization, we don’t have to provide detailed ORM-like mappings; Javers only needs to know some high-level facts about our data model. Integration is seamless: we can use the Spring Boot starters to simplify integrating Javers with an application, and all required Javers beans are created and auto-configured with reasonable defaults.

What is new in JUnit 5?

JUnit 5 is the new generation of the JUnit testing platform, made up of the following components. It requires Java 8 or above.

JUnit Platform – It serves as a foundation for launching a testing framework on JVM. It also defines TestEngine API for developing a testing framework that runs on the platform.

JUnit Jupiter – The combination of the new programming model and extension model for writing tests and extensions in JUnit 5.

JUnit Vintage – It provides a TestEngine for running JUnit 3 and JUnit 4 based tests on the platform.

Introduction:

In this article, I will explain the major new features of JUnit 5. To demonstrate them, I have built a simple application, described in the section below. I will walk through the test cases I added and explain the new features along the way.

About the application: The application is built using Spring Boot. It has a data access layer and a service layer. It does not have any controllers, as it is mainly used to demo test cases. It is a Gradle-based project in which I have excluded JUnit 4 and added JUnit 5 using the following snippet.

 

testImplementation('org.springframework.boot:spring-boot-starter-test') {
    exclude group: 'junit', module: 'junit'
}
testImplementation('org.junit.jupiter:junit-jupiter:5.5.1')

 

  • It has CRUD operations on an employee DB. An employee has an email, first name, and last name.
  • On saving an employee, it validates the employee’s email.
  • While updating an employee, it throws an error if the id of the given employee object is 0, if the given employee uses another employee’s email, or if the given email does not exist. Only the first name and last name may be changed.
  • It fetches an employee by id, or gets all employees.
  • It deletes an employee by id.

I will be writing integration tests which interact with the DB and use the Spring context. Here are the changes needed in JUnit 5:

  • In JUnit 4, we used @RunWith(SpringRunner.class); in JUnit 5 this is replaced by the extension model, so we annotate with @ExtendWith(SpringExtension.class).
  • We now have @BeforeEach, @BeforeAll, @AfterEach, and @AfterAll annotations for test pre- and post-conditions; the names are self-explanatory.

Following are some of the features of JUnit 5.

Lambda support in assertions:

This makes it possible to have a single grouped assertion (assertAll) containing multiple assertions that evaluate individual conditions. The following is the snippet I used for asserting all attributes of an employee.

 

@DisplayName("Valid employee can be stored")
 @Test
 public void validEmployeeCanBeStoredTest() throws EmployeeException {
   Employee savedEmployee = employeeService.save(employee);
   assertAll("All user attribute should be persisted",
       () -> assertEquals(employee.getFirstName(), savedEmployee.getFirstName(), "Firstname matched"),
       () -> assertEquals(employee.getLastName(), savedEmployee.getLastName(),"Lastname matched"),
       () -> assertEquals(employee.getEmail(), savedEmployee.getEmail(), "Email matched"),
       () -> assertTrue(savedEmployee.getId() > 0,"Employee ID generated"));
 }

 

Here you can see that a single assertAll can contain different types of assertions.

Note: @DisplayName attaches descriptive information to the test case, which shows up in test reports.

 

Parameterized tests

These let you run a test case multiple times with different arguments. Instead of @Test, the method is annotated with @ParameterizedTest, and it must declare at least one source of parameters for its arguments. Valid source types include @ValueSource, @NullSource/@EmptySource, @EnumSource, @MethodSource, @CsvSource, @CsvFileSource, and @ArgumentsSource.

I will explain @ValueSource and @MethodSource; the rest are out of scope for this article, and I may cover them separately. The following is the snippet I used for testing valid and invalid emails. Since email validation has to be tested multiple times with different values, it is a good candidate for demonstration.

 

@Tag("unit")
 class InvalidEmailTest {
 
   @DisplayName("Test with valid")
   @ParameterizedTest(name = "email = {0}")
   @ValueSource(strings = { "this@email.com", "email.example@example.com", "demo@local.co.in" })
   void validEmails(String email) {
     assertTrue(EmailValidator.isValidEmail(email));
   }
 
   @DisplayName("Test with invalid")
   @ParameterizedTest(name = "email = {0}")
   @MethodSource("invalidEmailsProvider")
   void invalidEmails(String email) {
     assertFalse(EmailValidator.isValidEmail(email));
   }
 
   private static Stream<String> invalidEmailsProvider() {
     return Stream.of("this@email", "email@example.", "local.co.in");
   }
 
 }

 

@ValueSource is straightforward: we pass the values as an array, and each value is automatically passed to the test method.

@MethodSource lets us build the arguments with a Stream, which makes it possible to pass complex objects to the test method.

Note: here we don’t have a display name; instead we specified @ParameterizedTest(name = "email = {0}"), which produces a readable name for each test-case execution in the report.

 

Nested test cases

These help build a kind of test suite containing a meaningful, related group of tests within a single class. Here is the code snippet.

 

@Transactional
 @SpringBootTest()
 @ExtendWith(SpringExtension.class)
 @DisplayName("Invalid employee test")
 @Tag("integration")
 class InvalidEmployeeCreateTest {
 
   private Employee employee;
 
   @Autowired
   EmployeeService employeeService;
 
   @Nested
   @DisplayName("For invalid email")
   class ForInvalidEmail {
 
     int totalEmployee = 0;
 
     @BeforeEach
     void createNewEmployee() {
       totalEmployee = employeeService.findAll().size();
       employee = new Employee("TestFirstName", "TestLastName", "not-an-email");
     }
 
     @Test
     @DisplayName("throws invalid email exception")
     void shouldThrowException() {
       assertThrows(EmployeeException.class, () -> employeeService.save(employee));
     }
 
     @Nested
     @DisplayName("when all employee fetched")
     class WhenAllEmployeeFetched {
 
       @Test
       @DisplayName("there should not be any new employee added")
       void noEmployeeShouldBePresent() {
         assertTrue(employeeService.findAll().size() == totalEmployee, "No new employee present");
       }
     }
   }
 }

 

Running this produces a nested, grouped report of the tests.

 

 

Explanation:

The ForInvalidEmail nested class contains a @BeforeEach method that is invoked before each test case; in it we create an employee object with an invalid email. The class contains a single test method, shouldThrowException, which checks that attempting to save that employee object throws an exception.

Inside the ForInvalidEmail nested class, I have added one more nested class named WhenAllEmployeeFetched. This nested class contains a method which ensures that no new employee record was created.

Here you can see how we grouped two related tests (an exception should be thrown, and no new employee should be added) using nested classes.

Dynamic tests

Previously we had only @Test methods, which are static and specified at compile time; their behavior cannot be changed at runtime. JUnit 5 adds a new annotation for this: @TestFactory. A @TestFactory method is a factory for test cases, and it must return a single DynamicNode or a Stream (or collection) of DynamicNodes.

Any Stream returned by a @TestFactory will be properly closed by calling stream.close(), making it safe to use a resource such as Files.lines().

As with @Test methods, @TestFactory methods must not be private or static, and may optionally declare parameters to be resolved by ParameterResolvers.

A DynamicTest is a test case generated at runtime. It is composed of a display name and an Executable. Executable is a @FunctionalInterface, which means that dynamic test implementations can be provided as lambda expressions or method references.

Following is the code snippet using this.

 

@Transactional
 @SpringBootTest()
 @ExtendWith(SpringExtension.class)
 @Tag("integration")
 class EmployeeUpdateTest {
 
   private List<Employee> employees = Arrays.asList(
       new Employee("valid", "name", "email1@example.com", 1),
       new Employee("invalid", "id", "email2@example.com", 0), // Invalid employee
       new Employee("uses", "other email", "email1@example.com", 3), // Invalid as it reuses the 1st employee's email
       new Employee("email", "does not exist", "email4@example.com", -1)); // Non-existent email can't be updated
   @Autowired
   EmployeeService employeeService;
   @TestFactory
   Stream<DynamicTest> dynamicEmployeeTest() {
     return employees.stream()
         .map(emp -> DynamicTest.dynamicTest(
             "Update Employee: " + emp.toString(),
             () -> {
               try {
                 assertNotNull(employeeService.update(emp));
               } catch (EmployeeException e) {
                 assertTrue(e.getMessage().toLowerCase().contains("error:"));
                 assertTrue(emp.getId() != 1); // All employee except 1 should fail
               }
             }
         ));
   }
 }

 

Explanation: The dynamicEmployeeTest method is annotated with @TestFactory and returns a stream of DynamicTest. As you can see, we have a mix of employees where only one record is valid for update; the rest have the problems mentioned in the code comments. This factory method emits four test cases: the first employee goes through the try block, and the rest fall into the exception block.

Advantage: It shortens the test code. In JUnit 4, we would have to write four different test cases to verify this behaviour.

When run from the IDE, each generated test appears as a separate entry in the results.

 

 

Note: dynamic tests do not follow the normal test lifecycle, because they are generated at runtime. Lifecycle methods such as @BeforeEach and @AfterEach are executed for the @TestFactory method itself, but not for each individual dynamic test.

Here are some more features of JUnit 5:

  • Conditional test execution
  • Test execution order
  • Dependency injection for constructors and methods
  • Repeated tests
  • Timeout tests

Building a basic REST API using Django Rest Framework

An API (Application Programming Interface) is a software intermediary that allows two applications to talk to each other.

In this tutorial, we will explore how to create a Django REST Framework (DRF) API. We will build a Django REST application with Django 2.x that lets users create, read, update, and delete employees through the API.

Why DRF:

Django REST framework is a powerful and flexible toolkit for building Web APIs.

Some reasons you might want to use REST framework:

  • The Web browsable API is a huge usability win for your developers.
  • Authentication policies including packages for OAuth1a and OAuth2.
  • Serialization that supports both ORM and non-ORM data sources.
  • Customizable all the way down – just use regular function-based views if you don’t need the more powerful features.
  • Extensive documentation, and great community support.
  • Used and trusted by internationally recognized companies including Mozilla, Red Hat, Heroku, and Eventbrite.

Traditionally, Django is known to many developers as an MVC Web Framework, but it can also be used to build a backend, which in this case is an API. We shall see how you can build a backend with it.

Let’s get started

In this blog, you will be building a simple API for a simple employee management service.

Setup your Dev environment:

Please install Python 3; I am using Python 3.7.3 here.

You can check your python version using the command

$ python -V

Python 3.7.3

After installing python, you can go ahead and create a working directory for your API and then set up a virtual environment.

You can set up a virtual environment with the commands below.

$ pip install virtualenv

Create a directory employee-management and cd into it:

$ mkdir employee-management && cd employee-management
# create a virtual environment named drf_api
employee-management $ virtualenv --python=python3 drf_api
# activate the virtual environment named drf_api
employee-management $ source drf_api/bin/activate

 

This will activate the virtual env that you have just created.

Let’s install Django and djangorestframework in your virtual env.

I will be installing Django 2.2.3 and djangorestframework 3.9.4

(drf_api) employee-management $ pip install Django==2.2.3
(drf_api) employee-management $ pip install djangorestframework==3.9.4

Start Project:

After setting up your dev environment, let’s start a Django project. I am creating a project named api.

 

(drf_api) employee-management $ django-admin.py startproject api

(drf_api) employee-management $ cd api

 

Now create a Django app. I am creating employees app

 

(drf_api) employee-management $ django-admin.py startapp employees

 

Now you will have a directory structure like this:

 

api/

    manage.py

    api/

        __init__.py

        settings.py

        urls.py

        wsgi.py

    employees/

        migrations/

             __init__.py

        __init__.py

        admin.py

        apps.py

        models.py

        tests.py

        views.py

    drf_api/

 

The app and project are now created. We will now sync the database.  By default, Django uses sqlite3 as a database.

If you open api/settings.py you will notice this:

 

DATABASES = {
     'default': {
         'ENGINE': 'django.db.backends.sqlite3',
         'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
     }
 }

 

You can change the DB engine as per your needs, e.g. to PostgreSQL.

We will create an initial admin user and set a password for that user.

 

(drf_api) employee-management $ python manage.py migrate

(drf_api) employee-management $ python manage.py createsuperuser --email superhuman@blabla.com --username admin

 

Let’s register the apps: open the api/settings.py file and add the rest_framework and employees apps to INSTALLED_APPS.

 

INSTALLED_APPS = [
    ...
    'rest_framework',
    'employees',
]


Open the api/urls.py file and add URLs for the employees app:

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('employees.urls')),
]

 

This makes your basic setup ready and now you can start adding code to your employees’ service API.

TDD – Test Driven Development

Before we write the business logic of our API, we will write a test. The idea is: write a unit test for a view, then update the code until the test passes.

Let’s write a test for the GET employees/ endpoint

Let’s create a test for the endpoint that returns all employees: GET employees/.

Open the employees/tests.py file and add the following lines of code.
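As a sketch of what such a test might look like (the model, serializer, and route names here are assumptions consistent with the rest of this tutorial; a DRF router generates the employee-list route name):

```python
# employees/tests.py -- a sketch of the GET employees/ test.
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase

from .models import Employee
from .serializers import EmployeeSerializer


class GetAllEmployeesTest(APITestCase):
    """Test module for the GET employees/ endpoint."""

    def setUp(self):
        Employee.objects.create(
            first_name="Jane", last_name="Doe", email="jane@example.com")
        Employee.objects.create(
            first_name="John", last_name="Smith", email="john@example.com")

    def test_get_all_employees(self):
        # 'employee-list' is the route name a DRF router generates
        # for a viewset over the Employee model.
        response = self.client.get(reverse('employee-list'))
        expected = EmployeeSerializer(Employee.objects.all(), many=True).data
        self.assertEqual(response.data, expected)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
```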

Do not try to run this code yet. We have not added the view or model code yet. Let’s add the view now.

Add the View for GET employees/ endpoint

Now we will add the view code that responds to the request GET employees/.

Model: First, add a model that will store the data about the employees returned in the response. Open the employees/models.py file and add the following lines of code.
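A minimal sketch of such a model, using the fields described earlier (email, first name, last name); the field lengths are assumptions:

```python
# employees/models.py -- a minimal Employee model.
from django.db import models


class Employee(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    email = models.EmailField(unique=True)

    def __str__(self):
        return f"{self.first_name} {self.last_name}"
```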

 

 

We will also add our model to the admin site. This lets us manage employees (add/remove) via the admin UI. Add the following lines of code to the employees/admin.py file.
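A minimal sketch of the registration:

```python
# employees/admin.py -- register the model so it shows up in the admin UI.
from django.contrib import admin

from .models import Employee

admin.site.register(Employee)
```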

 

 

Now run makemigrations from the command line:

(drf_api) employee-management $ python manage.py makemigrations

Now run the migrate command. This will create the employees table in your DB.

(drf_api) employee-management $ python manage.py migrate

 

Serializer: Add a serializer. Serializers allow complex data such as querysets and model instances to be converted to native Python datatypes that can then be easily rendered into JSON, XML, or other content types.

Add a new file employees/serializers.py with the following lines of code.
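A sketch of such a serializer, assuming the Employee model has first_name, last_name, and email fields as described above:

```python
# employees/serializers.py -- a ModelSerializer for Employee.
from rest_framework import serializers

from .models import Employee


class EmployeeSerializer(serializers.ModelSerializer):
    class Meta:
        model = Employee
        fields = ('id', 'first_name', 'last_name', 'email')
```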

 

 

Serializers also provide deserialization, allowing parsed data to be converted back into complex types, after first validating the incoming data. The serializers in REST framework work very similarly to Django’s Form and ModelForm classes.

 

View: Finally, add a view that returns all employees. Open the employees/views.py file and add the following lines of code.
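A sketch using DRF’s ModelViewSet (an assumption; any generic viewset that sets queryset and serializer_class follows the same pattern):

```python
# employees/views.py -- a viewset exposing list/create/retrieve/update.
from rest_framework import viewsets

from .models import Employee
from .serializers import EmployeeSerializer


class EmployeeViewSet(viewsets.ModelViewSet):
    queryset = Employee.objects.all()
    serializer_class = EmployeeSerializer
```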

 

 

Here we have specified how to get the objects from the database by setting the queryset attribute of the class, and specified the serializer that will be used for serializing and deserializing the data.

The view in this code inherits from DRF’s generic ModelViewSet, which provides the list, create, retrieve, and update behaviour we need.

Connect the views

Before you can run the tests, you will have to link the views by configuring the URLs.

Open the api/urls.py file and update it with the following lines of code.
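A sketch consistent with the final URL used later in this tutorial (http://127.0.0.1:8000/api/v1/employees/):

```python
# api/urls.py -- route the employees app under the /api/v1/ prefix.
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/v1/', include('employees.urls')),
]
```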

 

 

Now go to employees/urls.py and add the code below.
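A sketch using a DRF router (an assumption about the original code, consistent with the viewset approach above):

```python
# employees/urls.py -- register the viewset with a DRF router.
from django.urls import include, path
from rest_framework import routers

from .views import EmployeeViewSet

router = routers.DefaultRouter()
router.register(r'employees', EmployeeViewSet)

urlpatterns = [
    path('', include(router.urls)),
]
```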

 

Let’s run the test!

First, let’s run automated tests. Run the command;

 (drf_api) employee-management $ python manage.py test

 

The output in your shell should be similar to this;

 

How to test this endpoint manually?

From your command line, run the following command:

 

(drf_api) employee-management $  nohup python manage.py runserver & disown

 

Now type http://127.0.0.1:8000/admin/ into your browser. You will be prompted for a username and password. Enter the admin username and password that we created earlier in the create-superuser step.

Once you log in, the screen will look like this:

 

 

Let’s add a few employees using the Add button.

 

Once you have added the employees, let’s test our view-employees API by hitting the URL below:

 

http://127.0.0.1:8000/api/v1/employees/

 

If you are able to see the above screen, your API works.

Congrats! Your first API using DRF is live.

Reduce App size with On Demand Resources

Introduction

This blog is about On Demand Resources. Nowadays our apps are loaded with high-resolution artwork, images and resources. So much so that we need to constantly keep an eye on the IPA size of the app throughout its development life cycle. Sometimes we download some static content from the server even if it can be easily packed into our app bundle.

Apple introduced on-demand resources in iOS 9. It enables apps to load assets dynamically. You assign tags to some assets, then when you upload a build to the App Store, Apple hosts the tagged assets so that they are downloaded separately from the app. The app requests the assets when required, and can discard them when they are not needed anymore. This is a great way to save space on devices.

Why Does the IPA Size Matter?

Of course it matters! At the end of the day, iOS developers are focused on delivering a best-in-class user experience, and long download times kill that. After all, the first impression is always the best impression.

Is There a Solution?

Well, yes: the secret to keeping IPA size small is On Demand Resources. I’ll also outline a few pointers you should keep in mind while organizing your slices.

On Demand Resources

As the name suggests, iOS delivers some content (images, PDFs, etc.) to you as and when your app requires it.

The main idea behind using ODR is that you pack minimal slices into your bundle for the basic presentation of your app, and request any high-resolution images you might need only when they are about to be presented to the user.

How is this different from downloading slices from Server?

Well, if your content is static (for example, a static image), there is technically no need to set up a server just to download that content. You can still keep it all with your app and get the advantage of a smaller IPA as well.

How Can It Be Done?

Well, first off, head straight to your Xcode project and click on a file to view its file inspector on the right-hand side.

There is the field for On Demand Resource tags.

The same field is also present in the attributes inspector when clicking on one of the images in the asset catalog.

You can add tags to your images in the asset catalog or to any resource files. NSBundleResourceRequest has APIs to fetch these resources using the tags we specify. This is the core of ODR.

How Tags Work

You identify on-demand resources during development by assigning them one or more tags. A tag is a string identifier you create. You can use the name of the tag to identify how the included resources are used in your app.

At runtime, you request access to remote resources by specifying a set of tags. The operating system downloads any resources marked with those tags and then retains them in storage until the app finishes using them. When the operating system needs more storage, it purges the local storage associated with one or more tags that are no longer retained. Tagged sets of resources may remain on the device for some time before they are purged.

Creating and Assigning Tags

Usually, the operating system starts downloading resources associated with a tag when the tag is requested by an app and the resources are not already on the device. Some tags contain resources that are important the first time the app launches or are required soon after the first launch. For example, a tutorial is important the first time the app is launched, but it is unlikely to be used again.

You assign tags to one of three prefetch categories in the Prefetched view in the Resource Tags pane: Initial Install Tags, Prefetched Tag Order, and Download Only On Demand.

The default category for a tag is Download Only On Demand. The view displays the tags grouped by their prefetch category and the total size for each category. The size is based on the device that was the target of the last build. Tags can be dragged between categories.

  • Initial install tags: The resources are downloaded at the same time as the app. The size of the resources is included in the total size for the app in the App Store. The tags can be purged when they are not being accessed by at least one NSBundleResourceRequest.
  • Prefetch tag order: The resources start downloading after the app is installed. The tags will be downloaded in the order in which they are listed in the Prefetched tag order group.
  • Download only on demand: The tags are downloaded when requested by the app.

Code for ODR

NSBundleResourceRequest is used for requesting ODR content. In viewDidLoad() of the TableViewController class (which displays the images for a category), we call the following method:

func conditionallyBeginAccessingResources(completionHandler: @escaping (Bool) -> Void)

This function checks whether all the resources associated with the tags passed in are available for use. If not, we call:

func beginAccessingResources(completionHandler: @escaping (Error?) -> Void)

This call will download all the content associated with the tags passed in.

In the completion handler, we simply populate our data source with the images associated with the tags and they are displayed in a UITableView.
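Putting the two calls together, the request flow can be sketched as follows (the tag name and the loadImages() helper are hypothetical):

```swift
// Illustrative sketch of requesting tagged resources via ODR.
let request = NSBundleResourceRequest(tags: ["HighResImages"])

request.conditionallyBeginAccessingResources { available in
    if available {
        self.loadImages()          // resources already on device
    } else {
        request.beginAccessingResources { error in
            if let error = error {
                print("ODR download failed: \(error)")
                return
            }
            self.loadImages()      // resources downloaded, safe to use
        }
    }
}
// Keep a strong reference to `request` while the resources are in use,
// and call request.endAccessingResources() when you are done with them.
```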

Conclusion

On-demand resources in iOS 9 and tvOS is a great way to reduce the size of your app and deliver a better user experience to people who download and use your application. While it’s very easy to implement and set up, there are quite a few details that you must keep in mind in order for the whole on-demand resources system to work flawlessly without excessive loading times and unnecessarily purging data.

References

https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/On_Demand_Resources_Guide/

https://www.raywenderlich.com/520-on-demand-resources-in-ios-tutorial

iMessage Stickers and Apps

Introduction

This blog is about iMessage apps in iOS. We all use messaging capabilities on our iOS devices. This is a bold statement and I have no proof for it, but it’s difficult to imagine a person owning an iOS device without having sent or received messages. The main messaging application on iOS is iMessage, but it’s not the only messaging option for iOS. You can download and choose among a huge selection of messaging applications.

Up until iOS 10, iMessage was fully closed. That is to say, it lived in its own sandbox (and still does) and did not allow any extensions to be attached to it. In iOS 10 that has changed, and developers can finally write their own iMessage extensions that allow even more interactivity to be added to conversations.

iMessage apps can be of two different types:

Sticker packs

This is a special, unusual kind of app that contains only images, with absolutely no code. You can create this kind of app so users can send the images to one another in iMessage. For instance, if you offer a sticker pack full of heart shapes, users can then download the app and attach those hearts to messages that they or others send. In other words, as the name implies, images can stick to messages!

 

Full-fledged apps

This is where you have full control over how your iMessage app works. You can do some really fun stuff in this mode, which we will review soon. For instance, you can change an existing sticker that was sent previously by one of your contacts, so that you and the person you’re chatting with can collaboratively send and receive messages to each other.

Setting Up a Sticker Pack Application

Problem

You want to create a simple iMessage application that allows your users to send stickers to each other, without writing any code.

Solution

Follow these steps:

  1. Open Xcode if it’s not already open.
  2. Create a new project. In the new project dialog, choose Sticker Pack Application and then click Next.

 

  3. Enter a product name for your project and then click Next.

  4. You will then be asked to save the project somewhere. Choose an appropriate location to save the project to finish this process.
  5. You should now see your project opened in Xcode, along with a file named xcstickers. Click on this file and place your sticker images inside.
  6. After you’ve completed these steps, test your application on the simulator and then on devices as thoroughly as possible. Once you are happy, you need to code sign your app and then release it to the iMessage App Store.

Discussion

With the opening up of iMessage as a platform where developers can build stand-alone apps, Apple has created a new type of store called iMessage App Store, where applications that are compatible with iMessage will show up in the list and users can purchase or download them without cost.

If you create a sticker pack app with no accompanying iOS app, your app shows up only in the iMessage App Store. If you create an iOS app with an accompanying iMessage extension (stickers), your app shows up both in the iOS App Store (for the main iOS app) and also in the iMessage App Store (for your iMessage extension).

NOTE

Your stickers can be PDF, PNG, APNG (PNG with an alpha layer), JPEG, or even (animated) GIF, but Apple recommends using PNG files for the sake of quality. If you are desperate to create a sticker app but have no images to test with, simply open Finder at  /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/, then open the ICNS files in that folder with Preview.app, export those ICNS files into PNG files, and drag and drop them into your Stickers.xcstickers file in Xcode. Then build and run your project on the simulator.

 

Building a Full-Fledged iMessage Application

Problem

You want to build a custom iMessage application where you have full control over the presentation of your stickers and how the user interacts with them.

Solution

Create an iMessage application in Xcode by following these steps:

  1. Open Xcode if it’s not already open.
  2. Create a new project. In the template window, choose iMessage Application and then click Next.

 

 

3. You will be asked to save your project somewhere. Do so, and then you should see Xcode open your project.

Discussion

Now that you have created your iMessage app, it’s time to learn a bit about what’s new in the Messages framework for iOS 10 SDK. This framework contains many classes, the most important of which are:

MSMessagesAppViewController

The main view controller of your extension. It gets displayed to users when they open your iMessage application.

MSStickerBrowserViewController

A view controller that gets added to the app view controller and is responsible for displaying your stickers to the user.

MSSticker

A class that encapsulates a single sticker. There is one MSSticker for each sticker in your pack.

MSStickerView

Every sticker instance in MSSticker has to be placed inside a view to be displayed to the user in the browser view controller. MSStickerView is the class for that view.

When you build an iMessage application as we have just done, your app is then separated into two entry points:

  • The iOS app entry point with your app delegate and the whole shebang
  • The iMessage app extension entry point

This is unlike the sticker pack app that we talked about earlier in this chapter. Sticker pack apps are iMessage apps but have no iOS apps attached to them. Therefore there is no code to be written. In full-fledged iMessage apps, your app is divided into an iOS app and an iMessage app, so you have two of some files, such as the Assets.xcassets file.

Even with custom sticker pack applications, you can build the apps in two different ways:

  • Using the existing Messages classes, such as MSStickerBrowserViewController, which do the heavy lifting for you
  • Using custom collection view controllers that will be attached to your main MSMessagesAppViewController instance

Follow these steps to program the actual logic of the app:

 

  1. Drag and drop your PNG stickers into your project’s structure, on their own and not in an asset catalog. The reason is that we need to find them using their URLs, so we need them to sit on the disk directly.
  2. Create a new Cocoa Touch class in your project that will be your MSStickerBrowserViewController
  3. Your instance of MSStickerBrowserViewController has a property called stickerBrowserView of type MSStickerBrowserView, which in turn has a property named dataSource of type MSStickerBrowserViewDataSource?. Your browser view controller by default will become this data source, which means that you need to implement all the non-optional methods of this protocol, such as numberOfStickers(in:). So let’s do that now:

override func numberOfStickers(in stickerBrowserView: MSStickerBrowserView) -> Int {
    return stickers.count
}

override func stickerBrowserView(_ stickerBrowserView: MSStickerBrowserView,
                                 stickerAt index: Int) -> MSSticker {
    return stickers[index]
}

Our browser view controller is done, but how do we display it to the user? Remember our MSMessagesAppViewController? Well, the answer is through that view controller. In the viewDidLoad() function of the aforementioned view controller, load your browser view controller and add it as a child view controller:

override func viewDidLoad() {
    super.viewDidLoad()

    let controller = BrowserViewController(stickerSize: .regular)

    controller.willMove(toParentViewController: self)
    addChildViewController(controller)

    if let vcView = controller.view {
        view.addSubview(controller.view)
        vcView.frame = view.bounds
        vcView.translatesAutoresizingMaskIntoConstraints = false
        vcView.leftAnchor.constraint(equalTo: view.leftAnchor).isActive = true
        vcView.rightAnchor.constraint(equalTo: view.rightAnchor).isActive = true
        vcView.topAnchor.constraint(equalTo: view.topAnchor).isActive = true
        vcView.bottomAnchor.constraint(equalTo: view.bottomAnchor).isActive = true
    }

    controller.didMove(toParentViewController: self)
}

Now press the Run button on Xcode to run your application on the simulator or device.

In the list that appears, simply choose the Messages app and continue. Once the simulator is running, you can manually open the Messages app, go to an existing conversation that is already placed for you there by the simulator, and press the Apps button on the keyboard.

Conclusion

In this blog, I introduced you to the new Messages framework in iOS 10, which allows you to create sticker packs and applications to integrate with iMessage. We covered the basic classes you need to be aware of, including MSStickerBrowserViewController, MSMessagesAppViewController, MSSticker, and MSStickerView.

The Messages framework provides APIs to give you a large amount of control over your iMessage apps. For further reading, I would recommend checking out Apple’s Messages Framework Reference.

App Store Connect API To Automate TestFlight Workflow

TestFlight

Most mobile application developers try to automate the build sharing process, as it is one of the most tedious tasks in the app development cycle. However, it has always remained difficult, especially for iOS developers, because of Apple’s code signing requirements. So when iOS developers start thinking about automating build sharing, the first option that comes to mind is TestFlight.

Before TestFlight’s acquisition by Apple, it was easy to automate the build sharing process. TestFlight had its own public APIs (http://testflightapp.com/api) to upload and share builds from the command line, and developers used these APIs to write automation scripts. After Apple’s acquisition, TestFlight was made part of App Store Connect and the old APIs were invalidated. Therefore, to upload or share builds, developers had to rely on third-party tools like Fastlane.

App Store Connect API

At WWDC 2018, Apple announced the new App Store Connect API and made it publicly available in November 2018. By using the App Store Connect API, developers can now automate their TestFlight workflow without relying on any third-party tool.

In this short post, we will see a use case example of App Store Connect API for TestFlight.

Authentication

The App Store Connect API is a REST API to access data from the Apple server. Use of this API requires authorization via a JSON Web Token (JWT). An API request without this token results in the error “NOT_AUTHORIZED”. Generating the JWT is a tedious task. We need to follow the steps below to use the App Store Connect API:

  1. Create an API key in the App Store Connect portal
  2. Generate a JWT token using the above API key
  3. Send the JWT token with each API call

Let’s now deep dive into each step.

Creating the API Key

The API key is a pair of public and private keys. You can download the private key from App Store Connect; the public key will be stored on Apple’s server. To create the private key, follow the steps below:

  1. Login to app store connect portal
  2. Go to ‘Users and Access’ section
  3. Then select ‘Keys’ section

The account holder (Legal role) needs to request access in order to generate API keys.

Once you get access, you can generate an API key.

There are different access levels for keys, such as Admin, App Manager, and Developer. A key with Admin access can be used for all App Store Connect APIs.

Once you generate the API key, you can download it. The key can be downloaded only once, so make sure to keep it secure.

The API key never expires; you can use it as long as it’s valid. If you lose it, or it is compromised, revoke it immediately, because anyone who has this key can access your App Store Connect record.

Generate JWT Token

Now we have the private key required to generate the JWT token. To generate the token, we also need the below-mentioned parameters:

  1. Private key Id: You can find it on the Keys tab (KEY ID).
  2. Issuer Id: Once you generate the private key, you will get an Issuer_ID. It is also available on the top of the Keys tab.
  3. Token Expiry: The generated token can be used for a maximum of 20 minutes. It expires after the specified time has elapsed.
  4. Audience: As of now it is “appstoreconnect-v1”
  5. Algorithm: The ES256 JWT algorithm is used to generate a token.

Once all the parameters are in place, we can generate the JWT token. To generate it, here is the Ruby script that was used in the WWDC demo:

require "base64"
require "jwt"
require "openssl"

ISSUER_ID = "ISSUER_ID"
KEY_ID = "PRIVATE_KEY_ID"
private_key = OpenSSL::PKey.read(File.read("path_to_private_key/AuthKey_#{KEY_ID}.p8"))

token = JWT.encode(
  {
    iss: ISSUER_ID,
    exp: Time.now.to_i + 20 * 60,
    aud: "appstoreconnect-v1"
  },
  private_key,
  "ES256",
  { kid: KEY_ID }
)

puts token

 

Let’s take a look at the steps to generate a token:

  1. Create a new file with the name jwt.rb and copy the above script in this file.
  2. Replace the Issuer_Id, Key_Id, and private key file path values in the script with your actual values.
  3. To run this script, you need to install the jwt Ruby gem on your machine. Use the following command to install it: $ sudo gem install jwt
  4. After installing the ruby gem, run the above script by using the command: $ ruby jwt.rb

You will get a token as the output of the above script. You can use this token along with your API calls. Please note that the generated token remains valid for 20 minutes; if you want to continue using the API after that, don’t forget to generate another one.

Send JWT token with API call

Now that we have a token, let’s see a few examples of App Store Connect API for TestFlight. There are many APIs available to automate TestFlight workflow. We will see an example of getting information about builds available on App Store Connect. We will also look at an example of submitting a build to review process. This will give you an idea of how to use the App Store Connect API.

Example 1: Get build information:

Below is the API for getting build information. If you hit this API without the JWT token, it will respond with an error:

$ curl https://api.appstoreconnect.apple.com/v1/builds
{
 "errors": [{
 "status": "401",
 "code": "NOT_AUTHORIZED",
 "title": "Authentication credentials are missing or invalid.",
 "detail": "Provide a properly configured and signed bearer token, and make sure that it has not expired. Learn more about Generating Tokens for API Requests https://developer.apple.com/go/?id=api-generating-tokens"
 }]
}

So you need to pass the above-generated JWT token in the request:

$ curl https://api.appstoreconnect.apple.com/v1/builds --header "Authorization: Bearer your_jwt_token"
{
  "data": [],  // array of builds available in your App Store Connect account
  "links": {
    "self": "https://api.appstoreconnect.apple.com/v1/builds"
  },
  "meta": {
    "paging": {
      "total": 2,
      "limit": 50
    }
  }
}

 

Example 2: Submit build for review process:

By using the above build API, you can get an ID for the build. Use this ID to submit a build for the review process. You can send the build information in a request body like:

{
  "data": {
    "type": "betaAppReviewSubmissions",
    "relationships": {
      "build": {
        "data": {
          "type": "builds",
          "id": "your_build_Id"
        }
      }
    }
  }
}

In the above request body, you just need to replace your build ID. So the final request will look like:

$ curl -X POST -H "Content-Type: application/json" --data '{"data":{"type":"betaAppReviewSubmissions","relationships":{"build":{"data":{"type":"builds","id":"your_build_Id"}}}}}' https://api.appstoreconnect.apple.com/v1/betaAppReviewSubmissions --header "Authorization: Bearer your_jwt_token"

That’s it. The above API call will submit the build for the review process. In the same way, you can use any other App Store Connect API, such as getting a list of beta testers or managing beta groups.

Conclusion

We have seen the end-to-end flow for the App Store Connect API. By using these APIs, you can automate your TestFlight workflow. You can also develop tools to automate the release process without relying on any third-party tool. You can find the documentation for the App Store Connect API here. I hope you’ll find this post useful. Good luck and have fun.

 

 

 

 

 

Text Recognition using Firebase ML Kit for Android

Firebase ML Kit Introduction

At Google I/O 2018, Google announced Firebase ML Kit, a part of the Firebase suite that intends to give our apps the ability to support intelligent features with more ease. The SDK currently comes with a collection of pre-defined capabilities that are commonly required in applications. Firebase ML Kit wraps machine learning capabilities and exposes them all through a single SDK.

 

Currently ML Kit offers the ability to:

  • Recognize text
  • Recognize landmarks
  • Detect faces
  • Scan barcodes
  • Label images

Objective

Recognize text in images, such as the text of a street sign, and recognize the text of documents.

Recognize Text in Images with Firebase ML Kit

ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models. Document text recognition is available only as a cloud-based model.

Prerequisites

Before you proceed, make sure you have access to the following:

  • the latest version of Android Studio
  • a device or emulator running Android API level 21 or higher
  • a Google account for Firebase and Google Cloud

Create a Firebase Project

To enable Firebase services for your app, you must create a Firebase project for it. So log in to the Firebase console and, on the welcome screen, press the Add project button. In the dialog that pops up, give the project a name and press the Create project button.

 

From the overview screen of your new project, click Add Firebase to your Android app. Enter the package name and other information and press the Register app button. Then download the configuration file (google-services.json) that contains all the necessary Firebase metadata for your app.

Configure Your Android Studio Project

  1. Switch to the Project view in Android Studio to see your project root directory. Move the google-services.json file you just downloaded into your Android app module root directory.
  2. Modify your project-level build.gradle file to use Firebase.
  3. Add dependencies in the app-level build.gradle.
  4. Finally, press “Sync now”.
  5. Add permissions in AndroidManifest.xml.
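The dependency step above typically pulls in the ML Vision artifact; the coordinates below are a sketch, and the version numbers are illustrative of that era:

```groovy
// app-level build.gradle
dependencies {
    implementation "com.google.firebase:firebase-core:16.0.9"
    implementation "com.google.firebase:firebase-ml-vision:20.0.0"
}

apply plugin: 'com.google.gms.google-services'
```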

On Device Text Recognition

To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionTextRecognizer’s processImage method. If the text recognition operation succeeds, a FirebaseVisionText object will be passed to the success listener. A FirebaseVisionText object contains the full text recognized in the image and zero or more TextBlock objects. Each TextBlock represents a rectangular block of text, which contains zero or more Line objects. Each Line object contains zero or more Element objects, which represent words and word-like entities (dates, numbers, and so on).
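The flow just described can be sketched roughly as follows (the bitmap source, lambda-style listeners, and log tag are illustrative):

```java
// Illustrative sketch of on-device text recognition with Firebase ML Kit.
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
FirebaseVisionTextRecognizer recognizer =
        FirebaseVision.getInstance().getOnDeviceTextRecognizer();

recognizer.processImage(image)
        .addOnSuccessListener(visionText -> {
            // Walk the recognized hierarchy: blocks -> lines -> elements.
            for (FirebaseVisionText.TextBlock block : visionText.getTextBlocks()) {
                for (FirebaseVisionText.Line line : block.getLines()) {
                    for (FirebaseVisionText.Element element : line.getElements()) {
                        Log.d("MLKit", element.getText());
                    }
                }
            }
        })
        .addOnFailureListener(e -> Log.e("MLKit", "Text recognition failed", e));
```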

Demo: https://drive.google.com/open?id=1XEgRMyGbQEprvM3F8h9jGrLOPyRvAZpB

On Cloud Text Recognition

If you want to use the Cloud-based model, and you have not already enabled the Cloud-based APIs for your project, do so now. Navigate to ML Kit section of the Firebase console. If you have not already upgraded your project to a Blaze plan, click Upgrade to do so. Only Blaze-level projects can use Cloud-based APIs. If Cloud-based APIs aren’t already enabled, click Enable Cloud-based APIs.

 

The document text recognition API provides an interface that is intended to be more convenient for working with images of documents in the cloud. To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionDocumentTextRecognizer’s processImage method. If the text recognition operation succeeds, it will return a FirebaseVisionDocumentText object. A FirebaseVisionDocumentText object contains the full text recognized in the image and a hierarchy of objects (blocks, paragraphs, words, and symbols) that reflect the structure of the recognized document.

Demo: https://drive.google.com/open?id=1qIfLKV3Mz4MJDHUf5A17uT_0c1KWz_hK

 

Stay tuned for my next article.

Android life cycle aware components

What is a life cycle aware component?

A life cycle aware component is a component that is aware of the life cycle of other components, such as an activity or fragment, and performs some action in response to changes in that component’s life cycle status.

Why have life cycle aware components?

Let’s say we are developing a simple video player application. We have an activity named VideoActivity, which contains the UI to play the video, and a class named VideoPlayer, which contains all the logic and mechanism to play a video. Our VideoActivity creates an instance of this VideoPlayer class in its onCreate() method:

 

 

Now, as for any video player, we would like it to play the video when VideoActivity is in the foreground (i.e., in the resumed state) and pause the video when it goes into the background (i.e., into the paused state). So we will have the following code in VideoActivity’s onResume() and onPause() methods:

 

 

Also, we would like it to stop playing completely and release its resources when the activity gets destroyed. Thus we will have the following code in VideoActivity’s onDestroy() method:
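Putting these pieces together, the manual wiring in VideoActivity looks roughly like this (the VideoPlayer method names and layout resource are assumptions):

```java
public class VideoActivity extends AppCompatActivity {

    private VideoPlayer videoPlayer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_video);
        videoPlayer = new VideoPlayer();
    }

    @Override
    protected void onResume() {
        super.onResume();
        videoPlayer.play();      // resume playback in the foreground
    }

    @Override
    protected void onPause() {
        super.onPause();
        videoPlayer.pause();     // pause playback in the background
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        videoPlayer.stop();      // stop playback and release resources
    }
}
```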

When we analyze this code, we can see that even for this simple application our activity has to take a lot of care about calling the play, pause, and stop methods of the VideoPlayer class. Now imagine adding separate components for audio, buffering, etc.: our VideoActivity would have to take care of all these components inside its life cycle callback methods, which leads to poorly organized, error-prone code.

 

Using arch.lifecycle 

With the introduction of life cycle aware components in the android.arch.lifecycle library, we can move all this code into the individual components. Our activities or fragments no longer need to manage this component logic and can focus on their own primary job, i.e., maintaining the UI. Thus, the code becomes clean, maintainable, and testable.

The android.arch.lifecycle package provides classes and interfaces that prove helpful to solve such problems in an isolated way.

So let’s dive and see how we can implement the above example using life cycle aware components.

Life cycle aware components way

To keep things simple, we can add the below lines to our app gradle file to add the life cycle components from the android.arch library:
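A sketch of the dependency declarations (version numbers are illustrative of the android.arch era):

```groovy
// app-level build.gradle
dependencies {
    // Lifecycle components from the android.arch library
    implementation "android.arch.lifecycle:extensions:1.1.1"
    annotationProcessor "android.arch.lifecycle:compiler:1.1.1"
}
```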

 

 

Once we have integrated the arch components, we can make our VideoPlayer class implement LifecycleObserver, which is an empty interface with annotations. Using specific annotations on the VideoPlayer class methods, it will be notified about the life cycle state changes in VideoActivity. So our VideoPlayer class will look like:
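A sketch of the annotated observer (method bodies elided; the method names are assumptions carried over from the earlier example):

```java
public class VideoPlayer implements LifecycleObserver {

    @OnLifecycleEvent(Lifecycle.Event.ON_RESUME)
    public void play() {
        // start or resume playback
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_PAUSE)
    public void pause() {
        // pause playback
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_DESTROY)
    public void stop() {
        // stop playback and release resources
    }
}
```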

We are not done yet. We need some binding between this VideoPlayer class and the VideoActivity so that our VideoPlayer object gets notified about the life cycle state changes in VideoActivity.

Well, this binding is quite easy. VideoActivity is an instance of android.support.v7.app.AppCompatActivity, which implements the LifecycleOwner interface. LifecycleOwner is a single-method interface containing getLifecycle(), which returns the Lifecycle object corresponding to its implementing class; this object keeps track of the life cycle state changes of the activity/fragment or any other component having a life cycle. This Lifecycle object is observable and notifies its observers about changes in state.

So we have our VideoPlayer, an instance of LifecycleObserver, and we need to add it as an observer of the Lifecycle object of VideoActivity. So we will modify VideoActivity as:
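A sketch of the modified activity; the only lifecycle wiring left is the addObserver() call:

```java
public class VideoActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_video);

        VideoPlayer videoPlayer = new VideoPlayer();
        // VideoActivity's Lifecycle will now notify videoPlayer of state changes,
        // so the onResume()/onPause()/onDestroy() overrides are no longer needed.
        getLifecycle().addObserver(videoPlayer);
    }
}
```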

Well, it makes things quite resilient and isolated. Our VideoPlayer class logic is separated from VideoActivity. Our VideoActivity no longer needs to bother about calling its dependent components’ methods to pause or play in its life cycle callback methods, which makes the code clean, manageable, and testable.

Conclusion

The beauty of such separation of concerns can also be felt when we are developing a library intended to be used by third parties. End users of our library, i.e., the developers using it, should not have to call its life-cycle-dependent methods themselves. They might miss a call, or might not know which methods to call (because developers don’t usually read the documentation completely), leading to memory leaks or, worse, app crashes.

Another use case is when an activity depends on some network call handled by a network manager class. We can make the network manager class life cycle aware so that it supplies data to the activity only when it is alive or, better, does not keep a reference to the activity after it is destroyed, thus avoiding memory leaks.

We can develop a well managed app using the life cycle aware components provided by android.arch.lifecycle package. The resulting code will be loosely coupled and thus easy for modifications, testing and debugging which makes our life easy as developers.