Reduce App size with On Demand Resources


This blog is about On Demand Resources. Nowadays our apps are loaded with high-resolution artwork, images, and other resources, so much so that we need to keep a constant eye on the IPA size of the app throughout its development life cycle. Sometimes we even download static content from a server when it could easily be packed into our app bundle.

Apple introduced on-demand resources in iOS 9. It enables apps to load assets dynamically. You assign tags to some assets, then when you upload a build to the App Store, Apple hosts the tagged assets so that they are downloaded separately from the app. The app requests the assets when required, and can discard them when they are not needed anymore. This is a great way to save space on devices.

Why Does the IPA Size Matter?

Of course it matters! At the end of the day, iOS developers are focused on delivering a best-in-class user experience, and a long download time for the app kills that. The first impression is always the best impression.

Is There a Solution?

Well, yes: the secret to keeping the IPA size small is On Demand Resources. I’ll also outline a few pointers you should keep in mind while organizing your slices.

On Demand Resources

As the name suggests, iOS delivers some content (images, PDFs, etc.) as and when your app requires it.

The main idea behind using ODR is that you pack minimal slices into your bundle for the basic presentation of your app, and request any high-resolution images as and when they are to be presented to the user.

How is this different from downloading slices from Server?

Well, if your content is static (for example, a static image), there is technically no need for a server setup just to download that content. You can still keep it all in your bundle and get the advantage of a smaller IPA as well.

How Can It Be Done?

Well, first off, head straight to your Xcode project and click on a file to view its File inspector on the right-hand side.

There is the field for On Demand Resource tags.

The same field is also present in the attributes inspector when clicking on one of the images in the asset catalog.

You can add tags to your images in the asset catalog or to any resource files. NSBundleResourceRequest has APIs to fetch these resources using the tags we specify. This is the core of ODR.

How Tags Work

You identify on-demand resources during development by assigning them one or more tags. A tag is a string identifier you create. You can use the name of the tag to identify how the included resources are used in your app.

At runtime, you request access to remote resources by specifying a set of tags. The operating system downloads any resources marked with those tags and then retains them in storage until the app finishes using them. When the operating system needs more storage, it purges the local storage associated with one or more tags that are no longer retained. Tagged sets of resources may remain on the device for some time before they are purged.

Creating and Assigning Tags

Usually, the operating system starts downloading resources associated with a tag when the tag is requested by an app and the resources are not already on the device. Some tags contain resources that are important the first time the app launches, or that are required soon after the first launch. For example, a tutorial is important the first time the app is launched but is unlikely to be used again.

You assign tags to one of three prefetch categories in the Prefetched view in the Resource Tags pane: Initial Install Tags, Prefetched Tag Order, and Download Only On Demand.

The default category for a tag is Download Only On Demand. The view displays the tags grouped by their prefetch category and the total size for each category. The size is based on the device that was the target of the last build. Tags can be dragged between categories.

  • Initial install tags. The resources are downloaded at the same time as the app, and their size is included in the total size for the app in the App Store. The tags can be purged when they are no longer being accessed by at least one NSBundleResourceRequest.
  • Prefetch tag order. The resources start downloading after the app is installed, in the order in which they are listed in the Prefetched tag order group.
  • Download only on demand. The tags are downloaded when requested by the app.

Code for ODR

NSBundleResourceRequest is used for requesting ODR content. In viewDidLoad() of the TableViewController class (which displays the images for a category), we call the following method.

func conditionallyBeginAccessingResources(completionHandler: @escaping (Bool) -> Void)

This function checks whether all the resources associated with the tags passed in are available for use. If not, we call:

func beginAccessingResources(completionHandler: @escaping (Error?) -> Void)

This call will download all the content associated with the tags passed in.

In the completion handler, we simply populate our data source with the images associated with the tags and they are displayed in a UITableView.
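Putting these calls together, a minimal sketch might look like the following. The tag name, the class name, and the populateDataSource() helper are assumptions for illustration, not from the original post; note that the completion handlers may run off the main queue.

```swift
import UIKit

class ImagesTableViewController: UITableViewController {

    // Keep a strong reference; the system may purge tagged resources
    // once no NSBundleResourceRequest is accessing them.
    let resourceRequest = NSBundleResourceRequest(tags: ["highResImages"])

    override func viewDidLoad() {
        super.viewDidLoad()

        resourceRequest.conditionallyBeginAccessingResources { [weak self] available in
            if available {
                // Resources are already on the device; use them immediately.
                self?.populateDataSource()
            } else {
                // Download the tagged content, then use it.
                self?.resourceRequest.beginAccessingResources { error in
                    if let error = error {
                        print("ODR download failed: \(error.localizedDescription)")
                        return
                    }
                    self?.populateDataSource()
                }
            }
        }
    }

    func populateDataSource() {
        // Load the tagged images (e.g. via UIImage(named:)) and reload the table.
        DispatchQueue.main.async { self.tableView.reloadData() }
    }
}
```

When the images are no longer needed, calling endAccessingResources() on the request tells the system they are safe to purge.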


On-demand resources, introduced in iOS 9 and tvOS, are a great way to reduce the size of your app and deliver a better experience to the people who download and use it. While it is easy to set up, there are quite a few details you must keep in mind for the whole on-demand resources system to work flawlessly, without excessive loading times or unnecessary purging of data.


iMessage Stickers and Apps


This blog is about iMessage apps in iOS. We all use messaging capabilities on our iOS devices. This is a bold statement and I have no proof for it, but it’s difficult to imagine a person owning an iOS device without having sent or received messages. The main messaging application on iOS is iMessage, but it’s not the only messaging option; you can download and choose among a huge selection of messaging applications.

Up until iOS 10, iMessage was fully closed. That is to say, it lived in its own sandbox (and still does) and did not allow any extensions to be attached to it. In iOS 10 that changed, and developers can finally write their own iMessage extensions that add even more interactivity to conversations.

iMessage apps can be of two different types:

Sticker packs

This is a special, unusual kind of app that contains only images, with absolutely no code. You can create this kind of app so users can send the images to one another in iMessage. For instance, if you offer a sticker pack full of heart shapes, users can then download the app and attach those hearts to messages that they or others send. In other words, as the name implies, images can stick to messages!


Full-fledged apps

This is where you have full control over how your iMessage app works. You can do some really fun stuff in this mode, which we will review soon. For instance, you can change an existing sticker that was sent previously by one of your contacts, so that you and the person you’re chatting with can collaboratively send and receive messages to each other.

Setting Up a Sticker Pack Application


You want to create a simple iMessage application that allows your users to send stickers to each other, without writing any code.


Follow these steps:

  1. Open Xcode if it’s not already open.
  2. Create a new project. In the new project dialog, choose Sticker Pack Application and then click Next.
  3. Enter a product name for your project and then click Next.
  4. You will then be asked to save the project somewhere. Choose an appropriate location to save the project to finish this process.
  5. You should now see your project opened in Xcode, with a file named Stickers.xcstickers. Click on this file and place your sticker images inside.
  6. After you’ve completed these steps, test your application on the simulator and then on devices as thoroughly as possible. Once you are happy, you need to code sign your app and release it to the iMessage App Store.


With the opening up of iMessage as a platform where developers can build stand-alone apps, Apple has created a new type of store called iMessage App Store, where applications that are compatible with iMessage will show up in the list and users can purchase or download them without cost.

If you create a sticker pack app with no accompanying iOS app, your app shows up only in the iMessage App Store. If you create an iOS app with an accompanying iMessage extension (stickers), your app shows up both in the iOS App Store (for the main iOS app) and also in the iMessage App Store (for your iMessage extension).


Your stickers can be PDF, PNG, APNG (PNG with an alpha layer), JPEG, or even (animated) GIF, but Apple recommends using PNG files for the sake of quality. If you are desperate to create a sticker app but have no images to test with, simply open Finder at /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/, open the ICNS files in that folder, export them as PNG files, and drag and drop them into your Stickers.xcstickers file in Xcode. Then build and run your project on the simulator.


Building a Full-Fledged iMessage Application


You want to build a custom iMessage application where you have full control over the presentation of your stickers and how the user interacts with them.


Create an iMessage application in Xcode by following these steps:

  1. Open Xcode if it’s not already open.
  2. Create a new project. In the template window, choose iMessage Application and then click Next.



3. You will be asked to save your project somewhere. Do so, and you should then see Xcode open up your project.


Now that you have created your iMessage app, it’s time to learn a bit about what’s new in the Messages framework in the iOS 10 SDK. This framework contains many classes, the most important of which are:

MSMessagesAppViewController
The main view controller of your extension. It gets displayed to users when they open your iMessage application.

MSStickerBrowserViewController
A view controller that gets added to the app view controller and is responsible for displaying your stickers to the user.

MSSticker
A class that encapsulates a single sticker. There is one MSSticker for each sticker in your pack.

MSStickerView
Every MSSticker instance has to be placed inside a view to be displayed to the user in the browser view controller. MSStickerView is the class for that view.

When you build an iMessage application as we have just done, your app is then separated into two entry points:

  • The iOS app entry point with your app delegate and the whole shebang
  • The iMessage app extension entry point

This is unlike the sticker pack app that we talked about earlier in this chapter. Sticker pack apps are iMessage apps but have no iOS apps attached to them. Therefore there is no code to be written. In full-fledged iMessage apps, your app is divided into an iOS app and an iMessage app, so you have two of some files, such as the Assets.xcassets file.

Even with custom sticker pack applications, you can build the apps in two different ways:

  • Using the existing Messages classes, such as MSStickerBrowserViewController, which do the heavy lifting for you
  • Using custom collection view controllers that will be attached to your main MSMessagesAppViewController instance

Follow these steps to program the actual logic of the app:


  1. Drag and drop your PNG stickers into your project’s structure, on their own and not in an asset catalog. The reason is that we need to find them using their URLs, so they need to sit on disk directly.
  2. Create a new Cocoa Touch class in your project that subclasses MSStickerBrowserViewController.
  3. Your instance of MSStickerBrowserViewController has a property called stickerBrowserView of type MSStickerBrowserView, which in turn has a property named dataSource of type MSStickerBrowserViewDataSource?. Your browser view controller by default becomes this data source, which means you need to implement all the non-optional methods of this protocol, such as numberOfStickers(in:). So let’s do that now (assuming a stickers array property on the controller):
override func numberOfStickers(in stickerBrowserView: MSStickerBrowserView) -> Int {
    return stickers.count
}

override func stickerBrowserView(_ stickerBrowserView: MSStickerBrowserView, stickerAt index: Int) -> MSSticker {
    return stickers[index]
}

Our browser view controller is done, but how do we display it to the user? Remember our MSMessagesAppViewController? Well, the answer is through that view controller. In the viewDidLoad() function of the aforementioned view controller, load your browser view controller and add it as a child view controller:

override func viewDidLoad() {
    super.viewDidLoad()

    let controller = BrowserViewController(stickerSize: .regular)

    controller.willMove(toParentViewController: self)
    addChildViewController(controller)

    if let vcView = controller.view {
        view.addSubview(vcView)
        vcView.frame = view.bounds
        vcView.translatesAutoresizingMaskIntoConstraints = false
        vcView.leftAnchor.constraint(equalTo: view.leftAnchor).isActive = true
        vcView.rightAnchor.constraint(equalTo: view.rightAnchor).isActive = true
        vcView.topAnchor.constraint(equalTo: view.topAnchor).isActive = true
        vcView.bottomAnchor.constraint(equalTo: view.bottomAnchor).isActive = true
    }

    controller.didMove(toParentViewController: self)
}

Now press the Run button on Xcode to run your application on the simulator or device.

When Xcode asks which app to run, choose the Messages app and continue. Once the simulator is running, you can open the Messages app, go to one of the conversations the simulator provides for you, and press the Apps button on the keyboard.


In this blog, I introduced you to the new Messages framework in iOS 10, which allows you to create sticker packs and applications that integrate with iMessage. We covered the basic classes you need to be aware of, including MSStickerBrowserViewController, MSMessagesAppViewController, MSSticker, and MSStickerView.

The Messages framework provides APIs to give you a large amount of control over your iMessage apps. For further reading, I would recommend checking out Apple’s Messages Framework Reference.

App Store Connect API To Automate TestFlight Workflow


Most mobile application developers try to automate the build sharing process, as it is one of the most tedious tasks in an app development cycle. However, it has always remained difficult, especially for iOS developers, because of Apple’s code signing requirements. So when iOS developers start thinking about automating build sharing, the first option that comes to mind is TestFlight.

Before TestFlight’s acquisition by Apple, it was easy to automate the build sharing process. TestFlight had its own public APIs to upload and share builds from the command line, and developers used these APIs to write automation scripts. After Apple’s acquisition, TestFlight became part of App Store Connect and the old APIs were invalidated. Therefore, to upload or share builds, developers had to rely on third-party tools like Fastlane.

App Store Connect API

At WWDC 2018, Apple announced the new App Store Connect API and made it publicly available in November 2018. By using the App Store Connect API, developers can now automate the TestFlight workflow without relying on any third-party tool.

In this short post, we will see a use case example of App Store Connect API for TestFlight.


The App Store Connect API is a REST API to access data from the Apple server. Use of this API requires authorization via a JSON Web Token (JWT). An API request without this token results in a NOT_AUTHORIZED error. Generating the JWT is a tedious task. We need to follow the steps below to use the App Store Connect API:

  1. Create an API key in the App Store Connect portal
  2. Generate a JWT using the above API key
  3. Send the JWT with each API call

Let’s now deep dive into each step.

Creating the API Key

The API key is a pair of public and private keys. You download the private key from App Store Connect, and the public key is stored on the Apple server. To create the private key, follow the steps below:

  1. Log in to the App Store Connect portal
  2. Go to the ‘Users and Access’ section
  3. Then select the ‘Keys’ section

The account holder (Legal role) needs to request access to generate API keys.

Once you get access, you can generate an API key.

There are different access levels for keys, such as Admin, App Manager, and Developer. A key with Admin access can be used for all App Store Connect APIs.

Once you generate the API key, you can download it. The key is available for download a single time only, so make sure to keep it secure once downloaded.

The API key never expires; you can use it as long as it’s valid. In case you lose it, or it is compromised, remember to revoke it immediately, because anyone who has this key can access your App Store record.

Generate JWT Token

Now we have the private key required to generate the JWT token. To generate the token, we also need the below-mentioned parameters:

  1. Private key ID: You can find it on the Keys tab (KEY ID).
  2. Issuer ID: Once you generate the private key, you will get an Issuer ID. It is also available at the top of the Keys tab.
  3. Token expiry: The generated token can be used for a maximum of 20 minutes; it expires after the specified time lapses.
  4. Audience: As of now it is “appstoreconnect-v1”.
  5. Algorithm: The ES256 JWT algorithm is used to generate the token.

Once all the parameters are in place, we can generate the JWT. Below is a Ruby script based on the one used in the WWDC demo.

require "base64"
require "jwt"

private_key ="path_to_private_key/AuthKey_#{KEY_ID}.p8"))

token = JWT.encode(
  {
    iss: ISSUER_ID,
    exp: + 20 * 60,
    aud: "appstoreconnect-v1"
  },
  private_key,
  "ES256",
  { kid: KEY_ID }
)

puts token


Let’s take a look at the steps to generate a token:

  1. Create a new file with the name jwt.rb and copy the above script into it.
  2. Replace the ISSUER_ID, KEY_ID, and private key file path values in the script with your actual values.
  3. To run this script, you need the jwt Ruby gem on your machine. Use the following command to install it: $ sudo gem install jwt
  4. After installing the gem, run the script with: $ ruby jwt.rb

You will get a token as the output of the above script. You can use this token along with your API calls. Please note that the generated token remains valid for 20 minutes; if you want to continue after that, don’t forget to generate another one.

Send JWT token with API call

Now that we have a token, let’s see a few examples of App Store Connect API for TestFlight. There are many APIs available to automate TestFlight workflow. We will see an example of getting information about builds available on App Store Connect. We will also look at an example of submitting a build to review process. This will give you an idea of how to use the App Store Connect API.

Example 1: Get build information:

Below is the API for getting build information. If you hit this API without the JWT, it responds with an error:

$ curl

{
  "errors": [{
    "status": "401",
    "code": "NOT_AUTHORIZED",
    "title": "Authentication credentials are missing or invalid.",
    "detail": "Provide a properly configured and signed bearer token, and make sure that it has not expired. Learn more about Generating Tokens for API Requests"
  }]
}

So you need to pass the above-generated JWT in the request:

$ curl --header "Authorization: Bearer your_jwt_token"

{
  "data": [], // Array of builds available in your App Store Connect account
  "links": {
    "self": ""
  },
  "meta": {
    "paging": {
      "total": 2,
      "limit": 50
    }
  }
}

Example 2: Submit build for review process:

By using the build API above, you can get the ID of a build. Use this ID to submit the build for the review process. You send the build information in a request body like:

{
  "data": {
    "type": "betaAppReviewSubmissions",
    "relationships": {
      "build": {
        "data": {
          "type": "builds",
          "id": "your_build_Id"
        }
      }
    }
  }
}
In the above request body, you just need to replace your build ID. The final request will look like:

$ curl -X POST -H "Content-Type: application/json" --data '{"data":{"type":"betaAppReviewSubmissions","relationships":{"build":{"data":{"type":"builds","id":"your_build_Id"}}}}}' --header "Authorization: Bearer your_jwt_token"

That’s it. The above API call submits the build for the review process. In the same way, you can use any other App Store Connect API, for example to get a list of beta testers or to manage beta groups.


We have seen the end-to-end flow for the App Store Connect API. By using these APIs you can automate the TestFlight workflow. You can also develop tools to automate the release process without relying on any third-party tool. You can find the documentation for the App Store Connect API here. I hope you’ll find this post useful. Good luck and have fun.






Text Recognition using Firebase ML Kit for Android

Firebase ML Kit Introduction

At Google I/O 2018, Google announced Firebase ML Kit, a part of the Firebase suite that intends to give our apps the ability to support intelligent features with ease. The SDK currently comes with a collection of pre-defined capabilities that are commonly required in applications; ML Kit wraps these machine learning capabilities and offers them all through a single SDK.


Currently ML Kit offers the ability to:

  • Recognize text
  • Recognize landmarks
  • Detect faces
  • Scan barcodes
  • Label images



Recognize Text in Images with Firebase ML Kit

ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models. Document text recognition is available only as a cloud-based model.


Before you proceed, make sure you have access to the following:

  • the latest version of Android Studio
  • a device or emulator running Android API level 21 or higher
  • a Google account for Firebase and Google Cloud

Create a Firebase Project

To enable Firebase services for your app, you must create a Firebase project for it. So log in to the Firebase console and, on the welcome screen, press the Add project button. In the dialog that pops up, give the project a name and press the Create project button.


From the overview screen of your new project, click Add Firebase to your Android app. Enter the package name and other information and press the Register app button. Now download the configuration file (google-services.json) that contains all the necessary Firebase metadata for your app.

Configure Your Android Studio Project

  1. Switch to the Project view in Android Studio to see your project root directory. Move the google-services.json file you just downloaded into your Android app module root directory.
  2. Modify your project-level build.gradle file to use Firebase.
  3. Add the ML Kit dependencies in the app-level build.gradle.
  4. Finally, press “Sync Now”.
  5. Add the required permissions in AndroidManifest.xml.

On Device Text Recognition

To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionTextRecognizer’s processImage method. If the text recognition operation succeeds, a FirebaseVisionText object will be passed to the success listener. A FirebaseVisionText object contains the full text recognized in the image and zero or more TextBlock objects. Each TextBlock represents a rectangular block of text, which contains zero or more Line objects. Each Line object contains zero or more Element objects, which represent words and word-like entities (dates, numbers, and so on).
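As a sketch of the flow just described (assuming you already have a Bitmap in hand and the firebase-ml-vision dependency configured; the function name is ours):

```kotlin
import android.util.Log

fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // Walk the hierarchy: blocks -> lines -> elements (words).
            for (block in result.textBlocks) {
                for (line in block.lines) {
                    for (element in line.elements) {
                        Log.d("MLKit", "Word: ${element.text}")
                    }
                }
            }
        }
        .addOnFailureListener { e ->
            Log.e("MLKit", "Text recognition failed", e)
        }
}
```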


On Cloud Text Recognition

If you want to use the cloud-based model and you have not already enabled the Cloud-based APIs for your project, do so now. Navigate to the ML Kit section of the Firebase console. If you have not already upgraded your project to the Blaze plan, click Upgrade to do so; only Blaze-level projects can use Cloud-based APIs. If Cloud-based APIs aren’t already enabled, click Enable Cloud-based APIs.


The document text recognition API provides an interface that is intended to be more convenient for working with images of documents in the cloud. To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionDocumentTextRecognizer’s processImage method. If the text recognition operation succeeds, it returns a FirebaseVisionDocumentText object, which contains the full text recognized in the image and a hierarchy of objects (blocks, paragraphs, words, symbols) that reflect the structure of the recognized document.
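The cloud variant looks much like the on-device one; here is a sketch (assumes a Blaze-plan project with Cloud-based APIs enabled; the function name is ours):

```kotlin
import android.util.Log

fun recognizeDocumentText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().cloudDocumentTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { doc ->
            // Full recognized text; doc also exposes the
            // block/paragraph/word/symbol hierarchy.
            Log.d("MLKit", doc.text)
        }
        .addOnFailureListener { e ->
            Log.e("MLKit", "Document recognition failed", e)
        }
}
```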



Stay tuned for my next article.

Android life cycle aware components

What is a life cycle aware component?

A life cycle aware component is a component that is aware of the life cycle of other components, like an activity or fragment, and performs some action in response to changes in the life cycle status of that component.

Why have life cycle aware components?

Let’s say we are developing a simple video player application, where we have an activity named VideoActivity that contains the UI to play the video, and a class named VideoPlayer that contains all the logic and mechanics to play a video. Our VideoActivity creates an instance of this VideoPlayer class in its onCreate() method.



Now, as for any video player, we would like it to play the video when VideoActivity is in the foreground, i.e., in the resumed state, and pause the video when it goes into the background, i.e., into the paused state. So we will have the corresponding calls in our VideoActivity’s onResume() and onPause() methods.



Also, we would like it to stop playing completely and release its resources when the activity gets destroyed. Thus we will have a stop call in VideoActivity’s onDestroy() method.
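The snippets referenced above are not reproduced here; an equivalent sketch of what VideoActivity ends up doing (class and method names assumed) looks like this:

```kotlin
import android.os.Bundle

class VideoPlayer {
    fun play() { /* start playback */ }
    fun pause() { /* pause playback */ }
    fun stop() { /* release player resources */ }
}

class VideoActivity : AppCompatActivity() {

    private lateinit var videoPlayer: VideoPlayer

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        videoPlayer = VideoPlayer()
    }

    // The activity itself must remember to drive the player
    // through every life cycle callback.
    override fun onResume() {
        super.onResume()
    }

    override fun onPause() {
        videoPlayer.pause()
        super.onPause()
    }

    override fun onDestroy() {
        videoPlayer.stop()
        super.onDestroy()
    }
}
```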

When we analyze this code, we can see that even for this simple application our activity has to take a lot of care about calling the play, pause, and stop methods of the VideoPlayer class. Now imagine if we add separate components for audio, buffering, etc.; then our VideoActivity has to take care of all these components inside its life cycle callback methods, which leads to poorly organized, error-prone code.


Using arch.lifecycle 

With the introduction of life cycle aware components in the android.arch.lifecycle library, we can move all this code to the individual components. Our activities or fragments no longer need to manage this component logic and can focus on their own primary job, i.e., maintaining the UI. Thus, the code becomes clean, maintainable, and testable.

The android.arch.lifecycle package provides classes and interfaces that prove helpful to solve such problems in an isolated way.

So let’s dive and see how we can implement the above example using life cycle aware components.

Life cycle aware components way

To keep things simple, we can add the life cycle components from the android.arch library to our app-level Gradle file.
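The dependency lines are not shown in the original post; with the android.arch artifacts they might look like this (the version is illustrative):

```groovy
dependencies {
    // Lifecycle runtime plus the annotation processor for @OnLifecycleEvent
    implementation "android.arch.lifecycle:extensions:1.1.1"
    annotationProcessor "android.arch.lifecycle:compiler:1.1.1"
}
```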



Once we have integrated the arch components, we can make our VideoPlayer class implement LifecycleObserver, which is an empty interface with annotations. Using the specific annotations on the VideoPlayer class methods, it will be notified about the life cycle state changes in VideoActivity.

We are not done yet. We need some binding between this VideoPlayer class and the VideoActivity so that our VideoPlayer object gets notified about the life cycle state changes in VideoActivity.

Well, this binding is quite easy. VideoActivity is an instance of AppCompatActivity, which implements the LifecycleOwner interface. LifecycleOwner is a single-method interface containing getLifecycle(), which returns the Lifecycle object corresponding to its implementing class; this object keeps track of the life cycle state changes of the activity/fragment or any other component having a life cycle. The Lifecycle object is observable and notifies its observers about changes in state.

So we have our VideoPlayer, an instance of LifecycleObserver, and we need to add it as an observer to the Lifecycle object of VideoActivity. So we will modify VideoActivity accordingly.
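A sketch of what the life cycle aware version could look like, using the android.arch annotations (class and method names assumed):

```kotlin
import android.arch.lifecycle.Lifecycle
import android.arch.lifecycle.LifecycleObserver
import android.arch.lifecycle.OnLifecycleEvent
import android.os.Bundle

class VideoPlayer : LifecycleObserver {

    fun play() { /* start playback */ }

    fun pause() { /* pause playback */ }

    fun stop() { /* release player resources */ }
}

class VideoActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // One line of binding; no forwarding calls needed in
        // onResume()/onPause()/onDestroy().
        lifecycle.addObserver(VideoPlayer())
    }
}
```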

Well, this makes things quite resilient and isolated. Our VideoPlayer logic is separated from VideoActivity, and VideoActivity no longer needs to bother about calling its dependent components’ methods to pause or play in its life cycle callbacks, which makes the code clean, manageable, and testable.


The beauty of such separation of concerns can also be felt when we are developing a library intended to be used as a third-party library. It should not be a concern for the end users of our library, i.e., the developers using it, to call the life-cycle-dependent methods of our library. They might miss it, or may not be aware of which methods to call (because developers don’t usually read the documentation completely), leading to memory leaks or, worse, app crashes.

Another use case is when an activity depends on some network call handled by a network manager class. We can make the network manager class life cycle aware so that it supplies data to the activity only when it is alive, or better, does not keep a reference to the activity after it is destroyed, thus avoiding memory leaks.

We can develop a well managed app using the life cycle aware components provided by android.arch.lifecycle package. The resulting code will be loosely coupled and thus easy for modifications, testing and debugging which makes our life easy as developers.

Kotlin Kronicles for Android developers — part 1

Another blog on “why Kotlin”? Cliché? Not really. This is more like a “why not Kotlin?” kind of blog post, and my attempt to convince Android app developers to migrate to Kotlin. It doesn’t matter if you have little or no knowledge of Kotlin, or you are an iOS developer who worships Swift; read along, I am sure Kotlin will impress you (if not my writing).

I am going to show some of the amazing features of the Kotlin programming language that make development so much easier and more fun, and make the code so readable it is as if you are reading plain English. I read somewhere that “a programming language isn’t for computers, computers understand only 1s and 0s; it is for humans,” and I couldn’t agree more. There is a learning curve, sure. Where isn’t there? It pays off nicely: Kotlin makes us do more with fewer lines of code; Kotlin makes us productive.

Let’s quickly walk over some of the obvious reasons for migrating to Kotlin:

  • Kotlin is one of the officially supported languages for Android app development, as announced at Google I/O 2017.
  • Kotlin is 100% interoperable with Java, which basically means Kotlin can use Java classes and methods and vice versa.
  • Kotlin has several modern programming language features like lambdas, higher-order functions, null safety, extensions, etc.
  • Kotlin is developed and maintained by JetBrains, the company behind several IDEs that developers use every day (IntelliJ IDEA, PyCharm, PhpStorm, GoLand, etc.).

This is available all over the internet. This is the content of “Why Kotlin” category of blogs.

Let’s talk about something a little more interesting.

Higher Order Functions:

Kotlin functions are first-class citizens, meaning functions can be stored in variables, passed as arguments, or returned from other functions. A higher-order function is a function that takes a function as a parameter or returns a function.

This may sound strange at first. Why in the world would I pass a function to another function (or return a function from another function)? It is very common in various programming languages, including JavaScript, Swift, Python, and, apparently, Kotlin. An excellent example of a higher-order function is map. map takes a function as a parameter and returns a list of the results of applying that function to each item of the original list or array.
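The original gist may not render here, so below is a minimal, self-contained sketch of the idea; the stringStrirrer() transform (a hypothetical function that reverses and upper-cases a string) stands in for whatever the original gist used:

```kotlin
// Hypothetical transform: reverse a string and upper-case it
fun stringStrirrer(s: String): String = s.reversed().uppercase()

fun main() {
    val x = listOf("alpha", "beta")
    // map applies the given function to each item and returns the results
    val result = x.map(::stringStrirrer)
    println(result)  // [AHPLA, ATEB]
}
```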

Check out the map function in the code above: it applies the stringStrirrer() function to each item of x, and the result of the map operation is the transformed list.

Data classes:

Java POJOs, Plain Old Java Objects, or simply classes that store some data, require a lot of boilerplate code most of the time: getters, setters, equals(), hashCode(), toString(), etc. A Kotlin data class derives these functions automatically from the properties defined in its primary constructor.
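For illustration, a sketch of such a data class (the User type and its properties are my own example, not the original gist):

```kotlin
// One line replaces a Java POJO with getters, equals, hashCode, and toString
data class User(val name: String, val age: Int)

fun main() {
    val u = User("Asha", 30)
    println(u)                    // toString() for free: User(name=Asha, age=30)
    val older = u.copy(age = 31)  // copy() with a modified property
    val (name, age) = older       // destructuring via componentN()
    println("$name is $age")      // Asha is 31
}
```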

Just one line of code replaces the many lines of a Java POJO. For custom behavior we can override functions in data classes. Beyond this, Kotlin data classes also come bundled with copy() and componentN() functions, which allow copying an object and destructuring it, respectively.

Dealing With Strings:

Kotlin standard library makes dealing with strings so much easier. Here is a sample:
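A few examples of the kind of thing the article has in mind (the specific calls below are my own picks, all from the Kotlin standard library):

```kotlin
fun main() {
    println("  Kotlin Kronicles  ".trim())  // "Kotlin Kronicles"
    println("madam".reversed())             // "madam"
    println("1234".toInt() + 1)             // 1235
    println("a,b,c".split(","))             // [a, b, c]
    println("kotlin".startsWith("kot"))     // true
}
```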

No helper classes, public static methods, or StringUtils are required. We can invoke these functions as if they belonged to the String class itself.

Dealing with Collections:

Just as with String, the helper methods in the java.util.Collections class are no longer required. We can directly call sort, max, min, reverse, swap, etc. on collections.
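A quick sketch (my own example) of calling these directly on a list:

```kotlin
fun main() {
    val numbers = listOf(42, 7, 19)
    println(numbers.sorted())     // [7, 19, 42]
    println(numbers.maxOrNull())  // 42
    println(numbers.minOrNull())  // 7
    println(numbers.reversed())   // [19, 7, 42]
}
```

(Recent Kotlin versions prefer the null-safe maxOrNull()/minOrNull() over the older max()/min().)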

Consider a bank use case. A bank has many customers, a customer does several transactions every month. Think in terms of objects:

As the picture above shows, a bank has many customers; a customer has some properties (name, list of transactions, etc.) and makes several transactions; a transaction has properties like amount and type. It will look something like this in Java:

Find the customer with minimum balance:
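The original gist is not reproduced here; below is a self-contained sketch of the Kotlin side, with the Customer model and its values assumed for illustration:

```kotlin
data class Customer(val name: String, val balance: Double)

fun main() {
    val customers = listOf(
        Customer("Ravi", 5000.0),
        Customer("Meera", 1200.0),
        Customer("John", 15000.0)
    )
    // Reads like plain English: the customer with the minimum balance
    val poorest = customers.minByOrNull { it.balance }
    println(poorest?.name)  // Meera
}
```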

I don't know about you, but I think the Kotlin way is much cleaner, simpler, and more readable. And we didn't import any helper class for it (the Java way needed the Collections class). I can read it as plain English, which is more than I can say for the Java counterpart. The motive here is not to compare Java with Kotlin, but to appreciate the Kotlin Kronicles.

There are several functions like map, filter, reduce, flatmap, fold, partition etc. Here is how we can simplify our tasks by combining these standard functions (for each problem statement below, imagine doing it in Java):
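The gist itself is not embedded here, so below is a runnable sketch of the four operations against an assumed mini-model of the bank (names and values are illustrative):

```kotlin
data class Transaction(val amount: Double, val type: String)  // "DEPOSIT" or "WITHDRAW"
data class Customer(val name: String, val transactions: List<Transaction>)

fun main() {
    val customers = listOf(
        Customer("Ravi", listOf(Transaction(100.0, "DEPOSIT"), Transaction(40.0, "WITHDRAW"))),
        Customer("Meera", listOf(Transaction(200.0, "DEPOSIT")))
    )

    // 1. flatMap: one flat list of every transaction from every customer
    val all = customers.flatMap { it.transactions }

    // 2. filter + sumOf (sumBy in older Kotlin): total deposited across the bank
    val totalDeposited = all.filter { it.type == "DEPOSIT" }.sumOf { it.amount }

    // 3. fold + when: net amount, deposits minus withdrawals
    val net = all.fold(0.0) { acc, t ->
        when (t.type) {
            "DEPOSIT" -> acc + t.amount
            else -> acc - t.amount
        }
    }

    // 4. partition: split into deposits and withdrawals in one pass
    val (deposits, withdrawals) = all.partition { it.type == "DEPOSIT" }

    println("$totalDeposited $net ${deposits.size} ${withdrawals.size}")  // 300.0 260.0 2 1
}
```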

As is clear from the above gist, we can solve mundane problems with far fewer lines of code. Readability-wise, I just love it. An explanation of the code above:

  1. flatMap: Returns a single list of all elements yielded from the results of the transform function being invoked on each element of the original collection (in our case, the transform function returned the list of transactions of each individual customer).
  2. filter and sumBy: Here we combined the filter and sum operations into a one-liner to find the total amount deposited to and withdrawn from the bank across all customers.
  3. fold: Accumulates a value, starting with the initial value (0.0 in our case) and applying the operation (the when statement) from left to right to the current accumulator value and each element. Here we used fold and when to find the net amount deposited in the bank, considering all deposits and withdrawals.
  4. partition: Splits the original collection into a pair of lists, where the first list contains the elements for which the predicate (the separation function in this case) yielded true, and the second contains those for which it yielded false. Of course, we could filter twice, but this is so much easier.

So many complex operations simplified by the Kotlin standard library.


Extension Functions:

One of my favourite features. Kotlin extensions let us add functionality to a class without modifying the original class, just like in Swift and C#. This can be very handy if used properly. Check this out:
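A sketch of such an extension (the conversion rate is a made-up constant for illustration):

```kotlin
// Hypothetical USD-to-INR rate; a real app would fetch the live rate
fun Double.toINR(): Double = this * 70.0

fun main() {
    println(100.0.toINR())  // 7000.0
}
```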

In the above code, we just added a new function called toINR() to Kotlin's Double type. So we basically added a new function to one of Kotlin's primitive types, how about that 😎. And it is a one-liner function: no curly braces, return type, or return statement whatsoever. Noticed that conciseness, did you?

Since Kotlin supports higher order functions we can combine this with extension functions to solidify our code. One very common problem with android development involving SQLite is, developers often forget to end the transaction. Then we waste hours debugging it. Here is how we can avoid it:
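Since SQLiteDatabase only exists on Android, the sketch below uses a minimal stand-in class so the pattern can run anywhere; on a real SQLiteDatabase the extension body would call the same begin/end methods:

```kotlin
// Stand-in for android.database.sqlite.SQLiteDatabase
class FakeDatabase {
    var openTransactions = 0
    fun beginTransaction() { openTransactions++ }
    fun endTransaction() { openTransactions-- }
}

// Extension + higher-order function: endTransaction() can never be forgotten
fun FakeDatabase.performDBTransaction(operation: () -> Unit) {
    beginTransaction()
    try {
        operation()
    } finally {
        endTransaction()
    }
}

fun main() {
    val db = FakeDatabase()
    db.performDBTransaction {
        // insert/update/delete statements would go here
    }
    println(db.openTransactions)  // 0
}
```

(On a real SQLiteDatabase you would also call setTransactionSuccessful() before endTransaction() so the work is committed.)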

We added an extension function called performDBTransaction to SQLiteDatabase. This function takes a parameter that is itself a function, with no input and no output; this parameter function is whatever we want executed between the begin and end of the transaction. performDBTransaction calls beginTransaction(), then the passed operation, and then endTransaction(). We can use it wherever required, without having to double-check whether we called endTransaction() or not.

I always forget to call commit() or apply() when storing data in Shared Preferences. Similar approach:
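Again with a stand-in for the Android type so the pattern runs anywhere, a sketch of such a persist() extension (the names here are mine, not the original gist's):

```kotlin
// Stand-in for SharedPreferences.Editor
class FakeEditor {
    val data = mutableMapOf<String, String>()
    var applied = false
    fun putString(key: String, value: String): FakeEditor { data[key] = value; return this }
    fun apply() { applied = true }
}

// Run the edits, then apply() is guaranteed to be called
fun FakeEditor.persist(edits: FakeEditor.() -> Unit) {
    edits()
    apply()
}

fun main() {
    val editor = FakeEditor()
    editor.persist { putString("token", "abc123") }
    println(editor.applied)  // true
}
```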

An extension function persist() takes care of it. We call persist() as if it were part of SharedPreferences.Editor.

Smart Casts:

Going back to our bank example. Let’s say the transaction can be of three types as explained in below figure:

NEFT transaction has fixed charges, IMPS has some bank-related charges. Now we deal with a transaction object, the super class “Transaction”. We need to identify the type of transaction so that the transaction can be processed accordingly. Here is how this can be handled in Kotlin:
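A runnable sketch of the idea (the charge amounts are invented for illustration):

```kotlin
open class Transaction(val amount: Double)
class NEFT(amount: Double) : Transaction(amount) {
    fun fixedCharges() = 25.0          // hypothetical fixed charge
}
class IMPS(amount: Double) : Transaction(amount) {
    fun bankCharges() = amount * 0.01  // hypothetical bank-related charge
}

fun totalCost(transaction: Transaction): Double = when (transaction) {
    // No explicit cast: inside each branch the compiler smart-casts
    is NEFT -> transaction.amount + transaction.fixedCharges()
    is IMPS -> transaction.amount + transaction.bankCharges()
    else -> transaction.amount
}

fun main() {
    println(totalCost(NEFT(1000.0)))  // 1025.0
    println(totalCost(IMPS(1000.0)))  // 1010.0
}
```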

In the when branches above, we didn't cast the Transaction object into NEFT or IMPS, yet we are able to invoke the functions of those classes. This is a smart cast in Kotlin: the compiler automatically casts the transaction object to its respective type inside each branch.


As developers, we need to focus on the stuff that matters, the core of our product, and boilerplate code isn't part of that. Kotlin helps reduce boilerplate and makes development fun. Kotlin has many amazing features which ease development and testing. Do not let the fear of the unknown dictate your choice of programming language; migrate your apps to Kotlin now. The initial resistance is the only resistance.

I sincerely hope you enjoyed the first article of this Kotlin Kronicles series. We have just scratched the surface. Stay tuned for part 2. Let me know if you want me to cover anything specific.

Got any suggestions? Shoot them in the comments below.

What is your favourite feature of Kotlin?

Keep Developing…

iOS Build Management using Custom Build Scheme


One of the best practices in iOS development is to manage multiple environments during the development of a project. Many a time we might have to jump between DEV, QA, STAGE, and Production environments. Clients often request to have both the development version of the app and the production version, i.e. the App Store release, on the same device.

If you have ever faced or might face this situation, then you need a custom build scheme.


This blog explains the significance of custom build schemes and build configurations in Xcode. We will see how we can leverage these to configure an iOS project to support multiple build environments without duplicating targets, while keeping the same code base.


Prerequisites

  • Xcode 8.0 onwards
  • A Mac running macOS Sierra or later

Advantages of Custom Builds

  • Write code that only runs in a particular environment. For example, on DEV you might want different values for constants in the app than on Production.
  • Switch between different environments easily to deliver a build that talks to the production server after testing your app in a development environment.

Difference Between Build Schemes & Build Configurations

Before we make actual changes in Xcode, let's first understand the difference between build schemes and build configurations.

A build scheme is a blueprint for an entire build process. It is a way of telling Xcode what build configurations you want to use to create the development, unit test, and production builds for a given target (framework or app bundle).

A build configuration is a specific group of build settings that can be applied to any target.

Most app projects come with two build configurations and one build scheme. You get the debug and release build configurations, along with a build scheme that runs the debug configuration for debugging and the release configuration for archiving/submission.

For most projects, this is perfectly fine and requires no tweaking. However, if you want to offer both a DEV and a PRODUCTION version of the same app, it’s not quite enough. You must add a new build configuration to achieve this.

Adding a new build configuration

Whenever you wish to support multiple environments in the app, you need to start by adding a new build configuration. There are some important steps involved which can seem confusing at first, so follow every step carefully.

  1. Open Xcode and select the project file.


2. Go to Editor → Add Configuration → Duplicate Debug Configuration.


Repeat Steps 1 and 2 for Release configuration.
NOTE: Remember that for every environment you must duplicate both the Debug and the Release configuration. Thus, if you want to support DEV, QA, STAGE, and PRO, you should have the following configurations:

  • DEV-Debug, DEV-Release
  • QA-Debug, QA-Release
  • STAGE-Debug, STAGE-Release
  • PRO-Debug, PRO-Release


Creating a separate build scheme for every environment

We’re going to take our new build configurations and create a build scheme that runs them.

  1. Tap on the currently active scheme.
  2. In the dropdown, select New Scheme.


3. Provide a name for the new build scheme. I usually follow <Name of the app>-<Environment>. For example, MultipleEnvApp-QA.


Once you’ve done this, notice that your new build scheme is selected.


We’re not done yet. We have a build scheme, but it isn’t using our new build configurations yet.

4. Click on your build scheme and select Edit Scheme.


5. Select the appropriate build configuration as per the environment. For example, our selected scheme is MultipleEnvApp-QA, hence choose respective QA build configurations.


That's it. In the same way, you can create and configure schemes for the STAGE and PRO environments. You can rename the default scheme to MultipleEnvApp-DEV.

Writing code that runs on a particular environment of your app

Unfortunately, having separate build schemes isn’t quite enough. We also need a way to selectively run blocks of code on a particular environment. To do that, we are going to add a custom Swift flag that only applies to the particular build configurations we just created.

  1. Select the target and then Go to Build Settings, and scroll down to Other Swift Flags.
  2. You must add the flags for every configuration. For example, add the flag “-DQA” to both of the QA build configurations.


-D is the namespace (prefix) for custom flags that are passed into a build command. In your source code you reference the flag without the "-D" prefix, i.e. simply as QA.

3. Go to any of your source files, for example AppDelegate, and add the conditional-compilation code. 

We have created a global variable SOME_SERVICE_KEY and, using #if QA-style conditional compilation blocks, given it a unique value for each environment. In this way, you can use different service keys and constants for different environments.

Different bundle identifiers for different build configurations

Optionally, if you want to use a different bundle ID for different configurations, do the following:

  1. Create two app IDs on your Apple Developer portal.
  2. Go to your project settings and set the appropriate bundle identifiers for different build configurations.



That's all there is to it. Now you are set up to deliver a configurable app for different environments using the same shared codebase. Here is the GitHub project that contains all the configurations we followed in this blog.

Happy Coding!

Build an iOS app that connects to IoT device using Bluetooth

You must be aware of the term "IoT", the Internet of Things. It is one of the hottest technologies worldwide nowadays, with many products and devices already available in the market.

I won't promise detailed knowledge of IoT, but by the end of this article you will have a high-level idea of how an IoT system works. To understand it, we will create an iOS demo app which will send data to and receive data from an IoT-compatible device with the help of Arduino.

In this article, we will cover the following points:
– Introduction to IoT
– Arduino Overview
– iOS Demo App to understand end to end flow.

Introduction to IoT
The Internet of Things is a system of devices connected to the internet with the ability to collect and exchange data. The device, or "Thing", in IoT could be anything embedded with electronics, software, and sensors: the lights in a household, a smart air conditioner, or a person with a heart monitor.
Let's see what the opportunities in IoT are, and why it became such a hot technology:
  • The connected world of devices, people and data helps to create numerous business opportunities for many sectors. For example, If I own car parts manufacturing business then I might want to know which parts are most popular. Using IoT, I can use a sensor in a showroom, to detect which areas are more popular or in which area customer spends more time. I will use this data to identify parts and increase production of these parts.
  • Real-time updates provide data that makes decision-making more accurate.
  • Costs of IoT components have significantly gone down, which effectively means that the cost of IoT-linked devices is getting more affordable day by day.

There are many other opportunities which accelerated the market for Internet of Things. It is predicted that by 2020, 25 billion devices will be available in the market.

The network used in an IoT system is chosen based on factors such as range, data rate, security, and power; these factors decide whether the device communicates over the internet, Bluetooth, WiFi, or some other network. An IoT device does not necessarily need an internet connection; it only needs to communicate over some network, which can be the internet, Bluetooth, NFC, or anything else. For short-range communication, the technology of choice is, of course, Bluetooth, and it is expected to be key for wearable products such as smartwatches and fitness bands. There are many resources available on IoT, so we will not dig deeper into this here.

Arduino Overview

We know how to send or receive data over the internet from an iOS app. But many of us don't know how this data operates IoT devices. There are open-source hardware and software platforms available in the market which are used to control IoT devices. One of these is Arduino.

Arduino is an open-source hardware and software platform. Arduino boards are able to read inputs from different sensors and turn them into outputs like turning on an LED, activating a motor, or publishing data over the internet.

These boards can accept inputs and drive outputs such as:
  • Temperature, Humidity, Pressure etc
  • Light, Infrared signals
  • Sounds
  • Motion captures
  • Heart rate, muscle movement
  • Electrical current
  • Touch, Fingerprints
  • LEDs
  • LCDs
  • Speakers
  • Motors
  • The internet
There are different types of Arduino boards available, depending on features like an Ethernet port, wireless connectivity, or USB device support. Common specifications of these hardware boards are:
  • ATmega 328 8bit chip
  • 5-20V power supply
  • 32 KB flash memory
  • 20 I/O pins

You can tell Arduino boards what to do by sending a set of instructions to the microcontroller on the board. For this, we have to use Arduino Software (IDE) and Arduino programming language.

Arduino IDE:

To write code and upload it to the board, Arduino IDE is used. It is available for Mac, Windows, and Linux platform. You can download it at

Arduino programming language:
The coding language that Arduino uses is very much like C++, which is a common language in the world of computing.
Two important functions in Arduino language are:
  • setup() – Every program must have this function. It runs once at the start of the program, like a main() function; you can do your initialisation here.
  • loop() – Every program must have this function. It is called repeatedly after setup(); you can use it to actively control the Arduino board.
Other Useful Function:
  • pinMode() – Set a pin as input or output
  • digitalWrite() – Set a digital pin high/low
  • digitalRead() – Read a digital pin’s state
  • analogRead() – Read an analog pin
  • analogWrite() – Write an analog value
  • delay() – Wait an amount of time
Example Code:
int ledPin = 3;

// setup initializes serial and the LED pin
void setup() {
    Serial.begin(9600);
    pinMode(ledPin, OUTPUT);
}

// loop checks the LED pin state each time and broadcasts whether it is HIGH or LOW
void loop() {
    if (digitalRead(ledPin) == HIGH) {
        Serial.println("HIGH");
    } else {
        Serial.println("LOW");
    }
    delay(1000);
}

You will find more details about language at

Once you write the code, you can upload it to the Arduino board using the Arduino IDE. You can download the demo Arduino program to turn an LED on and off. This program sends/receives data over Bluetooth and turns the on-board LED on or off depending on the data received. It also broadcasts the state of the LED pin. You can use an Arduino Leonardo board for this demo.

iOS Demo App 
I have created a sample iOS demo app which sends/receives data over Bluetooth. For that, I have used the Core Bluetooth framework. You can download it here.
On launch, the app will try to connect to a nearby Bluetooth device, which is the Arduino board in our case. After a successful connection, the app can send instructions to the board to turn the LED on and off.
Steps To Run:
  • Upload the LED demo program to the Arduino board. You have to upload it using the Arduino IDE. You will find more details on uploading it in the referenced link above.
  • On successful upload, keep the Arduino board powered up.
  • Launch the iOS demo app. It will connect automatically to the board on which the program is uploaded.
  • Once it is connected, the red line in the app turns green. Now you can use the app to send instructions to turn the LED on the board on and off.

You can modify this demo program to use the internet instead of Bluetooth. For an internet network, an Arduino board which is capable of broadcasting data over the internet is required.

This article is a good starting point for anyone who is interested in connecting an iOS app to an IoT device using Bluetooth Low Energy. We saw how to hook up an Arduino board to an iOS app. We used an LED on the Arduino board, but we can easily connect any other sensor to it. You can find the sample projects on GitHub. I hope you'll find these projects useful. Good luck and have fun!

Build your own custom Android ROM using Android Open Source Project(AOSP)


One of the best things about Android is custom ROMs. A custom Android ROM refers to a phone's firmware, based on Google's Android platform. The term ROM, which stands for Read Only Memory, really has very little to do with what a custom Android ROM actually is, which can be confusing. Since Android is an open-source mobile operating system, anyone can download the source code, make modifications to it, recompile it, and release it for a wide variety of devices. Anyone can install ROMs on their device and achieve a modified appearance and behavior.