Reduce App size with On Demand Resources


This blog is about On Demand Resources. Nowadays our apps are loaded with high-resolution artwork, images and resources. So much so that we need to constantly keep an eye on the IPA size of the app throughout its development life cycle. Sometimes we download some static content from the server even if it can be easily packed into our app bundle.

Apple introduced on-demand resources in iOS 9. It enables apps to load assets dynamically. You assign tags to some assets, then when you upload a build to the App Store, Apple hosts the tagged assets so that they are downloaded separately from the app. The app requests the assets when required, and can discard them when they are not needed anymore. This is a great way to save space on devices.

Why Does the IPA Size Matter?

Of course it matters. At the end of the day, iOS developers are focused on delivering a top-of-the-class user experience, and a long download time kills that for you. The first impression is always the best impression.

Is There a Solution?

Well, yes. The secret to keeping the IPA size small is On Demand Resources. I’ll also outline a few pointers you should keep in mind while organizing your slices.

On Demand Resources

As the name suggests, iOS delivers some content for you (images, PDFs, etc.) as and when your app requires it.

The main idea behind ODR is that you pack only the minimal slices needed for the basic presentation of your app into the bundle, and request any high-resolution images as and when they are to be presented to the user.

How Is This Different from Downloading Slices from a Server?

Well, if your content is static (for example, a static image), there is technically no need to set up a server just to download that content. With ODR you can still keep it all packed with your app and get the advantage of a smaller IPA as well.

How Can It Be Done?

Well, first off, head straight to your Xcode project and click on a file to view its File inspector on the right-hand side.

There is the field for On Demand Resource tags.

The same field is also present in the attributes inspector when clicking on one of the images in the asset catalog.

You can add tags to your images in the asset catalog or to any resource files. NSBundleResourceRequest provides APIs to fetch these resources using the tags we specify. This is the core of ODR.

How Tags Work

You identify on-demand resources during development by assigning them one or more tags. A tag is a string identifier you create. You can use the name of the tag to identify how the included resources are used in your app.

At runtime, you request access to remote resources by specifying a set of tags. The operating system downloads any resources marked with those tags and then retains them in storage until the app finishes using them. When the operating system needs more storage, it purges the local storage associated with one or more tags that are no longer retained. Tagged sets of resources may remain on the device for some time before they are purged.

Creating and Assigning Tags

Usually, the operating system starts downloading resources associated with a tag when the tag is requested by an app and the resources are not already on the device. Some tags contain resources that are important the first time the app launches or are required soon after the first launch. For example, a tutorial is important the first time the app is launched, but it is unlikely to be used again.

You assign tags to one of three prefetch categories in the Prefetched view in the Resource Tags pane: Initial Install Tags, Prefetched Tag Order, and Download Only On Demand.

The default category for a tag is Download Only On Demand. The view displays the tags grouped by their prefetch category and the total size for each category. The size is based on the device that was the target of the last build. Tags can be dragged between categories.

  • Initial install tags. The resources are downloaded at the same time as the app, and their size is included in the total size for the app in the App Store. The tags can be purged when they are not being accessed by at least one NSBundleResourceRequest.
  • Prefetched tag order. The resources start downloading after the app is installed, in the order in which they are listed in the Prefetched tag order group.
  • Download only on demand. The tags are downloaded when requested by the app.

Code for ODR

NSBundleResourceRequest is used for requesting ODR content. In viewDidLoad() of the TableViewController class (which displays the images for a category), we call the following method.

func conditionallyBeginAccessingResources(completionHandler: @escaping (Bool) -> Void)

This function checks if all the resources associated with tags passed in are available for use. If not, we will call:

func beginAccessingResources(completionHandler: @escaping (Error?) -> Void)

This call will download all the content associated with the tags passed in.

In the completion handler, we simply populate our data source with the images associated with the tags and they are displayed in a UITableView.
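Putting the two calls above together, the flow might look like the following sketch. This is iOS-only code; the tag name "gallery" and the loadImages() helper are hypothetical stand-ins for whatever tags and data source your app uses:

```swift
import Foundation
import UIKit

class TableViewController: UITableViewController {

    // Keep a strong reference so the resources are not purged while in use.
    var resourceRequest: NSBundleResourceRequest?

    override func viewDidLoad() {
        super.viewDidLoad()

        // "gallery" is a hypothetical tag assigned in the asset catalog.
        let request = NSBundleResourceRequest(tags: ["gallery"])
        resourceRequest = request

        // Check whether the tagged resources are already on the device.
        request.conditionallyBeginAccessingResources { available in
            if available {
                self.loadImages()
            } else {
                // Not on device yet; download them from the App Store.
                request.beginAccessingResources { error in
                    if let error = error {
                        print("ODR download failed: \(error)")
                        return
                    }
                    self.loadImages()
                }
            }
        }
    }

    func loadImages() {
        // Hypothetical helper: once access has begun, tagged resources can be
        // loaded by name as if they were always in the bundle.
        DispatchQueue.main.async {
            self.tableView.reloadData()
        }
    }

    deinit {
        // Tell the system we are done so the resources become purgeable.
        resourceRequest?.endAccessingResources()
    }
}
```

Note that the completion handlers are not called on the main queue, which is why the sketch dispatches back to it before touching the table view, and that calling endAccessingResources() (or releasing the request) is what makes the tags eligible for purging.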


On-demand resources in iOS 9 and tvOS are a great way to reduce the size of your app and deliver a better user experience to people who download and use your application. While it’s very easy to implement and set up, there are quite a few details that you must keep in mind in order for the whole on-demand resources system to work flawlessly, without excessive loading times or unnecessarily purged data.


iMessage Stickers and Apps


This blog is about iMessage apps in iOS. We all use messaging capabilities on our iOS devices. This is a bold statement and I have no proof for it, but it’s difficult to imagine a person owning an iOS device without having sent or received messages. The main messaging application on iOS is iMessage, but it’s not the only messaging option for iOS. You can download and choose among a huge selection of messaging applications.

Up until iOS 10, iMessage was fully closed. That is to say, it lived in its own sandbox (and still does), and did not allow any extensions to be attached to it. In iOS 10 that changed, and developers can finally write their own iMessage extensions that add even more interactivity to conversations.

iMessage apps can be of two different types:

Sticker packs

This is a special, unusual kind of app that contains only images, with absolutely no code. You can create this kind of app so users can send the images to one another in iMessage. For instance, if you offer a sticker pack full of heart shapes, users can then download the app and attach those hearts to messages that they or others send. In other words, as the name implies, images can stick to messages!


Full-fledged apps

This is where you have full control over how your iMessage app works. You can do some really fun stuff in this mode, which we will review soon. For instance, you can change an existing sticker that was sent previously by one of your contacts, so that you and the person you’re chatting with can collaboratively send and receive messages to each other.

Setting Up a Sticker Pack Application


You want to create a simple iMessage application that allows your users to send stickers to each other, without writing any code.


Follow these steps:

  1. Open Xcode if it’s not already open.
  2. Create a new project. In the new project dialog, choose Sticker Pack Application and then click Next.
  3. Enter a product name for your project and then click Next.
  4. You will then be asked to save the project somewhere. Choose an appropriate location to save the project to finish this process.
  5. You should now see your project opened in Xcode, along with a file named Stickers.xcstickers. Click on this file and place your sticker images inside.
  6. After you’ve completed these steps, test your application on the simulator and then on devices as thoroughly as possible. Once you are happy, you need to code sign and then release your app to the iMessage App Store.


With the opening up of iMessage as a platform where developers can build stand-alone apps, Apple has created a new type of store called iMessage App Store, where applications that are compatible with iMessage will show up in the list and users can purchase or download them without cost.

If you create a sticker pack app with no accompanying iOS app, your app shows up only in the iMessage App Store. If you create an iOS app with an accompanying iMessage extension (stickers), your app shows up both in the iOS App Store (for the main iOS app) and also in the iMessage App Store (for your iMessage extension).


Your stickers can be PDF, PNG, APNG (animated PNG), JPEG, or even (animated) GIF, but Apple recommends using PNG files for the sake of quality. If you are desperate to create a sticker app but have no images to test with, simply open Finder at /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/, open the ICNS files in that folder, export them as PNG files, and drag and drop them into your Stickers.xcstickers file in Xcode. Then build and run your project on the simulator.


Building a Full-Fledged iMessage Application


You want to build a custom iMessage application where you have full control over the presentation of your stickers and how the user interacts with them.


Create an iMessage application in Xcode by following these steps:

  1. Open Xcode if it’s not already open.
  2. Create a new project. In the template window, choose iMessage Application and then click Next.



  3. You will be asked to save your project somewhere. Do so, and then you should see Xcode open up your project.


Now that you have created your iMessage app, it’s time to learn a bit about what’s new in the Messages framework in the iOS 10 SDK. This framework contains many classes, the most important of which are:


MSMessagesAppViewController

The main view controller of your extension. It gets displayed to users when they open your iMessage application.


MSStickerBrowserViewController

A view controller that gets added to the app view controller and is responsible for displaying your stickers to the user.


MSSticker

A class that encapsulates a single sticker. There is one MSSticker for each sticker in your pack.


MSStickerView

Every MSSticker instance has to be placed inside a view to be displayed to the user in the browser view controller. MSStickerView is the class for that view.

When you build an iMessage application as we have just done, your app is then separated into two entry points:

  • The iOS app entry point with your app delegate and the whole shebang
  • The iMessage app extension entry point

This is unlike the sticker pack app that we talked about earlier in this chapter. Sticker pack apps are iMessage apps but have no iOS apps attached to them. Therefore there is no code to be written. In full-fledged iMessage apps, your app is divided into an iOS app and an iMessage app, so you have two of some files, such as the Assets.xcassets file.

Even with custom sticker pack applications, you can build the apps in two different ways:

  • Using the existing Messages classes, such as MSStickerBrowserViewController, which do the heavy lifting for you
  • Using custom collection view controllers that will be attached to your main MSMessagesAppViewController instance

Follow these steps to program the actual logic of the app:


  1. Drag and drop your PNG stickers into your project’s structure, on their own and not in an asset catalog. The reason is that we need to find them using their URLs, so they need to sit on disk directly.
  2. Create a new Cocoa Touch class in your project that will be your MSStickerBrowserViewController.
  3. Your instance of MSStickerBrowserViewController has a property called stickerBrowserView of type MSStickerBrowserView, which in turn has a property named dataSource of type MSStickerBrowserViewDataSource?. Your browser view controller by default will become this data source, which means that you need to implement all the non-optional methods of this protocol, such as numberOfStickers(in:). So let’s do that now:
override func numberOfStickers(in stickerBrowserView: MSStickerBrowserView) -> Int {
    return stickers.count
}

override func stickerBrowserView(_ stickerBrowserView: MSStickerBrowserView, stickerAt index: Int) -> MSSticker {
    return stickers[index]
}
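The stickers array used by these data source methods can be built from the PNG files you dragged into the project in step 1. A minimal sketch (the file names "heart", "star", and "smile" are hypothetical; MSSticker’s initializer throws if a file can’t be used as a sticker):

```swift
import Messages

// Build the sticker list from PNG files bundled directly in the project
// (not in an asset catalog), since MSSticker needs a file URL on disk.
func loadStickers() -> [MSSticker] {
    var stickers: [MSSticker] = []
    for name in ["heart", "star", "smile"] {
        guard let url = Bundle.main.url(forResource: name, withExtension: "png") else {
            continue
        }
        do {
            let sticker = try MSSticker(contentsOfFileURL: url,
                                        localizedDescription: name)
            stickers.append(sticker)
        } catch {
            // A file that isn't a valid sticker image ends up here.
            print("Could not load sticker \(name): \(error)")
        }
    }
    return stickers
}
```

You would typically call this once and store the result in the stickers property that the browser view controller’s data source methods read from.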

Our browser view controller is done, but how do we display it to the user? Remember our MSMessagesAppViewController? Well, the answer is through that view controller. In the viewDidLoad() function of the aforementioned view controller, load your browser view controller and add it as a child view controller:

override func viewDidLoad() {
    super.viewDidLoad()

    let controller = BrowserViewController(stickerSize: .regular)

    controller.willMove(toParentViewController: self)
    addChildViewController(controller)

    if let vcView = controller.view {
        view.addSubview(vcView)
        vcView.frame = view.bounds
        vcView.translatesAutoresizingMaskIntoConstraints = false
        vcView.leftAnchor.constraint(equalTo: view.leftAnchor).isActive = true
        vcView.rightAnchor.constraint(equalTo: view.rightAnchor).isActive = true
        vcView.topAnchor.constraint(equalTo: view.topAnchor).isActive = true
        vcView.bottomAnchor.constraint(equalTo: view.bottomAnchor).isActive = true
    }

    controller.didMove(toParentViewController: self)
}

Now press the Run button on Xcode to run your application on the simulator or device.

Xcode will ask which app to run your extension in; simply choose the Messages app and continue. Once the simulator is running, you can manually open the Messages app, go to an existing conversation that is already placed for you there by the simulator, and press the Apps button on the keyboard.


In this blog, I introduced you to the new Messages framework in iOS 10, which allows you to create sticker packs and applications to integrate with iMessage. We covered the basic classes you need to be aware of, including MSStickerBrowserViewController, MSMessagesAppViewController, MSSticker, and MSStickerView.

The Messages framework provides APIs to give you a large amount of control over your iMessage apps. For further reading, I would recommend checking out Apple’s Messages Framework Reference.

Text Recognition using Firebase ML Kit for Android

Firebase ML Kit Introduction

At Google I/O 2018, Google announced Firebase ML Kit, a part of the Firebase suite that intends to give our apps the ability to support intelligent features with more ease. The SDK currently comes with a collection of pre-defined capabilities that are commonly required in applications. Firebase ML Kit wraps these machine learning capabilities and offers all of them inside a single SDK.


Currently ML Kit offers the ability to:

  • Recognize text
  • Recognize landmarks
  • Detect faces
  • Scan barcodes
  • Label images



Recognize Text in Images with Firebase ML Kit

ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models. Document text recognition is available only as a cloud-based model.


Before you proceed, make sure you have access to the following:

  • the latest version of Android Studio
  • a device or emulator running Android API level 21 or higher
  • a Google account for Firebase and Google Cloud

Create a Firebase Project

To enable Firebase services for your app, you must create a Firebase project for it. So log in to the Firebase console and, on the welcome screen, press the Add project button. In the dialog that pops up, give the project a name and press the Create project button.


From the overview screen of your new project, click Add Firebase to your Android app. Enter the package name and other information, and press the Register app button. Now download the configuration file (google-services.json), which contains all the necessary Firebase metadata for your app.

Configure Your Android Studio Project

  1. Switch to the Project view in Android Studio to see your project root directory. Move the google-services.json file you just downloaded into your Android app module root directory.
  2. Modify your project-level build.gradle file to use Firebase.
  3. Add the dependencies in the app-level build.gradle.
  4. Finally, press “Sync now”.
  5. Add the required permissions in AndroidManifest.xml.

On-Device Text Recognition

To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionTextRecognizer’s processImage method. If the text recognition operation succeeds, a FirebaseVisionText object will be passed to the success listener. A FirebaseVisionText object contains the full text recognized in the image and zero or more TextBlock objects. Each TextBlock represents a rectangular block of text, which contains zero or more Line objects. Each Line object contains zero or more Element objects, which represent words and word-like entities (dates, numbers, and so on).


On Cloud Text Recognition

If you want to use the Cloud-based model and you have not already enabled the Cloud-based APIs for your project, do so now. Navigate to the ML Kit section of the Firebase console. If you have not already upgraded your project to a Blaze plan, click Upgrade to do so; only Blaze-level projects can use Cloud-based APIs. If Cloud-based APIs aren’t already enabled, click Enable Cloud-based APIs.


The document text recognition API provides an interface that is intended to be more convenient for working with images of documents in the cloud. To recognize text in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionDocumentTextRecognizer’s processImage method. If the text recognition operation succeeds, it will return a FirebaseVisionDocumentText object, which contains the full text recognized in the image and a hierarchy of objects (blocks, paragraphs, words, symbols) that reflect the structure of the recognized document.



Stay tuned for my next article.

Build your own custom Android ROM using Android Open Source Project(AOSP)


One of the best things about Android is custom ROMs. A custom Android ROM refers to a phone’s firmware based on Google’s Android platform. The term ROM, which stands for Read Only Memory, really has very little to do with what a custom Android ROM actually is, which can be confusing. Since Android is an open source mobile operating system, anyone can download the source code, make modifications to it, recompile it, and release it for a wide variety of devices. Anyone can install ROMs on their device and achieve a modified appearance and behavior.

Firebase cloud messaging in iOS

Cloud messaging, or push notification, is one of those topics that gets left out, primarily because we are too busy beautifying the app, working on a new feature, or thinking it isn’t a big deal. Push notifications are as big a deal as any. Whether you want to re-engage your users, deliver personalised content, or display targeted advertisements, push notification is the way to go.

Go ServerLess with Firebase cloud functions

Firebase Cloud function

With the announcement of the Cloud Functions beta at the Google Cloud Next 2017 event, Google has added one of the most highly requested features to the Firebase suite. This is one major step from Google in making Firebase serverless. In this post, we will see some of the capabilities, pros and cons, setup, and deployment of Firebase Cloud Functions. Google I/O is just days away, and knowing about Firebase is surely going to help in understanding the upcoming Firebase features.

Interface Design for MoSync applications


“Interface Design for MoSync applications” … doesn’t sound very interesting! As a matter of fact, it isn’t. There is not much new to what I am going to discuss here: the same age-old problem, the same old solution. What’s new is implementing the solution for MoSync. This blog post is based more on my personal experience of working with MoSync. What’s MoSync? I’ll talk about MoSync more in the next section; for now you can think of it as a powerful-yet-not-much-popular cross platform mobile application development framework.


MobileJira launched at the Windows App Store

Talentica’s Mobile Team recently launched the Windows version of the popular MobileJira App. The App, which is already available in the Android Play Store, is useful for all IT professionals who use Jira to track issues, bugs, tasks or deadlines for their projects.

Loaded with unique features, MobileJira is designed for simplicity and ease of use. It is targeted towards improving the productivity and process compliance of testers, developers and managers. It helps manage JIRA tickets on the go, for tasks like updating progress and resolving or closing issues. It also supports major functionalities like search, issue details, updating work logs/comments and viewing attachments.

MobileJira supports JIRA® 5.0 or later. It’s based on JIRA® REST APIs and requires the JIRA® server to run as a service. More details are available at Running JIRA as a Service.

Visit the Windows App Store to download the app.

Feel free to send in your feedback or suggestions to help us further improve the app.

Cross Platform Tools – Choosing Right One for Your Mobile App

Different cross platform technologies have emerged to overcome the challenge of producing native builds for different operating systems. Although cross platform technologies are a good alternative, every platform has its own set of limitations. The big question is how to identify an appropriate tool based on your requirements. In this blog post we address this problem by comparing some popular cross platform tools. We will look at Titanium, Kony and MoSync, as they are among the most popular cross platform tools in use today, and do a comparative analysis based on ease of development, memory footprint and app performance.

Ease of Development

Titanium has powerful widgets and a rich platform. It provides an Eclipse-based studio and uses JS for development. Titanium offers good documentation and a strong developer community. It also supports on-device debugging and uses the native SDK for testing the app on an emulator. Disadvantages include its lack of support for Windows 8.
As an alternative, MoSync uses both C++ and HTML5/JS as development languages. MoSync Reload can be used to test changes on multiple devices and simulators. MoSync also provides extensive documentation, but it doesn’t offer a debugging option when building native UI and requires the native SDK for testing native UI apps on an emulator.
Kony, compared to the above two, provides a strong IDE for developing apps. With an advanced drag-and-drop feature like native platforms, Kony stands out from other cross platform tools. It supports third-party widget libraries like Sencha and jQuery Mobile, and provides strong support for enterprise apps. In spite of all these advantages, Kony’s adoption remains low because it is a paid tool, the developer community is not strong, and it takes a lot of time to build apps for platforms like BlackBerry.

Memory Footprint Analysis

Mobile users are sensitive about application size. Cross platform frameworks use plugins to access native features, which in turn increases the size of the executables.

Executable sizes (approximate):
  • Android Native: ~1 MB
  • iOS Native: ~1 MB
  • Cross platform builds: ~2 MB to ~8 MB, depending on the tool

Both Kony and Titanium use a plugin to access native features in their builds, whereas MoSync generates and uses native widgets. For a very large application these figures may be irrelevant, but for a simple Productivity-category app the size of the executable can become an issue.


App Performance

We have seen a big improvement in the performance of apps running on cross platform tools in recent years. Earlier, performing native actions or showing animations affected app performance, mainly on Android. Even today, the performance of a native build is better than any app built on cross platform tools.

We tested performance for a large list view and for switching/loading between pages, and observed that for Titanium, MoSync and Kony, performance is good and on par with native builds.

In a nutshell

The choice of platform depends on your requirements. MoSync can be the preferred choice if your existing dev expertise is in C/C++, and also for cases where the application needs to be deployed to a larger set of mobile platforms. For enterprise apps, Kony can be a better fit. If your target platforms are only Android and iOS, then Titanium should be preferred. None of these platforms supports game development, and they are better avoided if your app relies extensively on native features.

All cross platform tools still have a long way to go to be on par with native platforms like Android and iOS in terms of ease of development and performance.