5 Real-Life Applications Where Edge Computing Can Change the Game

The past decade has seen cloud-native offerings percolate across industries. Technologies such as containerization and cloud-native architecture have helped these applications scale. In fact, the predicted $308.5bn in public cloud service revenue for 2021 is a good indicator of this evolving market.

Alongside this rose the adoption of sensor technology and mobile devices, which spurred tremendous growth in data generation.

Network bandwidth, however, has not scaled at the same pace, so cloud-based processing is running into bottlenecks, and cloud latency is not helping either.

That is why industries are experiencing a shift from cloud-native to edge computing. Gartner's review of the edge computing market substantiates this: the firm expects edge computing to enter the mainstream in 2021.

IBM's report projects the edge computing market growing from $3.5bn in 2019 to $43.4bn in 2027, a gigantic leap. Alongside it, the edge-native application industry will also see a substantial rise.

Engineers design edge-native applications with the edge's features in mind, so we can expect this growth to be smooth in the coming years. It will surely benefit from the factors boosting the edge computing market at present. These factors are:

  • Latency

Edge computing processes data near its source and can prioritize traffic. This reduces the volume of data flowing to and from the primary network, increases processing speed, and makes the data more relevant.

  • Security

In a cloud-native setup, data travels to the cloud analyzer through a single pipe. If that pipe is compromised, an entire organization's work can come to a standstill. With edge computing, the risk is lower because attackers can access only a limited amount of data at any one node.

  • Reliability

When connectivity is unreliable, storing and processing data locally is a more viable option than traditional cloud-only modes.

  • Cost-effectiveness

Better segregation of data leads to better data management and lower cost. Edge computing makes this easier by optimizing the use of the cloud and available bandwidth.

  • Scaling

The ability to scale workloads between the edge and the cloud helps edge computing manage data volume; the system balances the two to maximize output.
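The latency and cost factors above come down to doing work near the source. Here is a minimal, illustrative sketch of that idea (the function name and threshold are my own, not from any specific edge platform): an edge node collapses a window of raw sensor readings into one compact summary, so only the summary, plus an alert flag, travels upstream.

```python
from statistics import mean

def summarize_window(readings, alert_threshold=90.0):
    """Reduce a window of raw sensor readings to a compact summary.

    Instead of shipping every reading to the cloud, the edge node
    sends one small record per window, flagging anomalies that
    deserve immediate attention.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": round(mean(readings), 2),
        "alert": max(readings) > alert_threshold,
    }

# A 6-reading window collapses to a single upstream record.
window = [71.2, 70.8, 72.5, 95.3, 71.9, 70.4]
print(summarize_window(window))
```

The spike at 95.3 trips the alert locally, without waiting for a cloud round trip.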

But which fields stand to benefit from networks moving nearer to the edge?

While assessing various aspects, I realized that the changing dynamics of IoT and 5G have the greatest potential for impact. With companies like Vodafone, Ericsson, and Huawei fast-tracking architectural changes to accommodate 5G advancements, the chances of edge computing percolating through industries are growing.

Based on the above factors, we have identified a few use cases that are most suitable for edge computing applications.

  1. Video Processing

Cisco’s Global Cloud Index reveals some interesting aspects that shed light on why edge computing will gain ground in the coming years. According to the index, people, machines, and IoT will generate around 850 Zettabytes (ZB) by 2021.

Only around 10% of it will be useful, yet even that will be roughly ten times greater than the data stored or in use (7.2 ZB) in 2021. The report also notes that this useful data may exceed "data center traffic (21 ZB per year) by a factor of four".

Among the data generated, a huge share comes from video streaming. In fact, the streaming industry saw a substantial rise in revenues during the COVID-19 pandemic.

In the US, the number of people using streaming services has grown 21% since 2018. Streaming platforms are now competing fiercely on user experience, and for that they are moving to better image quality. This will trigger a massive change in edge computing engagement.

But streaming is not the only point of interest. Interactive video experiences may also find edge computing alluring: interactive videos thrive on immediate feedback, which opens up space for edge computing.

The other area that might benefit from edge computing is content suggestion. Predictive analytics is gaining momentum as a way to target content at the right audience. With edge computing, companies can do this locally and speed up suggestions to improve engagement.

The segment will benefit even more from a combination of edge computing, AI, and machine learning: real-time personalization, customer data review, and behavior analytics that deliver actionable insights will all become easier.

  2. AR/VR

AR/ VR is an innovation that is set to transform how we consume content. Its ability to provide immersive experiences is expected to engage more customers than ever.

However, the process is not simple. It requires stitching the real world and the user's motion together in a digital world with adequate synchronization, which demands a huge volume of graphical rendering.

To ensure seamless functioning, the workload must be split between the AR/VR device and the edge. The process has latency-sensitive steps, which can be kept under control if the edge takes over rendering and brings discipline to bandwidth use.

The use of AR/VR in retail would be significant, as it can transform the traditional brick-and-mortar experience. Shoppers could enter a mall and get a customized route plan, or a buying chart in a grocery store based on their previous purchases there. In addition, generating online content will become easier with edge computing services.

The burgeoning gaming sector will also benefit: edge computing can reduce the price of AR/VR gear by taking over image rendering, so advanced on-device compute takes a backseat. It will also increase the rendering opportunities.

Edge computing will allow end users to play a game with either a conventional, heavier headset or a lighter device. By giving gamers such options, it can boost the adoption rate of AR/VR gaming.

In fact, the combined impact of AR/VR and edge computing will be much wider, including "serious gaming" use cases. Doctors wearing AR glasses could perform critical surgeries with overlaid X-ray reports or other physiological maps, a huge boost for the healthcare sector.

For firefighters and soldiers, situational awareness is of utmost importance in planning their moves, including seeking guidance from their handlers. Advanced AR/VR designs backed by edge computing can strengthen their work by providing extrasensory vision, better situational awareness, improved risk calculation, temperature readings, and faster decision-making.

  3. Emergency Services

Emergency services in the healthcare sector are going local to reach more people in distress. Patients in emergencies cannot always travel to a distant multispecialty hospital. A far better idea is to equip the ambulance itself with adequate capabilities.

Edge computing and 5G technologies are a perfect combination for such scenarios. A blend of technological and computational resources will accelerate diagnosis and analytics and help the medical team work efficiently within the golden hour.

The medical wearable segment is witnessing investments pouring in from several sectors, which means increased scope for research and adoption of new technologies. The connected medical devices segment, which helps diagnose, monitor, and treat patients, has the potential to scale up to $52.2bn by 2022.

Such devices generate a massive amount of data, and a huge portion of it requires real-time processing to ensure faster treatment. For instance, a healthcare IT architecture can gather health-related data and, with edge computing, run rapid real-time analytics on it to predict health emergencies and act accordingly.

IoT medical devices can detect anomalies, notify the concerned authorities, and give doctors extra time to save a person's life.
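As an illustration of the kind of logic such a device might run locally, here is a simplified sketch (the thresholds are made up for illustration; this is not a clinical algorithm):

```python
def check_vitals(heart_rate, spo2):
    """Run locally on the wearable or gateway: flag out-of-range
    vitals immediately instead of waiting on a cloud round trip."""
    alerts = []
    if not 50 <= heart_rate <= 120:
        alerts.append(f"heart rate out of range: {heart_rate} bpm")
    if spo2 < 92:
        alerts.append(f"low blood oxygen: {spo2}%")
    return alerts  # non-empty means notify caregivers right away

print(check_vitals(heart_rate=135, spo2=89))
```

Because the check runs on the device, the alert fires even when the network is congested or down.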

IoT and edge computing can also provide extra security in smart homes and cities. A smart device like a security camera can process images and recognize voices at the edge to detect unwanted activity, which helps prevent leakage of private and sensitive audio and video data.

  4. Preventive Measures

Edge computing has a massive role to play in preventing acts of terrorism. Security cameras at various points continuously upload footage to the cloud or a server for analysis, and the bulk grows with each passing day.

But if that footage can be analyzed at the edge using deep learning models, officials can act much faster and stop major crimes before they happen.

In fact, the COVID-19 outbreak has been an eye-opener in many ways. Scanning people and products, uploading the details, and then waiting for results consumes a lot of time. Such delays are unacceptable in a pandemic.

However, edge computing and the 5G network can change the scenario by increasing the assessment speed and reducing the waiting period.

  5. Industry 4.0

The industrial revolution has entered its 4.0 phase, focused on improving productivity by transforming the workforce and bolstering industrial growth. Such an overhaul depends heavily on IoT adoption. In the manufacturing sector alone, IIoT market spending is predicted to grow from $1.67bn in 2018 to $12.44bn in 2024.

The demand for better security and seamless operation will go up in sync with the expanding market.

As an integral part of IIoT, automation stands to gain significantly from the shift in procedures that edge computing promises.

Automation generates a massive amount of data, which can feed AI-based analytics such as predictive maintenance and downtime reduction. IIoT-generated data is sensitive, and industries may be reluctant to send and store it remotely in the cloud.

But with edge computing, data persistence and analytics can happen closer to the data source, ensuring privacy and security.
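A minimal sketch of that edge-side predictive-maintenance idea (illustrative only; the class name, window size, and data are mine): the gateway keeps a rolling window of vibration readings and flags a machine when a new reading deviates sharply from the recent mean, so only the flag, not the raw stream, leaves the factory floor.

```python
from collections import deque
from statistics import mean, pstdev

class DriftDetector:
    """Rolling-window outlier check run on the edge gateway."""

    def __init__(self, window=50, sigmas=3.0):
        self.readings = deque(maxlen=window)  # recent history only
        self.sigmas = sigmas

    def observe(self, value):
        """Return True if `value` is an outlier vs the recent window."""
        outlier = False
        if len(self.readings) >= 10:  # wait for a minimal baseline
            mu, sd = mean(self.readings), pstdev(self.readings)
            outlier = sd > 0 and abs(value - mu) > self.sigmas * sd
        self.readings.append(value)
        return outlier

detector = DriftDetector()
# Steady vibration, then a spike that would warrant maintenance.
flags = [detector.observe(v) for v in [1.0, 1.1, 0.9, 1.0, 1.05,
                                       0.95, 1.0, 1.1, 0.9, 1.0, 9.7]]
print(flags[-1])  # the spike is flagged locally
```

Only when a flag fires does anything need to cross the network, which also keeps sensitive process data on site.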

Simultaneously, storing data locally will help companies adhere to policies like GDPR. By decentralizing specific processes and ensuring an optimal physical location for data, edge computing will make IoT deployments more secure, reliable, and scalable.

Interestingly, this move will not be a restrictive one; rather, it will open up avenues for IoT applications. Industries like smart homes and healthcare will benefit from it.


The rising influx of rich data in these five areas is inspiring moves that focus on actionable insights and process optimization. In parallel, demand for safety and security is rising.

These strategic mechanisms are getting substantial attention from governments and private investors, and it is a boon for edge computing. The coming years will witness more of this interplay between edge-native applications and these five aspects.

5 Key Technology Decisions for a Scalable MVP

Wrong technology decisions are like a bad marriage: linger in one for long, and be prepared for more trouble. I realized this while working with a customer whose CTO had matched the product with an ill-suited technology at an early stage.

Over time, the rift between them widened, and adopting new technologies became more difficult. This led me to think of dilemmas that startups often face while zeroing in on technology.

There are 5 significant technical decisions that startups should consider before they start building a scalable MVP.

Using Microservices architecture pattern to build an MVP

Microservices architecture has become a buzzword. Go to any seminar or talk, and you will find it selling like hot cakes. It got its thrust from platforms like Netflix (and its open-source stack) after it helped accelerate their development and deployment. The accolades are well-earned. But is it a must for all? I say, 'No.'

Microservices architecture is good only when the following two conditions back it up:

High scale

This pattern suits services where billions of requests pour in each day. We used it to develop an ad server with similar criteria and earned favorable results. But such cases are exceptions: reaching that scale from day one is a huge task and rarely happens.

Large teams & Agility

The other factor is team size. In my opinion, a team of more than 100 members combined with a strong business need for agility is a possible use case for microservices.

Let me share an experience. In 2016, we used microservices to build an MVP because the hype surrounding it was high. Microservices architecture is inherently distributed; its complexities slowed iterations and deadlines slipped. We finally rolled out the MVP after a year, instead of our initial plan to launch within 6 months.

We fought hard with distributed transactions, debugging was tough, and even simple user stories carried complexity. The level of complexity we encountered was unprecedented; distributed systems are hard to manage. The team had to implement patterns like the outbox pattern and circuit breakers for transactional integrity and reliability. These patterns have matured over time, but the experience pushed me to ask, "Is this kind of complexity necessary? Is the ROI that alluring?" The answer was no.
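To give a feel for that complexity, here is a bare-bones circuit breaker of the kind such a team ends up building (a toy sketch, not our production code; real implementations also need timeouts, half-open probing, and metrics):

```python
class CircuitBreaker:
    """After N consecutive failures, stop calling the downstream
    service instead of letting every request pile up and time out."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn, *args, **kwargs):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: downstream unavailable")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1   # one step closer to tripping open
            raise
        self.failures = 0        # any success closes the circuit again
        return result
```

Multiply this by retries, outbox tables, and distributed tracing, and the overhead of a distributed MVP becomes visible.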

Before starting the microservices journey for your startup, ask two questions: Do we need to support billions of requests every day? Will we ramp up the development team to 100+ engineers within a couple of years? If the answers are 'No,' do not go ahead. Instead, adopt the Modular Monolith. Like microservices, it gives each module its own database, which allows parallel development to an extent, but it is deployed as a single unit, deferring the complexities of distributed systems to another day.
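A modular monolith can be sketched as plain modules inside one process, each owning its data and exposing a narrow interface, so teams work in parallel but ship one deployable (an illustrative sketch with hypothetical module names, not a framework):

```python
# One deployable; each "module" owns its storage and exposes a small API.

class BillingModule:
    def __init__(self):
        self._invoices = {}          # module-private data store

    def create_invoice(self, order_id, amount):
        self._invoices[order_id] = amount
        return order_id

class OrderModule:
    def __init__(self, billing):
        self._orders = {}            # separate store, same process
        self._billing = billing      # in-process call, not a network hop

    def place_order(self, order_id, amount):
        self._orders[order_id] = "placed"
        self._billing.create_invoice(order_id, amount)
        return self._orders[order_id]

# Wire the modules together and deploy as a single unit.
billing = BillingModule()
orders = OrderModule(billing)
print(orders.place_order("ord-1", 49.0))
```

The module boundaries keep the option of extracting a service later, without paying the distributed-systems tax on day one.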

Using NoSQL Stores without a specific need

When it comes to NoSQL, many entrepreneurs and programmers are well-versed in the concept of the 3Vs:

  • Volume
  • Variety
  • Velocity

NoSQL is suited for problems where the product velocity demands support for billions of requests or generates around 1TB of data each day. Also, if there is a constant influx of structured, semi-structured, or unstructured data, then NoSQL is necessary.

Its use with flexible schemas is widespread where one is unsure of entity attributes because entities evolve. For instance, e-commerce sites are among its biggest adopters, as an RDBMS cannot flexibly store a varied inventory. NoSQL benefits from such scenarios, riding on the popularity of databases like Redis, MongoDB, and Cassandra; it already holds 39.52% of the market.

However, there are cases where using NoSQL can usher in disaster. A few years back, I was developing an MVP for a fintech startup, and our choice of Aerospike as the transactional store turned out to be wrong. It did not give us the ACID guarantees we needed, so we had to go with BASE (Basically Available, Soft state, Eventual consistency), which is operationally intensive and adds time, effort, and cost.

We ended up fighting the wrong battles. If volume, variety, and velocity are not your prerequisites, don't take up NoSQL. For such cases, an RDBMS is a good option.

Another aspect often neglected during decision-making is expertise in data modeling for NoSQL stores. Data modeling in NoSQL differs from RDBMS: for practical use, understanding the query pattern is of prime importance, and models are designed around UX screens and query patterns instead of normalization techniques.
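The difference is easy to see in a sketch (a hypothetical product page, not any specific store's schema): instead of normalized rows joined at read time, the document is shaped to answer one query, the screen's query, in a single fetch.

```python
# RDBMS style: normalized rows, joined at query time.
product_row = {"id": 42, "name": "Trail Shoe"}
review_rows = [
    {"product_id": 42, "stars": 5},
    {"product_id": 42, "stars": 4},
]

# NoSQL style: one document shaped for the product page's query pattern.
product_doc = {
    "_id": 42,
    "name": "Trail Shoe",
    "review_count": 2,
    "avg_stars": 4.5,                  # precomputed for the UX screen
    "recent_reviews": [{"stars": 5}, {"stars": 4}],
}

# Rendering the page is a single key lookup instead of a join.
print(product_doc["avg_stars"])
```

The trade-off: writes must keep the duplicated fields consistent, which is exactly the modeling skill teams often underestimate.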

Moreover, with Mongo or Couchbase, the internal storage structure can add complexity. Then there is pricing: DynamoDB, CosmosDB, and similar engines have entirely different pricing models, and a shift to NoSQL requires time to understand them. If your venture relies on freelancers with an RDBMS background, stick to RDBMS to keep things simple.

Using the standard test pyramid strategy for test automation 

Test automation is fast becoming the norm, with around 78% of companies now relying on it. To structure their tests, teams typically follow the test pyramid:

In 2012, I got this opportunity to work on a platform called 1-9-90. In any social media platform, there are 1% influencers, 9% active users, and 90% passive users. The idea was to create content on our platform and publish it on different social media platforms to collect engagement using views, subscribers, shares, etc. But after 6 months, the company decided to change its stance. The intent was to understand how the brand is performing, which led us to build the Digital Consumer Intelligence Platform. We shifted from a content creator play to an API-only service. We had to scrap most of the test automation we had so painstakingly built.

What most people don't realize is that test automation is hard and often brittle. You need adept developers and QA engineers working together to create a state-of-the-art regression suite. Like the DevOps and DesignOps movements, a DevQA movement is the need of the hour, and if testing is not done precisely, the ROI takes a hit. For startups, I would recommend a test diamond:

It comprises module-specific unit tests, a lot of integration tests, and a few end-to-end tests. Ideally, invest in test automation only once the product has found market fit.
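In practice, a test diamond means most assertions sit at module seams rather than on individual functions or full UI flows. A tiny illustration (hypothetical checkout code, plain asserts standing in for a test framework):

```python
def apply_discount(total, percent):
    """Pure pricing rule: a natural target for a small unit test."""
    return round(total * (1 - percent / 100), 2)

def checkout(cart, discount_percent=0):
    """Integration seam: cart totalling and discounting together."""
    total = sum(price for _, price in cart)
    return apply_discount(total, discount_percent)

# A few unit tests at the tip of the diamond...
assert apply_discount(100.0, 10) == 90.0

# ...and more integration tests through the seam, where regressions
# actually surface when modules change together.
cart = [("shoes", 80.0), ("socks", 20.0)]
assert checkout(cart, discount_percent=10) == 90.0
assert checkout(cart) == 100.0
print("all checks passed")
```

Integration tests at this level survive refactors of the internals, which is what keeps the suite from becoming brittle.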

Having no objective targets for MVP 

The most challenging question for a Product Owner is – How will you objectively define the MVP’s success? Mostly, they know the problem and the solution but fail to put it across objectively in measurable terms.

In an earlier opportunity where we had a chance to work with a payment gateway, the MVP’s goal was to reduce checkout time. Instead of 30 seconds to complete a checkout flow, the process needed to happen within 10 seconds – this metric gave the team a clear, measurable objective. They could fine-tune the feature set and user experience until they achieved the said target.
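Making such a target operational can be as simple as instrumenting the flow and comparing a percentile against the goal (a sketch; the 10-second target comes from the story above, while the helper names and sample durations are mine):

```python
import time

CHECKOUT_TARGET_SECONDS = 10.0
checkout_durations = []

def record_checkout(start, end):
    """Store one completed checkout's duration in seconds."""
    checkout_durations.append(end - start)

def p95(samples):
    """95th percentile: the number to compare against the MVP goal."""
    ordered = sorted(samples)
    index = max(0, int(0.95 * len(ordered)) - 1)
    return ordered[index]

# Simulated sessions; in production these come from analytics events.
for duration in [6.1, 7.4, 5.9, 8.8, 9.2, 6.7]:
    now = time.monotonic()
    record_checkout(now, now + duration)

print(p95(checkout_durations) <= CHECKOUT_TARGET_SECONDS)
```

Tracking a percentile rather than an average keeps the team honest about the slowest checkouts, which are the ones users actually abandon.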

Such clarity helps the team apply Customer Development principles: learn, measure, and iterate to meet your goals. That said, converting a feature goal into a measurable metric is not easy or straightforward; it takes creativity, expertise, and analytics tools to get right.

Another mistake some startups make is focusing their energy on building bells and whistles. Entrepreneurs are incredibly passionate about their problems and keep coming up with new ideas and ways to solve them. Dropping the current strategy to chase a new one is widespread among startup teams; it confuses and often misguides them. A well-defined, measurable target helps the team validate pivots objectively.

Companies must stay away from bells and whistles, as they inflate production costs and delay time to market. In my opinion, always focus on building the key differentiator and define the MVP's success objectively to keep the team aligned.

Poor Build vs. Buy Decisions

There are two clear-cut ways of empowering an MVP:

  • Build
  • Buy

Entrepreneurs often try to build everything from scratch, and as an engineer, I would love such opportunities. But is it profitable for your MVP? Measure it in terms of cost, time, and effort. In most cases, I have found buying to be the pragmatic way forward.

If you have a unique business model that needs a new algorithm, and that algorithm is a differentiator, don't shy away from building it. But if your algorithm is not the star, buy one and optimize it later.

A more dynamic approach is to follow a Buy, Validate, and Build methodology. In the payment gateway engagement mentioned earlier, our client bought a payment aggregator's license instead of building one. The MVP's goal, reducing checkout time and enhancing the experience, could be achieved without building a complete payment aggregator from scratch.

As a solution, we loaded it with features that solved the primary problem and fetched us paying customers. Then we replaced the payment aggregator with our own to improve margins. This helped us get the MVP ready on time, open the revenue stream, reach product-market fit, and then work on profitability. The Buy-Validate-Build methodology helped us validate and fail fast!

To summarize, consider these technology decisions while building an MVP:

  • Avoid Microservices. Use Modular Monolith
  • Avoid NoSQL stores. RDBMS still works for most cases
  • Do not do Test Automation till you reach Product-Market Fit
  • Do not build bells & whistles, have objective MVP Goals
  • Make Pragmatic Buy decisions, don’t build everything

How Can You Streamline Work from Home to Benefit More?

This year has been unsettling: systems have gone off track by miles, in every sphere and every sector, all because of the COVID-19 pandemic. It has forced people to the cusp of an upheaval where adopting and adapting to new structures is the only way forward. One such drastic change is the evolving work-from-home culture.

Work-from-home culture has spurred the adoption of virtual and remote setups, supported by rapid digitalization. But how can we streamline it? How can we ensure a smooth transformation that bolsters both the present and the future?

The answer lies in emerging technologies and management; proper implementation is required to ensure success. According to a Stanford University survey, around 42% of the U.S. labor force now performs their duties from home. But the start was not easy. When reports of massive coronavirus outbreaks emerged, government mandates forced companies into this new pattern, and a lack of know-how about working from home, among other factors, acted as a constraint.

Like many other companies, we faced problems. With physical distance, sync-ups become a huge challenge, and so does maintaining rapport among teammates. Even getting a proper update on completed work consumes a lot of time.

But we could not afford to lose our heads and let productivity plummet, as that could have triggered a domino effect past the point of no return. Financial worries played a significant role here, and their gravity is now evident from U.S. economic reports showing GDP plunging 31.4% in the second quarter.

What Were Our Constraints

To weather the crisis, we brainstormed to find out what was bogging us down. We found a few things:

  • Communication gaps
  • Work feeling isolated and unstructured
  • Difficulty tracking progress
  • Missing the feeling of working together and the office ambiance

In addition, as the days rolled by, we found that employees in different countries had started raising concerns about their well-being while working from home. Sometimes it related to work desks and posture, but often issues like furloughs, finances, career hiatus, fear of illness, and other factors impacted their output.

We certainly didn’t want those for our employees. We care for their well-being. Studies have revealed that when employees are happy and content, they become more productive and their satisfaction levels guarantee better outputs for customers.

That’s why we took some measured steps to ease the process. We had to make the lives of our employees more comfortable and we are glad that we did it when the time was ripe.

How We Streamlined Our Work

We started jotting down possible solutions quickly, whatever was there on top of our minds. Then we zeroed in on the most effective ones, the ones with the maximum output and minimum integration challenges. At the end of the process, we came up with four major buckets to address all the problems.

These were:

  • Switching from pull to push method of communication
  • Writing things down
  • Respecting the need to create a connection
  • Adopting new tools to simplify the flow

Implementing Them

Each of these aspects required a specific type of handling. But the effective practice of these required a thorough understanding of behavioral patterns of employees and technologies. So, we dug deep and did a little bit of research to understand what suits each of them best.

  1. Switching from Pull to Push Method of Communication

When we are at the office, team leads or managers have face-to-face interactions with a group or individuals for regular status updates of projects. It is the pull mode.

But as the setup changed, we realized the need for the push mode. We pushed ownership of a project to the concerned employee: in the work-from-home arrangement, we asked people to publish their work status rather than wait to be asked by supervisors. This ensured a seamless flow of operations, as update logs improved synchronization.

A Stanford report claims that productivity increases by 13% when people work from home, but you need the right process to capitalize on it.

To effectively put the system into practice, you can try making the following things mandatory.

For individuals,

  • Ensure status visibility on Skype or other media to avoid confusion, and notify others of log-in and log-out times
  • Update the job sheet to streamline procedures
  • Ask for help when stuck somewhere and don’t wait to be asked
  • Focus more on HRMS to ease the process of attendance regularization
  • Inform the meeting host about a delay in joining a meeting. Let your team or manager know if you are taking an unplanned leave

For leaders or managers,

  • Break calls into three distinct parts: updates, demos, and then discussions. This would prevent any digression and stretch of work hours
  • Set office hours and ask employees to schedule a DND window so they can enjoy time with their families

  2. Writing Things Down

In remote setups, we were facing a communication gap: views lacked clarity and the flow was misaligned. To curb that, we instituted a practice of documentation. Yes, it took time, but we got things streamlined.

To ensure proper implementation of this method, we asked individuals to

  • Integrate a process of self-explanatory documentation with logical subtasks, estimates, queries, and answers
  • Update a task with a small blurb to explain changes. This reduces the time spent on calls

We asked leaders or managers to

  • Create a team norm and discuss it with your team to set general work expectations
  • Mark a shared space like OneDrive for common contents
  • Circulate agendas before a meeting and then distribute minutes about that
  • Record all the essential calls

  3. Respecting the Need to Create a Connection

It may sound cliché, but it is true; we all are social animals. According to a Deloitte report, 45% of employees prefer social interaction while working, whereas 31% prefer collaboration. This clearly shows how much we need our peers by our side to boost our morale.

But connecting with people virtually is difficult. We can do a lot better simply by switching on the camera, because we are 'visual beings': 90% of the information our brain processes is visual.

  • Make meetings visual. Satya Nadella, the CEO of Microsoft Corp., said in a recent interview, “Video meetings are more transactional. Work happens before meetings, after meetings.”
  • Set up a time for team playtime. It can be Scrabble or an online game, which you can use as a stressbuster and a time to bond
  • Create a channel to post weird news or memes or anything
  • Come up with innovative ways to bond, like ordering pizza for everyone and sharing it on a virtual meet

  4. Adopting New Tools to Simplify the Flow

We all knew that various software and AI would control the work atmosphere. But we didn’t expect it to be this soon. Now, when we have to adopt and adapt to confront challenges, we should make the most of it.

In fact, now employees have started realizing how they can benefit from various emerging technologies. Around 69% of the respondents in a survey conducted by HR Dive revealed that they feel technologies have empowered them.

We have found some tools useful in maintaining the flow. They are

  • Miro or Limnu for whiteboarding
  • Draw.io (now diagrams.net) to explain flowcharts, block diagrams, org charts, etc.
  • Krisp.ai is proving its mettle in removing background noises from Zoom and Skype calls
  • Jira Assistant and Microsoft Teams for a common work area
  • Donut for team pairing and inspiring better social connect
  • StoryXpress Clapboard is essential in showing demos during a meeting

Deloitte revealed that around 61% of desk-based workers would like to continue their work from home culture or at least do it more often. It means that people are getting warmed up to this new concept. But there is a downside as well. From an organizational perspective, work from home is not often an ideal solution as technologies have their limitations.

While working with tools, we have to understand the psyche of our employees. For instance, video conferencing is great, but short meetings, say with a 30-minute cap, are more effective; otherwise the mind tires. This also came out in Nadella's talk, where he cited Microsoft's research to substantiate his claims.

Demography also plays a huge role, as socio-economic and political scenarios shape work cultures. You have to find a balance in the work-from-home setup, and for that, insights are crucial. Leaders must take the onus of simplifying things and lead the pack to ensure a sound transition without affecting the goals.

5 Ways to Make Innovation a Way of Life in your Startup

As someone who works exclusively with startups, I often hear the word "disruption". Disruption is here to stay, and every startup must make innovation a way of life. Innovation need not always be top-down and breakthrough; my experience says that small incremental innovations coming from every level can change the game.

For the last three years, I have been part of the innovation initiative in our organization. The initiative was intended to develop a structured way to encourage thinking in the right direction: to identify innovation opportunities and open up diverse approaches to problem-solving.

During this journey, we conducted multiple workshops and reviewed various innovations that were one of a kind. Having worked as both a submitter and a reviewer, I am sharing learnings from my experience working with multiple startups.

While working on an insurance underwriting workflow, we proposed and implemented auto-decisioning rules to reduce the time required for underwriting policies. We came across a similar use case in a construction tech product. It had an approval workflow directly influencing the project timeline.

By drawing attention to previous use cases and the changing technology landscape, we proposed a machine learning solution to auto-approve or reject documents and reduce turnaround time. The intent was to cut the project delays caused by approval workflows. This way, we could translate knowledge gained in one domain or vertical to another. Since we work with multiple startups, our engineering managers act as innovation enablers by abstracting out solutions and drawing parallels among similar problems.

In a startup, team members with diverse domain backgrounds can achieve this by sharing previous solutioning experiences. When a problem surfaces, they should look for solutions by reviewing their past experiences. For example, the shopping flows for apps through an app store, Netflix-like OTT subscriptions, or e-commerce merchandise can all be abstracted into a generalized shopping flow; similar behavior can be observed, and analogous solutions can be implemented.

In another instance, an ongoing discussion with a senior executive at one of the FinTech startups revealed that he spent two hours every day collecting data from different financial portals and crunching it. One of our QA engineers had innovation in mind: he simplified this time-consuming exercise by using his UI automation skillset (generally used for UI testing) to build a utility that would fetch this data automatically every day.
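A minimal sketch of such a utility, assuming the real version drove a UI-automation tool (Selenium or similar) against the financial portals; `fetch_portal` here is a stub with hard-coded sample figures so the aggregation logic stays visible.

```python
# Sketch of the daily data-collection utility. In the real version,
# fetch_portal() would be a browser-automation script that logs in to each
# portal and scrapes the figures; here it is stubbed with sample data.
import csv
import io

def fetch_portal(name: str) -> dict:
    # Stub standing in for the UI-automation scraping step.
    sample = {"portal_a": {"nav": 102.4}, "portal_b": {"nav": 98.7}}
    return sample[name]

def collect(portals) -> str:
    """Aggregate all portal figures into one CSV report."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["portal", "nav"])
    for p in portals:
        writer.writerow([p, fetch_portal(p)["nav"]])
    return buf.getvalue()

report = collect(["portal_a", "portal_b"])
print(report)
```

Scheduled once a day (cron or a CI job), this replaces the manual collect-and-crunch routine with a single generated report.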

It was a simple solution developed in merely two days without any fancy API integrations or data pipeline setup. Sometimes, simple and out-of-the-box thinking could give you frugal innovations, and unconventional use of tools & technologies can do wonders.

The startup ecosystem is very agile, nimble, and cost-sensitive. Frugal innovations, which reduce the unnecessary complexities and costs, are very much the need of the hour. Teams can come up with frugal innovations in areas where they face constraints. Out-of-the-box thinking could help overcome such obstacles.

For one of the telecom products that we worked on, the business was losing revenue, and it was going unnoticed. Even though subscribers were willing to subscribe, they were unable to do so because of insufficient balance.

The engineering team was aligned with the business process and kept an eye on the offerings and product KPIs. This in-depth knowledge of the business helped them identify lost revenue opportunities and provide a solution with simple technology. With the right understanding of business and technology, a minor change in technology can have a massive impact on the business.

To implement similar incremental innovations, have a keen eye on the business and product KPIs, and understand how the KPIs change with every new feature launch. This will help your teams to not only align with business or product needs but also drive tech-enabled innovations.
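As a sketch of watching a KPI across a launch, the snippet below compares a conversion rate before and after a feature-launch date. The dates and numbers are illustrative, not real product data.

```python
# Minimal sketch: track how a KPI (conversion rate) shifts around a launch.
from datetime import date

launch = date(2021, 3, 1)
# (day, visitors, signups): hypothetical daily product metrics
daily = [
    (date(2021, 2, 27), 1000, 30),
    (date(2021, 2, 28), 1100, 33),
    (date(2021, 3, 1), 1050, 42),
    (date(2021, 3, 2), 1200, 54),
]

def conversion(rows) -> float:
    visitors = sum(v for _, v, _ in rows)
    signups = sum(s for _, _, s in rows)
    return signups / visitors

before = conversion([r for r in daily if r[0] < launch])
after = conversion([r for r in daily if r[0] >= launch])
print(f"before={before:.3f} after={after:.3f} delta={after - before:+.3f}")
```

Even this much, run per feature launch, makes the KPI impact of each release a number the team discusses rather than a feeling.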

While building an investment platform for one of our customers, we came across a common problem: managing database performance to build analytics in a monolithic application backed by a relational database. An obvious approach was to go with performance monitoring tools such as New Relic. But keeping the limitations of such tools in mind, our team decided to build a tool that serves the purpose while overcoming the challenges of out-of-the-box solutions. This tool not only gave in-depth insights into database performance but also supported all kinds of slice & dice operations.

So, we built our homegrown solution to bring the right kind of efficiency to areas ranging from scaling the business to improving the experience for existing users. Solutions devised with an in-depth knowledge of architecture and technology will undoubtedly add immense value.
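A homegrown query-insight tool of this kind can start as little more than a timing wrapper around query execution. The sketch below uses sqlite3 only to stay self-contained; the real tool would persist these stats and expose the slice & dice views on top of them.

```python
# Sketch of a query-insight tool: wrap query execution with timing and
# aggregate per-statement counts and total time.
import sqlite3
import time
from collections import defaultdict

stats = defaultdict(lambda: {"count": 0, "total_s": 0.0})

def timed_execute(cur, sql, params=()):
    start = time.perf_counter()
    result = cur.execute(sql, params)
    elapsed = time.perf_counter() - start
    stats[sql]["count"] += 1
    stats[sql]["total_s"] += elapsed
    return result

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
timed_execute(cur, "CREATE TABLE trades (id INTEGER, qty INTEGER)")
for i in range(100):
    timed_execute(cur, "INSERT INTO trades VALUES (?, ?)", (i, i * 2))

# Report the costliest statements first: the starting point for slice & dice
for sql, s in sorted(stats.items(), key=lambda kv: -kv[1]["total_s"]):
    print(f"{s['count']:>4}x {s['total_s'] * 1000:7.2f} ms  {sql[:50]}")
```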

To bring in such innovations in your startup, ensure that the team has a learning mindset. You need to promote deep understanding and hands-on exposure to technology from the classic concepts to the latest architecture patterns/frameworks/developments. You can emulate an approach that incorporates learning as a part of core values, competency, and performance measures, and have recognition frameworks for appreciation.

For one of our FinTech products, we developed a future-ready middleware to onboard customers easily and quickly. For another insurance product client, we designed a different innovation: a separate product to onboard insurance companies quickly with customized forms. In both cases, we aimed at reducing the onboarding time and making the process less cumbersome, both key essentials for startups to succeed.

Additionally, it was crucial to keep the end-user in mind while ensuring the best practices and easy operations. We often neglect operational innovations since they are thought of as “common sense”. However, innovation can change any part of the business, be it a one-off scenario or routine operations.

When you consider your startup, especially if it is a platform business, create a playbook that solves common and recurring challenges by drawing on similar cases.


When people talk about innovation, they usually refer to something gigantic and disruptive. However, those are merely small parts of the whole innovation puzzle. Most of the innovations successful at refining customer experiences are much more incremental. In our experience, incremental innovation is both a key differentiator and a stepping stone for something breakthrough.

How to Build a SaaS Product with Both Data and Run-time Isolation?

Once a startup has decided on SaaS implementation, choosing the right SaaS architecture type is imperative, not only to ensure the right pricing model but also to accommodate special design requirements such as scalability and customizability. If you’re considering SaaS type 2 architecture to isolate both data and runtime environments, this article is a must-read. As an application architect working on enterprise software, let me walk you through how we helped a project management startup succeed by applying SaaS type 2.

The project management platform that we were working on was enterprise-level software. It was based on a well-established algorithm for computing optimal schedules for different types of project environments. However, to provide scheduling solutions at a much more granular level, the product was going through a major overhaul in terms of new functionality for existing solutions. We also had to revamp the UI to make it more user-friendly.

Challenges that Came Along

The main challenge was to get early feedback for the new functionality from existing customers for quick product enrichment. Simultaneously, it was also necessary to give the product to a wide variety of potential customers for initial trials. This was to get them on-board for long-term engagement and provide scheduling solutions based on their needs.

While we started placing our focus on reducing the cycle time for features, it wasn’t possible with the traditional model of deployment wherein the product was hosted in the customer’s environment. Therefore, we decided to provide the platform as a SaaS offering. However, the immediate next step was to pick the right SaaS architecture, and this was crucial considering its role in fostering the platform’s future growth.

Arrival at the ‘Make-or-Break’ Decision

Since every organization’s business model is different, task management and execution can differ too. Engineers design these platforms so that customization is easy for end-users, and the platform should be easily customizable for different customer environments. At any given time, multiple customers will use the platform to create portfolios for their organizations, and these portfolios hold very sensitive, business-specific data.

In this model, the customers were very clear and strict on the need to have complete isolation both at the application level as well as data level. We agreed that Type 2 architecture was the right fit for this case. Hence, we decided to implement it using our experience of saasifying products for growth-stage startups from various domains.

Dealing with the Architectural Roadblocks

The following are some of the architectural challenges that we encountered, and effectively tackled to drive successful implementation-


Deployment at Scale

Each customer runs on a different scale; some customers have thousands of users using the platform for planning and execution, while others have only a few top-level executives on it. Since we had the freedom to deploy the application per customer, we deployed it keeping the size of each user base in mind.

Fast Customer Onboarding

We had to onboard new customers with minimal assistance from the Engineering or Implementation teams. As soon as a new user signs up on the platform, we needed to provide the application and database instance within minutes. We did this by using automated scripts to quickly spin up an application instance from a pre-configured base image. A unique URL for the application was generated using AWS Route 53. Once provisioning completes, the user is notified that the platform is ready at their unique (user-specific or organization-specific) URL.
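The signup-to-provisioning flow can be sketched as below. The cloud calls (launching an instance from the base image, provisioning the database, creating the Route 53 record) are stubbed with hypothetical helpers so the orchestration stays visible; the `example-app.com` domain is illustrative, not the product's real one.

```python
# Sketch of the automated onboarding flow. In production these steps would
# call cloud APIs (launch instance from base AMI, create DB, register a
# Route 53 record); here each step is a stub.
import uuid

def launch_app_instance(tenant: str) -> str:
    return f"i-{uuid.uuid4().hex[:8]}"          # stub for "launch from base image"

def create_database(tenant: str) -> str:
    return f"db-{tenant}"                        # stub for database provisioning

def register_dns(tenant: str) -> str:
    return f"https://{tenant}.example-app.com"   # stub for a Route 53 record

def onboard(tenant: str) -> dict:
    instance = launch_app_instance(tenant)
    database = create_database(tenant)
    url = register_dns(tenant)
    # The user is then notified that the platform is ready at this URL.
    return {"tenant": tenant, "instance": instance,
            "database": database, "url": url}

print(onboard("acme"))
```

Because each step is idempotent and scripted, the same flow runs for every signup with no engineer in the loop.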


Customization

The architecture should support the customization of different business entities without any customer-specific deployment from the engineering team. These customizations were provided via a configuration dashboard, wherein an admin user of an organization sets the configuration parameters based on the organization’s needs.

Hardware Utilization

We had to optimize the new architecture for hardware availability. Inevitably, some customers have huge data sets and customizations, while others have little data and almost zero customization. We handled this by analyzing the costs of cloud infrastructure (instances, database servers, etc.) and preparing the pricing plans for end-users accordingly.


Security

By isolating data and application runtime for each customer, we could address a lot of security concerns. Data in transit travels over HTTPS only, and the application itself provides secure access to all customer data.


Our customer wanted to develop the existing platform as “Portfolio as a service.” They didn’t want to manage infrastructure or hire an admin for management. The implicit requirement was complete automation of provisioning, which we achieved with one-click deployment that provisions application and database instances in minutes. We built the architecture around multiple clusters so that all customers have their own runtimes (applications) and database servers, preventing any sharing of data or applications-


As demonstrated in the diagram, on every new customer onboarding, our automated services created keys and provisioned applications and databases as per the pricing plan adopted by the customer. Once this step completes, they can immediately start using the platform.

For every customer request, the load balancer identifies the right application IP address to process it. The application then fetches fully-encrypted data from its isolated database, decrypts the data using the keys, and sends it back to the user.

Advantages of SaaS Type 2 Architecture

They say sometimes it’s the smallest decisions that can change things forever. Our decision to probe the customer’s case and choose the right SaaS architecture type served their purpose well. Some of the advantages that the customer enjoyed-

  • Handle security at the infrastructure level to ensure that the application doesn’t have to take care of data sharing.
  • No necessity of managing connection pools for tenant-specific databases.
  • Low chances of the system’s underutilization as scaling can be done differently for different clients.
  • Faster customer onboarding is possible as tenant-specific components can be provisioned independently.
  • Customize the system as per user’s need without worrying about its impact on other users.


We customized customer onboarding, wherein customers can pick pricing plans as per portfolio size and number of users. Our fully automated deployment solution provisions and verifies instances in the cloud and ensures optimal use of the system. SaaS type 2 architecture comes with several benefits, but startups considering it must understand that automation and monitoring need heavy investment.

Top Considerations while Implementing Blockchain

If you have seen technology making a difference in the startup ecosystem, you have probably also seen a lot of hype around Blockchain. Innovative characteristics of Blockchain, like decentralization, immutability, transparency, and automation, are useful across industry verticals and have inspired a multitude of use cases.

Blockchain technology is still in its nascent phase and, while cryptocurrency platforms like Bitcoin and Ethereum have long been in use, its adoption into the mainstream software industry has been limited. Having worked on Blockchain implementation for startups from various domains, I have tried to list down the top seven considerations while implementing Blockchain in a product.

On-Chain or Off-Chain

One of the key architectural decisions while working on Blockchain-based products is deciding what goes on-chain and what stays off-chain, particularly for transaction data and business validation logic, both of which play a crucial role.

The primary constraint is the network latency due to the data replication across the Blockchain network. The degree of latency keeps increasing with increasing levels of data replication. For the same reason, Ethereum charges a reasonable fee to store data on the chain.

Some general guidelines-

  1. Data that is either directly required for transaction validation or needs auditability should be stored on-chain. Referential data is better stored off-chain.
  2. If eventual consistency is acceptable, you can execute transactions off-chain and record only the first and last states on-chain. This increases overall throughput without consuming additional network resources.
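Guideline 1 is often implemented by anchoring: keep the bulky record off-chain and store only its hash on-chain, so validators can verify integrity without replicating the payload. A minimal sketch (the record fields are illustrative):

```python
# Off-chain data with an on-chain hash anchor: the chain stores only a
# deterministic digest of the record.
import hashlib
import json

def anchor(record: dict) -> str:
    """Deterministic hash of a record; this digest is what goes on-chain."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

record = {"invoice": 1042, "amount": "450.00", "currency": "EUR"}
on_chain_hash = anchor(record)

# Later, anyone holding the off-chain record can verify it against the chain.
assert anchor(record) == on_chain_hash
tampered = dict(record, amount="999.00")
print(anchor(tampered) == on_chain_hash)  # any tampering changes the digest
```

The chain thus provides auditability for the record without paying the replication and storage cost of keeping the record itself on-chain.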

Public or Private Permissioned

Another important decision is the scope/access of the Blockchain itself, ranging from an open & permissionless system to a private & controlled one. Public Blockchains are useful where users are anonymous and treated as equals. They need to be community-driven, so that no single user has the authority to change the rules of the entire network. However, a large number of nodes may limit transaction throughput, so some incentivization is needed to carry out effective processing.

Permissioned Blockchain platforms control who can write to and read from the Blockchain. Compared with public chains, they are scalable, and they are suitable when controlled governance and compliance/regulations are important.

An example of a public permissionless chain is Libra, a global payment system by Facebook, which can be used by anyone for value exchange. On the other hand, an insurance claim processing platform is a good example of a private permissioned Blockchain. It is essential to settle this categorization at the initial stages itself, because the two categories require different kinds of consensus and identity management solutions.

Levels of Security

Tamper-resistance, resistance to double-spending attacks, and data consistency are some essential attributes of a secure distributed system. We can achieve the first two using cryptographic principles of Blockchain technology. For consistency across the system, we need an appropriate consensus mechanism.

In public-facing systems where anyone can join the network, all the nodes are trustless, with no node having more privilege than others. In such scenarios, security against malicious nodes is paramount, and a Blockchain with PoW (Proof of Work) is better suited despite its over-consumption of network resources and limited transaction throughput.

In consortium-like systems, multiple parties interact and share information. Although node identities are well known, only some nodes are fully trusted to process transactions, and security is required against semi-trusted nodes or external users not directly participating in the network. A Blockchain with an appropriate governance model and consensus mechanisms like PBFT or PoS will not only provide the desired security attributes but also increase operational efficiency because of the high trust levels.

In a document workflow-based application, for example, where documents are exchanged between multiple parties for approval, a system of the latter type can provide the required security and efficiency.

Data Privacy Needs

Sometimes, data stored or transactions executed on the Blockchain need protection on account of confidentiality or compliance rules, and this is where privacy comes into the picture. For instance, in financial trading and medical-records-based applications, transactions may need to be hidden, with data visible only to selected stakeholders. Even in the case of Bitcoin, transaction trend graphs may reveal a user’s true identity; such users may want to hide the beneficiary or the amounts involved in their transactions.

Techniques like transaction mixing and zero-knowledge proofs have been proposed to support this. Sometimes, real-life situations vary in ways where these techniques can’t fit directly and require the design of a new protocol using existing techniques.

Physical to Digital World Transition

We can turn physical assets (land registries, paper contracts, or fiat currency) into digital assets on the Blockchain, making it easier to leverage decentralization for these documents. However, this requires inherent trust in the system: we would either need a trusted third party providing this guarantee or a physical legal agreement between the parties that cannot be repudiated in a court of law.

In the case of fiat currency-based applications, this trusted third party is a bank. But, choosing a bank with a good technical infrastructure is essential to ensure easy Blockchain integration.

Data Protection (GDPR)

GDPR compliance requires that a user can selectively reveal personal data to others and can exercise his/her right to erasure of this data. As it is not possible to delete any data from the Blockchain, we should either keep such personal data off-chain (in centralized servers) or provide end-to-end encryption of the records so that they can be viewed only by that user.
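One common pattern here, sketched below under simplifying assumptions, is to keep personal data off-chain and put only a salted commitment on-chain; erasing the off-chain record and its salt satisfies the erasure request, while the immutable chain retains nothing personally identifiable:

```python
# GDPR-friendly split: personal data stays off-chain (deletable), the chain
# stores only a salted commitment that is unlinkable once the salt is gone.
import hashlib
import secrets

off_chain = {}   # user_id -> (salt, personal_record); can be deleted
chain = []       # append-only; entries can never be deleted

def store_user(user_id: str, record: str):
    salt = secrets.token_bytes(16)
    off_chain[user_id] = (salt, record)
    commitment = hashlib.sha256(salt + record.encode()).hexdigest()
    chain.append({"user": user_id, "commitment": commitment})

def erase_user(user_id: str):
    # Right to erasure: drop the off-chain data and its salt. The on-chain
    # commitment remains, but without the salt it reveals nothing.
    del off_chain[user_id]

store_user("u1", "Jane Doe, jane@example.com")
erase_user("u1")
print("u1" in off_chain, len(chain))
```

Note this is a conceptual sketch, not legal advice: whether a residual salted hash counts as "erased" personal data is a compliance question for your jurisdiction.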

Ease of Development & Deployment

Last but not least, we should have tools that ease the processes of development and deployment. A better smart contract framework means fewer bugs and more trust. A good container orchestration tool like Kubernetes is a must-have for upgrading the product on all the validator nodes.


Before building a real Blockchain-based product, you ought to take a close look at the considerations mentioned above; they can make or break your efforts. Hype aside, and despite the teething problems, I believe that Blockchain technology has the potential to revolutionize industries. Happy Blockchaining!

Does your Startup Really need Blockchain?

‘To Blockchain or not to Blockchain’ – this is one big question that has been on the minds of startup founders in recent times. From supply chain monitoring to equity management and cross-border payments, Blockchain has been making its way into multiple areas. Startups, to meet their growth goals, are jumping onto the Blockchain bandwagon to generate buzz, convince investors, and raise new rounds of funding.

Many startup founders have approached us in the recent past with a common question: Is Blockchain the right fit for my startup? That moved me to come up with a decision tree to enable pragmatic decision-making in this direction. The number of founders reaching out to us with this dilemma kept increasing of late, which inspired me to write this detailed article.

Whether to adopt Blockchain for your startup is not merely a technological decision but also a business decision. Being the frontliners of decision-making, it is crucial for founders to not fall for the hype but diligently analyze its potential from the business perspective– even in cases where a well-defined problem exists. While Blockchain’s unique properties have forced startup founders to think of it as essential and transformative technology, the ‘business benefit’ stands firm as a vital consideration in this decision. This article will cover both technology and business perspectives that founders need to consider while evaluating Blockchain.

Decision Tree: Evaluating the Technology Fit

Though many research papers feature decision trees to evaluate Blockchain use-case feasibility with respect to technology [1], here is a simplified version of the framework-

Real-Life Use Cases

For a better understanding of the decision tree, let me take you through some of the real-life use cases across different verticals-


| Use Case | Do we need to store the states (user-specific data and/or metadata)? | Are multiple users involved in updating the stored states? | Is any trusted third party involved? | Can the third party be eliminated? | Decision |
| --- | --- | --- | --- | --- | --- |
| Social media application that involves user engagement and interaction | Yes | Yes | Yes | No | No; this is similar to a traditional centrally-managed application |
| The same social media application, if the third party can be eliminated | Yes | Yes | Yes | Yes | The same use case can be implemented using Blockchain if and only if control has to be released to the community |
| Food retailers receiving supplies from producers, wherein ensuring food quality is a key challenge | Yes | Yes | No | NA | Yes |
| Organizations maintaining records of employee attendance | Yes | Yes | Yes | No | No; as long as there is mutual trust between the organization and employees, there is no necessity for Blockchain, and bringing it into the picture would be mere over-engineering |


Cost-Benefit Analysis: Evaluating the Business Fit

Every startup founder, who is planning to invest in Blockchain, should assess the ROI that will come from its implementation. You might be adopting Blockchain as a necessity or a differentiator for your product, but evaluation should always be done from a revenue generation perspective.

You might have to come up with a cost-benefit analysis as per your business, but I will help you with an example to better understand the approach. Let’s consider the case of food retailers mentioned above, wherein we would compare the high-level costs with different cost components.

Development Cost

If the development effort for building an MVP with a traditional centralized approach is around X man-months, the effort would be 30-40% higher for a Blockchain-based approach, primarily for building Blockchain-based ecosystem components. Usually, a Blockchain developer would cost you at least 1.5 times more than developers working on widely used technologies. Together, these multipliers make the development cost of a Blockchain-based MVP roughly 2X the traditional application development cost.
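A quick back-of-the-envelope check of that 2X figure (all multipliers are the illustrative estimates from the text, not measured data):

```python
# ~35% more effort at ~1.5x the developer rate compounds to roughly double
# the total development cost.
effort_multiplier = 1.35   # 30-40% more man-months for ecosystem components
rate_multiplier = 1.5      # Blockchain developer rate vs. mainstream stack

cost_multiplier = effort_multiplier * rate_multiplier
print(cost_multiplier)     # roughly 2
```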

Infrastructure Cost

To evaluate the infrastructure cost, let’s assume a transaction volume of a few hundred transactions per second (TPS). If the infrastructure cost for a traditional solution is about X per year, it would be about the same for a Blockchain-based approach, assuming nearly 8-10 nodes in the consortium. It boils down to one inference: instead of a single party managing all the infrastructure nodes, every member of the consortium owns a node.

With increasing transaction volume, the traditional approach can scale horizontally; however, Blockchain-based solutions face the ‘Scalability Trilemma’. This famous term, coined by Vitalik Buterin, is in layman’s terms akin to the phrase ‘you can’t have everything’. Businesses should clearly understand which aspect among the three (decentralization, security, and scalability) they intend to optimize, and whether that is in line with their value proposition.

Other Costs

A few other business efforts required in the case of Blockchain-based solutions include setting up the consortium, convincing the plausible members regarding benefits of joining the consortium, and expanding it to a level where it can be claimed as safe. Besides, it might also include devising legal rules and regulations to resolve conflicts.

When talking about benefits, a Blockchain-based approach can certainly enable business process automation using smart contracts. The approach not only improves overall process efficiency but also reduces operational costs for businesses. This report [2] says that using Blockchain can minimize wastage of goods, resulting in savings of nearly 450K Euros annually, a value that far exceeds the initial investment and operational cost of a Blockchain-based solution. As the consortium grows further, such automation protocols would enable business communities to define industry-wide standards.


Though it might not have garnered the importance that it deserves, evaluating the feasibility of Blockchain is highly recommended for startup founders. This article aims at busting the Blockchain hype and encouraging in-depth evaluation from an intersection of business and technology perspectives.


[1]   K. Wüst and A. Gervais, “Do you need a Blockchain?,” 2018 Crypto Valley Conference on Blockchain Technology (CVCBT), Zug, 2018, pp. 45-54, doi: 10.1109/CVCBT.2018.00011.

[2]  G. Perboli, S. Musso and M. Rosano, “Blockchain in Logistics and Supply Chain: A Lean Approach for Designing Real-World Use Cases,” in IEEE Access, vol. 6, pp. 62018-62028, 2018, doi: 10.1109/ACCESS.2018.2875782.


How to Build a SaaS Application with Data Isolation but No Run-time Isolation?

As you have already considered SaaS implementation, we recommend choosing the right SaaS architecture type so that the hardware and automation costs you bear are well optimized. If you are considering SaaS type 3 architecture for your startup, you are at the right place to get started.

Type 3 SaaS architecture is the right fit for cases that require data isolation but no run-time isolation. In this type, different data stores are provisioned for different customers; however, the application is shared by all. Type 3 SaaS architecture is common in businesses like e-mail marketing, content management systems (CMS), health care applications, and so on.

For your understanding of the type 3 SaaS architecture, I will take you through the example of an innovation management platform that I worked on for a fast-growing startup. The platform enabled industry leaders to tap into the collective intelligence of employees, partners, and customers, find the best ideas as well as make the right decisions. This platform drove innovation through the following-

  1. Employee engagement: Making ideation a part of daily lives and creating a culture of innovation
  2. Continuous improvement: Supercharging project discovery by tapping into the employee bases
  3. Product development: Creating the next big thing with people who understand the business well
  4. Customer experience: Engaging a wider workforce and reducing customer churn

It also enabled enterprises to manage the entire idea lifecycle, right from coming up with an idea to delivering impact at scale. Now, you must be wondering why we chose SaaS for this platform. The platform had to be made available as a service to enterprises with an option of subscription for a limited period. Hosting/licensing wasn’t a viable option, considering the cost of deployment, data privacy concerns, and the IT assistance involved. We picked the SaaS type 3 deployment model, wherein we could keep each enterprise’s data isolated from others, all the while retaining the flexibility of a shared application runtime.

SaaS architecture

Fig 1- SaaS Type 3 Architecture

How Our Decision Paid Off?

Having the right foresight and visualization is the key to good decision-making. That worked well in this case too, when we could rightly foresee the results of deploying SaaS type 3 on this platform. This decision helped us address the areas mentioned below-

  • Data isolation
  • Server utilization, wherein we kept application runtime shared to use the server capacity optimally
  • Separating application runtime to the high-end server for some high-paying customers

What are the Challenges We Overcame and How?

Isolating data for each customer with separate databases, all the while sharing a common application runtime, was a critical challenge that we tackled. In other words, we needed one application runtime capable of supporting multiple databases for customer-specific data management. We also had to accelerate customer onboarding, which implies the deployment process should be automated enough to handle database provisioning, disaster recovery, and rollout of new versions.

Supporting Multiple Database Connections

As explained earlier, we had one application runtime that supported multiple databases for the respective customers. In our case, we had N Tomcat web applications deployed on one server, sharing the common application runtime. This way, every customer had access to an independent application, with each application managing its own connection pool. A plan to merge these deployments into one application is underway, so that we don’t have to run duplicate processes.
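The merged single-runtime direction can be sketched as a tenant-to-database resolver: one shared code path, one isolated database per tenant. In-memory sqlite3 databases stand in here for the per-customer database servers, and the tenant names are illustrative.

```python
# One shared runtime, per-tenant isolated databases: every request resolves
# to its tenant's own database before touching any data.
import sqlite3

tenant_db = {}   # tenant -> that tenant's isolated database connection

def db_for(tenant: str) -> sqlite3.Connection:
    if tenant not in tenant_db:
        # Real setup: look up the tenant's DB server/URL instead of :memory:
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE ideas (title TEXT)")
        tenant_db[tenant] = conn
    return tenant_db[tenant]

def add_idea(tenant: str, title: str):
    db_for(tenant).execute("INSERT INTO ideas VALUES (?)", (title,))

def list_ideas(tenant: str):
    return [r[0] for r in db_for(tenant).execute("SELECT title FROM ideas")]

add_idea("acme", "loyalty program")
add_idea("globex", "supplier portal")
print(list_ideas("acme"))   # each tenant sees only its own rows
```

Because every data access goes through `db_for`, there is no shared table in which a bug could leak one tenant's rows to another.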

Faster Customer Onboarding

We brought down the customer onboarding time by automating the database creation with templatized data using Chef scripts. Apart from the database creation, it was also essential to set up a backup-recovery process and failover & load balancing for the application, which we could achieve by using the cloud solutions and Chef scripts.

Effective Disaster Recovery

As the solution helps in innovation management, the data was highly critical to our customers. This implied that our SaaS system should be able to weather unexpected disasters and unforeseen accidents. To handle this, we deployed the application & database across multiple availability zones, which ensured timely updates of the application and copies of the database whenever the primary database is down.

Automated Deployments

For a new version rollout, along with the SaaS application deployment, we had to deploy a new version of the database or upgrade the existing version for each customer. However, with one-click deployment automation that we had in place, we could safely upgrade all customer applications to the new version all the while ensuring the existence of a recent backup in case of a rollback.

Utilizing Hardware

As we had an isolated database for each tenant, we had to spin up a separate database server per tenant, and this was more of a requirement than a choice. But since the application runtime can be shared, we had the option of hosting it on a single server depending on the usage. By grouping customers based on utilization, we could reduce the number of servers and, in turn, improve utilization.
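Grouping customers onto shared servers by utilization is essentially a bin-packing problem. A first-fit-decreasing sketch, with illustrative load units, capacity, and tenant names:

```python
# First-fit-decreasing packing of tenants onto shared application servers:
# place each tenant (largest load first) on the first server with room.
def pack(loads: dict, capacity: float):
    servers = []   # each server is a dict of tenant -> load
    for tenant, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for srv in servers:
            if sum(srv.values()) + load <= capacity:
                srv[tenant] = load
                break
        else:
            servers.append({tenant: load})   # no room anywhere: new server
    return servers

tenants = {"acme": 40, "globex": 35, "initech": 20, "umbrella": 5}
print(pack(tenants, capacity=60))
```

First-fit-decreasing is a simple heuristic rather than an optimal packer, but for a handful of tenant groups it gets close to the minimum server count with almost no code.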

How did we Ensure Security?

As stated earlier, we isolated data for each customer by having a separate database, all the while sharing a common application runtime. This came with the additional baggage of securing the application runtime so that end-users cannot access other end-users’ data points. How did we implement this? Here’s how-

  • Maintaining separate configuration keys for each customer and rotating them on every release
  • Preserving encryption keys for database fields for each customer and rotating them on every release
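A sketch of the per-customer key-rotation bookkeeping: each release appends a new key version, new data uses the latest key, and older versions are retained so pre-rotation data still verifies. HMAC stands in here for the real field-encryption scheme, and the names and structure are assumptions, not the product's actual implementation.

```python
# Per-customer versioned keyring: rotate on every release, protect new data
# with the newest key, keep old versions only for existing data.
import hashlib
import hmac
import secrets

keyring = {}   # customer -> list of key versions (newest last)

def rotate(customer: str):
    keyring.setdefault(customer, []).append(secrets.token_bytes(32))

def sign(customer: str, field: bytes) -> bytes:
    current = keyring[customer][-1]            # always the newest key
    return hmac.new(current, field, hashlib.sha256).digest()

def verify(customer: str, field: bytes, tag: bytes) -> bool:
    # Accept any historical key so pre-rotation data still verifies.
    return any(
        hmac.compare_digest(hmac.new(k, field, hashlib.sha256).digest(), tag)
        for k in keyring[customer]
    )

rotate("acme")
tag = sign("acme", b"salary=100")
rotate("acme")                                 # release day: new key version
print(verify("acme", b"salary=100", tag))      # old data still verifiable
```

In the real encryption case the same structure applies: decrypt with whichever version succeeds, then re-encrypt lazily under the newest key.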

Apart from that, there were many other security compliances we had to follow for the SaaS application-

  • Our product was independently audited on an annual basis against a rigid set of SOC 2 controls
  • We have an open policy that allows our customers to perform penetration tests of our service
  • Our production environment is protected by a robust network infrastructure that provides a highly secured environment for all customer data
  • Data in transit is over HTTPS only and is encrypted with the TLS v1.2 protocol. User data, including login information, is always sent through encrypted channels
  • Each customer is hosted with an isolated database and application components, ensuring segregation, privacy, and security isolation within a multi-tenant physical hosting model. Instead of storing user data on backup media, we rely on full backups shipped to a physically separate co-location site
  • Customer instances, including data, are hosted in geographically disparate data centers. Customers may choose the location to host their data based on the corporate location or user base location to minimize latency
  • We support Single Sign On (“SSO”), using the Security Assertion Markup Language (“SAML 2.0”). This allows network users to access our application without having to log in separately, with authentication federated from Active Directory
  • An automated process deletes customer data 30 days after the end of the customer’s term. Data can also be terminated immediately depending on the contract terms of the agreement.
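The retention rule in the last bullet can be expressed as a small check; `deletion_due` and its parameters are hypothetical, and the actual behavior depends on the contract terms:

```python
from datetime import date, timedelta

# Illustrative retention check: data becomes eligible for automated deletion
# 30 days after the customer's term ends, or immediately if the contract
# requires it. Purely a sketch of the policy, not production code.

RETENTION_DAYS = 30

def deletion_due(term_end: date, today: date, immediate: bool = False) -> bool:
    """True once the customer's data should be purged."""
    if immediate:  # contract mandates immediate termination
        return today >= term_end
    return today >= term_end + timedelta(days=RETENTION_DAYS)
```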


Despite the above challenges, this model helped us live up to the promise made to the customer: in our SaaS application, each enterprise's ideas remained isolated, and high security compliance was ensured for every customer.

Top 4 learnings from implementing machine learning for startups

Every technology startup needs to embrace machine learning and AI to stay relevant in its business. Thanks to the buzz around AI/ML and VCs' interest in seeing AI/ML in the decks founders use to pitch their ideas, startups are figuring out ways to introduce machine learning into their products. Machine learning, if implemented well, can have a direct impact on their ability to succeed and raise the next round of funding. However, the path to implementing machine learning solutions at a startup comes with its share of hurdles.

So why is it hard to implement machine-learning-backed features at a startup? Let's go through the top considerations for implementing ML at a startup and the right ways to address them-

1.   Availability of Data

A machine learning model is only as good as the data used to train it. For most startups, the biggest challenge is the availability of data, especially data related to their business problem. Generic datasets are of little use for the unique problems startups are trying to solve, yet a startup needs data for a feature before it can roll that feature out. How does one get out of such a catch-22 situation?

One way is to start with a simple machine learning model that can work with sparse data, refine it with rule-based extraction techniques, and roll out the model, or a subset of the feature, to customers. To improve the model, set up a pipeline for collecting labeled data. Techniques such as data fingerprinting using autoencoders can also help develop the ML model incrementally.
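A minimal sketch of this hybrid approach, assuming a hypothetical ticket-classification feature: hand-written rules cover the cold start, and a deliberately tiny keyword "model" stands in for a real classifier trained on the first labeled examples:

```python
# Illustrative hybrid: high-precision extraction rules handle the sparse-data
# cold start, and a trivially simple keyword model (a stand-in for a real
# classifier) handles the rest. All rules, weights, and labels are made up.

RULES = {"refund": "billing", "invoice": "billing", "crash": "bug"}

# "model": keyword weights learned from the few labeled examples available
KEYWORD_WEIGHTS = {"slow": ("performance", 0.7), "error": ("bug", 0.6)}

def classify_ticket(text: str) -> str:
    words = text.lower().split()
    for w in words:                       # 1) try the extraction rules first
        if w in RULES:
            return RULES[w]
    best = ("other", 0.0)                 # 2) fall back to the simple model
    for w in words:
        if w in KEYWORD_WEIGHTS and KEYWORD_WEIGHTS[w][1] > best[1]:
            best = KEYWORD_WEIGHTS[w]
    return best[0]

# classify_ticket("please refund my order") -> "billing" (rule)
# classify_ticket("the page is slow")       -> "performance" (model fallback)
```

As labeled data accumulates through the pipeline, the keyword table can be replaced by a properly trained classifier without changing the surrounding feature.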

2.   Choice of Model and Explainability

With the spiraling popularity of neural networks and their success in face recognition and other object recognition problems, most startups tend to reach for neural networks to solve business problems. Some of the challenges faced while implementing neural-network-based solutions are-

  • Neural networks need large amounts of data to train
  • Explainability, a big necessity for startups in the FinTech and healthcare domains, can be a challenge

Machine learning models based on regression, decision trees, and Support Vector Machine (SVM) can serve as a good starting point.
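To see why such models are easier to explain, here is a one-split decision stump (the simplest possible decision tree) on made-up loan data; the learned model is a single human-readable rule:

```python
# A decision stump: a decision tree with exactly one split. The whole model
# reduces to one readable rule, which is why tree and regression models are a
# good fit where explainability matters. Data and labels are illustrative.

def fit_stump(xs: list[float], ys: list[int]) -> float:
    """Find the threshold on a single feature that minimises errors."""
    best_t, best_err = xs[0], len(ys) + 1
    for t in xs:
        err = sum((x >= t) != (y == 1) for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

incomes = [20.0, 35.0, 50.0, 80.0]   # feature: income (k$)
approved = [0, 0, 1, 1]              # label: was the loan approved?
threshold = fit_stump(incomes, approved)
print(f"approve if income >= {threshold}")  # the entire model, in one rule
```

A neural network fit to the same data would give no such readable rule; here an auditor can inspect the threshold directly.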

3.   Data Pipeline

Why do we need a data pipeline when there is no data? Most people assume that once a model goes into production, the job is done. In reality, it's just the beginning. Model performance in the test and production environments can vary with the distribution and size of the data. Often, the choice of algorithm also depends on the scale of execution. One might need to compromise on model accuracy and choose a simpler algorithm to ensure the model scales and costs stay under control.

In order to measure the model's performance in production and iteratively improve upon it, a data pipeline is required to collect data, label it, retrain the model, and validate it before deployment. Setting up the right validation method is also a key challenge, which I shall discuss in a separate article.
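The collect, label, retrain, validate loop might be skeletoned as below; every stage is a stub standing in for real storage, labeling tools, training jobs, and a held-out validation set:

```python
# Skeleton of a collect -> label -> retrain -> validate pipeline. Each stage
# is a stub; in a real pipeline these would hit production data stores, a
# labeling tool, a training job, and a proper validation harness.

def collect(n: int) -> list[str]:
    return [f"sample-{i}" for i in range(n)]            # pull production data

def label(samples: list[str]) -> list[tuple[str, int]]:
    return [(s, i % 2) for i, s in enumerate(samples)]  # human/heuristic labels

def retrain(dataset: list[tuple[str, int]]) -> dict:
    return {"trained_on": len(dataset)}                 # new model artifact

def validate(model: dict, min_samples: int) -> bool:
    return model["trained_on"] >= min_samples           # gate before deploy

def pipeline_run(n: int, min_samples: int) -> bool:
    model = retrain(label(collect(n)))
    return validate(model, min_samples)                 # True -> safe to ship
```

The value of the skeleton is the gate at the end: a retrained model never reaches production without passing validation, however simple the individual stages start out.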

4.   Right Expertise

Considering all the above tasks, every startup needs a data scientist with a deep mathematical background, problem-solving skills, and engineering expertise. Most data scientists and machine learning experts hold postgraduate degrees in mathematics and excel at building complex models, but aren't necessarily good at implementing an incremental engineering solution.

Individuals excelling in both engineering and mathematics are rare to find and expensive to hire. But then, the question is: do we need data scientists at all? Can ML services such as AutoML or Amazon ML do the job for us? Unfortunately, no. What these platforms provide is a set of tools for data analysis and model building. Startups still need a seasoned data scientist who can discover features, figure out the model, and choose the right validation method.

The idea of pairing up a data scientist with a product engineer works really well. While the product engineer helps with the data pipeline and the extraction rules, the data scientist focuses on feature engineering, model development, and validation.


A machine learning solution can take considerable time to build, and it might require a year or two to reach production-grade accuracy. It also requires IT infrastructure to store and process the data, which can turn out to be an expensive pursuit. Startups cannot afford to wait a year to figure out whether the problem can be solved using ML. So it's imperative to know the efficacy of the solution as early as possible; in other words, fail fast.

Similar to the lean methodology for product development, startups need to adopt an iterative approach to ML model development: starting with simple models, setting up a data pipeline to collect labeled data, and moving toward more complex algorithms.