Breaking Down the Product Development Code for a Growth-Phase Startup


Challenges change drastically when a startup transitions from the early stage to the growth stage. Engineering bottlenecks often emerge, stemming from a three-pronged need: working through the product roadmap, nurturing existing customers, and onboarding new ones. Some of these challenges come from product development itself, while others are rooted in the business front. 

As a Director of Engineering, I have worked, and still work, with startups across the early, growth, and acquisition stages. My projects bring me close to various industries and products, and their diversity made me realize that a VP of Engineering should focus more on mitigating the technological and operational risks related to product development. 

But how?  

Knowing the stumbling blocks and reducing their impact are two different things. The latter requires a detailed study of each product development challenge. In this article, I drill down into the pain points and suggest ways to eliminate them.  

D E F I N E – What to Do?  

Are you spending more than 20% of your time on bug fixes over the last 3 sprints?  

It isn’t easy to balance feature work and maintenance when you have customers to support and roadmap features to deliver. When you reach this point, your success depends on how well you walk this tightrope.     

“What is important is seldom urgent and what is urgent is seldom important.” -Dwight D. Eisenhower

Eisenhower’s Decision Matrix can help you prioritize your product backlog. It helps to dedicate a fixed percentage of each sprint to feature work (important) and bug fixes (urgent). We follow a structure of 80% feature work and 20% bug fixes.   

Whenever bug-fix time goes beyond 20%, and at some point it will, my advice is to run a regular retrospective and deploy corrective actions. Crossing that 20% mark means there is a problem in the Develop (branching), Test (functional and non-functional), Deploy, or Rollout stage.  
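As a rough illustration, the 20% check can be automated from sprint ticket data. This is a minimal sketch; the ticket fields (`type`, `hours`) and the flat-list shape are assumptions, not any real tracker’s API:

```python
BUG_BUDGET = 0.20  # 20% of sprint capacity reserved for bug fixes

def bug_fix_share(tickets):
    """Return the fraction of logged hours spent on bug fixes."""
    total = sum(t["hours"] for t in tickets)
    bugs = sum(t["hours"] for t in tickets if t["type"] == "bug")
    return bugs / total if total else 0.0

def needs_retrospective(last_sprints):
    """True if bug-fix time crossed the budget in any recent sprint."""
    return any(bug_fix_share(s) > BUG_BUDGET for s in last_sprints)

sprint = [
    {"type": "feature", "hours": 60},
    {"type": "bug", "hours": 20},
]
print(f"Bug-fix share: {bug_fix_share(sprint):.0%}")  # 25% -> over budget
```

Wiring something like this to your issue tracker turns the 80/20 split from a guideline into a signal you can review every retrospective.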

D E V E L O P – How to Do?  

Do you have teams with Dedicated Product Owner & Design Engineer?  

As you juggle existing customers, new-customer onboarding, and new feature development, the right team structure helps you make progress on all fronts. Once you define your priorities and assign weightage, run parallel Sprints/Kanban boards for each of them. Mark the logical separation between modules and set up teams accordingly. For example, have a customer implementation team focus on onboarding new customers, an R&D team try out different POCs with a few customers, or a team for each module, such as a payments team to own payment integration.  

For all the teams to make progress, each front should have dedicated Product and Design personnel on the team. It is one notable differentiator I have seen in fast-growing startups. 

How good is your team structure for your architecture?  

Technical architecture can act as a guiding principle for the logical separation of teams into parallel, independent workstreams. With a true microservices architecture, each team can own one microservice, with representation from engineering, product, and design.    

For layered architectures, you can separate based on layers or on the team’s expertise (frontend team/backend team/full-stack developers). Microservices architecture gives a lot of flexibility in structuring teams to achieve the expected growth. 

Is your branching strategy catering to customer-specific demands?  

An ineffective branching strategy can jeopardize your development practice. At some point you might start building custom features for your customers, and with each step in that direction you can end up creating customer-specific branches. This leads to a big mess: the ensuing chaos makes your team focus more on maintaining what has been shipped than on building new features.     

It’s not easy to settle on one effective branching strategy: there are many to choose from, and experts have different opinions about each. If you ask me, I recommend Git-Flow; this is the outcome of my experience working with all kinds of development teams. Develop and Master should be the main branches, with a defined strategy for supporting branches (feature, release, and hotfix branches, each with a defined time to live). While zeroing in on a branching strategy, also consider whether you need to support and maintain multiple versions (e.g., enterprise apps vs. consumer apps).  

T E S T – Do it Better 

Do you know what type of automated test run would suit you best?  

When one of the startups I worked with gradually moved from the early stage (only one customer) to the growth stage (more customers onboarding), the team started stretching to meet deliverables because of last-minute regression issues. There was no way to detect these early, given the QAs’ limited bandwidth for regression. To get out of this deadlock, we started automating key regression scenarios within the available QA bandwidth, slowly freeing up QAs to test new features.  

In the growth stage, regressions often obstruct the production flow, and the VP of Engineering’s role is to prevent that before it causes unnecessary hassle. To ensure minimal production bugs, it is important to test every line of code, including conditional statements, loops, and functions with multiple parameters. As these tests need to be repeated for every build, automation is the key to ensuring the product’s stability and surfacing issues before code hits production.   

Good test coverage (unit tests, integration tests, functional tests, acceptance tests) and code coverage will help prevent such issues. Exhaustive automation suites for smoke, integration, regression, and functional tests can help you achieve the expected velocity. Tools like JUnit, PyUnit, PyTest, and Jest can be used for unit tests. Similarly, there are plenty of functional automation tools available; Selenium is one of the most popular, covering almost 27.48% of the segment.  
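To illustrate the unit-test layer, here is a minimal PyTest-style example; the `apply_discount` helper is hypothetical, not from any real codebase:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical helper under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_applies_percentage():
    assert apply_discount(200.0, 10) == 180.0

def test_zero_discount_is_identity():
    assert apply_discount(99.99, 0) == 99.99
```

Running `pytest` against a file like this picks up the `test_` functions automatically, which is what lets the suite run unattended on every build.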

Code coverage tells you how much code is executed during both manual and automated testing. Popular code coverage tools include JCov and JaCoCo; in our projects, we have used JaCoCo extensively.  

The important thing is to have good automation coverage so that the smoke, regression, and integration suites run on the build whenever necessary without consuming QA members’ time. Companies are well aware of this: a recent survey revealed that around 78% of companies rely on automated testing for regression, which suggests how important it is to integrate the automation suite into your CI/CD pipeline.   

Are you using the right kind of performance monitoring?  

Performance degradation is the next area that should concern you. It tends to bloat into a big problem if you keep your eyes off it for long. When you monitor the right performance metrics for your application, you get a chance to take corrective action on time. As Peter Drucker rightly said, “If you can’t measure it, you can’t improve it.” 

So, the first step is to set up the right tools to measure the right KPIs. Once that is in place, mandate performance and load tests for every release so that any new code that degrades performance can be corrected in time.   

If you have benchmarked your application by the number of concurrent transactions supported, you can use that baseline to define architecture requirements when the need to raise the benchmark arises. Use performance testing tools such as JMeter, LoadUI, or LoadRunner to benchmark application performance, and then monitoring tools such as New Relic, AppDynamics, or Datadog to track performance against the benchmark. 
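The benchmark-gating idea can be sketched in a few lines. Real load tests belong in tools like JMeter; this sketch only illustrates comparing a measured throughput against a target, where `BENCHMARK_TPS` is an assumed number, not a recommendation:

```python
import time

BENCHMARK_TPS = 500  # assumed target: transactions per second the app must sustain

def measure_tps(handler, seconds=1.0):
    """Crude throughput probe: count how many handler calls finish in a window."""
    count, deadline = 0, time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        handler()
        count += 1
    return count / seconds

def gate_release(measured_tps, benchmark=BENCHMARK_TPS):
    """Fail the release if measured throughput regressed below the benchmark."""
    return measured_tps >= benchmark
```

Hooking a gate like this into CI makes the benchmark a release criterion rather than a number on a dashboard.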

How secure is your application?  

Entrepreneurs focus more on adding features when their startups are in the early stage, so security takes a backseat. However, many big tech firms have experienced data breaches, and it is safe to assume your application will be attacked at some point. That assumption makes security testing imperative. As a preventive measure, I recommend annual penetration testing, simulating attacks to identify vulnerabilities, along with regular scanning and auditing. The financial sector, for instance, is full of early adopters of detailed cybersecurity and penetration testing; as a result, the cyberattack success rate in this sector is only 5.3%.  

There are multiple online pen-test tools available; we have used Burp Suite.   

D E P L O Y  

Is your deployment strategy aligned with customer demographics? 

Deciding on the right deployment strategy should be a flexible, contextual decision. If you are unsure of how the code will impact production, canary may serve you better. Consider decision parameters such as whether downtime is acceptable, whether all users should move to the new version in one go, or whether to roll out by geography. Common deployment strategies include recreate, rolling, blue-green, canary, shadow, and A/B testing, and almost all deployment tools offer these capabilities. 
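As a minimal sketch of the canary idea, assuming a simple request router and a 5% canary weight (both illustrative, not any real tool’s behavior):

```python
import random

CANARY_WEIGHT = 0.05  # assumed: send 5% of traffic to the new version

def route_request(weight=CANARY_WEIGHT):
    """Randomly route one request to the stable or canary deployment."""
    return "canary" if random.random() < weight else "stable"

# Over many requests the canary receives roughly its configured share.
hits = sum(route_request() == "canary" for _ in range(100_000))
print(f"canary share: {hits / 100_000:.1%}")
```

In practice the weight would come from configuration so it can be dialed up, or instantly dialed to zero, while you watch the canary’s error rates.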

R O L L O U T  

Are you using the feature flag?  

In agile, you want continuous deployment while remaining extra cautious about accidentally releasing something to customers. Product managers want to roll out a feature to limited users for A/B testing, and it is important to equip them with a tool to control it. Feature flags come to the rescue by decoupling deployment from rollout (release).   

Rolling out alpha/beta features to production can be tricky. Whether you want to do A/B testing or trial beta features with a few customers, feature flags play a key role in enabling that. You can bank on tools like LaunchDarkly and Optimizely for feature flag management. 
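A percentage rollout behind a feature flag can be sketched as deterministic bucketing, so a given user always sees the same variant. This is an illustrative sketch, not how LaunchDarkly or Optimizely implement it:

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user so the rollout is sticky per user."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return bucket < rollout_percent

# The same user always gets the same answer for a given flag.
assert flag_enabled("new-checkout", "user-42", 100) is True
assert flag_enabled("new-checkout", "user-42", 0) is False
```

Hashing the flag name together with the user ID keeps buckets independent across flags, so enabling one experiment does not correlate with another.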

M E A S U R E 

Do you know about the engagement of every feature you roll out?  

If customers are not using features the way they should, knowing why becomes important. Again, the first step is to measure feature usage and its impact on overall user engagement.  

Mixpanel and Amplitude are two leading tools for understanding user behavior, user journeys, and engagement with the product.   

Wrapping Up 

Slight delays in adding features to the product pose multiple risks: they can shake user loyalty, raise questions about return on investment, or cost you a competitive edge. So dig deep into your execution process, check how well you are aligned with the steps I have mentioned here, and reduce your engineering bottlenecks.  

If you want to discuss these issues further, feel free to comment and share your views.  

5 Real-Life Applications Where Edge Computing Can Change the Game


The past decade has seen substantial percolation of cloud-native offerings across industries. Technologies like containerization and cloud-native architecture have helped these applications scale. In fact, the predicted valuation of $308.5bn for public cloud service revenue in 2021 is a good indicator of this evolving market.

In sync with it rose the adoption rate of sensor tech and mobiles, which spurred tremendous growth in data generation.

However, network bandwidth didn’t scale at that pace, which is why cloud-native processing faces challenges, and cloud latency is not helping either.

That is why industries are experiencing a paradigmatic shift from cloud-native to edge computing. In fact, Gartner’s review of the edge computing market substantiates this argument. The company has revealed that edge computing is going to enter the mainstream in 2021.

IBM’s report reveals a possibility of the edge computing market growing from $3.5bn in 2019 to $43.4bn in 2027. The leap will be a gigantic one. As its appendage, the edge-native apps industry will also witness a substantial rise.

Engineers design edge-native applications with the edge’s features in mind, so we can expect this growth to be smooth in the coming years. It will surely benefit from the factors boosting the edge computing market at present. These factors are:

  • Latency

Edge computing can process data near the source and prioritize traffic. This reduces the amount of data flowing to and from the primary network, increases processing speed, and makes data more relevant.

  • Security

In a cloud-native setup, data goes to the cloud analyzer through a single pipe. If that pipe is compromised, an organization’s entire work can come to a standstill. With edge computing, such chances are lower, as hackers can access only a limited amount of data.

  • Reliability

When there is a connectivity issue, storing data locally and ensuring its processing is a more viable option than the traditional modes.

  • Cost-effectiveness

Better segregation of data leads to better data management and reduces cost. Edge computing makes it easier. It optimizes the use of the cloud and available bandwidth.

  • Scaling

Scaling between Edge and Cloud is an ability that helps edge computing manage data volume. The system is designed to ensure a balance and maximize output.

But what are the fields that stand to benefit from networks moving nearer to the edge?

While assessing various aspects, I realized that the changing dynamics of IoT and 5G technologies have the most potential for impact. With companies like Vodafone, Ericsson, and Huawei fast-tracking architectural changes to accommodate 5G advancements, the chances of edge computing percolating industries are growing bigger.

Based on the above factors, we have identified a few use cases that are most suitable for edge computing applications.

  1. Video Processing

Cisco’s Global Cloud Index reveals some interesting aspects that shed light on why edge computing will gain ground in the coming years. According to the index, people, machines, and IoT will generate around 850 Zettabytes (ZB) by 2021.

Only 10% of it will be useful, yet that useful portion will be 10X greater than the data stored or used (7.2 ZB) in 2021. The report also reveals that the useful data may exceed “data center traffic (21 ZB per year) by a factor of four”.

Among the data generated, a huge part is getting churned from video streaming. In fact, the streaming industry has seen a substantial hike in revenues during the COVID-19 pandemic.

In the US, the number of people using a streaming site has grown 21% since 2018. Streaming platforms are now at loggerheads to improve user experience, and for that they are moving to better image quality. This will trigger a massive change in edge computing engagement.

But streaming is not the only point of concern. Interactive video experiences may also find edge computing’s use quite alluring. Interactive videos thrive by providing immediate results and that will open up spaces for edge computing.

The other area that might benefit from edge computing is content suggestions. Predictive analysis is gaining momentum to generate content for the targeted audience. With the help of edge computing, companies can do it locally and increase the speed of suggestions to ensure better engagement.

The segment will benefit more from a teaming up of technologies like edge computing, AI, and machine learning. Real-time personalization of data, customer data review, and behavior analytics to deliver actionable insights will become easier.

  2. AR/ VR

AR/ VR is an innovation that is set to transform how we consume content. Its ability to provide immersive experiences is expected to engage more customers than ever.

However, the process is not simple. It requires properly stitching the real world to the user’s motion in the digital world to ensure adequate synchronization, which triggers the need for a huge volume of graphical rendering.

To ensure seamless functioning, splitting the workload between the AR/VR device and the edge is necessary. The process has latency-sensitive steps, which can be controlled if the edge takes over and manages the bandwidth demands.

The use of AR/ VR in the retail space would be significant as it can transform the traditional brick-and-mortar experience. People can now enter a mall and get a customized route plan or buying chart in a grocery store based on their previous buying experience there. In addition, generating online content will become easier with edge computing services.

The burgeoning gaming sector will also benefit, as edge computing can reduce the price of AR/VR gear by taking over image rendering. With the edge handling rendering, devices no longer need advanced compute capabilities, and rendering opportunities increase.

Edge computing will let end users play a game with either a heavier headset or a lighter device. By providing gamers with such options, it can boost adoption across the AR/VR gaming industry.

In fact, the impact of the combined force of AR/ VR and edge computing is going to be much wider including “serious gaming” use cases. Doctors with AR glasses can perform critical surgeries with overlaid X-ray reports or other physiological maps and it will be a huge boost for the healthcare sector.

For firefighters and soldiers, situational awareness is of utmost importance to chalk out their moves, including seeking guidance from their handlers. Advanced AR/ VR designs with edge computing can strengthen their steps by providing vision with extrasensory capabilities, better situational awareness, improved risk calculation, temperature reading, and improving decision-making.

  3. Emergency Services

Emergency services in the healthcare sector are going local to reach out more to people in distress. It is not always possible for patients in emergencies to visit a multispecialty hospital at a distance. To counter this, equipping the ambulance with adequate measures is a far better idea.

Edge computing and 5G technologies are a perfect combination for such scenarios. A blend of technological and computational resources will accelerate diagnosis and analytics and help the medical team work efficiently within the golden hour.

The medical wearables segment is witnessing investment from several sectors, which means increased scope for research and adoption of new technologies. The connected medical devices segment, which helps diagnose, monitor, and treat patients, has the potential to scale up to $52.2bn by 2022.

Such devices generate a massive amount of data, and a huge portion of it requires real-time processing to ensure faster treatment. Healthcare IT architectures, for instance, can gather health-related data and use edge computing for rapid, real-time analytics to predict health emergencies and act accordingly.

IoT medical devices can detect anomalies and notify the concerned authorities, buying doctors extra time to save a person’s life.

IoT and edge computing technologies can provide extra security for emergency services in smart homes and cities. A smart tool like a security camera can process images and recognize voices at the edge to spot unwanted activity. This helps prevent leakage of private and sensitive audio and video data.

  4. Preventive Measures

Edge computing has a massive role to play in preventing acts of terrorism. Security cameras at various points continuously upload footage to the cloud or a server for analysis, and the bulk grows with each passing day.

But if that footage can be analyzed at the edge using deep learning models, officials can act much faster and stop major crimes from happening.
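The edge-side idea can be sketched as a local gate that scores each frame and forwards only flagged events upstream. The model and frame format here are stand-ins, not a real detection pipeline:

```python
def local_model(frame) -> float:
    """Stand-in for an on-device anomaly score in [0, 1]."""
    return frame.get("motion", 0.0)

def process_at_edge(frames, threshold=0.8):
    """Run scoring locally; return only the frames worth sending upstream."""
    return [f for f in frames if local_model(f) >= threshold]

frames = [{"id": 1, "motion": 0.1}, {"id": 2, "motion": 0.95}]
print(process_at_edge(frames))  # only frame 2 leaves the device
```

The point of the sketch is the data-flow shape: raw frames stay local, and only the rare flagged event consumes uplink bandwidth and cloud processing.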

In fact, the COVID-19 outbreak is an eye-opener in many ways. Scanning people and products, uploading the details, and waiting for results consumes a lot of time. Such delays in a pandemic are unacceptable.

However, edge computing and the 5G network can change the scenario by increasing the assessment speed and reducing the waiting period.

  5. Industry 4.0

The industrial revolution has entered its 4.0 phase, focused on improving productivity by transforming the workforce and bolstering industrial growth. Such an overhaul depends heavily on IoT adoption. In the manufacturing sector alone, IIoT market spending is predicted to grow from $1.67bn in 2018 to $12.44bn in 2024.

The demand for better security and seamless operation will go up in sync with the expanding market.

As an integral part of IIoT, automation will gain big from a paradigmatic shift in procedures that edge computing promises.

Automation generates a massive amount of data, which can be used for AI-based analytics like predictive maintenance and downtime reduction. IIoT-generated data is sensitive, and industries may be reluctant to send and store it remotely in the cloud.

But with edge computing, data persistence and analytics can happen closer to the data source, ensuring data privacy and security.

Simultaneously, storing data locally will help companies adhere to policies like GDPR. By decentralizing specific processes and ensuring optimal physical location, edge computing will make IoT deployments more secure, reliable, and scalable.

Interestingly, this move will not be a restrictive one; rather, it will open up avenues for IoT applications. Industries like smart homes and healthcare will benefit from it.


The rising influx of rich data in these five areas is inspiring moves that focus more on actionable insights and process optimization. In sync, there is a hike in demand for safety and security.

These strategic mechanisms are getting substantial attention from governments and private investors, and it is a boon for edge computing. The coming years will witness more of this interplay between edge-native applications and these five aspects.

5 Key Technology Decisions for a Scalable MVP


Wrong technology decisions are like a bad marriage: linger in it too long, and be prepared for more trouble. I realized this while working with a customer whose product had been matched with the wrong technology by a CTO at an early stage.

Over time, the rift between them widened, and adopting new technologies became more difficult. This led me to think of dilemmas that startups often face while zeroing in on technology.

There are 5 significant technical decisions that startups should consider before they start building a minimum viable product that is scalable.

Using Microservices architecture pattern to build an MVP

Microservices architecture has become a buzzword. Go to any seminar or talk, and you will find the idea selling like hot cakes. It got its thrust from OSS platforms like Netflix’s after microservices helped accelerate the development and deployment of such platforms and services. The accolades are well-earned. But is it a must for all? I say, ‘No.’

Microservices architecture is good only when the following two conditions back it up:

High scale

This pattern suits services where billions of requests pour in each day. We used it to develop an ad server with similar load and earned favorable results. But such cases are exceptions: hitting that scale from Day 1 is a huge task, and it rarely happens.

Large teams & Agility

The other factor is team size. In my opinion, a team with more than 100 members and a strong business need for agility is a possible use case for microservices.

Let me share an experience. In 2016, we used microservices for an MVP, as the hype surrounding them was high. Microservices architecture is inherently distributed; its complexity slowed iterations, and deadlines got delayed. We finally rolled out the minimum viable product after one year, instead of the planned 6 months.

We had to fight hard with distributed transactions, debugging was tough, and even simple user stories carried complexity. The level of complexity we encountered was unprecedented. Distributed systems are hard to manage: the team had to implement patterns like the outbox pattern and circuit breakers for transactional integrity and reliability. These patterns have matured over time, but the experience pushed me to ask, “Is this kind of complexity necessary? Is the ROI that alluring?” The answer was no.

Before considering the microservices journey for your startup, ask two questions: Do we need to support billions of requests every day? Will I ramp up my development team to 100+ engineers in a couple of years? If the answers are ‘No,’ do not go ahead. Instead, adopt a modular monolith. Like microservices, it gives each module its own database, allowing parallel development to an extent, but it deploys as a single unit, deferring the complexities of distributed systems to another day.
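A modular monolith can be sketched as modules that own their data and talk through narrow interfaces while deploying as one unit. The module names and methods below are illustrative:

```python
class PaymentsModule:
    """Owns its data; other modules never touch its store directly."""
    def __init__(self):
        self._store = {}  # module-private state (its own schema/DB in practice)

    def charge(self, order_id, amount):
        self._store[order_id] = amount
        return {"order_id": order_id, "status": "charged"}

class OrdersModule:
    """Cross-module calls go through the module's public interface."""
    def __init__(self, payments: PaymentsModule):
        self.payments = payments

    def place_order(self, order_id, amount):
        return self.payments.charge(order_id, amount)

app = OrdersModule(PaymentsModule())  # everything deploys as one process
print(app.place_order("o-1", 49.0)["status"])  # charged
```

Because the boundary is an in-process call rather than a network hop, a transaction spans modules without outbox patterns, yet the seams remain if you later extract a service.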

Using NoSQL Stores without a specific need

When it comes to NoSQL, a lot of entrepreneurs and programmers are well-versed in the concept of the 3Vs:

  • Volume
  • Variety
  • Velocity

NoSQL suits problems where product velocity demands support for billions of requests or generates around 1TB of data each day. It is also warranted when there is a constant influx of structured, semi-structured, or unstructured data.

Its use with flexible schemas is widespread, where entity attributes are uncertain and evolve over time. E-commerce sites, for instance, are among its biggest adopters, as rigid RDBMS schemas struggle to flexibly model inventory. NoSQL is cashing in on such scenarios through databases like Redis, MongoDB, and Cassandra, and has already covered 39.52% of the market.

However, there are cases where NoSQL can usher in disaster. A few years back, I was developing an MVP for a fintech startup, and our choice of Aerospike as the transactional store turned out to be wrong. It did not support ACID guarantees, so we had to go with BASE (Basically Available, Soft state, Eventual consistency), which is operationally intensive and adds time, effort, and cost.

We ended up fighting the wrong battles. If volume, variety, and velocity are not your prerequisites, don’t take up NoSQL; RDBMS is a good option for such cases.

Another aspect often neglected during decision-making is expertise in data modeling for NoSQL stores. Data modeling in NoSQL differs from RDBMS: for practical use, understanding the query pattern is of prime importance, and modeling is driven by UX screens and query patterns instead of normalization techniques.
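To illustrate query-driven modeling, here is a sketch of a document shaped for one screen rather than normalized into tables; all field names are illustrative:

```python
# One "order page" document embeds everything that screen needs,
# so a single read serves the UI (no joins at query time).
order_page_doc = {
    "_id": "order-1001",
    "customer": {"id": "c-7", "name": "Asha"},  # embedded, not joined
    "items": [
        {"sku": "sku-1", "title": "Cable", "qty": 2, "price": 5.0},
    ],
    "total": 10.0,
}

def render_order_page(doc):
    """The whole screen renders from a single document fetch."""
    lines = [f'{i["qty"]} x {i["title"]}' for i in doc["items"]]
    return {"customer": doc["customer"]["name"],
            "lines": lines,
            "total": doc["total"]}

print(render_order_page(order_page_doc))
```

The trade-off is the usual one: reads for that screen are cheap, but updating a duplicated field (say, a product title) means touching every document that embeds it.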

Moreover, with Mongo or Couchbase, the internal storage structure can add further complexity. Then there is pricing: engines like DynamoDB and Cosmos DB look similar, but their pricing strategies are entirely different, and a shift to NoSQL requires time to understand these models. If your venture is supported by freelancers with an RDBMS background, stick to RDBMS to keep things simple.

Using the standard test pyramid strategy for test automation 

Test automation is fast becoming standard practice, with around 78% of companies now relying on it during software development. For automation, teams conventionally follow the test pyramid:

In 2012, I got the opportunity to work on a platform built around the 1-9-90 rule: on any social media platform, 1% are influencers, 9% are active users, and 90% are passive users. The idea was to create content on our platform and publish it to different social media platforms to collect engagement via views, subscribers, shares, and so on. But after 6 months, the company changed its stance. The new intent was to understand how brands were performing, which led us to build a Digital Consumer Intelligence Platform. We shifted from a content-creator play to an API-only service, and we had to scrap most of the test automation we had so painstakingly built.

What most people don’t realize is that test automation is hard and often brittle. You need adept developers and QA engineers working together to create a state-of-the-art regression suite. Like the DevOps and DesignOps movements, a DevQA movement is the need of the hour; without that joint discipline, the ROI most teams expect never materializes. For startups, I would recommend a test diamond:

It comprises module-specific unit tests, a lot of integration tests, and a few end-to-end tests. Ideally, invest in test automation only after the product reaches market fit.

Having no objective targets for MVP 

The most challenging question for the Product Owner is – How will you objectively define the MVP’s success? Mostly, they know the problem and the solution but fail to put it across objectively in measurable terms.

In an earlier opportunity where we had a chance to work with a payment gateway, the MVP’s development goal was to reduce checkout time. Instead of 30 seconds to complete the checkout flow, the process needed to happen within 10 seconds – this metric gave the team a clear, measurable objective. They could fine-tune the feature set and user experience until they achieved the said target.

Such clarity helps the team apply Customer Development principles, wherein you learn, measure, and iterate to meet your goals. That said, converting a feature goal into a measurable metric is not easy or straightforward; it needs creativity, expertise, and analytics tools to get right.

Another mistake that some startups make is to focus their energies on building the bells and whistles. Entrepreneurs are incredibly passionate about their problems and they keep coming up with new ideas and ways to solve the problem. Dropping the current strategy and running after a new one is widespread among startup teams. Such actions confuse teams and often misguide them. Having a well-defined, measurable target helps the team validate pivots objectively.

Companies must stay away from bells and whistles, as they inflate development costs and delay time to market. In my opinion, always focus on building the key differentiator, and define the MVP’s success objectively to keep the team aligned.

Poor Build vs. Buy Decisions

There are two clear-cut ways of empowering an MVP:

  • Build
  • Buy

Entrepreneurs often try to build everything from scratch and as an engineer, I would love such opportunities. But is it profitable for your MVP? You have to measure it in terms of cost, time, and effort. In most cases, I have found that buying is a pragmatic way forward.

If you have a unique business model that needs a new algorithm, and it is also your differentiator, then don’t shy away from building one. But if your algorithm is not the star, buy one and then optimize it.

A more dynamic approach is to follow a Buy, Validate, and Build methodology. In an earlier opportunity where we worked with a payment gateway, our client bought a payment aggregator’s license instead of building one. The goal of the MVP was to reduce checkout time and enhance the experience, which could be achieved without building a complete payment aggregator from scratch.

As a solution, we loaded it with features that solved the primary problem and fetched us paying customers. Then we replaced the payment aggregator with our own to improve margins. This helped us get the MVP ready on time, open the revenue stream, achieve product-market fit, and then work on profitability. The Buy-Validate-Build methodology helped us validate and fail fast!

To summarize, consider these technological decisions while building an MVP-

  • Avoid Microservices. Use Modular Monolith
  • Avoid NoSQL stores. RDBMS still works for most cases
  • Do not do Test Automation till you reach the Product-Market Fit
  • Do not build bells & whistles, have objective MVP Goals
  • Make Pragmatic Buy decisions, don’t build everything
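The “Modular Monolith” recommendation above can be sketched in a few lines. This is purely an illustrative example (module names like BillingModule are hypothetical, not from any specific product): each domain lives behind a narrow interface, but everything ships as one deployable.

```python
# Minimal modular-monolith sketch: each domain module exposes a narrow
# interface; the app wires them together and ships as ONE deployable.
# Module names (billing, orders) are illustrative only.

class BillingModule:
    """Owns all billing logic; other modules call only this interface."""
    def charge(self, customer_id: str, amount: float) -> dict:
        return {"customer": customer_id, "charged": amount, "status": "ok"}

class OrdersModule:
    """Depends on billing only through its public interface."""
    def __init__(self, billing: BillingModule):
        self.billing = billing

    def place_order(self, customer_id: str, total: float) -> dict:
        receipt = self.billing.charge(customer_id, total)
        return {"order_for": customer_id, "payment": receipt["status"]}

# One process, one deployment, but the boundaries stay explicit, so a
# module can later be split into a microservice if scale demands it.
app_orders = OrdersModule(BillingModule())
```

The point of the sketch is that you keep microservice-style boundaries without paying the operational cost of distributed deployment before product-market fit.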

How Can You Streamline Work from Home to Benefit More?


The year has been deeply unsettling: systems have gone off track by miles, in every sphere and every sector, because of the COVID-19 pandemic. It has forced people to the cusp of an upheaval where adopting and adapting to new structures is the only way forward. One such drastic change is the evolving work-from-home culture.

Work-from-home culture has spurred the adoption of virtual and remote setups, supported by rapid digitalization. But how can we streamline it? How can we ensure a smooth transformation that bolsters both the present and the future?

The answer lies in emerging technologies and management, and in implementing both properly. According to a Stanford University survey, around 42% of the U.S. labor force is now working from home. But the start was not easy: when reports of massive coronavirus outbreaks emerged, government mandates forced companies into this new pattern, and a lack of know-how about working from home, among other factors, acted as a constraint.

Like many other companies, we faced problems. With physical distance, sync-ups become a huge challenge, as does maintaining rapport among teammates. Even getting a proper update on completed work consumes a lot of time.

But we could not afford to lose our minds or allow a plummet in productivity, as that could have triggered a domino effect leading to a point of no return. Financial worries played a significant role here, and their gravity is now evident from U.S. economic reports showing a 31.4% plunge in GDP in the second quarter.

What Were Our Constraints?

To weather such a crisis, we brainstormed to find out what was bogging us down. We found a few culprits:

  • Communication gaps
  • Work feeling isolated and unstructured
  • Difficulty tracking progress
  • Missing the feeling of working together and the office ambiance

In addition, as the days rolled by, employees in different countries started complaining about their well-being while working from home. Sometimes it was related to work desks and posture, but often issues like furloughs, finances, career hiatuses, fear of illness, and other factors impacted their output.

We certainly didn’t want those for our employees. We care for their well-being. Studies have revealed that when employees are happy and content, they become more productive and their satisfaction levels guarantee better outputs for customers.

That’s why we took some measured steps to ease the process. We had to make the lives of our employees more comfortable and we are glad that we did it when the time was ripe.

How Did We Streamline Our Work?

We quickly jotted down possible solutions, whatever was on top of our minds. Then we zeroed in on the most effective ones: those with maximum output and minimum integration challenges. At the end of the process, we came up with four major buckets to address all the problems.

These were

  • Switching from pull to push method of communication
  • Writing things down
  • Respecting the need to create a connection
  • Adopting new tools to simplify the flow

Implementing Them

Each of these aspects required a specific type of handling. But the effective practice of these required a thorough understanding of behavioral patterns of employees and technologies. So, we dug deep and did a little bit of research to understand what suits each of them best.

  1. Switching from Pull to Push Method of Communication

When we are at the office, team leads or managers have face-to-face interactions with groups or individuals for regular project status updates. This is the pull mode.

But as the setup changed, we realized the need for push mode: we pushed ownership of a project to the employee concerned. In the work-from-home model, we asked them to publish their work status rather than wait to be asked by their supervisors. This ensured a seamless flow of operations, as update logs improved synchronization.

A Stanford report claims that productivity increases by 13% when people work from home. But you need the right process to capitalize on it.

To effectively put the system into practice, you can try making the following things mandatory.

For individuals,

  • Ensure status visibility on Skype or other mediums to avoid confusion, and notify others about your log-in and log-out times
  • Update the job sheet to streamline procedures
  • Ask for help when stuck somewhere and don’t wait to be asked
  • Focus more on HRMS to ease the process of attendance regularization
  • Inform the meeting host about a delay in joining a meeting. Let your team or manager know if you are taking an unplanned leave

For leaders or managers,

  • Break calls into three distinct parts: updates, demos, and then discussions. This would prevent any digression and stretch of work hours
  • Set office hours and ask employees to schedule a DND to let them enjoy time with their families
  2. Writing Things Down

In remote setups, we were facing a communication gap. Views lacked clarity and misalignments crept into the flow. To curb that, we instituted a practice of documentation. Yes, it took time, but we got things streamlined.

To ensure proper implementation of this method, we asked individuals to

  • Integrate a process of self-explanatory documentation with logical subtasks, estimates, queries, and answers
  • Update a task with a small blurb to explain changes. This reduces the time spent on calls

We asked leaders or managers to

  • Create team norms and discuss them with the team to set general work expectations
  • Mark a shared space like OneDrive for common contents
  • Circulate agendas before a meeting and distribute minutes afterward
  • Record all the essential calls
  3. Respecting the Need to Create a Connection

It may sound cliché, but it is true; we all are social animals. According to a Deloitte report, 45% of employees prefer social interaction while working, whereas 31% prefer collaboration. This clearly shows how much we need our peers by our side to boost our morale.

But connecting with people virtually is difficult. However, we can do a lot better if we just switch on our cameras: we are ‘visual beings’, and 90% of the information our brains process is visual.

  • Make meetings visual. Satya Nadella, the CEO of Microsoft Corp., said in a recent interview, “Video meetings are more transactional. Work happens before meetings, after meetings.”
  • Set up a time for team playtime. It can be Scrabble or an online game, which you can use as a stressbuster and a time to bond
  • Create a channel to post weird news or memes or anything
  • Come up with innovative ways to bond like ordering pizza for all and then having it on a virtual meet
  4. Adopting New Tools to Simplify Flow

We all knew that software and AI would come to shape the work atmosphere; we just didn’t expect it this soon. Now that we have to adopt and adapt to confront challenges, we should make the most of it.

In fact, now employees have started realizing how they can benefit from various emerging technologies. Around 69% of the respondents in a survey conducted by HR Dive revealed that they feel technologies have empowered them.

We have found some tools useful in maintaining the flow. They are

  • Miro or Limnu for whiteboarding
  • or to explain flowcharts, block diagrams, org charts, etc.
  • is proving its mettle in removing background noises from Zoom and Skype calls
  • Jira Assistant and Microsoft Teams for a common work area
  • Donut for team pairing and inspiring better social connect
  • StoryXpress Clapboard is essential in showing demos during a meeting

Deloitte revealed that around 61% of desk-based workers would like to continue working from home, or at least do it more often. It means people are warming up to this new concept. But there is a downside as well: from an organizational perspective, work from home is not always an ideal solution, as technologies have their limitations.

While working with tools, we have to understand the psyche of our employees. For instance, video conferencing is great, but short meetings, capped at around 30 minutes, are more effective; otherwise, the mind gets tired. This also came out in Nadella’s talk, where he used Microsoft’s research to substantiate his claims.

And again, demography has a huge role to play, as socio-economic and political scenarios shape work cultures. You have to find a balance in the work-from-home setup, and for that, insights are crucial. Leaders have to take the onus of simplifying things and lead the pack to ensure a sound transition without compromising the goals.

5 Ways to Make Innovation a Way of Life in your Startup


As someone who works exclusively with startups, I often hear the word “disruption”. Disruption is here to stay, and every startup must make innovation a way of life. Innovation need not always be top-down and breakthrough; my experience says that small incremental innovations coming in from every level can change the game.

For the last three years, I have been a part of the innovation initiative in our organization. The initiative was intended to develop a structured way of encouraging thought in the right direction: identifying innovation opportunities and opening up diverse approaches to problem-solving.

During this journey, we conducted multiple workshops and reviewed various innovations that were ‘one of a kind’. Having worked as both a submitter and a reviewer, I am sharing these learnings from my experience of working with multiple startups-

Sharing Information

While working on an insurance underwriting workflow, we proposed and implemented auto-decisioning rules to reduce the time required for underwriting policies. We came across a similar use case in a construction tech product: it had an approval workflow directly influencing the project timeline.

By drawing attention to previous use cases and the changing technology landscape, we proposed a machine learning solution to auto-approve or reject documents and reduce turnaround time. The intent was to bring down project delays caused by approval workflows. This way, we could translate the knowledge gained in one domain or vertical to another. Since we work with multiple startups, our engineering managers act as innovation enablers by abstracting solutions and drawing parallels among similar problems.

In a startup, team members with diverse domain backgrounds can achieve this by sharing previous solutioning experiences. When a problem surfaces, they should review their past experience for solutions. For example, the shopping flows for apps in an app store, Netflix-like OTT subscriptions, or e-commerce merchandise can all be abstracted into a generalized shopping flow; similar behavior can be observed and analogous solutions implemented.

Innovative Tools

In another instance, an ongoing discussion with a senior executive at one of the FinTech startups revealed that he spent two hours every day collecting data from different financial portals and crunching it. One of our QA engineers had innovation in mind: he simplified this time-consuming exercise by using his UI automation skillset (generally used for UI testing) to build a utility that fetches this data automatically every day.

It was a simple solution developed in merely two days, without any fancy API integrations or data pipeline setup. Sometimes simple, out-of-the-box thinking gives you frugal innovations, and unconventional use of tools and technologies can do wonders.

The startup ecosystem is very agile, nimble, and cost-sensitive. Frugal innovations, which reduce the unnecessary complexities and costs, are very much the need of the hour. Teams can come up with frugal innovations in areas where they face constraints. Out-of-the-box thinking could help overcome such obstacles.

For one of the telecom products that we worked on, the business was losing revenue, and it was going unnoticed. Even though subscribers were willing to subscribe, they were unable to do so because of insufficient balance.

The engineering team was aligned with the business process and kept an eye on the offerings and product KPIs. This in-depth knowledge of the business helped them identify lost revenue opportunities and provide a solution with simple technology. With the right understanding of business and technology, a minor technical change can have a massive business impact.

To implement similar incremental innovations, keep a keen eye on business and product KPIs, and understand how the KPIs change with every new feature launch. This will help your teams not only align with business or product needs but also drive tech-enabled innovations.


While building an investment platform for one of our customers, we came across a common problem: managing database performance to build analytics in a monolithic application backed by a relational database. An obvious approach was to go with performance monitoring tools such as New Relic. But keeping their limitations in mind, our team decided to build a tool that serves the purpose while overcoming the challenges of off-the-shelf solutions. This tool not only gave in-depth insights into database performance but also supported all kinds of slice & dice operations.

So we built a homegrown solution to deliver the right kind of efficacy across areas ranging from scaling the business to improving the experience of existing users. Solutions devised with in-depth knowledge of architecture and technology will undoubtedly add immense value.
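A homegrown database-insight tool like the one described can start very small. The sketch below is our own illustration (the class and method names are hypothetical, not the actual tool): it times every query and keeps per-query aggregates that can later be sliced and diced.

```python
import time
from collections import defaultdict

# Toy query profiler: wraps query execution, records the duration per
# query text, and exposes simple aggregates for slice & dice reporting.
class QueryProfiler:
    def __init__(self):
        self.samples = defaultdict(list)  # query text -> list of durations

    def run(self, query: str, execute):
        """Execute a query through the profiler; execute() hits the real DB."""
        start = time.perf_counter()
        result = execute()
        self.samples[query].append(time.perf_counter() - start)
        return result

    def slowest(self, top_n: int = 5):
        """Return (query, worst, average, count) tuples, worst-first."""
        stats = [(q, max(d), sum(d) / len(d), len(d))
                 for q, d in self.samples.items()]
        return sorted(stats, key=lambda s: s[1], reverse=True)[:top_n]
```

A real tool would of course persist these samples and capture query plans, but even this minimal core already answers “which queries hurt us most?” without an external monitoring product.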

To bring in such innovations in your startup, ensure that the team has a learning mindset. You need to promote deep understanding and hands-on exposure to technology from the classic concepts to the latest architecture patterns/frameworks/developments. You can emulate an approach that incorporates learning as a part of core values, competency, and performance measures, and have recognition frameworks for appreciation.

For one of our FinTech products, we developed a future-ready middleware to onboard customers easily and quickly. For another client, an insurance product, we designed a different innovation: a separate product to onboard insurance companies quickly with customized forms. In both cases, we aimed to reduce onboarding time and make the process less cumbersome, both key essentials for startups to succeed.

Additionally, it was crucial to keep the end-user in mind while ensuring the best practices and easy operations. We often neglect operational innovations since they are thought of as “common sense”. However, innovation can change any part of the business, be it a one-off scenario or routine operations.

When you consider your startup, especially if it is a platform business, create a playbook to solve common and recurring challenges after considering similar cases.


When people talk about innovation, they usually refer to something gigantic and disruptive. However, those are merely small parts of the whole innovation puzzle. Most of the innovations successful at refining customer experiences are much more incremental. In our experience, incremental innovation is both a key differentiator and a stepping stone for something breakthrough.

How to Build a SaaS Product with Both Data and Run-time Isolation?

Once a startup decides on SaaS implementation, choosing the right SaaS architecture type is imperative, not only to ensure the right pricing model but also to accommodate special design requirements such as scalability and customizability. If you are considering SaaS Type 2 architecture for data isolation and runtime isolation, this article on how to build a SaaS product is a must-read. As an application architect working on enterprise software, let me walk you through how we helped a project management startup succeed by applying SaaS Type 2 to build its product.

The project management platform that we were working on was enterprise-level software. It was based on a well-established algorithm that computes an optimal schedule for different types of project environments. However, to provide scheduling solutions at a much more granular level, the product was going through a major overhaul in terms of new functionality for existing solutions. We also had to revamp the UI to make it more user-friendly.

Challenges that Came Along

The main challenge was to get early feedback on the new functionality from existing customers for quick product enrichment. Simultaneously, it was necessary to put the product in front of a wide variety of potential customers for initial trials, to get them on board for long-term engagement and provide scheduling solutions based on their needs.

While we started focusing on reducing the cycle time for features, this wasn’t possible with the traditional deployment model, wherein the product was hosted in the customer’s environment. Therefore, we decided to offer the platform as a SaaS model. The immediate next step was to pick the right SaaS architecture, which was crucial considering its role in fostering the platform’s future growth.

Arrival at the ‘Make-or-Break’ Decision

Since every organization’s business model is different, task management and execution can differ too, so the platform must be easy for end-users to customize for different customer environments. At any given time, multiple customers will use the platform to create portfolios for their organizations, and these portfolios hold very sensitive business data.

In this SaaS model, the customers were very clear and strict about the need for complete isolation at both the application level and the data level. We agreed that Type 2 architecture was the right fit for this case and decided to implement it, drawing on our experience of SaaSifying products for growth-stage startups from various domains.

Dealing with the Architectural Roadblocks

The following are some of the architectural challenges that we encountered while building the SaaS product and tackled to drive a successful implementation-


Scalability

Each customer runs at a different scale: some have thousands of users using the platform for planning and execution, while others have only a few top-level executives on it. Since we had the freedom to deploy the application per customer, we deployed it keeping the size of each user base in mind.

Fast Customer Onboarding

We had to onboard new customers with minimal assistance from the Engineering or Implementation teams. As soon as a new user signs up on the platform, we need to provide the application and database instance within minutes. We did this using automated scripts that quickly spin up an application instance from a pre-configured base image. A unique URL for the application was generated using AWS Route 53. Once provisioning completes, users are notified that they are ready to use the platform at their unique (user-specific or organization-specific) URL.
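The unique-URL step can be sketched as follows. Only the construction of the Route 53 change request is shown (it is pure data); the actual submission via boto3 is indicated in a comment. The domain, zone ID, and TTL here are placeholder values, not the project’s real configuration.

```python
# Build a Route 53 UPSERT for a per-customer subdomain, e.g.
# acme.example-saas.com -> the freshly provisioned instance IP.
# The base domain, zone ID, and TTL below are illustrative placeholders.

def subdomain_change_batch(customer_slug: str, instance_ip: str,
                           base_domain: str = "example-saas.com") -> dict:
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": f"{customer_slug}.{base_domain}",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": instance_ip}],
            },
        }]
    }

# In a provisioning script, this batch would be submitted with boto3:
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z_PLACEHOLDER",
#     ChangeBatch=subdomain_change_batch("acme", "10.0.0.5"))
```

Keeping the change-batch construction separate from the API call makes the provisioning script easy to test without touching AWS.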


Customizability

The architecture should support the customization of different business entities without any customer-specific deployment from the engineering team. These customizations were provided via a configuration dashboard, where an admin user of an organization sets configuration parameters based on the organization’s needs.

Hardware Utilization

We had to optimize the new architecture for hardware availability. Inevitably, some customers have huge data sets and heavy customizations, while others have little data and almost zero customization. We handled this by analyzing the costs of cloud infrastructure (instances, database servers, etc.) and preparing pricing plans for end-users accordingly.


Security

By isolating data and application runtime for each customer, we resolved many security concerns. Data in transit traveled over HTTPS only, and the application itself provides secure access to all customer data.


Our customer wanted to offer the existing platform as “Portfolio as a Service.” They didn’t want to manage infrastructure or hire an admin for management. The implicit requirement was complete automation of provisioning, which we achieved with a one-click deployment that provisions application and database instances in no time. We built the architecture around multiple clusters so that every customer has their own runtime (application) and database server. With this, we could prevent any sharing of data or applications, and security parameters could be enforced.


As demonstrated in the diagram, on every new customer onboarding, our automated services create keys and provision the application and database as per the pricing plan chosen by the customer. Once this step completes, they can immediately start using the platform.

For every customer request, the load balancer identifies the right IP address of the application to process it. The application then reads fully encrypted data from its isolated database, decrypts it using the keys, and sends it back to the user.

Advantages of SaaS Type 2 Architecture

They say sometimes it’s the smallest decisions that change things forever. It was our decision to probe the customer’s case and choose the right SaaS architecture type to serve their purpose well. Some of the advantages the customer enjoyed-

  • Handle security at the infrastructure level to ensure that the application doesn’t have to take care of data sharing.
  • No necessity of managing connection pools for tenant-specific databases.
  • Low chances of the system’s underutilization as scaling can be done differently for different clients.
  • Faster customer onboarding is possible as there are no multi-tenant-specific items to configure.
  • Customize the system as per user’s need without worrying about its impact on other users.


We customized customer onboarding so that customers can pick pricing plans based on portfolio size and number of users. Our fully automated deployment solution provisions and verifies instances in the cloud and ensures system optimization. Software as a Service Type 2 architecture comes with several benefits, but startups considering it must understand that automation and monitoring need heavy investment.

Top Considerations while Implementing Blockchain

If you have been watching technology make a difference in the startup ecosystem, you have probably seen a lot of hype around Blockchain. Innovative characteristics of Blockchain platforms, like decentralization, immutability, transparency, and automation, are useful across industry verticals and will inspire a multitude of use cases.

Blockchain technology is still in its nascent phase: while cryptocurrency platforms like Bitcoin and Ethereum have been in use for a long time, adoption in the mainstream software industry has been limited. Having worked on Blockchain implementations for startups from various domains, I have listed the top seven considerations that should back the implementation of Blockchain in a product.

On-Chain or Off-Chain

One of the key architectural decisions while implementing Blockchain-based products is understanding what to keep on-chain and what to keep off-chain, especially where transaction data and business validation logic play a crucial role.

The primary constraint is network latency due to data replication across the Blockchain network, and latency keeps increasing with the level of data replication. For the same reason, Ethereum charges a considerable fee to store data on-chain.

Some general guidelines-

  1. Data that is either directly required for transaction validation or needs auditability should be stored on-chain. Referential data is better stored off-chain.
  2. If eventual consistency is acceptable, you can execute transactions off-chain and record only the first and last states on-chain. This increases overall throughput without consuming additional network resources.
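Guideline 1 is commonly implemented by “anchoring”: the bulky referential payload stays off-chain, while a digest of it goes on-chain for auditability. Here is a minimal sketch, where the off-chain store and on-chain log are hypothetical stand-ins for a real database and a real chain:

```python
import hashlib
import json

# Off-chain store (e.g. a regular database) keyed by content hash;
# on-chain we record only the digest, keeping transactions small.
off_chain_store = {}
on_chain_log = []   # stand-in for a real chain's transaction list

def anchor(payload: dict) -> str:
    """Store the payload off-chain and anchor its digest on-chain."""
    blob = json.dumps(payload, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    off_chain_store[digest] = blob   # bulky data stays off-chain
    on_chain_log.append(digest)      # only the digest is on-chain
    return digest

def verify(digest: str) -> bool:
    """Auditability: the off-chain blob must still match its on-chain digest."""
    blob = off_chain_store[digest]
    return (hashlib.sha256(blob).hexdigest() == digest
            and digest in on_chain_log)
```

Any tampering with the off-chain record breaks the digest match, so the chain provides the audit trail without carrying the data itself.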

Public or Private Permissioned

Another important decision is the scope/access of the Blockchain itself, ranging from an open, permissionless system to a private, controlled one. Public Blockchain platforms are useful where users are anonymous and treated equally. Public chains need a community around them and must be community-driven: no single user should have the authority to change the rules of the network. However, a large number of nodes may limit transaction throughput, so it is better to have some incentivization to ensure effective processing.

Permissioned Blockchain platforms control who can read and write on the private Blockchain. Compared with public chains, they are scalable, and they are suitable when controlled governance and compliance or regulations are important.

An example of a public permissionless chain is Libra, a global payment system by Facebook, which anyone can use for value exchange. On the other hand, an insurance claim processing platform is a good example of a private permissioned Blockchain. It is essential to settle this categorization at the initial stages, because the two categories require different kinds of consensus and identity management solutions.

Levels of Security

Tamper-resistance, resistance to double-spending attacks, and data consistency are some essential attributes of a secure distributed system. We can achieve the first two using cryptographic principles of Blockchain technology. For consistency across the system, we need an appropriate consensus mechanism.

In public-facing systems where anyone can join the network, all nodes are trustless, with no node having more privilege than the others. In such scenarios, security against malicious nodes matters most, and a Blockchain with PoW is better despite its over-consumption of network resources and limited transaction throughput.

In consortium-like systems, multiple parties interact and share information. Although node identities are well known, only some nodes are fully trusted to process transactions, and security is required against semi-trusted nodes or external users not directly participating in the network. A Blockchain with an appropriate governance model and a consensus such as PBFT or PoS will not only provide the desired security attributes but also increase operational efficiency thanks to the higher trust levels.

In a document-workflow application, for example, where documents are exchanged between multiple parties for approval, a system of the latter type can provide the required security and efficiency.

Data Privacy Needs

Sometimes, data stored or transactions executed on a Blockchain need protection on account of confidentiality or compliance rules; this is where privacy comes into the picture. For instance, in financial trading and medical-records applications, transactions may need to be hidden, with data visible only to selected stakeholders. Even in Bitcoin, transaction trend graphs may reveal a user’s identity, and users may want to hide the beneficiary or the amounts involved in their transactions.

Techniques like transaction mixing and zero-knowledge proofs have been proposed to support this. Real-life situations sometimes vary in ways these techniques cannot fit directly, requiring the design of a new protocol from existing building blocks.

Physical to Digital World Transition

We can turn physical assets (land registries, paper contracts, or fiat currency) into digital assets on the Blockchain, after which leveraging decentralization for these documents becomes easier. However, this requires inherent trust in the system: we need either a trusted third party providing this guarantee or a physical legal agreement between the parties that cannot be repudiated in a court of law.

In the case of fiat currency-based applications, this trusted third party is a bank. But choosing a bank with a good technical infrastructure is essential to ensure easy Blockchain integration.

Data Protection (GDPR)

GDPR compliance requires that users can selectively reveal personal data to others and exercise their right to erasure of this data. As it is not possible to delete data from a Blockchain, we should either keep such personal data off-chain (on centralized servers) or provide end-to-end encryption of the records so that they can be viewed only by that user.
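A common pattern for reconciling chain immutability with the right to erasure is “crypto-shredding”: records reach the chain only in encrypted form, each user’s key lives off-chain, and erasure means destroying the key. The sketch below uses a toy SHA-256 XOR keystream purely for illustration; a real system would use an authenticated cipher such as AES-GCM, and the store names are hypothetical.

```python
import hashlib
import secrets

user_keys = {}       # off-chain, deletable key store
chain_records = []   # immutable: append-only "on-chain" records

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream for illustration only; NOT production cryptography.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def store_personal(user: str, data: bytes) -> int:
    """Encrypt under the user's off-chain key; only ciphertext hits the chain."""
    key = user_keys.setdefault(user, secrets.token_bytes(32))
    cipher = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    chain_records.append(cipher)
    return len(chain_records) - 1

def read_personal(user: str, idx: int) -> bytes:
    key = user_keys[user]            # raises KeyError once the key is shredded
    cipher = chain_records[idx]
    return bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher))))

def erase(user: str) -> None:
    """'Right to erasure': the record stays on-chain but becomes unreadable."""
    user_keys.pop(user, None)
```

The chain never changes; deleting the key is what renders the personal record permanently unreadable, which is the property GDPR erasure requests need.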

Ease of Development & Deployment

Last but not least, we should have tools that ease development and deployment. A better smart-contract framework means fewer bugs and more trust, and a good container orchestration tool like Kubernetes is a must-have for upgrading the product on all the validator nodes.


Before building a real Blockchain-based product, you have to take a close look at the considerations above; they can make or break your efforts. Hype and teething problems aside, I believe blockchain technology has the potential to revolutionize industries. Happy Blockchaining!

Does your Startup Really need Blockchain?

‘To Blockchain or not to Blockchain’ – this is one big question that has been on the minds of startup founders in recent times. From supply chain monitoring to equity management and cross-border payments, Blockchain technology has been making its way into multiple areas. Startups, to meet their growth goals, are jumping onto the Blockchain bandwagon to generate buzz, convince investors, and raise new rounds of funding.

Many startup founders have approached us with a common question in the recent past: is Blockchain the right fit for my startup? That moved me to come up with a decision tree to enable pragmatic decision-making in this direction. The number of founders reaching out with this dilemma kept increasing, which inspired me to write this detailed article.

Whether to adopt Blockchain technology for your startup is not merely a technological decision but also a business one. As the frontline decision-makers, founders must not fall for the hype but diligently analyze Blockchain's potential from a business perspective, even in cases where a well-defined problem exists. While Blockchain's unique properties have led many founders to see it as an essential, transformative technology, the 'business benefit' stands firm as a vital consideration in this decision. This article covers both the technology and business perspectives founders need to weigh while evaluating Blockchain technology.

Decision Tree: Evaluating the Technology Fit

Though many research papers [1] feature decision trees to evaluate the technological feasibility of Blockchain use cases, here is a simplified version of the framework-

Real-Life Use Cases

For a better understanding of the decision tree, let me take you through some of the real-life use cases across different verticals-


| Use Case | Do we need to store the states (user-specific data and/or metadata)? | Are multiple users involved in updating the stored states? | Is any trusted third party involved? | Can the third party be eliminated? | Decision |
| Social media application that involves user engagement and interaction | Yes | Yes | Yes | No | No. This is similar to a traditional centrally managed application |
| Social media application (same as above) | Yes | Yes | Yes | Yes | Yes. The same use case can be implemented using Blockchain if and only if the control has to be released to the community |
| Food retailers receiving supplies from producers, wherein ensuring food quality is a key challenge | Yes | Yes | No | N/A | Yes |
| Organizations maintaining records of employee attendance | Yes | Yes | Yes | No | No. As long as there is mutual trust between the organization and employees, there is no necessity for Blockchain; bringing Blockchain into the picture would be mere over-engineering |
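The decision tree that these use cases walk through can be encoded as a small function. This is a sketch in the spirit of the Wüst-Gervais questions; the parameter names and verdict strings are my own:

```python
def blockchain_fit(stores_state: bool,
                   multiple_writers: bool,
                   trusted_third_party: bool,
                   third_party_removable: bool) -> str:
    """Walks the decision tree and returns a verdict for the use case."""
    if not stores_state:
        return "No Blockchain needed: there is no state to store"
    if not multiple_writers:
        return "No Blockchain needed: a single-writer database suffices"
    if trusted_third_party and not third_party_removable:
        return "No Blockchain needed: keep the trusted third party"
    return "Blockchain is a candidate: multiple writers, no trusted third party"

# The food-retailer case: shared state, multiple writers, no trusted third party.
print(blockchain_fit(True, True, False, False))
```

The employee-attendance row maps to `blockchain_fit(True, True, True, False)`, which correctly lands on "keep the trusted third party".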


Cost-Benefit Analysis: Evaluating the Business Fit

Every startup founder who is planning to invest in Blockchain should assess the ROI (return on investment) that will come from its implementation. You might be adopting a Blockchain network as a necessity or as a differentiator for your product, but the evaluation should always be done from a revenue-generation perspective.

You might have to produce a cost-benefit analysis specific to your business, but an example will help you understand the approach. Let us consider the case of the food retailers mentioned above and compare the high-level costs across different cost components.

Development Cost

If the development effort for building an MVP (Minimum Viable Product) with a traditional centralized system approach is around X man-months, the effort would be 30-40% higher in the case of a Blockchain-based approach, primarily for building the Blockchain-based ecosystem components. A Blockchain developer would usually cost you at least 1.5 times more than developers working on widely used technologies. Together, this makes the development cost of a Blockchain-based solution roughly 2X the traditional application development cost.
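The figures above can be sanity-checked with a quick back-of-the-envelope calculation. The multipliers are the rough ones from the text, and the baseline effort is an illustrative placeholder, not measured data:

```python
# Rough cost model for the development-cost comparison above.
traditional_effort = 10       # X man-months for a traditional MVP (illustrative)
effort_multiplier = 1.35      # 30-40% extra effort for Blockchain components
rate_multiplier = 1.5         # a Blockchain developer costs ~1.5x more

blockchain_effort = traditional_effort * effort_multiplier
cost_ratio = effort_multiplier * rate_multiplier  # extra effort x higher rate

print(f"Effort: {blockchain_effort:.1f} man-months vs {traditional_effort}")
print(f"Cost ratio: ~{cost_ratio:.2f}x the traditional development cost")
```

With a 1.35x effort multiplier and a 1.5x rate multiplier, the combined cost ratio is about 2.0x, matching the "2X higher" claim.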

Infrastructure Cost

To evaluate the infrastructure cost, let us assume a transaction volume of a few hundred transactions per second (TPS). If the infrastructure cost for a traditional solution is about X per year, it would be about the same for a Blockchain-based approach, assuming nearly 8-10 nodes are part of the consortium. This boils down to one inference: instead of a single party managing all the infrastructure nodes, every member of the consortium should own a node.

With increasing transaction volume, the traditional approach can scale horizontally; Blockchain-based solutions, however, face the 'Scalability Trilemma'. This famous term, coined by Vitalik Buterin, is, in layman's terms, akin to the phrase 'you can't have everything'. Businesses should clearly understand which aspect among the three (decentralization, security, and scalability) they intend to optimize, and whether that is in line with their value proposition.

Other Costs

A few other business efforts required in the case of Blockchain-based solutions include setting up the consortium, convincing prospective members of the benefits of joining it, and expanding it to a level where it can be claimed as safe. Besides, it might also include devising legal rules and regulations to resolve conflicts.

When talking about benefits, a Blockchain-based approach can certainly enable business process automation using smart contracts. The approach not only improves overall process efficiency but also reduces operational costs for businesses. This report [2] finds that using Blockchain can minimize wastage of goods, resulting in savings of nearly 450K Euros annually; this value far exceeds the initial investment and operational cost that goes into a Blockchain-based solution. As the consortium grows further, such automation protocols would enable business communities to define industry-wide standards.


Though it might not have garnered the importance it deserves, evaluating the feasibility of Blockchain is highly recommended for startup founders. This article aims to bust the Blockchain hype and encourage in-depth evaluation at the intersection of business and technology perspectives.


[1]   K. Wüst and A. Gervais, “Do you need a Blockchain?,” 2018 Crypto Valley Conference on Blockchain Technology (CVCBT), Zug, 2018, pp. 45-54, doi: 10.1109/CVCBT.2018.00011.

[2]  G. Perboli, S. Musso and M. Rosano, “Blockchain in Logistics and Supply Chain: A Lean Approach for Designing Real-World Use Cases,” in IEEE Access, vol. 6, pp. 62018-62028, 2018, doi: 10.1109/ACCESS.2018.2875782.


How to Build SaaS Application with Data Isolation but No Run-time Isolation?

Are you considering SaaS (software as a service) product implementation? Then choosing the right SaaS architecture type would be a wise move, so that you can optimize hardware and automation costs. Among SaaS architectures, type 3 requires special attention.

Type 3 SaaS product architecture is the right fit for cases that require data isolation but no run-time isolation. In this type, a separate data store is provisioned for each customer, while the application is shared by all. Type 3 SaaS architecture is common in businesses like e-mail marketing, content management systems (CMS), healthcare applications, and so on.

To help you understand the type 3 SaaS architecture, I will take you through the example of an innovation management platform that I worked on for a fast-growing startup. The platform enabled industry leaders to tap into the collective intelligence of employees, partners, and customers, find the best ideas, and make the right decisions. This platform drove innovation through the following-

  1. Employee engagement: Making ideation a part of daily lives and creating a culture of innovation
  2. Continuous improvement: Supercharging project discovery by tapping into the employee bases
  3. Product development: Creating the next big thing with people who understand the business well
  4. Customer experience: Engaging a wider workforce and reducing customer churn

It also enabled enterprises to manage the entire idea lifecycle, right from producing an idea to delivering impact at scale. Now, you must be wondering why we chose SaaS for this platform. The platform had to be made available as a service to enterprises with an option of subscription for a limited period. Hosting/licensing was not a viable option, considering the cost of deployment, data privacy concerns, and the IT assistance involved. We picked the SaaS type 3 deployment model for this platform, wherein we could keep each enterprise's data isolated from others while retaining the flexibility of a shared application runtime.

SaaS architecture

Fig 1- SaaS Type 3 Architecture

How Did Our Decision Pay Off?

Having the right foresight and visualization is the key to good decision-making. That worked well in this case too when we could rightly foresee the results of deploying the SaaS model type 3 on this platform. This decision helped us address the areas mentioned below-

  • Data isolation
  • Server utilization, wherein we kept application runtime shared to use the server capacity optimally
  • Moving the application runtime to a high-end server for some high-paying customers

What are the Challenges We Overcame and How?

Isolating data for each customer in separate databases, all the while sharing a common application runtime, was a critical challenge that we tackled. In other words, we had one application runtime capable of supporting multiple databases for customer-specific data management. Along with this, we also had to accelerate customer onboarding, which meant the deployment process had to be automated enough to handle database provisioning, disaster recovery, and the rollout of new versions.

Supporting Multiple Database Connections

As explained earlier, we had one application runtime that supported multiple databases for the respective customers. In our case, we had N Tomcat web applications deployed on one server, sharing the common application runtime. This way, every customer had access to an independent application, with each application having its own connection pool to manage connections. However, a plan to merge these deployments into one application is underway, so that we do not have to run duplicate processes.
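The shared-runtime, per-tenant-database idea can be sketched as a small router that hands each tenant its own isolated database. This is an illustrative Python sketch with SQLite standing in for the per-customer databases (our actual stack was Tomcat/Java with real connection pools):

```python
import sqlite3

class TenantRouter:
    """One shared application runtime, one isolated database per tenant."""
    def __init__(self):
        self._pools = {}  # tenant_id -> connection (a stand-in for a real pool)

    def connection_for(self, tenant_id: str) -> sqlite3.Connection:
        # Lazily open (or reuse) the tenant's dedicated database.
        if tenant_id not in self._pools:
            conn = sqlite3.connect(":memory:")  # a per-tenant DSN in production
            conn.execute("CREATE TABLE ideas (id INTEGER PRIMARY KEY, title TEXT)")
            self._pools[tenant_id] = conn
        return self._pools[tenant_id]

router = TenantRouter()
router.connection_for("acme").execute("INSERT INTO ideas (title) VALUES ('A')")
# 'globex' sees an empty table: its data lives in a separate database.
print(router.connection_for("globex").execute("SELECT * FROM ideas").fetchall())
```

The key design point is that the routing decision happens once per request, at the data-access layer, so the application code above it stays tenant-agnostic.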

Faster Customer Onboarding

We brought down the customer onboarding time by automating database creation with templatized data using Chef scripts. Apart from database creation, it was also essential to set up a backup-and-recovery process and failover & load balancing for the application, which we achieved using cloud solutions and Chef scripts.
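The provisioning step can be sketched as a routine that creates a fresh database from a schema template with seed data. The schema and names below are hypothetical, and SQLite stands in for a real per-tenant database server (our real implementation used Chef scripts and cloud tooling):

```python
import sqlite3

# Templatized schema and seed data applied to every new tenant (illustrative).
TEMPLATE_SCHEMA = """
CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
CREATE TABLE ideas (id INTEGER PRIMARY KEY, title TEXT, owner INTEGER);
INSERT INTO users (email) VALUES ('admin@example.com');
"""

def provision_tenant(tenant_id: str) -> sqlite3.Connection:
    """Creates a fresh, templated database for a newly onboarded customer."""
    conn = sqlite3.connect(":memory:")  # a new per-tenant DB server in production
    conn.executescript(TEMPLATE_SCHEMA)
    return conn

db = provision_tenant("new-customer")
print(db.execute("SELECT email FROM users").fetchone())  # ('admin@example.com',)
```

Because the template is the single source of truth, every tenant starts from an identical, tested baseline, which is what makes onboarding fast and repeatable.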

Effective Disaster Recovery

As the solution helps in innovation management, the data was extremely critical to our customers. This implied that our SaaS system should be able to weather unexpected disasters and unforeseen accidents. To handle this, we deployed the application and database across multiple availability zones, which kept up-to-date copies of the application and database available whenever the primary data store went down.

Automated Deployments

For an updated version rollout, along with the SaaS application deployment, we had to deploy an updated version of the database, or upgrade the existing one, for each customer. However, with the one-click deployment automation we had in place, we could safely upgrade all customer applications to the updated version while ensuring a recent backup existed in case of a rollback.
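The rollout logic boils down to: back up every tenant, migrate, and restore from the backup if the migration fails. The helper callbacks below are hypothetical placeholders, not our actual Chef recipes:

```python
def rollout(tenants, backup, migrate, restore):
    """Upgrade every tenant; on failure, roll that tenant back to its backup."""
    results = {}
    for tenant in tenants:
        snapshot = backup(tenant)        # recent backup taken before upgrading
        try:
            migrate(tenant)
            results[tenant] = "upgraded"
        except Exception:
            restore(tenant, snapshot)    # safe rollback path
            results[tenant] = "rolled back"
    return results

# Simulated run: tenant 'b' fails its migration and is rolled back.
state = {"a": "v1", "b": "v1"}
def migrate(t):
    if t == "b":
        raise RuntimeError("migration failed")
    state[t] = "v2"

print(rollout(["a", "b"],
              backup=lambda t: state[t],
              migrate=migrate,
              restore=lambda t, snap: state.__setitem__(t, snap)))
```

Failures stay contained per tenant: one customer's failed upgrade never blocks or corrupts the others, which is exactly what isolated databases buy you.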

Utilizing Hardware

As we had an isolated database for each tenant, we had to spin up a DB server for each of them, and this was more of a requirement than a choice. But since the application runtime could be shared, we had the option of hosting it on a single server depending on usage. By grouping customers based on utilization, we could reduce the number of servers and, in turn, improve server utilization.
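Grouping customers onto shared servers is essentially a bin-packing problem. A simple first-fit-decreasing heuristic illustrates the idea; the utilization numbers and tenant names below are made up:

```python
def pack_tenants(utilization, capacity):
    """First-fit decreasing: place each tenant on the first server with room."""
    servers = []  # each server is a list of (tenant, load) pairs
    for tenant, load in sorted(utilization.items(), key=lambda kv: -kv[1]):
        for server in servers:
            if sum(l for _, l in server) + load <= capacity:
                server.append((tenant, load))
                break
        else:
            servers.append([(tenant, load)])  # no server had room: add one
    return servers

# Four tenants fit on two servers instead of four (capacity = 1.0 server).
usage = {"acme": 0.6, "globex": 0.5, "initech": 0.3, "umbrella": 0.2}
print(len(pack_tenants(usage, capacity=1.0)))  # 2
```

First-fit decreasing is not optimal in general, but it is simple and rarely uses more than ~22% extra bins over the optimum, which is plenty for capacity planning.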

How did we Ensure Security?

As stated earlier, we isolated data for each customer in a separate database while sharing a common application runtime. This came with the additional baggage of securing the application runtime so that end-users could not access other tenants' data. How did we implement this? Here's how-

  • Maintaining separate configuration keys for each customer and rotating them on every release
  • Preserving encryption keys for database fields for each customer and rotating them on every release
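The per-customer key scheme can be sketched as a small key store with release-time rotation. This is a stdlib-only illustration with names of my own choosing; a real deployment would use a managed KMS, and rotation would also re-encrypt the stored fields under the new key (omitted here):

```python
import secrets

class TenantKeyStore:
    """Holds separate config/encryption keys per customer, rotated each release."""
    def __init__(self):
        self._keys = {}  # tenant_id -> current key

    def key_for(self, tenant_id: str) -> bytes:
        # Each tenant lazily gets its own independent 256-bit key.
        return self._keys.setdefault(tenant_id, secrets.token_bytes(32))

    def rotate_all(self):
        # Called on every release: every tenant gets a fresh key.
        # (Re-encrypting stored fields under the new key is omitted here.)
        for tenant_id in self._keys:
            self._keys[tenant_id] = secrets.token_bytes(32)

store = TenantKeyStore()
old = store.key_for("acme")
assert store.key_for("acme") != store.key_for("globex")  # tenants never share keys
store.rotate_all()
assert store.key_for("acme") != old  # the release replaced the key
```

Because keys are scoped per tenant, a leaked key or a bug in one customer's configuration can never decrypt another customer's data.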

Apart from that, there were many other security compliances we had to follow while building a SaaS product or application-

  • Our product was independently audited on an annual basis against a rigid set of SOC 2 controls
  • We have an open policy that allows our customers to perform penetration tests of our service
  • Our production environment is protected by a robust network infrastructure that provides a highly secured environment for all customer data
  • Data in transit is over HTTPS only and is encrypted with the TLS v1.2 protocol. User data, including login information, is always sent through encrypted channels
  • The hosting environment uses isolated database and application components per customer, ensuring segregation, privacy, and security isolation in a multi-tenant physical hosting model. Instead of storing user data on backup media, we rely on full backups that are shipped to a physically different co-location site
  • Customer instances, including data, are hosted in geographically disparate data centers. Customers may choose the location to host their data based on the corporate location or user base location to minimize latency
  • We support Single Sign On (“SSO”), using the Security Assertion Markup Language (“SAML 2.0”). This allows network users to access our application without having to log in separately, with authentication federated from Active Directory
  • An automated process deletes customer data 30 days after the end of the customer’s term. Data can also be deleted immediately, depending on the contract terms of the agreement.


Despite the above challenges, this model helped us live up to the promise made to the customer. Our SaaS application kept each enterprise’s ideas isolated and ensured high security compliance for every customer.