5 Key Technology Decisions for a Scalable MVP

A wrong technology choice is like a bad marriage: linger in it too long, and be prepared for trouble. I realized this while working with a customer whose product had been matched with an ill-fitting technology by the CTO at an early stage. Over time the rift between them widened, and adopting new technologies became more difficult. This led me to think about the dilemmas startups often face while zeroing in on technology.

There are 5 significant technical aspects that startups should consider before they start building their MVP.

Using Microservices architecture pattern to build an MVP

Microservices architecture has become a buzzword. Go to any seminar or talk, and you will find it selling like hot cakes. It got a thrust from OSS ecosystems like Netflix's, after microservices helped accelerate the development and deployment of such platforms and services. The accolades are well-earned. But is it a must for all? I say, 'No.'

Microservices architecture is good only when the following two conditions back it up:

High scale – This pattern suits services where billions of requests pour in each day. We used it to develop an ad server with similar demands and earned favorable results. But such cases are exceptions; needing that scale from Day 1 is rare.

Large teams & agility – The other factor is team size. In my opinion, a team with more than 100 members combined with a strong business need for agility is a possible use case for microservices.

Let me share an experience. In 2016, we used microservices to build an MVP because the hype surrounding them was high. Microservices architecture is inherently distributed; its complexity forced frequent rework iterations, and deadlines slipped. We finally rolled out the MVP after a year, instead of the planned six months. We had to fight hard with distributed transactions, debugging was tough, and even simple user stories carried hidden complexity. Distributed systems are hard to manage: the team had to implement patterns like the outbox pattern and circuit breakers just to get transactional integrity and reliability. These patterns have matured over time, but the experience pushed me to ask, "Is this kind of complexity necessary? Is the ROI really that alluring?" The answer was no.

Before considering the microservices journey for your startup, ask two questions: Do we need to support billions of requests every day? Will we ramp up the development team to 100+ engineers within a couple of years? If the answers are 'No,' do not go ahead. Instead, adopt a Modular Monolith. Like microservices, it gives each module its own database, which allows parallel development to an extent, but it is deployed as a single unit, deferring the complexities of distributed systems to another day.
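To make the distinction concrete, here is a minimal sketch of a modular monolith; SQLite and the module names are illustrative assumptions, not from the engagement described above. Each module owns its own database, but modules collaborate through in-process calls and ship as one deployable unit.

```python
# A minimal sketch of a modular monolith. SQLite and the module names
# are illustrative assumptions, not from the article's engagement.
import sqlite3

class OrderModule:
    """Owns the orders store; no other module touches orders.db."""
    def __init__(self):
        self.db = sqlite3.connect("orders.db")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)"
        )

    def place_order(self, item: str) -> int:
        with self.db:  # local ACID transaction, no network hop
            return self.db.execute(
                "INSERT INTO orders (item) VALUES (?)", (item,)
            ).lastrowid

class BillingModule:
    """Owns the billing store; called in-process, not over REST."""
    def __init__(self):
        self.db = sqlite3.connect("billing.db")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS invoices (order_id INTEGER, amount REAL)"
        )

    def charge(self, order_id: int, amount: float) -> None:
        with self.db:
            self.db.execute(
                "INSERT INTO invoices (order_id, amount) VALUES (?, ?)",
                (order_id, amount),
            )

# One process, one deployable unit: modules collaborate via method calls.
orders, billing = OrderModule(), BillingModule()
billing.charge(orders.place_order("book"), 12.99)
```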

Using NoSQL Stores without a specific need

When it comes to NoSQL, a lot of entrepreneurs or programmers are well-versed with the concept of 3Vs-

  • Volume
  • Variety
  • Velocity

NoSQL suits problems where the product's velocity demands support for billions of requests or generates around 1TB of data each day. Likewise, if there is a constant influx of structured, semi-structured, and unstructured data, NoSQL is warranted.

Flexible schemas are another widespread use case, where one is not sure about entity attributes and the entities evolve over time. For instance, e-commerce sites are among its biggest adopters, as an RDBMS cannot flexibly store a varied inventory. Databases like Redis, MongoDB, and Cassandra have ridden this wave, and NoSQL has already covered 39.52% of the market.

However, there are cases where using NoSQL can usher in disaster. A few years back, I was developing an MVP for a fintech startup. Our choice of Aerospike as the database for our transactional store turned out to be wrong. It does not support ACID guarantees, so we had to settle for BASE (Basically Available, Soft state, Eventual consistency), which is operationally intensive and adds to the time, effort, and cost. We ended up fighting the wrong battles. If volume, variety, and velocity are not your prerequisites, don't take up NoSQL; for such cases, an RDBMS is a good option.

Another aspect often neglected during decision-making is expertise in data modeling for NoSQL stores. Data modeling in NoSQL is different from RDBMS: understanding the query pattern is of prime importance, and modeling is driven by UX screens and query patterns instead of normalization techniques. Moreover, with Mongo or Couchbase, the internal storage structure can introduce further complexity. Then there is pricing: engines like DynamoDB and CosmosDB have entirely different pricing models, and a shift to NoSQL requires extra time to understand them. If your venture is supported by freelancers with an RDBMS background, stick to RDBMS to keep things simple.
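To illustrate the query-first mindset, here is a hedged sketch; the collection and field names are my assumptions, not from any project mentioned above. In an RDBMS you would normalize this into orders, order_items, and products tables joined at read time; in a document store, the model mirrors the UX screen that always shows an order together with its line items.

```python
# A hedged illustration of query-first document modeling; collection and
# field names are assumptions. The document mirrors the order screen, so
# one query fetches everything the screen needs.
order_document = {
    "_id": "order-1001",
    "customer": {"id": "cust-42", "name": "Asha"},
    "status": "shipped",
    # Line items are embedded (denormalized) because the order screen
    # always renders them together with the order header.
    "items": [
        {"sku": "BOOK-9", "title": "Refactoring", "qty": 1, "price": 35.0},
        {"sku": "PEN-3", "title": "Gel pen", "qty": 2, "price": 1.5},
    ],
}

# With a driver such as pymongo, the whole screen is one lookup:
#   db.orders.find_one({"_id": "order-1001"})
```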

Using the standard test pyramid strategy for test automation 

Test automation is fast becoming a trend, and around 78% of companies now rely on it. For test automation, teams typically follow the test pyramid:

[Figure: the standard test pyramid]

In 2012, I got the opportunity to work on a platform built around the 1-9-90 rule: on any social media platform, 1% are influencers, 9% are active users, and 90% are passive users. The idea was to create content on our platform, publish it across social media platforms, and collect engagement in the form of views, subscribers, shares, and so on. But after 6 months, the company pivoted. The new intent was to understand how brands were performing, which led us to build a Digital Consumer Intelligence Platform. We shifted from a content-creator play to an API-only service, and we had to scrap most of the test automation we had so painstakingly built.

What most people don't realize is that test automation is hard and often brittle. You need adept developers and QA engineers working together to create a state-of-the-art regression suite. Like the DevOps and DesignOps movements, a DevQA movement is the need of the hour; without that discipline, the ROI most teams expect from test automation never materializes. For startups, I would recommend a test diamond:

[Figure: the test diamond]

It comprises module-specific unit tests, a lot of integration tests, and a few end-to-end tests. Ideally, invest in test automation only after the product reaches product-market fit.
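As an illustration of the diamond's wide middle layer, here is a minimal pytest sketch; the cart feature is an illustrative assumption. The point is that the test exercises real persistence (an in-memory SQL engine) rather than mocks or a browser.

```python
# A minimal pytest sketch of the diamond's middle layer. The cart feature
# is an assumption; the test runs against a real (in-memory) SQL engine
# instead of mocks or a brittle end-to-end UI flow.
import sqlite3
import pytest

def add_to_cart(db, user_id: int, sku: str) -> None:
    with db:
        db.execute("INSERT INTO cart (user_id, sku) VALUES (?, ?)", (user_id, sku))

def cart_count(db, user_id: int) -> int:
    return db.execute(
        "SELECT COUNT(*) FROM cart WHERE user_id = ?", (user_id,)
    ).fetchone()[0]

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE cart (user_id INTEGER, sku TEXT)")
    return conn

def test_add_to_cart_persists(db):
    add_to_cart(db, user_id=1, sku="BOOK-9")
    add_to_cart(db, user_id=1, sku="PEN-3")
    assert cart_count(db, user_id=1) == 2
```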

Having no objective targets for MVP 

The most challenging question for a Product Owner is – How will you objectively define the MVP’s success? Mostly, they know the problem and the solution but fail to put it across objectively in measurable terms.

In an earlier engagement with a payment gateway, the MVP's goal was to reduce checkout time: instead of 30 seconds to complete the checkout flow, the process needed to happen within 10 seconds. This metric gave the team a clear, measurable objective; they could fine-tune the feature set and user experience until they achieved the target. Such clarity helps the team apply Customer Development principles, wherein you learn, measure, and iterate to meet your goals. Having said this, converting a feature goal into a measurable metric is not easy or straightforward; it needs creativity, expertise, and analytics tools to get things right.
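As a hedged illustration of turning such a goal into a metric, the sketch below times each checkout run and compares it against the 10-second target; the in-memory event list merely stands in for whatever analytics tool your team uses.

```python
# A hedged sketch of making the goal measurable: time each checkout and
# record it. The `events` list stands in for a real analytics backend.
import time

CHECKOUT_TARGET_SECONDS = 10.0
events = []  # stand-in for an analytics tool's event stream

def track_checkout(flow) -> bool:
    start = time.monotonic()
    flow()  # run the real checkout flow
    duration = time.monotonic() - start
    events.append({"metric": "checkout_duration_s", "value": duration})
    return duration <= CHECKOUT_TARGET_SECONDS  # did this run hit the goal?

# Example: a stubbed flow that finishes instantly passes the target.
assert track_checkout(lambda: None)
```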

Another mistake some startups make is to focus their energy on building bells and whistles. Entrepreneurs are incredibly passionate about their problems and keep coming up with new ideas and ways to solve them. Dropping the current strategy and chasing a new one is widespread among startup teams; such swings confuse teams and often misguide them. A well-defined, measurable target helps the team validate pivots objectively.

Companies must stay away from bells and whistles, as they inflate development costs and delay time to market. In my opinion, always focus on building the key differentiator, and define the MVP's success objectively to keep the team aligned.

Poor Build vs. Buy Decisions

There are two clear-cut ways of empowering an MVP-

  • Build
  • Buy

Entrepreneurs often try to build everything from scratch and as an engineer, I would love such opportunities. But is it profitable for your MVP? You have to measure it in terms of cost, time, and effort. In most cases, I have found that buying is a pragmatic way forward.

If you have a unique business model that needs a new algorithm which is also your differentiator, don't shy away from building one. But if the algorithm is not the star, buy one and then optimize it.

A more dynamic approach is to follow a Buy, Validate, and Build methodology. In the payment gateway engagement mentioned earlier, our client bought a payment aggregator's license instead of building one. The goal of the MVP was to reduce checkout time and enhance the experience, which could be achieved without building a complete payment aggregator from scratch. We loaded the product with features that solved the primary problem and fetched paying customers, and only then replaced the third-party aggregator with our own to improve margins. This got the MVP ready on time, opened the revenue stream, achieved product-market fit, and then let us work on profitability. The Buy-Validate-Build methodology helped us validate and fail fast!

To summarize, while building an MVP:

  • Avoid Microservices. Use Modular Monolith
  • Avoid NoSQL stores. RDBMS still works for most cases
  • Do not do Test Automation till you reach Product-Market Fit
  • Do not build bells & whistles, have objective MVP Goals
  • Make Pragmatic Buy decisions, don’t build everything

How Can You Streamline Work from Home to Benefit More?

The year has been an unsettling one: systems have gone off track by miles, in every sphere and every sector, all because of the COVID-19 pandemic. It has left people at the cusp of an upheaval where adopting and adapting to new structures is the only way forward.

One such drastic change is the evolving work-from-home culture. It has spurred the adoption of virtual and remote setups, propelled by rapid digitalization. But how can we streamline it? How can we ensure a smooth transformation that bolsters both the present and the future?

The answer lies in emerging technologies and management, properly implemented. According to a Stanford University survey, around 42% of the U.S. labor force is now working from home. But the start was not easy: when reports of massive coronavirus outbreaks emerged, government mandates forced companies into this new pattern, and a lack of know-how about remote work, among other factors, acted as a constraint.

Like many other companies, we faced problems. With physical distance, sync-ups emerge as a huge challenge, as does maintaining rapport among teammates. Even getting a proper update on completed work consumes a lot of time.

But we were in no position to lose our minds and allow productivity to plummet, as that could have triggered a domino effect leading to a point of no return. Financial worries played a significant role, and their gravity is now evident from U.S. economic reports showing a 31.4% plunge in GDP in the second quarter.

What Were Our Constraints?

To weather the crisis, we brainstormed to find out what was bogging us down. We found a few things:

  • Communication gaps
  • Work feeling isolated and unstructured
  • Difficulty in tracking progress
  • Missing the feeling of working together and the office ambiance

In addition, as the days rolled by, employees in different countries started raising concerns about their well-being while working from home. Sometimes the issue was the work desk and posture, but often factors like furloughs, finances, career hiatuses, and fear of illness impacted their output.

We certainly didn’t want those for our employees. We care for their well-being. Studies have revealed that when employees are happy and content, they become more productive and their satisfaction levels guarantee better outputs for customers.

That’s why we took some measured steps to ease the process. We had to make the lives of our employees more comfortable and we are glad that we did it when the time was ripe.

How We Went Ahead?

We quickly started jotting down possible solutions, whatever came to mind first. Then we zeroed in on the most effective ones, those with maximum impact and minimum integration challenges. At the end of the process, we came up with four major buckets to address all the problems.

These were

  • Switching from pull to push method of communication
  • Writing things down
  • Respecting the need to create a connection
  • Adopting new tools to simplify the flow

Implementing Them

Each of these aspects required a specific type of handling. But the effective practice of these required a thorough understanding of behavioral patterns of employees and technologies. So, we dug deep and did a little bit of research to understand what suits each of them best.

  1. Switching from Pull to Push Method of Communication

When we are at the office, team leads or managers have face-to-face interactions with a group or individuals for regular status updates of projects. It is the pull mode.

But as the setup changed, we realized the need for the push mode. We pushed ownership of each project to the employee concerned: in this new method, we asked them to publish their work status rather than wait to be asked by supervisors. This ensured a seamless flow of operations, as update logs improved synchronization.

The Stanford report claims that productivity increases by 13% when people work from home, but you need the right process to capitalize on it.

To effectively put the system into practice, you can try making the following things mandatory.

For individuals,

  • Ensure status visibility on Skype or other mediums to avoid confusion, and notify others about log-in and log-out times
  • Update the job sheet to streamline procedures
  • Ask for help when stuck somewhere and don’t wait to be asked
  • Focus more on HRMS to ease the process of attendance regularization
  • Inform the meeting host about a delay in joining a meeting. Let your team or manager know if you are taking an unplanned leave

For leaders or managers,

  • Break calls into three distinct parts: updates, demos, and then discussions. This would prevent any digression and stretch of work hours
  • Set office hours and ask employees to schedule a DND to let them enjoy time with their families
  2. Writing Things Down

In remote setups, another problem we faced was the communication gap: views lacked clarity, and misalignments crept into the flow. To curb that, we instituted a practice of documentation. Yes, it took time, but it got things streamlined.

To ensure proper implementation of this method, we asked individuals to

  • Integrate a process of self-explanatory documentation with logical subtasks, estimates, queries, and answers
  • Update a task with a small blurb to explain changes. This reduces the time spent on calls

We asked leaders or managers to

  • Create a team norm and discuss it with your team to set general work expectations
  • Mark a shared space like OneDrive for common contents
  • Circulate agendas before a meeting and distribute minutes afterward
  • Record all the essential calls
  3. Respecting the Need to Create a Connection

It may sound cliché, but it is true; we all are social animals. According to a Deloitte report, 45% of employees prefer social interaction while working, whereas 31% prefer collaboration. This clearly shows how much we need our peers by our side to boost our morale.

But connecting with people virtually is difficult. We can do a lot better if we just switch on our cameras: we are 'visual beings', and 90% of the information our brain processes is visual.

  • Ensure meetings are visual. Satya Nadella, the CEO of Microsoft Corp., said in a recent interview, “Video meetings are more transactional. Work happens before meetings, after meetings.”
  • Set up a time for team playtime. It can be scrabble or an online game, which you can use as a stressbuster and a time to bond
  • Create a channel to post weird news or memes or anything
  • Come up with innovative ways to bond like ordering pizza for all and then having it on a virtual meet
  4. Adopting New Tools to Simplify Flow

We all knew that software and AI would one day shape the work atmosphere; we just didn't expect it this soon. Now that we have to adopt and adapt to confront challenges, we should make the most of it.

In fact, now employees have started realizing how they can benefit from various emerging technologies. Around 69% of the respondents in a survey conducted by HR Dive revealed that they feel technologies have empowered them.

We have found some tools useful in maintaining the flow. They are

  • Miro or Limnu for whiteboarding
  • Draw.io (now diagrams.net) to explain flowcharts, block diagrams, org charts, etc.
  • Krisp.ai for removing background noise from Zoom and Skype calls
  • Jira Assistant and Microsoft Teams for a common work area
  • Donut for team pairing and inspiring better social connect
  • StoryXpress Clapboard for recording and sharing demos during meetings

Deloitte revealed that around 61% of desk-based workers would like to continue their work from home culture or at least do it more often. It means that people are getting warmed up to this new concept. But there is a downside as well. From an organizational perspective, work from home is not often an ideal solution as technologies have their limitations.

While working with tools, we have to understand the psyche of our employees. For instance, video conferencing is great, but short meetings, say with a 30-minute cap, are more effective; beyond that, the mind gets tired. This also came out in Nadella's talk, where he cited Microsoft's research to substantiate his claims.

And again, demography has a huge role to play in it as socio-economic and political scenarios impact work cultures. You have to find a balance and for that, insights are crucial. Leaders have to take the onus of simplifying things and take charge of the pack to ensure a sound transition without affecting the goals.

5 Ways to Make Innovation a Way of Life in your Startup

As someone who works exclusively with startups, I often hear the word "disruption". Disruption is here to stay, and every startup must make innovation a way of life. Innovation need not always be top-down and breakthrough; my experience says that small incremental innovations coming from every level can change the game.

For the last three years, I have been part of the innovation initiative in our organization. The initiative aimed to provide a structured way to steer thinking in the right direction, identify innovation opportunities, and open up diverse approaches to problem-solving. During this journey, we conducted multiple workshops and reviewed various innovations that were one of a kind. Having worked as both a submitter and a reviewer, I am sharing the learnings distilled from my experiences with multiple startups-

While working on an insurance underwriting workflow, we proposed and implemented auto-decisioning rules to reduce the time required for underwriting policies. We came across a similar use case in a construction tech product, which again had an approval workflow directly influencing the project timeline.

By drawing parallels to the previous use case and factoring in the changing technology landscape, we proposed a machine learning solution to auto-approve/reject documents and reduce turnaround time, which would help bring down project delays caused by approval workflows. This way, we could translate knowledge gained in one domain/vertical to another. Since we work with multiple startups, our engineering managers act as innovation enablers by abstracting solutions and drawing parallels among similar problems.

In startups, team members with diverse domain backgrounds can achieve this by conducting roundtable discussions where they share previous solutioning experiences. When a problem surfaces, they should take a step back and see if any past experience can help draw parallels and suggest solutions. For example, the shopping flow for apps through an app store, Netflix-like OTT subscriptions, or e-commerce merchandise can all be abstracted into a generalized shopping flow; similar behavior can be observed and analogous solutions implemented.

In another instance, an ongoing discussion with a senior executive at a FinTech startup revealed that he spent two hours every day collecting data from different financial portals and crunching it. One of our QA engineers simplified this time-consuming exercise by using his UI automation skillset (generally used for UI testing) to build a utility that fetched the data automatically every day.

It was a simple solution developed in merely two days without any fancy API integrations or data pipeline setup. Sometimes, simple and out-of-the-box thinking could give you frugal innovations, and unconventional use of tools & technologies can do wonders.

The startup ecosystem is very agile, nimble, and cost-sensitive. Frugal innovations, which reduce the unnecessary complexities and costs, are very much the need of the hour. Teams can come up with frugal innovations in areas where they face constraints and out of the box thinking could help to overcome them.

For one of the telecom products that we worked on, the business was losing revenue, and it was going unnoticed. Even though subscribers were willing to subscribe, they were unable to do so because of insufficient balance.

The engineering team was closely aligned with the business process and kept an eye on the offerings and product KPIs. This in-depth knowledge of the business not only helped them identify lost revenue opportunities but also provide a solution using simple technology. With the right understanding of business and technology, a minor technical change can have a massive business impact.

To implement similar incremental innovations, have a keen eye on the business and product KPIs, and understand how the KPIs change with every new feature launch. This will help your teams to not only align with business or product needs but also drive tech-enabled innovations.

While building an investment platform for one of our customers, we came across a common problem: managing database performance for analytics in a monolithic application backed by a relational database. An obvious approach was to adopt a performance monitoring tool such as New Relic, but keeping its limitations in mind, our team decided to build a tool that best served the purpose while overcoming the challenges associated with off-the-shelf solutions. This tool not only gave in-depth insights into database performance but also supported all kinds of slice & dice operations.

So we built a homegrown solution that delivered the right kind of efficacy with considerable impact, from scaling the business to improving the experience for existing users. With in-depth knowledge of architecture and technology, any solution brought to the table will undoubtedly add immense value.
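To give a flavor of the idea (not the actual tool we built), here is a hedged sketch of its core mechanism: wrap every query, record its timing, and rank statements for slice & dice analysis. The schema and ranking are illustrative.

```python
# A hedged sketch of the core mechanism behind such a tool; the wrapper
# and ranking are illustrative, not the actual product we built.
import time
from collections import defaultdict

timings = defaultdict(list)  # SQL text -> list of durations in ms

def timed_query(conn, sql, params=()):
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    timings[sql].append((time.perf_counter() - start) * 1000)
    return rows

def slowest(n=5):
    # Slice & dice: rank statements by average latency.
    ranked = sorted(
        timings.items(), key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True
    )
    return [(sql, round(sum(ms) / len(ms), 2)) for sql, ms in ranked[:n]]
```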

To bring in such innovations in your startup, ensure that the team has a learning mindset: promote deep understanding and hands-on exposure to technology from the classic concepts to the latest architecture patterns/frameworks/developments. You can emulate an approach that incorporates learning as a part of core values, competency, and performance measures, and have recognition frameworks for appreciation.

For one of our FinTech products, we developed a future-ready middleware to on-board customers easily and quickly. For another insurance product client, we had built a separate product to on-board insurance companies quickly with customized forms. In both these cases, we aimed at reducing the onboarding time and making the process less cumbersome, which is one of the key essentials for startups to succeed.

Additionally, it was crucial to keep the end-user in mind while ensuring the best practices and easy operations. Operational innovations are often neglected since they are thought of as “common sense”. However, innovation can change any part of the business, be it a one-off scenario or routine operations.

When you consider your startup, especially if it is a platform business, create a playbook to solve common and recurring challenges after considering similar cases.

Summary

When people talk about innovation, they usually refer to something gigantic and disruptive. However, those are merely small parts of the whole innovation puzzle. Most of the innovations successful at refining customer experiences are much more incremental. In our experience, incremental innovation is not only the key to differentiate your startup from others but also a stepping stone for something breakthrough.

How to Build a SaaS Product with Both Data and Run-time Isolation?

Once a startup decides on SaaS, choosing the right SaaS architecture type is imperative, not only to support the right pricing model but also to accommodate special design requirements such as scalability and customizability. If you are considering SaaS type 2 architecture, where isolation of both data and runtime environments is required, this article is a must-read. As an application architect working on enterprise software, let me walk you through how we helped a project management startup succeed by applying SaaS type 2.

The project management platform we worked on was enterprise-level software based on a well-established algorithm that computes optimal schedules for different types of project environments. To provide scheduling solutions at a more granular level, the product was going through a major overhaul, with new functionality added to existing solutions along with a UI revamp to make it more user-friendly.

Challenges that Came Along

The main challenge was to get early feedback for the new functionality from the existing customers for quick incorporation into the product. At the same time, it was also necessary to give the product to a wide variety of potential customers for initial trials, get them on-board for long term engagement, and provide scheduling solutions based on their needs.

Our focus was on reducing cycle time for features, which wasn't possible with the traditional deployment model where the product was hosted in the customer's environment. Therefore, we decided to provide the platform as a SaaS offering. The immediate next step was to pick the right SaaS architecture, and this was crucial considering its role in fostering the platform's future growth.

Arrival at the ‘Make-or-Break’ Decision

Every organization's business model is different, and so are its task management and execution processes, so such platforms must be designed for easy customization by end-users and across different customer environments. At any given time, multiple customers would use the platform to create portfolios for their organizations, holding very sensitive business-specific data.

In this model, the customers were very clear and strict about the need for complete isolation at both the application level and the data level. We agreed that Type 2 architecture was the right fit for this case and decided to implement it, drawing on our experience of SaaSifying products for growth-stage startups across domains.

Dealing with the Architectural Roadblocks

Given below are some of the architectural challenges that we encountered, and our approach of effectively tackling them to drive successful implementation-

Scaling

Each customer runs on a different scale; some customers have thousands of users using the platform for planning and execution. On the other hand, there are customers with very few top-level executives using the platform. Since we have the freedom to deploy the application at the customer level, the application was deployed keeping the size of user bases in mind.

Fast Customer Onboarding

New customers need to be onboarded quickly, with minimal assistance from the Engineering or Implementation teams. As soon as a new user signs up on the platform, we provision the application and database instance within minutes. This was done using automated scripts that provision an application instance from a pre-configured base image, and a unique URL for the application was generated using AWS Route 53. Once provisioning completes, users are notified that the platform is ready at their unique (user-specific or organization-specific) URL.
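As a hedged sketch of the unique-URL step, assuming boto3 and placeholder zone/domain values, this is roughly how a per-tenant subdomain can be upserted in Route 53; the real scripts also cloned the application and database instances from the base image.

```python
# A hedged sketch using boto3; the zone id, domain, and load balancer DNS
# name are placeholders, and the surrounding provisioning is omitted.
import boto3

route53 = boto3.client("route53")

def create_tenant_url(tenant: str, hosted_zone_id: str, lb_dns: str) -> str:
    fqdn = f"{tenant}.example-platform.com"  # e.g. acme.example-platform.com
    route53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": fqdn,
                    "Type": "CNAME",
                    "TTL": 300,
                    # Point the tenant's subdomain at the shared entry point
                    "ResourceRecords": [{"Value": lb_dns}],
                },
            }]
        },
    )
    return f"https://{fqdn}"
```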

Customization

Architecture should support the customization of different business entities without any customer-specific deployment from the engineering team. These customizations were provided in the application via a configuration dashboard, wherein an admin user of an organization will set the configuration parameters based on the organization’s needs.

Hardware Utilization

The new architecture should be optimized for hardware utilization. While some customers come with huge data sets and many customizations, others have little data and almost zero customization. We handled this by analyzing the costs of cloud infrastructure (instances, database servers, etc.) and preparing end-user pricing plans accordingly.

Security

A lot of security aspects were already handled by isolating data as well as application runtime for each customer. The data in transit was over HTTPS only. The application itself provides secure access for all customer data.

Cloud

In this example, the customer wanted to reshape the existing platform as "Portfolio as a Service", without managing infrastructure or hiring an admin. The implicit requirement was complete automation of provisioning, which we achieved with one-click deployment that provisions application and database instances in no time. The architecture was built around multiple clusters so that every customer has their own runtime (application) and database server, and no sharing of data or applications is entertained-

Design

On every new customer onboarding, our automated services create keys and provision application and database instances as per the pricing plan the customer adopted. Once this step is completed, they can immediately start using the platform.

For every customer request, the load balancer identifies the right application instance to route to. The application then fetches fully-encrypted data from the customer's isolated database, decrypts it using the keys, and sends it back to the user.

Advantages of SaaS Type 2 Architecture

They say- sometimes it’s the smallest decisions that can change things forever. In our case, it was our decision to probe the customer’s case up close and choose the right SaaS architecture type that would serve their purpose well. Some of the advantages that the customer enjoyed-

  • Security is handled at the infrastructure level so that the application doesn’t have to take care of data sharing.
  • No necessity of managing connection pools for tenant-specific databases.
  • Low chances of the system’s underutilization as scaling can be done differently for different clients.
  • Faster customer onboarding is possible since no tenant-specific items are built.
  • The system can be customized as per user’s need without worrying about its impact on other users.

Conclusion

We now have customized customer onboarding in place, wherein customers pick pricing plans based on portfolio size and number of users, and our fully automated deployment solution provisions the right instances in the cloud and keeps the system optimized. While SaaS type 2 architecture comes with several benefits, startups considering it must be aware of the heavy investment in automation and monitoring that comes with it.

Top Considerations while Implementing Blockchain

If you are seeing technology making a difference in the startup ecosystem, you might have seen a lot of hype around Blockchain. Innovative characteristics of Blockchain- decentralization, immutability, transparency, and automation- can be applied to various industry verticals, thereby creating a multitude of use cases.

Blockchain technology is still in its nascent phase, and while cryptocurrency platforms like Bitcoin and Ethereum have long been in use, its adoption in the mainstream software industry has been limited. Having worked on Blockchain implementations for startups from various domains, I have listed the top seven considerations when implementing Blockchain in a product.

On-Chain or Off-Chain

One of the key architectural decisions while working on Blockchain-based products is- which part of the functionality should be implemented on the Blockchain and which is to be considered off-chain (i.e. on the centralized servers), both in terms of the transaction data and business validation logic.

The primary constraint is the network latency caused by data replication across the Blockchain network; latency keeps increasing with the level of data replication. For the same reason, Ethereum charges a fee to store data on the chain.

Some general guidelines-

  1. Data that is either directly required for transaction validation or needs auditability should be stored on-chain; all other referential data should be stored off-chain (see the sketch after this list).
  2. In cases where eventual consistency is good enough, transactions can be carried out off-chain, with only the first and last states updated on-chain. This increases overall throughput without consuming additional network resources.
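Here is a minimal sketch of guideline 1, with `OffChainStore` and `chain_client.store_hash` as hypothetical stand-ins rather than any real API: the payload stays off-chain, while only its hash is anchored on-chain for auditability.

```python
# A minimal sketch of guideline 1. `OffChainStore` and `chain_client`
# are hypothetical stand-ins, not a real blockchain API: the payload
# stays off-chain, and only its SHA-256 digest is anchored on-chain.
import hashlib
import json

class OffChainStore:
    """Stand-in for a centralized server or object store."""
    def __init__(self):
        self.blobs = {}

    def put(self, payload: dict) -> str:
        blob = json.dumps(payload, sort_keys=True).encode()
        digest = hashlib.sha256(blob).hexdigest()
        self.blobs[digest] = blob
        return digest

def record_transaction(store: OffChainStore, chain_client, payload: dict) -> str:
    # Auditors can later re-hash the stored blob and compare it with the
    # on-chain digest to prove the data was not tampered with.
    digest = store.put(payload)      # full data stays off-chain
    chain_client.store_hash(digest)  # only the fingerprint is replicated
    return digest
```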

Public or Private Permissioned

Another important decision concerns the scope/access of the Blockchain itself, ranging from an open, permissionless system to a private, controlled one. Public Blockchains are useful where users must be kept anonymous and treated equally. They need to be community-driven, so that no single person or user has the authority to change the rules of the entire network. However, a large number of nodes may limit transaction throughput, and some incentivization is required for effective processing.

Permissioned Blockchain platforms, on the other hand, control who can write/read on the Blockchain and are scalable when compared to public chains. They could be suitable where controlled governance is required, and compliance/regulations need to be followed.

An example of a public permissionless chain is Libra, a global payment system by Facebook, which anyone can use for value exchange. An insurance claim processing platform, on the other hand, is a good example of a private permissioned Blockchain. This categorization must be thought through in the initial stages itself, as the two categories require different kinds of consensus and identity management solutions.

Levels of Security

Tamper resistance, resistance to double-spending attacks, and data consistency are some of the desired attributes of a secure distributed system. While the first two can be achieved using the cryptographic principles of Blockchain technology, an appropriate consensus mechanism is required to achieve consistency across the system.

In public-facing systems where anyone can join the network, all nodes are trustless, with no node having more privilege than the others. In these scenarios, security is required against any malicious node, and a Blockchain with a PoW-like consensus is better suited, despite the over-consumption of network resources and limited transaction throughput.

In consortium-like systems, multiple parties interact and share information. Although node identities are well known, only some nodes are fully trusted to process transactions, and security is required against semi-trusted nodes or external users not directly participating in the network. A Blockchain with an appropriate governance model and a consensus mechanism like PBFT or PoS will not only provide the desired security attributes but also increase operational efficiency because of the high trust levels.

In a document workflow-based application, for example, where documents are exchanged between multiple parties for approval, a system of the latter type can provide the required security and efficiency.

Data Privacy Needs

Sometimes, data stored or transactions executed on Blockchain need protection on account of confidentiality or compliance rules, and herein privacy comes into the picture. For instance- in the case of financial trades and medical-records based applications, transactions may need to be hidden with data visibility for selected stakeholders. Even in the case of bitcoin wherein transactions are done by anonymous users, transaction trend graphs may provide insights that can reveal the user’s true identity. These users may want to hide the beneficiary or amounts involved in these transactions.

Techniques like transaction mixing and zero-knowledge proof have been proposed to support that. Sometimes, there are variations in real-life situations where these techniques can’t fit directly and require the design of a new protocol using existing techniques.

Physical to Digital World Transition

Physical assets (like a land registry, physical objects, paper contracts, or fiat currency) can be represented into digital assets on the Blockchain and can benefit from decentralization. However, this requires an inherent trust in the system. We would either need a trusted third party providing this guarantee or a physical legal agreement between the parties that cannot be repudiated in the court of law.

In the case of fiat currency based applications, this trusted third party is a bank.  In that case, choosing a bank with good technical infrastructure is essential so that the Blockchain platform can be integrated with the bank easily.

Data Protection (GDPR)

GDPR compliance requires that users can selectively reveal personal data to others and can exercise their right to erasure of this data. As it is not possible to delete data from a Blockchain, we should either keep such personal data off-chain (on centralized servers) or encrypt each user's records end-to-end so that only that user can view them.
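One hedged way to implement the encryption option is "crypto-shredding", sketched below using the Python `cryptography` package: each user's records are encrypted with a per-user key held off-chain, so deleting the key renders any immutable on-chain copies permanently unreadable.

```python
# A hedged 'crypto-shredding' sketch using the `cryptography` package.
# Keys live off-chain in a key store; deleting a user's key makes any
# immutable on-chain ciphertext permanently unreadable.
from cryptography.fernet import Fernet

user_keys = {}  # stand-in for an off-chain key vault

def store_personal_data(user_id: str, record: bytes) -> bytes:
    key = user_keys.setdefault(user_id, Fernet.generate_key())
    return Fernet(key).encrypt(record)  # ciphertext is safe to replicate

def erase_user(user_id: str) -> None:
    # The ciphertext may remain on-chain, but without the key it is noise.
    user_keys.pop(user_id, None)
```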

Ease of Development & Deployment

Last but not least, we should have tools that ease development and deployment. A better smart contract framework means fewer bugs and more trust, and a good container orchestration tool like Kubernetes is a must-have for upgrading the product across all validator nodes.

Conclusion

Before building a real Blockchain-based product, take a close look at the considerations above; they can make or break your efforts. Stripping away the hype and accounting for the teething problems, I believe blockchain technology is set to revolutionize industries much as Big Data and other emerging technologies have. Happy Blockchaining!

Does your Startup Really need Blockchain?

‘To Blockchain or not to Blockchain’ – this is one big question that has been on the minds of startup founders in recent times. From supply chain monitoring to equity management and cross-border payments, Blockchain has been making its way into multiple areas. Startups, to meet their growth goals, are jumping onto the Blockchain bandwagon to generate buzz, convince investors, and raise new rounds of funding.

Many startup founders approached us with a common question in the recent past- Is Blockchain the right fit for my startup? That triggered me to help them with a decision tree that will enable pragmatic decision-making in this direction. However, the number of startup founders reaching out to us with this dilemma kept increasing of late, which inspired me to write a detailed article on this.

Whether to adopt Blockchain for your startup is not merely a technological decision but also a business decision. Being the frontliners of decision-making, it is crucial for founders to not fall for the hype but diligently analyze if adopting Blockchain is right from the business perspective– even in cases where a well-defined problem exists. While Blockchain’s unique properties have forced startup founders to think of it as an essential and transformative technology, the ‘business benefit’ stands firm as a vital consideration in this decision. This article will cover both technology and business perspectives that founders need to consider while evaluating Blockchain.

Decision Tree: Evaluating the Technology Fit

Though many research papers [1] feature decision trees to evaluate Blockchain use case feasibility with respect to technology, here is a simplified version of the framework-

Real-Life Use Cases

For a better understanding of the decision tree, let me take you through some real-life use cases across different verticals-

| Use Case | Do we need to store the states (user-specific data and/or metadata)? | Are multiple users involved in updating the stored states? | Is any trusted third party involved? | Can the third party be eliminated? | Decision | Remarks |
| --- | --- | --- | --- | --- | --- | --- |
| Social media application that involves user engagement and interaction | Yes | Yes | Yes | No | No | This is similar to a traditional centrally-managed application |
| The same social media application, with control released to the community | Yes | Yes | Yes | Yes | Yes | The same use case can be implemented using Blockchain if and only if control has to be released to the community |
| Food retailers receiving supplies from producers, wherein ensuring food quality is a key challenge | Yes | Yes | No | NA | Yes | |
| Organizations maintaining records of employee attendance | Yes | Yes | Yes | No | No | As long as there is mutual trust between organization and employees, there is no necessity for Blockchain; bringing it into the picture would be mere over-engineering |

Cost-Benefit Analysis: Evaluating the Business Fit

Every startup founder, who is planning to invest in Blockchain, should assess the ROI that will come from its implementation. You might be adopting Blockchain as a necessity or a differentiator for your product, but evaluation should always be done from a revenue generation perspective.

You might have to come up with a cost-benefit analysis as per your business, but I will help you with an example to better understand the approach. Let’s consider the case of food retailers mentioned above, wherein we would compare the high-level costs with different cost components.

Development Cost

If the development effort for building an MVP with a traditional centralized approach is around X man-months, the effort would be 30-40% higher for a Blockchain-based approach, primarily for building the Blockchain ecosystem components. A Blockchain developer also typically costs at least 1.5 times more than developers working on widely used technologies. Taken together (roughly 1.4 times the effort at 1.5 times the rate), this makes Blockchain development cost about 2X the traditional application development cost.

Infrastructure Cost

To evaluate the infrastructure cost, let's assume a transaction volume of a few hundred transactions per second (TPS). If the infrastructure cost for a traditional solution is about X per year, it would be roughly the same for a Blockchain-based approach, assuming 8-10 nodes in the consortium. It boils down to one inference: instead of a single party managing all the infrastructure nodes, every member of the consortium should own its own node.

With increasing transaction volume, the traditional approach can scale horizontally; Blockchain-based solutions, however, face the 'Scalability Trilemma'. This famous term, coined by Vitalik Buterin, in layman's terms means 'you can't have everything'. Businesses should clearly understand which aspects among the three- decentralization, security, and scalability- they intend to optimize, and whether that is in line with their value proposition.

Other Costs

A few other business efforts required for Blockchain-based solutions include setting up the consortium, convincing prospective members of the benefits of joining, and expanding it to a level where it can be claimed as safe. It might also include devising legal rules and regulations to resolve conflicts.

As for benefits, a Blockchain-based approach can certainly enable business process automation using smart contracts. The approach not only improves overall process efficiency but also reduces operational costs. One report [2] finds that using Blockchain can minimize wastage of goods, resulting in savings of nearly 450K Euros annually- a value that far exceeds the initial investment and operational cost of a Blockchain-based solution. As the consortium grows further, Blockchain-based automation protocols would enable business communities to define industry-wide standards.

Summary

Though it might not have garnered the importance that it deserves, evaluating the feasibility of Blockchain is highly recommended for startup founders. This article aims at busting the Blockchain hype and encouraging in-depth evaluation from an intersection of business and technology perspectives.

References

[1]   K. Wüst and A. Gervais, “Do you need a Blockchain?,” 2018 Crypto Valley Conference on Blockchain Technology (CVCBT), Zug, 2018, pp. 45-54, doi: 10.1109/CVCBT.2018.00011.

[2]  G. Perboli, S. Musso and M. Rosano, “Blockchain in Logistics and Supply Chain: A Lean Approach for Designing Real-World Use Cases,” in IEEE Access, vol. 6, pp. 62018-62028, 2018, doi: 10.1109/ACCESS.2018.2875782.

 

How to Build SaaS Application with Data Isolation but No Run-time Isolation?

If you have already decided on SaaS, we recommend choosing the right SaaS architecture type so that the hardware and automation costs you bear are well optimized. If you are considering SaaS type 3 architecture for your startup, you are in the right place to get started.

Type 3 SaaS architecture is the right fit for cases that require data isolation but no run-time isolation. In this type, a separate data store is provisioned for each customer, while the application is shared by all. Type 3 SaaS architecture is common in businesses like e-mail marketing, content management systems (CMS), health care applications, and so on.

For your understanding of the type 3 SaaS architecture, I will take you through the example of an innovation management platform that I worked on for a fast-growing startup. The platform enabled industry leaders to tap into the collective intelligence of employees, partners, and customers, find the best ideas as well as make the right decisions. This platform drove innovation through the following-

  1. Employee engagement: Making ideation a part of daily lives and creating a culture of innovation
  2. Continuous improvement: Supercharging project discovery by tapping into the employee bases
  3. Product development: Creating the next big thing with people who understand the business well
  4. Customer experience: Engaging a wider workforce and reducing customer churn

It also enabled enterprises to manage the entire idea lifecycle, right from coming up with an idea to delivering impact at scale. Now, you must be wondering why we chose SaaS for this platform. It had to be made available as a service to enterprises, with an option to subscribe for a limited period. Hosting/licensing wasn't a viable option, considering the cost of deployment, data privacy concerns, and the IT assistance involved. We picked the SaaS Type 3 deployment model, wherein we could keep each enterprise's data isolated from the others while retaining the flexibility of a shared application runtime.

[Fig 1: SaaS Type 3 architecture]

How Our Decision Paid Off?

Having the right foresight and visualization is key to good decision-making. That worked well in this case too: we could rightly foresee the results of deploying SaaS type 3 for this platform. The decision helped us address the areas mentioned below-

  • Data isolation
  • Server utilization, wherein we kept application runtime shared to use the server capacity optimally
  • Moving the application runtime to high-end servers for some high-paying customers

What are the Challenges We Overcame and How?

Isolating data for each customer with separate databases, all while sharing a common application runtime, was the critical challenge we tackled. In other words, one application runtime had to support multiple databases for customer-specific data management. Along with this, we also had to onboard customers quickly, which meant the deployment process had to be automated enough to handle database provisioning, disaster recovery, and rollout of new versions.

Supporting Multiple Database Connections

As explained earlier, we had one application runtime supporting multiple databases for the respective customers. In our case, we deployed N Tomcat web applications on one server sharing a common runtime. This way, every customer had access to an independent application, and every application had its own connection pool to manage connections. (A plan to merge these deployments into one application is underway, so that we don't have to run duplicate processes.)
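The product itself ran on Tomcat, but the underlying routing pattern is language-agnostic; here is a hedged Python sketch, with illustrative tenant names, of one shared runtime resolving each request to its tenant's own database.

```python
# A hedged, language-agnostic sketch of the routing pattern (the real
# system used Tomcat web applications); tenant names are illustrative.
import sqlite3

TENANT_DBS = {"acme": "acme.db", "globex": "globex.db"}  # tenant -> database
_connections = {}  # lazily opened per-tenant connections (a pool in production)

def db_for(tenant: str) -> sqlite3.Connection:
    if tenant not in TENANT_DBS:
        raise KeyError(f"unknown tenant: {tenant}")
    if tenant not in _connections:
        _connections[tenant] = sqlite3.connect(TENANT_DBS[tenant])
    return _connections[tenant]

def handle_request(tenant: str, sql: str, params=()):
    # Every query runs against the caller's isolated database, so one
    # tenant's runtime can never read another tenant's rows.
    return db_for(tenant).execute(sql, params).fetchall()
```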

Faster Customer Onboarding

We brought down customer onboarding time by automating database creation with templatized data using Chef scripts. Beyond database creation, it was also essential to set up a backup-recovery process and failover & load balancing for the application, which we achieved using cloud solutions and Chef scripts.

Effective Disaster Recovery

As the solution helps with innovation management, the data was highly critical to our customers, which meant the system had to weather unexpected disasters and unforeseen accidents. To handle this, we deployed the application & database across multiple availability zones, ensuring timely failover to up-to-date application instances and database copies whenever the primary data store is down.

Automated Deployments

For a new version rollout, along with the application deployment, we had to deploy a new version of the database or upgrade the existing one for each customer. With the one-click deployment automation we had in place, we could safely upgrade all customer applications to the new version while ensuring a recent backup existed in case of a rollback.

Utilizing Hardware

As we had an isolated database for each tenant, we had to spin up a DB server for each of them- a requirement rather than a choice. But since the application runtime could be shared, we had the option of hosting it on a single server depending on usage. By grouping customers based on utilization, we could reduce the number of servers and, in turn, improve their utilization.

How did we Ensure Security?

As stated earlier, we isolated each customer's data in a separate database while sharing a common application runtime. This came with the additional baggage of securing the application runtime so that end-users could not access other tenants' data. How did we implement this? Here's how-

  • Maintaining separate configuration keys for each customer and rotating them on every release
  • Preserving encryption keys for database fields for each customer and rotating them on every release

Apart from that, there were many other security compliances we had to follow-

  • Our product was independently audited on an annual basis against a rigorous set of SOC 2 controls
  • We have an open policy that allows our customers to perform penetration tests of our service
  • Our production environment is protected by a robust network infrastructure that provides a highly secured environment for all customer data
  • Data in transit is over HTTPS only and is encrypted with the TLS v1.2 protocol. User data, including login information, is always sent through encrypted channels
  • The hosting environment uses isolated database and application components per customer, ensuring segregation, privacy, and security isolation in a multi-tenant physical hosting model. Instead of storing user data on backup media, we rely on full backups shipped to a physically different co-location site
  • Customer instances, including data, are hosted in geographically disparate data centers. Customers may choose the location to host their data based on the corporate location or user base location to minimize latency
  • We support Single Sign On (“SSO”), using the Security Assertion Markup Language (“SAML 2.0”). This allows network users to access our application without having to log in separately, with authentication federated from Active Directory
  • An automated process deletes customer data 30 days after the end of the customer's term. Data can also be deleted immediately, depending on the terms of the agreement.

Conclusion

Despite the above challenges, this model helped us live up to the promise made to our customers: ideas across enterprises remain isolated, and high security compliance is ensured for every customer.

Distributed Transactions are Not Microservices

(A Quick Note for the Readers- This is purely an opinion-based article distilled out of my experiences)

I've been part of many architecture discussions, reviews, and implementations, and have shipped many microservices-based systems to production. I pretty much agree with Martin Fowler's 'Monolith First' approach. However, I've seen many people go in the opposite direction and justify the premature optimization, which can lead to an unstable and chaotic system.

It's important to understand that if you are building microservices just for the sake of distributed transactions, you're going to land in great trouble.

What is a Distributed System?

Let's go by an example. In an e-commerce app, the monolithic version handles the order flow as a single in-process database transaction: all the writes for an order commit or roll back together.

In the microservice version, the same flow is split across services (say, an Order service and a Payment service). The transaction is divided into two separate transactions owned by two services, and the atomicity now needs to be managed by the API controller.

You need to avoid distributed transactions while building microservices. If you're spreading transactions across multiple microservices, or calling multiple REST APIs or PUB/SUB flows for something that could easily be done with a single in-process service and a single database, then there's a high chance you're doing it wrong.
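For contrast, here is a minimal sketch, with assumed table names, of the same order flow as one in-process transaction against a single database: exactly the atomicity that a microservice split gives up.

```python
# A minimal sketch, with assumed table names, of the order flow as one
# in-process transaction: the atomicity a microservice split gives up.
import sqlite3

conn = sqlite3.connect("shop.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.execute("CREATE TABLE IF NOT EXISTS payments (order_id INTEGER, amount REAL)")

def place_order(item: str, amount: float) -> None:
    # One ACID transaction: either both rows are written or neither is.
    # Split across an Order service and a Payment service, these writes
    # would need sagas/outboxes to approximate the same guarantee.
    with conn:  # commits on success, rolls back on any exception
        cur = conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        conn.execute(
            "INSERT INTO payments (order_id, amount) VALUES (?, ?)",
            (cur.lastrowid, amount),
        )

place_order("book", 12.99)
```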

Challenges in Using Microservices to Implement Distributed Transactions

  1. Chaotic testing, compared to in-process transactions. It's really hard to stabilize features written in a distributed fashion, as you must test not only the happy paths but also cases like services being down, timeouts, and error handling of REST APIs.
  2. Unstable and intermittent bugs, which you will start seeing in production.
  3. Sequencing, in real word everyone needs some kind of sequencing when it comes to transactions, but it’s not easy to stabilize a system that is asynchronous (like node.js) and distributed.
  4. Performance, which is a big one and is a by-product of premature optimization. Initially, your transactions might not handle big jsons, but might appear later, and in-process where the same memory is accessible to subsequent codes and transactions, in microservice world where a transaction is distributed it could be painful (now every microservices will load data, serialize and deserialize or same large Db calls multiple time).
  5. Refactoring, every time you make changes in the design level, you will end up having new problems (1-3), which leads to engineering team a mod “resistant to change”
  6. Slow features, the whole concept behind microservices is to “build and deploy features independently and fast” but now you may need to build, test, stabilize, and deploy bunch of services and it will slow down
  7. Unoptimized hardware utilization, there is a high chance that most of the hardware will be under utilized and you might be start shipping many services in same container or same VMs, resulting in high I/O. Suddenly if some big request comes into the system, it could make it go hyper utilized, which will then make you separate that component out, further making the system under-utilized if these kind of requests are not coming anymore, and now there will be a team to handle this infinite oscillation that could have been avoided.

Do’s & Don’ts for Building Microservices from Scratch

  1. Don’t think of microservices as an exercise similar to refactoring code into different directories. If some code files seem logically separate, it’s always a good idea to separate them into one package; creating a microservice out of them, however, is nothing but premature optimization.
  2. If you need to call REST APIs to complete a request, think twice about it (I would rather recommend avoiding it completely). The same goes for messaging-based systems: before creating new producers and consumers, try not to have them at all.
  3. Always focus on the different user experiences and their diverse scaling requirements. In e-commerce, for instance, vendor APIs are bulky and transactional compared to consumer APIs; that contrast is a good way of identifying components.
  4. Avoid integration tests (yes, you heard that right). If you create 10 services and write hundreds of integration tests, you’re creating a chaotic situation altogether. Instead, start with 2-4 services, write hundreds of unit tests, and write 5 integration tests; I’m sure you won’t regret it later.
  5. Consider batch processing, as this design turns out to be performant and less chaotic. For instance, in e-commerce you may have products in both the vendor and consumer databases. Instead of writing distributed transactions to create new products in both DBs, write only to the vendor DB and run a batch process that picks up, say, 100 new products and inserts them into the consumer DB (see the product-sync sketch after this list).
  6. Consider setting up an auditor, or build your own, so that when an atomic operation fails you can easily debug and fix it instead of digging through different databases. If you want to reduce your late-night intermittent bug fixes, set this up early and use it everywhere.
  7. I would recommend steering clear of synchronization as a stabilizer. I have seen many people try to use it to stabilize the ecosystem, but it introduces more problems (like timeouts) than it fixes. In the end, services should remain scalable.
  8. Don’t partition your database early. If possible, every microservice should have its own database, but not all of them need one. Create the persistence microservices first, and then use them inside other microservices. If most or all of your microservices connect to databases, that’s a design smell; scale the persistence microservices horizontally with more instances instead.
  9. Don’t create a new git repository for every new microservice. First create well unit-tested core components and reuse (don’t copy) them in higher-level components; from a single repository you can spawn many microservices. Every time you need the same code in another repository, don’t copy it: move it into a core component, write a quick unit test, and reuse it in all microservices.
  10. Async programming can be a real problem if transactions are not written with proper sequence handling. A fire-and-forget call may have no visible impact in a normal scenario, but under regression testing or heavy load those fire-and-forget calls might never execute, leading to inconsistent states.
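A minimal sketch of the batch approach from point 5, with hypothetical store interfaces standing in for the vendor and consumer databases:

```typescript
interface Product { id: string; name: string; }

interface VendorStore {
  fetchUnsynced(limit: number): Promise<Product[]>; // new products only
  markSynced(ids: string[]): Promise<void>;
}
interface ConsumerStore {
  upsertMany(products: Product[]): Promise<void>; // idempotent write
}

async function syncProducts(vendor: VendorStore, consumer: ConsumerStore): Promise<void> {
  // A bounded batch (100 here) keeps each run cheap and safely retryable.
  const batch = await vendor.fetchUnsynced(100);
  if (batch.length === 0) return;

  // Upserts are idempotent, so a crashed run can simply be re-run:
  // no cross-database transaction is needed.
  await consumer.upsertMany(batch);
  await vendor.markSynced(batch.map((p) => p.id));
}
// Run syncProducts from a scheduler (e.g. every minute, or via cron).
```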

The example below (a sketch standing in for the original illustration) shows the classic “fire and forget” from point 10: the developer assumed the call to the sendOTP service didn’t need to be awaited. In normal testing and under low load the OTP is always sent, but under heavy load sendOTP sometimes never gets a chance to execute.
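Here is a minimal reconstruction of that situation; sendOTP, registerUser, and the OTP service URL are hypothetical names used for illustration:

```typescript
async function sendOTP(phone: string): Promise<void> {
  const res = await fetch("http://otp-service/send", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ phone }),
  });
  if (!res.ok) throw new Error("OTP service failed");
}

async function registerUserBuggy(phone: string): Promise<void> {
  // ... create the user record ...
  // Classic fire and forget: the promise is not awaited, so failures and
  // timeouts are silently dropped. Under heavy load the call may never run.
  sendOTP(phone);
}

async function registerUserFixed(phone: string): Promise<void> {
  // ... create the user record ...
  // Await the call (or enqueue it with retries) so the outcome is observed
  // and failures can be handled instead of vanishing.
  await sendOTP(phone);
}
```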

Microservices Out of a Monolith: A Cheatsheet

  1. Points 1-5 of the list above still apply.
  2. Forget the big bang: you have a stable production system (it might not be scalable, though) and you still have to reuse 50-70% of the existing system in the new one.
  3. Start collecting data and figuring out the pain points in the system: problematic tables, non-scalable APIs, performance bottlenecks, intermittent performance issues, and load-testing results.
  4. Make a call between scaling by adding hardware and optimization. There is a cost involved in both cases, and you’ll have to decide which is lower. Many a time it’s easier to add more nodes and solve the problem (optimizing the system involves development and testing costs, which might be way higher than just adding nodes).
  5. Consider using the incremental approach. For example, let’s say I have a monolithic e-commerce app (both vendor and consumer), and I learn that we will be scaling with many new vendors in the coming six months. The first intuition would be to re-architect. With the incremental approach, however, you determine that your biggest request hit will come from the consumer side and product search, so only the product catalogue needs refactoring. You change nothing in the existing app: it works as-is for all vendor APIs and consumer transactions. Only for the new problem do you create another microservice with its own DB, replicate the data from the primary DB using batch processing, and redirect all search and product-catalogue APIs to the new microservice (see the routing sketch after this list).
  6. Optimization: you’ll have to shift your key focus to optimizing the problematic components (scaling by adding more hardware might not work here).
  7. Partition your DB to fix problems (don’t ignore this). Many people might not agree, but you need to fix the core design problems instead of adding a counter-mechanism like caching.
  8. Don’t rush into new techs and tools; adopt them only when you have enough expertise and readiness in your team. Always pick stable, small open-source projects instead of the new, trendy library or framework promising too many things.
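A minimal sketch of the redirection from point 5, assuming an Express-based edge in front of the monolith; the service URL and route names are illustrative:

```typescript
import express, { Request, Response } from "express";

const app = express();
const CATALOGUE_SERVICE = "http://catalogue-service:8080"; // hypothetical

// Stand-in for the existing monolith's handler chain.
const monolithHandler = (_req: Request, res: Response) =>
  res.status(200).send("handled by the existing monolith");

// Only catalogue reads are redirected to the new microservice...
app.get("/api/products/search", async (req: Request, res: Response) => {
  const qs = new URLSearchParams(req.query as Record<string, string>).toString();
  const upstream = await fetch(`${CATALOGUE_SERVICE}/search?${qs}`);
  res.status(upstream.status).json(await upstream.json());
});

// ...while vendor APIs and consumer transactions stay on the monolith.
app.all("/api/*", monolithHandler);

app.listen(3000);
```

The monolith keeps serving everything it already serves well; only the proven bottleneck moves.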

Still Distributed Transactions in Microservices? Here’s the Way Forward

  1. Composition: if you think you should merge a couple of microservices, or integrate the transactions into one service, it’s never too late to do this exercise.
  2. Build a consistent and useful audit trail for transactions, and make sure you always capture audits even when your service times out. A simple example: set up an ELK stack with structured logs carrying transaction IDs and entity IDs, plus the ability to define policies, so that your data-operations team can trace failed transactions and fix them (this is supercritical). You need to enable them to fix these; if it still falls to the engineering team, your audit setup has failed.
  3. Redesign your process for chaos testing. Don’t test with hypothetical scenarios (like killing a service and watching how other components behave); instead, try to produce the situations, data, or sequences that can kill or time out a service, and then see how resiliency/retry works in the other services.
  4. For new requirements, always do estimates, impact analysis, and an action plan based on your testing time, not your development time (since you will now spend most of the time testing).
  5. Integrate a circuit breaker into your ecosystem, so that you can check whether all the services that are going to participate in a transaction are live and healthy. This way you can avoid half-cooked transactions even before they start.
  6. Adopt batch processing, wherein you convert some of the critical transactions into batch and offline processes to make the system more stable and consistent. For the e-commerce example mentioned above, the same product-sync approach sketched earlier applies.

Here you still get scaling, isolation, and independent deployment, but the batch process makes the system far more consistent.

  7. Don’t try to build two-phase commit; instead, go for an arbitrator pattern, which essentially supports resiliency, retry, error handling, timeout handling, and rollback (see the arbitrator sketch after this list). This applies to PUB-SUB as well: with an arbitrator you don’t need to make every service robust, you just have to ensure the arbitrator can handle most of the scenarios.
  8. For performance, you can use IPC, memory sharing across processes, and TCP; if you have chatty microservices, look at gRPC or WebSockets as an alternative to REST APIs.
  9. Configuration can become a real nightmare if not handled properly. If your apps fail in production due to missing configuration and you are busy rolling back, fixing, and redeploying, you need something better. It’s very hard to make every microservice configuration-savvy, and you can never figure out all the missing configurations before shipping to production. So, follow this progression (a minimal loader is sketched after this list):

Hard code → config files → databases → API → discovery

  10. Enable service discovery, in case you haven’t already.
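First, a minimal sketch of the arbitrator pattern from point 7, under the assumption that each step exposes a compensating action; all names are illustrative:

```typescript
interface Step {
  name: string;
  run(): Promise<void>;
  compensate(): Promise<void>; // undo action, used during rollback
}

async function withRetry(fn: () => Promise<void>, attempts = 3): Promise<void> {
  for (let i = 1; ; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i >= attempts) throw err; // retries exhausted: escalate
    }
  }
}

// The arbitrator drives the steps in order; on failure it rolls back
// whatever already committed, instead of relying on two-phase commit.
async function arbitrate(steps: Step[]): Promise<void> {
  const done: Step[] = [];
  for (const step of steps) {
    try {
      await withRetry(() => step.run());
      done.push(step);
    } catch (err) {
      for (const s of done.reverse()) await s.compensate();
      throw new Error(`transaction failed at step "${step.name}": ${err}`);
    }
  }
}
```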
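And a minimal sketch of the configuration progression from point 9: hard-coded defaults, then a config file, then a config service, with later layers overriding earlier ones. The file name and endpoint are hypothetical:

```typescript
import { readFileSync, existsSync } from "node:fs";

type Config = Record<string, string>;

const defaults: Config = { logLevel: "info", dbPool: "10" }; // hard code

function fromFile(path: string): Config {
  // Config files are optional: a missing file falls through to defaults.
  return existsSync(path) ? JSON.parse(readFileSync(path, "utf8")) : {};
}

async function fromConfigService(url: string): Promise<Config> {
  // Databases / API / discovery: best-effort, never a hard dependency.
  try {
    const res = await fetch(url);
    return res.ok ? ((await res.json()) as Config) : {};
  } catch {
    return {};
  }
}

export async function loadConfig(): Promise<Config> {
  // Later layers win, and every key always has at least a hard-coded value,
  // so a missing remote configuration can never crash the service at boot.
  return {
    ...defaults,
    ...fromFile("config.json"),
    ...(await fromConfigService("http://config-service/v1/config")),
  };
}
```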

Conclusion

You can use microservices, but keep the pitfalls in the back of your mind. Avoid premature optimization; your target should be building stable and scalable products, not building microservices. A monolith is never bad, and SOA is versatile and capable of handling most needs. You don’t require a system where everything is a microservice; a well-built system combining monoliths, microservices, and SOA can fly really high.

How to Choose the Right SaaS Architecture for Your Startup?

Having worked as a solution architect and designed multiple SaaS applications over the years, I have seen most startups struggle to choose the right SaaS architecture for their product offering.

In this article, I’ve compiled all my learnings into a cheat sheet to help startup founders who’re looking to build SaaS applications make a pragmatic decision backed by proven facts and data.

How does SaaS Architecture Impact Pricing and Profitability? 

Customers are increasingly choosing the ‘pay as you go’ pricing model, as it offers flexibility compared to a one-time pricing model. To enable ‘pay as you go’ for your customers, you need the right architecture to support it: one that allows your startup to track the usage of services and offers customers the flexibility of managing infrastructure as per their requirements.

A poorly designed architecture creates limitations in setting the right pricing strategy for your offerings, thereby impacting the acquisition of new customers. On the other hand, a good architecture not only helps in setting the right pricing model but also accommodates special architecture-design requirements, such as scalability and customizability. To have a clear idea of the pricing model before setting up the SaaS architecture, a startup needs to answer these questions-

  • How would your customers pay? 
  • For what services (computation and values) would the customers pay? 
  • How will the usage be measured and invoices be created for the customers? (a minimal metering sketch follows)
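As a hedged illustration of the last question above, a minimal sketch of usage metering and invoicing; the metrics and prices are illustrative assumptions:

```typescript
// One usage event is recorded per billable action, then aggregated per
// customer at invoicing time. Metric names and unit prices are hypothetical.
interface UsageEvent {
  customerId: string;
  metric: "apiCall" | "storageGb" | "activeUser";
  quantity: number;
  at: Date;
}

const PRICE_PER_UNIT: Record<UsageEvent["metric"], number> = {
  apiCall: 0.0001, // $ per call
  storageGb: 0.1,  // $ per GB-day
  activeUser: 2.0, // $ per user-month
};

function invoiceTotal(customerId: string, events: UsageEvent[]): number {
  return events
    .filter((e) => e.customerId === customerId)
    .reduce((sum, e) => sum + e.quantity * PRICE_PER_UNIT[e.metric], 0);
}
```

If the architecture cannot emit events like these per customer, ‘pay as you go’ pricing has nothing to bill against.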

In a SaaS setup, the costs incurred in managing operations impact profitability to a large extent. Optimizing the operational expenses of running a SaaS model depends on three crucial factors: infrastructure cost, IT administration cost, and licensing cost.

However, the bigger question is: how do you ensure that these costs are well optimized and priced correctly? I’ve listed a few examples below to demonstrate.

Salesforce Online- Salesforce provides a lead-management system for enterprise sales and marketing teams. The online version uses the cloud (so none of their customers needs to worry about hardware and IT procurement) and charges customers based on the size of their sales and marketing teams (so they don’t have to pay a high one-time license cost).

SQL Azure- SQL Server is the industry leader when it comes to RDBMS. In the self-hosted version, the customer pays a high license cost and hires a DBA to handle backups, geographical replication, and disaster recovery (all important for databases). Azure SQL, by contrast, is a cloud-based system accessible online, where you pay only for storage and IOPS, with the rest taken care of by Azure (the cloud provider).

WordPress – Every enterprise needs a content management system, and WordPress has been at the forefront of this. WordPress provides an online platform with white-labelled solutions, customization, and multiple integrations for its customers. It collects customer usage data and charges on the basis of it.

Why is it Important to Pick the Right SaaS Type?

This might be a common question popping up in your mind. Let me explain with two different examples-

Example 1- Let’s consider an instance where a startup provisions isolated application VMs (virtual machines) for all of its customers. In the majority of cases, these boxes will remain under-utilized. With customers paying only for utilization, the startup could end up with huge losses.

Example 2- Consider a second instance where all customers of the startup share the database and application servers, and pay only for utilization. This is a fair game for the startup, as all of the hardware and automation are being properly utilized. However, if one of the customers suddenly increases their utilization of these servers, the other customers might face performance issues and unexpected breakdowns.

When a startup starts building a SaaS application, it essentially bears the hardware and automation costs. Hence, it’s crucial to ensure that all of the above-mentioned costs are well optimized by picking the right SaaS architecture for your offerings. Damage control is still possible in the above examples, but you will definitely lose a big share of time and opportunities.

What are Different Architecture Types of SaaS Applications? 

Type of SaaS Architecture

Type 4 (Doesn’t require data & runtime isolation) 

This is the most basic type of SaaS application. In this type, you assume all of your customers will grow uniformly, and accordingly a customer ID is created for each of them. This customer ID is added to all tables/collections, and all customers share the database and application hardware (see the sketch below).
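A minimal sketch of this type, assuming a shared Postgres database where every table carries a customer_id column; the table and column names are illustrative:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // one database shared by all customers

async function getInvoices(customerId: string) {
  // Every query must be scoped by customer_id: forgetting this filter
  // would leak one customer's rows to another, the key risk of this type.
  const { rows } = await pool.query(
    "SELECT * FROM invoices WHERE customer_id = $1",
    [customerId]
  );
  return rows;
}
```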

Type 3 (Requires data isolation but no runtime isolation) 

This is one step more advanced than type 4. In this type, different data stores are put in place for different customers; the application, however, is shared by all customers (see the sketch below).
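A minimal sketch of this type: one shared application routing each customer to their own database; the connection map is a hypothetical stand-in:

```typescript
import { Pool } from "pg";

const tenantDbUrls: Record<string, string> = {
  acme: "postgres://db-acme:5432/app",
  globex: "postgres://db-globex:5432/app",
};

const pools = new Map<string, Pool>();

function dbFor(customerId: string): Pool {
  let pool = pools.get(customerId);
  if (!pool) {
    pool = new Pool({ connectionString: tenantDbUrls[customerId] });
    pools.set(customerId, pool);
  }
  return pool;
}

// No tenant filter is needed in the query: isolation comes from the
// database boundary itself, at the cost of more infrastructure to manage.
async function getInvoices(customerId: string) {
  const { rows } = await dbFor(customerId).query("SELECT * FROM invoices");
  return rows;
}
```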

Type 2 (Requires data & runtime isolation on the cloud) 

This type involves a separate application and a separate data store for each customer. In this case, the cost of isolation is typically passed on to the customers.

Type 1 (Requires data & runtime isolation, but not on the cloud) 

This type is a version of type 2 wherein the customer wants data to be stored on their own network rather than on the cloud. Here, the customer still opts for pay-as-you-go, or for a pricing model based on users/features as agreed at onboarding.

How to Pick the Right SaaS Type for your Startup? 

Depending on the type of industry and the nature of the data, customers’ requirements for security and shareability vary. A startup can try to understand the needs of multiple customers and refer to the flowchart given below to select the right SaaS architecture type-

On a Final Note 

As the startup grows, it isn’t easy to mold the existing architecture to accommodate demands from the growing user base. Hence, it’s always good to choose the right SaaS architecture at the start, so that you don’t lose out on business because of a rigid architecture.