Breaking Down the Product Development Code for a Growth-Phase Startup
Director of Engineering - 19 May 2021
Challenges change drastically when a startup transitions from the early stage to the growth stage. Engineering bottlenecks often emerge from a three-pronged need: executing on the product roadmap, nurturing existing customers, and onboarding new ones. Some of these challenges come from product development; others are rooted in the business front.
As a Director of Engineering, I have worked, and continue to work, with startups across the early, growth, and acquisition stages. My projects expose me to a variety of industries and products, and that diversity has made me realize that a VP of Engineering should focus on mitigating the technological and operational risks tied to product development.
Knowing the stumbling blocks and reducing their impact are two different things. The latter requires a detailed study of each product development challenge. In this article, I drill down into these pain points and share ways to address them.
Are you spending more than 20% of your time on bug fixes across the last three Sprints?
It isn’t easy to balance feature work and maintenance tasks when you have customers to support and a roadmap to deliver against. When you reach this point, your success depends on how well you can walk this tightrope.
“What is important is seldom urgent and what is urgent is seldom important.” -Dwight D. Eisenhower
Eisenhower’s Decision Matrix can help you prioritize your product backlog. It helps to dedicate a fixed percentage of each Sprint to feature work (Important) and bug fixes (Urgent). A structure of 80% feature work and 20% bug fixes works well as a starting point.
Whenever bug-fix time goes beyond 20%, and sooner or later it will, my advice is to run regular retrospectives and deploy corrective actions. Crossing that 20% mark means there is a problem in the Develop (branching), Testing (functional and non-functional), Deploy, or Rollout stage.
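As a sketch of that 20% check (the Sprint names and hours below are hypothetical), you can compute each Sprint's bug-fix share and flag the ones that should trigger a retrospective:

```python
# Illustrative sketch with hypothetical data: flag Sprints whose bug-fix
# time exceeds the 20% budget so they trigger a retrospective.

BUG_FIX_BUDGET = 0.20  # 20% of Sprint capacity reserved for bug fixes

def bug_fix_share(feature_hours, bug_fix_hours):
    """Fraction of Sprint time spent on bug fixes."""
    total = feature_hours + bug_fix_hours
    return bug_fix_hours / total if total else 0.0

def sprints_over_budget(sprints):
    """Return names of Sprints whose bug-fix share crossed the budget."""
    return [name for name, feat, bugs in sprints
            if bug_fix_share(feat, bugs) > BUG_FIX_BUDGET]

sprints = [
    ("Sprint 41", 160, 30),  # ~15.8% -> within budget
    ("Sprint 42", 140, 60),  # 30%    -> retrospective needed
    ("Sprint 43", 150, 50),  # 25%    -> retrospective needed
]
print(sprints_over_budget(sprints))  # ['Sprint 42', 'Sprint 43']
```

Tracking this number per Sprint turns the "are we over 20%?" question into a dashboard metric rather than a gut feeling.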
Do you have teams with a dedicated Product Owner and Design Engineer?
As you juggle existing customers, onboarding new customers, and new feature development, the right team structure helps you make progress on all fronts. Once you define your priorities and assign weightage, run parallel Sprints or Kanban boards for each. Mark the logical separation between modules and set up teams accordingly. For example, have a customer implementation team focus on onboarding new customers, an R&D team try out different POCs with a few customers, or a dedicated team per module, such as a Payments team that owns payment integration.
For all the teams to make progress, each front should have dedicated Product and Design personnel. This is one notable differentiator I have seen in fast-growing startups.
How good is your team structure for your architecture?
Technical architecture can act as a guiding principle for the logical separation of teams into parallel, independent workstreams. With a true microservices architecture, each team can own one microservice, with representation from engineering, product, and design.
For layered architectures, you can separate teams by layer or by expertise (frontend team, backend team, full-stack developers). Microservices architecture gives you the most flexibility in structuring teams to achieve the expected growth.
Is your branching strategy catering to customer-specific demands?
An ineffective branching strategy can jeopardize your development practice. There is a chance you start building custom features for your customers, and with each step in that direction you end up creating customer-specific branches. This quickly turns into a mess: the ensuing chaos makes your team focus more on maintaining what has been shipped than on building new features.
It’s not easy to settle on one effective branching strategy: there are many to choose from, and experts disagree about each of them. Based on my experience with all kinds of development teams, I recommend Git-Flow. Develop and Master are the main branches, with a defined strategy for supporting branches (feature, release, and hotfix branches, each with a defined time to live). While zeroing in on a branching strategy, also consider the need to support and maintain multiple versions (e.g., enterprise apps vs. consumer apps).
Do you know what type of automated test run would suit you best?
When one of the startups I worked with moved gradually from the early stage (a single customer) to the growth stage (more customers onboarding), the team started stretching to meet deliverables because of last-minute regression issues. With the QA team’s limited bandwidth, there was no way to detect these early. To get out of this deadlock, we started automating key regression scenarios within the available QA bandwidth, slowly freeing up QA engineers to test new features.
In the growth stage, regressions often obstruct the production flow, and it is the VP of Engineering’s role to prevent that. To keep production bugs minimal, it is important to test every line of code, including conditional statements, loops, and functions with multiple parameters. Since these tests must be repeated for every build, automation is the key to ensuring the product’s stability and surfacing issues before the code hits production.
Good test coverage (unit, integration, functional, and acceptance tests) and code coverage help prevent such issues. Exhaustive automation suites for smoke, integration, regression, and functional tests can help you achieve the expected velocity. Tools like JUnit, PyUnit, PyTest, and Jest can be used for unit tests. Similarly, plenty of functional automation tools are available; Selenium is among the most popular, covering about 27.48% of the segment.
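As a small illustration in the PyTest style (the `apply_discount` function is a hypothetical business rule, not from any real codebase), unit tests can cover the happy path, edge cases, and error branches; `pytest` auto-discovers the `test_*` functions, so they run on every build:

```python
# Hypothetical business rule used only to illustrate unit-test coverage.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, rejecting invalid input."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# pytest auto-discovers functions named test_*; run with `pytest -q`.
def test_no_discount():
    assert apply_discount(100.0, 0) == 100.0

def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_full_discount():
    assert apply_discount(19.99, 100) == 0.0

def test_invalid_price_rejected():
    # Error branches need coverage too, not just the happy path.
    try:
        apply_discount(-1, 10)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Tests like these are cheap to run in CI on every commit, which is what makes them suitable as an early-warning system against regressions.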
Code coverage tells you how much of your code is executed during both manual and automated testing. Popular code coverage tools include JCov, JaCoCo, and Coverage.py; in our projects we have used JaCoCo extensively.
The important thing is to have automation coverage good enough that the smoke, regression, and integration suites run on every build whenever necessary without consuming QA members’ time. Companies are well aware of its importance: a recent survey revealed that around 78% of companies rely on automated testing for regression. That alone suggests how important integrating the automation suite into your CI/CD pipeline is.
Are you using the right kind of performance monitoring?
Performance degradation is the next area that should concern you. It tends to bloat into a big problem if you take your eyes off it for long. When you monitor the right performance metrics for your application, you get a chance to take corrective action on time. As Peter Drucker rightly said, “If you can’t measure it, you can’t improve it.”
So, the first step is to set up the right tools to measure the right KPIs. Once that is in place, mandate performance and load tests for every release, so that any new code that degrades performance can be corrected in time.
If you have benchmarked your application by the number of concurrent transactions supported, you can work from that benchmark to define your architecture requirements when the need to raise it arises. Use performance testing tools such as JMeter, LoadUI, and LoadRunner to benchmark application performance, and then monitoring tools such as New Relic, AppDynamics, and Datadog to track performance against the benchmark.
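The benchmark-then-monitor loop can be sketched in miniature. This illustrative Python micro-benchmark uses a stubbed handler (the 5 ms sleep stands in for a real request) and is not a replacement for JMeter or LoadRunner, but it shows the shape of the measurement: fix a concurrency level, drive traffic, and report a latency percentile:

```python
# Illustrative micro-benchmark with a stubbed handler: measure p95
# latency at a fixed number of concurrent transactions.
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

def handle_transaction(_i):
    """Stand-in for a real request; replace with an HTTP call in practice."""
    start = time.perf_counter()
    time.sleep(0.005)  # simulate ~5 ms of service time
    return time.perf_counter() - start

CONCURRENCY = 20      # the benchmarked number of concurrent transactions
TOTAL_REQUESTS = 200

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(handle_transaction, range(TOTAL_REQUESTS)))

p95 = quantiles(latencies, n=100)[94]  # 95th-percentile latency
print(f"p95 latency: {p95 * 1000:.1f} ms over {TOTAL_REQUESTS} requests")
```

The number this produces is your benchmark; the monitoring tools then alert you whenever production drifts away from it.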
How secure is your application?
Entrepreneurs focus on adding product features while their startups are in the early stage, so security takes a backseat. Yet many big tech firms have experienced data breaches, and you should assume your application will be attacked at some point. That assumption makes security testing imperative. As a preventive measure, I recommend an annual penetration test, simulating attacks to identify vulnerabilities, along with regular scanning and auditing. The financial sector, for instance, is full of early adopters of detailed cybersecurity and penetration testing; as a result, the cyberattack success rate in that sector is only 5.3%.
There are multiple pen-test tools available; we have used Burp Suite.
Is your deployment strategy aligned with customer demographics?
Deciding on the right deployment strategy should be a flexible, contextual decision. If you are unsure how new code will behave in production, a canary deployment may serve you better. Consider parameters such as whether downtime is acceptable, whether all users should move to the new version at once, and whether the rollout should be staged by geography. Common deployment strategies include recreate, rolling, blue-green, canary, shadow, and A/B testing, and almost all deployment tools support them.
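As one illustration of the canary idea (a hand-rolled sketch, not the API of any particular deployment tool), traffic can be split on a stable hash of the user ID so that each user consistently lands on the same version while only a small percentage sees the canary:

```python
# Sketch of hash-based canary routing: a fixed percentage of users gets
# the new version, and the bucketing is deterministic per user.
import hashlib

CANARY_PERCENT = 10  # roll the new version out to ~10% of users first

def version_for(user_id: str) -> str:
    """Deterministically bucket a user into 'canary' or 'stable'."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] * 100 // 256  # stable bucket in [0, 99]
    return "canary" if bucket < CANARY_PERCENT else "stable"

routed = [version_for(f"user-{i}") for i in range(1000)]
print("canary share:", routed.count("canary") / len(routed))
```

Because the hash is stable, a user who sees the canary today still sees it tomorrow, which keeps their experience consistent while you watch the canary's error rates before widening the rollout.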
Are you using feature flags?
In agile, you want to do continuous deployment while remaining extra cautious about not accidentally releasing something to customers. Product managers want to roll out features to a limited set of users for A/B testing, and it is important to equip them with a tool to control that. Feature flags come to the rescue by decoupling your deployment from your rollout (release).
Rolling out alpha/beta features to production is always tricky. Whether you want to do A/B testing or trial beta features with a few customers, feature flags play a key role in enabling that. You can bank on tools like LaunchDarkly and Optimizely for feature flag management.
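A minimal in-memory sketch of the idea (managed tools like LaunchDarkly offer this and much more; the flag name and user IDs here are hypothetical) shows how both code paths ship in one deployment while the flag alone controls the release:

```python
# Minimal feature-flag sketch: the code is deployed "dark" and the flag
# decides which users actually see the new path.

FLAGS = {
    # flag name -> set of user IDs the feature is released to
    "new-checkout": {"beta-user-1", "beta-user-2"},
}

def is_enabled(flag: str, user_id: str) -> bool:
    """True if the feature is released to this user."""
    return user_id in FLAGS.get(flag, set())

def checkout(user_id: str) -> str:
    # Both paths are in production; the flag controls the rollout.
    if is_enabled("new-checkout", user_id):
        return "new checkout flow"
    return "legacy checkout flow"

print(checkout("beta-user-1"))   # new checkout flow
print(checkout("someone-else"))  # legacy checkout flow
```

A real flag service adds what this sketch lacks: percentage rollouts, targeting rules, and a kill switch that product managers can flip without a deploy.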
Do you know about the engagement of every feature you roll out?
If customers are not using features the way you expected, knowing why becomes important. Again, the first step is to measure feature usage and its impact on overall user engagement.
Mixpanel and Amplitude are two leading tools for understanding user behavior, user journeys, and engagement with the product.
Even slight delays in adding features to the product pose multiple risks: they can shake user loyalty, raise questions about return on investment, or hold you back from gaining a competitive edge. That is why you should dig deep into your execution process, check how well you align with the steps I have covered here, and reduce your engineering bottlenecks.
If you want to discuss these issues further, feel free to comment and share your views.