Quality With a Start-up Approach – Efficient and Cost Effective
Related Topics: QA & Testing
by Rani Acharya –
Let’s face it: startups face a myriad of challenges just to survive. They are fighting to stay afloat against steep competition from the corporate world, without the cash and backing that many established companies have. With little money to play with, there is no room for error, and investor expectations are often unrealistic: deliver fast, and deliver something customers will actually use. We all know the customer is king. Customers must be satisfied with your product, and winning their trust is essential to how your company will grow. To win that trust and loyalty, startups must have the quality to wow them. But being cash-strapped and low on resources, quality is often an afterthought. Our research suggests that companies that invest in quality assurance are more likely to reach the next level of maturity and have a better chance of success.
Startups have their own way of working: the same person plays the multiple roles of developer, unit tester and functional tester. So what is the right way to make sure quality is built into the product without it costing an arm and a leg? Here is a three-pronged approach:
- Collaborate, Collaborate, Collaborate. Software bugs don’t come from bad code alone. Most defects happen because of an inadequate understanding of the requirements, and those defects are the worst kind, because you designed, coded and tested something the customers didn’t want. The earlier you find a defect, the cheaper it is to fix. Collaborate with your customers at every step; I can’t stress this enough. Ask what the customer needs, demo your product, and get early feedback. Reach the goal together. Invest time in static testing: talk to customers to find out what they need and convert it clearly into technical requirements. The second level of collaboration comes in the form of early feedback. Share with your customers what you have developed at the end of each sprint. Your functionality might be working great, but it might not be the function the customer wanted; being fit for use is the customer’s definition of quality. With limited budget and time, you want to keep your Cost of Quality (COQ) low. So what is Cost of Quality? As defined by the Quality Assurance Institute (QAI), Cost of Quality is the money spent beyond what it would cost to build a product right the first time. If every developer could produce defect-free code the first time, the COQ would be zero. But since we don’t live in a perfect world, there are costs to producing a defect-free product. These are the three COQ categories:
- Prevention Cost – Money spent to prevent errors and do the job right the first time. These costs are associated with requirement reviews, quality planning and building the quality system.
- Appraisal Cost – Money spent to review completed products against requirements. These costs include verification and validation of code.
- Failure Cost – The cost associated with defective products. These costs are generated by failures, such as the cost of operating buggy software, the damage incurred by using it and the unavailability of the application.
Studies show that COQ is approximately 50% of the total cost of building a product, and of that 50%, 40 percentage points are due to failure. Failure costs arrive in the last phases of the SDLC and are very expensive. The more we can shift testing to the left, the cheaper it is to fix a defect and the lower our COQ stays. Continuous, early collaboration with customers will keep your failure costs low.
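The rule of thumb above can be made concrete with a small sketch. The function below is purely illustrative: it splits a hypothetical project budget using the approximate ratios cited (COQ at 50% of total cost, with failure cost accounting for 40 of those 50 points); the name and schema are assumptions, not an established formula.

```python
def coq_breakdown(total_cost: float) -> dict:
    """Split a project's cost using the approximate ratios cited above."""
    coq = 0.50 * total_cost       # total Cost of Quality (~50% of build cost)
    failure = 0.40 * total_cost   # failure cost: the expensive, late-found part
    prevention_and_appraisal = coq - failure  # the remaining ~10 points
    return {
        "cost_of_quality": coq,
        "failure_cost": failure,
        "prevention_and_appraisal_cost": prevention_and_appraisal,
    }

print(coq_breakdown(100_000))
```

On a $100k build, roughly $40k would be burned on failures found late, which is exactly the slice that early collaboration and shift-left testing shrink.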
- Risk-based automated testing. Automation is extremely important in agile delivery. Automated tests bring consistency to your testing and reduce the errors that manual testers introduce into the process. With automated continuous testing, defects are found early in the cycle, and developers can find and fix issues throughout the SDLC. It’s also easier to scale with automation; with newer AI tools, you can increase your coverage rapidly. But do we have the time to automate everything? Before talking about risk-based automated testing, let’s define what risk means and how to assess it. Risk is the probability that an undesirable event will occur. There are three things you need to consider when managing risk:
- The event that could occur – Risk
- Probability that the event will happen
- The impact that the event will cause
Risk = Probability*Impact
Before you test or automate, identify your risks, decide which risks you care about, and estimate the frequency of occurrence for each. The loss due to a risk is amplified by the number of times it happens.
Based on the risk, determine which functionalities are most important to your customers. Classify your test cases into three buckets: high risk, medium risk and low risk. For example, in a fintech application, any functionality that deals with financial transactions, data confidentiality or regulatory compliance needs the utmost attention, while UI components might be the least critical. For a retail business, by contrast, the UI will be the most critical functionality. Risk, probability and impact vary greatly with the industry you are developing the software for.
Once the test cases have been classified, automate the riskiest category first; if time permits, automate the less critical ones. If you have time and money to automate only the riskiest test cases, run those with every build. You will find the defects you care about most early and reliably.
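The Risk = Probability * Impact scoring and the three-bucket classification above can be sketched in a few lines. Everything here is illustrative: the test names, the 0–1 probability scale, the impact units, and the bucket thresholds are assumptions you would tune per product and industry.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    probability: float  # likelihood of the failure occurring, 0..1
    impact: float       # cost of the failure, in relative units

    @property
    def risk(self) -> float:
        # Risk = Probability * Impact, as defined above
        return self.probability * self.impact

def classify(tc: TestCase, high: float = 6.0, medium: float = 3.0) -> str:
    """Place a test case in one of the three risk buckets (thresholds illustrative)."""
    if tc.risk >= high:
        return "high"
    if tc.risk >= medium:
        return "medium"
    return "low"

suite = [
    TestCase("payment_transfer", probability=0.7, impact=10),  # fintech-critical
    TestCase("profile_ui_theme", probability=0.4, impact=2),   # cosmetic
    TestCase("report_export",    probability=0.5, impact=8),
]

# Automate (and run on every build) the riskiest tests first.
for tc in sorted(suite, key=lambda t: t.risk, reverse=True):
    print(f"{tc.name}: risk={tc.risk:.1f} -> {classify(tc)}")
```

Sorting the suite by score gives the automation order: the financial-transaction test lands in the high bucket and gets automated first, the cosmetic UI test last.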
- Don’t just automate the tests, automate the end-to-end test process. Automating only the test cases won’t give you the fast turnaround you are looking for. The fewer test activities you have to track, and the less manual work you have to do to execute tests or generate quality metrics, the better off you are. Automate any task that is repetitive, that doesn’t provide value, or that simply can be automated. Here are some areas where you can automate the test process:
- Integration between the tools that document requirements and the tools that document and execute tests. Most organizations use separate tools for the two, and it becomes a tester’s nightmare to establish a traceability matrix: have we tested all the user stories, have all the tests for a user story passed, and so on. Connectors between these tools, where an update in one automatically updates the status in the other, are tremendous time savers.
- Integrating the automated tests into the CI/CD pipeline. Instead of manually running the automated tests, create a job in your CI/CD pipeline to fire them off. There are tools that can run your tests in parallel, saving you time and effort. Strive for zero-touch execution and reporting.
- Integration between the automated tests in your CI/CD pipeline and your reporting tool. You will need to measure how good your quality is and where improvements are needed: how good the build is, what has broken since the last build, how long it took to build and execute the tests. Instead of collating these results manually, pull the metrics through a reporting tool. The result is not only accurate but available in real time, 24x7.
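A zero-touch report like the one described above boils down to aggregating raw test results automatically. The sketch below assumes a simple, hypothetical schema (test name mapped to pass/fail for the current and previous builds) and computes the pass rate, the newly broken tests, and the build duration; a real pipeline would pull this data from its CI tool’s API instead.

```python
def build_report(current: dict, previous: dict, duration_s: float) -> dict:
    """Summarize a CI run: pass rate, newly broken tests, build duration.

    `current` / `previous` map test name -> passed? (illustrative schema).
    """
    passed = sum(current.values())
    newly_broken = sorted(
        name for name, ok in current.items()
        if not ok and previous.get(name, True)  # passed before, failing now
    )
    return {
        "pass_rate": passed / len(current),
        "newly_broken": newly_broken,
        "duration_s": duration_s,
    }

prev = {"login": True, "checkout": True, "search": True}
curr = {"login": True, "checkout": False, "search": True}
print(build_report(curr, prev, duration_s=312.4))
```

Feeding such a summary into a dashboard after every build is what turns “what broke since last time?” from a manual chore into an always-current metric.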
Okay, now you have implemented the three-pronged approach above to build quality into your product, but how do you know it’s effective? Organizations must measure the effectiveness of their test processes and identify what’s working and what needs improvement. There are thousands of things we could measure, so how do you know what to measure? Find out what your company’s goal is; business success metrics drive software improvements. Measure only what matters. Here are a few metrics you can track during the development phase to ensure quality is being built into the product:
- Unit test coverage – Unit test coverage gives a rough approximation of how well the code has been tested by developers. This is the first, and cheapest, step in finding defects.
- % automated test coverage – This metric reports the percentage of test coverage achieved by automated testing, as compared to manual testing. It helps quantify the progress of test automation initiatives, such as in-sprint automation. The pressure to ship software quickly can cause teams to write fewer tests for user stories, so it’s important to have visibility into what is being tested and how quickly it can be tested with automated scripts.
- Regression defects – Finding defects while code is being developed and tested is part of the SDLC. But breaking existing code while adding new code is dangerous; it demonstrates the fragility of the codebase and can serve as a leading indicator of product quality once it is deployed to production.
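The automation-coverage metric above is a simple ratio, but it is worth computing the same way every build rather than by hand. A minimal sketch, with illustrative test counts:

```python
def automated_coverage_pct(automated_tests: int, total_tests: int) -> float:
    """Share of the test suite that runs without manual effort, as a percentage."""
    if total_tests == 0:
        return 0.0  # avoid division by zero on an empty suite
    return 100.0 * automated_tests / total_tests

# Illustrative numbers: 180 of 240 regression tests automated.
print(f"{automated_coverage_pct(180, 240):.1f}% automated")
```

Tracking this number sprint over sprint shows whether in-sprint automation is keeping pace with new user stories or quietly falling behind.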
Here are a few metrics that are important to consider after the product has been deployed to production:
- Defects found in production – Defect escapes are defects that were not found during testing and were reported by customers. What is the trend of these defects being opened and closed over a time period, what is their severity, how is the customer affected, is there a workaround, is the customer losing revenue? These are important measures of customer satisfaction.
- Application crashes – How many times the application crashes in a given timeframe. This is closely related to MTTR and MTBF.
- Mean time to recover/repair (MTTR) – MTTR measures how quickly broken software is fixed, tested and deployed to production again.
- Mean time between failures (MTBF) – MTBF measures the average time from one software failure to the next, and determines the reliability of your application. Together, MTTR and MTBF are overall measures of your software’s performance in the production environment.
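Both metrics fall out of an incident log directly. The sketch below uses made-up timestamps and one common convention (MTTR as average outage duration; MTBF as average uptime from one restore to the next failure); definitions vary, so pick one and apply it consistently.

```python
from datetime import datetime, timedelta

# Illustrative incident log: (failure_time, restored_time) pairs.
incidents = [
    (datetime(2024, 1, 3, 9, 0),   datetime(2024, 1, 3, 11, 0)),
    (datetime(2024, 1, 10, 14, 0), datetime(2024, 1, 10, 15, 0)),
    (datetime(2024, 1, 24, 8, 0),  datetime(2024, 1, 24, 9, 30)),
]

def mttr(log) -> timedelta:
    """Mean time to repair: average outage duration."""
    return sum((end - start for start, end in log), timedelta()) / len(log)

def mtbf(log) -> timedelta:
    """Mean time between failures: average gap from one restore to the next failure."""
    gaps = [log[i + 1][0] - log[i][1] for i in range(len(log) - 1)]
    return sum(gaps, timedelta()) / len(gaps)

print("MTTR:", mttr(incidents))  # outages of 2h, 1h, 1.5h -> 1h30m
print("MTBF:", mtbf(incidents))
```

A rising MTBF with a falling MTTR is the trend you want: failures are rarer, and recovery is faster.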
A start-up cannot be quality-obsessed in its earliest stages. The product must be launched as an MVP and iterated frequently to understand the pulse of the market. But the moment there is customer acceptance, start-ups need to raise their quality quotient significantly. That might mean longer release cycles and some short-term customer anxiety; the trade-off is necessary to sustain the quick, reliable turnaround that is the lifeline of any start-up.