by Joel Pascua
The challenge of delivering high-quality code is as old as programming itself, and as applications evolve to meet complex business needs, consume and process more data than ever before, and leverage every advance that new and more powerful technology has to offer, the challenge has only grown. Traditionally, validation is left in the hands of manual testers who work through screen navigation and/or backend database queries to confirm the code behaves as expected. The past few years, however, have seen an accelerated shift away from this traditional way of testing and toward automation, which has itself evolved well beyond record and playback into a complex, multi-faceted discipline.
Automated testing began with unit testing and simple User Interface (UI) record-and-playback tools that mimicked a user's clicks and keystrokes, with limited flexibility to test functionality. Then Selenium and WebDriver frameworks revolutionized the discipline by introducing write-once, run-anywhere test scripts.
However, as technology innovation pressed forward, automation's shortcomings surfaced: legacy patterns added complexity to the code, application changes broke frameworks in ways that could not be overcome, and the effectiveness of automation eroded under ever shorter release cycles.
Enter Robotic Process Automation (RPA) combined with Artificial Intelligence (AI). These two complementary technologies are the new disruptors taking software quality to a whole new level: they enable more effective testing of the most complex, multi-system entry flows, analyze testing results, and generate recommendations for where manual effort should be focused. Automation is no longer limited to testing the UI or databases; it now covers a variety of areas, including interfaces and output validation, and even reaches beyond testing into the development space.
In the early years of automation, the goal was to automate the regression test suite based on manual test cases. This is not the case anymore. Automation capabilities have expanded to include test data creation, output validation, test data management, and quality risk analysis.
RPA tools have evolved from requiring a skilled developer to program expected behaviors to low/no-code solutions that can be managed by a semi-technical business analyst or tester. The new solutions enable a faster "idea to implementation" cycle with fewer delays and less rework, since the person who will use the tool is the person programming it.
One of the biggest challenges in testing is the creation of "real-world" test data, given the volume, variety, and combinations required to cover the multitude of application scenarios. Couple this with the now-standard need to test with data from external systems, and the challenge increases many times over. For example, claims data may require a make, model, and matching VIN for a customer with three accidents of a specific type drawn from a sample DMV record, plus a unique accident location. RPA and AI can help address these challenges. RPA can automate the data creation process by looking up data from multiple external systems, adding a randomization event, and placing the results into a standard set of records. AI algorithms can then consolidate and adjust the data to provide the variety needed to truly exercise the system, instead of replaying the same "top 40" data sets from years past, which lets defects go unidentified.
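As a minimal sketch of what such an RPA-driven data build might look like, the Python below pulls records from two hypothetical source extracts, injects a randomization event, and emits a combined claim record. The file names, field layout, and locations are illustrative assumptions, not any vendor's actual API.

```python
import csv
import random

def load_records(path):
    """Read a source-system extract (e.g., a sample DMV export) into dictionaries."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def build_claim_record(vehicles, accidents):
    """Combine vehicle and accident data with a randomization event so
    repeated runs exercise different data combinations."""
    vehicle = random.choice(vehicles)        # make, model, and VIN stay consistent
    history = random.sample(accidents, k=3)  # e.g., three accidents of a specific type
    return {
        "vin": vehicle["vin"],
        "make": vehicle["make"],
        "model": vehicle["model"],
        "accidents": [a["accident_type"] for a in history],
        "location": random.choice(["I-70 MM 312", "Main St & 5th", "Rural Route 9"]),
    }

if __name__ == "__main__":
    vehicles = load_records("dmv_vehicles.csv")       # hypothetical DMV extract
    accidents = load_records("accident_history.csv")  # hypothetical claims extract
    print(build_claim_record(vehicles, accidents))
```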
At the core of automation frameworks remains the goal of replicating manual testing effort: entering data and validating elements on screens. But as automation capabilities have grown, so has the reach of those testing efforts. Automation can now perform detailed and intricate testing of interfaces between systems by replicating the response from the other system, achieving a more comprehensive validation across the UI, interfaces, and the database.
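One common way to "replicate the response from the other system" is to stub the interface in the test itself. A minimal sketch using Python's standard unittest.mock, where the policy_app module and its rating_client are assumed stand-ins for the system under test:

```python
import unittest
from unittest.mock import patch

# Hypothetical system under test: calls an external rating service.
from policy_app import request_quote

class QuoteInterfaceTest(unittest.TestCase):
    @patch("policy_app.rating_client.get_rate")
    def test_quote_uses_rate_from_external_system(self, mock_get_rate):
        # Replicate the partner system's response without ever calling it.
        mock_get_rate.return_value = {"premium": 1250.00, "tier": "standard"}

        quote = request_quote(vin="1HGCM82633A004352", state="KS")

        # Validate our side of the interface contract.
        mock_get_rate.assert_called_once()
        self.assertEqual(quote["premium"], 1250.00)

if __name__ == "__main__":
    unittest.main()
```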
The days of fragile scripts and constant maintenance have given way to frameworks and componentized scripting. Instead of one monolithic end-to-end script, scenarios are defined as components that test each step's variability. For example, a screen may be a component that uses framework-based functions to perform various validations, such as a lookup validation. Instead of hard-coding script data, the script draws all input parameters from an external data source such as a spreadsheet or database. Componentizing the script makes it easy to adjust the automation as the screen is updated or changed.
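A sketch of that data-driven pattern using pytest parametrization, where a reusable screen component is fed from an external spreadsheet-style source; the CSV name and the LoginScreen component are assumptions for illustration:

```python
import csv
import pytest

def load_cases(path="login_cases.csv"):
    """External data source: each row is one input/expected-result pair."""
    with open(path, newline="") as f:
        return [(r["username"], r["password"], r["expected"]) for r in csv.DictReader(f)]

class LoginScreen:
    """A reusable 'screen component' wrapping framework-based validations."""
    def submit(self, username, password):
        # A real framework would drive the UI here; stubbed for the sketch.
        return "ok" if username and password else "error"

@pytest.mark.parametrize("username,password,expected", load_cases())
def test_login_screen(username, password, expected):
    assert LoginScreen().submit(username, password) == expected
```

When the screen changes, only the component is updated; the data rows and test cases remain untouched.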
Nearly every system receives data from upstream systems at the start of the business process chain or sends data downstream for other systems to consume and export. Testing these hand-offs is generally limited by the complexity of moving data through test environments and by limited knowledge of the other systems' capabilities. Through RPA, it is possible to script the process of transferring data, kicking off batch jobs, and testing external systems from the same set of scripts that tested the newly developed system. RPA is not limited to screen testing; it can also perform lookups on report output to verify that the data entered on system 1 as "A" is displayed on system 2's report as "1".
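A sketch of that chained flow, with the file paths, batch command, and the "A" to "1" mapping check all illustrative assumptions rather than real commands:

```python
import shutil
import subprocess

def transfer_extract(src="system1_outbox/policies.dat",
                     dst="system2_inbox/policies.dat"):
    """Move the data file between environments, as an operator would."""
    shutil.copy(src, dst)

def run_batch_job():
    """Kick off the downstream system's import batch (command is hypothetical)."""
    subprocess.run(["batch_runner", "--job", "policy_import"], check=True)

def verify_report_mapping(report_path="system2_reports/daily.txt"):
    """Confirm the value entered on system 1 as 'A' shows on system 2's report as '1'."""
    with open(report_path) as f:
        report = f.read()
    assert "RISK_CODE: 1" in report, "Expected translated code '1' not found"

if __name__ == "__main__":
    transfer_extract()
    run_batch_job()
    verify_report_mapping()
```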
RPA combined with AI can analyze unstructured data: reading social media postings and documents, and even analyzing speech patterns to determine intent. One of the most tedious and often under-tested portions of core policy/billing/claims systems is output validation. A large number of data elements and variables must be tested across declarations pages, claim letters, and billing invoices. An RPA framework can read the output pages of text and images and confirm the output matches the expected results. Once the base set of test cases has been built, it becomes possible to generate thousands of tests covering every variation of a form, which provides a significant lift, especially on countrywide forms with a large number of text variables.
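A minimal output-validation sketch, assuming the third-party pdfplumber package for text extraction; the form path and expected field values are illustrative:

```python
import pdfplumber  # third-party; assumed available

EXPECTED = {
    "Policy Number": "POL-2024-00183",
    "Named Insured": "Jane Q. Sample",
    "Premium": "$1,250.00",
}

def validate_declarations(path="declarations_page.pdf"):
    """Read the generated output document and confirm each expected value appears."""
    with pdfplumber.open(path) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)
    missing = {k: v for k, v in EXPECTED.items() if v not in text}
    assert not missing, f"Values missing from output: {missing}"

if __name__ == "__main__":
    validate_declarations()
```

Once this base case works, the EXPECTED table can be generated programmatically per form variation, which is where the "thousands of tests" scale comes from.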
With low-code capabilities, testers can create spot bots to bridge gaps in manual testing. For example, suppose a tester is working on a new policy administration system, and the first five application pages have already been shown to meet requirements through prior manual testing. The tester could create a spot bot that enters the data for those pages, letting the tester focus on the balance of the application process. The spot bot is not a formalized testing effort but an aid that allows the tester to complete the work more quickly.
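A spot-bot sketch using Selenium WebDriver; the URL, field IDs, and page data are assumptions about the hypothetical policy application:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Data for the already-verified application pages, one dict per page.
PAGES = [
    {"first_name": "Jane", "last_name": "Sample"},
    {"street": "123 Main St", "city": "Topeka", "zip": "66601"},
    # ... remaining verified pages follow the same shape
]

def run_spot_bot():
    """Fast-forward through verified pages so the tester starts where testing resumes."""
    driver = webdriver.Chrome()
    driver.get("https://policy-app.example.com/apply")  # hypothetical URL
    for page_data in PAGES:
        for field_id, value in page_data.items():
            driver.find_element(By.ID, field_id).send_keys(value)
        driver.find_element(By.ID, "next").click()
    # Hand control back to the tester; deliberately do not quit the browser.

if __name__ == "__main__":
    run_spot_bot()
```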
Automation testing augmented by RPA and AI has proven to be a powerful addition to the testing competency. While most efforts to date have focused on improving the speed and accuracy of manual testing, many new and creative implementations of automation are just over the horizon.
Conventional test cases focus on the known and expected behaviors of the system for a given scenario. To truly test the system and ensure production quality, exploratory testing must be performed: using real-world-like behaviors, taking actions outside of reasonable expectations, and trying completely random entries. Usually this happens toward the end of a testing cycle, run by the test team for a controlled duration, since the team quickly falls into repeating the same behaviors. AI shifts this paradigm: through user screen tracking and heat-map development, it can learn what real-world users actually do on the system and which areas they focus on. Based on this information, AI can indicate where to focus, what types of testing to run, and which errors end-users actually see, all of which feed exploratory test cases. Through this type of testing, the implementation team can quickly find areas of weakness and process breakdowns that could cause downtime or customer dissatisfaction.
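One simple way to turn such usage data into exploratory tests is weighted random exploration: sample actions in proportion to how often real users perform them, plus a small chance of something users never do. A sketch with hypothetical heat-map counts; a production version would pull these from real analytics:

```python
import random

# Hypothetical click counts harvested from production heat maps.
HEATMAP = {"search": 5200, "add_driver": 1800, "change_coverage": 950, "cancel": 40}
UNEXPECTED = ["double_submit", "back_mid_save", "paste_emoji_in_name"]

def next_action(explore_rate=0.1):
    """Mostly follow real-user behavior; occasionally do something users never do."""
    if random.random() < explore_rate:
        return random.choice(UNEXPECTED)
    actions, weights = zip(*HEATMAP.items())
    return random.choices(actions, weights=weights, k=1)[0]

if __name__ == "__main__":
    session = [next_action() for _ in range(20)]
    print(session)  # feed this sequence to the UI driver of choice
```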
Misspellings, grammar issues, and other UI defects cast the development team's hard work on incredibly complex technical problems in a bad light. A user will often reason: if they can't spell a word correctly, how can I trust the rest of the system works? AI augmented with Natural Language Processing (NLP) can help identify such UI issues. The AI can scan all of the text, error messages, and other communications to ensure the same tone, wording, and style are used throughout the application. It can also run spell checking, verify proper word usage, and confirm consistency of messaging throughout each form.
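A sketch of that scan using the third-party pyspellchecker package (an assumption; any dictionary-backed checker works) over a hypothetical export of UI strings keyed by message ID:

```python
import re
from spellchecker import SpellChecker  # third-party pyspellchecker; assumed

UI_STRINGS = {
    "error.save": "Unabel to save your changes. Please try again.",
    "error.load": "Unable to load your policy. Please try again later.",
}

def scan_ui_text(strings):
    """Flag likely misspellings in every UI message, keyed by message ID."""
    checker = SpellChecker()
    issues = {}
    for key, text in strings.items():
        words = re.findall(r"[A-Za-z']+", text.lower())
        misspelled = checker.unknown(words)
        if misspelled:
            issues[key] = sorted(misspelled)
    return issues

if __name__ == "__main__":
    print(scan_ui_text(UI_STRINGS))  # expect: {'error.save': ['unabel']}
```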
Using AI's ability to train on and recognize patterns, it is possible to build test cases for similar functionality in a completely automated fashion. When rolling out a policy administration system, states are generally grouped into like categories: the Midwest is one group, NY/NJ/PA is another, and so on. AI can learn from the testing team's manual and automation efforts for one state in a group, such as KS from the Midwest. Applying that learning to another state such as Nebraska then becomes straightforward, since each state has only minor, if any, variation in screens and output apart from rating. If AI finds a difference in system behavior, it can flag it for a tester or BA to review and direct how to test the difference.
With AI augmentation, automation frameworks have begun to self-heal test scripts. When screens or system behaviors change, the tester works with the code to make the necessary updates. As more updates are made, AI detects familiar patterns and starts making recommendations based on prior fixes. This lets testers focus on new features and functions instead of maintaining existing automation.
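At its simplest, self-healing starts with fallback locators: when the primary selector breaks, the script tries known-good alternatives and records which one worked so a repair can be recommended. A Selenium-flavored sketch; the element names and locator lists are illustrative:

```python
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# Ordered locator candidates for one logical element, current-best first.
LOCATORS = {
    "submit_button": [
        (By.ID, "submit"),
        (By.NAME, "btnSubmit"),  # older attribute, kept as a fallback
        (By.XPATH, "//button[text()='Submit']"),
    ],
}

def find_with_healing(driver, logical_name):
    """Try each candidate locator; log any fallback use as a healing recommendation."""
    for strategy, value in LOCATORS[logical_name]:
        try:
            element = driver.find_element(strategy, value)
            if (strategy, value) != LOCATORS[logical_name][0]:
                print(f"HEAL: promote {strategy}={value!r} for '{logical_name}'")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched '{logical_name}'")
```

An AI layer builds on this by learning which replacement locators tend to work after each kind of screen change, rather than relying on a hand-maintained list.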
Senior executives often ask how many test cases are being run, and the higher the number, the more comfortable they feel that testing is adequate. Through techniques like pairwise testing, more coverage is actually achieved with fewer, more targeted test cases focused on valid, proven variable combinations instead of perceived coverage. AI can now analyze risk areas in the codebase by combining test-coverage results from various types of tests (unit, functional, performance) with the changes made to the code. In addition, redundant tests can be called out and eliminated from the test suite, shortening test execution.
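To make the coverage math concrete: three parameters with 4, 3, and 3 values yield 36 exhaustive combinations, yet every two-value pairing can be covered in roughly a dozen cases. A sketch using the third-party allpairspy package (an assumption; any pairwise generator works), with insurance-flavored parameters chosen for illustration:

```python
from allpairspy import AllPairs  # third-party; assumed available

parameters = [
    ["KS", "NE", "NY", "NJ"],           # state
    ["auto", "home", "umbrella"],       # line of business
    ["monthly", "quarterly", "annual"], # pay plan
]

# Exhaustive testing: 4 * 3 * 3 = 36 cases. Pairwise covers every
# two-value combination in far fewer, more targeted cases.
for i, case in enumerate(AllPairs(parameters), start=1):
    print(f"case {i}: {case}")
```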
Defect classification can be improved by AI learning from and analyzing thousands of test runs to immediately identify whether an error was caused by the application, the environment, or outdated automation. Defects can also be ranked by severity based on system usage and the impact on behaviors AI has learned from users.
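A toy version of that triage, training a text classifier on labeled failure messages with scikit-learn; the handful of messages and labels here are illustrative stand-ins for the thousands of historical runs a real model would learn from:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative labeled history: failure message -> root-cause class.
HISTORY = [
    ("AssertionError: premium 1250 != 1300", "application"),
    ("ConnectionError: test-db02 unreachable", "environment"),
    ("NoSuchElementException: id=submit", "outdated_automation"),
    ("HTTP 500 from rating service", "application"),
    ("TimeoutError: VPN tunnel dropped", "environment"),
    ("StaleElementReferenceException on login page", "outdated_automation"),
]

messages, labels = zip(*HISTORY)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# New failures get an immediate root-cause guess before anyone reads a log.
print(model.predict(["ConnectionError: test-db02 down again"]))  # likely 'environment'
```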
Applications continue to grow more sophisticated, drawing data from multiple systems via microservices and APIs. The challenge is monitoring production at a micro level and providing meaningful alerts so production support teams can quickly resolve issues that could impact uptime. Through RPA, small, controlled tests and monitoring can run continuously, verifying service levels without creating production records.
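A minimal synthetic-monitoring probe using the requests library: it exercises read-only endpoints (so no production records are created) and raises an alert when the service level slips. The URLs and latency threshold are assumptions:

```python
import time
import requests  # third-party; assumed available

# Read-only endpoints: safe to probe without creating production records.
PROBES = [
    "https://api.example.com/health",
    "https://api.example.com/quotes/sample-readonly",
]
LATENCY_SLA_SECONDS = 2.0

def alert(url, status, detail):
    """Stub: route to the production-support channel of choice."""
    print(f"ALERT {url}: status={status} detail={detail}")

def run_probes():
    for url in PROBES:
        start = time.monotonic()
        try:
            resp = requests.get(url, timeout=10)
            elapsed = time.monotonic() - start
            if resp.status_code != 200 or elapsed > LATENCY_SLA_SECONDS:
                alert(url, resp.status_code, f"{elapsed:.2f}s")
        except requests.RequestException as exc:
            alert(url, "unreachable", str(exc))

if __name__ == "__main__":
    run_probes()
```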
Testing with manual test scripts has quickly become archaic, replaced by augmenting technologies that enable faster, more efficient, and higher-quality output. Testing can now run around the clock, with a single team performing manual testing by day and automated testing by night. Soon RPA and AI will let testers be even more laser-focused, identifying deviations from expected system behavior in minutes instead of days. How will you harness automation testing to improve your success?