Validata Blog: Talk AI-powered Testing

The defect prioritization and analysis stage / The Laser-focused AI for bug hunting and defect classification post series

This is the second post of the series, and it describes how ValidatAI prioritizes and analyses defects to automatically identify their root cause.

Usually the heavy lifting of root cause analysis falls on operations teams when problems appear in production. When ITOps reports issues back to development without traceability to the source code, it often provokes a "works on my machine" response from development, especially when teams are incentivized on closed tickets and the number of check-ins.

We may often believe that a problem is resolved when, in reality, we have only addressed a symptom and not the actual root cause. ValidatAI can identify the root causes of defects, so that permanent solutions can be found and proper assignments and escalations can be made. Its main objective is to identify, assess and prioritize the problematic areas of the project at an early stage.

Unlike human intuition, which is not quantifiable, a machine learning algorithm can automatically analyse defect patterns to find the root cause and assign the defect to the correct team, reducing defect turnaround time and improving productivity.

ValidatAI looks for common patterns across defects and their associated information and identifies the most likely root-cause candidates through deterministic causation, proposing potential reasons that cause a test case to fail and reducing the time it takes to analyse the root cause of a defect. This approach is known as model-based diagnosis.
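As a rough illustration (not ValidatAI's actual implementation), pattern matching over defect attributes can be pictured as counting how many defect reports share each attribute value and proposing the most frequently shared values as root-cause candidates. The field names and values below are hypothetical.

```python
from collections import Counter

# Hypothetical defect reports; the field names and values are illustrative only.
defects = [
    {"component": "payments", "error": "NullPointerException", "env": "staging"},
    {"component": "payments", "error": "NullPointerException", "env": "prod"},
    {"component": "payments", "error": "TimeoutError",         "env": "prod"},
    {"component": "login",    "error": "TimeoutError",         "env": "prod"},
]

# Count how often each attribute value appears across the failing reports and
# propose the most frequently shared values as root-cause candidates.
candidates = Counter(
    f"{key}={value}" for report in defects for key, value in report.items()
)
print(candidates.most_common(3))
```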

Defect Prioritization

After the system has identified the clusters, it prioritises them based on the risk they impose and proposes an order in which they should be addressed (value at risk). Given a new bug report, the system can predict which bugs may be caused by it, assisting with bug assignment and prioritisation. Common and frequent sequences of reports are modelled, and the probability of defects related to a feature can be estimated from past observations, so we can prioritise what to test.
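As a minimal sketch of the value-at-risk idea, the probability of a defect per feature can be estimated from historical defect counts and combined with an assumed business-impact score. The feature names, counts and impact scores below are made up for illustration.

```python
# Hypothetical historical data: feature -> (defects observed, test runs observed).
history = {
    "checkout":  (12, 300),
    "login":     (3, 500),
    "reporting": (7, 150),
}
impact = {"checkout": 9, "login": 6, "reporting": 4}  # assumed business-impact scores

# Estimate defect probability per feature from past observations and
# rank features by a simple "value at risk" score = probability x impact.
risk = {}
for feature, (defects_seen, runs) in history.items():
    probability = defects_seen / runs
    risk[feature] = probability * impact[feature]

for feature, score in sorted(risk.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature:10s} risk={score:.3f}")
```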

After we have extracted "labels" for each defect report, we reduce the number of entries so that we mostly analyse and compare distinct pair-wise dependencies, extending to three or four elements at a time only in extreme cases, which reduces the overall execution time.
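One way to picture this reduction to pair-wise dependencies (our own simplification, not the exact ValidatAI algorithm) is to count only pairs of labels that co-occur in defect reports, rather than enumerating every possible combination:

```python
from collections import Counter
from itertools import combinations

# Hypothetical label sets extracted from defect reports.
report_labels = [
    {"ui", "timeout", "checkout"},
    {"timeout", "checkout", "db"},
    {"ui", "checkout"},
    {"db", "timeout"},
]

# Keep only pair-wise dependencies; larger combinations would be examined
# only when a pair is already frequent, keeping the analysis tractable.
pair_counts = Counter()
for labels in report_labels:
    pair_counts.update(combinations(sorted(labels), 2))

MIN_SUPPORT = 2  # assumed threshold for a pair to be considered a dependency
frequent_pairs = {pair: count for pair, count in pair_counts.items() if count >= MIN_SUPPORT}
print(frequent_pairs)
```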

We group defects together so that defects in the same cluster are more similar to each other than to those in other clusters. A prioritised list of defects is then created, based on the defect portfolio, the total risk and any restrictions imposed by QA. Interpretability has been an important area of research from the beginning, since deep learning models can achieve high accuracy at the expense of high abstraction (the accuracy-versus-interpretability problem). The model needs not only its initial training data, but also continuous retraining with data refreshed in real time.
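For the clustering step, a minimal sketch using off-the-shelf tools (TF-IDF vectors plus k-means from scikit-learn; ValidatAI's actual models are not described here) could look like this, with invented defect summaries:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical defect summaries; in practice these come from the defect tracker.
summaries = [
    "Checkout page times out when applying a voucher",
    "Voucher validation hangs on checkout",
    "Login fails with invalid session token",
    "Session token rejected after password reset",
]

# Vectorize the free text and group similar reports into clusters.
vectors = TfidfVectorizer(stop_words="english").fit_transform(summaries)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for summary, cluster in zip(summaries, labels):
    print(cluster, summary)
```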

Generating Workflows with High Test Coverage – Bug Analysis Model

The AI system prioritizes coverage across the model in an intelligent way, directing the test flows and data values to maximize coverage. It interprets the requirements and instantly generates the minimal number of test cases needed to maximize risk coverage and defect detection, finding optimal test sets. We need an amalgamation of algorithms into a holistic model to get a true sense of coverage.
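Selecting a small test set that maximizes risk coverage can be approximated with a greedy set-cover heuristic, shown below as a sketch with hypothetical test names and risk identifiers:

```python
# Hypothetical mapping from candidate test cases to the risk items they cover.
coverage = {
    "test_checkout_happy_path": {"R1", "R2"},
    "test_checkout_voucher":    {"R2", "R3", "R4"},
    "test_login_basic":         {"R5"},
    "test_login_lockout":       {"R5", "R6"},
}

# Greedy set-cover: repeatedly pick the test covering the most still-uncovered
# risks, yielding a small test set with high risk coverage.
remaining = set().union(*coverage.values())
selected = []
while remaining:
    best = max(coverage, key=lambda test: len(coverage[test] & remaining))
    if not coverage[best] & remaining:
        break
    selected.append(best)
    remaining -= coverage[best]

print(selected)
```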

But how can AI decide what is relevant to test and what is not?
We can estimate the probability of software defects related to a feature based on our past observations, so we can prioritize what to test (defect statistics). Keeping track of feature usage frequencies gives us a hint about what might be relevant to test, what to automate and even what to build. We identify and extract the unique workflow patterns revealed within the log files. We use a Convolutional Neural Network (CNN) with a linear regression top layer; the deep learning model is able to understand hidden dependencies beyond pair-wise entities in a more abstract way, and it provides an estimated time-to-fix for each entry.
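A toy version of a CNN with a linear regression head, written with Keras on synthetic data purely for illustration, might look like the following; the architecture, hyperparameters and data are assumptions, not ValidatAI's production model:

```python
import numpy as np
import tensorflow as tf

# Illustrative setup: defect reports are assumed to be already tokenized into
# fixed-length integer sequences; the target is time-to-fix in hours.
VOCAB_SIZE, SEQ_LEN = 5000, 100
x = np.random.randint(1, VOCAB_SIZE, size=(32, SEQ_LEN))
y = np.random.uniform(1, 72, size=(32, 1))

# A small 1-D CNN over token embeddings with a linear regression output layer,
# one way to estimate time-to-fix from defect text.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="linear"),  # regression head
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, verbose=0)
print(model.predict(x[:3]))
```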

The self-optimising AI engine continuously learns from the defect reports (text pre-processing, feature extraction and selection, and classifier building), and through natural language processing (NLP) and optical character recognition (OCR) it transforms the text, giving it more relevance and context.
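The pre-processing, feature extraction/selection and classifier-building steps can be illustrated with a small scikit-learn pipeline; the defect texts and team labels below are invented for the example:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled defect reports (text -> owning team).
texts = [
    "Null pointer in payment service when card expired",
    "Checkout API returns 500 for empty basket",
    "Login page CSS broken on mobile Safari",
    "Button misaligned on the registration form",
]
teams = ["backend", "backend", "frontend", "frontend"]

# Pre-processing, feature extraction/selection and classifier building in one pipeline.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("select", SelectKBest(chi2, k=10)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(texts, teams)
print(pipeline.predict(["Payment gateway timeout on card authorisation"]))
```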

Through AI-generated recommendations and predictions, QA, DevOps, project managers and other IT specialists are able to manage their projects better and faster by getting contextual insights into the defect metrics.

