Measure Reliability of Automation – Using Machine Learning


Measure Reliability of Automation – Using Machine Learning
Varun Bhal, Lead Software Engineer, Adobe Systems, Noida

Abstract: With technological advancements, we are moving towards automating various stages of the SDLC, including testing of the product/software. At the same time, it becomes equally important to measure the reliability of the automation framework, the testing infrastructure, the test cases, and the other aspects involved. Why do we need to measure reliability? How can we measure it? How is it going to help us?

Objective:
- Find new failing test cases
- Detect deviations in the run in terms of: total test cases executed, failed test cases, test execution time, etc.
- Graphs comparing the historical results for different builds across different platforms
- Auto triaging of known failures

How to implement auto triaging?
1. Find the previous instance of the failed test case on the same configuration. Not found: new failure. Found: continue.
2. Check if it is triaged. No: new failure. Yes: continue.
3. Compare the failure reason. Not matched: new failure. Matched: triage with the same bug.
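The triaging flow above can be sketched in a few lines. This is a minimal illustration, not the presenter's implementation: the record fields (`test`, `config`, `status`, `reason`, `bug`) and the helper `find_previous_instance` are assumed names standing in for queries against the results database.

```python
def find_previous_instance(test_name, configuration, history):
    """Return the most recent prior failure of this test on the same configuration."""
    for run in reversed(history):
        if (run["test"] == test_name and run["config"] == configuration
                and run["status"] == "fail"):
            return run
    return None

def auto_triage(failure, history):
    """Return the known bug id if this failure matches a triaged one, else None (new failure)."""
    previous = find_previous_instance(failure["test"], failure["config"], history)
    if previous is None:
        return None                      # Not found -> new failure
    bug = previous.get("bug")
    if bug is None:
        return None                      # Found but not triaged -> new failure
    if previous["reason"] != failure["reason"]:
        return None                      # Failure reason not matched -> new failure
    return bug                           # Matched -> triage with the same bug
```

A new failure (return value `None`) would then be routed to manual triage, while a matched one is annotated with the existing bug automatically.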

How to implement deviation notifications?
Start capturing the below data per run:
- Total test cases
- Failed test cases
- Untriaged test cases (new failures)
- Test execution time
While pushing new results, compare the data values with the averaged data of the last few runs (e.g. 3 runs). Measure the difference in the values and compare it with the threshold values. If the deviation is more than the threshold, send a notification with the details.
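A minimal sketch of that comparison step, assuming per-run metric dicts. The metric names, the window size of 3, and the relative thresholds are illustrative assumptions, not the presenter's actual values:

```python
# Relative-deviation thresholds per metric (illustrative values).
DEFAULT_THRESHOLDS = {"total": 0.05, "failed": 0.20, "untriaged": 0.50, "exec_time": 0.15}

def detect_deviations(current, history, window=3, thresholds=DEFAULT_THRESHOLDS):
    """Compare the current run against the average of the last `window` runs.

    Returns {metric: (average, current_value, relative_deviation)} for every
    metric whose relative deviation exceeds its threshold.
    """
    recent = history[-window:]
    deviations = {}
    for metric, threshold in thresholds.items():
        avg = sum(run[metric] for run in recent) / len(recent)
        if avg == 0:
            continue  # no baseline for this metric; skip to avoid division by zero
        rel = abs(current[metric] - avg) / avg
        if rel > threshold:
            deviations[metric] = (avg, current[metric], rel)
    return deviations
```

Any non-empty result would trigger the notification email with the offending metrics and their deltas.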

How to implement deviation graphs?
The database would have the below data per run:
- Total test cases
- Failed test cases
- Untriaged test cases (new failures)
- Test execution time
Plot graphs based on this data for different suites. In the case of multiple runs of one configuration, the values should be averaged across all the runs.
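The averaging step mentioned above (collapsing multiple runs of one configuration into a single data point before plotting) could look like the following sketch; the field names are assumptions matching the metrics listed on this slide:

```python
from collections import defaultdict

METRICS = ("total", "failed", "untriaged", "exec_time")

def average_per_configuration(runs):
    """Group runs by (build, configuration) and average each metric,
    yielding one data point per configuration per build for plotting."""
    groups = defaultdict(list)
    for run in runs:
        groups[(run["build"], run["config"])].append(run)
    averaged = []
    for (build, config), group in groups.items():
        point = {"build": build, "config": config}
        for metric in METRICS:
            point[metric] = sum(r[metric] for r in group) / len(group)
        averaged.append(point)
    return averaged
```

The averaged points can then be fed to any charting library, with builds on the x-axis and one series per suite or configuration.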

Example graphs: 1. Failing test cases per suite

Example graphs: 2. Execution time per suite

Example graphs: 3. Total test cases per suite

Example deviation emails (suite name hidden)

Benefits:
- Early issue detection (especially of injections) will help us meet timelines more easily.
- Increased confidence in the product as well as the automation.
- Trends of various metrics over a period of time will help us make project-critical decisions.
- Knowing the reliability of what we do is always valuable; it gives us answers to the various W-H family questions.

Author Biography: Varun is a Lead Software Engineer at Adobe with 5.5 years of industry experience in automation and tools development. He is part of the Flash Runtime team at Adobe and has experience in installation, deployment, and runtime test automation across different platforms, including mobile technology. This is where he came across this practice and successfully implemented it in his team's automation.

Thank You!!!