
Cognition in Testing Cognitive Software


1 Cognition in Testing Cognitive Software
Sindu Vijayan – Principal QE
Vidhya Nandhini Paramanathan – Sr Principal QE
Manhattan Associates

2 Abstract
Technology advancements over the last decade have made storage and connectivity so affordable that the digital world is now transforming at an unimaginable speed. No field is untouched by this digital revolution, and neither is software testing. The enormous volume of data demands cognitive or machine learning logic to derive meaningful information. Programs that use cognitive logic, better termed "non-testable programs," were once a niche and are now becoming an integral part of any business solution. This article touches upon best practices, techniques, and challenges, taking examples from supply chain solutions. While traditional testing techniques can be used in such applications wherever applicable, assessing the correctness of the solution often remains a challenge. In discussing "testing the non-testable programs," the article touches upon the following areas at a broad level:
Effectiveness of the Algorithm/Logic
Correctness of Implementation
End-to-End Solution Accuracy
Quality Benchmark
Cognitive solutions are often about optimization or better insights over traditional models; there are no fixed boundaries or pass/fail criteria. Taking the example of forecast projections, we use what we call the "Time Travel Technique," where validations are conducted against the actuals by moving back in time. Other techniques, such as cross-validation against key performance indicators, can also help test the accuracy: the closer you are to reality, the better the predictions. There is no one-size-fits-all solution for addressing the "non-testable programs." Every such solution needs a unique test plan and methods to ensure that the primary objective of the quality function is met: the solution provided meets the customer needs.

3 The World Right Now
Technology advancements over the last decade have made storage and connectivity so affordable that the digital world right now is transforming at an unimaginable speed.
44 trillion gigabytes of data by 2020
Data doubles every year
The potential information the data holds is also unimaginable
The Potential Impacts!!!
Technology is no longer the differentiator!
Business intelligence transformation demands:
The right information, which steers the business in the right direction.
Detection of trends early enough for better response time; emerging trends are captured over a period of time.
The rise of cognitive computing: AI, once a luxury, is now a necessity!
What is an integral part of the software solution is an integral part of testing too!

4 Cognitive Solutions: What Do They Do?
Add optimization to the otherwise traditional approach
Can be used to provide better insights
Are statistical in nature
Cognitive Solutions in Real Life!
Cognitive solutions make meaningful information out of data that seems unrelated. Examples:
Sentiment analysis/trend analysis for future planning using social media data – marketing strategy
Real-time weather or traffic analysis – predicting customer inflow or the supply of goods
Major Challenges in Testing Enormous Data
Identifying the right data for validation
Validating the effectiveness of the solution

5 What's the Big Deal About Data?
"The solution provided is as good as the data."
The data set chosen determines the effectiveness and accuracy of the solution.
In cases where the results of one cognitive solution feed into another, the effects of poor data can be multiplied.
Most data science models assume that the features (predictors) are accurately measured.
So, generate your own "enormous" data for testing?

6 The Right Data for Validation
Generate a Standard Data Set
Create the data set from snapshots of proven data sources; the data should be crafted carefully with the "actuals" captured.
Required to test the impact of different flavors of data.
The data is static in nature.
Example: a sales matrix gathered from different verticals over a period of time, as for a sales forecasting system.
Real-Time Data Feeds – See What the Customer Sees
Real-time, production-like data.
Testing solutions that analyze trends or customer sentiment would benefit from this approach.
With real-time data the actual results are not readily available, so assessing the effectiveness of the solution is a challenge.
The patterns may be diluted compared to a manually crafted standard data set.
The two methods serve two different purposes; the most beneficial approach is to use both techniques, in the order above, where applicable.
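The "standard data set" idea can be sketched in code. The sketch below hand-crafts a static weekly sales series with a known seasonal pattern and keeps the expected pattern alongside each noisy observation; the function name, the sine-wave seasonality, and every parameter are illustrative assumptions, not the authors' actual data model.

```python
import math
import random

def generate_standard_data_set(weeks=104, base=1000.0, seasonality=0.3,
                               noise=0.05, seed=42):
    """Craft a static standard data set: weekly sales with a known yearly
    seasonal pattern, keeping the injected pattern (the "actuals" to
    validate against) alongside each noisy observation."""
    rng = random.Random(seed)  # fixed seed keeps the data set static
    rows = []
    for week in range(weeks):
        # Deliberately injected, known pattern: 52-week sine-wave seasonality.
        pattern = base * (1.0 + seasonality * math.sin(2 * math.pi * week / 52))
        observed = pattern * (1.0 + rng.uniform(-noise, noise))
        rows.append({"week": week, "expected_pattern": pattern, "actual": observed})
    return rows

data = generate_standard_data_set()
```

Because the pattern is injected deliberately, any forecasting component under test can be judged on how well it recovers it.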

7 Validate the Effectiveness of the Solution
Testing the Non-Testable Programs
Non-testable programs have:
No applicable test oracles
No direct way of assessing the results of most such solutions
A level of optimization that differs from solution to solution
Test the non-testable: how optimized are the results with and without the cognitive solution?
The test methodologies:
Effectiveness of the Algorithms
Correctness of the Implementation
End-to-End Solution Accuracy
Quality Benchmark
The level of optimization required for each solution is unique!

8 Effectiveness of Algorithms
Compare prototypes with other algorithms
Utilize tools (with different algorithms) to compare results; this often helps when choosing the right model or algorithm.
Compare the calculated results with the available actuals. Example: precision vs. recall for classification problems.
Data Seeding Techniques
The technique artificially injects a pattern, then evaluates the response of the system.
Identifies whether the cognitive solution can capture changing trends and react to changes.
Example: in association rule learning, injecting a large number of records of a particular association should surface the newly fed association with a strong affinity score.
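The data seeding technique can be sketched as follows: inject a burst of transactions carrying a chosen item association, re-measure the association's score, and check that it has become markedly stronger. The catalogue, the random basket generator, and the simple support metric below are hypothetical stand-ins for a real association-rule miner.

```python
import random

def seed_association(transactions, pair, copies):
    """Data seeding: artificially inject `copies` transactions that all
    carry the target item pair, so a rule miner should pick it up."""
    return transactions + [set(pair) for _ in range(copies)]

def pair_support(transactions, pair):
    """Fraction of transactions containing both items of the pair
    (a simple stand-in for an affinity score)."""
    hits = sum(1 for basket in transactions if set(pair) <= basket)
    return hits / len(transactions)

# Hypothetical catalogue and random baskets, not real supply-chain data.
rng = random.Random(0)
items = ["milk", "bread", "eggs", "butter", "jam"]
baskets = [set(rng.sample(items, rng.randint(1, 3))) for _ in range(500)]

before = pair_support(baskets, ("milk", "jam"))
seeded = seed_association(baskets, ("milk", "jam"), copies=200)
after = pair_support(seeded, ("milk", "jam"))
# The seeded association should now score markedly higher than before.
```

If the system under test does not report the seeded association with a strong score, it is failing to capture changing trends.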

9 Correctness of Implementation
Cognitive solutions often use prototyping for the algorithmic logic; a well-tested prototype eases implementation testing.
Methods:
Benchmark testing against the prototype, benchmarking against the same data.
This is the relatively easier form of testing, as we have a standard set to compare against. Test automation can be used.
Validate Off-The-Shelf (OTS) Solutions
Off-the-shelf solutions are becoming popular for cognitive systems, where the component with the algorithmic logic can be purchased.
Challenges:
The algorithmic logic is usually intellectual property and not shared.
Optimizing the cognitive logic is a challenge; however, we can assess its effectiveness with benchmarking techniques.
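Benchmarking an implementation against its prototype on the same data can be as simple as running both over a standard data set and flagging every prediction that disagrees beyond a tolerance. The function names and the moving-average toy forecast below are illustrative placeholders, not the authors' actual harness.

```python
import math

def benchmark_against_prototype(prototype_fn, implementation_fn, data_set,
                                rel_tol=1e-6):
    """Run the prototype and the implementation over the same standard
    data set and collect every record where their outputs disagree beyond
    the tolerance. An empty result means the implementation matches."""
    mismatches = []
    for i, record in enumerate(data_set):
        expected = prototype_fn(record)
        actual = implementation_fn(record)
        if not math.isclose(expected, actual, rel_tol=rel_tol):
            mismatches.append((i, expected, actual))
    return mismatches

# Toy stand-ins: a prototype moving-average forecast and a re-implementation.
prototype = lambda xs: sum(xs) / len(xs)
implementation = lambda xs: sum(xs) / len(xs)
data = [[10, 12, 11], [100, 90, 95], [7, 7, 7]]
mismatches = benchmark_against_prototype(prototype, implementation, data)
```

A harness like this is easy to automate and rerun on every build, which is why this is the relatively easier form of testing.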

10 End-to-End Solution Accuracy
The rule of thumb: compare the results with and without the cognitive solution. The extent of improvement is the gauge!
Time Travel Technique:
Collect the data, with actuals, over a period of time.
Go back in time and make the predictions.
Compare the predictions to the actuals.
Cross-validation techniques can also be used. Taking the example above: evaluate KPIs such as lost sales and service level agreements. A well-optimized cognitive solution should reduce lost sales and thereby improve the SLAs.
How close?
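The Time Travel Technique above can be sketched as a simple backtest: pretend "today" is some past cutoff, forecast forward using only the data known at that point, and score against the actuals already in hand. The naive forecaster and the MAPE error metric are illustrative assumptions, not prescribed by the slides.

```python
def time_travel_backtest(history, cutoff, forecaster, horizon):
    """Pretend "today" is `cutoff`: forecast the next `horizon` periods
    using only data up to the cutoff, then score the predictions against
    the actuals we already hold for those periods."""
    train = history[:cutoff]
    actuals = history[cutoff:cutoff + horizon]
    predictions = forecaster(train, horizon)
    # Mean absolute percentage error of prediction vs. actual.
    mape = sum(abs(p - a) / a for p, a in zip(predictions, actuals)) / len(actuals)
    return predictions, actuals, mape

# Hypothetical naive forecaster: repeat the mean of the last 4 periods.
naive = lambda train, horizon: [sum(train[-4:]) / 4] * horizon

weekly_sales = [100, 102, 98, 101, 99, 103, 100, 102]
preds, acts, mape = time_travel_backtest(weekly_sales, cutoff=6,
                                         forecaster=naive, horizon=2)
```

Running the same backtest with and without the cognitive component gives the "extent of improvement" gauge directly.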

11 Quality Benchmark
Yet another challenge is testing between versions in which the algorithms are optimized.
Quality Benchmark technique:
Collect a standard data set.
Run the end-to-end solution accuracy validation technique.
Capture the metrics.
The accuracy of the product is measured version over version, which evaluates whether the optimization attempts have paid off.
Conclusion
Testing the "non-testable programs" remains a challenge; no one-size-fits-all test solution is available.
Every solution needs a unique test plan that suits the problem it tries to address.
The primary objective of any quality function: ensure the solution meets the customer needs.
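The version-over-version comparison can be sketched as a small report over the captured metrics; the version keys, the "mape" metric name, and the lower-is-better convention are illustrative assumptions.

```python
def compare_versions(metrics_by_version, metric="mape", lower_is_better=True):
    """Report, for each consecutive version pair, whether the benchmark
    metric captured on the standard data set improved, so optimization
    attempts can be judged version over version."""
    versions = sorted(metrics_by_version)
    report = []
    for prev, curr in zip(versions, versions[1:]):
        delta = metrics_by_version[curr][metric] - metrics_by_version[prev][metric]
        improved = delta < 0 if lower_is_better else delta > 0
        report.append((prev, curr, delta, improved))
    return report

# Hypothetical benchmark results captured on one standard data set.
history = {"v1.0": {"mape": 0.12}, "v1.1": {"mape": 0.09}, "v1.2": {"mape": 0.10}}
report = compare_versions(history)
```

Keeping the data set fixed across versions is what makes the deltas attributable to the algorithm changes rather than to the data.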

12 References & Appendix

13 Author Biography
Sindu Vijayan:
Principal Quality Engineer with Manhattan Associates. Sindu leads the testing activities in the Inventory suite of products, which are associated with planning, forecasting, and optimization of supply chain inventory. With over 14 years of overall industry experience, she has been associated with Manhattan for 11+ years. She holds a Master's in Computer Applications from Bharathiyar University and a Post Graduate Diploma in Retail Management. She is also a QAI Certified Manager in Software Quality.
Vidhya Nandhini Paramanathan:
Vidhya is a Sr Principal Quality Engineer with Manhattan Associates. She leads the testing activities in the Inventory suite of products, which are associated with planning, forecasting, and optimization of supply chain inventory. Vidhya has 14+ years of overall industry experience and has been associated with Manhattan for 11+ years.

14 Thank You!!!

