Quality Assessment for CBSD: Techniques and A Generic Environment
Presented by: Cai Xia
Supervisor: Prof. Michael Lyu
Markers: Prof. Ada Fu, Prof. K.F. Wong
May 11, 2001

Outline
- Introduction
- Software Metrics and Quality Assessment Models
- Experiment Setup
- Prototype
- Conclusion

Introduction
- Component-based software development (CBSD) builds software systems from a combination of components.
- CBSD aims to encapsulate function in large components that have loose couplings.
- A component is a unit of independent deployment in CBSD.
- The overall quality of the final system greatly depends on the quality of the selected components.

Introduction
- Existing Software Quality Assurance (SQA) techniques and methods have been explored to measure or control the quality of software systems and processes:
  - Management/process control
  - Software testing
  - Software metrics
  - Quality prediction techniques

Introduction
- Due to the special features of CBSD, it is uncertain whether conventional SQA techniques and methods can be applied to CBSD.
- We propose a set of experiments to investigate the most efficient and effective SQA approaches suitable for CBSD.
- Component-based Program Analysis and Reliability Evaluation (ComPARE)

Quality Assessment Techniques
- Software metrics:
  - Process metrics
  - Static code metrics
  - Dynamic metrics
- Quality prediction models:
  - Summation model
  - Product model
  - Classification tree model
  - Case-based reasoning method
  - Bayesian Belief Network

Process and Dynamic Metrics

Process metrics:
- Time: time spent from design to delivery (months)
- Effort: the total human resources used (man*month)
- Change Report: number of faults found in the development

Dynamic metrics:
- Test Case Coverage: the coverage of the source code when executing the given test cases; may help to design effective test cases.
- Call Graph metrics: the relationships between the methods, including method time (the amount of time the method spent in execution), method object count (the number of objects created during the method execution), and number of calls (how many times each method is called in your application).
- Heap metrics: number of live instances of a particular class/package, and the memory used by each live instance.

Static Code Metrics
- Lines of Code (LOC): number of lines in the component, including statements, blank lines, lines of commentary, and lines consisting only of syntax such as block delimiters.
- Cyclomatic Complexity (CC): a measure of the control-flow complexity of a method or constructor; counts the number of branches in the body of the method, defined by the number of WHILE, IF, FOR, and CASE statements.
- Number of Attributes (NA): number of fields declared in the class or interface.
- Number Of Classes (NOC): number of classes or interfaces that are declared; usually 1, but nested class declarations increase this number.
- Depth of Inheritance Tree (DIT): length of the inheritance path between the current class and the base class.
- Depth of Interface Extension Tree (DIET): length of the path between the current interface and the base interface.
- Data Abstraction Coupling (DAC): number of reference types used in the field declarations of the class or interface.
- Fan Out (FANOUT): number of reference types used in field declarations, formal parameters, return types, throws declarations, and local variables.
- Coupling between Objects (CO): number of reference types used in field declarations, formal parameters, return types, throws declarations, local variables, and also types from which field and method selections are made.
- Method Calls Input/Output (MCI/MCO): number of calls to/from a method; helps to analyze the coupling between methods.
- Lack of Cohesion Of Methods (LCOM): for each pair of methods in the class, determine the set of fields each of them accesses. If the sets are disjoint, increase the count P by one; if they share at least one field access, increase Q by one. After considering each pair, LCOM = (P > Q) ? (P - Q) : 0.
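The pairwise LCOM rule above can be sketched in a few lines. In this sketch each method of a hypothetical class is represented simply by the set of fields it accesses:

```python
def lcom(method_fields):
    """Lack of Cohesion of Methods, per the pairwise P/Q definition above.

    method_fields: one set per method, each holding the fields that
    method accesses.
    """
    p = q = 0  # p: disjoint pairs, q: pairs sharing at least one field
    n = len(method_fields)
    for i in range(n):
        for j in range(i + 1, n):
            if method_fields[i] & method_fields[j]:
                q += 1
            else:
                p += 1
    return p - q if p > q else 0

# Hypothetical class with three methods touching fields a, b, c:
# pair (m1, m2) is disjoint, (m1, m3) shares 'a', (m2, m3) is disjoint,
# so P = 2, Q = 1 and LCOM = 1.
print(lcom([{"a"}, {"b"}, {"a", "c"}]))
```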

Software Quality Assessment Models
- Summation Model: Q = sum over i = 1..n of (w_i * m_i)
- Product Model: Q = product over i = 1..n of m_i
- where m_i is the value of one particular metric, w_i is its weighting factor, n is the number of metrics, and Q is the overall quality mark. The m_i are normalized to values close to 1.
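As a sketch, the two models can be computed as below; the metric values and weights are invented for illustration:

```python
def summation_model(metrics, weights):
    """Q = sum of w_i * m_i over all n metrics."""
    return sum(w * m for w, m in zip(weights, metrics))

def product_model(metrics):
    """Q = product of m_i; the metrics are normalized close to 1
    so the product stays a meaningful quality mark."""
    q = 1.0
    for m in metrics:
        q *= m
    return q

# Hypothetical normalized metric values for one component
metrics = [0.9, 0.95, 0.8]
weights = [0.5, 0.3, 0.2]
print(summation_model(metrics, weights))  # 0.45 + 0.285 + 0.16 ≈ 0.895
print(product_model(metrics))             # 0.9 * 0.95 * 0.8 ≈ 0.684
```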

Software Quality Assessment Models
- Classification Tree Model: classifies the candidate components into different quality categories by constructing a tree structure.
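A minimal illustration of the idea: a hand-written two-level tree that routes a component to a quality category by thresholding two metrics. The thresholds are invented; a real classification tree would be induced from training data.

```python
def classify(loc, cc):
    """Toy classification tree over invented thresholds on
    lines of code (LOC) and cyclomatic complexity (CC)."""
    if loc > 500:              # first split: component size
        if cc > 20:            # second split: control-flow complexity
            return "low"
        return "medium"
    if cc > 20:
        return "medium"
    return "high"

print(classify(loc=120, cc=5))   # small, simple component -> "high"
print(classify(loc=800, cc=35))  # large, complex component -> "low"
```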

Software Quality Assessment Models
- Case-Based Reasoning (CBR)
  - A CBR classifier uses previous "similar" cases, stored in a case base, as the basis for the prediction.
  - A candidate component that has a similar structure to components in the case base will inherit a similar quality level.
  - Configuration: Euclidean distance, z-score standardization, no weighting scheme, nearest neighbor.
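The stated configuration (z-score standardization, Euclidean distance, nearest neighbor, no weighting) can be sketched as follows; the case base and metric vectors are invented:

```python
import math

def zscore_stats(rows):
    """Per-column mean and standard deviation for standardization."""
    stats = []
    for col in zip(*rows):
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)
        stats.append((mean, math.sqrt(var) or 1.0))  # guard against sd == 0
    return stats

def standardize(row, stats):
    return [(x - m) / s for x, (m, s) in zip(row, stats)]

def predict_quality(candidate, case_base):
    """Nearest-neighbor CBR: the candidate inherits the quality level
    of the most similar stored case (Euclidean distance, no weighting)."""
    rows = [metrics for metrics, _ in case_base] + [candidate]
    stats = zscore_stats(rows)
    cand = standardize(candidate, stats)
    nearest = min(case_base,
                  key=lambda case: math.dist(cand, standardize(case[0], stats)))
    return nearest[1]

# Invented case base: (metric vector, known quality level)
case_base = [([100, 5], "high"), ([900, 40], "low"), ([400, 15], "medium")]
print(predict_quality([150, 7], case_base))  # closest to the "high" case
```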

Software Quality Assessment Models
- Bayesian Network
  - A graphical network that represents probabilistic relationships among variables.
  - Enables reasoning under uncertainty.
  - The foundation of Bayesian networks is Bayes' Theorem: P(H | E, c) = P(H | c) * P(E | H, c) / P(E | c), where H is a hypothesis, E is the observed evidence, c is the context information, and P(x | y) is the probability of x given y.
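A tiny numeric illustration of the Bayesian update, with all probabilities invented: suppose the prior belief that a component is fault-prone is P(H) = 0.1, a fault-prone component fails a stress test with P(E|H) = 0.8, and a healthy one with P(E|not H) = 0.2.

```python
def bayes_posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H) P(H) / P(E), with P(E) expanded by
    the law of total probability."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Observing the failing test raises the fault-proneness belief
# from 0.1 to 0.08 / (0.08 + 0.18) ≈ 0.31.
print(bayes_posterior(0.1, 0.8, 0.2))
```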

BBN Example: reliability prediction

Experiment Setup
- Investigate the applicability of existing SQA techniques to CBSD.
- Adopt three different kinds of software metrics:
  - Process metrics
  - Static code metrics
  - Dynamic metrics
- Use several quality prediction models:
  - Classification tree model
  - Case-based reasoning
  - Bayesian Belief Network

Experiment Setup
[Diagram: each model (BBN, CBR, classification tree) is applied to the three kinds of metrics and their sub-metrics — process metrics (Time, Effort, CR), static metrics (LOC, CC, NOC), and dynamic metrics (Coverage, Call Graph, Heap).]

Experiment Setup
- Data set: a real-life project, a Soccer Club Management System
  - Java language, CORBA platform
  - A set of programs developed by different teams from the same specification

Experiment Setup
Experiment procedures:
1. Collect metrics from all programs.
2. Apply them to the different models.
3. Design test cases.
4. Use the test results as an indicator of quality.
5. Validate the prediction results against the test results.
6. Adopt cross evaluation: training data and test set.
7. Select the most effective prediction model(s).
8. Adjust the coefficients/weights of the different metrics in the final model(s).
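The validation step amounts to comparing each model's predicted quality levels against the levels indicated by the test results; a minimal sketch, with all labels invented:

```python
def prediction_accuracy(predicted, actual):
    """Fraction of components whose predicted quality level matches
    the level indicated by the test results."""
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

# Invented quality labels for eight programs
predicted = ["high", "low", "medium", "high", "low", "high", "medium", "low"]
actual    = ["high", "low", "low",    "high", "low", "medium", "medium", "low"]
print(prediction_accuracy(predicted, actual))  # 6 of 8 correct -> 0.75
```

In a cross evaluation, this score would be computed on a held-out test set for each candidate model, and the best-scoring model(s) retained.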

An Experiment Example
[Table: client LOC, server LOC, numbers of client classes and methods, numbers of server classes and methods, and test results for each of eight teams.]

An Experiment Example

Why ComPARE?
- An environment that integrates these functions
- Automates the experiment procedure
- Looks into both the whole system and its components

Why ComPARE?
[Diagram: ComPARE analyzes the final system together with its constituent components (Component 1 through Component 4).]

Architecture of ComPARE
[Diagram: Metrics Computation, Criteria Selection, Model Definition, Model Validation, and Result Display modules, operating on the case base, failure data, candidate components, and system architecture.]

Features of ComPARE
- To predict the overall quality by using process metrics, static code metrics, and dynamic metrics
- To integrate several quality prediction models into one environment and compare the prediction results of different models
- To define the quality prediction models interactively

Features of ComPARE
- To display the quality of components by different categories
- To validate reliability models defined by the user against real failure data
- To show the source code with potential problems at line-level granularity
- To adopt commercial tools for accessing software data related to quality attributes

Prototype
GUI of ComPARE for metrics, criteria, and the tree model
[Screenshots: Metrics, Criteria, Tree Model]

Prototype
GUI of ComPARE for prediction display, risky source code, and result statistics
[Screenshots: statistics display, source code]

Conclusion
- Problem statement: it is uncertain whether conventional SQA techniques can be applied to CBSD.
- We propose a set of experiments and a generic environment to investigate the most efficient and effective SQA approaches suitable for CBSD.
- As future work, we will determine through these experiments which approaches are applicable to CBSD.

Q & A