Model Comparison: A Key Challenge for Transformation Testing and Version Control in Model Driven Software Development Yuehua Lin, Jing Zhang, Jeff Gray.

Presentation transcript:

Model Comparison: A Key Challenge for Transformation Testing and Version Control in Model Driven Software Development Yuehua Lin, Jing Zhang, Jeff Gray {liny, zhangj, gray}@cis.uab.edu http://www.cis.uab.edu/softcom/ * This research is funded by the DARPA Information Exploitation Office (DARPA/IXO), under the Program Composition for Embedded Systems (PCES) program.

Two Challenges in MDSD
Lack of a mechanism for validating the correctness of model transformations
Problem: Model transformation specifications can be erroneous
Proposed Solution: Testing may be applied to model transformations
Current version control tools do not match the structural nature of models
Need to archive the history of model changes
Model-centric version control

As models and model transformations are elevated to first-class artifacts within the software development process, there is an increasing need to introduce core software engineering principles and processes into modeling activities. Practices such as design, version control, testing, and maintenance can be integrated into the development environment for models and transformations to improve their quality. However, testing and version control tools and techniques are not as widely available in most modeling environments as they are in typical software development. There are two challenges:

1) Lack of a mechanism for validating the correctness of model transformations. Model transformations are first-class citizens in MDSD. In the context of model-to-model transformation, transformation rules and application strategies are written in a special language, called the transformation specification, which can be either graphical or textual. When a model transformation is carried out, its specification is interpreted by the transformation engine to generate the target model(s) from the source model(s). The correctness of transformation specifications must be ensured before they are applied to a collection of source models or reused to perform future transformation tasks. Such correctness can be determined by checking whether the output of the model transformation satisfies its intent (i.e., there are no differences between the output model and the expected model). Testing is an effective approach to revealing errors in the specifications, yet this work has been neglected in the literature on model transformation research.

2) Current version control tools do not match the structural nature of models. Traditional version control and configuration management systems (e.g., CVS and Microsoft SourceSafe) have been available for many years to assist developers in this important function. However, these tools operate under a linear, file-based paradigm that is purely textual, whereas models are structurally rendered in a tree or graphical notation. Thus, there is an abstraction mismatch between currently available version control tools and the hierarchical nature of models.

Lack of a mechanism for validating the correctness of model transformations
Problem: Model transformation specifications can be erroneous
Proposed Solution: Testing may be applied to model transformations
Key Challenge: A lack of testing tools
Current version control tools do not match the structural nature of models
Need to archive the history of model changes
Model-centric version control
Traditional version control tools (e.g., CVS and MS SourceSafe) operate under a linear, file-based paradigm, but models are structurally rendered in a tree or graphical notation

Model Transformation Testing: Bugs?

We propose model transformation testing as an effective approach to ensuring the correctness of a model transformation. In a model transformation environment, assuming the model transformation engine works correctly and the input models are properly specified, the model transformation specifications are the only artifacts that need to be validated and verified. Thus, model transformation testing here becomes model transformation specification testing. Like software testing, model transformation specification testing can only show the presence of errors in a specification, not their absence. However, as a more lightweight alternative to formal methods, it can be very effective in revealing such errors.

As an initial prototype, Figure 1 shows a framework for testing model transformation specifications that assists in generating, running, and documenting tests automatically. There are three primary components in the testing framework: the test case constructor, the test engine, and the test analyzer. The test case constructor consumes the test specification and produces test cases for the to-be-tested transformation specification. The generated test cases are passed to the test engine, which exercises them. Within the test engine, there is an executor and a comparator: the executor is responsible for executing the transformation on the source model, and the comparator compares the target model to the expected model and collects the results of the comparison. The test analyzer visualizes the results provided by the comparator and provides a capability to navigate among any differences.
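The executor/comparator pipeline described above can be sketched in a few lines. This is only an illustrative sketch, not the actual C-SAW implementation: the dictionary-based model encoding, the `run_test_case` helper, and the sample `rename` transformation are all hypothetical.

```python
# Minimal sketch of the test engine: an executor runs the transformation
# on a source model, and a comparator checks the output against the
# expected model. Models are simplified to dicts: element id -> attributes.

def execute(transformation, source_model):
    """Executor: apply the transformation specification to the source model."""
    return transformation(source_model)

def compare(target_model, expected_model):
    """Comparator: collect differences between target and expected models."""
    diffs = []
    for elem_id, attrs in expected_model.items():
        if elem_id not in target_model:
            diffs.append(("missing", elem_id))
        elif target_model[elem_id] != attrs:
            diffs.append(("changed", elem_id))
    for elem_id in target_model:
        if elem_id not in expected_model:
            diffs.append(("redundant", elem_id))
    return diffs

def run_test_case(transformation, source_model, expected_model):
    """Test engine: execute, then compare; an empty diff list means pass."""
    target = execute(transformation, source_model)
    return compare(target, expected_model)

# Example: a hypothetical transformation that upper-cases element names.
rename = lambda m: {i: dict(a, name=a["name"].upper()) for i, a in m.items()}
source = {"e1": {"name": "clock"}}
expected = {"e1": {"name": "CLOCK"}}
assert run_test_case(rename, source, expected) == []  # no differences: pass
```

The point of the sketch is the division of labor: the executor knows nothing about expected results, and the comparator knows nothing about the transformation, so either can be replaced independently.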

Model-Centric Version Control
Storage and retrieval of versioned models
Merging of versioned models
Change detection between versioned models
Visualization of differences

Generally, model-centric version control is concerned with these issues: 1) storage and retrieval of versioned models; 2) change detection between versioned models, which requires a comparison of two versioned models; 3) visualization of differences through hierarchical models, where various shapes and colors mark the different types of changes: a red circle marks a deleted object, a blue square marks a created object, and a green star marks an object whose attributes have changed; and 4) other issues, such as merging of versioned models.

Version 1.0  Version 2.0
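The shape-and-color coding described in the notes amounts to a lookup from change kind to visual marker. A minimal sketch, assuming a hypothetical change-list format (the marker names come from the notes above; the data structures are invented for illustration):

```python
# Map each kind of versioned-model change to the visual marker described
# in the notes: red circle = deleted, blue square = created,
# green star = attribute(s) changed.
MARKERS = {
    "deleted": ("circle", "red"),
    "created": ("square", "blue"),
    "changed": ("star", "green"),
}

def annotate(changes):
    """Attach a (shape, color) marker to each (kind, element-id) change."""
    return [(elem_id, *MARKERS[kind]) for kind, elem_id in changes]

print(annotate([("created", "e7"), ("deleted", "e2")]))
# [('e7', 'square', 'blue'), ('e2', 'circle', 'red')]
```

A real tool would of course draw these markers onto the model diagram; the table simply makes the encoding explicit and easy to change.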

Model Comparison
Calculate the mappings and differences between models.
In model transformation testing: model comparison compares the actual output model with the expected output model to find errors in a model transformation specification.
In model-centric version control: model comparison compares versioned models to detect changes.

Generally, it is possible to compare two fine-grained models that have a precisely defined syntactical structure. The main task of model comparison is to calculate the mappings and differences between models. As discussed above for model transformation testing, model comparison must be performed to discover differences between the expected model (the expected result) and the target model (the actual result). Similarly, model comparison is also required in model-centric version control to detect changes between versioned models. Manual comparison of models is tedious, exceedingly time-consuming, and susceptible to error. Thus, to automate model comparison and visualize its results, the following issues need to be explored in depth.

Issues toward Automation of Model Comparison
What properties of models need to be compared? In modeling languages, a model consists of entities, associations, and attributes. Model differences can be represented in operational terms such as "create", "delete", and "set/update".
At which level should models be compared: an intermediate representation such as XML, or the underlying internal data structure?
What are effective algorithms: powerful or practical? General or specific?
How should model differences be visualized? Interpret the results (e.g., indicate the various differences using coloring) and support hierarchical navigation of differences.

To automate model comparison, we need to explore these issues. The first fundamental issue is what properties of models need to be compared. Generally, the structural properties must be compared to discover the differences between two models. However, the structure of a model may vary across modeling languages. For models defined in any MOF-based modeling language, elements, links, and features (e.g., the names of elements and links) need to be compared [2]. Another fundamental issue is how to define and represent the mappings and differences between two models. Some researchers propose using operational terms such as "create", "delete", and "set" to represent the model difference computation.

Models are usually rendered in a graphical notation and may be persistently stored in an intermediate language such as XML. Thus, it is possible to compare models via their XML representations, but this approach raises problems. An XML document is generally a linear structure that represents hierarchy by identifiers linked to other parts of the document, and this does not match the hierarchy that is visually depicted in most modeling languages and tools. One possibility is to compare models by traversing the underlying internal representation provided by the host tool. However, such an approach could become tool-specific and not reusable.

Most models are graph-like, but existing graph matching algorithms are often too expensive. We may need to loosen the constraints on graph matching to simplify the algorithm (e.g., by requiring that each element of a model have a universally unique identifier (UUID), or by using name matching). In a domain-specific modeling environment, the meta-model information may be used to aid the comparison, such as grouping elements and relationships based on type information. Another concern with model comparison algorithms is whether to develop general algorithms that can be applied to different modeling environments, which may be hard, or algorithms specific to a single modeling environment.

Several modeling tools (e.g., the GME) capture the hierarchy and containment within models explicitly, whereby the modeler recursively clicks on higher-level models to reveal the contents of the containment. The hierarchical nature of models makes it difficult to visually observe the mappings and differences between two models. A new visualization technique is essential for understanding the results of a model transformation testing framework and for comprehending the changes between model versions. Currently, the most commonly proposed technique for visualizing model differences is coloring [9], where different colors indicate whether a model element is missing or redundant. However, a mature version control system requires more sophisticated techniques to convey richer information and to navigate model differences hierarchically.
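The UUID-based loosening of graph matching, combined with the operational difference terms ("create", "delete", "set"), can be sketched concretely. This is a hedged illustration, not any specific tool's algorithm: the `diff_models` function and the dict-based model encoding are assumptions made for the example.

```python
# Sketch of model comparison with UUID matching: when every element
# carries a universally unique identifier, matching reduces to a
# dictionary join rather than general (expensive) graph matching, and
# differences can be reported as "create" / "delete" / "set" operations.

def diff_models(base, revised):
    """Compare two models given as {uuid: {attribute: value}} dicts."""
    ops = []
    for uuid, attrs in revised.items():
        if uuid not in base:
            ops.append(("create", uuid, attrs))       # new element
        else:
            for name, value in attrs.items():
                if base[uuid].get(name) != value:
                    ops.append(("set", uuid, name, value))  # changed attribute
    for uuid in base:
        if uuid not in revised:
            ops.append(("delete", uuid))              # removed element
    return ops

v1 = {"a1": {"name": "Sensor", "rate": 10}, "a2": {"name": "Filter"}}
v2 = {"a1": {"name": "Sensor", "rate": 20}, "a3": {"name": "Logger"}}
print(diff_models(v1, v2))
# [('set', 'a1', 'rate', 20), ('create', 'a3', {'name': 'Logger'}), ('delete', 'a2')]
```

Note that the resulting operation list is exactly what a model-centric version control system would need: applying the operations to the base model reproduces the revision, and the operation kinds map directly onto the visualization coloring discussed earlier.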

Questions?

For publications, video demonstrations, and presentations, please see: http://www.gray-area.org/Research/C-SAW/