Metrics in Education (in informatics)
Ivan Pribela, (Zoran Budimac)

Content
● Motivation
● Areas of application
  o Used metrics
  o Proposed metrics
● Conclusions

Motivation (educational)
● Designing and coding computer programs is a must-have competence
  o Needed in academic training and in the future labour market
  o Requires a learning-by-doing approach
● Giving students enough information about what they are doing right and wrong while solving problems is essential for better knowledge acquisition
● This allows students to better understand their mistakes and to try to correct them
● Industry-standard metrics are not always understood
  o The values of the metrics do not say much on their own
  o There is a need for human-understandable explanations of the values
  o In some cases, new metrics that are more approachable for students

Motivation (“cataloguing”)
Making a “catalogue” of metrics in different fields:
- software
- (software) networks
- educational software metrics
- ontologies
... and possibly finding relations between them (and/or new uses for them)

Areas of Application
● Assignment difficulty measurement
● Student work assessment
● Plagiarism detection
● Student portfolio generation

1. Assignment difficulty measurement
● Done on a model solution
  o Better-quality solution
  o Less complex code
● Different measures for reading and writing
  o Measure the quality level, not just the structure
  o Level of thinking required and cognitive load
● Three difficulty categories
  o Readability
  o Understandability (complexity + simplicity + structuredness + readability)
  o Writability

Metrics used
● Fathom, based on the Flesch-Kincaid readability measure
  o No justification or explanation for the weightings and thresholds
● Software Readability Ease Score
  o No details of the calculation
● Halstead’s difficulty measure
  o Obvious, but no thresholds
● McCabe’s cyclomatic complexity
  o Strongly correlates, but no thresholds
● Average block depth
  o Obvious, but no thresholds
● Magel’s regular expression metric
  o Strongly correlates, but no thresholds
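Of the metrics above, McCabe’s cyclomatic complexity is the most mechanical to compute: it is the number of decision points in the code plus one. A minimal sketch, assuming a C-like or Java-like source language and a simplified keyword list (real tools work on the control-flow graph instead of raw text):

```python
import re

# Branch-inducing constructs; this token list is a simplifying assumption.
DECISION_TOKENS = ("if", "for", "while", "case", "catch", "&&", "||", "?")

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's cyclomatic complexity as decision points + 1."""
    count = 0
    for tok in DECISION_TOKENS:
        if tok.isalpha():
            # Whole-word match so that e.g. "iffy" does not count as "if".
            count += len(re.findall(r"\b" + tok + r"\b", source))
        else:
            count += source.count(tok)
    return count + 1

snippet = """
int sign(int x) {
    if (x > 0) return 1;
    if (x < 0) return -1;
    return 0;
}
"""
print(cyclomatic_complexity(snippet))  # 3: two if-decisions + 1
```

A straight-line function with no branches scores 1, the minimum.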

Metrics used
● Sum of all operators in the executed statements
  o Dynamic measure, only relevant for readability
● Number of commands in the executed statements
  o Dynamic measure, only relevant for readability
● Total number of commands
  o Strongly correlates
● Total number of operators
  o Strongly correlates
● Number of unique operators
  o Weakly correlates
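Operator and operand counts like those above are exactly the inputs of Halstead’s difficulty measure, D = (n1 / 2) * (N2 / n2), where n1 is the number of distinct operators, n2 the number of distinct operands, and N2 the total number of operand occurrences. A minimal sketch with illustrative counts:

```python
def halstead_difficulty(n1: int, n2: int, N2: int) -> float:
    """Halstead's difficulty: D = (n1 / 2) * (N2 / n2).

    n1 -- number of distinct operators
    n2 -- number of distinct operands
    N2 -- total number of operand occurrences
    """
    return (n1 / 2) * (N2 / n2)

# Illustrative counts: 10 distinct operators, 8 distinct operands,
# 24 operand occurrences in total.
print(halstead_difficulty(10, 8, 24))  # 15.0
```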

Metrics used
● Total number of lines of code (excluding comments and empty lines)
  o Strong correlation
● Number of flow-of-control constructs
  o Strong correlation
● Number of function definitions
  o Some correlation
● Number of variables
  o Weak correlation

Fathom (Starsinic, 1998)
● Code complexity
  o CC = (average expression length in tokens) * (average statement length in expressions) * (average subroutine length in statements) * 0.08
● Very readable
  o CC < 2.91
● Very complex and hard to read
  o CC > 6.85
● The threshold values are empirical
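The Fathom formula and its two empirical thresholds translate directly into code. A minimal sketch; the “moderate” label for scores between the thresholds is my wording, not Starsinic’s:

```python
def fathom_code_complexity(expr_len: float, stmt_len: float, sub_len: float) -> float:
    """CC = avg expression length (tokens) * avg statement length (expressions)
    * avg subroutine length (statements) * 0.08 (empirical factor)."""
    return expr_len * stmt_len * sub_len * 0.08

def classify(cc: float) -> str:
    if cc < 2.91:
        return "very readable"
    if cc > 6.85:
        return "very complex and hard to read"
    return "moderate"  # label for the in-between range is an assumption

cc = fathom_code_complexity(4.0, 2.0, 9.0)
print(round(cc, 2), classify(cc))  # 5.76 moderate
```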

Magel’s regular expression metric (Magel, 1981)
● Expression
  o a (b+c)
● Metric value
  o 6

Magel’s regular expression metric (Magel, 1981)
● Expression
  o ab (ab)* c
● Metric value
  o 8
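In both examples the metric value equals the number of symbols in the regular expression, counting operands, operators, and parentheses alike. Assuming that reading (it is an inference from the two examples on the slides, not Magel’s full definition), the computation is trivial:

```python
def regex_metric(expression: str) -> int:
    # Count every non-whitespace symbol: operands, '*', '+', and parentheses.
    # This counting rule is inferred from the two slide examples only.
    return sum(1 for ch in expression if not ch.isspace())

print(regex_metric("a (b+c)"))     # 6, matching the first example
print(regex_metric("ab (ab)* c"))  # 8, matching the second example
```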

2. Student work assessment
● Three main goals for measurement
  o Diagnostic assessment
    - Measures the performance of a cohort of students in order to identify individual learning difficulties, common problems, and areas of strength
  o Formative assessment
    - Provides feedback directly to students, with the goal of helping them improve their understanding of the subject
  o Summative assessment
    - Measures how well students have achieved the learning objectives

Metrics used
● McCabe’s cyclomatic complexity
  o (Edwards, 2014)
  o (Jurado, Redondo, Ortega, 2012)
  o (Pribela, Rakic, Budimac, 2012)
  o (Cardell-Oliver, 2011)
  o (Breuker, Derriks, Brunekreef, 2011)
  o (Mengel, Yerramilli, 1999)
  o (Joy, Luck, 1999)
  o (Jackson, Usher, 1997)
  o (Thorburn, Rowe, 1997)
  o (Jackson, 1996)
  o (Leach, 1995)

Metrics used
● Berry-Meekings style guides / style violations
  o (Edwards, 2014)
  o (Pribela, Rakic, Budimac, 2012)
  o (Cardell-Oliver, 2011)
  o (Joy, Griffiths, Boyatt, 2005)
  o (Truong, Roe, Bancroft, 2004)
  o (Jackson, 2000)
  o (Mengel, Yerramilli, 1999)
  o (Joy, Luck, 1998)
  o (Jackson, Usher, 1997)
  o (Thorburn, Rowe, 1997)
  o (Jackson, 1996)

Metrics used
● Halstead’s complexity measures
  o (Mengel, Yerramilli, 1999)
  o (Joy, Luck, 1999)
  o (Leach, 1995)
● Coupling / cohesion / modularisation
  o (Jackson, 2000)
  o (Joy, Luck, 1998)
  o (Leach, 1995)
● Scope number
  o (Joy, Luck, 1999)

Metrics used
● Lines of code without comments and empty lines
  o (Edwards, 2014)
  o (Jurado, Redondo, Ortega, 2012)
  o (Pribela, Rakic, Budimac, 2012)
  o (Cardell-Oliver, 2011)
  o (Breuker, Derriks, Brunekreef, 2011)
  o (Benford, Burke, Foxley, Higgins, 1995)
● Number of classes/methods/fields
  o (Edwards, 2014)
  o (Cardell-Oliver, 2011)
  o (Breuker, Derriks, Brunekreef, 2011)
  o (Joy, Griffiths, Boyatt, 2005)
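Counting lines of code without comments and empty lines, the most widely cited metric above, is straightforward. A minimal sketch assuming line comments only (block comments would need a small state machine):

```python
def logical_loc(source: str, comment_prefix: str = "//") -> int:
    """Count source lines, skipping blank lines and full-line comments."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

sample = """
// compute the absolute value
int abs(int x) {

    return x < 0 ? -x : x;
}
"""
print(logical_loc(sample))  # 3
```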

Metrics used
● Custom metrics
  o (Jurado, Redondo, Ortega, 2012)
  o (Breuker, Derriks, Brunekreef, 2011)
  o (Benford, Burke, Foxley, Higgins, 1995)
● Algorithm parameterization
● Number of blocks/arrays/variables
● Number of method calls/recursive calls
● Operational complexity/control complexity
● Number of value/reference parameters

Berry-Meekings style guidelines (Berry, Meekings, 1985)
● Each metric is scored against boundary values; the weights of the positive criteria sum to 100%

  Metric                Weight
  Module length          15%
  Identifier length      14%
  Comment lines          12%
  Indentation            12%
  Blank lines            11%
  Characters per line     9%
  Spaces per line         8%
  Defines                 8%
  Reserved words          6%
  Include files           5%
  Gotos                 -20%
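The slide gives only the weights. As a rough sketch, assume an all-or-nothing rule in which each satisfied criterion contributes its full weight (the published metric actually interpolates between boundary values):

```python
# Weights from the slide; the all-or-nothing scoring rule below is a
# simplifying assumption.
WEIGHTS = {
    "module_length": 15, "identifier_length": 14, "comment_lines": 12,
    "indentation": 12, "blank_lines": 11, "characters_per_line": 9,
    "spaces_per_line": 8, "defines": 8, "reserved_words": 6,
    "include_files": 5, "gotos": -20,
}

def style_score(satisfied: dict) -> int:
    """Sum the weights of the criteria flagged True (gotos present = penalty)."""
    return sum(WEIGHTS[name] for name, flag in satisfied.items() if flag)

# A program meeting every positive criterion and containing no gotos:
all_good = {name: (name != "gotos") for name in WEIGHTS}
print(style_score(all_good))  # 100
```

Using a goto then subtracts 20 from the score under this rule.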

3. Plagiarism detection
● Types of detection
  o Attribute-counting systems
    - Different systems use different features in their calculations
    - Better at detecting smaller alterations
  o Structure metrics
    - Use string matching on source code or token streams
    - Better at detecting sophisticated hiding techniques
  o Hybrid
    - A mix of both approaches
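An attribute-counting detector in miniature: reduce each submission to a vector of counted features and compare the vectors. The feature choice (non-blank lines, words, characters) and the ratio-based similarity are illustrative assumptions, not a specific published system:

```python
def attribute_vector(source: str) -> tuple:
    """A toy attribute profile: non-blank lines, words, and characters."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    return (len(lines), len(source.split()), len(source))

def similarity(a: str, b: str) -> float:
    """Mean per-attribute ratio; 1.0 means identical profiles."""
    va, vb = attribute_vector(a), attribute_vector(b)
    return sum(min(x, y) / max(x, y) for x, y in zip(va, vb) if max(x, y)) / len(va)

original = "int total = 0;\nfor (i = 0; i < n; i++) total += a[i];\n"
renamed  = "int sum = 0;\nfor (j = 0; j < n; j++) sum += b[j];\n"
print(similarity(original, renamed))  # high: renaming barely changes the profile
```

This illustrates why attribute counting catches small alterations such as renaming, but not deeper restructurings that change the counted features.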

Levels of plagiarism
● Verbatim copying
● Changing comments
● Changing whitespace and formatting
● Renaming identifiers
● Reordering code blocks
● Reordering statements within code blocks
● Changing the order of operands/operators in expressions
● Changing data types
● Adding redundant statements or variables
● Replacing control structures with equivalent structures

Metrics used
● Number of lines/words/characters
  o (Stephens, 2001), (Jones, 2001), ...
We are using our “universal” eCST representation for “many” programming languages, which could lead us to detecting plagiarism between, e.g., Java and Erlang.

4. Student portfolio generation
● A collection of all significant work completed
  o Major coursework
  o Independent studies
  o Senior projects
● Critical in many areas of the arts
  o Typically used during the interview process
  o Shows the artistic style and capabilities of the candidate
  o Lets candidates observe their own progression in terms of depth and capability
● In computer science
  o Software quality metrics applied to student portfolio submissions
  o Maximizes the comparative evaluation of portfolio components
  o Provides a quantifiable evaluation of submitted elements

Student portfolio generation
● Should include exemplars of all significant work produced
● Students select which items are included
● Public portfolios are distinct from internal portfolios
● Open to inspection by the student and the department
● A number of commonly used software quality metrics exist
● These are too coarse to give students sufficient feedback
● Hence the requirement for pedagogically oriented software metrics

Conclusion
● Not as bad as it first seemed
● Various standard metrics are used
● Some new ones have been devised
● Some metrics are still needed