Published by Doreen Fisher; modified over 9 years ago.
By: TARUN MEHROTRA 12MCMB11
More time is spent maintaining existing software than developing new code: Resources in M = 3 × (Resources in D). Metrics should therefore be made for: 1) the difficulty experienced by a programmer in understanding a program, and 2) the speed and accuracy with which modifications are made.
In 1972, M. Halstead published his theory of Software Science. According to him, the amount of effort required to generate a program can be derived from four quantities: 1) the number of distinct operators, 2) the number of distinct operands, 3) the total frequency of operators, and 4) the total frequency of operands. From these four quantities, Halstead calculates the number of mental comparisons required to generate a program.
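These four counts combine into Halstead's effort measure E = D × V. A minimal Python sketch of the standard formulation (the example counts used in testing are illustrative, not drawn from the study's programs):

```python
import math

def halstead_effort(n1, n2, N1, N2):
    """Halstead's effort E from the four basic counts:
    n1, n2 = number of distinct operators / distinct operands;
    N1, N2 = total occurrences of operators / operands."""
    vocabulary = n1 + n2                      # n = n1 + n2
    length = N1 + N2                          # N = N1 + N2
    volume = length * math.log2(vocabulary)   # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)         # D = (n1/2) * (N2/n2)
    return difficulty * volume                # E = D * V
```

Halstead interprets E as proportional to the number of elementary mental discriminations needed to write the program, which is why it is offered as a psychological complexity measure.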
More recently, T. McCabe developed a definition of complexity based on the "decision structure of a program." Simply stated, McCabe's metric counts the number of basic control-path segments which, when combined, will generate every possible path through a program.
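A sketch of McCabe's v(G) under its standard graph-theoretic definition (the function names and example counts are my own, not from the report):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's v(G) = e - n + 2p for a control-flow graph
    with e edges, n nodes, and p connected components."""
    return edges - nodes + 2 * components

def v_from_decisions(decision_points):
    """For single-entry, single-exit code, v(G) also equals the
    number of binary decision predicates plus one."""
    return decision_points + 1
```

For example, a routine whose flow graph has 9 edges and 8 nodes, or equivalently two binary decisions, has v(G) = 3: three linearly independent paths suffice to generate every path through it.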
There is no exact mathematical relationship between the two metrics, since no causal relationship exists between the number of operators, operands, and control paths. Yet as the number of control paths increases, an increase in the number of operators and operands is anticipated, so a significant correlation between the metrics would not be surprising.
There are two different kinds of complexity to be assessed: Computational: refers to "the quantitative aspects of the solutions to computational problems," e.g., comparing the efficiency of alternative algorithmic solutions. Psychological: refers to characteristics of software which make it difficult to understand and work with. No simple relationship between the two is expected.
This report investigates the extent to which the Halstead and McCabe metrics assess the psychological complexity of understanding and modifying software. The two experiments reported were designed to investigate factors which influence: -> understanding an existing program (Experiment 1); -> accurately implementing modifications to it (Experiment 2).
1) Participants: Each experiment involved 36 programmers, all with a working knowledge of Fortran. 2) Procedures: As an introductory exercise, material was prepared for each participant with written instructions on the experimental task. All were presented with the same short Fortran program (Euclid's algorithm).
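The introductory stimulus was a Fortran program, which is not reproduced here; as an illustration of the same logic, Euclid's algorithm in Python:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)
    until the remainder is zero; the survivor is the GCD."""
    while b != 0:
        a, b = b, a % b
    return a
```

Its single loop and simple data flow make it a good warm-up task before the longer experimental programs.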
3) Independent Variables: Program class: in Experiment 1, nine programs were chosen, varying from 36 to 57 statements; in Experiment 2, three of the nine programs from Experiment 1 were used. Complexity of control flow: control-flow structures at three levels of complexity were defined for each program.
Variable-name mnemonicity: in Experiment 1, three levels of mnemonicity for variable names were manipulated independently of program structure. Comments: in Experiment 2, three levels of commenting were used: 1) Global: comments appeared at the front of the program; 2) In-line: comments were placed throughout the program; 3) None.
Modifications: in Experiment 2, three types of modification were selected for each program. Experimental design: Experiment 1: nine programs, each with 3 types of control flow and 3 levels of variable mnemonicity; total = 81 programs. Experiment 2: three programs, each with 3 types of control flow, 3 levels of commenting, and modifications at 3 levels of difficulty; total = 81 programs.
Complexity Measures -> Halstead's effort metric (E): E = D × V, where volume V = (N1 + N2) log2(n1 + n2) and difficulty D = (n1/2)(N2/n2).
-> McCabe's cyclomatic complexity v(G) = e − n + 2p, computed from the edges, nodes, and connected components of the control-flow graph. -> Length: total number of Fortran statements, excluding comments.
4) Dependent Variables -> In Experiment 1: percent of statements correctly recalled. -> In Experiment 2: 1) accuracy of the implemented modification; 2) time taken by the participant to perform the task. The performance measures were therefore: 1) percent of changes correctly implemented in the program; 2) number of minutes required to complete the changes.
1. Experimental Manipulations 2. Distributional Information on Complexity Measures 3. Correlations with Performance 4. Moderator Effects
Experimental manipulations: Experiment 1: a mean of 51 percent of the statements was correctly recalled across the experimental tasks. Performance on naturally structured programs was superior to that on unstructured programs. Differences in the mnemonicity of variable names had no effect on performance.
Experiment 2: an average accuracy score of 62 percent was achieved over all implemented modifications. The average time to complete a modification was 17.9 minutes. Accuracy and time were uncorrelated.
Distributional Information on Complexity Measures:
Analysis: Substantial intercorrelations were observed among the complexity metrics in both experiments. Experiment 1: Halstead and McCabe were strongly related, while their correlations with length were moderate. Experiment 2: all three measures were strongly correlated on both unmodified and modified programs.
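The intercorrelations reported here are presumably Pearson product-moment correlations; a minimal sketch of that computation (the function name and sample data used in testing are illustrative, not the study's):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two
    equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 or −1 indicates a strong linear relationship between a metric and a performance measure; values near 0 indicate none.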
Correlations with Performance (Experiment 1)
Analysis: These correlations were all negative, indicating that fewer lines were correctly recalled as the level of complexity represented by these three measures increased. Little difference was observed between the correlations in the aggregated and unaggregated data.
There were three data points (circled in Fig. 1) which were developed by averaging across three participants who consistently outscored others. High scores on these three data points resulted from the failure of random assignment. With the three data points of the exceptional group removed, the correlations for all three complexity metrics improved (third row of Table III).
Correlations of Complexity Metrics with Accuracy and Time in Experiment 2
Analysis: The correlations computed from the aggregated data were slightly larger than those computed from the unaggregated data. The complexity metrics were generally more strongly correlated with time to completion than with the accuracy of the implementation, especially on the modified programs. The largest number of significant correlations was observed for metrics computed from the modified programs.
Moderator Effects
Analysis: In Experiment 1, Halstead's E and McCabe's v(G) correlated significantly with performance only on unstructured programs. A similar pattern of correlations emerged in Experiment 2 between Halstead's E and the time to complete the modification.
Broader Difference
Analysis: Differences in correlations between the in-line and no-comment conditions either achieved or bordered on significance in all cases.
Analysis: Finally, relationships between complexity metrics and performance measures were moderated by the participants' years of professional programming experience. As is evident in Table VII, the complexity metrics were more strongly related to performance among less experienced programmers in all cases.
THANK YOU