CEN 4021 Software Engineering II, 21st Lecture
Instructor: Masoud Sadjadi (http://www.cs.fiu.edu/~sadjadi/, sadjadi@cs.fiu.edu)
Monitoring (POMA): Analysis and Evaluation of Information
Acknowledgements: Dr. Onyeka Ezenwoye, Dr. Peter Clarke
Agenda: Monitoring (POMA) – Analysis and Evaluation of Information
Analysis and Evaluation of Information
Any data collected must be reliable, accurate, and valid.
–Reliable data: data collected and recorded according to the defined rules of the measurement and metric.
–Accurate data: data collected and tabulated at the defined level of precision of the measurement and metric.
–Valid data: data collected, tabulated, and applied according to the defined intention of the measurement.
Analysis and Evaluation (cont.)
The level of data accuracy covers rounding rules and the number of significant figures to report. Validity addresses whether the data are applicable for assessing the particular issue or measuring the particular attribute.
Analysis and Evaluation (cont.)
Consider the following computed measurement of average problem level:
Avg. problem level = SUM(# of severity-k problems × k) / n
Where:
–n = total # of problems found
–SUM = the summation over the severity levels k
–k = a discrete severity value between 1 and 4
For example, a computed average of 2.7 can be rounded up to a problem level of 3.
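The average-problem-level formula can be sketched as follows; the severity counts are assumed sample data (chosen so the result matches the 2.7-rounds-to-3 example), not figures from the lecture.

```python
# Assumed sample data: k -> number of severity-k problems found.
severity_counts = {1: 1, 2: 3, 3: 4, 4: 2}

n = sum(severity_counts.values())  # total number of problems found
# Weighted average: SUM(# of severity-k problems * k) / n
avg_level = sum(k * count for k, count in severity_counts.items()) / n

print(avg_level)         # 2.7
print(round(avg_level))  # rounds up to 3, as on the slide
```

Whether this rounded average is a valid characterization of the product is exactly the question the next slide raises.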
Analysis and Evaluation (cont.)
Is the average valid in this case? Care needs to be taken in considering the validity of data when analyzing an attribute, and especially when using derived information, that is, after applying some computation or transformation to the raw data.
Analysis and Evaluation (cont.)
Distribution of Data
One of the simplest forms of analysis is to look at the distribution of the collected data.
–Data distribution: a description of a collection of data that shows the spread of the values and the frequency of occurrence of each value.
Analysis and Evaluation (cont.)
Data distribution examples (# of problems by severity level and by functional area):
–Skew of the distribution (# of problems per severity level):
Severity level 1: 23
Severity level 2: 46
Severity level 3: 79
Severity level 4: 95
–Range of data values (# of problems per functional area):
Functional area 1: 2
Functional area 2: 7
Functional area 3: 3
Functional area 4: 8
Functional area 5: 0
Do these examples indicate where the product is good or bad?
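The per-functional-area counts above can be summarized with a couple of simple distribution statistics; this sketch uses the counts from the slide.

```python
# Number of problems found per functional area (counts from the slide).
by_area = {1: 2, 2: 7, 3: 3, 4: 8, 5: 0}

values = list(by_area.values())
data_range = max(values) - min(values)       # spread of the data values
worst_area = max(by_area, key=by_area.get)   # area with the most problems

print(data_range)  # 8
print(worst_area)  # 4
```

Note that a high raw count alone does not say whether an area is "bad"; the normalization slide later in this lecture addresses that.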
Analysis and Evaluation (cont.)
Data distribution examples
–Data trends (# of problems found per week):
Week 1: 20
Week 2: 23
Week 3: 45
Week 4: 67
Week 5: 35
Week 6: 15
Analysis and Evaluation (cont.)
Centrality and Dispersion
Centrality and dispersion analysis provides a way to compare groups of data. It gives the manager a way to characterize a set of related data, whether those data deal with product quality, project productivity, or some other attribute.
–Centrality analysis: an analysis of a data set to find the typical value of that data set.
Analysis and Evaluation (cont.)
Centrality and Dispersion
–Average value: the arithmetic mean; it can be strongly affected by one or more extreme data points.
–Median value: the value that divides the data into upper and lower halves; it is less sensitive to extreme data points.
What about data trends? What about extreme data points?
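The difference between the two centrality measures shows up as soon as there is an extreme data point. This sketch reuses the weekly trend data from the earlier slide and appends one invented extreme week (400) to illustrate the effect.

```python
import statistics

# Weekly problem counts from the trend slide, plus one assumed extreme week.
data = [20, 23, 45, 67, 35, 15, 400]

print(statistics.median(data))  # 35: unaffected by the extreme point
print(statistics.mean(data))    # ~86.4: pulled far upward by the 400
```

A manager relying on the mean here would badly misjudge a "typical" week, which is why the median is often preferred when extreme points are present.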
Analysis and Evaluation (cont.)
Centrality and Dispersion
Standard deviation
–Sometimes it is useful to know how the distribution of data is dispersed from the central value (the average or the median).
–The central value is more difficult to rely on when there is a large dispersion around it.
Analysis and Evaluation (cont.)
Standard deviation
–A very common dispersion measurement is the standard deviation (see the textbook for an example).
Control Charts
–Control chart: a chart used to assess and control the variability of some process or product characteristic. It usually involves establishing upper and lower limits (the control limits) on data variation around the data set's average value. If an observed data value falls outside the control limits, it triggers an evaluation of the characteristic.
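A minimal control-chart check can be sketched as follows: compute the mean and sample standard deviation, set the control limits at mean ± 2σ (the 2σ multiplier is an assumption; textbooks also commonly use 3σ), and flag any observations outside the limits. The data are the weekly counts from the trend slide plus one invented extreme week.

```python
import statistics

# Weekly problem counts (from the trend slide) plus an assumed spike of 400.
data = [20, 23, 45, 67, 35, 15, 400]

mean = statistics.mean(data)
sigma = statistics.stdev(data)                 # sample standard deviation
lower, upper = mean - 2 * sigma, mean + 2 * sigma

# Values outside the control limits would trigger an evaluation.
out_of_control = [x for x in data if not (lower <= x <= upper)]
print(out_of_control)  # [400]
```

Only the spike is flagged; the ordinary week-to-week variation stays inside the limits.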
Analysis and Evaluation (cont.)
Data Smoothing: Moving Averages
–To "smooth" out variations in the data and prevent an alarm from being raised by a few spikes, the data from two or three weeks are combined; the resulting combined value is called the moving average.
–Moving average: a technique for expressing data by computing the average of a fixed grouping of data values (e.g., data for a fixed period); it is often used to suppress the effect of a single extreme data point.
–Data smoothing: a technique used to decrease the effects of individual, extreme variability in data values.
–See Table 10.1 in the textbook.
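A three-week moving average over the weekly counts from the trend slide can be sketched like this (the window size of 3 follows the "two or three weeks" suggestion above):

```python
# Weekly problem counts from the trend slide.
weekly = [20, 23, 45, 67, 35, 15]

window = 3
moving_avg = [sum(weekly[i:i + window]) / window
              for i in range(len(weekly) - window + 1)]

print([round(v, 1) for v in moving_avg])  # [29.3, 45.0, 49.0, 39.0]
```

The week-4 peak of 67 is damped to 49 in the smoothed series, so a single spike is less likely to raise a false alarm.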
Analysis and Evaluation (cont.)
Data Correlation
–Correlating attributes is a very useful tool for the software project manager, but it must be used carefully: data correlation speaks only to the potential existence of a relationship between attributes; it does not necessarily imply cause and effect.
–Data correlation: a technique that analyzes the degree of relationship between sets of data.
–Linear regression: a technique that estimates the relationship between two sets of data by fitting a straight line to the two sets of data values.
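Correlation and a least-squares line can be computed by hand from sums of deviations. The two data sets below (module size vs. defects found) are invented sample values, not data from the lecture.

```python
# Assumed sample data: module size in LOC vs. defects found per module.
x = [100, 200, 300, 400, 500]
y = [2, 5, 6, 9, 11]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
var_x = sum((a - mx) ** 2 for a in x)
var_y = sum((b - my) ** 2 for b in y)

r = cov / (var_x * var_y) ** 0.5   # Pearson correlation coefficient
slope = cov / var_x                # least-squares fit: y = slope * x + intercept
intercept = my - slope * mx

print(round(r, 3))      # 0.992: a strong linear relationship
print(round(slope, 3))  # 0.022 defects per additional LOC
```

Even with r near 1, the slide's caution applies: the fit says size and defects move together, not that size causes defects.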
Analysis and Evaluation (cont.)
Normalization of Data
–Normalizing data: a technique used to bring data characterizations to some common or standard level so that comparisons become more meaningful.
–Rather than just collecting the raw count for a functional area, some measurement of the area's size or complexity should be used to normalize the data.
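Normalization can be sketched by dividing the raw per-area defect counts from the earlier distribution slide by each area's size. The sizes (in KLOC) are assumed sample values, so the resulting densities are only illustrative.

```python
# Raw defect counts per functional area (counts from the distribution slide).
defects = {1: 2, 2: 7, 3: 3, 4: 8, 5: 0}
# Assumed sizes of each functional area in KLOC (invented for illustration).
size_kloc = {1: 1.0, 2: 10.0, 3: 2.0, 4: 4.0, 5: 0.5}

# Normalize: defects per KLOC, so areas of different sizes are comparable.
density = {area: defects[area] / size_kloc[area] for area in defects}
print(density[2])  # 0.7 defects/KLOC
print(density[4])  # 2.0 defects/KLOC
```

Under these assumed sizes, area 2's raw count of 7 turns out to be a lower defect density than area 4's count of 8, which is the kind of reversal normalization is meant to expose.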