Slide 1: Software Prediction Models
Forecasting the costs of software development
Slide 2: Prediction Study Outcomes Vary
- Estimation by analogy beats regression... or not
- Classification and regression trees (CART) beat regression... or not
- Artificial neural networks beat regression... or not
Slide 3: Why Are the Results Conflicting?
- Poor data or research procedure
- Complex techniques may require expert users, so applications may vary
- Small sample sizes
- A flawed measurement process
- Selective use of differing parameters may produce different rankings
Slide 4: Key Terms
- Accuracy indicator:
  - Some measure of a process
  - A summary statistic based on that measure
- Leave-one-out cross-validation
- Arbitrary function approximator taxonomy:
  - Many-data versus sparse-data
  - Linear versus nonlinear
  - Supervised versus unsupervised
- Reliability versus validity
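Leave-one-out cross-validation, listed among the key terms above, can be sketched in a few lines: each project is held out in turn, the model is fit on the remaining projects, and the held-out project is predicted. The `fit`/`predict` functions below are illustrative stand-ins, not the models from the study.

```python
# Hypothetical sketch of leave-one-out cross-validation for an effort
# prediction model. `fit` and `predict` are illustrative stand-ins.

def loo_pairs(data, fit, predict):
    """For each project, train on all other projects, predict the
    held-out one, and collect (actual, prediction) pairs."""
    pairs = []
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]   # every project except i
        model = fit(train)
        size, actual = data[i]
        pairs.append((actual, predict(model, size)))
    return pairs

# Toy model: predict effort as the mean effort/size ratio of the training set.
def fit(train):
    return sum(effort / size for size, effort in train) / len(train)

def predict(ratio, size):
    return ratio * size

projects = [(10, 50), (20, 90), (30, 160), (40, 210)]  # (size, effort)
pairs = loo_pairs(projects, fit, predict)
```

Each of the resulting (actual, prediction) pairs can then be fed into any of the accuracy indicators defined on the following slides.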
Slide 5: Indicator 1: MMRE
The mean magnitude of relative error (MMRE) is the average over all observations of MRE = |actual - prediction| / actual.
Claimed advantages of MMRE:
- Comparable across data sets*
- Independent of units
- Comparable across differing prediction models*
- Scale independence
*A hypothesis challenged by this paper
Slide 6: Indicator 2: MER
The magnitude of the error relative to the estimate (MER) is defined as MER = |actual - prediction| / prediction.
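Because MER divides by the prediction rather than the actual, the same absolute miss scores differently depending on its direction, as a quick sketch shows:

```python
# Minimal sketch: MER = |actual - prediction| / prediction.
def mer(actual, prediction):
    return abs(actual - prediction) / prediction

# The same 50-unit miss gives different MER values by direction:
print(mer(150, 100))  # underestimate by 50 -> 0.5
print(mer(100, 150))  # overestimate by 50 -> 1/3
```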
Slide 7: Indicator 3: AR
The absolute residual (AR) is defined as AR = |actual - prediction|.
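Unlike the unitless MRE and MER, AR keeps the original units (e.g. person-hours), so its summary statistics are in those units too. A small sketch:

```python
# Minimal sketch: the absolute residual keeps the original units,
# unlike the unitless MRE and MER.
def ar(actual, prediction):
    return abs(actual - prediction)

residuals = [ar(a, p) for a, p in [(120, 100), (80, 95), (200, 200)]]
mean_ar = sum(residuals) / len(residuals)  # mean AR, in effort units
```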
Slide 8: Other Measures
- Standard deviation (SD)
- Relative standard deviation (RSD)
- Log standard deviation (LSD)
- Balanced relative error (BRE)
- Inverted balanced relative error (IBRE)
Slide 9: Standard Deviation of Residuals, Denoted SD
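The slide's formula was not captured in this transcript. A standard definition of the sample standard deviation of the residuals, consistent with the AR defined earlier, is:

```latex
\mathrm{SD} = \sqrt{\frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{n-1}}
```

where \(y_i\) is the actual effort of project \(i\), \(\hat{y}_i\) the predicted effort, and \(n\) the number of projects.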
Slide 10: Algebraic Simplification
Slide 11: Relative Standard Deviation (RSD)
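The formula image was lost in this transcript. Assuming the definition used in the related simulation-study literature, RSD scales each residual by the predictor variable \(x_i\) (e.g. project size) rather than by the actual effort:

```latex
\mathrm{RSD} = \sqrt{\frac{\sum_{i=1}^{n}\left(\frac{y_i - \hat{y}_i}{x_i}\right)^2}{n-1}}
```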
Slide 12: Log Standard Deviation (LSD)
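The formula image was lost here as well. Assuming the definition used in the related simulation-study literature, LSD works on log-scale residuals \(e_i = \ln y_i - \ln \hat{y}_i\), with a bias-correction term based on \(s^2\), an estimator of the variance of the \(e_i\):

```latex
\mathrm{LSD} = \sqrt{\frac{\sum_{i=1}^{n}\left(e_i + \frac{s^2}{2}\right)^2}{n-1}},
\qquad e_i = \ln y_i - \ln \hat{y}_i
```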
Slide 13: Balanced Relative Error (BRE)
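The formula image was not captured. The commonly cited definition of the balanced relative error divides the absolute residual by the smaller of the actual and the prediction, so over- and underestimates of the same magnitude are penalized symmetrically:

```latex
\mathrm{BRE}_i = \frac{\left|y_i - \hat{y}_i\right|}{\min\left(y_i,\ \hat{y}_i\right)}
```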
Slide 14: Inverted Balanced Relative Error (IBRE)
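The formula image was not captured. Assuming the counterpart definition to BRE above, the inverted balanced relative error divides by the larger of the two quantities instead:

```latex
\mathrm{IBRE}_i = \frac{\left|y_i - \hat{y}_i\right|}{\max\left(y_i,\ \hat{y}_i\right)}
```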