1
Automating The Selection of a Simulation Warm-up Period
Stewart Robinson, Katy Hoad, Ruth Davies
Warwick Business School
Cardiff University - October 2008
The AutoSimOA Project: a 3-year, EPSRC-funded project in collaboration with SIMUL8 Corporation.
http://www.wbs.ac.uk/go/autosimoa
2
Research Aim
To create an automated system for advising a non-expert user on how to obtain accurate measures of model performance, i.e. warm-up, run-length and number of replications.
For implementation into simulation software.
AutoSimOA = Automated SIMulation Output Analysis
3
[Flow diagram: the Output Data Analyser. Output data from the simulation model feeds warm-up, run-length and replications analyses; the analyser decides whether to use replications or a long run, obtains more output data where needed, and returns a recommendation if one is possible.]
4
The Initial Bias Problem
Model may not start in a “typical” state. Can cause initial bias in the output.
Many methods proposed for dealing with initial bias, e.g.:
– Initial steady-state conditions
– Run model for ‘long’ time
– Deletion of initial transient data (‘warm-up period’)
5
The Initial Bias Problem
6
This project uses: deletion of the initial transient data by specifying a warm-up period (or truncation point).
The question is: how do you estimate the length of the warm-up period required?
7
Methods fall into 5 main types:
1. Graphical Methods
2. Heuristic Approaches
3. Statistical Methods
4. Initialisation Bias Tests
5. Hybrid Methods
8
Literature search – 44 methods
Summary of methods and literature references on the project web site: http://www.wbs.ac.uk/go/autosimoa
9
Short-listing warm-up methods for automation using the literature
Short-listing criteria:
» Accuracy & robustness
» Ease of automation
» Generality
» Computer running time
10
Short-listing results: reasons for rejection of methods
11
Short-listing results: 6 methods taken forward to testing
Statistical methods:
– Goodness of Fit (GoF) test
– Algorithm for a Static Data set (ASD)
– Algorithm for a Dynamic Data set (ADD)
Heuristics:
– MSER-5
– Kimbler’s Double Exponential Smoothing
– Euclidean Distance Method (ED)
12
Preliminary testing of shortlisted methods
Rejected methods:
– ASD & ADD required a prohibitively large number of replications.
– GoF & Kimbler’s method consistently and severely underestimated the truncation point.
– ED failed to give any result on the majority of occasions.
MSER-5 was the most accurate and robust method.
13
MSER-5 warm-up method
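The MSER-5 statistic itself is shown graphically on the original slide and is not reproduced here. As a hedged illustration only, the following Python sketch implements MSER-5 as it is commonly described in the literature: the output series is batched into non-overlapping batch means of size 5, the truncation point is the batch index d minimising MSER(d) = Σ_{j>d}(z_j − z̄_d)² / (m − d)², and a solution falling in the second half of the series is treated as a rejection (the "Lsol > n/2" rule referred to later in these slides). Function and variable names are assumptions, not taken from the project.

```python
import numpy as np

def mser5(y, batch_size=5):
    """Hedged sketch of the MSER-5 warm-up heuristic.

    Batches the output series y into non-overlapping batch means of
    size 5, then picks the deletion point d (in batches) minimising
    MSER(d) = sum_{j>d} (z_j - zbar_d)^2 / (m - d)^2.

    Returns (warmup_obs, rejected): the warm-up length in original
    observations, and a flag marking an 'end point' rejection, i.e.
    the minimum fell in the second half of the series.
    """
    y = np.asarray(y, dtype=float)
    m = len(y) // batch_size                      # number of complete batches
    z = y[:m * batch_size].reshape(m, batch_size).mean(axis=1)

    mser = np.empty(m - 1)
    for d in range(m - 1):                        # candidate deletion points (in batches)
        tail = z[d:]
        mser[d] = np.sum((tail - tail.mean()) ** 2) / len(tail) ** 2

    d_star = int(np.argmin(mser))
    rejected = d_star >= m // 2                   # Lsol in second half -> reject, need more data
    return d_star * batch_size, rejected
```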
14
Further Testing of MSER-5
1. Artificial data – controllable & comparable: initial bias functions, steady state functions.
2. Full factorial design.
3. Set of performance criteria.
15
Artificial Data Parameters
Parameter: Levels
Data type: Single run; data averaged over 5 reps
Error type: N(1,1), Exp(1)
Auto-correlation: None, AR(1), AR(2), MA(2), AR(4), ARMA(5,5)
Bias severity: 1, 2, 4
Bias length: 0%, 10%, 40%, 100%
Bias direction: Positive, Negative
Bias shape: 7 shapes
16
Bias shapes tested (formulas shown graphically on the original slide): Mean Shift, Linear, Quadratic, Exponential, Oscillating (decreasing).
17
Add initial bias to steady state by superposition: the bias function, a(t), is added onto the end of the steady state function (an illustrative sketch follows below).
2. Full factorial design: 3048 types of artificial data set; MSER-5 run with each type 100 times.
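As an illustration of the superposition idea, here is a minimal sketch of how one artificial test series might be built: a steady-state AR(1) function with unit-mean normal errors, plus an exponentially decaying bias function a(t) of a chosen severity and length superposed on the initial observations. The functional forms, parameter names and the placement of the bias over the initial transient are assumptions made for illustration; the study's exact bias and steady-state functions are those shown graphically on the preceding slides.

```python
import numpy as np

def make_artificial_series(n=1000, bias_length=100, bias_severity=2.0,
                           phi=0.5, seed=None):
    """Hedged sketch of one artificial data set: an AR(1) steady state
    around a mean of 1 plus an exponentially decaying bias a(t)
    superposed on the first `bias_length` observations.  All
    functional forms here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, size=n)

    # Steady-state function: AR(1) process around a mean of 1
    steady = np.empty(n)
    steady[0] = 1.0
    for t in range(1, n):
        steady[t] = 1.0 + phi * (steady[t - 1] - 1.0) + noise[t]

    # Bias function a(t): exponential decay scaled by the bias severity
    t = np.arange(bias_length)
    a = bias_severity * np.exp(-5.0 * t / bias_length)

    series = steady.copy()
    series[:bias_length] += a          # superposition of bias onto the steady state
    return series
```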
18
3. Performance Criteria
i. Coverage of true mean.
ii. Closeness of estimated truncation point (Lsol) to true truncation point (L).
iii. Percentage bias removed by truncation (sketched below).
iv. Analysis of the pattern & frequency of rejections of Lsol (i.e. Lsol > n/2).
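A hedged sketch of how criterion (iii) could be operationalised when the injected bias function a(t) is known: the percentage of the total absolute bias that falls inside the truncated portion. This is an assumed definition for illustration, not necessarily the exact measure used in the study.

```python
import numpy as np

def pct_bias_removed(a, L_sol):
    """Share of the total absolute injected bias a(t) that lies in the
    truncated portion [0, L_sol) - an assumed operationalisation of
    criterion (iii), for illustration only.
    """
    a = np.abs(np.asarray(a, dtype=float))
    total = a.sum()
    return 100.0 if total == 0 else 100.0 * a[:int(L_sol)].sum() / total
```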
19
MSER-5 Results
i. Coverage of true mean: does the true mean fall into the 95% CI for the estimated mean?
Non-truncated data sets | Truncated data sets | % of cases
yes | yes | 7.7%
yes | no | 0%
no | no | 19.8%
no | yes | 72.5%
20
ii. Closeness of Lsol to L: wide range of Lsol values.
e.g. positive bias functions, single run data, N(1,1) errors, MA(2) auto-correlation, bias severity of 2 and true L = 100.
21
iii. Percentage bias removed by truncation.
22
Effect of data parameters on bias removal
No significant effect: error type, bias direction.
Significant effect: data type, auto-correlation type, bias shape, bias severity, bias length.
23
More bias removed by using averaged replications rather than a single run.
24
The stronger the auto-correlation, the less accurate the bias removal. Effect greatly reduced by using averaged data.
25
The more sharply the initial bias declines, the more likely MSER-5 is to underestimate the warm-up period, and the less bias it removes.
26
As the bias severity increases, MSER-5 removes a higher percentage of the bias.
27
Longer bias is removed slightly more efficiently than shorter bias. Shorter bias gives more overestimations, partly because overestimations of longer bias are more likely to be rejected.
28
iv. Lsol rejections
Rejections caused by: high auto-correlation, bias close to or over n/2, or a smooth end to the data (an ‘end point’ rejection).
Averaged data slightly increases the probability of an ‘end point’ rejection but increases the probability of more accurate L estimates.
29
Giving more data to MSER-5 in an iterative fashion produces a valid Lsol value where previously the Lsol value had been rejected, e.g. for ARMA(5,5) data.
30
Testing MSER-5 with data that has no initial bias; want Lsol = 0.
Lsol values | Percentage of cases
Lsol = 0 | 71%
Lsol ≤ 50 | 93%
Lsol > 50 mainly due to the highest auto-correlated data sets - AR(1) & ARMA(5,5).
Rejected Lsol values: 5.6% of the 2400 Lsol values produced; 93% of these came from the highest auto-correlated data, ARMA(5,5).
31
Testing MSER-5 with data that has 100% bias; want a 100% rejection rate. Actual rate = 61%.
32
Summary
MSER-5 is the most promising method for automation:
– Not model or data type specific.
– No estimation of parameters needed.
– Can function without user intervention.
– Shown to perform robustly and effectively for the majority of data sets tested.
– Quick to run.
– Fairly simple to understand.
33
Heuristic framework around MSER-5
– Iterative procedure for procuring more data when required (a hedged sketch follows below).
– ‘Failsafe’ mechanism to deal with the possibility of data not being in steady state, or of insufficient data being provided when it is highly auto-correlated.
– Being implemented in SIMUL8.
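A minimal sketch of what such an iterative wrapper might look like, reusing the mser5() sketch shown earlier: if the candidate truncation point is rejected (it falls in the second half of the series), more output data is requested and the analysis is re-run, with an upper limit acting as a failsafe. The function names, increment sizes and the data-fetching callback are assumptions, not the SIMUL8 implementation.

```python
def estimate_warmup(get_output, initial_obs=1000, extra_obs=1000, max_obs=20000):
    """Hedged sketch of an iterative framework around MSER-5.

    `get_output(n)` is assumed to return the first n observations of
    the simulation output series; `mser5()` is the sketch given
    earlier.  If the truncation point is rejected, procure more data
    and retry; stop (failsafe) once `max_obs` is reached.
    """
    n = initial_obs
    while n <= max_obs:
        warmup, rejected = mser5(get_output(n))
        if not rejected:
            return warmup            # valid truncation point found
        n += extra_obs               # procure more output data and re-run
    return None                      # failsafe: no steady state detected / insufficient data
```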