Something “feels the same” regardless of scale. What is that? It is self-similar in nature. The classic example: the Koch snowflake fractal.
Categories:
- Exact self-similarity: strongest type
- Approximate self-similarity: loose form
- Statistical self-similarity: weakest type
- Approximate self-similarity: recognisably similar, but not exactly so, e.g. the Mandelbrot set
- Statistical self-similarity: only numerical or statistical measures are preserved across scales
In the case of stochastic objects, e.g. time series, self-similarity is used in the distributional sense.
Recently, network packet traffic has been identified as self-similar. Current network traffic modeling using Poisson distributions (etc.) does not take the self-similar nature of traffic into account, which leads to inaccurate modeling of network traffic.
A Poisson process:
- when observed on a fine time scale, appears bursty
- when aggregated on a coarse time scale, flattens (smooths) to white noise
A self-similar (fractal) process:
- maintains its bursty characteristic when aggregated over a wide range of time scales
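This smoothing-under-aggregation can be checked numerically. The sketch below (illustrative parameters, plain-stdlib Python, helper names are ours) simulates Poisson bin counts and shows that their coefficient of variation shrinks as the aggregation level m grows, which is exactly the flattening described above; a self-similar trace would not smooth out this way.

```python
import random

# Sketch: Poisson arrival counts per fine time bin, then aggregated
# into coarser bins. For a Poisson process the coefficient of
# variation of the counts shrinks like 1/sqrt(m) as m grows,
# so the trace looks smoother at coarser time scales.
random.seed(0)

def poisson_counts(rate, n_bins):
    """Counts per unit-width bin, via exponential inter-arrival times."""
    counts, t = [0] * n_bins, 0.0
    while True:
        t += random.expovariate(rate)   # next arrival time
        if t >= n_bins:
            break
        counts[int(t)] += 1
    return counts

def aggregate(counts, m):
    """Sum non-overlapping blocks of size m (coarser time scale)."""
    return [sum(counts[i:i + m]) for i in range(0, len(counts) - m + 1, m)]

def coeff_var(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return var ** 0.5 / mean

fine = poisson_counts(rate=5.0, n_bins=10_000)
for m in (1, 10, 100):
    print(m, round(coeff_var(aggregate(fine, m)), 3))
```

The printed coefficient of variation drops by roughly a factor of sqrt(10) per step, the "flattening to white noise" of the Poisson model.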
[Figure: packets per time unit, Ethernet traffic, August ’89 trace]
Current model: bursty data streams, after aggregation, become smooth pattern streams.
Reality (self-similar): bursty data streams, after aggregation, remain bursty aggregate streams.
Consequence: inaccuracy.
Long-range dependence: autocorrelation decays slowly.
Hurst parameter (developed by Harold Hurst, 1965):
- H is a measure of “burstiness”, also considered a measure of self-similarity
- 0 < H < 1
- H increases as traffic increases, i.e., traffic becomes more self-similar
X = (X_t : t = 0, 1, 2, …) is a covariance-stationary random process (i.e. Cov(X_t, X_{t+k}) does not depend on t, for all k), with mean μ and variance σ².
Let X^(m) = {X_k^(m)} denote the new process obtained by averaging the original series X over non-overlapping sub-blocks of size m:
X_k^(m) = (X_{(k-1)m+1} + … + X_{km}) / m, k = 1, 2, …
Suppose the autocorrelation function satisfies r(k) ~ k^(-β) as k → ∞, 0 < β < 1.
e.g. X^(1) = 4, 12, 34, 2, -6, 18, 21, 35; then X^(2) = 8, 18, 6, 28 and X^(4) = 13, 17.
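The block-averaging step can be sketched in a few lines of Python (the helper name aggregate_mean is ours); the example reproduces the slide's X^(2) and X^(4) values.

```python
# Block-averaging that defines the aggregated process X^(m),
# checked against the slide's worked example.
def aggregate_mean(x, m):
    """Average non-overlapping sub-blocks of size m (drops a ragged tail)."""
    return [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]

x1 = [4, 12, 34, 2, -6, 18, 21, 35]
print(aggregate_mean(x1, 2))  # [8.0, 18.0, 6.0, 28.0]
print(aggregate_mean(x1, 4))  # [13.0, 17.0]
```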
X is exactly second-order self-similar if the aggregated processes have the same autocorrelation structure as X, i.e. r^(m)(k) = r(k), k ≥ 0, for all m = 1, 2, …
X is asymptotically second-order self-similar if the above holds in the limit: r^(m)(k) → r(k) as m → ∞.
Most striking feature of self-similarity: the correlation structure of the aggregated process does not degenerate as m → ∞.
[Figure: ACF vs. lag]
For short-range dependent processes, the correlation structures of the aggregated processes degenerate as m → ∞, i.e. r^(m)(k) → 0 as m → ∞ for k = 1, 2, 3, …
Short-range dependent processes show exponential decay of autocorrelations, i.e. r(k) ~ p^k as k → ∞, 0 < p < 1, so the sum Σ_k r(k) is finite.
Processes with long-range dependence are characterized by an autocorrelation function that decays hyperbolically as k increases: r(k) ~ k^(-β) as k → ∞, 0 < β < 1.
Important property: Σ_k r(k) = ∞. This is also called non-summability of the correlations.
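A quick numeric illustration of summability vs. non-summability (toy values p = 0.5, β = 0.5, chosen for this sketch): the geometric partial sums settle near a constant, while the hyperbolic ones keep growing.

```python
# Partial sums of a short-range ACF r(k) = p^k converge,
# while those of a long-range ACF r(k) = k^(-beta) diverge
# (non-summability of the correlations).
p, beta = 0.5, 0.5
for n in (10, 1000, 100000):
    srd = sum(p ** k for k in range(1, n + 1))        # -> p/(1-p) = 1
    lrd = sum(k ** -beta for k in range(1, n + 1))    # grows like 2*sqrt(n)
    print(n, round(srd, 4), round(lrd, 1))
```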
The intuition behind long-range dependence: while high-lag correlations are all individually small, their cumulative effect is important, giving rise to features drastically different from conventional short-range dependent processes.
Hurst parameter H, 0.5 < H < 1.
Three approaches to estimating H (based on properties of self-similar processes):
- Variance analysis of the aggregated processes
- Rescaled-range (R/S) analysis for different block sizes: time-domain analysis
- Periodogram analysis: frequency-domain analysis (Whittle estimator)
The variance of the aggregated processes decays as
Var(X^(m)) = a·m^(-β) as m → ∞, 0 < β < 1.
For short-range dependent processes (e.g. a Poisson process):
Var(X^(m)) = a·m^(-1) as m → ∞.
Plot Var(X^(m)) against m on a log-log plot; a slope greater than -1 is indicative of self-similarity.
[Figure: log-log variance-time plot; slope -1 (short-range dependent) vs. slope -0.7 (self-similar)]
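A minimal variance-time sketch of this procedure (our own helper names, i.i.d. Gaussian input standing in for short-range dependent traffic): fit the log-log slope of Var(X^(m)) against m by least squares and read off H via H = 1 - β/2. For this input the slope should come out near -1, i.e. H near 0.5; a self-similar trace would give a flatter slope and H > 0.5.

```python
import math, random

# Variance-time analysis: slope of log Var(X^(m)) vs log m gives -beta,
# and H = 1 - beta/2. For i.i.d. input, Var(X^(m)) ~ 1/m, so slope ~ -1.
random.seed(1)
x = [random.gauss(0, 1) for _ in range(100_000)]

def block_means(xs, m):
    return [sum(xs[i:i + m]) / m for i in range(0, len(xs) - m + 1, m)]

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((v - mu) ** 2 for v in xs) / len(xs)

ms = [1, 2, 4, 8, 16, 32, 64, 128]
logs = [(math.log(m), math.log(variance(block_means(x, m)))) for m in ms]

# Ordinary least-squares slope of log Var(X^(m)) against log m.
n = len(logs)
mx = sum(a for a, _ in logs) / n
my = sum(b for _, b in logs) / n
slope = sum((a - mx) * (b - my) for a, b in logs) / sum((a - mx) ** 2 for a, _ in logs)
H = 1 + slope / 2          # slope = -beta, so H = 1 - beta/2
print(round(slope, 2), round(H, 2))
```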
For a given set of observations X_1, …, X_n, the rescaled adjusted range, or R/S statistic, is given by
R/S = [max(0, W_1, …, W_n) - min(0, W_1, …, W_n)] / S,
where W_k = (X_1 + … + X_k) - k·X̄(n), k = 1, …, n, and S is the sample standard deviation.
e.g. X_k = 14, 1, 3, 5, 10, 3; mean = 36/6 = 6.
W_1 = 14 - (1·6) = 8
W_2 = 15 - (2·6) = 3
W_3 = 18 - (3·6) = 0
W_4 = 23 - (4·6) = -1
W_5 = 33 - (5·6) = 3
W_6 = 36 - (6·6) = 0
R/S = (1/S)·[8 - (-1)] = 9/S
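The same computation as a small Python sketch (rescaled_range is our name), following the definition of W_k above:

```python
# R/S statistic: W_k = (X_1 + ... + X_k) - k*mean,
# R = max(0, W_1..W_n) - min(0, W_1..W_n), rescaled by the
# (population) standard deviation S.
def rescaled_range(x):
    n = len(x)
    mean = sum(x) / n
    s = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    w, total = [], 0.0
    for k, v in enumerate(x, start=1):
        total += v
        w.append(total - k * mean)
    r = max(0.0, *w) - min(0.0, *w)
    return r / s

x = [14, 1, 3, 5, 10, 3]
print(rescaled_range(x))   # R = 8 - (-1) = 9, so R/S = 9 / S
```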
For self-similar data, the rescaled range (R/S statistic) grows according to c·n^H, where H is the Hurst parameter, H > 0.5.
For short-range processes, the R/S statistic grows as d·n^0.5.
History: the Nile river. Harold Edwin Hurst studied the 800-year record of flooding along the Nile river (yearly minimum water level) and found long-range dependence.
[Figure: log-log R/S plot; reference slopes 0.5 and 1.0, estimated slope 0.79]
The Whittle estimator provides a confidence interval for H.
Property: any long-range dependent process approaches fractional Gaussian noise (FGN) when aggregated to a certain level.
Test the aggregated observations to ensure that they have converged to the normal distribution.
Self-similarity manifests itself in several equivalent ways:
- Non-degenerate autocorrelations
- Slowly decaying variance
- Long-range dependence
- The Hurst effect
Leland and Wilson collected hundreds of millions of Ethernet packets without loss, with recorded time stamps accurate to within 100 µs. The data were collected from several Ethernet LANs at the Bellcore Morristown Research and Engineering Center, at different times over the course of approximately 4 years.
[Figure: estimated H ≈ 0.8, between reference lines H = 0.5 and H = 1]
[Figure: packet counts at low, mid, and high traffic levels (utilization ranges 1.3%–10.4%, 3.4%–18.4%, 5.0%–30.7%); higher traffic, higher H]
Observation, contrary to Poisson models: as the number of Ethernet users (and hence network utilization) increases, H increases and the resulting aggregate traffic becomes burstier instead of smoother.
Pre-1990: host-to-host workgroup traffic. Post-1990: router-to-router traffic.
Low-period router-to-router traffic consists mostly of machine-generated packets, which tend to form a smoother arrival stream than low-period host-to-host traffic.
Ethernet LAN traffic is statistically self-similar.
- H: the degree of self-similarity
- H: a function of utilization
- H: a measure of “burstiness”
Models like Poisson are not able to capture self-similarity.
The superposition of many ON/OFF sources (also known as packet-train models) whose ON-periods and OFF-periods exhibit the Noah effect produces aggregate network traffic that features the Joseph effect.
- Noah effect: high variability, or infinite variance
- Joseph effect: self-similar, or long-range dependent, traffic
Traditional traffic models use finite-variance ON/OFF sources; the superposition of such sources behaves like white noise, with only short-range correlations.
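A sketch of the ON/OFF superposition (all parameters illustrative, helper names ours): each source alternates heavy-tailed Pareto ON- and OFF-periods (α < 2, so infinite variance, i.e. the Noah effect), and many such sources are summed into one aggregate trace.

```python
import random

# ON/OFF (packet-train) sources with heavy-tailed Pareto period
# lengths (alpha < 2 => infinite variance, the Noah effect),
# superposed into one aggregate per-tick count trace.
random.seed(2)

def pareto_length(alpha, xm=1.0):
    """Heavy-tailed period length via inverse-CDF sampling."""
    u = 1.0 - random.random()          # u in (0, 1], avoids divide-by-zero
    return int(xm / u ** (1.0 / alpha)) + 1

def on_off_source(n_ticks, alpha=1.2):
    """0/1 activity of one source: 1 unit per tick while ON."""
    out, on = [], bool(random.getrandbits(1))
    while len(out) < n_ticks:
        run = min(pareto_length(alpha), n_ticks - len(out))
        out.extend([1 if on else 0] * run)
        on = not on
    return out

n_ticks, n_sources = 5000, 50
agg = [0] * n_ticks
for _ in range(n_sources):
    for t, bit in enumerate(on_off_source(n_ticks)):
        agg[t] += bit

print(min(agg), max(agg))   # counts stay within [0, n_sources]
```

Unlike finite-variance sources, whose superposition washes out to white noise, this aggregate keeps long ON/OFF runs visible across time scales.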
Questions related to self-similarity can be reduced to practical implications of the Noah effect:
- Queueing and network performance
- Network congestion controls
- Protocol analysis
Effect of H (burstiness) on the queue-length distribution:
- Traditional (Markovian) traffic: decreases exponentially fast
- Self-similar traffic: decreases much more slowly
Not accounting for the Joseph effect can lead to overly optimistic performance predictions.
How should the buffer size be designed? There is a trade-off between packet loss and packet delay.
Comparing SRD and LRD traffic as buffer size increases:

                          Packet loss              Packet delay
Short-range dependence    decreases exponentially  approaches a fixed limit
Long-range dependence     decreases slowly         always increases
Protocol design should take into account knowledge about network traffic, such as the presence or absence of self-similarity.
Parsimonious models:
- Small number of parameters
- Every parameter has a physically meaningful interpretation, e.g. mean μ, variance σ², H
- Do not quantify the effects of the various individual factors in traffic
- Demonstrated the existence of self-similarity in Ethernet traffic, irrespective of time scale
- Proposed that the degree of self-similarity can be measured by the Hurst parameter H (higher H implies burstier traffic)
- Illustrated the difference between self-similar and standard models
- Explained the importance of self-similarity in design, control, and performance analysis