Slide 1: Packet implementation: discretization
Rate control: I'll introduce the packet implementation of the fluid control model first, then present some NS2 simulations of the new protocol. Currently we work mainly on the discretized version of the control equation, using the utility function U(x) = K log(x). Choosing different values of K achieves different kinds of fairness. Here we assume all sources use the same value of K. Then, at equilibrium, all flows crossing a bottleneck link share the bandwidth equally, regardless of time delays and other parameters. Discretizing the equation, we implement an integrator and maintain one intermediate state variable; Ts is the integration period. On the link side, one implementation is to track a virtual queue served at the virtual capacity, but that requires another timer at the link to act as a virtual queue with a smaller capacity, which can be resource-consuming for a high-speed link. Instead, we can implement it as an integrator, similar to the source side; here y is the average arrival rate and Ts is the price update interval.
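The link-side integrator described above can be sketched as follows. This is a minimal sketch, not the slides' actual code: the exact update equation is not given in the notes, and the parameter values (gamma, Ts) and the clamping at zero are assumptions based on the virtual-queue description.

```python
def update_link_price(p, y, capacity, gamma=0.95, Ts=0.005):
    """One integrator step of the link price.

    p        -- current price (scaled virtual-queue length)
    y        -- average arrival rate over the last interval (pkts/s)
    capacity -- link capacity (pkts/s)
    gamma    -- target utilization (virtual capacity = gamma * capacity)
    Ts       -- price update interval (s)

    The price grows when arrivals exceed the virtual capacity and
    decays otherwise, behaving like a virtual queue served at
    gamma * capacity but without a per-packet timer.
    """
    p = p + Ts * (y - gamma * capacity) / capacity
    return max(p, 0.0)  # a (virtual) queue length can never be negative
```

This avoids the per-packet bookkeeping of a literal virtual queue: only the averaged arrival rate y has to be measured each interval.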
Slide 2: Packet implementation: discretization
ECN-based price control. Link side; source side: a shift register saves the last N ECN bits. Equilibrium: One key point of the implementation is price estimation and feedback from the links to the sources. Currently we use REM's mechanism: on the link side, the price is mapped to a marking probability, and each packet's ECN bit is marked randomly with that probability. On the source side, a shift register stores the ECN bits of the latest N packets, and the marking probability is estimated from those bits. Noise and estimation error enter here, and they are the main obstacle to the implementation. Note that the equilibrium depends only on the capacity of the bottleneck link and the number of sources; gamma is also the target utilization of the link bandwidth. The equilibrium price depends on the equilibrium rate and the utility-function parameter. To estimate the price as accurately as possible, it is better to keep the equilibrium marking probability around 50%, so it is important to select the parameter phi so that the protocol works well under most conditions. Another point about the parameters: from the equation we can derive the dynamics of the rate. When the price (or the rate) is very small, the rate increases linearly; when the price is large, it increases or decreases exponentially. Of course, this is not strict. Window dynamics:
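The marking and estimation machinery described above can be sketched like this. The exponential marking law 1 - phi^(-p) is REM's; the particular values of PHI and N here are illustrative, not the ones used in the slides.

```python
import math
import random

PHI = 1.1   # constant known to all sources and links (illustrative value)
N   = 31    # shift-register length: the last N ECN bits

def mark_packet(price):
    """Link side: mark the ECN bit with REM's probability 1 - phi^(-price)."""
    prob = 1.0 - PHI ** (-price)
    return random.random() < prob

class PriceEstimator:
    """Source side: estimate the path price from the last N ECN bits."""
    def __init__(self):
        self.bits = [0] * N

    def on_ack(self, ecn_bit):
        # Shift in the newest ECN bit, dropping the oldest.
        self.bits.pop(0)
        self.bits.append(1 if ecn_bit else 0)

    def estimated_prob(self):
        return sum(self.bits) / N

    def estimated_price(self):
        m = self.estimated_prob()
        if m >= 1.0:            # every recent packet marked: price off-scale
            return float('inf')
        # Invert m = 1 - phi^(-p)  =>  p = -log(1 - m) / log(phi)
        return -math.log(1.0 - m) / math.log(PHI)
```

Because the estimate is a fraction over only N samples, it is quantized and noisy, which is exactly the obstacle the notes point out.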
Slide 3: Packet implementation: source side
Here is the current implementation of the protocol. We did not worry much about complexity; we tried to match the control law at the packet level. The current version does not yet handle packet drops, retransmissions, or timeouts. In each integration interval, the state xi is updated according to the equation, giving the expected congestion window. We call it the expected window because we can still do something before setting the final congestion window. We then update the congestion window on each ACK arrival; at the same time, the marking probability is estimated, giving the approximate price. These two operations are the main idea of the source side. According to the requirement of scalable stability, we set alpha no less than pi/2. The parameter comes from the zero of the fluid model. Phi is a constant known by all sources and links. Currently, the integration interval is about 5 ms.
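The two-timescale structure of the source side (per-interval state update, per-ACK window adjustment) can be sketched as below. The exact update equation for the state xi is not reproduced in these notes, so this sketch substitutes a simple first-order step toward the equilibrium window implied by U(x) = K log(x), namely w = K * rtt / q; the constants K and CAP are purely illustrative.

```python
K   = 50.0      # utility parameter in U(x) = K log(x)  (illustrative)
CAP = 1.0       # per-ACK cap on the window change, in packets (illustrative)

class Source:
    def __init__(self, rtt):
        self.rtt = rtt            # round-trip time (s)
        self.cwnd = 2.0           # actual congestion window (pkts)
        self.expected_cwnd = 2.0  # window suggested by the control law

    def on_interval(self, est_price, Ts=0.005):
        """Every Ts seconds: move the expected window toward the
        fluid-model target.  With U(x) = K log(x) the equilibrium rate
        is x = K / q, i.e. a target window of K * rtt / q.  The real
        implementation integrates an internal state (xi) instead; this
        first-order step merely stands in for it."""
        if est_price <= 0:
            # At very small price the rate grows linearly, as the
            # window-dynamics discussion on the previous slide notes.
            self.expected_cwnd += Ts * K * self.rtt
            return
        target = K * self.rtt / est_price
        self.expected_cwnd += Ts * (target - self.expected_cwnd) / self.rtt

    def on_ack(self):
        """On each ACK: step cwnd toward the expected window, capping the
        per-ACK change to mitigate price-estimation noise."""
        delta = self.expected_cwnd - self.cwnd
        delta = max(-CAP, min(CAP, delta))
        self.cwnd += delta
```

The cap in on_ack corresponds to the "capping the change of the cwnd" trick discussed later in the deck.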
Slide 4: Packet implementation: link side
The implementation of the link queue is almost the same as REM.
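Since the notes say the link queue follows REM, the price update can be sketched with REM's own law (Athuraliya and Low): p <- [p + gamma * (alpha * (b - b*) + y - c)]+, which penalizes both backlog mismatch and rate mismatch. The parameter values below are illustrative defaults, not the ones used in the simulations.

```python
def rem_price_update(p, backlog, rate_in, capacity,
                     target_backlog=20.0, gamma=0.001, alpha=0.1):
    """One REM price update.

    backlog        -- current queue length b (pkts)
    rate_in        -- measured input rate y (pkts/s)
    capacity       -- link capacity c (pkts/s)
    target_backlog -- desired queue length b* (pkts)

    The price rises when the queue exceeds its target or arrivals
    exceed capacity, and falls otherwise, clamped at zero.
    """
    p = p + gamma * (alpha * (backlog - target_backlog) + rate_in - capacity)
    return max(p, 0.0)
```

At the operating point (b = b*, y = c) the price is stationary, which is what drives the queue toward its small target value.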
Slide 5: NS2 simulation: two-way, long-lived traffic
Scenario: 2^6, 2^6, 2^7, 2^8, 2^9 sources started at 0, 20, 40, 60, 80 seconds; RTTs 40, 80, 120, 160, 200 ms; link capacity 2 Gbps (250 pkts/ms). 2 Gbps, delay = 20 ms; 32 ftps, 64 ftps, 128 ftps, 256 ftps; duplex links; td1, td2, td3, td4, td5, with td_i = (i-1)*10 ms. We run simulations for different cases; this is the scenario used in the next slides. Steven suggests this scenario; sources … In all simulations we assume the link buffer is unlimited, so there are no packet drops.
Slide 6: NS2 simulation: two-way, long-lived traffic
New protocol. Cwnds; rates (pkts/sec); queue (pkts); utilization; marking prob; estimated prob. Here are the results of the new protocol, with some observations. The system stays stable and convergent under different time delays and different numbers of sources. The window is quite smooth (it does not change very often). The system tracks changes in the network, though not very fast. The system is proportionally fair. The queue is very small most of the time, around 20 packets, and quite smooth. Link utilization is very high, even while the system is changing. Limitations: the response is relatively slow; increasing K speeds it up, but with an overshoot of the queue. The system may need to operate at a high marking probability. When the window is very small, the queue may be noisier.
Slide 7: NS2 simulation: two-way, long-lived traffic
NewReno/RED; NewReno/AdaptiveRED. Cwnds; queue (pkts); utilization. With the same scenario, we run different TCP and queue mechanisms. Here are the NewReno/RED results. First, the window is not smooth, so the source rates vary. Queue and utilization are in tension: when utilization is high, the queue becomes noisy. Both the utilization and the queue are not as good as with the new protocol. A strange point is that Adaptive RED seems not much better than RED when there are more sources; it is even worse. Parameters: thresh_ 100, maxthresh_ 2500.
Slide 8: NS2 simulation: two-way, long-lived traffic
NewReno/VQ; NewReno/PI. Cwnds; queue (pkts); utilization. We also try some other queue-management schemes. AVQ works similarly to RED. PI seems to work best among these mechanisms under AIMD window management. We also find that the utilization of all these mechanisms is not very high when the window is large enough, and it gets worse as more sources are added. Parameters: qref_ = 100.
Slide 9: NS2 simulation: small marking probability
New protocol. Cwnds; rates (pkts/sec); queue (pkts); utilization; marking prob; estimated prob. Another interesting question: does the new protocol work with a very small marking probability, as the prior schemes do? We use a small phi, and we find the system still works quite well. Of course, the queue becomes noisier, since the price-estimation noise is greater than before.
Slide 10: NS2 simulation: "heavy-tailed" traffic
Long/heavy-tailed distributions: power law; Crovella data set, files; Pareto law; lognormal: log X normally distributed, (q, s, m). Another thing the Internet community cares about is whether the system works well for Internet traffic. Here we show some results. Much research has been done on simulating heavy-tailed traffic; here we use the Pareto distribution, which follows a power law when the size is large enough.
Slide 11: NS2 simulation: "heavy-tailed" traffic
Pareto(scale, shape): scale * (1.0 / pow(uniform(), 1.0/shape)). Example: Pareto(100, 1.0): 34078 flows, 3.33e+7 pkts, median size 200.5 pkts. [Figures: Prob(X<x) and log2(Prob[X>x]) versus flow size (pkts, log2(x)); packet contribution of the flows.] Here is an example of the flows we generated. From the distribution, the flow sizes are at least heavy-tailed: for example, more than 90 percent of the flows are smaller than 1000 pkts, but those flows contribute less than 25% of the packets. In the following simulations, flows are generated from 1024 nodes. After each flow's session, the TCP agent is reset, so every flow's transmission begins from the initial window. At any point there are at most 1024 flows through the duplex links. Flow sizes follow a Pareto distribution, and flow inter-arrival times are exponentially distributed.
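The generator quoted above can be checked directly. For Pareto(100, 1.0) the theoretical median is scale * 2^(1/shape) = 200 pkts, matching the 200.5 pkts measured over the 34078 sampled flows; the heavy tail shows up as many small "mice" flows carrying a small share of the packets.

```python
import random

def pareto_flow_size(scale=100.0, shape=1.0):
    """Flow size with a Pareto distribution, as in the slides:
    scale * (1.0 / pow(uniform(), 1.0/shape))."""
    u = 1.0 - random.random()   # uniform in (0, 1], avoids division by zero
    return scale * (1.0 / (u ** (1.0 / shape)))

random.seed(1)
sizes = sorted(pareto_flow_size() for _ in range(100000))
median = sizes[len(sizes) // 2]          # should be near 200 pkts

small = [s for s in sizes if s < 1000]
mice_share_flows = len(small) / len(sizes)   # fraction of flows ("mice")
mice_share_pkts = sum(small) / sum(sizes)    # their packet contribution
```

With shape = 1.0 the mean is infinite, so a handful of elephant flows dominates the total packet count, which is why the mice's packet share is far below their flow share.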
Slide 12: NS2 simulation: "heavy-tailed" traffic
New protocol. Cwnds; percentage of the sessions; queue; marking prob; percentage of the packets; utilization; flow size (log2(x)); cumulative distribution of the flows. First, consider the case where all nodes have the same round-trip time. The queue is still very small most of the time, and the utilization is still high, as expected. Scenario: 1024 sources started in [0, 10] secs with RTT 100 ms; link capacity 1 Gbps (125 pkts/ms).
Slide 13: NS2 simulation: "heavy-tailed" traffic
New protocol. Cwnds; percentage of the sessions; queue (pkts); percentage of the packets; marking prob; utilization; flow size (log2(x)); cumulative distribution of the flows. When the RTT ranges from 40 ms to 200 ms, the queue is almost the same, except for more overshoots. The elephants still share fairly. Scenario: 2^6, 2^6, 2^7, 2^8, 2^9 sources started uniformly in [0, 10] secs with RTTs 40, 80, 120, 160, 200 ms; link capacity 1 Gbps (125 pkts/ms).
Slide 14: NS2 simulation: "heavy-tailed" traffic
NewReno/RED; NewReno/AdaptiveRED. Cwnds; queue (pkts); utilization. Parameters: thresh_ 100, maxthresh_ 2500.
Slide 15: NS2 simulation: "heavy-tailed" traffic
NewReno/VQ; NewReno/PI. Cwnds; queue (pkts); utilization.
Slide 16: Packet implementation: tricks?
Window management; price estimation; pacing output. Here are some comments on the implementation. First, from the equation for the window dynamics, it is possible to simplify the window mechanism so that we do not need an exponential computation every interval. Actually, we can also simplify the price computation. Note that the estimation window is limited; we use 31, so there are only N+1 possible marking probabilities. We can therefore map each marking probability to a price once phi is known, avoiding the log computation. Another point is that we pace the output over the entire RTT so that the expected window works more smoothly; pacing has negative effects too, and it is still a nonstandard policy. Also, because of price-estimation noise, we cap the window change on every ACK to mitigate the noise. We can also penalize the real queue above a threshold and smooth the average arrival rate, just as REM does. All of these may improve the protocol's performance. Capping the change of the cwnd. Penalizing the real queue above a threshold. Smoothing the average arrival rate of the queue.
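The log-free price lookup described above follows directly from the shift register: with N bits there are only N+1 possible marking probabilities k/N, so the corresponding prices can be precomputed once. The phi value here is illustrative; the inversion uses REM's marking law m = 1 - phi^(-p).

```python
import math

PHI = 1.1   # marking constant known to sources and links (illustrative)
N   = 31    # shift-register length, as in the notes

# Precompute the price for each possible marking probability k/N,
# inverting m = 1 - phi^(-p).  This removes the log() call from the
# per-ACK fast path.
PRICE_TABLE = [-math.log(1.0 - k / N) / math.log(PHI) for k in range(N)]
PRICE_TABLE.append(float('inf'))   # k == N: every recent packet marked

def price_from_marks(marked_count):
    """O(1) table lookup: marked_count is the number of set bits in
    the shift register, 0..N."""
    return PRICE_TABLE[marked_count]
```

The table costs N+1 floats of memory and turns every price computation into a single array index.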
Slide 17: Conclusion
The equation-based implementation has the desired performance: scalably stable, fair, high utilization, small queue, especially for high-bandwidth links. Price feedback and estimation is the main obstacle; improvements may make it more efficient. Much remains to be done: add mechanisms to handle packet drops and timeouts; find an "optimal" parameter set; devise a new price estimation and transmission scheme; make the implementation more complete and practical (new tricks?).
Slide 18: Packet implementation: source side
Slide 19: NS2 simulation: "heavy-tailed" traffic