Slide 1: Scheduling file transfers on a circuit-switched network
Student: Hojun Lee
Advisor: Professor M. Veeraraghavan
Committee: Professor E. K. P. Chong, Professor S. Torsten, Professor S. Panwar, Professor M. Veeraraghavan
Date: 5/10/04

Slide 2: Problem statement
 Increasing file sizes (e.g., multimedia, eScience: particle physics)
 Increasing link rates (e.g., optical fiber)
 Current protocols (e.g., TCP) do not exploit the high bandwidth to decrease file-transfer delay
 Example: a TCP connection with (1) a 1500 B MTU, (2) a 100 ms round-trip time (RTT), and (3) a steady throughput of 10 Gbps would require at most one packet drop every 5,000,000,000 packets, which is not realistic (p = packet loss rate; see the sketch below)
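As a quick check of the numbers in the example above, here is a small Python sketch using the well-known Mathis et al. steady-state TCP throughput approximation, rate ≈ (MSS/RTT)·C/√p. The constant C = √(3/2) and the variable names are assumptions for illustration, not something stated on the slide.

```python
# Rough check of the slide's claim via the Mathis et al. approximation:
# rate ~ (MSS / RTT) * C / sqrt(p). Treat the result as an order-of-magnitude estimate.
from math import sqrt

MSS_BITS = 1500 * 8          # 1500 B MTU expressed in bits
RTT = 0.100                  # 100 ms round-trip time, in seconds
TARGET_RATE = 10e9           # desired steady throughput: 10 Gbps
C = sqrt(3.0 / 2.0)          # Mathis constant (assumption)

# Invert rate = (MSS/RTT) * C / sqrt(p) to get the required loss probability p.
p = (C * MSS_BITS / (RTT * TARGET_RATE)) ** 2
print(f"required loss rate p ~ {p:.2e}")             # roughly 2e-10
print(f"i.e. about one drop per {1/p:.1e} packets")  # about one drop per ~5 billion packets
```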

Slide 3: Solutions to this problem
 Limit upgrades to end hosts
  - Scalable TCP (Kelly)
  - HighSpeed TCP (Floyd)
  - FAST TCP (Low, et al.)
 Upgrade routers within the Internet
  - Larger Maximum Transmission Unit (MTU), proposed by Mathis

Slide 4: Our proposed solution: Circuit-switched High-speed End-to-End Transport ArcHitecture (CHEETAH)
 Set up and release end-to-end circuits dynamically
 CHEETAH operates on an add-on basis to the current Internet

Slide 5: File transfers using CHEETAH
 Set up circuit → transfer file → release circuit
 Do not keep the circuit open during user think time; only a unidirectional circuit is used (utilization reasons)
 Modes of operation of the circuit-switched network:
  - Call-blocking mode: an "all-or-nothing" full-bandwidth allocation approach. Attempt a circuit setup; if it succeeds, the end host enjoys a much shorter file-transfer delay than on the TCP/IP path; if it fails, fall back to the TCP/IP path
  - Call-queueing mode

Slide 6: Analytical model for blocking mode
 Equation (1) gives the mean delay if the circuit setup is attempted, in terms of:
  - the call-blocking probability
  - the mean call-setup delay
  - the time to transfer the file over the circuit
  - the expected TCP/IP transfer delay, modeled following Padhye et al. and Cardwell et al. (Modeling TCP Latency) as a function of the RTT, the bottleneck link rate r, the packet loss rate, and the round-trip propagation delay

Slide 7: Routing decision
 Compare the mean delay when a circuit setup is attempted (equation (1)) with the expected delay on the TCP/IP path
  - If the TCP/IP delay is smaller → resort directly to the TCP/IP path
  - Otherwise → attempt circuit setup
 (a sketch of this decision follows below)
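The exact expression from slide 6 did not survive the transcript, so the following Python sketch only illustrates the decision rule under one plausible assumption: a blocked setup attempt falls back to the TCP/IP path after the setup delay has been spent. All function and parameter names (mean_delay_if_attempted, p_block, t_setup, t_circuit, t_tcp) are illustrative.

```python
# A minimal sketch of the routing decision, assuming that with probability
# (1 - p_block) the setup succeeds and the file goes over the circuit, and with
# probability p_block it is blocked and the transfer falls back to TCP/IP.
def mean_delay_if_attempted(p_block, t_setup, t_circuit, t_tcp):
    """Expected transfer delay when a circuit setup is attempted (assumed form)."""
    return (1 - p_block) * (t_setup + t_circuit) + p_block * (t_setup + t_tcp)

def route(p_block, t_setup, t_circuit, t_tcp):
    """Attempt the circuit only if doing so lowers the expected delay."""
    if mean_delay_if_attempted(p_block, t_setup, t_circuit, t_tcp) < t_tcp:
        return "attempt circuit setup"
    return "resort directly to the TCP/IP path"

# Example: 1 GB file, 100 Mbps circuit (80 s), 50 ms setup, 200 s estimated TCP delay.
print(route(p_block=0.1, t_setup=0.05, t_circuit=80.0, t_tcp=200.0))  # attempt circuit setup
```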

Slide 8: File transfer delays for large files (1 GB and 1 TB) over the TCP/IP path

Slide 9: Numerical results for transfer delays, file sizes 5 MB – 1 GB
 Link rate = 100 Mbps, k = 20
  Should always attempt a circuit setup for these parameters

Slide 10: Numerical results for transfer delays, file sizes 5 MB – 1 GB (cont'd)
 Link rate = 1 Gbps, k = 20
  A crossover file size exists in small propagation-delay environments

Slide 11: Crossover file sizes
Columns: measure of loading on the circuit-switched network (P_b); rows: measure of loading on the TCP/IP path (P_loss).

r = 1 Gbps, T_prop = 0.1 ms, k = 20:
                   P_b = 0.01   P_b = 0.1   P_b = 0.3
  P_loss = 0.001   22 MB        24 MB       30 MB
  P_loss = 0.001   9 MB         10 MB       12 MB
  P_loss = 0.01    < 5 MB (all values of P_b)

r = 100 Mbps, T_prop = 0.1 ms, k = 20:
                   P_b = 0.01   P_b = 0.1   P_b = 0.3
  P_loss = 0.001   2.4 MB       2.65 MB     3.4 MB
  P_loss = 0.001   2 MB         2.2 MB      2.8 MB
  P_loss = 0.01    500 KB       550 KB      650 KB

 For high propagation-delay environments, always attempt a circuit (utilization implications)
 This work was presented at the PFLDnet 2003 workshop [1] and Opticomm 2003 [2]

Slide 12: Motivation for call queueing
 Example: large file transfer (1 TB) between hosts H and D, with both a TCP/IP path and a circuit path through the network
 Over the TCP/IP path (P_loss = 0.0001, T_prop = 50 ms, r = 1 Gbps): delay = 4 days 14.9 hours
 Over a 1 Gbps circuit: 1 TB / 1 Gbps = 8000 s ≈ 2.2 hours
 If the call setup attempt is blocked, queueing for the circuit is still attractive compared with falling back to TCP/IP

Slide 13: Problem with call queueing
 Low bandwidth utilization
 Reason: upstream switches hold resources while waiting for downstream switches to admit a call, instead of using the wait period to admit short calls that only traverse upstream segments
 Example (Host A – Switch 1 – link 1 – Switch 2 – link 2 – Host B): a setup request waits (queues) until resources become available on link 1, then reserves and holds that bandwidth until the call is set up all the way through; while the call is queued for link 2 resources, the link 1 resources sit idle

Slide 14: Idea! Use knowledge of file sizes to "schedule" calls
 The network knows the file sizes and bandwidths of admitted calls
 When a new call arrives:
  - The network can figure out when resources will become available for the new call
  - The network can schedule the new call for a delayed start and provide this information to the requesting end host
  - The end host can then compare this delay with the expected delay on the TCP/IP path

Slide 15: Call scheduling on a single link
 Main question: since files can be transferred at any rate, what rate should the network assign to a given file transfer?

Slide 16: One simple answer
 In circuit-switched networks, use a fixed bandwidth allocation for the duration of a file transfer (TDM/FDM scheme):
  - Transmission capacity C (bits/sec) is divided among n streams
  - Transmitting a file of L bits takes Ln/C seconds (see the sketch below)
  - Even if other transfers complete before this one, the bandwidth allocated to this transfer cannot be increased
 Contrast: packet-switched systems use statistical multiplexing
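A tiny sketch of the fixed-share arithmetic above; the function name and the example numbers are illustrative.

```python
# Fixed-share (TDM/FDM) transfer time: a link of capacity C shared by n streams
# gives each stream C/n, so a file of L bits takes L*n/C seconds, regardless of
# whether the other streams finish early.
def fixed_share_transfer_time(file_bits, link_bps, n_streams):
    return file_bits * n_streams / link_bps

# Example: a 1 GB file on a 1 Gbps link shared by 10 streams -> 80 seconds,
# even if the other nine transfers complete after one second.
print(fixed_share_transfer_time(8e9, 1e9, 10))  # 80.0
```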

Slide 17: Our answer
 Greedy scheme: allocate the maximum bandwidth available that is less than or equal to the maximum rate requested for the call
 Varying-Bandwidth List Scheduling (VBLS): the end host specifies the file size, a maximum bandwidth limit, and a desired start time; the network returns a time-range-capacity (TRC) allocation vector assigning varying bandwidth levels in different time ranges for the transfer
 VBLS with Channel Allocation (VBLS/CA): a special case of practical interest that tracks actual channel allocations in different time ranges

Slide 18: Notation
 Quantities specified in call request i
 Quantities in the switch's response

Slide 19: VBLS algorithm
 Initialization step: set the current time and the remaining file size; check the available bandwidth at the current time (if none is available, advance to the next change point in the available-bandwidth curve)
 Case 1 (the allocation is limited by the available bandwidth and the remaining file can be transmitted before the next change point in the curve): record the final time-range-capacity entry and terminate the loop
 Case 2 (the allocation is limited by the available bandwidth and the remaining file cannot be transmitted before the next change point in the curve): allocate up to the next change point, update the remaining file size and the current time, and continue the repeat loop (go back to the initialization step)

Slide 20: VBLS algorithm (cont'd)
 Case 3 (the allocation is limited by the requested maximum rate and the remaining file can be transmitted before the next change point in the curve): record the final time-range-capacity entry and terminate the loop
 Case 4 (the allocation is limited by the requested maximum rate and the remaining file cannot be transmitted before the next change point in the curve): allocate up to the next change point, update the remaining file size and the current time, and continue the repeat loop (go back to the initialization step)
 (a single-link sketch of the procedure follows below)
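The case labels above lost their symbols in the transcript, so the following Python sketch is only a minimal single-link rendering of the behaviour described on slides 17-20: walk the available-bandwidth curve from the desired start time, allocate min(available, requested maximum) in each time range, and stop once the whole file has been scheduled. The step-function representation of the availability curve and all names are assumptions.

```python
# A minimal single-link sketch of the VBLS idea (not the authors' exact algorithm).
def vbls_single_link(file_bits, max_rate, start_time, avail_curve):
    """avail_curve: list of (change_point_time, available_bps), sorted by time and
    assumed constant after the last point. Returns a time-range-capacity (TRC)
    vector: a list of (start, end, bps) tuples."""
    trc = []
    remaining = file_bits
    t = start_time
    for i, (change_pt, avail_bps) in enumerate(avail_curve):
        if remaining <= 0:
            break
        next_change = avail_curve[i + 1][0] if i + 1 < len(avail_curve) else float("inf")
        if next_change <= t:
            continue                      # this range ends before our current time
        rate = min(avail_bps, max_rate)   # greedy: never exceed the requested maximum
        if rate <= 0:
            t = next_change               # nothing usable here; wait for the next change point
            continue
        seg_start = max(t, change_pt)
        # Either the file finishes inside this range, or we use the whole range.
        finish = seg_start + remaining / rate
        seg_end = min(finish, next_change)
        trc.append((seg_start, seg_end, rate))
        remaining -= rate * (seg_end - seg_start)
        t = seg_end
    return trc

# Example: 10 Gb file, 2 Gbps cap, available bandwidth 1 Gbps until t=3, then 4 Gbps.
print(vbls_single_link(10e9, 2e9, 0, [(0, 1e9), (3, 4e9)]))
# -> [(0, 3, 1e9), (3, 6.5, 2e9)]
```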

Slide 21: Example of VBLS (figure): sources S1, S2, S3 share a single link with four channels (Ch. 1–4) through a circuit switch toward destination D; the timeline t = 1…5 shows the available time ranges and the resulting allocations TRC 1, TRC 2, TRC 3.

Slide 22: VBLS/CA algorithm
Four additions relative to VBLS:
1) Track channel availability over time for each individual channel, in addition to the total available-bandwidth curve; also track channel availability at each change point of that curve
2) Track the set of open channels, to save switch programming time
3) If multiple channels are allocated within the same time range, count each allocation as a separate entry in the Time-Range-channeL (TRL) vector
4) When there are many candidate channels, apply two rules (sketched below):
  - 1st rule: if the file transfer completes within the time range, choose the channel with the smallest leftover time
  - 2nd rule: if the file transfer does not complete within the time range, choose the channel with the largest leftover time
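A minimal sketch of the two channel-selection rules in item 4; representing each candidate channel by its "leftover time" (how long it stays free beyond the current time range) is an assumption made for illustration.

```python
# Best-fit / worst-fit channel selection as stated on the slide.
def pick_channel(candidates, transfer_completes_in_range):
    """candidates: dict mapping channel id -> leftover time (seconds)."""
    if transfer_completes_in_range:
        # Rule 1: smallest leftover time (best fit), leaving longer-lived
        # channels free for later calls.
        return min(candidates, key=candidates.get)
    # Rule 2: largest leftover time (worst fit), so this transfer can keep the
    # channel across the next change point.
    return max(candidates, key=candidates.get)

# Example: channels 1 and 4 are free for another 5 s and 20 s respectively.
print(pick_channel({1: 5.0, 4: 20.0}, transfer_completes_in_range=True))   # 1
print(pick_channel({1: 5.0, 4: 20.0}, transfer_completes_in_range=False))  # 4
```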

Slide 23: Example of VBLS/CA
Parameters: start time = 10, maximum number of channels = 2, file size = 75 MB, per-channel rate = 1 Gbps

Round | leftover (MB) | C_open | TRL_i entries added
0     | 75            | { }    | (none)
1     | 50            | {1,4}  | (10,20,1) (10,20,4)
2     | 25            | {4}    | (20,30,1) (20,30,4)
3     | 12.5          | { }    | (30,40,4)  (only 12.5 MB can be sent in this range)
4     | 0             | {1}    | (40,50,1)

Slide 24: Traffic model
 File arrival requests ~ Poisson process
 File-size distribution ~ bounded Pareto, with shape parameter alpha (set to 1.1 for the entire simulation) and with k and p the lower and upper bounds, respectively, of the allowed file-size range
 The remaining parameters vary depending on the simulation settings
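A small sketch of this traffic model: exponential interarrival times for the Poisson process and inverse-transform sampling for the bounded Pareto file sizes. The function names are illustrative; the parameter names alpha, k, and p follow the slide.

```python
import random

def bounded_pareto(alpha, k, p):
    """Draw one file size from the bounded Pareto distribution on [k, p]."""
    u = random.random()
    return k * (1.0 - u * (1.0 - (k / p) ** alpha)) ** (-1.0 / alpha)

def poisson_interarrival(rate_per_sec):
    """Draw one exponential interarrival time for a Poisson arrival process."""
    return random.expovariate(rate_per_sec)

# Example: the validation setup on the next slide (alpha = 1.1, k = 500 MB, p = 100 GB).
sizes = [bounded_pareto(1.1, 500e6, 100e9) for _ in range(5)]
print([f"{s/1e6:.0f} MB" for s in sizes])
print(f"next arrival in {poisson_interarrival(10):.3f} s")  # 10 calls/sec
```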

Slide 25: Validation of simulation against analytical results
Assumptions:
1. All file requests set their maximum bandwidth limit to match the link capacity C
2. Arrivals form a Poisson process
3. Service times (file sizes) follow the bounded Pareto distribution
4. k = 500 MB and p = 100 GB
The closed-form analytical expression for the mean delay under these assumptions is compared against the simulated values

Slide 26: Sensitivity analysis
We carry out four experiments:
(1) To understand the impact of the requested bandwidth when all calls request the same constant maximum bandwidth
(2) To understand the impact of the allowed file-size range (i.e., the parameters k and p)
(3) To understand the impact of the requested bandwidth when calls request three different maximum values (i.e., (1,2,4) and (1,5,10) channels)
(4) To understand the impact of the size of the discrete time unit (T_discrete)

Slide 27: Sensitivity analysis (cont'd)
First experiment: k = 500 MB, p = 100 GB, maximum requested bandwidth of 1, 5, 10, and 100 channels
 File latency: the mean waiting time across all files transferred
 Mean file transfer delay: file latency + mean service time (transmission delay)
(figures: file latency comparison; mean file transfer delay comparison)

Slide 28: Sensitivity analysis (cont'd)
Second experiment:
 Case 1: k = 500 MB, p = 100 GB, alpha = 1.1
 Case 2: k = 10 GB, p = 100 GB, alpha = 1.1
 Question: in which case is the variance larger at first glance?
(figures: results for Case 1 and Case 2)

Slide 29: Sensitivity analysis (cont'd)
Third experiment:
 Case 1: per-channel rate = 10 Gbps, C = 1 Tbps (100 channels)
 Case 2: per-channel rate = 1 Gbps, C = 100 Gbps (100 channels)
 File throughput: long-run average of the file size divided by the file transfer delay
(figures: results for Case 1 and Case 2)

Slide 30: Sensitivity analysis (cont'd)
Fourth experiment, assumptions:
1. All calls request the same maximum bandwidth, and the link capacity C = 100 channels
2. Vary the discrete time unit (T_discrete) over 0.05, 0.5, 1, and 2 sec

Slide 31: Comparison of VBLS with FBLS and PS
Basic simulation setup:
 File arrival requests ~ Poisson process
 Per-channel rate = 10 Gbps; requested bandwidth = 1, 5, or 10 channels (with probabilities 30%, 30%, and 40%)
 Bounded Pareto input parameters: alpha = 1.1, k = 5 MB, and p = 1 GB
 Packet-switched (PS) system: files are divided into 1500 B packets that arrive at an infinite packet buffer at a constant packet rate equal to the requested bandwidth divided by the packet length
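A small sketch of how the packet-switched reference model described above could be parameterized; the function name and return format are illustrative.

```python
# PS reference model from the slide: a file is chopped into 1500 B packets that
# arrive at an infinite buffer at a constant rate of (requested bandwidth / packet size).
PACKET_BITS = 1500 * 8

def ps_arrival_process(file_bytes, requested_bps):
    """Return (number of packets, packet arrival rate in packets/sec)."""
    n_packets = -(-file_bytes * 8 // PACKET_BITS)   # ceiling division
    rate_pps = requested_bps / PACKET_BITS
    return n_packets, rate_pps

# Example: a 5 MB file sent at one 10 Gbps channel's worth of bandwidth.
print(ps_arrival_process(5_000_000, 10e9))  # (3334 packets, ~833,333 packets/s)
```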

Slide 32: Comparison of VBLS with FBLS and PS (cont'd)
 The VBLS scheme performs much better than the FBLS scheme
 The throughput performance of VBLS is indistinguishable from packet switching. This illustrates our main point: by taking file sizes into account and varying the bandwidth allocation for each transfer over its duration, we mitigate the performance degradation usually associated with circuit-based methods
 This work was presented at GAN'04 [3] and PFLDnet'04 [4] and will be published at ICC 2004 [5]

Slide 33: Call scheduling in the multiple-link case
 Centralized online greedy scheme: create a new availability curve reflecting the available bandwidth across all links (see the sketch below)
 Distributed online greedy scheme:
  - Needs some mechanism to merge TRC and TRL vectors across multiple switches
  - Practical issues: clock synchronization, propagation delay
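A minimal sketch of the centralized scheme's first step: merging per-link availability curves into a single pointwise-minimum curve that the single-link VBLS procedure could then consume. The (time, bps) step-function format matches the earlier single-link sketch and is an assumption about how the curves would be stored.

```python
# Pointwise-minimum merge of per-link available-bandwidth step functions.
def merge_availability(curves):
    """curves: list of per-link step functions, each a sorted list of (time, bps).
    Returns the pointwise-minimum step function across all links."""
    change_points = sorted({t for curve in curves for (t, _) in curve})

    def value_at(curve, t):
        bps = 0.0
        for (ct, cbps) in curve:
            if ct <= t:
                bps = cbps
            else:
                break
        return bps

    return [(t, min(value_at(c, t) for c in curves)) for t in change_points]

# Example: link 1 frees a channel at t=2, link 2 loses one at t=4.
link1 = [(0, 1e9), (2, 2e9)]
link2 = [(0, 2e9), (4, 1e9)]
print(merge_availability([link1, link2]))
# -> the end-to-end minimum is 1 Gbps until t=2, 2 Gbps until t=4, then 1 Gbps
```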

Slide 34: Additional notation for the multiple-link case
 Time-range-capacity allocation: the capacity assigned to call i in time range k, with its start and end times, by switch n; since the number of time ranges can change from link to link, the subscript n is added
 Time-range-capacity release: the capacity to be released for call i, with its start and end times, at switch (n-1)
 M: multiplicative factor used in reserving TRCs; if M = 5, the TRC vector reserved is 5 times the TRC allocation needed to transfer the file

Slide 35: VBLS example for M = 1 (figure): source S1 reaches destination D1 through switches SW1, SW2, SW3, each link having four channels (Ch. 1–4); the available time ranges over t = 1…6 do not line up across the links, and the call is blocked (X).

Slide 36: VBLS example for M = 2 (figure): same topology as the previous slide; with the larger reservation factor, matching available time ranges can be found across SW1–SW3 and the call is not blocked.

Slide 37: Traffic model (multiple-link simulations)
 Bounded Pareto input parameters: alpha = 1.1, k = 500 MB, and p = 100 GB
 Study traffic: the source generates calls at a constant mean rate of 10 files/sec
 Interference traffic: the mean call arrival rates of the interference traffic are varied over 5, 10, 15, 20, 25, 30, 35, and 40 files/sec

Slide 38: Sensitivity analysis (multiple-link case)
We carry out two experiments:
(1) To understand the impact of M (the multiplicative factor): M = 2, 3, and 4
(2) To understand the impact of the discrete time unit (T_discrete): T_discrete = 0.01, 0.1, and 1 sec

Slide 39: Sensitivity analysis (cont'd)
First experiment (impact of M): M is varied over 2, 3, and 4, while the propagation delay and T_discrete are fixed at 5 ms and 10 ms, respectively.
(figures: percentage of blocked calls comparison; file throughput comparison)

Slide 40: Sensitivity analysis (cont'd)
Second experiment (impact of T_discrete): T_discrete is varied over 0.01, 0.1, and 1 sec, while the propagation delay and M are fixed at 5 ms and 3, respectively.
(figures: percentage of blocked calls comparison; file throughput comparison)

Slide 41: Future work
 Include a second class of user requests specifically targeted at interactive, long-holding-time applications, e.g., remote visualization and simulation steering; such requests would be specified with their own call-request parameters
 The simulation results for the multiple-link case are only preliminary; more comparisons are possible via simulation:
  - Varying the propagation delays of the links while fixing other parameters such as M and T_discrete
  - Comparing the VBLS scheme against TCP/IP (FAST TCP)
  - Assuming a finite buffer instead of an infinite buffer, and taking into account congestion control and retransmissions when packets are lost due to buffer overflow, which might degrade the performance of the packet-switched system

Slide 42: References
1. M. Veeraraghavan, H. Lee, and X. Zheng, "File transfers across optical circuit-switched networks," PFLDnet 2003, Feb. 3-4, 2003, Geneva, Switzerland.
2. M. Veeraraghavan, X. Zheng, H. Lee, M. Gardner, and W. Feng, "CHEETAH: Circuit-switched High-speed End-to-End Transport ArcHitecture," accepted for publication in Proc. of Opticomm 2003, Oct. 13-17, 2003, Dallas, TX.
3. H. Lee, M. Veeraraghavan, E. K. P. Chong, and H. Li, "Lambda scheduling algorithm for file transfers on high-speed optical circuits," Workshop on Grids and Advanced Networks (GAN'04), April 19-22, 2004, Chicago, Illinois.
4. M. Veeraraghavan, X. Zheng, W. Feng, H. Lee, E. K. P. Chong, and H. Li, "Scheduling and Transport for File Transfers on High-speed Optical Circuits," PFLDnet 2004, Feb. 16-17, 2004, Argonne, Illinois, http://www-didc.lbl.gov/PFLDnet2004/.
5. M. Veeraraghavan, H. Lee, E. K. P. Chong, and H. Li, "A varying-bandwidth list scheduling heuristic for file transfers," in Proc. of ICC 2004, June 20-24, 2004, Paris, France.
6. M. Veeraraghavan, X. Zheng, W. Feng, H. Lee, E. K. P. Chong, and H. Li, "Scheduling and Transport for File Transfers on High-speed Optical Circuits," Journal of Grid Computing (JoGC), 2004.

Slide 43: Thank you!

