Scheduling file transfers on a circuit-switched network. Student: Hojun Lee. Advisor: Professor M. Veeraraghavan. Committee: Professor E. K. P. Chong.


1 Scheduling file transfers on a circuit-switched network
Student: Hojun Lee
Advisor: Professor M. Veeraraghavan
Committee: Professor E. K. P. Chong, Professor S. Torsten, Professor S. Panwar, Professor M. Veeraraghavan
Date: 5/10/04

2 Problem statement
- Increasing file sizes (e.g., multimedia; eScience: particle physics)
- Increasing link rates (e.g., optical fiber)
- Current protocols (e.g., TCP) do not exploit high bandwidth to decrease file-transfer delay.
Example: a TCP connection with 1) a 1500 B MTU, 2) a 100 ms round-trip time (RTT), and 3) a steady throughput of 10 Gbps would require at most one packet drop every 5,000,000,000 packets (not realistic), where the packet loss rate p enters through the TCP throughput relation: throughput ≈ (MSS/RTT) · (C/√p).
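The loss-rate figure on this slide can be reproduced from the well-known TCP throughput relation of Mathis et al., throughput ≈ (MSS/RTT) · (C/√p); a minimal sketch, assuming the usual constant C ≈ 1.22:

```python
# Solve the Mathis et al. relation rate = (MSS/RTT) * (C/sqrt(p)) for p,
# the largest packet-loss rate at which TCP can sustain a given rate.
def max_loss_rate(mss_bytes: float, rtt_s: float, rate_bps: float,
                  c: float = 1.22) -> float:
    """Largest tolerable packet-loss rate p for the target throughput."""
    mss_bits = mss_bytes * 8
    return (c * mss_bits / (rtt_s * rate_bps)) ** 2

# Slide's parameters: 1500 B MTU, 100 ms RTT, 10 Gbps steady throughput.
p = max_loss_rate(mss_bytes=1500, rtt_s=0.100, rate_bps=10e9)
print(f"p = {p:.2e}")                      # on the order of 2e-10
print(f"about 1 drop per {1/p:.1e} packets")  # roughly 5e9 packets, as on the slide
```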

3 Solutions to this problem
- Limit upgrades to end hosts:
  - Scalable TCP (Kelly)
  - HighSpeed TCP (Floyd)
  - FAST TCP (Low et al.)
- Upgrade routers within the Internet:
  - Larger Maximum Transmission Unit (MTU), as proposed by Mathis

4 Our proposed solution: Circuit-switched High-speed End-to-End Transport ArcHitecture (CHEETAH)
- End-to-end circuits are set up and released dynamically
- CHEETAH is deployed on an add-on basis to the current Internet

5 File transfers using CHEETAH
- Set up circuit → transfer file → release circuit
- Do not keep the circuit open during user think time
- Only a unidirectional circuit is used (utilization reasons)
- Modes of operation of the circuit-switched network:
  - Call-blocking mode: an "all-or-nothing" full-bandwidth allocation approach
    - Attempt a circuit setup:
      - If it succeeds → the end host enjoys a much shorter file-transfer delay than on the TCP/IP path
      - If it fails → fall back to the TCP/IP path
  - Call-queueing mode

6 Analytical model for blocking mode
Mean delay if the circuit setup is attempted:
  E[T_ckt] = P_b · T_TCP + (1 − P_b) · (T_su + T_tr)   (1)
where P_b is the call-blocking probability, T_su is the mean call-setup delay, T_tr ≈ L/r is the time to transfer a file of L bits on a circuit of rate r, and T_TCP is the file-transfer delay on the TCP/IP path per the TCP latency models of Padhye et al. and Cardwell et al.; T_TCP is a function of the RTT, the bottleneck link rate r, the packet loss rate, and the round-trip propagation delay.

7 Routing decision
Compare T_TCP with E[T_ckt]:
- If T_TCP is smaller → resort directly to the TCP/IP path
- Otherwise → attempt circuit setup
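The decision rule of the last two slides can be sketched in a few lines, assuming the blocking-mode mean delay takes the form P_b·T_TCP + (1 − P_b)(T_su + L/r); the TCP-path estimate T_TCP would come from a TCP latency model (Padhye et al., Cardwell et al.) and is treated here as a given input:

```python
# Sketch of the blocking-mode routing decision. The circuit attempt
# falls back to the TCP/IP path with probability pb (call blocking).
def expected_circuit_delay(pb: float, t_tcp: float, t_setup: float,
                           file_bits: float, rate_bps: float) -> float:
    """Mean delay if a circuit setup is attempted (Eq. (1) form)."""
    return pb * t_tcp + (1 - pb) * (t_setup + file_bits / rate_bps)

def choose_path(pb, t_tcp, t_setup, file_bits, rate_bps) -> str:
    """Attempt the circuit only if its expected delay beats TCP/IP."""
    t_ckt = expected_circuit_delay(pb, t_tcp, t_setup, file_bits, rate_bps)
    return "circuit" if t_ckt < t_tcp else "tcp"

# 1 GB file, 1 Gbps circuit, 50 ms setup, 10% blocking, 60 s TCP estimate:
print(choose_path(0.1, 60.0, 0.05, 8e9, 1e9))  # -> circuit
```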

8 File-transfer delays for large files (1 GB and 1 TB) over the TCP/IP path

9 Numerical results for transfer delays of file sizes [5 MB – 1 GB]
Link rate = 100 Mbps, k = 20
- A circuit setup should always be attempted for these parameters

10 Numerical results for transfer delays of file sizes [5 MB – 1 GB], cont'd
Link rate = 1 Gbps, k = 20
- A crossover file size exists in small propagation-delay environments

11 Crossover file sizes
(P_b measures loading on the ckt.-sw. network; P_loss measures loading on the TCP/IP path)

r = 1 Gbps, T_prop = 0.1 ms, k = 20:
                 P_b = 0.01   P_b = 0.1   P_b = 0.3
 P_loss = …       … MB         24 MB       30 MB
 P_loss = …       … MB         10 MB       12 MB
 P_loss = 0.01   < 5 MB

r = 100 Mbps, T_prop = 0.1 ms, k = 20:
                 P_b = 0.01   P_b = 0.1   P_b = 0.3
 P_loss = …       … MB         2.65 MB     3.4 MB
 P_loss = …       … MB         2.2 MB      2.8 MB
 P_loss = …       … KB         550 KB      650 KB

- For high propagation-delay environments, always attempt a circuit (utilization implications)
- This work was presented at the PFLDNET 2003 workshop [1] and at Opticomm 2003 [2].

12 Motivation for call queueing
Example: large file transfer (1 TB) from host H to destination D, reachable over both a TCP/IP path and a circuit path (P_loss = …, T_prop = 50 ms, r = 1 Gbps):
- TCP/IP path: delay = 4 days 14.9 hours
- Circuit path (if the call setup attempt succeeds): 1 TB / 1 Gbps ≈ 2.2 hours
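The circuit-path number on this slide is simple arithmetic, and worth making explicit:

```python
# A 1 TB file on a dedicated 1 Gbps circuit: 8e12 bits / 1e9 bps = 8000 s.
tb_bits = 1e12 * 8           # 1 TB in bits (decimal units)
circuit_s = tb_bits / 1e9    # transfer time on a 1 Gbps circuit, seconds
print(circuit_s / 3600)      # about 2.2 hours, as on the slide
```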

13 Problem with call queueing
Low bandwidth utilization. Reason: upstream switches hold resources while waiting for downstream switches to admit a call, instead of using the wait period to admit short calls that traverse only the upstream segments.
Example: Host A – Switch 1 – link 1 – Switch 2 – link 2 – Host B. A setup message arrives; the call waits (queues) until resources become available on link 1, then reserves and holds bandwidth on link 1 until the call is set up end to end. While the call is queued for link 2 resources, link 1 resources sit idle.

14 Idea! Use knowledge of file sizes to "schedule" calls
The network knows the file sizes and bandwidths of admitted calls. When a new call arrives:
- The network can determine when resources will become available for the new call
- The network can schedule the new call for a delayed start and return this information to the requesting end host
- The end host can then compare this delay with the expected delay on the TCP/IP path

15 Call scheduling on a single link
Main question: since files can be transferred at any rate, what rate should the network assign to a given file transfer?

16 One simple answer
In circuit-switched networks, use a fixed bandwidth allocation for the duration of a file transfer (TDM/FDM scheme):
- Transmission capacity C (bits/sec) is divided among n streams
- Transmitting a file of L bits then takes Ln/C sec (e.g., a 1 Gb file on a 10 Gbps link shared among 10 streams takes 1 s rather than 0.1 s)
- Even if other transfers complete before this one, the bandwidth of this particular transfer cannot be increased
Contrast: a packet-switched system adapts via statistical multiplexing.

17 Our answer
- Greedy scheme: allocate the maximum bandwidth available that is less than or equal to the maximum rate requested for the call
- Varying-Bandwidth List Scheduling (VBLS): the end host specifies the file size, a maximum bandwidth limit, and a desired start time; the network returns a time-range-capacity allocation vector assigning varying bandwidth levels to different time ranges of the transfer
- VBLS with Channel Allocation (VBLS/CA): a special case of practical interest that tracks actual channel allocations in different time ranges

18 Notation: parameters specified in call request i, and the switch's response.

19 VBLS algorithm
- Initialization step: set the current time and the remaining file size; check for available bandwidth at the current time (if there is none, find the next change point in the available-bandwidth curve); set the next change point
- Case 1 (the available bandwidth is at least the requested maximum, and the remaining file can be transmitted before the next change point in the curve): record the final time-range-capacity entry; terminate the loop
- Case 2 (the available bandwidth is at least the requested maximum, but the remaining file cannot be transmitted before the next change point in the curve): record an entry up to the next change point, update the remaining file size, advance to the next change point, and continue the repeat loop (go to the Initialization step)

20 VBLS algorithm, cont'd
- Case 3 (the available bandwidth is below the requested maximum, and the remaining file can be transmitted before the next change point in the curve): record the final time-range-capacity entry; terminate the loop
- Case 4 (the available bandwidth is below the requested maximum, and the remaining file cannot be transmitted before the next change point in the curve): record an entry up to the next change point, update the remaining file size, advance to the next change point, and continue the repeat loop (go to the Initialization step)
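The loop above can be sketched in Python. This is a hedged reconstruction from the prose (the slide's own symbols did not carry over): the available-bandwidth curve is modeled as a list of hypothetical (t_start, t_end, avail_rate) ranges, and in each range the transfer gets min(available, r_max):

```python
# Sketch of the VBLS loop: walk the available-bandwidth curve from the
# desired start time, allocating bandwidth per time range until the
# remaining file size reaches zero.
def vbls_schedule(curve, file_bits, r_max, t_start):
    """Return a time-range-capacity (TRC) vector [(t1, t2, rate), ...]."""
    trc, remaining = [], file_bits
    for seg_start, seg_end, avail in curve:
        if seg_end <= t_start or remaining <= 0:
            continue
        rate = min(avail, r_max)     # greedy: at most the requested maximum
        if rate <= 0:
            continue
        begin = max(seg_start, t_start)
        if remaining <= rate * (seg_end - begin):
            # Cases 1/3: the file finishes before the next change point.
            trc.append((begin, begin + remaining / rate, rate))
            remaining = 0
            break
        # Cases 2/4: use the whole range, continue at the next change point.
        trc.append((begin, seg_end, rate))
        remaining -= rate * (seg_end - begin)
    return trc if remaining == 0 else None   # None: curve too short

# 10 Gb file, r_max = 2 Gb/s, curve changing at t = 5:
print(vbls_schedule([(0, 5, 1e9), (5, 20, 4e9)], 10e9, 2e9, 0.0))
```

For the example call, the transfer runs at 1 Gb/s until t = 5 (all that is available), then at the requested 2 Gb/s until it completes at t = 7.5.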

21 Example of VBLS (figure)
Sources S1, S2, and S3 share a single link through a circuit switch toward destination D; the link has channels Ch. 1 to Ch. 4, with available time ranges shown over t = 1 … 5 and the resulting allocations TRC 1, TRC 2, and TRC 3.

22 VBLS/CA algorithm
Four additions:
1) Track channel availability over time for each channel, in addition to the total available-bandwidth curve; furthermore, track the channel availability at each time change point
2) Track the set of open channels, to save switch programming time
3) If multiple channels are allocated in the same time range, count each allocation as a separate entry in the Time-Range-channeL (TRL) vector
4) When there are multiple candidate channels, apply two rules:
- First rule: if the file transfer completes within a time range, choose the channel with the smallest leftover time
- Second rule: if the file transfer does not complete within a time range, choose the channel with the largest leftover time
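The two channel-selection rules amount to best-fit versus worst-fit choices. A small illustration, under the assumption that each candidate channel is summarized by its leftover time in the current range (the names here are hypothetical):

```python
# VBLS/CA channel selection among candidates, each a (channel_id,
# leftover_time) pair for the current time range.
def pick_channel(candidates, transfer_fits_in_range: bool):
    """Apply the slide's two rules to choose one channel id."""
    if transfer_fits_in_range:
        # First rule (best fit): smallest leftover time.
        return min(candidates, key=lambda c: c[1])[0]
    # Second rule (worst fit): largest leftover time.
    return max(candidates, key=lambda c: c[1])[0]

channels = [(1, 4.0), (2, 9.0), (3, 6.5)]
print(pick_channel(channels, True))    # -> 1 (smallest leftover)
print(pick_channel(channels, False))   # -> 2 (largest leftover)
```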

23 Example of VBLS/CA
Parameters: file size = 75 MB; per-channel rate = 1 Gbps; 12.5 MB can be sent per channel in each time range

Round   leftover (MB)   C_open   TRL_i
0       75              { }
1       50              {1,4}    (10,20,1) (10,20,4)
2       25              {4}      (20,30,1) (20,30,4)
3       12.5            { }      (30,40,4)
4       0               {1}      (40,50,1)

24 Traffic model
- File arrival requests ~ Poisson process
- File-size distribution ~ bounded Pareto, where α, the shape parameter, is 1.1 for the entire simulation, and k and p are the lower and upper bounds, respectively, of the allowed file-size range; k and p vary with the simulation settings
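Bounded Pareto file sizes can be drawn by inverse-transform sampling; a minimal sketch, using the CDF F(x) = (1 − (k/x)^α) / (1 − (k/p)^α) for k ≤ x ≤ p:

```python
import random

def bounded_pareto(alpha: float, k: float, p: float,
                   rng: random.Random = random) -> float:
    """Inverse-transform sample from the bounded Pareto(alpha, k, p)."""
    u = rng.random()
    # Solve F(x) = u for x.
    return k / (1 - u * (1 - (k / p) ** alpha)) ** (1 / alpha)

# Draw sizes with the thesis's shape parameter alpha = 1.1,
# k = 500 MB and p = 100 GB (one of the simulated ranges):
rng = random.Random(1)
sizes = [bounded_pareto(1.1, 500e6, 100e9, rng) for _ in range(100_000)]
print(min(sizes) >= 500e6 and max(sizes) <= 100e9)   # True: stays in [k, p]
```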

25 Validation of simulation against analytical results
Assumptions:
1. All file requests set their maximum bandwidth limit to match the link capacity, C
2. Arrivals form a Poisson process
3. Service times follow a bounded Pareto distribution
4. k = 500 MB and p = 100 GB
Analytical model (M/G/1): E[W] = λ E[S²] / (2(1 − ρ)), where ρ = λ E[S] and S = L/C is the file transmission time
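Since every admitted transfer uses the full link capacity C, the system behaves as an M/G/1 queue, so the simulated mean waiting time can be checked against the Pollaczek–Khinchine formula. A sketch, assuming that model; the bounded Pareto moments follow from its density, and the arrival rate λ below is an arbitrary illustrative value:

```python
# Closed-form j-th moment of the bounded Pareto(alpha, k, p) distribution,
# then the Pollaczek-Khinchine mean waiting time E[W] = lam*E[S^2]/(2*(1-rho)).
def bp_moment(j: float, alpha: float, k: float, p: float) -> float:
    """E[X^j] for bounded Pareto (valid for j != alpha)."""
    norm = 1 - (k / p) ** alpha
    return alpha * k**alpha * (p**(j - alpha) - k**(j - alpha)) / ((j - alpha) * norm)

def pk_wait(lam: float, C: float, alpha: float, k: float, p: float) -> float:
    """Mean M/G/1 waiting time with service time S = L/C (sizes in bytes, C in B/s)."""
    es = bp_moment(1, alpha, k, p) / C        # E[S]
    es2 = bp_moment(2, alpha, k, p) / C**2    # E[S^2]
    rho = lam * es
    assert rho < 1, "queue must be stable"
    return lam * es2 / (2 * (1 - rho))

# k = 500 MB, p = 100 GB, alpha = 1.1, C = 10 Gbps, lam = 0.1 calls/s:
print(f"E[W] = {pk_wait(0.1, 10e9 / 8, 1.1, 500e6, 100e9):.2f} s")
```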

26 Sensitivity analysis
We carry out four experiments:
(1) To understand the impact of the requested bandwidth when all calls request the same constant maximum bandwidth
(2) To understand the impact of the allowed file-size range (i.e., the parameters k and p)
(3) To understand the impact of calls requesting three different maximum-bandwidth values (i.e., (1,2,4) and (1,5,10) channels)
(4) To understand the impact of the size of the discrete time unit

27 Sensitivity analysis, cont'd
First experiment: k = 500 MB, p = 100 GB, and a requested maximum bandwidth of 1, 5, 10, or 100 channels
- File latency: the mean waiting time across all files transferred
- Mean file transfer delay: file latency + mean service time (transmission delay)
(Figures: file latency comparison; mean file transfer delay comparison)

28 Sensitivity analysis, cont'd
Second experiment:
- Case 1: k = 500 MB, p = 100 GB, α = 1.1
- Case 2: k = 10 GB, p = 100 GB, α = 1.1
Question: in which case is the variance larger, at first glance?
(Figures: Case 1 and Case 2)

29 Sensitivity analysis, cont'd
Third experiment:
- Case 1: per-channel rate = 10 Gbps, C = 1 Tbps (100 channels)
- Case 2: per-channel rate = 1 Gbps, C = 100 Gbps (100 channels)
File throughput: the long-run average of the file size divided by the file-transfer delay
(Figures: Case 1 and Case 2)

30 Sensitivity analysis, cont'd
Fourth experiment. Assumptions:
1. All calls request the same maximum bandwidth, and the link capacity C = 100 channels
2. Vary the discrete time unit (T_discrete) as 0.05, 0.5, 1, and 2 sec

31 Comparison of VBLS with FBLS and PS
Basic simulation setup:
- File arrival requests ~ Poisson process
- Per-channel rate = 10 Gbps; requested maximum bandwidth = 1, 5, or 10 channels (for 30%, 30%, and 40% of calls, respectively)
- Bounded Pareto input parameters: α = 1.1, k = 5 MB, and p = 1 GB
- Packet-switched system: files are divided into 1500 B packets that arrive at an infinite packet buffer at a constant packet rate equal to the requested bandwidth divided by the packet length

32 Comparison of VBLS with FBLS and PS, cont'd
- The performance of the VBLS scheme proved much better than that of the FBLS scheme
- The throughput performance of VBLS is indistinguishable from packet switching. This illustrates our main point: by taking file sizes into account and varying the bandwidth allocation of each transfer over its duration, we mitigate the performance degradation usually associated with circuit-based methods.
- This work was presented at GAN'04 [3] and PFLDNET'04 [4] and will be published at ICC 2004 [5].

33 Call scheduling in the multiple-link case
- Centralized online greedy scheme: create a new available-bandwidth curve reflecting the available bandwidth across all links
- Distributed online greedy scheme: needs a mechanism to merge the TRC and TRL vectors across multiple switches
- Practical issues: clock synchronization; propagation delay

34 Some additional notation for the multiple-link case
- Time-range-capacity allocation: capacity assigned to call i in time range k, with a start and end time, by switch n; since the number of time ranges can change from link to link, a subscript n is added
- Time-range-capacity release: capacity to be released for call i, with a start and end time, at switch (n−1)
- M: multiplicative factor used in reserving TRCs; if M = 5, then the TRC vector reserved is 5 times the TRC allocation needed to transfer the file

35 VBLS example for M = 1 (figure)
Source S1 reaches destination D1 through switches SW1, SW2, and SW3; the shared link has channels Ch. 1 to Ch. 4, with available time ranges shown over t = 1 … 6. With M = 1, the setup is blocked (X).

36 VBLS example for M = 2 (figure)
Same topology (S1 → SW1 → SW2 → SW3 → D1, channels Ch. 1 to Ch. 4, t = 1 … 6); with M = 2, the larger reservation lets the setup proceed without blocking.

37 Traffic model
- Bounded Pareto input parameters: α = 1.1, k = 500 MB, and p = 100 GB
- Study traffic: the source's mean call arrival rate is 10 files/sec (constant)
- Interference traffic: the mean call arrival rates of the interference traffic are varied (5, 10, 15, 20, 25, 30, 35, and 40 files/sec)

38 Sensitivity analysis
We carry out two experiments:
(1) To understand the impact of M (the multiplicative factor): M = 2, 3, and 4
(2) To understand the impact of the discrete time unit (T_discrete): T_discrete = 0.01, 0.1, and 1 sec

39 Sensitivity analysis, cont'd
First experiment (impact of M): vary M as 2, 3, and 4, fixing the propagation delay at 5 ms and T_discrete at 10 ms.
(Figures: percentage of blocked calls; file throughput comparison)

40 Sensitivity analysis, cont'd
Second experiment (impact of T_discrete): vary T_discrete as 0.01, 0.1, and 1 sec, fixing the propagation delay at 5 ms and M at 3.
(Figures: percentage of blocked calls; file throughput comparison)

41 Future work
- Include a second class of user requests targeted at interactive, long holding-time applications, e.g., remote visualization and simulation steering, with their own request specification
- The simulation results for the multiple-link case are only preliminary
- More comparisons via simulation:
  - Vary the propagation delays of the links while fixing other parameters such as M and T_discrete
  - Compare TCP/IP (FAST TCP) against the VBLS scheme
  - Assume a finite buffer instead of an infinite one; account for congestion control and retransmission when packets are lost to buffer overflow (this might degrade the performance of the packet-switched system)

42 References
1. M. Veeraraghavan, H. Lee, and X. Zheng, "File transfers across optical circuit-switched networks," PFLDnet 2003, Feb. 3-4, 2003, Geneva, Switzerland.
2. M. Veeraraghavan, X. Zheng, H. Lee, M. Gardner, and W. Feng, "CHEETAH: Circuit-switched High-speed End-to-End Transport ArcHitecture," accepted for publication in Proc. of Opticomm 2003, Oct. 2003, Dallas, TX.
3. H. Lee, M. Veeraraghavan, E. K. P. Chong, and H. Li, "Lambda scheduling algorithm for file transfers on high-speed optical circuits," Workshop on Grids and Advanced Networks (GAN'04), April 19-22, 2004, Chicago, Illinois.
4. M. Veeraraghavan, X. Zheng, W. Feng, H. Lee, E. K. P. Chong, and H. Li, "Scheduling and transport for file transfers on high-speed optical circuits," PFLDnet 2004, Feb. 2004, Argonne, Illinois.
5. M. Veeraraghavan, H. Lee, E. K. P. Chong, and H. Li, "A varying-bandwidth list scheduling heuristic for file transfers," in Proc. of ICC 2004, June 20-24, 2004, Paris, France.
6. M. Veeraraghavan, X. Zheng, W. Feng, H. Lee, E. K. P. Chong, and H. Li, "Scheduling and transport for file transfers on high-speed optical circuits," Journal of Grid Computing (JOGC), 2004.

43 Thank you!