1 Network Review. Computer Networking: A Top-Down Approach, 4th edition. Jim Kurose, Keith Ross. Addison-Wesley, July 2007.

2 Network Review Overview: what’s the Internet?
network edge: hosts, access networks
network core: packet/circuit switching, Internet structure
performance: loss, delay, throughput
protocols: TCP and UDP

3 What’s the Internet: “nuts and bolts” view
millions of connected computing devices: hosts = end systems running network apps (PCs, servers, wireless laptops, cellular handhelds)
communication links: fiber, copper, radio, satellite; transmission rate = bandwidth
routers: forward packets (chunks of data)
(figure: home, institutional, and mobile access networks connected through regional and global ISPs)

4 What’s the Internet: “nuts and bolts” view
protocols control sending, receiving of msgs: e.g., TCP, IP, HTTP, Skype, Ethernet
Internet: "network of networks"; loosely hierarchical; public Internet versus private intranet
Internet standards: RFC (Request for Comments); IETF (Internet Engineering Task Force)

5 What’s the Internet: a service view
communication infrastructure enables distributed applications: Web, VoIP, e-mail, games, e-commerce, file sharing
communication services provided to apps: reliable data delivery from source to destination; "best effort" (unreliable) data delivery

6 A closer look at network structure:
network edge: applications and hosts
access networks, physical media: wired, wireless communication links
network core: interconnected routers; a network of networks

7 The network edge
end systems (hosts): run application programs (e.g. Web, e-mail) at the "edge of network"
client/server model: client host requests and receives service from an always-on server, e.g. Web browser/server, e-mail client/server
peer-peer model: minimal (or no) use of dedicated servers, e.g. Skype, BitTorrent

8 Network edge: reliable data transfer service
Goal: data transfer between end systems
handshaking: setup (prepare for) data transfer ahead of time ("hello, hello back" human protocol); sets up "state" in the two communicating hosts
TCP (Transmission Control Protocol): the Internet's reliable data transfer service
TCP service [RFC 793]: reliable, in-order byte-stream data transfer (loss handled by acknowledgements and retransmissions); flow control: sender won't overwhelm receiver; congestion control: senders "slow down sending rate" when the network is congested

9 Network edge: best effort (unreliable) data transfer service
Goal: data transfer between end systems (same as before!)
UDP (User Datagram Protocol) [RFC 768]: connectionless, unreliable data transfer, no flow control, no congestion control
Apps using TCP: HTTP (Web), FTP (file transfer), Telnet (remote login), SMTP (e-mail)
Apps using UDP: streaming media, teleconferencing, DNS, Internet telephony

10 Access networks and physical media
Access networks: the physical links that connect an end system to the first router (the "edge router") on a path from the end system to any other distant end system.
Q: how to connect end systems to the edge router? residential access nets; institutional access networks (school, company); mobile access networks
Keep in mind: bandwidth (bits per second) of the access network? shared or dedicated?

11 Residential access: point to point access
Dialup via modem: up to 56 Kbps direct access to router (often less); can't surf and phone at the same time: can't be "always on"
DSL (digital subscriber line): deployment by the telephone company (typically); up to 1 Mbps upstream (today typically < 256 kbps); up to 8 Mbps downstream (today typically < 1 Mbps); dedicated physical line to the telephone central office

12 Residential access: cable modems
HFC: hybrid fiber coax; asymmetric: up to 30 Mbps downstream, 2 Mbps upstream
network of cable and fiber attaches homes to the ISP router; homes share access to the router
deployment: available via cable TV companies

13 Company access: local area networks
company/university local area network (LAN) connects end systems to the edge router
Ethernet: 10 Mbps, 100 Mbps, 1 Gbps, 10 Gbps
modern configuration: end systems connect into an Ethernet switch
LANs: chapter 5

14 Wireless access networks
shared wireless access network connects end systems to a router via a base station, also known as an "access point"
wireless LANs: 802.11b/g (WiFi): 11 or 54 Mbps
wider-area wireless access: provided by a telco operator; ~1 Mbps over cellular systems (EVDO, HSDPA); next up (?): WiMAX (10's of Mbps) over a wide area
(figure: mobile hosts, base station, router)

15 Characteristics of selected wireless link standards
(chart: data rate versus range for selected wireless link standards)
Data rate (Mbps): 802.15: 1; 802.11b: 5-11; 802.11a,g: 54; 802.11n: 200; 2G cellular (IS-95, CDMA, GSM): .056; 3G cellular (UMTS/WCDMA, CDMA2000): .384; enhanced 3G cellular (UMTS/WCDMA-HSDPA, CDMA2000-1xEVDO): ~4; also shown: 802.16 (WiMAX) and point-to-point 802.11a,g data links
Range categories: indoor 10-30 m; outdoor 50-200 m; mid-range outdoor 200 m - 4 km; long-range outdoor 5 km - 20 km

16 Home networks
Typical home network components: DSL or cable modem; router/firewall/NAT; Ethernet; wireless access point; wireless laptops
(figure: to/from cable headend, cable modem, router/firewall, wireless access point, Ethernet)

17 The Network Core mesh of interconnected routers
the fundamental question: how is data transferred through the net?
circuit switching: dedicated circuit per call (telephone net)
packet switching: data sent through the net in discrete "chunks"

18 Network Core: Packet Switching
each end-to-end data stream is divided into packets; user A's and B's packets share network resources
each packet uses the full link bandwidth; resources are used as needed
resource contention: aggregate resource demand can exceed the amount available; congestion: packets queue, wait for link use
store and forward: packets move one hop at a time; a node receives the complete packet before forwarding
(contrast with circuit switching: bandwidth division into "pieces", dedicated allocation, resource reservation)

19 Packet Switching: Statistical Multiplexing
(figure: A and B send over 100 Mb/s Ethernet into a 1.5 Mb/s link; a queue of packets waits for the output link)
Sequence of A & B packets does not have a fixed pattern; bandwidth is shared on demand: statistical multiplexing.
TDM, by contrast: each host gets the same slot in a revolving TDM frame.

20 Packet-switching: store-and-forward
(figure: source, two routers, destination joined by three links of rate R; packet of length L)
takes L/R seconds to transmit (push out) a packet of L bits onto a link at R bps
store and forward: the entire packet must arrive at a router before it can be transmitted on the next link
delay = 3L/R (assuming zero propagation delay)
Example: L = 7.5 Mbits, R = 1.5 Mbps, transmission delay = 15 sec
more on delay shortly …
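A quick check of the slide's arithmetic in Python (the three-link path and the zero-propagation assumption are the slide's):

# Store-and-forward delay: each router must receive the whole packet
# before pushing it onto the next link (propagation delay ignored).
L = 7.5e6      # packet length in bits (7.5 Mbits, from the slide)
R = 1.5e6      # link rate in bits per second (1.5 Mbps)
hops = 3       # source -> router -> router -> destination = 3 links

per_link = L / R            # 5 seconds to push the packet onto one link
total = hops * per_link     # 15 seconds end to end
print(per_link, total)      # 5.0 15.0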

21 How do loss and delay occur?
packets queue in router buffers
when the packet arrival rate to the link exceeds the output link capacity, packets queue and wait for their turn
(figure: at A's output link, one packet is being transmitted (delay), others are queueing (delay); if no free buffers remain, arriving packets are dropped (loss))

22 Four sources of packet delay
1. nodal processing: check bit errors, determine output link
2. queueing: time waiting at the output link for transmission; depends on the congestion level of the router
(figure: nodal processing, queueing, transmission, and propagation delays along the path from A to B)

23 Delay in packet-switched networks
3. transmission delay: R = link bandwidth (bps), L = packet length (bits); time to send bits into the link = L/R
4. propagation delay: d = length of physical link, s = propagation speed in the medium (~2x10^8 m/sec); propagation delay = d/s
Note: s and R are very different quantities!

24 Nodal delay
d_nodal = d_proc + d_queue + d_trans + d_prop
d_proc = processing delay: typically a few microsecs or less
d_queue = queuing delay: depends on congestion
d_trans = transmission delay = L/R, significant for low-speed links
d_prop = propagation delay: a few microsecs to hundreds of msecs
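A small sketch of the nodal-delay sum; the link rate, link length, packet size, and queueing value below are assumed purely for illustration:

# Total nodal delay = processing + queueing + transmission + propagation.
L = 1500 * 8        # packet length: 1500 bytes, in bits (assumed)
R = 10e6            # link bandwidth: 10 Mbps (assumed)
d_len = 100e3       # link length: 100 km (assumed)
s = 2e8             # propagation speed ~2x10^8 m/s (from the slide)

d_proc  = 2e-6              # a few microseconds (slide: "a few microsecs or less")
d_queue = 0.0               # assume an empty queue
d_trans = L / R             # 1.2 ms
d_prop  = d_len / s         # 0.5 ms

d_nodal = d_proc + d_queue + d_trans + d_prop
print(f"{d_nodal*1e3:.3f} ms")   # ~1.702 ms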

25 Queueing delay (revisited)
R = link bandwidth (bps), L = packet length (bits), a = average packet arrival rate
traffic intensity = La/R
La/R ~ 0: average queueing delay small
La/R -> 1: delays become large
La/R > 1: more "work" arriving than can be serviced, average delay infinite!
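To see how La/R behaves, here is a minimal sketch with assumed packet sizes and arrival rates:

# Traffic intensity La/R governs queueing delay: small -> little delay,
# approaching 1 -> delays grow, above 1 -> the queue grows without bound.
L = 1500 * 8          # bits per packet (assumed)
R = 1e6               # link bandwidth: 1 Mbps (assumed)

for a in (10, 75, 90):                 # packets/sec arrival rates (assumed)
    intensity = L * a / R
    print(f"a={a:>3} pkt/s  La/R={intensity:.2f}")
# a= 10 pkt/s  La/R=0.12   (queueing delay small)
# a= 75 pkt/s  La/R=0.90   (delays become large)
# a= 90 pkt/s  La/R=1.08   (more work arrives than can be served)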

26 “Real” Internet delays and routes
What do "real" Internet delay & loss look like?
Traceroute program: provides delay measurements from the source to each router along the end-to-end Internet path towards the destination. For each router i on the path: it sends three packets that will reach router i; router i returns the packets to the sender; the sender times the interval between transmission and reply (3 probes per router).

27 “Real” Internet delays and routes
traceroute: gaia.cs.umass.edu to www.eurecom.fr; three delay measurements per hop (IP addresses omitted)
1 cs-gw.cs.umass.edu 1 ms 1 ms 2 ms
2 border1-rt-fa5-1-0.gw.umass.edu 1 ms 1 ms 2 ms
3 cht-vbns.gw.umass.edu 6 ms 5 ms 5 ms
4 jn1-at….wor.vbns.net 16 ms 11 ms 13 ms
5 jn1-so….wae.vbns.net 21 ms 18 ms 18 ms
6 abilene-vbns.abilene.ucaid.edu 22 ms 18 ms 22 ms
7 nycm-wash.abilene.ucaid.edu 22 ms 22 ms 22 ms
8 (unnamed router) 104 ms 109 ms 106 ms   <- trans-oceanic link
9 de2-1.de1.de.geant.net 109 ms 102 ms 104 ms
10 de.fr1.fr.geant.net 113 ms 121 ms 114 ms
11 renater-gw.fr1.fr.geant.net 112 ms 114 ms 112 ms
12 nio-n2.cssi.renater.fr 111 ms 114 ms 116 ms
13 nice.cssi.renater.fr 123 ms 125 ms 124 ms
14 r3t2-nice.cssi.renater.fr 126 ms 126 ms 124 ms
15 eurecom-valbonne.r3t2.ft.net 135 ms 128 ms 133 ms
16 (unnamed router) 126 ms 128 ms 126 ms
17 * * *
18 * * *
19 fantasia.eurecom.fr 132 ms 128 ms 136 ms
* means no response (probe lost, router not replying)
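To collect a trace like this yourself, something along these lines works; it assumes a Unix-like host with the traceroute tool installed, and the destination and parsing are only illustrative:

# Minimal sketch: run the system traceroute and pull out the three per-hop
# RTT measurements it reports.
import re
import subprocess

dest = "www.eurecom.fr"   # example destination, as in the slide's trace
out = subprocess.run(["traceroute", dest], capture_output=True, text=True).stdout

for line in out.splitlines():
    rtts = re.findall(r"([\d.]+) ms", line)
    if rtts:                                   # skip the header and "* * *" lines
        print(line.split()[0], "->", rtts, "ms")   # hop number and its probe RTTs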

28 Packet loss
the queue (aka buffer) preceding a link has finite capacity
a packet arriving to a full queue is dropped (aka lost)
a lost packet may be retransmitted by the previous node, by the source end system, or not at all
(figure: buffer (waiting area) at A's outgoing link, one packet being transmitted; a packet arriving to a full buffer is lost)

29 Throughput
throughput: rate (bits/time unit) at which bits are transferred between sender and receiver
instantaneous: rate at a given point in time
average: rate over a long(er) period of time
(figure: a server with a file of F bits to send pushes bits (fluid) into a pipe of capacity Rs bits/sec, which feeds a client link of capacity Rc bits/sec)

30 Throughput (more)
Rs < Rc: what is the average end-to-end throughput? Rs bits/sec
Rs > Rc: what is the average end-to-end throughput? Rc bits/sec
bottleneck link: the link on the end-to-end path that constrains end-to-end throughput

31 Throughput: Internet scenario
per-connection end-to-end throughput: min(Rc, Rs, R/10)
in practice: Rc or Rs is often the bottleneck
(figure: 10 connections (fairly) share a backbone bottleneck link of R bits/sec; each server has access rate Rs, each client Rc)
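A minimal sketch of the bottleneck calculation; the Rs, Rc, and R values are assumptions, not from the slide:

# End-to-end throughput is set by the slowest (bottleneck) rate on the path.
# In the slide's scenario, 10 connections fairly share the backbone link R.
def end_to_end_throughput(Rs, Rc, R, n_connections=10):
    return min(Rs, Rc, R / n_connections)

# Assumed example: server links 2 Mbps, client links 1 Mbps, backbone 5 Mbps
print(end_to_end_throughput(Rs=2e6, Rc=1e6, R=5e6))   # 500000.0 -> backbone share is the bottleneck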

32 Protocol "Layers"
Networks are complex! many "pieces": hosts, routers, links of various media, applications, protocols, hardware, software

33 Why layering? Dealing with complex systems:
explicit structure allows identification, relationship of complex system’s pieces layered reference model for discussion modularization eases maintenance, updating of system change of implementation of layer’s service transparent to rest of system e.g., change in gate procedure doesn’t affect rest of system layering considered harmful?

34 Internet protocol stack
application: supporting network applications (FTP, SMTP, HTTP)
transport: process-process data transfer (TCP, UDP)
network: routing of datagrams from source to destination (IP, routing protocols)
link: data transfer between neighboring network elements (PPP, Ethernet)
physical: bits "on the wire"

35 ISO/OSI reference model
presentation: allow applications to interpret the meaning of data, e.g., encryption, compression, machine-specific conventions
session: synchronization, checkpointing, recovery of data exchange
Internet stack is "missing" these layers! these services, if needed, must be implemented in the application. needed?
(figure: seven-layer OSI stack: application, presentation, session, transport, network, link, physical)

36 Encapsulation
(figure: encapsulation from source to destination through a switch and a router)
at the source, the application message M is handed down the stack: the transport layer adds header Ht to form a segment (Ht|M), the network layer adds Hn to form a datagram (Hn|Ht|M), and the link layer adds Hl to form a frame (Hl|Hn|Ht|M); a switch processes the link-layer header, a router the link- and network-layer headers, and the destination strips the headers back off layer by layer.
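A toy sketch of encapsulation as header prepending; the header contents are invented placeholders, not real protocol headers:

# Each layer prepends its own header to the payload handed down from above.
message = b"GET /index.html"                 # application-layer message M

segment  = b"[TCP hdr]" + message            # transport layer: Ht | M
datagram = b"[IP hdr]"  + segment            # network layer:   Hn | Ht | M
frame    = b"[Eth hdr]" + datagram           # link layer:      Hl | Hn | Ht | M

print(frame)
# b'[Eth hdr][IP hdr][TCP hdr]GET /index.html'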

37 Internet apps: application, transport protocols
Application | Application-layer protocol | Underlying transport protocol
e-mail | SMTP [RFC 2821] | TCP
remote terminal access | Telnet [RFC 854] | TCP
Web | HTTP [RFC 2616] | TCP
file transfer | FTP [RFC 959] | TCP
streaming multimedia | proprietary (e.g. RealNetworks) | TCP or UDP
Internet telephony | proprietary (e.g., Vonage, Dialpad) | typically UDP

38 Internet transport-layer protocols
reliable, in-order delivery (TCP): congestion control, flow control, connection setup
unreliable, unordered delivery (UDP): no-frills extension of "best-effort" IP
services not available: delay guarantees, bandwidth guarantees
(figure: logical end-to-end transport between the application endpoints, across routers that implement only the network, data link, and physical layers)

39 UDP: User Datagram Protocol [RFC 768]
"no frills," "bare bones" Internet transport protocol
"best effort" service: UDP segments may be lost or delivered out of order to the app
connectionless: no handshaking between UDP sender and receiver; each UDP segment handled independently of the others
Why is there a UDP? no connection establishment (which can add delay); simple: no connection state at sender or receiver; small segment header; no congestion control: UDP can blast away as fast as desired

40 UDP: more
often used for streaming multimedia apps: loss tolerant, rate sensitive
other UDP uses: DNS, SNMP
reliable transfer over UDP: add reliability at the application layer; application-specific error recovery!
UDP segment format (32-bit rows): source port #, dest port #; length (in bytes of the UDP segment, including header), checksum; application data (message)
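A minimal UDP echo exchange using the standard Python socket API illustrates the connectionless, no-handshake service described above; the port number and addresses are arbitrary, and the two halves are meant to run in separate processes (server first):

import socket

# --- server side (run first, in its own process) ---
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 9999))                 # arbitrary port
data, addr = srv.recvfrom(2048)               # blocks until one datagram arrives
srv.sendto(data.upper(), addr)                # echo it back, uppercased

# --- client side (separate process) ---
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"hello udp", ("127.0.0.1", 9999)) # no connect, no handshake
reply, _ = cli.recvfrom(2048)
print(reply)                                  # b'HELLO UDP'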

41 TCP: Overview  RFCs: 793, 1122, 1323, 2018, 2581
point-to-point: one sender, one receiver
reliable, in-order byte stream: no "message boundaries"
pipelined: TCP congestion and flow control set the window size
send & receive buffers
full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
connection-oriented: handshaking (exchange of control msgs) initializes sender and receiver state before data exchange
flow controlled: sender will not overwhelm receiver
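For contrast with the UDP sketch above, the same echo exchange over TCP: connect() and accept() perform the handshake, and the bytes then arrive as a reliable, in-order stream. Addresses are again arbitrary and the halves run in separate processes:

import socket

# --- server (run first, in its own process) ---
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 10000))
srv.listen(1)
conn, addr = srv.accept()          # completes the connection setup
data = conn.recv(2048)
conn.sendall(data.upper())
conn.close()

# --- client (separate process) ---
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 10000))  # initiates the handshake
cli.sendall(b"hello tcp")
print(cli.recv(2048))              # b'HELLO TCP'
cli.close()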

42 TCP segment structure
fields (32-bit rows): source port #, dest port #; sequence number; acknowledgement number; header length, unused bits, flag bits (URG, ACK, PSH, RST, SYN, FIN), receive window; checksum, urgent data pointer; options (variable length); application data (variable length)
sequence and acknowledgement numbers count bytes of data (not segments!)
URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection establishment (setup, teardown commands)
receive window: # bytes receiver is willing to accept
checksum: Internet checksum (as in UDP)

43 TCP seq. #'s and ACKs
Seq. #'s: byte-stream "number" of the first byte in the segment's data
ACKs: seq # of the next byte expected from the other side; cumulative ACK
Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say; up to the implementor
simple telnet scenario: the user at host A types 'C'; A sends Seq=42, ACK=79, data='C'; host B ACKs receipt of 'C' and echoes back 'C' with Seq=79, ACK=43, data='C'; host A ACKs receipt of the echoed 'C' with Seq=43, ACK=80

44 TCP: retransmission scenarios
lost ACK scenario: host A sends Seq=92, 8 bytes of data; B's ACK=100 is lost; A times out and retransmits Seq=92, 8 bytes of data; B ACKs again with ACK=100
premature timeout scenario: A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); B returns ACK=100 and ACK=120, but A's timer for Seq=92 expires first, so A retransmits Seq=92; SendBase advances from 100 to 120 as the ACKs arrive

45 TCP retransmission scenarios (more)
cumulative ACK scenario: A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); ACK=100 is lost, but ACK=120 arrives before the timeout, so SendBase advances to 120 and no retransmission is needed

46 Fast Retransmit Time-out period often relatively long:
long delay before resending a lost packet
detect lost segments via duplicate ACKs: a sender often sends many segments back-to-back; if a segment is lost, there will likely be many duplicate ACKs
if the sender receives 3 ACKs for the same data, it supposes that the segment after the ACKed data was lost
fast retransmit: resend the segment before the timer expires
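A sketch of the duplicate-ACK counting rule; the state layout and names here are illustrative, not from any TCP implementation:

# Fast retransmit: on the third duplicate ACK for the same byte, resend the
# first unacknowledged segment without waiting for the timer.
DUP_ACK_THRESHOLD = 3

def retransmit(seq_no):
    print(f"fast retransmit of segment starting at byte {seq_no}")

def on_ack(ack_no, state):
    if ack_no > state["send_base"]:           # new data acknowledged
        state["send_base"] = ack_no
        state["dup_acks"] = 0
    else:                                     # duplicate ACK
        state["dup_acks"] += 1
        if state["dup_acks"] == DUP_ACK_THRESHOLD:
            retransmit(state["send_base"])    # resend before the timeout fires

state = {"send_base": 100, "dup_acks": 0}
for ack in (100, 100, 100):                   # three duplicate ACKs for byte 100
    on_ack(ack, state)                        # triggers one fast retransmit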

47 TCP congestion control: additive increase, multiplicative decrease
Approach: increase transmission rate (window size), probing for usable bandwidth, until loss occurs
additive increase: increase CongWin by 1 MSS every RTT until loss is detected
multiplicative decrease: cut CongWin in half after loss
sawtooth behavior: probing for bandwidth (figure: congestion window size over time)

48 TCP Congestion Control: details
sender limits transmission: LastByteSent - LastByteAcked <= CongWin
roughly, rate = CongWin / RTT bytes/sec
CongWin is dynamic, a function of perceived network congestion
How does the sender perceive congestion? loss event = timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event
three mechanisms: AIMD, slow start, conservative behavior after timeout events

49 TCP Slow Start When connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes & RTT = 200 msec, so initial rate = 20 kbps
available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
when the connection begins, increase the rate exponentially fast until the first loss event

50 TCP Slow Start (more) When connection begins, increase rate exponentially until first loss event: double CongWin every RTT done by incrementing CongWin for every ACK received Summary: initial rate is slow but ramps up exponentially fast Host A Host B one segment RTT two segments four segments time

51 Refinement
Q: when should the exponential increase switch to linear? A: when CongWin gets to 1/2 of its value before the timeout.
Implementation: variable Threshold; at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

52 Refinement: inferring loss
after 3 dup ACKs: CongWin is cut in half; the window then grows linearly
but after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially to a threshold, then grows linearly
Philosophy: 3 dup ACKs indicate the network is capable of delivering some segments; a timeout indicates a "more alarming" congestion scenario

53 Summary: TCP Congestion Control
When CongWin is below Threshold, sender in slow-start phase, window grows exponentially. When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly. When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold. When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
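The rules above can be condensed into a toy model (units are MSS; this is a sketch of the window arithmetic, not a TCP implementation):

def on_rtt(congwin, threshold):
    """Window growth for one loss-free RTT."""
    if congwin < threshold:
        return congwin * 2          # slow start: exponential growth
    return congwin + 1              # congestion avoidance: linear growth

def on_triple_dup_ack(congwin):
    threshold = congwin / 2
    return threshold, threshold     # new (CongWin, Threshold)

def on_timeout(congwin):
    threshold = congwin / 2
    return 1, threshold             # CongWin back to 1 MSS

congwin, threshold = 1, 16
trace = []
for _ in range(6):                  # six loss-free RTTs
    congwin = on_rtt(congwin, threshold)
    trace.append(congwin)
print(trace)                        # [2, 4, 8, 16, 17, 18]
print(on_triple_dup_ack(18))        # (9.0, 9.0): window and threshold halved
print(on_timeout(18))               # (1, 9.0): back to 1 MSS, threshold halved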

54 TCP throughput What's the average throughput of TCP as a function of window size and RTT? Ignore slow start. Let W be the window size when loss occurs. When the window is W, throughput is W/RTT. Just after a loss, the window drops to W/2, throughput to W/2RTT. Average throughput: 0.75 W/RTT
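Plugging assumed numbers into the 0.75 W/RTT result:

# Window oscillates between W/2 and W, so the average rate is 0.75 * W / RTT.
W   = 10 * 1500 * 8      # window at loss: 10 segments of 1500 bytes, in bits (assumed)
RTT = 0.1                # 100 ms round-trip time (assumed)
print(0.75 * W / RTT)    # 900000.0 bits/sec, i.e. 0.9 Mbps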

55 TCP Throughput
In the case of loss, average throughput of a TCP connection ≈ 1.22 · MSS / (RTT · √L), where L is the loss rate, RTT the round-trip time, and MSS the maximum segment size
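And the loss-based formula with assumed example values:

# Loss-based TCP throughput estimate: 1.22 * MSS / (RTT * sqrt(L)).
import math

MSS = 1500 * 8       # 1500-byte segments, in bits (assumed)
RTT = 0.1            # 100 ms round-trip time (assumed)
L   = 1e-5           # loss rate: one loss per 100,000 segments (assumed)

throughput = 1.22 * MSS / (RTT * math.sqrt(L))
print(f"{throughput/1e6:.1f} Mbps")   # ~46.3 Mbps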

56 TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
(figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R)

57 Fairness (more)
Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this. Example: a link of rate R supporting 9 connections; a new app asking for 1 TCP gets rate R/10; a new app asking for 11 TCPs gets R/2! (arithmetic sketched below)
Fairness and UDP: multimedia apps often do not use TCP; they do not want their rate throttled by congestion control. Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss. Research area: TCP friendly
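The arithmetic behind the parallel-connection example, with the bottleneck rate normalized to 1:

# A link of rate R already carries 9 TCP connections; a new application's
# share depends on how many connections it opens.
R = 1.0                      # normalize the bottleneck rate to 1
existing = 9

for new_conns in (1, 11):
    total = existing + new_conns
    share = new_conns * (R / total)   # each connection gets roughly R/total
    print(new_conns, "connection(s) ->", round(share, 2), "of R")
# 1 connection(s)  -> 0.1 of R   (R/10)
# 11 connection(s) -> 0.55 of R  (roughly R/2)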

