Transport Layer: Moving Segments
Transport Layer Protocols
– Provide a logical communication link between processes running on different hosts, as if they were directly connected
– Implemented in end systems, not in network routers
– (Possibly) break messages into smaller units, adding a transport-layer header to create a segment
– Two protocols: TCP and UDP
An Analogy
Ann, her brothers and sisters live on the West Coast; Bill and his family on the East Coast
– Application messages = letters in envelopes
– Processes = cousins
– Hosts (end systems) = houses
– Transport-layer protocol = Ann and Bill
– Network-layer protocol = the Postal Service
UDP (User Datagram Protocol)
– Provides an unreliable, connectionless service to a process
– Provides integrity checking by including error-detection fields in the segment header
TCP (Transmission Control Protocol)
– Provides reliable data transfer via flow control, sequence numbers, acknowledgments, and timers
– Provides congestion control
– Provides integrity through error checking
Multiplexing and Demultiplexing
– Multiplexing is the job of gathering data chunks through sockets and creating segments
– Demultiplexing is delivering data chunks (segments minus the transport header) to the correct socket
Segment Identification
– UDP: destination IP address and destination port number
– TCP: source IP, source port, destination IP, and destination port
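The difference in demultiplexing keys can be sketched as lookup tables. This is a hypothetical illustration (the IPs, ports, and socket names are made up), not real kernel code:

```python
# UDP demultiplexes on the destination address alone;
# TCP demultiplexes on the full 4-tuple.
udp_sockets = {("10.0.0.1", 53): "dns_socket"}                 # UDP key
tcp_sockets = {("10.0.0.2", 5000, "10.0.0.1", 80): "conn_a"}   # TCP key

def demux_udp(dst_ip, dst_port):
    # Segments from *any* sender to this destination land in one socket.
    return udp_sockets.get((dst_ip, dst_port))

def demux_tcp(src_ip, src_port, dst_ip, dst_port):
    # Segments from different senders go to different sockets.
    return tcp_sockets.get((src_ip, src_port, dst_ip, dst_port))

print(demux_udp("10.0.0.1", 53))                       # dns_socket
print(demux_tcp("10.0.0.2", 5000, "10.0.0.1", 80))     # conn_a
```

This is why a single UDP socket can serve many clients, while a TCP server holds one connection socket per client.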
TCP Handshake
– The server application has a “welcome socket” that waits for connection requests
– The client generates a connection-establishment request (which includes the source IP and port at the client)
– The server creates a new socket dedicated to this client
– Both sides allocate resources for the connection
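The welcome-socket pattern can be shown with Python's standard `socket` module. A minimal sketch on the loopback interface (port 0 is assumed free so the OS picks an ephemeral port):

```python
import socket

# Server side: a "welcome socket" that waits for connection requests.
welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
welcome.bind(("127.0.0.1", 0))      # port 0: let the OS choose
welcome.listen(1)
addr = welcome.getsockname()

# Client side: connect() triggers the kernel's 3-way handshake.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)

# accept() hands back a NEW socket dedicated to this one client.
conn, peer = welcome.accept()
print(peer[0])                      # the client's IP as seen by the server

client.close(); conn.close(); welcome.close()
```

Note that `accept()` returns a fresh socket; the welcome socket stays open to take further connection requests.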
UDP
– Defined in RFC 768
– Does about as little as a transport protocol can do
– Attaches source and destination port numbers and passes the segment to the network layer
– No handshaking before a segment is sent
– DNS uses UDP
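The "no handshaking" point is visible in code: a UDP sender can transmit immediately, with no `connect()`/`accept()` step. A small loopback sketch (ports chosen by the OS, so no assumptions about free port numbers):

```python
import socket

# Receiver: just bind a datagram socket; no listen(), no accept().
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))

# Sender: no handshake — attach the destination and send.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())

data, src = rx.recvfrom(2048)
print(data)        # b'hello'
tx.close(); rx.close()
```

On a real network this datagram could be lost, duplicated, or reordered and UDP would do nothing about it; the loopback interface just makes the demo reliable.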
Why Use UDP?
– Finer application-level control over what data is sent, and when
– No connection establishment (thus no setup delay)
– No connection state to maintain
– Only 8 bytes of packet overhead
– Out-of-order segments can simply be discarded
– But: the lack of congestion control can lead to high loss rates when the network is busy
UDP Checksum
– For error detection only; it cannot fix errors
– Add the segment’s 16-bit words, with wrap-around carry
– Take the 1’s complement (invert every bit)
– Send this value in the checksum field
– At the receiver, all words (including the checksum) are added; the result should be 1111111111111111 (all 16 bits set)
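The steps above can be sketched directly. The two example words are arbitrary; the point is the wrap-around add and the all-ones check at the receiver:

```python
def udp_checksum(words):
    """16-bit one's-complement sum of 16-bit words, then inverted."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap-around carry
    return ~total & 0xFFFF                          # 1's complement

words = [0x4500, 0x0073]            # arbitrary example data
ck = udp_checksum(words)

# Receiver: add everything, including the checksum — result is all ones.
total = ck
for w in words:
    total += w
    total = (total & 0xFFFF) + (total >> 16)
print(hex(total))                   # 0xffff
```

If any bit flips in transit, the receiver's sum falls short of 0xFFFF and the segment is discarded.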
Principles of Reliable Data Transfer
– No transferred data bits are corrupted
– All bits are delivered in the order sent
– This gets complicated because the lower layer (IP) is a best-effort (no guarantees) delivery service
Stop-and-Wait Protocol
– Sender sends a packet
– Receiver gets the packet and checks it for accuracy
– Receiver sends an acknowledgment back
– If the sender times out, it presumes a NAK and resends the packet
– Sequence numbers identify packets that are sent or resent
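A minimal simulation of the alternating-bit version of stop-and-wait, with loss modeled as a coin flip (the loss rate and messages are illustrative, not from the slides):

```python
import random

random.seed(1)   # deterministic demo

def stop_and_wait(messages, loss_rate=0.3):
    """Resend the same sequence number until its ACK arrives."""
    delivered, seq, sends = [], 0, 0
    for msg in messages:
        while True:
            sends += 1
            lost = random.random() < loss_rate   # packet OR its ACK lost
            if not lost:
                delivered.append((seq, msg))     # receiver accepts and ACKs
                break                            # ACK arrived: move on
            # otherwise: timeout fires, presume NAK, resend same seq
        seq ^= 1                                 # alternate 0 / 1
    return delivered, sends

d, n = stop_and_wait(["a", "b", "c"])
print([m for _, m in d], n)
```

With one bit of sequence number, the receiver can tell a retransmission from a new packet, which is all stop-and-wait needs.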
A Little Math
– West Coast to East Coast transfer; RTT = 30 ms; channel rate of 1 Gbps; packet size of 1000 bytes (8000 bits)
– Time needed to transmit the packet: 8 μs
– The packet reaches the East Coast after 15.008 ms
– The ACK gets back to the sender after 30.008 ms
– Utilization is 0.00027; effective throughput is 267 kbps (p. 215)
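The slide's numbers can be recomputed directly: the sender is busy for one transmission time out of every transmission-time-plus-RTT interval.

```python
R = 1e9                  # link rate: 1 Gbps
L = 8000                 # packet size in bits (1000 bytes)
rtt = 0.030              # 30 ms round-trip time

t_trans = L / R          # 8 microseconds to push the packet onto the link
t_total = t_trans + rtt  # sender then idles until the ACK returns

util = t_trans / t_total
throughput = util * R    # effective bits/sec

print(round(util, 5), round(throughput / 1000))   # 0.00027 267
```

The sender uses the 1 Gbps link for only 0.027% of the time, which is the motivation for pipelining on the next slide.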
Pipelining
– The range of sequence numbers must increase
– Packets may have to be buffered on both sides of the link
– Errors are handled with either Go-Back-N or Selective Repeat
Go-Back-N (GBN)
– The sender may transmit multiple packets but is constrained to have no more than some maximum, N, un-ACKed
– N is the window size; GBN is a sliding-window protocol
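A toy model of the GBN sender, counting transmissions. A single loss forces the sender to go back and resend everything from the lost packet onward (the packet counts and window size are made-up examples):

```python
def gbn_send(n_packets, window, lost):
    """Count sends for a GBN sender; `lost` holds seqs dropped once each."""
    base, sends = 0, 0
    while base < n_packets:
        in_flight = range(base, min(base + window, n_packets))
        sends += len(in_flight)                    # send the whole window
        first_loss = next((s for s in in_flight if s in lost), None)
        if first_loss is None:
            base = min(base + window, n_packets)   # cumulative ACK slides base
        else:
            lost.discard(first_loss)               # next try succeeds
            base = first_loss                      # go back to the lost packet
    return sends

print(gbn_send(6, window=3, lost=set()))   # 6  (no loss: each sent once)
print(gbn_send(6, window=3, lost={1}))     # 8  (packets 1 and 2 sent twice)
```

The extra retransmission of packet 2, which arrived fine, is exactly the waste that Selective Repeat avoids.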
Selective Repeat
– Avoids unnecessary retransmission by having the sender retransmit only those packets it suspects were lost or corrupted
– The big difference from GBN: the receiver buffers (keeps) out-of-order packets
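The receiver side of that difference can be sketched in a few lines: out-of-order packets are held in a buffer and delivered once the gap fills (sequence numbers and payloads here are illustrative):

```python
def sr_receive(arrivals):
    """Buffer out-of-order packets; deliver in order as gaps fill."""
    buffer, expected, delivered = {}, 0, []
    for seq, data in arrivals:
        buffer[seq] = data             # ACK and keep, even if out of order
        while expected in buffer:      # deliver any now-in-order run
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered

# Packet 1 arrives before packet 0 and is buffered, not discarded.
print(sr_receive([(1, "b"), (0, "a"), (2, "c")]))   # ['a', 'b', 'c']
```

A GBN receiver would have thrown packet 1 away and waited for the sender to resend it.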
TCP
– The client first sends a TCP segment
– The server responds with a second segment
– The client responds with a third segment (which can optionally carry message data)
– The connection is point-to-point, not one-to-many
– The connection is full duplex
TCP Timer
– We need to know when data is lost, so we measure the round-trip time and set a timeout
– A timer expiration could be due to congestion in the network, so…
– On a timeout, the timeout value is doubled for the next interval; it returns to the original value once an ACK is received
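The doubling-and-reset rule can be traced with a tiny state machine (the base timeout value and event sequence are illustrative):

```python
def next_timeouts(base, events):
    """Double the timeout on each expiry; reset to base when an ACK arrives."""
    t, history = base, []
    for ev in events:
        if ev == "timeout":
            t *= 2            # back off: the loss may be congestion
        elif ev == "ack":
            t = base          # network is responsive again: restore base
        history.append(t)
    return history

print(next_timeouts(1.0, ["timeout", "timeout", "ack"]))   # [2.0, 4.0, 1.0]
```

The exponential backoff keeps a congested network from being hammered with retransmissions at a fixed rate.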
Flow Control
– The receiver has a receive buffer; if the application is slow to read, the buffer can overflow
– The receiver sends the value of its receive window to the sender (with each ACK)
– The sender ensures that un-ACKed data does not exceed the receive-window size
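The sender's rule reduces to one comparison. A hedged sketch with made-up byte counts:

```python
def can_send(unacked_bytes, rwnd, chunk):
    """Send only if the new chunk keeps un-ACKed data within rwnd."""
    return unacked_bytes + chunk <= rwnd

# Receive window of 4096 bytes, 3000 bytes already in flight:
print(can_send(3000, rwnd=4096, chunk=1000))   # True  (4000 <= 4096)
print(can_send(3000, rwnd=4096, chunk=2000))   # False (5000 >  4096)
```

When `rwnd` shrinks to zero, a real sender keeps probing with tiny segments so it learns when the receiver frees buffer space.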
Closing a Connection
– The client issues a close command (FIN = 1)
– The server sends an ACK
– The server sends its own close command (FIN = 1)
– The client sends an ACK
– Resources are then deallocated
Congestion Control
– In theory: as the supply (feed) rate increases, output increases up to the limit of the output line and then levels off
– Also: as the feed rate increases, delay grows exponentially
– As the feed rate grows, routers start losing packets, forcing retransmissions
– TCP has to infer that this is congestion
TCP Congestion Control
– Additive-increase, multiplicative-decrease (AIMD)
– Slow start
– Reaction to timeout events
Speed Control
– The congestion window = the amount of data “in the pipeline”
– On congestion (a lost packet), the window is halved for each occurrence
– With ACKs, the window is increased by a set amount (one maximum segment size, MSS)
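The AIMD rule can be traced over a sequence of events. A sketch with the window measured in MSS units and an invented ack/loss sequence:

```python
def aimd(events, mss=1):
    """Additive increase on ACK, multiplicative decrease (halve) on loss."""
    cwnd, history = mss, []
    for ev in events:
        if ev == "ack":
            cwnd += mss                      # additive increase
        elif ev == "loss":
            cwnd = max(mss, cwnd // 2)       # halve, never below one MSS
        history.append(cwnd)
    return history

print(aimd(["ack", "ack", "ack", "loss", "ack"]))   # [2, 3, 4, 2, 3]
```

Plotted over time, this produces TCP's characteristic sawtooth: a slow linear climb punctuated by sharp halvings.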
Slow Start
– Start at one MSS: send one packet
– Each returning ACK grows the window by one MSS, so the window doubles every round trip: send two, then four, then …
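The doubling phase can be listed round trip by round trip. A sketch with a hypothetical threshold of 16 MSS, at which a real TCP would switch to additive increase:

```python
def slow_start(mss, threshold):
    """Window per round trip: start at one MSS, double until the threshold."""
    cwnd, rounds = mss, [mss]
    while cwnd < threshold:
        cwnd *= 2                 # each ACKed segment adds one MSS, so the
        rounds.append(cwnd)       # window doubles every round trip
    return rounds

print(slow_start(1, 16))   # [1, 2, 4, 8, 16]
```

Despite the name, slow start is exponential growth; it is "slow" only relative to opening with a full window.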