1
Announcements
10 project groups
Presentations: 11/29, 12/4, 12/6
Paper review starts next class (this Wed.)
15 classes to discuss papers
Turn in 11 reviews (at most 1 review per class)
Reviews are due at the beginning of class
2
Outline
Finish DSR/AODV/DSDV
TCP performance in wireless networks
3
DSR
Can DSR use a routing table instead of source routing?
Why or why not?
4
AODV
Route Requests (RREQ) are forwarded in a manner similar to DSR
When a node re-broadcasts a Route Request, it sets up a reverse path pointing towards the source
AODV assumes symmetric (bi-directional) links
When the intended destination receives a Route Request, it replies by sending a Route Reply (RREP)
The Route Reply travels along the reverse path set up when the Route Request was forwarded
5
Timeouts
A routing table entry maintaining a reverse path is purged after a timeout interval
The timeout should be long enough to allow the RREP to come back
A routing table entry maintaining a forward path is purged if not used for an active_route_timeout interval, even if the route may actually still be valid
6
Link Failure Reporting
A neighbor of node X is considered active for a routing table entry if the neighbor sent a packet within the active_route_timeout interval that was forwarded using that entry
When the next-hop link in a routing table entry breaks, all active neighbors are informed
Link failures are propagated by means of Route Error (RERR) messages, which also update destination sequence numbers
7
Route Error
When node X is unable to forward packet P (from node S to node D) on link (X,Y), it generates a RERR message
Node X increments the destination sequence number for D cached at node X
The incremented sequence number N is included in the RERR
When node S receives the RERR, it initiates a new route discovery for D using a destination sequence number at least as large as N
When node D receives the route request with destination sequence number N, node D sets its sequence number to N, unless it is already larger than N
(See the sketch after this slide.)
8
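A minimal sketch of the RERR sequence-number handling described on the previous slide. The structures and field names (RouteEntry, dest_seq_no, the RERR dict) are illustrative assumptions, not the RFC 3561 message formats.

```python
# Illustrative sketch of AODV Route Error (RERR) sequence-number handling.
from dataclasses import dataclass

@dataclass
class RouteEntry:
    next_hop: str
    dest_seq_no: int
    valid: bool = True

def on_forward_failure(routing_table, dest):
    """Node X cannot forward to dest over link (X,Y): invalidate and build a RERR."""
    entry = routing_table[dest]
    entry.valid = False
    entry.dest_seq_no += 1                    # increment cached destination sequence number
    return {"dest": dest, "seq_no": entry.dest_seq_no}   # RERR carries N

def on_rerr_at_source(routing_table, rerr, send_rreq):
    """Node S: invalidate the route and rediscover with a sequence number >= N."""
    entry = routing_table[rerr["dest"]]
    entry.valid = False
    send_rreq(rerr["dest"], max(entry.dest_seq_no, rerr["seq_no"]))

def on_rreq_at_destination(own_seq_no, rreq_seq_no):
    """Node D: bump its own sequence number up to N, unless it is already larger."""
    return max(own_seq_no, rreq_seq_no)
```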
Link Failure Detection
Hello messages: neighboring nodes periodically exchange hello messages
Absence of hello messages is used as an indication of link failure
Alternatively, failure to receive several MAC-level acknowledgements may be used as an indication of link failure
9
Why Sequence Numbers in AODV
To avoid using old/broken routes
To determine which route is newer
To prevent formation of loops
Example: assume that A does not know about the failure of link C-D because the RERR sent by C is lost
Now C performs a route discovery for D; node A receives the RREQ (say, via path C-E-A)
Node A will reply, since A knows a route to D via node B
This results in a loop (for instance, C-E-A-B-C)
(Figure: five-node topology with nodes A, B, C, D, E)
10
Why Sequence Numbers in AODV
Loop: C-E-A-B-C
(Figure: the same five-node topology A, B, C, D, E)
11
Optimization: Expanding Ring Search
Route Requests are initially sent with a small Time-to-Live (TTL) field, to limit their propagation
If no Route Reply is received, a larger TTL is tried
DSR also includes a similar optimization
(See the sketch after this slide.)
12
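A minimal sketch of expanding-ring search as described above. The TTL constants and the callback names (send_rreq, wait_for_rrep) are assumptions for illustration, not values mandated by the slides.

```python
# Illustrative sketch of expanding-ring route discovery.
TTL_START, TTL_INCREMENT, TTL_THRESHOLD, NET_DIAMETER = 1, 2, 7, 35   # assumed constants

def discover_route(dest, send_rreq, wait_for_rrep, ring_timeout=1.0):
    """Send RREQs with growing TTL until a Route Reply arrives or the search gives up."""
    ttl = TTL_START
    while True:
        send_rreq(dest, ttl)                       # flood limited to 'ttl' hops
        rrep = wait_for_rrep(dest, timeout=ring_timeout)
        if rrep is not None:
            return rrep                            # route found within this ring
        if ttl >= NET_DIAMETER:
            return None                            # searched the whole network; give up
        # Expand the ring; beyond the threshold, jump to a network-wide search.
        ttl = ttl + TTL_INCREMENT if ttl < TTL_THRESHOLD else NET_DIAMETER
```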
Summary: AODV
Routes need not be included in packet headers
Nodes maintain routing tables containing entries only for routes that are in active use
At most one next hop per destination is maintained at each node (DSR may maintain several routes for a single destination)
Unused routes expire even if the topology does not change
13
Destination-Sequenced Distance-Vector (DSDV) [Perkins94Sigcomm]
Each node maintains a routing table which stores the next hop towards each destination, a cost metric for the path to each destination, and a destination sequence number that is created by the destination itself
Sequence numbers are used to avoid formation of loops
Each node periodically forwards its routing table to its neighbors
Each node increments and appends its own sequence number when sending its local routing table
This sequence number will be attached to route entries created for this node
14
Destination-Sequenced Distance-Vector (DSDV)
Assume that node X receives routing information from Y about a route to node Z
Let S(X) and S(Y) denote the destination sequence number for node Z as stored at node X, and as sent by node Y with its routing table to node X, respectively
(Figure: topology X - Y - Z)
15
Destination-Sequenced Distance-Vector (DSDV)
Node X takes the following steps:
If S(X) > S(Y), then X ignores the routing information received from Y
If S(X) = S(Y), and the cost of going through Y is smaller than the cost of the route known to X, then X sets Y as the next hop to Z
If S(X) < S(Y), then X sets Y as the next hop to Z, and S(X) is updated to equal S(Y)
(Figure: topology X - Y - Z; see the sketch after this slide.)
16
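A minimal sketch of the DSDV update rule above. Field names (next_hop, cost, seq) and the table-as-dict layout are assumptions for readability.

```python
# Illustrative sketch of the DSDV routing-table update rule.
from dataclasses import dataclass

@dataclass
class DsdvEntry:
    next_hop: str
    cost: float
    seq: int          # destination sequence number, created by the destination

def update_route(table, dest, advert_from_y, cost_to_y):
    """Apply Y's advertised route for 'dest' to X's routing table."""
    new_cost = cost_to_y + advert_from_y.cost
    current = table.get(dest)
    if current is None or advert_from_y.seq > current.seq:
        # A fresher sequence number always wins, even at higher cost.
        table[dest] = DsdvEntry("Y", new_cost, advert_from_y.seq)
    elif advert_from_y.seq == current.seq and new_cost < current.cost:
        # Same freshness: prefer the cheaper path.
        table[dest] = DsdvEntry("Y", new_cost, advert_from_y.seq)
    # If Y's sequence number is older, the advertisement is ignored.
```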
Transport Layer
17
Transport services and protocols
provide logical communication between app processes running on different hosts
transport protocols run in end systems
send side: breaks app messages into segments, passes to network layer
receive side: reassembles segments into messages, passes to app layer
more than one transport protocol available to apps
Internet: TCP and UDP
(Figure: logical end-to-end transport between end systems across the protocol stack)
18
Transport vs. network layer
Household analogy: 12 kids sending letters to 12 kids
processes = kids
app messages = letters in envelopes
hosts = houses
transport protocol = Ann and Bill
network-layer protocol = postal service
network layer: logical communication between hosts
transport layer: logical communication between processes; relies on, and enhances, network-layer services
19
Internet transport-layer protocols
unreliable, unordered delivery: UDP
no-frills extension of “best-effort” IP
reliable, in-order delivery: TCP
congestion control
flow control
connection setup
services not available: delay guarantees, bandwidth guarantees
(Figure: protocol stack with logical end-to-end transport)
20
UDP: User Datagram Protocol [RFC 768]
“no frills,” “bare bones” Internet transport protocol
“best effort” service: UDP segments may be lost, or delivered out of order to the app
connectionless: no handshaking between UDP sender and receiver; each UDP segment is handled independently of the others
Why is there a UDP?
no connection establishment (which can add delay)
simple: no connection state at sender or receiver
small segment header
no congestion control: UDP can blast away as fast as desired
21
TCP: Overview RFCs: 793, 1122, 1323, 2018, 2581
point-to-point: one sender, one receiver
reliable, in-order byte stream: no “message boundaries”
pipelined: TCP congestion and flow control set the window size
send & receive buffers
full-duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
connection-oriented: handshaking (exchange of control msgs) initializes sender and receiver state before data exchange
flow controlled: sender will not overwhelm receiver
22
Reliable Data Transfer
Reliable data transfer:
over a reliable channel
over a channel with errors: ACK + NACK
over a channel with errors and loss: ACK + timeout
23
Reliable Data Transfer Mechanisms
Mechanism: details
Checksum: detect bit errors
Timer: detect packet loss at the sender
Sequence number: detect packet loss and duplicates at the receiver
ACK: inform the sender that a packet has been received
NACK: inform the sender that a packet has not been received correctly
Window, pipelining: increase throughput, and adapt to receiver buffer size and network congestion
24
TCP Flow Control
flow control: sender won’t overflow receiver’s buffer by transmitting too much, too fast
receive side of the TCP connection has a receive buffer; the app process may be slow at reading from the buffer
speed-matching service: matching the sending rate to the receiving app’s drain rate
receiver advertises spare room by including the value of RcvWindow in segments
sender limits unACKed data to RcvWindow; this guarantees the receive buffer doesn’t overflow
(See the sketch after this slide.)
25
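A minimal sketch of the flow-control bookkeeping above. Variable names such as rcv_buffer_size and last_byte_read are assumptions for illustration, not kernel field names.

```python
# Illustrative sketch of TCP-style flow control.

def advertised_window(rcv_buffer_size, last_byte_rcvd, last_byte_read):
    """Receiver: spare room in the buffer, advertised as RcvWindow."""
    return rcv_buffer_size - (last_byte_rcvd - last_byte_read)

def can_send(last_byte_sent, last_byte_acked, rcv_window, bytes_to_send):
    """Sender: keep unACKed data within the last advertised RcvWindow."""
    unacked = last_byte_sent - last_byte_acked
    return unacked + bytes_to_send <= rcv_window

# Example: a 64 KB buffer with 20 KB received but only 5 KB read by the app
# leaves a 49 KB advertised window.
print(advertised_window(64_000, 20_000, 5_000))   # 49000
```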
Principles of Congestion Control
informally: “too many sources sending too much data too fast for the network to handle”
different from flow control
manifestations: lost packets (buffer overflow at routers), long delays (queueing in router buffers)
a top-10 problem!
26
Approaches towards congestion control
Two broad approaches towards congestion control:
End-to-end congestion control: no explicit feedback from the network; congestion inferred from end-system observed loss and delay; approach taken by TCP
Network-assisted congestion control: routers provide feedback to end systems, either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or the explicit rate the sender should send at (XCP)
27
TCP congestion control: additive increase, multiplicative decrease
Approach: increase the transmission rate (window size), probing for usable bandwidth, until loss occurs
additive increase: increase CongWin by 1 MSS every RTT until loss is detected
multiplicative decrease: cut CongWin in half after loss
Sawtooth behavior: probing for bandwidth
(Figure: congestion window size over time)
28
TCP Congestion Control: details
sender limits transmission: LastByteSent - LastByteAcked <= CongWin
roughly, rate = CongWin / RTT bytes/sec
both CongWin and RTT are time-varying
How does the sender perceive congestion? loss event = timeout or 3 duplicate ACKs
the TCP sender reduces its rate (CongWin) after a loss event
three mechanisms: AIMD, slow start, conservative behavior after timeout events
29
Summary: TCP Congestion Control
When CongWin is below Threshold, the sender is in the slow-start phase and the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase and the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
(See the sketch after this slide.)
30
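A minimal sketch of the window adjustments summarized above, in units of MSS. It deliberately ignores details of real TCP Reno such as fast recovery; the initial threshold value is an assumption.

```python
# Illustrative sketch of TCP congestion-window adjustment (slow start, AIMD).

class TcpCongestionState:
    def __init__(self):
        self.cong_win = 1.0      # congestion window, in MSS
        self.threshold = 64.0    # ssthresh, in MSS (assumed initial value)

    def on_ack_for_new_data(self):
        if self.cong_win < self.threshold:
            self.cong_win += 1.0                     # slow start: exponential growth per RTT
        else:
            self.cong_win += 1.0 / self.cong_win     # congestion avoidance: ~1 MSS per RTT

    def on_triple_duplicate_ack(self):
        self.threshold = self.cong_win / 2
        self.cong_win = self.threshold               # multiplicative decrease

    def on_timeout(self):
        self.threshold = self.cong_win / 2
        self.cong_win = 1.0                          # restart from slow start
```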
TCP in Wireless Networks
Transmission errors: random errors, burst errors
Mobility: infrastructure wireless networks, wireless ad hoc networks
31
Impacts of Random Errors
Random errors may cause fast retransmit
Fast retransmit results in retransmission of the lost packet and reduction in the congestion window
Reducing the congestion window in response to errors is unnecessary
Reduction in the congestion window reduces throughput
Random errors may also cause timeout: multiple packet losses in a window can result in timeout when using TCP Reno (and, to a lesser extent, when using SACK)
32
Burst Errors May Cause Timeouts
If the wireless link remains unavailable for an extended duration, a window worth of data may be lost (e.g., while passing a truck or driving through a tunnel)
Timeout results in possibly long idle time, and slow start, which reduces the congestion window to 1 MSS and cuts ssthresh in half
Reduction in window and ssthresh in response to errors is unnecessary
33
Various Schemes
Link-level mechanisms
Split connection approach
TCP-aware link layer
TCP-unaware approximation of TCP-aware link layer
Explicit notification
Receiver-based discrimination
Sender-based discrimination
34
Link Level Schemes
(Figure: link-layer state maintained at the base station; the TCP connection runs end to end, with retransmissions at the wireless link layer)
35
Link Layer Schemes (Cont.)
Ideas: recover wireless losses using FEC codes, retransmission, and/or adapting the frame size
Characteristics: hides wireless losses from the TCP sender; link-layer modifications are needed at both ends of the wireless link; TCP need not be modified
When is a reliable link layer beneficial to TCP performance? If it provides almost in-order delivery and the TCP retransmission timeout is large enough to tolerate the additional delays due to link-level retransmits
(See the sketch after this slide.)
36
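A minimal sketch of a link-layer ARQ over the wireless hop, in the spirit of the link-level schemes above: retransmit a frame a few times locally so TCP never sees the loss. The retry limit, timeout, and callback names are assumptions, not values from any particular standard.

```python
# Illustrative sketch of link-layer retransmission over the wireless link.
MAX_RETRIES = 3      # assumed local retry limit

def send_frame_reliably(frame, transmit, wait_for_ll_ack, ll_timeout=0.05):
    """Try to deliver one frame across the wireless link; return True on success."""
    for attempt in range(1 + MAX_RETRIES):
        transmit(frame)
        if wait_for_ll_ack(timeout=ll_timeout):
            return True          # loss hidden from TCP: delivered after local retries
    return False                 # give up; TCP will eventually recover end to end
```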
Split Connection Approach
(Figure: per-TCP-connection state at the base station; one TCP connection on the wired part of the path and a separate TCP connection over the wireless part, with retransmissions on the wireless link)
37
Split Connection Approach (Cont.)
Idea: the end-to-end TCP connection is broken into one connection on the wired part of the route and one over the wireless part of the route
Characteristics: hides transmission errors from the sender; primary responsibility at the base station; if a specialized transport protocol is used on the wireless part, then the wireless host also needs modification
38
Split Connection Approach : Advantages
Local recovery of errors: faster recovery due to the relatively shorter RTT on the wireless link
The BS-MH connection can be optimized independently of the FH-BS connection (different flow / error control on the two connections)
Good performance is achievable using an appropriate BS-MH protocol
Standard TCP on BS-MH performs poorly when multiple packet losses occur per window (timeouts can occur on the BS-MH connection, stalling during the timeout interval); selective ACKs improve performance for such cases
39
Split Connection Approach : Disadvantages
End-to-end semantics violated: an ACK may be delivered to the sender before the data is delivered to the receiver (may not be a problem for applications that do not rely on TCP for end-to-end semantics)
May not be useful if data and ACKs traverse different paths (i.e., both do not go through the base station)
Extra copying and storage required at the base station
40
TCP-Aware Link Layer
41
Snoop Protocol [Balakrishnan95acm]
Retains the local recovery of the split connection approach and of link-level retransmission schemes
Improves on split connection: end-to-end semantics are retained; soft state at the base station, instead of hard state
42
Snoop Protocol
(Figure: per-TCP-connection state at the base station BS; a single end-to-end TCP connection from the fixed host FH to the mobile host MH, with retransmissions at BS over the wireless BS-MH link)
43
Snoop Protocol
Buffers data packets at the base station BS, to allow link-layer retransmission
When dupacks are received by BS from MH, retransmit on the wireless link if the packet is present in the buffer
Prevents fast retransmit at the TCP sender FH by dropping the dupacks at BS
(See the sketch after this slide.)
44
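A minimal sketch of the Snoop behavior just described, a simplification of [Balakrishnan95acm]. The class layout, per-packet sequence numbers, and callback names are assumptions for illustration.

```python
# Illustrative sketch of a Snoop agent at the base station.

class SnoopAgent:
    """Snoops on TCP segments crossing the wireless link at the base station."""

    def __init__(self, wireless_send):
        self.buffer = {}           # seq -> cached data packet not yet ACKed by the MH
        self.last_ack = -1
        self.wireless_send = wireless_send

    def on_data_from_fixed_host(self, seq, packet):
        self.buffer[seq] = packet          # cache a copy before forwarding to the MH
        self.wireless_send(packet)

    def on_ack_from_mobile_host(self, ack, forward_to_fh):
        if ack > self.last_ack:
            # New ACK: drop ACKed packets from the cache and pass the ACK through.
            self.buffer = {s: p for s, p in self.buffer.items() if s > ack}
            self.last_ack = ack
            forward_to_fh(ack)
        else:
            # Duplicate ACK: locally retransmit the missing packet if it is cached,
            # and drop the dupack so the FH never collects 3 dupacks (no fast retransmit).
            missing = ack + 1              # per-packet sequence numbers assumed
            if missing in self.buffer:
                self.wireless_send(self.buffer[missing])
```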
Snoop: Example (slides 45-53, an animated sequence of FH sending packets through BS to MH)
TCP state is maintained at the link layer of BS; the example assumes delayed ACKs, so every other packet is ACKed
Packet 37 is lost on the wireless link, so MH keeps returning duplicate ACKs for 36; duplicate ACKs are not delayed
A dupack triggers retransmission of packet 37 from the base station; BS needs to be TCP-aware to be able to interpret TCP headers
BS discards the dupacks instead of forwarding them, so the TCP sender does not fast retransmit
Snoop Protocol: Advantages
Snoop prevents fast retransmit at the sender despite transmission errors and out-of-order delivery on the wireless link
Base station retransmits only if it results in at least 3 dupacks
If the wireless link-level delay-bandwidth product is less than 4 packets, a simple (TCP-unaware) link-level retransmission scheme can suffice
Since the delay-bandwidth product is small, the retransmission scheme can deliver the lost packet without resulting in 3 dupacks from the TCP receiver
54
Snoop Protocol Characteristics
Hides wireless losses from the sender
Requires modifications only at the BS (network-centric approach)
55
Snoop Protocol : Advantages
High throughput can be achieved; performance is further improved using selective ACKs
Local recovery from wireless losses
Fast retransmit is not triggered at the sender despite out-of-order link-layer delivery
End-to-end semantics retained
Soft state at the base station: loss of the soft state affects performance, but not correctness
56
Snoop Protocol : Disadvantages
Link layer at the base station needs to be TCP-aware
Not useful if TCP headers are encrypted (IPsec)
Cannot be used if TCP data and TCP ACKs traverse different paths (i.e., both do not go through the base station)
57
TCP-Unaware Approximation of TCP-Aware Link Layer
58
Delayed Dupacks Protocol [Mehta98,Vaidya99]
Attempts to imitate Snoop without making the base station TCP-aware
Snoop implements two features at the base station: link-layer retransmission, and reducing interference between TCP and link-layer retransmissions (by dropping dupacks)
Delayed Dupacks implements the same two features: at the BS, link-layer retransmission; at the MH, reducing interference between TCP and link-layer retransmissions (by delaying the third and subsequent dupacks)
59
Delayed Dupacks Protocol
The TCP receiver delays dupacks (third and subsequent) for an interval D when out-of-order packets are received
The dupack delay is intended to give the link level time to retransmit
Pros: delayed dupacks can result in recovery from a transmission loss without triggering a response from the TCP sender
Cons: recovery from congestion losses is delayed
(See the sketch after this slide.)
60
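A minimal sketch of the receiver-side delayed-dupack rule above. The delay interval D, the timer mechanism, and the callback names are assumptions, not details from [Mehta98] or [Vaidya99].

```python
# Illustrative sketch of delaying third-and-subsequent duplicate ACKs at the receiver.
import threading

class DelayedDupackReceiver:
    def __init__(self, send_ack, delay_d=0.1):
        self.send_ack = send_ack
        self.delay_d = delay_d       # interval D
        self.dupacks_sent = 0
        self.pending = None          # timer for a delayed dupack, if any

    def on_out_of_order_segment(self, ack_value):
        self.dupacks_sent += 1
        if self.dupacks_sent <= 2:
            self.send_ack(ack_value)                 # first two dupacks go out immediately
        else:
            # Third and subsequent dupacks are held for D, giving the link layer
            # a chance to retransmit the missing segment locally.
            self.pending = threading.Timer(self.delay_d, self.send_ack, args=(ack_value,))
            self.pending.start()

    def on_in_order_segment(self, ack_value):
        if self.pending is not None:
            self.pending.cancel()                    # gap filled in time: suppress the dupack
            self.pending = None
        self.dupacks_sent = 0
        self.send_ack(ack_value)                     # normal cumulative ACK
```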
Various Schemes
Link-layer retransmissions
Split connection approach
TCP-aware link layer
TCP-unaware approximation of TCP-aware link layer
Explicit notification: ELN (base station tags a dup-ack with ELN if the loss is wireless-related); ECN (a router tags a packet if it experiences congestion)
Receiver-based discrimination: the receiver attempts to guess the cause of a packet loss; when the receiver believes a loss is due to errors, it sends a notification to the TCP sender, which retransmits the lost packet without reducing the congestion window
Sender-based discrimination: the sender attempts to determine the cause of a packet loss; if the loss is determined to be due to errors, it does not reduce the congestion window
61
Summary
Scheme | Idea | Who | Characteristics
Link layer | link-layer retransmission | wireless end points | hides wireless errors
Split connection | independent optimization of the wireless connection | base station | hides wireless errors
Snoop | link-layer retransmission + drop dup ACKs at the base station | base station | hides wireless errors + avoids unnecessary cwnd reduction
Delayed dup ACK | link-layer retransmission + delay dup ACKs at the wireless host | wireless host | hides wireless errors + avoids unnecessary cwnd reduction
62
Comparison (Cont.)
Scheme | Idea | Who | Characteristics
ELN | tag a dup ACK with ELN if the loss occurred on the wireless link | base station | avoids unnecessary cwnd reduction
Receiver-based discrimination | when the receiver believes a packet loss is due to errors, it sends a notification to the TCP sender | receiver | avoids unnecessary cwnd reduction
Sender-based discrimination | when the sender believes a packet loss is due to errors, it does not reduce cwnd | sender | avoids unnecessary cwnd reduction
63
Techniques to Improve TCP Performance in Presence of Mobility
64
Classification
Hide mobility from the TCP sender
Make TCP adaptive to mobility
65
Using Fast Retransmits to Recover from Timeouts during Handoff [Caceres95]
During the long delay for a handoff to complete, a whole window worth of data may be lost
After the handoff is complete, ACKs are not received by the TCP sender
The sender eventually times out and retransmits; if the handoff is still not complete, another timeout will occur
Performance penalty: time wasted until the timeout occurs, and the window is shrunk after the timeout
66
Mitigation Using Fast Retransmit
When the MH is the TCP receiver: after the handoff is complete, it sends 3 dupacks to the sender; this triggers fast retransmit at the sender; instead of dupacks, a special notification could also be sent
When the MH is the TCP sender: invoke fast retransmit after completion of the handoff
67
M-TCP [Brown97]
In the fast retransmit scheme [Caceres95], the sender starts transmitting soon after the handoff, but the congestion window shrinks
M-TCP attempts to avoid shrinkage of the congestion window
68
M-TCP Uses TCP Persist Mode
When a new ACK is received with the receiver’s advertised window = 0, the sender enters persist mode
The sender does not send any data in persist mode, except when the persist timer goes off
When a positive window advertisement is received, the sender exits persist mode
On exiting persist mode, RTO and cwnd are the same as before persist mode
Reusing the old state is not always appropriate
(See the sketch after this slide.)
69
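A minimal sketch of how a zero-window advertisement freezes a TCP sender, which is the mechanism M-TCP exploits. The state fields and initial values are assumptions for illustration.

```python
# Illustrative sketch of persist-mode freezing on a zero-window advertisement.

class TcpSenderState:
    def __init__(self):
        self.persist_mode = False
        self.cwnd = 8.0        # MSS (assumed)
        self.rto = 1.0         # seconds (assumed)
        self.saved = None

    def on_ack(self, advertised_window):
        if advertised_window == 0 and not self.persist_mode:
            # A zero-window ACK (sent when the MH disconnects) freezes the sender;
            # cwnd and RTO are preserved rather than collapsed by timeouts.
            self.saved = (self.cwnd, self.rto)
            self.persist_mode = True            # only persist-timer probes from now on
        elif advertised_window > 0 and self.persist_mode:
            self.cwnd, self.rto = self.saved    # resume with the pre-disconnection state
            self.persist_mode = False
```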
Office hour: Friday 3-4pm
No office hour on Monday
By appointment
70
TCP in Mobile Ad Hoc Networks
71
How to Improve Throughput
Network feedback: inform TCP of a route failure by an explicit message
Let TCP know when the route is repaired, by probing or by explicit notification
This reduces repeated TCP timeouts and backoff
(See the sketch after this slide.)
72
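A minimal sketch of a TCP sender reacting to explicit route-failure feedback, in the spirit of the approach above. The message names, methods, and freeze/restore mechanics are assumptions, not a specific published protocol.

```python
# Illustrative sketch of freezing TCP on route-failure notification in an ad hoc network.

class TcpWithRouteFeedback:
    def __init__(self):
        self.frozen = False
        self.saved_state = None

    def on_route_failure_notification(self):
        # Freeze: remember cwnd/RTO and stop retransmitting, instead of repeatedly
        # timing out and backing off while no route to the destination exists.
        self.saved_state = self.snapshot_state()
        self.frozen = True

    def on_route_reestablished(self):
        # Triggered by an explicit "route repaired" message or a successful probe.
        self.restore_state(self.saved_state)
        self.frozen = False
        self.retransmit_unacked()    # resume where we left off

    # The helpers below stand in for real sender internals.
    def snapshot_state(self): return {}
    def restore_state(self, state): pass
    def retransmit_unacked(self): pass
```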
Performance Improvement
(Figure: actual vs. ideal throughput, with and without network feedback, at 2 m/s node speed)
73
Impact of Route Caching
Route caching has been suggested as a mechanism to reduce route discovery overhead [Broch98]
Each node may cache one or more routes to a given destination
When a route from S to D is detected as broken, node S may: use another route from its local cache, or obtain a new route using a cached route at another node
74
To Cache or Not to Cache
(Figure: actual throughput, as a fraction of expected throughput, vs. average speed in m/s)
75
Why Performance Degrades With Caching
When a route is broken, route discovery returns a cached route from the local cache or from a nearby node
After a timeout, the TCP sender transmits a packet on the new route; however, the cached route has also broken after it was cached
Another route discovery and another TCP timeout interval follow
The process repeats until a good route is found
(Figure: timeline of timeouts - timeout due to route failure; timeout because the cached route is broken; timeout because the second cached route is also broken)
76
Issues: To Cache or Not to Cache
Caching can result in faster route “repair”
Faster does not necessarily mean correct
If incorrect repairs occur often enough, caching performs poorly
Mechanisms are needed for determining when cached routes are stale
77
Caching and TCP performance
Caching can reduce the overhead of route discovery even if cache accuracy is not very high
But if cache accuracy is not high enough, gains in routing overhead may be offset by loss of TCP performance due to multiple timeouts
78
A Comparison of Mechanisms for Improving TCP Performance over Wireless Links
Hari Balakrishnan, Venkata N. Padmanabhan, Srinivasan Seshan, Randy H. Katz
UC Berkeley
79
What questions would you like to answer when comparing different schemes to improve TCP?
80
Goals
What combination of mechanisms results in the best performance?
Link-layer approach: TCP-aware vs. TCP-agnostic; how important is it for the link-layer scheme to be TCP-aware?
TCP variants: how useful are SACK and SMART for dealing with (bursty) lossy links?
Is it important to split TCP to get good performance?
81
Experimental Results
Modifications of TCP Reno
IBM ThinkPads and Pentium PCs
10 Mbps Ethernet links
1.5 Mbps max throughput, 1.35 Mbps wide-area throughput
1400-byte TCP data size
82
Experimental Results Performance of LL protocols
83
Experimental Results Congestion Window size comparisons
84
Experimental Results Different transport layer schemes
85
Experimental Results Performance of Split Connection Protocols
86
Experimental Results Benefits of SACKs
87
Lessons
A TCP-aware link layer is better than a link-layer approach alone
A TCP-aware link-layer protocol with SACK performs the best
Splitting the connection is not necessary for good performance
SACK is effective, especially for bursty losses
End-to-end schemes (e.g., ELN, SACK) are useful