Routing Murat Demirbas SUNY Buffalo
2 Routing patterns in WSN
Model: static, large-scale WSN
Convergecast: nodes forward their data to the base station over multiple hops. Scenario: a monitoring application.
Broadcast: the base station pushes data to all nodes in the WSN. Scenario: reprogramming.
Data-driven: nodes subscribe to data of interest to them. Scenario: an operator queries nearby nodes for some data (similar to querying).
3 Outline
Convergecast: routing tree, grid routing, reliable bursty broadcast
Broadcast: flood, flood-gossip-flood, Trickle, polygonal broadcast, fire-cracker
Data driven: directed diffusion, rumor routing
4 Routing tree
Most commonly used approach is to induce a spanning tree over the network
The root is the base station; each node forwards data to its parent
In-network aggregation is possible at intermediate nodes
Initial construction of the tree is problematic: broadcast storm (remember), complex behavior at scale, link status changes non-deterministically
Snooping on nearby traffic to choose high-quality neighbors pays off ("Taming the Underlying Challenges of Reliable Multihop Routing")
Trees are problematic since a change somewhere in the tree might lead to escalating changes in the rest (or a deformed structure)
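A minimal sketch of snooping-based parent selection. The helper name and the per-neighbor link_quality and depth estimates are assumptions for illustration (built by overhearing nearby traffic), not part of the slides:

```python
def choose_parent(link_quality, depth):
    """Pick the overheard neighbor with the best link quality, breaking ties
    by preferring neighbors closer to the base station (smaller depth)."""
    candidates = [n for n in link_quality if n in depth]
    if not candidates:
        return None
    return max(candidates, key=lambda n: (link_quality[n], -depth[n]))
```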
6 Outline Convergecast Routing tree Grid routing Reliable bursty broadcast Broadcast Flood, Flood-Gossip-Flood Trickle Polygonal broadcast, Fire-cracker Data driven Directed diffusion Rumor routing
7 Grid Routing Protocol
The protocol is simple: it requires each mote to send only one three-byte msg every T seconds
The protocol is reliable: it can overcome random msg loss and mote failure
Routing on a grid is stateless: the perturbed region upon failure of nodes is bounded by their local neighbors
8 The Logical Grid
The motes are named as if they form an M*N logical grid
Each mote is named by a pair (i, j) where i = 0..M-1 and j = 0..N-1
The network root is mote (0,0)
Physical connectivity between motes is a superset of their connectivity in the logical grid
[Figure: a 3x2 logical grid of motes (0,0)..(2,1) and a corresponding physical placement]
9 Neighbors
Each mote (i, j) has
two low-neighbors (i-H, j) and (i, j-H)
two high-neighbors (i+H, j) and (i, j+H)
H is a positive integer called the tree hop
If a mote (i, j) receives a msg from any mote other than its low- and high-neighbors, (i, j) discards the msg
[Figure: mote (i, j) with its neighbors (i-H, j), (i, j-H), (i+H, j), (i, j+H)]
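A small sketch of computing a mote's logical-grid neighbors and discarding messages from anyone else. The grid dimensions M x N and the helper names are assumptions for illustration:

```python
def grid_neighbors(i, j, H, M, N):
    """Return the low- and high-neighbors of mote (i, j) that fall inside
    the M x N logical grid (tree hop H)."""
    def in_grid(a, b):
        return 0 <= a < M and 0 <= b < N
    low  = [(a, b) for (a, b) in [(i - H, j), (i, j - H)] if in_grid(a, b)]
    high = [(a, b) for (a, b) in [(i + H, j), (i, j + H)] if in_grid(a, b)]
    return low, high

def accept_msg(i, j, sender, H, M, N):
    """A msg is processed only if it comes from a low- or high-neighbor."""
    low, high = grid_neighbors(i, j, H, M, N)
    return sender in low or sender in high
```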
10 Communication Pattern
Each mote (i, j) can send msgs whose ultimate destination is mote (0, 0)
The motes need to maintain an incoming spanning tree whose root is (0, 0): each mote maintains a pointer to its parent
When a mote (i, j) has a msg, it forwards the msg to its parent. This continues until the msg reaches mote (0, 0).
[Figure: example spanning tree with H = 2]
11 Choosing the Parent
Usually, each mote (i, j) chooses one of its low-neighbors (i-H, j) or (i, j-H) to be its parent
If both its low-neighbors fail, then (i, j) chooses one of its high-neighbors (i+H, j) or (i, j+H) to be its parent. This is called inversion.
Example: there is one inversion at mote (2, 2) because the two low-neighbors of (2, 2) have failed.
[Figure: example grid with H = 2 and the failed low-neighbors of (2, 2)]
12 Inversion Count
Each mote (i, j) maintains the id (x, y) of its parent, and the value c of its inversion count: the number of inversions that occur along the tree path from (i, j) to (0, 0)
Inversion count c has an upper value cmax
[Figure: example grid with H = 2 and a failed region; each mote is labeled with its parent id and inversion count, e.g. parent (3,2) with c = 1, parent (0,3) with c = 0, parent (0,1) with c = 0, parent (0,0) with c = 0]
13 Protocol Message
If a mote (i, j) has a parent, then every T seconds it sends a msg with three fields: connected(i, j, c), where c is the inversion count of mote (i, j). Otherwise, mote (i, j) does nothing.
Every 3 seconds, mote (0, 0) sends a msg with three fields: connected(0, 0, 0)
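A sketch of the periodic beacon. The callables for the mote's current parent and inversion count are hypothetical names, not part of the slides:

```python
import time

def beacon_loop(mote_id, get_parent, get_count, send, T):
    """The root (0, 0) periodically announces connected(0, 0, 0); any other
    mote that currently has a parent announces connected(i, j, c)."""
    i, j = mote_id
    period = 3.0 if mote_id == (0, 0) else T   # the slides use 3 s at the root
    while True:
        if mote_id == (0, 0):
            send(("connected", 0, 0, 0))
        elif get_parent() is not None:
            send(("connected", i, j, get_count()))
        time.sleep(period)
```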
14 Acquiring a Parent
Initially, every mote (i, j) has no parent.
When mote (i, j) has no parent and receives connected(x, y, e), (i, j) chooses (x, y) as its parent if (x, y) is its low-neighbor, or if (x, y) is its high-neighbor and e < cmax
When mote (i, j) receives a connected(x, y, e) and chooses (x, y) to be its parent, (i, j) computes its inversion count c as:
if (x, y) is a low-neighbor, c := e
if (x, y) is a high-neighbor, c := e + 1
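The acquire rule as a minimal standalone sketch; H and cmax are passed in, since the slides leave them as parameters:

```python
def acquire_parent(i, j, x, y, e, H, cmax):
    """A parent-less mote (i, j) hears connected(x, y, e).
    Returns (parent, inversion_count), or None if the msg is ignored."""
    if (x, y) in [(i - H, j), (i, j - H)]:                 # low-neighbor
        return (x, y), e
    if (x, y) in [(i + H, j), (i, j + H)] and e < cmax:    # high-neighbor
        return (x, y), e + 1
    return None
```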
15 Keeping the Parent
If mote (i, j) has a parent (x, y) and receives any connected(x, y, e), then (i, j) updates its inversion count c as:
if (x, y) is a low-neighbor, c := e
if (x, y) is a high-neighbor and e < cmax, c := e + 1
if (x, y) is a high-neighbor and e = cmax, then (i, j) loses its parent
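The same rule as a sketch; returning None stands for losing the parent:

```python
def keep_parent(i, j, x, y, e, H, cmax):
    """Mote (i, j), whose parent is (x, y), hears connected(x, y, e).
    Returns the updated inversion count, or None if the parent is dropped."""
    if (x, y) in [(i - H, j), (i, j - H)]:   # low-neighbor: inherit the count
        return e
    if e < cmax:                             # high-neighbor within the bound
        return e + 1
    return None                              # e == cmax: lose the parent
```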
16 Losing the Parent
There are two scenarios that cause mote (i, j) to lose its parent (x, y):
(i, j) receives a connected(x, y, cmax) msg and (x, y) happens to be a high-neighbor of (i, j)
(i, j) does not receive any connected(x, y, e) msg for kT seconds
17 Replacing the Parent
If mote (i, j) has a parent (x, y), receives a connected(u, v, f) msg where (u, v) is a neighbor of (i, j), and (i, j) detects that adopting (u, v) as its parent (using f to compute its inversion count c) would reduce the value of c, then (i, j) adopts (u, v) as its parent and recomputes its inversion count
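A sketch of the replacement rule, reusing the same neighbor and count logic (standalone, with the current parent and count passed in):

```python
def maybe_replace_parent(i, j, parent, c, u, v, f, H, cmax):
    """If adopting neighbor (u, v), which advertised count f, would lower
    mote (i, j)'s inversion count c, switch to it; otherwise keep parent."""
    if (u, v) in [(i - H, j), (i, j - H)]:                 # low-neighbor
        new_c = f
    elif (u, v) in [(i + H, j), (i, j + H)] and f < cmax:  # high-neighbor
        new_c = f + 1
    else:
        return parent, c
    return ((u, v), new_c) if new_c < c else (parent, c)
```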
18 Allowing Long Links
Add the following rule to the previous rules for acquiring and replacing a parent:
If any mote (i, j) ever receives a message connected(0, 0, 0), then mote (i, j) makes mote (0, 0) its parent
19 Outline Convergecast Routing tree Grid routing Reliable bursty broadcast Broadcast Flood, Flood-Gossip-Flood Trickle Polygonal broadcast, Fire-cracker Data driven Directed diffusion Rumor routing
20 Application context
A Line in the Sand (Lites): field sensor network experiment for real-time target detection, classification, and tracking
A target can be detected by tens of nodes, producing a traffic burst
Bursty convergecast: deliver traffic bursts to a nearby base station
21 Problem statement
Only 33.7% of packets are delivered with the default TinyOS messaging stack; this is unable to support precise event classification
Objectives:
Close to 100% reliability
Close to optimal event goodput (real-time)
Experimental study for high fidelity
22 Network setup
Network: 49 MICA2s in a 7 x 7 grid, 5 feet separation, power level 9 (for a 2-hop reliable communication range)
Logical Grid Routing (LGR): it uses reliable links and spreads traffic uniformly
[Figure: the 7 x 7 grid with the base station marked]
23 Traffic trace from Lites
Packets generated in a 7 x 7 subgrid when a vehicle passes across the middle of the Lites network
Optimal event goodput: 6.66 packets/second
24 Retransmission based packet recovery
At each hop, retransmit a packet if the corresponding ACK is not received within a constant time
Synchronous explicit ack (SEA): explicit ACK immediately after packet reception; shorter retransmission timer
Stop-and-wait implicit ack (SWIA): the forwarded packet serves as the ACK; longer retransmission timer
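A minimal sketch of the per-hop retransmission loop common to both schemes. The transmit and wait_for_ack callables are hypothetical; only the timeout length and what counts as an ACK differ between SEA and SWIA:

```python
def send_with_retransmit(packet, transmit, wait_for_ack, max_retx, timeout):
    """Transmit a packet and retransmit up to max_retx times whenever no ACK
    (explicit for SEA, the overheard forwarded packet for SWIA) arrives
    within the constant timeout."""
    for _ in range(max_retx + 1):
        transmit(packet)
        if wait_for_ack(packet, timeout):
            return True      # hop-level delivery confirmed
    return False             # retransmission budget exhausted
```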
25 SEA
Retransmission does not help much, and may even decrease reliability and goodput
Similar observations when adjusting the contention window of B-MAC and when using S-MAC
Retransmission-incurred contention

Metrics             RT=0    RT=1    RT=2
Reliability (%)     51.05   54.74   54.63
Delay (sec)         0.21    0.25    0.26
Goodput (pkt/sec)   4.01    4.05    3.63
26 SWIA
Again, retransmission does not help
Compared with SEA: longer delay and lower goodput/reliability
longer retransmission timer & blocking flow control
more ACK losses, and thus more unnecessary retransmissions

Metrics             RT=0    RT=1    RT=2
Reliability (%)     43.09   31.76   46.5
Delay (sec)         0.35    8.81    18.77
Goodput (pkt/sec)   3.48    2.58    1.41
27 Protocol RBC
Differentiated contention control: reduce channel contention caused by packet retransmissions
Window-less block ACK: non-blocking flow control, reduced ACK loss
Fine-grained tuning of retransmission timers
28 Window-less block ACK
Non-blocking, window-less queue management
Unlike sliding-window based block ACK, in-order packet delivery is not enforced; packets are timestamped
For block ACK, sender and receiver maintain the "order" in which packets have been transmitted
The "order" is identified without using a sliding window, so there is no upper bound on the number of un-ACKed packet transmissions
29 Sender: queue management
[Figure: the sender's static physical queue is organized into ranked virtual queues VQ_0 (highest) through VQ_{M+1} (lowest), where M is the maximum number of retransmissions; each buffer/packet ID is marked occupied or empty]
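A sketch of one plausible reading of this structure. The key assumption, not spelled out on the slide, is that a packet that has been transmitted k times waits in VQ_k and that VQ_{M+1} holds the free buffers:

```python
from collections import deque

class SenderQueues:
    """Window-less sender queues: VQ_0 .. VQ_M hold occupied buffers ranked
    by freshness, VQ_{M+1} holds the IDs of empty buffers (assumed layout)."""

    def __init__(self, M, num_buffers):
        self.M = M
        self.vq = [deque() for _ in range(M + 2)]      # VQ_0 .. VQ_{M+1}
        self.vq[M + 1].extend(range(num_buffers))      # all buffers start empty

    def enqueue(self, packet):
        """Admit a new packet if a free buffer exists; fresh packets go to VQ_0."""
        if not self.vq[self.M + 1]:
            return False
        buf = self.vq[self.M + 1].popleft()
        self.vq[0].append((buf, packet))
        return True

    def next_to_send(self):
        """Send from the highest-ranked (lowest-index) non-empty virtual queue."""
        for k in range(self.M + 1):
            if self.vq[k]:
                return k, self.vq[k][0]
        return None

    def after_transmission(self, k):
        """Move the head of VQ_k to VQ_{k+1} after being sent once more, or
        free its buffer once the retransmission budget M is exhausted."""
        buf, packet = self.vq[k].popleft()
        if k < self.M:
            self.vq[k + 1].append((buf, packet))
        else:
            self.vq[self.M + 1].append(buf)
```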
30 Differentiated contention control
Schedule channel access across nodes
Higher priority in channel access is given to nodes having fresher packets and to nodes having more queued packets
31 Implementation of contention control
The rank of a node j is the tuple (M - k, |VQ_k|, ID(j)), where
M: maximum number of retransmissions per hop
VQ_k: the highest-ranked non-empty virtual queue at j
ID(j): the ID of node j
A node with a larger rank value has higher priority
Neighboring nodes exchange their ranks; lower-ranked nodes leave the floor to higher-ranked ones
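The rank comparison as a tiny sketch; Python compares the tuples lexicographically, which matches the priority order described above:

```python
def rank(M, k, vq_k_len, node_id):
    """Rank of node j = (M - k, |VQ_k|, ID(j)); VQ_k is j's highest-ranked
    non-empty virtual queue. Larger tuples mean higher channel priority."""
    return (M - k, vq_k_len, node_id)

def may_transmit(my_rank, overheard_ranks):
    """A node defers (leaves the floor) to any neighbor with a larger rank."""
    return all(my_rank > r for r in overheard_ranks)
```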
32 Fine tuning retransmission timer
Timeout value is a tradeoff between the delay of necessary retransmissions and the probability of unnecessary retransmissions
In RBC: dynamically estimate the ACK delay; conservatively choose the timeout value; also reset timers upon packet and ACK loss
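One common way to realize "dynamically estimate ACK delay, choose the timeout conservatively" is an exponentially weighted moving average with a safety margin; a sketch only, and the smoothing constants are assumptions rather than values from the slides:

```python
class AckTimer:
    """EWMA-based estimate of the ACK delay with a conservative margin."""
    def __init__(self, initial=0.1, alpha=0.125, margin=2.0):
        self.est = initial       # smoothed ACK-delay estimate (seconds)
        self.alpha = alpha       # EWMA gain
        self.margin = margin     # conservative multiplier on the estimate

    def on_ack_delay(self, sample):
        self.est = (1 - self.alpha) * self.est + self.alpha * sample

    def timeout(self):
        return self.margin * self.est
```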
33 Event-wise
Retransmission helps improve reliability and goodput: close to optimal goodput (6.37 vs. 6.66)
Compared with SWIA, delay is significantly reduced: 1.72 vs. 18.77 seconds

Metrics             RT=0    RT=1    RT=2
Reliability (%)     56.21   83.16   95.26
Delay (sec)         0.21    1.18    1.72
Goodput (pkt/sec)   4.28    5.72    6.37
34 Distribution of packet generation and reception
RBC: packet reception smooths out and almost matches packet generation
SEA: many packets are lost despite quick packet reception
SWIA: significant delay and packet loss
35 Field deployment (http://www.cse.ohio-state.edu/exscal)
A Line in the Sand (Lites): ~100 MICA2s, 10 x 20 m² field; sensors: magnetometer, micro impulse radar (MIR)
ExScal: ~1,000 XSMs, ~200 Stargates, 288 x 1260 m² field; sensors: passive infrared radar (PIR), acoustic sensor, magnetometer
36 Outline Convergecast Routing tree Grid routing Reliable bursty broadcast Broadcast Flood, Flood-Gossip-Flood Trickle Polygonal broadcast, Fire-cracker Data driven Directed diffusion Rumor routing
37 Flooding
Forward the message upon hearing it the first time
Leads to broadcast storm and loss of messages
Obvious optimizations are possible: the node sets a timer upon receiving the message for the first time (the timer might be based on RSSI); if, before the timer expires, the node hears the message broadcast T times, then the node decides not to broadcast
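A sketch of this counter-based suppression. The timer API schedule(delay, fn) and the per-message identifier are assumptions for illustration:

```python
import random

def on_hear(msg_id, msg, counts, T, broadcast, schedule):
    """Counter-based flood suppression: start a random timer on the first
    reception; rebroadcast only if fewer than T copies were overheard
    before the timer fires."""
    if msg_id not in counts:                     # first time we hear it
        counts[msg_id] = 0
        delay = random.uniform(0.0, 0.1)         # could also be derived from RSSI
        schedule(delay, lambda: broadcast(msg) if counts[msg_id] < T else None)
    else:
        counts[msg_id] += 1                      # overheard another rebroadcast
```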
38 Outline Convergecast Routing tree Grid routing Reliable bursty broadcast Broadcast Flood, Flood-Gossip-Flood Trickle Polygonal broadcast, Fire-cracker Data driven Directed diffusion Rumor routing
39 Flooding, gossiping, flooding, …
Flood a message upon first hearing it
Gossip periodically (less frequently) to ensure that there are no missed messages
Upon detecting a missed message, disseminate it by flooding again
Best-effort flooding (fast), followed by guaranteed-coverage gossiping (slow), followed by best-effort flooding
The algorithm takes care of delivery to loosely connected sections of the WSN
Livadas and Lynch, 2003
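A toy sketch of the recovery phase, assuming each node keeps the set of message IDs it has delivered and periodically compares summaries with a gossip partner (this is an assumed mechanism, not the paper's exact algorithm):

```python
def gossip_round(my_ids, neighbor_ids, reflood):
    """Compare message-ID summaries with a gossip partner and re-flood
    anything the partner has missed; the partner does the same for us."""
    for msg_id in my_ids - neighbor_ids:
        reflood(msg_id)           # neighbor missed it: fast flooding again
    return neighbor_ids - my_ids  # IDs we are missing and should pull
```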
40 Outline Convergecast Routing tree Grid routing Reliable bursty broadcast Broadcast Flood, Flood-Gossip-Flood Trickle Polygonal broadcast, Fire-cracker Data driven Directed diffusion Rumor routing
41 Trickle See Phil Levis’s talk.
42 Outline Convergecast Routing tree Grid routing Reliable bursty broadcast Broadcast Flood, Flood-Gossip-Flood Trickle Polygonal broadcast, Fire-cracker Data driven Directed diffusion Rumor routing
43 Polygonal broadcasts
Imaginary polygonal tilings for supporting communication, e.g., a 1-bit broadcast scheme for hexagonal tiling
Dolev, Herman, Lahiani, "Brief announcement: polygonal broadcast, secret maturity and the firing sensors", PODC 2004
[Figure: hexagonal tiling with cells labeled 0/1]
44 Outline Convergecast Routing tree Grid routing Reliable bursty broadcast Broadcast Flood, Flood-Gossip-Flood Trickle Polygonal broadcast, Fire-cracker Data driven Directed diffusion Rumor routing
45 Fire-cracker protocol
Firecracker uses a combination of routing and broadcasts to rapidly deliver a piece of data to every node in a network
To start dissemination, the data source sends data to distant points in the network
Once the data reaches its destinations, broadcast-based dissemination begins along the paths
By using an initial routing phase, Firecracker can disseminate at a faster rate than scalable broadcasts while sending fewer packets
The selection of points to route to has a large effect on performance
46 Outline Convergecast Routing tree Grid routing Reliable bursty broadcast Broadcast Flood, Flood-Gossip-Flood Trickle Polygonal broadcast, Fire-cracker Data driven Directed diffusion Rumor routing
47 Directed Diffusion
Protocol initiated by the destination (through a query)
Data has attributes; the sink broadcasts interests
Nodes diffuse the interest towards producers via a sequence of local interactions
Nodes receiving the broadcast set up a gradient (leading towards the sink)
Intermediate nodes opportunistically fuse interests and aggregate, correlate, or cache data
Reinforcement and negative reinforcement are used to converge to an efficient distribution
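A sketch of the local interest-diffusion step. The state layout and field names are assumptions; the real protocol also tracks interval, duration, and gradient expiry:

```python
def on_interest(node, interest, sender, broadcast):
    """Cache the interest, record a gradient toward the neighbor it came
    from (pointing back to the sink), and rebroadcast interests seen for
    the first time so they keep diffusing toward producers."""
    key = (interest["type"], tuple(interest["rect"]))
    first_time = key not in node["gradients"]
    node["gradients"].setdefault(key, set()).add(sender)   # gradient toward sink
    if first_time:
        broadcast(interest)
```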
48 Directed diffusion
Intanagonwiwat, Govindan, and Estrin, "Directed diffusion: a scalable and robust communication paradigm for sensor networks", 6th Conference on Mobile Computing and Networking, 2000.
49-61 Directed Diffusion (animation)
[Figure sequence: the sink's interest is flooded (directional flooding) toward the source; nodes set up gradients; the sink reinforces one path; data flows from the source along the reinforced path; when a path fails, reinforcing an alternate gradient restores delivery (robustness)]
62 Design considerations……
63 Data Naming
Expressing an Interest using attribute-value pairs, e.g.:
Type = Wheeled vehicle      // detect vehicle location
Interval = 20 ms            // send events every 20 ms
Duration = 10 s             // send for the next 10 s
Field = [x1, y1, x2, y2]    // from sensors in this area
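The same interest written out as a small data structure; a sketch whose field names simply mirror the attribute-value pairs above and whose numeric area is a placeholder, not a fixed API:

```python
# Hypothetical in-memory form of the interest above
interest = {
    "type": "wheeled vehicle",              # detect vehicle location
    "interval": 0.020,                      # seconds: send events every 20 ms
    "duration": 10.0,                       # seconds: send for the next 10 s
    "rect": (100.0, 100.0, 200.0, 400.0),   # placeholder [x1, y1, x2, y2] area
}
```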
64 Outline Convergecast Routing tree Grid routing Reliable bursty broadcast Broadcast Flood, Flood-Gossip-Flood Trickle Polygonal broadcast, Fire-cracker Data driven Directed diffusion Rumor routing
65 Rumor routing
Deliver packets to events (query/configure/command); no global coordinate system
Algorithm:
An event sends out agents which leave trails of routing info
Agents do a random walk
If an agent crosses a path to another event, a path is established
Agents also optimize paths if they find shorter ones
Braginsky and Estrin, WSNA 2002
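A toy sketch of an agent's random walk leaving a trail in per-node routing tables toward the event; the function name and table layout are assumptions for illustration:

```python
import random

def agent_walk(event_node, neighbors, event_id, ttl, tables):
    """Random-walk an agent for ttl hops from the node that saw the event.
    Every visited node records (distance walked, previous hop) as its route
    toward the event, keeping the shorter trail if it already has one."""
    node, prev, dist = event_node, None, 0
    while ttl > 0:
        prev, node = node, random.choice(neighbors[node])
        dist, ttl = dist + 1, ttl - 1
        best = tables[node].get(event_id)
        if best is None or dist < best[0]:
            tables[node][event_id] = (dist, prev)   # next hop back toward event
```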