1-1 Routing
1-2 Data-Centric Routing
• Paradigm shift from accessing data from individual nodes to accessing "relevant" data:
  - Data within a certain region,
  - Data on events,
  - Collective data processing, e.g., "What's the average temperature of a region?", "How many animals cross this path?", "Is there an intruder in the area?".
1-3 Challenges
• Energy-limited nodes.
• Computation:
  - Aggregate data.
  - Suppress redundant routing information.
• Communication:
  - Bandwidth-limited.
  - Energy-intensive.
Goal: minimize energy dissipation.
1-4 Challenges
• Scalability: arbitrarily large-scale ad hoc deployment.
  - Fully distributed, without global knowledge.
  - Large numbers of sources and sinks.
• Robustness: unexpected sensor node failures.
• Dynamics:
  - Topology changes (e.g., mobility, failures).
  - Target mobility.
1-5 Directed Diffusion
• Intanagonwiwat et al., ACM MobiCom 2000.
• One of the first data-centric routing paradigms.
1-6 Application Example: Remote Surveillance
  - "Give me periodic reports about animal locations in region A every t seconds."
  - "In what direction is the vehicle in region Y moving?"
1-7 Basic Idea
• Simple attribute-based naming as the fundamental building block.
• Requests for information (interests) and relevant data (reports) are described as sets of attribute-value pairs.
1-8 Naming
• Content-based naming.
  - Tasks are named by a list of attribute-value pairs.
  - A task description specifies an interest for data matching the attributes.
  - Animal-tracking example:
    Interest (task) description, sent as a request:
      Type = four-legged animal
      Interval = 20 ms
      Duration = 1 minute
      Location = [-100, -100; 200, 400]
    Node data, sent as a reply:
      Type = four-legged animal
      Instance = elephant
      Location = [125, 220]
      Confidence = 0.85
      Time = 02:10:35
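As a concrete illustration, the interest and the matching reply could be held as simple attribute-value maps. This is a minimal Python sketch; the field names and the matches() helper are illustrative, not part of the directed diffusion specification:

```python
# Hypothetical attribute-value encoding of the slide's animal-tracking example.
interest = {
    "type": "four-legged animal",
    "interval_ms": 20,               # requested event reporting interval
    "duration_s": 60,                # how long the task stays active
    "rect": (-100, -100, 200, 400),  # region of interest [x1, y1; x2, y2]
}

data = {
    "type": "four-legged animal",
    "instance": "elephant",
    "location": (125, 220),
    "confidence": 0.85,
    "timestamp": "02:10:35",
}

def matches(data, interest):
    """Return True if a data report satisfies an interest (type and region match)."""
    x1, y1, x2, y2 = interest["rect"]
    x, y = data["location"]
    return data["type"] == interest["type"] and x1 <= x <= x2 and y1 <= y <= y2

print(matches(data, interest))  # True for this example
```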
1-9 Elements of Directed Diffusion
• Naming
  - Data is named using attribute-value pairs.
• Interests
  - A node requests data by sending interests for named data.
• Gradients
  - Gradients are set up within the network, directed toward the sink, to "draw" events, i.e., data matching the interest.
• Reinforcement
  - The sink reinforces particular neighbors to draw higher-quality (higher data rate) events.
1-10 Basic Algorithm
• The sink floods the interest (the interest may be periodically repeated). Every node caches the interest while it is valid and creates a local gradient toward the neighboring nodes from which it heard the interest. Sources with relevant data start sending it along the local gradients.
• When the sink starts receiving data, it reinforces one or a few of the paths, pruning the rest.
• Negative reinforcements can be used to adjust to changing conditions. (See the sketch below.)
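A minimal sketch of the interest-handling and gradient-setup step described above, assuming attribute-value interests like the earlier example; the Node class, its field names, and the returned rebroadcast list are illustrative, not the paper's actual data structures:

```python
import time

class Node:
    """Illustrative per-node interest caching and gradient setup."""

    def __init__(self, node_id, neighbors):
        self.node_id = node_id
        self.neighbors = neighbors          # neighbor ids we can reach directly
        self.interest_cache = {}            # interest key -> (expiry time, gradients)

    def on_interest(self, interest, from_neighbor):
        key = interest["type"]
        expiry = time.time() + interest["duration_s"]
        _, gradients = self.interest_cache.get(key, (None, {}))
        # Gradient toward the neighbor we heard the interest from,
        # at the (initially low) event rate the interest asks for.
        gradients[from_neighbor] = 1000.0 / interest["interval_ms"]  # events/sec
        self.interest_cache[key] = (expiry, gradients)
        # Re-broadcast (flood) the interest to all other neighbors.
        return [n for n in self.neighbors if n != from_neighbor]

node = Node("n7", neighbors=["n3", "n5", "n9"])
interest = {"type": "four-legged animal", "interval_ms": 20, "duration_s": 60}
print(node.on_interest(interest, from_neighbor="n3"))  # ['n5', 'n9'] to rebroadcast to
```

A source holding matching data would then send events toward every neighbor listed in its gradients, at the requested rates.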
1-11 Example: Interest Propagation
[Figure: interests diffusing from the sink toward the source, setting up gradients]
• Interest = interrogation.
• Gradient = who is interested (data rate, duration, direction).
• A neighbor's choices for propagating interests:
  1. Flooding.
  2. Geographic routing.
  3. Using cached data to direct interests.
1-12 Data Propagation
• A sensor node computes the highest requested event rate among all its outgoing gradients.
• When a node receives data:
  - Find a matching interest entry in its cache; examine the gradient list and send out data at each gradient's rate.
  - A data cache keeps track of recently seen data items (loop prevention).
  - Data messages are unicast individually to the relevant neighbors. (Sketch below.)
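A minimal sketch of this data-forwarding step under the same assumptions; the cache layouts are hypothetical and per-gradient rate control is omitted:

```python
def forward_data(data, from_neighbor, interest_cache, data_cache):
    """Illustrative forwarding step: match interest, suppress duplicates, pick neighbors.

    interest_cache: dict mapping data type -> {neighbor_id: requested event rate}
    data_cache:     set of (instance, timestamp) tokens already seen (loop prevention)
    Returns the list of neighbors to unicast the data message to.
    """
    gradients = interest_cache.get(data["type"])
    if gradients is None:
        return []                              # no matching interest: drop
    token = (data.get("instance"), data.get("timestamp"))
    if token in data_cache:
        return []                              # already seen: suppress loops and duplicates
    data_cache.add(token)
    # Unicast individually to every neighbor with an outgoing gradient,
    # except the one the data arrived from.
    return [n for n in gradients if n != from_neighbor]

# Example use with the hypothetical caches:
interest_cache = {"four-legged animal": {"sink_side_neighbor": 50.0}}
data_cache = set()
data = {"type": "four-legged animal", "instance": "elephant", "timestamp": "02:10:35"}
print(forward_data(data, "source_side_neighbor", interest_cache, data_cache))
```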
1-13 Reinforcing the Best Path
[Figure: source and sink; low-rate events flow over multiple paths, and reinforcement (an increased interest) follows the best path]
• Reinforcement = increased interest (a higher requested data rate).
• A node chooses which neighbor(s) to reinforce, for example:
  1. At least one neighbor.
  2. The neighbor from which it first received the latest event (low delay).
  3. All neighbors from which new events were recently received.
1-14 Local Behavior Choices
• For propagating interests:
  - In the example, flooding.
  - More sophisticated behaviors possible, e.g., based on GPS.
• For setting up gradients:
  - Data-rate gradients are set up toward neighbors that sent an interest.
  - Others possible: probabilistic gradients, energy gradients, etc.
1-15 Local Behavior Choices
• For data transmission:
  - Multipath delivery with selective quality along different paths.
  - Probabilistic forwarding.
  - Single-path delivery, etc.
• For reinforcement:
  - Reinforce paths based on observed delays, losses, variances, etc.
1-16 Initial Simulation Study of Diffusion
• Key metric:
  - Average dissipated energy per event delivered, which indicates energy efficiency and network lifetime.
• Compare diffusion to:
  - Flooding.
  - Centrally computed tree (omniscient multicast).
1-17 Diffusion Simulation Details
• Simulator: ns-2.
• Network size: 50-250 nodes.
• Transmission range: 40 m.
• Constant density: 1.95 × 10⁻³ nodes/m² (9.8 nodes within transmission radius).
• MAC: modified contention-based MAC.
• Energy model: mimics a realistic sensor radio [Pottie 2000].
  - 660 mW in transmission, 395 mW in reception, and 35 mW in idle.
1-18 Diffusion Simulation
• Surveillance application:
  - 5 sources are randomly selected within a 70 m × 70 m corner of the field.
  - 5 sinks are randomly selected across the field.
  - High data rate: 2 events/sec.
  - Low data rate: 0.02 events/sec.
  - Event size: 64 bytes.
  - Interest size: 36 bytes.
  - All sources send the same location estimate in the base experiments.
1-19 Average Dissipated Energy
[Plot: average dissipated energy (Joules/node/received event) versus network size (50-300 nodes) for diffusion, omniscient multicast, and flooding]
• Diffusion can outperform flooding and even omniscient multicast (by suppressing duplicate location estimates).
1-20 Directed Diffusion Variants
• Original mechanism: two-phase pull, i.e., interests and reinforcements.
• One-phase pull variant: eliminates reinforcement as a separate phase.
  - Sink floods the interest.
  - Data source selects the best reverse path.
  - Assumes links are bidirectional.
• Push diffusion:
  - Initiative comes from the sources, i.e., they advertise their data along multiple paths; the sink, if interested, reinforces one or some of the paths.
1-21 Pull versus Push Diffusion
• Overall performance is application-dependent.
• "Pull" is more energy-efficient in terms of route setup when there are many active sources.
• "Push" is more efficient when there are fewer sources and more sinks.
1-22 Multipath Routing
• Robustness/resilience to failures.
• Multipath versus alternate-path routing.
• Totally or partially disjoint paths.
1-23 Directed Diffusion Resilience
• Periodic flooding of interests and events to circumvent failures.
• Problem? (Flooding is energy-expensive.)
1-24 Braided Multipath Routing
• Ganesan et al., MC2R 2002.
• Alternate-path routing.
• Braided paths: node/link disjointness between the multiple paths is not required.
  - For each node on the primary path, find a path that does not include that node.
1-25 Observations
• Primary path: the "best" path.
• Data is sent at a lower rate on the alternate paths.
• Upon a failure on the primary path, an alternate path is reinforced.
• If all alternate paths fail, flooding is used for path re-establishment.
• Overhead: alternate-path maintenance.
• Resilience is measured as how often path re-establishment is needed.
1-26 Approach
• Disjoint versus "braided" paths.
• How to build multiple paths with local information only?
1-27 Localized Disjoint Multipaths
• The sink establishes the primary path.
• The sink selects its "next best" neighbor A.
• A propagates an "alternate path" reinforcement to its "best" neighbor B.
• If B is already on a path between the sink and the source, B sends back a "negative reinforcement".
• Access to local information only may lead to longer paths.
1-28 Braided Multipath
• Partially disjoint paths.
• For each node on the primary path, find the best path from source to sink that does not contain that node.
• Paths in the braid expend equivalent energy.
• Reinforcement is sent to the "best" neighbor and an alternate reinforcement to the "next best" neighbor. (Idealized construction sketched below.)
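For intuition, a centralized, idealized sketch of the braid construction (BFS hop count stands in for the "best" path metric; the topology and function names are illustrative, and the actual mechanism is localized rather than centralized):

```python
from collections import deque

def shortest_path(adj, src, dst, excluded=frozenset()):
    """BFS shortest path in an undirected graph, skipping nodes in `excluded`."""
    if src in excluded or dst in excluded:
        return None
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent and v not in excluded:
                parent[v] = u
                queue.append(v)
    return None

def idealized_braid(adj, source, sink):
    """For each intermediate node on the primary path, find the best
    source-to-sink path that avoids that node (idealized braided multipath)."""
    primary = shortest_path(adj, source, sink)
    alternates = []
    for node in primary[1:-1]:
        alt = shortest_path(adj, source, sink, excluded={node})
        if alt is not None:
            alternates.append(alt)
    return primary, alternates

# Tiny illustrative topology.
adj = {
    "S": ["a", "b"], "a": ["S", "c"], "b": ["S", "c", "d"],
    "c": ["a", "b", "D"], "d": ["b", "D"], "D": ["c", "d"],
}
print(idealized_braid(adj, "S", "D"))
```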
1-29 Evaluation
• Energy efficiency.
  - Overhead.
• Resilience to failures.
  - Isolated versus patterned failures.
1-30 Results
• Braided multipaths are more energy-efficient.
  - Especially at lower densities.
• Disjoint multipaths have better resilience to patterned losses.
• Braided multipaths exhibit better resilience to isolated failures.
1-31 Gradient Cost Routing (GRAd)
• Poor et al., ACM Queue 2003.
• All nodes keep an estimated cost to destinations (sinks), e.g., number of hops.
• When a packet is sent, it includes the cost so far (i.e., number of hops traversed) and a TTL.
• A node receiving the packet forwards it if the node's own estimated cost to the destination is smaller than the packet's remaining TTL.
• The forwarding node increments the packet's cost by one and decrements its TTL by one.
• GRAd = limited flooding for robustness, at the expense of overhead. (See the sketch below.)
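A minimal sketch of this forwarding rule; the packet field names are assumptions, not from the paper:

```python
def grad_forward(packet, my_cost_to_dest):
    """Illustrative GRAd forwarding step.

    `packet` is a hypothetical dict: {"cost_so_far": hops traversed, "ttl": remaining hop budget, ...}.
    A node rebroadcasts only if its own estimated hop cost to the destination
    is smaller than the packet's remaining TTL; otherwise it drops the packet.
    """
    if my_cost_to_dest >= packet["ttl"]:
        return None                         # not on a useful path within the budget: drop
    forwarded = dict(packet)
    forwarded["cost_so_far"] += 1           # one more hop traversed
    forwarded["ttl"] -= 1                   # one less hop of budget remaining
    return forwarded

# Example: a node 3 hops from the sink sees a packet with 5 hops of budget left.
print(grad_forward({"payload": "event", "cost_so_far": 2, "ttl": 5}, my_cost_to_dest=3))
```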
1-32 Gradient Broadcast (GRAB)
• Ye et al., IPSN 2003.
• Enhances GRAd with "credits" that are consumed hop by hop.
  - Earlier hops have more credit remaining and thus spread the packet more widely at first.
  - Ensures the diverse paths converge toward the sink. (Sketch below.)
[Figure: forwarding mesh from source S narrowing toward destination D]
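A rough, non-authoritative sketch of a GRAB-style credit check under a common cost-field formulation; the exact budget rule and the field names here are assumptions rather than the paper's definitions:

```python
def grab_should_forward(packet, my_cost_to_sink):
    """Hypothetical GRAB-style check.

    `packet` carries the source's cost to the sink plus an extra "credit" budget,
    and the cost already consumed on the way. A node keeps relaying only while the
    remaining budget can still cover its own remaining cost to the sink, so the
    forwarding mesh is wide near the source and narrows toward the sink.
    """
    budget = packet["source_cost"] + packet["credit"]
    return packet["consumed_cost"] + my_cost_to_sink <= budget

# Example: the source was 10 hops from the sink and granted 3 extra hops of credit.
# After 4 hops of consumed cost, a node 8 hops from the sink still forwards (4 + 8 <= 13),
# while a node 10 hops from the sink drops the packet (4 + 10 > 13).
print(grab_should_forward({"source_cost": 10, "credit": 3, "consumed_cost": 4}, my_cost_to_sink=8))   # True
print(grab_should_forward({"source_cost": 10, "credit": 3, "consumed_cost": 4}, my_cost_to_sink=10))  # False
```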
1-33 Energy-Efficient Routing
• Goal: maximize network lifetime.
• Techniques range from:
  - Use of a suitable shortest-path metric.
  - Derivation of energy-efficient routes using global optimization.
  - Traffic spreading for load balancing.
1-34 Power-Aware Routing for MANETs
• Singh et al., ACM MobiCom 98.
• Pick nodes with longer remaining battery lifetime as intermediate relays.
• If R_i is the remaining energy of node i, the cost of using node i as a relay is c_i = 1/R_i; a shortest-path algorithm then finds the route that minimizes the path cost Σ_i 1/R_i. (Worked example below.)
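A short worked example of this metric with made-up residual energies, showing how the Σ_i 1/R_i path cost steers routes away from nearly depleted relays:

```python
def path_cost(residual_energies):
    """Sum of 1/R_i over the intermediate relays of a route (Singh et al. metric)."""
    return sum(1.0 / r for r in residual_energies)

# Two hypothetical routes between the same endpoints, described by the
# remaining energy (made-up values, in Joules) of their intermediate relays.
route_a = [5.0, 4.0]        # two relays, both fairly fresh
route_b = [9.0, 9.0, 0.5]   # three relays, one nearly depleted

print(path_cost(route_a))   # 0.45
print(path_cost(route_b))   # ~2.22, so this route is avoided, protecting the low-energy node
```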
1-35 Traffic Spreading
• Balance load across multiple paths.
1-36 Traffic Spreading Approaches
• Stochastic: a node picks its next hop randomly among neighbors with equal gradient. (Sketch below.)
• Energy-based: a node increases its "height" when its energy falls below a certain threshold; other nodes then need to adjust their heights accordingly.
• Stream-based: divert streams away from nodes that are part of paths used by other streams.
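A minimal sketch of the stochastic option, assuming each node knows a hypothetical cost/gradient value per neighbor:

```python
import random

def stochastic_next_hop(neighbor_gradients):
    """Illustrative stochastic spreading: pick uniformly at random among the
    neighbors that share the best (lowest) gradient/cost toward the sink.

    `neighbor_gradients` is a hypothetical dict {neighbor_id: cost_to_sink}.
    """
    best = min(neighbor_gradients.values())
    candidates = [n for n, g in neighbor_gradients.items() if g == best]
    return random.choice(candidates)

print(stochastic_next_hop({"a": 3, "b": 3, "c": 5}))  # "a" or "b", chosen at random
```

Randomizing among equally good next hops spreads energy drain over several paths instead of exhausting a single one.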
1-37 Geographic Routing
• Useful for location-specific interests/queries.
• Deliver packets to nodes or regions based on their geographic location.
• Typically, nodes know their own position and the positions of their immediate neighbors.
1-38 Geographic Forwarding
• Simplest form of geography-based forwarding.
  - Finn, ISI Tech Report, 1987.
  - Greedy approach.
  - Forward the packet to the neighbor closest to the destination.
1-39 Basic Geographic Forwarding
• B. Karp and H. T. Kung, "GPSR: Greedy Perimeter Stateless Routing for Wireless Networks," MobiCom 2000.
• Greedy: send the packet to the neighbor closest to the destination.
• Can get stuck in voids; GPSR proposes a perimeter-routing mode to escape them. (Greedy step sketched below.)
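A minimal sketch of the greedy step, assuming each node knows its own and its neighbors' positions; the names and the void test are illustrative, and GPSR's perimeter mode is not shown:

```python
import math

def greedy_next_hop(my_pos, neighbor_positions, dest_pos):
    """Illustrative greedy geographic forwarding step.

    Returns the neighbor strictly closer to the destination than this node,
    minimizing the remaining distance, or None if no such neighbor exists
    (a "void", where GPSR would switch to perimeter mode).
    """
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    best, best_dist = None, dist(my_pos, dest_pos)
    for nbr, pos in neighbor_positions.items():
        d = dist(pos, dest_pos)
        if d < best_dist:
            best, best_dist = nbr, d
    return best

neighbors = {"n1": (10.0, 5.0), "n2": (4.0, 9.0)}
print(greedy_next_hop((0.0, 0.0), neighbors, dest_pos=(20.0, 0.0)))  # "n1"
```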
1-40 Trajectory-Based Forwarding
• D. Niculescu and B. Nath, "Trajectory Based Forwarding and Its Applications," MobiCom 2003.
• Pre-encode an arbitrary geographic trajectory in the packet; the packet travels through the nodes closest to this trajectory.
• Particularly well suited for large, dense networks.
1-41 Geographic Routing without Location Information (Rao et al.)
• Apply geographic routing when (most) nodes do not have position information.
• Approach: "virtual coordinates".
  - Use local connectivity information.
1-42 Assumptions
• Nodes know their own coordinates.
• Nodes know the coordinates of nodes in their 2-hop neighborhood.
1-43 Data Forwarding
• Greedy: forward to the neighbor closest to the destination.
• When the packet arrives at the destination, stop.
• If stuck, do an expanding ring search until a closer node is found.
1-44 Coordinate Construction
• A node's coordinates are the average of its neighbors' coordinates (computed iteratively; see the sketch below).
• Finding the perimeter nodes' coordinates:
  - Beacon nodes flood a "Hello" message.
  - Perimeter nodes discover their distance in hops to the other perimeter nodes.
  - Perimeter nodes broadcast their perimeter vector.
  - Perimeter nodes use triangulation to find the coordinates of all perimeter nodes.
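A centralized toy sketch of the averaging (relaxation) step, assuming the perimeter nodes' coordinates are already fixed; the topology, iteration count, and names are illustrative, and the real scheme runs distributedly:

```python
def relax_coordinates(adj, coords, fixed, iterations=100):
    """Iteratively set each non-fixed node's virtual coordinates to the average
    of its neighbors' coordinates; `fixed` nodes (e.g., perimeter nodes) keep theirs."""
    coords = dict(coords)
    for _ in range(iterations):
        new = {}
        for node, neighbors in adj.items():
            if node in fixed:
                new[node] = coords[node]
            else:
                xs = [coords[n][0] for n in neighbors]
                ys = [coords[n][1] for n in neighbors]
                new[node] = (sum(xs) / len(xs), sum(ys) / len(ys))
        coords = new
    return coords

# Tiny example: a chain p1 - a - b - p2 with fixed endpoints.
adj = {"p1": ["a"], "a": ["p1", "b"], "b": ["a", "p2"], "p2": ["b"]}
coords = {"p1": (0.0, 0.0), "a": (0.0, 0.0), "b": (0.0, 0.0), "p2": (3.0, 0.0)}
print(relax_coordinates(adj, coords, fixed={"p1", "p2"}))
# Interior nodes converge toward evenly spaced points between the fixed ends.
```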
1-45 Coordinate Construction (cont'd)
• Deciding whether a node is on the perimeter:
  - Use distance to the beacon nodes.
  - If a node is farther from a beacon node than all of its 2-hop neighbors, it is on the perimeter.
1-46 Evaluation
• Comparison of greedy routing using real versus virtual coordinates.
• Metrics:
  - Success rate: number of packets reaching the destination using purely greedy routing.
  - Average path length.
  - Routing load.
  - Overhead.
1-47 Results
• Scalability:
  - Network size.
  - Density.
• Mobility.
• Losses.
• Obstacles.
• Trade-offs.
1-48 Routing with Mobile Nodes
• Significant previous work on routing for MANETs, where potentially all nodes can move.
• Sensor networks are assumed to be predominantly static; however, a few nodes (e.g., the sinks) can be mobile.
  - E.g., robots or humans roaming in the area.
• Advantages of mobility:
  - Enables collecting information in a timely manner.
  - Provides network connectivity.
1-49 Data MULEs
1-50 Target Deployments
• Sparse networks.
• Multi-tiered deployments:
  - Sensors.
  - Wired access points.
  - MULEs.
1-51 Approach
• Mobile agents.
• MULEs: Mobile Ubiquitous LAN Extensions.
  - Mobility.
  - Short-range communication. UWB radios? (Low power and able to handle bursts.)
  - Buffering.
1-53 Pros and Cons
• Pros:
  - Energy efficiency: sensors only need to listen for a passing MULE.
  - Tolerates intermittent connectivity.
• Cons:
  - Increased latency.
1-54 Three-Tier Architecture
• Wired access points (APs).
• MULEs.
• Sensors.
1-55 Considerations
• APs have no resource limitations.
• MULEs:
  - Storage, mobility, ability to communicate with sensors and APs.
  - Unpredictable movement patterns.
  - Can talk to other MULEs. Benefits?
• Robustness.
• Reliability.
1-56 More Considerations
• No routing overhead.
• MULEs can transport data for multiple applications.
• High latency.
  - Delay bounds?
• Mobility limitations.
1-57 Main Results
• Buffer requirements at the sensors are inversely proportional to the ratio of the number of MULEs to the grid size.
• Buffer requirements at a MULE are inversely proportional to the ratio of the number of MULEs to the grid size and to the ratio of the number of APs to the grid size.
• There is a trade-off relationship between buffer capacity, the number of MULEs, and reliability.