May 2005
Project: IEEE P802.15 Working Group for Wireless Personal Area Networks (WPANs)
Submission Title: IEEE 802.15.5 WPAN Mesh Networks
Date Submitted: 12 May 2005
Source: Jianliang Zheng, Yong Liu, Chunhui Zhu, Marcus Wong, Myung Lee
Company: Samsung
Address: Samsung, Steinman Hall, 140th St & Convent Ave, New York, NY 10031, USA
Re: Call for Proposal: IEEE P /0071
Abstract: This document presents Samsung's proposal for IEEE WPAN mesh networking, based on a meshed-tree approach, including meshed-tree routing, multicasting, and key pre-distribution.
Purpose: This proposal is provided to be adopted as a recommended practice for IEEE WPAN mesh networking.
Notice: This document has been prepared to assist the IEEE P802.15 working group. It is offered as a basis for discussion and is not binding on the contributing individual(s) or organization(s). The material in this document is subject to change in form and content after further study. The contributor(s) reserve(s) the right to add, amend or withdraw material contained herein.
Release: The contributor acknowledges and accepts that this contribution becomes the property of IEEE and may be made publicly available by P802.15.
Zheng, Liu, Zhu, Wong, Lee (Samsung)
IEEE 802.15.5 WPAN Mesh Networks
Jianliang Zheng, Yong Liu, Chunhui Zhu, Marcus Wong, Myung Lee (Samsung)
Objectives
To construct a mesh networking layer over the IEEE MAC and PHY. The proposed mesh network includes the following features:
- Meshed tree formation
- Block addressing
- Routing
- Multicasting
- Key pre-distribution
It is extensible to other IEEE MACs and PHYs.
Contents
- Meshed Tree
- Beacon-enabled Network
- Tree-Based Multicasting
- Key Pre-distribution
Meshed Tree
Outline
- Adaptive Robust Tree (ART): three phases; Meshed ART (MART)
- Mesh Networking: routing tables; data forwarding; route discovery; route repair
- Summary
1. Adaptive Robust Tree (ART)
- Three phases
- Meshed ART (MART)
Three Phases
- Initialization phase
- Normal phase
- Recovery phase
Initialization Phase
- Stage 1: Association. New nodes associate with the network, forming a tree rooted at node A.
- Stage 2: Reporting the number of children. Counts are reported bottom-up; in the example, A's two branches report [8] and [6] children.
- Stage 3: Address assignment. Each branch receives a block of consecutive addresses, recorded as [beg, end, next]; in the example, A assigns [1,16,1] and [17,28,17] to its two branches.
An ART is formed. Additional addresses can be reserved.
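The three stages above can be sketched in Python. This is a minimal illustration, assuming a dict-based tree and no reserved spare addresses (the proposal allows reserving extra addresses per branch); all names are illustrative, not taken from the draft.

```python
# Stage 2 (bottom-up): each node reports how many descendants it has.
def count_descendants(tree, node):
    return sum(1 + count_descendants(tree, c) for c in tree.get(node, []))

# Stage 3 (top-down): each child branch gets a block of consecutive
# addresses sized to its subtree; the parent keeps `beg` for itself.
def assign_blocks(tree, node, beg, end, blocks):
    blocks[node] = (beg, end)
    nxt = beg + 1  # first free address after the parent's own
    for child in tree.get(node, []):
        size = 1 + count_descendants(tree, child)
        assign_blocks(tree, child, nxt, nxt + size - 1, blocks)
        nxt += size

# A small illustrative tree (not the full example from the figure)
tree = {"A": ["B", "J"], "B": ["C", "H"], "C": ["D"], "J": ["K"], "K": ["L"]}
blocks = {}
assign_blocks(tree, "A", 0, count_descendants(tree, "A"), blocks)
```

Each node's block is contiguous and nested inside its parent's block, which is what makes the later block-based routing decisions a simple range check.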
Normal Phase
Normal data transmissions take place over the tree (e.g., from node C to node L). Each node now has its own address in addition to its branch blocks (e.g., A = 0, B = 1, J = 17, C = 3, K = 19). Nodes are still allowed to join the network.
Recovery Phase
Tree route repair (discussed later).
Meshed ART (MART)
Neighbors treat each other as a child: besides its tree branches, a node records its neighbors' address blocks (in the example, node C also learns [13,16,13] and [17,28,17] from its neighbor H). Benefits: shorter paths and elimination of single points of failure (SPOFs).
2. Mesh Networking
- Routing tables: ART Table (ARTT); Non-Tree Table (NTT)
- Route type
- Data forwarding
- Route discovery
- Route repair: tree route repair; non-tree route repair
ART Table (ARTT)
Each node keeps an ART table (ARTT) to track its branches. Each branch is assigned a block of consecutive addresses; there is no need to assign consecutive blocks to different branches. An ARTT has the fields:
- num_branch: number of branches the node has
- type_i: type of branch i (desIn, desOut, srcIn, srcOut)
- beg_addr_i: beginning address of branch i
- end_addr_i: ending address of branch i
- priority_i: priority of branch i (normal, high)
- next_hop_i: next hop via which the whole branch i is routed
Branch priority, from high to low, is desIn, desOut, srcIn, srcOut; the priority_i field further distinguishes branches of the same type.
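An ARTT lookup can be sketched as follows. This is illustrative only: the field names follow the slide, but the dict-based entry structure and the exact tie-breaking are assumptions.

```python
# Branch types ranked from high to low priority, per the slide.
TYPE_RANK = {"desIn": 0, "desOut": 1, "srcIn": 2, "srcOut": 3}

def artt_next_hop(artt, dst):
    """Pick the next hop whose branch block [beg, end] covers dst,
    preferring higher-priority branch types, then the 'high' flag."""
    covering = [e for e in artt if e["beg"] <= dst <= e["end"]]
    if not covering:
        return None  # dst is not in any branch of this node
    best = min(covering,
               key=lambda e: (TYPE_RANK[e["type"]], e["priority"] != "high"))
    return best["next_hop"]

# Two branches covering the same block; desIn/high wins over srcIn/normal.
artt = [
    {"type": "srcIn", "beg": 21, "end": 26, "priority": "normal", "next_hop": 17},
    {"type": "desIn", "beg": 21, "end": 26, "priority": "high",   "next_hop": 13},
]
```

Because blocks are consecutive address ranges, the lookup is a range check per branch rather than a per-destination table entry.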
Non-Tree Table (NTT)
An NTT has the fields:
- num_entry: number of NTT entries
- beg_addr_i: beginning address of entry i (also the destination address)
- end_addr_i: ending address of entry i
- next_hop_i: next hop via which entry i can be routed
- hops_i: hops to the beginning address of entry i
- cost_i: cost to the beginning address of entry i (not needed if hop count is the only cost metric)
- tstamp_i: time when entry i was created or refreshed
Route Type
Each NTT entry provides:
- an optimal route to address beg_addr_i
- an auxiliary route to the whole address block [beg_addr_i + 1, end_addr_i]
In terms of cost, roughly:
ART_route >= MART_route >= NT_auxiliary_route >= NT_optimal_route
Whenever a route is optimal, the equal sign applies from there on.
Data Forwarding
1. Start.
2. Look for an optimal route in the ARTT (or NTT); if found, use it (optimal route).
3. Otherwise, look for an auxiliary route in the NTT; if found, use it (non-optimal route).
4. Otherwise, use the tree route (non-optimal route).
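The forwarding decision can be sketched as a simple cascade. The lookup functions here are placeholders standing in for the ARTT/NTT searches; their names and signatures are illustrative.

```python
def forward(dst, find_optimal, find_auxiliary, tree_route):
    """Try routes from best to worst: (1) an optimal route from the
    ARTT/NTT, (2) an auxiliary NTT route, (3) the tree route fallback."""
    hop = find_optimal(dst)
    if hop is not None:
        return hop, "optimal"
    hop = find_auxiliary(dst)
    if hop is not None:
        return hop, "non-optimal"
    return tree_route(dst), "tree"  # the tree route always exists
```

The tree route is always available, so the cascade never fails; the earlier steps only improve on it when a cheaper route is known.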
Route Discovery (1)
Route Request (RREQ) and Route Reply (RREP) packet formats:
- RREQ: route_type, dst_beg_addr, src_beg_addr, src_end_addr, max_link_cost, hops_traveled, cost_accumed, TTL
- RREP: route_type, dst_beg_addr, dst_end_addr, src_beg_addr, src_end_addr, hops_traveled, hops_total, cost_accumed, cost_total, TTL
Route Discovery (2)
- route_type: route type (optimal_NT, optimal_ART, non-optimal, unknown)
- dst_beg_addr / dst_end_addr: beginning/ending address of the destination tree branch
- src_beg_addr / src_end_addr: beginning/ending address of the source tree branch
- max_link_cost: maximum link cost along the propagation path of the RREQ (the RREQ is no longer relayed once the max_link_cost recorded in it exceeds a certain threshold, filtering out routes with poor links)
- hops_traveled: hops the RREQ/RREP has traveled
- hops_total: total hops between the source and the destination (no larger than hops_traveled in the RREQ unicast from the source)
- cost_accumed: cost the RREQ/RREP has accumulated
- cost_total: total cost between the source and the destination (no larger than cost_accumed in the RREQ unicast from the source; if links are asymmetric, cost_total is not needed and optimal routes for the two directions should be discovered separately)
- TTL: time to live
Route Discovery (3)
Case 1: the source has an optimal route, so no route discovery is needed.
- Example 1: node F to node I (optimal non-tree route).
- Example 2: node J to node M (optimal tree route).
Route Discovery (4)
Case 2: the source has no optimal route, but the destination has one.
- Example 1: node F to node I. The source unicasts an RREQ, the destination unicasts an RREP back over its existing optimal non-tree route, and bi-directional routes are set up. (This example assumes the destination has an optimal non-tree route; the case where the destination has an optimal tree route is handled even more easily.)
- Example 2: node N to node J. No routing entry is created.
Route Discovery (5)
Case 3: neither the source nor the destination has an optimal route.
Example: node I to node O. The source times out if the unicast RREQ does not get through; it then floods a broadcast RREQ, and the destination replies with an RREP.
Route Repair
- Tree route repair
- Non-tree route repair
Tree Route Repair
Suppose node K fails:
1. Node J broadcasts an RREQ to locate node K, with a limited TTL.
2. All nodes below node K that received the RREQ reply.
3. Node J selects the best path and sends a gratuitous RREP to activate it.
4. If there is no reply, node J increases the TTL and tries again; an RERR is sent to the source if several tries fail.
Multiple branches (e.g., also branch O) can be repaired during this procedure.
Tree Route Repair (cont.)
Data forwarding after the tree route repair uses the new ARTT entries [desIn,21,26,high,13], [desIn,21,26,normal,21], and [srcIn,21,26,normal,17]; node L's parent becomes address 13 (node H). The entry [srcIn,21,26,normal,17] at node H may be ignored if there is another high-priority entry; for example, the path from node M to node C will be M-L-H-C.
Non-Tree Route Repair
Use local repair.
Highlights
- Adaptive address assignment: avoids the "running out of addresses" problem.
- Efficient tree repair: no address re-assignment.
- Meshed ART (MART): shorter paths; robustness.
- Mesh networking (tree routing + non-tree routing): optimal routes; no broadcast (even with limited TTL) if either the source or the destination has an optimal route; no flooding if there is a (non-optimal) route from the source to the destination.
Note that there is no RTS/CTS in the underlying MAC.
Mesh Networking for Low Duty-cycle Networks
Outline
- Basic mechanisms
- Enhancements
- Highlights
Basic Mechanisms
- Tree formation
- Tree addressing
- Tree routing
- Topology server setup
- Beacon scheduling
- Reactive shortcut formation
- Two-address strategy
- Route repair
Tree Formation
- The node initiating the network becomes the PAN coordinator.
- In the network formation stage, all coordinators shall enable their receivers to catch beacon requests from new nodes. No regular beaconing is allowed before beacon scheduling is done.
- New nodes perform an active scan to collect beacons from their neighbors.
- Each new node selects the neighbor with the best path quality to the PAN coordinator as its parent.
Tree Addressing (1)
- The PAN coordinator broadcasts a descendant-statistics message along the tree to all leaf nodes.
- Each leaf node returns a descendant counter to its parent, with the initial counter value set to one.
- After a coordinator collects the descendant counters from all its children, it adds them together and passes the sum to its own parent.
- This process continues until the PAN coordinator receives the descendant counters from all its children.
Tree Addressing (2)
- The PAN coordinator divides the available address block* among its children based on the ratios of their descendant counts.
- Each coordinator, upon receiving the address block assigned by its parent, further divides the block among its own children.
- This process continues until all leaf nodes receive their addresses.
* The address here is the MAC short address.
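The proportional division step can be sketched as follows. This is an illustrative sketch only: the integer rounding and the one-address floor are assumptions, not the draft's exact rule.

```python
def divide_block(beg, end, child_counts):
    """child_counts maps each child to its descendant count (including
    itself). The parent keeps address `beg`; the remaining addresses
    are split among the children in proportion to their counts."""
    total = sum(child_counts.values())
    free = end - beg  # addresses left after the parent's own
    blocks, nxt = {}, beg + 1
    for child, n in child_counts.items():
        size = max(1, free * n // total)  # at least one address per child
        blocks[child] = (nxt, nxt + size - 1)
        nxt += size
    return blocks

# Two branches with 8 and 6 descendants sharing addresses 1..28
blocks = divide_block(0, 28, {"B": 8, "J": 6})
```

With 28 free addresses split 8:6, the children receive blocks [1,16] and [17,28], so a larger subtree automatically gets proportionally more spare addresses.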
Tree Addressing (3)
(figure: example address assignment)
Tree Routing (1)
When a coordinator receives a packet not destined for itself:
- If the destination's address is outside its address block, it forwards the packet to its parent.
- Otherwise, it compares the destination's address with its children's addresses and forwards the packet to child i if the destination's address lies between the addresses of child i and child i+1.
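The rule above can be sketched as a range check against the node's own block and its children's blocks. The structures are illustrative (a real node would derive the child ranges from the child addresses as the slide describes).

```python
def tree_next_hop(my_block, child_blocks, parent, dst):
    """Return the next hop for destination address dst under tree routing."""
    beg, end = my_block
    if not (beg <= dst <= end):
        return parent  # destination outside my block: route upward
    for child, (cbeg, cend) in child_blocks.items():
        if cbeg <= dst <= cend:
            return child  # destination falls in this child's block
    return None  # dst is this node's own address: deliver locally
```

Usage: a root with block (0, 28) and children B (1..16) and J (17..28) forwards a packet for address 23 to J, while a node whose block excludes the destination hands the packet to its parent.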
Tree Routing (2)
Example: node H sends a packet to node F.
- As H does not have any child, it forwards the packet to its parent C.
- C finds that F's address is outside its address block, so it forwards the packet to its parent A.
- F's address falls into A's address block, and A finds that F's address lies between the addresses of children B and C, so A forwards the packet to B.
- B forwards the packet to F.
Topology Server Setup
- Either the PAN coordinator or a resource-sufficient node can serve as the topology server.
- All other nodes can reach the topology server using tree routing.
- Each coordinator shall report its superframe parameters and link states to the topology server.
- Each coordinator may periodically scan its neighbors' beacons and report significant link changes to the server.
- There can be two or more topology servers acting as backups of each other.
Beacon Scheduling
- When it receives the neighbor information of a coordinator, the topology server assigns the coordinator a contention-free beacon time slot.
- Every coordinator gets a beacon time slot that does not overlap with the active periods of its two-hop neighbors.
- This two-hop beacon scheduling ensures that each node can correctly capture all its neighbors' beacons and locate their active periods.
- Once a coordinator receives its beacon time assignment, it can emit regular beacons and operate in beacon-enabled mode.
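Two-hop slot assignment is essentially distance-2 graph coloring. A greedy sketch of what the topology server might do (the draft does not specify the algorithm; this is an assumption for illustration):

```python
def assign_beacon_slots(neighbors):
    """neighbors: {node: [one-hop neighbor, ...]}. Greedily give each
    node the smallest slot unused within its two-hop neighborhood."""
    slots = {}
    for node in sorted(neighbors):  # deterministic order for the sketch
        two_hop = set(neighbors[node])
        for n in neighbors[node]:
            two_hop |= set(neighbors[n])
        two_hop.discard(node)
        used = {slots[n] for n in two_hop if n in slots}
        slot = 0
        while slot in used:
            slot += 1
        slots[node] = slot
    return slots

# A four-node chain: A-B-C-D. A and D are three hops apart, so they
# may safely reuse the same slot.
slots = assign_beacon_slots({"A": ["B"], "B": ["A", "C"],
                             "C": ["B", "D"], "D": ["C"]})
```

Slot reuse beyond two hops is what keeps the schedule compact even in large networks.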
Reactive Shortcut Formation
1. An active source sends a shortcut request (SCRQ) message to the topology server.
2. The topology server calculates the optimal shortcut using Dijkstra's algorithm.
3. The topology server sends a shortcut notification (SCNF) message to the destination.
4. The destination sends a shortcut reply (SCRP) message to the source, establishing routing entries along the shortcut.
5. Relay nodes along the shortcut shall locate and record the active periods of their previous-hop and next-hop neighbors.
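Step 2 is a standard shortest-path computation over the server's link-state graph. A self-contained Dijkstra sketch (the graph encoding and link costs are illustrative assumptions):

```python
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: [(neighbor, link_cost), ...]}. Returns the node
    list of the cheapest src->dst path, or None if dst is unreachable."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

The returned node list would form the shortcut installed by the SCNF/SCRP exchange.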
Two-Address Strategy
Two addresses are used in routing:
- The MAC short address is used in tree routing.
- The NWK address is used in shortcut routing.
MAC short addresses may change as the tree structure varies; NWK addresses shall not change. Each packet shall carry the destination's NWK address, and when the packet is delivered over a tree route, it shall also carry the destination's MAC short address.
Route Repair (1)
- If the topology server is not the PAN coordinator, it maintains the route between itself and the PAN coordinator.
- When a node detects the failure of a non-tree link, or of a tree link to its child:
  - It reports the link change to the topology server. If it cannot reach the topology server directly, it may ask the PAN coordinator to forward the link change.
  - The topology server recalculates a new route between this node and the destination.
Route Repair (2)
When a node cannot reach its parent:
- It initiates tree reconstruction by dismissing all its descendants.
- All affected nodes rejoin the tree and obtain new MAC short addresses (NWK addresses are unchanged).
- The node detecting the link failure reports the link change to the topology server.
- If there is an address-mapping server, the nodes with new MAC short addresses shall update it. Sources holding the old addresses of desired destinations either contact the address server or broadcast an address-search message along the tree to obtain the new addresses.
- Existing shortcuts are not affected, since they depend on NWK addresses.
Enhancements
- Solutions to potential beacon collisions
- Solutions to single point of failure
- Solutions to the large-network case
Solutions to Beacon Collisions (1)
The problem: when node N is absent from the network, nodes C and E are not two-hop neighbors, so the topology server may assign the same beacon time slot to C and E. When node N appears and tries to join the tree, it cannot receive beacons from either C or E.
Solutions to Beacon Collisions (2)
- New nodes may scan not only beacons but also other packets. From the scanning results, they may estimate the active periods of their neighbors and request those neighbors to send ad hoc beacons.
- The topology server may define an irregular beacon period that does not overlap with the active period of any coordinator, so that each coordinator can send irregular beacons during this period in randomly selected superframes.
Solutions to SPF (1)
Backup server:
- The main server shall synchronize its topology knowledge with the backup server.
- The neighbors of the main server record the address of the backup server.
- Once the main server fails, all messages destined for the main server are forwarded to the backup server.
Solutions to SPF (2)
When the PAN coordinator acts as the topology server:
- The neighbors of the PAN coordinator may establish alternative table-driven routes to reach each other, bypassing the PAN coordinator.
- Once the PAN coordinator fails, its neighbors can still maintain tree routing.
Solutions to SPF (3)
Example: the PAN coordinator R has three children A, B, and C, and alternative routes are established among them. When R fails, node A can still forward packets to C's descendants through the alternative route A-B-C.
Solutions to the Large-Network Case (1)
Partial topology server:
- If a topology server cannot accommodate the whole network topology, it may record only the top portion of the tree.
- The topology server can help nodes within the server coverage area (SCA) establish optimal shortcuts to reach each other.
- Nodes outside the SCA have to use tree routes. If a tree route passes through the SCA, the topology server can optimize the part of the route lying within the SCA.
Solutions to the Large-Network Case (2)
- If multiple nodes can act as topology servers, the network can be partitioned into several subtrees, each assigned a local topology server.
- Communications within a subtree are handled by its local topology server.
- A central topology server is also named to manage communications among different subtrees; it can help the roots of the subtrees establish shortcuts connecting them.
Solutions to the Large-Network Case (3)
(figure)
Highlights
- Establishes a self-routing tree that covers the whole network.
- Schedules beacon transmissions at a topology server to avoid beacon collisions.
- Calculates shortcuts between active source-destination pairs at a topology server to avoid flooding-based route discovery.
- Recovers quickly from link/node failures by recalculating new routes or reforming the tree.
A Multicast Routing Protocol Based on Adaptive Robust Tree (ART)
Outline
- General assumptions
- Introduction
- Definitions and message types
- Protocol operations: joining the multicast group; leaving the multicast group; migrating the role of group coordinator; group dismissal; non-member issues; data dissemination
- Highlights
General Assumptions
- Both the MAC address and the logical short address can be used for routing purposes.
- A thin mesh layer runs on top of the MAC layer for routing purposes.
- The necessary fields are available in the mesh-layer packet header.
Introduction
- This multicast routing proposal is based on the Tree-Based Mesh Routing (TBMR) protocol proposed above.
- The Adaptive Robust Tree (ART) algorithm constructs a shared tree that spans all nodes in a WPAN mesh network.
- The goal is to find a minimum subtree of the ART that covers all members of each multicast group.
Definitions
- Group Member (GM): a node participating in a multicast group.
- On-Tree Router (OnTR): a node on the multicast tree that is not a GM.
- Group Coordinator (GC): the top-level GM or OnTR of a specific multicast group (the subtree root). It sets the upper bound of the multicast tree.
- Tree Coordinator (TC): the root of the ART. It keeps information about all multicast groups in the network, so it always knows from which child(ren) it can reach the multicast tree of a specific group.
- Off-Tree Router (OffTR): a direct ancestor of the GC (including the TC). These nodes are not on the multicast tree, but they know from which child they can reach it.
Illustration of Definitions
(figure)
Message Types
- JREQ: Joining REQuest
- JREP: Joining REPly
- LREQ: Leaving REQuest
- LREP: Leaving REPly
- GCUD: GC UpDate
- GDIS: Group DISmissal
Joining the Multicast Group
Generating a JREQ:
- A node that is not an OnTR, OffTR, or the TC sends (unicasts) a JREQ to its parent node.
- An OnTR node simply changes its status from OnTR to GM.
- The TC or an OffTR node changes its status to GC and sends a GCUD packet down the existing multicast tree, indicating that it will be the new GC for this group. The current GC gives up its GC status upon receiving the GCUD packet and drops the packet without propagating it further.
Joining the Multicast Group (cont.)
Receiving the JREQ:
- The parent node first records this child with the group address.
- A GM, OnTR, OffTR, GC, or TC parent then responds with a JREP.
- If the parent node is none of the above, it forwards the JREQ to its own parent; this repeats until the JREQ meets a GM, OnTR, OffTR, GC, or the TC.
Joining the Multicast Group (cont.)
If a JREQ finally reaches the TC, there is no multicast branch toward this node from the TC. The TC checks its record for this group address:
- Record not found: the joining node is the first member of this group. The TC responds with a JREP with the GC flag set, indicating that the joining node will be the GC for this group.
- Record found: the TC has GMs of this group in its other branches. The TC responds with a regular JREP and becomes the GC for this group (it may already be).
Joining the Multicast Group (cont.)
Upon receiving the JREP, the joining node checks whether the GC flag is set:
- Flag set: the node sets itself as the GC for this group, and the joining process ends.
- Flag not set: the route to the multicast tree has been found; the node marks its parent with the group address.
A node MAY retry joining the group up to MaxJoinAttempts times if its previous attempts failed. If a node receives no JREP at all (not even from the TC), it MAY decide to be the GC for this group; in this case, the network may be partitioned.
Joining Case 1: The First GM
(figure)
Joining Case 2: OffTR/TC Joining
- Node B is the original GC.
- Node A (an OffTR) joins the multicast group by simply changing its status from OffTR to GC.
- Node B gives up its GC status upon receiving the GC update message from A.
- The TC follows the same rule when it joins.
Joining Case 3: Normal Cases
Example: nodes A, B, and C join the multicast tree.
Leaving the Multicast Group
A GM that wants to leave a multicast group checks whether it is a leaf node:
- Leaf node: it sends an LREQ to its parent node.
- Non-leaf node: it can only change its multicast status from GM to OnTR; it cannot fully leave the tree.
Upon receiving an LREQ from a child node, the parent responds with an LREP and deletes all multicast information related to this child.
Leaving the Multicast Group (cont.)
- If the departure of a child makes an OnTR parent a leaf node, the parent also sends an LREQ to its own parent to prune itself from the multicast tree.
- If the departure of a GC's direct child leaves the GC with only one direct child linking to the multicast group, the GC SHOULD give up its role as GC if it is not itself a GM (example provided in Slide 22, step 3).
- If the GC finds that all its multicast children have left, it MAY choose to leave the group too. In this case, the GC MUST send an LREQ toward the TC. The TC deletes all information about this multicast group and sends an LREP to the GC. Upon receiving the LREP from the TC, the GC and all OffTRs along the route delete all information about this multicast group.
Leaving the Group
(figure)
GC Role Migration
Case 1: a new GM joins.
When a new GM joins the group, its depth in the cluster tree may be smaller than the current GC's depth. The GC role SHOULD then migrate from the current GC to the common parent of the newly joined GM and the existing multicast tree.
Case 1: New GM Joins (algorithm)
- When an OffTR or the TC receives a JREQ from one of its child nodes (other than the existing one), it first replies with a JREP to that child and then sends a GCUD packet down to its existing child linking to the multicast tree, indicating that the OffTR/TC itself is now the GC for this multicast group.
- The GCUD propagates down the tree until it hits the current GC.
- The current GC gives up its role upon receiving the GCUD packet; it becomes a regular GM or an OnTR depending on its original status.
- All OffTRs change their status to OnTR upon receiving the GCUD packet.
GC Role Migration (cont.)
Case 2: the current GC leaves.
The GC SHOULD give up its role in two cases:
- The GC is not a GM, and the departure of a child leaves it with only one child in its neighbor list for a multicast group.
- The GC is a GM and wants to leave.
The GC SHOULD remain the OffTR for the group even after it leaves.
Case 2: Current GC Leaves (algorithm)
- When the GC detects that it has only one child left in its neighbor list for a multicast group and it is not a GM, or it is a member but wants to leave, it SHOULD change its status to OffTR and send a GCUD packet down the tree to this child.
- Upon receiving the GCUD, the first GM, or the first OnTR with more than one child in this group, SHOULD become the new GC for this group.
- An OnTR with only one child in this group SHOULD become an OffTR instead of the GC; it is then no longer on the tree.
Group Dismissal
- When a group finishes its session, one of the members can issue a GDIS packet (multicast) to all members to indicate the end of the group communication. The higher layer determines which member has the right to issue the GDIS packet.
- Upon receiving the GDIS packet: GMs and OnTRs delete all information related to this group; the GC issues an LREQ toward the TC and follows the GC-leave procedure described in Slide 18, whereby the TC and all OffTRs along the route also delete all information related to this group.
- The GDIS packet reduces the control traffic caused by the per-GM leaving process described earlier.
Data Transmission Mechanism
1. The application at the source issues a multicast request.
2. The source's mesh layer sets the "destination group address" field and puts the multicast group address as the destination address in the mesh header.
3. The source's mesh layer indicates to its MAC layer that the destination address in the MAC header should be the broadcast address.
4. The source's MAC layer broadcasts the packet, setting both the destination PAN ID and the MAC short address to 0xffff.
5. All neighbors receiving this broadcast packet (at the MAC layer) pass it to their mesh layers.
6. By checking the "destination group address" field, the destination address field, and its own multicast status, each neighbor's mesh layer determines whether and how to process the multicast packet (see the next slide).
Data Transmission Mechanism: Upon Receiving a Multicast Packet
- GM: if the previous hop of the packet is one of my on-tree neighbors (parent/child), pass the packet to the upper layer, and rebroadcast it if I have on-tree neighbors other than the previous hop (this check prevents a leaf node from rebroadcasting at the MAC layer). Otherwise (the packet was received from a sibling node), discard it.
- OnTR: same as GM, except the packet is not passed to the upper layer.
Non-member Issues A non-GM can send data packets to the multicast group but cannot receive packets from the group. A non-GM can only UNICAST (at the MAC layer) packets toward the TC if it has no information about the multicast tree. A non-GM can UNICAST (at the MAC layer) packets toward the multicast tree if it is an OffTR or the TC. A non-GM can behave like a GM when sending its own data to the multicast group if it is an OnTR. When a non-GM receives a packet (unicast at the MAC layer) with a multicast address as the destination address, and it is neither an OnTR/OffTR nor the TC, it has to forward (unicast at the MAC layer) the packet to its parent.
Data Transmission Mechanism (cont.) Upon receiving a multicast packet sent by a non-GM:
Neighbor status OffTR/TC: If the previous hop is NOT the child who links me to the multicast tree, unicast the packet down to the multicast tree; in this case, a non-member node is sending a packet to the group. Otherwise, discard the packet; it is out of the transmission scope. This happens when the GC broadcasts the packet and its direct parent (the immediate OffTR or the TC) receives it.
None of the above (not GM/OnTR/OffTR/TC): Unicast the packet to the parent. In this case, a non-member node is sending a packet to the group, and the packet can only follow tree links toward the TC. Luckily, some packets may hit an OffTR and be propagated down the multicast tree without going all the way up to the TC.
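The forwarding rules above for packets unicast by a non-GM can be sketched like this (names are illustrative assumptions):

```python
# Sketch of the non-member forwarding rules; names are illustrative.
def forward_non_member_packet(status, prev_hop, linking_child):
    if status in ("OffTR", "TC"):
        if prev_hop != linking_child:
            return "unicast_down_tree"  # non-member data heading to the group
        return "discard"                # out of the transmission scope
    # Neither GM/OnTR/OffTR/TC: push the packet up the tree; it may hit an
    # OffTR before reaching the TC and be propagated down from there.
    return "unicast_to_parent"
```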
Performance Enhancement To reduce the processing overhead incurred by broadcast packets, a node could broadcast/rebroadcast a data packet only if it has more than two neighbors on the multicast tree; otherwise, the multicast packet is unicast at the MAC layer. Although the basic algorithm is to broadcast (at the MAC layer) data packets along the multicast tree, it is also possible to allow sibling GMs/OnTRs to propagate packets to expedite delivery. In this case, nodes need to maintain a Multicast Transaction Table (which records "GroupAddr-SrcAddr-SeqNb") to suppress duplicate packets.
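A minimal sketch of the Multicast Transaction Table mentioned above; the class name and method are assumptions, only the recorded "GroupAddr-SrcAddr-SeqNb" tuple comes from the slide:

```python
# Minimal Multicast Transaction Table: records (GroupAddr, SrcAddr, SeqNb)
# so duplicates propagated via sibling nodes are dropped.
class MulticastTransactionTable:
    def __init__(self):
        self._seen = set()

    def is_duplicate(self, group_addr, src_addr, seq_nb):
        key = (group_addr, src_addr, seq_nb)
        if key in self._seen:
            return True          # already handled: drop the duplicate
        self._seen.add(key)      # first sighting: record it
        return False

mtt = MulticastTransactionTable()
```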
Highlights Utilizing the underlying ART to construct shared multicast trees for different multicast groups. Low control overhead: no control traffic is broadcast, and in most cases the tree coordinator is not involved in transmitting control and data messages. The introduction of the Group Coordinator and the simple joining/leaving algorithms guarantees the multicast sub-tree is minimal at any time. Simple and timely data propagation: data packets do not need to go to the ZigBee coordinator first. Non-members can also send packets to members.
Key Pre-distribution
Wireless Mesh Network [Figure: a backbone (responsible for security services) and mesh clients; labels show mesh points, inter-backbone connections, and the wireless radio covering of the clients and of the backbone.]
Wireless Mesh Network [Figure: link types in the mesh, with infrastructure meshing links, peer-to-peer links, and client meshing links.]
Key Pre-Distribution [Figure: a Center pre-distributes keys to nodes 1-6.]
[Figure: each of the nodes 1-6 receives a subset of the pre-distributed keys K1-K10.]
Common Key Computation [Figure: nodes 1-6; nodes 3 and 4 compute K34 = Hash(K5 || K10).] No other node can compute K34!
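The pairwise common-key computation can be sketched as below. The slides write K34 = Hash(K5 || K10) without naming the hash function, so SHA-256 here is an assumption:

```python
import hashlib

# Common key = hash of the concatenation of the pre-distributed keys that
# only this pair of nodes holds in common. SHA-256 is assumed; the slides
# do not fix a hash function.
def common_key(*shared_keys):
    return hashlib.sha256(b"".join(shared_keys)).digest()

K5  = b"\x05" * 16   # example pre-distributed keys held by both node 3 and 4
K10 = b"\x0a" * 16
K34 = common_key(K5, K10)   # no other node holds both K5 and K10
```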
WMN and Key Pre-Distribution [Figure: mesh clients 1-5 above a backbone; nodes 4 and 5 need a secure connection, and each independently computes the common key K45 = H( ) from the keys they share.]
[Figure: node 4 sends EK45(message); both endpoints hold K45 = H( ).]
WMN and Key Pre-Distribution [Figure: EK45(message) reaches node 5, which decrypts it with K45 = H( ).]
Node's Exclusion [Figure: nodes 1-5; one node is marked "node to exclude" and the keys it holds are marked "keys to renew".]
Node's Exclusion [Figure: a refresh originator whose radio covering reaches the remaining nodes initiates the key refresh.]
Node's Exclusion [Figure: the refresh originator distributes renewed keys encrypted under a session key for refresh, E( ); the excluded node has no new keys.]
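The exclusion step can be sketched as below. The slide only shows renewed keys being sent encrypted under a session key; the derivation rule and names here are assumptions for illustration:

```python
import hashlib

# Hypothetical refresh rule: renew each affected key by hashing it with a
# session key the excluded node never receives. The actual derivation is not
# specified on the slide; this is an assumption.
def refresh_keys(keys_to_renew, session_key):
    return {kid: hashlib.sha256(old + session_key).digest()
            for kid, old in keys_to_renew.items()}

old_keys = {"K5": b"\x05" * 16, "K10": b"\x0a" * 16}
new_keys = refresh_keys(old_keys, session_key=b"\xaa" * 16)
# The excluded node knows old_keys but not session_key, so it cannot
# compute any entry of new_keys.
```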
New Node Association [Figure: new node 6 joins nodes 1-5; labels mark the new node and the keys it must acquire.]
New Node Association [Figure: the new node acquires its keys over a location-limited channel.]
New Node Association [Figure: the new node is a mesh client now.]
Benefits of using KPDS in Distributed Key Management Any link between nodes is secured through the common key of the pair. No need for on-line servers. Simple node exclusion and association. Self-healing through key refresh. Robustness due to the distributed solution. Simple implementation and low resource requirements.
Highlights (1) Meshed Tree: Adaptive address assignment, avoiding the "running out of addresses" problem. Efficient tree repair: no address re-assignment. Meshed ART (MART): shorter paths and robustness. Mesh networking (tree routing + non-tree routing): optimal routes; no broadcast (even with limited TTL) if either the source or the destination has an optimal route; no flooding if there is a (non-optimal) route from the source to the destination.
Highlights (2) Topology Server-Based Meshed Tree: Schedule beacon transmissions at a topology server to avoid beacon collisions. Calculate shortcuts between active source-destination pairs at a topology server to avoid flooding-based route discovery. Quick recovery from link/node failures by recalculating new routes or reforming the tree.
Highlights (3) Tree-Based Multicasting: Low control overhead: no control traffic is broadcast, and in most cases the tree coordinator is not involved in transmitting control and data messages. The introduction of the Group Coordinator and the simple joining/leaving algorithms guarantees the multicast sub-tree is minimal at any time. Simple and timely data propagation: data packets do not need to go to the ZigBee coordinator first. Non-members can also send packets to members.
Highlights (4) Key Pre-distribution: Any link between nodes is secured through the common key of the pair. No need for on-line servers. Simple node exclusion and association. Self-healing through key refresh. Robustness due to the distributed solution. Simple implementation and low resource requirements. Extensible to IEEE MAC and PHY.