Managing the Performance of Multiple-Radio Multihop ESS Mesh Networks
March 13, 2004
Francis daCosta, Meshdynamics
fdacosta@meshdynamics.com, (408) 373-7700
1 Radio Ad Hoc Mesh Networks
- Severe bandwidth constraints with each hop
- Vulnerable to both inter- and intra-channel interference
- Client mobility adds to the complexity of the problem
- Not an enterprise-class solution
1 Radio Ad Hoc vs. 2 Radio Infrastructure
[Diagram: a one-radio chain (AP 0,1 → AP 1,1 → AP 2,1 serving STA1, STA2) loses bandwidth at each hop (1/2, then 1/4); a two-radio chain (AP 0,1 → RL 1,1 → RL 2,1) preserves bandwidth at all hops, with a routing path and an alternate routing path back to the Ethernet link.]

ONE RADIO SYSTEM
- Bandwidth halved at each level
- Routing paths are fixed
- Not redundant or re-configurable
- Not scalable or self-managing

TWO RADIO SYSTEM
- Bandwidth conserved at each level
- Flexible routing paths
- Redundant and re-configurable
- Scalable and self-managing
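A minimal sketch of the halving effect, assuming an ideal shared half-duplex channel (real-world interference makes the one-radio case worse than this model):

```python
# Illustrative throughput model after h backhaul hops. The 2x-per-hop
# loss assumes a single radio relaying on a shared channel.

def one_radio_throughput(link_rate: float, hops: int) -> float:
    """One radio relays on a shared channel: capacity halves per hop."""
    return link_rate / (2 ** hops)

def two_radio_throughput(link_rate: float, hops: int) -> float:
    """Separate backhaul-up and backhaul-down radios: capacity holds."""
    return link_rate

for h in range(4):
    print(f"hops={h}  1-radio={one_radio_throughput(54.0, h):5.2f} Mbps  "
          f"2-radio={two_radio_throughput(54.0, h):5.2f} Mbps")
```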
A Two Radio Infrastructure Mesh
[Diagram: a ROOT node with RELAY nodes fanning out to stations ST1–ST9; each node uses one radio for backhaul up and one for backhaul down plus access-point service to clients.]
Supports Multiple Routing Paths
[Diagram: the same ROOT/RELAY tree with an alternate path available between relays.]
- Ensures the same available bandwidth at different levels
- Self-managed performance
- Dynamic load balancing
Meshed ESS is the Wireless Equivalent of an 802.1d Switch Stack
Hub vs. Switch Topologies: Switch requires similar radios.
Mesh Topologies
[Diagram: Infrastructure, Hybrid, and Ad Hoc topologies.]
ACG Product Offering
1. 60Kb control-layer embedded software upgrade to devices
2. Communications interface with NMS: Monitor Network, Manage Settings
Monitor Network
Manage Settings
Low Latency Network Configuration
Backhaul throughput = 50, Backhaul latency = 1.0
- Low latency for all nodes
- Poor throughput for distant nodes
- Throughput is sacrificed for latency
Signal strength varies inversely with distance, and devices connect based on best local signal strength, not best throughput. In the current implementation of the WLAN network depicted, this limits overall network throughput (= 50.0).
High Throughput Network Configuration
Backhaul throughput = 64, Backhaul latency = 1.6
- Distant nodes connect through nearer nodes
- More hops required for distant nodes
- Now latency is sacrificed for throughput
The ACG software control layer enables each AP to make routing decisions that increase the overall throughput (a 28% increase, to 64). The higher throughput is achieved at the cost of higher latency: the average number of hops increases to 1.6.
Tuning the Network Between Extremes
Backhaul throughput = 59, Backhaul latency = 1.2
- The NMS can "tune" the network between extremes
- A directive is sent to the control layer in each device
- Devices change associations per the config setting
The ACG software layer in each AP dynamically reconfigures the network to satisfy latency objectives while minimizing throughput degradation.
At a 37% Tradeoff: Backhaul More Latency-Centric
Backhaul throughput = 55, Backhaul latency = 1.1
- The NMS can "tune" the network between extremes
- A directive is sent to the control layer in each device
- Devices change associations per the config setting
A further incentive (37) reduces the latency to 1.1.
At a 49% Tradeoff: Backhaul at the Low Latency Setting
Backhaul throughput = 50, Backhaul latency = 1.0
At 49, the cost of connecting to a parent further removed from the root (in terms of number of hops) is too high. Thus, the system can be tuned to anything between low latency and high throughput.
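A minimal sketch of how such a tunable tradeoff might be expressed. The weighting formula, parameter names, and link values below are assumptions for illustration, not the actual ACG cost function:

```python
# Hypothetical cost-to-connect model: one knob (0..100) trades hop
# count (latency) against link quality (throughput). Illustrative only.

def connect_cost(hops: int, link_quality: float, tradeoff_pct: float) -> float:
    """Lower cost = better parent. tradeoff_pct=0 favors throughput;
    tradeoff_pct=100 favors latency (fewer hops)."""
    w = tradeoff_pct / 100.0
    latency_term = hops                    # each extra hop adds delay
    throughput_term = 1.0 / link_quality   # weak links reduce throughput
    return w * latency_term + (1.0 - w) * throughput_term

# A child attaches to the parent with the lowest cost at the setting.
parents = {"near_root_weak_link": (1, 0.4), "far_strong_link": (3, 0.9)}
for setting in (0, 37, 49, 100):
    best = min(parents, key=lambda p: connect_cost(*parents[p], setting))
    print(f"tradeoff={setting:3d}%  ->  attach to {best}")
```

With these assumed numbers, the multi-hop, high-quality parent wins at low settings, and by 49 the extra hops become too costly, mirroring the behavior described above.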
Dynamic Load Balancing: Congested Backhaul, Congested Node
The small-footprint control layer also provides:
- Dynamic load balancing
- Automatic discovery, self-healing
- Switching, automatic channel allocation
As the load on a node increases, it encourages its children to seek alternate routes by progressively increasing its cost-to-connect value (shown rising from 7 to 8 in the slide sequence); see the sketch below.
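A rough sketch of this behavior. The load-to-cost mapping and node names are assumptions for illustration, not the actual algorithm:

```python
# Hypothetical load-balancing sketch: a congested parent advertises a
# higher connect cost, and children re-evaluate their attachment.

BASE_COST = 5

def advertised_cost(base: int, load_pct: float) -> int:
    """Raise the advertised cost as the node's load grows."""
    return base + int(load_pct // 25)   # +1 per 25% load (assumed step)

def choose_parent(costs: dict[str, int]) -> str:
    """A child simply attaches to the lowest-cost reachable parent."""
    return min(costs, key=costs.get)

costs = {"relay_A": advertised_cost(BASE_COST, 60),   # loaded -> cost 7
         "relay_B": advertised_cost(BASE_COST, 20)}   # light  -> cost 5
print(choose_parent(costs))                           # relay_B

costs["relay_A"] = advertised_cost(BASE_COST, 80)     # congestion -> 8
print(choose_parent(costs))                           # still relay_B
```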
Multiple Roots for More Redundancy and Throughput
The small-footprint control layer also provides:
- Dynamic load balancing
- Automatic discovery, self-healing
- Switching, automatic channel allocation
The system supports multiple roots for redundancy and increased bandwidth.
The Backhaul Is Self-Healing
[Diagram: two nodes turned off; the mesh re-forms around them.]
The small-footprint control layer also provides:
- Dynamic load balancing
- Automatic discovery, self-healing
- Switching, automatic channel allocation
Nodes self-configure in case of node failure; the system is inherently redundant and fail-safe.
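A minimal sketch of the failover step of self-healing. Heartbeat-based liveness and the node names are assumptions; the deck does not specify the mechanism:

```python
# Sketch: a child falls back to an alternate parent when its current
# parent stops responding.

def select_parent(candidates: dict[str, bool], preferred: str) -> str:
    """Stay with the preferred parent while it is alive; otherwise
    fail over to any other reachable candidate."""
    if candidates.get(preferred):
        return preferred
    alive = [n for n, up in candidates.items() if up]
    if not alive:
        raise RuntimeError("no reachable parent; rescan channels")
    return alive[0]

reachable = {"relay_A": True, "relay_B": True}
print(select_parent(reachable, "relay_A"))  # relay_A

reachable["relay_A"] = False                # relay_A turned off
print(select_parent(reachable, "relay_A"))  # fails over to relay_B
```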
CoS-Based Data Flow: Buffer Depletion with Weighted Fair Queuing – QoS ON
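A sketch of the queuing discipline itself, using a weighted round-robin approximation of weighted fair queuing. The classes and weights are assumptions for illustration; the actual CoS mapping is not specified in this deck:

```python
# Each service round, a class may send packets in proportion to its
# weight, so high-priority buffers deplete faster while low-priority
# traffic still makes progress.

from collections import deque

queues = {"voice": deque(range(6)), "video": deque(range(6)), "data": deque(range(6))}
weights = {"voice": 3, "video": 2, "data": 1}

def wfq_round(queues, weights):
    """One service round: dequeue up to `weight` packets per class."""
    sent = []
    for cls, q in queues.items():
        for _ in range(weights[cls]):
            if q:
                sent.append((cls, q.popleft()))
    return sent

rnd = 0
while any(queues.values()):
    rnd += 1
    served = wfq_round(queues, weights)
    print(f"round {rnd}: {[c for c, _ in served]}")
```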
Implementation of Algorithms on Hardware
Network Monitor
Software Integration
Demonstration on hardware available on request.
fdacosta@meshdynamics.com
Meshdynamics, 1299 Parkmoor Ave, San Jose, CA 95126
Phone: (408) 373-7700