Reinforcing Reachable Routes
Muralidhar H Sortur, Gujar Sujit Prakash, Debmalya Panigrahi
4/9/2005, Computer Communication Networks
Agenda
- Routing
- Multi-path routing
- Reachability routing
- Reinforcement learning in routing
- Q-routing
- Ant-based routing
- Modified ant-based routing
- Experimental results
- Suggestions for improvement
Routing
- Objectives
  - Minimize delay
  - Maximize throughput
- Obvious solution: shortest (single) path routing!
  - Shortest with respect to some cost
  - Static costs
  - Dynamic costs
Single Path Routing
- Is this always a good choice? No!
- Imposes a routing tree on the available graph structure
- Not capable of meeting multiple performance objectives
- Severe oscillations in dynamic cost settings
- Failure of the optimal link is costly
Multi-Path Routing
- How to overcome these shortcomings? Multi-path routing
- Multiple active paths between nodes
- Loop detection required
- Connection oriented vs. connectionless
- Single metric vs. multi metric
Reachability Routing
- All loop-free paths between source and destination are used
- Exploration and exploitation
- Hard reachability
  - All and only loop-free paths used
  - Does not have a practically viable solution
- Soft reachability
  - All loop-free paths used
Routing Issues
[Quadrant diagram (quadrants 1-4) contrasting probabilistic vs. deterministic routing and constructive vs. destructive reinforcement learning, annotated: "No exploration: expensive initial data collection"; "Most current network routing protocols"; "Costly to adopt for multi-path routing"; "Not viable since loops are catastrophic".]
Reinforcement Learning
- Populating routing tables is viewed as a problem of learning the entries
- Agents = routers
- Actions = exploration by control packets
- Reinforcement = probabilities tweaked according to the response from the environment
- Concurrent learning at all routers: multi-agent learning
- Probabilistic nature of routing table entries
- Suitable for both single-path and multi-path routing
Q-Routing [Boyan-Littman, '94]
- One of the first RL algorithms for routing
- Each router x maintains Q_x(d, i_s): a metric denoting the estimated time to deliver a packet to destination d via interface i_s
- Deterministic or probabilistic routing is possible
Q-Routing: Learning Rule
- x forwards the packet to the next best router y on interface i_s
- On receiving the packet, y sends x its best (minimum) value of Q_y for the packet's destination d
- x then updates Q_x as:

  Q_x(d, i_s) ← Q_x(d, i_s) + l * [ (min_k Q_y(d, i_k) + t) - Q_x(d, i_s) ]

  where l = learning rate, and t = queuing time in x + transmission time from x to y
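A minimal sketch of this update in Python, assuming a dict-of-dicts Q table indexed as Q[router][destination][interface]; the names (q_routing_update, learning_rate) and the sample values are illustrative, not from the slides.

```python
def q_routing_update(Q, x, y, dest, iface, t, learning_rate=0.5):
    """Update router x's estimate for reaching `dest` via `iface`, which leads to neighbor y.

    t is the locally measured delay: queuing time at x plus transmission time x -> y.
    y reports its best (minimum) remaining-delivery-time estimate for dest.
    """
    best_from_y = min(Q[y][dest].values())   # y's best estimate toward dest
    target = best_from_y + t                 # estimated total time via y
    Q[x][dest][iface] += learning_rate * (target - Q[x][dest][iface])

# Example: router 'x' sent a packet for 'd' out of interface 0 toward neighbor 'y'
Q = {
    'x': {'d': {0: 5.0, 1: 7.0}},
    'y': {'d': {0: 3.0, 1: 4.5}},
}
q_routing_update(Q, 'x', 'y', 'd', iface=0, t=1.2)
print(Q['x']['d'][0])   # moves toward 3.0 + 1.2 = 4.2
```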
Q-Routing
- Basically a relaxed version of Bellman-Ford!
- Some serious shortcomings:
  - Convergence to the shortest path is not guaranteed
  - Exploration happens only along the currently exploited path
  - Improvement in a sub-optimal path goes unnoticed
  - Routing overhead is proportional to the number of data packets
Ant-based Routing [Subramanian et al, ’97]
- Exploration and exploitation are decoupled
- Exploration by small control packets called ants, generated by hosts to randomly chosen destinations
- Exploitation by data packets
Ant-based Routing: Backward Learning
- On receiving an ant with accumulated cost c on interface i_k, a router updates its routing probabilities (the entries for the ant's source node) as:

  p_k ← (p_k + Δp_k) / (1 + Δp_k)
  p_j ← p_j / (1 + Δp_k),  for all j ≠ k

- where Δp_k is inversely proportional to f(c), f(c) being a non-decreasing function of c
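A minimal sketch of this backward-learning update, assuming the common choice Δp = k / f(c); the constant k, the identity choice of f, and the function name reinforce are assumptions, not fixed by the slide.

```python
def reinforce(probs, iface, cost, k=1.0, f=lambda c: c):
    """Reinforce `iface` (where the ant arrived) in the probability table `probs`.

    probs maps interface -> probability for one destination; entries sum to 1.
    """
    delta = k / f(cost)                   # smaller reinforcement for costlier paths
    for j in probs:
        if j == iface:
            probs[j] = (probs[j] + delta) / (1.0 + delta)   # boost the arriving interface
        else:
            probs[j] = probs[j] / (1.0 + delta)             # scale down the others

# Example: three outgoing interfaces; an ant arrives on interface 1 with cost 4
probs = {0: 0.5, 1: 0.3, 2: 0.2}
reinforce(probs, iface=1, cost=4.0)
print(probs, sum(probs.values()))   # probabilities still sum to 1
```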
Ant-based Routing: Routing of Ants
- Regular ants
  - Routed probabilistically according to the current routing tables
  - Converge deterministically to the shortest paths in the network
  - Single-path routing in the long run
- Uniform ants
  - Choose the next hop uniformly at random
  - Probabilities are partitioned according to costs in the long run
  - Multi-path routing
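A minimal sketch contrasting the two forwarding policies; the function names are illustrative.

```python
import random

def next_hop_regular(probs):
    """Regular ant: follow the learned probabilities (probs: interface -> probability)."""
    interfaces = list(probs)
    weights = [probs[i] for i in interfaces]
    return random.choices(interfaces, weights=weights, k=1)[0]

def next_hop_uniform(probs):
    """Uniform ant: ignore the learned probabilities and pick any interface equally."""
    return random.choice(list(probs))

probs = {0: 0.7, 1: 0.2, 2: 0.1}
print(next_hop_regular(probs))   # biased toward interface 0
print(next_hop_uniform(probs))   # each interface equally likely
```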
Ant-based Routing
- Uniform ants have a tendency to choose decision-free paths
- A costly loop ruins a high-quality loop-free sub-path
- Ants should have selective amnesia:
  - Behave as uniform ants for multi-path routing
  - Behave as regular ants for suppressing loops
Modified Ant-based Routing [Varadarajan et al, ’03]
- Introduce a statistics table at each node
- It remembers, for each destination, the number of ants generated and the number that returned without delivery
- Discard, for each destination, all interfaces that had a 100% delivery failure (see the sketch below)
- Effectively tries to detect loops and discard the interfaces leading to them
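A minimal sketch of such a statistics table, assuming counts are kept per (destination, interface) pair; the class and method names are illustrative.

```python
from collections import defaultdict

class StatsTable:
    def __init__(self):
        # (destination, interface) -> [ants_generated, ants_returned_undelivered]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, dest, iface, delivered):
        entry = self.counts[(dest, iface)]
        entry[0] += 1
        if not delivered:
            entry[1] += 1

    def failed_interfaces(self, dest):
        """Interfaces with a 100% delivery failure for `dest`: candidates to discard."""
        return [iface for (d, iface), (gen, fail) in self.counts.items()
                if d == dest and gen > 0 and fail == gen]

stats = StatsTable()
stats.record('H5', iface=2, delivered=False)
stats.record('H5', iface=2, delivered=False)
stats.record('H5', iface=1, delivered=True)
print(stats.failed_interfaces('H5'))   # [2] -> interface 2 likely leads into a loop
```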
Modified Ant-based Routing
- Statistics table entries in a router are updated only for ants generated by that router itself
Modified Ant-based Routing
- Ants are generated not only by hosts but by all routers
- Routing table entries are updated only at the destination rather than at all intermediate nodes
- Nodes that present interfaces to more destinations experience greater reinforcement
- No send-back (except for leaf nodes)
Implementation Details
- Static cost: takes into account only the fixed costs associated with links
- Dynamic cost: considers queuing delay at intermediate nodes along the path
- A moving window of size 10 is used for the statistics table (sketched below)
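A minimal sketch of the moving-window variant of the statistics table: only the outcomes of the last 10 ants per (destination, interface) are kept, so a previously discarded interface can be rehabilitated once fresh ants get through. The window size of 10 matches the slide; everything else is an assumption.

```python
from collections import defaultdict, deque

class WindowedStats:
    def __init__(self, window=10):
        self.window = window
        # (dest, iface) -> recent delivered/undelivered flags (newest kept, oldest dropped)
        self.outcomes = defaultdict(lambda: deque(maxlen=window))

    def record(self, dest, iface, delivered):
        self.outcomes[(dest, iface)].append(delivered)

    def should_discard(self, dest, iface):
        recent = self.outcomes[(dest, iface)]
        # Discard only when the window is full and every recent ant failed.
        return len(recent) == self.window and not any(recent)
```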
Network under consideration
Static cost route learning. [Figure: example network with hosts H0 and H5, routers R1-R4, and links with static costs L0 = 1, L1 = 2, L2 = 7, L3 = 3.5, L4 = 9, L5 = 4.]
Result for static cost route learning
Dynamic cost route learning
Assumptions:
- Only hosts generate data traffic
- Steady-state queue lengths denote the cost of an interface
- Jackson's theorem for open networks of queues and the traffic equations are used
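A minimal sketch of the Jackson-network computation: solve the traffic equations lambda = r + P^T * lambda, then treat each router as an independent M/M/1 queue. The routing matrix P below is illustrative, not the topology from the slides; the service rates and external arrivals follow the figure on the next slide.

```python
import numpy as np

mu = np.array([5.0, 5.0, 5.0, 5.0])     # service rates of R1..R4
r = np.array([1.0, 0.0, 0.0, 1.0])      # external arrival rates (injected at R1 and R4)

# P[i, j] = probability a packet leaving router i is forwarded to router j
# (rows may sum to < 1; the remainder leaves the network). Assumed values.
P = np.array([
    [0.0, 0.6, 0.4, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0],
])

# Traffic equations: lambda = r + P^T lambda  =>  (I - P^T) lambda = r
lam = np.linalg.solve(np.eye(4) - P.T, r)

rho = lam / mu                           # utilization of each router
mean_queue = rho / (1.0 - rho)           # mean number in an M/M/1 system
print(lam, mean_queue)
```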
Network under consideration
Dynamic costs: equivalent Jackson open network. [Figure: routers R1-R4, each with service rate μ = 5; external arrival rates r1 = 1 and r4 = 1; links L1-L4.]
Result for dynamic cost route learning
Some suggestions
- Introduce a window size for the statistics table
  - Adapts to situations where links can go down
  - False alarms are possible; however, the protocol is capable of recovering from a false alarm
  - This has actually been tried in the implementation and gave satisfactory results
Some suggestions
- Ants carry path information along with the accumulated cost
- Set a maximum path-length threshold; ants traversing longer paths are discarded (see the sketch below)
- Uniform ants are used
- Handling the multi-path routing taxonomy (connection oriented / connectionless; single metric / multi metric):
  - Single metric: handled by the paper
  - Multi metric: almost the same as the single-metric case; ants carry multiple costs and regular ants are used
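A minimal sketch of the suggested loop/length control: each ant records the nodes it has visited along with the accumulated cost, and is dropped once it revisits a node or exceeds a maximum path length. The class, the threshold value, and the node names are illustrative assumptions.

```python
MAX_PATH_LEN = 15   # assumed threshold

class Ant:
    def __init__(self, source, dest):
        self.source, self.dest = source, dest
        self.cost = 0.0
        self.path = [source]          # path information carried by the ant

    def move(self, node, link_cost):
        """Return False if the ant should be discarded instead of forwarded."""
        if node in self.path:                 # revisiting a node: loop detected
            return False
        if len(self.path) >= MAX_PATH_LEN:    # path too long: discard
            return False
        self.path.append(node)
        self.cost += link_cost
        return True

ant = Ant('H0', 'H5')
print(ant.move('R1', 1.0), ant.move('R2', 2.0), ant.move('R1', 2.0))  # True True False
```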
References
- Srinidhi Varadarajan, Naren Ramakrishnan, and Muthukumar Thirunavukkarasu. Reinforcing Reachable Routes. Computer Networks, Vol. 43, No. 3, Oct 2003.
- Devika Subramanian, Johnny Chen, and Peter Druschel. Ants and Reinforcement Learning: A Case Study in Routing in Dynamic Networks. In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI '97), Morgan Kaufmann, San Francisco, CA, 1997.
- Justin A. Boyan and Michael L. Littman. Packet Routing in Dynamically Changing Networks: A Reinforcement Learning Approach. In Advances in Neural Information Processing Systems 6 (NIPS 6), Morgan Kaufmann, San Francisco, CA, 1994.
THANK YOU