Application of Reinforcement Learning in Network Routing
By Chaopin Zhu
Machine Learning
- Supervised learning
- Unsupervised learning
- Reinforcement learning
Supervised Learning
- Feature: learning with a teacher
- Phases
  - Training phase
  - Testing phase
- Applications
  - Pattern recognition
  - Function approximation
Unsupervised Learning
- Feature: learning without a teacher
- Applications
  - Feature extraction
  - Other preprocessing
Reinforcement Learning
- Feature: learning with a critic
- Applications
  - Optimization
  - Function approximation
Elements of Reinforcement Learning
- Agent
- Environment
- Policy
- Reward function
- Value function
- Model of environment (optional)
Reinforcement Learning Problem
- At each step the agent observes the state of the environment, selects an action according to its policy, and receives a scalar reward; the goal is to maximize the cumulative reward over time.
Markov Decision Process (MDP)
- Definition: a reinforcement learning task that satisfies the Markov property
- Transition probabilities (see below)
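For reference, the standard transition-probability and expected-reward definitions from [1], written in the state notation x used in these slides:

P^{a}_{xx'} = \Pr\{\, x_{t+1} = x' \mid x_t = x,\ a_t = a \,\}

R^{a}_{xx'} = E\{\, r_{t+1} \mid x_t = x,\ a_t = a,\ x_{t+1} = x' \,\}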
An Example of MDP
Markov Decision Process (cont.)
- Parameters
- Value functions (see below)
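The standard state-value and action-value definitions from [1], for a policy π and discount rate γ:

V^{\pi}(x) = E_{\pi}\Big\{ \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} \;\Big|\; x_t = x \Big\}

Q^{\pi}(x,a) = E_{\pi}\Big\{ \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} \;\Big|\; x_t = x,\ a_t = a \Big\}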
Elementary Methods for the Reinforcement Learning Problem
- Dynamic programming
- Monte Carlo methods
- Temporal-difference learning
Bellman’s Equations
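Following [1], the Bellman equation for V^π and the Bellman optimality equation, in the same notation as above:

V^{\pi}(x) = \sum_{a} \pi(x,a) \sum_{x'} P^{a}_{xx'} \big[ R^{a}_{xx'} + \gamma V^{\pi}(x') \big]

V^{*}(x) = \max_{a} \sum_{x'} P^{a}_{xx'} \big[ R^{a}_{xx'} + \gamma V^{*}(x') \big]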
Dynamic Programming Methods
- Policy evaluation
- Policy improvement
Dynamic Programming (cont.)
- Interleave the two steps (E: policy evaluation, I: policy improvement):
  π0 -E-> V^π0 -I-> π1 -E-> V^π1 -I-> ... -I-> π*
- Policy iteration
- Value iteration
A sketch of policy iteration follows below.
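A minimal policy-iteration sketch in Python; the transition table P, the toy two-state MDP, and the constants GAMMA and THETA are illustrative assumptions, not from the slides:

```python
# Policy iteration on a small MDP given as tables.
# P[x][a] is a list of (prob, next_state, reward) triples.
GAMMA = 0.9
THETA = 1e-8

# Toy MDP: states 0, 1; actions 0, 1.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
}

def policy_evaluation(policy, V):
    """Iterative policy evaluation (the 'E' step)."""
    while True:
        delta = 0.0
        for x in P:
            v = sum(p * (r + GAMMA * V[nx]) for p, nx, r in P[x][policy[x]])
            delta = max(delta, abs(v - V[x]))
            V[x] = v
        if delta < THETA:
            return V

def policy_improvement(V):
    """Greedy policy improvement (the 'I' step)."""
    return {x: max(P[x], key=lambda a: sum(p * (r + GAMMA * V[nx])
                                           for p, nx, r in P[x][a]))
            for x in P}

def policy_iteration():
    policy = {x: 0 for x in P}
    V = {x: 0.0 for x in P}
    while True:
        V = policy_evaluation(policy, V)
        improved = policy_improvement(V)
        if improved == policy:   # policy stable -> optimal
            return policy, V
        policy = improved

print(policy_iteration())
```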
Monte Carlo Methods
- Features
  - Learn from experience
  - Do not need complete transition probabilities
- Idea
  - Partition experience into episodes
  - Average the sample returns
  - Update on an episode-by-episode basis (see the sketch below)
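A first-visit Monte Carlo evaluation sketch; the toy random-walk chain and the helper names are illustrative assumptions, used only to show averaging sample returns episode by episode:

```python
import random

# States 0..4; episodes end at 0 (return 0) or 4 (return 1).
GAMMA = 1.0

def generate_episode(start=2):
    """Sample one episode as a list of (state, reward) pairs."""
    x, episode = start, []
    while x not in (0, 4):
        nx = x + random.choice((-1, 1))
        r = 1.0 if nx == 4 else 0.0
        episode.append((x, r))
        x = nx
    return episode

def mc_evaluate(num_episodes=5000):
    returns = {x: [] for x in (1, 2, 3)}
    for _ in range(num_episodes):
        episode = generate_episode()
        G, first_return = 0.0, {}
        # Walk backwards accumulating the return; the last overwrite
        # for each state is its first visit in the episode.
        for x, r in reversed(episode):
            G = r + GAMMA * G
            first_return[x] = G
        for x, G0 in first_return.items():
            returns[x].append(G0)
    return {x: sum(v) / len(v) if v else 0.0 for x, v in returns.items()}

print(mc_evaluate())  # approaches V = 0.25, 0.5, 0.75 for states 1, 2, 3
```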
Temporal-Difference Learning
- Features (combination of Monte Carlo and DP ideas)
  - Learn from experience (Monte Carlo)
  - Update estimates based in part on other learned estimates (DP)
- The TD(λ) algorithm seamlessly integrates TD and Monte Carlo methods
TD(0) Learning
Initialize V(x) arbitrarily, and π to the policy to be evaluated
Repeat (for each episode):
    Initialize x
    Repeat (for each step of episode):
        a <- action given by π for x
        Take action a; observe reward r and next state x'
        V(x) <- V(x) + α[r + γV(x') - V(x)]
        x <- x'
    until x is terminal
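A Python sketch of the pseudocode above, evaluating a random policy on the same toy five-state random walk; ALPHA, GAMMA, and the chain itself are illustrative assumptions:

```python
import random

ALPHA, GAMMA = 0.1, 1.0

def td0(num_episodes=5000):
    V = {x: 0.0 for x in range(5)}   # V stays 0 at the terminal states
    for _ in range(num_episodes):
        x = 2                                # initialize x
        while x not in (0, 4):
            nx = x + random.choice((-1, 1))  # action given by the policy
            r = 1.0 if nx == 4 else 0.0      # observe reward and next state
            V[x] += ALPHA * (r + GAMMA * V[nx] - V[x])  # TD(0) update
            x = nx                           # x <- x'
    return V

print(td0())
```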
Q-Learning
Initialize Q(x,a) arbitrarily
Repeat (for each episode):
    Initialize x
    Repeat (for each step of episode):
        Choose a from x using a policy derived from Q (e.g., ε-greedy)
        Take action a; observe r, x'
        Q(x,a) <- Q(x,a) + α[r + γ max_a' Q(x',a') - Q(x,a)]
        x <- x'
    until x is terminal
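A tabular Q-learning sketch with ε-greedy exploration on the same toy chain (action 0 moves left, action 1 moves right); all constants are illustrative assumptions:

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def q_learning(num_episodes=5000):
    Q = {(x, a): 0.0 for x in range(5) for a in (0, 1)}
    for _ in range(num_episodes):
        x = 2                                          # initialize x
        while x not in (0, 4):
            if random.random() < EPSILON:              # policy derived from Q
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda a: Q[(x, a)])
            nx = x - 1 if a == 0 else x + 1            # take action a
            r = 1.0 if nx == 4 else 0.0                # observe r, x'
            target = r + GAMMA * max(Q[(nx, 0)], Q[(nx, 1)])
            Q[(x, a)] += ALPHA * (target - Q[(x, a)])  # Q-learning update
            x = nx
    return Q

print(q_learning())
```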
Q-Routing
- Q_x(y,d): estimated time for a packet to reach destination node d from current node x via x's neighbor node y
- T_y(d): y's estimate of the time remaining in the trip
- q_y: queuing time in node y
- T_xy: transmission time between x and y
Algorithm of Q-Routing
1. Set initial Q-values for each node
2. Get the first packet from the packet queue of node x
3. Choose the best neighbor node ŷ = argmin_y Q_x(y,d) and forward the packet to node ŷ
4. Get the estimated value T_ŷ(d) = min_z Q_ŷ(z,d) back from node ŷ
5. Update Q_x(ŷ,d) <- Q_x(ŷ,d) + η [ q_ŷ + T_xŷ + T_ŷ(d) - Q_x(ŷ,d) ]
6. Go to 2.
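A sketch of that loop on a small four-node graph; the topology, the fixed transmission times T, queuing times q, and learning rate ETA are illustrative assumptions (a real simulation would measure q_y and T_xy rather than fix them):

```python
ETA = 0.5  # learning rate

neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
T = {(x, y): 1.0 for x in neighbors for y in neighbors[x]}  # transmission times
q = {y: 0.5 for y in neighbors}                             # queuing times

# Q[x][(y, d)]: estimated delivery time from x to destination d via neighbor y
Q = {x: {(y, d): 0.0 for y in neighbors[x] for d in neighbors}
     for x in neighbors}

def forward(x, d):
    """Forward one packet hop from x toward d, updating Q_x on the way."""
    if x == d:
        return x
    y = min(neighbors[x], key=lambda y: Q[x][(y, d)])    # step 3: best neighbor
    t_y = 0.0 if y == d else min(Q[y][(z, d)] for z in neighbors[y])  # step 4
    # Step 5: Q_x(y,d) <- Q_x(y,d) + eta * (q_y + T_xy + T_y(d) - Q_x(y,d))
    Q[x][(y, d)] += ETA * (q[y] + T[(x, y)] + t_y - Q[x][(y, d)])
    return y

# Route a few packets from node 0 to node 3, learning along the way.
for _ in range(20):
    x, d, hops = 0, 3, 0
    while x != d and hops < 50:   # hop cap guards against early loops
        x = forward(x, d)
        hops += 1
print(Q[0])
```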
Dual Reinforcement Q-Routing
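In dual reinforcement Q-routing (covered in [2]), a packet traveling from source s also carries the sender's estimate of the time back to s, so the receiving node can make a backward update in addition to the forward one. The sketch below shows only that extra backward step, reusing Q, q, T, and ETA from the Q-routing sketch above; the details are assumptions about the standard scheme, not the slides' own formulation:

```python
def backward_update(y, x, s, t_x_back):
    """Backward half of dual reinforcement Q-routing (illustrative sketch):
    when node y receives a packet from neighbor x that originated at source s,
    the packet carries x's estimate t_x_back = min_z Q_x(z, s) of the time
    back to s, so y refines its estimate for routing future packets toward s."""
    Q[y][(x, s)] += ETA * (q[x] + T[(y, x)] + t_x_back - Q[y][(x, s)])
```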
Network Model
Network Model (cont.)
Node Model
Routing Controller
Initialization/Termination Procedures
- Initialization
  - Initialize and/or register global variables
  - Initialize the routing table
- Termination
  - Destroy the routing table
  - Release memory
Arrival Procedure
- Data packet arrival
  - Update the routing table
  - Route the packet with its control information, or destroy it if it has reached its destination
- Control-information packet arrival
  - Update the routing table
  - Destroy the packet
Departure Procedure
- Set all fields of the packet
- Get a shortest route
- Send the packet according to the route
References
[1] Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction
[2] Chengan Guo, Applications of Reinforcement Learning in Sequence Detection and Network Routing
[3] Simon Haykin, Neural Networks: A Comprehensive Foundation