Reinforcement Learning
- Basic idea:
  - Receive feedback in the form of rewards
  - The agent's utility is defined by the reward function
  - Must learn to act so as to maximize expected rewards
This slide deck courtesy of Dan Klein at UC Berkeley.

Recap: MDPs
- Markov decision processes:
  - States S
  - Actions A
  - Transitions P(s'|s,a) (or T(s,a,s'))
  - Rewards R(s,a,s') (and discount γ)
  - Start state s_0
- Quantities:
  - Policy = map from states to actions
  - Episode = one run of an MDP
  - Utility = sum of discounted rewards
  - Values = expected future utility from a state
  - Q-values = expected future utility from a q-state (s,a)
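To make the later sketches concrete, here is one minimal way the quantities above might be encoded in Python; the two-state toy MDP (its states, transitions, and rewards) is purely illustrative and not from the slides.

```python
# A minimal MDP encoding (illustrative toy example, not the Gridworld from the slides).
# T[(s, a)] is a list of (next_state, probability) pairs; R(s, a, s') is the reward function.
GAMMA = 0.9

STATES = ["A", "B", "TERM"]
ACTIONS = {"A": ["go", "stay"], "B": ["go"], "TERM": []}   # terminal state has no actions

T = {
    ("A", "go"):   [("B", 0.8), ("A", 0.2)],
    ("A", "stay"): [("A", 1.0)],
    ("B", "go"):   [("TERM", 1.0)],
}

def R(s, a, s2):
    """Reward for the transition (s, a, s'): +10 for reaching the terminal, -1 per step."""
    return 10.0 if s2 == "TERM" else -1.0
```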

Recap: Optimal Utilities
- The utility of a state s: V*(s) = expected utility starting in s and acting optimally
- The utility of a q-state (s,a): Q*(s,a) = expected utility starting in s, taking action a, and thereafter acting optimally
- The optimal policy: π*(s) = optimal action from state s
- Terminology (from the lookahead diagram): s is a state, (s,a) is a q-state, (s,a,s') is a transition

Recap: Bellman Equations
- The definition of utility leads to a simple one-step lookahead relationship among optimal utility values:
  total optimal rewards = maximize over the choice of (first action + optimal future)
- Formally: the equation appears on the slide as an image; see the standard form below
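The equation on this slide survives only as an image in the transcript; in the notation above, the standard Bellman optimality equations it refers to are:

```latex
V^*(s) = \max_a Q^*(s,a)
\qquad
Q^*(s,a) = \sum_{s'} T(s,a,s')\,\bigl[\,R(s,a,s') + \gamma\, V^*(s')\,\bigr]
```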

Practice: Computing Actions
- Which action should we choose from state s?
  - Given optimal values V?
  - Given optimal q-values Q?
- Lesson: actions are easier to select from Q's! (see the sketch below)
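A sketch of the two extraction rules; the helper names and the assumption that T maps (s, a) to a list of (next_state, probability) pairs are mine.

```python
def action_from_q(Q, s, actions):
    """With q-values, just take the argmax over the available actions."""
    return max(actions[s], key=lambda a: Q[(s, a)])

def action_from_v(V, s, actions, T, R, gamma=0.9):
    """With state values only, we need a one-step lookahead through the model (T and R)."""
    def q(a):
        return sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T[(s, a)])
    return max(actions[s], key=q)
```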

Value Estimates
- Calculate estimates V_k*(s):
  - Not the optimal value of s!
  - The optimal value considering only the next k time steps (k rewards)
  - As k → ∞, it approaches the optimal value
- Almost a solution: recursion (i.e., expectimax)
- Correct solution: dynamic programming

Value Iteration
- Idea:
  - Start with V_0*(s) = 0, which we know is right (why?)
  - Given V_i*, calculate the values for all states at depth i+1 (the update is spelled out in the sketch below)
  - Throw out the old vector V_i*
  - Repeat until convergence
  - This is called a value update or Bellman update
- Theorem: will converge to unique optimal values
  - Basic idea: the approximations get refined towards the optimal values
  - The policy may converge long before the values do
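A sketch of the loop, assuming the update shown (as an image) on the slide is the standard one, V_{i+1}(s) = max_a Σ_{s'} T(s,a,s') [R(s,a,s') + γ V_i(s')]; the data layout follows the toy encoding above, with terminal states modeled as having no actions.

```python
def value_iteration(states, actions, T, R, gamma=0.9, tol=1e-6):
    """Repeat Bellman updates until the value vector stops changing."""
    V = {s: 0.0 for s in states}              # V_0*(s) = 0
    while True:
        V_new = {}
        for s in states:
            if not actions[s]:                # terminal state: no actions, value stays 0
                V_new[s] = 0.0
                continue
            V_new[s] = max(
                sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T[(s, a)])
                for a in actions[s]
            )
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new                              # throw out the old vector
```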

Example: Bellman Updates
- (Worked Gridworld update shown on the slide; the max happens for a = right, other actions not shown)
- Example parameters: γ = 0.9, living reward = 0, noise = 0.2

Example: Value Iteration
- Information propagates outward from the terminal states, and eventually all states have correct value estimates
- (Slide shows the Gridworld value grids V_2 and V_3)

Eventually: Correct Values
- This is the unique solution to the Bellman equations
- (Slide compares V_3 when R = 0, γ = 0.9 with V* when R = -0.04, γ = 1)

Utilities for a Fixed Policy
- Another basic operation: compute the utility of a state s under a fixed (generally non-optimal) policy
- Define the utility of a state s under a fixed policy π:
  V^π(s) = expected total discounted rewards (return) starting in s and following π
- Recursive relation (one-step lookahead / Bellman equation): see the standard form below
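The recursive relation is likewise an image on the slide; in the same notation, the standard fixed-policy Bellman equation is:

```latex
V^{\pi}(s) = \sum_{s'} T\bigl(s, \pi(s), s'\bigr)\,\bigl[\,R\bigl(s, \pi(s), s'\bigr) + \gamma\, V^{\pi}(s')\,\bigr]
```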

Policy Evaluation
- How do we calculate the V's for a fixed policy?
  - Idea one: turn the recursive equations into updates
  - Idea two: it's just a linear system; solve it with Matlab (or whatever) (see the sketch below)
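A sketch of "idea two" in Python/NumPy rather than Matlab: for a fixed policy there is one linear equation per state, V(s) − γ Σ_{s'} T(s,π(s),s') V(s') = Σ_{s'} T(s,π(s),s') R(s,π(s),s'), so we can solve the whole system directly. The function name and data layout are my own.

```python
import numpy as np

def evaluate_policy_linear(states, pi, T, R, gamma=0.9):
    """Solve (I - gamma * P_pi) V = r_pi directly, one equation per state."""
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    A = np.eye(n)
    b = np.zeros(n)
    for s in states:
        a = pi.get(s)
        if a is None:                          # terminal state: equation is simply V(s) = 0
            continue
        for s2, p in T[(s, a)]:
            A[idx[s], idx[s2]] -= gamma * p
            b[idx[s]] += p * R(s, a, s2)
    V = np.linalg.solve(A, b)
    return {s: V[idx[s]] for s in states}
```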

Policy Iteration
- Alternative approach:
  - Step 1: Policy evaluation: calculate utilities for some fixed policy (not the optimal utilities!) until convergence
  - Step 2: Policy improvement: update the policy using one-step lookahead, with the resulting converged (but not optimal!) utilities as future values
  - Repeat the steps until the policy converges
- This is policy iteration
  - It's still optimal!
  - Can converge faster under some conditions

Policy Iteration
- Policy evaluation: with the current policy π fixed, find the values with simplified Bellman updates
  - Iterate until the values converge
- Policy improvement: with the values fixed, find the best action according to one-step lookahead
- (Both steps are spelled out in the sketch below)
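A sketch of the full loop, alternating iterative evaluation of the current policy with greedy one-step improvement; the data layout matches the toy MDP encoding sketched earlier (T as lists of (next_state, probability) pairs, R as a function), and the function name is mine.

```python
def policy_iteration(states, actions, T, R, gamma=0.9, eval_tol=1e-6):
    """Alternate policy evaluation (simplified Bellman updates) and greedy improvement."""
    def q(s, a, V):
        return sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T[(s, a)])

    pi = {s: actions[s][0] for s in states if actions[s]}   # arbitrary initial policy
    while True:
        # Policy evaluation: fixed-policy updates until the values converge
        V = {s: 0.0 for s in states}
        while True:
            V_new = {s: (q(s, pi[s], V) if s in pi else 0.0) for s in states}
            converged = max(abs(V_new[s] - V[s]) for s in states) < eval_tol
            V = V_new
            if converged:
                break
        # Policy improvement: one-step lookahead with the converged values
        new_pi = {s: max(actions[s], key=lambda a: q(s, a, V)) for s in pi}
        if new_pi == pi:
            return pi, V
        pi = new_pi
```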

Comparison
- Both compute the same thing (optimal values for all states)
- In value iteration:
  - Every pass (or "backup") updates both the utilities (explicitly, based on current utilities) and the policy (implicitly, based on current utilities)
  - Tracking the policy isn't necessary; we take the max
- In policy iteration:
  - Several passes update the utilities with a fixed policy
  - After the policy is evaluated, a new policy is chosen
- Together, these are dynamic programming methods for MDPs

Asynchronous Value Iteration*
- In value iteration, we update every state in each iteration
- Actually, any sequence of Bellman updates will converge if every state is visited infinitely often
- In fact, we can update the policy as seldom or as often as we like, and we will still converge
- Idea: update states whose value we expect to change:
  if |V_{i+1}(s) − V_i(s)| is large, then update the predecessors of s (see the sketch below)
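One way to realize this idea is a prioritized, one-state-at-a-time schedule. This sketch (a rough prioritized-sweeping flavor under my own assumptions, not necessarily the exact scheme the slide intends) backs up one state, and if its value changed a lot, queues that state's predecessors for updating.

```python
import heapq

def asynchronous_value_iteration(states, actions, T, R, gamma=0.9, max_backups=10_000, eps=1e-4):
    """Update one state at a time, preferring predecessors of states whose value just changed."""
    preds = {s: set() for s in states}
    for (s, a), outcomes in T.items():
        for s2, _p in outcomes:
            preds[s2].add(s)

    def backup(s, V):
        if not actions[s]:
            return 0.0
        return max(sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T[(s, a)])
                   for a in actions[s])

    V = {s: 0.0 for s in states}
    queue = [(0.0, s) for s in states]            # (negative residual, state)
    heapq.heapify(queue)
    for _ in range(max_backups):
        if not queue:
            break
        _, s = heapq.heappop(queue)
        new_v, old_v = backup(s, V), V[s]
        V[s] = new_v
        if abs(new_v - old_v) > eps:              # large change: revisit the predecessors of s
            for p in preds[s]:
                heapq.heappush(queue, (-abs(new_v - old_v), p))
    return V
```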

Reinforcement Learning
- Reinforcement learning:
  - Still have an MDP:
    - A set of states s ∈ S
    - A set of actions (per state) A
    - A model T(s,a,s')
    - A reward function R(s,a,s')
  - Still looking for a policy π(s)
- New twist: we don't know T or R
  - I.e., we don't know which states are good or what the actions do
  - Must actually try out actions and states to learn

Example: Animal Learning
- RL has been studied experimentally for more than 60 years in psychology
  - Rewards: food, pain, hunger, drugs, etc.
  - Mechanisms and sophistication debated
- Example: foraging
  - Bees learn a near-optimal foraging plan in a field of artificial flowers with controlled nectar supplies
  - Bees have a direct neural connection from their nectar intake measurement to their motor planning area

Example: Backgammon
- Reward only for win / loss in terminal states, zero otherwise
- TD-Gammon learns a function approximation to V(s) using a neural network
- Combined with depth-3 search, it was one of the top 3 players in the world
- You could imagine training Pacman this way…
  - … but it's tricky! (It's also P3)

Passive Learning
- Simplified task:
  - You don't know the transitions T(s,a,s')
  - You don't know the rewards R(s,a,s')
  - You are given a policy π(s)
  - Goal: learn the state values (and maybe the model)
  - I.e., policy evaluation
- In this case:
  - The learner is "along for the ride"
  - No choice about what actions to take
  - Just execute the policy and learn from experience
  - We'll get to the active case soon
  - This is NOT offline planning!

Example: Direct Estimation
- Episodes (γ = 1; each transition shows the reward received):
  Episode 1: (1,1) up -1, (1,2) up -1, (1,3) right -1, (2,3) right -1, (3,3) right -1, (3,2) up -1, (3,3) right -1, (4,3) exit +100 (done)
  Episode 2: (1,1) up -1, (1,2) up -1, (1,3) right -1, (2,3) right -1, (3,3) right -1, (3,2) up -1, (4,2) exit -100 (done)
- Average the observed returns from each visit:
  V(2,3) ≈ (96 + -103) / 2 = -3.5
  V(3,3) ≈ (99 + 97 + -102) / 3 ≈ 31.3
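A minimal every-visit sketch of direct estimation; the helper name and the episode encoding (lists of (state, reward) pairs) are my own, but the episodes and the resulting averages match the slide above.

```python
from collections import defaultdict

def direct_estimation(episodes, gamma=1.0):
    """Average the observed (discounted) return from every visit to every state."""
    totals, counts = defaultdict(float), defaultdict(int)
    for episode in episodes:                      # episode = [(state, reward), ...]
        ret = 0.0
        for state, reward in reversed(episode):   # accumulate returns back-to-front
            ret = reward + gamma * ret
            totals[state] += ret
            counts[state] += 1
    return {s: totals[s] / counts[s] for s in totals}

# The two episodes above: each entry is (state, reward received on leaving it).
ep1 = [((1,1), -1), ((1,2), -1), ((1,3), -1), ((2,3), -1),
       ((3,3), -1), ((3,2), -1), ((3,3), -1), ((4,3), +100)]
ep2 = [((1,1), -1), ((1,2), -1), ((1,3), -1), ((2,3), -1),
       ((3,3), -1), ((3,2), -1), ((4,2), -100)]

V = direct_estimation([ep1, ep2])
print(V[(2,3)], V[(3,3)])   # -3.5 and about 31.3
```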

Model-Based Learning
- Idea:
  - Learn the model empirically through experience
  - Solve for values as if the learned model were correct
- Simple empirical model learning:
  - Count outcomes for each (s,a)
  - Normalize to give an estimate of T(s,a,s')
  - Discover R(s,a,s') when we experience (s,a,s')
- Solving the MDP with the learned model:
  - Iterative policy evaluation, for example (see the sketch below)
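A sketch of the count-and-normalize step, assuming experience arrives as (s, a, s', r) tuples and that the reward for each (s, a, s') is deterministic; the function and variable names are illustrative.

```python
from collections import Counter, defaultdict

def learn_model(transitions):
    """transitions: iterable of (s, a, s', r) tuples gathered while following the policy."""
    counts = defaultdict(Counter)     # counts[(s, a)][s'] = number of times s' followed (s, a)
    rewards = {}                      # rewards[(s, a, s')] = observed R(s, a, s')
    for s, a, s2, r in transitions:
        counts[(s, a)][s2] += 1
        rewards[(s, a, s2)] = r       # assume rewards are deterministic per (s, a, s')
    T_hat = {
        sa: {s2: n / sum(c.values()) for s2, n in c.items()}   # normalize the counts
        for sa, c in counts.items()
    }
    return T_hat, rewards
```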

Example: Model-Based Learning
- Episodes (γ = 1), the same two episodes as before:
  Episode 1: (1,1) up -1, (1,2) up -1, (1,3) right -1, (2,3) right -1, (3,3) right -1, (3,2) up -1, (3,3) right -1, (4,3) exit +100 (done)
  Episode 2: (1,1) up -1, (1,2) up -1, (1,3) right -1, (2,3) right -1, (3,3) right -1, (3,2) up -1, (4,2) exit -100 (done)
- Counting and normalizing the outcomes of "right" taken from (3,3) (three times in total):
  T((3,3), right, (4,3)) = 1/3
  T((3,3), right, (3,2)) = 2/3

Model-Free Learning
- Want to compute an expectation weighted by P(x):
  E[f(x)] = Σ_x P(x) f(x)
- Model-based: estimate P(x) from samples, then compute the expectation
- Model-free: estimate the expectation directly from samples:
  E[f(x)] ≈ (1/N) Σ_i f(x_i), with x_i ~ P(x)
- Why does this work? Because samples appear with the right frequencies! (see the toy illustration below)
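A toy illustration (the distribution P and payoff f are made up, not from the slides) of why the two routes agree: the model-based weighted sum over estimated probabilities equals the plain sample average.

```python
import random
from collections import Counter

random.seed(0)
P = {0: 0.5, 1: 0.3, 2: 0.2}                 # illustrative distribution
f = {0: 10.0, 1: -2.0, 2: 4.0}               # illustrative payoff function

samples = random.choices(list(P), weights=list(P.values()), k=10_000)

# Model-based: estimate P(x) from the samples, then compute the weighted sum.
counts = Counter(samples)
P_hat = {x: c / len(samples) for x, c in counts.items()}
model_based = sum(P_hat[x] * f[x] for x in P_hat)

# Model-free: average f over the samples directly.
model_free = sum(f[x] for x in samples) / len(samples)

print(model_based, model_free, sum(P[x] * f[x] for x in P))   # all close to the true value 5.2
```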

Sample-Based Policy Evaluation?
- Who needs T and R? Approximate the expectation with samples (drawn from T!)
- Almost! But we only actually make progress when we move on to V_{i+1}.

Temporal-Difference Learning
- Big idea: learn from every experience!
  - Update V(s) each time we experience a transition (s, a, s', r)
  - Likely successors s' will contribute updates more often
- Temporal difference learning
  - The policy is still fixed!
  - Move values toward the value of whatever successor occurs: a running average!
- Sample of V(s): sample = R(s, π(s), s') + γ V^π(s')
- Update to V(s): V^π(s) ← (1 − α) V^π(s) + α · sample
- Same update: V^π(s) ← V^π(s) + α (sample − V^π(s))
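A minimal TD(0) sketch under a fixed policy, assuming each observed transition arrives as (s, s', r); the states and rewards in the usage example are illustrative.

```python
def td_update(V, s, s2, r, alpha=0.1, gamma=0.9):
    """One temporal-difference update after observing the transition (s, a, s', r)."""
    sample = r + gamma * V[s2]                # one-sample estimate of V(s)
    V[s] = V[s] + alpha * (sample - V[s])     # move V(s) toward the sample (running average)
    return V

# Illustrative usage: values start at 0 and are nudged by each observed transition.
V = {"A": 0.0, "B": 0.0, "TERM": 0.0}
for (s, s2, r) in [("A", "B", -1.0), ("B", "TERM", 10.0), ("A", "B", -1.0)]:
    td_update(V, s, s2, r)
print(V)
```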