CS 188: Artificial Intelligence Fall 2009 Lecture 11: Reinforcement Learning 10/1/2009 Dan Klein – UC Berkeley Many slides over the course adapted from either Stuart Russell or Andrew Moore

Announcements P0 / P1 grades are in glookup. If you have no entry, etc., email the staff list! If you have questions, see one of us or email the list. P3: MDPs and Reinforcement Learning is up! W2: MDPs, RL, and Probability will be up before next class.

P1: Admissibility Tests (figure: three test mazes, with path lengths of 8, 11, and 16 steps).

P1: Mini-Contest Results. 3rd place: 306 moves (Yuan Lu): searched using a weighted graph over dot intersections and dead ends, until all dots were gone. 2nd place: 298 moves (Haotian Bai and Nick Fraioli): with more than 30 dots left, look for dots in dead ends; with fewer than 30 dots, solve a FoodSearchProblem with A*. 1st place: 286 moves (Xingyan Liu and James Ide): dead ends, MSTs, food islands, a slight bias towards the west... “Captain Pacman is aboard the Maze Murderer and is trying to navigate the depths of the infamous Labyrinth of Darkness. Along with his spyglass and trusty parrot, who is certainly a star companion, he scrys into the distance, searching for pockets of land...”

Reinforcement Learning Still assume an MDP: A set of states s ∈ S. A set of actions (per state) A. A model T(s,a,s’). A reward function R(s,a,s’). Still looking for a policy π(s). New twist: don’t know T or R, i.e. don’t know which states are good or what the actions do. Must actually try actions and states out to learn. [DEMO]

Passive Learning Simplified task. In this case: You don’t know the transitions T(s,a,s’). You don’t know the rewards R(s,a,s’). You are given a policy π(s). Goal: learn the state values … what policy evaluation did. In this case: Learner is “along for the ride”. No choice about what actions to take. Just execute the policy and learn from experience. We’ll get to the active case soon. This is NOT offline planning! You actually take actions in the world and see what happens…

Recap: Model-Based Policy Evaluation Simplified Bellman updates to calculate V for a fixed policy: New V is expected one-step-look-ahead using current V. Unfortunately, need T and R. (Backup diagram: s → π(s) → successor states s’.)
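Reconstructing the update just described (the standard fixed-policy Bellman backup; the slide's own formula is not shown in the transcript):

```latex
V^{\pi}_{i+1}(s) \;\leftarrow\; \sum_{s'} T\big(s,\pi(s),s'\big)\,\Big[R\big(s,\pi(s),s'\big) + \gamma\, V^{\pi}_{i}(s')\Big]
```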

Model-Based Learning Idea: Learn the model empirically through experience. Solve for values as if the learned model were correct. Simple empirical model learning: Count outcomes for each s, a. Normalize to give an estimate of T(s,a,s’). Discover R(s,a,s’) when we experience (s,a,s’). Solving the MDP with the learned model: iterative policy evaluation, for example.
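As a rough sketch of the counting-and-normalizing step above (the episode format, function name, and usage data are assumptions, not the course's actual code):

```python
from collections import Counter, defaultdict

def learn_model(episodes):
    """Estimate T(s, a, s') and R(s, a, s') from observed (s, a, s', r) transitions."""
    counts = defaultdict(Counter)   # (s, a) -> Counter over next states s'
    rewards = {}                    # (s, a, s') -> observed reward
    for episode in episodes:
        for s, a, s_next, r in episode:
            counts[(s, a)][s_next] += 1     # count outcomes for each s, a
            rewards[(s, a, s_next)] = r     # discover R(s, a, s') when we experience it
    T = {}
    for (s, a), outcomes in counts.items():
        total = sum(outcomes.values())
        for s_next, n in outcomes.items():
            T[(s, a, s_next)] = n / total   # normalize counts into probability estimates
    return T, rewards

# Hypothetical usage with one short episode of (s, a, s', r) tuples:
T, R = learn_model([[("A", "go", "B", -1), ("B", "exit", "done", 10)]])
```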

Example: Model-Based Learning (γ = 1). Episode 1: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,3) exit +100 (done). Episode 2: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,2) exit -100 (done). Learned model: T(<3,3>, right, <4,3>) = 1 / 3, T(<2,3>, right, <3,3>) = 2 / 2. (Figure: the gridworld with +100 and -100 exit squares.)

Model-Free Learning Want to compute an expectation weighted by P(x): Model-based: estimate P(x) from samples, compute expectation Model-free: estimate expectation directly from samples Why does this work? Because samples appear with the right frequencies!
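The two estimators being contrasted, written out (a standard rendering; the slide's own equations are figures):

```latex
\text{Model-based:}\quad \mathbb{E}_{x\sim P}[f(x)] \approx \sum_x \hat{P}(x)\,f(x), \qquad \hat{P}(x) = \frac{\operatorname{count}(x)}{N}

\text{Model-free:}\quad \mathbb{E}_{x\sim P}[f(x)] \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i), \qquad x_i \sim P
```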

Example: Direct Estimation [DEMO – Optimal Policy] (γ = 1, R = -1). Episode 1: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,3) exit +100 (done). Episode 2: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,2) exit -100 (done). Direct estimates: V(2,3) ≈ (96 + -103) / 2 = -3.5; V(3,3) ≈ (99 + 97 + -102) / 3 = 31.3. (Figure: the gridworld with +100 and -100 exit squares.)

Sample-Based Policy Evaluation? Who needs T and R? Approximate the expectation with samples (drawn from T!). Almost! But we only actually make progress when we move to i+1. (Backup diagram: s, π(s) → sampled successors s1’, s2’, s3’.)
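The sample-based approximation, reconstructed in its standard form (not copied from the slide):

```latex
V^{\pi}_{i+1}(s) \;\approx\; \frac{1}{n}\sum_{k=1}^{n}\Big[R\big(s,\pi(s),s'_k\big) + \gamma\, V^{\pi}_{i}(s'_k)\Big], \qquad s'_k \sim T\big(s,\pi(s),\cdot\big)
```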

Temporal-Difference Learning Big idea: learn from every experience! Update V(s) each time we experience (s,a,s’,r). Likely s’ will contribute updates more often. Temporal difference learning: policy still fixed! Move values toward value of whatever successor occurs: running average! Sample of V(s), update to V(s), same update: reconstructed below. (Backup diagram: s → π(s) → s’.)
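The sample and update referred to above, in the standard TD(0) form:

```latex
\text{sample} = R\big(s,\pi(s),s'\big) + \gamma\, V^{\pi}(s')

V^{\pi}(s) \;\leftarrow\; (1-\alpha)\,V^{\pi}(s) + \alpha\cdot\text{sample} \;=\; V^{\pi}(s) + \alpha\,\big(\text{sample} - V^{\pi}(s)\big)
```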

Exponential Moving Average Makes recent samples more important Forgets about the past (distant past values were wrong anyway) Easy to compute from the running average Decreasing learning rate can give converging averages
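The running average written out (a standard form, stated here as an illustration): each new sample gets weight α, and older samples are discounted geometrically, so the distant past is forgotten.

```latex
\bar{x}_n = (1-\alpha)\,\bar{x}_{n-1} + \alpha\, x_n
          = \alpha\, x_n + \alpha(1-\alpha)\, x_{n-1} + \alpha(1-\alpha)^2\, x_{n-2} + \cdots + (1-\alpha)^n\, \bar{x}_0
```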

Example: TD Policy Evaluation [DEMO – Grid V’s] Take γ = 1, α = 0.5. Episode 1: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,3) exit +100 (done). Episode 2: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,2) exit -100 (done).
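A minimal sketch of replaying these two episodes with the TD update, γ = 1 and α = 0.5, starting from V = 0 everywhere (the in-class demo starts from different values, so the resulting numbers are only illustrative; the episode encoding as (state, successor, reward) triples is an assumption):

```python
from collections import defaultdict

gamma, alpha = 1.0, 0.5
V = defaultdict(float)  # start with V(s) = 0 for every state

# Each transition is (s, s', r): from state s we observed successor s' and reward r.
episodes = [
    [((1, 1), (1, 2), -1), ((1, 2), (1, 3), -1), ((1, 3), (2, 3), -1), ((2, 3), (3, 3), -1),
     ((3, 3), (3, 2), -1), ((3, 2), (4, 3), -1), ((4, 3), "done", 100)],
    [((1, 1), (1, 2), -1), ((1, 2), (1, 3), -1), ((1, 3), (2, 3), -1), ((2, 3), (3, 3), -1),
     ((3, 3), (3, 2), -1), ((3, 2), (4, 2), -1), ((4, 2), "done", -100)],
]

for episode in episodes:
    for s, s_next, r in episode:
        sample = r + gamma * V[s_next]   # one-step sample of V(s)
        V[s] += alpha * (sample - V[s])  # move V(s) toward the sample

print({s: round(v, 2) for s, v in V.items() if isinstance(s, tuple)})
```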

Problems with TD Value Learning TD value learning is a model-free way to do policy evaluation. However, if we want to turn values into a (new) policy, we’re sunk (see below). Idea: learn Q-values directly. Makes action selection model-free too! (Backup diagram: s → a → s’.)
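Spelling out why we're sunk: recovering a greedy policy from state values needs T and R, while Q-values let us act by a simple max (a standard reconstruction of the missing equations):

```latex
\pi(s) = \arg\max_a \sum_{s'} T(s,a,s')\,\big[R(s,a,s') + \gamma\, V(s')\big]
\qquad\text{vs.}\qquad
\pi(s) = \arg\max_a Q(s,a)
```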

Active Learning Full reinforcement learning In this case: You don’t know the transitions T(s,a,s’) You don’t know the rewards R(s,a,s’) You can choose any actions you like Goal: learn the optimal policy … what value iteration did! In this case: Learner makes choices! Fundamental tradeoff: exploration vs. exploitation This is NOT offline planning! You actually take actions in the world and find out what happens…

Detour: Q-Value Iteration Value iteration: find successive approx optimal values Start with V0*(s) = 0, which we know is right (why?) Given Vi*, calculate the values for all states for depth i+1: But Q-values are more useful! Start with Q0*(s,a) = 0, which we know is right (why?) Given Qi*, calculate the q-values for all q-states for depth i+1:
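The two updates referenced above, reconstructed in their standard form:

```latex
V^*_{i+1}(s) \;\leftarrow\; \max_a \sum_{s'} T(s,a,s')\,\big[R(s,a,s') + \gamma\, V^*_i(s')\big]

Q^*_{i+1}(s,a) \;\leftarrow\; \sum_{s'} T(s,a,s')\,\big[R(s,a,s') + \gamma \max_{a'} Q^*_i(s',a')\big]
```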

Q-Learning [DEMO – Grid Q’s] Q-Learning: sample-based Q-value iteration. Learn Q*(s,a) values: Receive a sample (s,a,s’,r). Consider your old estimate Q(s,a). Consider your new sample estimate. Incorporate the new estimate into a running average (reconstructed below).
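The sample and running-average update described above, in the standard Q-learning form:

```latex
\text{sample} = r + \gamma \max_{a'} Q(s',a'), \qquad
Q(s,a) \;\leftarrow\; (1-\alpha)\,Q(s,a) + \alpha\cdot\text{sample}
```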

Q-Learning Properties [DEMO – Grid Q’s] Amazing result: Q-learning converges to the optimal policy, if you explore enough and if you make the learning rate small enough … but don’t decrease it too quickly! Basically it doesn’t matter how you select actions (!). Neat property: off-policy learning: you learn the optimal policy without following it (some caveats).

Exploration / Exploitation Several schemes for forcing exploration. Simplest: random actions (ε-greedy). Every time step, flip a coin: with probability ε, act randomly; with probability 1-ε, act according to the current policy. Problems with random actions? You do explore the space, but keep thrashing around once learning is done. One solution: lower ε over time. Another solution: exploration functions.
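A minimal sketch of tabular Q-learning with ε-greedy action selection, tying the last few slides together (the `env.reset()` / `env.step(a)` interface and all names here are assumptions, not the Pacman project's API):

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    Q = defaultdict(float)  # (s, a) -> estimated q-value, defaults to 0

    def best_action(s):
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Flip a coin: with probability epsilon act randomly, else act greedily.
            a = random.choice(actions) if random.random() < epsilon else best_action(s)
            s_next, r, done = env.step(a)
            # Sample-based update toward r + gamma * max_a' Q(s', a').
            target = r if done else r + gamma * max(Q[(s_next, a2)] for a2 in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q
```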

Exploration Functions [DEMO – Auto Grid Q’s] When to explore? Random actions: explore a fixed amount. Better idea: explore areas whose badness is not (yet) established. Exploration function: takes a value estimate and a count, and returns an optimistic utility, e.g. the form below (exact form not important).
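One common exploration function of this kind, shown only as an example (u is the value estimate, n the visit count, k a fixed optimism bonus):

```latex
f(u, n) = u + \frac{k}{n}
```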

Q-Learning [DEMO – Crawler Q’s] Q-learning produces tables of q-values. (Figure: a learned q-value table from the demo.)

The Story So Far: MDPs and RL Things we know how to do, and the techniques that do them: We can solve small MDPs exactly, offline (value and policy iteration). We can estimate values V(s) directly for a fixed policy π (temporal difference learning). We can estimate Q*(s,a) for the optimal policy while executing an exploration policy (Q-learning, exploratory action selection). Adaptive dynamic programming is really called

Q-Learning In realistic situations, we cannot possibly learn about every single state! Too many states to visit them all in training Too many states to hold the q-tables in memory Instead, we want to generalize: Learn about some small number of training states from experience Generalize that experience to new, similar states This is a fundamental idea in machine learning, and we’ll see it over and over again

Example: Pacman Let’s say we discover through experience that this state is bad: In naïve Q-learning, we know nothing about this state or its q-states: Or even this one!

Feature-Based Representations Solution: describe a state using a vector of features. Features are functions from states to real numbers (often 0/1) that capture important properties of the state. Example features: Distance to closest ghost. Distance to closest dot. Number of ghosts. 1 / (dist to dot)². Is Pacman in a tunnel? (0/1) … etc. Can also describe a q-state (s, a) with features (e.g. action moves closer to food).
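A sketch of what such a feature function might look like for a Pacman-style q-state (every accessor on `state` here is hypothetical, not the project's actual API):

```python
def extract_features(state, action):
    """Map a (state, action) pair to a small dictionary of real-valued features."""
    next_pos = state.successor_position(action)   # assumed helper: position after taking `action`
    return {
        "bias": 1.0,
        "dist-to-closest-ghost": state.closest_ghost_distance(next_pos),
        "dist-to-closest-dot": state.closest_dot_distance(next_pos),
        "inverse-dist-to-dot-squared": 1.0 / (state.closest_dot_distance(next_pos) ** 2 + 1),
        "num-ghosts": float(state.num_ghosts()),
        "in-tunnel": 1.0 if state.is_tunnel(next_pos) else 0.0,
    }
```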

Linear Feature Functions Using a feature representation, we can write a q function (or value function) for any state using a few weights: Advantage: our experience is summed up in a few powerful numbers Disadvantage: states may share features but be very different in value!
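The linear form referred to above, written out:

```latex
Q(s,a) = w_1 f_1(s,a) + w_2 f_2(s,a) + \cdots + w_n f_n(s,a)
\qquad\text{(and similarly } V(s) = \textstyle\sum_i w_i f_i(s)\text{)}
```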

Function Approximation Q-learning with linear q-functions: Intuitive interpretation: Adjust weights of active features E.g. if something unexpectedly bad happens, disprefer all states with that state’s features Formal justification: online least squares
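The update being described, reconstructed in the standard approximate Q-learning form:

```latex
\text{difference} = \big[r + \gamma \max_{a'} Q(s',a')\big] - Q(s,a), \qquad
w_i \;\leftarrow\; w_i + \alpha\cdot\text{difference}\cdot f_i(s,a)
```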

Example: Q-Pacman
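The worked Q-Pacman numbers live in the original figure; as a stand-in, a minimal sketch of one weight update under the rule above (all feature names and values are made up):

```python
def update_weights(weights, features, r, max_q_next, q_current, alpha=0.004, gamma=1.0):
    """One linear Q-learning update: w_i += alpha * difference * f_i(s, a)."""
    difference = (r + gamma * max_q_next) - q_current
    return {name: w + alpha * difference * features.get(name, 0.0)
            for name, w in weights.items()}

# Hypothetical numbers: a step that ends badly pushes both weights down.
weights  = {"bias": 4.0, "dist-to-closest-ghost": -1.0}
features = {"bias": 1.0, "dist-to-closest-ghost": 0.5}
q_current = sum(weights[k] * features[k] for k in weights)  # Q(s, a) under the current weights
weights = update_weights(weights, features, r=-500, max_q_next=0.0, q_current=q_current)
print(weights)
```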

Linear regression Given examples, predict the value at a new point. (Figure: scatter plot of noisy example data.)

Linear regression Prediction. (Figure: the fitted linear predictor evaluated over the example data.)

Ordinary Least Squares (OLS) Error or “residual”: the gap between an observation and the prediction. (Figure: a fitted line with the residual at one observation marked.)
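The objective being minimized, in the feature notation used above (standard OLS):

```latex
\min_{w}\; \sum_{i}\Big(y_i - \sum_{k} w_k\, f_k(x_i)\Big)^{2}
```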

Minimizing Error The value update, explained:
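A reconstruction of the derivation this slide alludes to: take the squared error on one point, differentiate with respect to a weight, and step downhill; with the target y = r + γ max_a' Q(s',a'), this is exactly the approximate Q-learning weight update.

```latex
\text{error}(w) = \tfrac{1}{2}\Big(y - \sum_k w_k f_k(x)\Big)^{2}, \qquad
\frac{\partial\,\text{error}}{\partial w_m} = -\Big(y - \sum_k w_k f_k(x)\Big) f_m(x)

w_m \;\leftarrow\; w_m + \alpha\Big(y - \sum_k w_k f_k(x)\Big) f_m(x)
```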

Overfitting Degree 15 polynomial [DEMO]. (Figure: a degree-15 polynomial fit to the sample points.)

Policy Search

Policy Search Problem: often the feature-based policies that work well aren’t the ones that approximate V / Q best E.g. your value functions from project 2 were probably horrible estimates of future rewards, but they still produced good decisions We’ll see this distinction between modeling and prediction again later in the course Solution: learn the policy that maximizes rewards rather than the value that predicts rewards This is the idea behind policy search, such as what controlled the upside-down helicopter

Policy Search Simplest policy search: Start with an initial linear value function or q-function. Nudge each feature weight up and down and see if your policy is better than before. Problems: How do we tell the policy got better? Need to run many sample episodes! If there are a lot of features, this can be impractical.
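A rough sketch of the “nudge each weight” search just described (`evaluate_policy` would have to run many sample episodes and return an average score; every name here is a stand-in, not the course's code):

```python
def hill_climb_policy_search(weights, evaluate_policy, step=0.1, iterations=100):
    """Naive policy search: perturb one weight at a time, keep changes that help."""
    best_score = evaluate_policy(weights)  # estimated by running many sample episodes
    for _ in range(iterations):
        for name in list(weights):
            for delta in (step, -step):
                candidate = dict(weights, **{name: weights[name] + delta})
                score = evaluate_policy(candidate)   # expensive: many rollouts per candidate
                if score > best_score:               # keep the nudge only if the policy improved
                    weights, best_score = candidate, score
    return weights
```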

Policy Search* Advanced policy search: Write a stochastic (soft) policy: Turns out you can efficiently approximate the derivative of the returns with respect to the parameters w (details in the book, but you don’t have to know them) Take uphill steps, recalculate derivatives, etc.
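The exact soft policy isn't shown in the transcript; one common choice is a softmax over a parameterized q-function, written here only as an example:

```latex
\pi_w(a \mid s) = \frac{e^{\beta\, Q_w(s,a)}}{\sum_{a'} e^{\beta\, Q_w(s,a')}}
```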

Take a Deep Breath… We’re done with search and planning! Next, we’ll look at how to reason with probabilities Diagnosis Tracking objects Speech recognition Robot mapping … lots more! Last part of course: machine learning

Reinforcement Learning Known: current state, available actions, experienced rewards. Unknown: transition model, reward structure. Assumed: Markov transitions, fixed reward for (s,a,s’). Problem: find values for a fixed policy π (policy evaluation). Model-based learning: learn the model, then solve for values. Model-free learning: solve for values directly (by sampling).

Example: Animal Learning RL studied experimentally for more than 60 years in psychology Rewards: food, pain, hunger, drugs, etc. Mechanisms and sophistication debated Example: foraging Bees learn near-optimal foraging plan in field of artificial flowers with controlled nectar supplies Bees have a direct neural connection from nectar intake measurement to motor planning area

Example: Backgammon Reward only for win / loss in terminal states, zero otherwise TD-Gammon learns a function approximation to V(s) using a neural network Combined with depth 3 search, one of the top 3 players in the world You could imagine training Pacman this way… … but it’s tricky!

Recap: Model-Based Policy Evaluation Simplified Bellman updates to calculate V for a fixed policy: New V is expected one-step-look-ahead using current V. Unfortunately, need T and R. (Backup diagram: s → π(s) → successor states s’.)

Model-Based Learning In general, want to learn the optimal policy, not evaluate a fixed policy Idea: adaptive dynamic programming Learn an initial model of the environment: Solve for the optimal policy for this model (value or policy iteration) Refine model through experience and repeat Crucial: we have to make sure we actually learn about all of the model

Example: Greedy ADP Imagine we find the lower path to the good exit first. Some states will never be visited following this policy from (1,1). We’ll keep re-using this policy, because following it never visits the regions of the model we would need to learn to find the optimal policy. (Figure: gridworld with the unvisited states marked “?”.)

What Went Wrong? Problem with following the optimal policy for the current model: we never learn about better regions of the space if the current policy neglects them. Fundamental tradeoff: exploration vs. exploitation. Exploration: you must take actions with suboptimal estimates to discover new rewards and increase eventual utility. Exploitation: once the true optimal policy is learned, exploration reduces utility. Systems must explore in the beginning and exploit in the limit. (Figure: gridworld with the unvisited states marked “?”.)