CS 188: Artificial Intelligence Fall 2006

CS 188: Artificial Intelligence Fall 2006 Lecture 13: Advanced Reinforcement Learning 10/12/2006 Dan Klein – UC Berkeley

Midterms
- Exams are graded, will be in glookup today, and will be returned and gone over in section Monday
- My impression: a long and fairly hard exam, but the class generally did fine
- You should expect the final to be equally hard, but it should feel much shorter
- We added 25 points and scored the exam out of 100

Midcourse Reviews
You liked:
- Projects (19)
- Lectures (19)
- Visual / demo presentations (14)
- Newsgroup (4)
- Pacman (3)
You wanted:
- Debugging help / coding advice / seeing staff code (8)
- More time between projects (6)
- More problem sets (4)
- A webcast or podcast (4)
- More coding or more projects (4)
- Slides / reading posted earlier (3)
- Class later in the day (3)
- Lectures that are less fast / dense / technical / confusing (3)

Midcourse Reviews II
Difficulty / workload is:
- Hard (15)
- Medium (9)
- Easy (7)
Dan's office hours:
- Thursday (7)
- Not Thursday (9)

Midcourse Reviews
I propose:
- I'll hang out after class on Tuesdays; we can walk to my office if there are more questions
- I'll add Thursday office hours for a few weeks, and keep them if attended
- We'll devote more section time to projects
- I'll link to last term's slides so you can get a preview
- I'll keep doing coding demos for you
- There will be a (slight) shift from projects to written questions
- Rough "midterm grades" soon, to let you know where you stand (will incorporate at least the first two projects)
Other:
- I've asked about webcasting / podcasting, but it seems very unlikely this term
- There are limits to how early I can get new slides up, since I revise extensively from last term
- Can't really change: the programming language or the time of day

Midcourse Reviews - Anonymous

Today
- How advanced reinforcement learning works for large problems
- Some previews of fundamental ideas we'll see throughout the rest of the term
- Next class we'll start on probabilistic reasoning and reasoning about beliefs

Recap: Q-Learning
Learn Q*(s,a) values from samples:
- Receive a sample (s, a, s', r)
- On one hand: our old estimate of the return is Q(s,a)
- But now we have a new estimate for this sample: sample = r + γ max_a' Q(s', a')
- Nudge the old estimate towards the new sample: Q(s,a) ← Q(s,a) + α [sample − Q(s,a)]
- Equivalently, average samples over time: Q(s,a) ← (1 − α) Q(s,a) + α · sample
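
A minimal sketch of this update in Python (the dictionary-based Q-table and the alpha and gamma defaults are my own choices, not from the slides):

    from collections import defaultdict

    Q = defaultdict(float)  # unseen (state, action) pairs default to 0

    def q_update(s, a, s_next, r, actions, alpha=0.1, gamma=0.9):
        """One tabular Q-learning step from a sample (s, a, s_next, r)."""
        # New sample estimate: reward plus discounted best future q-value
        sample = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
        # Nudge the old estimate toward the sample (a running average)
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample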

Q-Learning Q-learning produces tables of q-values:

Q-Learning
In realistic situations, we cannot possibly learn about every single state!
- Too many states to visit them all in training
- Too many states to even hold the q-tables in memory
Instead, we want to generalize:
- Learn about some small number of training states from experience
- Generalize that experience to new, similar states
This is a fundamental idea in machine learning, and we'll see it over and over again

Example: Pacman
Let's say we discover through experience that this state is bad.
In naïve q-learning, that tells us nothing about this similar state or its q-states, or even this nearly identical one!

Feature-Based Representations
Solution: describe a state using a vector of features
- Features are functions from states to real numbers (often 0/1) that capture important properties of the state
- Example features:
  - Distance to closest ghost
  - Distance to closest dot
  - Number of ghosts
  - 1 / (distance to closest dot)^2
  - Is Pacman in a tunnel? (0/1)
  - ... etc.
- Can also describe a q-state (s, a) with features (e.g. action moves closer to food); see the sketch below
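
For concreteness, a minimal sketch of such a feature function (every state method below, like closest_dot_distance, is a hypothetical API invented for illustration, not the course code):

    def features(state, action):
        """Map a q-state (s, a) to a dictionary of feature values."""
        nxt = state.successor(action)  # hypothetical: the state after acting
        return {
            "dist-to-closest-ghost": nxt.closest_ghost_distance(),
            "inv-dist-to-dot-sq": 1.0 / max(nxt.closest_dot_distance(), 1) ** 2,
            "num-ghosts": float(nxt.num_ghosts()),
            "moves-closer-to-food": 1.0 if nxt.closest_dot_distance()
                                           < state.closest_dot_distance() else 0.0,
        }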

Linear Feature Functions
Using a feature representation, we can write a q-function (or value function) for any state using a few weights:
  V(s) = w1 f1(s) + w2 f2(s) + ... + wn fn(s)
  Q(s,a) = w1 f1(s,a) + w2 f2(s,a) + ... + wn fn(s,a)
Advantage: our experience is summed up in a few powerful numbers
Disadvantage: states may share features but actually be very different in value!
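
As a tiny worked example (the weights and feature values here are invented for illustration): suppose w = (4.0, -1.0) over two features, f_dot(s,a) = 1 / (distance to closest dot after acting) and f_ghost(s,a) = whether a ghost is one step away. If f_dot = 0.5 and f_ghost = 1.0, then Q(s,a) = 4.0 × 0.5 + (-1.0) × 1.0 = 1.0.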

Function Approximation
Q-learning with linear q-functions:
  difference = [r + γ max_a' Q(s', a')] − Q(s, a)
  w_i ← w_i + α · difference · f_i(s, a)   (for every feature i)
Intuitive interpretation:
- Adjust the weights of active features
- E.g. if something unexpectedly bad happens, disprefer all states with that state's features
Formal justification: online least squares (much later)
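
A minimal sketch of this weight update (reusing the hypothetical features function sketched earlier; variable names are mine):

    from collections import defaultdict

    weights = defaultdict(float)

    def q_value(s, a):
        """Linear q-function: the dot product of weights and features."""
        return sum(weights[k] * v for k, v in features(s, a).items())

    def approx_q_update(s, a, s_next, r, actions, alpha=0.05, gamma=0.9):
        """Shift every active feature's weight along the TD error."""
        difference = (r + gamma * max(q_value(s_next, a2) for a2 in actions)) \
                     - q_value(s, a)
        for k, v in features(s, a).items():
            weights[k] += alpha * difference * v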

Example: Q-Pacman

Hierarchical Learning

Hierarchical RL
Stratagus: an example of a large RL task, from Bhaskara Marthi's thesis (with Stuart Russell)
Stratagus is hard for reinforcement learning algorithms:
- > 10^100 states
- > 10^30 actions at each point
- Time horizon ≈ 10^4 steps
Stratagus is hard for human programmers:
- It typically takes game companies several person-months to write a computer opponent
- Still no match for experienced human players
- Programming involves much trial and error
Hierarchical RL:
- Humans supply high-level prior knowledge using a partial program
- The learning algorithm fills in the details

Partial "Alisp" Program
Alisp is an extension of Lisp that adds a choice operator; in a complete program, the nav choice would have to be a path-planning algorithm.

    (defun top ()
      (loop (choose (gather-wood) (gather-gold))))

    (defun gather-wood ()
      (with-choice (dest *forest-list*)
        (nav dest)
        (action 'get-wood)
        (nav *base-loc*)
        (action 'dropoff)))

    (defun gather-gold ()
      (with-choice (dest *goldmine-list*)
        (nav dest)
        (action 'get-gold)
        (nav *base-loc*)
        (action 'dropoff)))

    (defun nav (dest)
      (until (= (pos (get-state)) dest)
        (with-choice (move '(N S E W NOOP))
          (action move))))

Hierarchical RL
They then define a hierarchical Q-function, which learns a linear feature-based mini-Q-function at each choice point
- Very good at balancing resources and directing rewards to the right region
- Still not very good at the strategic elements of these kinds of games (i.e. the Markov game aspect)
[DEMO]

Policy Search

Policy Search
Problem: often the feature-based policies that work well aren't the ones that approximate V / Q best
- E.g. your value functions from 1.3 were probably horrible estimates of future rewards, but they still produced good decisions
- We'll see this distinction between modeling and prediction again later in the course
Solution: learn the policy that maximizes rewards, rather than the value that predicts rewards
This is the idea behind policy search, such as what controlled the upside-down helicopter

Policy Search
Simplest policy search:
- Start with an initial linear value function or q-function
- Nudge each feature weight up and down and see if the resulting policy is better than before (see the sketch below)
Problems:
- How do we tell whether the policy got better? We need to run many sample episodes!
- If there are a lot of features, this can be impractical
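
A minimal sketch of this naive hill-climbing loop (the evaluate_policy routine, which would run many sample episodes and average their returns, is assumed):

    import random

    def hill_climb_policy_search(weights, evaluate_policy, step=0.1, iters=100):
        """Nudge each weight up/down; keep changes that improve average return."""
        best = evaluate_policy(weights)  # average return over sample episodes
        for _ in range(iters):
            k = random.choice(list(weights))
            for delta in (step, -step):
                weights[k] += delta
                score = evaluate_policy(weights)
                if score > best:
                    best = score        # keep the improvement
                    break
                weights[k] -= delta     # revert if no improvement
        return weights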

Policy Search*
Advanced policy search:
- Write a stochastic (soft) policy, e.g. a softmax over linear scores: π_w(a | s) = e^(w · f(s,a)) / Σ_a' e^(w · f(s,a'))
- It turns out you can efficiently approximate the derivative of the returns with respect to the parameters w (details in the book, but you don't have to know them)
- Take uphill steps, recalculate derivatives, etc.
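
For illustration only (the softmax form above is my assumption for the slide's missing equation), sampling an action from such a soft policy might look like:

    import math
    import random

    def soft_policy_sample(weights, s, actions, features):
        """Sample an action from a softmax over linear scores w . f(s, a)."""
        scores = [sum(weights[k] * v for k, v in features(s, a).items())
                  for a in actions]
        m = max(scores)                            # subtract max for stability
        exps = [math.exp(sc - m) for sc in scores]
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(actions, weights=probs)[0]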

Take a Deep Breath…
We're done with search and planning!
Next, we'll look at how to reason with probabilities:
- Diagnosis
- Tracking objects
- Speech recognition
- Robot mapping
- ... lots more!
Last part of the course: machine learning

Digression / Preview

Linear regression
Given examples (x_i, y_i), predict y for a new point x.
[Figure: scatter plot of noisy temperature data, generated in MATLAB with scatter(1:20, 10+(1:20)+2*randn(1,20), 'k', 'filled')]

Linear regression
Prediction: the value of the fitted line at a new point.
[Figure: the same temperature scatter plot with a fitted regression line; predictions are read off the line]

Ordinary Least Squares (OLS)
The error or "residual" is the vertical distance between an observation and its prediction on the fitted line.
[Figure: scatter plot with fitted line; a vertical segment from an observation down to the line marks its residual]
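
OLS picks the line minimizing the sum of squared residuals. A minimal sketch, using synthetic data mirroring the MATLAB snippet above (the code itself is mine, not the course demo):

    import numpy as np

    # Synthetic data, mirroring scatter(1:20, 10+(1:20)+2*randn(1,20), ...)
    x = np.arange(1, 21, dtype=float)
    y = 10 + x + 2 * np.random.randn(20)

    # Design matrix with an intercept column; solve min_w ||Xw - y||^2
    X = np.column_stack([np.ones_like(x), x])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    residuals = y - X @ w   # observation minus prediction
    print("intercept, slope:", w, " SSE:", np.sum(residuals ** 2))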

Overfitting
Degree 15 polynomial
[Figure: a degree-15 polynomial fit to the 20 data points, oscillating wildly between them]
[DEMO]
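
To reproduce the effect yourself, a quick sketch (numpy polynomial fits on the same synthetic data as above; not the course demo code):

    import numpy as np

    x = np.arange(1, 21, dtype=float)
    y = 10 + x + 2 * np.random.randn(20)

    # A high-degree fit chases the noise; the low-degree fit generalizes better
    # (numpy may warn that the degree-15 fit is poorly conditioned)
    p15 = np.polyfit(x, y, 15)
    p1 = np.polyfit(x, y, 1)

    x_dense = np.linspace(1, 20, 200)
    y15 = np.polyval(p15, x_dense)  # oscillates wildly between data points
    y1 = np.polyval(p1, x_dense)    # stays close to the underlying trend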