
Reinforcement Learning Reinforced

Administrivia
I'm out of town next Tues and Wed
Class cancelled Apr 4 -- work on projects!
No office hrs Apr 4 or 5; back on Apr 6
Today: FP proposals due; I'll get you feedback ASAP
R2 returned

Where were we again???
Last time -- exam
Class before that -- review session
Week before that -- spring break
Class before that -- intro to reinforcement learning: definitions, problem statement, notation
Today: RL continued
Quick refresher
Outcome uncertainty
Definition: Markov Decision Process (MDP)

(Real) quick refresher
Agent (Mack the mouse)
Goodness of outcomes
Set of locations (states): S = {s_1, s_2, ..., s_|S|}
Set of actions: A = {a_1, ..., a_|A|}
Fundamental unit of experience: the ⟨s, a, r, s'⟩ tuple
Trace of SARS tuples/history: h_T
Policy/control law: π : S → A
Value function/aggregate reward: V(h | s_start, π)
Finite & infinite horizon rewards; discounting
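Not in the original slides: a minimal Python sketch of these objects, just to pin down notation. All names here (Experience, discounted_return, the tiny state and action sets) are invented for illustration.

```python
from typing import NamedTuple

S = ["s1", "s2", "s3"]          # set of locations (states)
A = ["left", "right"]           # set of actions

class Experience(NamedTuple):   # one <s, a, r, s'> tuple
    s: str
    a: str
    r: float
    s_next: str

policy = {"s1": "right", "s2": "right", "s3": "left"}   # pi: S -> A

def discounted_return(history, gamma=0.9):
    """Aggregate (discounted) reward of a history -- one way to score it."""
    return sum(gamma ** t * e.r for t, e in enumerate(history))

h = [Experience("s1", "right", 0.0, "s2"), Experience("s2", "right", 1.0, "s3")]
print(discounted_return(h))     # 0.0 + 0.9 * 1.0 = 0.9
```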

Uncertainty of outcomes
Consider: you can go to one of two startups:
SplatSoft Inc. (makes game software)
  Might win big: R(survives) = $15 Mil, Pr[survives] = 0.01
  Might tank: R(tanks) = -$0.25 Mil, Pr[tanks] = 0.99
Google Inc. (makes some web software)
  Might win big: R(survives) = $2 Mil, Pr[survives] = 0.8
  Might tank: R(tanks) = -$0.5 Mil, Pr[tanks] = 0.2
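A quick sanity check (not on the slide): the expected reward of each choice, computed directly from the numbers above.

```python
# Expected reward of each choice, using the slide's probabilities and payoffs.
splatsoft = 0.01 * 15_000_000 + 0.99 * (-250_000)   # = -97,500
google    = 0.80 *  2_000_000 + 0.20 * (-500_000)   # = 1,500,000

print(f"SplatSoft: {splatsoft:,.0f}   Google: {google:,.0f}")
```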

Transition functions
At time t, in state s_i, take action a_j
Next state is a stochastic function of the history h_t as well as s_i and a_j:
  Pr[q_{t+1} = s_k | h_t, q_t = s_i, a_t = a_j]
Need random vars q_t to represent "state that agent is in at time t"

Transition functions
I.e., where you go next depends on where you are, what you do, and how you got where you are
SplatSoft looks very different depending on whether you interned at MordorSoft or at Golden Arches
T() is called the transition function
A.k.a. process model, system dynamics
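An illustrative sketch (hypothetical numbers and function names) of a history-dependent transition model: the next-state distribution changes with the internship recorded in the history, not just with the current state and action.

```python
import random

# Hypothetical history-dependent transition model: the distribution over
# next states depends on what is in the history h, not only on (s, a).
def next_state_dist(history, s, a):
    if s == "SplatSoft" and "MordorSoft" in history:
        return {"survives": 0.05, "tanks": 0.95}
    if s == "SplatSoft" and "GoldenArches" in history:
        return {"survives": 0.001, "tanks": 0.999}
    return {"survives": 0.01, "tanks": 0.99}

def sample(dist):
    return random.choices(list(dist), weights=dist.values())[0]

print(sample(next_state_dist(["MordorSoft"], "SplatSoft", "join")))
```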

Histories & dynamics
[Figure: tree of possible histories branching out from s_1 under the fixed policy below]
Fixed policy π: π(s_1) = a_1, π(s_2) = a_7, π(s_4) = a_19, π(s_5) = a_3, π(s_11) = a_19
T(s_1, a_1, s_2) = 0.25, T(s_1, a_1, s_4) = 0.63
Pr({s_2, s_5, s_8} | q_1 = s_1, π) = 0.25 * ...
Pr({s_4, s_11, s_9} | q_1 = s_1, π) = 0.63 * ...
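A small sketch of the computation the figure suggests: the probability of one specific history under the fixed policy is the product of the transition probabilities along that branch. Only the 0.25 and 0.63 entries come from the slide; the remaining numbers are assumed.

```python
# Probability of one specific history under a fixed policy pi:
# multiply the transition probabilities along the branch.
T = {
    ("s1", "a1", "s2"): 0.25,   # from the slide
    ("s1", "a1", "s4"): 0.63,   # from the slide
    ("s2", "a7", "s5"): 0.5,    # assumed
    ("s5", "a3", "s8"): 0.4,    # assumed
}
pi = {"s1": "a1", "s2": "a7", "s5": "a3"}

def history_prob(states):
    p = 1.0
    for s, s_next in zip(states, states[1:]):
        p *= T[(s, pi[s], s_next)]
    return p

print(history_prob(["s1", "s2", "s5", "s8"]))   # 0.25 * 0.5 * 0.4 = 0.05
```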

Probable histories
Combination of:
  Initial state, q_1
  Policy, π
  Transition function, T()
produces a probability over histories: Pr_T[h | q_1, π]
Note: for any fixed length of history, t:
  Σ_{h of length t} Pr_T[h | q_1, π] = 1
(i.e., something has to happen...)

Sweet sorrow of memory
Keeping around h is essentially a kind of memory: what happened in the past to shape where you are now?
Problem: a full function of h is really nasty...
Assume that t = 50, |S| = 2, |A| = 2
Q: How many parameters does T have to have?

Sweet sorrow of memory
Issue is that there is a truly vast number of possible history traces
Think about the size of the complete tree of histories
Don't want to have to think about history when figuring out what will happen when the agent acts now
For many useful systems, all important info can be encoded in the state alone
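One rough way to see the blowup (a sketch, not a definitive answer to the question above): count every distinct length-t (state, action) history as its own conditioning context.

```python
# Back-of-the-envelope count of conditioning contexts for a fully
# history-dependent T, using the slide's numbers.
S, A, t = 2, 2, 50
n_histories = (S * A) ** t          # distinct length-t (state, action) histories
print(f"{n_histories:.2e}")         # ~1.27e+30 next-state distributions to specify
```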

5 minutes of math...
Definition: an order-k Markov process is a stochastic temporal process in which:
  Pr[q_{t+1} | q_t, q_{t-1}, ..., q_1] = Pr[q_{t+1} | q_t, q_{t-1}, ..., q_{t-k+1}]
for some finite, bounded k < t

5 minutes of math...
Important special case: k = 1
First-order Markov process -- only need to know the current state to make the best possible prediction of the next state:
  Pr[q_{t+1} | q_t, ..., q_1] = Pr[q_{t+1} | q_t]
Note! We're not talking RL here -- no actions, no rewards
We're just talking about random processes in general

5 minutes of math...
For a finite Markov process (|S| = N < ∞), the entire transition probability function can be written as an N×N transition matrix A, with
  A_ij = Pr[q_{t+1} = s_j | q_t = s_i]
Often called a Markov chain
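A concrete (made-up) example of such a chain: a row-stochastic matrix, and one-step and ten-step predictions by matrix multiplication.

```python
import numpy as np

# A 3-state Markov chain as a row-stochastic transition matrix:
# A[i, j] = Pr[q_{t+1} = s_j | q_t = s_i].  Numbers are invented.
A = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
])
assert np.allclose(A.sum(axis=1), 1.0)        # each row is a distribution

p0 = np.array([1.0, 0.0, 0.0])                # start in s_1 with certainty
print(p0 @ A)                                 # distribution after one step
print(p0 @ np.linalg.matrix_power(A, 10))     # ...and after ten steps
```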

Markovian environments
Let's assume our world is (1st-order) Markov
Don't need to know how Mack got to any specific state in the world
Knowing his current state tells us everything important we need to know about what will happen when he takes a specific action
Now how many params do you need to describe the transition function?
Such a world is called a Markov decision process

To describe all possible transitions under all possible actions, need a set of transition matrices -- one |S|×|S| matrix per action a ∈ A: the SAS matrix

5 minutes of math...
Given a Markov chain (in general, process) and a start state, can generate a trajectory:
  Start w/ q_1 = s_i
  Pick next state from Pr[q_2 = s_j | q_1 = s_i] = A_ij
  Repeat for t steps
Yields a t-step trajectory h_t = ⟨q_1, q_2, ..., q_t⟩
Any specific trajectory has a fixed probability:
  Pr[h_t | q_1] = Π_{τ=1..t-1} A_{q_τ, q_{τ+1}}
Markov decision process + fixed policy π ⇒ Markov chain
Markov chain ⇒ distribution over trajectories
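A sketch of the same idea in code: a toy MDP transition model plus a fixed policy induces a chain, which we can roll out to get a trajectory together with its probability. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP transition model T[s, a, s'] (made-up numbers) plus a fixed
# policy pi; together they induce a Markov chain we can roll out.
T = np.array([                    # shape (|S|=2, |A|=2, |S|=2)
    [[0.8, 0.2], [0.1, 0.9]],
    [[0.5, 0.5], [0.3, 0.7]],
])
pi = [0, 1]                       # pi(s_0) = a_0, pi(s_1) = a_1

def rollout(q1, t):
    """Sample a t-step trajectory and its probability under T and pi."""
    traj, prob, q = [q1], 1.0, q1
    for _ in range(t - 1):
        dist = T[q, pi[q]]                       # chain row for current state
        q_next = rng.choice(len(dist), p=dist)   # sample next state
        prob *= dist[q_next]
        traj.append(int(q_next))
        q = q_next
    return traj, prob

print(rollout(q1=0, t=5))
```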

MDPs defined
A Markov decision process (MDP), M, is a model of a stochastic, dynamic, controllable, rewarding process given by:
  M = ⟨S, A, T, R⟩
  S: state space
  A: action space
  T: transition function
  R: reward function
For most of RL, we'll assume the agent is living in an MDP
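One possible way to package M = ⟨S, A, T, R⟩ in code; the field names mirror the slide, but the concrete types and the reward signature are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MDP:
    S: list         # state space
    A: list         # action space
    T: dict         # T[(s, a, s')] = Pr[s' | s, a]
    R: Callable     # R(s, a) -> float

toy = MDP(
    S=["s1", "s2"],
    A=["stay", "go"],
    T={("s1", "go", "s2"): 1.0, ("s1", "stay", "s1"): 1.0,
       ("s2", "go", "s1"): 1.0, ("s2", "stay", "s2"): 1.0},
    R=lambda s, a: 1.0 if s == "s2" else 0.0,
)
print(toy.R("s2", "stay"))   # 1.0
```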

Exercise
Given the following tasks, describe the corresponding MDP -- what are the state space, action space, transition function, and reward function? How many states/actions are there? How many policies are possible?
  Flying an airplane and trying to get from point A to point B
  Flying an airplane and trying to emulate recorded human behaviors
  Delivering a set of packages to buildings on the UNM campus
  Winning at the stock market