RL Cont’d

Policies
- Total accumulated reward (value, V) depends on:
  - Where the agent starts
  - What the agent does at each step (duh)
- A plan of action is called a policy, π
- A policy defines what action to take in every state of the system: π(s) = a
- Value is a function of the start state and the policy: V = V(s, π)
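
In the tabular case a policy is literally just a lookup table from states to actions; a minimal sketch in Python, reusing the state/action names that show up later in the deck:

    # π: a table saying what to do in every state of the system.
    policy = {"s1": "a1", "s2": "a7", "s4": "a19"}

    def act(state):
        return policy[state]    # π(s) -> action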

Experience & histories
- In supervised learning, the "fundamental unit of experience" is a feature vector + label
- The fundamental unit of experience in RL: at time t, in some state s_i, take action a_j, get reward r_t, and end up in state s_k
  - Called an experience tuple or SARSA tuple: ⟨s_i, a_j, r_t, s_k⟩
- The set of all experience during a single episode up to time t is a history:
  h_t = ⟨s_1, a_1, r_1, s_2, a_2, r_2, ..., s_t⟩
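
A tiny sketch of how an experience tuple and a history might look in code (the field names are mine, not the lecture's):

    from collections import namedtuple

    # One unit of RL experience: state, action, reward, next state.
    Experience = namedtuple("Experience", ["s", "a", "r", "s_next"])

    history = []                                            # h_t: everything seen so far this episode
    history.append(Experience(s="s1", a="a1", r=0.0, s_next="s2"))
    history.append(Experience(s="s2", a="a7", r=1.0, s_next="s5"))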

Finite horizon reward
- Assume that an episode is finite: the agent acts in the world for a finite number of time steps, T, and experiences history h_T
- What should the total aggregate value be?
- Total accumulated reward: V(h_T) = r_1 + r_2 + ... + r_T = Σ_{t=1..T} r_t
- Occasionally useful to use the average reward instead: V(h_T) = (1/T) Σ_{t=1..T} r_t

Gonna live forever...
- Often, we want to model a process that is indefinite:
  - Infinitely long
  - Of unknown length (we don't know in advance when it will end)
  - Runs 'til it's stopped (randomly)
- So we have to consider infinitely long histories
- Q: what does value mean over an infinite history?

Reaaally long-term reward
- Let h_∞ = ⟨s_1, a_1, r_1, s_2, a_2, r_2, ...⟩ be an infinite history
- We define the infinite-horizon discounted value to be:
  V(h_∞) = Σ_{t=1..∞} γ^(t-1) r_t
  where 0 ≤ γ < 1 is the discount factor
- Q1: Why does this work?
- Q2: if R_max is the max possible reward attainable in the environment, what is V_max?
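
A sketch of the answer the two questions are pointing at (the geometric-series step is not spelled out on the slide): since every r_t ≤ R_max and 0 ≤ γ < 1,
  V ≤ R_max (1 + γ + γ² + ...) = R_max / (1 − γ)
so the infinite discounted sum converges, and V_max = R_max / (1 − γ).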

Uncertainty of outcomes
- Consider: you can go to one of two startups:
- SplatSoft Inc. (makes game software)
  - Might win big: R(survives) = $15 Mil, Pr[survives] = 0.01
  - Might tank: R(tanks) = -$0.25 Mil, Pr[tanks] = 0.99
- Google Inc. (makes some web software)
  - Might win big: R(survives) = $2 Mil, Pr[survives] = 0.8
  - Might tank: R(tanks) = -$0.5 Mil, Pr[tanks] = 0.2
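
The expected-value comparison this example is setting up (my arithmetic, not on the slide):
  E[SplatSoft] = 0.01 × $15M + 0.99 × (−$0.25M) ≈ −$0.10M
  E[Google]    = 0.8 × $2M + 0.2 × (−$0.5M) = $1.5M
So under uncertainty, the option with the smaller jackpot but better odds has far higher expected reward.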

Transition functions
- At time t, in state s_i, take action a_j
- The next state is a stochastic function of the history h_t as well as s_i and a_j:
  Pr[q_{t+1} = s_k | h_t, q_t = s_i, a_t = a_j]
- We need random variables q_t to represent "the state that the agent is in at time t"
- I.e., where you go next depends on where you are, what you do, and how you got where you are
  - SplatSoft is a very different place depending on whether you interned at MordorSoft or at the Golden Arches
- T() is a.k.a. the "process model" or "system dynamics"

Histories & dynamics
[Figure: a tree of possible histories branching out from s_1 under a fixed policy π; nodes are states s_1 ... s_11]
- Fixed policy π: π(s_1) = a_1, π(s_2) = a_7, π(s_4) = a_19, π(s_5) = a_3, π(s_11) = a_19
- Example transitions: T(s_1, a_1, s_2) = 0.25, T(s_1, a_1, s_4) = 0.63
- Probability of a whole history is a product along a branch:
  Pr[{s_2, s_5, s_8} | q_1 = s_1, π] = 0.25 × ...
  Pr[{s_4, s_11, s_9} | q_1 = s_1, π] = 0.63 × ...

Probable histories
- The combination of:
  - Initial state, q_1
  - Policy, π
  - Transition function, T()
  produces a probability distribution over histories: Pr_T[h | q_1, π]
- Note: for any fixed length of history, t:
  Σ_h Pr_T[h_t | q_1, π] = 1
  (i.e., something has to happen...)

Sweet sorrow of memory
- Keeping around h is essentially a kind of memory: what happened in the past to shape where you are now?
- Problem: a full function of h is really nasty...
- Assume that t = 50, |S| = 2, |A| = 2
- Q: How many parameters does T have to have?
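
A rough count to make "really nasty" concrete (my back-of-the-envelope numbers, not the slide's): each step of a history records a (state, action) pair, so with |S| = |A| = 2 there are on the order of (|S|·|A|)^t = 4^50 ≈ 10^30 distinct histories of length 50, and a fully history-dependent T needs a distribution over next states for every one of them.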

Sweet sorrow of memory
- The problem is that there are a truly vast number of possible history traces
  - Think about the size of the complete tree of histories
- We don't want to have to think about history when figuring out what will happen when the agent acts now
- For many useful systems, all the important info can be encoded in the state alone

5 minutes of math...
- Definition: An order-k Markov process is a stochastic temporal process in which:
  Pr[q_{t+1} | q_t, q_{t-1}, ..., q_1] = Pr[q_{t+1} | q_t, q_{t-1}, ..., q_{t-k+1}]
  for some finite, bounded k < t
- Important special case: k = 1, the first-order Markov process -- you only need to know the current state to make the best possible prediction of the next state
- Note! We're not talking RL here -- no actions, no rewards; we're just talking about random processes in general

5 minutes of math...
- For a finite Markov process (|S| = N < ∞), the entire transition probability function can be written as an N×N transition matrix A, with entries
  A_ij = Pr[q_{t+1} = s_j | q_t = s_i]
- Such a process is often called a Markov chain
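
A minimal numerical illustration (a sketch with made-up numbers, not from the slides), using the row convention A[i][j] = Pr[q_{t+1} = s_j | q_t = s_i]:

    import numpy as np

    # Hypothetical 2-state Markov chain; row i is the distribution over next states from state i.
    A = np.array([[0.9, 0.1],
                  [0.4, 0.6]])

    p0 = np.array([1.0, 0.0])                 # start in state 0 with certainty
    p1 = p0 @ A                               # distribution one step later: [0.9, 0.1]
    p5 = p0 @ np.linalg.matrix_power(A, 5)    # distribution five steps later
    print(p1, p5)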

Markovian environments
- Let's assume our world is (1st-order) Markov
- We don't need to know how Mack got to any specific state in the world
  - Knowing his current state tells us everything important we need to know about what will happen when he takes a specific action
- Now how many params do you need to describe the transition function?
- Such a world is called a Markov decision process

The SAS matrix
- To describe all possible transitions under all possible actions, we need a set of transition matrices -- one |S|×|S| matrix per action
- Taken together, this is the SAS (state-action-state) matrix: T(s, a, s') = Pr[q_{t+1} = s' | q_t = s, a_t = a]
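
One common way to store the SAS matrix in code is a 3-D array indexed by (state, action, next state); a sketch with made-up sizes and random but properly normalized entries:

    import numpy as np

    # Hypothetical SAS transition array: T[s, a, s2] = Pr[next state = s2 | state = s, action = a].
    # With |S| states and |A| actions this is |S| * |A| * |S| numbers in total.
    n_states, n_actions = 3, 2
    rng = np.random.default_rng(0)
    T = rng.random((n_states, n_actions, n_states))
    T /= T.sum(axis=2, keepdims=True)          # normalize so each (s, a) slice is a distribution

    assert np.allclose(T.sum(axis=2), 1.0)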

5 minutes of math...
- Given a Markov chain (in general, a Markov process) and a start state, we can generate a trajectory:
  - Start with q_1 = s_i
  - Pick the next state from Pr[q_2 | q_1 = s_i] (i.e., row i of A)
  - Repeat for t steps
- This yields a t-step trajectory, ⟨q_1, q_2, ..., q_t⟩
- Any specific trajectory has a fixed probability:
  Pr[⟨q_1, ..., q_t⟩] = Π_{i=1..t-1} A_{q_i, q_{i+1}}
- Markov decision process + fixed policy π ⇒ Markov chain
- Markov chain ⇒ distribution over trajectories
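
A sketch of the trajectory-generation procedure just described, assuming the row-stochastic matrix A from earlier (the function name is mine):

    import numpy as np

    def sample_trajectory(A, start, steps, rng=None):
        """Sample `steps` states from a Markov chain with row-stochastic
        transition matrix A, starting in state `start`."""
        rng = rng or np.random.default_rng()
        traj = [start]
        for _ in range(steps - 1):
            traj.append(rng.choice(len(A), p=A[traj[-1]]))   # pick next state from the current row
        return traj

    A = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    print(sample_trajectory(A, start=0, steps=10))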

MDPs defined
- Full definition: A Markov decision process (MDP), M, is a model of a stochastic, dynamic, controllable, rewarding process given by M = ⟨S, A, T, R⟩:
  - S: state space
  - A: action space
  - T: transition function
  - R: reward function
- For most of RL, we'll assume the agent is living in an MDP
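
One way to package the four-tuple in code (a sketch; the tabular T[s, a, s'] and R[s, a] layout is one common choice, not something fixed by the definition):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class MDP:
        states: list         # S: state space
        actions: list        # A: action space
        T: np.ndarray        # transition function: T[s, a, s2] = Pr[s2 | s, a]
        R: np.ndarray        # reward function, e.g. R[s, a]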

Exercise
Given the following tasks, describe the corresponding MDP -- what are the state space, action space, transition function, and reward function? How many states/actions are there? How many policies are possible?
- Flying an airplane and trying to get from point A to point B
- Flying an airplane and trying to emulate recorded human behaviors
- Delivering a set of packages to buildings on the UNM campus
- Winning at the stock market