RL Cont’d
Policies
- Total accumulated reward (value, V) depends on:
  - where the agent starts
  - what the agent does at each step (duh)
- A plan of action is called a policy, π
- A policy defines what action to take in every state of the system: π(s) = a
- Value is a function of the start state and the policy: V^π(s)
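As a rough illustration (not from the slides), a deterministic policy can be stored as a lookup table from states to actions, and the value of following it can be estimated by accumulating reward along a rollout. The toy `step` function and the state/action names below are hypothetical.

```python
# Minimal sketch: a policy as a state -> action lookup table, and the value
# of following it from a given start state. The two-state environment is a
# made-up toy, not anything from the lecture.
def step(state, action):
    """Hypothetical deterministic environment: returns (next_state, reward)."""
    if state == "s1" and action == "a1":
        return "s2", 1.0
    return "s1", 0.0

def rollout_value(policy, start_state, horizon=10):
    state, total = start_state, 0.0
    for _ in range(horizon):
        action = policy[state]               # pi(s) = a
        state, reward = step(state, action)
        total += reward
    return total

policy = {"s1": "a1", "s2": "a1"}            # what to do in every state
print(rollout_value(policy, "s1"))           # value depends on start state AND policy
```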
Experience & histories
- In supervised learning, the "fundamental unit of experience" is a feature vector + label
- Fundamental unit of experience in RL: at time t, in some state s_i, take action a_j, get reward r_t, end up in state s_k
- Called an experience tuple or SARSA tuple
- The set of all experience during a single episode up to time t is a history: h_t = ⟨e_1, e_2, ..., e_t⟩
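A small sketch of how an experience tuple and a history might be stored in code; the field names and the particular states/rewards are illustrative, not from the slides.

```python
from collections import namedtuple

# One "fundamental unit of experience": in state s, took action a, got reward
# r, ended up in next_state.
Experience = namedtuple("Experience", ["state", "action", "reward", "next_state"])

# A history h_t is the sequence of experience tuples seen so far in the
# current episode; it grows by one tuple per time step.
history = [
    Experience("s1", "a1", 0.0, "s2"),
    Experience("s2", "a7", 1.0, "s5"),
]
```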
Finite horizon reward
- Assume that an episode is finite: the agent acts in the world for a finite number of time steps, T, and experiences history h_T
- What should the total aggregate value be?
Finite horizon reward
- Assume that an episode is finite: the agent acts in the world for a finite number of time steps, T, and experiences history h_T
- What should the total aggregate value be?
- Total accumulated reward: V(h_T) = Σ_{t=1}^{T} r_t
- Occasionally it is useful to use the average reward instead: V(h_T) = (1/T) Σ_{t=1}^{T} r_t
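For concreteness, here is a tiny sketch of both aggregates on a made-up list of rewards.

```python
# Finite-horizon aggregates over a toy episode's rewards r_1 ... r_T.
rewards = [0.0, 1.0, -0.5, 2.0]

total_reward = sum(rewards)                    # V(h_T) = sum_t r_t
average_reward = sum(rewards) / len(rewards)   # V(h_T) = (1/T) sum_t r_t
```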
Gonna live forever...
- Often we want to model a process that is indefinite:
  - infinitely long
  - of unknown length (we don't know in advance when it will end)
  - runs 'til it's stopped (randomly)
- So we have to consider infinitely long histories
- Q: what does value mean over an infinite history?
Reaaally long-term reward
- Let h = ⟨e_1, e_2, ...⟩ be an infinite history
- We define the infinite-horizon discounted value to be V(h) = Σ_{t=1}^{∞} γ^{t-1} r_t, where 0 ≤ γ < 1 is the discount factor
- Q1: Why does this work?
- Q2: If R_max is the max possible reward attainable in the environment, what is V_max?
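A sketch of the discounted sum (truncated to a finite list in practice), plus the bound that answers Q2: since every term is at most γ^{t-1} R_max, summing the geometric series gives V_max = R_max / (1 - γ). The numbers are illustrative.

```python
# Discounted value of a (truncated) reward sequence, and the geometric-series
# bound on the best achievable value.
gamma = 0.9

def discounted_value(rewards, gamma):
    return sum(gamma ** t * r for t, r in enumerate(rewards))

R_max = 10.0
V_max = R_max / (1 - gamma)    # = 100.0 here: R_max * (1 + gamma + gamma^2 + ...)
```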
Uncertainty of outcomes
Consider: you can go to one of two startups.
- SplatSoft Inc. (makes game software)
  - Might win big: R(survives) = $15 Mil, Pr[survives] = 0.01
  - Might tank: R(tanks) = -$0.25 Mil, Pr[tanks] = 0.99
- Google Inc. (makes some web software)
  - Might win big: R(survives) = $2 Mil, Pr[survives] = 0.8
  - Might tank: R(tanks) = -$0.5 Mil, Pr[tanks] = 0.2
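One natural way to compare the two offers is by expected reward, computed from the numbers on the slide:

```python
# Expected reward (in $ millions) of each choice.
splatsoft = 0.01 * 15.0 + 0.99 * (-0.25)   # ~ -0.0975
google    = 0.80 * 2.0  + 0.20 * (-0.5)    # =  1.5
```

So in expectation the Google offer is worth more, even though SplatSoft has the larger possible payoff.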
Transition functions
- At time t, in state s_i, take action a_j
- The next state is a stochastic function of the history h_t as well as s_i and a_j: Pr[q_{t+1} = s_k | h_t, q_t = s_i, a_t = a_j]
- We need random variables q_t to represent "the state the agent is in at time t"
- I.e., where you go next depends on where you are, what you do, and how you got where you are
- Very different outcomes @ SplatSoft depending on whether the internship was at MordorSoft or Golden Arches
- T() is a.k.a. the "process model" or "system dynamics"
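A sketch of what a history-dependent transition function could look like in code; the branch on the history is exactly the memory the next slides try to get rid of. The states and probabilities here are made up for illustration.

```python
# History-dependent transition function: T(h_t, s, a) returns a distribution
# over next states. Where you go next depends on how you got here.
def T(history, state, action):
    if "MordorSoft" in history:     # the past changes the outcome distribution
        return {"splat_survives": 0.001, "splat_tanks": 0.999}
    return {"splat_survives": 0.01, "splat_tanks": 0.99}
```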
Histories & dynamics
[Figure: tree of possible histories branching out from s_1 under a fixed policy π]
- Fixed policy π: π(s_1) = a_1, π(s_2) = a_7, π(s_4) = a_19, π(s_5) = a_3, π(s_11) = a_19
- Example transitions: T(s_1, a_1, s_2) = 0.25, T(s_1, a_1, s_4) = 0.63
- Pr[{s_2, s_5, s_8} | q_1 = s_1, π] = 0.25 · ...
- Pr[{s_4, s_11, s_9} | q_1 = s_1, π] = 0.63 · ...
Probable histories
- The combination of an initial state q_1, a policy π, and a transition function T() produces a probability distribution over histories: Pr_T[h | q_1, π]
- Note: for any fixed history length t, the probabilities sum to one: Σ_h Pr_T[h_t | q_1, π] = 1 (i.e., something has to happen...)
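A sketch of how Pr_T[h | q_1, π] could be computed for one specific history when the policy is deterministic: multiply the transition probabilities along the state sequence. The nested-dict encoding of T is an assumption made for the example.

```python
# Probability of a particular state sequence under a fixed policy, assuming
# T is stored as {state: {action: {next_state: prob}}}.
def history_probability(T, policy, states):
    prob = 1.0
    for s, s_next in zip(states[:-1], states[1:]):
        prob *= T[s][policy[s]].get(s_next, 0.0)
    return prob

# E.g. Pr[(s_1, s_2, s_5) | q_1 = s_1, pi] = T(s_1, pi(s_1), s_2) * T(s_2, pi(s_2), s_5)
```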
Sweet sorrow of memory
- Keeping around h is essentially a kind of memory: what happened in the past to shape where you are now?
- Problem: a full function of h is really nasty...
- Assume that t = 50, |S| = 2, |A| = 2
- Q: How many parameters does T have to have? (One way to count is sketched below.)
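One way to make "really nasty" concrete, under the assumption that each step of the history contributes a (state, action) pair: the number of distinct histories already blows up at t = 50 even with two states and two actions.

```python
# Rough parameter count for a fully history-dependent transition function.
S, A, t = 2, 2, 50

num_histories = (S * A) ** t                    # 4**50, about 1.3e30
num_params = num_histories * S * A * (S - 1)    # one free probability per
print(f"{num_params:.2e}")                      # (history, state, action) triple
```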
Sweet sorrow of memory
- The problem is that there is a truly vast number of possible history traces
- Think about the size of the complete tree of histories
- We don't want to have to think about history when figuring out what will happen when the agent acts now
- For many useful systems, all of the important information can be encoded in the state alone
5 minutes of math...
- Definition: an order-k Markov process is a stochastic temporal process in which
  Pr[q_t | q_{t-1}, q_{t-2}, ..., q_1] = Pr[q_t | q_{t-1}, ..., q_{t-k}]
  for some finite, bounded k < t
- Important special case: k = 1, a first-order Markov process -- you only need to know the current state to make the best possible prediction of the next state
- Note! We're not talking about RL here -- no actions, no rewards; we're just talking about random processes in general
5 minutes of math...
- For a finite Markov process (|S| = N < ∞), the entire transition probability function can be written as an N × N transition matrix A, with A_{ij} = Pr[q_{t+1} = s_j | q_t = s_i]
- Such a process is often called a Markov chain
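A minimal numpy sketch of a transition matrix for a three-state chain (the numbers are made up): row i is the distribution over next states given that the current state is s_i.

```python
import numpy as np

# Row-stochastic transition matrix: A[i, j] = Pr[q_{t+1} = s_j | q_t = s_i].
A = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.7, 0.1],
    [0.0, 0.3, 0.7],
])
assert np.allclose(A.sum(axis=1), 1.0)   # every row is a probability distribution
```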
Markovian environments
- Let's assume our world is (1st-order) Markov
- We don't need to know how Mack got to any specific state in the world
- Knowing his current state tells us everything important we need to know about what will happen when he takes a specific action
- Now how many parameters do you need to describe the transition function?
- Such a world is called a Markov decision process
The SAS matrix
- To describe all possible transitions under all possible actions, we need a set of transition matrices: one |S| × |S| matrix per action (see the sketch below)
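A sketch of the SAS representation as a numpy array of shape (|A|, |S|, |S|): one transition matrix per action. The particular numbers are illustrative; the point is the shape, which also answers the parameter-counting question for a Markovian world.

```python
import numpy as np

# One |S| x |S| transition matrix per action, stacked along the action axis.
T = np.array([
    [[0.9, 0.1, 0.0],      # matrix for action a_0
     [0.2, 0.7, 0.1],
     [0.0, 0.3, 0.7]],
    [[0.5, 0.5, 0.0],      # matrix for action a_1
     [0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5]],
])
# |A| * |S| * |S| entries in total (|A| * |S| * (|S| - 1) of them free),
# versus the astronomical history-dependent count from before.
assert T.shape == (2, 3, 3) and np.allclose(T.sum(axis=2), 1.0)
```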
5 minutes of math...
- Given a Markov chain (or, in general, a Markov process) and a start state, we can generate a trajectory:
  - start with q_1 = s_i
  - pick the next state from Pr[q_{t+1} | q_t] (row q_t of A)
  - repeat for t steps
- This yields a t-step trajectory ⟨q_1, q_2, ..., q_t⟩
- Any specific trajectory has a fixed probability: the product of the transition probabilities along it
- Markov decision process + fixed policy π ⇒ Markov chain
- Markov chain ⇒ distribution over trajectories
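A sketch of both halves of the slide in numpy: sampling a t-step trajectory from a chain's transition matrix, and scoring a specific trajectory as the product of the transition probabilities along it. Representing states as integer indices is an implementation choice.

```python
import numpy as np

def sample_trajectory(A, start, t, seed=0):
    """Sample q_1 ... q_t from the chain with transition matrix A."""
    rng = np.random.default_rng(seed)
    traj = [start]
    for _ in range(t - 1):
        traj.append(int(rng.choice(len(A), p=A[traj[-1]])))
    return traj

def trajectory_probability(A, traj):
    """Pr[trajectory] = product of A[q_u, q_{u+1}] along the trajectory."""
    return float(np.prod([A[i, j] for i, j in zip(traj[:-1], traj[1:])]))
```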
MDPs defined
- Full definition: a Markov decision process (MDP), M, is a model of a stochastic, dynamic, controllable, rewarding process given by M = ⟨S, A, T, R⟩
  - S: state space
  - A: action space
  - T: transition function
  - R: reward function
- For most of RL, we'll assume the agent is living in an MDP
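A minimal container for the 4-tuple, as one possible coding of the definition; the exact signatures chosen for T and R below (T(s, a, s') and R(s, a)) are a modeling assumption, since conventions vary.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class MDP:
    """M = <S, A, T, R>."""
    states: Sequence[str]                  # S: state space
    actions: Sequence[str]                 # A: action space
    T: Callable[[str, str, str], float]    # T(s, a, s') -> transition probability
    R: Callable[[str, str], float]         # R(s, a) -> reward
```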
Exercise
Given the following tasks, describe the corresponding MDP -- what are the state space, action space, transition function, and reward function? How many states/actions are there? How many policies are possible?
- Flying an airplane and trying to get from point A to point B
- Flying an airplane and trying to emulate recorded human behaviors
- Delivering a set of packages to buildings on the UNM campus
- Winning at the stock market