
1

2 David Rosen

3 Goals  Overview of some of the big ideas in autonomous systems  Theme: Dynamical and stochastic systems lie at the intersection of mathematics and engineering  ZOMG ROBOTS!!!

4  Actually, no universally accepted definition  For this talk: ◦ Sensing (what’s going on?) ◦ Decision (what to do?) ◦ Planning (how to do it?) ◦ Actuation & control (follow the plan)

5 How do we begin to think about this problem?

6

7

8 What’s going on?

9 Sensing and Estimation  How do we know what the state is at a given time?  Generally, we have some sensors: ◦ Laser rangefinders ◦ GPS ◦ Vision systems ◦ etc…

10  Great!  Well, not quite… ◦ In general, can’t measure all state variables directly. Instead, an observation function H : M → O maps the current state x to some manifold O of outputs that can be directly measured ◦ Usually, dim O < dim M ◦ Given some observation z = H(x), can’t determine x !

11  Maybe we can use the system dynamics (f ) together with multiple observations?  Observability: Is it possible to determine the state of the system given a finite-time sequence of observations? ◦ “Virtual” sensors!  Detectability (weaker): Are all of the unobservable modes of the system stable?
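To make the observability question concrete in the simplest setting, a linear time-invariant model x_{t+1} = A x_t, z_t = H x_t, one can check the classical Kalman rank condition numerically. A minimal sketch (not from the slides; the matrices are an illustrative constant-velocity example):

```python
import numpy as np

def observability_matrix(A, H):
    """Stack H, HA, HA^2, ..., HA^(n-1) for an LTI system x' = Ax, z = Hx."""
    n = A.shape[0]
    blocks = [H @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.vstack(blocks)

# Hypothetical example: a 2D constant-velocity model where only position is measured.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # state = (position, velocity)
H = np.array([[1.0, 0.0]])   # we observe position only

O = observability_matrix(A, H)
observable = np.linalg.matrix_rank(O) == A.shape[0]
print("observable:", observable)  # True: velocity is recoverable from successive positions
```

Here the unmeasured velocity acts as the "virtual sensor" output: it can be reconstructed from a short sequence of position observations.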

12  What about noise?  In general, uncorrected/unmodeled error accumulates over time.  Stochastic processes: nondeterministic dynamical systems that evolve according to probability distributions.

13  New model: x_t = f(x_{t-1}, u_t, w_t) and z_t = H(x_t, v_t), for randomly distributed variables w_t and v_t.  We assume that x_t conditionally depends only upon x_{t-1} and the control u_t (completeness): p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t). Stochastic processes that satisfy this condition are called Markov chains.

14  Similarly, we assume that the measurement z_t conditionally depends only upon the current state x_t: p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t).

15 Thus, we get a sequence of states and observations like this: a chain of hidden states x_0 → x_1 → x_2 → …, with each state x_t emitting an observation z_t. This is called the hidden Markov model (HMM).
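To make this structure concrete, here is a minimal simulation sketch of such a model; the particular f, H, control, and noise scales are placeholder assumptions, not anything from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u):
    """Placeholder state-transition function."""
    return 0.9 * x + u

def H(x):
    """Placeholder observation function (the state is seen only through this map)."""
    return x ** 2

x = 1.0
states, observations = [], []
for t in range(10):
    u = 0.1                         # constant control, for illustration
    w = rng.normal(scale=0.05)      # process noise w_t
    v = rng.normal(scale=0.10)      # measurement noise v_t
    x = f(x, u) + w                 # hidden state: depends only on the previous state and u_t
    z = H(x) + v                    # observation: depends only on the current state
    states.append(x)
    observations.append(z)

print(observations)                 # this is all an estimator gets to see
```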

16 How can we estimate the state of an HMM at a given time? Any ideas?

17 Hint: How might we obtain p(x_t | z_{1:t}) from p(x_{t-1} | z_{1:t-1})?

18

19

20 Bayes’ Rule  Punchline: If we regard probabilities in the Bayesian sense, then Bayes’ Rule provides a way to optimally update beliefs in response to new data. This is called Bayesian inference.  It also leads to recursive Bayesian estimation.
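For reference, since the formula itself is not in the transcript text, Bayes' Rule in this setting reads:

```latex
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
\qquad\Longrightarrow\qquad
p(x_t \mid z_t) = \frac{p(z_t \mid x_t)\,p(x_t)}{p(z_t)}
```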

21  Define the belief bel(x_t) = p(x_t | z_{1:t}, u_{1:t}).  Then by conditional independence in the Markov chain: p(x_t | z_{1:t-1}, u_{1:t}) = ∫ p(x_t | x_{t-1}, u_t) p(x_{t-1} | z_{1:t-1}, u_{1:t-1}) dx_{t-1}, and by Bayes' rule: p(x_t | z_{1:t}, u_{1:t}) = η p(z_t | x_t) p(x_t | z_{1:t-1}, u_{1:t}), where η is a normalizing constant.

22 Recursive Bayesian Estimation: The Bayes Filter  This shows how to compute the current belief p(x_t | z_{1:t}, u_{1:t}) given only the previous belief p(x_{t-1} | z_{1:t-1}, u_{1:t-1}), the latest observation z_t, and the control input u_t.  Recursive filter!

23  Initialize the filter with initial belief p(x_0)  Recursion step: ◦ Propagate: p(x_t | z_{1:t-1}, u_{1:t}) = ∫ p(x_t | x_{t-1}, u_t) p(x_{t-1} | z_{1:t-1}, u_{1:t-1}) dx_{t-1} ◦ Update: p(x_t | z_{1:t}, u_{1:t}) = η p(z_t | x_t) p(x_t | z_{1:t-1}, u_{1:t})
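A minimal sketch of one propagate/update cycle over a discrete (finite) state space, where a belief is just a vector of probabilities; the transition matrix and likelihood below are illustrative assumptions:

```python
import numpy as np

def bayes_filter_step(belief, T, likelihood):
    """One propagate/update cycle of the discrete Bayes filter.

    belief     : p(x_{t-1} | z_{1:t-1}), shape (n,)
    T          : transition matrix, T[i, j] = p(x_t = j | x_{t-1} = i)
    likelihood : p(z_t | x_t = j) for the observation just received, shape (n,)
    """
    predicted = belief @ T               # propagate through the dynamics
    updated = likelihood * predicted     # weight by the measurement (Bayes' rule)
    return updated / updated.sum()       # normalize (the constant η)

# Hypothetical 3-state example.
belief = np.array([1/3, 1/3, 1/3])       # initial belief p(x_0)
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
likelihood = np.array([0.1, 0.7, 0.2])   # p(z_1 | x_1) for the observation we got

belief = bayes_filter_step(belief, T, likelihood)
print(belief)                            # posterior p(x_1 | z_1)
```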

24  Benefits of recursion: ◦ Don’t need to remember observations ◦ Online implementation ◦ Efficient!  Applications: ◦ Guidance ◦ Aerospace tracking ◦ Autonomous mapping (e.g., SLAM) ◦ System identification ◦ etc…

25  This clip was reportedly sampled from an Air Force training video on missile guidance, circa 1955.  It is factually correct.  See also: ◦ Turboencabulator ◦ Unobtainium Rudolf Kalman

26 How do we identify trajectories of the system with desirable properties?

27

28  Controllability: given two arbitrary specified states p and q, does there exist a finite-time admissible control u that can drive the system from p to q ?  Reachability: Given an initial state p, what other states can be reached from p along system trajectories in a given length of time?  Stabilizability: Given an arbitrary state p, does there exist an admissible control u that can stabilize the system at p ?
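For a linear time-invariant system x_{t+1} = A x_t + B u_t, controllability can likewise be checked with the Kalman rank condition, the dual of the observability test sketched earlier. Again the matrices are an illustrative example, not from the slides:

```python
import numpy as np

def controllability_matrix(A, B):
    """Columns [B, AB, A^2 B, ..., A^(n-1) B] for x' = Ax + Bu."""
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.hstack(blocks)

# Hypothetical double integrator: we can only push on the velocity.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])

C = controllability_matrix(A, B)
print("controllable:", np.linalg.matrix_rank(C) == A.shape[0])  # True
```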

29  Several methods for generating trajectories ◦ Rote playback ◦ Online synthesis from libraries of moves ◦ etc…  Optimal control: Minimize a cost functional amongst all controls whose trajectories have prescribed initial and final states x_0 and x_1.

30  The Pontryagin Maximum Principle (PMP) provides a set of necessary conditions satisfied by any optimal trajectory.  Can often be used to identify optimal controls of a system. Lev Pontryagin
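In outline (a standard textbook statement rather than the slide's own formulas): for minimizing a cost functional J[u] = ∫ L(x, u) dt subject to dynamics ẋ = f(x, u), the PMP introduces a costate p(t) and the Hamiltonian below, and requires:

```latex
% Hamiltonian built from the dynamics f and running cost L (costate p):
H(x, p, u) = p^{\top} f(x, u) - L(x, u)
% Along an optimal trajectory x^*(t) with optimal control u^*(t):
\dot{x}^* = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial x}, \qquad
u^*(t) \in \arg\max_{u \in U} H\bigl(x^*(t), p(t), u\bigr)
```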

31

32

33 Can also derive versions of the PMP for:  State-constrained control  Non-autonomous (i.e., time-dependent) dynamics.  etc…

34

35 How can we regulate autonomous systems?

36 The problem  Real-world systems suffer from noise, perturbations  If the underlying system is unstable, even small perturbations can drive the system off of the desired trajectory.

37

38  We have a desired trajectory that we would like to follow, called the reference.  At each time t, we can estimate the actual state of the system.  In general there is some nonzero error e(t) between the reference and the estimated state at each time t.

39 What to do?  Maybe we can find some rule for setting the control input u(t) at each time t as a function of the error e(t) such that the system is stabilized?  In that case, we have a feedback control law: u(t) = g(e(t)).

40 Many varieties of feedback controllers:  Proportional-integral-derivative (PID) control  Fuzzy logic control  Machine learning  Model adaptive control  Robust control  H∞ control  etc…
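As one concrete instance from this list, a minimal discrete-time PID controller sketch; the gains, time step, and toy plant are illustrative assumptions, not tuned for any real system:

```python
class PID:
    """u(t) = Kp*e(t) + Ki*integral(e) + Kd*de/dt, in discrete time."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: regulate a toy first-order plant toward reference 1.0.
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
x = 0.0
for _ in range(1000):
    u = controller.step(reference=1.0, measurement=x)
    x += (-x + u) * 0.01      # toy plant dynamics: x' = -x + u
print(round(x, 3))            # should settle near the reference
```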

41 We started with what (at least conceptually) were very basic problems from engineering, e.g., "make this system do this task,"

42 and ended up investigating all of this:  Dynamical systems  Stochastic processes  Markov chains  The hidden Markov model  Bayesian inference  Recursive Bayesian estimation  The Pontryagin Maximum Principle  Feedback stabilization and this is just the introduction!

43

