
1 Theory of Computations III CS-6800 | Spring 2014

2 Markov Models By Sandeep Ravikanti

3 Contents • Introduction • Definition • Examples • Types of Markov Models • Applications • References

4 Introduction • Origin: mid 20th century; named after Andrei A. Markov (1856–1922), Russian mathematician. • Markov model: a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.

5 Formal Definition (cont.)
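The equations for the formal definition did not survive the transcript; the following is a standard statement of the first-order, discrete-time Markov property, matching the verbal definition on slide 4:

```latex
% The Markov property: the next state depends only on the current state,
% not on the earlier history of the chain.
\[
  \Pr(X_{t+1} = x \mid X_t = x_t,\, X_{t-1} = x_{t-1},\, \dots,\, X_0 = x_0)
  = \Pr(X_{t+1} = x \mid X_t = x_t)
\]
% For a finite state space \{1,\dots,n\}, the chain is summarised by an
% n x n transition matrix M with M_{ij} = \Pr(X_{t+1} = j \mid X_t = i),
% and each row of M sums to 1 (as in the weather example on the next slide):
\[
  \sum_{j=1}^{n} M_{ij} = 1 \quad \text{for every state } i.
\]
```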

6 Examples of Markov models • A simple Markov model for the weather: the weather on day T is determined only by the weather on day T-1. • The transition matrix for the weather model (rows and columns ordered Sunny, Rainy):

M = | 0.75  0.25 |
    | 0.70  0.30 |

• Each row sums to 1.0, since every row of a transition matrix is a probability distribution over the next state. (State diagram: Sunny and Rainy, with transition probabilities 0.75, 0.25, 0.7, and 0.3 on the arrows.)
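As an illustration (not part of the original slides), here is a minimal Python sketch that simulates this two-state weather chain, assuming the row and column order Sunny, Rainy used above:

```python
import random

# Transition matrix from the slide; row i gives the probabilities of moving
# from state i to each state (row/column order assumed: Sunny, Rainy).
STATES = ["Sunny", "Rainy"]
M = [
    [0.75, 0.25],  # Sunny -> Sunny, Sunny -> Rainy
    [0.70, 0.30],  # Rainy -> Sunny, Rainy -> Rainy
]

def simulate(days, start=0, seed=42):
    """Return a list of weather states for `days` consecutive days."""
    rng = random.Random(seed)
    state = start
    history = [STATES[state]]
    for _ in range(days - 1):
        # Pick the next state according to the current row of M.
        state = rng.choices(range(len(STATES)), weights=M[state])[0]
        history.append(STATES[state])
    return history

if __name__ == "__main__":
    print(simulate(10))
```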

7 Markov Model For Weather

8 Modelling with Markov Models • Link structure of the World Wide Web • Performance of a system • Population genetics

9 Types of Markov Models (cont.): Markov chains, hidden Markov models, Markov decision processes, and partially observable Markov decision processes (each discussed on the following slides).

10 Markov chain for predicting the weather (state diagram with states Sunny and Rainy)

11 Properties of Markov chains • Reducibility: accessible states, essential (final) states, irreducible chains • Periodicity • Recurrence: transient and recurrent states • Ergodicity (illustrated on the slide with a state diagram over states A, B, C, and D)
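To make ergodicity concrete (an illustrative sketch, not from the slides): for an ergodic chain such as the weather example, repeatedly applying the transition matrix to any starting distribution converges to the unique stationary distribution. A minimal Python sketch using power iteration:

```python
# Power iteration to approximate the stationary distribution pi (pi = pi M)
# of the two-state weather chain from slide 6.
M = [
    [0.75, 0.25],  # Sunny row
    [0.70, 0.30],  # Rainy row
]

def step(dist, matrix):
    """One step of the chain: new_j = sum_i dist_i * matrix[i][j]."""
    n = len(matrix)
    return [sum(dist[i] * matrix[i][j] for i in range(n)) for j in range(n)]

def stationary(matrix, iters=100):
    """Approximate the stationary distribution by repeated stepping."""
    n = len(matrix)
    dist = [1.0 / n] * n          # start from the uniform distribution
    for _ in range(iters):
        dist = step(dist, matrix)
    return dist

print(stationary(M))  # approx [0.7368, 0.2632]: sunny about 74% of days
```

Power iteration is used here only because it is short; solving pi = pi M directly as a linear system works equally well for a chain this small.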

12 Hidden Markov model A hidden Markov model (HMM) can be defined as a nondeterministic finite-state transducer whose states are not directly observable. It combines two key properties: it is a Markov model, so its state at time t is a function solely of its state at time t-1; and the actual progression of the machine is hidden, so only the output string is observed.
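A short illustrative sketch (not from the slides) of the forward algorithm for computing the probability of an observed output string, using the weather chain as the hidden process; the umbrella/no-umbrella emission model, emission probabilities, and start distribution below are made up for the example:

```python
# Forward algorithm for a tiny HMM: hidden states are Sunny/Rainy (the chain
# from slide 6); observations are a hypothetical umbrella (0) / no umbrella (1).
TRANS = [[0.75, 0.25],   # hidden-state transitions: rows Sunny, Rainy
         [0.70, 0.30]]
EMIT = [[0.1, 0.9],      # hypothetical P(umbrella | state), P(no umbrella | state)
        [0.8, 0.2]]
START = [0.5, 0.5]       # hypothetical initial state distribution

def forward(observations):
    """Return the probability of the observation sequence under this HMM."""
    # alpha[i] = P(observations so far, current hidden state = i)
    alpha = [START[i] * EMIT[i][observations[0]] for i in range(2)]
    for obs in observations[1:]:
        alpha = [
            sum(alpha[i] * TRANS[i][j] for i in range(2)) * EMIT[j][obs]
            for j in range(2)
        ]
    return sum(alpha)

print(forward([0, 0, 1]))  # probability of: umbrella, umbrella, no umbrella
```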

13 Markov Decision Process A Markov decision process (MDP) is a Markov chain in which state transitions depend on the current state and on an action vector applied to the system. It is used to compute a policy of actions that maximizes some utility, and it can be solved with value iteration and related methods. A partially observable Markov decision process (POMDP) is an MDP in which the state of the system is only partially observed. Solving POMDPs exactly is computationally intractable (the finite-horizon problem is PSPACE-complete), but recent approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots.
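A minimal value-iteration sketch (not from the slides), run on a made-up two-state MDP; the transitions, rewards, and discount factor below are hypothetical and chosen only to keep the example small:

```python
# Value iteration on a toy two-state MDP.
# P[s][a] is a list of (next_state, probability) pairs;
# R[s][a] is the immediate reward for taking action a in state s.
P = {
    0: {"stay": [(0, 0.9), (1, 0.1)], "go": [(1, 1.0)]},
    1: {"stay": [(1, 0.9), (0, 0.1)], "go": [(0, 1.0)]},
}
R = {0: {"stay": 0.0, "go": 1.0}, 1: {"stay": 2.0, "go": 0.0}}
GAMMA = 0.9  # discount factor

def value_iteration(iters=200):
    """Run `iters` Bellman backups and return (state values, greedy policy)."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        # Synchronous backup: the comprehension reads the old V throughout.
        V = {
            s: max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                   for a in P[s])
            for s in P
        }
    # Greedy policy with respect to the converged values.
    policy = {
        s: max(P[s], key=lambda a, s=s: R[s][a]
               + GAMMA * sum(p * V[s2] for s2, p in P[s][a]))
        for s in P
    }
    return V, policy

print(value_iteration())
```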

14 Applications • Physics • Chemistry • Testing • Information sciences • Queueing theory • Internet applications • Statistics • Economics and finance • Social science • Mathematical biology • Games • Music

15 References • Elaine A. Rich, Automata, Computability and Complexity: Theory and Applications, 1st Edition. • "Markov model," Wikipedia. [Online]. Available: http://en.wikipedia.org/wiki/Markov_chain. [Accessed 28 Oct 2012]. • P. Xinhui Zhang, "DTMarkovChains." [Online]. Available: http://www.wright.edu/~xinhui.zhang/. [Accessed 28 Oct 2012]. • http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/chapter11.pdf

16 Questions?


