Markov Chains & Randomized Algorithms by Haitham Fallatah for COMP4804

Presentation transcript:

Markov Chains & Randomized Algorithms by Haitham Fallatah for COMP4804

Outline
- History of Markov chains
- The Markov property
- Markov chain example
- Three Markov chain models and applications
- Analysis of a randomized algorithm using Markov chains

History of the Markov chain
- Andrey Markov was a Russian mathematician who lived in the 19th and early 20th centuries.
- He was a poor student in every subject except mathematics.
- He was a student of Chebyshev.
- He was motivated to develop the model later known as the Markov chain for two reasons:
  - To show that Chebyshev's approach to extending the law of large numbers to sums of dependent random variables could be improved.
  - He had a rivalry with another Russian mathematician, Nekrasov, who claimed that only independent events can converge on predictable distributions.

The Markov property
- Markov disproved Nekrasov's claim by constructing a model with states and transitions.
- Two bins of balls: Bin 1 holds red and blue balls in a 1:1 ratio; Bin 2 holds them in a 2:1 ratio.
- Pick a ball at random from the current bin; if it is blue, draw the next ball from Bin 1, otherwise from Bin 2.
- (Diagram: from Bin 1 the chain stays at Bin 1 with probability 1/2 and moves to Bin 2 with probability 1/2; from Bin 2 it moves to Bin 1 with probability 1/3 and stays with probability 2/3.)
- Even though consecutive draws are dependent, the probabilities converge on a predictable distribution.
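To see the convergence Markov demonstrated, here is a minimal simulation sketch of the two-bin process above. The transition probabilities are taken from the reconstructed diagram; the function and variable names are ours:

```python
import random

# Transition probabilities from the slide's diagram:
# from Bin 1, move to Bin 1 w.p. 1/2; from Bin 2, move to Bin 1 w.p. 1/3.
P_TO_BIN1 = {1: 1/2, 2: 1/3}

def simulate(steps, seed=0):
    rng = random.Random(seed)
    state, visits = 1, {1: 0, 2: 0}
    for _ in range(steps):
        state = 1 if rng.random() < P_TO_BIN1[state] else 2
        visits[state] += 1
    return {s: v / steps for s, v in visits.items()}

# The empirical frequencies approach the stationary distribution (2/5, 3/5)
# regardless of the starting bin.
print(simulate(100_000))
```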

Markov Chain Example
- A four-state weather chain with states Sunny, Cloudy, Rainy, and Snowy.
- (Table: a 4x4 transition matrix P with rows and columns indexed by Sunny, Cloudy, Rainy, Snowy; the original entries did not survive extraction.)
- P_0 = (1, 0, 0, 0): the initial-state vector, starting the chain in the Sunny state.
- P_100: the distribution after 100 steps, as produced by gen(100).
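Since the slide's matrix entries were lost, here is a sketch of the same computation with a stand-in transition matrix (the numbers below are invented for illustration only):

```python
import numpy as np

STATES = ["Sunny", "Cloudy", "Rainy", "Snowy"]

# Hypothetical row-stochastic transition matrix; the slide's actual
# entries were lost, so these values are placeholders.
P = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.3, 0.4, 0.2, 0.1],
    [0.2, 0.3, 0.4, 0.1],
    [0.1, 0.2, 0.3, 0.4],
])

p0 = np.array([1.0, 0.0, 0.0, 0.0])  # start in the Sunny state

# Distribution after 100 steps: p0 . P^100
p100 = p0 @ np.linalg.matrix_power(P, 100)
print(dict(zip(STATES, p100.round(4))))
```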

Markov Models & Applications
- Discrete-time Markov chain (DTMC)
- The idea is a chain of steps between states, where the move from one state to the next depends only on the current state:
  Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n)
- This model is used in simulations and in analyzing randomized algorithms.
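The defining property above is easy to see in code: a sketch of sampling one trajectory, where each step consults only the current state. The toy chain and its probabilities are invented for illustration:

```python
import random

# A toy chain on states "A" and "B"; each row lists (next_state, probability).
TRANSITIONS = {
    "A": [("A", 0.9), ("B", 0.1)],
    "B": [("A", 0.5), ("B", 0.5)],
}

def trajectory(start, steps, seed=1):
    rng = random.Random(seed)
    path, state = [start], start
    for _ in range(steps):
        # The next state depends only on `state`, never on `path[:-1]`:
        # exactly the Markov property from the slide.
        nxt, probs = zip(*TRANSITIONS[state])
        state = rng.choices(nxt, weights=probs)[0]
        path.append(state)
    return path

print("".join(trajectory("A", 30)))
```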

Markov Models & Applications
- Continuous-time Markov chain (CTMC)
- Similar to the DTMC model, except that discrete steps have no meaning in continuous time, so the Markov property must hold for all future times instead of just for one step. This model is best suited to modeling biological and physical systems because of their continuous-time nature.
- Used in the construction of phylogenies (evolutionary trees).
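A minimal sketch of simulating a CTMC: remain in the current state for an exponentially distributed holding time, then jump. The two-state chain and its rates below are made up for illustration:

```python
import random

# Hypothetical jump rates (per unit time) for a two-state chain: "on" <-> "off".
RATES = {"on": ("off", 2.0), "off": ("on", 0.5)}

def simulate_ctmc(state, t_end, seed=2):
    rng = random.Random(seed)
    t, history = 0.0, [(0.0, state)]
    while True:
        nxt, rate = RATES[state]
        t += rng.expovariate(rate)  # exponential holding time in `state`
        if t >= t_end:
            return history
        state = nxt
        history.append((t, state))

for time, s in simulate_ctmc("on", 5.0):
    print(f"t={time:.3f}: {s}")
```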

Markov Models & Applications
- Hidden Markov model (HMM)
- A model in which the "real" Markov chain is hidden; instead, another set of states is observed, each depending on the corresponding state of the hidden chain. This is one of the most powerful Markov models because it can fill in the blanks in problems where not all the information is given.
- This model is used in pattern, speech, and handwriting recognition.
- (Diagram: hidden states Z_1, Z_2, Z_3, ..., Z_n forming a chain, each emitting an observation X_1, X_2, X_3, ..., X_n.)
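As an illustration of "filling in the blanks", here is a sketch of the forward algorithm, which computes the likelihood of an observation sequence under an HMM by summing over all hidden paths. All parameters below are invented for the example:

```python
# Hypothetical two-state HMM: hidden weather states emit "walk"/"shop" observations.
HIDDEN = ["rainy", "sunny"]
START = {"rainy": 0.6, "sunny": 0.4}
TRANS = {"rainy": {"rainy": 0.7, "sunny": 0.3},
         "sunny": {"rainy": 0.4, "sunny": 0.6}}
EMIT = {"rainy": {"walk": 0.1, "shop": 0.9},
        "sunny": {"walk": 0.8, "shop": 0.2}}

def forward(observations):
    """Return Pr(observations), summing over all hidden state sequences."""
    # alpha[z] = Pr(observations so far, hidden state is z)
    alpha = {z: START[z] * EMIT[z][observations[0]] for z in HIDDEN}
    for obs in observations[1:]:
        alpha = {z: EMIT[z][obs] * sum(alpha[zp] * TRANS[zp][z] for zp in HIDDEN)
                 for z in HIDDEN}
    return sum(alpha.values())

print(forward(["walk", "shop", "walk"]))
```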

Analysis of a randomized algorithm
- One example of a randomized algorithm that can be analyzed with a Markov chain is a randomized algorithm for the 2-SAT problem.
- A 2-SAT instance consists of n Boolean variables and m clauses in 2-conjunctive normal form (2-CNF), i.e., each clause is a disjunction of two literals, for example (x_1 OR NOT x_2) AND (NOT x_1 OR x_3). The instance is solved by an assignment that makes every clause true, and hence the whole formula true. 2-SAT is in P: many algorithms solve it in polynomial time.

The Problem
- We are given an algorithm claimed to find a satisfying assignment in an expected O(n^2) steps. Our job is to verify this claim independently using a Markov chain.
- The algorithm runs as follows (pseudocode; a runnable sketch follows below):
  - Repeat the following up to 2mn^2 times:
    - Select a clause that is not satisfied (i.e., false) at random.
    - Select one of the two variables in that clause at random and flip its value.
    - If the formula is satisfied, return true.
  - Otherwise, return that the formula is unsatisfiable.
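A minimal runnable sketch of this algorithm, assuming clauses are encoded as pairs of nonzero integers (a positive literal v means x_v is true, -v means x_v is false); all names here are ours:

```python
import random

def random_2sat(n, clauses, m=10, seed=3):
    """Randomized 2-SAT on variables 1..n; clauses are pairs of literals."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n + 1)}  # arbitrary start
    satisfied = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(2 * m * n * n):
        unsat = [c for c in clauses if not (satisfied(c[0]) or satisfied(c[1]))]
        if not unsat:
            return assign                    # formula satisfied
        lit = rng.choice(rng.choice(unsat))  # random variable of a random unsat clause
        assign[abs(lit)] = not assign[abs(lit)]
    return None  # report "unsatisfiable"; wrong with probability <= 2^-m

# (x1 or not x2) and (not x1 or x3) and (x2 or x3)
print(random_2sat(3, [(1, -2), (-1, 3), (2, 3)]))
```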

Example of Algorithm Execution
- (Figure: assignment snapshots from a sample run on four variables, where each step flips one variable of an unsatisfied clause:
  X_1=True, X_2=False, X_3=False, X_4=False;
  X_1=False, X_2=False, X_3=False, X_4=False;
  X_1=True, X_2=False, X_3=True, X_4=False;
  X_1=True, X_2=False, X_3=False, X_4=False.)

Analysis
- Suppose the formula is satisfiable, and fix one satisfying assignment S.
- Let X_t be the number of variables whose current value agrees with S after t steps. When X_t = n, the algorithm has found S (it may also terminate earlier on some other satisfying assignment).
- Every unsatisfied clause disagrees with S on at least one of its two variables, so flipping a random variable of that clause moves us closer to S with probability at least 1/2.

(Diagram: a random walk on states 0, 1, ..., n; from state j the walk moves to j-1 or j+1, each with probability 1/2.)

Analysis
- Pessimistically, assume each step increases X_t with probability exactly 1/2 and decreases it with probability 1/2: a random walk on {0, 1, ..., n}, where from state 0 the walk can only move to 1.
- Let h_j be the expected number of steps to reach state n starting from state j.

Thus:
- h_n = 0
- h_0 = h_1 + 1
- h_j = (1/2)(h_{j-1} + 1) + (1/2)(h_{j+1} + 1) for 1 <= j <= n-1

Analysis  The System of equations we have is:  Using Induction we can get:

Analysis  Finally, we can substitute and get:

References  Probability and Computing (book) by Michael mitzenmacher and Eli Upfal (Main)    Markov Models and Hidden Markov Models: A Brief Tutorial  Discrete Mathematics & Mathematical Reasoning by Kousha Etessami  ut_LJSA_2.pdf    Khan Academy