Polynomial time The Chinese University of Hong Kong Fall 2010


Polynomial time
CSCI 3130: Formal languages and automata theory
Andrej Bogdanov, The Chinese University of Hong Kong, Fall 2010
http://www.cse.cuhk.edu.hk/~andrejb/csc3130

Running time
When you do laundry, you want to know: how soon will the laundry be done?
When you run a program M on some input, you want to know: how soon will my program finish?

Efficiency
Undecidable problems (like ATM and PCP): we cannot find solutions in any finite amount of time.
Decidable problems: sometimes we can find solutions fast, sometimes not.

Efficiency
The running time of an algorithm depends on the input: for longer inputs, we should allow more time. Efficiency is therefore measured as a function of the input size.

Running time
The running time of a Turing machine M is the function tM(n):
tM(n) = maximum number of steps that M takes on any input of length n

Example: L = {w#w : w ∈ {a, b}*}
M: On input x, until you reach #:
- Read and cross off the first a or b before # (O(n) steps)
- Read and cross off the first a or b after # (O(n) steps)
- If there is a mismatch, reject
If all symbols but # are crossed off, accept.
The loop runs O(n) times, so the running time is O(n²).
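The crossing-off procedure for {w#w} can be sketched in Python. This is an illustrative simulation of the tape contents, not a faithful step-by-step TM; `"x"` plays the role of the crossed-off symbol:

```python
def match_ww(tape):
    """Decide L = {w#w : w in {a,b}*} by the crossing-off procedure
    from the slide: repeatedly match the first uncrossed symbol
    before # against the first uncrossed symbol after #."""
    tape = list(tape)
    if tape.count("#") != 1:
        return False
    sep = tape.index("#")
    while True:
        # first uncrossed symbol before #, and first one after #
        i = next((j for j in range(sep) if tape[j] != "x"), None)
        k = next((j for j in range(sep + 1, len(tape)) if tape[j] != "x"), None)
        if i is None:
            return k is None          # accept iff both sides are used up
        if k is None or tape[i] != tape[k]:
            return False              # length or symbol mismatch
        tape[i] = tape[k] = "x"       # cross off the matched pair
```

Each iteration scans O(n) cells and crosses off one pair, and there are O(n) iterations, matching the O(n²) bound above.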

Running time
L = {0ⁿ1ⁿ : n ≥ 0}
M: On input x,
- Check that the input is of the form 0*1* (O(n) steps)
- Until everything is crossed off (O(n) times):
  - Move the head to the left end of the tape
  - Cross off the leftmost 0
  - Cross off the following 1 (O(n) steps per pass)
- If everything is crossed off, accept.
Running time: O(n²)

A faster way
L = {0ⁿ1ⁿ : n ≥ 0}
M: On input x,
- Check that the input is of the form 0*1* (O(n) steps)
- Until everything is crossed off (O(log n) times):
  - Move the head to the left end of the tape
  - Find the parities of the number of remaining 0s and 1s (O(n) steps)
  - If one is even and the other is odd, reject
  - Otherwise, cross off every other 0 and every other 1 (O(n) steps)
- If everything is crossed off, accept.
Running time: O(n log n)
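The O(n log n) procedure above can be modeled in Python by tracking only the counts of remaining 0s and 1s (a sketch of the logic, not of the tape mechanics; on the TM, each pass costs O(n) head moves):

```python
def decide_0n1n_fast(x):
    """Parity-halving test for L = {0^n 1^n}: compare the parities of
    the remaining 0s and 1s, then cross off every other 0 and every
    other 1. Returns (accepted, number_of_passes)."""
    n0 = x.count("0")
    n1 = len(x) - n0
    if x != "0" * n0 + "1" * n1:      # input must have the form 0*1*
        return False, 0
    passes = 0
    while n0 > 0 or n1 > 0:
        passes += 1
        if n0 % 2 != n1 % 2:          # one even, the other odd: reject
            return False, passes
        n0 //= 2                      # cross off every other 0
        n1 //= 2                      # cross off every other 1
    return True, passes
```

The halving compares the counts bit by bit, so the number of passes is O(log n), giving O(n log n) total on the tape.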

Running time vs. model
What if we have a two-tape Turing machine?
L = {0ⁿ1ⁿ : n ≥ 0}
M: On input x,
- Check that the input is of the form 0*1* (O(n) steps)
- Copy the 0* part of the input onto the second tape (O(n) steps)
- Until ☐ is reached: cross off the next 1 on the first tape and the next 0 on the second tape
- If both tapes reach ☐ at the same time, accept.
Running time: O(n)

Running time vs. model
How about a Java program?
L = {0ⁿ1ⁿ : n ≥ 0}
M(string x) {
  n = x.len;
  if (n % 2 != 0) reject;
  for (i = 1; i <= n/2; i++) {
    if (x[i] != 0) reject;
    if (x[n-i+1] != 1) reject;
  }
  accept;
}
Running time: O(n log n) on a 1-tape TM, O(n) on a 2-tape TM, O(n) in Java. The running time can change depending on the model of computation!
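A runnable translation of the slide's program (in Python rather than Java, and 0-indexed instead of the slide's 1-indexed pseudocode):

```python
def decide_0n1n_direct(x):
    """Decide L = {0^n 1^n} by scanning from both ends:
    the first half must be all 0s and the second half all 1s.
    O(n) time on a random-access model."""
    n = len(x)
    if n % 2 != 0:               # an odd-length string cannot be 0^n 1^n
        return False
    for i in range(n // 2):
        if x[i] != "0":          # position i from the left
            return False
        if x[n - 1 - i] != "1":  # matching position from the right
            return False
    return True
```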

Measuring running time
What does it mean when we say "this algorithm runs in time T"? One "time unit" means different things in different models:
- Java: if (x > 0) y = 5*y + x;
- RAM machine: write r3;
- 1-tape TM: δ(q3, a) = (q7, b, R)

Efficiency and the Church-Turing thesis
The Church-Turing thesis says that all these models (Java, RAM machine, multitape TM, Turing machine) are equivalent in power... but not in running time!

The Cobham-Edmonds thesis
However, there is an extension of the Church-Turing thesis that says: for any realistic models of computation M1 and M2, M1 can be simulated on M2 with at most polynomial slowdown. So any task that takes time T on M1 can be done in time (say) O(T²) or O(T³) on M2.

Efficient simulation
The running time of a program depends on the model of computation (ordinary TM: slow; multitape TM, RAM machine, Java: fast)... but in the grand scheme, this is irrelevant: every reasonable model of computation can be simulated efficiently on every other.

Example of efficient simulation
Recall simulating multiple tapes on a single tape: the multitape machine M has tape alphabet Γ = {0, 1, ☐}, and the single-tape machine S stores all of M's tapes on one tape, separated by #, using the extended alphabet Γ' = {0, 1, ☐, 0̇, 1̇, ☐̇, #}, where the dotted symbols mark the head positions.

Running time of simulation
Each move of the multitape TM might require traversing the whole single tape. Let s be the rightmost cell ever visited after t steps; for a 3-tape TM, s ≤ 3t + 4. So one step of the 3-tape TM takes O(s) steps of the single-tape TM, and t steps of the 3-tape TM take O(ts) = O(t²) single-tape steps: a quadratic slowdown.

Simulation slowdown
Cobham-Edmonds thesis: M1 can be simulated on M2 with at most polynomial slowdown. For example, Java, RAM machines, and multitape TMs simulate one another with O(t) slowdown, while simulating a multitape TM on a single-tape TM incurs an O(t²) slowdown.

The class P
P is the class of languages that can be decided on an ordinary Turing machine whose running time is polynomial in the input length. By the Cobham-Edmonds thesis, we can replace "ordinary TM" by any realistic model of computation (Java, RAM machine, multitape TM). The regular and context-free languages are all contained in P, which captures the "efficiently decidable" problems.

Examples of languages in P
P is the class of languages that are decidable in polynomial time (in the input length). Examples:
- L01 = {0ⁿ1ⁿ : n > 0}
- LG = {x : x is generated by G}, where G is some context-free grammar
- PATH = {⟨G, s, t⟩ : G is a graph with a path from node s to node t}

Context-free languages in polynomial time
Let L be a context-free language, and let G be a CFG for L in Chomsky normal form. The CYK algorithm fills a table whose cell (s, t) holds the variables that generate the substring x_s ... x_t:
- For the cells (i, i): if there is a production A → x_i, put A in cell (i, i).
- For the cells (s, t) with s < t: if there is a production A → BC where B is in cell (s, j) and C is in cell (j+1, t), put A in cell (s, t).
Accept if the start variable appears in cell (1, n). On input x of length n, the running time is O(n³). ✔
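The table-filling procedure can be sketched as follows. Here a grammar in Chomsky normal form is represented as a dict mapping each variable to its list of bodies, each body being a 1-tuple (terminal) or a 2-tuple (pair of variables); this representation is an assumption for illustration:

```python
def cyk(grammar, start, x):
    """CYK membership test for a CFG in Chomsky normal form.
    table[i][j] = set of variables generating x[i..j] (inclusive).
    O(n^3) table updates for a fixed grammar."""
    n = len(x)
    if n == 0:
        return False  # ignoring an optional S -> epsilon rule
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i in range(n):                        # substrings of length 1
        for A, bodies in grammar.items():
            if (x[i],) in bodies:
                table[i][i].add(A)
    for length in range(2, n + 1):            # longer substrings
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):             # split point
                for A, bodies in grammar.items():
                    for body in bodies:
                        if (len(body) == 2 and body[0] in table[i][k]
                                and body[1] in table[k + 1][j]):
                            table[i][j].add(A)
    return start in table[0][n - 1]
```

For example, a CNF grammar for {0ⁿ1ⁿ : n ≥ 1} is S → AT | AB, T → SB, A → 0, B → 1.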

Paths in polynomial time
PATH = {⟨G, s, t⟩ : G is a graph with a path from node s to node t}. Suppose G has n vertices and m edges.
M: On input ⟨G, s, t⟩, where G is a graph with nodes s and t:
- Place a mark on node s.
- Repeat until no additional nodes are marked (O(n) times): scan the edges of G; if there is an edge (a, b) such that a is marked and b is not marked, mark b (O(m) time per scan).
- If t is marked, accept; otherwise, reject. ✔
Running time: O(nm)
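The marking algorithm translates directly to Python. This sketch assumes the graph is undirected and given as a list of edge pairs:

```python
def path_exists(edges, s, t):
    """The slide's marking algorithm for PATH: mark s, then repeatedly
    scan all edges, marking the unmarked endpoint of any edge whose
    other endpoint is marked. O(n) rounds of O(m) work each, O(nm) total."""
    marked = {s}
    changed = True
    while changed:
        changed = False
        for a, b in edges:
            if a in marked and b not in marked:
                marked.add(b)
                changed = True
            elif b in marked and a not in marked:
                marked.add(a)
                changed = True
    return t in marked
```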

Hamiltonian paths
A Hamiltonian path in G is a path that visits every node exactly once.
UHAMPATH = {⟨G, s, t⟩ : G is a graph with a Hamiltonian path from s to t}
We do not know whether UHAMPATH is in P, and we believe it is not.
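To see why UHAMPATH looks harder than PATH, here is the obvious brute-force decider, a sketch for undirected graphs given as node and edge lists; it tries every ordering of the intermediate nodes, so it runs in roughly n! time rather than polynomial time:

```python
from itertools import permutations

def uham_path(nodes, edges, s, t):
    """Brute-force test for a Hamiltonian path from s to t:
    check every ordering of the remaining nodes. Exponential time;
    no polynomial-time algorithm for UHAMPATH is known."""
    adj = {(a, b) for a, b in edges} | {(b, a) for a, b in edges}
    if s == t:
        return len(nodes) == 1     # a single node is a trivial path
    middle = [v for v in nodes if v != s and v != t]
    for order in permutations(middle):
        walk = [s, *order, t]      # candidate path visiting every node once
        if all((walk[i], walk[i + 1]) in adj for i in range(len(walk) - 1)):
            return True
    return False
```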