Turing Machines. Alan Turing (1912-1954), mathematician and logician.

Models of computing
DFA and NFA - regular languages
Pushdown automata - context-free languages
Linear bounded automata (bounded Turing machines) - context-sensitive languages
Turing machines - unrestricted languages
Each level of languages in the Chomsky hierarchy has a computing model associated with it. As the languages grow more complex, the computing models become more powerful.

Phrase structure grammars A phrase structure, or unrestricted, grammar is a grammar whose productions are of the form α → β, where α and β are any strings over the alphabet of terminals and non-terminals; so basically any kind of production is allowed, even shortening rules. Phrase structure grammars contain within them all the other kinds of grammars and are potentially the most complex. Thus, to recognise them, we need a powerful computational model.
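For reference, the production form can be written out more formally; this is a sketch of the standard definition, with V assumed to stand for the non-terminals and Σ for the terminals (these symbol names are not taken from the slide):

    \alpha \;\rightarrow\; \beta,
    \qquad \alpha \in (V \cup \Sigma)^{*}\, V\, (V \cup \Sigma)^{*},
    \qquad \beta \in (V \cup \Sigma)^{*}

In the standard definition α must contain at least one non-terminal, while β may be any string, including one shorter than α or even empty, which is what permits the shortening rules mentioned above.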

Turing machines Turing machines (TMs) were introduced by Alan Turing in 1936. They are more powerful than both finite automata and pushdown automata. In fact, they are as powerful as any computer we have ever built. The main improvement over PDAs is that they have an infinite accessible memory (in the form of a tape) which can be both read and written.

Basic design of a Turing machine [Figure: the usual finite-state control attached to a tape head that moves over an infinite tape.] The basic improvement over the previous automata is the infinite tape, which can be both read and written. The input data begins on the tape, so no separate input stream is needed (as there is for a DFA or PDA).

Comparing computing models
Finite automata: finite-state control, input string, no memory.
Pushdown automata: finite-state control, input string, stack memory.
Turing machine: finite-state control, input on the tape, (infinite) tape memory.
A TM is equivalent to a PDA with two stacks.

Basic operation of a TM The TM is based on a person working on a long strip of paper (infinite in principle) divided up into cells, each of which contains a symbol from an alphabet. The person uses a pencil with an eraser. Starting at some cell, the person reads the symbol in the cell and decides either to leave it alone or to erase it and write a new symbol in its place. The person can then perform the same action on one of the adjacent cells. The computation continues in this manner, moving from one cell to the next along the paper in either direction.

TM instructions (transition functions) Each Turing machine instruction contains the following five parts:
1. The current machine state.
2. A tape symbol read from the current tape cell.
3. A tape symbol to write into the current tape cell.
4. The next machine state.
5. A direction for the tape head to move.
Two inputs, three outputs: δ(q, a) = (p, x, R)
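A minimal sketch of how such a transition function can be written down in code (Python is assumed here; the names delta, q0 and q1 and the symbols a and x are illustrative, not taken from the slides):

    delta = {
        # (current state, symbol read): (next state, symbol to write, head move)
        ("q0", "a"): ("q1", "x", "R"),
    }
    # Two inputs, three outputs, exactly as in the slide's notation delta(q, a) = (p, x, R).
    next_state, write_symbol, move = delta[("q0", "a")]
    print(next_state, write_symbol, move)   # q1 x R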

Moving around the tape At each step there are three possible actions for the tape head: move left one cell, stay at the current cell, or move right one cell. We'll represent these by the letters L, S, and R, respectively.

Instruction shorthand We can represent a full TM instruction as: i, a, b, L, j This means… If the current state of the machine is i, and if the symbol in the current tape cell is a, then write b into the current tape cell, move left one cell, and go to state j.

The instruction in graphical form (as an edge from state i to state j in the machine's state diagram) means the same as above: if the current state of the machine is i, and if the symbol in the current tape cell is a, then write b into the current tape cell, move left one cell, and go to state j.

Putting the input on the tape An input string is represented on the tape by placing the letters of the string in adjacent tape cells. All other cells of the tape contain the blank symbol, which we'll denote by Λ. The tape head is usually put at the leftmost cell of the input string (unless specified otherwise).

Starting and stopping… As before, there is one start state, which must be specified. However, in this case there is usually only one halt state, which we denote by "Halt."

The Machine stops… A Turing machine stops (halts) when it either (1) enters the Halt state, or (2) enters a state for which there is no valid move. For example, if a Turing machine enters state i and reads a in the current cell, but there is no instruction of the form i, a, …, then the machine stops in state i.

Instantaneous description To describe the TM at any given time we need to know three things: 1) What is on the tape? 2) Where is the tape head? 3) What state is the control in? We'll represent this information as follows: State i: Λ a a b a b Λ. Here, the Λ symbol represents an empty cell (repeated indefinitely to the left and right), and the position of the tape head is underlined.

Turing machine problems In the finite automata and pushdown automata problems we concentrated on accepting and rejecting the strings of a language. Turing machines can do this as well, but most of the problems here involve simpler tasks, generally transformations of the string on the tape: for example, mathematical operations.

Simple Turing machine EXAMPLE: Adding 2 to a natural number. Let's represent natural numbers in unary form (e.g. 3 = 111). We will represent 0 by the empty string Λ. Plan: move to the cell just left of the first 1; change that empty cell to a 1; move left once more and change that cell to a 1 as well; halt.

There are just three instructions…
Rule 1: 0, 1, 1, L, 0   Move left to the blank cell.
Rule 2: 0, Λ, 1, L, 1   Add a 1 and move left.
Rule 3: 1, Λ, 1, S, Halt   Add a 1 and halt.

Graphically… Instantaneous descriptions:
State 0: Λ 1 1 1 Λ   (begin in state 0, head on the leftmost 1)
State 0: Λ 1 1 1 Λ   (after Rule 1, head on the blank cell to the left)
State 1: Λ 1 1 1 1 Λ   (after Rule 2, head one further cell to the left)
Halt: Λ 1 1 1 1 1 Λ   (after Rule 3: 3 + 2 = 5)
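To check this trace mechanically, here is a small simulator sketch. Python is assumed; the function name simulate and the rule encoding are illustrative rather than part of the slides, and the underscore character stands in for the blank symbol Λ:

    BLANK = "_"   # stands in for the blank symbol (Lambda)
    def simulate(rules, tape, head=0, state="0", halt="Halt", max_steps=10_000):
        """Run a TM whose rules map (state, read symbol) to (write symbol, move, next state)."""
        tape = dict(enumerate(tape))                # sparse tape: position -> symbol
        for _ in range(max_steps):
            key = (state, tape.get(head, BLANK))
            if state == halt or key not in rules:   # Halt state, or no valid move
                break
            write, move, state = rules[key]
            tape[head] = write
            head += {"L": -1, "R": 1, "S": 0}[move]
        return state, "".join(tape[i] for i in sorted(tape)).strip(BLANK)
    # The three add-2 rules from the slide, keyed on (state, read symbol):
    add_two = {
        ("0", "1"):   ("1", "L", "0"),     # Rule 1: move left over the 1s
        ("0", BLANK): ("1", "L", "1"),     # Rule 2: add a 1 and move left
        ("1", BLANK): ("1", "S", "Halt"),  # Rule 3: add a 1 and halt
    }
    print(simulate(add_two, "111"))        # ('Halt', '11111'), i.e. 3 + 2 = 5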

Example: Adding 1 to a binary natural number. Binary numbers: 1 = 1, 2 = 10, 3 = 11, 4 = 100, 5 = 101, 6 = 110.

The algorithm is…
1. Move to the right end of the string.
2. Repeat: while the current cell contains 1, write 0 and move left, until the current cell contains 0 or Λ.
3. Write a 1.
4. Move to the left end of the string and halt.
WHY IS THIS THE ALGORITHM? (Adding 1 turns the trailing block of 1s into 0s and carries a 1 into the next position to the left.)

The TM instructions 0, 0, 0, R, 0 Scan right. 0, , , L, 1 Found right end of string. 1, 1, 0, L, 1 Write 0 and move left with carry bit. 1, 0, 1, L, 2 Write 1, done adding. 1, , 1, S, Halt Write 1, done and in proper position.

The TM instructions (continued) Otherwise, move to the leftmost character in the string:
2, 0, 0, L, 2   Scan left.
2, 1, 1, L, 2   Scan left.
2, Λ, Λ, R, Halt   Reached the left end of the string; halt.

Instantaneous descriptions: 3 + 1 = 4
State 0: Λ 1 1 Λ   (start; scan right in state 0)
State 1: Λ 1 1 Λ   (found the right end; head on the last 1)
State 1: Λ 1 0 Λ   (carry: write 0 and move left)
State 1: Λ 0 0 Λ   (carry again: write 0 and move left)
Halt: Λ 1 0 0 Λ   (write the carried 1 and halt: 100 = 4)
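The same simulate sketch can be reused for the increment machine; it is repeated here in condensed form so the snippet stands on its own (again Python, with the underscore standing in for Λ):

    BLANK = "_"
    def simulate(rules, tape, head=0, state="0", halt="Halt", max_steps=10_000):
        tape = dict(enumerate(tape))                # sparse tape: position -> symbol
        for _ in range(max_steps):
            key = (state, tape.get(head, BLANK))
            if state == halt or key not in rules:
                break
            write, move, state = rules[key]
            tape[head] = write
            head += {"L": -1, "R": 1, "S": 0}[move]
        return state, "".join(tape[i] for i in sorted(tape)).strip(BLANK)
    increment = {
        ("0", "0"):   ("0", "R", "0"),      # scan right
        ("0", "1"):   ("1", "R", "0"),      # scan right
        ("0", BLANK): (BLANK, "L", "1"),    # found the right end of the string
        ("1", "1"):   ("0", "L", "1"),      # write 0 and move left, carrying the 1
        ("1", "0"):   ("1", "L", "2"),      # write 1, done adding
        ("1", BLANK): ("1", "S", "Halt"),   # write 1 in a new leftmost cell and halt
        ("2", "0"):   ("0", "L", "2"),      # scan left
        ("2", "1"):   ("1", "L", "2"),      # scan left
        ("2", BLANK): (BLANK, "R", "Halt"), # reached the left end of the string
    }
    print(simulate(increment, "11"))        # ('Halt', '100'),  i.e. 3 + 1 = 4
    print(simulate(increment, "1011"))      # ('Halt', '1100'), i.e. 11 + 1 = 12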

Non-deterministic Turing machines We can have non-deterministic Turing machines as well, just as for the other automata we've considered. Deterministic Turing machines are just as powerful as non-deterministic ones, and so they accept the same languages: those generated by phrase structure grammars. In addition, there are many other, similar machines which can be shown to be equivalent to the simple Turing machine.

Nondeterministic TMs A nondeterministic Turing machine M can have several options at every step. It is defined by the 7-tuple (Q, Σ, Γ, δ, q0, qaccept, qreject), with
Q a finite set of states
Σ a finite input alphabet (without "_")
Γ a finite tape alphabet with {_} ∪ Σ ⊆ Γ
q0 ∈ Q the start state
qaccept ∈ Q the accept state
qreject ∈ Q the reject state
δ the transition function δ: Q\{qaccept, qreject} × Γ → P(Q × Γ × {L, R})

Computing with Nondeterministic TMs The evolution of the nondeterministic TM is represented by a tree of configurations (rather than a single path). [Figure: a tree of configurations C1, C2, C3, … branching at time steps t = 1, 2, 3, with leaves labelled "accept" or "reject".] If there is (at least) one accepting leaf, then the TM accepts.

Simulating Nondeterministic TMs with Deterministic Ones We want to search every path down the tree for accepting configurations. Bad idea: "depth first". This approach can get lost in never-halting paths. Good idea: "breadth first". For time steps 1, 2, … we list all possible configurations of the nondeterministic TM. The simulating TM accepts when it lists an accepting configuration.
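A rough sketch of that breadth-first search in ordinary code (Python is assumed; the configuration encoding and the name nd_delta are illustrative, and a real simulating TM would of course keep this bookkeeping on its own tapes rather than in Python lists):

    from collections import deque
    BLANK = "_"
    def nd_accepts(nd_delta, w, q0="q0", accept="qaccept", reject="qreject", max_configs=100_000):
        """Breadth-first search over configurations (state, tape, head position) of a nondeterministic TM.
        nd_delta maps (state, symbol) to a set of (next state, write symbol, move) options."""
        queue = deque([(q0, tuple(w) if w else (BLANK,), 0)])
        for _ in range(max_configs):                # bound the search so the sketch always terminates
            if not queue:
                return False                        # every branch rejected or got stuck
            state, tape, head = queue.popleft()     # FIFO order = level-by-level (breadth-first)
            if state == accept:
                return True                         # one accepting configuration is enough
            if state == reject:
                continue
            for nxt, write, move in nd_delta.get((state, tape[head]), set()):
                new_tape = list(tape)
                new_tape[head] = write
                new_head = head + (1 if move == "R" else -1)
                if new_head < 0:                    # grow the tape with blanks as needed
                    new_tape.insert(0, BLANK); new_head = 0
                elif new_head == len(new_tape):
                    new_tape.append(BLANK)
                queue.append((nxt, tuple(new_tape), new_head))
        return False                                # undecided within the bound

The queue is what makes this breadth first: configurations are expanded in the order they were generated, so an accepting configuration at depth t is found after finitely many steps even if some other branch never halts.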

Turing machine variations: the two-stack pushdown automaton. [Figure: a finite control reading the input, with two stacks attached.] The right half of the tape is kept on one stack, and the left half on the other. As we move along the tape, we pop characters off one stack and push them onto the other.

More complex applications Understanding the variations is useful, because many jobs are easier to do with more complicated machines. For example, multiplying two numbers to get a third would be difficult to do with a simple Turing machine, but is fairly straightforward with a three-tape machine. Plan: if either number is zero, the product is zero; otherwise, use the first number as a counter to repeatedly add the second number to itself.
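The plan itself is easy to state in ordinary code (Python assumed), which is exactly the kind of bookkeeping the extra tapes make convenient:

    def multiply(a, b):
        """Multiplication by the slide's plan: repeated addition driven by a counter."""
        if a == 0 or b == 0:
            return 0                  # if either number is zero, the product is zero
        product = 0
        for _ in range(a):            # the first number counts how many additions to perform
            product += b              # each pass adds the second number once
        return product
    print(multiply(3, 4))   # 12
    print(multiply(0, 7))   # 0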

The Church-Turing thesis ‘Anything which is intuitively computable can be computed by a Turing machine.’ What do we mean by ‘intuitively computable’ ? It means that we can describe a procedure which will solve the problem. It’s a thesis, or a conjecture, not a theorem!

Importance of the Church-Turing Thesis The Church-Turing thesis marks the end of a long sequence of developments that concern the notions of “way-of-calculating”, “procedure”, “solving”, “algorithm”. Goes back to Euclid’s GCD algorithm (300 BC). For a long time, this was an implicit notion that defied proper analysis.

Computable Functions A function f: Σ* → Σ* is a TM-computable function if there is a Turing machine that on every input w ∈ Σ* halts with just f(w) on the tape. The usual computations (addition, multiplication, sorting, minimization, etc.) are all TM-computable. Important here is that alterations to TMs, like "given a TM M, we can make an M′ such that…", can also be described by computable functions, which thus have f(⟨M⟩) = ⟨M′⟩.

Computational Equivalence No one has ever invented a more powerful computing model than a Turing machine! A number of models have been invented which are equivalent, in the sense that they solve the same class of problems (the λ-calculus, etc.). However, we believe that there isn't a more powerful computing model!

Describing TM Programs Three levels of describing algorithms:
formal (state diagrams, CFGs, et cetera)
implementation (pseudo-Pascal)
high-level (coherent and clear English)
Describing input/output format: TMs allow only strings in Σ* as input/output. If our X and Y are of another form (a graph, a Turing machine, a polynomial), then we use ⟨X, Y⟩ to denote 'some kind of encoding in Σ*'.

Deciding Regular Languages The acceptance problem for deterministic finite automata is defined by: A_DFA = { ⟨B, w⟩ | B is a DFA that accepts w }. Note that this language deals with all possible DFAs and inputs w, not a specific instance. Of course, A_DFA is a TM-decidable language.
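A sketch of why A_DFA is TM-decidable: a decider can simply simulate the encoded DFA B on w, one transition per input symbol, and so always halts (Python assumed; the tuple encoding of a DFA used below is illustrative):

    def dfa_accepts(dfa, w):
        """Decide one instance <B, w> of A_DFA by simulating the DFA B on the string w."""
        states, alphabet, delta, start, accepting = dfa
        state = start
        for symbol in w:
            if symbol not in alphabet:
                return False
            state = delta[(state, symbol)]   # one table lookup per input symbol
        return state in accepting
    # Example DFA over {0,1} accepting the strings with an even number of 1s.
    even_ones = (
        {"e", "o"}, {"0", "1"},
        {("e", "0"): "e", ("e", "1"): "o", ("o", "0"): "o", ("o", "1"): "e"},
        "e", {"e"},
    )
    print(dfa_accepts(even_ones, "1101"))  # False (three 1s)
    print(dfa_accepts(even_ones, "1001"))  # True  (two 1s)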

Other Computational Models We can consider many other ‘reasonable’ models of computation: DNA computing, neural networks, quantum computing… Experience teaches us that every such model can be simulated by a Turing machine. Church-Turing Thesis: The intuitive notion of computing and algorithms is captured by the Turing machine model.

Decidability While we believe that Turing machines can compute anything which is computable in principle, they cannot do everything! One example is the halting problem, which has to do with whether a Turing machine will stop for a given input or will keep computing forever. For example, we cannot build a Turing machine that decides whether another Turing machine will halt on a given input.

Countable Set of TMs A set S is countably infinite if there is a bijection between {0,1,2,…} and S. A set S is countable if you can make a list s1, s2, … of all the elements of S. The sets ℕ, ℕ², {0,1}*, and Σ* are all countably infinite. Example for {0,1}*, using the lexicographical ordering: {0,1}* = {ε, 0, 1, 00, 01, 10, 11, 000, …}
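The list can be made completely explicit, which is all that countability asks for; a small sketch (Python assumed) that enumerates {0,1}* in this length-then-lexicographic order:

    from itertools import count, product
    def all_binary_strings():
        """Yield '', '0', '1', '00', '01', '10', '11', '000', ...; every element of {0,1}* appears exactly once."""
        for length in count(0):
            for bits in product("01", repeat=length):
                yield "".join(bits)
    gen = all_binary_strings()
    print([next(gen) for _ in range(8)])   # ['', '0', '1', '00', '01', '10', '11', '000']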

Uncountable Sets There are infinite sets that are not countable. Typical examples are ℝ, P(ℕ) and P({0,1}*). We prove this by a diagonalization argument. In short, if S is countable, then you can make a list s1, s2, … of all elements of S. Diagonalization shows that, given such a list, there will always be an element x of S that does not occur in s1, s2, …

Uncountability of P(ℕ) The set P(ℕ) contains all the subsets of {0,1,2,…}. Each subset X can be identified by an infinite string of bits x0x1x2… such that xj = 1 iff j ∈ X. There is a bijection between P(ℕ) and {0,1}^∞. Proof by contradiction: assume P(ℕ) is countable. Hence there must exist a surjection F from ℕ to the set of infinite bit strings: "there is a list of all infinite bit strings."

Diagonalization Try to list all possible infinite bit strings. [Table: a purported list of infinite bit strings, one per row.] Look at the bit string on the diagonal of this table: 0101… The negation of this string ("1010…") does not appear in the table.

No Surjection ℕ → {0,1}^∞ Let F be a function ℕ → {0,1}^∞. F(0), F(1), F(2), … are all infinite bit strings. Define the infinite string Y = Y0Y1Y2… by Yj = NOT(j-th bit of F(j)). On the one hand Y ∈ {0,1}^∞, but on the other hand, for every j we know that F(j) ≠ Y, because F(j) and Y differ in the j-th bit. So F cannot be a surjection: {0,1}^∞ is uncountable.
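The diagonal construction of Y is concrete enough to run on any finite prefix of any candidate list (Python assumed; the particular F below is only an illustration of "some list of infinite bit strings"):

    def diagonal_prefix(F, n):
        """First n bits of Y, where Y_j = NOT(j-th bit of F(j)).
        F(j) is the j-th listed bit string, represented as a function from positions to bits."""
        return [1 - F(j)(j) for j in range(n)]
    F = lambda j: (lambda k: (j * k) % 2)           # illustrative list: bit k of F(j) is (j*k) mod 2
    Y = diagonal_prefix(F, 6)
    print(Y)                                        # [1, 0, 1, 0, 1, 0]
    print(all(Y[j] != F(j)(j) for j in range(6)))   # True: Y differs from every F(j) in position j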

Uncountability We just showed that it is impossible to have a surjection from ℕ to the set {0,1}^∞. Similar proofs are possible for the uncountability of the sets ℝ, P({0,1}*), et cetera. What does this have to do with Turing machine computability?

Counting TMs Observation: every TM has a finite description, so there is only a countable number of different TMs. (A description ⟨M⟩ can consist of a finite string of bits, and the set {0,1}* is countable.) Our definition of Turing-recognizable languages gives a mapping between the set of TMs {M1, M2, …} and the set of languages {L(M1), L(M2), …} ⊆ P(Σ*). Question: how many languages are there?

Counting Languages There are uncountably many different languages over the alphabet Σ = {0,1} (the languages L ⊆ {0,1}*). With the lexicographical ordering ε, 0, 1, 00, 01, … of Σ*, every L coincides with an infinite bit string via its characteristic sequence χL. Example: for L = {0, 00, 01, 000, 001, …} we have χL = 0101100…
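As a small check of the example, the characteristic sequence can be computed from any membership test; here Python is assumed and, since the slide only lists a few elements, L is taken as an assumption to be the set of strings that begin with 0, which is consistent with the elements shown:

    from itertools import count, product
    def shortlex():                                   # the ordering epsilon, 0, 1, 00, 01, ...
        for n in count(0):
            for bits in product("01", repeat=n):
                yield "".join(bits)
    in_L = lambda s: s.startswith("0")                # ASSUMPTION: L = strings beginning with 0
    words = shortlex()
    chi = "".join("1" if in_L(next(words)) else "0" for _ in range(7))
    print(chi)                                        # 0101100, matching the slide's example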

Counting TMs and Languages There is a bijection between the set of languages over the alphabet Σ = {0,1} and the uncountable set of infinite bit strings {0,1}^∞. Hence there are uncountably many different languages L ⊆ {0,1}*, and there is no surjection possible from the countable set of TMs to the set of languages. Specifically, the mapping M ↦ L(M) is not surjective. Conclusion: there are languages that are not Turing-recognizable. (A lot of them.)

Is This Really Interesting? We now know that there are languages that are not Turing-recognizable, but we do not yet know what kind of languages these are. Are there interesting languages for which we can prove that no Turing machine recognizes them?