EECS 598: Background in Theory of Computation Igor L. Markov and John P. Hayes

Outline
- On computational models
- More on terminology and notation
- Representing states in a computer
- Tensor products (again)
- Postulates of Q.M. specified for Q.C.
- Handling the probabilistic nature of quantum computation
- Entanglement

Parameters of algorithms
- Execution time
  - Wall-clock time
  - Asymptotic time
- Memory (size)
- Ease of programming
- Worst, best, average, amortized, typical cases
- Multiple objectives -> often no single winner
  - Cost trade-offs

Principle of Invariance
Challenges for analysis of [time] complexity:
- Different hardware executes a different number of instructions per second
- Need a uniform measure of time complexity
Solution: measure constant-time steps
- Will be off by at most a constant factor
Principle of invariance:
- Two implementations of an algorithm (when executed on two actual computers) will not differ in time complexity by more than a constant factor
- This is not a theorem! (unlike, e.g., mathematical induction)
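The constant-factor claim can be made concrete by counting steps instead of seconds. A minimal sketch (the two linear-search variants and their per-element step charges are invented for illustration): one implementation charges one elementary step per element, the other charges three, and the ratio of their step counts stays constant no matter how large the input grows.

```python
# Two "implementations" of the same algorithm (linear search) that count
# their own elementary steps; counts differ by at most a constant factor.

def search_v1(xs, target):
    steps = 0
    for x in xs:
        steps += 1            # one comparison per element
        if x == target:
            return steps
    return steps

def search_v2(xs, target):
    steps = 0
    for i in range(len(xs)):
        steps += 3            # indexed variant: bound check + load + compare
        if xs[i] == target:
            return steps
    return steps

for n in (100, 1000, 10000):
    xs = list(range(n))
    s1 = search_v1(xs, -1)    # worst case: target absent
    s2 = search_v2(xs, -1)
    print(n, s1, s2, s2 / s1) # the ratio stays 3.0 for every n
```

Both variants are Theta(n); the invariance principle says any uniform step-counting convention can only move such counts by a constant.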

Comparing algorithms
- POI: a similar situation with memory
- POI gives hope to compare algorithms, but does not provide a good mechanism
What we want:
- Equivalence relations for algorithms ("as fast")
- Order relations ("as fast or faster")
Possible ideas:
- Count elementary operations
- Count elementary units of memory
- Are all operations counted alike?

Elementary Operations
What are elementary operations?
- Operations whose execution time is bounded by a constant that does not depend on input values
- Actual "seconds per operation" may be disregarded
- But the very selection of elementary operations is still hardware-dependent!
What about +, -, *, /, <, =, ==, etc. on integers (or doubles)?
- Yes, if integers have a bounded number of bits, e.g., 32 (or 64)
- No, if the number of bits is not bounded
What about function calls?
What about memory accesses, e.g., a[i]?
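The bounded-bits caveat can be demonstrated directly. A sketch (assuming schoolbook ripple-carry addition as the cost model) showing why "+" on unbounded integers is not an elementary operation: the number of one-bit steps grows with the bit-length of the operands.

```python
# Add two non-negative integers bit by bit, counting one-bit full-adder steps.

def add_with_step_count(a, b):
    steps = 0
    result = 0
    carry = 0
    shift = 0
    while a or b or carry:
        bit_a, bit_b = a & 1, b & 1
        s = bit_a ^ bit_b ^ carry                       # sum bit
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
        result |= s << shift
        a >>= 1
        b >>= 1
        shift += 1
        steps += 1
    return result, steps

for bits in (32, 64, 1024):
    x = (1 << bits) - 1          # an all-ones number with `bits` bits
    total, steps = add_with_step_count(x, 1)
    print(bits, steps)           # step count grows linearly with bit-length
```

For 32-bit ints the step count is bounded by a constant (at most 33), so "+" may be treated as elementary; for Python's unbounded ints it may not.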

Computational Models
A computational model is determined by:
- data representation and storage mechanisms
- available elementary operations
Most popular examples:
- Sequential and combinational Boolean circuits - EECS 270, 478
- [Non-]deterministic finite automata (DFAs/NFAs) - EECS 376(476), 478
- Push-down automata - EECS 376(476)
- Turing machines - EECS 376(476)
- C programs, C++ programs - EECS 280, 281(380), 477
Computational models originate in technologies, physics, biology, etc.:
- Parallel and distributed computing
- Optical and DNA computing
- Analog computing
- Quantum computing
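One of the listed models can be made concrete in a few lines. A hypothetical sketch of a DFA simulator (the states, alphabet, and transition table below are invented for illustration): this machine accepts binary strings containing an even number of 1s.

```python
# A deterministic finite automaton as a concrete computational model:
# the data representation is a (state, symbol) -> state table, and the
# only elementary operation is following one transition per input symbol.

def run_dfa(transitions, start, accepting, word):
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_dfa(even_ones, "even", {"even"}, "1010"))  # True: two 1s
print(run_dfa(even_ones, "even", {"even"}, "111"))   # False: three 1s
```

Note the model's limits: the simulator uses constant memory beyond the input, which is exactly why DFAs recognize only regular languages.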

Measures of Algorithm Efficiency Based on Allowed Inputs
Each set of input data gives a new measure of efficiency!
- Best cases
- Worst cases
- Representative / typical inputs ("benchmarks")
  - application-specific and domain-specific
- Averaged measures (give "expected efficiency")
  - Consider different inputs in independent experiments
  - Average over a particular distribution of inputs
    - Some inputs may happen more frequently than others (application-specific)
  - Formal "average case": averaged over the uniform distribution of all allowed inputs
  - Empirical evaluation by sampling
    - Generate random samples from the target distribution
    - Much faster than enumeration; theorem: results are very close
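The exact-average vs. sampling comparison can be sketched for linear search (assumptions: the target is equally likely to sit at any index, and cost is the number of comparisons). Enumeration gives the formal average case; a random sample of the same distribution lands very close to it, much faster for large input spaces.

```python
import random

def comparisons(n, pos):
    """Cost of linear search for an element at index pos in a list of length n."""
    return pos + 1  # one comparison per scanned element

def exact_average(n):
    # Formal average case: enumerate all n equally likely target positions.
    return sum(comparisons(n, p) for p in range(n)) / n

def sampled_average(n, trials, rng):
    # Empirical estimate: sample target positions from the same distribution.
    return sum(comparisons(n, rng.randrange(n)) for _ in range(trials)) / trials

n = 1000
print(exact_average(n))                                  # (n + 1) / 2 = 500.5
print(sampled_average(n, 20000, random.Random(42)))      # close to 500.5
```

For linear search the enumeration is cheap, but for, say, all 2^n bit-string inputs of a program, sampling is the only practical option.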

Asymptotic Complexity
Recall that resource consumption is typically described by positive monotonic functions.
Asymptotic complexity measures ("on the order of"):
- Main idea: given f(n) and g(n), does the ratio f(n)/g(n) grow with n or stay bounded?
  - Only pay attention to the limit n -> ∞
  - If f(n)/g(n) ≤ const as n -> ∞, then f(n) is at least as good as g(n)
  - If f(n) is at least as good as g(n) and g(n) is at least as good as f(n), then the two functions are equivalent
- Coarser than counts of elementary "things" (larger equivalence classes)
  - e.g., "200 N steps" and "3 N + log(N) steps" are now equivalent
- Much greater hardware independence
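The ratio test from the slide can be checked numerically. A small sketch using the slide's own example step counts: f(n) = 3n + log2(n) and g(n) = 200n have a ratio that approaches a constant, so the two step counts are asymptotically equivalent (both Theta(n)), while n^2 against n grows without bound.

```python
import math

def f(n):
    return 3 * n + math.log2(n)   # "3 N + log(N) steps"

def g(n):
    return 200 * n                # "200 N steps"

for n in (10, 10**3, 10**6):
    print(n, f(n) / g(n))         # tends to the constant 3/200 = 0.015

# By contrast, n^2 / n = n grows without bound: n^2 is strictly worse than n.
print((10**6) ** 2 / 10**6)
```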

Complexity of Problems
In a given computational model, for a given problem, consider the best possible algorithms:
- In fact, a whole equivalence class of algorithms in terms of asymptotic worst-case complexity
- Call that the complexity of the problem
Can often consider the same problem in multiple computational models:
- Will they have the same complexity?

Problem reductions
Reduction of problem X to problem Y:
- Every instance of problem X is translated into an instance of problem Y
- Can apply any algorithm to solve Y
- Every solution to a translated instance is translated back
Complexity of problem X is no more than the sum of:
- Complexity of translating instances
- Complexity of Y
- Complexity of translating solutions back
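A toy sketch of the three-step pattern (the MAXIMUM-to-SORTING reduction is a standard textbook example, not from the slides): translate the instance, solve Y with any correct algorithm, translate the solution back.

```python
def solve_sorting(xs):
    # Any correct algorithm for problem Y (sorting) may be plugged in here.
    return sorted(xs)

def solve_maximum_via_sorting(xs):
    instance_for_y = list(xs)               # translate the X-instance to a Y-instance
    y_solution = solve_sorting(instance_for_y)
    return y_solution[-1]                   # translate the Y-solution back

print(solve_maximum_via_sorting([3, 1, 4, 1, 5]))  # 5
```

The cost bound from the slide is visible here: finding the maximum costs at most the two O(n) translations plus whatever sorting costs, so MAXIMUM is no harder than SORTING (though of course it is strictly easier).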

Simulating Computational Models
- Write a C program that simulates a Turing machine
- Write a C interpreter as a program for a Turing machine
- The computational models are equivalent
  - Need to be careful about bit-sizes of basic types
Modern Church-Turing thesis:
- Whatever you can do in terms of computation can be done on a Turing machine "as efficiently"
- "As efficiently" allows poly-time reductions (next slide)
How would you prove/disprove this?
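The first bullet is small enough to sketch directly (in Python rather than C; the machine's states and rules are invented for illustration). A minimal Turing-machine simulator: a transition table over (state, symbol), a tape, and a head. This machine flips every bit on its tape and halts when no rule applies.

```python
# rules: (state, symbol) -> (new_state, new_symbol, head move in {-1, +1})

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:
            break                     # no applicable rule: the machine halts
        state, tape[head], move = rules[(state, symbol)]
        head += move
    cells = [tape[i] for i in sorted(tape) if tape[i] != blank]
    return "".join(cells)

flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_tm(flip_bits, "1011"))  # "0100"
```

The reverse direction (a C interpreter on a Turing machine) is far more tedious but conceptually the same, which is what makes the two models equivalent.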

NP-complete problems
- P is the class of problems whose solutions can be found in polynomial time
- NP is the class of problems whose solutions can be verified in polynomial time
- Polynomial-time reductions: translations take poly-time
- NP-complete problems are those to which every other problem in NP reduces in poly-time
- NP = P? is an open question
- "Orphan" problems: those in NP \ P but not NP-complete, assuming NP != P
  - Number factoring
  - Graph automorphism
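"Verifiable in polynomial time" has a concrete reading: for CNF-SAT, a certificate is a variable assignment, and checking it takes time linear in the formula size, even though finding one may be hard. A sketch (the formula encoding is an assumption: a list of clauses, each a list of signed variable indices, DIMACS-style):

```python
def verify_sat(clauses, assignment):
    """Check that every clause contains at least one satisfied literal."""
    def literal_true(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    return all(any(literal_true(lit) for lit in clause) for clause in clauses)

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))   # True
print(verify_sat(formula, {1: False, 2: False, 3: False})) # False
```

The asymmetry on display is exactly the P vs. NP question: the verifier is trivially fast, but no polynomial-time *finder* of satisfying assignments is known.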

So, how could you disprove the modern Church-Turing thesis?
Find a problem that has different worst-case asymptotic complexity:
- In terms of Turing machines
- In terms of, say, quantum computers
  - Need to define a computational model first!
Back to quantum mechanics:
- Measurements are not deterministic
- So, need to include randomized algorithms
- BPP (answers can be correct with probability 1/2 + ε)
More complexity classes:
- PSPACE
- Decidable problems

Back to Quantum Mechanics
Details of the computational model should follow from the postulates of quantum mechanics:
- Postulate 1: State space (complex Hilbert space)
  - Will need only finite-dimensional vector spaces
- Postulate 2: Evolutions = unitaries
  - Think of matrices
- Postulate 3: Quantum measurement
  - Two forms: Hermitian observables and orthonormal decompositions
- Postulate 4: Composite systems
  - Tensor products
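Postulate 2 is easy to check by hand in the finite-dimensional case: an evolution matrix U must satisfy U U-dagger = I. A pure-Python sketch (no linear-algebra library assumed), using the Hadamard gate as the example and a non-unitary matrix as the counterexample:

```python
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dagger(a):
    # Conjugate transpose.
    return [[a[j][i].conjugate() for j in range(len(a))] for i in range(len(a))]

def is_unitary(u, tol=1e-12):
    n = len(u)
    prod = mat_mul(u, dagger(u))
    return all(abs(prod[i][j] - (1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

s = 2 ** -0.5
hadamard = [[s, s], [s, -s]]
print(is_unitary(hadamard))           # True
print(is_unitary([[1, 1], [0, 1]]))   # False: does not preserve norms
```

Unitarity is exactly the condition that evolution preserves the norm of a state vector, which keeps measurement probabilities summing to 1.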

Computing with functions
Functions form a Hilbert space:
- It is infinite-dimensional and much harder to study
- The inner product is defined via integrals
- Examples of Hermitian operators: differential operators
- Analogues of matrices: "kernels"
- Not nearly as nice as finite-dimensional matrices
How do we get finite-dimensional spaces?
- Restrict allowed states to a finite-dimensional subspace
- Special case: restrict functions to finite sets
  - All notions map to common finite-dimensional notions

Tensor products
- A state of a quantum system is f(x)
- A state of a composite system is f(x, y)
- We would like some construction that gives us the space of f(x, y) from the spaces of g(x) and h(y)
Tensor product:
- Take bases g1(x), g2(x), … and h1(y), h2(y), …
- The pairwise products gi(x) hj(y) form a basis of two-variable functions
Tensor product of operators
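For operators on finite-dimensional spaces, the tensor product is the Kronecker product of matrices. A pure-Python sketch (the gate names are standard, the function itself is an illustration): because basis vectors of the composite space are pairwise products of basis vectors, the product of an m x m and an n x n matrix is (m*n) x (m*n).

```python
def kron(a, b):
    """Kronecker (tensor) product of two matrices given as lists of rows."""
    rows_a, cols_a = len(a), len(a[0])
    rows_b, cols_b = len(b), len(b[0])
    return [[a[i // rows_b][j // cols_b] * b[i % rows_b][j % cols_b]
             for j in range(cols_a * cols_b)]
            for i in range(rows_a * rows_b)]

x_gate = [[0, 1], [1, 0]]         # bit flip on one qubit
identity = [[1, 0], [0, 1]]

# Identity on the first qubit, X on the second: a 4x4 operator.
for row in kron(identity, x_gate):
    print(row)
```

This is the composite-system construction of Postulate 4 in matrix form: two one-qubit (2-dimensional) systems combine into one 4-dimensional system.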