Parallel computation Section 10.5 Giorgi Japaridze Theory of Computability.

Introduction (10.5.a)

A parallel computer is one that can perform multiple operations simultaneously. Such computers may solve certain problems much faster than sequential computers, which can do only one operation at a time. In practice the distinction between the two is somewhat blurred, because most real computers, including "sequential" ones, are designed to exploit some parallelism as they execute individual instructions (pipelining, for example). Here we focus on massive parallelism, in which a huge number of processing elements (think millions or more) actively participate in a single computation.

One of the most popular models in theoretical work on parallel algorithms is the Parallel Random Access Machine, or PRAM. In the PRAM model, idealized processors with a simple instruction set patterned on actual computers interact via a shared memory. Our textbook, however, uses an alternative, simpler model of parallel computers: Boolean circuits, already seen in Section 9.3.
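To make the sequential/parallel contrast concrete, here is a small simulation (our own illustration, not from the text): a balanced pairwise reduction sums n numbers in about log₂ n parallel time steps, whereas a sequential machine needs n − 1 additions.

```python
def parallel_sum_steps(values):
    """Simulate a parallel reduction; return (total, number of parallel steps)."""
    vals = list(values)
    steps = 0
    while len(vals) > 1:
        # All disjoint pairs are added "at once" -- one parallel time step.
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)] + \
               ([vals[-1]] if len(vals) % 2 else [])
        steps += 1
    return vals[0], steps

total, steps = parallel_sum_steps(range(8))  # 0+1+...+7 = 28 in 3 parallel steps
```

With a million values, the loop finishes in 20 parallel steps, which is the kind of speedup massive parallelism is after.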

Uniform Boolean circuits as parallel computers (10.5.b)

In the Boolean circuit model of a parallel computer, we take each gate to be an individual processor, so we define the processor complexity of a Boolean circuit to be its size. We consider each processor to compute its function in a single time step, so we define the parallel time complexity of a Boolean circuit to be its depth. Any particular circuit has a fixed input size (the number of input variables), so we use circuit families, as defined in Definition 9.27, for recognizing languages.

We do, however, need to impose a technical requirement on circuit families so that they correspond to parallel computation models such as PRAMs, where a single machine is capable of handling all input lengths. That requirement states that we can easily obtain all members of a circuit family. This uniformity requirement is reasonable because knowing that a small circuit exists for recognizing certain elements of a language isn't very useful if the circuit itself is hard to find. That leads us to the following definition.

Definition. A family of circuits (C₁, C₂, …) is uniform if some log space transducer T outputs ⟨Cₙ⟩ when T's input is 1ⁿ. We say that a language has simultaneous size-depth circuit complexity at most (f(n), g(n)) if a uniform circuit family exists for that language with size complexity f(n) and depth complexity g(n).
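The two complexity measures can be sketched in code (the gate encoding below is our own, purely for illustration): size counts the gates, and depth is the longest path from an input variable to the gate in question.

```python
# A circuit as a dict mapping each gate name to (operation, list of inputs).
circuit = {                      # computes (x1 AND x2) OR (NOT x3)
    'g1': ('AND', ['x1', 'x2']),
    'g2': ('NOT', ['x3']),
    'out': ('OR', ['g1', 'g2']),
}

def size(circ):
    """Processor complexity: the number of gates (input variables not counted)."""
    return len(circ)

def depth(circ, gate):
    """Parallel time complexity: longest input-to-gate path length."""
    if gate not in circ:         # an input variable sits at depth 0
        return 0
    _, ins = circ[gate]
    return 1 + max(depth(circ, g) for g in ins)

assert size(circuit) == 3 and depth(circuit, 'out') == 2
```

In the parallel reading, all gates at the same depth fire in the same time step, so this circuit takes 2 parallel steps using 3 processors.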

The class NC (10.5.c)

Many interesting problems have size-depth complexity (O(nᵏ), O(logᵏ n)) for some constant k. Such problems may be considered highly parallelizable with a moderate number of processors. That prompts the following definition.

Definition. For i ≥ 1, NCⁱ is the class of languages that can be decided by a uniform family of circuits with polynomial size and O(logⁱ n) depth. NC is the class of languages that are in NCⁱ for some i. Functions that are computed by such circuit families are called NCⁱ computable or NC computable.
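A standard example of the NC¹ pattern (our illustration, not the book's): the OR of n bits computed by a balanced binary tree of fan-in-2 OR gates uses n − 1 gates (polynomial size) at depth ⌈log₂ n⌉.

```python
import math

def or_tree_depth(n):
    """Parallel depth of a balanced fan-in-2 OR tree over n inputs."""
    return math.ceil(math.log2(n)) if n > 1 else 0

def or_tree_size(n):
    """Each gate merges two subtrees, so n inputs need n - 1 gates."""
    return n - 1

assert or_tree_depth(1024) == 10 and or_tree_size(1024) == 1023
```

Size O(n) and depth O(log n) meet the (polynomial size, O(log¹ n) depth) bound, which is why such tree-shaped computations land in NC¹.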

Main theorems (10.5.d)

Theorem. NC¹ ⊆ L.
Proof idea. We sketch a log space algorithm to decide a language A in NC¹. On input w of length n, the algorithm can construct the description, as needed, of the nth circuit in the uniform circuit family for A. Then the algorithm can evaluate the circuit using a depth-first search from the output gate.

Theorem. NL ⊆ NC².
Proof idea. Omitted.

Theorem. NC ⊆ P.
Proof idea. A polynomial time algorithm can run the log space transducer to generate circuit Cₙ and simulate it on an input of length n.

Open problem: NC = P? Equality here would be surprising, because it would imply that all polynomial time solvable problems are highly parallelizable.
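The depth-first evaluation in the first proof idea can be sketched as follows (the gate encoding is our own; a genuine log space algorithm would not store the whole circuit, but would re-query the log space transducer for each gate description as it is needed):

```python
def evaluate(circ, inputs, gate='out'):
    """Evaluate a circuit by DFS from the output gate down to the inputs."""
    if gate in inputs:                       # leaf: an input variable
        return inputs[gate]
    op, ins = circ[gate]
    vals = [evaluate(circ, inputs, g) for g in ins]
    if op == 'AND':
        return all(vals)
    if op == 'OR':
        return any(vals)
    if op == 'NOT':
        return not vals[0]
    raise ValueError(f"unknown gate operation: {op}")

circ = {'g1': ('AND', ['x1', 'x2']), 'out': ('OR', ['g1', 'x3'])}
assert evaluate(circ, {'x1': True, 'x2': False, 'x3': True}) is True
```

The recursion stack here mirrors the path from the output gate to the current gate; for a circuit of depth O(log n), recording that path is what keeps the space logarithmic.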

P-completeness (10.5.e)

Definition. A language B is P-complete if
1. B ∈ P, and
2. every A in P is log space reducible to B.

For a circuit C and input string x, we write C(x) for the value of C on input x. The following language can be called the circuit evaluation problem:

CIRCUIT-VALUE = { ⟨C, x⟩ | C is a Boolean circuit and C(x) = 1 }.

Theorem. CIRCUIT-VALUE is P-complete.
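Condition 1 is the easy half: given ⟨C, x⟩, evaluating every gate once in topological order takes time linear in the size of C, so CIRCUIT-VALUE ∈ P. A hedged sketch (the list-of-gates encoding is our own, for illustration only):

```python
def circuit_value(gates, x):
    """gates: list of (name, op, inputs) in topological order; x: dict of input bits.
    Returns the value of the last gate, taken to be the output gate."""
    val = dict(x)
    for name, op, ins in gates:
        bits = [val[g] for g in ins]         # inputs already computed
        if op == 'AND':
            val[name] = all(bits)
        elif op == 'OR':
            val[name] = any(bits)
        elif op == 'NOT':
            val[name] = not bits[0]
        else:
            raise ValueError(f"unknown gate operation: {op}")
    return val[gates[-1][0]]

gates = [('g1', 'AND', ['x1', 'x2']),        # computes NOT(x1 AND x2) AND x2
         ('g2', 'NOT', ['g1']),
         ('out', 'AND', ['g2', 'x2'])]
assert circuit_value(gates, {'x1': 0, 'x2': 1}) is True
```

The hard half, condition 2, is the reduction: a log space transducer can turn any polynomial time Turing machine computation into a circuit whose value equals the machine's answer, which is exactly the construction behind Theorem 9.30.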