1 CS 345: Chapter 10 Parallelism, Concurrency, and Alternative Models Or, Getting Lots of Stuff Done at Once

2 Parallelism is the use of many processors to solve a problem. To ask whether parallelism can make some intractable problems tractable, we first need to distinguish between fixed parallelism and expanding parallelism.

3 Fixed Parallelism: The number of processors remains fixed as the problem size grows. Expanding Parallelism: The number of processors available grows with the problem size. Since the number of processors affects a parallel algorithm's running time, how do we balance running time against the number of processors used?

4 Product Complexity: Time × Size, where Size is the number of processors used. See Figure 10.2, page 262. Note that since any parallel algorithm can be serialized on a single processor, the best product complexity cannot be lower than the lower bound on the problem's sequential complexity.
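
As an illustration of product complexity (my own sketch, not from the text, and written in Go only because the course language is not specified), here is a minimal parallel tree reduction: it sums n numbers in O(log n) rounds using up to n/2 goroutines per round, for a product complexity of O(n log n) against the O(n) sequential lower bound. The function name parallelSum and the sample data are invented for the example.

package main

import (
	"fmt"
	"sync"
)

// parallelSum adds the elements of data by repeatedly pairing them off,
// halving the problem each round; every pair is handled by its own goroutine.
func parallelSum(data []int) int {
	vals := append([]int(nil), data...) // work on a copy
	for len(vals) > 1 {
		half := (len(vals) + 1) / 2
		next := make([]int, half)
		var wg sync.WaitGroup
		for i := 0; i < half; i++ {
			wg.Add(1)
			go func(i int) { // one "processor" per pair
				defer wg.Done()
				next[i] = vals[2*i]
				if 2*i+1 < len(vals) {
					next[i] += vals[2*i+1]
				}
			}(i)
		}
		wg.Wait()
		vals = next
	}
	return vals[0]
}

func main() {
	fmt.Println(parallelSum([]int{3, 1, 4, 1, 5, 9, 2, 6})) // prints 31
}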

5 Among the difficulties of parallel algorithms is the question "How should processors communicate?" There are two basic approaches:
– Shared Memory
– Fixed Networks

6 Shared Memory These types of machines are called multi-processors. Is the shared memory only for reading values, or can it also be used to write values? If the shared memory can be written, then there must be some method for resolving conflicts.
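
To make the write-conflict issue concrete, here is a minimal sketch (my own illustration in Go, not from the text) in which several goroutines stand in for processors writing to a single shared cell; a mutex is the conflict-resolution method that serializes the writes.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		shared int        // the shared memory cell
		mu     sync.Mutex // conflict resolution: only one writer at a time
		wg     sync.WaitGroup
	)
	for p := 0; p < 8; p++ { // eight "processors"
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				mu.Lock()
				shared++ // exclusive write access
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(shared) // always 8000; without the mutex, updates could be lost
}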

7 Fixed-Network Machines These types of computers are called multi-computers. A processor is connected to a fixed number of neighbors, and each processing element has its own private memory. There are many types of network designs: mesh/torus, fat trees, hypercubes, and others based on graph theory.
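
As a rough illustration of a multi-computer (again my own Go sketch, not the book's), the program below models a ring of four processing elements: each node keeps its own private value and communicates only with its neighbors over channels, which play the role of the fixed network links.

package main

import "fmt"

func main() {
	const n = 4
	links := make([]chan int, n) // links[i] is the channel feeding node i
	for i := range links {
		links[i] = make(chan int, 1)
	}
	done := make(chan int)
	for i := 0; i < n; i++ {
		go func(i int) {
			private := i + 1    // this node's private memory
			token := <-links[i] // receive from the left neighbor
			token += private
			if i == n-1 {
				done <- token // the token has gone all the way around
			} else {
				links[i+1] <- token // forward to the right neighbor
			}
		}(i)
	}
	links[0] <- 0       // inject the token
	fmt.Println(<-done) // prints 10 (= 1+2+3+4)
}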

8 Some networks are designed for specific problems. Others are general purpose but better suited to some algorithms than others. Some issues with large multi-computers:
– What communication model is used? (store-and-forward, wormhole routing, virtual cut-through)
– Deadlock and livelock (see the sketch after this list)
– Fault tolerance
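
The deadlock sketch referred to above is my own Go illustration, not from the text: two neighboring nodes exchange messages over unbuffered channels; if both sent before receiving they would block forever in a circular wait, so one side receives first.

package main

import "fmt"

func main() {
	aToB := make(chan string) // unbuffered: a send blocks until someone receives
	bToA := make(chan string)
	done := make(chan bool)

	go func() { // node A: send first, then receive
		aToB <- "hello from A"
		fmt.Println("A got:", <-bToA)
		done <- true
	}()
	go func() { // node B: receive first, then send, breaking the circular wait
		fmt.Println("B got:", <-aToB)
		bToA <- "hello from B"
		done <- true
	}()
	<-done
	<-done
}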

9 Systolic Networks One type of network for parallel processing is a systolic network; see Figure 10.5, page 266. It is similar to an assembly line: each processor performs a set task on some data and passes it on to its neighbor. In the example, there is an n × m matrix. A sequential algorithm takes O(n × m) time, but a systolic network takes O(n + m).
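
The book's matrix example is not reproduced here, but the assembly-line idea can be sketched (in Go, my own illustration) as a linear pipeline: each stage repeatedly takes an item from its left neighbor, does one unit of work on it, and passes it right, so m items clear k stages in roughly m + k steps rather than the m × k steps a single processor would need.

package main

import "fmt"

// stage is one processor on the assembly line: it transforms each item it
// receives and hands it to the next stage.
func stage(id int, in <-chan int, out chan<- int) {
	for x := range in {
		out <- x + id // this stage's "set task" on the data
	}
	close(out)
}

func main() {
	const k, m = 4, 6
	ch := make([]chan int, k+1)
	for i := range ch {
		ch[i] = make(chan int)
	}
	for s := 0; s < k; s++ {
		go stage(s+1, ch[s], ch[s+1])
	}
	go func() {
		for item := 0; item < m; item++ {
			ch[0] <- item // feed raw data into the front of the line
		}
		close(ch[0])
	}()
	for result := range ch[k] {
		fmt.Println(result) // each item has passed through all k stages
	}
}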

10 Can a parallel algorithm solve an undecidable problem? No. Since any parallel algorithm can be serialized, a parallel algorithm cannot solve an undecidable or non-computable problem.

11 Can a parallel algorithm turn an intractable problem into a tractable problem? Many problems in NP, including some NP-complete problems, have been shown to have polynomial-time parallel solutions. However:
– The number of processors required for the solution is exponential.

12 – NP-complete problems are not known to be intractable, so this does not necessarily mean that parallelism can remove inherent intractability.
– It is not clear that a parallel algorithm that uses only a polynomial number of instructions but requires an exponential number of processors can actually run in polynomial time on a real computer.
Thus, whether a parallel algorithm can remove inherent intractability is still an open question.

13 The Parallel Computation Thesis Part of this thesis is the claim that parallel time is polynomially equivalent to sequential memory: if a problem can be solved sequentially using space S for inputs of length N, then it can be solved in parallel time that is at most polynomial in S.

14 Conversely, if a problem can be solved in parallel time T for inputs of length N, it can be solved sequentially using memory bounded by a polynomial in T.

15 So, any problem solvable by a sequential algorithm in polynomial space can be solved by a parallel algorithm in polynomial time, and vice versa. That is: Sequential-PSpace = Parallel-PTime.

16 Thus the question of whether there are intractable problems that become tractable when using parallelism comes down to the question: does PSpace contain intractable problems? This, like "Does P = NP?", is still an open question.

17 The Class NC (Nick's Class) In general, a polynomial-time parallel algorithm cannot claim to be tractable, since it may require an exponential number of processors. A problem is in NC if:
– It can be solved in polylogarithmic parallel time.
– It requires only a polynomial number of processors.
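
A standard example of the kind of algorithm that places a problem in NC is parallel prefix sums. The Go sketch below (my own illustration; the function name prefixSums and the sample data are invented) runs in O(log n) rounds, each round using up to n goroutines, i.e., polylogarithmic time on a polynomial number of processors.

package main

import (
	"fmt"
	"sync"
)

// prefixSums computes running totals with a doubling scan: in the round with
// offset d, every position i >= d adds in the value d places to its left.
func prefixSums(data []int) []int {
	cur := append([]int(nil), data...)
	n := len(cur)
	for d := 1; d < n; d *= 2 { // O(log n) rounds
		next := make([]int, n)
		var wg sync.WaitGroup
		for i := 0; i < n; i++ {
			wg.Add(1)
			go func(i int) { // one "processor" per element
				defer wg.Done()
				next[i] = cur[i]
				if i >= d {
					next[i] += cur[i-d]
				}
			}(i)
		}
		wg.Wait()
		cur = next
	}
	return cur
}

func main() {
	fmt.Println(prefixSums([]int{1, 2, 3, 4, 5})) // [1 3 6 10 15]
}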

18 All problems in NC are also in P, but it is not known if the converse is true. Most researchers believe that the two classes are not the same. Thus, we have: NC ⊆ P ⊆ NP ⊆ PSpace.

19 Conjecture 1 There are problems solvable in reasonable sequential space (equivalently, reasonable parallel time) that cannot be solved in reasonable sequential time, even using nondeterminism.

20 Conjecture 2 There are problems solvable in reasonable sequential time only if nondeterminism is used.

21 Conjecture 3 There are problems solvable in reasonable sequential space, but not in very fast parallel time using reasonable hardware size.

