PARALLEL COMPUTING
Abstract
Parallelism has become a standard technique in the design of high-performance computers. Despite the impressive progress achieved in the design of sequential von Neumann machines, their computing power is limited in the light of certain applications. Parallel computing emerged as an alternative and viable medium for the solution of many important problems. Many conventional machines such as PCs and workstations contain some degree of parallelism. Such a tendency represents a departure from the sequential model of computation, yet parallel computing itself has not been an unqualified success. The difficulty lies in the gap between the view needed to use a particular machine effectively and the view needed to develop parallel software successfully.
Introduction
Traditionally, software has been written for serial computation: it is executed by a single computer having a single Central Processing Unit (CPU); a problem is solved by a series of instructions, executed one after the other by the CPU; and only one instruction may be executed at any moment in time.
Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem. The compute resources may be:
- a single computer with multiple processors;
- an arbitrary number of computers connected by a network;
- a combination of both.
The computational problem usually exhibits characteristics such as the ability to be:
- broken apart into discrete pieces of work that can be solved simultaneously (as sketched below);
- executed as multiple program instructions at any moment in time;
- solved in less time with multiple compute resources than with a single compute resource.
Parallel computing is an evolution of serial computing that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time.
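As a concrete illustration of breaking a problem into discrete pieces of work, the following C sketch (plain serial C, no parallel library; the names and sizes are invented for this example) partitions the iterations of a large summation into independent chunks, one per worker. Each chunk touches a disjoint range of the array, so in a real parallel run the chunks could be computed simultaneously on separate processors.

#include <stdio.h>

#define N 1000        /* total problem size          */
#define WORKERS 4     /* number of compute resources */

/* Each worker sums its own contiguous chunk; the chunks are
   independent, so they could execute at the same time. */
static long sum_chunk(const int *a, int lo, int hi) {
    long s = 0;
    for (int i = lo; i < hi; i++)
        s += a[i];
    return s;
}

int main(void) {
    int a[N];
    for (int i = 0; i < N; i++)
        a[i] = 1;

    long total = 0;
    for (int w = 0; w < WORKERS; w++) {
        int lo = w * N / WORKERS;        /* chunk boundaries */
        int hi = (w + 1) * N / WORKERS;
        total += sum_chunk(a, lo, hi);   /* these calls would overlap
                                            in a true parallel run    */
    }
    printf("total = %ld\n", total);      /* prints 1000 */
    return 0;
}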
USE
- Save time
- Solve larger problems
- Take advantage of non-local resources
- Cost savings
- Overcome memory constraints
- Overcome the limits of serial computing: transmission speeds, limits to miniaturization, and economic limitations
ISSUES
Distributed systems raise issues of:
- Concurrent access
- Reliability
- Transparency
- Scalability
Concepts and Terminology
1) Von Neumann Architecture
Basic design:
- Memory is used to store both program instructions and data
- Program instructions are coded data which tell the computer to do something
- Data is simply information to be used by the program
- A central processing unit (CPU) gets instructions and/or data from memory, decodes the instructions, and then sequentially performs them
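This stored-program cycle can be shown as a short loop. The following C program is an illustrative toy, not a real CPU: the four-instruction set, the memory layout, and all names are assumptions invented for this sketch. Its only point is that instructions and data live in the same memory while the CPU fetches, decodes, and executes them one after another.

#include <stdio.h>

/* Toy von Neumann machine: one memory array holds BOTH the
   program (encoded instructions) and the data it operates on. */
enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

int main(void) {
    int memory[16] = {
        LOAD, 9,      /* acc = memory[9]   */
        ADD, 10,      /* acc += memory[10] */
        STORE, 11,    /* memory[11] = acc  */
        HALT, 0,
        0,            /* padding           */
        20, 22, 0     /* data: two operands and a result cell */
    };
    int pc = 0;       /* program counter */
    int acc = 0;      /* accumulator     */

    for (;;) {
        int op  = memory[pc];        /* fetch  */
        int arg = memory[pc + 1];
        pc += 2;
        switch (op) {                /* decode and execute */
        case LOAD:  acc = memory[arg];  break;
        case ADD:   acc += memory[arg]; break;
        case STORE: memory[arg] = acc;  break;
        case HALT:  printf("result = %d\n", memory[11]);  /* 42 */
                    return 0;
        }
    }
}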
2) Flynn's Classical Taxonomy
Flynn's taxonomy distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states: Single or Multiple. This yields the four classifications below:
- SISD: Single Instruction, Single Data (a conventional serial machine)
- SIMD: Single Instruction, Multiple Data (e.g. vector and array processors)
- MISD: Multiple Instruction, Single Data (rare in practice)
- MIMD: Multiple Instruction, Multiple Data (most modern parallel computers)
3) Parallel Computer Memory Architectures
(a) Shared Memory
All processors access a common global address space, so a change one processor makes to memory is visible to all the others.
(b) Distributed Memory
Each processor has its own local memory; processors exchange data explicitly over a communication network.
(c) Hybrid Distributed-Shared Memory
The largest machines combine both designs: a network of nodes, each of which is itself a shared-memory multiprocessor.
4) Parallel Programming Models
(a) Shared Memory Model
Tasks share a common address space, which they read and write asynchronously; mechanisms such as locks and semaphores control access to the shared memory (see the sketch below).
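A minimal sketch of the shared memory model in C with OpenMP, assuming a compiler with OpenMP support (e.g. compile with -fopenmp); the variable names and the toy computation are invented for this example. Every thread reads and writes the same shared counter, and a critical section plays the role of a lock.

#include <stdio.h>
#include <omp.h>

int main(void) {
    long shared_total = 0;  /* lives in the one address space all threads see */

    #pragma omp parallel
    {
        long local = omp_get_thread_num() + 1;  /* each thread's own value */

        /* Unsynchronized writes to shared_total would race;
           the critical section serializes access, like a lock. */
        #pragma omp critical
        shared_total += local;
    }

    printf("shared_total = %ld\n", shared_total);
    return 0;
}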
(b) Threads Model
A single process spawns multiple threads of execution that run concurrently; POSIX Threads (pthreads) and OpenMP are the common implementations (see the sketch below).
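A minimal POSIX Threads sketch, assuming a POSIX system (link with -pthread); the worker function and its toy computation are invented for illustration. The main thread creates the workers, each computes independently, and main joins them to collect the results.

#include <stdio.h>
#include <pthread.h>

#define NTHREADS 4

/* Each thread computes on its own private input; nothing
   shared is written, so no locking is needed here. */
static void *worker(void *arg) {
    long id = (long)arg;
    return (void *)(id * id);   /* toy result: id squared */
}

int main(void) {
    pthread_t tid[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)      /* create (fork) */
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (long i = 0; i < NTHREADS; i++) {    /* join */
        void *res;
        pthread_join(tid[i], &res);
        printf("thread %ld returned %ld\n", i, (long)res);
    }
    return 0;
}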
(c) Message Passing Model
Tasks use only their own local memory and exchange data by explicitly sending and receiving messages; MPI is the de facto standard (see the sketch below).
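A minimal MPI sketch in C, assuming an MPI installation (compile with mpicc, run with e.g. mpirun -np 2); the message contents are invented for illustration. Rank 0 sends a value to rank 1, which receives it: data moves only through explicit messages, never through shared memory.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload = 42;   /* data in rank 0's local memory */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", payload);
    }

    MPI_Finalize();
    return 0;
}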
Conclusion
We can show that partial evaluation plays an important role in the parallel computation process. This approach is intended for a broad spectrum of activities, such as automatic transformation, optimization, and specialization of programs with respect to partial knowledge of their input, as well as their parallelization. Through the analysis of program examples we demonstrate a way to partially overcome some of the shortcomings and inefficiencies of declarative programs, and show that the method is particularly effective on numerically oriented scientific programs, and even on irregular data structures such as trees, lists, and graphs.