Outline
Why this subject?
What is High Performance Computing?
Parallel computing
Issues in HPC
Monika Shah
What is a Parallel Computer?
Parallel computing: the use of multiple computers or processors working together on a common task
Parallel computer: consists of two or more processing units, operating more or less independently in parallel
each processor works on its section of the problem
processors are allowed to exchange information with other processors
Ideal runtime = t/n, where t = runtime of the sequential code and n = number of processing units
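As a quick sanity check on the runtime relation above, the ideal (zero-overhead) case can be computed directly. This is an illustrative sketch; the function name is my own, not from the slides:

```python
def ideal_parallel_runtime(t_seq, n):
    """Best-case parallel runtime: sequential runtime t divided by the
    number of processing units n, assuming perfect speedup and no
    communication or synchronization overhead."""
    return t_seq / n

# A task taking 120 s sequentially, run on 8 processing units:
print(ideal_parallel_runtime(120.0, 8))  # 15.0 s in the ideal case
```

Real runtimes are larger than this bound because of communication, synchronization, and load imbalance.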
Parallel vs. Serial Computers
Two big advantages of parallel computers:
total performance
total memory
Parallel computers enable us to solve problems that:
benefit from, or require, fast solution
require large amounts of memory
An example that requires both: weather forecasting
Parallel vs. Serial Computers
Some benefits of parallel computing include:
more data points: bigger domains, better spatial resolution, more particles
more time steps: longer runs, better temporal resolution
faster execution: faster time to solution, more solutions in the same time, larger simulations in real time
Serial Processor Performance
Although Moore’s Law ‘predicts’ that single processor performance doubles every 18 months, eventually physical limits on manufacturing technology will be reached
Classification of Parallel computers
Based on various aspects of their architecture:
Memory-Processor Organization: distinguished by the way processors are connected with the memory (memory architecture, network topology)
Flynn's Classification Scheme: considers the number of instruction streams and the number of data streams
Erlangen Classification Scheme (ECS): focuses on the number of control units, the number of functional units, and word size
Classification of Parallel Computer based on Memory-Processor Organization
The simplest and most useful way to classify modern parallel computers is by their memory model:
shared memory
distributed memory
distributed shared memory
Shared vs. Distributed Memory
Shared memory: single address space; all processors have access to a pool of shared memory (e.g., SGI Origin, Sun E10000)
[Diagram: six processors connected by a bus to a single shared memory]
Distributed memory: each processor has its own local memory; message passing must be used to exchange data between processors (e.g., CRAY T3E, IBM SP, clusters)
[Diagram: six processor-memory pairs connected by a network]
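The "must do message passing" point can be illustrated with a small sketch using Python's standard multiprocessing module; the worker function and the data are hypothetical. Each process has its own address space, so data moves only through explicit send/receive calls:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Distributed-memory style: this process has its own address space,
    # so data must be exchanged by explicit message passing.
    data = conn.recv()        # receive a message from the parent
    conn.send(sum(data))      # send the partial result back
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send([1, 2, 3, 4])   # explicit data exchange
    print(parent_conn.recv())        # -> 10
    p.join()
```

On a real distributed-memory machine the same pattern would use a message-passing library such as MPI rather than pipes between local processes.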
Shared Memory: UMA vs. NUMA
Uniform memory access (UMA): each processor has uniform access to memory; also known as symmetric multiprocessors, or SMPs (Sun E10000)
[Diagram: processors connected by a single bus to one memory]
Non-uniform memory access (NUMA): time for memory access depends on the location of the data; local access is faster than non-local access; easier to scale than SMPs (SGI Origin)
[Diagram: two bus-memory groups of processors joined by a network]
Shared Memory Architecture
UMA
Global address space
Implicit communication and synchronization
Disadvantages:
Expensive and complex hardware (for automatic synchronization support)
Poor scalability (centralized memory and interconnection network): high levels of parallelism are difficult (most machines support at most ~64 processors)
System bus bottleneck: to access memory, the system (address) bus may be unavailable because it is being used by another CPU
High network contention
Single point of failure
Advantages:
Shared memory means there is no need to copy data
No explicit synchronization overhead for data transfer
Easy programming
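A minimal sketch of the shared-memory model, using Python threads (the names are illustrative, not from the slides): every thread sees the same variable with no copying, which is the "no need to copy data" advantage; note that concurrent updates to shared data still need a lock to stay correct:

```python
import threading

# Shared-memory style: all threads share a single address space, so
# they all see the same "counter" variable without any data copying.
counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # guard concurrent updates to shared data
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: correct only because updates were locked
```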
Distributed Memory
Advantages:
Better cost/performance ratio
Reliability
Better scalability
Disadvantages:
More I/O operations for synchronization
Explicit synchronization
Complex programming
Keeping cached copies up to date is a problem (can be solved using cache coherence and consistency protocols)
Distributed Memory: MPPs vs. Clusters
Processor-memory nodes are connected by some type of interconnect network
Massively Parallel Processor (MPP): tightly integrated, single system image
Cluster: individual computers connected by software
Difference between a cluster and a distributed computer: a cluster provides an SSI (Single System Image)
[Diagram: CPU-memory nodes attached to an interconnect network]
Processors, Memory, & Networks
Both shared and distributed memory systems have:
processors: now generally commodity RISC processors
memory: now generally commodity DRAM
network/interconnect between the processors and memory (bus, crossbar, fat tree, torus, hypercube, etc.)
We will now begin to describe these pieces in detail, starting with definitions of terms.
Classification of Parallel Computers: Flynn's Classification Scheme
Single instruction, single data stream (SISD)
Single instruction, multiple data streams (SIMD)
Multiple instruction, single data stream (MISD)
Multiple instruction, multiple data streams (MIMD)
Classification of Parallel Computers: Flynn's Scheme (1): SISD
Single processor
Single instruction stream
Data stored in a single memory
Classification of Parallel Computers: Flynn's Scheme (2): SIMD
A single machine instruction, executed on different sets of data by different processors
A number of processing elements; the machine controls their simultaneous execution on a lockstep basis
Each processing element has an associated data memory
Two methods: processor arrays and vector pipelines
Best suited for regular applications
Applications: vector and array processing
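The lockstep idea can be sketched in a toy way: one "control unit" broadcasts a single instruction that every processing element applies to its own local data. This is a hedged illustration; the function and variable names are my own, not from the slides:

```python
def simd_broadcast(instruction, data_memories):
    """Toy SIMD step: apply the same instruction to each processing
    element's local data element, conceptually in lockstep."""
    return [instruction(x) for x in data_memories]

# Four processing elements, each holding one local data element;
# the control unit broadcasts a single "multiply by 2" instruction:
pe_data = [1, 2, 3, 4]
result = simd_broadcast(lambda x: x * 2, pe_data)
print(result)  # [2, 4, 6, 8] - one instruction, many data streams
```

Real SIMD hardware (processor arrays, vector pipelines) performs this in parallel in a single cycle or a short pipeline, rather than in a Python loop.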
Classification of Parallel Computers: Flynn's Scheme (3): MISD
A sequence of data is transmitted to a set of processors
Each processor executes a different instruction sequence
Not clear if it has ever been implemented
Applications: fault avoidance
Multiple cryptography algorithms attempting to crack a single coded message
Multiple frequency filters operating on a single signal
Classification of Parallel Computers: Flynn's Scheme (4): MIMD
A set of processors, each with its own control unit (CU) and processing unit (PU)
Multiple instruction streams: different processors may execute different programs
Multiple data streams: every processor may work on a different data stream
Synchronous or asynchronous execution
Most common today; modern MIMD machines also include SIMD sub-components
Examples: modern supercomputers, SMPs, NUMA systems, clusters, grids, multi-core CPUs
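A toy MIMD sketch, assuming Python threads stand in for processors (the function names are illustrative): each "processor" runs a different instruction stream on a different data stream, concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def sum_task(data):      # "processor" 1: instruction stream = summation
    return sum(data)

def max_task(data):      # "processor" 2: instruction stream = maximum
    return max(data)

# Two different instruction streams applied to two different data
# streams at the same time: the essence of MIMD.
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(sum_task, [1, 2, 3])   # data stream 1
    f2 = pool.submit(max_task, [7, 4, 9])   # data stream 2
    print(f1.result(), f2.result())  # 6 9
```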
Taxonomy of Parallel Computers