Presentation on theme: "Introduction Super-computing Tuesday"— Presentation transcript:

1 18.337 Introduction Super-computing Tuesday

2

3 News you can use: Hardware
Multicore chips (2008: mostly 2 cores and 4 cores, but core counts are doubling; cores = processors)
Servers (often 2 or 4 multicore chips sharing memory)
Clusters (often several to tens of servers, and sometimes many more, not sharing memory)

4 Performance
Single-processor speeds are, for now, no longer growing.
Moore's law still allows more real estate per core (transistor counts double roughly every two years).
People want performance, but it is hard to get; slowdowns are often seen before speedups.
Flops (floating-point operations per second): gigaflops (10^9), teraflops (10^12), petaflops (10^15).
Compare matmul with matadd. What's the difference?
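The matmul-vs-matadd question can be made concrete by counting work against data moved. The 2n^3 and n^2 flop counts are standard; the byte count is a back-of-the-envelope sketch assuming double precision and no cache reuse:

```python
# Work vs. data for n x n double-precision (8-byte) matrices.
# matmul does O(n^3) flops on O(n^2) data; matadd does O(n^2) flops
# on the same O(n^2) data, so matadd is limited by time to memory.

def matmul_flops(n):
    # each of the n^2 outputs is a length-n dot product: n mults + n adds
    return 2 * n ** 3

def matadd_flops(n):
    # one add per output entry
    return n ** 2

def bytes_moved(n):
    # read A and B, write C, ignoring cache reuse
    return 3 * 8 * n ** 2

n = 1000
print("matmul:", matmul_flops(n) / bytes_moved(n), "flops/byte")  # ~83, grows with n
print("matadd:", matadd_flops(n) / bytes_moved(n), "flops/byte")  # ~0.04, fixed
```

The ratio (arithmetic intensity) is what differs: matmul can keep the floating-point units busy, while matadd spends its time waiting on memory no matter how many flops the chip offers.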

5 Some historical machines

6 Earth Simulator: was #1, now #30

7 Some interesting hardware
Nvidia
Cell processor
SiCortex – "Teraflops from Milliwatts"

8 Programming MPI: The Message Passing Interface
Low-level "lowest common denominator" interface that the world has stuck with for nearly 20 years.
Can deliver performance, but can be a hindrance as well.
Some say there are those who will pay for a 2x speedup, just make it easy; the reality is that many want at least 10x, and more, for a qualitative difference in results.
People forget that serial performance can depend on many bottlenecks, including time to memory.
Performance (and large problems) are the reason for parallel computing, but it is difficult to get the "ease of use" vs. "performance" trade-off right.
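Real MPI programs are launched with a runner such as mpirun and exchange data with calls like MPI_Send and MPI_Recv. As a rough stand-in that runs anywhere, the same explicit send/receive pattern can be sketched with Python's standard queue and threading modules; note that threads share memory, which real MPI ranks do not, so this only illustrates the programming style:

```python
from queue import Queue
from threading import Thread

def worker(inbox, outbox):
    data = inbox.get()                    # blocking receive, like MPI_Recv
    outbox.put(sum(x * x for x in data))  # send result back, like MPI_Send

to_worker, from_worker = Queue(), Queue()
t = Thread(target=worker, args=(to_worker, from_worker))
t.start()
to_worker.put([1, 2, 3, 4])               # "send" work to the worker rank
result = from_worker.get()                # "receive" the partial result
t.join()
print(result)  # 30 (= 1 + 4 + 9 + 16)
```

The point of the pattern is that all communication is explicit: the programmer decides what is sent, when, and to whom, which is both where MPI's performance and its "hindrance" come from.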

9 Places to Look
Best current news:
Huge Conference:
MIT home-grown software, now Interactive Supercomputing (Star-P for MATLAB®, Python, and R)

10 Architecture
Diagrams from Sam Berkeley
Bottom-up performance engineering: understanding hardware's implications on performance, up to the software.
Top-down: measuring software and tweaking, sometimes aware and sometimes unaware of the hardware.
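The top-down/bottom-up distinction shows up even in a tiny measurement: the two loops below do identical arithmetic and differ only in memory access order. This is an illustrative sketch; in pure Python the gap is small, while in compiled code over real row-major arrays the cache effect is much larger:

```python
import time

n = 1000
a = [[1.0] * n for _ in range(n)]  # n x n table of ones

def sum_rows(a):
    # inner index walks within one row: the order the data is laid out
    return sum(a[i][j] for i in range(len(a)) for j in range(len(a[0])))

def sum_cols(a):
    # inner index jumps between rows: strided, cache-hostile in row-major layouts
    return sum(a[i][j] for j in range(len(a[0])) for i in range(len(a)))

for f in (sum_rows, sum_cols):
    t0 = time.perf_counter()
    s = f(a)
    print(f.__name__, "sum =", s, "time =", round(time.perf_counter() - t0, 3), "s")
```

Top-down, you would only see that one version is slower; the bottom-up, hardware-aware view explains why (cache lines are fetched a whole row segment at a time).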

11

12 Want to delve into hard numerical algorithms
Examples: FFTs and sparse linear algebra, at the MIT level.
Potential "not quite right" question: how do you parallelize these operations?
Rather: what issues arise, and why is getting performance hard?
Why is nxn matmul easy? Almost cliché?
Comfort level in this class to delve in?
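One reason "how do you parallelize it" is not quite the right question shows up already in serial code: dense matmul has a regular, predictable access pattern, while sparse matrix-vector multiply reads the vector through an index array, so the memory traffic depends on the data itself. A sketch in compressed sparse row (CSR) form, with illustrative array names:

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in CSR form (data, indices, indptr)."""
    y = []
    for i in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * x[indices[k]]  # irregular, data-dependent reads of x
        y.append(s)
    return y

# 2x2 example, A = [[1, 0], [2, 3]], x = [10, 20]:
print(csr_matvec([1.0, 2.0, 3.0], [0, 0, 1], [0, 1, 3], [10.0, 20.0]))  # [10.0, 80.0]
```

Splitting the rows across processors is easy; getting performance is not, because which entries of x each processor touches is only known once the sparsity pattern is.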

13 Another New Term for the Day: SIMD
SIMD (Single Instruction, Multiple Data) refers to parallel hardware that can execute the same instruction on multiple data elements. One may also speak of a SIMD operation (sometimes, but not always, historically synonymous with a data-parallel operation) when the software appears to run in "lock-step," with every processor executing the same instructions.
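In software, the SIMD/data-parallel style means writing one operation that applies to a whole array at once. NumPy is a convenient way to show the style (NumPy is assumed available here; whether the hardware actually issues SIMD instructions is up to the underlying library and compiler):

```python
import numpy as np

a = np.arange(8, dtype=np.float64)  # [0, 1, ..., 7]
b = np.full(8, 2.0)                 # [2, 2, ..., 2]

# One multiply in the source text, applied to all eight data elements:
# the data-parallel / SIMD style described above.
c = a * b
print(c.tolist())  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
```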

