1
Seaborg
Cerise Wuthrich
CMPS 5433
2
Seaborg
Manufactured by IBM
Distributed-memory parallel supercomputer
Based on IBM's SP RS/6000 architecture
3
Seaborg
Used by the National Energy Research Scientific Computing Center (NERSC), funded by the Department of Energy, at Berkeley Lab
Named for Glenn Seaborg, the Nobel laureate chemist who discovered 10 elements, including plutonium
4
IBM SP RS/6000 Architecture
SP – Scalable POWERparallel
RS – RISC System
Composed of nodes
5
Nodes
416 nodes with 16 processors per node:
380 compute nodes
20 nodes used for disk storage
6 login nodes
2 network nodes
8 spares
6
Front and back view of nodes
7
Node Architecture
16 IBM POWER3 processors per node
Between 16 and 64 GB of memory per node
2 network adapters per node
8
Processors
16 IBM POWER3 processors per node, each running at 375 MHz
POWER – Performance Optimization With Enhanced RISC
The POWER3 is a 64-bit RISC (PowerPC-family) design; within a node the processors form a symmetric multiprocessor, so every processor is functionally identical
Connected to the L2 cache by a bus running at 250 MHz
Dynamic branch prediction and instruction prefetching
Floating-point units are fully pipelined
Peak rate: 4 floating-point operations/cycle × 375 MHz = 1,500 MFLOP/s = 1.5 GFLOP/s per processor (worked out in the sketch below)
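To make the peak-rate arithmetic concrete, here is a minimal C sketch (my addition, not part of the original presentation) that derives per-processor, per-node, and whole-machine peak rates from the figures on these slides; the 380-compute-node count comes from the earlier node breakdown, and these are theoretical peaks, not sustained performance.

#include <stdio.h>

int main(void) {
    /* Figures taken from the slides; peak rates only. */
    const double clock_hz        = 375e6; /* POWER3 clock: 375 MHz                  */
    const double flops_per_cycle = 4.0;   /* 2 pipelined FP units, fused multiply-add */
    const int    procs_per_node  = 16;
    const int    compute_nodes   = 380;   /* compute nodes only, not the full 416   */

    double per_proc = flops_per_cycle * clock_hz;   /* 1.5 GFLOP/s   */
    double per_node = per_proc * procs_per_node;    /* 24  GFLOP/s   */
    double machine  = per_node * compute_nodes;     /* ~9.1 TFLOP/s  */

    printf("peak per processor:       %.2f GFLOP/s\n", per_proc / 1e9);
    printf("peak per node:            %.2f GFLOP/s\n", per_node / 1e9);
    printf("peak, 380 compute nodes:  %.2f TFLOP/s\n", machine / 1e12);
    return 0;
}

Running it prints 1.50 GFLOP/s per processor, 24 GFLOP/s per node, and roughly 9.1 TFLOP/s across the compute nodes.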
9
POWER3 processor diagram: 32 KB L1 instruction cache, 64 KB L1 data cache, 8 MB L2 cache
10
POWER3 processor: 15 million transistors
11
Interconnection Network
Nodes are connected by a high-bandwidth, low-latency IBM SP2 switch
Nodes can be connected in various topologies depending on the number of nodes
Each switchboard has up to 32 links: 16 links to nodes and 16 links to other switchboards
12
Interconnection Network
A star topology is used for up to 80 nodes while still guaranteeing 4 independent shortest paths between nodes
13
Interconnection Network
Intermediate switchboards must be added for 81–256 nodes
14
Interconnection Network
The combination of switch hardware and software is known as the CSS (Communication SubSystem)
The network is highly available
15
Latency in the Network
Within a node, latency is 9 microseconds
Between nodes, using the Message Passing Interface (MPI), latency is 17 microseconds (see the ping-pong sketch below)
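Latency figures like these are conventionally measured with an MPI ping-pong test between two ranks; the C sketch below is a generic illustration of that technique (my addition, not NERSC's benchmark). Launching both ranks on one node approximates the intra-node figure; one rank per node approximates the inter-node figure.

/* Minimal MPI ping-pong latency sketch: half the average round-trip
   time for a 1-byte message approximates the one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char byte = 0;
    MPI_Status st;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("approx. one-way latency: %.2f microseconds\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

A production benchmark would also discard warm-up iterations and repeat the measurement for a range of message sizes.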
16
Scalability
The architecture can handle from 1 to 512 nodes
The current version of Seaborg (2003) is twice the size of the original (2001)
17
Memory
Within each node: between 16 and 64 GB of shared memory
Between nodes: distributed memory
Parallel programs can be run using distributed-memory message passing, shared-memory threading, or a combination of the two (see the hybrid sketch below)
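As an illustration of the combined model, here is a minimal hybrid C sketch (my addition; I assume MPI for the message passing, which the slides mention elsewhere, and OpenMP for the shared-memory threading, which the slides do not name). The usual arrangement on a machine like this is one MPI rank per node with up to 16 threads per rank sharing that node's memory.

/* Hybrid sketch (illustrative, not from the slides):
   MPI between nodes, OpenMP threads within a node's shared memory. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local_sum = 0.0;

    /* Threads within a rank share the node's memory. */
    #pragma omp parallel for reduction(+ : local_sum)
    for (int i = rank; i < 1000000; i += nranks)
        local_sum += 1.0 / (1.0 + (double)i);

    /* Ranks on different nodes combine results by message passing. */
    double total = 0.0;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f (ranks = %d, threads/rank = %d)\n",
               total, nranks, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

The design point is that threads communicate through the node's shared memory at memory speed, while data crossing node boundaries must travel in explicit messages over the switch.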
18
I/O
20 nodes run the distributed parallel I/O system GPFS (General Parallel File System)
44 terabytes of disk space
Each node runs its own copy of AIX, IBM's Unix-based OS
19
Production Status/Cost
The first version cost $33 million and went into operation in June 2001
At the time, it was the 2nd most powerful computer in the world and the most powerful available for unclassified research
In 2003, the number of nodes was doubled
20
Customers
2,100 researchers at national labs and universities across the country
Use is restricted to Department of Energy-funded massively parallel processing projects
Housed at the National Energy Research Scientific Computing Center (NERSC)
21
Applications – Massively Parallel Scientific Research
Gasoline combustion simulation
Fusion energy research
Climate modeling
Materials science
Computational biology
Particle simulations
Plasma acceleration
Large-scale simulation of atomic structures
22
Interesting Features
In 2004, there were 2.4 times as many requests for computing time as resources available
Uses POE (Parallel Operating Environment) and LoadLeveler to schedule jobs
23
Survey Results – Why do you use Seaborg?
Need a massively parallel computer
High speed
Achieves the required level of numerical accuracy
Can run several simulations in parallel
Easy to connect using ssh
Fastest and most efficient computer available for my research
Long queue times are great
Large enough memory for my needs
24
Survey Results – How could Seaborg be improved?
I think too many nodes are scheduled for many jobs. Scaling is not good in many cases.
"...Virtually impossible to do interactive work"
"Debuggers are terrible."
"Compilers and debuggers are a step down from the Cray."
Giving preference to high-concurrency jobs makes smaller jobs wait
25
Questions