
1 © 2007 IBM Corporation IBM Global Engineering Solutions Blue Gene/P System Overview - Hardware

2 IBM Blue Gene/P System Administration BlueGene/P Racks (Intro: BG/P and BG/L)
- Chip: 4 processors, 8 MB EDRAM; 13.6 GF/s
- Compute card: 1 chip, 20 DRAMs; 13.6 GF/s, 2.0 GB DDR2
- Node card: 32 compute cards (32 chips, 4x4x2), 0-2 I/O cards; 435 GF/s, 64 GB
- Rack: 32 node cards, cabled 8x8x16; 13.9 TF/s, 2 TB
- System: 72 racks, 72x32x32; 1 PF/s, 144 TB
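The scaling figures on this slide can be sanity-checked with a little arithmetic. A minimal Python sketch (not part of the original slides) that reproduces the node card, rack, and system numbers from the per-chip figures:

```python
# Check the BG/P scaling arithmetic above: 13.6 GF/s per chip,
# 32 chips per node card, 32 node cards per rack, 72 racks.
GF_PER_CHIP = 13.6   # GF/s per 4-core chip
GB_PER_CHIP = 2.0    # GB DDR2 per compute card (one chip)

node_card_gf = GF_PER_CHIP * 32       # 435.2 GF/s (slide rounds to 435)
rack_tf = node_card_gf * 32 / 1000    # 13.9 TF/s per rack
system_pf = rack_tf * 72 / 1000       # ~1 PF/s for 72 racks
system_tb = GB_PER_CHIP * 32 * 32 * 72 / 1024  # 144 TB of memory

print(f"node card: {node_card_gf:.1f} GF/s")
print(f"rack:      {rack_tf:.2f} TF/s")
print(f"system:    {system_pf:.3f} PF/s, {system_tb:.0f} TB")
```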

3 IBM Blue Gene/P System Administration BlueGene/L Racks (Intro: BG/P and BG/L)
- Node: dual processor
- Processor card: 2 chips, 1x2x1; 5.6/11.2 GF/s, 1.0 GB
- Node card: 32 chips (4x4x2), 16 compute cards, 0-2 I/O cards; 90/180 GF/s, 16 GB
- Rack: 32 node cards, cabled 8x8x16; 2.8/5.6 TF/s, 512 GB
- System: 64 racks, 64x32x32; 180/360 TF/s, 32 TB

4 IBM Blue Gene/P System Administration Comparison of BG/L and BG/P nodes

Feature                  | Blue Gene/L           | Blue Gene/P
Cores per node           | 2                     | 4
Core clock speed         | 700 MHz               | 850 MHz
Cache coherency          | Software managed      | SMP
Private L1 cache         | 32 KB per core        | 32 KB per core
Private L2 cache         | 14 stream prefetching | 14 stream prefetching
Shared L3 cache          | 4 MB                  | 8 MB
Physical memory per node | 512 MB - 1 GB         | 2 GB
Main memory bandwidth    | 5.6 GB/s              | 13.6 GB/s
Peak performance         | 5.6 GFlop/s per node  | 13.6 GFlop/s per node
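The peak-performance row follows from cores x clock x floating-point operations per cycle. A minimal sketch, assuming 4 flops per cycle per core (two fused multiply-adds from each core's dual-pipeline FPU; the assumption is this sketch's, not stated on the slide):

```python
# Derive the per-node peak GFlop/s figures in the table above.
def peak_gflops(cores, clock_ghz, flops_per_cycle=4):
    """Peak GFlop/s for one node: cores * GHz * flops/cycle."""
    return cores * clock_ghz * flops_per_cycle

print(f"BG/L node: {peak_gflops(2, 0.700):.1f} GFlop/s")  # 5.6
print(f"BG/P node: {peak_gflops(4, 0.850):.1f} GFlop/s")  # 13.6
```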

5 IBM Blue Gene/P System Administration Comparison of BG/L and BG/P nodes (2)

Feature                                   | Blue Gene/L                                | Blue Gene/P
Torus network bandwidth                   | 2.1 GB/s                                   | 5.1 GB/s
Hardware latency (nearest neighbor)       | 200 ns (32B packet), 1.6 µs (256B packet)  | 100 ns (32B packet), 800 ns (256B packet)
Global collective network bandwidth       | 700 MB/s                                   | 1.7 GB/s
Hardware latency (round trip, worst case) | 5.0 µs                                     | 3.0 µs

Full system (72-rack comparison)          | Blue Gene/L                                | Blue Gene/P
Peak performance                          | 410 TFlop/s                                | ~1 PFlop/s
Power                                     | 1.7 MW                                     | ~2.3 MW
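A quick derived check (not in the slides) of the power efficiency implied by the full-system rows:

```python
# Flops per watt implied by the peak performance and power figures.
bgl = 410e12 / 1.7e6   # ~241 MFlop/s per watt
bgp = 1e15 / 2.3e6     # ~435 MFlop/s per watt
print(f"BG/L: {bgl / 1e6:.0f} MFlop/s/W")
print(f"BG/P: {bgp / 1e6:.0f} MFlop/s/W")
```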

6 IBM Blue Gene/P System Administration Blue Gene/P Rack

7 IBM Blue Gene/P System Administration Blue Gene/L Rack

8 IBM Blue Gene/P System Administration BG/P and BG/L Cooling

9 IBM Blue Gene/P System Administration BG/P Node Card
- 32 compute nodes
- Optional I/O card (one of 2 possible) with 10 Gb optical link
- Local DC-DC regulators (6 required, 8 with redundancy)

10 IBM Blue Gene/P System Administration Blue Gene/L Hardware Node Card

11 IBM Blue Gene/P System Administration BG/P Compute Card

12 IBM Blue Gene/P System Administration Three types of compute/IO cards
- 1.07 volts: part number 44V3572
- 1.13 volts: part number 44V3575
- 1.19 volts: part number 44V3578
Cards of different voltage types cannot be mixed within a node card.

13 IBM Blue Gene/P System Administration Blue Gene/L Processor Card

14 IBM Blue Gene/P System Administration Blue Gene/P Hardware Link Cards

15 IBM Blue Gene/P System Administration Blue Gene/L Hardware Link Cards

16 IBM Blue Gene/P System Administration Blue Gene/P Hardware Service Card

17 IBM Blue Gene/P System Administration Blue Gene/L Hardware Service Card

18 IBM Blue Gene/P System Administration Naming conventions (1)
In every name, Rxx encodes the rack: rack row (0-F) followed by rack column (0-F).
- Racks: Rxx
- Bulk power supply: Rxx-B
- Power modules: Rxx-B-Px; power module (0-7): 0-3 left to right facing front, 4-7 left to right facing rear
- Power cable: Rxx-B-C
- Clock cards: Rxx-K
- Midplanes: Rxx-Mx; midplane (0-1): 0 = bottom, 1 = top
- Fan assemblies: Rxx-Mx-Ax; fan assembly (0-9): 0 = bottom front, 4 = top front, 5 = bottom rear, 9 = top rear
- Fans: Rxx-Mx-Ax-Fx; fan (0-2): 0 = tailstock, 2 = midplane

19 IBM Blue Gene/P System Administration Naming conventions (2)
- Service cards: Rxx-Mx-S; the master service card for a rack is always Rxx-M0-S
- Link cards: Rxx-Mx-Lx; link card (0-3): 0 = bottom front, 1 = top front, 2 = bottom rear, 3 = top rear
- Node cards: Rxx-Mx-Nxx; node card (00-15): 00 = bottom front, 07 = top front, 08 = bottom rear, 15 = top rear
- Compute cards: Rxx-Mx-Nxx-Jxx; compute card (04 through 35)
- I/O cards: Rxx-Mx-Nxx-Jxx; I/O card (00-01)
As before, Rxx is rack row (0-F) and rack column (0-F), and Mx is the midplane (0 = bottom, 1 = top). A parsing sketch follows below.
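A hypothetical helper (not an IBM tool; the function name and regular expression are this sketch's own) that decodes the midplane-level location strings defined on these two slides:

```python
import re

# Matches names of the form Rxx, Rxx-Mx, Rxx-Mx-S, Rxx-Mx-Lx,
# Rxx-Mx-Nxx, and Rxx-Mx-Nxx-Jxx.  Jack numbering is simplified:
# the slides restrict jacks to 00-01 (I/O) and 04-35 (compute).
LOCATION_RE = re.compile(
    r"^R(?P<row>[0-9A-F])(?P<col>[0-9A-F])"       # rack row, rack column
    r"(?:-M(?P<midplane>[01])"                    # midplane: 0=bottom, 1=top
    r"(?:-(?:S"                                   # service card
    r"|L(?P<link>[0-3])"                          # link card 0-3
    r"|N(?P<node>0[0-9]|1[0-5])"                  # node card 00-15
    r"(?:-J(?P<jack>[0-3][0-9]))?"                # compute/I/O card jack
    r"))?)?$"
)

def parse_location(loc):
    """Split a Blue Gene/P location string into its named parts."""
    m = LOCATION_RE.match(loc)
    if not m:
        raise ValueError(f"not a valid location: {loc}")
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_location("R23-M1-N09-J32"))
# {'row': '2', 'col': '3', 'midplane': '1', 'node': '09', 'jack': '32'}
```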

20 IBM Blue Gene/P System Administration Rack Naming Convention
(Illustration: rack rows and columns; note the direction of the slant on the covers and the service card side.)
Note: the illustration shows racks numbered 00 through 77, but this is not the largest configuration possible. The largest configuration is 256 racks, numbered 00 through FF.

21 IBM Blue Gene/P System Administration Torus X-Y-Z
(Illustration: orientation of the torus X, Y, and Z axes across the machine.)
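The torus wires each node to six neighbors, one in each direction along X, Y, and Z, with wraparound at the machine edges. A minimal sketch (the function is illustrative, not from the slides), assuming the full 72x32x32 BG/P torus dimensions from slide 2:

```python
# Coordinates of the six torus neighbors of node (x, y, z),
# with modular wraparound in each dimension.
def torus_neighbors(x, y, z, dims=(72, 32, 32)):
    X, Y, Z = dims
    return [
        ((x - 1) % X, y, z), ((x + 1) % X, y, z),  # X- and X+ neighbors
        (x, (y - 1) % Y, z), (x, (y + 1) % Y, z),  # Y- and Y+
        (x, y, (z - 1) % Z), (x, y, (z + 1) % Z),  # Z- and Z+
    ]

print(torus_neighbors(0, 0, 0))  # wraps to (71, 0, 0), (0, 31, 0), ...
```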

22 IBM Blue Gene/P System Administration Node, Link, Service Card Names
Card positions on a midplane, top to bottom: front side L1, N07, N06, N05, N04, S (service card), N03, N02, N01, N00, L0; rear side L3, N15, N14, N13, N12, N11, N10, N09, N08, L2.

23 IBM Blue Gene/P System Administration Node Card
Connector positions on a node card: compute card jacks J04 through J35, I/O card jacks J00 and J01, and Ethernet connectors EN0 and EN1.

24 IBM Blue Gene/P System Administration Service Card
Labeled features: control network connection, clock input, rack row indicator (0-F), and rack column indicator (0-F).

25 IBM Blue Gene/P System Administration Link Card
Labeled features: six link chips (U00-U05) and sixteen cable connectors (J00-J15).

26 IBM Blue Gene/P System Administration Clock Card
Labeled features: ten clock outputs (0-9), a clock input, and master/slave configuration (a slave card takes its signal from the input).

27 IBM Blue Gene/P System Administration Networks (BG/P and BG/L)
(Diagram: the Blue Gene core, the service node, the front-end node, and the file system are linked by the service network and the functional network; users and administrators connect over the site network.)

28 IBM Blue Gene/P System Administration Blue Gene/P Interconnection Networks
- 3-dimensional torus
  - Interconnects all compute nodes; communications backbone for computations
  - Adaptive cut-through hardware routing
  - 3.4 Gb/s on all 12 node links (5.1 GB/s per node; see the bandwidth check below)
  - 0.5 µs latency between nearest neighbors, 5 µs to the farthest; MPI: 3 µs latency for one hop, 10 µs to the farthest
  - 1.7/2.6 TB/s bisection bandwidth, 188 TB/s total bandwidth (72K-node machine)
- Collective network
  - Interconnects all compute and I/O nodes (1152)
  - One-to-all broadcast functionality
  - Reduction operations functionality
  - 6.8 Gb/s of bandwidth per link
  - Latency of one-way tree traversal 2 µs, MPI 5 µs
  - ~62 TB/s total binary tree bandwidth (72K-node machine)
- Low-latency global barrier and interrupt
  - Latency of one way to reach all 72K nodes 0.65 µs, MPI 1.6 µs
- Other networks
  - 10 Gb functional Ethernet: I/O nodes only
  - 1 Gb private control Ethernet: provides JTAG access to hardware; accessible only from the service node system
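The per-node torus bandwidths quoted here and on slide 5 follow directly from the 12 links per node. A small arithmetic check (derived here, not vendor data):

```python
# Aggregate per-node torus bandwidth: 6 bidirectional links give
# 12 unidirectional links per node; convert Gb/s per link to GB/s.
def per_node_gbps(gbits_per_link, links=12):
    return gbits_per_link * links / 8  # Gb/s -> GB/s

print(f"BG/P: {per_node_gbps(3.4):.1f} GB/s per node")  # 5.1
print(f"BG/L: {per_node_gbps(1.4):.1f} GB/s per node")  # 2.1
```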

29 IBM Blue Gene/P System Administration Blue Gene/L Networks
- 3-dimensional torus
  - Interconnects all compute nodes (65,536); communications backbone for computations
  - Virtual cut-through hardware routing
  - 1.4 Gb/s on all 12 node links (2.1 GB/s per node)
  - 0.7/1.4 TB/s bisection bandwidth, 67 TB/s total bandwidth
- Collective network
  - Interconnects all compute and I/O nodes (1024)
  - One-to-all broadcast functionality
  - Reduction operations functionality
  - 2.8 Gb/s of bandwidth per link; latency of tree traversal 2.5 µs
  - ~23 TB/s total binary tree bandwidth (64K-node machine)
- Ethernet IP
  - Incorporated into every node ASIC
  - Active in the I/O nodes (1:64)
  - All external communication (file I/O, control, user interaction, etc.)
- Control system network
  - Boot, monitoring, and diagnostics
- Global barrier and interrupt

