LQCD benchmarks on cluster architectures
M. Hasenbusch, D. Pop, P. Wegner (DESY Zeuthen), A. Gellrich, H. Wittig (DESY Hamburg)
CHEP03, 25 March 2003, Category 6: Lattice Gauge Computing

Outline: Motivation; PC benchmark architectures (DESY cluster, E7500 systems, Infiniband blade servers, Itanium2); benchmark programs and results; future; conclusions and acknowledgements
PC Cluster Motivation: LQCD, Stream benchmark, Myrinet bandwidth

32/64-bit Dirac kernel, LQCD (Martin Lüscher, DESY/CERN, 2000): P4, 1.4 GHz, 256 MB Rambus, using SSE1(2) instructions incl. cache prefetch
Time per lattice point: … µs (1503 MFLOPS, 32-bit arithmetic), … µs (814 MFLOPS, 64-bit arithmetic)

Stream benchmark, memory bandwidth:
P4 (1.4 GHz, PC800 Rambus): 1.4 … 2.0 GB/s
PIII (800 MHz, PC133 SDRAM): 400 MB/s
PIII (400 MHz, PC133 SDRAM): 340 MB/s

Myrinet, external bandwidth: … Gb/s optical connection, bidirectional, ~240 MB/s sustained
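The memory-bandwidth figures above are the kind of numbers a STREAM-style test produces. As a rough illustration only, a minimal triad-style measurement in C (not the official STREAM source; array size, repetition count and the clock()-based timing are illustrative assumptions):

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Triad-style bandwidth sketch: a = b + q*c over arrays much larger
     than the cache, reporting MB/s actually moved through memory. */
  #define N (8 * 1024 * 1024)            /* 64 MB per double array */

  int main(void)
  {
      double *a = malloc(N * sizeof *a);
      double *b = malloc(N * sizeof *b);
      double *c = malloc(N * sizeof *c);
      if (!a || !b || !c) return 1;

      for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

      const int reps = 10;
      clock_t t0 = clock();
      for (int r = 0; r < reps; r++)
          for (long i = 0; i < N; i++)
              a[i] = b[i] + 3.0 * c[i];          /* triad kernel */
      double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

      /* three arrays of N doubles are streamed per repetition */
      double mbytes = (double)reps * 3.0 * N * sizeof(double) / 1e6;
      printf("triad: %.0f MB/s (check %.1f)\n", mbytes / secs, a[N - 1]);

      free(a); free(b); free(c);
      return 0;
  }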
Benchmark Architectures - DESY Cluster Hardware

Nodes: Supermicro P4DC6 mainboard, 2 x XEON P4, 1.7 (2.0) GHz, 256 (512) kByte cache, 1 GByte (4 x 256 MByte) RDRAM, IBM 18.3 GB DDYS-T18350 U160 SCSI disk, Myrinet 2000 M3F-PCI64B-2 interface
Network: Fast Ethernet switch Gigaline 2024M, 48 x 100BaseTX ports + GIGAline 1000BaseSX-SC; Myrinet fast interconnect M3-E32 5-slot chassis, 2 x M3-SW16 line cards
Installation: Zeuthen: 16 dual-CPU nodes, Hamburg: 32 dual-CPU nodes
Benchmark Architectures - DESY Cluster: i860 chipset problem

[Block diagram of the i860 chipset: dual Xeon on a 400 MHz system bus, MCH with dual-channel RDRAM (up to 4 GB) via MRHs, two P64H bridges for 64-bit/66 MHz PCI, ICH2 with 32-bit/33 MHz PCI, dual IDE channels, USB, LAN, AGP4X graphics]

Measured on the 64-bit PCI bus: bus_read (send) = 227 MBytes/s, bus_write (recv) = 315 MBytes/s of max. 528 MBytes/s
External Myrinet bandwidth: 160 MBytes/s, 90 MBytes/s bidirectional
Benchmark Architectures - Intel E7500 chipset
Benchmark Architectures - E7500 system, Par-Tec (Wuppertal)

4 nodes: Intel(R) Xeon(TM) CPU 2.60 GHz, 2 GB ECC PC1600 (DDR-200) SDRAM, Super Micro P4DPE-G2 with Intel E7500 chipset, PCI 64/66, 2 x Intel(R) PRO/1000 network connection, Myrinet M3F-PCI64B-2
Benchmark Architectures

Leibniz-Rechenzentrum Munich (single-CPU tests):
Pentium IV, 3.06 GHz, with ECC Rambus
Pentium IV, 2.53 GHz, with Rambus 1066 memory
Xeon, 2.4 GHz, with PC2100 DDR SDRAM memory (probably FSB400)

Megware: 8 nodes, dual XEON 2.4 GHz, E7500, 2 GB DDR ECC memory, Myrinet2000, Supermicro P4DMS-6GM

University of Erlangen: Itanium2, 900 MHz, 1.5 MB cache, 10 GB RAM, zx1 chipset (HP)
Benchmark Architectures - Infiniband

Megware: 10 Mellanox ServerBlades, single Xeon 2.2 GHz, 2 GB DDR RAM, ServerWorks GC-LE chipset, InfiniBand 4X HCA
RedHat 7.3, kernel …, MPICH with OSU patch for VIA/InfiniBand, Mellanox firmware 1.14, Mellanox SDK (VAPI), compiler GCC 2.96
Dirac Operator Benchmark (SSE), 16x16³ lattice, single P4/XEON CPU

[Chart: MFLOPS of the Dirac operator and linear algebra kernels]
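The SSE results rely on the SIMD-plus-prefetch technique mentioned on the motivation slide. A hypothetical C sketch of that idea, a simple axpy-like spinor update with SSE intrinsics and a software prefetch one lattice site ahead (not Lüscher's actual Dirac kernel; the 24-floats-per-site layout is an assumption):

  #include <xmmintrin.h>      /* SSE intrinsics, _mm_prefetch */

  /* r[x] += a * s[x] over spinor fields, prefetching the next site
     while the current one is processed. */
  void axpy_prefetch(float *restrict r, const float *restrict s,
                     float a, int npoints)
  {
      const int site = 24;    /* 4 spinor components x 3 colours x re/im */
      const __m128 av = _mm_set1_ps(a);

      for (int i = 0; i < npoints; i++) {
          const float *sp = s + (long)i * site;
          float *rp       = r + (long)i * site;

          if (i + 1 < npoints) {        /* hint: fetch the next site early */
              _mm_prefetch((const char *)(sp + site), _MM_HINT_T0);
              _mm_prefetch((const char *)(rp + site), _MM_HINT_T0);
          }
          for (int k = 0; k < site; k += 4) {
              __m128 x = _mm_loadu_ps(sp + k);
              __m128 y = _mm_loadu_ps(rp + k);
              _mm_storeu_ps(rp + k, _mm_add_ps(y, _mm_mul_ps(av, x)));
          }
      }
  }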
Parallel (1-dim) Dirac Operator Benchmark (SSE), even-odd preconditioned, 2x16³ lattice, XEON CPUs, single-CPU performance

[Chart: MFLOPS per CPU]
Myrinet2000 bandwidth: i860: 90 MB/s, E7500: 190 MB/s
Parallel (1-dim) Dirac Operator Benchmark (SSE), even-odd preconditioned, 2x16³ lattice, XEON CPUs, single-CPU performance, 2, 4 nodes

Performance comparisons (MFLOPS):
                 SSE2         non-SSE
Single node      …            …
Dual node        … (74%)      … (85%)

ParaStation3 software, non-blocking I/O support (MFLOPS, non-SSE):
blocking I/O: …    non-blocking I/O: … (119%)
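The non-blocking gain reported above comes from overlapping the boundary exchange with computation on the interior of the local lattice. A hedged sketch of that pattern in C with MPI (hypothetical buffer layout and function names, not the benchmark code itself):

  #include <mpi.h>

  /* Exchange the two halo slices of a 1-dim domain decomposition with
     non-blocking calls, so the Dirac operator can be applied to the
     interior sites while the messages are in flight. */
  void halo_exchange_overlap(const double *send_lo, const double *send_hi,
                             double *recv_lo, double *recv_hi, int count,
                             int lo_rank, int hi_rank, MPI_Comm comm)
  {
      MPI_Request req[4];

      MPI_Irecv(recv_lo, count, MPI_DOUBLE, lo_rank, 0, comm, &req[0]);
      MPI_Irecv(recv_hi, count, MPI_DOUBLE, hi_rank, 1, comm, &req[1]);
      MPI_Isend((void *)send_lo, count, MPI_DOUBLE, lo_rank, 1, comm, &req[2]);
      MPI_Isend((void *)send_hi, count, MPI_DOUBLE, hi_rank, 0, comm, &req[3]);

      /* ... apply the Dirac operator to interior sites here,
             overlapping computation with communication ... */

      MPI_Waitall(4, req, MPI_STATUSES_IGNORE);

      /* ... now update the boundary sites that use recv_lo / recv_hi ... */
  }

Whether the overlap actually hides the transfer depends on the MPI implementation making progress in the background, which is the point of the ParaStation comparison above.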
Maximal Efficiency of external I/O

Interconnect                                          MFLOPS (w/o comm.)  MFLOPS (with comm.)  Max. bandwidth  Efficiency
Myrinet (i860), SSE                                   …                   …                    …               …
Myrinet/GM (E7500), SSE                               …                   …                    …               …
Myrinet/ParaStation (E7500), SSE                      …                   …                    …               …
Myrinet/ParaStation (E7500), non-blocking, non-SSE    …                   …                    hidden          0.91
Gigabit Ethernet, non-SSE                             …                   …                    …               …
Infiniband, non-SSE                                   …                   …                    …               …
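The efficiency column is presumably the ratio of the two performance figures:

  Efficiency = MFLOPS (with communication) / MFLOPS (without communication)

so the 0.91 entry for the non-blocking ParaStation run means that roughly 91% of the compute-only performance survives once the boundary exchange is included, i.e. most of the communication cost is hidden behind computation.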
Parallel (1-dim) Dirac Operator Benchmark (SSE), even-odd preconditioned, 2x16³ lattice, XEON/Itanium2 CPUs, single-CPU performance, 4 nodes

4 single-CPU nodes, Gbit Ethernet, non-blocking switch, full duplex:
P4 (2.4 GHz, 0.5 MB cache)          SSE:     285 MFLOPS, … MB/s
                                    non-SSE: 228 MFLOPS, … MB/s
Itanium2 (900 MHz, 1.5 MB cache)    non-SSE: 197 MFLOPS, … MB/s
Infiniband interconnect

Link: high-speed serial, 1x, 4x, and 12x, up to 10 Gbit/s bidirectional
Host Channel Adapter (HCA): protocol engine, moves data via messages queued in memory
Switch: simple, low-cost, multistage network
Target Channel Adapter (TCA): interface to I/O controller (SCSI, FC-AL, GbE, ...)

[Diagram: two hosts (CPU, memory controller, system memory, HCA) connected via InfiniBand links and a switch to a TCA with I/O controller]

Chips: IBM, Mellanox; PCI-X cards: Fujitsu, Mellanox, JNI, IBM
Infiniband interconnect
Parallel (2-dim) Dirac Operator Benchmark (Ginsparg-Wilson fermions), XEON CPUs, single-CPU performance, 4 nodes

Infiniband vs Myrinet performance, non-SSE (MFLOPS):
                                      XEON 1.7 GHz, Myrinet, i860 chipset    XEON 2.2 GHz, Infiniband, E7500 chipset
                                      32-bit      64-bit                     32-bit      64-bit
8x8³ lattice, 2x2 processor grid      …           …                          …           …
…x16³ lattice, 2x4 processor grid     …           …                          …           …
Future - Low Power Cluster Architectures?
Future Cluster Architectures - Blade Servers?

NEXCOM - low-voltage blade server: 200 low-voltage Intel XEON CPUs (1.6 GHz, 30 W) in a 42U rack, integrated Gbit Ethernet network

Mellanox - Infiniband blade server: single-XEON blades connected via a 10 Gbit (4X) Infiniband network (MEGWARE, NCSA, Ohio State University)
Conclusions

PC CPUs deliver extremely high sustained LQCD performance using SSE/SSE2 (SIMD + prefetch), provided the local lattice is sufficiently large.
The bottlenecks are memory throughput and external I/O bandwidth; both components are improving (chipsets: i860 → E7500 → E7505 → …; FSB: 400 MHz → 533 MHz → 667 MHz → …; external I/O: Gbit Ethernet → Myrinet2000 → QsNet → Infiniband → …).
Non-blocking MPI communication can improve performance, given an MPI implementation that supports it (e.g. ParaStation).
32-bit architectures (e.g. IA32) have a much better price/performance ratio than 64-bit architectures (Itanium, Opteron?).
Large, dense, low-voltage blade clusters could play an important role in LQCD computing (low-voltage XEON, CENTRINO?, …).
Acknowledgements

We would like to thank Martin Lüscher (CERN) for the benchmark codes and the fruitful discussions about PCs for LQCD, and Isabel Campos Plasencia (Leibniz-Rechenzentrum Munich), Gerhard Wellein (University of Erlangen), Holger Müller (Megware), Norbert Eicker (Par-Tec), and Chris Eddington (Mellanox) for the opportunity to run the benchmarks on their clusters.