Leader in Next Generation Ethernet
Outline
–Where is iWARP Today?
–Some Proof Points
–Conclusion
–Questions
Latency in Multi-Processor/Multi-Core Systems
Industry Leading Multiple Connection Throughput
Predominant RDMA command for: Intel MPI, MVAPICH2, HP MPI, Scali MPI, OpenMPI, MPICH2, and iSER
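For context on what these MPI stacks issue under the hood, below is a minimal sketch of how an RDMA command is posted through the libibverbs interface that both iWARP and InfiniBand adapters expose. RDMA Write is used as a representative command (an assumption; the slide does not name one), and the queue pair, registered buffer, and the peer's remote address/rkey are assumed to have been exchanged during setup.

```c
/* Minimal sketch: posting a one-sided RDMA Write through libibverbs.
 * Assumes qp, the local registered buffer/lkey, and the peer's
 * remote_addr/rkey were exchanged during connection setup (not shown). */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int post_rdma_write(struct ibv_qp *qp, void *buf, size_t len, uint32_t lkey,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,   /* local registered buffer */
        .length = (uint32_t)len,
        .lkey   = lkey,
    };
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;                  /* application-chosen cookie */
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write, no remote CPU */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* ask for a completion entry */
    wr.wr.rdma.remote_addr = remote_addr;        /* peer's registered address */
    wr.wr.rdma.rkey        = rkey;               /* peer's remote key */

    return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
}
```

Because the same work-request path is exposed on both fabrics, the MPI implementations listed above can drive iWARP and InfiniBand adapters through identical verbs calls.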
In Production Applications Now…
10Gb iWARP Ethernet achieves same results as InfiniBand
[Chart: Abaqus/Explicit finite element analysis test suite (E1-E6), InfiniBand vs. NetEffect 10Gb]
In Production Applications Now…
10Gb iWARP Ethernet achieves same results as InfiniBand
–Unmodified
–Automatically scaled across nodes
–Wall-clock times equal to or better than 20-Gbps InfiniBand DDR
[Chart: PAM-CRASH crash simulation, InfiniBand (20Gbps) vs. NetEffect NE020 10GbE]
In Production Applications Now…
10Gb iWARP Ethernet achieves same results as InfiniBand
Fluent FL5Lx Benchmarks: Computational Fluid Dynamics (CFD)

Fluent rating by core count:
        Cores          8        16        32        64
FL5L1   InfiniBand   978.8    1938.3    4004.6    6843.6
        NE020 10GbE 1008.3    1926.9    3901.4    6861.7
FL5L2   InfiniBand   568.1    1145.5    2355.8    4702.0
        NE020 10GbE  611.5    1163.1    2359.8    4821.9
FL5L3   InfiniBand    96.0     191.2     379.0     742.6
        NE020 10GbE    n/a     195.3     385.3     784.7

Fluent 'rating': the number of benchmarks that can be run (in sequence) in a 24-hour period. A higher rating means faster performance.
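To make the 'rating' metric concrete, the slide's definition implies rating = 86400 / (wall-clock seconds per run), i.e. how many back-to-back runs fit in a 24-hour day. The sketch below encodes that conversion; the elapsed time used in the example is illustrative, not a figure from the slide.

```c
/* Minimal sketch of the Fluent "rating" as defined on the slide:
 * the number of benchmark runs that fit, back to back, in 24 hours. */
#include <stdio.h>

static double fluent_rating(double elapsed_seconds)
{
    return 86400.0 / elapsed_seconds;   /* runs per 24-hour period */
}

int main(void)
{
    /* An ~86 s run yields a rating of roughly 1000, in the same range
     * as the 8-core FL5L1 numbers above (illustrative value only). */
    printf("rating = %.1f\n", fluent_rating(86.0));
    return 0;
}
```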
Landmark Nexus Performance Results
In Production Applications Now…
Even 1Gb iWARP Ethernet achieves results on par with InfiniBand in some applications
Landmark Nexus reservoir simulation (dominated by latency and message rate)
[Charts: wall-clock time (h:mm:ss) vs. system configuration (processors-nodes), 2-2 through 32-8, for the NetEffect 1GbE adapter, the NetEffect 10GbE adapter, and 10Gb InfiniBand; standard 1GbE reference points: 8 cores = 2:27:17, 16 cores = 2:58:41]
Conclusion
We joined forces 2.5 years ago to make a single RDMA API work across multiple fabrics from multiple vendors, and to make it interoperable.
InfiniBand & Ethernet Working Together
–Multiple IB Vendors
–Multiple iWARP Vendors
It is now a reality!
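As an illustration of the 'single RDMA API' point, here is a minimal sketch of opening a connection through librdmacm, the OpenFabrics connection manager layered on the verbs interface; the same calls resolve to InfiniBand or iWARP hardware transparently. The address, port, and timeout are placeholders, and error handling plus the rest of the connection handshake are omitted.

```c
/* Minimal sketch: fabric-agnostic connection setup via librdmacm.
 * The same code path runs over InfiniBand or iWARP adapters. */
#include <stdio.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    struct rdma_cm_id *id;
    struct rdma_addrinfo hints = { 0 }, *res;

    /* Identical calls regardless of the underlying fabric */
    rdma_create_id(ch, &id, NULL, RDMA_PS_TCP);

    hints.ai_port_space = RDMA_PS_TCP;
    rdma_getaddrinfo("192.0.2.10", "7471", &hints, &res);   /* placeholder address/port */

    rdma_resolve_addr(id, NULL, res->ai_dst_addr, 2000 /* ms */);
    /* ... wait for RDMA_CM_EVENT_ADDR_RESOLVED, then rdma_resolve_route(),
     *     create a queue pair with rdma_create_qp(), and rdma_connect() ... */

    rdma_freeaddrinfo(res);
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}
```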
iWARP has moved “from benchmarks to applications”
Questions
Thank You! www.neteffect.com