InfiniBand in EDA (Chip Design). Glenn Newell, Sr. Staff IT Architect, Synopsys. www.openfabrics.org

Presentation transcript:

Slide 1: InfiniBand in EDA (Chip Design). Glenn Newell, Sr. Staff IT Architect, Synopsys

Slide 2: Agenda
- Synopsys + Synopsys computing
- EDA design flow vs. data size and communication
- High Performance Linux Clusters at Synopsys
- Storage is dominant vs. inter-process communication
- Performance increases with InfiniBand
- Tell the world
- Next steps

Slide 3: Synopsys
- "A world leader in semiconductor design software"
- Company founded: 1986
- Revenue for FY 2006: $1.096 billion
- Employees for FY 2006: ~5,100
- Headquarters: Mountain View, California
- Locations: more than 60 sales, support, and R&D offices worldwide in North America, Europe, Japan, the Pacific Rim, and Israel

Slide 4: Synopsys IT (2007)
- Over 60 offices worldwide
- Major data centers: 5 at HQ; Hillsboro, OR; Austin, TX; Durham, NC; Nepean, Canada; Munich, Germany; Hyderabad, India; Yerevan, Armenia; Shanghai, China; Tokyo, Japan; Taipei, Taiwan
- 2 petabytes of NFS storage
- ~15,000 compute servers (Linux, 4,000 Solaris, 700 HP-UX, 300 AIX)
- Grid farms: 65 farms composed of 7,000 machines; 75% SGE, 25% LSF
- Interconnect: GigE for storage, Fast Ethernet for clients
- #242 on the Nov. '06 Top500.org list; TFlops on 1,200 processors

Slide 5: Design Flow vs. Data Size and Communication (2007)
- RTL: "relatively" small data sets
- Physical layout data: up to 300 GB
- Inter-process communication is "small" compared to file I/O
- Post optical proximity correction (OPC): 300 GB to >1 TB
- OPC adds complex polygons
- Mask machines need flat data, with no hierarchy or pattern replication (see the sketch after this slide)
- The physical world is "messy" (FFT + FDTD)
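The flat-data requirement is the main reason post-OPC file sizes explode: every replicated cell must be written out polygon by polygon. A minimal back-of-the-envelope sketch of that blow-up; the polygon counts and per-record byte costs are invented for illustration, not taken from the slides:

```python
# Illustrative only: why flattening hierarchical layout data inflates file size.
# The polygon counts and byte sizes below are invented for the example.

BYTES_PER_POLYGON = 32      # assumed storage cost of one fractured polygon
BYTES_PER_PLACEMENT = 16    # assumed storage cost of one cell placement record

def hierarchical_size(polys_per_cell: int, placements: int) -> int:
    """Cell geometry stored once, plus one lightweight record per placement."""
    return polys_per_cell * BYTES_PER_POLYGON + placements * BYTES_PER_PLACEMENT

def flat_size(polys_per_cell: int, placements: int) -> int:
    """Mask writers need every polygon written out explicitly for every placement."""
    return polys_per_cell * placements * BYTES_PER_POLYGON

if __name__ == "__main__":
    polys, places = 1_000_000, 500   # one OPC'd block reused 500 times (made-up numbers)
    h = hierarchical_size(polys, places)
    f = flat_size(polys, places)
    print(f"hierarchical: {h / 1e6:.1f} MB, flat: {f / 1e9:.1f} GB, blow-up: {f / h:.0f}x")
```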

Slide 6: High Performance Linux Clusters Progression

Name | Interconnect | Storage | Nodes
HPLC1 | Non-blocking GigE | Dedicated NFS | 112 cores, 52 nodes
HPLC2 (mobile) | Myrinet | GPFS (IP) | 64 cores, 29 nodes
HPLC3 | 4X SDR IB | Lustre (native IB) | 76 cores, 19 nodes

[Photos: HPLC2 and HPLC3]

Slide 7: HPLC3 vs. HPLC1 (Why IB + Lustre?)

Slide 8: Why? HPLC1 (NFS + GigE)
- Large CPU-count fracture runs overtax the NFS server CPU
- Maximum read bandwidth is 90 MB/sec (see the probe sketch after this slide)
[Chart: Explode and Fracture phases]
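One way to see the single-server ceiling this slide describes is a crude parallel-read probe: aggregate throughput flattens near the NFS server's limit no matter how many clients read at once. A rough sketch, assuming pre-created large test files under a placeholder mount point (not a real cluster path):

```python
#!/usr/bin/env python3
"""Crude aggregate-read-bandwidth probe: N worker processes stream large files
from a shared filesystem mount and report the combined MB/s.
The mount point and file names are placeholders, not real cluster paths."""
import multiprocessing as mp
import os
import time

MOUNT = "/mnt/shared"                                            # placeholder mount to test
FILES = [os.path.join(MOUNT, f"big_{i}.dat") for i in range(8)]  # pre-created test files
BLOCK = 4 * 1024 * 1024                                          # 4 MiB read size

def read_file(path: str) -> int:
    """Stream one file sequentially with unbuffered reads; return bytes read."""
    total = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                return total
            total += len(chunk)

if __name__ == "__main__":
    # For meaningful numbers, drop the client page cache between runs.
    start = time.time()
    with mp.Pool(processes=len(FILES)) as pool:
        totals = pool.map(read_file, FILES)
    elapsed = time.time() - start
    print(f"{len(FILES)} readers: {sum(totals) / elapsed / 1e6:.0f} MB/s aggregate")
```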

Slide 9: Why? HPLC3 (Lustre + IB)
- 64-CPU fracture
- Lustre splits traffic across resources (see the toy model after this slide)
- 250 MB/sec maximum read bandwidth
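The improvement comes from striping: file data is spread across several object storage servers (OSS), so reads are no longer funneled through a single NFS head. A toy capacity model of that effect; every bandwidth number below is an assumed example value, not a measurement from HPLC1 or HPLC3:

```python
# Toy model: aggregate read bandwidth is capped by the weakest tier.
# Every number below is an assumed example value, not a measurement.

def aggregate_bw_mb(clients: int, servers: int,
                    client_link_mb: float, server_bw_mb: float,
                    backend_mb: float) -> float:
    """Aggregate throughput = min(total client links, total server capability, disk backend)."""
    return min(clients * client_link_mb, servers * server_bw_mb, backend_mb)

# Single-headed NFS: many clients, but one server is the choke point.
print("single NFS server:", aggregate_bw_mb(50, 1, 110, 100, 2000), "MB/s")

# Striped parallel filesystem (e.g. Lustre): the same clients fan out across many OSS nodes.
print("8-way striped FS: ", aggregate_bw_mb(50, 8, 110, 100, 2000), "MB/s")
```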

Slide 10: Storage + Interconnect Option Comparisons (+ Myrinet)

Slide 11: High Performance Linux Clusters, Production Model

Name | Interconnect | Storage | Nodes
HPLC5 | 4X InfiniBand, 192-port DDR-capable switch | DataDirect Networks + Lustre (8 OSS, 2 MDS, 16 TB) | 200 cores, 50 nodes

- 17x the storage performance of HPLC1
- 6x the storage performance of HPLC3
- IB gateway to the enterprise network (6x GigE) means no dual-homed hosts (see the rough check after this slide)
- "Fantastic" performance
- More linear scaling for distributed-processing applications
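The gateway bullet is about topology: compute nodes attach only to the IB fabric, and a 6x GigE gateway bridges to the enterprise network, so no node needs a second Ethernet interface. A rough feasibility check with assumed per-link throughput (not measured figures):

```python
# Rough check: can a 6 x GigE gateway carry enterprise-bound traffic for the whole cluster?
# Per-link throughput and node count are assumed example values.

GIGE_MB_PER_LINK = 110.0   # assumed usable MB/s per GigE link
GATEWAY_LINKS = 6
NODES = 50

gateway_mb = GIGE_MB_PER_LINK * GATEWAY_LINKS
per_node_share_mb = gateway_mb / NODES
print(f"gateway aggregate: {gateway_mb:.0f} MB/s, "
      f"fair share per node: {per_node_share_mb:.1f} MB/s "
      f"(bulk scratch I/O stays on the IB/Lustre side)")
```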

Slide 12: State of the IB Cluster
- Our customer-facing engineers typically see:
  - 10x improvement for post-layout tools over Fast Ethernet
  - 3x improvement for post-OPC tools over GigE NFS or direct-attached storage
- So we are evangelizing IB + parallel file systems (Lustre) with our customers:
  - User manuals
  - Papers, posters, and presentations at conferences
  - Presentations and site visits with customers
  - Pushing storage vendors to support IB to the client (vs. 10 GigE + TOE)
- But...

Slide 13: Estimated EDA Relative CPU Cycles Required
- 2007 to 2009: ~10x (65 nm to 45 nm)
- 2007 to 2012: ~100x (65 nm to 22 nm)
(A quick scaling sanity check follows this slide.)
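A quick sanity check on those projections, assuming ideal area scaling of device count with feature size. The 10x and 100x figures are the slide's; the split into device count versus per-device effort (OPC, verification, etc.) is only illustrative:

```python
# Quick sanity check: how much of the projected CPU-cycle growth comes from sheer
# device count versus growing per-device effort (OPC, verification, etc.)?
# Assumes ideal area scaling; the 10x/100x projections are the slide's, not derived here.

def density_scaling(old_nm: float, new_nm: float) -> float:
    """Ideal increase in device count per die when the feature size shrinks."""
    return (old_nm / new_nm) ** 2

for old, new, projected_cycles in [(65, 45, 10), (65, 22, 100)]:
    devices = density_scaling(old, new)
    print(f"{old} nm -> {new} nm: ~{devices:.1f}x more devices, "
          f"~{projected_cycles}x projected CPU cycles, "
          f"so ~{projected_cycles / devices:.1f}x more work per device")
```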

Slide 14: Next Steps
- "Inside the box" is changing:
  - Multicore
  - Hardware acceleration (GPU/co-processor merges with the CPU)
  - Micro-architecture
- Applications are changing to deal with the above and with increased data set sizes
- Things for IT to explore:
  - Other "new" parallel file systems (e.g., Gluster)
  - 12X DDR IB
  - 10 Gb uplinks (see the data-rate comparison after this slide)
  - An IB Top500 entry? ;-)
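For context on the interconnect options listed above, a small helper comparing nominal usable data rates. InfiniBand SDR and DDR use 8b/10b encoding, so the usable data rate is 80% of the signalling rate; the Ethernet entries are nominal line rates. Back-of-the-envelope only, not benchmark results:

```python
# Back-of-the-envelope usable data rates for the interconnects mentioned on this slide.
# InfiniBand SDR/DDR use 8b/10b encoding, so usable data rate = 0.8 * signalling rate;
# the Ethernet entries are nominal line rates.

IB_LANE_SIGNALLING_GBPS = {"SDR": 2.5, "DDR": 5.0}

def ib_data_rate_gbps(width: int, generation: str) -> float:
    """Usable data rate for an InfiniBand link of the given width (1X, 4X, 12X)."""
    return width * IB_LANE_SIGNALLING_GBPS[generation] * 0.8

links = {
    "GigE": 1.0,
    "10 GigE": 10.0,
    "4X SDR IB": ib_data_rate_gbps(4, "SDR"),
    "4X DDR IB": ib_data_rate_gbps(4, "DDR"),
    "12X DDR IB": ib_data_rate_gbps(12, "DDR"),
}
for name, gbps in links.items():
    print(f"{name:>11}: {gbps:5.1f} Gb/s usable (~{gbps / 8:.1f} GB/s)")
```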