Rensselaer Why not change the world?



Computational Center for Nanotechnology Innovations (CCNI): a computational and research center dedicated to computational nanotechnology innovations. A university/industry/state partnership.

Rensselaer Overview
- Educates the leaders of tomorrow for technologically based careers
- Schools: Architecture; Engineering; Humanities and Social Sciences; Management and Technology; Science
- 6,200 resident students: 5,000 undergraduate, 1,200 graduate
- Private institution founded in 1824
- 450 faculty, 1,400 staff

CCNI Vision - 1
The Computational Center for Nanotechnology Innovations (CCNI) will bring together university and industry researchers to address the challenges facing the semiconductor industry as devices shrink in size to the nanometer range.

CCNI Vision - 2
To account for interactions from the scale of atoms and molecules up to the behavior of a complete device, the CCNI must develop a new generation of computational methods to support the virtual design of the next generation of devices. That virtual design will require the massive computing capabilities of the CCNI.

CCNI Vision - 3
The resulting virtual design methods will further expand New York State's leadership position in nanotechnology.

Industry Needs
- Technical and cost constraints are limiting the growth of the semiconductor industry and nanotechnology innovations
- Computational nanotechnology is essential for decreasing the time from concept creation to commercialization

CCNI Goals
- Provide leadership in the development and application of computational nanotechnologies
- Establish partnerships to create world-class competencies in design-to-manufacturing research
- Produce new integrated predictive design tools for nanoscale materials, devices, and systems
- Spur economic growth in the Capital District, New York State, and beyond

Facilities and Capabilities
- Computational systems
  - 100 teraflops of computing
  - Heterogeneous computing environment
- Rensselaer Technology Park
  - 4,300 sq. ft. machine room
  - Business offices
  - Systems and operations support
  - Scientific support

Layout of CCNI

Partners to Build CCNI: Design and Engineering

Partners to Build CCNI: Architect

Partners to Build CCNI: Turner Construction Company

CCNI Construction: Raised Floor

CCNI Construction: Cooling Towers

CCNI Construction: Lobby

CCNI Installation: Blue Gene Racks and Inter-rack Cables

CCNI Installation: Blue Gene Racks Without Covers

CCNI Installation: Blade Racks, Storage Racks, and Network Cables

CCNI – Blue Gene/L
- 16-rack IBM Blue Gene/L system
- #7 on the Top 500 supercomputer list
- 32,768 PowerPC 700 MHz processors
- 12 TB of memory total
- Compute Node Kernel:
  - Simple, flat, fixed-size address space
  - Single-threaded, no paging
  - Familiar POSIX interface
  - Basic file I/O operations
- Two execution modes: coprocessor mode or virtual node mode

Blue Gene/L Hardware
Optimized communication networks:
1. 3D torus
2. Collective network
3. Global barrier/interrupt
4. Gigabit Ethernet (I/O and connectivity)
5. Control network (system boot, debug, monitoring)
Packaging hierarchy (peak performance / memory):
- Chip: 5.6 GF/s, 4 MB
- Compute card: 11.2 GF/s, 2 GB
- Node card: 180 GF/s, 16 or 32 GB
- Rack: 5.7 TF/s (peak), 512 GB or 1 TB
- System (16 racks): 91.7 TF/s (peak), 12 TB
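The per-level peaks multiply through the Blue Gene/L packaging hierarchy. A quick arithmetic sanity check; the per-level counts (2 chips per compute card, 16 compute cards per node card, 32 node cards per rack) are assumed from the standard BG/L configuration rather than stated on the slide:

```python
# Sanity check of the Blue Gene/L peak-performance scaling shown above.
# Per-level counts are the standard BG/L packaging (assumed, not from the slide).
CHIP_GFLOPS = 5.6                              # 2 cores x 700 MHz x 4 flops/cycle

card_gflops = 2 * CHIP_GFLOPS                  # compute card: 11.2 GF/s
node_card_gflops = 16 * card_gflops            # node card: ~180 GF/s
rack_tflops = 32 * node_card_gflops / 1000     # rack: ~5.7 TF/s peak
system_tflops = 16 * rack_tflops               # 16 racks: ~91.7 TF/s peak

print(f"rack: {rack_tflops:.2f} TF/s, system: {system_tflops:.2f} TF/s")
```

The result reproduces the 5.7 TF/s per-rack and 91.7 TF/s system peaks quoted on the slide.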

Blue Gene Architecture

CCNI – Blade Servers
- 462 IBM LS21 blades
- 1,848 Opteron 2.6 GHz cores
- 5.5 TB of memory total
- 4X InfiniBand interconnect (10 Gb/s)
- Red Hat Linux
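The blade totals are internally consistent. A quick check; the inference that each LS21 carries two dual-core Opteron sockets is ours, derived from the totals rather than stated on the slide:

```python
# Consistency check of the blade-cluster totals on this slide.
blades = 462
cores = 1848
mem_tb = 5.5

cores_per_blade = cores // blades           # 4 cores: two dual-core sockets per LS21 (inferred)
mem_gb_per_core = mem_tb * 1024 / cores     # roughly 3 GB of memory per core
print(cores_per_blade, round(mem_gb_per_core, 2))
```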

CCNI – Large-Memory AMD and Intel SMP Servers
- 40 IBM x3755 servers, each with 8 Opteron 2.8 GHz cores and 64 GB of memory
- 2 IBM x3755 servers, each with 8 Opteron 2.8 GHz cores and 128 GB of memory
- 2 IBM x3950 servers: one with 64 Xeon 2.8 GHz cores and 128 GB of memory, one with 32 Xeon 2.8 GHz cores and 256 GB of memory
- All with 4X InfiniBand interconnect and Red Hat Linux

Power SMP Server:
- IBM p590 with 16 POWER processors
- 256 GB of memory
- AIX

CCNI – Disk Storage
File storage:
- Common file system for all hardware: IBM General Parallel File System (GPFS)
- 832 TB of raw disk storage
- 52 IBM x3655 file server nodes
- 26 IBM DS4200 storage controllers
GPFS:
- High-performance parallel I/O
- Cache-consistent shared access
- Aggressive read-ahead and write-behind
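The "cache-consistent shared access" bullet is what lets many clients work on one file at once. A minimal sketch of that access pattern, using plain POSIX positional writes rather than GPFS itself; the block size, file name, and writer count are illustrative:

```python
# Sketch of the shared-file pattern a parallel file system like GPFS serves
# well: each writer owns a disjoint, block-aligned range of one common file,
# so no writer ever touches another's blocks. Plain POSIX calls stand in for
# the parallel file system here; the constants are illustrative.
import os
import tempfile

BLOCK = 4096        # illustrative; GPFS block sizes are configurable
NWRITERS = 4

path = os.path.join(tempfile.mkdtemp(), "shared.dat")
fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, NWRITERS * BLOCK)

for rank in range(NWRITERS):
    payload = bytes([rank]) * BLOCK
    # pwrite takes an explicit offset, so writers share no seek pointer.
    os.pwrite(fd, payload, rank * BLOCK)

os.close(fd)
```

With block-aligned, disjoint ranges, a parallel file system can serve each writer from a different storage server, which is where the aggregate bandwidth of many file-server nodes comes from.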

CCNI Networking
Local fiber connecting CCNI, campus, and NYSERNet; state and international connectivity.

CCNI – Research Areas
- Nanoelectronics modeling and simulation
- Modeling of material structure and behavior
- Modeling of complex flows
- Computational biology
- Biomechanical system modeling
- Multiscale methods
- Parallel simulation technologies

Nanoelectronics Modeling and Simulation
- Functionality of new materials and devices
- Fabrication modeling
- Mechanics of nanoelectronic systems
- Application to the design of new devices
Figures: carbon nanotube T-junctions (Nayak); submicron to nano (Huang)

Modeling of Material Structure and Behavior
- Modeling and design of material systems
- Modeling of energetic materials
- Multiscale modeling of nanostructured polymer rheology
Figure: multiscale modeling of polymer rheology

Modeling of Complex Flows
- Hierarchic modeling of turbulent flows
- Modeling of flows in biological systems

Computational Biology
- Protein structure and interactions with small molecules
- Membranes and membrane protein structure and function
- Modeling cellular processes and communities of cells

Biomechanical System Modeling
- Virtual biological flow facility for patient-specific surgical planning
- Distributed digital surgery
- Biomedical imaging via inverse problem construction

Multiscale Science and Engineering
- Multiscale mathematics and modeling
- Adaptive simulation systems applied to a range of applications

Parallel Simulation Technologies
- High-performance network models
- Optimistic parallel approaches
- Multi-level parallel network models
Figures: geometric model, partition model, partitioned mesh; initial mesh (1,595 tets) adapted to 23,082,517 tets
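The "optimistic parallel approaches" bullet refers to Time Warp-style execution: a logical process executes events eagerly and rolls back when an event arrives in its simulated past. A toy single-process sketch of that mechanism; all class and variable names are illustrative, and this is not the group's simulator:

```python
# Toy Time Warp-style logical process: events are executed eagerly, and a
# "straggler" event arriving in the past triggers a rollback. Illustrative
# only; real optimistic simulators add anti-messages, GVT, fossil collection.
import heapq

class OptimisticLP:
    def __init__(self):
        self.queue = []        # pending events: (timestamp, delta)
        self.processed = []    # log of (timestamp, state_before, delta)
        self.state = 0         # the LP's simulated state
        self.now = 0           # local virtual time

    def schedule(self, ts, delta):
        heapq.heappush(self.queue, (ts, delta))
        if ts < self.now:                  # straggler: undo work past ts
            self.rollback(ts)

    def rollback(self, ts):
        while self.processed and self.processed[-1][0] >= ts:
            ev_ts, state_before, delta = self.processed.pop()
            self.state = state_before      # restore saved state
            heapq.heappush(self.queue, (ev_ts, delta))  # will re-execute
        self.now = self.processed[-1][0] if self.processed else 0

    def run(self):
        while self.queue:
            ts, delta = heapq.heappop(self.queue)
            self.processed.append((ts, self.state, delta))
            self.state += delta
            self.now = ts

lp = OptimisticLP()
for ts, delta in [(1, 10), (3, 10), (5, 10)]:
    lp.schedule(ts, delta)
lp.run()            # optimistically reaches state 30 at time 5
lp.schedule(2, -5)  # straggler at t=2 rolls back the t=3 and t=5 events
lp.run()            # re-executes in order; final state is 25
```

Optimism pays off when stragglers are rare: the LP never waits to learn whether earlier events might still arrive, which is what makes the approach scale on massively parallel machines.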

Thus Begins the CCNI Odyssey

Questions
