Big Red II & Supporting Infrastructure
Craig A. Stewart, Matthew R. Link, David Y. Hancock
Presented at the IUPUI Faculty Council Information Technology Subcommittee meeting, 9 April 2013
Research Technologies, UITS, Indiana University

License Terms
Please cite as: Stewart, C.A. Big Red II & Supporting Infrastructure, Presentation. Presented at: IUPUI Faculty Council Information Technology Subcommittee meeting (Indianapolis, IN, 9 April 2013).
Items indicated with a © are under copyright and used here with permission. Such items may not be reused without permission from the holder of copyright except where license terms noted on a slide permit reuse.
Except where otherwise noted, contents of this presentation are copyright 2013 by the Trustees of Indiana University. This document is released under the Creative Commons Attribution 3.0 Unported license (https://creativecommons.org/licenses/by/3.0/). This license includes the following terms: You are free to share – to copy, distribute, and transmit the work – and to remix – to adapt the work – under the following conditions: attribution – you must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). For any reuse or distribution, you must make clear to others the license terms of this work.

Overview
The big goal: transform the way IU generally uses advanced computing to advance basic research, scholarship, translational research, and artistic expression.
- Big Red II
- IU Bloomington Data Center
- IU Research Network
- Data Capacitor II
- Research Home Directories
- Quarry is still here!!!
- IU Cyberinfrastructure Gateway
(Photo: Flickr user hassanrafeek, CC license)

Big Red II – What is it?

Big Red II – System Specifications
System size – 1,056 nodes (264 blades):
- 344 XE6 compute nodes (86 blades): 2.5 GHz 16-core AMD Opteron (Abu Dhabi) processors, 64 GB system memory
- 676 XK7 compute nodes (169 blades): 2.3 GHz 16-core AMD Opteron (Interlagos), NVIDIA K20 (Kepler) GPU, 32 GB system memory, 5 GB video memory
- 36 service & I/O nodes
File system & storage:
- Boot RAID
- 180 TB Lustre file system (1 GB/s)
Interconnect: Gemini
Topology: 11 x 6 x 8 3D torus
Peak performance: 1 PFLOPS
Total memory size: 43.6 TB
Total XE6 cores: 11,008
Total XK7 cores: 10,816
Total x86-64 cores: 21,824 (see the arithmetic sketch below)
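
As a quick cross-check of the core counts above (a sketch, assuming two 16-core Abu Dhabi processors per XE6 node and one 16-core Interlagos processor per XK7 node, which is what the listed totals imply):

```latex
\begin{align*}
\text{XE6 cores}    &= 344 \times 2 \times 16 = 11{,}008 \\
\text{XK7 cores}    &= 676 \times 1 \times 16 = 10{,}816 \\
\text{x86-64 cores} &= 11{,}008 + 10{,}816 = 21{,}824
\end{align*}
```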

Big Red II – Logical Diagram

Cray XE/XK Node Functions

Big Red II – Can I log in yet?
- Stability testing & early user mode – April 2013
- Dedication – April 26, 2013
- General availability – sometime after that

Big Red II – What will we do with it?
- HPC applications in Extreme Scalability Mode (ESM)
- ISV application support through Cluster Compatibility Mode (CCM)
- High Throughput Computing (HTC)
- PGI/Intel compilers
- OpenACC support (see the sketch after this list)
- GPU-enabled applications: NAMD, AMBER, CHARMM, GROMACS, NWChem, MILC, MATLAB
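
To make the OpenACC bullet concrete, here is a minimal, hypothetical sketch of directive-based GPU offload of the sort the PGI compilers support on the XK7 nodes. The kernel, array size, and compile line are illustrative assumptions, not taken from any Big Red II application.

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal OpenACC example: offload a vector update to the GPU.
 * Illustrative only; compile with the PGI compiler, e.g. something
 * like "pgcc -acc openacc_example.c" (flags may vary by installation). */
int main(void)
{
    const int n = 1 << 20;              /* 1M elements (illustrative size) */
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    if (!x || !y) return 1;

    for (int i = 0; i < n; i++) {       /* initialize on the host */
        x[i] = 1.0;
        y[i] = 2.0;
    }

    /* The directive asks the compiler to copy the arrays to the GPU,
     * run the loop there, and copy y back when the region ends. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = 2.5 * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);        /* expect 4.5 */
    free(x);
    free(y);
    return 0;
}
```

The GPU-enabled packages listed above (NAMD, AMBER, and so on) ship with their own GPU back ends; directives like this are mainly relevant to user-written code.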

Big Red II – Similar Systems
- Blue Waters at NCSA – XE6/XK7, PFLOPS scale
- Titan at ORNL – XK7, 20 PFLOPS

Supporting Infrastructure – Networking

Supporting File Systems – Data Capacitor II
- 2 SFA12K40 storage systems (multiple drive chassis each)
- 1,680 total 3 TB SATA drives – 5,040 TB raw capacity
- 16 Object Storage Servers (96 GB RAM)
- 2 Metadata Servers (192 GB RAM)
- SFA6620 storage system with 15K RPM SAS drives & 20 x 3 TB SATA drives
- 8 Lustre routers – provide access to DC II via 10 GbE
- Bandwidth – >20 GB/s via Ethernet; >40 GB/s via InfiniBand (a striping sketch follows below)
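
A file on a Lustre scratch system like this one reaches the aggregate bandwidth above only when it is striped across many object storage targets. The following is a hypothetical sketch using the standard Lustre user-space API; the path is illustrative, and the stripe parameters are assumptions rather than recommended Data Capacitor II settings.

```c
#include <stdio.h>
#include <lustre/lustreapi.h>   /* Lustre user-space API (lustre client/devel packages) */

int main(void)
{
    /* Illustrative path only -- not an actual Data Capacitor II directory. */
    const char *path = "/N/dc2/scratch/username/big_output.dat";

    /* Create the file striped over 16 OSTs with a 4 MiB stripe size, so large
     * sequential writes are spread across many object storage servers. */
    int rc = llapi_file_create(path,
                               4ULL << 20,  /* stripe size: 4 MiB */
                               -1,          /* let Lustre choose the starting OST */
                               16,          /* stripe count */
                               0);          /* default (RAID0) striping pattern */
    if (rc != 0) {
        fprintf(stderr, "llapi_file_create failed: %d\n", rc);
        return 1;
    }
    printf("created %s with a 16-way stripe\n", path);
    return 0;
}
```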

Supporting File Systems – Data Capacitor II

Supporting File Systems – Home Directories
- The home file system uses the DDN SFA12K20E with embedded GridScaler (GPFS) as NAS storage
- Each SFA12K20E has 5 60-slot chassis: 140 3 TB NL-SAS drives for user data and 15K RPM SAS drives for metadata
- Each system has 2 servers for non-NFS client access
- Each system has multiple 10 GbE connections for clients
- Native GPFS clients are supported and can be added if licensed
- System 1 – IUB; System 2 – IUPUI

Quarry is still here!

Condominium Computing – on Quarry
Condominium computing: UITS-managed, departmentally owned research IT systems or system components.
- You can purchase nodes that are compatible with IU's Quarry cluster, have them installed in the secure IUB Data Center, have them available when you want to use them, and have them managed, backed up, and secured by UITS Research Technologies staff.
- You get access to your nodes within seconds of requesting their use.
- When your nodes are not in use, they become available to others in the IU community, expanding the computing capability available to IU while conserving natural resources and energy.

IU Cyberinfrastructure Gateway

Questions?

References
- PTI – pti.iu.edu
- Research Technologies
- Cray XE6/XK7
- NVIDIA K20
- Condominium computing
- NCSA Blue Waters
- ORNL Titan
- DataDirect Networks