SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
TeraGrid Coordination Meeting, June 10, 2010
TeraGrid Forum Meeting, June 16, 2010

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
The Gordon Sweet Spot
Data Mining
- De novo genome assembly from sequencer reads, and analysis of galaxies from cosmological simulations and observations
- Federations of databases, and interaction network analysis for drug discovery, social science, biology, epidemiology, etc.
Predictive Science
- Solution of inverse problems in oceanography, atmospheric science, and seismology
- Modestly scalable codes in quantum chemistry and structural engineering
Common requirements: large shared memory; low-latency, fast interconnect; fast I/O system

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
The Usual (HPC) Suspects are, well, suspect.

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
Typical HPC I/O involves very little random I/O, which is precisely the sweet spot for SSDs and data-intensive computing. For example, a NERSC study* of 50 applications found:
- Random access is rare in HPC applications; I/O is dominated by sequential operations
- Application I/O is dominated by append-only writes
- The majority of applications take a one-file-per-processor approach to disk I/O: each process of a parallel application writes to its own separate file rather than using parallel/shared I/O APIs to write from all processors into a single file (the two styles are contrasted in the sketch below)
* Source: Characterizing and Predicting the I/O Performance of HPC Applications Using a Parameterized Synthetic Benchmark (Shalf et al., SC '08)
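The distinction the study draws is easy to see side by side. The following is a minimal sketch (not taken from the study; file names and sizes are invented for illustration) that writes the same buffer both ways: one file per rank via ordinary POSIX calls, and disjoint regions of a single shared file via collective MPI-IO.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1048576  /* 1 Mi doubles per rank (illustrative size) */

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) buf[i] = (double)rank;

    /* Style 1: one file per process (the pattern most codes in the
       study used): simple, append-only, purely sequential. */
    char fname[64];
    snprintf(fname, sizeof fname, "out.%04d.dat", rank);
    FILE *fp = fopen(fname, "wb");
    fwrite(buf, sizeof(double), N, fp);
    fclose(fp);

    /* Style 2: all ranks write disjoint, rank-offset regions of a
       single shared file using a collective MPI-IO call. */
    MPI_File fh;
    MPI_Offset off = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_open(MPI_COMM_WORLD, "out.shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, off, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

The per-process style is simple and fast at modest scale but produces one file per rank; the collective call lets the MPI library aggregate the writes into a single shared file. Either way, the access pattern stays sequential and append-only, which is exactly the finding above.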

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
Data Intensive Workshop, October 26-29, 2010
Goals:
- Identify "Grand Challenges" in data-intensive science across a broad range of topics
- Identify applications and disciplines that will benefit from Gordon's unique architecture and capabilities
- Invite potential users of Gordon to speak and participate
- Make leaders in data-intensive science aware of what SDSC is doing in this space
- Raise awareness among disciplines poorly served by current HPC offerings
- Better understand Gordon's niche in the data-intensive cosmos and potential usage modes
Logistics: ~100 attendees, including a one-day hands-on session; plenary speakers; astronomy, geoscience, neuroscience, physics, engineering, social science, and data-related technologies

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
Gordon Highlights
- 245 TF; 1,024 nodes; 64 GB/node (64 TB total)
- Sandy Bridge processors: dual socket; core count TBD; 8 flops/clock/core via the AVX instruction set
- 256 TB of enterprise Intel SSD via 64 Nehalem/Westmere I/O nodes (4 TB per node)
- Dual-rail, QDR InfiniBand 3D torus interconnect
- Shared-memory supernodes via ScaleMP vSMP Foundation: 32 compute nodes per supernode; a 128-node version launching in fall; message passing between supernodes coming
- 4 PB Data Oasis disk
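As a quick consistency check on the headline numbers (core count and clock rate were still TBD at this point, so the per-core split below is illustrative only):

```latex
\frac{245\,\text{TF}}{1024\,\text{nodes}} \approx 240\,\text{GF/node}
  = \underbrace{8\,\tfrac{\text{flops}}{\text{clock}\cdot\text{core}}}_{\text{AVX}}
    \times \frac{\text{cores}}{\text{node}} \times \text{clock rate}
```

For example, 16 cores per node would imply a clock near 1.9 GHz, while 12 cores would imply 2.5 GHz.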

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
Gordon Supernode Architecture
- 32 Appro GreenBlade compute nodes: dual-processor Intel Sandy Bridge; 240 GFLOPS; 64 GB/node; core count TBD
- 2 Appro I/O nodes per 32-node supernode: Intel SSD drives, 4 TB each; 560,000 IOPS
- ScaleMP vSMP virtual shared memory: 2 TB RAM aggregate (64 GB x 32); 8 TB SSD aggregate (256 GB x 32)
[Diagram: 240 GF compute nodes with 64 GB RAM each and a 4 TB SSD I/O node, unified by vSMP memory virtualization]
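The aggregates follow directly from the per-node figures: each of the 32 compute nodes contributes its 64 GB of RAM to the vSMP image, and the two 4 TB I/O nodes work out to 256 GB of SSD per compute node:

```latex
32 \times 64\,\text{GB} = 2\,\text{TB RAM}, \qquad
2 \times 4\,\text{TB} = 8\,\text{TB SSD} = 32 \times 256\,\text{GB}
```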

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
Project Milestones
- Dash is now a TeraGrid resource: allocation processes, allocated users, account setup, application environment
- 16-way vSMP acceptance approved
- SDSC is becoming a flash center of excellence in HPC, working closely with Dr. Steve Swanson in UCSD's Center for Magnetic Recording Research (CMRR)
- Education, outreach, and training: Data Intensive Workshop set for October at SDSC; NVM Workshop at UCSD in April
- SC '10 papers submitted; TeraGrid 2010 papers, tutorial, and BOF submitted
- Data-intensive use cases being developed

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
Production Dash as of April 1
Two 16-node virtual clusters:
- SSD-only: 16 nodes; Nehalem, dual socket, 8 cores; 48 GB RAM; 1 TB of SSD (16 drives). SSDs are local to the nodes (see the I/O sketch below); standard queues available
- vSMP + SSD: 16 nodes; Nehalem, dual socket, 8 cores; 48 GB RAM; 960 GB of SSD (15 drives). SSDs are local to the nodes and treated as a single shared resource
- GPFS-WAN
An additional 32 nodes will be brought online after the vSMP 32-way acceptance testing in July.
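Because the SSDs are node-local, a user can gauge their random-read behavior directly from a compute node. Below is a minimal sketch; the file path, transfer size, and read count are illustrative assumptions, not Dash-specific values, and the test file should be much larger than RAM (or opened with O_DIRECT) so the page cache does not serve the reads.

```c
/* Random-read probe for a node-local SSD scratch file (illustrative). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define IOSIZE 4096   /* 4 KiB reads, a typical random-I/O unit */
#define NREADS 10000

int main(void) {
    const char *path = "/scratch/local/testfile";  /* hypothetical path */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t fsize = lseek(fd, 0, SEEK_END);
    if (fsize < IOSIZE) { fprintf(stderr, "file too small\n"); return 1; }

    char *buf = malloc(IOSIZE);
    srand((unsigned)time(NULL));

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NREADS; i++) {
        /* pick a random 4 KiB-aligned offset within the file */
        off_t off = ((off_t)rand() % (fsize / IOSIZE)) * IOSIZE;
        if (pread(fd, buf, IOSIZE, off) != IOSIZE) { perror("pread"); break; }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d random 4 KiB reads in %.3f s -> %.0f IOPS\n",
           NREADS, sec, NREADS / sec);

    free(buf);
    close(fd);
    return 0;
}
```

Run against a file on the node-local SSD and then against the same-sized file on spinning disk; the gap in achieved IOPS is the "sweet spot" the earlier slide refers to.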

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO Gordon Timeline

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
The Road Ahead
- Understanding data-intensive applications and how they can benefit from Gordon's unique architecture
- Identifying new user communities
- Education, outreach, and training
- Managing to the schedule and milestones
- Tracking and assessing flash technology developments
- I/O performance: parallel file systems; InfiniBand/3D torus routing
- Individual roles and responsibilities
- Systems management processes
- Staffing ramp-up in October
- Have fun doing this!

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
TeraGrid Support has been Instrumental
Diane Baxter, Jeff Bennett, Leo Carson, Larry Diegel, Jerry Greenberg, Dave Hart, Jiahua He, Eva Hocks, Tom Hutton, Arun Jagatheesen, Adam Jundt, Richard Moore, Mike Norman, Wayne Pfeiffer, Susan Rathbun, Scott Sakai, Allan Snavely, Mark Sheddon, Shawn Strande, Mahidhar Tatineni, and many others…

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
SDSC's Summer Education Program
TeacherTech summer workshops:
- Conference of New Teachers in Genomics
- Modeling Instruction in High School Physics: An Introduction
- Introduction to Adobe Photoshop and the World of Digital Art
- TeacherTECH Begins a Collaboration with UCSD-TV – Tune In!
- Newton's Laws of Gravity: From the Celestial to the Terrestrial
- Earthquake Science: Beyond Static Images and Flat Maps
Student summer workshops:
- Exploring the World of Digital Art and Design
- Introduction to Matlab: An Interactive Visual Math Experience
- UCSD Biotechnology Academy
- "Full Color Heroes" in Digital Art & Design: Comic Book Coloring!
- 2D – 3D Insani-D!
- 3D Photography: Experience It!
- Photography + Photoshop = Fun!
- Exploring Digital Photography and the Wonders of Photoshop
- Introduction to Maya and 3D Modeling

SAN DIEGO SUPERCOMPUTER CENTER at the UNIVERSITY OF CALIFORNIA, SAN DIEGO
SDSC's Summer Education Program (cont.)
Research Experience for High School Students (REHS), 21 students:
- Supercomputer-based Workflow for Managing Large Biomedical Images
- Refinement of Data Mining Software and Application to Space Plasmas for Data Analysis and Visualization
- Sonification of UCSD Campus Energy Consumption
- Visualization and 3D Content Creation
- The Cooperative Association for Internet Data Analysis Web Development Intern
- Documentation Assistant – Health Info Databases Project