The Great White SHARCNET. SHARCNET: Building an Environment to Foster Computational Science (Shared Hierarchical Academic Research Computing Network).


The Great White SHARCNET

SHARCNET: Building an Environment to Foster Computational Science (Shared Hierarchical Academic Research Computing Network)

- How do you study the first milliseconds of the universe?
- And then change the rules?

- How do you study materials that can't be made in a lab (yet)?
- On the surface of the sun?
- At the centre of the earth?

- How do you study the effects of an amputated limb on blood flow through the heart?
- How do you repeat the experiment?
- Where do you get volunteers?

Increasingly, the answer to "how" in these questions is: by using a computer. In addition to experimental and theoretical science, we now have computational science!

So what does this have to do with Western and SHARCNET? And with Great White?

SHARCNET was created to meet the needs of computational researchers in Southwestern Ontario. Western is the lead institution and the administrative home of SHARCNET.

Vision To establish a world-leading, multi-university and college, interdisciplinary institute with an active academic-industry partnership, enabling forefront computational research in critical areas of science, engineering and business.

Focus on Infrastructure and Support
- Support the development of computational approaches for research in science, engineering, business, and the social sciences
- Computational facilities
- People

Computational Focus
- Provide world-class computational facilities and support
- Explore new computational models
  - Build on the "Beowulf" model

"Beowulf"
- 6th-century Scandinavian hero
- Yes, but not in this context: a collection of separate computers connected by standard communications
- What's so interesting about this?

First Beowulf
- First explored in 1994 by two researchers at NASA
- Built out of "common" computers
  - 16 Intel '486 processors
  - 10 Mb/s Ethernet
- Built to meet specific computational needs

Beowulf "Philosophy"
- Build out of "off the shelf" computational components
- Take advantage of the increased capabilities, reliability, and cost effectiveness of mass-market computers
- Take advantage of parallel computation
- "Price/performance": "cheap" supercomputing!

Growth of Beowulf
- Growth in the number and size of "Beowulf clusters"
- Continued development of mass-produced computational elements
- Continued development of communication technologies
- Development of new parallel programming techniques (see the sketch below)
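
One of the parallel programming techniques behind Beowulf clusters is message passing; a later slide notes that most SHARCNET programs use MPI. Below is a minimal sketch of the kind of MPI program such a cluster runs, with each process reporting where it landed. The file name and process count in the note that follows are illustrative, not taken from the slides.

```c
/* Minimal sketch of a Beowulf-style parallel program using MPI.
 * Each process (typically one per processor in the cluster)
 * reports its rank and the node it is running on. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* join the parallel job      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */
    MPI_Get_processor_name(name, &name_len); /* which cluster node we're on */

    printf("Process %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

A sketch like this would conventionally be compiled with an MPI wrapper compiler (e.g., mpicc hello.c -o hello) and launched across the cluster with mpirun, one process per processor; the same source runs whether the "cluster" is 16 commodity '486 PCs or a modern high-performance machine.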

SHARCNET
- Sought to exploit the "Beowulf" approach
- High-performance clusters: "Beowulf on steroids"
  - Powerful "off the shelf" computational elements
  - Advanced communications
- Geographical separation (local use)
- Connect clusters: emerging optical communications

Great White!
- Processors
  - 4 Alpha processors at 833 MHz per node (4p-SMP)
  - 4 GB of memory per node
  - 38 SMP nodes: a total of 152 processors
- Communications
  - 1 Gb/sec Ethernet
  - 1.6 Gb/sec Quadrics interconnect
- November 2001: #183 in the world
  - Fastest academic computer in Canada
  - 6th-fastest academic computer in North America

Great White

SHARCNET
- Extend the "Beowulf" approach to clusters of high-performance clusters
- Connect clusters: "clusters of clusters"
  - Build on emerging optical communications
  - Initial configuration used optical equipment from the telecommunications industry
- Collectively a supercomputer!

Clusters Across Universities
[Diagram: clusters at Guelph, McMaster (Mac), and UWO linked by optical communication]

Experimental Computational Environment
[Diagram: Deeppurple (48 processors) and Greatwhite (152 processors) linked by optical communication at 8 Gb/sec]

Technical Issues
- Resource management
  - How to intelligently allocate and monitor resources
  - Availability
    - Failure rates multiplied by the number of systems
    - Job migration, checkpointing (see the sketch below)
  - Performance
- User management
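
The slides list job migration and checkpointing as issues but do not show how SHARCNET addressed them, so the sketch below only illustrates the general idea of application-level checkpointing: the program periodically writes its state to disk so a job interrupted by a node failure (or migrated to another cluster) can resume from the last checkpoint. The SimState struct, file name, and intervals are hypothetical.

```c
/* Sketch of application-level checkpointing (illustrative only).
 * The simulation state is saved every ckpt_every steps; on startup
 * the program resumes from the checkpoint file if one exists. */
#include <stdio.h>

typedef struct {
    long step;          /* current iteration                 */
    double data[1024];  /* stand-in for the real solver state */
} SimState;

static int save_checkpoint(const SimState *s, const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t ok = fwrite(s, sizeof *s, 1, f);
    fclose(f);
    return ok == 1 ? 0 : -1;
}

static int load_checkpoint(SimState *s, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;               /* no checkpoint: start fresh */
    size_t ok = fread(s, sizeof *s, 1, f);
    fclose(f);
    return ok == 1 ? 0 : -1;
}

int main(void)
{
    SimState s = { 0 };
    const char *ckpt = "state.ckpt";  /* hypothetical file name */
    const long total_steps = 100000;
    const long ckpt_every  = 1000;

    if (load_checkpoint(&s, ckpt) == 0)
        printf("Resuming from step %ld\n", s.step);

    for (; s.step < total_steps; s.step++) {
        /* ... advance the simulation by one step ... */
        if (s.step % ckpt_every == 0)
            save_checkpoint(&s, ckpt);
    }
    return 0;
}
```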

Technical Issues
- Data management
  - Must get the data to the processors
  - Some data sets are too large to move
  - Some HPC centres are now focusing on "Data Grids" vs. "Computation Grids"
- Most SHARCNET programs use MPI, which can run either over TCP or over the Quadrics transport layer (see the sketch below)
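
As a concrete illustration of the last point, here is a minimal MPI point-to-point sketch. The same source runs unchanged whether the MPI library underneath is layered over TCP/Ethernet or over the Quadrics interconnect; the transport is selected by the MPI implementation at build or launch time, not in user code. The value sent is arbitrary.

```c
/* Minimal MPI point-to-point example: rank 0 sends one double to rank 1.
 * The interconnect (TCP or Quadrics) is hidden behind the MPI interface. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double payload = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        payload = 3.14159;
        MPI_Send(&payload, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&payload, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %f from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}
```

Because the interconnect is hidden behind the MPI interface, moving a program from commodity Ethernet to the 1.6 Gb/sec Quadrics network changes its performance but not its source code.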

SHARCNET: More than this!
- Not just computational facilities
- Focus on computational resources to provide innovative science
  - Support
  - Build a research community
- CFI-OIT: infrastructure
- ORDCF: people and programs

Objectives
- Provide state-of-the-art computational facilities
- Develop a network of HPC clusters
- Facilitate & enable world-class computational research
- Increase the pool of people skilled in HPC techniques & processes
- Evaluate & create a computational Grid as a means of providing supercomputing capabilities
- Achieve long-term self-sustainability
- Create major business opportunities in Ontario

Operating Principles
- Partnership among institutions
- Shared resources
- Equality of opportunity
- Services at no cost to researchers

Partners
Academic:
- University of Guelph
- McMaster University
- The University of Western Ontario
- University of Windsor
- Wilfrid Laurier University
- Sheridan College
- Fanshawe College
Industry:
- Hewlett Packard
- Quadrics Supercomputing World
- Platform Computing
- Nortel Networks
- Bell Canada

Support Programs: Part 1
- Chairs program: up to 14 new faculty
- Fellowships: approx. $1 million per year
  - Undergraduate summer jobs
  - Graduate scholarships
  - Postdoctoral fellowships

Support Programs: Part 2
- Technical staff
  - System administrators at sites
  - HPC consultants at sites
- Workshops
- Conferences

Results?
- Researchers from a variety of disciplines
  - Chemistry
  - Physics
  - Economics
  - Biology
- Beginning to "ramp up"

Chemistry
- Model chemical systems and processes at the atomic and electronic levels
  - New quantum chemical techniques
  - New computational methods
- For example: molecular dynamics simulation of hydrogen formation in a single-site olefin polymerization catalyst

Economics
- Research on the factors that influence people to retire
- Model incorporates both health and financial factors
  - Previous models looked at one or the other
  - Much more complex: parameters are difficult to estimate

Materials
- Understand friction and lubrication at the molecular and atomic levels
- Friction between polymer-bearing surfaces
- Simulation: two polymer-bearing surfaces in sliding motion under good solvent conditions
  - Green: upper wall
  - Red: lower wall

Astrophysics
- Galaxy merger
- Approximately 100,000 particles (stars and dark matter)

Astrophysics
- Forming giant planets from protoplanetary disks
- Shows the evolution of a disk over about 200 years

Materials: Granular Matter
- Understand the flow of granular matter
- Study the effectiveness of mixing
- Study the effect of different components on mixing

The Future?
- New members across Southwestern Ontario
- Support for new science
  - Greater computation
  - Storage facilities
- New areas
  - Bioinformatics

The Future?
- New partners (Waterloo, Brock, York, UOIT)
- Additional capacity
  - Storage: ½ petabyte across 4 sites (multistage performance)
  - Network
    - 10 Gb/s core (UWO, Waterloo, Guelph, Mac)
    - 1 Gb/s to other SHARCNET sites
    - 10 Gb/s to Michnet, HPCVL (?)
  - Upgrades
    - Large capability machines at Mac, UWO, Guelph
    - Large capacity machines at Waterloo
    - Increased development sites

The Future?
- Additional capabilities
  - Visualization
- Total investment
  - ~$49M (plus an additional $7 million in cash from HP)
- With the new capabilities, SHARCNET could be in the top 100 to 150 supercomputers
- Will be the fastest supercomputer of its kind, i.e., a distributed system whose nodes are clusters

[Network map: SHARCNET sites across Southwestern Ontario (Windsor, Western, Waterloo, Guelph, UOIT, York, Fields, Brock, McMaster, Laurier, Sheridan, Fanshawe, Robarts, Perimeter) around Lakes Huron, Erie, and Ontario. Legend: 10 Gb/s and 1 Gb/s links; redeployed Alpha clusters (α); Itanium clusters (I); Xeon cluster (X); SMP (S); Grid Lab (G); interconnect topology (T); cluster, tape, EVA disc, and MSA disc icons; scale 100 km / 50 mi (0.1 ms).]

The Future: SHARCNET!