and beyond Office of Vice President for Information Technology

NSF Cyberinfrastructure (CI) Vision
Supports broad and open access to leadership computing; data and information resources; online instruments and observatories; and visualization and collaboration services.
Enables distributed knowledge communities that collaborate and communicate across disciplines, distances, and cultures.
Research and education communities become virtual organizations that transcend geographic and institutional boundaries.

CI Complementary Areas
HPC in support of modeling, simulation, and extraction of knowledge from huge data collections; NSF will invest in the petascale performance range, where petascale means 10^15 operations per second, with comparable storage and networking capacity.
Data, Data Analysis, and Visualization
Virtual Organizations for Distributed Communities
Learning and Workforce Development, covering K-12, post-secondary education, the workforce, and the general public.

UAB CyberInfrastructure
UAB HPC Resources: the Shared HPC Facility, located in BEC155, has 4 clusters; the Computer Science HPC Facility has 2 clusters. UAB's overall HPC computing power has approximately tripled every 2 years during the past 4 years.
Optical Networks – campus & regional
UABgrid – a campus computing and collaboration environment

UAB HPC Resources
IBM BlueGene/L System: IBM's BlueGene/L is a uniquely designed massively parallel supercomputer. A single BlueGene/L rack contains 1024 nodes, each node having two processors and 512MB of main memory. The 2048 processors in one BlueGene/L rack are tightly integrated in one form factor using five proprietary high-speed interconnection networks. This system has a theoretical 5.6-Teraflop computing capacity.
DELL Xeon 64-bit Linux Cluster – "Coosa": This cluster consists of 128 DELL PE1425 nodes with dual Xeon 3.6GHz processors and either 2GB or 6GB of memory per node. It uses a Gigabit Ethernet inter-node network connection. There are 4 Terabytes of disk storage available to this cluster. This cluster is rated at more than 1.0 Teraflops computing capacity.
DELL Xeon 64-bit Linux Cluster with InfiniBand – "Olympus"
Verari Opteron 64-bit Linux Clusters – "Cheaha" & "Everest": These 64-node computing clusters use nodes with dual AMD Opteron 242 processors and 2GB of memory per node, interconnected with a Gigabit Ethernet network.
IBM Xeon 32-bit Linux Cluster – "Cahaba": This 64-node computing cluster consists of IBM x335 Series computers with dual Xeon 2.4GHz processors, 2GB or 4GB of memory per node, and a 1-Terabyte storage unit. Nodes are interconnected with a Gigabit Ethernet network.
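
As a rough cross-check of the quoted peak figure (an illustrative calculation, not part of the original slides; the 700 MHz clock rate and 4 floating-point operations per cycle per processor are standard BlueGene/L specifications assumed here, not stated above):

\[
2048\ \text{processors} \times 0.7\,\text{GHz} \times 4\,\tfrac{\text{FLOP}}{\text{cycle}} \approx 5.7 \times 10^{12}\ \text{FLOP/s},
\]

which is consistent with the roughly 5.6-Teraflop theoretical capacity quoted for one rack, and about 0.6% of the 10^15 operations-per-second petascale range described in the NSF CI areas.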

BlueGene Cluster
DELL Xeon 64-bit Linux Clusters – "Coosa" & "Cahaba"
Verari Opteron 64-bit Linux Clusters – "Cheaha" & "Everest"

Computer Science DELL Xeon 64-bit Linux Cluster with InfiniBand – "Olympus"

UAB 10GigE Research Network
Build high-bandwidth network linking UAB compute clusters
Leverage network for staging and managing Grid-based compute jobs
Connect directly to high-bandwidth regional networks
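
To make the value of a 10GigE research network for staging grid jobs concrete (an illustrative calculation, not part of the original slides, ignoring protocol overhead):

\[
\frac{1\,\text{TB}}{10\,\text{Gb/s}} = \frac{8 \times 10^{12}\,\text{bits}}{10^{10}\,\text{bits/s}} = 800\,\text{s} \approx 13\ \text{minutes},
\]

versus roughly 2.2 hours for the same 1 TB data set over a 1 Gb/s campus link.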

UABgrid
Common interface for access to HPC infrastructure
Leverage UAB identity management system for consistent identity across resources
Provide access to regional, national, and international collaborators using Shibboleth identity framework
Support research collaboration through autonomous virtual organizations
Collaboration between computer science, engineering, and IT
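
The sketch below is a minimal, hypothetical illustration (not UABgrid's actual code) of the identity flow this design implies: a federated identity asserted by Shibboleth is resolved to a local grid account and virtual-organization roles. The attribute name follows the eduPerson schema, but the mapping tables, account names, and function names are assumptions introduced only for illustration.

```python
# Hypothetical sketch: map a Shibboleth-asserted federated identity to a
# local grid account and virtual organization (VO) roles. Illustrative only;
# the attribute name follows the eduPerson schema, but the mapping tables
# and function below are assumptions, not UABgrid's actual implementation.

from dataclasses import dataclass

@dataclass
class GridIdentity:
    local_account: str      # account used on the campus clusters
    vo_roles: list[str]     # VO memberships granted to this user

# Example tables an identity service might maintain (illustrative data).
ACCOUNT_MAP = {"jsmith@uab.edu": "jsmith"}
VO_MEMBERSHIP = {"jsmith@uab.edu": ["uabgrid-users", "suragrid-pilot"]}

def resolve_identity(shib_attributes: dict) -> GridIdentity:
    """Resolve Shibboleth-released attributes to a local grid identity."""
    eppn = shib_attributes.get("eduPersonPrincipalName")
    if eppn is None or eppn not in ACCOUNT_MAP:
        raise PermissionError("No local grid account mapped for this federated identity")
    return GridIdentity(
        local_account=ACCOUNT_MAP[eppn],
        vo_roles=VO_MEMBERSHIP.get(eppn, []),
    )

if __name__ == "__main__":
    # A user authenticates at their home institution; Shibboleth releases attributes.
    attrs = {"eduPersonPrincipalName": "jsmith@uab.edu", "displayName": "J. Smith"}
    identity = resolve_identity(attrs)
    print(identity.local_account, identity.vo_roles)
```

The design point the sketch highlights is that clusters never see institutional passwords; they consume an attribute assertion and a locally governed mapping, which is what makes cross-institutional collaboration through InCommon practical.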

UABgrid Architecture
Leverages IdM investments via InCommon
Provides collaboration environment for autonomous virtual organizations
Supports integration of local, shared, and regional resources

Alabama Cyberinfrastructure
Unlimited bandwidth optical network links major research areas in state
High performance computational resources distributed across state
Campus grids like UABgrid provide uniform access to computational resources
Regional grids like SURAgrid provide access to aggregate computational power and unique computational resources
Cyberinfrastructure enables new research paradigms throughout state

Alabama Regional Optical Network (RON)
Alabama RON is a very high bandwidth lambda network, operated by Southern Light Rail (SLR)
Connects major research institutions across the state
Connects Alabama to National LambdaRail and Internet2
In collaboration with UA System, UA, and UAH

National LambdaRail (NLR)
Consortium of research universities and leading-edge technology companies
Deploying national infrastructure for advanced network research and next-generation, network-based applications
Supporting multiple, independent high-speed links to research universities and centers

National LambdaRail Infrastructure

SURAgrid Provides access to aggregate compute power across region

SURAgrid HPC Resources

Alabama Grid?
Leverage Alabama's existing investments in Cyberinfrastructure
Need for dynamic access to a regional infrastructure is increasing
Need to build a common trust infrastructure
Benefit from shared and trusted identity management
Enable development of advanced workflows specific to regional research expertise

Future Directions
Begin pilot of a State grid linking UAB, ASA, and UAH resources?

Atlanta means Southern Light Rail (SLR)
Georgia Tech's non-profit cooperative corporation
Provides access to NLR for 1/5 the cost of an NLR membership
Provides access to other network initiatives: Commodity Internet, Internet2, NSF's ETF – Atlanta Hub, Georgia Tech's International Connectivity
Leverage Georgia Tech expertise and resources

Where does UAS Connect?

Mission Statement of HPC Services
HPC Services is the division within the Infrastructure Services organization with a focus on HPC support for research and other HPC activities. HPC Services represents the Office of the Vice President for Information Technology to IT-related academic campus committees and to regional/national technology research organizations and/or committees as requested.

HPC Project Five Year Plan Scope: Establish a UAB HPC data center, whose operations will be managed by IT Infrastructure and which will include additional machine room space designed for HPC and equipped with a new cluster. The UAB HPC Data Center and HPC resource will be used by researchers throughout UAB, the UAS system, and other State of Alabama Universities and research entities in conjunction with the Alabama Supercomputer Authority. Oversight of the UAB HPC resources will be provided by a committee made up of UAB Deans, Department Heads, Faculty, and the VPIT. Daily administration of this shared resource will be provided by Infrastructure Services.

Preliminary Timeline
FY2007: Rename Academic Computing to HPCS and merge HPCS with Network and Infrastructure to leverage the HPC-related talents and resources of both organizations.
FY2007: Connect existing HPC clusters to each other and to the 10Gig backbone.
FY2007: Establish pilot Grid Identity Management System – GridShib (HPCS, Network/Services).
FY2007: Enable Grid meta-scheduling (HPCS, CIS, ETL).
FY2007: Establish Grid connectivity with SURA, UAS, and ASA.
FY2008: Increase support staff as needed by reassigning legacy mainframe technical resources.
FY2008: Develop requirements for expansion or replacement of older HPCs (xxxx TeraFlops).
FY2008: Using the HPC requirements (xxxx TeraFlops) for data center design, begin design of the HPC Data Center.
FY2009: Secure funding for new HPC cluster (xxxx TeraFlops).
FY2010: Complete HPC Data Center infrastructure.
FY2010: Secure final funding for expansion or replacement of older HPCs.
FY2011: Procure and deploy new HPC cluster (xxxx TeraFlops).
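
The FY2007 Grid meta-scheduling milestone implies a service that places jobs across the campus clusters named earlier (Coosa, Olympus, Cheaha, Everest, Cahaba, BlueGene). The sketch below is a hypothetical illustration of the core placement decision only; the core counts, memory figures, and function names are assumptions, and a real deployment would sit on top of each cluster's local batch system rather than a static table.

```python
# Hypothetical sketch of a meta-scheduler's placement decision across campus
# clusters. Cluster names come from this presentation; the resource figures
# and selection policy are illustrative assumptions, not the actual FY2007
# GridShib/meta-scheduling pilot.

from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    free_cores: int
    mem_per_node_gb: int

@dataclass
class Job:
    cores: int
    mem_per_node_gb: int

def place_job(job: Job, clusters: list[Cluster]) -> Cluster:
    """Pick the cluster with the most free cores that satisfies the job's needs."""
    eligible = [c for c in clusters
                if c.free_cores >= job.cores and c.mem_per_node_gb >= job.mem_per_node_gb]
    if not eligible:
        raise RuntimeError("No cluster can currently run this job")
    return max(eligible, key=lambda c: c.free_cores)

if __name__ == "__main__":
    clusters = [
        Cluster("coosa", free_cores=96, mem_per_node_gb=6),
        Cluster("cheaha", free_cores=40, mem_per_node_gb=2),
        Cluster("olympus", free_cores=128, mem_per_node_gb=4),
    ]
    job = Job(cores=64, mem_per_node_gb=4)
    print("Run on:", place_job(job, clusters).name)
```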