Strategic Positioning of Networking and CI at a Research University – "It's a System Now!"
Steve Corbató, Deputy CIO and Adjunct Faculty, School of Computing, University of Utah
Rocky Mountain Cyberinfrastructure Mentoring and Outreach Alliance (RMCMOA) Workshop – ASU, Tempe, January 13, 2015

Collaboration Primer
Why collaborate? Lack of resources and/or specific capabilities to accomplish something (task -> initiative) alone
Side benefit: facilitates organizational change and individual professional development
Examples of successful collaborations:
Big Science (Large Hadron Collider, genomics)
Corporate joint ventures (Big Pharma, transoceanic telecom cables, construction)
Mountain climbing parties

Collaboration Basics
Basic requirements:
Shared purpose
Willingness to understand partner's objectives and needs at the outset
Mutual trust – hard to gain, easy to lose…
Honest assessment of each partner's strengths, capabilities, and gaps
Continuous engagement and communication throughout

Collaboration Truisms
"It's amazing what gets done when no one cares who gets the credit"
"Be 'yes, if' – not 'no, because'"
DWYSYWD: Do What You Said You Would Do – following through on commitments in a timely way is essential
We succeed when our partners succeed

Today’s network challenges
Increasing security and privacy concerns
Network load/performance expectations
Increasing availability expectations
More wireless traffic
Increasing management complexity
Growing technical debt generally
How fundamentally different is this list from 15 years ago?

Observations
It’s about the services!
We must overlay the campus network architecture designed around geography and organization with one based on role, function, and risk.
Paradoxically, as we increasingly rely on outsourced cloud services, network ownership, management, and reliability are more important than ever.
Identity services could be even more important – especially when used in support of SDN.
We now see greater overlap between the network research and computational science disciplines.

Emerging campus network requirements
Network segmentation capability:
  High performance – Science DMZ
  Protected information
High reliability
Close tie to campus IdAM service
Centralized and scalable device management
Compatibility with IPv4/v6 installed base
Lower total cost of ownership
Every CIO knows at least one thing about SDN – merchant silicon
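To make the segmentation idea concrete, below is a minimal sketch in plain Python of tagging campus subnets by role, function, and risk and deriving a forwarding policy from those tags. The subnets, labels, and policy names are illustrative assumptions, not the University of Utah configuration.

    # Minimal sketch: tag campus subnets by role/function/risk and derive a
    # forwarding policy. Subnets, labels, and thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Subnet:
        cidr: str
        role: str      # e.g. "research", "enterprise", "clinical"
        function: str  # e.g. "hpc-transfer", "desktop", "records"
        risk: str      # e.g. "low", "moderate", "high"

    def forwarding_policy(s: Subnet) -> str:
        """Send low-risk research data transfers around the stateful firewall
        (Science DMZ path); keep higher-risk traffic behind it."""
        if s.role == "research" and s.function == "hpc-transfer" and s.risk == "low":
            return "science-dmz"          # friction-free path with ACLs and flow monitoring
        if s.risk == "high":
            return "restricted"           # tightly segmented VLAN, strict firewall policy
        return "enterprise-firewall"      # default campus path

    subnets = [
        Subnet("10.20.0.0/24", "research", "hpc-transfer", "low"),
        Subnet("10.30.0.0/24", "enterprise", "desktop", "moderate"),
        Subnet("10.40.0.0/24", "clinical", "records", "high"),
    ]
    for s in subnets:
        print(s.cidr, "->", forwarding_policy(s))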

The University of Utah: the fourth node on the ARPANET
Univ. of Utah Computer Science Department (now School of Computing)

SALT LAKE CITY - THE (OPTICAL FIBER) CROSSROADS OF THE WEST

Came online in spring
Shared with enterprise (academic/hospital) groups (wall between rooms)
Low PUE/power costs
92 racks for research computing
1.2 MW of power, with an upgrade path to add capacity for research computing
Metro optical ring connecting campus, the data center, and Internet2
24/7/365 facility

Partnership approach
Campus partners:
  Network research – Emulab/protoGENI/CloudLab (Flux research group, School of Computing)
  Computational science – Center for High Performance Computing (UIT)
  Production network – UIT Common Infrastructure Services (IdAM, too)
  UIT Information Security Office
  Advanced regional network – Utah Education Network (UEN)
External partners – many!
  Internet2, GENI, NSF, NTIA, NOAA, UDOT, UTA
  ACI-REF and CloudLab national collaborations

Utah CC-NIE Integration project: Science Slices (NSF #ACI )
PI: S. Corbató; co-PIs: A. Bolton, T. Cheatham, R. Ricci, K. Van der Merwe; SP: J. Breen, S. Torti
Premise (Rob Ricci): What if we flipped the concept and built our Science DMZ on top of SDN infrastructure, rather than just plugging our SDN testbed into the DMZ?
1) Building a dynamic Science DMZ on top of an SDN-based framework (GENI)
2) Extending slices to key campus labs, the HPC center, and the Honors residential community
3) Working closely with central IT (University Information Technology), campus IT Governance, and the Utah Education Network
Leverages new infrastructure: Downtown Data Center, Utah Optical Network (BONFIRE), NSF MRI for a novel cluster (Apt), campus network upgrade
Target areas: molecular dynamics, astrophysics data, genomics, network/systems research, Honors students
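As a rough illustration of the "Science DMZ on top of SDN" premise, the sketch below asks an SDN controller, over a generic REST interface, to steer a lab subnet's bulk data-transfer traffic out the Science DMZ switch port. The controller URL, rule schema, subnet, and port numbers are hypothetical placeholders, not the actual Science Slices implementation.

    # Sketch only: the controller endpoint, rule format, and addresses are
    # placeholders, not the real Science Slices configuration.
    import json
    import requests  # third-party HTTP client (pip install requests)

    CONTROLLER = "http://sdn-controller.example.edu:8080/flows"  # hypothetical endpoint

    def extend_science_dmz(lab_subnet: str, dmz_port: int) -> None:
        """Ask the campus SDN controller to forward the lab subnet's GridFTP
        traffic (TCP control port 2811) out the Science DMZ switch port."""
        rule = {
            "priority": 1000,
            "match": {"ipv4_src": lab_subnet, "ip_proto": "tcp", "tcp_dst": 2811},
            "actions": [{"type": "output", "port": dmz_port}],
        }
        resp = requests.post(CONTROLLER, data=json.dumps(rule),
                             headers={"Content-Type": "application/json"})
        resp.raise_for_status()

    if __name__ == "__main__":
        extend_science_dmz("10.20.30.0/24", dmz_port=48)

In a real deployment the rule would be issued only after an authorization step tied to the campus IdAM service, rather than from a hard-coded script.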

Dual purposes:
1) A platform for sharing research artifacts and environments
2) A facility for building testbeds tailored to specific domains
56 Gbps Ethernet/InfiniBand
Funded under NSF MRI award # (Rob Ricci, PI)

CloudLab
Supported by the National Science Foundation under Grant #

What Is CloudLab?
Supports transformative cloud research
Built on Emulab and GENI
Control to the bare metal
Diverse, distributed resources
Repeatable and scientific
[Diagram: GENI slices spanning the Utah, Wisconsin, and Clemson sites over CC-NIE, Internet2 AL2S, and regional networks – Slice A: geo-distributed storage research; Slice B: stock OpenStack; Slice C: virtualization and isolation research; Slice D: allocation and scheduling research for cyber-physical systems]

CloudLab’s Hardware
One facility, one account, three locations
About 5,000 cores each (15,000 total); 8-16 cores per node; baseline of 4 GB RAM per core; latest virtualization hardware
TOR/core switching design; 10 Gb to nodes (SDN); 100 Gb to Internet2 AL2S; partnerships with multiple vendors
Wisconsin – storage and networking: 128 GB RAM, 2x1 TB disk, and 400 GB SSD per node; Clos topology; Cisco (OF 1.0)
Clemson – high memory: 16 GB RAM per core, 16 cores per node; bulk block store; networking up to 40 Gb; high capacity; Dell (OF 1.0)
Utah – power-efficient: ARM64/x86; power monitors; flash on the ARMs, disk on the x86; very dense; HP (OF 1.3)

Technology Foundations
Built on Emulab and GENI ("ProtoGENI")
In active development at Utah since 1999
Several thousand users (incl. GENI users)
Provisions, then gets out of the way – "run-time" services are optional
Controllable through a web interface and GENI APIs
Scientific instrument for repeatable research
CloudLab already federated with GENI
Availability:
  Now: technology preview available!
  Late 2014: open to early adopters
  Early 2015: generally available
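Because CloudLab is controllable through GENI APIs, experiments are typically described as geni-lib profiles – Python scripts that emit an RSpec. The snippet below is a minimal sketch of that pattern; the node names, node count, and disk image URN are illustrative assumptions, and the OpenStack profile supplied by CloudLab is considerably richer.

    # Minimal geni-lib profile sketch (names, count, and image URN are illustrative).
    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    lan = request.LAN("lan0")            # private experiment LAN
    for name in ("ctl", "cp1"):          # e.g. a controller node and one compute node
        node = request.RawPC(name)       # bare-metal node
        node.disk_image = "urn:publicid:IDN+emulab.net+image+emulab-ops//UBUNTU20-64-STD"
        iface = node.addInterface("if-" + name)
        lan.addInterface(iface)

    pc.printRequestRSpec(request)        # emit the request RSpec XML

Instantiating such a profile through the CloudLab portal allocates the bare-metal nodes and wires up the requested LAN.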

ACI-REF project: Condo of Condos
ACI-REF campus research computing centers are federating to share resources – both human and computational
All also connect to the Internet2 Layer 2/3 Network at 100G
NSF #CNS – PI: Jim Bottum, Clemson

Rio Mesa Center on the Dolores River (SE Utah, U of Utah)

Bootstrapping Research CI
Network is critical – on-campus and external: Science DMZ, metro optical network, local peering
Robust identity services should also support research collaborations (InCommon, TIER)
Faculty/researcher support – speak their language!
Basic HPC and data storage capability
Data management strategies (partner with the campus library)
Partner on faculty research grants, then lead out on NSF and NIH CI grants
Incorporate research support into IT Governance
Ability to partner and outsource: state-based advanced regional networks; Internet2; research computing consortia