The Need Addressed by CloudLab

The Need Addressed by CloudLab
Clouds are changing the way we look at a lot of problems, and the impacts go far beyond computer science. But there is still a lot we don't know, from the perspective of:
- Researchers (those who will transform the cloud)
- Users (those who will use the cloud to transform their own fields)
To investigate these questions, we need flexible, scalable scientific infrastructure that enables exploration of fundamental science in the cloud, built by and for the research community.

The CloudLab Vision
A “meta-cloud” for building clouds:
- Build your own cloud on our hardware resources (see the profile sketch below)
- Agnostic to specific cloud software: run existing cloud software stacks (OpenStack, Hadoop, etc.) or new ones built from the ground up
- Control and visibility all the way to the bare metal
- “Sliceable” for multiple, isolated experiments at once
With CloudLab, it will be as easy to get a cloud tomorrow as it is to get a VM today.
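To make “build your own cloud” concrete, here is a minimal profile sketch in geni-lib, the Python library used to write CloudLab profiles. It requests two bare-metal machines joined by a LAN; the node names and the disk image URN are illustrative placeholders, not the only valid values.

    # Minimal CloudLab profile sketch (geni-lib). Node names and the
    # image URN are placeholders; substitute values valid for your project.
    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    lan = request.LAN("lan0")                # LAN connecting the nodes
    for i in range(2):
        node = request.RawPC("node%d" % i)   # bare-metal machine
        node.disk_image = "urn:publicid:IDN+emulab.net+image+emulab-ops//UBUNTU16-64-STD"
        iface = node.addInterface("if0")
        lan.addInterface(iface)

    pc.printRequestRSpec(request)            # emit the RSpec the portal consumes

Saved as a profile on the portal, instantiating this provisions the machines, wires up the LAN, and hands the experimenter root on bare metal.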

What Is CloudLab?
- Supports transformative cloud research
- Built on Emulab and GENI
- Control to the bare metal
- Diverse, distributed resources: Utah, Wisconsin, Clemson, and GENI, connected by CC-NIE, Internet2 AL2S, and regional networks
- Repeatable and scientific
[Diagram: four slices running side by side. Slice A: geo-distributed storage research; Slice B: stock OpenStack; Slice C: virtualization and isolation research; Slice D: allocation and scheduling research for cyber-physical systems.]

CloudLab’s Hardware
One facility, one account, three locations:
- About 5,000 cores each (15,000 total), 8-16 cores per node
- Baseline: 8 GB RAM per core
- TOR / core switching design; 10 Gb to nodes, SDN; 100 Gb to Internet2 AL2S
- Latest virtualization hardware; partnerships with multiple vendors

Wisconsin (Cisco and HP): storage and networking focus. Per node: 128 GB RAM, 2x 1 TB disk, 400 GB SSD. Clos topology.
Clemson (Dell): high memory. 16 GB RAM per core, 16 cores per node. Bulk block store; networking up to 40 Gb; high capacity.
Utah (HP): power-efficient ARM64 / x86. Power monitors; flash on the ARM nodes, disk on the x86 nodes; very dense.
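Because the clusters differ, profiles often pin nodes to a particular hardware class. A short sketch, again assuming geni-lib; the node.hardware_type attribute is the standard mechanism, but treat the specific type name "m400" (Utah's ARM64 nodes of this era) as an assumption to verify against the cluster's node-type listing.

    # Pinning a request to one hardware class.
    # The type name is an assumption; check the cluster's node-type list.
    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    node = request.RawPC("arm0")
    node.hardware_type = "m400"   # e.g., Utah's power-efficient ARM64 nodes
    pc.printRequestRSpec(request)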

Technology Foundations
Built on Emulab and GENI (“ProtoGENI”):
- In active development at Utah since 1999
- Several thousand users (including GENI users)
- Provisions, then gets out of the way; “run-time” services are optional
- Controllable through a web interface and GENI APIs
A scientific instrument for repeatable research:
- Physical isolation for most resources
- Profiles capture everything needed for an experiment: software, data, and hardware details
- Profiles can be shared and published (e.g., in papers); a parameterized example follows below
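Because a profile is just a script, it can expose parameters to whoever instantiates it later, which is part of what makes experiments shareable and repeatable. A minimal sketch assuming geni-lib's portal parameter API; the parameter name and default are illustrative.

    # Parameterized profile sketch: the node count becomes a form field
    # shown by the portal when someone instantiates this profile.
    import geni.portal as portal

    pc = portal.Context()
    pc.defineParameter("nodes", "Number of nodes",
                       portal.ParameterType.INTEGER, 3)
    params = pc.bindParameters()

    request = pc.makeRequestRSpec()
    for i in range(params.nodes):
        request.RawPC("node%d" % i)
    pc.printRequestRSpec(request)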

Who can use CloudLab?
- US academics and educators: researchers in cloud architecture and novel cloud applications; teaching classes and other training activities
- No charge: free for research and educational use
- International federations expected
- Apply on the website at www.cloudlab.us

CloudLab Users So Far (May 2016)
- 300 projects
- 1,250 users
- 21,000 experiments

Cloud Architecture Research
Exploring emerging and extreme cloud architectures:
- Evaluating design choices that exercise hardware and software capabilities
- Studying geo-distributed data centers for low-latency applications
- Developing different isolation models among tenants
- Quantifying resilience properties of architectures
- Developing new diagnostic frameworks
- Exploring cloud architectures for cyber-physical systems
- Enabling realtime and near-realtime compute services
- Enabling data-intensive computing (“big data”) at high performance in the cloud

Application Research Questions
- Experiment with resource allocation and scheduling
- Develop enhancements to big data frameworks
- Intra- and inter-datacenter traffic engineering and routing
- New tenant-facing abstractions
- New mechanisms in support of cloud-based services
- Study adapting next-generation stacks to clouds
- New troubleshooting and anomaly detection frameworks
- Explore different degrees of security and isolation
- Composing services from heterogeneous clouds
- Application-driven cloud architectures

Federated with GENI
CloudLab can be used with a GENI account, and vice versa. The GENI federation adds:
- GENI racks: about 50 small clusters around the country
- A programmable wide-area network: OpenFlow at dozens of sites, connected in one layer-2 domain
- Large clusters (hundreds of nodes) at several sites
- Wireless and mobile: WiMAX at 8 institutions; an LTE / EPC testbed (“PhantomNet”) at Utah
- International partners: Europe (FIRE), Brazil, Japan
A sketch of driving federated aggregates from one script follows below.
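Because the facilities share the GENI APIs and account system, one credential and one script can create slivers on either side of the federation. A rough sketch with geni-lib: loadContext and the instageni aggregate objects follow the GENI tutorials, but treat the exact module and object names as assumptions to check against the geni-lib documentation.

    # Federation sketch: the same credentials and calls drive federated
    # aggregates. Aggregate object names here are assumptions.
    import geni.util
    import geni.rspec.pg as pg
    import geni.aggregate.instageni as ig

    context = geni.util.loadContext()      # load GENI credentials/config

    rspec = pg.Request()
    rspec.addResource(pg.RawPC("node0"))   # one bare-metal node

    # Create a sliver on a GENI rack aggregate; a CloudLab aggregate
    # object could be driven with the same createsliver call.
    ig.GPO.createsliver(context, "myslice", rspec)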

Many Sites, One Facility
[Map of sites across the United States; highlighted markers indicate GENI racks.]

Community Outreach
- Applications in areas of national priority (medicine, emergency response, smart grids, etc.), through “opt in” to compute jobs from domain scientists
- Summer camps, through Clemson’s data-intensive computing program
- Under-represented groups

Availability and Schedule
✔ Open and in use!
Hardware being deployed in stages:
✔ Fall 2014: Utah / HP cluster
✔ Winter 2015: Wisconsin / Cisco cluster
✔ Spring 2015: Clemson / Dell cluster
Hardware refresh in early 2016:
✔ Spring 2016: Clemson and Wisconsin clusters
- Summer 2016: Utah cluster planned

Your Own Cloud in One Click

The CloudLab Team
Robert Ricci (PI), Eric Eide, Kobus Van der Merwe
Aditya Akella (co-PI), Remzi Arpaci-Dusseau, Miron Livny
KC Wang (co-PI), Jim Bottum, Jim Pepin
Chip Elliott (co-PI), Larry Landweber
Mike Zink (co-PI), David Irwin
Glenn Ricart (co-PI)

Learn more, sign up: www.CloudLab.us

This material is based upon work supported by the National Science Foundation under Grant No. 1419199. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.