Today’s Plan
- Everyone will build their own clouds
  - Using an OpenStack profile supplied by CloudLab
  - Each is independent, with its own compute and storage resources
- Log in using GENI accounts
- Create a cloud
- Explore the CloudLab interface
- Use your cloud
- Administer your cloud
- CloudLab is about more than OpenStack

Prerequisites
- Account on the GENI portal (sent to you as “pre work”)
- Optional, but will make your experience better:
  - SSH keypair associated with your GENI portal account
  - Knowledge of how to use the private SSH key from your laptop
- Known to work best in Chrome and Firefox browsers
- Tablets might work, but not well tested
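
On the SSH prerequisite: you will log in to your experiment’s nodes with the private key paired with your portal account. As an illustration only (not part of the tutorial itself), here is a minimal Python sketch using the third-party paramiko library; the hostname, username, and key path are placeholders to replace with the values your experiment page shows.

```python
# Minimal sketch: log in to an experiment node with your private SSH key.
# Assumes paramiko is installed (pip install paramiko); all connection
# details below are placeholders, not real CloudLab values.
import os
import paramiko

client = paramiko.SSHClient()
# Accept the node's host key on first connect; fine for short-lived
# experiment nodes, though stricter policies exist.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "node1.example.cloudlab.us",                        # placeholder hostname
    username="yourusername",                            # your portal username
    key_filename=os.path.expanduser("~/.ssh/id_rsa"),   # key paired with your account
)
_, stdout, _ = client.exec_command("hostname")
print(stdout.read().decode())
client.close()
```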

It’s important to realize that CloudLab is not a cloud. It’s a facility for building clouds of your own: a place where you can take the pieces that would be a given in a cloud that you might use (the network, the virtualization layer, the storage system) and, instead of taking what someone else gives you, build them and experiment with them yourself.

Crash Course in CloudLab
- Underneath, it’s GENI
  - Same APIs, same account system
  - Even many of the same tools
  - Federated (accept each other’s accounts, hardware)
- Physical isolation for compute, storage (shared net.*)
- Profiles are one of the key abstractions (a minimal sketch follows below)
  - Defines an environment: hardware (RSpec) / software (images)
  - Each “instance” of a profile is a separate experiment
  - Provide standard environments, and a way of sharing
  - Explicit role for domain experts
- “Instantiate” a profile to make an “Experiment”
  - Lives in a GENI slice

* Can be dedicated in some cases
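
To make the profile abstraction concrete, here is a minimal sketch of a profile definition in geni-lib (the Python route covered later under “Write Python Code”). The disk image URN is illustrative, not a recommendation.

```python
# Minimal sketch of a CloudLab profile in geni-lib: hardware comes from
# the request RSpec, software from the disk image. The image URN below
# is illustrative.
import geni.portal as portal

# Context object for talking to the portal.
pc = portal.Context()
request = pc.makeRequestRSpec()

# Hardware: one bare-metal node.
node = request.RawPC("node1")

# Software: the image to load on that node (illustrative URN).
node.disk_image = "urn:publicid:IDN+emulab.net+image+emulab-ops//UBUNTU14-64-STD"

# Emit the RSpec that defines this profile.
pc.printRequestRSpec(request)
```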

What Is CloudLab?
- Supports transformative cloud research
- Built on Emulab and GENI
- Control to the bare metal
- Diverse, distributed resources
- Repeatable and scientific
[Diagram: Utah, Wisconsin, Clemson, and GENI sites joined by CC-NIE, Internet2 AL2S, and regional networks, hosting example slices: Slice A (geo-distributed storage research), Slice B (stock OpenStack), Slice C (virtualization and isolation research), Slice D (allocation and scheduling research for cyber-physical systems).]

CloudLab’s Hardware
One facility, one account, three locations:
- About 5,000 cores each (15,000 total)
- 8-16 cores per node; baseline 8 GB RAM / core
- Latest virtualization hardware
- TOR / core switching design
- 10 Gb to nodes, SDN
- 100 Gb to Internet2 AL2S
- Partnerships with multiple vendors

Per site:
- Wisconsin (Cisco): storage and networking. Per node: 128 GB RAM, 2x 1 TB disk, 400 GB SSD; Clos topology.
- Clemson (Dell): high memory. 16 GB RAM / core, 16 cores / node; bulk block store; networking up to 40 Gb.
- Utah (HP): power-efficient. ARM64 / x86; power monitors; flash on ARMs, disk on x86; very dense.

cloudlab.us/tutorial

CloudLab Hardware

Utah/HP: Very dense

Utah/HP: Low-power ARM64
- 315 nodes, 2,520 cores, 8.5 Tbps total
- 45 cartridges and 2 switches (OpenFlow 1.3) per chassis
- Per cartridge: 8 cores, 64 GB RAM, 120 GB flash

Utah/HP Network: Core switch
[Diagram: chassis links (x7) of 4x 40 Gb and 2x 10 Gb each into the core switch; 320 Gb uplink.]

Utah - Suitable for experiments that:
- … explore power/performance tradeoffs
- … want instrumentation of power and temperature
- … want large numbers of nodes and cores
- … want to experiment with RDMA via RoCE
- … need bare-metal control over switches
- … need OpenFlow 1.3
- … want tight ARM64 platform integration

Wisconsin/Cisco
[Diagram: servers connect to Nexus 3172PQ switches (8x 10G), which link at 40G to a Nexus 3132Q.]

Compute and storage
- 90x Cisco 220 M4: 2x 8 cores @ 2.4 GHz, 128 GB RAM, 1x 480 GB SSD, 2x 1.2 TB HDD
- 10x Cisco 240 M4: 2x 8 cores @ 2.4 GHz, 128 GB RAM, 1x 480 GB SSD, 1x 1 TB HDD, 12x 3 TB HDD (donated by Seagate)
- Over the next year: ≥ 140 additional servers; limited number of accelerators, e.g., FPGAs, GPUs (planned)

Networking
- Nexus 3132Q and Nexus 3172PQ switches
- OF 1.0 (working with Cisco on OF 1.3 support)
- Monitoring of instantaneous queue lengths
- Fine-grained tracing of control plane actions
- Support for multiple virtual router instances per router
- Support for many routing protocols

Experiments supported
Large numbers of nodes/cores, and bare-metal control over nodes/switches, for sophisticated network/memory/storage research:
- … network I/O performance, intra-cloud routing (e.g., Conga) and transport (e.g., DCTCP)
- … network virtualization (e.g., CloudNaaS)
- … in-memory big data frameworks (e.g., Spark/Shark)
- … cloud-scale resource management and scheduling (e.g., Mesos; Tetris)
- … new models for cloud storage (e.g., tiered; flat storage; IOFlow)
- … new architectures (e.g., RAMCloud for storage)

Clemson/Dell: High Memory, IB
- 20 cores/node
- 256 GB RAM/node
- 2x 1 TB drive/server
- 8 nodes/chassis, 10 chassis/rack
- 1x 40 Gb IB/node
- 2x* 10 GbE OF/node
- 2x* 1 GbE OF/node

* 1 NIC in 1st build

Clemson/Dell Network: IB + 10 GbE
[Diagram: 8-node chassis (10 chassis/rack) connect via 80x 10 GbE and 80x 1 GbE to N2048 switches, which uplink at 8x 40 GbE to S6000 switches (96x 40 GbE); an IB QDR fabric runs alongside, with a 100 GbE uplink and 2x 10 GbE links. Scale: Q1 2015, 2K+ cores; complete, ~5K cores.]

Clemson - Suitable for experiments that:
- … need large per-core memory
  - e.g., high-res media processing
  - e.g., Hadoop
  - e.g., Network Function Virtualization
- … want to experiment with IB and/or GbE networks
  - e.g., hybrid HPC with MPI and TCP/IP
  - e.g., cyber-physical systems
- … need bare-metal control over switches
- … need OpenFlow 1.3

Building Profiles

Copy an Existing Profile

Use a GUI (Jacks)

Write Python Code (geni-lib)

GENI-LIB http://geni-lib.readthedocs.org/ or http://geni-lib.readthedocs.io/en/latest/
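
As a slightly fuller sketch of the geni-lib route (see the documentation links above), the profile below asks for a user-settable number of bare-metal nodes joined by a LAN. The parameter name, default node count, and hardware type are illustrative choices, not requirements.

```python
# Sketch of a parameterized CloudLab profile in geni-lib: a user-settable
# number of bare-metal nodes joined by a LAN. Names, defaults, and the
# hardware type are illustrative.
import geni.portal as portal
import geni.rspec.pg as pg

pc = portal.Context()

# Let the person instantiating the profile pick the node count.
pc.defineParameter("count", "Number of nodes",
                   portal.ParameterType.INTEGER, 2)
params = pc.bindParameters()

request = pc.makeRequestRSpec()
lan = pg.LAN("lan")

for i in range(params.count):
    node = request.RawPC("node%d" % i)
    node.hardware_type = "c220g1"          # illustrative hardware type
    iface = node.addInterface("if%d" % i)  # one experiment interface per node
    lan.addInterface(iface)

request.addResource(lan)
pc.printRequestRSpec(request)
```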

Build From Scratch

Sign Up

Sign Up At CloudLab.us