
SideCar: Building Programmable Datacenter Networks without Programmable Switches
Alan Shieh (Cornell University), Srikanth Kandula (Microsoft Research), Emin Gün Sirer (Cornell University)

Programming the datacenter network
Many new problems in datacenter networks can benefit from customization on the data path, but few datacenter networks are programmable!
Program commodity hardware (e.g., the fast path on a switch, a NIC, an I/O coprocessor on a PC, ...): too much to chew off.
Adopt software-based switches (e.g., Click, RouteBricks, PacketShader): an uphill battle for deployment.

Goal: Enable the operator to install new packet processing code
Provides better visibility: can run custom processing everywhere, for a fraction (1-10%) of packets
Works in existing datacenters with minor upgrades
Relies only on existing commodity devices and widely available functionality
Scales to the data rates of current networks
Similar management model and comparable TCB size

Existing platform: Commodity switch
Most switches share a similar architecture:
Line-rate, fixed-functionality fast path; limited flexibility (few rules, limited actions)
Programmable slow path: rules divert some packets to the control processor, but it is a slow, closed platform

Existing platform: OpenFlow switches
Enables innovation in the control plane
Abstracts away the complexity of programming the switch: switches export an open, common API for low-level control
Similar data path as commodity switches
But requires cooperation from equipment vendors, and perhaps new switches
Different target applications: flow granularity, hooks for flow setup/teardown, hooks into forwarding tables, high frequency of changes to the switch

Existing platform: Modifying end hosts
Can run arbitrary processing on a general-purpose processor
Isolated execution domain (e.g., hypervisor, AppEngine)
Centralized control over end host software (e.g., Autopilot)
[Diagram: hypervisor kernel hosting a root VM and a tenant VM, with a provisioning service rolling out an update from v1 to v2]

Limitations: Modifying end hosts
Limited to adding functionality at the edge: can't see into the middle or change packet processing
Isolation is a concern
Legacy systems: can't generalize to non-virtualized settings
Compromised hosts: can amplify network damage
Direct I/O: need to modify the NIC to maintain control
[Diagram: hypervisor kernels, root VMs, and tenant VMs connected by the network, with mutual trust assumed between hosts]

SideCar architecture
Offload custom in-network packet processing to sidecars
Steer packets to sidecars with configuration changes
Push packet classification to end hosts
Sidecars have better isolation than a hypervisor: attack surface limited to network protocols
Can spot-check end hosts to remove trust dependencies
Unlike OpenFlow, no software or hardware changes on the switch

Realizing SideCar in today's datacenters. How to:
Transfer packets to sidecars without modifying switches?
Implement sidecars at reasonable performance and cost?

Transferring packets to sidecars
Marked steering:
1. Hosts mark packets
2. Switches redirect all marked packets to the sidecar
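One plausible way for a host to mark packets is the standard ToS/DSCP socket option; the sketch below is an assumption about the mechanism rather than code from the paper, and the codepoint value and destination address are placeholders. On Linux, a switch ACL or policy-routing rule can then match the DSCP byte and redirect the packet to the sidecar.

```python
# Minimal sketch of host-side marking (assumed mechanism, placeholder values).
import socket

STEER_TOS = 0x04  # hypothetical ToS/DSCP byte reserved for "steer to sidecar"

def open_marked_socket(dst_host: str, dst_port: int) -> socket.socket:
    """Open a TCP socket whose outgoing packets carry the steering mark."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # IP_TOS sets the ToS/DSCP byte on every packet sent over this socket,
    # which a pre-installed switch rule can match and redirect.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, STEER_TOS)
    s.connect((dst_host, dst_port))
    return s

if __name__ == "__main__":
    sock = open_marked_socket("192.0.2.10", 5001)  # placeholder destination
    sock.sendall(b"traffic that the switch will redirect to the sidecar")
    sock.close()
```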

Transferring packets to sidecars
Sampled steering: switches redirect packets uniformly at random
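Sampled steering behaves like 1-in-N packet sampling; the toy model below only shows the probabilistic contract (every packet is forwarded, roughly one in N is copied to the sidecar). The sampling rate is a placeholder, not a value from the paper.

```python
# Illustrative model of sampled steering (1-in-N sampling).
import random

SAMPLE_RATE_N = 100  # hypothetical: steer roughly 1 in 100 packets

def handle_packet(pkt, forward, copy_to_sidecar):
    """Forward every packet normally; send a copy of ~1/N of them to the sidecar."""
    forward(pkt)
    if random.randrange(SAMPLE_RATE_N) == 0:
        copy_to_sidecar(pkt)

if __name__ == "__main__":
    sampled = []
    for i in range(10_000):
        handle_packet(i, forward=lambda p: None, copy_to_sidecar=sampled.append)
    print(f"sampled {len(sampled)} of 10000 packets (expected about 100)")
```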

Configuring switches for steering
[Diagram: switch pipeline with input, ruleset lookup (actions: drop, mirror, sFlow, divert to CPU), switching fabric, forwarding table lookup, and output]
Sampled steering: use the switch's built-in sampling (sFlow)
Marked steering: at L2, match by VLAN; at L3, match by IP headers or policy routing
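The toy model below ties the two steering modes to the pipeline above: packets that carry the mark (placeholder VLAN and DSCP values) are diverted to the sidecar, a random fraction of the rest is mirrored, and everything else just follows the forwarding table. The divert-then-reinject semantics are an assumption, not something the slides spell out.

```python
# Toy model of the steering decision in the switch ruleset (placeholder values).
import random
from dataclasses import dataclass

STEER_VLAN = 100    # hypothetical VLAN reserved for marked traffic (L2 steering)
STEER_DSCP = 1      # hypothetical DSCP codepoint (L3 steering)
SAMPLE_PROB = 0.01  # hypothetical sampling probability

@dataclass
class Packet:
    vlan: int
    dscp: int
    dst: str

def steer(pkt: Packet) -> list:
    """Return the actions the switch applies to one packet."""
    if pkt.vlan == STEER_VLAN or pkt.dscp == STEER_DSCP:
        # Marked steering: divert to the sidecar, which re-injects the packet
        # after processing (assumed divert semantics).
        return ["divert-to-sidecar"]
    actions = [f"forward-to-{pkt.dst}"]
    if random.random() < SAMPLE_PROB:
        # Sampled steering: the original is still forwarded; only a copy goes out.
        actions.append("mirror-copy-to-sidecar")
    return actions

if __name__ == "__main__":
    print(steer(Packet(vlan=100, dscp=0, dst="10.0.0.2")))  # marked -> diverted
    print(steer(Packet(vlan=1, dscp=0, dst="10.0.0.3")))    # unmarked -> forwarded
```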

Realizing SideCar in today's datacenters. How to:
Transfer packets to sidecars without modifying switches?
Implement sidecars at reasonable performance and cost?

What to use as sidecars?
Option 1: Commodity servers. Can process 3-5 Gb/s with a standard OS and network stack on a previous-generation hardware I/O architecture.
Option 2: New platforms (e.g., RouteBricks, PacketShader). Can process Gb/s by exploiting parallelism in new hardware I/O architectures: point-to-point connections, multi-queue NICs, multi-core CPUs and GPUs.
Can they keep pace with network scaling? More cores, faster buses.
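A quick sanity check on these numbers (simple arithmetic, not from the paper's evaluation): a commodity-server sidecar at the upper end of the 3-5 Gb/s range, attached to a ToR with 80 Gb/s of ingress, can examine roughly 5/80 of the traffic, which lines up with the 6.3% figure in the table on the next slide.

```python
# Back-of-the-envelope coverage calculation for a commodity-server sidecar.
tor_ingress_gbps = 80   # ToR ingress bandwidth from the table below
sidecar_gbps = 5        # upper end of the 3-5 Gb/s range above
print(f"coverage = {sidecar_gbps / tor_ingress_gbps:.2%}")  # -> 6.25%, ~= 6.3%
```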

Sidecar platform suitability
For a typical topology, 3% - 25% cost increase per server port.

Platform                   | Location | Ingress bandwidth | Added cost per server port | Processing rate
Commodity server           | ToR      | 80 Gb/s           | $70                        | 6.3%
RouteBricks / PacketShader | Agg/Core | 1560 Gb/s         | $8                         | 2.6%

Proof-of-concept applications
SideCar works with legacy switches, endpoints, and practices; provides network visibility; improves isolation
Next: initial ideas on how to fix some datacenter problems with SideCar

Application #1: Reachability isolation
Problem: cloud datacenters run a mix of mutually untrusting tenants
Best to put each tenant on its own (virtual) private network
Need rich, granular policy support

Application #1: Reachability isolation
Switch-only: ToR switches support only small, inflexible policies (100s of rules). Not enough precision.
Host-only: the host enforces policy with send/receive filters. Vulnerable to DoS; needs a hypervisor.
SideCar: protection without hypervisors. Sample packets, spot-check against policy, quarantine violators.
Probabilistic bound (<100s of packets per source) on the number of leaked packets and the size of a DoS.
Leverages improved isolation and legacy support.
[Diagram: a network policy enforcer allows permitted traffic and isolates a misbehaving source]
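A minimal sketch of the spot-checking idea (my construction, not the paper's implementation): sampled packets are checked against a per-source allow-list, and a source caught sending to a disallowed destination is quarantined. The policy table, addresses, and quarantine action are placeholders.

```python
# Sketch of sidecar-side spot checking for reachability isolation.
from collections import defaultdict

# Hypothetical policy: each source may reach only its own tenant's hosts.
ALLOWED = {
    "10.1.0.2": {"10.1.0.3", "10.1.0.4"},   # tenant A
    "10.2.0.2": {"10.2.0.3"},               # tenant B
}

quarantined = set()
violations = defaultdict(int)

def quarantine(src: str) -> None:
    """Placeholder: in practice, install a drop rule or disable the host's port."""
    quarantined.add(src)

def spot_check(src: str, dst: str) -> None:
    """Called for each packet the switch samples to the sidecar."""
    if src in quarantined:
        return
    if dst not in ALLOWED.get(src, set()):
        violations[src] += 1
        quarantine(src)   # one observed violation is enough to isolate the source

if __name__ == "__main__":
    spot_check("10.1.0.2", "10.1.0.3")   # allowed, nothing happens
    spot_check("10.1.0.2", "10.2.0.3")   # cross-tenant packet: source quarantined
    print(quarantined)
```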

Application #2: Exploit better network feedback
Example: enforce network sharing between applications
TCP-like approach: tunnel all traffic through one control loop; throughput loss from probing for network state
SideCar approach: use explicit feedback; better throughput from measuring network state directly
Leverages network visibility
[Diagram: explicit feedback reporting 50% / 25% shares, with 25% of capacity left unused]
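To make the explicit-feedback idea concrete, here is an illustrative allocator (my construction, not the paper's algorithm): given per-application weights and measured demands, it hands out explicit shares and redistributes whatever goes unused, instead of leaving other applications to rediscover spare capacity by probing. The weights echo the slide's 50% / 25% / 25% split.

```python
# Weighted water-filling allocator driven by explicit demand feedback.
def allocate(capacity, weights, demands):
    """Give each app its weighted share; hand unused capacity to apps that want more."""
    alloc = {app: 0.0 for app in weights}
    active = set(weights)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[a] for a in active)
        share = {a: remaining * weights[a] / total_w for a in active}
        done = {a for a in active if demands[a] - alloc[a] <= share[a]}
        if not done:
            for a in active:            # everyone can absorb their share
                alloc[a] += share[a]
            break
        for a in done:                  # satisfy small demands, free the rest
            remaining -= demands[a] - alloc[a]
            alloc[a] = demands[a]
        active -= done
    return alloc

if __name__ == "__main__":
    # B uses only a little of its 25% share, so the leftover goes to A and C.
    print(allocate(10.0,
                   {"A": 0.50, "B": 0.25, "C": 0.25},
                   {"A": 10.0, "B": 1.0, "C": 10.0}))
    # -> {'A': 6.0, 'B': 1.0, 'C': 3.0}
```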

Application #3: Limit SAN usage
Disks are constrained by IOPS (seek rate) and transfer rate
Goal: achieve performance isolation across users
Can we do this without trusting the client (e.g., legacy or direct I/O) or modifying the target array?

Rule #1: Steer command packets to gather stats
Rule #2: Spot-check that command packets are marked correctly. Not marked: punish the host.
[Diagram: a stream of write commands (e.g., Write 32KB) and their data packets flowing toward the storage target]
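A sketch of how a sidecar could use these two rules, with all names and values as placeholders: steered command packets feed per-host IOPS and byte counters, while sampled packets are spot-checked for the mark, and a host caught sending unmarked commands is punished.

```python
# Sketch of SAN accounting and spot checking at the sidecar (placeholder values).
from collections import defaultdict

iops = defaultdict(int)             # I/O operations seen via steered command packets
bytes_requested = defaultdict(int)  # bytes requested per host

def on_steered_command(host: str, op: str, length: int) -> None:
    """Rule #1: every marked command packet is diverted here for accounting."""
    iops[host] += 1
    bytes_requested[host] += length

def on_sampled_packet(host: str, is_command: bool, is_marked: bool, punish) -> None:
    """Rule #2: spot-check sampled packets; an unmarked command means cheating."""
    if is_command and not is_marked:
        punish(host)   # e.g., quarantine or rate-limit the host

if __name__ == "__main__":
    on_steered_command("host-7", "WRITE", 32 * 1024)   # the slide's 32 KB write
    on_sampled_packet("host-9", is_command=True, is_marked=False,
                      punish=lambda h: print(f"punish {h}"))
    print(dict(iops), dict(bytes_requested))
```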

Related work
Programmable network elements (NetFPGA, PacketShader, RouteBricks, OpenFlow)
Hypervisor–network integration (End to the Middle, Open vSwitch, CloudPolice)

SideCar provides a new way to program the datacenter network
Architectural benefits: works with legacy switches, endpoints, and practices; network visibility; better isolation
Feasible to process a modest subset of all packets
Designed applications that benefit from SideCar

Questions?