IOFlow: A Software-Defined Storage Architecture

Presentation transcript:

IOFlow: A Software-Defined Storage Architecture
Eno Thereska, Hitesh Ballani, Greg O'Shea, Thomas Karagiannis, Antony Rowstron, Tom Talpey, Richard Black, Timothy Zhu
Microsoft Research
You may re-use these slides freely, but please cite them appropriately: "IOFlow: A Software-Defined Storage Architecture. Eno Thereska, Hitesh Ballani, Greg O'Shea, Thomas Karagiannis, Antony Rowstron, Tom Talpey, Richard Black, and Timothy Zhu. In SOSP'13, Farmington, PA, USA. November 3-6, 2013."

Background: Enterprise data centers
General-purpose applications; each application runs on several VMs
Separate network for VM-to-VM traffic and VM-to-storage traffic
Storage is virtualized
Resources are shared
(Diagram: a VM / virtual machine with a vDisk runs on a hypervisor, connected via NICs and a switch to a storage server with an S-NIC.)
- Such data centers comprise compute and storage servers. We can think of each application as a tenant.
- The storage servers act as front ends for back-end storage. Storage is virtualized, and VMs are presented with virtual hard disks that are simply large files on the storage servers.

Motivation
Want: predictable application behaviour and performance.
Need the system to provide end-to-end SLAs, e.g.:
Guaranteed storage bandwidth B
Guaranteed high IOPS and priority
Per-application control over decisions along the IOs' path
It is hard to provide such SLAs today.
- Tenants want predictable behaviour and performance in these shared-resource environments, as if they had their own dedicated resources. They want performance guarantees and control over the services processing their data along the IO stack.
- The problem is that data centers today cannot provide those guarantees and control. Let's understand why through an example.

Example: guarantee aggregate bandwidth B for the Red tenant
(Diagram: the IO path from the application and guest OS through hypervisor layers — caching, scheduling, IO manager, drivers, malware scan, file system, compression — across the switch and NICs to the storage server's own caching, scheduling, drivers, file system, deduplication, and more.)
Deep IO path with 18+ different layers that are configured and operate independently and do not understand SLAs.
- No understanding of application performance requirements. Applications today do not have any way of expressing their SLAs; e.g., you do not open a file specifying how fast you want to read from it. It is all best effort.
- Sharing resources today means living with interference from neighbouring tenants; e.g., the Blue application can be aggressive and consume most of the storage bandwidth. This interference causes unpredictable performance.
- Now, you might be thinking: other resources, e.g., network resources, are also part of the end-to-end SLA, and we have ways to handle that complexity, like software-defined networking. Why is storage different?

Challenges in enforcing end-to-end SLAs
No storage control plane
No enforcement mechanism along the storage data plane
Aggregate performance SLAs, across VMs, files and storage operations
Want non-performance SLAs: control over the IOs' path
Want to support unmodified applications and VMs
- There is no storage control plane that can coordinate the layers' configuration to enforce an SLA; each layer has its own configuration and makes its own decisions.
- We have set a high bar for the SLAs: we want detailed SLAs. Aggregates are challenging because they add a further distributed dimension to the configuration problem: configuring layers across machines.

IOFlow architecture
Decouples the data plane (enforcement) from the control plane (policy logic).
(Diagram: applications with a high-level SLA run over the client-side IO stack — guest OS file system, malware scan, compression, hypervisor caching, IO manager, scheduling, drivers — and the server-side IO stack — file system, deduplication, scheduling, drivers — with a logically centralized controller programming both sides through the IOFlow API.)
The controller is logically centralized and has global visibility.

Contributions
Defined and built a storage control plane
Controllable queues in the data plane
Interface between control and data plane (IOFlow API)
Built centralized control applications that demonstrate the power of the architecture

Storage flows
A storage "flow" refers to all IO requests to which an SLA applies:
<{VMs}, {File Operations}, {Files}, {Shares}> ---> SLA
(the VM set is the source set; the file/share sets are the destination sets)
Aggregate, per-operation and per-file SLAs, e.g.:
<{VM 1-100}, write, *, \\share\db-log> ---> high priority
<{VM 1-100}, *, *, \\share\db-data> ---> min 100,000 IOPS
Non-performance SLAs, e.g., path routing:
<VM 1, *, *, \\share\dataset> ---> bypass malware scanner
- IOFlow supports a flow abstraction described using four parameters.
- Min: the system can do more if underutilized.
- We make the observation that this high-level flow description maps in a straightforward way to the low-level queues and their functions that we have built, as we will see next.
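To make the flow abstraction concrete, here is a minimal Python sketch of how a flow spec and its SLA could be represented; the class and field names are illustrative, not from the IOFlow implementation.

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class FlowSpec:
    """One IOFlow flow: the set of IO requests an SLA applies to."""
    vms: Set[str]          # source set, e.g. {"VM1", ..., "VM100"} or {"*"}
    operations: Set[str]   # e.g. {"read", "write"} or {"*"}
    files: Set[str]        # e.g. {"*"}
    shares: Set[str]       # destination set, e.g. {r"\\share\db-log"}

@dataclass
class SLA:
    flow: FlowSpec
    kind: str      # "min_bandwidth_MBps" | "min_iops" | "priority" | "route"
    value: object  # e.g. 100_000 for an IOPS floor, or a routing directive

# Example from the slide: <{VM 1-100}, *, *, \\share\db-data> ---> min 100,000 IOPS
db_data_sla = SLA(
    flow=FlowSpec(vms={f"VM{i}" for i in range(1, 101)},
                  operations={"*"}, files={"*"},
                  shares={r"\\share\db-data"}),
    kind="min_iops",
    value=100_000,
)
```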

IOFlow API: programming data-plane queues
1. Classification [IO Header -> Queue]
2. Queue servicing [Queue -> <token rate, priority, queue size>]
3. Routing [Queue -> Next-hop] (e.g., to a malware scanner)
- These are the three key functions that IOFlow provides.
- If a queue reaches its queue-size threshold, the queue notifies the controller and blocks any further inserts to that queue. There is no support in the IO stack for dropping IO requests as in networking, but there is support for blocking requests and applying back-pressure.
- There are two storage-specific technical challenges: there is no end-to-end classification of storage traffic in the storage stack, and rate limiting storage requests is challenging.
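A rough Python sketch of a data-plane stage exposing these three calls is shown below. It illustrates the queue and rule bookkeeping only; it is not the actual kernel filter-driver implementation, and all names are made up.

```python
import queue

class Stage:
    """A data-plane layer exposing the three IOFlow calls (illustrative sketch)."""

    def __init__(self):
        self.rules = {}    # IO-header pattern -> queue id
        self.queues = {}   # queue id -> {"q", "rate", "prio", "size", "next"}

    def _new_queue(self):
        return {"q": queue.Queue(), "rate": None, "prio": 0, "size": None, "next": None}

    # 1. Classification: IO header -> queue
    def create_queue_rule(self, header_pattern, queue_id):
        self.rules[header_pattern] = queue_id
        self.queues.setdefault(queue_id, self._new_queue())

    # 2. Queue servicing: queue -> <token rate, priority, queue size>
    def configure_queue_service(self, queue_id, token_rate, priority, max_size):
        q = self.queues[queue_id]
        q["rate"], q["prio"], q["size"] = token_rate, priority, max_size

    # 3. Routing: queue -> next hop (e.g. "malware-scanner", "default")
    def configure_queue_routing(self, queue_id, next_hop):
        self.queues[queue_id]["next"] = next_hop

    def enqueue(self, io_header, io):
        """Classify an IO into its queue; block (never drop) if the queue is full."""
        qid = self.rules.get(io_header, "default")          # exact-match classification
        q = self.queues.setdefault(qid, self._new_queue())
        if q["size"] is not None and q["q"].qsize() >= q["size"]:
            raise BlockingIOError("queue full: block the caller, notify the controller")
        q["q"].put(io)
```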

Lack of a common IO Header for storage traffic
SLA: <VM 4, *, *, \\share\dataset> --> Bandwidth B
Stage configuration: "All IOs from VM4_sid to //serverX/AB79.vhd should be rate limited to B"
The same IO is referred to by different identifiers at different layers along the path:
Block device Z: (/device/scsi1)
Volume and file H:\AB79.vhd
Server and VHD \\serverX\AB79.vhd
Block device /device/ssd5
- For concreteness, we move to a concrete storage stack inside Windows.
- Mismatch: SLAs are specified using high-level names, but layers observe different low-level identifiers / IO headers.
- The VM name is also lost: by the time a request arrives at the storage server, its source VM is unknown.
- This is not just a problem in Windows; other OSes have similar stacks.
- Root cause: layers with diverse functionalities. Virtualization made the problem worse. Networks do not have this problem: packets carry a common <ip, port, protocol> header end to end.

Flow name resolution through the controller
SLA: <VM 4, *, *, //share/dataset> --> Bandwidth B
SMBc exposes the IO header it understands: <VM_SID, //server/file.vhd>
Queuing rule (per file handle): <VM4_SID, //serverX/AB79.vhd> --> Q1; Q1.token rate --> B
- Layers expose their IO headers to the controller, and the controller maps the SLA to layer-specific IO headers.
- SMBc is the file-server client protocol inside the hypervisor. It understands VM security descriptors, remote servers and file names.
- When is this rule installed? When the file is opened. The queuing rule is maintained as part of the file handle, and any subsequent IOs on that handle obey the rule. The controller is not on the IO data path.
- The main takeaway is that the controller is exposed to previously private metadata that each layer had. The controller stays off the critical path.
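As a sketch of what this controller-side resolution could look like, building on the Stage sketch above; the lookup helpers and the SID value are hypothetical placeholders, not real APIs.

```python
# Hypothetical directory/metadata lookups the controller would perform.
def lookup_vm_security_descriptor(vm_name: str) -> str:
    return {"VM4": "S-1-5-21-0-0-0-VM4"}[vm_name]           # placeholder SID

def lookup_backing_file(share_path: str) -> str:
    return {r"\\share\dataset": r"\\serverX\AB79.vhd"}[share_path]

def install_bandwidth_sla(stage: "Stage", bandwidth_mbps: float) -> None:
    """Resolve <VM 4, *, *, \\share\dataset> -> Bandwidth B onto the IO header
    SMBc understands and install a per-file-handle queuing rule (sketch)."""
    vm_sid   = lookup_vm_security_descriptor("VM4")
    vhd_path = lookup_backing_file(r"\\share\dataset")
    # Installed when the file is opened; subsequent IOs on that handle obey it.
    stage.create_queue_rule(header_pattern=(vm_sid, vhd_path), queue_id="Q1")
    stage.configure_queue_service("Q1",
                                  token_rate=bandwidth_mbps,   # Q1.token rate -> B
                                  priority=0, max_size=256)
```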

Rate limiting for congestion control
Queue servicing [Queue -> <token rate, priority, queue size>]
Important for performance SLAs
Today: no storage congestion control
Challenging for storage: e.g., how do you rate limit two VMs, one reading and one writing, so that they get equal storage bandwidth?
- If an aggressive tenant sends too many IO requests to the storage node, it might prevent the node from meeting another tenant's SLA.
- Token buckets are a standard algorithm for rate limiting network packets. The token bucket algorithm releases incoming packets as long as enough tokens are available. By giving a queue more or fewer tokens, one can control the rate at which its requests drain.
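For reference, a minimal token bucket in Python; this is the textbook algorithm the slide refers to, not IOFlow's code.

```python
import time

class TokenBucket:
    """Standard token bucket used to rate-limit a queue (illustrative sketch)."""

    def __init__(self, rate_per_s: float, burst: float):
        self.rate = rate_per_s         # tokens added per second
        self.capacity = burst          # maximum tokens that can accumulate
        self.tokens = burst
        self.last = time.monotonic()

    def try_consume(self, cost: float) -> bool:
        """Release an IO only if 'cost' tokens are available; otherwise keep it queued."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```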

Rate limiting on payload bytes does not work
(Diagram: two VMs, one issuing 8KB reads and one issuing 8KB writes, sharing a storage server.)
- Note that requests cannot be dropped in the storage stack, unlike in networks, so a key congestion signal is unavailable for storage. Once requests enter the storage server, the server is committed to completing them.
- A read request carries almost no payload on its way to the server — its 8KB of data comes back in the response — so counting request payload bytes undercounts reads relative to writes.

Rate limiting on bytes does not work
(Diagram: two VMs, one issuing 8KB reads and one issuing 8KB writes, sharing a storage server.)
- This is a step in the right direction, since the hypervisor knows the read IO size from the IO header.
- But for SSDs, writes can be more expensive than reads, by up to an order of magnitude, so equal byte rates do not mean equal load on the device.
- So let's try rate limiting on the number of requests, i.e., IOPS.

Rate limiting on IOPS does not work
(Diagram: two VMs, one issuing 64KB reads and one issuing 8KB writes, sharing a storage server.)
Need to rate limit based on cost.
- Rate limiting based on IOPS cannot account for differences in request size: the reads would get far more bandwidth. Furthermore, storage requests can have arbitrarily large sizes (e.g., 1GB).
- It is clear that we need to rate limit based on cost.
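To make the imbalance concrete with illustrative numbers (not from the paper): if each VM were capped at 1,000 IOPS, the VM issuing 64KB reads would receive roughly 1,000 × 64KB ≈ 64 MB/s, while the VM issuing 8KB writes would receive roughly 8 MB/s — equal IOPS budgets, an eight-fold bandwidth gap, and very different load on the device.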

Rate limiting based on cost
Controller constructs empirical cost models based on device type and workload characteristics
RAM, SSDs, disks: read/write ratio, request size
Cost models are assigned to each queue: ConfigureTokenBucket [Queue -> cost model]
Large request sizes are split for pre-emption
- The controller knows when new devices are added, and through the discovery component it constructs cost models for them.
- Cost models: e.g., consume 2 tokens for 64KB requests and 1 token for 4KB requests.
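A possible shape for such a cost model, layered on the TokenBucket sketch above; the constants are invented for illustration — the real controller derives them per device from benchmarks.

```python
# Illustrative cost table: tokens charged per 4KB of request payload.
# The numbers are made up; the controller would fit them from device benchmarks.
COST_MODEL = {
    ("ssd",  "read"):  1.0,
    ("ssd",  "write"): 4.0,   # writes assumed several times costlier on SSDs
    ("disk", "read"):  1.0,
    ("disk", "write"): 1.0,
}

def token_cost(device: str, op: str, size_bytes: int) -> float:
    return COST_MODEL[(device, op)] * max(1, size_bytes // 4096)

def charge(bucket: "TokenBucket", device: str, op: str, size_bytes: int) -> bool:
    """Charge a request against its queue's token bucket using the cost model.
    (IOFlow additionally splits very large requests so they can be pre-empted.)"""
    return bucket.try_consume(token_cost(device, op, size_bytes))
```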

Recap: programmable queues on the data plane
Classification [IO Header -> Queue]: per-layer metadata exposed to the controller; controller stays out of the critical path
Queue servicing [Queue -> <token rate, priority, queue size>]: congestion control based on operation cost
Routing [Queue -> Next-hop]
How does the controller enforce an SLA?
- Changes on the data plane: not all layers are expected to implement all three calls; they can implement none or a subset.

Distributed, dynamic enforcement
SLA: <{Red VMs 1-4}, *, *, //share/dataset> --> Bandwidth 40 Gbps
(Diagram: the four Red VMs run on different hypervisors and share one 40 Gbps storage server.)
The SLA needs per-VM enforcement
Need to control the aggregate rate of VMs 1-4, which reside on different physical machines
Static partitioning of the bandwidth is sub-optimal
- In practice VMs read from multiple servers, but for simplicity here we focus on VM aggregation.

Work-conserving solution
VMs with traffic demand should be able to send it as long as the aggregate rate does not exceed 40 Gbps
Solution: max-min fair sharing

Max-min fair sharing
A well-studied problem in networks
Existing solutions are distributed: each VM varies its rate based on congestion, and the rates converge to max-min sharing
Drawbacks: complex, and requires a congestion signal
But we have a centralized controller, which converts this into a simple algorithm at the controller

Controller-based max-min fair sharing
t = control interval, s = stats sampling interval
What does the controller do?
Infers VM demands
Uses centralized max-min fair sharing within a tenant and across tenants
Sets VM token rates
Chooses the best places to enforce
INPUT: per-VM demands; OUTPUT: per-VM allocated token rates
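The per-interval allocation the controller computes can be sketched with the standard water-filling algorithm. This is a simplified, single-resource version with invented demands, not the paper's exact algorithm.

```python
def max_min_allocation(demands: dict, capacity: float) -> dict:
    """Centralized max-min fair share: satisfy small demands fully, then split the
    remaining capacity evenly among the VMs whose demand is still unsatisfied."""
    alloc = {vm: 0.0 for vm in demands}
    remaining = dict(demands)          # unsatisfied demand per VM
    left = capacity
    while remaining and left > 1e-9:
        share = left / len(remaining)
        satisfied = {vm: d for vm, d in remaining.items() if d <= share}
        if not satisfied:
            # No demand fits within an equal share: split what is left evenly.
            for vm in remaining:
                alloc[vm] += share
            break
        for vm, d in satisfied.items():
            alloc[vm] += d
            left -= d
            del remaining[vm]
    return alloc

# e.g. four Red VMs sharing a 40 Gbps aggregate (demands in Gbps, illustrative):
# max_min_allocation({"VM1": 30, "VM2": 5, "VM3": 5, "VM4": 2}, 40)
#   -> {"VM1": 28, "VM2": 5, "VM3": 5, "VM4": 2}
```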

Controller decides where to enforce
Goal: minimize the number of times an IO is queued, and distribute the rate-limiting load
SLA constraints:
Queues are placed where resources are shared
Bandwidth is enforced close to the source
Priority is enforced end-to-end
Efficiency considerations:
Overhead in the data plane grows with the number of queues
Important at 40+ Gbps
- Placement is based on SLA constraints and efficiency considerations; it can be thought of as an optimization problem.
- The enforcement bottleneck tends to be queue locks. In this case that means enforcing at the hypervisors, but for high IOPS rates (> 400,000) both the hypervisors and the storage servers do enforcement.
- The controller sets up the queues and their service rates.
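A toy encoding of these placement heuristics; the stage names and the 400K IOPS figure come from the speaker notes above, everything else is an assumption rather than the controller's real placement logic.

```python
def choose_enforcement_points(sla_kind: str, expected_iops: float) -> list:
    """Sketch of where the controller might install queues for a given SLA."""
    if sla_kind == "priority":
        # Priority must be respected by every stage on the path.
        return ["hypervisor", "network", "storage_server"]
    if sla_kind in ("min_bandwidth", "min_iops"):
        # Bandwidth is enforced close to the source to keep data-plane overhead low;
        # at very high IOPS rates the storage server helps with enforcement too.
        if expected_iops > 400_000:
            return ["hypervisor", "storage_server"]
        return ["hypervisor"]
    return ["hypervisor"]
```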

Centralized vs. decentralized control
A centralized controller in software-defined storage allows for simple algorithms that focus on SLA enforcement and not on distributed-systems challenges
Analogous to the benefits of centralized control in software-defined networking (SDN)

IOFlow implementation
2 key layers for VM-to-storage performance SLAs
4 other layers:
Scanner driver (routing)
User-level stage (routing)
Network driver
Guest OS file system
Implemented as filter drivers on top of existing layers
- Adding the IOFlow API to all the layers mentioned in the design is ongoing work. So far we have added it to the two key layers: the file-server protocol in the hypervisor and the entrance to the storage server.
- Key strength: no need to change applications.
- We have also added it to several other layers. For routing, we have the scanner and the ability to route to a user-space layer.
- The controller is a separate service.

Evaluation map
IOFlow's ability to enforce end-to-end SLAs
Aggregate bandwidth SLAs
Priority SLAs and a routing application (in the paper)
Performance of the data and control planes

Evaluation setup
Clients: 10 hypervisor servers, 12 VMs each
4 tenants (Red, Green, Yellow, Blue); 30 VMs/tenant, 3 VMs/tenant/server
Storage network: Mellanox 40 Gbps RDMA (RoCE), full duplex
1 storage server: 16 CPUs at 2.4 GHz (Dell R720), SMB 3.0 file server protocol
3 types of backend: RAM, SSDs, disks
Controller: 1 separate server; 1-second control interval (configurable)
- RoCE: RDMA over Converged Ethernet.
- In this evaluation we show RAM for two reasons: it shows that IOFlow can enforce high-performance SLAs, and it shows the worst-case data-plane overheads in doing so.

Workloads
4 Hotmail tenants: {Index, Data, Message, Log}
Used for trace replay on SSDs (see paper)
IoMeter is parameterized with the Hotmail tenants' characteristics (read/write ratio, request size)
- The "Message" workload stores email content in files; "Index" is a background maintenance activity scheduled at night time in the data center; the "Data" and "Log" workloads are database data and transaction logs, respectively. Metadata on emails is stored in these databases.
- So we have a diverse combination of file system, database and log workloads, typical of enterprise data centers.

Enforcing bandwidth SLAs
4 tenants with different storage bandwidth SLAs, all accessing the same storage server
Tenants have different workloads
The Red tenant is aggressive: it generates more requests/second

Tenant   SLA
Red      {VM1-30}   -> Min 800 MB/s
Green    {VM31-60}  -> Min 800 MB/s
Yellow   {VM61-90}  -> Min 2500 MB/s
Blue     {VM91-120} -> Min 1500 MB/s

Things to look for
Distributed enforcement across 4 competing tenants; aggressive tenant(s) kept under control
Dynamic inter-tenant work conservation: bandwidth released by an idle tenant is given to active tenants
Dynamic intra-tenant work conservation: bandwidth of a tenant's idle VMs is given to its active VMs

Results
(Graph annotations: the controller notices the Red tenant's performance; intra-tenant work conservation; inter-tenant work conservation.)
Tenants' SLAs are enforced; 120 queues configured.

Data plane overheads at 40 Gbps RDMA
Overheads were negligible in the previous experiment; to bring out the worst case, we varied IO sizes from 512 bytes to 64KB.
Metric: total throughput overhead across 120 tenants.
Result: reasonable overheads for enforcing SLAs.
- Per-operation overheads in general dominate per-byte overheads, as expected, but overall the overheads are reasonable. The results for SSDs and disks are qualitatively similar. At 512 bytes: ~500K IOPS.
- Details: IoMeter with random-access requests, read:write ratio 1:1; 120 tenants, each with an SLA of 1/120th of the storage capacity; enforcement at the hypervisors' SMBc layer.

Control plane overheads: network and CPU
The controller configures queue rules, receives statistics and updates token rates every control interval.
(Graph: network overheads in MB.)
< 0.3% CPU overhead at the controller.

Summary of contributions
Defined and built a storage control plane
Controllable queues in the data plane
Interface between control and data plane (IOFlow API)
Built centralized control applications that demonstrate the power of the architecture
Ongoing work: applying IOFlow to public cloud scenarios

Backup slides

Related work (1)
Software-defined networking (SDN): [Casado et al. SIGCOMM'07], [Yan et al. NSDI'07], [Koponen et al. OSDI'10], [Qazi et al. SIGCOMM'13], and more in associated workshops
OpenFlow: [McKeown et al. SIGCOMM Comput. Commun. Review '08]
Languages and compilers: [Ferguson et al. SIGCOMM'13], [Monsanto et al. NSDI'13]
SEDA [Welsh et al. SOSP'01] and Click [Kohler et al. ACM ToCS'00]

Related work (2)
Flow name resolution: label IOs [Sambasivan et al. NSDI'11], [Mesnier et al. SOSP'11], etc.
Tenant performance isolation:
For storage: [Wachs et al. FAST'07], [Gulati et al. OSDI'10], [Shue et al. OSDI'12], etc.
For networks: [Ballani et al. SIGCOMM'11], [Popa et al. SIGCOMM'12]
Distributed rate limiting: [Raghavan et al. SIGCOMM'07]

IOFlow API
getQueueInfo(): returns the kind of IO header the layer uses for queuing, the queue properties that are configurable, and the possible next hops
getQueueStats(Queue-id q): returns queue statistics
createQueueRule(IO Header i, Queue-id q) / removeQueueRule(IO Header i, Queue-id q): creates or removes queuing rule i -> q
configureQueueService(Queue-id q, <token rate, priority, queue size>): sets queue service properties
configureQueueRouting(Queue-id q, Next-hop stage s): sets queue routing properties
configureTokenBucket(Queue-id q, <benchmark-results>): sets storage-specific parameters
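For readability, the same API written out as a Python-style abstract interface. The method names and semantics follow the slide; the Python rendering itself is just a sketch, since the real interface is implemented by kernel filter drivers.

```python
from abc import ABC, abstractmethod

class IOFlowStage(ABC):
    """Per-stage IOFlow API from the slide, as an abstract interface (sketch)."""

    @abstractmethod
    def getQueueInfo(self):
        """Kind of IO header the layer queues on, the configurable queue
        properties, and the possible next hops."""

    @abstractmethod
    def getQueueStats(self, queue_id):
        """Queue statistics."""

    @abstractmethod
    def createQueueRule(self, io_header, queue_id):
        """Create queuing rule io_header -> queue_id."""

    @abstractmethod
    def removeQueueRule(self, io_header, queue_id):
        """Remove queuing rule io_header -> queue_id."""

    @abstractmethod
    def configureQueueService(self, queue_id, token_rate, priority, queue_size):
        """Set queue service properties <token rate, priority, queue size>."""

    @abstractmethod
    def configureQueueRouting(self, queue_id, next_hop_stage):
        """Set the queue's next hop."""

    @abstractmethod
    def configureTokenBucket(self, queue_id, benchmark_results):
        """Set storage-specific parameters (cost model from benchmark results)."""
```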

SDS: Storage-specific challenges
(Table: low-level primitives — end-to-end identifier, data-plane queues, control plane — compared across old networks, SDN, storage today, and SDS.)