It’s Time for Low Latency — Steve Rumble, Diego Ongaro, Ryan Stutsman, Mendel Rosenblum, John Ousterhout, Stanford University. Presenters: 603410040 厲秉忠, 603410064 謝東霖.

Similar presentations
MinCopysets: Derandomizing Replication in Cloud Storage

Copysets: Reducing the Frequency of Data Loss in Cloud Storage
Fast Crash Recovery in RAMCloud
What happens when you try to build a low latency NIC? Mario Flajslik.
Chabot College Chapter 2 Review Questions Semester III ELEC
John Ousterhout Stanford University RAMCloud Overview and Update SEDCL Retreat June, 2014.
RAMCloud: Scalable High-Performance Storage Entirely in DRAM John Ousterhout Stanford University (with Nandu Jayakumar, Diego Ongaro, Mendel Rosenblum,
RAMCloud Scalable High-Performance Storage Entirely in DRAM John Ousterhout, Parag Agrawal, David Erickson, Christos Kozyrakis, Jacob Leverich, David Mazières,
RAMCloud 1.0 John Ousterhout Stanford University (with Arjun Gopalan, Ashish Gupta, Ankita Kejriwal, Collin Lee, Behnam Montazeri, Diego Ongaro, Seo Jin.
1 Web Server Performance in a WAN Environment Vincent W. Freeh Computer Science North Carolina State Vsevolod V. Panteleenko Computer Science & Engineering.
Communications in ISTORE Dan Hettena. Communication Goals Goals: Fault tolerance through redundancy Tolerate any single hardware failure High bandwidth.
SEDCL: Stanford Experimental Data Center Laboratory.
Networks: Sample Performance Problems 1 Sample Network Performance Problems.
CS118/Spring05: A Day in the Life of an HTTP Query — 1. HTTP browser application / socket interface; 2. DNS query; 3. TCP; 4. IP; 5. Ethernet; 6. IP router; 7. Running.
An overview of Infiniband Reykjavik, June 24th 2008 R E Y K J A V I K U N I V E R S I T Y Dept. Computer Science Center for Analysis and Design of Intelligent.
CS 142 Lecture Notes: Large-Scale Web ApplicationsSlide 1 RAMCloud Overview ● Storage for datacenters ● commodity servers ● GB DRAM/server.
COM S 614 Advanced Systems Novel Communications U-Net and Active Messages.
Advanced Computer Networks 1 Sample Network Performance Problems.
03/12/08Nuova Systems Inc. Page 1 TCP Issues in the Data Center Tom Lyon The Future of TCP: Train-wreck or Evolution? Stanford University
SMB Direct: file server and client application over RDMA NICs, Ethernet and/or InfiniBand, with the TCP/IP path unchanged.
The RAMCloud Storage System
RAMCloud Design Review Recovery Ryan Stutsman April 1,
A Compilation of RAMCloud Slides (through Oct. 2012) John Ousterhout Stanford University.
Supporting Strong Cache Coherency for Active Caches in Multi-Tier Data-Centers over InfiniBand S. Narravula, P. Balaji, K. Vaidyanathan, S. Krishnamoorthy,
Assumptions Hypothesis Hopes RAMCloud Mendel Rosenblum Stanford University.
What We Have Learned From RAMCloud John Ousterhout Stanford University (with Asaf Cidon, Ankita Kejriwal, Diego Ongaro, Mendel Rosenblum, Stephen Rumble,
RAMCloud Overview John Ousterhout Stanford University.
RAMCloud: a Low-Latency Datacenter Storage System John Ousterhout Stanford University
RAMCloud: Concept and Challenges John Ousterhout Stanford University.
RAMCloud: A Low-Latency Datacenter Storage System Ankita Kejriwal Stanford University (Joint work with Diego Ongaro, Ryan Stutsman, Steve Rumble, Mendel.
High Performance User-Level Sockets over Gigabit Ethernet Pavan Balaji Ohio State University Piyush Shivam Ohio State University.
TCP Throughput Collapse in Cluster-based Storage Systems
Cool ideas from RAMCloud Diego Ongaro Stanford University Joint work with Asaf Cidon, Ankita Kejriwal, John Ousterhout, Mendel Rosenblum, Stephen Rumble,
High Performance Computing & Communication Research Laboratory 12/11/1997 [1] Hyok Kim Performance Analysis of TCP/IP Data.
RAMCloud: System Performance Measurements (Jun ‘11) Nandu Jayakumar
Low-Latency Datacenters John Ousterhout Platform Lab Retreat May 29, 2015.
Elephants, Mice, and Lemmings! Oh My! Fred Baker Fellow 25 July 2014 Making life better in data centers and high speed computing.
Final Review!. So how’s it all work? I boot my machine I open my browser and type The page loads What all just happened?
Breakout Group: Low-Latency Datacenters Platform Lab Retreat May 29, 2015.
Durability and Crash Recovery for Distributed In-Memory Storage Ryan Stutsman, Asaf Cidon, Ankita Kejriwal, Ali Mashtizadeh, Aravind Narayanan, Diego Ongaro,
RAMCloud: Low-latency DRAM-based storage Jonathan Ellithorpe, Arjun Gopalan, Ashish Gupta, Ankita Kejriwal, Collin Lee, Behnam Montazeri, Diego Ongaro,
RAMCloud: Scalable High-Performance Storage Entirely in DRAM John Ousterhout Stanford University (with Christos Kozyrakis, David Mazières, Aravind Narayanan,
ICOM 6115©Manuel Rodriguez-Martinez ICOM 6115 – Computer Networks and the WWW Manuel Rodriguez-Martinez, Ph.D. Lecture 21.
Log-structured Memory for DRAM-based Storage Stephen Rumble, John Ousterhout Center for Future Architectures Research Storage3.2: Architectures.
CS 140 Lecture Notes: Technology and Operating Systems — Technology Changes, mid-1980s vs. 2012: CPU speed 15 MHz vs. 2.5 GHz (167x); memory size 8 MB vs. 4 GB (500x).
4061 Session 25 (4/17). Today Briefly: Select and Poll Layered Protocols and the Internets Intro to Network Programming.
IP Communication Fabric Mike Polston HP
Fast Crash Recovery in RAMCloud. Motivation The role of DRAM has been increasing – Facebook used 150TB of DRAM For 200TB of disk storage However, there.
TCP Sockets Reliable Communication. TCP As mentioned before, TCP sits on top of other layers (IP, hardware) and implements Reliability In-order delivery.
Infiniband Bart Taylor. What it is InfiniBand™ Architecture defines a new interconnect technology for servers that changes the way data centers will be.
Networking Fundamentals. Basics Network – collection of nodes and links that cooperate for communication Nodes – computer systems –Internal (routers,
Experiences With RAMCloud Applications John Ousterhout, Jonathan Ellithorpe, Bob Brown.
TCP Offload Through Connection Handoff Hyong-youb Kim and Scott Rixner Rice University April 20, 2006.
RAMCloud Overview and Status John Ousterhout Stanford University.
CSE378 Intro to caches1 Memory Hierarchy Memory: hierarchy of components of various speeds and capacities Hierarchy driven by cost and performance In early.
Performance Networking ™ Server Blade Summit March 23, 2005.
An Efficient Gigabit Ethernet Switch Model for Large-Scale Simulation Dong (Kevin) Jin.
TCP Flow Control – an illustration of distributed system thinking David E. Culler CS162 – Operating Systems and Systems Programming
John Ousterhout Stanford University RAMCloud Overview and Update SEDCL Retreat June, 2013.
Presented by: Xianghan Pei
Lecture Topics: 11/29 IP UDP TCP DNS. Homework 9 Write a simple web server –not as hard as it sounds Specification out tonight Skeleton code out by tomorrow.
Architecture of a platform for innovation and research Erik Deumens – University of Florida SC15 – Austin – Nov 17, 2015.
RAMCloud and the Low-Latency Datacenter John Ousterhout Stanford Platform Laboratory.
Log-Structured Memory for DRAM-Based Storage Stephen Rumble and John Ousterhout Stanford University.
Network Requirements for Resource Disaggregation
Berkeley Cluster Projects
Virtual Memory Main memory can act as a cache for the secondary storage (disk) Advantages: illusion of having more physical memory program relocation protection.
CSE 461 HTTP and the Web.
TCP flow and congestion control
Presentation transcript:

It’s Time for Low Latency — Steve Rumble, Diego Ongaro, Ryan Stutsman, Mendel Rosenblum, John Ousterhout, Stanford University. Presenters: 厲秉忠, 謝東霖

Future Web Applications Need Low Latency
- They will access more bytes of data
  - Commodity network bandwidth has increased > 3,000x in 30 years
- But they will also access more pieces of inter-dependent data
  - Commodity network latency has decreased only ~30x in 30 years
- Facebook is a glimpse into future applications
  - Huge datasets, DRAM-based storage, small requests, random dependent data accesses, low locality
  - Dependent on network latency: a page request can only afford a limited number of dependent accesses
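Why latency, not bandwidth, is the binding constraint: dependent accesses are sequential, so each costs a full round trip. A back-of-the-envelope sketch (the 100 ms page budget and RTT figures are illustrative assumptions, not numbers from the slide):

```python
# Sketch (illustrative numbers): dependent accesses are sequential, so
# each one costs a full round trip, and the count per page is bounded
# by the page budget divided by the network RTT.
def max_dependent_accesses(page_budget_s, rtt_s):
    return round(page_budget_s / rtt_s)

budget = 0.100  # assume ~100 ms to assemble a page
print(max_dependent_accesses(budget, 500e-6))  # a ~500 us RTT -> 200
print(max_dependent_accesses(budget, 5e-6))    # a 5 us RTT -> 20000
```

Shrinking the RTT by two orders of magnitude raises the affordable number of dependent accesses by the same factor, which is the whole argument of the slide.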

Datacenter Latency Is Too High
- Simple RPCs take hundreds of microseconds in current datacenters
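For intuition, a rough accounting of where a conventional RPC's time goes. Every figure below is a hypothetical assumption, not a measurement; the point is only that kernel stacks, NIC crossings, switch hops, and server handling each contribute, pushing the total into the hundreds of microseconds:

```python
# All numbers are illustrative assumptions, not measurements.
components_us = {
    "client kernel network stack (send + receive)": 30,
    "server kernel network stack (receive + send)": 30,
    "NIC crossings (4 per round trip)": 40,
    "switch hops (store-and-forward)": 100,
    "propagation + serialization": 10,
    "server request handling": 90,
}
total_us = sum(components_us.values())
print(f"illustrative round trip: ~{total_us} us")
```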

On The Cusp Of Low Latency
- Low latency is already available in the HPC space (Infiniband):
  - 100 ns switches
  - < 1 us NIC latencies
  - OS bypass
- But Infiniband won’t displace Ethernet
- Now we need to pull in the rest of the ideas
- Let’s get the OS community involved and do it right
- Goal: 5-10 us RTTs in the short term

An Opportunity To Define The Right Structure
- Re-think APIs: apps need speed and simplicity
  - Developers are used to sockets, but can we make them fast?
- Network protocols:
  - Can we live with TCP? (requires in-order delivery; slow stacks)
  - How do we scale low latency to 100,000+ nodes?
  - The closed datacenter ecosystem makes new protocols feasible
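As a baseline for the "can we make sockets fast?" question, a minimal sketch (not from the slides) that measures what the familiar sockets API costs today: a one-byte request/response ping-pong over loopback, traversing the kernel TCP stack on both ends of each round trip. Results vary by machine and say nothing about NICs, switches, or propagation:

```python
import socket
import time

def socket_rtt(iters=1000):
    """Mean RTT of a 1-byte TCP ping-pong over loopback."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # any free port
    srv.listen(1)
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    start = time.perf_counter()
    for _ in range(iters):
        cli.sendall(b"x")           # 1-byte "request"
        conn.recv(1)
        conn.sendall(b"x")          # 1-byte "response"
        cli.recv(1)
    mean_rtt = (time.perf_counter() - start) / iters

    for s in (cli, conn, srv):
        s.close()
    return mean_rtt

print(f"mean loopback RTT: {socket_rtt() * 1e6:.1f} us")
```

Even with no wire involved, this path typically lands in the tens of microseconds, which illustrates why the slide asks whether the sockets API itself can be made fast.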

Getting The Lowest Possible Latency
- The NIC will become the bottleneck under 10 us
- 500 ns round-trip propagation across a 50 m diameter
- 1 us round-trip switching latency (10 hops x 100 ns)
- One-microsecond RTTs possible in 5-10 years
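The slide's figures can be checked with a line of arithmetic; only the ~5 ns/m propagation constant is assumed beyond what the slide states:

```python
# The physical floor on a datacenter round trip, before NIC and
# software costs are counted, using the slide's own figures.
NS_PER_M = 5                 # ~5 ns/m signal propagation (assumed)
diameter_m = 50              # 50 m datacenter diameter
switch_hops = 10             # 10 switch traversals per round trip
switch_ns = 100              # 100 ns per switch

propagation_ns = 2 * diameter_m * NS_PER_M  # 500 ns round trip
switching_ns = switch_hops * switch_ns      # 1000 ns = 1 us
print(propagation_ns + switching_ns, "ns before NIC and software costs")
```

With only ~1.5 us accounted for by physics and switching, whatever the NIC and software add dominates the path to a one-microsecond RTT.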

Low Latency Is Up To Us
- Low latency is the future of web applications
- If we don’t take action to make it happen, we risk:
  - not getting it at all, or
  - missing the opportunity to re-architect (and getting something that sucks)