Prof. Hakim Weatherspoon

Presentation transcript:

From the Cloud to SoNIC: Precise Realtime Software Access and Control of Wired Networks
Prof. Hakim Weatherspoon, joint work with Ki Suh Lee and Han Wang, Cornell University
Stanford University, April 17, 2014
Speaker notes: greetings; what is SoNIC? what have we done? how do you use SoNIC?

The Rise of Cloud Computing
The promise of the Cloud: a computer utility, a commodity; a catalyst for the technology economy; revolutionizing health care, financial systems, scientific research, and society.
Speaker notes: the cloud completes the commoditization process and makes storage and computation a commodity.

The Rise of Cloud Computing
The promise of the Cloud: "ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." (NIST Cloud Definition)
Speaker notes: the cloud completes the commoditization process and makes storage and computation a commodity. Public vs. private; IaaS, PaaS, SaaS; on-demand self-service, network access, resource pooling, rapid elasticity, measured service.

The Rise of Cloud Computing
The promise of the Cloud: "ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." (NIST Cloud Definition)
Requires fundamentals in distributed systems: networking, computation, storage.
Speaker notes: the cloud completes the commoditization process and makes storage and computation a commodity. Public vs. private; IaaS, PaaS, SaaS; on-demand self-service, network access, resource pooling, rapid elasticity, measured service.

The Rise of Cloud Computing
The promise of the Cloud: networking, computation, storage.
[Diagram: applications and guest OSes running on the Xen-Blanket VMM, connected through a virtual switch]
How do we understand this new Internet architecture?
9/18/2018 SoNIC

My Contributions
Cloud Networking: SoNIC in NSDI 2013 and 2014; Wireless DC in ANCS 2012 (best paper) and NetSlice in ANCS 2012; Bifocals in IMC 2010 and DSN 2010; Maelstrom in ToN 2011 and NSDI 2008; chaired Tudor Marian's PhD 2010 (now at Google).
Cloud Computation & Vendor Lock-in: Plug into the Supercloud in IEEE Internet Computing 2013; Supercloud/Xen-Blanket in EuroSys 2012 and HotCloud 2011; Overdriver in VEE 2011; chaired Dan Williams's PhD 2012 (now at IBM).
Cloud Storage: Gecko in FAST 2013 and HotStorage 2012; RACS in SOCC 2010; SMFS in FAST 2009; Antiquity in EuroSys 2007 and NSDI 2006; chaired Lakshmi Ganesh's PhD 2011 (now at Facebook).

Cloud Networking: Challenges
Challenges remain: performance; packets are still lost. Why is it so hard to move data between cloud platforms over the wide area? [NSDI 2014, 2013, 2008, FAST 2009, IMC 2010, DSN 2010]

Cloud Networking: Challenges
Uncover the ground truth: the network changes inter-packet gaps.
Traffic sent: uniformly spaced packets. Traffic received: bursty traffic induced by packet chaining.
Inter-packet gaps are invisible to higher layers.
Speaker notes: systems programmers treat layers 1 and 2 as a black box; what can we do with software access to layers 1 and 2? Even on an uncongested, semi-private lambda network, packets are still lost. According to the 10G Ethernet standard, 10G Ethernet is always sending at 10 Gbps; when no packets are being sent, it sends idle characters. In the example above, although the sender sends uniformly spaced packets at a constant rate, the receiver sees bursty traffic, with gaps at the minimum threshold the standard allows. SoNIC provides the unique ability to observe this effect in software, in real time; we want to give people the ground truth. If we have a capability like this, what can we do with it? (Software) access to the physical layer is required.

Cloud Networking: Opportunities
Why access the physical layer from software? Issue: programmers treat layers 1 and 2 as a black box.
Opportunities: network measurements; network monitoring/profiling; network steganography.
[Diagram: network stack from Application down through Transport, Network, Data Link, and Physical (64/66b PCS, PMA, PMD)]
Can improve the security, availability, and performance of cloud networks.

Cloud Networking: Opportunities
Understanding cloud networks via the Software-defined Network InterfaCe (SoNIC): improves understanding of the network, and thereby the security, availability, and performance of the network.
SoNIC, a software-defined NIC: accesses the PHY, in real time, in software; separates what is sent (software) from how it is sent (hardware).
NetSlice, a software router [ANCS 2012]: enables software-defined networks; big-data in-network solutions via Deep Packet Inspection (DPI) at 40 Gbps.

Outline
Motivation; SoNIC; Network Research Applications; Conclusion / Research Agenda.
Speaker notes: briefly discuss.

10GbE Network Stack
[Diagram: the stack from Application down to Physical. The Data Link layer frames the payload: preamble, Ethernet header, data, CRC, gap. The Physical layer's 64/66b PCS encodes 64-bit blocks with 2-bit sync headers, scrambles them, and passes them through the gearbox to the PMA/PMD as a 10.3125 Gbaud bitstream; idle characters (/I/) fill the gaps between frames.]
Speaker notes: I need you to pay attention for 30 seconds while I discuss the details of a network stack; most of this you know, some you do not. Going from the data link layer to the physical layer: the physical layer encodes an Ethernet frame into a sequence of 64-bit blocks, then adds a 2-bit sync header to each block, which is why this is called 64/66b encoding. Scrambling maintains an equal number of zeros and ones, which we call DC balance. Where are the /I/ characters?

10GbE Network Stack: Commodity NIC
[Diagram: on a commodity NIC the software/hardware boundary sits high in the stack; software hands packets i and i+1 to the hardware, which runs the Data Link and Physical layers (64/66b PCS encode/decode, scrambler/descrambler, gearbox/blocksync, PMA, PMD) and emits a continuous bitstream.]

10GbE Network Stack: SoNIC and NetFPGA
[Diagram: SoNIC moves the software/hardware boundary down into the Physical layer; the 64/66b PCS (encode/decode, scrambler/descrambler) runs in software, exposing interpacket gaps (IPG) and interpacket delays (IPD), while only the gearbox, blocksync, PMA, and PMD remain in hardware.]
Speaker notes: we are actually doing the physical layer in software.

SoNIC Design
[Diagram: the split between software (everything down through the scrambler/descrambler) and hardware (gearbox, blocksync, PMA, PMD).]
Speaker notes: our approach was to separate what is sent, in software, from how it is sent, in hardware. As noted in the background, the sublayers at the bottom of the physical layer do not manipulate bits, while all the layers above, including the scrambler, touch every bit.

SoNIC Design and Architecture
[Diagram: a userspace application talks to kernel components (APP, TX/RX MAC, TX/RX PCS); the hardware below contains the gearbox, blocksync, transceiver, and SFP+ module.]
Speaker notes: this is the overall design and architecture of SoNIC; I will discuss the design, optimizations, and interface.

SoNIC Design: API
Hardware control: ioctl syscalls. I/O: a character device interface.
Sample C code for packet generation and capture:

    #include "sonic.h"

    struct sonic_pkt_gen_info info = {
        .mode = 0,
        .pkt_num = 1000000000UL,
        .pkt_len = 1518,
        .mac_src = "00:11:22:33:44:55",
        .mac_dst = "aa:bb:cc:dd:ee:ff",
        .ip_src = "192.168.0.1",
        .ip_dst = "192.168.0.2",
        .port_src = 5000,
        .port_dst = 5000,
        .idle = 12,
    };

    /* OPEN DEVICES */
    fd1 = open(SONIC_CONTROL_PATH, O_RDWR);
    fd2 = open(SONIC_PORT1_PATH, O_RDONLY);

    /* CONFIGURE SONIC CARD FOR PACKET GEN */
    ioctl(fd1, SONIC_IOC_RESET);
    ioctl(fd1, SONIC_IOC_SET_MODE, PKT_GEN_CAP);
    ioctl(fd1, SONIC_IOC_PORT0_INFO_SET, &info);

    /* START EXPERIMENT */
    ioctl(fd1, SONIC_IOC_START);
    /* wait until the experiment finishes */
    ioctl(fd1, SONIC_IOC_STOP);

    /* CAPTURE PACKETS */
    while ((ret = read(fd2, buf, 65536)) > 0) {
        /* process data */
    }

    close(fd1);
    close(fd2);

Speaker notes: the interface is regular C, e.g., this source code for packet generation and capture. What is the significance of "12"?

Outline
Motivation; SoNIC; Network Research Applications (measurement / traffic analysis, profiling / fingerprinting, covert channels); Conclusion / Research Agenda.
Speaker notes: how to design it; what it can actually do; what it can do for researchers; vision plus an instance.

Measurement / Traffic Analysis using SoNIC
Uncover the ground truth: the network changes inter-packet gaps.
Traffic sent: uniformly spaced packets. Traffic received: bursty traffic induced by packet chaining.
Inter-packet gaps are invisible to higher layers, but not to SoNIC.

Measurement / Traffic Analysis using SoNIC
A precise end-to-end instrumentation platform; measurement at large scale; towards an open measurement platform.

Profiling / Fingerprinting using SoNIC
Speaker notes: one-hop profiling; one hop through a switch.

Profiling / Fingerprinting using SoNIC
Cisco Catalyst 6500 switch, 1 Gbps of data (1518-byte packets).
[Diagram: two SoNIC ports (APP, TX/RX MAC, TX/RX PCS on sockets 0 and 1, SFP+ transceivers) connected through the switch]
Speaker notes: so I have described what we did for SoNIC; does it work? Here I show you the results and explain the graph.

Profiling / Fingerprinting using SoNIC
Router/switch signatures: different routers and switches have different response functions. Uses: improve simulation models of switches and routers; detect the switch and router models in a real network.
[Plots: interpacket-delay distributions for the Cisco 4948, Cisco 6509, and IBM BNT G8264R; 1500-byte packets at 6 Gbps]

Profiling / Fingerprinting using SoNIC
Router/switch signatures: different routers and switches have different response functions.
[Plot: interpacket-delay distribution for the NetFPGA 10G; 1500-byte packets at 6 Gbps]

Profiling / Fingerprinting using SoNIC
End-to-end profile of the GENI network; modeling network elements; a testbed for network systems theory and queueing theory; towards a predictable network. What is the aggregate effect?
[Map: GENI sites at Cornell, Princeton, UPenn, Stanford, U. Washington, and Berkeley]
Speaker notes: add Bifocals there; the highest peak to the left may have been caused in the middle.

Profiling / Fingerprinting using SoNIC
Challenges: rogue routers.

Covert Channels in SoNIC
Create / detect / prevent covert channels in layers 1 and 2.
Speaker notes: one hop.

Covert Channels in SoNIC
Covert channels hide the transmission of data. Storage channel: writing/reading of a storage location. Timing channel: modulation of system resources.

Covert Channels in SoNIC
Existing covert channels: TCP/IP headers, HTTP requests, packet rate/timing; these face an increasing number of detection techniques. This work: covert channels at the physical layer.

Covert Channels in SoNIC
64/66b PCS block formats (2-bit sync header, block type byte, payload; bit positions 0, 8, 16, 24, 32, 40, 48, 56, 65):
  Data block   01         D0 D1 D2 D3 D4 D5 D6 D7
  /E/ (idle)   10  0x1e   C0 C1 C2 C3 C4 C5 C6 C7
  /S/ (start)  10  0x33   C0 C1 C2 C3 D5 D6 D7
               10  0x78   D1 D2 D3 D4 D5 D6 D7
  /T/ (end)    10  0x87   C1 C2 C3 C4 C5 C6 C7
               10  0x99   D0 C2 C3 C4 C5 C6 C7
               10  0xaa   D0 D1 C3 C4 C5 C6 C7
               10  0xb4   D0 D1 D2 C4 C5 C6 C7
               10  0xcc   D0 D1 D2 D3 C5 C6 C7
               10  0xd2   D0 D1 D2 D3 D4 C6 C7
               10  0xe1   D0 D1 D2 D3 D4 D5 C7
               10  0xff   D0 D1 D2 D3 D4 D5 D6
Legend: /S/ start-of-frame block, /T/ end-of-frame block, /E/ idle block, /D/ data block.
An Ethernet frame on the wire: gap, /S/, /D/ ... /D/, /T/, /E/.

Covert Timing Channel in SoNIC
Embedding signals into interpacket gaps: a large gap encodes '1', a small gap encodes '0'. A covert timing channel is built by modulating IPGs at 100 ns granularity.
Results: an overt channel at 1 Gbps carries a covert channel at 80 kbps over a 9-hop Internet path with cross traffic (NLR), with less than 10% BER (which can be masked with FEC), undetectable to a software endhost.
Speaker notes / takeaways: a covert timing channel by modulating IPGs; an overt channel at 3 Gbps with a 0.25 Mbps covert channel.

Covert Timing Channel in SoNIC
Modulating IPGs at the 100 ns scale (128 /I/s = 102.4 ns), over 4 hops: '1' is sent as 3562 + 128 /I/s, '0' as 3562 - 128 /I/s (in general, 3562 ± a /I/s).
[Plot: CDF of interpacket delays showing separable '0' and '1' distributions around 3562 /I/s; BER = 0.37% (0.0037); data rate = 0.2 Mbps]

Covert Timing Channel in SoNIC
Can covert timing channels be prevented?
[Plot: CDF of interpacket delays around 3562 /I/s; BER = 0.0037; data rate = 0.2 Mbps; 128 /I/ = 102.4 ns]

Covert Channels in SoNIC
Challenges: rogue end-hosts.

Outline
Motivation; SoNIC; Applications (measurement / traffic analysis, profiling / fingerprinting, covert channels); Discussion and Conclusion.

Overview of Collaborations and Resources
Mini-Cloud Testbed: DURIP funds 16 SoNIC boards and a small cloud (38 nodes, 608 cores); funded by AFOSR.
NSF Future Internet Architecture: collaboration with Cisco and other universities such as Washington, Penn, Purdue, Berkeley, MIT, Stanford, CMU, Princeton, UIUC, and Texas.
DARPA CSSP: funds research in three phases (we are currently in Phase 2); requires collaboration with a non-DARPA DoD agency; collaborations with AFRL and NGA.
Exo-GENI: Cornell PI into the national research network; layer-2 access nationally; research in Software-Defined Networks (SDN) such as OpenFlow.
NSF CAREER and Alfred P. Sloan Fellowship: fund related basic research.

SoNIC Contributions
Network research: unprecedented access to the PHY with commodity hardware; a platform for cross-network-layer research; can improve network research applications.
Engineering: precise control of interpacket gaps (delays); design and implementation of the PHY in software; a novel scalable hardware design; optimizations and parallelism.
Status: measurements at large scale (DCN, GENI, 40 GbE).
Speaker notes: in two succinct sentences: we developed SoNIC to achieve unprecedented access to the physical layer with commodity hardware so that cross-network-layer research becomes possible, and we carefully designed and optimized it to achieve precise realtime software access to the physical layer. We are currently integrating SoNIC boards into data center networks and GENI testbeds for large-scale measurements, and scaling SoNIC up to support 40GbE.

Concluding Remarks
SoNIC responds to the network at the center of the cloud: high-precision network measurement; profiling and characterizing switches and routers; covert channel detection and prevention; understanding and creating more available and secure networks.
Status: the SoNIC platform is available; a DURIP grant has seeded and paid for a number of boards.
SDNM: Software-Defined Network Measurement; SoNIC-enabled SDN/OpenFlow networks (e.g., GENI).
Collaboration: deployment in experimental networks.

Questions
Contact: hweather@cs.cornell.edu
Website: http://fireless.cs.cornell.edu, http://sonic.cs.cornell.edu, and http://www.cs.cornell.edu/~hweather
