1
From the Cloud to SoNIC: Precise Realtime Software Access and Control of Wired Networks
Prof. Hakim Weatherspoon, joint work with Ki Suh Lee and Han Wang (Cornell University)
Stanford University, April 17, 2014
Speaker notes: Greetings. What is SoNIC? What have we done? How do you use SoNIC?
2
The Rise of Cloud Computing
The promise of the Cloud:
- A computer utility; a commodity
- A catalyst for the technology economy
- Revolutionizing health care, financial systems, scientific research, and society
Speaker notes: The cloud completes the commoditization process and makes storage and computation a commodity.
3
The Rise of Cloud Computing
The promise of the Cloud: "ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." (NIST Cloud Definition)
Speaker notes: The cloud completes the commoditization process and makes storage and computation a commodity. Public vs. private; IaaS, PaaS, SaaS; on-demand (self-service), network access, resource pooling, rapid elasticity, measured service.
5
The Rise of Cloud Computing
The promise of the Cloud (the NIST definition above) requires fundamentals in distributed systems: Networking, Computation, Storage.
6
The Rise of Cloud Computing
The promise of the Cloud: Networking, Computation, Storage.
[Figure: guest OSes and applications running in VMs on Xen-Blanket VMMs, connected through a virtual switch.]
How do we understand this new Internet architecture?
7
My Contributions
Cloud Networking:
- SoNIC (NSDI 2013 and 2014)
- Wireless DC (ANCS 2012, best paper) and NetSlice (ANCS 2012)
- Bifocals (IMC 2010 and DSN 2010)
- Maelstrom (ToN 2011 and NSDI 2008)
- Chaired Tudor Marian's PhD, 2010 (now at Google)
Cloud Computation & Vendor Lock-in:
- Plug into the Supercloud (IEEE Internet Computing 2013)
- Supercloud/Xen-Blanket (EuroSys 2012 and HotCloud 2011)
- Overdriver (VEE 2011)
- Chaired Dan Williams's PhD, 2012 (now at IBM)
Cloud Storage:
- Gecko (FAST 2013 / HotStorage 2012)
- RACS (SOCC 2010)
- SMFS (FAST 2009)
- Antiquity (EuroSys 2007 / NSDI 2006)
- Chaired Lakshmi Ganesh's PhD, 2011 (now at Facebook)
8
The Rise of Cloud Computing
The promise of the Cloud: Networking, Computation, Storage.
[Figure: the same Xen-Blanket VM/switch diagram as slide 6.]
How do we understand this new Internet architecture?
9
Cloud Networking: Challenges
Challenges remain: Performance. Packets are still lost. Why is it so hard to move data between clouds over the wide area? [NSDI 2014, 2013, 2008; FAST 2009; IMC 2010; DSN 2010]
10
Cloud Networking: Challenges
Uncover the ground truth: the network changes inter-packet gaps.
Traffic sent: evenly spaced packets. Traffic received: bursty traffic induced by packet chaining.
Inter-packet gaps are invisible to higher layers; systems programmers treat layers 1 and 2 as a black box. What could we do if we had software access to layers 1 and 2? (Software) access to the physical layer is required.
Speaker notes: Even in an uncongested, semi-private lambda network, packets are still lost. According to the 10G Ethernet standard, 10GbE is always sending at 10 Gbps; when there are no packets to send, it sends idle characters. In the example above, the sender sends uniformly spaced packets at a constant rate, yet the receiver sees bursty traffic in which the gap between packets is at the minimum threshold the standard allows. SoNIC provides the unique ability to observe this effect in software, in real time. Software layers cannot see what happens below the data link layer; we want to give people the ground truth from a high level. If we have a capability like this, what can we do with it?
11
Cloud Networking: Opportunities
Why access the physical layer from software?
Issue: programmers treat layers 1 and 2 as a black box.
Opportunities: network measurements; network monitoring/profiling; network steganography.
[Figure: network stack - Application, Transport, Network, Data Link, Physical (64/66b PCS, PMA, PMD).]
Can improve the security, availability, and performance of cloud networks.
12
Cloud Networking: Opportunities
Understanding Cloud Networks via a Software-defined Network Interface Card (SoNIC)
- Improves understanding of the network
- Improves the security, availability, and performance of the network
SoNIC is a software-defined NIC: it accesses the PHY in real time, in software, and separates what is sent (software) from how it is sent (hardware).
Related: NetSlice, a software router [ANCS 2012], enables software-defined networks and big-data in-network solutions via deep packet inspection (DPI) at 40 Gbps.
13
Outline
- Motivation
- SoNIC
- Network Research Applications
- Conclusion / Research Agenda
Speaker notes: Briefly discuss.
14
10GbE Network Stack
[Figure: the 10GbE stack - Application, Transport, Network, Data Link, Physical. The Data Link layer wraps the payload and L3/L2 headers with a preamble, Ethernet header, CRC, and inter-packet gap. In the Physical layer, the 64/66b PCS encodes, scrambles, and feeds the gearbox on transmit; the receive path runs blocksync, descrambler, and decode before the blocks reach the PMA and PMD. Idle characters (/I/) fill the gaps between frames, and blocks are marked /S/ (start), /D/ (data), /T/ (terminate), /E/ (idle).]
Speaker notes: I need you to pay attention for 30 seconds while I discuss the details of a network stack. Most of this you know; some you do not. Going from the data link layer to the physical layer, the physical layer encodes an Ethernet frame into a sequence of 64-bit blocks, then adds a 2-bit syncheader to each block; that is why this is called 64/66b encoding. The scrambler maintains an equal number of zeros and ones, which we call DC balance. Where are the /I/ characters?
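To make the encoding concrete, here is a minimal C sketch of the two PCS steps the notes describe: prepending the 2-bit syncheader (01 for a data block, 10 for a control block) and scrambling the 64 payload bits with the self-synchronous scrambler G(x) = 1 + x^39 + x^58 that the 10GbE standard specifies. This illustrates the mechanism under those assumptions; it is not SoNIC's actual code, which processes many bits per iteration for speed.

    #include <stdint.h>

    /* Self-synchronous scrambler (G(x) = 1 + x^39 + x^58): each output bit is
     * the input bit XORed with the scrambled bits 39 and 58 positions back.
     * 'state' holds the 58 most recent output bits (bit 0 = most recent). */
    static uint64_t scramble64(uint64_t payload, uint64_t *state)
    {
        uint64_t out = 0;
        for (int i = 0; i < 64; i++) {
            uint64_t in_bit  = (payload >> i) & 1;
            uint64_t out_bit = in_bit ^ ((*state >> 38) & 1)   /* x^39 tap */
                                      ^ ((*state >> 57) & 1);  /* x^58 tap */
            *state = (*state << 1) | out_bit;
            out |= out_bit << i;
        }
        return out;
    }

    /* A 66-bit block: unscrambled 2-bit syncheader + scrambled 64-bit payload. */
    struct block66 { uint8_t sync; uint64_t payload; };

    static struct block66 encode_block(uint64_t raw, int is_control, uint64_t *state)
    {
        struct block66 b;
        b.sync = is_control ? 0x2 : 0x1;   /* 10 = control, 01 = data */
        b.payload = scramble64(raw, state);
        return b;
    }

The syncheader is the only part that is never scrambled, which is what lets the receiver's blocksync stage find the 66-bit boundaries; the scrambling of the payload is what provides the DC balance the notes mention.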
15
10GbE Network Stack: Commodity NIC
[Figure: in a commodity NIC the SW/HW boundary sits high in the stack. Software hands whole packets (Packet i, Packet i+1) down from the Transport/Network layers; the Data Link framing and the entire Physical layer (64/66b PCS encode/scramble/decode, gearbox, blocksync, PMA, PMD) live in hardware.]
16
10GbE Network Stack: SoNIC + NetFPGA
[Figure: SoNIC moves the SW/HW boundary down into the Physical layer. The 64/66b PCS (encode/decode, scrambler/descrambler) runs in software, which is exactly where inter-packet gaps (IPG) and inter-packet delays (IPD) become visible; only the gearbox, blocksync, PMA, and PMD remain in hardware on the NetFPGA-based board.]
Speaker notes: We are actually doing the physical layer in software.
17
SoNIC Design
[Figure: the stack is split at the scrambler. Everything above and including the scrambler/descrambler runs in software; the gearbox, blocksync, PMA, and PMD stay in hardware.]
Speaker notes: Our approach was to separate what is sent, in software, from how it is sent, in hardware. As I said in the background, the sublayers at the bottom of the physical layer do not manipulate bits, while all the layers above and including the scrambler touch every bit.
18
SoNIC Design and Architecture
[Figure: SoNIC architecture. A userspace APP exchanges data with kernel threads implementing the TX MAC, TX PCS, RX PCS, and RX MAC; below the kernel, the hardware provides only the gearbox, blocksync, transceiver, and SFP+ optics.]
Speaker notes: This is the overall design and architecture of SoNIC. I will discuss the design, optimizations, and interface.
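SoNIC runs the figure's kernel components (MAC and PCS) as dedicated threads chained by FIFOs, each pinned to its own CPU core. The sketch below shows that pipeline pattern in miniature, using userspace pthreads as a stand-in for SoNIC's kernel threads; fifo_t and the thread names are illustrative, not SoNIC's API.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified single-producer/single-consumer FIFO standing in for the
     * FIFOs that chain SoNIC's sublayer threads (illustrative only). */
    #define FIFO_SLOTS 1024
    typedef struct {
        uint64_t buf[FIFO_SLOTS];
        volatile unsigned head, tail;   /* head: next write, tail: next read */
    } fifo_t;

    static void fifo_push(fifo_t *f, uint64_t v) {
        while (((f->head + 1) % FIFO_SLOTS) == f->tail) ;  /* spin while full */
        f->buf[f->head] = v;
        f->head = (f->head + 1) % FIFO_SLOTS;
    }
    static uint64_t fifo_pop(fifo_t *f) {
        while (f->tail == f->head) ;                       /* spin while empty */
        uint64_t v = f->buf[f->tail];
        f->tail = (f->tail + 1) % FIFO_SLOTS;
        return v;
    }

    static fifo_t mac_to_pcs;           /* TX MAC -> TX PCS, as in the figure */

    static void *tx_mac_thread(void *arg) {   /* frames packets, computes CRC */
        (void)arg;
        for (uint64_t i = 0; i < 16; i++)
            fifo_push(&mac_to_pcs, i);        /* stand-in for a framed block */
        return NULL;
    }
    static void *tx_pcs_thread(void *arg) {   /* encodes and scrambles blocks */
        (void)arg;
        for (int i = 0; i < 16; i++)
            printf("PCS block %llu\n", (unsigned long long)fifo_pop(&mac_to_pcs));
        return NULL;
    }

    int main(void) {
        pthread_t mac, pcs;                               /* SoNIC pins each  */
        pthread_create(&mac, NULL, tx_mac_thread, NULL);  /* thread to its own */
        pthread_create(&pcs, NULL, tx_pcs_thread, NULL);  /* core             */
        pthread_join(mac, NULL);
        pthread_join(pcs, NULL);
        return 0;
    }

The split matters because the PCS thread must keep the 10.3125 Gbaud bitstream continuous: when the MAC-side FIFO is empty, the real PCS inserts /I/ characters rather than stalling.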
19
SoNIC Design: API
Hardware control: ioctl syscall. I/O: character device interface.
Sample C code for packet generation and capture (reassembled from the slide's two-column layout; values blank on the original slide are left elided):

    #include "sonic.h"

    struct sonic_pkt_gen_info info = {
        .mode = 0,
        .pkt_num = UL,                      /* count elided on the slide   */
        .pkt_len = 1518,
        .mac_src = "00:11:22:33:44:55",
        .mac_dst = "aa:bb:cc:dd:ee:ff",
        .ip_src = " ",                      /* address elided on the slide */
        .ip_dst = " ",                      /* address elided on the slide */
        .port_src = 5000,
        .port_dst = 5000,
        .idle = 12,
    };

    /* OPEN DEVICE */
    fd1 = open(SONIC_CONTROL_PATH, O_RDWR);
    fd2 = open(SONIC_PORT1_PATH, O_RDONLY);

    /* CONFIG SONIC CARD FOR PACKET GEN */
    ioctl(fd1, SONIC_IOC_RESET);
    ioctl(fd1, SONIC_IOC_SET_MODE, PKT_GEN_CAP);
    ioctl(fd1, SONIC_IOC_PORT0_INFO_SET, &info);

    /* START EXPERIMENT */
    ioctl(fd1, SONIC_IOC_START);
    // wait till experiment finishes
    ioctl(fd1, SONIC_IOC_STOP);

    /* CAPTURE PACKET */
    while ((ret = read(fd2, buf, 65536)) > 0) {
        // process data
    }

    close(fd1);
    close(fd2);

Speaker notes: The interface is regular C, e.g., source code for packet generation / capture. What is the significance of "12"? Twelve /I/ characters is the minimum inter-packet gap the standard allows.
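A usage note on .idle = 12 (my own back-of-the-envelope, not from the slide): because the link always runs at line rate, the frame length and idle count fully determine the packet rate and effective data rate. Every byte on the wire, whether frame, preamble, or /I/, lasts 0.8 ns at 10 Gbps:

    #include <stdio.h>

    /* Effective packet rate and data rate for a given frame length and
     * inter-packet gap on a 10 Gbps link. Illustrative arithmetic only. */
    int main(void)
    {
        const double line_rate_bps = 10e9;
        int pkt_len  = 1518;    /* frame bytes, as in the example above        */
        int preamble = 8;       /* preamble + start-of-frame delimiter bytes   */
        int idle     = 12;      /* minimum inter-packet gap: 12 /I/ characters */

        double bytes_per_pkt = pkt_len + preamble + idle;
        double pkts_per_sec  = line_rate_bps / 8.0 / bytes_per_pkt;
        double data_rate_bps = pkts_per_sec * pkt_len * 8.0;

        printf("%.0f packets/s, %.2f Gbps of frame data\n",
               pkts_per_sec, data_rate_bps / 1e9);  /* ~812744 pkt/s, ~9.87 Gbps */
        return 0;
    }

Raising .idle above 12 is how a SoNIC packet generator paces traffic below line rate while keeping every gap exact to the /I/ character.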
20
Outline
- Motivation
- SoNIC
- Network Research Applications: measurement / traffic analysis; profiling / fingerprinting; covert channels
- Conclusion / Research Agenda
Speaker notes: How we designed it, what it can actually do, and what it can do for researchers; vision plus an instance.
21
Measurement / Traffic Analysis using SoNIC
Uncover the ground truth: the network changes inter-packet gaps.
Traffic sent: evenly spaced packets. Traffic received: bursty traffic induced by packet chaining.
Inter-packet gaps are invisible to higher layers, but not to SoNIC.
Speaker notes: If we have a capability like this, what can we do with it?
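Because SoNIC counts the exact number of /I/ characters between frames, converting a gap into wall-clock time is simple arithmetic: one /I/ is one byte, and at 10 Gbps a byte lasts 0.8 ns. A minimal helper (illustrative, not SoNIC's API):

    #include <stdio.h>
    #include <stdint.h>

    /* Convert an inter-packet gap counted in /I/ characters to nanoseconds:
     * one /I/ = 8 bits, and at 10 Gbps each bit lasts 0.1 ns. */
    static double ipg_to_ns(uint64_t idles)
    {
        return (double)idles * 8.0 / 10.0;
    }

    int main(void)
    {
        printf("12 /I/  = %.1f ns\n", ipg_to_ns(12));    /* 9.6 ns: minimum gap */
        printf("128 /I/ = %.1f ns\n", ipg_to_ns(128));   /* 102.4 ns            */
        return 0;
    }

Gaps this small in chained packets are below the timestamp resolution software normally sees from a commodity NIC, which is why the burstiness above goes unnoticed.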
22
Measurement / Traffic Analysis using SoNIC
- Precise end-to-end instrumentation platform
- Measurement at large scale
- Towards an open measurement platform
23
Profiling / Fingerprinting using SoNIC
One-hop profiling: through a single switch.
24
Profiling / Fingerprinting using SoNIC
[Figure: experimental setup. One SoNIC host drives two ports (APP0/APP1 with TX/RX MAC and PCS pipelines on sockets 0 and 1, each with its own SFP), sending 1 Gbps of 1518B packets through a Cisco Catalyst 6500 switch and capturing what returns.]
Speaker notes: So I have described what we did for SoNIC; does it work? Here I show you. Explain the graph.
25
Profiling / Fingerprinting using SoNIC
Router / switch signatures: different routers and switches have different response functions. Applications: improve simulation models of switches and routers; detect switch and router models in a real network.
[Figure: interpacket-delay distributions for a Cisco 4948, a Cisco 6509, and an IBM BNT G8264R at a 6 Gbps data rate with 1500-byte packets.]
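A switch signature here is just the distribution of received interpacket delays under a known offered load. A minimal sketch of binning captured IPDs into that distribution (illustrative; SoNIC itself reports exact /I/ counts per gap):

    #include <stdio.h>
    #include <stdint.h>

    #define NBINS  64
    #define BIN_NS 50.0    /* 50 ns bins covering 0-3200 ns */

    /* Accumulate a histogram of interpacket delays; normalized, this is the
     * "response function" that fingerprints a switch or router model. */
    static void ipd_histogram(const double *ipd_ns, int n, uint64_t *bins)
    {
        for (int i = 0; i < n; i++) {
            int b = (int)(ipd_ns[i] / BIN_NS);
            if (b >= NBINS) b = NBINS - 1;   /* clamp long tails */
            bins[b]++;
        }
    }

    int main(void)
    {
        /* Hypothetical captured delays (ns); a real run uses millions of gaps. */
        double ipd[] = { 2000.0, 2000.8, 2049.6, 1999.2, 2561.6 };
        uint64_t bins[NBINS] = { 0 };
        ipd_histogram(ipd, 5, bins);
        for (int b = 0; b < NBINS; b++)
            if (bins[b])
                printf("%4.0f-%4.0f ns: %llu\n", b * BIN_NS, (b + 1) * BIN_NS,
                       (unsigned long long)bins[b]);
        return 0;
    }

Two devices forwarding the same uniform 6 Gbps stream produce visibly different histograms, which is what makes model detection possible.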
26
Profiling / Fingerprinting using SoNIC
Router / switch signatures (continued).
[Figure: the corresponding interpacket-delay distribution for a NetFPGA 10G at 6 Gbps with 1500-byte packets.]
27
Profiling / Fingerprinting using SoNIC
End-to-end profile of the GENI network: modeling network elements; a testbed for network systems theory and queueing theory; towards a predictable network. What is the aggregate effect?
[Figure: a GENI path spanning Cornell, Princeton, UPenn, Stanford, U. Washington, and Berkeley.]
Speaker notes: Add Bifocals there; the highest peak to the left may have been caused in the middle.
28
Profiling / Fingerprinting using SoNIC
Challenges: Rogue routers
29
Covert Channels in SoNIC
Create / detect / prevent covert channels in layers 1 and 2.
30
Covert Channels in SoNIC
Covert channels hide the transmission of data.
- Storage channel: writing/reading of a storage location
- Timing channel: modulation of system resources
[Figure: sender and receiver network stacks, with the channel embedded in the Physical layer (64/66b PCS, PMA, PMD).]
31
Covert Channels in SoNIC
Existing covert channels use TCP/IP headers, HTTP requests, and packet rate / timing, and face an increasing number of detection techniques. SoNIC enables covert channels at the Physical layer instead.
[Figure: the same sender/receiver stacks, with the channel below the Data Link layer.]
32
Covert Channels in SoNIC
64/66b block formats (2-bit syncheader, then for control blocks a block-type byte):

    Data block:      01        D0 D1 D2 D3 D4 D5 D6 D7
    /E/ (idle):      10  0x1e  C0 C1 C2 C3 C4 C5 C6 C7
    /S/ (start):     10  0x33  C0 C1 C2 C3 D5 D6 D7
                     10  0x78  D1 D2 D3 D4 D5 D6 D7
    /T/ (terminate): 10  0x87  C1 C2 C3 C4 C5 C6 C7
                     10  0x99  D0 C2 C3 C4 C5 C6 C7
                     10  0xaa  D0 D1 C3 C4 C5 C6 C7
                     10  0xb4  D0 D1 D2 C4 C5 C6 C7
                     10  0xcc  D0 D1 D2 D3 C5 C6 C7
                     10  0xd2  D0 D1 D2 D3 D4 C6 C7
                     10  0xe1  D0 D1 D2 D3 D4 D5 C7
                     10  0xff  D0 D1 D2 D3 D4 D5 D6

Legend: /S/ start-of-frame block, /T/ end-of-frame block, /E/ idle block, /D/ data block. On the wire an Ethernet frame appears as a block sequence such as /S/ /D/ /D/ /D/ /D/ /T/ /E/, with idle blocks filling the gap before the next frame.
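Given that table, finding frame boundaries in a raw 66-bit stream is a small switch statement, and the idle blocks between a /T/ and the next /S/ are precisely where a PHY-level covert channel can hide, since nothing above layer 2 ever sees them. A sketch of a classifier over the table (my illustration, not SoNIC's decoder):

    #include <stdint.h>

    enum block_kind { BLK_DATA, BLK_IDLE, BLK_START, BLK_TERM, BLK_UNKNOWN };

    /* Classify a 66-bit PCS block from its 2-bit syncheader and, for control
     * blocks, the leading block-type byte (values from the table above). */
    static enum block_kind classify_block(uint8_t sync, uint8_t block_type)
    {
        if (sync == 0x1)                    /* 01: all-data block   */
            return BLK_DATA;
        if (sync != 0x2)                    /* 00 and 11 are invalid */
            return BLK_UNKNOWN;
        switch (block_type) {               /* 10: control block    */
        case 0x1e:                          return BLK_IDLE;   /* /E/ */
        case 0x33: case 0x78:               return BLK_START;  /* /S/ */
        case 0x87: case 0x99: case 0xaa:
        case 0xb4: case 0xcc: case 0xd2:
        case 0xe1: case 0xff:               return BLK_TERM;   /* /T/ */
        default:                            return BLK_UNKNOWN;
        }
    }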
33
Covert Timing Channel in SoNIC
Embedding signals into interpacket gaps: a large gap encodes '1', a small gap encodes '0'. This yields a covert timing channel built by modulating IPGs at a 100 ns scale.
- Overt channel at 1 Gbps, covert channel at 80 kbps
- Over a 9-hop Internet path with cross traffic (NLR): less than 10% BER (and the BER can be masked with FEC)
- Undetectable to a software endhost
Speaker notes: Takeaway: a covert timing channel by modulating IPGs; with the overt channel at 3 Gbps, the covert channel reaches 0.25 Mbps.
34
Covert Timing Channel in SoNIC
Modulating IPGs at the 100 ns scale (128 /I/s, i.e., 102.4 ns at 0.8 ns per /I/), over 4 hops. Around the 3562 /I/ mean gap, '0' is encoded as 3562 - 128 /I/s and '1' as the correspondingly larger gap (in general, 3562 +/- a /I/s); the '1' values are elided on the slide.
[Figure: CDF of interpacket delays (ns), showing two separable modes for '0' and '1'. BER = 0.37%; data rate = 0.2 Mbps.]
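A sketch of the sender side using the slide's parameters (a 3562 /I/ mean gap modulated by +/-128 /I/s); the function names are illustrative, not SoNIC's API:

    #include <stdint.h>
    #include <stdio.h>

    #define BASE_IPG 3562   /* mean gap in /I/ characters (from the slide)  */
    #define DELTA     128   /* modulation step: ~102.4 ns at 0.8 ns per /I/ */

    /* Gap carrying one covert bit: larger gap = '1', smaller gap = '0'. */
    static uint32_t ipg_for_bit(int bit)
    {
        return bit ? BASE_IPG + DELTA : BASE_IPG - DELTA;
    }

    /* Print the gap schedule for one covert byte, MSB first. A SoNIC packet
     * generator would insert exactly this many /I/s after each packet. */
    static void schedule_byte(uint8_t byte)
    {
        for (int i = 7; i >= 0; i--)
            printf("gap = %u /I/s\n", ipg_for_bit((byte >> i) & 1));
    }

    int main(void)
    {
        schedule_byte(0xA5);   /* example covert payload: bits 10100101 */
        return 0;
    }

The receiver simply thresholds each observed gap at the 3562 /I/ mean; the overlap between the two IPD modes in the CDF above is what produces the 0.37% bit-error rate after 4 hops.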
35
Covert Timing Channel in SoNIC
Can we prevent covert timing channels?
[Figure: the same CDF of interpacket delays (ns) around the 3562 /I/ mean; 128 /I/ modulation and the 0.2 Mbps data rate as on the previous slide.]
36
Covert Channels in SoNIC
Challenges: Rogue end-hosts
37
Outline
- Motivation
- SoNIC
- Applications: measurement / traffic analysis; profiling / fingerprinting; covert channels
- Discussion and Conclusion
38
Overview of Collaborations and Resources
- Mini-Cloud Testbed (DURIP, funded by AFOSR): funds 16 SoNIC boards and a small cloud of 38 nodes and 608 cores
- NSF Future Internet Architecture: collaboration with Cisco and other universities such as Washington, Penn, Purdue, Berkeley, MIT, Stanford, CMU, Princeton, UIUC, and Texas
- DARPA CSSP: funds research in three phases (currently in Phase 2); requires collaboration with a non-DARPA DoD agency; collaborations with AFRL and NGA
- ExoGENI: Cornell PI into the national research network; layer 2 access nationally; research in Software-Defined Networks (SDN) such as OpenFlow
- NSF CAREER and Alfred P. Sloan Fellowship: fund related basic research
39
SoNIC Contributions
Network research: unprecedented access to the PHY with commodity hardware; a platform for cross-network-layer research; can improve network research applications.
Engineering: precise control of interpacket gaps (delays); design and implementation of the PHY in software; novel scalable hardware design; optimizations / parallelism.
Status: measurements at large scale: DCN, GENI, 40 GbE.
Speaker notes: In two succinct sentences: We developed SoNIC to achieve unprecedented access to the physical layer with commodity hardware so that cross-network-layer research becomes possible. We carefully designed and heavily optimized it to achieve precise realtime software access to the physical layer. We are currently integrating SoNIC boards into data center networks and GENI testbeds for large-scale measurements, and scaling SoNIC up to support 40GbE.
40
Concluding Remarks
SoNIC responds to the network at the center of the cloud:
- High-precision network measurement
- Profiles and characterizes switches and routers
- Covert channel detection and prevention
- Understand and create more available and secure networks
Status: the SoNIC platform is available; a DURIP grant has seeded and paid for a number of boards.
SDNM: Software-Defined Network Measurement; SoNIC-enabled SDN/OpenFlow networks (e.g., GENI).
Collaboration: deployment in experimental networks.
41
Questions Contact: hweather@cs.cornell.edu
Website: