1
SoNIC: Covert Channels over High Speed Cloud Networks
Ki Suh Lee, Han Wang, Hakim Weatherspoon (Cornell University). TRUST Autumn 2013 Conference, Washington DC, October 9, 2013.
Mission-aware networking. Greetings. SoNIC is a software-defined network interface card that allows users to access and control an entire network protocol stack in software and in realtime. This is joint work.
2
Cloud business, government, and society depend on the network, both public and private.
3
Cloud Challenges: Data Exfiltration / Rogue Routers
4
Cloud Challenges: Data Exfiltration / Rogue Hosts
5
Understand how to prevent exfiltration of data
Goal: Understand how to prevent exfiltration of data. To do that, we first need to understand how one might use covert channels. Our number one principle is precision: the precision of network measurements can be significantly improved, to sub-nanosecond resolution, via access to the physical layer.
6
Detecting timing channel
Covert Channels
Covert channels via TCP/IP headers and HTTP requests (application layer)
Covert timing channel: the space between packets
All of the above are fairly easy to detect
[Diagram: protocol stack (Application, Transport, Network, Data Link, Physical); Packet i and Packet i+1 separated by the interpacket gap (IPG); the interpacket delay (IPD) spans from one packet to the next, i.e., packet plus gap]
Definition of IPD, and how it is used in network research. We argue here that we can improve these with access to the physical layer of the network protocol stack.
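The slide's first bullet, a header-based channel, is easy to make concrete. The sketch below is a hypothetical illustration (the encoding and helper are ours, not from the talk): it hides one covert message byte per packet in the 16-bit IPv4 Identification field, a classic header channel, and exactly the kind of repeating pattern that makes such channels fairly easy to detect.

    /* Hypothetical header-based covert channel: one covert byte per
     * packet, carried in the IPv4 Identification field. The encoding
     * (message byte in the high byte, rolling sequence number in the
     * low byte) is invented here for illustration. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint16_t covert_ip_id(const uint8_t *msg, size_t i) {
        return (uint16_t)((msg[i] << 8) | (i & 0xff));
    }

    int main(void) {
        const uint8_t msg[] = "leak";
        for (size_t i = 0; i < strlen((const char *)msg); i++)
            printf("packet %zu: ip_id = 0x%04x\n", i, covert_ip_id(msg, i));
        return 0;
    }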
7
Detecting timing channel
Covert Channels
Valuable information in the PHY: idle characters
Can provide a precise timing base for control
Each bit is ~97 ps wide
[Diagram: Packet i and Packet i+1 separated by the IPG; IPD spans packet plus gap]
How can access to the physical layer of a network protocol stack improve network research? Because of special characters that reside only in the physical layer, called idle characters. Idle characters fill any gap between two packets in 10 Gigabit Ethernet. In particular, the 10GbE protocol always sends 10 gigabits per second; as a result each bit is about 100 ps wide, and 7~8 of them make up one idle character. Therefore, if we count the number of /I/s, or the bits, between two packets, we can measure interpacket gaps very precisely. For example:
8
Covert Channels Valuable information in PHY: Idle characters
Covert Channels
Valuable information in the PHY: idle characters
Can provide a precise timing base for control
Each bit is ~97 ps wide; one idle character (/I/) = 7~8 bits; 12 /I/s = 100 bits = 9.7 ns
12 /I/s translate into 100 bits, and consequently 9.7 ns. Therefore, by accessing and controlling individual bits, we can achieve an unheard-of level of precision and control, which in turn can improve the network research applications mentioned earlier. Specifically, the 10GbE standard requires a minimum of 12 idle characters between packets.
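The conversion from idle counts to time can be written down directly. A minimal sketch, assuming 10GbE's 10.3125 Gbd line rate (~96.97 ps per bit) and 66 encoded bits carrying 8 characters (8.25 bits per character); the helper name is ours:

    /* Sketch: idle-character counts to nanoseconds. Each character is
     * 66/8 = 8.25 encoded bits and each bit lasts 1/10.3125 GHz, so
     * one character is exactly 0.8 ns. */
    #include <stdio.h>

    static double idles_to_ns(unsigned idles) {
        const double ps_per_bit    = 1e12 / 10.3125e9; /* ~96.97 ps */
        const double bits_per_idle = 66.0 / 8.0;       /* 8.25 bits */
        return idles * bits_per_idle * ps_per_bit / 1000.0;
    }

    int main(void) {
        printf("12 /I/   = %.1f ns\n", idles_to_ns(12));   /* ~9.6 ns; the slide rounds to 9.7 */
        printf("2048 /I/ = %.1f ns\n", idles_to_ns(2048)); /* 1638.4 ns, matching the later demo */
        return 0;
    }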
9
Covert Channels Valuable information in PHY: Idle characters
Covert Channels
Valuable information in the PHY: idle characters
Issue 1: The PHY is simply a black box; there is no interface from the NIC or OS, so the valuable information is invisible (discarded)
Issue 2: Limited access to hardware
Then, are there any ways to access and control them? Unfortunately, the physical layer has been a black box to network researchers for a very long time. It is possible to access it with specialized hardware, such as customized ASIC chips; however, we would rather do so in software, for flexibility.
10
Covert Channels Goal: Control every bit in software in realtime
Goal: Control every bit in software in realtime
Enable research on PHY covert channels
Challenge: Requires unprecedented software access to the PHY
Our ultimate goal is to control every single bit in software and in realtime, so that we can improve network research applications that rely on interpacket delays or gaps. The challenge, therefore, is how to access the physical layer in software.
11
SoNIC: Software-defined Network Interface Card
Implements the PHY in software
Enables control of and access to every bit in realtime
With commodity components
Thus enables novel network research
SoNIC: Precise Realtime Software Access and Control of Wired Networks. Ki Suh Lee, Han Wang, and Hakim Weatherspoon. NSDI, April 2013.
The main idea of SoNIC is to implement the physical layer in software, so that we can access the entire network protocol stack, similar to software-defined radio for wireless networks. Before I discuss the design and implementation of SoNIC, I will briefly cover the 10GbE network stack to explain how SoNIC is designed.
12
Outline Introduction Demo: PHY Covert Timing Channel
SoNIC: Software-defined Network Interface Card
Discussion and Concluding Remarks
In this talk I will introduce SoNIC, a software-defined network interface card, and the applications it enables.
13
SoNIC Demo: Covert Timing Channel
14
SoNIC Demo: Covert Timing Channel
Embedding signals into interpacket gaps: large gap = '1', small gap = '0'
Covert timing channel by modulating IPGs at 100 ns to 1 us scales (the modulation rule is sketched below)
[Diagram: Packet i and Packet i+1 pairs, one with a wide gap and one with a narrow gap]
Overt channel at 1 Gbps, covert channel at 80 kbps
Over a 9-hop Internet path with cross traffic (NLR): less than 10% BER (can mask BER with FEC); undetectable to a software endhost
Takeaways: a covert timing channel coexists with the overt channel by modulating IPGs; an overt channel at 3 Gbps supports a covert channel at 0.25 Mbps
BER = 0.01; data rate = 0.08 Mbps; 2,048 /I/ = 1,638.4 ns; packet size = 1518 B; IPG = … /I/; IPD = … ns
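A minimal sketch of the sender side (helper and parameter names are ours, not SoNIC's API): each covert bit selects the idle count for the next interpacket gap.

    /* Encoder sketch: '1' widens the next gap, '0' narrows it. */
    #include <stdio.h>

    static unsigned ipg_for_bit(int bit, unsigned base_idles, unsigned delta_idles) {
        return bit ? base_idles + delta_idles : base_idles - delta_idles;
    }

    int main(void) {
        const char *covert = "1011";
        /* parameters from the 100 ns-scale demo: base 3562 /I/s, delta 128 /I/s */
        for (const char *p = covert; *p; p++)
            printf("bit %c -> IPG = %u /I/s\n", *p,
                   ipg_for_bit(*p == '1', 3562, 128));
        return 0;
    }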
15
SoNIC Demo: Covert Timing Channel
Modulating IPGs at the 100 ns scale (= 128 /I/s), over 10 hops
'1': 3562 + 128 /I/s; '0': 3562 - 128 /I/s (in general, '1': 3562 + a /I/s; '0': 3562 - a /I/s)
Packet size = 1518 B; IPG = 3562 /I/; IPD = 4070 ns
BER = 0.37%; data rate = 0.2 Mbps; 128 /I/ = 102.4 ns
[Figure: CDF of interpacket delays (ns), with the '0' and '1' gap distributions clearly separated]
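On the receive side, decoding reduces to a threshold test against the unmodulated baseline, which is why the two CDF modes in the figure translate directly into bits. A sketch (ours; SoNIC itself reports exact /I/ counts per gap):

    /* Decoder sketch: gaps above the baseline decode as '1',
     * gaps below as '0'. The sample values are made up. */
    #include <stdio.h>

    static int decode_bit(unsigned measured_idles, unsigned base_idles) {
        return measured_idles > base_idles;
    }

    int main(void) {
        unsigned gaps[] = { 3690, 3434, 3701, 3440 };
        for (int i = 0; i < 4; i++)
            printf("IPG %u /I/s -> bit %d\n", gaps[i], decode_bit(gaps[i], 3562));
        return 0;
    }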
16
SoNIC Demo: Covert Timing Channel
Modulating IPGs at the 1 us scale (= 2,048 /I/s), over 9 hops
'1': 13,738 + 2,048 /I/s; '0': 13,738 - 2,048 /I/s (in general, '1': 13,738 + a /I/s; '0': 13,738 - a /I/s)
Overt channel at 1 Gbps, covert channel at 80 kbps
Over a 9-hop Internet path with cross traffic (NLR): less than 10% BER (can mask BER with FEC); undetectable to a software endhost
BER = 0.01; data rate = 0.08 Mbps; 2,048 /I/ = 1,638.4 ns; packet size = 1518 B; IPG = 13,738 /I/; IPD ≈ 12,211 ns (the 13,738 /I/ gap plus the 1,526-byte frame, at 0.8 ns per character)
17
SoNIC Demo: Covert Timing Channel
Prevent covert timing channels?
[Figure: CDF of interpacket delays (ns) around a 3562 /I/ gap; BER = 0.37%, data rate = 0.2 Mbps, 128 /I/ = 102.4 ns]
18
SoNIC Demo: Covert Timing Channel
Router/Switch Signatures
Different routers and switches have different response functions.
Improve simulation models of switches and routers.
Detect switch and router models in a real network.
Profile different switches; compare across data rates.
[Figure: IPD distributions measured through a Cisco 4948, a Cisco 6509, and an IBM BNT G8264R, with 1500-byte packets at 6 Gbps]
19
Outline Introduction Demo: PHY Covert Timing Channel
SoNIC: Software-defined Network Interface Card
Discussion and Concluding Remarks
Next, I will introduce SoNIC, the software-defined network interface card itself.
20
10GbE Network Stack Application Transport Network Data Link Physical
[Diagram: encapsulation down the stack: application Data gains an L3 header (Network), then an L2 header (Data Link), then preamble, Ethernet header, CRC, and an interpacket gap filled with idle characters (/I/) (Physical); below that, the 64b/66b PCS path: Encode/Decode with /S/ /D/ /T/ /E/ control characters, 64-bit blocks with 2-bit sync headers, Scrambler/Descrambler, Gearbox/Blocksync, then PMA and PMD]
I need you to pay attention for 30 seconds while I discuss the details of the network stack; most of this you know, some you do not. Going from the data link layer to the physical layer: the physical layer encodes an Ethernet frame into a sequence of 64-bit blocks, then adds a 2-bit sync header to each block; that is why this is called 64b/66b encoding. The scrambler maintains an equal number of zeros and ones, which we call DC balance. Where are the /I/ characters?
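For concreteness, the scrambler the notes mention is the self-synchronizing scrambler of IEEE 802.3 Clause 49, polynomial x^58 + x^39 + 1; the 2-bit sync header bypasses it. A bit-serial sketch, ours for illustration (real implementations scramble 64 bits in parallel):

    /* 64b/66b payload scrambler, bit-serial: each output bit is the
     * input bit XORed with the scrambler outputs from 39 and 58 bits
     * ago. The 58-bit history lives in the low bits of 'state'. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t scramble64(uint64_t data, uint64_t *state) {
        uint64_t out = 0;
        for (int i = 0; i < 64; i++) {
            uint64_t bit = ((data >> i) ^ (*state >> 38) ^ (*state >> 57)) & 1;
            *state = (*state << 1) | bit;   /* shift the new output bit in */
            out |= bit << i;
        }
        return out;
    }

    int main(void) {
        uint64_t state = 0;
        printf("scrambled: %016llx\n",
               (unsigned long long)scramble64(0x1ULL, &state));
        return 0;
    }

The descrambler is the mirror image: it XORs each received bit with the received (scrambled) history, so the two sides resynchronize automatically after bit errors.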
21
10GbE Network Stack: Commodity NIC
[Diagram: in a commodity NIC, the Application, Transport, and Network layers run in software (SW); the Data Link layer and the entire PHY, i.e., the 64b/66b PCS (Encode/Decode, Scrambler/Descrambler, Gearbox/Blocksync), PMA, and PMD, are fixed in hardware (HW), so software only ever sees whole packets]
22
10GbE Network Stack: SoNIC NetFPGA
We are actually implementing the physical layer in software.
[Diagram: with SoNIC on a NetFPGA board, the SW/HW boundary moves down into the PHY: the PCS Encode/Decode and Scrambler/Descrambler run in software, while the Gearbox/Blocksync, PMA, and PMD remain in hardware]
23
SoNIC Design
Our approach separates what is sent, decided in software, from how it is sent, handled in hardware. As noted in the background, the sublayers at the bottom of the physical layer do not manipulate bit values, while every layer above and including the scrambler touches every bit.
[Diagram: SoNIC's split of the PHY: Encode/Decode (/S/ /D/ /T/ /E/) and Scrambler/Descrambler in software (SW); Gearbox/Blocksync, PMA, and PMD in hardware (HW)]
24
SoNIC Design and Architecture
This is the overall design and architecture of SoNIC: an APP interface in userspace; dedicated TX/RX MAC and TX/RX PCS threads in the kernel; and the Gearbox, Blocksync, and transceiver (SFP+) in hardware. I will discuss the design, the optimizations, and the interface.
25
SoNIC Design: Hardware
To deliver every bit from/to software:
High-speed transceivers
PCIe Gen2 (= 32 Gbps)
Optimized DMA engine
We have a development board for the hardware component. The part in red (the SFP+ transceivers) is where the fiber is plugged in and where the optical signal is converted to a digital signal and vice versa. The part in purple chops the bitstream into smaller chunks and delivers them to the host machine. We developed and optimized a DMA engine to deliver two bitstreams from the two physical ports over a PCIe Gen2 interface, which supports up to 32 Gbps.
[Diagram: two SFP+ ports feeding an FPGA (Gearbox/Blocksync, PMA/PMD), connected to the host over PCIe Gen2]
26
SoNIC Design: Software
Dedicated kernel threads: TX/RX PCS threads and TX/RX MAC threads per port, plus an APP thread as the interface to userspace.
[Diagram: two ports, each with an APP thread above TX/RX MAC threads above TX/RX PCS threads; the software PCS includes Encode/Decode and Scrambler/Descrambler]
27
SoNIC Design: Synchronization
Low-latency FIFOs; pointer-polling; no interrupts.
It is very important to tightly synchronize the cores, and hardware with software, so that bits are processed continuously; otherwise the link goes down and packets are lost. We do two things for this, as sketched below: low-latency FIFOs between stages, and pointer-polling instead of interrupts.
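An illustrative sketch of the two techniques named on the slide (ours, not SoNIC source): a single-producer/single-consumer FIFO where each side spins on a shared pointer, so the fast path never takes an interrupt or sleeps.

    /* Low-latency SPSC FIFO with pointer-polling; no interrupts. */
    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    #define SLOTS 1024
    struct fifo {
        _Atomic size_t head, tail;  /* producer advances head, consumer tail */
        uint64_t slot[SLOTS];
    };

    static void fifo_push(struct fifo *f, uint64_t v) {
        size_t h = atomic_load_explicit(&f->head, memory_order_relaxed);
        while (h - atomic_load_explicit(&f->tail, memory_order_acquire) == SLOTS)
            ;                       /* spin: ring is full */
        f->slot[h % SLOTS] = v;
        atomic_store_explicit(&f->head, h + 1, memory_order_release);
    }

    static uint64_t fifo_pop(struct fifo *f) {
        size_t t = atomic_load_explicit(&f->tail, memory_order_relaxed);
        while (atomic_load_explicit(&f->head, memory_order_acquire) == t)
            ;                       /* pointer-polling: spin, never sleep */
        uint64_t v = f->slot[t % SLOTS];
        atomic_store_explicit(&f->tail, t + 1, memory_order_release);
        return v;
    }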
28
SoNIC Design: Interface and Control
Hardware control: ioctl syscall
I/O: character device interface
Sample C code for packet generation and capture:

    #include "sonic.h"

    struct sonic_pkt_gen_info info = {
        .mode     = 0,
        .pkt_num  = UL,                 /* count elided on the slide */
        .pkt_len  = 1518,
        .mac_src  = "00:11:22:33:44:55",
        .mac_dst  = "aa:bb:cc:dd:ee:ff",
        .ip_src   = " ",                /* addresses elided on the slide */
        .ip_dst   = " ",
        .port_src = 5000,
        .port_dst = 5000,
        .idle     = 12,
    };

    /* OPEN DEVICE */
    fd1 = open(SONIC_CONTROL_PATH, O_RDWR);
    fd2 = open(SONIC_PORT1_PATH, O_RDONLY);

    /* CONFIG SONIC CARD FOR PACKET GEN */
    ioctl(fd1, SONIC_IOC_RESET);
    ioctl(fd1, SONIC_IOC_SET_MODE, PKT_GEN_CAP);
    ioctl(fd1, SONIC_IOC_PORT0_INFO_SET, &info);

    /* START EXPERIMENT */
    ioctl(fd1, SONIC_IOC_START);
    /* wait till experiment finishes */
    ioctl(fd1, SONIC_IOC_STOP);

    /* CAPTURE PACKET */
    while ((ret = read(fd2, buf, 65536)) > 0) {
        /* process data */
    }

    close(fd1);
    close(fd2);

The interface is regular C, e.g., this source code for packet generation and capture. The significance of "12": it is the minimum number of idle characters the 10GbE standard requires between packets.
29
Outline Introduction Demo: PHY Covert Timing Channel
SoNIC: Software-defined Network Interface Card
Discussion and Concluding Remarks
Finally, discussion and concluding remarks.
30
Overview of Collaborations and Resources
Mini-Cloud Testbed (DURIP): funds 16 SoNIC boards and a small cloud of 38 nodes and 608 cores; funded by AFOSR
NSF TRUST and Future Internet Architecture: collaboration with Cisco and other universities such as Berkeley, CMU, Washington, Penn, Purdue, MIT, Stanford, Princeton, UIUC, and Texas
DARPA CSSP: funds research in three phases (we are currently in Phase 2); requires collaboration with a non-DARPA DoD agency; collaborations with AFRL and NGA
Exo-GENI: Cornell PI into a national research network; Layer 2 access nationally; research in Software-Defined Networks (SDN) such as OpenFlow
NSF CAREER and Alfred P. Sloan Fellowship: fund related basic research
31
Contributions Network Research Engineering Status
Network research: unprecedented access to the PHY with commodity hardware; a platform for cross-network-layer research; can improve network research applications
Engineering: precise control of interpacket gaps (delays); design and implementation of the PHY in software; novel scalable hardware design; optimizations and parallelism
Status: measurements at large scale: DCN, GENI, 40 GbE
In two succinct sentences: we developed SoNIC to achieve unprecedented access to the physical layer with commodity hardware, so that cross-network-layer research becomes possible; and we carefully designed and optimized it to achieve precise realtime software access to the physical layer. We are currently integrating SoNIC boards into data center networks and GENI testbeds for large-scale measurements, and scaling SoNIC up to support 40GbE.
32
Concluding Remarks
SoNIC: precise realtime software access and control of wired networks; necessary to understand and trust clouds and cloud networks
Demonstrated: a covert timing channel over 9 hops with cross traffic, at 80 kbps with less than 10% BER
Network applications: network steganography, measurements, and characterization
Status: SoNIC at large scale: DURIP, GENI, 40 GbE
Webpage: SoNIC is available open source.
33
My Contributions Cloud Networking Cloud Computation & Vendor Lock-in
Cloud Networking: SoNIC in NSDI 2013; Wireless DC in ANCS 2012 (best paper) and NetSlice in ANCS 2012; Bifocals in IMC 2010 and DSN 2010; Maelstrom in ToN 2011 and NSDI 2008; chaired Tudor Marian's PhD, 2010 (now at Google)
Cloud Computation & Vendor Lock-in: Plug into the Supercloud in IEEE Internet Computing 2013; Supercloud/Xen-Blanket in EuroSys 2012 and HotCloud 2011; Overdriver in VEE 2011; chaired Dan Williams's PhD, 2012 (now at IBM)
Cloud Storage: Gecko in FAST 2013 and HotStorage 2012; RACS in SOCC 2010; SMFS in FAST 2009; Antiquity in EuroSys 2007 and NSDI 2006; chaired Lakshmi Ganesh's PhD, 2011 (now at UT Austin)
34
Thank you!