1
Software Defined Networking
Aiming at addressing the challenges in controlling and managing networks.
Basic challenge: how to easily configure and manage (large) networks?
Key idea: separate network control from the forwarding elements (the data plane), with two key abstractions:
- A programmable data plane with a "match-action" forwarding abstraction; forwarding elements controlled via (standardized) APIs
- A (logically) centralized control plane with a "global" view of network state (a network OS); network control/management via policies or "control programs"
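To make the "match-action" abstraction concrete, here is a minimal sketch of what one flow-table entry could look like; the struct layout and field names are illustrative assumptions, not OpenFlow's actual format.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical match-action entry: a wildcarded header match plus an action.
 * Real OpenFlow tables carry many more match fields, priorities, counters. */
struct match {
    uint32_t ip_dst, ip_dst_mask;   /* destination IP prefix                */
    uint16_t tp_dst;                /* transport destination port, 0 = any  */
    uint8_t  ip_proto;              /* e.g. 6 = TCP, 0 = any                */
};

enum action { ACT_FORWARD, ACT_DROP, ACT_SEND_TO_CONTROLLER };

struct flow_rule {
    struct match match;
    enum action  action;
    uint16_t     out_port;          /* used when action == ACT_FORWARD      */
};

/* Does a parsed packet header satisfy the (wildcarded) match? */
static bool rule_matches(const struct flow_rule *r,
                         uint32_t ip_dst, uint8_t proto, uint16_t dport)
{
    if ((ip_dst & r->match.ip_dst_mask) != r->match.ip_dst) return false;
    if (r->match.ip_proto && r->match.ip_proto != proto)    return false;
    if (r->match.tp_dst   && r->match.tp_dst   != dport)    return false;
    return true;
}
```

The controller installs such rules over a standardized API; the switch simply matches packets against them and applies the associated action.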
2
Software Defined Networking
[Figure: layered SDN architecture with forwarding elements (FEs), a Network Operating System, network virtualization, and control programs]
A network in which the control plane is physically separate from the forwarding plane, and a single control plane controls several forwarding devices:
- Clear control abstraction
- Clear forwarding abstraction
- Clear forwarding behavior
3
SDN: Clean Separation of Concerns
Control program: specifies behavior on an abstract model
- Driven by operator requirements ("policies")
- Operates on a global view of "(virtualized) network graphs"
NOS: maps the global view to physical switches
- API driven by the distributed-state abstraction: network topology, policies via flow rules, etc.
Switch/fabric interface: driven by the forwarding abstraction (match-action rules)
Clean separation of concerns gives the modularity we were seeking: independent innovation, a rich ecosystem, etc.
Innovation is the true value proposition of SDN, by enterprises and by the ecosystem as a whole. There is a tie between abstractions and innovation, and between abstractions and science.
4
SDN vs. Middleboxes/NFV
SDN: policy enforcement based on Layer 2/3/4 headers only.
Middleboxes go further, for example:
- NAT (private network to public network): IP & port rewrite; keeps track of available public IPs & ports
- Content-based server selection: assigns the server IP based on the URL; maintains connection state to support late binding
These require maintaining application-layer state in the data path and accessing packet information beyond the L2-L4 headers.
Can we (i) extend SDN with a "stateful" data plane, or (ii) make NFV "SDN-like" (i.e., composable via APIs)?
5
Network Function Virtualization
Besides software switches, e.g., Open vSwitch (OVS), which implement the OpenFlow forwarding abstraction, network function virtualization (NFV) targets virtualizing the special network functions currently implemented by hardware middleboxes or network appliances.
Examples of middleboxes/network appliances:
- NATs, mobility/location servers
- Firewalls, IDS, IPS
- (Application/content-aware) load balancers
- Web/content caches
- ...
6
NFV ≠ SDN: simply virtualizing hardware middleboxes as software modules does not yield a "software-defined" network.
- Each vNF may still have its own control logic & APIs, manipulating packets in its own manner
- Configuring and orchestrating these virtualized network functions (vNFs) is no less complex or difficult a task!
SDN could potentially make it easier to chain various vNFs together (service steering & service chaining), but current SDN controllers (designed for an OpenFlow-based data plane) only understand Layer 2-4 semantics.
7
Application-awareness via Middleboxes
[Figure: middleboxes (NAT, firewall) alongside forwarding elements under the Network Operating System, network virtualization, and control programs]
8
NFV and SDN: Challenges
[Figure: SDN controller managing a firewall, NAT, and load balancer between private and public IP address spaces]
9
NFV into SDN: Challenges
Firewall policy:
- Limit the number of TCP connections per IP prefix
- Allow reverse-path traffic only if the forward-path connection is established
[Figure: SDN controller managing a firewall, NAT, and load balancer between private and public IP address spaces]
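As a minimal sketch of the two checks this policy needs, assuming hypothetical helpers `conn_count_for_prefix()` and `conn_established()` that consult a connection-tracking table (state a stateless match-action pipeline does not keep, which is exactly the challenge):

```c
#include <stdbool.h>
#include <stdint.h>

#define CONN_LIMIT 1000   /* max TCP connections per source prefix (example value) */

/* Hypothetical connection-tracking lookups; an OpenFlow-only data plane
 * has no equivalent state, which is the point of this slide. */
extern int  conn_count_for_prefix(uint32_t prefix);
extern bool conn_established(uint32_t src_ip, uint32_t dst_ip,
                             uint16_t sport, uint16_t dport);

/* Forward path: admit a new TCP SYN only while the prefix is under its limit. */
bool admit_forward_syn(uint32_t src_prefix)
{
    return conn_count_for_prefix(src_prefix) < CONN_LIMIT;
}

/* Reverse path: admit only if the forward-path connection is established. */
bool admit_reverse(uint32_t src_ip, uint32_t dst_ip, uint16_t sport, uint16_t dport)
{
    /* the reverse packet's (src, dst) are the forward connection's (dst, src) */
    return conn_established(dst_ip, src_ip, dport, sport);
}
```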
10
NFV and SDN: Challenges
Network Address Translation (NAT) configuration:
- IP & port rewrite (port number generated via hashing)
- Keep track of available public IPs & ports
[Figure: SDN controller managing a firewall, NAT, and load balancer between private and public IP address spaces]
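A rough sketch of hash-based public-port allocation for such a NAT; the port pool, hash function, and linear probing are illustrative assumptions, and a real NAT would also store the mapping for reverse translation.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_PUBLIC_PORTS 65536

/* Hypothetical NAT state: which ports on the shared public IP are in use. */
static bool port_in_use[NUM_PUBLIC_PORTS];

/* Pick a public source port by hashing the private 4-tuple, probing past
 * collisions; returns 0 if the pool is exhausted. */
uint16_t allocate_public_port(uint32_t src_ip, uint16_t src_port,
                              uint32_t dst_ip, uint16_t dst_port)
{
    uint32_t h = src_ip ^ dst_ip ^ ((uint32_t)src_port << 16) ^ dst_port;
    for (uint32_t probe = 0; probe < NUM_PUBLIC_PORTS; probe++) {
        uint16_t p = (uint16_t)((h + probe) % NUM_PUBLIC_PORTS);
        if (p >= 1024 && !port_in_use[p]) {   /* skip well-known ports */
            port_in_use[p] = true;
            return p;
        }
    }
    return 0; /* pool exhausted */
}
```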
11
NFV into SDN: Challenges
Layer-7 (application-aware) load balancer policy:
- Assign the server IP based on URL & server load
- Maintain connection state to support late binding
[Figure: SDN controller managing a firewall, NAT, and load balancer between private and public IP address spaces]
12
SDN + Middleboxes Issues
Attribution issue. Policy: block web access to Host1 and Host2. (Topology: Host1-3 -> sw1 -> NAT -> FW -> sw2 -> servers.) The NAT rewrites source addresses, so the firewall can no longer attribute packets to Host1 or Host2.
Dependency issue. Policy: only send suspicious packets to the heavy IDS. (Topology: Host1-3 -> sw1 -> light IDS -> heavy IDS -> sw2 -> servers.) The forwarding decision depends on the light IDS's verdict, which the SDN controller cannot see.
Diagnosis issue. Policy: diagnose latency for Host1, which is experiencing high latency. (Topology: Host1-3 -> sw1 -> NAT -> LB -> sw2 -> servers.) Header rewriting by the NAT and load balancer obscures which path and flow belong to Host1.
Policy violation issue. Policy: block Host3's access to the server. (Topology: Host1-3 -> sw1 -> Proxy -> ACL -> sw2 -> servers.) Host1 accesses content on the server (1), the proxy caches the content (2, 3); Host3 then accesses the cached content (4) and receives the cached response (5), bypassing the ACL.
13
Application-awareness via SDN Controller
Use the SDN controller to implement application processing logic off-path:
- The switch-to-controller delay slows down the data path
- The control plane is not designed to handle every packet, which creates a throughput bottleneck
14
Application-awareness in SDN via NFV
NFV moves physical middleboxes to virtual machines (e.g., on Xen/KVM):
- Reduces infrastructure and personnel cost
- Flexibility to try new services without high cost
- Small businesses can afford more network functions
- Absorbs high loads by starting more VMs
NFV issues:
- Two control planes: the SDN and NFV control planes must be managed separately
- Traffic detouring: extra hops to reach an NF that may be remotely located
- Complex service chaining for network functions
- Inherits middlebox issues: SDN lacks visibility into NF context, making SDN policy enforcement challenging (the same issues as with traditional middleboxes)
15
Key Observations
- The hypervisor already includes a software switch
- High core density per server
- Network functions can be decomposed into primitive functions, e.g.:
  - A firewall is composed of conntrack, connlimit, hashlimit, etc.
  - A NAT box is composed of SNAT and DNAT
  - A load balancer includes L4, L7, NAT, and ACL components
- Functions are redundant between different middleboxes
- Middleboxes can therefore be broken down into smaller components (see the sketch below)
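One way to picture this decomposition, as a minimal sketch: each primitive is a function over a packet, and a "firewall" is just one composition of primitives. The primitive names follow the slide; their signatures and the chain representation are assumptions.

```c
#include <stdbool.h>

struct packet;   /* parsed packet plus per-flow metadata (opaque here) */

/* A primitive inspects/edits the packet and reports whether the rest of
 * the chain should still run (false = drop / stop processing). */
typedef bool (*primitive_fn)(struct packet *pkt);

/* Hypothetical primitives corresponding to the slide's examples. */
extern bool conntrack(struct packet *pkt);
extern bool connlimit(struct packet *pkt);
extern bool hashlimit(struct packet *pkt);

/* A "firewall" is then one possible composition of primitives. */
static primitive_fn firewall_chain[] = { conntrack, connlimit, hashlimit };

bool run_chain(primitive_fn *chain, int n, struct packet *pkt)
{
    for (int i = 0; i < n; i++)
        if (!chain[i](pkt))
            return false;   /* a primitive terminated the flow */
    return true;
}
```

Calling `run_chain(firewall_chain, 3, pkt)` then behaves like the monolithic firewall, while the same primitives can be reused in other middlebox compositions.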
16
Application-aware (Stateful) SDN Data Plane
NEWS: NFV Enablement within the SDN Data Plane
System goals:
- No switch-to-controller delay
- No traffic detouring
- A uniform, central control plane for forwarding and NFs
- Scale-out data plane
- Apply policy closer to the source using loadable modules
17
Application-aware SDN Data Plane
The application-aware SDN controller uses the vSwitch to perform stateful NFs.
[Figure: NEWS SDN controller managing several OVS instances, each co-located with VMs]
18
Open vSwitch Architecture
Design choice: where to intercept the packet and implement the application processing logic?
- Option 1: connection manager. Pros: modular design. Cons: redundant coding, slow (unnecessary encap/decap, redundant flow table)
- Option 2: kernel flow table. Pros: best performance. Cons: hard to implement
- Option 3: user-space flow table. A good tradeoff between options 1 & 2: easy implementation and reasonable performance
[Figure: Open vSwitch architecture with the controller (firewall and load balancer apps), connection manager, OpenFlow API, user-space flow table pipeline, and kernel flow table]
19
NEWS System: Application-aware Stateful SDN Data Plane
- The controller app module keeps global policy/state and pushes it to the corresponding app in the OVS data plane
- The data-plane app module keeps local state and enforces the global policy
[Figure: NEWS controller (firewall and load balancer apps) over Open vSwitch, with the connection manager, OpenFlow API, user-space app table and flow-table pipeline, and kernel flow table]
20
Service Chaining Example
Service chain: Firewall -> Load Balancer -> Forward
App table rule: (dst_ip=x, tcp, dport=80: connlimit(1000), lb, fwd, install)
- Web traffic to server x is admitted only while the number of TCP connections < 1000
- Admitted traffic is sent to server s1 or s2 using hash(src_ip)
21
Example: Firewall & Load Balancer
App table entry: match (dst_ip=x, tcp, dport=80) -> action list (fw, lb, fwd, install)
Packet: src_ip=a, sport=6000, tcp, dst_ip=x, dport=80
- fw: current nFlow = 998 < 1000, so the packet is admitted and nFlow becomes 999; break = false (continue)
- lb: hash(a) = s1, so dst_ip is set to s1 (carried in the per-packet metadata / scratch pad)
- fwd + install: a kernel flow-table entry "src_ip=a, sport=6000, tcp, dst_ip=x, dport=80: set dst_ip=s1, output port 1" is installed so that subsequent packets of this flow bypass the user-space app table
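Read as code, the per-packet walk through this action list might look like the sketch below; the scratch-pad struct, helper names, and port number are illustrative assumptions rather than the NEWS implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-packet scratch pad carried between app actions (names are illustrative). */
struct scratch {
    uint32_t dst_ip;       /* rewritten by the load balancer               */
    uint16_t out_port;
    bool     stop;         /* "break" flag: later actions must not run     */
};

extern int      tcp_conn_count;                 /* firewall state (nFlow)      */
extern uint32_t backend_for_hash(uint32_t h);   /* hypothetical: picks s1/s2   */
extern void     install_kernel_flow(uint32_t src_ip, uint16_t sport,
                                    uint32_t dst_ip, uint16_t dport,
                                    uint32_t new_dst, uint16_t out_port);

/* Action list "fw, lb, fwd, install" applied to one packet of the flow. */
void apply_actions(uint32_t src_ip, uint16_t sport,
                   uint32_t dst_ip, uint16_t dport, struct scratch *md)
{
    /* fw: connlimit(1000) */
    if (tcp_conn_count >= 1000) { md->stop = true; return; }
    tcp_conn_count++;                      /* nFlow: 998 -> 999 in the example */

    /* lb: pick a backend server by hashing the client address */
    md->dst_ip = backend_for_hash(src_ip); /* hash(a) = s1 in the example      */

    /* fwd: choose the output port toward the selected server */
    md->out_port = 1;

    /* install: cache the rewrite in the kernel flow table so subsequent
     * packets of this flow bypass the user-space app table. */
    install_kernel_flow(src_ip, sport, dst_ip, dport, md->dst_ip, md->out_port);
}
```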
22
NEWS: Loadable App Actions
- Apps are implemented as minimal C dynamic libraries
- The SDN controller dynamically loads/unloads apps according to chaining policies
- More OVS instances are spawned if the load is high
- All apps implement the same interface:
  - init: initialize app state according to the available number of threads
  - xlate_actions: update the flow caches according to the NF state
  - destroy: clean internal state before unloading the module
This approach avoids steering packets between different network functions, thereby reducing packet copies and unnecessary complexity. A sketch of such a loadable interface follows.
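The init/xlate_actions/destroy interface is named in the slide; everything else below (the ops struct, exported symbol name, and exact signatures) is an assumption made to illustrate how a controller-driven load/unload of a C dynamic library could work.

```c
#include <dlfcn.h>
#include <stdio.h>

/* Assumed ops table every NEWS app module exports (symbol name hypothetical). */
struct news_app_ops {
    int  (*init)(int n_threads);                    /* set up per-thread app state       */
    void (*xlate_actions)(void *flow, void *state); /* update flow caches from NF state  */
    void (*destroy)(void);                          /* clean internal state before unload */
};

/* Controller-driven load of an app module into the switch process. */
struct news_app_ops *load_app(const char *path, void **handle_out)
{
    void *handle = dlopen(path, RTLD_NOW);
    if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return NULL; }

    /* assume each module exports its ops table under a well-known symbol */
    struct news_app_ops *ops = dlsym(handle, "news_app_ops");
    if (!ops) { dlclose(handle); return NULL; }

    *handle_out = handle;
    return ops;
}

void unload_app(struct news_app_ops *ops, void *handle)
{
    if (ops && ops->destroy) ops->destroy();  /* clean up before unloading */
    dlclose(handle);
}
```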
23
NEWS: Advantages
- Placement of NFs: the controller installs application modules at the switches using custom OpenFlow vendor messages; each app module is responsible for a specific network function on the flow
- Chaining of NFs: NEWS creates logical chains of NFs instead of the physical chaining used today, where packets are routed through individual NF instances (virtual or physical). Within a switch, a logical chain of app modules is created for each flow that requires a network service, and flow-matching rules activate the chain appropriate for the flow
- Scalable deployment: scalability and elasticity are achieved by dynamically configuring the number of switches supporting a specific network service and sending flows to the configured instances using ECMP
- Dynamic service creation: app modules are implemented as dynamically loadable software libraries at the switch; the NEWS SDN controller is in charge of app-module activation at the switches
24
Service Function Chains: total latency increases with the length of the SFC
Motivation: given a specific service function chain, packets typically need to traverse each VNF sequentially. In this example, packets go through a VPN gateway, an IDS, a traffic shaper, and then a router, so the latency of the service function chain grows with its length.
[Figure: sequential SFC of VPN Gateway -> IDS -> Traffic Shaper -> Router, with latency accumulating along the chain]
Total latency increases with the length of the SFC.
25
Exploiting Parallel Packet Processing Across VNFs
[Figure: serial structure VPN Gateway -> IDS -> Traffic Shaper -> Router vs. hybrid structure (sequential + parallel) in which the IDS and traffic shaper run in parallel between the VPN gateway and the router]
Motivated by this example, we exploit parallel packet processing across VNFs. More precisely, we propose a hybrid structure that combines sequential and parallel packet processing. For instance, the IDS and the traffic shaper can process packets in parallel here, because both typically only read packets and neither modifies them. The goal is to reduce SFC latency by sending data packets to VNFs in parallel rather than strictly in sequence.
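As a rough back-of-the-envelope model (assuming the mirror/merge overhead introduced by parallelization is negligible), if NF i adds latency ℓ_i and the hybrid chain groups NFs into parallel stages S_j:

```latex
% Serial chain: per-NF latencies add up.
L_{\mathrm{serial}} = \sum_{i=1}^{n} \ell_i
\qquad
% Hybrid chain: within each parallel stage S_j only the slowest NF matters.
L_{\mathrm{hybrid}} = \sum_{j} \max_{i \in S_j} \ell_i
% Example from the slide:
% \ell_{\mathrm{VPN}} + \max(\ell_{\mathrm{IDS}}, \ell_{\mathrm{shaper}}) + \ell_{\mathrm{router}}
% \;\le\; \ell_{\mathrm{VPN}} + \ell_{\mathrm{IDS}} + \ell_{\mathrm{shaper}} + \ell_{\mathrm{router}}
```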
26
Parallelize or not? SFC Order Dependency
Dependency types that determine whether VNFs can be parallelized:
- Read/Write: e.g., NAT writes before IDS reads
- Termination: e.g., a firewall drops the packet
- Merge/Split: e.g., a WAN optimizer merges multiple packets
- Redirection: e.g., a load balancer selects the next VNF instance

Whether two VNFs can be put in parallel depends on their packet manipulations and their order in the chain. Consider the Read/Write case: in the sequential chain IDS -> NAT, the IDS scans all traffic from a given source IP and the NAT then rewrites the source IP. The IDS only reads the header and payload while the NAT rewrites a few header fields, so their operations do not conflict and the two VNFs can run in parallel. In the reverse order, NAT -> IDS, they cannot be parallelized, because the IDS policy may rely on the NATed source IP; packets must go through the NAT first unless the IDS policies have no dependency on the NAT output.

We analyzed several commonly used VNFs and summarized which pairs can be parallelized in a table in our paper, assuming the VNF administrator is not aware of policy dependencies. If the administrator crafts policies carefully (e.g., based on the original IP rather than the NATed IP), even more pairs, including NAT followed by other VNFs, can be parallelized.
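A conservative version of this dependency check can be phrased as a read/write-set conflict test; the region granularity, profile flags, and function below are a sketch under that assumption, not ParaBox's actual analysis.

```c
#include <stdbool.h>
#include <stdint.h>

/* Bitmask of packet regions a VNF reads or writes (illustrative granularity). */
enum { HDR_SRC_IP = 1 << 0, HDR_DST_IP = 1 << 1, HDR_PORTS = 1 << 2,
       PAYLOAD    = 1 << 3 };

struct vnf_profile {
    uint32_t reads, writes;
    bool may_drop;        /* termination, e.g. firewall      */
    bool may_merge_split; /* e.g. WAN optimizer              */
    bool may_redirect;    /* e.g. LB choosing next instance  */
};

/* Can "first" followed by "second" (in SFC order) instead run in parallel on
 * mirrored copies of the packet?  Conservative conflict check. */
bool parallelizable(const struct vnf_profile *first,
                    const struct vnf_profile *second)
{
    if (first->may_drop || first->may_merge_split || first->may_redirect)
        return false;
    /* If the downstream NF reads fields the upstream one writes, or both
     * write the same fields, the mirrored copies would diverge. */
    if (first->writes & (second->reads | second->writes))
        return false;
    return true;
}

/* Example: IDS (reads header + payload, writes nothing) -> NAT (writes the
 * source IP/ports) passes the check; NAT -> IDS fails, because the IDS reads
 * what the NAT writes. */
```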
27
Parallelize or not? SFC Order Dependency
However, if traffic goes through the NAT first and then the IDS, naively parallelizing the SFC causes problems: the NAT still rewrites the source IP, while the IDS scans traffic based on the NATed IP, so the IDS policy is no longer effective on the unmodified copy it receives. In our paper we propose ParaBox-aware policies: if the VNF administrator modifies the IDS policies accordingly, a NAT -> IDS chain can still be parallelized.
28
ParaBox System Overview
Requirements:
- Correctness
- Lightweight
- No modification to VNFs
System design: based on these requirements, we implement three major functions:
- A dependency analysis function in the controller, which converts a sequential chain into a hybrid chain based on the dependency analysis above
- Mirror & merge functions in the data plane, which consume the hybrid chains fed by the controller to support parallel packet processing
[Figure: ParaBox architecture with a controller (steering policies, dependency analysis) configuring a DPDK-enabled software switch that mirrors packets to user-space VNFs and merges the results, using a packet-steering table and per-packet merge state tables (FID, SC, MNF / PID, PKTR, BUF, CNT, TMO)]
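A minimal sketch of the merge side of mirror & merge, loosely modeled on the (PID, PKTR, BUF, CNT, TMO) columns in the figure; the field names, buffer handling, and diff-range interface are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Per-mirrored-packet merge state (names roughly follow the slide's columns). */
struct merge_entry {
    uint64_t pkt_id;        /* PID: identifies the original packet             */
    uint8_t  expected;      /* number of copies mirrored out in parallel       */
    uint8_t  returned;      /* CNT: copies that have come back so far          */
    uint8_t  buf[2048];     /* BUF: running merged packet contents             */
    uint64_t deadline_ns;   /* TMO: flush/drop if a parallel NF is too slow    */
};

/* Called when one parallel NF hands its copy back to the switch.
 * diff_off/diff_len describe the byte range that this NF modified
 * (a real merge would track modified regions per NF, e.g. via masks).
 * Returns true when every copy in the stage has returned and the merged
 * packet can move on to the next SFC stage. */
bool merge_copy(struct merge_entry *e, const uint8_t *copy,
                uint32_t diff_off, uint32_t diff_len)
{
    /* Fold this NF's modifications into the merged packet buffer. */
    memcpy(e->buf + diff_off, copy + diff_off, diff_len);

    return ++e->returned == e->expected;
}
```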
29
Summary: NEWS enables stateful network functions within the SDN data plane by
- Decomposing NFs into primitive functions
- Loading primitive functions closer to the source/destination
- Pushing state from the SDN controller to the data-plane functions
VMware/Nicira recently adopted a similar approach for Open vSwitch.
30
ParaBox: Parallelizing Service Function Chains (SFCs)
Operators set up SFCs to meet traffic-processing policies.
[Figure: service function chain of a VPN gateway, IDS, traffic shaper, and router]
Total latency increases with the length of the SFC!
31
ParaBox: Parallelizing Service Function Chains (SFCs)
Speed up SFC processing by exploiting parallelism.
[Figure: serial structure VPN Gateway -> IDS -> Traffic Shaper -> Router vs. hybrid structure (sequential + parallel) with the IDS and traffic shaper in parallel]
Reduce SFC latency by sending data packets to VNFs in parallel.
Research in collaboration with AT&T Labs Research.
32
ParaBox: System Overview
Requirements: correctness, lightweight, no modification to VNFs.
System design: a dependency analysis function (with steering policies) in the controller, plus mirror & merge functions in the data plane.
[Figure: ParaBox architecture as above, with the DPDK-enabled software switch running on Linux/KVM]
Implemented on top of BESS, developed at UC Berkeley.
33
Thanks
34
Evaluation: NEWS vs. iptables
[Figures: firewall performance with small flows (1 KB) and with large flows (10 MB)]
NEWS performance is very close to that of native Linux containers.
35
Evaluation: NEWS vs. conntrack
[Figures: latency for small (1 KB) and medium (100 KB) flows; goodput for large (10 MB) flows]
NEWS performance is very close to OVS conntrack and better for large flows, because NEWS does not hit the connection-tracking table for every packet the way OVS does.
36
Evaluation: Content-aware Server Selection
[Figure: clients C1 (abc.com/img.jpg) and C2 (abc.com/video.mpg) reach a front-end vSwitch, which selects the image server or the video server behind a back-end vSwitch]
NEWS: 3-way TCP handshake with the client, set the server IP (DNAT), TCP handshake with the selected server, then SNAT and TCP splicing for the return traffic.
- Return traffic does not have to go through the front-end vSwitch
- Both the front-end & back-end vSwitches can be scaled out independently
HAProxy baseline: the front-end vSwitch is replaced with an HAProxy load balancer.
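Because the client-facing handshake and the server-facing handshake use different initial sequence numbers, the return-path rewrite must combine SNAT with a sequence-number shift. The sketch below illustrates that idea only; the per-connection state, field names, and omitted checksum/ack handling are assumptions, not the NEWS data path.

```c
#include <stdint.h>
#include <arpa/inet.h>

/* Illustrative per-spliced-connection state kept by the back-end vSwitch. */
struct splice {
    uint32_t vip;          /* virtual IP the client originally connected to     */
    int32_t  seq_delta;    /* server ISN minus the ISN offered to the client    */
};

/* Minimal view of the TCP header fields we rewrite. */
struct tcp_hdr {
    uint16_t sport, dport;
    uint32_t seq, ack;
    /* ... offset, flags, checksum omitted ... */
};

/* Return-path rewrite: SNAT the source back to the VIP and shift the sequence
 * number so the client sees one continuous byte stream.  This sketch assumes
 * the client's own sequence numbers were replayed unchanged toward the server
 * (so the ack field needs no shift) and omits the mandatory checksum fixup. */
void splice_return_path(const struct splice *s, uint32_t *ip_src,
                        struct tcp_hdr *tcp)
{
    *ip_src  = s->vip;                                            /* SNAT      */
    tcp->seq = htonl(ntohl(tcp->seq) - (uint32_t)s->seq_delta);   /* seq shift */
}
```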
37
Evaluation: Content-aware Server Selection
Flow completion time
38
Evaluation: Content-aware Server Selection
[Figures: CPU load at the front switch and at the back switch]
NEWS does significantly better at the front switch, since HAProxy does more work per flow in its epoll event loop. NEWS does more work at the back switch because of the extra TCP handshake with the selected server.