Distributed FlowVisor: a distributed FlowVisor platform for quality of service aware cloud network virtualisation
Introduction
- QoS requirements in the cloud environment
- Scalability in the cloud
- Current approaches: link virtualisation, overlaying
- OpenFlow protocol to support fine-grained QoS
- Adapting FlowVisor for the cloud network to support scalability
Issues in the current FlowVisor
- OpenFlow 1.0 can support only 4096 VNs, limited by the 12-bit VLAN ID field (illustrated in the sketch below)
- Space-inefficient OpenFlow flow tables
- Pull-based flow setup and statistics gathering mechanisms
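As a quick illustration of the tag-space limit (not code from the paper): a 12-bit VLAN ID admits only 2^12 = 4096 distinct values, so slicing virtual networks by VLAN tag exhausts the space at 4096 VNs. The allocator below is a hypothetical sketch of that constraint:

```python
# Illustrative only: a 12-bit VLAN ID caps the number of virtual
# networks at 4096 when each VN is identified by a distinct VLAN tag.
VLAN_ID_BITS = 12
MAX_VNS = 2 ** VLAN_ID_BITS          # 4096 distinct tags (0-4095)

def allocate_vlan_id(in_use: set[int]) -> int:
    """Return an unused VLAN ID, or fail once the tag space is exhausted."""
    for vid in range(MAX_VNS):
        if vid not in in_use:
            in_use.add(vid)
            return vid
    raise RuntimeError("VLAN ID space exhausted: at most 4096 VNs")
```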
Issues in the current FlowVisor (contd.)
- Shared control channel for statistics gathering and flow setup
- Individual controllers in FlowVisor make network configuration and management difficult
- The centralised FlowVisor acts as a single point of failure in the network
Distributed FlowVisor Approach: Layered Overlay Mechanism
- Multiple OpenFlow overlays on top of a single common overlay
- The common overlay supports network virtualisation via GRE tunnelling (see the sketch after this list)
- The OpenFlow overlays support fine-grained QoS
- Extends the OpenFlow switch specification
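A minimal sketch of how a common-overlay link could be set up with GRE tunnelling, using the standard Linux iproute2 tools; the interface name and addresses are hypothetical, and this is not the paper's implementation:

```python
# Sketch: build one GRE tunnel endpoint of the common overlay via
# iproute2. All names and addresses below are made up for illustration.
import subprocess

def create_gre_tunnel(name: str, local_ip: str, remote_ip: str,
                      overlay_addr: str) -> None:
    """Create a GRE tunnel endpoint and assign it an overlay address."""
    run = lambda cmd: subprocess.run(cmd.split(), check=True)
    run(f"ip tunnel add {name} mode gre local {local_ip} remote {remote_ip} ttl 64")
    run(f"ip addr add {overlay_addr} dev {name}")
    run(f"ip link set {name} up")

# Example: a point-to-point overlay link between two hypervisor hosts.
# create_gre_tunnel("gre-vn7", "10.0.0.1", "10.0.0.2", "192.168.7.1/24")
```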
Distributed FlowVisor Approach: Distributed Synchronised Two-Level Database System
- Global level: stores network configuration and information
- Local level: each switch and VN controller (VNC) maintains a copy of its part of the global database
- Apache ZooKeeper provides synchronisation
Push-based mechanisms
- The global database and synchronisation mechanism facilitate push-based flow setup (see the sketch after this list)
- Dedicated data channel for statistics gathering
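A minimal sketch, assuming the kazoo Python client for ZooKeeper, of how a local database copy could stay synchronised with the global level: ZooKeeper watches push changes to the local side, which is what enables push-based flow setup instead of polling. The znode path and the update handling are hypothetical:

```python
# Sketch: mirror part of the global ZooKeeper database locally and
# react to pushed updates. Paths under /dfvisor are hypothetical.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

LOCAL_COPY = {}  # the switch/VNC-local mirror of its slice of the global DB

def sync_node(path: str) -> None:
    """Keep a local copy of one global-database entry; ZooKeeper pushes
    changes to the watcher, so no periodic polling is needed."""
    @zk.DataWatch(path)
    def on_update(data, stat):
        if data is not None:
            LOCAL_COPY[path] = data.decode()
            # e.g. trigger push-based flow setup from the new entry here

# Mirror the flow entries this VNC is responsible for (hypothetical path).
sync_node("/dfvisor/vns/vn7/flows")
```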
Distributed FlowVisor Approach: Local FlowVisor Module for Each VN Controller
- FlowVisor logic is distributed across the VNCs as local modules sitting between the NOX core and the switch controller module (see the sketch after this list)
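An illustrative sketch, not the authors' code, of the kind of slicing check such a local module could apply to messages passing between the NOX core and the switch controller module; the flowspace fields and prefix match are hypothetical simplifications:

```python
# Sketch: a local FlowVisor module forwards a controller message to the
# switch side only if it falls within the VN's permitted flowspace.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowSpace:
    """The header space one virtual network is allowed to control."""
    vlan_id: int
    ip_prefix: str  # crude textual prefix match, e.g. "192.168.7."

def within_flowspace(fs: FlowSpace, vlan_id: int, ip_dst: str) -> bool:
    return vlan_id == fs.vlan_id and ip_dst.startswith(fs.ip_prefix)

def intercept_flow_mod(fs: FlowSpace, vlan_id: int, ip_dst: str, msg: bytes):
    """Sits between the NOX core and the switch controller module."""
    if within_flowspace(fs, vlan_id, ip_dst):
        return msg          # pass the flow_mod down to the switch
    raise PermissionError("flow_mod outside this VN's flowspace")
```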
DFVisor Architecture Overview
- ZooKeeper’s file-system-like hierarchical namespace stores the network configuration (one possible layout is sketched below)
- Enhanced CPqD OpenFlow 1.3 softswitch and open-source NOX 1.3 oflib-based VNC
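A minimal sketch, again assuming kazoo, of one possible configuration layout in ZooKeeper’s hierarchical namespace; the znode paths and values under /dfvisor are hypothetical, not taken from the paper:

```python
# Sketch: populate a hierarchical namespace with network configuration.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181")
zk.start()

for path, value in [
    ("/dfvisor/switches/sw1", b"dpid=00:00:00:00:00:01"),
    ("/dfvisor/vns/vn7/tunnel", b"gre key=7"),
    ("/dfvisor/vns/vn7/qos", b"min_rate=10Mbps"),
]:
    zk.ensure_path(path)          # create intermediate znodes as needed
    zk.set(path, value)           # store the configuration entry
```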