VICCI: Programmable Cloud Computing Research Testbed
Andy Bavier, Princeton University
November 3, 2011
VICCI Overview

Support research in:
– Design, provisioning, and management of a global, multi-datacenter infrastructure (the Cloud)
– Design and deployment of large-scale distributed services in a Cloud environment

Compute clusters and networking hardware
Bootstrapped using MyPLC software
Project begun late 2010

November 3, 2011, GEC12
Enabling Research

A realistic environment for deployment studies

Building block services
– Replication, consistency, fault tolerance, scalable performance, object location, migration

New Cloud programming models
– Targeted application domains, e.g., virtual worlds or managing personal data

Cross-cutting foundational issues
– Managing the network within and between data centers
– A trusted cloud platform to ensure confidentiality
Building Block Services

– Harmony: consistent DHT for Cloud applications
– Syndicate: global, content-oriented filesystem
– CRAQ: key-value store with linearizable operations
– Prophecy: Byzantine fault-tolerant replicated state machines
– Serval: dynamic service-centric network routing
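Several of these services (Harmony in particular) build on a distributed hash table, whose core is a consistent-hashing ring: keys and nodes hash onto the same circular space, and each key is owned by the first node clockwise from it. The sketch below illustrates that idea only; it is not the Harmony implementation, and all names are illustrative.

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Minimal consistent-hashing ring, the structure underlying DHTs
    like Harmony (illustrative sketch, not the actual service)."""

    def __init__(self, nodes=(), replicas=16):
        self.replicas = replicas  # virtual nodes per physical node
        self.ring = {}            # hash position -> node name
        self.positions = []       # sorted hash positions
        for node in nodes:
            self.add(node)

    def _hash(self, s):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    def add(self, node):
        # Place several virtual nodes to smooth out the key distribution.
        for i in range(self.replicas):
            pos = self._hash(f"{node}:{i}")
            self.ring[pos] = node
            self.positions.append(pos)
        self.positions.sort()

    def lookup(self, key):
        # The key's owner is the first node clockwise from its hash.
        if not self.positions:
            raise KeyError("empty ring")
        idx = bisect(self.positions, self._hash(key)) % len(self.positions)
        return self.ring[self.positions[idx]]
```

When a node joins or leaves, only the keys between it and its ring neighbor change owners, which is why DHTs scale and tolerate churn.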
Cloud Programming Models

Virtual Worlds
– Issues: federation, expansibility, scalability, migration, security
– Cooperative but not necessarily collaborative
– Application: Meru

Rhizoma
– A Cloud for personal applications
– Issues: resource acquisition, maintaining inter-device connectivity
Cross-Cutting Issues

Tolerating and detecting faults
– Zeno: a BFT protocol with high availability
– Accountable virtual machines

Networking issues
– Simple datacenter networks with static multipath routing
– Multipath routing to improve reliability and load balance
– Peering on demand

Trusted Cloud Computing Platform
– Confidentiality and integrity of Cloud computations
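Static multipath routing of the kind listed above typically spreads flows across several equal-cost paths by hashing the flow's 5-tuple, so each flow sticks to one path while the aggregate load balances with no per-flow state in the switches. A minimal sketch (flow format and path names are illustrative):

```python
import hashlib

def ecmp_path(flow, paths):
    """Pick one of several equal-cost paths by hashing the flow 5-tuple.
    Per-flow stickiness avoids packet reordering; hashing spreads the
    aggregate load (sketch of the ECMP idea, not a specific product)."""
    digest = hashlib.md5(repr(flow).encode()).hexdigest()
    return paths[int(digest, 16) % len(paths)]

# 5-tuple: (src IP, dst IP, protocol, src port, dst port)
flow = ("10.0.0.1", "10.0.1.2", 6, 41000, 80)
paths = ["via-spine1", "via-spine2", "via-spine3", "via-spine4"]
chosen = ecmp_path(flow, paths)
```

Because the choice is a pure function of the flow tuple, every switch along the way makes the same decision without coordination.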
VICCI Facility

Hardware
– 7 geographically dispersed compute clusters
  US: Seattle WA, Palo Alto CA, Princeton NJ, Atlanta GA
  Europe: Saarbrücken (Germany), Zurich (Switzerland)
  Asia: Tokyo (Japan)
– 70 x 12-core Intel Xeon servers with 48 GB RAM
– 4 OpenFlow-enabled switches
– 1 Gbps connectivity between clusters, 10 Mbps to the Internet

Software
– Lightweight virtual machines
– Remote management software for creating, provisioning, and controlling distributed VMs
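Since VICCI is bootstrapped from MyPLC, its remote management presumably follows the PLCAPI pattern: an XML-RPC endpoint where a "slice" (a named set of VMs across nodes) is created and then bound to nodes. AddSlice and AddSliceToNodes are standard PLCAPI methods, but the endpoint URL, slice fields, and hostnames below are illustrative assumptions, not VICCI's documented interface.

```python
import xmlrpc.client  # standard client for a MyPLC-style XML-RPC API

def make_auth(username, password):
    # PLCAPI-style password authentication record
    return {"AuthMethod": "password",
            "Username": username,
            "AuthString": password}

def deploy_slice(api, auth, slice_name, hostnames):
    """Create a slice and instantiate it on the given nodes.
    Sketch only: field values and the endpoint are assumptions."""
    slice_id = api.AddSlice(auth, {"name": slice_name,
                                   "description": "example slice",
                                   "url": "http://example.org"})
    api.AddSliceToNodes(auth, slice_id, hostnames)
    return slice_id

# Hypothetical usage against a live endpoint:
# api = xmlrpc.client.ServerProxy("https://plc.example.org/PLCAPI/",
#                                 allow_none=True)
# deploy_slice(api, make_auth("user", "secret"),
#              "princeton_demo", ["node1.example.org"])
```

The same two calls, repeated per cluster, would give a researcher VMs across all seven sites from a single management host.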
Developing VICCI: PlanetLab vs. VICCI extensions

Node Virtualization
– PlanetLab: only container-based VMs (Vserver)
– VICCI: support Xen, KVM, OpenVZ, Linux Containers

Network Virtualization
– PlanetLab: IP connectivity; does not manage the local network
– VICCI: use OpenFlow switches to manage intra-cluster traffic on a per-service/application basis

Bandwidth Management
– PlanetLab: limits bandwidth on a per-node basis
– VICCI: limit bandwidth on a per-cluster basis using distributed rate limiting

Resource Allocation
– PlanetLab: best-effort sharing of available resources
– VICCI: resource guarantees (e.g., reserve CPU cores for VMs)

Cluster Support
– PlanetLab: all nodes talk to PlanetLab Central
– VICCI: Site Manager will configure and manage a cluster of nodes as a unit
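The per-cluster bandwidth row above relies on distributed rate limiting: a single cluster-wide limit is divided among nodes in proportion to their recent demand, so the cluster behaves as if it sat behind one shared pipe. The sketch below shows only the apportionment step; the actual technique gossips demand estimates between per-node limiters rather than computing the split centrally.

```python
def reapportion(cluster_limit_mbps, demands_mbps):
    """Split one per-cluster bandwidth limit across nodes in proportion
    to each node's recently observed demand (sketch of the distributed
    rate limiting idea; node names and units are illustrative)."""
    total = sum(demands_mbps.values())
    if total == 0:
        # No demand anywhere: fall back to an equal split.
        equal = cluster_limit_mbps / len(demands_mbps)
        return {node: equal for node in demands_mbps}
    return {node: cluster_limit_mbps * demand / total
            for node, demand in demands_mbps.items()}
```

Each node then enforces its share locally (e.g., with a token bucket), and idle nodes automatically cede headroom to busy ones at the next reapportionment.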
Questions

Does VICCI have value to the GENI research community?
– Resources: PlanetLab + OpenFlow clusters
– Experimental building block services

VICCI URL: