Computer Virtualization from a Network Perspective
Jose Carlos Luna Duran - IT/CS/CT
CERN IT Department, CH-1211 Genève 23, Switzerland (www.cern.ch/it)

Agenda
– Introduction to data center networking
– Impact of virtualization on networks
– VM network management

Part I: Introduction to Data Center Networking

Data Centers
Typical small data center: Layer 2 based.

Layer 2 Data Center
A flat Layer 2 Ethernet network: a single broadcast domain.
Appropriate when:
– Network traffic is very localized.
– A single team is responsible for the whole infrastructure.
But…
– The uplink is shared by a large number of hosts.
– Noise from other nodes (broadcasts): problems may affect the whole infrastructure.

Data Center L2: Limitations

Data Center L3: Layer 3 Data Center

Data Center L3
Advantages:
– Broadcasts are contained within a small area (subnet).
– Easier management and network debugging.
– Promotes "fair" networking (all point-to-point services are equally important).
But…
– Fragmentation of the IP space.
– Moving from one area (subnet) to another requires an IP change.
– Needs a high-performance backbone.

[Figure: CERN network backbone topology covering the Meyrin, Prévessin and LHC areas, including the Computer Center, vaults, farms and Internet links; link legend: Gigabit, Multi Gigabit, 10 Gigabit, Multi 10 Gigabit.]

CERN Network
Highly routed (L3 centred):
– In the past, several studies were done on localizing services -> very heterogeneous behaviour: it did not work out.
– Promotes small subnets (typical size: 64).
– Switch-to-router uplinks: 10 Gb.
Numbers:
– 150+ routers, Gb ports
– 2500+ switches, Gb user ports
– End nodes (physical user devices)
– 140 Gbps WAN connectivity (Tier 0 to Tier 1) plus 20 Gbps general Internet
– 4.8 Tbps in the LCG backbone core

Part II: Impact of Virtualization on Networks

Types of VM Connectivity
Virtual machine hypervisors offer different connectivity solutions (a bridged-NIC sketch follows below):
Bridged
– The virtual machine has its own addresses (IP and MAC).
– Seen from the network as a separate machine.
– Needed when incoming IP connectivity is necessary.
NAT
– Uses the address of the HOST system (the VM is invisible to us).
– Provides off-site connectivity using the IP of the hypervisor.
– NAT is currently not allowed at CERN (for debugging and traceability reasons).
Host-only
– The VM has no connectivity with the outside world.
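To make the bridged case concrete, here is a minimal sketch using the libvirt Python bindings on a KVM hypervisor; the bridge name "br0", the guest name "myvm" and the use of libvirt itself are assumptions for illustration, not part of the original slides.

```python
# Minimal sketch, assuming a KVM hypervisor managed through libvirt-python,
# an existing Linux bridge "br0" and a defined guest called "myvm" (assumed names).
# A bridged NIC gives the guest its own MAC (and hence its own IP), so the
# network sees it as a separate machine, as described above.
import libvirt

BRIDGED_NIC_XML = """
<interface type='bridge'>
  <source bridge='br0'/>     <!-- guest traffic enters the LAN with the guest's own MAC -->
  <model type='virtio'/>
</interface>
"""

conn = libvirt.open("qemu:///system")            # connect to the local hypervisor
dom = conn.lookupByName("myvm")                  # the guest to reconfigure
dom.attachDeviceFlags(BRIDGED_NIC_XML,           # add the NIC to the persistent config
                      libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```

With NAT or host-only connectivity, libvirt would instead attach the guest to a virtual network (for example its default NAT network or an isolated one), so only the hypervisor's address would be visible from outside.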

Bridged and IPv4
For bridged connectivity, the reality is this:

Bridged and IPv4
Observed by us as this:

Bridged and IPv4 (II)
It is just the same as a physical machine, and therefore should be treated as such!
Two possibilities for addressing:
– Private addressing
  Only on-site connectivity.
  No direct off-site (NO INTERNET) connectivity.
– Public addressing: the best option, but…
  Needs a public IPv4 address, and IPv4 addresses are limited.
  IPv4 addresses are allocated in the form of subnets (no single IPv4 addresses scattered around the infrastructure) -> fragmentation -> use them wisely and fully (see the subnet sketch below).
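The "subnets, not single addresses" point can be illustrated with Python's ipaddress module; the 172.16.0.0/22 block and the per-service VM counts below are illustrative assumptions, not actual CERN allocations, and the /26 size matches the "typical size: 64" mentioned earlier.

```python
# Minimal sketch: IPv4 space is handed out as whole subnets, so a half-empty
# subnet still consumes all of its addresses from the pool (fragmentation).
# The block and the number of VMs per service are illustrative assumptions.
import ipaddress

block = ipaddress.ip_network("172.16.0.0/22")      # 1024 addresses in the pool (assumed block)
subnets = list(block.subnets(new_prefix=26))       # carve it into /26s: 64 addresses each

vms_per_service = [60, 9, 3]                       # three services asking for addresses
for subnet, used in zip(subnets, vms_per_service):
    usable = subnet.num_addresses - 2              # minus network and broadcast addresses
    print(f"{subnet}: {used}/{usable} used ({used / usable:.0%})")

# The 9-VM and 3-VM services each still consume a full /26 from the pool,
# hence the recommendation: fill subnets wisely and fully.
```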

Why not IPv6?
There is no address space problem, but:
– ALL computers that the guest wants to contact would have to use IPv6 to have connectivity.
– An IPv6 "island" would not solve the problem: if these machines need IPv4 connectivity, IPv6-to-IPv4 conversion is necessary, and if each IPv6 address has to be mapped to one IPv4 address we hit the same limitations as with IPv4.
– All applications running in the VM would have to be IPv6 compatible.

Private Addressing
Go for it whenever possible! (The space is not as limited as with public addresses.)
But… no direct off-site connectivity (perfect for the hypervisors!).
It all depends on the use case for the VM.

NAT
Currently not allowed at CERN: traceability…
Where would NAT sit? (see the translation-table sketch below)
– In the hypervisor:
  No network port in the VM would be reachable from outside.
  Debugging network problems for VMs would be impossible.
– Private addressing in the VM and NAT at the Internet gate:
  Would allow incoming on-site connectivity.
  No box is capable of handling 10 Gb+ of bandwidth.
– At the distribution layer (access to the core):
  Same as above, plus an even larger number of high-speed NAT engines required.
No path redundancy is possible with NAT!
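The traceability concern can be illustrated with a toy sketch of the state a NAT box has to keep: unless every translation is logged, the original (private) source of a flow cannot be recovered afterwards. This is a generic illustration with example addresses, not a description of any CERN device.

```python
# Toy sketch of NAT state: many private (VM) sources are multiplexed behind one
# public address, so only the live translation table (or a log of it) can tell
# which VM was behind a given public port at a given time. Addresses are examples.
import itertools

PUBLIC_IP = "192.0.2.10"                 # documentation-range address, stands for the NAT box
_ports = itertools.count(20000)          # next free public source port
nat_table = {}                           # public_port -> (private_ip, private_port)

def translate(private_ip: str, private_port: int) -> tuple[str, int]:
    """Allocate a public (ip, port) for an outgoing flow and remember the mapping."""
    public_port = next(_ports)
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

# Two different VMs end up sharing the same public IP:
print(translate("10.0.1.21", 44321))     # ('192.0.2.10', 20000)
print(translate("10.0.1.37", 51877))     # ('192.0.2.10', 20001)

# Without keeping nat_table over time, a report quoting 192.0.2.10:20001 cannot be
# traced back to 10.0.1.37 -- the traceability problem mentioned above.
```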

Recommendations
– Everything depends on the behaviour of the VM and its intended usage.
– Public addresses are a scarce resource; they can be provided if limited in number.
– Use private addressing if there is no special need beyond the use of local on-site resources.

Part III: VM Network Management

CS Proposed Solutions
For desktops:
– Desktops are not servers, therefore…
– NAT in the hypervisor is proposed: the person responsible for the hypervisor is the same as the one responsible for the VMs.
VMs as a service (servers, batch, etc.):
– For large numbers of VMs (farms).
– Private addressing is preferred.
– VMs should not be scattered around the physical infrastructure.
– Creation of the "VM Cluster" concept.

VM Clusters
A VM Cluster is a separate set of subnets running on the SAME contiguous physical infrastructure:

VM Clusters

VM Clusters

VM Cluster Advantages
– Allows us to move the full virtualized infrastructure (without changing the VMs' IP addresses) in case of need.
– Delegates full allocation of network resources to the VM Cluster owner.
– All combinations are possible:
  Hypervisors on public or private addresses (private preferred).
  VM subnet 1 public or private.
  VM subnet 2 public or private.
– Migration within the same VM subnet to any host in the same VM Cluster is possible (a data-model sketch follows below).
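A hypothetical data model makes the migration constraint explicit: a VM keeps its IP as long as it stays on a hypervisor of the same VM Cluster, because its VM subnet is routed to the whole cluster; moving it elsewhere means a new IP. All class names, hostnames and subnets below are illustrative assumptions, not LANDB's actual schema.

```python
# Minimal sketch of the VM Cluster idea: one contiguous physical infrastructure
# (a set of hypervisors) plus one or more VM subnets routed to it.
import ipaddress
from dataclasses import dataclass, field

@dataclass
class VMCluster:
    name: str
    hypervisors: set[str]                                    # hostnames of the physical nodes
    vm_subnets: list[ipaddress.IPv4Network] = field(default_factory=list)

    def can_keep_ip(self, vm_ip: str, target_hypervisor: str) -> bool:
        """A VM keeps its IP only if it stays on a hypervisor of this cluster
        and its address belongs to one of the cluster's VM subnets."""
        addr = ipaddress.ip_address(vm_ip)
        return (target_hypervisor in self.hypervisors
                and any(addr in subnet for subnet in self.vm_subnets))

cluster = VMCluster(
    name="batch-cluster-1",
    hypervisors={"hv001", "hv002", "hv003"},
    vm_subnets=[ipaddress.ip_network("10.10.0.0/26"),        # VM subnet 1 (private here)
                ipaddress.ip_network("10.10.0.64/26")],       # VM subnet 2
)

print(cluster.can_keep_ip("10.10.0.5", "hv003"))   # True: migration inside the cluster
print(cluster.can_keep_ip("10.10.0.5", "hv999"))   # False: moving out means a new IP
```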

VM Clusters
How this service is offered to service providers: SOAP.
It is flexible: it can represent the actual VM or a VM slot.
A VM Cluster is requested directly from us.
– Adding a VM subnet also has to be requested.
What can be done programmatically?

VM Representation in LANDB
There are several use cases for VMs: we need flexibility.
They are still machines, and the person responsible may differ from that of the hypervisor. They should be registered as such:
– A flag was added to indicate that the device is a virtual machine.
– A pointer to the HOST machine using it at this moment.

Operations Allowed for Service Providers in LANDB
These allow service providers to document the VM infrastructure in LANDB (a client sketch follows below):
– Create a VM (creates the device and allocates an IP in the cluster).
– Destroy a VM.
– Migrate a VM (within the same VM subnet).
– Move a VM (within the same cluster or to another cluster -> the VM will change IP).
– Query information on clusters, hypervisors, and VMs:
  Which hypervisor is my VM-IP on?
  Which VM-IPs are running on this hypervisor?
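A minimal sketch of how a service provider might drive these operations over SOAP from Python, using the zeep library. The WSDL URL, the lack of authentication, and every operation name below are placeholders that mirror the operations listed above; they are not the real LANDB API names.

```python
# Hypothetical SOAP client sketch for the operations above (names are placeholders,
# NOT the actual LANDB methods). zeep generates the calls from the service's WSDL.
from zeep import Client

client = Client("https://example.cern.ch/landb/vm.wsdl")    # placeholder WSDL location

# Create a VM: registers the device and allocates an IP inside the VM cluster.
client.service.CreateVM(cluster="batch-cluster-1", vm_name="vm-00123")

# Migrate it to another hypervisor within the same VM subnet (IP is kept).
client.service.MigrateVM(vm_name="vm-00123", target_hypervisor="hv002")

# Query: which VM-IPs are currently running on a given hypervisor?
print(client.service.GetVMsOnHypervisor(hypervisor="hv002"))

# Destroy it when no longer needed (frees the device entry and its IP).
client.service.DestroyVM(vm_name="vm-00123")
```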

Conclusions
– It is not obvious how to manage virtualization on large networks.
– We are already exploring possible solutions.
– Once the requirements are defined, we are confident we will find the appropriate networking solutions.

Questions? THANK YOU!