Networking Lab
Networking Lab - Goals
From the theory … to experimentation:
- network switching (Layer 2) in an OpenStack environment
- external world communication with DVR (network routing / NAT, Layer 3)
- network virtualization (underlay with VXLAN)
Several use cases (following a ping packet):
- Use case 1: VM to VM in a single network on a single compute node
- Use case 2: VM to VM in a single network on two compute nodes
- Use case 3: North-South with Floating IP, VM to Internet (DVR / static NAT)
- Use case 4: East-West routing, VM to VM in two sub-networks on two compute nodes (DVR)
- Use case 5: North-South routing with SNAT, VM to Internet (dynamic NAT)
The following hands-on lab focuses on HP Helion OpenStack networking functions. Through these use cases you will follow the life of a simple packet and discover the various components (bridges, routers, filtering tables) that take part in network switching, routing and virtualization.
Main CLI on Compute node
Libvirt (virtualization): virsh
Linux bridge: brctl show, iptables --list-rules, tcpdump
Open vSwitch:
ovs-vsctl show - utility for querying and configuring ovs-vswitchd
ovs-ofctl show - administer OpenFlow switches
ovs-appctl - utility for configuring running Open vSwitch daemons
http://docs.openstack.org/networking-guide/deploy_scenario3a.html
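A typical first look on a compute node combines these tools as follows (a quick sketch; the bridge names follow the conventions used throughout this lab):
# virsh list => running instances
# brctl show => per-VM Linux bridges (qbr...)
# iptables --list-rules => security group chains (neutron-openvswi-...)
# ovs-vsctl show => OVS bridges, ports and VLAN tags
# ovs-ofctl show br-int => OpenFlow port numbers on the integration bridge
# ovs-appctl fdb/show br-int => learned MAC table of the integration bridge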
Main CLI on Compute node
Network namespaces: ip netns - process network namespace management (run ip, tcpdump or iptables inside a namespace)
http://docs.openstack.org/networking-guide/deploy_scenario2.html
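A typical namespace inspection session looks like this (a sketch; the router ID comes from the prepared environment and will differ in yours):
# ip netns => list all namespaces (qrouter-..., fip-...)
# ip netns exec qrouter-<routerId> ip a => interfaces inside the router namespace
# ip netns exec qrouter-<routerId> iptables --table nat --list => NAT rules of the router
# ip netns exec qrouter-<routerId> tcpdump icmp -e -l -i <interface> => capture traffic inside the namespace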
Use Case 1: VM to VM in single network on single compute node Pure L2 switching
Use Case 2: VM to VM in single network on two compute nodes L2 switching over a VXLAN tunnel
Use Case 3: North-South with Floating IP DVR routing + static NAT (IP translation) + MAC translation. This scenario uses distributed routing and static NAT, so it is handled directly on the compute node.
Use Case 4: East-West routing – VMs on different compute nodes / networks + DVR. Distributed routing between subnets within a tenant, for VMs on different compute nodes. Note: if the VMs are collocated on the same compute node, traffic does not need to leave the physical host.
Use Case 5: North-South routing with SNAT. The VMs share a single external IP and dynamic NAT (PAT) is used. Traffic is sent to the network node (by default hosted on the Helion OpenStack controller node).
Network Lab - Prerequisites
- Having followed the theory
- Having done the previous lab
- Dashboard: https://192.168.24.31/
- A tenant ID and a user ID
- A private network and a subnet
- A VM (that you know how to access) with a security group, a keypair and a floating IP
- A router
Use your own environment (VM / network) or use the prepared one.
Lab Environment (reminder)
- Jump Host: RDP to 16.16.11.96 as userXYZ / XXXXx
- Seed Host: ssh 10.2.1.230 as demopaq / xxxx (from the Jump Host); run sudo -i to switch to the root user
- Seed VM: ssh 192.168.24.2 (from the Seed Host); source stackrc; nova list
  Please do not stop the Seed VM! This would break the entire lab!
- Undercloud: ssh heat-admin@192.168.24.6 (from the Seed VM); # sudo -i; # source stackrc; # nova list
- Overcloud: ssh heat-admin@192.168.24.31 (from the Seed VM)
- Compute Node: ssh heat-admin@192.168.24.xx (from the Seed VM)
Collecting Information
Collecting Information on VMs
For troubleshooting it is essential to start by collecting information about the instances, compute nodes, MAC and IP addresses, and the various IDs (tenant ID, instance ID, etc.).
Get your project tenant ID (from the Overcloud):
# keystone tenant-get <tenantName>
e.g. 0262df5bef734da1a44e591ef9019cfe
Find which physical compute node hosts each of your instances, and its local VM name (from the Overcloud):
# nova list --all-tenants 1 --tenant <tenantId> --fields name,OS-EXT-SRV-ATTR:host,OS-EXT-SRV-ATTR:instance_name
e.g. NetworkLabVM1 | overcloud-ce-novacompute1-novacompute1-qr52vumlc4in | instance-000001b6
Get the compute node IPs (from the Overcloud):
# nova hypervisor-list
# nova hypervisor-show <computeNodeHostname> | grep host_ip
e.g. 192.168.24.35 (compute 0) and 192.168.24.36 (compute 1)
Log into the compute node and get the virtual NIC + bridge (from the Seed VM):
# ssh heat-admin@<computeNodeIP>
$ sudo -i
[# virsh list]
[# virsh dumpxml <instanceId> | grep "<nova:name" => check it is your VM]
# virsh dumpxml <instanceId> | grep -A 7 "<interface"
e.g. tap551d286a-e4 / qbr551d286a-e4
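The Neutron CLI can also map a VM's fixed IP to its port and MAC address (a sketch; note that the tap/qbr/qvb/qvo device names are derived from the first 11 characters of the Neutron port ID, e.g. a port ID starting with 551d286a-e4... gives tap551d286a-e4):
# neutron port-list | grep <VM fixed IP> => port ID, MAC address and fixed IP of the VM's vNIC
# neutron port-show <portId> | grep mac_address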
Prepared environment
Network: Private-NetworkLab1, private-subnetNetworkLab1 - 10.101.0.0/24, with router-NetworkLab1 (ID = 89ca06dc-6d80-469f-b86f-34d5e359988d)
Security group: SG-SSH-Ping-NetworkLab
KeyPair: keypairNetworkLab
VM (host)                | IP          | Associated FIP | Instance Id       | Hypervisor IP | Bridge Id      | vNIC Id
NetworkLabVM0 (Compute0) | 10.101.0.8  | 192.168.25.121 | instance-000001b9 | 192.168.24.35 | qbr551d286a-e4 | tap551d286a-e4
NetworkLabVM1 (Compute1) | 10.101.0.9  | -              | instance-000001bc | 192.168.24.36 | qbr0d4c2f0e-8b | tap0d4c2f0e-8b
NetworkLabVM2 (Compute0) | 10.101.0.10 | -              | instance-000001bf | -             | qbr8f0d43bf-95 | tap8f0d43bf-95
Overcloud Compute IP
+--------------------------------------+-----------------------------------------------------+--------+------------+-------------+------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------------------------------------------+--------+------------+-------------+------------------------+
| ef89adfa-e461-4454-8a77-6e8ad1edf091 | overcloud-ce-controller-SwiftStorage0-gprslkliy3ca | ACTIVE | - | Running | ctlplane=192.168.24.33 |
| 592a3727-4b38-4320-9185-9bc56d0da872 | overcloud-ce-controller-SwiftStorage1-gtcatijor4kd | ACTIVE | - | Running | ctlplane=192.168.24.29 |
| 3fa95dd8-1d21-476f-95ea-823be2eee2ed | overcloud-ce-controller-controller0-fywj4gidtsn4 | ACTIVE | - | Running | ctlplane=192.168.24.34 |
| ab5869fd-edc5-4828-aea8-d02dc02cff67 | overcloud-ce-controller-controller1-enjbwvupqm3p | ACTIVE | - | Running | ctlplane=192.168.24.32 |
| 128cba02-865d-41fc-b512-62d80f1ba355 | overcloud-ce-controller-controller2-vnizvy2i7ix4 | ACTIVE | - | Running | ctlplane=192.168.24.30 |
| eef056db-e2a1-40fd-bb1e-96380cb7d4c3 | overcloud-ce-novacompute0-NovaCompute0-n2a4grysfunc | ACTIVE | - | Running | ctlplane=192.168.24.35 |
| d54fbbda-6ac6-4fc3-a32a-5c7cb85e1eba | overcloud-ce-novacompute1-NovaCompute1-qr52vumlc4in | ACTIVE | - | Running | ctlplane=192.168.24.36 |
| 0150a73f-d85c-4dab-9200-80107bfafcf0 | overcloud-ce-novacompute2-NovaCompute2-si2j7g5mcaxn | ACTIVE | - | Running | ctlplane=192.168.24.37 |
| d824b508-ffc8-42cb-9851-668269eb8346 | overcloud-ce-novacompute3-NovaCompute3-nramvaamkzuz | ACTIVE | - | Running | ctlplane=192.168.24.38 |
| d50aea4b-8c3f-466a-bd34-543294a9ca7f | overcloud-ce-novacompute4-NovaCompute4-2yjelxkfbj4d | ACTIVE | - | Running | ctlplane=192.168.24.39 |
| 19e257c2-9c5b-4784-bf63-be71bb01fb38 | overcloud-ce-novacompute5-NovaCompute5-gl7xjs62p27c | ACTIVE | - | Running | ctlplane=192.168.24.40 |
| 6d61d7f3-a30f-4b95-90e8-7ec9e9bc7468 | overcloud-ce-novacompute6-NovaCompute6-zlre36geotgs | ACTIVE | - | Running | ctlplane=192.168.24.41 |
| 81e39701-d0ec-48d7-9234-6c5a28dc54d5 | overcloud-ce-novacompute7-NovaCompute7-hbo7u7qiiwgb | ACTIVE | - | Running | ctlplane=192.168.24.42 |
| 13f86c01-42f4-47fe-a395-e6e86cde76b9 | overcloud-ce-novacompute8-NovaCompute8-4od52mez4u32 | ACTIVE | - | Running | ctlplane=192.168.24.43 |
| af4f41a4-d19c-4088-ae09-660479a24c85 | overcloud-ce-novacompute9-NovaCompute9-dfm5ftb3d6kj | ACTIVE | - | Running | ctlplane=192.168.24.44 |
+--------------------------------------+-----------------------------------------------------+--------+------------+-------------+------------------------+
Use Case 1: VM to VM in single network on single compute node
During the lab we will use ping. Connect to the first instance and initiate a ping to the second instance - both are on the same compute node, in the same tenant and in the same subnet.
Use Case 1: VM to VM in single network on single compute node
Use Case 1: VM to VM in single network on single compute node
What you need (refer to the Cloud Lab for the how-to): 2 VMs on the same network and on the same compute node.
Tip: to ensure both VMs land on the same compute node, create your first VM and check which compute node hosts it. Then create your second VM using the relevant availability zone, as shown in the sketch below.
Scenario: connect to the first instance and initiate a ping to the second instance.
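A minimal boot sketch (the flavor, image and network names are assumptions to adapt to your environment; the nova:<host> form of --availability-zone pins the instance to a given hypervisor):
# nova boot --flavor <flavorName> --image <imageName> --nic net-id=<netId> \
    --security-groups SG-SSH-Ping-NetworkLab --key-name keypairNetworkLab \
    --availability-zone nova:<computeNodeHostname> NetworkLabVM2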
Use Case 1: VM to VM in single network on single compute node ping <VM2 IP> eth0 tcpdump icmp -e -i <tap> (the VM vNIC) check Dst MAC : fa:16:3e:d5:14:0c 2.3.1 tap Security rules on Dashboard iptables --list-rules | grep <tap> neutron-openvswi-i551d286a-e => Input neutron-openvswi-o551d286a-e => Output iptables –list <neutron-openvswi-i> -v –n 0 0 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0 => ICMP security rule (ingress) 7 1056 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 => SSH security rule (ingress) 2.3.2 per-VM Linux Bridge (qbr) Iptables qvb 2.3.3 brctl show <qbr> tcpdump icmp -e -i <qvb> ==> Test with a security rules without ICMP qvo ovs-vsctl show | grep -A3 qvo tag: 47 Tenants are locally isolated on L2 by assigning VLAN tags ovs-ofctl show br-int | grep qvo 140 Port Id used for OpenFlow rules ovs-ofctl dump-flows br-int table=0 match is with rule forward NORMAL (we will do L2 forwarding) ovs-appctl fdb/show br-int | grep <Dest MAC> packet switch to port 141 Compute1 vSwitch Integration Bridge (br-int) 2.3.4 Table 0 – Forward NORMAL VLAN
Use Case 1: VM to VM in single network on single compute node
2.3.5 br-int to VM2 (same compute node)
# ovs-ofctl show br-int | grep <port>
141(qvo8f0d43bf-95) => the packet does not leave br-int; it goes out through VM2's local port
# tcpdump icmp -e -i qvb<ID>
# tcpdump icmp -e -i tap<VM2>
The packet then crosses VM2's per-VM Linux bridge (qbr, iptables) via qvo -> qvb -> tap and arrives on eth0 of VM2.
Use Case 2 VM to VM in single network on two compute nodes
Use Case 2: VM to VM in single network on two compute nodes
Use Case 2: VM to VM in single network on two compute nodes
What you need (refer to the Cloud Lab for the how-to): 2 VMs on the same network BUT on different compute nodes.
Tip: create your first VM and check which compute node hosts it. Then create your second VM on a different compute node using the relevant availability zone (same technique as in use case 1, with another host).
Scenario: connect to the first instance and initiate a ping to the second instance.
Use Case 2: VM to VM in single network on two compute nodes ping <VM1 IP> eth0 tcpdump icmp -e -i <tap> (the VM vNIC) check fa:16:3e:91:d1:24 2.3.1 tap Security rules on Dashboard iptables --list-rules | grep <tap> neutron-openvswi-i551d286a-e => Input neutron-openvswi-o551d286a-e => Output iptables –list <neutron-openvswi-i> -v –n 0 0 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0 => ICMP security rule (ingress) 7 1056 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 => SSH security rule (ingress) 2.3.2 per-VM Linux Bridge (qbr) Iptables qvb 2.3.3 brctl show <qbr> tcpdump icmp -e -i <qvb> ==> Test with a security rules without ICMP qvo ovs-vsctl show | grep -A3 qvo tag: 47 Tenants are locally isolated on L2 by assigning VLAN tags ovs-ofctl show br-int | grep qvo 140 Port Id used for OpenFlow rules ovs-ofctl dump-flows br-int table=0 match is with rule forward NORMAL (we will do L2 forwarding) ovs-appctl fdb/show br-int | grep <Dest MAC> packet switch to port 6 Compute1 vSwitch Integration Bridge (br-int) 2.3.4 Table 0 – Forward NORMAL VLAN
Use Case 2: VM to VM in single network on two compute nodes
2.4.1 Compute1 integration bridge (br-int)
# ovs-ofctl show br-int | grep <port> => port 6 is patch-tun: the destination MAC is not reachable on br-int, so we must leave the compute node
2.4.2 patch-tun / patch-int
# ovs-ofctl show br-tun | grep '('
1(patch-int): addr:f2:a9:2e:fd:d9:22 => patch-int port id
2.4.3 Compute1 tunnel bridge (br-tun)
# ovs-ofctl dump-flows br-tun table=0
cookie=0x0, duration=1750348.488s, table=0, n_packets=383967, n_bytes=133975190, idle_age=6, hard_age=65534, priority=1,in_port=1 actions=resubmit(,1) => table 0: coming from a VM? resubmit to table 1
# ovs-ofctl dump-flows br-tun table=1
cookie=0x0, duration=1750438.711s, table=1, n_packets=383488, n_bytes=133936330, idle_age=6, hard_age=65534, priority=0 actions=resubmit(,2) => table 1: routed? resubmit to table 2
# ovs-ofctl dump-flows br-tun table=2
cookie=0x0, duration=1750496.475s, table=2, n_packets=3373, n_bytes=282126, idle_age=1758, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20) => table 2: unicast? resubmit to table 20
# ovs-ofctl dump-flows br-tun table=20 | grep <Dest MAC>
cookie=0x0, duration=8966.062s, table=20, n_packets=58, n_bytes=5460, idle_age=2466, priority=2,dl_vlan=47,dl_dst=fa:16:3e:91:d1:24 actions=strip_vlan,set_tunnel:0x406,output:75 => table 20: strip the VLAN tag, set VXLAN VNI 0x406 and send to port 75
# ovs-ofctl show br-tun | grep '('
75(vxlan-c0a81824): addr:ee:9b:af:d2:84:4b
# ovs-vsctl show | grep -A2 vxlan-c0a81824
options: {df_default="false", in_key=flow, local_ip="192.168.24.35", out_key=flow, remote_ip="192.168.24.36"} => remote_ip is the destination compute node's IP
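To see the whole br-tun pipeline at once, the relevant tables can be dumped in a single loop (a quick shell sketch):
for t in 0 1 2 20; do echo "=== table $t ==="; ovs-ofctl dump-flows br-tun table=$t; done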
Use Case 2: VM to VM in single network on two compute nodes
2.4.4 Underlay view on the sending side (Compute1 tunnel bridge, br-tun)
# tcpdump -e -i eth0 -c 100 | grep -B1 <Destination IP>
14:26:50.960407 fc:15:b4:1e:91:88 (oui Unknown) > c4:34:6b:ae:a6:f8 (oui Unknown), ethertype IPv4 (0x0800), length 148: NovaCompute0.39024 > NovaCompute1.4789: VXLAN, flags [I] (0x08), vni 1030
=> the internal MAC and IP addresses are not visible to the underlay; only the VNI identifies the tenant network
2.4.5 Compute2 tunnel bridge (br-tun) - receiving side
# tcpdump -e -i eth0 -c 100 | grep -B1 <Destination IP>
fa:16:3e:79:3a:06 (oui Unknown) > fa:16:3e:91:d1:24 (oui Unknown), ethertype IPv4 (0x0800), length 98: 10.101.0.8 > 10.101.0.9: ICMP echo request, id 6460, seq 5, length 64
14:31:13.542635 c4:34:6b:ae:a6:f8 (oui Unknown) > fc:15:b4:1e:91:88 (oui Unknown), ethertype IPv4 (0x0800), length 148: NovaCompute1.59623 > NovaCompute0.4789: VXLAN, flags [I] (0x08), vni 1030
2.4.6
# ovs-vsctl show
Port "vxlan-c0a81823"
Interface "vxlan-c0a81823"
type: vxlan
options: {df_default="false", in_key=flow, local_ip="192.168.24.36", out_key=flow, remote_ip="192.168.24.35"}
# ovs-ofctl show br-tun | grep '('
21(vxlan-c0a81823): addr:56:c2:66:5a:61:0b => the VXLAN port the packet arrives on
1(patch-int): addr:d6:23:44:f3:48:f1 => connects br-tun with br-int, where our VM is
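To capture only the VXLAN-encapsulated traffic on the underlay, filter on the VXLAN UDP port (4789, as seen in the traces above):
# tcpdump -n -e -i eth0 udp port 4789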
Use Case 2: VM to VM in single network on two compute nodes
2.4.7 Compute2 tunnel bridge (br-tun)
# ovs-ofctl dump-flows br-tun table=0
cookie=0x0, duration=10326.225s, table=0, n_packets=270, n_bytes=28072, idle_age=750, priority=1,in_port=21 actions=resubmit(,4) => table 0: coming from a tunnel? resubmit to table 4
# ovs-ofctl dump-flows br-tun table=4
cookie=0x0, duration=10383.253s, table=4, n_packets=257, n_bytes=27584, idle_age=807, priority=1,tun_id=0x406 actions=mod_vlan_vid:12,resubmit(,9) => table 4: add the local VLAN tag based on the VNI
# ovs-ofctl dump-flows br-tun table=9
cookie=0x0, duration=1752707.429s, table=9, n_packets=1585, n_bytes=167317, idle_age=188, hard_age=65534, priority=0 actions=resubmit(,10) => table 9: routed? resubmit to table 10
# ovs-ofctl dump-flows br-tun table=10
cookie=0x0, duration=1752779.241s, table=10, n_packets=1585, n_bytes=167317, idle_age=258, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1 => table 10: learn the source MAC into table 20 (for the return traffic), send to port 1 (patch-int)
Use Case 2: VM to VM in single network on two compute nodes
2.4.8 Compute2 vSwitch integration bridge (br-int) - Table 0, forward NORMAL
# ovs-vsctl show | grep -A1 'tag: 12'
tag: 12
Interface "qvo0d4c2f0e-8b"
# ovs-ofctl show br-int | grep '('
8(patch-tun): addr:66:27:4d:bf:34:fc
33(qvo0d4c2f0e-8b): addr:1e:69:f6:87:df:d4
# ovs-ofctl dump-flows br-int table=0
cookie=0x0, duration=1753813.258s, table=0, n_packets=443423, n_bytes=150262656, idle_age=1, hard_age=65534, priority=1 actions=NORMAL => the matching rule is "actions=NORMAL"
# ovs-appctl fdb/show br-int | grep <Dest MAC>
12 fa:16:3e:91:d1:24 0 33 => the packet is switched to port 33, which is the qvo
2.4.9 per-VM Linux bridge (qbr, iptables) and VM
# virsh list
# virsh dumpxml <Instance ID> | grep "<nova:name" => check it is your VM
# virsh dumpxml <Instance ID> | grep -A 7 "<interface"
<source bridge='qbr0d4c2f0e-8b'/>
# brctl show qbr0d4c2f0e-8b
qbr0d4c2f0e-8b 8000.ba89713f6904 no qvb0d4c2f0e-8b tap0d4c2f0e-8b
The packet crosses qvb -> qbr -> tap and reaches eth0 of the VM.
Use Case 3 North-South with Floating IP
Use Case 3: North-South with Floating IP
In the next scenario a VM communicates with a real network, such as an intranet or the Internet, and a Floating IP is assigned to it (its external identity). In this case Helion OpenStack uses its distributed routing and static NAT capabilities.
Use Case 3: North-South with Floating IP
What you need (refer to the Cloud Lab for the how-to): 1 VM with a Floating IP attached to it (see the CLI sketch below).
Scenario: start a ping from the VM to the outside world and start chasing the packet.
Note: in this case Helion OpenStack uses distributed routing and static NAT.
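If your VM does not yet have a Floating IP, one can be created and attached from the CLI (a sketch; the external network and VM names are assumptions to adapt to your environment):
# neutron floatingip-create <externalNetworkName>
# nova floating-ip-associate <vmName> <floatingIp>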
Use Case 3: North-South with Floating IP
From the VM: ping 15.201.49.155 (www.hp.com) - don't worry that it does not answer
2.5.1 VM eth0 / tap
# virsh list
# virsh dumpxml <Instance ID> | grep "<nova:name" => check it is your VM
# virsh dumpxml <Instance ID> | grep -A 7 "<interface"
<source bridge='qbr551d286a-e4'/>
<target dev='tap551d286a-e4'/>
# tcpdump icmp -e -i <tap>
15:29:59.554463 fa:16:3e:79:3a:06 (oui Unknown) > fa:16:3e:01:80:dd (oui Unknown), ethertype IPv4 (0x0800), length 98: 10.101.0.8 > 15.201.49.155: ICMP echo request, id 6475, seq 1, length 64
=> the packet is sent to the MAC of the default gateway, which is the DVR MAC
2.5.2 per-VM Linux bridge (qbr, iptables), then qvb / qvo
# ovs-vsctl show | grep -A3 qvo551d286a-e4 => tag: 47 (tenants are locally isolated at L2 by assigning VLAN tags)
2.5.3 Compute1 vSwitch integration bridge (br-int) - Table 0, forward NORMAL
# ovs-ofctl show br-int
140(qvo551d286a-e4): addr:ee:ff:b1:dc:70:6c => port id used in the OpenFlow rules
138(qr-45874868-21): addr:00:00:00:00:00:00
# ovs-ofctl dump-flows br-int table=0
cookie=0x0, duration=1755155.708s, table=0, n_packets=12237969, n_bytes=84967475439, idle_age=0, hard_age=65534, priority=1 actions=NORMAL => the matching rule is "actions=NORMAL"
# ovs-appctl fdb/show br-int | grep <Dest MAC>
138 47 fa:16:3e:01:80:dd => the packet is switched to router port 138 (= qr-45874868-21)
Use Case 3: North-South with Floating IP
2.5.4 Compute1 router namespace (qrouter): routing + static NAT (IP translation)
Get the router ID from the GUI: 89ca06dc-6d80-469f-b86f-34d5e359988d
# ip netns | grep 89ca06dc-6d80-469f-b86f-34d5e359988d
qrouter-89ca06dc-6d80-469f-b86f-34d5e359988d
# ip netns exec qrouter-89ca06dc-6d80-469f-b86f-34d5e359988d ip a
3: rfp-89ca06dc-6 inet 192.168.25.121/32
438: qr-45874868-21 inet 10.101.0.1/24
# ip netns exec qrouter-89ca06dc-6d80-469f-b86f-34d5e359988d ip rule list
32854: from 10.101.0.8 lookup 16
# ip netns exec qrouter-89ca06dc-6d80-469f-b86f-34d5e359988d ip route show table 16
default via 169.254.31.39 dev rfp-89ca06dc-6
# ip netns exec qrouter-89ca06dc-6d80-469f-b86f-34d5e359988d iptables --table nat --list
DNAT all -- anywhere 192.168.25.121 to:10.101.0.8
SNAT all -- 10.101.0.8 anywhere to:192.168.25.121
# ip netns exec qrouter-89ca06dc-6d80-469f-b86f-34d5e359988d tcpdump icmp -e -l -i rfp-89ca06dc-6
15:58:51.993167 0e:09:93:4f:34:54 (oui Unknown) > da:66:c5:a3:5a:22 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.25.121 > 15.201.49.155: ICMP echo request, id 6476, seq 1336, length 64
=> SNAT done: the source IP has been translated to the Floating IP
Use Case 3: North-South with Floating IP
2.5.5 Compute1 Floating IP namespace (fip): MAC conversion, then out to the external world
# ip netns
fip-4e68e9d1-6157-4507-9264-874409d000ec
# ip netns exec fip-4e68e9d1-6157-4507-9264-874409d000ec ip route | grep fpr-89ca06dc-6
169.254.31.38/31 dev fpr-89ca06dc-6 proto kernel scope link src 169.254.31.39
192.168.25.121 via 169.254.31.38 dev fpr-89ca06dc-6
# ip netns exec fip-4e68e9d1-6157-4507-9264-874409d000ec ip a
2: fpr-89ca06dc-6 inet 169.254.31.39/31
448: fg-4de08be2-67 inet 192.168.25.126/24
# ip netns exec fip-4e68e9d1-6157-4507-9264-874409d000ec tcpdump icmp -e -l -i fg-4de08be2-67
16:18:07.418030 fa:16:3e:be:48:4f (oui Unknown) > 78:48:59:38:41:e3 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.25.121 > 15.201.49.155: ICMP echo request, id 6491, seq 1, length 64
Compare with the qrouter tcpdump:
15:58:51.993167 0e:09:93:4f:34:54 (oui Unknown) > da:66:c5:a3:5a:22 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.25.121 > 15.201.49.155: ICMP echo request, id 6476, seq 1336, length 64
=> same IP addresses, but the MAC addresses have been rewritten
2.5.6 Compute1 external bridge (br-ex): switching to VLAN25
# ovs-vsctl show | grep -A4 br-ex
Port "fg-4de08be2-67"
Port "vlan25"
# ovs-ofctl show br-ex | grep '('
1(vlan25): addr:fc:15:b4:1e:91:88
# ovs-ofctl dump-flows br-ex
cookie=0x0, duration=1758769.414s, table=0, n_packets=11832534, n_bytes=84831149625, idle_age=370, hard_age=65534, priority=0 actions=NORMAL
# ovs-appctl fdb/show br-ex
1 0 78:48:59:38:41:e3 4 => the external gateway MAC is learned on port 1 (vlan25)
Use Case 4 East-West routing – VM on different computes / networks
Use Case 4: East-West routing – VM on different computes / networks
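This use case is left for you to explore. As a starting sketch, note that with DVR the qrouter namespace is replicated on every compute node hosting one of the VMs, so east-west routing can be observed locally on the source compute node (reusing the commands from use case 3):
# ip netns | grep qrouter
# ip netns exec qrouter-<routerId> ip route => both tenant subnets appear as directly connected
# ip netns exec qrouter-<routerId> tcpdump icmp -e -l -i qr-<portId> => watch the packet being routed
The routed packet then leaves through br-tun exactly as in use case 2.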
Use Case 5 North-South routing with SNAT
Use Case 5: North-South routing with SNAT
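Also left for you to explore; a sketch of where to look, assuming DVR with centralized SNAT (the snat-<routerId> namespace lives on the network node, by default the Helion OpenStack controller):
# ip netns | grep snat
# ip netns exec snat-<routerId> iptables --table nat --list => the SNAT rule translating tenant traffic to the shared external IP
# ip netns exec snat-<routerId> tcpdump icmp -e -l -i qg-<portId> => translated traffic leaving towards the external network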
Conclusion
Reference http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html http://docs.openstack.org/networking-guide/ incl. http://docs.openstack.org/networking-guide/deploy_scenario3a.html
Annex
Main CLI on Compute node (annex overview)
Instance: eth0 / tap - Libvirt virtualization (KVM): virsh
Linux bridge (qbr): brctl show, iptables --list-rules, tcpdump
Open vSwitch (openvswitch.org) - integration bridge (br-int), tunnel bridge (br-tun) and external bridge (br-ex), connected by patch ports; VM ports (qvo), router ports (qr), floating IP gateway port (fg):
ovs-vsctl show - utility for querying and configuring ovs-vswitchd
ovs-ofctl show - administer OpenFlow switches
ovs-appctl - utility for configuring running Open vSwitch daemons
Network namespaces: ip netns - process network namespace management (ip, tcpdump, iptables):
distributed router namespace (qrouter) with qr and rfp interfaces
Floating IP namespace (fip) with fpr and fg interfaces
br-tun faces the underlay; br-ex faces the Internet / external world.