# OVN4NFV Usage Guide

## Quickstart Installation Guide

Please follow the ovn4nfv installation steps: [ovn4nfv installation](https://github.com/ovn4nfv/ovn4nfv-k8s-plugin#quickstart-installation-guide)

## Network Testing

Create two pods and test the ping operation between them:

```
# kubectl apply -f example/ovn4nfv-deployment-replica-2-noannotation.yaml
deployment.apps/ovn4nfv-deployment-noannotation created
# kubectl get pods -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default ovn4nfv-deployment-noannotation-f446688bf-8g8hl 1/1 Running 0 3m26s 10.233.64.11 minion02 <none> <none>
default ovn4nfv-deployment-noannotation-f446688bf-srh56 1/1 Running 0 3m26s 10.233.64.10 minion01 <none> <none>
# kubectl exec -it ovn4nfv-deployment-noannotation-f446688bf-8g8hl -- ping 10.233.64.10 -c 1
PING 10.233.64.10 (10.233.64.10): 56 data bytes
64 bytes from 10.233.64.10: seq=0 ttl=64 time=2.650 ms

--- 10.233.64.10 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 2.650/2.650/2.650 ms
```

Create the hostnames deployment and service, and test the Kubernetes service query:

```
# kubectl apply -f example/ovn4nfv-deployment-noannotation-hostnames.yaml
deployment.apps/hostnames created
# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default hostnames-5d97c4688-jqw77 1/1 Running 0 12s 10.233.64.12 minion01 <none> <none>
default hostnames-5d97c4688-rx7zp 1/1 Running 0 12s 10.233.64.11 master <none> <none>
default hostnames-5d97c4688-z44sh 1/1 Running 0 12s 10.233.64.10 minion02 <none> <none>
```

Test the hostnames service:

```
# kubectl apply -f example/ovn4nfv-deployment-hostnames-svc.yaml
service/hostnames created
# kubectl apply -f example/ovn4nfv-deployment-noannotation-sandbox.yaml
deployment.apps/ovn4nfv-deployment-noannotation-sandbox created
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hostnames-5d97c4688-jqw77 1/1 Running 0 6m41s 10.233.64.12 minion01 <none> <none>
hostnames-5d97c4688-rx7zp 1/1 Running 0 6m41s 10.233.64.11 master <none> <none>
hostnames-5d97c4688-z44sh 1/1 Running 0 6m41s 10.233.64.10 minion02 <none> <none>
ovn4nfv-deployment-noannotation-sandbox-5fb94db669-vdkss 1/1 Running 0 9s 10.233.64.13 minion02 <none> <none>
# kubectl exec -it ovn4nfv-deployment-noannotation-sandbox-5fb94db669-vdkss -- wget -qO- hostnames
hostnames-5d97c4688-jqw77
# kubectl exec -it ovn4nfv-deployment-noannotation-sandbox-5fb94db669-vdkss -- wget -qO- hostnames
hostnames-5d97c4688-rx7zp
# kubectl exec -it ovn4nfv-deployment-noannotation-sandbox-5fb94db669-vdkss -- wget -qO- hostnames
hostnames-5d97c4688-z44sh
```
You should get a different hostname for each query.

Test external reachability:

```
# kubectl exec -it ovn4nfv-deployment-noannotation-sandbox-5fb94db669-vdkss -- wget -qO- example.com
<!doctype html>
<html>
<head>
    <title>Example Domain</title>

    <meta charset="utf-8" />
    <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <style type="text/css">
    body {
        background-color: #f0f0f2;
        margin: 0;
        padding: 0;
        font-family: -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;

    }
    div {
        width: 600px;
        margin: 5em auto;
        padding: 2em;
        background-color: #fdfdff;
        border-radius: 0.5em;
        box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);
    }
    a:link, a:visited {
        color: #38488f;
        text-decoration: none;
    }
    @media (max-width: 700px) {
        div {
            margin: 0 auto;
            width: auto;
        }
    }
    </style>
</head>

<body>
<div>
    <h1>Example Domain</h1>
    <p>This domain is for use in illustrative examples in documents. You may use this
    domain in literature without prior coordination or asking for permission.</p>
    <p><a href="https://www.iana.org/domains/example">More information...</a></p>
</div>
</body>
</html>
```

## Multiple Network Setup and Testing

Create two networks, `ovn-priv-net` and `ovn-port-net`:

```
# kubectl apply -f example/ovn-priv-net.yaml
network.k8s.plugin.opnfv.org/ovn-priv-net created

# kubectl apply -f example/ovn-port-net.yaml
network.k8s.plugin.opnfv.org/ovn-port-net created

# kubectl get crds
NAME CREATED AT
networkchainings.k8s.plugin.opnfv.org 2020-09-21T19:29:50Z
networks.k8s.plugin.opnfv.org 2020-09-21T19:29:50Z
providernetworks.k8s.plugin.opnfv.org 2020-09-21T19:29:50Z

# kubectl get networks
NAME AGE
ovn-port-net 32s
ovn-priv-net 39s
```

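For orientation, a `Network` CR such as the one in `example/ovn-priv-net.yaml` follows the `networks.k8s.plugin.opnfv.org` CRD. The sketch below is illustrative only: the field names are assumed from the upstream examples, and the subnet values are an assumption chosen to match the `net1` addresses seen later; check the file in `./example` for the authoritative content.

```
apiVersion: k8s.plugin.opnfv.org/v1alpha1
kind: Network
metadata:
  name: ovn-priv-net
spec:
  # CNI implementation that realizes this network
  cniType: ovn4nfv
  ipv4Subnets:
  - name: subnet1
    subnet: 172.16.44.0/24      # assumed; pods below get 172.16.44.x on net1
    gateway: 172.16.44.1/24
```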
Use the `ovn-port-net` and `ovn-priv-net` networks to create pods with multiple interfaces,
and test the network connectivity between the pods:

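The deployment YAML applied below attaches the pods to both networks through the `k8s.plugin.opnfv.org/nfn-network` pod annotation. A hedged sketch of the relevant pod-template fragment follows; the interface names `net0`/`net1` match the `ifconfig` output shown below, but treat the exact JSON layout as an assumption and consult `example/ovn4nfv-deployment-replica-2-withannotation.yaml` for the authoritative form.

```
# Pod-template metadata fragment only; not a complete Deployment.
metadata:
  annotations:
    k8s.plugin.opnfv.org/nfn-network: |-
      { "type": "ovn4nfv", "interface": [
        { "name": "ovn-port-net", "interface": "net0" },
        { "name": "ovn-priv-net", "interface": "net1" }
      ]}
```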
```
# kubectl apply -f example/ovn4nfv-deployment-replica-2-withannotation.yaml
deployment.apps/ovn4nfv-deployment-2-annotation created

# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ovn4nfv-deployment-2-annotation-65cbc6f87f-5zwkt 1/1 Running 0 3m15s 10.233.64.14 minion01 <none> <none>
ovn4nfv-deployment-2-annotation-65cbc6f87f-cv75p 1/1 Running 0 3m15s 10.233.64.15 minion02 <none> <none>

# kubectl exec -it ovn4nfv-deployment-2-annotation-65cbc6f87f-5zwkt -- ifconfig
eth0      Link encap:Ethernet  HWaddr B6:66:62:E9:40:0F
          inet addr:10.233.64.14  Bcast:10.233.127.255  Mask:255.255.192.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1026 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

net0      Link encap:Ethernet  HWaddr B6:66:62:10:21:03
          inet addr:172.16.33.2  Bcast:172.16.33.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1026 (1.0 KiB)  TX bytes:0 (0.0 B)

net1      Link encap:Ethernet  HWaddr B6:66:62:10:2C:03
          inet addr:172.16.44.2  Bcast:172.16.44.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:52 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:10452 (10.2 KiB)  TX bytes:0 (0.0 B)

# kubectl exec -it ovn4nfv-deployment-2-annotation-65cbc6f87f-cv75p -- ifconfig
eth0      Link encap:Ethernet  HWaddr B6:66:62:E9:40:10
          inet addr:10.233.64.15  Bcast:10.233.127.255  Mask:255.255.192.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1026 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

net0      Link encap:Ethernet  HWaddr B6:66:62:10:21:04
          inet addr:172.16.33.3  Bcast:172.16.33.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1026 (1.0 KiB)  TX bytes:0 (0.0 B)

net1      Link encap:Ethernet  HWaddr B6:66:62:10:2C:04
          inet addr:172.16.44.3  Bcast:172.16.44.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1026 (1.0 KiB)  TX bytes:0 (0.0 B)

# kubectl exec -it ovn4nfv-deployment-2-annotation-65cbc6f87f-cv75p -- ping 172.16.44.2 -c 1
PING 172.16.44.2 (172.16.44.2): 56 data bytes
64 bytes from 172.16.44.2: seq=0 ttl=64 time=3.488 ms

--- 172.16.44.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 3.488/3.488/3.488 ms
```

## VLAN and Direct Provider Network Setup and Testing

The `./example` folder contains the OVN4NFV plugin DaemonSet YAML file, the VLAN and direct provider
networking test scenarios, and the required sample configuration files.

### Quick start

### Creating the sandbox environment

Create two VMs in your setup. The recommended way to create the sandbox is through KUD. Please follow the KUD all-in-one setup; this
will create the two VMs and provide the required sandbox.

### VLAN Tagging Provider Network Testing

The following setup uses two VMs: one VM runs Kubernetes with the OVN4NFV-k8s plugin, and the other VM acts as the provider
network for testing.

Apply the following YAML file to test VLAN-tagged provider networking. You must change `providerInterfaceName` and
`nodeLabelList` in `ovn4nfv_vlan_pn.yml` to match your environment:

```
kubectl apply -f ovn4nfv_vlan_pn.yml
```
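For orientation, a VLAN provider network CR of the kind `ovn4nfv_vlan_pn.yml` contains might look like the sketch below. The field names are assumed from the upstream `providernetworks.k8s.plugin.opnfv.org` CRD examples, and the subnet, VLAN ID, interface name, and node label are placeholder assumptions to adapt to your setup.

```
apiVersion: k8s.plugin.opnfv.org/v1alpha1
kind: ProviderNetwork
metadata:
  name: pnetwork
spec:
  cniType: ovn4nfv
  ipv4Subnets:
  - name: subnet1
    subnet: 172.16.33.0/24
    # keep VM2's test address out of the pod allocation pool (assumed)
    excludeIps: 172.16.33.2
  providerNetType: VLAN
  vlan:
    vlanId: "100"
    # physical interface to tag; change for your environment
    providerInterfaceName: eth0
    logicalInterfaceName: eth0.100
    vlanNodeSelector: specific
    # change to match your node's labels
    nodeLabelList:
    - kubernetes.io/hostname=minion01
```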
This creates the VLAN-tagged interface `eth0.100` in VM1, along with two pods for the deployments `pnw-original-vlan-1` and `pnw-original-vlan-2` in the VM.
Check the interface details and the inter-network communication between the `net0` interfaces:
```
# kubectl exec -it pnw-original-vlan-1-6c67574cd7-mv57g -- ifconfig
eth0      Link encap:Ethernet  HWaddr 0A:58:0A:F4:40:30
          inet addr:10.244.64.48  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:462 (462.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

net0      Link encap:Ethernet  HWaddr 0A:00:00:00:00:3C
          inet addr:172.16.33.3  Bcast:172.16.33.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:868 (868.0 B)  TX bytes:826 (826.0 B)
# kubectl exec -it pnw-original-vlan-2-5bd9ffbf5c-4gcgq -- ifconfig
eth0      Link encap:Ethernet  HWaddr 0A:58:0A:F4:40:31
          inet addr:10.244.64.49  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:462 (462.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

net0      Link encap:Ethernet  HWaddr 0A:00:00:00:00:3D
          inet addr:172.16.33.4  Bcast:172.16.33.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:25 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2282 (2.2 KiB)  TX bytes:2282 (2.2 KiB)
```
Test the ping operation between the VLAN interfaces:
```
# kubectl exec -it pnw-original-vlan-2-5bd9ffbf5c-4gcgq -- ping -I net0 172.16.33.3 -c 2
PING 172.16.33.3 (172.16.33.3): 56 data bytes
64 bytes from 172.16.33.3: seq=0 ttl=64 time=0.092 ms
64 bytes from 172.16.33.3: seq=1 ttl=64 time=0.105 ms

--- 172.16.33.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.092/0.098/0.105 ms
```
In VM2, create a VLAN-tagged interface `eth0.100` on `eth0` and configure its IP address as follows:
```
# ifconfig eth0.100
eth0.100: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.33.2  netmask 255.255.255.0  broadcast 172.16.33.255
        ether 52:54:00:f4:ee:d9  txqueuelen 1000  (Ethernet)
        RX packets 111  bytes 8092 (8.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 149  bytes 12698 (12.6 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
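The `eth0.100` interface shown above can be created on VM2 with standard iproute2 commands. This is a sketch: it assumes `eth0` is VM2's interface on the network shared with VM1 and requires root privileges.

```
# Create a VLAN subinterface on eth0 carrying tag 100
ip link add link eth0 name eth0.100 type vlan id 100
# Assign the test address used in this guide and bring the link up
ip addr add 172.16.33.2/24 dev eth0.100
ip link set dev eth0.100 up
```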
To verify the VLAN tagging, ping pod 1 in VM1 from VM2 through `eth0.100`; the ping should be successful:
```
# ping -I eth0.100 172.16.33.3 -c 2
PING 172.16.33.3 (172.16.33.3) from 172.16.33.2 eth0.100: 56(84) bytes of data.
64 bytes from 172.16.33.3: icmp_seq=1 ttl=64 time=0.382 ms
64 bytes from 172.16.33.3: icmp_seq=2 ttl=64 time=0.347 ms

--- 172.16.33.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.347/0.364/0.382/0.025 ms
```
### VLAN Tagging between VMs

### Direct Provider Network Testing

The main difference between VLAN tagging and direct provider networking is that, with VLAN tagging, a VLAN logical interface is
created first and the ports are then attached to it. To validate direct provider networking connectivity, we create a VLAN-tagged
link between VM1 and VM2 and test the connectivity as follows.

Create the VLAN-tagged interface `eth0.101` in VM1 and VM2, and add `providerInterfaceName: eth0.101` in the direct provider network CR.
```
# kubectl apply -f ovn4nfv_direct_pn.yml
```
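The direct provider network CR differs mainly in its `providerNetType` and in pointing `providerInterfaceName` at the pre-created `eth0.101`. The sketch below is hedged: field names are assumed from the upstream CRD examples, and the subnet and node label are placeholders; consult `ovn4nfv_direct_pn.yml` for the authoritative content.

```
apiVersion: k8s.plugin.opnfv.org/v1alpha1
kind: ProviderNetwork
metadata:
  name: directpnetwork
spec:
  cniType: ovn4nfv
  ipv4Subnets:
  - name: subnet2
    subnet: 172.16.34.0/24
    excludeIps: 172.16.34.2
  providerNetType: DIRECT
  direct:
    # VLAN subinterface created beforehand on the node
    providerInterfaceName: eth0.101
    directNodeSelector: specific
    nodeLabelList:
    - kubernetes.io/hostname=minion01
```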
Check the interconnection between the direct provider network pods as follows:
```
# kubectl exec -it pnw-original-direct-1-85f5b45fdd-qq6xc -- ifconfig
eth0      Link encap:Ethernet  HWaddr 0A:58:0A:F4:40:33
          inet addr:10.244.64.51  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:252 (252.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

net0      Link encap:Ethernet  HWaddr 0A:00:00:00:00:3E
          inet addr:172.16.34.3  Bcast:172.16.34.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:29 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2394 (2.3 KiB)  TX bytes:2268 (2.2 KiB)

# kubectl exec -it pnw-original-direct-2-6bc54d98c4-vhxmk -- ifconfig
eth0      Link encap:Ethernet  HWaddr 0A:58:0A:F4:40:32
          inet addr:10.244.64.50  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:252 (252.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

net0      Link encap:Ethernet  HWaddr 0A:00:00:00:00:3F
          inet addr:172.16.34.4  Bcast:172.16.34.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1092 (1.0 KiB)  TX bytes:924 (924.0 B)
# kubectl exec -it pnw-original-direct-2-6bc54d98c4-vhxmk -- ping -I net0 172.16.34.3 -c 2
PING 172.16.34.3 (172.16.34.3): 56 data bytes
64 bytes from 172.16.34.3: seq=0 ttl=64 time=0.097 ms
64 bytes from 172.16.34.3: seq=1 ttl=64 time=0.096 ms

--- 172.16.34.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.096/0.096/0.097 ms
```
In VM2, ping pod 1 in VM1:
```
$ ping -I eth0.101 172.16.34.2 -c 2
PING 172.16.34.2 (172.16.34.2) from 172.16.34.2 eth0.101: 56(84) bytes of data.
64 bytes from 172.16.34.2: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from 172.16.34.2: icmp_seq=2 ttl=64 time=0.065 ms

--- 172.16.34.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.057/0.061/0.065/0.004 ms
```
### Direct Provider Networking between VMs

## Summary

These are test scenarios for development and verification purposes only. Work is in progress to automate the end-to-end
testing.