QEMU v2.4.0

Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)

Vhost-user with OVS/DPDK as backend:

The goal is to connect the guests' virtio-net devices, which have a vhost-user
backend, to OVS ``dpdkvhostuser`` ports and to be able to run any kind of
network traffic between them.

Installation of OVS and DPDK:

Install OVS and DPDK using vsperf. Prepare the directories:

.. code:: bash

    mkdir -p /var/run/openvswitch
    mount -t hugetlbfs -o pagesize=2048k none /dev/hugepages

Load the kernel modules:

.. code:: bash

    modprobe openvswitch

For the OVS setup, clean the environment:

.. code:: bash

    rm -f /usr/local/var/run/openvswitch/vhost-user*
    rm -f /usr/local/etc/openvswitch/conf.db

Start the database server:

.. code:: bash

    ovsdb-tool create /usr/local/etc/openvswitch/conf.db $VSPERF/src/ovs/ovs/vswitchd/vswitch.ovsschema
    ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach

Start OVS:

.. code:: bash

    ovs-vsctl --no-wait init
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xf
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

Configure the bridge:
.. code:: bash

    ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
    ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
    ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser

QEMU parameters:

.. code:: bash

    qemu-system-x86_64 -enable-kvm -cpu host -smp 2 \
    -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \
    -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=52:54:00:12:34:56 \
    -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user2 \
    -netdev type=vhost-user,id=net2,chardev=char2,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=54:54:00:12:34:56 \
    -m 1024 -mem-path /dev/hugepages -mem-prealloc -realtime mlock=on \
    -monitor unix:/tmp/qmp-sock-src,server,nowait \
    -balloon virtio -drive file=/root/guest1.qcow2 -vnc :1 &

Run the standby QEMU on the destination with
``-incoming tcp:${incoming_ip}:${migrate_port}``.

For local live migration:

.. code:: bash

    incoming_ip=0

For peer-to-peer live migration:

.. code:: bash

    incoming_ip=dest_host

Network connection
------------------

.. figure: live migration network connection (figwidth: 80%)

Commands for performing live migration:
.. code:: bash

    echo "migrate_set_speed 0" | nc -U /tmp/qmp-sock-src
    echo "migrate_set_downtime 0.10" | nc -U /tmp/qmp-sock-src
    echo "migrate -d tcp:0:4444" | nc -U /tmp/qmp-sock-src
    # Wait till the live migration is completed
    echo "info migrate" | nc -U /tmp/qmp-sock-src

Test Result
-----------
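Whether the migration finished is read from the ``Migration status`` field of
the ``info migrate`` reply issued above. A minimal sketch of extracting that
field, assuming the ``/tmp/qmp-sock-src`` monitor socket from the commands
above; the ``sample`` reply text below is illustrative only:

```shell
# Extract the "Migration status" field from an HMP 'info migrate' reply.
# In practice the reply would come from:
#   echo "info migrate" | nc -U /tmp/qmp-sock-src
# The sample text here is illustrative only.
sample='capabilities: xbzrle: off
Migration status: completed
total time: 8953 milliseconds
downtime: 93 milliseconds'

status=$(printf '%s\n' "$sample" | sed -n 's/^Migration status: *//p')
echo "$status"    # -> completed
```

In a real run one would poll until the status leaves ``active`` (it ends as
``completed``, ``failed``, or ``cancelled``) before reading the ``downtime``
and ``total time`` figures.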