.. This work is licensed under a Creative Commons Attribution 4.0 International License.

.. http://creativecommons.org/licenses/by/4.0

Fast Live Migration
===================

The NFV project requires fast live migration. The specific requirement is a
total live migration time of less than 2 seconds, while keeping the VM
downtime below 10ms when running a DPDK L2 forwarding workload.

We measured baseline data by migrating an 8GiB guest running a DPDK L2
forwarding workload and observed a total live migration time of 2271ms and
a VM downtime of 26ms. Both indicators failed to satisfy the requirements.

The following four features have been developed over the years to make the
live migration process faster.

+ XBZRLE:
  Helps to reduce the network traffic by sending only a compressed delta of
  each updated page (a toy encoder is sketched below).
+ RDMA:
  Uses an RDMA-capable NIC to increase the efficiency of data transmission.
+ Multi thread compression:
  Compresses the data before transmission.
+ Auto convergence:
  Reduces the data rate of dirty pages by throttling the guest.

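To illustrate the idea behind XBZRLE, here is a toy delta encoder. It is not
QEMU's encoder (QEMU XORs the old and new page and run-length encodes the
zero runs); the function name ``delta_encode`` and the record format with
one-byte lengths are invented for brevity::

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Encode only the bytes of `cur` that differ from `prev` as a stream
     * of (skip_len, data_len, data...) records. Returns the encoded size,
     * or -1 if the delta would not fit in `avail` bytes. */
    static ptrdiff_t delta_encode(const uint8_t *prev, const uint8_t *cur,
                                  size_t len, uint8_t *out, size_t avail)
    {
        size_t i = 0, o = 0;

        while (i < len) {
            size_t skip = 0, data = 0, start;

            /* Unchanged bytes: only their count travels on the wire. */
            while (skip < 255 && i + skip < len && prev[i + skip] == cur[i + skip])
                skip++;
            i += skip;
            start = i;
            /* Changed bytes: these are sent verbatim. */
            while (data < 255 && i + data < len && prev[i + data] != cur[i + data])
                data++;
            i += data;

            if (o + 2 + data > avail)
                return -1;          /* delta too large: send the raw page */
            out[o++] = (uint8_t)skip;
            out[o++] = (uint8_t)data;
            memcpy(&out[o], &cur[start], data);
            o += data;
        }
        return (ptrdiff_t)o;
    }

The receiver walks the records, skipping over bytes it already holds from the
previous version of the page and copying the changed bytes from the stream.
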
Tests show that none of the above features can satisfy the NFV requirements.
XBZRLE and multi-thread compression do the compression entirely in software,
and they are not fast enough in a 10Gbps network environment. RDMA is not
flexible because it has to transfer all of the guest memory to the
destination without zero page optimization. Auto convergence is not
appropriate for NFV because it impacts the guest's performance.

So we need to find other ways to optimize.

Optimizations
-------------
a. Delay non-emergency operations
   Profiling showed that some of the cleanup operations performed during the
   stop-and-copy stage are the main reason for the long VM downtime. These
   cleanup operations include stopping dirty page logging, which is time
   consuming. By deferring them until the data transmission is completed,
   the VM downtime is reduced to about 5-7ms.
b. Optimize zero page checking
   Currently QEMU uses SSE2 instructions to optimize zero page checking;
   SSE2 can process 16 bytes per instruction. By using AVX2 instructions,
   we can process 32 bytes per instruction instead. Testing shows that
   using AVX2 speeds up the zero page checking process (see the sketch
   after this list).
c. Remove unnecessary context synchronization
   The CPU context was being synchronized twice during live migration.
   Removing the redundant synchronization shortened the VM downtime by
   about 100us.

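To make optimization b concrete, below is a minimal sketch of an AVX2 zero
page check. This is not QEMU's actual implementation; the function name and
the assumptions that the buffer is 32-byte aligned and a multiple of 32
bytes long are ours::

    #include <immintrin.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Check whether a page is all zero, testing 32 bytes per compare.
     * Assumes buf is 32-byte aligned and len is a multiple of 32. */
    static bool buffer_is_zero_avx2(const void *buf, size_t len)
    {
        const __m256i zero = _mm256_setzero_si256();
        const __m256i *p = buf;
        size_t i;

        for (i = 0; i < len / 32; i++) {
            /* Byte-wise compare against zero; the movemask is all ones
             * (0xFFFFFFFF) only if every one of the 32 bytes matched. */
            __m256i cmp = _mm256_cmpeq_epi8(_mm256_load_si256(p + i), zero);
            if ((unsigned)_mm256_movemask_epi8(cmp) != 0xFFFFFFFFu) {
                return false;
            }
        }
        return true;
    }

Compile with ``-mavx2``. An SSE2 version has the same shape but uses the
128-bit ``_mm_cmpeq_epi8``/``_mm_movemask_epi8`` pair, which is why it only
covers 16 bytes per instruction.
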
Test Environment
----------------

The source and destination hosts have the same hardware and OS:

CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz

Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit

Vhost-user with OVS/DPDK as backend:

The goal is to connect the guests' virtio-net devices, which use a
vhost-user backend, to the OVS dpdkvhostuser ports and to be able to run
any kind of network traffic between them.

Installation of OVS and DPDK:

Use vsperf to install OVS and DPDK, then prepare the directories and mount
hugepages::

    mkdir -p /var/run/openvswitch
    mount -t hugetlbfs -o pagesize=2048k none /dev/hugepages

For OVS setup, first clean the environment::

    rm -f /usr/local/var/run/openvswitch/vhost-user*
    rm -f /usr/local/etc/openvswitch/conf.db

Start the database server::

    ovsdb-tool create /usr/local/etc/openvswitch/conf.db $VSPERF/src/ovs/ovs/vswitchd/vswitch.ovsschema
    ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach

Initialize the database and configure DPDK::

    ovs-vsctl --no-wait init
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xf
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

Create the bridge and the two dpdkvhostuser ports::

    ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
    ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
    ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser

Start the guest with the two vhost-user network devices::

    qemu-system-x86_64 -enable-kvm -cpu host -smp 2 \
        -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \
        -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
        -device virtio-net-pci,netdev=net1,mac=52:54:00:12:34:56 \
        -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user2 \
        -netdev type=vhost-user,id=net2,chardev=char2,vhostforce \
        -device virtio-net-pci,netdev=net2,mac=54:54:00:12:34:56 \
        -m 1024 -mem-path /dev/hugepages -mem-prealloc \
        -realtime mlock=on -monitor unix:/tmp/qmp-sock-src,server,nowait \
        -balloon virtio -drive file=/root/guest1.qcow2 -vnc :1 &

Run the standby QEMU with ``-incoming tcp:${incoming_ip}:${migrate_port}``.

For local live migration, both QEMU instances run on the same host and
``incoming_ip`` can be ``0``.

For peer-to-peer live migration, set ``incoming_ip`` to the IP address of
the destination host.

.. figure:: lmnetwork.jpg
   :alt: live migration network connection

Commands for performing live migration::

    echo "migrate_set_speed 0" | nc -U /tmp/qmp-sock-src
166 echo "migrate_set_downtime 0.10" |nc -U /tmp/qmp-sock-src
167 echo "migrate -d tcp:0:4444" |nc -U /tmp/qmp-sock-src
168 #Wait till livemigration completed
169 echo "info migrate" | nc -U /tmp/qmp-sock-src
Test Result
-----------

The downtime limit is set to 10ms for the test. We use pktgen to send
packets to the guest; the packet size is 64 bytes, and the line rate is
2013 Mbps.

a. Total live migration time

The total live migration time before and after optimization is shown in the
chart below. For an idle guest, we can reduce the total live migration time
from 2070ms to 401ms. For a guest running the DPDK L2 forwarding workload,
the total live migration time is reduced from 2271ms to 654ms.

.. figure:: lmtotaltime.jpg
   :alt: total live migration time

b. VM downtime

The VM downtime before and after optimization is shown in the chart below.
For an idle guest, we can reduce the VM downtime from 29ms to 9ms. For a
guest running the DPDK L2 forwarding workload, the VM downtime is likewise
reduced from its 26ms baseline.

.. figure:: lmdowntime.jpg
   :alt: VM downtime