xci: Switch VM disk cache to 'unsafe' and use 'iothreads' for I/O 17/52317/3
authorMarkos Chandras <mchandras@suse.de>
Mon, 19 Feb 2018 17:51:42 +0000 (17:51 +0000)
committerMarkos Chandras <mchandras@suse.de>
Tue, 20 Feb 2018 15:33:47 +0000 (15:33 +0000)
According to the docs[1]:

"writeback: This mode causes the hypervisor to interact with the disk
image file or block device with neither O_DSYNC nor O_DIRECT semantics.
The host page cache is used and writes are reported to the guest as
completed when they are placed in the host page cache. The normal page
cache management will handle commitment to the storage device.
Additionally, the guest's virtual storage adapter is informed of the
writeback cache, so the guest would be expected to send down flush
commands as needed to manage data integrity. Analogous to a raid
controller with RAM cache."

and

"writeback: This mode informs the guest of the presence of a write
cache, and relies on the guest to send flush commands as needed to
maintain data integrity within its disk image. This is a common
storage design which is completely accounted for within modern file
systems. This mode exposes the guest to data loss in the unlikely case
of a host failure, because there is a window of time between the time
a write is reported as completed, and that write being committed to the
storage device."

"unsafe: This mode is similar to writeback caching except for the
following: the guest flush commands are ignored, nullifying the data
integrity control of these flush commands, and resulting in a higher
risk of data loss because of host failure. The name “unsafe” should
serve as a warning that there is a much higher potential for data
loss because of a host failure than with the other modes. As the
guest terminates, the cached data is flushed at that time."

It is beneficial to let the host page cache absorb I/O from the guest
instead of waiting for data to reach the actual disk device. These are
short-lived test VMs, so we do not normally care about data integrity
across host failures and the extra risk of data loss is acceptable.
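For reference, libvirt's disk cache modes boil down to three underlying
QEMU cache flags (writeback, direct, no-flush). The helper below is only
an illustrative summary of the semantics quoted above; it is not part of
the patch:

```shell
#!/bin/sh
# Illustrative mapping of libvirt disk cache modes to QEMU's underlying
# cache flags. 'unsafe' is 'writeback' plus no-flush=on, i.e. guest
# flush commands are ignored until the guest terminates.
cache_flags() {
    case "$1" in
        none)       echo "writeback=on,direct=on,no-flush=off" ;;
        directsync) echo "writeback=off,direct=on,no-flush=off" ;;
        writeback)  echo "writeback=on,direct=off,no-flush=off" ;;
        unsafe)     echo "writeback=on,direct=off,no-flush=on" ;;
        *)          echo "unknown mode: $1" >&2; return 1 ;;
    esac
}

cache_flags unsafe   # -> writeback=on,direct=off,no-flush=on
```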

Moreover, we drop the cache configuration from the flavor files since
it's independent of the flavor that's being deployed.

[1] https://www.suse.com/documentation/sles-12/singlehtml/book_virt/book_virt.html#cha.cachemodes

Change-Id: I118ffdf84b1be672185b3eff60fe5d0b5f1a590d
Signed-off-by: Markos Chandras <mchandras@suse.de>
bifrost/scripts/bifrost-provision.sh
xci/config/aio-vars
xci/config/ha-vars
xci/config/mini-vars
xci/config/noha-vars
xci/scripts/vm/start-new-vm.sh

index 4de2ad2..84c0db4 100755 (executable)
@@ -46,7 +46,7 @@ export VM_DOMAIN_TYPE=${VM_DOMAIN_TYPE:-kvm}
 export VM_CPU=${VM_CPU:-4}
 export VM_DISK=${VM_DISK:-100}
 export VM_MEMORY_SIZE=${VM_MEMORY_SIZE:-8192}
-export VM_DISK_CACHE=${VM_DISK_CACHE:-none}
+export VM_DISK_CACHE=${VM_DISK_CACHE:-unsafe}
 
 # Settings for bifrost
 TEST_PLAYBOOK="opnfv-virtual.yaml"
index e5a1aee..1d2e4f9 100755 (executable)
@@ -15,4 +15,3 @@ export VM_DOMAIN_TYPE=${VM_DOMAIN_TYPE:-kvm}
 export VM_CPU=${VM_CPU:-8}
 export VM_DISK=${VM_DISK:-80}
 export VM_MEMORY_SIZE=${VM_MEMORY_SIZE:-8192}
-export VM_DISK_CACHE=none
index 4c7cd87..32616ab 100755 (executable)
@@ -15,4 +15,3 @@ export VM_DOMAIN_TYPE=${VM_DOMAIN_TYPE:-kvm}
 export VM_CPU=${VM_CPU:-6}
 export VM_DISK=${VM_DISK:-80}
 export VM_MEMORY_SIZE=${VM_MEMORY_SIZE:-16384}
-export VM_DISK_CACHE=none
index 48b38ce..142e886 100755 (executable)
@@ -15,4 +15,3 @@ export VM_DOMAIN_TYPE=${VM_DOMAIN_TYPE:-kvm}
 export VM_CPU=${VM_CPU:-6}
 export VM_DISK=${VM_DISK:-80}
 export VM_MEMORY_SIZE=${VM_MEMORY_SIZE:-12288}
-export VM_DISK_CACHE=none
index cb8901b..4610b32 100755 (executable)
@@ -15,4 +15,3 @@ export VM_DOMAIN_TYPE=${VM_DOMAIN_TYPE:-kvm}
 export VM_CPU=${VM_CPU:-6}
 export VM_DISK=${VM_DISK:-80}
 export VM_MEMORY_SIZE=${VM_MEMORY_SIZE:-12288}
-export VM_DISK_CACHE=none
index 70dc4ef..040377c 100755 (executable)
@@ -208,12 +208,12 @@ if sudo vgscan | grep -q xci-vm-vg; then
        }
        echo "Flushing the ${OS_IMAGE_FILE} image to ${lv_dev}..."
        sudo qemu-img convert -O raw ${OS_IMAGE_FILE} ${lv_dev}
-       disk_config="${lv_dev},cache=directsync,bus=virtio"
+       disk_config="${lv_dev},cache=unsafe,io=threads,bus=virtio"
 else
        echo "Using file backend..."
        echo "Resizing disk image '${OS}' to ${DISK}G..."
        qemu-img resize ${OS_IMAGE_FILE} ${DISK}G
-       disk_config="${OS_IMAGE_FILE},cache=none,bus=virtio"
+       disk_config="${OS_IMAGE_FILE},cache=unsafe,io=threads,bus=virtio"
 fi
 
 echo "Installing virtual machine '${VM_NAME}'..."
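The `disk_config` string assembled above is ultimately handed to
virt-install's `--disk` option; `io=threads` selects QEMU's thread-pool
AIO backend (as opposed to `io=native`, Linux AIO), which is what the
commit title refers to. A minimal sketch of that flow, reusing the
script's variable names; the default image path and the commented-out
virt-install line are assumptions, since they need libvirt and root:

```shell
#!/bin/sh
# Sketch of how the patched disk_config is built and consumed.
# OS_IMAGE_FILE mirrors the script's variable; the default path here is
# only a placeholder for illustration.
OS_IMAGE_FILE="${OS_IMAGE_FILE:-/var/lib/libvirt/images/xci_vm.qcow2}"

# File backend: the host page cache absorbs guest writes (cache=unsafe)
# and the thread-pool AIO backend services requests (io=threads).
disk_config="${OS_IMAGE_FILE},cache=unsafe,io=threads,bus=virtio"
echo "${disk_config}"

# sudo virt-install --name "${VM_NAME}" ... --disk "${disk_config}" ...
```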