.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. In this section explain the purpose of the scenario and the
   types of capabilities provided
The purpose of os-nosdn-kvm_ovs_dpdk_bar-noha scenario testing is to test the
No High Availability deployment and configuration of the OPNFV software suite
with OpenStack and without SDN software. This OPNFV software suite includes
the latest OPNFV KVM4NFV software packages: Linux kernel and QEMU patches for
achieving low latency. No High Availability mode is achieved by deploying an
OpenStack multi-node setup with 1 controller and 3 compute nodes.

KVM4NFV packages are installed on the compute nodes as part of the deployment.
This scenario test case is deployed on multiple nodes using the OPNFV Fuel
deployer.
Scenario Components and Composition
===================================
.. In this section describe the unique components that make up the scenario,
.. what each component provides and why it has been included in order
.. to communicate to the user the capabilities available in this scenario.
This scenario deploys the No High Availability OPNFV Cloud based on the
configurations provided in
noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml. This yaml file
contains the following configurations and is passed as an argument to the
deploy.py script:
* ``scenario.yaml:`` This configuration file defines the translation between
  the short deployment scenario name (os-nosdn-kvm_ovs_dpdk_bar-noha) and the
  actual deployment scenario configuration file
  (noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml).
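The name-to-file translation described above can be sketched as follows. This
is an illustrative assumption of the lookup deploy.py performs, not the actual
deploy.py code; the mapping mirrors only the names quoted in this document:

```python
# Hypothetical sketch of the short-name to configuration-file lookup that
# scenario.yaml enables; the real file may define many more scenarios and
# additional metadata per entry.
SCENARIO_MAP = {
    "os-nosdn-kvm_ovs_dpdk_bar-noha":
        "noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml",
}


def resolve_scenario(short_name):
    """Translate a short deployment scenario name into its config file."""
    if short_name not in SCENARIO_MAP:
        raise ValueError("unknown deployment scenario: " + short_name)
    return SCENARIO_MAP[short_name]
```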
* ``deployment-scenario-metadata:`` Contains the configuration metadata such
  as title, version, created and comment.
  .. code-block:: yaml

     deployment-scenario-metadata:
       title: NFV KVM and OVS-DPDK HA deployment
       comment: NFV KVM and OVS-DPDK
* ``stack-extensions:`` Stack extensions are OPNFV added-value features in the
  form of a fuel-plugin. Plugins listed in stack-extensions are enabled and
  configured. The os-nosdn-kvm_ovs_dpdk_bar-noha scenario currently uses the
  KVM-1.0.0 plugin and the barometer-1.0.0 plugin.
  .. code-block:: yaml

     - module: fuel-plugin-kvm
       module-config-name: fuel-nfvkvm
       module-config-version: 1.0.0
       module-config-override:
         # Module config overrides
     - module: fuel-plugin-collectd-ceilometer
       module-config-name: fuel-barometer
       module-config-version: 1.0.0
       module-config-override:
         # Module config overrides
* ``dea-override-config:`` Used to configure the HA mode, network segmentation
  types and role-to-node assignments. These configurations override the
  corresponding keys in dea_base.yaml and dea_pod_override.yaml. These keys
  are used to deploy multiple nodes (``1 controller, 3 computes``):
  * **Node 1**: This node has MongoDB and Controller roles. The controller
    node runs the Identity service, Image Service, management portions of
    Compute and Networking, the Networking plug-in and the dashboard. The
    Telemetry service, which was designed to support billing systems for
    OpenStack cloud resources, uses a NoSQL database to store information.
    The database typically runs on the controller node.
  * **Node 2**: This node has Compute and Ceph-osd roles. Ceph is a massively
    scalable, open-source, distributed storage system comprising an object
    store, a block store and a POSIX-compliant file system. Enabling Ceph
    configures Nova to store ephemeral volumes in RBD, configures Glance to
    use the Ceph RBD backend to store images, configures Cinder to store
    volumes in Ceph RBD images and configures the default number of object
    replicas in Ceph.
  * **Node 3**: This node has Compute and Ceph-osd roles.
  * **Node 4**: This node has Compute role. The compute node runs the
    hypervisor portion of Compute that operates tenant virtual machines
    or instances. By default, Compute uses KVM as the hypervisor.
Below is the ``dea-override-config`` section of the
noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file (excerpt):
.. code-block:: yaml

   net_segment_type: vlan

   interfaces: interfaces_vlan
   role: mongo,controller

   interfaces: interfaces_dpdk
   role: ceph-osd,compute
   attributes: attributes_1

   interfaces: interfaces_dpdk
   role: ceph-osd,compute
   attributes: attributes_1

   interfaces: interfaces_dpdk
   role: ceph-osd,compute
   attributes: attributes_1

   networking_parameters:
     segmentation_type: vlan

   neutron_vlan_range: true

   render_addr_mask: null

   description: Configures Nova to store ephemeral volumes in RBD. This works
     best if Ceph is enabled for volumes and images, too. Enables live
     migration of all types of Ceph backed VMs (without this option, live
     migration will only work with VMs launched from Cinder volumes).
   label: Ceph RBD for ephemeral volumes (Nova)

   description: Configures Glance to use the Ceph RBD backend to store images.
     If enabled, this option will prevent Swift from installing.
   label: Ceph RBD for images (Glance)

   - settings:storage.images_vcenter.value == true: Only one Glance backend
     could be selected.
* ``dha-override-config:`` Provides information about the VM definition and
  network configuration for a virtual deployment. These configurations
  override the pod dha definition and point to the controller, compute and
  fuel definition files. The
  noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml has no
  dha-config changes, i.e., the default configuration is used.
* The os-nosdn-kvm_ovs_dpdk_bar-noha scenario is successful when all the 4
  nodes are accessible (IP, up & running).
* In the os-nosdn-kvm_ovs_dpdk_bar-noha scenario, OVS is installed on the
  compute nodes with DPDK configured.

* The Barometer plugin is also implemented along with the KVM plugin.

* This results in faster communication and data transfer among the compute
  nodes.
Scenario Usage Overview
=======================
.. Provide a brief overview on how to use the scenario and the features available to the
.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
.. where the specifics of the features are covered including examples and API's
* The high availability feature is disabled and deployment is done by
  deploy.py with noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml
  as an argument.
* Install Fuel Master and deploy the OPNFV Cloud from scratch on hardware.
Command to deploy the os-nosdn-kvm_ovs_dpdk_bar-noha scenario:

.. code-block:: console

   $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso

where,

* ``-b`` is used to specify the configuration directory
* ``-i`` is used to specify the image downloaded from artifacts

Check ``sudo ./deploy.sh -h`` for further information.
* The os-nosdn-kvm_ovs_dpdk_bar-noha scenario can be executed from the jenkins
  project "fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-baremetal-daily-master".
* This scenario deploys 1 controller and 3 compute nodes and checks whether
  all the 4 nodes are accessible (IP, up & running).
* The test scenario is passed if deployment is successful and all 4 nodes are
  accessible (IP, up & running).
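The pass criterion above (every node has an IP and is up & running) can be
sketched as follows. The node list, field names and ``ready`` status value are
illustrative assumptions, not the real Fuel node inventory format:

```python
# Hypothetical sketch of the post-deployment accessibility check: the test
# passes only if every deployed node has an IP assigned and reports ready.
def all_nodes_accessible(nodes):
    """Return True only if every node has an IP and a 'ready' status."""
    return all(n.get("ip") and n.get("status") == "ready" for n in nodes)


# Example inventory: 1 controller + 3 computes, as this noha scenario deploys.
# Names, roles, IPs and statuses below are made up for illustration.
deployed = [
    {"name": "node-1", "roles": "mongo,controller", "ip": "10.20.0.3", "status": "ready"},
    {"name": "node-2", "roles": "ceph-osd,compute", "ip": "10.20.0.4", "status": "ready"},
    {"name": "node-3", "roles": "ceph-osd,compute", "ip": "10.20.0.5", "status": "ready"},
    {"name": "node-4", "roles": "ceph-osd,compute", "ip": "10.20.0.6", "status": "ready"},
]
```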
Known Limitations, Issues and Workarounds
=========================================
.. Explain any known limitations here.

* The result of the os-nosdn-kvm_ovs_dpdk_bar-noha test scenario is not
  stable.
For more information on the OPNFV Danube release, please visit
http://www.opnfv.org/Danube