.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
*************************************
Yardstick Test Case Description TC019
*************************************
+-----------------------------------------------------------------------------+
|Control Node OpenStack Service High Availability                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC019_HA: Control node OpenStack service down|
+--------------+--------------------------------------------------------------+
|test purpose  | This test case will verify the high availability of the      |
|              | service provided by OpenStack (like nova-api, neutron-server)|
|              | on a control node.                                           |
+--------------+--------------------------------------------------------------+
|test method   | This test case kills the processes of a specific OpenStack   |
|              | service on a selected control node, then checks whether the  |
|              | request of the related OpenStack command is OK and the killed|
|              | processes are recovered.                                     |
+--------------+--------------------------------------------------------------+
|attackers     | In this test case, an attacker called "kill-process" is      |
|              | needed. This attacker includes three parameters:             |
|              | 1) fault_type: which is used for finding the attacker's      |
|              | scripts. It should always be set to "kill-process" in this   |
|              | test case.                                                   |
|              | 2) process_name: which is the process name of the specified  |
|              | OpenStack service. If there are multiple processes using the |
|              | same name on the host, all of them are killed by this        |
|              | attacker.                                                    |
|              | 3) host: which is the name of a control node being attacked. |
|              |                                                              |
|              | e.g.                                                         |
|              | -fault_type: "kill-process"                                  |
|              | -process_name: "nova-api"                                    |
+--------------+--------------------------------------------------------------+
|monitors      | In this test case, two kinds of monitor are needed:          |
|              | 1. the "openstack-cmd" monitor constantly requests a         |
|              | specific OpenStack command, which needs two parameters:      |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to              |
|              | "openstack-cmd" for this monitor.                            |
|              | 2) command_name: which is the command name used for the      |
|              | request.                                                     |
|              |                                                              |
|              | 2. the "process" monitor checks whether a process is running |
|              | on a specific node, which needs three parameters:            |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to "process"    |
|              | for this monitor.                                            |
|              | 2) process_name: which is the process name for monitoring    |
|              | 3) host: which is the name of the node running the process   |
|              |                                                              |
|              | e.g.                                                         |
|              | monitor1:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "nova image-list"                             |
|              | monitor2:                                                    |
|              | -monitor_type: "process"                                     |
|              | -process_name: "nova-api"                                    |
+--------------+--------------------------------------------------------------+
|metrics       | In this test case, there are two metrics:                    |
|              | 1) service_outage_time: which indicates the maximum outage   |
|              | time (seconds) of the specified OpenStack command request.   |
|              | 2) process_recover_time: which indicates the maximum time    |
|              | (seconds) from the process being killed to being recovered.  |
+--------------+--------------------------------------------------------------+
|test tool     | Developed by the project. Please see folder:                 |
|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+--------------+--------------------------------------------------------------+
|references    | ETSI NFV REL001                                              |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files:                |
|              | 1) test case file: opnfv_yardstick_tc019.yaml                |
|              | -Attackers: see above "attackers" description                |
|              | -waiting_time: which is the time (seconds) from the process  |
|              | being killed to stopping the monitors                        |
|              | -Monitors: see above "monitors" description                  |
|              | -SLA: see above "metrics" description                        |
|              |                                                              |
|              | 2) POD file: pod.yaml                                        |
|              | The POD configuration should be recorded in pod.yaml first.  |
|              | The "host" item in this test case will use the node name in  |
|              | the pod.yaml.                                                |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
+--------------+--------------------------------------------------------------+
|step 1        | start monitors:                                              |
|              | each monitor will run in an independent process              |
|              |                                                              |
|              | Result: The monitor info will be collected.                  |
+--------------+--------------------------------------------------------------+
|step 2        | do attacker: connect the host through SSH, and then execute  |
|              | the kill process script with param value specified by        |
|              | "process_name"                                               |
|              |                                                              |
|              | Result: Process will be killed.                              |
+--------------+--------------------------------------------------------------+
|step 3        | stop monitors after a period of time specified by            |
|              | "waiting_time"                                               |
|              |                                                              |
|              | Result: The monitor info will be aggregated.                 |
+--------------+--------------------------------------------------------------+
|step 4        | verify the SLA                                               |
|              |                                                              |
|              | Result: The test case is passed or not.                      |
+--------------+--------------------------------------------------------------+
|post-action   | It is the action when the test cases exit. It will check the |
|              | status of the specified process on the host, and restart the |
|              | process if it is not running for the next test cases.        |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
+--------------+--------------------------------------------------------------+
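
The attacker, monitors, waiting_time and SLA described above are all
expressed in the test case file. The fragment below is only a rough sketch
built from the field names in this table: the "ServiceHA" scenario type, the
node name "node1" and every numeric value are illustrative assumptions, so
check the opnfv_yardstick_tc019.yaml shipped with your Yardstick release for
the authoritative layout.

.. code-block:: yaml

    scenarios:
    -
      type: ServiceHA              # assumed scenario type, not stated above
      options:
        attackers:
        - fault_type: "kill-process"
          process_name: "nova-api"
          host: node1              # hypothetical node name from pod.yaml
        waiting_time: 10           # seconds before monitors stop; illustrative
        monitors:
        - monitor_type: "openstack-cmd"
          command_name: "nova image-list"
        - monitor_type: "process"
          process_name: "nova-api"
          host: node1
      sla:
        service_outage_time: 5     # illustrative SLA bounds (seconds)
        process_recover_time: 30

As the configuration section notes, the "host" values must match node names
recorded in pod.yaml.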