*************************************
Yardstick Test Case Description TC019
*************************************
+-----------------------------------------------------------------------------+
|Control Node Openstack Service High Availability                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC019_HA: Control node Openstack service down|
+--------------+--------------------------------------------------------------+
|test purpose  | This test case will verify the high availability of the      |
|              | service provided by OpenStack (like nova-api, neutron-server)|
|              | on a control node.                                           |
+--------------+--------------------------------------------------------------+
|test method   | This test case kills the processes of a specific OpenStack   |
|              | service on a selected control node, then checks whether      |
|              | requests of the related OpenStack command still succeed and  |
|              | whether the killed processes are recovered.                  |
+--------------+--------------------------------------------------------------+
|attackers     | In this test case, an attacker called "kill-process" is      |
|              | needed. This attacker includes three parameters:             |
|              | 1) fault_type: which is used for finding the attacker's      |
|              | scripts. It should always be set to "kill-process" in this   |
|              | test case.                                                   |
|              | 2) process_name: which is the process name of the specified  |
|              | OpenStack service. If there are multiple processes using the|
|              | same name on the host, all of them are killed by this        |
|              | attacker.                                                    |
|              | 3) host: which is the name of a control node being attacked. |
|              |                                                              |
|              | e.g.                                                         |
|              | -fault_type: "kill-process"                                  |
|              | -process_name: "nova-api"                                    |
+--------------+--------------------------------------------------------------+
|monitors      | In this test case, two kinds of monitor are needed:          |
|              | 1. the "openstack-cmd" monitor constantly requests a specific|
|              | OpenStack command, which needs two parameters:               |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to              |
|              | "openstack-cmd" for this monitor.                            |
|              | 2) command_name: which is the command name used for the      |
|              | request.                                                     |
|              |                                                              |
|              | 2. the "process" monitor checks whether a process is running |
|              | on a specific node, which needs three parameters:            |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to "process"    |
|              | for this monitor.                                            |
|              | 2) process_name: which is the process name to be monitored   |
|              | 3) host: which is the name of the node running the process   |
|              |                                                              |
|              | e.g.                                                         |
|              | monitor1:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "nova image-list"                             |
|              | monitor2:                                                    |
|              | -monitor_type: "process"                                     |
|              | -process_name: "nova-api"                                    |
+--------------+--------------------------------------------------------------+
|metrics       | In this test case, there are two metrics:                    |
|              | 1) service_outage_time: which indicates the maximum outage   |
|              | time (seconds) of the specified OpenStack command request.   |
|              | 2) process_recover_time: which indicates the maximum time    |
|              | (seconds) from the process being killed to its recovery.     |
+--------------+--------------------------------------------------------------+
|test tool     | Developed by the project. Please see folder:                 |
|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+--------------+--------------------------------------------------------------+
|references    | ETSI NFV REL001                                              |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files:                |
|              | 1) test case file: opnfv_yardstick_tc019.yaml                |
|              | -Attackers: see above "attackers" description                |
|              | -waiting_time: which is the time (seconds) from the process  |
|              | being killed to stopping the monitors                        |
|              | -Monitors: see above "monitors" description                  |
|              | -SLA: see above "metrics" description                        |
|              |                                                              |
|              | 2) POD file: pod.yaml                                        |
|              | The POD configuration should be recorded in pod.yaml first.  |
|              | The "host" item in this test case will use the node name in  |
|              | the pod.yaml.                                                |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
+--------------+--------------------------------------------------------------+
|step 1        | start monitors:                                              |
|              | each monitor will run as an independent process.             |
|              |                                                              |
|              | Result: The monitor info will be collected.                  |
+--------------+--------------------------------------------------------------+
|step 2        | do attacker: connect the host through SSH, and then execute  |
|              | the kill-process script with the parameter value specified by|
|              | "process_name"                                               |
|              |                                                              |
|              | Result: Process will be killed.                              |
+--------------+--------------------------------------------------------------+
|step 3        | stop monitors after a period of time specified by            |
|              | "waiting_time"                                               |
|              |                                                              |
|              | Result: The monitor info will be aggregated.                 |
+--------------+--------------------------------------------------------------+
|step 4        | verify the SLA                                               |
|              |                                                              |
|              | Result: The test case is passed or not.                      |
+--------------+--------------------------------------------------------------+
|post-action   | It is the action when the test case exits. It will check the |
|              | status of the specified process on the host, and restart the |
|              | process if it is not running, for the next test cases.       |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
+--------------+--------------------------------------------------------------+
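
As an illustration, the attacker, monitors, waiting_time and SLA described
above map onto the test case file roughly as follows. This is a minimal,
hypothetical sketch only: the nesting keys (scenarios, type, options, sla),
the node name "node1" and all numeric values are assumptions, and the
authoritative content is the opnfv_yardstick_tc019.yaml file itself::

    scenarios:
    -
      type: ServiceHA                  # assumed scenario type
      options:
        attackers:
        - fault_type: "kill-process"
          process_name: "nova-api"
          host: node1                  # hypothetical node name from pod.yaml
        waiting_time: 10               # assumed value (seconds)
        monitors:
        - monitor_type: "openstack-cmd"
          command_name: "nova image-list"
        - monitor_type: "process"
          process_name: "nova-api"
          host: node1
      sla:
        service_outage_time: 5         # assumed threshold (seconds)
        process_recover_time: 30       # assumed threshold (seconds)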