.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Yin Kanglin and others.
.. 14_ykl@tongji.edu.cn

*************************************
Yardstick Test Case Description TC090
*************************************

+-----------------------------------------------------------------------------+
|Control Node OpenStack Service High Availability - Database Instances        |
|                                                                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC090: Control node OpenStack service down - |
|              | database instances                                           |
+--------------+--------------------------------------------------------------+
|test purpose  | This test case verifies the high availability of the         |
|              | database instances (e.g. MySQL) used by OpenStack on the     |
|              | control node.                                                |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test method   | This test case kills the processes of the database service   |
|              | on a selected control node, then checks whether the request  |
|              | of the related OpenStack command is OK and the killed        |
|              | processes are recovered.                                     |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|attackers     | In this test case, an attacker called "kill-process" is      |
|              | needed. This attacker includes three parameters:             |
|              | 1) fault_type: which is used for finding the attacker's      |
|              | scripts. It should always be set to "kill-process" in this   |
|              | test case.                                                   |
|              | 2) process_name: which is the process name of the specified  |
|              | OpenStack service. If there are multiple processes using the |
|              | same name on the host, all of them are killed by this        |
|              | attacker.                                                    |
|              | In this case, this parameter should always be set to the     |
|              | name of the database service of OpenStack.                   |
|              | 3) host: which is the name of a control node being attacked. |
|              |                                                              |
|              | e.g.                                                         |
|              | -fault_type: "kill-process"                                  |
|              | -process_name: "mysql"                                       |
|              | -host: node1                                                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|monitors      | In this test case, two kinds of monitor are needed:          |
|              | 1. the "openstack-cmd" monitor constantly requests a specific|
|              | OpenStack command, which needs two parameters:               |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to              |
|              | "openstack-cmd" for this monitor.                            |
|              | 2) command_name: which is the command name used for request. |
|              | In this case, the command names should cover different       |
|              | OpenStack components, as shown in the examples below.        |
|              |                                                              |
|              | 2. the "process" monitor checks whether a process is running |
|              | on a specific node, which needs three parameters:            |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to "process"    |
|              | for this monitor.                                            |
|              | 2) process_name: which is the process name for monitoring    |
|              | 3) host: which is the name of the node running the process   |
|              |                                                              |
|              | Examples of the monitors are shown as follows; there are     |
|              | four instances of the "openstack-cmd" monitor, in order to   |
|              | check the database connections of different OpenStack        |
|              | components.                                                  |
|              |                                                              |
|              | monitor1:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "openstack image list"                        |
|              | monitor2:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "openstack router list"                       |
|              | monitor3:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "openstack stack list"                        |
|              | monitor4:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "openstack volume list"                       |
|              | monitor5:                                                    |
|              | -monitor_type: "process"                                     |
|              | -process_name: "mysql"                                       |
|              | -host: node1                                                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|metrics       | In this test case, there are two metrics:                    |
|              | 1) service_outage_time: which indicates the maximum outage   |
|              | time (seconds) of the specified OpenStack command request.   |
|              | 2) process_recover_time: which indicates the maximum time    |
|              | (seconds) from the process being killed to being recovered.  |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test tool     | Developed by the project. Please see folder:                 |
|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|references    | ETSI NFV REL001                                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files:                |
|              | 1) test case file: opnfv_yardstick_tc090.yaml                |
|              | -Attackers: see above "attackers" description                |
|              | -waiting_time: which is the time (seconds) from the process  |
|              | being killed to stopping the monitors                        |
|              | -Monitors: see above "monitors" description                  |
|              | -SLA: see above "metrics" description                        |
|              |                                                              |
|              | 2) POD file: pod.yaml                                        |
|              | The POD configuration should be recorded in pod.yaml first.  |
|              | The "host" item in this test case will use the node name in  |
|              | the pod.yaml.                                                |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 1        | start monitors:                                              |
|              | each monitor will run in an independent process              |
|              |                                                              |
|              | Result: The monitor info will be collected.                  |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 2        | do attacker: connect to the host through SSH, and then       |
|              | execute the kill-process script with the parameter value     |
|              | specified by "process_name"                                  |
|              |                                                              |
|              | Result: Process will be killed.                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 3        | stop monitors after a period of time specified by            |
|              | "waiting_time"                                               |
|              |                                                              |
|              | Result: The monitor info will be aggregated.                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 4        | verify the SLA                                               |
|              |                                                              |
|              | Result: The test case is passed or not.                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|post-action   | It is the action when the test case exits. It will check the |
|              | status of the specified process on the host, and restart the |
|              | process if it is not running, to prepare for the next test   |
|              | cases.                                                       |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
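
To make the table above concrete, the fragment below sketches how the
attacker, the two kinds of monitor and the SLA could be combined in a
Yardstick "ServiceHA" scenario. This is an illustrative sketch only, not the
shipped opnfv_yardstick_tc090.yaml: the key names and values (e.g.
``monitor_time``, ``max_outage_time``, ``max_recover_time`` and the node
name ``node1``) are assumptions and should be checked against the test case
file in the Yardstick repository::

  scenarios:
  -
    type: ServiceHA
    options:
      attackers:
      - fault_type: "kill-process"      # see "attackers" above
        process_name: "mysql"
        host: node1
      monitors:
      - monitor_type: "openstack-cmd"   # see monitor1 above
        command_name: "openstack image list"
        monitor_time: 10                # assumed polling duration (seconds)
        sla:
          max_outage_time: 5            # assumed SLA value
      - monitor_type: "process"         # see monitor5 above
        process_name: "mysql"
        host: node1
        monitor_time: 30                # assumed; compare "waiting_time"
        sla:
          max_recover_time: 30          # assumed SLA value
    nodes:
      node1: node1.LF                   # node name as recorded in pod.yaml

Only one "openstack-cmd" monitor is shown for brevity; the remaining three
(router, stack and volume list) would follow the same pattern.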