.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Yin Kanglin and others.
.. 14_ykl@tongji.edu.cn

*************************************
Yardstick Test Case Description TC057
*************************************

+-----------------------------------------------------------------------------+
|OpenStack Controller Cluster Management Service High Availability            |
|                                                                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC057_HA: OpenStack Controller Cluster       |
|              | Management Service High Availability                         |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test purpose  | This test case verifies the quorum configuration of the      |
|              | cluster manager (Pacemaker) on controller nodes. When a      |
|              | controller node that holds all active application            |
|              | resources fails to communicate with the other cluster        |
|              | nodes (via corosync), the test case checks whether the       |
|              | standby application resources take the place of those        |
|              | active application resources, which should be regarded       |
|              | as down by the cluster manager.                              |
+--------------+--------------------------------------------------------------+
|test method   | This test case kills the processes of the cluster            |
|              | messaging service (corosync) on a selected controller        |
|              | node (the node that holds the active application             |
|              | resources), then checks whether the active application       |
|              | resources are switched to other controller nodes and         |
|              | whether the OpenStack commands still work.                   |
+--------------+--------------------------------------------------------------+
|attackers     | In this test case, an attacker called "kill-process" is      |
|              | needed. This attacker includes three parameters:             |
|              | 1) fault_type: which is used for finding the attacker's      |
|              | scripts. It should always be set to "kill-process" in        |
|              | this test case.                                              |
|              | 2) process_name: which is the process name of the service    |
|              | to be killed. If multiple processes use the same name on     |
|              | the host, all of them are killed by this attacker.           |
|              | 3) host: which is the name of the control node being         |
|              | attacked.                                                    |
|              |                                                              |
|              | In this case, the process name should be set to              |
|              | "corosync", for example:                                     |
|              | -fault_type: "kill-process"                                  |
|              | -process_name: "corosync"                                    |
|              | -host: node1                                                 |
+--------------+--------------------------------------------------------------+
|monitors      | In this test case, one kind of monitor is needed:            |
|              |                                                              |
|              | 1. the "openstack-cmd" monitor constantly requests a         |
|              |    specific OpenStack command, which needs two parameters:   |
|              |                                                              |
|              |    1. monitor_type: which is used for finding the monitor    |
|              |       class and related scripts. It should always be set     |
|              |       to "openstack-cmd" for this monitor.                   |
|              |    2. command_name: which is the command name used for       |
|              |       the request                                            |
|              |                                                              |
|              | In this case, the command_name of each monitor should        |
|              | request a service that is managed by the cluster             |
|              | manager. (Since rabbitmq and haproxy are managed by          |
|              | Pacemaker, most OpenStack services can be used to check      |
|              | high availability in this case.)                             |
|              |                                                              |
|              | (e.g.)                                                       |
|              | monitor1:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "nova image-list"                             |
|              | monitor2:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "neutron router-list"                         |
|              | monitor3:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "heat stack-list"                             |
|              | monitor4:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "cinder list"                                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|checkers      | In this test case, a checker is needed; the checker will     |
|              | check the status of application resources in Pacemaker.      |
|              | The checker has five parameters:                             |
|              | 1) checker_type: which is used for finding the result        |
|              | checker class and related scripts. In this case the          |
|              | checker type will be "pacemaker-check-resource"              |
|              | 2) resource_name: the application resource name              |
|              | 3) resource_status: the expected status of the resource      |
|              | 4) expectedValue: the expected value for the output of       |
|              | the checker script; in this case the expected value will     |
|              | be the identifier in the cluster manager                     |
|              | 5) condition: whether the expected value is in the output    |
|              | of the checker script or is exactly the same as the          |
|              | output.                                                      |
|              | (note: pcs is required to be installed on the controller     |
|              | node in order to run this checker)                           |
|              |                                                              |
|              | (e.g.)                                                       |
|              | checker1:                                                    |
|              | -checker_type: "pacemaker-check-resource"                    |
|              | -resource_name: "p_rabbitmq-server"                          |
|              | -resource_status: "Stopped"                                  |
|              | -expectedValue: "node-1"                                     |
|              | -condition: "in"                                             |
|              | checker2:                                                    |
|              | -checker_type: "pacemaker-check-resource"                    |
|              | -resource_name: "p_rabbitmq-server"                          |
|              | -resource_status: "Master"                                   |
|              | -expectedValue: "node-2"                                     |
|              | -condition: "in"                                             |
+--------------+--------------------------------------------------------------+
|metrics       | In this test case, there is one metric:                      |
|              | 1) service_outage_time: which indicates the maximum          |
|              | outage time (seconds) of the specified OpenStack command     |
|              | request.                                                     |
+--------------+--------------------------------------------------------------+
|test tool     | None. Self-developed.                                        |
+--------------+--------------------------------------------------------------+
|references    | ETSI NFV REL001                                              |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files:                |
|              | 1) test case file: opnfv_yardstick_tc057.yaml                |
|              | -Attackers: see above "attackers" description                |
|              | -Monitors: see above "monitors" description                  |
|              | -Checkers: see above "checkers" description                  |
|              | -Steps: the test case execution step, see "test sequence"    |
|              | description below                                            |
|              |                                                              |
|              | 2) POD file: pod.yaml                                        |
|              | The POD configuration should be recorded in pod.yaml         |
|              | first. The "host" item in this test case will use the        |
|              | node name in the pod.yaml.                                   |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 1        | start monitors:                                              |
|              | each monitor will run in an independent process              |
|              |                                                              |
|              | Result: The monitor info will be collected.                  |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 2        | do attacker: connect the host through SSH, and then          |
|              | execute the kill-process script with the param value         |
|              | specified by "process_name"                                  |
|              |                                                              |
|              | Result: Process will be killed.                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 3        | do checker: check whether the status of the application      |
|              | resources on different nodes is updated                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 4        | stop monitors after a period of time specified by            |
|              | "waiting_time"                                               |
|              |                                                              |
|              | Result: The monitor info will be aggregated.                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 5        | verify the SLA                                               |
|              |                                                              |
|              | Result: The test case is passed or not.                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|post-action   | This is the action taken when the test case exits. It        |
|              | will check the status of the cluster messaging process       |
|              | (corosync) on the host, and restart the process if it is     |
|              | not running, for the next test cases.                        |
|              | Notice: This post-action uses the 'lsb_release' command      |
|              | to check the host Linux distribution and determine the       |
|              | OpenStack service name to restart the process. Lack of       |
|              | 'lsb_release' on the host may cause failure to restart       |
|              | the process.                                                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
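
The attacker, monitor, and checker descriptions above map onto the scenario
options of the test case YAML file. The fragment below is an illustrative
sketch only, not the shipped file: it assumes the generic Yardstick HA
scenario schema, and the scenario type name, key nesting, and SLA fields
should be verified against opnfv_yardstick_tc057.yaml before use.

.. code-block:: yaml

   # Illustrative sketch only -- assumes the generic Yardstick HA
   # scenario schema; verify all keys against opnfv_yardstick_tc057.yaml.
   scenarios:
   -
     type: "GeneralHA"
     options:
       attackers:
         -
           fault_type: "kill-process"
           process_name: "corosync"
           host: node1
       monitors:
         -
           monitor_type: "openstack-cmd"
           command_name: "nova image-list"
       checkers:
         -
           checker_type: "pacemaker-check-resource"
           resource_name: "p_rabbitmq-server"
           resource_status: "Stopped"
           expectedValue: "node-1"
           condition: "in"
     sla:
       outage_time: 5
       action: monitor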