.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Yin Kanglin and others.
.. 14_ykl@tongji.edu.cn

*************************************
Yardstick Test Case Description TC057
*************************************
+-----------------------------------------------------------------------------+
|OpenStack Controller Cluster Management Service High Availability            |
+==============+==============================================================+
|test case id  | OPNFV_YARDSTICK_TC057: OpenStack Controller Cluster          |
|              | Management Service High Availability                         |
+--------------+--------------------------------------------------------------+
|test purpose  | This test case will verify the quorum configuration of the  |
|              | cluster manager (pacemaker) on controller nodes. When a     |
|              | controller node, which holds all active application         |
|              | resources, fails to communicate with the other cluster      |
|              | nodes (via corosync), the test case will check whether the  |
|              | standby application resources take the place of the active  |
|              | application resources, which should be regarded as down by  |
|              | the cluster manager.                                        |
+--------------+--------------------------------------------------------------+
|test method   | This test case kills the processes of the cluster messaging |
|              | service (corosync) on a selected controller node (the node  |
|              | holding the active application resources), then checks      |
|              | whether the active application resources are switched to    |
|              | other controller nodes and whether the OpenStack commands   |
|              | still work.                                                  |
+--------------+--------------------------------------------------------------+
|attackers     | In this test case, an attacker called "kill-process" is     |
|              | needed. This attacker includes three parameters:            |
|              | 1) fault_type: which is used for finding the attacker's     |
|              | scripts. It should always be set to "kill-process" in this  |
|              | test case.                                                  |
|              | 2) process_name: which is the process name of the cluster   |
|              | messaging service. If multiple processes use the same name  |
|              | on the host, all of them are killed by this attacker.       |
|              | 3) host: which is the name of a control node being attacked.|
|              |                                                              |
|              | In this case, the process name should be set to "corosync", |
|              | e.g.                                                         |
|              | -fault_type: "kill-process"                                  |
|              | -process_name: "corosync"                                    |
+--------------+--------------------------------------------------------------+
|monitors      | In this test case, one kind of monitor is needed:           |
|              | 1. the "openstack-cmd" monitor constantly requests a        |
|              | specific OpenStack command, which needs two parameters:     |
|              | 1) monitor_type: which is used for finding the monitor class|
|              | and related scripts. It should always be set to             |
|              | "openstack-cmd" for this monitor.                           |
|              | 2) command_name: which is the command name used for request.|
|              |                                                              |
|              | In this case, the command_name of each monitor should be a  |
|              | service that is managed by the cluster manager. (Since      |
|              | rabbitmq and haproxy are managed by pacemaker, most         |
|              | OpenStack services can be used to check high availability   |
|              | in this case.)                                              |
|              |                                                              |
|              | (e.g.)                                                       |
|              | monitor1:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "nova image-list"                             |
|              | monitor2:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "neutron router-list"                         |
|              | monitor3:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "heat stack-list"                             |
|              | monitor4:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "cinder list"                                 |
+--------------+--------------------------------------------------------------+
|checkers      | In this test case, a checker is needed. The checker will    |
|              | check the status of application resources in pacemaker.     |
|              | The checker has five parameters:                            |
|              | 1) checker_type: which is used for finding the result       |
|              | checker class and related scripts. In this case the checker |
|              | type will be "pacemaker-check-resource".                    |
|              | 2) resource_name: the application resource name.            |
|              | 3) resource_status: the expected status of the resource.    |
|              | 4) expectedValue: the expected value for the output of the  |
|              | checker script; in this case the expected value will be the |
|              | node identifier in the cluster manager.                     |
|              | 5) condition: whether the expected value is in the output of|
|              | the checker script or is exactly the same as the output.    |
|              | (Note: pcs is required to be installed on the controller    |
|              | node in order to run this checker.)                         |
|              |                                                              |
|              | (e.g.)                                                       |
|              | checker1:                                                    |
|              | -checker_type: "pacemaker-check-resource"                    |
|              | -resource_name: "p_rabbitmq-server"                          |
|              | -resource_status: "Stopped"                                  |
|              | -expectedValue: "node-1"                                     |
|              | -condition: "in"                                             |
|              | checker2:                                                    |
|              | -checker_type: "pacemaker-check-resource"                    |
|              | -resource_name: "p_rabbitmq-server"                          |
|              | -resource_status: "Master"                                   |
|              | -expectedValue: "node-2"                                     |
|              | -condition: "in"                                             |
+--------------+--------------------------------------------------------------+
|metrics       | In this test case, there is one metric:                     |
|              | 1) service_outage_time: which indicates the maximum outage  |
|              | time (seconds) of the specified OpenStack command request.  |
+--------------+--------------------------------------------------------------+
|test tool     | None. Self-developed.                                        |
+--------------+--------------------------------------------------------------+
|references    | ETSI NFV REL001                                              |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files:               |
|              | 1) test case file: opnfv_yardstick_tc057.yaml               |
|              | -Attackers: see above "attackers" description               |
|              | -Monitors: see above "monitors" description                 |
|              | -Checkers: see above "checkers" description                 |
|              | -Steps: the test case execution step, see "test sequence"   |
|              | description below                                           |
|              | (See the example snippet below this table.)                 |
|              |                                                              |
|              | 2) POD file: pod.yaml                                       |
|              | The POD configuration should be recorded in pod.yaml first. |
|              | The "host" item in this test case will use the node name in |
|              | the pod.yaml.                                               |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
+--------------+--------------------------------------------------------------+
|step 1        | start monitors:                                              |
|              | each monitor will run in an independent process              |
|              |                                                              |
|              | Result: The monitor info will be collected.                  |
+--------------+--------------------------------------------------------------+
|step 2        | do attacker: connect the host through SSH, and then execute |
|              | the kill process script with param value specified by       |
|              | "process_name"                                               |
|              |                                                              |
|              | Result: Process will be killed.                              |
+--------------+--------------------------------------------------------------+
|step 3        | do checker: check whether the status of application         |
|              | resources on different nodes is updated                      |
+--------------+--------------------------------------------------------------+
|step 4        | stop monitors after a period of time specified by           |
|              | "waiting_time"                                               |
|              |                                                              |
|              | Result: The monitor info will be aggregated.                 |
+--------------+--------------------------------------------------------------+
|step 5        | verify the SLA                                               |
|              |                                                              |
|              | Result: The test case is passed or not.                      |
+--------------+--------------------------------------------------------------+
|post-action   | It is the action when the test cases exit. It will check    |
|              | the status of the cluster messaging process (corosync) on   |
|              | the host, and restart the process if it is not running for  |
|              | the next test cases.                                         |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
+--------------+--------------------------------------------------------------+
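
Below is a minimal, illustrative sketch of how the attacker, monitors and
checkers described in the table above might be expressed in the scenario
options of opnfv_yardstick_tc057.yaml. It only restates the parameters listed
in this test case description; the surrounding keys (``attackers``,
``monitors``, ``resultCheckers``) and the node names are assumptions following
the general Yardstick HA test case layout, so the YAML file shipped with
Yardstick remains the authoritative reference.

.. code-block:: yaml

    # Illustrative sketch only -- key names and node names are assumptions;
    # see opnfv_yardstick_tc057.yaml in the Yardstick repository for the
    # authoritative definition.
    scenarios:
      -
        type: GeneralHA
        options:
          attackers:
            -
              fault_type: "kill-process"      # selects the kill-process attacker scripts
              process_name: "corosync"        # cluster messaging service to be killed
              host: node1                     # attacked controller node (name from pod.yaml)
          monitors:
            -
              monitor_type: "openstack-cmd"   # constantly issues an OpenStack command
              command_name: "nova image-list"
            -
              monitor_type: "openstack-cmd"
              command_name: "neutron router-list"
            -
              monitor_type: "openstack-cmd"
              command_name: "heat stack-list"
            -
              monitor_type: "openstack-cmd"
              command_name: "cinder list"
          resultCheckers:
            -
              checker_type: "pacemaker-check-resource"
              resource_name: "p_rabbitmq-server"
              resource_status: "Stopped"      # expected status on the attacked node
              expectedValue: "node-1"
              condition: "in"
            -
              checker_type: "pacemaker-check-resource"
              resource_name: "p_rabbitmq-server"
              resource_status: "Master"       # active resource expected on another node
              expectedValue: "node-2"
              condition: "in"
        nodes:
          node1: node1.LF                     # mapping to the controller entry in pod.yaml (assumed)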
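
Once the POD information is recorded in pod.yaml, the test case can be
launched with the standard Yardstick CLI, for example
``yardstick task start tests/opnfv/test_cases/opnfv_yardstick_tc057.yaml``
(the exact path depends on how Yardstick is installed). The
service_outage_time values reported by the monitors are then checked against
the SLA in step 5 to produce the test verdict described above.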