.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Yin Kanglin and others.
.. 14_ykl@tongji.edu.cn

*************************************
Yardstick Test Case Description TC050
*************************************

+-----------------------------------------------------------------------------+
|OpenStack Controller Node Network High Availability                          |
|                                                                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC050: OpenStack Controller Node Network     |
|              | High Availability                                            |
+--------------+--------------------------------------------------------------+
|test purpose  | This test case verifies the high availability of the control |
|              | node. When one controller node loses its network connection, |
|              | the OpenStack services on that node break down. These        |
|              | OpenStack services should still be accessible through the    |
|              | other controller nodes, and the services on the failed       |
|              | controller node should be isolated.                          |
+--------------+--------------------------------------------------------------+
|test method   | This test case turns off the network interfaces of a         |
|              | specified control node, then checks whether all services     |
|              | provided by the control node are still available, using      |
|              | several monitor tools.                                       |
+--------------+--------------------------------------------------------------+
|attackers     | In this test case, an attacker called "close-interface" is   |
|              | needed. This attacker includes three parameters:             |
|              | 1) fault_type: which is used for finding the attacker's      |
|              | scripts. It should always be set to "close-interface" in     |
|              | this test case.                                              |
|              | 2) host: which is the name of the control node being         |
|              | attacked.                                                    |
|              | 3) interface: the network interface to be turned off.        |
|              |                                                              |
|              | There are four instances of the "close-interface" attacker:  |
|              | attacker1 (for the public network):                          |
|              | -fault_type: "close-interface"                               |
|              | -host: node1                                                 |
|              | -interface: "br-ex"                                          |
|              | attacker2 (for the management network):                      |
|              | -fault_type: "close-interface"                               |
|              | -host: node1                                                 |
|              | -interface: "br-mgmt"                                        |
|              | attacker3 (for the storage network):                         |
|              | -fault_type: "close-interface"                               |
|              | -host: node1                                                 |
|              | -interface: "br-storage"                                     |
|              | attacker4 (for the private network):                         |
|              | -fault_type: "close-interface"                               |
|              | -host: node1                                                 |
|              | -interface: "br-mesh"                                        |
+--------------+--------------------------------------------------------------+
|monitors      | In this test case, the monitor named "openstack-cmd" is      |
|              | needed. The monitor needs two parameters:                    |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to              |
|              | "openstack-cmd" for this monitor.                            |
|              | 2) command_name: which is the command name used for the      |
|              | request.                                                     |
|              |                                                              |
|              | There are four instances of the "openstack-cmd" monitor:     |
|              | monitor1:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "nova image-list"                             |
|              | monitor2:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "neutron router-list"                         |
|              | monitor3:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "heat stack-list"                             |
|              | monitor4:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "cinder list"                                 |
+--------------+--------------------------------------------------------------+
|metrics       | In this test case, there is one metric:                      |
|              | 1) service_outage_time: which indicates the maximum outage   |
|              | time (seconds) of the specified OpenStack command request.   |
+--------------+--------------------------------------------------------------+
|test tool     | Developed by the project. Please see folder:                 |
|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|references    | ETSI NFV REL001                                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files:                |
|              | 1) test case file: opnfv_yardstick_tc050.yaml                |
|              | -Attackers: see above "attackers" description                |
|              | -waiting_time: which is the time (seconds) from the attack   |
|              | being launched to stopping the monitors                      |
|              | -Monitors: see above "monitors" description                  |
|              | -SLA: see above "metrics" description                        |
|              |                                                              |
|              | 2) POD file: pod.yaml                                        |
|              | The POD configuration should be recorded in pod.yaml first.  |
|              | The "host" item in this test case will use the node name in  |
|              | the pod.yaml.                                                |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 1        | start monitors:                                              |
|              | each monitor will run in an independent process              |
|              |                                                              |
|              | Result: The monitor info will be collected.                  |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 2        | do attacker: connect the host through SSH, and then execute  |
|              | the network-interface turnoff script with the parameter      |
|              | value specified by "interface".                              |
|              |                                                              |
|              | Result: Network interfaces will be turned down.              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 3        | stop monitors after a period of time specified by            |
|              | "waiting_time"                                               |
|              |                                                              |
|              | Result: The monitor info will be aggregated.                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 4        | verify the SLA                                               |
|              |                                                              |
|              | Result: The test case is passed or not.                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|post-action   | It is the action taken when the test case exits. It turns    |
|              | up the network interface of the control node if it is not    |
|              | already up.                                                  |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
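
For orientation, the attacker, monitor, and SLA parameters described in the
table combine into the test case file roughly as sketched below. This is an
illustrative fragment only, not the authoritative file: the scenario type
name (``ServiceHA``), the node label (``node1.LF``), and the numeric values
shown here are assumptions, and the exact keys should be taken from
opnfv_yardstick_tc050.yaml in the Yardstick repository.

.. code-block:: yaml

    # Illustrative sketch only; see opnfv_yardstick_tc050.yaml for the
    # authoritative structure. Values below are assumed, not prescribed.
    scenarios:
      - type: ServiceHA             # assumed scenario type name
        options:
          attackers:
            - fault_type: "close-interface"
              host: node1
              interface: "br-mgmt"  # one of the four attacked interfaces
          monitors:
            - monitor_type: "openstack-cmd"
              command_name: "nova image-list"
          waiting_time: 10          # seconds before monitors are stopped
        nodes:
          node1: node1.LF           # node name as recorded in pod.yaml
        sla:
          outage_time: 5            # max service_outage_time (seconds)
          action: monitor

In a full test case file one such attacker/monitor pair would be listed for
each of the four networks and four OpenStack commands described above.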