.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Yin Kanglin and others.
.. 14_ykl@tongji.edu.cn

*************************************
Yardstick Test Case Description TC054
*************************************
+-----------------------------------------------------------------------------+
|OpenStack Virtual IP High Availability                                       |
|                                                                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC054: OpenStack Virtual IP High             |
|              | Availability                                                 |
+--------------+--------------------------------------------------------------+
|test purpose  | This test case will verify the high availability of the      |
|              | virtual ip in the environment. When the virtual ip master    |
|              | node is abnormally shut down, the connection to the virtual  |
|              | ip and the services bound to it should still be OK.          |
+--------------+--------------------------------------------------------------+
|test method   | This test case shuts down the virtual IP master node with    |
|              | some fault injection tools, then checks whether the virtual  |
|              | ips can be pinged and the services bound to the virtual ip   |
|              | are OK with some monitor tools.                              |
+--------------+--------------------------------------------------------------+
|attackers     | In this test case, an attacker called "control-shutdown" is  |
|              | needed. This attacker includes two parameters:               |
|              | 1) fault_type: which is used for finding the attacker's      |
|              | scripts. It should always be set to "control-shutdown" in    |
|              | this test case.                                              |
|              | 2) host: which is the name of a control node being attacked. |
|              |                                                              |
|              | In this case the host should be the virtual ip master node,  |
|              | that means the host ip is the virtual ip, for example:       |
|              | -fault_type: "control-shutdown"                              |
|              | -host: node1(the VIP Master node)                            |
+--------------+--------------------------------------------------------------+
|monitors      | In this test case, two kinds of monitor are needed:          |
|              | 1. the "ip_status" monitor that pings a specific ip to check |
|              | the connectivity of this ip, which needs two parameters:     |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to "ip_status"  |
|              | for this monitor.                                            |
|              | 2) ip_address: the ip to be pinged. In this case, ip_address |
|              | should be the virtual IP.                                    |
|              |                                                              |
|              | 2. the "openstack-cmd" monitor constantly requests a specific|
|              | OpenStack command, which needs two parameters:               |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to              |
|              | "openstack-cmd" for this monitor.                            |
|              | 2) command_name: which is the command name used for request. |
|              |                                                              |
|              | e.g.                                                         |
|              | monitor1:                                                    |
|              | -monitor_type: "ip_status"                                   |
|              | -host: 192.168.0.2                                           |
|              | monitor2:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "nova image-list"                             |
+--------------+--------------------------------------------------------------+
|metrics       | In this test case, there are two metrics:                    |
|              | 1) ping_outage_time: which indicates the maximum outage time |
|              | to ping the specified host.                                  |
|              | 2) service_outage_time: which indicates the maximum outage   |
|              | time (seconds) of the specified OpenStack command request.   |
+--------------+--------------------------------------------------------------+
|test tool     | Developed by the project. Please see folder:                 |
|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+--------------+--------------------------------------------------------------+
|references    | ETSI NFV REL001                                              |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files:                |
|              | 1) test case file: opnfv_yardstick_tc054.yaml                |
|              | -Attackers: see above "attackers" description                |
|              | -waiting_time: which is the time (seconds) from the process  |
|              | being killed to stopping the monitors                        |
|              | -Monitors: see above "monitors" description                  |
|              | -SLA: see above "metrics" description                        |
|              |                                                              |
|              | 2) POD file: pod.yaml                                        |
|              | The POD configuration should be recorded in pod.yaml first.  |
|              | The "host" item in this test case will use the node name in  |
|              | the pod.yaml.                                                |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
+--------------+--------------------------------------------------------------+
|step 1        | start monitors:                                              |
|              | each monitor will run in an independent process              |
|              |                                                              |
|              | Result: The monitor info will be collected.                  |
+--------------+--------------------------------------------------------------+
|step 2        | do attacker: connect the host through SSH, and then execute  |
|              | the shutdown script on the VIP master node.                  |
|              |                                                              |
|              | Result: The VIP master node will be shut down.               |
+--------------+--------------------------------------------------------------+
|step 3        | stop monitors after a period of time specified by            |
|              | "waiting_time"                                               |
|              |                                                              |
|              | Result: The monitor info will be aggregated.                 |
+--------------+--------------------------------------------------------------+
|step 4        | verify the SLA                                               |
|              |                                                              |
|              | Result: The test case is passed or not.                      |
+--------------+--------------------------------------------------------------+
|post-action   | It is the action when the test cases exit. It restarts the   |
|              | original VIP master node if it is not restarted.             |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
+--------------+--------------------------------------------------------------+
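
Below is a sketch of how the "attackers" and "monitors" sections described
above might look in opnfv_yardstick_tc054.yaml. The key names follow the
parameter descriptions in this document; the surrounding structure and the
example values (node1, 192.168.0.2) are illustrative assumptions, not taken
from the actual test case file::

    attackers:
      -
        fault_type: "control-shutdown"
        host: node1                      # the current VIP master node
    monitors:
      -
        monitor_type: "ip_status"
        ip_address: "192.168.0.2"        # the virtual IP to be pinged
      -
        monitor_type: "openstack-cmd"
        command_name: "nova image-list"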