Merge "Add grafana config for TC072_Network Latency, Throughput, Packet Loss and...
author liang gao <jean.gaoliang@huawei.com>
Mon, 1 Aug 2016 07:05:28 +0000 (07:05 +0000)
committer Gerrit Code Review <gerrit@172.30.200.206>
Mon, 1 Aug 2016 07:05:29 +0000 (07:05 +0000)
docs/userguide/opnfv_yardstick_tc073.rst [new file with mode: 0644]
samples/storperf.yaml [new file with mode: 0644]
tests/ci/prepare_env.sh
tests/opnfv/test_cases/opnfv_yardstick_tc073.yaml [new file with mode: 0755]
tests/unit/benchmark/scenarios/storage/test_storperf.py [new file with mode: 0644]
yardstick/benchmark/scenarios/storage/storperf.py [new file with mode: 0644]
yardstick/resources/scripts/install/storperf.bash [moved from yardstick/resources/script/install/storperf.bash with 100% similarity]
yardstick/resources/scripts/remove/storperf.bash [moved from yardstick/resources/script/remove/storperf.bash with 100% similarity]

diff --git a/docs/userguide/opnfv_yardstick_tc073.rst b/docs/userguide/opnfv_yardstick_tc073.rst
new file mode 100644 (file)
index 0000000..a6499ea
--- /dev/null
@@ -0,0 +1,81 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC073
+*************************************
+
+.. _netperf: http://www.netperf.org/netperf/training/Netperf.html
+
++-----------------------------------------------------------------------------+
+|Throughput per NFVI node test                                                |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC073_Network latency and throughput between |
+|              | nodes                                                        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Network latency and throughput                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the IaaS network performance with regard to      |
+|              | flows and throughput, such as whether and how different      |
+|              | packet sizes and numbers of flows affect the throughput      |
+|              | between nodes in one pod.                                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc073.yaml                             |
+|              |                                                              |
+|              | Packet size: default 1024 bytes.                             |
+|              |                                                              |
+|              | Test length: default 20 seconds.                             |
+|              |                                                              |
+|              | The client and server are distributed on different nodes.    |
+|              |                                                              |
+|              | For SLA max_mean_latency is set to 100.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | netperf                                                      |
+|              | Netperf is a software application that provides network      |
+|              | bandwidth testing between two hosts on a network. It         |
+|              | supports Unix domain sockets, TCP, SCTP, DLPI and UDP via    |
+|              | BSD Sockets. Netperf provides a number of predefined tests   |
+|              | e.g. to measure bulk (unidirectional) data transfer or       |
+|              | request response performance.                                |
+|              | (netperf is not always part of a Linux distribution, hence   |
+|              | it needs to be installed.)                                   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | netperf Man pages                                            |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes and       |
+|              | test duration. Default values exist.                         |
+|              |                                                              |
+|              | SLA (optional): max_mean_latency                             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The POD can be reached by an external IP and logged in to    |
+|conditions    | via SSH.                                                     |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | Install netperf tool on each specified node, one is as the   |
+|              | server, and the other as the client.                         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | Log on to the client node and use the netperf command to     |
+|              | execute the network performance test.                        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | The throughput results stored.                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
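The SLA and verdict rules in the table (max_mean_latency of 100; fail only on an SLA breach or an execution problem) can be sketched in Python. The helper below is a hypothetical illustration of that rule, not part of the Yardstick code:

```python
def tc073_verdict(mean_latency_ms, max_mean_latency_ms=100.0,
                  execution_error=False):
    """Fail only if the SLA is not met or the run itself failed
    (hypothetical helper mirroring the test verdict rule)."""
    if execution_error:
        return "FAIL"
    return "PASS" if mean_latency_ms <= max_mean_latency_ms else "FAIL"


# 42 ms mean latency meets the default 100 ms bound; 150 ms does not.
assert tc073_verdict(42.0) == "PASS"
assert tc073_verdict(150.0) == "FAIL"
```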
diff --git a/samples/storperf.yaml b/samples/storperf.yaml
new file mode 100644 (file)
index 0000000..815ef0d
--- /dev/null
@@ -0,0 +1,31 @@
+---
+# Sample StorPerf benchmark task config file
+# StorPerf is a tool to measure block and object storage performance in an NFVI
+
+schema: "yardstick:task:0.1"
+
+scenarios:
+-
+  type: StorPerf
+  options:
+    agent_count: 1
+    agent_image: "Ubuntu 14.04"
+    public_network: "ext-net"
+    volume_size: 2
+    # target:
+    # deadline:
+    # nossd:
+    # nowarm:
+    block_sizes: "4096"
+    queue_depths: "4"
+    workload: "ws"
+    StorPerf_ip: "192.168.23.2"
+    query_interval: 10
+    timeout: 600
+
+  runner:
+    type: Iteration
+    iterations: 1
+
+context:
+  type: Dummy
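A quick sanity check over the options mapping above can catch a truncated task file before it is submitted. This is a sketch only; the required-key set is an assumption drawn from the sample, not a schema StorPerf publishes:

```python
REQUIRED_OPTIONS = {"agent_count", "public_network", "volume_size",
                    "block_sizes", "queue_depths", "workload",
                    "StorPerf_ip", "query_interval", "timeout"}


def validate_storperf_options(options):
    """Return the set of assumed-required option keys that are missing."""
    return REQUIRED_OPTIONS - set(options)


# The sample config above, as a plain dict: nothing is missing.
sample = {
    "agent_count": 1, "agent_image": "Ubuntu 14.04",
    "public_network": "ext-net", "volume_size": 2,
    "block_sizes": "4096", "queue_depths": "4", "workload": "ws",
    "StorPerf_ip": "192.168.23.2", "query_interval": 10, "timeout": 600,
}
assert validate_storperf_options(sample) == set()
```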
diff --git a/tests/ci/prepare_env.sh b/tests/ci/prepare_env.sh
index 723a04a..35118b1 100755 (executable)
@@ -55,3 +55,28 @@ export EXTERNAL_NETWORK INSTALLER_TYPE DEPLOY_TYPE NODE_NAME
 
# Prepare an admin-rc file for StorPerf integration
 $YARDSTICK_REPO_DIR/tests/ci/prepare_storperf_admin-rc.sh
+
+# Fetch the id_rsa file from the jump server
+verify_connectivity() {
+    local ip=$1
+    echo "Verifying connectivity to $ip..."
+    for i in $(seq 0 10); do
+        if ping -c 1 -W 1 "$ip" > /dev/null; then
+            echo "$ip is reachable!"
+            return 0
+        fi
+        sleep 1
+    done
+    error "Cannot reach $ip."
+}
+
+ssh_options="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
+
+if [ "$INSTALLER_TYPE" == "fuel" ]; then
+    #ip_fuel="10.20.0.2"
+    verify_connectivity $INSTALLER_IP
+    echo "Fetching id_rsa file from jump_server $INSTALLER_IP..."
+    sshpass -p r00tme scp $ssh_options \
+    root@${INSTALLER_IP}:~/.ssh/id_rsa /root/.ssh/id_rsa &> /dev/null
+fi
+
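The `verify_connectivity` retry loop added above is an instance of a generic poll-until-success pattern. A Python sketch of the same shape, where the probe callable stands in for the `ping` check:

```python
import time


def wait_until(probe, attempts=10, interval=0.0):
    """Call probe() up to `attempts` times, pausing `interval` seconds
    between tries; return True as soon as one call succeeds."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(interval)
    return False


# Succeeds on the third try, so two failed probes are tolerated.
tries = iter([False, False, True])
assert wait_until(lambda: next(tries))
```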
diff --git a/tests/opnfv/test_cases/opnfv_yardstick_tc073.yaml b/tests/opnfv/test_cases/opnfv_yardstick_tc073.yaml
new file mode 100755 (executable)
index 0000000..fd95b8c
--- /dev/null
@@ -0,0 +1,37 @@
+---
+# Yardstick TC073 config file
+# measure network latency and throughput using netperf
+# There are two sample scenarios: bulk test and request/response test
+# In bulk test, UDP_STREAM and TCP_STREAM can be used
+# send_msg_size and recv_msg_size are options of bulk test
+# In req/rsp test, TCP_RR TCP_CRR UDP_RR can be used
+# req_rsp_size is option of req/rsp test
+
+schema: "yardstick:task:0.1"
+{% set host = host or "node1.LF" %}
+{% set target = target or "node2.LF" %}
+{% set pod_info = pod_info or "etc/yardstick/nodes/compass_sclab_physical/pod.yaml" %}
+scenarios:
+-
+  type: NetperfNode
+  options:
+    testname: 'UDP_STREAM'
+    send_msg_size: 1024
+    duration: 20
+
+  host: {{host}}
+  target: {{target}}
+
+  runner:
+    type: Iteration
+    iterations: 1
+    interval: 1
+  sla:
+    mean_latency: 100
+    action: monitor
+
+context:
+  type: Node
+  name: LF
+  file: {{pod_info}}
+
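The `{% set host = host or "node1.LF" %}` lines give each template variable an overridable default: an explicitly passed value wins, otherwise the literal applies. The same precedence expressed in plain Python:

```python
def resolve(value, default):
    """Jinja2-style ``value or default``: any unset or falsy value
    falls back to the default."""
    return value or default


# An explicit host wins; an unset (None) one takes the default.
assert resolve("node3.LF", "node1.LF") == "node3.LF"
assert resolve(None, "node1.LF") == "node1.LF"
```

Note that, like Jinja2's `or`, this also replaces empty strings and other falsy values, not just missing ones.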
diff --git a/tests/unit/benchmark/scenarios/storage/test_storperf.py b/tests/unit/benchmark/scenarios/storage/test_storperf.py
new file mode 100644 (file)
index 0000000..d87ed73
--- /dev/null
@@ -0,0 +1,214 @@
+#!/usr/bin/env python
+
+##############################################################################
+# Copyright (c) 2016 Huawei Technologies Co.,Ltd.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+
+# Unittest for yardstick.benchmark.scenarios.storage.storperf.StorPerf
+
+import mock
+import unittest
+import requests
+import json
+
+from yardstick.benchmark.scenarios.storage import storperf
+
+
+def mocked_requests_config_post(*args, **kwargs):
+    class MockResponseConfigPost:
+        def __init__(self, json_data, status_code):
+            self.content = json_data
+            self.status_code = status_code
+
+    return MockResponseConfigPost('{"stack_id": "dac27db1-3502-4300-b301-91c64e6a1622", "stack_created": false}', 200)
+
+
+def mocked_requests_config_get(*args, **kwargs):
+    class MockResponseConfigGet:
+        def __init__(self, json_data, status_code):
+            self.content = json_data
+            self.status_code = status_code
+
+    return MockResponseConfigGet('{"stack_id": "dac27db1-3502-4300-b301-91c64e6a1622", "stack_created": true}', 200)
+
+
+def mocked_requests_job_get(*args, **kwargs):
+    class MockResponseJobGet:
+        def __init__(self, json_data, status_code):
+            self.content = json_data
+            self.status_code = status_code
+
+    return MockResponseJobGet('{"_ssd_preconditioning.queue-depth.8.block-size.16384.duration": 6}', 200)
+
+
+def mocked_requests_job_post(*args, **kwargs):
+    class MockResponseJobPost:
+        def __init__(self, json_data, status_code):
+            self.content = json_data
+            self.status_code = status_code
+
+    return MockResponseJobPost('{"job_id": \
+                                 "d46bfb8c-36f4-4a40-813b-c4b4a437f728"}', 200)
+
+
+def mocked_requests_job_delete(*args, **kwargs):
+    class MockResponseJobDelete:
+        def __init__(self, json_data, status_code):
+            self.content = json_data
+            self.status_code = status_code
+
+    return MockResponseJobDelete('{}', 200)
+
+
+def mocked_requests_delete(*args, **kwargs):
+    class MockResponseDelete:
+        def __init__(self, json_data, status_code):
+            self.content = json_data
+            self.status_code = status_code
+
+    return MockResponseDelete('{}', 200)
+
+
+def mocked_requests_delete_failed(*args, **kwargs):
+    class MockResponseDeleteFailed:
+        def __init__(self, json_data, status_code):
+            self.content = json_data
+            self.status_code = status_code
+
+    if args[0] == "http://192.168.23.2:5000/api/v1.0/configurations":
+        return MockResponseDeleteFailed('{"message": "Teardown failed"}', 400)
+
+    return MockResponseDeleteFailed('{}', 404)
+
+
+class StorPerfTestCase(unittest.TestCase):
+
+    def setUp(self):
+        self.ctx = {
+            'host': {
+                'ip': '172.16.0.137',
+                'user': 'cirros',
+                'key_filename': "mykey.key"
+            }
+        }
+
+        self.result = {}
+
+    @mock.patch('yardstick.benchmark.scenarios.storage.storperf.requests.post',
+                side_effect=mocked_requests_config_post)
+    @mock.patch('yardstick.benchmark.scenarios.storage.storperf.requests.get',
+                side_effect=mocked_requests_config_get)
+    def test_successful_setup(self, mock_post, mock_get):
+        options = {
+            "agent_count": 8,
+            "public_network": 'ext-net',
+            "volume_size": 10,
+            "block_sizes": 4096,
+            "queue_depths": 4,
+            "workload": "rs",
+            "StorPerf_ip": "192.168.23.2",
+            "query_interval": 10,
+            "timeout": 60
+        }
+
+        args = {
+            "options": options
+        }
+
+        s = storperf.StorPerf(args, self.ctx)
+
+        s.setup()
+
+        self.assertTrue(s.setup_done)
+
+    @mock.patch('yardstick.benchmark.scenarios.storage.storperf.requests.post',
+                side_effect=mocked_requests_job_post)
+    @mock.patch('yardstick.benchmark.scenarios.storage.storperf.requests.get',
+                side_effect=mocked_requests_job_get)
+    @mock.patch('yardstick.benchmark.scenarios.storage.storperf.requests.delete',
+                side_effect=mocked_requests_job_delete)
+    def test_successful_run(self, mock_post, mock_get, mock_delete):
+        options = {
+            "agent_count": 8,
+            "public_network": 'ext-net',
+            "volume_size": 10,
+            "block_sizes": 4096,
+            "queue_depths": 4,
+            "workload": "rs",
+            "StorPerf_ip": "192.168.23.2",
+            "query_interval": 10,
+            "timeout": 60
+        }
+
+        args = {
+            "options": options
+        }
+
+        s = storperf.StorPerf(args, self.ctx)
+        s.setup_done = True
+
+        sample_output = '{"_ssd_preconditioning.queue-depth.8.block-size.16384.duration": 6}'
+
+        expected_result = json.loads(sample_output)
+
+        s.run(self.result)
+
+        self.assertEqual(self.result, expected_result)
+
+    @mock.patch('yardstick.benchmark.scenarios.storage.storperf.requests.delete', side_effect=mocked_requests_delete)
+    def test_successful_teardown(self, mock_delete):
+        options = {
+            "agent_count": 8,
+            "public_network": 'ext-net',
+            "volume_size": 10,
+            "block_sizes": 4096,
+            "queue_depths": 4,
+            "workload": "rs",
+            "StorPerf_ip": "192.168.23.2",
+            "query_interval": 10,
+            "timeout": 60
+        }
+
+        args = {
+            "options": options
+        }
+
+        s = storperf.StorPerf(args, self.ctx)
+
+        s.teardown()
+
+        self.assertFalse(s.setup_done)
+
+    @mock.patch('yardstick.benchmark.scenarios.storage.storperf.requests.delete', side_effect=mocked_requests_delete_failed)
+    def test_failed_teardown(self, mock_delete):
+        options = {
+            "agent_count": 8,
+            "public_network": 'ext-net',
+            "volume_size": 10,
+            "block_sizes": 4096,
+            "queue_depths": 4,
+            "workload": "rs",
+            "StorPerf_ip": "192.168.23.2",
+            "query_interval": 10,
+            "timeout": 60
+        }
+
+        args = {
+            "options": options
+        }
+
+        s = storperf.StorPerf(args, self.ctx)
+
+        self.assertRaises(RuntimeError, s.teardown)
+
+
+def main():
+    unittest.main()
+
+if __name__ == '__main__':
+    main()
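The five `mocked_requests_*` factories above each define an identical response stub; they could be collapsed into a single helper. A sketch using the stdlib `unittest.mock` (the committed tests use the standalone `mock` package):

```python
from unittest import mock


class StubResponse(object):
    """Bare-bones stand-in for a requests.Response: only the two
    attributes the scenario code reads."""
    def __init__(self, content, status_code):
        self.content = content
        self.status_code = status_code


def stub(content, status_code=200):
    """Build a side_effect callable returning a fixed StubResponse,
    ignoring whatever URL and kwargs the code under test passes."""
    return lambda *args, **kwargs: StubResponse(content, status_code)


# One line per endpoint instead of one class per endpoint.
fake_post = mock.Mock(side_effect=stub('{"job_id": "abc"}'))
resp = fake_post("http://storperf:5000/api/v1.0/jobs")
assert resp.status_code == 200 and "job_id" in resp.content
```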
diff --git a/yardstick/benchmark/scenarios/storage/storperf.py b/yardstick/benchmark/scenarios/storage/storperf.py
new file mode 100644 (file)
index 0000000..d39c23a
--- /dev/null
@@ -0,0 +1,208 @@
+##############################################################################
+# Copyright (c) 2016 Huawei Technologies Co.,Ltd.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+import logging
+import json
+import requests
+import time
+
+from yardstick.benchmark.scenarios import base
+
+LOG = logging.getLogger(__name__)
+
+
+class StorPerf(base.Scenario):
+    """Execute StorPerf benchmark.
+    Once the StorPerf container has been started and the ReST API exposed,
+    you can interact directly with it using the ReST API. StorPerf comes with a
+    Swagger interface that is accessible through the exposed port at:
+    http://StorPerf:5000/swagger/index.html
+
+    Command line options:
+    target = [device or path] (Optional):
+    The path to either an attached storage device (/dev/vdb, etc) or a
+    directory path (/opt/storperf) that will be used to execute the performance
+    test. In the case of a device, the entire device will be used.
+    If not specified, the current directory will be used.
+
+    workload = [workload module] (Optional):
+    If not specified, the default is to run all workloads.
+    The workload types are:
+        rs: 100% Read, sequential data
+        ws: 100% Write, sequential data
+        rr: 100% Read, random access
+        wr: 100% Write, random access
+        rw: 70% Read / 30% write, random access
+
+    nossd (Optional):
+    Do not perform SSD style preconditioning.
+
+    nowarm (Optional):
+    Do not perform a warmup prior to measurements.
+
+    report = [job_id] (Optional):
+    Query the status of the supplied job_id and report on metrics.
+    If a workload is supplied, will report on only that subset.
+
+    """
+    __scenario_type__ = "StorPerf"
+
+    def __init__(self, scenario_cfg, context_cfg):
+        """Scenario construction."""
+        self.scenario_cfg = scenario_cfg
+        self.context_cfg = context_cfg
+        self.options = self.scenario_cfg["options"]
+
+        self.target = self.options.get("StorPerf_ip", None)
+        self.query_interval = self.options.get("query_interval", 10)
+        # Maximum allowed job time
+        self.timeout = self.options.get('timeout', 3600)
+
+        self.setup_done = False
+        self.job_completed = False
+
+    def _query_setup_state(self):
+        """Query the stack status."""
+        LOG.info("Querying the stack state...")
+        setup_query = requests.get('http://%s:5000/api/v1.0/configurations'
+                                   % self.target)
+
+        setup_query_content = json.loads(setup_query.content)
+        if setup_query_content["stack_created"]:
+            self.setup_done = True
+            LOG.debug("stack_created: %s"
+                      % setup_query_content["stack_created"])
+
+    def setup(self):
+        """Set the configuration."""
+        env_args = {}
+        env_args_payload_list = ["agent_count", "public_network",
+                                 "agent_image", "volume_size"]
+
+        for env_argument in env_args_payload_list:
+            if env_argument in self.options:
+                env_args[env_argument] = self.options[env_argument]
+
+        LOG.info("Creating a stack on node %s with parameters %s" %
+                 (self.target, env_args))
+        setup_res = requests.post('http://%s:5000/api/v1.0/configurations'
+                                  % self.target, json=env_args)
+
+        setup_res_content = json.loads(setup_res.content)
+
+        if setup_res.status_code == 400:
+            raise RuntimeError("Failed to create a stack, error message:",
+                               setup_res_content["message"])
+        elif setup_res.status_code == 200:
+            LOG.info("stack_id: %s" % setup_res_content["stack_id"])
+
+            while not self.setup_done:
+                self._query_setup_state()
+                time.sleep(self.query_interval)
+
+    # TODO: Support StorPerf job status.
+
+    # def _query_job_state(self, job_id):
+    #     """Query the status of the supplied job_id and report on metrics"""
+    #     LOG.info("Fetching report for %s..." % job_id)
+    #     report_res = requests.get('http://%s:5000/api/v1.0/jobs?id=%s' %
+    #                               (self.target, job_id))
+
+    #     report_res_content = json.loads(report_res.content)
+
+    #     if report_res.status_code == 400:
+    #         raise RuntimeError("Failed to fetch report, error message:",
+    #                            report_res_content["message"])
+    #     else:
+    #         job_status = report_res_content["status"]
+
+    #     LOG.debug("Job is: %s..." % job_status)
+    #     if job_status == "completed":
+    #         self.job_completed = True
+
+        # TODO: Support using StorPerf ReST API to read Job ETA.
+
+        # if job_status == "completed":
+        #     self.job_completed = True
+        #     ETA = 0
+        # elif job_status == "running":
+        #     ETA = report_res_content['time']
+        #
+        # return ETA
+
+    def run(self, result):
+        """Execute StorPerf benchmark"""
+        if not self.setup_done:
+            self.setup()
+
+        job_args = {}
+        job_args_payload_list = ["block_sizes", "queue_depths", "deadline",
+                                 "target", "nossd", "nowarm", "workload"]
+
+        for job_argument in job_args_payload_list:
+            if job_argument in self.options:
+                job_args[job_argument] = self.options[job_argument]
+
+        LOG.info("Starting a job with parameters %s" % job_args)
+        job_res = requests.post('http://%s:5000/api/v1.0/jobs' % self.target,
+                                json=job_args)
+
+        job_res_content = json.loads(job_res.content)
+
+        if job_res.status_code == 400:
+            raise RuntimeError("Failed to start a job, error message:",
+                               job_res_content["message"])
+        elif job_res.status_code == 200:
+            job_id = job_res_content["job_id"]
+            LOG.info("Started job id: %s..." % job_id)
+
+            time.sleep(self.timeout)
+            terminate_res = requests.delete('http://%s:5000/api/v1.0/jobs' %
+                                            self.target)
+
+            if terminate_res.status_code == 400:
+                terminate_res_content = json.loads(terminate_res.content)
+                raise RuntimeError("Failed to terminate job, error message:",
+                                   terminate_res_content["message"])
+
+        # TODO: Support StorPerf job status.
+
+        #   while not self.job_completed:
+        #       self._query_job_state(job_id)
+        #       time.sleep(self.query_interval)
+
+        # TODO: Support using ETA to poll for completion.
+        #       Read ETA, next poll in 1/2 ETA time slot.
+        #       If ETA is greater than the maximum allowed job time,
+        #       then terminate job immediately.
+
+        #   while not self.job_completed:
+        #       esti_time = self._query_state(job_id)
+        #       if esti_time > self.timeout:
+        #           terminate_res = requests.delete('http://%s:5000/api/v1.0
+        #                                           /jobs' % self.target)
+        #       else:
+        #           time.sleep(int(est_time)/2)
+
+            result_res = requests.get('http://%s:5000/api/v1.0/jobs?id=%s' %
+                                      (self.target, job_id))
+            result_res_content = json.loads(result_res.content)
+
+            result.update(result_res_content)
+
+    def teardown(self):
+        """Deletes the agent configuration and the stack"""
+        teardown_res = requests.delete('http://%s:5000/api/v1.0/'
+                                       'configurations' % self.target)
+
+        if teardown_res.status_code == 400:
+            teardown_res_content = json.loads(teardown_res.content)
+            raise RuntimeError("Failed to reset environment, error message:",
+                               teardown_res_content['message'])
+
+        self.setup_done = False
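The wait loop in `setup()` polls `_query_setup_state` with no upper bound, and the job-status TODOs suggest the same loop will be needed in `run()`. A deadline-bounded version of that pattern might look like this (a sketch, not the committed code; clock and sleep are injectable so it can be tested without waiting):

```python
import time


def poll(check, interval, timeout, clock=time.time, sleep=time.sleep):
    """Invoke check() every `interval` seconds until it returns True
    or `timeout` seconds have elapsed; return the final outcome."""
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False


# Deterministic demo: a fake clock advanced by the fake sleep.
t = [0.0]
fake_clock = lambda: t[0]
fake_sleep = lambda s: t.__setitem__(0, t[0] + s)
results = iter([False, False, True])
assert poll(lambda: next(results), interval=10, timeout=60,
            clock=fake_clock, sleep=fake_sleep)
```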