.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0

===========================================
Test Results for fuel-os-nosdn-nofeature-ha
===========================================

.. _Grafana: http://130.211.154.108/grafana/dashboard/db/yardstick-main
.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs

Overview of test results
------------------------

See Grafana_ for viewing test result metrics for each respective test case. It
is possible to choose which specific scenarios to look at, and then to zoom in
on the details of each test scenario run as well.

All of the test case results below are based on 5 consecutive scenario test
runs, each run on the Ericsson POD2_ between February 13 and 18 in 2016. More
runs would be needed to draw firmer conclusions, but these are the only runs
available at the time of the OPNFV R2 release.

TC002
-----
The round-trip time (RTT) between 2 VMs on different blades is measured using
ping. The measurements vary on average between 0.5 and 1.1 ms, with an
initial 2 - 2.5 ms RTT spike at the beginning of each run (possibly caused by
normal ARP handling). The 2 last runs are very similar in their results, but
more runs should be made to be able to draw any further conclusions. One
measurement taken on February 16 lacks the initial RTT spike and shows less
variation in RTT; the reason for this is unknown. Another test measurement
made Feb. 16 is discussed in the TC037 results below.
SLA set to 10 ms. The SLA value is used as a reference; it has not been
defined by OPNFV.
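
As a rough, unofficial illustration of how such RTT figures can be derived
(the parsing helper and the sample output below are hypothetical and not part
of the Yardstick tooling), per-packet RTTs can be pulled from ping output and
compared against the 10 ms reference SLA:

.. code-block:: python

    import re

    def parse_ping_rtts(ping_output):
        """Extract per-packet RTT values (ms) from ping output lines."""
        return [float(m.group(1))
                for m in re.finditer(r"time=([\d.]+) ?ms", ping_output)]

    def check_sla(rtts, sla_ms=10.0):
        """Return (average RTT in ms, True if every sample is within the SLA)."""
        avg = sum(rtts) / len(rtts)
        return avg, all(r <= sla_ms for r in rtts)

    # Hypothetical ping output resembling a run with the initial ARP-related spike.
    sample = """\
    64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=2.31 ms
    64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=0.62 ms
    64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=0.55 ms
    """

    avg_rtt, within_sla = check_sla(parse_ping_rtts(sample))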

TC005
-----
The IO read bandwidth looks similar between the different test runs, with an
average of approx. 160-170 MB/s. Within each run the results vary
considerably, between a minimum of 2 MB/s and a maximum of 630 MB/s overall.
Most runs have a minimum of 3 MB/s (one run at 2 MB/s). The maximum BW varies
more in absolute numbers, between 566 and 630 MB/s.
SLA set to 400 MB/s. The SLA value is used as a reference; it has not been
defined by OPNFV.
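
A per-run summary of this kind can be sketched as below; the helper and the
sample bandwidth values are illustrative, chosen from the ranges reported
above, and are not actual run data:

.. code-block:: python

    def summarize_bw(samples_mb_s, sla_mb_s=400.0):
        """Summarize IO read bandwidth samples and compare the average to the SLA."""
        avg = sum(samples_mb_s) / len(samples_mb_s)
        return {
            "min": min(samples_mb_s),
            "avg": avg,
            "max": max(samples_mb_s),
            "meets_sla": avg >= sla_mb_s,
        }

    # Illustrative samples spanning the variation seen within a run (MB/s).
    run = [3, 120, 165, 170, 210, 630]
    summary = summarize_bw(run)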

TC010
-----
The measurements for memory latency are consistent among test runs and result
in approx. 1.2 ns. The variations between runs are small, between 1.215 and
1.219 ns. One exception is February 16, where the variation is greater,
between 1.22 and 1.28 ns.
SLA set to 30 ns. The SLA value is used as a reference; it has not been
defined by OPNFV.

TC011
-----
For this scenario no results are available to report on. The probable reason
is an integer/floating point issue regarding how InfluxDB is populated with
result data from the test runs.

TC012
-----
The average measurements for memory bandwidth are consistent among most of the
different test runs, at 17.2 - 17.3 GB/s. The very first test run averages
17.7 GB/s. Within each run the results vary, with an overall minimum BW of
15.4 GB/s and a maximum of 18.2 GB/s.
SLA set to 15 GB/s. The SLA value is used as a reference; it has not been
defined by OPNFV.

TC014
-----
The Unixbench processor single and parallel speed scores show similar results,
at approx. 3200. The scores vary between 3160 and 3240 across runs.

TC037
-----
The amount of packets per second (PPS) and the round-trip times (RTT) between
2 VMs on different blades are measured while increasing the number of UDP
flows sent between the VMs, using pktgen as the packet generator tool.

Round-trip times and packet throughput between VMs are typically affected by
the number of flows set up, resulting in higher RTT and lower PPS throughput.

When running with less than 10000 flows the results are flat and consistent:
RTT is then approx. 30 ms and the number of PPS remains flat at approx.
250000 PPS. Beyond approx. 10000 flows and up to 1000000 (one million) there
is a steady drop in RTT and PPS performance, eventually ending up at approx.
150-250 ms and 40000 PPS respectively.
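
The flows-versus-performance relationship above can be sketched by grouping
samples per flow count and averaging them; the sample tuples below are
illustrative values picked from the ranges reported, not actual run data:

.. code-block:: python

    from collections import defaultdict

    def summarize_by_flows(samples):
        """Group (flows, rtt_ms, pps) samples by flow count and average each group."""
        grouped = defaultdict(list)
        for flows, rtt_ms, pps in samples:
            grouped[flows].append((rtt_ms, pps))
        return {
            flows: {
                "avg_rtt_ms": sum(r for r, _ in vals) / len(vals),
                "avg_pps": sum(p for _, p in vals) / len(vals),
            }
            for flows, vals in grouped.items()
        }

    # Illustrative samples chosen from the ranges in the text, not real run data.
    samples = [
        (1000, 30.0, 250000),
        (1000, 31.0, 251000),
        (100000, 90.0, 150000),
        (1000000, 200.0, 40000),
    ]
    stats = summarize_by_flows(samples)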

There is one measurement made February 16 that has slightly worse results
compared to the other 4 measurements. The reason for this is unknown; for
instance, someone being logged onto the POD could be of relevance for such a
difference.

Detailed test results
---------------------
The scenario was run on Ericsson POD2_ with:

No SDN controller installed

Rationale for decisions
-----------------------
Tests were successfully executed and metrics collected (apart from TC011_).
No SLA was verified. To be decided on in the next release of OPNFV.

Conclusions and recommendations
-------------------------------
The pktgen test configuration has a relatively large base effect on RTT in
TC037 compared to TC002, where there is no background load at all (30 ms
compared to 1 ms or less, roughly a 30x difference in RTT results). The
larger amounts of flows in TC037 generate worse RTT results, in the magnitude
of several hundred milliseconds. It would be interesting to also make all
these measurements on completely (optimized) bare metal machines running
native Linux with all other relevant tools available, e.g. lmbench, pktgen
etc., and compare the results.
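
The magnitude of that RTT difference can be sanity-checked with a small
calculation, using the 30 ms TC037 figure against the 1 ms TC002 baseline
from the text:

.. code-block:: python

    def relative_increase_pct(baseline_ms, loaded_ms):
        """Percentage increase of the loaded RTT over the unloaded baseline."""
        return (loaded_ms - baseline_ms) / baseline_ms * 100.0

    # 30 ms under pktgen background load (TC037) vs. a 1 ms baseline (TC002).
    increase_pct = relative_increase_pct(1.0, 30.0)
    ratio = 30.0 / 1.0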