.. This work is licensed under a Creative Commons Attribution 4.0 International
.. http://creativecommons.org/licenses/by/4.0

================================
Test Results for os-nosdn-kvm-ha
================================

.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs

Overview of test results
------------------------

See Grafana_ for viewing test result metrics for each respective test case. It
is possible to choose which specific scenarios to look at and to zoom in on
the details of each test scenario run.

All of the test case results below are based on 4 scenario test runs, each run
on the Ericsson POD2_ or LF POD2_ between August 24 and 30 in

The round-trip-time (RTT) between 2 VMs on different blades is measured using
ping. Most test runs measure an average RTT between 0.44 and 0.75 ms. A few
runs start with a 0.65 - 0.68 ms RTT spike (this could be due to normal ARP
handling). One test run has a greater RTT spike of 1.49 ms. More runs would
be needed to draw firm conclusions. The SLA is set to 10 ms; this value is
used only as a reference and has not been defined by OPNFV.

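As a minimal sketch of how such a measurement could be evaluated (this is a
hypothetical helper, not the Yardstick implementation), the summary line that
Linux ping prints can be parsed and its average compared against the 10 ms
reference SLA:

```python
import re

def parse_ping_rtt(output):
    """Return min/avg/max/mdev (in ms) from a Linux ping summary line."""
    match = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", output
    )
    if match is None:
        raise ValueError("no rtt summary line found")
    return dict(zip(("min", "avg", "max", "mdev"),
                    (float(v) for v in match.groups())))

SLA_MS = 10.0  # reference value only; not defined by OPNFV

# Made-up sample numbers in the range reported above.
sample = "rtt min/avg/max/mdev = 0.441/0.623/1.490/0.112 ms"
rtt = parse_ping_rtt(sample)
print(rtt["avg"], rtt["avg"] <= SLA_MS)  # 0.623 True
```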
The IO read bandwidth looks similar between different dates, with an average
between approx. 92 and 204 MB/s. Within each test run the results vary, with
a minimum of 2 MB/s and a maximum of 819 MB/s overall. Most runs have a
minimum BW of 3 MB/s (one run at 2 MB/s). The maximum BW varies more in
absolute numbers between the dates, between 238 and 819 MB/s. The SLA is set
to 400 MB/s; this value is used only as a reference and has not been defined
by OPNFV.

The measurements for memory latency are similar between test dates and result
in approx. 2.07 ns. The variations within each test run are similar. The SLA
is set to 30 ns; this value is used only as a reference and has not been
defined by OPNFV.

Packet delay variation between 2 VMs on different blades is measured using
Iperf3. The reported packet delay variation varies between 0.0051 and
0.0243 ms, with an average delay variation between 0.0081 ms and 0.0195 ms.

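For context, iperf reports packet delay variation using the RFC 3550
interarrival-jitter estimator. A minimal sketch of that estimator (an
illustration under that assumption, not Yardstick code) looks like this:

```python
def interarrival_jitter(transit_ms):
    """RFC 3550-style smoothed jitter over per-packet transit times (ms)."""
    jitter = 0.0
    for prev, cur in zip(transit_ms, transit_ms[1:]):
        # Each new transit-time difference moves the estimate by 1/16.
        jitter += (abs(cur - prev) - jitter) / 16.0
    return jitter

print(interarrival_jitter([1.0, 1.0, 1.0]))        # 0.0 (constant delay)
print(interarrival_jitter([1.0, 1.2, 1.0]) > 0.0)  # True
```

The 1/16 gain makes the estimate a noise-resistant running average, which is
why the reported averages (0.0081 - 0.0195 ms) are much smoother than the
per-packet extremes.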
Between test dates, the average measurements for memory bandwidth result in
approx. 13.6 GB/s. Within each test run the results vary more, with a minimum
BW of 6.09 GB/s and a maximum of 16.47 GB/s overall. The SLA is set to
15 GB/s; this value is used only as a reference and has not been defined by
OPNFV.

The Unixbench processor test run results vary between scores 2316 and 3619,

The amount of packets per second (PPS) and round trip times (RTT) between 2
VMs on different blades are measured while increasing the number of UDP flows
sent between the VMs, using pktgen as the packet generator tool.

Round trip times and packet throughput between VMs are typically affected by
the number of flows set up, resulting in higher RTT and lower PPS throughput.

The RTT results are similar throughout the different test dates and runs, at
approx. 15 ms. Some test runs show an increase with many flows, in the range
of 16 to 17 ms. One exception standing out is Feb. 15, where the average RTT
is stable at approx. 13 ms. The PPS results are not as consistent as the RTT
results.

In some test runs with less than approx. 10000 flows the PPS throughput is
normally flatter compared to runs with more flows, after which the PPS
throughput decreases, by around 20 percent in the worst case. For the other
test runs there is however no significant change to the PPS throughput when
the number of flows is increased. In some test runs the PPS is also greater
with 1000000 flows than in other test runs with only 2 flows.

The average PPS throughput in the different runs varies between 414000 and
452000 PPS. The total amount of packets in each test run is approx. 7500000
to 8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
558000 and approx. 1100000 packets in total (same as the one mentioned
earlier).

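As a back-of-the-envelope consistency check (an inference from the figures
above, not a value stated in the report), the reported averages and totals
imply a run duration of roughly 18 seconds per test run:

```python
# total packets / average PPS gives the implied run duration in seconds.
for total_packets, avg_pps in ((7_500_000, 414_000), (8_200_000, 452_000)):
    print(round(total_packets / avg_pps, 1))  # 18.1 in both cases
```

That both ends of the reported ranges imply the same duration suggests the
PPS averages and packet totals come from runs of equal length.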
There are lost packets reported in most of the test runs. There is no observed
correlation between the amount of flows and the amount of lost packets. The
number of lost packets normally ranges between 100 and 1000 per test run, but
there are spikes in the range of 10000 lost packets as well, and even more in
rare cases.

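To put those loss counts in proportion (assuming, as an illustration, that the
loss counts and the approx. 7500000-packet totals refer to the same runs), the
implied loss ratios are small:

```python
total_packets = 7_500_000  # low end of the reported per-run totals
for lost in (100, 1_000, 10_000):
    print(f"{lost} lost -> {lost / total_packets:.4%} of packets")
# 100 lost -> 0.0013% of packets
# 1000 lost -> 0.0133% of packets
# 10000 lost -> 0.1333% of packets
```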
CPU utilization statistics are collected during the UDP flows sent between the
VMs using pktgen as the packet generator tool. The average measurements for
CPU utilization ratio vary between 1% and 2%. The peak of CPU utilization
ratio

Between test dates, the average measurements for memory bandwidth vary between
22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimum
BW of 20.0 GB/s and a maximum of 29.5 GB/s overall. The SLA is set to 6 GB/s;
this value is used only as a reference and has not been defined by OPNFV.

The amount of packets per second (PPS) and round trip times (RTT) between 2
VMs on different blades are measured while increasing the number of UDP flows
sent between the VMs, using pktgen as the packet generator tool.

Round trip times and packet throughput between VMs are typically affected by
the number of flows set up, resulting in higher RTT and lower PPS throughput.

The RTT results are similar throughout the different test dates and runs, at
approx. 15 ms. Some test runs show an increase with many flows, in the range
of 16 to 17 ms. One exception standing out is Feb. 15, where the average RTT
is stable at approx. 13 ms. The PPS results are not as consistent as the RTT
results.

In some test runs with less than approx. 10000 flows the PPS throughput is
normally flatter compared to runs with more flows, after which the PPS
throughput decreases, by around 20 percent in the worst case. For the other
test runs there is however no significant change to the PPS throughput when
the number of flows is increased. In some test runs the PPS is also greater
with 1000000 flows than in other test runs with only 2 flows.

The average PPS throughput in the different runs varies between 414000 and
452000 PPS. The total amount of packets in each test run is approx. 7500000
to 8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
558000 and approx. 1100000 packets in total (same as the one mentioned
earlier).

There are lost packets reported in most of the test runs. There is no observed
correlation between the amount of flows and the amount of lost packets. The
number of lost packets normally ranges between 100 and 1000 per test run, but
there are spikes in the range of 10000 lost packets as well, and even more in
rare cases.

Memory utilization statistics are collected during the UDP flows sent between
the VMs using pktgen as the packet generator tool. The average measurements
for memory utilization vary between 225 MB and 246 MB. The peak of memory
utilization appears

The amount of packets per second (PPS) and round trip times (RTT) between 2
VMs on different blades are measured while increasing the number of UDP flows
sent between the VMs, using pktgen as the packet generator tool.

Round trip times and packet throughput between VMs are typically affected by
the number of flows set up, resulting in higher RTT and lower PPS throughput.

The RTT results are similar throughout the different test dates and runs, at
approx. 15 ms. Some test runs show an increase with many flows, in the range
of 16 to 17 ms. One exception standing out is Feb. 15, where the average RTT
is stable at approx. 13 ms. The PPS results are not as consistent as the RTT
results.

In some test runs with less than approx. 10000 flows the PPS throughput is
normally flatter compared to runs with more flows, after which the PPS
throughput decreases, by around 20 percent in the worst case. For the other
test runs there is however no significant change to the PPS throughput when
the number of flows is increased. In some test runs the PPS is also greater
with 1000000 flows than in other test runs with only 2 flows.

The average PPS throughput in the different runs varies between 414000 and
452000 PPS. The total amount of packets in each test run is approx. 7500000
to 8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
558000 and approx. 1100000 packets in total (same as the one mentioned
earlier).

There are lost packets reported in most of the test runs. There is no observed
correlation between the amount of flows and the amount of lost packets. The
number of lost packets normally ranges between 100 and 1000 per test run, but
there are spikes in the range of 10000 lost packets as well, and even more in
rare cases.

Cache utilization statistics are collected during the UDP flows sent between
the VMs using pktgen as the packet generator tool. The average measurements
for cache utilization vary between 205 MB and 212 MB.

The amount of packets per second (PPS) and round trip times (RTT) between 2
VMs on different blades are measured while increasing the number of UDP flows
sent between the VMs, using pktgen as the packet generator tool.

Round trip times and packet throughput between VMs are typically affected by
the number of flows set up, resulting in higher RTT and lower PPS throughput.

The RTT results are similar throughout the different test dates and runs, at
approx. 15 ms. Some test runs show an increase with many flows, in the range
of 16 to 17 ms. One exception standing out is Feb. 15, where the average RTT
is stable at approx. 13 ms. The PPS results are not as consistent as the RTT
results.

In some test runs with less than approx. 10000 flows the PPS throughput is
normally flatter compared to runs with more flows, after which the PPS
throughput decreases, by around 20 percent in the worst case. For the other
test runs there is however no significant change to the PPS throughput when
the number of flows is increased. In some test runs the PPS is also greater
with 1000000 flows than in other test runs with only 2 flows.

The average PPS throughput in the different runs varies between 414000 and
452000 PPS. The total amount of packets in each test run is approx. 7500000
to 8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
558000 and approx. 1100000 packets in total (same as the one mentioned
earlier).

There are lost packets reported in most of the test runs. There is no observed
correlation between the amount of flows and the amount of lost packets. The
number of lost packets normally ranges between 100 and 1000 per test run, but
there are spikes in the range of 10000 lost packets as well, and even more in
rare cases.

Network utilization statistics are collected during the UDP flows sent between
the VMs using pktgen as the packet generator tool. The total number of packets
received per second averaged 200 kpps and the total number of packets
transmitted per second averaged 600 kpps.

Detailed test results
---------------------
The scenario was run on Ericsson POD2_ and LF POD2_ with:
OpenVirtualSwitch 2.5.90
OpenDayLight Beryllium

Rationale for decisions
-----------------------
Tests were successfully executed and metrics collected. No SLA was verified;
this is to be decided in a coming release of OPNFV.

Conclusions and recommendations
-------------------------------
The pktgen test configuration has a relatively large base effect on RTT in
TC037 compared to TC002, where there is no background load at all: approx.
15 ms compared to approx. 0.5 ms, a difference of roughly 3000 percent in the
RTT results.

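The arithmetic behind that figure, using the approximate RTT values from the
summaries above:

```python
base_rtt_ms = 0.5     # TC002: no background load
loaded_rtt_ms = 15.0  # TC037: pktgen background load
# The loaded RTT expressed as a percentage of the unloaded RTT.
print(loaded_rtt_ms / base_rtt_ms * 100)  # 3000.0
```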
In particular, RTT and throughput come out with better results than, for
instance, in the *fuel-os-nosdn-nofeature-ha* scenario. The reason for this
should probably be further analyzed and understood. It could also be of
interest to make further analyses to find patterns and reasons for lost
traffic, and to see if there are continuous variations where some test cases
stand out with better or worse results than the general test