.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors

=======================
Infrastructure Overview
=======================
OPNFV develops, operates, and maintains infrastructure which is used by the OPNFV
community for development, integration, and testing purposes. The `OPNFV
Infrastructure Working Group (Infra WG) <https://wiki.opnfv.org/display/INF>`_
oversees the OPNFV infrastructure, ensuring it is kept up to date and in a state
which serves the community in the best possible way.

The Infra WG is working towards a model whereby we have a seamless pipeline
for handling resource requests from the OPNFV community, from both development and
Continuous Integration perspectives. Automation of requests and integration with
existing automation tools is a primary driver in reaching this model. In the
Infra WG, we imagine a model where the infrastructure requirements that are
specified by a Feature, Installer or other relevant project within OPNFV are
requested, provisioned, used, reported on and subsequently torn down with no (or
minimal) user intervention at the physical/infrastructure level.

The objectives of the Infra WG are to:

* Deliver efficiently dimensioned resources on request in a timely manner,
  ensuring maximum usage (capacity) and maximum density (distribution of workloads)
* Satisfy the needs of the twice-yearly release projects; this includes being able
  to handle load (number of projects and requests) as well as need (topology and
  different layouts)
* Support OPNFV community users. As the Infra WG, we are integral to all aspects
  of the OPNFV community (since it starts with the hardware) - this can mean
  troubleshooting any element within the stack
* Provide a method to expand and adapt as OPNFV community needs grow, and share
  this with hosting providers (lab providers) as input to growth forecasts so they
  can better judge how best to contribute their resources
* Work with reporting and other groups to ensure we give adequate feedback to the
  end users of the labs on how their systems, code and features perform

The details of what is provided as part of the infrastructure can be seen in the
following chapters.

Hardware Infrastructure
-----------------------

Software Infrastructure
-----------------------

../submodules/releng/docs/sofware-infrastructure-index
Power Consumption Monitoring Framework
======================================

Power consumption is a key driver for NFV.
Just as an end user wants to know which smartphone applications are good or bad
regarding power consumption (and why the phone has to be plugged in every day),
we would like to know which VNFs are power hungry.

Power consumption is hard to evaluate empirically. It is however possible to
collect information and leverage the Pharos federation to try to detect some
trends. In fact, thanks to CI, we know that we are running a known, deterministic
list of test cases. The idea is to correlate this knowledge with the power
consumption in order to find statistical bias.
High Level Architecture
-----------------------

The energy recorder high level architecture may be described as follows:

.. figure:: ../../images/energyrecorder.png
   :alt: Energy recorder high level architecture

The energy monitoring system is based on 3 software components:

* Power info collector: polls servers to collect instantaneous power consumption
  information
* Energy recording API + influxdb: on one leg receives server consumption and,
  on the other, scenario notifications. It is then able to establish the
  correlation between consumption and scenario and stores it into a time-series
  database (influxdb)
* Python SDK: a Python SDK using decorators to send notifications to the Energy
  recording API from test case scenarios
It collects instantaneous power consumption information and sends it to the Event
API, which is in charge of data storing.
The collector uses different connectors to read the power consumption on remote
servers:

* IPMI: this is the basic method and is manufacturer dependent.
  Depending on the manufacturer, the refresh delay may vary (generally from 10 to
  30 sec.)
* Redfish: Redfish is an industry RESTful API for hardware management.
  Unfortunately it is not yet supported by many suppliers.
* iLO: the HP RESTful API.
  This connector supports both versions 2.1 and 2.4 of HP iLO.
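
As a rough illustration of the IPMI path, the instantaneous reading can be
fetched with the standard ``ipmitool`` CLI (``dcmi power reading``) and parsed
from its output. This is a hedged sketch, not the collector's actual code; the
host and credentials are placeholders:

```python
import re
import subprocess


def parse_power_reading(output):
    """Extract the wattage from `ipmitool dcmi power reading` output.

    The output contains a line such as:
        Instantaneous power reading:                   220 Watts
    """
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", output)
    if match is None:
        raise ValueError("no power reading found in ipmitool output")
    return int(match.group(1))


def read_power_via_ipmi(host, user, password):
    """Poll a server's BMC for its instantaneous power draw, in watts."""
    output = subprocess.check_output(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "dcmi", "power", "reading"],
        text=True,
    )
    return parse_power_reading(output)
```

A real collector would call such a function in a loop and push each reading to
the Event API.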
IPMI is supported by at least:

Redfish API has been successfully tested on:

* Huawei (E9000 class servers used in OPNFV Community Labs are IPMI 2.0
  compliant and use the Redfish login interface through browsers supporting
  JRE 1.7/1.8)

Several test campaigns done with a physical wattmeter showed that IPMI results
were not very accurate but Redfish results were. So if Redfish is available, it
is highly recommended to use it.
To run the server power consumption collector agent, you need to deploy a
docker container locally on your infrastructure.

This container requires:

* Connectivity to the LAN where server administration services (iLO, iDRAC,
  IPMI, ...) are configured and IP access to the POD's servers
* Outgoing HTTP access to the Event API (internet)

Build the image by typing::

  curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/docker/server-collector.dockerfile|docker build -t energyrecorder/collector -
Create local folders on your host for logs and config files::

  mkdir -p /etc/energyrecorder
  mkdir -p /var/log/energyrecorder

In /etc/energyrecorder, create a configuration for logging in a file named
collector-logging.conf::

  curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/server-collector/conf/collector-logging.conf.sample > /etc/energyrecorder/collector-logging.conf

Check the configuration in this file (folders, log levels, ...).

In /etc/energyrecorder, create a configuration for the collector in a file named
collector-settings.yaml::

  curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/server-collector/conf/collector-settings.yaml.sample > /etc/energyrecorder/collector-settings.yaml

Define the "PODS" section and their "servers" sections according to the
environment to monitor.
Note: the "environment" key should correspond to the pod name, as defined in
the "NODE_NAME" environment variable by CI when running.
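
As an illustration only, such a configuration could look like the fragment
below. The exact key names are assumptions inferred from the description above
("PODS", "servers", "environment"); refer to the downloaded sample file for the
authoritative structure:

```yaml
PODS:
  - environment: my-pod-1       # must match CI's NODE_NAME for correlation
    servers:
      - id: node-1              # hypothetical server entry
        host: 192.168.1.10      # BMC/iLO address on the admin LAN
        type: ipmi              # connector to use (assumed values: ipmi, redfish, ilo)
        user: admin
        pass: secret
```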

**IMPORTANT NOTE**: To apply a new configuration, you need to kill the running
container and start a new one (see below).

To run the container, you have to map folders located on the host to folders in
the container (config, logs)::

  docker run -d --name energy-collector --restart=always -v /etc/energyrecorder:/usr/local/energyrecorder/server-collector/conf -v /var/log/energyrecorder:/var/log/energyrecorder energyrecorder/collector

An event API to insert contextual information when monitoring energy (e.g.
start Functest, start Tempest, destroy VM, ...).
It is associated with an influxDB to store the power consumption measures.
It is hosted on a shared environment with the following access points:

+------------------------------------+----------------------------------------+
| Component                          | Connectivity                           |
+====================================+========================================+
| Energy recording API documentation | http://energy.opnfv.fr/resources/doc/  |
+------------------------------------+----------------------------------------+
| influxDB (data)                    | http://energy.opnfv.fr:8086            |
+------------------------------------+----------------------------------------+
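
Stored measures can be read back through InfluxDB's standard 1.x HTTP
``/query`` endpoint. The sketch below only builds the query URL; the database
and measurement names (``energyrecorder``, ``power_consumption``) are
assumptions, so check the API documentation listed above before using them:

```python
from urllib.parse import urlencode


def build_influx_query_url(base_url, database, query):
    """Build a URL for InfluxDB's 1.x HTTP /query endpoint."""
    return "{}/query?{}".format(
        base_url.rstrip("/"),
        urlencode({"db": database, "q": query}),
    )


# Hypothetical database and measurement names:
url = build_influx_query_url(
    "http://energy.opnfv.fr:8086",
    "energyrecorder",
    "SELECT * FROM power_consumption LIMIT 10",
)
# The URL can then be fetched with urllib.request.urlopen(url),
# using the readonly credentials.
```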

If you need, you can also host your own instance of the Energy recording API
(in such a case, the Python SDK may require a settings update).
If you plan to use the default shared API, the following steps are not required.

First, you need to build an image::

  curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/docker/recording-api.dockerfile|docker build -t energyrecorder/api -

Create local folders on your host for logs, config files and data::

  mkdir -p /etc/energyrecorder
  mkdir -p /var/log/energyrecorder
  mkdir -p /var/lib/influxdb

In /etc/energyrecorder, create a configuration for logging in a file named
webapp-logging.conf::

  curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/recording-api/conf/webapp-logging.conf.sample > /etc/energyrecorder/webapp-logging.conf

Check the configuration in this file (folders, log levels, ...).

In /etc/energyrecorder, create a configuration for the API in a file
named webapp-settings.yaml::

  curl -s https://raw.githubusercontent.com/bherard/energyrecorder/master/recording-api/conf/webapp-settings.yaml.sample > /etc/energyrecorder/webapp-settings.yaml

Normally the included configuration is ready to use, except for the influx
username/password (see the ``docker run`` command below). Use the admin user here.

**IMPORTANT NOTE**: To apply a new configuration, you need to kill the running
container and start a new one (see below).

To run the container, you have to map folders located on the host to folders in
the container (config, logs, data)::

  docker run -d --name energyrecorder-api -p 8086:8086 -p 8888:8888 -v /etc/energyrecorder:/usr/local/energyrecorder/web.py/conf -v /var/log/energyrecorder/:/var/log/energyrecorder -v /var/lib/influxdb:/var/lib/influxdb energyrecorder/webapp admin-influx-user-name admin-password readonly-influx-user-name user-password

with the following parameters:

+---------------------------+--------------------------------------------+
| Parameter name            | Description                                |
+===========================+============================================+
| admin-influx-user-name    | Influx user with admin grants to create    |
+---------------------------+--------------------------------------------+
| admin-password            | Influx password to set for admin user      |
+---------------------------+--------------------------------------------+
| readonly-influx-user-name | Influx user with readonly grants to create |
+---------------------------+--------------------------------------------+
| user-password             | Influx password to set for readonly user   |
+---------------------------+--------------------------------------------+

**NOTE**: The local folder /var/lib/influxdb is the location where influx data
are stored. You may use any other location at your convenience; just remember to
define this mapping properly when running the container.

Power consumption Python SDK
----------------------------

A Python SDK, almost non-intrusive, based on Python decorators, to trigger the
calls to the energy recording API from test cases.

It is currently hosted in the Functest repo but if other projects adopt it, a
dedicated project could be created and/or it could be hosted in Releng.

Import the energy library::

  import functest.energy.energy as energy

Notify that you want power recording in your test case::

  @energy.enable_recording
  def run(self):
      self.do_some_stuff1()
      self.do_some_stuff2()

If you want to register additional steps during the scenario, you can do it in
two ways:

* notify a step on method definition::

    @energy.set_step("step1")
    def do_some_stuff1(self):
        ...

    @energy.set_step("step2")
    def do_some_stuff2(self):
        ...

* directly from code::

    @energy.enable_recording
    def run(self):
        energy.set_step("step1")
        self.do_some_stuff1()
        energy.set_step("step2")
        self.do_some_stuff2()
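
Under the hood, such a decorator only needs to notify the energy recording API
before and after the wrapped method runs. Below is a simplified, self-contained
sketch of the mechanism, not the SDK's actual code (the notification call is
stubbed; the real SDK lives in the Functest repo):

```python
import functools


def make_recording_decorator(notify):
    """Build an enable_recording-like decorator.

    `notify` is any callable sending a message to the energy recording
    API; it is left abstract here so the sketch stays self-contained.
    """
    def enable_recording(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            notify("recording-start")      # scenario begins
            try:
                return func(*args, **kwargs)
            finally:
                notify("recording-stop")   # always close the recording session
        return wrapper
    return enable_recording


# Usage with a stubbed notifier that just records events in a list:
events = []
enable_recording = make_recording_decorator(events.append)


@enable_recording
def run():
    events.append("work")
    return "done"
```

Because the stop notification sits in a ``finally`` block, the recording
session is closed even if the test case raises an exception.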

Settings delivered in the project git are ready to use and assume that you will
use the shared energy recording API.
If you want to use another instance, you have to update the key
"energy_recorder.api_url" in <FUNCTEST>/functest/ci/config_functest.yaml by
setting the proper hostname/IP.
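
For example, assuming the dotted key maps to nested YAML (verify against your
own config_functest.yaml; the URL below is a hypothetical private instance):

```yaml
energy_recorder:
    api_url: http://my-energy-api.example.com/api
```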

Here is an example of results coming from LF POD2. This sequence represents
several CI runs in a row (0 power corresponds to a hard reboot of the servers).

You may connect to http://energy.opnfv.fr:3000 for more results (ask the infra
team for credentials).

.. figure:: ../../images/energy_LF2.png
   :alt: Energy monitoring of LF POD2