.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) <optionally add copyright holder's name>

.. _barometer-docker-userguide:

===================================
OPNFV Barometer Docker User Guide
===================================
The intention of this user guide is to outline how to install and test the Barometer project's
docker images. The `OPNFV docker hub <https://hub.docker.com/u/opnfv/?page=1>`_ contains 5 docker
images from the Barometer project:

1. `Collectd docker image <https://hub.docker.com/r/opnfv/barometer-collectd/>`_
2. `Influxdb docker image <https://hub.docker.com/r/opnfv/barometer-influxdb/>`_
3. `Grafana docker image <https://hub.docker.com/r/opnfv/barometer-grafana/>`_
4. `Kafka docker image <https://hub.docker.com/r/opnfv/barometer-kafka/>`_
5. `VES application docker image <https://hub.docker.com/r/opnfv/barometer-ves/>`_

For a description of the images please see section `Barometer Docker Images Description`_

For steps to build and run the Collectd image please see section `Build and Run Collectd Docker Image`_

For steps to build and run the InfluxDB and Grafana images please see section `Build and Run InfluxDB and Grafana Docker Images`_

For steps to build and run the VES and Kafka images please see section `Build and Run VES and Kafka Docker Images`_

For an overview of running the VES application with Kafka please see the :ref:`VES Application User Guide <barometer-ves-userguide>`
Barometer Docker Images Description
-----------------------------------
Barometer Collectd Image
^^^^^^^^^^^^^^^^^^^^^^^^
The barometer collectd docker image gives you a collectd installation that includes all
the barometer plugins.

The Dockerfile is available in the docker/barometer-collectd directory in the barometer repo.
The Dockerfile builds a CentOS 7 docker image.
The container MUST be run as a privileged container.

Collectd is a daemon which collects system performance statistics periodically
and provides a variety of mechanisms to publish the collected metrics. It
supports more than 90 different input and output plugins. Input plugins
retrieve metrics and publish them to the collectd daemon, while output plugins
publish the data they receive to an end point. Collectd also has infrastructure
to support thresholding and notification.
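The input/output split described above can be illustrated with a minimal collectd configuration fragment (a generic sketch, not the Barometer image's shipped configuration; the interval and server address are example values):

.. code:: bash

    # Global option: how often metrics are read, in seconds
    Interval 10

    # Input (read) plugin: gathers CPU statistics
    LoadPlugin cpu

    # Output (write) plugin: publishes metrics to a collectd network listener
    LoadPlugin network
    <Plugin network>
        Server "127.0.0.1" "25826"
    </Plugin>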
The collectd docker image has the following collectd plugins enabled (in addition
to the standard collectd plugins):

* Open vSwitch events Plugin
* Open vSwitch stats Plugin

Plugins and third party applications in the Barometer repository that will be available in the
docker image:

* Open vSwitch PMD stats
* ONAP VES application
InfluxDB + Grafana Docker Images
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Barometer project's InfluxDB and Grafana docker images are 2 docker images that store and graph
statistics reported by the Barometer collectd docker image. InfluxDB is an open-source time series
database tool which stores the data from collectd for future analysis via Grafana, which is an
open-source metrics analytics and visualisation suite which can be accessed through any browser.
VES + Kafka Docker Images
^^^^^^^^^^^^^^^^^^^^^^^^^

The Barometer project's VES application and Kafka docker images are based on a CentOS 7 image. The Kafka
docker image has a dependency on `Zookeeper <https://zookeeper.apache.org/>`_. Kafka must be able to
connect and register with an instance of Zookeeper that is running on either a local or remote host.
Kafka receives and stores metrics received from Collectd. The VES application pulls the latest metrics
from Kafka and normalizes them into VES format for sending to a VES collector. Please see details in the
:ref:`VES Application User Guide <barometer-ves-userguide>`
One Click Install with Ansible
------------------------------

Proxy for package manager on host
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This step has to be performed only if the host is behind an HTTP/HTTPS proxy.

The proxy URL has to be set in a dedicated config file:

1. CentOS - ``/etc/yum.conf``

.. code:: bash

    proxy=http://your.proxy.domain:1234

2. Ubuntu - ``/etc/apt/apt.conf``

.. code:: bash

    Acquire::http::Proxy "http://your.proxy.domain:1234";

After updating the config file, apt mirrors have to be updated via ``apt-get update``:

.. code:: bash

    $ sudo apt-get update
Proxy environment variables (for docker and pip)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This step has to be performed only if the host is behind an HTTP/HTTPS proxy.

Configuring a proxy for the packaging system is not enough; some proxy
environment variables also have to be set in the system before the ansible
scripts are started.

Barometer configures the docker proxy automatically via an ansible task as a part
of the 'one click install' process - the user only has to provide the proxy URL using common
shell environment variables and ansible will automatically configure proxies
for docker (to be able to fetch barometer images). Other components used by
ansible (e.g. pip, which is used for downloading python dependencies) will also benefit
from setting the proxy variables properly in the system.

Proxy variables used by the ansible One Click Install:

.. code:: bash

    http_proxy
    https_proxy
    ftp_proxy

The variables mentioned above have to be visible for the superuser (because most
actions involving ansible-barometer installation require root privileges).
Proxy variables are commonly defined in the '/etc/environment' file (but any other
place is good as long as the variables can be seen by commands using 'su').

Sample proxy configuration in /etc/environment:

.. code:: bash

    http_proxy=http://your.proxy.domain:1234
    https_proxy=http://your.proxy.domain:1234
    ftp_proxy=http://your.proxy.domain:1234
Install Ansible
^^^^^^^^^^^^^^^

* sudo permissions or root access are required to install ansible.
* the ansible version needs to be 2.4+, because usage of import/include statements
  is required

The following steps have been verified with Ansible 2.6.3 on Ubuntu 16.04 and 18.04.
To install Ansible 2.6.3 on Ubuntu:

.. code:: bash

    $ sudo apt-get install python
    $ sudo apt-get install python-pip
    $ sudo -H pip install 'ansible==2.6.3'

The following steps have been verified with Ansible 2.6.3 on CentOS 7.5.
To install Ansible 2.6.3 on CentOS:

.. code:: bash

    $ sudo yum install python
    $ sudo yum install epel-release
    $ sudo yum install python-pip
    $ sudo -H pip install 'ansible==2.6.3'
    $ sudo yum install git

When using a multi-node setup, please make sure that the 'python' package is
installed on all of the target nodes (ansible during the 'Gathering facts'
phase uses python2 and it may not be installed by default on some
distributions - e.g. on Ubuntu 16.04 it has to be installed manually).
Download the barometer repository:

.. code:: bash

    $ git clone https://gerrit.opnfv.org/gerrit/barometer
    $ cd barometer/docker/ansible

Edit the inventory file and add hosts: ``$barometer_dir/docker/ansible/default.inv``

.. code:: bash

    [collectd_hosts:vars]
    insert_ipmi_modules=true

Change localhost to different hosts where necessary.
Hosts for influxdb and grafana are required only for collectd_service.yml.
Hosts for kafka and ves are required only for collectd_ves.yml.

To change the host for kafka, edit kafka_ip_addr in ./roles/config_files/vars/main.yml.
Additional plugin dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default ansible will try to fulfill dependencies for the mcelog and ipmi plugins.
For the mcelog plugin it installs the mcelog daemon. For ipmi it tries to insert the
ipmi_devintf and ipmi_si kernel modules.
This can be changed in the inventory file with use of the variables install_mcelog
and insert_ipmi_modules; both variables are independent:

.. code:: bash

    [collectd_hosts:vars]
    insert_ipmi_modules=false

On Ubuntu 18.04, to use the mcelog plugin the user has to install the mcelog daemon
manually ahead of installing from the ansible scripts, as the deb package is not
available in the official Ubuntu 18.04 repo. It means that setting install_mcelog
to true has no effect in this case.
Configure ssh keys
^^^^^^^^^^^^^^^^^^

Generate ssh keys if not present, otherwise move on to the next step.
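The key-generation command itself is not reproduced here; a typical non-interactive invocation looks like the following (a sketch - the RSA key type, default path and empty passphrase are assumptions, adjust to your environment):

.. code:: bash

    $ ssh-keygen -t rsa -f "$HOME/.ssh/id_rsa" -N ""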
Copy the ssh key to all target hosts. It requires you to provide the root password.
The example below is for localhost:

.. code:: bash

    $ sudo ssh-copy-id root@localhost

Verify that the key is added and a password is not required to connect:

.. code:: bash

    $ sudo ssh root@localhost

Keys should be added to every target host; [localhost] is only used as an
example. For a multinode installation keys need to be copied for each node:
[collectd_hostname], [influxdb_hostname] etc.
Download and run Collectd+Influxdb+Grafana containers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The One Click installation features easy and scalable deployment of Collectd,
Influxdb and Grafana containers using an Ansible playbook. The following steps go
through more details.

.. code:: bash

    $ sudo -H ansible-playbook -i default.inv collectd_service.yml

Check that the three containers are running; the output of ``docker ps`` should be similar to:

.. code:: bash

    CONTAINER ID   IMAGE                      COMMAND                  CREATED      STATUS         PORTS   NAMES
    a033aeea180d   opnfv/barometer-grafana    "/run.sh"                9 days ago   Up 7 minutes           bar-grafana
    1bca2e4562ab   opnfv/barometer-influxdb   "/entrypoint.sh in..."   9 days ago   Up 7 minutes           bar-influxdb
    daeeb68ad1d5   opnfv/barometer-collectd   "/run_collectd.sh ..."   9 days ago   Up 7 minutes           bar-collectd

To make some changes when a container is running, run:

.. code:: bash

    $ sudo docker exec -ti <CONTAINER ID> /bin/bash

Connect to <host_ip>:3000 with a browser and log into Grafana: admin/admin.
For a short introduction please see the
`Grafana guide <http://docs.grafana.org/guides/getting_started/>`_.

The collectd configuration files can be accessed directly on the target system in
'/opt/collectd/etc/collectd.conf.d'. This can be used for manual changes or to
enable/disable plugins. If the configuration has been modified, it is required to
restart collectd:

.. code:: bash

    $ sudo docker restart bar-collectd
Download collectd+kafka+ves containers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Before running Kafka, an instance of zookeeper is required. See `Run Kafka docker image`_ for notes on how to run it.
The 'zookeeper_hostname' and 'broker_id' can be set in ./roles/run_kafka/vars/main.yml.
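For illustration, the variables in that file could take the following shape (the values shown are assumptions for a single-node setup, not verified project defaults):

.. code:: yaml

    # hostname of the zookeeper instance the Kafka broker registers with
    zookeeper_hostname: localhost
    # unique ID of this Kafka broker
    broker_id: 0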
.. code:: bash

    $ sudo ansible-playbook -i default.inv collectd_ves.yml

Check that the containers are running; the output of ``docker ps`` should be similar to:

.. code:: bash

    CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS         PORTS   NAMES
    8b095ad94ea1   zookeeper:3.4.11           "/docker-entrypoin..."   7 minutes ago    Up 7 minutes           awesome_jennings
    eb8bba3c0b76   opnfv/barometer-ves        "./start_ves_app.s..."   21 minutes ago   Up 6 minutes           bar-ves
    86702a96a68c   opnfv/barometer-kafka      "/src/start_kafka.sh"    21 minutes ago   Up 6 minutes           bar-kafka
    daeeb68ad1d5   opnfv/barometer-collectd   "/run_collectd.sh ..."   13 days ago      Up 6 minutes           bar-collectd

To make some changes when a container is running, run:

.. code:: bash

    $ sudo docker exec -ti <CONTAINER ID> /bin/bash
List of default plugins for the collectd container
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The dpdk plugins dpdkevents and dpdkstat were tested with DPDK v16.11.

By default collectd is started with a default configuration which includes the following plugins:

* csv, contextswitch, cpu, cpufreq, df, disk, ethstat, ipc, irq, load, memory, numa, processes,
  swap, turbostat, uuid, uptime, exec, hugepages, intel_pmu, ipmi, write_kafka, logfile, mcelog,
  network, intel_rdt, rrdtool, snmp_agent, syslog, virt, ovs_stats, ovs_events, dpdkevents,
  dpdkstat

Some of the plugins are loaded depending on specific system requirements and can be omitted if
a dependency is not met; this is the case for:

* hugepages, ipmi, mcelog, intel_rdt, virt, ovs_stats, ovs_events
List and description of tags used in ansible scripts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Tags can be used to run a specific part of the configuration without running the whole playbook.
To run specific parts only:

.. code:: bash

    $ sudo ansible-playbook -i default.inv collectd_service.yml --tags "syslog,cpu,uuid"

To disable some parts or plugins:

.. code:: bash

    $ sudo ansible-playbook -i default.inv collectd_service.yml --skip-tags "en_default_all,syslog,cpu,uuid"

List of available tags:

  Install docker and required dependencies with the package manager.

  Configure the proxy file for the docker service if a proxy is set in the host environment.

  Remove collectd config files.

copy_additional_configs
  Copy additional configuration files to the target system. The path to the additional configuration
  is stored in $barometer_dir/docker/ansible/roles/config_files/vars/main.yml as additional_configs_path.

en_default_all
  Set of default read plugins: contextswitch, cpu, cpufreq, df, disk, ethstat, ipc, irq,
  load, memory, numa, processes, swap, turbostat, uptime.

The following tags can be used to enable/disable plugins: csv, contextswitch, cpu,
cpufreq, df, disk, ethstat, ipc, irq, load, memory, numa, processes, swap, turbostat,
uptime, exec, hugepages, ipmi, kafka, logfile, mcelogs, network, pmu, rdt, rrdtool,
snmp, syslog, virt, ovs_stats, ovs_events, uuid, dpdkevents, dpdkstat.
Install Docker
--------------

The below sections provide steps for manual installation and configuration
of docker images. They are not necessary if the docker images were installed
with use of the Ansible playbook.

Install docker on Ubuntu
^^^^^^^^^^^^^^^^^^^^^^^^

* sudo permissions are required to install docker.
* These instructions are for Ubuntu 16.10.

Install docker:

.. code:: bash

    $ sudo apt-get install curl
    $ sudo curl -fsSL https://get.docker.com/ | sh
    $ sudo usermod -aG docker <username>
    $ sudo systemctl status docker

Replace <username> above with an appropriate user name.
Install docker on CentOS
^^^^^^^^^^^^^^^^^^^^^^^^

* sudo permissions are required to install docker.
* These instructions are for CentOS 7.

Install docker:

.. code:: bash

    $ sudo yum remove docker docker-common docker-selinux docker-engine
    $ sudo yum install -y yum-utils device-mapper-persistent-data lvm2
    $ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    $ sudo yum-config-manager --enable docker-ce-edge
    $ sudo yum-config-manager --enable docker-ce-test
    $ sudo yum install docker-ce
    $ sudo usermod -aG docker <username>
    $ sudo systemctl status docker

Replace <username> above with an appropriate user name.

If this is the first time you are installing a package from a recently added
repository, you will be prompted to accept the GPG key, and the key's
fingerprint will be shown. Verify that the fingerprint is correct, and if so,
accept the key. The fingerprint should match
060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35:

.. code:: bash

    Retrieving key from https://download.docker.com/linux/centos/gpg
    Importing GPG key 0x621E9F35:
    Userid     : "Docker Release (CE rpm) <docker@docker.com>"
    Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
    From       : https://download.docker.com/linux/centos/gpg
Manual proxy configuration for docker
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This applies for both CentOS and Ubuntu.

If you are behind an HTTP or HTTPS proxy server, you will need to add this
configuration in the Docker systemd service file.

1. Create a systemd drop-in directory for the docker service:

.. code:: bash

    $ sudo mkdir -p /etc/systemd/system/docker.service.d

2. Create a file
   called /etc/systemd/system/docker.service.d/http-proxy.conf that adds
   the HTTP_PROXY environment variable:

.. code:: bash

    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:80/"

Or, if you are behind an HTTPS proxy server, create a file
called /etc/systemd/system/docker.service.d/https-proxy.conf that adds
the HTTPS_PROXY environment variable:

.. code:: bash

    [Service]
    Environment="HTTPS_PROXY=https://proxy.example.com:443/"

Or create a single file with all the proxy configurations:
/etc/systemd/system/docker.service.d/proxy.conf

.. code:: bash

    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:80/"
    Environment="HTTPS_PROXY=https://proxy.example.com:443/"
    Environment="FTP_PROXY=ftp://proxy.example.com:443/"
    Environment="NO_PROXY=localhost"

3. Flush the changes:

.. code:: bash

    $ sudo systemctl daemon-reload

4. Restart the Docker service:

.. code:: bash

    $ sudo systemctl restart docker

5. Check the docker environment variables:

.. code:: bash

    $ sudo systemctl show --property=Environment docker
Test docker installation
^^^^^^^^^^^^^^^^^^^^^^^^

This applies for both CentOS and Ubuntu.

.. code:: bash

    $ sudo docker run hello-world

The output should be something like:

.. code:: bash

    Unable to find image 'hello-world:latest' locally
    latest: Pulling from library/hello-world
    5b0f327be733: Pull complete
    Digest: sha256:07d5f7800dfe37b8c2196c7b1c524c33808ce2e0f74e7aa00e603295ca9a0972
    Status: Downloaded newer image for hello-world:latest

    Hello from Docker!
    This message shows that your installation appears to be working correctly.

    To generate this message, Docker took the following steps:
    1. The Docker client contacted the Docker daemon.
    2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    3. The Docker daemon created a new container from that image which runs the
       executable that produces the output you are currently reading.
    4. The Docker daemon streamed that output to the Docker client, which sent it
       to your terminal.

To try something more ambitious, you can run an Ubuntu container with:

.. code:: bash

    $ docker run -it ubuntu bash
Build and Run Collectd Docker Image
-----------------------------------

Download the collectd docker image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you wish to use a pre-built barometer image, you can pull the barometer
image from https://hub.docker.com/r/opnfv/barometer-collectd/

.. code:: bash

    $ docker pull opnfv/barometer-collectd

Build the collectd docker image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: bash

    $ git clone https://gerrit.opnfv.org/gerrit/barometer
    $ cd barometer/docker/barometer-collectd
    $ sudo docker build -t opnfv/barometer-collectd --build-arg http_proxy=`echo $http_proxy` \
      --build-arg https_proxy=`echo $https_proxy` --network=host -f Dockerfile .

The main directory of the barometer source code (the directory that contains the
'docker', 'docs', 'src' and 'systems' sub-directories) will be referred to as
``<BAROMETER_REPO_DIR>``.

In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need to be
passed only if the system is behind an HTTP or HTTPS proxy server.

Check the docker images:

.. code:: bash

    $ sudo docker images

The output should contain a barometer-collectd image:

.. code:: bash

    REPOSITORY                 TAG      IMAGE ID       CREATED       SIZE
    opnfv/barometer-collectd   latest   05f2a3edd96b   3 hours ago   1.2GB
    centos                     7        196e0ce0c9fb   4 weeks ago   197MB
    centos                     latest   196e0ce0c9fb   4 weeks ago   197MB
    hello-world                latest   05a3bd381fc2   4 weeks ago   1.84kB
Run the collectd docker image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: bash

    $ cd <BAROMETER_REPO_DIR>
    $ sudo docker run -ti --net=host -v \
      `pwd`/src/collectd/collectd_sample_configs:/opt/collectd/etc/collectd.conf.d \
      -v /var/run:/var/run -v /tmp:/tmp --privileged opnfv/barometer-collectd

The docker collectd image contains configuration for all the collectd
plugins. In the command above we are overriding
/opt/collectd/etc/collectd.conf.d by mounting a host directory
src/collectd/collectd_sample_configs that contains only the sample
configurations we are interested in running.

*If some dependencies for plugins listed in the configuration directory
aren't met, then collectd startup may fail (collectd tries to
initialize plugin configurations for all given config files that can
be found in the shared configs directory and may fail if some dependency
is missing).*

If `DPDK` or `RDT` can't be installed on the host, then the corresponding config
files should be removed from the shared configuration directory
(`<BAROMETER_REPO_DIR>/src/collectd/collectd_sample_configs/`) prior
to starting the barometer-collectd container. For example: in case of missing
`DPDK` functionality on the host, `dpdkstat.conf` and `dpdkevents.conf`
should be removed.

Sample configurations can be found at:
https://github.com/opnfv/barometer/tree/master/src/collectd/collectd_sample_configs

A list of barometer-collectd dependencies on the host for various plugins
can be found at:
https://wiki.opnfv.org/display/fastpath/Barometer-collectd+host+dependencies

Check your docker image is running:

.. code:: bash

    $ sudo docker ps

To make some changes when the container is running, run:

.. code:: bash

    $ sudo docker exec -ti <CONTAINER ID> /bin/bash
Build and Run InfluxDB and Grafana docker images
------------------------------------------------

The barometer-influxdb image is based on the influxdb:1.3.7 image from the influxdb dockerhub. To
view details on the base image please visit
`https://hub.docker.com/_/influxdb/ <https://hub.docker.com/_/influxdb/>`_. The page includes details
of exposed ports and configurable environment variables of the base image.

The barometer-grafana image is based on the grafana:4.6.3 image from the grafana dockerhub. To view
details on the base image please visit
`https://hub.docker.com/r/grafana/grafana/ <https://hub.docker.com/r/grafana/grafana/>`_. The page
includes details on exposed ports and configurable environment variables of the base image.

The barometer-grafana image includes a pre-configured source and dashboards to display statistics
exposed by the barometer-collectd image. The default datasource is an influxdb database running on
localhost, but the address of the influxdb server can be modified when launching the image by setting
the environment variable influxdb_host to the IP or hostname of the host on which the influxdb server
is running.

Additional dashboards can be added to barometer-grafana by mapping a volume to /opt/grafana/dashboards.
In the case where a folder is mounted to this volume, only files included in this folder will be visible
inside barometer-grafana. To ensure all default files are also loaded, please ensure they are included
in the volume folder being mounted. Appropriate examples are given in section `Run the Grafana docker image`_
Download the InfluxDB and Grafana docker images
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you wish to use the pre-built barometer project's influxdb and grafana images, you can pull the
images from https://hub.docker.com/r/opnfv/barometer-influxdb/ and https://hub.docker.com/r/opnfv/barometer-grafana/

If your preference is to build the images locally please see sections `Build InfluxDB docker image`_ and
`Build Grafana docker image`_

.. code:: bash

    $ docker pull opnfv/barometer-influxdb
    $ docker pull opnfv/barometer-grafana

If you have pulled the pre-built barometer-influxdb and barometer-grafana images there is no
requirement to complete the steps outlined in sections `Build InfluxDB docker image`_ and
`Build Grafana docker image`_ and you can proceed directly to section
`Run the Influxdb and Grafana Images`_
Build InfluxDB docker image
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Build the influxdb image from the Dockerfile:

.. code:: bash

    $ cd barometer/docker/barometer-influxdb
    $ sudo docker build -t opnfv/barometer-influxdb --build-arg http_proxy=`echo $http_proxy` \
      --build-arg https_proxy=`echo $https_proxy` --network=host -f Dockerfile .

In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need to
be passed only if the system is behind an HTTP or HTTPS proxy server.

Check the docker images:

.. code:: bash

    $ sudo docker images

The output should contain an influxdb image:

.. code:: bash

    REPOSITORY                 TAG      IMAGE ID       CREATED      SIZE
    opnfv/barometer-influxdb   latest   1e4623a59fe5   3 days ago   191MB

Build Grafana docker image
^^^^^^^^^^^^^^^^^^^^^^^^^^

Build the Grafana image from the Dockerfile:

.. code:: bash

    $ cd barometer/docker/barometer-grafana
    $ sudo docker build -t opnfv/barometer-grafana --build-arg http_proxy=`echo $http_proxy` \
      --build-arg https_proxy=`echo $https_proxy` -f Dockerfile .

In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need to
be passed only if the system is behind an HTTP or HTTPS proxy server.

Check the docker images:

.. code:: bash

    $ sudo docker images

The output should contain a grafana image:

.. code:: bash

    REPOSITORY                TAG      IMAGE ID       CREATED       SIZE
    opnfv/barometer-grafana   latest   05f2a3edd96b   3 hours ago   1.2GB
Run the Influxdb and Grafana Images
-----------------------------------

Run the InfluxDB docker image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: bash

    $ sudo docker run -tid -v /var/lib/influxdb:/var/lib/influxdb --net=host \
      --name bar-influxdb opnfv/barometer-influxdb

Check your docker image is running:

.. code:: bash

    $ sudo docker ps

To make some changes when the container is running, run:

.. code:: bash

    $ sudo docker exec -ti <CONTAINER ID> /bin/bash

When both the collectd and InfluxDB containers are located on the same host,
no additional configuration has to be added and you can proceed directly to
the `Run the Grafana docker image`_ section.
Modify collectd to support InfluxDB on another host
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the InfluxDB and collectd containers are located on separate hosts, then
additional configuration has to be done in the ``collectd`` container - it
normally sends data using the network plugin to 'localhost/127.0.0.1', therefore
changing the output location is required:

1. Stop and remove the running bar-collectd container (if it is running):

.. code:: bash

    $ sudo docker ps #to get the collectd container name
    $ sudo docker rm -f <COLLECTD_CONTAINER_NAME>

2. Go to the location where the shared collectd config files are stored:

.. code:: bash

    $ cd <BAROMETER_REPO_DIR>
    $ cd src/collectd/collectd_sample_configs

3. Edit the content of the ``network.conf`` file. By default this file contains:

.. code:: bash

    Server "127.0.0.1" "25826"

The ``127.0.0.1`` string has to be replaced with the IP address of the host where
the InfluxDB container is running (e.g. ``192.168.121.111``). Edit this using your
favorite text editor.
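Alternatively, the replacement can be done from the shell; the following is a sketch where the target IP address is an example value:

.. code:: bash

    $ sed -i 's/127\.0\.0\.1/192.168.121.111/' network.conf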
4. Start the collectd container again, as described in the
   `Run the collectd docker image`_ chapter:

.. code:: bash

    $ cd <BAROMETER_REPO_DIR>
    $ sudo docker run -ti --name bar-collectd --net=host -v \
      `pwd`/src/collectd/collectd_sample_configs:/opt/collectd/etc/collectd.conf.d \
      -v /var/run:/var/run -v /tmp:/tmp --privileged opnfv/barometer-collectd

Now the collectd container will be sending data to the InfluxDB container located
on the remote host pointed to by the IP configured in step 3.
Run the Grafana docker image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Connecting to an influxdb instance running on the local system and adding your own custom dashboards:

.. code:: bash

    $ cd <BAROMETER_REPO_DIR>
    $ sudo docker run -tid -v /var/lib/grafana:/var/lib/grafana \
      -v ${PWD}/docker/barometer-grafana/dashboards:/opt/grafana/dashboards \
      --name bar-grafana --net=host opnfv/barometer-grafana

Connecting to an influxdb instance running on a remote system with a hostname of someserver
and an IP address of 192.168.121.111:

.. code:: bash

    $ sudo docker run -tid -v /var/lib/grafana:/var/lib/grafana --net=host -e \
      influxdb_host=someserver --add-host someserver:192.168.121.111 --name \
      bar-grafana opnfv/barometer-grafana

Check your docker image is running:

.. code:: bash

    $ sudo docker ps

To make some changes when the container is running, run:

.. code:: bash

    $ sudo docker exec -ti <CONTAINER ID> /bin/bash

Connect to <host_ip>:3000 with a browser and log into grafana: admin/admin
Cleanup of influxdb/grafana configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When a user wants to remove the current grafana and influxdb configuration,
the following actions have to be performed:

1. Stop and remove the running influxdb and grafana containers:

.. code:: bash

    $ sudo docker rm -f bar-grafana bar-influxdb

2. Remove the shared influxdb and grafana folders from the host:

.. code:: bash

    $ sudo rm -rf /var/lib/grafana
    $ sudo rm -rf /var/lib/influxdb

The shared folders store the configuration of the grafana and influxdb
containers. In case of changing the influxdb or grafana configuration
(e.g. moving influxdb to another host) it is good practice to perform a
cleanup of the shared folders so as not to affect the new setup with an old
configuration.
Build and Run VES and Kafka Docker Images
-----------------------------------------

Download VES and Kafka docker images
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you wish to use the pre-built barometer project's VES and kafka images, you can pull the
images from https://hub.docker.com/r/opnfv/barometer-ves/ and https://hub.docker.com/r/opnfv/barometer-kafka/

If your preference is to build the images locally please see sections `Build Kafka docker image`_ and
`Build VES docker image`_

.. code:: bash

    $ docker pull opnfv/barometer-kafka
    $ docker pull opnfv/barometer-ves

If you have pulled the pre-built images there is no requirement to complete the steps outlined
in sections `Build Kafka docker image`_ and `Build VES docker image`_ and you can proceed directly
to section `Run Kafka docker image`_
Build Kafka docker image
^^^^^^^^^^^^^^^^^^^^^^^^

Build the Kafka docker image:

.. code:: bash

    $ cd barometer/docker/barometer-kafka
    $ sudo docker build -t opnfv/barometer-kafka --build-arg http_proxy=`echo $http_proxy` \
      --build-arg https_proxy=`echo $https_proxy` -f Dockerfile .

In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need
to be passed only if the system is behind an HTTP or HTTPS proxy server.

Check the docker images:

.. code:: bash

    $ sudo docker images

The output should contain a barometer image:

.. code:: bash

    REPOSITORY              TAG      IMAGE ID       CREATED       SIZE
    opnfv/barometer-kafka   latest   05f2a3edd96b   3 hours ago   1.2GB

Build VES docker image
^^^^^^^^^^^^^^^^^^^^^^

Build the VES application docker image:

.. code:: bash

    $ cd barometer/docker/barometer-ves
    $ sudo docker build -t opnfv/barometer-ves --build-arg http_proxy=`echo $http_proxy` \
      --build-arg https_proxy=`echo $https_proxy` -f Dockerfile .

In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need
to be passed only if the system is behind an HTTP or HTTPS proxy server.

Check the docker images:

.. code:: bash

    $ sudo docker images

The output should contain a barometer image:

.. code:: bash

    REPOSITORY            TAG      IMAGE ID       CREATED       SIZE
    opnfv/barometer-ves   latest   05f2a3edd96b   3 hours ago   1.2GB
Run Kafka docker image
^^^^^^^^^^^^^^^^^^^^^^

Before running Kafka, an instance of Zookeeper must be running for the Kafka broker to register
with. Zookeeper can be running locally or on a remote platform. Kafka's broker_id and the address
of its zookeeper instance can be configured by setting values for the environment variables
'broker_id' and 'zookeeper_node'. In instances where 'broker_id' and/or 'zookeeper_node' is not
set, the default settings of broker_id=0 and zookeeper_node=localhost are used. In the instance
where Zookeeper is running on the same node as Kafka and there is a one-to-one relationship
between Zookeeper and Kafka, the default settings can be used. The docker argument ``--add-host``
adds a hostname and IP address to the /etc/hosts file in the container.

Run the zookeeper docker image:

.. code:: bash

    $ sudo docker run -tid --net=host -p 2181:2181 zookeeper:3.4.11

Run the kafka docker image, which connects with a zookeeper instance running on the same node
with a 1:1 relationship:

.. code:: bash

    $ sudo docker run -tid --net=host -p 9092:9092 opnfv/barometer-kafka

Run the kafka docker image, which connects with a zookeeper instance running on a node with an
IP address of 192.168.121.111, using a broker ID of 1:

.. code:: bash

    $ sudo docker run -tid --net=host -p 9092:9092 --env broker_id=1 --env zookeeper_node=zookeeper --add-host \
      zookeeper:192.168.121.111 opnfv/barometer-kafka
1010 Run VES Application docker image
1011 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default the VES application uses the configuration file ``ves_app_config.conf`` from
the directory ``barometer/3rd_party/collectd-ves-app/ves_app/config/`` and the
``host.yaml`` file from ``barometer/3rd_party/collectd-ves-app/ves_app/yaml/``. If you
wish to use a custom config file, it should be mounted to the mount point
``/opt/ves/config/ves_app_config.conf``. To use an alternative yaml file from the folder
``barometer/3rd_party/collectd-ves-app/ves_app/yaml``, the name of the yaml file should be
passed as an additional command. If you wish to use a custom yaml file, the file should be
mounted to the mount point ``/opt/ves/yaml/``. Please see the examples below.
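The file-selection rules above can be sketched as follows; ``select_yaml`` is a
hypothetical helper used for illustration only, not a function shipped in the image:

```shell
# Hypothetical helper illustrating the yaml selection described above:
# the container command names a file under /opt/ves/yaml/, and
# host.yaml is used when no name is given.
select_yaml() {
    echo "/opt/ves/yaml/${1:-host.yaml}"
}

select_yaml              # default yaml file
select_yaml guest.yaml   # alternative yaml file
```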
Run the VES docker image with the default configuration:

.. code:: bash

    $ sudo docker run -tid --net=host opnfv/barometer-ves
Run the VES docker image with the ``guest.yaml`` file from
``barometer/3rd_party/collectd-ves-app/ves_app/yaml/``:

.. code:: bash

    $ sudo docker run -tid --net=host opnfv/barometer-ves guest.yaml
Run the VES docker image using custom config and yaml files. In the example below the
``yaml/`` folder contains a file named ``custom.yaml``:

.. code:: bash

    $ sudo docker run -tid --net=host -v ${PWD}/custom.config:/opt/ves/config/ves_app_config.conf \
      -v ${PWD}/yaml/:/opt/ves/yaml/ opnfv/barometer-ves custom.yaml
Run VES Test Collector application
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The VES Test Collector application can be used to display the platform-wide metrics that
are collected by the barometer-ves container.
Setup instructions are located in: :ref:`Setup VES Test Collector`
Build and Run DMA and Redis Docker Images
-----------------------------------------
Download DMA docker images
^^^^^^^^^^^^^^^^^^^^^^^^^^
If you wish to use the pre-built barometer project's DMA images, you can pull the
images from https://hub.docker.com/r/opnfv/barometer-dma/

If your preference is to build the images locally, please see section `Build DMA Docker Image`_.

.. code:: bash

    $ docker pull opnfv/barometer-dma

If you have pulled the pre-built image there is no requirement to complete the steps
outlined in section `Build DMA Docker Image`_ and you can proceed directly to section
`Run DMA Docker Image`_
Build DMA docker image
^^^^^^^^^^^^^^^^^^^^^^

Build the DMA docker image:

.. code:: bash

    $ cd barometer/docker/barometer-dma
    $ sudo docker build -t opnfv/barometer-dma --build-arg http_proxy=`echo $http_proxy` \
      --build-arg https_proxy=`echo $https_proxy` -f Dockerfile .
In the above mentioned ``docker build`` command, the ``http_proxy`` and ``https_proxy``
arguments need to be passed only if the system is behind an HTTP or HTTPS proxy server.
Check the docker images:

.. code:: bash

    $ sudo docker images

Output should contain a barometer image:

.. code::

    REPOSITORY            TAG      IMAGE ID       CREATED       SIZE
    opnfv/barometer-dma   latest   2f14fbdbd498   3 hours ago   941 MB
Run Redis docker image
^^^^^^^^^^^^^^^^^^^^^^
Before running DMA, Redis must be running.

Run the Redis docker image:

.. code:: bash

    $ sudo docker run -tid -p 6379:6379 --name barometer-redis redis

Check that your docker image is running:

.. code:: bash

    $ sudo docker ps
Run DMA docker image
^^^^^^^^^^^^^^^^^^^^
Run the DMA docker image with the default configuration:

.. code:: bash

    $ cd barometer/docker/barometer-dma
    $ sudo mkdir /etc/barometer-dma
    $ sudo cp ../../src/dma/examples/config.toml /etc/barometer-dma/
    $ sudo vi /etc/barometer-dma/config.toml
    (edit amqp_password and os_password: the OpenStack admin password)
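As a hedged sketch, the fields to edit may look like the fragment below. Only the two key
names come from the edit step above; the surrounding layout and remaining keys of
``config.toml`` are not reproduced here, so refer to the shipped example file:

```toml
# Illustrative fragment only - see src/dma/examples/config.toml for
# the full file and the correct placement of these keys.
amqp_password = "<AMQP password>"
os_password   = "<OpenStack admin password>"
```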
.. code:: bash

    (When there is no key for SSH access authentication)
    # ssh-keygen
    (Press Enter until done)

    (Backup if necessary)
    # cp ~/.ssh/authorized_keys ~/.ssh/authorized_keys_org
    # cat ~/.ssh/authorized_keys_org ~/.ssh/id_rsa.pub \
      > ~/.ssh/authorized_keys
.. code:: bash

    $ sudo docker run -tid --net=host --name server \
      -v /etc/barometer-dma:/etc/barometer-dma \
      -v /root/.ssh/id_rsa:/root/.ssh/id_rsa \
      -v /etc/collectd/collectd.conf.d:/etc/collectd/collectd.conf.d \
      opnfv/barometer-dma /server

    $ sudo docker run -tid --net=host --name infofetch \
      -v /etc/barometer-dma:/etc/barometer-dma \
      -v /var/run/libvirt:/var/run/libvirt \
      opnfv/barometer-dma /infofetch
(Execute when installing the threshold evaluation binary)

.. code:: bash

    $ sudo docker cp infofetch:/threshold ./
    $ sudo ln -s ${PWD}/threshold /usr/local/bin/
.. [1] https://docs.docker.com/engine/admin/systemd/#httphttps-proxy
.. [2] https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-using-the-repository
.. [3] https://docs.docker.com/engine/userguide/