-Encrypt the docker remote API via TLS for Ubuntu and CentOS\r
-\r
-[Introduction]\r
-The Docker daemon can listen to Docker Remote API requests via three types of\r
-Socket: unix, tcp and fd. By default, a unix domain socket (or IPC socket) is\r
-created at /var/run/docker.sock, requiring either root permission, or docker\r
-group membership.\r
-\r
-Port 2375 is conventionally used for un-encrypted communition with Docker daemon\r
-remotely, where docker server can be accessed by any docker client via tcp socket\r
-in local area network. You can listen to port 2375 on all network interfaces with\r
--H tcp://0.0.0.0:2375, where 0.0.0.0 means any available IP address on host, and\r
-tcp://0.0.0.0:2375 indicates that port 2375 is listened on any IP of daemon host.\r
-If we want to make docker server open on the Internet via TCP port, and only trusted\r
-clients have the right to access the docker server in a safe manner, port 2376 for\r
-encrypted communication with the daemon should be listened. It can be achieved to\r
-create certificate and distribute it to the trusted clients.\r
-\r
-Through creating self-signed certificate, and using --tlsverify command when running\r
-Docker daemon, Docker daemon opens the TLS authentication. Thus only the clients\r
-with related private key files can have access to the Docker daemon's server. As\r
-long as the key files for encryption are secure between docker server and client,\r
-the Docker daemon can keep secure.\r
-In summary,\r
-Firstly we should create docker server certificate and related key files, which\r
-are distributed to the trusted clients.\r
-Then the clients with related key files can access docker server.\r
-\r
-[Steps]\r
-1.0. Create a CA, server and client keys with OpenSSL.\r
- OpenSSL is used to generate certificate, and can be installed as follows.\r
- apt-get install openssl openssl-devel\r
-\r
-1.1 First generate CA private and public keys.\r
- openssl genrsa -aes256 -out ca-key.pem 4096\r
- openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem\r
-\r
- You are about to be asked to enter information that will be incorporated\r
- into your certificate request, where the instance of $HOST should be replaced\r
- with the DNS name of your Docker daemon's host, here the DNS name of my Docker\r
- daemon is ly.\r
- Common Name (e.g. server FQDN or YOUR name) []:$HOST\r
-\r
-1.2 Now we have a CA (ca-key.pem and ca.pem), you can create a server key and\r
-certificate signing request.\r
- openssl genrsa -out server-key.pem 4096\r
- openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr\r
-\r
-1.3 Sign the public key with our CA.\r
- TLS connections can be made via IP address as well as DNS name, they need to be\r
- specified when creating the certificate.\r
-\r
- echo subjectAltName = IP:172.16.10.121,IP:127.0.0.1 > extfile.cnf\r
- openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \\r
- -CAcreateserial -out server-cert.pem -extfile extfile.cnf\r
-\r
-1.4 For client authentication, create a client key and certificate signing request.\r
- openssl genrsa -out key.pem 4096\r
- openssl req -subj '/CN=client' -new -key key.pem -out client.csr\r
-\r
-1.5 To make the key suitable for client authentication, create an extensions config file.\r
- echo extendedKeyUsage = clientAuth > extfile.cnf\r
-\r
-1.6 Sign the public key and after generating cert.pem and server-cert.pem, two certificate\r
- signing requests can be removed.\r
- openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \\r
- -CAcreateserial -out cert.pem -extfile extfile.cnf\r
-\r
-1.7 In order to protect your keys from accidental damage, you may change file modes to\r
- be only readable.\r
- chmod -v 0400 ca-key.pem key.pem server-key.pem\r
- chmod -v 0444 ca.pem server-cert.pem cert.pem\r
-\r
-1.8 Build docker server\r
- dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \\r
- -H=0.0.0.0:2376\r
- Then, it can be seen from the command 'netstat -ntlp' that port 2376 has been listened\r
- and the Docker daemon only accept connections from clients providing a certificate\r
- trusted by our CA.\r
-\r
-1.9 Distribute the keys to the client\r
- scp /etc/docker/ca.pem wwl@172.16.10.121:/etc/docker\r
- scp /etc/docker/cert.pem wwl@172.16.10.121:/etc/docker\r
- scp /etc/docker/key.pem wwl@172.16.10.121:/etc/docker\r
- Where, wwl and 172.16.10.121 is the username and IP of the client respectively.\r
- And the password of the client is needed when you distribute the keys to the client.\r
-\r
-1.10 To access Docker daemon from the client via keys.\r
- docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \\r
- -H=$HOST:2376 version\r
-\r
- Then we can operate docker in the Docker daemon from the client vis keys, for example:\r
- 1) create container from the client\r
- docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=ly:2376 run -d \\r
- -it --name w1 grafana/grafana\r
- 2) list containers from the client\r
- docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=ly:2376 pa -a\r
- 3) stop/start containers from the client\r
- docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=ly:2376 stop w1\r
- docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=ly:2376 start w1\r
-\r
-\r
-\r
-\r
-\r
-\r
-\r
+Encrypt the Docker Remote API via TLS for Ubuntu and CentOS
+
+[Introduction]
+The Docker daemon can listen to Docker Remote API requests via three types of
+Socket: unix, tcp and fd. By default, a unix domain socket (or IPC socket) is
+created at /var/run/docker.sock, requiring either root permission, or docker
+group membership.
+
+Port 2375 is conventionally used for unencrypted remote communication with the
+Docker daemon: any Docker client on the local network can reach the daemon over
+a TCP socket. Starting the daemon with -H tcp://0.0.0.0:2375 makes it listen on
+port 2375 on all network interfaces, since 0.0.0.0 matches every IP address of
+the host. If the Docker daemon is to be exposed over the Internet while only
+trusted clients may access it in a safe manner, it should instead listen on
+port 2376, the conventional port for encrypted communication. This is achieved
+by creating certificates and distributing them to the trusted clients.
+
+By creating a self-signed certificate authority and running the Docker daemon
+with the --tlsverify flag, the daemon enforces TLS client authentication, so
+only clients holding a certificate and key signed by that CA can connect. As
+long as the key files stay secret on both the server and the client, the
+Docker daemon remains secure.
+In summary:
+First, create the CA, the server certificate and the related key files, and
+distribute the client certificate and key to the trusted clients.
+Then only the clients holding those key files can access the Docker daemon.
+
+[Steps]
+1.0 Create a CA, server and client keys with OpenSSL.
+ OpenSSL is used to generate the certificates, and can be installed as follows:
+ apt-get install openssl                  (Ubuntu)
+ yum install openssl openssl-devel        (CentOS)
+
+1.1 First generate CA private and public keys.
+ openssl genrsa -aes256 -out ca-key.pem 4096
+ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
+
+ You will be asked to enter information that is incorporated into the
+ certificate. At the Common Name prompt, replace $HOST with the DNS name of
+ your Docker daemon's host; in this guide the daemon's DNS name is ly.
+ Common Name (e.g. server FQDN or YOUR name) []:$HOST
+
+1.2 Now that we have a CA (ca-key.pem and ca.pem), create a server key and a
+ certificate signing request.
+ openssl genrsa -out server-key.pem 4096
+ openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
+
+1.3 Sign the server's public key with our CA.
+ TLS connections can be made via IP address as well as DNS name, so both need
+ to be listed as subjectAltName entries when creating the certificate.
+
+ echo subjectAltName = DNS:$HOST,IP:172.16.10.121,IP:127.0.0.1 > extfile.cnf
+ openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
+ -CAcreateserial -out server-cert.pem -extfile extfile.cnf
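To confirm the subjectAltName entries actually made it into the signed certificate, `openssl x509` can print them back. The sketch below builds a throwaway self-signed certificate carrying the same SAN list, so the commands can be run anywhere; point the inspection command at server-cert.pem to check the real certificate (the -addext option assumes OpenSSL 1.1.1 or newer):

```shell
# Demo only: a throwaway cert with the SAN list from this step
# (substitute server-cert.pem to inspect the real certificate).
tmp=$(mktemp -d) && cd "$tmp"
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem \
    -subj "/CN=demo" -days 1 \
    -addext "subjectAltName = IP:172.16.10.121,IP:127.0.0.1" \
    -out demo-cert.pem 2>/dev/null
# Print only the subjectAltName extension of the certificate
openssl x509 -in demo-cert.pem -noout -ext subjectAltName
```

If an expected IP or DNS name is missing from this output, clients connecting to that address will fail certificate verification.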
+
+1.4 For client authentication, create a client key and certificate signing request.
+ openssl genrsa -out key.pem 4096
+ openssl req -subj '/CN=client' -new -key key.pem -out client.csr
+
+1.5 To make the key suitable for client authentication, create an extensions config file.
+ echo extendedKeyUsage = clientAuth > extfile.cnf
+
+1.6 Sign the client's public key. Once cert.pem and server-cert.pem have been
+ generated, the two certificate signing requests can be removed.
+ openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
+ -CAcreateserial -out cert.pem -extfile extfile.cnf
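The cleanup mentioned above is a one-liner; sketched here with stand-in files so it can be run anywhere (in the real working directory, just run the rm line):

```shell
cd "$(mktemp -d)"
touch client.csr server.csr extfile.cnf    # stand-ins for the real files
# The CSRs and the extensions file are no longer needed once the certs exist
rm -v client.csr server.csr extfile.cnf
```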
+
+1.7 To protect your keys from accidental damage or disclosure, make the
+ private keys readable only by their owner, and the certificates read-only.
+ chmod -v 0400 ca-key.pem key.pem server-key.pem
+ chmod -v 0444 ca.pem server-cert.pem cert.pem
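A quick way to confirm the modes took effect, sketched with stand-in files so it runs anywhere (GNU stat assumed; on the real host, run the stat line against the actual key files):

```shell
cd "$(mktemp -d)"
touch ca-key.pem key.pem server-key.pem ca.pem server-cert.pem cert.pem  # stand-ins
chmod -v 0400 ca-key.pem key.pem server-key.pem
chmod -v 0444 ca.pem server-cert.pem cert.pem
# Print each file's octal mode next to its name
stat -c '%a %n' ca-key.pem ca.pem
```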
+
+1.8 Start the Docker daemon with TLS enabled
+ dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
+ -H=0.0.0.0:2376
+ The command 'netstat -ntlp' now shows the daemon listening on port 2376,
+ and it only accepts connections from clients that present a certificate
+ trusted by our CA.
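Since the Remote API is plain HTTPS, the endpoint can also be exercised without the docker client at all. A reachability sketch (assumes the certificates from the earlier steps are in the current directory and that ly resolves to the daemon host; it degrades gracefully when no daemon is reachable):

```shell
HOST=ly   # DNS name of the Docker daemon host, as used throughout this guide
if curl -fsS --cacert ca.pem --cert cert.pem --key key.pem \
        "https://$HOST:2376/version" 2>/dev/null; then
    echo "TLS endpoint reachable"
else
    echo "daemon not reachable from here (expected when no daemon is running)"
fi
```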
+
+1.9 Distribute the keys to the client
+ scp /etc/docker/ca.pem wwl@172.16.10.121:/etc/docker
+ scp /etc/docker/cert.pem wwl@172.16.10.121:/etc/docker
+ scp /etc/docker/key.pem wwl@172.16.10.121:/etc/docker
+ Here wwl and 172.16.10.121 are the username and IP address of the client,
+ respectively; scp prompts for the client user's password during the copy.
+
+1.10 Access the Docker daemon from the client using the keys.
+ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
+ -H=$HOST:2376 version
+
+ The client can now drive the remote Docker daemon via the keys, for example:
+ 1) create container from the client
+ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=ly:2376 run -d \
+ -it --name w1 grafana/grafana
+ 2) list containers from the client
+ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=ly:2376 ps -a
+ 3) stop/start containers from the client
+ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=ly:2376 stop w1
+ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=ly:2376 start w1
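Repeating the three --tls flags on every invocation gets tedious; the docker client also reads them from environment variables. A sketch, using the host name and certificate directory assumed throughout this guide:

```shell
export DOCKER_HOST=tcp://ly:2376      # daemon address from step 1.8
export DOCKER_TLS_VERIFY=1            # equivalent to passing --tlsverify
export DOCKER_CERT_PATH=/etc/docker   # directory holding ca.pem, cert.pem, key.pem
# Plain commands now go over verified TLS, e.g.:
#   docker ps -a
#   docker version
echo "$DOCKER_HOST"
```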
+
+
+
+
+
+
+
-#!/bin/bash\r
-# SPDX-license-identifier: Apache-2.0\r
-\r
-# ******************************\r
-# Script to update the docker host configuration\r
-# to enable Docker Remote API\r
-# ******************************\r
-\r
-if [ -f /etc/lsb-release ]; then\r
- #tested on ubuntu 14.04 and 16.04\r
- if grep -q "#DOCKER_OPTS=" "/etc/default/docker"; then\r
- cp /etc/default/docker /etc/default/docker.bak\r
- sed -i 's/^#DOCKER_OPTS.*$/DOCKER_OPTS=\"-H unix:\/\/\/var\/run\/docker.sock -H tcp:\/\/0.0.0.0:2375\"/g' /etc/default/docker\r
- else\r
- echo DOCKER_OPTS=\"-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375\" >> /etc/default/docker\r
- fi\r
- service docker restart\r
- #docker start $(docker ps -aq)\r
-elif [ -f /etc/system-release ]; then\r
- #tested on centos 7.2\r
- if grep -q "ExecStart=\/usr\/bin\/docker-current daemon" "/lib/systemd/system/docker.service"; then\r
- cp /lib/systemd/system/docker.service /lib/systemd/system/docker.service.bak\r
- sed -i 's/^ExecStart=.*$/ExecStart=\/usr\/bin\/docker daemon -H tcp:\/\/0.0.0.0:2375 -H unix:\/\/\/var\/run\/docker.sock \\/g' /lib/systemd/system/docker.service\r
- systemctl daemon-reload\r
- systemctl restart docker\r
- else\r
- echo "to be implemented"\r
- fi\r
-else\r
- echo "OS is not supported"\r
-fi\r
-\r
-# Issue Note for Ubuntu\r
-# 1. If the configuration of the file /etc/default/docker does not take effect after restarting docker service,\r
-# you may try to modify /lib/systemd/system/docker.service\r
-# commands:\r
-# cp /lib/systemd/system/docker.service /lib/systemd/system/docker.service.bak\r
-# sed -i '/^ExecStart/i\EnvironmentFile=-/etc/default/docker' /lib/systemd/system/docker.service\r
-# sed -i '/ExecStart=\/usr\/bin\/dockerd/{;s/$/ \$DOCKER_OPTS/}' /lib/systemd/system/docker.service\r
-# systemctl daemon-reload\r
-# service docker restart\r
-# 2. Systemd is a system and session manager for Linux, where systemctl is one tool for systemd to view and control systemd.\r
-# If the file /lib/systemd/system/docker.service is modified, systemd has to be reloaded to scan new or changed units.\r
-# 1) systemd and related packages are available on the PPA. To use the PPA, first add it to your software sources list as follows.\r
-# add-apt-repository ppa:pitti/systemd\r
-# apt-get update\r
-# 2) system can be installed from the PPS as follows.\r
-# apt-get install systemd libpam-systemd systemd-ui\r
-\r
-\r
-\r
+#!/bin/bash
+# SPDX-license-identifier: Apache-2.0
+
+# ******************************
+# Script to update the docker host configuration
+# to enable Docker Remote API
+# ******************************
+
+if [ -f /etc/lsb-release ]; then
+ #tested on ubuntu 14.04 and 16.04
+ if grep -q "#DOCKER_OPTS=" "/etc/default/docker"; then
+ cp /etc/default/docker /etc/default/docker.bak
+ sed -i 's/^#DOCKER_OPTS.*$/DOCKER_OPTS=\"-H unix:\/\/\/var\/run\/docker.sock -H tcp:\/\/0.0.0.0:2375\"/g' /etc/default/docker
+ else
+ echo DOCKER_OPTS=\"-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375\" >> /etc/default/docker
+ fi
+ service docker restart
+ #docker start $(docker ps -aq)
+elif [ -f /etc/system-release ]; then
+ #tested on centos 7.2
+ if grep -q "ExecStart=\/usr\/bin\/docker-current daemon" "/lib/systemd/system/docker.service"; then
+ cp /lib/systemd/system/docker.service /lib/systemd/system/docker.service.bak
+ sed -i 's/^ExecStart=.*$/ExecStart=\/usr\/bin\/docker daemon -H tcp:\/\/0.0.0.0:2375 -H unix:\/\/\/var\/run\/docker.sock \\/g' /lib/systemd/system/docker.service
+ systemctl daemon-reload
+ systemctl restart docker
+ else
+ echo "to be implemented"
+ fi
+else
+ echo "OS is not supported"
+fi
+
+# Issue Note for Ubuntu
+# 1. If the configuration in /etc/default/docker does not take effect after restarting the docker service,
+# you may try to modify /lib/systemd/system/docker.service
+# commands:
+# cp /lib/systemd/system/docker.service /lib/systemd/system/docker.service.bak
+# sed -i '/^ExecStart/i\EnvironmentFile=-/etc/default/docker' /lib/systemd/system/docker.service
+# sed -i '/ExecStart=\/usr\/bin\/dockerd/{;s/$/ \$DOCKER_OPTS/}' /lib/systemd/system/docker.service
+# systemctl daemon-reload
+# service docker restart
+# 2. Systemd is a system and session manager for Linux; systemctl is the command-line tool used to inspect and control it.
+# If the file /lib/systemd/system/docker.service is modified, systemd has to be reloaded to scan new or changed units.
+# 1) systemd and related packages are available from a PPA. To use the PPA, first add it to your software sources list as follows.
+# add-apt-repository ppa:pitti/systemd
+# apt-get update
+# 2) systemd can then be installed from the PPA as follows.
+# apt-get install systemd libpam-systemd systemd-ui
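On systemd hosts there is an alternative to editing /lib/systemd/system/docker.service directly (direct edits are overwritten on package upgrades): a drop-in file, which systemd merges over the shipped unit. The sketch below writes the drop-in to a temporary directory so it is safe to run as-is; on a real host the target path is /etc/systemd/system/docker.service.d/override.conf, followed by systemctl daemon-reload and a docker restart:

```shell
dropin_dir=$(mktemp -d)   # stand-in for /etc/systemd/system/docker.service.d
cat > "$dropin_dir/override.conf" <<'EOF'
[Service]
# An empty ExecStart= clears the value shipped in docker.service;
# the second line re-declares it with the extra -H listeners.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
EOF
cat "$dropin_dir/override.conf"
```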
+
+
+
-general:\r
- directories:\r
- # Relative to the path where the repo is cloned:\r
- dir_vping: functest/opnfv_tests/openstack/vping\r
- dir_odl: functest/opnfv_tests/sdn/odl\r
- dir_rally: functest/opnfv_tests/openstack/rally\r
- dir_tempest_cases: functest/opnfv_tests/openstack/tempest/custom_tests\r
- dir_vIMS: functest/opnfv_tests/vnf/ims\r
- dir_onos: functest/opnfv_tests/sdn/onos/teston\r
- dir_onos_sfc: functest/opnfv_tests/sdn/onos/sfc\r
-\r
- # Absolute path\r
- dir_home: /home/opnfv\r
- dir_repos: /home/opnfv/repos\r
- dir_repo_functest: /home/opnfv/repos/functest\r
- dir_repo_rally: /home/opnfv/repos/rally\r
- dir_repo_tempest: /home/opnfv/repos/tempest\r
- dir_repo_releng: /home/opnfv/repos/releng\r
- dir_repo_vims_test: /home/opnfv/repos/vims-test\r
- dir_repo_sdnvpn: /home/opnfv/repos/sdnvpn\r
- dir_repo_sfc: /home/opnfv/repos/sfc\r
- dir_repo_onos: /home/opnfv/repos/onos\r
- dir_repo_promise: /home/opnfv/repos/promise\r
- dir_repo_doctor: /home/opnfv/repos/doctor\r
- dir_repo_copper: /home/opnfv/repos/copper\r
- dir_repo_ovno: /home/opnfv/repos/ovno\r
- dir_repo_parser: /home/opnfv/repos/parser\r
- dir_repo_domino: /home/opnfv/repos/domino\r
- dir_repo_snaps: /home/opnfv/repos/snaps\r
- dir_functest: /home/opnfv/functest\r
- dir_functest_test: /home/opnfv/repos/functest/functest/opnfv_tests\r
- dir_results: /home/opnfv/functest/results\r
- dir_functest_conf: /home/opnfv/functest/conf\r
- dir_functest_data: /home/opnfv/functest/data\r
- dir_vIMS_data: /home/opnfv/functest/data/vIMS/\r
- dir_rally_inst: /home/opnfv/.rally\r
-\r
- openstack:\r
- creds: /home/opnfv/functest/conf/openstack.creds\r
- snapshot_file: /home/opnfv/functest/conf/openstack_snapshot.yaml\r
-\r
- image_name: Cirros-0.3.4\r
- image_file_name: cirros-0.3.4-x86_64-disk.img\r
- image_disk_format: qcow2\r
-\r
- flavor_name: opnfv_flavor\r
- flavor_ram: 512\r
- flavor_disk: 1\r
- flavor_vcpus: 1\r
-\r
- # Private network for functest. Will be created by config_functest.py\r
- neutron_private_net_name: functest-net\r
- neutron_private_subnet_name: functest-subnet\r
- neutron_private_subnet_cidr: 192.168.120.0/24\r
- neutron_private_subnet_start: 192.168.120.2\r
- neutron_private_subnet_end: 192.168.120.254\r
- neutron_private_subnet_gateway: 192.168.120.254\r
- neutron_router_name: functest-router\r
-\r
- functest:\r
- testcases_yaml: /home/opnfv/repos/functest/functest/ci/testcases.yaml\r
-\r
-healthcheck:\r
- disk_image: /home/opnfv/functest/data/cirros-0.3.4-x86_64-disk.img\r
- disk_format: qcow2\r
- wait_time: 60\r
-\r
-snaps:\r
- use_keystone: True\r
- use_floating_ips: False\r
-\r
-vping:\r
- ping_timeout: 200\r
- vm_flavor: m1.tiny # adapt to your environment\r
- vm_name_1: opnfv-vping-1\r
- vm_name_2: opnfv-vping-2\r
- image_name: functest-vping\r
- vping_private_net_name: vping-net\r
- vping_private_subnet_name: vping-subnet\r
- vping_private_subnet_cidr: 192.168.130.0/24\r
- vping_router_name: vping-router\r
- vping_sg_name: vPing-sg\r
- vping_sg_descr: Security group for vPing test case\r
-\r
-onos_sfc:\r
- image_base_url: http://artifacts.opnfv.org/sfc/demo\r
- image_name: TestSfcVm\r
- image_file_name: firewall_block_image.img\r
-\r
-tempest:\r
- identity:\r
- tenant_name: tempest\r
- tenant_description: Tenant for Tempest test suite\r
- user_name: tempest\r
- user_password: tempest\r
- validation:\r
- ssh_timeout: 130\r
- private_net_name: tempest-net\r
- private_subnet_name: tempest-subnet\r
- private_subnet_cidr: 192.168.150.0/24\r
- router_name: tempest-router\r
- use_custom_images: False\r
- use_custom_flavors: False\r
-\r
-rally:\r
- deployment_name: opnfv-rally\r
- network_name: rally-net\r
- subnet_name: rally-subnet\r
- subnet_cidr: 192.168.140.0/24\r
- router_name: rally-router\r
-\r
-vIMS:\r
- general:\r
- tenant_name: vIMS\r
- tenant_description: vIMS Functionality Testing\r
- images:\r
- ubuntu:\r
- image_url: http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img\r
- image_name: ubuntu_14.04\r
- centos:\r
- image_url: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1510.qcow2\r
- image_name: centos_7\r
- cloudify:\r
- blueprint:\r
- url: https://github.com/boucherv-orange/cloudify-manager-blueprints.git\r
- branch: "3.3.1-build"\r
- requierments:\r
- ram_min: 3000\r
- os_image: centos_7\r
- inputs:\r
- keystone_username: ""\r
- keystone_password: ""\r
- keystone_tenant_name: ""\r
- keystone_url: ""\r
- manager_public_key_name: 'manager-kp'\r
- agent_public_key_name: 'agent-kp'\r
- image_id: ""\r
- flavor_id: "3"\r
- external_network_name: ""\r
- ssh_user: centos\r
- agents_user: ubuntu\r
- clearwater:\r
- blueprint:\r
- file_name: 'openstack-blueprint.yaml'\r
- name: "clearwater-opnfv"\r
- destination_folder: "opnfv-cloudify-clearwater"\r
- url: https://github.com/Orange-OpenSource/opnfv-cloudify-clearwater.git\r
- branch: "stable"\r
- deployment-name: 'clearwater-opnfv'\r
- requierments:\r
- ram_min: 1700\r
- os_image: ubuntu_14.04\r
- inputs:\r
- image_id: ''\r
- flavor_id: ''\r
- agent_user: 'ubuntu'\r
- external_network_name: ''\r
- public_domain: clearwater.opnfv\r
-ONOS:\r
- general:\r
- onosbench_username: 'root'\r
- onosbench_password: 'root'\r
- onoscli_username: 'root'\r
- onoscli_password: 'root'\r
- runtimeout: 300\r
- environment:\r
- OCT: '10.20.0.1'\r
- OC1: '10.20.0.7'\r
- OC2: '10.20.0.7'\r
- OC3: '10.20.0.7'\r
- OCN: '10.20.0.4'\r
- OCN2: '10.20.0.5'\r
- installer_master: '10.20.0.2'\r
- installer_master_username: 'root'\r
- installer_master_password: 'r00tme'\r
-multisite:\r
- fuel_environment:\r
- installer_username: 'root'\r
- installer_password: 'r00tme'\r
- compass_environment:\r
- installer_username: 'root'\r
- installer_password: 'root'\r
- multisite_controller_ip: '10.1.0.50'\r
-promise:\r
- tenant_name: promise\r
- tenant_description: promise Functionality Testing\r
- user_name: promiser\r
- user_pwd: test\r
- image_name: promise-img\r
- flavor_name: promise-flavor\r
- flavor_vcpus: 1\r
- flavor_ram: 128\r
- flavor_disk: 0\r
- network_name: promise-net\r
- subnet_name: promise-subnet\r
- subnet_cidr: 192.168.121.0/24\r
- router_name: promise-router\r
-\r
-example:\r
- example_vm_name: example-vm\r
- example_flavor: m1.small\r
- example_image_name: functest-example-vm\r
- example_private_net_name: example-net\r
- example_private_subnet_name: example-subnet\r
- example_private_subnet_cidr: 192.168.170.0/24\r
- example_router_name: example-router\r
- example_sg_name: example-sg\r
- example_sg_descr: Example Security group\r
-\r
-results:\r
- test_db_url: http://testresults.opnfv.org/test/api/v1\r
+general:
+ directories:
+ # Relative to the path where the repo is cloned:
+ dir_vping: functest/opnfv_tests/openstack/vping
+ dir_odl: functest/opnfv_tests/sdn/odl
+ dir_rally: functest/opnfv_tests/openstack/rally
+ dir_tempest_cases: functest/opnfv_tests/openstack/tempest/custom_tests
+ dir_vIMS: functest/opnfv_tests/vnf/ims
+ dir_onos: functest/opnfv_tests/sdn/onos/teston
+ dir_onos_sfc: functest/opnfv_tests/sdn/onos/sfc
+
+ # Absolute path
+ dir_home: /home/opnfv
+ dir_repos: /home/opnfv/repos
+ dir_repo_functest: /home/opnfv/repos/functest
+ dir_repo_rally: /home/opnfv/repos/rally
+ dir_repo_tempest: /home/opnfv/repos/tempest
+ dir_repo_releng: /home/opnfv/repos/releng
+ dir_repo_vims_test: /home/opnfv/repos/vims-test
+ dir_repo_sdnvpn: /home/opnfv/repos/sdnvpn
+ dir_repo_sfc: /home/opnfv/repos/sfc
+ dir_repo_onos: /home/opnfv/repos/onos
+ dir_repo_promise: /home/opnfv/repos/promise
+ dir_repo_doctor: /home/opnfv/repos/doctor
+ dir_repo_copper: /home/opnfv/repos/copper
+ dir_repo_ovno: /home/opnfv/repos/ovno
+ dir_repo_parser: /home/opnfv/repos/parser
+ dir_repo_domino: /home/opnfv/repos/domino
+ dir_repo_snaps: /home/opnfv/repos/snaps
+ dir_functest: /home/opnfv/functest
+ dir_functest_test: /home/opnfv/repos/functest/functest/opnfv_tests
+ dir_results: /home/opnfv/functest/results
+ dir_functest_conf: /home/opnfv/functest/conf
+ dir_functest_data: /home/opnfv/functest/data
+ dir_vIMS_data: /home/opnfv/functest/data/vIMS/
+ dir_rally_inst: /home/opnfv/.rally
+
+ openstack:
+ creds: /home/opnfv/functest/conf/openstack.creds
+ snapshot_file: /home/opnfv/functest/conf/openstack_snapshot.yaml
+
+ image_name: Cirros-0.3.4
+ image_file_name: cirros-0.3.4-x86_64-disk.img
+ image_disk_format: qcow2
+
+ flavor_name: opnfv_flavor
+ flavor_ram: 512
+ flavor_disk: 1
+ flavor_vcpus: 1
+
+ # Private network for functest. Will be created by config_functest.py
+ neutron_private_net_name: functest-net
+ neutron_private_subnet_name: functest-subnet
+ neutron_private_subnet_cidr: 192.168.120.0/24
+ neutron_private_subnet_start: 192.168.120.2
+ neutron_private_subnet_end: 192.168.120.254
+ neutron_private_subnet_gateway: 192.168.120.254
+ neutron_router_name: functest-router
+
+ functest:
+ testcases_yaml: /home/opnfv/repos/functest/functest/ci/testcases.yaml
+
+healthcheck:
+ disk_image: /home/opnfv/functest/data/cirros-0.3.4-x86_64-disk.img
+ disk_format: qcow2
+ wait_time: 60
+
+snaps:
+ use_keystone: True
+ use_floating_ips: False
+
+vping:
+ ping_timeout: 200
+ vm_flavor: m1.tiny # adapt to your environment
+ vm_name_1: opnfv-vping-1
+ vm_name_2: opnfv-vping-2
+ image_name: functest-vping
+ vping_private_net_name: vping-net
+ vping_private_subnet_name: vping-subnet
+ vping_private_subnet_cidr: 192.168.130.0/24
+ vping_router_name: vping-router
+ vping_sg_name: vPing-sg
+ vping_sg_descr: Security group for vPing test case
+
+onos_sfc:
+ image_base_url: http://artifacts.opnfv.org/sfc/demo
+ image_name: TestSfcVm
+ image_file_name: firewall_block_image.img
+
+tempest:
+ identity:
+ tenant_name: tempest
+ tenant_description: Tenant for Tempest test suite
+ user_name: tempest
+ user_password: tempest
+ validation:
+ ssh_timeout: 130
+ private_net_name: tempest-net
+ private_subnet_name: tempest-subnet
+ private_subnet_cidr: 192.168.150.0/24
+ router_name: tempest-router
+ use_custom_images: False
+ use_custom_flavors: False
+
+rally:
+ deployment_name: opnfv-rally
+ network_name: rally-net
+ subnet_name: rally-subnet
+ subnet_cidr: 192.168.140.0/24
+ router_name: rally-router
+
+vIMS:
+ general:
+ tenant_name: vIMS
+ tenant_description: vIMS Functionality Testing
+ images:
+ ubuntu:
+ image_url: http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
+ image_name: ubuntu_14.04
+ centos:
+ image_url: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1510.qcow2
+ image_name: centos_7
+ cloudify:
+ blueprint:
+ url: https://github.com/boucherv-orange/cloudify-manager-blueprints.git
+ branch: "3.3.1-build"
+ requierments:
+ ram_min: 3000
+ os_image: centos_7
+ inputs:
+ keystone_username: ""
+ keystone_password: ""
+ keystone_tenant_name: ""
+ keystone_url: ""
+ manager_public_key_name: 'manager-kp'
+ agent_public_key_name: 'agent-kp'
+ image_id: ""
+ flavor_id: "3"
+ external_network_name: ""
+ ssh_user: centos
+ agents_user: ubuntu
+ clearwater:
+ blueprint:
+ file_name: 'openstack-blueprint.yaml'
+ name: "clearwater-opnfv"
+ destination_folder: "opnfv-cloudify-clearwater"
+ url: https://github.com/Orange-OpenSource/opnfv-cloudify-clearwater.git
+ branch: "stable"
+ deployment-name: 'clearwater-opnfv'
+ requierments:
+ ram_min: 1700
+ os_image: ubuntu_14.04
+ inputs:
+ image_id: ''
+ flavor_id: ''
+ agent_user: 'ubuntu'
+ external_network_name: ''
+ public_domain: clearwater.opnfv
+ONOS:
+ general:
+ onosbench_username: 'root'
+ onosbench_password: 'root'
+ onoscli_username: 'root'
+ onoscli_password: 'root'
+ runtimeout: 300
+ environment:
+ OCT: '10.20.0.1'
+ OC1: '10.20.0.7'
+ OC2: '10.20.0.7'
+ OC3: '10.20.0.7'
+ OCN: '10.20.0.4'
+ OCN2: '10.20.0.5'
+ installer_master: '10.20.0.2'
+ installer_master_username: 'root'
+ installer_master_password: 'r00tme'
+multisite:
+ fuel_environment:
+ installer_username: 'root'
+ installer_password: 'r00tme'
+ compass_environment:
+ installer_username: 'root'
+ installer_password: 'root'
+ multisite_controller_ip: '10.1.0.50'
+promise:
+ tenant_name: promise
+ tenant_description: promise Functionality Testing
+ user_name: promiser
+ user_pwd: test
+ image_name: promise-img
+ flavor_name: promise-flavor
+ flavor_vcpus: 1
+ flavor_ram: 128
+ flavor_disk: 0
+ network_name: promise-net
+ subnet_name: promise-subnet
+ subnet_cidr: 192.168.121.0/24
+ router_name: promise-router
+
+example:
+ example_vm_name: example-vm
+ example_flavor: m1.small
+ example_image_name: functest-example-vm
+ example_private_net_name: example-net
+ example_private_subnet_name: example-subnet
+ example_private_subnet_cidr: 192.168.170.0/24
+ example_router_name: example-router
+ example_sg_name: example-sg
+ example_sg_descr: Example Security group
+
+results:
+ test_db_url: http://testresults.opnfv.org/test/api/v1
-import os
+import os
import re
import time
import json
-##############################################################################\r
-# All rights reserved. This program and the accompanying materials\r
-# are made available under the terms of the Apache License, Version 2.0\r
-# which accompanies this distribution, and is available at\r
-# http://www.apache.org/licenses/LICENSE-2.0\r
-##############################################################################\r
-\r
-from setuptools import setup, find_packages\r
-\r
-\r
-setup(\r
- name="functest",\r
- version="master",\r
- py_modules=['cli_base'],\r
- packages=find_packages(),\r
- include_package_data=True,\r
- package_data={\r
- },\r
- url="https://www.opnfv.org",\r
- entry_points={\r
- 'console_scripts': [\r
- 'functest=functest.cli.cli_base:cli'\r
- ],\r
- },\r
-)\r
+##############################################################################
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+
+from setuptools import setup, find_packages
+
+
+setup(
+ name="functest",
+ version="master",
+ py_modules=['cli_base'],
+ packages=find_packages(),
+ include_package_data=True,
+ package_data={
+ },
+ url="https://www.opnfv.org",
+ entry_points={
+ 'console_scripts': [
+ 'functest=functest.cli.cli_base:cli'
+ ],
+ },
+)