Installation High-Level Overview - Bare Metal Deployment
========================================================

The setup presumes that you have 6 or more bare metal servers already set up
with network connectivity on at least one network interface per server via a
TOR switch or other network implementation.

The physical TOR switches are **not** automatically configured from the OPNFV
reference platform. All the networks involved in the OPNFV infrastructure as
well as the provider networks and the private tenant VLANs need to be manually
configured.

The Jump Host can be installed using the bootable ISO or by using the
(``opnfv-apex*.rpm``) RPMs and their dependencies. The Jump Host should then
be configured with an IP gateway on its admin or public interface and
configured with a working DNS server. The Jump Host should also have routable
access to the lights-out network for the overcloud nodes.

``opnfv-deploy`` is then executed in order to deploy the undercloud VM and to
provision the overcloud nodes. ``opnfv-deploy`` uses three configuration files
in order to know how to install and provision the OPNFV target system.
The information gathered under section
`Execution Requirements (Bare Metal Only)`_ is put into the
``/etc/opnfv-apex/inventory.yaml`` configuration file. Deployment options are
put into the YAML file ``/etc/opnfv-apex/deploy_settings.yaml``. Alternatively
there are pre-baked deploy_settings files available in ``/etc/opnfv-apex/``.
These files are named with the naming convention
os-sdn_controller-enabled_feature-[no]ha.yaml. These files can be used in place
of the ``/etc/opnfv-apex/deploy_settings.yaml`` file if one suits your
deployment needs. Networking definitions gathered under section
`Network Requirements`_ are put into the YAML file
``/etc/opnfv-apex/network_settings.yaml``. ``opnfv-deploy`` will boot the
undercloud VM and load the target deployment configuration into the
provisioning toolchain. This information includes MAC addresses, IPMI
credentials, the networking environment and the OPNFV deployment options.

Once configuration is loaded and the undercloud is configured, it will then
reboot the overcloud nodes via IPMI. The nodes should already be set to PXE
boot first off the admin interface. The nodes will first PXE off of the
undercloud PXE server and go through a discovery/introspection process.

Introspection boots off of custom introspection PXE images. These images are
designed to look at the properties of the hardware that is being booted
and report them back to the undercloud node.

After introspection the undercloud will execute a Heat Stack Deployment to
continue node provisioning and configuration. The nodes will reboot and PXE
from the undercloud PXE server again to provision each node using Glance disk
images provided by the undercloud. These disk images include all the necessary
packages and configuration for an OPNFV deployment to execute. Once the disk
images have been written to the nodes' disks the nodes will boot locally and
execute cloud-init, which will execute the final node configuration. At this
point in the deployment, the Heat Stack will complete, and Mistral will
take over the configuration of the nodes. Mistral handles calling Ansible,
which will connect to each node and begin configuration. This configuration
includes launching the desired OPNFV services as containers and generating
their configuration files. This configuration is largely completed by
executing a puppet apply on each container to generate the config files, which
are then stored on the overcloud host and mounted into the service container
at runtime.

Installation Guide - Bare Metal Deployment
==========================================

This section goes step-by-step through how to correctly install and provision
the OPNFV target system to bare metal nodes.

Install Bare Metal Jump Host
----------------------------

1a. If your Jump Host does not have CentOS 7 already on it, or you would like
    to do a fresh install, then download the CentOS 7 DVD and perform a
    "Virtualization Host" install. If you perform a "Minimal Install" or an
    install type other than "Virtualization Host", simply run
    ``sudo yum -y groupinstall "Virtualization Host"``
    ``chkconfig libvirtd on && reboot``
    to install virtualization support and enable libvirt on boot. If you use
    the CentOS 7 DVD, proceed to step 2a once the CentOS 7 install with
    "Virtualization Host" support is complete.

1b. Boot the ISO off of a USB or other installation media and walk through
    installing OPNFV CentOS 7. The ISO comes prepared to be written directly
    to a USB drive with dd as such:

    ``dd if=opnfv-apex.iso of=/dev/sdX bs=4M``

    Replace /dev/sdX with the device assigned to your USB drive. Then select
    the USB device as the boot media on your Jump Host.
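
    Before running dd it is worth confirming which device name the USB drive
    was assigned, since writing to the wrong device destroys its contents. A
    minimal check (device names and sizes below are illustrative)::

        lsblk -d -o NAME,SIZE,TYPE
        # NAME   SIZE TYPE
        # sda  465.8G disk   <- internal disk, do not overwrite
        # sdb   14.9G disk   <- likely the USB drive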

2a. Install these repos:

    ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-queens/rdo-release-queens-1.noarch.rpm``
    ``sudo yum install epel-release``
    ``sudo curl -o /etc/yum.repos.d/opnfv-apex.repo http://artifacts.opnfv.org/apex/gambia/opnfv-apex.repo``

    The RDO Project release repository is needed to install OpenVSwitch, which
    is a dependency of opnfv-apex. If you do not have external connectivity to
    use this repository, you need to download the OpenVSwitch RPM from the RDO
    Project repositories and install it together with the opnfv-apex RPM, as
    sketched below. The opnfv-apex repo hosts all of the Apex dependencies,
    which will automatically be installed when installing the RPMs, but are
    pre-installed with the ISO.
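
    For that disconnected case, a minimal sketch of the local install (the
    RPM file names are placeholders for whatever versions you actually
    downloaded)::

        # install the locally downloaded RPMs in one transaction so yum can
        # resolve the OpenVSwitch dependency without network access
        sudo yum -y install ./openvswitch-*.rpm ./python34-opnfv-apex-*.rpm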

2b. Download the Apex RPMs from the OPNFV downloads page, under the
    TripleO RPMs ``https://www.opnfv.org/software/downloads``. The dependent
    RPMs will be automatically installed from the opnfv-apex repo added in
    the previous step.

    The following RPMs are available for installation:

    - python34-opnfv-apex - (required) OPNFV Apex Python package
    - python34-markupsafe - (required) Dependency of python34-opnfv-apex **
    - python34-jinja2 - (required) Dependency of python34-opnfv-apex **
    - python3-ipmi - (required) Dependency of python34-opnfv-apex **
    - python34-pbr - (required) Dependency of python34-opnfv-apex **
    - python34-virtualbmc - (required) Dependency of python34-opnfv-apex **
    - python34-iptables - (required) Dependency of python34-opnfv-apex **
    - python34-cryptography - (required) Dependency of python34-opnfv-apex **
    - python34-libvirt - (required) Dependency of python34-opnfv-apex **

    ** These RPMs are not yet distributed by CentOS or EPEL. Apex builds and
    carries them for distribution with Apex until they are carried in an
    upstream channel, at which point Apex will no longer carry them and they
    will not need special handling for installation. You do not need to
    explicitly install these as they will be automatically installed by
    installing python34-opnfv-apex when the opnfv-apex.repo has been
    previously downloaded to ``/etc/yum.repos.d/``.

    Install the required RPM (replace ``<rpm>`` with the path to the
    downloaded python34-opnfv-apex package):

    ``yum -y install <rpm>``

3. After the operating system and the opnfv-apex RPMs are installed, log in to
   your Jump Host as root.

4. Configure IP addresses on the interfaces that you have selected as your
   admin and public networks.

5. Configure the IP gateway to the Internet, preferably on the public
   interface.

6. Configure your ``/etc/resolv.conf`` to point to a DNS server
   (8.8.8.8 is provided by Google).
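
   For example, a minimal ``/etc/resolv.conf`` using Google's public DNS
   resolvers (8.8.4.4 is Google's secondary resolver)::

       nameserver 8.8.8.8
       nameserver 8.8.4.4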

Creating a Node Inventory File
------------------------------

IPMI configuration information gathered in section
`Execution Requirements (Bare Metal Only)`_ needs to be added to the
``inventory.yaml`` file.

1. Copy ``/usr/share/doc/opnfv/inventory.yaml.example`` as your inventory file
   template to ``/etc/opnfv-apex/inventory.yaml``.
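
   For example::

       cp /usr/share/doc/opnfv/inventory.yaml.example /etc/opnfv-apex/inventory.yaml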

2. The nodes dictionary contains a definition block for each baremetal host
   that will be deployed. 0 or more compute nodes and 1 or 3 controller nodes
   are required (the example file contains blocks for each of these already).
   It is optional at this point to add more compute nodes into the node list.
   By specifying 0 compute nodes in the inventory file, the deployment will
   automatically deploy "all-in-one" nodes, which means the compute service
   will run alongside the controller in a single overcloud node. Specifying
   3 control nodes will result in a highly available service model.

3. Edit the following values for each node (a sketch of a complete node
   definition follows the notes below):

   - ``mac_address``: MAC of the interface that will PXE boot from undercloud
   - ``ipmi_ip``: IPMI IP address
   - ``ipmi_user``: IPMI username
   - ``ipmi_password``: IPMI password
   - ``pm_type``: Power management driver to use for the node;
     values: pxe_ipmitool (tested), pxe_wol (untested), pxe_amt (untested)
   - ``cpus``: (Introspected*) CPU cores available
   - ``memory``: (Introspected*) Memory available in MiB
   - ``disk``: (Introspected*) Disk space available in GB
   - ``disk_device``: (Opt***) Root disk device to use for installation
   - ``arch``: (Introspected*) System architecture
   - ``capabilities``: (Opt**) Node's role in deployment;
     values: profile:control or profile:compute

   \* Introspection looks up the overcloud node's resources and overrides
   these values. You can leave the default values and Apex will get the
   correct values when it runs introspection on the nodes.

   \** If the capabilities profile is not specified then Apex will select the
   nodes' roles in the OPNFV cluster in a non-deterministic fashion.

   \*** disk_device declares which hard disk to use as the root device for
   installation. The format is a comma delimited list of devices, such as
   "sda,sdb,sdc". The disk chosen will be the first device in the list which
   introspection finds to exist on the system. Currently, only a single
   definition is allowed for all nodes; if multiple disk_device definitions
   occur within the inventory, only the last definition will be used for
   all nodes.
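
   As a sketch only, a single node definition built from the fields above
   might look like the following (all addresses, credentials and hardware
   values are placeholders; the shipped example file is authoritative for
   the exact layout)::

       nodes:
         node1:
           mac_address: "00:1e:67:00:00:01"  # interface that will PXE boot
           ipmi_ip: 192.168.1.101            # BMC address
           ipmi_user: admin
           ipmi_password: password
           pm_type: pxe_ipmitool             # tested power driver
           cpus: 2                           # overridden by introspection
           memory: 8192                      # MiB, overridden by introspection
           disk: 40                          # GB, overridden by introspection
           arch: x86_64                      # overridden by introspection
           capabilities: profile:control     # optional role pinning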

Creating the Settings Files
---------------------------

Edit the two settings files in ``/etc/opnfv-apex/``. These files have comments
to help you customize them.

1. deploy_settings.yaml
   This file includes basic configuration options for the deployment, and
   also documents all available options.
   Alternatively, there are pre-built deploy_settings files available in
   ``/etc/opnfv-apex/``. These files are named with the naming convention
   os-sdn_controller-enabled_feature-[no]ha.yaml and can be used in place of
   the ``/etc/opnfv-apex/deploy_settings.yaml`` file if one suits your
   deployment needs. If a pre-built deploy_settings file is chosen there is
   no need to customize ``/etc/opnfv-apex/deploy_settings.yaml``.
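
   As an illustrative sketch only (the option names are assumptions inferred
   from the naming convention above; the comments in the shipped files
   document the real options), a deploy settings file for an HA OpenDaylight
   deployment might contain::

       global_params:
         ha_enabled: true                # the "[no]ha" part of the file name
       deploy_options:
         sdn_controller: opendaylight    # the "sdn_controller" part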

2. network_settings.yaml
   This file provides Apex with the networking information that satisfies the
   prerequisite `Network Requirements`_. These are specific to your
   environment.
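
   As a hedged illustration only (the key names are assumptions; the comments
   in the shipped file are authoritative), the admin network definition
   follows this general shape::

       networks:
         admin:
           enabled: true
           cidr: 192.0.2.0/24       # provisioning network, example range
           dhcp_range:
             - 192.0.2.2
             - 192.0.2.10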

Running ``opnfv-deploy``
------------------------

You are now ready to deploy OPNFV using Apex!
``opnfv-deploy`` will use the inventory and settings files to deploy OPNFV.

Follow the steps below to execute:

1. Execute opnfv-deploy:

   ``sudo opnfv-deploy -n network_settings.yaml -i inventory.yaml -d deploy_settings.yaml``

   If you need more information about the options that can be passed to
   opnfv-deploy use ``opnfv-deploy --help``. ``-n network_settings.yaml``
   allows you to customize your networking topology.
   Note it can also be useful to run the command with the ``--debug``
   argument, which will enable a root login on the overcloud nodes with
   the password 'opnfvapex'. It is also useful in some cases to surround
   the deploy command with ``nohup``. For example
   ``nohup <deploy command> &`` will allow a deployment to continue even if
   ssh access to the Jump Host is lost during deployment.
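
   Putting those pieces together, one possible full invocation (the
   os-odl-nofeature-ha.yaml file name is one instance of the pre-built
   deploy_settings naming convention described earlier; adjust the paths to
   match your own files)::

       sudo nohup opnfv-deploy --debug \
           -n /etc/opnfv-apex/network_settings.yaml \
           -i /etc/opnfv-apex/inventory.yaml \
           -d /etc/opnfv-apex/os-odl-nofeature-ha.yaml &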

2. Wait while deployment is executed.
   If something goes wrong during this part of the process, start by reviewing
   your network or the information in your configuration files. It's not
   uncommon for something small to be overlooked or mistyped.
   You will also notice output in your shell as the deployment progresses.

3. When the deployment is complete the undercloud IP and overcloud dashboard
   URL will be printed. OPNFV has now been deployed using Apex.

.. _`Execution Requirements (Bare Metal Only)`: requirements.html#execution-requirements-bare-metal-only
.. _`Network Requirements`: requirements.html#network-requirements