.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
========
Abstract
========

This document describes how to install the Euphrates release of
OPNFV when using Fuel as a deployment tool, covering its usage,
limitations, dependencies and required system resources.
This is a unified document for both x86_64 and aarch64
architectures. All information is common for both architectures
except when explicitly stated.
============
Introduction
============

This document provides guidelines on how to install and
configure the Euphrates release of OPNFV when using Fuel as a
deployment tool, including required software and hardware configurations.

Although the available installation options provide a high degree of
freedom in how the system is set up, including architecture, services
and features, etc., said permutations may not provide an OPNFV
compliant reference architecture. This document provides a
step-by-step guide that results in an OPNFV Euphrates compliant
deployment.

The audience of this document is assumed to have good knowledge of
networking and Unix/Linux administration.
=======
Preface
=======

Before starting the installation of the Euphrates release of
OPNFV, using Fuel as a deployment tool, some planning must be
done.

Prior to installation, a number of deployment specific parameters must be collected:

#. Provider sub-net and gateway information

#. Provider VLAN information

#. Provider DNS addresses

#. Provider NTP addresses

#. Network overlay you plan to deploy (VLAN, VXLAN, FLAT)

#. How many nodes and what roles you want to deploy (Controllers, Storage, Computes)

#. Monitoring options you want to deploy (Ceilometer, Syslog, etc.)

#. Other options not covered in this document are available in the links above

This information will be needed for the configuration procedures
provided in this document.
=========================================
Hardware Requirements for Virtual Deploys
=========================================

The following minimum hardware requirements must be met for the virtual
installation of Euphrates using Fuel:
+----------------------------+--------------------------------------------------------+
| **HW Aspect**              | **Requirement**                                        |
+============================+========================================================+
| **1 Jumpserver**           | A physical node (also called Foundation Node) that    |
|                            | will host a Salt Master VM and each of the VM nodes    |
|                            | in the virtual deploy                                  |
+----------------------------+--------------------------------------------------------+
| **CPU**                    | Minimum 1 socket with Virtualization support           |
+----------------------------+--------------------------------------------------------+
| **RAM**                    | Minimum 32GB/server (depending on VNF work load)       |
+----------------------------+--------------------------------------------------------+
| **Disk**                   | Minimum 100GB (SSD or 15krpm SCSI highly recommended)  |
+----------------------------+--------------------------------------------------------+
===========================================
Hardware Requirements for Baremetal Deploys
===========================================

The following minimum hardware requirements must be met for the baremetal
installation of Euphrates using Fuel:
+-------------------------+------------------------------------------------------+
| **HW Aspect**           | **Requirement**                                      |
+=========================+======================================================+
| **# of nodes**          | Minimum 5                                            |
|                         |                                                      |
|                         | - 3 KVM servers which will run all the controller    |
|                         |   services                                           |
|                         | - 2 Compute nodes                                    |
+-------------------------+------------------------------------------------------+
| **CPU**                 | Minimum 1 socket with Virtualization support         |
+-------------------------+------------------------------------------------------+
| **RAM**                 | Minimum 16GB/server (depending on VNF work load)     |
+-------------------------+------------------------------------------------------+
| **Disk**                | Minimum 256GB 10kRPM spinning disks                  |
+-------------------------+------------------------------------------------------+
| **Networks**            | 4 VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) - can be    |
|                         | a mix of tagged/native                               |
|                         |                                                      |
|                         | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network        |
|                         |                                                      |
|                         | Note: These can be allocated to a single NIC -       |
|                         | or spread out over multiple NICs                     |
+-------------------------+------------------------------------------------------+
| **1 Jumpserver**        | A physical node (also called Foundation Node) that   |
|                         | hosts the Salt Master and MaaS VMs                   |
+-------------------------+------------------------------------------------------+
| **Power management**    | All targets need to have power management tools that |
|                         | allow rebooting the hardware and setting the boot    |
|                         | order (e.g. IPMI)                                    |
+-------------------------+------------------------------------------------------+
**NOTE:** All nodes including the Jumpserver must have the same architecture (either x86_64 or aarch64).

**NOTE:** For aarch64 deployments, a UEFI-compatible firmware with PXE support is needed (e.g. EDK2).
===============================
Help with Hardware Requirements
===============================

Calculate hardware requirements:

For information on compatible hardware types available for use,
please see the `Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/software/hardware-compatibility/>`_.
When choosing the hardware on which you will deploy your OpenStack
environment, you should think about:

- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine.

- Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node.

- Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage.

- Networking -- Depends on the chosen network topology, the network bandwidth per virtual machine, and network storage.
================================================
Top of the Rack (TOR) Configuration Requirements
================================================

The switching infrastructure provides connectivity for the OPNFV
infrastructure operations, tenant networks (East/West) and provider
connectivity (North/South); it also provides needed connectivity for
the Storage Area Network (SAN).
To avoid traffic congestion, it is strongly suggested that three
physically separated networks are used, that is: one physical network
for administration and control, one physical network for tenant private
and public networks, and one physical network for SAN.
The switching connectivity can (but does not need to) be fully redundant,
in which case it comprises a redundant 10GE switch pair for each of the
three physically separated networks.
The physical TOR switches are **not** automatically configured from
the Fuel OPNFV reference platform. All the networks involved in the OPNFV
infrastructure as well as the provider networks and the private tenant
VLANs need to be manually configured.

Manual configuration of the Euphrates hardware platform should
be carried out according to the `OPNFV Pharos Specification
<https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_.
============================
OPNFV Software Prerequisites
============================

The Jumpserver node should be pre-provisioned with an operating system,
according to the Pharos specification. Relevant network bridges should
also be pre-configured (e.g. admin_br, mgmt_br, public_br); a minimal
bridge-creation sketch follows the list below.

- The admin bridge (admin_br) is mandatory for PXE booting the baremetal nodes during Fuel installation.
- The management bridge (mgmt_br) is required for testing suites (e.g. functest/yardstick); it is
  suggested to pre-configure it for debugging purposes.
- The public bridge (public_br) is also nice to have for debugging purposes, but not mandatory.
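A minimal sketch, assuming a spare NIC (here `eth1`, an illustrative name) faces the
admin/PXE network; adapt interface names and the persistence mechanism to your distribution:

.. code-block:: bash

    # Create the admin bridge and attach the NIC facing the admin/PXE network
    $ sudo brctl addbr admin_br
    $ sudo brctl addif admin_br eth1
    $ sudo ip link set admin_br up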
The user running the deploy script on the Jumpserver should belong to the "sudo" and "libvirt"
groups, and have passwordless sudo access.

The following example adds the groups to the user "jenkins":

.. code-block:: bash

    $ sudo usermod -aG sudo jenkins
    $ sudo usermod -aG libvirt jenkins

For passwordless sudo access, the following line has to be added to */etc/sudoers*:

.. code-block:: bash

    %jenkins ALL=(ALL) NOPASSWD:ALL
For an AArch64 Jumpserver, the minimum required "libvirt" version is 3.x; 3.5 or newer is highly
recommended. While not mandatory, upgrading the kernel and QEMU on the Jumpserver is also highly
recommended (especially on AArch64 Jumpservers).

For CentOS 7.4 (AArch64), the distro-provided packages are already new enough.
For Ubuntu 16.04 (arm64), the distro packages are too old and 3rd party repositories should be used.
For convenience, Armband provides a DEB repository holding all the required packages.
To add and enable the Armband repository on an Ubuntu 16.04 system,
create a new sources list file `/etc/apt/sources.list.d/armband.list` with the following contents:

.. code-block:: bash

    $ cat /etc/apt/sources.list.d/armband.list
    # for OpenStack Pike release
    deb http://linux.enea.com/mcp-repos/pike/xenial pike-armband main
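After adding the repository, refresh the package index so the new packages become
visible (if the repository is signed, its key must also be imported beforehand):

.. code-block:: bash

    $ sudo apt-get update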
Fuel@OPNFV has been validated by CI using the following distributions
installed on the Jumpserver:

- CentOS 7 (recommended by Pharos specification);
- Ubuntu 16.04.
**NOTE**: The install script expects 'libvirt' to be already running on the Jumpserver. In case the
libvirt packages are missing, the script will install them; but depending on the OS distribution, the
user might have to start the 'libvirtd' service manually, then run the deploy script again. Therefore,
it is recommended to install libvirt-bin explicitly on the Jumpserver before the deployment.
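As an illustrative check (the service name may differ per distribution, e.g. 'libvirtd'
vs 'libvirt-bin'), libvirt can be started and verified with:

.. code-block:: bash

    # start libvirt now, enable it at boot, then confirm it is active
    $ sudo systemctl enable --now libvirtd
    $ sudo systemctl status libvirtd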
**NOTE**: It is also recommended to install the newer kernel on the Jumpserver before the deployment.
**NOTE**: The install script will automatically install the rest of the required distro package
dependencies on the Jumpserver, unless explicitly asked not to (via the -P deploy arg). This includes
Python, QEMU, libvirt, etc.

For example, on an Ubuntu 16.04 Jumpserver the recommended kernel and libvirt packages can be
installed with:

.. code-block:: bash

    $ apt-get install linux-image-generic-hwe-16.04-edge libvirt-bin
==========================================
OPNFV Software Installation and Deployment
==========================================

This section describes the process of installing all the components needed to
deploy the full OPNFV reference platform stack across a server cluster.

The installation is done with Mirantis Cloud Platform (MCP), which is based on
a reclass model. This model provides the formula inputs to Salt, making the deploy
automatic based on the deployment scenario; a quick way to verify the resulting
node set is sketched after the list below.
The reclass model covers:

- Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01)
- OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
- Infrastructure components to install (software packages, services etc.)
- OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them
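For example, once a deployment has finished, the node set rendered from the reclass
model can be cross-checked on the Salt Master (cfg01); this is an optional sanity
check, not part of the install procedure itself:

.. code-block:: bash

    # All target nodes should show up as accepted Salt minions
    $ sudo salt-key -L
    # and should respond over the Salt bus
    $ sudo salt '*' test.ping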
Automatic Installation of a Virtual POD
=======================================

For virtual deploys all the targets are VMs on the Jumpserver. The deploy script will:

- Create a Salt Master VM on the Jumpserver which will drive the installation
- Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network)
- Install OpenStack on the targets

  - Leverage Salt to install & configure OpenStack services

.. figure:: img/fuel_virtual.png
   :align: center
   :alt: Fuel@OPNFV Virtual POD Network Layout Examples

   Fuel@OPNFV Virtual POD Network Layout Examples
+-----------------------+------------------------------------------------------------------------+
| cfg01                 | Salt Master VM                                                         |
+-----------------------+------------------------------------------------------------------------+
| ctl01                 | Controller VM                                                          |
+-----------------------+------------------------------------------------------------------------+
| cmp01/cmp02           | Compute VMs                                                            |
+-----------------------+------------------------------------------------------------------------+
| gtw01                 | Gateway VM with neutron services (dhcp agent, L3 agent, metadata, etc) |
+-----------------------+------------------------------------------------------------------------+
| odl01                 | VM on which ODL runs (for scenarios deployed with ODL)                 |
+-----------------------+------------------------------------------------------------------------+
In this figure there are examples of two virtual deploys:

- Jumphost 1 has only virsh bridges, created by the deploy script
- Jumphost 2 has a mix of Linux and virsh bridges; when a Linux bridge exists for a specified
  network, the deploy script will skip creating a virsh bridge for it (a quick way to check
  which bridges already exist is shown below)

**Note**: A virtual network "mcpcontrol" is always created. For virtual deploys, "mcpcontrol" is also
used for Admin, leaving the PXE/Admin bridge unused.
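To inspect the bridges the deploy script would reuse rather than create:

.. code-block:: bash

    # Linux bridges already present on the Jumpserver
    $ brctl show
    # networks (and their bridges) currently managed by libvirt
    $ virsh net-list --all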
Automatic Installation of a Baremetal POD
=========================================

The baremetal installation process can be done by editing the information about
hardware and environment in the reclass files, or by using a Pod Descriptor File (PDF).
This file contains all the information about the hardware and network of the deployment
that will be fed to the reclass model during deployment.

The installation is done automatically with the deploy script, which will:

- Create a Salt Master VM on the Jumpserver which will drive the installation
- Create a MaaS Node VM on the Jumpserver which will provision the targets
- Install OpenStack on the targets

  - Leverage MaaS to provision baremetal nodes with the operating system
  - Leverage Salt to configure the operating system on the baremetal nodes
  - Leverage Salt to install & configure OpenStack services
.. figure:: img/fuel_baremetal.png
   :align: center
   :alt: Fuel@OPNFV Baremetal POD Network Layout Example

   Fuel@OPNFV Baremetal POD Network Layout Example
+-----------------------+---------------------------------------------------------+
| cfg01                 | Salt Master VM                                          |
+-----------------------+---------------------------------------------------------+
| mas01                 | MaaS Node VM                                            |
+-----------------------+---------------------------------------------------------+
| kvm01..03             | Baremetals which hold the VMs with controller functions |
+-----------------------+---------------------------------------------------------+
| cmp001/cmp002         | Baremetal compute nodes                                 |
+-----------------------+---------------------------------------------------------+
| prx01/prx02           | Proxy VMs for Nginx                                     |
+-----------------------+---------------------------------------------------------+
| msg01..03             | RabbitMQ Service VMs                                    |
+-----------------------+---------------------------------------------------------+
| dbs01..03             | MySQL service VMs                                       |
+-----------------------+---------------------------------------------------------+
| mdb01..03             | Telemetry VMs                                           |
+-----------------------+---------------------------------------------------------+
| odl01                 | VM on which ODL runs (for scenarios deployed with ODL)  |
+-----------------------+---------------------------------------------------------+
| Tenant VM             | VM running in the cloud                                 |
+-----------------------+---------------------------------------------------------+
In the baremetal deploy all bridges but "mcpcontrol" are Linux bridges. For the Jumpserver, it is
required to pre-configure at least the admin_br bridge for the PXE/Admin network; an illustrative
persistent configuration is sketched below.
For the targets, the bridges are created by the deploy script.

**Note**: A virtual network "mcpcontrol" is always created. For baremetal deploys, the PXE bridge is
used for baremetal node provisioning, while "mcpcontrol" is used to provision the infrastructure VMs only.
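A minimal sketch of such pre-configuration on an Ubuntu 16.04 Jumpserver; the interface
name and addressing are illustrative and must be aligned with your lab's actual wiring:

.. code-block:: bash

    $ cat <<'EOF' | sudo tee /etc/network/interfaces.d/admin_br.cfg
    auto admin_br
    iface admin_br inet static
        address 192.168.11.1
        netmask 255.255.255.0
        bridge_ports eth1
    EOF
    $ sudo ifup admin_br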
Steps to Start the Automatic Deploy
===================================

These steps are common for both virtual and baremetal deploys.

#. Clone the Fuel code from gerrit

   For x86_64:

   .. code-block:: bash

       $ git clone https://git.opnfv.org/fuel
       $ cd fuel

   For aarch64:

   .. code-block:: bash

       $ git clone https://git.opnfv.org/armband
       $ cd armband

#. Checkout the Euphrates release

   .. code-block:: bash

       $ git checkout opnfv-5.0.2

#. Start the deploy script
   Besides the basic options, there are other recommended deploy arguments:

   - use **-D** option to enable the debug info
   - use **-S** option to point to a tmp dir where the disk images are saved. The images will be
     re-used between deploys
   - use **|& tee** to save the deploy log to a file

   .. code-block:: bash

       $ ci/deploy.sh -l <lab_name> \
                      -p <pod_name> \
                      -b <URI to configuration repo containing the PDF file> \
                      -s <scenario> \
                      -B <list of admin, management, private and public bridges> \
                      -D \
                      -S <Storage directory for disk images> |& tee deploy.log
Examples
--------

#. Virtual deploy

   To start a virtual deployment, it is required to have the `virtual` keyword
   while specifying the pod name to the installer script.

   It will create the required bridges and networks, configure the Salt Master and
   install OpenStack.

   .. code-block:: bash

       $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
                      -l <lab_name> \
                      -p <virtual_pod_name> \
                      -s os-nosdn-nofeature-noha \
                      -D \
                      -S /home/jenkins/tmpdir |& tee deploy.log

   Once the deployment is complete, the OpenStack Dashboard, Horizon, is
   available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
   The administrator credentials are **admin** / **opnfv_secret**.
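   A quick, non-authoritative reachability check for the dashboard (substitute the
   VIP from your own deployment):

   .. code-block:: bash

       $ curl -sI http://10.16.0.101:8078 | head -n1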
#. Baremetal deploy

   An x86 deploy on pod2 from the Linux Foundation lab:

   .. code-block:: bash

       $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
                      -l lf \
                      -p pod2 \
                      -s os-nosdn-nofeature-ha \
                      -D \
                      -S /home/jenkins/tmpdir |& tee deploy.log

   .. figure:: img/lf_pod2.png
      :align: center
      :alt: Fuel@OPNFV LF POD2 Network Layout

      Fuel@OPNFV LF POD2 Network Layout

   Once the deployment is complete, the SaltStack Deployment Documentation is
   available at http://<Proxy VIP>:8090, e.g. http://172.30.10.103:8090.
   An aarch64 deploy on pod5 from the Arm lab:

   .. code-block:: bash

       $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
                      -l arm \
                      -p pod5 \
                      -s os-nosdn-nofeature-ha \
                      -D \
                      -S /home/jenkins/tmpdir |& tee deploy.log

   .. figure:: img/arm_pod5.png
      :align: center
      :alt: Fuel@OPNFV ARM POD5 Network Layout

      Fuel@OPNFV ARM POD5 Network Layout
====================
Pod Descriptor Files
====================

Descriptor files provide the installer with an abstraction of the target pod
with all its hardware characteristics and required parameters. This information
is split into two different files:
the Pod Descriptor File (PDF) and the Installer Descriptor File (IDF).

The Pod Descriptor File is a hardware and network description of the pod
infrastructure. The information is modeled under a yaml structure.
A reference file with the expected yaml structure is available at
*mcp/config/labs/local/pod1.yaml*.
A common network section describes all the internal and provider networks
assigned to the pod. Each network is expected to have a vlan tag, IP subnet and
attached interface on the boards. Untagged vlans shall be defined as "native".

The hardware description is arranged into a main "jumphost" node and a "nodes"
set for all target boards. For each node the following characteristics
are defined (see the schematic example after this list):

- Node parameters including CPU features and total memory.
- A list of available disks.
- Remote management parameters.
- Network interfaces list including mac address, speed and advanced features.
- IP list of fixed IPs for the node.

**Note**: The fixed IPs are ignored by the MCP installer script; IP addresses are instead
assigned based on the network ranges defined under the pod network configuration.
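A heavily trimmed, hypothetical PDF skeleton illustrating how these characteristics map
onto the yaml structure; all key names and values below are examples only, and
*mcp/config/labs/local/pod1.yaml* remains the authoritative reference:

.. code-block:: bash

    $ cat pod1.yaml
    nodes:
      - name: node1
        node:                          # node parameters
          arch: x86_64
          memory: 64G                  # total memory
          cpu_cflags: [aes, vmx]       # CPU features
        disks:                         # available disks
          - name: disk1
            disk_capacity: 256G
        remote_management:             # e.g. IPMI parameters
          type: ipmi
          address: 10.0.1.10
        interfaces:                    # mac address, speed, features
          - mac_address: 52:54:00:aa:bb:cc
            speed: 10gb
        fixed_ips:                     # ignored by the MCP installer
          admin: 192.168.11.10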
The Installer Descriptor File extends the PDF with pod related parameters
required by the installer. This information may differ per installer type
and is not considered part of the pod infrastructure. The Fuel installer relies
on the IDF model to map the networks to the bridges on the foundation node and
to set up all node NICs by defining the expected OS device name and bus address.

The file follows a yaml structure and a "fuel" section is expected. Contents and
references must be aligned with the PDF file. The IDF file must be named after
the PDF with the prefix "idf-". A reference file with the expected structure
is available at *mcp/config/labs/local/idf-pod1.yaml*.
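Correspondingly, a hypothetical IDF skeleton; key names are illustrative, and
*mcp/config/labs/local/idf-pod1.yaml* should be consulted for the authoritative structure:

.. code-block:: bash

    $ cat idf-pod1.yaml
    idf:
      fuel:
        jumphost:
          bridges:                # map PDF networks to foundation node bridges
            admin: admin_br
            mgmt: mgmt_br
            public: public_br
        network:
          node:
            - interfaces:         # expected OS device names on the target node
                admin: enp1s0f0
              busaddr:            # and the corresponding PCI bus addresses
                admin: "0000:01:00.0"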
=============
Release Notes
=============

Please refer to the :ref:`Release Notes <fuel-release-notes-label>` article.
==========
References
==========

OPNFV

1) `OPNFV Home Page <http://www.opnfv.org>`_
2) `OPNFV documentation <http://docs.opnfv.org>`_
3) `Software downloads <https://www.opnfv.org/software/download>`_

OpenStack

4) `OpenStack Ocata Release Artifacts <http://www.openstack.org/software/ocata>`_
5) `OpenStack Documentation <http://docs.openstack.org>`_

OpenDaylight

6) `OpenDaylight Artifacts <http://www.opendaylight.org/software/downloads>`_

Fuel

7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_

Salt

8) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
9) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_

Reclass

10) `Reclass model <http://reclass.pantsfullofunix.net>`_