1 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
2 .. http://creativecommons.org/licenses/by/4.0
3 .. (c) Open Platform for NFV Project, Inc. and its contributors
5 ***********************************
6 OPNFV Fuel Installation Instruction
7 ***********************************
Abstract
========

This document describes how to install the ``Gambia`` release of
13 OPNFV when using Fuel as a deployment tool, covering its usage,
14 limitations, dependencies and required system resources.
This is a unified document for both ``x86_64`` and ``aarch64``
17 architectures. All information is common for both architectures
18 except when explicitly stated.
Introduction
============

This document provides guidelines on how to install and
24 configure the ``Gambia`` release of OPNFV when using Fuel as a
25 deployment tool, including required software and hardware configurations.
27 Although the available installation options provide a high degree of
28 freedom in how the system is set up, including architecture, services
29 and features, etc., said permutations may not provide an OPNFV
30 compliant reference architecture. This document provides a
step-by-step guide that results in an OPNFV ``Gambia`` compliant
deployment.
34 The audience of this document is assumed to have good knowledge of
35 networking and Unix/Linux administration.
37 Before starting the installation of the ``Gambia`` release of
OPNFV, using Fuel as a deployment tool, some planning must be
done.
Prior to installation, a number of deployment specific parameters must be
collected:
47 #. Provider sub-net and gateway information
49 #. Provider ``VLAN`` information
51 #. Provider ``DNS`` addresses
53 #. Provider ``NTP`` addresses
55 #. How many nodes and what roles you want to deploy (Controllers, Computes)
57 This information will be needed for the configuration procedures
58 provided in this document.
Hardware Requirements
=====================

Minimum hardware requirements depend on the deployment type.
67 If ``baremetal`` nodes are present in the cluster, the architecture of the
68 nodes running the control plane (``kvm01``, ``kvm02``, ``kvm03`` for
69 ``HA`` scenarios, respectively ``ctl01``, ``gtw01``, ``odl01`` for
70 ``noHA`` scenarios) and the ``jumpserver`` architecture must be the same
71 (either ``x86_64`` or ``aarch64``).
75 The compute nodes may have different architectures, but extra
configuration might be required for scheduling VMs on the appropriate host.
77 This use-case is not tested in OPNFV CI, so it is considered experimental.
79 Hardware Requirements for ``virtual`` Deploys
80 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
82 The following minimum hardware requirements must be met for the ``virtual``
83 installation of ``Gambia`` using Fuel:
85 +------------------+------------------------------------------------------+
86 | **HW Aspect** | **Requirement** |
88 +==================+======================================================+
89 | **1 Jumpserver** | A physical node (also called Foundation Node) that |
90 | | will host a Salt Master container and each of the VM |
91 | | nodes in the virtual deploy |
92 +------------------+------------------------------------------------------+
93 | **CPU** | Minimum 1 socket with Virtualization support |
94 +------------------+------------------------------------------------------+
95 | **RAM** | Minimum 32GB/server (Depending on VNF work load) |
96 +------------------+------------------------------------------------------+
97 | **Disk** | Minimum 100GB (SSD or 15krpm SCSI highly recommended)|
98 +------------------+------------------------------------------------------+
100 Hardware Requirements for ``baremetal`` Deploys
101 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
103 The following minimum hardware requirements must be met for the ``baremetal``
104 installation of ``Gambia`` using Fuel:
106 +------------------+------------------------------------------------------+
107 | **HW Aspect** | **Requirement** |
109 +==================+======================================================+
110 | **1 Jumpserver** | A physical node (also called Foundation Node) that |
111 | | hosts the Salt Master container and MaaS VM |
112 +------------------+------------------------------------------------------+
113 | **# of nodes** | Minimum 5 |
115 | | - 3 KVM servers which will run all the controller |
118 | | - 2 Compute nodes |
122 | | ``kvm01``, ``kvm02``, ``kvm03`` nodes and the |
123 | | ``jumpserver`` must have the same architecture |
124 | | (either ``x86_64`` or ``aarch64``). |
128 | | ``aarch64`` nodes should run an ``UEFI`` |
129 | | compatible firmware with PXE support |
130 | | (e.g. ``EDK2``). |
131 +------------------+------------------------------------------------------+
132 | **CPU** | Minimum 1 socket with Virtualization support |
133 +------------------+------------------------------------------------------+
134 | **RAM** | Minimum 16GB/server (Depending on VNF work load) |
135 +------------------+------------------------------------------------------+
136 | **Disk** | Minimum 256GB 10kRPM spinning disks |
137 +------------------+------------------------------------------------------+
138 | **Networks** | Mininum 4 |
140 | | - 3 VLANs (``public``, ``mgmt``, ``private``) - |
141 | | can be a mix of tagged/native |
143 | | - 1 Un-Tagged VLAN for PXE Boot - |
144 | | ``PXE/admin`` Network |
148 | | These can be allocated to a single NIC |
149 | | or spread out over multiple NICs. |
150 +------------------+------------------------------------------------------+
151 | **Power mgmt** | All targets need to have power management tools that |
152 | | allow rebooting the hardware (e.g. ``IPMI``). |
153 +------------------+------------------------------------------------------+
155 Hardware Requirements for ``hybrid`` (``baremetal`` + ``virtual``) Deploys
156 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
158 The following minimum hardware requirements must be met for the ``hybrid``
159 installation of ``Gambia`` using Fuel:
161 +------------------+------------------------------------------------------+
162 | **HW Aspect** | **Requirement** |
164 +==================+======================================================+
165 | **1 Jumpserver** | A physical node (also called Foundation Node) that |
166 | | hosts the Salt Master container, MaaS VM and |
167 | | each of the virtual nodes defined in ``PDF`` |
168 +------------------+------------------------------------------------------+
169 | **# of nodes** | .. NOTE:: |
171 | | Depends on ``PDF`` configuration. |
173 | | If the control plane is virtualized, minimum |
174 | | baremetal requirements are: |
176 | | - 2 Compute nodes |
178 | | If the computes are virtualized, minimum |
179 | | baremetal requirements are: |
181 | | - 3 KVM servers which will run all the controller |
186 | | ``kvm01``, ``kvm02``, ``kvm03`` nodes and the |
187 | | ``jumpserver`` must have the same architecture |
188 | | (either ``x86_64`` or ``aarch64``). |
192 | | ``aarch64`` nodes should run an ``UEFI`` |
193 | | compatible firmware with PXE support |
194 | | (e.g. ``EDK2``). |
195 +------------------+------------------------------------------------------+
196 | **CPU** | Minimum 1 socket with Virtualization support |
197 +------------------+------------------------------------------------------+
198 | **RAM** | Minimum 16GB/server (Depending on VNF work load) |
199 +------------------+------------------------------------------------------+
200 | **Disk** | Minimum 256GB 10kRPM spinning disks |
201 +------------------+------------------------------------------------------+
202 | **Networks** | Same as for ``baremetal`` deployments |
203 +------------------+------------------------------------------------------+
204 | **Power mgmt** | Same as for ``baremetal`` deployments |
205 +------------------+------------------------------------------------------+
207 Help with Hardware Requirements
208 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
210 Calculate hardware requirements:
212 When choosing the hardware on which you will deploy your OpenStack
213 environment, you should think about:
215 - CPU -- Consider the number of virtual machines that you plan to deploy in
216 your cloud environment and the CPUs per virtual machine.
- Memory -- Depends on the amount of RAM assigned per virtual machine and the
number of virtual machines you plan to run on each host.
221 - Storage -- Depends on the local drive space per virtual machine, remote
222 volumes that can be attached to a virtual machine, and object storage.
- Networking -- Depends on the chosen network topology, the network bandwidth
225 per virtual machine, and network storage.
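As a rough, illustrative sizing exercise: a POD expected to host 20 VNF VMs,
each with 4 vCPUs and 8GB of RAM, needs about 80 vCPUs and 160GB of RAM in
total across its compute nodes; with 2 compute nodes, each server should
provide at least 80GB of RAM and 40 vCPUs (20 hardware threads at a 2:1
overcommit ratio), plus headroom for the host OS and OpenStack services.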
227 Top of the Rack (``TOR``) Configuration Requirements
228 ====================================================
230 The switching infrastructure provides connectivity for the OPNFV
231 infrastructure operations, tenant networks (East/West) and provider
232 connectivity (North/South); it also provides needed connectivity for
233 the Storage Area Network (SAN).
To avoid traffic congestion, it is strongly suggested that three
physically separated networks are used, that is: one physical network
for administration and control, one physical network for tenant private
and public networks, and one physical network for SAN.
240 The switching connectivity can (but does not need to) be fully redundant,
in which case it comprises a redundant 10GE switch pair for each of the
242 three physically separated networks.
246 The physical ``TOR`` switches are **not** automatically configured from
247 the OPNFV Fuel reference platform. All the networks involved in the OPNFV
248 infrastructure as well as the provider networks and the private tenant
VLANs need to be manually configured.
251 Manual configuration of the ``Gambia`` hardware platform should
252 be carried out according to the `OPNFV Pharos Specification`_.
254 OPNFV Software Prerequisites
255 ============================
All prerequisites described in this chapter apply to the ``jumpserver``
node.
262 OS Distribution Support
263 ~~~~~~~~~~~~~~~~~~~~~~~
265 The Jumpserver node should be pre-provisioned with an operating system,
266 according to the `OPNFV Pharos specification`_.
268 OPNFV Fuel has been validated by CI using the following distributions
269 installed on the Jumpserver:
271 - ``CentOS 7`` (recommended by Pharos specification);
272 - ``Ubuntu Xenial 16.04``;
274 .. TOPIC:: ``aarch64`` notes
For an ``aarch64`` Jumpserver, the ``libvirt`` minimum required
version is ``3.x`` (``3.5`` or newer is highly recommended).
``CentOS 7`` (``aarch64``) distro provided packages are already new
enough.
For ``Ubuntu 16.04`` (``arm64``), distro packages are too old, so 3rd party
repositories should be used.
For convenience, Armband provides a DEB repository holding all the
required packages.
292 To add and enable the Armband repository on an Ubuntu 16.04 system,
create a new sources list file ``/etc/apt/sources.list.d/armband.list``
294 with the following contents:
296 .. code-block:: console
298 jenkins@jumpserver:~$ cat /etc/apt/sources.list.d/armband.list
299 deb http://linux.enea.com/mcp-repos/queens/xenial queens-armband main
jenkins@jumpserver:~$ sudo apt-key adv --keyserver keys.gnupg.net \
  --recv <Armband repository key ID>
303 jenkins@jumpserver:~$ sudo apt-get update
305 OS Distribution Packages
306 ~~~~~~~~~~~~~~~~~~~~~~~~
308 By default, the ``deploy.sh`` script will automatically install the required
309 distribution package dependencies on the Jumpserver, so the end user does
310 not have to manually install them before starting the deployment.
312 This includes Python, QEMU, libvirt etc.
316 To disable automatic package installation (and/or upgrade) during
317 deployment, check out the ``-P`` deploy argument.
The install script expects ``libvirt`` to be already running on the
Jumpserver.
324 In case ``libvirt`` packages are missing, the script will install them; but
325 depending on the OS distribution, the user might have to start the
326 ``libvirt`` daemon service manually, then run the deploy script again.
328 Therefore, it is recommended to install ``libvirt`` explicitly on the
329 Jumpserver before the deployment.
While not mandatory, upgrading the kernel on the Jumpserver is also highly
recommended:
334 .. code-block:: console
336 jenkins@jumpserver:~$ sudo apt-get install \
337 linux-image-generic-hwe-16.04-edge libvirt-bin
338 jenkins@jumpserver:~$ sudo reboot
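After the reboot, the ``libvirt`` setup can be double-checked before starting
a deployment (version output below is illustrative):

.. code-block:: console

jenkins@jumpserver:~$ virsh --version
3.4.0
jenkins@jumpserver:~$ systemctl is-active libvirtd
active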
User Requirements
~~~~~~~~~~~~~~~~~

The user running the deploy script on the Jumpserver should belong to
344 ``sudo`` and ``libvirt`` groups, and have passwordless sudo access.
Throughout this documentation, we will use the ``jenkins`` username for
this purpose.
351 The following example adds the groups to the user ``jenkins``:
353 .. code-block:: console
355 jenkins@jumpserver:~$ sudo usermod -aG sudo jenkins
356 jenkins@jumpserver:~$ sudo usermod -aG libvirt jenkins
357 jenkins@jumpserver:~$ sudo reboot
jenkins@jumpserver:~$ groups
jenkins sudo libvirt
361 jenkins@jumpserver:~$ sudo visudo
363 %jenkins ALL=(ALL) NOPASSWD:ALL
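Group membership and passwordless sudo can be verified afterwards;
``sudo -n`` fails instead of prompting if a password would still be required:

.. code-block:: console

jenkins@jumpserver:~$ sudo -n true && echo 'passwordless sudo works'
passwordless sudo works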
365 Local Artifact Storage
366 ~~~~~~~~~~~~~~~~~~~~~~
368 The folder containing the temporary deploy artifacts (``/home/jenkins/tmpdir``
in the examples below) needs to have mask ``777`` in order for ``libvirt``
to be able to use it.
372 .. code-block:: console
374 jenkins@jumpserver:~$ mkdir -p -m 777 /home/jenkins/tmpdir
376 Network Configuration
377 ~~~~~~~~~~~~~~~~~~~~~
379 Relevant Linux bridges should also be pre-configured for certain networks,
380 depending on the type of the deployment.
382 +------------+---------------+----------------------------------------------+
383 | Network | Linux Bridge | Linux Bridge necessity based on deploy type |
384 | | +--------------+---------------+---------------+
385 | | | ``virtual`` | ``baremetal`` | ``hybrid`` |
386 +============+===============+==============+===============+===============+
387 | PXE/admin | ``admin_br`` | absent | present | present |
388 +------------+---------------+--------------+---------------+---------------+
389 | management | ``mgmt_br`` | optional | optional, | optional, |
390 | | | | recommended, | recommended, |
391 | | | | required for | required for |
392 | | | | ``functest``, | ``functest``, |
393 | | | | ``yardstick`` | ``yardstick`` |
394 +------------+---------------+--------------+---------------+---------------+
395 | internal | ``int_br`` | optional | optional | present |
396 +------------+---------------+--------------+---------------+---------------+
397 | public | ``public_br`` | optional | optional, | optional, |
398 | | | | recommended, | recommended, |
399 | | | | useful for | useful for |
400 | | | | debugging | debugging |
401 +------------+---------------+--------------+---------------+---------------+
405 IP addresses should be assigned to the created bridge interfaces (not
406 to one of its ports).
410 ``PXE/admin`` bridge (``admin_br``) **must** have an IP address.
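As an illustrative example of a manually pre-configured bridge (interface
name, IP address and netmask are assumptions that depend on the POD), an
``admin_br`` definition via ``ifupdown`` on an ``Ubuntu Xenial`` Jumpserver
(``bridge-utils`` package required) could look like:

.. code-block:: console

jenkins@jumpserver:~$ cat /etc/network/interfaces.d/admin_br.cfg
auto admin_br
iface admin_br inet static
    address 192.168.11.1
    netmask 255.255.255.0
    bridge_ports eno2
    bridge_stp off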
412 Changes ``deploy.sh`` Will Perform to Jumpserver OS
413 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The install script will alter the Jumpserver ``sysctl`` configuration and
disable ``net.bridge.bridge-nf-call``.
422 The install script will automatically install and/or upgrade the
423 required distribution package dependencies on the Jumpserver,
424 unless explicitly asked not to (via the ``-P`` deploy arg).
426 OPNFV Software Configuration (``XDF``)
427 ======================================
429 .. versionadded:: 5.0.0
430 .. versionchanged:: 7.0.0
432 Unlike the old approach based on OpenStack Fuel, OPNFV Fuel no longer has a
433 graphical user interface for configuring the environment, but instead
switched to OPNFV specific descriptor files that we will generically call
``XDF``:
437 - ``PDF`` (POD Descriptor File) provides an abstraction of the target POD
438 with all its hardware characteristics and required parameters;
439 - ``IDF`` (Installer Descriptor File) extends the ``PDF`` with POD related
440 parameters required by the OPNFV Fuel installer;
441 - ``SDF`` (Scenario Descriptor File, **not** yet adopted) will later
442 replace embedded scenario definitions, describing the roles and layout of
the cluster environment for a given reference architecture;
447 For ``virtual`` deployments, if the ``public`` network will be accessed
448 from outside the ``jumpserver`` node, a custom ``PDF``/``IDF`` pair is
449 required for customizing ``idf.net_config.public`` and
450 ``idf.fuel.jumphost.bridges.public``.
454 For OPNFV CI PODs, as well as simple (no ``public`` bridge) ``virtual``
455 deployments, ``PDF``/``IDF`` files are already available in the
456 `pharos git repo`_. They can be used as a reference for user-supplied
457 inputs or to kick off a deployment right away.
459 +----------+------------------------------------------------------------------+
460 | LAB/POD | ``PDF``/``IDF`` availability based on deploy type |
461 | +------------------------+--------------------+--------------------+
462 | | ``virtual`` | ``baremetal`` | ``hybrid`` |
463 +==========+========================+====================+====================+
464 | OPNFV CI | available in | available in | N/A, as currently |
465 | POD | `pharos git repo`_ | `pharos git repo`_ | there are 0 hybrid |
466 | | (e.g. | (e.g. ``lf-pod2``, | PODs in OPNFV CI |
467 | | ``ericsson-virtual1``) | ``arm-pod5``) | |
468 +----------+------------------------+--------------------+--------------------+
469 | local or | ``user-supplied`` | ``user-supplied`` | ``user-supplied`` |
471 +----------+------------------------+--------------------+--------------------+
Both ``PDF`` and ``IDF`` structures are modelled as ``yaml`` schemas in the
`pharos git repo`_, also included as a git submodule in OPNFV Fuel:
480 - ``mcp/scripts/pharos/config/pdf/pod1.schema.yaml``
481 - ``mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml``
483 Schema files are also used during the initial deployment phase to validate
484 the user-supplied input ``PDF``/``IDF`` files.
``PDF``
~~~~~~~

The Pod Descriptor File is a hardware description of the POD
490 infrastructure. The information is modeled under a ``yaml`` structure.
492 The hardware description covers the ``jumphost`` node and a set of ``nodes``
for the cluster target boards. For each node the following characteristics
are defined:
496 - Node parameters including ``CPU`` features and total memory;
497 - A list of available disks;
498 - Remote management parameters;
- Network interfaces list including name, ``MAC`` address, link speed,
advanced mode;
504 A reference file with the expected ``yaml`` structure is available at:
506 - ``mcp/scripts/pharos/config/pdf/pod1.yaml``
508 For more information on ``PDF``, see the `OPNFV PDF Wiki Page`_.
512 The fixed IPs defined in ``PDF`` are ignored by the OPNFV Fuel installer
script and it will instead assign addresses based on the network ranges
defined in ``IDF``.
516 For more details on the way IP addresses are assigned, see
517 :ref:`OPNFV Fuel User Guide <fuel-userguide>`.
519 ``PDF``/``IDF`` Role (hostname) Mapping
520 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
522 Upcoming ``SDF`` support will introduce a series of possible node roles.
523 Until that happens, the role mapping logic is hardcoded, based on node index
524 in ``PDF``/``IDF`` (which should also be in sync, i.e. the parameters of the
525 ``n``-th cluster node defined in ``PDF`` should be the ``n``-th node in
526 ``IDF`` structures too).
528 +-------------+------------------+----------------------+
529 | Node index | ``HA`` scenario | ``noHA`` scenario |
530 +=============+==================+======================+
531 | 1st | ``kvm01`` | ``ctl01`` |
532 +-------------+------------------+----------------------+
533 | 2nd | ``kvm02`` | ``gtw01`` |
534 +-------------+------------------+----------------------+
535 | 3rd | ``kvm03`` | ``odl01``/``unused`` |
536 +-------------+------------------+----------------------+
537 | 4th, | ``cmp001``, | ``cmp001``, |
538 | 5th, | ``cmp002``, | ``cmp002``, |
539 | ... | ``...`` | ``...`` |
540 +-------------+------------------+----------------------+
544 To switch node role(s), simply reorder the node definitions in
545 ``PDF``/``IDF`` (make sure to keep them in sync).
``IDF``
~~~~~~~

The Installer Descriptor File extends the ``PDF`` with POD related parameters
551 required by the installer. This information may differ per each installer type
552 and it is not considered part of the POD infrastructure.
The ``IDF`` file must be named after the ``PDF`` it attaches to, with the
``idf-`` prefix (e.g. ``pod1.yaml`` pairs with ``idf-pod1.yaml``).
562 A reference file with the expected ``yaml`` structure is available at:
564 - ``mcp/scripts/pharos/config/pdf/idf-pod1.yaml``
566 The file follows a ``yaml`` structure and at least two sections
567 (``idf.net_config`` and ``idf.fuel``) are expected.
The ``idf.fuel`` section defines several sub-sections required by the OPNFV
Fuel installer:
572 - ``jumphost``: List of bridge names for each network on the Jumpserver;
573 - ``network``: List of device name and bus address info of all the target nodes.
574 The order must be aligned with the order defined in the ``PDF`` file.
575 The OPNFV Fuel installer relies on the ``IDF`` model to setup all node NICs
576 by defining the expected device name and bus address;
- ``maas``: Defines the target nodes' commission timeout and deploy timeout;
578 - ``reclass``: Defines compute parameter tuning, including huge pages, ``CPU``
579 pinning and other ``DPDK`` settings;
An overview of the expected ``IDF`` structure:

.. code-block:: yaml

idf:
  version: 0.1                # fixed, the only supported version (mandatory)
  net_config:                 # POD network configuration overview (mandatory)
    admin: ...                # mandatory
    mgmt: ...                 # mandatory
    storage: ...              # mandatory
    private: ...              # mandatory
    public: ...               # mandatory
  fuel:                       # OPNFV Fuel specific section (mandatory)
    jumphost:                 # OPNFV Fuel jumpserver bridge configuration (mandatory)
      bridges:                # Bridge name mapping (mandatory)
        admin: 'admin_br'     # <PXE/admin bridge name> or ~
        mgmt: 'mgmt_br'       # <mgmt bridge name> or ~
        private: ~            # <private bridge name> or ~
        public: 'public_br'   # <public bridge name> or ~
      trunks: ...             # Trunked networks (optional)
    maas:                     # MaaS timeouts (optional)
      timeout_comissioning: 10  # commissioning timeout in minutes
      timeout_deploying: 15     # deploy timeout in minutes
    network:                  # Cluster nodes network (mandatory)
      ntp_strata_host1: 1.pool.ntp.org  # NTP1 (optional)
      ntp_strata_host2: 0.pool.ntp.org  # NTP2 (optional)
      node: ...               # List of per-node cfg (mandatory)
    reclass:                  # Additional params (mandatory)
      node: ...               # List of per-node cfg (mandatory)
``idf.net_config``
------------------

``idf.net_config`` was introduced as a mechanism to map all the usual cluster
615 networks (internal and provider networks, e.g. ``mgmt``) to their ``VLAN``
616 tags, ``CIDR`` and a physical interface index (used to match networks to
617 interface names, like ``eth0``, on the cluster nodes).
622 The mapping between one network segment (e.g. ``mgmt``) and its ``CIDR``/
623 ``VLAN`` is not configurable on a per-node basis, but instead applies to
624 all the nodes in the cluster.
626 For each network, the following parameters are currently supported:
628 +--------------------------+--------------------------------------------------+
629 | ``idf.net_config.*`` key | Details |
630 +==========================+==================================================+
631 | ``interface`` | The index of the interface to use for this net. |
632 | | For each cluster node (if network is present), |
633 | | OPNFV Fuel will determine the underlying physical|
634 | | interface by picking the element at index |
635 | | ``interface`` from the list of network interface |
636 | | names defined in |
637 | | ``idf.fuel.network.node.*.interfaces``. |
638 | | Required for each network. |
642 | | The interface index should be the |
643 | | same on all cluster nodes. This can be |
644 | | achieved by ordering them accordingly in |
645 | | ``PDF``/``IDF``. |
646 +--------------------------+--------------------------------------------------+
647 | ``vlan`` | ``VLAN`` tag (integer) or the string ``native``. |
648 | | Required for each network. |
649 +--------------------------+--------------------------------------------------+
650 | ``ip-range`` | When specified, all cluster IPs dynamically |
651 | | allocated by OPNFV Fuel for that network will be |
652 | | assigned inside this range. |
653 | | Required for ``oob``, optional for others. |
657 | | For now, only range start address is used. |
658 +--------------------------+--------------------------------------------------+
659 | ``network`` | Network segment address. |
660 | | Required for each network, except ``oob``. |
661 +--------------------------+--------------------------------------------------+
662 | ``mask`` | Network segment mask. |
663 | | Required for each network, except ``oob``. |
664 +--------------------------+--------------------------------------------------+
665 | ``gateway`` | Gateway IP address. |
666 | | Required for ``public``, N/A for others. |
667 +--------------------------+--------------------------------------------------+
668 | ``dns`` | List of DNS IP addresses. |
669 | | Required for ``public``, N/A for others. |
670 +--------------------------+--------------------------------------------------+
Sample ``public`` network configuration block (values other than the
``ip-range`` are illustrative):

.. code-block:: yaml

idf:
  net_config:
    public:
      interface: 1            # index in idf.fuel.network.node.*.interfaces
      vlan: native
      network: 10.0.16.0
      ip-range: 10.0.16.100-10.0.16.253
      mask: 24
      gateway: 10.0.16.254
      dns:
        - 8.8.8.8
        - 8.8.4.4
689 .. TOPIC:: ``hybrid`` POD notes
691 Interface indexes must be the same for all nodes, which is problematic
692 when mixing ``virtual`` nodes (where all interfaces were untagged
so far) with ``baremetal`` nodes (where interfaces usually carry
``VLAN``-tagged traffic).
698 To achieve this, a special ``jumpserver`` network layout is used:
699 ``mgmt``, ``storage``, ``private``, ``public`` are trunked together
700 in a single ``trunk`` bridge:
702 - without decapsulating them (if they are also tagged on ``baremetal``);
703 a ``trunk.<vlan_tag>`` interface should be created on the
``jumpserver`` for each tagged VLAN so the kernel won't drop the
packets;
706 - by decapsulating them first (if they are also untagged on
707 ``baremetal`` nodes);
709 The ``trunk`` bridge is then used for all bridges OPNFV Fuel
710 is aware of in ``idf.fuel.jumphost.bridges``, e.g. for a ``trunk`` where
711 only ``mgmt`` network is not decapsulated:
.. code-block:: yaml

idf:
  fuel:
    jumphost:
      bridges:
        admin: 'admin_br'
        mgmt: 'trunk'     # mgmt network is not decapsulated for jumpserver infra VMs,
                          # to align with the VLAN configuration of baremetal nodes.
        private: 'trunk'
        public: 'trunk'
730 The Linux kernel limits the name of network interfaces to 16 characters.
731 Extra care is required when choosing bridge names, so appending the
732 ``VLAN`` tag won't lead to an interface name length exceeding that limit.
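For illustration, assuming the ``mgmt`` network carries the (hypothetical)
``VLAN`` tag ``300``, the corresponding tagged interface on top of the
``trunk`` bridge could be created manually with:

.. code-block:: console

jenkins@jumpserver:~$ sudo ip link add link trunk name trunk.300 \
  type vlan id 300
jenkins@jumpserver:~$ sudo ip link set trunk.300 up

Note that ``trunk.300`` stays well within the 16 character interface name
limit mentioned above.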
``idf.fuel.network``
--------------------

``idf.fuel.network`` allows mapping the cluster networks (e.g. ``mgmt``) to
their physical interface name (e.g. ``eth0``) and bus address on the cluster
nodes.
741 ``idf.fuel.network.node`` should be a list with the same number (and order) of
742 elements as the cluster nodes defined in ``PDF``, e.g. the second cluster node
in ``PDF`` will use the interface name and bus address defined in the second
list element.
746 Below is a sample configuration block for a single node with two interfaces:
.. code-block:: yaml

idf:
  fuel:
    network:
      node:
        # Ordered-list, index should be in sync with node index in PDF
        - interfaces:
            # Ordered-list, index should be in sync with interface index
            # (interface names below are illustrative)
            - 'ens3'
            - 'ens5'
          busaddr:
            # Bus-info reported by `ethtool -i ethX`
            - '0000:00:03.0'
            - '0000:00:05.0'
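The bus address expected in ``busaddr`` can be read on a running node via
``ethtool`` (interface name and output below are illustrative):

.. code-block:: console

jenkins@cmp001:~$ ethtool -i enp1s0 | grep bus-info
bus-info: 0000:01:00.0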
``idf.fuel.reclass``
--------------------

``idf.fuel.reclass`` provides a way of overriding default values in the
770 reclass cluster model.
772 This currently covers strictly compute parameter tuning, including huge
773 pages, ``CPU`` pinning and other ``DPDK`` settings.
775 ``idf.fuel.reclass.node`` should be a list with the same number (and order) of
776 elements as the cluster nodes defined in ``PDF``, e.g. the second cluster node
777 in ``PDF`` will use the parameters defined in the second list element.
779 The following parameters are currently supported:
781 +---------------------------------+-------------------------------------------+
782 | ``idf.fuel.reclass.node.*`` | Details |
784 +=================================+===========================================+
785 | ``nova_cpu_pinning`` | List of CPU cores nova will be pinned to. |
789 | | Currently disabled. |
790 +---------------------------------+-------------------------------------------+
791 | ``compute_hugepages_size`` | Size of each persistent huge pages. |
793 | | Usual values are ``2M`` and ``1G``. |
794 +---------------------------------+-------------------------------------------+
795 | ``compute_hugepages_count`` | Total number of persistent huge pages. |
796 +---------------------------------+-------------------------------------------+
797 | ``compute_hugepages_mount`` | Mount point to use for huge pages. |
798 +---------------------------------+-------------------------------------------+
799 | ``compute_kernel_isolcpu`` | List of certain CPU cores that are |
800 | | isolated from Linux scheduler. |
801 +---------------------------------+-------------------------------------------+
802 | ``compute_dpdk_driver`` | Kernel module to provide userspace I/O |
804 +---------------------------------+-------------------------------------------+
805 | ``compute_ovs_pmd_cpu_mask`` | Hexadecimal mask of CPUs to run ``DPDK`` |
806 | | Poll-mode drivers. |
807 +---------------------------------+-------------------------------------------+
808 | ``compute_ovs_dpdk_socket_mem`` | Set of amount huge pages in ``MB`` to be |
809 | | used by ``OVS-DPDK`` daemon taken for each|
810 | | ``NUMA`` node. Set size is equal to |
811 | | ``NUMA`` nodes count, elements are |
812 | | divided by comma. |
813 +---------------------------------+-------------------------------------------+
814 | ``compute_ovs_dpdk_lcore_mask`` | Hexadecimal mask of ``DPDK`` lcore |
815 | | parameter used to run ``DPDK`` processes. |
816 +---------------------------------+-------------------------------------------+
817 | ``compute_ovs_memory_channels`` | Number of memory channels to be used. |
818 +---------------------------------+-------------------------------------------+
819 | ``dpdk0_driver`` | NIC driver to use for physical network |
821 +---------------------------------+-------------------------------------------+
822 | ``dpdk0_n_rxq`` | Number of ``RX`` queues. |
823 +---------------------------------+-------------------------------------------+
825 Sample ``compute_params`` configuration block (for a single node):
.. code-block:: yaml

idf:
  fuel:
    reclass:
      node:
        - compute_params:
            common: &compute_params_common
              compute_hugepages_size: 2M
              compute_hugepages_count: 2048
              compute_hugepages_mount: /mnt/hugepages_2M
            dpdk:
              <<: *compute_params_common
              compute_dpdk_driver: uio
              compute_ovs_pmd_cpu_mask: "0x6"
              compute_ovs_dpdk_socket_mem: "1024"
              compute_ovs_dpdk_lcore_mask: "0x8"
              compute_ovs_memory_channels: "2"
              dpdk0_driver: igb_uio
              dpdk0_n_rxq: 2  # number of RX queues, illustrative value
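Once such a node is deployed, the resulting huge page reservation can be
double-checked directly on the compute node (illustrative output, matching
the ``2M`` x ``2048`` sample above):

.. code-block:: console

jenkins@cmp001:~$ grep HugePages_Total /proc/meminfo
HugePages_Total:    2048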
``SDF``
~~~~~~~

Scenario Descriptor Files are not yet implemented in the OPNFV Fuel
``Gambia`` release.
854 Instead, embedded OPNFV Fuel scenarios files are locally available in
855 ``mcp/config/scenario``.
857 OPNFV Software Installation and Deployment
858 ==========================================
860 This section describes the process of installing all the components needed to
861 deploy the full OPNFV reference platform stack across a server cluster.
868 OPNFV releases previous to ``Gambia`` used to rely on the ``virtual``
869 keyword being part of the POD name (e.g. ``ericsson-virtual2``) to
configure the deployment type as ``virtual``. Otherwise ``baremetal``
was assumed.
``Gambia`` and newer releases are more flexible towards supporting a mix
874 of ``baremetal`` and ``virtual`` nodes, so the type of deployment is
now automatically determined based on the cluster node types in ``PDF``:
877 +---------------------------------+-------------------------------------------+
878 | ``PDF`` has nodes of type | Deployment type |
879 +---------------+-----------------+ |
880 | ``baremetal`` | ``virtual`` | |
881 +===============+=================+===========================================+
882 | yes | no | ``baremetal`` |
883 +---------------+-----------------+-------------------------------------------+
884 | yes | yes | ``hybrid`` |
885 +---------------+-----------------+-------------------------------------------+
886 | no | yes | ``virtual`` |
887 +---------------+-----------------+-------------------------------------------+
889 Based on that, the deployment script will later enable/disable certain extra
890 nodes (e.g. ``mas01``) and/or ``STATE`` files (e.g. ``maas``).
895 High availability of OpenStack services is determined based on scenario name,
896 e.g. ``os-nosdn-nofeature-noha`` vs ``os-nosdn-nofeature-ha``.
900 ``HA`` scenarios imply a virtualized control plane (``VCP``) for the
901 OpenStack services running on the 3 ``kvm`` nodes.
905 An experimental feature argument (``-N``) is supported by the deploy
906 script for disabling ``VCP``, although it might not be supported by
all scenarios and is not being continuously validated by OPNFV CI/CD.
911 ``virtual`` ``HA`` deployments are not officially supported, due to
912 poor performance and various limitations of nested virtualization on
913 both ``x86_64`` and ``aarch64`` architectures.
``virtual`` ``HA`` deployments without ``VCP`` are supported, but
considered experimental.
920 +-------------------------------+-------------------------+-------------------+
921 | Feature | ``HA`` scenario | ``noHA`` scenario |
922 +===============================+=========================+===================+
923 | ``VCP`` | yes, | no |
924 | (Virtualized Control Plane) | disabled with ``-N`` | |
925 +-------------------------------+-------------------------+-------------------+
926 | OpenStack APIs SSL | yes | no |
927 +-------------------------------+-------------------------+-------------------+
928 | Storage | ``GlusterFS`` | ``NFS`` |
929 +-------------------------------+-------------------------+-------------------+
931 Steps to Start the Automatic Deploy
932 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
934 These steps are common for ``virtual``, ``baremetal`` or ``hybrid`` deploys,
935 ``x86_64``, ``aarch64`` or ``mixed`` (``x86_64`` and ``aarch64``):
937 - Clone the OPNFV Fuel code from gerrit
938 - Checkout the ``Gambia`` release tag
939 - Start the deploy script
943 The deployment uses the OPNFV Pharos project as input (``PDF`` and
``IDF`` files) for hardware and network configuration of all current
OPNFV PODs.
947 When deploying a new POD, one may pass the ``-b`` flag to the deploy
948 script to override the path for the labconfig directory structure
949 containing the ``PDF`` and ``IDF`` (``<URI to configuration repo ...>`` is
950 the absolute path to a local or remote directory structure, populated
951 similar to `pharos git repo`_, i.e. ``PDF``/``IDF`` reside in a
952 subdirectory called ``labs/<lab_name>``).
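For example, a minimal user-supplied configuration tree for a hypothetical
lab ``mylab`` with a single POD ``pod1`` could look like:

.. code-block:: console

jenkins@jumpserver:~$ find /home/jenkins/labconfig -type f
/home/jenkins/labconfig/labs/mylab/pod1.yaml
/home/jenkins/labconfig/labs/mylab/idf-pod1.yaml

It would then be referenced via
``-b file:///home/jenkins/labconfig -l mylab -p pod1``.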
954 .. code-block:: console
956 jenkins@jumpserver:~$ git clone https://git.opnfv.org/fuel
957 jenkins@jumpserver:~$ cd fuel
958 jenkins@jumpserver:~/fuel$ git checkout opnfv-7.0.0
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> \
  -p <pod_name> \
  -b <URI to configuration repo containing the PDF/IDF files> \
  -s <scenario> \
  -D \
  -S <Storage directory for deploy artifacts> |& tee deploy.log
968 Besides the basic options, there are other recommended deploy arguments:
970 - use ``-D`` option to enable the debug info
971 - use ``-S`` option to point to a tmp dir where the disk images are saved.
972 The deploy artifacts will be re-used on subsequent (re)deployments.
973 - use ``|& tee`` to save the deploy log to a file
975 Typical Cluster Examples
976 ~~~~~~~~~~~~~~~~~~~~~~~~
978 Common cluster layouts usually fall into one of the cases described below,
979 categorized by deployment type (``baremetal``, ``virtual`` or ``hybrid``) and
980 high availability (``HA`` or ``noHA``).
982 A simplified overview of the steps ``deploy.sh`` will automatically perform is:
984 - create a Salt Master Docker container on the jumpserver, which will drive
985 the rest of the installation;
986 - ``baremetal`` or ``hybrid`` only: create a ``MaaS`` infrastructure node VM,
which will be leveraged using Salt to handle OS provisioning on the
``baremetal`` nodes;
989 - leverage Salt to install & configure OpenStack;
993 A virtual network ``mcpcontrol`` is always created for initial connection
of the VMs on the Jumphost.
998 A single cluster deployment per ``jumpserver`` node is currently supported,
regardless of its type (``virtual``, ``baremetal`` or ``hybrid``).
1001 Once the deployment is complete, the following should be accessible:
1003 +---------------+----------------------------------+---------------------------+
1004 | Resource | ``HA`` scenario | ``noHA`` scenario |
1005 +===============+==================================+===========================+
1006 | ``Horizon`` | ``https://<prx public VIP>`` | ``http://<ctl VIP>:8078`` |
1009 +---------------+----------------------------------+---------------------------+
1010 | ``SaltStack`` | ``http://<prx public VIP>:8090`` | N/A |
1012 | Documentation | | |
1013 +---------------+----------------------------------+---------------------------+
1017 For more details on locating and importing the generated SSL certificate,
1018 see :ref:`OPNFV Fuel User Guide <fuel-userguide>`.
1020 ``virtual`` ``noHA`` POD
1021 ------------------------
1023 In the following figure there are two generic examples of ``virtual`` deploys,
1024 each on a separate Jumphost node, both behind the same ``TOR`` switch:
- Jumphost 1 has only ``libvirt`` managed bridges (created by the deploy script);
1027 - Jumphost 2 has a mix of Linux (manually created) and ``libvirt`` managed
1028 bridges (created by the deploy script);
1030 .. figure:: img/fuel_virtual_noha.png
1033 :alt: OPNFV Fuel Virtual noHA POD Network Layout Examples
1035 OPNFV Fuel Virtual noHA POD Network Layout Examples
1037 +-------------+------------------------------------------------------------+
1038 | ``cfg01`` | Salt Master Docker container |
1039 +-------------+------------------------------------------------------------+
1040 | ``ctl01`` | Controller VM |
1041 +-------------+------------------------------------------------------------+
1042 | ``gtw01`` | Gateway VM with neutron services |
1043 | | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc) |
1044 +-------------+------------------------------------------------------------+
1045 | ``odl01`` | VM on which ``ODL`` runs |
1046 | | (for scenarios deployed with ODL) |
1047 +-------------+------------------------------------------------------------+
1048 | ``cmp001``, | Compute VMs |
1050 +-------------+------------------------------------------------------------+
1054 If external access to the ``public`` network is not required, there is
little to no motivation to create a custom ``PDF``/``IDF`` set for a
``virtual`` deployment.
Instead, the existing virtual POD definitions in the `pharos git repo`_ can
be used as-is:
1061 - ``ericsson-virtual1`` for ``x86_64``;
1062 - ``arm-virtual2`` for ``aarch64``;
1064 .. code-block:: console
1066 # example deploy cmd for an x86_64 virtual cluster
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l ericsson \
  -p virtual1 \
  -s os-nosdn-nofeature-noha \
  -D \
  -S /home/jenkins/tmpdir |& tee deploy.log
1073 ``baremetal`` ``noHA`` POD
1074 --------------------------
These scenarios are not tested in OPNFV CI, so they are considered
experimental.
1081 .. figure:: img/fuel_baremetal_noha.png
1084 :alt: OPNFV Fuel Baremetal noHA POD Network Layout Example
1086 OPNFV Fuel Baremetal noHA POD Network Layout Example
1088 +-------------+------------------------------------------------------------+
1089 | ``cfg01`` | Salt Master Docker container |
1090 +-------------+------------------------------------------------------------+
1091 | ``mas01`` | MaaS Node VM |
1092 +-------------+------------------------------------------------------------+
1093 | ``ctl01`` | Baremetal controller node |
1094 +-------------+------------------------------------------------------------+
1095 | ``gtw01`` | Baremetal Gateway with neutron services |
1096 | | (dhcp agent, L3 agent, metadata, etc) |
1097 +-------------+------------------------------------------------------------+
1098 | ``odl01`` | Baremetal node on which ODL runs |
1099 | | (for scenarios deployed with ODL, otherwise unused |
1100 +-------------+------------------------------------------------------------+
1101 | ``cmp001``, | Baremetal Computes |
1103 +-------------+------------------------------------------------------------+
1104 | Tenant VM | VM running in the cloud |
1105 +-------------+------------------------------------------------------------+
1107 ``baremetal`` ``HA`` POD
1108 ------------------------
1110 .. figure:: img/fuel_baremetal_ha.png
1113 :alt: OPNFV Fuel Baremetal HA POD Network Layout Example
1115 OPNFV Fuel Baremetal HA POD Network Layout Example
1117 +---------------------------+----------------------------------------------+
1118 | ``cfg01`` | Salt Master Docker container |
1119 +---------------------------+----------------------------------------------+
1120 | ``mas01`` | MaaS Node VM |
1121 +---------------------------+----------------------------------------------+
1122 | ``kvm01``, | Baremetals which hold the VMs with |
1123 | ``kvm02``, | controller functions |
1125 +---------------------------+----------------------------------------------+
1126 | ``prx01``, | Proxy VMs for Nginx |
1128 +---------------------------+----------------------------------------------+
1129 | ``msg01``, | RabbitMQ Service VMs |
1132 +---------------------------+----------------------------------------------+
1133 | ``dbs01``, | MySQL service VMs |
1136 +---------------------------+----------------------------------------------+
1137 | ``mdb01``, | Telemetry VMs |
1140 +---------------------------+----------------------------------------------+
1141 | ``odl01`` | VM on which ``OpenDaylight`` runs |
1142 | | (for scenarios deployed with ``ODL``) |
1143 +---------------------------+----------------------------------------------+
1144 | ``cmp001``, | Baremetal Computes |
1146 +---------------------------+----------------------------------------------+
1147 | Tenant VM | VM running in the cloud |
1148 +---------------------------+----------------------------------------------+
1150 .. code-block:: console
# x86_64 baremetal deploy on pod2 from Linux Foundation lab (lf-pod2)
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l lf \
  -p pod2 \
  -s os-nosdn-nofeature-ha \
  -D \
  -S /home/jenkins/tmpdir |& tee deploy.log
1159 .. code-block:: console
1161 # aarch64 baremetal deploy on pod5 from Enea ARM lab (arm-pod5)
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l arm \
  -p pod5 \
  -s os-nosdn-nofeature-ha \
  -D \
  -S /home/jenkins/tmpdir |& tee deploy.log
1168 ``hybrid`` ``noHA`` POD
1169 -----------------------
1171 .. figure:: img/fuel_hybrid_noha.png
1174 :alt: OPNFV Fuel Hybrid noHA POD Network Layout Examples
1176 OPNFV Fuel Hybrid noHA POD Network Layout Examples
1178 +-------------+------------------------------------------------------------+
1179 | ``cfg01`` | Salt Master Docker container |
1180 +-------------+------------------------------------------------------------+
1181 | ``mas01`` | MaaS Node VM |
1182 +-------------+------------------------------------------------------------+
1183 | ``ctl01`` | Controller VM |
1184 +-------------+------------------------------------------------------------+
1185 | ``gtw01`` | Gateway VM with neutron services |
1186 | | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc) |
1187 +-------------+------------------------------------------------------------+
1188 | ``odl01`` | VM on which ``ODL`` runs |
1189 | | (for scenarios deployed with ODL) |
1190 +-------------+------------------------------------------------------------+
1191 | ``cmp001``, | Baremetal Computes |
1193 +-------------+------------------------------------------------------------+
1195 Automatic Deploy Breakdown
1196 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1198 When an automatic deploy is started, the following operations are performed
1199 sequentially by the deploy script:
1201 +------------------+----------------------------------------------------------+
1202 | **Deploy stage** | **Details** |
1203 +==================+==========================================================+
1204 | Argument | enviroment variables and command line arguments passed |
1205 | Parsing | to ``deploy.sh`` are interpreted |
1206 +------------------+----------------------------------------------------------+
1207 | Distribution | Install and/or configure mandatory requirements on the |
1208 | Package | ``jumpserver`` node: |
1210 | | - ``Docker`` (from upstream and not distribution repos, |
1211 | | as the version included in ``Ubuntu`` ``Xenial`` is |
1213 | | - ``docker-compose`` (from upstream, as the version |
1214 | | included in both ``CentOS 7`` and |
1215 | | ``Ubuntu Xenial 16.04`` has dependency issues on most |
1217 | | - ``virt-inst`` (from upstream, as the version included |
1218 | | in ``Ubuntu Xenial 16.04`` is outdated and lacks |
1219 | | certain required features); |
1220 | | - other miscelaneous requirements, depending on |
1221 | | ``jumpserver`` distribution OS; |
1225 | | - ``mcp/scripts/requirements_deb.yaml`` (``Ubuntu``) |
1226 | | - ``mcp/scripts/requirements_rpm.yaml`` (``CentOS``) |
1230 | | Mininum required ``Docker`` version is ``17.x``. |
1234 | | Mininum required ``virt-inst`` version is ``1.4``. |
1235 +------------------+----------------------------------------------------------+
1236 | Patch | For each ``git`` submodule in OPNFV Fuel repository, |
1237 | Apply | if a subdirectory with the same name exists under |
1238 | | ``mcp/patches``, all patches in that subdirectory are |
1239 | | applied using ``git-am`` to the respective ``git`` |
1242 | | This allows OPNFV Fuel to alter upstream repositories |
1243 | | contents before consuming them, including: |
1245 | | - ``Docker`` container build process customization; |
1246 | | - ``salt-formulas`` customization; |
1247 | | - ``reclass.system`` customization; |
1251 | | - ``mcp/patches/README.rst`` |
1252 +------------------+----------------------------------------------------------+
1253 | SSH RSA Keypair | If not already present, a RSA keypair is generated on |
1254 | Generation | the ``jumpserver`` node at: |
1256 | | - ``/var/lib/opnfv/mcp.rsa{,.pub}`` |
1258 | | The public key will be added to the ``authorized_keys`` |
1259 | | list for ``ubuntu`` user, so the private key can be used |
1260 | | for key-based logins on: |
1262 | | - ``cfg01``, ``mas01`` infrastructure nodes; |
1263 | | - all cluster nodes (``baremetal`` and/or ``virtual``), |
1264 | | including ``VCP`` VMs; |
1265 +------------------+----------------------------------------------------------+
1266 | ``j2`` | Based on ``XDF`` (``PDF``, ``IDF``, ``SDF``) and |
1267 | Expansion | additional deployment configuration determined during |
1268 | | ``argument parsing`` stage described above, all jinja2 |
1269 | | templates are expanded, including: |
1271 | | - various classes in ``reclass.cluster``; |
1272 | | - docker-compose ``yaml`` for Salt Master bring-up; |
1273 | | - ``libvirt`` network definitions (``xml``); |
1274 +------------------+----------------------------------------------------------+
1275 | Jumpserver | Basic validation that common ``jumpserver`` requirements |
1276 | Requirements | are satisfied, e.g. ``PXE/admin`` is Linux bridge if |
1277 | Check | ``baremetal`` nodes are defined in the ``PDF``. |
1278 +------------------+----------------------------------------------------------+
1279 | Infrastucture | .. NOTE:: |
1281 | | All steps apply to and only to the ``jumpserver``. |
1283 | | - prepare virtual machines; |
1284 | | - (re)create ``libvirt`` managed networks; |
1285 | | - apply ``sysctl`` configuration; |
1286 | | - apply ``udev`` configuration; |
1287 | | - create & start virtual machines prepared earlier; |
1288 | | - create & start Salt Master (``cfg01``) Docker |
1290 +------------------+----------------------------------------------------------+
1291 | ``STATE`` | Based on deployment type, scenario and other parameters, |
1292 | Files | a ``STATE`` file list is constructed, then executed |
1297 | | The table below lists all current ``STATE`` files |
1298 | | and their intended action. |
1302 | | For more information on how the list of ``STATE`` |
1303 | | files is constructed, see |
1304 | | :ref:`OPNFV Fuel User Guide <fuel-userguide>`. |
1305 +------------------+----------------------------------------------------------+
1306 | Log | Contents of ``/var/log`` are recursively gathered from |
1307 | Collection | all the nodes, then archived together for later |
1309 +------------------+----------------------------------------------------------+
1311 ``STATE`` Files Overview
1312 ------------------------
1314 +---------------------------+-------------------------------------------------+
1315 | ``STATE`` file | Targets involved and main intended action |
1316 +===========================+=================================================+
1317 | ``virtual_init`` | ``cfg01``: reclass node generation |
1319 | | ``jumpserver`` VMs (e.g. ``mas01``): basic OS |
1321 +---------------------------+-------------------------------------------------+
1322 | ``maas`` | ``mas01``: OS, MaaS installation, |
1323 | | ``baremetal`` node commissioning and deploy |
1327 | | Skipped if no ``baremetal`` nodes are |
1328 | | defined in ``PDF`` (``virtual`` deploy). |
1329 +---------------------------+-------------------------------------------------+
1330 | ``baremetal_init`` | ``kvm``, ``cmp``: OS install, config |
1331 +---------------------------+-------------------------------------------------+
1332 | ``dpdk`` | ``cmp``: configure OVS-DPDK |
1333 +---------------------------+-------------------------------------------------+
1334 | ``networks`` | ``ctl``: create OpenStack networks |
1335 +---------------------------+-------------------------------------------------+
1336 | ``neutron_gateway`` | ``gtw01``: configure Neutron gateway |
1337 +---------------------------+-------------------------------------------------+
1338 | ``opendaylight`` | ``odl01``: install & configure ``ODL`` |
1339 +---------------------------+-------------------------------------------------+
1340 | ``openstack_noha`` | cluster nodes: install OpenStack without ``HA`` |
1341 +---------------------------+-------------------------------------------------+
1342 | ``openstack_ha`` | cluster nodes: install OpenStack with ``HA`` |
1343 +---------------------------+-------------------------------------------------+
1344 | ``virtual_control_plane`` | ``kvm``: create ``VCP`` VMs |
1346 | | ``VCP`` VMs: basic OS config |
1350 | | Skipped if ``-N`` deploy argument is used. |
1351 +---------------------------+-------------------------------------------------+
1352 | ``tacker`` | ``ctl``: install & configure Tacker |
1353 +---------------------------+-------------------------------------------------+
Release Notes
=============

Please refer to the :ref:`OPNFV Fuel Release Notes <fuel-releasenotes>`
article.
References
==========

For more information on the OPNFV ``Gambia`` 7.0 release, please see:
1366 #. `OPNFV Home Page`_
1367 #. `OPNFV Documentation`_
1368 #. `OPNFV Software Downloads`_
1369 #. `OPNFV Gambia Wiki Page`_
1370 #. `OpenStack Queens Release Artifacts`_
1371 #. `OpenStack Documentation`_
1372 #. `OpenDaylight Artifacts`_
1373 #. `Mirantis Cloud Platform Documentation`_
1374 #. `Saltstack Documentation`_
1375 #. `Saltstack Formulas`_
1378 .. FIXME: cleanup unused refs, extend above list
1379 .. _`OpenDaylight`: https://www.opendaylight.org/software
1380 .. _`OpenDaylight Artifacts`: https://www.opendaylight.org/software/downloads
1381 .. _`MCP`: https://www.mirantis.com/software/mcp/
1382 .. _`Mirantis Cloud Platform Documentation`: https://docs.mirantis.com/mcp/latest/
1383 .. _`fuel git repository`: https://git.opnfv.org/fuel
1384 .. _`pharos git repo`: https://git.opnfv.org/pharos
1385 .. _`OpenStack Documentation`: https://docs.openstack.org
1386 .. _`OpenStack Queens Release Artifacts`: https://www.openstack.org/software/queens
1387 .. _`OPNFV Home Page`: https://www.opnfv.org
1388 .. _`OPNFV Gambia Wiki Page`: https://wiki.opnfv.org/releases/Gambia
1389 .. _`OPNFV Documentation`: https://docs.opnfv.org
1390 .. _`OPNFV Software Downloads`: https://www.opnfv.org/software/download
1391 .. _`Apache License 2.0`: https://www.apache.org/licenses/LICENSE-2.0
1392 .. _`Saltstack Documentation`: https://docs.saltstack.com/en/latest/topics/
1393 .. _`Saltstack Formulas`: https://salt-formulas.readthedocs.io/en/latest/
1394 .. _`Reclass`: https://reclass.pantsfullofunix.net
1395 .. _`OPNFV Pharos Specification`: https://wiki.opnfv.org/display/pharos/Pharos+Specification
1396 .. _`OPNFV PDF Wiki Page`: https://wiki.opnfv.org/display/INF/POD+Descriptor