Abstract
========
-This document describes how to install the Euphrates release of
+This document describes how to install the Fraser release of
OPNFV when using Fuel as a deployment tool, covering its usage,
limitations, dependencies and required system resources.
This is a unified document for both the x86_64 and aarch64 architectures.
============
This document provides guidelines on how to install and
-configure the Euphrates release of OPNFV when using Fuel as a
+configure the Fraser release of OPNFV when using Fuel as a
deployment tool, including required software and hardware configurations.
Although the available installation options provide a high degree of
freedom in how the system is set up, including architecture, services
and features, etc., said permutations may not provide an OPNFV
compliant reference architecture. This document provides a
-step-by-step guide that results in an OPNFV Euphrates compliant
+step-by-step guide that results in an OPNFV Fraser compliant
deployment.
The audience of this document is assumed to have good knowledge of
Preface
=======
-Before starting the installation of the Euphrates release of
+Before starting the installation of the Fraser release of
OPNFV, using Fuel as a deployment tool, some planning must be
done.
=========================================
The following minimum hardware requirements must be met for the virtual
-installation of Euphrates using Fuel:
+installation of Fraser using Fuel:
+----------------------------+--------------------------------------------------------+
| **HW Aspect** | **Requirement** |
+----------------------------+--------------------------------------------------------+
| **RAM** | Minimum 32GB/server (Depending on VNF work load) |
+----------------------------+--------------------------------------------------------+
-| **Disk** | Minimum 100GB (SSD or SCSI (15krpm) highly recommended |
+| **Disk** | Minimum 100GB (SSD or SCSI (15krpm) highly recommended)|
+----------------------------+--------------------------------------------------------+
===========================================
The following minimum hardware requirements must be met for the baremetal
-installation of Euphrates using Fuel:
+installation of Fraser using Fuel:
+-------------------------+------------------------------------------------------+
| **HW Aspect** | **Requirement** |
| | order (e.g. IPMI) |
+-------------------------+------------------------------------------------------+
-**NOTE:** All nodes including the Jumpserver must have the same architecture (either x86_64 or aarch64).
+.. NOTE::
-**NOTE:** For aarch64 deployments an UEFI compatible firmware with PXE support is needed (e.g. EDK2).
+ All nodes including the Jumpserver must have the same architecture (either x86_64 or aarch64).
+
+.. NOTE::
+
+   For aarch64 deployments, a UEFI-compatible firmware with PXE support is needed (e.g. EDK2).
===============================
Help with Hardware Requirements
infrastructure as well as the provider networks and the private tenant
VLANs need to be manually configured.
-Manual configuration of the Euphrates hardware platform should
+Manual configuration of the Fraser hardware platform should
be carried out according to the `OPNFV Pharos Specification
<https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_.
according to the Pharos specification. Relevant network bridges should
also be pre-configured (e.g. ``admin_br``, ``mgmt_br``, ``public_br``); a minimal creation example follows the list below.
-- The admin bridge (admin_br) is mandatory for the baremetal nodes PXE booting during fuel installation.
+- The admin bridge (admin_br) is mandatory for the baremetal nodes PXE booting during Fuel installation.
- The management bridge (mgmt_br) is required for testing suites (e.g. functest/yardstick); it is
suggested to pre-configure it for debugging purposes.
- The public bridge (public_br) is also useful for debugging purposes, but not mandatory.
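+
+A minimal sketch of creating one such bridge, assuming ``eth1`` is the NIC wired
+to the PXE/Admin network (interface names and addressing are site specific):
+
+.. code-block:: bash
+
+    $ sudo brctl addbr admin_br
+    $ sudo brctl addif admin_br eth1   # example NIC name, adapt to your wiring
+    $ sudo ip link set admin_br up
+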
-The user running the deploy script on the Jumpserver should belong to "sudo" and "libvirt" groups,
+The user running the deploy script on the Jumpserver should belong to ``sudo`` and ``libvirt`` groups,
and have passwordless sudo access.
-The following example adds the groups to the user "jenkins"
+The following example adds the groups to the user ``jenkins``
.. code-block:: bash
...
%jenkins ALL=(ALL) NOPASSWD:ALL
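+
+The group memberships themselves might be added as follows (a sketch using the
+same example user ``jenkins``; note that some distributions name the libvirt
+group ``libvirtd``, and a re-login is required for the change to take effect):
+
+.. code-block:: bash
+
+    $ sudo usermod -aG sudo jenkins      # passwordless sudo still needs the sudoers entry above
+    $ sudo usermod -aG libvirt jenkins   # group may be named "libvirtd" on some distros
+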
-The folder containing the temporary deploy artifacts (/home/jenkins/tmpdir in the examples below)
+The folder containing the temporary deploy artifacts (``/home/jenkins/tmpdir`` in the examples below)
needs to have mask 777 in order for libvirt to be able to use them.
.. code-block:: bash
$ mkdir -p -m 777 /home/jenkins/tmpdir
-For an AArch64 Jumpserver, the "libvirt" minimum required version is 3.x, 3.5 or newer highly recommended.
+For an AArch64 Jumpserver, the ``libvirt`` minimum required version is 3.x; 3.5 or newer is highly recommended.
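+
+The installed version can be checked as follows:
+
+.. code-block:: bash
+
+    $ virsh --version   # prints the libvirt library version, e.g. 3.5.0
+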
While not mandatory, upgrading the kernel and QEMU on the Jumpserver is also highly recommended
(especially on AArch64 Jumpservers).
For convenience, Armband provides a DEB repository holding all the required packages.
To add and enable the Armband repository on an Ubuntu 16.04 system,
-create a new sources list file `/apt/sources.list.d/armband.list` with the following contents:
+create a new sources list file ``/etc/apt/sources.list.d/armband.list`` with the following contents:
.. code-block:: bash
$ cat /etc/apt/sources.list.d/armband.list
- //for OpenStack Pike release
- deb http://linux.enea.com/mcp-repos/pike/xenial pike-armband main
+ # for OpenStack Queens release
+ deb http://linux.enea.com/mcp-repos/queens/xenial queens-armband main
$ apt-get update
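+
+To confirm the repository is picked up, one might query a package of interest,
+assuming here that ``libvirt-bin`` is among the packages it provides:
+
+.. code-block:: bash
+
+    $ apt-cache policy libvirt-bin   # candidate version should now come from linux.enea.com
+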
- CentOS 7 (recommended by Pharos specification);
- Ubuntu Xenial;
-**NOTE**: The install script expects 'libvirt' to be already running on the Jumpserver. In case libvirt
-packages are missing, the script will install them; but depending on the OS distribution, the user
-might have to start the 'libvirtd' service manually, then run the deploy script again. Therefore, it
-is recommended to install libvirt-bin explicitly on the Jumpserver before the deployment.
+.. WARNING::
+
+ The install script expects ``libvirt`` to be already running on the Jumpserver.
+ In case ``libvirt`` packages are missing, the script will install them; but
+ depending on the OS distribution, the user might have to start the ``libvirtd``
+ service manually, then run the deploy script again. Therefore, it
+   is recommended to install ``libvirt-bin`` explicitly on the Jumpserver before the deployment.
+
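+   On an Ubuntu Xenial Jumpserver this might translate to the commands below;
+   package and service names are distribution specific (e.g. ``libvirt``/``libvirtd``
+   on CentOS):
+
+   .. code-block:: bash
+
+      $ sudo apt-get install libvirt-bin   # Ubuntu 16.04 package name
+      $ sudo service libvirt-bin start     # ensure the daemon is running
+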
+.. NOTE::
+
+ It is also recommended to install the newer kernel on the Jumpserver before the deployment.
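+
+   On Ubuntu Xenial, one possible way is the HWE kernel (an illustration, not a
+   Fuel requirement; any sufficiently recent kernel should do):
+
+   .. code-block:: bash
+
+      $ sudo apt-get install linux-generic-hwe-16.04
+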
-**NOTE**: It is also recommended to install the newer kernel on the Jumpserver before the deployment.
+.. WARNING::
-**NOTE**: The install script will automatically install the rest of required distro package
-dependencies on the Jumpserver, unless explicitly asked not to (via -P deploy arg). This includes
-Python, QEMU, libvirt etc.
+ The install script will automatically install the rest of required distro package
+ dependencies on the Jumpserver, unless explicitly asked not to (via ``-P`` deploy arg).
+   This includes Python, QEMU, libvirt, etc.
-**NOTE**: The install script will alter Jumpserver sysconf and disable `net.bridge.bridge-nf-call`.
+.. WARNING::
+
+ The install script will alter Jumpserver sysconf and disable ``net.bridge.bridge-nf-call``.
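+
+   Done manually, the equivalent change would look like the sketch below
+   (assuming the standard ``net.bridge.bridge-nf-call-*`` sysctl keys):
+
+   .. code-block:: bash
+
+      $ sudo sysctl -w net.bridge.bridge-nf-call-iptables=0
+      $ sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=0
+      $ sudo sysctl -w net.bridge.bridge-nf-call-arptables=0
+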
.. code-block:: bash
+-----------------------+------------------------------------------------------------------------+
| ctl01 | Controller VM |
+-----------------------+------------------------------------------------------------------------+
- | cmp01/cmp02 | Compute VMs |
+ | cmp001/cmp002 | Compute VMs |
+-----------------------+------------------------------------------------------------------------+
| gtw01 | Gateway VM with neutron services (dhcp agent, L3 agent, metadata, etc) |
+-----------------------+------------------------------------------------------------------------+
- Jumphost 2 has a mix of Linux and virsh bridges; when a Linux bridge exists for a specified network,
  the deploy script will skip creating a virsh bridge for it.
-**Note**: A virtual network "mcpcontrol" is always created. For virtual deploys, "mcpcontrol" is also
- used for Admin, leaving the PXE/Admin bridge unused.
+.. NOTE::
+
+   A virtual network ``mcpcontrol`` is always created for the initial connection of the VMs on the Jumphost.
Automatic Installation of a Baremetal POD
=========================================
The baremetal installation process can be done by editing the information about
-hardware and environment in the reclass files, or by using a Pod Descriptor File (PDF).
-This file contains all the information about the hardware and network of the deployment
-the will be fed to the reclass model during deployment.
+hardware and environment in the reclass files, or by using a Pod Descriptor
+File (PDF) and an Installer Descriptor File (IDF), as described in the OPNFV Pharos project.
+These files contain all the information about the hardware and network of the deployment
+that will be fed to the reclass model during deployment.
The installation is done automatically with the deploy script, which will:
required to pre-configure at least the admin_br bridge for the PXE/Admin network.
For the targets, the bridges are created by the deploy script.
-**Note**: A virtual network "mcpcontrol" is always created. For baremetal deploys, PXE bridge is used
-for baremetal node provisioning, while "mcpcontrol" is used to provision the infrastructure VMs only.
+.. NOTE::
+
+   A virtual network ``mcpcontrol`` is always created for the initial connection of the VMs on the Jumphost.
Steps to Start the Automatic Deploy
$ git clone https://git.opnfv.org/armband
$ cd armband
-#. Checkout the Euphrates release
+#. Checkout the Fraser release
.. code-block:: bash
- $ git checkout opnfv-5.0.2
+ $ git checkout opnfv-6.2.1
#. Start the deploy script
Besides the basic options, there are other recommended deploy arguments:
- - use **-D** option to enable the debug info
- - use **-S** option to point to a tmp dir where the disk images are saved. The images will be
+ - use ``-D`` option to enable the debug info
+ - use ``-S`` option to point to a tmp dir where the disk images are saved. The images will be
re-used between deploys
- - use **|& tee** to save the deploy log to a file
+ - use ``|& tee`` to save the deploy log to a file
.. code-block:: bash
-D \
-S <Storage directory for disk images> |& tee deploy.log
+.. NOTE::
+
+ The deployment uses the OPNFV Pharos project as input (PDF and IDF files)
+ for hardware and network configuration of all current OPNFV PODs.
+ When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override
+ the path for the labconfig directory structure containing the PDF and IDF (see below).
+
Examples
--------
#. Virtual deploy
- To start a virtual deployment, it is required to have the `virtual` keyword
+ To start a virtual deployment, it is required to have the **virtual** keyword
when specifying the pod name to the installer script.
It will create the required bridges and networks, configure Salt Master and
.. code-block:: bash
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l ericsson \
- -p virtual3 \
- -s os-nosdn-nofeature-noha \
- -D \
- -S /home/jenkins/tmpdir |& tee deploy.log
+ $ ci/deploy.sh -l ericsson \
+ -p virtual3 \
+ -s os-nosdn-nofeature-noha \
+ -D \
+ -S /home/jenkins/tmpdir |& tee deploy.log
- Once the deployment is complete, the OpenStack Dashboard, Horizon is
- available at http://<controller VIP>:8078, e.g. http://10.16.0.11:8078.
+ Once the deployment is complete, the OpenStack Dashboard, Horizon, is
+   available at ``http://<controller VIP>:8078``.
The administrator credentials are **admin** / **opnfv_secret**.
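+
+   A quick, generic reachability check of the dashboard (not Fuel specific;
+   substitute the actual controller VIP):
+
+   .. code-block:: bash
+
+      $ curl -sI http://<controller VIP>:8078   # expect an HTTP status line in the response
+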
+ A simple (and generic) sample PDF/IDF set of configuration files may
+ be used for virtual deployments by setting lab/POD name to ``local-virtual1``.
+ This sample configuration is x86_64 specific and hardcodes certain parameters,
+ like public network address space, so a dedicated PDF/IDF is highly recommended.
+
+ .. code-block:: bash
+
+ $ ci/deploy.sh -l local \
+ -p virtual1 \
+ -s os-nosdn-nofeature-noha \
+ -D \
+ -S /home/jenkins/tmpdir |& tee deploy.log
+
#. Baremetal deploy
An x86 deploy on pod2 from the Linux Foundation lab
.. code-block:: bash
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l lf \
+ $ ci/deploy.sh -l lf \
-p pod2 \
-s os-nosdn-nofeature-ha \
-D \
Fuel@OPNFV LF POD2 Network Layout
- Once the deployment is complete, the SaltStack Deployment Documentation is
- available at http://<Proxy VIP>:8090, e.g. http://172.30.10.103:8090.
-
An aarch64 deploy on pod5 from the Arm lab
.. code-block:: bash
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l arm \
+ $ ci/deploy.sh -l arm \
-p pod5 \
-s os-nosdn-nofeature-ha \
-D \
Fuel@OPNFV ARM POD5 Network Layout
-Pod Descriptor Files
-====================
+ Once the deployment is complete, the SaltStack Deployment Documentation is
+ available at ``http://<proxy public VIP>:8090``.
+
+ When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override
+ the path for the labconfig directory structure containing the PDF and IDF.
+
+ .. code-block:: bash
+
+ $ ci/deploy.sh -b file://<absolute_path_to_labconfig> \
+ -l <lab_name> \
+ -p <pod_name> \
+ -s <scenario> \
+ -D \
+ -S <tmp_folder> |& tee deploy.log
+
+ - <absolute_path_to_labconfig> is the absolute path to a local directory, populated
+ similar to Pharos, i.e. PDF/IDF reside in ``<absolute_path_to_labconfig>/labs/<lab_name>``
+ - <lab_name> is the same as the directory in the path above
+ - <pod_name> is the name used for the PDF (``<pod_name>.yaml``) and IDF (``idf-<pod_name>.yaml``) files
+
+Pod and Installer Descriptor Files
+==================================
Descriptor files provide the installer with an abstraction of the target pod
with all its hardware characteristics and required parameters. This information
The Pod Descriptor File is a hardware description of the pod
infrastructure. The information is modeled under a yaml structure.
A reference file with the expected yaml structure is available at
-*mcp/config/labs/local/pod1.yaml*
+``mcp/config/labs/local/pod1.yaml``.
The hardware description is arranged into a main "jumphost" node and a "nodes"
set for all target boards. For each node the following characteristics are defined:
- Remote management parameters.
- Network interfaces list including mac address, speed, advanced features and name.
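+
+An illustrative fragment for a single node entry is sketched below; all values
+are made up, and the reference file above remains the authoritative structure:
+
+.. code-block:: bash
+
+    $ cat example-node.yaml   # hypothetical fragment, not a complete PDF
+    nodes:
+      - name: node1
+        node: {type: baremetal, arch: x86_64, cpus: 2, cores: 10, memory: 64G}
+        remote_management: {type: ipmi, user: admin, pass: password, address: 10.1.0.12}
+        interfaces:
+          - name: nic1
+            mac_address: 'de:ad:be:ef:00:01'
+            speed: 1gb
+            features: 'dpdk|sriov'
+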
-**Note**: The fixed IPs are ignored by the MCP installer script and it will instead
-assign based on the network ranges defined under the pod network configuration.
+.. NOTE::
+
+   The fixed IPs are ignored by the MCP installer script, which will instead
+   assign addresses based on the network ranges defined in the IDF.
The Installer Descriptor File extends the PDF with pod related parameters
required by the installer. This information may differ for each installer type
and it is not considered part of the pod infrastructure.
The IDF file must be named after the PDF with the prefix "idf-". A reference file with the expected
-structure is available at *mcp/config/labs/local/idf-pod1.yaml*
+structure is available at ``mcp/config/labs/local/idf-pod1.yaml``.
The file follows a yaml structure and two sections "net_config" and "fuel" are expected.
The "net_config" section describes all the internal and provider networks
-assigned to the pod. Each network is expected to have a vlan tag, IP subnet and
+assigned to the pod. Each used network is expected to have a VLAN tag, IP subnet and
attached interface on the boards. Untagged VLANs shall be defined as "native".
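+
+A hypothetical ``net_config`` entry might look as follows (values are examples only):
+
+.. code-block:: bash
+
+    $ cat idf-example.yaml   # fragment, not a complete IDF
+    net_config:
+      admin:
+        interface: 0           # index into the node's PDF interface list
+        vlan: native           # untagged network
+        network: 192.168.11.0
+        mask: 24
+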
The "fuel" section defines several sub-sections required by the Fuel installer:
and other DPDK settings. (optional)
The following parameters can be defined in the IDF files under "reclass". These values will
-overwrite the default configuration values in Fuel repository.
+overwrite the default configuration values in the Fuel repository:
-- nova_cpu_pinning: List of CPU cores nova will be pinned to.
+- nova_cpu_pinning: List of CPU cores nova will be pinned to. Currently disabled.
- compute_hugepages_size: Size of each persistent huge page. Usual values are '2M' and '1G'.
- compute_hugepages_count: Total number of persistent huge pages.
- compute_hugepages_mount: Mount point to use for huge pages.
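+
+As a sketch, such overrides might appear in the IDF as follows; the exact
+nesting is defined by the IDF schema referenced below:
+
+.. code-block:: bash
+
+    $ cat idf-example.yaml   # fragment, not a complete IDF
+    reclass:
+      compute_hugepages_size: 2M
+      compute_hugepages_count: 2048
+      compute_hugepages_mount: /mnt/hugepages_2M
+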
The schemas are defined as a git submodule in the Fuel repository. Input files provided
to the installer will be validated against the schemas.
-- *mcp/scripts/pharos/config/pdf/pod1.schema.yaml*
-- *mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml*
+- ``mcp/scripts/pharos/config/pdf/pod1.schema.yaml``
+- ``mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml``
=============
Release Notes
OpenStack
-4) `OpenStack Ocata Release Artifacts <http://www.openstack.org/software/ocata>`_
+4) `OpenStack Queens Release Artifacts <http://www.openstack.org/software/queens>`_
5) `OpenStack Documentation <http://docs.openstack.org>`_
OpenDaylight