X-Git-Url: https://gerrit.opnfv.org/gerrit/gitweb?a=blobdiff_plain;f=docs%2Frelease%2Finstallation%2Finstallation.instruction.rst;h=cad1b10774072bf0558d442a19f59aa797c779ee;hb=8d6ea0ff12b6633b0edf6bbb0988360597efc57e;hp=9b2c2e8c27c50cdb68c6acfd22023e3ecee7c60f;hpb=ac4530fa1c89ff1a6af3cec09b3b5de458426bde;p=fuel.git diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst index 9b2c2e8c2..cad1b1077 100644 --- a/docs/release/installation/installation.instruction.rst +++ b/docs/release/installation/installation.instruction.rst @@ -21,7 +21,7 @@ This document provides guidelines on how to install and configure the Euphrates release of OPNFV when using Fuel as a deployment tool, including required software and hardware configurations. -Although the available installation options provide a high de.g.ee of +Although the available installation options provide a high degree of freedom in how the system is set up, including architecture, services and features, etc., said permutations may not provide an OPNFV compliant reference architecture. This document provides a @@ -40,7 +40,7 @@ OPNFV, using Fuel as a deployment tool, some planning must be done. Preparations -================== +============ Prior to installation, a number of deployment specific parameters must be collected, those are: @@ -65,7 +65,7 @@ This information will be needed for the configuration procedures provided in this document. ========================================= -Hardware requirements for virtual deploys +Hardware Requirements for Virtual Deploys ========================================= The following minimum hardware requirements must be met for the virtual @@ -76,7 +76,7 @@ installation of Euphrates using Fuel: | | | +============================+========================================================+ | **1 Jumpserver** | A physical node (also called Foundation Node) that | -| | hosts a Salt Master VM and each of the VM nodes in | +| | will host a Salt Master VM and each of the VM nodes in | | | the virtual deploy | +----------------------------+--------------------------------------------------------+ | **CPU** | Minimum 1 socket with Virtualization support | @@ -88,7 +88,7 @@ installation of Euphrates using Fuel: =========================================== -Hardware requirements for baremetal deploys +Hardware Requirements for Baremetal Deploys =========================================== The following minimum hardware requirements must be met for the baremetal @@ -132,14 +132,14 @@ installation of Euphrates using Fuel: **NOTE:** For aarch64 deployments an UEFI compatible firmware with PXE support is needed (e.g. EDK2). - =============================== Help with Hardware Requirements =============================== Calculate hardware requirements: -For information on compatible hardware types available for use, please see `Fuel OpenStack Hardware Compatibility List `_. +For information on compatible hardware types available for use, +please see `Fuel OpenStack Hardware Compatibility List `_ When choosing the hardware on which you will deploy your OpenStack environment, you should think about: @@ -153,7 +153,7 @@ environment, you should think about: - Networking -- Depends on the Choose Network Topology, the network bandwidth per virtual machine, and network storage. 
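+
+As a quick sanity check of the minimum hardware requirements listed above, a few
+generic Linux commands can be run on the prospective Jumpserver. This is only an
+illustrative sketch; the exact output depends on the distribution and architecture
+(the vmx/svm flags below are x86-specific, on AArch64 virtualization support is
+exposed through KVM instead):
+
+.. code-block:: bash
+
+    # CPU virtualization support (a non-zero count is expected on x86)
+    $ egrep -c '(vmx|svm)' /proc/cpuinfo
+
+    # Socket/core layout, total memory and available disk space
+    $ lscpu
+    $ free -h
+    $ df -h
+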
 ================================================
-Top of the rack (TOR) Configuration requirements
+Top of the Rack (TOR) Configuration Requirements
 ================================================
 
 The switching infrastructure provides connectivity for the OPNFV
@@ -177,8 +177,79 @@ Manual configuration of the Euphrates hardware platform should
 be carried out according to the `OPNFV Pharos Specification
 `_.
 
+============================
+OPNFV Software Prerequisites
+============================
+
+The Jumpserver node should be pre-provisioned with an operating system,
+according to the Pharos specification. Relevant network bridges should
+also be pre-configured (e.g. admin_br, mgmt_br, public_br).
+
+ - The admin bridge (admin_br) is mandatory for PXE booting the baremetal nodes during Fuel installation.
+ - The management bridge (mgmt_br) is required by the testing suites (e.g. functest/yardstick); it is
+   suggested to pre-configure it for debugging purposes.
+ - The public bridge (public_br) is also nice to have for debugging purposes, but not mandatory.
+
+The user running the deploy script on the Jumpserver should belong to the "sudo" and "libvirt" groups,
+and have passwordless sudo access.
+
+The following example adds the groups to the user "jenkins":
+
+.. code-block:: bash
+
+    $ sudo usermod -aG sudo jenkins
+    $ sudo usermod -aG libvirt jenkins
+    $ reboot
+    $ groups
+    jenkins sudo libvirt
+
+    $ sudo visudo
+    ...
+    %jenkins ALL=(ALL) NOPASSWD:ALL
+
+For an AArch64 Jumpserver, the minimum required "libvirt" version is 3.x; 3.5 or newer is
+highly recommended. While not mandatory, upgrading the kernel and QEMU on the Jumpserver is
+also highly recommended (especially on AArch64 Jumpservers).
+
+For CentOS 7.4 (AArch64), distro provided packages are already new enough.
+For Ubuntu 16.04 (arm64), distro packages are too old and 3rd party repositories should be used.
+For convenience, Armband provides a DEB repository holding all the required packages.
+
+To add and enable the Armband repository on an Ubuntu 16.04 system,
+create a new sources list file `/etc/apt/sources.list.d/armband.list` with the following contents:
+
+.. code-block:: bash
+
+    $ cat /etc/apt/sources.list.d/armband.list
+    # for OpenStack Pike release
+    deb http://linux.enea.com/mcp-repos/pike/xenial pike-armband main
+
+    $ apt-get update
+
+Fuel@OPNFV has been validated by CI using the following distributions
+installed on the Jumpserver:
+
+ - CentOS 7 (recommended by Pharos specification);
+ - Ubuntu Xenial;
+
+**NOTE**: The install script expects 'libvirt' to be already running on the Jumpserver. In case libvirt
+packages are missing, the script will install them; but depending on the OS distribution, the user
+might have to start the 'libvirtd' service manually, then run the deploy script again. Therefore, it
+is recommended to install libvirt-bin explicitly on the Jumpserver before the deployment.
+
+**NOTE**: It is also recommended to install a newer kernel on the Jumpserver before the deployment.
+
+**NOTE**: The install script will automatically install the rest of the required distro package
+dependencies on the Jumpserver, unless explicitly asked not to (via the -P deploy arg). This includes
+Python, QEMU, libvirt etc.
+
+.. code-block:: bash
+
+    $ apt-get install linux-image-generic-hwe-16.04-edge libvirt-bin
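+
+Before starting a deployment, it is worth double-checking that the libvirt daemon is
+actually running and that the deploy user can reach it without sudo. The commands below
+are only a minimal sanity-check sketch; the service name may be 'libvirtd' or
+'libvirt-bin' depending on the distribution:
+
+.. code-block:: bash
+
+    # Verify the libvirt daemon is active (start it manually if it is not)
+    $ systemctl status libvirtd
+
+    # Verify the deploy user can talk to libvirt without sudo
+    $ virsh list --all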
 
 ==========================================
-OPNFV Software installation and deployment
+OPNFV Software Installation and Deployment
 ==========================================
 
 This section describes the process of installing all the components needed to
@@ -190,9 +261,9 @@ automatic based on deployment scenario.
 
 The reclass model covers:
 
    - Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01)
-   - Openstack node defition: Controler nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
+   - OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
    - Infrastructure components to install (software packages, services etc.)
-   - Openstack components and services (rabbitmq, galera etc.), as well as all configuration for them
+   - OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them
 
 
 Automatic Installation of a Virtual POD
@@ -201,16 +272,43 @@ Automatic Installation of a Virtual POD
 
 For virtual deploys all the targets are VMs on the Jumpserver. The deploy script will:
 
    - Create a Salt Master VM on the Jumpserver which will drive the installation
-   - Create the bridges for networking with virsh (only if a real bridge does not already exists for a given network)
-   - Install Openstack on the targets
-   - Leverage Salt to install & configure Openstack services
+   - Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network)
+   - Install OpenStack on the targets
+   - Leverage Salt to install & configure OpenStack services
+
+.. figure:: img/fuel_virtual.png
+   :align: center
+   :alt: Fuel@OPNFV Virtual POD Network Layout Examples
+
+   Fuel@OPNFV Virtual POD Network Layout Examples
+
+   +-----------------------+------------------------------------------------------------------------+
+   | cfg01                 | Salt Master VM                                                          |
+   +-----------------------+------------------------------------------------------------------------+
+   | ctl01                 | Controller VM                                                           |
+   +-----------------------+------------------------------------------------------------------------+
+   | cmp01/cmp02           | Compute VMs                                                             |
+   +-----------------------+------------------------------------------------------------------------+
+   | gtw01                 | Gateway VM with neutron services (dhcp agent, L3 agent, metadata, etc)  |
+   +-----------------------+------------------------------------------------------------------------+
+   | odl01                 | VM on which ODL runs (for scenarios deployed with ODL)                  |
+   +-----------------------+------------------------------------------------------------------------+
+
+
+The figure shows examples of two virtual deploys:
+
+   - Jumphost 1 has only virsh bridges, created by the deploy script
+   - Jumphost 2 has a mix of Linux and virsh bridges; when a Linux bridge exists for a specified network,
+     the deploy script will skip creating a virsh bridge for it
+
+**Note**: A virtual network "mcpcontrol" is always created. For virtual deploys, "mcpcontrol" is also
+used for Admin, leaving the PXE/Admin bridge unused.
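+
+After a virtual deploy it is easy to check which networks ended up as virsh-managed
+networks and which ones are backed by pre-existing Linux bridges on the Jumpserver.
+This is only an illustrative sketch using standard libvirt and bridge-utils/iproute2
+tooling; the actual network names depend on the scenario and lab configuration, except
+for "mcpcontrol", which is always present:
+
+.. code-block:: bash
+
+    # virsh networks created by the deploy script (e.g. mcpcontrol)
+    $ virsh net-list --all
+
+    # Linux bridges already defined on the Jumpserver (re-used by the deploy script)
+    $ brctl show
+    $ ip -br link show type bridge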
 
 
 Automatic Installation of a Baremetal POD
 =========================================
 
 The baremetal installation process can be done by editing the information about
-hardware and enviroment in the reclass files, or by using a Pod Descriptor File (PDF).
+hardware and environment in the reclass files, or by using a Pod Descriptor File (PDF).
 This file contains all the information about the hardware and network of the deployment
 that will be fed to the reclass model during deployment.
 
@@ -218,13 +316,48 @@ The installation is done automatically with the deploy script, which will:
 
    - Create a Salt Master VM on the Jumpserver which will drive the installation
    - Create a MaaS Node VM on the Jumpserver which will provision the targets
-   - Install Openstack on the targets
+   - Install OpenStack on the targets
    - Leverage MaaS to provision baremetal nodes with the operating system
-   - Leverage Salt to configure the operatign system on the baremetal nodes
-   - Leverage Salt to install & configure Openstack services
-
-
-Steps to start the automatic deploy
+   - Leverage Salt to configure the operating system on the baremetal nodes
+   - Leverage Salt to install & configure OpenStack services
+
+.. figure:: img/fuel_baremetal.png
+   :align: center
+   :alt: Fuel@OPNFV Baremetal POD Network Layout Example
+
+   Fuel@OPNFV Baremetal POD Network Layout Example
+
+   +-----------------------+---------------------------------------------------------+
+   | cfg01                 | Salt Master VM                                          |
+   +-----------------------+---------------------------------------------------------+
+   | mas01                 | MaaS Node VM                                            |
+   +-----------------------+---------------------------------------------------------+
+   | kvm01..03             | Baremetal nodes which host the controller VMs           |
+   +-----------------------+---------------------------------------------------------+
+   | cmp001/cmp002         | Baremetal compute nodes                                 |
+   +-----------------------+---------------------------------------------------------+
+   | prx01/prx02           | Proxy VMs for Nginx                                     |
+   +-----------------------+---------------------------------------------------------+
+   | msg01..03             | RabbitMQ service VMs                                    |
+   +-----------------------+---------------------------------------------------------+
+   | dbs01..03             | MySQL service VMs                                       |
+   +-----------------------+---------------------------------------------------------+
+   | mdb01..03             | Telemetry VMs                                           |
+   +-----------------------+---------------------------------------------------------+
+   | odl01                 | VM on which ODL runs (for scenarios deployed with ODL)  |
+   +-----------------------+---------------------------------------------------------+
+   | Tenant VM             | VM running in the cloud                                 |
+   +-----------------------+---------------------------------------------------------+
+
+In the baremetal deploy all bridges but "mcpcontrol" are Linux bridges. For the Jumpserver, it is
+required to pre-configure at least the admin_br bridge for the PXE/Admin network.
+For the targets, the bridges are created by the deploy script.
+
+**Note**: A virtual network "mcpcontrol" is always created. For baremetal deploys, the PXE bridge is
+used for baremetal node provisioning, while "mcpcontrol" is used to provision the infrastructure VMs only.
+
+
+Steps to Start the Automatic Deploy
 ===================================
 
 These steps are common both for virtual and baremetal deploys.
@@ -253,53 +386,129 @@ These steps are common both for virtual and baremetal deploys.
 
 #. Start the deploy script
 
+   Besides the basic options, there are other recommended deploy arguments:
+
+   - use the **-D** option to enable debug info
+   - use the **-S** option to point to a tmp dir where the disk images are saved. The images will be
+     re-used between deploys
+   - use **|& tee** to save the deploy log to a file
+
    .. code-block:: bash
 
        $ ci/deploy.sh -l \
         -p \
-        -b \
+        -b \
         -s \
-        -B
+        -B \
+        -D \
+        -S |& tee deploy.log
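+
+   Since the deploy log is saved via **|& tee**, it can be followed while the script is
+   running and inspected afterwards. The commands below are only an illustrative sketch
+   (the log file name is whatever was passed to tee):
+
+   .. code-block:: bash
+
+       # Follow a run started in another terminal
+       $ tail -f deploy.log
+
+       # Afterwards, scan the saved log for error/failure messages
+       $ grep -inE 'error|fail' deploy.log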
 
 Examples
 --------
 
 #. Virtual deploy
 
-To start a virtual deployment, it is required to have the `virtual` keyword while specifying the pod name
-to the installer script. It will create the required bridges and networks, configure Salt Master and install
-OpenStack.
+   To start a virtual deployment, it is required to have the `virtual` keyword
+   while specifying the pod name to the installer script.
 
-   .. code-block:: bash
+   It will create the required bridges and networks, configure Salt Master and
+   install OpenStack.
 
-       $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
-         -l ericsson \
-         -p virtual_kvm \
-         -s os-nosdn-nofeature-noha
+   .. code-block:: bash
 
-Once the deployment is complete, OpenStack Dashboard, Horizon is available at http://10.16.0.101:8078
-The administrator credentials are **admin** / **opnfv_secret**.
+       $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
+                      -l ericsson \
+                      -p virtual_kvm \
+                      -s os-nosdn-nofeature-noha \
+                      -D \
+                      -S /home/jenkins/tmpdir |& tee deploy.log
+
+   Once the deployment is complete, the OpenStack Dashboard (Horizon) is
+   available at http://<IP>:8078, e.g. http://10.16.0.101:8078.
+   The administrator credentials are **admin** / **opnfv_secret**.
 
 #. Baremetal deploy
 
-A x86 deploy on pod1 from Ericsson lab
+   An x86 deploy on pod2 from the Linux Foundation lab
 
-   .. code-block:: bash
+   .. code-block:: bash
 
-       $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
-         -l ericsson \
-         -p pod1 \
-         -s os-nosdn-nofeature-ha \
-         -B pxebr
+       $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
+                      -l lf \
+                      -p pod2 \
+                      -s os-nosdn-nofeature-ha \
+                      -B pxebr,br-ctl \
+                      -D \
+                      -S /home/jenkins/tmpdir |& tee deploy.log
 
-An aarch64 deploy on pod5 from Arm lab
+   .. figure:: img/lf_pod2.png
+      :align: center
+      :alt: Fuel@OPNFV LF POD2 Network Layout
+
+      Fuel@OPNFV LF POD2 Network Layout
+
+   Once the deployment is complete, the SaltStack Deployment Documentation is
+   available at http://<IP>:8090, e.g. http://172.30.10.103:8090.
+
+   An aarch64 deploy on pod5 from the Arm lab
+
+   .. code-block:: bash
+
+       $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
+                      -l arm \
+                      -p pod5 \
+                      -s os-nosdn-nofeature-ha \
+                      -D \
+                      -S /home/jenkins/tmpdir |& tee deploy.log
+
+   .. figure:: img/arm_pod5.png
+      :align: center
+      :alt: Fuel@OPNFV ARM POD5 Network Layout
+
+      Fuel@OPNFV ARM POD5 Network Layout
+
+Pod Descriptor Files
+====================
+
+Descriptor files provide the installer with an abstraction of the target pod
+with all its hardware characteristics and required parameters. This information
+is split into two different files:
+the Pod Descriptor File (PDF) and the Installer Descriptor File (IDF).
+
+
+The Pod Descriptor File is a hardware and network description of the pod
+infrastructure. The information is modeled under a yaml structure.
+A reference file with the expected yaml structure is available at
+*mcp/config/labs/local/pod1.yaml*.
+
+A common network section describes all the internal and provider networks
+assigned to the pod. Each network is expected to have a VLAN tag, IP subnet and
+attached interface on the boards. Untagged VLANs shall be defined as "native".
+
+The hardware description is arranged into a main "jumphost" node and a "nodes"
+set for all target boards. For each node the following characteristics
+are defined:
+
+- Node parameters including CPU features and total memory.
+- A list of available disks.
+- Remote management parameters.
+- Network interfaces list including MAC address, speed and advanced features.
+- IP list of fixed IPs for the node.
+
+**Note**: the fixed IPs are ignored by the MCP installer script; it will instead
+assign IPs based on the network ranges defined under the pod network configuration.
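+
+Before feeding a new PDF to the installer it is worth checking that the file at least
+parses as valid yaml. The one-liner below is only a convenience sketch (it assumes a
+Python interpreter with the PyYAML module is available on the Jumpserver) and is shown
+against the reference file; point it at your own PDF instead:
+
+.. code-block:: bash
+
+    $ python -c "import sys, yaml; yaml.safe_load(open(sys.argv[1]))" \
+        mcp/config/labs/local/pod1.yaml && echo "valid yaml"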
+
+
+The Installer Descriptor File extends the PDF with pod related parameters
+required by the installer. This information may differ per installer type
+and it is not considered part of the pod infrastructure. The Fuel installer relies
+on the IDF model to map the networks to the bridges on the foundation node and
+to set up all node NICs by defining the expected OS device name and bus address.
-
-   .. code-block:: bash
-
-       $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
-         -l arm \
-         -p pod5 \
-         -s os-nosdn-nofeature-ha \
-         -B pxebr
+The file follows a yaml structure and a "fuel" section is expected. Contents and
+references must be aligned with the PDF file. The IDF file must be named after
+the PDF with the prefix "idf-". A reference file with the expected structure
+is available at *mcp/config/labs/local/idf-pod1.yaml*.
 
 =============