X-Git-Url: https://gerrit.opnfv.org/gerrit/gitweb?a=blobdiff_plain;f=docs%2Frelease%2Finstallation%2Finstallation.instruction.rst;h=9aaebdd7c8c36326cde0ff551ea9da5ff031ef9e;hb=25bf7306d1d6f66a034c1a60037c0f9b7342c0ac;hp=bec26ae1553d53855ac74b12f70f5fb53b582e39;hpb=5765c5a1b1459c815c2b871b00c36f8fcc75095d;p=fuel.git diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst index bec26ae15..9aaebdd7c 100644 --- a/docs/release/installation/installation.instruction.rst +++ b/docs/release/installation/installation.instruction.rst @@ -6,80 +6,41 @@ Abstract ======== -This document describes how to install the Danube release of +This document describes how to install the Fraser release of OPNFV when using Fuel as a deployment tool, covering its usage, limitations, dependencies and required system resources. +This is an unified documentation for both x86_64 and aarch64 +architectures. All information is common for both architectures +except when explicitly stated. ============ Introduction ============ This document provides guidelines on how to install and -configure the Danube release of OPNFV when using Fuel as a +configure the Fraser release of OPNFV when using Fuel as a deployment tool, including required software and hardware configurations. -Although the available installation options give a high degree of -freedom in how the system is set-up, including architecture, services +Although the available installation options provide a high degree of +freedom in how the system is set up, including architecture, services and features, etc., said permutations may not provide an OPNFV -compliant reference architecture. This instruction provides a -step-by-step guide that results in an OPNFV Danube compliant +compliant reference architecture. This document provides a +step-by-step guide that results in an OPNFV Fraser compliant deployment. -The audience of this document is assumed to have good knowledge in +The audience of this document is assumed to have good knowledge of networking and Unix/Linux administration. ======= Preface ======= -Before starting the installation of the Danube release of +Before starting the installation of the Fraser release of OPNFV, using Fuel as a deployment tool, some planning must be done. -Retrieving the ISO image -======================== - -First of all, the Fuel deployment ISO image needs to be retrieved, the -Fuel .iso image of the Danube release can be found at `OPNFV Downloads `_. - -Building the ISO image -====================== - -Alternatively, you may build the Fuel .iso from source by cloning the -opnfv/fuel git repository. To retrieve the repository for the Danube -release use the following command: - -.. code-block:: bash - - $ git clone https://gerrit.opnfv.org/gerrit/fuel - -Check-out the Danube release tag to set the HEAD to the -baseline required to replicate the Danube release: - -.. code-block:: bash - - $ git checkout danube.1.0 - -Go to the fuel directory and build the .iso: - -.. 
code-block:: bash - - $ cd fuel/build; make all - -For more information on how to build, please see :ref:`Build instruction for Fuel\@OPNFV ` - -Other preparations -================== - -Next, familiarize yourself with Fuel by reading the following documents: - -- `Fuel Installation Guide `_ - -- `Fuel User Guide `_ - -- `Fuel Developer Guide `_ - -- `Fuel Plugin Developers Guide `_ +Preparations +============ Prior to installation, a number of deployment specific parameters must be collected, those are: @@ -103,44 +64,77 @@ Prior to installation, a number of deployment specific parameters must be collec This information will be needed for the configuration procedures provided in this document. -===================== -Hardware requirements -===================== - -The following minimum hardware requirements must be met for the -installation of Danube using Fuel: - -+--------------------+------------------------------------------------------+ -| **HW Aspect** | **Requirement** | -| | | -+====================+======================================================+ -| **# of nodes** | Minimum 5 (3 for non redundant deployment): | -| | | -| | - 1 Fuel deployment master (may be virtualized) | -| | | -| | - 3(1) Controllers (1 colocated mongo/ceilometer | -| | role, 2 Ceph-OSD roles) | -| | | -| | - 1 Compute (1 co-located Ceph-OSD role) | -| | | -+--------------------+------------------------------------------------------+ -| **CPU** | Minimum 1 socket x86_AMD64 with Virtualization | -| | support | -+--------------------+------------------------------------------------------+ -| **RAM** | Minimum 16GB/server (Depending on VNF work load) | -| | | -+--------------------+------------------------------------------------------+ -| **Disk** | Minimum 256GB 10kRPM spinning disks | -| | | -+--------------------+------------------------------------------------------+ -| **Networks** | 4 Tagged VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) | -| | | -| | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network | -| | | -| | Note: These can be allocated to a single NIC - | -| | or spread out over multiple NICs as your hardware | -| | supports. 
| -+--------------------+------------------------------------------------------+ +========================================= +Hardware Requirements for Virtual Deploys +========================================= + +The following minimum hardware requirements must be met for the virtual +installation of Fraser using Fuel: + ++----------------------------+--------------------------------------------------------+ +| **HW Aspect** | **Requirement** | +| | | ++============================+========================================================+ +| **1 Jumpserver** | A physical node (also called Foundation Node) that | +| | will host a Salt Master VM and each of the VM nodes in | +| | the virtual deploy | ++----------------------------+--------------------------------------------------------+ +| **CPU** | Minimum 1 socket with Virtualization support | ++----------------------------+--------------------------------------------------------+ +| **RAM** | Minimum 32GB/server (Depending on VNF work load) | ++----------------------------+--------------------------------------------------------+ +| **Disk** | Minimum 100GB (SSD or SCSI (15krpm) highly recommended)| ++----------------------------+--------------------------------------------------------+ + + +=========================================== +Hardware Requirements for Baremetal Deploys +=========================================== + +The following minimum hardware requirements must be met for the baremetal +installation of Fraser using Fuel: + ++-------------------------+------------------------------------------------------+ +| **HW Aspect** | **Requirement** | +| | | ++=========================+======================================================+ +| **# of nodes** | Minimum 5 | +| | | +| | - 3 KVM servers which will run all the controller | +| | services | +| | | +| | - 2 Compute nodes | +| | | ++-------------------------+------------------------------------------------------+ +| **CPU** | Minimum 1 socket with Virtualization support | ++-------------------------+------------------------------------------------------+ +| **RAM** | Minimum 16GB/server (Depending on VNF work load) | ++-------------------------+------------------------------------------------------+ +| **Disk** | Minimum 256GB 10kRPM spinning disks | ++-------------------------+------------------------------------------------------+ +| **Networks** | 4 VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) - can be | +| | a mix of tagged/native | +| | | +| | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network | +| | | +| | Note: These can be allocated to a single NIC - | +| | or spread out over multiple NICs | ++-------------------------+------------------------------------------------------+ +| **1 Jumpserver** | A physical node (also called Foundation Node) that | +| | hosts the Salt Master and MaaS VMs | ++-------------------------+------------------------------------------------------+ +| **Power management** | All targets need to have power management tools that | +| | allow rebooting the hardware and setting the boot | +| | order (e.g. IPMI) | ++-------------------------+------------------------------------------------------+ + +.. NOTE:: + + All nodes including the Jumpserver must have the same architecture (either x86_64 or aarch64). + +.. NOTE:: + + For aarch64 deployments an UEFI compatible firmware with PXE support is needed (e.g. EDK2). 
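+
+As a quick sanity check against the CPU requirement above, hardware virtualization
+support and the machine architecture can be verified on each candidate node before
+starting a deploy. This is only an illustrative sketch using common Linux tooling
+(``virt-host-validate`` ships with the libvirt client packages); it is not part of
+the official requirements:
+
+.. code-block:: bash
+
+    $ uname -m                             # expect x86_64 or aarch64, identical on all nodes
+    $ egrep -c '(vmx|svm)' /proc/cpuinfo   # x86_64 only: non-zero means VT-x/AMD-V is exposed
+    $ virt-host-validate qemu              # reports whether KVM acceleration is usable
+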
=============================== Help with Hardware Requirements @@ -148,12 +142,13 @@ Help with Hardware Requirements Calculate hardware requirements: -For information on compatible hardware types available for use, please see `Fuel OpenStack Hardware Compatibility List `_. +For information on compatible hardware types available for use, +please see `Fuel OpenStack Hardware Compatibility List `_ When choosing the hardware on which you will deploy your OpenStack environment, you should think about: -- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPU per virtual machine. +- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine. - Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node. @@ -162,7 +157,7 @@ environment, you should think about: - Networking -- Depends on the Choose Network Topology, the network bandwidth per virtual machine, and network storage. ================================================ -Top of the rack (TOR) Configuration requirements +Top of the Rack (TOR) Configuration Requirements ================================================ The switching infrastructure provides connectivity for the OPNFV @@ -182,441 +177,427 @@ the Fuel OPNFV reference platform. All the networks involved in the OPNFV infrastructure as well as the provider networks and the private tenant VLANs needs to be manually configured. -Manual configuration of the Danube hardware platform should +Manual configuration of the Fraser hardware platform should be carried out according to the `OPNFV Pharos Specification `_. -========================================== -OPNFV Software installation and deployment -========================================== - -This section describes the installation of the OPNFV installation -server (Fuel master) as well as the deployment of the full OPNFV -reference platform stack across a server cluster. - -Install Fuel master -=================== - -#. Mount the Danube Fuel ISO file/media as a boot device to the jump host server. - -#. Reboot the jump host to establish the Fuel server. - - - The system now boots from the ISO image. - - - Select "Fuel Install (Static IP)" (See figure below) - - - Press [Enter]. - - .. figure:: img/grub-1.png - -#. Wait until the Fuel setup screen is shown (Note: This can take up to 30 minutes). - -#. In the "Fuel User" section - Confirm/change the default password (See figure below) - - - Enter "admin" in the Fuel password input - - - Enter "admin" in the Confirm password input - - - Select "Check" and press [Enter] - - .. figure:: img/fuelmenu1.png - -#. In the "Network Setup" section - Configure DHCP/Static IP information for your FUEL node - For example, ETH0 is 10.20.0.2/24 for FUEL booting and ETH1 is DHCP in your corporate/lab network (see figure below). - - - Configure eth1 or other network interfaces here as well (if you have them present on your FUEL server). - - .. figure:: img/fuelmenu2.png - -#. In the "PXE Setup" section (see figure below) - Change the following fields to appropriate values (example below): - - - DHCP Pool Start 10.20.0.4 - - - DHCP Pool End 10.20.0.254 - - - DHCP Pool Gateway 10.20.0.2 (IP address of Fuel node) - - .. figure:: img/fuelmenu3.png - -#. 
In the "DNS & Hostname" section (see figure below) - Change the following fields to appropriate values: - - - Hostname - - - Domain - - - Search Domain - - - External DNS - - - Hostname to test DNS - - - Select and press [Enter] - - .. figure:: img/fuelmenu4.png - - -#. OPTION TO ENABLE PROXY SUPPORT - In the "Bootstrap Image" section (see figure below), edit the following fields to define a proxy. (**NOTE:** cannot be used in tandem with local repository support) - - - Navigate to "HTTP proxy" and enter your http proxy address - - - Select and press [Enter] +============================ +OPNFV Software Prerequisites +============================ - .. figure:: img/fuelmenu5.png +The Jumpserver node should be pre-provisioned with an operating system, +according to the Pharos specification. Relevant network bridges should +also be pre-configured (e.g. admin_br, mgmt_br, public_br). -#. In the "Time Sync" section (see figure below) - Change the following fields to appropriate values: +- The admin bridge (admin_br) is mandatory for the baremetal nodes PXE booting during Fuel installation. +- The management bridge (mgmt_br) is required for testing suites (e.g. functest/yardstick), it is + suggested to pre-configure it for debugging purposes. +- The public bridge (public_br) is also nice to have for debugging purposes, but not mandatory. - - NTP Server 1 +The user running the deploy script on the Jumpserver should belong to ``sudo`` and ``libvirt`` groups, +and have passwordless sudo access. - - NTP Server 2 +The following example adds the groups to the user ``jenkins`` - - NTP Server 3 - - .. figure:: img/fuelmenu6.png - -#. Start the installation. - - - Select Quit Setup and press Save and Quit. - - - The installation will now start, wait until the login screen is shown. - -Boot the Node Servers -===================== - -After the Fuel Master node has rebooted from the above steps and is at -the login prompt, you should boot the Node Servers (Your -Compute/Control/Storage blades, nested or real) with a PXE booting -scheme so that the FUEL Master can pick them up for control. - -#. Enable PXE booting - - - For every controller and compute server: enable PXE Booting as the first boot device in the BIOS boot order menu, and hard disk as the second boot device in the same menu. - -#. Reboot all the control and compute blades. - -#. Wait for the availability of nodes showing up in the Fuel GUI. - - - Connect to the FUEL UI via the URL provided in the Console (default: https://10.20.0.2:8443) - - - Wait until all nodes are displayed in top right corner of the Fuel GUI: Total nodes and Unallocated nodes (see figure below). - - .. figure:: img/nodes.png - -Install additional Plugins/Features on the FUEL node -==================================================== - -#. SSH to your FUEL node (e.g. root@10.20.0.2 pwd: r00tme) - -#. Select wanted plugins/features from the /opt/opnfv/ directory. - -#. Install the wanted plugin with the command - - .. code-block:: bash - - $ fuel plugins --install /opt/opnfv/-..rpm - - Expected output (see figure below): - - .. code-block:: bash - - Plugin ....... was successfully installed. - - .. figure:: img/plugin_install.png - -Create an OpenStack Environment -=============================== - -#. Connect to Fuel WEB UI with a browser (default: https://10.20.0.2:8443) (login: admin/admin) - -#. Create and name a new OpenStack environment, to be installed. - - .. figure:: img/newenv.png - -#. Select "" and press - -#. Select "compute virtulization method". 
- - - Select "QEMU-KVM as hypervisor" and press - -#. Select "network mode". - - - Select "Neutron with ML2 plugin" - - - Select "Neutron with tunneling segmentation" (Required when using the ODL or ONOS plugins) - - - Press - -#. Select "Storage Back-ends". - - - Select "Ceph for block storage" and press - -#. Select "additional services" you wish to install. - - - Check option "Install Ceilometer and Aodh" and press - -#. Create the new environment. - - - Click Button - -Configure the network environment -================================= - -#. Open the environment you previously created. - -#. Open the networks tab and select the "default" Node Networks group to on the left pane (see figure below). - - .. figure:: img/network.png - -#. Update the Public network configuration and change the following fields to appropriate values: - - - CIDR to +.. code-block:: bash - - IP Range Start to + $ sudo usermod -aG sudo jenkins + $ sudo usermod -aG libvirt jenkins + $ reboot + $ groups + jenkins sudo libvirt - - IP Range End to + $ sudo visudo + ... + %jenkins ALL=(ALL) NOPASSWD:ALL - - Gateway to +The folder containing the temporary deploy artifacts (``/home/jenkins/tmpdir`` in the examples below) +needs to have mask 777 in order for libvirt to be able to use them. - - Check . +.. code-block:: bash - - Set appropriate VLAN id. + $ mkdir -p -m 777 /home/jenkins/tmpdir -#. Update the Storage Network Configuration +For an AArch64 Jumpserver, the ``libvirt`` minimum required version is 3.x, 3.5 or newer highly recommended. +While not mandatory, upgrading the kernel and QEMU on the Jumpserver is also highly recommended +(especially on AArch64 Jumpservers). - - Set CIDR to appropriate value (default 192.168.1.0/24) +For CentOS 7.4 (AArch64), distro provided packages are already new enough. +For Ubuntu 16.04 (arm64), distro packages are too old and 3rd party repositories should be used. +For convenience, Armband provides a DEB repository holding all the required packages. - - Set IP Range Start to appropriate value (default 192.168.1.1) +To add and enable the Armband repository on an Ubuntu 16.04 system, +create a new sources list file ``/apt/sources.list.d/armband.list`` with the following contents: - - Set IP Range End to appropriate value (default 192.168.1.254) +.. code-block:: bash - - Set vlan to appropriate value (default 102) + $ cat /etc/apt/sources.list.d/armband.list + //for OpenStack Queens release + deb http://linux.enea.com/mcp-repos/queens/xenial queens-armband main -#. Update the Management network configuration. + $ apt-get update - - Set CIDR to appropriate value (default 192.168.0.0/24) +Fuel@OPNFV has been validated by CI using the following distributions +installed on the Jumpserver: - - Set IP Range Start to appropriate value (default 192.168.0.1) +- CentOS 7 (recommended by Pharos specification); +- Ubuntu Xenial; - - Set IP Range End to appropriate value (default 192.168.0.254) +.. WARNING:: - - Check . + The install script expects ``libvirt`` to be already running on the Jumpserver. + In case ``libvirt`` packages are missing, the script will install them; but + depending on the OS distribution, the user might have to start the ``libvirtd`` + service manually, then run the deploy script again. Therefore, it + is recommended to install libvirt-bin explicitly on the Jumpserver before the deployment. - - Set appropriate VLAN id. (default 101) +.. NOTE:: -#. 
Update the Private Network Information + It is also recommended to install the newer kernel on the Jumpserver before the deployment. - - Set CIDR to appropriate value (default 192.168.2.0/24 +.. WARNING:: - - Set IP Range Start to appropriate value (default 192.168.2.1) + The install script will automatically install the rest of required distro package + dependencies on the Jumpserver, unless explicitly asked not to (via ``-P`` deploy arg). + This includes Python, QEMU, libvirt etc. - - Set IP Range End to appropriate value (default 192.168.2.254) +.. WARNING:: - - Check . + The install script will alter Jumpserver sysconf and disable ``net.bridge.bridge-nf-call``. - - Set appropriate VLAN tag (default 103) +.. code-block:: bash -#. Select the "Neutron L3" Node Networks group on the left pane. + $ apt-get install linux-image-generic-hwe-16.04-edge libvirt-bin - .. figure:: img/neutronl3.png -#. Update the Floating Network configuration. +========================================== +OPNFV Software Installation and Deployment +========================================== - - Set the Floating IP range start (default 172.16.0.130) +This section describes the process of installing all the components needed to +deploy the full OPNFV reference platform stack across a server cluster. + +The installation is done with Mirantis Cloud Platform (MCP), which is based on +a reclass model. This model provides the formula inputs to Salt, to make the deploy +automatic based on deployment scenario. +The reclass model covers: - - Set the Floating IP range end (default 172.16.0.254) + - Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01) + - OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002) + - Infrastructure components to install (software packages, services etc.) + - OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them - - Set the Floating network name (default admin_floating_net) -#. Update the Internal Network configuration. +Automatic Installation of a Virtual POD +======================================= - - Set Internal network CIDR to an appropriate value (default 192.168.111.0/24) +For virtual deploys all the targets are VMs on the Jumpserver. The deploy script will: - - Set Internal network gateway to an appropriate value + - Create a Salt Master VM on the Jumpserver which will drive the installation + - Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network) + - Install OpenStack on the targets + - Leverage Salt to install & configure OpenStack services - - Set the Internal network name (default admin_internal_net) +.. figure:: img/fuel_virtual.png + :align: center + :alt: Fuel@OPNFV Virtual POD Network Layout Examples -#. Update the Guest OS DNS servers. 
+ Fuel@OPNFV Virtual POD Network Layout Examples - - Set Guest OS DNS Server values appropriately + +-----------------------+------------------------------------------------------------------------+ + | cfg01 | Salt Master VM | + +-----------------------+------------------------------------------------------------------------+ + | ctl01 | Controller VM | + +-----------------------+------------------------------------------------------------------------+ + | cmp001/cmp002 | Compute VMs | + +-----------------------+------------------------------------------------------------------------+ + | gtw01 | Gateway VM with neutron services (dhcp agent, L3 agent, metadata, etc) | + +-----------------------+------------------------------------------------------------------------+ + | odl01 | VM on which ODL runs (for scenarios deployed with ODL) | + +-----------------------+------------------------------------------------------------------------+ -#. Save Settings. -#. Select the "Other" Node Networks group on the left pane (see figure below). +In this figure there are examples of two virtual deploys: + - Jumphost 1 has only virsh bridges, created by the deploy script + - Jumphost 2 has a mix of Linux and virsh bridges; When Linux bridge exists for a specified network, + the deploy script will skip creating a virsh bridge for it - .. figure:: img/other.png +.. NOTE:: -#. Update the Public network assignment. + A virtual network ``mcpcontrol`` is always created for initial connection of the VMs on Jumphost. - - Check the box for "Assign public network to all nodes" (Required by OpenDaylight) -#. Update Host OS DNS Servers. +Automatic Installation of a Baremetal POD +========================================= - - Provide the DNS server settings +The baremetal installation process can be done by editing the information about +hardware and environment in the reclass files, or by using the files Pod Descriptor +File (PDF) and Installer Descriptor File (IDF) as described in the OPNFV Pharos project. +These files contain all the information about the hardware and network of the deployment +that will be fed to the reclass model during deployment. -#. Update Host OS NTP Servers. +The installation is done automatically with the deploy script, which will: - - Provide the NTP server settings + - Create a Salt Master VM on the Jumpserver which will drive the installation + - Create a MaaS Node VM on the Jumpserver which will provision the targets + - Install OpenStack on the targets + - Leverage MaaS to provision baremetal nodes with the operating system + - Leverage Salt to configure the operating system on the baremetal nodes + - Leverage Salt to install & configure OpenStack services -Select Hypervisor type -====================== +.. figure:: img/fuel_baremetal.png + :align: center + :alt: Fuel@OPNFV Baremetal POD Network Layout Example -#. In the FUEL UI of your Environment, click the "Settings" Tab + Fuel@OPNFV Baremetal POD Network Layout Example -#. 
Select "Compute" on the left side pane (see figure below) + +-----------------------+---------------------------------------------------------+ + | cfg01 | Salt Master VM | + +-----------------------+---------------------------------------------------------+ + | mas01 | MaaS Node VM | + +-----------------------+---------------------------------------------------------+ + | kvm01..03 | Baremetals which hold the VMs with controller functions | + +-----------------------+---------------------------------------------------------+ + | cmp001/cmp002 | Baremetal compute nodes | + +-----------------------+---------------------------------------------------------+ + | prx01/prx02 | Proxy VMs for Nginx | + +-----------------------+---------------------------------------------------------+ + | msg01..03 | RabbitMQ Service VMs | + +-----------------------+---------------------------------------------------------+ + | dbs01..03 | MySQL service VMs | + +-----------------------+---------------------------------------------------------+ + | mdb01..03 | Telemetry VMs | + +-----------------------+---------------------------------------------------------+ + | odl01 | VM on which ODL runs (for scenarios deployed with ODL) | + +-----------------------+---------------------------------------------------------+ + | Tenant VM | VM running in the cloud | + +-----------------------+---------------------------------------------------------+ - - Check the KVM box and press "Save settings" +In the baremetal deploy all bridges but "mcpcontrol" are Linux bridges. For the Jumpserver, it is +required to pre-configure at least the admin_br bridge for the PXE/Admin. +For the targets, the bridges are created by the deploy script. - .. figure:: img/compute.png +.. NOTE:: -Enable Plugins -============== + A virtual network ``mcpcontrol`` is always created for initial connection of the VMs on Jumphost. -#. In the FUEL UI of your Environment, click the "Settings" Tab -#. Select Other on the left side pane (see figure below) +Steps to Start the Automatic Deploy +=================================== - - Enable and configure the plugins of your choice +These steps are common both for virtual and baremetal deploys. - .. figure:: img/plugins.png +#. Clone the Fuel code from gerrit -Allocate nodes to environment and assign functional roles -========================================================= + For x86_64 -#. Click on the "Nodes" Tab in the FUEL WEB UI (see figure below). + .. code-block:: bash - .. figure:: img/addnodes.png + $ git clone https://git.opnfv.org/fuel + $ cd fuel -#. Assign roles (see figure below). + For aarch64 - - Click on the <+Add Nodes> button + .. code-block:: bash - - Check , and optionally an SDN Controller role (OpenDaylight controller/ONOS) in the "Assign Roles" Section. + $ git clone https://git.opnfv.org/armband + $ cd armband - - Check one node which you want to act as a Controller from the bottom half of the screen +#. Checkout the Fraser release - - Click . + .. code-block:: bash - - Click on the <+Add Nodes> button + $ git checkout opnfv-6.2.1 - - Check the and roles. +#. Start the deploy script - - Check the two next nodes you want to act as Controllers from the bottom half of the screen + Besides the basic options, there are other recommended deploy arguments: - - Click + - use ``-D`` option to enable the debug info + - use ``-S`` option to point to a tmp dir where the disk images are saved. 
The images will be + re-used between deploys + - use ``|& tee`` to save the deploy log to a file - - Click on <+Add Nodes> button + .. code-block:: bash - - Check the and roles. + $ ci/deploy.sh -l \ + -p \ + -b \ + -s \ + -D \ + -S |& tee deploy.log - - Check the Nodes you want to act as Computes from the bottom half of the screen +.. NOTE:: - - Click . + The deployment uses the OPNFV Pharos project as input (PDF and IDF files) + for hardware and network configuration of all current OPNFV PODs. + When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override + the path for the labconfig directory structure containing the PDF and IDF (see below). - .. figure:: img/computelist.png +Examples +-------- +#. Virtual deploy -#. Configure interfaces (see figure below). + To start a virtual deployment, it is required to have the **virtual** keyword + while specifying the pod name to the installer script. - - Check Select to select all allocated nodes + It will create the required bridges and networks, configure Salt Master and + install OpenStack. - - Click + .. code-block:: bash - - Assign interfaces (bonded) for mgmt-, admin-, private-, public- and storage networks + $ ci/deploy.sh -l ericsson \ + -p virtual3 \ + -s os-nosdn-nofeature-noha \ + -D \ + -S /home/jenkins/tmpdir |& tee deploy.log - - Click + Once the deployment is complete, the OpenStack Dashboard, Horizon, is + available at ``http://:8078`` + The administrator credentials are **admin** / **opnfv_secret**. - .. figure:: img/interfaceconf.png + A simple (and generic) sample PDF/IDF set of configuration files may + be used for virtual deployments by setting lab/POD name to ``local-virtual1``. + This sample configuration is x86_64 specific and hardcodes certain parameters, + like public network address space, so a dedicated PDF/IDF is highly recommended. + .. code-block:: bash -Target specific configuration -============================= + $ ci/deploy.sh -l local \ + -p virtual1 \ + -s os-nosdn-nofeature-noha \ + -D \ + -S /home/jenkins/tmpdir |& tee deploy.log -#. Set up targets for provisioning with non-default "Offloading Modes" +#. Baremetal deploy - Some target nodes may require additional configuration after they are - PXE booted (bootstrapped); the most frequent changes are in defaults - for ethernet devices' "Offloading Modes" settings (e.g. some targets' - ethernet drivers may strip VLAN traffic by default). + A x86 deploy on pod2 from Linux Foundation lab - If your target ethernet drivers have wrong "Offloading Modes" defaults, - in "Configure interfaces" page (described above), expand affected - interface's "Offloading Modes" and [un]check the relevant settings - (see figure below): + .. code-block:: bash - .. figure:: img/offloadingmodes.png + $ ci/deploy.sh -l lf \ + -p pod2 \ + -s os-nosdn-nofeature-ha \ + -D \ + -S /home/jenkins/tmpdir |& tee deploy.log -#. Set up targets for "Verify Networks" with non-default "Offloading Modes" + .. figure:: img/lf_pod2.png + :align: center + :alt: Fuel@OPNFV LF POD2 Network Layout - **NOTE**: Check *Reference 15* for an updated and comprehensive list of - known issues and/or limitations, including "Offloading Modes" not being - applied during "Verify Networks" step. + Fuel@OPNFV LF POD2 Network Layout - Setting custom "Offloading Modes" in Fuel GUI will only apply those settings - during provisiong and **not** during "Verify Networks", so if your targets - need this change, you have to apply "Offloading Modes" settings by hand - to bootstrapped nodes. 
+ An aarch64 deploy on pod5 from Arm lab - **E.g.**: Our driver has "rx-vlan-filter" default "on" (expected "off") on - the Openstack interface(s) "eth1", preventing VLAN traffic from passing - during "Verify Networks". + .. code-block:: bash - - From Fuel master console identify target nodes admin IPs (see figure below): + $ ci/deploy.sh -l arm \ + -p pod5 \ + -s os-nosdn-nofeature-ha \ + -D \ + -S /home/jenkins/tmpdir |& tee deploy.log - .. code-block:: bash + .. figure:: img/arm_pod5.png + :align: center + :alt: Fuel@OPNFV ARM POD5 Network Layout - $ fuel nodes + Fuel@OPNFV ARM POD5 Network Layout - .. figure:: img/fuelconsole1.png + Once the deployment is complete, the SaltStack Deployment Documentation is + available at ``http://:8090``. - - SSH into each of the target nodes and disable "rx-vlan-filter" on the - affected physical interface(s) allocated for OpenStack traffic (eth1): + When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override + the path for the labconfig directory structure containing the PDF and IDF. - .. code-block:: bash + .. code-block:: bash - $ ssh root@10.20.0.6 ethtool -K eth1 rx-vlan-filter off + $ ci/deploy.sh -b file:// \ + -l \ + -p \ + -s \ + -D \ + -S |& tee deploy.log - - Repeat the step above for all affected nodes/interfaces in the POD. + - is the absolute path to a local directory, populated + similar to Pharos, i.e. PDF/IDF reside in ``/labs/`` + - is the same as the directory in the path above + - is the name used for the PDF (``.yaml``) and IDF (``idf-.yaml``) files -Verify Networks -=============== -It is important that the Verify Networks action is performed as it will verify -that communicate works for the networks you have setup, as well as check that -packages needed for a successful deployment can be fetched. -#. From the FUEL UI in your Environment, Select the Networks Tab and select "Connectivity check" on the left pane (see figure below) +Pod and Installer Descriptor Files +================================== - - Select +Descriptor files provide the installer with an abstraction of the target pod +with all its hardware characteristics and required parameters. This information +is split into two different files: +Pod Descriptor File (PDF) and Installer Descriptor File (IDF). - - Continue to fix your topology (physical switch, etc) until the "Verification Succeeded" and "Your network is configured correctly" message is shown +The Pod Descriptor File is a hardware description of the pod +infrastructure. The information is modeled under a yaml structure. +A reference file with the expected yaml structure is available at +``mcp/config/labs/local/pod1.yaml``. - .. figure:: img/verifynet.png +The hardware description is arranged into a main "jumphost" node and a "nodes" +set for all target boards. For each node the following characteristics +are defined: -Deploy Your Environment -======================= +- Node parameters including CPU features and total memory. +- A list of available disks. +- Remote management parameters. +- Network interfaces list including mac address, speed, advanced features and name. -#. Deploy the environment. +.. NOTE:: - - In the Fuel GUI, click on the "Dashboard" Tab. + The fixed IPs are ignored by the MCP installer script and it will instead + assign based on the network ranges defined in IDF. - - Click on in the "Ready to Deploy?" section +The Installer Descriptor File extends the PDF with pod related parameters +required by the installer. 
This information may differ per each installer type +and it is not considered part of the pod infrastructure. +The IDF file must be named after the PDF with the prefix "idf-". A reference file with the expected +structure is available at ``mcp/config/labs/local/idf-pod1.yaml``. - - Examine any information notice that pops up and click +The file follows a yaml structure and two sections "net_config" and "fuel" are expected. - Wait for your deployment to complete, you can view the "Dashboard" - Tab to see the progress and status of your deployment. +The "net_config" section describes all the internal and provider networks +assigned to the pod. Each used network is expected to have a vlan tag, IP subnet and +attached interface on the boards. Untagged vlans shall be defined as "native". -========================= -Installation health-check -========================= +The "fuel" section defines several sub-sections required by the Fuel installer: -#. Perform system health-check (see figure below) +- jumphost: List of bridge names for each network on the Jumpserver. +- network: List of device name and bus address info of all the target nodes. + The order must be aligned with the order defined in PDF file. Fuel installer relies on the IDF model + to setup all node NICs by defining the expected device name and bus address. +- maas: Defines the target nodes commission timeout and deploy timeout. (optional) +- reclass: Defines compute parameter tuning, including huge pages, cpu pinning + and other DPDK settings. (optional) - - Click the "Health Check" tab inside your Environment in the FUEL Web UI +The following parameters can be defined in the IDF files under "reclass". Those value will +overwrite the default configuration values in Fuel repository: - - Check
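+
+To illustrate the overall IDF layout described above (the individual "reclass"
+override parameters are intentionally not listed here), a minimal, purely
+hypothetical skeleton is sketched below. All key names and values are
+illustrative assumptions; the authoritative structure is the reference file
+``mcp/config/labs/local/idf-pod1.yaml``:
+
+.. code-block:: yaml
+
+    idf:
+      net_config:
+        admin:                   # one block per network; PXE/Admin shown as an example
+          interface: 0           # index of the attached interface on the boards
+          vlan: native           # untagged VLANs are defined as "native"
+          network: 192.168.11.0  # IP subnet
+          mask: 24
+      fuel:
+        jumphost:
+          bridges:
+            admin: admin_br      # bridge name on the Jumpserver for each network
+        network: []              # per-node NIC device names and bus addresses (PDF order)
+        maas: {}                 # optional commission/deploy timeouts
+        reclass: {}              # optional compute tuning (huge pages, CPU pinning, DPDK)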