[docs] Refresh for Gambia release 71/64571/1
author    Alexandru Avadanii <Alexandru.Avadanii@enea.com>
          Fri, 28 Sep 2018 14:35:10 +0000 (16:35 +0200)
committer Cristina Pauna <cristina.pauna@enea.com>
          Tue, 6 Nov 2018 10:10:52 +0000 (10:10 +0000)
- s/Fuel@OPNFV/OPNFV Fuel/g;
- added README files for ci/scenarios/patches directories;
- refresh & simplify cluster overview diagrams;
- unify labels across docs;
- fix TOC numbering;
- remove local labs PDF/IDF files, as they are merely duplicates of
  Pharos files included as a git submodule;

JIRA: FUEL-397

Change-Id: I87f61938eeb67f13fd9205d5226a30f02e55d267
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
(cherry picked from commit 170d2d1c195d001d6ca786364aaf3c10e714ae36)

60 files changed:
CONTRIBUTING.rst [new file with mode: 0644]
INFO.yaml
README.rst
ci/README.rst
ci/deploy.sh
docs/index.rst
docs/release/developer-guide/img/README.rst [new symlink]
docs/release/developer-guide/img/detail_fuel.png [new file with mode: 0755]
docs/release/developer-guide/img/overview_fuel.png [new file with mode: 0755]
docs/release/developer-guide/img/overview_mcp.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_gerrit.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_git_blue.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_git_orange.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_git_red.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_jenkins.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_k8.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_os.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_salt.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_trigger.png [new file with mode: 0755]
docs/release/developer-guide/img/symbol_user.png [new file with mode: 0755]
docs/release/installation/img/README.rst
docs/release/installation/img/arm_pod5.png [deleted file]
docs/release/installation/img/fuel_baremetal.png [deleted file]
docs/release/installation/img/fuel_baremetal_ha.png [new file with mode: 0644]
docs/release/installation/img/fuel_baremetal_noha.png [new file with mode: 0644]
docs/release/installation/img/fuel_hybrid_noha.png [new file with mode: 0644]
docs/release/installation/img/fuel_virtual.png [deleted file]
docs/release/installation/img/fuel_virtual_noha.png [new file with mode: 0644]
docs/release/installation/img/lf_pod2.png [deleted file]
docs/release/installation/index.rst
docs/release/installation/installation.instruction.rst
docs/release/release-notes/index.rst
docs/release/release-notes/release-notes.rst
docs/release/scenarios/index.rst
docs/release/scenarios/os-nosdn-ovs-ha/index.rst
docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst
docs/release/scenarios/os-nosdn-ovs-noha/index.rst
docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst
docs/release/scenarios/os-nosdn-vpp-ha/index.rst
docs/release/scenarios/os-nosdn-vpp-ha/os-nosdn-vpp-ha.rst
docs/release/scenarios/os-nosdn-vpp-noha/index.rst
docs/release/scenarios/os-nosdn-vpp-noha/os-nosdn-vpp-noha.rst
docs/release/scenarios/os-ovn-nofeature-ha/index.rst
docs/release/scenarios/os-ovn-nofeature-ha/os-ovn-nofeature-ha.rst
docs/release/scenarios/os-ovn-nofeature-noha/index.rst
docs/release/scenarios/os-ovn-nofeature-noha/os-ovn-nofeature-noha.rst
docs/release/userguide/img/saltstack.png [deleted file]
docs/release/userguide/index.rst
docs/release/userguide/userguide.rst
mcp/config/labs/local/idf-pod1.yaml [deleted file]
mcp/config/labs/local/idf-virtual1.yaml [deleted file]
mcp/config/labs/local/pod1.yaml [deleted file]
mcp/config/labs/local/virtual1.yaml [deleted file]
mcp/config/scenario/README.rst
mcp/patches/Makefile
mcp/patches/README.rst
mcp/patches/config.mk
mcp/reclass/classes/cluster/README.rst
mcp/scripts/lib_template.sh
onboarding.txt [deleted file]

diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
new file mode 100644 (file)
index 0000000..226e0fc
--- /dev/null
@@ -0,0 +1,23 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) 2018 Ericsson AB, Mirantis Inc., Enea AB and others.
+
+OPNFV Fuel Contributing
+=======================
+
+Get on board by filling this out and submitting it for review.
+This is all optional; it's just to give you a taste of the workflow.
+
+| Full Name: <change me>
+| IRC Nick: <change me>
+| Linux Foundation ID: <change me>
+| Favourite Open Source project: <change me>
+| How would you like to help this project: <change me>
+
+References
+==========
+#. `OPNFV Contribution Guidelines`_
+#. `OPNFV Developer Getting Started`_
+
+.. _`OPNFV Contribution Guidelines`: https://wiki.opnfv.org/display/DEV/Contribution+Guidelines
+.. _`OPNFV Developer Getting Started`: https://wiki.opnfv.org/display/DEV/Developer+Getting+Started
index 5e8e042..541f6a5 100644 (file)
--- a/INFO.yaml
+++ b/INFO.yaml
@@ -1,5 +1,5 @@
 ---
-project: 'Fuel based OPNFV installer (Fuel@OPNFV)'
+project: 'Fuel based OPNFV installer (OPNFV Fuel)'
 project_creation_date: '2015.07.07'
 project_category: 'Integration and testing'
 lifecycle_state: 'Incubation'
index 583dd6e..b2b1b7b 100644 (file)
@@ -2,6 +2,61 @@
 .. SPDX-License-Identifier: CC-BY-4.0
 .. (c) 2018 Ericsson AB, Mirantis Inc., Enea AB and others.
 
+==========
+OPNFV Fuel
+==========
+
+|docs|
+
+.. |docs| image:: https://readthedocs.org/projects/opnfv-fuel/badge/?version=latest
+    :alt: OPNFV Fuel Documentation Status
+    :scale: 100%
+    :target: https://opnfv-fuel.readthedocs.io/en/latest/?badge=latest
+
+Description
+===========
+
+This is the OPNFV Gambia release that implements the deploy stage of the
+OPNFV CI pipeline via Fuel.
+
+Fuel is based on the `MCP`_ installation tool chain.
+More information available at `Mirantis Cloud Platform Documentation`_.
+
+The goal of the Fuel deployment process is to establish a lab-ready platform
+accelerating further development of the OPNFV infrastructure.
+
+Release Notes
+=============
+
+- `OPNFV Fuel Release Notes on RTD`_
+
+Installation
+============
+
+- `OPNFV Fuel Installation Instruction on RTD`_
+
+Usage
+=====
+
+- `OPNFV Fuel User Guide on RTD`_
+
+Scenarios
+=========
+
+- `OPNFV Fuel Scenarios on RTD`_
+
+Contributing
+============
+
+- `OPNFV Fuel Contributing`_
+
+Support
+=======
+
+- `OPNFV Fuel Wiki Page`_
+- `OPNFV Community Support mailing list`_
+- `OPNFV Technical Discussion mailing list`_
+
 LICENSE
 =======
 
@@ -9,7 +64,7 @@ LICENSE
 | (c) Jonas Bjurel (Ericsson AB)
 | Licensed under a Creative Commons Attribution 4.0 International License.
 | You should have received a copy of the license along with this work.
-| If not, see <http://creativecommons.org/licenses/by/4.0/>.
+| If not, see <https://creativecommons.org/licenses/by/4.0/>.
 
 Open Platform for NFV Project Software Licence
 ----------------------------------------------
@@ -17,7 +72,7 @@ Open Platform for NFV Project Software Licence
 | Any software developed by the "Open Platform for NFV" Project is licenced under the
 | Apache License, Version 2.0 (the "License");
 | you may not use the content of this software bundle except in compliance with the License.
-| You may obtain a copy of the License at <http://www.apache.org/licenses/LICENSE-2.0>
+| You may obtain a copy of the License at <https://www.apache.org/licenses/LICENSE-2.0>
 |
 | Unless required by applicable law or agreed to in writing, software
 | distributed under the License is distributed on an "AS IS" BASIS,
@@ -31,7 +86,7 @@ Open Platform for NFV Project Documentation Licence
 | Any documentation developed by the "Open Platform for NFV Project"
 | is licensed under a Creative Commons Attribution 4.0 International License.
 | You should have received a copy of the license along with this. If not,
-| see <http://creativecommons.org/licenses/by/4.0/>.
+| see <https://creativecommons.org/licenses/by/4.0/>.
 |
 | Unless required by applicable law or agreed to in writing, documentation
 | distributed under the License is distributed on an "AS IS" BASIS,
@@ -45,42 +100,77 @@ Other Applicable Upstream Project Licenses
 You may not use the content of this software bundle except in compliance with the
 Licenses as listed below (non-exhaustive list, depending on end-user config):
 
-+----------------+----------------------------------------------------------------+
-| **Component**  | **Licence**                                                    |
-+----------------+----------------------------------------------------------------+
-| OpenStack      | Apache License 2.0                                             |
-|                | https://www.apache.org/licenses/LICENSE-2.0                    |
-+----------------+----------------------------------------------------------------+
-| OpenDaylight   | Eclipse Public License 1.0                                     |
-|                | https://www.eclipse.org/legal/epl-v10.html                     |
-+----------------+----------------------------------------------------------------+
-| PostgreSQL     | PostgreSQL Licence:                                            |
-|                | http://opensource.org/licenses/postgresql                      |
-+----------------+----------------------------------------------------------------+
-| MongoDB        | GNU AGPL v3.0.                                                 |
-|                | http://www.fsf.org/licensing/licenses/agpl-3.0.html            |
-+----------------+----------------------------------------------------------------+
-| RabbitMQ       | Mozilla Public License                                         |
-|                | https://www.rabbitmq.com/mpl.html                              |
-+----------------+----------------------------------------------------------------+
-| Linux          | GPLv3                                                          |
-|                | https://www.gnu.org/copyleft/gpl.html                          |
-+----------------+----------------------------------------------------------------+
-| Docker         | Apache License 2.0                                             |
-|                | https://www.apache.org/licenses/LICENSE-2.0                    |
-+----------------+----------------------------------------------------------------+
-| OpenJDK/JRE    | GPL v2                                                         |
-|                | https://www.gnu.org/licenses/gpl-2.0.html                      |
-+----------------+----------------------------------------------------------------+
-| SaltStack      | Apache License 2.0                                             |
-|                | https://www.apache.org/licenses/LICENSE-2.0                    |
-+----------------+----------------------------------------------------------------+
-| salt-formula-* | Apache License 2.0                                             |
-|                | https://www.apache.org/licenses/LICENSE-2.0                    |
-+----------------+----------------------------------------------------------------+
-| reclass        | The Artistic Licence 2.0                                       |
-|                | http://www.perlfoundation.org/legal/licenses/artistic-2_0.html |
-+----------------+----------------------------------------------------------------+
-| MaaS           | GNU AGPL v3.0.                                                 |
-|                | http://www.fsf.org/licensing/licenses/agpl-3.0.html            |
-+----------------+----------------------------------------------------------------+
++------------------+-------------------------------+
+| **Component**    | **Licence**                   |
++------------------+-------------------------------+
+| `OpenStack`_     | `Apache License 2.0`_         |
++------------------+-------------------------------+
+| `OpenDaylight`_  | `Eclipse Public License 1.0`_ |
++------------------+-------------------------------+
+| `PostgreSQL`_    | `PostgreSQL Licence`_         |
++------------------+-------------------------------+
+| `MongoDB`_       | `GNU AGPL v3.0`_              |
++------------------+-------------------------------+
+| `RabbitMQ`_      | `Mozilla Public License`_     |
++------------------+-------------------------------+
+| `Linux`_         | `GPL v3`_                     |
++------------------+-------------------------------+
+| `Docker`_        | `Apache License 2.0`_         |
++------------------+-------------------------------+
+| `OpenJDK`_/JRE   | `GPL v2`_                     |
++------------------+-------------------------------+
+| `SaltStack`_     | `Apache License 2.0`_         |
++------------------+-------------------------------+
+| `salt-formulas`_ | `Apache License 2.0`_         |
++------------------+-------------------------------+
+| `reclass`_       | `The Artistic Licence 2.0`_   |
++------------------+-------------------------------+
+| `MaaS`_          | `GNU AGPL v3.0`_              |
++------------------+-------------------------------+
+
+References
+==========
+
+For more information on the OPNFV Gambia 7.0 release, please see:
+
+#. `OPNFV Home Page`_
+#. `OPNFV Documentation`_
+#. `OPNFV Software Downloads`_
+#. `OPNFV Gambia Wiki Page`_
+#. `Mirantis Cloud Platform Documentation`_
+
+.. _`OpenStack`: https://www.openstack.org
+.. _`OpenDaylight`: https://www.opendaylight.org/software
+.. _`PostgreSQL`: https://www.postgresql.org
+.. _`MongoDB`: https://www.mongodb.com
+.. _`RabbitMQ`: https://www.rabbitmq.com
+.. _`Linux`: https://www.linux.org
+.. _`Docker`: https://www.docker.com
+.. _`OpenJDK`: https://openjdk.java.net/
+.. _`SaltStack`: https://www.saltstack.com
+.. _`salt-formulas`: https://github.com/salt-formulas
+.. _`reclass`: https://reclass.pantsfullofunix.net
+.. _`MaaS`: https://maas.io
+.. _`MCP`: https://www.mirantis.com/software/mcp/
+.. _`Mirantis Cloud Platform Documentation`: https://docs.mirantis.com/mcp/latest/
+.. _`OPNFV Home Page`: https://www.opnfv.org
+.. _`OPNFV Gambia Wiki Page`: https://wiki.opnfv.org/releases/Gambia
+.. _`OPNFV Documentation`: https://docs.opnfv.org
+.. _`OPNFV Software Downloads`: https://www.opnfv.org/software/download
+.. _`OPNFV Fuel Contributing`: CONTRIBUTING.rst
+.. _`OPNFV Fuel Wiki Page`: https://wiki.opnfv.org/display/fuel/Fuel+Opnfv
+.. _`OPNFV Community Support mailing list`: https://lists.opnfv.org/g/opnfv-users
+.. _`OPNFV Technical Discussion mailing list`: https://lists.opnfv.org/g/opnfv-tech-discuss
+.. _`OPNFV Fuel Release Notes on RTD`: https://opnfv-fuel.readthedocs.io/en/latest/release/release-notes/index.html
+.. _`OPNFV Fuel Installation Instruction on RTD`: https://opnfv-fuel.readthedocs.io/en/latest/release/installation/index.html
+.. _`OPNFV Fuel User Guide on RTD`: https://opnfv-fuel.readthedocs.io/en/latest/release/userguide/userguide.html
+.. _`OPNFV Fuel Scenarios on RTD`: https://opnfv-fuel.readthedocs.io/en/latest/release/scenarios/index.html
+.. LICENSE links
+.. _`Apache License 2.0`: https://www.apache.org/licenses/LICENSE-2.0
+.. _`Eclipse Public License 1.0`: https://www.eclipse.org/legal/epl-v10.html
+.. _`PostgreSQL Licence`: https://opensource.org/licenses/postgresql
+.. _`GNU AGPL v3.0`: https://www.gnu.org/licenses/agpl-3.0.html
+.. _`Mozilla Public License`: https://www.rabbitmq.com/mpl.html
+.. _`GPL v3`: https://www.gnu.org/copyleft/gpl.html
+.. _`GPL v2`: https://www.gnu.org/licenses/gpl-2.0.html
+.. _`The Artistic Licence 2.0`: https://www.perlfoundation.org/artistic-license-20.html
index dc860c0..c25c58f 100644 (file)
 
 Abstract
 ========
-The fuel/ci directory holds all Fuel@OPNFV programatic abstractions for
-the OPNFV community release and continous integration pipeline.
-There is now only one Fuel@OPNFV autonomous script for this, complying to the
+
+The ``ci`` directory holds all OPNFV Fuel programmatic abstractions for
+the OPNFV community release and continuous integration pipeline.
+There are now two OPNFV Fuel autonomous scripts for this, complying with the
 OPNFV CI pipeline guideline:
- - deploy.sh
 
-USAGE
+- ``build.sh``
+- ``deploy.sh``
+
+Usage
 =====
-For usage information of the CI/CD scripts, please run:
 
-    .. code-block:: bash
+For usage information of the CI/CD deploy script, please run:
 
-        $ ./deploy.sh -h
+.. code-block:: console
 
-Details on the CI/CD deployment framework
+    jenkins@jumpserver:~/fuel/ci$ ./deploy.sh -h
+
+Details on the CI/CD Deployment Framework
 =========================================
 
-Overview and purpose
+Overview and Purpose
 --------------------
-The CI/CD deployment script relies on a configuration structure, providing base
-installer configuration (part of fuel repo: mcp/config), per POD specific
-configuration (part of a separate classified POD configuration repo: securedlab
-and deployment scenario configuration (part of fuel repo: mcp/config/scenario).
 
-- The base installer configuration resembles the least common denominator of all
+The CI/CD deployment script relies on a configuration structure, providing:
+
+- per-POD specific configuration (defaults to using Pharos OPNFV project
+  ``PDF``/``IDF`` files for all OPNFV CI PODs).
+  The Pharos OPNFV git repository is included as a git submodule at
+  ``mcp/scripts/pharos`` (see the submodule example after this list).
+  Optionally, a custom configuration structure can be used via the ``-b``
+  deploy argument.
+  The POD-specific parameters follow the ``PDF``/``IDF`` formats defined by
+  the Pharos OPNFV project.
+- deployment scenario configuration, part of fuel repo: ``mcp/config/scenario``.
+  Provides a high level, POD/HW environment independent scenario configuration
+  for a specific deployment. It defines what features shall be deployed - as
+  well as needed overrides of the base installer, POD/HW environment
+  configurations. Objects allowed to override are governed by the OPNFV Fuel
+  project.
+- base installer configuration, part of fuel repo: ``mcp/config/states``,
+  ``mcp/reclass``.
+  The base installer configuration resembles the least common denominator of all
   HW/POD environment and deployment scenarios. These configurations are
-  normally carried by the the installer projects in this case (Fuel@OPNFV).
-- Per POD specific configuration specifies POD unique parameters, the POD
-  parameter possible to alter is governed by the Fuel@OPNFV project.
-- Deployment scenario configuration - provides a high level, POD/HW environment
-  independent scenario configuration for a specifiv deployment. It defines what
-  features shall be deployed - as well needed overrides of the base
-  installer, POD/HW environment configurations. Objects allowed to override
-  are governed by the Fuel@OPNFV project.
-
-Executing a deployment
+  normally carried by the installer project (in this case, OPNFV Fuel).
+
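+Since the default POD configuration is consumed from the Pharos git
+submodule, the submodule should be initialized before deploying
+(a minimal sketch using standard git commands):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~/fuel$ git submodule update --init mcp/scripts/pharos
+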
+Executing a Deployment
 ----------------------
-deploy.sh must be executed locally at the target lab/pod/jumpserver
+
+``deploy.sh`` must be executed locally on the target lab/pod/jumpserver.
 A configuration structure must be provided - see the section below.
 It is straightforward to execute a deployment task - as an example:
 
-    .. code-block:: bash
+.. code-block:: console
+
+    jenkins@jumpserver:~/fuel/ci$ ./deploy.sh -b file:///home/jenkins/config \
+                                              -l lf \
+                                              -p pod2 \
+                                              -s os-nosdn-nofeature-ha
 
-        $ sudo deploy.sh -b file:///home/jenkins/config
-                         -l lf -p pod2 -s os-nosdn-nofeature-ha
+The ``-b`` argument should be expressed in URI style (e.g. ``file://...`` or
+``http://...``). The resources can thus be local or remote.
 
--b and -i arguments should be expressed in URI style (eg: file://...
-or http://...). The resources can thus be local or remote.
+If ``-b`` is not used, the Pharos OPNFV project git submodule local path URI
+is used for the default configuration structure.
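+
+For example, a deployment using the default Pharos configuration structure
+could be launched as follows (a sketch reusing the lab/POD/scenario names
+from the example above):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~/fuel/ci$ ./deploy.sh -l lf -p pod2 \
+                                              -s os-nosdn-nofeature-ha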
 
-Configuration repository structure
+Configuration Repository Structure
 ----------------------------------
+
 The CI deployment engine relies on a configuration directory/file structure
-pointed to by the -b option described above.
-Normally this points to the secure classified OPNFV securedlab repo to which
-only jenkins and andmins have access to, but you may point to any local or
-remote strcture fullfilling the diectory/file structure below.
-The reason that this configuration structure needs to be secure/hidden
-is that there are security sensitive information in the various configuration
-files.
-
-FIXME: Below information is out of date and should be refreshed after PDF
-support is fully implemented.
-
-A local stripped version of this configuration structure with virtual
-deployment configurations also exist under build/config/.
+pointed to by the ``-b`` option described above.
+Normally this points to the ``mcp/scripts/pharos`` git repo submodule, but you
+may point to any local or remote structure fulfilling the directory/file
+structure below.
+This configuration structure supports optional encryption of certain
+security-sensitive data, a mechanism described in the Pharos documentation.
+
 The configuration directory and file structure should adhere to the following:
 
-    .. code-block:: bash
-
-        TOP
-        !
-        +---- labs
-               !
-               +---- lab-name-1
-               !        !
-               !        +---- pod-name-1
-               !        !        !
-               !        !        +---- fuel
-               !        !               !
-               !        !               +---- config
-               !        !                       !
-               !        !                       +---- dea-pod-override.yaml
-               !        !                       !
-               !        !                       +---- dha.yaml
-               !        !
-               !        +---- pod-name-2
-               !                 !
-               !
-               +---- lab-name-2
-               !        !
-
-
-Creating a deployment scenario
-------------------------------
-Please find `mcp/config/README.rst` for instructions on how to create a new
-deployment scenario.
+.. code-block:: console
+
+    TOP
+    !
+    +---- labs
+           !
+           +---- lab-name-1
+           !        !
+           !        +---- pod1.yaml
+           !        !
+           !        +---- idf-pod1.yaml
+           !        !
+           !        +---- pod2.yaml
+           !        !
+           !        +---- idf-pod2.yaml
+           !
+           +---- lab-name-2
+           !        !
index a899970..a61946e 100755 (executable)
@@ -32,7 +32,7 @@ usage ()
 {
 cat << EOF
 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-$(notify "$(basename "$0"): Deploy the Fuel@OPNFV MCP stack" 3)
+$(notify "$(basename "$0"): Deploy the OPNFV Fuel MCP stack" 3)
 
 $(notify "USAGE:" 2)
   $(basename "$0") -l lab-name -p pod-name -s deploy-scenario \\
@@ -59,9 +59,9 @@ $(notify "OPTIONS:" 2)
   -N  Experimental: Do not virtualize control plane (novcp)
 
 $(notify_i "Description:" 2)
-Deploys the Fuel@OPNFV stack on the indicated lab resource.
+Deploys the OPNFV Fuel stack on the indicated lab resource.
 
-This script provides the Fuel@OPNFV deployment abstraction.
+This script provides the OPNFV Fuel deployment abstraction.
 It depends on the OPNFV official configuration directory/file structure
 and provides a fairly simple mechanism to execute a deployment.
 
@@ -74,8 +74,6 @@ $(notify_i "Input parameters to the build script are:" 2)
    <base-uri>/labs/<lab-name>/idf-<pod-name>.yaml
    The default is using the git submodule tracking 'OPNFV Pharos' in
    <./mcp/scripts/pharos>.
-   An example config is provided inside current repo in
-   <./mcp/config>, automatically linked as <./mcp/scripts/pharos/labs/local>.
 -d Dry-run - Produce deploy config files, but do not execute deploy
 -D Debug logging - Enable extra logging in sh deploy scripts (set -x)
 -e Do not launch environment deployment
@@ -92,10 +90,7 @@ $(notify_i "Input parameters to the build script are:" 2)
 -h Print this message and exit
 -L Deployment log path and name, eg. -L /home/jenkins/job.log.tar.gz
 -l Lab name as defined in the configuration directory, e.g. lf
-   For the sample configuration in <./mcp/config>, lab name is 'local'.
 -p POD name as defined in the configuration directory, e.g. pod2
-   For the sample configuration in <./mcp/config>, POD name is 'virtual1'
-   for virtual deployments or 'pod1' for baremetal (based on lf-pod2).
 -m Use single socket compute nodes. Instead of using default NUMA-enabled
    topology for virtual compute nodes created via libvirt, configure a
    single guest CPU socket.
index 842853a..943e1d5 100644 (file)
@@ -9,6 +9,8 @@ FUEL
 ====
 
 .. toctree::
+   :numbered:
+   :maxdepth: 2
 
    release/release-notes/index
    release/installation/index
diff --git a/docs/release/developer-guide/img/README.rst b/docs/release/developer-guide/img/README.rst
new file mode 120000 (symlink)
index 0000000..1104109
--- /dev/null
@@ -0,0 +1 @@
+../../installation/img/README.rst
\ No newline at end of file
diff --git a/docs/release/developer-guide/img/detail_fuel.png b/docs/release/developer-guide/img/detail_fuel.png
new file mode 100755 (executable)
index 0000000..02af61a
Binary files /dev/null and b/docs/release/developer-guide/img/detail_fuel.png differ
diff --git a/docs/release/developer-guide/img/overview_fuel.png b/docs/release/developer-guide/img/overview_fuel.png
new file mode 100755 (executable)
index 0000000..6b879d7
Binary files /dev/null and b/docs/release/developer-guide/img/overview_fuel.png differ
diff --git a/docs/release/developer-guide/img/overview_mcp.png b/docs/release/developer-guide/img/overview_mcp.png
new file mode 100755 (executable)
index 0000000..037b293
Binary files /dev/null and b/docs/release/developer-guide/img/overview_mcp.png differ
diff --git a/docs/release/developer-guide/img/symbol_gerrit.png b/docs/release/developer-guide/img/symbol_gerrit.png
new file mode 100755 (executable)
index 0000000..aea346e
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_gerrit.png differ
diff --git a/docs/release/developer-guide/img/symbol_git_blue.png b/docs/release/developer-guide/img/symbol_git_blue.png
new file mode 100755 (executable)
index 0000000..569ed3f
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_git_blue.png differ
diff --git a/docs/release/developer-guide/img/symbol_git_orange.png b/docs/release/developer-guide/img/symbol_git_orange.png
new file mode 100755 (executable)
index 0000000..32f6729
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_git_orange.png differ
diff --git a/docs/release/developer-guide/img/symbol_git_red.png b/docs/release/developer-guide/img/symbol_git_red.png
new file mode 100755 (executable)
index 0000000..f288afe
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_git_red.png differ
diff --git a/docs/release/developer-guide/img/symbol_jenkins.png b/docs/release/developer-guide/img/symbol_jenkins.png
new file mode 100755 (executable)
index 0000000..20fde41
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_jenkins.png differ
diff --git a/docs/release/developer-guide/img/symbol_k8.png b/docs/release/developer-guide/img/symbol_k8.png
new file mode 100755 (executable)
index 0000000..0cbc310
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_k8.png differ
diff --git a/docs/release/developer-guide/img/symbol_os.png b/docs/release/developer-guide/img/symbol_os.png
new file mode 100755 (executable)
index 0000000..c2c8b26
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_os.png differ
diff --git a/docs/release/developer-guide/img/symbol_salt.png b/docs/release/developer-guide/img/symbol_salt.png
new file mode 100755 (executable)
index 0000000..e9011ae
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_salt.png differ
diff --git a/docs/release/developer-guide/img/symbol_trigger.png b/docs/release/developer-guide/img/symbol_trigger.png
new file mode 100755 (executable)
index 0000000..e7dc10f
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_trigger.png differ
diff --git a/docs/release/developer-guide/img/symbol_user.png b/docs/release/developer-guide/img/symbol_user.png
new file mode 100755 (executable)
index 0000000..6384f82
Binary files /dev/null and b/docs/release/developer-guide/img/symbol_user.png differ
index 4cb1f77..bf63044 100644 (file)
@@ -1,12 +1,18 @@
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. SPDX-License-Identifier: CC-BY-4.0
-.. (c) 2017 Ericsson AB, Mirantis Inc., Enea AB and others.
+.. (c) 2018 Ericsson AB, Mirantis Inc., Enea AB and others.
+
+:orphan:
 
 Image Editor
 ============
-All files in this directory have been created using `draw.io <https://draw.io>`_.
+
+All files in this directory have been created using `draw.io`_.
 
 Image Sources
 =============
-Image sources are embedded in each `png` file.
-To edit an image, import the `png` file using `draw.io <https://draw.io>`_.
+
+Image sources are embedded in each ``png`` file.
+To edit an image, import the ``png`` file using `draw.io`_.
+
+.. _`draw.io`: https://draw.io
diff --git a/docs/release/installation/img/arm_pod5.png b/docs/release/installation/img/arm_pod5.png
deleted file mode 100644 (file)
index 87edb8f..0000000
Binary files a/docs/release/installation/img/arm_pod5.png and /dev/null differ
diff --git a/docs/release/installation/img/fuel_baremetal.png b/docs/release/installation/img/fuel_baremetal.png
deleted file mode 100644 (file)
index 27e7620..0000000
Binary files a/docs/release/installation/img/fuel_baremetal.png and /dev/null differ
diff --git a/docs/release/installation/img/fuel_baremetal_ha.png b/docs/release/installation/img/fuel_baremetal_ha.png
new file mode 100644 (file)
index 0000000..f2ed610
Binary files /dev/null and b/docs/release/installation/img/fuel_baremetal_ha.png differ
diff --git a/docs/release/installation/img/fuel_baremetal_noha.png b/docs/release/installation/img/fuel_baremetal_noha.png
new file mode 100644 (file)
index 0000000..5a3b429
Binary files /dev/null and b/docs/release/installation/img/fuel_baremetal_noha.png differ
diff --git a/docs/release/installation/img/fuel_hybrid_noha.png b/docs/release/installation/img/fuel_hybrid_noha.png
new file mode 100644 (file)
index 0000000..51449a7
Binary files /dev/null and b/docs/release/installation/img/fuel_hybrid_noha.png differ
diff --git a/docs/release/installation/img/fuel_virtual.png b/docs/release/installation/img/fuel_virtual.png
deleted file mode 100644 (file)
index d766486..0000000
Binary files a/docs/release/installation/img/fuel_virtual.png and /dev/null differ
diff --git a/docs/release/installation/img/fuel_virtual_noha.png b/docs/release/installation/img/fuel_virtual_noha.png
new file mode 100644 (file)
index 0000000..7d05a9d
Binary files /dev/null and b/docs/release/installation/img/fuel_virtual_noha.png differ
diff --git a/docs/release/installation/img/lf_pod2.png b/docs/release/installation/img/lf_pod2.png
deleted file mode 100644 (file)
index da419d8..0000000
Binary files a/docs/release/installation/img/lf_pod2.png and /dev/null differ
index 0033226..866044e 100644 (file)
@@ -1,24 +1,10 @@
-.. _fuel-installation:
-
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Open Platform for NFV Project, Inc. and its contributors
 
-.. _fuel-release-installation-label:
-
-****************************************
-Installation instruction for Fuel\@OPNFV
-****************************************
-
-Contents:
+.. _fuel-installation:
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
    installation.instruction.rst
-
-Indices and tables
-==================
-
-* :ref:`search`
index 9aaebdd..40f9d26 100644 (file)
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Open Platform for NFV Project, Inc. and its contributors
 
-========
+***********************************
+OPNFV Fuel Installation Instruction
+***********************************
+
 Abstract
 ========
 
-This document describes how to install the Fraser release of
+This document describes how to install the ``Gambia`` release of
 OPNFV when using Fuel as a deployment tool, covering its usage,
 limitations, dependencies and required system resources.
-This is an unified documentation for both x86_64 and aarch64
+
+This is a unified documentation for both ``x86_64`` and ``aarch64``
 architectures. All information is common for both architectures
 except when explicitly stated.
 
-============
 Introduction
 ============
 
 This document provides guidelines on how to install and
-configure the Fraser release of OPNFV when using Fuel as a
+configure the ``Gambia`` release of OPNFV when using Fuel as a
 deployment tool, including required software and hardware configurations.
 
 Although the available installation options provide a high degree of
 freedom in how the system is set up, including architecture, services
 and features, etc., said permutations may not provide an OPNFV
 compliant reference architecture. This document provides a
-step-by-step guide that results in an OPNFV Fraser compliant
+step-by-step guide that results in an OPNFV ``Gambia`` compliant
 deployment.
 
 The audience of this document is assumed to have good knowledge of
 networking and Unix/Linux administration.
 
-=======
-Preface
-=======
-
-Before starting the installation of the Fraser release of
+Before starting the installation of the ``Gambia`` release of
 OPNFV, using Fuel as a deployment tool, some planning must be
 done.
 
 Preparations
 ============
 
-Prior to installation, a number of deployment specific parameters must be collected, those are:
+Prior to installation, a number of deployment-specific parameters must be
+collected; these are:
 
 #.     Provider sub-net and gateway information
 
-#.     Provider VLAN information
-
-#.     Provider DNS addresses
+#.     Provider ``VLAN`` information
 
-#.     Provider NTP addresses
+#.     Provider ``DNS`` addresses
 
-#.     Network overlay you plan to deploy (VLAN, VXLAN, FLAT)
-
-#.     How many nodes and what roles you want to deploy (Controllers, Storage, Computes)
-
-#.     Monitoring options you want to deploy (Ceilometer, Syslog, etc.).
-
-#.     Other options not covered in the document are available in the links above
+#.     Provider ``NTP`` addresses
 
+#.     How many nodes and what roles you want to deploy (Controllers, Computes)
 
 This information will be needed for the configuration procedures
 provided in this document.
 
-=========================================
-Hardware Requirements for Virtual Deploys
-=========================================
-
-The following minimum hardware requirements must be met for the virtual
-installation of Fraser using Fuel:
-
-+----------------------------+--------------------------------------------------------+
-| **HW Aspect**              | **Requirement**                                        |
-|                            |                                                        |
-+============================+========================================================+
-| **1 Jumpserver**           | A physical node (also called Foundation Node) that     |
-|                            | will host a Salt Master VM and each of the VM nodes in |
-|                            | the virtual deploy                                     |
-+----------------------------+--------------------------------------------------------+
-| **CPU**                    | Minimum 1 socket with Virtualization support           |
-+----------------------------+--------------------------------------------------------+
-| **RAM**                    | Minimum 32GB/server (Depending on VNF work load)       |
-+----------------------------+--------------------------------------------------------+
-| **Disk**                   | Minimum 100GB (SSD or SCSI (15krpm) highly recommended)|
-+----------------------------+--------------------------------------------------------+
-
-
-===========================================
-Hardware Requirements for Baremetal Deploys
-===========================================
-
-The following minimum hardware requirements must be met for the baremetal
-installation of Fraser using Fuel:
-
-+-------------------------+------------------------------------------------------+
-| **HW Aspect**           | **Requirement**                                      |
-|                         |                                                      |
-+=========================+======================================================+
-| **# of nodes**          | Minimum 5                                            |
-|                         |                                                      |
-|                         | - 3 KVM servers which will run all the controller    |
-|                         |   services                                           |
-|                         |                                                      |
-|                         | - 2 Compute nodes                                    |
-|                         |                                                      |
-+-------------------------+------------------------------------------------------+
-| **CPU**                 | Minimum 1 socket with Virtualization support         |
-+-------------------------+------------------------------------------------------+
-| **RAM**                 | Minimum 16GB/server (Depending on VNF work load)     |
-+-------------------------+------------------------------------------------------+
-| **Disk**                | Minimum 256GB 10kRPM spinning disks                  |
-+-------------------------+------------------------------------------------------+
-| **Networks**            | 4 VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) - can be    |
-|                         | a mix of tagged/native                               |
-|                         |                                                      |
-|                         | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network        |
-|                         |                                                      |
-|                         | Note: These can be allocated to a single NIC -       |
-|                         | or spread out over multiple NICs                     |
-+-------------------------+------------------------------------------------------+
-| **1 Jumpserver**        | A physical node (also called Foundation Node) that   |
-|                         | hosts the Salt Master and MaaS VMs                   |
-+-------------------------+------------------------------------------------------+
-| **Power management**    | All targets need to have power management tools that |
-|                         | allow rebooting the hardware and setting the boot    |
-|                         | order (e.g. IPMI)                                    |
-+-------------------------+------------------------------------------------------+
+Hardware Requirements
+=====================
 
-.. NOTE::
-
-    All nodes including the Jumpserver must have the same architecture (either x86_64 or aarch64).
+Minimum hardware requirements depend on the deployment type.
 
-.. NOTE::
+.. WARNING::
 
-    For aarch64 deployments an UEFI compatible firmware with PXE support is needed (e.g. EDK2).
+    If ``baremetal`` nodes are present in the cluster, the architecture of the
+    nodes running the control plane (``kvm01``, ``kvm02``, ``kvm03`` for
+    ``HA`` scenarios, respectively ``ctl01``, ``gtw01``, ``odl01`` for
+    ``noHA`` scenarios) and the ``jumpserver`` architecture must be the same
+    (either ``x86_64`` or ``aarch64``).
+
+.. TIP::
+
+    The compute nodes may have different architectures, but extra
+    configuration might be required for scheduling VMs on the appropriate host.
+    This use case is not tested in OPNFV CI, so it is considered experimental.
+
+Hardware Requirements for ``virtual`` Deploys
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following minimum hardware requirements must be met for the ``virtual``
+installation of ``Gambia`` using Fuel:
+
++------------------+------------------------------------------------------+
+| **HW Aspect**    | **Requirement**                                      |
+|                  |                                                      |
++==================+======================================================+
+| **1 Jumpserver** | A physical node (also called Foundation Node) that   |
+|                  | will host a Salt Master container and each of the VM |
+|                  | nodes in the virtual deploy                          |
++------------------+------------------------------------------------------+
+| **CPU**          | Minimum 1 socket with Virtualization support         |
++------------------+------------------------------------------------------+
+| **RAM**          | Minimum 32GB/server (Depending on VNF work load)     |
++------------------+------------------------------------------------------+
+| **Disk**         | Minimum 100GB (SSD or 15krpm SCSI highly recommended)|
++------------------+------------------------------------------------------+
+
+Hardware Requirements for ``baremetal`` Deploys
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following minimum hardware requirements must be met for the ``baremetal``
+installation of ``Gambia`` using Fuel:
+
++------------------+------------------------------------------------------+
+| **HW Aspect**    | **Requirement**                                      |
+|                  |                                                      |
++==================+======================================================+
+| **1 Jumpserver** | A physical node (also called Foundation Node) that   |
+|                  | hosts the Salt Master container and MaaS VM          |
++------------------+------------------------------------------------------+
+| **# of nodes**   | Minimum 5                                            |
+|                  |                                                      |
+|                  | - 3 KVM servers which will run all the controller    |
+|                  |   services                                           |
+|                  |                                                      |
+|                  | - 2 Compute nodes                                    |
+|                  |                                                      |
+|                  | .. WARNING::                                         |
+|                  |                                                      |
+|                  |     ``kvm01``, ``kvm02``, ``kvm03`` nodes and the    |
+|                  |     ``jumpserver`` must have the same architecture   |
+|                  |     (either ``x86_64`` or ``aarch64``).              |
+|                  |                                                      |
+|                  | .. NOTE::                                            |
+|                  |                                                      |
+|                  |     ``aarch64`` nodes should run a ``UEFI``          |
+|                  |     compatible firmware with PXE support             |
+|                  |     (e.g. ``EDK2``).                                 |
++------------------+------------------------------------------------------+
+| **CPU**          | Minimum 1 socket with Virtualization support         |
++------------------+------------------------------------------------------+
+| **RAM**          | Minimum 16GB/server (Depending on VNF work load)     |
++------------------+------------------------------------------------------+
+| **Disk**         | Minimum 256GB 10kRPM spinning disks                  |
++------------------+------------------------------------------------------+
+| **Networks**     | Minimum 4                                            |
+|                  |                                                      |
+|                  | - 3 VLANs (``public``, ``mgmt``, ``private``) -      |
+|                  |   can be a mix of tagged/native                      |
+|                  |                                                      |
+|                  | - 1 Un-Tagged VLAN for PXE Boot -                    |
+|                  |   ``PXE/admin`` Network                              |
+|                  |                                                      |
+|                  | .. NOTE::                                            |
+|                  |                                                      |
+|                  |     These can be allocated to a single NIC           |
+|                  |     or spread out over multiple NICs.                |
++------------------+------------------------------------------------------+
+| **Power mgmt**   | All targets need to have power management tools that |
+|                  | allow rebooting the hardware (e.g. ``IPMI``).        |
++------------------+------------------------------------------------------+
+
+Hardware Requirements for ``hybrid`` (``baremetal`` + ``virtual``) Deploys
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following minimum hardware requirements must be met for the ``hybrid``
+installation of ``Gambia`` using Fuel:
+
++------------------+------------------------------------------------------+
+| **HW Aspect**    | **Requirement**                                      |
+|                  |                                                      |
++==================+======================================================+
+| **1 Jumpserver** | A physical node (also called Foundation Node) that   |
+|                  | hosts the Salt Master container, MaaS VM and         |
+|                  | each of the virtual nodes defined in ``PDF``         |
++------------------+------------------------------------------------------+
+| **# of nodes**   | .. NOTE::                                            |
+|                  |                                                      |
+|                  |     Depends on ``PDF`` configuration.                |
+|                  |                                                      |
+|                  | If the control plane is virtualized, minimum         |
+|                  | baremetal requirements are:                          |
+|                  |                                                      |
+|                  | - 2 Compute nodes                                    |
+|                  |                                                      |
+|                  | If the computes are virtualized, minimum             |
+|                  | baremetal requirements are:                          |
+|                  |                                                      |
+|                  | - 3 KVM servers which will run all the controller    |
+|                  |   services                                           |
+|                  |                                                      |
+|                  | .. WARNING::                                         |
+|                  |                                                      |
+|                  |     ``kvm01``, ``kvm02``, ``kvm03`` nodes and the    |
+|                  |     ``jumpserver`` must have the same architecture   |
+|                  |     (either ``x86_64`` or ``aarch64``).              |
+|                  |                                                      |
+|                  | .. NOTE::                                            |
+|                  |                                                      |
+|                  |     ``aarch64`` nodes should run a ``UEFI``          |
+|                  |     compatible firmware with PXE support             |
+|                  |     (e.g. ``EDK2``).                                 |
++------------------+------------------------------------------------------+
+| **CPU**          | Minimum 1 socket with Virtualization support         |
++------------------+------------------------------------------------------+
+| **RAM**          | Minimum 16GB/server (Depending on VNF work load)     |
++------------------+------------------------------------------------------+
+| **Disk**         | Minimum 256GB 10kRPM spinning disks                  |
++------------------+------------------------------------------------------+
+| **Networks**     | Same as for ``baremetal`` deployments                |
++------------------+------------------------------------------------------+
+| **Power mgmt**   | Same as for ``baremetal`` deployments                |
++------------------+------------------------------------------------------+
 
-===============================
 Help with Hardware Requirements
-===============================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Calculate hardware requirements:
 
-For information on compatible hardware types available for use,
-please see `Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/software/hardware-compatibility/>`_
-
 When choosing the hardware on which you will deploy your OpenStack
 environment, you should think about:
 
-- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine.
+- CPU -- Consider the number of virtual machines that you plan to deploy in
+  your cloud environment and the CPUs per virtual machine.
 
-- Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node.
+- Memory -- Depends on the amount of RAM assigned per virtual machine and the
+  controller node.
 
-- Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage.
+- Storage -- Depends on the local drive space per virtual machine, remote
+  volumes that can be attached to a virtual machine, and object storage.
 
-- Networking -- Depends on the Choose Network Topology, the network bandwidth per virtual machine, and network storage.
+- Networking -- Depends on the chosen network topology, the network bandwidth
+  per virtual machine, and network storage.
 
-================================================
-Top of the Rack (TOR) Configuration Requirements
-================================================
+Top of the Rack (``TOR``) Configuration Requirements
+====================================================
 
 The switching infrastructure provides connectivity for the OPNFV
 infrastructure operations, tenant networks (East/West) and provider
 connectivity (North/South); it also provides needed connectivity for
 the Storage Area Network (SAN).
+
 To avoid traffic congestion, it is strongly suggested that three
 physically separated networks are used, that is: 1 physical network
 for administration and control, one physical network for tenant private
 and public networks, and one physical network for SAN.
+
 The switching connectivity can (but does not need to) be fully redundant,
 in such case it comprises a redundant 10GE switch pair for each of the
 three physically separated networks.
 
-The physical TOR switches are **not** automatically configured from
-the Fuel OPNFV reference platform. All the networks involved in the OPNFV
-infrastructure as well as the provider networks and the private tenant
-VLANs needs to be manually configured.
+.. WARNING::
 
-Manual configuration of the Fraser hardware platform should
-be carried out according to the `OPNFV Pharos Specification
-<https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_.
+    The physical ``TOR`` switches are **not** automatically configured from
+    the OPNFV Fuel reference platform. All the networks involved in the OPNFV
+    infrastructure as well as the provider networks and the private tenant
+    VLANs needs to be manually configured.
+
+Manual configuration of the ``Gambia`` hardware platform should
+be carried out according to the `OPNFV Pharos Specification`_.
 
-============================
 OPNFV Software Prerequisites
 ============================
 
-The Jumpserver node should be pre-provisioned with an operating system,
-according to the Pharos specification. Relevant network bridges should
-also be pre-configured (e.g. admin_br, mgmt_br, public_br).
+.. NOTE::
 
-- The admin bridge (admin_br) is mandatory for the baremetal nodes PXE booting during Fuel installation.
-- The management bridge (mgmt_br) is required for testing suites (e.g. functest/yardstick), it is
-  suggested to pre-configure it for debugging purposes.
-- The public bridge (public_br) is also nice to have for debugging purposes, but not mandatory.
+    All prerequisites described in this chapter apply to the ``jumpserver``
+    node.
 
-The user running the deploy script on the Jumpserver should belong to ``sudo`` and ``libvirt`` groups,
-and have passwordless sudo access.
+OS Distribution Support
+~~~~~~~~~~~~~~~~~~~~~~~
 
-The following example adds the groups to the user ``jenkins``
+The Jumpserver node should be pre-provisioned with an operating system,
+according to the `OPNFV Pharos specification`_.
 
-.. code-block:: bash
+OPNFV Fuel has been validated by CI using the following distributions
+installed on the Jumpserver:
 
-    $ sudo usermod -aG sudo jenkins
-    $ sudo usermod -aG libvirt jenkins
-    $ reboot
-    $ groups
-    jenkins sudo libvirt
+- ``CentOS 7`` (recommended by Pharos specification);
+- ``Ubuntu Xenial 16.04``;
 
-    $ sudo visudo
-    ...
-    %jenkins ALL=(ALL) NOPASSWD:ALL
+.. TOPIC:: ``aarch64`` notes
 
-The folder containing the temporary deploy artifacts (``/home/jenkins/tmpdir`` in the examples below)
-needs to have mask 777 in order for libvirt to be able to use them.
+    For an ``aarch64`` Jumpserver, the minimum required ``libvirt``
+    version is ``3.x``; ``3.5`` or newer is highly recommended.
 
-.. code-block:: bash
+    .. TIP::
 
-    $ mkdir -p -m 777 /home/jenkins/tmpdir
+        ``CentOS 7`` (``aarch64``) distro-provided packages are already new
+        enough.
 
-For an AArch64 Jumpserver, the ``libvirt`` minimum required version is 3.x, 3.5 or newer highly recommended.
-While not mandatory, upgrading the kernel and QEMU on the Jumpserver is also highly recommended
-(especially on AArch64 Jumpservers).
+    .. WARNING::
 
-For CentOS 7.4 (AArch64), distro provided packages are already new enough.
-For Ubuntu 16.04 (arm64), distro packages are too old and 3rd party repositories should be used.
-For convenience, Armband provides a DEB repository holding all the required packages.
+        On ``Ubuntu 16.04`` (``arm64``), distro packages are too old, so 3rd
+        party repositories should be used.
 
-To add and enable the Armband repository on an Ubuntu 16.04 system,
-create a new sources list file ``/apt/sources.list.d/armband.list`` with the following contents:
+    For convenience, Armband provides a DEB repository holding all the
+    required packages.
 
-.. code-block:: bash
+    To add and enable the Armband repository on an Ubuntu 16.04 system,
+    create a new sources list file ``/etc/apt/sources.list.d/armband.list``
+    with the following contents:
 
-    $ cat /etc/apt/sources.list.d/armband.list
-    //for OpenStack Queens release
-    deb http://linux.enea.com/mcp-repos/queens/xenial queens-armband main
+    .. code-block:: console
 
-    $ apt-get update
+        jenkins@jumpserver:~$ cat /etc/apt/sources.list.d/armband.list
+        deb http://linux.enea.com/mcp-repos/queens/xenial queens-armband main
 
-Fuel@OPNFV has been validated by CI using the following distributions
-installed on the Jumpserver:
+        jenkins@jumpserver:~$ sudo apt-key adv --keyserver keys.gnupg.net \
+                                               --recv 798AB1D1
+        jenkins@jumpserver:~$ sudo apt-get update
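+
+    To check whether a package would be pulled in from the Armband
+    repository, the standard ``apt`` tooling can be used (illustrative):
+
+    .. code-block:: console
+
+        jenkins@jumpserver:~$ apt-cache policy libvirt-bin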
 
-- CentOS 7 (recommended by Pharos specification);
-- Ubuntu Xenial;
+OS Distribution Packages
+~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. WARNING::
+By default, the ``deploy.sh`` script will automatically install the required
+distribution package dependencies on the Jumpserver, so the end user does
+not have to manually install them before starting the deployment.
 
-    The install script expects ``libvirt`` to be already running on the Jumpserver.
-    In case ``libvirt`` packages are missing, the script will install them; but
-    depending on the OS distribution, the user might have to start the ``libvirtd``
-    service manually, then run the deploy script again. Therefore, it
-    is recommended to install libvirt-bin explicitly on the Jumpserver before the deployment.
+This includes Python, QEMU, libvirt etc.
 
-.. NOTE::
+.. SEEALSO::
 
-    It is also recommended to install the newer kernel on the Jumpserver before the deployment.
+    To disable automatic package installation (and/or upgrade) during
+    deployment, check out the ``-P`` deploy argument.
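+
+    For example (a sketch; the remaining arguments follow the deploy
+    examples later in this chapter):
+
+    .. code-block:: console
+
+        # -P skips the automatic package installation/upgrade step
+        jenkins@jumpserver:~/fuel$ ci/deploy.sh -P -l <lab_name> \
+                                                -p <pod_name> \
+                                                -s <scenario>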
 
 .. WARNING::
 
-    The install script will automatically install the rest of required distro package
-    dependencies on the Jumpserver, unless explicitly asked not to (via ``-P`` deploy arg).
-    This includes Python, QEMU, libvirt etc.
+    The install script expects ``libvirt`` to be already running on the
+    Jumpserver.
 
-.. WARNING::
+If the ``libvirt`` packages are missing, the script will install them;
+however, depending on the OS distribution, the user might have to start the
+``libvirtd`` daemon manually and then run the deploy script again.
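+
+For example, on a ``systemd`` based distribution (a sketch; the exact
+service name may differ across distributions):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ sudo systemctl start libvirtd
+    jenkins@jumpserver:~$ sudo systemctl enable libvirtd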
 
-    The install script will alter Jumpserver sysconf and disable ``net.bridge.bridge-nf-call``.
+Therefore, it is recommended to install ``libvirt`` explicitly on the
+Jumpserver before the deployment.
 
-.. code-block:: bash
+While not mandatory, upgrading the kernel on the Jumpserver is also highly
+recommended.
 
-    $ apt-get install linux-image-generic-hwe-16.04-edge libvirt-bin
+.. code-block:: console
 
+    jenkins@jumpserver:~$ sudo apt-get install \
+                          linux-image-generic-hwe-16.04-edge libvirt-bin
+    jenkins@jumpserver:~$ sudo reboot
 
-==========================================
-OPNFV Software Installation and Deployment
-==========================================
+User Requirements
+~~~~~~~~~~~~~~~~~
 
-This section describes the process of installing all the components needed to
-deploy the full OPNFV reference platform stack across a server cluster.
+The user running the deploy script on the Jumpserver should belong to
+``sudo`` and ``libvirt`` groups, and have passwordless sudo access.
 
-The installation is done with Mirantis Cloud Platform (MCP), which is based on
-a reclass model. This model provides the formula inputs to Salt, to make the deploy
-automatic based on deployment scenario.
-The reclass model covers:
+.. NOTE::
 
-   - Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01)
-   - OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
-   - Infrastructure components to install (software packages, services etc.)
-   - OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them
+    Throughout this documentation, we will use the ``jenkins`` username for
+    this role.
 
+The following example adds the groups to the user ``jenkins``:
 
-Automatic Installation of a Virtual POD
-=======================================
+.. code-block:: console
 
-For virtual deploys all the targets are VMs on the Jumpserver. The deploy script will:
+    jenkins@jumpserver:~$ sudo usermod -aG sudo jenkins
+    jenkins@jumpserver:~$ sudo usermod -aG libvirt jenkins
+    jenkins@jumpserver:~$ sudo reboot
+    jenkins@jumpserver:~$ groups
+    jenkins sudo libvirt
 
-   - Create a Salt Master VM on the Jumpserver which will drive the installation
-   - Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network)
-   - Install OpenStack on the targets
-      - Leverage Salt to install & configure OpenStack services
+    jenkins@jumpserver:~$ sudo visudo
+    ...
+    %jenkins ALL=(ALL) NOPASSWD:ALL
 
-.. figure:: img/fuel_virtual.png
-   :align: center
-   :alt: Fuel@OPNFV Virtual POD Network Layout Examples
+Local Artifact Storage
+~~~~~~~~~~~~~~~~~~~~~~
+
+The folder containing the temporary deploy artifacts (``/home/jenkins/tmpdir``
+in the examples below) needs to have mode ``777`` in order for ``libvirt`` to
+be able to access the artifacts.
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ mkdir -p -m 777 /home/jenkins/tmpdir
+
+Network Configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+Relevant Linux bridges should also be pre-configured for certain networks,
+depending on the type of the deployment.
+
++------------+---------------+----------------------------------------------+
+| Network    | Linux Bridge  | Linux Bridge necessity based on deploy type  |
+|            |               +--------------+---------------+---------------+
+|            |               | ``virtual``  | ``baremetal`` | ``hybrid``    |
++============+===============+==============+===============+===============+
+| PXE/admin  | ``admin_br``  | absent       | present       | present       |
++------------+---------------+--------------+---------------+---------------+
+| management | ``mgmt_br``   | optional     | optional,     | optional,     |
+|            |               |              | recommended,  | recommended,  |
+|            |               |              | required for  | required for  |
+|            |               |              | ``functest``, | ``functest``, |
+|            |               |              | ``yardstick`` | ``yardstick`` |
++------------+---------------+--------------+---------------+---------------+
+| internal   | ``int_br``    | optional     | optional      | present       |
++------------+---------------+--------------+---------------+---------------+
+| public     | ``public_br`` | optional     | optional,     | optional,     |
+|            |               |              | recommended,  | recommended,  |
+|            |               |              | useful for    | useful for    |
+|            |               |              | debugging     | debugging     |
++------------+---------------+--------------+---------------+---------------+
+
+.. TIP::
+
+    IP addresses should be assigned to the created bridge interfaces (not
+    to their ports).
 
-   Fuel@OPNFV Virtual POD Network Layout Examples
+.. WARNING::
 
-   +-----------------------+------------------------------------------------------------------------+
-   | cfg01                 | Salt Master VM                                                         |
-   +-----------------------+------------------------------------------------------------------------+
-   | ctl01                 | Controller VM                                                          |
-   +-----------------------+------------------------------------------------------------------------+
-   | cmp001/cmp002         | Compute VMs                                                            |
-   +-----------------------+------------------------------------------------------------------------+
-   | gtw01                 | Gateway VM with neutron services (dhcp agent, L3 agent, metadata, etc) |
-   +-----------------------+------------------------------------------------------------------------+
-   | odl01                 | VM on which ODL runs (for scenarios deployed with ODL)                 |
-   +-----------------------+------------------------------------------------------------------------+
+    ``PXE/admin`` bridge (``admin_br``) **must** have an IP address.
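+
+As a minimal, non-persistent sketch (the physical port name ``eth1`` and the
+IP subnet below are placeholders, adjust them for the actual Jumpserver),
+``admin_br`` could be created manually with ``iproute2`` like this:
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ sudo ip link add admin_br type bridge
+    jenkins@jumpserver:~$ sudo ip link set eth1 master admin_br
+    jenkins@jumpserver:~$ sudo ip address add 192.168.11.1/24 dev admin_br
+    jenkins@jumpserver:~$ sudo ip link set admin_br up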
 
+Changes ``deploy.sh`` Will Perform to Jumpserver OS
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-In this figure there are examples of two virtual deploys:
-   - Jumphost 1 has only virsh bridges, created by the deploy script
-   - Jumphost 2 has a mix of Linux and virsh bridges; When Linux bridge exists for a specified network,
-     the deploy script will skip creating a virsh bridge for it
+.. WARNING::
 
-.. NOTE::
+    The install script will alter the Jumpserver ``sysctl`` configuration
+    and disable ``net.bridge.bridge-nf-call``.
 
-    A virtual network ``mcpcontrol`` is always created for initial connection of the VMs on Jumphost.
+.. WARNING::
 
+    The install script will automatically install and/or upgrade the
+    required distribution package dependencies on the Jumpserver,
+    unless explicitly asked not to (via the ``-P`` deploy arg).
 
-Automatic Installation of a Baremetal POD
-=========================================
+OPNFV Software Configuration (``XDF``)
+======================================
 
-The baremetal installation process can be done by editing the information about
-hardware and environment in the reclass files, or by using the files Pod Descriptor
-File (PDF) and Installer Descriptor File (IDF) as described in the OPNFV Pharos project.
-These files contain all the information about the hardware and network of the deployment
-that will be fed to the reclass model during deployment.
+.. versionadded:: 5.0.0
+.. versionchanged:: 7.0.0
 
-The installation is done automatically with the deploy script, which will:
+Unlike the old approach based on OpenStack Fuel, OPNFV Fuel no longer has a
+graphical user interface for configuring the environment; instead, it has
+switched to OPNFV-specific descriptor files that we will generically call
+``XDF``:
 
-   - Create a Salt Master VM on the Jumpserver which will drive the installation
-   - Create a MaaS Node VM on the Jumpserver which will provision the targets
-   - Install OpenStack on the targets
-      - Leverage MaaS to provision baremetal nodes with the operating system
-      - Leverage Salt to configure the operating system on the baremetal nodes
-      - Leverage Salt to install & configure OpenStack services
+- ``PDF`` (POD Descriptor File) provides an abstraction of the target POD
+  with all its hardware characteristics and required parameters;
+- ``IDF`` (Installer Descriptor File) extends the ``PDF`` with POD related
+  parameters required by the OPNFV Fuel installer;
+- ``SDF`` (Scenario Descriptor File, **not** yet adopted) will later
+  replace embedded scenario definitions, describing the roles and layout of
+  the cluster environment for a given reference architecture;
 
-.. figure:: img/fuel_baremetal.png
-   :align: center
-   :alt: Fuel@OPNFV Baremetal POD Network Layout Example
-
-   Fuel@OPNFV Baremetal POD Network Layout Example
-
-   +-----------------------+---------------------------------------------------------+
-   | cfg01                 | Salt Master VM                                          |
-   +-----------------------+---------------------------------------------------------+
-   | mas01                 | MaaS Node VM                                            |
-   +-----------------------+---------------------------------------------------------+
-   | kvm01..03             | Baremetals which hold the VMs with controller functions |
-   +-----------------------+---------------------------------------------------------+
-   | cmp001/cmp002         | Baremetal compute nodes                                 |
-   +-----------------------+---------------------------------------------------------+
-   | prx01/prx02           | Proxy VMs for Nginx                                     |
-   +-----------------------+---------------------------------------------------------+
-   | msg01..03             | RabbitMQ Service VMs                                    |
-   +-----------------------+---------------------------------------------------------+
-   | dbs01..03             | MySQL service VMs                                       |
-   +-----------------------+---------------------------------------------------------+
-   | mdb01..03             | Telemetry VMs                                           |
-   +-----------------------+---------------------------------------------------------+
-   | odl01                 | VM on which ODL runs (for scenarios deployed with ODL)  |
-   +-----------------------+---------------------------------------------------------+
-   | Tenant VM             | VM running in the cloud                                 |
-   +-----------------------+---------------------------------------------------------+
-
-In the baremetal deploy all bridges but "mcpcontrol" are Linux bridges. For the Jumpserver, it is
-required to pre-configure at least the admin_br bridge for the PXE/Admin.
-For the targets, the bridges are created by the deploy script.
+.. TIP::
+
+    For ``virtual`` deployments, if the ``public`` network will be accessed
+    from outside the ``jumpserver`` node, a custom ``PDF``/``IDF`` pair is
+    required for customizing ``idf.net_config.public`` and
+    ``idf.fuel.jumphost.bridges.public``.
 
 .. NOTE::
 
-    A virtual network ``mcpcontrol`` is always created for initial connection of the VMs on Jumphost.
+    For OPNFV CI PODs, as well as simple (no ``public`` bridge) ``virtual``
+    deployments, ``PDF``/``IDF`` files are already available in the
+    `pharos git repo`_. They can be used as a reference for user-supplied
+    inputs or to kick off a deployment right away.
 
++----------+------------------------------------------------------------------+
+| LAB/POD  | ``PDF``/``IDF`` availability based on deploy type                |
+|          +------------------------+--------------------+--------------------+
+|          | ``virtual``            | ``baremetal``      | ``hybrid``         |
++==========+========================+====================+====================+
+| OPNFV CI | available in           | available in       | N/A, as currently  |
+| POD      | `pharos git repo`_     | `pharos git repo`_ | there are 0 hybrid |
+|          | (e.g.                  | (e.g. ``lf-pod2``, | PODs in OPNFV CI   |
+|          | ``ericsson-virtual1``) | ``arm-pod5``)      |                    |
++----------+------------------------+--------------------+--------------------+
+| local or | ``user-supplied``      | ``user-supplied``  | ``user-supplied``  |
+| new POD  |                        |                    |                    |
++----------+------------------------+--------------------+--------------------+
 
-Steps to Start the Automatic Deploy
-===================================
+.. TIP::
 
-These steps are common both for virtual and baremetal deploys.
+    Both ``PDF`` and ``IDF`` structures are modelled as ``yaml`` schemas in
+    the `pharos git repo`_, which is also included as a git submodule in
+    OPNFV Fuel.
 
-#. Clone the Fuel code from gerrit
+    .. SEEALSO::
 
-   For x86_64
+        - ``mcp/scripts/pharos/config/pdf/pod1.schema.yaml``
+        - ``mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml``
 
-   .. code-block:: bash
+    Schema files are also used during the initial deployment phase to validate
+    the user-supplied input ``PDF``/``IDF`` files.
 
-       $ git clone https://git.opnfv.org/fuel
-       $ cd fuel
+``PDF``
+~~~~~~~
 
-   For aarch64
+The Pod Descriptor File is a hardware description of the POD
+infrastructure. The information is modeled under a ``yaml`` structure.
 
-   .. code-block:: bash
+The hardware description covers the ``jumphost`` node and a set of ``nodes``
+for the cluster target boards. For each node the following characteristics
+are defined:
 
-       $ git clone https://git.opnfv.org/armband
-       $ cd armband
+- Node parameters including ``CPU`` features and total memory;
+- A list of available disks;
+- Remote management parameters;
+- Network interfaces list including name, ``MAC`` address, link speed,
+  advanced features;
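+
+A heavily trimmed sketch of a single node entry is shown below (illustrative
+only; the key names are approximations, consult the reference and schema
+files quoted below for the authoritative structure):
+
+.. code-block:: yaml
+
+    nodes:
+      - name: pod1-node1
+        node:                    # node hardware parameters
+          type: baremetal        # 'baremetal' or 'virtual'
+          arch: x86_64
+          cpus: 2
+          memory: 64G
+        disks:                   # list of available disks
+          - name: 'disk1'
+            disk_capacity: 2048G
+        remote_management:       # remote management (e.g. IPMI) parameters
+          type: ipmi
+          address: 10.4.7.3
+          user: admin
+        interfaces:              # network interfaces list
+          - name: 'nic1'
+            mac_address: "0c:c4:7a:32:3a:70"
+            speed: 1gb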
 
-#. Checkout the Fraser release
+.. SEEALSO::
 
-   .. code-block:: bash
+    A reference file with the expected ``yaml`` structure is available at:
 
-       $ git checkout opnfv-6.2.1
+    - ``mcp/scripts/pharos/config/pdf/pod1.yaml``
 
-#. Start the deploy script
+    For more information on ``PDF``, see the `OPNFV PDF Wiki Page`_.
 
-    Besides the basic options,  there are other recommended deploy arguments:
+.. WARNING::
 
-    - use ``-D`` option to enable the debug info
-    - use ``-S`` option to point to a tmp dir where the disk images are saved. The images will be
-      re-used between deploys
-    - use ``|& tee`` to save the deploy log to a file
+    The fixed IPs defined in the ``PDF`` are ignored by the OPNFV Fuel
+    installer script, which will instead assign addresses based on the
+    network ranges defined in the ``IDF``.
+
+    For more details on the way IP addresses are assigned, see
+    :ref:`OPNFV Fuel User Guide <fuel-userguide>`.
+
+``PDF``/``IDF`` Role (hostname) Mapping
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Upcoming ``SDF`` support will introduce a series of possible node roles.
+Until that happens, the role mapping logic is hardcoded, based on the node
+index in ``PDF``/``IDF`` (which should be kept in sync, i.e. the parameters
+of the ``n``-th cluster node defined in the ``PDF`` should correspond to the
+``n``-th node in the ``IDF`` structures too).
+
++-------------+------------------+----------------------+
+| Node index  | ``HA`` scenario  | ``noHA`` scenario    |
++=============+==================+======================+
+| 1st         | ``kvm01``        | ``ctl01``            |
++-------------+------------------+----------------------+
+| 2nd         | ``kvm02``        | ``gtw01``            |
++-------------+------------------+----------------------+
+| 3rd         | ``kvm03``        | ``odl01``/``unused`` |
++-------------+------------------+----------------------+
+| 4th,        | ``cmp001``,      | ``cmp001``,          |
+| 5th,        | ``cmp002``,      | ``cmp002``,          |
+| ...         | ``...``          | ``...``              |
++-------------+------------------+----------------------+
+
+.. TIP::
+
+    To switch node role(s), simply reorder the node definitions in
+    ``PDF``/``IDF`` (make sure to keep them in sync).
+
+``IDF``
+~~~~~~~
+
+The Installer Descriptor File extends the ``PDF`` with POD related parameters
+required by the installer. This information may differ between installer
+types and is not considered part of the POD infrastructure.
+
+``idf.*`` Overview
+------------------
+
+The ``IDF`` file must be named after the ``PDF`` it attaches to, with the
+prefix ``idf-``.
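+
+For example, for the ``lf-pod2`` POD from the `pharos git repo`_, the pair of
+descriptor files would be:
+
+.. code-block:: console
+
+    labs/lf/pod2.yaml        # PDF
+    labs/lf/idf-pod2.yaml    # IDF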
+
+.. SEEALSO::
+
+    A reference file with the expected ``yaml`` structure is available at:
+
+    - ``mcp/scripts/pharos/config/pdf/idf-pod1.yaml``
+
+The file follows a ``yaml`` structure and at least two sections
+(``idf.net_config`` and ``idf.fuel``) are expected.
+
+The ``idf.fuel`` section defines several sub-sections required by the OPNFV
+Fuel installer:
+
+- ``jumphost``: List of bridge names for each network on the Jumpserver;
+- ``network``: List of interface names and bus addresses for all the target
+  nodes. The order must be aligned with the order defined in the ``PDF`` file.
+  The OPNFV Fuel installer relies on the ``IDF`` model to set up all node NICs
+  by defining the expected device name and bus address;
+- ``maas``: Defines the commissioning and deployment timeouts for the target
+  nodes;
+- ``reclass``: Defines compute parameter tuning, including huge pages, ``CPU``
+  pinning and other ``DPDK`` settings;
+
+.. code-block:: yaml
+
+    ---
+    idf:
+      version: 0.1     # fixed, the only supported version (mandatory)
+      net_config:      # POD network configuration overview (mandatory)
+        oob: ...       # mandatory
+        admin: ...     # mandatory
+        mgmt: ...      # mandatory
+        storage: ...   # mandatory
+        private: ...   # mandatory
+        public: ...    # mandatory
+      fuel:            # OPNFV Fuel specific section (mandatory)
+        jumphost:      # OPNFV Fuel jumpserver bridge configuration (mandatory)
+          bridges:                          # Bridge name mapping (mandatory)
+            admin: 'admin_br'               # <PXE/admin bridge name> or ~
+            mgmt: 'mgmt_br'                 # <mgmt bridge name> or ~
+            private: ~                      # <private bridge name> or ~
+            public: 'public_br'             # <public bridge name> or ~
+          trunks: ...                       # Trunked networks (optional)
+        maas:                               # MaaS timeouts (optional)
+          timeout_comissioning: 10          # commissioning timeout in minutes
+          timeout_deploying: 15             # deploy timeout in minutes
+        network:                            # Cluster nodes network (mandatory)
+          ntp_strata_host1: 1.pool.ntp.org  # NTP1 (optional)
+          ntp_strata_host2: 0.pool.ntp.org  # NTP2 (optional)
+          node: ...                         # List of per-node cfg (mandatory)
+        reclass:                            # Additional params (mandatory)
+          node: ...                         # List of per-node cfg (mandatory)
+
+``idf.net_config``
+------------------
+
+``idf.net_config`` was introduced as a mechanism to map all the usual cluster
+networks (internal and provider networks, e.g. ``mgmt``) to their ``VLAN``
+tags, ``CIDR`` and a physical interface index (used to match networks to
+interface names, like ``eth0``, on the cluster nodes).
 
-   .. code-block:: bash
 
-       $ ci/deploy.sh -l <lab_name> \
-                      -p <pod_name> \
-                      -b <URI to configuration repo containing the PDF file> \
-                      -s <scenario> \
-                      -D \
-                      -S <Storage directory for disk images> |& tee deploy.log
+.. WARNING::
 
-.. NOTE::
+    The mapping between one network segment (e.g. ``mgmt``) and its ``CIDR``/
+    ``VLAN`` is not configurable on a per-node basis, but instead applies to
+    all the nodes in the cluster.
+
+For each network, the following parameters are currently supported:
+
++--------------------------+--------------------------------------------------+
+| ``idf.net_config.*`` key | Details                                          |
++==========================+==================================================+
+| ``interface``            | The index of the interface to use for this net.  |
+|                          | For each cluster node (if network is present),   |
+|                          | OPNFV Fuel will determine the underlying physical|
+|                          | interface by picking the element at index        |
+|                          | ``interface`` from the list of network interface |
+|                          | names defined in                                 |
+|                          | ``idf.fuel.network.node.*.interfaces``.          |
+|                          | Required for each network.                       |
+|                          |                                                  |
+|                          | .. NOTE::                                        |
+|                          |                                                  |
+|                          |     The interface index should be the            |
+|                          |     same on all cluster nodes. This can be       |
+|                          |     achieved by ordering them accordingly in     |
+|                          |     ``PDF``/``IDF``.                             |
++--------------------------+--------------------------------------------------+
+| ``vlan``                 | ``VLAN`` tag (integer) or the string ``native``. |
+|                          | Required for each network.                       |
++--------------------------+--------------------------------------------------+
+| ``ip-range``             | When specified, all cluster IPs dynamically      |
+|                          | allocated by OPNFV Fuel for that network will be |
+|                          | assigned inside this range.                      |
+|                          | Required for ``oob``, optional for others.       |
+|                          |                                                  |
+|                          | .. NOTE::                                        |
+|                          |                                                  |
+|                          |     For now, only range start address is used.   |
++--------------------------+--------------------------------------------------+
+| ``network``              | Network segment address.                         |
+|                          | Required for each network, except ``oob``.       |
++--------------------------+--------------------------------------------------+
+| ``mask``                 | Network segment mask.                            |
+|                          | Required for each network, except ``oob``.       |
++--------------------------+--------------------------------------------------+
+| ``gateway``              | Gateway IP address.                              |
+|                          | Required for ``public``, N/A for others.         |
++--------------------------+--------------------------------------------------+
+| ``dns``                  | List of DNS IP addresses.                        |
+|                          | Required for ``public``, N/A for others.         |
++--------------------------+--------------------------------------------------+
+
+Sample ``public`` network configuration block:
+
+.. code-block:: yaml
+
+    idf:
+        net_config:
+            public:
+              interface: 1
+              vlan: native
+              network: 10.0.16.0
+              ip-range: 10.0.16.100-10.0.16.253
+              mask: 24
+              gateway: 10.0.16.254
+              dns:
+                - 8.8.8.8
+                - 8.8.4.4
+
+.. TOPIC:: ``hybrid`` POD notes
+
+    Interface indexes must be the same for all nodes, which is problematic
+    when mixing ``virtual`` nodes (where all interfaces were untagged
+    so far) with ``baremetal`` nodes (where interfaces usually carry
+    tagged VLANs).
+
+    .. TIP::
+
+        To achieve this, a special ``jumpserver`` network layout is used:
+        ``mgmt``, ``storage``, ``private``, ``public`` are trunked together
+        in a single ``trunk`` bridge:
+
+        - without decapsulating them (if they are also tagged on
+          ``baremetal``); a ``trunk.<vlan_tag>`` interface should be created
+          on the ``jumpserver`` for each tagged VLAN so the kernel won't drop
+          the packets (a sketch is included at the end of this topic);
+        - by decapsulating them first (if they are also untagged on
+          ``baremetal`` nodes);
+
+    The ``trunk`` bridge is then referenced for all the bridges OPNFV Fuel
+    is aware of in ``idf.fuel.jumphost.bridges``, e.g. for a ``trunk`` where
+    only the ``mgmt`` network is not decapsulated:
+
+    .. code-block:: yaml
+
+        idf:
+            fuel:
+              jumphost:
+                bridges:
+                  admin: 'admin_br'
+                  mgmt: 'trunk'
+                  private: 'trunk'
+                  public: 'trunk'
+                trunks:
+                  # mgmt network is not decapsulated for jumpserver infra VMs,
+                  # to align with the VLAN configuration of baremetal nodes.
+                  mgmt: True
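+
+    As a minimal sketch (assuming the ``mgmt`` network carries the
+    hypothetical ``VLAN`` tag ``300`` on the ``baremetal`` nodes), the
+    tagged sub-interface could be created on the ``jumpserver`` like this:
+
+    .. code-block:: console
+
+        # the VLAN ID (300) below is a placeholder
+        jenkins@jumpserver:~$ sudo ip link add link trunk name trunk.300 \
+                                   type vlan id 300
+        jenkins@jumpserver:~$ sudo ip link set trunk.300 up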
 
-    The deployment uses the OPNFV Pharos project as input (PDF and IDF files)
-    for hardware and network configuration of all current OPNFV PODs.
-    When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override
-    the path for the labconfig directory structure containing the PDF and IDF (see below).
+.. WARNING::
 
-Examples
---------
-#. Virtual deploy
+    The Linux kernel limits network interface names to 15 visible characters
+    (``IFNAMSIZ`` is 16, including the terminating null). Extra care is
+    required when choosing bridge names, so that appending the ``VLAN`` tag
+    does not push the interface name over that limit.
+
+``idf.fuel.network``
+--------------------
+
+``idf.fuel.network`` allows mapping the cluster networks (e.g. ``mgmt``) to
+their physical interface name (e.g. ``eth0``) and bus address on the cluster
+nodes.
+
+``idf.fuel.network.node`` should be a list with the same number (and order) of
+elements as the cluster nodes defined in ``PDF``, e.g. the second cluster node
+in ``PDF`` will use the interface name and bus address defined in the second
+list element.
+
+Below is a sample configuration block for a single node with two interfaces:
+
+.. code-block:: yaml
+
+    idf:
+      fuel:
+        network:
+          node:
+            # Ordered-list, index should be in sync with node index in PDF
+            - interfaces:
+                # Ordered-list, index should be in sync with interface index
+                # in PDF
+                - 'ens3'
+                - 'ens4'
+              busaddr:
+                # Bus-info reported by `ethtool -i ethX`
+                - '0000:00:03.0'
+                - '0000:00:04.0'
+
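+The bus address of each interface can be queried beforehand on the running
+node, e.g. (a sketch; the host, interface name and output are illustrative):
+
+.. code-block:: console
+
+    jenkins@cmp001:~$ ethtool -i ens3 | grep bus-info
+    bus-info: 0000:00:03.0
+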
+
+``idf.fuel.reclass``
+--------------------
+
+``idf.fuel.reclass`` provides a way of overriding default values in the
+reclass cluster model.
+
+This currently covers compute parameter tuning only, including huge pages,
+``CPU`` pinning and other ``DPDK`` settings.
+
+``idf.fuel.reclass.node`` should be a list with the same number (and order) of
+elements as the cluster nodes defined in ``PDF``, e.g. the second cluster node
+in ``PDF`` will use the parameters defined in the second list element.
+
+The following parameters are currently supported:
+
++---------------------------------+-------------------------------------------+
+| ``idf.fuel.reclass.node.*``     | Details                                   |
+| key                             |                                           |
++=================================+===========================================+
+| ``nova_cpu_pinning``            | List of CPU cores nova will be pinned to. |
+|                                 |                                           |
+|                                 | .. WARNING::                              |
+|                                 |                                           |
+|                                 |     Currently disabled.                   |
++---------------------------------+-------------------------------------------+
+| ``compute_hugepages_size``      | Size of each persistent huge page.        |
+|                                 |                                           |
+|                                 | Usual values are ``2M`` and ``1G``.       |
++---------------------------------+-------------------------------------------+
+| ``compute_hugepages_count``     | Total number of persistent huge pages.    |
++---------------------------------+-------------------------------------------+
+| ``compute_hugepages_mount``     | Mount point to use for huge pages.        |
++---------------------------------+-------------------------------------------+
+| ``compute_kernel_isolcpu``      | List of CPU cores isolated from the       |
+|                                 | Linux scheduler.                          |
++---------------------------------+-------------------------------------------+
+| ``compute_dpdk_driver``         | Kernel module to provide userspace I/O    |
+|                                 | support.                                  |
++---------------------------------+-------------------------------------------+
+| ``compute_ovs_pmd_cpu_mask``    | Hexadecimal mask of CPUs to run ``DPDK``  |
+|                                 | Poll-mode drivers.                        |
++---------------------------------+-------------------------------------------+
+| ``compute_ovs_dpdk_socket_mem`` | Amount of huge page memory (in ``MB``)    |
+|                                 | allocated to the ``OVS-DPDK`` daemon for  |
+|                                 | each ``NUMA`` node; the list has one      |
+|                                 | comma-separated element per node.         |
++---------------------------------+-------------------------------------------+
+| ``compute_ovs_dpdk_lcore_mask`` | Hexadecimal mask of CPU cores used to run |
+|                                 | the ``DPDK`` lcore threads.               |
++---------------------------------+-------------------------------------------+
+| ``compute_ovs_memory_channels`` | Number of memory channels to be used.     |
++---------------------------------+-------------------------------------------+
+| ``dpdk0_driver``                | NIC driver to use for physical network    |
+|                                 | interface.                                |
++---------------------------------+-------------------------------------------+
+| ``dpdk0_n_rxq``                 | Number of ``RX`` queues.                  |
++---------------------------------+-------------------------------------------+
+
+Sample ``compute_params`` configuration block (for a single node):
+
+.. code-block:: yaml
+
+    idf:
+      fuel:
+        reclass:
+          node:
+            - compute_params:
+                common: &compute_params_common
+                  compute_hugepages_size: 2M
+                  compute_hugepages_count: 2048
+                  compute_hugepages_mount: /mnt/hugepages_2M
+                dpdk:
+                  <<: *compute_params_common
+                  compute_dpdk_driver: uio
+                  compute_ovs_pmd_cpu_mask: "0x6"
+                  compute_ovs_dpdk_socket_mem: "1024"
+                  compute_ovs_dpdk_lcore_mask: "0x8"
+                  compute_ovs_memory_channels: "2"
+                  dpdk0_driver: igb_uio
+                  dpdk0_n_rxq: 2
+
+``SDF``
+~~~~~~~
+
+Scenario Descriptor Files are not yet implemented in the OPNFV Fuel ``Gambia``
+release.
+
+Instead, embedded OPNFV Fuel scenario files are locally available in
+``mcp/config/scenario``.
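+
+The scenario consumed by a deployment is selected via the ``-s`` deploy
+argument, matching one of the file names in that directory (a sketch; the
+listing below is illustrative):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~/fuel$ ls mcp/config/scenario
+    defaults.yaml  os-nosdn-nofeature-ha.yaml  os-nosdn-nofeature-noha.yaml  ...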
 
-   To start a virtual deployment, it is required to have the **virtual** keyword
-   while specifying the pod name to the installer script.
+OPNFV Software Installation and Deployment
+==========================================
 
-   It will create the required bridges and networks, configure Salt Master and
-   install OpenStack.
+This section describes the process of installing all the components needed to
+deploy the full OPNFV reference platform stack across a server cluster.
 
-      .. code-block:: bash
+Deployment Types
+~~~~~~~~~~~~~~~~
 
-          $ ci/deploy.sh -l ericsson \
-                         -p virtual3 \
-                         -s os-nosdn-nofeature-noha \
-                         -D \
-                         -S /home/jenkins/tmpdir |& tee deploy.log
+.. WARNING::
 
-   Once the deployment is complete, the OpenStack Dashboard, Horizon, is
-   available at ``http://<controller VIP>:8078``
-   The administrator credentials are **admin** / **opnfv_secret**.
+    OPNFV releases previous to ``Gambia`` used to rely on the ``virtual``
+    keyword being part of the POD name (e.g. ``ericsson-virtual2``) to
+    configure the deployment type as ``virtual``. Otherwise ``baremetal``
+    was implied.
 
-   A simple (and generic) sample PDF/IDF set of configuration files may
-   be used for virtual deployments by setting lab/POD name to ``local-virtual1``.
-   This sample configuration is x86_64 specific and hardcodes certain parameters,
-   like public network address space, so a dedicated PDF/IDF is highly recommended.
+``Gambia`` and newer releases are more flexible, supporting a mix of
+``baremetal`` and ``virtual`` nodes, so the deployment type is now
+automatically determined based on the cluster node types defined in the
+``PDF``:
 
-      .. code-block:: bash
++---------------------------------+-------------------------------------------+
+| ``PDF`` has nodes of type       | Deployment type                           |
++---------------+-----------------+                                           |
+| ``baremetal`` | ``virtual``     |                                           |
++===============+=================+===========================================+
+| yes           | no              | ``baremetal``                             |
++---------------+-----------------+-------------------------------------------+
+| yes           | yes             | ``hybrid``                                |
++---------------+-----------------+-------------------------------------------+
+| no            | yes             | ``virtual``                               |
++---------------+-----------------+-------------------------------------------+
 
-          $ ci/deploy.sh -l local \
-                         -p virtual1 \
-                         -s os-nosdn-nofeature-noha \
-                         -D \
-                         -S /home/jenkins/tmpdir |& tee deploy.log
+Based on that, the deployment script will later enable/disable certain extra
+nodes (e.g. ``mas01``) and/or ``STATE`` files (e.g. ``maas``).
 
-#. Baremetal deploy
+``HA`` vs ``noHA``
+~~~~~~~~~~~~~~~~~~
 
-   A x86 deploy on pod2 from Linux Foundation lab
+High availability of OpenStack services is determined based on scenario name,
+e.g. ``os-nosdn-nofeature-noha`` vs ``os-nosdn-nofeature-ha``.
 
-      .. code-block:: bash
+.. TIP::
 
-          $ ci/deploy.sh -l lf \
-                         -p pod2 \
-                         -s os-nosdn-nofeature-ha \
-                         -D \
-                         -S /home/jenkins/tmpdir |& tee deploy.log
+    ``HA`` scenarios imply a virtualized control plane (``VCP``) for the
+    OpenStack services running on the 3 ``kvm`` nodes.
 
-      .. figure:: img/lf_pod2.png
-         :align: center
-         :alt: Fuel@OPNFV LF POD2 Network Layout
+    .. SEEALSO::
 
-         Fuel@OPNFV LF POD2 Network Layout
+        An experimental feature argument (``-N``) is supported by the deploy
+        script for disabling ``VCP``, although it might not be supported by
+        all scenarios and is not continuously validated by OPNFV CI/CD.
 
-   An aarch64 deploy on pod5 from Arm lab
+.. WARNING::
 
-      .. code-block:: bash
+    ``virtual`` ``HA`` deployments are not officially supported, due to
+    poor performance and various limitations of nested virtualization on
+    both ``x86_64`` and ``aarch64`` architectures.
 
-          $ ci/deploy.sh -l arm \
-                         -p pod5 \
-                         -s os-nosdn-nofeature-ha \
-                         -D \
-                         -S /home/jenkins/tmpdir |& tee deploy.log
+    .. TIP::
 
-      .. figure:: img/arm_pod5.png
-         :align: center
-         :alt: Fuel@OPNFV ARM POD5 Network Layout
+        ``virtual`` ``HA`` deployments without ``VCP`` are supported, but
+        highly experimental.
 
-         Fuel@OPNFV ARM POD5 Network Layout
++-------------------------------+-------------------------+-------------------+
+| Feature                       | ``HA`` scenario         | ``noHA`` scenario |
++===============================+=========================+===================+
+| ``VCP``                       | yes,                    | no                |
+| (Virtualized Control Plane)   | disabled with ``-N``    |                   |
++-------------------------------+-------------------------+-------------------+
+| OpenStack APIs SSL            | yes                     | no                |
++-------------------------------+-------------------------+-------------------+
+| Storage                       | ``GlusterFS``           | ``NFS``           |
++-------------------------------+-------------------------+-------------------+
 
-   Once the deployment is complete, the SaltStack Deployment Documentation is
-   available at ``http://<proxy public VIP>:8090``.
+Steps to Start the Automatic Deploy
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-   When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override
-   the path for the labconfig directory structure containing the PDF and IDF.
+These steps are common to ``virtual``, ``baremetal`` and ``hybrid`` deploys,
+whether ``x86_64``, ``aarch64`` or ``mixed`` (``x86_64`` and ``aarch64``):
 
-   .. code-block:: bash
+- Clone the OPNFV Fuel code from gerrit
+- Checkout the ``Gambia`` release tag
+- Start the deploy script
 
-       $ ci/deploy.sh -b file://<absolute_path_to_labconfig> \
-                      -l <lab_name> \
-                      -p <pod_name> \
-                      -s <scenario> \
-                      -D \
-                      -S <tmp_folder> |& tee deploy.log
+.. NOTE::
 
-   - <absolute_path_to_labconfig> is the absolute path to a local directory, populated
-     similar to Pharos, i.e. PDF/IDF reside in ``<absolute_path_to_labconfig>/labs/<lab_name>``
-   - <lab_name> is the same as the directory in the path above
-   - <pod_name> is the name used for the PDF (``<pod_name>.yaml``) and IDF (``idf-<pod_name>.yaml``) files
+    The deployment uses the OPNFV Pharos project as input (``PDF`` and
+    ``IDF`` files) for hardware and network configuration of all current
+    OPNFV PODs.
 
+    When deploying a new POD, one may pass the ``-b`` flag to the deploy
+    script to override the path for the labconfig directory structure
+    containing the ``PDF`` and ``IDF`` (``<URI to configuration repo ...>`` is
+    the absolute path to a local or remote directory structure, populated
+    similar to `pharos git repo`_, i.e. ``PDF``/``IDF`` reside in a
+    subdirectory called ``labs/<lab_name>``).
 
+.. code-block:: console
 
-Pod and Installer Descriptor Files
-==================================
+    jenkins@jumpserver:~$ git clone https://git.opnfv.org/fuel
+    jenkins@jumpserver:~$ cd fuel
+    jenkins@jumpserver:~/fuel$ git checkout opnfv-7.0.0
+    jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> \
+                                            -p <pod_name> \
+                                            -b <URI to configuration repo containing the PDF/IDF files> \
+                                            -s <scenario> \
+                                            -D \
+                                            -S <Storage directory for deploy artifacts> |& tee deploy.log
 
-Descriptor files provide the installer with an abstraction of the target pod
-with all its hardware characteristics and required parameters. This information
-is split into two different files:
-Pod Descriptor File (PDF) and Installer Descriptor File (IDF).
+.. TIP::
 
-The Pod Descriptor File is a hardware description of the pod
-infrastructure. The information is modeled under a yaml structure.
-A reference file with the expected yaml structure is available at
-``mcp/config/labs/local/pod1.yaml``.
+    Besides the basic options, there are other recommended deploy arguments:
 
-The hardware description is arranged into a main "jumphost" node and a "nodes"
-set for all target boards. For each node the following characteristics
-are defined:
+    - use the ``-D`` option to enable debug info;
+    - use the ``-S`` option to point to a tmp dir where the disk images are
+      saved; the deploy artifacts will be re-used on subsequent
+      (re)deployments;
+    - use ``|& tee`` to save the deploy log to a file;
+
+Typical Cluster Examples
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Common cluster layouts usually fall into one of the cases described below,
+categorized by deployment type (``baremetal``, ``virtual`` or ``hybrid``) and
+high availability (``HA`` or ``noHA``).
 
-- Node parameters including CPU features and total memory.
-- A list of available disks.
-- Remote management parameters.
-- Network interfaces list including mac address, speed, advanced features and name.
+A simplified overview of the steps ``deploy.sh`` will automatically perform:
+
+- create a Salt Master Docker container on the jumpserver, which will drive
+  the rest of the installation;
+- ``baremetal`` or ``hybrid`` only: create a ``MaaS`` infrastructure node VM,
+  which Salt will leverage for provisioning the operating system on the
+  ``baremetal`` nodes;
+- leverage Salt to install & configure OpenStack;
 
 .. NOTE::
 
-    The fixed IPs are ignored by the MCP installer script and it will instead
-    assign based on the network ranges defined in IDF.
+    A virtual network ``mcpcontrol`` is always created for the initial
+    connection of the VMs on the Jumphost.
 
-The Installer Descriptor File extends the PDF with pod related parameters
-required by the installer. This information may differ per each installer type
-and it is not considered part of the pod infrastructure.
-The IDF file must be named after the PDF with the prefix "idf-". A reference file with the expected
-structure is available at ``mcp/config/labs/local/idf-pod1.yaml``.
-
-The file follows a yaml structure and two sections "net_config" and "fuel" are expected.
-
-The "net_config" section describes all the internal and provider networks
-assigned to the pod. Each used network is expected to have a vlan tag, IP subnet and
-attached interface on the boards. Untagged vlans shall be defined as "native".
-
-The "fuel" section defines several sub-sections required by the Fuel installer:
-
-- jumphost: List of bridge names for each network on the Jumpserver.
-- network: List of device name and bus address info of all the target nodes.
-  The order must be aligned with the order defined in PDF file. Fuel installer relies on the IDF model
-  to setup all node NICs by defining the expected device name and bus address.
-- maas: Defines the target nodes commission timeout and deploy timeout. (optional)
-- reclass: Defines compute parameter tuning, including huge pages, cpu pinning
-  and other DPDK settings. (optional)
-
-The following parameters can be defined in the IDF files under "reclass". Those value will
-overwrite the default configuration values in Fuel repository:
-
-- nova_cpu_pinning: List of CPU cores nova will be pinned to. Currently disabled.
-- compute_hugepages_size: Size of each persistent huge pages. Usual values are '2M' and '1G'.
-- compute_hugepages_count: Total number of persistent huge pages.
-- compute_hugepages_mount: Mount point to use for huge pages.
-- compute_kernel_isolcpu: List of certain CPU cores that are isolated from Linux scheduler.
-- compute_dpdk_driver: Kernel module to provide userspace I/O support.
-- compute_ovs_pmd_cpu_mask: Hexadecimal mask of CPUs to run DPDK Poll-mode drivers.
-- compute_ovs_dpdk_socket_mem: Set of amount huge pages in MB to be used by OVS-DPDK daemon
-  taken for each NUMA node. Set size is equal to NUMA nodes count, elements are divided by comma.
-- compute_ovs_dpdk_lcore_mask: Hexadecimal mask of DPDK lcore parameter used to run DPDK processes.
-- compute_ovs_memory_channels: Number of memory channels to be used.
-- dpdk0_driver: NIC driver to use for physical network interface.
-- dpdk0_n_rxq: Number of RX queues.
-
-
-The full description of the PDF and IDF file structure are available as yaml schemas.
-The schemas are defined as a git submodule in Fuel repository. Input files provided
-to the installer will be validated against the schemas.
-
-- ``mcp/scripts/pharos/config/pdf/pod1.schema.yaml``
-- ``mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml``
+.. WARNING::
 
-=============
-Release Notes
-=============
+    A single cluster deployment per ``jumpserver`` node is currently
+    supported, regardless of its type (``virtual``, ``baremetal`` or
+    ``hybrid``).
 
-Please refer to the :ref:`Release Notes <fuel-release-notes-label>` article.
+Once the deployment is complete, the following should be accessible:
 
-==========
-References
-==========
++---------------+----------------------------------+---------------------------+
+| Resource      | ``HA`` scenario                  | ``noHA`` scenario         |
++===============+==================================+===========================+
+| ``Horizon``   | ``https://<prx public VIP>``     | ``http://<ctl VIP>:8078`` |
+| (OpenStack    |                                  |                           |
+| Dashboard)    |                                  |                           |
++---------------+----------------------------------+---------------------------+
+| ``SaltStack`` | ``http://<prx public VIP>:8090`` | N/A                       |
+| Deployment    |                                  |                           |
+| Documentation |                                  |                           |
++---------------+----------------------------------+---------------------------+
 
-OPNFV
+.. SEEALSO::
 
-1) `OPNFV Home Page <https://www.opnfv.org>`_
-2) `OPNFV documentation <https://docs.opnfv.org>`_
-3) `Software downloads <https://www.opnfv.org/software/download>`_
+    For more details on locating and importing the generated SSL certificate,
+    see :ref:`OPNFV Fuel User Guide <fuel-userguide>`.
 
-OpenStack
+``virtual`` ``noHA`` POD
+------------------------
 
-4) `OpenStack Queens Release Artifacts <https://www.openstack.org/software/queens>`_
-5) `OpenStack Documentation <https://docs.openstack.org>`_
+The following figure shows two generic examples of ``virtual`` deploys,
+each on a separate Jumphost node, both behind the same ``TOR`` switch:
 
-OpenDaylight
+- Jumphost 1 has only virsh bridges (created by the deploy script);
+- Jumphost 2 has a mix of Linux (manually created) and ``libvirt`` managed
+  bridges (created by the deploy script);
 
-6) `OpenDaylight Artifacts <https://www.opendaylight.org/software/downloads>`_
+.. figure:: img/fuel_virtual_noha.png
+   :align: center
+   :width: 60%
+   :alt: OPNFV Fuel Virtual noHA POD Network Layout Examples
+
+   OPNFV Fuel Virtual noHA POD Network Layout Examples
+
+   +-------------+------------------------------------------------------------+
+   | ``cfg01``   | Salt Master Docker container                               |
+   +-------------+------------------------------------------------------------+
+   | ``ctl01``   | Controller VM                                              |
+   +-------------+------------------------------------------------------------+
+   | ``gtw01``   | Gateway VM with neutron services                           |
+   |             | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc)     |
+   +-------------+------------------------------------------------------------+
+   | ``odl01``   | VM on which ``ODL`` runs                                   |
+   |             | (for scenarios deployed with ODL)                          |
+   +-------------+------------------------------------------------------------+
+   | ``cmp001``, | Compute VMs                                                |
+   | ``cmp002``  |                                                            |
+   +-------------+------------------------------------------------------------+
+
+.. TIP::
+
+    If external access to the ``public`` network is not required, there is
+    little to no motivation to create a custom ``PDF``/``IDF`` set for a
+    virtual deployment.
+
+    Instead, the existing virtual PODs definitions in `pharos git repo`_ can
+    be used as-is:
+
+    - ``ericsson-virtual1`` for ``x86_64``;
+    - ``arm-virtual2`` for ``aarch64``;
+
+.. code-block:: console
+
+    # example deploy cmd for an x86_64 virtual cluster
+    jenkins@jumpserver:~/fuel$ ci/deploy.sh -l ericsson \
+                                            -p virtual1 \
+                                            -s os-nosdn-nofeature-noha \
+                                            -D \
+                                            -S /home/jenkins/tmpdir |& tee deploy.log
+
+``baremetal`` ``noHA`` POD
+--------------------------
 
-Fuel
+.. WARNING::
 
-7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_
+    These scenarios are not tested in OPNFV CI, so they are considered
+    experimental.
 
-Salt
+.. figure:: img/fuel_baremetal_noha.png
+   :align: center
+   :width: 60%
+   :alt: OPNFV Fuel Baremetal noHA POD Network Layout Example
+
+   OPNFV Fuel Baremetal noHA POD Network Layout Example
+
+   +-------------+------------------------------------------------------------+
+   | ``cfg01``   | Salt Master Docker container                               |
+   +-------------+------------------------------------------------------------+
+   | ``mas01``   | MaaS Node VM                                               |
+   +-------------+------------------------------------------------------------+
+   | ``ctl01``   | Baremetal controller node                                  |
+   +-------------+------------------------------------------------------------+
+   | ``gtw01``   | Baremetal Gateway with neutron services                    |
+   |             | (dhcp agent, L3 agent, metadata, etc)                      |
+   +-------------+------------------------------------------------------------+
+   | ``odl01``   | Baremetal node on which ODL runs                           |
+   |             | (for scenarios deployed with ODL, otherwise unused)        |
+   +-------------+------------------------------------------------------------+
+   | ``cmp001``, | Baremetal Computes                                         |
+   | ``cmp002``  |                                                            |
+   +-------------+------------------------------------------------------------+
+   | Tenant VM   | VM running in the cloud                                    |
+   +-------------+------------------------------------------------------------+
+
+``baremetal`` ``HA`` POD
+------------------------
+
+.. figure:: img/fuel_baremetal_ha.png
+   :align: center
+   :width: 60%
+   :alt: OPNFV Fuel Baremetal HA POD Network Layout Example
+
+   OPNFV Fuel Baremetal HA POD Network Layout Example
+
+   +---------------------------+----------------------------------------------+
+   | ``cfg01``                 | Salt Master Docker container                 |
+   +---------------------------+----------------------------------------------+
+   | ``mas01``                 | MaaS Node VM                                 |
+   +---------------------------+----------------------------------------------+
+   | ``kvm01``,                | Baremetals which hold the VMs with           |
+   | ``kvm02``,                | controller functions                         |
+   | ``kvm03``                 |                                              |
+   +---------------------------+----------------------------------------------+
+   | ``prx01``,                | Proxy VMs for Nginx                          |
+   | ``prx02``                 |                                              |
+   +---------------------------+----------------------------------------------+
+   | ``msg01``,                | RabbitMQ Service VMs                         |
+   | ``msg02``,                |                                              |
+   | ``msg03``                 |                                              |
+   +---------------------------+----------------------------------------------+
+   | ``dbs01``,                | MySQL service VMs                            |
+   | ``dbs02``,                |                                              |
+   | ``dbs03``                 |                                              |
+   +---------------------------+----------------------------------------------+
+   | ``mdb01``,                | Telemetry VMs                                |
+   | ``mdb02``,                |                                              |
+   | ``mdb03``                 |                                              |
+   +---------------------------+----------------------------------------------+
+   | ``odl01``                 | VM on which ``OpenDaylight`` runs            |
+   |                           | (for scenarios deployed with ``ODL``)        |
+   +---------------------------+----------------------------------------------+
+   | ``cmp001``,               | Baremetal Computes                           |
+   | ``cmp002``                |                                              |
+   +---------------------------+----------------------------------------------+
+   | Tenant VM                 | VM running in the cloud                      |
+   +---------------------------+----------------------------------------------+
+
+.. code-block:: console
+
+    # x86_x64 baremetal deploy on pod2 from Linux Foundation lab (lf-pod2)
+    jenkins@jumpserver:~/fuel$ ci/deploy.sh -l lf \
+                                            -p pod2 \
+                                            -s os-nosdn-nofeature-ha \
+                                            -D \
+                                            -S /home/jenkins/tmpdir |& tee deploy.log
+
+.. code-block:: console
+
+    # aarch64 baremetal deploy on pod5 from Enea ARM lab (arm-pod5)
+    jenkins@jumpserver:~/fuel$ ci/deploy.sh -l arm \
+                                            -p pod5 \
+                                            -s os-nosdn-nofeature-ha \
+                                            -D \
+                                            -S /home/jenkins/tmpdir |& tee deploy.log
+
+``hybrid`` ``noHA`` POD
+-----------------------
+
+.. figure:: img/fuel_hybrid_noha.png
+   :align: center
+   :width: 60%
+   :alt: OPNFV Fuel Hybrid noHA POD Network Layout Examples
+
+   OPNFV Fuel Hybrid noHA POD Network Layout Examples
+
+   +-------------+------------------------------------------------------------+
+   | ``cfg01``   | Salt Master Docker container                               |
+   +-------------+------------------------------------------------------------+
+   | ``mas01``   | MaaS Node VM                                               |
+   +-------------+------------------------------------------------------------+
+   | ``ctl01``   | Controller VM                                              |
+   +-------------+------------------------------------------------------------+
+   | ``gtw01``   | Gateway VM with neutron services                           |
+   |             | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc)     |
+   +-------------+------------------------------------------------------------+
+   | ``odl01``   | VM on which ``ODL`` runs                                   |
+   |             | (for scenarios deployed with ODL)                          |
+   +-------------+------------------------------------------------------------+
+   | ``cmp001``, | Baremetal Computes                                         |
+   | ``cmp002``  |                                                            |
+   +-------------+------------------------------------------------------------+
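+
+A ``hybrid`` deploy uses the same entry point as the other POD types. The
+sketch below is illustrative only: the lab/POD names are placeholders that
+must match existing ``PDF``/``IDF`` files, and the scenario should be one
+supported by the target POD:
+
+.. code-block:: console
+
+    # hybrid deploy (virtual control plane + baremetal computes)
+    jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> \
+                                            -p <pod_name> \
+                                            -s <scenario_name> \
+                                            -D \
+                                            -S /home/jenkins/tmpdir |& tee deploy.log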
+
+Automatic Deploy Breakdown
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When an automatic deploy is started, the following operations are performed
+sequentially by the deploy script:
+
++------------------+----------------------------------------------------------+
+| **Deploy stage** | **Details**                                              |
++==================+==========================================================+
+| Argument         | environment variables and command-line arguments passed  |
+| Parsing          | to ``deploy.sh`` are interpreted                         |
++------------------+----------------------------------------------------------+
+| Distribution     | Install and/or configure mandatory requirements on the   |
+| Package          | ``jumpserver`` node:                                     |
+| Installation     |                                                          |
+|                  | - ``Docker`` (from upstream and not distribution repos,  |
+|                  |   as the version included in ``Ubuntu`` ``Xenial`` is    |
+|                  |   outdated);                                             |
+|                  | - ``docker-compose`` (from upstream, as the version      |
+|                  |   included in both ``CentOS 7`` and                      |
+|                  |   ``Ubuntu Xenial 16.04`` has dependency issues on most  |
+|                  |   systems);                                              |
+|                  | - ``virt-inst`` (from upstream, as the version included  |
+|                  |   in ``Ubuntu Xenial 16.04`` is outdated and lacks       |
+|                  |   certain required features);                            |
+|                  | - other miscellaneous requirements, depending on         |
+|                  |   ``jumpserver`` distribution OS;                        |
+|                  |                                                          |
+|                  | .. SEEALSO::                                             |
+|                  |                                                          |
+|                  |     - ``mcp/scripts/requirements_deb.yaml`` (``Ubuntu``) |
+|                  |     - ``mcp/scripts/requirements_rpm.yaml`` (``CentOS``) |
+|                  |                                                          |
+|                  | .. WARNING::                                             |
+|                  |                                                          |
+|                  |     Minimum required ``Docker`` version is ``17.x``.     |
+|                  |                                                          |
+|                  | .. WARNING::                                             |
+|                  |                                                          |
+|                  |     Minimum required ``virt-inst`` version is ``1.4``.   |
++------------------+----------------------------------------------------------+
+| Patch            | For each ``git`` submodule in the OPNFV Fuel repository, |
+| Apply            | if a subdirectory with the same name exists under        |
+|                  | ``mcp/patches``, all patches in that subdirectory are    |
+|                  | applied using ``git-am`` to the respective ``git``       |
+|                  | submodule.                                               |
+|                  |                                                          |
+|                  | This allows OPNFV Fuel to alter upstream repositories'   |
+|                  | contents before consuming them, including:               |
+|                  |                                                          |
+|                  | - ``Docker`` container build process customization;      |
+|                  | - ``salt-formulas`` customization;                       |
+|                  | - ``reclass.system`` customization;                      |
+|                  |                                                          |
+|                  | .. SEEALSO::                                             |
+|                  |                                                          |
+|                  |     - ``mcp/patches/README.rst``                         |
++------------------+----------------------------------------------------------+
+| SSH RSA Keypair  | If not already present, an RSA keypair is generated on   |
+| Generation       | the ``jumpserver`` node at:                              |
+|                  |                                                          |
+|                  | - ``/var/lib/opnfv/mcp.rsa{,.pub}``                      |
+|                  |                                                          |
+|                  | The public key will be added to the ``authorized_keys``  |
+|                  | list for user ``ubuntu``, so the private key can be used |
+|                  | for key-based logins on:                                 |
+|                  |                                                          |
+|                  | - ``cfg01``, ``mas01`` infrastructure nodes;             |
+|                  | - all cluster nodes (``baremetal`` and/or ``virtual``),  |
+|                  |   including ``VCP`` VMs;                                 |
++------------------+----------------------------------------------------------+
+| ``j2``           | Based on ``XDF`` (``PDF``, ``IDF``, ``SDF``) and         |
+| Expansion        | additional deployment configuration determined during    |
+|                  | ``argument parsing`` stage described above, all Jinja2   |
+|                  | templates are expanded, including:                       |
+|                  |                                                          |
+|                  | - various classes in ``reclass.cluster``;                |
+|                  | - docker-compose ``yaml`` for Salt Master bring-up;      |
+|                  | - ``libvirt`` network definitions (``xml``);             |
++------------------+----------------------------------------------------------+
+| Jumpserver       | Basic validation that common ``jumpserver`` requirements |
+| Requirements     | are satisfied, e.g. ``PXE/admin`` is a Linux bridge if   |
+| Check            | ``baremetal`` nodes are defined in the ``PDF``.          |
++------------------+----------------------------------------------------------+
+| Infrastructure   | .. NOTE::                                                |
+| Setup            |                                                          |
+|                  |     All steps apply to and only to the ``jumpserver``.   |
+|                  |                                                          |
+|                  | - prepare virtual machines;                              |
+|                  | - (re)create ``libvirt`` managed networks;               |
+|                  | - apply ``sysctl`` configuration;                        |
+|                  | - apply ``udev`` configuration;                          |
+|                  | - create & start virtual machines prepared earlier;      |
+|                  | - create & start Salt Master (``cfg01``) Docker          |
+|                  |   container;                                             |
++------------------+----------------------------------------------------------+
+| ``STATE``        | Based on deployment type, scenario and other parameters, |
+| Files            | a ``STATE`` file list is constructed, then executed      |
+|                  | sequentially.                                            |
+|                  |                                                          |
+|                  | .. TIP::                                                 |
+|                  |                                                          |
+|                  |     The table below lists all current ``STATE`` files    |
+|                  |     and their intended action.                           |
+|                  |                                                          |
+|                  | .. SEEALSO::                                             |
+|                  |                                                          |
+|                  |     For more information on how the list of ``STATE``    |
+|                  |     files is constructed, see                            |
+|                  |     :ref:`OPNFV Fuel User Guide <fuel-userguide>`.       |
++------------------+----------------------------------------------------------+
+| Log              | Contents of ``/var/log`` are recursively gathered from   |
+| Collection       | all the nodes, then archived together for later          |
+|                  | inspection.                                              |
++------------------+----------------------------------------------------------+
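+
+The minimum versions called out in the warnings above can be verified on the
+``jumpserver`` before (or after) the automatic installation performed by
+``deploy.sh``. A minimal sanity-check sketch (version output is
+system-specific and shown here only as an example):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ docker --version
+    Docker version 17.06.2-ce, build cec0b72
+    jenkins@jumpserver:~$ virt-install --version
+    1.4.0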
+
+``STATE`` Files Overview
+------------------------
+
++---------------------------+-------------------------------------------------+
+| ``STATE`` file            | Targets involved and main intended action       |
++===========================+=================================================+
+| ``virtual_init``          | ``cfg01``: reclass node generation              |
+|                           |                                                 |
+|                           | ``jumpserver`` VMs (e.g. ``mas01``): basic OS   |
+|                           | config                                          |
++---------------------------+-------------------------------------------------+
+| ``maas``                  | ``mas01``: OS, MaaS installation,               |
+|                           | ``baremetal`` node commissioning and deploy     |
+|                           |                                                 |
+|                           | .. NOTE::                                       |
+|                           |                                                 |
+|                           |     Skipped if no ``baremetal`` nodes are       |
+|                           |     defined in ``PDF`` (``virtual`` deploy).    |
++---------------------------+-------------------------------------------------+
+| ``baremetal_init``        | ``kvm``, ``cmp``: OS install, config            |
++---------------------------+-------------------------------------------------+
+| ``dpdk``                  | ``cmp``: configure OVS-DPDK                     |
++---------------------------+-------------------------------------------------+
+| ``networks``              | ``ctl``: create OpenStack networks              |
++---------------------------+-------------------------------------------------+
+| ``neutron_gateway``       | ``gtw01``: configure Neutron gateway            |
++---------------------------+-------------------------------------------------+
+| ``opendaylight``          | ``odl01``: install & configure ``ODL``          |
++---------------------------+-------------------------------------------------+
+| ``openstack_noha``        | cluster nodes: install OpenStack without ``HA`` |
++---------------------------+-------------------------------------------------+
+| ``openstack_ha``          | cluster nodes: install OpenStack with ``HA``    |
++---------------------------+-------------------------------------------------+
+| ``virtual_control_plane`` | ``kvm``: create ``VCP`` VMs                     |
+|                           |                                                 |
+|                           | ``VCP`` VMs: basic OS config                    |
+|                           |                                                 |
+|                           | .. NOTE::                                       |
+|                           |                                                 |
+|                           |     Skipped if ``-N`` deploy argument is used.  |
++---------------------------+-------------------------------------------------+
+| ``tacker``                | ``ctl``: install & configure Tacker             |
++---------------------------+-------------------------------------------------+
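+
+The ``STATE`` files are plain files executed in sequence by the deploy
+script, so they can be inspected directly. A minimal sketch, assuming they
+reside under ``mcp/config/states`` in the OPNFV Fuel repository checkout:
+
+.. code-block:: console
+
+    jenkins@jumpserver:~/fuel$ ls mcp/config/states
+    baremetal_init  maas  networks  opendaylight  virtual_init  ...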
 
-8) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
-9) `Saltstack Formulas <https://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_
+Release Notes
+=============
+
+Please refer to the :ref:`OPNFV Fuel Release Notes <fuel-releasenotes>`
+article.
 
-Reclass
+References
+==========
 
-10) `Reclass model <https://reclass.pantsfullofunix.net>`_
+For more information on the OPNFV ``Gambia`` 7.0 release, please see:
+
+#. `OPNFV Home Page`_
+#. `OPNFV Documentation`_
+#. `OPNFV Software Downloads`_
+#. `OPNFV Gambia Wiki Page`_
+#. `OpenStack Queens Release Artifacts`_
+#. `OpenStack Documentation`_
+#. `OpenDaylight Artifacts`_
+#. `Mirantis Cloud Platform Documentation`_
+#. `Saltstack Documentation`_
+#. `Saltstack Formulas`_
+#. `Reclass`_
+
+.. FIXME: cleanup unused refs, extend above list
+.. _`OpenDaylight`: https://www.opendaylight.org/software
+.. _`OpenDaylight Artifacts`: https://www.opendaylight.org/software/downloads
+.. _`MCP`: https://www.mirantis.com/software/mcp/
+.. _`Mirantis Cloud Platform Documentation`: https://docs.mirantis.com/mcp/latest/
+.. _`fuel git repository`: https://git.opnfv.org/fuel
+.. _`pharos git repo`: https://git.opnfv.org/pharos
+.. _`OpenStack Documentation`: https://docs.openstack.org
+.. _`OpenStack Queens Release Artifacts`: https://www.openstack.org/software/queens
+.. _`OPNFV Home Page`: https://www.opnfv.org
+.. _`OPNFV Gambia Wiki Page`: https://wiki.opnfv.org/releases/Gambia
+.. _`OPNFV Documentation`: https://docs.opnfv.org
+.. _`OPNFV Software Downloads`: https://www.opnfv.org/software/download
+.. _`Apache License 2.0`: https://www.apache.org/licenses/LICENSE-2.0
+.. _`Saltstack Documentation`: https://docs.saltstack.com/en/latest/topics/
+.. _`Saltstack Formulas`: https://salt-formulas.readthedocs.io/en/latest/
+.. _`Reclass`: https://reclass.pantsfullofunix.net
+.. _`OPNFV Pharos Specification`: https://wiki.opnfv.org/display/pharos/Pharos+Specification
+.. _`OPNFV PDF Wiki Page`: https://wiki.opnfv.org/display/INF/POD+Descriptor
index 4b1e4fa..d456055 100644 (file)
@@ -1,17 +1,10 @@
-.. _fuel-releasenotes:
-
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Open Platform for NFV Project, Inc. and its contributors
 
-.. _fuel-release-notes-label:
-
-*****************************
-Release notes for Fuel\@OPNFV
-*****************************
+.. _fuel-releasenotes:
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
    release-notes.rst
index 6fd007e..909963c 100644 (file)
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Open Platform for NFV Project, Inc. and its contributors
 
-========
+************************
+OPNFV Fuel Release Notes
+************************
+
 Abstract
 ========
 
-This document compiles the release notes for the Fraser release of
-OPNFV when using Fuel as a deployment tool. This is an unified documentation
-for both x86_64 and aarch64 architectures. All information is common for
-both architectures except when explicitly stated.
+This document provides the release notes for the ``Gambia`` release with the
+Fuel deployment toolchain.
 
+Starting with this release, both ``x86_64`` and ``aarch64`` architectures
+are supported at the same time by the ``fuel`` codebase.
+
+License
+=======
+
+All Fuel and "common" entities are licensed under the `Apache License 2.0`_.
 
-===============
 Important Notes
 ===============
 
-These notes provides release information for the use of Fuel as deployment
-tool for the Fraser release of OPNFV.
+This is the OPNFV ``Gambia`` release that implements the deploy stage of the
+OPNFV CI pipeline via Fuel.
 
-The goal of the Fraser release and this Fuel-based deployment process is
+Fuel is based on the `MCP`_ installation toolchain.
+More information is available at `Mirantis Cloud Platform Documentation`_.
+
+The goal of the ``Gambia`` release and this Fuel-based deployment process is
 to establish a lab ready platform accelerating further development
 of the OPNFV infrastructure.
 
-Carefully follow the installation-instructions.
+Carefully follow the installation instructions.
 
-=======
 Summary
 =======
 
-For Fraser, the typical use of Fuel as an OpenStack installer is
-supplemented with OPNFV unique components such as:
-
-- `OpenDaylight <https://www.opendaylight.org/software>`_
-- `Open vSwitch for NFV <https://wiki.opnfv.org/ovsnfv>`_
+The ``Gambia`` release with the Fuel deployment toolchain establishes an OPNFV
+target system on a Pharos-compliant lab infrastructure. The current definition
+of an OPNFV target system is OpenStack Queens combined with an SDN
+controller, such as OpenDaylight. The system is deployed with OpenStack High
+Availability (HA) for most OpenStack services.
 
-As well as OPNFV-unique configurations of the Hardware and Software stack.
+Fuel also supports non-HA deployments, which consist of a single
+controller, one gateway node and a number of compute nodes.
 
-This Fraser artifact provides Fuel as the deployment stage tool in the
-OPNFV CI pipeline including:
+Fuel supports ``x86_64``, ``aarch64`` or ``mixed`` architecture clusters.
 
-- Documentation built by Jenkins
+Furthermore, Fuel is capable of deploying scenarios in a ``baremetal``,
+``virtual`` or ``hybrid`` fashion. ``virtual`` deployments use multiple VMs on
+the Jump Host and internal networking to simulate the ``baremetal`` deployment.
 
-  - overall OPNFV documentation
+For ``Gambia``, the typical use of Fuel as an OpenStack installer is
+supplemented with OPNFV unique components such as:
 
-  - this document (release notes)
+- `OpenDaylight`_
+- Open Virtual Network (``OVN``)
 
-  - installation instructions
+As well as OPNFV-unique configurations of the Hardware and Software stack.
 
-- Automated deployment of Fraser with running on baremetal or a nested
-  hypervisor environment (KVM)
+This ``Gambia`` artifact provides Fuel as the deployment stage tool in the
+OPNFV CI pipeline including:
 
-- Automated validation of the Fraser deployment
+- Automated (Jenkins, RTD) documentation build & publish (multiple documents);
+- Automated (Jenkins) build & publish of the Salt Master Docker image;
+- Automated (Jenkins) deployment of ``Gambia`` running on baremetal or a nested
+  hypervisor environment (KVM);
+- Automated (Jenkins) validation of the ``Gambia`` deployment.
 
-============
 Release Data
 ============
 
 +--------------------------------------+--------------------------------------+
-| **Project**                          | fuel/armband                         |
+| **Project**                          | fuel                                 |
 |                                      |                                      |
 +--------------------------------------+--------------------------------------+
-| **Repo/tag**                         | opnfv-6.2.1                          |
+| **Repo/tag**                         | opnfv-7.0.0                          |
 |                                      |                                      |
 +--------------------------------------+--------------------------------------+
-| **Release designation**              | Fraser 6.2                           |
+| **Release designation**              | Gambia 7.0                           |
 |                                      |                                      |
 +--------------------------------------+--------------------------------------+
-| **Release date**                     | June 29 2018                         |
+| **Release date**                     | November 2nd, 2018                   |
 |                                      |                                      |
 +--------------------------------------+--------------------------------------+
-| **Purpose of the delivery**          | Fraser alignment to Released         |
-|                                      | MCP baseline + features and          |
-|                                      | bug-fixes for the following          |
-|                                      | feaures:                             |
-|                                      |                                      |
-|                                      | - Open vSwitch for NFV               |
-|                                      | - OpenDaylight                       |
-|                                      | - DPDK                               |
+| **Purpose of the delivery**          | OPNFV Gambia 7.0 release             |
 +--------------------------------------+--------------------------------------+
 
 Version Change
-==============
+--------------
 
 Module Version Changes
-----------------------
-This is the Fraser 6.2 release.
-It is based on following upstream versions:
+~~~~~~~~~~~~~~~~~~~~~~
+
+This is the first tracked version of the ``Gambia`` release with the Fuel
+deployment toolchain. It is based on the following upstream versions:
 
-- MCP Base Release
+- MCP (``Q2`18`` GA release)
 
-- OpenStack Pike Release
+- OpenStack (``Queens`` release)
 
-- OpenDaylight Oxygen Release
+- OpenDaylight (``Fluorine`` release)
+
+- Ubuntu (``16.04`` release)
 
 Document Changes
-----------------
-This is the Fraser 6.2 release.
+~~~~~~~~~~~~~~~~
+
+This is the ``Gambia`` 7.0 release.
 It comes with the following documentation:
 
-- :ref:`fuel-release-installation-label`
+- :ref:`OPNFV Fuel Installation Instruction <fuel-installation>`
 
 - Release notes (This document)
 
-- :ref:`fuel-release-userguide-label`
+- :ref:`OPNFV Fuel Userguide <fuel-userguide>`
 
 Reason for Version
-==================
+------------------
 
 Feature Additions
------------------
-
-**JIRA TICKETS:**
-None
-
-Bug Corrections
----------------
+~~~~~~~~~~~~~~~~~
 
-**JIRA TICKETS:**
+- ``multiarch`` cluster support;
+- ``hybrid`` cluster support;
+- ``PDF``/``IDF`` support for ``virtual`` PODs;
+- ``baremetal`` support for noHA deployments;
+- containerized Salt Master;
+- ``OVN`` scenarios;
 
-`Fraser 6.2 bug fixes  <https://jira.opnfv.org/issues/?filter=12318>`_
+For an exhaustive list, see the `OPNFV Fuel JIRA: Gambia New features`_ filter.
 
-(Also See respective Integrated feature project's bug tracking)
+Bug Corrections
+~~~~~~~~~~~~~~~
 
-Deliverables
-============
+For an exhaustive list, see the `OPNFV Fuel JIRA: Gambia Bugs (fixed)`_ filter.
 
 Software Deliverables
----------------------
-
-- `Fuel@x86_64 installer script files <https://git.opnfv.org/fuel>`_
+~~~~~~~~~~~~~~~~~~~~~
 
-- `Fuel@aarch64 installer script files <https://git.opnfv.org/armband>`_
+- `fuel git repository`_ with multiarch (``x86_64``, ``aarch64`` or ``mixed``)
+  installer script files
 
 Documentation Deliverables
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- :ref:`fuel-release-installation-label`
+- :ref:`OPNFV Fuel Installation Instruction <fuel-installation>`
 
 - Release notes (This document)
 
-- :ref:`fuel-release-userguide-label`
+- :ref:`OPNFV Fuel Userguide <fuel-userguide>`
+
+Scenario Matrix
+---------------
+
++-------------------------+---------------+-------------+------------+
+|                         | ``baremetal`` | ``virtual`` | ``hybrid`` |
++=========================+===============+=============+============+
+| os-nosdn-nofeature-noha |               | ``x86_64``  |            |
++-------------------------+---------------+-------------+------------+
+| os-nosdn-nofeature-ha   | ``x86_64``,   |             |            |
+|                         | ``aarch64``   |             |            |
++-------------------------+---------------+-------------+------------+
+| os-nosdn-ovs-noha       |               | ``x86_64``  |            |
++-------------------------+---------------+-------------+------------+
+| os-nosdn-ovs-ha         | ``x86_64``,   |             |            |
+|                         | ``aarch64``   |             |            |
++-------------------------+---------------+-------------+------------+
+| os-odl-nofeature-noha   |               | ``x86_64``  |            |
++-------------------------+---------------+-------------+------------+
+| os-odl-nofeature-ha     | ``x86_64``,   |             |            |
+|                         | ``aarch64``   |             |            |
++-------------------------+---------------+-------------+------------+
+| os-odl-ovs-noha         |               | ``x86_64``  |            |
++-------------------------+---------------+-------------+------------+
+| os-odl-ovs-ha           | ``x86_64``    |             |            |
++-------------------------+---------------+-------------+------------+
+| os-ovn-nofeature-noha   |               | ``x86_64``  |            |
++-------------------------+---------------+-------------+------------+
+| os-ovn-nofeature-ha     | ``aarch64``   |             |            |
++-------------------------+---------------+-------------+------------+
 
-=========================================
 Known Limitations, Issues and Workarounds
 =========================================
 
 System Limitations
-==================
+------------------
 
 - **Max number of blades:** 1 Jumpserver, 3 Controllers, 20 Compute blades
 
@@ -159,54 +199,50 @@ System Limitations
 
 
 Known Issues
-============
-
-**JIRA TICKETS:**
+------------
 
-`Known issues <https://jira.opnfv.org/issues/?filter=12317>`_
-
-(Also See respective Integrated feature project's bug tracking)
+For an exhaustive list, see the `OPNFV Fuel JIRA: Gambia Known issues`_ filter.
 
 Workarounds
-===========
-
-**JIRA TICKETS:**
-
-None
+-----------
 
-(Also See respective Integrated feature project's bug tracking)
+For an exhaustive list, see the `OPNFV Fuel JIRA: Gambia Workarounds`_ filter.
 
-============
 Test Results
 ============
-The Fraser 6.2 release with the Fuel deployment tool has undergone QA test
+
+The ``Gambia`` 7.0 release with the Fuel deployment tool has undergone QA test
 runs, see separate test results.
 
-==========
 References
 ==========
-For more information on the OPNFV Fraser 6.2 release, please see:
-
-OPNFV
-=====
-
-1) `OPNFV Home Page <https://www.opnfv.org>`_
-2) `OPNFV Documentation <https://docs.opnfv.org>`_
-3) `OPNFV Software Downloads <https://www.opnfv.org/software/download>`_
-
-OpenStack
-=========
-
-4) `OpenStack Pike Release Artifacts <https://www.openstack.org/software/pike>`_
-
-5) `OpenStack Documentation <https://docs.openstack.org>`_
-
-OpenDaylight
-============
-
-6) `OpenDaylight Artifacts <https://www.opendaylight.org/software/downloads>`_
-
-Fuel
-====
 
-7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_
+For more information on the OPNFV ``Gambia`` 7.0 release, please see:
+
+#. `OPNFV Home Page`_
+#. `OPNFV Documentation`_
+#. `OPNFV Software Downloads`_
+#. `OPNFV Gambia Wiki Page`_
+#. `OpenStack Queens Release Artifacts`_
+#. `OpenStack Documentation`_
+#. `OpenDaylight Artifacts`_
+#. `Mirantis Cloud Platform Documentation`_
+
+.. FIXME: cleanup unused refs, extend above list
+.. _`OpenDaylight`: https://www.opendaylight.org/software
+.. _`OpenDaylight Artifacts`: https://www.opendaylight.org/software/downloads
+.. _`MCP`: https://www.mirantis.com/software/mcp/
+.. _`Mirantis Cloud Platform Documentation`: https://docs.mirantis.com/mcp/latest/
+.. _`fuel git repository`: https://git.opnfv.org/fuel
+.. _`OpenStack Documentation`: https://docs.openstack.org
+.. _`OpenStack Queens Release Artifacts`: https://www.openstack.org/software/queens
+.. _`OPNFV Home Page`: https://www.opnfv.org
+.. _`OPNFV Gambia Wiki Page`: https://wiki.opnfv.org/releases/Gambia
+.. _`OPNFV Documentation`: https://docs.opnfv.org
+.. _`OPNFV Software Downloads`: https://www.opnfv.org/software/download
+.. _`Apache License 2.0`: https://www.apache.org/licenses/LICENSE-2.0
+.. OPNFV Fuel Gambia JIRA filters
+.. _`OPNFV Fuel JIRA: Gambia Bugs (fixed)`: https://jira.opnfv.org/issues/?filter=12503
+.. _`OPNFV Fuel JIRA: Gambia New features`: https://jira.opnfv.org/issues/?filter=12504
+.. _`OPNFV Fuel JIRA: Gambia Known issues`: https://jira.opnfv.org/issues/?filter=12505
+.. _`OPNFV Fuel JIRA: Gambia Workarounds`: https://jira.opnfv.org/issues/?filter=12506
index dc12fd0..29509c0 100644 (file)
@@ -4,11 +4,12 @@
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Open Platform for NFV Project, Inc. and its contributors
 
-*************************
-Scenarios for Fuel\@OPNFV
-*************************
+********************
+OPNFV Fuel Scenarios
+********************
 
 .. toctree::
+   :maxdepth: 2
 
    os-nosdn-ovs-noha/index.rst
    os-nosdn-ovs-ha/index.rst
index 723e83b..c9c9b99 100644 (file)
@@ -9,8 +9,6 @@ os-nosdn-ovs-ha overview and description
 ========================================
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
-   os-nosdn-ovs-ha.rst
-
+.. include:: os-nosdn-ovs-ha.rst
index 6841c62..e653a62 100644 (file)
@@ -5,7 +5,6 @@
 This document provides scenario level details for Gambia 7.0 of
 deployment with no SDN controller and no extra features enabled.
 
-============
 Introduction
 ============
 
index 9726dd0..135cefc 100644 (file)
@@ -9,8 +9,6 @@ os-nosdn-ovs-noha overview and description
 ==========================================
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
-   os-nosdn-ovs-noha.rst
-
+.. include:: os-nosdn-ovs-noha.rst
index edda710..42f6ccc 100644 (file)
@@ -5,7 +5,6 @@
 This document provides scenario level details for Gambia 7.0 of
 deployment with no SDN controller and no extra features enabled.
 
-============
 Introduction
 ============
 
index a17c272..d4d5a46 100644 (file)
@@ -9,8 +9,6 @@ os-nosdn-vpp-ha overview and description
 ========================================
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
-   os-nosdn-vpp-ha.rst
-
+.. include:: os-nosdn-vpp-ha.rst
index eb49e3d..80c829a 100644 (file)
@@ -5,7 +5,6 @@
 This document provides scenario level details for Gambia 7.0 of
 deployment with no SDN controller and VPP enabled as virtual switch.
 
-============
 Introduction
 ============
 
index d6576a5..3505985 100644 (file)
@@ -9,8 +9,6 @@ os-nosdn-vpp-noha overview and description
 ==========================================
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
-   os-nosdn-vpp-noha.rst
-
+.. include:: os-nosdn-vpp-noha.rst
index 51a0000..a699779 100644 (file)
@@ -5,7 +5,6 @@
 This document provides scenario level details for Gambia 7.0 of
 deployment with no SDN controller and VPP enabled as virtual switch.
 
-============
 Introduction
 ============
 
index 7041722..5a9b2cd 100644 (file)
@@ -9,8 +9,6 @@ os-ovn-nofeature-ha overview and description
 ============================================
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
-   os-ovn-nofeature-ha.rst
-
+.. include:: os-ovn-nofeature-ha.rst
index cb469cb..0317c4b 100644 (file)
@@ -6,7 +6,6 @@ This document provides scenario level details for Gambia 7.0 of deployment
 with Open Virtual Network (OVN) providing Layers 2 and 3 networking and no
 extra features enabled.
 
-============
 Introduction
 ============
 
index 7c5baf5..ba823f3 100644 (file)
@@ -9,8 +9,6 @@ os-ovn-nofeature-noha overview and description
 ==============================================
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
-   os-ovn-nofeature-noha.rst
-
+.. include:: os-ovn-nofeature-noha.rst
index 0005f75..44bcbfa 100644 (file)
@@ -6,7 +6,6 @@ This document provides scenario level details for Gambia 7.0 of deployment
 with Open Virtual Network (OVN) providing Layers 2 and 3 networking and no
 extra features enabled.
 
-============
 Introduction
 ============
 
diff --git a/docs/release/userguide/img/saltstack.png b/docs/release/userguide/img/saltstack.png
deleted file mode 100644 (file)
index d57452c..0000000
Binary files a/docs/release/userguide/img/saltstack.png and /dev/null differ
index d4330d0..ab616d3 100644 (file)
@@ -1,18 +1,10 @@
-.. _fuel-userguide:
-
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Open Platform for NFV Project, Inc. and its contributors
 
-.. _fuel-release-userguide-label:
-
-**************************
-User guide for Fuel\@OPNFV
-**************************
+.. _fuel-userguide:
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
    userguide.rst
-
index 76639ab..c6602f3 100644 (file)
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Open Platform for NFV Project, Inc. and its contributors
 
-========
+*********************
+OPNFV Fuel User Guide
+*********************
+
 Abstract
 ========
 
-This document contains details about how to use OPNFV Fuel - Fraser
-release - after it was deployed. For details on how to deploy check the
-installation instructions in the :ref:`fuel_userguide_references` section.
+This document contains details about using the OPNFV Fuel ``Gambia`` release
+after it was deployed. For details on how to deploy OpenStack, check
+the installation instructions in the :ref:`fuel_userguide_references` section.
 
-This is an unified documentation for both x86_64 and aarch64
+This is a unified documentation for both ``x86_64`` and ``aarch64``
 architectures. All information is common for both architectures
 except when explicitly stated.
 
-
-
-================
 Network Overview
 ================
 
 Fuel uses several networks to deploy and administer the cloud:
 
-+------------------+---------------------------------------------------------+
-| Network name     | Description                                             |
-|                  |                                                         |
-+==================+=========================================================+
-| **PXE/ADMIN**    | Used for booting the nodes via PXE and/or Salt          |
-|                  | control network                                         |
-+------------------+---------------------------------------------------------+
-| **MCPCONTROL**   | Used to provision the infrastructure VMs (Salt & MaaS)  |
-+------------------+---------------------------------------------------------+
-| **Mgmt**         | Used for internal communication between                 |
-|                  | OpenStack components                                    |
-+------------------+---------------------------------------------------------+
-| **Internal**     | Used for VM data communication within the               |
-|                  | cloud deployment                                        |
-+------------------+---------------------------------------------------------+
-| **Public**       | Used to provide Virtual IPs for public endpoints        |
-|                  | that are used to connect to OpenStack services APIs.    |
-|                  | Used by Virtual machines to access the Internet         |
-+------------------+---------------------------------------------------------+
++------------------+----------------------------------------------------------+
+| Network name     | Description                                              |
+|                  |                                                          |
++==================+==========================================================+
+| **PXE/admin**    | Used for booting the nodes via PXE and/or as the Salt    |
+|                  | control network                                          |
++------------------+----------------------------------------------------------+
+| **mcpcontrol**   | Used to provision the infrastructure hosts (Salt & MaaS) |
++------------------+----------------------------------------------------------+
+| **management**   | Used for internal communication between                  |
+|                  | OpenStack components                                     |
++------------------+----------------------------------------------------------+
+| **internal**     | Used for VM data communication within the                |
+|                  | cloud deployment                                         |
++------------------+----------------------------------------------------------+
+| **public**       | Used to provide Virtual IPs for public endpoints         |
+|                  | that are used to connect to OpenStack services APIs.     |
+|                  | Used by Virtual machines to access the Internet          |
++------------------+----------------------------------------------------------+
+
+These networks, except ``mcpcontrol``, can be Linux bridges configured on the
+Jumpserver before the deployment.
+If they don't exist at deploy time, they will be created by the scripts as
+``libvirt`` managed networks.
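+
+To check which bridges are already defined on the Jumpserver before a deploy,
+standard Linux tooling can be used; the bridge names below are examples only,
+as the actual names are lab-specific and defined in ``IDF``:
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ ip -br link show type bridge
+    admin_br         UP      ...
+    mgmt_br          UP      ...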
+
+Network ``mcpcontrol``
+~~~~~~~~~~~~~~~~~~~~~~
+
+``mcpcontrol`` is a virtual network, managed by libvirt. Its only purpose is to
+provide a simple method of assigning an arbitrary ``INSTALLER_IP`` to the Salt
+master node (``cfg01``), to maintain backwards compatibility with old OPNFV
+Fuel behavior. Normally, end-users only need to change the ``INSTALLER_IP`` if
+the default CIDR (``10.20.0.0/24``) overlaps with existing lab networks.
+
+``mcpcontrol`` has both NAT and DHCP enabled, so the Salt master (``cfg01``)
+and the MaaS VM (``mas01``, when present) get assigned predefined IPs (``.2``
+and ``.3`` respectively), while the jumpserver bridge port gets ``.1``.
+
++------------------+---------------------------+-----------------------------+
+| Host             | Offset in IP range        | Default address             |
++==================+===========================+=============================+
+| ``jumpserver``   | 1st                       | ``10.20.0.1``               |
++------------------+---------------------------+-----------------------------+
+| ``cfg01``        | 2nd                       | ``10.20.0.2``               |
++------------------+---------------------------+-----------------------------+
+| ``mas01``        | 3rd                       | ``10.20.0.3``               |
++------------------+---------------------------+-----------------------------+
+
+This network is limited to the ``jumpserver`` host and does not require any
+manual setup.
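+
+Being a ``libvirt`` managed network, ``mcpcontrol`` can be inspected with the
+usual ``virsh`` tooling once the deploy scripts have created it (a sketch,
+assuming the default network name):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ virsh net-list --all
+    jenkins@jumpserver:~$ virsh net-dumpxml mcpcontrol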
+
+Network ``PXE/admin``
+~~~~~~~~~~~~~~~~~~~~~
+
+.. TIP::
+
+    ``PXE/admin`` does not usually use an IP range offset in ``IDF``.
+
+.. NOTE::
+
+    During ``MaaS`` commissioning phase, IP addresses are handed out by
+    ``MaaS``'s DHCP.
 
+.. NOTE::
 
-These networks - except mcpcontrol - can be linux bridges configured before the deploy on the
-Jumpserver. If they don't exists at deploy time, they will be created by the scripts as virsh
-networks.
+    Default addresses in the table below correspond to a ``PXE/admin`` CIDR
+    of ``192.168.11.0/24`` (the usual value used in OPNFV labs).
+
+    This is defined in ``IDF`` and can easily be changed to something else.
+
+.. TODO: detail MaaS DHCP range start/end
+
++------------------+-----------------------+---------------------------------+
+| Host             | Offset in IP range    | Default address                 |
++==================+=======================+=================================+
+| ``jumpserver``   | 1st                   | ``192.168.11.1``                |
+|                  |                       | (manual assignment)             |
++------------------+-----------------------+---------------------------------+
+| ``cfg01``        | 2nd                   | ``192.168.11.2``                |
++------------------+-----------------------+---------------------------------+
+| ``mas01``        | 3rd                   | ``192.168.11.3``                |
++------------------+-----------------------+---------------------------------+
+| ``prx01``,       | 4th,                  | ``192.168.11.4``,               |
+| ``prx02``        | 5th                   | ``192.168.11.5``                |
++------------------+-----------------------+---------------------------------+
+| ``gtw01``,       | ...                   | ``...``                         |
+| ``gtw02``,       |                       |                                 |
+| ``gtw03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``kvm01``,       |                       |                                 |
+| ``kvm02``,       |                       |                                 |
+| ``kvm03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``dbs01``,       |                       |                                 |
+| ``dbs02``,       |                       |                                 |
+| ``dbs03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``msg01``,       |                       |                                 |
+| ``msg02``,       |                       |                                 |
+| ``msg03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``mdb01``,       |                       |                                 |
+| ``mdb02``,       |                       |                                 |
+| ``mdb03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``ctl01``,       |                       |                                 |
+| ``ctl02``,       |                       |                                 |
+| ``ctl03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``odl01``,       |                       |                                 |
+| ``odl02``,       |                       |                                 |
+| ``odl03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``mon01``,       |                       |                                 |
+| ``mon02``,       |                       |                                 |
+| ``mon03``,       |                       |                                 |
+| ``log01``,       |                       |                                 |
+| ``log02``,       |                       |                                 |
+| ``log03``,       |                       |                                 |
+| ``mtr01``,       |                       |                                 |
+| ``mtr02``,       |                       |                                 |
+| ``mtr03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``cmp001``,      |                       |                                 |
+| ``cmp002``,      |                       |                                 |
+| ``...``          |                       |                                 |
++------------------+-----------------------+---------------------------------+
+
+Network ``management``
+~~~~~~~~~~~~~~~~~~~~~~
+
+.. TIP::
+
+    ``management`` often has an IP range offset defined in ``IDF``.
 
-Mcpcontrol exists only on the Jumpserver and needs to be virtual because a DHCP server runs
-on this network and associates static host entry IPs for Salt and Maas VMs.
+.. NOTE::
 
+    Default addresses in the table below correspond to a ``management`` CIDR
+    of ``172.16.10.0/24`` (one of the commonly used values in OPNFV labs).
+    This is defined in ``IDF`` and can easily be changed to something else.
+
+.. WARNING::
+
+    Default addresses in the table below correspond to a ``management`` IP
+    range of ``172.16.10.10-172.16.10.254`` (one of the commonly used values
+    in OPNFV labs). This is defined in ``IDF`` and can easily be changed to
+    something else. Since the ``jumpserver`` address is manually assigned, it
+    is usually not subject to the IP range restriction in ``IDF``.
+
++------------------+-----------------------+---------------------------------+
+| Host             | Offset in IP range    | Default address                 |
++==================+=======================+=================================+
+| ``jumpserver``   | N/A                   | ``172.16.10.1``                 |
+|                  |                       | (manual assignment)             |
++------------------+-----------------------+---------------------------------+
+| ``cfg01``        | 1st                   | ``172.16.10.2``                 |
+|                  |                       | (IP range ignored for now)      |
++------------------+-----------------------+---------------------------------+
+| ``mas01``        | 2nd                   | ``172.16.10.12``                |
++------------------+-----------------------+---------------------------------+
+| ``prx``          | 3rd,                  | ``172.16.10.13``,               |
+|                  |                       |                                 |
+| ``prx01``,       | 4th,                  | ``172.16.10.14``,               |
+| ``prx02``        | 5th                   | ``172.16.10.15``                |
++------------------+-----------------------+---------------------------------+
+| ``gtw01``,       | ...                   | ``...``                         |
+| ``gtw02``,       |                       |                                 |
+| ``gtw03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``kvm``,         |                       |                                 |
+|                  |                       |                                 |
+| ``kvm01``,       |                       |                                 |
+| ``kvm02``,       |                       |                                 |
+| ``kvm03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``dbs``,         |                       |                                 |
+|                  |                       |                                 |
+| ``dbs01``,       |                       |                                 |
+| ``dbs02``,       |                       |                                 |
+| ``dbs03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``msg``,         |                       |                                 |
+|                  |                       |                                 |
+| ``msg01``,       |                       |                                 |
+| ``msg02``,       |                       |                                 |
+| ``msg03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``mdb``,         |                       |                                 |
+|                  |                       |                                 |
+| ``mdb01``,       |                       |                                 |
+| ``mdb02``,       |                       |                                 |
+| ``mdb03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``ctl``,         |                       |                                 |
+|                  |                       |                                 |
+| ``ctl01``,       |                       |                                 |
+| ``ctl02``,       |                       |                                 |
+| ``ctl03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``odl``,         |                       |                                 |
+|                  |                       |                                 |
+| ``odl01``,       |                       |                                 |
+| ``odl02``,       |                       |                                 |
+| ``odl03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``mon``,         |                       |                                 |
+|                  |                       |                                 |
+| ``mon01``,       |                       |                                 |
+| ``mon02``,       |                       |                                 |
+| ``mon03``,       |                       |                                 |
+|                  |                       |                                 |
+| ``log``,         |                       |                                 |
+|                  |                       |                                 |
+| ``log01``,       |                       |                                 |
+| ``log02``,       |                       |                                 |
+| ``log03``,       |                       |                                 |
+|                  |                       |                                 |
+| ``mtr``,         |                       |                                 |
+|                  |                       |                                 |
+| ``mtr01``,       |                       |                                 |
+| ``mtr02``,       |                       |                                 |
+| ``mtr03``        |                       |                                 |
++------------------+-----------------------+---------------------------------+
+| ``cmp001``,      |                       |                                 |
+| ``cmp002``,      |                       |                                 |
+| ``...``          |                       |                                 |
++------------------+-----------------------+---------------------------------+
+
+Network ``internal``
+~~~~~~~~~~~~~~~~~~~~
+
+.. TIP::
+
+    ``internal`` does not usually use an IP range offset in ``IDF``.
 
+.. NOTE::
 
-===================
-Accessing the Cloud
-===================
+    Default addresses in the table below correspond to an ``internal`` CIDR
+    of ``10.1.0.0/24`` (the usual value used in OPNFV labs).
+    This is defined in ``IDF`` and can easily be changed to something else.
+
++------------------+------------------------+--------------------------------+
+| Host             | Offset in IP range     | Default address                |
++==================+========================+================================+
+| ``jumpserver``   | N/A                    | ``10.1.0.1``                   |
+|                  |                        | (manual assignment, optional)  |
++------------------+------------------------+--------------------------------+
+| ``gtw01``,       | 1st,                   | ``10.1.0.2``,                  |
+| ``gtw02``,       | 2nd,                   | ``10.1.0.3``,                  |
+| ``gtw03``        | 3rd                    | ``10.1.0.4``                   |
++------------------+------------------------+--------------------------------+
+| ``cmp001``,      | 4th,                   | ``10.1.0.5``,                  |
+| ``cmp002``,      | 5th,                   | ``10.1.0.6``,                  |
+| ``...``          | ...                    | ``...``                        |
++------------------+------------------------+--------------------------------+
 
-Access to any component of the deployed cloud is done from Jumpserver to user *ubuntu* with
-ssh key ``/var/lib/opnfv/mcp.rsa``. The example below is a connection to Salt master.
+Network ``public``
+~~~~~~~~~~~~~~~~~~
 
-    .. code-block:: bash
+.. TIP::
 
-        $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
+    ``public`` often has an IP range offset defined in ``IDF``.
 
 .. NOTE::
 
-    The Salt master IP is not hard set, it is configurable via ``INSTALLER_IP`` during deployment
+    Default addresses in the table below correspond to a ``public`` CIDR of
+    ``172.30.10.0/24`` (one of the values used in OPNFV labs).
+    This is defined in ``IDF`` and can easily be changed to something else.
+
+.. WARNING::
+
+    Default addresses in the table below correspond to a ``public`` IP range
+    of ``172.30.10.100-172.30.10.254`` (one of the values used in OPNFV
+    labs). This is defined in ``IDF`` and can easily be changed to something
+    else. Since the ``jumpserver`` address is manually assigned, it is
+    usually not subject to the IP range restriction in ``IDF``.
+
++------------------+------------------------+--------------------------------+
+| Host             | Offset in IP range     | Default address                |
++==================+========================+================================+
+| ``jumpserver``   | N/A                    | ``172.30.10.72``               |
+|                  |                        | (manual assignment, optional)  |
++------------------+------------------------+--------------------------------+
+| ``prx``,         | 1st,                   | ``172.30.10.101``,             |
+|                  |                        |                                |
+| ``prx01``,       | 2nd,                   | ``172.30.10.102``,             |
+| ``prx02``        | 3rd                    | ``172.30.10.103``              |
++------------------+------------------------+--------------------------------+
+| ``gtw01``,       | 4th,                   | ``172.30.10.104``,             |
+| ``gtw02``,       | 5th,                   | ``172.30.10.105``,             |
+| ``gtw03``        | 6th                    | ``172.30.10.106``              |
++------------------+------------------------+--------------------------------+
+| ``ctl01``,       | ...                    | ``...``                        |
+| ``ctl02``,       |                        |                                |
+| ``ctl03``        |                        |                                |
++------------------+------------------------+--------------------------------+
+| ``odl``,         |                        |                                |
++------------------+------------------------+--------------------------------+
+| ``cmp001``,      |                        |                                |
+| ``cmp002``,      |                        |                                |
+| ``...``          |                        |                                |
++------------------+------------------------+--------------------------------+
+
+Accessing the Salt Master Node (``cfg01``)
+==========================================
+
+The Salt Master node (``cfg01``) runs an ``sshd`` server listening on
+``0.0.0.0:22``.
+
+To log in as the ``ubuntu`` user, use the RSA private key ``/var/lib/opnfv/mcp.rsa``:
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ ssh -o StrictHostKeyChecking=no \
+                              -i /var/lib/opnfv/mcp.rsa \
+                              -l ubuntu 10.20.0.2
+    ubuntu@cfg01:~$
 
-Logging in to cluster nodes is possible from the Jumpserver and from Salt master. On the Salt master
-cluster hostnames can be used instead of IP addresses:
+.. NOTE::
 
-    .. code-block:: bash
+    User ``ubuntu`` has sudo rights.
 
-        $ sudo -i
-        $ ssh -i mcp.rsa ubuntu@ctl01
+.. TIP::
 
-User *ubuntu* has sudo rights.
+    The Salt master IP (``10.20.0.2``) is not hard set; it is configurable
+    via ``INSTALLER_IP`` during deployment, as sketched below.
 
+.. TIP::
 
-=============================
-Exploring the Cloud with Salt
-=============================
+    Starting with the ``Gambia`` release, ``cfg01`` is containerized, so the
+    following also works (from the ``jumpserver`` only):
 
-To gather information about the cloud, the salt commands can be used. It is based
-around a master-minion idea where the salt-master pushes config to the minions to
-execute actions.
+.. code-block:: console
 
-For example tell salt to execute a ping to ``8.8.8.8`` on all the nodes.
+    jenkins@jumpserver:~$ docker exec -it fuel bash
+    root@cfg01:~$
 
-.. figure:: img/saltstack.png
+Accessing Cluster Nodes
+=======================
 
-Complex filters can be done to the target like compound queries or node roles.
-For more information about Salt see the :ref:`fuel_userguide_references` section.
+Logging in to cluster nodes is possible from the Jumpserver, from the Salt
+master etc.
 
-Some examples are listed below. Note that these commands are issued from Salt master
-as *root* user.
+.. code-block:: console
 
+    jenkins@jumpserver:~$ ssh -i /var/lib/opnfv/mcp.rsa ubuntu@192.168.11.52
 
-#. View the IPs of all the components
-
-    .. code-block:: bash
-
-        root@cfg01:~$ salt "*" network.ip_addrs
-        cfg01.mcp-pike-odl-ha.local:
-           - 10.20.0.2
-           - 172.16.10.100
-        mas01.mcp-pike-odl-ha.local:
-           - 10.20.0.3
-           - 172.16.10.3
-           - 192.168.11.3
-        .........................
+.. TIP::
 
+    ``/etc/hosts`` on ``cfg01`` has all the cluster hostnames, which can be
+    used instead of IP addresses.
 
-#. View the interfaces of all the components and put the output in a file with yaml format
+.. code-block:: console
 
-    .. code-block:: bash
+    root@cfg01:~$ ssh -i ~/fuel/mcp/scripts/mcp.rsa ubuntu@ctl01
 
-        root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
-        root@cfg01:~# cat interfaces.yaml
-        cfg01.mcp-pike-odl-ha.local:
-         enp1s0:
-           hwaddr: 52:54:00:72:77:12
-           inet:
-           - address: 10.20.0.2
-             broadcast: 10.20.0.255
-             label: enp1s0
-             netmask: 255.255.255.0
-           inet6:
-           - address: fe80::5054:ff:fe72:7712
-             prefixlen: '64'
-             scope: link
-           up: true
-        .........................
+Debugging ``MaaS`` Commissioning/Deployment Issues
+==================================================
 
+One of the most common issues when setting up a new POD is ``MaaS`` failing to
+commission/deploy the nodes, usually timing out after a couple of retries.
 
-#. View installed packages in MaaS node
+Such failures might indicate a misconfiguration in the ``PDF``/``IDF``, in
+the ``TOR`` switch configuration, or even faulty hardware.
 
-    .. code-block:: bash
+Here are a couple of pointers for isolating the problem.
 
-        root@cfg01:~# salt "mas*" pkg.list_pkgs
-        mas01.mcp-pike-odl-ha.local:
-            ----------
-            accountsservice:
-                0.6.40-2ubuntu11.3
-            acl:
-                2.2.52-3
-            acpid:
-                1:2.0.26-1ubuntu2
-            adduser:
-                3.113+nmu3ubuntu4
-            anerd:
-                1
-        .........................
+Accessing the ``MaaS`` Dashboard
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
+The ``MaaS`` web-based dashboard is available at
+``http://<mas01 IP address>:5240/MAAS``, e.g.
+``http://172.16.10.12:5240/MAAS``.
 
-#. Execute any linux command on all nodes (list the content of ``/var/log`` in this example)
+The administrator credentials are ``opnfv``/``opnfv_secret``.
 
-    .. code-block:: bash
+.. NOTE::
 
-        root@cfg01:~# salt "*" cmd.run 'ls /var/log'
-        cfg01.mcp-pike-odl-ha.local:
-           alternatives.log
-           apt
-           auth.log
-           boot.log
-           btmp
-           cloud-init-output.log
-           cloud-init.log
-        .........................
+    The ``mas01`` VM does not automatically get an IP address assigned in
+    the public network segment. If the ``MaaS`` dashboard should be
+    accessible from the public network, such an address can be manually
+    added to the last VM NIC in ``mas01`` (which is already connected to the
+    public network bridge), as sketched below.
 
+Ensure Commission/Deploy Timeouts Are Not Too Small
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-#. Execute any linux command on nodes using compound queries filter
+Some hardware takes longer to boot, or to run the initial scripts during the
+commissioning/deployment phases. If that's the case, ``MaaS`` will time out
+waiting for the process to finish. ``MaaS`` logs will reflect that, and the
+issue is usually easy to observe on the nodes' serial console: if the node
+PXE-boots the OS live image and starts executing cloud-init/curtin hooks
+without any critical errors, yet is then powered down/shut off, the timeout
+was most likely hit.
 
-    .. code-block:: bash
+To access the serial console of a node, see the board manufacturer's
+documentation. Some modern hardware no longer has a physical serial
+connector, the serial console usually being replaced by a vendor-specific
+software-based interface.
 
-        root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
-        cfg01.mcp-pike-odl-ha.local:
-           alternatives.log
-           apt
-           auth.log
-           boot.log
-           btmp
-           cloud-init-output.log
-           cloud-init.log
-        .........................
+If the board supports ``SOL`` (Serial Over LAN) via the ``IPMI`` lanplus
+protocol, a simpler way to attach to the serial console is ``ipmitool``.
 
+.. TIP::
 
-#. Execute any linux command on nodes using role filter
+    Early boot stage output might not be shown over ``SOL``, but only over
+    the video console provided by the (vendor-specific) interface.
 
-    .. code-block:: bash
+.. code-block:: console
 
-        root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
-        cmp001.mcp-pike-odl-ha.local:
-           alternatives.log
-           apache2
-           apt
-           auth.log
-           btmp
-           ceilometer
-           cinder
-           cloud-init-output.log
-           cloud-init.log
-        .........................
+    jenkins@jumpserver:~$ ipmitool -H <host BMC IP> -U <user> -P <pass> \
+                                   -I lanplus sol activate
 
+If the timeout is indeed the culprit, set a larger commission/deploy timeout
+in the ``IDF`` and retry.
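+
+A sketch of the relevant ``IDF`` section (the ``fuel.maas`` key names and
+values below are an assumption based on OPNFV lab configurations and may
+differ between releases; check the ``IDF`` spec shipped with the installer):
+
+.. code-block:: yaml
+
+    idf:
+      fuel:
+        maas:
+          # Timeouts (in minutes) for the commission, respectively deploy phases
+          timeout_comissioning: 10
+          timeout_deploying: 15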
 
+Check Jumpserver Network Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-===================
-Accessing Openstack
-===================
+.. code-block:: console
 
-Once the deployment is complete, Openstack CLI is accessible from controller VMs (ctl01..03).
-Openstack credentials are at ``/root/keystonercv3``.
+    jenkins@jumpserver:~$ brctl show
+    jenkins@jumpserver:~$ ifconfig -a
 
-    .. code-block:: bash
++-----------------------+------------------------------------------------+
+| Configuration item    | Expected behavior                              |
++=======================+================================================+
+| IP addresses assigned | IP addresses should be assigned to the bridge, |
+| to bridge ports       | and not to individual bridge ports             |
++-----------------------+------------------------------------------------+
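+
+If an address ended up on a bridge port instead of the bridge itself, it can
+be moved manually (a sketch; the port name ``eno2`` is hypothetical, while
+the bridge name and address should match the actual ``IDF`` configuration):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ sudo ip addr del 192.168.11.1/24 dev eno2
+    jenkins@jumpserver:~$ sudo ip addr add 192.168.11.1/24 dev admin_br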
 
-        root@ctl01:~# source keystonercv3
-        root@ctl01:~# openstack image list
-        +--------------------------------------+-----------------------------------------------+--------+
-        | ID                                   | Name                                          | Status |
-        +======================================+===============================================+========+
-        | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage                                   | active |
-        | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04                                   | active |
-        +--------------------------------------+-----------------------------------------------+--------+
+Check Network Connectivity Between Nodes on the Jumpserver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
+``cfg01`` is a Docker container running on the ``jumpserver``, connected to
+Docker networks (created automatically by docker-compose when the container
+is brought up), which in turn are connected using veth pairs to their
+``libvirt`` managed counterparts.
 
-The OpenStack Dashboard, Horizon, is available at ``http://<proxy public VIP>``.
-The administrator credentials are **admin**/**opnfv_secret**.
+For example, the ``mcpcontrol`` network(s) should look like the output below.
 
-.. figure:: img/horizon_login.png
+.. code-block:: console
 
+    jenkins@jumpserver:~$ brctl show mcpcontrol
+    bridge name   bridge id           STP enabled   interfaces
+    mcpcontrol    8000.525400064f77   yes           mcpcontrol-nic
+                                                    veth_mcp0
+                                                    vnet8
 
-A full list of IPs/services is available at ``<proxy public VIP>:8090`` for baremetal deploys.
+    jenkins@jumpserver:~$ docker network ls
+    NETWORK ID    NAME                              DRIVER   SCOPE
+    81a0fdb3bd78  docker-compose_mcpcontrol         macvlan  local
+    [...]
 
-.. figure:: img/salt_services_ip.png
+    jenkins@jumpserver:~$ docker network inspect docker-compose_mcpcontrol
+    [
+        {
+            "Name": "docker-compose_mcpcontrol",
+            [...]
+            "Options": {
+                "parent": "veth_mcp1"
+            },
+        }
+    ]
 
-==============================
-Guest Operating System Support
-==============================
+Before investigating the rest of the cluster networking configuration, the
+first thing to check is that ``cfg01`` has network connectivity to the other
+jumpserver-hosted nodes, e.g. ``mas01``, and to the jumpserver itself
+(provided that the jumpserver has an IP address in that particular network
+segment).
 
-There are a number of possibilities regarding the guest operating systems which can be spawned
-on the nodes. The current system spawns virtual machines for VCP VMs on the KVM nodes  and VMs
-requested by users in OpenStack compute nodes. Currently the system supports the following
-UEFI-images for the guests:
-
-+------------------+-------------------+------------------+
-| OS name          | x86_64 status     | aarch64 status   |
-+==================+===================+==================+
-| Ubuntu 17.10     | untested          | Full support     |
-+------------------+-------------------+------------------+
-| Ubuntu 16.04     | Full support      | Full support     |
-+------------------+-------------------+------------------+
-| Ubuntu 14.04     | untested          | Full support     |
-+------------------+-------------------+------------------+
-| Fedora atomic 27 | untested          | Full support     |
-+------------------+-------------------+------------------+
-| Fedora cloud 27  | untested          | Full support     |
-+------------------+-------------------+------------------+
-| Debian           | untested          | Full support     |
-+------------------+-------------------+------------------+
-| Centos 7         | untested          | Not supported    |
-+------------------+-------------------+------------------+
-| Cirros 0.3.5     | Full support      | Full support     |
-+------------------+-------------------+------------------+
-| Cirros 0.4.0     | Full support      | Full support     |
-+------------------+-------------------+------------------+
-
-
-The above table covers only UEFI image and implies OVMF/AAVMF firmware on the host. An x86 deployment
-also supports non-UEFI images, however that choice is up to the underlying hardware and the administrator
-to make.
-
-The images for the above operating systems can be found in their respective websites.
+.. code-block:: console
 
+    jenkins@jumpserver:~$ docker exec -it fuel bash
+    root@cfg01:~# ifconfig -a | grep inet
+        inet addr:10.20.0.2     Bcast:0.0.0.0  Mask:255.255.255.0
+        inet addr:172.16.10.2   Bcast:0.0.0.0  Mask:255.255.255.0
+        inet addr:192.168.11.2  Bcast:0.0.0.0  Mask:255.255.255.0
 
-=================
-OpenStack Storage
-=================
+For each network of interest (``mcpcontrol``, ``mgmt``, ``PXE/admin``), check
+that ``cfg01`` can ping the jumpserver IP in that network segment, as well as
+the ``mas01`` IP in that network.
+
+.. NOTE::
 
-OpenStack Cinder is the project behind block storage in OpenStack and Fuel@OPNFV supports LVM out of the box.
-By default x86 supports 2 additional block storage devices and ARMBand supports only one.
-More devices can be supported if the OS-image created has additional properties allowing block storage devices
-to be spawned as SCSI drives. To do this, add the properties below to the server:
+    ``mcpcontrol`` is set up at VM bringup, so it should always be available,
+    while the other networks are configured by Salt as part of the
+    ``virtual_init`` STATE file.
 
-    .. code-block:: bash
+.. code-block:: console
 
-        $ openstack image set --property hw_disk_bus='scsi' --property hw_scsi_model='virtio-scsi' <image>
+    root@cfg01:~# ping -c1 10.20.0.1  # mcpcontrol jumpserver IP
+    root@cfg01:~# ping -c1 10.20.0.3  # mcpcontrol mas01 IP
 
-The choice regarding which bus to use for the storage drives is an important one. Virtio-blk is the default
-choice for Fuel@OPNFV which attaches the drives in ``/dev/vdX``. However, since we want to be able to attach a
-larger number of volumes to the virtual machines, we recommend the switch to SCSI drives which are attached
-in ``/dev/sdX`` instead. Virtio-scsi is a little worse in terms of performance but the ability to add a larger
-number of drives combined with added features like ZFS, Ceph et al, leads us to suggest the use of virtio-scsi in Fuel@OPNFV for both architectures.
+.. TIP::
 
-More details regarding the differences and performance of virtio-blk vs virtio-scsi are beyond the scope
-of this manual but can be easily found in other sources online like `4`_ or `5`_.
+    ``mcpcontrol`` CIDR is configurable via the ``INSTALLER_IP`` env var
+    during deployment. However, IP offsets inside that segment are hard set
+    to ``.1`` for the jumpserver, ``.2`` for ``cfg01`` and ``.3`` for the
+    ``mas01`` node.
 
-.. _4: https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/
+.. code-block:: console
 
-.. _5: https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/
+    root@cfg01:~# salt 'mas*' pillar.item --out yaml \
+                  _param:infra_maas_node01_deploy_address \
+                  _param:infra_maas_node01_address
+    mas01.mcp-ovs-noha.local:
+      _param:infra_maas_node01_address: 172.16.10.12
+      _param:infra_maas_node01_deploy_address: 192.168.11.3
 
-Additional configuration for configuring images in openstack can be found in the OpenStack Glance documentation.
+    root@cfg01:~# ping -c1 192.168.11.1  # PXE/admin jumpserver IP
+    root@cfg01:~# ping -c1 192.168.11.3  # PXE/admin mas01 IP
+    root@cfg01:~# ping -c1 172.16.10.1   # mgmt jumpserver IP
+    root@cfg01:~# ping -c1 172.16.10.12  # mgmt mas01 IP
 
+.. TIP::
 
+    Jumpserver IP addresses for the ``PXE/admin``, ``mgmt`` and ``public``
+    bridges are user-chosen and manually set, so the above snippets should
+    be adjusted accordingly if an IP other than ``.1`` was chosen in each
+    CIDR.
 
-===================
-Openstack Endpoints
-===================
+Alternatively, a quick ``nmap`` scan would work just as well.
 
-For each Openstack service three endpoints are created: ``admin``, ``internal`` and ``public``.
+.. code-block:: console
 
-    .. code-block:: bash
+    root@cfg01:~# apt update && apt install -y nmap
+    root@cfg01:~# nmap -sn 10.20.0.0/24     # expected: cfg01, mas01, jumpserver
+    root@cfg01:~# nmap -sn 192.168.11.0/24  # expected: cfg01, mas01, jumpserver
+    root@cfg01:~# nmap -sn 172.16.10.0/24   # expected: cfg01, mas01, jumpserver
 
-        ubuntu@ctl01:~$ openstack endpoint list --service keystone
-        +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
-        | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                          |
-        +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
-        | 008fec57922b4e9e8bf02c770039ae77 | RegionOne | keystone     | identity     | True    | internal  | http://172.16.10.26:5000/v3  |
-        | 1a1f3c3340484bda9ef7e193f50599e6 | RegionOne | keystone     | identity     | True    | admin     | http://172.16.10.26:35357/v3 |
-        | b0a47d42d0b6491b995d7e6230395de8 | RegionOne | keystone     | identity     | True    | public    | https://10.0.15.2:5000/v3    |
-        +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+Check ``DHCP`` Reaches Cluster Nodes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-MCP sets up all Openstack services to talk to each other over unencrypted
-connections on the internal management network. All admin/internal endpoints use
-plain http, while the public endpoints are https connections terminated via nginx
-at the VCP proxy VMs.
+One common symptom observed during failed commissioning is that ``DHCP`` does
+not work as expected between cluster nodes (baremetal nodes in the cluster; or
+virtual machines on the jumpserver in case of ``hybrid`` deployments) and
+the ``MaaS`` node.
 
-To access the public endpoints an SSL certificate has to be provided. For
-convenience, the installation script will copy the required certificate into
-to the cfg01 node at ``/etc/ssl/certs/os_cacert``.
+To confirm or rule out this possibility, monitor the serial console output of
+one (or more) cluster nodes during ``MaaS`` commissioning. If the node is
+properly configured to attempt PXE boot, yet it times out waiting for an IP
+address from the ``mas01`` ``DHCP`` server, it's worth checking that ``DHCP``
+packets reach the ``jumpserver`` and, in turn, the ``mas01`` VM.
 
-Copy the certificate from the cfg01 node to the client that will access the https
-endpoints and place it under ``/etc/ssl/certs/``. The SSL connection will be established
-automatically after.
+.. code-block:: console
 
-    .. code-block:: bash
+    jenkins@jumpserver:~$ sudo apt update && sudo apt install -y dhcpdump
+    jenkins@jumpserver:~$ sudo dhcpdump -i admin_br
 
-        $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 \
-          "cat /etc/ssl/certs/os_cacert" | sudo tee /etc/ssl/certs/os_cacert
+.. TIP::
 
+    If ``DHCP`` requests are present, but no replies are sent, ``iptables``
+    might be interfering on the jumpserver.
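+
+One quick check (a sketch) is to inspect the ``FORWARD`` chain policy and
+packet counters on the jumpserver:
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ sudo iptables -vnL FORWARD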
 
+Check ``MaaS`` Logs
+~~~~~~~~~~~~~~~~~~~
+
+If networking looks fine, yet nodes still fail to commission and/or deploy,
+``MaaS`` logs might offer more details about the failure:
+
+* ``/var/log/maas/maas.log``
+* ``/var/log/maas/rackd.log``
+* ``/var/log/maas/regiond.log``
+
+.. TIP::
+
+    If the problem is with the cluster node and not on the ``MaaS`` server,
+    the node's kernel logs usually contain useful information.
+    These are saved via rsyslog on the ``mas01`` node, in
+    ``/var/log/maas/rsyslog``.
+
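+To quickly scan all of the above logs for failures (a sketch; the pattern is
+only a starting point):
+
+.. code-block:: console
+
+    root@mas01:~# grep -iE 'error|fail|timed out' /var/log/maas/*.log
+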
+Recovering Failed Deployments
 =============================
-Reclass model viewer tutorial
-=============================
 
+The first deploy attempt might fail for various reasons. If the problem
+is not systemic (i.e. fixing it will not introduce incompatible configuration
+changes, like setting a different ``INSTALLER_IP``), the environment is safe
+to reuse and the deployment process can pick up from where it left off.
+
+Leveraging these mechanisms requires a minimum understanding of how the
+deploy process works, at least for manual ``STATE`` runs.
+
+Automatic (re)deploy
+~~~~~~~~~~~~~~~~~~~~
+
+OPNFV Fuel's ``deploy.sh`` script offers a dedicated argument for this, ``-f``,
+which will skip executing the first ``N`` ``STATE`` files, where ``N`` is the
+number of ``-f`` occurrences in the argument list.
+
+.. TIP::
+
+    The list of ``STATE`` files to be executed for a specific environment
+    depends on the OPNFV scenario chosen, deployment type (``virtual``,
+    ``baremetal`` or ``hybrid``) and the presence/absence of a ``VCP``
+    (virtualized control plane).
+
+e.g.: Let's consider a ``baremetal`` environment, with ``VCP`` and a simple
+scenario ``os-nosdn-nofeature-ha``, where ``deploy.sh`` failed executing the
+``openstack_ha`` ``STATE`` file.
+
+The simplest redeploy approach (which usually works for **any** combination of
+deployment type/VCP/scenario) is to issue the same deploy command as the
+original attempt used, then adding a single ``-f``:
+
+.. code-block:: console
+
+    jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
+                                            -s <scenario> [...] \
+                                            -f # skips running the virtual_init STATE file
 
-In order to get a better understanding on the reclass model Fuel uses, the `reclass-doc
-<https://github.com/jirihybek/reclass-doc>`_ can be used to visualise the reclass model.
-A simplified installation can be done with the use of a docker ubuntu container. This
-approach will avoid installing packages on the host, which might collide with other packages.
-After the installation is done, a webbrowser on the host can be used to view the results.
+All ``STATE`` files are re-entrant, so the above is equivalent (but a little
+slower) to skipping all ``STATE`` files before the ``openstack_ha`` one:
+
+.. code-block:: console
+
+    jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
+                                            -s <scenario> [...] \
+                                            -ffff # skips virtual_init, maas, baremetal_init, virtual_control_plane
+
+.. TIP::
+
+    For fine tuning the infrastructure setup steps executed during deployment,
+    see also the ``-e`` and ``-P`` deploy arguments.
 
 .. NOTE::
 
-    The host can be any device with Docker package already installed.
-    The user which runs the docker needs to have root priviledges.
+    On rare occasions, the cluster cannot be idempotently redeployed (e.g.
+    broken MySQL/Galera cluster), in which case some cleanup is due before
+    (re)running the ``STATE`` files. See the ``-E`` deploy argument, which
+    forces a ``MaaS`` node deletion, followed by a redeployment of all
+    baremetal nodes, if used twice (``-EE``); or only erases the ``VCP`` VMs
+    if used once (``-E``), as sketched below.
+
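+For example, to delete the ``MaaS``-managed baremetal machines and redeploy
+them from scratch (a sketch based on the arguments described above):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
+                                            -s <scenario> [...] \
+                                            -EE
+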
+Manual ``STATE`` Run
+~~~~~~~~~~~~~~~~~~~~
+
+Instead of leveraging the full ``deploy.sh``, one could execute the ``STATE``
+files one by one (or partially) from ``cfg01``.
+
+However, this requires a better understanding of how the list of ``STATE``
+files to be executed is constructed for a specific scenario, depending on the
+deployment type and on whether the cluster has baremetal nodes, as
+implemented in:
 
+* ``mcp/config/scenario/defaults.yaml.j2``
+* ``mcp/config/scenario/<scenario-name>.yaml``
 
-**Instructions**
+e.g.: For the example presented above (baremetal with ``VCP``,
+``os-nosdn-nofeature-ha``), the list of ``STATE`` files would be:
 
+* ``virtual_init``
+* ``maas``
+* ``baremetal_init``
+* ``virtual_control_plane``
+* ``openstack_ha``
+* ``networks``
 
-#. Create a new directory at any location
+To execute one (or more) of the remaining ``STATE`` files after a failure:
 
-    .. code-block:: bash
+.. code-block:: console
 
-        $ mkdir -p modeler
+    jenkins@jumpserver:~$ docker exec -it fuel bash
+    root@cfg01:~$ cd ~/fuel/mcp/config/states
+    root@cfg01:~/fuel/mcp/config/states$ ./openstack_ha
+    root@cfg01:~/fuel/mcp/config/states$ CI_DEBUG=true ./networks
 
+For even finer granularity, one can also run the commands in a ``STATE`` file
+one by one manually, e.g. if the execution failed applying the ``rabbitmq``
+sls:
 
-#. Place fuel repo in the above directory
+.. code-block:: console
 
-    .. code-block:: bash
+    root@cfg01:~$ salt -I 'rabbitmq:server' state.sls rabbitmq
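+
+Salt also supports a dry-run mode, useful for previewing what a state would
+change before applying it (standard Salt functionality, not OPNFV Fuel
+specific):
+
+.. code-block:: console
+
+    root@cfg01:~$ salt -I 'rabbitmq:server' state.sls rabbitmq test=True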
 
-        $ cd modeler
-        $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel
+Exploring the Cloud with Salt
+=============================
 
+To gather information about the cloud, the ``salt`` commands can be used.
+Salt is based around a master-minion model, where the Salt master pushes
+configuration to the minions in order to execute actions.
 
-#. Create a container and mount the above host directory
+For example, tell Salt to execute a ping to ``8.8.8.8`` on all the nodes:
 
-    .. code-block:: bash
+.. code-block:: console
+
+    root@cfg01:~$ salt "*" network.ping 8.8.8.8
+                       ^^^                       target
+                           ^^^^^^^^^^^^          function to execute
+                                        ^^^^^^^  argument passed to the function
+
+.. TIP::
+
+    Complex target filters can be used, such as compound queries or node
+    roles.
+
+For more information about Salt, see the :ref:`fuel_userguide_references`
+section.
+
+Some examples are listed below. Note that these commands are issued from the
+Salt master as the ``root`` user.
+
+View the IPs of All the Components
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+    root@cfg01:~$ salt "*" network.ip_addrs
+    cfg01.mcp-odl-ha.local:
+       - 10.20.0.2
+       - 172.16.10.100
+    mas01.mcp-odl-ha.local:
+       - 10.20.0.3
+       - 172.16.10.3
+       - 192.168.11.3
+    .........................
+
+View the Interfaces of All the Components and Put the Output in a ``yaml`` File
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+    root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
+    root@cfg01:~# cat interfaces.yaml
+    cfg01.mcp-odl-ha.local:
+     enp1s0:
+       hwaddr: 52:54:00:72:77:12
+       inet:
+       - address: 10.20.0.2
+         broadcast: 10.20.0.255
+         label: enp1s0
+         netmask: 255.255.255.0
+       inet6:
+       - address: fe80::5054:ff:fe72:7712
+         prefixlen: '64'
+         scope: link
+       up: true
+    .........................
+
+View Installed Packages on MaaS Node
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+    root@cfg01:~# salt "mas*" pkg.list_pkgs
+    mas01.mcp-odl-ha.local:
+        ----------
+        accountsservice:
+            0.6.40-2ubuntu11.3
+        acl:
+            2.2.52-3
+        acpid:
+            1:2.0.26-1ubuntu2
+        adduser:
+            3.113+nmu3ubuntu4
+        anerd:
+            1
+    .........................
+
+Execute Any Linux Command on All Nodes (e.g. ``ls /var/log``)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+    root@cfg01:~# salt "*" cmd.run 'ls /var/log'
+    cfg01.mcp-odl-ha.local:
+       alternatives.log
+       apt
+       auth.log
+       boot.log
+       btmp
+       cloud-init-output.log
+       cloud-init.log
+    .........................
+
+Execute Any Linux Command on Nodes Using Compound Queries Filter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+    root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
+    cfg01.mcp-odl-ha.local:
+       alternatives.log
+       apt
+       auth.log
+       boot.log
+       btmp
+       cloud-init-output.log
+       cloud-init.log
+    .........................
+
+Execute Any Linux Command on Nodes Using Role Filter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+    root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
+    cmp001.mcp-odl-ha.local:
+       alternatives.log
+       apache2
+       apt
+       auth.log
+       btmp
+       ceilometer
+       cinder
+       cloud-init-output.log
+       cloud-init.log
+    .........................
 
-        $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash
+Accessing OpenStack
+===================
+
+Once the deployment is complete, the OpenStack CLI is accessible from the
+controller VMs (``ctl01`` ... ``ctl03``).
 
+OpenStack credentials are at ``/root/keystonercv3``.
 
-#. Install all the required packages inside the container.
+.. code-block:: console
 
-    .. code-block:: bash
+    root@ctl01:~# source keystonercv3
+    root@ctl01:~# openstack image list
+    +--------------------------------------+-----------------------------------------------+--------+
+    | ID                                   | Name                                          | Status |
+    +--------------------------------------+-----------------------------------------------+--------+
+    | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage                                   | active |
+    | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04                                   | active |
+    +--------------------------------------+-----------------------------------------------+--------+
 
-        $ apt-get update
-        $ apt-get install -y npm nodejs
-        $ npm install -g reclass-doc
-        $ cd /host/fuel/mcp/reclass
-        $ ln -s /usr/bin/nodejs /usr/bin/node
-        $ reclass-doc --output /host /host/fuel/mcp/reclass
+The OpenStack Dashboard, Horizon, is available at ``http://<proxy public VIP>``.
+The administrator credentials are ``admin``/``opnfv_secret``.
+
+.. figure:: img/horizon_login.png
+    :width: 60%
+    :align: center
+
+A full list of IPs/services is available at ``<proxy public VIP>:8090`` for
+``baremetal`` deploys.
+
+.. figure:: img/salt_services_ip.png
+    :width: 60%
+    :align: center
+
+Guest Operating System Support
+==============================
+
+There are a number of possibilities regarding the guest operating systems
+which can be spawned on the nodes.
+The current system spawns virtual machines for the ``VCP`` VMs on the KVM
+nodes, and for the VMs requested by users on the OpenStack compute nodes.
+Currently the system supports the following ``UEFI`` images for the guests:
+
++------------------+-------------------+--------------------+
+| OS name          | ``x86_64`` status | ``aarch64`` status |
++==================+===================+====================+
+| Ubuntu 17.10     | untested          | Full support       |
++------------------+-------------------+--------------------+
+| Ubuntu 16.04     | Full support      | Full support       |
++------------------+-------------------+--------------------+
+| Ubuntu 14.04     | untested          | Full support       |
++------------------+-------------------+--------------------+
+| Fedora atomic 27 | untested          | Full support       |
++------------------+-------------------+--------------------+
+| Fedora cloud 27  | untested          | Full support       |
++------------------+-------------------+--------------------+
+| Debian           | untested          | Full support       |
++------------------+-------------------+--------------------+
+| Centos 7         | untested          | Not supported      |
++------------------+-------------------+--------------------+
+| Cirros 0.3.5     | Full support      | Full support       |
++------------------+-------------------+--------------------+
+| Cirros 0.4.0     | Full support      | Full support       |
++------------------+-------------------+--------------------+
+
+The above table covers only ``UEFI`` images and implies ``OVMF``/``AAVMF``
+firmware on the host. An ``x86_64`` deployment also supports ``non-UEFI``
+images, however that choice is up to the underlying hardware and the
+administrator to make.
+
+The images for the above operating systems can be found on their respective
+websites.
+
+OpenStack Storage
+=================
 
+OpenStack Cinder is the project behind block storage in OpenStack, and OPNFV
+Fuel supports LVM out of the box.
 
-#. View the results from the host by using a browser. The file to open should be now at modeler/index.html
+By default, ``x86_64`` supports 2 additional block storage devices, while
+``aarch64`` supports only one.
 
-   .. figure:: img/reclass_doc.png
+More devices can be supported if the OS image created has additional
+properties allowing block storage devices to be spawned as ``SCSI`` drives.
+To do this, add the properties below to the image:
 
+.. code-block:: console
+
+    root@ctl01:~$ openstack image set --property hw_disk_bus='scsi' \
+                                      --property hw_scsi_model='virtio-scsi' \
+                                      <image>
+
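+The resulting properties can be verified afterwards (a sketch; ``<image>`` is
+the image name or ID used above):
+
+.. code-block:: console
+
+    root@ctl01:~$ openstack image show <image> -c properties
+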
+The choice regarding which bus to use for the storage drives is an important
+one. ``virtio-blk`` is the default choice for OPNFV Fuel, which attaches the
+drives as ``/dev/vdX``. However, since we want to be able to attach a
+larger number of volumes to the virtual machines, we recommend switching to
+``SCSI`` drives, which are attached as ``/dev/sdX`` instead.
+
+``virtio-scsi`` is slightly worse in terms of performance, but the ability to
+add a larger number of drives, combined with added features like ZFS, Ceph et
+al., leads us to suggest the use of ``virtio-scsi`` in OPNFV Fuel for both
+architectures.
+
+More details regarding the differences and performance of ``virtio-blk`` vs
+``virtio-scsi`` are beyond the scope of this manual but can be easily found
+in other sources online like `VirtIO SCSI`_ or `VirtIO performance`_.
+
+Additional information on configuring images in OpenStack can be found in
+the OpenStack Glance documentation.
+
+OpenStack Endpoints
+===================
+
+For each OpenStack service, three endpoints are created: ``admin``,
+``internal`` and ``public``.
+
+.. code-block:: console
+
+    ubuntu@ctl01:~$ openstack endpoint list --service keystone
+    +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+    | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                          |
+    +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+    | 008fec57922b4e9e8bf02c770039ae77 | RegionOne | keystone     | identity     | True    | internal  | http://172.16.10.26:5000/v3  |
+    | 1a1f3c3340484bda9ef7e193f50599e6 | RegionOne | keystone     | identity     | True    | admin     | http://172.16.10.26:35357/v3 |
+    | b0a47d42d0b6491b995d7e6230395de8 | RegionOne | keystone     | identity     | True    | public    | https://10.0.15.2:5000/v3    |
+    +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+
+MCP sets up all OpenStack services to talk to each other over unencrypted
+connections on the internal management network. All admin/internal endpoints
+use plain ``http``, while the public endpoints are ``https`` connections
+terminated via nginx at the ``VCP`` proxy VMs.
+
+To access the public endpoints an SSL certificate has to be provided. For
+convenience, the installation script will copy the required certificate
+to the ``cfg01`` node at ``/etc/ssl/certs/os_cacert``.
+
+Copy the certificate from the ``cfg01`` node to the client that will access
+the https endpoints and place it under ``/etc/ssl/certs/``.
+The SSL connection will then be established automatically.
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 \
+      "cat /etc/ssl/certs/os_cacert" | sudo tee /etc/ssl/certs/os_cacert
+
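+A quick way to confirm the certificate is accepted for the public endpoints
+(a sketch, reusing the example ``keystone`` public endpoint from above):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ curl --cacert /etc/ssl/certs/os_cacert \
+                               https://10.0.15.2:5000/v3
+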
+Reclass Model Viewer Tutorial
+=============================
+
+In order to get a better understanding of the ``reclass`` model Fuel uses,
+the `reclass-doc`_ tool can be used to visualise it.
+
+To avoid installing packages on the ``jumpserver`` or another host, the
+``cfg01`` Docker container can be used. Since the ``fuel`` git repository
+located on the ``jumpserver`` is already mounted inside the ``cfg01``
+container, the results can be visualized using a web browser on the
+``jumpserver`` at the end of the procedure.
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ docker exec -it fuel bash
+    root@cfg01:~$ apt-get update
+    root@cfg01:~$ apt-get install -y npm nodejs
+    root@cfg01:~$ npm install -g reclass-doc
+    root@cfg01:~$ ln -s /usr/bin/nodejs /usr/bin/node
+    root@cfg01:~$ reclass-doc --output ~/fuel/mcp/reclass/modeler \
+                                       ~/fuel/mcp/reclass
+
+The generated documentation should be available on the ``jumpserver``, inside
+the ``fuel`` git repo, at the subpath ``mcp/reclass/modeler/index.html``.
+
+.. figure:: img/reclass_doc.png
+    :width: 60%
+    :align: center
 
 .. _fuel_userguide_references:
 
-==========
 References
 ==========
 
-1) :ref:`fuel-release-installation-label`
-2) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics/>`_
-3) `Saltstack Formulas <https://salt-formulas.readthedocs.io/en/latest/>`_
-4) `Virtio performance <https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/>`_
-5) `Virtio SCSI <https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/>`_
+#. :ref:`OPNFV Fuel Installation Instruction <fuel-installation>`
+#. `Saltstack Documentation`_
+#. `Saltstack Formulas`_
+#. `VirtIO performance`_
+#. `VirtIO SCSI`_
+
+.. _`Saltstack Documentation`: https://docs.saltstack.com/en/latest/topics/
+.. _`Saltstack Formulas`: https://salt-formulas.readthedocs.io/en/latest/
+.. _`VirtIO performance`: https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/
+.. _`VirtIO SCSI`: https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/
+.. _`reclass-doc`: https://github.com/jirihybek/reclass-doc
diff --git a/mcp/config/labs/local/idf-pod1.yaml b/mcp/config/labs/local/idf-pod1.yaml
deleted file mode 100644 (file)
index b916707..0000000
+++ /dev/null
@@ -1,79 +0,0 @@
-##############################################################################
-# Copyright (c) 2018 Linux Foundation, Mirantis Inc., Enea AB and others.
-# All rights reserved. This program and the accompanying materials
-# are made available under the terms of the Apache License, Version 2.0
-# which accompanies this distribution, and is available at
-# http://www.apache.org/licenses/LICENSE-2.0
-##############################################################################
----
-### LF POD 2 installer descriptor file ###
-
-idf:
-  version: 0.1
-  net_config:
-    # NOTE: Network names are likely to change after the PDF spec is updated
-    oob:
-      interface: 0
-      ip-range: 172.30.8.65-172.30.8.75
-      vlan: 410
-    admin:
-      interface: 0
-      vlan: native
-      network: 192.168.11.0  # Untagged, 'PXE/Admin' on wiki, different IP
-      mask: 24
-    mgmt:
-      interface: 0
-      vlan: 300
-      network: 10.167.4.0    # Tagged, 'vlan 300' on wiki
-      ip-range: 10.167.4.10-10.167.4.254  # Some IPs are in use by lab infra
-      mask: 24
-    storage:
-      interface: 3
-      vlan: 301
-      network: 10.2.0.0      # Tagged, not the same with 'storage' on wiki
-      mask: 24
-    private:
-      interface: 1
-      vlan: 1000
-      network: 10.1.0.0      # Tagged, not the same with 'private' on wiki
-      mask: 24
-    public:
-      interface: 2
-      vlan: native
-      network: 172.30.10.0   # Untagged, 'public' on wiki
-      ip-range: 172.30.10.100-172.30.10.254  # Some IPs are in use by lab infra
-      mask: 24
-      gateway: 172.30.10.1
-      dns:
-        - 8.8.8.8
-        - 8.8.4.4
-  fuel:
-    jumphost:
-      bridges:
-        admin: 'pxebr'
-        mgmt: 'br-ctl'
-        private: ~
-        public: ~
-    network:
-      node:
-        # Ordered-list, index should be in sync with node index in PDF
-        - interfaces: &interfaces
-            # Ordered-list, index should be in sync with interface index in PDF
-            - 'enp6s0'
-            - 'enp7s0'
-            - 'enp8s0'
-            - 'enp9s0'
-          busaddr: &busaddr
-            # Bus-info reported by `ethtool -i ethX`
-            - '0000:06:00.0'
-            - '0000:07:00.0'
-            - '0000:08:00.0'
-            - '0000:09:00.0'
-        - interfaces: *interfaces
-          busaddr: *busaddr
-        - interfaces: *interfaces
-          busaddr: *busaddr
-        - interfaces: *interfaces
-          busaddr: *busaddr
-        - interfaces: *interfaces
-          busaddr: *busaddr
diff --git a/mcp/config/labs/local/idf-virtual1.yaml b/mcp/config/labs/local/idf-virtual1.yaml
deleted file mode 100644 (file)
index 402af98..0000000
+++ /dev/null
@@ -1,103 +0,0 @@
-##############################################################################
-# Copyright (c) 2018 Ericsson AB, Mirantis Inc., Enea AB and others.
-# All rights reserved. This program and the accompanying materials
-# are made available under the terms of the Apache License, Version 2.0
-# which accompanies this distribution, and is available at
-# http://www.apache.org/licenses/LICENSE-2.0
-##############################################################################
----
-### Fuel@OPNFV sample VIRTUAL installer descriptor file ###
-
-idf:
-  version: 0.0  # Intentionally invalid to indicate this is experimental
-  net_config:
-    # NOTE: Network names are likely to change after the PDF spec is updated
-    oob:
-      interface: 0
-      ip-range: ~
-      vlan: native
-    # All networks (except OOB) are virtual networks managed by `libvirt`
-    # Interface indexes are based on Fuel installer defaults
-    admin:
-      interface: 0  # when used, will be first vnet interface, untagged
-      vlan: native
-      network: 192.168.11.0
-      mask: 24
-    mgmt:
-      interface: 1  # when used, will be second vnet interface, untagged
-      vlan: native
-      network: 172.16.10.0
-      ip-range: 172.16.10.10-172.16.10.254  # Some IPs are in use by lab infra
-      mask: 24
-    storage:
-      interface: 4  # when used, will be fifth vnet interface, untagged
-      vlan: native
-      network: 192.168.20.0
-      mask: 24
-    private:
-      interface: 2  # when used, will be third vnet interface, untagged
-      vlan: 1000-1999
-      network: 10.1.0.0
-      mask: 24
-    public:
-      interface: 3  # when used, will be fourth vnet interface, untagged
-      vlan: native
-      network: 10.16.0.0
-      ip-range: 10.16.0.100-10.16.0.254  # Some IPs are in use by lab infra
-      mask: 24
-      gateway: 10.16.0.1
-      dns:
-        - 8.8.8.8
-        - 8.8.4.4
-  fuel:
-    jumphost:
-      bridges:
-        admin: ~
-        mgmt: ~
-        private: ~
-        public: ~
-    network:
-      ntp_strata_host1: 1.se.pool.ntp.org
-      ntp_strata_host2: 0.se.pool.ntp.org
-      node:
-        # Ordered-list, index should be in sync with node index in PDF
-        - interfaces: &interfaces
-            # Ordered-list, index should be in sync with interface index in PDF
-            - 'ens3'
-            - 'ens4'
-            - 'ens5'
-            - 'ens6'
-          busaddr: &busaddr
-            # Bus-info reported by `ethtool -i ethX`
-            - '0000:00:03.0'
-            - '0000:00:04.0'
-            - '0000:00:05.0'
-            - '0000:00:06.0'
-        - interfaces: *interfaces
-          busaddr: *busaddr
-        - interfaces: *interfaces
-          busaddr: *busaddr
-        - interfaces: *interfaces
-          busaddr: *busaddr
-        - interfaces: *interfaces
-          busaddr: *busaddr
-    reclass:
-      node:
-        - compute_params: &compute_params
-            common: &compute_params_common
-              compute_hugepages_size: 2M
-              compute_hugepages_count: 2048
-              compute_hugepages_mount: /mnt/hugepages_2M
-            dpdk:
-              <<: *compute_params_common
-              compute_dpdk_driver: uio
-              compute_ovs_pmd_cpu_mask: "0x6"
-              compute_ovs_dpdk_socket_mem: "1024"
-              compute_ovs_dpdk_lcore_mask: "0x8"
-              compute_ovs_memory_channels: "2"
-              dpdk0_driver: igb_uio
-              dpdk0_n_rxq: 2
-        - compute_params: *compute_params
-        - compute_params: *compute_params
-        - compute_params: *compute_params
-        - compute_params: *compute_params
diff --git a/mcp/config/labs/local/pod1.yaml b/mcp/config/labs/local/pod1.yaml
deleted file mode 100644 (file)
index 219b2a6..0000000
+++ /dev/null
@@ -1,199 +0,0 @@
-##############################################################################
-# Copyright (c) 2018 Linux Foundation, Enea AB and others.
-# All rights reserved. This program and the accompanying materials
-# are made available under the terms of the Apache License, Version 2.0
-# which accompanies this distribution, and is available at
-# http://www.apache.org/licenses/LICENSE-2.0
-##############################################################################
----
-### LF POD 2 descriptor file ###
-
-version: 1.0
-details:
-  pod_owner: Trevor Bramwell
-  contact: tbramwell@linuxfoundation.org
-  lab: LF Pharos Lab
-  location: Portland
-  type: production
-  link: https://wiki.opnfv.org/display/pharos/LF+POD+2
-##############################################################################
-jumphost:
-  name: pod2-jump
-  node:
-    type: baremetal
-    vendor: Cisco Systems Inc
-    model: UCSB-B200-M4
-    arch: x86_64
-    cpus: 2
-    cpu_cflags: haswell
-    cores: 8
-    memory: 128G
-  disks: &disks
-    - name: 'disk1'
-      disk_capacity: 2400G
-      disk_type: hdd
-      disk_interface: sas
-      disk_rotation: 0
-  os: centos-7
-  remote_params: &remote_params
-    type: ipmi
-    versions:
-      - 2.0
-    user: admin
-    pass: octopus
-  remote_management:
-    <<: *remote_params
-    address: 172.30.8.83
-    mac_address: "a8:9d:21:c9:c4:9e"
-  interfaces:
-    - mac_address: "00:25:b5:a0:00:1a"
-      speed: 40gb
-      features: 'dpdk|sriov'
-      address: 192.168.11.1
-      name: 'nic1'
-    - mac_address: "00:25:b5:a0:00:1b"
-      speed: 40gb
-      features: 'dpdk|sriov'
-      name: 'nic2'
-    - mac_address: "00:25:b5:a0:00:1c"
-      speed: 40gb
-      features: 'dpdk|sriov'
-      name: 'nic3'
-    - mac_address: "00:25:b5:a0:00:1d"
-      speed: 40gb
-      features: 'dpdk|sriov'
-      name: 'nic4'
-##############################################################################
-nodes:
-  - name: pod2-node1
-    node: &nodeparams
-      type: baremetal
-      vendor: Cisco Systems Inc
-      model: UCSB-B200-M4
-      arch: x86_64
-      cpus: 2
-      cpu_cflags: haswell
-      cores: 8
-      memory: 32G
-    disks: *disks
-    remote_management:
-      <<: *remote_params
-      address: 172.30.8.75
-      mac_address: "a8:9d:21:c9:8b:56"
-    interfaces:
-      - mac_address: "00:25:b5:a0:00:2a"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic1'
-      - mac_address: "00:25:b5:a0:00:2b"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic2'
-      - mac_address: "00:25:b5:a0:00:2c"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic3'
-      - mac_address: "00:25:b5:a0:00:2d"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic4'
-  ############################################################################
-  - name: pod2-node2
-    node: *nodeparams
-    disks: *disks
-    remote_management:
-      <<: *remote_params
-      address: 172.30.8.65
-      mac_address: "a8:9d:21:c9:4d:26"
-    interfaces:
-      - mac_address: "00:25:b5:a0:00:3a"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic1'
-      - mac_address: "00:25:b5:a0:00:3b"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic2'
-      - mac_address: "00:25:b5:a0:00:3c"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic3'
-      - mac_address: "00:25:b5:a0:00:3d"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic4'
-  ############################################################################
-  - name: pod2-node3
-    node: *nodeparams
-    disks: *disks
-    remote_management:
-      <<: *remote_params
-      address: 172.30.8.74
-      mac_address: "a8:9d:21:c9:3a:92"
-    interfaces:
-      - mac_address: "00:25:b5:a0:00:4a"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic1'
-      - mac_address: "00:25:b5:a0:00:4b"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic2'
-      - mac_address: "00:25:b5:a0:00:4c"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic3'
-      - mac_address: "00:25:b5:a0:00:4d"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic4'
-  ############################################################################
-  - name: pod2-node4
-    node: *nodeparams
-    disks: *disks
-    remote_management:
-      <<: *remote_params
-      address: 172.30.8.73
-      mac_address: "74:a2:e6:a4:14:9c"
-    interfaces:
-      - mac_address: "00:25:b5:a0:00:5a"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic1'
-      - mac_address: "00:25:b5:a0:00:5b"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic2'
-      - mac_address: "00:25:b5:a0:00:5c"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic3'
-      - mac_address: "00:25:b5:a0:00:5d"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic4'
-  ############################################################################
-  - name: pod2-node5
-    node: *nodeparams
-    disks: *disks
-    remote_management:
-      <<: *remote_params
-      address: 172.30.8.72
-      mac_address: "a8:9d:21:a0:15:9c"
-    interfaces:
-      - mac_address: "00:25:b5:a0:00:6a"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic1'
-      - mac_address: "00:25:b5:a0:00:6b"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic2'
-      - mac_address: "00:25:b5:a0:00:6c"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic3'
-      - mac_address: "00:25:b5:a0:00:6d"
-        speed: 40gb
-        features: 'dpdk|sriov'
-        name: 'nic4'
diff --git a/mcp/config/labs/local/virtual1.yaml b/mcp/config/labs/local/virtual1.yaml
deleted file mode 100644 (file)
index b293b97..0000000
+++ /dev/null
@@ -1,127 +0,0 @@
-##############################################################################
-# Copyright (c) 2018 Ericsson AB, Mirantis Inc., Enea AB and others.
-# All rights reserved. This program and the accompanying materials
-# are made available under the terms of the Apache License, Version 2.0
-# which accompanies this distribution, and is available at
-# http://www.apache.org/licenses/LICENSE-2.0
-##############################################################################
----
-### Fuel@OPNFV sample VIRTUAL POD descriptor file ###
-### NOTE: This is subject to change as vPDF is not yet officialy supported ###
-
-version: 0.0  # Intentionally invalid to indicate this is experimental
-details:
-  pod_owner: Fuel@OPNFV
-  contact: Fuel@OPNFV
-  lab: Example Lab
-  location: Example Location
-  type: development
-  link: https://wiki.opnfv.org/display/pharos/
-##############################################################################
-jumphost:
-  name: virtual1-jump
-  node:
-    type: baremetal
-    vendor: HP
-    model: ProLiant BL460c Gen8
-    arch: x86_64
-    cpus: 2
-    cpu_cflags: ivybridge
-    cores: 10
-    memory: 64G
-  disks:
-    - name: 'disk1'
-      disk_capacity: 800G
-      disk_type: hdd
-      disk_interface: scsi
-      disk_rotation: 15000
-  os: ubuntu-16.04
-  remote_management:
-    type: ipmi
-    versions:
-      - 1.0
-      - 2.0
-    user: changeme
-    pass: changeme
-    address: 0.0.0.0
-    mac_address: "00:00:00:00:00:00"
-  interfaces:
-    - name: 'nic1'
-      speed: 10gb
-      features: 'dpdk|sriov'
-      mac_address: "00:00:00:00:00:00"
-      vlan: native
-    - name: 'nic2'
-      speed: 10gb
-      features: 'dpdk|sriov'
-      mac_address: "00:00:00:00:00:00"
-      vlan: native
-##############################################################################
-nodes:
-  - name: node-1  # noha ctl01 or ha (novcp) kvm01
-    node: &nodeparams
-      # Fuel overrides certain params (e.g. cpus, mem) based on node role later
-      type: virtual
-      vendor: libvirt
-      model: virt
-      arch: x86_64
-      cpus: 1
-      cpu_cflags: ivybridge
-      cores: 8
-      memory: 6G
-    disks: &disks
-      - name: 'disk1'
-        disk_capacity: 100G
-        disk_type: hdd
-        disk_interface: scsi  # virtio-scsi
-        disk_rotation: 15000
-    remote_management: &remotemgmt
-      type: libvirt
-      user: changeme
-      pass: changeme
-      address: 127.0.0.1  # Not used currently, will be 'qemu:///system' later
-    interfaces: &interfaces
-      - name: 'nic1'
-        speed: 10gb
-        features: 'dpdk|sriov'
-        mac_address: "00:00:00:00:00:00"  # MACs will be assigned by libvirt
-        vlan: native
-      - name: 'nic2'
-        speed: 10gb
-        features: 'dpdk|sriov'
-        mac_address: "00:00:00:00:00:00"
-        vlan: native
-      - name: 'nic3'
-        speed: 10gb
-        features: 'dpdk|sriov'
-        mac_address: "00:00:00:00:00:00"
-        vlan: native
-      - name: 'nic4'
-        speed: 10gb
-        features: 'dpdk|sriov'
-        mac_address: "00:00:00:00:00:00"
-        vlan: native
-  ############################################################################
-  - name: node-2  # noha gtw01 or ha (novcp) kvm02
-    node: *nodeparams
-    disks: *disks
-    remote_management: *remotemgmt
-    interfaces: *interfaces
-  ############################################################################
-  - name: node-3  # noha odl01 / unused or ha (novcp) kvm02
-    node: *nodeparams
-    disks: *disks
-    remote_management: *remotemgmt
-    interfaces: *interfaces
-  ############################################################################
-  - name: node-4  # cmp001
-    node: *nodeparams
-    disks: *disks
-    remote_management: *remotemgmt
-    interfaces: *interfaces
-  ############################################################################
-  - name: node-5  # cmp002
-    node: *nodeparams
-    disks: *disks
-    remote_management: *remotemgmt
-    interfaces: *interfaces
index e03182f..0a53916 100644 (file)
@@ -1,22 +1,25 @@
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. SPDX-License-Identifier: CC-BY-4.0
-.. (c) 2017 Mirantis Inc., Enea AB and others.
+.. (c) 2018 Mirantis Inc., Enea AB and others.
 
-Fuel@OPNFV Scenario Configuration
+OPNFV Fuel Scenario Configuration
 =================================
 
-Abstract:
----------
+Abstract
+--------
+
 This directory contains configuration files for different OPNFV deployment
-feature scenarios used by Fuel@OPNFV, e.g.:
+feature scenarios used by OPNFV Fuel, e.g.:
 
 - High availability configuration;
 - Type of SDN controller to be deployed;
 - OPNFV collaboration project features to be deployed;
 - Provisioning of any other services;
-- POD configuration (baremetal, virtual);
+- POD configuration (``baremetal``, ``virtual``);
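+
+As an illustration, a scenario is typically selected at deploy time, assuming
+the usual ``ci/deploy.sh`` arguments (``-l`` lab, ``-p`` pod, ``-s``
+scenario); the lab/pod names below are placeholders, see the installation
+guide for the exact syntax:
+
+.. code-block:: console
+
+    developer@machine:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
+                                           -s os-nosdn-ovs-noha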
+
+NOTES
+-----
 
-NOTES:
-------
 This directory is highly likely to change and/or be replaced/complemented
-by the new PDF (Pod Descriptor File) info in Pharos OPNFV git repo.
+by the new ``SDF`` (Scenario Descriptor File) info in the Pharos OPNFV git
+repo in upcoming OPNFV releases.
index be3eb9e..e0a1c34 100644 (file)
@@ -25,7 +25,7 @@ FPATCHES = $(shell find ${F_PATCH_DIR} -name '*.patch')
 # In order to keep things sort of separate, we should only pass up (to main
 # Makefile) the fully-patched repos, and gather any fingerprinting info here.
 
-# Fuel@OPNFV relies on upstream git repos (one per component) in 1 of 2 ways:
+# OPNFV Fuel relies on upstream git repos (one per component) in 1 of 2 ways:
 #   - pinned down to tag objects (e.g. "9.0.1")
 #   - tracking upstream remote HEAD on a stable or master branch
 # FIXME(alav): Should we support mixed cases? (e.g. pin down only fuel-main)
index 735b703..28af0e8 100644 (file)
@@ -1,30 +1,30 @@
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. SPDX-License-Identifier: CC-BY-4.0
-.. (c) 2017 Mirantis Inc., Enea AB and others.
+.. (c) 2018 Mirantis Inc., Enea AB and others.
 
 ==========================================
-Fuel@OPNFV submodule fetching and patching
+OPNFV Fuel Submodule Fetching and Patching
 ==========================================
 
 This directory holds submodule fetching/patching scripts, intended for
-working with upstream Fuel/MCP components (e.g.: reclass-system-salt-model) in
-developing/applying OPNFV patches (backports, custom fixes etc.).
+working with upstream Fuel/MCP components (e.g. ``reclass-system-salt-model``)
+when developing/applying OPNFV patches (backports, custom fixes etc.).
 
 The scripts should be friendly to the following 2 use-cases:
 
-  - development work: easily cloning, binding repos to specific commits,
-    remote tracking, patch development etc.;
-  - to provide parent build scripts an easy method of tracking upstream
-    references and applying OPNFV patches on top;
+- development work: easily cloning, binding repos to specific commits,
+  remote tracking, patch development etc.;
+- to provide parent build scripts an easy method of tracking upstream
+  references and applying OPNFV patches on top;
 
 Also, we need to support at least the following modes of operation:
 
-  - submodule bind - each submodule patches will be based on the commit ID
-    saved in the .gitmodules config file;
-  - remote tracking - each submodule will sync with the upstream remote
-    and patches will be applied on top of <sub_remote>/<sub_branch>/HEAD;
+- submodule bind - each submodule's patches will be based on the commit ID
+  saved in the ``.gitmodules`` config file (see the illustrative entry below);
+- remote tracking - each submodule will sync with the upstream remote
+  and patches will be applied on top of ``<sub_remote>/<sub_branch>/HEAD``;
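+
+For illustration, a typical ``.gitmodules`` entry might look as follows
+(excerpt only; a sketch based on the ``reclass-system-salt-model`` submodule
+referenced later in this document):
+
+.. code-block:: console
+
+    developer@machine:~/fuel$ cat .gitmodules
+    [submodule "reclass-system-salt-model"]
+            path = mcp/reclass/classes/system
+            url = https://github.com/Mirantis/reclass-system-salt-model
+            branch = master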
 
-Workflow (development)
+Workflow (Development)
 ======================
 
 The standard development workflow should look as follows:
@@ -32,114 +32,116 @@ The standard development workflow should look as follows:
 Decide whether remote tracking should be active or not
 ------------------------------------------------------
 
-NOTE: Setting the following var to any non-empty str enables remote track.
+.. NOTE::
 
-NOTE: Leaving unset will enable remote track for anything but stable branch.
+    Setting the following var to any non-empty string enables remote tracking.
 
-    .. code-block:: bash
+.. code-block:: console
 
-        $ export FUEL_TRACK_REMOTES=""
+    developer@machine:~/fuel$ export FUEL_TRACK_REMOTES=""
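+
+To enable remote tracking instead, set the variable to any non-empty string,
+e.g.:
+
+.. code-block:: console
+
+    developer@machine:~/fuel$ export FUEL_TRACK_REMOTES="yes"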
 
 Initialize git submodules
 -------------------------
 
-All Fuel sub-projects are registered as submodules.
+All of Fuel's direct dependency projects are registered as git submodules.
 If remote tracking is active, upstream remote is queried and latest remote
-branch HEAD is fetched. Otherwise, checkout commit IDs from .gitmodules.
+branch ``HEAD`` is fetched. Otherwise, the commit IDs recorded in
+``.gitmodules`` are checked out.
 
-    .. code-block:: bash
+.. code-block:: console
 
-        $ make sub
+    developer@machine:~/fuel$ make -C mcp/patches sub
 
-Apply patches from `patches/<sub-project>/*` to respective submodules
----------------------------------------------------------------------
+Apply patches from ``patches/<sub-project>/*`` to respective submodules
+-----------------------------------------------------------------------
 
 This will result in creation of:
 
-- a tag called `${FUEL_MAIN_TAG}-opnfv-root` at the same commit as Fuel@OPNFV
-  upstream reference (bound to git submodule OR tracking remote HEAD);
-- a new branch `opnfv-fuel` which will hold all the OPNFV patches,
-  each patch is applied on this new branch with `git-am`;
-- a tag called `${FUEL_MAIN_TAG}-opnfv` at `opnfv-fuel/HEAD`;
+- a tag called ``${F_OPNFV_TAG}-root`` at the same commit as the OPNFV Fuel
+  upstream reference (bound to the git submodule OR tracking remote ``HEAD``);
+- a new branch ``nightly`` which will hold all the OPNFV patches;
+  each patch is applied on this new branch with ``git-am``;
+- a tag called ``${F_OPNFV_TAG}`` at ``nightly/HEAD``;
+- for each (sub)directory of ``patches/<sub-project>``, another pair of tags,
+  ``${F_OPNFV_TAG}-<sub-directory>-fuel/patch-root`` and
+  ``${F_OPNFV_TAG}-<sub-directory>-fuel/patch``, is also created;
 
-    .. code-block:: bash
+.. code-block:: console
 
-        $ make patches-import
+    developer@machine:~/fuel$ make -C mcp/patches patches-import
 
 Modify sub-projects for whatever you need
 -----------------------------------------
 
-Commit your changes when you want them taken into account in the build.
+To add/change OPNFV-specific patches for a sub-project:
 
-Re-create patches
+- commit your changes inside the git submodule(s);
+- move the git tag to the new reference so ``make patches-export`` will
+  pick up the new commit later;
+
+.. code-block:: console
+
+    developer@machine:~/fuel$ cd ./path/to/submodule
+    developer@machine:~/fuel/path/to/submodule$ # ...
+    developer@machine:~/fuel/path/to/submodule$ git commit
+    developer@machine:~/fuel/path/to/submodule$ git tag -f ${F_OPNFV_TAG}-fuel/patch
+
+Re-create Patches
 -----------------
 
-Each commit on `opnfv-fuel` branch of each subproject will be
-exported to `patches/subproject/` via `git format-patch`.
+Each commit on the ``nightly`` branch of each subproject will be
+exported to ``patches/subproject/`` via ``git format-patch``.
+
+.. NOTE::
+
+    Only commit submodule file changes when you need to bump upstream refs.
 
-NOTE: Only commit (-f) submodules when you need to bump upstream ref.
+.. WARNING::
 
-NOTE: DO NOT commit patched submodules!
+    DO NOT commit patched submodules!
 
-    .. code-block:: bash
+.. code-block:: console
 
-        $ make patches-export
+    developer@machine:~/fuel$ make -C mcp/patches patches-export patches-copyright
 
-Clean workbench branches and tags
+Clean Workbench Branches and Tags
 ---------------------------------
 
-    .. code-block:: bash
+.. code-block:: console
 
-        $ make clean
+    developer@machine:~/fuel$ make -C mcp/patches clean
 
-De-initialize submodules and force a clean clone
+De-initialize Submodules and Force a Clean Clone
 ------------------------------------------------
 
-    .. code-block:: bash
+.. code-block:: console
 
-        $ make deepclean
+    developer@machine:~/fuel$ make -C mcp/patches deepclean
 
-Sub-project maintenance
+Sub-project Maintenance
 =======================
 
-Adding a new submodule
+Adding a New Submodule
 ----------------------
 
-If you need to add another subproject, you can do it with `git submodule`.
-Make sure that you specify branch (with `-b`), short name (with `--name`):
-
-    .. code-block:: bash
-
-        $ git submodule -b master add --name reclass-system-salt-model
-          https://github.com/Mirantis/reclass-system-salt-model
-          relative/path/to/submodule
-
-Working with remote tracking for upgrading Fuel components
-----------------------------------------------------------
-
-Enable remote tracking as described above, which at `make sub` will update
-ALL submodules (e.g. reclass-system-salt-model) to remote branch (set in
-.gitmodules) HEAD.
+If you need to add another subproject, you can do it with ``git submodule``.
+Make sure that you specify the branch (with ``-b``) and a short name
+(with ``--name``):
 
-* If upstream has NOT already tagged a new version, we can still work on
-  our patches, make sure they apply etc., then check for new upstream
-  changes (and that our patches still apply on top of them) by:
+.. code-block:: console
 
-* If upstream has already tagged a new version we want to pick up, checkout
-  the new tag in each submodule:
+    developer@machine:~/fuel$ git submodule -b master add --name reclass-system-salt-model \
+                              https://github.com/Mirantis/reclass-system-salt-model \
+                              mcp/reclass/classes/system
 
-* Once satisfied with the patch and submodule changes, commit them:
+Working with Remote Tracking
+----------------------------
 
-  - enforce FUEL_TRACK_REMOTES to "yes" if you want to constatly use the
-    latest remote branch HEAD (as soon as upstream pushes a change on that
-    branch, our next build will automatically include it - risk of our
-    patches colliding with new upstream changes);
-  - stage patch changes if any;
-  - if submodule tags have been updated (relevant when remote tracking is
-    disabled, i.e. we have a stable upstream baseline), add submodules;
+Enable remote tracking as described above; ``make sub`` will then update
+ALL submodules (e.g. ``reclass-system-salt-model``) to the ``HEAD`` of the
+remote branch set in ``.gitmodules``.
 
-        .. code-block:: bash
+.. WARNING::
 
-            $ make deepclean patches-import
-            $ git submodule foreach 'git checkout <newtag>'
-            $ make deepclean sub && git add -f relative/path/to/submodule
+    Set ``FUEL_TRACK_REMOTES`` to ``yes`` only if you want to constantly
+    use the latest remote branch ``HEAD`` for **ALL** submodules (as soon as
+    upstream pushes a change on that branch, our next build will automatically
+    include it - at the risk of our patches colliding with new upstream
+    changes).
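+
+A typical upgrade session could then look as follows (a sketch combining the
+make targets described earlier in this document):
+
+.. code-block:: console
+
+    developer@machine:~/fuel$ export FUEL_TRACK_REMOTES="yes"
+    developer@machine:~/fuel$ make -C mcp/patches deepclean sub
+    developer@machine:~/fuel$ make -C mcp/patches patches-import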
index 260cbf8..5e5d3b3 100644 (file)
@@ -1,5 +1,5 @@
 ##############################################################################
-# Copyright (c) 2015,2016,2017 Ericsson AB, Enea AB and others.
+# Copyright (c) 2018 Ericsson AB, Enea AB and others.
 # stefan.k.berg@ericsson.com
 # jonas.bjurel@ericsson.com
 # All rights reserved. This program and the accompanying materials
@@ -18,6 +18,5 @@ F_GIT_DIR    := $(shell git rev-parse --git-dir)
 F_PATCH_DIR  := $(shell pwd)
 F_OPNFV_TAG  := master-opnfv
 
-# for the patches applying purposes (empty git config in docker build container)
 export GIT_COMMITTER_NAME?=Fuel OPNFV
 export GIT_COMMITTER_EMAIL?=fuel@opnfv.org
index 6923404..2bb0f26 100644 (file)
@@ -2,22 +2,22 @@
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) 2017 Mirantis Inc., Enea AB and others.
 
-Fuel@OPNFV Cluster Reclass Models
+OPNFV Fuel Cluster Reclass Models
 =================================
 
 Overview
 --------
 
-#. Common classes (HA + noHA)
+#. Common classes (HA **and** noHA)
 
-   - all-mcp-arch-common
+    - all-mcp-arch-common
 
-#. Common classes (HA baremetal/virtual, noHA virtual)
+#. Common classes (HA **or** noHA)
 
-   - mcp-<release>-common-ha
-   - mcp-<release>-common-noha
+    - mcp-common-ha
+    - mcp-common-noha
 
 #. Cluster specific classes
 
-   - mcp-<release>-*-{ha,noha}
-   - mcp-<release>-*-{ha,noha}
+    - mcp-\*-ha
+    - mcp-\*-noha
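+
+For a quick overview, the class directories can be listed as below (a sketch;
+it assumes the cluster models live under ``mcp/reclass/classes/cluster``, and
+the output is illustrative):
+
+.. code-block:: console
+
+    developer@machine:~/fuel$ ls mcp/reclass/classes/cluster
+    all-mcp-arch-common  mcp-common-ha  mcp-common-noha  mcp-ovn-ha  ...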
index daa0444..980827c 100644 (file)
@@ -31,9 +31,6 @@ function do_templates_scenario {
   LOCAL_PDF="${image_dir}/$(basename "${BASE_CONFIG_PDF}")"
   LOCAL_IDF="${image_dir}/$(basename "${BASE_CONFIG_IDF}")"
 
-  # Make sample PDF/IDF available via default lab-config (pharos submodule)
-  ln -sf "$(readlink -f "../config/labs/local")" "./pharos/labs/"
-
   # Expand scenario file and main reclass input (pod_config.yaml) based on PDF
   if ! curl --create-dirs -o "${LOCAL_PDF}" "${BASE_CONFIG_PDF}"; then
     notify_e "[ERROR] Could not retrieve PDF (Pod Descriptor File)!"
diff --git a/onboarding.txt b/onboarding.txt
deleted file mode 100644 (file)
index c9c45ac..0000000
+++ /dev/null
@@ -1,15 +0,0 @@
-###########################################################################
-This document is protected/licensed under the following conditions
-(c) Jonas Bjurel (Ericsson AB)
-Licensed under a Creative Commons Attribution 4.0 International License.
-You should have received a copy of the license along with this work.
-If not, see <http://creativecommons.org/licenses/by/4.0/>.
-###########################################################################
-Get on board by filling this out and submitting it for review.
-This is all optional, it's just to give you a taste of the workflow.
-
-Full Name:
-IRC Nick:
-Linux Foundation ID:
-Favourite Open Source project:
-How would you like to help this project: