1 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
2 .. http://creativecommons.org/licenses/by/4.0
3 .. (c) Open Platform for NFV Project, Inc. and its contributors
9 This document describes how to install the Euphrates release of
10 OPNFV when using Fuel as a deployment tool, covering its usage,
11 limitations, dependencies and required system resources.
This is a unified document for both x86_64 and aarch64
architectures. All information is common to both architectures
except where explicitly stated.
20 This document provides guidelines on how to install and
21 configure the Euphrates release of OPNFV when using Fuel as a
22 deployment tool, including required software and hardware configurations.
Although the available installation options provide a high degree of
freedom in how the system is set up, including architecture, services
and features, these permutations may not provide an OPNFV
compliant reference architecture. This document provides a
step-by-step guide that results in an OPNFV Euphrates compliant
deployment.
31 The audience of this document is assumed to have good knowledge of
32 networking and Unix/Linux administration.
Before starting the installation of the Euphrates release of
OPNFV, using Fuel as a deployment tool, some planning must be
done.

Prior to installation, a number of deployment-specific parameters must be collected:
47 #. Provider sub-net and gateway information
49 #. Provider VLAN information
51 #. Provider DNS addresses
53 #. Provider NTP addresses
55 #. Network overlay you plan to deploy (VLAN, VXLAN, FLAT)
57 #. How many nodes and what roles you want to deploy (Controllers, Storage, Computes)
59 #. Monitoring options you want to deploy (Ceilometer, Syslog, etc.).
#. Other options not covered in this document are available in the links listed in the References section at the end of this document
64 This information will be needed for the configuration procedures
65 provided in this document.
67 =========================================
68 Hardware Requirements for Virtual Deploys
69 =========================================
71 The following minimum hardware requirements must be met for the virtual
72 installation of Euphrates using Fuel:
74 +----------------------------+--------------------------------------------------------+
75 | **HW Aspect** | **Requirement** |
77 +============================+========================================================+
78 | **1 Jumpserver** | A physical node (also called Foundation Node) that |
79 | | will host a Salt Master VM and each of the VM nodes in |
80 | | the virtual deploy |
81 +----------------------------+--------------------------------------------------------+
82 | **CPU** | Minimum 1 socket with Virtualization support |
83 +----------------------------+--------------------------------------------------------+
84 | **RAM** | Minimum 32GB/server (Depending on VNF work load) |
85 +----------------------------+--------------------------------------------------------+
86 | **Disk** | Minimum 100GB (SSD or SCSI (15krpm) highly recommended |
87 +----------------------------+--------------------------------------------------------+
90 ===========================================
91 Hardware Requirements for Baremetal Deploys
92 ===========================================
94 The following minimum hardware requirements must be met for the baremetal
95 installation of Euphrates using Fuel:
97 +-------------------------+------------------------------------------------------+
98 | **HW Aspect** | **Requirement** |
100 +=========================+======================================================+
101 | **# of nodes** | Minimum 5 |
103 | | - 3 KVM servers which will run all the controller |
106 | | - 2 Compute nodes |
108 +-------------------------+------------------------------------------------------+
109 | **CPU** | Minimum 1 socket with Virtualization support |
110 +-------------------------+------------------------------------------------------+
111 | **RAM** | Minimum 16GB/server (Depending on VNF work load) |
112 +-------------------------+------------------------------------------------------+
113 | **Disk** | Minimum 256GB 10kRPM spinning disks |
114 +-------------------------+------------------------------------------------------+
115 | **Networks** | 4 VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) - can be |
116 | | a mix of tagged/native |
118 | | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network |
120 | | Note: These can be allocated to a single NIC - |
121 | | or spread out over multiple NICs |
122 +-------------------------+------------------------------------------------------+
123 | **1 Jumpserver** | A physical node (also called Foundation Node) that |
124 | | hosts the Salt Master and MaaS VMs |
125 +-------------------------+------------------------------------------------------+
126 | **Power management** | All targets need to have power management tools that |
127 | | allow rebooting the hardware and setting the boot |
128 | | order (e.g. IPMI) |
129 +-------------------------+------------------------------------------------------+
**NOTE:** All nodes, including the Jumpserver, must have the same architecture (either x86_64 or aarch64).

**NOTE:** For aarch64 deployments, a UEFI-compatible firmware with PXE support is needed (e.g. EDK2).
136 ===============================
137 Help with Hardware Requirements
138 ===============================
For information on compatible hardware types available for use, please see the
`Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/software/hardware-compatibility/>`_.

When calculating the hardware requirements for the OpenStack environment you
plan to deploy, you should think about the following (a rough sizing sketch
follows the list):
147 - CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine.
149 - Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node.
151 - Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage.
- Networking -- Depends on the chosen network topology, the network bandwidth per virtual machine, and network storage.
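As a back-of-the-envelope illustration of the memory aspect, a quick calculation
can be done directly in the shell; the VM count, per-VM RAM and overhead figures
below are purely hypothetical:

.. code-block:: bash

    # Hypothetical sizing: 20 VMs x 4GB RAM each, plus ~8GB of host overhead
    $ echo "$((20 * 4 + 8))GB"
    88GB
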
155 ================================================
156 Top of the Rack (TOR) Configuration Requirements
157 ================================================
159 The switching infrastructure provides connectivity for the OPNFV
160 infrastructure operations, tenant networks (East/West) and provider
161 connectivity (North/South); it also provides needed connectivity for
162 the Storage Area Network (SAN).
To avoid traffic congestion, it is strongly suggested that three
physically separated networks are used: one physical network
for administration and control, one physical network for tenant private
and public networks, and one physical network for SAN.
The switching connectivity can (but does not need to) be fully redundant,
in which case it comprises a redundant 10GE switch pair for each of the
three physically separated networks.
171 The physical TOR switches are **not** automatically configured from
172 the Fuel OPNFV reference platform. All the networks involved in the OPNFV
173 infrastructure as well as the provider networks and the private tenant
VLANs need to be configured manually.
176 Manual configuration of the Euphrates hardware platform should
177 be carried out according to the `OPNFV Pharos Specification
178 <https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_.
180 ============================
181 OPNFV Software Prerequisites
182 ============================
184 The Jumpserver node should be pre-provisioned with an operating system,
185 according to the Pharos specification. Relevant network bridges should
186 also be pre-configured (e.g. admin, management, public).
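For example, a bridge can be created with standard Linux tools; the bridge name
(admin_br) and attached NIC (eno1) below are hypothetical and must match your
Jumpserver setup and the names later passed to the deploy script via `-B`:

.. code-block:: bash

    # Example only: create an 'admin' bridge and enslave a physical NIC to it
    # (this configuration is not persistent across reboots)
    $ sudo ip link add admin_br type bridge
    $ sudo ip link set eno1 master admin_br
    $ sudo ip link set admin_br up
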
188 Fuel@OPNFV has been validated by CI using the following distributions
189 installed on the Jumpserver:
- CentOS 7 (recommended by Pharos specification);
- Ubuntu Xenial.
194 **NOTE:** The install script expects 'libvirt' to be installed and running
195 on the Jumpserver. In case the packages are missing, the script will install
196 them; but depending on the OS distribution, the user might have to start the
197 'libvirtd' service manually.
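For example, on CentOS 7 (assumed here; package names and service handling may
differ on other distributions):

.. code-block:: bash

    # Install libvirt/KVM and start the libvirtd service manually
    $ sudo yum install -y libvirt qemu-kvm
    $ sudo systemctl start libvirtd
    $ sudo systemctl enable libvirtd
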
199 ==========================================
200 OPNFV Software Installation and Deployment
201 ==========================================
203 This section describes the process of installing all the components needed to
204 deploy the full OPNFV reference platform stack across a server cluster.
206 The installation is done with Mirantis Cloud Platform (MCP), which is based on
a reclass model. This model provides the formula inputs to Salt, to make the
deployment automatic, based on the deployment scenario.
209 The reclass model covers:
- Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01)
- OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
- Infrastructure components to install (software packages, services etc.)
- OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them
217 Automatic Installation of a Virtual POD
218 =======================================
For virtual deploys, all the targets are VMs on the Jumpserver. The deploy script will (see the verification sketch after this list):
222 - Create a Salt Master VM on the Jumpserver which will drive the installation
- Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network)
- Install OpenStack on the targets
- Leverage Salt to install & configure OpenStack services
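Once the script finishes, the Salt Master VM and the target VMs should be
visible on the Jumpserver; a minimal check, assuming the node names from the
reclass model above (cfg01, ctl01, cmp001 etc.):

.. code-block:: bash

    # List all libvirt domains on the Jumpserver, running or not
    $ virsh list --all
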
227 .. figure:: img/fuel_virtual.png
229 :alt: Fuel@OPNFV Virtual POD Network Layout Examples
231 Fuel@OPNFV Virtual POD Network Layout Examples
234 Automatic Installation of a Baremetal POD
235 =========================================
The baremetal installation process can be done by editing the information about
hardware and environment in the reclass files, or by using a Pod Descriptor File (PDF).
This file contains all the information about the hardware and network of the deployment
that will be fed to the reclass model during deployment.
The installation is done automatically with the deploy script, which will (see the check after this list):
244 - Create a Salt Master VM on the Jumpserver which will drive the installation
245 - Create a MaaS Node VM on the Jumpserver which will provision the targets
- Install OpenStack on the targets
- Leverage MaaS to provision baremetal nodes with the operating system
- Leverage Salt to configure the operating system on the baremetal nodes
- Leverage Salt to install & configure OpenStack services
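A minimal sanity check, run from inside the Salt Master VM (cfg01), is to
verify that all provisioned nodes answer on the Salt bus:

.. code-block:: bash

    # Every registered minion should reply with True
    $ sudo salt '*' test.ping
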
251 .. figure:: img/fuel_baremetal.png
253 :alt: Fuel@OPNFV Baremetal POD Network Layout Example
255 Fuel@OPNFV Baremetal POD Network Layout Example
258 Steps to Start the Automatic Deploy
259 ===================================
These steps are common to both virtual and baremetal deploys.
#. Clone the Fuel code from gerrit

   For x86_64:

   .. code-block:: bash

       $ git clone https://git.opnfv.org/fuel
       $ cd fuel

   For aarch64:

   .. code-block:: bash

       $ git clone https://git.opnfv.org/armband
       $ cd armband

#. Checkout the Euphrates release

   .. code-block:: bash

       $ git checkout opnfv-5.0.2
#. Start the deploy script

   .. code-block:: bash

       $ ci/deploy.sh -l <lab_name> \
                      -p <pod_name> \
                      -b <URI to configuration repo containing the PDF file> \
                      -s <scenario> \
                      -B <list of admin, management, private and public bridges>
To start a virtual deployment, it is required to have the `virtual` keyword
in the pod name passed to the installer script.

It will create the required bridges and networks, configure the Salt Master and
install OpenStack on the targets, for example:

.. code-block:: bash

    $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
                   -l <lab_name> \
                   -p <virtual_pod_name> \
                   -s os-nosdn-nofeature-noha
Once the deployment is complete, the OpenStack Dashboard, Horizon, is
available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
314 The administrator credentials are **admin** / **opnfv_secret**.
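A quick reachability check can be run from the Jumpserver, reusing the example
VIP above:

.. code-block:: bash

    # Expect an HTTP response header from Horizon
    $ curl -I http://10.16.0.101:8078
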
An x86 deploy on POD2 from the Linux Foundation lab:

.. code-block:: bash

    $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
                   -l lf \
                   -p pod2 \
                   -s os-nosdn-nofeature-ha \
                   -B <list of admin, management, private and public bridges>
328 .. figure:: img/lf_pod2.png
330 :alt: Fuel@OPNFV LF POD2 Network Layout
332 Fuel@OPNFV LF POD2 Network Layout
An aarch64 deploy on POD5 from the Arm lab:

.. code-block:: bash

    $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
                   -l arm \
                   -p pod5 \
                   -s os-nosdn-nofeature-ha \
                   -B admin7_br0,mgmt7_br0,,public7_br0
344 .. figure:: img/arm_pod5.png
346 :alt: Fuel@OPNFV ARM POD5 Network Layout
348 Fuel@OPNFV ARM POD5 Network Layout
====================
Pod Descriptor Files
====================

Descriptor files provide the installer with an abstraction of the target pod
with all its hardware characteristics and required parameters. This information
is split into two different files:
Pod Descriptor File (PDF) and Installer Descriptor File (IDF).
360 The Pod Descriptor File is a hardware and network description of the pod
361 infrastructure. The information is modeled under a yaml structure.
362 A reference file with the expected yaml structure is available at
363 *mcp/config/labs/local/pod1.yaml*
A common network section describes all the internal and provider networks
assigned to the pod. Each network is expected to have a VLAN tag, IP subnet and
attached interface on the boards. Untagged VLANs shall be defined as "native".
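A trimmed-down, hypothetical sketch of one such network entry is shown below;
the exact field names should be taken from the reference file mentioned above:

.. code-block:: yaml

    # Hypothetical PDF network entry (all values are placeholders)
    net_config:
      admin:
        interface: 0            # index of the attached interface on the boards
        vlan: native            # untagged VLAN
        network: 192.168.11.0   # IP subnet
        mask: 24
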
369 The hardware description is arranged into a main "jumphost" node and a "nodes"
set for all target boards. For each node, the following characteristics are
defined (a trimmed-down sketch follows the note below):
373 - Node parameters including CPU features and total memory.
374 - A list of available disks.
375 - Remote management parameters.
- Network interfaces list including MAC address, speed and advanced features.
- IP list of fixed IPs for the node

**NOTE:** The fixed IPs are ignored by the MCP installer script; it instead
assigns IPs based on the network ranges defined under the pod network
configuration.
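A trimmed-down, hypothetical sketch of a single node entry follows; the
authoritative structure is the one in the reference *pod1.yaml*:

.. code-block:: yaml

    # Hypothetical PDF node entry (all values are placeholders)
    nodes:
      - name: node1
        node:
          arch: x86_64            # node parameters: CPU features, total memory
          cpus: 2
          memory: 64G
        disks:                    # list of available disks
          - name: disk1
            disk_capacity: 256G
        remote_management:        # remote management parameters
          type: ipmi
          address: 192.168.10.101
          user: admin
          pass: secret
        interfaces:               # network interfaces list
          - mac_address: "00:11:22:33:44:55"
            speed: 1gb
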
The Installer Descriptor File extends the PDF with pod-related parameters
required by the installer. This information may differ for each installer type
and is not considered part of the pod infrastructure. The Fuel installer relies
on the IDF model to map the networks to the bridges on the foundation node and
to set up all node NICs by defining the expected OS device name and bus address.
389 The file follows a yaml structure and a "fuel" section is expected. Contents and
390 references must be aligned with the PDF file. The IDF file must be named after
391 the PDF with the prefix "idf-". A reference file with the expected structure
392 is available at *mcp/config/labs/local/idf-pod1.yaml*
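A trimmed-down, hypothetical sketch of the IDF contents follows; bridge names,
device names and bus addresses are placeholders that must match the actual pod:

.. code-block:: yaml

    # Hypothetical IDF contents, mapping networks to Jumpserver bridges and
    # describing node NICs (all values are placeholders)
    idf:
      fuel:
        jumphost:
          bridges:
            admin: admin_br
            mgmt: mgmt_br
            private: ~
            public: public_br
        network:
          node:
            - interfaces:
                - enp2s0f0          # expected OS device name
              busaddr:
                - "0000:02:00.0"    # PCI bus address of the NIC
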
=============
Release Notes
=============

Please refer to the :ref:`Release Notes <fuel-release-notes-label>` article.
==========
References
==========

1) `OPNFV Home Page <http://www.opnfv.org>`_
408 2) `OPNFV documentation <http://docs.opnfv.org>`_
409 3) `Software downloads <https://www.opnfv.org/software/download>`_
413 4) `OpenStack Ocata Release Artifacts <http://www.openstack.org/software/ocata>`_
414 5) `OpenStack Documentation <http://docs.openstack.org>`_
418 6) `OpenDaylight Artifacts <http://www.opendaylight.org/software/downloads>`_
422 7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_
8) `SaltStack Documentation <https://docs.saltstack.com/en/latest/topics>`_
9) `Salt Formulas <http://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_
431 10) `Reclass model <http://reclass.pantsfullofunix.net>`_