.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors

This document contains details about how to use OPNFV Fuel - Fraser
release - after it has been deployed. For details on how to deploy, check the
installation instructions in the :ref:`fuel_userguide_references` section.

This is a unified documentation for both x86_64 and aarch64
architectures. All information is common for both architectures
except when explicitly stated.

Fuel uses several networks to deploy and administer the cloud:

+------------------+---------------------------------------------------------+
| Network name     | Description                                             |
+==================+=========================================================+
| **PXE/ADMIN**    | Used for booting the nodes via PXE and/or Salt          |
|                  | control network                                         |
+------------------+---------------------------------------------------------+
| **MCPCONTROL**   | Used to provision the infrastructure VMs (Salt & MaaS)  |
+------------------+---------------------------------------------------------+
| **Mgmt**         | Used for internal communication between                 |
|                  | OpenStack components                                    |
+------------------+---------------------------------------------------------+
| **Internal**     | Used for VM data communication within the               |
|                  | cloud deployment                                        |
+------------------+---------------------------------------------------------+
| **Public**       | Used to provide Virtual IPs for public endpoints        |
|                  | that are used to connect to OpenStack services APIs.    |
|                  | Used by Virtual machines to access the Internet         |
+------------------+---------------------------------------------------------+

These networks - except mcpcontrol - can be Linux bridges configured on the Jumpserver before
the deployment. If they don't exist at deploy time, they will be created by the scripts as
virsh networks.

Mcpcontrol exists only on the Jumpserver and needs to be virtual because a DHCP server runs
on this network and assigns fixed IPs (static host entries) to the Salt and MaaS VMs.
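
To check what was actually created on the Jumpserver, standard libvirt and iproute2 tools
can be used (a quick sketch, assuming a default libvirt setup)::

   # on the Jumpserver
   $ sudo virsh net-list --all        # virsh networks created by the deploy scripts
   $ ip -br link show type bridge     # Linux bridges (pre-configured or created)
   $ ip -br addr show                 # addresses assigned to bridges/interfaces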

Accessing any component of the deployed cloud is done from the Jumpserver, as user *ubuntu*,
using the SSH key ``/var/lib/opnfv/mcp.rsa``. The example below connects to the Salt master::

   $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2

The Salt master IP is not hard-set; it is configurable via ``INSTALLER_IP`` during deployment.
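
If a non-default address was used at deploy time, the same value can be exported and reused
when connecting (a minimal sketch; ``10.20.0.2`` is the default used throughout this guide)::

   $ export INSTALLER_IP=10.20.0.2
   $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu ${INSTALLER_IP}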

Logging in to cluster nodes is possible from the Jumpserver and from the Salt master. On the
Salt master, cluster hostnames can be used instead of IP addresses::

   $ ssh -i mcp.rsa ubuntu@ctl01

User *ubuntu* has sudo rights.
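
Most of the examples in the following sections are executed as ``root``, so switch user after
logging in, e.g.::

   ubuntu@cfg01:~$ sudo -i
   root@cfg01:~#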

=============================
Exploring the Cloud with Salt
=============================

To gather information about the cloud, the ``salt`` command can be used. Salt is built around
a master-minion model: the Salt master pushes configuration to the minions and triggers
commands on them.

For example, Salt can be told to execute a ping to ``8.8.8.8`` on all the nodes.

.. figure:: img/saltstack.png
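
A minimal sketch of such a command, typed on the Salt master (the ping options are only an
example)::

   root@cfg01:~# salt "*" cmd.run "ping -c 3 8.8.8.8"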

Complex target filters can be used, such as compound queries or node roles.
For more information about Salt see the :ref:`fuel_userguide_references` section.

Some examples are listed below. Note that these commands are issued from the Salt master
as the ``root`` user.

#. View the IPs of all the components::

      root@cfg01:~$ salt "*" network.ip_addrs
      cfg01.mcp-pike-odl-ha.local:
      mas01.mcp-pike-odl-ha.local:
      .........................

#. View the interfaces of all the components and save the output to a file in YAML format::

      root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
      root@cfg01:~# cat interfaces.yaml
      cfg01.mcp-pike-odl-ha.local:
        ens3:
          hwaddr: 52:54:00:72:77:12
          inet:
          - address: 10.20.0.2
            broadcast: 10.20.0.255
            netmask: 255.255.255.0
          inet6:
          - address: fe80::5054:ff:fe72:7712
      .........................

#. View the installed packages on the MaaS node::

      root@cfg01:~# salt "mas*" pkg.list_pkgs
      mas01.mcp-pike-odl-ha.local:
      .........................

#. Execute any Linux command on all nodes (list the contents of ``/var/log`` in this example)::

      root@cfg01:~# salt "*" cmd.run 'ls /var/log'
      cfg01.mcp-pike-odl-ha.local:
          cloud-init-output.log
      .........................

#. Execute any Linux command on nodes using a compound query filter::

      root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
      cfg01.mcp-pike-odl-ha.local:
          cloud-init-output.log
      .........................

#. Execute any Linux command on nodes using a role filter::

      root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
      cmp001.mcp-pike-odl-ha.local:
          cloud-init-output.log
      .........................

Once the deployment is complete, the OpenStack CLI is accessible from the controller VMs (ctl01..ctl03).
The OpenStack credentials are stored in ``/root/keystonercv3``::

   root@ctl01:~# source keystonercv3
   root@ctl01:~# openstack image list
   +--------------------------------------+-----------------------------------------------+--------+
   | ID                                   | Name                                          | Status |
   +--------------------------------------+-----------------------------------------------+--------+
   | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage                                   | active |
   | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04                                   | active |
   +--------------------------------------+-----------------------------------------------+--------+

The OpenStack Dashboard, Horizon, is available at ``http://<proxy public VIP>``.
The administrator credentials are **admin**/**opnfv_secret**.

.. figure:: img/horizon_login.png

A full list of IPs/services is available at ``<proxy public VIP>:8090`` for baremetal deployments.

.. figure:: img/salt_services_ip.png

==============================
Guest Operating System Support
==============================

There are a number of possibilities regarding the guest operating systems that can be spawned
on the nodes. The current system spawns virtual machines for VCP VMs on the KVM nodes and for
user-requested VMs on the OpenStack compute nodes. Currently the system supports the following
UEFI images for the guests:

+------------------+-------------------+------------------+
| OS name          | x86_64 status     | aarch64 status   |
+==================+===================+==================+
| Ubuntu 17.10     | untested          | Full support     |
+------------------+-------------------+------------------+
| Ubuntu 16.04     | Full support      | Full support     |
+------------------+-------------------+------------------+
| Ubuntu 14.04     | untested          | Full support     |
+------------------+-------------------+------------------+
| Fedora atomic 27 | untested          | Full support     |
+------------------+-------------------+------------------+
| Fedora cloud 27  | untested          | Full support     |
+------------------+-------------------+------------------+
| Debian           | untested          | Full support     |
+------------------+-------------------+------------------+
| CentOS 7         | untested          | Not supported    |
+------------------+-------------------+------------------+
| Cirros 0.3.5     | Full support      | Full support     |
+------------------+-------------------+------------------+
| Cirros 0.4.0     | Full support      | Full support     |
+------------------+-------------------+------------------+

The above table covers only UEFI images and implies OVMF/AAVMF firmware on the host. An x86
deployment also supports non-UEFI images; that choice depends on the underlying hardware and
is left to the administrator.

The images for the above operating systems can be found on their respective websites.
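
As an illustration, a Cirros image can be downloaded and registered in Glance roughly as
follows (a sketch only, run from a controller with ``keystonercv3`` sourced; the URL and
image name are just examples)::

   root@ctl01:~# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
   root@ctl01:~# openstack image create --disk-format qcow2 --container-format bare \
                 --public --file cirros-0.4.0-x86_64-disk.img CirrosImage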

OpenStack Cinder is the project behind block storage in OpenStack, and Fuel@OPNFV supports LVM
out of the box. By default, x86 supports 2 additional block storage devices, while ARMBand
supports only one. More devices can be supported if the OS image used has additional properties
allowing block storage devices to be spawned as SCSI drives. To do this, add the properties
below to the image::

   $ openstack image set --property hw_disk_bus='scsi' --property hw_scsi_model='virtio-scsi' <image>
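
The resulting properties can be double-checked on the image (a quick verification sketch)::

   $ openstack image show <image> -c properties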

The choice regarding which bus to use for the storage drives is an important one. Virtio-blk is
the default choice for Fuel@OPNFV, which attaches the drives in ``/dev/vdX``. However, since we
want to be able to attach a larger number of volumes to the virtual machines, we recommend the
switch to SCSI drives, which are attached in ``/dev/sdX`` instead. Virtio-scsi is slightly worse
in terms of performance, but the ability to attach a larger number of drives, combined with
added features like ZFS, Ceph et al, leads us to suggest the use of virtio-scsi in Fuel@OPNFV
for both architectures.
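
As a quick illustration, a new volume attached to a guest booted from an image carrying the
properties above shows up as ``/dev/sdb`` rather than ``/dev/vdb`` (the names below are
placeholders)::

   $ openstack volume create --size 10 test_volume
   $ openstack server add volume <server> test_volume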

More details regarding the differences and performance of virtio-blk vs virtio-scsi are beyond
the scope of this manual, but can easily be found in other online sources like `4`_ or `5`_.

.. _4: https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/

.. _5: https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/

Additional options for configuring images in OpenStack can be found in the OpenStack Glance
documentation.

For each OpenStack service, three endpoints are created: ``admin``, ``internal`` and ``public``::

   ubuntu@ctl01:~$ openstack endpoint list --service keystone
   +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
   | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                          |
   +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
   | 008fec57922b4e9e8bf02c770039ae77 | RegionOne | keystone     | identity     | True    | internal  | http://172.16.10.26:5000/v3  |
   | 1a1f3c3340484bda9ef7e193f50599e6 | RegionOne | keystone     | identity     | True    | admin     | http://172.16.10.26:35357/v3 |
   | b0a47d42d0b6491b995d7e6230395de8 | RegionOne | keystone     | identity     | True    | public    | https://10.0.15.2:5000/v3    |
   +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+

MCP sets up all OpenStack services to talk to each other over unencrypted
connections on the internal management network. All admin/internal endpoints use
plain http, while the public endpoints are https connections terminated via nginx
at the VCP proxy VMs.

To access the public endpoints an SSL certificate has to be provided. For
convenience, the installation script will copy the required certificate to
the cfg01 node at ``/etc/ssl/certs/os_cacert``.

Copy the certificate from the cfg01 node to the client that will access the https
endpoints and place it under ``/etc/ssl/certs/``; the SSL connection will then be
established automatically. For example, to fetch the certificate from cfg01::

   $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 \
     "cat /etc/ssl/certs/os_cacert" | sudo tee /etc/ssl/certs/os_cacert

=============================
Reclass model viewer tutorial
=============================

In order to get a better understanding of the reclass model Fuel uses, the `reclass-doc
<https://github.com/jirihybek/reclass-doc>`_ tool can be used to visualise the reclass model.
A simplified installation can be done with the use of a Docker Ubuntu container. This
approach avoids installing packages on the host, which might collide with other packages.
After the installation is done, a web browser on the host can be used to view the results.

The host can be any device with the Docker package already installed.
The user who runs Docker needs to have root privileges.

#. Create a new directory at any location
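
   For example (the directory name ``modeler`` is just an illustration, chosen to match the
   mount point and result path used in the following steps)::

      $ mkdir -p modeler && cd modeler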

#. Place the Fuel repo in the above directory::

      $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel

#. Create a container and mount the above host directory::

      $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash

#. Install all the required packages inside the container::

      $ apt-get update
      $ apt-get install -y npm nodejs
      $ npm install -g reclass-doc
      $ cd /host/fuel/mcp/reclass
      $ ln -s /usr/bin/nodejs /usr/bin/node
      $ reclass-doc --output /host /host/fuel/mcp/reclass

#. View the results from the host by using a browser. The file to open should now be at
   ``modeler/index.html``.

   .. figure:: img/reclass_doc.png

.. _fuel_userguide_references:

1) :ref:`fuel-release-installation-label`
2) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
3) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/>`_
4) `Virtio performance <https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/>`_
5) `Virtio SCSI <https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/>`_