1 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
2 .. http://creativecommons.org/licenses/by/4.0
3 .. (c) Open Platform for NFV Project, Inc. and its contributors
This document contains details about how to use OPNFV Fuel - Fraser
release - after it was deployed. For details on how to deploy OPNFV Fuel, check
the installation instructions in the :ref:`fuel_userguide_references` section.
This is a unified documentation for both x86_64 and aarch64
architectures. All information applies to both architectures
unless explicitly stated otherwise.
23 Fuel uses several networks to deploy and administer the cloud:
25 +------------------+---------------------------------------------------------+
26 | Network name | Description |
28 +==================+=========================================================+
29 | **PXE/ADMIN** | Used for booting the nodes via PXE and/or Salt |
31 +------------------+---------------------------------------------------------+
32 | **MCPCONTROL** | Used to provision the infrastructure VMs (Salt & MaaS) |
33 +------------------+---------------------------------------------------------+
34 | **Mgmt** | Used for internal communication between |
35 | | OpenStack components |
36 +------------------+---------------------------------------------------------+
37 | **Internal** | Used for VM data communication within the |
38 | | cloud deployment |
39 +------------------+---------------------------------------------------------+
40 | **Public** | Used to provide Virtual IPs for public endpoints |
41 | | that are used to connect to OpenStack services APIs. |
42 | | Used by Virtual machines to access the Internet |
43 +------------------+---------------------------------------------------------+
These networks, except mcpcontrol, can be Linux bridges configured on the Jumpserver before the
deployment. If they do not exist at deploy time, they will be created by the scripts as virsh
networks.
Mcpcontrol exists only on the Jumpserver and needs to be a virtual network, because a DHCP server runs
on this network and assigns static host entry IPs to the Salt and MaaS VMs.
Access to any component of the deployed cloud is done from the Jumpserver as user *ubuntu*, using
the ssh key */var/lib/opnfv/mcp.rsa*. The example below shows a connection to the Salt master.
64 $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
**Note**: The Salt master IP is not hardcoded; it is configurable via INSTALLER_IP during deployment.
Logging in to cluster nodes is possible from the Jumpserver and from the Salt master. On the Salt master,
cluster hostnames can be used instead of IP addresses:
74 $ ssh -i mcp.rsa ubuntu@ctl01
76 User *ubuntu* has sudo rights.
79 =============================
80 Exploring the Cloud with Salt
81 =============================
To gather information about the cloud, Salt commands can be used. Salt is based
on a master-minion concept, where the Salt master pushes configuration to the minions to
execute actions.

For example, tell Salt to execute a ping to 8.8.8.8 on all the nodes.
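A minimal sketch of such a command, reusing the *cmd.run* module shown in the examples further below:

root@cfg01:~$ salt "*" cmd.run 'ping -c 3 8.8.8.8'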
89 .. figure:: img/saltstack.png
Complex filters can be applied to the target, such as compound queries or node roles.
92 For more information about Salt see the :ref:`fuel_userguide_references` section.
Some examples are listed below. Note that these commands are issued from the Salt master as the *root* user.
98 #. View the IPs of all the components
102 root@cfg01:~$ salt "*" network.ip_addrs
103 cfg01.mcp-pike-odl-ha.local:
106 mas01.mcp-pike-odl-ha.local:
110 .........................
#. View the interfaces of all the components and put the output in a YAML-formatted file
117 root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
118 root@cfg01:~# cat interfaces.yaml
119 cfg01.mcp-pike-odl-ha.local:
121 hwaddr: 52:54:00:72:77:12
124 broadcast: 10.20.0.255
126 netmask: 255.255.255.0
128 - address: fe80::5054:ff:fe72:7712
132 .........................
#. View installed packages on the MaaS node
139 root@cfg01:~# salt "mas*" pkg.list_pkgs
140 mas01.mcp-pike-odl-ha.local:
152 .........................
#. Execute any Linux command on all nodes (list the contents of */var/log* in this example)
159 root@cfg01:~# salt "*" cmd.run 'ls /var/log'
160 cfg01.mcp-pike-odl-ha.local:
166 cloud-init-output.log
168 .........................
#. Execute any Linux command on nodes using a compound query filter
175 root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
176 cfg01.mcp-pike-odl-ha.local:
182 cloud-init-output.log
184 .........................
#. Execute any Linux command on nodes using a role filter
191 root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
192 cmp001.mcp-pike-odl-ha.local:
200 cloud-init-output.log
202 .........................
Once the deployment is complete, the OpenStack CLI is accessible from the controller VMs (ctl01..ctl03).
OpenStack credentials are at */root/keystonercv3*.
215 root@ctl01:~# source keystonercv3
216 root@ctl01:~# openstack image list
217 +--------------------------------------+-----------------------------------------------+--------+
218 | ID | Name | Status |
219 +======================================+===============================================+========+
220 | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
221 | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
222 +--------------------------------------+-----------------------------------------------+--------+
The OpenStack Dashboard, Horizon, is available at http://<proxy public VIP>.
The administrator credentials are *admin*/*opnfv_secret*.
228 .. figure:: img/horizon_login.png
A full list of IPs/services is available at <proxy public VIP>:8090 for baremetal deployments.
233 .. figure:: img/salt_services_ip.png
235 ==============================
236 Guest Operating System Support
237 ==============================
A number of guest operating systems can be spawned on the nodes. The current system spawns
virtual machines for the VCP VMs on the KVM nodes and user-requested VMs on the OpenStack
compute nodes. Currently, the system supports the following UEFI images for the guests:
244 +------------------+-------------------+------------------+
245 | OS name | x86_64 status | aarch64 status |
246 +==================+===================+==================+
247 | Ubuntu 17.10 | untested | Full support |
248 +------------------+-------------------+------------------+
249 | Ubuntu 16.04 | Full support | Full support |
250 +------------------+-------------------+------------------+
251 | Ubuntu 14.04 | untested | Full support |
252 +------------------+-------------------+------------------+
253 | Fedora atomic 27 | untested | Full support |
254 +------------------+-------------------+------------------+
255 | Fedora cloud 27 | untested | Full support |
256 +------------------+-------------------+------------------+
257 | Debian | untested | Full support |
258 +------------------+-------------------+------------------+
259 | Centos 7 | untested | Not supported |
260 +------------------+-------------------+------------------+
261 | Cirros 0.3.5 | Full support | Full support |
262 +------------------+-------------------+------------------+
263 | Cirros 0.4.0 | Full support | Full support |
264 +------------------+-------------------+------------------+
The above table covers only UEFI images and implies OVMF/AAVMF firmware on the host. An x86 deployment
also supports non-UEFI images, however that choice is up to the underlying hardware and the administrator.

The images for the above operating systems can be found on their respective websites.
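As an illustration, a downloaded cloud image can be registered in Glance via the OpenStack CLI; the image
file name below is only a placeholder for the file actually downloaded:

root@ctl01:~# source keystonercv3
root@ctl01:~# openstack image create --disk-format qcow2 --container-format bare \
    --file ./xenial-server-cloudimg-amd64-uefi1.img Ubuntu16.04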
OpenStack Cinder is the project behind block storage in OpenStack, and Fuel@OPNFV supports LVM out of the box.
By default, x86 supports two additional block storage devices, while Armband (aarch64) supports only one.
More devices can be supported if the OS image created has additional properties allowing block storage devices
to be spawned as SCSI drives. To do this, add the properties below to the image:
285 openstack image set --property hw_disk_bus='scsi' --property hw_scsi_model='virtio-scsi' <image>
The choice of which bus to use for the storage drives is an important one. Virtio-blk is the default
choice for Fuel@OPNFV and attaches the drives as /dev/vdX. However, since we want to be able to attach a
larger number of volumes to the virtual machines, we recommend switching to SCSI drives, which are attached
as /dev/sdX instead. Virtio-scsi is slightly worse in terms of performance, but the ability to attach a
larger number of drives, combined with added features like ZFS, Ceph et al., leads us to suggest the use of
virtio-scsi in Fuel@OPNFV for both architectures.
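For illustration, assuming an image tagged with the properties above and an existing instance named *vm01*
(a placeholder name), a new Cinder volume can be created and attached as follows; inside the guest it will
then appear as a */dev/sdX* SCSI drive instead of */dev/vdX*:

root@ctl01:~# source keystonercv3
root@ctl01:~# openstack volume create --size 10 data-vol
root@ctl01:~# openstack server add volume vm01 data-vol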
293 More details regarding the differences and performance of virtio-blk vs virtio-scsi are beyond the scope
294 of this manual but can be easily found in other sources online like `4`_ or `5`_.
296 .. _4: https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/
.. _5: https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/
Additional configuration options for images in OpenStack can be found in the OpenStack Glance documentation.
For each OpenStack service, three endpoints are created: admin, internal and public.
312 ubuntu@ctl01:~$ openstack endpoint list --service keystone
313 +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
314 | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
315 +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
316 | 008fec57922b4e9e8bf02c770039ae77 | RegionOne | keystone | identity | True | internal | http://172.16.10.26:5000/v3 |
317 | 1a1f3c3340484bda9ef7e193f50599e6 | RegionOne | keystone | identity | True | admin | http://172.16.10.26:35357/v3 |
318 | b0a47d42d0b6491b995d7e6230395de8 | RegionOne | keystone | identity | True | public | https://10.0.15.2:5000/v3 |
319 +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
MCP sets up all OpenStack services to talk to each other over unencrypted
322 connections on the internal management network. All admin/internal endpoints use
323 plain http, while the public endpoints are https connections terminated via nginx
324 at the VCP proxy VMs.
To access the public endpoints, an SSL certificate has to be provided. For
convenience, the installation script will copy the required certificate to the
cfg01 node at /etc/ssl/certs/os_cacert.
Copy the certificate from the cfg01 node to the client that will access the https
endpoints and place it under /etc/ssl/certs. The SSL connection will then be
established automatically.
336 $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 \
337 "cat /etc/ssl/certs/os_cacert" | sudo tee /etc/ssl/certs/os_cacert
340 =============================
341 Reclass model viewer tutorial
342 =============================
To get a better understanding of the reclass model Fuel uses, `reclass-doc
<https://github.com/jirihybek/reclass-doc>`_ can be used to visualise the reclass model.
A simplified installation can be done using a Docker Ubuntu container. This
approach avoids installing packages on the host, which might collide with other packages.
After the installation is done, a web browser on the host can be used to view the results.
**NOTE**: The host can be any machine with the Docker package already installed.
The user running Docker needs to have root privileges.
358 #. Create a new directory at any location
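For example (the directory name *modeler* matches the path used in the later steps):

$ mkdir -p modeler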
#. Clone the fuel repository into the above directory
370 $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel
373 #. Create a container and mount the above host directory
377 $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash
380 #. Install all the required packages inside the container.
$ apt-get update
$ apt-get install -y npm nodejs
386 $ npm install -g reclass-doc
387 $ cd /host/fuel/mcp/reclass
388 $ ln -s /usr/bin/nodejs /usr/bin/node
389 $ reclass-doc --output /host /host/fuel/mcp/reclass
#. View the results from the host by using a browser. The file to open should now be at *modeler/index.html*
394 .. figure:: img/reclass_doc.png
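If the machine running Docker is headless or accessed remotely, an optional convenience is to serve the
output directory over HTTP (using Python's built-in web server, assumed to be available on the host) and
open it from another machine:

$ cd modeler && python3 -m http.server 8000

The result is then reachable at http://<host IP>:8000/index.html.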
397 .. _fuel_userguide_references:
403 1) :ref:`fuel-release-installation-label`
404 2) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
405 3) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/>`_
406 4) `Virtio performance <https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/>`_
407 5) `Virtio SCSI <https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/>`_