========================================================================
OPNFV Release Notes for the Fraser release of OPNFV Apex deployment tool
========================================================================
This document provides the release notes for the Fraser release with the Apex
deployment toolchain.

All Apex and "common" entities are protected by the Apache 2.0 License
( http://www.apache.org/licenses/ )
This is the OPNFV Fraser release that implements the deploy stage of the
OPNFV CI pipeline via Apex.

Apex is based on RDO's TripleO installation toolchain.
More information at http://rdoproject.org

Carefully follow the installation instructions, which guide a user on how to
deploy OPNFV using the Apex installer.
The Fraser release with the Apex deployment toolchain will establish an OPNFV
target system on a Pharos compliant lab infrastructure. The current definition
of an OPNFV target system is OpenStack Pike combined with an SDN
controller, such as OpenDaylight. The system is deployed with OpenStack High
Availability (HA) for most OpenStack services. SDN controllers are deployed
on every controller unless deploying with one of the HA FD.IO scenarios. Ceph
storage is used as the Cinder backend, and is the only supported storage for
Fraser. Ceph is set up as 3 OSDs and 3 Monitors, one OSD+Mon per controller
node in an HA setup. Apex also supports non-HA deployments, which deploy a
single controller and n number of compute nodes. Furthermore, Apex is
capable of deploying scenarios in a bare metal or virtual fashion. Virtual
deployments use multiple VMs on the Jump Host and internal networking to
simulate a bare metal deployment.
- Documentation is built by Jenkins
- .iso image is built by Jenkins
- .rpm packages are built by Jenkins
- Jenkins deploys a Fraser release with the Apex deployment toolchain to
  bare metal, which includes 3 control+network nodes and 2 compute nodes.
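A deployment of the kind described above is driven from the Jump Host by the
``opnfv-deploy`` tool. The sketch below shows what a virtual deployment
invocation typically looks like; the flag names and settings file paths are
illustrative assumptions and should be checked against the installation
instructions for this release::

    # Illustrative sketch: deploy a virtual OPNFV environment on the Jump Host.
    # The -v flag requests a virtual (VM-based) deployment; the network and
    # deploy settings files shown here are example names, not authoritative.
    sudo opnfv-deploy -v \
        -n /etc/opnfv-apex/network_settings.yaml \
        -d /etc/opnfv-apex/os-odl-nofeature-ha.yaml

The same tool drives bare metal deployments when an inventory of physical
nodes is supplied instead of the virtual flag.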
55 +--------------------------------------+--------------------------------------+
56 | **Project** | apex |
58 +--------------------------------------+--------------------------------------+
59 | **Repo/tag** | opnfv-6.0.0 |
61 +--------------------------------------+--------------------------------------+
62 | **Release designation** | 6.0.0 |
64 +--------------------------------------+--------------------------------------+
65 | **Release date** | 2018-04-30 |
67 +--------------------------------------+--------------------------------------+
68 | **Purpose of the delivery** | OPNFV Fraser release |
70 +--------------------------------------+--------------------------------------+
Module version changes
~~~~~~~~~~~~~~~~~~~~~~
This is the first tracked version of the Fraser release with the Apex
deployment toolchain. It is based on the following upstream versions:

- OpenStack (Pike release)

- OpenDaylight (Nitrogen/Oxygen releases)
Document Version Changes
~~~~~~~~~~~~~~~~~~~~~~~~

This is the first tracked version of the Fraser release with the Apex
deployment toolchain.

The following documentation is provided with this release:

- OPNFV Installation instructions for the Fraser release with the Apex
  deployment toolchain - ver. 1.0.0
- OPNFV Release Notes for the Fraser release with the Apex deployment
  toolchain - ver. 1.0.0 (this document)
Software Deliverables
~~~~~~~~~~~~~~~~~~~~~
- Apex .rpm (python34-opnfv-apex)
- build.py - Builds the above artifact
- opnfv-deploy - Automatically deploys a Target OPNFV System
- opnfv-clean - Automatically resets a Target OPNFV Deployment
- opnfv-util - Utility to connect to or debug Overcloud nodes and OpenDaylight
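As a usage sketch of the tools listed above, a typical clean/redeploy/debug
cycle might look as follows. The settings file names and the ``opnfv-util``
argument are illustrative assumptions; consult the installation instructions
for the exact options supported by this release::

    # Illustrative sketch, not an authoritative command reference.
    # Reset any previous Target OPNFV Deployment on this Jump Host.
    sudo opnfv-clean

    # Redeploy using network and deploy settings files
    # (file names here are examples only).
    sudo opnfv-deploy -n network_settings.yaml -d deploy_settings.yaml

    # Connect to a deployment node for debugging
    # (target name is an example only).
    sudo opnfv-util undercloud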
Documentation Deliverables
~~~~~~~~~~~~~~~~~~~~~~~~~~
- OPNFV Installation instructions for the Fraser release with the Apex
  deployment toolchain - ver. 6.0
- OPNFV Release Notes for the Fraser release with the Apex deployment
  toolchain - ver. 6.0 (this document)
Known Limitations, Issues and Workarounds
=========================================

**Max number of blades:** 1 Apex undercloud, 3 Controllers, 20 Compute blades

**Min number of blades:** 1 Apex undercloud, 1 Controller, 1 Compute blade

**Storage:** Ceph is the only supported storage configuration.

**Min master requirements:** At least 16GB of RAM for a baremetal Jump Host,
24GB for virtual deployments (noHA).
137 +--------------------------------------+--------------------------------------+
138 | **JIRA REFERENCE** | **SLOGAN** |
140 +--------------------------------------+--------------------------------------+
141 | JIRA: APEX-280 | Deleted network not cleaned up |
143 +--------------------------------------+--------------------------------------+
144 | JIRA: APEX-295 | Missing support for VLAN tenant |
146 +--------------------------------------+--------------------------------------+
147 | JIRA: APEX-368 | Ceilometer stores samples and events |
149 +--------------------------------------+--------------------------------------+
150 | JIRA: APEX-371 | Ceph partitions need to be prepared |
151 | | on deployment when using 2nd disk |
152 +--------------------------------------+--------------------------------------+
153 | JIRA: APEX-375 | Default glance storage points to |
154 | | http,swift when ceph disabled |
155 +--------------------------------------+--------------------------------------+
156 | JIRA: APEX-389 | Compute kernel parameters are used |
158 +--------------------------------------+--------------------------------------+
159 | JIRA: APEX-412 | Install failures with UEFI |
160 +--------------------------------------+--------------------------------------+
161 | JIRA: APEX-425 | Need to tweak performance settings |
162 | | virtual DPDK scenarios |
163 +--------------------------------------+--------------------------------------+
Please reference Functest project documentation for test results with the
Apex deployment toolchain.
For more information on the OPNFV Fraser release, please see:

http://wiki.opnfv.org/releases/Fraser

:Authors: Tim Rozet (trozet@redhat.com)
:Authors: Dan Radez (dradez@redhat.com)