1 Bare Metal Installations:
2 =========================
4 Requirements as per Pharos:
5 ===========================
10 **Minimum 2 networks**
| ``1. First, an admin network with a gateway to access the external network``
| ``2. Second, a public network to be consumed by tenants for floating IPs``
**NOTE: JOID supports multiple isolated networks for data as well as storage,
depending on your network options for OpenStack.**
**Minimum 6 physical servers**

1. Jump host server:

| `` Minimum H/W Spec needed``
| `` Hard Disk: 1 (250 GB)``
| `` NIC: eth0 (Admin, Management), eth1 (external network)``
28 2. Node servers (minimum 5):
30 | `` Minimum H/W Spec``
| `` Hard Disk: 1 (1 TB); this includes the space for Ceph as well``
34 | `` NIC: eth0(Admin, Management), eth1 (external network)``
**NOTE: The above configuration is the minimum. For better performance and
usage of OpenStack, please consider a higher spec for each node.**
Make sure all servers are connected to the top-of-rack switch and configured accordingly. No DHCP server should be up and configured. Only the gateways on the eth0 and eth1 networks should be configured for access to the network outside your lab.
42 ------------------------
43 Jump node configuration:
44 ------------------------
1. Install the Ubuntu 14.04 LTS server version of the OS on the node.
2. Install the git and bridge-utils packages on the server and configure a minimum of two bridges, brAdm and brPublic, on the jump host, for example in ``/etc/network/interfaces``:
| `` # The loopback network interface``
| `` auto lo``
| `` iface lo inet loopback``
| `` # The primary network interface, enslaved to brAdm``
| `` auto eth0``
| `` iface eth0 inet manual``
| `` # Admin network bridge``
| `` auto brAdm``
| `` iface brAdm inet static``
| ``     address 10.4.1.1``
| ``     netmask 255.255.248.0``
| ``     network 10.4.0.0``
| ``     broadcast 10.4.7.255``
| ``     gateway 10.4.0.1``
| ``     # dns-* options are implemented by the resolvconf package, if installed``
| ``     dns-nameservers 10.4.0.2``
| ``     bridge_ports eth0``
| `` # Public network bridge``
| `` auto brPublic``
| `` iface brPublic inet static``
| ``     address 10.2.66.2``
| ``     netmask 255.255.255.0``
| ``     bridge_ports eth2``
**NOTE: If you choose to use separate networks for management, data and
storage, then you need to create a bridge for each network. In case of VLAN
tags, use the appropriate VLAN interface on the jump host, depending on the
VLAN ID of the network.**
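As an illustration, a bridge on a VLAN-tagged data interface could be defined as follows in ``/etc/network/interfaces`` (the interface name, VLAN ID and addressing here are placeholders, not values from any particular lab, and the ``vlan`` package is assumed to be installed):

```
# Hypothetical data bridge on VLAN 905 of eth1; adjust the names,
# VLAN ID and addressing to match your lab.
auto eth1.905
iface eth1.905 inet manual

auto brData
iface brData inet static
    address 10.4.9.1
    netmask 255.255.255.0
    bridge_ports eth1.905
```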
76 Configure JOID for your lab
77 ===========================
**Get the JOID code from Gerrit**
81 *git clone https://gerrit.opnfv.org/gerrit/p/joid.git*
**Enable MAAS (labconfig.yaml is a must, and is the base for the MAAS installation and scenario deployment)**
If you have already enabled MAAS for your environment and installed it, then there is no need to enable or install it again. If you have patches from a previous MAAS enablement, you can apply them here.
NOTE: If MAAS is pre-installed without 00-maasdeploy.sh, then do the following and skip the rest of the steps for enabling MAAS.
1. Copy the MAAS API key and paste it into ~/.juju/environments.yaml at the appropriate place.
2. Run the command cp ~/.juju/environments.yaml ./joid/ci/
3. Generate labconfig.yaml for your lab and copy it to joid:
   a. cp joid/labconfig/<company name>/<pod number>/labconfig.yaml joid/ci/ or
   b. cp <newly generated labconfig.yaml> joid/ci
4. python genMAASConfig.py -l labconfig.yaml > deployment.yaml
5. python genDeploymentConfig.py -l labconfig.yaml > deployconfig.yaml
6. cp ./environments.yaml ~/.juju/
7. cp ./deployment.yaml ~/.juju/
8. cp ./labconfig.yaml ~/.juju/
9. cp ./deployconfig.yaml ~/.juju/
If you are enabling MAAS for the first time, then follow these further steps.
- Create a directory in joid/labconfig/<company name>/<pod number>/, for example
105 *mkdir joid/labconfig/intel/pod7/*
- copy labconfig.yaml from pod5 to pod7
108 *cp joid/labconfig/intel/pod5/\* joid/labconfig/intel/pod7/*
1. Make sure the jump host has been configured with a bridge on each interface,
so that the appropriate MAAS and Juju bootstrap VMs can be created. For example,
if you have three networks (admin, data and public), then names such as brAdm,
brData and brPublic are suggested.
2. Make sure you have the MAC addresses and the power management details (IPMI IP, username, password) of the nodes used as control and compute nodes.
123 ---------------------
124 modify labconfig.yaml
125 ---------------------
This file is used to configure MAAS and the Juju bootstrap node in a VM.
Comments in the file are self-explanatory; fill in the information to match
your lab infrastructure. A sample labconfig.yaml can be found at
131 https://gerrit.opnfv.org/gerrit/gitweb?p=joid.git;a=blob;f=labconfig/intel/pod6/labconfig.yaml
The sample file repeats a node entry like the one below for each of the five
nodes, listing the node's roles and the MAC address of its admin interface:

| `` roles: [network,control]``
| `` mac: ["xx:xx:xx:xx:xx:xx"]``

It also sets the floating IP range of the public network and an ``ipaddress``
value:

| `` floating-ip-range: 10.5.15.6,10.5.15.250,10.5.15.254,10.5.15.0/24``
| `` ipaddress: 10.2.117.92``
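The ``floating-ip-range`` value packs four comma-separated fields. Judging from the sample value, they appear to be the first floating IP, the last floating IP, the gateway, and the CIDR of the public network; this field order is an assumption from the sample, not something stated in the file itself. A small shell sketch of how the value splits:

```shell
# Split the floating-ip-range value into its four fields
# (assumed order: first IP, last IP, gateway, CIDR).
range="10.5.15.6,10.5.15.250,10.5.15.254,10.5.15.0/24"
IFS=',' read -r first last gateway cidr <<< "$range"
echo "first=$first last=$last gateway=$gateway cidr=$cidr"
```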
NOTE: If you are using a VLAN-tagged network, then make sure you modify the ``case $1`` section under "Enable vlan interface with maas" appropriately.
229 enableautomodebyname eth2 AUTO "10.4.9.0/24" compute || true
230 enableautomodebyname eth2 AUTO "10.4.9.0/24" control || true
233 Deployment of OPNFV using JOID:
234 ===============================
Once you have made the changes described in the sections above, run the following commands to perform the automatic deployment.
After integrating the changes as mentioned above, run the MAAS install,
then run the commands below to start the MAAS deployment.
`` ./00-maasdeploy.sh custom <absolute path of config>/labconfig.yaml ``

or

`` ./00-maasdeploy.sh custom http://<web site location>/labconfig.yaml ``
253 | `` ./deploy.sh -o mitaka -s odl -t ha -l custom -f none -d xenial``
258 NOTE: Possible options are as follows:
choose which SDN controller to use.
[-s <nosdn|odl|opencontrail|onos>]
nosdn: Open vSwitch only, with no other SDN.
odl: OpenDaylight Lithium version.
opencontrail: OpenContrail SDN; currently it can only be installed with the Juno OpenStack release.
onos: ONOS framework as SDN.
mode of the OpenStack deployment.
[-t <nonha|ha>]
nonha: non-HA mode of OpenStack.
ha: HA mode of OpenStack.
which version of OpenStack to deploy.
[-o <liberty|mitaka>]
liberty: Liberty release of OpenStack.
mitaka: Mitaka release of OpenStack.
which lab to deploy in.
[-l <custom|default|intelpod5>] etc.
custom: for bare metal deployment where labconfig.yaml is provided externally and is not part of JOID.
default: for virtual deployment where installation will be done on KVM created using ./00-maasdeploy.sh.
intelpod5: install on bare metal OPNFV pod5 in the Intel lab.
which features to deploy; comma-separated list.
[-f <lxd|dvr|sfc|dpdk|ipv6|none>]
none: no special feature will be enabled.
ipv6: IPv6 will be enabled for tenants in OpenStack.
lxd: the hypervisor will be LXD rather than KVM.
dvr: distributed virtual routing will be enabled.
dpdk: the DPDK feature will be enabled.
sfc: the SFC feature will be enabled; only supported with ONOS deployments.
which Ubuntu release to use.
[-d <trusty|xenial>]
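The options above combine into a single command line. As a sketch, a minimal virtual deployment could be composed like this (the option values are examples only; choose the ones that match your lab):

```shell
# Compose a deploy.sh invocation from the option values
# (example values, not a recommendation for any particular lab).
openstack="liberty"   # -o: OpenStack release
sdn="nosdn"           # -s: SDN controller
mode="nonha"          # -t: HA mode
lab="default"         # -l: lab (default = virtual deployment on KVM)
feature="none"        # -f: extra features
distro="trusty"       # -d: Ubuntu release
cmd="./deploy.sh -o $openstack -s $sdn -t $mode -l $lab -f $feature -d $distro"
echo "$cmd"
```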
OPNFV Scenarios in JOID
=======================

The following OPNFV scenarios can be deployed using JOID. A separate YAML bundle will be created to deploy each individual scenario.
=========================  ======  =====================================
Scenario                   Owner   Known Issues
=========================  ======  =====================================
os-nosdn-nofeature-ha      Joid
os-nosdn-nofeature-noha    Joid
os-odl_l2-nofeature-ha     Joid
os-nosdn-lxd-ha            Joid    Yardstick team is working to support.
os-nosdn-lxd-noha          Joid    Yardstick team is working to support.
os-onos-nofeature-ha       ONOSFW
os-onos-sfc-ha             ONOSFW
=========================  ======  =====================================
By default, debug is enabled in the script, and error messages will be printed on the SSH terminal where you are running the scripts.
To access any control or compute node:

juju ssh <service name>

For example, to log into the openstack-dashboard container:
320 juju ssh openstack-dashboard/0
321 juju ssh nova-compute/0
322 juju ssh neutron-gateway/0
By default, Juju will add the Ubuntu user's keys for authentication into the deployed servers, and only SSH access will be available.