This document explains how to install OPNFV Brahmaputra with JOID, including installing JOID, configuring JOID for your environment, and deploying OPNFV with different SDN solutions in HA or non-HA mode. Prerequisites include:
6 - An Ubuntu 14.04 LTS Server Jumphost
7 - Minimum 2 Networks per Pharos requirement
9 - One for the administrative network with gateway to access the Internet
10 - One for the OpenStack public network to access OpenStack instances via floating IPs
- JOID supports multiple isolated networks for data as well as storage, based on your network requirements for OpenStack.
13 - Minimum 6 Physical servers for bare metal environment
15 - Jump Host x 1, minimum H/W configuration:
19 - Hard Disk: 1 (250GB)
20 - NIC: eth0 (Admin, Management), eth1 (external network)
22 - Control Node x 3, minimum H/W configuration:
26 - Hard Disk: 1 (500GB)
27 - NIC: eth0 (Admin, Management), eth1 (external network)
29 - Compute Node x 2, minimum H/W configuration:
33 - Hard Disk: 1 (1TB), this includes the space for Ceph.
34 - NIC: eth0 (Admin, Management), eth1 (external network)
**NOTE**: The above configuration is the minimum. For better performance and usage of OpenStack, please consider higher specs for all nodes.
Make sure all servers are connected to the top-of-rack switch and configured accordingly. No DHCP server should be up and configured. Configure gateways only on the eth0 and eth1 networks to access networks outside your lab.
44 JOID as Juju OPNFV Infrastructure Deployer allows you to deploy different combinations of
45 OpenStack release and SDN solution in HA or non-HA mode. For OpenStack, JOID supports
Juno and Liberty. For SDN, it supports Open vSwitch, OpenContrail, OpenDaylight, and ONOS. In addition to HA or non-HA mode, it also supports deploying from the latest development tree.
48 JOID heavily utilizes the technology developed in Juju and MAAS. Juju is a
49 state-of-the-art, open source, universal model for service oriented architecture and
50 service oriented deployments. Juju allows you to deploy, configure, manage, maintain,
51 and scale cloud services quickly and efficiently on public clouds, as well as on physical
52 servers, OpenStack, and containers. You can use Juju from the command line or through its
53 powerful GUI. MAAS (Metal-As-A-Service) brings the dynamism of cloud computing to the
54 world of physical provisioning and Ubuntu. Connect, commission and deploy physical servers
55 in record time, re-allocate nodes between services dynamically, and keep them up to date;
56 and in due course, retire them from use. In conjunction with the Juju service
57 orchestration software, MAAS will enable you to get the most out of your physical hardware
58 and dynamically deploy complex services with ease and confidence.
60 For more info on Juju and MAAS, please visit https://jujucharms.com/ and http://maas.ubuntu.com.
64 The MAAS server is installed and configured in a VM on the Ubuntu 14.04 LTS Jump Host with
65 access to the Internet. Another VM is created to be managed by MAAS as a bootstrap node
66 for Juju. The rest of the resources, bare metal or virtual, will be registered and
67 provisioned in MAAS. And finally the MAAS environment details are passed to Juju for use.
71 We will use MAAS-deployer to automate the deployment of MAAS clusters for use as a Juju provider. MAAS-deployer uses a set of configuration files and simple commands to build a MAAS cluster using virtual machines for the region controller and bootstrap hosts and automatically commission nodes as required so that the only remaining step is to deploy services with Juju. For more information about the maas-deployer, please see https://launchpad.net/maas-deployer.
73 Configuring the Jump Host
74 ^^^^^^^^^^^^^^^^^^^^^^^^^
75 Let's get started on the Jump Host node.
77 The MAAS server is going to be installed and configured in a virtual machine. We need to create bridges on the Jump Host prior to setting up the MAAS-deployer.
**NOTE**: Please do not run any of the commands in this document as the 'root' user. Create a non-root user account instead; we recommend using the 'ubuntu' user.
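For reference, here is a minimal sketch of creating such an account on the Jump Host (the group names are assumptions for a stock Ubuntu 14.04 install; adjust them to your environment)::

$ sudo adduser ubuntu
$ sudo usermod -aG sudo ubuntu
# only needed if this user will manage the local KVM/virsh VMs
$ sudo usermod -aG libvirtd ubuntu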
81 Install the bridge-utils package on the Jump Host and configure a minimum of two bridges, one for the Admin network, the other for the Public network:
85 $ sudo apt-get install bridge-utils
87 $ cat /etc/network/interfaces
88 # This file describes the network interfaces available on your system
89 # and how to activate them. For more information, see interfaces(5).
91 # The loopback network interface
93 iface lo inet loopback
95 iface p1p1 inet manual
98 iface brAdm inet static
100 netmask 255.255.255.0
103 iface p1p2 inet manual
106 iface brPublic inet static
108 netmask 255.255.240.0
110 dns-nameservers 8.8.8.8
**NOTE**: If you choose to use separate networks for management, data, and storage, then you need to create a bridge for each interface. If VLAN tags are used, create the appropriate VLAN interface on the Jump Host and base the corresponding bridge on it.
115 **NOTE**: The Ethernet device names can vary from one installation to another. Please change the Ethernet device names according to your environment.
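After editing /etc/network/interfaces, you can bring the bridges up and verify them; an illustrative sequence using the bridge names from the example above (you may need to ifdown the physical interfaces first, or simply reboot)::

$ sudo ifup brAdm && sudo ifup brPublic
$ brctl show            # the physical NICs should be attached to the bridges
$ ip addr show brAdm    # the static address should be assigned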
117 MAAS-deployer has been integrated in the JOID project. To get the JOID code, please run
121 $ sudo apt-get install git
122 $ git clone https://gerrit.opnfv.org/gerrit/p/joid.git
124 Setting Up Your Environment for JOID
125 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
126 To set up your own environment, create a directory in joid/ci/maas/<company name>/<pod number>/ and copy an existing JOID environment over. For example:
131 $ mkdir -p maas/myown/pod
132 $ cp maas/juniper/pod1/deployment.yaml maas/myown/pod/
134 Now let's configure MAAS-deployer by editing the deployment.yaml file. Let's review each section. We will use the Juniper pod deployment.yaml as an example.
138 # This file defines the deployment for the MAAS environment which is to be
139 # deployed and automated.
142 # Defines the general setup for the MAAS environment, including the
143 # username and password for the host as well as the MAAS server.
'demo-maas' is the environment name we set; it will be used by Juju. The username and password will be the login credentials for the MAAS server VM and also for the MAAS server web UI.
151 # Contains the virtual machine parameters for creating the MAAS virtual
152 # server. Here you can configure the name of the virsh domain, the
153 # parameters for how the network is attached.
154 name: opnfv-maas-juniper
155 interfaces: ['bridge=brAdm,model=virtio', 'bridge=brPublic,model=virtio']
When it's configured, you will see a KVM VM created and named 'opnfv-maas-juniper' on the Jump Host, with 2 network interfaces configured and connected to brAdm and brPublic on the host. You may want to increase the vcpu number and disk size for the VM depending on the resources available.
169 # Apt http proxy setting(s)
If your environment uses an HTTP proxy, please enter its information here. In addition, add the MAAS and Juju PPA locations here.
180 # Virsh power settings
181 # Specifies the uri and keys to use for virsh power control of the
182 # juju virtual machine. If the uri is omitted, the value for the
183 # --remote is used. If no power settings are desired, then do not
184 # supply the virsh block.
186 rsa_priv_key: /home/ubuntu/.ssh/id_rsa
187 rsa_pub_key: /home/ubuntu/.ssh/id_rsa.pub
188 uri: qemu+ssh://ubuntu@172.16.50.51/system
190 # Defines the IP Address that the configuration script will use
191 # to access the MAAS controller via SSH.
192 ip_address: 172.16.50.50
194 This section defines MAAS server IP (172.16.50.50) and the virsh power settings. The Juju bootstrap VM is defined later.
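It is worth confirming that the virsh power control settings will actually work before deploying. A quick sanity check, run from a host that holds the private key listed above (the uri and addresses are the ones from this example deployment.yaml)::

# should list the libvirt domains without prompting for a password
$ virsh -c qemu+ssh://ubuntu@172.16.50.51/system list --all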
198 # This section allows the user to set a series of options on the
199 # MAAS server itself. The list of config options can be found in
200 # the upstream MAAS documentation:
201 # - http://maas.ubuntu.com/docs/api.html#maas-server
203 main_archive: http://us.archive.ubuntu.com/ubuntu
204 upstream_dns: 8.8.8.8
205 maas_name: juniperpod1
206 # kernel_opts: "console=tty0 console=ttyS1,115200n8"
207 # ntp_server: ntp.ubuntu.com
209 Here we specify some settings for the MAAS server itself. Once MAAS is deployed, you will find these settings on http://172.16.50.50/MAAS/settings/.
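The same settings can also be inspected or changed from the command line once MAAS is up. A sketch, assuming the MAAS admin user is 'ubuntu' as in the example deployment.yaml and that you log in with a profile named 'maas', matching the other examples in this guide::

# on the MAAS server VM: print the admin user's API key
$ sudo maas-region-admin apikey --username=ubuntu
# from the Jump Host: log in to the MAAS API
$ maas login maas http://172.16.50.50/MAAS/api/1.0 <api-key>
# read or change a setting; the API endpoint is also called 'maas', hence the repetition
$ maas maas maas get-config name=upstream_dns
$ maas maas maas set-config name=upstream_dns value=8.8.8.8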
213 # This section is used to define the networking parameters for when
214 # the node first comes up. It is fed into the meta-data cloud-init
215 # configuration and is used to configure the networking piece of the
216 # service. The contents of this section are written directly to the
217 # /etc/network/interfaces file.
219 # Please note, this is slightly different than the
220 # node-group-interfaces section below. This will configure the
221 # machine's networking params, and the node-group-interfaces will
222 # configure the maas node-group interfaces which is used for
223 # controlling the dhcp, dns, etc.
226 iface lo inet loopback
229 iface eth0 inet static
231 netmask 255.255.255.0
233 broadcast 172.16.50.255
234 dns-nameservers 8.8.8.8 127.0.0.1
237 iface eth1 inet static
239 netmask 255.255.240.0
241 broadcast 10.10.15.255
244 This section defines the MAAS server's network interfaces. Once MAAS is deployed, you will find this setting at /etc/network/interfaces in the MAAS VM.
248 # The node-group-interfaces section is used to configure the MAAS
249 # network interfaces. Basic configuration is supported, such as which
250 # device should be bound, the range of IP addresses, etc.
251 # Note: this may contain the special identifiers:
252 # ${maas_net} - the first 3 octets of the ipv4 address
253 # ${maas_ip} - the ip address of the MAAS controller
257 subnet_mask: 255.255.255.0
258 broadcast_ip: 172.16.50.255
259 router_ip: 172.16.50.50
267 This section configures the MAAS cluster controller. Here it configures the MAAS cluster
268 to provide DHCP and DNS services on the eth0 interface with dynamic and static IP ranges
269 defined. You should allocate enough IP addresses for bare metal hosts in the static IP
270 range, and allocate as many as possible in the dynamic IP range.
274 # Defines the physical nodes which are added to the MAAS cluste
275 # controller upon startup of the node.
277 - name: 2-R4N4B2-control
279 architecture: amd64/generic
281 - "0c:c4:7a:16:2a:70"
289 - name: 3-R4N3B1-compute
291 architecture: amd64/generic
293 - "0c:c4:7a:53:57:c2"
302 This section defines the physical nodes to be added to the MAAS cluster controller. For
303 example, the first node here is named ‘2-R4N4B2-control’, with a tag 'control' and
304 architecture specified as amd64/generic. You will need to know the MAC address of the
305 network interface of the node where it can reach MAAS server; it's the network interface
306 of the node to PXE boot on. You need to tell MAAS how to power control the node by
providing the BMC IP address and BMC admin credentials. MAAS power control not only supports IPMI v2.0, but also virsh, Cisco UCS Manager, HP Moonshot iLO, and Microsoft OCS, among others. The tag is used here with Juju constraints to make sure that a
310 particular service gets deployed only on hardware with the tag you created. Later when we
311 go through the Juju deploy bundle, you will see the constraints setting.
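Since MAAS will power-cycle the nodes through their BMCs, it helps to verify IPMI reachability before enlisting them. An illustrative check with ipmitool (the BMC address and credentials below are placeholders)::

$ sudo apt-get install ipmitool
$ ipmitool -I lanplus -H <bmc-ip> -U <bmc-user> -P <bmc-password> chassis power status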
315 # Contains the virtual machine parameters for creating the Juju bootstrap
316 # node virtual machine
319 interfaces: ['bridge=brAdm,model=virtio', 'bridge=brPublic,model=virtio']
326 The last section of the example deployment.yaml file defines the Juju bootstrap VM node.
When it's configured, you will see a KVM VM created and named 'juju-bootstrap' on the Jump
328 Host with 2 network interfaces configured and connected to brAdm and brPublic on the host.
329 You may want to increase the vcpu number and disk size for the VM depending on the resources.
We are now done providing all the information regarding the MAAS VM and Juju VM, and which nodes, and how many of them, will be registered in MAAS. This information is very important; if you have questions, please hop on to the #opnfv-joid IRC channel on freenode to ask.
335 Next we will use the 02-maasdeploy.sh in joid/ci to kick off maas-deployer. Before we do
336 that, we will create an entry to tell maas-deployer what deployment.yaml file to use. Use
337 your favorite editor to add an entry under the section case $1. In our example, this is
341 cp maas/juniper/pod1/deployment.yaml ./deployment.yaml
344 **NOTE**: If your username is different from ‘ubuntu’, please change the ssh key section accordingly::
346 #just make sure the ssh keys are added into maas for the current user
sed -i "s@/home/ubuntu@$HOME@g" ./deployment.yaml
sed -i "s@qemu+ssh://ubuntu@qemu+ssh://$USER@g" ./deployment.yaml
350 Starting MAAS-deployer
351 ^^^^^^^^^^^^^^^^^^^^^^
352 Now run the 02-maasdeploy.sh script with the environment you just created
356 ~/joid/ci$ ./02-maasdeploy.sh juniperpod1
This will take approximately 40 minutes to a couple of hours depending on your environment. This script will do the following:
359 1. Create 2 VMs (KVM).
360 2. Install MAAS in one of the VMs.
361 3. Configure MAAS to enlist and commission a VM for Juju bootstrap node.
362 4. Configure MAAS to enlist and commission bare metal servers.
364 When it's done, you should be able to view the MAAS webpage (in our example http://172.16.50.50/MAAS) and see 1 bootstrap node and bare metal servers in the 'Ready' state on the nodes page.
366 Here is an example output of running 02-maasdeploy.sh: http://pastebin.ubuntu.com/15117137/
368 Troubleshooting MAAS deployer
369 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
370 During the installation process, please carefully review the error messages.
Join the IRC channel #opnfv-joid on freenode to ask questions. After the issues are resolved, re-running 02-maasdeploy.sh will clean up the VMs created previously. There is no need to manually undo what's been done.
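A couple of quick checks that often narrow the problem down (a sketch; the address and log path assume the example deployment and a MAAS 1.x install)::

# confirm the MAAS and bootstrap VMs exist and are running
$ sudo virsh list --all
# inspect the MAAS log on the MAAS VM
$ ssh ubuntu@172.16.50.50 'sudo tail -n 100 /var/log/maas/maas.log'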
376 JOID allows you to deploy different combinations of OpenStack release and SDN solution in
377 HA or non-HA mode. For OpenStack, it supports Juno and Liberty. For SDN, it supports Open
378 vSwitch, OpenContrail, OpenDaylight and ONOS (Open Network Operating System). In addition
379 to HA or non-HA mode, it also supports deploying the latest from the development tree (tip).
The deploy.sh script in the joid/ci directory will do all the work for you. For example, the following deploys OpenStack Liberty with OpenDaylight in HA mode on Intel pod 7.
385 ~/joid/ci$ ./deploy.sh -o liberty -s odl -t ha -l intelpod7 -f none
**NOTE**: You will need to modify ~/joid/ci/01-deploybundle.sh to deploy to your own environment, as explained later.
389 Take a look at the deploy.sh script. You will find we support the following for each option::
odl: OpenDaylight Lithium version.
394 opencontrail: OpenContrail.
395 onos: ONOS framework as SDN.
397 nonha: NO HA mode of OpenStack.
398 ha: HA mode of OpenStack.
399 tip: The tip of the development.
401 juno: OpenStack Juno version.
402 liberty: OpenStack Liberty version.
default: For a virtual deployment, where installation will be done on KVM VMs created using ./02-maasdeploy.sh
405 intelpod5: Install on bare metal OPNFV pod5 of the Intel lab.
406 intelpod6: Install on bare metal OPNFV pod6 of the Intel lab.
407 orangepod2: Install on bare metal OPNFV pod2 of the Orange lab.
Note: if you have added your own pod above, then please use your pod name here.
411 none: no special feature will be enabled.
ipv6: IPv6 will be enabled for tenants in OpenStack.
The script will call 00-bootstrap.sh to bootstrap the Juju VM node, then it will call 01-deploybundle.sh with the corresponding parameter values.
418 ./01-deploybundle.sh $opnfvtype $openstack $opnfvlab $opnfvsdn $opnfvfeature
You will notice that 01-deploybundle.sh copies over the charm bundle file based on the ha/nonha/tip setting::
424 cp $4/juju-deployer/ovs-$4-nonha.yaml ./bundles.yaml
427 cp $4/juju-deployer/ovs-$4-ha.yaml ./bundles.yaml
430 cp $4/juju-deployer/ovs-$4-tip.yaml ./bundles.yaml
431 cp common/source/* ./
432 sed -i -- "s|branch: master|branch: stable/$2|g" ./*.yaml
435 cp $4/juju-deployer/ovs-$4-nonha.yaml ./bundles.yaml
After the respective yaml file is copied over and renamed to bundles.yaml, the next section of the script updates bundles.yaml based on your network configuration and environment. For example, for the Juniper pod 1, we need to change the vip suffix from 10.4.1.1 to 172.16.50.1, which is on our admin network, and eth1 is on the public network.
447 sed -i -- 's/10.4.1.1/172.16.50.1/g' ./bundles.yaml
448 sed -i -- 's/#ext-port: "eth1"/ext-port: "eth1"/g' ./bundles.yaml
**NOTE**: If you are using a separate data network, then add the line below along with the other changes; it signifies that network 10.4.9.0/24 will be used as the data network for OpenStack.
455 sed -i -- 's/#os-data-network: 10.4.8.0\/21/os-data-network: 10.4.9.0\/24/g' ./bundles.yaml
457 By default debug is enabled in the deploy.sh script and error messages will be printed on the SSH terminal where you are running the scripts. It could take an hour to a couple of hours (maximum) to complete. Here is an example output of the deployment: http://pastebin.ubuntu.com/15006924/
459 You can check the status of the deployment by running this command in another terminal::
461 $ watch juju status --format tabular
463 This will refresh the juju status output in tabular format every 2 seconds. Here is an example output of juju status --format tabular: http://pastebin.ubuntu.com/15134109/
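If the output is long, you can narrow it down to a single service or grep for problems, for example (illustrative only)::

$ juju status nova-compute --format tabular
$ juju status --format tabular | grep -iE 'error|pending'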
465 Next we will show you what Juju is deploying and to where, and how you can modify based on your own needs.
467 OPNFV Juju Charm Bundles
468 ^^^^^^^^^^^^^^^^^^^^^^^^
469 The magic behind Juju is a collection of software components called charms. They contain
470 all the instructions necessary for deploying and configuring cloud-based services. The
471 charms publicly available in the online Charm Store represent the distilled DevOps
472 knowledge of experts.
474 A bundle is a set of services with a specific configuration and their corresponding
475 relations that can be deployed together in a single step. Instead of deploying a single
476 service, they can be used to deploy an entire workload, with working relations and
477 configuration. The use of bundles allows for easy repeatability and for sharing of
478 complex, multi-service deployments.
480 For OPNFV, we have collected the charm bundles for each SDN deployment. They are stored in
481 each SDN directory in ~/joid/ci. In each SDN folder, there are 3 bundle.yaml files, one
482 for HA, one for non-HA, and the other for tip. For example for OpenDaylight::
484 ~/joid/ci/odl/juju-deployer$ ls
485 ovs-odl-ha.yaml ovs-odl-nonha.yaml ovs-odl-tip.yaml scripts
486 ~/joid/ci/odl/juju-deployer$
488 We use Juju-deployer to deploy a set of charms via a yaml configuration file. You can find the complete format guide for the Juju-deployer configuration file here: http://pythonhosted.org/juju-deployer/config.html
490 Let’s take a quick look at the ovs-odl-nonha.yaml to give you an idea about the charm bundle.
Assuming we are deploying OpenDaylight with OpenStack Liberty in non-HA mode, according to deploy.sh, we know it will run these two commands::
494 juju-deployer -vW -d -t 3600 -c bundles.yaml trusty-liberty-nodes
495 juju-deployer -vW -d -t 7200 -r 5 -c bundles.yaml trusty-liberty
497 In the ovs-odl-nonha.yaml file, find the section of ‘trusty-liberty-nodes’ close to the bottom of the file::
499 trusty-liberty-nodes:
500 inherits: openstack-phase1
504 It inherits ‘openstack-phase1’, which you will find in the beginning of the file::
510 charm: "cs:trusty/ubuntu"
512 constraints: tags=control
514 charm: "cs:trusty/ubuntu"
516 constraints: tags=compute
518 charm: "cs:trusty/ntp"
521 - "nodes-api:juju-info"
523 - "nodes-compute:juju-info"
In the 'services' subsection, we deploy the Ubuntu Trusty charm from the Charm Store, name the service 'nodes-api', deploy just one unit, and assign a tag of 'control' to this service. You can deploy the same charm and name it differently, such as the second service 'nodes-compute'. The third service we deploy is named 'ntp' and is deployed from
529 the NTP Trusty charm from the Charm Store. The NTP charm is a subordinate charm, which is
530 designed for and deployed to the running space of another service unit.
532 The tag here is related to what we define in the deployment.yaml file for the
533 MAAS-deployer. When ‘constraints’ is set, Juju will ask its provider, in this case MAAS,
to provide a resource with the tags. In this case, Juju is asking for one resource tagged with
535 control and one resource tagged with compute from MAAS. Once the resource information is
536 passed to Juju, Juju will start the installation of the specified version of Ubuntu.
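Outside of a bundle, the same request could be expressed directly on the Juju command line; a hypothetical equivalent of the 'nodes-api' and 'nodes-compute' services above::

$ juju deploy cs:trusty/ubuntu nodes-api --constraints "tags=control"
$ juju deploy cs:trusty/ubuntu nodes-compute --constraints "tags=compute"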
538 In the next subsection, we define the relations between the services. The beauty of Juju
539 and charms is you can define the relation of two services and all the service units
540 deployed will set up the relations accordingly. This makes scaling out a very easy task.
541 Here we add the relation between NTP and the two bare metal services.
Once the relations are established, Juju-deployer considers this deployment complete and moves on to the next target.
547 juju-deployer -vW -d -t 7200 -r 5 -c bundles.yaml trusty-liberty
549 It will start at the ‘trusty-liberty’ section, which inherits the ‘contrail’ section,
which inherits the 'openstack-phase2' section. It follows the same services and relations
551 format as above. We will take a look at another common service configuration next.
555 nova-cloud-controller:
556 branch: lp:~openstack-charmers/charms/trusty/nova-cloud-controller/next
559 network-manager: Neutron
563 We define a service name ‘nova-cloud-controller,’ which is deployed from the next branch
564 of the nova-cloud-controller Trusty charm hosted on the Launchpad openstack-charmers team.
565 The number of units to be deployed is 1. We set the network-manager option to ‘Neutron.’
This single service unit will be deployed to an LXC container on unit 0 of the 'nodes-api' service.
568 To find out what other options there are for this particular charm, you can go to the code location at http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/nova-cloud-controller/next/files and the options are defined in the config.yaml file.
570 Once the service unit is deployed, you can see the current configuration by running juju get::
572 $ juju get nova-cloud-controller
574 You can change the value with juju set, for example::
$ juju set nova-cloud-controller network-manager='FlatManager'
Charms encapsulate operational best practices, so the number of options you need to configure should be minimal. The Juju Charm Store is a great resource to explore what a charm can offer you. Following the nova-cloud-controller charm example, here is the main page of the recommended charm on the Charm Store: https://jujucharms.com/nova-cloud-controller/trusty/66
580 If you have any questions regarding Juju, please join the IRC channel #opnfv-joid on freenode for JOID related questions or #juju for general questions.
582 Testing Your Deployment
583 ^^^^^^^^^^^^^^^^^^^^^^^
584 Once juju-deployer is complete, use juju status --format tabular to verify that all deployed units are in the ready state.
Find the openstack-dashboard IP address from the juju status output, and see if you can log in via a web browser. The username and password are admin/openstack.
Optionally, see if you can log in to the Juju GUI. The Juju GUI runs on the Juju bootstrap node, which is the second VM defined in the deployment.yaml file used by 02-maasdeploy.sh. The username and password are admin/admin.
590 If you deploy OpenDaylight, OpenContrail or ONOS, find the IP address of the web UI and login. Please refer to each SDN bundle.yaml for the login username/password.
594 Logs are indispensable when it comes time to troubleshoot. If you want to see all the
595 service unit deployment logs, you can run juju debug-log in another terminal. The
596 debug-log command shows the consolidated logs of all Juju agents (machine and unit logs)
597 running in the environment.
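For example, to follow only the most recent messages, or only the messages of one unit (the --include filter is available in recent Juju 1.x releases)::

$ juju debug-log -n 200
$ juju debug-log --include unit-nova-compute-0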
To view a single service unit's deployment log, use juju ssh to access the deployed unit. For example, log in to the nova-compute unit and look at /var/log/juju/unit-nova-compute-0.log for more info::
603 $ juju ssh nova-compute/0
607 ubuntu@R4N4B1:~$ juju ssh nova-compute/0
608 Warning: Permanently added '172.16.50.60' (ECDSA) to the list of known hosts.
609 Warning: Permanently added '3-r4n3b1-compute.maas' (ECDSA) to the list of known hosts.
610 Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-77-generic x86_64)
612 * Documentation: https://help.ubuntu.com/
614 Last login: Tue Feb 2 21:23:56 2016 from bootstrap.maas
615 ubuntu@3-R4N3B1-compute:~$ sudo -i
616 root@3-R4N3B1-compute:~# cd /var/log/juju/
617 root@3-R4N3B1-compute:/var/log/juju# ls
618 machine-2.log unit-ceilometer-agent-0.log unit-ceph-osd-0.log unit-neutron-contrail-0.log unit-nodes-compute-0.log unit-nova-compute-0.log unit-ntp-0.log
619 root@3-R4N3B1-compute:/var/log/juju#
**NOTE**: By default, Juju adds the ubuntu user's keys for authentication into the deployed server, and only SSH access will be available.
623 Once you resolve the error, go back to the jump host to rerun the charm hook with::
625 $ juju resolved --retry <unit>
627 If you would like to start over, run juju destroy-environment <environment name> to release the resources, then you can run deploy.sh again.
631 $ juju destroy-environment demo-maas
632 WARNING! this command will destroy the "demo-maas" environment (type: maas)
633 This includes all machines, services, data and other resources.
638 If there is an error destroying the environment, use --force.
642 $ juju destroy-environment demo-maas --force
645 If the above command hangs, use Ctrl-C to get out of it, and manually remove the environment file in the ~/.juju/environments/ directory.
649 $ ls ~/.juju/environments/
651 $ sudo rm ~/.juju/environments/demo-maas.jenv
655 The following are the common issues we have collected from the community:
657 - The right variables are not passed as part of the deployment procedure.
661 ./deploy.sh -o liberty -s odl -t ha -l intelpod5 -f none
- If you did not set up MAAS with 02-maasdeploy.sh, the ./clean.sh command could hang, and the juju status command may hang, because the correct MAAS API keys are not listed in environments.yaml, or environments.yaml does not exist in the current working directory.
  Solution: Please make sure you have an environments.yaml file under the joid/ci directory and that the correct MAAS API key is listed.
- Deployment times out:
  Use the command juju status --format=tabular and make sure all service containers have received an IP address and are executing code. Ensure there is no service in the error state.
- In case the cleanup process hangs, remove the files from the ~/.juju/ directory except environments.yaml, and shut down all nodes manually.
672 **Direct console access** via the OpenStack GUI can be quite helpful if you need to login to a VM but cannot get to it over the network.
673 It can be enabled by setting the ``console-access-protocol`` in the ``nova-cloud-controller`` to ``vnc``. One option is to directly edit the juju-deployer bundle and set it there prior to deploying OpenStack.
677 nova-cloud-controller:
679 console-access-protocol: vnc
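If OpenStack is already deployed, the option can also be changed at runtime with juju set instead of editing the bundle::

$ juju set nova-cloud-controller console-access-protocol=vnc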
681 To access the console, just click on the instance in the OpenStack GUI and select the Console tab.
683 Post Installation Configuration
684 ===============================
685 Configuring OpenStack
686 ^^^^^^^^^^^^^^^^^^^^^
At the end of the deployment, the admin-openrc file with OpenStack login credentials will be created for you. You can source the file and start configuring OpenStack via the CLI.
691 ~/joid/ci/cloud$ cat admin-openrc
692 export OS_USERNAME=admin
693 export OS_PASSWORD=openstack
694 export OS_TENANT_NAME=admin
695 export OS_AUTH_URL=http://172.16.50.114:5000/v2.0
696 export OS_REGION_NAME=Canonical
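For example, assuming the OpenStack command-line clients (python-novaclient, python-neutronclient) are installed on the Jump Host, a quick smoke test looks like::

~/joid/ci/cloud$ source admin-openrc
~/joid/ci/cloud$ nova service-list
~/joid/ci/cloud$ neutron net-list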
We have prepared some scripts to help you configure the OpenStack cloud that you just deployed. In each SDN directory, for example joid/ci/opencontrail, there is a 'scripts' folder where you can find the scripts. These scripts are created to help you configure a basic OpenStack Cloud to verify the cloud. For more information on OpenStack Cloud configuration, please refer to the OpenStack Cloud Administrator Guide: http://docs.openstack.org/user-guide-admin/. Similarly, for complete SDN configuration, please refer to the respective SDN administrator guide.
701 Each SDN solution requires slightly different setup. Please refer to the README in each
702 SDN folder. Most likely you will need to modify the openstack.sh and cloud-setup.sh
703 scripts for the floating IP range, private IP network, and SSH keys. Please go through
704 openstack.sh, glance.sh and cloud-setup.sh and make changes as you see fit.
Let's take a look at the scripts for Open vSwitch and briefly go through each one so you know what you need to change for your own environment.
710 ~/joid/ci/nosdn/juju-deployer/scripts$ ls
711 cloud-setup.sh glance.sh openstack.sh
712 ~/joid/ci/nosdn/juju-deployer/scripts$
Let's first look at 'openstack.sh'. Three functions are defined at the top: configOpenrc(), unitAddress(), and unitMachine().
723 export OS_USERNAME=$1
724 export OS_PASSWORD=$2
725 export OS_TENANT_NAME=$3
726 export OS_AUTH_URL=$4
727 export OS_REGION_NAME=$5
733 juju status | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"services\"][\"$1\"][\"units\"][\"$1/$2\"][\"public-address\"]" 2> /dev/null
738 juju status | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"services\"][\"$1\"][\"units\"][\"$1/$2\"][\"machine\"]" 2> /dev/null
741 The function configOpenrc() creates the OpenStack login credentials, the function unitAddress() finds the IP address of the unit, and the function unitMachine() finds the machine info of the unit.
745 mkdir -m 0700 -p cloud
746 controller_address=$(unitAddress keystone 0)
747 configOpenrc admin openstack admin http://$controller_address:5000/v2.0 Canonical > cloud/admin-openrc
748 chmod 0600 cloud/admin-openrc
This creates a folder named 'cloud', finds the IP address of keystone unit 0, writes the OpenStack admin credentials to a new file named 'admin-openrc' in the 'cloud' folder, and changes the permissions of the file. It's important to change the credentials here if you use a different password in the deployment Juju charm bundle (bundles.yaml).
757 machine=$(unitMachine glance 0)
758 juju scp glance.sh cloud/admin-openrc $machine:
759 juju run --machine $machine ./glance.sh
761 This section first finds the machine ID of the glance service unit 0, transfers the
glance.sh and admin-openrc files over to the glance unit 0, and then runs glance.sh on the glance unit 0. We will take a look at glance.sh in the next section.
767 machine=$(unitMachine nova-cloud-controller 0)
768 juju scp cloud-setup.sh cloud/admin-openrc ~/.ssh/id_rsa.pub $machine:
769 juju run --machine $machine ./cloud-setup.sh
This section first finds the machine ID of the nova-cloud-controller service unit 0, transfers 3 files over to the nova-cloud-controller unit 0, and then runs cloud-setup.sh on the nova-cloud-controller unit 0. We will take a look at cloud-setup.sh after glance.sh.
783 First, this script sources the admin-openrc file.
787 wget -P /tmp/images http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
788 wget -P /tmp/images http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
These commands download two images, the Cirros image and the Ubuntu Trusty cloud image, to the /tmp/images folder.
794 glance image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --progress
795 glance image-create --name "ubuntu-trusty-daily" --file /tmp/images/trusty-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare --progress
798 Use the glance python client to upload those two images, and finally remove those images from the local file system.
If you wish to use different images, please change the image download links and filenames here accordingly.
802 **NOTE**: The image downloading and uploading might take too long and time out. In this case, use juju ssh glance/0 to log in to the glance unit 0 and run the script again, or manually run the glance commands.
First, source the admin-openrc file.
816 nova flavor-delete m1.tiny
817 nova flavor-create m1.tiny 1 512 8 1
Adjust the m1.tiny flavor, as the default tiny instance is too small for the Ubuntu image.
823 # configure security groups
824 neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
825 neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 default
827 Open up the ICMP and SSH access in the default security group.
832 keystone tenant-create --name demo --description "Demo Tenant"
833 keystone user-create --name demo --tenant demo --pass demo --email demo@demo.demo
835 nova keypair-add --pub-key id_rsa.pub ubuntu-keypair
837 Create a project called ‘demo’ and create a user called ‘demo’ in this project. Import the key pair.
841 # configure external network
842 neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat --shared
843 neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.5.8.5,end=10.5.8.254 --disable-dhcp --gateway 10.5.8.1 10.5.8.0/24
845 This section configures an external network ‘ext-net’ with a subnet called ‘ext-subnet’.
846 In this subnet, the IP pool starts at 10.5.8.5 and ends at 10.5.8.254. DHCP is disabled.
The gateway is at 10.5.8.1, and the subnet is 10.5.8.0/24. These are the public IPs
848 that will be requested and associated to the instance. Please change the network configuration according to your environment.
853 neutron net-create demo-net
854 neutron subnet-create --name demo-subnet --gateway 10.20.5.1 demo-net 10.20.5.0/24
856 This section creates a private network for the instances. Please change accordingly.
860 neutron router-create demo-router
862 neutron router-interface-add demo-router demo-subnet
864 neutron router-gateway-set demo-router ext-net
866 This section creates a router and connects this router to the two networks we just created.
870 # create pool of floating ips
872 while [ $i -ne 10 ]; do
873 neutron floatingip-create ext-net
877 Finally, the script will request 10 floating IPs.
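With the images, networks, and floating IPs in place, you can sanity-check the cloud by booting an instance and attaching a floating IP. A sketch only; the instance name is arbitrary and the commands assume the resources created by the scripts above::

$ source cloud/admin-openrc
$ net_id=$(neutron net-list | awk '/ demo-net / {print $2}')
$ nova boot --image cirros-0.3.3-x86_64 --flavor m1.tiny \
    --key-name ubuntu-keypair --nic net-id=$net_id demo-vm
$ nova list                  # wait until demo-vm is ACTIVE
$ neutron floatingip-list    # pick an unassociated floating IP
$ nova floating-ip-associate demo-vm <floating-ip>
$ ping -c 3 <floating-ip>    # assumes the Jump Host can reach the external network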
879 Appendix A: Single Node Deployment
880 ==================================
881 By default, running the script ./02-maasdeploy.sh will automatically create the KVM VMs on a single machine and configure everything for you.
888 cp maas/default/deployment.yaml ./deployment.yaml
891 Please change ~/joid/ci/maas/default/deployment.yaml accordingly. The MAAS-deployer will do the following:
892 1. Create 2 VMs (KVM).
893 2. Install MAAS in one of the VMs.
894 3. Configure MAAS to enlist and commission a VM for Juju bootstrap node.
Later, the 02-maasdeploy.sh script will create two additional VMs and register them into the MAAS Server:
900 if [ "$virtinstall" -eq 1 ]; then
901 # create two more VMs to do the deployment.
902 sudo virt-install --connect qemu:///system --name node1-control --ram 8192 --vcpus 4 --disk size=120,format=qcow2,bus=virtio,io=native,pool=default --network bridge=virbr0,model=virtio --network bridge=virbr0,model=virtio --boot network,hd,menu=off --noautoconsole --vnc --print-xml | tee node1-control
903 sudo virt-install --connect qemu:///system --name node2-compute --ram 8192 --vcpus 4 --disk size=120,format=qcow2,bus=virtio,io=native,pool=default --network bridge=virbr0,model=virtio --network bridge=virbr0,model=virtio --boot network,hd,menu=off --noautoconsole --vnc --print-xml | tee node2-compute
905 node1controlmac=`grep "mac address" node1-control | head -1 | cut -d "'" -f 2`
906 node2computemac=`grep "mac address" node2-compute | head -1 | cut -d "'" -f 2`
908 sudo virsh -c qemu:///system define --file node1-control
909 sudo virsh -c qemu:///system define --file node2-compute
911 maas maas tags new name='control'
912 maas maas tags new name='compute'
914 controlnodeid=`maas maas nodes new autodetect_nodegroup='yes' name='node1-control' tags='control' hostname='node1-control' power_type='virsh' mac_addresses=$node1controlmac power_parameters_power_address='qemu+ssh://'$USER'@192.168.122.1/system' architecture='amd64/generic' power_parameters_power_id='node1-control' | grep system_id | cut -d '"' -f 4 `
916 maas maas tag update-nodes control add=$controlnodeid
918 computenodeid=`maas maas nodes new autodetect_nodegroup='yes' name='node2-compute' tags='compute' hostname='node2-compute' power_type='virsh' mac_addresses=$node2computemac power_parameters_power_address='qemu+ssh://'$USER'@192.168.122.1/system' architecture='amd64/generic' power_parameters_power_id='node2-compute' | grep system_id | cut -d '"' -f 4 `
920 maas maas tag update-nodes compute add=$computenodeid
924 Appendix B: Automatic Device Discovery
925 ======================================
926 If your bare metal servers support IPMI, they can be discovered and enlisted automatically
927 by the MAAS server. You need to configure bare metal servers to PXE boot on the network
928 interface where they can reach the MAAS server. With nodes set to boot from a PXE image,
929 they will start, look for a DHCP server, receive the PXE boot details, boot the image,
930 contact the MAAS server and shut down.
932 During this process, the MAAS server will be passed information about the node, including
933 the architecture, MAC address and other details which will be stored in the database of
934 nodes. You can accept and commission the nodes via the web interface. When the nodes have
been accepted, the selected series of Ubuntu will be installed.
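Accepting can also be done in bulk from the MAAS CLI; a sketch, assuming you are logged in with the 'maas' profile used elsewhere in this guide::

$ maas maas nodes accept-all
$ maas maas nodes list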
938 Appendix C: Machine Constraints
939 ===============================
940 Juju and MAAS together allow you to assign different roles to servers, so that hardware and software can be configured according to their roles. We have briefly mentioned and used this feature in our example. Please visit Juju Machine Constraints https://jujucharms.com/docs/stable/charms-constraints and MAAS tags https://maas.ubuntu.com/docs/tags.html for more information.
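As a small illustration of the mechanism (the 'storage' tag and the service name are made up), you would create a tag in MAAS, attach it to a node, and then let Juju request it through a constraint::

$ maas maas tags new name='storage' comment='nodes with extra disks'
$ maas maas tag update-nodes storage add=<system-id>
$ juju deploy cs:trusty/ubuntu nodes-storage --constraints "tags=storage"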
942 Appendix D: Offline Deployment
943 ==============================
944 When you have limited access policy in your environment, for example, when only the Jump Host has Internet access, but not the rest of the servers, we provide tools in JOID to support the offline installation.
946 The following package set is provided to those wishing to experiment with a ‘disconnected
947 from the internet’ setup when deploying JOID utilizing MAAS. These instructions provide
basic guidance as to how to accomplish the task, but it should be noted that, due to the current reliance of MAAS on DNS, the behavior and success of deployment may vary depending on the infrastructure setup. An official guided setup is on the roadmap for the next release:
952 1. Get the packages from here: https://launchpad.net/~thomnico/+archive/ubuntu/ubuntu-cloud-mirrors
**NOTE**: The mirror is quite large (700GB in size), and does not mirror the SDN repos/PPAs.
2. Additionally, instructions for making Juju use a private repository of charms instead of an external location, and for configuring environments.yaml to use cloudimg-base-url, are provided via the following link: https://github.com/juju/docs/issues/757