Testing Your Deployment
-----------------------

Once the Juju deployment is complete, use ``juju status`` to verify that all
deployed units are in the *Ready* state.
Find the OpenStack dashboard IP address from the ``juju status`` output, and
see if you can log in via a web browser. The domain, username and password are
``admin_domain``, ``admin`` and ``openstack``.
Optionally, see if you can log in to the Juju GUI. Run ``juju gui`` to see the
login details.
If you deploy OpenDaylight, OpenContrail or ONOS, find the IP address of the
web UI and log in. Please refer to each SDN bundle.yaml for the login
details.
If the deployment worked correctly, you can get easier access to the web
dashboards with the ``setupproxy.sh`` script described in the next section.
Create proxies to the dashboards
--------------------------------

MAAS, Juju and OpenStack/Kubernetes all come with their own web-based
dashboards. However, they might be on private networks and require SSH
tunnelling to see them. To simplify access to them, you can use the following
script to configure the Apache server on the Jumphost to work as a proxy to the
Juju and OpenStack/Kubernetes dashboards. Furthermore, this script also creates
a JOID deployment homepage with links to these dashboards, listing also their
access credentials.
Simply run the following command after JOID has been deployed.

::

  # run in the joid/ci directory
  # for the OpenStack model:
  ./setupproxy.sh openstack
  # for the Kubernetes model:
  ./setupproxy.sh kubernetes
You can also use the ``-v`` argument for more verbose output with xtrace.
After the script has finished, it will print out the addresses and credentials
of the dashboards. You can also find the JOID deployment homepage if you
open the Jumphost's IP address in your web browser.
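For orientation, the kind of proxying such a script sets up can be expressed in a few lines of Apache configuration. The fragment below is only an illustrative sketch, not the literal output of ``setupproxy.sh``: the backend address ``10.0.8.2`` and the ``/horizon`` path are hypothetical, and it assumes ``mod_proxy`` and ``mod_proxy_http`` are enabled.

```apache
# Hypothetical fragment: expose an OpenStack dashboard that lives on a
# private network behind the Jumphost's public Apache server.
<Location /horizon>
    ProxyPass        http://10.0.8.2/horizon
    ProxyPassReverse http://10.0.8.2/horizon
</Location>
```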
Configuring OpenStack
---------------------

At the end of the deployment, the ``admin-openrc`` file with OpenStack login
credentials will be created for you. You can source the file and start
configuring OpenStack via the CLI.
::

  . ~/joid_config/admin-openrc
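Sourcing the file (the leading dot) matters: it runs the file in the current shell, so the exported variables persist for the commands that follow. A self-contained illustration using a throwaway file (the path and values below are demo data, not your real credentials):

```shell
# Write a miniature openrc-style file, source it, and confirm the
# variables landed in the current shell's environment.
cat > /tmp/demo-openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=openstack
EOF
. /tmp/demo-openrc
echo "$OS_USERNAME"   # prints: admin
```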
The script ``openstack.sh`` under ``joid/ci`` can be used to configure
OpenStack after deployment.
::

  ./openstack.sh <nosdn> custom xenial pike
The following command is used to set up the domain in Heat.
::

  juju run-action heat/0 domain-setup
Upload the cloud images and create the sample network to test.
::

  joid/juju/get-cloud-images
  joid/juju/joid-configure-openstack
Configuring Kubernetes
----------------------

The script ``k8.sh`` under ``joid/ci`` can be used to show the Kubernetes
workload and to create sample pods.
Configure OpenStack
-------------------

At the end of the deployment, the ``admin-openrc`` file with OpenStack login
credentials will be created for you. You can source the file and start
configuring OpenStack via the CLI.
::

  cat ~/joid_config/admin-openrc
  export OS_USERNAME=admin
  export OS_PASSWORD=openstack
  export OS_TENANT_NAME=admin
  export OS_AUTH_URL=http://172.16.50.114:5000/v2.0
  export OS_REGION_NAME=RegionOne
We have prepared some scripts to help you configure the OpenStack cloud that
you just deployed. In each SDN directory, for example ``joid/ci/opencontrail``,
there is a ``scripts`` folder where you can find them. These scripts are
intended to help you configure a basic OpenStack cloud and to verify the cloud.
For more information on OpenStack cloud configuration, please refer to the
OpenStack Cloud Administrator Guide:
http://docs.openstack.org/user-guide-admin/.
Similarly, for complete SDN configuration, please refer to the respective SDN
documentation.
Each SDN solution requires a slightly different setup. Please refer to the README
in each SDN folder. Most likely you will need to modify the ``openstack.sh``
and ``cloud-setup.sh`` scripts for the floating IP range, the private IP network,
and the SSH keys. Please go through ``openstack.sh``, ``glance.sh`` and
``cloud-setup.sh`` and make changes as you see fit.
Let's take a look at the scripts for Open vSwitch and briefly go through each
one, so you know what you need to change for your own environment.

::

  configure-juju-on-openstack  get-cloud-images  joid-configure-openstack
openstack.sh
~~~~~~~~~~~~

Let's first look at ``openstack.sh``. Three functions are defined at the top:
``configOpenrc()``, ``unitAddress()``, and ``unitMachine()``.
::

  configOpenrc() {
      cat <<-EOF
      export SERVICE_ENDPOINT=$4
      unset SERVICE_TOKEN
      unset SERVICE_ENDPOINT
      export OS_USERNAME=$1
      export OS_PASSWORD=$2
      export OS_TENANT_NAME=$3
      export OS_AUTH_URL=$4
      export OS_REGION_NAME=$5
  EOF
  }
::

  unitAddress() {
      if [[ "$jujuver" < "2" ]]; then
          juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"services\"][\"$1\"][\"units\"][\"$1/$2\"][\"public-address\"]" 2> /dev/null
      else
          juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"applications\"][\"$1\"][\"units\"][\"$1/$2\"][\"public-address\"]" 2> /dev/null
      fi
  }
::

  unitMachine() {
      if [[ "$jujuver" < "2" ]]; then
          juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"services\"][\"$1\"][\"units\"][\"$1/$2\"][\"machine\"]" 2> /dev/null
      else
          juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"applications\"][\"$1\"][\"units\"][\"$1/$2\"][\"machine\"]" 2> /dev/null
      fi
  }
The function ``configOpenrc()`` creates the OpenStack login credentials, the
function ``unitAddress()`` finds the IP address of a unit, and the function
``unitMachine()`` finds the machine info of a unit.
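The lookup inside ``unitAddress()``/``unitMachine()`` can be tried without a live controller. ``juju status`` also supports ``--format json``, so the sketch below runs the same kind of traversal against a canned status document (the unit name and address are sample data, not from a real deployment):

```shell
# Canned stand-in for "juju status --format json" output (Juju 2.x layout).
cat > /tmp/status.json <<'EOF'
{"applications": {"keystone": {"units": {"keystone/0":
  {"public-address": "172.16.50.114", "machine": "2"}}}}}
EOF
# Same traversal as unitAddress keystone 0, using only the standard library.
python3 -c "import json; s = json.load(open('/tmp/status.json')); print(s['applications']['keystone']['units']['keystone/0']['public-address'])"
```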
::

  keystoneIp=$(keystoneIp)
  if [[ "$jujuver" < "2" ]]; then
      adminPasswd=$(juju get keystone | grep admin-password -A 5 | grep value | awk '{print $2}' 2> /dev/null)
  else
      adminPasswd=$(juju config keystone | grep admin-password -A 5 | grep value | awk '{print $2}' 2> /dev/null)
  fi

  configOpenrc admin $adminPasswd admin http://$keystoneIp:5000/v2.0 RegionOne > ~/joid_config/admin-openrc
  chmod 0600 ~/joid_config/admin-openrc
This finds the IP address of the keystone unit 0, writes the OpenStack admin
credentials to a new file named ``admin-openrc`` in the ``~/joid_config/`` folder,
and changes the permissions of the file. It's important to change the credentials
here if you used a different password in the deployment Juju charm bundle.yaml.
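The password-scraping pipeline is easy to test in isolation. Below, the same ``grep``/``awk`` chain runs against a canned fragment of ``juju config keystone`` output (sample values only, not a live model):

```shell
# Stand-in for the relevant part of "juju config keystone" output.
cat > /tmp/keystone-config <<'EOF'
  admin-password:
    description: Choose the admin password
    source: user
    type: string
    value: openstack
EOF
# Same extraction the script performs on the live command's output.
grep admin-password -A 5 /tmp/keystone-config | grep value | awk '{print $2}'   # prints: openstack
```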
::

  neutron net-show ext-net > /dev/null 2>&1 || neutron net-create ext-net \
    --router:external=True \
    --provider:network_type flat \
    --provider:physical_network physnet1

  neutron subnet-show ext-subnet > /dev/null 2>&1 || neutron subnet-create ext-net \
    --name ext-subnet --allocation-pool start=$EXTNET_FIP,end=$EXTNET_LIP \
    --disable-dhcp --gateway $EXTNET_GW $EXTNET_NET
This section creates ``ext-net`` and ``ext-subnet``, which define the pool of floating IPs.
::

  openstack congress datasource create nova "nova" \
    --config username=$OS_USERNAME \
    --config tenant_name=$OS_TENANT_NAME \
    --config password=$OS_PASSWORD \
    --config auth_url=http://$keystoneIp:5000/v2.0
This section creates the Congress datasources for various services.
Each service datasource will have an entry in the file.
get-cloud-images
~~~~~~~~~~~~~~~~

::

  folder=/srv/data/
  sudo mkdir $folder || true

  if grep -q 'virt-type: lxd' bundles.yaml; then
      URLS=" \
      http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-lxc.tar.gz \
      http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz "
  else
      URLS=" \
      http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img \
      http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img \
      http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img \
      http://mirror.catn.com/pub/catn/images/qcow2/centos6.4-x86_64-gold-master.img \
      http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 \
      http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img "
  fi

  for URL in $URLS; do
      FILENAME=${URL##*/}
      if [ -f $folder/$FILENAME ]; then
          echo "$FILENAME already downloaded."
      else
          wget -O $folder/$FILENAME $URL
      fi
  done
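The local file name used in the download check is typically derived from the URL with bash parameter expansion (``${URL##*/}`` strips everything up to and including the last slash). A standalone check of that idiom:

```shell
URL=http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
FILENAME=${URL##*/}   # remove the longest prefix ending in '/'
echo "$FILENAME"      # prints: cirros-0.3.4-x86_64-disk.img
```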
This part of the script downloads the images to the Jumphost, if they are not
already present, so they can be used with the OpenStack VIM.
The image downloading and uploading might take too long and time out. In
that case, use ``juju ssh glance/0`` to log in to the glance unit 0 and run the
script again, or run the glance commands manually.
joid-configure-openstack
~~~~~~~~~~~~~~~~~~~~~~~~
::

  source ~/joid_config/admin-openrc

First, source the ``admin-openrc`` file.
::

  # Upload images to glance
  glance image-create --name="Xenial LXC x86_64" --visibility=public --container-format=bare --disk-format=root-tar --property architecture="x86_64" < /srv/data/xenial-server-cloudimg-amd64-root.tar.gz
  glance image-create --name="Cirros LXC 0.3" --visibility=public --container-format=bare --disk-format=root-tar --property architecture="x86_64" < /srv/data/cirros-0.3.4-x86_64-lxc.tar.gz
  glance image-create --name="Trusty x86_64" --visibility=public --container-format=ovf --disk-format=qcow2 < /srv/data/trusty-server-cloudimg-amd64-disk1.img
  glance image-create --name="Xenial x86_64" --visibility=public --container-format=ovf --disk-format=qcow2 < /srv/data/xenial-server-cloudimg-amd64-disk1.img
  glance image-create --name="CentOS 6.4" --visibility=public --container-format=bare --disk-format=qcow2 < /srv/data/centos6.4-x86_64-gold-master.img
  glance image-create --name="Cirros 0.3" --visibility=public --container-format=bare --disk-format=qcow2 < /srv/data/cirros-0.3.4-x86_64-disk.img
Upload the images into Glance; they will be used when creating the VMs.
::

  nova flavor-delete m1.tiny
  nova flavor-create m1.tiny 1 512 8 1
Adjust the ``m1.tiny`` flavor, as the default tiny instance is too small for Ubuntu:
the arguments recreate it with ID 1, 512 MB of RAM, an 8 GB disk and 1 vCPU.
::

  # configure security groups
  neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
  neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 default
Open up ICMP and SSH access in the default security group.
::

  keystone tenant-create --name demo --description "Demo Tenant"
  keystone user-create --name demo --tenant demo --pass demo --email demo@demo.demo

  nova keypair-add --pub-key id_rsa.pub ubuntu-keypair
Create a project called ``demo`` and a user called ``demo`` in this project. Import the key pair.
::

  # configure external network
  neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat --shared
  neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.5.8.5,end=10.5.8.254 --disable-dhcp --gateway 10.5.8.1 10.5.8.0/24
This section configures an external network ``ext-net`` with a subnet called ``ext-subnet``.
In this subnet, the IP pool starts at 10.5.8.5 and ends at 10.5.8.254. DHCP is disabled.
The gateway is 10.5.8.1, and the subnet is 10.5.8.0/24. These are the public IPs
that will be requested and associated with the instances. Please change the network
configuration according to your environment.
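As a quick sanity check on the size of that allocation pool (the numbers come from the sample subnet above; both endpoints are inclusive):

```shell
# Pool 10.5.8.5 .. 10.5.8.254 in one /24 subnet:
echo $(( 254 - 5 + 1 ))   # prints: 250 usable floating IPs
```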
::

  neutron net-create demo-net
  neutron subnet-create --name demo-subnet --gateway 10.20.5.1 demo-net 10.20.5.0/24
This section creates a private network for the instances. Please change it according to your environment.
::

  neutron router-create demo-router

  neutron router-interface-add demo-router demo-subnet

  neutron router-gateway-set demo-router ext-net
This section creates a router and connects it to the two networks we just created.
::

  # create pool of floating ips
  i=0
  while [ $i -ne 10 ]; do
      neutron floatingip-create ext-net
      i=$((i + 1))
  done
Finally, the script requests 10 floating IPs.
configure-juju-on-openstack
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This script can be used to bootstrap Juju on the freshly deployed OpenStack
cloud, so that Juju can then be used as a modelling tool to deploy services and
VNFs on top of OpenStack using JOID.
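For orientation, bootstrapping Juju on an OpenStack cloud generally starts from a cloud definition. The fragment below is a hypothetical sketch of a Juju 2.x ``clouds.yaml`` entry, not the script's actual mechanism: the cloud name ``joid-openstack`` is made up, and the endpoint is the sample Keystone URL from the ``admin-openrc`` shown earlier.

```yaml
# Hypothetical clouds.yaml entry, registered with "juju add-cloud"
clouds:
  joid-openstack:
    type: openstack
    auth-types: [userpass]
    endpoint: http://172.16.50.114:5000/v2.0
    regions:
      RegionOne:
        endpoint: http://172.16.50.114:5000/v2.0
```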