Developer Guide and Troubleshooting
===================================
This section explains in more detail the steps that Apex follows to make a
deployment. It also covers possible issues you might encounter while
building or deploying an environment.
After installing the Apex RPMs in the Jump Host, some files will be located
in the following directories:
1. /etc/opnfv-apex: this directory contains a number of scenario files
   describing deployments with different characteristics, such as HA (High
   Availability), SDN controller integration (OpenDaylight/ONOS), BGPVPN,
   FDIO, etc. Having a look at any of these files will give you an idea of
   how to create a customized scenario by setting different flags.
2. /usr/bin/: contains the binaries for the commands opnfv-deploy,
   opnfv-clean and opnfv-util.
3. /usr/share/opnfv/: contains Ansible playbooks and other non-python based
   configuration and libraries.
4. /var/opt/opnfv/: contains the disk images for the Undercloud and
   Overcloud.
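For reference, a scenario file is a small YAML document of deployment flags. The sketch below is an assumption about the layout (the exact key names should be checked against the real files under /etc/opnfv-apex/ on your Jump Host):

```shell
# Mock scenario file written to /tmp for illustration only; the key names
# (global_params, deploy_options, sdn_controller, ...) are assumptions to
# be verified against the files shipped in /etc/opnfv-apex/.
cat > /tmp/os-odl_l3-nofeature-ha.yaml <<'EOF'
global_params:
  ha_enabled: true
deploy_options:
  sdn_controller: opendaylight
  sdn_l3: true
  tacker: false
EOF

# Inspect the SDN-related flags of the scenario:
grep 'sdn' /tmp/os-odl_l3-nofeature-ha.yaml
```

Copying one of the shipped files and flipping flags like these is the usual way to build a customized scenario.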
Utilization of Images
---------------------

As mentioned earlier in this guide, the Undercloud VM is in charge of
deploying OPNFV (the Overcloud VMs). Since the Undercloud is an all-in-one
OpenStack deployment, it uses Glance to manage the images that will be
deployed as the Overcloud.
Any customization done to the images located in the jumpserver
(/var/opt/opnfv/images) will be uploaded to the Undercloud and,
consequently, to the Overcloud.
Make sure the customization is performed on the right image. For example,
suppose you run virt-customize on the image
overcloud-full-opendaylight.qcow2, but then deploy OPNFV with the following
command:
``sudo opnfv-deploy -n network_settings.yaml -d
/etc/opnfv-apex/os-onos-nofeature-ha.yaml``
The customization will not have any effect on the deployment, since the
customized image is the OpenDaylight one, while the scenario indicates that
the image to be deployed is overcloud-full-onos.qcow2.
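One way to avoid this mistake is to derive the image name from the scenario before customizing. The helper below is hypothetical (not part of Apex); it simply encodes the naming convention described above:

```shell
# Hypothetical helper, not shipped with Apex: map a scenario file name to
# the overcloud image that scenario will actually deploy.
image_for_scenario() {
    case "$1" in
        *os-odl*)  echo "overcloud-full-opendaylight.qcow2" ;;
        *os-onos*) echo "overcloud-full-onos.qcow2" ;;
        *)         echo "overcloud-full.qcow2" ;;
    esac
}

# Then customize the image the scenario really uses, for example:
#   sudo virt-customize \
#       -a "/var/opt/opnfv/images/$(image_for_scenario os-onos-nofeature-ha.yaml)" \
#       --run-command 'echo customized > /root/marker.txt'
image_for_scenario os-onos-nofeature-ha.yaml
```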
Post-deployment Configuration
-----------------------------
Post-deployment scripts will perform some configuration tasks such as
ssh-key injection, network configuration, NATing and Open vSwitch creation.
They will also take care of some OpenStack tasks such as the creation of
endpoints, external networks, users, projects, etc.
If any of these steps fails, the execution will be interrupted. In some
cases the interruption occurs at a very early stage, so a new deployment
must be executed. In other cases, however, it may be worth trying to debug
the failure.
1. There is no external connectivity from the overcloud nodes:
   Post-deployment scripts will configure the routing, nameservers and a
   number of other things between the overcloud and the undercloud. If
   local connectivity, such as pinging between the different nodes, works
   fine, the script must have failed when configuring the NAT via
   iptables. The main rules to enable external connectivity look like
   this:
   ``iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE``
   ``iptables -t nat -A POSTROUTING -s ${external_cidr} -o eth0 -j
   MASQUERADE``
   ``iptables -A FORWARD -i eth2 -j ACCEPT``
   ``iptables -A FORWARD -s ${external_cidr} -m state --state
   ESTABLISHED,RELATED -j ACCEPT``
   ``service iptables save``
   These rules must be executed as root (or sudo) in the undercloud
   machine.
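Collected into one script, the rules above can be reviewed before being applied with sudo. This is only a sketch: the CIDR is an example value, and the interface names (eth0, eth2) must match your environment:

```shell
#!/usr/bin/env bash
# Sketch: print the NAT/forwarding rules so they can be reviewed first.
# external_cidr is an example value; use the external network CIDR from
# your own network_settings.yaml.
external_cidr="192.168.37.0/24"
rules=(
  "-t nat -A POSTROUTING -o eth0 -j MASQUERADE"
  "-t nat -A POSTROUTING -s ${external_cidr} -o eth0 -j MASQUERADE"
  "-A FORWARD -i eth2 -j ACCEPT"
  "-A FORWARD -s ${external_cidr} -m state --state ESTABLISHED,RELATED -j ACCEPT"
)
for r in "${rules[@]}"; do
  echo "iptables ${r}"   # replace echo with sudo iptables to apply
done
```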
OpenDaylight Integration
------------------------
When a user deploys a scenario that starts with os-odl*, the OpenDaylight
(ODL) SDN controller will be deployed and integrated with OpenStack. ODL
will run as a systemd service, and can be managed as follows:

``systemctl start/restart/stop opendaylight.service``
This command must be executed as root in the controller node of the
overcloud, where OpenDaylight is running. ODL files are located in
/opt/opendaylight. ODL uses Karaf as a Java container management system
that allows users to install new features, check logs and configure many
options. In order to connect to Karaf's console, use the following command:
``opnfv-util opendaylight``
This command is very easy to use, but in case it does not connect to Karaf,
this is the command it executes underneath:
``ssh -p 8101 -o UserKnownHostsFile=/dev/null -o
StrictHostKeyChecking=no karaf@localhost``
Use localhost when the command is executed on the overcloud controller
itself; use the controller's public IP to connect from elsewhere.
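If you need to script this fallback, the wrapper below (hypothetical, not part of opnfv-util) builds the same ssh invocation for an arbitrary host:

```shell
# Hypothetical wrapper around the raw command above: defaults to localhost
# when run on the controller itself; pass the controller's public IP to
# connect from elsewhere.
karaf_ssh_cmd() {
    local host="${1:-localhost}"
    echo "ssh -p 8101 -o UserKnownHostsFile=/dev/null" \
         "-o StrictHostKeyChecking=no karaf@${host}"
}

karaf_ssh_cmd             # on the controller
karaf_ssh_cmd 192.0.2.10  # from elsewhere (example IP)
```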
Debugging Failures
------------------

This section gathers different types of failures, their root causes and
some possible solutions or workarounds to get the process moving again.
1. I can see post-deployment error messages in the output log:
   Heat resources will apply puppet manifests during this phase. If one of
   these processes fails, you can try to inspect the error and then re-run
   puppet to apply that manifest. Log into the controller (see the
   verification section for that) and, as root, check /var/log/messages.
   Search for the error you have encountered and see if you can fix it. In
   order to re-run the puppet manifest, search for "puppet apply" in that
   same log. You will have to run the last "puppet apply" before the
   error. It should look like this:
   ``FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/5b4c7a01-0d63-4a71-81e9-d5ee6f0a1f2f" FACTER_fqdn="overcloud-controller-0.localdomain.com" \
   FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" puppet apply --detailed-exitcodes -l syslog -l console \
   /var/lib/heat-config/heat-config-puppet/5b4c7a01-0d63-4a71-81e9-d5ee6f0a1f2f.pp``
   Note that Heat will trigger the puppet run via os-apply-config, passing
   a different value for ``step`` each time. There is a total of five
   steps. Some of these steps will not be executed, depending on the type
   of scenario that is being deployed.
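The log search described above can be sketched as follows, here against a mock log file whose message format is illustrative only (on the controller you would grep /var/log/messages itself):

```shell
# Demo against a mock log; real entries in /var/log/messages are much
# longer (see the FACTER_* example above).
cat > /tmp/messages.sample <<'EOF'
Jan 1 10:00:01 overcloud-controller-0 os-collect-config: puppet apply --detailed-exitcodes step1.pp
Jan 1 10:05:42 overcloud-controller-0 os-collect-config: puppet apply --detailed-exitcodes step4.pp
Jan 1 10:06:13 overcloud-controller-0 os-collect-config: Error: Could not apply complete catalog
EOF

# Last "puppet apply" recorded before the error -- the command to re-run:
grep 'puppet apply' /tmp/messages.sample | tail -n 1
```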