Add scripts and playbooks to deploy OSA 89/28189/6
author    wutianwei <wutianwei1@huawei.com>
Tue, 7 Feb 2017 08:35:27 +0000 (16:35 +0800)
committer Yolanda Robla <yroblamo@redhat.com>
Wed, 8 Mar 2017 15:11:16 +0000 (16:11 +0100)
The scripts and playbooks defined in this repo deploy an OpenStack cloud
based on OpenStack-Ansible.
You only need to run osa_deploy.sh.
For more information, please refer to the README.md.

Change-Id: I731c366ab7197aefd7726150477ba1cc4d2932d3
Signed-off-by: wutianwei <wutianwei1@huawei.com>
13 files changed:
prototypes/openstack-ansible/README.md [new file with mode: 0644]
prototypes/openstack-ansible/file/cinder.yml [new file with mode: 0644]
prototypes/openstack-ansible/file/exports [new file with mode: 0644]
prototypes/openstack-ansible/file/modules [new file with mode: 0644]
prototypes/openstack-ansible/file/openstack_user_config.yml [new file with mode: 0644]
prototypes/openstack-ansible/file/user_variables.yml [new file with mode: 0644]
prototypes/openstack-ansible/playbooks/inventory [new file with mode: 0644]
prototypes/openstack-ansible/playbooks/jumphost_configuration.yml [new file with mode: 0644]
prototypes/openstack-ansible/playbooks/targethost_configuration.yml [new file with mode: 0644]
prototypes/openstack-ansible/scripts/osa_deploy.sh [new file with mode: 0755]
prototypes/openstack-ansible/template/bifrost/compute.interface.j2 [new file with mode: 0644]
prototypes/openstack-ansible/template/bifrost/controller.interface.j2 [new file with mode: 0644]
prototypes/openstack-ansible/var/ubuntu.yml [new file with mode: 0644]

diff --git a/prototypes/openstack-ansible/README.md b/prototypes/openstack-ansible/README.md
new file mode 100644 (file)
index 0000000..34c1d0d
--- /dev/null
@@ -0,0 +1,48 @@
+===============================
+How to deploy OpenStack-Ansible
+===============================
+The script and playbooks defined in this repo will deploy an OpenStack
+cloud based on OpenStack-Ansible.
+They need to be combined with Bifrost: use Bifrost to provision the six
+VMs first. To learn how to use Bifrost, read the document at
+[/opt/releng/prototypes/bifrost/README.md].
+
+Minimal requirements:
+1. You will need at least 150 GB of free space on the partition where
+   "/var/lib/libvirt/images/" lives.
+2. Each VM needs at least 8 vCPUs, 12 GB of RAM and a 60 GB disk.
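+
+For example, you can check the available space up front with::
+  df -h /var/lib/libvirt/images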
+
+After provisioning the six VMs, follow these steps:
+
+1. Run the script to deploy OpenStack::
+     cd /opt/releng/prototypes/openstack-ansible/scripts/
+     sudo ./osa_deploy.sh
+The deployment takes a long time. When it finishes successfully, you will
+see the message "OpenStack successfully deployed!".
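+Logs for each stage are written to /opt/openstack-ansible/log/ on the
+machine where you run the script.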
+
+2. Verify that OpenStack is operational:
+  2.1 ssh into the controller::
+      ssh 192.168.122.3
+  2.2 Attach to the utility LXC container::
+      lxcname=$(lxc-ls | grep utility)
+      lxc-attach -n $lxcname
+  2.3 Verify the OpenStack API::
+      source /root/openrc
+      openstack user list
+
+This should print output similar to the following (the IDs will differ)::
++----------------------------------+--------------------+
+| ID                               | Name               |
++----------------------------------+--------------------+
+| 056f8fe41336435991fd80872731cada | aodh               |
+| 308f6436e68f40b49d3b8e7ce5c5be1e | glance             |
+| 351b71b43a66412d83f9b3cd75485875 | nova               |
+| 511129e053394aea825cce13b9f28504 | ceilometer         |
+| 5596f71319d44c8991fdc65f3927b62e | gnocchi            |
+| 586f49e3398a4c47a2f6fe50135d4941 | stack_domain_admin |
+| 601b329e6b1d427f9a1e05ed28753497 | heat               |
+| 67fe383b94964a4781345fbcc30ae434 | cinder             |
+| 729bb08351264d729506dad84ed3ccf0 | admin              |
+| 9f2beb2b270940048fe6844f0b16281e | neutron            |
+| fa68f86dd1de4ddbbb7415b4d9a54121 | keystone           |
++----------------------------------+--------------------+
diff --git a/prototypes/openstack-ansible/file/cinder.yml b/prototypes/openstack-ansible/file/cinder.yml
new file mode 100644 (file)
index 0000000..e40b392
--- /dev/null
@@ -0,0 +1,13 @@
+---
+# This file contains an example showing how to run
+# the cinder-volume service in a container.
+#
+# Important note:
+# When using LVM or any iSCSI-based cinder backends, such as NetApp with
+# iSCSI protocol, the cinder-volume service *must* run on metal.
+# Reference: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
+
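+# Note: is_metal set to false below keeps cinder-volume inside an LXC
+# container; set it to true to run the service directly on the host.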
+container_skel:
+  cinder_volumes_container:
+    properties:
+      is_metal: false
diff --git a/prototypes/openstack-ansible/file/exports b/prototypes/openstack-ansible/file/exports
new file mode 100644 (file)
index 0000000..315f79d
--- /dev/null
@@ -0,0 +1,12 @@
+# /etc/exports: the access control list for filesystems which may be exported
+#               to NFS clients.  See exports(5).
+#
+# Example for NFSv2 and NFSv3:
+# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
+#
+# Example for NFSv4:
+# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
+# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
+#
+/images         *(rw,sync,no_subtree_check,no_root_squash)
+
diff --git a/prototypes/openstack-ansible/file/modules b/prototypes/openstack-ansible/file/modules
new file mode 100644 (file)
index 0000000..60a517f
--- /dev/null
@@ -0,0 +1,8 @@
+# /etc/modules: kernel modules to load at boot time.
+#
+# This file contains the names of kernel modules that should be loaded
+# at boot time, one per line. Lines beginning with "#" are ignored.
+# Parameters can be specified after the module name.
+
+bonding
+8021q
diff --git a/prototypes/openstack-ansible/file/openstack_user_config.yml b/prototypes/openstack-ansible/file/openstack_user_config.yml
new file mode 100644 (file)
index 0000000..2811e62
--- /dev/null
@@ -0,0 +1,278 @@
+---
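+# Address pools; the keys below are referenced by "ip_from_q" in the
+# provider_networks section further down.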
+cidr_networks:
+  container: 172.29.236.0/22
+  tunnel: 172.29.240.0/22
+  storage: 172.29.244.0/22
+
+used_ips:
+  - "172.29.236.1,172.29.236.50"
+  - "172.29.240.1,172.29.240.50"
+  - "172.29.244.1,172.29.244.50"
+  - "172.29.248.1,172.29.248.50"
+
+global_overrides:
+  internal_lb_vip_address: 172.29.236.222
+  external_lb_vip_address: 192.168.122.220
+  tunnel_bridge: "br-vxlan"
+  management_bridge: "br-mgmt"
+  provider_networks:
+    - network:
+        container_bridge: "br-mgmt"
+        container_type: "veth"
+        container_interface: "eth1"
+        ip_from_q: "container"
+        type: "raw"
+        group_binds:
+          - all_containers
+          - hosts
+        is_container_address: true
+        is_ssh_address: true
+    - network:
+        container_bridge: "br-vxlan"
+        container_type: "veth"
+        container_interface: "eth10"
+        ip_from_q: "tunnel"
+        type: "vxlan"
+        range: "1:1000"
+        net_name: "vxlan"
+        group_binds:
+          - neutron_linuxbridge_agent
+    - network:
+        container_bridge: "br-vlan"
+        container_type: "veth"
+        container_interface: "eth12"
+        host_bind_override: "eth12"
+        type: "flat"
+        net_name: "flat"
+        group_binds:
+          - neutron_linuxbridge_agent
+    - network:
+        container_bridge: "br-vlan"
+        container_type: "veth"
+        container_interface: "eth11"
+        type: "vlan"
+        range: "1:1"
+        net_name: "vlan"
+        group_binds:
+          - neutron_linuxbridge_agent
+    - network:
+        container_bridge: "br-storage"
+        container_type: "veth"
+        container_interface: "eth2"
+        ip_from_q: "storage"
+        type: "raw"
+        group_binds:
+          - glance_api
+          - cinder_api
+          - cinder_volume
+          - nova_compute
+
+###
+### Infrastructure
+###
+
+# galera, memcache, rabbitmq, utility
+shared-infra_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# repository (apt cache, python packages, etc)
+repo-infra_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# load balancer
+# Ideally the load balancer should not use the Infrastructure hosts.
+# Dedicated hardware is best for improved performance and security.
+haproxy_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# rsyslog server
+#log_hosts:
+ # log1:
+ #  ip: 172.29.236.14
+
+###
+### OpenStack
+###
+
+# keystone
+identity_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# cinder api services
+storage-infra_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# glance
+# The settings here are repeated for each infra host.
+# They could instead be applied as global settings in
+# user_variables, but are left here to illustrate that
+# each container could have different storage targets.
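+# (For example, moving the glance_nfs_client block into user_variables.yml
+# would apply the same NFS mount to every image host at once.)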
+image_hosts:
+  controller00:
+    ip: 172.29.236.11
+    container_vars:
+      limit_container_types: glance
+      glance_nfs_client:
+        - server: "172.29.244.15"
+          remote_path: "/images"
+          local_path: "/var/lib/glance/images"
+          type: "nfs"
+          options: "_netdev,auto"
+  controller01:
+    ip: 172.29.236.12
+    container_vars:
+      limit_container_types: glance
+      glance_nfs_client:
+        - server: "172.29.244.15"
+          remote_path: "/images"
+          local_path: "/var/lib/glance/images"
+          type: "nfs"
+          options: "_netdev,auto"
+  controller02:
+    ip: 172.29.236.13
+    container_vars:
+      limit_container_types: glance
+      glance_nfs_client:
+        - server: "172.29.244.15"
+          remote_path: "/images"
+          local_path: "/var/lib/glance/images"
+          type: "nfs"
+          options: "_netdev,auto"
+
+# nova api, conductor, etc services
+compute-infra_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# heat
+orchestration_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# horizon
+dashboard_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# neutron server, agents (L3, etc)
+network_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# ceilometer (telemetry API)
+metering-infra_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# aodh (telemetry alarm service)
+metering-alarm_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# gnocchi (telemetry metrics storage)
+metrics_hosts:
+  controller00:
+    ip: 172.29.236.11
+  controller01:
+    ip: 172.29.236.12
+  controller02:
+    ip: 172.29.236.13
+
+# nova hypervisors
+compute_hosts:
+  compute00:
+    ip: 172.29.236.14
+  compute01:
+    ip: 172.29.236.15
+
+# ceilometer compute agent (telemetry)
+metering-compute_hosts:
+  compute00:
+    ip: 172.29.236.14
+  compute01:
+    ip: 172.29.236.15
+
+# cinder volume hosts (LVM-backed)
+# The settings here are repeated for each infra host.
+# They could instead be applied as global settings in
+# user_variables, but are left here to illustrate that
+# each container could have different storage targets.
+storage_hosts:
+  controller00:
+    ip: 172.29.236.11
+    container_vars:
+      cinder_backends:
+        limit_container_types: cinder_volume
+        lvm:
+          volume_group: cinder-volumes
+          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
+          volume_backend_name: LVM_iSCSI
+          iscsi_ip_address: "172.29.244.11"
+  controller01:
+    ip: 172.29.236.12
+    container_vars:
+      cinder_backends:
+        limit_container_types: cinder_volume
+        lvm:
+          volume_group: cinder-volumes
+          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
+          volume_backend_name: LVM_iSCSI
+          iscsi_ip_address: "172.29.244.12"
+  controller02:
+    ip: 172.29.236.13
+    container_vars:
+      cinder_backends:
+        limit_container_types: cinder_volume
+        lvm:
+          volume_group: cinder-volumes
+          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
+          volume_backend_name: LVM_iSCSI
+          iscsi_ip_address: "172.29.244.13"
diff --git a/prototypes/openstack-ansible/file/user_variables.yml b/prototypes/openstack-ansible/file/user_variables.yml
new file mode 100644 (file)
index 0000000..3e14bc5
--- /dev/null
@@ -0,0 +1,27 @@
+---
+# Copyright 2014, Rackspace US, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+###
+### This file contains commonly used overrides for convenience. Please inspect
+### the defaults for each role to find additional override options.
+###
+
+## Debug and Verbose options.
+debug: false
+
+haproxy_keepalived_external_vip_cidr: "192.168.122.220/32"
+haproxy_keepalived_internal_vip_cidr: "172.29.236.222/32"
+haproxy_keepalived_external_interface: br-vlan
+haproxy_keepalived_internal_interface: br-mgmt
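+# These VIPs correspond to the internal_lb_vip_address and
+# external_lb_vip_address values in openstack_user_config.yml.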
diff --git a/prototypes/openstack-ansible/playbooks/inventory b/prototypes/openstack-ansible/playbooks/inventory
new file mode 100644 (file)
index 0000000..f53da53
--- /dev/null
@@ -0,0 +1,11 @@
+[jumphost]
+jumphost ansible_ssh_host=192.168.122.2
+
+[controller]
+controller00 ansible_ssh_host=192.168.122.3
+controller01 ansible_ssh_host=192.168.122.4
+controller02 ansible_ssh_host=192.168.122.5
+
+[compute]
+compute00 ansible_ssh_host=192.168.122.6
+compute01 ansible_ssh_host=192.168.122.7
diff --git a/prototypes/openstack-ansible/playbooks/jumphost_configuration.yml b/prototypes/openstack-ansible/playbooks/jumphost_configuration.yml
new file mode 100644 (file)
index 0000000..c51d830
--- /dev/null
@@ -0,0 +1,53 @@
+---
+- hosts: jumphost
+  remote_user: root
+  vars_files:
+    - ../var/ubuntu.yml
+  tasks:
+  - name: generate SSH keys
+    shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
+    args:
+      creates: /root/.ssh/id_rsa
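+  # fetch below stores the public key on the deploy host as
+  # /jumphost/root/.ssh/id_rsa.pub (used later to build authorized_keys)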
+  - name: fetch public key
+    fetch: src="/root/.ssh/id_rsa.pub" dest="/"
+  - name: remove any previous OSA directories
+    shell: "rm -rf {{OSA_PATH}} {{OSA_ETC_PATH}}"
+  - name: clone OpenStack-Ansible
+    shell: "git clone {{OSA_URL}} {{OSA_PATH}} -b {{OSA_BRANCH}}"
+  - name: copy /opt/openstack-ansible/etc/openstack_deploy to /etc/openstack_deploy
+    shell: "/bin/cp -rf {{OSA_PATH}}/etc/openstack_deploy {{OSA_ETC_PATH}}"
+  - name: bootstrap
+    command: "/bin/bash ./scripts/bootstrap-ansible.sh"
+    args:
+      chdir: "{{OSA_PATH}}"
+  - name: generate password token
+    command: "python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml"
+    args:
+      chdir: /opt/openstack-ansible/scripts/
+  - name: copy openstack_user_config.yml to /etc/openstack_deploy
+    copy:
+      src: ../file/openstack_user_config.yml
+      dest: "{{OSA_ETC_PATH}}/openstack_user_config.yml"
+  - name: copy cinder.yml to /etc/openstack_deploy/env.d
+    copy:
+      src: ../file/cinder.yml
+      dest: "{{OSA_ETC_PATH}}/env.d/cinder.yml"
+  - name: copy user_variables.yml to /etc/openstack_deploy/
+    copy:
+      src: ../file/user_variables.yml
+      dest: "{{OSA_ETC_PATH}}/user_variables.yml"
+  - name: configure network
+    template:
+      src: ../template/bifrost/controller.interface.j2
+      dest: /etc/network/interfaces
+    notify:
+    - restart network service
+  handlers:
+    - name: restart network service
+      shell: "/sbin/ifconfig ens3 0 &&/sbin/ifdown -a && /sbin/ifup -a"
+
+- hosts: localhost
+  remote_user: root
+  tasks:
+  - name: Generate authorized_keys
+    shell: "/bin/cat /jumphost/root/.ssh/id_rsa.pub >> ../file/authorized_keys"
diff --git a/prototypes/openstack-ansible/playbooks/targethost_configuration.yml b/prototypes/openstack-ansible/playbooks/targethost_configuration.yml
new file mode 100644 (file)
index 0000000..ffe788f
--- /dev/null
@@ -0,0 +1,61 @@
+---
+- hosts: all
+  remote_user: root
+  vars_files:
+    - ../var/ubuntu.yml
+  tasks:
+  - name: add public key to host
+    copy:
+      src: ../file/authorized_keys
+      dest: /root/.ssh/authorized_keys
+  - name: configure modules
+    copy:
+      src: ../file/modules
+      dest: /etc/modules
+
+- hosts: controller
+  remote_user: root
+  vars_files:
+    - ../var/ubuntu.yml
+  tasks:
+  - name: configure network
+    template:
+      src: ../template/bifrost/controller.interface.j2
+      dest: /etc/network/interfaces
+    notify:
+    - restart network service
+  handlers:
+    - name: restart network service
+      shell: "/sbin/ifconfig ens3 0 &&/sbin/ifdown -a && /sbin/ifup -a"
+
+- hosts: compute
+  remote_user: root
+  vars_files:
+    - ../var/ubuntu.yml
+  tasks:
+  - name: configure network
+    template:
+      src: ../template/bifrost/compute.interface.j2
+      dest: /etc/network/interfaces
+    notify:
+    - restart network service
+  handlers:
+    - name: restart network service
+      shell: "/sbin/ifconfig ens3 0 &&/sbin/ifdown -a && /sbin/ifup -a"
+
+- hosts: compute01
+  remote_user: root
+  tasks:
+  - name: make nfs dir
+    file: "dest=/images mode=777 state=directory"
+  - name: configure sdrvice
+    shell: "echo 'nfs        2049/tcp' >>  /etc/services && echo 'nfs        2049/udp' >>  /etc/services"
+  - name: configure NFS
+    copy:
+      src: ../file/exports
+      dest: /etc/exports
+    notify:
+    - restart nfs service
+  handlers:
+    - name: restart nfs service
+      service: name=nfs-kernel-server state=restarted
diff --git a/prototypes/openstack-ansible/scripts/osa_deploy.sh b/prototypes/openstack-ansible/scripts/osa_deploy.sh
new file mode 100755 (executable)
index 0000000..95f5931
--- /dev/null
@@ -0,0 +1,82 @@
+#!/bin/bash
+
+export OSA_PATH=/opt/openstack-ansible
+export LOG_PATH=$OSA_PATH/log
+export PLAYBOOK_PATH=$OSA_PATH/playbooks
+export OSA_BRANCH=${OSA_BRANCH:-"master"}
+
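+# IP of the jump host on the libvirt default network (see playbooks/inventory)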
+JUMPHOST_IP="192.168.122.2"
+
+sudo /bin/rm -rf $LOG_PATH
+sudo /bin/mkdir -p $LOG_PATH
+sudo /bin/cp /root/.ssh/id_rsa.pub ../file/authorized_keys
+echo | sudo tee -a ../file/authorized_keys >/dev/null
+
+cd ../playbooks/
+# this will prepare the jump host:
+# clone OpenStack-Ansible, bootstrap it and configure the network
+sudo ansible-playbook -i inventory jumphost_configuration.yml -vvv
+
+# this will prepare the target hosts,
+# e.g. configure their network and NFS
+sudo ansible-playbook -i inventory targethost_configuration.yml
+
+# deploy OpenStack using OpenStack-Ansible
+
+echo "Setting up hosts!"
+sudo /bin/sh -c "ssh root@$JUMPHOST_IP openstack-ansible \
+     $PLAYBOOK_PATH/setup-hosts.yml" | \
+     tee $LOG_PATH/setup-host.log
+
+# check the result of openstack-ansible setup-hosts.yml;
+# if it failed, exit with code 1
+grep "failed=1" $LOG_PATH/setup-host.log >/dev/null \
+  || grep "unreachable=1" $LOG_PATH/setup-host.log >/dev/null
+if [ $? -eq 0 ]; then
+    echo "Host setup failed!"
+    exit 1
+else
+    echo "Host setup succeeded!"
+fi
+
+echo "Set UP Infrastructure !"
+sudo /bin/sh -c "ssh root@$JUMPHOST_IP openstack-ansible \
+     $PLAYBOOK_PATH/setup-infrastructure.yml" | \
+     tee $LOG_PATH/setup-infrastructure.log
+
+grep "failed=1" $LOG_PATH/setup-infrastructure.log>/dev/null \
+  || grep "unreachable=1" $LOG_PATH/setup-infrastructure.log>/dev/null
+if [ $? -eq 0 ]; then
+    echo "Infrastructure setup failed!"
+    exit 1
+else
+    echo "Infrastructure setup succeeded!"
+fi
+
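+# verify that the galera cluster is healthy before continuing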
+sudo /bin/sh -c "ssh root@$JUMPHOST_IP ansible -i $PLAYBOOK_PATH/inventory/ \
+           galera_container -m shell \
+           -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"" \
+           | tee $LOG_PATH/galera.log
+
+grep "FAILED" $LOG_PATH/galera.log>/dev/null
+if [ $? -eq 0 ]; then
+    echo "failed verify the database cluster!"
+    exit 1
+else
+    echo "verify the database cluster successfully!"
+fi
+
+echo "Set UP OpenStack !"
+sudo /bin/sh -c "ssh root@$JUMPHOST_IP openstack-ansible \
+     $PLAYBOOK_PATH/setup-openstack.yml" | \
+     tee $LOG_PATH/setup-openstack.log
+
+grep "failed=1" $LOG_PATH/setup-openstack.log>/dev/null \
+  || grep "unreachable=1" $LOG_PATH/setup-openstack.log>/dev/null
+if [ $? -eq 0 ]; then
+   echo "OpenStack setup failed!"
+   exit 1
+else
+   echo "OpenStack successfully deployed!"
+   exit 0
+fi
diff --git a/prototypes/openstack-ansible/template/bifrost/compute.interface.j2 b/prototypes/openstack-ansible/template/bifrost/compute.interface.j2
new file mode 100644 (file)
index 0000000..1719f6a
--- /dev/null
@@ -0,0 +1,86 @@
+# This file describes the network interfaces available on your system
+# and how to activate them. For more information, see interfaces(5).
+
+# The loopback network interface
+auto lo
+iface lo inet loopback
+
+
+# Physical interface
+auto ens3
+iface ens3 inet manual
+
+# Container/Host management VLAN interface
+auto ens3.10
+iface ens3.10 inet manual
+    vlan-raw-device ens3
+
+# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
+auto ens3.30
+iface ens3.30 inet manual
+    vlan-raw-device ens3
+
+# Storage network VLAN interface (optional)
+auto ens3.20
+iface ens3.20 inet manual
+    vlan-raw-device ens3
+
+# Container/Host management bridge
+auto br-mgmt
+iface br-mgmt inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports ens3.10
+    address {{host_info[inventory_hostname].MGMT_IP}}
+    netmask 255.255.252.0
+
+# OpenStack Networking VXLAN (tunnel/overlay) bridge
+auto br-vxlan
+iface br-vxlan inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports ens3.30
+    address {{host_info[inventory_hostname].VXLAN_IP}}
+    netmask 255.255.252.0
+
+# OpenStack Networking VLAN bridge
+auto br-vlan
+iface br-vlan inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports ens3
+    address {{host_info[inventory_hostname].VLAN_IP}}
+    netmask 255.255.255.0
+    gateway 192.168.122.1
+    offload-sg off
+    # Create veth pair, don't bomb if already exists
+    pre-up ip link add br-vlan-veth type veth peer name eth12 || true
+    # Set both ends UP
+    pre-up ip link set br-vlan-veth up
+    pre-up ip link set eth12 up
+    # Delete veth pair on DOWN
+    post-down ip link del br-vlan-veth || true
+    bridge_ports br-vlan-veth
+
+# Add an additional address to br-vlan
+iface br-vlan inet static
+    # Flat network default gateway
+    # -- This needs to exist somewhere for network reachability
+    # -- from the router namespace for floating IP paths.
+    # -- Putting this here is primarily for tempest to work.
+    address {{host_info[inventory_hostname].VLAN_IP_SECOND}}
+    netmask 255.255.252.0
+    dns-nameserver 8.8.8.8 8.8.4.4
+
+# Storage bridge
+auto br-storage
+iface br-storage inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports ens3.20
+    address {{host_info[inventory_hostname].STORAGE_IP}}
+    netmask 255.255.252.0
diff --git a/prototypes/openstack-ansible/template/bifrost/controller.interface.j2 b/prototypes/openstack-ansible/template/bifrost/controller.interface.j2
new file mode 100644 (file)
index 0000000..74aeea9
--- /dev/null
@@ -0,0 +1,71 @@
+# This file describes the network interfaces available on your system
+# and how to activate them. For more information, see interfaces(5).
+
+# The loopback network interface
+auto lo
+iface lo inet loopback
+
+# Physical interface
+auto ens3
+iface ens3 inet manual
+
+# Container/Host management VLAN interface
+auto ens3.10
+iface ens3.10 inet manual
+    vlan-raw-device ens3
+
+# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
+auto ens3.30
+iface ens3.30 inet manual
+    vlan-raw-device ens3
+
+# Storage network VLAN interface (optional)
+auto ens3.20
+iface ens3.20 inet manual
+    vlan-raw-device ens3
+
+# Container/Host management bridge
+auto br-mgmt
+iface br-mgmt inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports ens3.10
+    address {{host_info[inventory_hostname].MGMT_IP}}
+    netmask 255.255.252.0
+
+# OpenStack Networking VXLAN (tunnel/overlay) bridge
+#
+# Only the COMPUTE and NETWORK nodes must have an IP address
+# on this bridge. When used by infrastructure nodes, the
+# IP addresses are assigned to containers which use this
+# bridge.
+#
+auto br-vxlan
+iface br-vxlan inet manual
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports ens3.30
+
+# OpenStack Networking VLAN bridge
+auto br-vlan
+iface br-vlan inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports ens3
+    address {{host_info[inventory_hostname].VLAN_IP}}
+    netmask 255.255.255.0
+    gateway 192.168.122.1
+    dns-nameserver 8.8.8.8 8.8.4.4
+
+# Storage bridge
+auto br-storage
+iface br-storage inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports ens3.20
+    address {{host_info[inventory_hostname].STORAGE_IP}}
+    netmask 255.255.252.0
diff --git a/prototypes/openstack-ansible/var/ubuntu.yml b/prototypes/openstack-ansible/var/ubuntu.yml
new file mode 100644 (file)
index 0000000..71f54ec
--- /dev/null
@@ -0,0 +1,6 @@
+---
+OSA_URL: https://git.openstack.org/openstack/openstack-ansible
+OSA_PATH: /opt/openstack-ansible
+OSA_ETC_PATH: /etc/openstack_deploy
+JUMPHOST_IP: 192.168.122.2
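+# host_info feeds the interface templates in ../template/bifrost/,
+# e.g. {{host_info[inventory_hostname].MGMT_IP}}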
+host_info:
+  jumphost:
+    MGMT_IP: '172.29.236.10'
+    VLAN_IP: '192.168.122.2'
+    STORAGE_IP: '172.29.244.10'
+  controller00:
+    MGMT_IP: '172.29.236.11'
+    VLAN_IP: '192.168.122.3'
+    STORAGE_IP: '172.29.244.11'
+  controller01:
+    MGMT_IP: '172.29.236.12'
+    VLAN_IP: '192.168.122.4'
+    STORAGE_IP: '172.29.244.12'
+  controller02:
+    MGMT_IP: '172.29.236.13'
+    VLAN_IP: '192.168.122.5'
+    STORAGE_IP: '172.29.244.13'
+  compute00:
+    MGMT_IP: '172.29.236.14'
+    VLAN_IP: '192.168.122.6'
+    VLAN_IP_SECOND: '173.29.241.1'
+    VXLAN_IP: '172.29.240.14'
+    STORAGE_IP: '172.29.244.14'
+  compute01:
+    MGMT_IP: '172.29.236.15'
+    VLAN_IP: '192.168.122.7'
+    VLAN_IP_SECOND: '173.29.241.2'
+    VXLAN_IP: '172.29.240.15'
+    STORAGE_IP: '172.29.244.15'