xci: Update the repo directory structure 29/39629/2
author Fatih Degirmenci <fatih.degirmenci@ericsson.com>
Fri, 18 Aug 2017 20:32:58 +0000 (22:32 +0200)
committer Fatih Degirmenci <fatih.degirmenci@ericsson.com>
Sun, 20 Aug 2017 22:48:33 +0000 (00:48 +0200)
This patch
- removes the obsolete openstack-ansible and puppet-infracloud directories
- adds an upstream directory to keep contributions that are pending
  acceptance upstream but are needed for OPNFV to make progress. In a
  perfect world this directory is empty, so anything in it should be
  short-lived.
- adds a prototypes directory to keep work that has not yet been agreed
  to become part of XCI and to share ideas and trials with the rest of
  the community.
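
Roughly the same restructuring expressed as shell commands (a sketch only;
the actual change moves puppet-infracloud/.gitkeep into prototypes rather
than creating a new file there):

    # sketch of the restructuring, run from a releng-xci working tree
    mkdir -p upstream prototypes
    git mv puppet-infracloud/.gitkeep prototypes/.gitkeep
    git rm -r openstack-ansible puppet-infracloud
    touch upstream/.gitkeep
    git add upstream/.gitkeep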

Change-Id: I12afe7050ff2b0ac457d4b16d21dfd7df6ac84c9
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
28 files changed:
openstack-ansible/README.md [deleted file]
openstack-ansible/file/cinder.yml [deleted file]
openstack-ansible/file/exports [deleted file]
openstack-ansible/file/modules [deleted file]
openstack-ansible/file/openstack_user_config.yml [deleted file]
openstack-ansible/file/opnfv-setup-openstack.yml [deleted file]
openstack-ansible/file/user_variables.yml [deleted file]
openstack-ansible/playbooks/configure-targethosts.yml [deleted file]
openstack-ansible/playbooks/configure-xcimaster.yml [deleted file]
openstack-ansible/playbooks/inventory [deleted file]
openstack-ansible/scripts/osa-deploy.sh [deleted file]
openstack-ansible/template/bifrost/compute.interface.j2 [deleted file]
openstack-ansible/template/bifrost/controller.interface.j2 [deleted file]
openstack-ansible/var/ubuntu.yml [deleted file]
prototypes/.gitkeep [moved from puppet-infracloud/.gitkeep with 100% similarity]
puppet-infracloud/README.md [deleted file]
puppet-infracloud/creds/clouds.yaml [deleted file]
puppet-infracloud/deploy_on_baremetal.md [deleted file]
puppet-infracloud/hiera/common.yaml [deleted file]
puppet-infracloud/hiera/common_baremetal.yaml [deleted file]
puppet-infracloud/install_modules.sh [deleted file]
puppet-infracloud/install_puppet.sh [deleted file]
puppet-infracloud/manifests/site.pp [deleted file]
puppet-infracloud/modules.env [deleted file]
puppet-infracloud/modules/opnfv/manifests/compute.pp [deleted file]
puppet-infracloud/modules/opnfv/manifests/controller.pp [deleted file]
puppet-infracloud/modules/opnfv/manifests/server.pp [deleted file]
upstream/.gitkeep [new file with mode: 0644]

diff --git a/openstack-ansible/README.md b/openstack-ansible/README.md
deleted file mode 100644 (file)
index 6210cc0..0000000
+++ /dev/null
@@ -1,48 +0,0 @@
-===============================
-How to deploy OpenStack-Ansible
-===============================
-The script and playbooks in this repo deploy an OpenStack
-cloud based on OpenStack-Ansible.
-They need to be combined with Bifrost: use Bifrost to provision six VMs.
-To learn about how to use Bifrost, you can read the document on
-[/opt/bifrost/README.md].
-
-Minimal requirements:
-1. At least 150 GB of free space on the partition where
-   "/var/lib/libvirt/images/" lives.
-2. Each VM needs at least 8 vCPUs, 12 GB RAM and a 60 GB disk.
-
-After provisioning the six VMs, follow these steps:
-
-1. Run the script to deploy OpenStack::
-  cd /opt/openstack-ansible/scripts/
-  sudo ./osa-deploy.sh
-This will take a while. When the deployment succeeds, you will see the
-message "OpenStack deployed successfully".
-
-2. To verify that OpenStack is operational:
-  2.1 ssh into the controller::
-      ssh 192.168.122.3
-  2.2 Enter into the lxc container::
-      lxcname=$(lxc-ls | grep utility)
-      lxc-attach -n $lxcname
-  2.3 Verify the OpenStack API::
-      source /root/openrc
-      openstack user list
-
-This will show the following output::
-+----------------------------------+--------------------+
-| ID                               | Name               |
-+----------------------------------+--------------------+
-| 056f8fe41336435991fd80872731cada | aodh               |
-| 308f6436e68f40b49d3b8e7ce5c5be1e | glance             |
-| 351b71b43a66412d83f9b3cd75485875 | nova               |
-| 511129e053394aea825cce13b9f28504 | ceilometer         |
-| 5596f71319d44c8991fdc65f3927b62e | gnocchi            |
-| 586f49e3398a4c47a2f6fe50135d4941 | stack_domain_admin |
-| 601b329e6b1d427f9a1e05ed28753497 | heat               |
-| 67fe383b94964a4781345fbcc30ae434 | cinder             |
-| 729bb08351264d729506dad84ed3ccf0 | admin              |
-| 9f2beb2b270940048fe6844f0b16281e | neutron            |
-| fa68f86dd1de4ddbbb7415b4d9a54121 | keystone           |
-+----------------------------------+--------------------+
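
The verification in the README above can also be run non-interactively; a
minimal sketch, assuming root ssh access to the controller IP used in the
inventory and that a single utility container exists:

    # sketch: README verification steps collapsed into one command
    ssh root@192.168.122.3 'lxc-attach -n "$(lxc-ls | grep utility)" -- \
        bash -c "source /root/openrc && openstack user list"'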
diff --git a/openstack-ansible/file/cinder.yml b/openstack-ansible/file/cinder.yml
deleted file mode 100644 (file)
index e40b392..0000000
+++ /dev/null
@@ -1,13 +0,0 @@
----
-# This file contains an example to show how to set
-# the cinder-volume service to run in a container.
-#
-# Important note:
-# When using LVM or any iSCSI-based cinder backends, such as NetApp with
-# iSCSI protocol, the cinder-volume service *must* run on metal.
-# Reference: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
-
-container_skel:
-  cinder_volumes_container:
-    properties:
-      is_metal: false
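
Per the note in the file above, LVM/iSCSI backends require cinder-volume to
run on metal; a minimal sketch of that variant, assuming the
/etc/openstack_deploy layout used by configure-xcimaster.yml below:

    # sketch: switch cinder-volume from a container to metal
    cat > /etc/openstack_deploy/env.d/cinder.yml <<'EOF'
    ---
    container_skel:
      cinder_volumes_container:
        properties:
          is_metal: true
    EOF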
diff --git a/openstack-ansible/file/exports b/openstack-ansible/file/exports
deleted file mode 100644 (file)
index 315f79d..0000000
+++ /dev/null
@@ -1,12 +0,0 @@
-# /etc/exports: the access control list for filesystems which may be exported
-#               to NFS clients.  See exports(5).
-#
-# Example for NFSv2 and NFSv3:
-# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
-#
-# Example for NFSv4:
-# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
-# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
-#
-/images         *(rw,sync,no_subtree_check,no_root_squash)
-
diff --git a/openstack-ansible/file/modules b/openstack-ansible/file/modules
deleted file mode 100644 (file)
index 60a517f..0000000
+++ /dev/null
@@ -1,8 +0,0 @@
-# /etc/modules: kernel modules to load at boot time.
-#
-# This file contains the names of kernel modules that should be loaded
-# at boot time, one per line. Lines beginning with "#" are ignored.
-# Parameters can be specified after the module name.
-
-bonding
-8021q
diff --git a/openstack-ansible/file/openstack_user_config.yml b/openstack-ansible/file/openstack_user_config.yml
deleted file mode 100644 (file)
index 43e88c0..0000000
+++ /dev/null
@@ -1,278 +0,0 @@
----
-cidr_networks:
-  container: 172.29.236.0/22
-  tunnel: 172.29.240.0/22
-  storage: 172.29.244.0/22
-
-used_ips:
-  - "172.29.236.1,172.29.236.50"
-  - "172.29.240.1,172.29.240.50"
-  - "172.29.244.1,172.29.244.50"
-  - "172.29.248.1,172.29.248.50"
-
-global_overrides:
-  internal_lb_vip_address: 172.29.236.222
-  external_lb_vip_address: 192.168.122.220
-  tunnel_bridge: "br-vxlan"
-  management_bridge: "br-mgmt"
-  provider_networks:
-    - network:
-        container_bridge: "br-mgmt"
-        container_type: "veth"
-        container_interface: "eth1"
-        ip_from_q: "container"
-        type: "raw"
-        group_binds:
-          - all_containers
-          - hosts
-        is_container_address: true
-        is_ssh_address: true
-    - network:
-        container_bridge: "br-vxlan"
-        container_type: "veth"
-        container_interface: "eth10"
-        ip_from_q: "tunnel"
-        type: "vxlan"
-        range: "1:1000"
-        net_name: "vxlan"
-        group_binds:
-          - neutron_linuxbridge_agent
-    - network:
-        container_bridge: "br-vlan"
-        container_type: "veth"
-        container_interface: "eth12"
-        host_bind_override: "eth12"
-        type: "flat"
-        net_name: "flat"
-        group_binds:
-          - neutron_linuxbridge_agent
-    - network:
-        container_bridge: "br-vlan"
-        container_type: "veth"
-        container_interface: "eth11"
-        type: "vlan"
-        range: "1:1"
-        net_name: "vlan"
-        group_binds:
-          - neutron_linuxbridge_agent
-    - network:
-        container_bridge: "br-storage"
-        container_type: "veth"
-        container_interface: "eth2"
-        ip_from_q: "storage"
-        type: "raw"
-        group_binds:
-          - glance_api
-          - cinder_api
-          - cinder_volume
-          - nova_compute
-
-# ##
-# ## Infrastructure
-# ##
-
-# galera, memcache, rabbitmq, utility
-shared-infra_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# repository (apt cache, python packages, etc)
-repo-infra_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# load balancer
-# Ideally the load balancer should not use the Infrastructure hosts.
-# Dedicated hardware is best for improved performance and security.
-haproxy_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# rsyslog server
-# log_hosts:
-# log1:
-#  ip: 172.29.236.14
-
-# ##
-# ## OpenStack
-# ##
-
-# keystone
-identity_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# cinder api services
-storage-infra_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# glance
-# The settings here are repeated for each infra host.
-# They could instead be applied as global settings in
-# user_variables, but are left here to illustrate that
-# each container could have different storage targets.
-image_hosts:
-  controller00:
-    ip: 172.29.236.11
-    container_vars:
-      limit_container_types: glance
-      glance_nfs_client:
-        - server: "172.29.244.15"
-          remote_path: "/images"
-          local_path: "/var/lib/glance/images"
-          type: "nfs"
-          options: "_netdev,auto"
-  controller01:
-    ip: 172.29.236.12
-    container_vars:
-      limit_container_types: glance
-      glance_nfs_client:
-        - server: "172.29.244.15"
-          remote_path: "/images"
-          local_path: "/var/lib/glance/images"
-          type: "nfs"
-          options: "_netdev,auto"
-  controller02:
-    ip: 172.29.236.13
-    container_vars:
-      limit_container_types: glance
-      glance_nfs_client:
-        - server: "172.29.244.15"
-          remote_path: "/images"
-          local_path: "/var/lib/glance/images"
-          type: "nfs"
-          options: "_netdev,auto"
-
-# nova api, conductor, etc services
-compute-infra_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# heat
-orchestration_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# horizon
-dashboard_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# neutron server, agents (L3, etc)
-network_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# ceilometer (telemetry API)
-metering-infra_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# aodh (telemetry alarm service)
-metering-alarm_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# gnocchi (telemetry metrics storage)
-metrics_hosts:
-  controller00:
-    ip: 172.29.236.11
-  controller01:
-    ip: 172.29.236.12
-  controller02:
-    ip: 172.29.236.13
-
-# nova hypervisors
-compute_hosts:
-  compute00:
-    ip: 172.29.236.14
-  compute01:
-    ip: 172.29.236.15
-
-# ceilometer compute agent (telemetry)
-metering-compute_hosts:
-  compute00:
-    ip: 172.29.236.14
-  compute01:
-    ip: 172.29.236.15
-# cinder volume hosts (NFS-backed)
-# The settings here are repeated for each infra host.
-# They could instead be applied as global settings in
-# user_variables, but are left here to illustrate that
-# each container could have different storage targets.
-storage_hosts:
-  controller00:
-    ip: 172.29.236.11
-    container_vars:
-      cinder_backends:
-        limit_container_types: cinder_volume
-        lvm:
-          volume_group: cinder-volumes
-          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
-          volume_backend_name: LVM_iSCSI
-          iscsi_ip_address: "172.29.244.11"
-  controller01:
-    ip: 172.29.236.12
-    container_vars:
-      cinder_backends:
-        limit_container_types: cinder_volume
-        lvm:
-          volume_group: cinder-volumes
-          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
-          volume_backend_name: LVM_iSCSI
-          iscsi_ip_address: "172.29.244.12"
-  controller02:
-    ip: 172.29.236.13
-    container_vars:
-      cinder_backends:
-        limit_container_types: cinder_volume
-        lvm:
-          volume_group: cinder-volumes
-          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
-          volume_backend_name: LVM_iSCSI
-          iscsi_ip_address: "172.29.244.13"
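
As the inline comments in the file above note, the per-host glance_nfs_client
settings repeated under image_hosts could instead be applied once globally; a
hedged sketch via user_variables.yml:

    # sketch: define the glance NFS client once instead of per host
    cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
    glance_nfs_client:
      - server: "172.29.244.15"
        remote_path: "/images"
        local_path: "/var/lib/glance/images"
        type: "nfs"
        options: "_netdev,auto"
    EOF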
diff --git a/openstack-ansible/file/opnfv-setup-openstack.yml b/openstack-ansible/file/opnfv-setup-openstack.yml
deleted file mode 100644 (file)
index aacdeff..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
----
-# Copyright 2014, Rackspace US, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-- include: os-keystone-install.yml
-- include: os-glance-install.yml
-- include: os-cinder-install.yml
-- include: os-nova-install.yml
-- include: os-neutron-install.yml
-- include: os-heat-install.yml
-- include: os-horizon-install.yml
-- include: os-ceilometer-install.yml
-- include: os-aodh-install.yml
-#NOTE(stevelle) Ensure Gnocchi identities exist before Swift
-- include: os-gnocchi-install.yml
-  when:
-    - gnocchi_storage_driver is defined
-    - gnocchi_storage_driver == 'swift'
-  vars:
-    gnocchi_identity_only: True
-- include: os-swift-install.yml
-- include: os-gnocchi-install.yml
-- include: os-ironic-install.yml
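
The gnocchi include above is pulled in before Swift only when gnocchi uses the
Swift storage driver; a sketch of exercising that path, on the assumption that
openstack-ansible passes extra arguments through to ansible-playbook:

    # sketch: run the playbook with gnocchi's swift storage driver selected
    cd /opt/openstack-ansible/playbooks
    openstack-ansible opnfv-setup-openstack.yml -e gnocchi_storage_driver=swift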
diff --git a/openstack-ansible/file/user_variables.yml b/openstack-ansible/file/user_variables.yml
deleted file mode 100644 (file)
index 65cbcc1..0000000
+++ /dev/null
@@ -1,27 +0,0 @@
----
-# Copyright 2014, Rackspace US, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# ##
-# ## This file contains commonly used overrides for convenience. Please inspect
-# ## the defaults for each role to find additional override options.
-# ##
-
-# # Debug and Verbose options.
-debug: false
-
-haproxy_keepalived_external_vip_cidr: "192.168.122.220/32"
-haproxy_keepalived_internal_vip_cidr: "172.29.236.222/32"
-haproxy_keepalived_external_interface: br-vlan
-haproxy_keepalived_internal_interface: br-mgmt
diff --git a/openstack-ansible/playbooks/configure-targethosts.yml b/openstack-ansible/playbooks/configure-targethosts.yml
deleted file mode 100644 (file)
index 538fe17..0000000
+++ /dev/null
@@ -1,61 +0,0 @@
----
-- hosts: all
-  remote_user: root
-  vars_files:
-    - ../var/ubuntu.yml
-  tasks:
-    - name: add public key to host
-      copy:
-        src: ../file/authorized_keys
-        dest: /root/.ssh/authorized_keys
-    - name: configure modules
-      copy:
-        src: ../file/modules
-        dest: /etc/modules
-
-- hosts: controller
-  remote_user: root
-  vars_files:
-    - ../var/ubuntu.yml
-  tasks:
-    - name: configure network
-      template:
-        src: ../template/bifrost/controller.interface.j2
-        dest: /etc/network/interfaces
-      notify:
-        - restart network service
-  handlers:
-    - name: restart network service
-      shell: "/sbin/ifconfig ens3 0 &&/sbin/ifdown -a && /sbin/ifup -a"
-
-- hosts: compute
-  remote_user: root
-  vars_files:
-    - ../var/ubuntu.yml
-  tasks:
-    - name: configure network
-      template:
-        src: ../template/bifrost/compute.interface.j2
-        dest: /etc/network/interfaces
-      notify:
-        - restart network service
-  handlers:
-    - name: restart network service
-      shell: "/sbin/ifconfig ens3 0 &&/sbin/ifdown -a && /sbin/ifup -a"
-
-- hosts: compute01
-  remote_user: root
-  tasks:
-    - name: make nfs dir
-      file: "dest=/images mode=0777 state=directory"
-    - name: configure nfs ports in /etc/services
-      shell: "echo 'nfs        2049/tcp' >>  /etc/services && echo 'nfs        2049/udp' >>  /etc/services"
-    - name: configure NFS
-      copy:
-        src: ../file/exports
-        dest: /etc/exports
-      notify:
-        - restart nfs service
-  handlers:
-    - name: restart nfs service
-      service: name=nfs-kernel-server state=restarted
diff --git a/openstack-ansible/playbooks/configure-xcimaster.yml b/openstack-ansible/playbooks/configure-xcimaster.yml
deleted file mode 100644 (file)
index fbbde64..0000000
+++ /dev/null
@@ -1,66 +0,0 @@
----
-- hosts: xcimaster
-  remote_user: root
-  vars_files:
-    - ../var/ubuntu.yml
-  tasks:
-    - name: generate SSH keys
-      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
-      args:
-        creates: /root/.ssh/id_rsa
-    - name: fetch public key
-      fetch: src="/root/.ssh/id_rsa.pub" dest="/"
-    - name: remove openstack-ansible directories
-      file:
-        path={{ item }}
-        state=absent
-        recurse=no
-      with_items:
-        - "{{OSA_PATH}}"
-        - "{{OSA_ETC_PATH}}"
-    - name: clone openstack-ansible
-      git:
-        repo: "{{OSA_URL}}"
-        dest: "{{OSA_PATH}}"
-        version: "{{OPENSTACK_OSA_VERSION}}"
-    - name: copy opnfv-setup-openstack.yml to /opt/openstack-ansible/playbooks
-      copy:
-        src: ../file/opnfv-setup-openstack.yml
-        dest: "{{OSA_PATH}}/playbooks/opnfv-setup-openstack.yml"
-    - name: copy /opt/openstack-ansible/etc/openstack_deploy to /etc/openstack_deploy
-      shell: "/bin/cp -rf {{OSA_PATH}}/etc/openstack_deploy {{OSA_ETC_PATH}}"
-    - name: bootstrap
-      command: "/bin/bash ./scripts/bootstrap-ansible.sh"
-      args:
-        chdir: "{{OSA_PATH}}"
-    - name: generate password token
-      command: "python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml"
-      args:
-        chdir: /opt/openstack-ansible/scripts/
-    - name: copy openstack_user_config.yml to /etc/openstack_deploy
-      copy:
-        src: ../file/openstack_user_config.yml
-        dest: "{{OSA_ETC_PATH}}/openstack_user_config.yml"
-    - name: copy cinder.yml to /etc/openstack_deploy/env.d
-      copy:
-        src: ../file/cinder.yml
-        dest: "{{OSA_ETC_PATH}}/env.d/cinder.yml"
-    - name: copy user_variables.yml to /etc/openstack_deploy/
-      copy:
-        src: ../file/user_variables.yml
-        dest: "{{OSA_ETC_PATH}}/user_variables.yml"
-    - name: configure network
-      template:
-        src: ../template/bifrost/controller.interface.j2
-        dest: /etc/network/interfaces
-      notify:
-        - restart network service
-  handlers:
-    - name: restart network service
-      shell: "/sbin/ifconfig ens3 0 &&/sbin/ifdown -a && /sbin/ifup -a"
-
-- hosts: localhost
-  remote_user: root
-  tasks:
-    - name: Generate authorized_keys
-      shell: "/bin/cat /xcimaster/root/.ssh/id_rsa.pub >> ../file/authorized_keys"
diff --git a/openstack-ansible/playbooks/inventory b/openstack-ansible/playbooks/inventory
deleted file mode 100644 (file)
index d3768f5..0000000
+++ /dev/null
@@ -1,11 +0,0 @@
-[xcimaster]
-xcimaster ansible_ssh_host=192.168.122.2
-
-[controller]
-controller00 ansible_ssh_host=192.168.122.3
-controller01 ansible_ssh_host=192.168.122.4
-controller02 ansible_ssh_host=192.168.122.5
-
-[compute]
-compute00 ansible_ssh_host=192.168.122.6
-compute01 ansible_ssh_host=192.168.122.7
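
A quick connectivity check against this inventory before running the playbooks
(a sketch; run from the directory holding the inventory file):

    # sketch: ad-hoc ping of every host defined in the inventory above
    ansible -i inventory all -m ping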
diff --git a/openstack-ansible/scripts/osa-deploy.sh b/openstack-ansible/scripts/osa-deploy.sh
deleted file mode 100755 (executable)
index ec60744..0000000
+++ /dev/null
@@ -1,136 +0,0 @@
-#!/bin/bash
-# SPDX-license-identifier: Apache-2.0
-##############################################################################
-# Copyright (c) 2016 Huawei Technologies Co.,Ltd and others.
-# All rights reserved. This program and the accompanying materials
-# are made available under the terms of the Apache License, Version 2.0
-# which accompanies this distribution, and is available at
-# http://www.apache.org/licenses/LICENSE-2.0
-##############################################################################
-set -o errexit
-set -o nounset
-set -o pipefail
-
-export OSA_PATH=/opt/openstack-ansible
-export LOG_PATH=$OSA_PATH/log
-export PLAYBOOK_PATH=$OSA_PATH/playbooks
-export OSA_BRANCH=${OSA_BRANCH:-"master"}
-XCIMASTER_IP="192.168.122.2"
-
-sudo /bin/rm -rf $LOG_PATH
-sudo /bin/mkdir -p $LOG_PATH
-sudo /bin/cp /root/.ssh/id_rsa.pub ../file/authorized_keys
-echo -e '\n' | sudo tee --append ../file/authorized_keys
-
-# log some info
-echo -e "\n"
-echo "***********************************************************************"
-echo "*                                                                     *"
-echo "*                        Configure XCI Master                         *"
-echo "*                                                                     *"
-echo "*  Bootstrap xci-master, configure network, clone openstack-ansible   *"
-echo "*                Playbooks: configure-xcimaster.yml                   *"
-echo "*                                                                     *"
-echo "***********************************************************************"
-echo -e "\n"
-
-cd ../playbooks/
-# this will prepare the jump host
-# clone OpenStack-Ansible, bootstrap it and configure the network
-echo "xci: running ansible playbook configure-xcimaster.yml"
-sudo -E ansible-playbook -i inventory configure-xcimaster.yml
-
-echo "XCI Master is configured successfully!"
-
-# log some info
-echo -e "\n"
-echo "***********************************************************************"
-echo "*                                                                     *"
-echo "*                          Configure Nodes                            *"
-echo "*                                                                     *"
-echo "*       Configure network on OpenStack Nodes, configure NFS           *"
-echo "*                Playbooks: configure-targethosts.yml                 *"
-echo "*                                                                     *"
-echo "***********************************************************************"
-echo -e "\n"
-
-# this will prepare the target hosts,
-# i.e. configure the network and NFS
-echo "xci: running ansible playbook configure-targethosts.yml"
-sudo -E ansible-playbook -i inventory configure-targethosts.yml
-
-echo "Nodes are configured successfully!"
-
-# log some info
-echo -e "\n"
-echo "***********************************************************************"
-echo "*                                                                     *"
-echo "*                       Set Up OpenStack Nodes                        *"
-echo "*                                                                     *"
-echo "*            Set up OpenStack Nodes using openstack-ansible           *"
-echo "*         Playbooks: setup-hosts.yml, setup-infrastructure.yml        *"
-echo "*                                                                     *"
-echo "***********************************************************************"
-echo -e "\n"
-
-# deploy OpenStack using OpenStack-Ansible
-echo "xci: running ansible playbook setup-hosts.yml"
-sudo -E /bin/sh -c "ssh root@$XCIMASTER_IP openstack-ansible \
-     $PLAYBOOK_PATH/setup-hosts.yml" | \
-     tee $LOG_PATH/setup-hosts.log
-
-# check the result of openstack-ansible setup-hosts.yml
-# if failed, exit with exit code 1
-if grep -q 'failed=1\|unreachable=1' $LOG_PATH/setup-hosts.log; then
-    echo "OpenStack node setup failed!"
-    exit 1
-fi
-
-echo "xci: running ansible playbook setup-infrastructure.yml"
-sudo -E /bin/sh -c "ssh root@$XCIMASTER_IP openstack-ansible \
-     $PLAYBOOK_PATH/setup-infrastructure.yml" | \
-     tee $LOG_PATH/setup-infrastructure.log
-
-# check the result of openstack-ansible setup-infrastructure.yml
-# if failed, exit with exit code 1
-if grep -q 'failed=1\|unreachable=1' $LOG_PATH/setup-infrastructure.log; then
-    echo "OpenStack node setup failed!"
-    exit 1
-fi
-
-echo "OpenStack nodes are setup successfully!"
-
-sudo -E /bin/sh -c "ssh root@$XCIMASTER_IP ansible -i $PLAYBOOK_PATH/inventory/ \
-           galera_container -m shell \
-           -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"" \
-           | tee $LOG_PATH/galera.log
-
-if grep -q 'FAILED' $LOG_PATH/galera.log; then
-    echo "Database cluster verification failed!"
-    exit 1
-else
-    echo "Database cluster verification successful!"
-fi
-
-# log some info
-echo -e "\n"
-echo "***********************************************************************"
-echo "*                                                                     *"
-echo "*                           Install OpenStack                         *"
-echo "*                 Playbooks: opnfv-setup-openstack.yml                *"
-echo "*                                                                     *"
-echo "***********************************************************************"
-echo -e "\n"
-
-echo "xci: running ansible playbook opnfv-setup-openstack.yml"
-sudo -E /bin/sh -c "ssh root@$XCIMASTER_IP openstack-ansible \
-     $PLAYBOOK_PATH/opnfv-setup-openstack.yml" | \
-     tee $LOG_PATH/opnfv-setup-openstack.log
-
-if grep -q 'failed=1\|unreachable=1' $LOG_PATH/opnfv-setup-openstack.log; then
-   echo "OpenStack installation failed!"
-   exit 1
-else
-   echo "OpenStack installation is successfully completed!"
-   exit 0
-fi
diff --git a/openstack-ansible/template/bifrost/compute.interface.j2 b/openstack-ansible/template/bifrost/compute.interface.j2
deleted file mode 100644 (file)
index 1719f6a..0000000
+++ /dev/null
@@ -1,86 +0,0 @@
-# This file describes the network interfaces available on your system
-# and how to activate them. For more information, see interfaces(5).
-
-# The loopback network interface
-auto lo
-iface lo inet loopback
-
-
-# Physical interface
-auto ens3
-iface ens3 inet manual
-
-# Container/Host management VLAN interface
-auto ens3.10
-iface ens3.10 inet manual
-    vlan-raw-device ens3
-
-# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
-auto ens3.30
-iface ens3.30 inet manual
-    vlan-raw-device ens3
-
-# Storage network VLAN interface (optional)
-auto ens3.20
-iface ens3.20 inet manual
-    vlan-raw-device ens3
-
-# Container/Host management bridge
-auto br-mgmt
-iface br-mgmt inet static
-    bridge_stp off
-    bridge_waitport 0
-    bridge_fd 0
-    bridge_ports ens3.10
-    address {{host_info[inventory_hostname].MGMT_IP}}
-    netmask 255.255.252.0
-
-# compute1 VXLAN (tunnel/overlay) bridge config
-auto br-vxlan
-iface br-vxlan inet static
-    bridge_stp off
-    bridge_waitport 0
-    bridge_fd 0
-    bridge_ports ens3.30
-    address {{host_info[inventory_hostname].VXLAN_IP}}
-    netmask 255.255.252.0
-
-# OpenStack Networking VLAN bridge
-auto br-vlan
-iface br-vlan inet static
-    bridge_stp off
-    bridge_waitport 0
-    bridge_fd 0
-    bridge_ports ens3
-    address {{host_info[inventory_hostname].VLAN_IP}}
-    netmask 255.255.255.0
-    gateway 192.168.122.1
-    offload-sg off
-    # Create veth pair, don't bomb if already exists
-    pre-up ip link add br-vlan-veth type veth peer name eth12 || true
-    # Set both ends UP
-    pre-up ip link set br-vlan-veth up
-    pre-up ip link set eth12 up
-    # Delete veth pair on DOWN
-    post-down ip link del br-vlan-veth || true
-    bridge_ports br-vlan-veth
-
-# Add an additional address to br-vlan
-iface br-vlan inet static
-    # Flat network default gateway
-    # -- This needs to exist somewhere for network reachability
-    # -- from the router namespace for floating IP paths.
-    # -- Putting this here is primarily for tempest to work.
-    address {{host_info[inventory_hostname].VLAN_IP_SECOND}}
-    netmask 255.255.252.0
-    dns-nameserver 8.8.8.8 8.8.4.4
-
-# compute1 Storage bridge
-auto br-storage
-iface br-storage inet static
-    bridge_stp off
-    bridge_waitport 0
-    bridge_fd 0
-    bridge_ports ens3.20
-    address {{host_info[inventory_hostname].STORAGE_IP}}
-    netmask 255.255.252.0
diff --git a/openstack-ansible/template/bifrost/controller.interface.j2 b/openstack-ansible/template/bifrost/controller.interface.j2
deleted file mode 100644 (file)
index 74aeea9..0000000
+++ /dev/null
@@ -1,71 +0,0 @@
-# This file describes the network interfaces available on your system
-# and how to activate them. For more information, see interfaces(5).
-
-# The loopback network interface
-auto lo
-iface lo inet loopback
-
-# Physical interface
-auto ens3
-iface ens3 inet manual
-
-# Container/Host management VLAN interface
-auto ens3.10
-iface ens3.10 inet manual
-    vlan-raw-device ens3
-
-# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
-auto ens3.30
-iface ens3.30 inet manual
-    vlan-raw-device ens3
-
-# Storage network VLAN interface (optional)
-auto ens3.20
-iface ens3.20 inet manual
-    vlan-raw-device ens3
-
-# Container/Host management bridge
-auto br-mgmt
-iface br-mgmt inet static
-    bridge_stp off
-    bridge_waitport 0
-    bridge_fd 0
-    bridge_ports ens3.10
-    address {{host_info[inventory_hostname].MGMT_IP}}
-    netmask 255.255.252.0
-
-# OpenStack Networking VXLAN (tunnel/overlay) bridge
-#
-# Only the COMPUTE and NETWORK nodes must have an IP address
-# on this bridge. When used by infrastructure nodes, the
-# IP addresses are assigned to containers which use this
-# bridge.
-#
-auto br-vxlan
-iface br-vxlan inet manual
-    bridge_stp off
-    bridge_waitport 0
-    bridge_fd 0
-    bridge_ports ens3.30
-
-# OpenStack Networking VLAN bridge
-auto br-vlan
-iface br-vlan inet static
-    bridge_stp off
-    bridge_waitport 0
-    bridge_fd 0
-    bridge_ports ens3
-    address {{host_info[inventory_hostname].VLAN_IP}}
-    netmask 255.255.255.0
-    gateway 192.168.122.1
-    dns-nameserver 8.8.8.8 8.8.4.4
-
-# Storage bridge
-auto br-storage
-iface br-storage inet static
-    bridge_stp off
-    bridge_waitport 0
-    bridge_fd 0
-    bridge_ports ens3.20
-    address {{host_info[inventory_hostname].STORAGE_IP}}
-    netmask 255.255.252.0
diff --git a/openstack-ansible/var/ubuntu.yml b/openstack-ansible/var/ubuntu.yml
deleted file mode 100644 (file)
index eb595be..0000000
+++ /dev/null
@@ -1,8 +0,0 @@
----
-OSA_URL: https://git.openstack.org/openstack/openstack-ansible
-OSA_PATH: /opt/openstack-ansible
-OSA_ETC_PATH: /etc/openstack_deploy
-OPENSTACK_OSA_VERSION: "{{ lookup('env','OPENSTACK_OSA_VERSION') }}"
-
-XCIMASTER_IP: 192.168.122.2
-host_info: {'xcimaster':{'MGMT_IP': '172.29.236.10','VLAN_IP': '192.168.122.2', 'STORAGE_IP': '172.29.244.10'},'controller00':{'MGMT_IP': '172.29.236.11','VLAN_IP': '192.168.122.3', 'STORAGE_IP': '172.29.244.11'},'controller01':{'MGMT_IP': '172.29.236.12','VLAN_IP': '192.168.122.4', 'STORAGE_IP': '172.29.244.12'},'controller02':{'MGMT_IP': '172.29.236.13','VLAN_IP': '192.168.122.5', 'STORAGE_IP': '172.29.240.13'},'compute00':{'MGMT_IP': '172.29.236.14','VLAN_IP': '192.168.122.6','VLAN_IP_SECOND': '173.29.241.1','VXLAN_IP': '172.29.240.14', 'STORAGE_IP': '172.29.244.14'},'compute01':{'MGMT_IP': '172.29.236.15','VLAN_IP': '192.168.122.7','VLAN_IP_SECOND': '173.29.241.2','VXLAN_IP': '172.29.240.15', 'STORAGE_IP': '172.29.244.15'}}
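
OPENSTACK_OSA_VERSION is read from the environment by the lookup above, so it
has to be exported before the deploy script runs; a minimal sketch (the value
shown is only an example, any valid OSA branch/tag/SHA works):

    # sketch: pin the OpenStack-Ansible version picked up by var/ubuntu.yml
    export OPENSTACK_OSA_VERSION=master
    sudo -E ./osa-deploy.sh    # -E keeps the exported variable visible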
diff --git a/puppet-infracloud/README.md b/puppet-infracloud/README.md
deleted file mode 100644 (file)
index 7ebce3d..0000000
+++ /dev/null
@@ -1,61 +0,0 @@
-===============================
-How to deploy puppet-infracloud
-===============================
-The manifests and modules defined in this repo will deploy an OpenStack cloud based on the `Infra Cloud <http://docs.openstack.org/infra/system-config/infra-cloud.html>`_ project.
-
-Once all the hardware is provisioned, log in to the controller and compute nodes and follow these steps:
-
-1. Clone releng-xci::
-
-    git clone https://gerrit.opnfv.org/gerrit/releng-xci /opt/releng-xci
-
-2. Copy hiera to the right place::
-
-    cp /opt/puppet-infracloud/hiera/common.yaml /var/lib/hiera
-
-3. Install modules::
-
-    cd /opt/puppet-infracloud
-    ./install_modules.sh
-
-4. Apply the infracloud manifest::
-
-    cd /opt/puppet-infracloud
-    puppet apply manifests/site.pp --modulepath=/etc/puppet/modules:/opt/puppet-infracloud/modules
-
-5. Once you finish this operation on controller and compute nodes, you will have a functional OpenStack cloud.
-
-On the jumphost, follow these steps:
-
-1. Clone releng-xci::
-
-    git clone https://gerrit.opnfv.org/gerrit/releng-xci /opt/releng-xci
-
-2. Create OpenStack clouds config directory::
-
-    mkdir -p /root/.config/openstack
-
-3. Copy credentials file::
-
-    cp /opt/puppet-infracloud/creds/clouds.yaml /root/.config/openstack/
-
-4. Install the python-dev package, as the installation of python-openstackclient depends on it::
-
-    apt-get install -y python-dev
-
-5. Install python-openstackclient (version 3.2.0 is known to work)::
-
-    pip install python-openstackclient
-
-6. Update /etc/hosts and add controller00::
-
-    192.168.122.3 controller00
-    192.168.122.3 controller00.opnfvlocal controller00
-
-7. Export the desired cloud::
-
-    export OS_CLOUD=opnfv
-
-8. Start using it::
-
-    openstack service list
diff --git a/puppet-infracloud/creds/clouds.yaml b/puppet-infracloud/creds/clouds.yaml
deleted file mode 100644 (file)
index cc27da2..0000000
+++ /dev/null
@@ -1,13 +0,0 @@
----
-clouds:
-  opnfv:
-    verify: False
-    auth:
-      auth_url: https://controller00.opnfvlocal:5000
-      project_name: opnfv
-      username: opnfv
-      password: pass
-    identity_api_version: '3'
-    region_name: RegionOne
-    user_domain_name: opnfv
-    project_domain_name: opnfv
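
A condensed sketch of consuming this credentials file from the jumphost,
following the README steps above (the cloud name comes from the opnfv key in
the file):

    # sketch: point python-openstackclient at the 'opnfv' cloud defined above
    mkdir -p ~/.config/openstack
    cp /opt/puppet-infracloud/creds/clouds.yaml ~/.config/openstack/
    export OS_CLOUD=opnfv
    openstack service list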
diff --git a/puppet-infracloud/deploy_on_baremetal.md b/puppet-infracloud/deploy_on_baremetal.md
deleted file mode 100644 (file)
index 8efa5af..0000000
+++ /dev/null
@@ -1,58 +0,0 @@
-How to deploy Infra Cloud on baremetal
-==================================
-
-Install bifrost controller
---------------------------
-The first step in deploying Infra Cloud is to install the bifrost controller. It can be virtualized; it does not need to run on baremetal.
-To achieve that, first create a virtual machine with libvirt with the proper network setup. This VM needs to share one physical interface (the PXE boot one) with the servers used for the controller and compute nodes.
-Please follow the documentation at [https://git.openstack.org/cgit/openstack/bifrost/tree/tools/virsh_dev_env/README.md](https://git.openstack.org/cgit/openstack/bifrost/tree/tools/virsh_dev_env/README.md) to get sample templates and instructions for creating the bifrost VM.
-
-Once the **baremetal** VM is up, you can log in over ssh and start installing bifrost there. To proceed, follow these steps:
-
- 1. Change to root user, install git
- 2. Clone releng-xci project (cd /opt, git clone https://gerrit.opnfv.org/gerrit/releng-xci)
- 3. cd /opt/puppet-infracloud
- 4. Copy hiera to the right folder (cp hiera/common_baremetal.yaml /var/lib/hiera/common.yaml)
- 5. Ensure hostname is properly set ( hostnamectl set-hostname baremetal.opnfvlocal , hostname -f )
- 6. Install puppet and modules ( ./install_puppet.sh , ./install_modules.sh )
- 7. Apply puppet to install bifrost (puppet apply manifests/site.pp --modulepath=/etc/puppet/modules:/opt/puppet-infracloud/modules)
-
- With these steps you will have a bifrost controller up and running.
-
-Deploy baremetal servers
---------------------------
-Once the bifrost controller is ready, use it to start the deployment of the baremetal servers.
-On the same bifrost VM, follow these steps:
-
- 1. Source bifrost env vars: source /opt/stack/bifrost/env-vars
- 2. Export the baremetal servers inventory: export BIFROST_INVENTORY_SOURCE=/opt/stack/baremetal.json
- 3. Change the active directory: cd /opt/stack/bifrost/playbooks
- 4. Enroll the servers: ansible-playbook -vvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml -e @/etc/bifrost/bifrost_global_vars
- 5. Deploy the servers: ansible-playbook -vvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml -e @/etc/bifrost/bifrost_global_vars
- 6. Wait until they are in the **active** state; check it with: ironic node-list
-
-If a server needs to be redeployed, you can reset it and redeploy it with:
-
- 1. ironic node-set-provision-state <name_of_server> deleted
- 2. Wait and check with ironic node-list until the server is in the **available** state
- 3. Redeploy again: ansible-playbook -vvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml -e @/etc/bifrost/bifrost_global_vars
-
-Deploy InfraCloud on the servers
---------------------------------
-Once all the servers are in the **active** state, they can be accessed over ssh and the InfraCloud manifests can be applied on them to deploy a controller and a compute node.
-On each of them, follow these steps:
-
- 1. ssh from the bifrost controller to their external ips: ssh root@172.30.13.90
- 2. cd /opt, clone releng-xci project (git clone https://gerrit.opnfv.org/gerrit/releng-xci)
- 3. Copy hiera to the right folder ( cp hiera/common_baremetal.yaml /var/lib/hiera/common.yaml)
- 4. Install modules: ./install_modules.sh
- 5. Apply puppet: puppet apply manifests/site.pp --modulepath=/etc/puppet/modules:/opt/puppet-infracloud/modules
-
-Once this has been done on the controller and compute nodes, you will have a working cloud. To start working with it, follow these steps:
-
- 1. Ensure that controller00.opnfvlocal resolves properly to the external IP (this is already done in the bifrost controller)
- 2. Copy releng-xci/puppet-infracloud/creds/clouds.yaml to $HOME/.config/openstack/clouds.yaml
- 3. Install python-openstackclient
- 4. Specify the cloud you want to use: export OS_CLOUD=opnfv
- 5. Now you can start operating in your cloud with openstack-client: openstack flavor list
-
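
The redeploy procedure above, collected into one hedged sketch (the node name
is a placeholder taken from the hiera inventory; the polling loop and its
interval are assumptions):

    # sketch: reset and redeploy a single node with bifrost/ironic
    source /opt/stack/bifrost/env-vars
    export BIFROST_INVENTORY_SOURCE=/opt/stack/baremetal.json
    cd /opt/stack/bifrost/playbooks
    ironic node-set-provision-state compute00.opnfvlocal deleted
    until ironic node-list | grep compute00.opnfvlocal | grep -q available; do
        sleep 30    # wait for the node to become available again
    done
    ansible-playbook -vvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml \
        -e @/etc/bifrost/bifrost_global_vars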
diff --git a/puppet-infracloud/hiera/common.yaml b/puppet-infracloud/hiera/common.yaml
deleted file mode 100644 (file)
index 634d96c..0000000
+++ /dev/null
@@ -1,85 +0,0 @@
----
-keystone_rabbit_password: pass
-neutron_rabbit_password: pass
-nova_rabbit_password: pass
-root_mysql_password: pass
-keystone_mysql_password: pass
-glance_mysql_password: pass
-neutron_mysql_password: pass
-nova_mysql_password: pass
-keystone_admin_password: pass
-glance_admin_password: pass
-neutron_admin_password: pass
-nova_admin_password: pass
-keystone_admin_token: token
-ssl_key_file_contents: |
-  -----BEGIN PRIVATE KEY-----
-  MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC0YX6wsA/Jhe3q
-  ByoiLsyagO5rOCIyzDsMTV0YMWVIa/QybvS1vI+pK9FIoYPbqWFGHXmQF0DJYulb
-  GnB6A0GlT3YXuaKPucaaANr5hTjuEBF6LuQeq+OIO5u7+l56HGWbbVeB7+vnIxK9
-  43G545aBZSGlUnVfFg+v+IQtmRr36iEa5UDd4sahDXcp2Dm3zGgkFhFKie6AJ4UU
-  TzrH2SL6Nhl7i+AenuoUEDdgDWfGnCXozLngfmhKDi6lHDmh5zJhFS7cKz14wLgF
-  37fsWxxxEX8a6gtGYEEHqXV3x3AXO+U98pr15/xQM9O2O3mrqc/zkmcCRUwCjEeD
-  jEHey3UJAgMBAAECggEAGqapBEwPGRRbsY87b2+AtXdFQrw5eU3pj4jCr3dk4o1o
-  uCbiqxNgGnup4VRT2hmtkKF8O4jj/p1JozdF1RE0GsuhxCGeXiPxrwFfWSyQ28Ou
-  AWJ6O/njlVZRTTXRzbLyZEOEgWNEdJMfCsVXIUL6EsYxcW68fr8QtExAo0gSzvwe
-  IVyhopBy4A1jr5jWqjjlgJhoTHQCkp1e9pHiaW5WWHtk2DFdy6huw5PoDRppG42P
-  soMzqHy9AIWXrYaTGNjyybdJvbaiF0X5Bkr6k8ZxMlRuEb3Vpyrj7SsBrUifRJM3
-  +yheSq3drdQHlw5VrukoIgXGYB4zAQq3LndLoL5YTQKBgQDlzz/hB1IuGOKBXRHy
-  p0j+Lyoxt5EiOW2mdEkbTUYyYnD9EDbJ0wdQ5ijtWLw0J3AwhASkH8ZyljOVHKlY
-  Sq2Oo/uroIH4M8cVIBOJQ2/ak98ItLZ1OMMnDxlZva52jBfYwOEkg6OXeLOLmay6
-  ADfxQ56RFqreVHi9J0/jvpn9UwKBgQDI8CZrM4udJTP7gslxeDcRZw6W34CBBFds
-  49d10Tfd05sysOludzWAfGFj27wqIacFcIyYQmnSga9lBhowv+RwdSjcb2QCCjOb
-  b2GdH+qSFU8BTOcd5FscCBV3U8Y1f/iYp0EQ1/GiG2AYcQC67kjWOO4/JZEXsmtq
-  LisFlWTcswKBgQCC/bs/nViuhei2LELKuafVmzTF2giUJX/m3Wm+cjGNDqew18kj
-  CXKmHks93tKIN+KvBNFQa/xF3G/Skt/EP+zl3XravUbYH0tfM0VvfE0JnjgHUlqe
-  PpiebvDYQlJrqDb/ihHLKm3ZLSfKbvIRo4Y/s3dy5CTJTgT0bLAQ9Nf5mQKBgGqb
-  Dqb9d+rtnACqSNnMn9q5xIHDHlhUx1VcJCm70Fn+NG7WcWJMGLSMSNdD8zafGA/I
-  wK7fPWmTqEx+ylJm3HnVjtI0vuheJTcoBq/oCPlsGLhl5pBzYOskVs8yQQyNUoUa
-  52haSTZqM7eD7JFAbqBJIA2cjrf1zwtMZ0LVGegFAoGBAIFSkI+y4tDEEaSsxrMM
-  OBYEZDkffVar6/mDJukvyn0Q584K3I4eXIDoEEfMGgSN2Tza6QamuNFxOPCH+AAv
-  UKvckK4yuYkc7mQIgjCE8N8UF4kgsXjPek61TZT1QVI1aYFb78ZAZ0miudqWkx4t
-  YSNDj7llArylrPGHBLQ38X4/
-  -----END PRIVATE KEY-----
-ssl_cert_file_contents: |
-  -----BEGIN CERTIFICATE-----
-  MIIDcTCCAlmgAwIBAgIJAJsHSxF0u/oaMA0GCSqGSIb3DQEBCwUAME8xCzAJBgNV
-  BAYTAlVTMQ4wDAYDVQQHDAVXb3JsZDEOMAwGA1UECgwFT1BORlYxIDAeBgNVBAMM
-  F2NvbnRyb2xsZXIwMC5vcG5mdmxvY2FsMB4XDTE2MDgxNzE2MzQwOFoXDTE3MDgx
-  NzE2MzQwOFowTzELMAkGA1UEBhMCVVMxDjAMBgNVBAcMBVdvcmxkMQ4wDAYDVQQK
-  DAVPUE5GVjEgMB4GA1UEAwwXY29udHJvbGxlcjAwLm9wbmZ2bG9jYWwwggEiMA0G
-  CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0YX6wsA/Jhe3qByoiLsyagO5rOCIy
-  zDsMTV0YMWVIa/QybvS1vI+pK9FIoYPbqWFGHXmQF0DJYulbGnB6A0GlT3YXuaKP
-  ucaaANr5hTjuEBF6LuQeq+OIO5u7+l56HGWbbVeB7+vnIxK943G545aBZSGlUnVf
-  Fg+v+IQtmRr36iEa5UDd4sahDXcp2Dm3zGgkFhFKie6AJ4UUTzrH2SL6Nhl7i+Ae
-  nuoUEDdgDWfGnCXozLngfmhKDi6lHDmh5zJhFS7cKz14wLgF37fsWxxxEX8a6gtG
-  YEEHqXV3x3AXO+U98pr15/xQM9O2O3mrqc/zkmcCRUwCjEeDjEHey3UJAgMBAAGj
-  UDBOMB0GA1UdDgQWBBQyFVbU5s2ihD0hX3W7GyHiHZGG1TAfBgNVHSMEGDAWgBQy
-  FVbU5s2ihD0hX3W7GyHiHZGG1TAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUA
-  A4IBAQB+xf7I9RVWzRNjMbWBDE6pBvOWnSksv7Jgr4cREvyOxBDaIoO3uQRDDu6r
-  RCgGs1CuwEaFX1SS/OVrKRFiy9kCU/LBZEFwaHRaL2Kj57Z2yNInPIiKB4h9jen2
-  75fYrpq42XUDSI0NpsqAJpmcQqXOOo8V08FlH0/6h8mWdsfQfbyaf+g73+aRZds8
-  Q4ttmBrqY4Pi5CJW46w7LRCA5o92Di3GI9dAh9MVZ3023cTTjDkW04QbluphuTFj
-  O07Npz162/fHTXut+piV78t+1HlfYWY5TOSQMIVwenftA/Bn8+TQAgnLR+nGo/wu
-  oEaxLtj3Jr07+yIjL88ewT+c3fpq
-  -----END CERTIFICATE-----
-infracloud_mysql_password: pass
-opnfv_password: pass
-
-rabbitmq::package_gpg_key: 'https://www.rabbitmq.com/rabbitmq-release-signing-key.asc'
-rabbitmq::repo::apt::key: '0A9AF2115F4687BD29803A206B73A36E6026DFCA'
-
-hosts:
-  jumphost.opnfvlocal:
-    ip: 192.168.122.2
-  controller00.opnfvlocal:
-    ip: 192.168.122.3
-  compute00.opnfvlocal:
-    ip: 192.168.122.4
-
-bridge_name: br_opnfv
-neutron_subnet_cidr: '192.168.122.0/24'
-neutron_subnet_gateway: '192.168.122.1'
-neutron_subnet_allocation_pools:
-  - 'start=192.168.122.50,end=192.168.122.254'
-virt_type: 'qemu'
diff --git a/puppet-infracloud/hiera/common_baremetal.yaml b/puppet-infracloud/hiera/common_baremetal.yaml
deleted file mode 100644 (file)
index 015612c..0000000
+++ /dev/null
@@ -1,174 +0,0 @@
----
-keystone_rabbit_password: pass
-neutron_rabbit_password: pass
-nova_rabbit_password: pass
-root_mysql_password: pass
-keystone_mysql_password: pass
-glance_mysql_password: pass
-neutron_mysql_password: pass
-nova_mysql_password: pass
-keystone_admin_password: pass
-glance_admin_password: pass
-neutron_admin_password: pass
-nova_admin_password: pass
-keystone_admin_token: token
-ssl_key_file_contents: |
-  -----BEGIN PRIVATE KEY-----
-  MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC0YX6wsA/Jhe3q
-  ByoiLsyagO5rOCIyzDsMTV0YMWVIa/QybvS1vI+pK9FIoYPbqWFGHXmQF0DJYulb
-  GnB6A0GlT3YXuaKPucaaANr5hTjuEBF6LuQeq+OIO5u7+l56HGWbbVeB7+vnIxK9
-  43G545aBZSGlUnVfFg+v+IQtmRr36iEa5UDd4sahDXcp2Dm3zGgkFhFKie6AJ4UU
-  TzrH2SL6Nhl7i+AenuoUEDdgDWfGnCXozLngfmhKDi6lHDmh5zJhFS7cKz14wLgF
-  37fsWxxxEX8a6gtGYEEHqXV3x3AXO+U98pr15/xQM9O2O3mrqc/zkmcCRUwCjEeD
-  jEHey3UJAgMBAAECggEAGqapBEwPGRRbsY87b2+AtXdFQrw5eU3pj4jCr3dk4o1o
-  uCbiqxNgGnup4VRT2hmtkKF8O4jj/p1JozdF1RE0GsuhxCGeXiPxrwFfWSyQ28Ou
-  AWJ6O/njlVZRTTXRzbLyZEOEgWNEdJMfCsVXIUL6EsYxcW68fr8QtExAo0gSzvwe
-  IVyhopBy4A1jr5jWqjjlgJhoTHQCkp1e9pHiaW5WWHtk2DFdy6huw5PoDRppG42P
-  soMzqHy9AIWXrYaTGNjyybdJvbaiF0X5Bkr6k8ZxMlRuEb3Vpyrj7SsBrUifRJM3
-  +yheSq3drdQHlw5VrukoIgXGYB4zAQq3LndLoL5YTQKBgQDlzz/hB1IuGOKBXRHy
-  p0j+Lyoxt5EiOW2mdEkbTUYyYnD9EDbJ0wdQ5ijtWLw0J3AwhASkH8ZyljOVHKlY
-  Sq2Oo/uroIH4M8cVIBOJQ2/ak98ItLZ1OMMnDxlZva52jBfYwOEkg6OXeLOLmay6
-  ADfxQ56RFqreVHi9J0/jvpn9UwKBgQDI8CZrM4udJTP7gslxeDcRZw6W34CBBFds
-  49d10Tfd05sysOludzWAfGFj27wqIacFcIyYQmnSga9lBhowv+RwdSjcb2QCCjOb
-  b2GdH+qSFU8BTOcd5FscCBV3U8Y1f/iYp0EQ1/GiG2AYcQC67kjWOO4/JZEXsmtq
-  LisFlWTcswKBgQCC/bs/nViuhei2LELKuafVmzTF2giUJX/m3Wm+cjGNDqew18kj
-  CXKmHks93tKIN+KvBNFQa/xF3G/Skt/EP+zl3XravUbYH0tfM0VvfE0JnjgHUlqe
-  PpiebvDYQlJrqDb/ihHLKm3ZLSfKbvIRo4Y/s3dy5CTJTgT0bLAQ9Nf5mQKBgGqb
-  Dqb9d+rtnACqSNnMn9q5xIHDHlhUx1VcJCm70Fn+NG7WcWJMGLSMSNdD8zafGA/I
-  wK7fPWmTqEx+ylJm3HnVjtI0vuheJTcoBq/oCPlsGLhl5pBzYOskVs8yQQyNUoUa
-  52haSTZqM7eD7JFAbqBJIA2cjrf1zwtMZ0LVGegFAoGBAIFSkI+y4tDEEaSsxrMM
-  OBYEZDkffVar6/mDJukvyn0Q584K3I4eXIDoEEfMGgSN2Tza6QamuNFxOPCH+AAv
-  UKvckK4yuYkc7mQIgjCE8N8UF4kgsXjPek61TZT1QVI1aYFb78ZAZ0miudqWkx4t
-  YSNDj7llArylrPGHBLQ38X4/
-  -----END PRIVATE KEY-----
-ssl_cert_file_contents: |
-  -----BEGIN CERTIFICATE-----
-  MIIDcTCCAlmgAwIBAgIJAJsHSxF0u/oaMA0GCSqGSIb3DQEBCwUAME8xCzAJBgNV
-  BAYTAlVTMQ4wDAYDVQQHDAVXb3JsZDEOMAwGA1UECgwFT1BORlYxIDAeBgNVBAMM
-  F2NvbnRyb2xsZXIwMC5vcG5mdmxvY2FsMB4XDTE2MDgxNzE2MzQwOFoXDTE3MDgx
-  NzE2MzQwOFowTzELMAkGA1UEBhMCVVMxDjAMBgNVBAcMBVdvcmxkMQ4wDAYDVQQK
-  DAVPUE5GVjEgMB4GA1UEAwwXY29udHJvbGxlcjAwLm9wbmZ2bG9jYWwwggEiMA0G
-  CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0YX6wsA/Jhe3qByoiLsyagO5rOCIy
-  zDsMTV0YMWVIa/QybvS1vI+pK9FIoYPbqWFGHXmQF0DJYulbGnB6A0GlT3YXuaKP
-  ucaaANr5hTjuEBF6LuQeq+OIO5u7+l56HGWbbVeB7+vnIxK943G545aBZSGlUnVf
-  Fg+v+IQtmRr36iEa5UDd4sahDXcp2Dm3zGgkFhFKie6AJ4UUTzrH2SL6Nhl7i+Ae
-  nuoUEDdgDWfGnCXozLngfmhKDi6lHDmh5zJhFS7cKz14wLgF37fsWxxxEX8a6gtG
-  YEEHqXV3x3AXO+U98pr15/xQM9O2O3mrqc/zkmcCRUwCjEeDjEHey3UJAgMBAAGj
-  UDBOMB0GA1UdDgQWBBQyFVbU5s2ihD0hX3W7GyHiHZGG1TAfBgNVHSMEGDAWgBQy
-  FVbU5s2ihD0hX3W7GyHiHZGG1TAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUA
-  A4IBAQB+xf7I9RVWzRNjMbWBDE6pBvOWnSksv7Jgr4cREvyOxBDaIoO3uQRDDu6r
-  RCgGs1CuwEaFX1SS/OVrKRFiy9kCU/LBZEFwaHRaL2Kj57Z2yNInPIiKB4h9jen2
-  75fYrpq42XUDSI0NpsqAJpmcQqXOOo8V08FlH0/6h8mWdsfQfbyaf+g73+aRZds8
-  Q4ttmBrqY4Pi5CJW46w7LRCA5o92Di3GI9dAh9MVZ3023cTTjDkW04QbluphuTFj
-  O07Npz162/fHTXut+piV78t+1HlfYWY5TOSQMIVwenftA/Bn8+TQAgnLR+nGo/wu
-  oEaxLtj3Jr07+yIjL88ewT+c3fpq
-  -----END CERTIFICATE-----
-infracloud_mysql_password: pass
-opnfv_password: pass
-
-rabbitmq::package_gpg_key: 'https://www.rabbitmq.com/rabbitmq-release-signing-key.asc'
-rabbitmq::repo::apt::key: '0A9AF2115F4687BD29803A206B73A36E6026DFCA'
-
-hosts:
-  jumphost.opnfvlocal:
-    ip: 172.30.13.89
-  controller00.opnfvlocal:
-    ip: 172.30.13.90
-  compute00.opnfvlocal:
-    ip: 172.30.13.91
-
-# settings for bifrost
-bridge_name: br_opnfv
-ironic_db_password: pass
-bifrost_mysql_password: pass
-bifrost_ssh_private_key: |
-  -----BEGIN RSA PRIVATE KEY-----
-  MIIEowIBAAKCAQEAvwr2LbfJQuKZDOQse+DQHX84c9LCHvQfy0pu15JkiLM5dUtx
-  hLr/5fxSzblubS4WkNZVsGTtUp51f8yoQyltqquGlVfUf0GO+PCLaRp0arhli0Rl
-  sAGatI12amnrVap82jINiKQRO+UnF97z2hiB35Zxko4jSaPOOiL48DEKowZHL2Ja
-  jjUt6dXcaNotXNaKZpcxz92gdZhFOPU8BrJ/mI9k9u6QI/4qLG/WzW4frHLigA1t
-  OrZ3Nnu3tloWNsS1lh71KRfEv46VD8tCAZfXqJtjdH4Z4AUO++CLF/K4zXhIoFqU
-  Wf8aS64YzoaAfnJ+jUwKs92dVjuFtbEk+t2YLQIDAQABAoIBAQCAr++YaD6oUV9r
-  caANaiiGVhY+3u9oTmXEWMVFbRVPh/riaglzsUuDLm7QqWIbJXqJ4fcitTmv95GK
-  nt+RLizzVEt5+gnoFs8qHU6rY+ibos6z+0TMRKhjiw8DK4oc0JT9nc3EB1CcmgW1
-  bLeyZ+PEKuEiKaDXkAHw43HwyfgyS3Lc90TSaLj3P7egsBuhx1Yy+wgyiPQ/bF0b
-  OBLHHK+nwYLGAq25n/+zA7XAndc2OQd4KzUJcvjyND+IMYnzEbeFH36UcFqbvgGu
-  nR55yIrCxsxcJhhT2slMNtg/xCmo3Jzz1kNBtwbNBik4/5Lkckny0xhQl+h7vz9U
-  +cKjwfK5AoGBAPSy/JHMeQ5/rzbA5LAZhVa/Yc4B5datkwLNg6mh4CzMabJs8AKd
-  de05XB/Nq6Hfp8Aa7zLt2GIb3iqF6w/y+j8YAXS2KQD8/HDs2/9Oxr512kfssk5D
-  dcpTqeIFetzM9pqnctVXBGlbz0QLeL+lT3kXY00+CBm6LjEv8dsPxZr3AoGBAMfd
-  nDnTjUVZ+sRpTBDM3MhKLMETxNWNDaozL+SgpYQwtKlSTfQVdFcM66a8qCFjQFsc
-  /6AjL0bjCA5u859IoQ4ValD0vgkyLHdEN0P1Grf3MK8kjOW1A1s1i2FY6U0z9AM2
-  zsUCA9bB5A9wwxwofoa8VkaDpVSMITbakVoNxJj7AoGAImcft2fmBTHScoJAJLoR
-  0xZpK8t8gug4aQZ34luN5v5+RcWnINb+g3GzEA2cec+2B/5BbwmdiH2eiJ/3YnCo
-  2kIHwl7x+N+Ypk/GxmhO7Owo2j/e+b3mS6HjmpFmqrBuY2PzcyceyalMxKZQPbGC
-  MOYm4e88uFFCuUuiV0gqYhUCgYBmSFhCE6yxeCnoSEbgNicq7SLYMIjEDOqYVpfE
-  9h2ed9qM6IzyQ+SFBBy4+MVGSOfPeRis2DTCnz8pO8i7lEyvy2/cPFPgmue8pZFu
-  2smwqfUlPJxKlgdArzdEO18x3kubNXo9whk614EiEcAX8fVGeK3iak665Pe+fb5z
-  Cqa47wKBgDp3/dgtMneoePKNefy4a9vp5y4XKviC6GOrr0xpEM2ptZ+I7mUJcACN
-  KbaW0dPgtS1cApelmF73IAJRYbKMW7lQzql61IoGw4pGTIMPKerqRs/hTWYPZiSG
-  QHWf3iTV5uQr6cSRoUgkAUHVw2KTGad41RAhDp352iakZuNNBFga
-  -----END RSA PRIVATE KEY-----
-bifrost_ssh_public_key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/CvYtt8lC4pkM5Cx74NAdfzhz0sIe9B/LSm7XkmSIszl1S3GEuv/l/FLNuW5tLhaQ1lWwZO1SnnV/zKhDKW2qq4aVV9R/QY748ItpGnRquGWLRGWwAZq0jXZqaetVqnzaMg2IpBE75ScX3vPaGIHflnGSjiNJo846IvjwMQqjBkcvYlqONS3p1dxo2i1c1opmlzHP3aB1mEU49TwGsn+Yj2T27pAj/iosb9bNbh+scuKADW06tnc2e7e2WhY2xLWWHvUpF8S/jpUPy0IBl9eom2N0fhngBQ774IsX8rjNeEigWpRZ/xpLrhjOhoB+cn6NTAqz3Z1WO4W1sST63Zgt yolanda@trasto
-infracloud_vlan: 415
-infracloud_gateway_ip: 172.30.13.1
-default_network_interface: eno3
-dhcp_static_mask: 255.255.255.128
-dhcp_pool_start: 10.20.0.130
-dhcp_pool_end: 10.20.0.254
-network_interface: eth1
-ipv4_nameserver: 8.8.8.8
-ipv4_subnet_mask: 255.255.255.0
-ipv4_gateway: 172.30.13.1
-ironic_inventory:
-  controller00.opnfvlocal:
-    driver: agent_ipmitool
-    driver_info:
-      power:
-        ipmi_address: 172.30.8.90
-        ipmi_username: admin
-    provisioning_ipv4_address: 10.20.0.130
-    ipv4_address: 172.30.13.90
-    ansible_ssh_host: 172.30.13.90
-    ipv4_gateway: 172.30.13.1
-    ipv4_interface_mac: 00:1e:67:f6:9b:35
-    ipv4_subnet_mask: 255.255.255.192
-    name: controller00.opnfvlocal
-    nics:
-      - mac: a4:bf:01:01:a9:fc
-      - mac: 00:1e:67:f6:9b:35
-    properties:
-      cpu_arch: x86_64
-      cpus: '44'
-      disk_size: '1800'
-      ram: '65536'
-    uuid: 00a22849-2442-e511-906e-0012795d96dd
-  compute00.opnfvlocal:
-    driver: agent_ipmitool
-    driver_info:
-      power:
-        ipmi_address: 172.30.8.91
-        ipmi_username: admin
-    provisioning_ipv4_address: 10.20.0.131
-    ipv4_address: 172.30.13.91
-    ansible_ssh_host: 172.30.13.91
-    ipv4_gateway: 172.30.13.1
-    ipv4_interface_mac: 00:1e:67:f6:9b:37
-    ipv4_subnet_mask: 255.255.255.0
-    name: compute00.opnfvlocal
-    nics:
-      - mac: a4:bf:01:01:a9:d4
-      - mac: 00:1e:67:f6:9b:37
-    properties:
-      cpu_arch: x86_64
-      cpus: '44'
-      disk_size: '1800'
-      ram: '65536'
-    uuid: 0051e926-f242-e511-906e-0012795d96dd
-ipmi_passwords: {'172.30.8.90': 'octopus', '172.30.8.91': 'octopus'}
-neutron_subnet_cidr: '172.30.13.0/24'
-neutron_subnet_gateway: '172.30.13.1'
-neutron_subnet_allocation_pools:
-  - 'start=172.30.13.100,end=172.30.13.254'
-virt_type: 'kvm'
-dib_dev_user_password: devuser
diff --git a/puppet-infracloud/install_modules.sh b/puppet-infracloud/install_modules.sh
deleted file mode 100755 (executable)
index 5d5acd9..0000000
+++ /dev/null
@@ -1,121 +0,0 @@
-#!/bin/bash
-# Copyright 2014 OpenStack Foundation.
-# Copyright 2014 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-MODULE_PATH=`puppet config print modulepath | cut -d ':' -f 1`
-SCRIPT_NAME=$(basename $0)
-SCRIPT_DIR=$(readlink -f "$(dirname $0)")
-JUST_CLONED=0
-
-function remove_module {
-    local SHORT_MODULE_NAME=$1
-    if [ -n "$SHORT_MODULE_NAME" ]; then
-        rm -Rf "$MODULE_PATH/$SHORT_MODULE_NAME"
-    else
-        echo "ERROR: remove_module requires a SHORT_MODULE_NAME."
-    fi
-}
-
-function git_clone {
-    local MOD=$1
-    local DEST=$2
-
-    JUST_CLONED=1
-    for attempt in $(seq 0 3); do
-        clone_error=0
-        git clone $MOD $DEST && break || true
-        rm -rf $DEST
-        clone_error=1
-    done
-    return $clone_error
-}
-
-# Array of modules to be installed key:value is module:version.
-declare -A MODULES
-
-# Array of modules to be installed from source and without dependency resolution.
-# key:value is source location, revision to checkout
-declare -A SOURCE_MODULES
-
-# Array of modules to be installed from source and without dependency resolution from OpenStack git
-# key:value is source location, revision to checkout
-declare -A INTEGRATION_MODULES
-
-# load modules.env to populate MODULES[*] and SOURCE_MODULES[*]
-# for processing.
-MODULE_ENV_FILE=${MODULE_FILE:-modules.env}
-MODULE_ENV_PATH=${MODULE_ENV_PATH:-${SCRIPT_DIR}}
-if [ -f "${MODULE_ENV_PATH}/${MODULE_ENV_FILE}" ] ; then
-    . "${MODULE_ENV_PATH}/${MODULE_ENV_FILE}"
-fi
-
-if [ -z "${!MODULES[*]}" ] && [ -z "${!SOURCE_MODULES[*]}" ] ; then
-    echo ""
-    echo "WARNING: nothing to do, unable to find MODULES or SOURCE_MODULES"
-    echo "  export options, try setting MODULE_ENV_PATH or MODULE_ENV_FILE"
-    echo "  export to the proper location of modules.env file."
-    echo ""
-    exit 0
-fi
-
-MODULE_LIST=`puppet module list --color=false`
-
-# Install modules from source
-for MOD in ${!SOURCE_MODULES[*]} ; do
-    JUST_CLONED=0
-    # get the name of the module directory
-    if [ `echo $MOD | awk -F. '{print $NF}'` = 'git' ]; then
-        echo "Remote repos of the form repo.git are not supported: ${MOD}"
-        exit 1
-    fi
-
-    MODULE_NAME=`echo $MOD | awk -F- '{print $NF}'`
-
-    # set up git base command to use the correct path
-    GIT_CMD_BASE="git --git-dir=${MODULE_PATH}/${MODULE_NAME}/.git --work-tree ${MODULE_PATH}/${MODULE_NAME}"
-    # treat any occurrence of the module as a match
-    if ! echo $MODULE_LIST | grep "${MODULE_NAME}" >/dev/null 2>&1; then
-        # clone modules that are not installed
-        git_clone $MOD "${MODULE_PATH}/${MODULE_NAME}"
-    else
-        if [ ! -d ${MODULE_PATH}/${MODULE_NAME}/.git ]; then
-            echo "Found directory ${MODULE_PATH}/${MODULE_NAME} that is not a git repo, deleting it and reinstalling from source"
-            remove_module $MODULE_NAME
-            git_clone $MOD "${MODULE_PATH}/${MODULE_NAME}"
-        elif [ `${GIT_CMD_BASE} remote show origin | grep 'Fetch URL' | awk -F'URL: ' '{print $2}'` != $MOD ]; then
-            echo "Found remote in ${MODULE_PATH}/${MODULE_NAME} that does not match desired remote ${MOD}, deleting dir and re-cloning"
-            remove_module $MODULE_NAME
-            git_clone $MOD "${MODULE_PATH}/${MODULE_NAME}"
-        fi
-    fi
-
-    # fetch the latest refs from the repo
-    if [[ $JUST_CLONED -eq 0 ]] ; then
-        # If we just cloned the repo, we do not need to remote update
-        for attempt in $(seq 0 3); do
-            clone_error=0
-            $GIT_CMD_BASE remote update && break || true
-            clone_error=1
-        done
-        if [[ $clone_error -ne 0 ]] ; then
-            exit $clone_error
-        fi
-    fi
-    # make sure the correct revision is installed, I have to use rev-list b/c rev-parse does not work with tags
-    if [ `${GIT_CMD_BASE} rev-list HEAD --max-count=1` != `${GIT_CMD_BASE} rev-list ${SOURCE_MODULES[$MOD]} --max-count=1` ]; then
-        # checkout correct revision
-        $GIT_CMD_BASE checkout ${SOURCE_MODULES[$MOD]}
-    fi
-done
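
Note: the clone-and-checkout loop above is driven entirely by the SOURCE_MODULES entries that install_modules.sh loads from modules.env. A minimal sketch of pointing the script at an alternate module list; the file name modules-test.env is a placeholder, and running with sudo -E (to preserve the exported overrides) is an assumption, not something the script mandates:

  export MODULE_FILE=modules-test.env   # read instead of modules.env
  export MODULE_ENV_PATH=$(pwd)         # directory that contains the file
  sudo -E ./install_modules.sh
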
diff --git a/puppet-infracloud/install_puppet.sh b/puppet-infracloud/install_puppet.sh
deleted file mode 100755 (executable)
index ae25944..0000000
+++ /dev/null
@@ -1,297 +0,0 @@
-#!/bin/bash -x
-
-# Copyright 2013 OpenStack Foundation.
-# Copyright 2013 Hewlett-Packard Development Company, L.P.
-# Copyright 2013 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-
-#
-# Distro identification functions
-#  note: we can't rely on lsb_release for these as we're bare-bones and
-#  it may not be installed yet
-
-
-function is_fedora {
-    [ -f /usr/bin/yum ] && cat /etc/*release | grep -q -e "Fedora"
-}
-
-function is_rhel7 {
-    [ -f /usr/bin/yum ] && \
-        cat /etc/*release | grep -q -e "Red Hat" -e "CentOS" -e "CloudLinux" && \
-        cat /etc/*release | grep -q 'release 7'
-}
-
-function is_ubuntu {
-    [ -f /usr/bin/apt-get ]
-}
-
-function is_opensuse {
-    [ -f /usr/bin/zypper ] && \
-        cat /etc/os-release | grep -q -e "openSUSE"
-}
-
-function is_gentoo {
-    [ -f /usr/bin/emerge ]
-}
-
-# dnf is a drop-in replacement for yum on Fedora>=22
-YUM=yum
-if is_fedora && [[ $(lsb_release -rs) -ge 22 ]]; then
-    YUM=dnf
-fi
-
-
-#
-# Distro specific puppet installs
-#
-
-function _systemd_update {
-    # there is a bug (rhbz#1261747) where systemd can fail to enable
-    # services due to selinux errors after upgrade.  A work-around is
-    # to install the latest version of selinux and systemd here and
-    # restart the daemon for good measure after it is upgraded.
-    $YUM install -y selinux-policy
-    $YUM install -y systemd
-    systemctl daemon-reload
-}
-
-function setup_puppet_fedora {
-    _systemd_update
-
-    $YUM update -y
-
-    # NOTE: we preinstall lsb_release here to ensure facter sets
-    # lsbdistcodename
-    #
-    # Fedora declares some global hardening flags, which distutils
-    # picks up when building python modules.  redhat-rpm-config
-    # provides the required config options.  Really this should be a
-    # dependency of python-devel (fix in the works, see
-    # https://bugzilla.redhat.com/show_bug.cgi?id=1217376) and can be
-    # removed when that is sorted out.
-
-    $YUM install -y redhat-lsb-core git puppet \
-        redhat-rpm-config
-
-    mkdir -p /etc/puppet/modules/
-
-    # Puppet expects the pip command named as pip-python on
-    # Fedora, as per the packaged command name.  However, we're
-    # installing from get-pip.py so it's just 'pip'.  An easy
-    # work-around is to just symlink pip-python to "fool" it.
-    # See upstream issue:
-    #  https://tickets.puppetlabs.com/browse/PUP-1082
-    ln -fs /usr/bin/pip /usr/bin/pip-python
-    # Wipe out templatedir so we don't get warnings about it
-    sed -i '/templatedir/d' /etc/puppet/puppet.conf
-
-    # upstream is currently looking for /run/systemd files to check
-    # for systemd.  This fails in a chroot where /run isn't mounted
-    # (like when using dib).  Comment out this confine as fedora
-    # always has systemd
-    #  see
-    #   https://github.com/puppetlabs/puppet/pull/4481
-    #   https://bugzilla.redhat.com/show_bug.cgi?id=1254616
-    sudo sed -i.bak  '/^[^#].*/ s|\(^.*confine :exists => \"/run/systemd/system\".*$\)|#\ \1|' \
-        /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb
-
-    # upstream "requests" pip package vendors urllib3 and chardet
-    # packages.  The fedora packages un-vendor this, and symlink those
-    # sub-packages back to packaged versions.  We get into a real mess
-    # of if some of the puppet ends up pulling in "requests" from pip,
-    # and then something like devstack does a "yum install
-    # python-requests" which does a very bad job at overwriting the
-    # pip-installed version (symlinks and existing directories don't
-    # mix).  A solution is to pre-install the python-requests
-    # package; clear it out and re-install from pip.  This way, the
-    # package is installed for dependencies, and we have a pip-managed
-    # requests with correctly vendored sub-packages.
-    sudo ${YUM} install -y python2-requests
-    sudo rm -rf /usr/lib/python2.7/site-packages/requests/*
-    sudo rm -rf /usr/lib/python2.7/site-packages/requests-*.{egg,dist}-info
-    sudo pip install requests
-}
-
-function setup_puppet_rhel7 {
-    local puppet_pkg="https://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm"
-
-    # install a bootstrap epel repo to install latest epel-release
-    # package (which provides correct gpg keys, etc); then remove
-    # the bootstrap repo
-    cat > /etc/yum.repos.d/epel-bootstrap.repo <<EOF
-[epel-bootstrap]
-name=Bootstrap EPEL
-mirrorlist=https://mirrors.fedoraproject.org/mirrorlist?repo=epel-7&arch=\$basearch
-failovermethod=priority
-enabled=0
-gpgcheck=0
-EOF
-    yum --enablerepo=epel-bootstrap -y install epel-release
-    rm -f /etc/yum.repos.d/epel-bootstrap.repo
-
-    _systemd_update
-    yum update -y
-
-    # NOTE: we preinstall lsb_release to ensure facter sets lsbdistcodename
-    yum install -y redhat-lsb-core git puppet
-
-    rpm -ivh $puppet_pkg
-
-    # see comments in setup_puppet_fedora
-    ln -s /usr/bin/pip /usr/bin/pip-python
-    # Wipe out templatedir so we don't get warnings about it
-    sed -i '/templatedir/d' /etc/puppet/puppet.conf
-
-    # install RDO repo as well; this covers a few things like
-    # openvswitch that aren't available for EPEL
-    yum install -y https://rdoproject.org/repos/rdo-release.rpm
-}
-
-function setup_puppet_ubuntu {
-    if ! which lsb_release > /dev/null 2>&1 ; then
-        DEBIAN_FRONTEND=noninteractive apt-get --option 'Dpkg::Options::=--force-confold' \
-            --assume-yes install -y --force-yes lsb-release
-    fi
-
-    lsbdistcodename=`lsb_release -c -s`
-    if [ $lsbdistcodename != 'trusty' ] ; then
-        rubypkg=rubygems
-    else
-        rubypkg=ruby
-    fi
-
-
-    PUPPET_VERSION=3.*
-    PUPPETDB_VERSION=2.*
-    FACTER_VERSION=2.*
-
-    cat > /etc/apt/preferences.d/00-puppet.pref <<EOF
-Package: puppet puppet-common puppetmaster puppetmaster-common puppetmaster-passenger
-Pin: version $PUPPET_VERSION
-Pin-Priority: 501
-
-Package: puppetdb puppetdb-terminus
-Pin: version $PUPPETDB_VERSION
-Pin-Priority: 501
-
-Package: facter
-Pin: version $FACTER_VERSION
-Pin-Priority: 501
-EOF
-
-    # NOTE(pabelanger): Puppetlabs does not support ubuntu xenial. Instead use
-    # the version of puppet shipped by xenial.
-    if [ $lsbdistcodename != 'xenial' ]; then
-        puppet_deb=puppetlabs-release-${lsbdistcodename}.deb
-        if type curl >/dev/null 2>&1; then
-            curl -O http://apt.puppetlabs.com/$puppet_deb
-        else
-            wget http://apt.puppetlabs.com/$puppet_deb -O $puppet_deb
-        fi
-        dpkg -i $puppet_deb
-        rm $puppet_deb
-    fi;
-
-    apt-get update
-    DEBIAN_FRONTEND=noninteractive apt-get --option 'Dpkg::Options::=--force-confold' \
-        --assume-yes dist-upgrade
-    DEBIAN_FRONTEND=noninteractive apt-get --option 'Dpkg::Options::=--force-confold' \
-        --assume-yes install -y --force-yes puppet git $rubypkg
-    # Wipe out templatedir so we don't get warnings about it
-    sed -i '/templatedir/d' /etc/puppet/puppet.conf
-}
-
-function setup_puppet_opensuse {
-    local version=`grep -e "VERSION_ID" /etc/os-release | tr -d "\"" | cut -d "=" -f2`
-    zypper ar http://download.opensuse.org/repositories/systemsmanagement:/puppet/openSUSE_${version}/systemsmanagement:puppet.repo
-    zypper -v --gpg-auto-import-keys --no-gpg-checks -n ref
-    zypper --non-interactive in --force-resolution puppet
-    # Wipe out templatedir so we don't get warnings about it
-    sed -i '/templatedir/d' /etc/puppet/puppet.conf
-}
-
-function setup_puppet_gentoo {
-    echo yes | emaint sync -a
-    emerge -q --jobs=4 puppet-agent
-    sed -i '/templatedir/d' /etc/puppetlabs/puppet/puppet.conf
-}
-
-#
-# pip setup
-#
-
-function setup_pip {
-    # Install pip using get-pip
-    local get_pip_url=https://bootstrap.pypa.io/get-pip.py
-    local ret=1
-
-    if [ -f ./get-pip.py ]; then
-        ret=0
-    elif type curl >/dev/null 2>&1; then
-        curl -O $get_pip_url
-        ret=$?
-    elif type wget >/dev/null 2>&1; then
-        wget $get_pip_url
-        ret=$?
-    fi
-
-    if [ $ret -ne 0 ]; then
-        echo "Failed to get get-pip.py"
-        exit 1
-    fi
-
-    if is_opensuse; then
-        zypper --non-interactive in --force-resolution python python-xml
-    fi
-
-    python get-pip.py
-    rm get-pip.py
-
-    # we are about to overwrite setuptools, but some packages we
-    # install later might depend on the python-setuptools package.  To
-    # avoid later conflicts, and because distro packages don't include
-    # enough info for pip to be certain it can fully uninstall the old
-    # package, for safety we clear it out by hand (this seems to have
-    # been a problem with very old to new updates, e.g. centos6 to
-    # current-era, but less so for smaller jumps).  There is a bit of
-    # chicken-and-egg problem with pip in that it requires setuptools
-    # for some operations, such as wheel creation.  But just
-    # installing setuptools shouldn't require setuptools itself, so we
-    # are safe for this small section.
-    if is_rhel7 || is_fedora; then
-        yum install -y python-setuptools
-        rm -rf /usr/lib/python2.7/site-packages/setuptools*
-    fi
-
-    pip install -U setuptools
-}
-
-setup_pip
-
-if is_fedora; then
-    setup_puppet_fedora
-elif is_rhel7; then
-    setup_puppet_rhel7
-elif is_ubuntu; then
-    setup_puppet_ubuntu
-elif is_opensuse; then
-    setup_puppet_opensuse
-elif is_gentoo; then
-    setup_puppet_gentoo
-else
-    echo "*** Can not setup puppet: distribution not recognized"
-    exit 1
-fi
-
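
Note: install_puppet.sh only bootstraps pip and the distro's puppet packages; the puppet modules themselves are installed afterwards by install_modules.sh. A minimal usage sketch, assuming the repository is checked out under /opt/puppet-infracloud (that path is an assumption, not taken from the scripts):

  cd /opt/puppet-infracloud
  sudo bash install_puppet.sh
  sudo bash install_modules.sh
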
diff --git a/puppet-infracloud/manifests/site.pp b/puppet-infracloud/manifests/site.pp
deleted file mode 100644 (file)
index 3483b06..0000000
+++ /dev/null
@@ -1,104 +0,0 @@
-# SPDX-license-identifier: Apache-2.0
-##############################################################################
-# Copyright (c) 2016 RedHat and others.
-# All rights reserved. This program and the accompanying materials
-# are made available under the terms of the Apache License, Version 2.0
-# which accompanies this distribution, and is available at
-# http://www.apache.org/licenses/LICENSE-2.0
-##############################################################################
-node 'controller00.opnfvlocal' {
-  $group = 'infracloud'
-  include ::sudoers
-
-  class { '::opnfv::server':
-    iptables_public_tcp_ports => [80,5000,5671,8774,9292,9696,35357], # logs,keystone,rabbit,nova,glance,neutron,keystone
-    sysadmins                 => hiera('sysadmins', []),
-    enable_unbound            => false,
-    purge_apt_sources         => false,
-  }
-  class { '::opnfv::controller':
-    keystone_rabbit_password         => hiera('keystone_rabbit_password'),
-    neutron_rabbit_password          => hiera('neutron_rabbit_password'),
-    nova_rabbit_password             => hiera('nova_rabbit_password'),
-    root_mysql_password              => hiera('infracloud_mysql_password'),
-    keystone_mysql_password          => hiera('keystone_mysql_password'),
-    glance_mysql_password            => hiera('glance_mysql_password'),
-    neutron_mysql_password           => hiera('neutron_mysql_password'),
-    nova_mysql_password              => hiera('nova_mysql_password'),
-    keystone_admin_password          => hiera('keystone_admin_password'),
-    glance_admin_password            => hiera('glance_admin_password'),
-    neutron_admin_password           => hiera('neutron_admin_password'),
-    nova_admin_password              => hiera('nova_admin_password'),
-    keystone_admin_token             => hiera('keystone_admin_token'),
-    ssl_key_file_contents            => hiera('ssl_key_file_contents'),
-    ssl_cert_file_contents           => hiera('ssl_cert_file_contents'),
-    br_name                          => hiera('bridge_name'),
-    controller_public_address        => $::fqdn,
-    neutron_subnet_cidr              => hiera('neutron_subnet_cidr'),
-    neutron_subnet_gateway           => hiera('neutron_subnet_gateway'),
-    neutron_subnet_allocation_pools  => hiera('neutron_subnet_allocation_pools'),
-    opnfv_password                   => hiera('opnfv_password'),
-    require                          => Class['::opnfv::server'],
-  }
-}
-
-node 'compute00.opnfvlocal' {
-  $group = 'infracloud'
-  include ::sudoers
-
-  class { '::opnfv::server':
-    sysadmins                 => hiera('sysadmins', []),
-    enable_unbound            => false,
-    purge_apt_sources         => false,
-  }
-
-  class { '::opnfv::compute':
-    nova_rabbit_password             => hiera('nova_rabbit_password'),
-    neutron_rabbit_password          => hiera('neutron_rabbit_password'),
-    neutron_admin_password           => hiera('neutron_admin_password'),
-    ssl_cert_file_contents           => hiera('ssl_cert_file_contents'),
-    ssl_key_file_contents            => hiera('ssl_key_file_contents'),
-    br_name                          => hiera('bridge_name'),
-    controller_public_address        => 'controller00.opnfvlocal',
-    virt_type                        => hiera('virt_type'),
-    require                          => Class['::opnfv::server'],
-  }
-}
-
-node 'jumphost.opnfvlocal' {
-  class { '::opnfv::server':
-    sysadmins                 => hiera('sysadmins', []),
-    enable_unbound            => false,
-    purge_apt_sources         => false,
-  }
-}
-
-node 'baremetal.opnfvlocal', 'lfpod5-jumpserver' {
-  class { '::opnfv::server':
-    iptables_public_udp_ports => [67, 69],
-    sysadmins                 => hiera('sysadmins', []),
-    enable_unbound            => false,
-    purge_apt_sources         => false,
-  }
-
-  class { '::infracloud::bifrost':
-    ironic_inventory          => hiera('ironic_inventory', {}),
-    ironic_db_password        => hiera('ironic_db_password'),
-    mysql_password            => hiera('bifrost_mysql_password'),
-    ipmi_passwords            => hiera('ipmi_passwords'),
-    ssh_private_key           => hiera('bifrost_ssh_private_key'),
-    ssh_public_key            => hiera('bifrost_ssh_public_key'),
-    vlan                      => hiera('infracloud_vlan'),
-    gateway_ip                => hiera('infracloud_gateway_ip'),
-    default_network_interface => hiera('default_network_interface'),
-    dhcp_static_mask          => hiera('dhcp_static_mask'),
-    dhcp_pool_start           => hiera('dhcp_pool_start'),
-    dhcp_pool_end             => hiera('dhcp_pool_end'),
-    network_interface         => hiera('network_interface'),
-    ipv4_nameserver           => hiera('ipv4_nameserver'),
-    ipv4_subnet_mask          => hiera('ipv4_subnet_mask'),
-    bridge_name               => hiera('bridge_name'),
-    dib_dev_user_password     => hiera('dib_dev_user_password'),
-    require                   => Class['::opnfv::server'],
-  }
-}
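
Note: site.pp selects a role purely by node name (controller00.opnfvlocal, compute00.opnfvlocal, jumphost.opnfvlocal, baremetal.opnfvlocal), so a masterless run only configures hosts whose certname matches one of those blocks. One possible way to apply it, with the module path given here only as an assumption:

  sudo puppet apply --verbose \
      --modulepath=/etc/puppet/modules \
      /opt/puppet-infracloud/manifests/site.pp
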
diff --git a/puppet-infracloud/modules.env b/puppet-infracloud/modules.env
deleted file mode 100644 (file)
index 9c07ec9..0000000
+++ /dev/null
@@ -1,84 +0,0 @@
-# Copyright 2014 OpenStack Foundation.
-# Copyright 2016 RedHat.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-#
-# load additional modules from modules.env
-# modules.env should exist in the same folder as install_modules.sh
-#
-# - use export MODULE_FILE to specify an alternate config
-#   when calling install_modules.sh.
-#   This allows test environments to be configured with an alternate
-#   module configuration.
-
-# Source modules should use tags, explicit refs or remote branches because
-# we do not update local branches in this script.
-# Keep sorted
-
-OPENSTACK_GIT_ROOT=https://git.openstack.org
-
-# InfraCloud modules
-SOURCE_MODULES["$OPENSTACK_GIT_ROOT/openstack/puppet-cinder"]="origin/stable/mitaka"
-SOURCE_MODULES["$OPENSTACK_GIT_ROOT/openstack/puppet-glance"]="origin/stable/mitaka"
-SOURCE_MODULES["$OPENSTACK_GIT_ROOT/openstack/puppet-ironic"]="origin/stable/mitaka"
-SOURCE_MODULES["$OPENSTACK_GIT_ROOT/openstack/puppet-keystone"]="origin/stable/mitaka"
-SOURCE_MODULES["$OPENSTACK_GIT_ROOT/openstack/puppet-neutron"]="origin/stable/mitaka"
-SOURCE_MODULES["$OPENSTACK_GIT_ROOT/openstack/puppet-nova"]="origin/stable/mitaka"
-SOURCE_MODULES["$OPENSTACK_GIT_ROOT/openstack/puppet-openstack_extras"]="origin/stable/mitaka"
-SOURCE_MODULES["$OPENSTACK_GIT_ROOT/openstack/puppet-openstacklib"]="origin/stable/mitaka"
-
-SOURCE_MODULES["https://git.openstack.org/openstack-infra/puppet-vcsrepo"]="0.0.8"
-SOURCE_MODULES["https://github.com/duritong/puppet-sysctl"]="v0.0.11"
-SOURCE_MODULES["https://github.com/nanliu/puppet-staging"]="1.0.0"
-SOURCE_MODULES["https://github.com/jfryman/puppet-selinux"]="v0.2.5"
-SOURCE_MODULES["https://github.com/maestrodev/puppet-wget"]="v1.6.0"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-apache"]="1.8.1"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-apt"]="2.1.0"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-concat"]="1.2.5"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-firewall"]="1.1.3"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-haproxy"]="1.5.0"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-inifile"]="1.1.3"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-mysql"]="3.6.2"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-ntp"]="3.2.1"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-rabbitmq"]="5.2.3"
-SOURCE_MODULES["https://github.com/puppetlabs/puppetlabs-stdlib"]="4.10.0"
-SOURCE_MODULES["https://github.com/rafaelfelix/puppet-pear"]="1.0.3"
-SOURCE_MODULES["https://github.com/saz/puppet-memcached"]="v2.6.0"
-SOURCE_MODULES["https://github.com/saz/puppet-timezone"]="v3.3.0"
-SOURCE_MODULES["https://github.com/stankevich/puppet-python"]="1.9.4"
-SOURCE_MODULES["https://github.com/vamsee/puppet-solr"]="0.0.8"
-SOURCE_MODULES["https://github.com/voxpupuli/puppet-alternatives"]="0.3.0"
-SOURCE_MODULES["https://github.com/voxpupuli/puppet-archive"]="v0.5.1"
-SOURCE_MODULES["https://github.com/voxpupuli/puppet-git_resource"]="0.3.0"
-SOURCE_MODULES["https://github.com/voxpupuli/puppet-nodejs"]="1.2.0"
-SOURCE_MODULES["https://github.com/voxpupuli/puppet-puppetboard"]="2.4.0"
-
-
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-ansible"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-httpd"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-infracloud"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-iptables"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-logrotate"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-pip"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-snmpd"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-ssh"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-ssl_cert_check"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-sudoers"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-ulimit"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-unattended_upgrades"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-unbound"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/openstack-infra/puppet-user"]="origin/master"
-
-for MOD in ${!INTEGRATION_MODULES[*]}; do
- SOURCE_MODULES[$MOD]=${INTEGRATION_MODULES[$MOD]}
-done
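
Note: each entry above is a plain bash associative-array assignment keyed by repository URL, with a tag, explicit ref or remote branch as the value (local branches are not updated, as stated at the top of the file). A hypothetical extra pin would look like this; the URL and tag are placeholders:

  SOURCE_MODULES["https://github.com/example/puppet-example"]="v1.0.0"
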
diff --git a/puppet-infracloud/modules/opnfv/manifests/compute.pp b/puppet-infracloud/modules/opnfv/manifests/compute.pp
deleted file mode 100644 (file)
index ca548a5..0000000
+++ /dev/null
@@ -1,23 +0,0 @@
-class opnfv::compute (
-  $nova_rabbit_password,
-  $neutron_rabbit_password,
-  $neutron_admin_password,
-  $ssl_cert_file_contents,
-  $ssl_key_file_contents,
-  $br_name,
-  $controller_public_address,
-  $virt_type = 'kvm',
-) {
-  class { '::infracloud::compute':
-    nova_rabbit_password          => $nova_rabbit_password,
-    neutron_rabbit_password       => $neutron_rabbit_password,
-    neutron_admin_password        => $neutron_admin_password,
-    ssl_cert_file_contents        => $ssl_cert_file_contents,
-    ssl_key_file_contents         => $ssl_key_file_contents,
-    br_name                       => $br_name,
-    controller_public_address     => $controller_public_address,
-    virt_type                     => $virt_type,
-  }
-
-}
-
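
Note: opnfv::compute defaults virt_type to 'kvm', which only works when the compute host exposes hardware virtualization. A quick pre-check sketch; a count of 0 means the default should be overridden (for example to qemu) through hiera:

  egrep -c '(vmx|svm)' /proc/cpuinfo
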
diff --git a/puppet-infracloud/modules/opnfv/manifests/controller.pp b/puppet-infracloud/modules/opnfv/manifests/controller.pp
deleted file mode 100644 (file)
index 7522692..0000000
+++ /dev/null
@@ -1,85 +0,0 @@
-# SPDX-license-identifier: Apache-2.0
-##############################################################################
-# Copyright (c) 2016 RedHat and others.
-# All rights reserved. This program and the accompanying materials
-# are made available under the terms of the Apache License, Version 2.0
-# which accompanies this distribution, and is available at
-# http://www.apache.org/licenses/LICENSE-2.0
-##############################################################################
-class opnfv::controller (
-  $keystone_rabbit_password,
-  $neutron_rabbit_password,
-  $nova_rabbit_password,
-  $root_mysql_password,
-  $keystone_mysql_password,
-  $glance_mysql_password,
-  $neutron_mysql_password,
-  $nova_mysql_password,
-  $glance_admin_password,
-  $keystone_admin_password,
-  $neutron_admin_password,
-  $nova_admin_password,
-  $keystone_admin_token,
-  $ssl_key_file_contents,
-  $ssl_cert_file_contents,
-  $br_name,
-  $controller_public_address = $::fqdn,
-  $neutron_subnet_cidr,
-  $neutron_subnet_gateway,
-  $neutron_subnet_allocation_pools,
-  $opnfv_password,
-  $opnfv_email = 'opnfvuser@gmail.com',
-) {
-  class { '::infracloud::controller':
-    keystone_rabbit_password         => $keystone_rabbit_password,
-    neutron_rabbit_password          => $neutron_rabbit_password,
-    nova_rabbit_password             => $nova_rabbit_password,
-    root_mysql_password              => $root_mysql_password,
-    keystone_mysql_password          => $keystone_mysql_password,
-    glance_mysql_password            => $glance_mysql_password,
-    neutron_mysql_password           => $neutron_mysql_password,
-    nova_mysql_password              => $nova_mysql_password,
-    keystone_admin_password          => $keystone_admin_password,
-    glance_admin_password            => $glance_admin_password,
-    neutron_admin_password           => $neutron_admin_password,
-    nova_admin_password              => $nova_admin_password,
-    keystone_admin_token             => $keystone_admin_token,
-    ssl_key_file_contents            => $ssl_key_file_contents,
-    ssl_cert_file_contents           => $ssl_cert_file_contents,
-    br_name                          => $br_name,
-    controller_public_address        => $controller_public_address,
-    neutron_subnet_cidr              => $neutron_subnet_cidr,
-    neutron_subnet_gateway           => $neutron_subnet_gateway,
-    neutron_subnet_allocation_pools  => $neutron_subnet_allocation_pools,
-  }
-
-  # create keystone creds
-  keystone_domain { 'opnfv':
-    ensure  => present,
-    enabled => true,
-  }
-
-  keystone_tenant { 'opnfv':
-    ensure      => present,
-    enabled     => true,
-    description => 'OPNFV cloud',
-    domain      => 'opnfv',
-    require     => Keystone_domain['opnfv'],
-  }
-
-  keystone_user { 'opnfv':
-    ensure   => present,
-    enabled  => true,
-    domain   => 'opnfv',
-    email    => $opnfv_email,
-    password => $opnfv_password,
-    require  => Keystone_tenant['opnfv'],
-  }
-
-  keystone_role { 'user': ensure => present }
-
-  keystone_user_role { 'opnfv::opnfv@opnfv::opnfv':
-    roles => [ 'user', 'admin', ],
-  }
-}
-
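
Note: besides wrapping ::infracloud::controller, the class above creates an 'opnfv' keystone domain, project, user and role assignments. A post-deployment verification sketch, assuming admin credentials are already sourced in the shell:

  openstack domain show opnfv
  openstack project show --domain opnfv opnfv
  openstack user show --domain opnfv opnfv
  openstack role assignment list --names --user opnfv --user-domain opnfv
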
diff --git a/puppet-infracloud/modules/opnfv/manifests/server.pp b/puppet-infracloud/modules/opnfv/manifests/server.pp
deleted file mode 100644 (file)
index d167973..0000000
+++ /dev/null
@@ -1,244 +0,0 @@
-# SPDX-license-identifier: Apache-2.0
-##############################################################################
-# Copyright (c) 2016 RedHat and others.
-# All rights reserved. This program and the accompanying materials
-# are made available under the terms of the Apache License, Version 2.0
-# which accompanies this distribution, and is available at
-# http://www.apache.org/licenses/LICENSE-2.0
-##############################################################################
-class opnfv::server (
-  $iptables_public_tcp_ports = [],
-  $iptables_public_udp_ports = [],
-  $iptables_rules4           = [],
-  $iptables_rules6           = [],
-  $sysadmins                 = [],
-  $enable_unbound            = true,
-  $purge_apt_sources         = true,
-) {
-  ###########################################################
-  # Classes for all hosts
-
-  include snmpd
-
-  class { 'iptables':
-    public_tcp_ports => $iptables_public_tcp_ports,
-    public_udp_ports => $iptables_public_udp_ports,
-    rules4           => $iptables_rules4,
-    rules6           => $iptables_rules6,
-  }
-
-  class { 'timezone':
-    timezone => 'Etc/UTC',
-  }
-
-  if ($enable_unbound) {
-    class { 'unbound':
-      install_resolv_conf => $install_resolv_conf
-    }
-  }
-
-  if ($::in_chroot) {
-    notify { 'rsyslog in chroot':
-      message => 'rsyslog not refreshed, running in chroot',
-    }
-    $rsyslog_notify = []
-  } else {
-    service { 'rsyslog':
-      ensure     => running,
-      enable     => true,
-      hasrestart => true,
-      require    => Package['rsyslog'],
-    }
-    $rsyslog_notify = [ Service['rsyslog'] ]
-  }
-
-  ###########################################################
-  # System tweaks
-
-  # Increase syslog message size in order to capture
-  # python tracebacks with syslog.
-  file { '/etc/rsyslog.d/99-maxsize.conf':
-    ensure  => present,
-    # Note MaxMessageSize is not a puppet variable.
-    content => '$MaxMessageSize 6k',
-    owner   => 'root',
-    group   => 'root',
-    mode    => '0644',
-    notify  => $rsyslog_notify,
-    require => Package['rsyslog'],
-  }
-
-  # We don't like byobu
-  file { '/etc/profile.d/Z98-byobu.sh':
-    ensure => absent,
-  }
-
-  if $::osfamily == 'Debian' {
-
-    # Ubuntu installs their whoopsie package by default, but it eats through
-    # memory and we don't need it on servers
-    package { 'whoopsie':
-      ensure => absent,
-    }
-
-    package { 'popularity-contest':
-      ensure => absent,
-    }
-  }
-
-  ###########################################################
-  # Package resources for all operating systems
-
-  package { 'at':
-    ensure => present,
-  }
-
-  package { 'lvm2':
-    ensure => present,
-  }
-
-  package { 'strace':
-    ensure => present,
-  }
-
-  package { 'tcpdump':
-    ensure => present,
-  }
-
-  package { 'rsyslog':
-    ensure => present,
-  }
-
-  package { 'git':
-    ensure => present,
-  }
-
-  package { 'rsync':
-    ensure => present,
-  }
-
-  case $::osfamily {
-    'RedHat': {
-      $packages = ['parted', 'puppet', 'wget', 'iputils']
-      $user_packages = ['emacs-nox', 'vim-enhanced']
-      $update_pkg_list_cmd = ''
-    }
-    'Debian': {
-      $packages = ['parted', 'puppet', 'wget', 'iputils-ping']
-      case $::operatingsystemrelease {
-        /^(12|14)\.(04|10)$/: {
-          $user_packages = ['emacs23-nox', 'vim-nox', 'iftop',
-                            'sysstat', 'iotop']
-        }
-        default: {
-          $user_packages = ['emacs-nox', 'vim-nox']
-        }
-      }
-      $update_pkg_list_cmd = 'apt-get update >/dev/null 2>&1;'
-    }
-    default: {
-      fail("Unsupported osfamily: ${::osfamily} The 'openstack_project' module only supports osfamily Debian or RedHat (slaves only).")
-    }
-  }
-  package { $packages:
-    ensure => present
-  }
-
-  ###########################################################
-  # Package resources for specific operating systems
-
-  case $::osfamily {
-    'Debian': {
-      # Purge and augment existing /etc/apt/sources.list if requested, and make
-      # sure apt-get update is run before any packages are installed
-      class { '::apt':
-        purge => { 'sources.list' => $purge_apt_sources }
-      }
-
-      # Make sure dig is installed
-      package { 'dnsutils':
-        ensure => present,
-      }
-    }
-    'RedHat': {
-      # Make sure dig is installed
-      package { 'bind-utils':
-        ensure => present,
-      }
-    }
-  }
-
-  ###########################################################
-  # Manage ntp
-
-  include '::ntp'
-
-  if ($::osfamily == "RedHat") {
-    # Utils in ntp-perl are included in Debian's ntp package; we
-    # add it here for consistency.  See also
-    # https://tickets.puppetlabs.com/browse/MODULES-3660
-    package { 'ntp-perl':
-      ensure => present
-    }
-    # NOTE(pabelanger): We need to ensure ntpdate service starts on boot for
-    # centos-7.  Currently, ntpd explicitly requires ntpdate to be running before
-    # the sync process can happen in ntpd.  As a result, if ntpdate is not
-    # running, ntpd will start but fail to sync because DNS is not properly
-    # set up.
-    package { 'ntpdate':
-      ensure => present,
-    }
-    service { 'ntpdate':
-      enable => true,
-      require => Package['ntpdate'],
-    }
-  }
-
-  ###########################################################
-  # Manage python/pip
-
-  $desired_virtualenv = '13.1.0'
-  class { '::pip':
-    optional_settings => {
-      'extra-index-url' => '',
-    },
-    manage_pip_conf => true,
-  }
-
-  if (( versioncmp($::virtualenv_version, $desired_virtualenv) < 0 )) {
-    $virtualenv_ensure = $desired_virtualenv
-  } else {
-    $virtualenv_ensure = present
-  }
-  package { 'virtualenv':
-    ensure   => $virtualenv_ensure,
-    provider => openstack_pip,
-    require  => Class['pip'],
-  }
-
-  # manage root ssh
-  if ! defined(File['/root/.ssh']) {
-    file { '/root/.ssh':
-      ensure => directory,
-      mode   => '0700',
-    }
-  }
-
-  # ensure that we have non-pass sudo, and
-  # not require tty
-  file_line { 'sudo_rule_no_pw':
-    path => '/etc/sudoers',
-    line => '%wheel     ALL=(ALL)       NOPASSWD: ALL',
-  }
-  file_line { 'sudo_rule_notty':
-    path   => '/etc/sudoers',
-    line   => 'Defaults    requiretty',
-    match  => '.*requiretty.*',
-    match_for_absence => true,
-    ensure => absent,
-    multiple => true,
-  }
-
-  # update hosts
-  create_resources('host', hiera_hash('hosts'))
-}
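
Note: the class ends by materializing a hiera 'hosts' hash into host resources and by enforcing passwordless sudo for %wheel while dropping requiretty. A small verification sketch for after a puppet run; the opnfvlocal host name is only an example of what the hiera hash might contain:

  sudo visudo -c                        # sudoers still parses after the file_line edits
  sudo grep -E 'NOPASSWD: ALL|requiretty' /etc/sudoers
  getent hosts controller00.opnfvlocal  # host entries from the hiera 'hosts' hash
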
diff --git a/upstream/.gitkeep b/upstream/.gitkeep
new file mode 100644 (file)
index 0000000..e69de29