Support containerized compass-core (43/34943/25)
author:    huangxiangyu <huangxiangyu5@huawei.com>
           Thu, 18 May 2017 07:38:25 +0000 (15:38 +0800)
committer: huangxiangyu <huangxiangyu5@huawei.com>
           Fri, 9 Jun 2017 02:07:11 +0000 (10:07 +0800)
JIRA: COMPASS-534

1. Remove the Compass VM and add Ansible playbooks to bring up the
   5 Compass containers.
2. Use a tar package instead of compass.iso; the tarball contains the
   Compass docker images, OS ISOs, PPAs and pip packages.
3. Modify client.py to communicate with the containerized
   compass-core.
4. Modify the cobbler files and the Ansible callback files to adapt
   to the containerized compass-core.
5. Upgrade the OpenStack version to Ocata.
6. Use openstack-ansible to deploy OpenStack.
7. Virtual deployments use NAT for external network access.
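The tarball packaging described above derives each saved image file name by stripping the `:tag` suffix from the docker image reference, as the `download_docker_images` function in build.sh does. A minimal sketch of that naming step (the image reference is one of the five from build.conf; `CACHE_DIR` here is illustrative):

```shell
# Derive the saved tarball name from a docker image reference,
# mirroring download_docker_images in build.sh.
CACHE_DIR=${CACHE_DIR:-./cache}
mkdir -p "$CACHE_DIR"

image="huangxiangyu/compass-deck:v0.2"   # one of the 5 compass images
name=$(basename "$image")                # -> compass-deck:v0.2
tar_name="${name%:*}.tar"                # strip the :tag suffix
echo "$tar_name"                         # -> compass-deck.tar

# The real script then pulls and saves each image:
#   sudo docker pull $image
#   sudo docker save $image -o $CACHE_DIR/$tar_name
```

build_tar later collects these per-image tars (compass-deck.tar, compass-tasks-osa.tar, ...) together with the OS ISOs and PPA/pip archives into compass_dists, and wraps everything as compass.tar.gz.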

Change-Id: Ifa2a3f5b8c7c32224ac4276fd3d4cc2b0d270a26
Signed-off-by: huangxiangyu <huangxiangyu5@huawei.com>
73 files changed:
build.sh
build/build.conf
ci/deploy_ci.sh
deploy.sh
deploy/adapters/ansible/openstack/HA-ansible-multinodes.yml
deploy/adapters/ansible/openstack_ocata/.gitkeep [moved from deploy/adapters/ansible/openstack_newton/.gitkeep with 100% similarity]
deploy/adapters/ansible/roles/config-compute/handlers/main.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/config-compute/tasks/main.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/config-compute/templates/compute.j2 [new file with mode: 0644]
deploy/adapters/ansible/roles/config-compute/templates/exports [new file with mode: 0644]
deploy/adapters/ansible/roles/config-controller/controller.j2 [new file with mode: 0755]
deploy/adapters/ansible/roles/config-controller/handlers/main.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/config-controller/tasks/main.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/config-controller/templates/controller.j2 [new file with mode: 0755]
deploy/adapters/ansible/roles/config-deployment/files/cinder.yml [new file with mode: 0755]
deploy/adapters/ansible/roles/config-deployment/tasks/main.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/config-deployment/templates/ansible.cfg [new file with mode: 0644]
deploy/adapters/ansible/roles/config-deployment/templates/openstack_user_config.yml.j2 [new file with mode: 0644]
deploy/adapters/ansible/roles/config-deployment/templates/user_variables.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/pre-prepare/files/modules [new file with mode: 0644]
deploy/adapters/ansible/roles/pre-prepare/tasks/main.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/pre-prepare/templates/sources.list [new file with mode: 0644]
deploy/adapters/ansible/roles/pre-prepare/vars/main.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/setup-host/tasks/main.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/setup-infrastructure/tasks/main.yml [new file with mode: 0644]
deploy/adapters/ansible/roles/setup-openstack/tasks/main.yml [new file with mode: 0644]
deploy/adapters/cobbler/snippets/kickstart_post_anamon
deploy/adapters/cobbler/snippets/preseed_post_anamon
deploy/client.py
deploy/compass_conf/adapter/ansible_openstack_ocata.conf [moved from deploy/compass_conf/adapter/ansible_openstack_newton.conf with 55% similarity]
deploy/compass_conf/celeryconfig
deploy/compass_conf/flavor/openstack_ocata.conf [moved from deploy/compass_conf/flavor/openstack_newton.conf with 87% similarity]
deploy/compass_conf/flavor_mapping/HA-ansible-multinodes-ocata.conf [moved from deploy/compass_conf/flavor_mapping/HA-ansible-multinodes-newton.conf with 98% similarity]
deploy/compass_conf/flavor_metadata/HA-ansible-multinodes-ocata.conf [moved from deploy/compass_conf/flavor_metadata/HA-ansible-multinodes-newton.conf with 81% similarity]
deploy/compass_conf/package_installer/ansible-ocata.conf [moved from deploy/compass_conf/package_installer/ansible-newton.conf with 71% similarity]
deploy/compass_conf/repomd.xml [new file with mode: 0644]
deploy/compass_conf/role/openstack_ocata_ansible.conf [moved from deploy/compass_conf/role/openstack_newton_ansible.conf with 98% similarity]
deploy/compass_conf/setting
deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/allinone.tmpl [deleted file]
deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/multinodes.tmpl [deleted file]
deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/single-controller.tmpl [deleted file]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/HA-ansible-multinodes.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/HA-ansible-multinodes.tmpl with 52% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/allinone.tmpl [new file with mode: 0755]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/multinodes.tmpl [new file with mode: 0755]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/single-controller.tmpl [new file with mode: 0755]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/hosts/HA-ansible-multinodes.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/hosts/HA-ansible-multinodes.tmpl with 100% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/hosts/allinone.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/hosts/allinone.tmpl with 100% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/hosts/multinodes.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/hosts/multinodes.tmpl with 100% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/hosts/single-controller.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/hosts/single-controller.tmpl with 100% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/inventories/HA-ansible-multinodes.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/inventories/HA-ansible-multinodes.tmpl with 89% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/inventories/allinone.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/inventories/allinone.tmpl with 89% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/inventories/multinodes.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/inventories/multinodes.tmpl with 90% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/inventories/single-controller.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/inventories/single-controller.tmpl with 89% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/vars/HA-ansible-multinodes.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/vars/HA-ansible-multinodes.tmpl with 87% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/vars/allinone.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/vars/allinone.tmpl with 95% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/vars/multinodes.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/vars/multinodes.tmpl with 97% similarity]
deploy/compass_conf/templates/ansible_installer/openstack_ocata/vars/single-controller.tmpl [moved from deploy/compass_conf/templates/ansible_installer/openstack_newton/vars/single-controller.tmpl with 96% similarity]
deploy/compass_vm.sh
deploy/conf/base.conf
deploy/conf/compass.conf
deploy/conf/virtual.conf
deploy/conf/vm_environment/huawei-virtual1/network.yml
deploy/conf/vm_environment/huawei-virtual2/network.yml
deploy/conf/vm_environment/network.yml [new file with mode: 0644]
deploy/deploy_host.sh
deploy/host_virtual.sh
deploy/launch.sh
deploy/network.sh
deploy/playbook_done.py
deploy/prepare.sh
deploy/rename_nics.py
deploy/status_callback.py
quickstart.sh [new file with mode: 0755]

index 800d627..c4f73ae 100755 (executable)
--- a/build.sh
+++ b/build.sh
@@ -9,8 +9,11 @@
 ##############################################################################
 set -ex
 #COMPASS_PATH=$(cd "$(dirname "$0")"/..; pwd)
+BUILD_IMAGES=${BUILD_IMAGES:-"false"}
+
 COMPASS_PATH=`cd ${BASH_SOURCE[0]%/*};pwd`
 WORK_DIR=$COMPASS_PATH/work/building
+CACHE_DIR=$WORK_DIR/cache
 
 echo $COMPASS_PATH
 
@@ -18,28 +21,66 @@ echo $COMPASS_PATH
 REPO_PATH=$COMPASS_PATH/repo
 WORK_PATH=$COMPASS_PATH
 
-PACKAGES="fuse fuseiso createrepo genisoimage curl"
+REDHAT_REL=${REDHAT_REL:-"false"}
+
+PACKAGES="curl"
+
+mkdir -p $WORK_DIR $CACHE_DIR
+
+source $COMPASS_PATH/build/build.conf
+#cd $WORK_DIR
 
-# PACKAGE_URL will be reset in Jenkins for different branch
-export PACKAGE_URL=${PACKAGE_URL:-http://artifacts.opnfv.org/compass4nfv/package/master}
+function install_docker_ubuntu()
+{
+    sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual
+    sudo apt-get install -y apt-transport-https ca-certificates curl \
+                 software-properties-common
+    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+    sudo apt-key fingerprint 0EBFCD88
+    sudo add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+       $(lsb_release -cs) \
+       stable"
+    sudo apt-get update
+    sudo apt-get install -y docker-ce
+
+    sudo service docker start
+    sudo service docker restart
+}
 
-mkdir -p $WORK_DIR
+function install_docker_redhat()
+{
+    echo "TODO"
+    exit 1
+}
 
-cd $WORK_DIR
 function prepare_env()
 {
+    if [[ -f /etc/redhat-release ]]; then
+        REDHAT_REL=true
+    fi
+
     set +e
+    sudo docker version >/dev/null 2>&1
+    if [[ $? -ne 0 ]]; then
+        if [[ $REDHAT_REL == false ]]; then
+            install_docker_ubuntu
+        else
+            install_docker_redhat
+        fi
+    fi
+
     for i in $PACKAGES; do
-        if ! apt --installed list 2>/dev/null |grep "\<$i\>"
-        then
-            sudo apt-get install  -y --force-yes  $i
+        if [[ $REDHAT_REL == false ]]; then
+            if ! apt --installed list 2>/dev/null |grep "\<$i\>"
+            then
+                sudo apt-get install  -y --force-yes  $i
+            fi
+        fi
+        if [[ $REDHAT_REL == true ]]; then
+            sudo yum install $i -y
         fi
     done
     set -e
-
-    if [[ ! -d $CACHE_DIR ]]; then
-        mkdir -p $CACHE_DIR
-    fi
 }
 
 function download_git()
@@ -74,12 +115,6 @@ function download_url()
     fi
 
     curl --connect-timeout 10 -o $CACHE_DIR/$1 $2
-    local_md5=`md5sum $CACHE_DIR/$1 | cut -d ' ' -f 1`
-    repo_md5=`cat $CACHE_DIR/$1.md5 | cut -d ' ' -f 1`
-    if [[ $local_md5 != $repo_md5 ]]; then
-        echo "ERROR, the md5sum don't match"
-        exit 1
-    fi
 }
 
 function download_local()
@@ -89,12 +124,20 @@ function download_local()
     fi
 }
 
+function download_docker_images()
+{
+    for i in $COMPASS_DECK $COMPASS_TASKS $COMPASS_COBBLER \
+             $COMPASS_DB $COMPASS_MQ; do
+        basename=`basename $i`
+        sudo docker pull $i
+        sudo docker save $i -o $CACHE_DIR/${basename%:*}.tar
+    done
+}
+
 function download_packages()
 {
-     for i in $CENTOS_BASE $LOADERS $CIRROS $APP_PACKAGE \
-              $COMPASS_CORE $COMPASS_WEB $COMPASS_INSTALL $COMPASS_PKG \
-              $PIP_REPO $PIP_OPENSTACK_REPO \
-              $UBUNTU_ISO $CENTOS_ISO $XENIAL_NEWTON_PPA $CENTOS7_NEWTON_PPA; do
+    for i in $PIP_OPENSTACK_REPO $APP_PACKAGE $COMPASS_COMPOSE \
+             $UBUNTU_ISO $CENTOS_ISO $UBUNTU_PPA $CENTOS_PPA; do
 
          if [[ ! $i ]]; then
              continue
@@ -110,96 +153,27 @@ function download_packages()
          fi
      done
 
+    download_docker_images
 }
 
-function copy_file()
+function build_tar()
 {
-    new=$1
-
-    # main process
-    mkdir -p $new/compass $new/bootstrap $new/pip $new/pip-openstack $new/guestimg $new/app_packages $new/ansible
-    mkdir -p $new/repos/cobbler/{ubuntu,centos,redhat}/{iso,ppa}
-
-    rm -rf $new/.rr_moved
-
-    if [[ $UBUNTU_ISO ]]; then
-        cp $CACHE_DIR/`basename $UBUNTU_ISO` $new/repos/cobbler/ubuntu/iso/ -rf
-    fi
-
-    if [[  $XENIAL_NEWTON_PPA ]]; then
-        cp $CACHE_DIR/`basename $XENIAL_NEWTON_PPA` $new/repos/cobbler/ubuntu/ppa/ -rf
-    fi
-
-    if [[ $CENTOS_ISO ]]; then
-        cp $CACHE_DIR/`basename $CENTOS_ISO` $new/repos/cobbler/centos/iso/ -rf
-    fi
-
-    if [[  $CENTOS7_NEWTON_PPA ]]; then
-        cp $CACHE_DIR/`basename $CENTOS7_NEWTON_PPA` $new/repos/cobbler/centos/ppa/ -rf
-    fi
-
-    cp $CACHE_DIR/`basename $LOADERS` $new/ -rf || exit 1
-    cp $CACHE_DIR/`basename $APP_PACKAGE` $new/app_packages/ -rf || exit 1
-
-    if [[ $CIRROS ]]; then
-        cp $CACHE_DIR/`basename $CIRROS` $new/guestimg/ -rf || exit 1
-    fi
-
-    for i in $COMPASS_CORE $COMPASS_INSTALL $COMPASS_WEB; do
-        cp $CACHE_DIR/`basename $i | sed 's/.git//g'` $new/compass/ -rf
-    done
-
-    cp $COMPASS_PATH/deploy/adapters $new/compass/compass-adapters -rf
-    cp $COMPASS_PATH/deploy/compass_conf/* $new/compass/compass-core/conf -rf
-
-    tar -zxvf $CACHE_DIR/`basename $PIP_REPO` -C $new/
-    tar -zxvf $CACHE_DIR/`basename $PIP_OPENSTACK_REPO` -C $new/
-
-    find $new/compass -name ".git" | xargs rm -rf
-}
-
-function rebuild_ppa()
-{
-    name=`basename $COMPASS_PKG`
-    rm -rf ${name%%.*} $name
-    cp $CACHE_DIR/$name $WORK_DIR
-    cp $COMPASS_PATH/repo/openstack/make_ppa/centos/comps.xml $WORK_DIR
-    tar -zxvf $name
-    cp ${name%%.*}/*.rpm $1/Packages -f
-    rm -rf $1/repodata/*
-    createrepo -g $WORK_DIR/comps.xml $1
-}
-
-function make_iso()
-{
-    download_packages
-    name=`basename $CENTOS_BASE`
-    cp  $CACHE_DIR/$name ./ -f
-    # mount base iso
-    mkdir -p base new
-    fuseiso $name base
-    cd base;find .|cpio -pd ../new ;cd -
-    fusermount -u base
-    chmod 755 ./new -R
-
-    copy_file new
-    rebuild_ppa new
-
-    mkisofs -quiet -r -J -R -b isolinux/isolinux.bin \
-            -no-emul-boot -boot-load-size 4 \
-            -boot-info-table -hide-rr-moved \
-            -x "lost+found:" \
-            -o compass.iso new/
-
-    md5sum compass.iso > compass.iso.md5
-
-    # delete tmp file
-    rm -rf new base $name
+    cd $CACHE_DIR
+    sudo rm -rf compass_dists
+    mkdir -p compass_dists
+    sudo cp -f `basename $PIP_OPENSTACK_REPO` `basename $APP_PACKAGE` \
+    `basename $UBUNTU_ISO` `basename $CENTOS_ISO` \
+    `basename $UBUNTU_PPA` `basename $CENTOS_PPA` \
+    compass-deck.tar compass-tasks-osa.tar compass-cobbler.tar \
+    compass-db.tar compass-mq.tar compass_dists
+    sudo tar -zcf compass.tar.gz compass-docker-compose compass_dists
+    sudo mv compass.tar.gz $TAR_DIR/$TAR_NAME
+    cd -
 }
 
 function process_param()
 {
-    TEMP=`getopt -o c:d:f:s:t: --long iso-dir:,iso-name:,cache-dir:,openstack_build:,feature_build:,feature_version: -n 'build.sh' -- "$@"`
+    TEMP=`getopt -o c:d:f:s:t: --long tar-dir:,tar-name:,cache-dir:,openstack_build:,feature_build:,feature_version: -n 'build.sh' -- "$@"`
 
     if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi
 
@@ -207,9 +181,9 @@ function process_param()
 
     while :; do
         case "$1" in
-            -d|--iso-dir) export ISO_DIR=$2; shift 2;;
-            -f|--iso-name) export ISO_NAME=$2; shift 2;;
-            -c|--cache-dir) export CACHE_DIR=$2; shift 2;;
+            -d|--tar-dir) export TAR_DIR=$2; shift 2;;
+            -f|--tar-name) export TAR_NAME=$2; shift 2;;
+            -c|--cache-dir) export CACHE_DIR=$2; shift 2;;
             -s|--openstack_build) export OPENSTACK_BUILD=$2; shift 2;;
             -t|--feature_build) export FEATURE_BUILD=$2; shift 2;;
             -v|--feature_version) export FEATURE_VERSION=$2; shift 2;;
@@ -218,59 +192,15 @@ function process_param()
         esac
     done
 
-    export CACHE_DIR=${CACHE_DIR:-$WORK_DIR/cache}
-    export ISO_DIR=${ISO_DIR:-$WORK_DIR}
-    export ISO_NAME=${ISO_NAME:-"compass.iso"}
+    export CACHE_DIR=${CACHE_DIR:-$WORK_DIR/cache}
+    export TAR_DIR=${TAR_DIR:-$WORK_DIR}
+    export TAR_NAME=${TAR_NAME:-"compass.tar.gz"}
     export OPENSTACK_BUILD=${OPENSTACK_BUILD:-"stable"}
     export FEATURE_BUILD=${FEATURE_BUILD:-"stable"}
 #    export FEATURE_VERSION=${FEATURE_VERSION:-"colorado"}
 }
 
-function copy_iso()
-{
-   if [[ $ISO_DIR/$ISO_NAME == $WORK_DIR/compass.iso ]]; then
-      return
-   fi
-
-   cp $WORK_DIR/compass.iso $ISO_DIR/$ISO_NAME -f
-}
-
-# get daily repo or stable repo
-function get_repo_pkg()
-{
-   source $COMPASS_PATH/repo/repo_func.sh
-
-   # switch to compass4nfv directory
-   cd $COMPASS_PATH
-
-   # set openstack ppa url
-   if [[ $OPENSTACK_BUILD == daily ]]; then
-       process_env
-       make_osppa
-       export PPA_URL=${PPA_URL:-$COMPASS_PATH/work/repo}
-   else
-       export PPA_URL=${PPA_URL:-$PACKAGE_URL}
-   fi
-
-   # set feature pkg url
-   if [[ $FEATURE_BUILD == daily ]]; then
-       process_env
-       make_repo --package-tag feature
-
-###TODO should the packages.tar.gz include all the packages from different OPNFV versions?
-
-       export FEATURE_URL=${FEATURE_URL:-$COMPASS_PATH/work/repo}
-   else
-       export FEATURE_URL=${FEATURE_URL:-$PACKAGE_URL}
-   fi
-
-   source $COMPASS_PATH/build/build.conf
-
-   # switch to building directory
-   cd $WORK_DIR
-}
 process_param $*
 prepare_env
-get_repo_pkg
-make_iso
-copy_iso
+download_packages
+build_tar
index 101f01b..fea50c8 100644 (file)
@@ -1,29 +1,42 @@
 TIMEOUT=10
 
+# PACKAGE_URL will be reset in Jenkins for different branch
+export PACKAGE_URL=${PACKAGE_URL:-http://artifacts.opnfv.org/compass4nfv/package/master}
+
 # Jumphost OS version
-export CENTOS_BASE=${CENTOS_BASE:-$PACKAGE_URL/CentOS-7-x86_64-Minimal-1511.iso}
+export CENTOS_BASE=${CENTOS_BASE:-$PACKAGE_URL/CentOS-7-x86_64-Minimal-1511.iso}
 
 # Compass git repository
-export COMPASS_CORE=${COMPASS_CORE:-https://github.com/openstack/compass-core.git}
-export COMPASS_WEB=${COMPASS_WEB:-https://github.com/openstack/compass-web.git}
-export COMPASS_INSTALL=${COMPASS_INSTALL:-http://github.com/baigk/compass-install.git}
+export COMPASS_CORE=${COMPASS_CORE:-https://github.com/openstack/compass-core.git}
+export COMPASS_WEB=${COMPASS_WEB:-https://github.com/openstack/compass-web.git}
+export COMPASS_INSTALL=${COMPASS_INSTALL:-http://github.com/baigk/compass-install.git}
 
 # Compass core packages
-export COMPASS_PKG=${COMPASS_PKG:-$PACKAGE_URL/centos7-compass-core.tar.gz}
-export PIP_REPO=${PIP_REPO:-$PACKAGE_URL/pip.tar.gz}
-export PIP_OPENSTACK_REPO=${PIP_OPENSTACK_REPO:-$PACKAGE_URL/pip-openstack.tar.gz}
+export COMPASS_PKG=${COMPASS_PKG:-$PACKAGE_URL/centos7-compass-core.tar.gz}
+export PIP_REPO=${PIP_REPO:-$PACKAGE_URL/pip.tar.gz}
+export PIP_OPENSTACK_REPO=${PIP_OPENSTACK_REPO:-$PACKAGE_URL/pip-openstack.tar.gz}
 
 # OS ISO for provisioning
 export CENTOS_ISO=${CENTOS_ISO:-$PACKAGE_URL/CentOS-7-x86_64-Minimal-1611.iso} # centos 7.3
 export UBUNTU_ISO=${UBUNTU_ISO:-$PACKAGE_URL/ubuntu-16.04-server-amd64.iso} # ubuntu 16.04
 
 # OpenStack Packages for deployment
-export XENIAL_NEWTON_PPA=${XENIAL_NEWTON_PPA:-$PPA_URL/xenial-newton-ppa.tar.gz}
-export CENTOS7_NEWTON_PPA=${CENTOS7_NEWTON_PPA:-$PPA_URL/centos7-newton-ppa.tar.gz}
+# export UBUNTU_PPA=${UBUNTU_PPA:-$PACKAGE_URL/xenial-ocata-ppa.tar.gz}
+# export CENTOS_PPA=${CENTOS_PPA:-$PACKAGE_URL/centos7-ocata-ppa.tar.gz}
 
 # SDN Packages for integration
-export APP_PACKAGE=${APP_PACKAGE:-$FEATURE_URL/packages.tar.gz}
+# export APP_PACKAGE=${APP_PACKAGE:-$PACKAGE_URL/packages.tar.gz}
 
 # Other Packages
-export LOADERS=${LOADERS:-$PACKAGE_URL/loaders.tar.gz}
-export CIRROS=${CIRROS:-$PACKAGE_URL/cirros-0.3.3-x86_64-disk.img}
+# export LOADERS=${LOADERS:-$PACKAGE_URL/loaders.tar.gz}
+# export CIRROS=${CIRROS:-$PACKAGE_URL/cirros-0.3.3-x86_64-disk.img}
+
+# Containerized compass-core docker images
+export COMPASS_DECK=${COMPASS_DECK:-huangxiangyu/compass-deck:v0.2}
+export COMPASS_TASKS=${COMPASS_TASKS:-wtwde/compass-tasks-osa:v0.2}
+export COMPASS_COBBLER=${COMPASS_COBBLER:-huangxiangyu/compass-cobbler:v0.1}
+export COMPASS_DB=${COMPASS_DB:-huangxiangyu/compass-db:v0.1}
+export COMPASS_MQ=${COMPASS_MQ:-huangxiangyu/compass-mq:v0.1}
+
+# Containerized compass-core ansible
+export COMPASS_COMPOSE=${COMPASS_COMPOSE:-https://github.com/hexhxy/compass-docker-compose.git}
index 1f20621..80a2383 100755 (executable)
@@ -54,4 +54,7 @@ echo 'OPENSTACK_VERSION='$OPENSTACK_VERSION
 echo "#############################################"
 set -x
 
+# clean up
+export TAR_URL=${TAR_URL:-$ISO_URL}
+sudo docker rm -f $(sudo docker ps -aq) || true
 $CI_DIR/../deploy.sh
index e29b518..24af7f9 100755 (executable)
--- a/deploy.sh
+++ b/deploy.sh
@@ -13,8 +13,8 @@
 #export OS_VERSION=xenial/centos7
 
 # Set ISO image corresponding to your code
-# export ISO_URL=file:///home/compass/compass4nfv.iso
-#export ISO_URL=
+# export TAR_URL=file:///home/compass/compass4nfv.tar.gz
+#export TAR_URL=
 
 # Set hardware deploy jumpserver PXE NIC
 # You need to comment out it when virtual deploy.
@@ -28,7 +28,7 @@
 # export NETWORK=/home/compass4nfv/deploy/conf/vm_environment/huawei-virtual1/network.yml
 #export NETWORK=
 
-export OPENSTACK_VERSION=${OPENSTACK_VERSION:-newton}
+export OPENSTACK_VERSION=${OPENSTACK_VERSION:-ocata}
 
 COMPASS_DIR=`cd ${BASH_SOURCE[0]%/*}/;pwd`
 export COMPASS_DIR
index f328d95..2a3e649 100644 (file)
 ---
 - hosts: all
   remote_user: root
-  pre_tasks:
-    - name: make sure ssh dir exist
-      file:
-        path: '{{ item.path }}'
-        owner: '{{ item.owner }}'
-        group: '{{ item.group }}'
-        state: directory
-        mode: 0755
-      with_items:
-        - path: /root/.ssh
-          owner: root
-          group: root
-
-    - name: write ssh config
-      copy:
-        content: "UserKnownHostsFile /dev/null\nStrictHostKeyChecking no"
-        dest: '{{ item.dest }}'
-        owner: '{{ item.owner }}'
-        group: '{{ item.group }}'
-        mode: 0600
-      with_items:
-        - dest: /root/.ssh/config
-          owner: root
-          group: root
-
-    - name: generate ssh keys
-      shell: if [ ! -f ~/.ssh/id_rsa.pub ]; \
-             then ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""; \
-             else echo "already gen ssh key!"; fi;
-
-    - name: fetch ssh keys
-      fetch:
-        src: /root/.ssh/id_rsa.pub
-        dest: /tmp/ssh-keys-{{ ansible_hostname }}
-        flat: "yes"
-
-    - authorized_key:
-        user: root
-        key: "{{ lookup('file', item) }}"
-      with_fileglob:
-        - /tmp/ssh-keys-*
-  max_fail_percentage: 0
-  roles:
-    - common
-
-- hosts: all
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - setup-network
-
-- hosts: ha
-  remote_user: root
-  max_fail_percentage: 0
   roles:
-    - ha
+    - pre-prepare
 
 - hosts: controller
   remote_user: root
-  max_fail_percentage: 0
   roles:
-    - memcached
-    - apache
-    - database
-    - mq
-    - keystone
-    - nova-controller
-    - neutron-controller
-    - cinder-controller
-    - glance
-    - neutron-common
-    - neutron-network
-    - ceilometer_controller
-    - dashboard
-    - heat
-    - aodh
-    - congress
-
-- hosts: all
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - storage
+    - config-controller
 
 - hosts: compute
   remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - nova-compute
-    - neutron-compute
-    - cinder-volume
-    - ceilometer_compute
-
-- hosts: all
-  remote_user: root
-  max_fail_percentage: 0
-  roles: []
-#    - moon
-
-- hosts: all
-  remote_user: root
-  max_fail_percentage: 0
   roles:
-    - secgroup
-
-- hosts: ceph_adm
-  remote_user: root
-  max_fail_percentage: 0
-  roles: []
-#    - ceph-deploy
-
-- hosts: ceph
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - ceph-purge
-    - ceph-config
-
-- hosts: ceph_mon
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - ceph-mon
-
-- hosts: ceph_osd
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - ceph-osd
-
-- hosts: ceph
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - ceph-openstack
-
-- hosts: all
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - monitor
-
-- hosts: all
-  remote_user: root
-  max_fail_percentage: 0
-  tasks:
-    - name: set bash to nova
-      user:
-        name: nova
-        shell: /bin/bash
+    - config-compute
 
-    - name: make sure ssh dir exist
-      file:
-        path: '{{ item.path }}'
-        owner: '{{ item.owner }}'
-        group: '{{ item.group }}'
-        state: directory
-        mode: 0755
-      with_items:
-        - path: /var/lib/nova/.ssh
-          owner: nova
-          group: nova
-
-    - name: copy ssh keys for nova
-      shell: cp -rf /root/.ssh/id_rsa /var/lib/nova/.ssh;
-
-    - name: write ssh config
-      copy:
-        content: "UserKnownHostsFile /dev/null\nStrictHostKeyChecking no"
-        dest: '{{ item.dest }}'
-        owner: '{{ item.owner }}'
-        group: '{{ item.group }}'
-        mode: 0600
-      with_items:
-        - dest: /var/lib/nova/.ssh/config
-          owner: nova
-          group: nova
-
-    - authorized_key:
-        user: nova
-        key: "{{ lookup('file', item) }}"
-      with_fileglob:
-        - /tmp/ssh-keys-*
-
-    - name: chown ssh file
-      shell: chown -R nova:nova /var/lib/nova/.ssh;
-
-- hosts: all
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - odl_cluster
-
-- hosts: all
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - onos_cluster
-
-- hosts: all
-  remote_user: root
-  serial: 1
-  max_fail_percentage: 0
-  roles:
-    - odl_cluster_neutron
-
-- hosts: all
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - odl_cluster_post
-
-- hosts: controller
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - ext-network
-
-- hosts: controller
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-#    - tacker
-
-- hosts: controller
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - openstack-post
-
-- hosts: controller
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - boot-recovery
-
-- hosts: controller
-  remote_user: root
-  max_fail_percentage: 0
-  roles:
-    - controller-recovery
-
-- hosts: compute
+- hosts: localhost
   remote_user: root
-  max_fail_percentage: 0
   roles:
-    - compute-recovery
+    - config-deployment
+    - setup-host
+    - setup-infrastructure
+    - setup-openstack
diff --git a/deploy/adapters/ansible/roles/config-compute/handlers/main.yml b/deploy/adapters/ansible/roles/config-compute/handlers/main.yml
new file mode 100644 (file)
index 0000000..c565498
--- /dev/null
@@ -0,0 +1,14 @@
+##############################################################################
+## Copyright (c) 2016 HUAWEI TECHNOLOGIES CO.,LTD and others.
+## All rights reserved. This program and the accompanying materials
+## are made available under the terms of the Apache License, Version 2.0
+## which accompanies this distribution, and is available at
+## http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+- name: restart network service
+  shell: "/sbin/ifconfig eth0 0 && /sbin/ifdown -a && \
+          /sbin/ifup --ignore-errors -a"
+
+- name: restart nfs service
+  service: name=nfs-kernel-server state=restarted
diff --git a/deploy/adapters/ansible/roles/config-compute/tasks/main.yml b/deploy/adapters/ansible/roles/config-compute/tasks/main.yml
new file mode 100644 (file)
index 0000000..1c5b486
--- /dev/null
@@ -0,0 +1,36 @@
+##############################################################################
+# Copyright (c) 2017 HUAWEI TECHNOLOGIES CO.,LTD and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+- name: configure network
+  template:
+    src: compute.j2
+    dest: /etc/network/interfaces
+  notify:
+    - restart network service
+
+- name: Install apt packages
+  apt:
+    pkg: "nfs-kernel-server"
+    state: "present"
+
+- name: make nfs directory
+  file: "dest=/images mode=0777 state=directory"
+
+- name: configure service
+  shell: "echo 'nfs        2049/tcp' >>  /etc/services; \
+          echo 'nfs        2049/udp' >>  /etc/services"
+
+- name: configure NFS
+  template:
+    src: exports
+    dest: /etc/exports
+  notify:
+    - restart nfs service
+
+- meta: flush_handlers
diff --git a/deploy/adapters/ansible/roles/config-compute/templates/compute.j2 b/deploy/adapters/ansible/roles/config-compute/templates/compute.j2
new file mode 100644 (file)
index 0000000..8337fbc
--- /dev/null
@@ -0,0 +1,81 @@
+# This file describes the network interfaces available on your system
+# and how to activate them. For more information, see interfaces(5).
+
+# The loopback network interface
+auto lo
+iface lo inet loopback
+
+
+# Physical interface
+auto eth0
+iface eth0 inet manual
+
+
+# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
+auto {{sys_intf_mappings["mgmt"]["interface"]}}.{{sys_intf_mappings["mgmt"]["vlan_tag"]}}
+iface {{sys_intf_mappings["mgmt"]["interface"]}}.{{sys_intf_mappings["mgmt"]["vlan_tag"]}} inet manual
+    vlan-raw-device {{sys_intf_mappings["mgmt"]["interface"]}}
+
+# Storage network VLAN interface (optional)
+auto {{sys_intf_mappings["storage"]["interface"]}}.{{sys_intf_mappings["storage"]["vlan_tag"]}}
+iface {{sys_intf_mappings["storage"]["interface"]}}.{{sys_intf_mappings["storage"]["vlan_tag"]}} inet manual
+    vlan-raw-device {{sys_intf_mappings["storage"]["interface"]}}
+
+# Container/Host management bridge
+auto br-mgmt
+iface br-mgmt inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports eth0
+    address {{host_info[inventory_hostname].MGMT_IP}}
+    netmask 255.255.255.0
+
+# compute1 VXLAN (tunnel/overlay) bridge config
+auto br-vxlan
+iface br-vxlan inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports {{sys_intf_mappings["mgmt"]["interface"]}}.{{sys_intf_mappings["mgmt"]["vlan_tag"]}}
+    address {{host_info[inventory_hostname].VXLAN_IP}}
+    netmask 255.255.252.0
+
+# OpenStack Networking VLAN bridge
+auto br-vlan
+iface br-vlan inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports {{ network_cfg["provider_net_mappings"][0]["interface"] }}
+    address {{ip_settings[inventory_hostname]["br-prv"]["ip"]}}
+    netmask 255.255.255.0
+    gateway {{ip_settings[inventory_hostname]["br-prv"]["gw"]}}
+    offload-sg off
+    # Create veth pair, don't bomb if already exists
+    pre-up ip link add br-vlan-veth type veth peer name eth12 || true
+    # Set both ends UP
+    pre-up ip link set br-vlan-veth up
+    pre-up ip link set eth12 up
+    # Delete veth pair on DOWN
+    post-down ip link del br-vlan-veth || true
+    bridge_ports br-vlan-veth
+
+# Add an additional address to br-vlan
+iface br-vlan inet static
+    # Flat network default gateway
+    # -- This needs to exist somewhere for network reachability
+    # -- from the router namespace for floating IP paths.
+    # -- Putting this here is primarily for tempest to work.
+    address {{host_info[inventory_hostname].VLAN_IP_SECOND}}
+    netmask 255.255.252.0
+
+# compute1 Storage bridge
+auto br-storage
+iface br-storage inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports {{sys_intf_mappings["storage"]["interface"]}}.{{sys_intf_mappings["storage"]["vlan_tag"]}}
+    address {{ip_settings[inventory_hostname]["storage"]["ip"]}}
+    netmask 255.255.252.0
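The bridge stanzas above derive their VLAN subinterface names from `sys_intf_mappings` as `<interface>.<vlan_tag>`. A minimal Python sketch of that expansion, with hypothetical mapping values (not taken from a real deployment):

```python
# Sketch of the Jinja2 expansion used in the template above; the mapping
# values here are hypothetical examples.
sys_intf_mappings = {
    "mgmt": {"interface": "eth1", "vlan_tag": 101},
    "storage": {"interface": "eth1", "vlan_tag": 102},
}

def vlan_subif(net):
    """Return the '<interface>.<vlan_tag>' name fed to bridge_ports."""
    m = sys_intf_mappings[net]
    return "%s.%s" % (m["interface"], m["vlan_tag"])

print(vlan_subif("mgmt"))     # eth1.101 -> bridge_ports of br-vxlan
print(vlan_subif("storage"))  # eth1.102 -> bridge_ports of br-storage
```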
diff --git a/deploy/adapters/ansible/roles/config-compute/templates/exports b/deploy/adapters/ansible/roles/config-compute/templates/exports
new file mode 100644 (file)
index 0000000..c2749c8
--- /dev/null
@@ -0,0 +1,11 @@
+# /etc/exports: the access control list for filesystems which may be exported
+#               to NFS clients.  See exports(5).
+#
+# Example for NFSv2 and NFSv3:
+# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
+#
+# Example for NFSv4:
+# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
+# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
+#
+/images         *(rw,sync,no_subtree_check,no_root_squash)
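The uncommented export grants every client (`*`) read-write access to `/images` with root squashing disabled. For illustration, a rough Python sketch of how an exports(5) line decomposes into a path plus per-client option lists (this ignores quoting and other corner cases of the real format):

```python
def parse_export(line):
    """Very rough parse of one exports(5) line (ignores quoting and escapes)."""
    fields = line.split()
    path, clients = fields[0], []
    for spec in fields[1:]:
        client, _, opts = spec.partition("(")
        clients.append((client, opts.rstrip(")").split(",") if opts else []))
    return path, clients

path, clients = parse_export("/images         *(rw,sync,no_subtree_check,no_root_squash)")
# path == "/images"
# clients == [("*", ["rw", "sync", "no_subtree_check", "no_root_squash"])]
```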
diff --git a/deploy/adapters/ansible/roles/config-controller/controller.j2 b/deploy/adapters/ansible/roles/config-controller/controller.j2
new file mode 100755 (executable)
index 0000000..a4f073f
--- /dev/null
@@ -0,0 +1,66 @@
+# This file describes the network interfaces available on your system
+# and how to activate them. For more information, see interfaces(5).
+
+# The loopback network interface
+auto lo
+iface lo inet loopback
+
+# Physical interface
+auto eth0
+iface eth0 inet manual
+
+# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
+auto {{sys_intf_mappings["mgmt"]["interface"]}}.{{sys_intf_mappings["mgmt"]["vlan_tag"]}}
+iface {{sys_intf_mappings["mgmt"]["interface"]}}.{{sys_intf_mappings["mgmt"]["vlan_tag"]}} inet manual
+    vlan-raw-device {{sys_intf_mappings["mgmt"]["interface"]}}
+
+# Storage network VLAN interface (optional)
+auto {{sys_intf_mappings["storage"]["interface"]}}.{{sys_intf_mappings["storage"]["vlan_tag"]}}
+iface {{sys_intf_mappings["storage"]["interface"]}}.{{sys_intf_mappings["storage"]["vlan_tag"]}} inet manual
+    vlan-raw-device {{sys_intf_mappings["storage"]["interface"]}}
+
+# Container/Host management bridge
+auto br-mgmt
+iface br-mgmt inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports eth0
+    address {{host_info[inventory_hostname].MGMT_IP}}
+    netmask 255.255.255.0
+
+# OpenStack Networking VXLAN (tunnel/overlay) bridge
+#
+# Only the COMPUTE and NETWORK nodes must have an IP address
+# on this bridge. When used by infrastructure nodes, the
+# IP addresses are assigned to containers which use this
+# bridge.
+#
+auto br-vxlan
+iface br-vxlan inet manual
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports {{sys_intf_mappings["mgmt"]["interface"]}}.{{sys_intf_mappings["mgmt"]["vlan_tag"]}}
+
+# OpenStack Networking VLAN bridge
+auto br-vlan
+iface br-vlan inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports {{ network_cfg["provider_net_mappings"][0]["interface"] }}
+    address {{ ip_settings[inventory_hostname]["br-prv"]["ip"] }}
+    netmask 255.255.255.0
+    gateway {{ ip_settings[inventory_hostname]["br-prv"]["gw"] }}
+    dns-nameserver 8.8.8.8 8.8.4.4
+
+# compute1 Storage bridge
+auto br-storage
+iface br-storage inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports {{sys_intf_mappings["storage"]["interface"]}}.{{sys_intf_mappings["storage"]["vlan_tag"]}}
+    address {{ ip_settings[inventory_hostname]["storage"]["ip"] }}
+    netmask 255.255.252.0
diff --git a/deploy/adapters/ansible/roles/config-controller/handlers/main.yml b/deploy/adapters/ansible/roles/config-controller/handlers/main.yml
new file mode 100644 (file)
index 0000000..3d979e6
--- /dev/null
@@ -0,0 +1,11 @@
+##############################################################################
+## Copyright (c) 2016 HUAWEI TECHNOLOGIES CO.,LTD and others.
+## All rights reserved. This program and the accompanying materials
+## are made available under the terms of the Apache License, Version 2.0
+## which accompanies this distribution, and is available at
+## http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+- name: restart network service
+  shell: "/sbin/ifconfig eth0 0 && /sbin/ifdown -a && \
+          /sbin/ifup --ignore-errors -a"
diff --git a/deploy/adapters/ansible/roles/config-controller/tasks/main.yml b/deploy/adapters/ansible/roles/config-controller/tasks/main.yml
new file mode 100644 (file)
index 0000000..54e4bf1
--- /dev/null
@@ -0,0 +1,17 @@
+##############################################################################
+# Copyright (c) 2017 HUAWEI TECHNOLOGIES CO.,LTD and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+- name: configure controller network
+  template:
+    src: controller.j2
+    dest: /etc/network/interfaces
+  notify:
+    - restart network service
+
+- meta: flush_handlers
diff --git a/deploy/adapters/ansible/roles/config-controller/templates/controller.j2 b/deploy/adapters/ansible/roles/config-controller/templates/controller.j2
new file mode 100755 (executable)
index 0000000..a4f073f
--- /dev/null
@@ -0,0 +1,66 @@
+# This file describes the network interfaces available on your system
+# and how to activate them. For more information, see interfaces(5).
+
+# The loopback network interface
+auto lo
+iface lo inet loopback
+
+# Physical interface
+auto eth0
+iface eth0 inet manual
+
+# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
+auto {{sys_intf_mappings["mgmt"]["interface"]}}.{{sys_intf_mappings["mgmt"]["vlan_tag"]}}
+iface {{sys_intf_mappings["mgmt"]["interface"]}}.{{sys_intf_mappings["mgmt"]["vlan_tag"]}} inet manual
+    vlan-raw-device {{sys_intf_mappings["mgmt"]["interface"]}}
+
+# Storage network VLAN interface (optional)
+auto {{sys_intf_mappings["storage"]["interface"]}}.{{sys_intf_mappings["storage"]["vlan_tag"]}}
+iface {{sys_intf_mappings["storage"]["interface"]}}.{{sys_intf_mappings["storage"]["vlan_tag"]}} inet manual
+    vlan-raw-device {{sys_intf_mappings["storage"]["interface"]}}
+
+# Container/Host management bridge
+auto br-mgmt
+iface br-mgmt inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports eth0
+    address {{host_info[inventory_hostname].MGMT_IP}}
+    netmask 255.255.255.0
+
+# OpenStack Networking VXLAN (tunnel/overlay) bridge
+#
+# Only the COMPUTE and NETWORK nodes must have an IP address
+# on this bridge. When used by infrastructure nodes, the
+# IP addresses are assigned to containers which use this
+# bridge.
+#
+auto br-vxlan
+iface br-vxlan inet manual
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports {{sys_intf_mappings["mgmt"]["interface"]}}.{{sys_intf_mappings["mgmt"]["vlan_tag"]}}
+
+# OpenStack Networking VLAN bridge
+auto br-vlan
+iface br-vlan inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports {{ network_cfg["provider_net_mappings"][0]["interface"] }}
+    address {{ ip_settings[inventory_hostname]["br-prv"]["ip"] }}
+    netmask 255.255.255.0
+    gateway {{ ip_settings[inventory_hostname]["br-prv"]["gw"] }}
+    dns-nameserver 8.8.8.8 8.8.4.4
+
+# compute1 Storage bridge
+auto br-storage
+iface br-storage inet static
+    bridge_stp off
+    bridge_waitport 0
+    bridge_fd 0
+    bridge_ports {{sys_intf_mappings["storage"]["interface"]}}.{{sys_intf_mappings["storage"]["vlan_tag"]}}
+    address {{ ip_settings[inventory_hostname]["storage"]["ip"] }}
+    netmask 255.255.252.0
diff --git a/deploy/adapters/ansible/roles/config-deployment/files/cinder.yml b/deploy/adapters/ansible/roles/config-deployment/files/cinder.yml
new file mode 100755 (executable)
index 0000000..3a39935
--- /dev/null
@@ -0,0 +1,13 @@
+---
+# This file contains an example to show how to pin
+# the cinder-volume service to run on metal (not in a container).
+#
+# Important note:
+# When using LVM or any iSCSI-based cinder backends, such as NetApp with
+# iSCSI protocol, the cinder-volume service *must* run on metal.
+# Reference: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
+
+container_skel:
+  cinder_volumes_container:
+    properties:
+      is_metal: true
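Dropping this file into `/etc/openstack_deploy/env.d/` layers it over OSA's default environment, so only `is_metal` changes and the rest of the container skeleton survives. A simplified Python sketch of that override semantics (the `belongs_to` value is a made-up example, and OSA's real environment loader is more involved):

```python
def deep_merge(base, override):
    """Recursively merge override into base: dict values merge, others replace."""
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out

# Hypothetical slice of OSA's default environment (not the real defaults).
default_env = {
    "container_skel": {
        "cinder_volumes_container": {
            "belongs_to": ["storage_containers"],
            "properties": {"is_metal": False},
        }
    }
}

# The env.d override above, expressed as a dict.
override = {
    "container_skel": {
        "cinder_volumes_container": {"properties": {"is_metal": True}}
    }
}

merged = deep_merge(default_env, override)
# belongs_to survives; is_metal flips to True
```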
diff --git a/deploy/adapters/ansible/roles/config-deployment/tasks/main.yml b/deploy/adapters/ansible/roles/config-deployment/tasks/main.yml
new file mode 100644 (file)
index 0000000..b069601
--- /dev/null
@@ -0,0 +1,33 @@
+##############################################################################
+# Copyright (c) 2017 HUAWEI TECHNOLOGIES CO.,LTD and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+- name: create osa log directory
+  file:
+    path: /var/log/osa/
+    state: directory
+
+- name: copy openstack_user_config
+  template:
+    src: openstack_user_config.yml.j2
+    dest: /etc/openstack_deploy/openstack_user_config.yml
+
+- name: copy user_variables
+  template:
+    src: user_variables.yml
+    dest: /etc/openstack_deploy/user_variables.yml
+
+- name: copy cinder.yml
+  copy:
+    src: cinder.yml
+    dest: /etc/openstack_deploy/env.d/cinder.yml
+
+- name: copy ansible.cfg
+  template:
+    src: ansible.cfg
+    dest: /opt/openstack-ansible/playbooks/
diff --git a/deploy/adapters/ansible/roles/config-deployment/templates/ansible.cfg b/deploy/adapters/ansible/roles/config-deployment/templates/ansible.cfg
new file mode 100644 (file)
index 0000000..41502fb
--- /dev/null
@@ -0,0 +1,3 @@
+[ssh_connection]
+retries = 5
+scp_if_ssh = True
diff --git a/deploy/adapters/ansible/roles/config-deployment/templates/openstack_user_config.yml.j2 b/deploy/adapters/ansible/roles/config-deployment/templates/openstack_user_config.yml.j2
new file mode 100644 (file)
index 0000000..38e1478
--- /dev/null
@@ -0,0 +1,220 @@
+---
+cidr_networks:
+  container: 10.1.0.0/24
+  tunnel: 172.29.240.0/22
+  storage: 172.16.2.0/24
+
+used_ips:
+  - "10.1.0.1,10.1.0.55"
+  - "10.1.0.100,10.1.0.110"
+  - "172.29.240.1,172.29.240.50"
+  - "172.16.2.1,172.16.2.50"
+  - "172.29.248.1,172.29.248.50"
+
+global_overrides:
+  internal_lb_vip_address: 10.1.0.22
+  external_lb_vip_address: {{ public_vip.ip }}
+  tunnel_bridge: "br-vxlan"
+  management_bridge: "br-mgmt"
+  provider_networks:
+    - network:
+        container_bridge: "br-mgmt"
+        container_type: "veth"
+        container_interface: "eth1"
+        ip_from_q: "container"
+        type: "raw"
+        group_binds:
+          - all_containers
+          - hosts
+        is_container_address: true
+        is_ssh_address: true
+    - network:
+        container_bridge: "br-vxlan"
+        container_type: "veth"
+        container_interface: "eth10"
+        ip_from_q: "tunnel"
+        type: "vxlan"
+        range: "1:1000"
+        net_name: "vxlan"
+        group_binds:
+          - neutron_linuxbridge_agent
+    - network:
+        container_bridge: "br-vlan"
+        container_type: "veth"
+        container_interface: "eth12"
+        host_bind_override: "eth12"
+        type: "flat"
+        net_name: "flat"
+        group_binds:
+          - neutron_linuxbridge_agent
+    - network:
+        container_bridge: "br-vlan"
+        container_type: "veth"
+        container_interface: "eth11"
+        type: "vlan"
+        range: "1:1"
+        net_name: "vlan"
+        group_binds:
+          - neutron_linuxbridge_agent
+    - network:
+        container_bridge: "br-storage"
+        container_type: "veth"
+        container_interface: "eth2"
+        ip_from_q: "storage"
+        type: "raw"
+        group_binds:
+          - glance_api
+          - cinder_api
+          - cinder_volume
+          - nova_compute
+
+###
+### Infrastructure
+###
+
+# galera, memcache, rabbitmq, utility
+shared-infra_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# repository (apt cache, python packages, etc)
+repo-infra_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# load balancer
+# Ideally the load balancer should not use the Infrastructure hosts.
+# Dedicated hardware is best for improved performance and security.
+haproxy_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# rsyslog server
+#log_hosts:
+ # log1:
+ #  ip: 10.1.0.53
+
+###
+### OpenStack
+###
+
+# keystone
+identity_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# cinder api services
+storage-infra_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# glance
+# The settings here are repeated for each infra host.
+# They could instead be applied as global settings in
+# user_variables, but are left here to illustrate that
+# each container could have different storage targets.
+image_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+    container_vars:
+      limit_container_types: glance
+      glance_nfs_client:
+         - server: "{{ip_settings[groups.compute[0]]['storage']['ip']}}"
+           remote_path: "/images"
+           local_path: "/var/lib/glance/images"
+           type: "nfs"
+           options: "_netdev,auto"
+{% endfor %}
+
+# nova api, conductor, etc services
+compute-infra_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# heat
+orchestration_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# horizon
+dashboard_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# neutron server, agents (L3, etc)
+network_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# ceilometer (telemetry API)
+metering-infra_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# aodh (telemetry alarm service)
+metering-alarm_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# gnocchi (telemetry metrics storage)
+metrics_hosts:
+{% for host in groups.controller%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# nova hypervisors
+compute_hosts:
+{% for host in groups.compute%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# ceilometer compute agent (telemetry)
+metering-compute_hosts:
+{% for host in groups.compute%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+{% endfor %}
+
+# cinder volume hosts (LVM-backed)
+# The settings here are repeated for each infra host.
+# They could instead be applied as global settings in
+# user_variables, but are left here to illustrate that
+# each container could have different storage targets.
+storage_hosts:
+{% for host in groups.compute%}
+  {{host}}:
+    ip: {{ hostvars[host]['ansible_ssh_host'] }}
+    container_vars:
+      cinder_backends:
+        limit_container_types: cinder_volume
+        lvm:
+          volume_group: cinder-volumes
+          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
+          volume_backend_name: LVM_iSCSI
+          iscsi_ip_address: "{{ip_settings[host]['storage']['ip']}}"
+{% endfor %}
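Every `*_hosts` stanza in this template follows the same pattern: one entry per inventory host, keyed by hostname and carrying its `ansible_ssh_host` address. A Python stand-in for what the Jinja2 loops render, using a hypothetical two-controller inventory:

```python
def render_hosts(hosts, hostvars):
    """Mimic the '{% for host in groups.controller %}' blocks above."""
    return {h: {"ip": hostvars[h]["ansible_ssh_host"]} for h in hosts}

# Hypothetical two-controller inventory.
groups = {"controller": ["host1", "host2"]}
hostvars = {
    "host1": {"ansible_ssh_host": "10.1.0.50"},
    "host2": {"ansible_ssh_host": "10.1.0.51"},
}

shared_infra_hosts = render_hosts(groups["controller"], hostvars)
# {'host1': {'ip': '10.1.0.50'}, 'host2': {'ip': '10.1.0.51'}}
```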
diff --git a/deploy/adapters/ansible/roles/config-deployment/templates/user_variables.yml b/deploy/adapters/ansible/roles/config-deployment/templates/user_variables.yml
new file mode 100644 (file)
index 0000000..30b2c6b
--- /dev/null
@@ -0,0 +1,27 @@
+---
+# Copyright 2014, Rackspace US, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# ##
+# ## This file contains commonly used overrides for convenience. Please inspect
+# ## the defaults for each role to find additional override options.
+# ##
+
+# # Debug and Verbose options.
+debug: false
+
+haproxy_keepalived_external_vip_cidr: "{{ public_vip.ip }}/32"
+haproxy_keepalived_internal_vip_cidr: "10.1.0.22/32"
+haproxy_keepalived_external_interface: br-vlan
+haproxy_keepalived_internal_interface: br-mgmt
diff --git a/deploy/adapters/ansible/roles/pre-prepare/files/modules b/deploy/adapters/ansible/roles/pre-prepare/files/modules
new file mode 100644 (file)
index 0000000..c73925e
--- /dev/null
@@ -0,0 +1,7 @@
+# /etc/modules: kernel modules to load at boot time.
+# This file contains the names of kernel modules that should be loaded
+# at boot time, one per line. Lines beginning with "#" are ignored.
+# Parameters can be specified after the module name.
+
+bonding
+8021q
diff --git a/deploy/adapters/ansible/roles/pre-prepare/tasks/main.yml b/deploy/adapters/ansible/roles/pre-prepare/tasks/main.yml
new file mode 100644 (file)
index 0000000..5bd38f1
--- /dev/null
@@ -0,0 +1,74 @@
+##############################################################################
+# Copyright (c) 2016 HUAWEI TECHNOLOGIES CO.,LTD and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+- name: make sure ssh dir exists
+  file:
+    path: '{{ item.path }}'
+    owner: '{{ item.owner }}'
+    group: '{{ item.group }}'
+    state: directory
+    mode: 0755
+  with_items:
+    - path: /root/.ssh
+      owner: root
+      group: root
+
+- name: write ssh config
+  copy:
+    content: "UserKnownHostsFile /dev/null\nStrictHostKeyChecking no"
+    dest: '{{ item.dest }}'
+    owner: '{{ item.owner }}'
+    group: '{{ item.group }}'
+    mode: 0600
+  with_items:
+    - dest: /root/.ssh/config
+      owner: root
+      group: root
+
+- name: generate ssh keys
+  shell: if [ ! -f ~/.ssh/id_rsa.pub ]; \
+         then ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""; \
+         else echo "already gen ssh key!"; fi;
+
+- name: fetch ssh keys
+  fetch:
+    src: /root/.ssh/id_rsa.pub
+    dest: /tmp/ssh-keys-{{ ansible_hostname }}
+    flat: "yes"
+
+- authorized_key:
+    user: root
+    key: "{{ lookup('file', item) }}"
+  with_fileglob:
+    - /tmp/ssh-keys-*
+    - /root/.ssh/id_rsa.pub
+
+- name: change sources list
+  template:
+    src: sources.list
+    dest: /etc/apt/sources.list
+
+- name: rm apt.conf
+  file:
+    path: /etc/apt/apt.conf
+    state: absent
+
+- name: Install apt packages
+  apt:
+    pkg: "{{ item }}"
+    state: "present"
+  with_items: "{{ packages }}"
+
+- name: restart ntp service
+  shell: "service ntp restart"
+
+- name: add the appropriate kernel modules
+  copy:
+    src: modules
+    dest: /etc/modules
diff --git a/deploy/adapters/ansible/roles/pre-prepare/templates/sources.list b/deploy/adapters/ansible/roles/pre-prepare/templates/sources.list
new file mode 100644 (file)
index 0000000..1c3ab41
--- /dev/null
@@ -0,0 +1,56 @@
+#
+
+# deb cdrom:[Ubuntu-Server 16.04.1 LTS _Xenial Xerus_ - Release amd64 (20160719)]/ xenial main restricted
+
+#deb cdrom:[Ubuntu-Server 16.04.1 LTS _Xenial Xerus_ - Release amd64 (20160719)]/ xenial main restricted
+
+# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
+# newer versions of the distribution.
+deb http://hk.archive.ubuntu.com/ubuntu/ xenial main restricted
+# deb-src http://hk.archive.ubuntu.com/ubuntu/ xenial main restricted
+
+## Major bug fix updates produced after the final release of the
+## distribution.
+deb http://hk.archive.ubuntu.com/ubuntu/ xenial-updates main restricted
+# deb-src http://hk.archive.ubuntu.com/ubuntu/ xenial-updates main restricted
+
+## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
+## team. Also, please note that software in universe WILL NOT receive any
+## review or updates from the Ubuntu security team.
+deb http://hk.archive.ubuntu.com/ubuntu/ xenial universe
+# deb-src http://hk.archive.ubuntu.com/ubuntu/ xenial universe
+deb http://hk.archive.ubuntu.com/ubuntu/ xenial-updates universe
+# deb-src http://hk.archive.ubuntu.com/ubuntu/ xenial-updates universe
+
+## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
+## team, and may not be under a free licence. Please satisfy yourself as to
+## your rights to use the software. Also, please note that software in
+## multiverse WILL NOT receive any review or updates from the Ubuntu
+## security team.
+deb http://hk.archive.ubuntu.com/ubuntu/ xenial multiverse
+# deb-src http://hk.archive.ubuntu.com/ubuntu/ xenial multiverse
+deb http://hk.archive.ubuntu.com/ubuntu/ xenial-updates multiverse
+# deb-src http://hk.archive.ubuntu.com/ubuntu/ xenial-updates multiverse
+
+## N.B. software from this repository may not have been tested as
+## extensively as that contained in the main release, although it includes
+## newer versions of some applications which may provide useful features.
+## Also, please note that software in backports WILL NOT receive any review
+## or updates from the Ubuntu security team.
+deb http://hk.archive.ubuntu.com/ubuntu/ xenial-backports main restricted universe multiverse
+# deb-src http://hk.archive.ubuntu.com/ubuntu/ xenial-backports main restricted universe multiverse
+
+## Uncomment the following two lines to add software from Canonical's
+## 'partner' repository.
+## This software is not part of Ubuntu, but is offered by Canonical and the
+## respective vendors as a service to Ubuntu users.
+# deb http://archive.canonical.com/ubuntu xenial partner
+# deb-src http://archive.canonical.com/ubuntu xenial partner
+
+deb http://security.ubuntu.com/ubuntu xenial-security main restricted
+# deb-src http://security.ubuntu.com/ubuntu xenial-security main restricted
+deb http://security.ubuntu.com/ubuntu xenial-security universe
+# deb-src http://security.ubuntu.com/ubuntu xenial-security universe
+deb http://security.ubuntu.com/ubuntu xenial-security multiverse
+# deb-src http://security.ubuntu.com/ubuntu xenial-security multiverse
+
diff --git a/deploy/adapters/ansible/roles/pre-prepare/vars/main.yml b/deploy/adapters/ansible/roles/pre-prepare/vars/main.yml
new file mode 100644 (file)
index 0000000..66cf66b
--- /dev/null
@@ -0,0 +1,13 @@
+---
+packages:
+- bridge-utils
+- debootstrap
+- ifenslave
+- ifenslave-2.6
+- lsof
+- lvm2
+- ntp
+- ntpdate
+- sudo
+- vlan
+- tcpdump
diff --git a/deploy/adapters/ansible/roles/setup-host/tasks/main.yml b/deploy/adapters/ansible/roles/setup-host/tasks/main.yml
new file mode 100644 (file)
index 0000000..f0b1051
--- /dev/null
@@ -0,0 +1,27 @@
+##############################################################################
+# Copyright (c) 2016 HUAWEI TECHNOLOGIES CO.,LTD and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+- name: setup hosts
+  shell: "export ANSIBLE_LOG_PATH=/var/ansible/run/openstack_ocata-opnfv2/ansible.log; \
+          export ANSIBLE_SCP_IF_SSH=y; \
+          cd /opt/openstack-ansible/playbooks; \
+          openstack-ansible setup-hosts.yml \
+             | tee -a /var/log/osa/host.log > /dev/null"
+
+- name: read the ansible log file
+  shell: tail -n 1000 /var/log/osa/host.log
+  register: setup_host_result
+
+- fail:
+    msg: "some tasks failed during host setup."
+  when: setup_host_result.stdout.find('failed=1') != -1
+
+- fail:
+    msg: "some hosts are unreachable."
+  when: setup_host_result.stdout.find('unreachable=1') != -1
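The two `fail` tasks above grep the tail of the log for the `failed=` and `unreachable=` counters that ansible prints in its `PLAY RECAP`; note that the literal substring `failed=1` would miss a recap reporting `failed=2`. A slightly more robust check could parse the counters numerically, as in this sketch (the recap line shown is a made-up example):

```python
import re

def run_failed(recap_text):
    """True if any PLAY RECAP line reports failed or unreachable hosts."""
    for m in re.finditer(r"unreachable=(\d+)\s+failed=(\d+)", recap_text):
        if int(m.group(1)) or int(m.group(2)):
            return True
    return False

# Made-up recap line for illustration.
recap = "controller1 : ok=12 changed=3 unreachable=0 failed=2"
print(run_failed(recap))  # True, where a 'failed=1' substring test says False
```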
diff --git a/deploy/adapters/ansible/roles/setup-infrastructure/tasks/main.yml b/deploy/adapters/ansible/roles/setup-infrastructure/tasks/main.yml
new file mode 100644 (file)
index 0000000..5b70aee
--- /dev/null
@@ -0,0 +1,27 @@
+##############################################################################
+# Copyright (c) 2016 HUAWEI TECHNOLOGIES CO.,LTD and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+- name: setup infrastructure
+  shell: "export ANSIBLE_LOG_PATH=/var/ansible/run/openstack_ocata-opnfv2/ansible.log; \
+          export ANSIBLE_SCP_IF_SSH=y; \
+          cd /opt/openstack-ansible/playbooks; \
+          openstack-ansible setup-infrastructure.yml \
+             | tee -a /var/log/osa/infrastructure.log > /dev/null"
+
+- name: read the ansible log file
+  shell: tail -n 1000 /var/log/osa/infrastructure.log
+  register: setup_infrastructure_result
+
+- fail:
+    msg: "some tasks failed during infrastructure setup."
+  when: setup_infrastructure_result.stdout.find('failed=1') != -1
+
+- fail:
+    msg: "some hosts are unreachable."
+  when: setup_infrastructure_result.stdout.find('unreachable=1') != -1
diff --git a/deploy/adapters/ansible/roles/setup-openstack/tasks/main.yml b/deploy/adapters/ansible/roles/setup-openstack/tasks/main.yml
new file mode 100644 (file)
index 0000000..e577024
--- /dev/null
@@ -0,0 +1,27 @@
+##############################################################################
+# Copyright (c) 2016 HUAWEI TECHNOLOGIES CO.,LTD and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+- name: setup openstack
+  shell: "export ANSIBLE_LOG_PATH=/var/ansible/run/openstack_ocata-opnfv2/ansible.log; \
+          export ANSIBLE_SCP_IF_SSH=y; \
+          cd /opt/openstack-ansible/playbooks; \
+          openstack-ansible setup-openstack.yml \
+             | tee -a /var/log/osa/openstack.log > /dev/null"
+
+- name: read the ansible log file
+  shell: tail -n 1000 /var/log/osa/openstack.log
+  register: setup_openstack_result
+
+- fail:
+    msg: "some tasks failed during OpenStack setup."
+  when: setup_openstack_result.stdout.find('failed=1') != -1
+
+- fail:
+    msg: "some hosts are unreachable."
+  when: setup_openstack_result.stdout.find('unreachable=1') != -1
index d1dec7b..379809a 100644 (file)
@@ -76,7 +76,7 @@ cat << EOF > /etc/init.d/set_state
 #
 #end raw
 
-curl -H "Content-Type: application/json" -X POST -d '{"ready": true}' "http://$srv/api/hosts/${hostname}/state_internal"
+curl -H "Content-Type: application/json" -X POST -d '{"ready": true}' "http://$srv:5050/api/hosts/${hostname}/state_internal"
 chkconfig set_state off
 mv /etc/init.d/set_state /tmp/set_state
 EOF
index 76bbfad..a5658e1 100644 (file)
@@ -67,7 +67,7 @@ cat << EOF > /etc/init.d/set_state
 #              installation.
 #end raw
 sleep 100
-wget -O /tmp/os_state --post-data='{"ready": true}' --header=Content-Type:application/json "http://$srv/api/hosts/${hostname}/state_internal"
+wget -O /tmp/os_state --post-data='{"ready": true}' --header=Content-Type:application/json "http://$srv:5050/api/hosts/${hostname}/state_internal"
 update-rc.d -f set_state remove
 mv /etc/init.d/set_state /tmp/set_state
 EOF
index a0d7064..433d90e 100644 (file)
@@ -25,6 +25,7 @@ import requests
 import json
 import itertools
 import threading
+import multiprocessing
 from collections import defaultdict
 from restful import Client
 import log as logging
@@ -192,6 +193,12 @@ opts = [
     cfg.IntOpt('action_timeout',
                help='action timeout in seconds',
                default=60),
+    cfg.IntOpt('install_os_timeout',
+               help='OS install timeout in minutes',
+               default=60),
+    cfg.IntOpt('ansible_print_wait',
+               help='wait for ansible-playbook output to be ready',
+               default=5),
     cfg.IntOpt('deployment_timeout',
                help='deployment timeout in minutes',
                default=60),
@@ -883,7 +890,55 @@ class CompassClient(object):
 
         return status, cluster_state
 
-    def get_installing_progress(self, cluster_id):
+    def get_ansible_print(self):
+        def print_log(log):
+            try:
+                with open(log, 'r') as file:
+                    while True:
+                        line = file.readline()
+                        if not line:
+                            time.sleep(0.1)
+                            continue
+                        line = line.replace('\n', '')
+                        print line
+                        sys.stdout.flush()
+            except IOError:
+                raise RuntimeError("cannot open ansible.log")
+
+        current_time = time.time()
+        install_timeout = current_time + 60 * CONF.install_os_timeout
+        while current_time < install_timeout:
+            ready = True
+            for id in self.host_mapping.values():
+                status, response = self.client.get_host_state(id)
+                if response['state'] != 'SUCCESSFUL':
+                    ready = False
+                    break
+
+            current_time = time.time()
+            if not ready:
+                time.sleep(8)
+            else:
+                break
+
+        if current_time >= install_timeout:
+            raise RuntimeError("OS installation timeout")
+        else:
+            LOG.info("OS installation complete")
+
+        # time.sleep(CONF.ansible_print_wait)
+        compass_dir = os.getenv('COMPASS_DIR')
+        ansible_log = "%s/work/deploy/docker/ansible/run/%s-%s/ansible.log" \
+                      % (compass_dir, CONF.adapter_name, CONF.cluster_name)
+        os.system("sudo touch %s" % ansible_log)
+        os.system("sudo chmod +x -R %s/work/deploy/docker/ansible/run/"
+                  % compass_dir)
+        ansible_print = multiprocessing.Process(target=print_log,
+                                                args=(ansible_log,))
+        ansible_print.start()
+        return ansible_print
+
+    def get_installing_progress(self, cluster_id, ansible_print):
         def _get_installing_progress():
             """get installing progress."""
             deployment_timeout = time.time() + 60 * float(CONF.deployment_timeout)  # noqa
@@ -905,23 +960,20 @@ class CompassClient(object):
                         (cluster_id, status, cluster_state)
                     )
 
-                time.sleep(5)
+                time.sleep(10)
 
             if current_time() >= deployment_timeout:
                 LOG.info("current_time=%s, deployment_timeout=%s"
                          % (current_time(), deployment_timeout))
                 LOG.info("cobbler status:")
-                os.system("ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
-                                 -i %s root@192.168.200.2 \
-                                 'cobbler status'" % (CONF.rsa_file))
+                os.system("sudo docker exec compass-cobbler bash -c \
+                          'cobbler status'")
                 raise RuntimeError("installation timeout")
 
         try:
             _get_installing_progress()
         finally:
-            # do this twice, make sure process be killed
-            kill_print_proc()
-            kill_print_proc()
+            ansible_print.terminate()
 
     def check_dashboard_links(self, cluster_id):
         dashboard_url = CONF.dashboard_url
@@ -946,17 +998,11 @@ class CompassClient(object):
 
 
 def print_ansible_log():
-    os.system("ssh -o StrictHostKeyChecking=no -o \
-              UserKnownHostsFile=/dev/null -i %s root@192.168.200.2 \
-              'while ! tail -f \
-              /var/ansible/run/%s-%s/ansible.log 2>/dev/null; do :; \
-              sleep 1; done'" %
-              (CONF.rsa_file, CONF.adapter_name, CONF.cluster_name))
+    pass
 
 
 def kill_print_proc():
-    os.system(
-        "ps aux|grep -v grep|grep -E 'ssh.+root@192.168.200.2'|awk '{print $2}'|xargs kill -9")   # noqa
+    pass
 
 
 def deploy():
@@ -981,8 +1027,8 @@ def deploy():
         client.deploy_clusters(cluster_id)
 
         LOG.info("compass OS installtion is begin")
-        threading.Thread(target=print_ansible_log).start()
-        client.get_installing_progress(cluster_id)
+        ansible_print = client.get_ansible_print()
+        client.get_installing_progress(cluster_id, ansible_print)
         client.check_dashboard_links(cluster_id)
 
     else:
@@ -1,7 +1,7 @@
-NAME = 'openstack_newton'
-DISPLAY_NAME = 'Openstack Newton'
+NAME = 'openstack_ocata'
+DISPLAY_NAME = 'Openstack Ocata'
 PARENT = 'openstack'
-PACKAGE_INSTALLER = 'ansible_installer_newton'
+PACKAGE_INSTALLER = 'ansible_installer_ocata'
 OS_INSTALLER = 'cobbler'
 SUPPORTED_OS_PATTERNS = ['(?i)ubuntu-16\.04', '(?i)CentOS-7.*16.*']
 DEPLOYABLE = True
index f491127..4b4fd55 100755 (executable)
@@ -1,9 +1,12 @@
 ## Celery related setting: this is the default setting once we install RabbitMQ
 
 CELERY_RESULT_BACKEND ="amqp://"
+BROKER_URL = "amqp://guest:guest@compass-mq:5672//"
 
-BROKER_URL = "amqp://guest:guest@localhost:5672//"
 
 CELERY_IMPORTS=("compass.tasks.tasks",)
 CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
 C_FORCE_ROOT = 1
+CELERY_DEFAULT_QUEUE = 'admin@huawei.com'
+CELERY_DEFAULT_EXCHANGE = 'admin@huawei.com'
+CELERY_DEFAULT_ROUTING_KEY = 'admin@huawei.com'
similarity index 87%
rename from deploy/compass_conf/flavor/openstack_newton.conf
rename to deploy/compass_conf/flavor/openstack_ocata.conf
index 2861ccd..c532ac6 100755 (executable)
@@ -1,4 +1,4 @@
-ADAPTER_NAME = 'openstack_newton'
+ADAPTER_NAME = 'openstack_ocata'
 FLAVORS = [{
     'flavor': 'allinone',
     'display_name': 'All-In-One',
@@ -21,8 +21,8 @@ FLAVORS = [{
         'dashboard', 'identity', 'storage-controller', 'storage-volume'
     ],
 }, {
-    'flavor': 'HA-ansible-multinodes-newton',
-    'display_name': 'HA-ansible-multinodes-newton',
+    'flavor': 'HA-ansible-multinodes-ocata',
+    'display_name': 'HA-ansible-multinodes-ocata',
     'template': 'HA-ansible-multinodes.tmpl',
     'roles': [
         'controller', 'compute', 'ha', 'odl', 'onos', 'opencontrail', 'ceph', 'ceph-adm', 'ceph-mon', 'ceph-osd', 'sec-patch', 'ceph-osd-node'
@@ -1,5 +1,5 @@
-ADAPTER = 'openstack_newton'
-FLAVOR = 'HA-ansible-multinodes-newton'
+ADAPTER = 'openstack_ocata'
+FLAVOR = 'HA-ansible-multinodes-ocata'
 CONFIG_MAPPING = {
     "mapped_name": "flavor_config",
     "mapped_children": [{
@@ -1,5 +1,5 @@
-ADAPTER = 'openstack_newton'
-FLAVOR = 'HA-ansible-multinodes-newton'
+ADAPTER = 'openstack_ocata'
+FLAVOR = 'HA-ansible-multinodes-ocata'
 METADATA = {
     'ha_proxy': {
         '_self': {
@@ -1,5 +1,5 @@
 NAME = 'ansible_installer'
-INSTANCE_NAME = 'ansible_installer_newton'
+INSTANCE_NAME = 'ansible_installer_ocata'
 SETTINGS = {
     'ansible_dir': '/var/ansible',
     'ansible_run_dir': '/var/ansible/run',
@@ -8,6 +8,6 @@ SETTINGS = {
     'inventory_file': 'inventory.yml',
     'group_variable': 'all',
     'etc_hosts_path': 'roles/common/templates/hosts',
-    'runner_dirs': ['roles','openstack_newton/templates','openstack_newton/roles']
+    'runner_dirs': ['roles','openstack_ocata/templates','openstack_ocata/roles']
 }
 
diff --git a/deploy/compass_conf/repomd.xml b/deploy/compass_conf/repomd.xml
new file mode 100644 (file)
index 0000000..07dd65c
--- /dev/null
@@ -0,0 +1,55 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
+ <revision>1467350968</revision>
+<data type="filelists">
+  <checksum type="sha256">33b10dbe7bca8494bc1bec8cfb8edad979e2cba85fcbfe5f75cbcd6d246e7c28</checksum>
+  <open-checksum type="sha256">492476b2da75e2c3f7cf2c1aea8db833af88dcd8bd91e066182085443df9117d</open-checksum>
+  <location href="repodata/33b10dbe7bca8494bc1bec8cfb8edad979e2cba85fcbfe5f75cbcd6d246e7c28-filelists.xml.gz"/>
+  <timestamp>1467350969</timestamp>
+  <size>122292</size>
+  <open-size>1846481</open-size>
+</data>
+<data type="primary">
+  <checksum type="sha256">aecb3a50baa1503202f5045f77bad0ac381972fb91fb76c0facc49a51bd96ae5</checksum>
+  <open-checksum type="sha256">69a62db980dbe216e75bf0268dd960f2e5d427eb99e3c7c7d3cbe2208cae99c7</open-checksum>
+  <location href="repodata/aecb3a50baa1503202f5045f77bad0ac381972fb91fb76c0facc49a51bd96ae5-primary.xml.gz"/>
+  <timestamp>1467350969</timestamp>
+  <size>56883</size>
+  <open-size>400637</open-size>
+</data>
+<data type="primary_db">
+  <checksum type="sha256">413dc2303655da7638dbc38b2283df67b3ce9c9281d9c7d1c67afb9e85b8304a</checksum>
+  <open-checksum type="sha256">1422ad2d3bf66317336c5ca0b6bf76e20bbf156edafdb9bded408c0ddf6170b7</open-checksum>
+  <location href="repodata/413dc2303655da7638dbc38b2283df67b3ce9c9281d9c7d1c67afb9e85b8304a-primary.sqlite.bz2"/>
+  <timestamp>1467350970</timestamp>
+  <database_version>10</database_version>
+  <size>114072</size>
+  <open-size>525312</open-size>
+</data>
+<data type="other_db">
+  <checksum type="sha256">2b6c78eb1fd91f6619a995e28d252eb06b8a1ddb3f32513b4ccd18d42beba092</checksum>
+  <open-checksum type="sha256">627d5c93c6e2693cf1e0f1b7fd4dcfc507ce48caa68f619ec0b54f7f87c19a7a</open-checksum>
+  <location href="repodata/2b6c78eb1fd91f6619a995e28d252eb06b8a1ddb3f32513b4ccd18d42beba092-other.sqlite.bz2"/>
+  <timestamp>1467350969</timestamp>
+  <database_version>10</database_version>
+  <size>69387</size>
+  <open-size>284672</open-size>
+</data>
+<data type="other">
+  <checksum type="sha256">e8bc06739d823d3f3104db4a1f043da9c2ac8a23eddfd59e56f46d56a94ccad3</checksum>
+  <open-checksum type="sha256">dcabd4f594e2d696dbbec944756777b01cb74ba3908b5bea9d95afa022e66d1c</open-checksum>
+  <location href="repodata/e8bc06739d823d3f3104db4a1f043da9c2ac8a23eddfd59e56f46d56a94ccad3-other.xml.gz"/>
+  <timestamp>1467350969</timestamp>
+  <size>57313</size>
+  <open-size>305408</open-size>
+</data>
+<data type="filelists_db">
+  <checksum type="sha256">cf9a38da9e0a6eed7c0e10a14f933e2bc6b6b29ed1d051174722ad58764d4f59</checksum>
+  <open-checksum type="sha256">d7c3ba6491ba8b885c7336984bf304d5982865fd3ed03dc30de654d56d82b178</open-checksum>
+  <location href="repodata/cf9a38da9e0a6eed7c0e10a14f933e2bc6b6b29ed1d051174722ad58764d4f59-filelists.sqlite.bz2"/>
+  <timestamp>1467350970</timestamp>
+  <database_version>10</database_version>
+  <size>164933</size>
+  <open-size>1020928</open-size>
+</data>
+</repomd>
@@ -1,4 +1,4 @@
-ADAPTER_NAME = 'openstack_newton'
+ADAPTER_NAME = 'openstack_ocata'
 ROLES = [{
     'role': 'allinone-compute',
     'display_name': 'all in one',
index a1e9bff..d385939 100755 (executable)
@@ -2,7 +2,7 @@ CONFIG_DIR = '/etc/compass'
 DATABASE_TYPE = 'mysql'
 DATABASE_USER = 'root'
 DATABASE_PASSWORD = 'root'
-DATABASE_SERVER = '127.0.0.1:3306'
+DATABASE_SERVER = 'compass-db:3306'
 DATABASE_NAME = 'compass'
 SQLALCHEMY_DATABASE_URI = '%s://%s:%s@%s/%s' % (DATABASE_TYPE, DATABASE_USER, DATABASE_PASSWORD, DATABASE_SERVER, DATABASE_NAME)
 SQLALCHEMY_DATABASE_POOL_TYPE = 'instant'
diff --git a/deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/allinone.tmpl b/deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/allinone.tmpl
deleted file mode 100755 (executable)
index 8f0d3db..0000000
+++ /dev/null
@@ -1,6 +0,0 @@
-#set cluster_name = $getVar('name', '')
-[defaults]
-log_path = /var/ansible/run/openstack_newton-$cluster_name/ansible.log
-host_key_checking = False
-callback_plugins = /opt/compass/bin/ansible_callbacks
-pipelining=True
diff --git a/deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/multinodes.tmpl b/deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/multinodes.tmpl
deleted file mode 100755 (executable)
index 8f0d3db..0000000
+++ /dev/null
@@ -1,6 +0,0 @@
-#set cluster_name = $getVar('name', '')
-[defaults]
-log_path = /var/ansible/run/openstack_newton-$cluster_name/ansible.log
-host_key_checking = False
-callback_plugins = /opt/compass/bin/ansible_callbacks
-pipelining=True
diff --git a/deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/single-controller.tmpl b/deploy/compass_conf/templates/ansible_installer/openstack_newton/ansible_cfg/single-controller.tmpl
deleted file mode 100755 (executable)
index 8f0d3db..0000000
+++ /dev/null
@@ -1,6 +0,0 @@
-#set cluster_name = $getVar('name', '')
-[defaults]
-log_path = /var/ansible/run/openstack_newton-$cluster_name/ansible.log
-host_key_checking = False
-callback_plugins = /opt/compass/bin/ansible_callbacks
-pipelining=True
@@ -1,11 +1,11 @@
 #set cluster_name = $getVar('name', '')
 [defaults]
-log_path = /var/ansible/run/openstack_newton-$cluster_name/ansible.log
+log_path = /var/ansible/run/openstack_ocata-$cluster_name/ansible.log
 host_key_checking = False
 callback_whitelist = playbook_done, status_callback
-callback_plugins = /opt/compass/bin/ansible_callbacks
-library = /opt/ansible-modules
+callback_plugins = /opt/ansible_callbacks
 forks=100
 
 [ssh_connection]
 pipelining=True
+retries = 5
diff --git a/deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/allinone.tmpl b/deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/allinone.tmpl
new file mode 100755 (executable)
index 0000000..7114aa1
--- /dev/null
@@ -0,0 +1,6 @@
+#set cluster_name = $getVar('name', '')
+[defaults]
+log_path = /var/ansible/run/openstack_ocata-$cluster_name/ansible.log
+host_key_checking = False
+callback_plugins = /opt/ansible_callbacks
+pipelining=True
diff --git a/deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/multinodes.tmpl b/deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/multinodes.tmpl
new file mode 100755 (executable)
index 0000000..7114aa1
--- /dev/null
@@ -0,0 +1,6 @@
+#set cluster_name = $getVar('name', '')
+[defaults]
+log_path = /var/ansible/run/openstack_ocata-$cluster_name/ansible.log
+host_key_checking = False
+callback_plugins = /opt/ansible_callbacks
+pipelining=True
diff --git a/deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/single-controller.tmpl b/deploy/compass_conf/templates/ansible_installer/openstack_ocata/ansible_cfg/single-controller.tmpl
new file mode 100755 (executable)
index 0000000..7114aa1
--- /dev/null
@@ -0,0 +1,6 @@
+#set cluster_name = $getVar('name', '')
+[defaults]
+log_path = /var/ansible/run/openstack_ocata-$cluster_name/ansible.log
+host_key_checking = False
+callback_plugins = /opt/ansible_callbacks
+pipelining=True
 #for controller in $controllers
     #set controller_ip = $controller.install.ip
     #set controller_hostname = $controller.hostname
-$controller_hostname ansible_ssh_host=$controller_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$controller_hostname ansible_ssh_host=$controller_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [compute]
 #for compute in $computes
     #set compute_ip = $compute.install.ip
     #set compute_hostname = $compute.hostname
-$compute_hostname ansible_ssh_host=$compute_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$compute_hostname ansible_ssh_host=$compute_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [ha]
 #for ha in $has
     #set ha_ip = $ha.install.ip
     #set ha_hostname = $ha.hostname
-$ha_hostname ansible_ssh_host=$ha_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$ha_hostname ansible_ssh_host=$ha_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [odl]
 #for odl in $odls
     #set odl_ip = $odl.install.ip
     #set odl_hostname = $odl.hostname
-$odl_hostname ansible_ssh_host=$odl_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$odl_hostname ansible_ssh_host=$odl_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [onos]
 #for onos in $onoss
     #set onos_ip = $onos.install.ip
     #set onos_hostname = $onos.hostname
-$onos_hostname ansible_ssh_host=$onos_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$onos_hostname ansible_ssh_host=$onos_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [opencontrail]
 #for opencontrail in $opencontrails
     #set opencontrail_ip = $opencontrail.install.ip
     #set opencontrail_hostname = $opencontrail.hostname
-$opencontrail_hostname ansible_ssh_host=$opencontrail_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$opencontrail_hostname ansible_ssh_host=$opencontrail_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [ceph_adm]
 #for ceph_adm in $ceph_adm_list
     #set ceph_adm_ip = $ceph_adm.install.ip
     #set ceph_adm_hostname = $ceph_adm.hostname
-$ceph_adm_hostname ansible_ssh_host=$ceph_adm_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$ceph_adm_hostname ansible_ssh_host=$ceph_adm_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [ceph_mon]
 #for ceph_mon in $ceph_mon_list
     #set ceph_mon_ip = $ceph_mon.install.ip
     #set ceph_mon_hostname = $ceph_mon.hostname
-$ceph_mon_hostname ansible_ssh_host=$ceph_mon_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$ceph_mon_hostname ansible_ssh_host=$ceph_mon_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [ceph_osd]
 #for ceph_osd in $ceph_osd_list
     #set ceph_osd_ip = $ceph_osd.install.ip
     #set ceph_osd_hostname = $ceph_osd.hostname
-$ceph_osd_hostname ansible_ssh_host=$ceph_osd_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$ceph_osd_hostname ansible_ssh_host=$ceph_osd_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [ceph:children]
 ceph_adm
 #for controller in $controllers
     #set controller_ip = $controller.management.ip
     #set controller_hostname = $controller.hostname
-$controller_hostname ansible_ssh_host=$controller_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$controller_hostname ansible_ssh_host=$controller_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [compute]
 #for compute in $computes
     #set compute_ip = $compute.management.ip
     #set compute_hostname = $compute.hostname
-$compute_hostname ansible_ssh_host=$compute_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$compute_hostname ansible_ssh_host=$compute_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [network]
 #for network in $networks
     #set network_ip = $network.management.ip
     #set network_hostname = $network.hostname
-$network_hostname ansible_ssh_host=$network_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$network_hostname ansible_ssh_host=$network_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [storage]
 #for storage in storages
     #set storage_ip = $storage.management.ip
     #set storage_hostname = $storage.hostname
-$storage_hostname ansible_ssh_host=$storage_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$storage_hostname ansible_ssh_host=$storage_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 #for controller in $compute_controllers
     #set controller_ip = $controller.management.ip
     #set controller_hostname = $controller.hostname
-$controller_hostname ansible_ssh_host=$controller_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$controller_hostname ansible_ssh_host=$controller_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [compute-worker]
 #for compute in $compute_workers
     #set compute_ip = $compute.management.ip
     #set compute_hostname = $compute.hostname
-$compute_hostname ansible_ssh_host=$compute_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$compute_hostname ansible_ssh_host=$compute_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [network-server]
 #for network in $network_servers
     #set network_ip = $network.management.ip
     #set network_hostname = $network.hostname
-$network_hostname ansible_ssh_host=$network_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$network_hostname ansible_ssh_host=$network_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [network-worker]
 #for network in $network_workers
     #set network_ip = $network.management.ip
     #set network_hostname = $network.hostname
-$network_hostname ansible_ssh_host=$network_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$network_hostname ansible_ssh_host=$network_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [database]
 #for worker in $databases
     #set worker_ip = $worker.management.ip
     #set worker_hostname = $worker.hostname
-$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [messaging]
 #for worker in $messagings
     #set worker_ip = $worker.management.ip
     #set worker_hostname = $worker.hostname
-$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [image]
 #for worker in $images
     #set worker_ip = $worker.management.ip
     #set worker_hostname = $worker.hostname
-$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [dashboard]
 #for worker in $dashboards
     #set worker_ip = $worker.management.ip
     #set worker_hostname = $worker.hostname
-$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [identity]
 #for worker in $identities
     #set worker_ip = $worker.management.ip
     #set worker_hostname = $worker.hostname
-$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [storage-controller]
 #for worker in $storage_controllers
     #set worker_ip = $worker.management.ip
     #set worker_hostname = $worker.hostname
-$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [storage-volume]
 #for worker in $storage_volumes
     #set worker_ip = $worker.management.ip
     #set worker_hostname = $worker.hostname
-$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$worker_hostname ansible_ssh_host=$worker_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 #for controller in $controllers
     #set controller_ip = $controller.management.ip
     #set controller_hostname = $controller.hostname
-$controller_hostname ansible_ssh_host=$controller_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$controller_hostname ansible_ssh_host=$controller_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [compute]
 #for compute in $computes
     #set compute_ip = $compute.management.ip
     #set compute_hostname = $compute.hostname
-$compute_hostname ansible_ssh_host=$compute_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$compute_hostname ansible_ssh_host=$compute_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [network]
 #for network in $networks
     #set network_ip = $network.management.ip
     #set network_hostname = $network.hostname
-$network_hostname ansible_ssh_host=$network_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$network_hostname ansible_ssh_host=$network_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 
 [storage]
 #for storage in storages
     #set storage_ip = $storage.management.ip
     #set storage_hostname = $storage.hostname
-$storage_hostname ansible_ssh_host=$storage_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$storage_hostname ansible_ssh_host=$storage_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [odl]
 #for odl in odls
     #set odl_ip = $odl.management.ip
     #set odl_hostname = $odl.hostname
-$odl_hostname ansible_ssh_host=$odl_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$odl_hostname ansible_ssh_host=$odl_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
 [storage]
 #for storage in storages
     #set storage_ip = $storage.management.ip
     #set storage_hostname = $storage.hostname
-$storage_hostname ansible_ssh_host=$storage_ip ansible_ssh_user=$username ansible_ssh_password=$password
+$storage_hostname ansible_ssh_host=$storage_ip ansible_ssh_user=$username ansible_ssh_pass=$password
 #end for
@@ -8,6 +8,35 @@
 #set $sys_intf_mappings[$intf_info["name"]] = $intf_info
 #end for
 
+#set controllers = $getVar('controller', [])
+#set computes = $getVar('compute', [])
+#set vlan_ip_sec_start = $getVar('vlan_ip_sec_start', '173.29.241.1')
+#set vxlan_ip_start = $getVar('vxlan_ip_start', '172.29.240.13')
+
+#def ipadd($ip, $inc)
+    #set list = $ip.split('.')
+    #set $list[3] = str(int($list[3]) + $inc)
+    #set res = '.'.join($list)
+$res
+#end def
+
+#set host_info = {}
+#for host in controllers
+    #set $host_info[$host['hostname']] = {'MGMT_IP': $host['install']['ip']}
+#end for
+
+#set inc = 0
+#for host in computes
+    #set info = {}
+    #set $info['MGMT_IP'] = $host['install']['ip']
+    #set $info['VLAN_IP_SECOND'] = $ipadd($vlan_ip_sec_start, $inc).strip('\n').encode('utf-8')
+    #set $info['VXLAN_IP'] = $ipadd($vxlan_ip_start, $inc).strip('\n').encode('utf-8')
+    #set $host_info[$host['hostname']] = $info
+    #set $inc = $inc + 1
+#end for
+
+host_info: $host_info
+
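The Cheetah `ipadd` helper added above increments the last octet of a dotted-quad address to hand out consecutive VLAN/VXLAN IPs per compute host. The same logic in plain Python (a sketch; like the template, it does not guard against octet overflow past 255):

```python
def ipadd(ip, inc):
    """Add inc to the last octet of a dotted-quad IPv4 string."""
    octets = ip.split('.')
    octets[3] = str(int(octets[3]) + inc)
    return '.'.join(octets)

# e.g. ipadd('172.29.240.13', 2) -> '172.29.240.15'
```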
 #set ip_settings={}
 #for k,v in $getVar('ip_settings', {}).items()
 #set host_ip_settings={}
@@ -23,9 +52,6 @@
 #set has = $getVar('ha', [])
 #set ha_vip = $getVar('ha_vip', [])
 
-#set controllers = $getVar('controller', [])
-#set computers = $getVar('compute', [])
-
 enable_secgroup: $getVar('enable_secgroup', True)
 enable_fwaas: $getVar('enable_fwaas', True)
 enable_vpnaas: $getVar('enable_vpnaas', True)
@@ -39,7 +65,7 @@ network_cfg: $network_cfg
 sys_intf_mappings: $sys_intf_mappings
 deploy_type: $getVar('deploy_type', 'virtual')
 
-public_cidr: $computers[0]['install']['subnet']
+public_cidr: $computes[0]['install']['subnet']
 storage_cidr: "{{ ip_settings[inventory_hostname]['storage']['cidr'] }}"
 mgmt_cidr: "{{ ip_settings[inventory_hostname]['mgmt']['cidr'] }}"
 
@@ -131,8 +157,8 @@ NTP_SERVER_LOCAL: "{{ controllers_host }}"
 DB_HOST: "{{ db_host }}"
 MQ_BROKER: rabbitmq
 
-OPENSTACK_REPO: cloudarchive-newton.list
-newton_cloud_archive: deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/newton main
+OPENSTACK_REPO: cloudarchive-ocata.list
+ocata_cloud_archive: deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/ocata main
 ADMIN_TOKEN: admin
 CEILOMETER_TOKEN: c095d479023a0fd58a54
 erlang.cookie: DJJVECFMCJPVYQTJTDWG
@@ -50,8 +50,8 @@ NTP_SERVER_LOCAL: "{{ controller_host }}"
 DB_HOST: "{{ controller_host }}"
 MQ_BROKER: rabbitmq
 
-OPENSTACK_REPO: cloudarchive-newton.list
-newton_cloud_archive: deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/newton main
+OPENSTACK_REPO: cloudarchive-ocata.list
+ocata_cloud_archive: deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/ocata main
 ADMIN_TOKEN: admin
 CEILOMETER_TOKEN: c095d479023a0fd58a54
 
@@ -111,8 +111,8 @@ NTP_SERVER_LOCAL: "{{ compute_controller_host }}"
 DB_HOST: "{{ db_host }}"
 MQ_BROKER: rabbitmq
 
-OPENSTACK_REPO: cloudarchive-newton.list
-newton_cloud_archive: deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/newton main
+OPENSTACK_REPO: cloudarchive-ocata.list
+ocata_cloud_archive: deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/ocata main
 ADMIN_TOKEN: admin
 CEILOMETER_TOKEN: c095d479023a0fd58a54
 
@@ -62,8 +62,8 @@ NTP_SERVER_LOCAL: "{{ controller_host }}"
 DB_HOST: "{{ controller_host }}"
 MQ_BROKER: rabbitmq
 
-OPENSTACK_REPO: cloudarchive-newton.list
-newton_cloud_archive: deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/newton main
+OPENSTACK_REPO: cloudarchive-ocata.list
+ocata_cloud_archive: deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/ocata main
 ADMIN_TOKEN: admin
 CEILOMETER_TOKEN: c095d479023a0fd58a54
 
index 6199371..42fca80 100755 (executable)
@@ -27,13 +27,11 @@ function install_compass_core() {
 }
 
 function set_compass_machine() {
-    local config_file=$WORK_DIR/installer/compass-install/install/group_vars/all
-
-    sed -i -e '/test: true/d' -e '/pxe_boot_macs/d' $config_file
-    echo "test: true" >> $config_file
+    local config_file=$WORK_DIR/installer/compass-docker-compose/group_vars/all
+    sed -i '/pxe_boot_macs/d' $config_file
     echo "pxe_boot_macs: [${machines}]" >> $config_file
 
-    install_compass "compass_machine.yml"
+    ansible-playbook $WORK_DIR/installer/compass-docker-compose/add_machine.yml
 }
 
 function install_compass() {
@@ -93,8 +91,9 @@ function inject_compass_conf() {
 }
 
 function refresh_compass_core () {
-    cmd="/opt/compass/bin/refresh.sh"
-    exec_cmd_on_compass $cmd
+    sudo docker exec compass-deck bash -c "/opt/compass/bin/manage_db.py createdb"
+    sudo docker exec compass-deck bash -c "/root/compass-deck/bin/clean_installers.py"
+    sudo rm -rf $WORK_DIR/docker/ansible/run/*
 }
 
 function wait_ok() {
@@ -128,86 +127,10 @@ function wait_ok() {
 }
 
 function launch_compass() {
-    local old_mnt=$compass_vm_dir/old
-    local new_mnt=$compass_vm_dir/new
-    local old_iso=$WORK_DIR/iso/centos.iso
-    local new_iso=$compass_vm_dir/centos.iso
-
-    log_info "launch_compass enter"
-    tear_down_compass
-
-    set -e
-    mkdir -p $compass_vm_dir $old_mnt
-    sudo mount -o loop $old_iso $old_mnt
-    cd $old_mnt;find .|cpio -pd $new_mnt;cd -
-
-    sudo umount $old_mnt
-
-    chmod 755 -R $new_mnt
-
-    cp $COMPASS_DIR/util/isolinux.cfg $new_mnt/isolinux/ -f
-    cp $COMPASS_DIR/util/ks.cfg $new_mnt/isolinux/ -f
-
-    sed -i -e "s/REPLACE_MGMT_IP/$MGMT_IP/g" \
-           -e "s/REPLACE_MGMT_NETMASK/$MGMT_MASK/g" \
-           -e "s/REPLACE_GW/$MGMT_GW/g" \
-           -e "s/REPLACE_INSTALL_IP/$COMPASS_SERVER/g" \
-           -e "s/REPLACE_INSTALL_NETMASK/$INSTALL_MASK/g" \
-           -e "s/REPLACE_COMPASS_EXTERNAL_NETMASK/$COMPASS_EXTERNAL_MASK/g" \
-           -e "s/REPLACE_COMPASS_EXTERNAL_IP/$COMPASS_EXTERNAL_IP/g" \
-           -e "s/REPLACE_COMPASS_EXTERNAL_GW/$COMPASS_EXTERNAL_GW/g" \
-           $new_mnt/isolinux/isolinux.cfg
-
-    if [[ -n $COMPASS_DNS1 ]]; then
-        sed -i -e "s/REPLACE_COMPASS_DNS1/$COMPASS_DNS1/g" $new_mnt/isolinux/isolinux.cfg
-    fi
-
-    if [[ -n $COMPASS_DNS2 ]]; then
-        sed -i -e "s/REPLACE_COMPASS_DNS2/$COMPASS_DNS2/g" $new_mnt/isolinux/isolinux.cfg
-    fi
-
-    ssh-keygen -f $new_mnt/bootstrap/boot.rsa -t rsa -N ''
-    cp $new_mnt/bootstrap/boot.rsa $rsa_file
-
-    rm -rf $new_mnt/.rr_moved $new_mnt/rr_moved
-    sudo mkisofs -quiet -r -J -R -b isolinux/isolinux.bin  -no-emul-boot -boot-load-size 4 -boot-info-table -hide-rr-moved -x "lost+found:" -o $new_iso $new_mnt
-
-    rm -rf $old_mnt $new_mnt
-
-    qemu-img create -f qcow2 $compass_vm_dir/disk.img 100G
-
-    # create vm xml
-    sed -e "s/REPLACE_MEM/$COMPASS_VIRT_MEM/g" \
-        -e "s/REPLACE_CPU/$COMPASS_VIRT_CPUS/g" \
-        -e "s#REPLACE_IMAGE#$compass_vm_dir/disk.img#g" \
-        -e "s#REPLACE_ISO#$compass_vm_dir/centos.iso#g" \
-        -e "s/REPLACE_NET_MGMT/mgmt/g" \
-        -e "s/REPLACE_NET_INSTALL/install/g" \
-        -e "s/REPLACE_NET_EXTERNAL/external/g" \
-        $COMPASS_DIR/deploy/template/vm/compass.xml \
-        > $WORK_DIR/vm/compass/libvirt.xml
+    local group_vars=$WORK_DIR/installer/compass-docker-compose/group_vars/all
+    sed -i "s#^\(compass_dir:\).*#\1 $COMPASS_DIR#g" $group_vars
 
-    sudo virsh define $compass_vm_dir/libvirt.xml
-    sudo virsh start compass
-
-    exit_status=$?
-    if [ $exit_status != 0 ];then
-        log_error "virsh start compass failed"
-        exit 1
-    fi
-
-    if ! wait_ok 500;then
-        log_error "install os timeout"
-        exit 1
-    fi
-
-    if ! install_compass_core;then
-        log_error "install compass core failed"
-        exit 1
-    fi
-
-    set +e
-    log_info "launch_compass exit"
+    ansible-playbook $WORK_DIR/installer/compass-docker-compose/bring_up_compass.yml
 }
 
 function recover_compass() {
@@ -282,8 +205,6 @@ function wait_controller_nodes_ok() {
 }
 
 function get_public_vip () {
-    ssh $ssh_args root@$MGMT_IP "
-        cd /var/ansible/run/$ADAPTER_NAME'-'$CLUSTER_NAME
-        cat group_vars/all | grep -A 3 public_vip: | sed -n '2p' |sed -e 's/  ip: //g'
-    "
+    cat $WORK_DIR/docker/ansible/run/$ADAPTER_NAME'-'$CLUSTER_NAME/group_vars/all \
+    | grep -A 3 public_vip: | sed -n '2p' |sed -e 's/  ip: //g'
 }
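The grep/sed pipeline above extracts the `ip` value from the indented block under `public_vip:` in the run directory's `group_vars/all` (the block layout is visible in the network config hunk further down). A rough Python equivalent, for illustration only — a YAML parser would be more robust against reordered keys:

```python
def get_public_vip(group_vars_text):
    """Return the ip value on the line immediately after 'public_vip:'."""
    lines = group_vars_text.splitlines()
    for i, line in enumerate(lines):
        if line.startswith('public_vip:'):
            # Mirrors `sed -n '2p'` + `sed 's/  ip: //'` on the grep window.
            return lines[i + 1].replace('  ip: ', '', 1)
    return None
```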
index 7b9d829..b474e28 100644 (file)
@@ -1,17 +1,17 @@
 export DHA=${DHA:-$COMPASS_DIR/deploy/conf/vm_environment/os-nosdn-nofeature-ha.yml}
 export NEUTRON=${NEUTRON:-$COMPASS_DIR/deploy/conf/neutron_cfg.yaml}
 export NETWORK=${NETWORK:-$COMPASS_DIR/deploy/conf/network_cfg.yaml}
-export ISO_URL=${ISO_URL:-file://`pwd`/work/building/compass.iso}
-export INSTALL_IP=${INSTALL_IP:-10.1.0.12}
+export TAR_URL=${TAR_URL:-file://`pwd`/work/building/compass.tar.gz}
+export INSTALL_IP=${INSTALL_IP:-10.1.0.1}
 export INSTALL_MASK=${INSTALL_MASK:-255.255.255.0}
 export INSTALL_GW=${INSTALL_GW:-10.1.0.1}
 export INSTALL_IP_START=${INSTALL_IP_START:-10.1.0.1}
 export INSTALL_IP_END=${INSTALL_IP_END:-10.1.0.254}
-export MGMT_IP=${MGMT_IP:-192.168.200.2}
-export MGMT_MASK=${MGMT_MASK:-255.255.252.0}
-export MGMT_GW=${MGMT_GW:-192.168.200.1}
-export MGMT_IP_START=${MGMT_IP_START:-192.168.200.3}
-export MGMT_IP_END=${MGMT_IP_END:-192.168.200.254}
+export MGMT_IP=${MGMT_IP:-10.1.0.1}
+export EXT_NAT_MASK=${EXT_NAT_MASK:-255.255.252.0}
+export EXT_NAT_GW=${EXT_NAT_GW:-192.16.1.1}
+export EXT_NAT_IP_START=${EXT_NAT_IP_START:-192.16.1.3}
+export EXT_NAT_IP_END=${EXT_NAT_IP_END:-192.16.1.254}
 export EXTERNAL_NIC=${EXTERNAL_NIC:-eth0}
 export CLUSTER_NAME="opnfv2"
 export DOMAIN="ods.com"
index 6e38d70..2395965 100644 (file)
@@ -1,8 +1,8 @@
 export COMPASS_VIRT_CPUS=4
 export COMPASS_VIRT_MEM=4096
 export COMPASS_SERVER=$INSTALL_IP
-export COMPASS_SERVER_URL="http://$MGMT_IP/api"
-export HTTP_SERVER_URL="http://$MGMT_IP/api"
+export COMPASS_SERVER_URL="http://$MGMT_IP:5050/api"
+export HTTP_SERVER_URL="http://$MGMT_IP:5050/api"
 export COMPASS_USER_EMAIL="admin@huawei.com"
 export COMPASS_USER_PASSWORD="admin"
 export COMPASS_DNS1=${COMPASS_DNS1:-'8.8.8.8'}
@@ -11,6 +11,6 @@ export COMPASS_EXTERNAL_IP=${COMPASS_EXTERNAL_IP:-}
 export COMPASS_EXTERNAL_MASK=${COMPASS_EXTERNAL_MASK:-}
 export COMPASS_EXTERNAL_GW=${COMPASS_EXTERNAL_GW:-}
 export LANGUAGE="EN"
-export TIMEZONE="Asia/Shanghai"
+export TIMEZONE="America/Los_Angeles"
 export NTP_SERVER="$COMPASS_SERVER"
 export NAMESERVERS="$COMPASS_SERVER"
index 7f4fcf0..ab7e568 100644 (file)
@@ -1,5 +1,5 @@
 export VIRT_NUMBER=${VIRT_NUMBER:-5}
-export VIRT_CPUS=${VIRT_CPU:-4}
+export VIRT_CPUS=${VIRT_CPUS:-8}
 export VIRT_MEM=${VIRT_MEM:-16384}
 export VIRT_DISK=${VIRT_DISK:-200G}
 
index 5c2b025..ab485a8 100644 (file)
@@ -65,10 +65,10 @@ ip_settings:
 
   - name: external
     ip_ranges:
-      - - "192.168.107.210"
-        - "192.168.107.220"
-    cidr: "192.168.107.0/24"
-    gw: "192.168.107.1"
+      - - "192.16.1.210"
+        - "192.16.1.220"
+    cidr: "192.16.1.0/24"
+    gw: "192.16.1.1"
     role:
       - controller
       - compute
@@ -79,7 +79,7 @@ internal_vip:
   interface: mgmt
 
 public_vip:
-  ip: 192.168.107.222
+  ip: 192.16.1.222
   netmask: "24"
   interface: external
 
@@ -94,7 +94,7 @@ public_net_info:
   router: router-ext
   enable_dhcp: "False"
   no_gateway: "False"
-  external_gw: "192.168.107.1"
-  floating_ip_cidr: "192.168.107.0/24"
-  floating_ip_start: "192.168.107.101"
-  floating_ip_end: "192.168.107.199"
+  external_gw: "192.16.1.1"
+  floating_ip_cidr: "192.16.1.0/24"
+  floating_ip_start: "192.16.1.101"
+  floating_ip_end: "192.16.1.199"
index b869dd4..ab485a8 100644 (file)
@@ -65,10 +65,10 @@ ip_settings:
 
   - name: external
     ip_ranges:
-      - - "192.168.106.210"
-        - "192.168.106.220"
-    cidr: "192.168.106.0/24"
-    gw: "192.168.106.1"
+      - - "192.16.1.210"
+        - "192.16.1.220"
+    cidr: "192.16.1.0/24"
+    gw: "192.16.1.1"
     role:
       - controller
       - compute
@@ -79,7 +79,7 @@ internal_vip:
   interface: mgmt
 
 public_vip:
-  ip: 192.168.106.222
+  ip: 192.16.1.222
   netmask: "24"
   interface: external
 
@@ -94,7 +94,7 @@ public_net_info:
   router: router-ext
   enable_dhcp: "False"
   no_gateway: "False"
-  external_gw: "192.168.106.1"
-  floating_ip_cidr: "192.168.106.0/24"
-  floating_ip_start: "192.168.106.101"
-  floating_ip_end: "192.168.106.199"
+  external_gw: "192.16.1.1"
+  floating_ip_cidr: "192.16.1.0/24"
+  floating_ip_start: "192.16.1.101"
+  floating_ip_end: "192.16.1.199"
diff --git a/deploy/conf/vm_environment/network.yml b/deploy/conf/vm_environment/network.yml
new file mode 100644 (file)
index 0000000..ab485a8
--- /dev/null
@@ -0,0 +1,100 @@
+##############################################################################
+# Copyright (c) 2016 HUAWEI TECHNOLOGIES CO.,LTD and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+
+---
+nic_mappings: []
+bond_mappings: []
+
+provider_net_mappings:
+  - name: br-prv
+    network: physnet
+    interface: eth1
+    type: ovs
+    role:
+      - controller
+      - compute
+
+sys_intf_mappings:
+  - name: mgmt
+    interface: eth1
+    vlan_tag: 101
+    type: vlan
+    role:
+      - controller
+      - compute
+
+  - name: storage
+    interface: eth1
+    vlan_tag: 102
+    type: vlan
+    role:
+      - controller
+      - compute
+
+  - name: external
+    interface: br-prv
+    type: ovs
+    role:
+      - controller
+      - compute
+
+ip_settings:
+  - name: mgmt
+    ip_ranges:
+      - - "172.16.1.1"
+        - "172.16.1.254"
+    cidr: "172.16.1.0/24"
+    role:
+      - controller
+      - compute
+
+  - name: storage
+    ip_ranges:
+      - - "172.16.2.1"
+        - "172.16.2.254"
+    cidr: "172.16.2.0/24"
+    role:
+      - controller
+      - compute
+
+  - name: external
+    ip_ranges:
+      - - "192.16.1.210"
+        - "192.16.1.220"
+    cidr: "192.16.1.0/24"
+    gw: "192.16.1.1"
+    role:
+      - controller
+      - compute
+
+internal_vip:
+  ip: 172.16.1.222
+  netmask: "24"
+  interface: mgmt
+
+public_vip:
+  ip: 192.16.1.222
+  netmask: "24"
+  interface: external
+
+onos_nic: eth2
+public_net_info:
+  enable: "True"
+  network: ext-net
+  type: flat
+  segment_id: 1000
+  subnet: ext-subnet
+  provider_network: physnet
+  router: router-ext
+  enable_dhcp: "False"
+  no_gateway: "False"
+  external_gw: "192.16.1.1"
+  floating_ip_cidr: "192.16.1.0/24"
+  floating_ip_start: "192.16.1.101"
+  floating_ip_end: "192.16.1.199"
index 8c86304..16bfef6 100755 (executable)
@@ -19,11 +19,6 @@ function add_bonding(){
 
 function deploy_host(){
     export AYNC_TIMEOUT=20
-    ssh $ssh_args root@${MGMT_IP} mkdir -p /opt/compass/bin/ansible_callbacks
-    scp $ssh_args -r ${COMPASS_DIR}/deploy/status_callback.py root@${MGMT_IP}:/opt/compass/bin/ansible_callbacks/status_callback.py
-    scp $ssh_args -r ${COMPASS_DIR}/deploy/playbook_done.py root@${MGMT_IP}:/opt/compass/bin/ansible_callbacks/playbook_done.py
-    ssh $ssh_args root@${MGMT_IP} mkdir -p /opt/ansible-modules
-    scp $ssh_args -r ${COMPASS_DIR}/deploy/adapters/ansible/ansible_modules/* root@${MGMT_IP}:/opt/ansible-modules
 
     # avoid nodes rebooting too fast; otherwise cobbler cannot respond in time
     (sleep $AYNC_TIMEOUT; add_bonding; rename_nics; reboot_hosts) &
index 0a991f1..52f8a7b 100755 (executable)
@@ -42,8 +42,8 @@ function launch_host_vms() {
           -e "s#REPLACE_IMAGE#$vm_dir/disk.img#g" \
           -e "s/REPLACE_BOOT_MAC/${mac_array[i]}/g" \
           -e "s/REPLACE_NET_INSTALL/install/g" \
-          -e "s/REPLACE_NET_IAAS/external/g" \
-          -e "s/REPLACE_NET_TENANT/external/g" \
+          -e "s/REPLACE_NET_IAAS/external_nat/g" \
+          -e "s/REPLACE_NET_TENANT/external_nat/g" \
           $COMPASS_DIR/deploy/template/vm/host.xml\
           > $vm_dir/libvirt.xml
 
index 51094b2..d2ec081 100755 (executable)
@@ -84,15 +84,10 @@ else
     log_info "deploy host macs: $machines"
 fi
 
-
-if [[ -z "$REDEPLOY_HOST" || "$REDEPLOY_HOST" == "false" ]]; then
+if [[ "$REDEPLOY_HOST" != "true" ]]; then
     if ! set_compass_machine; then
         log_error "set_compass_machine fail"
     fi
-
-    # FIXME: refactor compass adapter and conf code, instead of doing
-    # hack conf injection.
-    inject_compass_conf
 fi
 
 if [[ "$DEPLOY_HOST" == "true" || $REDEPLOY_HOST == "true" ]]; then
index 0f5a7d5..e50f52a 100755 (executable)
@@ -75,9 +75,9 @@ function setup_bridge_external()
     sudo virsh net-destroy external
     sudo virsh net-undefine external
 
-    save_network_info
+    #save_network_info
     sed -e "s/REPLACE_NAME/external/g" \
-        -e "s/REPLACE_OVS/br-external/g" \
+        -e "s/REPLACE_OVS/br-external_nat/g" \
     $COMPASS_DIR/deploy/template/network/bridge_ovs.xml \
     > $WORK_DIR/network/external.xml
 
@@ -125,6 +125,7 @@ function recover_nat_net() {
 
 function setup_virtual_net() {
   setup_nat_net install $INSTALL_GW $INSTALL_MASK
+  setup_nat_net external_nat $EXT_NAT_GW $EXT_NAT_MASK $EXT_NAT_IP_START $EXT_NAT_IP_END
 }
 
 function recover_virtual_net() {
@@ -135,7 +136,8 @@ function setup_baremetal_net() {
   if [[ -z $INSTALL_NIC ]]; then
     exit 1
   fi
-  setup_bridge_net install $INSTALL_NIC
+  sudo ifconfig $INSTALL_NIC up
+  sudo ifconfig $INSTALL_NIC $INSTALL_GW
 }
 
 function recover_baremetal_net() {
@@ -151,7 +153,7 @@ function setup_network_boot_scripts() {
     sudo cat << EOF >> /usr/sbin/network_setup
 
 sleep 2
-save_network_info
+#save_network_info
 clear_forward_rejct_rules
 EOF
     sudo chmod 755 /usr/sbin/network_setup
@@ -163,13 +165,12 @@ EOF
 }
 
 function create_nets() {
-    setup_nat_net mgmt $MGMT_GW $MGMT_MASK $MGMT_IP_START $MGMT_IP_END
 
     # create install network
     setup_"$TYPE"_net
 
     # create external network
-    setup_bridge_external
+#    setup_bridge_external
     clear_forward_rejct_rules
 
     setup_network_boot_scripts
index c0a81a4..ddb8e8d 100644 (file)
@@ -24,14 +24,14 @@ current_dir = os.path.dirname(os.path.realpath(__file__))
 sys.path.append(current_dir + '/..')
 
 
-import switch_virtualenv  # noqa
+import switch_virtualenv  # noqa
 from compass.apiclient.restful import Client  # noqa: E402
 from compass.utils import flags  # noqa: E402
 
 
 flags.add('compass_server',
           help='compass server url',
-          default='http://127.0.0.1/api')
+          default='http://compass-deck/api')
 flags.add('compass_user_email',
           help='compass user email',
           default='admin@huawei.com')
@@ -109,5 +109,7 @@ class CallbackModule(CallbackBase):
         self._login(self.client)
 
         for host in hosts:
+            if host == "localhost":
+                continue
             clusterhost_name = host + "." + cluster_name
             self.client.clusterhost_ready(clusterhost_name)
index b7e5bfa..553449e 100755 (executable)
@@ -14,52 +14,56 @@ function print_logo()
     set +x; sleep 2; set -x
 }
 
-function download_iso()
+function install_docker()
 {
-    iso_name=`basename $ISO_URL`
-    rm -f $WORK_DIR/cache/"$iso_name.md5"
-    curl --connect-timeout 10 -o $WORK_DIR/cache/"$iso_name.md5" $ISO_URL.md5
-    if [[ -f $WORK_DIR/cache/$iso_name ]]; then
-        local_md5=`md5sum $WORK_DIR/cache/$iso_name | cut -d ' ' -f 1`
-        repo_md5=`cat $WORK_DIR/cache/$iso_name.md5 | cut -d ' ' -f 1`
-        if [[ "$local_md5" == "$repo_md5" ]]; then
-            return
-        fi
-    fi
+    sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual
+    sudo apt-get install -y apt-transport-https ca-certificates curl \
+                 software-properties-common
+    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+    sudo apt-key fingerprint 0EBFCD88
+    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+       $(lsb_release -cs) \
+       stable"
+    sudo apt-get update
+    sudo apt-get install -y docker-ce
+
+    sudo service docker start
+    sudo service docker restart
+}
 
-    curl --connect-timeout 10 -o $WORK_DIR/cache/$iso_name $ISO_URL
+function extract_tar()
+{
+    tar_name=`basename $TAR_URL`
+    rm -f $WORK_DIR/cache/$tar_name
+    curl --connect-timeout 10 -o $WORK_DIR/cache/$tar_name $TAR_URL
+    tar -zxf $WORK_DIR/cache/$tar_name -C $WORK_DIR/installer
 }
 
 function prepare_env() {
-    sed -i -e 's/^#user =.*/user = "root"/g' /etc/libvirt/qemu.conf
-    sed -i -e 's/^#group =.*/group = "root"/g' /etc/libvirt/qemu.conf
+    sudo sed -i -e 's/^#user =.*/user = "root"/g' /etc/libvirt/qemu.conf
+    sudo sed -i -e 's/^#group =.*/group = "root"/g' /etc/libvirt/qemu.conf
     sudo service libvirt-bin restart
     if sudo service openvswitch-switch status|grep stop; then
         sudo service openvswitch-switch start
     fi
 
     # prepare work dir
-    rm -rf $WORK_DIR/{installer,vm,network,iso}
+    sudo rm -rf $WORK_DIR/{installer,vm,network,iso,docker}
     mkdir -p $WORK_DIR/installer
     mkdir -p $WORK_DIR/vm
     mkdir -p $WORK_DIR/network
     mkdir -p $WORK_DIR/iso
     mkdir -p $WORK_DIR/cache
+    mkdir -p $WORK_DIR/docker
 
-    download_iso
-
-    cp $WORK_DIR/cache/`basename $ISO_URL` $WORK_DIR/iso/centos.iso -f
-
-    # copy compass
-    mkdir -p $WORK_DIR/mnt
-    sudo mount -o loop $WORK_DIR/iso/centos.iso $WORK_DIR/mnt
-    cp -rf $WORK_DIR/mnt/compass/compass-core $WORK_DIR/installer/
-    cp -rf $WORK_DIR/mnt/compass/compass-install $WORK_DIR/installer/
-    sudo umount $WORK_DIR/mnt
-    rm -rf $WORK_DIR/mnt
+    extract_tar
 
     chmod 755 $WORK_DIR -R
 
+    if [[ ! -d /etc/libvirt/hooks ]]; then
+        sudo mkdir -p /etc/libvirt/hooks
+    fi
+
     sudo cp ${COMPASS_DIR}/deploy/qemu_hook.sh /etc/libvirt/hooks/qemu
 }
 
@@ -72,12 +76,22 @@ function  _prepare_python_env() {
         if [[ ! -z "$JHPKG_URL" ]]; then
              _pre_env_setup
         else
-             sudo apt-get update -y
-             sudo apt-get install -y --force-yes mkisofs bc curl ipmitool openvswitch-switch
-             sudo apt-get install -y --force-yes git python-dev python-pip figlet sshpass
-             sudo apt-get install -y --force-yes libxslt-dev libxml2-dev libvirt-dev build-essential qemu-utils qemu-kvm libvirt-bin virtinst libmysqld-dev
-             sudo apt-get install -y --force-yes libffi-dev libssl-dev
-
+            if [[ ! -f /etc/redhat-release ]]; then
+                sudo apt-get update -y
+                sudo apt-get install -y --force-yes mkisofs bc curl ipmitool openvswitch-switch
+                sudo apt-get install -y --force-yes git python-dev python-pip figlet sshpass
+                sudo apt-get install -y --force-yes libxslt-dev libxml2-dev libvirt-dev build-essential qemu-utils qemu-kvm libvirt-bin virtinst libmysqld-dev
+                sudo apt-get install -y --force-yes libffi-dev libssl-dev
+            else
+                sudo yum install -y centos-release-openstack-ocata
+                sudo yum install -y epel-release
+                sudo yum install openvswitch -y --nogpgcheck
+                sudo yum install -y git python-devel python-pip figlet sshpass mkisofs bc curl ipmitool
+                sudo yum install -y libxslt-devel libxml2-devel libvirt-devel libmysqld-devel
+                sudo yum install -y qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer
+                sudo yum install -y libffi libffi-devel openssl-devel
+                sudo yum groupinstall -y 'Development Tools'
+            fi
         fi
    fi
 
@@ -97,6 +111,7 @@ function  _prepare_python_env() {
         pip install --upgrade netaddr
         pip install --upgrade oslo.config
         pip install --upgrade ansible
+        sudo pip install --upgrade docker-compose
    fi
 }
 
@@ -148,11 +163,21 @@ EOF
          build-essential qemu-utils qemu-kvm libvirt-bin \
          virtinst libmysqld-dev \
          libssl-dev libffi-dev python-cffi
+
+     sudo docker version >/dev/null 2>&1
+     if [[ $? -ne 0 ]]; then
+         install_docker
+     fi
+
      pid=$(ps -ef | grep SimpleHTTPServer | grep 9998 | awk '{print $2}')
      echo $pid
      kill -9 $pid
 
-     sudo cp ${COMPASS_DIR}/deploy/qemu_hook.sh /etc/libvirt/hooks/qemu
+     if [[ ! -d /etc/libvirt/hooks ]]; then
+         sudo mkdir -p /etc/libvirt/hooks
+     fi
+
+     sudo cp -f ${COMPASS_DIR}/deploy/qemu_hook.sh /etc/libvirt/hooks/qemu
 
      rm -rf /etc/apt/sources.list
      if [[ -f /etc/apt/sources.list.bak ]]; then
index e959759..2672c99 100644 (file)
@@ -26,14 +26,12 @@ def rename_nics(dha_info, rsa_file, compass_ip, os_version):
                 nic_name = interface.keys()[0]
                 mac = interface.values()[0]
 
-                exec_cmd("ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
-                          -i %s root@%s \
-                          'cobbler system edit --name=%s --interface=%s --mac=%s --static=1'"   # noqa
-                         % (rsa_file, compass_ip, host_name, nic_name, mac))   # noqa
-
-    exec_cmd("ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
-              -i %s root@%s \
-              'cobbler sync'" % (rsa_file, compass_ip))
+                exec_cmd("sudo docker exec compass-cobbler bash -c \
+                         'cobbler system edit --name=%s --interface=%s --mac=%s --static=1'"   # noqa
+                         % (host_name, nic_name, mac))   # noqa
+
+    exec_cmd("sudo docker exec compass-cobbler bash -c \
+             'cobbler sync'")
 
 if __name__ == "__main__":
     assert(len(sys.argv) == 5)
index 9136804..47df1d3 100644 (file)
@@ -20,7 +20,7 @@ def task_error(display, host, data):
 #    if isinstance(data, dict):
 #        invocation = data.pop('invocation', {})
 
-    notify_host(display, "localhost", host, "failed")
+    notify_host(display, "compass-deck", host, "failed")
 
 
 class CallbackModule(CallbackBase):
@@ -38,8 +38,9 @@ class CallbackModule(CallbackBase):
     def v2_on_any(self, *args, **kwargs):
         pass
 
-    def v2_runner_on_failed(self, host, res, ignore_errors=False):
-        task_error(self._display, host, res)
+    def v2_runner_on_failed(self, res, ignore_errors=False):
+        # task_error(self._display, host, res)
+        pass
 
     def v2_runner_on_ok(self, host, res):
         pass
@@ -60,7 +61,8 @@ class CallbackModule(CallbackBase):
         pass
 
     def v2_runner_on_async_failed(self, host, res, jid):
-        task_error(self._display, host, res)
+        # task_error(self._display, host, res)
+        pass
 
     def v2_playbook_on_start(self):
         pass
@@ -97,29 +99,27 @@ class CallbackModule(CallbackBase):
 
     def v2_playbook_on_stats(self, stats):
         self._display.display("playbook_on_stats enter")
-        all_vars = self.play.get_variable_manager().get_vars(self.loader)
-        host_vars = all_vars["hostvars"]
         hosts = sorted(stats.processed.keys())
-        cluster_name = host_vars[hosts[0]]['cluster_name']
         failures = False
         unreachable = False
 
         for host in hosts:
             summary = stats.summarize(host)
+            # self._display.display("host: %s \nsummary: %s\n" % (host, summary)) # noqa
 
             if summary['failures'] > 0:
                 failures = True
             if summary['unreachable'] > 0:
                 unreachable = True
 
+        clusterhosts = set(hosts) - set(['localhost'])
         if failures or unreachable:
-            for host in hosts:
-                notify_host(self._display, "localhost", host, "error")
+            for host in clusterhosts:
+                notify_host(self._display, "compass-deck", host, "error")
             return
 
-        for host in hosts:
-            clusterhost_name = host + "." + cluster_name
-            notify_host(self._display, "localhost", clusterhost_name, "succ")
+        for host in clusterhosts:
+            notify_host(self._display, "compass-deck", host, "succ")
 
 
 def raise_for_status(resp):
@@ -144,13 +144,13 @@ def auth(conn):
 
 
 def notify_host(display, compass_host, host, status):
+    display.display("hostname: %s" % host)
+    host = host.strip("host")  # e.g. "host1" -> "1": strip() removes the leading "host" characters
+    url = "/api/clusterhosts/%s/state" % host
     if status == "succ":
-        body = {"ready": True}
-        url = "/api/clusterhosts/%s/state_internal" % host
+        body = {"state": "SUCCESSFUL"}
     elif status == "error":
         body = {"state": "ERROR"}
-        host = host.strip("host")
-        url = "/api/clusterhosts/%s/state" % host
     else:
         display.error("notify_host: host %s with status %s is not supported"
                       % (host, status))
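The reworked `notify_host` now always reports through the public `/api/clusterhosts/<id>/state` endpoint on compass-deck, deriving the numeric id from the host name. A shell sketch of the equivalent request (the `curl` form and the example host name are assumptions for illustration, not committed code; the URL and body shapes come from the diff above):

```shell
#!/bin/bash
# Sketch: what notify_host amounts to for a succeeding host "host1".
host="host1"
host_id="${host#host}"          # "host1" -> "1", like host.strip("host") above
body='{"state": "SUCCESSFUL"}'  # "error" would send {"state": "ERROR"}
echo "POST /api/clusterhosts/${host_id}/state <- ${body}"
# The actual call would be something like:
# curl -X POST -H 'Content-Type: application/json' \
#      -d "$body" "http://compass-deck/api/clusterhosts/${host_id}/state"
```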
diff --git a/quickstart.sh b/quickstart.sh
new file mode 100755 (executable)
index 0000000..db56ee2
--- /dev/null
@@ -0,0 +1,26 @@
+#!/bin/bash
+##############################################################################
+# Copyright (c) 2016-2017 HUAWEI TECHNOLOGIES CO.,LTD and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+sudo apt-get update
+sudo apt-get install -y git
+
+git clone https://gerrit.opnfv.org/gerrit/compass4nfv
+
+pushd compass4nfv
+
+CURRENT_DIR=$PWD
+SCENARIO=${SCENARIO:-os-nosdn-nofeature-ha.yml}
+
+./build.sh
+
+export TAR_URL=file://$CURRENT_DIR/work/building/compass.tar.gz
+export DHA=$CURRENT_DIR/deploy/conf/vm_environment/$SCENARIO
+export NETWORK=$CURRENT_DIR/deploy/conf/vm_environment/network.yml
+
+./deploy.sh
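quickstart.sh picks up `SCENARIO`, `TAR_URL`, `DHA`, and `NETWORK` from the caller's environment via bash default expansion. A minimal sketch of that override pattern (the non-default scenario name is illustrative):

```shell
#!/bin/bash
# Sketch of the ${VAR:-default} idiom quickstart.sh relies on:
# an exported value wins, otherwise the default is used.
unset SCENARIO
SCENARIO=${SCENARIO:-os-nosdn-nofeature-ha.yml}
echo "$SCENARIO"    # no value set, so the default applies

SCENARIO=os-odl_l2-nofeature-ha.yml   # hypothetical override
SCENARIO=${SCENARIO:-os-nosdn-nofeature-ha.yml}
echo "$SCENARIO"    # preset value is preserved
```

So `SCENARIO=<your-scenario>.yml ./quickstart.sh` selects a different DHA file without editing the script.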