From: Dan Radez Date: Fri, 8 May 2015 15:21:28 +0000 (+0000) Subject: Merge "Small correction in build.sh default VAR settings. JIRA:" X-Git-Tag: arno.2015.1.0~55 X-Git-Url: https://gerrit.opnfv.org/gerrit/gitweb?a=commitdiff_plain;h=db3ea337d530925f15bde83f332405f380708fc3;hp=f0f5b19a1d10ec9c66f832287da1e31e5af434b9;p=genesis.git Merge "Small correction in build.sh default VAR settings. JIRA:" --- diff --git a/LICENSE b/LICENSE deleted file mode 100644 index eab0924..0000000 --- a/LICENSE +++ /dev/null @@ -1,13 +0,0 @@ -Copyright 2015 Open Platform for NFV Project, Inc. and its contributors - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/fuel/LICENSE.rst b/LICENSE.rst similarity index 72% rename from fuel/LICENSE.rst rename to LICENSE.rst index 9537658..e8fa309 100644 --- a/fuel/LICENSE.rst +++ b/LICENSE.rst @@ -26,7 +26,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -Other applicable upstream project Licenses relevant for Fuel@OPNFV +Other applicable upstream project Licenses ================================================================== You may not use the content of this software bundle except in compliance with the Licenses as listed below: @@ -58,8 +58,23 @@ Licenses as listed below: | Linux | GPLv3 | | | https://www.gnu.org/copyleft/gpl.html | +----------------+-----------------------------------------------------+ +| Ceph | GPL v2 | +| | https://www.gnu.org/licenses/gpl-2.0.html | ++----------------+-----------------------------------------------------+ +| Puppet | Apache License 2.0 | +| | https://www.apache.org/licenses/LICENSE-2.0 | ++----------------+-----------------------------------------------------+ + +Other applicable upstream project Licenses used by Fuel ISO +================================================================== +You may not use the content of this software bundle except in compliance with the +Licenses as listed below: + ++----------------+-----------------------------------------------------+ +| **Component** | **Licence** | ++----------------+-----------------------------------------------------+ | Docker | Apache License 2.0 | -| | https://www.apache.org/licenses/LICENSE-2.0 +| | https://www.apache.org/licenses/LICENSE-2.0 | +----------------+-----------------------------------------------------+ | Fuel | Apache License 2.0 | | | https://www.apache.org/licenses/LICENSE-2.0 | @@ -67,12 +82,6 @@ Licenses as listed below: | OpenJDK/JRE | GPL v2 | | | https://www.gnu.org/licenses/gpl-2.0.html | +----------------+-----------------------------------------------------+ -| Ceph | GPL v2 | -| | https://www.gnu.org/licenses/gpl-2.0.html | -+----------------+-----------------------------------------------------+ -| Puppet | Apache License 2.0 | -| | https://www.apache.org/licenses/LICENSE-2.0 | -+----------------+-----------------------------------------------------+ | Cobbler | GPL v2 | | | https://www.gnu.org/licenses/gpl-2.0.html | 
+----------------+-----------------------------------------------------+ @@ -83,3 +92,25 @@ Licenses as listed below: | | https://www.apache.org/licenses/LICENSE-2.0 | +----------------+-----------------------------------------------------+ +Other applicable upstream project Licenses used by Foreman ISO +================================================================== +You may not use the content of this software bundle except in compliance with the +Licenses as listed below: + ++----------------+-----------------------------------------------------+ +| **Component** | **Licence** | ++----------------+-----------------------------------------------------+ +| Foreman | Creative Commons Attribution-ShareAlike 3.0 | +| | http://creativecommons.org/licenses/by-sa/3.0/ | ++----------------+-----------------------------------------------------+ +| VirtualBox | GPL v2 | +| | https://www.gnu.org/licenses/gpl-2.0.html | ++----------------+-----------------------------------------------------+ +| Vagrant | The MIT License ++----------------+-----------------------------------------------------+ +| Ansible | GPL v3 | +| | https://www.gnu.org/licenses/gpl-3.0.html | ++----------------+-----------------------------------------------------+ +| Khaleesi | GPL v3 | +| | https://www.gnu.org/licenses/gpl-3.0.html | ++----------------+-----------------------------------------------------+ diff --git a/foreman/ci/clean.sh b/foreman/ci/clean.sh new file mode 100755 index 0000000..25352a8 --- /dev/null +++ b/foreman/ci/clean.sh @@ -0,0 +1,151 @@ +#!/usr/bin/env bash + +#Clean script to uninstall provisioning server for Foreman/QuickStack +#author: Tim Rozet (trozet@redhat.com) +# +#Uses Vagrant and VirtualBox +# +#Destroys Vagrant VM running in /tmp/bgs_vagrant +#Shuts down all nodes found in Khaleesi settings +#Removes hypervisor kernel modules (VirtualBox) + +##VARS +reset=`tput sgr0` +blue=`tput setaf 4` +red=`tput setaf 1` +green=`tput setaf 2` +##END VARS + +##FUNCTIONS +display_usage() { + echo -e "\n\n${blue}This script is used to uninstall Foreman/QuickStack Installer and Clean OPNFV Target System${reset}\n\n" + echo -e "\nUsage:\n$0 [arguments] \n" + echo -e "\n -no_parse : No variable parsing into config. Flag. \n" + echo -e "\n -base_config : Full path of ksgen settings file to parse. Required. Will provide BMC info to shutdown hosts. Example: -base_config /opt/myinventory.yml \n" +} + +##END FUNCTIONS + +if [[ ( $1 == "--help") || $1 == "-h" ]]; then + display_usage + exit 0 +fi + +echo -e "\n\n${blue}This script is used to uninstall Foreman/QuickStack Installer and Clean OPNFV Target System${reset}\n\n" +echo "Use -h to display help" +sleep 2 + +while [ "`echo $1 | cut -c1`" = "-" ] +do + echo $1 + case "$1" in + -base_config) + base_config=$2 + shift 2 + ;; + *) + display_usage + exit 1 + ;; +esac +done + + +##install ipmitool +if ! yum list installed | grep -i ipmitool; then + if ! 
yum -y install ipmitool; then + echo "${red}Unable to install ipmitool!${reset}" + exit 1 + fi +else + echo "${blue}Skipping ipmitool as it is already installed!${reset}" +fi + +###find all the bmc IPs and number of nodes +node_counter=0 +output=`grep bmc_ip $base_config | grep -Eo '[0-9]+.[0-9]+.[0-9]+.[0-9]+'` +for line in ${output} ; do + bmc_ip[$node_counter]=$line + ((node_counter++)) +done + +max_nodes=$((node_counter-1)) + +###find bmc_users per node +node_counter=0 +output=`grep bmc_user $base_config | sed 's/\s*bmc_user:\s*//'` +for line in ${output} ; do + bmc_user[$node_counter]=$line + ((node_counter++)) +done + +###find bmc_pass per node +node_counter=0 +output=`grep bmc_pass $base_config | sed 's/\s*bmc_pass:\s*//'` +for line in ${output} ; do + bmc_pass[$node_counter]=$line + ((node_counter++)) +done + +for mynode in `seq 0 $max_nodes`; do + echo "${blue}Node: ${bmc_ip[$mynode]} ${bmc_user[$mynode]} ${bmc_pass[$mynode]} ${reset}" + if ipmitool -I lanplus -P ${bmc_pass[$mynode]} -U ${bmc_user[$mynode]} -H ${bmc_ip[$mynode]} chassis power off; then + echo "${blue}Node: $mynode, ${bmc_ip[$mynode]} powered off!${reset}" + else + echo "${red}Error: Unable to power off $mynode, ${bmc_ip[$mynode]} ${reset}" + exit 1 + fi +done + +###check to see if vbox is installed +vboxpkg=`rpm -qa | grep VirtualBox` +if [ $? -eq 0 ]; then + skip_vagrant=0 +else + skip_vagrant=1 +fi + +###destroy vagrant +if [ $skip_vagrant -eq 0 ]; then + cd /tmp/bgs_vagrant + if vagrant destroy -f; then + echo "${blue}Successfully destroyed Foreman VM ${reset}" + else + echo "${red}Unable to destroy Foreman VM ${reset}" + echo "${blue}Checking if vagrant was already destroyed and no process is active...${reset}" + if ps axf | grep vagrant; then + echo "${red}Vagrant VM still exists...exiting ${reset}" + exit 1 + else + echo "${blue}Vagrant process doesn't exist. Moving on... ${reset}" + fi + fi + + ###kill virtualbox + echo "${blue}Killing VirtualBox ${reset}" + killall virtualbox + killall VboxHeadless + + ###remove virtualbox + echo "${blue}Removing VirtualBox ${reset}" + yum -y remove $vboxpkg + +else + echo "${blue}Skipping Vagrant destroy + Vbox Removal as VirtualBox package is already removed ${reset}" +fi + + +###remove kernel modules +echo "${blue}Removing kernel modules ${reset}" +for kernel_mod in vboxnetadp vboxnetflt vboxpci vboxdrv; do + if ! rmmod $kernel_mod; then + if rmmod $kernel_mod 2>&1 | grep -i 'not currently loaded'; then + echo "${blue} $kernel_mod is not currently loaded! ${reset}" + else + echo "${red}Error trying to remove Kernel Module: $kernel_mod ${reset}" + exit 1 + fi + else + echo "${blue}Removed Kernel Module: $kernel_mod ${reset}" + fi +done diff --git a/foreman/ci/deploy.sh b/foreman/ci/deploy.sh index 49e1590..ae585b0 100755 --- a/foreman/ci/deploy.sh +++ b/foreman/ci/deploy.sh @@ -24,6 +24,7 @@ blue=`tput setaf 4` red=`tput setaf 1` green=`tput setaf 2` +declare -A interface_arr ##END VARS ##FUNCTIONS @@ -206,6 +207,14 @@ else printf '%s\n' 'deploy.sh: Skipping kernel module for virtualbox. Already Installed' fi +##install Ansible +if ! yum list installed | grep -i ansible; then + if ! yum -y install ansible; then + printf '%s\n' 'deploy.sh: Unable to install Ansible package' >&2 + exit 1 + fi +fi + ##install Vagrant if ! rpm -qa | grep vagrant; then if ! 
rpm -Uvh https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.2_x86_64.rpm; then @@ -252,7 +261,7 @@ cd bgs_vagrant echo "${blue}Detecting network configuration...${reset}" ##detect host 1 or 3 interface configuration #output=`ip link show | grep -E "^[0-9]" | grep -Ev ": lo|tun|virbr|vboxnet" | awk '{print $2}' | sed 's/://'` -output=`ifconfig | grep -E "^[a-Z0-9]+:"| grep -Ev "lo|tun|virbr|vboxnet:" | awk '{print $1}' | sed 's/://'` +output=`ifconfig | grep -E "^[a-zA-Z0-9]+:"| grep -Ev "lo|tun|virbr|vboxnet" | awk '{print $1}' | sed 's/://'` if [ ! "$output" ]; then printf '%s\n' 'deploy.sh: Unable to detect interfaces to bridge to' >&2 @@ -274,6 +283,7 @@ for interface in ${output}; do if [ ! "$new_ip" ]; then continue fi + interface_arr[$interface]=$if_counter interface_ip_arr[$if_counter]=$new_ip subnet_mask=$(find_netmask $interface) if [ "$if_counter" -eq 1 ]; then @@ -310,15 +320,47 @@ fi echo "${blue}Network detected: ${deployment_type}! ${reset}" if route | grep default; then - defaultgw=$(route | grep default | awk '{print $2}') - echo "${blue}Default gateway detected: $defaultgw ${reset}" - sed -i 's/^.*default_gw =.*$/ default_gw = '\""$defaultgw"\"'/' Vagrantfile + echo "${blue}Default Gateway Detected ${reset}" + host_default_gw=$(ip route | grep default | awk '{print $3}') + echo "${blue}Default Gateway: $host_default_gw ${reset}" + default_gw_interface=$(ip route get $host_default_gw | awk '{print $3}') + case "${interface_arr[$default_gw_interface]}" in + 0) + echo "${blue}Default Gateway Detected on Admin Interface!${reset}" + sed -i 's/^.*default_gw =.*$/ default_gw = '\""$host_default_gw"\"'/' Vagrantfile + node_default_gw=$host_default_gw + ;; + 1) + echo "${red}Default Gateway Detected on Private Interface!${reset}" + echo "${red}Private subnet should be private and not have Internet access!${reset}" + exit 1 + ;; + 2) + echo "${blue}Default Gateway Detected on Public Interface!${reset}" + sed -i 's/^.*default_gw =.*$/ default_gw = '\""$host_default_gw"\"'/' Vagrantfile + echo "${blue}Will setup NAT from Admin -> Public Network on VM!${reset}" + sed -i 's/^.*nat_flag =.*$/ nat_flag = true/' Vagrantfile + echo "${blue}Setting node gateway to be VM Admin IP${reset}" + node_default_gw=${interface_ip_arr[0]} + ;; + 3) + echo "${red}Default Gateway Detected on Storage Interface!${reset}" + echo "${red}Storage subnet should be private and not have Internet access!${reset}" + exit 1 + ;; + *) + echo "${red}Unable to determine which interface default gateway is on..Exiting!${reset}" + exit 1 + ;; + esac else - defaultgw=`echo ${interface_arr_ip[0]} | cut -d. -f1-3` + #assumes 24 bit mask + defaultgw=`echo ${interface_ip_arr[0]} | cut -d. -f1-3` firstip=.1 defaultgw=$defaultgw$firstip echo "${blue}Unable to find default gateway. 
Assuming it is $defaultgw ${reset}" sed -i 's/^.*default_gw =.*$/ default_gw = '\""$defaultgw"\"'/' Vagrantfile + node_default_gw=$defaultgw fi if [ $base_config ]; then @@ -339,7 +381,7 @@ echo "${blue}Gathering network parameters for Target System...this may take a fe ##if single node deployment all the variables will have the same ip ##interface names will be enp0s3, enp0s8, enp0s9 in chef/centos7 -sed -i 's/^.*default_gw:.*$/default_gw:'" $defaultgw"'/' opnfv_ksgen_settings.yml +sed -i 's/^.*default_gw:.*$/default_gw:'" $node_default_gw"'/' opnfv_ksgen_settings.yml ##replace private interface parameter ##private interface will be of hosts, so we need to know the provisioned host interface name diff --git a/foreman/ci/inventory/lf_pod2_ksgen_settings.yml b/foreman/ci/inventory/lf_pod2_ksgen_settings.yml new file mode 100644 index 0000000..ff6e3e0 --- /dev/null +++ b/foreman/ci/inventory/lf_pod2_ksgen_settings.yml @@ -0,0 +1,349 @@ +global_params: + admin_email: opnfv@opnfv.com + ha_flag: "true" + odl_flag: "true" + private_network: + storage_network: + controllers_hostnames_array: oscontroller1,oscontroller2,oscontroller3 + controllers_ip_array: + amqp_vip: + private_subnet: + cinder_admin_vip: + cinder_private_vip: + cinder_public_vip: + db_vip: + glance_admin_vip: + glance_private_vip: + glance_public_vip: + heat_admin_vip: + heat_private_vip: + heat_public_vip: + heat_cfn_admin_vip: + heat_cfn_private_vip: + heat_cfn_public_vip: + horizon_admin_vip: + horizon_private_vip: + horizon_public_vip: + keystone_admin_vip: + keystone_private_vip: + keystone_public_vip: + loadbalancer_vip: + neutron_admin_vip: + neutron_private_vip: + neutron_public_vip: + nova_admin_vip: + nova_private_vip: + nova_public_vip: +network_type: multi_network +default_gw: +foreman: + seed_values: + - { name: heat_cfn, oldvalue: true, newvalue: false } +workaround_puppet_version_lock: false +opm_branch: master +installer: + name: puppet + short_name: pupt + network: + auto_assign_floating_ip: false + variant: + short_name: m2vx + plugin: + name: neutron +workaround_openstack_packstack_rpm: false +tempest: + repo: + Fedora: + '19': http://REPLACE_ME/~REPLACE_ME/openstack-tempest-icehouse/fedora-19/ + '20': http://REPLACE_ME/~REPLACE_ME/openstack-tempest-icehouse/fedora-20/ + RedHat: + '7.0': https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/ + use_virtual_env: false + public_allocation_end: 10.2.84.71 + skip: + files: null + tests: null + public_allocation_start: 10.2.84.51 + physnet: physnet1 + use_custom_repo: false + public_subnet_cidr: 10.2.84.0/24 + public_subnet_gateway: 10.2.84.1 + additional_default_settings: + - section: compute + option: flavor_ref + value: 1 + cirros_image_file: cirros-0.3.1-x86_64-disk.img + setup_method: tempest/rpm + test_name: all + rdo: + version: juno + rpm: http://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm + rpm: + version: 20141201 + dir: ~{{ nodes.tempest.remote_user }}/tempest-dir +tmp: + node_prefix: '{{ node.prefix | reject("none") | join("-") }}-' + anchors: + - https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm + - http://repos.fedorapeople.org/repos/openstack/openstack-juno/ +opm_repo: https://github.com/redhat-openstack/openstack-puppet-modules.git +workaround_vif_plugging: false +openstack_packstack_rpm: http://REPLACE_ME/brewroot/packages/openstack-puppet-modules/2013.2/9.el6ost/noarch/openstack-puppet-modules-2013.2-9.el6ost.noarch.rpm +nodes: + compute1: + name: 
oscompute11.opnfv.com + hostname: oscompute11.opnfv.com + short_name: oscompute11 + type: compute + host_type: baremetal + hostgroup: Compute + mac_address: "00:25:b5:a0:00:5e" + bmc_ip: 172.30.8.74 + bmc_mac: "74:a2:e6:a4:14:9c" + bmc_user: admin + bmc_pass: octopus + ansible_ssh_pass: "Op3nStack" + admin_password: "" + groups: + - compute + - foreman_nodes + - puppet + - rdo + - neutron + compute2: + name: oscompute12.opnfv.com + hostname: oscompute12.opnfv.com + short_name: oscompute12 + type: compute + host_type: baremetal + hostgroup: Compute + mac_address: "00:25:b5:a0:00:3e" + bmc_ip: 172.30.8.73 + bmc_mac: "a8:9d:21:a0:15:9c" + bmc_user: admin + bmc_pass: octopus + ansible_ssh_pass: "Op3nStack" + admin_password: "" + groups: + - compute + - foreman_nodes + - puppet + - rdo + - neutron + controller1: + name: oscontroller1.opnfv.com + hostname: oscontroller1.opnfv.com + short_name: oscontroller1 + type: controller + host_type: baremetal + hostgroup: Controller_Network_ODL + mac_address: "00:25:b5:a0:00:af" + bmc_ip: 172.30.8.66 + bmc_mac: "a8:9d:21:c9:8b:56" + bmc_user: admin + bmc_pass: octopus + private_ip: controller1_private + private_mac: "00:25:b5:b0:00:1f" + ansible_ssh_pass: "Op3nStack" + admin_password: "octopus" + groups: + - controller + - foreman_nodes + - puppet + - rdo + - neutron + controller2: + name: oscontroller2.opnfv.com + hostname: oscontroller2.opnfv.com + short_name: oscontroller2 + type: controller + host_type: baremetal + hostgroup: Controller_Network + mac_address: "00:25:b5:a0:00:9e" + bmc_ip: 172.30.8.75 + bmc_mac: "a8:9d:21:c9:4d:26" + bmc_user: admin + bmc_pass: octopus + private_ip: controller2_private + private_mac: "00:25:b5:b0:00:de" + ansible_ssh_pass: "Op3nStack" + admin_password: "octopus" + groups: + - controller + - foreman_nodes + - puppet + - rdo + - neutron + controller3: + name: oscontroller3.opnfv.com + hostname: oscontroller3.opnfv.com + short_name: oscontroller3 + type: controller + host_type: baremetal + hostgroup: Controller_Network + mac_address: "00:25:b5:a0:00:7e" + bmc_ip: 172.30.8.65 + bmc_mac: "a8:9d:21:c9:3a:92" + bmc_user: admin + bmc_pass: octopus + private_ip: controller3_private + private_mac: "00:25:b5:b0:00:be" + ansible_ssh_pass: "Op3nStack" + admin_password: "octopus" + groups: + - controller + - foreman_nodes + - puppet + - rdo + - neutron +workaround_mysql_centos7: true +distro: + name: centos + centos: + '7.0': + repos: [] + short_name: c + short_version: 70 + version: '7.0' + rhel: + '7.0': + kickstart_url: http://REPLACE_ME/released/RHEL-7/7.0/Server/x86_64/os/ + repos: + - section: rhel7-server-rpms + name: Packages for RHEL 7 - $basearch + baseurl: http://REPLACE_ME/rel-eng/repos/rhel-7.0/x86_64/ + gpgcheck: 0 + - section: rhel-7-server-update-rpms + name: Update Packages for Enterprise Linux 7 - $basearch + baseurl: http://REPLACE_ME/rel-eng/repos/rhel-7.0-z/x86_64/ + gpgcheck: 0 + - section: rhel-7-server-optional-rpms + name: Optional Packages for Enterprise Linux 7 - $basearch + baseurl: http://REPLACE_ME/released/RHEL-7/7.0/Server-optional/x86_64/os/ + gpgcheck: 0 + - section: rhel-7-server-extras-rpms + name: Optional Packages for Enterprise Linux 7 - $basearch + baseurl: http://REPLACE_ME/rel-eng/EXTRAS-7.0-RHEL-7-20140610.0/compose/Server/x86_64/os/ + gpgcheck: 0 + '6.5': + kickstart_url: http://REPLACE_ME/released/RHEL-6/6.5/Server/x86_64/os/ + repos: + - section: rhel6.5-server-rpms + name: Packages for RHEL 6.5 - $basearch + baseurl: 
http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/$basearch/os/Server + gpgcheck: 0 + - section: rhel-6.5-server-update-rpms + name: Update Packages for Enterprise Linux 6.5 - $basearch + baseurl: http://REPLACE_ME.REPLACE_ME/rel-eng/repos/RHEL-6.5-Z/$basearch/ + gpgcheck: 0 + - section: rhel-6.5-server-optional-rpms + name: Optional Packages for Enterprise Linux 6.5 - $basearch + baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/optional/$basearch/os + gpgcheck: 0 + - section: rhel6.5-server-rpms-32bit + name: Packages for RHEL 6.5 - i386 + baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/i386/os/Server + gpgcheck: 0 + enabled: 1 + - section: rhel-6.5-server-update-rpms-32bit + name: Update Packages for Enterprise Linux 6.5 - i686 + baseurl: http://REPLACE_ME.REPLACE_ME/rel-eng/repos/RHEL-6.5-Z/i686/ + gpgcheck: 0 + enabled: 1 + - section: rhel-6.5-server-optional-rpms-32bit + name: Optional Packages for Enterprise Linux 6.5 - i386 + baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/optional/i386/os + gpgcheck: 0 + enabled: 1 + subscription: + username: REPLACE_ME + password: HWj8TE28Qi0eP2c + pool: 8a85f9823e3d5e43013e3ddd4e2a0977 + config: + selinux: permissive + ntp_server: 0.pool.ntp.org + dns_servers: + - 10.4.1.1 + - 10.4.0.2 + reboot_delay: 1 + initial_boot_timeout: 180 +node: + prefix: + - rdo + - pupt + - ffqiotcxz1 + - null +product: + repo_type: production + name: rdo + short_name: rdo + rpm: + CentOS: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm + Fedora: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm + RedHat: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm + short_version: ju + repo: + production: + CentOS: + 7.0.1406: http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7 + '6.5': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6 + '7.0': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7 + Fedora: + '20': http://repos.fedorapeople.org/repos/openstack/openstack-juno/fedora-20 + '21': http://repos.fedorapeople.org/repos/openstack/openstack-juno/fedora-21 + RedHat: + '6.6': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6 + '6.5': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6 + '7.0': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7 + version: juno + config: + enable_epel: y + short_repo: prod +tester: + name: tempest +distro_reboot_options: '--no-wall '' Reboot is triggered by Ansible'' ' +job: + verbosity: 1 + archive: + - '{{ tempest.dir }}/etc/tempest.conf' + - '{{ tempest.dir }}/etc/tempest.conf.sample' + - '{{ tempest.dir }}/*.log' + - '{{ tempest.dir }}/*.xml' + - /root/ + - /var/log/ + - /etc/nova + - /etc/ceilometer + - /etc/cinder + - /etc/glance + - /etc/keystone + - /etc/neutron + - /etc/ntp + - /etc/puppet + - /etc/qpid + - /etc/qpidd.conf + - /root + - /etc/yum.repos.d + - /etc/yum.repos.d +topology: + name: multinode + short_name: mt +workaround_neutron_ovs_udev_loop: true +workaround_glance_table_utf8: false +verbosity: + debug: 0 + info: 1 + warning: 2 + warn: 2 + errors: 3 +provisioner: + username: admin + network: + type: nova + name: external + skip: skip_provision + foreman_url: https://10.2.84.2/api/v2/ + password: octopus + type: foreman +workaround_nova_compute_fix: false +workarounds: + enabled: true diff --git a/fuel/prototypes/deploy/README.rst 
b/fuel/prototypes/deploy/README.rst index ad77583..35898b0 100644 --- a/fuel/prototypes/deploy/README.rst +++ b/fuel/prototypes/deploy/README.rst @@ -14,8 +14,11 @@ Conceptually the deployer contains of a number of entities: Both the dea.yaml and dha.yaml can be created from an existing Fuel deployment, in a way making a xerox copy of it for re-deployment. For this, the create_templates structure is copied to the Fuel master and the create_templates.sh is run there. -In the examples directory, VM and network definitions for libvirt together with matching dea.yaml and dha.yaml can be found. The DEA configuration is made using a opnfv-59 deployment. +In the examples/libvirt directory, VM and network definitions for libvirt together with matching dea.yaml and dha.yaml can be found. The DEA configuration is made using a opnfv-59 deployment. + +There is also a hybrid libirt/IPMI adapter with an example dea.yaml and dha.yaml for a small one controller + one compute deploy in examples/ipmi. The details and API description for DEA and DHA can be found in the documentation directory. -See the README in examples to learn how to get a libvirt Fuel deploy up and running! +See the READMEs in the examples dirctories to get going with a Fuel deployment for your environment - or write and contribute your own hardware adapter for your environment! + diff --git a/fuel/prototypes/deploy/deploy/deploy.sh b/fuel/prototypes/deploy/deploy/deploy.sh index 50488a4..938efb6 100755 --- a/fuel/prototypes/deploy/deploy/deploy.sh +++ b/fuel/prototypes/deploy/deploy/deploy.sh @@ -107,18 +107,20 @@ fuelGateway=`dea getFuelGateway` || error_exit "Could not get Fuel Gateway" fuelHostname=`dea getFuelHostname` || error_exit "Could not get Fuel hostname" fuelDns=`dea getFuelDns` || error_exit "Could not get Fuel DNS" fuelNodeId=`dha getFuelNodeId` || error_exit "Could not get fuel node id" +dha useFuelCustomInstall +fuelCustom=$? # Stop all VMs for id in `dha getAllNodeIds` do - if [ $nofuel -eq 0 ]; then + if [ $nofuel -eq 0 -o $fuelCustom -eq 0 ]; then if [ $fuelNodeId -ne $id ]; then echo "Powering off id $id" dha nodePowerOff $id fi else - echo "Powering off id $id" - dha nodePowerOff $id + echo "Powering off id $id" + dha nodePowerOff $id fi done @@ -135,7 +137,7 @@ if [ $nofuel -eq 1 ]; then isofile=$deployiso if dha useFuelCustomInstall; then echo "Custom Fuel install" - dha fuelCustomInstall || error_exit "Failed to run Fuel custom install" + dha fuelCustomInstall $isofile || error_exit "Failed to run Fuel custom install" else echo "Ordinary Fuel install" . ${functions}/install_iso.sh || error_exit "Failed to install Fuel" diff --git a/fuel/prototypes/deploy/deploy/dha-adapters/ipmi.sh b/fuel/prototypes/deploy/deploy/dha-adapters/ipmi.sh new file mode 100755 index 0000000..00a621d --- /dev/null +++ b/fuel/prototypes/deploy/deploy/dha-adapters/ipmi.sh @@ -0,0 +1,440 @@ +#!/bin/bash +############################################################################## +# Copyright (c) 2015 Ericsson AB and others. +# stefan.k.berg@ericsson.com +# jonas.bjurel@ericsson.com +# All rights reserved. 
This program and the accompanying materials +# are made available under the terms of the Apache License, Version 2.0 +# which accompanies this distribution, and is available at +# http://www.apache.org/licenses/LICENSE-2.0 +############################################################################## + + + +######################################################################## +# Internal functions BEGIN + + +dha_f_err() +{ + local rc + local cmd + + rc=$1 + shift + + echo "$@" >&2 + echo "Exit with code $rc" >&2 + + exit $rc +} + +dha_f_run() +{ + $@ + rc=$? + if [ $rc -ne 0 ]; then + dha_f_err $rc "running $@" >&2 + exit $rc + fi +} + + +dha_f_ipmi() +{ + + local nodeId + local ipmiIp + local ipmiUser + local ipmiPass + + nodeId=$1 + shift + + ipmiIp=$($DHAPARSE $DHAFILE getNodeProperty $nodeId ipmiIp) + ipmiUser=$($DHAPARSE $DHAFILE getNodeProperty $nodeId ipmiUser) + ipmiPass=$($DHAPARSE $DHAFILE getNodeProperty $nodeId ipmiPass) + + test -n "$ipmiIp" || error_exit "Could not get IPMI IP" + test -n "$ipmiUser" || error_exit "Could not get IPMI username" + test -n "$ipmiPass" || error_exit "Could not get IPMI password" + + ipmitool -I lanplus -A password -H $ipmiIp -U $ipmiUser -P $ipmiPass \ + $@ +} + +# Internal functions END +######################################################################## + + +true=0 +false=1 + +# API: Get the DHA API version supported by this adapter +dha_getApiVersion () +{ + echo "1.0" +} + +# API: Get the name of this adapter +dha_getAdapterName () +{ + echo "ipmi" +} + +# API: ### Node identity functions ### +# API: Node numbering is sequential. + +# API: Get a list of all defined node ids, sorted in ascending order +dha_getAllNodeIds() +{ + dha_f_run $DHAPARSE $DHAFILE getNodes | sort -n +} + + +# API: Get ID for Fuel node ID +dha_getFuelNodeId() +{ + for node in `dha_getAllNodeIds` + do + if [ -n "`dha_f_run $DHAPARSE $DHAFILE getNodeProperty $node isFuel`" ] + then + echo $node + fi + done +} + +# API: Get node property +# API: Argument 1: node id +# API: Argument 2: Property +dha_getNodeProperty() +{ + dha_f_run $DHAPARSE $DHAFILE getNodeProperty $1 $2 +} + + +# API: Get MAC address for the PXE interface of this node. If not +# API: defined, an empty string will be returned. +# API: Argument 1: Node id +dha_getNodePxeMac() +{ + dha_getNodeProperty $1 pxeMac +} + + +### Node operation functions ### + +# API: Use custom installation method for Fuel master? +# API: Returns 0 if true, 1 if false +dha_useFuelCustomInstall() +{ + $DHAPARSE $DHAFILE get fuelCustomInstall | grep -qi true + rc=$? + return $rc +} + +# API: Fuel custom installation method +# API: Leaving the Fuel master powered on and booting from ISO at exit +# API: Argument 1: Full path to ISO file to install +dha_fuelCustomInstall() +{ + if [ ! 
-e $1 ]; then + error_exit "Could not access ISO file $1" + fi + + dha_useFuelCustomInstall || dha_f_err 1 "dha_fuelCustomInstall not supported" + + fuelIp=`dea getFuelIp` || error_exit "Could not get fuel IP" + fuelNodeId=`dha getFuelNodeId` || error_exit "Could not get fuel node id" + virtName=`$DHAPARSE $DHAFILE getNodeProperty $fuelNodeId libvirtName` + + # Power off the node + virsh destroy $virtName + sleep 5 + + # Zero the MBR + fueldisk=`virsh dumpxml $virtName | \ + grep "" | \ + sed "/<\/os>/i\ + ${bootline}" > $tmpdir/vm.xml || error_exit "Could not set bootorder" + virsh define $tmpdir/vm.xml || error_exit "Could not set bootorder" + + + # Get name of CD device + cdDev=`virsh domblklist $virtName | tail -n +3 | awk '{ print $1 }' | grep ^hd` + + # Eject and insert ISO + virsh change-media $virtName --config --eject $cdDev + sleep 5 + virsh change-media $virtName --config --insert $cdDev $1 || error_exit "Could not insert CD $1" + sleep 5 + + virsh start $virtName || error_exit "Could not start $virtName" + sleep 5 + + # wait for node up + echo "Waiting for Fuel master to accept SSH" + while true + do + ssh root@${fuelIp} date 2>/dev/null + if [ $? -eq 0 ]; then + break + fi + sleep 10 + done + + # Wait until fuelmenu is up + echo "Waiting for fuelmenu to come up" + menuPid="" + while [ -z "$menuPid" ] + do + menuPid=`ssh root@${fuelIp} "ps -ef" 2>&1 | grep fuelmenu | grep -v grep | awk '{ print $2 }'` + sleep 10 + done + + # This is where we inject our own astute.yaml settings + scp -q $deafile root@${fuelIp}:. || error_exit "Could not copy DEA file to Fuel" + echo "Uploading build tools to Fuel server" + ssh root@${fuelIp} rm -rf tools || error_exit "Error cleaning old tools structure" + scp -qrp $topdir/tools root@${fuelIp}:. || error_exit "Error copying tools" + echo "Running transplant #0" + ssh root@${fuelIp} "cd tools; ./transplant0.sh ../`basename $deafile`" \ + || error_exit "Error running transplant sequence #0" + + + + # Let the Fuel deployment continue + echo "Found menu as PID $menuPid, now killing it" + ssh root@${fuelIp} "kill $menuPid" 2>/dev/null + + # Wait until installation complete + echo "Waiting for bootstrap of Fuel node to complete" + while true + do + ssh root@${fuelIp} "ps -ef" 2>/dev/null \ + | grep -q /usr/local/sbin/bootstrap_admin_node + if [ $? 
-ne 0 ]; then + break + fi + sleep 10 + done + + echo "Waiting for one minute for Fuel to stabilize" + sleep 1m + +} + +# API: Get power on strategy from DHA +# API: Returns one of two values: +# API: all: Power on all nodes simultaneously +# API: sequence: Power on node by node, wait for Fuel detection +dha_getPowerOnStrategy() +{ + local strategy + + strategy=`$DHAPARSE $DHAFILE get powerOnStrategy` + + if [ "$strategy" == "all" ]; then + echo $strategy + elif + [ "$strategy" == "sequence" ]; then + echo $strategy + else + dha_f_err 1 "Could not parse strategy from DHA, got $strategy" + fi +} + +# API: Power on node +# API: Argument 1: node id +dha_nodePowerOn() +{ + local nodeId + + nodeId=$1 + state=$(dha_f_ipmi $1 chassis power status) || error_exit "Could not get IPMI power status" + echo "state $state" + + + if [ "$(echo $state | sed 's/.* //')" == "off" ]; then + dha_f_ipmi $1 chassis power on + fi +} + +# API: Power off node +# API: Argument 1: node id +dha_nodePowerOff() +{ + local nodeId + + nodeId=$1 + state=$(dha_f_ipmi $1 chassis power status) || error_exit "Could not get IPMI power status" + echo "state $state" + + + if [ "$(echo $state | sed 's/.* //')" != "off" ]; then + dha_f_ipmi $1 chassis power off + fi +} + +# API: Reset node +# API: Argument 1: node id +dha_nodeReset() +{ + local nodeId + + nodeId=$1 + state=$(dha_f_ipmi $1 chassis power reset) || error_exit "Could not get IPMI power status" + echo "state $state" + + + if [ "$(echo $state | sed 's/.* //')" != "off" ]; then + dha_f_ipmi $1 chassis power reset + fi +} + +# Boot order and ISO boot file + +# API: Is the node able to commit boot order without power toggle? +# API: Argument 1: node id +# API: Returns 0 if true, 1 if false +dha_nodeCanSetBootOrderLive() +{ + return $true +} + +# API: Set node boot order +# API: Argument 1: node id +# API: Argument 2: Space separated line of boot order - boot ids are "pxe", "disk" and "iso" +# Strategy for IPMI: Always set boot order to persistent except in the case of CDROM. +dha_nodeSetBootOrder() +{ + local id + local order + + id=$1 + shift + order=$1 + + if [ "$order" == "pxe" ]; then + dha_f_ipmi $id chassis bootdev pxe options=persistent || error_exit "Could not get IPMI power status" + elif [ "$order" == "iso" ]; then + dha_f_ipmi $id chassis bootdev cdrom || error_exit "Could not get IPMI power status" + elif [ "$order" == "disk" ]; then + dha_f_ipmi $id chassis bootdev disk options=persistent || error_exit "Could not get IPMI power status" + else + error_exit "Unknown boot type: $order" + fi +} + +# API: Is the node able to operate on ISO media? +# API: Argument 1: node id +# API: Returns 0 if true, 1 if false +dha_nodeCanSetIso() +{ + return $false +} + +# API: Is the node able to insert add eject ISO files without power toggle? +# API: Argument 1: node id +# API: Returns 0 if true, 1 if false +dha_nodeCanHandeIsoLive() +{ + return $false +} + +# API: Insert ISO into virtualDVD +# API: Argument 1: node id +# API: Argument 2: iso file +dha_nodeInsertIso() +{ + error_exit "Node can not handle InsertIso" +} + +# API: Eject ISO from virtual DVD +# API: Argument 1: node id +dha_nodeEjectIso() +{ + error_exit "Node can not handle InsertIso" +} + +# API: Wait until a suitable time to change the boot order to +# API: "disk iso" when ISO has been booted. Can't be too long, nor +# API: too short... +# API: We should make a smart trigger for this somehow... +dha_waitForIsoBoot() +{ + echo "waitForIsoBoot: Not used by ipmi" +} + +# API: Is the node able to reset its MBR? 
+# API: Returns 0 if true, 1 if false +dha_nodeCanZeroMBR() +{ + return $false +} + +# API: Reset the node's MBR +dha_nodeZeroMBR() +{ + error_exit "Node $1 does not support ZeroMBR" +} + + +# API: Entry point for dha functions +# API: Typically do not call "dha_node_zeroMBR" but "dha node_ZeroMBR" +# API: +# API: Before calling dha, the adapter file must gave been sourced with +# API: the DHA file name as argument +dha() +{ + if [ -z "$DHAFILE" ]; then + error_exit "dha_setup has not been run" + fi + + + if type dha_$1 &>/dev/null; then + cmd=$1 + shift + dha_$cmd $@ + return $? + else + error_exit "No such function dha_$1 defined" + fi +} + +if [ "$1" == "api" ]; then + egrep "^# API: |dha.*\(\)" $0 | sed 's/^# API: /# /' | grep -v dha_f_ | sed 's/)$/)\n/' +else + dhatopdir=$(dirname $(readlink -f $BASH_SOURCE)) + DHAPARSE="$dhatopdir/dhaParse.py" + DHAFILE=$1 + + if [ ! -f $DHAFILE ]; then + error_exit "No such DHA file: $DHAFILE" + else + echo "Adapter init" + echo "$@" + echo "DHAPARSE: $DHAPARSE" + echo "DHAFILE: $DHAFILE" + fi + +fi diff --git a/fuel/prototypes/deploy/deploy/dha-adapters/libvirt.sh b/fuel/prototypes/deploy/deploy/dha-adapters/libvirt.sh index 0e91f49..8d9edde 100755 --- a/fuel/prototypes/deploy/deploy/dha-adapters/libvirt.sh +++ b/fuel/prototypes/deploy/deploy/dha-adapters/libvirt.sh @@ -248,7 +248,7 @@ dha_nodeInsertIso() virtName=`$DHAPARSE $DHAFILE getNodeProperty $1 libvirtName` isoFile=$2 - virsh change-media fuel-master --insert hdc $isoFile + virsh change-media $virtName --insert hdc $isoFile } # API: Eject ISO from virtual DVD @@ -263,7 +263,7 @@ dha_nodeEjectIso() virsh change-media $virtName --eject hdc } -# API: Wait until a suitable time to change the boot order to +# API: Wait until a suitable time to change the boot order to # API: "disk iso" when ISO has been booted. Can't be too long, nor # API: too short... # API: We should make a smart trigger for this somehow... diff --git a/fuel/prototypes/deploy/deploy/functions/dea-api.sh b/fuel/prototypes/deploy/deploy/functions/dea-api.sh index 9401192..61d670f 100755 --- a/fuel/prototypes/deploy/deploy/functions/dea-api.sh +++ b/fuel/prototypes/deploy/deploy/functions/dea-api.sh @@ -101,7 +101,7 @@ dea_getFuelDns() # API: Convert a normal MAC to a Fuel short mac for --node-id dea_convertMacToShortMac() { - echo $1 | sed 's/.*..:..:..:..:\(..:..\).*/\1/' + echo $1 | sed 's/.*..:..:..:..:\(..:..\).*/\1/' | tr [A-Z] [a-z] } diff --git a/fuel/prototypes/deploy/deploy/functions/deploy_env.sh b/fuel/prototypes/deploy/deploy/functions/deploy_env.sh index 139fcc5..e650f4d 100755 --- a/fuel/prototypes/deploy/deploy/functions/deploy_env.sh +++ b/fuel/prototypes/deploy/deploy/functions/deploy_env.sh @@ -14,6 +14,10 @@ echo "Uploading build tools to Fuel server" ssh root@${fuelIp} rm -rf tools || error_exit "Error cleaning old tools structure" scp -qrp $topdir/tools root@${fuelIp}:. || error_exit "Error copying tools" +echo "Uploading templating tols to Fuel server" +ssh root@${fuelIp} rm -rf create_templates || error_exit "Error cleaning old create_templates structure" +scp -qrp $topdir/../create_templates root@${fuelIp}:. 
|| error_exit "Error copying create_templates" + # Refuse to run if environment already present envcnt=`fuel env | tail -n +3 | grep -v '^$' | wc -l` if [ $envcnt -ne 0 ]; then diff --git a/fuel/prototypes/deploy/deploy/functions/patch-iso.sh b/fuel/prototypes/deploy/deploy/functions/patch-iso.sh index da1996b..933281f 100755 --- a/fuel/prototypes/deploy/deploy/functions/patch-iso.sh +++ b/fuel/prototypes/deploy/deploy/functions/patch-iso.sh @@ -77,6 +77,7 @@ sed -i "s/ hostname=[^ ]*/ hostname=$fuelHostname/" isolinux/isolinux.cfg sed -i "s/ showmenu=[^ ]*/ showmenu=yes/" isolinux/isolinux.cfg echo "isolinux.cfg after: `grep netmask isolinux/isolinux.cfg`" +rm -vf $newiso echo "Creating iso $newiso" mkisofs -quiet -r \ -J -R -b isolinux/isolinux.bin \ diff --git a/fuel/prototypes/deploy/deploy/tools/transplant_interfaces.py b/fuel/prototypes/deploy/deploy/tools/transplant_interfaces.py index 758372a..609f360 100755 --- a/fuel/prototypes/deploy/deploy/tools/transplant_interfaces.py +++ b/fuel/prototypes/deploy/deploy/tools/transplant_interfaces.py @@ -63,11 +63,14 @@ for interface in doc1: assigned = [] nw = {} interface["assigned_networks"] = [] - for nwname in nodeInfo["interfaces"][interface["name"]]: - iface = {} - iface["id"] = nwlookup[nwname] - iface["name"] = nwname - interface["assigned_networks"].append(iface) + try: + for nwname in nodeInfo["interfaces"][interface["name"]]: + iface = {} + iface["id"] = nwlookup[nwname] + iface["name"] = nwname + interface["assigned_networks"].append(iface) + except: + print "No match for interface " + interface["name"] f3 = open(infile, 'w') f3.write(yaml.dump(doc1, default_flow_style=False)) diff --git a/fuel/prototypes/deploy/deploy/verify_dha.sh b/fuel/prototypes/deploy/deploy/verify_dha.sh index 5b09721..6e2b75f 100755 --- a/fuel/prototypes/deploy/deploy/verify_dha.sh +++ b/fuel/prototypes/deploy/deploy/verify_dha.sh @@ -11,7 +11,7 @@ error_exit() { - echo "Erroxxxr: $@" + echo "Error: $@" >&2 exit 1 } @@ -77,7 +77,7 @@ do else libvirtName="" fi - + if [ $id == "`dha getFuelNodeId`" ]; then echo "$id: `dha getNodeProperty $id pxeMac` $libvirtName <--- Fuel master" else @@ -122,5 +122,4 @@ else echo "no" fi - echo "Done" diff --git a/fuel/prototypes/deploy/examples/ipmi/README.txt b/fuel/prototypes/deploy/examples/ipmi/README.txt new file mode 100644 index 0000000..2cbffa9 --- /dev/null +++ b/fuel/prototypes/deploy/examples/ipmi/README.txt @@ -0,0 +1,10 @@ +This is a hybrid IPMI DHA, where the Fuel master is run as a KVM +VM, but all other nodes are real iron under IPMI control. + +In "conf" is an example dea.yaml, dha.yaml and a VM definition for the +Fuel master. You need to tune these so they match your specific +environment. In addition you need to create a bridge from the VM to +the admin (PXE) network of the physical nodes. An example snippet for +/etc/network/interfaces which also configures NAT can be found in the +README.txt in conf. + diff --git a/fuel/prototypes/deploy/examples/ipmi/conf/README.txt b/fuel/prototypes/deploy/examples/ipmi/conf/README.txt new file mode 100644 index 0000000..a8608dc --- /dev/null +++ b/fuel/prototypes/deploy/examples/ipmi/conf/README.txt @@ -0,0 +1,12 @@ +Add this snippet into /etc/network/interfaces after making sure to +replace p1p1.20 with your actual outbound interface in order to +provide network access to the Fuel master for DNS and NTP. 
+ +iface vfuelnet inet static + bridge_ports em1 + address 10.30.0.1 + netmask 255.255.255.0 + pre-down iptables -t nat -D POSTROUTING --out-interface p1p1.20 -j MASQUERADE -m comment --comment "vfuelnet" + pre-down iptables -D FORWARD --in-interface vfuelnet --out-interface p1p1.20 -m comment --comment "vfuelnet" + post-up iptables -t nat -A POSTROUTING --out-interface p1p1.20 -j MASQUERADE -m comment --comment "vfuelnet" + post-up iptables -A FORWARD --in-interface vfuelnet --out-interface p1p1.20 -m comment --comment "vfuelnet" diff --git a/fuel/prototypes/deploy/examples/ipmi/conf/dea.yaml b/fuel/prototypes/deploy/examples/ipmi/conf/dea.yaml new file mode 100644 index 0000000..166b68a --- /dev/null +++ b/fuel/prototypes/deploy/examples/ipmi/conf/dea.yaml @@ -0,0 +1,983 @@ +title: Deployment Environment Adapter (DEA) +# DEA API version supported +version: 1.1 +created: Tue May 5 15:33:07 UTC 2015 +comment: Test environment Ericsson Montreal +nodes: +- id: 1 + interfaces: + eth0: + - fuelweb_admin + eth2: + - public + - management + - storage + - private + role: controller +- id: 2 + interfaces: + eth0: + - fuelweb_admin + eth2: + - public + - management + - storage + - private + role: compute +environment_mode: multinode +environment_name: Stefan3_auto +fuel: + ADMIN_NETWORK: + dhcp_pool_end: 10.30.0.254 + dhcp_pool_start: 10.30.0.3 + ipaddress: 10.30.0.2 + netmask: 255.255.255.0 + DNS_DOMAIN: opnfvericsson.ca + DNS_SEARCH: opnfvericsson.ca + DNS_UPSTREAM: 10.118.32.193 + FUEL_ACCESS: + password: admin + user: admin + HOSTNAME: mrberg-fuel + NTP1: 0.ca.pool.ntp.org + NTP2: 1.ca.pool.ntp.org + NTP3: 2.ca.pool.ntp.org +controller: +- action: add-br + name: br-eth0 +- action: add-port + bridge: br-eth0 + name: eth0 +- action: add-br + name: br-eth1 +- action: add-port + bridge: br-eth1 + name: eth1 +- action: add-br + name: br-eth2 +- action: add-port + bridge: br-eth2 + name: eth2 +- action: add-br + name: br-eth3 +- action: add-port + bridge: br-eth3 + name: eth3 +- action: add-br + name: br-eth4 +- action: add-port + bridge: br-eth4 + name: eth4 +- action: add-br + name: br-eth5 +- action: add-port + bridge: br-eth5 + name: eth5 +- action: add-br + name: br-ex +- action: add-br + name: br-mgmt +- action: add-br + name: br-storage +- action: add-br + name: br-fw-admin +- action: add-patch + bridges: + - br-eth2 + - br-storage + tags: + - 220 + - 0 + vlan_ids: + - 220 + - 0 +- action: add-patch + bridges: + - br-eth2 + - br-mgmt + tags: + - 320 + - 0 + vlan_ids: + - 320 + - 0 +- action: add-patch + bridges: + - br-eth0 + - br-fw-admin + trunks: + - 0 +- action: add-patch + bridges: + - br-eth2 + - br-ex + tags: + - 120 + - 0 + vlan_ids: + - 120 + - 0 +- action: add-br + name: br-prv +- action: add-patch + bridges: + - br-eth2 + - br-prv +compute: +- action: add-br + name: br-eth0 +- action: add-port + bridge: br-eth0 + name: eth0 +- action: add-br + name: br-eth1 +- action: add-port + bridge: br-eth1 + name: eth1 +- action: add-br + name: br-eth2 +- action: add-port + bridge: br-eth2 + name: eth2 +- action: add-br + name: br-eth3 +- action: add-port + bridge: br-eth3 + name: eth3 +- action: add-br + name: br-eth4 +- action: add-port + bridge: br-eth4 + name: eth4 +- action: add-br + name: br-eth5 +- action: add-port + bridge: br-eth5 + name: eth5 +- action: add-br + name: br-mgmt +- action: add-br + name: br-storage +- action: add-br + name: br-fw-admin +- action: add-patch + bridges: + - br-eth2 + - br-storage + tags: + - 220 + - 0 + vlan_ids: + - 220 + - 0 +- action: add-patch + 
bridges: + - br-eth2 + - br-mgmt + tags: + - 320 + - 0 + vlan_ids: + - 320 + - 0 +- action: add-patch + bridges: + - br-eth0 + - br-fw-admin + trunks: + - 0 +- action: add-br + name: br-prv +- action: add-patch + bridges: + - br-eth2 + - br-prv +opnfv: + compute: {} + controller: {} +network: + networking_parameters: + base_mac: fa:16:3e:00:00:00 + dns_nameservers: + - 10.118.32.193 + - 8.8.8.8 + floating_ranges: + - - 172.16.0.130 + - 172.16.0.254 + gre_id_range: + - 2 + - 65535 + internal_cidr: 192.168.111.0/24 + internal_gateway: 192.168.111.1 + net_l23_provider: ovs + segmentation_type: vlan + vlan_range: + - 2022 + - 2023 + networks: + - cidr: 172.16.0.0/24 + gateway: 172.16.0.1 + ip_ranges: + - - 172.16.0.2 + - 172.16.0.126 + meta: + assign_vip: true + cidr: 172.16.0.0/24 + configurable: true + floating_range_var: floating_ranges + ip_range: + - 172.16.0.2 + - 172.16.0.126 + map_priority: 1 + name: public + notation: ip_ranges + render_addr_mask: public + render_type: null + use_gateway: true + vlan_start: null + name: public + vlan_start: 120 + - cidr: 192.168.0.0/24 + gateway: null + ip_ranges: + - - 192.168.0.2 + - 192.168.0.254 + meta: + assign_vip: true + cidr: 192.168.0.0/24 + configurable: true + map_priority: 2 + name: management + notation: cidr + render_addr_mask: internal + render_type: cidr + use_gateway: false + vlan_start: 101 + name: management + vlan_start: 320 + - cidr: 192.168.1.0/24 + gateway: null + ip_ranges: + - - 192.168.1.2 + - 192.168.1.254 + meta: + assign_vip: false + cidr: 192.168.1.0/24 + configurable: true + map_priority: 2 + name: storage + notation: cidr + render_addr_mask: storage + render_type: cidr + use_gateway: false + vlan_start: 102 + name: storage + vlan_start: 220 + - cidr: null + gateway: null + ip_ranges: [] + meta: + assign_vip: false + configurable: false + map_priority: 2 + name: private + neutron_vlan_range: true + notation: null + render_addr_mask: null + render_type: null + seg_type: vlan + use_gateway: false + vlan_start: null + name: private + vlan_start: null + - cidr: 10.30.0.0/24 + gateway: null + ip_ranges: + - - 10.30.0.3 + - 10.30.0.254 + meta: + assign_vip: false + configurable: false + map_priority: 0 + notation: ip_ranges + render_addr_mask: null + render_type: null + unmovable: true + use_gateway: true + name: fuelweb_admin + vlan_start: null +settings: + editable: + access: + email: + description: Email address for Administrator + label: email + type: text + value: admin@localhost + weight: 40 + metadata: + label: Access + weight: 10 + password: + description: Password for Administrator + label: password + type: password + value: admin + weight: 20 + tenant: + description: Tenant (project) name for Administrator + label: tenant + regex: + error: Invalid tenant name + source: ^(?!services$)(?!nova$)(?!glance$)(?!keystone$)(?!neutron$)(?!cinder$)(?!swift$)(?!ceph$)(?![Gg]uest$).* + type: text + value: admin + weight: 30 + user: + description: Username for Administrator + label: username + regex: + error: Invalid username + source: ^(?!services$)(?!nova$)(?!glance$)(?!keystone$)(?!neutron$)(?!cinder$)(?!swift$)(?!ceph$)(?![Gg]uest$).* + type: text + value: admin + weight: 10 + additional_components: + ceilometer: + description: If selected, Ceilometer component will be installed + label: Install Ceilometer + type: checkbox + value: false + weight: 40 + heat: + description: '' + label: '' + type: hidden + value: true + weight: 30 + metadata: + label: Additional Components + weight: 20 + murano: + description: If selected, 
Murano component will be installed + label: Install Murano + restrictions: + - cluster:net_provider != 'neutron' + type: checkbox + value: false + weight: 20 + sahara: + description: If selected, Sahara component will be installed + label: Install Sahara + type: checkbox + value: false + weight: 10 + common: + auth_key: + description: Public key(s) to include in authorized_keys on deployed nodes + label: Public Key + type: text + value: '' + weight: 70 + auto_assign_floating_ip: + description: If selected, OpenStack will automatically assign a floating IP + to a new instance + label: Auto assign floating IP + restrictions: + - cluster:net_provider == 'neutron' + type: checkbox + value: false + weight: 40 + compute_scheduler_driver: + label: Scheduler driver + type: radio + value: nova.scheduler.filter_scheduler.FilterScheduler + values: + - data: nova.scheduler.filter_scheduler.FilterScheduler + description: Currently the most advanced OpenStack scheduler. See the OpenStack + documentation for details. + label: Filter scheduler + - data: nova.scheduler.simple.SimpleScheduler + description: This is 'naive' scheduler which tries to find the least loaded + host + label: Simple scheduler + weight: 40 + debug: + description: Debug logging mode provides more information, but requires more + disk space. + label: OpenStack debug logging + type: checkbox + value: false + weight: 20 + disable_offload: + description: If set, generic segmentation offload (gso) and generic receive + offload (gro) on physical nics will be disabled. See ethtool man. + label: Disable generic offload on physical nics + restrictions: + - action: hide + condition: cluster:net_provider == 'neutron' and networking_parameters:segmentation_type + == 'gre' + type: checkbox + value: true + weight: 80 + libvirt_type: + label: Hypervisor type + type: radio + value: kvm + values: + - data: kvm + description: Choose this type of hypervisor if you run OpenStack on hardware + label: KVM + restrictions: + - settings:common.libvirt_type.value == 'vcenter' + - data: qemu + description: Choose this type of hypervisor if you run OpenStack on virtual + hosts. + label: QEMU + restrictions: + - settings:common.libvirt_type.value == 'vcenter' + - data: vcenter + description: Choose this type of hypervisor if you run OpenStack in a vCenter + environment. + label: vCenter + restrictions: + - settings:common.libvirt_type.value != 'vcenter' or cluster:net_provider + == 'neutron' + weight: 30 + metadata: + label: Common + weight: 30 + nova_quota: + description: Quotas are used to limit CPU and memory usage for tenants. Enabling + quotas will increase load on the Nova database. + label: Nova quotas + type: checkbox + value: false + weight: 25 + resume_guests_state_on_host_boot: + description: Whether to resume previous guests state when the host reboots. + If enabled, this option causes guests assigned to the host to resume their + previous state. If the guest was running a restart will be attempted when + nova-compute starts. If the guest was not running previously, a restart + will not be attempted. + label: Resume guests state on host boot + type: checkbox + value: true + weight: 60 + use_cow_images: + description: For most cases you will want qcow format. If it's disabled, raw + image format will be used to run VMs. OpenStack with raw format currently + does not support snapshotting. 
+ label: Use qcow format for images + type: checkbox + value: true + weight: 50 + corosync: + group: + description: '' + label: Group + type: text + value: 226.94.1.1 + weight: 10 + metadata: + label: Corosync + restrictions: + - action: hide + condition: 'true' + weight: 50 + port: + description: '' + label: Port + type: text + value: '12000' + weight: 20 + verified: + description: Set True only if multicast is configured correctly on router. + label: Need to pass network verification. + type: checkbox + value: false + weight: 10 + external_dns: + dns_list: + description: List of upstream DNS servers, separated by comma + label: DNS list + type: text + value: 10.118.32.193, 8.8.8.8 + weight: 10 + metadata: + label: Upstream DNS + weight: 90 + external_ntp: + metadata: + label: Upstream NTP + weight: 100 + ntp_list: + description: List of upstream NTP servers, separated by comma + label: NTP servers list + type: text + value: 0.pool.ntp.org, 1.pool.ntp.org + weight: 10 + kernel_params: + kernel: + description: Default kernel parameters + label: Initial parameters + type: text + value: console=ttyS0,9600 console=tty0 rootdelay=90 nomodeset + weight: 45 + metadata: + label: Kernel parameters + weight: 40 + neutron_mellanox: + metadata: + enabled: true + label: Mellanox Neutron components + toggleable: false + weight: 50 + plugin: + label: Mellanox drivers and SR-IOV plugin + type: radio + value: disabled + values: + - data: disabled + description: If selected, Mellanox drivers, Neutron and Cinder plugin will + not be installed. + label: Mellanox drivers and plugins disabled + restrictions: + - settings:storage.iser.value == true + - data: drivers_only + description: If selected, Mellanox Ethernet drivers will be installed to + support networking over Mellanox NIC. Mellanox Neutron plugin will not + be installed. + label: Install only Mellanox drivers + restrictions: + - settings:common.libvirt_type.value != 'kvm' + - data: ethernet + description: If selected, both Mellanox Ethernet drivers and Mellanox network + acceleration (Neutron) plugin will be installed. + label: Install Mellanox drivers and SR-IOV plugin + restrictions: + - settings:common.libvirt_type.value != 'kvm' or not (cluster:net_provider + == 'neutron' and networking_parameters:segmentation_type == 'vlan') + weight: 60 + vf_num: + description: Note that one virtual function will be reserved to the storage + network, in case of choosing iSER. + label: Number of virtual NICs + restrictions: + - settings:neutron_mellanox.plugin.value != 'ethernet' + type: text + value: '16' + weight: 70 + nsx_plugin: + connector_type: + description: Default network transport type to use + label: NSX connector type + type: select + value: stt + values: + - data: gre + label: GRE + - data: ipsec_gre + label: GRE over IPSec + - data: stt + label: STT + - data: ipsec_stt + label: STT over IPSec + - data: bridge + label: Bridge + weight: 80 + l3_gw_service_uuid: + description: UUID for the default L3 gateway service to use with this cluster + label: L3 service UUID + regex: + error: Invalid L3 gateway service UUID + source: '[a-f\d]{8}-[a-f\d]{4}-[a-f\d]{4}-[a-f\d]{4}-[a-f\d]{12}' + type: text + value: '' + weight: 50 + metadata: + enabled: false + label: VMware NSX + restrictions: + - action: hide + condition: cluster:net_provider != 'neutron' or networking_parameters:net_l23_provider + != 'nsx' + weight: 20 + nsx_controllers: + description: One or more IPv4[:port] addresses of NSX controller node, separated + by comma (e.g. 
10.30.30.2,192.168.110.254:443) + label: NSX controller endpoint + regex: + error: Invalid controller endpoints, specify valid IPv4[:port] pair + source: ^(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])(:(6553[0-5]|655[0-2][\d]|65[0-4][\d]{2}|6[0-4][\d]{3}|5[\d]{4}|[\d][\d]{0,3}))?(,(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])(:(6553[0-5]|655[0-2][\d]|65[0-4][\d]{2}|6[0-4][\d]{3}|5[\d]{4}|[\d][\d]{0,3}))?)*$ + type: text + value: '' + weight: 60 + nsx_password: + description: Password for Administrator + label: NSX password + regex: + error: Empty password + source: \S + type: password + value: '' + weight: 30 + nsx_username: + description: NSX administrator's username + label: NSX username + regex: + error: Empty username + source: \S + type: text + value: admin + weight: 20 + packages_url: + description: URL to NSX specific packages + label: URL to NSX bits + regex: + error: Invalid URL, specify valid HTTP/HTTPS URL with IPv4 address (e.g. + http://10.20.0.2/nsx) + source: ^https?://(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])(:(6553[0-5]|655[0-2][\d]|65[0-4][\d]{2}|6[0-4][\d]{3}|5[\d]{4}|[\d][\d]{0,3}))?(/.*)?$ + type: text + value: '' + weight: 70 + replication_mode: + description: '' + label: NSX cluster has Service nodes + type: checkbox + value: true + weight: 90 + transport_zone_uuid: + description: UUID of the pre-existing default NSX Transport zone + label: Transport zone UUID + regex: + error: Invalid transport zone UUID + source: '[a-f\d]{8}-[a-f\d]{4}-[a-f\d]{4}-[a-f\d]{4}-[a-f\d]{12}' + type: text + value: '' + weight: 40 + provision: + metadata: + label: Provision + restrictions: + - action: hide + condition: not ('experimental' in version:feature_groups) + weight: 80 + method: + description: Which provision method to use for this cluster. + label: Provision method + type: radio + value: cobbler + values: + - data: image + description: Copying pre-built images on a disk. + label: Image + - data: cobbler + description: Install from scratch using anaconda or debian-installer. + label: Classic (use anaconda or debian-installer) + public_network_assignment: + assign_to_all_nodes: + description: When disabled, public network will be assigned to controllers + and zabbix-server only + label: Assign public network to all nodes + type: checkbox + value: false + weight: 10 + metadata: + label: Public network assignment + restrictions: + - action: hide + condition: cluster:net_provider != 'neutron' + weight: 50 + storage: + ephemeral_ceph: + description: Configures Nova to store ephemeral volumes in RBD. This works + best if Ceph is enabled for volumes and images, too. Enables live migration + of all types of Ceph backed VMs (without this option, live migration will + only work with VMs launched from Cinder volumes). + label: Ceph RBD for ephemeral volumes (Nova) + restrictions: + - settings:common.libvirt_type.value == 'vcenter' + type: checkbox + value: false + weight: 75 + images_ceph: + description: Configures Glance to use the Ceph RBD backend to store images. + If enabled, this option will prevent Swift from installing. + label: Ceph RBD for images (Glance) + type: checkbox + value: false + weight: 30 + images_vcenter: + description: Configures Glance to use the vCenter/ESXi backend to store images. + If enabled, this option will prevent Swift from installing. 
+ label: VMWare vCenter/ESXi datastore for images (Glance) + restrictions: + - settings:common.libvirt_type.value != 'vcenter' + type: checkbox + value: false + weight: 35 + iser: + description: 'High performance block storage: Cinder volumes over iSER protocol + (iSCSI over RDMA). This feature requires SR-IOV capabilities in the NIC, + and will use a dedicated virtual function for the storage network.' + label: iSER protocol for volumes (Cinder) + restrictions: + - settings:storage.volumes_lvm.value != true or settings:common.libvirt_type.value + != 'kvm' + type: checkbox + value: false + weight: 11 + metadata: + label: Storage + weight: 60 + objects_ceph: + description: Configures RadosGW front end for Ceph RBD. This exposes S3 and + Swift API Interfaces. If enabled, this option will prevent Swift from installing. + label: Ceph RadosGW for objects (Swift API) + restrictions: + - settings:storage.images_ceph.value == false + type: checkbox + value: false + weight: 80 + osd_pool_size: + description: Configures the default number of object replicas in Ceph. This + number must be equal to or lower than the number of deployed 'Storage - + Ceph OSD' nodes. + label: Ceph object replication factor + regex: + error: Invalid number + source: ^[1-9]\d*$ + restrictions: + - settings:common.libvirt_type.value == 'vcenter' + type: text + value: '2' + weight: 85 + vc_datacenter: + description: Inventory path to a datacenter. If you want to use ESXi host + as datastore, it should be "ha-datacenter". + label: Datacenter name + regex: + error: Empty datacenter + source: \S + restrictions: + - action: hide + condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value + != 'vcenter' + type: text + value: '' + weight: 65 + vc_datastore: + description: Datastore associated with the datacenter. + label: Datastore name + regex: + error: Empty datastore + source: \S + restrictions: + - action: hide + condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value + != 'vcenter' + type: text + value: '' + weight: 60 + vc_host: + description: IP Address of vCenter/ESXi + label: vCenter/ESXi IP + regex: + error: Specify valid IPv4 address + source: ^(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])$ + restrictions: + - action: hide + condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value + != 'vcenter' + type: text + value: '' + weight: 45 + vc_image_dir: + description: The name of the directory where the glance images will be stored + in the VMware datastore. 
+ label: Datastore Images directory + regex: + error: Empty images directory + source: \S + restrictions: + - action: hide + condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value + != 'vcenter' + type: text + value: /openstack_glance + weight: 70 + vc_password: + description: vCenter/ESXi admin password + label: Password + regex: + error: Empty password + source: \S + restrictions: + - action: hide + condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value + != 'vcenter' + type: password + value: '' + weight: 55 + vc_user: + description: vCenter/ESXi admin username + label: Username + regex: + error: Empty username + source: \S + restrictions: + - action: hide + condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value + != 'vcenter' + type: text + value: '' + weight: 50 + volumes_ceph: + description: Configures Cinder to store volumes in Ceph RBD images. + label: Ceph RBD for volumes (Cinder) + restrictions: + - settings:storage.volumes_lvm.value == true or settings:common.libvirt_type.value + == 'vcenter' + type: checkbox + value: false + weight: 20 + volumes_lvm: + description: Requires at least one Storage - Cinder LVM node. + label: Cinder LVM over iSCSI for volumes + restrictions: + - settings:storage.volumes_ceph.value == true + type: checkbox + value: false + weight: 10 + volumes_vmdk: + description: Configures Cinder to store volumes via VMware vCenter. + label: VMware vCenter for volumes (Cinder) + restrictions: + - settings:common.libvirt_type.value != 'vcenter' or settings:storage.volumes_lvm.value + == true + type: checkbox + value: false + weight: 15 + syslog: + metadata: + label: Syslog + weight: 50 + syslog_port: + description: Remote syslog port + label: Port + regex: + error: Invalid Syslog port + source: ^([1-9][0-9]{0,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])$ + type: text + value: '514' + weight: 20 + syslog_server: + description: Remote syslog hostname + label: Hostname + type: text + value: '' + weight: 10 + syslog_transport: + label: Syslog transport protocol + type: radio + value: tcp + values: + - data: udp + description: '' + label: UDP + - data: tcp + description: '' + label: TCP + weight: 30 + vcenter: + cluster: + description: vCenter cluster name. If you have multiple clusters, use comma + to separate names + label: Cluster + regex: + error: Invalid cluster list + source: ^([^,\ ]+([\ ]*[^,\ ])*)(,[^,\ ]+([\ ]*[^,\ ])*)*$ + type: text + value: '' + weight: 40 + datastore_regex: + description: The Datastore regexp setting specifies the data stores to use + with Compute. For example, "nas.*". 
If you want to use all available datastores, + leave this field blank + label: Datastore regexp + regex: + error: Invalid datastore regexp + source: ^(\S.*\S|\S|)$ + type: text + value: '' + weight: 50 + host_ip: + description: IP Address of vCenter + label: vCenter IP + regex: + error: Specify valid IPv4 address + source: ^(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])$ + type: text + value: '' + weight: 10 + metadata: + label: vCenter + restrictions: + - action: hide + condition: settings:common.libvirt_type.value != 'vcenter' + weight: 20 + use_vcenter: + description: '' + label: '' + type: hidden + value: true + weight: 5 + vc_password: + description: vCenter admin password + label: Password + regex: + error: Empty password + source: \S + type: password + value: admin + weight: 30 + vc_user: + description: vCenter admin username + label: Username + regex: + error: Empty username + source: \S + type: text + value: admin + weight: 20 + vlan_interface: + description: Physical ESXi host ethernet adapter for VLAN networking (e.g. + vmnic1). If empty "vmnic0" is used by default + label: ESXi VLAN interface + restrictions: + - action: hide + condition: cluster:net_provider != 'nova_network' or networking_parameters:net_manager + != 'VlanManager' + type: text + value: '' + weight: 60 + zabbix: + metadata: + label: Zabbix Access + restrictions: + - action: hide + condition: not ('experimental' in version:feature_groups) + weight: 70 + password: + description: Password for Zabbix Administrator + label: password + type: password + value: zabbix + weight: 20 + username: + description: Username for Zabbix Administrator + label: username + type: text + value: admin + weight: 10 diff --git a/fuel/prototypes/deploy/examples/ipmi/conf/dha.yaml b/fuel/prototypes/deploy/examples/ipmi/conf/dha.yaml new file mode 100644 index 0000000..97629b7 --- /dev/null +++ b/fuel/prototypes/deploy/examples/ipmi/conf/dha.yaml @@ -0,0 +1,52 @@ +title: Deployment Hardware Adapter (DHA) +# DHA API version supported +version: 1.1 +created: Mon May 4 09:03:46 UTC 2015 +comment: Test environment Ericsson Montreal + +# Adapter to use for this definition +adapter: ipmi + +# Node list. +# Mandatory properties are id and role. +# The MAC address of the PXE boot interface for Fuel is not +# mandatory to be defined. +# All other properties are adapter specific. + +nodes: +- id: 1 + pxeMac: 14:58:D0:55:E2:E0 + ipmiIp: 10.118.32.202 + ipmiUser: username + ipmiPass: password +- id: 2 + pxeMac: 9C:B6:54:8A:25:C0 + ipmiIp: 10.118.32.213 + ipmiUser: username + ipmiPass: password +# Adding the Fuel node as node id 3 which may not be correct - please +# adjust as needed. +- id: 3 + pxeMac: 52:54:00:bd:e4:21 + libvirtName: vFuel + isFuel: yes + +# Deployment power on strategy +# all: Turn on all nodes at once. There will be no correlation +# between the DHA and DEA node numbering. MAC addresses +# will be used to select the node roles though. +# sequence: Turn on the nodes in sequence starting with the lowest order +# node and wait for the node to be detected by Fuel. Not until +# the node has been detected and assigned a role will the next +# node be turned on. +powerOnStrategy: sequence + +# If fuelCustomInstall is set to true, Fuel is assumed to be installed by +# calling the DHA adapter function "dha_fuelCustomInstall()" with two +# arguments: node ID and the ISO file name to deploy. 
The custom install
+# function is then to handle all necessary logic to boot the Fuel master
+# from the ISO and then return.
+# Allowed values: true, false
+
+fuelCustomInstall: true
+
diff --git a/fuel/prototypes/deploy/examples/ipmi/conf/vm/vFuel b/fuel/prototypes/deploy/examples/ipmi/conf/vm/vFuel
new file mode 100644
index 0000000..2186539
--- /dev/null
+++ b/fuel/prototypes/deploy/examples/ipmi/conf/vm/vFuel
@@ -0,0 +1,115 @@
+<!-- The original 115-line libvirt domain definition lost its XML markup in
+     extraction; only the recoverable element text is shown below. Element
+     attributes and the device definitions (disks, controllers, network
+     interfaces, serial/console, graphics, video) could not be recovered. -->
+<domain>
+  <name>vFuel</name>
+  <uuid>daf21ecf-0dfe-4937-a155-8edde4f3ea76</uuid>
+  <memory>8290304</memory>
+  <currentMemory>8290304</currentMemory>
+  <vcpu>2</vcpu>
+  <resource>
+    <partition>/machine</partition>
+  </resource>
+  <os>
+    <type>hvm</type>
+  </os>
+  <cpu>
+    <model>SandyBridge</model>
+  </cpu>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>restart</on_crash>
+  <devices>
+    <emulator>/usr/bin/kvm-spice</emulator>
+  </devices>
+</domain>
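The corosync section of the settings file earlier in this patch points the cluster at multicast group 226.94.1.1 on port 12000 and asks that the "verified" flag be set only once multicast actually works between the nodes. A quick sanity check could look like the sketch below, assuming omping is installed on at least two of the target nodes; the tool, its flags and the host names are suggestions for illustration, not part of this patch.

#!/bin/bash
# Hypothetical multicast check for the corosync group/port configured above.
# Run the same command on every node under test; each instance reports
# whether it receives the other nodes' multicast packets.
omping -c 30 -m 226.94.1.1 -p 12000 node1.example.com node2.example.com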
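The dha.yaml example selects powerOnStrategy: sequence, meaning each blade is powered on only after Fuel has discovered the previous one. A minimal sketch of that loop is shown below, reusing the IPMI addresses and credentials from the node list; the discovery check through the Fuel CLI (fuel node) and all variable names are assumptions made for illustration, not the deployer's actual implementation.

#!/bin/bash
# Hypothetical illustration of powerOnStrategy: sequence.
# IPMI endpoints mirror the dha.yaml node list; the discovery check assumes
# the Fuel CLI is available where this runs (e.g. on the Fuel master).

nodes=("1 10.118.32.202" "2 10.118.32.213")
IPMI_USER=username
IPMI_PASS=password

discovered_count() {
    # Count nodes currently reported as discovered by Fuel.
    fuel node 2>/dev/null | grep -c discover
}

for entry in "${nodes[@]}"; do
    read -r id ip <<< "${entry}"
    before=$(discovered_count)
    echo "Powering on node ${id} (${ip})"
    ipmitool -I lanplus -H "${ip}" -U "${IPMI_USER}" -P "${IPMI_PASS}" chassis power on
    # Do not continue until Fuel reports one more discovered node.
    until [ "$(discovered_count)" -gt "${before}" ]; do
        sleep 30
    done
done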
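dha.yaml also sets fuelCustomInstall: true and states that the deployer then calls the adapter function dha_fuelCustomInstall() with two arguments, a node ID and the ISO file to deploy. A minimal sketch of such a hook is given below, assuming the Fuel master is the libvirt domain vFuel from conf/vm/vFuel; the virsh-based flow, the cdrom target name and the polling interval are illustrative assumptions, not the adapter shipped in this repository.

#!/bin/bash
# Hypothetical sketch of a DHA adapter custom-install hook.
# Contract (from dha.yaml): dha_fuelCustomInstall <node-id> <iso-file>

dha_fuelCustomInstall() {
    local node_id=$1
    local iso_file=$2

    echo "Installing Fuel master (node ${node_id}) from ${iso_file}"

    # Define the vFuel domain from the example XML if libvirt does not
    # already know it.
    virsh dominfo vFuel >/dev/null 2>&1 || virsh define conf/vm/vFuel

    # Insert the ISO into the domain's cdrom drive ("hdc" is an assumed
    # target name) and boot the installer.
    virsh change-media vFuel hdc "${iso_file}" --insert --config
    virsh start vFuel

    # The unattended install powers the domain off when finished; wait for
    # that, then boot the freshly installed Fuel master from disk.
    while virsh domstate vFuel | grep -q running; do
        sleep 30
    done
    virsh start vFuel
}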