--- /dev/null
+# opensds-ansible\r
+This is an installation tool for opensds using ansible.\r
+\r
+## 1. How to install an opensds local cluster\r
+This installation document assumes a clean Ubuntu 16.04 environment. If golang is already installed in the environment, make sure the following parameters are configured in ```/etc/profile```, then run ```source /etc/profile```:\r
+```conf\r
+export GOROOT=/usr/local/go\r
+export GOPATH=$HOME/gopath\r
+export PATH=$PATH:$GOROOT/bin:$GOPATH/bin\r
+```\r
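+\r
+If golang is not installed, the playbook's common role installs it automatically (it fetches Go 1.9). To set it up manually beforehand, the following sketch mirrors what the role does:\r
+```bash\r
+wget https://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz\r
+sudo tar xvf go1.9.linux-amd64.tar.gz -C /usr/local/\r
+echo 'export GOROOT=/usr/local/go\r
+export GOPATH=$HOME/gopath\r
+export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' | sudo tee -a /etc/profile\r
+source /etc/profile\r
+```\r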
+\r
+### Pre-config (Ubuntu 16.04)\r
+First install some required system packages:\r
+```\r
+sudo apt-get install -y openssh-server git make gcc\r
+```\r
+Then edit the ```/etc/ssh/sshd_config``` file and change one line:\r
+```conf\r
+PermitRootLogin yes\r
+```\r
+Next generate an SSH key pair and copy the public key to the target machine:\r
+```bash\r
+ssh-keygen -t rsa\r
+ssh-copy-id -i ~/.ssh/id_rsa.pub <ip_address> # IP address of the target machine of the installation\r
+```\r
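+\r
+Optionally verify that key-based login works before continuing (assuming the same user as in the ssh-copy-id step above):\r
+```bash\r
+ssh <ip_address> # should log in without prompting for a password\r
+```\r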
+\r
+### Install docker\r
+If a standalone cinder is used as the backend, you also need to install docker to run the cinder services. Please see the [docker installation document](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/) for details.\r
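+\r
+A minimal sketch of the steps from that document for Ubuntu 16.04 (double-check against the official instructions, which may change):\r
+```bash\r
+sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common\r
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -\r
+sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"\r
+sudo apt-get update && sudo apt-get install -y docker-ce\r
+```\r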
+\r
+### Install ansible tool\r
+```bash\r
+sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.\r
+sudo apt-get update\r
+sudo apt-get install ansible\r
+ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.\r
+```\r
+\r
+### Download opensds source code\r
+```bash\r
+mkdir -p $HOME/gopath/src/github.com/opensds && cd $HOME/gopath/src/github.com/opensds\r
+git clone https://github.com/opensds/opensds.git -b <specified_branch_name>\r
+cd opensds/contrib/ansible\r
+```\r
+\r
+### Configure opensds cluster variables:\r
+##### System environment:\r
+Configure the ```workplace``` in `group_vars/common.yml`:\r
+```yaml\r
+workplace: /home/your_username # Change this field according to your username. If you log in as root, set this parameter to '/root'\r
+```\r
+\r
+##### LVM\r
+If `lvm` is chosen as storage backend, modify `group_vars/osdsdock.yml`:\r
+```yaml\r
+enabled_backend: lvm # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'\r
+pv_device: "your_pv_device_path" # Specify a block device and ensure it exists if lvm is chosen\r
+vg_name: "specified_vg_name" # Specify a name for VG if choosing lvm\r
+```\r
+Modify ```group_vars/lvm/lvm.yaml``` and change the pool name to match the `vg_name` configured above:\r
+```yaml\r
+pool:\r
+  "vg001": # Change this pool name to be the same as vg_name\r
+    diskType: SSD\r
+    AZ: default\r
+```\r
+##### Ceph\r
+If `ceph` is chosen as storage backend, modify `group_vars/osdsdock.yml`:\r
+```yaml\r
+enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.\r
+ceph_pool_name: "specified_pool_name" # Specify a name for ceph pool if choosing ceph\r
+```\r
+Modify ```group_vars/ceph/ceph.yaml``` and change the pool name to match the `ceph_pool_name` configured above:\r
+```yaml\r
+pool:\r
+  "rbd": # Change this pool name to be the same as ceph_pool_name\r
+    diskType: SSD\r
+    AZ: default\r
+```\r
+Configure two files under ```group_vars/ceph```: `all.yml` and `osds.yml`. Here is an example:\r
+\r
+```group_vars/ceph/all.yml```:\r
+```yml\r
+ceph_origin: repository\r
+ceph_repository: community\r
+ceph_stable_release: luminous # Choose luminous as default version\r
+public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address\r
+cluster_network: "{{ public_network }}"\r
+monitor_interface: eth1 # Change to the network interface on the target machine\r
+```\r
+```group_vars/ceph/osds.yml```:\r
+```yml\r
+devices: # For ceph devices, append one or more devices as in the example below:\r
+ - '/dev/sda' # Ensure this device exists and is available if ceph is chosen\r
+ - '/dev/sdb' # Ensure this device exists and is available if ceph is chosen\r
+osd_scenario: collocated\r
+```\r
+\r
+##### Cinder\r
+If `cinder` is chosen as storage backend, modify `group_vars/osdsdock.yml`:\r
+```yaml\r
+enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'\r
+\r
+# Use block-box to install cinder standalone if true; see details at:\r
+# https://github.com/openstack/cinder/tree/master/contrib/block-box\r
+use_cinder_standalone: true\r
+# If true, you can configure cinder_container_platform, cinder_image_tag,\r
+# and cinder_volume_group.\r
+\r
+# Default: debian:stretch; ubuntu:xenial and centos:7 are also supported.\r
+cinder_container_platform: debian:stretch\r
+# The image tag can be modified freely, as long as it follows the image naming\r
+# conventions. Default: debian-cinder\r
+cinder_image_tag: debian-cinder\r
+# The cinder standalone service uses the lvm driver by default, so `volume_group`\r
+# must be configured. The default is cinder-volumes; this volume group will be\r
+# removed when the ansible clean script is run.\r
+cinder_volume_group: cinder-volumes\r
+```\r
+\r
+Configure the auth and pool options for accessing cinder in `group_vars/cinder/cinder.yaml`. No additional configuration changes are needed if cinder standalone is used.\r
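+\r
+For reference, the default ```group_vars/cinder/cinder.yaml``` looks like the following; when pointing to an existing (non-standalone) cinder deployment, fill in the auth fields accordingly:\r
+```yaml\r
+authOptions:\r
+  noAuth: true\r
+  endpoint: "http://127.0.0.1/identity"\r
+  cinderEndpoint: "http://127.0.0.1:8776/v2"\r
+  domainId: "Default"\r
+  domainName: "Default"\r
+  username: ""\r
+  password: ""\r
+  tenantId: "myproject"\r
+  tenantName: "myproject"\r
+pool:\r
+  "cinder-lvm@lvm#lvm":\r
+    AZ: nova\r
+    thin: true\r
+```\r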
+\r
+### Check if the hosts can be reached\r
+```bash\r
+sudo ansible all -m ping -i local.hosts\r
+```\r
+\r
+### Run the opensds-ansible playbook to start the deployment\r
+```bash\r
+sudo ansible-playbook site.yml -i local.hosts\r
+```\r
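+\r
+After the playbook finishes, the osdslet and osdsdock daemons should be running on the target machine. A quick sanity check (the same check the playbook itself performs):\r
+```bash\r
+ps aux | grep osdslet | grep -v grep\r
+ps aux | grep osdsdock | grep -v grep\r
+```\r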
+\r
+## 2. How to test opensds cluster\r
+\r
+### Configure opensds CLI tool\r
+```bash\r
+sudo cp $GOPATH/src/github.com/opensds/opensds/build/out/bin/osdsctl /usr/local/bin\r
+export OPENSDS_ENDPOINT=http://127.0.0.1:50040\r
+osdsctl pool list # Check if the pool resource is available\r
+```\r
+\r
+### Create a default profile first.\r
+```\r
+osdsctl profile create '{"name": "default", "description": "default policy"}'\r
+```\r
+\r
+### Create a volume.\r
+```\r
+osdsctl volume create 1 --name=test-001\r
+```\r
+For cinder, the availability zone (az) needs to be specified.\r
+```\r
+osdsctl volume create 1 --name=test-001 --az nova\r
+```\r
+\r
+### List all volumes.\r
+```\r
+osdsctl volume list\r
+```\r
+\r
+### Delete the volume.\r
+```\r
+osdsctl volume delete <your_volume_id>\r
+```\r
+\r
+\r
+## 3. How to purge and clean opensds cluster\r
+\r
+### Run opensds-ansible playbook to clean the environment\r
+```bash\r
+sudo ansible-playbook clean.yml -i local.hosts\r
+```\r
+\r
+### Run ceph-ansible playbook to clean ceph cluster if ceph is deployed\r
+```bash\r
+cd /tmp/ceph-ansible\r
+sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts\r
+```\r
+\r
+In addition, clean up the logical partitions on the physical block devices used by ceph, for example with the ```fdisk``` tool.\r
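+\r
+For example, assuming ```/dev/sda``` was one of the ceph OSD devices (adjust to your environment):\r
+```bash\r
+sudo fdisk /dev/sda # use 'd' to delete each ceph partition, then 'w' to write the changes\r
+```\r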
+\r
+### Remove ceph-ansible source code (optional)\r
+```bash\r
+cd ..\r
+sudo rm -rf /tmp/ceph-ansible\r
+```\r
--- /dev/null
+---\r
+# Defines the clean-up steps performed when tearing down the cluster.\r
+\r
+- name: destroy an opensds cluster\r
+ hosts: all\r
+ remote_user: root\r
+ vars_files:\r
+ - group_vars/common.yml\r
+ - group_vars/osdsdb.yml\r
+ - group_vars/osdsdock.yml\r
+ gather_facts: false\r
+ become: True\r
+ roles:\r
+ - cleaner
\ No newline at end of file
--- /dev/null
+---\r
+# Variables here are applicable to all host groups NOT roles\r
+\r
+# This sample file generated by generate_group_vars_sample.sh\r
+\r
+# Dummy variable to avoid error because ansible does not recognize the\r
+# file as a good configuration file when no variable in it.\r
+dummy:\r
+\r
+# You can override vars by using host or group vars\r
+\r
+###########\r
+# GENERAL #\r
+###########\r
+\r
+######################################\r
+# Releases name to number dictionary #\r
+######################################\r
+#ceph_release_num:\r
+# dumpling: 0.67\r
+# emperor: 0.72\r
+# firefly: 0.80\r
+# giant: 0.87\r
+# hammer: 0.94\r
+# infernalis: 9\r
+# jewel: 10\r
+# kraken: 11\r
+# luminous: 12\r
+# mimic: 13\r
+\r
+# Directory to fetch cluster fsid, keys etc...\r
+#fetch_directory: fetch/\r
+\r
+# The 'cluster' variable determines the name of the cluster.\r
+# Changing the default value to something else means that you will\r
+# need to change all the command line calls as well, for example if\r
+# your cluster name is 'foo':\r
+# "ceph health" will become "ceph --cluster foo health"\r
+#\r
+# An easier way to handle this is to use the environment variable CEPH_ARGS\r
+# So run: "export CEPH_ARGS="--cluster foo"\r
+# With that you will be able to run "ceph health" normally\r
+#cluster: ceph\r
+\r
+# Inventory host group variables\r
+#mon_group_name: mons\r
+#osd_group_name: osds\r
+#rgw_group_name: rgws\r
+#mds_group_name: mdss\r
+#nfs_group_name: nfss\r
+#restapi_group_name: restapis\r
+#rbdmirror_group_name: rbdmirrors\r
+#client_group_name: clients\r
+#iscsi_gw_group_name: iscsi-gws\r
+#mgr_group_name: mgrs\r
+\r
+# If check_firewall is true, then ansible will try to determine if the\r
+# Ceph ports are blocked by a firewall. If the machine running ansible\r
+# cannot reach the Ceph ports for some other reason, you may need or\r
+# want to set this to False to skip those checks.\r
+#check_firewall: False\r
+\r
+\r
+############\r
+# PACKAGES #\r
+############\r
+#debian_package_dependencies:\r
+# - python-pycurl\r
+# - hdparm\r
+\r
+#centos_package_dependencies:\r
+# - python-pycurl\r
+# - hdparm\r
+# - epel-release\r
+# - python-setuptools\r
+# - libselinux-python\r
+\r
+#redhat_package_dependencies:\r
+# - python-pycurl\r
+# - hdparm\r
+# - python-setuptools\r
+\r
+# Whether or not to install the ceph-test package.\r
+#ceph_test: false\r
+\r
+# Enable the ntp service by default to avoid clock skew on\r
+# ceph nodes\r
+#ntp_service_enabled: true\r
+\r
+# Set uid/gid to default '64045' for bootstrap directories.\r
+# '64045' is used for debian based distros. It must be set to 167 in case of rhel based distros.\r
+# These values have to be set according to the base OS used by the container image, NOT the host.\r
+#bootstrap_dirs_owner: "64045"\r
+#bootstrap_dirs_group: "64045"\r
+\r
+# This variable determines if ceph packages can be updated. If False, the\r
+# package resources will use "state=present". If True, they will use\r
+# "state=latest".\r
+#upgrade_ceph_packages: False\r
+\r
+#ceph_use_distro_backports: false # DEBIAN ONLY\r
+\r
+\r
+###########\r
+# INSTALL #\r
+###########\r
+#ceph_rhcs_cdn_install: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_repository_type: "{{ 'cdn' if ceph_rhcs_cdn_install else 'iso' if ceph_rhcs_iso_install else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_rhcs_iso_install: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_rhcs: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_stable: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_dev: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_stable_uca: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_custom: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+\r
+# ORIGIN SOURCE\r
+#\r
+# Choose between:\r
+# - 'repository' means that you will get ceph installed through a new repository. Later below choose between 'community', 'rhcs' or 'dev'\r
+# - 'distro' means that no separate repo file will be added\r
+# you will get whatever version of Ceph is included in your Linux distro.\r
+# 'local' means that the ceph binaries will be copied over from the local machine\r
+#ceph_origin: "{{ 'repository' if ceph_rhcs or ceph_stable or ceph_dev or ceph_stable_uca or ceph_custom else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#valid_ceph_origins:\r
+# - repository\r
+# - distro\r
+# - local\r
+ceph_origin: repository\r
+ceph_repository: community\r
+\r
+#ceph_repository: "{{ 'community' if ceph_stable else 'rhcs' if ceph_rhcs else 'dev' if ceph_dev else 'uca' if ceph_stable_uca else 'custom' if ceph_custom else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#valid_ceph_repository:\r
+# - community\r
+# - rhcs\r
+# - dev\r
+# - uca\r
+# - custom\r
+\r
+\r
+# REPOSITORY: COMMUNITY VERSION\r
+#\r
+# Enabled when ceph_repository == 'community'\r
+#\r
+#ceph_mirror: http://download.ceph.com\r
+#ceph_stable_key: https://download.ceph.com/keys/release.asc\r
+ceph_stable_release: luminous\r
+#ceph_stable_repo: "{{ ceph_mirror }}/debian-{{ ceph_stable_release }}"\r
+\r
+#nfs_ganesha_stable: true # use stable repos for nfs-ganesha\r
+#nfs_ganesha_stable_branch: V2.5-stable\r
+#nfs_ganesha_stable_deb_repo: "{{ ceph_mirror }}/nfs-ganesha/deb-{{ nfs_ganesha_stable_branch }}/{{ ceph_stable_release }}"\r
+\r
+\r
+# Use the option below to specify your applicable package tree, eg. when using non-LTS Ubuntu versions\r
+# # for a list of available Debian distributions, visit http://download.ceph.com/debian-{{ ceph_stable_release }}/dists/\r
+# for more info read: https://github.com/ceph/ceph-ansible/issues/305\r
+#ceph_stable_distro_source: "{{ ansible_lsb.codename }}"\r
+\r
+# This option is needed for _both_ stable and dev version, so please always fill the right version\r
+# # for supported distros, see http://download.ceph.com/rpm-{{ ceph_stable_release }}/\r
+#ceph_stable_redhat_distro: el7\r
+\r
+\r
+# REPOSITORY: RHCS VERSION RED HAT STORAGE (from 1.3)\r
+#\r
+# Enabled when ceph_repository == 'rhcs'\r
+#\r
+# This version is only supported on RHEL >= 7.1\r
+# As of RHEL 7.1, libceph.ko and rbd.ko are now included in Red Hat's kernel\r
+# packages natively. The RHEL 7.1 kernel packages are more stable and secure than\r
+# using these 3rd-party kmods with RHEL 7.0. Please update your systems to RHEL\r
+# 7.1 or later if you want to use the kernel RBD client.\r
+#\r
+# The CephFS kernel client is undergoing rapid development upstream, and we do\r
+# not recommend running the CephFS kernel module on RHEL 7's 3.10 kernel at this\r
+# time. Please use ELRepo's latest upstream 4.x kernels if you want to run CephFS\r
+# on RHEL 7.\r
+#\r
+#\r
+#ceph_rhcs_version: "{{ ceph_stable_rh_storage_version | default(2) }}"\r
+#valid_ceph_repository_type:\r
+# - cdn\r
+# - iso\r
+#ceph_rhcs_iso_path: "{{ ceph_stable_rh_storage_iso_path | default('') }}"\r
+#ceph_rhcs_mount_path: "{{ ceph_stable_rh_storage_mount_path | default('/tmp/rh-storage-mount') }}"\r
+#ceph_rhcs_repository_path: "{{ ceph_stable_rh_storage_repository_path | default('/tmp/rh-storage-repo') }}" # where to copy iso's content\r
+\r
+# RHCS installation in Debian systems\r
+#ceph_rhcs_cdn_debian_repo: https://customername:customerpasswd@rhcs.download.redhat.com\r
+#ceph_rhcs_cdn_debian_repo_version: "/3-release/" # for GA, later for updates use /3-updates/\r
+\r
+\r
+# REPOSITORY: UBUNTU CLOUD ARCHIVE\r
+#\r
+# Enabled when ceph_repository == 'uca'\r
+#\r
+# This allows the install of Ceph from the Ubuntu Cloud Archive. The Ubuntu Cloud Archive\r
+# usually has newer Ceph releases than the normal distro repository.\r
+#\r
+#\r
+#ceph_stable_repo_uca: "http://ubuntu-cloud.archive.canonical.com/ubuntu"\r
+#ceph_stable_openstack_release_uca: liberty\r
+#ceph_stable_release_uca: "{{ansible_lsb.codename}}-updates/{{ceph_stable_openstack_release_uca}}"\r
+\r
+\r
+# REPOSITORY: DEV\r
+#\r
+# Enabled when ceph_repository == 'dev'\r
+#\r
+#ceph_dev_branch: master # development branch you would like to use e.g: master, wip-hack\r
+#ceph_dev_sha1: latest # distinct sha1 to use, defaults to 'latest' (as in latest built)\r
+\r
+#nfs_ganesha_dev: false # use development repos for nfs-ganesha\r
+\r
+# Set this to choose the version of ceph dev libraries used in the nfs-ganesha packages from shaman\r
+# flavors so far include: ceph_master, ceph_jewel, ceph_kraken, ceph_luminous\r
+#nfs_ganesha_flavor: "ceph_master"\r
+\r
+#ceph_iscsi_config_dev: true # special repo for deploying iSCSI gateways\r
+\r
+\r
+# REPOSITORY: CUSTOM\r
+#\r
+# Enabled when ceph_repository == 'custom'\r
+#\r
+# Use a custom repository to install ceph. For RPM, ceph_custom_repo should be\r
+# a URL to the .repo file to be installed on the targets. For deb,\r
+# ceph_custom_repo should be the URL to the repo base.\r
+#\r
+#ceph_custom_repo: https://server.domain.com/ceph-custom-repo\r
+\r
+\r
+# ORIGIN: LOCAL CEPH INSTALLATION\r
+#\r
+# Enabled when ceph_repository == 'local'\r
+#\r
+# Path to DESTDIR of the ceph install\r
+#ceph_installation_dir: "/path/to/ceph_installation/"\r
+# Whether or not to use installer script rundep_installer.sh\r
+# This script takes in rundep and installs the packages line by line onto the machine\r
+# If this is set to false then it is assumed that the machine ceph is being copied onto will already have\r
+# all runtime dependencies installed\r
+#use_installer: false\r
+# Root directory for ceph-ansible\r
+#ansible_dir: "/path/to/ceph-ansible"\r
+\r
+\r
+######################\r
+# CEPH CONFIGURATION #\r
+######################\r
+\r
+## Ceph options\r
+#\r
+# Each cluster requires a unique, consistent filesystem ID. By\r
+# default, the playbook generates one for you and stores it in a file\r
+# in `fetch_directory`. If you want to customize how the fsid is\r
+# generated, you may find it useful to disable fsid generation to\r
+# avoid cluttering up your ansible repo. If you set `generate_fsid` to\r
+# false, you *must* generate `fsid` in another way.\r
+# ACTIVATE THE FSID VARIABLE FOR NON-VAGRANT DEPLOYMENT\r
+#fsid: "{{ cluster_uuid.stdout }}"\r
+#generate_fsid: true\r
+\r
+#ceph_conf_key_directory: /etc/ceph\r
+\r
+#cephx: true\r
+\r
+## Client options\r
+#\r
+#rbd_cache: "true"\r
+#rbd_cache_writethrough_until_flush: "true"\r
+#rbd_concurrent_management_ops: 20\r
+\r
+#rbd_client_directories: true # this will create rbd_client_log_path and rbd_client_admin_socket_path directories with proper permissions\r
+\r
+# Permissions for the rbd_client_log_path and\r
+# rbd_client_admin_socket_path. Depending on your use case for Ceph\r
+# you may want to change these values. The default, which is used if\r
+# any of the variables are unset or set to a false value (like `null`\r
+# or `false`) is to automatically determine what is appropriate for\r
+# the Ceph version with non-OpenStack workloads -- ceph:ceph and 0770\r
+# for infernalis releases, and root:root and 1777 for pre-infernalis\r
+# releases.\r
+#\r
+# For other use cases, including running Ceph with OpenStack, you'll\r
+# want to set these differently:\r
+#\r
+# For OpenStack on RHEL, you'll want:\r
+# rbd_client_directory_owner: "qemu"\r
+# rbd_client_directory_group: "libvirtd" (or "libvirt", depending on your version of libvirt)\r
+# rbd_client_directory_mode: "0755"\r
+#\r
+# For OpenStack on Ubuntu or Debian, set:\r
+# rbd_client_directory_owner: "libvirt-qemu"\r
+# rbd_client_directory_group: "kvm"\r
+# rbd_client_directory_mode: "0755"\r
+#\r
+# If you set rbd_client_directory_mode, you must use a string (e.g.,\r
+# 'rbd_client_directory_mode: "0755"', *not*\r
+# 'rbd_client_directory_mode: 0755', or Ansible will complain: mode\r
+# must be in octal or symbolic form\r
+#rbd_client_directory_owner: null\r
+#rbd_client_directory_group: null\r
+#rbd_client_directory_mode: null\r
+\r
+#rbd_client_log_path: /var/log/ceph\r
+#rbd_client_log_file: "{{ rbd_client_log_path }}/qemu-guest-$pid.log" # must be writable by QEMU and allowed by SELinux or AppArmor\r
+#rbd_client_admin_socket_path: /var/run/ceph # must be writable by QEMU and allowed by SELinux or AppArmor\r
+\r
+## Monitor options\r
+#\r
+# You must define either monitor_interface, monitor_address or monitor_address_block.\r
+# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).\r
+# Eg. If you want to specify for each monitor which address the monitor will bind to you can set it in your **inventory host file** by using 'monitor_address' variable.\r
+# Preference will go to monitor_address if both monitor_address and monitor_interface are defined.\r
+# To use an IPv6 address, use the monitor_address setting instead (and set ip_version to ipv6)\r
+monitor_interface: ens3\r
+#monitor_address: 0.0.0.0\r
+#monitor_address_block: subnet\r
+# set to either ipv4 or ipv6, whichever your network is using\r
+#ip_version: ipv4\r
+#mon_use_fqdn: false # if set to true, the MON name used will be the fqdn in the ceph.conf\r
+\r
+## OSD options\r
+#\r
+journal_size: 100 # OSD journal size in MB\r
+public_network: 100.64.128.40/24\r
+cluster_network: "{{ public_network }}"\r
+#osd_mkfs_type: xfs\r
+#osd_mkfs_options_xfs: -f -i size=2048\r
+#osd_mount_options_xfs: noatime,largeio,inode64,swalloc\r
+#osd_objectstore: filestore\r
+\r
+# xattrs. by default, 'filestore xattr use omap' is set to 'true' if\r
+# 'osd_mkfs_type' is set to 'ext4'; otherwise it isn't set. This can\r
+# be set to 'true' or 'false' to explicitly override those\r
+# defaults. Leave it 'null' to use the default for your chosen mkfs\r
+# type.\r
+#filestore_xattr_use_omap: null\r
+\r
+## MDS options\r
+#\r
+#mds_use_fqdn: false # if set to true, the MDS name used will be the fqdn in the ceph.conf\r
+#mds_allow_multimds: false\r
+#mds_max_mds: 3\r
+\r
+## Rados Gateway options\r
+#\r
+#radosgw_dns_name: your.subdomain.tld # subdomains used by radosgw. See http://ceph.com/docs/master/radosgw/config/#enabling-subdomain-s3-calls\r
+#radosgw_resolve_cname: false # enable for radosgw to resolve DNS CNAME based bucket names\r
+#radosgw_civetweb_port: 8080\r
+#radosgw_civetweb_num_threads: 100\r
+# For additional civetweb configuration options available such as SSL, logging,\r
+# keepalive, and timeout settings, please see the civetweb docs at\r
+# https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md\r
+#radosgw_civetweb_options: "num_threads={{ radosgw_civetweb_num_threads }}"\r
+# You must define either radosgw_interface, radosgw_address.\r
+# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).\r
+# Eg. If you want to specify for each radosgw node which address the radosgw will bind to you can set it in your **inventory host file** by using 'radosgw_address' variable.\r
+# Preference will go to radosgw_address if both radosgw_address and radosgw_interface are defined.\r
+# To use an IPv6 address, use the radosgw_address setting instead (and set ip_version to ipv6)\r
+#radosgw_interface: interface\r
+#radosgw_address: "{{ '0.0.0.0' if rgw_containerized_deployment else 'address' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#radosgw_address_block: subnet\r
+#radosgw_keystone: false # activate OpenStack Keystone options full detail here: http://ceph.com/docs/master/radosgw/keystone/\r
+# Rados Gateway options\r
+#email_address: foo@bar.com\r
+\r
+## REST API options\r
+#\r
+#restapi_interface: "{{ monitor_interface }}"\r
+#restapi_address: "{{ monitor_address }}"\r
+#restapi_port: 5000\r
+\r
+## Testing mode\r
+# enable this mode _only_ when you have a single node\r
+# if you don't want it keep the option commented\r
+#common_single_host_mode: true\r
+\r
+## Handlers - restarting daemons after a config change\r
+# if for whatever reasons the content of your ceph configuration changes\r
+# ceph daemons will be restarted as well. At the moment, we can not detect\r
+# which config option changed so all the daemons will be restarted. Although\r
+# this restart will be serialized for each node, in between a health check\r
+# will be performed so we make sure we don't move to the next node until\r
+# ceph is not healthy\r
+# Obviously between the checks (for monitors to be in quorum and for osd's pgs\r
+# to be clean) we have to wait. These retries and delays can be configurable\r
+# for both monitors and osds.\r
+#\r
+# Monitor handler checks\r
+#handler_health_mon_check_retries: 5\r
+#handler_health_mon_check_delay: 10\r
+#\r
+# OSD handler checks\r
+#handler_health_osd_check_retries: 40\r
+#handler_health_osd_check_delay: 30\r
+#handler_health_osd_check: true\r
+#\r
+# MDS handler checks\r
+#handler_health_mds_check_retries: 5\r
+#handler_health_mds_check_delay: 10\r
+#\r
+# RGW handler checks\r
+#handler_health_rgw_check_retries: 5\r
+#handler_health_rgw_check_delay: 10\r
+\r
+# NFS handler checks\r
+#handler_health_nfs_check_retries: 5\r
+#handler_health_nfs_check_delay: 10\r
+\r
+# RBD MIRROR handler checks\r
+#handler_health_rbd_mirror_check_retries: 5\r
+#handler_health_rbd_mirror_check_delay: 10\r
+\r
+# MGR handler checks\r
+#handler_health_mgr_check_retries: 5\r
+#handler_health_mgr_check_delay: 10\r
+\r
+###############\r
+# NFS-GANESHA #\r
+###############\r
+\r
+# Configure the type of NFS gateway access. At least one must be enabled for an\r
+# NFS role to be useful\r
+#\r
+# Set this to true to enable File access via NFS. Requires an MDS role.\r
+#nfs_file_gw: false\r
+# Set this to true to enable Object access via NFS. Requires an RGW role.\r
+#nfs_obj_gw: true\r
+\r
+###################\r
+# CONFIG OVERRIDE #\r
+###################\r
+\r
+# Ceph configuration file override.\r
+# This allows you to specify more configuration options\r
+# using an INI style format.\r
+# The following sections are supported: [global], [mon], [osd], [mds], [rgw]\r
+#\r
+# Example:\r
+# ceph_conf_overrides:\r
+# global:\r
+# foo: 1234\r
+# bar: 5678\r
+#\r
+#ceph_conf_overrides: {}\r
+\r
+\r
+#############\r
+# OS TUNING #\r
+#############\r
+\r
+#disable_transparent_hugepage: true\r
+#os_tuning_params:\r
+# - { name: kernel.pid_max, value: 4194303 }\r
+# - { name: fs.file-max, value: 26234859 }\r
+# - { name: vm.zone_reclaim_mode, value: 0 }\r
+# - { name: vm.swappiness, value: 10 }\r
+# - { name: vm.min_free_kbytes, value: "{{ vm_min_free_kbytes }}" }\r
+\r
+# For Debian & Red Hat/CentOS installs set TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES\r
+# Set this to a byte value (e.g. 134217728)\r
+# A value of 0 will leave the package default.\r
+#ceph_tcmalloc_max_total_thread_cache: 0\r
+\r
+\r
+##########\r
+# DOCKER #\r
+##########\r
+#docker_exec_cmd:\r
+#docker: false\r
+#ceph_docker_image: "ceph/daemon"\r
+#ceph_docker_image_tag: latest\r
+#ceph_docker_registry: docker.io\r
+#ceph_docker_enable_centos_extra_repo: false\r
+#ceph_docker_on_openstack: false\r
+#ceph_mon_docker_interface: "{{ monitor_interface }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_mon_docker_subnet: "{{ public_network }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#mon_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#osd_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#mds_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#rgw_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#containerized_deployment: "{{ True if mon_containerized_deployment or osd_containerized_deployment or mds_containerized_deployment or rgw_containerized_deployment else False }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+\r
+\r
+############\r
+# KV store #\r
+############\r
+#containerized_deployment_with_kv: false\r
+#mon_containerized_default_ceph_conf_with_kv: false\r
+#kv_type: etcd\r
+#kv_endpoint: 127.0.0.1\r
+#kv_port: 2379\r
+\r
+\r
+# this is only here for usage with the rolling_update.yml playbook\r
+# do not ever change this here\r
+#rolling_update: false\r
+\r
+\r
--- /dev/null
+[mons]\r
+localhost ansible_connection=local\r
+\r
+[osds]\r
+localhost ansible_connection=local\r
+\r
+[mgrs]\r
+localhost ansible_connection=local\r
--- /dev/null
+configFile: /etc/ceph/ceph.conf\r
+pool:\r
+ "rbd": # change pool name same to ceph pool, but don't change it if you choose lvm backend\r
+ diskType: SSD\r
+ AZ: default
\ No newline at end of file
--- /dev/null
+---\r
+# Variables here are applicable to all host groups NOT roles\r
+\r
+# This sample file generated by generate_group_vars_sample.sh\r
+\r
+# Dummy variable to avoid error because ansible does not recognize the\r
+# file as a good configuration file when no variable in it.\r
+dummy:\r
+\r
+# You can override default vars defined in defaults/main.yml here,\r
+# but I would advise using host or group vars instead\r
+\r
+#raw_journal_devices: "{{ dedicated_devices }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#raw_multi_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#dmcrytpt_journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#dmcrypt_dedicated_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+\r
+\r
+###########\r
+# GENERAL #\r
+###########\r
+\r
+# Even though OSD nodes should not have the admin key\r
+# at their disposal, some people might want to have it\r
+# distributed on OSD nodes. Setting 'copy_admin_key' to 'true'\r
+# will copy the admin key to the /etc/ceph/ directory\r
+#copy_admin_key: false\r
+\r
+\r
+####################\r
+# OSD CRUSH LOCATION\r
+####################\r
+\r
+# /!\\r
+#\r
+# BE EXTREMELY CAREFUL WITH THIS OPTION\r
+# DO NOT USE IT UNLESS YOU KNOW WHAT YOU ARE DOING\r
+#\r
+# /!\\r
+#\r
+# It is probably best to keep this option to 'false' as the default\r
+# suggests it. This option should only be used while doing some complex\r
+# CRUSH map. It allows you to force a specific location for a set of OSDs.\r
+#\r
+# The following options will build a ceph.conf with OSD sections\r
+# Example:\r
+# [osd.X]\r
+# osd crush location = "root=location"\r
+#\r
+# This works with your inventory file\r
+# To match the following 'osd_crush_location' option the inventory must look like:\r
+#\r
+# [osds]\r
+# osd0 ceph_crush_root=foo ceph_crush_rack=bar\r
+\r
+#crush_location: false\r
+#osd_crush_location: "\"root={{ ceph_crush_root }} rack={{ ceph_crush_rack }} host={{ ansible_hostname }}\""\r
+\r
+\r
+##############\r
+# CEPH OPTIONS\r
+##############\r
+\r
+# Devices to be used as OSDs\r
+# You can pre-provision disks that are not present yet.\r
+# Ansible will just skip them. Newly added disk will be\r
+# automatically configured during the next run.\r
+#\r
+\r
+\r
+# Declare devices to be used as OSDs\r
+# All scenario(except 3rd) inherit from the following device declaration\r
+\r
+devices:\r
+# - /dev/sda\r
+# - /dev/sdc\r
+# - /dev/sdd\r
+# - /dev/sde\r
+\r
+#devices: []\r
+\r
+\r
+#'osd_auto_discovery' mode prevents you from filling out the 'devices' variable above.\r
+# You can use this option with First and Forth and Fifth OSDS scenario.\r
+# Device discovery is based on the Ansible fact 'ansible_devices'\r
+# which reports all the devices on a system. If chosen all the disks\r
+# found will be passed to ceph-disk. You should not be worried on using\r
+# this option since ceph-disk has a built-in check which looks for empty devices.\r
+# Thus devices with existing partition tables will not be used.\r
+#\r
+#osd_auto_discovery: false\r
+\r
+# Encrypt your OSD device using dmcrypt\r
+# If set to True, no matter which osd_objecstore and osd_scenario you use the data will be encrypted\r
+#dmcrypt: "{{ True if dmcrytpt_journal_collocation or dmcrypt_dedicated_journal else False }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+\r
+\r
+# I. First scenario: collocated\r
+#\r
+# To enable this scenario do: osd_scenario: collocated\r
+#\r
+#\r
+# If osd_objectstore: filestore is enabled both 'ceph data' and 'ceph journal' partitions\r
+# will be stored on the same device.\r
+#\r
+# If osd_objectstore: bluestore is enabled 'ceph data', 'ceph block', 'ceph block.db', 'ceph block.wal' will be stored\r
+# on the same device. The device will get 2 partitions:\r
+# - One for 'data', called 'ceph data'\r
+# - One for 'ceph block', 'ceph block.db', 'ceph block.wal' called 'ceph block'\r
+#\r
+# Example of what you will get:\r
+# [root@ceph-osd0 ~]# blkid /dev/sda*\r
+# /dev/sda: PTTYPE="gpt"\r
+# /dev/sda1: UUID="9c43e346-dd6e-431f-92d8-cbed4ccb25f6" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="749c71c9-ed8f-4930-82a7-a48a3bcdb1c7"\r
+# /dev/sda2: PARTLABEL="ceph block" PARTUUID="e6ca3e1d-4702-4569-abfa-e285de328e9d"\r
+#\r
+\r
+#osd_scenario: "{{ 'collocated' if journal_collocation or dmcrytpt_journal_collocation else 'non-collocated' if raw_multi_journal or dmcrypt_dedicated_journal else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#valid_osd_scenarios:\r
+# - collocated\r
+# - non-collocated\r
+# - lvm\r
+osd_scenario: collocated\r
+\r
+# II. Second scenario: non-collocated\r
+#\r
+# To enable this scenario do: osd_scenario: non-collocated\r
+#\r
+# If osd_objectstore: filestore is enabled 'ceph data' and 'ceph journal' partitions\r
+# will be stored on different devices:\r
+# - 'ceph data' will be stored on the device listed in 'devices'\r
+# - 'ceph journal' will be stored on the device listed in 'dedicated_devices'\r
+#\r
+# Let's take an example, imagine 'devices' was declared like this:\r
+#\r
+# devices:\r
+# - /dev/sda\r
+# - /dev/sdb\r
+# - /dev/sdc\r
+# - /dev/sdd\r
+#\r
+# And 'dedicated_devices' was declared like this:\r
+#\r
+# dedicated_devices:\r
+# - /dev/sdf\r
+# - /dev/sdf\r
+# - /dev/sdg\r
+# - /dev/sdg\r
+#\r
+# This will result in the following mapping:\r
+# - /dev/sda will have /dev/sdf1 as journal\r
+# - /dev/sdb will have /dev/sdf2 as a journal\r
+# - /dev/sdc will have /dev/sdg1 as a journal\r
+# - /dev/sdd will have /dev/sdg2 as a journal\r
+#\r
+#\r
+# If osd_objectstore: bluestore is enabled, both 'ceph block.db' and 'ceph block.wal' partitions will be stored\r
+# on a dedicated device.\r
+#\r
+# So the following will happen:\r
+# - The devices listed in 'devices' will get 2 partitions, one for 'block' and one for 'data'.\r
+# 'data' is only 100MB big and do not store any of your data, it's just a bunch of Ceph metadata.\r
+# 'block' will store all your actual data.\r
+# - The devices in 'dedicated_devices' will get 1 partition for RocksDB DB, called 'block.db'\r
+# and one for RocksDB WAL, called 'block.wal'\r
+#\r
+# By default dedicated_devices will represent block.db\r
+#\r
+# Example of what you will get:\r
+# [root@ceph-osd0 ~]# blkid /dev/sd*\r
+# /dev/sda: PTTYPE="gpt"\r
+# /dev/sda1: UUID="c6821801-2f21-4980-add0-b7fc8bd424d5" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="f2cc6fa8-5b41-4428-8d3f-6187453464d0"\r
+# /dev/sda2: PARTLABEL="ceph block" PARTUUID="ea454807-983a-4cf2-899e-b2680643bc1c"\r
+# /dev/sdb: PTTYPE="gpt"\r
+# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="af5b2d74-4c08-42cf-be57-7248c739e217"\r
+# /dev/sdb2: PARTLABEL="ceph block.wal" PARTUUID="af3f8327-9aa9-4c2b-a497-cf0fe96d126a"\r
+#dedicated_devices: []\r
+\r
+\r
+# More device granularity for Bluestore\r
+#\r
+# ONLY if osd_objectstore: bluestore is enabled.\r
+#\r
+# By default, if 'bluestore_wal_devices' is empty, it will get the content of 'dedicated_devices'.\r
+# If set, then you will have a dedicated partition on a specific device for block.wal.\r
+#\r
+# Example of what you will get:\r
+# [root@ceph-osd0 ~]# blkid /dev/sd*\r
+# /dev/sda: PTTYPE="gpt"\r
+# /dev/sda1: UUID="39241ae9-d119-4335-96b3-0898da8f45ce" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="961e7313-bdb7-49e7-9ae7-077d65c4c669"\r
+# /dev/sda2: PARTLABEL="ceph block" PARTUUID="bff8e54e-b780-4ece-aa16-3b2f2b8eb699"\r
+# /dev/sdb: PTTYPE="gpt"\r
+# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="0734f6b6-cc94-49e9-93de-ba7e1d5b79e3"\r
+# /dev/sdc: PTTYPE="gpt"\r
+# /dev/sdc1: PARTLABEL="ceph block.wal" PARTUUID="824b84ba-6777-4272-bbbd-bfe2a25cecf3"\r
+#bluestore_wal_devices: "{{ dedicated_devices }}"\r
+\r
+# III. Use ceph-volume to create OSDs from logical volumes.\r
+# Use 'osd_scenario: lvm' to enable this scenario. Currently we only support dedicated journals\r
+# when using lvm, not collocated journals.\r
+# lvm_volumes is a list of dictionaries. Each dictionary must contain a data, journal and vg_name\r
+# key. Any logical volume or logical group used must be a name and not a path.\r
+# data must be a logical volume\r
+# journal can be either a lv, device or partition. You can not use the same journal for many data lvs.\r
+# data_vg must be the volume group name of the data lv\r
+# journal_vg is optional and must be the volume group name of the journal lv, if applicable\r
+# For example:\r
+# lvm_volumes:\r
+# - data: data-lv1\r
+# data_vg: vg1\r
+# journal: journal-lv1\r
+# journal_vg: vg2\r
+# - data: data-lv2\r
+# journal: /dev/sda\r
+# data_vg: vg1\r
+# - data: data-lv3\r
+# journal: /dev/sdb1\r
+# data_vg: vg2\r
+#lvm_volumes: []\r
+\r
+\r
+##########\r
+# DOCKER #\r
+##########\r
+\r
+#ceph_config_keys: [] # DON'T TOUCH ME\r
+\r
+# Resource limitation\r
+# For the whole list of limits you can apply see: docs.docker.com/engine/admin/resource_constraints\r
+# Default values are based from: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations\r
+# These options can be passed using the 'ceph_osd_docker_extra_env' variable.\r
+#ceph_osd_docker_memory_limit: 1g\r
+#ceph_osd_docker_cpu_limit: 1\r
+\r
+# PREPARE DEVICE\r
+#\r
+# WARNING /!\ DMCRYPT scenario ONLY works with Docker version 1.12.5 and above\r
+#\r
+#ceph_osd_docker_devices: "{{ devices }}"\r
+#ceph_osd_docker_prepare_env: -e OSD_JOURNAL_SIZE={{ journal_size }}\r
+\r
+# ACTIVATE DEVICE\r
+#\r
+#ceph_osd_docker_extra_env:\r
+#ceph_osd_docker_run_script_path: "/usr/share" # script called by systemd to run the docker command\r
+\r
+\r
+###########\r
+# SYSTEMD #\r
+###########\r
+\r
+# ceph_osd_systemd_overrides will override the systemd settings\r
+# for the ceph-osd services.\r
+# For example,to set "PrivateDevices=false" you can specify:\r
+#ceph_osd_systemd_overrides:\r
+# Service:\r
+# PrivateDevices: False\r
+\r
--- /dev/null
+authOptions:\r
+ noAuth: true\r
+ endpoint: "http://127.0.0.1/identity"\r
+ cinderEndpoint: "http://127.0.0.1:8776/v2"\r
+ domainId: "Default"\r
+ domainName: "Default"\r
+ username: ""\r
+ password: ""\r
+ tenantId: "myproject"\r
+ tenantName: "myproject"\r
+pool:\r
+ "cinder-lvm@lvm#lvm":\r
+ AZ: nova\r
+ thin: true\r
--- /dev/null
+---\r
+# Variables here are applicable to all host groups NOT roles\r
+\r
+# This sample file generated by generate_group_vars_sample.sh\r
+\r
+# Dummy variable to avoid error because ansible does not recognize the\r
+# file as a good configuration file when no variable in it.\r
+dummy:\r
+\r
+# You can override default vars defined in defaults/main.yml here,\r
+# but I would advise using host or group vars instead\r
+\r
+\r
+###########\r
+# GENERAL #\r
+###########\r
+\r
+workplace: /home/krej # Change this field according to your username\r
+\r
+# These fields are NOT suggested to be modified\r
+remote_url: https://github.com/opensds/opensds.git\r
+opensds_root_dir: "{{ workplace }}/gopath/src/github.com/opensds/opensds"\r
+opensds_build_dir: "{{ opensds_root_dir }}/build"\r
+opensds_config_dir: /etc/opensds\r
+opensds_log_dir: /var/log/opensds\r
--- /dev/null
+tgtBindIp: 127.0.0.1\r
+pool:\r
+ "vg001": # change pool name same to vg_name, but don't change it if you choose ceph backend\r
+ diskType: SSD\r
+ AZ: default
\ No newline at end of file
--- /dev/null
+---\r
+# Variables here are applicable to all host groups NOT roles\r
+\r
+# This sample file generated by generate_group_vars_sample.sh\r
+\r
+# Dummy variable to avoid error because ansible does not recognize the\r
+# file as a good configuration file when no variable in it.\r
+dummy:\r
+\r
+# You can override default vars defined in defaults/main.yml here,\r
+# but I would advise using host or group vars instead\r
+\r
+\r
+###########\r
+# GENERAL #\r
+###########\r
+\r
+db_driver: etcd\r
+db_endpoint: localhost:2379,localhost:2380\r
+#db_credential: opensds:password@127.0.0.1:3306/dbname\r
+\r
+###########\r
+# ETCD #\r
+###########\r
+\r
+etcd_release: v3.2.0\r
+\r
+# These fields are not suggested to be modified\r
+etcd_tarball: etcd-{{ etcd_release }}-linux-amd64.tar.gz\r
+etcd_download_url: https://github.com/coreos/etcd/releases/download/{{ etcd_release }}/{{ etcd_tarball }}\r
+etcd_dir: /tmp/etcd-{{ etcd_release }}-linux-amd64\r
--- /dev/null
+---\r
+# Variables here are applicable to all host groups NOT roles\r
+\r
+# This sample file generated by generate_group_vars_sample.sh\r
+\r
+# Dummy variable to avoid error because ansible does not recognize the\r
+# file as a good configuration file when no variable in it.\r
+dummy:\r
+\r
+# You can override default vars defined in defaults/main.yml here,\r
+# but I would advise using host or group vars instead\r
+\r
+\r
+###########\r
+# GENERAL #\r
+###########\r
+\r
+# Change it according to your backend; currently 'lvm', 'ceph' and 'cinder' are supported\r
+enabled_backend: lvm\r
+\r
+# These fields are NOT suggested to be modified\r
+dock_endpoint: localhost:50050\r
+dock_log_file: "{{ opensds_log_dir }}/osdsdock.log"\r
+\r
+###########\r
+# LVM #\r
+###########\r
+\r
+pv_device: /dev/sdb # Specify a block device and ensure it exists if you choose lvm\r
+vg_name: vg001 # Specify a name of your choice\r
+\r
+# These fields are NOT suggested to be modified\r
+lvm_name: lvm backend\r
+lvm_description: This is a lvm backend service\r
+lvm_driver_name: lvm\r
+lvm_config_path: "{{ opensds_config_dir }}/driver/lvm.yaml"\r
+\r
+###########\r
+# CEPH #\r
+###########\r
+\r
+ceph_pool_name: rbd # Specify a name of your choice\r
+\r
+# These fields are NOT suggested to be modified\r
+ceph_name: ceph backend\r
+ceph_description: This is a ceph backend service\r
+ceph_driver_name: ceph\r
+ceph_config_path: "{{ opensds_config_dir }}/driver/ceph.yaml"\r
+\r
+###########\r
+# CINDER #\r
+###########\r
+\r
+# Use block-box to install cinder standalone if true; see details at:\r
+# https://github.com/openstack/cinder/tree/master/contrib/block-box\r
+use_cinder_standalone: true\r
+# If true, you can configure cinder_container_platform, cinder_image_tag,\r
+# and cinder_volume_group.\r
+\r
+# Default: debian:stretch; ubuntu:xenial and centos:7 are also supported.\r
+cinder_container_platform: debian:stretch\r
+# The image tag can be modified freely, as long as it follows the image naming\r
+# conventions. Default: debian-cinder\r
+cinder_image_tag: debian-cinder\r
+# The cinder standalone service uses the lvm driver by default, so `volume_group`\r
+# must be configured. The default is cinder-volumes; this volume group will be\r
+# removed when the ansible clean script is run.\r
+cinder_volume_group: cinder-volumes\r
+# All source code and volume group file will be placed in the cinder_data_dir:\r
+cinder_data_dir: "{{ workplace }}/cinder_data_dir"\r
+\r
+\r
+# These fields are not suggested to be modified\r
+cinder_name: cinder backend\r
+cinder_description: This is a cinder backend service\r
+cinder_driver_name: cinder\r
+cinder_config_path: "{{ opensds_config_dir }}/driver/cinder.yaml"\r
+\r
+###########\r
+# DOCKER #\r
+###########\r
+\r
+dock_docker_image: dockerio/opensds-dock:zealand\r
--- /dev/null
+---\r
+# Variables here are applicable to all host groups NOT roles\r
+\r
+# This sample file generated by generate_group_vars_sample.sh\r
+\r
+# Dummy variable to avoid error because ansible does not recognize the\r
+# file as a good configuration file when no variable in it.\r
+dummy:\r
+\r
+# You can override default vars defined in defaults/main.yml here,\r
+# but I would advise using host or group vars instead\r
+\r
+\r
+###########\r
+# GENERAL #\r
+###########\r
+\r
+# These fields are NOT suggested to be modified\r
+controller_endpoint: localhost:50040\r
+controller_log_file: "{{ opensds_log_dir }}/osdslet.log"\r
+\r
+###########\r
+# DOCKER #\r
+###########\r
+\r
+controller_docker_image: dockerio/opensds-controller:zealand\r
--- /dev/null
+[controllers]\r
+localhost ansible_connection=local\r
+\r
+[docks]\r
+localhost ansible_connection=local\r
--- /dev/null
+---\r
+- name: kill etcd daemon service\r
+ shell: killall etcd\r
+ ignore_errors: yes\r
+ when: db_driver == "etcd"\r
+\r
+- name: remove etcd service data\r
+ file:\r
+ path: "{{ etcd_dir }}"\r
+ state: absent\r
+ force: yes\r
+ ignore_errors: yes\r
+ when: db_driver == "etcd"\r
+\r
+- name: remove etcd tarball\r
+ file:\r
+ path: "/tmp/{{ etcd_tarball }}"\r
+ state: absent\r
+ force: yes\r
+ ignore_errors: yes\r
+ when: db_driver == "etcd"\r
+\r
+- name: kill osdslet daemon service\r
+ shell: killall osdslet\r
+ ignore_errors: yes\r
+\r
+- name: kill osdsdock daemon service\r
+ shell: killall osdsdock\r
+ ignore_errors: yes\r
+\r
+- name: clean all opensds build files\r
+ file:\r
+ path: "{{ opensds_build_dir }}"\r
+ state: absent\r
+ force: yes\r
+ ignore_errors: yes\r
+\r
+- name: clean all opensds configuration files\r
+ file:\r
+ path: "{{ opensds_config_dir }}"\r
+ state: absent\r
+ force: yes\r
+ ignore_errors: yes\r
+\r
+- name: clean all opensds log files\r
+ file:\r
+ path: "{{ opensds_log_dir }}"\r
+ state: absent\r
+ force: yes\r
+ ignore_errors: yes\r
+\r
+- name: check if the volume group exists before cleaning it\r
+ shell: vgdisplay {{ vg_name }}\r
+ ignore_errors: yes\r
+ register: vg_existed\r
+ when: enabled_backend == "lvm"\r
+\r
+- name: remove a volume group if lvm backend specified\r
+ shell: vgremove {{ vg_name }}\r
+ when: enabled_backend == "lvm" and vg_existed.rc == 0\r
+\r
+- name: check if the physical volume exists before cleaning it\r
+ shell: pvdisplay {{ pv_device }}\r
+ ignore_errors: yes\r
+ register: pv_existed\r
+ when: enabled_backend == "lvm"\r
+\r
+- name: remove a physical volume if lvm backend specified\r
+ shell: pvremove {{ pv_device }}\r
+ when: enabled_backend == "lvm" and pv_existed.rc == 0\r
+\r
+- name: stop cinder-standalone service\r
+ shell: docker-compose down\r
+ become: true\r
+ args:\r
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"\r
+ when: enabled_backend == "cinder"\r
+\r
+- name: clean the volume group of cinder\r
+ shell:\r
+ _raw_params: |\r
+\r
+ # _clean_lvm_volume_group removes all default LVM volumes\r
+ #\r
+ # Usage: _clean_lvm_volume_group $vg\r
+ function _clean_lvm_volume_group {\r
+ local vg=$1\r
+\r
+ # Clean out existing volumes\r
+ sudo lvremove -f $vg\r
+ }\r
+\r
+ # _remove_lvm_volume_group removes the volume group\r
+ #\r
+ # Usage: _remove_lvm_volume_group $vg\r
+ function _remove_lvm_volume_group {\r
+ local vg=$1\r
+\r
+ # Remove the volume group\r
+ sudo vgremove -f $vg\r
+ }\r
+\r
+ # _clean_lvm_backing_file() removes the backing file of the\r
+ # volume group\r
+ #\r
+ # Usage: _clean_lvm_backing_file() $backing_file\r
+ function _clean_lvm_backing_file {\r
+ local backing_file=$1\r
+\r
+ # If the backing physical device is a loop device, it was probably setup by DevStack\r
+ if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then\r
+ local vg_dev\r
+ vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'.img'/ { print $1}')\r
+ if [[ -n "$vg_dev" ]]; then\r
+ sudo losetup -d $vg_dev\r
+ fi\r
+ rm -f $backing_file\r
+ fi\r
+ }\r
+\r
+ # clean_lvm_volume_group() cleans up the volume group and removes the\r
+ # backing file\r
+ #\r
+ # Usage: clean_lvm_volume_group $vg\r
+ function clean_lvm_volume_group {\r
+ local vg=$1\r
+\r
+ _clean_lvm_volume_group $vg\r
+ _remove_lvm_volume_group $vg\r
+ # if there is no logical volume left, it's safe to attempt a cleanup\r
+ # of the backing file\r
+ if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then\r
+ _clean_lvm_backing_file {{ cinder_data_dir }}/${vg}.img\r
+ fi\r
+ }\r
+\r
+ clean_lvm_volume_group {{cinder_volume_group}}\r
+\r
+ args:\r
+ executable: /bin/bash\r
+ become: true\r
+ when: enabled_backend == "cinder"\r
--- /dev/null
+---\r
+# If we can't get golang installed before any module is used we will fail\r
+# so just try what we can to get it installed\r
+- name: check for golang\r
+ stat:\r
+ path: /usr/local/go\r
+ ignore_errors: yes\r
+ register: systemgolang\r
+\r
+- name: install golang for debian based systems\r
+ shell:\r
+ cmd: |\r
+ set -e\r
+ set -x\r
+\r
+ wget https://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz\r
+ tar xvf go1.9.linux-amd64.tar.gz -C /usr/local/\r
+ cat >> /etc/profile <<GOLANG__CONFIG_DOC\r
+ export GOROOT=/usr/local/go\r
+ export GOPATH=\$HOME/gopath\r
+ export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin\r
+ GOLANG__CONFIG_DOC\r
+\r
+ executable: /bin/bash\r
+ ignore_errors: yes\r
+ when:\r
+ - systemgolang.stat.exists is undefined or systemgolang.stat.exists == false\r
+\r
+- name: Run the equivalent of "apt-get update" as a separate step\r
+ apt:\r
+ update_cache: yes\r
+\r
+- name: install librados-dev external package\r
+ apt:\r
+ name: librados-dev\r
+\r
+- name: install librbd-dev external package\r
+ apt:\r
+ name: librbd-dev\r
+\r
+- name: check if opensds source code exists\r
+ stat:\r
+ path: "{{ opensds_root_dir }}"\r
+ ignore_errors: yes\r
+ register: opensdsexisted\r
+\r
+- name: download opensds source code\r
+ git:\r
+ repo: "{{ remote_url }}"\r
+ dest: "{{ opensds_root_dir }}"\r
+ when:\r
+ - opensdsexisted.stat.exists is undefined or opensdsexisted.stat.exists == false\r
+\r
+- name: check if the opensds binary files exist\r
+ stat:\r
+ path: "{{ opensds_build_dir }}"\r
+ ignore_errors: yes\r
+ register: opensdsbuilt\r
+\r
+- name: build opensds binary file\r
+ shell: . /etc/profile; make\r
+ args:\r
+ chdir: "{{ opensds_root_dir }}"\r
+ when:\r
+ - opensdsbuilt.stat.exists is undefined or opensdsbuilt.stat.exists == false\r
+\r
+- name: create opensds global config directory if it doesn't exist\r
+ file:\r
+ path: "{{ opensds_config_dir }}/driver"\r
+ state: directory\r
+ mode: 0755\r
+\r
+- name: create opensds log directory if it doesn't exist\r
+ file:\r
+ path: "{{ opensds_log_dir }}"\r
+ state: directory\r
+ mode: 0755\r
+\r
+- name: configure opensds global info\r
+ shell: |\r
+ cat > opensds.conf <<OPENSDS_GLOABL_CONFIG_DOC\r
+ [osdslet]\r
+ api_endpoint = {{ controller_endpoint }}\r
+ graceful = True\r
+ log_file = {{ controller_log_file }}\r
+ socket_order = inc\r
+\r
+ [osdsdock]\r
+ api_endpoint = {{ dock_endpoint }}\r
+ log_file = {{ dock_log_file }}\r
+ # Specify which backends should be enabled, sample,ceph,cinder,lvm and so on.\r
+ enabled_backends = {{ enabled_backend }}\r
+\r
+ [lvm]\r
+ name = {{ lvm_name }}\r
+ description = {{ lvm_description }}\r
+ driver_name = {{ lvm_driver_name }}\r
+ config_path = {{ lvm_config_path }}\r
+\r
+ [ceph]\r
+ name = {{ ceph_name }}\r
+ description = {{ ceph_description }}\r
+ driver_name = {{ ceph_driver_name }}\r
+ config_path = {{ ceph_config_path }}\r
+\r
+ [cinder]\r
+ name = {{ cinder_name }}\r
+ description = {{ cinder_description }}\r
+ driver_name = {{ cinder_driver_name }}\r
+ config_path = {{ cinder_config_path }}\r
+\r
+ [database]\r
+ endpoint = {{ db_endpoint }}\r
+ driver = {{ db_driver }}\r
+ args:\r
+ chdir: "{{ opensds_config_dir }}"\r
+ ignore_errors: yes\r
+\r
+- name: copy opensds lvm backend file if lvm backend is specified\r
+ copy:\r
+ src: ../../../group_vars/lvm/lvm.yaml\r
+ dest: "{{ lvm_config_path }}"\r
+ when: enabled_backend == "lvm"\r
+\r
+- name: copy opensds ceph backend file if ceph backend is specified\r
+ copy:\r
+ src: ../../../group_vars/ceph/ceph.yaml\r
+ dest: "{{ ceph_config_path }}"\r
+ when: enabled_backend == "ceph"\r
+\r
+- name: copy opensds cinder backend file if cinder backend is specified\r
+ copy:\r
+ src: ../../../group_vars/cinder/cinder.yaml\r
+ dest: "{{ cinder_config_path }}"\r
+ when: enabled_backend == "cinder"\r
--- /dev/null
+---\r
+- name: check if etcd exists\r
+ stat:\r
+ path: "{{ etcd_dir }}/etcd"\r
+ ignore_errors: yes\r
+ register: etcdexisted\r
+\r
+- name: download etcd\r
+ get_url:\r
+ url={{ etcd_download_url }}\r
+ dest=/tmp/{{ etcd_tarball }}\r
+ when:\r
+ - etcdexisted.stat.exists is undefined or etcdexisted.stat.exists == false\r
+\r
+- name: extract the etcd tarball\r
+ unarchive:\r
+ src=/tmp/{{ etcd_tarball }}\r
+ dest=/tmp/\r
+ when:\r
+ - etcdexisted.stat.exists is undefined or etcdexisted.stat.exists == false\r
+\r
+- name: Check if etcd is running\r
+ shell: ps aux | grep etcd | grep -v grep\r
+ ignore_errors: true\r
+ register: service_etcd_status\r
+\r
+- name: run etcd daemon service\r
+ shell: nohup ./etcd &>>etcd.log &\r
+ become: true\r
+ args:\r
+ chdir: "{{ etcd_dir }}"\r
+ when: service_etcd_status.rc != 0\r
+\r
+- name: check etcd cluster health\r
+ shell: ./etcdctl cluster-health\r
+ become: true\r
+ args:\r
+ chdir: "{{ etcd_dir }}"\r
--- /dev/null
+---\r
+- name: include scenarios/etcd.yml\r
+ include: scenarios/etcd.yml\r
+ when: db_driver == "etcd"
\ No newline at end of file
--- /dev/null
+---\r
+- name: install ceph-common external package when ceph backend enabled\r
+ apt:\r
+ name: ceph-common\r
+ when: enabled_backend == "ceph"\r
+\r
+- name: check if ceph-ansible source code exists\r
+ stat:\r
+ path: /tmp/ceph-ansible\r
+ ignore_errors: yes\r
+ register: cephansibleexisted\r
+\r
+- name: download ceph-ansible source code\r
+ git:\r
+ repo: https://github.com/ceph/ceph-ansible.git\r
+ dest: /tmp/ceph-ansible\r
+ when:\r
+ - cephansibleexisted.stat.exists is undefined or cephansibleexisted.stat.exists == false\r
+\r
+- name: copy ceph inventory host into ceph-ansible directory\r
+ copy:\r
+ src: ../../../group_vars/ceph/ceph.hosts\r
+ dest: /tmp/ceph-ansible/ceph.hosts\r
+\r
+- name: copy ceph all.yml file into ceph-ansible group_vars directory\r
+ copy:\r
+ src: ../../../group_vars/ceph/all.yml\r
+ dest: /tmp/ceph-ansible/group_vars/all.yml\r
+\r
+- name: copy ceph osds.yml file into ceph-ansible group_vars directory\r
+ copy:\r
+ src: ../../../group_vars/ceph/osds.yml\r
+ dest: /tmp/ceph-ansible/group_vars/osds.yml\r
+\r
+- name: copy site.yml.sample to site.yml in ceph-ansible\r
+ copy:\r
+ src: /tmp/ceph-ansible/site.yml.sample\r
+ dest: /tmp/ceph-ansible/site.yml\r
+\r
+- name: ping all hosts\r
+ shell: ansible all -m ping -i ceph.hosts\r
+ become: true\r
+ args:\r
+ chdir: /tmp/ceph-ansible\r
+\r
+- name: run ceph-ansible playbook\r
+ shell: ansible-playbook site.yml -i ceph.hosts\r
+ become: true\r
+ args:\r
+ chdir: /tmp/ceph-ansible\r
+\r
+- name: Check if ceph osd is running\r
+ shell: ps aux | grep ceph-osd | grep -v grep\r
+ ignore_errors: false\r
+ changed_when: false\r
+ register: service_ceph_osd_status\r
+\r
+- name: Check if ceph mon is running\r
+ shell: ps aux | grep ceph-mon | grep -v grep\r
+ ignore_errors: false\r
+ changed_when: false\r
+ register: service_ceph_mon_status\r
+\r
+- name: Create a pool and initialize it.\r
+ shell: ceph osd pool create {{ ceph_pool_name }} 100 && ceph osd pool set {{ ceph_pool_name }} size 1\r
+ ignore_errors: yes\r
+ changed_when: false\r
+ register: ceph_init_pool\r
+ when: service_ceph_mon_status.rc == 0 and service_ceph_osd_status.rc == 0
\ No newline at end of file
--- /dev/null
+---\r
+\r
+- name: install python-pip\r
+ apt:\r
+ name: python-pip\r
+\r
+- name: install lvm2\r
+ apt:\r
+ name: lvm2\r
+\r
+- name: install thin-provisioning-tools\r
+ apt:\r
+ name: thin-provisioning-tools\r
+\r
+- name: install docker-compose\r
+ pip:\r
+ name: docker-compose\r
+\r
+- name: create directory to save source code and volume group file\r
+ file:\r
+ path: "{{ cinder_data_dir }}"\r
+ state: directory\r
+ recurse: yes\r
+\r
+- name: create volume group in thin mode\r
+ shell:\r
+ _raw_params: |\r
+ function _create_lvm_volume_group {\r
+ local vg=$1\r
+ local size=$2\r
+\r
+ local backing_file={{ cinder_data_dir }}/${vg}.img\r
+ if ! sudo vgs $vg; then\r
+ # Only create if the file doesn't already exists\r
+ [[ -f $backing_file ]] || truncate -s $size $backing_file\r
+ local vg_dev\r
+ vg_dev=`sudo losetup -f --show $backing_file`\r
+\r
+ # Only create volume group if it doesn't already exist\r
+ if ! sudo vgs $vg; then\r
+ sudo vgcreate $vg $vg_dev\r
+ fi\r
+ fi\r
+ }\r
+ modprobe dm_thin_pool\r
+ _create_lvm_volume_group {{ cinder_volume_group }} 10G\r
+ args:\r
+ executable: /bin/bash\r
+ become: true\r
+\r
+- name: check if python-cinderclient source code exists\r
+ stat:\r
+ path: "{{ cinder_data_dir }}/python-cinderclient"\r
+ ignore_errors: yes\r
+ register: cinderclient_existed\r
+\r
+- name: download python-cinderclient source code\r
+ git:\r
+ repo: https://github.com/openstack/python-cinderclient.git\r
+ dest: "{{ cinder_data_dir }}/python-cinderclient"\r
+ when:\r
+ - cinderclient_existed.stat.exists is undefined or cinderclient_existed.stat.exists == false\r
+\r
+# Tested successfully in this version `ab0185bfc6e8797a35a2274c2a5ee03afb03dd60`\r
+# git checkout -b ab0185bfc6e8797a35a2274c2a5ee03afb03dd60\r
+- name: pip install cinderclient\r
+ shell: |\r
+ pip install -e .\r
+ become: true\r
+ args:\r
+ chdir: "{{ cinder_data_dir }}/python-cinderclient"\r
+\r
+- name: check if python-brick-cinderclient-ext source code exists\r
+ stat:\r
+ path: "{{ cinder_data_dir }}/python-brick-cinderclient-ext"\r
+ ignore_errors: yes\r
+ register: brick_existed\r
+\r
+- name: download python-brick-cinderclient-ext source code\r
+ git:\r
+ repo: https://github.com/openstack/python-brick-cinderclient-ext.git\r
+ dest: "{{ cinder_data_dir }}/python-brick-cinderclient-ext"\r
+ when:\r
+ - brick_existed.stat.exists is undefined or brick_existed.stat.exists == false\r
+\r
+# Tested successfully in this version `a281e67bf9c12521ea5433f86cec913854826a33`\r
+# git checkout -b a281e67bf9c12521ea5433f86cec913854826a33\r
+- name: pip install python-brick-cinderclient-ext\r
+ shell: |\r
+ pip install -e .\r
+ become: true\r
+ args:\r
+ chdir: "{{ cinder_data_dir }}/python-brick-cinderclient-ext"\r
+\r
+\r
+- name: check if cinder source code exists\r
+ stat:\r
+ path: "{{ cinder_data_dir }}/cinder"\r
+ ignore_errors: yes\r
+ register: cinder_existed\r
+\r
+- name: download cinder source code\r
+ git:\r
+ repo: https://github.com/openstack/cinder.git\r
+ dest: "{{ cinder_data_dir }}/cinder"\r
+ when:\r
+ - cinder_existed.stat.exists is undefined or cinder_existed.stat.exists == false\r
+\r
+# Tested successfully in this version `7bbc95344d3961d0bf059252723fa40b33d4b3fe`\r
+# git checkout -b 7bbc95344d3961d0bf059252723fa40b33d4b3fe\r
+- name: update blockbox configuration\r
+ shell: |\r
+ sed -i "s/PLATFORM ?= debian:stretch/PLATFORM ?= {{ cinder_container_platform }}/g" Makefile\r
+ sed -i "s/TAG ?= debian-cinder:latest/TAG ?= {{ cinder_image_tag }}:latest/g" Makefile\r
+\r
+ sed -i "s/image: debian-cinder/image: {{ cinder_image_tag }}/g" docker-compose.yml\r
+ sed -i "s/image: lvm-debian-cinder/image: {{ cinder_image_tag }}/g" docker-compose.yml\r
+\r
+ sed -i "s/volume_group = cinder-volumes /volume_group = {{ cinder_volume_group }}/g" etc/cinder.conf\r
+ become: true\r
+ args:\r
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"\r
+\r
+- name: make blockbox\r
+ shell: make blockbox\r
+ become: true\r
+ args:\r
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"\r
+\r
+- name: start cinder-standalone service\r
+ shell: docker-compose up -d\r
+ become: true\r
+ args:\r
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"\r
+\r
+- name: wait for cinder service to start normally\r
+ wait_for:\r
+ host: 127.0.0.1\r
+ port: 8776\r
+ delay: 2\r
+ timeout: 120\r
--- /dev/null
+---\r
+- name: install lvm2 external package when lvm backend enabled\r
+ apt:\r
+ name: lvm2\r
+\r
+- name: check if physical volume exists\r
+ shell: pvdisplay {{ pv_device }}\r
+ ignore_errors: yes\r
+ register: pv_existed\r
+\r
+- name: create a physical volume\r
+ shell: pvcreate {{ pv_device }}\r
+ when: pv_existed is undefined or pv_existed.rc != 0\r
+\r
+- name: check if volume group exists\r
+ shell: vgdisplay {{ vg_name }}\r
+ ignore_errors: yes\r
+ register: vg_existed\r
+\r
+- name: create a volume group\r
+ shell: vgcreate {{ vg_name }} {{ pv_device }}\r
+ when: vg_existed is undefined or vg_existed.rc != 0\r
--- /dev/null
+---\r
+- name: include scenarios/lvm.yml\r
+ include: scenarios/lvm.yml\r
+ when: enabled_backend == "lvm"\r
+\r
+- name: include scenarios/ceph.yml\r
+ include: scenarios/ceph.yml\r
+ when: enabled_backend == "ceph"\r
+\r
+- name: include scenarios/cinder.yml\r
+ include: scenarios/cinder.yml\r
+ when: enabled_backend == "cinder" and use_cinder_standalone == false\r
+\r
+- name: include scenarios/cinder_standalone.yml\r
+ include: scenarios/cinder_standalone.yml\r
+ when: enabled_backend == "cinder" and use_cinder_standalone == true\r
+\r
+- name: run osdsdock daemon service\r
+ shell:\r
+ cmd: |\r
+ i=0\r
+ while\r
+ i="$((i+1))"\r
+ [ "$i" -lt 4 ]\r
+ do\r
+ nohup bin/osdsdock &>/dev/null &\r
+ sleep 5\r
+ ps aux | grep osdsdock | grep -v grep && break\r
+ done\r
+ args:\r
+ chdir: "{{ opensds_build_dir }}/out"\r
--- /dev/null
+---\r
+- name: run osdslet daemon service\r
+ shell:\r
+ cmd: |\r
+ i=0\r
+ while\r
+ i="$((i+1))"\r
+ [ "$i" -lt 4 ]\r
+ do\r
+ nohup bin/osdslet > osdslet.out 2> osdslet.err < /dev/null &\r
+ sleep 5\r
+ ps aux | grep osdslet | grep -v grep && break\r
+ done\r
+ args:\r
+ chdir: "{{ opensds_build_dir }}/out"\r
--- /dev/null
+---\r
+# Defines deployment design and assigns role to server groups\r
+\r
+- name: deploy an opensds local cluster\r
+ hosts: all\r
+ remote_user: root\r
+ vars_files:\r
+ - group_vars/common.yml\r
+ - group_vars/osdsdb.yml\r
+ - group_vars/osdslet.yml\r
+ - group_vars/osdsdock.yml\r
+ gather_facts: false\r
+ become: True\r
+ roles:\r
+ - common\r
+ - osdsdb\r
+ - osdslet\r
+ - osdsdock\r