+++ /dev/null
-# opensds-ansible\r
-This is an installation tool for opensds using ansible.\r
-\r
-## 1. How to install an opensds local cluster\r
-### Pre-config (Ubuntu 16.04)\r
-First download some system packages:\r
-```\r
-sudo apt-get install -y openssh-server git make gcc\r
-```\r
-Then config ```/etc/ssh/sshd_config``` file and change one line:\r
-```conf\r
-PermitRootLogin yes\r
-```\r
-Next generate ssh-token:\r
-```bash\r
-ssh-keygen -t rsa\r
-ssh-copy-id -i ~/.ssh/id_rsa.pub <ip_address> # IP address of the target machine of the installation\r
-```\r
-\r
-### Install docker\r
-If use a standalone cinder as backend, you also need to install docker to run cinder service. Please see the [docker installation document](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/) for details.\r
-\r
-### Install ansible tool\r
-To install ansible, you can run `install_ansible.sh` directly or input these commands below:\r
-```bash\r
-sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.\r
-sudo apt-get update\r
-sudo apt-get install ansible\r
-ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.\r
-```\r
-\r
-### Configure opensds cluster variables:\r
-##### System environment:\r
-Configure these variables below in `group_vars/common.yml`:\r
-```yaml\r
-opensds_release: v0.1.4 # The version should be at least v0.1.4.\r
-nbp_release: v0.1.0 # The version should be at least v0.1.0.\r
-\r
-container_enabled: <false_or_true>\r
-```\r
-\r
-If you want to integrate OpenSDS with cloud platform (for example k8s), please modify `nbp_plugin_type` variable in `group_vars/common.yml`:\r
-```yaml\r
-nbp_plugin_type: standalone # standalone is the default integration way, but you can change it to 'csi', 'flexvolume'\r
-```\r
-\r
-#### Database configuration\r
-Currently OpenSDS adopts `etcd` as database backend, and the default db endpoint is `localhost:2379,localhost:2380`. But to avoid some conflicts with existing environment (k8s local cluster), we suggest you change the port of etcd cluster in `group_vars/osdsdb.yml`:\r
-```yaml\r
-db_endpoint: localhost:62379,localhost:62380\r
-\r
-etcd_host: 127.0.0.1\r
-etcd_port: 62379\r
-etcd_peer_port: 62380\r
-```\r
-\r
-##### LVM\r
-If `lvm` is chosen as storage backend, modify `group_vars/osdsdock.yml`:\r
-```yaml\r
-enabled_backend: lvm # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'\r
-pv_devices: # Specify block devices and ensure them existed if you choose lvm\r
- #- /dev/sdc\r
- #- /dev/sdd\r
-vg_name: "specified_vg_name" # Specify a name for VG if choosing lvm\r
-```\r
-Modify ```group_vars/lvm/lvm.yaml```, change pool name to be the same as `vg_name` above:\r
-```yaml\r
-"vg001" # change pool name to be the same as vg_name\r
-```\r
-##### Ceph\r
-If `ceph` is chosen as storage backend, modify `group_vars/osdsdock.yml`:\r
-```yaml\r
-enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.\r
-ceph_pools: # Specify pool name randomly if choosing ceph\r
- - rbd\r
- #- ssd\r
- #- sas\r
-```\r
-Modify ```group_vars/ceph/ceph.yaml```, change pool name to be the same as `ceph_pool_name`. But if you enable multiple pools, please append the current pool format:\r
-```yaml\r
-"rbd" # change pool name to be the same as ceph pool\r
-```\r
-Configure two files under ```group_vars/ceph```: `all.yml` and `osds.yml`. Here is an example:\r
-\r
-```group_vars/ceph/all.yml```:\r
-```yml\r
-ceph_origin: repository\r
-ceph_repository: community\r
-ceph_stable_release: luminous # Choose luminous as default version\r
-public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address\r
-cluster_network: "{{ public_network }}"\r
-monitor_interface: eth1 # Change to the network interface on the target machine\r
-```\r
-```group_vars/ceph/osds.yml```:\r
-```yml\r
-devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:\r
- - '/dev/sda' # Ensure this device exists and available if ceph is chosen\r
- - '/dev/sdb' # Ensure this device exists and available if ceph is chosen\r
-osd_scenario: collocated\r
-```\r
-\r
-##### Cinder\r
-If `cinder` is chosen as storage backend, modify `group_vars/osdsdock.yml`:\r
-```yaml\r
-enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'\r
-\r
-# Use block-box install cinder_standalone if true, see details in:\r
-use_cinder_standalone: true\r
-# If true, you can configure cinder_container_platform, cinder_image_tag,\r
-# cinder_volume_group.\r
-\r
-# Default: debian:stretch, and ubuntu:xenial, centos:7 is also supported.\r
-cinder_container_platform: debian:stretch\r
-# The image tag can be arbitrarily modified, as long as follow the image naming\r
-# conventions, default: debian-cinder\r
-cinder_image_tag: debian-cinder\r
-# The cinder standalone use lvm driver as default driver, therefore `volume_group`\r
-# should be configured, the default is: cinder-volumes. The volume group will be\r
-# removed when use ansible script clean environment.\r
-cinder_volume_group: cinder-volumes\r
-```\r
-\r
-Configure the auth and pool options to access cinder in `group_vars/cinder/cinder.yaml`. Do not need to make additional configure changes if using cinder standalone.\r
-\r
-### Check if the hosts can be reached\r
-```bash\r
-sudo ansible all -m ping -i local.hosts\r
-```\r
-\r
-### Run opensds-ansible playbook to start deploy\r
-```bash\r
-sudo ansible-playbook site.yml -i local.hosts\r
-```\r
-\r
-## 2. How to test opensds cluster\r
-\r
-### Configure opensds CLI tool\r
-```bash\r
-sudo cp /opt/opensds-{opensds-release}-linux-amd64/bin/osdsctl /usr/local/bin\r
-export OPENSDS_ENDPOINT=http://127.0.0.1:50040\r
-export OPENSDS_AUTH_STRATEGY=noauth\r
-\r
-osdsctl pool list # Check if the pool resource is available\r
-```\r
-\r
-### Create a default profile first.\r
-```\r
-osdsctl profile create '{"name": "default", "description": "default policy"}'\r
-```\r
-\r
-### Create a volume.\r
-```\r
-osdsctl volume create 1 --name=test-001\r
-```\r
-For cinder, az needs to be specified.\r
-```\r
-osdsctl volume create 1 --name=test-001 --az nova\r
-```\r
-\r
-### List all volumes.\r
-```\r
-osdsctl volume list\r
-```\r
-\r
-### Delete the volume.\r
-```\r
-osdsctl volume delete <your_volume_id>\r
-```\r
-\r
-\r
-## 3. How to purge and clean opensds cluster\r
-\r
-### Run opensds-ansible playbook to clean the environment\r
-```bash\r
-sudo ansible-playbook clean.yml -i local.hosts\r
-```\r
-\r
-### Run ceph-ansible playbook to clean ceph cluster if ceph is deployed\r
-```bash\r
-cd /opt/ceph-ansible\r
-sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts\r
-```\r
-\r
-### Remove ceph-ansible source code (optional)\r
-```bash\r
-cd ..\r
-sudo rm -rf /opt/ceph-ansible\r
-```\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
# Defines some clean processes when banishing the cluster.\r
\r
remote_user: root\r
vars_files:\r
- group_vars/common.yml\r
+ - group_vars/auth.yml\r
- group_vars/osdsdb.yml\r
+ - group_vars/osdslet.yml\r
- group_vars/osdsdock.yml\r
+ - group_vars/dashboard.yml\r
gather_facts: false\r
become: True\r
roles:\r
- - cleaner
\ No newline at end of file
+ - cleaner\r
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+# Dummy variable to avoid an error: ansible does not recognize the
+# file as a valid configuration file when it contains no variables.
+dummy:
+
+
+###########
+# GENERAL #
+###########
+
+# OpenSDS authentication strategy, support 'noauth' and 'keystone'.
+opensds_auth_strategy: keystone
+
+# Replace this URL with the actual keystone URL
+keystone_os_auth_url: http://127.0.0.1/identity
+
+############
+# KEYSTONE #
+############
+
+# Whether to execute unstack.sh
+uninstall_keystone: true
+
+# Whether to execute stack.sh
+cleanup_keystone: true
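
With `opensds_auth_strategy` set to `keystone`, clients have to carry matching settings. A minimal client-side sketch follows; `OPENSDS_ENDPOINT` and `OPENSDS_AUTH_STRATEGY` come from the osdsctl workflow, while treating `OS_AUTH_URL` (mirroring `keystone_os_auth_url` above) as the variable osdsctl reads is an assumption — verify the exact names against your osdsctl version:

```shell
# Point osdsctl at the OpenSDS API and select the keystone strategy.
# OS_AUTH_URL mirroring keystone_os_auth_url is an assumption here.
export OPENSDS_ENDPOINT=http://127.0.0.1:50040
export OPENSDS_AUTH_STRATEGY=keystone
export OS_AUTH_URL=http://127.0.0.1/identity
echo "auth strategy: $OPENSDS_AUTH_STRATEGY"
```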
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
# Variables here are applicable to all host groups NOT roles\r
\r
\r
# You can override vars by using host or group vars\r
\r
+ceph_origin: repository\r
+ceph_repository: community\r
+ceph_stable_release: luminous\r
+public_network: "192.168.3.0/24"\r
+cluster_network: "{{ public_network }}"\r
+monitor_interface: eth1\r
+devices:\r
+ - '/dev/sda'\r
+ #- '/dev/sdb'\r
+osd_scenario: collocated\r
+\r
###########\r
# GENERAL #\r
###########\r
# - repository\r
# - distro\r
# - local\r
-ceph_origin: repository\r
-ceph_repository: community\r
+\r
\r
#ceph_repository: "{{ 'community' if ceph_stable else 'rhcs' if ceph_rhcs else 'dev' if ceph_dev else 'uca' if ceph_stable_uca else 'custom' if ceph_custom else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
#valid_ceph_repository:\r
#\r
#ceph_mirror: http://download.ceph.com\r
#ceph_stable_key: https://download.ceph.com/keys/release.asc\r
-ceph_stable_release: luminous\r
+#ceph_stable_release: dummy\r
#ceph_stable_repo: "{{ ceph_mirror }}/debian-{{ ceph_stable_release }}"\r
\r
#nfs_ganesha_stable: true # use stable repos for nfs-ganesha\r
# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).\r
# Eg. If you want to specify for each monitor which address the monitor will bind to you can set it in your **inventory host file** by using 'monitor_address' variable.\r
# Preference will go to monitor_address if both monitor_address and monitor_interface are defined.\r
-# To use an IPv6 address, use the monitor_address setting instead (and set ip_version to ipv6)\r
-monitor_interface: ens3\r
+#monitor_interface: "{{ ceph_mon_docker_interface if ceph_mon_docker_interface != 'interface' else 'interface' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
#monitor_address: 0.0.0.0\r
#monitor_address_block: subnet\r
# set to either ipv4 or ipv6, whichever your network is using\r
\r
## OSD options\r
#\r
-journal_size: 100 # OSD journal size in MB\r
-public_network: 100.64.128.40/24\r
-cluster_network: "{{ public_network }}"\r
+#journal_size: 5120 # OSD journal size in MB\r
+#public_network: "{{ ceph_mon_docker_subnet if ceph_mon_docker_subnet != '0.0.0.0/0' else '0.0.0.0/0' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#cluster_network: "{{ public_network | regex_replace(' ', '') }}"\r
#osd_mkfs_type: xfs\r
#osd_mkfs_options_xfs: -f -i size=2048\r
#osd_mount_options_xfs: noatime,largeio,inode64,swalloc\r
# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).\r
# Eg. If you want to specify for each radosgw node which address the radosgw will bind to you can set it in your **inventory host file** by using 'radosgw_address' variable.\r
# Preference will go to radosgw_address if both radosgw_address and radosgw_interface are defined.\r
-# To use an IPv6 address, use the radosgw_address setting instead (and set ip_version to ipv6)\r
#radosgw_interface: interface\r
#radosgw_address: "{{ '0.0.0.0' if rgw_containerized_deployment else 'address' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
#radosgw_address_block: subnet\r
-#radosgw_keystone: false # activate OpenStack Keystone options full detail here: http://ceph.com/docs/master/radosgw/keystone/\r
+#radosgw_keystone_ssl: false # activate this when using keystone PKI keys\r
# Rados Gateway options\r
#email_address: foo@bar.com\r
\r
#ceph_docker_registry: docker.io\r
#ceph_docker_enable_centos_extra_repo: false\r
#ceph_docker_on_openstack: false\r
-#ceph_mon_docker_interface: "{{ monitor_interface }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
-#ceph_mon_docker_subnet: "{{ public_network }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_mon_docker_interface: "interface" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
+#ceph_mon_docker_subnet: "0.0.0.0/0" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
#mon_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
#osd_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
#mds_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
#rolling_update: false\r
\r
\r
+#####################\r
+# Docker pull retry #\r
+#####################\r
+#docker_pull_retry: 3\r
+#docker_pull_timeout: "300s"\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
[mons]\r
localhost ansible_connection=local\r
\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
configFile: /etc/ceph/ceph.conf\r
pool:\r
- "rbd": # change pool name same to ceph pool, but don't change it if you choose lvm backend\r
- diskType: SSD\r
- AZ: default\r
- accessProtocol: rbd\r
- thinProvisioned: true\r
- compressed: false\r
+ rbd: # change the pool name to match the ceph pool; don't change it if you chose the lvm backend\r
+ storageType: block\r
+ availabilityZone: default\r
+ extras:\r
+ dataStorage:\r
+ provisioningPolicy: Thin\r
+ isSpaceEfficient: true\r
+ ioConnectivity:\r
+ accessProtocol: rbd\r
+ maxIOPS: 6000000\r
+ maxBWS: 500\r
+ advanced:\r
+ diskType: SSD\r
+ latency: 5ms\r
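
The pool name above has to match a pool that really exists in the ceph cluster. A hypothetical pre-flight check, not part of the playbooks, might confirm that before starting osdsdock (`rbd` mirrors the entry above; the `pg_num` of 128 in the hint is an arbitrary example value):

```shell
# Sketch: confirm the pool named in the config exists in ceph before
# starting osdsdock. Falls through with a hint when it is missing.
pool_name=rbd
if ceph osd pool ls 2>/dev/null | grep -qx "$pool_name"; then
  echo "pool $pool_name found"
else
  echo "pool $pool_name missing; e.g. create it with: ceph osd pool create $pool_name 128" >&2
fi
```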
+++ /dev/null
----\r
-# Variables here are applicable to all host groups NOT roles\r
-\r
-# This sample file generated by generate_group_vars_sample.sh\r
-\r
-# Dummy variable to avoid error because ansible does not recognize the\r
-# file as a good configuration file when no variable in it.\r
-dummy:\r
-\r
-# You can override default vars defined in defaults/main.yml here,\r
-# but I would advice to use host or group vars instead\r
-\r
-#raw_journal_devices: "{{ dedicated_devices }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
-#journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
-#raw_multi_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
-#dmcrytpt_journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
-#dmcrypt_dedicated_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1\r
-\r
-\r
-###########\r
-# GENERAL #\r
-###########\r
-\r
-# Even though OSD nodes should not have the admin key\r
-# at their disposal, some people might want to have it\r
-# distributed on OSD nodes. Setting 'copy_admin_key' to 'true'\r
-# will copy the admin key to the /etc/ceph/ directory\r
-#copy_admin_key: false\r
-\r
-\r
-####################\r
-# OSD CRUSH LOCATION\r
-####################\r
-\r
-# /!\\r
-#\r
-# BE EXTREMELY CAREFUL WITH THIS OPTION\r
-# DO NOT USE IT UNLESS YOU KNOW WHAT YOU ARE DOING\r
-#\r
-# /!\\r
-#\r
-# It is probably best to keep this option to 'false' as the default\r
-# suggests it. This option should only be used while doing some complex\r
-# CRUSH map. It allows you to force a specific location for a set of OSDs.\r
-#\r
-# The following options will build a ceph.conf with OSD sections\r
-# Example:\r
-# [osd.X]\r
-# osd crush location = "root=location"\r
-#\r
-# This works with your inventory file\r
-# To match the following 'osd_crush_location' option the inventory must look like:\r
-#\r
-# [osds]\r
-# osd0 ceph_crush_root=foo ceph_crush_rack=bar\r
-\r
-#crush_location: false\r
-#osd_crush_location: "\"root={{ ceph_crush_root }} rack={{ ceph_crush_rack }} host={{ ansible_hostname }}\""\r
-\r
-\r
-##############\r
-# CEPH OPTIONS\r
-##############\r
-\r
-# Devices to be used as OSDs\r
-# You can pre-provision disks that are not present yet.\r
-# Ansible will just skip them. Newly added disk will be\r
-# automatically configured during the next run.\r
-#\r
-\r
-\r
-# Declare devices to be used as OSDs\r
-# All scenario(except 3rd) inherit from the following device declaration\r
-\r
-devices:\r
-# - /dev/sda\r
-# - /dev/sdc\r
-# - /dev/sdd\r
-# - /dev/sde\r
-\r
-#devices: []\r
-\r
-\r
-#'osd_auto_discovery' mode prevents you from filling out the 'devices' variable above.\r
-# You can use this option with First and Forth and Fifth OSDS scenario.\r
-# Device discovery is based on the Ansible fact 'ansible_devices'\r
-# which reports all the devices on a system. If chosen all the disks\r
-# found will be passed to ceph-disk. You should not be worried on using\r
-# this option since ceph-disk has a built-in check which looks for empty devices.\r
-# Thus devices with existing partition tables will not be used.\r
-#\r
-#osd_auto_discovery: false\r
-\r
-# Encrypt your OSD device using dmcrypt\r
-# If set to True, no matter which osd_objecstore and osd_scenario you use the data will be encrypted\r
-#dmcrypt: "{{ True if dmcrytpt_journal_collocation or dmcrypt_dedicated_journal else False }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
-\r
-\r
-# I. First scenario: collocated\r
-#\r
-# To enable this scenario do: osd_scenario: collocated\r
-#\r
-#\r
-# If osd_objectstore: filestore is enabled both 'ceph data' and 'ceph journal' partitions\r
-# will be stored on the same device.\r
-#\r
-# If osd_objectstore: bluestore is enabled 'ceph data', 'ceph block', 'ceph block.db', 'ceph block.wal' will be stored\r
-# on the same device. The device will get 2 partitions:\r
-# - One for 'data', called 'ceph data'\r
-# - One for 'ceph block', 'ceph block.db', 'ceph block.wal' called 'ceph block'\r
-#\r
-# Example of what you will get:\r
-# [root@ceph-osd0 ~]# blkid /dev/sda*\r
-# /dev/sda: PTTYPE="gpt"\r
-# /dev/sda1: UUID="9c43e346-dd6e-431f-92d8-cbed4ccb25f6" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="749c71c9-ed8f-4930-82a7-a48a3bcdb1c7"\r
-# /dev/sda2: PARTLABEL="ceph block" PARTUUID="e6ca3e1d-4702-4569-abfa-e285de328e9d"\r
-#\r
-\r
-#osd_scenario: "{{ 'collocated' if journal_collocation or dmcrytpt_journal_collocation else 'non-collocated' if raw_multi_journal or dmcrypt_dedicated_journal else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1\r
-#valid_osd_scenarios:\r
-# - collocated\r
-# - non-collocated\r
-# - lvm\r
-osd_scenario: collocated\r
-\r
-# II. Second scenario: non-collocated\r
-#\r
-# To enable this scenario do: osd_scenario: non-collocated\r
-#\r
-# If osd_objectstore: filestore is enabled 'ceph data' and 'ceph journal' partitions\r
-# will be stored on different devices:\r
-# - 'ceph data' will be stored on the device listed in 'devices'\r
-# - 'ceph journal' will be stored on the device listed in 'dedicated_devices'\r
-#\r
-# Let's take an example, imagine 'devices' was declared like this:\r
-#\r
-# devices:\r
-# - /dev/sda\r
-# - /dev/sdb\r
-# - /dev/sdc\r
-# - /dev/sdd\r
-#\r
-# And 'dedicated_devices' was declared like this:\r
-#\r
-# dedicated_devices:\r
-# - /dev/sdf\r
-# - /dev/sdf\r
-# - /dev/sdg\r
-# - /dev/sdg\r
-#\r
-# This will result in the following mapping:\r
-# - /dev/sda will have /dev/sdf1 as journal\r
-# - /dev/sdb will have /dev/sdf2 as a journal\r
-# - /dev/sdc will have /dev/sdg1 as a journal\r
-# - /dev/sdd will have /dev/sdg2 as a journal\r
-#\r
-#\r
-# If osd_objectstore: bluestore is enabled, both 'ceph block.db' and 'ceph block.wal' partitions will be stored\r
-# on a dedicated device.\r
-#\r
-# So the following will happen:\r
-# - The devices listed in 'devices' will get 2 partitions, one for 'block' and one for 'data'.\r
-# 'data' is only 100MB big and do not store any of your data, it's just a bunch of Ceph metadata.\r
-# 'block' will store all your actual data.\r
-# - The devices in 'dedicated_devices' will get 1 partition for RocksDB DB, called 'block.db'\r
-# and one for RocksDB WAL, called 'block.wal'\r
-#\r
-# By default dedicated_devices will represent block.db\r
-#\r
-# Example of what you will get:\r
-# [root@ceph-osd0 ~]# blkid /dev/sd*\r
-# /dev/sda: PTTYPE="gpt"\r
-# /dev/sda1: UUID="c6821801-2f21-4980-add0-b7fc8bd424d5" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="f2cc6fa8-5b41-4428-8d3f-6187453464d0"\r
-# /dev/sda2: PARTLABEL="ceph block" PARTUUID="ea454807-983a-4cf2-899e-b2680643bc1c"\r
-# /dev/sdb: PTTYPE="gpt"\r
-# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="af5b2d74-4c08-42cf-be57-7248c739e217"\r
-# /dev/sdb2: PARTLABEL="ceph block.wal" PARTUUID="af3f8327-9aa9-4c2b-a497-cf0fe96d126a"\r
-#dedicated_devices: []\r
-\r
-\r
-# More device granularity for Bluestore\r
-#\r
-# ONLY if osd_objectstore: bluestore is enabled.\r
-#\r
-# By default, if 'bluestore_wal_devices' is empty, it will get the content of 'dedicated_devices'.\r
-# If set, then you will have a dedicated partition on a specific device for block.wal.\r
-#\r
-# Example of what you will get:\r
-# [root@ceph-osd0 ~]# blkid /dev/sd*\r
-# /dev/sda: PTTYPE="gpt"\r
-# /dev/sda1: UUID="39241ae9-d119-4335-96b3-0898da8f45ce" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="961e7313-bdb7-49e7-9ae7-077d65c4c669"\r
-# /dev/sda2: PARTLABEL="ceph block" PARTUUID="bff8e54e-b780-4ece-aa16-3b2f2b8eb699"\r
-# /dev/sdb: PTTYPE="gpt"\r
-# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="0734f6b6-cc94-49e9-93de-ba7e1d5b79e3"\r
-# /dev/sdc: PTTYPE="gpt"\r
-# /dev/sdc1: PARTLABEL="ceph block.wal" PARTUUID="824b84ba-6777-4272-bbbd-bfe2a25cecf3"\r
-#bluestore_wal_devices: "{{ dedicated_devices }}"\r
-\r
-# III. Use ceph-volume to create OSDs from logical volumes.\r
-# Use 'osd_scenario: lvm' to enable this scenario. Currently we only support dedicated journals\r
-# when using lvm, not collocated journals.\r
-# lvm_volumes is a list of dictionaries. Each dictionary must contain a data, journal and vg_name\r
-# key. Any logical volume or logical group used must be a name and not a path.\r
-# data must be a logical volume\r
-# journal can be either a lv, device or partition. You can not use the same journal for many data lvs.\r
-# data_vg must be the volume group name of the data lv\r
-# journal_vg is optional and must be the volume group name of the journal lv, if applicable\r
-# For example:\r
-# lvm_volumes:\r
-# - data: data-lv1\r
-# data_vg: vg1\r
-# journal: journal-lv1\r
-# journal_vg: vg2\r
-# - data: data-lv2\r
-# journal: /dev/sda\r
-# data_vg: vg1\r
-# - data: data-lv3\r
-# journal: /dev/sdb1\r
-# data_vg: vg2\r
-#lvm_volumes: []\r
-\r
-\r
-##########\r
-# DOCKER #\r
-##########\r
-\r
-#ceph_config_keys: [] # DON'T TOUCH ME\r
-\r
-# Resource limitation\r
-# For the whole list of limits you can apply see: docs.docker.com/engine/admin/resource_constraints\r
-# Default values are based from: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations\r
-# These options can be passed using the 'ceph_osd_docker_extra_env' variable.\r
-#ceph_osd_docker_memory_limit: 1g\r
-#ceph_osd_docker_cpu_limit: 1\r
-\r
-# PREPARE DEVICE\r
-#\r
-# WARNING /!\ DMCRYPT scenario ONLY works with Docker version 1.12.5 and above\r
-#\r
-#ceph_osd_docker_devices: "{{ devices }}"\r
-#ceph_osd_docker_prepare_env: -e OSD_JOURNAL_SIZE={{ journal_size }}\r
-\r
-# ACTIVATE DEVICE\r
-#\r
-#ceph_osd_docker_extra_env:\r
-#ceph_osd_docker_run_script_path: "/usr/share" # script called by systemd to run the docker command\r
-\r
-\r
-###########\r
-# SYSTEMD #\r
-###########\r
-\r
-# ceph_osd_systemd_overrides will override the systemd settings\r
-# for the ceph-osd services.\r
-# For example,to set "PrivateDevices=false" you can specify:\r
-#ceph_osd_systemd_overrides:\r
-# Service:\r
-# PrivateDevices: False\r
-\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
authOptions:\r
noAuth: true\r
endpoint: "http://127.0.0.1/identity"\r
tenantName: "myproject"\r
pool:\r
"cinder-lvm@lvm#lvm":\r
- AZ: nova\r
- thin: true\r
- accessProtocol: iscsi\r
- thinProvisioned: true\r
- compressed: true\r
+ storageType: block\r
+ availabilityZone: default\r
+ extras:\r
+ dataStorage:\r
+ provisioningPolicy: Thin\r
+ isSpaceEfficient: false\r
+ ioConnectivity:\r
+ accessProtocol: iscsi\r
+ maxIOPS: 7000000\r
+ maxBWS: 600\r
+ advanced:\r
+ diskType: SSD\r
+ latency: 3ms\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
# Dummy variable to avoid error because ansible does not recognize the\r
# file as a good configuration file when no variable in it.\r
# GENERAL #\r
###########\r
\r
-opensds_release: v0.1.4 # The version should be at least v0.1.4.\r
-nbp_release: v0.1.0 # The version should be at least v0.1.0.\r
-\r
-# These fields are not suggested to be modified\r
-opensds_download_url: https://github.com/opensds/opensds/releases/download/{{ opensds_release }}/opensds-{{ opensds_release }}-linux-amd64.tar.gz\r
-opensds_tarball_url: /opt/opensds-{{ opensds_release }}-linux-amd64.tar.gz\r
-opensds_dir: /opt/opensds-{{ opensds_release }}-linux-amd64\r
-nbp_download_url: https://github.com/opensds/nbp/releases/download/{{ nbp_release }}/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz\r
-nbp_tarball_url: /opt/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz\r
-nbp_dir: /opt/opensds-k8s-{{ nbp_release }}-linux-amd64\r
+# This field indicates how you prefer to install; currently supported\r
+# values are 'repository', 'release' and 'container'\r
+install_from: repository\r
\r
+# These fields are NOT suggested to be modified\r
+opensds_work_dir: /opt/opensds-linux-amd64\r
+nbp_work_dir: /opt/opensds-k8s-linux-amd64\r
opensds_config_dir: /etc/opensds\r
+opensds_driver_config_dir: "{{ opensds_config_dir }}/driver"\r
opensds_log_dir: /var/log/opensds\r
\r
\r
+##############\r
+# REPOSITORY #\r
+##############\r
+\r
+# If you choose to install from repository, you can select a specific\r
+# repository branch\r
+opensds_repo_branch: master\r
+nbp_repo_branch: master\r
+\r
+# These fields are NOT suggested to be modified\r
+opensds_remote_url: https://github.com/opensds/opensds.git\r
+nbp_remote_url: https://github.com/opensds/nbp.git\r
+\r
+\r
###########\r
-# PLUGIN #\r
+# RELEASE #\r
###########\r
\r
-nbp_plugin_type: standalone # standalone is the default integration way, but you can change it to 'csi', 'flexvolume'\r
+# If you choose to install from release, you can select a specific version\r
+opensds_release: v0.2.0 # The version should be at least v0.2.0\r
+nbp_release: v0.2.0 # The version should be at least v0.2.0\r
\r
-flexvolume_plugin_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds\r
+# These fields are NOT suggested to be modified\r
+opensds_download_url: https://github.com/opensds/opensds/releases/download/{{ opensds_release }}/opensds-{{ opensds_release }}-linux-amd64.tar.gz\r
+opensds_tarball_dir: /tmp/opensds-{{ opensds_release }}-linux-amd64\r
+nbp_download_url: https://github.com/opensds/nbp/releases/download/{{ nbp_release }}/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz\r
+nbp_tarball_dir: /tmp/opensds-k8s-{{ nbp_release }}-linux-amd64\r
+\r
+\r
+#############\r
+# CONTAINER #\r
+#############\r
+\r
+container_enabled: false\r
\r
\r
###########\r
-#CONTAINER#\r
+# PLUGIN #\r
###########\r
\r
-container_enabled: false\r
+# 'hotpot_only' is the default integration mode, but you can change it to\r
+# 'csi' or 'flexvolume'\r
+nbp_plugin_type: hotpot_only\r
+# Replace the IP (127.0.0.1) with the actual opensds endpoint IP\r
+opensds_endpoint: http://127.0.0.1:50040\r
+\r
+# These fields are NOT suggested to be modified\r
+flexvolume_plugin_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds\r
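
Since `install_from` only supports 'repository', 'release' and 'container', a wrapper script can fail fast on anything else. A hypothetical pre-flight guard (not part of the playbooks) could look like:

```shell
# Hypothetical pre-flight check: reject unknown install_from values
# before invoking ansible-playbook.
install_from=repository   # must be one of: repository, release, container
case "$install_from" in
  repository|release|container)
    echo "install_from=$install_from is valid"
    ;;
  *)
    echo "unsupported install_from: $install_from" >&2
    exit 1
    ;;
esac
```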
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+# Dummy variable to avoid error because ansible does not recognize the
+# file as a good configuration file when no variable in it.
+dummy:
+
+
+###########
+# GENERAL #
+###########
+
+# Dashboard installation types are: 'container', 'source_code'
+dashboard_installation_type: container
+
+
+###########
+# DOCKER #
+###########
+
+dashboard_docker_image: opensdsio/dashboard:latest
-tgtBindIp: 127.0.0.1\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
+tgtBindIp: 127.0.0.1 # change tgtBindIp to your real host ip, run 'ifconfig' to check\r
+tgtConfDir: /etc/tgt/conf.d\r
pool:\r
- "vg001": # change pool name same to vg_name, but don't change it if you choose ceph backend\r
- diskType: SSD\r
- AZ: default\r
- accessProtocol: iscsi\r
- thinProvisioned: false\r
- compressed: false\r
+  opensds-volumes: # keep the pool name the same as the volume group name, but don't change it if you choose the ceph backend\r
+ storageType: block\r
+ availabilityZone: default\r
+ extras:\r
+ dataStorage:\r
+ provisioningPolicy: Thin\r
+ isSpaceEfficient: false\r
+ ioConnectivity:\r
+ accessProtocol: iscsi\r
+ maxIOPS: 7000000\r
+ maxBWS: 600\r
+ advanced:\r
+ diskType: SSD\r
+ latency: 5ms\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
# Dummy variable to avoid error because ansible does not recognize the\r
# file as a good configuration file when no variable in it.\r
###########\r
\r
db_driver: etcd\r
-db_endpoint: localhost:2379,localhost:2380\r
-#db_credential: opensds:password@127.0.0.1:3306/dbname\r
+db_endpoint: "{{ etcd_host }}:{{ etcd_port }},{{ etcd_host }}:{{ etcd_peer_port }}"\r
\r
\r
###########\r
etcd_port: 2379\r
etcd_peer_port: 2380\r
\r
-# These fields are not suggested to be modified\r
+# These fields are NOT suggested to be modified\r
etcd_tarball: etcd-{{ etcd_release }}-linux-amd64.tar.gz\r
etcd_download_url: https://github.com/coreos/etcd/releases/download/{{ etcd_release }}/{{ etcd_tarball }}\r
etcd_dir: /opt/etcd-{{ etcd_release }}-linux-amd64\r
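For reference, the templated fields above expand as in this plain-shell sketch; the values of `etcd_host` and `etcd_release` are assumptions here (both are defined elsewhere in the group_vars files, not shown in this hunk):

```shell
# Plain-shell rendering of the Jinja2 templates above; etcd_host and
# etcd_release values are assumed, the ports match the defaults shown.
etcd_host="localhost"
etcd_port=2379
etcd_peer_port=2380
etcd_release="v3.2.0"

db_endpoint="${etcd_host}:${etcd_port},${etcd_host}:${etcd_peer_port}"
etcd_tarball="etcd-${etcd_release}-linux-amd64.tar.gz"
etcd_download_url="https://github.com/coreos/etcd/releases/download/${etcd_release}/${etcd_tarball}"

echo "$db_endpoint"        # localhost:2379,localhost:2380
echo "$etcd_download_url"
```

This is only meant to show how the comma-separated `db_endpoint` is built from the etcd host and ports.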
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
# Dummy variable to avoid error because ansible does not recognize the\r
# file as a good configuration file when no variable in it.\r
\r
# Change it according to your backend, currently support 'lvm', 'ceph', 'cinder'\r
enabled_backend: lvm\r
+# Change it according to your node type (host or target); currently\r
+# 'provisioner' and 'attacher' are supported\r
+dock_type: provisioner\r
\r
# These fields are NOT suggested to be modified\r
dock_endpoint: localhost:50050\r
dock_log_file: "{{ opensds_log_dir }}/osdsdock.log"\r
\r
+\r
###########\r
# LVM #\r
###########\r
\r
-pv_devices: # Specify block devices and ensure them existed if you choose lvm\r
- #- /dev/sdc\r
- #- /dev/sdd\r
-vg_name: vg001 # Specify a name randomly\r
\r
# These fields are NOT suggested to be modified\r
lvm_name: lvm backend\r
lvm_description: This is a lvm backend service\r
lvm_driver_name: lvm\r
-lvm_config_path: "{{ opensds_config_dir }}/driver/lvm.yaml"\r
+lvm_config_path: "{{ opensds_driver_config_dir }}/lvm.yaml"\r
+opensds_volume_group: opensds-volumes\r
+\r
+\r
\r
###########\r
# CEPH #\r
ceph_name: ceph backend\r
ceph_description: This is a ceph backend service\r
ceph_driver_name: ceph\r
-ceph_config_path: "{{ opensds_config_dir }}/driver/ceph.yaml"\r
+ceph_config_path: "{{ opensds_driver_config_dir }}/ceph.yaml"\r
+\r
\r
###########\r
# CINDER #\r
# removed when use ansible script clean environment.\r
cinder_volume_group: cinder-volumes\r
# All source code and volume group file will be placed in the cinder_data_dir:\r
-cinder_data_dir: "{{ workplace }}/cinder_data_dir"\r
-\r
+cinder_data_dir: "/opt/cinder_data_dir"\r
\r
-# These fields are not suggested to be modified\r
+# These fields are NOT suggested to be modified\r
cinder_name: cinder backend\r
cinder_description: This is a cinder backend service\r
cinder_driver_name: cinder\r
-cinder_config_path: "{{ opensds_config_dir }}/driver/cinder.yaml"\r
+cinder_config_path: "{{ opensds_driver_config_dir }}/cinder.yaml"\r
+\r
\r
###########\r
# DOCKER #\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
# Dummy variable to avoid error because ansible does not recognize the\r
# file as a good configuration file when no variable in it.\r
#!/bin/bash\r
\r
-sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
+# This step is needed to upgrade ansible to version 2.4.2 which is required for\r
+# the ceph backend.\r
+sudo add-apt-repository ppa:ansible/ansible-2.4\r
\r
sudo apt-get update\r
sudo apt-get install -y ansible\r
sleep 3\r
\r
-ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.\r
+ansible --version # Ansible version 2.4.2 is required for ceph; 2.0.0.2 or higher is needed for other backends.\r
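The version requirement in the comment above could also be enforced by the script itself; a minimal sketch using `sort -V` (`version_ge` is a hypothetical helper, not part of install_ansible.sh):

```shell
# version_ge A B succeeds when version A >= version B; a hypothetical
# helper illustrating the 2.4.2 requirement mentioned above.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Warn if the installed ansible (when present) is older than 2.4.2.
current=$(ansible --version 2>/dev/null | head -n1 | awk '{print $2}')
if [ -n "$current" ] && ! version_ge "$current" "2.4.2"; then
    echo "ansible $current is older than 2.4.2 (required for the ceph backend)" >&2
fi
```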
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
[controllers]\r
localhost ansible_connection=local\r
\r
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: uninstall keystone
+ shell: "{{ item }}"
+ with_items:
+ - bash ./script/keystone.sh uninstall
+ when: opensds_auth_strategy == "keystone" and uninstall_keystone == true
+ ignore_errors: yes
+ become: yes
+
+- name: cleanup keystone
+ shell: "{{ item }}"
+ with_items:
+ - bash ./script/keystone.sh cleanup
+ when: opensds_auth_strategy == "keystone" and cleanup_keystone == true
+ ignore_errors: yes
+ become: yes
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: clean the lvm volume group
+ shell:
+ _raw_params: |
+
+ # _clean_lvm_volume_group removes all default LVM volumes
+ #
+ # Usage: _clean_lvm_volume_group $vg
+ function _clean_lvm_volume_group {
+ local vg=$1
+
+ # Clean out existing volumes
+ sudo lvremove -f $vg
+ }
+
+ # _remove_lvm_volume_group removes the volume group
+ #
+ # Usage: _remove_lvm_volume_group $vg
+ function _remove_lvm_volume_group {
+ local vg=$1
+
+ # Remove the volume group
+ sudo vgremove -f $vg
+ }
+
+ # _clean_lvm_backing_file() removes the backing file of the
+ # volume group
+ #
+      # Usage: _clean_lvm_backing_file $backing_file
+ function _clean_lvm_backing_file {
+ local backing_file=$1
+
+ # If the backing physical device is a loop device, it was probably setup by DevStack
+ if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then
+ local vg_dev
+ vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'.img'/ { print $1}')
+ if [[ -n "$vg_dev" ]]; then
+ sudo losetup -d $vg_dev
+ fi
+ rm -f $backing_file
+ fi
+ }
+
+ # clean_lvm_volume_group() cleans up the volume group and removes the
+ # backing file
+ #
+ # Usage: clean_lvm_volume_group $vg
+ function clean_lvm_volume_group {
+ local vg=$1
+
+ _clean_lvm_volume_group $vg
+ _remove_lvm_volume_group $vg
+ # if there is no logical volume left, it's safe to attempt a cleanup
+ # of the backing file
+ if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then
+ _clean_lvm_backing_file {{ opensds_work_dir }}/volumegroups/${vg}.img
+ fi
+ }
+
+      clean_lvm_volume_group {{ opensds_volume_group }}
+
+ args:
+ executable: /bin/bash
+ become: true
+ when: enabled_backend == "lvm"
+ ignore_errors: yes
+
+- name: stop cinder-standalone service
+ shell: docker-compose down
+ become: true
+ args:
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
+ when: enabled_backend == "cinder"
+ ignore_errors: yes
+
+- name: clean the volume group of cinder
+ shell:
+ _raw_params: |
+
+ # _clean_lvm_volume_group removes all default LVM volumes
+ #
+ # Usage: _clean_lvm_volume_group $vg
+ function _clean_lvm_volume_group {
+ local vg=$1
+
+ # Clean out existing volumes
+ sudo lvremove -f $vg
+ }
+
+ # _remove_lvm_volume_group removes the volume group
+ #
+ # Usage: _remove_lvm_volume_group $vg
+ function _remove_lvm_volume_group {
+ local vg=$1
+
+ # Remove the volume group
+ sudo vgremove -f $vg
+ }
+
+ # _clean_lvm_backing_file() removes the backing file of the
+ # volume group
+ #
+      # Usage: _clean_lvm_backing_file $backing_file
+ function _clean_lvm_backing_file {
+ local backing_file=$1
+
+ # If the backing physical device is a loop device, it was probably setup by DevStack
+ if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then
+ local vg_dev
+ vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'.img'/ { print $1}')
+ if [[ -n "$vg_dev" ]]; then
+ sudo losetup -d $vg_dev
+ fi
+ rm -f $backing_file
+ fi
+ }
+
+ # clean_lvm_volume_group() cleans up the volume group and removes the
+ # backing file
+ #
+ # Usage: clean_lvm_volume_group $vg
+ function clean_lvm_volume_group {
+ local vg=$1
+
+ _clean_lvm_volume_group $vg
+ _remove_lvm_volume_group $vg
+ # if there is no logical volume left, it's safe to attempt a cleanup
+ # of the backing file
+ if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then
+ _clean_lvm_backing_file {{ cinder_data_dir }}/${vg}.img
+ fi
+ }
+
+      clean_lvm_volume_group {{ cinder_volume_group }}
+
+ args:
+ executable: /bin/bash
+ become: true
+ when: enabled_backend == "cinder"
+ ignore_errors: yes
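The `losetup -j | awk` pipeline in `_clean_lvm_backing_file` above is hard to read because of its quoting; here is a standalone sketch fed with canned `losetup` output (the sample line is invented for illustration):

```shell
# Parse the loop device name out of `losetup -j <file>` output, as the
# cleanup tasks above do; the sample line is invented for illustration.
sample='/dev/loop3: [0051]:123 (/opt/opensds/volumegroups/opensds-volumes.img)'
vg_dev=$(printf '%s\n' "$sample" | awk -F':' '/.img/ { print $1 }')
echo "$vg_dev"   # /dev/loop3
```

Splitting on `:` and matching `.img` yields the `/dev/loopN` device, which is then detached with `losetup -d`.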
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: clean up all release files if installed from release
+ file:
+ path: "{{ item }}"
+ state: absent
+ force: yes
+ with_items:
+ - "{{ opensds_tarball_dir }}"
+ - "{{ nbp_tarball_dir }}"
+ ignore_errors: yes
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- set_fact:
+ go_path: "{{ lookup('env', 'GOPATH') }}"
+
+- name: check go_path
+ shell: "{{ item }}"
+ with_items:
+ - echo "The environment variable GOPATH must be set and cannot be an empty string!"
+ - /bin/false
+ when: go_path == ""
+
+- name: clean opensds controller data
+ shell: make clean
+ args:
+ chdir: "{{ go_path }}/src/github.com/opensds/opensds"
+ when: install_from == "repository"
+
+- name: clean opensds northbound plugin data
+ shell: make clean
+ args:
+ chdir: "{{ go_path }}/src/github.com/opensds/nbp"
+ when: install_from == "repository" and nbp_plugin_type != "hotpot_only"
+
+- name: clean opensds dashboard data
+ shell: make clean
+ args:
+ chdir: "{{ go_path }}/src/github.com/opensds/opensds/dashboard"
+ when: dashboard_installation_type == "source_code"
+ become: yes
+ ignore_errors: yes
----\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
- name: kill osdslet daemon service\r
- shell: killall osdslet\r
- ignore_errors: yes\r
+ shell: killall osdslet osdsdock\r
when: container_enabled == false\r
+ ignore_errors: true\r
\r
- name: kill osdslet containerized service\r
- docker:\r
- image: opensdsio/opensds-controller:latest\r
+ docker_container:\r
+ name: osdslet\r
+ image: "{{ controller_docker_image }}"\r
state: stopped\r
when: container_enabled == true\r
\r
-- name: kill osdsdock daemon service\r
- shell: killall osdsdock\r
- ignore_errors: yes\r
- when: container_enabled == false\r
-\r
- name: kill osdsdock containerized service\r
- docker:\r
- image: opensdsio/opensds-dock:latest\r
+ docker_container:\r
+ name: osdsdock\r
+ image: "{{ dock_docker_image }}"\r
state: stopped\r
when: container_enabled == true\r
\r
-- name: kill etcd daemon service\r
- shell: killall etcd\r
- ignore_errors: yes\r
- when: db_driver == "etcd" and container_enabled == false\r
-\r
-- name: kill etcd containerized service\r
- docker:\r
- image: "{{ etcd_docker_image }}"\r
+- name: stop the dashboard container\r
+ docker_container:\r
+ name: dashboard\r
+ image: "{{ dashboard_docker_image }}"\r
state: stopped\r
- when: db_driver == "etcd" and container_enabled == true\r
-\r
-- name: remove etcd service data\r
- file:\r
- path: "{{ etcd_dir }}"\r
- state: absent\r
- force: yes\r
- ignore_errors: yes\r
- when: db_driver == "etcd"\r
-\r
-- name: remove etcd tarball\r
- file:\r
- path: "/opt/{{ etcd_tarball }}"\r
- state: absent\r
- force: yes\r
- ignore_errors: yes\r
- when: db_driver == "etcd"\r
-\r
-- name: clean opensds release files\r
- file:\r
- path: "{{ opensds_dir }}"\r
- state: absent\r
- force: yes\r
- ignore_errors: yes\r
+ when: dashboard_installation_type == "container"\r
\r
-- name: clean opensds release tarball file\r
- file:\r
- path: "{{ opensds_tarball_url }}"\r
- state: absent\r
- force: yes\r
- ignore_errors: yes\r
-\r
-- name: clean opensds flexvolume plugins binary file\r
+- name: clean opensds flexvolume plugins binary file if flexvolume specified\r
file:\r
path: "{{ flexvolume_plugin_dir }}"\r
state: absent\r
ignore_errors: yes\r
when: nbp_plugin_type == "flexvolume"\r
\r
-- name: clean nbp release files\r
- file:\r
- path: "{{ nbp_dir }}"\r
- state: absent\r
- force: yes\r
- ignore_errors: yes\r
-\r
-- name: clean nbp release tarball file\r
- file:\r
- path: "{{ nbp_tarball_url }}"\r
- state: absent\r
- force: yes\r
- ignore_errors: yes\r
-\r
-- name: clean all opensds configuration files\r
- file:\r
- path: "{{ opensds_config_dir }}"\r
- state: absent\r
- force: yes\r
+- name: clean opensds csi plugin if csi plugin specified\r
+ shell: |\r
+ . /etc/profile\r
+ kubectl delete -f deploy/kubernetes\r
+ args:\r
+ chdir: "{{ nbp_work_dir }}/csi"\r
ignore_errors: yes\r
+ when: nbp_plugin_type == "csi"\r
\r
-- name: clean all opensds log files\r
+- name: clean all configuration and log files in opensds and nbp work directory\r
file:\r
- path: "{{ opensds_log_dir }}"\r
+ path: "{{ item }}"\r
state: absent\r
force: yes\r
+ with_items:\r
+ - "{{ opensds_work_dir }}"\r
+ - "{{ nbp_work_dir }}"\r
+ - "{{ opensds_config_dir }}"\r
+ - "{{ opensds_log_dir }}"\r
ignore_errors: yes\r
\r
-- name: check if it existed before cleaning a volume group\r
- shell: vgdisplay {{ vg_name }}\r
- ignore_errors: yes\r
- register: vg_existed\r
- when: enabled_backend == "lvm"\r
-\r
-- name: remove a volume group if lvm backend specified\r
- lvg:\r
- vg: "{{ vg_name }}"\r
- state: absent\r
- when: enabled_backend == "lvm" and vg_existed.rc == 0\r
-\r
-- name: remove physical volumes if lvm backend specified\r
- shell: pvremove {{ item }}\r
- with_items: "{{ pv_devices }}"\r
- when: enabled_backend == "lvm"\r
-\r
-- name: stop cinder-standalone service\r
- shell: docker-compose down\r
- become: true\r
- args:\r
- chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"\r
- when: enabled_backend == "cinder"\r
-\r
-- name: clean the volume group of cinder\r
- shell:\r
- _raw_params: |\r
-\r
- # _clean_lvm_volume_group removes all default LVM volumes\r
- #\r
- # Usage: _clean_lvm_volume_group $vg\r
- function _clean_lvm_volume_group {\r
- local vg=$1\r
+- name: include scenarios/auth-keystone.yml when keystone is specified\r
+ include_tasks: scenarios/auth-keystone.yml\r
+ when: opensds_auth_strategy == "keystone"\r
\r
- # Clean out existing volumes\r
- sudo lvremove -f $vg\r
- }\r
+- name: include scenarios/repository.yml if installed from repository\r
+ include_tasks: scenarios/repository.yml\r
+ when: install_from == "repository" or dashboard_installation_type == "source_code"\r
\r
- # _remove_lvm_volume_group removes the volume group\r
- #\r
- # Usage: _remove_lvm_volume_group $vg\r
- function _remove_lvm_volume_group {\r
- local vg=$1\r
+- name: include scenarios/release.yml if installed from release\r
+ include_tasks: scenarios/release.yml\r
+ when: install_from == "release"\r
\r
- # Remove the volume group\r
- sudo vgremove -f $vg\r
- }\r
-\r
- # _clean_lvm_backing_file() removes the backing file of the\r
- # volume group\r
- #\r
- # Usage: _clean_lvm_backing_file() $backing_file\r
- function _clean_lvm_backing_file {\r
- local backing_file=$1\r
-\r
- # If the backing physical device is a loop device, it was probably setup by DevStack\r
- if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then\r
- local vg_dev\r
- vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'.img'/ { print $1}')\r
- if [[ -n "$vg_dev" ]]; then\r
- sudo losetup -d $vg_dev\r
- fi\r
- rm -f $backing_file\r
- fi\r
- }\r
-\r
- # clean_lvm_volume_group() cleans up the volume group and removes the\r
- # backing file\r
- #\r
- # Usage: clean_lvm_volume_group $vg\r
- function clean_lvm_volume_group {\r
- local vg=$1\r
-\r
- _clean_lvm_volume_group $vg\r
- _remove_lvm_volume_group $vg\r
- # if there is no logical volume left, it's safe to attempt a cleanup\r
- # of the backing file\r
- if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then\r
- _clean_lvm_backing_file {{ cinder_data_dir }}/${vg}.img\r
- fi\r
- }\r
-\r
- clean_lvm_volume_group {{cinder_volume_group}}\r
-\r
- args:\r
- executable: /bin/bash\r
- become: true\r
- when: enabled_backend == "cinder"\r
+- name: include scenarios/backend.yml for cleaning up storage backend service\r
+ include_tasks: scenarios/backend.yml\r
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: install docker-py package with pip when enabling containerized deployment
+ pip:
+ name: docker-py
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: check whether opensds release files exist
+ stat:
+ path: "{{ opensds_tarball_dir }}"
+ ignore_errors: yes
+ register: opensdsreleasesexisted
+
+- name: download and extract the opensds release tarball if it does not exist
+ unarchive:
+    src: "{{ opensds_download_url }}"
+    dest: /tmp/
+ when:
+ - opensdsreleasesexisted.stat.exists is undefined or opensdsreleasesexisted.stat.exists == false
+
+- name: change the mode of all binary files in opensds release
+ file:
+ path: "{{ opensds_tarball_dir }}/bin"
+ mode: 0755
+ recurse: yes
+
+- name: copy extracted opensds release files into opensds work directory
+ copy:
+ src: "{{ opensds_tarball_dir }}/"
+ dest: "{{ opensds_work_dir }}"
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- set_fact:
+ go_path: "{{ lookup('env', 'GOPATH') }}"
+
+- name: check go_path
+ shell: "{{ item }}"
+ with_items:
+ - echo "The environment variable GOPATH must be set and cannot be an empty string!"
+ - /bin/false
+ when: go_path == ""
+
+- name: check whether opensds source code exists
+ stat:
+ path: "{{ go_path }}/src/github.com/opensds/opensds"
+ ignore_errors: yes
+ register: opensdsexisted
+
+- name: download opensds source code if it does not exist
+ git:
+ repo: "{{ opensds_remote_url }}"
+ dest: "{{ go_path }}/src/github.com/opensds/opensds"
+ version: "{{ opensds_repo_branch }}"
+ when:
+ - opensdsexisted.stat.exists is undefined or opensdsexisted.stat.exists == false
+
+- name: build opensds binary file
+ shell: make
+ environment:
+ GOPATH: "{{ go_path }}"
+ args:
+ chdir: "{{ go_path }}/src/github.com/opensds/opensds"
+
+- name: copy opensds binary files into opensds work directory
+ copy:
+ src: "{{ go_path }}/src/github.com/opensds/opensds/build/out/"
+ dest: "{{ opensds_work_dir }}"
+
+- name: change the permissions of opensds executable files
+ file:
+ path: "{{ opensds_work_dir }}/bin"
+ state: directory
+ mode: 0755
+ recurse: yes
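The two-task GOPATH guard above (an echo followed by `/bin/false`) amounts to this single shell check; `check_go_path` is a hypothetical helper shown only for illustration:

```shell
# Hypothetical one-step equivalent of the GOPATH guard tasks above.
check_go_path() {
    if [ -z "${1:-}" ]; then
        echo "The environment variable GOPATH must be set and cannot be an empty string!" >&2
        return 1
    fi
}

# The if-statement consumes the failure status instead of aborting here.
if check_go_path "${GOPATH:-}"; then
    echo "GOPATH is ${GOPATH}"
else
    echo "GOPATH missing; skipping source build steps" >&2
fi
```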
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
+- name: set script dir permissions\r
+ file:\r
+ path: ./script\r
+ mode: 0755\r
+ recurse: yes\r
+ ignore_errors: yes\r
+ become: yes\r
+ \r
+- name: check ansible version\r
+ shell: "{{ item }}"\r
+ with_items:\r
+ - bash ./script/check_ansible_version.sh\r
+ become: yes\r
+\r
- name: run the equivalent of "apt-get update" as a separate step\r
apt:\r
update_cache: yes\r
\r
-- name: install librados-dev and librbd-dev external packages\r
+- name: install make, gcc and pip external packages\r
apt:\r
name: "{{ item }}"\r
state: present\r
with_items:\r
- - librados-dev\r
- - librbd-dev\r
+ - make\r
+ - gcc\r
+ - python-pip\r
\r
-- name: install docker-py package with pip when enabling containerized deployment\r
- pip:\r
- name: docker-py\r
- when: container_enabled == true\r
-\r
-- name: check for opensds release files existed\r
- stat:\r
- path: "{{ opensds_dir }}"\r
- ignore_errors: yes\r
- register: opensdsreleasesexisted\r
-\r
-- name: download opensds release files\r
- get_url:\r
- url={{ opensds_download_url }}\r
- dest={{ opensds_tarball_url }}\r
- when:\r
- - opensdsreleasesexisted.stat.exists is undefined or opensdsreleasesexisted.stat.exists == false\r
-\r
-- name: extract the opensds release tarball\r
- unarchive:\r
- src={{ opensds_tarball_url }}\r
- dest=/opt/\r
- when:\r
- - opensdsreleasesexisted.stat.exists is undefined or opensdsreleasesexisted.stat.exists == false\r
-\r
-- name: check for nbp release files existed\r
- stat:\r
- path: "{{ nbp_dir }}"\r
- ignore_errors: yes\r
- register: nbpreleasesexisted\r
-\r
-- name: download nbp release files\r
- get_url:\r
- url={{ nbp_download_url }}\r
- dest={{ nbp_tarball_url }}\r
- when:\r
- - nbpreleasesexisted.stat.exists is undefined or nbpreleasesexisted.stat.exists == false\r
-\r
-- name: extract the nbp release tarball\r
- unarchive:\r
- src={{ nbp_tarball_url }}\r
- dest=/opt/\r
- when:\r
- - nbpreleasesexisted.stat.exists is undefined or nbpreleasesexisted.stat.exists == false\r
-\r
-- name: change the mode of all binary files in opensds release\r
+- name: create opensds work directory if it doesn't exist\r
file:\r
- path: "{{ opensds_dir }}/bin"\r
+ path: "{{ item }}"\r
+ state: directory\r
mode: 0755\r
- recurse: yes\r
+ with_items:\r
+ - "{{ opensds_work_dir }}"\r
+ - "{{ opensds_config_dir }}"\r
+ - "{{ opensds_driver_config_dir }}"\r
+ - "{{ opensds_log_dir }}"\r
\r
-- name: change the mode of all binary files in nbp release\r
- file:\r
- path: "{{ nbp_dir }}/flexvolume"\r
- mode: 0755\r
- recurse: yes\r
+- name: include scenarios/repository.yml when installing from repository\r
+ include: scenarios/repository.yml\r
+ when: install_from == "repository"\r
\r
-- name: create opensds global config directory if it doesn't exist\r
- file:\r
- path: "{{ opensds_config_dir }}/driver"\r
- state: directory\r
- mode: 0755\r
+- name: include scenarios/release.yml when installing from release\r
+ include: scenarios/release.yml\r
+ when: install_from == "release"\r
\r
-- name: create opensds log directory if it doesn't exist\r
- file:\r
- path: "{{ opensds_log_dir }}"\r
- state: directory\r
- mode: 0755\r
+- name: include scenarios/container.yml when installing from container\r
+ include: scenarios/container.yml\r
+ when: install_from == "container"\r
+\r
+- name: copy config templates into opensds global config folder\r
+ copy:\r
+ src: ../../../../conf/\r
+ dest: "{{ opensds_config_dir }}"\r
\r
- name: configure opensds global info\r
shell: |\r
graceful = True\r
log_file = {{ controller_log_file }}\r
socket_order = inc\r
+ auth_strategy = {{ opensds_auth_strategy }}\r
\r
[osdsdock]\r
api_endpoint = {{ dock_endpoint }}\r
log_file = {{ dock_log_file }}\r
+  # Choose the type of dock resource; only 'provisioner' and 'attacher' are supported.\r
+ dock_type = {{ dock_type }}\r
# Specify which backends should be enabled, sample,ceph,cinder,lvm and so on.\r
enabled_backends = {{ enabled_backend }}\r
\r
- [lvm]\r
- name = {{ lvm_name }}\r
- description = {{ lvm_description }}\r
- driver_name = {{ lvm_driver_name }}\r
- config_path = {{ lvm_config_path }}\r
-\r
- [ceph]\r
- name = {{ ceph_name }}\r
- description = {{ ceph_description }}\r
- driver_name = {{ ceph_driver_name }}\r
- config_path = {{ ceph_config_path }}\r
-\r
- [cinder]\r
- name = {{ cinder_name }}\r
- description = {{ cinder_description }}\r
- driver_name = {{ cinder_driver_name }}\r
- config_path = {{ cinder_config_path }}\r
-\r
[database]\r
endpoint = {{ db_endpoint }}\r
driver = {{ db_driver }}\r
args:\r
chdir: "{{ opensds_config_dir }}"\r
ignore_errors: yes\r
+\r
+- name: include nbp-installer role if nbp_plugin_type != hotpot_only\r
+ include_role:\r
+ name: nbp-installer\r
+ when: nbp_plugin_type != "hotpot_only"\r
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: install docker-py package with pip when enabling containerized deployment
+ pip:
+ name: docker-py
+
+- name: run dashboard containerized service
+ docker_container:
+ name: dashboard
+ image: opensdsio/dashboard:latest
+ state: started
+ network_mode: host
+ restart_policy: always
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- set_fact:
+ go_path: "{{ lookup('env', 'GOPATH') }}"
+
+- name: fail when GOPATH is not set
+  fail:
+    msg: "The environment variable GOPATH must be set and cannot be an empty string!"
+  when: go_path == ""
+
+- name: check whether opensds source code exists
+ stat:
+ path: "{{ go_path }}/src/github.com/opensds/opensds"
+ register: opensdsexisted
+
+- name: download opensds source code if it doesn't exist
+ git:
+ repo: "{{ opensds_remote_url }}"
+ dest: "{{ go_path }}/src/github.com/opensds/opensds"
+ version: "{{ opensds_repo_branch }}"
+ when:
+ - opensdsexisted.stat.exists is undefined or opensdsexisted.stat.exists == false
+
+- name: build and configure opensds dashboard
+ shell: "{{ item }}"
+ with_items:
+ - service apache2 stop
+ - make
+ - service apache2 start
+ args:
+ chdir: "{{ go_path }}/src/github.com/opensds/opensds/dashboard"
+ warn: false
+ become: yes
+
+- name: update nginx default config
+ become: yes
+ shell: bash ./script/set_nginx_config.sh
+
+- name: restart nginx
+ service:
+ name: nginx
+ state: restarted
+
\ No newline at end of file
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: use container to install dashboard
+ include_tasks: scenarios/container.yml
+ when: dashboard_installation_type == "container"
+
+- name: use source code to install dashboard
+ include_tasks: scenarios/source-code.yml
+ when: dashboard_installation_type == "source_code"
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: Configure opensds endpoint IP in opensds csi plugin
+ lineinfile:
+ dest: "{{ nbp_work_dir }}/csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml"
+ regexp: '^ opensdsendpoint'
+ line: ' opensdsendpoint: {{ opensds_endpoint }}'
+ backup: yes
+
+- name: Configure opensds auth strategy in opensds csi plugin
+ lineinfile:
+ dest: "{{ nbp_work_dir }}/csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml"
+ regexp: '^ opensdsauthstrategy'
+ line: ' opensdsauthstrategy: {{ opensds_auth_strategy }}'
+ backup: yes
+
+- name: Configure keystone os auth url in opensds csi plugin
+ lineinfile:
+ dest: "{{ nbp_work_dir }}/csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml"
+ regexp: '^ osauthurl'
+ line: ' osauthurl: {{ keystone_os_auth_url }}'
+ backup: yes
+ when: opensds_auth_strategy == "keystone"
+
+- name: Prepare and deploy opensds csi plugin
+ shell: |
+ . /etc/profile
+ kubectl create -f deploy/kubernetes
+ args:
+ chdir: "{{ nbp_work_dir }}/csi"
+ ignore_errors: yes
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
- name: Create flexvolume plugin directory if not existed\r
file:\r
\r
- name: Copy opensds flexvolume plugin binary file into flexvolume plugin dir\r
copy:\r
- src: "{{ nbp_dir }}/flexvolume/opensds"\r
+ src: "{{ nbp_work_dir }}/flexvolume/opensds"\r
dest: "{{ flexvolume_plugin_dir }}/opensds"\r
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: check whether nbp release files exist
+ stat:
+ path: "{{ nbp_tarball_dir }}"
+ ignore_errors: yes
+ register: nbpreleasesexisted
+
+- name: download and extract the nbp release tarball if it doesn't exist
+  unarchive:
+    src: "{{ nbp_download_url }}"
+    dest: /tmp/
+ when:
+ - nbpreleasesexisted.stat.exists is undefined or nbpreleasesexisted.stat.exists == false
+
+- name: change the mode of all binary files in nbp release
+ file:
+ path: "{{ nbp_tarball_dir }}/flexvolume"
+ mode: 0755
+ recurse: yes
+
+- name: copy nbp tarball into nbp work directory
+ copy:
+ src: "{{ nbp_tarball_dir }}/"
+ dest: "{{ nbp_work_dir }}"
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- set_fact:
+ go_path: "{{ lookup('env', 'GOPATH') }}"
+
+- name: fail when GOPATH is not set
+  fail:
+    msg: "The environment variable GOPATH must be set and cannot be an empty string!"
+  when: go_path == ""
+
+- name: check whether nbp source code exists
+ stat:
+ path: "{{ go_path }}/src/github.com/opensds/nbp"
+ ignore_errors: yes
+ register: nbpexisted
+
+- name: download nbp source code if it doesn't exist
+ git:
+ repo: "{{ nbp_remote_url }}"
+ dest: "{{ go_path }}/src/github.com/opensds/nbp"
+ version: "{{ nbp_repo_branch }}"
+ when:
+ - nbpexisted.stat.exists is undefined or nbpexisted.stat.exists == false
+
+- name: build nbp binary file
+ shell: make
+ environment:
+ GOPATH: "{{ go_path }}"
+ args:
+ chdir: "{{ go_path }}/src/github.com/opensds/nbp"
+
+- name: create nbp install directories if they don't exist
+ file:
+ path: "{{ item }}"
+ state: directory
+ mode: 0755
+ with_items:
+ - "{{ nbp_work_dir }}/csi"
+ - "{{ nbp_work_dir }}/flexvolume"
+ - "{{ nbp_work_dir }}/provisioner"
+
+- name: copy nbp csi deploy scripts into nbp work directory
+ copy:
+ src: "{{ item }}"
+ dest: "{{ nbp_work_dir }}/csi/"
+ directory_mode: yes
+ with_items:
+ - "{{ go_path }}/src/github.com/opensds/nbp/csi/server/deploy"
+ - "{{ go_path }}/src/github.com/opensds/nbp/csi/server/examples"
+
+- name: copy nbp flexvolume binary file into nbp work directory
+ copy:
+ src: "{{ go_path }}/src/github.com/opensds/nbp/.output/flexvolume.server.opensds"
+ dest: "{{ nbp_work_dir }}/flexvolume/opensds"
+ mode: 0755
+
+- name: copy nbp provisioner deploy scripts into nbp work directory
+ copy:
+ src: "{{ item }}"
+ dest: "{{ nbp_work_dir }}/provisioner/"
+ directory_mode: yes
+ with_items:
+ - "{{ go_path }}/src/github.com/opensds/nbp/opensds-provisioner/deploy"
+ - "{{ go_path }}/src/github.com/opensds/nbp/opensds-provisioner/examples"
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
-- name: include scenarios/flexvolume.yml\r
+- name: install open-iscsi external packages\r
+ apt:\r
+ name: "{{ item }}"\r
+ state: present\r
+ with_items:\r
+ - open-iscsi\r
+\r
+- name: create nbp work directory if it doesn't exist\r
+ file:\r
+ path: "{{ item }}"\r
+ state: directory\r
+ mode: 0755\r
+ with_items:\r
+ - "{{ nbp_work_dir }}"\r
+\r
+- name: include scenarios/repository.yml when installing from repository\r
+ include: scenarios/repository.yml\r
+ when: install_from == "repository"\r
+\r
+- name: include scenarios/release.yml when installing from release\r
+ include: scenarios/release.yml\r
+ when: install_from == "release"\r
+\r
+- name: include scenarios/flexvolume.yml when nbp plugin type is flexvolume\r
include: scenarios/flexvolume.yml\r
when: nbp_plugin_type == "flexvolume"\r
\r
-- name: include scenarios/csi.yml\r
+- name: include scenarios/csi.yml when nbp plugin type is csi\r
include: scenarios/csi.yml\r
when: nbp_plugin_type == "csi"\r
--- /dev/null
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: install keystone
+ shell: "{{ item }}"
+ with_items:
+ - bash ./script/keystone.sh install
+ when: opensds_auth_strategy == "keystone"
+ become: yes
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
- name: run etcd containerized service\r
- docker:\r
+ docker_container:\r
name: myetcd\r
image: "{{ etcd_docker_image }}"\r
- command: /usr/local/bin/etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} -advertise-client-urls http://{{ etcd_host }}:{{ etcd_peer_port }} -listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }}\r
+ command: /usr/local/bin/etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }}\r
state: started\r
- net: host\r
+ network_mode: host\r
volumes:\r
- "/usr/share/ca-certificates/:/etc/ssl/certs"\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
- name: check for etcd existed\r
stat:\r
register: service_etcd_status\r
\r
- name: run etcd daemon service\r
- shell: nohup ./etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} -listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }} &>>etcd.log &\r
+ shell: nohup ./etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }} &>>etcd.log &\r
become: true\r
args:\r
chdir: "{{ etcd_dir }}"\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
- name: include scenarios/etcd.yml\r
include: "{{ item }}"\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
- name: install ceph-common external package when ceph backend enabled\r
apt:\r
state: present\r
with_items:\r
- ceph-common\r
- when: enabled_backend == "ceph"\r
\r
-- name: copy opensds ceph backend file if specify ceph backend\r
+- name: configure ceph section in opensds global info when ceph backend is specified\r
+  shell: |\r
+    cat >> opensds.conf <<OPENSDS_GLOBAL_CONFIG_DOC\r
+\r
+    [ceph]\r
+    name = {{ ceph_name }}\r
+    description = {{ ceph_description }}\r
+    driver_name = {{ ceph_driver_name }}\r
+    config_path = {{ ceph_config_path }}\r
+    OPENSDS_GLOBAL_CONFIG_DOC\r
+ args:\r
+ chdir: "{{ opensds_config_dir }}"\r
+ ignore_errors: yes\r
+\r
+- name: copy opensds ceph backend file to ceph config path when ceph backend is specified\r
copy:\r
src: ../../../group_vars/ceph/ceph.yaml\r
dest: "{{ ceph_config_path }}"\r
git:\r
repo: https://github.com/ceph/ceph-ansible.git\r
dest: /opt/ceph-ansible\r
+ version: stable-3.0\r
when:\r
- cephansibleexisted.stat.exists is undefined or cephansibleexisted.stat.exists == false\r
\r
src: ../../../group_vars/ceph/all.yml\r
dest: /opt/ceph-ansible/group_vars/all.yml\r
\r
-- name: copy ceph osds.yml file into ceph-ansible group_vars directory\r
- copy:\r
- src: ../../../group_vars/ceph/osds.yml\r
- dest: /opt/ceph-ansible/group_vars/osds.yml\r
-\r
- name: copy site.yml.sample to site.yml in ceph-ansible\r
copy:\r
src: /opt/ceph-ansible/site.yml.sample\r
dest: /opt/ceph-ansible/site.yml\r
\r
-- name: ping all hosts\r
- shell: ansible all -m ping -i ceph.hosts\r
- become: true\r
- args:\r
- chdir: /opt/ceph-ansible\r
-\r
-- name: run ceph-ansible playbook\r
- shell: ansible-playbook site.yml -i ceph.hosts | tee /var/log/ceph_ansible.log\r
+- name: ping all hosts and run ceph-ansible playbook\r
+ shell: "{{ item }}"\r
become: true\r
+ with_items:\r
+ - ansible all -m ping -i ceph.hosts\r
+ - ansible-playbook site.yml -i ceph.hosts | tee /var/log/ceph_ansible.log\r
args:\r
chdir: /opt/ceph-ansible\r
\r
-#- name: Check if ceph osd is running\r
-# shell: ps aux | grep ceph-osd | grep -v grep\r
-# ignore_errors: false\r
-# changed_when: false\r
-# register: service_ceph_osd_status\r
+- name: check if ceph osd is running\r
+ shell: ps aux | grep ceph-osd | grep -v grep\r
+ ignore_errors: false\r
+ changed_when: false\r
+ register: service_ceph_osd_status\r
\r
-- name: Check if ceph mon is running\r
+- name: check if ceph mon is running\r
shell: ps aux | grep ceph-mon | grep -v grep\r
ignore_errors: false\r
changed_when: false\r
register: service_ceph_mon_status\r
\r
-- name: Create specified pools and initialize them with default pool size.\r
+- name: set ceph crush tunables and disable some rbd features for kernel compatibility\r
+ shell: "{{ item }}"\r
+ become: true\r
+ ignore_errors: yes\r
+ with_items:\r
+ - ceph osd crush tunables hammer\r
+ - grep -q "^rbd default features" /etc/ceph/ceph.conf || sed -i '/\[global\]/arbd default features = 1' /etc/ceph/ceph.conf\r
+ when: service_ceph_mon_status.rc == 0 and service_ceph_osd_status.rc == 0\r
+\r
+- name: create specified pools and initialize them with default pool size.\r
shell: ceph osd pool create {{ item }} 100 && ceph osd pool set {{ item }} size 1\r
ignore_errors: yes\r
changed_when: false\r
with_items: "{{ ceph_pools }}"\r
- when: service_ceph_mon_status.rc == 0 # and service_ceph_osd_status.rc == 0\r
+ when: service_ceph_mon_status.rc == 0 and service_ceph_osd_status.rc == 0\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
-- name: copy opensds cinder backend file if specify cinder backend\r
+- name: configure cinder section in opensds global info when cinder backend is specified\r
+  shell: |\r
+    cat >> opensds.conf <<OPENSDS_GLOBAL_CONFIG_DOC\r
+\r
+    [cinder]\r
+    name = {{ cinder_name }}\r
+    description = {{ cinder_description }}\r
+    driver_name = {{ cinder_driver_name }}\r
+    config_path = {{ cinder_config_path }}\r
+    OPENSDS_GLOBAL_CONFIG_DOC\r
+ args:\r
+ chdir: "{{ opensds_config_dir }}"\r
+ ignore_errors: yes\r
+\r
+- name: copy opensds cinder backend file to cinder config path when cinder backend is specified\r
copy:\r
src: ../../../group_vars/cinder/cinder.yaml\r
dest: "{{ cinder_config_path }}"\r
----\r
-- name: install python-pip\r
- apt:\r
- name: python-pip\r
-\r
-- name: install lvm2\r
- apt:\r
- name: lvm2\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
\r
-- name: install thin-provisioning-tools\r
+---\r
+- name: install python-pip, lvm2, thin-provisioning-tools and docker-compose\r
apt:\r
- name: thin-provisioning-tools\r
+ name: "{{ item }}"\r
+ state: present\r
+ with_items:\r
+ - python-pip\r
+ - lvm2\r
+ - thin-provisioning-tools\r
+ - docker-compose\r
+\r
+- name: configure cinder section in opensds global info when cinder backend is specified\r
+  shell: |\r
+    cat >> opensds.conf <<OPENSDS_GLOBAL_CONFIG_DOC\r
+\r
-- name: install docker-compose\r
-  pip:\r
-    name: docker-compose\r
+    [cinder]\r
+    name = {{ cinder_name }}\r
+    description = {{ cinder_description }}\r
+    driver_name = {{ cinder_driver_name }}\r
+    config_path = {{ cinder_config_path }}\r
+    OPENSDS_GLOBAL_CONFIG_DOC\r
+ args:\r
+ chdir: "{{ opensds_config_dir }}"\r
+ ignore_errors: yes\r
\r
-- name: copy opensds cinder backend file if specify cinder backend\r
+- name: copy opensds cinder backend file to cinder config path when cinder backend is specified\r
copy:\r
src: ../../../group_vars/cinder/cinder.yaml\r
dest: "{{ cinder_config_path }}"\r
local vg_dev\r
vg_dev=`sudo losetup -f --show $backing_file`\r
\r
+ # Only create physical volume if it doesn't already exist\r
+ if ! sudo pvs $vg_dev; then\r
+ sudo pvcreate $vg_dev\r
+ fi\r
+\r
# Only create volume group if it doesn't already exist\r
if ! sudo vgs $vg; then\r
sudo vgcreate $vg $vg_dev\r
sed -i "s/image: debian-cinder/image: {{ cinder_image_tag }}/g" docker-compose.yml\r
sed -i "s/image: lvm-debian-cinder/image: lvm-{{ cinder_image_tag }}/g" docker-compose.yml\r
\r
+ sed -i "s/3306:3306/3307:3306/g" docker-compose.yml\r
+\r
sed -i "s/volume_group = cinder-volumes /volume_group = {{ cinder_volume_group }}/g" etc/cinder.conf\r
become: true\r
args:\r
wait_for:\r
host: 127.0.0.1\r
port: 8776\r
- delay: 2\r
+ delay: 15\r
timeout: 120\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
-- name: install lvm2 external package when lvm backend enabled\r
+- name: install lvm2, tgt and thin-provisioning-tools external packages when lvm backend enabled\r
apt:\r
- name: lvm2\r
+ name: "{{ item }}"\r
+ state: present\r
+ with_items:\r
+ - lvm2\r
+ - tgt\r
+ - thin-provisioning-tools\r
+\r
+- name: configure lvm section in opensds global info when lvm backend is specified\r
+  shell: |\r
+    cat >> opensds.conf <<OPENSDS_GLOBAL_CONFIG_DOC\r
+\r
+    [lvm]\r
+    name = {{ lvm_name }}\r
+    description = {{ lvm_description }}\r
+    driver_name = {{ lvm_driver_name }}\r
+    config_path = {{ lvm_config_path }}\r
+    OPENSDS_GLOBAL_CONFIG_DOC\r
+ args:\r
+ chdir: "{{ opensds_config_dir }}"\r
+ ignore_errors: yes\r
\r
-- name: copy opensds lvm backend file if specify lvm backend\r
+- name: copy opensds lvm backend file to lvm config path when lvm backend is specified\r
copy:\r
src: ../../../group_vars/lvm/lvm.yaml\r
dest: "{{ lvm_config_path }}"\r
\r
-- name: check if volume group existed\r
- shell: vgdisplay {{ vg_name }}\r
- ignore_errors: yes\r
- register: vg_existed\r
+- name: create directory for volume group backing files\r
+ file:\r
+ path: "{{ opensds_work_dir }}/volumegroups"\r
+ state: directory\r
+ recurse: yes\r
+\r
+- name: create volume group in thin mode\r
+ shell:\r
+ _raw_params: |\r
+ function _create_lvm_volume_group {\r
+ local vg=$1\r
+ local size=$2\r
+\r
+ local backing_file={{ opensds_work_dir }}/volumegroups/${vg}.img\r
+ if ! sudo vgs $vg; then\r
+      # Only create the backing file if it doesn't already exist\r
+ [[ -f $backing_file ]] || truncate -s $size $backing_file\r
+ local vg_dev\r
+ vg_dev=`sudo losetup -f --show $backing_file`\r
+\r
+ # Only create physical volume if it doesn't already exist\r
+ if ! sudo pvs $vg_dev; then\r
+ sudo pvcreate $vg_dev\r
+ fi\r
\r
-- name: create a volume group and initialize it\r
- lvg:\r
- vg: "{{ vg_name }}"\r
- pvs: "{{ pv_devices }}"\r
- when: vg_existed is undefined or vg_existed.rc != 0\r
+ # Only create volume group if it doesn't already exist\r
+ if ! sudo vgs $vg; then\r
+ sudo vgcreate $vg $vg_dev\r
+ fi\r
+ fi\r
+ }\r
+ modprobe dm_thin_pool\r
+ _create_lvm_volume_group {{ opensds_volume_group }} 10G\r
+ args:\r
+ executable: /bin/bash\r
+ become: true\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
- name: include scenarios/lvm.yml\r
include: scenarios/lvm.yml\r
ps aux | grep osdsdock | grep -v grep && break\r
done\r
args:\r
- chdir: "{{ opensds_dir }}"\r
+ chdir: "{{ opensds_work_dir }}"\r
when: container_enabled == false\r
\r
- name: run osdsdock containerized service\r
- docker:\r
+ docker_container:\r
name: osdsdock\r
image: opensdsio/opensds-dock:latest\r
state: started\r
- net: host\r
+ network_mode: host\r
privileged: true\r
volumes:\r
- "/etc/opensds/:/etc/opensds"\r
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
- name: run osdslet daemon service\r
shell:\r
ps aux | grep osdslet | grep -v grep && break\r
done\r
args:\r
- chdir: "{{ opensds_dir }}"\r
+ chdir: "{{ opensds_work_dir }}"\r
when: container_enabled == false\r
\r
- name: run osdslet containerized service\r
- docker:\r
+ docker_container:\r
name: osdslet\r
image: opensdsio/opensds-controller:latest\r
state: started\r
- net: host\r
+ network_mode: host\r
volumes:\r
- "/etc/opensds/:/etc/opensds"\r
when: container_enabled == true\r
--- /dev/null
+#!/usr/bin/env bash
+
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+ansiblever=$(ansible --version |grep -Eow '^ansible [^ ]+' |gawk '{ print $2 }')
+echo "The actual version of ansible is $ansiblever"
+
+if [[ "$ansiblever" < '2.4.2' ]]; then
+ echo "Ansible version 2.4.2 or higher is required"
+ exit 1
+fi
+
+exit 0
+
--- /dev/null
+#!/usr/bin/env bash
+
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The 'stack' user is only used to install keystone through devstack
+
+create_user(){
+ if id "${STACK_USER_NAME}" &> /dev/null; then
+ return
+ fi
+ sudo useradd -s /bin/bash -d "${STACK_HOME}" -m "${STACK_USER_NAME}"
+ echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
+}
+
+
+remove_user(){
+ userdel "${STACK_USER_NAME}" -f -r
+ rm /etc/sudoers.d/stack
+}
+
+devstack_local_conf(){
+DEV_STACK_LOCAL_CONF=${DEV_STACK_DIR}/local.conf
+cat > "$DEV_STACK_LOCAL_CONF" << DEV_STACK_LOCAL_CONF_DOCK
+[[local|localrc]]
+# use TryStack git mirror
+GIT_BASE=$STACK_GIT_BASE
+
+# If the "*_PASSWORD" variables are not set here you will be prompted to enter
+# values for them by "stack.sh" and they will be added to "local.conf".
+ADMIN_PASSWORD=$STACK_PASSWORD
+DATABASE_PASSWORD=$STACK_PASSWORD
+RABBIT_PASSWORD=$STACK_PASSWORD
+SERVICE_PASSWORD=$STACK_PASSWORD
+
+# Neither is set by default.
+HOST_IP=$HOST_IP
+
+# path of the destination log file. A timestamp will be appended to the given name.
+LOGFILE=\$DEST/logs/stack.sh.log
+
+# Old log files are automatically removed after "LOGDAYS" days to keep things
+# neat (set to 2 days here).
+LOGDAYS=2
+
+ENABLED_SERVICES=mysql,key
+# Using stable/queens branches
+# ---------------------------------
+KEYSTONE_BRANCH=$STACK_BRANCH
+KEYSTONECLIENT_BRANCH=$STACK_BRANCH
+DEV_STACK_LOCAL_CONF_DOCK
+chown stack:stack "$DEV_STACK_LOCAL_CONF"
+}
+
+opensds_conf() {
+cat >> "$OPENSDS_CONFIG_DIR/opensds.conf" << OPENSDS_GLOBAL_CONFIG_DOC
+
+
+[keystone_authtoken]
+memcached_servers = $HOST_IP:11211
+signing_dir = /var/cache/opensds
+cafile = /opt/stack/data/ca-bundle.pem
+auth_uri = http://$HOST_IP/identity
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = $STACK_PASSWORD
+username = $OPENSDS_SERVER_NAME
+auth_url = http://$HOST_IP/identity
+auth_type = password
+
+OPENSDS_GLOBAL_CONFIG_DOC
+
+cp "$OPENSDS_DIR/examples/policy.json" "$OPENSDS_CONFIG_DIR"
+}
+
+create_user_and_endpoint(){
+ . "$DEV_STACK_DIR/openrc" admin admin
+ openstack user create --domain default --password "$STACK_PASSWORD" "$OPENSDS_SERVER_NAME"
+    openstack role add --project service --user "$OPENSDS_SERVER_NAME" admin
+    openstack group create service
+    openstack group add user service "$OPENSDS_SERVER_NAME"
+ openstack role add service --project service --group service
+ openstack group add user admins admin
+ openstack service create --name "opensds$OPENSDS_VERSION" --description "OpenSDS Block Storage" "opensds$OPENSDS_VERSION"
+ openstack endpoint create --region RegionOne "opensds$OPENSDS_VERSION" public "http://$HOST_IP:50040/$OPENSDS_VERSION/%\(tenant_id\)s"
+ openstack endpoint create --region RegionOne "opensds$OPENSDS_VERSION" internal "http://$HOST_IP:50040/$OPENSDS_VERSION/%\(tenant_id\)s"
+ openstack endpoint create --region RegionOne "opensds$OPENSDS_VERSION" admin "http://$HOST_IP:50040/$OPENSDS_VERSION/%\(tenant_id\)s"
+}
+
+delete_redundancy_data() {
+ . "$DEV_STACK_DIR/openrc" admin admin
+ openstack project delete demo
+ openstack project delete alt_demo
+ openstack project delete invisible_to_admin
+ openstack user delete demo
+ openstack user delete alt_demo
+}
+
+download_code(){
+ if [ ! -d "${DEV_STACK_DIR}" ];then
+ git clone "${STACK_GIT_BASE}/openstack-dev/devstack.git" -b "${STACK_BRANCH}" "${DEV_STACK_DIR}"
+ chown stack:stack -R "${DEV_STACK_DIR}"
+ fi
+}
+
+install(){
+ create_user
+ download_code
+ opensds_conf
+
+    # If keystone is already up and answering, skip the devstack setup steps below.
+ if wait_for_url "http://$HOST_IP/identity" "keystone" 0.25 4; then
+ return
+ fi
+ devstack_local_conf
+ cd "${DEV_STACK_DIR}"
+ su "$STACK_USER_NAME" -c "${DEV_STACK_DIR}/stack.sh" >/dev/null
+ create_user_and_endpoint
+ delete_redundancy_data
+}
+
+cleanup() {
+ su "$STACK_USER_NAME" -c "${DEV_STACK_DIR}/clean.sh" >/dev/null
+}
+
+uninstall(){
+ su "$STACK_USER_NAME" -c "${DEV_STACK_DIR}/unstack.sh" >/dev/null
+}
+
+uninstall_purge(){
+    rm -rf "${STACK_HOME:?'STACK_HOME must be defined and cannot be empty'}"/*
+ remove_user
+}
+
+# ***************************
+TOP_DIR=$(cd "$(dirname "$0")" && pwd)
+
+# OpenSDS configuration directory
+OPENSDS_CONFIG_DIR=${OPENSDS_CONFIG_DIR:-/etc/opensds}
+
+source "$TOP_DIR/util.sh"
+source "$TOP_DIR/sdsrc"
+
+case "$# $1" in
+ "1 install")
+        echo "Starting keystone installation..."
+ install
+ ;;
+ "1 uninstall")
+        echo "Starting keystone uninstallation..."
+ uninstall
+ ;;
+ "1 cleanup")
+        echo "Starting keystone cleanup..."
+ cleanup
+ ;;
+ "1 uninstall_purge")
+        echo "Starting keystone uninstall purge..."
+ uninstall_purge
+ ;;
+ *)
+ echo "The value of the parameter can only be one of the following: install/uninstall/cleanup/uninstall_purge"
+ exit 1
+ ;;
+esac
+
--- /dev/null
+#!/usr/bin/env bash
+
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Global
+HOST_IP=${HOST_IP:-}
+HOST_IP=$(get_default_host_ip "$HOST_IP" "inet")
+if [ "$HOST_IP" == "" ]; then
+ die $LINENO "Could not determine host ip address. See local.conf for suggestions on setting HOST_IP."
+fi
+
+# OpenSDS configuration.
+OPENSDS_VERSION=${OPENSDS_VERSION:-v1beta}
+
+# OpenSDS service name in keystone.
+OPENSDS_SERVER_NAME=${OPENSDS_SERVER_NAME:-opensds}
+
+# devstack keystone configuration
+STACK_GIT_BASE=${STACK_GIT_BASE:-https://git.openstack.org}
+STACK_USER_NAME=${STACK_USER_NAME:-stack}
+STACK_PASSWORD=${STACK_PASSWORD:-opensds@123}
+STACK_HOME=${STACK_HOME:-/opt/stack}
+STACK_BRANCH=${STACK_BRANCH:-stable/queens}
+DEV_STACK_DIR=$STACK_HOME/devstack
+
+GOPATH=${GOPATH:-$HOME/gopath}
+OPENSDS_DIR=${GOPATH}/src/github.com/opensds/opensds
+
--- /dev/null
+#!/usr/bin/env bash
+
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+TOP_DIR=$(cd "$(dirname "$0")" && pwd)
+source "$TOP_DIR/util.sh"
+source "$TOP_DIR/sdsrc"
+
+cat > /etc/nginx/sites-available/default <<EOF
+ server {
+ listen 8088 default_server;
+ listen [::]:8088 default_server;
+ root /var/www/html;
+ index index.html index.htm index.nginx-debian.html;
+ server_name _;
+ location /v3/ {
+ proxy_pass http://$HOST_IP/identity/v3/;
+ }
+ location /v1beta/ {
+ proxy_pass http://$HOST_IP:50040/$OPENSDS_VERSION/;
+ }
+ }
+EOF
+
+
--- /dev/null
+#!/bin/bash
+
+# Copyright (c) 2017 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Echo text to the log file, summary log file and stdout
+# echo_summary "something to say"
+function echo_summary {
+ echo -e "$@"
+}
+
+wait_for_url() {
+ local url=$1
+ local prefix=${2:-}
+ local wait=${3:-1}
+ local times=${4:-30}
+
+ which curl >/dev/null || {
+ echo_summary "curl must be installed"
+ exit 1
+ }
+
+ local i
+ for i in $(seq 1 "$times"); do
+ local out
+ if out=$(curl --max-time 1 -gkfs "$url" 2>/dev/null); then
+ echo_summary "On try ${i}, ${prefix}: ${out}"
+ return 0
+ fi
+ sleep "${wait}"
+ done
+ echo_summary "Timed out waiting for ${prefix} to answer at ${url}; tried ${times} waiting ${wait} between each"
+ return 1
+}
+
+# Prints line number and "message" in error format
+# err $LINENO "message"
+err() {
+ local exitcode=$?
+ local xtrace
+ xtrace=$(set +o | grep xtrace)
+ set +o xtrace
+ local msg="[ERROR] ${BASH_SOURCE[2]}:$1 $2"
+ echo "$msg"
+ $xtrace
+ return $exitcode
+}
+
+# Prints line number and "message" then exits
+# die $LINENO "message"
+die() {
+ local exitcode=$?
+ set +o xtrace
+ local line=$1; shift
+ if [ $exitcode == 0 ]; then
+ exitcode=1
+ fi
+ err "$line" "$*"
+ # Give buffers a second to flush
+ sleep 1
+ exit $exitcode
+}
+
+get_default_host_ip() {
+ local host_ip=$1
+ local af=$2
+ # Search for an IP unless an explicit is set by ``HOST_IP`` environment variable
+ if [ -z "$host_ip" ]; then
+ host_ip=""
+ # Find the interface used for the default route
+ host_ip_iface=${host_ip_iface:-$(ip -f "$af" route | awk '/default/ {print $5}' | head -1)}
+ local host_ips
+ host_ips=$(LC_ALL=C ip -f "$af" addr show "${host_ip_iface}" | sed /temporary/d |awk /$af'/ {split($2,parts,"/"); print parts[1]}')
+ local ip
+ for ip in $host_ips; do
+ host_ip=$ip
+ break;
+ done
+ fi
+ echo "$host_ip"
+}
+
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.\r
+#\r
+# Licensed under the Apache License, Version 2.0 (the "License");\r
+# you may not use this file except in compliance with the License.\r
+# You may obtain a copy of the License at\r
+#\r
+# http://www.apache.org/licenses/LICENSE-2.0\r
+#\r
+# Unless required by applicable law or agreed to in writing, software\r
+# distributed under the License is distributed on an "AS IS" BASIS,\r
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r
+# See the License for the specific language governing permissions and\r
+# limitations under the License.\r
+\r
---\r
# Defines deployment design and assigns role to server groups\r
\r
remote_user: root\r
vars_files:\r
- group_vars/common.yml\r
+ - group_vars/auth.yml\r
- group_vars/osdsdb.yml\r
- group_vars/osdslet.yml\r
- group_vars/osdsdock.yml\r
+ - group_vars/dashboard.yml\r
gather_facts: false\r
become: True\r
roles:\r
- common\r
+ - osdsauth\r
- osdsdb\r
- osdslet\r
- osdsdock\r
- - nbp-installer\r
+ - dashboard-installer\r
--- /dev/null
+{
+ "admin_or_owner": "is_admin:True or (role:admin and is_admin_project:True) or tenant_id:%(tenant_id)s",
+ "default": "rule:admin_or_owner",
+ "admin_api": "is_admin:True or (role:admin and is_admin_project:True)",
+
+
+    "profile:create": "rule:admin_api",
+    "profile:list": "",
+    "profile:get": "",
+    "profile:update": "rule:admin_api",
+    "profile:delete": "rule:admin_api",
+ "profile:add_extra_property": "rule:admin_api",
+ "profile:list_extra_properties": "",
+ "profile:remove_extra_property": "rule:admin_api",
+ "volume:create": "rule:admin_or_owner",
+ "volume:list": "rule:admin_or_owner",
+ "volume:get": "rule:admin_or_owner",
+ "volume:update": "rule:admin_or_owner",
+ "volume:extend": "rule:admin_or_owner",
+ "volume:delete": "rule:admin_or_owner",
+ "volume:create_attachment": "rule:admin_or_owner",
+ "volume:list_attachments": "rule:admin_or_owner",
+ "volume:get_attachment": "rule:admin_or_owner",
+ "volume:update_attachment": "rule:admin_or_owner",
+ "volume:delete_attachment": "rule:admin_or_owner",
+ "snapshot:create": "rule:admin_or_owner",
+ "snapshot:list": "rule:admin_or_owner",
+ "snapshot:get": "rule:admin_or_owner",
+ "snapshot:update": "rule:admin_or_owner",
+ "snapshot:delete": "rule:admin_or_owner",
+ "dock:list": "rule:admin_api",
+ "dock:get": "rule:admin_api",
+ "pool:list": "rule:admin_api",
+ "pool:get": "rule:admin_api",
+ "replication:create": "rule:admin_or_owner",
+ "replication:list": "rule:admin_or_owner",
+ "replication:list_detail": "rule:admin_or_owner",
+ "replication:get": "rule:admin_or_owner",
+ "replication:update": "rule:admin_or_owner",
+ "replication:delete": "rule:admin_or_owner",
+ "replication:action:enable": "rule:admin_or_owner",
+ "replication:action:disable": "rule:admin_or_owner",
+ "replication:action:failover": "rule:admin_or_owner",
+ "volume_group:create": "rule:admin_or_owner",
+ "volume_group:list": "rule:admin_or_owner",
+ "volume_group:get": "rule:admin_or_owner",
+ "volume_group:update": "rule:admin_or_owner",
+ "volume_group:delete": "rule:admin_or_owner"
+}
\ No newline at end of file
```\r
\r
### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster\r
-* You can startup the v1.9.0 k8s local cluster by executing commands blow:\r
+* You can start up a `v1.10.0` k8s local cluster by executing the commands below:\r
\r
```\r
cd $HOME\r
git clone https://github.com/kubernetes/kubernetes.git\r
cd $HOME/kubernetes\r
- git checkout v1.9.0\r
+ git checkout v1.10.0\r
make\r
echo alias kubectl='$HOME/kubernetes/cluster/kubectl.sh' >> /etc/profile\r
ALLOW_PRIVILEGED=true FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true" LOG_LEVEL=5 hack/local-up-cluster.sh\r
```\r
\r
### [opensds](https://github.com/opensds/opensds) local cluster\r
-* For testing purposes you can deploy OpenSDS refering to ```ansible/README.md```.\r
+* For testing purposes you can deploy OpenSDS by referring to [OpenSDS Cluster Installation through Ansible](https://github.com/opensds/opensds/wiki/OpenSDS-Cluster-Installation-through-Ansible).\r
\r
## Testing steps ##\r
\r
* Change the workplace\r
\r
```\r
- cd /opt/opensds-k8s-v0.1.0-linux-amd64\r
+ cd /opt/opensds-k8s-linux-amd64\r
```\r
\r
-* Configure opensds endpoint IP\r
-\r
- ```\r
- vim csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml\r
- ```\r
-\r
- The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP.\r
- ```yaml\r
- kind: ConfigMap\r
- apiVersion: v1\r
- metadata:\r
- name: csi-configmap-opensdsplugin\r
- data:\r
- opensdsendpoint: http://127.0.0.1:50040\r
- ```\r
-\r
-* Create opensds CSI pods.\r
-\r
- ```\r
- kubectl create -f csi/deploy/kubernetes\r
- ```\r
-\r
- After this three pods can be found by ```kubectl get pods``` like below:\r
-\r
- - csi-provisioner-opensdsplugin\r
- - csi-attacher-opensdsplugin\r
- - csi-nodeplugin-opensdsplugin\r
-\r
- You can find more design details from\r
- [CSI Volume Plugins in Kubernetes Design Doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)\r
-\r
* Create example nginx application\r
\r
```\r
## Prerequisite ##\r
-\r
### ubuntu\r
* Version information\r
\r
- ```\r
+ ```bash\r
root@proxy:~# cat /etc/issue\r
Ubuntu 16.04.2 LTS \n \l\r
```\r
-\r
### docker\r
* Version information\r
\r
- ```\r
+ ```bash\r
root@proxy:~# docker version\r
Client:\r
Version: 1.12.6\r
Git commit: 78d1802\r
Built: Tue Jan 31 23:35:14 2017\r
OS/Arch: linux/amd64\r
-\r
+ \r
Server:\r
Version: 1.12.6\r
API version: 1.24\r
OS/Arch: linux/amd64\r
```\r
\r
-### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster\r
+### [golang](https://redirector.gvt1.com/edgedl/go/go1.9.2.linux-amd64.tar.gz) \r
* Version information\r
+\r
+ ```bash\r
+ root@proxy:~# go version\r
+ go version go1.9.2 linux/amd64\r
```\r
+\r
+* You can install golang by executing the commands below:\r
+\r
+ ```bash\r
+ wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz\r
+ tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz\r
+ export PATH=$PATH:/usr/local/go/bin\r
+ export GOPATH=$HOME/gopath\r
+ ```\r
+\r
+### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster\r
+* Version information\r
+ ```bash\r
root@proxy:~# kubectl version\r
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-beta.0-dirty", GitCommit:"a0fb3baa71f1559fd42d1acd9cbdd8a55ab4dfff", GitTreeState:"dirty", BuildDate:"2017-12-13T09:22:09Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}\r
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-beta.0-dirty", GitCommit:"a0fb3baa71f1559fd42d1acd9cbdd8a55ab4dfff", GitTreeState:"dirty", BuildDate:"2017-12-13T09:22:09Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}\r
```\r
* You can start up the k8s local cluster by executing the commands below:\r
\r
- ```\r
+ ```bash\r
cd $HOME\r
git clone https://github.com/kubernetes/kubernetes.git\r
cd $HOME/kubernetes\r
echo alias kubectl='$HOME/kubernetes/cluster/kubectl.sh' >> /etc/profile\r
RUNTIME_CONFIG=settings.k8s.io/v1alpha1=true AUTHORIZATION_MODE=Node,RBAC hack/local-up-cluster.sh -O\r
```\r
-\r
+**NOTE**:\r
+<div> OpenSDS uses etcd as its database, the same as kubernetes, so you should start up kubernetes first.\r
+</div>\r
\r
### [opensds](https://github.com/opensds/opensds) local cluster\r
-* For testing purposes you can deploy OpenSDS local cluster referring to ```ansible/README.md```.\r
+* For testing purposes you can deploy OpenSDS by referring to the [Local Cluster Installation with LVM](https://github.com/opensds/opensds/wiki/Local-Cluster-Installation-with-LVM) wiki.\r
\r
## Testing steps ##\r
+* Load the environment variables that were set earlier.\r
\r
-* Create service account, role and bind them.\r
+ ```bash\r
+ source /etc/profile\r
+ ```\r
+* Download nbp source code.\r
+\r
+ Using `git clone`:\r
+ ```bash\r
+ git clone https://github.com/opensds/nbp.git $GOPATH/src/github.com/opensds/nbp\r
+ ```\r
+ \r
+ Or using `go get`:\r
+ ```bash\r
+ go get -v github.com/opensds/nbp/...\r
+ ``` \r
+\r
+* Build the FlexVolume.\r
+\r
+ ```bash\r
+ cd $GOPATH/src/github.com/opensds/nbp/flexvolume\r
+ go build -o opensds ./cmd/flex-plugin/\r
```\r
- cd /opt/opensds-k8s-{release version}-linux-amd64/provisioner\r
+ \r
+ The FlexVolume plugin binary is placed in the current directory.\r
+\r
+\r
+* Copy the OpenSDS FlexVolume binary to the k8s kubelet `volume-plugin-dir`.\r
+ If you don't specify the `volume-plugin-dir`, you can execute the commands below:\r
+\r
+ ```bash\r
+ mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds/\r
+ cp $GOPATH/src/github.com/opensds/nbp/flexvolume/opensds /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds/\r
+ ``` \r
+ \r
+ **NOTE**:\r
+ <div>\r
+ OpenSDS FlexVolume reads the opensds api endpoint from the environment variable `OPENSDS_ENDPOINT`. If it is not set, the FlexVolume uses the default value `http://127.0.0.1:50040`. To set it, execute `export OPENSDS_ENDPOINT=http://ip:50040` and restart the k8s local cluster.\r
+</div>\r
+\r
+* Build the provisioner docker image.\r
+\r
+ ```bash\r
+ cd $GOPATH/src/github.com/opensds/nbp/opensds-provisioner\r
+ make container\r
+ ```\r
+\r
+* Create service account, role and bind them.\r
+ ```bash\r
+ cd $GOPATH/src/github.com/opensds/nbp/opensds-provisioner/examples\r
kubectl create -f serviceaccount.yaml\r
kubectl create -f clusterrole.yaml\r
kubectl create -f clusterrolebinding.yaml\r
```\r
\r
-* Change the opensds endpoint IP in pod-provisioner.yaml\r
-The IP ```192.168.56.106``` should be replaced with the OpenSDS osdslet actual endpoint IP.\r
+* Change the opensds endpoint IP in pod-provisioner.yaml \r
+The IP (192.168.56.106) should be replaced with the actual OpenSDS osdslet endpoint IP.\r
```yaml\r
kind: Pod\r
apiVersion: v1\r
serviceAccount: opensds-provisioner\r
containers:\r
- name: opensds-provisioner\r
- image: opensdsio/opensds-provisioner:latest\r
+ image: opensdsio/opensds-provisioner\r
securityContext:\r
args:\r
- "-endpoint=http://192.168.56.106:50040" # should be replaced\r
```\r
\r
* Create provisioner pod.\r
- ```\r
+ ```bash\r
kubectl create -f pod-provisioner.yaml\r
```\r
-\r
+ \r
+ Execute `kubectl get pod` to check if the opensds-provisioner is ok.\r
+ ```bash\r
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get pod\r
+ NAME READY STATUS RESTARTS AGE\r
+ opensds-provisioner 1/1 Running 0 42m\r
+ ```\r
* You can use the following commands to test the OpenSDS FlexVolume and Provisioner functions.\r
\r
- ```\r
+ Create storage class.\r
+ ```bash\r
kubectl create -f sc.yaml # Create StorageClass\r
+ ```\r
+ Execute `kubectl get sc` to check if the storage class is ok. \r
+ ```bash\r
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get sc\r
+ NAME PROVISIONER AGE\r
+ opensds opensds/nbp-provisioner 46m\r
+ standard (default) kubernetes.io/host-path 49m\r
+ ```\r
+ Create PVC.\r
+ ```bash\r
kubectl create -f pvc.yaml # Create PVC\r
- kubectl create -f pod-application.yaml # Create busybox pod and mount the block storage.\r
```\r
+ Execute `kubectl get pvc` to check if the pvc is ok. \r
+ ```bash\r
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get pvc\r
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\r
+ opensds-pvc Bound 731da41e-c9ee-4180-8fb3-d1f6c7f65378 1Gi RWO opensds 48m\r
\r
+ ```\r
+ Create busybox pod.\r
+ \r
+ ```bash\r
+ kubectl create -f pod-application.yaml # Create busybox pod and mount the block storage.\r
+ ```\r
+ Execute `kubectl get pod` to check if the busybox pod is ok. \r
+ ```bash\r
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get pod\r
+ NAME READY STATUS RESTARTS AGE\r
+ busy-pod 1/1 Running 0 49m\r
+ opensds-provisioner 1/1 Running 0 50m\r
+ ```\r
Execute `findmnt | grep opensds` to confirm that the volume has been provisioned.\r
+ If something goes wrong, check the log files in the `/var/log/opensds` directory.\r
\r
## Clean up steps ##\r
\r
kubectl delete -f clusterrolebinding.yaml\r
kubectl delete -f clusterrole.yaml\r
kubectl delete -f serviceaccount.yaml\r
-```
\ No newline at end of file
+```\r
--- /dev/null
+## 1. How to install an opensds local cluster
+### Pre-config (Ubuntu 16.04)
+All the installation work has been tested on `Ubuntu 16.04`; please make sure you are running that release. Performing the installation as the `root` user is recommended.
+
+* packages
+
+Install the following packages:
+```bash
+apt-get install -y git curl wget
+```
+* docker
+
+Install docker:
+```bash
+wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+dpkg -i docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+```
+* golang
+
+Check golang version information:
+```bash
+root@proxy:~# go version
+go version go1.9.2 linux/amd64
+```
+You can install golang by executing commands below:
+```bash
+wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
+tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
+echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
+echo 'export GOPATH=$HOME/gopath' >> /etc/profile
+source /etc/profile
+```
+
+### Download opensds-installer code
+```bash
+git clone https://gerrit.opnfv.org/gerrit/stor4nfv
+cd stor4nfv/ci/ansible
+```
+
+### Install ansible tool
+To install ansible, run the commands below:
+```bash
+# This step is needed to upgrade ansible to version 2.4.2 which is required for the "include_tasks" ansible command.
+chmod +x ./install_ansible.sh && ./install_ansible.sh
+ansible --version # Ansible version 2.4.x is required.
+```
+
+### Configure opensds cluster variables:
+##### System environment:
+If you want to integrate stor4nfv with k8s csi, modify `nbp_plugin_type` to `csi` and also change the `opensds_endpoint` field in `group_vars/common.yml`:
+```yaml
+# 'hotpot_only' is the default integration way, but you can change it to 'csi'
+# or 'flexvolume'
+nbp_plugin_type: hotpot_only
+# The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP
+opensds_endpoint: http://127.0.0.1:50040
+```
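For scripted setups, the endpoint line can be patched in place with `sed`. A minimal sketch, demonstrated on a throwaway copy of the file so it is safe to run anywhere; in a real deployment point the `sed` at `group_vars/common.yml`, and note that `192.168.3.10` is only a placeholder host IP:

```shell
# Demonstrated on a temporary sample file; run the same sed against
# group_vars/common.yml in your checkout. 192.168.3.10 is a placeholder.
CONF=$(mktemp)
echo 'opensds_endpoint: http://127.0.0.1:50040' > "$CONF"
HOST_IP=192.168.3.10
sed -i "s|^opensds_endpoint:.*|opensds_endpoint: http://${HOST_IP}:50040|" "$CONF"
cat "$CONF"   # -> opensds_endpoint: http://192.168.3.10:50040
rm -f "$CONF"
```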
+
+##### LVM
+If `lvm` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: lvm
+```
+
+Modify ```group_vars/lvm/lvm.yaml```, change `tgtBindIp` to your real host ip if needed:
+```yaml
+tgtBindIp: 127.0.0.1
+```
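If you are unsure which address to use for `tgtBindIp`, one common approach (an assumption, not part of the installer) is to ask the kernel which source address it would pick for an external destination; `ip route get` only consults the routing table and sends no traffic:

```shell
# Print the IPv4 source address the kernel would use toward 8.8.8.8
# (a routing probe only -- no packet is actually sent).
ip -4 route get 8.8.8.8 2>/dev/null \
  | awk '{for (i = 1; i < NF; i++) if ($i == "src") print $(i + 1)}'
```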
+
+##### Ceph
+If `ceph` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
+```
+
+Configure ```group_vars/ceph/all.yml``` with an example below:
+```yml
+ceph_origin: repository
+ceph_repository: community
+ceph_stable_release: luminous # Choose luminous as default version
+public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
+cluster_network: "{{ public_network }}"
+monitor_interface: eth1 # Change to the network interface on the target machine
+devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:
+ - '/dev/sda' # Ensure this device exists and available if ceph is chosen
+ #- '/dev/sdb' # Ensure this device exists and available if ceph is chosen
+osd_scenario: collocated
+```
+
+##### Cinder
+If `cinder` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'
+
+# Use block-box install cinder_standalone if true, see details in:
+use_cinder_standalone: true
+```
+
+Configure the auth and pool options for accessing cinder in `group_vars/cinder/cinder.yaml`. No additional configuration changes are needed when using cinder standalone.
+
+### Check if the hosts can be reached
+```bash
+ansible all -m ping -i local.hosts
+```
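If the hosts are reachable, each one should answer with `pong`; output similar to the following is expected (the host name depends on your `local.hosts` inventory):

```
localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```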
+
+### Run opensds-ansible playbook to start the deployment
+```bash
+ansible-playbook site.yml -i local.hosts
+```
+
+## 2. How to test opensds cluster
+### OpenSDS CLI
+First, configure the opensds CLI tool:
+```bash
+sudo cp /opt/opensds-linux-amd64/bin/osdsctl /usr/local/bin/
+export OPENSDS_ENDPOINT=http://{your_real_host_ip}:50040
+export OPENSDS_AUTH_STRATEGY=keystone
+source /opt/stack/devstack/openrc admin admin
+
+osdsctl pool list # Check if the pool resource is available
+```
+
+Then create a default profile:
+```
+osdsctl profile create '{"name": "default", "description": "default policy"}'
+```
+
+Create a volume:
+```
+osdsctl volume create 1 --name=test-001
+```
+
+List all volumes:
+```
+osdsctl volume list
+```
+
+Delete the volume:
+```
+osdsctl volume delete <your_volume_id>
+```
+
+### OpenSDS UI
+The OpenSDS UI dashboard is available at `http://{your_host_ip}:8088`. Log in to the dashboard using the default admin credentials `admin/opensds@123`, and create tenants, users, and profiles as admin.
+
+Log out of the dashboard as admin, then log in again as a non-admin user to create a volume, create a snapshot, expand a volume, create a volume from a snapshot, and create a volume group.
+
+## 3. How to purge and clean opensds cluster
+
+### Run opensds-ansible playbook to clean the environment
+```bash
+ansible-playbook clean.yml -i local.hosts
+```
+
+### Run ceph-ansible playbook to clean ceph cluster if ceph is deployed
+```bash
+cd /opt/ceph-ansible
+sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts
+```
+
+In addition, clean up the logical partitions on the physical block devices used by ceph, using the ```fdisk``` tool.
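As a non-interactive alternative to `fdisk`, the partition tables and ceph signatures can be wiped with `wipefs` and `sgdisk`. A sketch only: `/dev/sdb` is an assumed device name, and the `echo` wrappers keep this a dry run until you have verified the device against the `devices` list in `group_vars/ceph/all.yml`:

```shell
# CAUTION: destructive once the 'echo' wrappers are removed.
# /dev/sdb is an assumed device name -- substitute the entries from the
# 'devices' list in group_vars/ceph/all.yml.
DEV=/dev/sdb
echo "wipefs -a $DEV"          # would clear filesystem/LVM/ceph signatures
echo "sgdisk --zap-all $DEV"   # would wipe the GPT and MBR partition tables
```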
+
+### Remove ceph-ansible source code (optional)
+```bash
+sudo rm -rf /opt/ceph-ansible
+```
--- /dev/null
+# OpenSDS Integration with OpenStack on Ubuntu
+
+All the installation work has been tested on `Ubuntu 16.04`; please make sure
+you are running that release.
+
+## Environment Prepare
+
+* OpenStack (assumed to be already deployed)
+```shell
+openstack endpoint list # Note the endpoint of the stopped cinder service
+```
+
+* packages
+
+Install the following packages:
+```bash
+apt-get install -y git curl wget
+```
+* docker
+
+Install docker:
+```bash
+wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+dpkg -i docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+```
+* golang
+
+Check golang version information:
+```bash
+root@proxy:~# go version
+go version go1.9.2 linux/amd64
+```
+You can install golang by executing commands below:
+```bash
+wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
+tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
+echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
+echo 'export GOPATH=$HOME/gopath' >> /etc/profile
+source /etc/profile
+```
+
+## Start deployment
+### Download opensds-installer code
+```bash
+git clone https://gerrit.opnfv.org/gerrit/stor4nfv
+cd stor4nfv/ci/ansible
+```
+
+### Install ansible tool
+To install ansible, run the commands below:
+```bash
+# This step is needed to upgrade ansible to version 2.4.2 which is required for the "include_tasks" ansible command.
+chmod +x ./install_ansible.sh && ./install_ansible.sh
+ansible --version # Ansible version 2.4.x is required.
+```
+
+### Configure opensds cluster variables:
+##### System environment:
+Change the `opensds_endpoint` field in `group_vars/common.yml`:
+```yaml
+# The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP
+opensds_endpoint: http://127.0.0.1:50040
+```
+
+Change `opensds_auth_strategy` field to `noauth` in `group_vars/auth.yml`:
+```yaml
+# OpenSDS authentication strategy, support 'noauth' and 'keystone'.
+opensds_auth_strategy: noauth
+```
+
+##### Ceph
+If `ceph` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
+```
+
+Configure ```group_vars/ceph/all.yml``` with an example below:
+```yml
+ceph_origin: repository
+ceph_repository: community
+ceph_stable_release: luminous # Choose luminous as default version
+public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
+cluster_network: "{{ public_network }}"
+monitor_interface: eth1 # Change to the network interface on the target machine
+devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:
+ - '/dev/sda' # Ensure this device exists and available if ceph is chosen
+ #- '/dev/sdb' # Ensure this device exists and available if ceph is chosen
+osd_scenario: collocated
+```
+
+### Check if the hosts can be reached
+```bash
+ansible all -m ping -i local.hosts
+```
+
+### Run opensds-ansible playbook to start the deployment
+```bash
+ansible-playbook site.yml -i local.hosts
+```
+
+Next, build and run the cindercompatibleapi module:
+```shell
+cd $GOPATH/src/github.com/opensds/opensds
+go build -o ./build/out/bin/cindercompatibleapi github.com/opensds/opensds/contrib/cindercompatibleapi
+```
+
+## Test
+```shell
+export CINDER_ENDPOINT=http://10.10.3.173:8776/v3 # Use endpoint shown above
+export OPENSDS_ENDPOINT=http://127.0.0.1:50040
+
+./build/out/bin/cindercompatibleapi
+```
+
+Then you can execute some cinder CLI commands to check that the result is
+correct; for example, `cinder type-list` will list the profiles of opensds.
+
+For detailed test instructions, please refer to section 5.3 of
+[OpenSDS Aruba PoC Plan](https://github.com/opensds/opensds/blob/development/docs/test-plans/OpenSDS_Aruba_POC_Plan.pdf).