This is an installation tool for opensds using ansible.
\r
## 1. How to install an opensds local cluster
\r
### Pre-config (Ubuntu 16.04)
\r
First install some system packages:
\r
```bash
sudo apt-get install -y openssh-server git make gcc
```
\r
Then configure the ```/etc/ssh/sshd_config``` file, changing one line so that root login over ssh is allowed on the target machines:
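
```bash
PermitRootLogin yes
```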
\r
Next generate an ssh key pair and copy the public key to the target machine:

```bash
ssh-keygen -t rsa # Accept the defaults at the prompts
ssh-copy-id -i ~/.ssh/id_rsa.pub <ip_address> # IP address of the target machine of the installation
```
\r
If a standalone cinder is used as the backend, you also need to install docker to run the cinder service. Please see the [docker installation document](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/) for details.
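
As a shortcut to the manual steps in that document, docker's convenience script can be used instead; this is a sketch of that route, so review the downloaded script before running it:

```bash
# Fetch and run docker's convenience install script (installs the latest docker-ce)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
docker --version # Verify the installation
```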
\r
### Install ansible tool
\r
To install ansible, you can run `install_ansible.sh` directly or enter the commands below:
\r
```bash
sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2, which is required for the ceph backend.
sudo apt-get update
sudo apt-get install ansible
ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.
```
\r
### Configure opensds cluster variables:
\r
##### System environment:
\r
Configure these variables below in `group_vars/common.yml`:
\r
```yaml
opensds_release: v0.1.4 # The version should be at least v0.1.4.
nbp_release: v0.1.0 # The version should be at least v0.1.0.

container_enabled: <false_or_true>
```
\r
If you want to integrate OpenSDS with a cloud platform (for example, k8s), please modify the `nbp_plugin_type` variable in `group_vars/common.yml`:
\r
```yaml
nbp_plugin_type: standalone # standalone is the default integration way, but you can change it to 'csi' or 'flexvolume'
```
\r
#### Database configuration
\r
Currently OpenSDS uses `etcd` as its database backend, and the default db endpoint is `localhost:2379,localhost:2380`. To avoid conflicts with an existing environment (for example, a local k8s cluster, which also uses etcd on those default ports), we suggest changing the ports of the etcd cluster in `group_vars/osdsdb.yml`:
\r
```yaml
db_endpoint: localhost:62379,localhost:62380

etcd_host: 127.0.0.1
etcd_port: 62379 # Client port, matching the first endpoint in db_endpoint above
etcd_peer_port: 62380
```
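
As a quick sanity check, you can confirm that nothing else is already listening on the chosen ports (62379/62380 follow the example above; `ss` ships with Ubuntu 16.04):

```bash
# List listening TCP sockets and filter for the etcd ports configured above;
# no output means the ports are free
ss -ltn | grep -E ':(62379|62380)'
```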
\r
If `lvm` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
\r
```yaml
enabled_backend: lvm # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
pv_devices: # Specify block devices and ensure they exist if you choose lvm
  - /dev/sdc # Example entries; replace with devices that actually exist on the target machine
  - /dev/sdd
vg_name: "specified_vg_name" # Specify a name for the VG if choosing lvm
```
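
Before running the playbook, it is worth verifying that the devices named in `pv_devices` actually exist (the device names here are the example ones from above):

```bash
# lsblk prints each device's size and type, and errors out for any device that does not exist
lsblk /dev/sdc /dev/sdd
```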
\r
Modify ```group_vars/lvm/lvm.yaml```, changing the pool name to be the same as `vg_name` above:
\r
68 "vg001" # change pool name to be the same as vg_name
\r
If `ceph` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
\r
```yaml
enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
ceph_pools: # Specify pool names if choosing ceph; the names can be arbitrary
  - rbd # Example entry, matching the pool name used in ceph.yaml below
```
\r
Modify ```group_vars/ceph/ceph.yaml```, changing the pool name to be the same as the ceph pool configured above. If you enable multiple pools, append each additional pool in the same format:
\r
81 "rbd" # change pool name to be the same as ceph pool
\r
Configure two files under ```group_vars/ceph```: `all.yml` and `osds.yml`. Here is an example:
\r
```group_vars/ceph/all.yml```:

```yaml
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous # Choose luminous as the default version
public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
cluster_network: "{{ public_network }}"
monitor_interface: eth1 # Change to the network interface on the target machine
```
\r
```group_vars/ceph/osds.yml```:

```yaml
devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:
  - '/dev/sda' # Ensure this device exists and is available if ceph is chosen
  - '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
osd_scenario: collocated
```
\r
If `cinder` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
\r
```yaml
enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.

# Use block-box to install cinder standalone if true, see details in:
use_cinder_standalone: true
# If true, you can configure cinder_container_platform, cinder_image_tag,
# cinder_volume_group.

# Default: debian:stretch; ubuntu:xenial and centos:7 are also supported.
cinder_container_platform: debian:stretch
# The image tag can be modified arbitrarily, as long as it follows the image
# naming conventions; default: debian-cinder.
cinder_image_tag: debian-cinder
# Cinder standalone uses the lvm driver as its default driver, so `volume_group`
# should be configured; the default is cinder-volumes. The volume group will be
# removed when the ansible clean script runs.
cinder_volume_group: cinder-volumes
```
\r
Configure the auth and pool options to access cinder in `group_vars/cinder/cinder.yaml`. No additional configuration changes are needed if using cinder standalone.
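
Since cinder standalone runs inside docker (see the pre-config section), one quick way to check it after deployment is to list the running containers; the exact container names depend on the block-box setup:

```bash
sudo docker ps # The cinder services should appear as running containers
```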
\r
### Check if the hosts can be reached
\r
```bash
sudo ansible all -m ping -i local.hosts
```
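
Each host in `local.hosts` should answer with a pong; the output looks roughly like this (exact formatting varies across ansible versions):

```bash
localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```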
\r
### Run opensds-ansible playbook to start the deployment
\r
```bash
sudo ansible-playbook site.yml -i local.hosts
```
\r
## 2. How to test opensds cluster
\r
### Configure opensds CLI tool
\r
```bash
sudo cp /opt/opensds-{opensds-release}-linux-amd64/bin/osdsctl /usr/local/bin

export OPENSDS_ENDPOINT=http://127.0.0.1:50040
export OPENSDS_AUTH_STRATEGY=noauth

osdsctl pool list # Check if the pool resource is available
```
\r
### Create a default profile first.
\r
```bash
osdsctl profile create '{"name": "default", "description": "default policy"}'
```
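
To confirm the profile was created, list the existing profiles:

```bash
osdsctl profile list # The new 'default' profile should appear here
```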
\r
### Create a volume.
\r
```bash
osdsctl volume create 1 --name=test-001
```
\r
For cinder, the availability zone (az) needs to be specified:
\r
```bash
osdsctl volume create 1 --name=test-001 --az nova
```
\r
### List all volumes.
\r
```bash
osdsctl volume list
```
\r
### Delete the volume.
\r
```bash
osdsctl volume delete <your_volume_id>
```
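
Listing the volumes again should confirm the deletion:

```bash
osdsctl volume list # test-001 should no longer appear
```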
\r
## 3. How to purge and clean opensds cluster
\r
### Run opensds-ansible playbook to clean the environment
\r
```bash
sudo ansible-playbook clean.yml -i local.hosts
```
\r
### Run ceph-ansible playbook to clean ceph cluster if ceph is deployed
\r
```bash
cd /opt/ceph-ansible
sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts
```
\r
### Remove ceph-ansible source code (optional)
\r
```bash
sudo rm -rf /opt/ceph-ansible
```
\r