X-Git-Url: https://gerrit.opnfv.org/gerrit/gitweb?a=blobdiff_plain;f=ci%2Fansible%2FREADME.md;h=8e86694b7cdefc4531a7488bcf518dfa647b6fd6;hb=6bc7e08cc5d80941c80e8d36d3a2b1373f147a05;hp=37a22f43e0b04e0581aede1ebca0ac7d2e1ff827;hpb=64df7bc3bc70d49153409436b411fb327691a4d5;p=stor4nfv.git
diff --git a/ci/ansible/README.md b/ci/ansible/README.md
index 37a22f4..8e86694 100644
--- a/ci/ansible/README.md
+++ b/ci/ansible/README.md
@@ -2,13 +2,6 @@ This is an installation tool for opensds using ansible.
 ## 1. How to install an opensds local cluster
-This installation document assumes there is a clean Ubuntu 16.04 environment. If golang is already installed in the environment, make sure the following parameters are configured in ```/etc/profile``` and run ``source /etc/profile``:
-```conf
-export GOROOT=/usr/local/go
-export GOPATH=$HOME/gopath
-export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
-```
-
 ### Pre-config (Ubuntu 16.04)
 First download some system packages:
 ```
@@ -28,6 +21,7 @@ ssh-copy-id -i ~/.ssh/id_rsa.pub # IP address of the target machine
 If a standalone cinder is used as the backend, you also need to install docker to run the cinder service. Please see the [docker installation document](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/) for details.
 ### Install ansible tool
+To install ansible, you can run `install_ansible.sh` directly or run the commands below:
 ```bash
 sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2, which is required for the ceph backend.
 sudo apt-get update
@@ -35,25 +29,38 @@ sudo apt-get install ansible
 ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.
 ```
-### Download opensds source code
-```bash
-mkdir -p $HOME/gopath/src/github.com/opensds && cd $HOME/gopath/src/github.com/opensds
-git clone https://github.com/opensds/opensds.git -b
-cd opensds/contrib/ansible
-```
-
 ### Configure opensds cluster variables:
 ##### System environment:
-Configure the ```workplace``` in `group_vars/common.yml`:
+Configure the variables below in `group_vars/common.yml`:
+```yaml
+opensds_release: v0.1.4 # The version should be at least v0.1.4.
+nbp_release: v0.1.0 # The version should be at least v0.1.0.
+
+container_enabled:
+```
+
+To integrate OpenSDS with a cloud platform (for example, Kubernetes), modify the `nbp_plugin_type` variable in `group_vars/common.yml`:
+```yaml
+nbp_plugin_type: standalone # standalone is the default; change it to 'csi' or 'flexvolume' for Kubernetes integration
+```
+
+#### Database configuration
+Currently OpenSDS uses `etcd` as its database backend, and the default db endpoint is `localhost:2379,localhost:2380`. To avoid conflicts with an existing environment (such as a local Kubernetes cluster), we suggest changing the etcd ports in `group_vars/osdsdb.yml`:
 ```yaml
-workplace: /home/your_username # Change this field according to your username. If login as root, configure this parameter to '/root'
+db_endpoint: localhost:62379,localhost:62380
+
+etcd_host: 127.0.0.1
+etcd_port: 62379
+etcd_peer_port: 62380
 ```
 ##### LVM
 If `lvm` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
 ```yaml
 enabled_backend: lvm # Change it according to the chosen backend.
Supported backends include 'lvm', 'ceph', and 'cinder'
-pv_device: "your_pv_device_path" # Specify a block device and ensure it exists if lvm is chosen
+pv_devices: # Specify block devices and ensure they exist if lvm is chosen
+  #- /dev/sdc
+  #- /dev/sdd
 vg_name: "specified_vg_name" # Specify a name for VG if choosing lvm
 ```
 Modify ```group_vars/lvm/lvm.yaml```, change pool name to be the same as `vg_name` above:
@@ -64,9 +71,12 @@ Modify ```group_vars/lvm/lvm.yaml```, change pool name to be the same as `vg_nam
 If `ceph` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
 ```yaml
 enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
-ceph_pool_name: "specified_pool_name" # Specify a name for ceph pool if choosing ceph
+ceph_pools: # Specify one or more pool names if ceph is chosen
+  - rbd
+  #- ssd
+  #- sas
 ```
-Modify ```group_vars/ceph/ceph.yaml```, change pool name to be the same as `ceph_pool_name`
+Modify ```group_vars/ceph/ceph.yaml```, change the pool name to be the same as one of `ceph_pools`.
If you enable multiple pools, append each pool in the same format:
 ```yaml
 "rbd" # change the pool name to be the same as the ceph pool
 ```
@@ -83,7 +93,7 @@ monitor_interface: eth1 # Change to the network interface on the target machine
 ```
 ```group_vars/ceph/osds.yml```:
 ```yml
-devices: # For ceph devices, append one or multiple devices like the example below:
+devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:
 - '/dev/sda' # Ensure this device exists and is available if ceph is chosen
 - '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
 osd_scenario: collocated
@@ -126,8 +136,10 @@ sudo ansible-playbook site.yml -i local.hosts
 ### Configure opensds CLI tool
 ```bash
-sudo cp $GOPATH/src/github.com/opensds/opensds/build/out/bin/osdsctl /usr/local/bin
+sudo cp /opt/opensds-{opensds-release}-linux-amd64/bin/osdsctl /usr/local/bin
 export OPENSDS_ENDPOINT=http://127.0.0.1:50040
+export OPENSDS_AUTH_STRATEGY=noauth
+
 osdsctl pool list # Check if the pool resource is available
 ```
@@ -165,14 +177,12 @@ sudo ansible-playbook clean.yml -i local.hosts
 ### Run ceph-ansible playbook to clean ceph cluster if ceph is deployed
 ```bash
-cd /tmp/ceph-ansible
+cd /opt/ceph-ansible
 sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts
 ```
-In addition, clean up the logical partition on the physical block device used by ceph, using the ```fdisk``` tool.
-
 ### Remove ceph-ansible source code (optional)
 ```bash
 cd ..
-sudo rm -rf /tmp/ceph-ansible
+sudo rm -rf /opt/ceph-ansible
 ```
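
The README changed by this diff asks you to list real block devices (`pv_devices` for lvm, `devices` for ceph) and to ensure they exist before running the playbook. As a minimal sketch of that pre-flight check, the helper below reads the `pv_devices:` list out of a vars file and reports whether each entry is a block device. This is not part of the opensds tooling: the function name, the one-key-only parsing, and the `ok:`/`missing:` output format are all assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical pre-flight check (not part of opensds): print one line per
# device listed under the `pv_devices:` key of a vars file such as
# group_vars/osdsdock.yml, marking whether it exists as a block device.
check_pv_devices() {
  conf="$1"
  # Collect "- /dev/xxx" items that follow the pv_devices: key, stopping at
  # the next top-level key and skipping commented-out entries like "#- /dev/sdc".
  awk '/^pv_devices:/ {grab = 1; next}
       /^[^ ]/        {grab = 0}
       grab && /^[[:space:]]*- / {print $2}' "$conf" |
  while read -r dev; do
    if [ -b "$dev" ]; then
      echo "ok: $dev"
    else
      echo "missing: $dev"
    fi
  done
}
```

Running `check_pv_devices group_vars/osdsdock.yml` before `ansible-playbook site.yml -i local.hosts` lets you abort on a `missing:` line instead of letting the lvm backend fail partway through.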