# OpenSDS Integration with OpenStack on Ubuntu
All of the installation steps below have been tested on `Ubuntu 16.04`; please make
sure you are running that release.
* OpenStack (assumed to be already deployed)
```
openstack endpoint list # Check the endpoint of the cinder service to be replaced
```
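If you want to capture that endpoint in a script, the table output can be parsed; below is a minimal sketch against a hard-coded sample row (the ID and URL are hypothetical examples, not values from a real cloud):

```shell
# Sketch: extract the URL column from one sample `openstack endpoint list` row.
# On a live cloud, pipe the real command output in instead of this sample.
row="| abc123 | RegionOne | cinderv3 | volumev3 | True | public | http://10.10.3.173:8776/v3 |"
cinder_url=$(echo "$row" | awk -F'|' '{gsub(/ /, "", $8); print $8}')
echo "$cinder_url"
```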
Install the following packages:
```
apt-get install -y git curl wget
```
Install docker:
```
wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
dpkg -i docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
```
### Download opensds-installer code
```
git clone https://gerrit.opnfv.org/gerrit/stor4nfv
cd stor4nfv/ci/ansible
```
### Install ansible tool
To install ansible, run the commands below:
```
# This step upgrades ansible to version 2.4.2, which is required for the "include_tasks" ansible command.
chmod +x ./install_ansible.sh && ./install_ansible.sh
ansible --version # Ansible version 2.4.x is required.
```
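The version requirement above can also be checked in a script; here is a small sketch of such a comparison using `sort -V`, with the installed version hard-coded for illustration (in practice it would come from `ansible --version`):

```shell
# Sketch: check that the installed ansible meets the 2.4 minimum.
# "installed" is a hard-coded example value for illustration only.
required="2.4"
installed="2.4.2"
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  version_ok="yes"
else
  version_ok="no"
fi
echo "version_ok=$version_ok"
```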
### Configure opensds cluster variables:
##### System environment:
Change the `opensds_endpoint` field in `group_vars/common.yml`:
```
# The IP (127.0.0.1) should be replaced with the actual opensds endpoint IP
opensds_endpoint: http://127.0.0.1:50040
```
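The substitution can be done with `sed`; the sketch below uses a temporary file and an assumed `HOST_IP` value so it is safe to run anywhere (in a real deployment, point it at `group_vars/common.yml` and your node's management IP):

```shell
# Sketch: swap the 127.0.0.1 placeholder for the real endpoint IP.
# HOST_IP and the temp file are illustrative assumptions only.
HOST_IP="192.168.3.10"
tmp=$(mktemp)
echo 'opensds_endpoint: http://127.0.0.1:50040' > "$tmp"
sed -i "s|http://127.0.0.1:50040|http://${HOST_IP}:50040|" "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```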
Change the `opensds_auth_strategy` field to `noauth` in `group_vars/auth.yml`:
```
# OpenSDS authentication strategy, supports 'noauth' and 'keystone'.
opensds_auth_strategy: noauth
```
If `ceph` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
```
enabled_backend: ceph # Change this according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
```
Configure `group_vars/ceph/all.yml` with the example below:
```
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous # Choose luminous as the default version
public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
cluster_network: "{{ public_network }}"
monitor_interface: eth1 # Change to the network interface on the target machine
devices: # For ceph devices, append ONE or MULTIPLE devices as below:
  - '/dev/sda' # Ensure this device exists and is available if ceph is chosen
  #- '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
osd_scenario: collocated
```
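To derive the `public_network` value from a host address, a /24 can be computed from the first three octets; the sketch below uses a hypothetical host IP (on the target machine, take the real one from `ip -4 address`):

```shell
# Sketch: derive the /24 public_network value from a host IP.
# host_ip is a hypothetical example address.
host_ip="192.168.3.14"
public_network=$(echo "$host_ip" | awk -F. '{printf "%s.%s.%s.0/24", $1, $2, $3}')
echo "public_network: \"$public_network\""
```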
### Check if the hosts can be reached
```
ansible all -m ping -i local.hosts
```
### Run opensds-ansible playbook to start the deployment
```
ansible-playbook site.yml -i local.hosts
```
Set the endpoints and start the cinder-compatible API service:
```
export CINDER_ENDPOINT=http://10.10.3.173:8776/v3 # Use the cinder endpoint shown above
export OPENSDS_ENDPOINT=http://127.0.0.1:50040
chmod +x ../bin/cindercompatibleapi && ../bin/cindercompatibleapi
```
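Since `cindercompatibleapi` depends on both variables, a small guard can fail fast if either is unset; the sketch below reuses the example values from this guide:

```shell
# Sketch: verify both endpoint variables are non-empty before starting
# cindercompatibleapi (values are the examples used in this guide).
CINDER_ENDPOINT="http://10.10.3.173:8776/v3"
OPENSDS_ENDPOINT="http://127.0.0.1:50040"
endpoints_ok="yes"
for v in "$CINDER_ENDPOINT" "$OPENSDS_ENDPOINT"; do
  [ -n "$v" ] || endpoints_ok="no"
done
echo "endpoints_ok=$endpoints_ok"
```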
Please create a default opensds profile after initializing the opensds cluster:
```
osdsctl profile create '{"name": "default", "description": "default policy"}'
```
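The argument to `osdsctl profile create` is a plain JSON string; if you need to build it from variables, a sketch like the following works (the `osdsctl` call itself is left commented, as it needs a live cluster):

```shell
# Sketch: assemble the JSON payload passed to `osdsctl profile create`.
name="default"
desc="default policy"
profile=$(printf '{"name": "%s", "description": "%s"}' "$name" "$desc")
echo "$profile"
# osdsctl profile create "$profile"   # run against the live opensds cluster
```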
Then you can execute some cinder CLI commands to check that the results are
correct; for example, running `cinder type-list` should show the profile of
opensds.
For detailed test instructions, please refer to section 5.3 of the
[OpenSDS Aruba PoC Plan](https://github.com/opensds/opensds/blob/development/docs/test-plans/OpenSDS_Aruba_POC_Plan.pdf).