--- /dev/null
+# nbp-ansible\r
+This is an installation tool for OpenSDS northbound plugins (NBP), based on Ansible.
+\r
+## Installation
+\r
+### Pre-config (Ubuntu 16.04)\r
+First install the required system packages:
+```\r
+sudo apt-get install -y openssh-server git\r
+```\r
+Then edit the ```/etc/ssh/sshd_config``` file and change one line:
+```conf\r
+PermitRootLogin yes\r
+```\r
+Next, generate an SSH key pair and copy the public key to the target machine:
+```bash\r
+ssh-keygen -t rsa\r
+ssh-copy-id -i ~/.ssh/id_rsa.pub <ip_address> # IP address of the target machine of the installation\r
+```\r
+\r
+### Install docker\r
+If you use a standalone Cinder as the backend, you also need to install Docker to run the Cinder service. Please see the [docker installation document](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/) for details.
+\r
+### Install ansible tool\r
+```bash\r
+sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.\r
+sudo apt-get update\r
+sudo apt-get install ansible\r
+ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.\r
+```\r
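Because two different minimum versions apply (2.4.2 for the ceph backend, 2.0.0.2 for the others), it can be handy to check the installed version from a script. The helper below is only a sketch, not part of the playbooks; it compares version strings with `sort -V`:

```shell
#!/bin/sh
# version_ge A B: succeeds (exit 0) if version A >= version B,
# using `sort -V` for natural version ordering.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Check the version reported by `ansible --version` against the ceph
# minimum. Replace "2.4.2" with "2.0.0.2" for the other backends.
installed="$(ansible --version 2>/dev/null | head -n1 | awk '{print $2}')"
if version_ge "${installed:-0}" "2.4.2"; then
    echo "ansible ${installed} satisfies the ceph minimum (2.4.2)"
else
    echo "ansible ${installed:-not found} is older than 2.4.2; upgrade via the PPA above"
fi
```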
+\r
+### Check if the hosts can be reached\r
+```bash\r
+sudo ansible all -m ping -i nbp.hosts\r
+```\r
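The `nbp.hosts` inventory in this repository targets the local machine by default. To deploy to remote nodes instead, list them under the `worker-nodes` group; the address below is only a placeholder:

```ini
[worker-nodes]
# Default: deploy to the local machine.
localhost ansible_connection=local

# To deploy to remote nodes, remove the line above and list them here,
# e.g. (placeholder address):
# 192.168.56.101 ansible_user=root
```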
+\r
+### Run the nbp-ansible playbook to start the deployment
+```bash\r
+sudo ansible-playbook site.yml -i nbp.hosts\r
+```\r
+\r
+## Uninstallation
+\r
+### Run the nbp-ansible playbook to clean up the environment
+```bash\r
+sudo ansible-playbook clean.yml -i nbp.hosts\r
+```\r
--- /dev/null
+---\r
+# Defines the clean-up process for tearing down the cluster.
+\r
+- name: destroy all opensds nbp files
+ hosts: worker-nodes\r
+ remote_user: root\r
+ vars_files:\r
+ - group_vars/common.yml\r
+ gather_facts: false\r
+ become: True\r
+ roles:\r
+ - cleaner\r
--- /dev/null
+---\r
+# Variables here are applicable to all host groups NOT roles\r
+\r
+# This sample file generated by generate_group_vars_sample.sh\r
+\r
+# Dummy variable to avoid errors, because ansible does not recognize a
+# file as a valid configuration file when it contains no variables.
+dummy:\r
+\r
+# You can override default vars defined in defaults/main.yml here,\r
+# but I would advise using host or group vars instead
+\r
+\r
+###########\r
+# GENERAL #\r
+###########\r
+\r
+# These fields should not normally be modified
+nbp_download_url: https://github.com/opensds/nbp/releases/download/v0.1.0/opensds-k8s-linux-amd64.tar.gz\r
+nbp_tarball_url: /opt/opensds-k8s-linux-amd64.tar.gz\r
+nbp_dir: /opt/opensds-k8s-linux-amd64\r
+\r
+flexvolume_plugin_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds\r
--- /dev/null
+[worker-nodes]\r
+localhost ansible_connection=local
\ No newline at end of file
--- /dev/null
+---\r
+- name: clean opensds flexvolume plugins binary file\r
+ file:\r
+ path: "{{ flexvolume_plugin_dir }}"\r
+ state: absent\r
+ force: yes\r
+ ignore_errors: yes\r
+\r
+- name: clean nbp release files\r
+ file:\r
+ path: "{{ nbp_dir }}"\r
+ state: absent\r
+ force: yes\r
+ ignore_errors: yes\r
+\r
+- name: clean nbp release tarball file\r
+ file:\r
+ path: "{{ nbp_tarball_url }}"\r
+ state: absent\r
+ force: yes\r
+ ignore_errors: yes\r
--- /dev/null
+---\r
+- name: Run the equivalent of "apt-get update" as a separate step\r
+ apt:\r
+ update_cache: yes\r
+\r
+- name: check whether nbp release files exist
+ stat:\r
+ path: "{{ nbp_dir }}"\r
+ ignore_errors: yes\r
+ register: releasesexisted\r
+\r
+- name: download nbp release files
+  get_url:
+    url: "{{ nbp_download_url }}"
+    dest: "{{ nbp_tarball_url }}"
+  when:
+    - releasesexisted.stat.exists is undefined or not releasesexisted.stat.exists
+
+- name: extract the nbp release tarball
+  unarchive:
+    src: "{{ nbp_tarball_url }}"
+    dest: /opt/
+    remote_src: yes  # the tarball was downloaded to the remote host above
+  when:
+    - releasesexisted.stat.exists is undefined or not releasesexisted.stat.exists
--- /dev/null
+---\r
+- name: Create flexvolume plugin directory if it does not exist
+ file:\r
+ path: "{{ flexvolume_plugin_dir }}"\r
+ state: directory\r
+ mode: 0755\r
+\r
+- name: Copy opensds flexvolume plugin binary file into flexvolume plugin dir\r
+ copy:\r
+ src: "{{ nbp_dir }}/flexvolume/opensds"\r
+ dest: "{{ flexvolume_plugin_dir }}/opensds"\r
--- /dev/null
+---\r
+# Defines deployment design and assigns role to server groups\r
+\r
+- name: deploy opensds flexvolume plugin in all kubelet nodes\r
+ hosts: worker-nodes\r
+ remote_user: root\r
+ vars_files:\r
+ - group_vars/common.yml\r
+ gather_facts: false\r
+ become: True\r
+ roles:\r
+ - common\r
+ - flexvolume\r
--- /dev/null
+## Prerequisite ##\r
+### ubuntu\r
+* Version information\r
+\r
+ ```\r
+ root@proxy:~# cat /etc/issue\r
+ Ubuntu 16.04.2 LTS \n \l\r
+ ```\r
+### docker\r
+* Version information\r
+\r
+ ```\r
+ root@proxy:~# docker version\r
+ Client:\r
+ Version: 1.12.6\r
+ API version: 1.24\r
+ Go version: go1.6.2\r
+ Git commit: 78d1802\r
+ Built: Tue Jan 31 23:35:14 2017\r
+ OS/Arch: linux/amd64\r
+\r
+ Server:\r
+ Version: 1.12.6\r
+ API version: 1.24\r
+ Go version: go1.6.2\r
+ Git commit: 78d1802\r
+ Built: Tue Jan 31 23:35:14 2017\r
+ OS/Arch: linux/amd64\r
+ ```\r
+\r
+### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster\r
+* Version information\r
+ ```\r
+ root@proxy:~# kubectl version\r
+ Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-beta.0-dirty", GitCommit:"a0fb3baa71f1559fd42d1acd9cbdd8a55ab4dfff", GitTreeState:"dirty", BuildDate:"2017-12-13T09:22:09Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}\r
+ Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-beta.0-dirty", GitCommit:"a0fb3baa71f1559fd42d1acd9cbdd8a55ab4dfff", GitTreeState:"dirty", BuildDate:"2017-12-13T09:22:09Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}\r
+ ```\r
+* You can start up the k8s local cluster by executing the commands below:
+\r
+ ```\r
+ cd $HOME\r
+ git clone https://github.com/kubernetes/kubernetes.git\r
+ cd $HOME/kubernetes\r
+ git checkout v1.9.0\r
+ make\r
+ echo "alias kubectl='$HOME/kubernetes/cluster/kubectl.sh'" >> /etc/profile
+ RUNTIME_CONFIG=settings.k8s.io/v1alpha1=true AUTHORIZATION_MODE=Node,RBAC hack/local-up-cluster.sh -O\r
+ ```\r
+\r
+\r
+### [opensds](https://github.com/opensds/opensds) local cluster\r
+* For testing purposes you can deploy an OpenSDS local cluster by referring to ```ansible/README.md```. In addition, you need to deploy the opensds flexvolume plugin by referring to ```nbp-ansible/README.md```.
+\r
+## Testing steps ##\r
+\r
+* Create service account, role and bind them.\r
+ ```\r
+ cd /opt/opensds-k8s-linux-amd64/provisioner\r
+ kubectl create -f serviceaccount.yaml\r
+ kubectl create -f clusterrole.yaml\r
+ kubectl create -f clusterrolebinding.yaml\r
+ ```\r
+\r
+* Change the opensds endpoint IP in pod-provisioner.yaml.
+  The IP (192.168.56.106) should be replaced with the actual endpoint IP of the OpenSDS osdslet.
+ ```yaml\r
+ kind: Pod\r
+ apiVersion: v1\r
+ metadata:\r
+ name: opensds-provisioner\r
+ spec:\r
+ serviceAccount: opensds-provisioner\r
+ containers:\r
+ - name: opensds-provisioner\r
+ image: opensdsio/opensds-provisioner\r
+ securityContext:\r
+ args:\r
+ - "-endpoint=http://192.168.56.106:50040" # should be replaced\r
+ imagePullPolicy: "IfNotPresent"\r
+ ```\r
+\r
+* Create provisioner pod.\r
+ ```\r
+ kubectl create -f pod-provisioner.yaml\r
+ ```\r
+\r
+* You can use the following commands to test the OpenSDS FlexVolume and Provisioner functions.
+\r
+ ```\r
+ kubectl create -f sc.yaml # Create StorageClass\r
+ kubectl create -f pvc.yaml # Create PVC\r
+ kubectl create -f pod-application.yaml # Create busybox pod and mount the block storage.\r
+ ```\r
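For reference, a minimal PVC like the one in `pvc.yaml` might look like the sketch below; the claim name, storage class name, and size here are assumptions for illustration and must match what your `sc.yaml` actually defines:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: opensds-pvc           # assumed name; use whatever pvc.yaml defines
spec:
  storageClassName: opensds   # must match the StorageClass created from sc.yaml
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```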
+\r
+ Execute `findmnt | grep opensds` to confirm that the volume has been provisioned and mounted.
+\r
+## Clean up steps ##\r
+\r
+```\r
+kubectl delete -f pod-application.yaml\r
+kubectl delete -f pvc.yaml\r
+kubectl delete -f sc.yaml\r
+\r
+kubectl delete -f pod-provisioner.yaml\r
+kubectl delete -f clusterrolebinding.yaml\r
+kubectl delete -f clusterrole.yaml\r
+kubectl delete -f serviceaccount.yaml\r
+```
\ No newline at end of file