separate ansible scripts (#6484)

* ansible scripts for infra automation, and workflow for applying and testing changes

Signed-off-by: anandrkskd <anandrkskd@gmail.com>

* fix

Signed-off-by: anandrkskd <ansingh@redhat.com>

* fix

Signed-off-by: anandrkskd <ansingh@redhat.com>

* rename requirements.yaml

Signed-off-by: anandrkskd <ansingh@redhat.com>

* change permissions

Signed-off-by: anandrkskd <ansingh@redhat.com>

* update cluster version

Signed-off-by: anandrkskd <ansingh@redhat.com>

---------

Signed-off-by: anandrkskd <anandrkskd@gmail.com>
Signed-off-by: anandrkskd <ansingh@redhat.com>
Anand Kumar Singh
2023-05-11 20:25:25 +05:30
committed by GitHub
parent f491bcc7db
commit 7ff460bc62
42 changed files with 2001 additions and 135 deletions

.github/workflows/infra-apply.yaml

@@ -0,0 +1,36 @@
name: Infra update
on:
push:
branches:
- main
paths:
- scripts/ansible
- '!scripts/ansible/Cluster/kubernetes-cluster/manual-changes/Readme.md'
- '!scripts/ansible/Cluster/openshift-cluster/manual-changes/Readme.md'
- '!scripts/ansible/Cluster/NFS-vm/manual-changes/Readme.md'
- '!scripts/ansible/Cluster/windows-openshift-cluster/manual-changes/Readme.md'
jobs:
kubernetes-infra-stage-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: pre-config
run: |
echo "${{ secrets.NFSKEY }}" > ./ssh_key
chmod 600 ./ssh_key
- name: update name from staging to production
run: |
sed -i 's/odo-stage/odo-tests/g' scripts/ansible/Cluster/vars.yml
- name: Create Production Cluster
uses: dawidd6/action-ansible-playbook@v2
env:
IC_API_KEY: ${{ secrets.IC_API_KEY }}
IC_REGION: 'eu-de'
SSHKEY: './ssh_key'
with:
playbook: scripts/ansible/create-infra.yaml
requirements: scripts/ansible/requirements.yaml

.github/workflows/infra-test.yaml

@@ -0,0 +1,52 @@
name: odo-infra-stage-test
on:
push:
paths:
- scripts/ansible
- '!scripts/ansible/Cluster/kubernetes-cluster/manual-changes/Readme.md'
- '!scripts/ansible/Cluster/openshift-cluster/manual-changes/Readme.md'
- '!scripts/ansible/Cluster/NFS-vm/manual-changes/Readme.md'
- '!scripts/ansible/Cluster/windows-openshift-cluster/manual-changes/Readme.md'
pull_request:
branches:
- main
jobs:
kubernetes-infra-stage-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: pre-config
run: |
echo "${{ secrets.NFSKEY }}" > ./ssh_key
chmod 600 ./ssh_key
- name: Create Staging Cluster
uses: dawidd6/action-ansible-playbook@v2
env:
IC_API_KEY: ${{ secrets.IC_API_KEY }}
IC_REGION: 'eu-de'
SSHKEY: './ssh_key'
with:
playbook: scripts/ansible/create-infra.yaml
requirements: scripts/ansible/requirements.yaml
- name: login to the three clusters
env:
IC_API_KEY: ${{ secrets.IC_API_KEY }}
IC_REGION: 'eu-de'
run: |
curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
ibmcloud login --apikey $IC_API_KEY -r $IC_REGION
ibmcloud plugin install ks
CLUSTER=`ibmcloud ks cluster get -c odo-test-kubernetes-cluster --output json `
ID=$(echo $CLUSTER | jq -r '.id')
ibmcloud ks cluster config --cluster $ID --admin
CLUSTER=`ibmcloud ks cluster get -c odo-test-openshift-cluster --output json `
ID=$(echo $CLUSTER | jq -r '.id')
ibmcloud ks cluster config --cluster $ID --admin
CLUSTER=`ibmcloud ks cluster get -c odo-test-openshift-win-cluster --output json `
ID=$(echo $CLUSTER | jq -r '.id')
ibmcloud ks cluster config --cluster $ID --admin


@@ -0,0 +1,60 @@
# ReadMe
This directory contains YAML files to create the NFS server
### NFS provisioner (how to configure NFS for a cluster)
You can run the following commands on a cluster (either Kubernetes or OpenShift) to deploy the NFS provisioner. You will need to uninstall the "Block Storage for VPC" add-on, installed by default, to make the NFS provisioner work correctly.
```
$ helm repo add nfs-subdir-external-provisioner \
https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner \
nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=<IP_FOR_NFS> \
--set nfs.path=/mnt/nfs \
--set storageClass.defaultClass=true \
--set storageClass.onDelete=delete \
--version=4.0.15
```
> Learn more about nfs-subdir-external-provisioner at https://artifacthub.io/packages/helm/nfs-subdir-external-provisioner/nfs-subdir-external-provisioner
### Check whether NFS is working
Log in to the server using its floating IP and inspect the NFS service.
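A minimal check, assuming the SSH key exported in `SSHKEY` matches the key deployed on the server and `NFS_IP` holds the floating IP (see the helpful commands below):
```shell
ssh -i $SSHKEY root@$NFS_IP
# on the server: verify the export path and that the NFS service is running
exportfs -v
systemctl status nfs-kernel-server
```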
### **NOTE**
The IBM Cloud block storage provided with the cluster does not work when NFS storage is set as the default storage class. Make sure to disable the `vpc-block-csi-driver` add-on on any cluster for which you want to use **nfs-storage**.
#### *Command to disable the storage add-on on a cluster*
```shell
ibmcloud ks cluster addon disable vpc-block-csi-driver -c <cluster-ID>
```
### Helpful commands
1. Fetch the private IP for NFS configuration
```shell
IP_FOR_NFS=$(ibmcloud is instance <nfs-instance-name> --output json | jq -r ".primary_network_interface.primary_ip.address")
```
2. Fetch the floating IP of the NFS server
```shell
NFS_IP=$(ibmcloud is instance k8s-nfs-server --output json | jq -r ".primary_network_interface.floating_ips[0].address" )
```
3. Create/delete just the NFS server
> NOTE: you will need to export the path to the SSH key used for login (the variable name is `SSHKEY`)
```
$ export SSHKEY=/path/to/ssh/key
$ ansible-playbook create.yaml \
-e name_prefix=odo-tests \
-e cluster_zone="eu-de-2"
$ ansible-playbook delete.yaml \
-e name_prefix=odo-tests
```


@@ -1,5 +1,5 @@
---
- name: Create OpenShift Cluster on IBM Cloud
- name: Create NFS vsi for Clusters on IBM Cloud
hosts: localhost
collections:
- ibm.cloudcollection
@@ -86,16 +86,17 @@
image_dict: "{{ images_list.resource.images |
items2dict(key_name='name', value_name='id') }}"
- name: Configure SSH Key
ibm_is_ssh_key:
name: "ansible-ssh-key"
public_key: "{{ ssh_public_key }}"
register: ssh_key_create_output
# uncomment if "automation-key" is deleted, and re-run the playbook to create the SSH key
# - name: Configure SSH Key
# ibm_is_ssh_key:
# name: "{{ name_prefix }}-key"
# public_key: "{{ ssh_public_key }}"
# register: ssh_key_create_output
- name: Save SSH Key as fact
set_fact:
cacheable: True
ssh_key: "{{ ssh_key_create_output.resource }}"
# - name: Save SSH Key id as fact
# set_fact:
# cacheable: True
# ssh_key_id: "{{ ssh_key_create_output.resource.id }}"
- name: Configure VSI for NFS server
ibm_is_instance:
@@ -105,7 +106,7 @@
profile: "bx2-2x8"
image: "{{ image_dict[nfs_image] }}"
keys:
- "{{ ssh_key.id }}"
- "{{ ssh_key_id }}"
primary_network_interface:
- subnet: "{{ subnet.id }}"
zone: "{{ cluster_zone }}"
@@ -129,63 +130,23 @@
cacheable: True
nfsip: "{{ nfsip_create_output.resource }}"
- name: get ssh_key from environment variable
set_fact:
cacheable: True
ssh_login_key: "'{{ lookup('ansible.builtin.env', 'SSHKEY') }}'"
- name: Add NFS to Ansible inventory
add_host:
name: "{{ nfsip.address }}"
ansible_user: root
groups: new_vsi
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
- name: Configure Cloud Object Storage
ibm_resource_instance:
name: "{{ name_prefix }}-cos"
resource_group_id: "{{ rg.id }}"
service: "cloud-object-storage"
plan: "standard"
location: "global"
state: available
register: cos_create_output
- name: Save Cloud ObjectStorage Subnet as fact
set_fact:
cacheable: True
cos: "{{ cos_create_output.resource }}"
when: cos_create_output.rc==0
- name: Configure cluster
ibm_container_vpc_cluster:
name: "{{ name_prefix }}-cluster"
resource_group_id: "{{ rg.id }}"
kube_version: "{{ kube_version }}"
flavor: "{{ node_flavor }}"
worker_count: "{{ workers }}"
vpc_id: "{{ vpc.id }}"
cos_instance_crn: "{{ cos.crn }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster_zone }}"
}
state: available
register: cluster_create_output
- name: Save Cluster as fact
set_fact:
cacheable: True
cluster: "{{ cluster_create_output.resource }}"
when: cluster_create_output.rc==0
- local_action:
module: copy
content: "{{ cluster.id }}"
dest: "{{ cluster_id_file }}"
ansible_ssh_extra_args: -o StrictHostKeyChecking=no -i {{ ssh_login_key }}
- local_action:
module: copy
content: "{{ nfs.primary_network_interface[0].primary_ipv4_address }}"
dest: "{{ nfs_ip_file }}"
- name: Check Ansible connection to new NFS server
hosts: new_vsi
gather_facts: False
@@ -215,4 +176,4 @@
- name: Restart service nfs-kernel-server
ansible.builtin.service:
name: nfs-kernel-server
state: restarted
state: restarted


@@ -0,0 +1,149 @@
---
- name: Destroy nfs server on IBM Cloud
hosts: localhost
collections:
- ibm.cloudcollection
tasks:
- name: Fetch the variables from var file
include_vars:
file: vars.yml
- name: Get the NFS IP details
ibm_is_floating_ip_info:
name: "{{ name_prefix }}-nfs-ip"
failed_when:
- nfsip_output.rc != 0
- '"No floatingIP found" not in nfsip_output.stderr'
register: nfsip_output
- name: set nfsip in fact
set_fact:
cacheable: True
nfsip: "{{ nfsip_output.resource }}"
when: nfsip_output.resource.id is defined
- name: Remove NFS IP
ibm_is_floating_ip:
id: "{{ nfsip.id }}"
zone: "{{ cluster_zone }}"
state: absent
when:
- nfsip is defined
- name: Get the NFS server details
ibm_is_instance_info:
name: "{{ name_prefix }}-nfs"
failed_when:
- nfs_output.rc != 0
- '"No Instance found" not in nfs_output.stderr'
register: nfs_output
- name: set nfs in fact
set_fact:
cacheable: True
nfs: "{{ nfs_output.resource }}"
when: nfs_output.resource.id is defined
- name: Remove NFS server
ibm_is_instance:
id: "{{ nfs.id }}"
image: "{{ nfs.image }}"
resource_group: "{{ nfs.resource_group }}"
vpc: "{{ nfs.vpc }}"
profile: "{{ nfs.profile }}"
keys: "{{ nfs.keys }}"
primary_network_interface:
- subnet: "{{ nfs.primary_network_interface[0].subnet }}"
zone: "{{ nfs.zone }}"
state: absent
when:
- nfs is defined
- name: Get the vpc details
ibm_is_vpc_info:
name: "{{ name_prefix }}-vpc"
failed_when:
- vpc_output.rc != 0
- '"No VPC found" not in vpc_output.stderr'
register: vpc_output
- name: set vpc in fact
set_fact:
cacheable: True
vpc: "{{ vpc_output.resource }}"
when: vpc_output.resource.id is defined
- name: Get the subnet details
ibm_is_subnet_info:
name: "{{ name_prefix }}-subnet"
failed_when:
- subnet_output.rc != 0
- '"No subnet found" not in subnet_output.stderr'
register: subnet_output
- name: set subnet in fact
set_fact:
cacheable: True
subnet: "{{ subnet_output.resource }}"
when: subnet_output.resource.id is defined
- name: Get the Resource group details
ibm_resource_group_info:
name: "{{ name_prefix }}-group"
failed_when:
- rg_output.rc != 0
- '"Given Resource Group is not found" not in rg_output.stderr'
register: rg_output
- name: set Resource group in fact
set_fact:
cacheable: True
rg: "{{ rg_output.resource }}"
when: rg_output.resource.id is defined
- name: Remove VPC Subnet
ibm_is_subnet:
state: absent
id: "{{ subnet.id }}"
when: subnet is defined
- name: Get the Public Gateway details
ibm_is_public_gateway_info:
name: "{{ name_prefix }}-gw"
failed_when:
- gw_output.rc != 0
- '"No Public gateway found" not in gw_output.stderr'
register: gw_output
- name: set Public Gateway in fact
set_fact:
cacheable: True
gw: "{{ gw_output.resource }}"
when: gw_output.resource.id is defined
- name: Remove Public Gateway
ibm_is_public_gateway:
id: "{{ gw.id }}"
state: absent
when: gw is defined
- name: Remove VPC
ibm_is_vpc:
state: absent
id: "{{ vpc.id }}"
when: vpc is defined
- name: Remove Resource Group
ibm_resource_group:
state: absent
id: "{{ rg.id }}"
when: rg is defined


@@ -1,5 +1,5 @@
---
collections:
- ibm.cloudcollection
- name: community.kubernetes
- name: kubernetes.core
version: 2.0.0


@@ -0,0 +1,6 @@
---
total_ipv4_address_count: 256
nfs_ip_file: /tmp/nfs_ip_ibmcloud
ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABAQDO/TwSo9O3qT3wwZ0vH2HNlhhjyIlp6/MGZuEJOJFTw0eJa7nFzOmX4XcLZjj6nruaRQlTKpX5IlIaY+OKg8xDj/StxIyRkHHmhRoqkYty22480MTXzr5aCj1ABk3x0TTs0M3g7h+Xn/QttKIrPudic6U/+8gzwNLwiXKgecPb4nRW9lM44QudxXXZw3DPLg4r7qW0Vyhm5VGNGQHiZ8MAf+8GTf6KPi4XunBjpTKPGf2NMpXMU7U1+TJn+l1g3aRnhc+DphV86C0ELL7PAYHcXvo6AmWPCBbA7eNwz3/OiQI0rNwgb4YmS+MS4YZjSgWnUSk9dUD7bxDH9ID7WpS1
nfs_image: ibm-ubuntu-20-04-2-minimal-amd64-1
ssh_key_id: r010-257716c0-8094-4ade-86ea-3147f7248b23 # if this key is deleted from the console, uncomment the ssh-key task and re-run the playbook to create a key from ssh_public_key


@@ -0,0 +1,40 @@
# OVERVIEW
This directory contains ansible files that create/destroy clusters on IBMCloud.
These scripts are used for automation in two GitHub Actions workflows:
- one creates a staging environment to test whether the changes work as expected
- the other applies those new changes to the actual infrastructure
## Pre-requisite
- ansible
- `pip3 install openshift`
- `ansible-galaxy collection install -r requirements.yml`
## ***NOTE***
>Deletion of the staging environment is done manually: the GitHub Action only creates the staging environment, and testing it is also manual.
>By default the scripts are configured for the staging environment (to make sure the production infrastructure is not modified by mistake).
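For reference, the `infra-apply` workflow above switches the names from staging to production with a single `sed` before running the playbooks:
``` shell
sed -i 's/odo-stage/odo-tests/g' scripts/ansible/Cluster/vars.yml
```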
## ___How to check your changes?___
Create a PR with:
- changes to the README file present in the ansible directory:
  - the first commit should only contain changes to the Readme.md file, so that it creates a cluster similar to the main infrastructure
- changes to the YAML files:
  - later commits will contain the changes that you want to test on the staging environment
### __How to delete the staging environment?__
Run the following commands
``` shell
# expose the Region and API key for ansible script
export IC_REGION="eu-de"
export IC_API_KEY="<API_KEY>"
ansible-playbook delete-clusters.yaml
```
### To remove the storage add-on from a cluster
```shell
ibmcloud ks cluster addon disable vpc-block-csi-driver -c <cluster-ID>
```


@@ -0,0 +1,19 @@
---
- name: Create Cluster on IBM Cloud
hosts: localhost
tasks:
- name: Fetch the variables from Cluster var file
include_vars:
file: vars.yml
- name: create a kubernetes cluster
ansible.builtin.import_playbook: ./kubernetes-cluster/create.yml
- name: create an openshift cluster
ansible.builtin.import_playbook: ./openshift-cluster/create.yml
- name: create a windows-openshift cluster
ansible.builtin.import_playbook: ./windows-openshift-cluster/create.yml
- name: create NFS server for clusters
ansible.builtin.import_playbook: ./NFS-vm/create.yaml


@@ -0,0 +1,262 @@
---
- name: Create Cluster on IBM Cloud
hosts: localhost
collections:
- ibm.cloudcollection
tasks:
- name: Fetch the variables from Cluster var file
include_vars:
file: vars.yml
- name: Get the Resource group details
ibm_resource_group_info:
name: "{{ name_prefix }}-group"
failed_when:
- rg_output.rc != 0
- '"Given Resource Group is not found" not in rg_output.stderr'
register: rg_output
- name: set Resource group in fact
set_fact:
cacheable: True
rg: "{{ rg_output.resource }}"
when: rg_output.resource.id is defined
- name: Get the vpc details
ibm_is_vpc_info:
name: "{{ name_prefix }}-vpc"
failed_when:
- vpc_output.rc != 0
- '"No VPC found" not in vpc_output.stderr'
register: vpc_output
- name: set vpc in fact
set_fact:
cacheable: True
vpc: "{{ vpc_output.resource }}"
when: vpc_output.resource.id is defined
- name: Get the subnet details
ibm_is_subnet_info:
name: "{{ name_prefix }}-subnet"
failed_when:
- subnet_output.rc != 0
- '"No subnet found" not in subnet_output.stderr'
register: subnet_output
- name: set subnet in fact
set_fact:
cacheable: True
subnet: "{{ subnet_output.resource }}"
when: subnet_output.resource.id is defined
- name: Get the Kubernetes cluster details
ibm_container_vpc_cluster_info:
name: "{{ name_prefix }}-kubernetes-cluster"
resource_group_id: "{{ rg.id }}"
failed_when:
- cluster_output.rc != 0
- '"cluster could not be found" not in cluster_output.stderr'
register: cluster_output
- name: set cluster in fact
set_fact:
cacheable: True
cluster: "{{ cluster_output.resource }}"
when: cluster_output.resource.id is defined
- name: Remove Kubernetes Cluster
ibm_container_vpc_cluster:
id: "{{ cluster.id }}"
state: absent
name: "{{ name_prefix }}-kubernetes-cluster"
vpc_id: "{{ vpc.id }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster_zone }}"
}
when:
- vpc is defined
- subnet is defined
- cluster is defined
- rg is defined
- name: Get the openshift cluster details
ibm_container_vpc_cluster_info:
name: "{{ name_prefix }}-openshift-cluster"
resource_group_id: "{{ rg.id }}"
failed_when:
- cluster_output.rc != 0
- '"cluster could not be found" not in cluster_output.stderr'
register: cluster_output
- name: set cluster in fact
set_fact:
cacheable: True
cluster: "{{ cluster_output.resource }}"
when: cluster_output.resource.id is defined
- name: Remove openshift Cluster
ibm_container_vpc_cluster:
id: "{{ cluster.id }}"
state: absent
name: "{{ name_prefix }}-openshift-cluster"
vpc_id: "{{ vpc.id }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster_zone }}"
}
when:
- vpc is defined
- subnet is defined
- cluster is defined
- name: Get the openshift windows cluster details
ibm_container_vpc_cluster_info:
name: "{{ name_prefix }}-openshift-win-cluster"
resource_group_id: "{{ rg.id }}"
failed_when:
- cluster_output.rc != 0
- '"cluster could not be found" not in cluster_output.stderr'
register: cluster_output
- name: set cluster in fact
set_fact:
cacheable: True
cluster: "{{ cluster_output.resource }}"
when: cluster_output.resource.id is defined
- name: Remove openshift windows Cluster
ibm_container_vpc_cluster:
id: "{{ cluster.id }}"
state: absent
name: "{{ name_prefix }}-openshift-win-cluster"
vpc_id: "{{ vpc.id }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster_zone }}"
}
when:
- vpc is defined
- subnet is defined
- cluster is defined
- name: Get the Cloud Object Storage details
ibm_resource_instance_info:
name: "{{ name_prefix }}-cos"
resource_group_id: "{{ rg.id }}"
failed_when:
- cos_output.rc != 0
- '"No resource instance found" not in cos_output.stderr'
when: rg is defined
register: cos_output
- name: set Cloud Object Storage in fact
set_fact:
cacheable: True
cos: "{{ cos_output.resource }}"
when: cos_output.resource.id is defined
- name: Remove Cloud Object Storage
ibm_resource_instance:
id: "{{ cos.id }}"
name: "{{ name_prefix }}-cos"
service: "cloud-object-storage"
plan: "standard"
location: "global"
state: absent
when: cos is defined
- name: Get the NFS IP details
ibm_is_floating_ip_info:
name: "{{ name_prefix }}-nfs-ip"
failed_when:
- nfsip_output.rc != 0
- '"No floatingIP found" not in nfsip_output.stderr'
register: nfsip_output
- name: set nfsip in fact
set_fact:
cacheable: True
nfsip: "{{ nfsip_output.resource }}"
when: nfsip_output.resource.id is defined
- name: Remove NFS IP
ibm_is_floating_ip:
id: "{{ nfsip.id }}"
zone: "{{ cluster_zone }}"
state: absent
when:
- nfsip is defined
- name: Get the NFS server details
ibm_is_instance_info:
name: "{{ name_prefix }}-nfs"
failed_when:
- nfs_output.rc != 0
- '"No Instance found" not in nfs_output.stderr'
register: nfs_output
- name: set nfs in fact
set_fact:
cacheable: True
nfs: "{{ nfs_output.resource }}"
when: nfs_output.resource.id is defined
- name: Remove NFS server
ibm_is_instance:
id: "{{ nfs.id }}"
image: "{{ nfs.image }}"
resource_group: "{{ nfs.resource_group }}"
vpc: "{{ nfs.vpc }}"
profile: "{{ nfs.profile }}"
keys: "{{ nfs.keys }}"
primary_network_interface:
- subnet: "{{ nfs.primary_network_interface[0].subnet }}"
zone: "{{ nfs.zone }}"
state: absent
when:
- nfs is defined
- name: Remove VPC Subnet
ibm_is_subnet:
state: absent
id: "{{ subnet.id }}"
when: subnet is defined
- name: Get the Public Gateway details
ibm_is_public_gateway_info:
name: "{{ name_prefix }}-gw"
failed_when:
- gw_output.rc != 0
- '"No Public gateway found" not in gw_output.stderr'
register: gw_output
- name: set Public Gateway in fact
set_fact:
cacheable: True
gw: "{{ gw_output.resource }}"
when: gw_output.resource.id is defined
- name: Remove Public Gateway
ibm_is_public_gateway:
id: "{{ gw.id }}"
state: absent
when: gw is defined
- name: Remove VPC
ibm_is_vpc:
state: absent
id: "{{ vpc.id }}"
when: vpc is defined
- name: Remove Resource Group
ibm_resource_group:
state: absent
id: "{{ rg.id }}"
when: rg is defined


@@ -0,0 +1,99 @@
---
- name: Create Kubernetes Cluster on IBM Cloud
hosts: localhost
collections:
- ibm.cloudcollection
tasks:
- name: Fetch the variables from var file
include_vars:
file: vars.yml
- name: Configure Resource Group
ibm_resource_group:
name: "{{ name_prefix }}-group"
state: available
register: rg_create_output
- name: Save Resource Group as fact
set_fact:
cacheable: True
rg: "{{ rg_create_output.resource }}"
when: rg_create_output.rc==0
- name: Configure VPC
ibm_is_vpc:
name: "{{ name_prefix }}-vpc"
resource_group: "{{ rg.id }}"
state: available
register: vpc_create_output
- name: Save VPC as fact
set_fact:
cacheable: True
vpc: "{{ vpc_create_output.resource }}"
when: vpc_create_output.rc==0
- name: Configure Public Gateway
ibm_is_public_gateway:
name: "{{ name_prefix }}-gw"
resource_group: "{{ rg.id }}"
zone: "{{ cluster_zone }}"
vpc: "{{ vpc.id }}"
state: available
register: gw_create_output
- name: Save Public Gateway as fact
set_fact:
cacheable: True
gw: "{{ gw_create_output.resource }}"
when: gw_create_output.rc==0
- name: Configure VPC Subnet
ibm_is_subnet:
name: "{{ name_prefix }}-subnet"
resource_group: "{{ rg.id }}"
vpc: "{{ vpc.id }}"
zone: "{{ cluster_zone }}"
total_ipv4_address_count: "{{ total_ipv4_address_count }}"
public_gateway: "{{ gw.id }}"
state: available
register: subnet_create_output
- name: Save VPC Subnet as fact
set_fact:
cacheable: True
subnet: "{{ subnet_create_output.resource }}"
when: subnet_create_output.rc==0
- name: Configure Cloud Object Storage
ibm_resource_instance:
name: "{{ name_prefix }}-cos"
resource_group_id: "{{ rg.id }}"
service: "cloud-object-storage"
plan: "standard"
location: "global"
state: available
register: cos_create_output
- name: Save Cloud ObjectStorage Subnet as fact
set_fact:
cacheable: True
cos: "{{ cos_create_output.resource }}"
when: cos_create_output.rc==0
- name: Configure cluster
ibm_container_vpc_cluster:
name: "{{ name_prefix }}-kubernetes-cluster"
resource_group_id: "{{ rg.id }}"
kube_version: "{{ kube_version }}"
flavor: "{{ node_flavor }}"
worker_count: "{{ workers }}"
vpc_id: "{{ vpc.id }}"
cos_instance_crn: "{{ cos.crn }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster_zone }}"
}
state: available
register: cluster_create_output


@@ -9,58 +9,6 @@
include_vars:
file: vars.yml
- name: Get the NFS IP details
ibm_is_floating_ip_info:
name: "{{ name_prefix }}-nfs-ip"
failed_when:
- nfsip_output.rc != 0
- '"No floatingIP found" not in nfsip_output.stderr'
register: nfsip_output
- name: set nfsip in fact
set_fact:
cacheable: True
nfsip: "{{ nfsip_output.resource }}"
when: nfsip_output.resource.id is defined
- name: Remove NFS IP
ibm_is_floating_ip:
id: "{{ nfsip.id }}"
state: absent
when:
- nfsip is defined
- name: Get the NFS server details
ibm_is_instance_info:
name: "{{ name_prefix }}-nfs"
failed_when:
- nfs_output.rc != 0
- '"No Instance found" not in nfs_output.stderr'
register: nfs_output
- name: set nfs in fact
set_fact:
cacheable: True
nfs: "{{ nfs_output.resource }}"
when: nfs_output.resource.id is defined
- name: Remove NFS server
ibm_is_instance:
id: "{{ nfs.id }}"
image: "{{ nfs.image }}"
resource_group: "{{ nfs.resource_group }}"
vpc: "{{ nfs.vpc }}"
profile: "{{ nfs.profile }}"
keys: "{{ nfs.keys }}"
primary_network_interface:
- subnet: "{{ nfs.primary_network_interface[0].subnet }}"
zone: "{{ nfs.zone }}"
state: absent
when:
- nfs is defined
- name: Get the vpc details
ibm_is_vpc_info:
name: "{{ name_prefix }}-vpc"
@@ -93,7 +41,7 @@
- name: Get the cluster details
ibm_container_vpc_cluster_info:
name: "{{ name_prefix }}-cluster"
name: "{{ name_prefix }}-kubernetes-cluster"
failed_when:
- cluster_output.rc != 0
- '"cluster could not be found" not in cluster_output.stderr'
@@ -109,13 +57,12 @@
ibm_container_vpc_cluster:
id: "{{ cluster.id }}"
state: absent
name: "{{ name_prefix }}-cluster"
flavor: "{{ cluster.worker_pools.0.flavor }}"
name: "{{ name_prefix }}-kubernetes-cluster"
vpc_id: "{{ vpc.id }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster.worker_pools.0.zones.0.zone }}"
name: "{{ cluster_zone }}"
}
when:
- vpc is defined


@@ -1,12 +1,11 @@
---
- name: Install Operators on Kubernetes Cluster
hosts: localhost
collections:
- community.kubernetes
tasks:
- name: Create a Subscription for Service Binding Operator
k8s:
kubernetes.core.k8s:
state: present
definition:
apiVersion: operators.coreos.com/v1alpha1
@@ -20,7 +19,7 @@
source: operatorhubio-catalog
sourceNamespace: olm
- name: Create a Subscription for EDB Postgresql Operator
k8s:
kubernetes.core.k8s:
state: present
definition:
apiVersion: operators.coreos.com/v1alpha1


@@ -0,0 +1,5 @@
---
collections:
- ibm.cloudcollection
- name: kubernetes.core
version: 2.0.0


@@ -0,0 +1,6 @@
---
total_ipv4_address_count: 256
kube_version: 1.26.4 # command to list all supported k8s/openshift `ibmcloud ks versions`
node_flavor: bx2.4x16
workers: 3
nfs_image: ibm-ubuntu-20-04-2-minimal-amd64-1


@@ -0,0 +1,174 @@
# Ansible Playbooks for odo testing
## IBM Cloud Kubernetes Cluster
This ansible playbook deploys a VPC Kubernetes/OpenShift cluster on IBM Cloud and an NFS server on the same VPC (to be used for dynamic storage provisioning - deploying the NFS provisioner is required, see below).
It uses the [IBM Cloud Ansible Collections](https://github.com/IBM-Cloud/ansible-collection-ibm/).
### VPC Resources
The following VPC infrastructure resources will be created (Ansible modules in
parentheses):
* Resource group (ibm_resource_group)
* VPC (ibm_is_vpc)
* Security Group (ibm_is_security_group_rule)
* Public gateway (ibm_is_public_gateway)
* VPC Subnet (ibm_is_subnet)
* SSH Key (ibm_is_ssh_key)
* Virtual Server Instance (ibm_is_instance)
* Floating IP (ibm_is_floating_ip)
* Cloud Object Storage (ibm_resource_instance)
* VPC Kubernetes Cluster (ibm_container_vpc_cluster)
All created resources (except the resource group and SSH key) will be inside the created Resource Group.
Note that:
- ibm_is_security_group_rule is not idempotent: each time the playbook is run, an entry allowing port 22 is added to the Inbound Rules of the Security Group. You should remove the duplicates from the UI and keep only one entry.
- I (feloy) didn't find a way to uninstall an addon from a cluster using the IBM Cloud ansible collection (https://github.com/IBM-Cloud/ansible-collection-ibm/issues/70). You will need to remove the "Block Storage for VPC" default add-on if you install an NFS provisioner for this cluster.
### Configuration Parameters
The following parameters can be set by the user, either by editing `vars.yml` or by using the `-e` flag on the `ansible-playbook` command line:
* `name_prefix`: Prefix used to name created resources
* `cluster_zone`: Zone in which the resources will be deployed
* `total_ipv4_address_count`: Number of IPv4 addresses available in the VPC subnet
* `ssh_public_key`: SSH Public key deployed on the NFS server
* `nfs_image`: The name of the image used to deploy the NFS server
* `kube_version`: Kubernetes or OpenShift version. The list of versions can be obtained with `ibmcloud ks versions`
* `node_flavor`: Flavor of workers of the cluster. The list of flavors can be obtained with `ibmcloud ks flavors --zone ${CLUSTER_ZONE} --provider vpc-gen2`
* `workers`: Number of workers on the cluster
* `cluster_id_file`: File in which the cluster ID will be saved
* `nfs_ip_file`: File in which the private IP of the NFS server will be saved
### Running
#### Set API Key and Region
1. [Obtain an IBM Cloud API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui).
2. Export your API key to the `IC_API_KEY` environment variable:
```
export IC_API_KEY=<YOUR_API_KEY_HERE>
```
3. Export desired IBM Cloud region to the 'IC_REGION' environment variable:
```
export IC_REGION=<REGION_NAME_HERE>
```
You can get available regions supporting Kubernetes clusters on the page https://cloud.ibm.com/docs/containers?topic=containers-regions-and-zones.
#### Install Ansible collections
To install the required Ansible collections, run:
```
ansible-galaxy collection install -r requirements.yml
```
#### Create
To create all resources, run the 'create' playbook:
For example:
```
$ ansible-playbook create.yml \
-e name_prefix=odo-tests-openshift \
-e kube_version=4.7_openshift \
-e cluster_id_file=/tmp/openshift_id \
-e nfs_ip_file=/tmp/nfs_openshift_ip \
--key-file <path_to_private_key> # For an OpenShift cluster v4.7
$ ansible-playbook create.yml \
-e name_prefix=odo-tests-kubernetes \
-e kube_version=1.20 \
-e cluster_id_file=/tmp/kubernetes_id \
-e nfs_ip_file=/tmp/nfs_kubernetes_ip \
--key-file <path_to_private_key> # For a Kubernetes cluster v1.20
```
The `path_to_private_key` file contains the SSH private key associated with the SSH public key set in the `ssh_public_key` configuration parameter.
#### Destroy
To destroy all resources run the 'destroy' playbook:
```
ansible-playbook destroy.yml -e name_prefix=...
```
## Kubernetes Operators
This ansible playbook deploys operators on a Kubernetes cluster. The cluster should be running the Operator Lifecycle Manager ([OLM](https://olm.operatorframework.io/)), either natively for an OpenShift cluster, or by installing it on a Kubernetes cluster.
To install OLM on a Kubernetes cluster, go to the [OLM releases page](https://github.com/operator-framework/operator-lifecycle-manager/releases/) (the latest version is displayed at the top) and execute the commands described under the "Scripted" section. At the time this document was written, the latest version was v0.21.2:
```
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.21.2/install.sh | bash -s v0.21.2
```
### Running
1. Install necessary Python modules:
```
pip3 install ansible openshift
```
2. Install Ansible collections
To install the required Ansible collections, run:
```
ansible-galaxy collection install -r requirements.yml
```
3. Connect to the cluster and make sure your `kubeconfig` points to the cluster (see the sketch after this list).
4. Install the operators for OpenShift / Kubernetes:
```
ansible-playbook operators-openshift.yml
```
or
```
ansible-playbook operators-kubernetes.yml
```
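A minimal sketch of steps 3 and 4 for an IBM Cloud cluster, assuming the `ibmcloud` CLI with the `ks` plugin and `jq` are installed (the cluster name is a placeholder):
```
ibmcloud login --apikey $IC_API_KEY -r $IC_REGION
CLUSTER_ID=$(ibmcloud ks cluster get -c <cluster-name> --output json | jq -r '.id')
ibmcloud ks cluster config --cluster $CLUSTER_ID --admin
kubectl config current-context   # confirm kubeconfig now points at the cluster
ansible-playbook operators-kubernetes.yml
```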
## NFS provisioner
You can run the following commands on a cluster (either Kubernetes or OpenShift) to deploy the NFS provisioner. You will need to uninstall the "Block Storage for VPC" add-on, installed by default, to make the NFS provisioner work correctly.
```
$ helm repo add nfs-subdir-external-provisioner \
https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner \
nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=$(</tmp/nfs_ip) \
--set nfs.path=/mnt/nfs \
--set storageClass.defaultClass=true \
--set storageClass.onDelete=delete
```
## Devfile registry reverse proxy
To install a reverse proxy caching the requests to the Staging Devfile registry (https://registry.stage.devfile.io),
you can run the following command:
```
kubectl apply -f devfile-proxy.yaml
```
This will install an nginx instance configured as a reverse proxy with the Staging Devfile registry as its only backend.
A publicly accessible LoadBalancer service will be created. To limit traffic on the proxy, requests are restricted
to user agents beginning with `containerd` or `Go-http-client`.
The integration tests detect the presence of the LoadBalancer service and use the proxy if the service is present
and provides an external address.
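A quick way to check the proxy from outside the cluster, assuming the LoadBalancer service has been assigned an external IP (`/index.json` is used here as an illustrative registry endpoint):
```
EXTERNAL_IP=$(kubectl -n devfile-proxy get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# allowed user agent: the request is proxied to the Staging Devfile registry
curl -H "User-Agent: Go-http-client/1.1" "http://$EXTERNAL_IP/index.json"
# any other user agent maps to an empty backend and the request fails
curl "http://$EXTERNAL_IP/index.json"
```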


@@ -0,0 +1,99 @@
---
- name: Create OpenShift Cluster on IBM Cloud
hosts: localhost
collections:
- ibm.cloudcollection
tasks:
- name: Fetch the variables from var file
include_vars:
file: vars.yml
- name: Configure Resource Group
ibm_resource_group:
name: "{{ name_prefix }}-group"
state: available
register: rg_create_output
- name: Save Resource Group as fact
set_fact:
cacheable: True
rg: "{{ rg_create_output.resource }}"
when: rg_create_output.rc==0
- name: Configure VPC
ibm_is_vpc:
name: "{{ name_prefix }}-vpc"
resource_group: "{{ rg.id }}"
state: available
register: vpc_create_output
- name: Save VPC as fact
set_fact:
cacheable: True
vpc: "{{ vpc_create_output.resource }}"
when: vpc_create_output.rc==0
- name: Configure Public Gateway
ibm_is_public_gateway:
name: "{{ name_prefix }}-gw"
resource_group: "{{ rg.id }}"
zone: "{{ cluster_zone }}"
vpc: "{{ vpc.id }}"
state: available
register: gw_create_output
- name: Save Public Gateway as fact
set_fact:
cacheable: True
gw: "{{ gw_create_output.resource }}"
when: gw_create_output.rc==0
- name: Configure VPC Subnet
ibm_is_subnet:
name: "{{ name_prefix }}-subnet"
resource_group: "{{ rg.id }}"
vpc: "{{ vpc.id }}"
zone: "{{ cluster_zone }}"
total_ipv4_address_count: "{{ total_ipv4_address_count }}"
public_gateway: "{{ gw.id }}"
state: available
register: subnet_create_output
- name: Save VPC Subnet as fact
set_fact:
cacheable: True
subnet: "{{ subnet_create_output.resource }}"
when: subnet_create_output.rc==0
- name: Configure Cloud Object Storage
ibm_resource_instance:
name: "{{ name_prefix }}-cos"
resource_group_id: "{{ rg.id }}"
service: "cloud-object-storage"
plan: "standard"
location: "global"
state: available
register: cos_create_output
- name: Save Cloud ObjectStorage Subnet as fact
set_fact:
cacheable: True
cos: "{{ cos_create_output.resource }}"
when: cos_create_output.rc==0
- name: Configure cluster
ibm_container_vpc_cluster:
name: "{{ name_prefix }}-openshift-cluster"
resource_group_id: "{{ rg.id }}"
kube_version: "{{ kube_version }}"
flavor: "{{ node_flavor }}"
worker_count: "{{ workers }}"
vpc_id: "{{ vpc.id }}"
cos_instance_crn: "{{ cos.crn }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster_zone }}"
}
state: available
register: cluster_create_output


@@ -0,0 +1,156 @@
---
- name: Destroy OpenShift Cluster on IBM Cloud
hosts: localhost
collections:
- ibm.cloudcollection
tasks:
- name: Fetch the variables from var file
include_vars:
file: vars.yml
- name: Get the vpc details
ibm_is_vpc_info:
name: "{{ name_prefix }}-vpc"
failed_when:
- vpc_output.rc != 0
- '"No VPC found" not in vpc_output.stderr'
register: vpc_output
- name: set vpc in fact
set_fact:
cacheable: True
vpc: "{{ vpc_output.resource }}"
when: vpc_output.resource.id is defined
- name: Get the subnet details
ibm_is_subnet_info:
name: "{{ name_prefix }}-subnet"
failed_when:
- subnet_output.rc != 0
- '"No subnet found" not in subnet_output.stderr'
register: subnet_output
- name: set subnet in fact
set_fact:
cacheable: True
subnet: "{{ subnet_output.resource }}"
when: subnet_output.resource.id is defined
- name: Get the cluster details
ibm_container_vpc_cluster_info:
name: "{{ name_prefix }}-openshift-cluster"
failed_when:
- cluster_output.rc != 0
- '"cluster could not be found" not in cluster_output.stderr'
register: cluster_output
- name: set cluster in fact
set_fact:
cacheable: True
cluster: "{{ cluster_output.resource }}"
when: cluster_output.resource.id is defined
- name: Remove Cluster
ibm_container_vpc_cluster:
id: "{{ cluster.id }}"
state: absent
name: "{{ name_prefix }}-openshift-cluster"
vpc_id: "{{ vpc.id }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster_zone }}"
}
when:
- vpc is defined
- subnet is defined
- cluster is defined
- name: Get the Resource group details
ibm_resource_group_info:
name: "{{ name_prefix }}-group"
failed_when:
- rg_output.rc != 0
- '"Given Resource Group is not found" not in rg_output.stderr'
register: rg_output
- name: set Resource group in fact
set_fact:
cacheable: True
rg: "{{ rg_output.resource }}"
when: rg_output.resource.id is defined
- name: Get the Cloud Object Storage details
ibm_resource_instance_info:
name: "{{ name_prefix }}-cos"
resource_group_id: "{{ rg.id }}"
failed_when:
- cos_output.rc != 0
- '"No resource instance found" not in cos_output.stderr'
when: rg is defined
register: cos_output
- name: set Cloud Object Storage in fact
set_fact:
cacheable: True
cos: "{{ cos_output.resource }}"
when: cos_output.resource.id is defined
- name: Remove Cloud Object Storage
ibm_resource_instance:
id: "{{ cos.id }}"
name: "{{ name_prefix }}-cos"
service: "cloud-object-storage"
plan: "standard"
location: "global"
state: absent
when: cos is defined
- name: Remove VPC Subnet
ibm_is_subnet:
state: absent
id: "{{ subnet.id }}"
when: subnet is defined
- name: Get the Public Gateway details
ibm_is_public_gateway_info:
name: "{{ name_prefix }}-gw"
failed_when:
- gw_output.rc != 0
- '"No Public gateway found" not in gw_output.stderr'
register: gw_output
- name: set Public Gateway in fact
set_fact:
cacheable: True
gw: "{{ gw_output.resource }}"
when: gw_output.resource.id is defined
- name: Remove Public Gateway
ibm_is_public_gateway:
id: "{{ gw.id }}"
state: absent
when: gw is defined
- name: Remove VPC
ibm_is_vpc:
state: absent
id: "{{ vpc.id }}"
when: vpc is defined
- name: Remove Resource Group
ibm_resource_group:
state: absent
id: "{{ rg.id }}"
when: rg is defined


@@ -0,0 +1,125 @@
apiVersion: v1
kind: Namespace
metadata:
name: devfile-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: devfile-proxy
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
volumeMounts:
- mountPath: /etc/nginx # mount nginx-conf volume to /etc/nginx
readOnly: true
name: nginx-conf
- mountPath: /var/log/nginx
name: log
- mountPath: /var/cache/nginx
name: cache
- mountPath: /var/run
name: run
- mountPath: /data/nginx/cache
name: nginx-cache
resources:
requests:
memory: 256Mi
cpu: 256m
limits:
memory: 256Mi
cpu: 256m
volumes:
- name: nginx-conf
configMap:
name: nginx-conf # place ConfigMap `nginx-conf` on /etc/nginx
items:
- key: nginx.conf
path: nginx.conf
- name: log
emptyDir: {}
- name: cache
emptyDir: {}
- name: run
emptyDir: {}
- name: nginx-cache
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: devfile-proxy
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
selector:
app: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-conf
namespace: devfile-proxy
data:
nginx.conf: |
events {
worker_connections 1024;
}
http {
proxy_cache_path
/data/nginx/cache
levels=1:2
keys_zone=app:1M
max_size=100M;
log_format cacheStatus '$host $server_name $server_port $remote_addr $upstream_cache_status $remote_user [$time_local] " $request " '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Need to have a DNS server to resolve the FQDNs provided to proxy_pass
# Use the DNS resolver provided to the container
resolver 172.21.0.10;
map "$http_user_agent" $proxybackend {
default "";
"~^containerd" https://registry.stage.devfile.io;
"~^Go-http-client" https://registry.stage.devfile.io;
}
server {
listen 8080;
error_log /dev/stderr error;
access_log /dev/stdout cacheStatus;
location / {
proxy_cache app;
proxy_pass $proxybackend;
proxy_set_header Host registry.stage.devfile.io;
proxy_ignore_headers Set-Cookie;
proxy_ignore_headers Cache-Control;
proxy_cache_valid any 30m;
}
}
}


@@ -1,12 +1,10 @@
---
- name: Install Operators on Kubernetes Cluster
hosts: localhost
collections:
- community.kubernetes
tasks:
- name: Create a Subscription for Service Binding Operator
k8s:
kubernetes.core.k8s:
state: present
definition:
apiVersion: operators.coreos.com/v1alpha1
@@ -20,7 +18,7 @@
source: redhat-operators
sourceNamespace: openshift-marketplace
- name: Create a Subscription for EDB Postgresql Operator
k8s:
kubernetes.core.k8s:
state: present
definition:
apiVersion: operators.coreos.com/v1alpha1


@@ -0,0 +1,5 @@
---
collections:
- ibm.cloudcollection
- name: kubernetes.core
version: 2.0.0


@@ -0,0 +1,6 @@
---
total_ipv4_address_count: 256
kube_version: 4.12.13_openshift # command to list all supported k8s/openshift `ibmcloud ks versions`
node_flavor: bx2.4x16
workers: 3
nfs_image: ibm-ubuntu-20-04-2-minimal-amd64-1


@@ -0,0 +1,5 @@
---
collections:
- ibm.cloudcollection
- name: kubernetes.core
version: 2.0.0


@@ -1,12 +1,4 @@
---
name_prefix: odo-test-openshift
total_ipv4_address_count: 256
cluster_zone: eu-de-2
kube_version: 4.7_openshift
node_flavor: bx2.4x16
workers: 3
nfs_image: ibm-ubuntu-20-04-2-minimal-amd64-1
ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABgQCkFJaWm3EMxPLf/JMfJamXeVrsOQBgKe4F7thRZ5RYYmA0a9owJ9pievDPB459K7EWBNYZH1UtuG9ipCR25Y8baDZ5XWnK4e3z4nNhEbMxUnS2or52JCuix6LBbYEbsgLPPiof8kXUEKUGvzfJ0vASFcrF9XKMnQ789A9ee8BMd0SZMAs2Jp2wQ3gjzyg4ZpQvfmQ0ua4EVDu3RItSba8F5cfgGLVQPj+4K+WHfQf0jG7ew5F7LolfdjgAj041RbpYZx8SNDxdsoi06rttn92aEUgutGDHaa/P4JhuFZg4IRFwqDrEEyLuEhP89DniWmTyewyetEScmu2gVmFSFfHzPx3L9pkOP/zDTRzuuOYXNmqNUH3nt/kOxxzzuubTiSpGCP3H9NgYzOQk2E4m94znFVb3fifXETL9kOsZQ90yMnRrYRSPSmAcIiFXSWAFUIQ4Y6aTMI2D1ZoS2gMniKRN3SldkH3UeuKq2jf7TZpDMSQCcPbwZN0Qe+uLt/fiQKM=
vsi_image: odo-test-windows # Image ID: r010-1c3682a3-0013-496f-a663-9772f933d5be || this image is a private image and only available in Developer QE
vsi_profile: bx2-2x8
sshkey_id: r010-0f01beeb-9b45-4406-93bb-4a8604c8a3e0 # sshkey id ansible-ssh-key in developer qe group in VPC sshkey
---
name_prefix: odo-stage
cluster_zone: eu-de-2
ssh_public_key: AAAAB3NzaC1yc2EAAAADAQABAAABgQCkFJaWm3EMxPLf/JMfJamXeVrsOQBgKe4F7thRZ5RYYmA0a9owJ9pievDPB459K7EWBNYZH1UtuG9ipCR25Y8baDZ5XWnK4e3z4nNhEbMxUnS2or52JCuix6LBbYEbsgLPPiof8kXUEKUGvzfJ0vASFcrF9XKMnQ789A9ee8BMd0SZMAs2Jp2wQ3gjzyg4ZpQvfmQ0ua4EVDu3RItSba8F5cfgGLVQPj+4K+WHfQf0jG7ew5F7LolfdjgAj041RbpYZx8SNDxdsoi06rttn92aEUgutGDHaa/P4JhuFZg4IRFwqDrEEyLuEhP89DniWmTyewyetEScmu2gVmFSFfHzPx3L9pkOP/zDTRzuuOYXNmqNUH3nt/kOxxzzuubTiSpGCP3H9NgYzOQk2E4m94znFVb3fifXETL9kOsZQ90yMnRrYRSPSmAcIiFXSWAFUIQ4Y6aTMI2D1ZoS2gMniKRN3SldkH3UeuKq2jf7TZpDMSQCcPbwZN0Qe+uLt/fiQKM=


@@ -0,0 +1,174 @@
# Ansible Playbooks for odo testing
## IBM Cloud Kubernetes Cluster
This ansible playbook deploys a VPC Kubernetes/OpenShift cluster on IBM Cloud and an NFS server on the same VPC (to be used for dynamic storage provisioning - deploying the NFS provisioner is required, see below).
It uses the [IBM Cloud Ansible Collections](https://github.com/IBM-Cloud/ansible-collection-ibm/).
### VPC Resources
The following VPC infrastructure resources will be created (Ansible modules in
parentheses):
* Resource group (ibm_resource_group)
* VPC (ibm_is_vpc)
* Security Group (ibm_is_security_group_rule)
* Public gateway (ibm_is_public_gateway)
* VPC Subnet (ibm_is_subnet)
* SSH Key (ibm_is_ssh_key)
* Virtual Server Instance (ibm_is_instance)
* Floating IP (ibm_is_floating_ip)
* Cloud Object Storage (ibm_resource_instance)
* VPC Kubernetes Cluster (ibm_container_vpc_cluster)
All created resources (except the resource group and SSH key) will be inside the created Resource Group.
Note that:
- ibm_is_security_group_rule is not idempotent: each time the playbook is run, an entry allowing port 22 is added to the Inbound Rules of the Security Group. You should remove the duplicates from the UI and keep only one entry.
- I (feloy) didn't find a way to uninstall an addon from a cluster using the IBM Cloud ansible collection (https://github.com/IBM-Cloud/ansible-collection-ibm/issues/70). You will need to remove the "Block Storage for VPC" default add-on if you install an NFS provisioner for this cluster.
### Configuration Parameters
The following parameters can be set by the user, either by editing `vars.yml` or by using the `-e` flag on the `ansible-playbook` command line:
* `name_prefix`: Prefix used to name created resources
* `cluster_zone`: Zone in which the resources will be deployed
* `total_ipv4_address_count`: Number of IPv4 addresses available in the VPC subnet
* `ssh_public_key`: SSH Public key deployed on the NFS server
* `nfs_image`: The name of the image used to deploy the NFS server
* `kube_version`: Kubernetes or OpenShift version. The list of versions can be obtained with `ibmcloud ks versions`
* `node_flavor`: Flavor of workers of the cluster. The list of flavors can be obtained with `ibmcloud ks flavors --zone ${CLUSTER_ZONE} --provider vpc-gen2`
* `workers`: Number of workers on the cluster
* `cluster_id_file`: File in which the cluster ID will be saved
* `nfs_ip_file`: File in which the private IP of the NFS server will be saved
### Running
#### Set API Key and Region
1. [Obtain an IBM Cloud API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui).
2. Export your API key to the `IC_API_KEY` environment variable:
```
export IC_API_KEY=<YOUR_API_KEY_HERE>
```
3. Export desired IBM Cloud region to the 'IC_REGION' environment variable:
```
export IC_REGION=<REGION_NAME_HERE>
```
You can get available regions supporting Kubernetes clusters on the page https://cloud.ibm.com/docs/containers?topic=containers-regions-and-zones.
#### Install Ansible collections
To install the required Ansible collections, run:
```
ansible-galaxy collection install -r requirements.yml
```
#### Create
To create all resources, run the 'create' playbook:
For example:
```
$ ansible-playbook create.yml \
-e name_prefix=odo-tests-openshift \
-e kube_version=4.7_openshift \
-e cluster_id_file=/tmp/openshift_id \
-e nfs_ip_file=/tmp/nfs_openshift_ip \
--key-file <path_to_private_key> # For an OpenShift cluster v4.7
$ ansible-playbook create.yml \
-e name_prefix=odo-tests-kubernetes \
-e kube_version=1.20 \
-e cluster_id_file=/tmp/kubernetes_id \
-e nfs_ip_file=/tmp/nfs_kubernetes_ip \
--key-file <path_to_private_key> # For a Kubernetes cluster v1.20
```
The `path_to_private_key` file contains the SSH private key associated with the SSH public key set in the `ssh_public_key` configuration parameter.
#### Destroy
To destroy all resources run the 'destroy' playbook:
```
ansible-playbook destroy.yml -e name_prefix=...
```
## Kubernetes Operators
This ansible playbook deploys operators on a Kubernetes cluster. The cluster should be running the Operator Lifecycle Manager ([OLM](https://olm.operatorframework.io/)), either natively for an OpenShift cluster, or by installing it on a Kubernetes cluster.
To install OLM on a Kubernetes cluster, go to the [OLM releases page](https://github.com/operator-framework/operator-lifecycle-manager/releases/) (the latest version is displayed at the top) and execute the commands described under the "Scripted" section. At the time this document was written, the latest version was v0.21.2:
```
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.21.2/install.sh | bash -s v0.21.2
```
### Running
1. Install necessary Python modules:
```
pip3 install ansible openshift
```
2. Install Ansible collections
To install the required Ansible collections, run:
```
ansible-galaxy collection install -r requirements.yml
```
3. Connect to the cluster and make sure your `kubeconfig` points to the cluster.
4. Install the operators for OpenShift / Kubernetes:
```
ansible-playbook operators-openshift.yml
```
or
```
ansible-playbook operators-kubernetes.yml
```
## NFS provisioner
You can run the following commands on a cluster (either Kubernetes or OpenShift) to deploy the NFS provisioner. You will need to uninstall the "Block Storage for VPC" add-on, installed by default, to make the NFS provisioner work correctly.
```
$ helm repo add nfs-subdir-external-provisioner \
https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner \
nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=$(</tmp/nfs_ip) \
--set nfs.path=/mnt/nfs \
--set storageClass.defaultClass=true \
--set storageClass.onDelete=delete
```
## Devfile registry reverse proxy
To install a reverse proxy caching the requests to the Staging Devfile registry (https://registry.stage.devfile.io),
you can run the following command:
```
kubectl apply -f devfile-proxy.yaml
```
This will install an nginx instance configured as a reverse proxy with the Staging Devfile registry as its only backend.
A publicly accessible LoadBalancer service will be created. To limit traffic on the proxy, requests are restricted
to user agents beginning with `containerd` or `Go-http-client`.
The integration tests detect the presence of the LoadBalancer service and use the proxy if the service is present
and provides an external address.


@@ -0,0 +1,99 @@
---
- name: Create Windows OpenShift Cluster on IBM Cloud
hosts: localhost
collections:
- ibm.cloudcollection
tasks:
- name: Fetch the variables from var file
include_vars:
file: vars.yml
- name: Configure Resource Group
ibm_resource_group:
name: "{{ name_prefix }}-group"
state: available
register: rg_create_output
- name: Save Resource Group as fact
set_fact:
cacheable: True
rg: "{{ rg_create_output.resource }}"
when: rg_create_output.rc==0
- name: Configure VPC
ibm_is_vpc:
name: "{{ name_prefix }}-vpc"
resource_group: "{{ rg.id }}"
state: available
register: vpc_create_output
- name: Save VPC as fact
set_fact:
cacheable: True
vpc: "{{ vpc_create_output.resource }}"
when: vpc_create_output.rc==0
- name: Configure Public Gateway
ibm_is_public_gateway:
name: "{{ name_prefix }}-gw"
resource_group: "{{ rg.id }}"
zone: "{{ cluster_zone }}"
vpc: "{{ vpc.id }}"
state: available
register: gw_create_output
- name: Save Public Gateway as fact
set_fact:
cacheable: True
gw: "{{ gw_create_output.resource }}"
when: gw_create_output.rc==0
- name: Configure VPC Subnet
ibm_is_subnet:
name: "{{ name_prefix }}-subnet"
resource_group: "{{ rg.id }}"
vpc: "{{ vpc.id }}"
zone: "{{ cluster_zone }}"
total_ipv4_address_count: "{{ total_ipv4_address_count }}"
public_gateway: "{{ gw.id }}"
state: available
register: subnet_create_output
- name: Save VPC Subnet as fact
set_fact:
cacheable: True
subnet: "{{ subnet_create_output.resource }}"
when: subnet_create_output.rc==0
- name: Configure Cloud Object Storage
ibm_resource_instance:
name: "{{ name_prefix }}-cos"
resource_group_id: "{{ rg.id }}"
service: "cloud-object-storage"
plan: "standard"
location: "global"
state: available
register: cos_create_output
- name: Save Cloud ObjectStorage Subnet as fact
set_fact:
cacheable: True
cos: "{{ cos_create_output.resource }}"
when: cos_create_output.rc==0
- name: Configure cluster
ibm_container_vpc_cluster:
name: "{{ name_prefix }}-openshift-win-cluster"
resource_group_id: "{{ rg.id }}"
kube_version: "{{ kube_version }}"
flavor: "{{ node_flavor }}"
worker_count: "{{ workers }}"
vpc_id: "{{ vpc.id }}"
cos_instance_crn: "{{ cos.crn }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster_zone }}"
}
state: available
register: cluster_create_output


@@ -0,0 +1,156 @@
---
- name: Destroy OpenShift Cluster on IBM Cloud
hosts: localhost
collections:
- ibm.cloudcollection
tasks:
- name: Fetch the variables from var file
include_vars:
file: vars.yml
- name: Get the vpc details
ibm_is_vpc_info:
name: "{{ name_prefix }}-vpc"
failed_when:
- vpc_output.rc != 0
- '"No VPC found" not in vpc_output.stderr'
register: vpc_output
- name: set vpc in fact
set_fact:
cacheable: True
vpc: "{{ vpc_output.resource }}"
when: vpc_output.resource.id is defined
- name: Get the subnet details
ibm_is_subnet_info:
name: "{{ name_prefix }}-subnet"
failed_when:
- subnet_output.rc != 0
- '"No subnet found" not in subnet_output.stderr'
register: subnet_output
- name: set subnet in fact
set_fact:
cacheable: True
subnet: "{{ subnet_output.resource }}"
when: subnet_output.resource.id is defined
- name: Get the cluster details
ibm_container_vpc_cluster_info:
name: "{{ name_prefix }}-openshift-win-cluster"
failed_when:
- cluster_output.rc != 0
- '"cluster could not be found" not in cluster_output.stderr'
register: cluster_output
- name: set cluster in fact
set_fact:
cacheable: True
cluster: "{{ cluster_output.resource }}"
when: cluster_output.resource.id is defined
- name: Remove Cluster
ibm_container_vpc_cluster:
id: "{{ cluster.id }}"
state: absent
name: "{{ name_prefix }}-openshift-win-cluster"
vpc_id: "{{ vpc.id }}"
zones:
- {
subnet_id: "{{ subnet.id }}",
name: "{{ cluster_zone }}"
}
when:
- vpc is defined
- subnet is defined
- cluster is defined
- name: Get the Resource group details
ibm_resource_group_info:
name: "{{ name_prefix }}-group"
failed_when:
- rg_output.rc != 0
- '"Given Resource Group is not found" not in rg_output.stderr'
register: rg_output
- name: set Resource group in fact
set_fact:
cacheable: True
rg: "{{ rg_output.resource }}"
when: rg_output.resource.id is defined
- name: Get the Cloud Object Storage details
ibm_resource_instance_info:
name: "{{ name_prefix }}-cos"
resource_group_id: "{{ rg.id }}"
failed_when:
- cos_output.rc != 0
- '"No resource instance found" not in cos_output.stderr'
when: rg is defined
register: cos_output
- name: set Cloud Object Storage in fact
set_fact:
cacheable: True
cos: "{{ cos_output.resource }}"
when: cos_output.resource.id is defined
- name: Remove Cloud Object Storage
ibm_resource_instance:
id: "{{ cos.id }}"
name: "{{ name_prefix }}-cos"
service: "cloud-object-storage"
plan: "standard"
location: "global"
state: absent
when: cos is defined
- name: Remove VPC Subnet
ibm_is_subnet:
state: absent
id: "{{ subnet.id }}"
when: subnet is defined
- name: Get the Public Gateway details
ibm_is_public_gateway_info:
name: "{{ name_prefix }}-gw"
failed_when:
- gw_output.rc != 0
- '"No Public gateway found" not in gw_output.stderr'
register: gw_output
- name: set Public Gateway in fact
set_fact:
cacheable: True
gw: "{{ gw_output.resource }}"
when: gw_output.resource.id is defined
- name: Remove Public Gateway
ibm_is_public_gateway:
id: "{{ gw.id }}"
state: absent
when: gw is defined
- name: Remove VPC
ibm_is_vpc:
state: absent
id: "{{ vpc.id }}"
when: vpc is defined
- name: Remove Resource Group
ibm_resource_group:
state: absent
id: "{{ rg.id }}"
when: rg is defined


@@ -0,0 +1,125 @@
apiVersion: v1
kind: Namespace
metadata:
name: devfile-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: devfile-proxy
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
volumeMounts:
- mountPath: /etc/nginx # mount nginx-conf volume to /etc/nginx
readOnly: true
name: nginx-conf
- mountPath: /var/log/nginx
name: log
- mountPath: /var/cache/nginx
name: cache
- mountPath: /var/run
name: run
- mountPath: /data/nginx/cache
name: nginx-cache
resources:
requests:
memory: 256Mi
cpu: 256m
limits:
memory: 256Mi
cpu: 256m
volumes:
- name: nginx-conf
configMap:
name: nginx-conf # place ConfigMap `nginx-conf` on /etc/nginx
items:
- key: nginx.conf
path: nginx.conf
- name: log
emptyDir: {}
- name: cache
emptyDir: {}
- name: run
emptyDir: {}
- name: nginx-cache
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: devfile-proxy
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
selector:
app: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-conf
namespace: devfile-proxy
data:
nginx.conf: |
events {
worker_connections 1024;
}
http {
proxy_cache_path
/data/nginx/cache
levels=1:2
keys_zone=app:1M
max_size=100M;
log_format cacheStatus '$host $server_name $server_port $remote_addr $upstream_cache_status $remote_user [$time_local] " $request " '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Need to have a DNS server to resolve the FQDNs provided to proxy_pass
# Use the DNS resolver provided to the container
resolver 172.21.0.10;
map "$http_user_agent" $proxybackend {
default "";
"~^containerd" https://registry.stage.devfile.io;
"~^Go-http-client" https://registry.stage.devfile.io;
}
server {
listen 8080;
error_log /dev/stderr error;
access_log /dev/stdout cacheStatus;
location / {
proxy_cache app;
proxy_pass $proxybackend;
proxy_set_header Host registry.stage.devfile.io;
proxy_ignore_headers Set-Cookie;
proxy_ignore_headers Cache-Control;
proxy_cache_valid any 30m;
}
}
}


@@ -0,0 +1,33 @@
---
- name: Install Operators on Kubernetes Cluster
hosts: localhost
tasks:
- name: Create a Subscription for Service Binding Operator
kubernetes.core.k8s:
state: present
definition:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: my-service-binding-operator
namespace: openshift-operators
spec:
channel: stable
name: rh-service-binding-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
- name: Create a Subscription for EDB Postgresql Operator
kubernetes.core.k8s:
state: present
definition:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: my-cloud-native-postgresql
namespace: openshift-operators
spec:
channel: stable-v1.18
name: cloud-native-postgresql
source: certified-operators
sourceNamespace: openshift-marketplace


@@ -0,0 +1,5 @@
---
collections:
- ibm.cloudcollection
- name: kubernetes.core
version: 2.0.0


@@ -0,0 +1,9 @@
---
total_ipv4_address_count: 256
kube_version: 4.12.13_openshift # command to list all supported k8s/openshift `ibmcloud ks versions`
node_flavor: bx2.4x16
workers: 3
nfs_image: ibm-ubuntu-20-04-2-minimal-amd64-1
vsi_image: odo-test-windows # Image ID: r010-1c3682a3-0013-496f-a663-9772f933d5be || this image is a private image and only available in Developer QE
vsi_profile: bx2-2x8
sshkey_id: r010-0f01beeb-9b45-4406-93bb-4a8604c8a3e0 # sshkey id ansible-ssh-key in developer qe group in VPC sshkey

scripts/ansible/README.md

@@ -0,0 +1,45 @@
# OVERVIEW
This directory contains ansible manifest files that create the complete infrastructure on IBMCloud.
These scripts are used for automation in two GitHub Actions workflows:
- one creates a staging environment to test whether the changes work as expected
- the other applies the new changes to the production infrastructure
## ***NOTE***
>Deletion of the staging environment is done manually: the GitHub Action only creates the staging environment, and testing it is also manual.
### __How to create the complete infra?__
> NOTE: you will need to export the path to the SSH key used for login (the variable name is `SSHKEY`)
Run the following commands
``` shell
# export ssh_key path
export SSHKEY=/path/to/ssh/key
# expose the Region and API key for ansible script
export IC_REGION="eu-de"
export IC_API_KEY="<API_KEY>"
ansible-playbook create-infra.yaml
```
### __How to delete complete infra?__
Run the following commands
``` shell
# expose the Region and API key for ansible script
export IC_REGION="eu-de"
export IC_API_KEY="<API_KEY>"
ansible-playbook delete-infra.yaml
```
### Manual steps to set up a cluster
The following manual steps need to be performed on each cluster (see the sketch after this list):
#### 1. [install operators](./Cluster/kubernetes-cluster/README.md#kubernetes-operators)
> ___NOTE: only if you want to use NFS for storage___
#### 2. [adding nfs server to cluster](./Cluster/kubernetes-cluster/README.md#nfs-provisioner)
>___NOTE: only do this step if configuring NFS on the cluster___
#### 3. [remove storage](./Cluster/README.md#to-remove-storage-addon-from-cluster)
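A rough sketch of the sequence, assuming the playbooks are run from this directory and your `kubeconfig` already points at the target cluster (the cluster ID is a placeholder):
``` shell
# 1. install the operators
ansible-playbook Cluster/kubernetes-cluster/operators-kubernetes.yml
# 2. (NFS only) deploy the NFS provisioner -- see the linked README for the full helm invocation
# 3. (NFS only) disable the default block storage add-on
ibmcloud ks cluster addon disable vpc-block-csi-driver -c <cluster-ID>
```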


@@ -0,0 +1,7 @@
---
- name: Create Infrastructure on IBM Cloud
hosts: localhost
- name: Play to create Clusters
ansible.builtin.import_playbook: ./Cluster/create-clusters.yaml


@@ -0,0 +1,7 @@
---
- name: Delete Infrastructure on IBM Cloud
hosts: localhost
- name: Play to delete Clusters
ansible.builtin.import_playbook: ./Cluster/delete-clusters.yaml


@@ -0,0 +1,5 @@
---
collections:
- ibm.cloudcollection
- name: kubernetes.core
version: 2.0.0