Configuring A Multi-Node Kubernetes Cluster On AWS Cloud Using Ansible
Here I am going to show you how to configure a multi-node Kubernetes cluster on AWS cloud using Ansible.
Before automating anything with Ansible, it is essential to know how to set up the desired environment manually. So first we have to configure the Kubernetes (k8s) cluster on AWS by hand, and then I will show you how to repeat the same process using Ansible.
For the manual setup, refer to the link below:
Now we will set up the Kubernetes (k8s) cluster on AWS using Ansible.
- Playbook to launch ec2 instances on the AWS cloud.
- hosts: 127.0.0.1
  vars_files:
    - secret.yml
  tasks:
    - name: "Creating Master Node"
      ec2_instance:
        region: ap-south-1
        image_id: ami-00bf4ae5a7909786c
        instance_type: t2.micro
        vpc_subnet_id: subnet-82fce8ea
        name: "Master"
        tags:
          Node: "Master"
        security_group: sg-080d9985aebdecbf9
        key_name: fb_key
        state: present
        aws_access_key: "{{ uname }}"
        aws_secret_key: "{{ pass }}"

    - name: "Creating 1st Slave Node"
      ec2_instance:
        region: ap-south-1
        image_id: ami-00bf4ae5a7909786c
        instance_type: t2.micro
        vpc_subnet_id: subnet-82fce8ea
        name: "Slave1"
        tags:
          Node: "Slave"
        security_group: sg-080d9985aebdecbf9
        key_name: fb_key
        state: present
        aws_access_key: "{{ uname }}"
        aws_secret_key: "{{ pass }}"

    - name: "Creating 2nd Slave Node"
      ec2_instance:
        region: ap-south-1
        image_id: ami-00bf4ae5a7909786c
        instance_type: t2.micro
        vpc_subnet_id: subnet-82fce8ea
        name: "Slave2"
        tags:
          Node: "Slave"
        security_group: sg-080d9985aebdecbf9
        key_name: fb_key
        state: present
        aws_access_key: "{{ uname }}"
        aws_secret_key: "{{ pass }}"
After running the playbook, the output confirms that all three instances have been created.
Now, to get the IPs of these instances automatically, I have used Ansible's dynamic inventory for EC2. You can get the EC2 dynamic inventory scripts from the link above:
Here is the Ansible configuration file:
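A minimal sketch of what this `ansible.cfg` might contain — the inventory path and key file location here are assumptions (adjust them to wherever the EC2 dynamic inventory script and the `fb_key` private key actually live on your controller node); `ec2-user` is the default login for Amazon Linux AMIs:

```ini
[defaults]
# Hypothetical paths - point these at your dynamic inventory script and key
inventory         = /etc/ansible/hosts/ec2.py
remote_user       = ec2-user
private_key_file  = /root/fb_key.pem
host_key_checking = False

[privilege_escalation]
become        = true
become_method = sudo
become_user   = root
```

Privilege escalation is enabled because the package, service, and kubeadm tasks in the playbooks below all require root.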
Now we have to provide the credentials of the AWS account; for this, I have exported the following environment variables:
export AWS_ACCESS_KEY_ID=<Your Access Key ID>
export AWS_SECRET_ACCESS_KEY=<Your Secret Key>
Check all hosts and the connectivity to them using the following commands:
ansible all --list-hosts
ansible all -m ping
Playbook for K8s master setup:
- name: "K8s Master Configuration"
  hosts: tag_Node_Master
  vars:
    pod_cidr_network: "10.240.0.0/16"
  tasks:
    - name: "Installing Required Packages"
      package:
        name:
          - "docker"
          - "iproute-tc"
        state: present

    - name: "Creating Yum Repo For Kubeadm, Kubelet, and Kubectl"
      yum_repository:
        name: kubernetes
        description: "Kubernetes"
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
        gpgcheck: yes
        gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
        exclude: kubelet kubeadm kubectl

    - name: "Installing Kubeadm, Kubelet, and Kubectl"
      yum:
        name: [ 'kubectl', 'kubeadm', 'kubelet' ]
        state: present
        disable_excludes: kubernetes

    - name: "Starting Kubelet Service"
      service:
        name: "kubelet"
        state: started
        enabled: yes

    - name: "Copying Daemon File To Change Docker's cgroup Driver"
      copy:
        src: "daemon.json"
        dest: "/etc/docker/daemon.json"

    - name: "Starting Docker Service"
      service:
        name: "docker"
        state: started
        enabled: yes

    - name: "Pulling Required Images"
      command: "kubeadm config images pull"

    - name: "Initializing the Kubernetes Cluster on the Master Node"
      command: "kubeadm init --pod-network-cidr={{ pod_cidr_network }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
      ignore_errors: True

    - name: "Configuration Files Setup"
      shell: |
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
      ignore_errors: True

    - name: "Installing the Flannel CNI Plugin"
      command: "kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
      ignore_errors: True

    - name: "Creating Join Token"
      command: "kubeadm token create --print-join-command"
      register: x
      ignore_errors: True

    - name: "Storing Token Locally"
      local_action: copy content="{{ x.stdout }}" dest=/tmp/token
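The `daemon.json` file copied in the playbook above is not shown in the article; a minimal version that switches Docker's cgroup driver to `systemd` (the driver kubelet expects, which is the whole reason the file is copied) would look like this:

```json
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
```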
Playbook for K8s slave setup:
- name: "K8s Slave Configuration"
  hosts: tag_Node_Slave
  tasks:
    - name: "Installing Required Packages"
      package:
        name:
          - "docker"
          - "iproute-tc"
        state: present

    - name: "Creating Yum Repo For Kubeadm, Kubelet, and Kubectl"
      yum_repository:
        name: kubernetes
        description: "Kubernetes"
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
        gpgcheck: yes
        gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
        exclude: kubelet kubeadm kubectl

    - name: "Installing Kubeadm, Kubelet, and Kubectl"
      yum:
        name: [ 'kubectl', 'kubeadm', 'kubelet' ]
        state: present
        disable_excludes: kubernetes

    - name: "Starting Kubelet Service"
      service:
        name: "kubelet"
        state: started
        enabled: yes

    - name: "Copying Daemon File To Change Docker's cgroup Driver"
      copy:
        src: "daemon.json"
        dest: "/etc/docker/daemon.json"

    - name: "Starting Docker Service"
      service:
        name: "docker"
        state: started
        enabled: yes

    - name: "Copying Kernel Settings File"
      copy:
        src: "k8s.conf"
        dest: "/etc/sysctl.d/k8s.conf"

    - name: "Applying Kernel Settings"
      command: "sysctl --system"

    - name: "Copying Token To Slave Nodes"
      copy:
        src: /tmp/token
        dest: /tmp/token

    - name: "Joining the Cluster"
      shell: "bash /tmp/token"
      ignore_errors: True
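Like `daemon.json`, the `k8s.conf` kernel-settings file is referenced but not shown. It typically contains the standard Kubernetes sysctl settings that let bridged pod traffic pass through iptables, which `sysctl --system` then applies:

```ini
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```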
Let us confirm the setup by logging into the master node and running kubectl get nodes:
Hence, the multi-node Kubernetes cluster has been configured successfully on the AWS cloud using Ansible.
I have also uploaded the playbooks to GitHub; here is the link:
I hope you liked this article. Have a good day!