Multinode Kubernetes Cluster on AWS with the help of Ansible

As we all know, Kubernetes is open-source software that allows us to deploy and manage containerized applications at scale.

So in this article we are going to set up a Kubernetes cluster on Amazon Linux 2 EC2 instances, which will run containers with processes for deployment, maintenance, and scaling, on the AWS cloud, using the great automation tool Ansible.


Before we start, we need the following:

  • A pre-configured dynamic inventory on your base OS.
  • An IAM user with Administrator Access. Note: Ansible requires this user's access key and secret key so that it can configure resources on AWS.
  • A private key with the .pem extension.
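For the first point, the dynamic inventory can be wired up in a couple of ways. A minimal sketch using the aws_ec2 inventory plugin from the amazon.aws collection (the region and file name are illustrative; older setups use the ec2.py/ec2.ini script instead, which produces the same tag_Name_* groups):

```yaml
# aws_ec2.yml — illustrative dynamic inventory config, not the exact file used here
plugin: aws_ec2
regions:
  - ap-south-1          # assumed region; use your own
keyed_groups:
  # builds groups such as tag_Name_k8smaster and tag_Name_k8sworker
  # from each instance's tags, so plays can target them directly
  - key: tags
    prefix: tag
```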

As you can see, this is our Ansible config file. In this config file we have set some required options, e.g. the default path of roles, sudo (privilege escalation) for the AWS instances, and the private key used to log in to them.
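A sketch of such an ansible.cfg, assuming the options described above (all paths, the user name, and the key location are illustrative, not the exact values from the screenshot):

```ini
# ansible.cfg — illustrative sketch
[defaults]
inventory         = /etc/ansible/hosts     # points at the dynamic inventory
roles_path        = /etc/ansible/roles     # default path of roles
remote_user       = ec2-user               # default user on Amazon Linux 2
private_key_file  = /root/keys/mykey.pem   # .pem key used to log in to the instances
host_key_checking = False

[privilege_escalation]
become        = true                       # sudo power, needed on the AWS instances
become_method = sudo
become_user   = root
```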

We'll see how to set up a Kubernetes cluster with 'n' worker nodes and 1 master node on Amazon Linux 2 servers. We will do this configuration using Ansible roles, where the kubeadm tool is used to set up the cluster. Kubeadm is a tool built to provide "kubeadm init" and "kubeadm join" for creating Kubernetes clusters.
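Under the hood, the roles drive kubeadm with commands along these lines (the pod-network CIDR, token, and hash are placeholders; the join values come from the output of the init step):

```shell
# on the master node (run by the master role):
kubeadm init --pod-network-cidr=10.244.0.0/16   # CIDR shown here is illustrative

# on each worker node (run by the worker role):
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```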

Let's start with the playbook for the whole setup.
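The playbook itself appears in the screenshot; its overall shape is roughly the following (the role names, variable names, and file names here are hypothetical stand-ins, not the exact ones used):

```yaml
# k8s.yml — illustrative outline of the setup playbook
- hosts: localhost
  vars_prompt:
    - name: worker_count        # 'n', the number of worker nodes to launch
      private: no
  vars_files:
    - secret.yml                # vault-encrypted AWS credentials
  roles:
    - k8s_master                # launches and configures the master node
    - k8s_worker                # launches and configures the worker nodes
```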

As you can see in the playbook above, we included a secret file that provides the credentials of the AWS account, and the playbook contains two roles, for setting up the master node and the worker nodes respectively. These roles can be downloaded from the following repos on Ansible Galaxy.

  1. master node=
  2. worker node=

As we all know, roles are used to simplify playbooks.

Now we will run this playbook with the following command:

# ansible-playbook k8s.yml --ask-vault-pass

This command will prompt you for the vault password. The vault file contains the credentials in encrypted form, so it requires the password for decryption.
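The vault file itself can be created beforehand with ansible-vault; for example (the file name and variable names are illustrative):

```shell
# create the encrypted file; you will be asked to set a vault password
ansible-vault create secret.yml

# inside it, store the AWS credentials as plain variables, e.g.:
#   access_key: XXXXXXXXXXXX
#   secret_key: XXXXXXXXXXXX
```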

Now this playbook will launch one instance with tag_Name_k8smaster and 'n' worker nodes with tag_Name_k8sworker. Then Ansible will perform the remaining tasks on these instances, selecting them through the dynamic inventory by the tags we assigned.

Here I am launching two instances as worker nodes.

As you can see, the whole playbook ran successfully. We can also check the instances in the AWS console and verify whether everything was configured successfully.

One master and two worker nodes

We can also check the status of the cluster from the master node, because in this setup we configured the master node as a client as well. So you can go inside the master node and check the status of the nodes and the system pods; if all the system pods are running fine, it means everything is configured successfully.
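From the master node, those checks look like this:

```shell
kubectl get nodes                # every node should report STATUS Ready
kubectl get pods -n kube-system  # system pods should all be Running
```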

All the nodes and pods are configured and running successfully

As you can see, this playbook configured everything automatically, and the worker nodes also join the cluster automatically, so you don't need to worry about anything. You just have to provide the number of worker nodes for the K8s cluster; everything else will be configured automatically by Ansible.

Hope you find this helpful! You can also download the roles from the links given above.

Thank you