K8S : EKS with Windows Self-Managed Node Group using Terraform

Paramanand Dhuri
2 min readApr 23, 2021


In this blog we will set up an EKS control plane on AWS with Windows worker nodes.

Note: For deploying the core networking components we need at least one Linux node group in the cluster, so that Windows worker nodes can communicate with the control plane.

Before starting the actual implementation you may be curious why we are going with self-managed nodes. The answer is that AWS currently doesn't support managed node groups for Windows.

Cluster endpoint types:

  1. Public cluster: the control-plane API server is accessible from the internet, and internal resources also communicate with it over the internet.
  2. Private cluster: the control-plane API server can be accessed only from within the same VPC; internal communication between resources such as nodes also goes through the private endpoint.
  3. Public and private cluster: the control-plane API server can be accessed from the internet, but internal resources communicate through the private endpoint.
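These endpoint types map to two boolean flags in the cluster's vpc_config block. The fragment below is an illustrative sketch (variable names are assumptions); true/true gives the public-and-private variant:

```hcl
# Fragment of an aws_eks_cluster resource's vpc_config block.
vpc_config {
  subnet_ids              = var.subnet_ids
  endpoint_public_access  = true  # API server reachable from the internet
  endpoint_private_access = true  # nodes use the private VPC endpoint
}
```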

Networking Considerations:

  1. The VPC should have DNS hostnames and DNS resolution enabled.
  2. The EKS control plane resides in an AWS-managed VPC, whereas the node groups are deployed in the customer VPC.
  3. The EKS service creates its own security groups for the managed resources, i.e. the control plane and managed node groups.
  4. By default the VPC CNI plugin can be used, which assigns IPs from the subnet ranges to pods.
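The two DNS attributes from point 1 correspond to flags on the aws_vpc resource. A minimal sketch (CIDR and resource name are illustrative):

```hcl
resource "aws_vpc" "eks_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true  # DNS resolution
  enable_dns_hostnames = true  # DNS hostnames
}
```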

High-Level steps in setting up EKS cluster with Windows Worker Node Groups:

  1. Setup the EKS Control-Plane
  2. Add Linux Node Group
  3. Apply VPC-Controller for networking resources
  4. Add windows Node Group

Setup EKS Control-Plane:

Creating the EKS control plane also creates a cluster security group, which needs to be attached to all node groups created later. In Terraform, the cluster security group can be referenced with the following attribute:

aws_eks_cluster.cluster.vpc_config[0].cluster_security_group_id

Refer to main.tf for setting up the EKS control plane.
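A minimal sketch of the control-plane resource, with the cluster security group exposed as an output (IAM role and variable names are assumptions; the full version lives in main.tf):

```hcl
resource "aws_eks_cluster" "cluster" {
  name     = var.cluster_name
  role_arn = aws_iam_role.cluster.arn  # role with AmazonEKSClusterPolicy attached

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# Security group created by EKS, to be attached to the node groups.
output "cluster_security_group_id" {
  value = aws_eks_cluster.cluster.vpc_config[0].cluster_security_group_id
}
```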

Add Linux Node Group:

We can use the aws_eks_node_group Terraform resource to create a managed Linux node group. A Linux node group is required for the core networking components, which cannot run on Windows nodes. Refer to linux_node_group_code to implement the managed node group.
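A sketch of the managed Linux node group (instance type, sizes, and role names are illustrative assumptions):

```hcl
resource "aws_eks_node_group" "linux_ng" {
  cluster_name    = aws_eks_cluster.cluster.name
  node_group_name = "linux-ng"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.subnet_ids
  instance_types  = ["t3.medium"]

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 2
  }
}
```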

The aws-auth ConfigMap is required in order to register both the Linux and the Windows nodes; it can be managed with the kubernetes_config_map resource.
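A sketch of that ConfigMap, assuming separate IAM roles for the Linux and Windows nodes (role resource names are illustrative). Note that Windows node roles additionally need the eks:kube-proxy-windows group:

```hcl
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = aws_iam_role.node.arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      },
      {
        rolearn  = aws_iam_role.windows_node.arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes", "eks:kube-proxy-windows"]
      },
    ])
  }
}
```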

Adding VPC Controller:

Once the Linux nodes are created and registered in the cluster, we need to install the VPC admission webhook controller, which creates the core resources required for Windows instances to communicate with the cluster API server.

resource "null_resource" "install_vpc_controller" {
  provisioner "local-exec" {
    command = "eksctl utils install-vpc-controllers --cluster ${var.cluster_name} --approve"
  }

  depends_on = [aws_eks_node_group.linux_ng]
}

Windows Node Groups:

Windows node groups can only be self-managed, since managed node groups are not supported for Windows. To implement this we need to add a launch template and an auto scaling group; please refer to the code here.
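A sketch of the two resources, reusing the cluster security group from the control plane (the AMI data source, instance type, sizes, and the windows_userdata local are illustrative assumptions):

```hcl
resource "aws_launch_template" "windows_lt" {
  name_prefix   = "windows-ng-"
  image_id      = data.aws_ami.eks_windows.id  # EKS-optimized Windows AMI
  instance_type = "t3.large"

  vpc_security_group_ids = [
    aws_eks_cluster.cluster.vpc_config[0].cluster_security_group_id,
  ]

  user_data = base64encode(local.windows_userdata)
}

resource "aws_autoscaling_group" "windows_ng" {
  name                = "windows-ng"
  desired_capacity    = 1
  min_size            = 1
  max_size            = 3
  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.windows_lt.id
    version = "$Latest"
  }

  # Tag so EKS and tooling such as the cluster autoscaler can identify the nodes.
  tag {
    key                 = "kubernetes.io/cluster/${var.cluster_name}"
    value               = "owned"
    propagate_at_launch = true
  }
}
```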

Windows worker nodes cannot register themselves in the cluster directly, so that part is taken care of by the user_data in the launch template.
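The EKS-optimized Windows AMI ships with a bootstrap script that handles registration. A sketch of a user_data local that invokes it (the exact parameters to pass may vary by AMI version; treat this as an assumption to verify against your AMI):

```hcl
locals {
  windows_userdata = <<-EOT
    <powershell>
    # Start-EKSBootstrap.ps1 ships with the EKS-optimized Windows AMI and
    # registers the node with the cluster.
    & "C:\Program Files\Amazon\EKS\Start-EKSBootstrap.ps1" `
      -EKSClusterName "${var.cluster_name}" `
      -APIServerEndpoint "${aws_eks_cluster.cluster.endpoint}" `
      -Base64ClusterCA "${aws_eks_cluster.cluster.certificate_authority[0].data}"
    </powershell>
  EOT
}
```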
