EKS Cluster
Create AWS VPC and EKS Cluster
Installing or updating eksctl
For this we are going to use Homebrew:
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
We are going to create an IAM user with admin privileges to create and own this whole cluster.
In the web management UI for AWS, go to IAM settings and create a user with admin privileges but no management console access. We created a user called "K8-Admin".
Delete or rename your ~/.aws/credentials file and re-run aws configure with the new user's access key ID and secret access key.
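To confirm the CLI is now using the new user, you can check the caller identity; the Arn in the output should reference the user you just created (K8-Admin in our example).
aws sts get-caller-identity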
Deploying a cluster with eksctl
Run the eksctl command below to create your first cluster and perform the following:
- Create a six-node Kubernetes cluster named filenet-east with node type m5.xlarge in region us-east-1.
- Define a minimum of one node (--nodes-min 1) and a maximum of six nodes (--nodes-max 6) for this node group managed by EKS.
- Name the managed node group filenet-workers.
eksctl create cluster \
--name filenet-east \
--version 1.24 \
--region us-east-1 \
--with-oidc \
--zones us-east-1a,us-east-1b,us-east-1c \
--nodegroup-name filenet-workers \
--node-type m5.xlarge \
--nodes 6 \
--nodes-min 1 \
--nodes-max 6 \
--tags "Product=FileNet" \
--managed
OIDC
Associate an IAM OIDC provider with the cluster if you didn't include --with-oidc above. Make sure to use the region you created the cluster in.
eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=filenet-east --approve
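If you want to double-check the association, compare the cluster's OIDC issuer against the IAM providers in the account; the issuer ID should appear in the provider list.
aws eks describe-cluster --name filenet-east --region us-east-1 --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers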
Configure kubectl
Once the cluster is up, add it to your kube config. eksctl will probably do this for you.
aws eks update-kubeconfig --name filenet-east --region us-east-1
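As a quick sanity check that kubectl is pointed at the new cluster and can see the nodes:
kubectl config current-context
kubectl get nodes -o wide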
Installing Cluster components
Install the EKS helm repo
helm repo add eks https://aws.github.io/eks-charts
helm repo update
Install the EBS driver to the cluster
We install the EBS CSI driver as this gives us access to GP3 block storage.
Download the example EBS IAM policy:
curl -o iam-policy-example-ebs.json https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/docs/example-iam-policy.json
Create the policy. You can change AmazonEKS_EBS_CSI_Driver_Policy to a different name, but if you do, make sure to change it in later steps too.
aws iam create-policy \
--policy-name AmazonEKS_EBS_CSI_Driver_Policy \
--policy-document file://iam-policy-example-ebs.json
{
"Policy": {
"PolicyName": "AmazonEKS_EBS_CSI_Driver_Policy",
"PolicyId": "ANPA24LVTCGN5YOUAVX2V",
"Arn": "arn:aws:iam::<ACCOUNT ID>:policy/AmazonEKS_EBS_CSI_Driver_Policy",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 0,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"CreateDate": "2023-04-19T14:17:03+00:00",
"UpdateDate": "2023-04-19T14:17:03+00:00"
}
}
Take note of the Arn entry for the policy that is returned above as you will need it below.
Create the service account
eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace kube-system \
--cluster filenet-east \
--attach-policy-arn arn:aws:iam::<ACCOUNT ID>:policy/AmazonEKS_EBS_CSI_Driver_Policy \
--approve \
--role-only \
--role-name AmazonEKS_EBS_CSI_DriverRole
Take note of the Arn entry for the role that is returned above as you will need it below.
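If you missed the role ARN in eksctl's output, you can look it up directly (this assumes you kept the role name AmazonEKS_EBS_CSI_DriverRole from the step above):
aws iam get-role --role-name AmazonEKS_EBS_CSI_DriverRole --query "Role.Arn" --output text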
Create the addon for the cluster
eksctl create addon \
--name aws-ebs-csi-driver \
--cluster filenet-east \
--service-account-role-arn arn:aws:iam::<ACCOUNT ID>:role/AmazonEKS_EBS_CSI_DriverRole \
--force
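To confirm the add-on installed cleanly, check its status and make sure the driver pods are running:
eksctl get addon --name aws-ebs-csi-driver --cluster filenet-east
kubectl get pods -n kube-system | grep ebs-csi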
Create the following StorageClass yaml to use gp3
gp3-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  fsType: ext4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Create the storage class
kubectl apply -f gp3-sc.yaml
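To see the class in action, a minimal PersistentVolumeClaim sketch like the following will provision a gp3 volume (the claim name gp3-claim and the 10Gi size are just examples). Because the class uses WaitForFirstConsumer, the volume isn't actually created until a pod mounts the claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gp3-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-gp3-sc
  resources:
    requests:
      storage: 10Gi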
Enable EFS on the cluster
By default, when we create a cluster with eksctl it defines and installs a gp2 storage class, which is backed by Amazon EBS (Elastic Block Store). After that we installed the EBS CSI driver to support gp3. However, gp2 and gp3 are both block storage and will not support RWX (ReadWriteMany) volumes in our cluster. We need to install an EFS storage class.
Download the IAM policy document from GitHub.
curl -o iam-policy-example-efs.json https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json
Create the policy. You can change AmazonEKS_EFS_CSI_Driver_Policy to a different name, but if you do, make sure to change it in later steps too.
aws iam create-policy \
--policy-name AmazonEKS_EFS_CSI_Driver_Policy \
--policy-document file://iam-policy-example-efs.json
{
"Policy": {
"PolicyName": "AmazonEKS_EFS_CSI_Driver_Policy",
"PolicyId": "ANPA24LVTCGN7YGDYRWJT",
"Arn": "arn:aws:iam::<ACCOUNT ID>:policy/AmazonEKS_EFS_CSI_Driver_Policy",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 0,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"CreateDate": "2023-01-24T17:24:00+00:00",
"UpdateDate": "2023-01-24T17:24:00+00:00"
}
}
Take note of the returned policy Arn as you will need it in the next step.
Create an IAM role and attach the IAM policy to it. Annotate the Kubernetes service account with the IAM role ARN and the IAM role with the Kubernetes service account name. You can create the role using eksctl or the AWS CLI. We're going to use eksctl. Our policy Arn was returned in the output above, so we'll use it here.
eksctl create iamserviceaccount \
--cluster filenet-east \
--namespace kube-system \
--name efs-csi-controller-sa \
--attach-policy-arn arn:aws:iam::<ACCOUNT ID>:policy/AmazonEKS_EFS_CSI_Driver_Policy \
--approve \
--region us-east-1
Once created, verify that the IAM service account exists by running the following command.
eksctl get iamserviceaccount --cluster <cluster-name>
NAMESPACE NAME ROLE ARN
kube-system aws-load-balancer-controller arn:aws:iam::748107796891:role/AmazonEKSLoadBalancerControllerRole
kube-system efs-csi-controller-sa arn:aws:iam::748107796891:role/eksctl-filenet-cluster-east-addon-iamserviceaccount-ku-Role1-1SCBRU1DS52QY
Now we just need our add-on registry address. This can be found here: https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html
Let's install the driver add-on to our cluster. We're going to use helm for this.
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
Install a release of the driver using the Helm chart. Replace the repository address with the cluster's container image address.
helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
--namespace kube-system \
--set image.repository=602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/aws-efs-csi-driver \
--set controller.serviceAccount.create=false \
--set controller.serviceAccount.name=efs-csi-controller-sa
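Before moving on, it's worth checking that the EFS CSI controller and node pods came up:
kubectl get pods -n kube-system | grep efs-csi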
Now we need to create the filesystem in EFS so we can use it.
export clustername=filenet-east
export region=us-east-1
Get our VPC ID
vpc_id=$(aws eks describe-cluster \
--name $clustername \
--query "cluster.resourcesVpcConfig.vpcId" \
--region $region \
--output text)
Retrieve the CIDR range for your cluster's VPC and store it in a variable for use in a later step.
cidr_range=$(aws ec2 describe-vpcs \
--vpc-ids $vpc_id \
--query "Vpcs[].CidrBlock" \
--output text \
--region $region)
Create a security group with an inbound rule that allows inbound NFS traffic for your Amazon EFS mount points.
security_group_id=$(aws ec2 create-security-group \
--group-name EFS4FileNetSecurityGroup \
--description "EFS security group for Filenet Clusters" \
--vpc-id $vpc_id \
--region $region \
--output text)
Create an inbound rule that allows inbound NFS traffic from the CIDR for your cluster's VPC.
aws ec2 authorize-security-group-ingress \
--group-id $security_group_id \
--protocol tcp \
--port 2049 \
--region $region \
--cidr $cidr_range
Create a file system.
file_system_id=$(aws efs create-file-system \
--region $region \
--encrypted \
--performance-mode generalPurpose \
--query 'FileSystemId' \
--output text)
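Mount targets can only be created once the file system is ready, so you may want to check its lifecycle state until it reports available:
aws efs describe-file-systems --file-system-id $file_system_id --region $region --query "FileSystems[0].LifeCycleState" --output text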
Create mount targets.
Determine the IDs of the subnets in your VPC and which Availability Zone the subnet is in.
aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=$vpc_id" \
--query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \
--region $region \
--output table
This should output something like the following:
----------------------------------------------------------------------
| DescribeSubnets |
+------------------+--------------------+----------------------------+
| AvailabilityZone | CidrBlock | SubnetId |
+------------------+--------------------+----------------------------+
| us-east-1a | 192.168.96.0/19 | subnet-0fd12c21a45619f4e |
| us-east-1b | 192.168.128.0/19 | subnet-0025586d6737aba67 |
| us-east-1b | 192.168.32.0/19 | subnet-0a437b71592c4cf58 |
| us-east-1c | 192.168.64.0/19 | subnet-0475c77b9a7ba7d9e |
| us-east-1c | 192.168.160.0/19 | subnet-0b1178e74f7800166 |
| us-east-1a | 192.168.0.0/19 | subnet-0db3c49d9c87a7437 |
+------------------+--------------------+----------------------------+
Add mount targets for the subnets that your nodes are in.
Run the following command:
for subnet in $(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$vpc_id" --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' --region $region --output text | awk '{print $3}') ; do aws efs create-mount-target --file-system-id $file_system_id --region $region --subnet-id $subnet --security-groups $security_group_id ; done
This wraps the command below in a for loop that iterates through your subnet IDs.
aws efs create-mount-target \
--file-system-id $file_system_id \
--region $region \
--subnet-id <SUBNETID> \
--security-groups $security_group_id
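Once the loop finishes, you can confirm a mount target exists (and is available) in each subnet:
aws efs describe-mount-targets --file-system-id $file_system_id --region $region --query "MountTargets[*].{Subnet:SubnetId,State:LifeCycleState}" --output table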
Create a storage class for dynamic provisioning
Let's get our filesystem ID if we don't already have it above. However, if you ran the above steps, $file_system_id should already be defined.
aws efs describe-file-systems \
--query "FileSystems[*].FileSystemId" \
--region $region \
--output text
fs-071439ffb7e10b67b
Download a StorageClass manifest for Amazon EFS.
curl -o EFSStorageClass.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml
Update it with your file system ID (this uses the $file_system_id variable set earlier; substitute your own ID if you're running this step standalone).
sed -i "s/fileSystemId:.*/fileSystemId: $file_system_id/" EFSStorageClass.yaml
Deploy the storage class.
kubectl apply -f EFSStorageClass.yaml
Finally, verify it's there
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
efs-sc efs.csi.aws.com Delete Immediate false 7s
gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 13d
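Since RWX support was the whole point of adding EFS, here's a minimal PersistentVolumeClaim sketch against efs-sc (the claim name efs-claim is just an example; EFS is elastic, so the 5Gi request is required by the API but not enforced).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi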
Install the AWS Load Balancer Controller
Download an IAM policy for the AWS Load Balancer Controller that allows it to make calls to AWS APIs on your behalf.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json
Create an IAM policy using the policy downloaded in the previous step.
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
{
"Policy": {
"PolicyName": "AWSLoadBalancerControllerIAMPolicy",
"PolicyId": "ANPA24LVTCGNV55JFAAP5",
"Arn": "arn:aws:iam::<ACCOUNT ID>:policy/AWSLoadBalancerControllerIAMPolicy",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 0,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"CreateDate": "2023-01-17T20:22:23+00:00",
"UpdateDate": "2023-01-17T20:22:23+00:00"
}
}
Take note of the returned policy Arn value from the policy above as you will need it in the next step.
Create a Kubernetes service account named aws-load-balancer-controller in the kube-system namespace for the AWS Load Balancer Controller and annotate the Kubernetes service account with the name of the IAM role. For our example, we are calling the cluster filenet-east.
eksctl create iamserviceaccount \
--cluster=filenet-east \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::<ACCOUNT ID>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
Now install the AWS Load Balancer Controller with helm
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=filenet-east \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
NAME: aws-load-balancer-controller
LAST DEPLOYED: Tue Jan 17 15:33:50 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!
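Verify that the controller deployment is up before creating any ingresses:
kubectl get deployment -n kube-system aws-load-balancer-controller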
Install NGINX Controller
Pull down the NGINX controller deployment
wget -O nginx-deploy.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/aws/deploy.yaml
Modify the deployment file by adding the annotations that do not exist and editing the one(s) that do:
service.beta.kubernetes.io/aws-load-balancer-type: "external"
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" # set this to "internal" if the load balancer should not get a public IP
Also search for the following configmap entry in the deployment file:
---
apiVersion: v1
data:
  allow-snippet-annotations: "false"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
We want to set the following keys in the ConfigMap's data section (note that proxy-body-size is the ConfigMap-level equivalent of the nginx.ingress.kubernetes.io/proxy-body-size Ingress annotation):
allow-snippet-annotations: "true"
enable-underscores-in-headers: "true"
proxy-body-size: "0"
So now our ConfigMap should look like this
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  enable-underscores-in-headers: "true"
  proxy-body-size: "0"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
Now apply the deployment
kubectl apply -f nginx-deploy.yaml
Verify the deployment
Command:
kubectl get ingressclass
Example output:
NAME CONTROLLER PARAMETERS AGE
alb ingress.k8s.aws/alb IngressClassParams.elbv2.k8s.aws/alb 19h
nginx k8s.io/ingress-nginx none 15h
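You can also check that the controller's service was assigned a load balancer address (the service and namespace names below come from the upstream deploy.yaml):
kubectl get svc -n ingress-nginx ingress-nginx-controller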