Kubernetes on EKS

Deploy a production-grade Kubernetes cluster with managed node groups

Project Overview

Set up an Amazon EKS cluster, deploy containerized applications, configure ingress, and implement auto-scaling. Essential for modern microservices architectures.

Difficulty: Advanced
AWS Services: EKS, ECR, ALB Ingress Controller, Secrets Manager
Cost: ~$75-150/month (EKS cluster + nodes)

Prerequisites

  • Docker fundamentals and container experience
  • Basic Kubernetes concepts (pods, services, deployments)
  • kubectl and eksctl installed locally
  • AWS CLI configured with appropriate permissions

Architecture

Traffic flows from the internet through an Application Load Balancer (provisioned by the AWS Load Balancer Controller) to pods running in the EKS cluster on a managed node group. Container images are pulled from ECR and application secrets come from Secrets Manager, while Container Insights provides monitoring and logging.

Step-by-Step Instructions

1. Create EKS Cluster

  • Use eksctl to create the cluster: eksctl create cluster -f cluster-config.yaml (full config in the Code Examples section below)
  • Or use AWS Console/Terraform for more control
  • Configure VPC with public and private subnets
  • Create managed node group with appropriate instance types
  • Wait for cluster to be ready (~15-20 minutes)
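
Once eksctl finishes, a quick way to confirm the control plane and node group are ready; the cluster name and region below match the cluster-config.yaml shown in the Code Examples section.

Verify Cluster Status BASH
# Should print "ACTIVE" when the control plane is ready
aws eks describe-cluster --name my-cluster --region us-east-1 --query "cluster.status" --output text

# List the managed node group created by the config
eksctl get nodegroup --cluster my-cluster --region us-east-1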

2. Configure kubectl Access

  • Update kubeconfig: aws eks update-kubeconfig --name cluster-name
  • Verify connection: kubectl get nodes
  • Configure aws-auth ConfigMap for additional users
  • Set up RBAC for team access
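
A minimal sketch for granting a teammate's IAM role access via the aws-auth ConfigMap using eksctl; the role ARN and username are placeholders, and mapping to system:masters grants full admin, so substitute a narrower RBAC group for real teams.

aws-auth Mapping BASH
# Map an additional IAM role into the cluster (placeholder ARN and username)
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::123456789:role/TeamDevRole \
  --username team-dev \
  --group system:masters  # full admin; use a custom RBAC group in practice

# Verify the mapping and cluster access
eksctl get iamidentitymapping --cluster my-cluster --region us-east-1
kubectl get nodes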

3. Deploy Your Application

  • Push container image to ECR
  • Create Kubernetes Deployment manifest
  • Create Service to expose the deployment
  • Apply manifests: kubectl apply -f deployment.yaml
  • Verify pods are running: kubectl get pods
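
Pushing the image to ECR typically looks like the sketch below; the repository name myapp and the placeholder account ID match the image URI used in deployment.yaml, so adjust both to your own registry.

Push Image to ECR BASH
# Create the repository (skip if it already exists)
aws ecr create-repository --repository-name myapp --region us-east-1

# Authenticate Docker to your ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push the image referenced by the Deployment manifest
docker build -t myapp:latest .
docker tag myapp:latest 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest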

4. Set Up ALB Ingress Controller

  • Install AWS Load Balancer Controller
  • Create IAM OIDC provider for the cluster
  • Create service account with appropriate IAM role
  • Deploy the controller using Helm
  • Create Ingress resource for your service
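
One way to wire this up is with eksctl and Helm, as sketched below; the policy ARN assumes you have already created AWSLoadBalancerControllerIAMPolicy from the policy JSON in the controller's documentation, and chart versions change over time, so treat this as a starting point.

Install AWS Load Balancer Controller BASH
# Associate an IAM OIDC provider with the cluster (required for IAM roles for service accounts)
eksctl utils associate-iam-oidc-provider --cluster my-cluster --region us-east-1 --approve

# Create a service account bound to the controller's IAM policy (policy ARN is a placeholder)
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::123456789:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

# Install the controller with Helm, reusing the service account created above
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller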

5. Implement Auto Scaling

  • Deploy Metrics Server for resource metrics
  • Create Horizontal Pod Autoscaler (HPA)
  • Configure based on CPU or custom metrics
  • Set up Cluster Autoscaler for node scaling
  • Test scaling by generating load
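
A quick sketch of the HPA prerequisites and a simple load test; the Metrics Server manifest URL is the upstream default and busybox is used as a throwaway load generator, so adapt both to your environment.

Metrics Server and Load Test BASH
# Install Metrics Server so the HPA can read CPU and memory usage
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Apply the HPA from the Code Examples section and check its status
kubectl apply -f hpa.yaml
kubectl get hpa myapp-hpa

# Generate load against the service (run the watch below in a second terminal)
kubectl run load-generator --rm -it --image=busybox:1.36 --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://myapp-service; done"

# Watch replica counts change as CPU utilization rises
kubectl get hpa myapp-hpa --watch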

6. Configure Logging and Monitoring

  • Enable Control Plane logging to CloudWatch
  • Deploy CloudWatch Container Insights
  • Set up Fluent Bit for application logs
  • Create CloudWatch dashboards and alarms
  • Consider Prometheus/Grafana for advanced monitoring
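
Control plane logging is already enabled in cluster-config.yaml; the sketch below turns it on for an existing cluster and installs the CloudWatch Observability add-on, which bundles the CloudWatch agent and Fluent Bit. It assumes the node role carries CloudWatchAgentServerPolicy (the cloudWatch addon policy in the eksctl config covers this).

Container Insights Setup BASH
# Enable control plane logging if it was not set at cluster creation
eksctl utils update-cluster-logging --cluster my-cluster --region us-east-1 \
  --enable-types api,audit,authenticator --approve

# Install the CloudWatch Observability add-on (CloudWatch agent + Fluent Bit)
aws eks create-addon --cluster-name my-cluster --region us-east-1 \
  --addon-name amazon-cloudwatch-observability

# Confirm the agent and Fluent Bit pods are running
kubectl get pods -n amazon-cloudwatch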

Tips

  • Use Fargate profiles for serverless pods - no node management for selected workloads (see the sketch after this list)
  • Apply Pod Security Standards - PodSecurityPolicy was removed in Kubernetes 1.25; enforce baseline or restricted profiles via Pod Security Admission namespace labels
  • Use namespaces for isolation - Separate environments and teams
  • Consider GitOps with ArgoCD or Flux - Declarative, version-controlled deployments
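
For the Fargate tip, a profile can be declared in the same eksctl config file; the profile name and namespace below are hypothetical, and pods scheduled in that namespace then run on Fargate with no node group involved.

cluster-config.yaml (addition) YAML
# Run pods in the "serverless" namespace on Fargate instead of the managed node group
fargateProfiles:
  - name: fp-serverless
    selectors:
      - namespace: serverless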

Code Examples

Create EKS Cluster with eksctl

cluster-config.yaml YAML
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: us-east-1
  version: "1.28"

managedNodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    volumeSize: 50
    ssh:
      allow: false
    iam:
      withAddonPolicies:
        albIngress: true
        cloudWatch: true

cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator"]

Kubernetes Deployment Manifest

deployment.yaml YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080

ALB Ingress Configuration

ingress.yaml YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789:certificate/xxx
spec:
  ingressClassName: alb  # replaces the deprecated kubernetes.io/ingress.class annotation
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80

Horizontal Pod Autoscaler

hpa.yaml YAML
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

Essential kubectl Commands

Terminal Commands BASH
# Create cluster
eksctl create cluster -f cluster-config.yaml

# Update kubeconfig
aws eks update-kubeconfig --name my-cluster --region us-east-1

# Deploy application
kubectl apply -f deployment.yaml
kubectl apply -f ingress.yaml

# Check status
kubectl get pods
kubectl get svc
kubectl get ingress

# View logs
kubectl logs -f deployment/myapp

# Scale deployment
kubectl scale deployment myapp --replicas=5

What You'll Learn

  • EKS cluster provisioning and management
  • Kubernetes networking on AWS (VPC CNI)
  • Pod autoscaling (HPA/VPA) and cluster autoscaling
  • AWS Load Balancer Controller and Ingress
  • Container Insights for monitoring and logging