Experimenting with EKS Fargate profiles and AWS LB Controller


Intro

With more and more apps running on Kubernetes, EKS is definitely one of the best managed-Kubernetes options on the market. I have been using EKS in production applications for the last year or so. Until recently, most of my deployments on EKS were exposed through an NLB (Network Load Balancer), as creating an ALB (Application Load Balancer) for every service is suboptimal, at least cost-wise.

The latest version of the AWS Load Balancer Controller, an ingress controller by AWS, was published a few weeks ago. The biggest features introduced were support for NLBs, as well as for ALBs shared across multiple services. I'm curious by nature, so I did not have a better motive than this: I had not yet tried out Fargate profiles in practice, and the newly published AWS LB Controller caught my attention, so I went for a combination of the two.

This guide aims to be something in between a quickstart guide, topped with personal opinions, and an attempt at self-documentation.

First steps

AWS LB Controller

A prerequisite is to install the AWS LB Controller. In brief, it should be fairly simple to get started using the official Helm chart:

helm repo add eks https://aws.github.io/eks-charts
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<cluster-name>

Further details on the chart above can be found on the official GitHub page, with a handful of config options available.
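
One thing not shown above: the controller needs IAM permissions to manage ALBs/NLBs on your behalf. A common way to grant them is an IAM role bound to the controller's service account via eksctl, roughly along the lines of the sketch below (an assumption about an IRSA-based setup; the cluster name, account id and policy name are placeholders, and the official docs describe the full flow):

eksctl create iamserviceaccount \
  --cluster <cluster-name> \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::<account-id>:policy/<lb-controller-policy> \
  --approve

If you create the service account this way, also pass --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller to the helm install above, so the chart reuses it.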

Setting up Fargate

AWS is pretty good at creating to-the-point documentation for their services. Fargate is no exception, so I'd recommend following this guide to create the Fargate profiles that best fit your use cases.

In short, either via the UI or via eksctl (the most common way to manage an EKS cluster), you create a Fargate profile and assign selectors to it. If you are trying to limit the blast radius of your tests on the cluster, I'd recommend creating a new namespace and adding a namespace selector while setting up your Fargate profile. I went with a namespace named fargate.
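
For reference, a minimal eksctl sketch of the profile described above (the cluster and profile names are placeholders):

eksctl create fargateprofile \
  --cluster <cluster-name> \
  --name fp-fargate \
  --namespace fargate

With just a namespace selector like this, any pod scheduled in the fargate namespace is matched by the profile and runs on Fargate instead of your node groups.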

Glueing stuff together

In my tests I used my service template Helm chart (github template), but this can work with pretty much any chart of your choice for application deployment.

The config I ended up with was this:

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/group.name: fargate
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/certificate-arn: '<my certificate arn is here>'
  hosts:
    - host: my.koslib.com
      paths:
        - /*

and released my app with

helm upgrade --install <my release name> http://www.koslib.com/mycharts/servicetpl-0.8.0.tgz -f path/to/values.yaml -n fargate
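
Once the release is up, the controller should provision an ALB (or reuse an existing one, thanks to the group.name annotation) and publish its DNS name in the ingress status, which you can check with:

kubectl get ingress -n fargate

From there on, it is a matter of pointing DNS for my.koslib.com at the ALB hostname shown in the ADDRESS column.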

Some comments/explanation:

kubernetes.io/ingress.class: alb: by defining the ingress class on this Ingress object, we designate which ingress controller should handle it and route traffic to the service mapped on the ingress.

alb.ingress.kubernetes.io/scheme: internet-facing: by default, the ALB created is of the internal type and does not get assigned public IPs. That can work for various use cases, but as I was only putting together a publicly accessible test, I needed my ALB to be internet-facing.

alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]': no app, with the tooling available in our century, has any excuse to ship without TLS. Ports 80 and 443 are a must, with 80 there only to feed the redirect below.

alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}': via this annotation we can create a permanent redirect from port 80 to 443.

alb.ingress.kubernetes.io/certificate-arn: '<my certificate arn is here>': normally the AWS LB Controller will do some auto-discovery of certificates based on your hosts. But in case you, like me, prefer explicit over implicit, you can define your ACM certificate ARN with this annotation.
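
As a quick sanity check once DNS resolves, hitting the plain-HTTP endpoint should show the redirect in action, assuming everything above is wired up correctly:

curl -I http://my.koslib.com/

which should come back with a 301 and a Location header pointing at the https:// variant of the same host.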

Conclusions

I hope this was somewhat helpful for you. After all, EKS with Fargate is still a fairly new concept, so there can never be enough experimentation. Some closing thoughts, learned the hard way:

  • Right-sizing Fargate-running pods can be difficult. A common approach is to use a VPA (Vertical Pod Autoscaler) first, let the app run for a while, derive sensible resources from its recommendations, and then switch to an HPA (Horizontal Pod Autoscaler); see the sketch after this list.
  • Fargate adds 256MB of memory on top of your pod's requests for the required Kubernetes components it runs alongside your containers. Keep this in mind when selecting sizes.
  • Security groups for pods do not work on Fargate (relevant docs).
  • Fargate pods run only in private subnets. Keep track of internet-bound egress traffic, as it will spice up your NAT Gateway costs; image pulls in particular, if you're not using an ECR VPC endpoint.
  • DaemonSets are not supported on Fargate at the time of writing, although there is an open issue on the AWS containers roadmap. This means exporting logs (e.g. using Promtail) is not as trivial as it is with the rest of your node-running pods.
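
On the right-sizing point above, this is a minimal sketch of a recommendation-only VPA object, assuming the VPA components are installed in the cluster; the deployment name myapp is a placeholder:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
  namespace: fargate
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Off"   # recommendations only, no evictions or in-place resizing

Letting it run for a while and reading its recommendations (kubectl describe vpa myapp-vpa -n fargate) gives a decent baseline for requests/limits, before switching over to an HPA.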

[Update (20-Dec-2020): Added note for missing DaemonSet support in Fargate]