Deploy apps on Kubernetes with GitHub Actions - from start to finish

After many years of using DroneCI, GitLab CI, or Travis for most of my projects, I decided to give GitHub Actions (GA) a try. My goal was to deploy a side project of mine on AWS EKS, and to automate every aspect of it as much as possible.

I quickly realized that there were no good end-to-end tutorials, or even people sharing their experience on this exact matter, so I decided to write up my own.

Background and topics

Most articles and guides cover only a small part of the CI/CD process: some focus on the GitHub Actions side, others on parts of the app-kubernetization process, and so on. The idea here is to give someone with at least a basic grasp of the Kubernetes world a high-level overview of the process, along with enough realistic examples to help them get started.

Therefore I will cover the following:

  1. Kubernetize an app, using Helm
  2. Spin up your K8S cluster with AWS EKS
  3. Configure your CI/CD process with Github Actions
  4. Environment variables
  5. Secrets

There are some assumptions in this guide, and of course I am only describing my own experience here, so you may find some parts of the post opinionated. That does not mean this post contains the single correct way to do things. I myself tend to forget long and sophisticated processes like this one, so I tried to generalize as much as possible; hopefully it can serve as a point of reference for this process.

So, let’s start!

The complete guide

As I mentioned before, there are some assumptions in this guide. Heads up about them:

  • I use AWS in general, and I’d love to let AWS manage my Kubernetes cluster through their EKS service
  • eksctl to the rescue for creating and managing the cluster
  • As I am on AWS already, I’d use KMS to encrypt stuff and generate secrets
  • I found the existing options of authenticating and executing commands on an EKS cluster cumbersome, so I made my own GA for this
  • In my use-case, Kubernetes made sense. Please, please, please make sure you also need it otherwise you will shoot yourself in the foot.

Prepare your app

If you have made it this far, it is safe to assume that you already have a Dockerized app. The programming language does not really matter; for the record, I used Python and Go while following these steps. I will also assume you already have a remote Docker registry.

In order to deploy your app in Kubernetes, you need to have a few basic YAML manifest files: deployment.yaml and service.yaml.

Super quick notes
A deployment allows you to describe an application’s life cycle, such as which images to use for the app, the number of pods there should be, and the way in which they should be updated. (1)
A service provides an abstract way to expose an application running on a set of Pods as a network service.(2)

Deploy with yaml files

After consulting the official Kubernetes docs, you could start writing them down. Even applying them to a local k8s deployment with kubectl apply -f deployment.yaml, for example, would show some hopeful first results. And, of course, you'd move on to creating other important resource object files like ingress.yaml, hpa.yaml, etc.
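As a concrete starting point, a minimal pair of manifests for a hypothetical web app could look like the sketch below (the app name, image, and ports are all placeholders, not something from my project):

```yaml
# deployment.yaml -- minimal sketch; "myapp" and its image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry/myapp:1.0.0
          ports:
            - containerPort: 8000
---
# service.yaml -- exposes the pods above as a single in-cluster endpoint
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8000
```

Applying both with kubectl apply and hitting the service from inside the cluster is a quick sanity check before adding anything fancier.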

But if you are like me, with a handful of different apps, services, or side projects to work on, you'll find this repetitive process tedious. Enter Helm. Helm is a package manager for Kubernetes apps. It is written in Go and leverages Go templates.

I’ve found that it can be used either as a package/version manager for an app or mainly as a templating tool for your Kubernetes manifest files. Each Helm “app” is called a chart, and it is a collection of *.yaml templates which describe a set of resources: deployment.yaml, service.yaml, ingress.yaml, hpa.yaml, secrets.yaml.
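To make the templating idea concrete, here is roughly what a templated deployment looks like inside a chart. This excerpt mirrors what helm create scaffolds; the helper name and values keys are illustrative, not taken from my chart:

```yaml
# templates/deployment.yaml (excerpt) -- values are injected via Go templates
apiVersion: apps/v1
kind: Deployment
metadata:
  # "mychart.fullname" is a named template defined in _helpers.tpl
  name: {{ include "mychart.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          # repository and tag come from values.yaml, or from --set overrides
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

At install time, Helm renders these placeholders from values.yaml (and any -f / --set overrides) into plain Kubernetes manifests.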

In my case, after looking around on Github for an existing helm chart to cover my needs, I was disappointed enough to decide to roll out my own. And to be honest, Helm makes it fairly easy, given their incredible docs. That is how I ended up creating a service template chart with the following structure:

- templates/
    -- _helpers.tpl
    -- deployment.yaml
    -- hpa.yaml
    -- ingress.yaml
    -- secrets.yaml
    -- service.yaml
    -- serviceaccount.yaml
- Chart.yaml
- values.yaml

Feel free to take a look inside. You will quickly see that it is nothing extraordinary, and certainly not very different from the initial set of files that helm create mychart produces, yet with adjustments to make it more ready for use. The main differences are: 1) secrets support, 2) environment variables in containers - minor stuff.

Deploy with helm

I’ve taken the extra step and created a remote chart repo using another great GitHub feature, GitHub Pages, and published my service template chart there.

Deploying with Helm is as easy as helm install <release name> <remote or local path to chart> -f <path to custom values file>. So, if you’d like to use my provided service chart, you can run:

helm upgrade --install myrelease <path to chart> -f .values.yaml

Your AWS EKS cluster

Why bother rolling out your own cluster and carry the burden of maintaining, securing and keeping it alive, while you can have a managed Kubernetes cluster by a large-scale cloud provider? AWS offers EKS, DigitalOcean offers DOKS, Azure offers AKS, and so on.

As an AWS user myself, I opted for EKS. I’d recommend that, instead of creating your cluster by clicking through the AWS UI, you use eksctl, the official CLI for EKS. Follow the docs there as well, and you will end up with a cluster.yaml file which describes the desired attributes of your cluster.

Now, some high-level instructions to configure your cluster to gather metrics, provide a dashboard for UI lovers, and expose an endpoint for the services eventually deployed on it:

  1. Create an EKS cluster with eksctl and a custom cluster.yaml configuration.
  2. When the cluster is created, install the metrics server

kubectl apply -f <metrics-server manifest URL>

and verify it’s running (you should see one replica there):

kubectl get deployment metrics-server -n kube-system

  3. Deploy the dashboard

kubectl apply -f <kubernetes-dashboard manifest URL>

  4. Create an eks-admin service account and cluster role binding, using a custom eks-admin-service-account.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system

  5. Retrieve a token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

and you can use it on the dashboard after proxying with kubectl proxy, by going here: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login.

  6. Install the nginx ingress controller using a Network Load Balancer (instructions and info).

  7. Follow DO’s instructions for cert-manager support in Ingress resources. This is very helpful for automatically issuing SSL certificates for your services’ endpoints.

  8. Allow EKS nodes to pull ECR images.
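For reference, the cluster.yaml from step 1 could look roughly like the sketch below; the cluster name, region, and node group settings are placeholders to adapt, not my actual configuration:

```yaml
# cluster.yaml -- minimal eksctl ClusterConfig sketch
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
nodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2
```

You would then create the cluster with eksctl create cluster -f cluster.yaml.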

Should everything go well, you will have a fully functioning EKS cluster: you’ll be able to browse and connect to the dashboard, the nginx ingress controller will be ready to expose your new services to the outside world and, last but not least, your nodes will be able to pull Docker images from ECR.

Configuring your CI/CD - Automation

In this blog post, GitHub Actions is the selected CI/CD automation tool. To keep things within scope, I will briefly describe the idea of the workflow and then provide a working example for you to copy and adapt as you wish.

The process, as in any CI/CD system, looks like this:

  1. Checkout your code
  2. Build your Docker image after running your tests
  3. Push image to Docker registry
  4. Authenticate the CI/CD system with the k8s cluster
  5. Deploy a new version of the app with Helm
  6. Cleanup

In step #4, I mentioned authentication with the Kubernetes cluster. Practically, this means that the GitHub Actions runner needs to be able to execute kubectl/helm commands against our EKS cluster. If you followed the instructions in this post, you created the EKS cluster with the eksctl tool, which appends the auth details of your new EKS cluster to the kube config file on your computer. Note that this config relies on aws-iam-authenticator, a small tool used to securely authenticate with AWS resources.

Again, after a brief look in the GA marketplace, I was not able to find an existing action which would 1) authenticate to EKS with aws-iam-authenticator given the kubeconfig file and 2) execute helm commands. So I rolled out my own dead-simple action which does exactly that; feel free to use/extend it. The only thing you need to do is create a GitHub Secret in your project repository, named KUBE_CONFIG_DATA, whose value is your kube config file in base64-encoded form.
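Producing that secret value is a one-liner: base64-encode the kubeconfig and strip newlines so it fits in a single secret field. The snippet below demonstrates the idea on a stand-in file; in practice, point it at ~/.kube/config:

```shell
# Encode a kubeconfig for the KUBE_CONFIG_DATA GitHub secret.
# A stand-in file is used here; substitute ~/.kube/config for real use.
printf 'apiVersion: v1\nkind: Config\n' > /tmp/kubeconfig-demo
base64 < /tmp/kubeconfig-demo | tr -d '\n'
```

Paste the printed string into the secret's value field as-is.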

A fully working example of a .github/workflows/main.yml follows below:

name: CI

on:
  push:
    branches:
      - master
      - develop

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2

    # Runs a single command using the runners shell
    - name: AWS ECR
      uses: kciter/aws-ecr-action@v1
      with:
        access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        account_id: ${{ secrets.AWS_ACCOUNT_ID }}
        repo: # fill this in
        region: us-east-1  # change as needed
        tags: ${{ github.sha }}
        create_repo: true

  deploy:
    runs-on: ubuntu-latest
    needs: [build]

    steps:
      - uses: actions/checkout@v2

      - name: AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1  # change as needed

      - name: helm deploy
        uses: koslibpro/helm-eks-action@master
        env:
          KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
        with:
          command: helm upgrade --install myrelease <path to chart> -f .values.yaml --set image.tag=${{ github.sha }} --wait

As you can see, there are a few AWS account-related secrets to create in GitHub Secrets, and of course you will need to adapt your helm install/upgrade command as needed, but I’m sure you’ve grasped the concept. Consider this a guideline, and adapt accordingly.

Environment variables and secrets

Non sensitive information

Our good old env vars: every software engineer knows them and nobody can live without them. Back in the day, env vars could contain anything, from app configuration flags used on startup to database connection strings.

In the containers world we have, thankfully, opted not to store sensitive information as plain text in environment variables. That is the notion of secrets.

In Kubernetes, there is a pretty neat way to define the env variables of a container. And if you go down this road using the service template chart I referenced above, you can just add a set of key-value pairs inside the .values.yaml file you created for your repository, and those will be exposed as env vars.
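For illustration, such a .values.yaml could carry a plain key-value map that the chart's deployment template renders into container env vars. The env key name and the variables below are assumptions for the sketch, specific to how a chart's templates are written, not a Kubernetes standard:

```yaml
# .values.yaml (sketch; the `env` key depends on the chart's templates)
env:
  APP_ENV: production
  LOG_LEVEL: info
```

Each pair ends up as a regular environment variable inside the running container.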


Sensitive information

What about the secrets, though? How would you safely commit sensitive info into your codebase, so that GA could pick it up and add it as Kubernetes secrets in your cluster, for your containers to fetch and read later on?

Helm secrets can help! Helm secrets is a plugin developed and maintained by Zendesk, which helps with encrypting/decrypting secrets while executing helm install/upgrade commands. Under the hood it uses Mozilla sops to encrypt the values you provide, and you can use a managed key service for the encryption keys.

In short, the concept is to create a .secrets.yaml file which initially contains your sensitive info in plain text. Example:

    DATABASE_URL: postgres://username:password@host:port/db

By adding a .sops.yaml file at the root of your repository, you can define which keys sops will use to encrypt your secrets. E.g. in my case with AWS KMS:

creation_rules:
  # Encrypt with AWS KMS
  - kms: 'arn:aws:kms:<region>:<account_id>:key/<id>'

Then, running helm secrets enc .secrets.yaml will encrypt the content! Open up the file now and confirm that this has worked out for you as well. More info on how sops works can be found here. And not to forget: the .secrets.yaml file, once encrypted, can be safely committed into your git repository.
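After encryption, the file no longer contains plain-text values. Its shape is roughly like the sketch below; the ellipses stand in for real ciphertext and metadata, so treat this as illustrative only:

```yaml
# .secrets.yaml after `helm secrets enc` (abbreviated, illustrative)
DATABASE_URL: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
sops:
  kms:
    - arn: arn:aws:kms:<region>:<account_id>:key/<id>
  lastmodified: "..."
  version: "..."
```

The sops block at the bottom records which keys can decrypt the file, which is how helm secrets knows what to do at install time.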

That is exactly why, in my service template Helm chart, I extended the default-generated version to include a secrets.yaml template. Now, by installing/upgrading a helm release and providing the encrypted .secrets.yaml file as a file param, helm will create the secrets on Kubernetes for your service, and will also expose them to your containers in a safe manner to be consumed.

Glueing everything together

I’m glad you made it this far, and really appreciate it!

As of now, you should have the following:

  1. Your EKS cluster using eksctl
  2. .values.yaml adjusted according to your needs and .secrets.yaml containing your encrypted sensitive info in your git repository
  3. A GitHub Action workflow set up in your repository

The last adjustment needed for a fully working and functional CI/CD workflow is to make our GA helm release command include the secrets file. You can easily do it like this:

    command: |
      helm secrets upgrade --install <release> <path to chart> -f .values.yaml -f .secrets.yaml --set image.tag=${{ github.sha }} --wait


I understand that this article may be demanding for more junior software engineers; however, it attempts to sum up, end to end, how one can use GitHub Actions to deploy apps with Helm on a Kubernetes cluster. I hope you found it useful, and I’d be more than happy to receive feedback and comments on how to make this process more efficient and better!


