Deploying GitOps with Weave Flux and Amazon EKS
You have countless options for deploying resources into an Amazon EKS cluster. GitOps—a term coined by Weaveworks—provides some substantial advantages over the alternatives. With only Git as the single, central source for controlling deployment into your cluster, GitOps provides easy version control on a platform your team already knows. Getting started with GitOps is straightforward: create a pull request, merge, and the configuration deploys to the EKS cluster.
Weave Flux makes running GitOps in your EKS cluster fast and easy, as it monitors your configuration in Git and image repositories and automates deployments. Weave Flux follows a pull model, automatically triggering deployments based on changes. This provides better security than most continuous deployment tools, which need permissions to access your cluster. This approach also provides Git with version control over your configuration and enables rollback.
This post walks through implementing Weave Flux and deploying resources to EKS using Git. To simplify the image build pipeline, I use AWS Service Catalog to provide a standardized pipeline. AWS Service Catalog lets you centrally define a portfolio of approved products that AWS users can provision. An AWS CloudFormation template defines each product, which can be version-controlled.
After you deploy the sample resources, I quickly demonstrate the GitOps approach: a change, whether a new container image, a commit of Kubernetes manifests, or a commit of Helm release definitions, automatically deploys to EKS.
The following diagram shows the workflow.
In GitOps, you manage Docker image builds separately from deployment configuration. For image builds, this example uses AWS CodePipeline and AWS CodeBuild, which provide a managed workflow from GitHub source through to an image landing in Amazon Elastic Container Registry (ECR).
This post assumes that you already have an EKS cluster deployed, including kubectl access. It also assumes that you have a GitHub account.
To deploy a cluster, see Getting Started with eksctl .
If you need kubectl, see Installing kubectl .
If you don’t have a GitHub account, sign up for a new account .
First, create a GitHub repository to store the Kubernetes manifests (configuration files) to apply to the cluster.
In GitHub, create a GitHub repository. This repository holds Kubernetes manifests for your deployments. Name the repository k8s-config to align with this post. Leave it as a public repository, check the box for Initialize this repository with a README, and choose Create repository.
On the GitHub repository page, choose Clone or Download and save the SSH clone string for later use.
Next, create a GitHub token that allows creating and deleting repositories so AWS Service Catalog can deploy and remove pipelines.
1. In your GitHub profile, access your token settings.
2. Choose Generate New Token.
3. Name your new token CodePipeline Service Catalog, and select the following options:
   - repo scopes (repo:status, repo_deployment, public_repo, and repo:invite)
   - write:public_key and read:public_key
   - write:repo_hook and read:repo_hook
   - read:user and user:email
4. Choose Generate Token.
5. Copy and save your access token for future access.
Helm is a package manager for Kubernetes that allows you to define a chart. Charts are collections of related resources that let you create, version, share, and publish applications. By deploying Helm into your cluster, you make it much easier to deploy Weave Flux and other systems. If you’ve deployed Helm already, skip this section.
First, install the Helm client with the following command:
curl -LO https://git.io/get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
On macOS, you could alternatively enter the following command:
brew install kubernetes-helm
Next, set up a service account with cluster role for Tiller, Helm’s server-side component. This allows Tiller to manage resources in your cluster.
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
Finally, initialize Helm and verify your version. Tiller takes a few seconds to start.
helm init --service-account tiller --history-max 200
helm version
Deploy Weave Flux
With Helm installed, proceed with the Weave Flux installation. Begin by installing the Flux Custom Resource Definition.
kubectl apply -f https://raw.githubusercontent.com/fluxcd/flux/helm-0.10.1/deploy-helm/flux-helm-release-crd.yaml
Now add the Weave Flux Helm repository and proceed with the install. Make sure that you update the git.url to match the GitHub repository that you created earlier.
helm repo add fluxcd https://charts.fluxcd.io
helm upgrade -i flux \
  --set helmOperator.create=true \
  --set helmOperator.createCRD=false \
  --set git.url=git@github.com:YOURUSER/k8s-config \
  --namespace flux \
  fluxcd/flux
You can use the following code to verify that you successfully deployed Flux. You should see three pods running:
kubectl get pods -n flux
NAME READY STATUS RESTARTS AGE
flux-5bd7fb6bb6-4sc78 1/1 Running 0 52s
flux-helm-operator-df5746688-84kw8 1/1 Running 0 52s
flux-memcached-6f8c446979-f45wj 1/1 Running 0 52s
Flux requires a deploy key to work with the GitHub repository. In this post, Flux generates the SSH key pair itself, but you can also specify a different key pair when deploying. To access the key, download fluxctl, a command line utility that interacts with the Flux API. The following steps work for Linux. For other OS platforms, see Installing fluxctl .
sudo wget -O /usr/local/bin/fluxctl https://github.com/fluxcd/flux/releases/download/1.14.1/fluxctl_linux_amd64
sudo chmod 755 /usr/local/bin/fluxctl
Validate that fluxctl installed successfully, then retrieve the public key pair using the following command. Specify the namespace where you deployed Flux.
fluxctl --k8s-fwd-ns=flux identity
Copy the key and add that as a deploy key in your GitHub repository.
In your GitHub repository, choose Settings, Deploy Keys.
Choose Add deploy key and name the key Flux Deploy Key.
Paste the key from fluxctl identity.
Choose Allow Write Access , Add Key .
Now use AWS Service Catalog to set up your image build pipeline.
Set up AWS Service Catalog
To allow end users to consume product portfolios, you must associate a portfolio with an IAM principal (or principals): a user, group, or role. For this example, associate your current identity. After you master these basics, there are additional resources to teach you how to set up a multi-region, multi-account catalog .
To retrieve your current identity, use the AWS CLI to get your ARN:
aws sts get-caller-identity
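The command returns JSON similar to the following (the account ID and user name shown are placeholders). The Arn field is the value you need in the next step:

```json
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/YourUser"
}
```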
Deploy the product portfolio that contains an image build pipeline service by doing the following:
1. In the AWS CloudFormation console, launch the CloudFormation stack with the following link:
2. Choose Next.
3. On the Specify Details page, enter your ARN from get-caller-identity. Also enter an environment tag, which AWS applies to all resources from this portfolio.
4. Choose Next.
5. On the Options page, choose Next.
6. On the Review page, select the check box next to I acknowledge that AWS CloudFormation might create IAM resources.
7. Choose Create. CloudFormation takes a few minutes to create your resources.
Deploy the image pipeline
The image pipeline provisions a GitHub repository, Amazon ECR repository, and AWS CodeBuild project. It also uses AWS CodePipeline to build a Docker image.
1. In the AWS Management Console, go to the AWS Service Catalog products list and choose Pipeline for Docker Images.
2. Choose Launch Product.
3. For Name, enter ExamplePipeline, and choose Next.
4. On the Parameters page, fill in a project name, description, and unique S3 bucket name. The specifics don't matter, but make a note of the name and S3 bucket for later use.
5. Fill in your GitHub User and GitHub Token values from earlier. Leave the rest of the fields as the default values.
6. To clean up your GitHub repository on stack delete, change Delete Repository to true.
7. Choose Next.
8. On the TagOptions screen, choose Next.
9. On the Notifications page, choose Next.
10. On the Review page, choose Launch.
The launch process takes 1–2 minutes. You can verify that you now have a repository matching your project name (eks-example ) in GitHub. You can also look at the pipeline created in the AWS CodePipeline console .
Deploying with GitOps
You can now provision workloads into the EKS cluster. With a GitOps approach, you only commit code and Kubernetes resource definitions to GitHub. AWS CodePipeline handles the image builds, and Weave Flux applies the desired state to Kubernetes.
First, create a simple Hello World application in your example pipeline. Clone the GitHub repository that you created in the previous step and substitute your GitHub user below.
git clone git@github.com:youruser/eks-example.git
cd eks-example
Create a base README file, a source directory, and download a simple NGINX configuration (hello.conf), home page (index.html), and Dockerfile.
echo "# eks-example" > README.md
mkdir src
wget -O src/hello.conf https://blog-gitops-eks.s3.amazonaws.com/hello.conf
wget -O src/index.html https://blog-gitops-eks.s3.amazonaws.com/index.html
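The pipeline also expects a Dockerfile at the repository root; the post downloads one alongside the other files, but the exact download command isn't shown here. As an illustrative stand-in (a sketch, not the post's actual file), a minimal Dockerfile that serves the page through NGINX could look like this:

```dockerfile
# Hypothetical sketch: serve index.html via NGINX using the downloaded config.
# The real Dockerfile from the post's S3 bucket may differ.
FROM nginx:1.16
COPY src/hello.conf /etc/nginx/conf.d/default.conf
COPY src/index.html /usr/share/nginx/html/index.html
EXPOSE 80
```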
Now that you have a simple Hello World app with Dockerfile, commit the changes to kick off the pipeline.
git add .
git commit -am "Initial commit"
[master (root-commit) d69a6ba] Initial commit
4 files changed, 34 insertions(+)
create mode 100644 Dockerfile
create mode 100644 README.md
create mode 100644 src/hello.conf
create mode 100644 src/index.html
Watch in the AWS CodePipeline console to see the image build in process. This may take a minute to start. When it’s done, look in the ECR console to see the first version of the container image.
To deploy this image and the Hello World application, commit Kubernetes manifests for Flux. Create a namespace, deployment, and service in the Kubernetes Git repository (k8s-config) you created. Make sure that you aren’t in your eks-example repository directory.
git clone git@github.com:youruser/k8s-config.git
cd k8s-config
mkdir charts namespaces releases workloads
The preceding directory structure helps organize the repository but isn’t necessary. Flux can descend into subdirectories and look for YAML files to apply.
Create a namespace Kubernetes manifest.
cat << EOF > namespaces/eks-example.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: eks-example
EOF
Now create a deployment manifest. Make sure that you update the image to point to your ECR repository and image tag, for example, <account-id>.dkr.ecr.us-east-1.amazonaws.com/eks-example:d69a6bac.
cat << EOF > workloads/eks-example-dep.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-example
  namespace: eks-example
  labels:
    app: eks-example
  annotations:
    # Container Image Automated Updates
    flux.weave.works/automated: "true"
    # do not apply this manifest on the cluster
    #flux.weave.works/ignore: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eks-example
  template:
    metadata:
      labels:
        app: eks-example
    spec:
      containers:
      - name: eks-example
        image: <account-id>.dkr.ecr.us-east-1.amazonaws.com/eks-example:d69a6bac
        imagePullPolicy: Always
        ports:
        - containerPort: 80
EOF
Finally, create a service manifest to create a load balancer.
cat << EOF > workloads/eks-example-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: eks-example
  namespace: eks-example
spec:
  type: LoadBalancer
  selector:
    app: eks-example
  ports:
  - port: 80
    targetPort: 80
EOF
In the preceding code, there are two Kubernetes annotations for Flux. The first, flux.weave.works/automated , tells Flux whether the container image should be automatically updated. This example sets the value to true, enabling updates to your deployment as new images arrive in the registry. This example comments out the second annotation, flux.weave.works/ignore . However, you can use it to tell Flux to ignore the deployment temporarily.
Commit the changes, and in a few minutes, it automatically deploys.
git add .
git commit -am "eks-example deployment"
[master 954908c] eks-example deployment
3 files changed, 64 insertions(+)
create mode 100644 namespaces/eks-example.yaml
create mode 100644 workloads/eks-example-dep.yaml
create mode 100644 workloads/eks-example-svc.yaml
Make sure that you push your changes:
git push
Now check the logs of your Flux pod:
kubectl get pods -n flux
Update the pod name below to match the pod in your deployment. Flux polls for changes every five minutes. When a sync triggers, you should see kubectl apply log messages as it creates the namespace, service, and deployment.
kubectl logs flux-5bd7fb6bb6-4sc78 -n flux
Find the load balancer address for your service with the following:
kubectl describe service eks-example -n eks-example
Now when you connect to the load balancer address in a browser, you can see the Hello World app.
Change the eks-example source code in a small way (such as changing index.html to say Hello World Deployment 2), then commit and push to Git.
After a few minutes, refresh your browser to see the deployed change. You can watch the changes in AWS CodePipeline, in ECR, and through Flux logs. Weave Flux automatically updated your deployment manifests in the k8s-config repository to deploy the new image as it detected it. To back out that change, use a git revert or git reset command.
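Rolling back is an ordinary Git operation. As a minimal sketch in a throwaway repository (the file name and contents here are illustrative, not from the post), reverting the latest config commit restores the previous state, which Flux would then re-apply:

```shell
# Demonstrate rolling back a bad config commit in a scratch repository.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

# First (good) version of an illustrative config file
echo "tag: v1" > release.yaml
git add release.yaml
git commit -qm "v1"

# A bad update lands
echo "tag: v2" > release.yaml
git commit -qam "v2"

# Revert the bad commit; in your k8s-config repository you would
# then git push and wait for Flux to sync the restored state.
git revert --no-edit HEAD
cat release.yaml   # back to: tag: v1
```

The same pattern applies to the commits Flux itself makes when it updates image tags: revert the unwanted commit, push, and the cluster converges back to the earlier state.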
Finally, you can use the same approach to deploy Helm charts. You can host these charts within the configuration Git repository (k8s-config in this example), or on an external chart repository. In the following example, you use an external chart repository.
In your k8s-config directory, get the latest changes from your repository, and then create a Helm release from an external chart.
git pull
First, create the namespace manifest.
cat << EOF > namespaces/nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
EOF
Then create the Helm release manifest. This is a custom resource definition provided by Weave Flux.
cat << EOF > releases/nginx.yaml
---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: mywebserver
  namespace: nginx
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.nginx: semver:~1.16
    flux.weave.works/locked: 'true'
    flux.weave.works/locked_msg: '"Halt updates for now"'
    flux.weave.works/locked_user: User Name <user@example.com>
spec:
  releaseName: mywebserver
  # The chart repository and versions below are illustrative; any public
  # NGINX chart with matching tags works.
  chart:
    repository: https://charts.bitnami.com/bitnami/
    name: nginx
    version: 5.1.1
  values:
    image:
      registry: docker.io
      repository: bitnami/nginx
      tag: 1.16.0-debian-9-r46
EOF
git add .
git commit -am "Adding NGINX Helm release"
git push
The preceding manifest includes a few new annotations for Flux. The flux.weave.works/locked annotation tells Flux to lock the deployment. This is useful if you find a known bad image and must roll back to a previous version. In addition, the flux.weave.works/tag.nginx annotation filters image tags by semantic versioning.
Wait up to five minutes for Flux to pull the configuration and verify this deployment as you did in the previous example:
kubectl get pods -n flux
kubectl logs flux-5bd7fb6bb6-4sc78 -n flux
kubectl get all -n nginx
If this doesn’t deploy, ensure Helm initialized as described earlier in this post.
kubectl get pods -n kube-system | grep tiller
kubectl get pods -n flux
kubectl logs flux-helm-operator-df5746688-84kw8 -n flux
Log in as an administrator and follow these steps to clean up your sample deployment.
1. Delete all images from the Amazon ECR repository.
2. In AWS Service Catalog provisioned products , select the three dots to the left of your ExamplePipeline service and choose Terminate provisioned product . Wait until it completes termination (1–2 minutes).
3. Delete your Amazon S3 artifact bucket.
4. Delete Weave Flux:
helm delete flux --purge
kubectl delete ns flux
kubectl delete crd helmreleases.flux.weave.works
5. Delete the load balancer services:
helm delete mywebserver --purge
kubectl delete ns nginx
kubectl delete svc eks-example -n eks-example
kubectl delete deployment eks-example -n eks-example
kubectl delete ns eks-example
6. Clean up your GitHub repositories:
- Go to your k8s-config repository in GitHub, choose Settings, scroll to the bottom, and choose Delete this repository. If you set Delete Repository to false in the pipeline service, you also must delete your eks-example repository.
- Delete the personal access token that you created.
7. If you provisioned an EKS cluster at the beginning of this post, delete it:
eksctl get cluster
eksctl delete cluster --name <cluster-name>
8. In the AWS CloudFormation console, select the DevServiceCatalog stack, and choose Actions, Delete Stack.
In this post, I demonstrated how to use a GitOps approach, which allows you to focus on committing code and configuration to Git rather than learning new CI/CD tooling. Git acts as the single source of truth, and Weave Flux pulls changes and ensures that the Kubernetes cluster configuration matches the desired state.
In addition, AWS Service Catalog can be used to create a portfolio of services that enables you to standardize your offerings, such as an image build pipeline based on AWS CodePipeline .
As always, AWS welcomes feedback. Please submit comments or questions below.