
SuperOrbital: aws-eks-blueprint-examples

Table of Contents

  • Overview
  • Preparation
  • Order of Operations
  • AWS Authentication
  • ArgoCD
  • Addons (via ArgoCD)
  • Workloads (via ArgoCD)
  • Troubleshooting

Overview

  • AWS EKS Blueprint examples using Terraform, built on the upstream EKS Blueprints project from AWS Integration and Automation.

Note: These are examples. They are intended to be used as a starting point for creating your own infrastructure, but you should not deploy this into a live environment without understanding it and configuring it for your requirements.

Preparation

Terraform

  • The AWS EKS Blueprints modules use a few experimental Terraform features. Terraform will produce a warning that makes this clear to the user.
  • For simplicity's sake within this repo, we are using various *.auto.tfvars files to provide some global variables to multiple underlying directories. HashiCorp prefers that you define TF_VAR_* environment variables for this purpose instead. Because of this you will see some warnings from terraform about unused variables; you can make them much less noisy by adding the -compact-warnings argument to your terraform commands, as shown below.
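For example (the variable name in the TF_VAR_* alternative is purely illustrative; this repo's actual variable names may differ):

# Quieter output when relying on the shared *.auto.tfvars files
terraform plan -compact-warnings

# HashiCorp's preferred alternative: supply globals as environment variables
export TF_VAR_aws_region=us-west-2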

Order of Operations

When you first spin everything up, you should terraform init, plan, and apply the directories in the following order (a scripted version of this sequence follows the list):

  1. accounts/sandbox/network/primary (~2+ minutes)
  2. accounts/sandbox/eks/poc (~30+ minutes)
  3. accounts/sandbox/k8s/poc (~20+ minutes)
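As a sketch, the same sequence can be scripted, assuming Terraform 0.14 or newer for the -chdir flag and that the directories above are relative to the repository root (each apply will still prompt for confirmation):

for dir in \
  accounts/sandbox/network/primary \
  accounts/sandbox/eks/poc \
  accounts/sandbox/k8s/poc; do
  terraform -chdir="$dir" init
  terraform -chdir="$dir" plan -compact-warnings
  terraform -chdir="$dir" apply -compact-warnings
done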

Then you can authenticate with the ArgoCD UI and experiment with the cluster.

AWS Authentication

By default, the terraform code in this repo expects you to have an AWS profile called sandbox defined in your ~/.aws/credentials file.
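If you do not already have that profile, one way to create it is with the AWS CLI (the access key values it prompts for are yours to supply):

aws configure --profile sandbox
# Prompts for:
#   AWS Access Key ID [None]:
#   AWS Secret Access Key [None]:
#   Default region name [None]:    e.g. us-west-2
#   Default output format [None]:  e.g. json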

$ aws --profile=sandbox eks update-kubeconfig --region us-west-2 --name the-a-team-sbox-poc-eks
Added new context arn:aws:eks:us-west-2:000000000000:cluster/the-a-team-sbox-poc-eks to ~/.kube/config

$ kubectl get all -A
...

ArgoCD

Ideally, you should pass in a pre-made ArgoCD password, but for the moment you can connect and authenticate to ArgoCD using the default admin password (a combined sketch follows these steps):

  • Get the default ArgoCD admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
  • Forward a local port to the ArgoCD Server service
kubectl port-forward -n argocd svc/argo-cd-argocd-server 8080:443
  • Open up a web browser and navigate to: https://localhost:8080
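As a convenience, these steps can be combined into one snippet; the background port-forward is just one way to run it, and the service name and ports are taken from the commands above:

# Capture the default admin password and start the port-forward in the background
ARGOCD_PASSWORD="$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d)"
kubectl port-forward -n argocd svc/argo-cd-argocd-server 8080:443 >/dev/null 2>&1 &
echo "Log in at https://localhost:8080 as 'admin' with password: ${ARGOCD_PASSWORD}"

The same background port-forward pattern works for the Kubernetes Dashboard, Prometheus, and team workload services below.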

Addons (via ArgoCD)

Kubernetes Dashboard

  • Forward a local port to the Kubernetes Dashboard service
kubectl port-forward -n kube-system svc/kubernetes-dashboard 8081:443
  • Grab a current valid token for the cluster.

    • If you have jq installed locally you can use the output from:
      • aws --profile=sandbox eks get-token --cluster-name the-a-team-sbox-poc-eks | jq -r '.["status"]["token"]'
    • If not, you can use the token value from this command:
      • aws --profile=sandbox eks get-token --cluster-name the-a-team-sbox-poc-eks
  • Open up a web browser and navigate to: https://localhost:8081

  • To authenticate, select Token and provide the token from the previous command.

Prometheus

  • Forward a local port to the Prometheus Server service
kubectl port-forward -n prometheus svc/prometheus-server 8082:80

Workloads (via ArgoCD)

Team Riker (Guestbook)

  • Forward a local port to the Guestbook UI service
kubectl port-forward -n team-riker svc/guestbook-ui 8090:80

Team Burnham (Nginx)

  • Forward a local port to the Nginx service
kubectl port-forward -n team-burnham svc/nginx 8091:80

Team Carmen (Geolocation API)

  • Forward a local port to the Geolocation API service
kubectl port-forward -n geolocationapi svc/geolocationapi 8092:5000

Troubleshooting

EKS Issues

  • My worker nodes are not joining the cluster.
    • Cause
      • There are various reasons this can happen, but some of the most likely are:
        • The subnets that contain the EC2 instances do not have internet access.
        • The user-data script failed to run properly.
          • This can sometimes manifest as nodes that have successfully joined the Kubernetes cluster (because the bootstrap.sh script in the user-data ran successfully) while the node group still reports Create failed, because something else in the user-data script produced an error.
    • Solution
      • Make sure that your EC2 instances can reach the eks.amazonaws.com endpoints.
      • Make sure that the user-data script is not erroring out on the node and that the bootstrap script is run last (and only once); see the sketch below for one way to check.
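      One way to inspect the user-data and bootstrap behavior on a failing node, assuming an Amazon Linux EKS AMI and working SSM access (the instance ID is a placeholder):
        # Open a shell on the instance (requires the SSM agent and a role that allows Session Manager)
        aws --profile=sandbox ssm start-session --target i-0123456789abcdef0
        # On the node, review the user-data/cloud-init output and the kubelet logs
        sudo tail -n 100 /var/log/cloud-init-output.log
        sudo journalctl -u kubelet --no-pager | tail -n 50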

Terraform Issues

  • Error: error creating EKS Add-On (*): InvalidParameterException: Addon version specified is not supported

    • Cause

      • This means that you are trying to install a version of the EKS addon that is not compatible with the version of k8s that you are using in the EKS cluster.
    • Solution

      • Determine which versions of the EKS addon are available by running aws eks describe-addon-versions (see the example after the config block below) and then add an appropriate config block for that addon.
      • e.g.
      amazon_eks_coredns_config = {
        // We need an older version since we aren't using the latest EKS k8s version
        addon_version = "v1.8.3-eksbuild.1"
      }
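      For example, to list the CoreDNS addon versions available for a given Kubernetes version (the addon name and cluster version here are illustrative; jq is optional):
        aws --profile=sandbox eks describe-addon-versions \
          --addon-name coredns \
          --kubernetes-version 1.21 \
          | jq -r '.addons[].addonVersions[].addonVersion'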
  • Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused

    • Cause
      • This typically occurs if the EKS cluster fails to come up fully (e.g. The worker nodes fail to connect to the cluster) or if the EKS cluster is deleted without Terraform's knowledge.
    • Solution
      • Run terraform state rm module.eks_blueprints.kubernetes_config_map.aws_auth[0] from the appropriate terraform directory; for example:
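      A sketch of that cleanup, assuming the EKS stack directory from this repo; the quotes keep the shell from interpreting the brackets in the resource address:
        cd accounts/sandbox/eks/poc
        terraform state list | grep aws_auth
        terraform state rm 'module.eks_blueprints.kubernetes_config_map.aws_auth[0]'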
