GitOps with Terraform, Config Connector and Config Sync

in collaboration with Scott Suarez

Terraform, Config Connector and Config Sync can be used together to automate and scale an end-to-end GCP deployment workflow for multiple teams across multiple environments. This post first explains the Platform Admin and App Deployer roles and responsibilities. It then walks through a simple example of using Google Cloud Build and Terraform to provision a GCP project and a GKE cluster with Config Connector and Config Sync for each of the DEV and PROD environments, with these roles in mind.

For each environment, each application development team will have a dedicated Kubernetes namespace. Teams can use this namespace to create Kubernetes native objects, such as pods and deployments for their workloads. With Config Connector, they can also create configs for GCP managed services, such as databases, storage and networking resources, using a familiar Kubernetes model. They do this by checking the Kubernetes configs into a Git repo. The Config Sync and Config Connector extensions then reconcile the desired state with the actual state of the infrastructure.
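
For illustration, here is a minimal sketch of what such a checked-in config might look like. The team name, namespace and bucket name are hypothetical, and the actual resources a team uses will differ:

    # A Config Connector resource an app team might commit to its repo folder.
    # Config Connector creates the corresponding Cloud Storage bucket in the
    # GCP project mapped to this namespace (names below are hypothetical).
    apiVersion: storage.cnrm.cloud.google.com/v1beta1
    kind: StorageBucket
    metadata:
      name: team-a-artifacts
      namespace: team-a
    spec:
      location: US
      uniformBucketLevelAccess: true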

The following is a repository that shows a working example of this model.

Tools Summary

  • Terraform – a popular infrastructure-as-code (IaC) provisioning tool. In this example we use the Terraform GCP provider to create a GCP project and a GKE cluster for each environment.
  • Google Cloud Build – a managed GCP service that runs a builder image. In this scenario, Cloud Build runs Terraform when a Git pull request is validated or submitted. Cloud Build is the recommended way to run Terraform on GCP.
  • Config Connector – a GKE extension that allows provisioning GCP resources in the same way as Kubernetes native resources such as pods, services and deployments.
  • Config Sync – a GKE extension that automatically synchronizes a Kubernetes cluster with a Git repo. When a developer checks configs into the repo, they are applied to the cluster automatically.

Who Will Use This Workflow

This workflow is for a small but rapidly growing software development organization. They use GCP and run containerized workloads on GKE. Multiple teams develop and deploy their microservices independently. This organization has identified the need for a central platform team. Above all, the platform team wants to enable the app/service development teams to continue to iterate fast. At the same time, the platform team wants to ensure consistency, safety and compliance.

We will walk through the steps of configuring this environment from the point of view of two personas: Platform Admin and App Deployer. The table below explains who they are:

Platform Admin and App Deployer

Platform Admin

  • Responsibilities: Platform Admin is a member of the central platform team. They are responsible for creating a consistent and compliant environment for the app development teams. Platform Admin configures environments (DEV, PROD), provisions new teams, GCP projects and K8s clusters, and has permissions to manage Cloud Build triggers and the Config Sync and Config Connector mappings for teams.
  • Expertise: Terraform, Kubernetes tools, GCP

App Deployer

  • Responsibilities: App Deployer is a member of an app development team or an app operations team. They are responsible for authoring, validating and deploying K8s configs within their dedicated namespace.
  • Expertise: Kubernetes tools

Platform Admin Flow

The platform team is responsible for providing consistently managed, easy and safe to operate, auditable environments for multiple application/service development teams. In this example we assume that all the application/service development teams use GKE to run containerized workloads. They also use additional GCP managed services, such as databases and storage. Platform Admin will start by provisioning two environments – DEV and PROD – with one GCP project and one GKE cluster per environment.

DEV and PROD environments

Platform Admin will use Terraform to automate this setup. Using infrastructure as code makes it reviewable and auditable, which is advantageous when provisioning new environments: the configurations are reviewed and stay consistent. Instead of applying the Terraform template directly, Platform Admin will configure a Cloud Build trigger. This way, the terraform init, plan and apply commands run whenever new changes are checked into the branch. The state is persisted in a GCS bucket.

Git repo submit activates Cloud Build Trigger that uses Terraform to provision DEV and PROD environments.

The following is an example of such a Terraform configuration for the dev and prod environments. To configure the Cloud Build trigger, the Platform Admin can add a file like this one, then follow these instructions to create a Cloud Build trigger for the repository. In this example, the Cloud Build trigger includes separate calls to terraform plan and terraform apply, so they can be configured selectively per environment.
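
As a rough sketch of such a build configuration (the image tag, working directory and project layout are assumptions, not taken from the example repo), the Cloud Build file could look like this, with the Terraform state backend pointing at a GCS bucket in the Terraform code itself:

    # cloudbuild.yaml (sketch): runs Terraform whenever the trigger fires.
    # The Terraform backend (GCS bucket) is configured in the Terraform code.
    steps:
    - id: 'terraform init'
      name: 'hashicorp/terraform:1.0.0'
      entrypoint: 'sh'
      args: ['-c', 'cd environments/dev && terraform init']
    - id: 'terraform plan'
      name: 'hashicorp/terraform:1.0.0'
      entrypoint: 'sh'
      args: ['-c', 'cd environments/dev && terraform plan']
    - id: 'terraform apply'
      name: 'hashicorp/terraform:1.0.0'
      entrypoint: 'sh'
      args: ['-c', 'cd environments/dev && terraform apply -auto-approve']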

Environment for Application Development Teams

As a next step, Platform Admin will configure a dedicated environment for each of the application development teams. There will be a separate namespace on each of the Kubernetes clusters for DEV and PROD environments.
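
A minimal sketch of such a per-team namespace is shown below; the team name and project ID are hypothetical. The annotation tells Config Connector which GCP project to create the team's resources in:

    # Dedicated namespace for one team on the DEV cluster (names hypothetical).
    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a
      annotations:
        # Config Connector creates this team's GCP resources in this project.
        cnrm.cloud.google.com/project-id: acme-dev-project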

Application development teams are familiar with Kubernetes. Therefore, they would prefer to use it for both their native Kubernetes objects and their GCP objects. The Platform Admin will accomplish this by enabling Config Connector and Config Sync extensions on the cluster. Config Sync is a GKE add-on that will monitor changes on the repo and apply the configurations to the cluster.
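
One way to wire this up is with a RootSync object pointing at the config repo. The repo URL, branch and directory below are placeholders, and the exact fields depend on the Config Sync version in use:

    # Sketch: Config Sync RootSync watching the DEV folder of the config repo.
    apiVersion: configsync.gke.io/v1beta1
    kind: RootSync
    metadata:
      name: root-sync
      namespace: config-management-system
    spec:
      sourceFormat: unstructured
      git:
        repo: https://github.com/example-org/config-repo   # placeholder URL
        branch: main
        dir: config-root/dev
        auth: none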

Config Sync keeps resources on the cluster in sync with the repo. It provisions both K8s native resources and, with Config Connector’s help, GCP resources.

App Deployer Flow

The App Deployer is a member of the application development team. They can also be part of a separate team responsible for deploying the app or service. Platform Admin has configured a dedicated namespace on the DEV and PROD clusters and linked it to folders in the repo. The App Deployer then has permissions to submit configs to these repo folders, and Config Sync detects and applies the changes.

We differentiate between templated DRY (Don’t Repeat Yourself) and expanded WET (Write Every Time) configs. To apply the changes, Config Sync requires expanded configs. The App Deployer can use tools such as Helm or kpt to expand DRY configs into WET configs. In this example, a separate repo folder contains Helm charts (DRY configs). The App Deployer uses Helm to expand these charts into the directory watched by Config Sync before submitting.
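
As an illustration of the DRY-to-WET step (the chart, paths and image below are hypothetical), the App Deployer might render a chart with helm template into the watched folder and commit the expanded output, which could look like this:

    # Expanded (WET) config committed to the Config Sync watched directory,
    # e.g. rendered with:
    #   helm template team-a-app charts/team-a-app > config-root/team-a/app.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: team-a-app
      namespace: team-a
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: team-a-app
      template:
        metadata:
          labels:
            app: team-a-app
        spec:
          containers:
          - name: app
            image: gcr.io/acme-dev-project/team-a-app:1.0.0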

Putting It All Together

Finally, the diagram illustrates all the parts of the flow:

  1. Platform Admin uses Cloud Build and Terraform to configure the DEV and PROD environments, projects and GKE clusters. Config Sync and Config Connector are also enabled on the clusters.
  2. Each application/service development team gets a dedicated folder in the repo. Config Sync connects these folders to their corresponding Kubernetes namespace.
  3. The App Deployer then uses familiar Kubernetes tools to author both K8s native configs and GCP configs. They submit the configurations into the repository. Config Sync then automatically synchronizes them to the cluster.

This repo contains a step-by-step implementation guide.
