How to Provision Config Connector with Terraform

This post will show, step by step, how to provision Config Connector with Terraform.

If you are new to Config Connector, start with Why Kubernetes Config Connector. Also, since this blog started, the project has matured: it went GA on GCP in early 2020 and now has extensive documentation on GCP.

If you already have a K8s cluster with Config Connector installed, you can start using all its capabilities to provision GCP resources, as described in other posts on this blog, such as WordPress on Kubernetes with GCP and Workload Identity.

This post demonstrates the easiest way to configure a dedicated GKE cluster with Config Connector that will be used to provision the other GCP resources – compute, storage, databases and anything else that Config Connector supports.

Previously, I used a brittle shell script; in fact, that is what other posts use. However, after the Config Connector GKE add-on went GA, the Terraform GCP google_container_cluster resource added support for config_connector_config. Terraform then became the easiest and recommended way to provision that first cluster with Config Connector enabled.

This post shows how to create a single Terraform script that provisions a GCP project and a GKE cluster with Config Connector.

First, let’s create the project. Here I’m making project, folder_id, and billing_account variables. Feel free to change this part or skip it altogether if you already have a project that you want to use.

variable "project" {}
variable "folder_id" {}
variable "billing_account" {}

locals {
  region = "us-central1"
  zone   = "us-central1-b"
}

provider "google-beta" {
  region = local.region
  zone   = local.zone
}

resource "random_id" "id" {
  byte_length = 4
  prefix      = "${var.project}-"
}

resource "google_project" "root_project" {
  name            = var.project
  project_id      = random_id.id.hex
  folder_id       = var.folder_id
  billing_account = var.billing_account
}
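For context, random_id appends four random bytes (eight hex characters) to the project name, so the resulting project_id stays globally unique across runs. A rough shell equivalent, with illustrative names:

```shell
# sketch of what random_id + prefix produces, e.g. "my-project-3fa9c21b"
PROJECT="my-project"           # stands in for var.project
SUFFIX=$(openssl rand -hex 4)  # 4 random bytes -> 8 hex characters
PROJECT_ID="${PROJECT}-${SUFFIX}"
echo "$PROJECT_ID"
```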

Next, I’m enabling the cloudresourcemanager and container APIs on the project. You might already have them enabled; here I’m including everything needed to make it work from scratch.

resource "google_project_service" "crmservice" {
  project = google_project.root_project.project_id
  service = "cloudresourcemanager.googleapis.com"

  disable_dependent_services = true
}

resource "google_project_service" "containerservice" {
  project = google_project.root_project.project_id
  service = "container.googleapis.com"

  disable_dependent_services = true

  depends_on = [
    google_project_service.crmservice,
  ]
}

GKE Cluster and Config Connector

Now you can configure the GKE cluster with the Config Connector add-on. I follow the standard recommended pattern of removing the default node pool and creating a custom pool instead.

resource "google_container_cluster" "primary" {
  provider = google-beta
  name     = "cluster-1"
  project  = google_project.root_project.project_id
  location = local.zone

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }

  workload_identity_config {
    identity_namespace = "${google_project.root_project.project_id}.svc.id.goog"
  }

  addons_config {
    config_connector_config {
      enabled = true
    }
  }

  depends_on = [
    google_project_service.containerservice,
  ]
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "primary-node-pool"
  project    = google_project.root_project.project_id
  cluster    = google_container_cluster.primary.name
  location   = google_container_cluster.primary.location
  node_count = 3

  node_config {
    preemptible  = true
    machine_type = "e2-medium"

    metadata = {
      disable-legacy-endpoints = "true"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }
}

Note that I’m enabling the Workload Identity feature on the cluster. This is the recommended way to propagate GCP API permissions to K8s service accounts: it grants the required access without having to export service account keys.

In this last part, I’m creating the GCP service account that Config Connector will use to provision GCP resources and giving it owner access (you can choose to grant it narrower permissions). Then I’m adding an IAM binding that connects this GCP service account to the K8s service account created as part of the Config Connector add-on installation.

resource "google_service_account" "cnrmsa" {
  account_id   = "cnrmsa"
  project      = google_project.root_project.project_id
  display_name = "IAM service account used by Config Connector"
}

resource "google_project_iam_binding" "project" {
  project = google_project.root_project.project_id
  role    = "roles/owner"

  members = [
    "serviceAccount:${google_service_account.cnrmsa.email}",
  ]

  depends_on = [
    google_service_account.cnrmsa,
  ]
}

resource "google_service_account_iam_binding" "admin-account-iam" {
  service_account_id = google_service_account.cnrmsa.name
  role               = "roles/iam.workloadIdentityUser"

  members = [
    "serviceAccount:${google_project.root_project.project_id}.svc.id.goog[cnrm-system/cnrm-controller-manager]",
  ]

  depends_on = [
    google_container_cluster.primary,
  ]
}

output "project_id" {
  value       = google_project.root_project.project_id
  description = "Created project id"
}

If you choose to use the script above, you can paste all the code snippets into a single TF file. Then you can run it with:

# login to GCP
gcloud auth application-default login

# run the script
terraform apply -var="project=PROJECT_ID" \
                -var="folder_id=FOLDER_ID" \
                -var="billing_account=BILLING_ACCOUNT"

When the script completes, it prints the project_id output variable (you can also read it later with terraform output -raw project_id). You will use it in the next couple of steps required to complete the installation:


Use gcloud to set the context on the newly created cluster:

gcloud config set project $PROJECT_ID
gcloud container clusters get-credentials cluster-1 \
    --zone us-central1-b

Create and apply the ConfigConnector object instance:

cat <<EOF | kubectl apply -f -
apiVersion: core.cnrm.cloud.google.com/v1beta1
kind: ConfigConnector
metadata:
  name: configconnector.core.cnrm.cloud.google.com
spec:
  mode: cluster
  googleServiceAccount: "cnrmsa@${PROJECT_ID}.iam.gserviceaccount.com"
EOF

And the final installation step: annotate the namespace to indicate which project resources will be created in. In this example, I’m annotating the default namespace:

kubectl annotate namespace default cnrm.cloud.google.com/project-id=$PROJECT_ID

That’s it! Your Config Connector should be ready, and you can verify it by running this command:

kubectl wait -n cnrm-system --for=condition=Ready pod --all
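Once the pods are ready, a quick way to sanity-check the whole setup is to create a GCP resource through Config Connector. For example, a minimal Cloud Storage bucket (a sketch — the bucket name below is a placeholder and must be globally unique):

```yaml
# bucket.yaml -- apply with: kubectl apply -f bucket.yaml
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  # placeholder name; Cloud Storage bucket names are globally unique
  name: my-unique-bucket-name
spec:
  location: US
```

After applying it to the annotated default namespace, kubectl get storagebucket should show the resource reaching a ready state once the bucket exists in the project.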

This repo includes the full Terraform script and the steps to automate ConfigConnector resource modification with Helm.
