Workload Identity with Config Connector

In the previous post, we discussed how you can use Config Connector to provision isolated and secure workspaces for your teams. For each team, we created a Kubernetes namespace, which we then linked to a team GCP project. We configured permissions on the namespace so that each team can create K8s resources. Likewise, each team had permissions to create GCP resources, but only in its dedicated GCP project. In this post, we’ll extend this configuration: we will enable easy access from Kubernetes workloads to GCP services. Specifically, we’ll be provisioning Workload Identity with Config Connector across a team’s namespace and project.

The steps in this post assume that you followed the previous post to configure namespaces and projects for multiple teams. We will now be creating team-a’s configuration. For complete source code, check this repo. This diagram illustrates the configuration we have so far: multiple namespaces within the cluster, each linked to a dedicated GCP project.

Provisioning multiple teams with Config Connector

Now we will extend this configuration. On the GCP project side, we are going to add a storage bucket and a service account that has access to the bucket. Then, in our cluster, we’ll add a Kubernetes service account, annotated in a way that grants K8s workloads permissions to the bucket.

GCP Resources

Let’s start by creating a storage bucket. As bucket names must be globally unique, replace [PROJECT_ID] with your project ID. As usual, use kubectl apply to create the GCP resource. If you are continuing to follow the steps from the previous post, add --as=team-a-user to your kubectl command to run it as a member of team-a.

apiVersion: storage.cnrm.cloud.google.com/v1alpha2
kind: StorageBucket
metadata:
  name: [PROJECT_ID]-team-a-bucket
  namespace: team-a
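
The bucket above relies entirely on defaults. If you need to pin a location or storage class, the StorageBucket spec accepts additional fields; a sketch, assuming these spec fields are available in your Config Connector version (the values shown are illustrative):

```yaml
apiVersion: storage.cnrm.cloud.google.com/v1alpha2
kind: StorageBucket
metadata:
  name: [PROJECT_ID]-team-a-bucket
  namespace: team-a
spec:
  location: US            # illustrative; a default applies if omitted
  storageClass: STANDARD  # illustrative
  versioning:
    enabled: true         # keep object history; optional
```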

Next, let’s create a service account:

apiVersion: iam.cnrm.cloud.google.com/v1alpha1
kind: IAMServiceAccount
metadata:
  name: sa-bucket-team-a
  namespace: team-a
spec:
  displayName: service account for bucket access

Let’s now give the service account permissions to the bucket. Don’t forget to replace [PROJECT_ID] with the name of your project:

apiVersion: iam.cnrm.cloud.google.com/v1alpha1
kind: IAMPolicy
metadata:
  name: bucket-policy-team-a
  namespace: team-a
spec:
  resourceRef:
    apiVersion: storage.cnrm.cloud.google.com/v1alpha2
    kind: StorageBucket
    name: [PROJECT_ID]-team-a-bucket
    namespace: team-a
  bindings:
    - role: roles/storage.admin
      members:
        - serviceAccount:sa-bucket-team-a@[PROJECT_ID]-team-a.iam.gserviceaccount.com
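
Note that IAMPolicy sets the full policy on the referenced resource, replacing any existing bindings. If you would rather add a single binding without touching the rest of the policy, Config Connector also offers IAMPolicyMember; a sketch of the equivalent grant (the resource name bucket-policy-member-team-a is my own choice):

```yaml
apiVersion: iam.cnrm.cloud.google.com/v1alpha1
kind: IAMPolicyMember
metadata:
  name: bucket-policy-member-team-a
  namespace: team-a
spec:
  resourceRef:
    apiVersion: storage.cnrm.cloud.google.com/v1alpha2
    kind: StorageBucket
    name: [PROJECT_ID]-team-a-bucket
  # one role/member pair, added on top of the existing policy
  role: roles/storage.admin
  member: serviceAccount:sa-bucket-team-a@[PROJECT_ID]-team-a.iam.gserviceaccount.com
```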

Configuring Workload Identity

Workload Identity requires three pieces:

  1. A Google service account, which we just created above;
  2. A Kubernetes service account, which we are about to create;
  3. A policy that connects them, which we will create next.

Let’s create a Kubernetes service account. This is a standard Kubernetes object, not a Config Connector resource:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ksa-bucket-team-a
  namespace: team-a
  annotations:
    iam.gke.io/gcp-service-account: sa-bucket-team-a@[PROJECT_ID]-team-a.iam.gserviceaccount.com

Most importantly, note the annotation that we added – it links our Kubernetes service account to the Google service account that we created earlier. Now, let’s create a policy on the Google service account, linking it to the Kubernetes service account:

apiVersion: iam.cnrm.cloud.google.com/v1alpha1
kind: IAMPolicy
metadata:
  name: sa-wi-policy-team-a
  namespace: team-a
spec:
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1alpha1
    kind: IAMServiceAccount
    name: sa-bucket-team-a
    namespace: team-a
  bindings:
    - role: roles/iam.workloadIdentityUser
      members:
        - serviceAccount:[PROJECT_ID].svc.id.goog[team-a/ksa-bucket-team-a]
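
The member string here follows the pattern serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]. If you are scripting this for several teams, a tiny helper can build it; wi_member is a hypothetical function of my own, shown only to make the format explicit:

```shell
# Build the Workload Identity member string for an IAMPolicy binding.
# Usage: wi_member PROJECT_ID NAMESPACE KSA_NAME  (hypothetical helper)
wi_member() {
  local project="$1" namespace="$2" ksa="$3"
  echo "serviceAccount:${project}.svc.id.goog[${namespace}/${ksa}]"
}

wi_member my-project team-a ksa-bucket-team-a
# serviceAccount:my-project.svc.id.goog[team-a/ksa-bucket-team-a]
```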

You can apply all the resources above in one step by pointing kubectl at the folder: kubectl apply -f resources/ --as=team-a-user.

Validating Access

Let us now validate access to ensure that everything was configured correctly. In order to do this, we will start a pod running Google Cloud SDK and execute commands to validate the access permissions that we have:

# run a pod with Google Cloud SDK
kubectl run -it \
    --generator=run-pod/v1 \
    --image google/cloud-sdk \
    --serviceaccount ksa-bucket-team-a \
    --namespace team-a \
    team-a-ksa-test --as=team-a-user

Note that we passed our service account name into the pod. With a declarative pod configuration, this is equivalent to specifying serviceAccountName on the pod spec.
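
For reference, a declarative equivalent of the kubectl run command above might look like this; the sleep command is my own addition to keep the pod alive for an interactive session:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: team-a-ksa-test
  namespace: team-a
spec:
  serviceAccountName: ksa-bucket-team-a  # the KSA created earlier
  containers:
    - name: cloud-sdk
      image: google/cloud-sdk
      command: ["sleep", "infinity"]  # keep the pod running so we can exec into it
```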

After the pod starts, run the following commands on the pod:

gcloud auth list # google service account should be listed

# create file
echo some text > f1.txt

# copy to bucket, should succeed
gsutil cp f1.txt gs://[PROJECT_ID]-team-a-bucket

# list files in bucket, should succeed
gsutil ls gs://[PROJECT_ID]-team-a-bucket

As you can see, the permissions of the Google service account were propagated to the Kubernetes service account. In other words, Workload Identity enabled easy access from our pod to the bucket.

This is it! We provisioned Workload Identity with Config Connector to enable easy access from workloads to GCP resources. This repo has the complete source code I used in this post. For completeness, it also includes team-b’s copy of the configuration we just created for team-a – follow the steps here.
