GCR Image from External Kubernetes

Let’s say you use Google Container Registry (GCR) to store your images. If you run your workloads on GKE (Google Kubernetes Engine) in the same project as GCR, you have access by default: GKE clusters are created with read-only permissions for Storage buckets. But what if you are not running Kubernetes on GCP at all? Or what if you are simply running it from a different GCP project? In that case you need to configure an imagePullSecret. This is one of the scenarios that Config Connector makes simple: creating an imagePullSecret to pull a GCR image from external Kubernetes can be done declaratively in a few steps. And if you are looking for a general discussion of the benefits of using Config Connector, start with this post.

Setup: GCP project, GCR image, external K8s cluster and Config Connector

We’ll start by configuring your non-GKE K8s cluster. When I tested this, I created an AWS EKS cluster using the instructions here, but I’m assuming that you already have a cluster.
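
If you need to spin up a test cluster, a minimal sketch with eksctl could look like the following (the cluster name, region and node count are placeholders; adjust them to your environment):

eksctl create cluster --name cc-demo --region us-west-2 --nodes 2
kubectl get nodes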

Next, let’s create a GCP project, install Config Connector on your cluster, and create a service account that Config Connector will use to access the GCR bucket. Again, if you already have a project, feel free to use only the part of this script that configures Config Connector access.

export PROJECT_ID=[PROJECT_ID]
export SA_EMAIL="cnrm-system@${PROJECT_ID}.iam.gserviceaccount.com"

# create project
gcloud projects create $PROJECT_ID --name="$PROJECT_ID"
gcloud config set project $PROJECT_ID

# to provision Config Connector, create the cnrm-system service account and export its key
gcloud iam service-accounts create cnrm-system --project ${PROJECT_ID}
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member "serviceAccount:${SA_EMAIL}" --role roles/owner
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member "serviceAccount:${SA_EMAIL}" --role roles/storage.admin
gcloud iam service-accounts keys create --iam-account "${SA_EMAIL}" ./key.json

# install Config Connector
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
curl -X GET -sLO --location-trusted https://us-central1-cnrm-eap.cloudfunctions.net/download/latest/infra/install-bundle.tar.gz
rm -rf install-bundle
tar zxvf install-bundle.tar.gz
kubectl apply -f install-bundle/

# give cnrm-system namespace permissions to manage GCP
kubectl create secret generic gcp-key --from-file ./key.json --namespace cnrm-system

# annotate the default namespace with the project ID
kubectl annotate namespace default "cnrm.cloud.google.com/project-id=${PROJECT_ID}" --overwrite
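
Before moving on, it is worth confirming that the Config Connector controller came up cleanly. A quick sanity check (exact pod names vary by version):

kubectl get pods -n cnrm-system
kubectl wait --for=condition=Ready pod --all -n cnrm-system --timeout=120s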

Now let’s upload your image to the GCR bucket, unless you already have an image on GCR. In this example I’m using an image with a simple node app that returns a custom message from its endpoint.

docker pull bulankou/node-hello-world:latest
docker tag bulankou/node-hello-world gcr.io/[PROJECT_ID]/node-hello-world
docker push gcr.io/[PROJECT_ID]/node-hello-world
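
If the push fails with an authorization error, you most likely need to let gcloud configure Docker credentials for gcr.io first. You can then list the repository to confirm the image landed:

gcloud auth configure-docker
gcloud container images list --repository=gcr.io/[PROJECT_ID]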

Configure Image Pull Secret

Now that we have your image on GCR, let’s have Config Connector take ownership of the bucket. This is done by creating a K8s resource that represents the bucket with the same name, usually artifacts.[PROJECT_ID].appspot.com. As usual, apply this yaml with kubectl apply.

apiVersion: storage.cnrm.cloud.google.com/v1alpha2
kind: StorageBucket
metadata:
  name: artifacts.[PROJECT_ID].appspot.com
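
For example, assuming you saved the manifest as bucket.yaml (the file name is arbitrary), apply it and check that Config Connector picked up the resource:

kubectl apply -f bucket.yaml
kubectl get storagebucket
kubectl describe storagebucket artifacts.[PROJECT_ID].appspot.com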

In addition to the bucket, let’s create an IAMServiceAccount, an IAMPolicy and an IAMServiceAccountKey:

apiVersion: iam.cnrm.cloud.google.com/v1alpha1
kind: IAMServiceAccount
metadata:
  name: gcr-sa
spec:
  displayName: Service Account for GCR access
---
apiVersion: iam.cnrm.cloud.google.com/v1alpha1
kind: IAMPolicy
metadata:
  name: gcr-bucket-policy
spec:
  resourceRef:
    apiVersion: storage.cnrm.cloud.google.com/v1alpha2
    kind: StorageBucket
    name: artifacts.[PROJECT_ID].appspot.com
  bindings:
    - role: roles/storage.objectViewer
      members:
        - serviceAccount:gcr-sa@[PROJECT_ID].iam.gserviceaccount.com
---
apiVersion: iam.cnrm.cloud.google.com/v1alpha1
kind: IAMServiceAccountKey
metadata:
  name: gcr-sa-key
spec:
  publicKeyType: TYPE_X509_PEM_FILE
  keyAlgorithm: KEY_ALG_RSA_2048
  privateKeyType: TYPE_GOOGLE_CREDENTIALS_FILE
  serviceAccountRef:
    name: gcr-sa
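
As before, apply these with kubectl. Once Config Connector reconciles them, the service account key secret should show up in the namespace (gcr-access.yaml is just an assumed name for the file holding the manifests above):

kubectl apply -f gcr-access.yaml
kubectl get iamserviceaccount,iampolicy,iamserviceaccountkey
kubectl get secret gcr-sa-key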

The IAMServiceAccountKey resource will automatically create a K8s secret with the same name. Now we just need to create a docker-registry type secret that uses its key.json field as the password:

kubectl create secret docker-registry gcr-docker-key \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-email=user@example.com \
  --docker-password="$(kubectl get secret gcr-sa-key -o go-template=$'{{index .data "key.json"}}' | base64 --decode)"
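
To double-check the result, the new secret should have type kubernetes.io/dockerconfigjson:

kubectl get secret gcr-docker-key -o jsonpath='{.type}'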

We are done configuring the secret to pull the image. Now we can use it in our pod:

apiVersion: v1
kind: Pod
metadata:
  name: node-app-pod
spec:
  containers:
  - name: node-app-container
    image: gcr.io/[PROJECT_ID]/node-hello-world
    imagePullPolicy: Always
    env:
    - name: HELLO_MESSAGE
      value: "Hello from GCR!"
    ports:
    - containerPort: 8080
  imagePullSecrets:
  - name: gcr-docker-key
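
To try it out, apply the pod and make sure the image pull succeeded (pod.yaml is an assumed file name; the app is expected to listen on port 8080, matching containerPort above):

kubectl apply -f pod.yaml
kubectl get pod node-app-pod
kubectl port-forward pod/node-app-pod 8080:8080
# in another terminal
curl localhost:8080

If the pod ends up in ImagePullBackOff, kubectl describe pod node-app-pod will show whether the secret was picked up.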

That’s all for today! We looked at how to create a secret to pull a GCR image from an external Kubernetes workload. The source code for this example is here.
