With Config Connector you can provision your Google Cloud resources the same way you provision your Kubernetes workloads. As we explain this concept to organizations, we see excitement about the declarative, idempotent, eventually consistent and self-healing model. We often demo Config Connector to platform teams, which are responsible for spinning up infrastructure for multiple groups within their organization. The question that immediately comes up is: how do we use Config Connector to provision multiple teams? We would like a dedicated workspace for each team, set up in a way that is transparent, safe, and ensures compliance, security and isolation.
This post will show, step by step, how to create exactly that: an isolated and secure workspace per team.
Requirements
First, let us summarize the requirements:
- Create a dedicated K8s namespace per team. Give the team permissions only to create resources in that namespace.
- Create a dedicated GCP project per team. Give the team permissions only to create GCP resources in that project.
- Make it easy to provision cloud infrastructure by creating Config Connector resource proxies in the dedicated namespace.
- Enable easy access to GCP resources from Kubernetes workloads within the dedicated namespace. It should be declarative, without having to export service account keys.
In this post we’ll build a solution to satisfy requirements #1, #2 and #3. We’ll focus on the last requirement in the next post.
Projects
Let’s start by creating a Kubernetes cluster and installing Config Connector, using the same setup script as in the earlier posts. Don’t forget to substitute your [PROJECT_ID] and [BILLING_ACCOUNT]. You can also skip the part that creates a project if you already have one.
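If you don’t have that script handy, a minimal sketch of the setup might look like this (the cluster name and zone are placeholders; the cnrm-system service account matches the one referenced below, and the Config Connector installation step itself should follow the official instructions):

export PROJECT_ID=[PROJECT_ID]
export BILLING_ACCOUNT=[BILLING_ACCOUNT]

# host project for the cluster (skip if you already have one)
gcloud projects create ${PROJECT_ID}
gcloud beta billing projects link ${PROJECT_ID} --billing-account ${BILLING_ACCOUNT}
gcloud config set project ${PROJECT_ID}
gcloud services enable container.googleapis.com cloudresourcemanager.googleapis.com

# GKE cluster that will host Config Connector
gcloud container clusters create cnrm-cluster --zone us-central1-a
gcloud container clusters get-credentials cnrm-cluster --zone us-central1-a

# service account that the Config Connector controller will run as
gcloud iam service-accounts create cnrm-system

# finally, install Config Connector into the cluster by applying its
# release manifests with kubectl apply (see the official installation docs)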
Now we’ll create Google Cloud projects team-a and team-b. They are intended to host the resources of the respective teams.
export PROJECT_ID=[PROJECT_ID]
export SA_EMAIL="cnrm-system@${PROJECT_ID}.iam.gserviceaccount.com"

# configure project and permissions for team-a
# create dedicated project for team-a
gcloud projects create ${PROJECT_ID}-team-a --name=${PROJECT_ID}-team-a

# give Config Connector access to team-a dedicated project
gcloud projects add-iam-policy-binding ${PROJECT_ID}-team-a \
  --member "serviceAccount:${SA_EMAIL}" --role roles/owner
gcloud projects add-iam-policy-binding ${PROJECT_ID}-team-a \
  --member "serviceAccount:${SA_EMAIL}" --role roles/storage.admin

# create dedicated project for team-b
gcloud projects create ${PROJECT_ID}-team-b --name=${PROJECT_ID}-team-b

# give Config Connector access to team-b dedicated project
gcloud projects add-iam-policy-binding ${PROJECT_ID}-team-b \
  --member "serviceAccount:${SA_EMAIL}" --role roles/owner
gcloud projects add-iam-policy-binding ${PROJECT_ID}-team-b \
  --member "serviceAccount:${SA_EMAIL}" --role roles/storage.admin
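To double-check the bindings (an optional sanity check; the filter expression is just one way to query the policy), you can list the roles granted to the Config Connector service account in a team project:

$ gcloud projects get-iam-policy ${PROJECT_ID}-team-a \
    --flatten="bindings[].members" \
    --filter="bindings.members:${SA_EMAIL}" \
    --format="value(bindings.role)"

The output should include roles/owner and roles/storage.admin.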
Team Namespace and Permissions
Next, we’ll create a Kubernetes team-a namespace in the cluster that we created earlier. This is the namespace dedicated to team-a’s resources. Substitute [PROJECT_ID] with the ID of your project and use kubectl apply to execute the following YAML:
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    cnrm.cloud.google.com/project-id: [PROJECT_ID]-team-a
  name: team-a
The cnrm.cloud.google.com/project-id annotation is what links the namespace to the project. All the Config Connector K8s resources that we create in this namespace will instantiate Google Cloud resources in the specified project.
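To confirm the link, you can read the annotation back from the cluster:

$ kubectl get namespace team-a -o yaml

The cnrm.cloud.google.com/project-id annotation should appear under metadata.annotations with the value [PROJECT_ID]-team-a.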
Now, let’s give team-a-user permissions to edit general objects and Config Connector resources within that namespace. I’m going to create two RoleBinding objects to set this up. I am directly referencing team-a-user in my example; in practice, you would work with groups. See the RBAC Authorization documentation for more details. There’s also a GKE feature that lets you use your G Suite groups in Kubernetes.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-role-binding
  namespace: team-a
subjects:
- kind: User
  name: team-a-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-cnrm-manager-role-binding
  namespace: team-a
subjects:
- kind: User
  name: team-a-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cnrm-manager-role
  apiGroup: rbac.authorization.k8s.io
Note that the second RoleBinding uses cnrm-manager-role, which we installed as part of the Config Connector setup (for reference, here is my copy). This ClusterRole describes the RBAC permissions for all Config Connector resources.
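If you’re curious which API groups and verbs that role grants, you can inspect your cluster’s copy directly:

$ kubectl get clusterrole cnrm-manager-role -o yaml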
As the next step, we’ll repeat the same steps for team-b:
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    cnrm.cloud.google.com/project-id: [PROJECT_ID]-team-b
  name: team-b
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-b-role-binding
  namespace: team-b
subjects:
- kind: User
  name: team-b-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-b-cnrm-manager-role-binding
  namespace: team-b
subjects:
- kind: User
  name: team-b-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cnrm-manager-role
  apiGroup: rbac.authorization.k8s.io
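If you expect to onboard more than a couple of teams, you may prefer to generate these manifests rather than copy them by hand. Here is a minimal sketch using a shell loop and a heredoc (the team list is illustrative, and it assumes each team’s project is named ${PROJECT_ID}-<team> as above):

for TEAM in team-a team-b; do
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    cnrm.cloud.google.com/project-id: ${PROJECT_ID}-${TEAM}
  name: ${TEAM}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ${TEAM}-role-binding
  namespace: ${TEAM}
subjects:
- kind: User
  name: ${TEAM}-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ${TEAM}-cnrm-manager-role-binding
  namespace: ${TEAM}
subjects:
- kind: User
  name: ${TEAM}-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cnrm-manager-role
  apiGroup: rbac.authorization.k8s.io
EOF
done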
Validation and Risks
Let’s now verify the permissions using the kubectl auth can-i command. As you can see, team-a-user can create Kubernetes native objects and cloud resources in the team-a namespace, but not in the team-b or default namespaces:
$ kubectl auth can-i create pods --namespace team-a --as=team-a-user
yes
$ kubectl auth can-i create sqlinstances --namespace team-a --as=team-a-user
yes
$ kubectl auth can-i create sqlinstances --namespace team-b --as=team-a-user
no
$ kubectl auth can-i create sqlinstances --as=team-a-user
no
Did we configure everything as expected? Let’s actually create a GCP resource impersonating team-a-user. Save the following manifest as gcp-bucket.yaml:

apiVersion: storage.cnrm.cloud.google.com/v1alpha2
kind: StorageBucket
metadata:
  name: [PROJECT_ID]-team-a-storage-bucket
  namespace: team-a

and apply it:

$ kubectl apply -f gcp-bucket.yaml --as=team-a-user
The resource will be created in the team-a project.
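To confirm, you can check both the Kubernetes resource and the bucket itself (gsutil ls -p lists the buckets owned by a project; this assumes the Cloud SDK is authenticated with sufficient permissions):

$ kubectl get storagebuckets --namespace team-a
$ gsutil ls -p [PROJECT_ID]-team-a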
The question that is often asked: can’t you just change the annotation on your namespace, pointing it to a different GCP project and potentially breaking (or consuming the quota of) another team? This is mitigated by the fact that permissions to edit the namespace itself are not part of the edit ClusterRole that team-a-user has. In fact, if we try to edit the namespace, we get an error.
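For example, an attempt to repoint the annotation to another team’s project (the command below is just one way to try it) is rejected:

$ kubectl annotate namespace team-a cnrm.cloud.google.com/project-id=[PROJECT_ID]-team-b --overwrite --as=team-a-user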
error: namespaces "team-a" could not be patched: namespaces "team-a" is forbidden: User "team-a-user" cannot patch resource "namespaces" in API group "" in the namespace "team-a"
Finally, the bigger problem, as of the time of this writing, is the permissions of the Config Connector service account: they span the projects of all the teams. This large blast radius creates significant risk if the account is compromised. See this GitHub issue for a discussion on addressing it in a future release.
That’s it for today. We looked at provisioning multiple teams with Config Connector by creating a dedicated K8s namespace and GCP project per team. In the next post, we’ll extend this solution to satisfy the last requirement: enabling easy access to GCP resources from Kubernetes workloads within the dedicated namespace, declaratively and without having to export service account keys.