In the first part of this post on multi-cluster ingress, we covered the simpler options for exposing your cluster to external traffic: ClusterIP, NodePort, LoadBalancer, and Ingress. In the second part we created regional clusters spanning multiple zones. This final part focuses on adding the load balancing resources using Config Connector: creating a firewall rule, a backend service, a URL map, an HTTP target proxy, and a global forwarding rule, and then trying out how the load balancing works in practice.
First of all, let’s switch the context back to our main cluster with Config Connector:
gcloud container clusters get-credentials cluster-1 --zone=us-central1-b
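If you want to double-check that kubectl now points at the Config Connector cluster, print the current context (the exact name is derived from your project, zone, and cluster):

kubectl config current-context
# e.g. gke_[PROJECT]_us-central1-b_cluster-1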
Firewall Rule
Now let’s configure a firewall rule to allow load balancer and health check traffic to reach this port. In our configuration we’ll reference the default network. To start managing it with Config Connector, we need to create a network resource named default. Note the abandon deletion policy: it tells Config Connector to leave the underlying network in place if the Kubernetes resource is ever deleted.
apiVersion: compute.cnrm.cloud.google.com/v1alpha3
kind: ComputeNetwork
metadata:
  name: default
  annotations:
    # Leave the underlying network intact if this resource is deleted
    cnrm.cloud.google.com/deletion-policy: abandon
spec:
  description: Default network for the project
---
apiVersion: compute.cnrm.cloud.google.com/v1alpha3
kind: ComputeFirewall
metadata:
  name: fw-allow-mci-neg
spec:
  allow:
  - protocol: tcp
  # IP ranges used by Google Cloud load balancers and health checks
  sourceRanges:
  - "130.211.0.0/22"
  - "35.191.0.0/16"
  networkRef:
    name: default
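Save both manifests to a file and apply them with kubectl; the file name below is just an example. You can then confirm that Config Connector created the rule on the Google Cloud side:

kubectl apply -f firewall.yaml
gcloud compute firewall-rules describe fw-allow-mci-neg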
At this point we are ready to create a backend service.
Backend Service
In order to configure the backend service with the Network Endpoint Groups (NEGs) created in the previous part, let’s retrieve their URIs. In the future we are looking at how to improve this step so that this wiring happens automatically.
gcloud compute network-endpoint-groups list --format="value(uri())"
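This prints one full resource URI per NEG. The exact values depend on your project and zones; each URI roughly follows this shape (placeholders, not real values):

https://www.googleapis.com/compute/v1/projects/[PROJECT]/zones/[ZONE]/networkEndpointGroups/[NEG_NAME]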
Apply the following configuration to provision the backend service and a health check. In the snippet below, replace [NEG1] and [NEG2] with the URIs from the previous step.
apiVersion: compute.cnrm.cloud.google.com/v1alpha3
kind: ComputeBackendService
metadata:
  name: node-app-backend-service
  labels:
    retry: again
spec:
  backend:
  - group: "[NEG1]"
    balancingMode: RATE
    maxRate: 100
  - group: "[NEG2]"
    balancingMode: RATE
    maxRate: 100
  healthCheckRef:
    name: node-app-backend-healthcheck
  protocol: HTTP
  location: global
---
apiVersion: compute.cnrm.cloud.google.com/v1alpha3
kind: ComputeHealthCheck
metadata:
  name: node-app-backend-healthcheck
spec:
  checkIntervalSec: 10
  tcpHealthCheck:
    port: 8080
  location: global
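After applying, you can query the health of every backend straight from the load balancer. A quick check using the names from the manifest above:

gcloud compute backend-services get-health node-app-backend-service --global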
After the backend service is created, you should be able to see its visual representation in the Cloud Console UI, even though it doesn’t yet have a frontend attached to it. Note that each of the two NEGs has 2 healthy endpoints.

Target URL Map and HTTP Proxy
First of all, we are going to create a target URL map. It references the node-app-backend-service that we created in the previous step.
apiVersion: compute.cnrm.cloud.google.com/v1alpha3
kind: ComputeURLMap
metadata:
  name: node-app-url-map
spec:
  defaultService:
    backendServiceRef:
      name: node-app-backend-service
  location: global
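Because the URL map only defines a default service, every path is routed to node-app-backend-service. You can inspect the provisioned resource to confirm:

gcloud compute url-maps describe node-app-url-map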
Next, let’s create a target HTTP proxy that references our URL map:
apiVersion: compute.cnrm.cloud.google.com/v1alpha3
kind: ComputeTargetHTTPProxy
metadata:
  name: node-app-target-proxy
spec:
  description: Proxy for node app
  urlMapRef:
    name: node-app-url-map
  location: global
Global Forwarding Rule
Finally, we are going to create a global forwarding rule. It points to the target HTTP proxy and exposes port 80 as the external port of our service. Use the following snippet to create it:
apiVersion: compute.cnrm.cloud.google.com/v1alpha3
kind: ComputeForwardingRule
metadata:
  name: node-app-forwarding-rule
spec:
  target:
    targetHTTPProxyRef:
      name: node-app-target-proxy
  portRange: "80"
  ipProtocol: "TCP"
  ipVersion: "IPV4"
  location: global
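Config Connector records the provisioning state in the resource’s status conditions, so you can watch the forwarding rule until it is ready before testing it:

kubectl describe computeforwardingrule node-app-forwarding-rule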
As we are not specifying a named address, an ephemeral IP address will be created for us. You can retrieve it by running:
$ gcloud compute forwarding-rules list
NAME                      REGION  IP_ADDRESS      IP_PROTOCOL  TARGET
node-app-forwarding-rule          [your address]  TCP          node-app-target-proxy
If you try curl-ing your address, you should see “Hello from North America” or “Hello from Europe” depending on which region is closer to your location. Additionally, in the Cloud Console UI you can see a visual representation of how your traffic is routed:
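To watch the routing continuously, a simple loop is enough; substitute the IP address retrieved in the previous step:

while true; do curl -s http://[your address]/; sleep 1; done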

Load Balancing In Action
Now that we have configured global forwarding with Config Connector, let’s see the load balancing in action: try changing one of the deployments to point at a non-existent image, then killing and redeploying it. If you keep curl-ing, you will see 502 error codes for a short while, and then the service will recover and start serving responses from the region that is not closest to you.
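Here is a minimal sketch of breaking the closest region on purpose. The deployment and image names are assumptions based on the sample app from the earlier parts, and you need to switch kubectl to the cluster in the region closest to you first:

# Point the deployment at an image tag that doesn't exist (names are assumed)
kubectl set image deployment/node-app node-app=gcr.io/[PROJECT]/node-app:does-not-exist
# Keep printing HTTP status codes; expect a burst of 502s, then 200s again
while true; do curl -s -o /dev/null -w "%{http_code}\n" http://[your address]/; sleep 1; done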

This completes our series of posts on configuring multi-cluster ingress with Config Connector. In this final part we provisioned the firewall rule, backend service, URL map, target HTTP proxy, and global forwarding rule. The accompanying repo has all the code snippets and short step-by-step instructions for all parts.