TL;DR: In this tutorial, you will learn how to create, connect and operate three Kubernetes clusters in different regions: North America, Europe and South East Asia.
One interesting challenge with Kubernetes is deploying workloads across several regions.
While you can technically have a cluster with several nodes located in different regions, this is generally regarded as something you should avoid due to the extra latency.
Another popular alternative is to deploy a cluster for each region and find a way to orchestrate them.
Placing nodes in a multicluster setup
But before discussing solutions, let’s look at the challenges of a multi-cluster and multi-cloud setup.
When you orchestrate several clusters, you have to face the following issues:
How do you decide how to split the workloads?
How does the networking work across regions?
What should you do with stateful apps and data?
Challenges of running a Kubernetes multicluster setup
Let’s try to answer some of those questions.
To tackle the first (scheduling workloads), I used Karmada.
With Karmada, you can create deployments with kubectl and distribute them across several clusters using policies.
Karmada takes care of propagating them to the correct cluster.
The project is similar (in spirit) to kubefed.
Geographically distributed Kubernetes clusters with Karmada
Karmada uses a Kubernetes cluster as the manager and creates a second control plane that is multicluster aware.
This is particularly convenient because kubectl “just works”: you keep using the same commands, but they can now apply resources across clusters and aggregate data from all of them.
Each cluster has an agent that issues commands to the cluster’s API server.
The Karmada controller manager uses those agents to sync and dispatch commands.
Karmada client-server architecture and control plane
Karmada uses policies to decide how to distribute your workloads.
You could have policies to have a deployment equally distributed across regions.
Or you could place your pods in a single region.
Orchestrating workloads across several regions and clouds with Karmada policies
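As a sketch of what such a policy looks like, here is a Karmada PropagationPolicy that selects an nginx Deployment and divides its replicas equally across three member clusters (the cluster names are placeholders for your own setup):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  # Which resources this policy applies to
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    # The member clusters eligible to receive the workload
    clusterAffinity:
      clusterNames:
        - cluster-us
        - cluster-eu
        - cluster-ap
    # Split the Deployment's replicas across clusters with equal weights
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames: [cluster-us]
            weight: 1
          - targetCluster:
              clusterNames: [cluster-eu]
            weight: 1
          - targetCluster:
              clusterNames: [cluster-ap]
            weight: 1
```

Changing the weights (or restricting `clusterNames` to a single entry) is how you’d pin the pods to one region instead.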
Karmada is essentially a multicluster orchestrator but doesn’t provide any mechanism to connect the clusters’ networks.
Traffic routed to a region will always reach pods from that region.
Traffic routed to a cluster will always reach pods from that cluster
But you can use a service mesh like Istio to create a network that spans several clusters.
Istio can discover service instances running in other clusters and forward traffic to them.
Istio multi-cluster setup
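For Istio to discover services in another cluster, each control plane needs credentials for the other clusters’ API servers. In a multi-primary setup this is typically done with `istioctl create-remote-secret` (the context and cluster names below are placeholders):

```shell
# Give cluster1's Istio control plane read access to cluster2's API server
# so it can discover cluster2's services and endpoints
istioctl create-remote-secret \
  --context=cluster2 \
  --name=cluster2 | \
  kubectl apply -f - --context=cluster1

# Repeat in the other direction so discovery works both ways
istioctl create-remote-secret \
  --context=cluster1 \
  --name=cluster1 | \
  kubectl apply -f - --context=cluster2
```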
But how does the traffic routing work?
For every app in your cluster, Istio injects a sidecar proxy.
All traffic from and to the app goes through the proxy.
The Istio control plane can configure the proxy on the fly and apply routing policies.
Architecture of a service mesh with proxy side cars
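Sidecar injection is usually enabled per namespace with a label, and Istio’s admission webhook then adds the proxy to every new pod in that namespace. A minimal example (assuming the `default` namespace):

```shell
# Tell Istio to inject the sidecar proxy into all new pods in this namespace
kubectl label namespace default istio-injection=enabled
```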
In a multicluster setup, Istio instances share endpoints.
When a request is issued, the sidecar proxy intercepts the traffic and forwards it to one of the endpoints, which could live in any of the clusters.
Kubernetes endpoints are shared so that traffic can be forwarded from one cluster to the other
Since Istio’s routing rules let you control the flow of traffic and API calls between services, you can direct all traffic to a single region even when pods are deployed in every region.
Or you could create rules to shift traffic from one region to another.
Multi cluster traffic management with Istio
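As a sketch of such a rule, here is an Istio VirtualService that shifts 80% of the traffic to one group of pods and 20% to another. It assumes a matching DestinationRule defines the `eu` and `us` subsets (for example, based on a region label on the pods); the host and subset names are placeholders:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-traffic-shift
spec:
  hosts:
    - app.default.svc.cluster.local
  http:
    - route:
        # Send most of the traffic to the pods in the "eu" subset
        - destination:
            host: app.default.svc.cluster.local
            subset: eu
          weight: 80
        # And the rest to the pods in the "us" subset
        - destination:
            host: app.default.svc.cluster.local
            subset: us
          weight: 20
```

Gradually adjusting the weights is how you would drain traffic from one region and shift it to another.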
Nice in theory, but does it work in practice?
I built a proof of concept with Terraform so that you can recreate it in 5 clicks here: https://github.com/learnk8s/multi-cluster
And here’s a demo of it.
Multi cluster Kubernetes setup
I also installed Kiali to visualise the traffic flowing in the clusters in real time.
Multi cluster Kiali demo
If you wish to see this in action, you can watch my demo here.
And finally, if you’ve enjoyed this thread, you might also like the Kubernetes workshops that we run at Learnk8s https://learnk8s.io/training or this collection of past Twitter threads https://twitter.com/danielepolencic/status/1298543151901155330
Until next time!