Introduction to Crossplane and Building GKE Clusters
Welcome to our Crossplane tutorial, where we build a GKE cluster from Kubernetes with ease. Crossplane is a powerful control plane that simplifies managing and deploying resources across multiple cloud providers. In this blog post, we’re not just exploring Crossplane; we’re providing a step-by-step guide. We start with a basic approach and then unveil the recommended method for creating a GKE cluster with Crossplane, using Crossplane providers, resources, and other essential components. If you’re new to Crossplane, beginning with a handful of managed resources is a good start, although it becomes cumbersome for complex setups. For more background, you might also want to check out my previous blog post, Crossplane: Unveiling a New Horizon in Cloud Infrastructure Management.
Video
Below is a video demonstration.
Video Chapters
- 00:00 – Introduction to Crossplane and GKE Cluster
- 00:35 – Demo 1: Managed Resources Crossplane
- 10:54 – Demo 1 – Conclusion and Feedback Request
- 11:22 – Understanding Crossplane Components
- 13:58 – Demo 2: Creating a GKE Cluster Using Crossplane Compositions
- 26:40 – Final Thoughts
Install Crossplane
Follow the instructions below to install Crossplane in a Kubernetes cluster. I’m using Docker Desktop, which is the easiest way to get a Kubernetes cluster going. The installation process involves:
- Installing Crossplane using a shell script, which is a crucial step in setting up the Crossplane system.
- Installing the official GCP container provider, a key Crossplane provider for managing GCP resources.
- Creating a Kubernetes secret to allow Crossplane to communicate with GCP and create our cluster, a common practice in real-world infrastructure management using Crossplane.
Install the Up command-line
Download and install the Upbound up command-line.
curl -sL "https://cli.upbound.io" | sh
mv up /usr/local/bin/
Verify the version of up with up --version
$ up --version
v0.19.1
Install Universal Crossplane
Install Upbound Universal Crossplane with the Up command-line.
$ up uxp install
UXP 1.13.2-up.2 installed
Verify the UXP pods are running
$ kubectl get pods -n upbound-system
NAME                                       READY   STATUS    RESTARTS   AGE
crossplane-77ff754998-k76zz                1/1     Running   0          40s
crossplane-rbac-manager-79b8bdd6d8-79577   1/1     Running   0          40s
Install the official GCP Container Provider
kubectl apply -f gke_provider.yaml
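The contents of gke_provider.yaml aren’t shown here, but a minimal Provider manifest for the official GCP container provider looks roughly like this (the package version below matches the verification output that follows and may differ for you):

```yaml
# Installs the official Upbound GCP container provider package.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp-container
spec:
  package: xpkg.upbound.io/upbound/provider-gcp-container:v0.41.0
```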
After installing the provider, verify the install with kubectl get providers.
$ kubectl get providers
NAME                          INSTALLED   HEALTHY   PACKAGE                                                   AGE
provider-gcp-container        True        True      xpkg.upbound.io/upbound/provider-gcp-container:v0.41.0    47h
provider-gcp-storage          True        True      xpkg.upbound.io/upbound/provider-gcp-storage:v0.41.0      47h
upbound-provider-family-gcp   True        True      xpkg.upbound.io/upbound/provider-family-gcp:v0.41.0       47h
Create a Kubernetes secret
The provider-gcp-container provider requires credentials to create and manage GCP resources. Create a JSON key file containing the credentials of a GCP service account. GCP provides documentation on how to create a key file.
Here is an example key file:
{
  "type": "service_account",
  "project_id": "caramel-goat-354919",
  "private_key_id": "e97e40a4a27661f12345678f4bd92139324dbf46",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...example key material...\n-----END PRIVATE KEY-----\n",
  "client_email": "[email protected]",
  "client_id": "103735491955093092925",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/my-sa-313%40caramel-goat-354919.iam.gserviceaccount.com"
}
Save this JSON file as gcp-credentials.json.
Use kubectl create secret -n upbound-system to generate the Kubernetes secret object inside the Universal Crossplane cluster.
kubectl create secret generic gcp-secret -n upbound-system --from-file=creds=./gcp-credentials.json
View the secret with kubectl describe secret
$ kubectl describe secret gcp-secret -n upbound-system
Name:         gcp-secret
Namespace:    upbound-system
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
creds:  2334 bytes
Create a ProviderConfig
Create a ProviderConfig Kubernetes configuration file to attach the GCP credentials to the installed official GCP providers.
kubectl apply -f gke_provider.yaml
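The ProviderConfig portion of the applied file might look like the following, based on the describe output shown next (the projectID is the example project used in this walkthrough; use your own GCP project ID):

```yaml
# Points the GCP providers at the credentials stored in the gcp-secret Secret.
apiVersion: gcp.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: crossplaneprojects
  credentials:
    source: Secret
    secretRef:
      namespace: upbound-system
      name: gcp-secret
      key: creds
```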
Verify the ProviderConfig
$ kubectl describe providerconfigs
Name:         default
Namespace:
API Version:  gcp.upbound.io/v1beta1
Kind:         ProviderConfig
# Output truncated
Spec:
  Credentials:
    Secret Ref:
      Key:        creds
      Name:       gcp-secret
      Namespace:  upbound-system
    Source:       Secret
  Project ID:     crossplaneprojects
Demo 1: Managed Resources in Crossplane
The first demo involves setting up a GKE cluster resource with a node pool using Managed Resources. We’ll discuss Managed Resources in more detail later.
The key focus is on creating two managed resources: a cluster and a node pool. This is a basic setup using default VPC and subnets. The deployment is achieved through kubectl apply -f, where we specify our resource file.
kubectl apply -f gke_cluster_managed_resources.yaml
Observing the Cluster Creation
You can monitor the cluster and node pool creation using kubectl get commands. Describing the cluster and node pool gives insights into the status and any potential errors.
kubectl get cluster
kubectl get nodepool
kubectl describe cluster
kubectl describe nodepool
Cluster is Ready
Nodepool is Ready
Access the New Cluster
Get the cluster name:
CLUSTER_NAME=$(kubectl get clusters.container.gcp.upbound.io gke-managed-resources -o=jsonpath='{.metadata.name}')
Get the cluster location:
CLUSTER_LOCATION=$(kubectl get clusters.container.gcp.upbound.io gke-managed-resources -o=jsonpath='{.spec.forProvider.location}')
Get the project ID:
PROJECT_ID=$(kubectl get providerconfig.gcp.upbound.io/default -o=jsonpath='{.spec.projectID}')
After running these commands, you will have environment variables CLUSTER_NAME, CLUSTER_LOCATION, and PROJECT_ID set with the respective values. Then, you can use them in your gcloud command like this:
gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_LOCATION --project $PROJECT_ID
Note: You may run into this error: CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable. Install gke-gcloud-auth-plugin for use with kubectl by following this link.
If that happens, follow the link above to install the gke auth plugin.
Now run the following commands to look around your new GKE cluster.
kubectl get nodes
kubectl get all
Cluster is Running
GCP Console Views
Crossplane Configuration for GKE Cluster and Node Pool
This configuration defines two primary resources using Crossplane to manage a GKE cluster in Google Cloud Platform, showcasing how Crossplane can manage cloud infrastructure efficiently. Let’s examine the gke_cluster_managed_resources.yaml file.
1. GKE Cluster Configuration:
apiVersion: container.gcp.upbound.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster_name: gke
  name: gke-managed-resources
spec:
  forProvider:
    enableAutopilot: false
    enableIntranodeVisibility: true
    ipAllocationPolicy:
      - {}
    initialNodeCount: 1
    location: us-central1
    removeDefaultNodePool: true
  writeConnectionSecretToRef:
    name: gke-managed-resources-kubeconfig
    namespace: upbound-system
- API Version and Kind: Specifies the GCP API version and that the resource is a Cluster, a perfect example of how Crossplane interacts with the Kubernetes API to deploy resources.
- Metadata: Defines the cluster’s name (gke-managed-resources) and labels it as a gke cluster.
- Specifications:
- enableAutopilot: Set to false, indicating this cluster will not use GCP’s Autopilot mode.
- enableIntranodeVisibility: Enables visibility into the network traffic between pods within a node.
- ipAllocationPolicy: Configures the IP allocation for the cluster; here, it’s left empty, indicating default settings.
- initialNodeCount: Specifies the initial number of nodes for the cluster, set to 1.
- location: The geographical location for the cluster, set to us-central1.
- removeDefaultNodePool: Indicates the default node pool should be removed, usually done when custom node pools are defined.
- Connection Secret: The writeConnectionSecretToRef field is part of the composition metadata in Crossplane, specifying where the connection details (such as a kubeconfig) should be stored. This is crucial for accessing the cluster post-creation. However, this works for EKS and AKS but not for GKE; we’ll see how to access our GKE cluster later.
2. GKE Node Pool Configuration:
apiVersion: container.gcp.upbound.io/v1beta1
kind: NodePool
metadata:
  labels:
    cluster_name: gke
  name: gke-managed-resources-nodepool
spec:
  forProvider:
    clusterSelector:
      matchLabels:
        cluster_name: gke
    nodeConfig:
      - machineType: e2-small
        diskSizeGb: 10 # example size
        oauthScopes:
          - https://www.googleapis.com/auth/cloud-platform
        preemptible: true
    nodeCount: 1
- API Version and Kind: Defines the resource as a NodePool under the GCP provider, demonstrating how Crossplane can manage specific Kubernetes resources.
- Metadata: Sets the name of the node pool (gke-managed-resources-nodepool) and labels it to match the GKE cluster.
- Specifications:
- clusterSelector: Ensures this node pool is associated with the defined GKE cluster using label matching.
- nodeConfig: Configures the nodes in the pool.
- machineType: Specifies the type of machine (e.g., e2-small).
- diskSizeGb: Defines the disk size for each node.
- oauthScopes: Sets the OAuth scopes for the nodes, essential for defining permissions.
- preemptible: Indicates whether the nodes are preemptible, which can reduce costs but may affect availability.
- nodeCount: The number of nodes in the pool, set to 1.
Crossplane Components
Crossplane’s compositions offer a more efficient method for cluster creation. Think of Terraform modules, where individual resources are brought under one umbrella. It’s time to examine the different Crossplane components.
Managed Resources
- A Managed Resource maps to an actual resource for the cloud provider.
- Examples would be an S3 bucket, a GKE cluster, a GKE nodepool, and so on.
- Our first demo showed how you can simply create resources this way, but it’s not efficient for larger setups.
Composite Resource Definitions (XRD)
- An XRD acts as an API schema for both the claims and the composite resources. It defines how claims and composite resources should look, essential in managing infrastructure effectively.
- It defines the group, kind, and versions, along with the properties like location, machine type, and node count as we’ll see in the configuration below.
Compositions
- A composition in Crossplane defines the Managed Resources needed and how they are configured.
- We use patches in the composition to update resource configurations based on variables fed by the Claim.
- Think of compositions as the template or blueprint to create multiple Managed Resources.
- If you come from a Terraform background, I like to think of compositions as Terraform modules.
- Another analogy is to think of them as container images.
Composite Resources (XR)
- A composite resource is made up of Managed Resources; it represents the actual running resources.
- Going with the container analogy, think of this as the running container as opposed to the image.
Claims
- Developers write claims to request resources, specifying parameters like location and node count.
- Developers are responsible for this step, and it’s what enables self-service: they don’t need to know anything about how the infrastructure is defined.
- Monitoring the progress of these resources can be done using kubectl get xcluster and kubectl describe xcluster.
Demo 2: Crossplane Compositions for GKE Cluster
Before starting the second demo, make sure you delete the cluster and nodepool created by the first demo by running this command:
kubectl delete -f gke_cluster_managed_resources.yaml
Now let’s move on to our second demo.
Create a Composite Resource Definition
kubectl apply -f cluster_XRD.yaml
Build a Composition
kubectl apply -f cluster_composition.yaml
Request a Claim
kubectl apply -f cluster_XR_claim.yaml
Monitor the provisioning
kubectl get xcluster
kubectl describe xcluster
kubectl get cluster
kubectl describe cluster
kubectl get nodepool
kubectl describe nodepool
Access the Cluster
- Once the resources are ready, we use kubectl get cluster to fetch our cluster name and update our environment variable.
- Note that our zone has changed, since the claim calls for us-east1. So make sure to update the CLUSTER_LOCATION variable accordingly.
- We run the gcloud command to update our kubeconfig and access the newly created cluster similar to what we did in the first demo.
gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_LOCATION --project $PROJECT_ID
Now run the following commands to look around your new GKE cluster.
kubectl get nodes
kubectl get all
XCluster is Ready
Composite Resource Definition Configuration
Now that our cluster is running, let’s take a look at the configuration used to make all of this happen. Let’s start by examining our **cluster_XRD.yaml** file.
Overview of the XRD:
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xclusters.compositions.io
spec:
  group: compositions.io
  names:
    kind: XCluster
    plural: xclusters
  claimNames:
    kind: GKECluster
    plural: gkeclusters
  versions: [...]
- API Version and Kind: Defines the resource as a CompositeResourceDefinition, a custom resource type in Crossplane.
- Metadata: Names the definition xclusters.compositions.io, making it identifiable within the Crossplane framework.
- Specifications:
- group: Indicates the API group (compositions.io) under which this custom resource will be placed.
- names: Defines the kind (XCluster) and its plural form (xclusters), establishing how to refer to these resources in Kubernetes.
- claimNames: Specifies the names for claims that can be made against this composite resource. In this case, the kind is GKECluster with its plural form gkeclusters.
Schema Definition for XCluster:
versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              parameters:
                type: object
                properties:
                  location:
                    type: string
                  machineType:
                    type: string
                  nodeCount:
                    type: number
              compositionRef:
                type: object
                properties:
                  name:
                    type: string
                    enum:
                      - gke-cluster-nodepool
- Versioning: The v1alpha1 version of the XCluster is declared, which is served and referenceable.
- Schema Definition:
- The openAPIV3Schema defines the structure of the XCluster resource.
- Under spec, the parameters are defined, which include location, machineType, and nodeCount. These are user-defined parameters that specify the cluster location, the machine type for nodes, and the number of nodes. Developers set them through a Claim, which we’ll see later.
- compositionRef: Allows specifying a reference to a Composition, in this case, named gke-cluster-nodepool. This ties the XCluster to a specific composition that defines how the actual resources are provisioned in the cloud.
Thoughts
This CompositeResourceDefinition is a fundamental part of Crossplane’s capability to abstract complex cloud services into higher-level, user-friendly constructs. In this case, the XCluster is an abstraction for a GKE cluster, allowing users to request cloud resources by specifying high-level requirements like location, machine type, and node count. The GKECluster claim is a user-facing representation of the XCluster, making it easier for developers to request and use Kubernetes clusters in GCP without needing to understand the intricate details of the underlying infrastructure. This approach encapsulates complexity and promotes a self-service model in cloud resource provisioning, a key advantage of using Crossplane in cloud-native environments.
Composition Configuration
Let’s examine our **cluster_composition.yaml** file.
Composition Overview:
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: gke-cluster-nodepool
spec:
  compositeTypeRef:
    apiVersion: compositions.io/v1alpha1
    kind: XCluster
  resources: [...]
- API Version and Kind: Specifies the API version for Crossplane compositions and declares the resource type as Composition.
- Metadata: Names the composition gke-cluster-nodepool, which is a reference used by other Crossplane resources.
- Composite Type Reference: Points to XCluster under compositions.io/v1alpha1, a custom resource defined for bundling resources together.
GKE Cluster Resource:
- name: cluster
  base:
    apiVersion: container.gcp.upbound.io/v1beta1
    kind: Cluster
    metadata:
      labels:
        cluster_name: gke
    spec: [...]
  patches:
    - type: FromCompositeFieldPath
      fromFieldPath: "spec.parameters.location"
      toFieldPath: "spec.forProvider.location"
- Base Configuration:
- Defines a GKE cluster (kind: Cluster) with the necessary API version.
- The metadata labels the resource as part of a gke cluster for identification.
- The specifications (spec) include configurations like autopilot, intranode visibility, IP allocation policy, and initial node count.
- writeConnectionSecretToRef: Specifies where to store the cluster’s connection details.
- Patches:
- Used to dynamically alter the base configuration based on inputs from a composite resource.
- In this example, the location parameter (spec.parameters.location) from the composite resource is mapped to the cluster’s location (spec.forProvider.location).
GKE Node Pool Resource:
- name: nodepool
  base:
    apiVersion: container.gcp.upbound.io/v1beta1
    kind: NodePool
    spec: [...]
  patches:
    - type: FromCompositeFieldPath
      fromFieldPath: "spec.parameters.location"
      toFieldPath: "spec.forProvider.location"
    - type: FromCompositeFieldPath
      fromFieldPath: "spec.parameters.machineType"
      toFieldPath: "spec.forProvider.nodeConfig[0].machineType"
    - type: FromCompositeFieldPath
      fromFieldPath: "spec.parameters.nodeCount"
      toFieldPath: "spec.forProvider.nodeCount"
- Base Configuration:
- Defines a node pool (kind: NodePool) with necessary configurations like cluster selector, node configuration (including machine type, disk size, OAuth scopes, and preemptibility), and node count.
- Patches:
- type: FromCompositeFieldPath patches modify the base configuration.
- Patches map parameters from the composite resource to the node pool’s configuration, such as location, machine type, and node count.
Thoughts
This Crossplane composition provides a template for creating a GKE cluster with a customizable node pool. It demonstrates the use of base configurations and patches, allowing for dynamic resource creation based on parameters defined in a composite resource (XCluster). This method is particularly powerful in scenarios where you need to create similar clusters with slight variations, as it allows for reusability and consistency in infrastructure provisioning. By understanding and utilizing Crossplane compositions, developers and infrastructure teams can efficiently manage complex Kubernetes deployments on cloud platforms like GCP.
Claim Configuration
Let’s examine our **cluster_XR_claim.yaml** file.
Overview of the XCluster Instance:
apiVersion: compositions.io/v1alpha1
kind: XCluster
metadata:
  name: example-gke-cluster
spec:
  parameters:
    location: us-east1
    machineType: e2-small
    nodeCount: 1
  compositionRef:
    name: gke-cluster-nodepool
- API Version and Kind: The resource is defined as an XCluster, a composite resource in Crossplane, under the compositions.io/v1alpha1 API group, showcasing Crossplane’s declarative API model.
- Metadata: Names this specific instance of the XCluster as example-gke-cluster.
- Specifications:
- parameters: Defines the configuration parameters for the GKE cluster.
- location: Specifies the geographical location of the GKE cluster as us-east1.
- machineType: Sets the machine type for the nodes in the cluster to e2-small.
- nodeCount: Indicates the number of nodes per zone in the cluster, set to 1 in this case.
- compositionRef: Links this XCluster to a specific Crossplane Composition named gke-cluster-nodepool. This composition defines the detailed resource configurations and how they should be provisioned in GCP.
Thoughts
This Crossplane XCluster instance serves as a high-level request for a GKE cluster with specific characteristics such as location, machine type, and node count. By referencing the gke-cluster-nodepool composition, it leverages a predefined template that outlines how the cluster and its associated resources should be created in GCP.
Such configurations exemplify the power of Crossplane in simplifying cloud resource management. Developers or platform engineers can create and manage sophisticated cloud resources like Kubernetes clusters by defining simple, declarative configurations, abstracting away the underlying complexities of cloud provider APIs. This makes managing cloud infrastructure more accessible and manageable, particularly in large-scale, multi-cloud, and multi-tenant environments.
Conclusion
We’ve seen both methods of creating a GKE cluster with Crossplane, and it’s clear that compositions offer a more streamlined and flexible approach. Crossplane not only simplifies the process but also integrates seamlessly with Kubernetes, allowing for a unified definition of infrastructure and applications, crucial for Crossplane projects involving multiple teams.
If you’re interested in learning more about Crossplane, drop a comment below. For a more detailed walkthrough, including the nuances of Crossplane compositions and claims, stay tuned for more content. Thanks for tuning in, and I’ll see you in the next exploration of Kubernetes and Crossplane!
Suggested Reading
Crossplane: Unveiling a New Horizon in Cloud Infrastructure Management
Terraform for Beginners – A Beginner’s Guide to Automating Cloud Infrastructure
Webblog App Part 1 – Infrastructure as Code with Terraform