Kubernetes Controllers

March 14, 2024
14 min read

Kubernetes Controllers are components built into every Kubernetes cluster. They provide the critical functionality users expect from their clusters by managing the cluster's state based on the objects (like DaemonSets) deployed by the user. Controllers are a fundamental aspect of Kubernetes, and understanding them helps users understand how Kubernetes works under the hood.

This article will explore Kubernetes Controllers in detail, including custom Controllers called Operators and how users can leverage this advanced concept to extend the functionality of their Kubernetes clusters.

Summary of key Kubernetes Controller concepts

The table below summarizes the Kubernetes Controller concepts this article will explore.

Concept | Description
What are Kubernetes Controllers? | Controllers are the core building block of Kubernetes clusters. They're responsible for actioning every object deployed to the cluster, reconciling the desired and current state of cluster resources.
Six examples of Kubernetes Controllers | Examples of built-in Controllers include the Node, Deployment, ReplicaSet, Service, CronJob, and StatefulSet Controllers. These are bundled with the Kube Controller Manager.
Kubernetes Controller design | Controllers are designed with distributed systems principles in mind, such as maintaining fault tolerance, atomicity, and self-healing.
What are Custom Controllers (Operators)? | Operators are an advanced functionality of Kubernetes, enabling users to extend the features of their clusters with custom resources.
What are examples of Operators? | An example of an Operator is Uffizzi's Cluster Operator, which can create virtual cluster resources for users implementing multi-tenant environments.

What are Kubernetes Controllers?

Controllers are critical components underpinning the architecture of Kubernetes clusters. They are binaries responsible for maintaining the cluster's desired state for a subset of Kubernetes objects. For example, the DaemonSet Controller is responsible for analyzing changes to DaemonSet objects and actioning them by creating a Pod on every eligible Worker Node in the cluster.

The default Controllers built into all Kubernetes clusters are located within the Kube Controller Manager component. This component contains a collection of default Controllers responsible for handling standard Kubernetes functionality. It's possible to install additional Controllers (Operators) to extend the cluster's functionality, which we'll discuss later in the article.

Controllers typically operate in a control loop with a general sequence of events, sketched in code after the list:

  1. Observe: Each Controller in the cluster is designed to observe a specific set of Kubernetes objects. For example, the Deployment Controller observes Deployment objects, the Service Controller observes Service objects, and so on. Nearly every object in a Kubernetes cluster has an associated Controller responsible for watching it and analyzing its configuration.
  2. Compare: The Controller compares the object's configuration with the current state of the cluster to determine whether changes are necessary. For example, if a ReplicaSet is configured to have five replicas (copies of a Pod), the ReplicaSet Controller will constantly monitor the number of active Pod replicas in the cluster. If a Pod is deleted, the ReplicaSet Controller will recognize a difference between the desired state and the current state of the cluster.
  3. Action: The Controller applies changes to the cluster to ensure the cluster's current state matches the desired state of the Kubernetes objects. Following the above example, if a Pod is deleted, the cluster's current state has "drifted" from the desired state, and the ReplicaSet Controller will mitigate this drift by launching a replacement Pod. In this case, the ReplicaSet Controller's objective is to ensure the number of active Pods in the cluster matches the replica count specified in the ReplicaSet.
  4. Repeat: All of the above steps run in a continuous loop for every Controller in the Kubernetes cluster, ensuring the cluster's current state always matches the desired state specified in the Kubernetes object configuration.
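
To make this loop concrete, here is a minimal, hypothetical sketch in Go. It is not the actual Kube Controller Manager code: real Controllers are event-driven (using informers and work queues) rather than polling, and the observeDesiredState, observeCurrentState, and applyChanges helpers below merely stand in for the calls a real Controller makes against the API Server.

package main

import (
	"reflect"
	"time"
)

// state is a stand-in for the desired or current configuration of some
// Kubernetes object, e.g. the number of Pod replicas a ReplicaSet should have.
type state struct {
	replicas int
}

// Hypothetical stand-ins for calls a real Controller makes to the API Server.
func observeDesiredState() state           { return state{replicas: 5} }
func observeCurrentState() state           { return state{replicas: 4} }
func applyChanges(desired, current state)  { /* create or delete Pods here */ }

// controlLoop is a simplified sketch of a Controller's reconciliation cycle.
func controlLoop(stop <-chan struct{}) {
	for {
		select {
		case <-stop:
			return
		default:
		}

		desired := observeDesiredState() // 1. Observe the object's configuration.
		current := observeCurrentState() // 2. Compare it with the cluster's current state.
		if !reflect.DeepEqual(desired, current) {
			applyChanges(desired, current) // 3. Action: remove the drift.
		}
		time.Sleep(time.Second) // 4. Repeat indefinitely.
	}
}

func main() {
	stop := make(chan struct{})
	go controlLoop(stop)
	time.Sleep(3 * time.Second)
	close(stop)
}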

This pattern of behavior is a crucial aspect of how all Kubernetes clusters work "under the hood." Controllers are the core element that drives a cluster's functionality by executing the configurations defined in Kubernetes objects. Without Controllers, no actions (like container orchestration) would occur in the cluster.

Six examples of Kubernetes Controllers

Many Controllers are built into the Kube Controller Manager deployed as part of every Kubernetes cluster, each with a specific purpose.

Here are six examples of Controllers:

  • Node Controller: Responsible for managing Worker Nodes. It monitors new Nodes joining the cluster, tracks each Node's health based on the heartbeats reported by the Node's Kubelet component, and updates the Node's .status field. If a Kubelet stops posting heartbeats to the API Server, the Node Controller is responsible for triggering Pod eviction from the unresponsive Node before the Node is removed from the cluster.
  • Deployment Controller: Responsible for managing Deployment objects and creating/modifying ReplicaSet objects.
  • ReplicaSet Controller: Responsible for creating/modifying Pods based on the ReplicaSet object configuration.
  • Service Controller: Responsible for acting on Service objects; most notably, it provisions and configures cloud load balancers for Services of type LoadBalancer.
  • CronJob Controller: Responsible for creating Job objects based on the Cron schedule defined in CronJob objects.
  • StatefulSet Controller: Responsible for creating Pods in a guaranteed order with a sticky identity.

Here is an example of the Go functions contained within the DaemonSet Controller (signatures and bodies simplified). The functions reflect the basic functionality users expect when they deploy a DaemonSet:

// Watch for new Worker Nodes joining the cluster.
// The Controller will ensure every new Node is assigned DaemonSet Pods.
func addNode()

// Create a map of which Nodes are running the DaemonSet's Pods.
// This will help the Controller find which Nodes are missing a DaemonSet Pod.
func getNodesToDaemonPods()

// Handle user changes to the DaemonSet.
// Changes to a DaemonSet should trigger updates for Pods.
// Strategies are specified in the DaemonSet schema, like rollingUpdate.
func updateDaemonSet()

// The DaemonSet's .status field should be updated based on the current state.
func updateDaemonSetStatus()

// Check if a given Node can run the DaemonSet Pod.
// This includes checking attributes like Taints, Tolerations, and Affinity.
func NodeShouldRunDaemonPod()

There are many more functions in the DaemonSet Controller to ensure every aspect of a DaemonSet object's lifecycle is managed correctly, and there are more Controllers built into the Kube Controller Manager (the full list can be found in the Kubernetes documentation and source code). The Controllers enable us to deploy Kubernetes objects and trust that the cluster will keep in step with the desired configuration.

Kubernetes Controller design

All built-in Kubernetes Controllers follow some basic design principles. These core design aspects allow Controllers to carry out operations consistently and reliably so that clusters behave as users expect. Kubernetes is a highly reliable distributed system because these principles are built into all Controllers in the Kube Controller Manager.

Each Controller monitors its specific Kubernetes object types for changes by connecting to the API Server. The API Server is the mandatory entry point for any API data stored in the cluster's control plane database, etcd. All Controllers monitor resources by contacting the API Server, which queries the data from etcd and returns the object's current configuration. A similar sequence of events occurs when a user runs a command like kubectl describe pod.
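
To illustrate this interaction, the short client-go program below (a standalone sketch, not part of any built-in Controller) asks the API Server for the current state of Pods in a Namespace, much like the read step a Controller performs during reconciliation or what kubectl does behind the scenes. It assumes a reachable cluster and a kubeconfig in the default location.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig; a Controller running inside
	// the cluster would use an in-cluster configuration instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Ask the API Server for the current state of Pods in the default
	// Namespace; the API Server reads the data from etcd on our behalf.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name, pod.Status.Phase)
	}
}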

The design principles followed by Controllers to enable Kubernetes to operate as a high-quality distributed system are summarized in the sections below.

Kubernetes design principle #1: Desired state

This model involves specifying what the desired state of the cluster should look like rather than defining execution steps, as in traditional configuration management tools (Chef, Puppet, Ansible, etc.). By declaratively defining the cluster's desired state in YAML manifests, users can delegate the implementation details to Controllers. Controllers constantly reconcile the cluster's current state with the desired state, leaving the user with less responsibility to manually reconcile differences by pushing changes through configuration management tools.
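
For example, the manifest below (the name and image are illustrative) declares only the desired state: three replicas of an nginx Pod. It says nothing about how to create, schedule, or replace those Pods; those implementation details are handled by the Deployment and ReplicaSet Controllers.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3          # the desired state: three Pods, always
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25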

Kubernetes design principle #2: Fault tolerance

In highly available clusters, a copy of the Kube Controller Manager runs on each control plane (Master) Node, and leader election ensures that exactly one copy is active at a time. If the Node hosting the active copy experiences an interruption, such as a hardware failure, another copy takes over leadership and resumes reconciliation automatically, so reconciliation activities continue without disruption.

Kubernetes design principle #3: Self-healing

Because Controllers constantly observe and compare the desired state of Kubernetes objects with the current state of the cluster, any drift in cluster configuration is quickly detected and mitigated. The cluster self-heals from unexpected configuration drift, such as a hardware failure causing the number of active Pods to decline, by continually changing the cluster's state until it matches the desired configuration.

Kubernetes design principle #4: Atomicity

Controllers are designed to handle interruptions during reconciliation activities. A distributed system should be designed to handle failures in any component, including the control plane, network connectivity, hardware, and node reboots. Controllers can pick up reconciliation from wherever an interruption occurred, regardless of which actions were missed or half-finished: by comparing the current and desired states, a Controller knows what steps remain to complete the reconciliation.

For example, suppose a DaemonSet Controller encounters a hardware failure and is restarted. In that case, it will identify any DaemonSet objects that are only partially rolled out and continue generating Pods to reconcile the desired state. The restarted Controller will ensure the DaemonSet is fully rolled out by examining which Nodes are missing the Pods defined in the DaemonSet. Regardless of when the DaemonSet Controller crashes, the restarted process will continue the rollout without impacting the user. In this context, atomicity refers to the ability to reliably complete operations despite interruptions, a key property of distributed systems.


What are Custom Controllers (Operators)?

While there are many Controllers built into Kubernetes clusters by default, users may want to extend the functionality of their clusters by enabling additional custom Controllers called Operators. Operators are custom Controllers that implement the Operator Pattern, an approach to adding new automation into Kubernetes.

Operators are typically deployed as regular Kubernetes workloads, such as Deployments or StatefulSets.

The Operator is configured to monitor custom Kubernetes resource types, which are defined via Kubernetes objects called Custom Resource Definitions (CRDs). CRDs are vital to understanding the use case and benefit of Operators; the two are practically always combined to extend a Kubernetes cluster's functionality.

A CRD allows users to define new Kubernetes objects beyond defaults like Pods, Deployments, and Nodes. The CRD tells Kubernetes about the new object's schema, such as which values it will contain inside the .spec section. For example, we could define a new CRD for a resource called "Database." Once it is deployed, Kubernetes becomes aware that a new "Database" object type is available. The user can then deploy objects with "kind: Database" and interact with them through the standard tooling and API, such as kubectl get database.
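
A minimal CRD for this hypothetical "Database" resource might look like the following; the group, names, and fields are purely illustrative and not taken from any real project:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  names:
    kind: Database
    plural: databases
    singular: database
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              engine:
                type: string
              storageGB:
                type: integer

Note that the CRD's metadata.name must be the plural name followed by the API group, hence databases.example.com.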

Defining a new Kubernetes object type will only impact a cluster if there is an associated Controller to observe it and perform reconciliation within the cluster. Users can deploy an Operator to observe their new object type and perform custom cluster actions that extend the cluster's default functionality.

There are many packaged Operators available for Kubernetes that users can deploy directly. It's also possible for users to develop their own Operators using frameworks like the Operator SDK and Kubebuilder. These tools aim to simplify the developer experience by providing libraries for connecting to the Kubernetes API and performing common actions, like watching resources. Users may develop their own Operators to fulfill use cases that go beyond the default capabilities of Kubernetes. Leveraging Operators is a powerful way to extend cluster functionality and a key reason Kubernetes is so compelling: plugging in new Operators allows for a high degree of customization.
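
To give a sense of what these frameworks look like in practice, below is a minimal reconciler built with controller-runtime, the library that underpins both Kubebuilder and the Operator SDK. This is a hedged sketch rather than a working Operator: it watches built-in Deployment objects instead of a CRD-defined type, and the reconciliation body is intentionally left empty.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// reconciler implements the Reconcile method that controller-runtime calls
// whenever a watched object changes.
type reconciler struct {
	client.Client
}

func (r *reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var dep appsv1.Deployment
	if err := r.Get(ctx, req.NamespacedName, &dep); err != nil {
		// The object may have been deleted; nothing left to reconcile.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// A real Operator would compare the desired and current state here and
	// create, update, or delete dependent resources to remove any drift.
	return ctrl.Result{}, nil
}

func main() {
	// The manager handles API Server connections and shared caches for the
	// controllers it runs.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	// Watch Deployment objects; a real Operator would use its CRD-defined type.
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.Deployment{}).
		Complete(&reconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}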

What are examples of Operators?

An example of an Operator is the Uffizzi Cluster Operator. This project provides functionality for creating virtual clusters, which helps build multi-tenant Kubernetes environments. Virtual clusters allow multi-tenant administrators to implement stronger isolation and better security than dividing clusters with Namespaces alone. Virtual clusters are not a native Kubernetes resource; we can implement this functionality by deploying a CRD and the Uffizzi Operator. The CRD defines a new Kubernetes object, "UffizziCluster," representing our virtual cluster, and the Operator monitors for new UffizziCluster objects and deploys virtual clusters based on each object's configuration.

Here is an example of a CRD provided by Uffizzi (some fields have been truncated for brevity; the full CRD is available in the Uffizzi Cluster Operator project):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: uffizziclusters.uffizzi.com
spec:
  group: uffizzi.com
  names:
    kind: UffizziCluster
    listKind: UffizziClusterList
    plural: uffizziclusters
    shortNames:
    - uc
    singular: uffizzicluster
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - jsonPath: .status.conditions[?(@.type=='Ready')].status
      name: Ready
      type: string
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: UffizziCluster is the Schema for the UffizziClusters API
        properties:
          apiVersion:
            description: 'Version information'
            type: string
          kind:
            description: 'Kind will be set to UffizziCluster'
            type: string
          spec:
            description: UffizziClusterSpec defines the desired state of UffizziCluster
            properties:
              helm:
                items:
                  properties:
                    chart:
                      properties:
                        name:
                          type: string
                        repo:
                          type: string
                        version:
                          type: string
                      required:
                      - name
                      - repo
                      type: object

The CRD example above illustrates how Kubernetes allows users to define custom objects with their own schemas. In this example, we're creating a new UffizziCluster object type with a few fields like .spec.helm.chart. Once the CRD has been deployed, users can create an object with "kind: UffizziCluster" just like any other built-in Kubernetes object.

kind: UffizziCluster
apiVersion: uffizzi.com/v1alpha1
metadata:
  name: cluster-1
spec:
  helm:
  - chart:
      name: kubernetes-dashboard
      repo: https://kubernetes.github.io/dashboard/

The above manifest shows how the CRD can be instantiated by deploying new UffizziCluster objects like any other Kubernetes object. We can query the object's status the usual way with kubectl describe uffizzicluster.

The Uffizzi Operator monitors the Kubernetes API Server to determine when objects of this type have been deployed or modified. When the cluster's current and desired states diverge, the Operator attempts to reconcile them. In this case, the Uffizzi Operator will action the UffizziCluster object by creating a new virtual cluster based on the parameters supplied in the object's properties.

The Operator is responsible for generating the new virtual cluster, modifying existing virtual clusters, and deleting them if the user deletes the UffizziCluster object. The Operator will also update the object's .status section to provide the user with additional information about the virtual cluster's reconciliation status. Operators will update special fields like .status to ensure the object contains information relevant to the user or other Operators and Controllers. For example, the Deployment Controller will update the .status field of all Deployment objects to inject information like the Deployment's active replica count.
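
For illustration, an abridged .status stanza as written by the Deployment Controller might look like the following (the counts are illustrative):

status:
  replicas: 3
  updatedReplicas: 3
  readyReplicas: 3
  availableReplicas: 3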


Conclusion

Controllers are a critical element of every Kubernetes cluster. They are responsible for managing every Kubernetes object deployed to a cluster, determining what changes are required according to the desired state, and executing reconciliation steps. Every cluster has a set of built-in Controllers running in the Kube Controller Manager, ensuring default Kubernetes resources like DaemonSets, Deployments, and Services are being actioned when a user deploys, modifies, or deletes an object. The principles behind every Controller help ensure reliability and availability in a distributed system, allowing Kubernetes to act as a production-grade platform for containerized applications.

Operators are custom Controllers developed to provide additional functionality for clusters. They extend the default object list with new custom objects defined in CRDs, and Operators manage these new objects to offer unique functionality for users. In this article, we've seen an example of the Uffizzi Operator generating virtual clusters based on custom UffizziCluster objects, enabling users to extend the default functionality of their Kubernetes clusters. Operators are an advanced Kubernetes feature that builds on the Kubernetes ecosystem to provide more powerful features, following the Controller design principles underpinning the Kubernetes project. Users should consider evaluating available Operators to increase the functionality of their Kubernetes clusters.
