Kubernetes Sandbox: Tutorial & Instructions

March 19, 2024 · 15 min read

Kubernetes continues to expand as a critical infrastructure tool. With this expansion comes the need to provide guardrailed access to application development and testing environments.

Giving developers access to full Kubernetes clusters is less than ideal from a cost, management, and security perspective. Kubernetes sandbox technologies are helpful in various scenarios, including automated and manual application testing, Kubernetes version upgrade testing, and quickly testing a Helm chart. A sandbox is the best place to test version upgrades, new features, and supporting services before implementing them on production clusters. You can also use it to investigate how a Kubernetes cluster behaves under specific configurations, such as different Kubernetes versions or the presence or absence of custom resources.

There are many ways to run Kubernetes sandbox clusters, from local and simple to more complex cloud-based solutions. There is also the new and innovative concept of Kubernetes virtual clusters, which you can use as a sandboxed cluster within a cluster.

This article discusses the pros and cons of various Kubernetes sandbox solutions and provides short tutorials on how to get started.

Summary of key Kubernetes sandbox solutions

This article will look at several ways to run sandboxed Kubernetes clusters.

| Sandbox solution | Location | Brief description |
| --- | --- | --- |
| Kubernetes-in-Docker (kind) | Local | Runs a local Kubernetes cluster using only Docker containers. Requires that you have Docker installed on your computer. |
| Minikube | Local | Requires a backend installed on your computer, such as Docker or VirtualBox. Directly supports many add-ons. |
| Cloud vendor | Remote | Use your preferred public cloud vendor's solution for managed Kubernetes clusters. This usually works well but can be slow, complicated, and costly. |
| Virtual clusters | Remote | A Kubernetes cluster inside a host Kubernetes cluster. Very fast, but requires an existing cluster. An alternative is to use an online provider of virtual clusters, such as Uffizzi (available as a hosted or self-hosted solution and as open-source software). |

Local Kubernetes sandbox solutions

Kubernetes sandbox solutions for local environments allow developers to run, test, and experiment with Kubernetes clusters on their own machines without needing a full-scale, cloud-based Kubernetes setup. These tools are essential for learning, development, and testing purposes.

Kubernetes-in-Docker

Kubernetes-in-Docker (kind) is a solution that runs a local Kubernetes cluster using only Docker. It was developed to test Kubernetes but can also be used for local development.

To run kind, you'll need the following:

  • Docker installed and running on your computer
  • The kind executable
  • The kubectl executable to communicate with the cluster

Kind makes it easy to create a Kubernetes cluster on a local machine. By default, kind creates a single-node cluster, meaning the control plane and data plane run on the same node. It is possible to run more than one node, as sketched below, though multi-node setups are otherwise outside the scope of this article.
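
kind reads a declarative config, so a cluster with one control-plane node and two workers can be created as follows. This is a minimal sketch based on kind's documented Cluster config format, here piped on stdin via --config=-:

$ cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF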

There are some potential drawbacks worth mentioning for kind. First, you have to apply any add-ons manually using external manifest files; unlike other local Kubernetes distributions, kind does not integrate add-on management.

The second potential drawback is that kind does not support cluster autoscaling. This means you must be thoughtful about the number of nodes you need and will not be able to test any scenarios requiring auto-scaling. Depending on your testing scenarios, this may be a blocker.

Many organizations run kind within their CI/CD pipelines to support automated testing. This often creates a bottleneck and flaky test results because of scalability issues and reduced control over the underlying infrastructure (runners). Using virtual clusters provided by Uffizzi is a way to dramatically reduce these issues and the total time to run tests, often by a factor of 5x (see the case study with the ingress-nginx project).

Kubernetes-in-Docker installation tutorial

The first step is to install kind per the instructions for your system on their Quick Start page. Once kind is installed, verify it by checking the version:

$ kind version
kind v0.20.0 go1.18.1 linux/amd64

For this example and throughout the article, we will use the following manifest. Using your favorite editor, save it as test-app.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              name: http

The test-app.yaml contains three Kubernetes resources: a Deployment, Service, and Ingress resource. These three resources represent a straightforward application that launches an nginx container and allows you to send requests to it via the Ingress and Service resources.

Because we use an Ingress resource, we must create a kind configuration file to label the nodes properly.

Save the following file as kind-config.yaml:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  kubeadmConfigPatches:
    - |
      kind: InitConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
  extraPortMappings:
    - containerPort: 80
      hostPort: 80
      protocol: TCP

The next step is to create our kind cluster using this config file.

$ kind create cluster --config kind-config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.27.3) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Verify kubectl works by querying the nodes.

$ kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   8m43s   v1.27.3   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   6.4.16-linuxkit   containerd://1.7.1

As noted earlier, add-ons must be installed via manifest files. The step below adds an ingress controller to the cluster via a manifest file:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
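
The controller takes a minute or so to come up. You can block until its pod is ready with a wait command like this one, which follows the pattern in kind's ingress documentation:

$ kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s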

Verify the manifest was applied.

$ kubectl describe ns/ingress-nginx
Name:         ingress-nginx
Labels:       app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              kubernetes.io/metadata.name=ingress-nginx
Annotations:  <none>
Status:       Active
No resource quota.
No LimitRange resource.

Apply our test-app.yaml file.

$ kubectl apply -f test-app.yaml
deployment.apps/myapp created
service/myapp created
ingress.networking.k8s.io/myapp created

Verify the deployment works. You should get the nginx welcome page.

$ curl -H "Host: myapp.local" http://localhost/

Once your experimentation is finished, you can delete the cluster like so:

$ kind delete cluster
Deleting cluster "kind" ...
Deleted nodes: ["kind-control-plane"]

This shows that even for essential features such as ingresses, using kind requires additional configuration.

Minikube

Minikube is another tool to run a Kubernetes cluster locally on your computer. It is older than kind, offers more functionality, and supports more backends. It creates only one node by default, but you can use command-line options to set the number of nodes.
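
For example, a two-node cluster is a single flag away (the --nodes flag is part of minikube's standard CLI):

$ minikube start --nodes 2 --driver=docker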

Minikube manages add-ons straight from the command line, which means they will likely work out of the box. It supports a large collection of useful add-ons, for example:

  • Ingresses with ingress-nginx
  • Mesh networking with istio
  • API Gateway with Kong
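
You can browse the full catalog and see what is currently enabled with:

$ minikube addons list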

However, just like kind, minikube does not support cluster autoscaling. So, if you want to run any tests involving autoscaling of your cluster, minikube won't be able to help you either.


Minikube installation tutorial

To run a minikube cluster, you need the following:

  • The minikube executable to start and manage the cluster
  • The kubectl executable to communicate with the Kubernetes API in the Minikube cluster
  • A “backend” or driver supported by minikube, such as VirtualBox or Docker

Follow the instructions for installing minikube on your system here.

Once minikube is installed, verify it works by checking the version.

$ minikube version
minikube version: v1.31.2
commit: fd7ecd9c4599bef9f04c0986c4a0187f98a4396e

Next, create a simple cluster using the Docker backend:

$ minikube start --driver=docker

After a couple of minutes, you will have a working cluster. Now let's enable the ingress-nginx add-on:

$ minikube addons enable ingress

Again, wait a few minutes, and ingress-nginx is ready and working out of the box.

Let’s deploy and test our test app:

$ kubectl apply -f test-app.yaml 
deployment.apps/myapp created
service/myapp created
ingress.networking.k8s.io/myapp created

To test our app on localhost, we will need to run the following in a separate terminal window and keep it running. Note that minikube tunnel may ask for your password because it binds the privileged ports 80 and 443.

$ minikube tunnel

Now verify we can connect to our application.

$ curl -H "Host: myapp.local" http://localhost

Finally, delete the Minikube cluster:

$ minikube delete

Kind and minikube are helpful tools for local testing, especially when getting started. However, they fall short compared to testing in a real-world setting. Since these tools typically operate on a single machine, they cannot thoroughly test configurations across multiple applications and servers, and their ingress capabilities are limited.

Local machines also lack the CPU, RAM, and disk resources of production servers, limiting their usefulness for scalability testing. The localized nature of these environments also makes collaborative problem-solving harder, since a local environment is difficult to share.

These solutions have their use cases, but let’s look at remote sandbox solutions as well.

Remote Kubernetes sandbox solutions

Remote Kubernetes sandbox solutions provide a cloud-based environment where you can experiment with Kubernetes without having to set up and maintain the infrastructure on your local machine. These solutions are excellent for learning, testing, and even prototyping Kubernetes deployments.

Deployed on a public cloud platform

You can run a temporary cluster straight on a major cloud vendor platform. The key here is to have a way to spin up a cluster easily.

The main advantage of this Kubernetes sandbox solution is that you can create a cluster as close as possible to your production cluster. This includes the same Kubernetes version, storage classes, network plugins, etc. Cloud sandbox clusters also support autoscaling, so running tests (such as stress tests) that involve the cluster automatically increasing and decreasing the number of nodes is now possible. Lastly, if you need to test specific features of a cloud vendor, this would be the only way to test those out.

This method also has some disadvantages. The first is that it will cost you money. It is also generally much slower to provision than a local cluster, and it can be much more complicated, with a far wider range of configuration options depending on the cloud vendor and how much customization you need. One could argue that it is less a sandbox and more simply a separate cluster.

Let's look at how to do this on a major public cloud vendor, AWS.

Kubernetes sandbox on AWS

Amazon Web Services (AWS) has a command-line tool that makes creating and managing Elastic Kubernetes Service (EKS) clusters quite easy: eksctl. Below, we give a short tutorial on how to get started.

Important: If you follow this tutorial, it will cost you money!

It also requires the following prerequisites:

  1. Create an IAM user.
  2. Enable programmatic access for that user.
  3. Install the user's access key ID and secret access key locally on your computer.
  4. Give the user permissions that allow eksctl to call the necessary AWS APIs.
  5. Install eksctl.

Ensure you have configured your AWS credentials correctly; refer to this help document to check. You should be able to run aws sts get-caller-identity successfully, as shown below.
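
A quick sanity check looks like this; the values shown here are placeholders, and yours will reflect your own account:

$ aws sts get-caller-identity
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user"
}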

Also, note that eksctl is a single executable; the easiest way to ensure you run the latest version is to download it. Please follow the instructions on the official website.

Assuming you properly configured your AWS credentials, create a simple, temporary EKS cluster as shown.

$ eksctl create cluster --name test
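
By default, eksctl creates a small two-node cluster. If your tests need a specific instance type or autoscaling bounds, eksctl exposes flags for these, as sketched below (note that the Kubernetes cluster autoscaler itself still has to be deployed separately for scaling to actually happen):

$ eksctl create cluster --name test \
  --node-type t3.medium \
  --nodes 2 --nodes-min 1 --nodes-max 4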

Now you will have to wait about 15 minutes to get the cluster up and running. Eksctl will also update your kubeconfig file and give you direct access to the cluster:

$ kubectl get nodes
NAME                                           STATUS   ROLES          AGE     VERSION
ip-192-168-10-34.eu-west-1.compute.internal    Ready    <none>   4m32s   v1.25.12-eks-8ccc7ba
ip-192-168-78-236.eu-west-1.compute.internal   Ready    <none>   4m31s   v1.25.12-eks-8ccc7ba

Eksctl supports "add-ons," but these are only EKS-specific components such as the EBS CSI driver, the VPC CNI, and so on. These details are specific to EKS; other cloud vendors have their own equivalents.
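
For instance, enabling the EBS CSI driver as a managed add-on looks roughly like this (a sketch using eksctl's addon subcommand; production setups typically also attach an IAM role for the driver):

$ eksctl create addon --name aws-ebs-csi-driver --cluster test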

Finally, it is important to delete the cluster afterward. Again, this will take quite a while to finish.

$ eksctl delete cluster --name test

Virtual clusters

This newer technology allows you to run Kubernetes clusters inside an existing one. It makes running temporary sandbox clusters practical, especially when your use case requires more than a local environment, such as collaboration or integration with third-party services to mimic a production environment.

The obvious drawback is that you need to have a cluster already running, and it must be “big enough” to accommodate any temporary cluster that could run simultaneously.

Vendors can simplify the process by letting you create virtual clusters on their cloud platforms. Uffizzi is one such vendor: it specializes in enabling self-service access to isolated environments that are 10x faster to spin up and 10x less expensive than operating a full cluster.

Let's look at how to create a virtual cluster using their system.

Kubernetes sandbox with Uffizzi

The first step is to install the Uffizzi command-line tool, explained here. As usual, you will need kubectl installed. You will also need an account with either GitHub or GitLab.

The next step is to log in to Uffizzi and create a new project, which is achieved by simply running the following command:

$ uffizzi login

Creating a cluster is easy; just run the following, replacing "testcluster" with your chosen cluster name:

$ uffizzi cluster create testcluster

About a minute later, your cluster will be ready.

$ kubectl get nodes
NAME                                                  STATUS   ROLES          AGE   VERSION
gke-uffizzi-ucluster-sandbox-preempti-4a300ffc-hg22   Ready    <none>   70s   v1.27.3+k3s1

Before deploying our app, we must make one small change to its manifest file: change the ingressClassName from nginx to uffizzi. This uses the default Uffizzi ingress class, which is practical because it lets you access the ingress without needing a domain name.
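
One quick way to make that edit from the shell (GNU sed shown; on macOS, use sed -i '' instead):

$ sed -i 's/ingressClassName: nginx/ingressClassName: uffizzi/' test-app.yaml

With the change in place, apply the manifest file: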

$ kubectl apply -f test-app.yaml 
deployment.apps/myapp created
service/myapp created
ingress.networking.k8s.io/myapp created

You will need to wait for the ingress to actually be created. The Uffizzi ingress class overwrites the ingress hostname to make it more practical. After a minute or so, you should see this:

$ kubectl get ingress
NAME    CLASS     HOSTS                                                      ADDRESS        PORTS   AGE
myapp   uffizzi   myapp-default-testcluster-c784.uclusters.app.uffizzi.com   34.31.80.202   80      5m51s

So let’s try to access this URL.

$ curl myapp-default-testcluster-c784.uclusters.app.uffizzi.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

It works! Finally, let’s delete the cluster:

$ uffizzi cluster delete testcluster


Conclusion

This article reviewed several ways to run Kubernetes sandbox environments. We looked at kind and minikube as ways to test applications locally. These solutions work well for testing deployments that do not rely on cloud-specific components: they cost nothing and are quick to spin up and tear down.

We also considered using a cloud provider to spin up clusters separate from the production cluster. The benefits here are that we can get closer to the exact specs of the production environment. However, this comes with the major drawback that you are paying hourly to run your tests. If you forget to delete the cluster after your testing, this could incur significant costs.

Lastly, we looked at virtual clusters, specifically Uffizzi. You gain the ability to temporarily spin up clusters, test your deployment, and spin them down automatically, taking many manual steps off end users' hands. Virtual clusters can also run inside a cluster you already operate.

Ultimately, Kubernetes sandbox testing is important, and it is more likely to get done when it is straightforward to set up and manage.
