
What is Kubernetes?

Official description

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.

This is how the official website describes it. In effect, Kubernetes is a standardized way of deploying applications at any scale - from development prototyping through to massive, highly available enterprise solutions.

In this document I'll give a very brief summary that should help those of you new to Kubernetes to take your first steps with Egeria.

What are the key concepts in Kubernetes?

These are just some of the concepts that can help you understand what's going on. This isn't a complete list.


Kubernetes uses a standard API oriented around manipulating objects. The commands are therefore very consistent - it's all about the objects.

"Making it so"

The system continually observes its own state through these objects and, where there are discrepancies, takes action to "make it so", as Captain Picard would say. The approach is declarative: the objects describe the desired state of the system, and Kubernetes works out how to get there.


A namespace provides a way of separating Kubernetes resources by user or application as a convenience. It keeps names understandable and avoids clashes across the cluster.

For example a developer working on a k8s cluster may have a namespace of their own to experiment in.
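As a sketch of that idea (assuming `kubectl` is configured against a running cluster; "alice-dev" is just an example name), a developer might set up a namespace like this:

```shell
# Create a personal namespace to experiment in
kubectl create namespace alice-dev

# Run commands against that namespace explicitly...
kubectl get pods --namespace alice-dev

# ...or make it the default for the current context
kubectl config set-context --current --namespace=alice-dev
```

Resources created without an explicit namespace then land in `alice-dev` rather than `default`.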


A container is what runs your workload. It's similar to a virtual machine in some ways, but much more lightweight. Containers run from images, which may be custom-built or standard off-the-shelf reusable items. A container is typically focused on a single need or application.


A pod is a single group of one or more containers. Typically, a single main container runs in a pod, but this may be supported by additional containers for logging, audit, security, initialization, etc. Think of this as an atomic unit that can run a workload.

Pods are disposable - they will come and go. Other objects are concerned with providing a reliable service.
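A minimal sketch of working with a pod directly (assuming `kubectl` is configured against a running cluster; the name "web" and the `nginx` image are just examples):

```shell
# Run a single-container pod from a standard nginx image
kubectl run web --image=nginx

# List pods and inspect the containers inside one
kubectl get pods
kubectl describe pod web

# Pods are disposable - deleting one is cheap
kubectl delete pod web
```

In practice you rarely create bare pods like this; deployments and statefulsets (below) manage them for you.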


A service provides network accessibility to one or more pods. The service name is added to the cluster's local Domain Name Service (DNS) for easy accessibility from other pods. Load can be shared across multiple pods.
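As a sketch (assuming a deployment named "web" already exists - a hypothetical name - and `kubectl` is configured against a running cluster):

```shell
# Expose the "web" deployment inside the cluster; other pods can
# then reach it via the DNS name "web"
kubectl expose deployment web --port=80 --target-port=8080

# The service load-balances traffic across all matching pods
kubectl get service web
```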


Think of ingress as the entry point to Kubernetes services from an external network perspective - so these are the addresses external users would be aware of.


A deployment keeps a set of pods running - maintaining the requested number of replicas, restarting pods that stop, matching resource requirements, and handling node failure.
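A minimal sketch (assuming `kubectl` is configured against a running cluster; "web" and the `nginx` image are example choices):

```shell
# Ask for three identical replicas of an nginx pod; Kubernetes
# restarts or reschedules them if they fail
kubectl create deployment web --image=nginx --replicas=3
kubectl get deployment web

# Scaling is just a change to the desired state
kubectl scale deployment web --replicas=5
```

Note that scaling is declarative: you state the new desired replica count and the cluster converges on it.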


A statefulset goes further than a deployment in that it keeps a well-known identifier for each identical replica. This helps in allocating persistent storage and network resources to a replica.


A configmap is a way of keeping configuration (exposed as files or environment variables) separate from the application.
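As a sketch (assuming `kubectl` is configured against a running cluster; "app-config" and the key are example names):

```shell
# Store configuration separately from the application image
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug

# The keys can be mounted as files or injected as environment
# variables into a pod; view the stored data with:
kubectl get configmap app-config -o yaml
```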


A secret is used to keep information secret, as the name might suggest ... This might be a password or an API key. The data is base64-encoded so it cannot be read directly as plain text - note this is encoding, not encryption, so access should still be restricted.
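A sketch of both points (the secret name "db-pass" and the value are hypothetical; the `kubectl` command assumes a running cluster):

```shell
# base64 encoding only prevents casual reading - it is trivially
# reversible, so it is not a substitute for access control
echo -n 'my-password' | base64     # prints bXktcGFzc3dvcmQ=

# Create a secret from a literal value
kubectl create secret generic db-pass --from-literal=password=my-password
```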

Custom Objects

In addition to this list -- and many more covered in the official documentation -- Kubernetes also supports custom resources. These form a key part of Kubernetes Operators.


Pods can request storage through a persistent volume claim (PVC), which is resolved - either manually or automatically - to a persistent volume.

See the k8s docs on Persistent Volumes.
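As a sketch (assuming `kubectl` is configured against a running cluster; "my-claim" is a hypothetical claim name):

```shell
# List persistent volume claims and the volumes they are bound to
kubectl get pvc

# Show details of one claim, including its storage class, capacity
# and bound volume
kubectl describe pvc my-claim
```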

Why are we using Kubernetes?

Kubernetes can run applications on systems of all sizes - from a small Raspberry Pi, through desktops and workstations, to huge cloud deployments.

Whilst the details around storage, security, networking etc. do vary by implementation, the core concepts and configurations work across all of them.

Some users mainly want an easy way to play with development code and try out new ideas, whilst at the far end of the spectrum enterprises want something highly scalable, reliable, and easy to monitor.

For Egeria we want to achieve two main things:

  • Provide easy to use demos and tutorials that show how Egeria can be used and worked with without requiring too much complex setup.
  • Provide examples that show how Egeria can be deployed in k8s, and then adapted for the organization's needs.

Other alternatives that might come to mind include

  • Docker -- whilst simple, this is more geared around running a single container, and building a complex environment means a lot of work combining application stacks together, often resulting in something that isn't easily reused. We do of course still have container images, which are essential to k8s, but these are simple and self-contained.
  • docker-compose -- this builds on Docker by allowing multiple containers and services to be orchestrated, but it's much less flexible and scalable than Kubernetes.

How do I get access to Kubernetes?

Kubernetes' own Getting Started guide provides links to setting up Kubernetes in many environments. Below we'll take a quick look at some of the simpler examples, especially for new users.

microk8s (Linux, Windows, MacOS)

4GB is recommended as the minimum memory requirement

As with most k8s implementations, some ongoing CPU is consumed while it is running, so if you are using a laptop or other low-power device it's recommended to refer to the relevant docs and stop k8s when not in use.

When running on a separate server or a cloud service this isn't a concern.

microk8s uses its own commands to avoid conflicts

When using microk8s, note that the standard k8s commands are renamed to avoid clashes, so use the microk8s ones in the remainder of the Egeria documentation:

  • kubectl becomes microk8s kubectl
  • helm becomes microk8s helm

They can also be aliased on some platforms, for instance using alias kubectl='microk8s kubectl' in ~/.zshrc or an equivalent shell startup script.


The MacOS install docs cover the steps needed to install microk8s.

Most of the Egeria development team use MacOS, so the instructions are elaborated and qualified here:

Disable firewall stealth mode first

Before installing, go into System Preferences -> Security and Privacy. Click the lock to get into Admin mode. Then ensure Firewall Options -> Enable Stealth Mode is NOT enabled (no tick). If it is left enabled, microk8s will not work properly!

  • The recommended approach uses Homebrew. This offers a suite of tools often found on Linux which are easy to set up on MacOS.
  • If you are concerned over the firewall change, or HomeBrew requirement, refer back to the official k8s documentation and choose another k8s implementation that works for you.
  • Ensure you turn on the following services: storage, dns, helm3. dashboard is also useful to understand more about k8s and what is running. However, it is currently failing as described in issue 2507

As an example, the following commands should get you set up, but always check the official docs for current details:

Installing microk8s on MacOS

brew install ubuntu/microk8s/microk8s
microk8s install
microk8s status --wait-ready
microk8s enable dns storage helm3
microk8s kubectl get all --all-namespaces

Kubernetes is now running.


Follow the official instructions (untested)


Docker Desktop (Windows, MacOS)

After installing, go into Docker Desktop Settings and select Kubernetes. Make sure Enable Kubernetes is checked. Also, under Resources, ensure at least 4GB of memory is allocated to Docker.


Many cloud providers offer managed Kubernetes deployments which can be used for experimentation or production.

In addition to a cloud install, ensure you have installed the relevant cloud provider's tooling to manage their k8s environment, including having access to the standard Kubernetes command kubectl.

Note that in the team's testing we are mostly running Red Hat OpenShift on IBM Cloud as a managed service. We welcome feedback on running our examples in other environments, especially as some of the specifics around ingress rules, storage, and security can vary.

Accessing applications in your cluster

Further information

See the Kubernetes docs .


The sample charts usually provide an option to use a NodePort.

This is often easiest when running k8s locally, as any of the IP-addressable worker nodes in your cluster can service a request on the port provided - hence the name: a port on your node.
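A sketch of finding the assigned node port (assuming `kubectl` is configured against a running cluster; "my-service" is a hypothetical service name):

```shell
# Check the service type and its port mapping
kubectl get service my-service

# The PORT(S) column shows something like 80:30080/TCP - here 30080
# is the node port, reachable on any worker node's IP address:
#   curl http://<node-ip>:30080
```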


This can be run at a command line, and directly sets up forwarding from local ports into services running in your cluster. It requires no additional configuration beforehand, and lasts only as long as the port forwarding is running.

See port forwarding for more information.
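As a sketch (assuming `kubectl` is configured against a running cluster; "my-service" and the port numbers are example values):

```shell
# Forward local port 8080 to port 80 of a service in the cluster;
# the forwarding lasts only while this command runs
kubectl port-forward service/my-service 8080:80

# Then, in another terminal:
#   curl http://localhost:8080
```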


Ingress rules define how traffic arriving at your k8s cluster is routed. Their definition tends to vary substantially between k8s implementations, but ingress is often the easiest approach when running with a cloud service.

For example, see the microk8s ingress documentation.
