Kubernetes on Debian 13: A Complete Deployment Guide with Flannel and Cilium

Start with Flannel for simplicity, graduate to Cilium for production-grade performance and security.

Introduction

Kubernetes has become the standard for container orchestration, but the vast majority of deployment guides assume cloud-managed environments where the underlying infrastructure is abstracted away. For organisations running their own servers across the UAE, GCC, and the wider Middle East, the reality is quite different: bare-metal hardware, private VLANs, and an operating system that enforces its own rules.

At IP Technics, we have been deploying and managing infrastructure across the region since 2009. Our clients in finance, healthcare, education, and government operate in environments where data sovereignty, network control, and cost efficiency are non-negotiable. Kubernetes gives these organisations the agility of containerised workloads without surrendering control to a hyperscaler.

This guide distils our production experience into a practical, two-stage approach to building a Kubernetes cluster on Debian 13 (Trixie), the current stable release.

Stage 1 walks you through a complete cluster deployment using Flannel, the simplest and most lightweight Container Network Interface (CNI) plugin. Flannel is ideal for development environments, proof-of-concept deployments, and teams building their first Kubernetes competency. We cover not just the commands, but the reasoning behind each step, including the specific challenges that Debian 13 introduces in 2026.

Stage 2 upgrades the cluster's networking to Cilium, the eBPF-powered CNI that Google selected to power GKE Dataplane V2. Cilium replaces the decades-old iptables model with programs that execute directly inside the Linux kernel, delivering identity-based security policies, deep traffic observability, and routing performance that does not degrade as the cluster scales.

Both stages use the same 3-node architecture on Debian 13, so you can start simple and upgrade in place when your requirements demand it.

Who Is This For?

Infrastructure engineers, DevOps teams, and technology leaders evaluating Kubernetes for on-premises or hosted bare-metal deployments. We assume solid Linux administration skills and basic networking knowledge. No prior container or YAML experience is required; we explain those concepts as we go.

Architecture Overview

Before touching a terminal, it is worth understanding what we are building and why each component exists. A Kubernetes cluster is not a single application; it is a distributed system with distinct roles.

The Machines

Our deployment uses three Debian 13 servers. This is the minimum configuration that provides genuine application redundancy: if one worker node goes offline, the surviving worker absorbs the workload while the control plane continues to manage the cluster.

                  k8master01                 k8node01                  k8node02
FQDN              k8master01.iptechnics.com  k8node01.iptechnics.com   k8node02.iptechnics.com
IP Address        10.254.0.11                10.254.0.12               10.254.0.13
Role              Control Plane              Worker                    Worker
API Server        Yes                        No                        No
etcd Database     Yes                        No                        No
Scheduler         Yes                        No                        No
Kubelet Agent     Yes                        Yes                       Yes
CNI Agent         Yes                        Yes                       Yes
Application Pods  No (tainted)               Yes                       Yes

The Control Plane (k8master01) hosts the cluster's decision-making components. The API Server is the single entry point for all commands, whether from kubectl, the dashboard, or automated tooling. The etcd database stores every piece of cluster state. The Scheduler watches for new pods and decides which worker has the capacity to run them. By default, Kubernetes "taints" the master node to prevent user workloads from running there, reserving its resources for cluster management.

The Worker Nodes (k8node01 and k8node02) are where your applications actually run. Each worker runs a Kubelet agent that receives instructions from the master, and a container runtime (containerd) that pulls images and starts containers. When you deploy an application with three replicas, Kubernetes distributes them across both workers so that losing one machine does not take down the service.

Understanding the Two Networks

This is one of the most important concepts to grasp before proceeding. Every Kubernetes cluster operates on two completely separate, non-overlapping networks simultaneously.

The Node Network is your physical (or VM) network. In our case, this is the 10.254.0.0/24 subnet. The three Debian servers use these addresses to reach each other via their Ethernet interfaces. The Kubernetes API server listens on this network at 10.254.0.11:6443. SSH, DNS, and all traditional server-to-server communication happens here.

The Pod Network is a virtual overlay that exists only inside the cluster. Every container (pod) receives a unique IP address from this range. The CNI plugin (Flannel or Cilium) is responsible for routing traffic between pods, even when they are on different physical nodes. When a pod on k8node01 needs to talk to a pod on k8node02, the CNI wraps the packet, ships it across the node network, and unwraps it on the other side.

For Stage 1 (Flannel), we use 10.244.0.0/16 as the pod network. For Stage 2 (Cilium), we use 172.16.0.0/16. In both cases, the range must not overlap with the node network.

Planning Tip

Choose your Pod Network CIDR before running kubeadm init. This value is permanently written to the cluster database (etcd) and cannot be changed without a full re-initialisation. We have seen teams lose hours trying to patch a locked CIDR on a running cluster. Plan once, deploy cleanly.
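A quick pre-flight check helps here. The snippet below is a minimal sketch that compares the two ranges using python3's standard ipaddress module (present on a default Debian 13 install); the NODE_NET and POD_NET values are this guide's example ranges, so substitute your own before running.

```shell
# Sanity check before kubeadm init: confirm the planned Pod CIDR
# does not overlap the node network.
NODE_NET="10.254.0.0/24"
POD_NET="10.244.0.0/16"

if python3 -c "import ipaddress as i, sys; sys.exit(0 if i.ip_network('$NODE_NET').overlaps(i.ip_network('$POD_NET')) else 1)"; then
  echo "ERROR: $POD_NET overlaps $NODE_NET -- choose another Pod CIDR"
else
  echo "OK: $POD_NET and $NODE_NET do not overlap"
fi
```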

Preparing Debian 13

Debian 13 (Trixie) is an excellent foundation for Kubernetes: stable, minimal, and well-maintained. However, its modern security posture and systemd integration introduce specific friction points that must be addressed before the cluster can function. Every step in this section must be performed on all three nodes: k8master01, k8node01, and k8node02.

Disabling Swap

Kubernetes requires swap to be completely disabled. This is not a soft preference; the Kubelet (the node agent) will refuse to start if it detects any active swap. The reasoning is fundamental to how Kubernetes manages resources.

Kubernetes operates as a bin-packing system. It calculates exactly how much physical RAM is available on each node, then schedules containers into that space with precise limits. If the operating system silently moves container memory to a slow swap partition on disk, two things break: the performance guarantees Kubernetes makes to your applications become meaningless, and the Scheduler's resource accounting becomes unreliable because it cannot distinguish between fast RAM and slow disk.
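The bin-packing described above is driven by per-container resource declarations in your manifests. A minimal sketch (the values are illustrative, not recommendations): the Scheduler packs nodes based on requests, while the kernel's cgroups enforce limits.

```yaml
# Illustrative fragment of a pod spec: the Scheduler bin-packs on
# "requests"; the kernel (via cgroup v2) enforces "limits"
containers:
- name: web
  image: nginx:latest
  resources:
    requests:
      memory: "256Mi"   # reserved capacity used for scheduling decisions
      cpu: "250m"
    limits:
      memory: "512Mi"   # hard ceiling enforced at runtime
      cpu: "500m"
```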

On Debian 13, disabling swap requires more than a single command because systemd will aggressively re-enable it on the next boot.

# Disable swap immediately
sudo swapoff -a

# Remove the swap entry from the filesystem table
sudo sed -i '/swap/s/^/#/' /etc/fstab

# Find and permanently mask the systemd swap unit
SWAP_UNIT=$(systemctl list-units --type=swap --all | grep '\.swap' | awk '{print $1}' | head -n 1)

# Guard against systems that have no swap unit at all
[ -n "$SWAP_UNIT" ] && sudo systemctl stop "$SWAP_UNIT"
[ -n "$SWAP_UNIT" ] && sudo systemctl mask "$SWAP_UNIT"

# Verify swap is truly off (should return nothing)
swapon --show

Debian 13 Specific

Debian's systemd-gpt-auto-generator scans the disk at boot time. If it finds a partition with the swap GPT type code, it activates the swap automatically, completely ignoring /etc/fstab. Masking the systemd unit is the only reliable way to keep swap disabled across reboots. For persistent cases, running wipefs -a on the swap partition removes the type signature from the disk entirely.

Loading Required Kernel Modules

The Linux kernel is modular by design. Rather than loading every driver at boot, it loads only what the system explicitly requests. Kubernetes requires two modules that Debian 13 does not load by default.

The overlay module provides the filesystem layer that containerd uses to build container images from layers. Without it, the container runtime cannot start. The br_netfilter module is equally critical: it allows the Linux kernel's packet filtering (iptables or eBPF) to inspect traffic that crosses virtual bridges. Both Flannel and Cilium rely on this capability for pod-to-pod networking.

# Create a persistent configuration so modules load on every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load both modules immediately (no reboot needed)
sudo modprobe overlay
sudo modprobe br_netfilter

# Verify they are active
lsmod | grep -E 'overlay|br_netfilter'

Enabling Network Forwarding

By default, Linux does not forward packets between network interfaces. Since Kubernetes needs traffic to flow between the physical network and the virtual pod network (and across virtual bridges), we must enable forwarding at the kernel level.

The bridge-nf-call settings tell the kernel to pass bridge traffic through the iptables/netfilter framework, which is where both Flannel and Cilium insert their routing logic.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply without rebooting
sudo sysctl --system

Installing the Container Runtime: Containerd

Kubernetes does not run containers itself. It delegates that responsibility to a container runtime. Containerd is the industry standard, used by Docker, Kubernetes, and every major cloud provider. We install it from the Docker repository rather than Debian's default packages because the Docker-maintained version is more current and better tested against Kubernetes releases.

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/debian/gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install -y containerd.io

Configuring Containerd for Cgroup v2

Debian 13 uses Cgroup v2 exclusively. Cgroups (Control Groups) are the Linux kernel feature that enforces resource limits on processes: how much RAM a container can use, how much CPU it can consume, and so on. There are two versions of the cgroup interface, and Debian 13 only supports v2.
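If you want to confirm this on your own hosts before configuring containerd, the filesystem type of the cgroup mount reveals which hierarchy is active. On a default Debian 13 install this prints cgroup2fs:

```shell
# Prints "cgroup2fs" when the unified cgroup v2 hierarchy is in use
stat -fc %T /sys/fs/cgroup
```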

Containerd must be explicitly configured to use the systemd cgroup driver. If this setting is left at its default (false), containerd will attempt to manage cgroups independently. This creates a conflict where containerd places container processes in one part of the cgroup hierarchy while systemd and the Kubelet expect them in another. The result is pods that appear to start but are invisible to Kubernetes and the CNI plugin.

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd

Installing the Kubernetes Toolset

Kubernetes provides three command-line tools that work together. kubeadm is the bootstrapper that initialises the cluster and generates security certificates. kubelet is the node agent that runs on every machine, receiving instructions from the master and ensuring containers are healthy. kubectl is the administrative interface that you use to deploy applications, inspect logs, and manage the cluster.

Debian 13 GPG Compatibility

As of February 2026, Debian 13 rejects GPG Signature Packet v3, which the Kubernetes repository still uses for signing. This is a known compatibility gap. The [trusted=yes] flag in the repository definition bypasses this check specifically for the Kubernetes source. This will be resolved when the Kubernetes project updates their signing infrastructure to v4.

echo 'deb [trusted=yes] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

# Pin versions to prevent automatic upgrades from breaking the cluster
sudo apt-mark hold kubelet kubeadm kubectl

At this point, all three nodes are prepared with an identical base: swap disabled, kernel modules loaded, forwarding enabled, containerd running with the correct cgroup driver, and the Kubernetes tools installed. The nodes are ready to be assembled into a cluster.

Stage 1: Building the Cluster with Flannel

Flannel is the simplest CNI plugin in the Kubernetes ecosystem. It creates a flat virtual network using VXLAN tunnels, giving every pod a unique IP address and handling all inter-node routing transparently. There are no policies, no encryption, no deep packet inspection. It is pure plumbing, and it works reliably.

For teams new to Kubernetes, Flannel is the right starting point because it removes networking complexity entirely, letting you focus on learning how deployments, services, scaling, and self-healing work.

Initialising the Control Plane

The kubeadm init command is the single most important step in building the cluster. It generates the TLS certificates that secure all communication, starts the API server and etcd database as static pods, and creates the join token that worker nodes will use to authenticate.

We pass two critical flags. The --pod-network-cidr tells Kubernetes which IP range to reserve for pods. Flannel expects 10.244.0.0/16 by default. The --apiserver-advertise-address tells the API server to bind to our known IP so worker nodes know exactly where to find it.

Run this on k8master01 only:

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=10.254.0.11

When initialisation completes, kubeadm prints a join command containing a unique token and certificate hash. Copy this command and save it; you will need it for the worker nodes.

Before you can use kubectl, you need to copy the cluster credentials to your home directory. Without this, kubectl does not know where the API server is or how to authenticate:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point, if you run kubectl get nodes, you will see k8master01 listed with a status of NotReady. This is expected. The node is waiting for a CNI plugin to provide pod networking before it can be considered fully operational.

Installing the Pod Network: Flannel

Now we give the cluster its nervous system. Flannel deploys as a DaemonSet, which is a Kubernetes construct that ensures exactly one copy of a pod runs on every node in the cluster. Each Flannel agent does three things: it claims a /24 subnet slice from the 10.244.0.0/16 range, creates a virtual bridge interface (cni0) on the host, and establishes VXLAN tunnels to every other node for cross-node pod communication.
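One consequence of this /24-per-node model is a ceiling on cluster size: the default 10.244.0.0/16 pool divides into exactly 256 node slices. You can confirm the arithmetic with python3's ipaddress module:

```shell
# Number of /24 node slices available in Flannel's default /16 pool
python3 -c "import ipaddress; print(sum(1 for _ in ipaddress.ip_network('10.244.0.0/16').subnets(new_prefix=24)))"
# prints 256
```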

Run this on k8master01:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Give Flannel about 30 seconds to initialise. It needs to pull its container image, register with the API server, and write a configuration file to /run/flannel/subnet.env on the master node. Once this file exists, the Kubelet can assign IP addresses to pods, and the node will transition to Ready status.

Verify the network is operational:

# Flannel agents should be Running (one per node)
kubectl get pods -n kube-flannel

# The master should now show 'Ready'
kubectl get nodes

# CoreDNS (the cluster's internal phonebook) should now have IPs
kubectl get pods -n kube-system

Joining the Worker Nodes

With the master running and the pod network established, it is time to bring the workers online. Each worker needs to authenticate with the master using the token generated during initialisation and verify the master's identity using the certificate hash. This ensures that a rogue machine on your network cannot impersonate the control plane.

Run the join command on k8node01 and k8node02:

sudo kubeadm join 10.254.0.11:6443 \
  --token <TOKEN> \
  --discovery-token-ca-cert-hash sha256:<HASH>

If more than 24 hours have passed since initialisation, the token will have expired. Generate a fresh one from k8master01:

kubeadm token create --print-join-command

Once both workers have joined, Flannel will automatically deploy its agent pods to the new nodes, assign them subnet slices, and establish VXLAN tunnels. After about 60 seconds, all three nodes should appear as Ready:

kubectl get nodes

# Expected output:
# k8master01   Ready   control-plane   ...   v1.30.x
# k8node01     Ready   <none>          ...   v1.30.x
# k8node02     Ready   <none>          ...   v1.30.x

Congratulations. You have a functioning Kubernetes cluster. The control plane is managing state, the workers are accepting workloads, and the pod network is routing traffic between nodes.

Deploying Your First Application

To verify the full stack from top to bottom, we deploy a simple Nginx web server with three replicas distributed across the cluster. In Kubernetes, you do not run applications with a command; you describe the desired state in a YAML manifest, and the system works continuously to make reality match the description.

The manifest below defines two objects. The Deployment tells Kubernetes to run three copies of Nginx and keep them alive. The Service creates a stable network endpoint so the outside world can reach the web server through any node's IP address.

Create a file called nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32000

Apply the manifest and verify the deployment:

kubectl apply -f nginx-deployment.yaml
# Check pod distribution (note the NODE column)
kubectl get pods -o wide

# Access via any node IP on port 32000
curl http://10.254.0.11:32000
curl http://10.254.0.12:32000
curl http://10.254.0.13:32000

All three URLs return the same Nginx welcome page. This is the NodePort mesh at work: Kubernetes opens port 32000 on every node, and the kube-proxy service forwards traffic to a healthy pod regardless of which node you connect to. If k8node02 goes offline, its replica is automatically rescheduled to k8node01, and the service continues without interruption.

When Flannel Is Enough

Flannel is well suited for lab environments, small clusters, proof-of-concept deployments, and teams building their first Kubernetes competency. Its strength is simplicity: minimal resource overhead, a small attack surface, and years of proven stability. It does exactly one thing (flat overlay networking) and does it reliably.

However, Flannel's simplicity is also its ceiling. It offers no network policies (you cannot control which pods talk to each other), no encryption between pods, no traffic observability, and its iptables-based routing degrades as the number of services grows. For anything beyond a handful of nodes or a lab environment, these limitations become operational risks.

Consider Flannel appropriate for:

  • Lab, development, and staging environments
  • Small clusters with a limited number of nodes and predictable traffic
  • Internal tooling and proof-of-concept deployments
  • Teams learning Kubernetes who want to focus on orchestration before networking

For production workloads, regulated environments, or any cluster that will grow beyond its initial footprint, we recommend starting with Cilium from day one. The upgrade path exists (as we cover in Stage 2), but starting clean avoids the migration step entirely.

IP Technics Perspective

We deploy Flannel-based clusters for internal lab work, client proof-of-concept environments, and scenarios where operational simplicity reduces risk. For clients in regulated industries -- finance, healthcare, government, and education -- who require network policies, encrypted pod-to-pod traffic, audit-grade observability, or compliance with data protection frameworks, we deploy Cilium from day one.

Stage 2: Upgrading to Cilium

When your cluster moves beyond development and the limitations of Flannel start to surface, Cilium is the natural next step. Cilium does not just route packets; it fundamentally changes how the Linux kernel handles network traffic by replacing iptables rules with eBPF (Extended Berkeley Packet Filter) programs.

In a traditional Flannel setup, every packet passes through a sequential list of iptables rules. As your cluster grows to hundreds of services, this list becomes a bottleneck. Cilium replaces this with hash-table lookups in the kernel, delivering constant-time performance whether you have 10 pods or 10,000. It is the same technology Google selected to power GKE Dataplane V2, and it is available on your own infrastructure.

What Changes with Cilium?

Capability          Flannel                      Cilium
Routing Engine      iptables (linear rule scan)  eBPF (constant-time hash lookup)
Network Policies    Not supported                L3/L4 and L7 (HTTP-aware)
Encryption          Not supported                WireGuard / IPsec
Observability       None built-in                Hubble (real-time flow maps)
Identity Model      IP-based                     Label-based (survives IP changes)
Service Mesh        Requires Istio/Linkerd       Sidecar-free (built-in)
Used by Google GKE  No                           Yes (Dataplane V2)

Cilium's identity-based model deserves special attention. In a Flannel cluster, if you wanted to create a firewall rule between pods, you would need the pod's IP address. But Kubernetes assigns new IPs every time a pod restarts. Cilium solves this by assigning a Security Identity to each pod based on its labels (e.g., app: nginx). The identity follows the pod regardless of which node it lands on or which IP it receives. Your security rules reference labels, not addresses, making them stable and human-readable.
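To make this concrete, here is a sketch of what a label-based rule looks like as a CiliumNetworkPolicy. The policy name and the app: frontend label are hypothetical; it allows only pods carrying that label to reach the nginx pods from our earlier deployment, on port 80, and nothing else.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-nginx
spec:
  endpointSelector:
    matchLabels:
      app: nginx            # the identity being protected
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend       # only this identity may connect
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
```

Note that no IP address appears anywhere in the rule: pods can restart and move between nodes indefinitely, and the policy continues to apply.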

Migration Path: Flannel to Cilium

Flannel and Cilium cannot coexist on the same cluster. The migration requires removing Flannel, cleaning up the network interfaces it created, and deploying Cilium in its place. This process takes about 10 minutes and does not require re-initialising the cluster or rejoining worker nodes.

Step 1: Remove Flannel

First, delete the Flannel DaemonSet and all its associated resources. This stops the Flannel agents on every node and removes the configuration from the API server.

Run on k8master01:

kubectl delete -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Step 2: Clean Up Network State

Flannel leaves behind virtual network interfaces (cni0 and flannel.1), CNI configuration files, and iptables rules on every node. These must be removed so Cilium starts with a clean slate. If the old cni0 bridge retains its Flannel-assigned IP, Cilium will fail to attach its own addresses.

Run on all three nodes (k8master01, k8node01, k8node02):

sudo rm -rf /etc/cni/net.d/*
sudo rm -rf /var/lib/cni/*
sudo ip link delete cni0 2>/dev/null || true
sudo ip link delete flannel.1 2>/dev/null || true
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -X

Step 3: Install the Cilium CLI

Cilium is managed through its own command-line tool rather than raw YAML manifests. The CLI handles installation, upgrades, health checks, and the built-in connectivity test suite.

Run on k8master01:

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)

curl -L --fail --remote-name-all \
  "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz"

sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz

Step 4: Deploy Cilium

The install command inspects your cluster configuration, detects the Pod CIDR range from the kubeadm settings, and deploys the Cilium agent (as a DaemonSet) and the Cilium Operator (which manages IP address allocation). Within about 60 seconds, eBPF programs are compiled and attached to the kernel's network hooks on every node.

Run on k8master01:

cilium install

Step 5: Apply Debian 13-Specific Configuration

Debian 13 introduces two differences from the standard Kubernetes environment that require additional configuration for Cilium.

CNI Binary Path: The Kubelet on Debian 13 searches for CNI plugin binaries in /usr/lib/cni rather than the standard /opt/cni/bin where Cilium installs them. We create symlinks to bridge the gap:

Run on all three nodes:

sudo mkdir -p /usr/lib/cni
sudo ln -sf /opt/cni/bin/cilium-cni /usr/lib/cni/cilium-cni
sudo ln -sf /opt/cni/bin/loopback /usr/lib/cni/loopback

Cgroup v2 Root: Cilium's default configuration creates its own cgroup hierarchy. On Debian 13, which enforces a unified cgroup v2 tree managed by systemd, this conflicts with the host. We patch the Cilium configuration to use the system's existing cgroup root:

Run on k8master01:

kubectl patch configmap cilium-config -n kube-system \
  --type merge \
  -p '{"data":{"cgroup-root":"/sys/fs/cgroup"}}'

# Restart agents to pick up the change
kubectl rollout restart ds cilium -n kube-system

Step 6: Restart Pods for eBPF Identity Assignment

Any pods that were created before Cilium was fully operational need to be recreated so that Cilium can assign them eBPF security identities. Without this step, those pods exist in the cluster but are invisible to Cilium's networking stack.

Run on k8master01:

kubectl delete pods --all -A

Kubernetes will immediately recreate the pods. This time, the Kubelet calls the Cilium CNI binary, which allocates an IP, assigns a security identity, and compiles a per-pod eBPF program in the kernel.

Step 7: Verify

Cilium provides its own status command that reports the health of every component:

cilium status

# Key line to look for:
# Cluster Pods:  X/X managed by Cilium
# (This must not show 0 -- every pod must be managed)

For a comprehensive validation, run Cilium's built-in connectivity test. This deploys temporary pods across both worker nodes and tests every possible communication path: pod-to-pod on the same node, pod-to-pod across nodes, pod-to-service, and pod-to-external.

cilium connectivity test

Enabling Hubble: Traffic Observability

Hubble is Cilium's built-in observability platform. It taps into the eBPF data path to provide real-time visibility into every connection in your cluster: which pods are communicating, what protocols they use, and whether traffic was allowed or denied by a network policy. For organisations operating under regulatory compliance or security audit requirements, Hubble provides the evidence trail that traditional networking cannot.

# Enable Hubble with the web dashboard
cilium hubble enable --ui

# Wait for the Hubble pods to start
kubectl get pods -n kube-system -l k8s-app=hubble-ui

# Forward the UI to a browser-accessible port
kubectl port-forward -n kube-system svc/hubble-ui \
  --address 0.0.0.0 12000:80

# Open in browser: http://10.254.0.11:12000

The Hubble UI displays a live service map showing connections between pods, colour-coded by status (green for allowed, red for dropped). You can drill into individual flows to see source identity, destination identity, protocol, port, and the verdict. This transforms Kubernetes networking from a black box into a transparent, auditable system.

Alternative: Clean Cilium Installation (No Flannel Stage)

If you know from the outset that your deployment requires Cilium's capabilities, you can skip Flannel entirely. The process is identical to Stage 1, with two differences: the Pod CIDR range and the CNI installation step.

# Initialise with a Cilium-compatible range
sudo kubeadm init \
  --pod-network-cidr=172.16.0.0/16 \
  --apiserver-advertise-address=10.254.0.11

# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install Cilium (instead of Flannel)
cilium install

# Apply the Debian 13 fixes (CNI path + cgroup root)
# Then join workers using kubeadm join
# Verify with: cilium status

Taking It to Production

Everything in this guide so far works identically whether your servers are in a rack under your desk, a colocation facility in Dubai, or a dedicated bare-metal provider anywhere in the world. That is one of Kubernetes' core strengths: the abstraction layer is complete. The same manifests, the same networking model, and the same operational tooling apply regardless of where the hardware lives.

This section covers the infrastructure patterns and considerations that matter when you move beyond a lab environment into production deployments.

Bare-Metal and Private Cloud

For organisations deploying Kubernetes on their own infrastructure, the architecture maps directly to your existing network. The node network (10.254.0.0/24 in our example) maps to your data centre VLAN or private network segment. The pod network is entirely virtual and managed by Cilium's eBPF data plane.

Most bare-metal providers offer private networking between servers, either as a built-in feature or as an add-on. This is the ideal foundation for Kubernetes because API server communication, etcd replication, and Cilium's overlay tunnels stay on a high-speed private link, never touching the public internet.

Where the provider gives you Layer 2 control over the private network, Cilium can be configured for Native/Direct Routing instead of VXLAN encapsulation. This eliminates per-packet overhead and reduces latency, which is meaningful for real-time communications, financial transaction processing, and high-throughput data pipelines.
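In recent Cilium releases this is typically driven through Helm values passed at install time. The fragment below is a sketch only; verify the option names against the documentation for your Cilium version, and substitute your own pod range.

```yaml
# Hypothetical values.yaml fragment for native routing on an L2 segment
routingMode: native                     # skip VXLAN encapsulation entirely
ipv4NativeRoutingCIDR: "172.16.0.0/16"  # pod range to route natively
autoDirectNodeRoutes: true              # install per-node routes automatically
```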

Geographic Redundancy

Cilium's ClusterMesh feature connects multiple Kubernetes clusters across different locations into a single logical network. Pods in one cluster can discover and communicate with services in another seamlessly, without application-level changes.

For organisations with infrastructure distributed across multiple data centres or regions, this enables active-active deployments where traffic is served from the nearest healthy cluster with automatic failover. For GCC-based organisations operating under regulatory requirements around data residency and disaster recovery, ClusterMesh provides the technical foundation for compliance without the complexity of custom DNS or load-balancer failover configurations.

IP Technics designs and implements multi-cluster architectures with ClusterMesh for clients who need geographic redundancy, whether across UAE facilities, between GCC countries, or spanning international regions.

Migrating from Public Cloud to Private Infrastructure

One of the most compelling advantages of Kubernetes is portability. The same YAML manifests, the same container images, and the same networking model work identically whether the cluster runs on AWS, Azure, GCP, or your own bare-metal servers. This makes Kubernetes the ideal vehicle for organisations looking to reduce their public cloud spend by moving workloads to infrastructure they control.

Many organisations in the GCC region initially adopted hyperscaler platforms for speed of deployment, only to find themselves facing escalating costs, limited control over data residency, and vendor lock-in that makes future migrations painful. Kubernetes on private infrastructure offers a path back to full control without sacrificing the operational benefits of containerisation.

The migration path is well-defined: replicate the cluster architecture on private infrastructure, transfer container images to a private registry, apply the same deployment manifests, and redirect traffic. With Cilium's ClusterMesh, it is even possible to run a hybrid configuration during the transition, with workloads split across the public cloud and private infrastructure until the cutover is complete.

IP Technics: Kubernetes Consulting and Migration

IP Technics designs, deploys, and manages Kubernetes infrastructure for organisations across the UAE and GCC. Our services include greenfield cluster builds on bare-metal and private cloud, migrations from public cloud platforms (AWS, Azure, GCP) to private infrastructure, Cilium networking and security policy design, multi-cluster geographic redundancy with ClusterMesh, and ongoing managed operations. Our clients typically achieve significant cost reductions while gaining full sovereignty over their data and network. Whether you are exploring Kubernetes for the first time, upgrading from a basic lab deployment to production, or moving off a hyperscaler, we bring over 15 years of infrastructure expertise to every engagement. Reach out at iptechnics.com.

Quick Reference

Key Configuration Files

File / Directory                Purpose
/etc/kubernetes/manifests/      Static pod manifests (API server, etcd, scheduler)
/etc/kubernetes/pki/            Cluster SSL certificates and CA trust chain
~/.kube/config                  kubectl credentials (copied from admin.conf)
/etc/containerd/config.toml     Container runtime config (SystemdCgroup = true)
/etc/cni/net.d/                 CNI configuration (Flannel or Cilium)
/opt/cni/bin/ + /usr/lib/cni/   CNI binaries (symlink required on Debian 13)
/etc/modules-load.d/k8s.conf    Kernel modules: overlay + br_netfilter
/etc/sysctl.d/k8s.conf          IP forwarding and bridge filter settings
/var/lib/kubelet/config.yaml    Kubelet agent configuration
/run/flannel/subnet.env         Flannel subnet assignment (Stage 1 only)

Essential Troubleshooting Commands

Command                                     When to Use
kubectl get nodes                           Verify all nodes are Ready
kubectl get pods -A -o wide                 See all pods, their IPs, and hosting node
kubectl describe pod <name>                 Check Events section for startup errors
kubectl logs <pod> -n <ns>                  View application or system pod logs
journalctl -u kubelet -n 50                 Kubelet errors (swap, CNI, cgroup issues)
cilium status                               Cilium health and managed pod count
cilium connectivity test                    Full connectivity validation across all paths
free -h                                     Verify swap is 0 (check after every reboot)
sudo crictl ps                              Check running containers at the runtime level
kubeadm token create --print-join-command   Generate a fresh worker join token