Deploy Master Node

Set up the K3s master (control plane) with Agones for game server orchestration. The master runs the Kubernetes API, scheduler, and Agones controller. Game server pods run on worker nodes that join this master.

┌──────────────────────────────────────────────────────────────────┐
│                        MASTER NODE                               │
│  K3s control plane + Agones controller + Helm                    │
│  Runs: API server, scheduler, Agones allocator                   │
│  Does NOT run game server pods (by default)                      │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│    ┌──────────────┐  ┌──────────────┐  ┌──────────────┐         │
│    │  WORKER #1   │  │  WORKER #2   │  │  WORKER #N   │         │
│    │  K3s agent   │  │  K3s agent   │  │  K3s agent   │         │
│    │  Game server │  │  Game server │  │  Game server │         │
│    │  pods here   │  │  pods here   │  │  pods here   │         │
│    └──────────────┘  └──────────────┘  └──────────────┘         │
│                                                                  │
│    Workers join master via token + IP                             │
│    Add more workers to increase capacity                         │
└──────────────────────────────────────────────────────────────────┘

Prerequisites

  • Server has completed the Setup Server guide (Ubuntu hardened, SSH keys, firewall)
  • Server has completed Production Hardening (recommended)
  • PowerShell or Windows Terminal on your dev machine

Transfer Scripts

MIP includes a ready-made setup script in MIPScripts/kubes/scripts/create_master_node/. Transfer it to your server from your Windows dev machine by running transfer.bat:

MIPScripts\kubes\scripts\transfer.bat

It prompts for the server IP and username, then copies the create_master_node/ folder to the server's home directory via scp.
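Under the hood the transfer is a plain recursive scp; a rough equivalent run from the repo root would look like this (MASTER_IP and the ubuntu user are placeholders for the values you enter at the prompt):

```shell
# Roughly what transfer.bat does: copy the setup folder to the server's home.
# MASTER_IP and ubuntu are placeholders, not values baked into the script.
scp -r MIPScripts/kubes/scripts/create_master_node ubuntu@MASTER_IP:~/
```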


Run the Setup Script

SSH into your master server and run:

ssh ubuntu@MASTER_IP
cd ~/create_master_node
chmod +x setup-k3s-agones.sh
sudo ./setup-k3s-agones.sh

What the script does

  1. Prompts for a hostname — sets the server's hostname (e.g. mip-master-01).
  2. Detects network interface and public IP — auto-discovers the default interface and external IP for K3s networking.
  3. Installs K3s v1.28.5 — downloads and installs K3s with a systemd override that configures the node name, external IP, and flannel interface.
  4. Installs Helm — the Kubernetes package manager, required for Agones.
  5. Installs Agones v1.21.0 — deploys Agones via Helm with custom port mappings:

    Agones service    Port
    Ping HTTP         8001
    Allocator HTTP    8002
    Allocator gRPC    8003
  6. Adds Tailscale IP to K3s TLS SANs — if Tailscale is running, automatically calls add-tailscale-tls-san.sh to add the server's Tailscale IP to the K3s API server certificate. This allows kubectl to connect via Tailscale without a certificate error. If Tailscale is not running at setup time, run sudo ./add-tailscale-tls-san.sh manually after installing it.

  7. Outputs the kubeconfig — prints the full kubeconfig with the server field rewritten to the public IP and all entries renamed from default to the hostname, ready to merge into your local ~/.kube/config.
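The rewrite in step 7 amounts to two text substitutions on the stock K3s kubeconfig. A minimal self-contained illustration — the IP, hostname, and /tmp paths below are placeholders, not the script's actual values:

```shell
# Sketch of the kubeconfig rewrite: swap the loopback API address for the
# public IP and rename every "default" identifier to the node's hostname.
PUBLIC_IP="203.0.113.10"    # placeholder: the detected external IP
NODE_NAME="mip-master-01"   # placeholder: the hostname chosen at setup

# Sample of the stock K3s kubeconfig layout (all entries named "default").
cat > /tmp/k3s.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
users:
- name: default
  user: {}
EOF

# Rewrite the server URL, then rename the "default" entries.
sed -e "s|https://127.0.0.1:6443|https://${PUBLIC_IP}:6443|" \
    -e "s|default|${NODE_NAME}|g" \
    /tmp/k3s.yaml > /tmp/kubeconfig-public.yaml
```

Renaming away from default is what lets the later merge step avoid clobbering other clusters' entries in ~/.kube/config.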

Verify K3s and Agones

After the script finishes, confirm everything is running:

sudo kubectl get nodes

The master node should appear with status Ready.

sudo kubectl get pods -n agones-system -o wide

All Agones pods (agones-controller, agones-allocator, agones-ping, agones-extensions) should reach Running status. This can take a minute or two — rerun until all are up.

sudo kubectl get pods -n agones-system --watch

Use --watch to stream status updates live. Press Ctrl+C once all pods are Running.
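Instead of polling by hand, kubectl can also block until every pod is ready — same namespace as above, timeout illustrative:

```shell
# Block until all pods in agones-system report Ready (give up after 5 minutes).
sudo kubectl wait --namespace agones-system \
  --for=condition=Ready pods --all --timeout=300s
```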


Local Configuration (Scripts 01 → 02 → 03)

After the server setup script finishes, run three scripts in order from your dev machine to configure local kubectl access, generate backend credentials, and set up the cluster pull secret.

Run them from MIPScripts\kubes\config\, or use the Kubes → Config buttons in the MIP Control Panel — they map 1-to-1:

#  Script                           Control Panel button   What it does
1  01-get-kubeconfig.sh             01 Merge Kubeconfig    SSH into master → fetch kubeconfig → swap public IP for Tailscale IP → rename context to hostname → merge into ~/.kube/config → auto-switch to the new context
2  02-apply-and-get-token.sh        02 Get KUBE Tokens     Apply RBAC manifests → create 10-year SA token → print KUBE_* values → optionally write to a .env file → ask whether the backend is on the same machine (overrides KUBE_SERVER to 127.0.0.1 if yes)
3  03-create-gitlab-pull-secret.sh  03 Create Pull Secret  Create GitLab docker-registry secret → patch the default and agones-sdk ServiceAccounts

01 — Get Kubeconfig

bash 01-get-kubeconfig.sh

Or click 01 Merge Kubeconfig in the Control Panel (Kubes → Config).

Prompts:

  • Server IP — hostname or IP of the master (e.g. mip-server if using Tailscale MagicDNS)
  • SSH user — defaults to ubuntu if left blank
  • SSH key path — leave blank to use your default SSH key

The script:

  1. Fetches the server's Tailscale IP (tailscale ip -4) so kubectl routes through Tailscale — port 6443 never needs to be open on UFW
  2. Fetches the server's hostname (e.g. mip-master-01) to name the context — avoids the generic default name that causes stale CA conflicts on re-merge
  3. Runs ~/create_master_node/get_kubeconfig.sh on the server to retrieve the kubeconfig
  4. Replaces the server URL with the Tailscale IP and renames all default entries to the hostname
  5. Merges safely into your local ~/.kube/config — will not drop existing contexts

At the end it automatically switches to the new context (kubectl config use-context mip-master-01). Run kubectl get nodes to confirm.
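The safe merge in step 5 is the standard KUBECONFIG flatten technique; a hedged sketch of the approach (file names are placeholders for whatever the script actually uses):

```shell
# Merge a fetched kubeconfig into ~/.kube/config without dropping existing
# contexts. /tmp/master.yaml is a placeholder for the fetched, rewritten file.
cp ~/.kube/config ~/.kube/config.bak    # keep a backup first
KUBECONFIG=~/.kube/config:/tmp/master.yaml \
  kubectl config view --flatten > /tmp/merged.yaml
mv /tmp/merged.yaml ~/.kube/config
```

Because the fetched config's entries were renamed to the hostname, the flatten cannot silently overwrite another cluster's default entries.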


02 — Apply RBAC and Get Backend Token

Option A — MIP Control Panel (recommended):

Open the Control Panel → Kubes tab → click 02 Get KUBE Tokens.

After the script finishes, the Control Panel pops up a file picker — select your backend .env file and it will patch KUBE_SERVER, KUBE_TOKEN, and KUBE_CA in-place automatically. It will also ask whether the backend runs on the same machine as K3s and override KUBE_SERVER to https://127.0.0.1:6443 if so.

Option B — Terminal:

Open Git Bash in MIPScripts/kubes/config/, then:

bash 02-apply-and-get-token.sh

The script:

  1. Applies the manifests from get-token/serviceaccount.yaml, role.yaml, clusterrole.yaml — which create the mip-backend-sa ServiceAccount with Agones RBAC permissions
  2. Creates a 10-year ServiceAccount token for the backend
  3. Reads KUBE_SERVER, KUBE_TOKEN, and KUBE_CA from the current context and prints them
  4. Asks for a .env file path to write the values into — leave blank to skip
  5. If a path was given, asks "Will the backend run on the same machine as K3s?" — if yes, KUBE_SERVER is automatically overridden to https://127.0.0.1:6443 before writing
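On Kubernetes 1.24+ (which K3s v1.28 is), ServiceAccount token Secrets are no longer auto-created, so long-lived tokens are typically minted via an explicit Secret of type kubernetes.io/service-account-token. A sketch of what step 2 likely applies — the Secret name mip-backend-sa-token is an assumption; only the ServiceAccount name comes from this guide:

```shell
# Create a long-lived token Secret bound to mip-backend-sa, then read it back.
# Secrets of this type are populated by the control plane and do not expire
# on their own (which is what a "10-year token" means in practice).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: mip-backend-sa-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: mip-backend-sa
type: kubernetes.io/service-account-token
EOF

# Extract the token once the token controller has filled it in.
kubectl get secret mip-backend-sa-token \
  -o jsonpath='{.data.token}' | base64 -d
```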

Full interactive output:

--- Copy into .env ---
KUBE_SERVER=https://100.x.x.x:6443
KUBE_TOKEN=eyJ...
KUBE_CA=LS0t...
---

.env file path to update (blank to skip): /home/ubuntu/mip-be/deploy/.env

Will the backend Docker container run on the SAME machine as K3s?
  If yes  → KUBE_SERVER will be set to https://127.0.0.1:6443
  If no   → KUBE_SERVER stays as https://100.x.x.x:6443

Same machine? [y/N]: y

  KUBE_SERVER overridden → https://127.0.0.1:6443

  NOTE: Your docker-compose backend service must use:
    network_mode: host
  Otherwise 127.0.0.1 inside the container points to the container itself,
  not the host. UFW does NOT block loopback — no firewall changes needed.

Updated: /home/ubuntu/mip-be/deploy/.env
  KUBE_SERVER=https://127.0.0.1:6443
  KUBE_TOKEN and KUBE_CA written.

If the file exists, KUBE_SERVER, KUBE_TOKEN, and KUBE_CA lines are replaced in-place. If the file doesn't exist, it is created with just those three keys. You will need these values during Deploy Backend.
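The replace-or-append behavior can be sketched as a grep-then-sed per key — a minimal self-contained illustration assuming plain KEY=VALUE lines (the path and values are placeholders):

```shell
# Sample .env standing in for a real backend config file.
ENV_FILE=/tmp/backend.env
cat > "$ENV_FILE" <<'EOF'
PORT=3000
KUBE_SERVER=https://100.64.0.1:6443
KUBE_TOKEN=old
KUBE_CA=old
EOF

# Replace KEY=... in place if the key exists, append it otherwise.
update_key() {
  key="$1"; value="$2"
  if grep -q "^${key}=" "$ENV_FILE"; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$ENV_FILE"
  else
    printf '%s=%s\n' "$key" "$value" >> "$ENV_FILE"
  fi
}

update_key KUBE_SERVER "https://127.0.0.1:6443"
update_key KUBE_TOKEN  "eyJnew"
update_key KUBE_CA     "LS0tnew"
```

Unrelated keys (PORT above) pass through untouched, matching the "replaced in-place" behavior described.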

Backend on the same server as K3s — use 127.0.0.1

When the backend Docker container and K3s run on the same server, always use KUBE_SERVER=https://127.0.0.1:6443.

Why not the Tailscale IP? Docker bridge-mode containers do not have a Tailscale interface. The container cannot reach a Tailscale IP even when the host can — you will see connect ETIMEDOUT errors.

Why 127.0.0.1 works:

  • 127.0.0.1:6443 is always included in the K3s TLS certificate — no certificate errors
  • Loopback traffic bypasses UFW entirely — no firewall rules to add
  • K3s always binds its API server on 127.0.0.1:6443

Required docker-compose change: Add network_mode: host to the backend service so 127.0.0.1 inside the container resolves to the host, not the container itself:

services:
  backend:
    image: ...
    network_mode: host   # ← required for 127.0.0.1 to reach host K3s
    env_file:
      - .env

When using network_mode: host, the ports: mapping is ignored (the host ports are used directly) and inter-service DNS names like redis and db no longer resolve — update REDIS_ENDPOINT and MONGO_DB_URL to 127.0.0.1 in the .env file.
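For example, a backend .env adjusted for host networking might contain the following — variable names are from this guide, the ports assume stock Redis and MongoDB defaults and are illustrative:

```shell
KUBE_SERVER=https://127.0.0.1:6443
REDIS_ENDPOINT=127.0.0.1:6379           # was redis:6379 under bridge networking
MONGO_DB_URL=mongodb://127.0.0.1:27017  # was mongodb://db:27017
```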


03 — Create GitLab Pull Secret

Create a GitLab Deploy Token first

Before running this script, you need a GitLab Deploy Token with read_registry scope. See Kubernetes Pull Secret for how to create one. The script will prompt you for the username and password.

bash 03-create-gitlab-pull-secret.sh

Or click 03 Create Pull Secret in the Control Panel.

The script:

  1. Prompts for your GitLab Deploy Token username and password
    • Get one at: Project → Settings → Repository → Deploy Tokens → create with read_registry scope
  2. Deletes any existing gitlab secret and creates a fresh docker-registry secret in the default namespace
  3. Patches both the default and agones-sdk ServiceAccounts with imagePullSecrets: [gitlab] — pods in the fleet can pull from registry.gitlab.com without embedding credentials in every fleet YAML
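Steps 2 and 3 correspond to standard kubectl commands — roughly the following, with the credential variables as placeholders for the prompted values:

```shell
# Recreate the registry secret in the default namespace.
kubectl delete secret gitlab --ignore-not-found
kubectl create secret docker-registry gitlab \
  --docker-server=registry.gitlab.com \
  --docker-username="$DEPLOY_TOKEN_USER" \
  --docker-password="$DEPLOY_TOKEN_PASS"

# Attach the secret to both ServiceAccounts so pods pull images
# without credentials embedded in each fleet YAML.
for sa in default agones-sdk; do
  kubectl patch serviceaccount "$sa" \
    -p '{"imagePullSecrets": [{"name": "gitlab"}]}'
done
```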

Two credentials, two purposes

The Personal Access Token (from Server Deployment Prerequisites) is used on your dev machine (or CI) to docker login and push built images to the GitLab registry — it needs write_registry scope. The Deploy Token here is for Kubernetes — it lets K3s pods pull those images at runtime, and only needs read_registry scope. They are separate credentials and serve different purposes.

Tip

Skip this step if your game server images are public.


Get the Join Token

Worker nodes need the master's node token to join. On the master:

sudo cat /var/lib/rancher/k3s/server/node-token

Save this token — you'll need it when setting up each worker.
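On each worker, that token plus the master's address drive the K3s agent install via the K3S_URL and K3S_TOKEN environment variables. A preview sketch — the exact version pin (the +k3s1 suffix) and MASTER_IP are placeholders; the worker guide covers the real script:

```shell
# Join a worker to the master. Run on the worker node, not the master.
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION="v1.28.5+k3s1" \
  K3S_URL="https://MASTER_IP:6443" \
  K3S_TOKEN="<paste node-token here>" sh -
```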


Firewall Ports

Ensure the following ports are open on the master node:

Port   Protocol  Direction              Purpose
6443   TCP       Workers → Master       K3s API server
8472   UDP       All nodes ↔ All nodes  Flannel VXLAN (pod networking)
10250  TCP       Master ↔ Workers       Kubelet metrics
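With UFW (from the Setup Server guide), the matching allow rules look roughly like this — the 10.0.0.0/24 source range is a placeholder for your actual node subnet:

```shell
# Allow cluster traffic from the node subnet only (10.0.0.0/24 is a placeholder).
sudo ufw allow from 10.0.0.0/24 to any port 6443  proto tcp comment 'K3s API'
sudo ufw allow from 10.0.0.0/24 to any port 8472  proto udp comment 'Flannel VXLAN'
sudo ufw allow from 10.0.0.0/24 to any port 10250 proto tcp comment 'Kubelet'
```

Restricting by source range keeps these ports closed to the public internet while letting cluster nodes talk freely.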

Tip

If all nodes are on the same Tailscale network, inter-node traffic is already encrypted and routed privately. These ports are automatically accessible over the Tailscale interface.


Next Steps

  1. Deploy worker nodes to add capacity for game server pods.
  2. Install Docker on the server that will run the backend.
  3. Deploy the backend via Docker Compose — use the KUBE values from above.