Deploy Master Node¶
Set up the K3s master (control plane) with Agones for game server orchestration. The master runs the Kubernetes API, scheduler, and Agones controller. Game server pods run on worker nodes that join this master.
┌──────────────────────────────────────────────────────────────────┐
│ MASTER NODE │
│ K3s control plane + Agones controller + Helm │
│ Runs: API server, scheduler, Agones allocator │
│ Does NOT run game server pods (by default) │
├──────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ WORKER #1 │ │ WORKER #2 │ │ WORKER #N │ │
│ │ K3s agent │ │ K3s agent │ │ K3s agent │ │
│ │ Game server │ │ Game server │ │ Game server │ │
│ │ pods here │ │ pods here │ │ pods here │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │
│ Workers join master via token + IP │
│ Add more workers to increase capacity │
└──────────────────────────────────────────────────────────────────┘
Prerequisites¶
- Server has completed the Setup Server guide (Ubuntu hardened, SSH keys, firewall)
- Server has completed Production Hardening (recommended)
- PowerShell or Windows Terminal on your dev machine
Transfer Scripts¶
MIP includes a ready-made setup script in MIPScripts/kubes/scripts/create_master_node/. Transfer it to your server from your Windows dev machine by running transfer.bat:
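From a Windows command prompt, the invocation is presumably just the batch file itself (the path below assumes `transfer.bat` sits alongside the `create_master_node/` folder in `MIPScripts\kubes\scripts\`):

```bat
:: On your Windows dev machine (cmd.exe) — path assumed from the folder layout above
cd MIPScripts\kubes\scripts
transfer.bat
```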
It prompts for the server IP and username, then copies the create_master_node/ folder to the server's home directory via scp.
Run the Setup Script¶
SSH into your master server and run:
```shell
ssh ubuntu@MASTER_IP
cd ~/create_master_node
chmod +x setup-k3s-agones.sh
sudo ./setup-k3s-agones.sh
```
What the script does¶
- Prompts for a hostname — sets the server's hostname (e.g. `mip-master-01`).
- Detects network interface and public IP — auto-discovers the default interface and external IP for K3s networking.
- Installs K3s v1.28.5 — downloads and installs K3s with a systemd override that configures the node name, external IP, and flannel interface.
- Installs Helm — the Kubernetes package manager, required for Agones.
- Installs Agones v1.21.0 — deploys Agones via Helm with custom port mappings:

| Agones Service | Port |
|---|---|
| Ping HTTP | 8001 |
| Allocator HTTP | 8002 |
| Allocator gRPC | 8003 |

- Adds Tailscale IP to K3s TLS SANs — if Tailscale is running, automatically calls `add-tailscale-tls-san.sh` to add the server's Tailscale IP to the K3s API server certificate. This allows `kubectl` to connect via Tailscale without a certificate error. If Tailscale is not running at setup time, run `sudo ./add-tailscale-tls-san.sh` manually after installing it.
- Outputs the kubeconfig — prints the full kubeconfig with the `server` field rewritten to the public IP and all names set to the hostname (instead of `default`), ready to merge into your local machine's config.
Verify K3s and Agones¶
After the script finishes, confirm everything is running:
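The node check is the standard kubectl command (reconstructed here, since the exact invocation is not shown in this page):

```shell
kubectl get nodes
```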
The master node should appear with status Ready.
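The pod check below is a reconstruction; it assumes Agones was installed into its default `agones-system` namespace:

```shell
kubectl get pods --namespace agones-system
```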
All Agones pods (agones-controller, agones-allocator, agones-ping, agones-extensions) should reach Running status. This can take a minute or two — rerun until all are up.
Use --watch to stream status updates live. Press Ctrl+C once all pods are Running.
Local Configuration (Scripts 01 → 02 → 03)¶
After the server setup script finishes, run three scripts in order from your dev machine to configure local kubectl access, generate backend credentials, and set up the cluster pull secret.
Run them from MIPScripts\kubes\config\, or use the Kubes → Config buttons in the MIP Control Panel — they map 1-to-1:
| # | Script | Control Panel button | What it does |
|---|---|---|---|
| 1 | `01-get-kubeconfig.sh` | 01 Merge Kubeconfig | SSH into master → fetch kubeconfig → swap public IP for Tailscale IP → rename context to hostname → merge into `~/.kube/config` → auto-switch to the new context |
| 2 | `02-apply-and-get-token.sh` | 02 Get KUBE Tokens | Apply RBAC manifests → create 10-year SA token → print `KUBE_*` values → optionally write to a `.env` file → ask if backend is on the same machine (overrides `KUBE_SERVER` to `127.0.0.1` if yes) |
| 3 | `03-create-gitlab-pull-secret.sh` | 03 Create Pull Secret | Create GitLab `docker-registry` secret → patch `default` + `agones-sdk` ServiceAccounts |
01 — Get Kubeconfig¶
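From Git Bash on your dev machine (invocation assumed from the script name and folder given earlier):

```shell
cd MIPScripts/kubes/config
./01-get-kubeconfig.sh
```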
Or click 01 Merge Kubeconfig in the Control Panel (Kubes → Config).
Prompts:

- Server IP — hostname or IP of the master (e.g. `mip-server` if using Tailscale MagicDNS)
- SSH user — defaults to `ubuntu` if left blank
- SSH key path — leave blank to use your default SSH key

The script:

- Fetches the server's Tailscale IP (`tailscale ip -4`) so kubectl routes through Tailscale — port `6443` never needs to be open on UFW
- Fetches the server's hostname (e.g. `mip-master-01`) to name the context — avoids the generic `default` name that causes stale CA conflicts on re-merge
- Runs `~/create_master_node/get_kubeconfig.sh` on the server to retrieve the kubeconfig
- Replaces the server URL with the Tailscale IP and renames all `default` entries to the hostname
- Merges safely into your local `~/.kube/config` — will not drop existing contexts
At the end it automatically switches to the new context (kubectl config use-context mip-master-01). Run kubectl get nodes to confirm.
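The IP-swap and rename steps can be sketched with plain `sed` — this is a simplified stand-in for what the script does, not its actual code, and the IP, hostname, and sample kubeconfig are placeholders:

```shell
TS_IP="100.64.0.7"        # placeholder — the real script reads `tailscale ip -4`
NODE_NAME="mip-master-01" # placeholder — the real script reads the server hostname

# Sample kubeconfig as fetched from the server (trimmed to the relevant fields)
cat > kubeconfig.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://203.0.113.10:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
EOF

# Swap the public-IP server URL for the Tailscale IP, then rename every
# generic "default" entry to the hostname so re-merges never collide.
sed -e "s|server: https://[^:]*:6443|server: https://${TS_IP}:6443|" \
    -e "s|\bdefault\b|${NODE_NAME}|g" \
    kubeconfig.yaml > kubeconfig.rewritten.yaml

cat kubeconfig.rewritten.yaml
```

The real script additionally merges the rewritten file into `~/.kube/config` rather than leaving it standalone.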
02 — Apply RBAC and Get Backend Token¶
Option A — MIP Control Panel (recommended):
Open the Control Panel → Kubes tab → click 02 Get KUBE Tokens.
After the script finishes, the Control Panel pops up a file picker — select your backend .env file and it will patch KUBE_SERVER, KUBE_TOKEN, and KUBE_CA in-place automatically. It will also ask whether the backend runs on the same machine as K3s and override KUBE_SERVER to https://127.0.0.1:6443 if so.
Option B — Terminal:
Open Git Bash in MIPScripts/kubes/config/, then:
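The invocation is presumably just the script itself:

```shell
./02-apply-and-get-token.sh
```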
The script:
- Applies the manifests from `get-token/` — `serviceaccount.yaml`, `role.yaml`, `clusterrole.yaml` — which create the `mip-backend-sa` ServiceAccount with Agones RBAC permissions
- Creates a 10-year ServiceAccount token for the backend
- Reads `KUBE_SERVER`, `KUBE_TOKEN`, and `KUBE_CA` from the current context and prints them
- Asks for a `.env` file path to write the values into — leave blank to skip
- If a path was given, asks "Will the backend run on the same machine as K3s?" — if yes, `KUBE_SERVER` is automatically overridden to `https://127.0.0.1:6443` before writing
Full interactive output:
```
--- Copy into .env ---
KUBE_SERVER=https://100.x.x.x:6443
KUBE_TOKEN=eyJ...
KUBE_CA=LS0t...
---
.env file path to update (blank to skip): /home/ubuntu/mip-be/deploy/.env

Will the backend Docker container run on the SAME machine as K3s?
  If yes → KUBE_SERVER will be set to https://127.0.0.1:6443
  If no  → KUBE_SERVER stays as https://100.x.x.x:6443
Same machine? [y/N]: y

KUBE_SERVER overridden → https://127.0.0.1:6443

NOTE: Your docker-compose backend service must use:
        network_mode: host
      Otherwise 127.0.0.1 inside the container points to the container itself,
      not the host. UFW does NOT block loopback — no firewall changes needed.

Updated: /home/ubuntu/mip-be/deploy/.env
  KUBE_SERVER=https://127.0.0.1:6443
  KUBE_TOKEN and KUBE_CA written.
```
If the file exists, KUBE_SERVER, KUBE_TOKEN, and KUBE_CA lines are replaced in-place. If the file doesn't exist, it is created with just those three keys. You will need these values during Deploy Backend.
Backend on the same server as K3s — use 127.0.0.1
When the backend Docker container and K3s run on the same server, always use KUBE_SERVER=https://127.0.0.1:6443.
Why not the Tailscale IP?
Docker bridge-mode containers do not have a Tailscale interface. The container cannot reach a Tailscale IP even when the host can — you will see `connect ETIMEDOUT` errors.
Why 127.0.0.1 works:
- `127.0.0.1:6443` is always included in the K3s TLS certificate — no certificate errors
- Loopback traffic bypasses UFW entirely — no firewall rules to add
- K3s always binds its API server on `127.0.0.1:6443`
Required docker-compose change:
Add network_mode: host to the backend service so 127.0.0.1 inside the container resolves to the host, not the container itself:
```yaml
services:
  backend:
    image: ...
    network_mode: host   # ← required for 127.0.0.1 to reach host K3s
    env_file:
      - .env
```
When using network_mode: host, the ports: mapping is ignored (the host ports are used directly) and inter-service DNS names like redis and db no longer resolve — update REDIS_ENDPOINT and MONGO_DB_URL to 127.0.0.1 in the .env file.
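For example, the relevant `.env` lines would change roughly like this — the exact value format depends on your backend, and the ports shown are just the Redis and MongoDB defaults:

```
# Before (bridge networking — compose service names resolve via Docker DNS)
REDIS_ENDPOINT=redis:6379
MONGO_DB_URL=mongodb://db:27017

# After (network_mode: host — service names no longer resolve)
REDIS_ENDPOINT=127.0.0.1:6379
MONGO_DB_URL=mongodb://127.0.0.1:27017
```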
03 — Create GitLab Pull Secret¶
Create a GitLab Deploy Token first
Before running this script, you need a GitLab Deploy Token with read_registry scope. See Kubernetes Pull Secret for how to create one. The script will prompt you for the username and password.
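To run it from the terminal (Git Bash in `MIPScripts/kubes/config/`, path assumed from the table earlier in this page):

```shell
./03-create-gitlab-pull-secret.sh
```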
Or click 03 Create Pull Secret in the Control Panel.
The script:
- Prompts for your GitLab Deploy Token username and password
  - Get one at: Project → Settings → Repository → Deploy Tokens → create with `read_registry` scope
- Deletes any existing `gitlab` secret and creates a fresh `docker-registry` secret in the `default` namespace
- Patches both the `default` and `agones-sdk` ServiceAccounts with `imagePullSecrets: [gitlab]` — pods in the fleet can pull from `registry.gitlab.com` without embedding credentials in every fleet YAML
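The steps above are roughly equivalent to the following kubectl commands (a sketch, not the script's actual code; `$DEPLOY_TOKEN_USER` and `$DEPLOY_TOKEN_PASS` stand in for the prompted credentials):

```shell
# Recreate the registry secret from scratch
kubectl delete secret gitlab --namespace default --ignore-not-found
kubectl create secret docker-registry gitlab \
  --docker-server=registry.gitlab.com \
  --docker-username="$DEPLOY_TOKEN_USER" \
  --docker-password="$DEPLOY_TOKEN_PASS" \
  --namespace default

# Attach it to both ServiceAccounts so pods pull without per-fleet credentials
kubectl patch serviceaccount default --namespace default \
  --patch '{"imagePullSecrets": [{"name": "gitlab"}]}'
kubectl patch serviceaccount agones-sdk --namespace default \
  --patch '{"imagePullSecrets": [{"name": "gitlab"}]}'
```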
Two credentials, two purposes
The Personal Access Token (from Server Deployment Prerequisites) is used on your dev machine (or CI) to docker login and push built images to the GitLab registry — it needs write_registry scope. The Deploy Token here is for Kubernetes — it lets K3s pods pull those images at runtime, and only needs read_registry scope. They are separate credentials and serve different purposes.
Tip
Skip this step if your game server images are public.
Get the Join Token¶
Worker nodes need the master's node token to join. On the master:
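K3s writes the token to a standard location, so the command is:

```shell
sudo cat /var/lib/rancher/k3s/server/node-token
```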
Save this token — you'll need it when setting up each worker.
Firewall Ports¶
Ensure the following ports are open on the master node:
| Port | Protocol | Direction | Purpose |
|---|---|---|---|
| 6443 | TCP | Workers → Master | K3s API server |
| 8472 | UDP | All nodes ↔ All nodes | Flannel VXLAN (pod networking) |
| 10250 | TCP | Master ↔ Workers | Kubelet metrics |
Tip
If all nodes are on the same Tailscale network, inter-node traffic is already encrypted and routed privately. These ports are automatically accessible over the Tailscale interface.
Next Steps¶
- Deploy worker nodes to add capacity for game server pods.
- Install Docker on the server that will run the backend.
- Deploy the backend via Docker Compose — use the KUBE values from above.