Deploy Worker Node¶
Add worker nodes to your K3s cluster. Each worker runs game server pods managed by Agones. Repeat this guide for every additional server you want to add — more workers means more capacity for concurrent game sessions.
Prerequisites¶
- A master node is already running with K3s and Agones
- This server has completed the Setup Server guide
- This server has completed Production Hardening (recommended)
- You have the master's join token (see Get the Join Token)
- You have the master's public IP address
Transfer the Join Script¶
From your Windows dev machine, run transfer_agent.bat from the MIPScripts/kubes/scripts/ directory:
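If you prefer to copy the folder by hand, the transfer boils down to an `scp` of the `agent_node/` directory. A sketch (`WORKER_IP` is a placeholder for your server's address; the `/root/` destination path is an assumption):

```bash
# Manual equivalent of transfer_agent.bat, run from MIPScripts/kubes/scripts/
scp -r agent_node/ root@WORKER_IP:/root/agent_node/
```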
It prompts for the server IP, then copies the agent_node/ folder to the server as root via scp.
Run the Join Script¶
SSH into the worker server as root and run:
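A typical session looks like this (`WORKER_IP` is a placeholder; the `/root/agent_node` path assumes the transfer step copied the folder there):

```bash
ssh root@WORKER_IP
cd /root/agent_node
chmod +x join.sh   # make the join script executable if it isn't already
./join.sh
```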
What the script does¶
- Prompts for a hostname — sets the worker's hostname (e.g. `mip-worker-1`, `mip-worker-2`).
- Fixes DNS resolution — disables `systemd-resolved` and sets Cloudflare DNS (1.1.1.1) to avoid DNS issues common on minimal cloud images.
- Installs K3s v1.28.5 — same version as the master, installed in agent-only mode (no control plane components).
- Prompts for the master token — paste the node token from `sudo cat /var/lib/rancher/k3s/server/node-token` on the master.
- Prompts for the master IP — the master node's public IP address.
- Joins the cluster — starts the K3s agent and connects to the master at `https://MASTER_IP:6443`.
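The install-and-join steps above are roughly equivalent to the standard K3s agent install one-liner (a sketch only — the script additionally handles hostname and DNS; `MASTER_IP` and the token are placeholders):

```bash
# Rough manual equivalent of the join step
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION="v1.28.5+k3s1" \
  K3S_URL="https://MASTER_IP:6443" \
  K3S_TOKEN="PASTE_NODE_TOKEN_HERE" \
  sh -
```

Setting `K3S_URL` is what puts K3s into agent-only mode; without it, the installer would stand up a new server (control plane) instead.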
Verify¶
From your dev machine (or the master node):
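Assuming your `kubectl` context points at the cluster, list the nodes:

```bash
kubectl get nodes
```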
You should see the master and the new worker:
```
NAME           STATUS   ROLES                  AGE   VERSION
mip-master     Ready    control-plane,master   10m   v1.28.5+k3s1
mip-worker-1   Ready    <none>                 30s   v1.28.5+k3s1
```
If the worker shows NotReady, wait 30 seconds and check again — it takes a moment for the node to fully register and pull system containers.
Firewall Ports¶
Ensure the following ports are open on each worker node:
| Port | Protocol | Direction | Purpose |
|---|---|---|---|
| 6443 | TCP | Worker → Master | K3s API server (outbound) |
| 8472 | UDP | All nodes ↔ All nodes | Flannel VXLAN (pod networking) |
| 10250 | TCP | Master → Worker | Kubelet metrics |
| 7000–8000 | UDP | Internet → Worker | Agones game server ports (players connect here) |
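On a worker running `ufw`, the inbound rules might look like the sketch below (this assumes `ufw` is your firewall; cloud providers often use security groups instead). Port 6443 is outbound from the worker, so it needs no inbound rule here:

```bash
# Inbound rules on a worker node (sketch, assuming ufw)
sudo ufw allow 8472/udp comment 'Flannel VXLAN (inter-node)'
sudo ufw allow 10250/tcp comment 'Kubelet metrics (from master)'
sudo ufw allow 7000:8000/udp comment 'Agones game server ports'
```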
Tip
If all nodes are on the same Tailscale network, only the game server ports (7000:8000/udp) need to be open on the public interface. All inter-node traffic routes privately over Tailscale.
Adding More Workers¶
Repeat this entire guide for each new server:
- Complete Setup Server and Production Hardening
- Transfer the join script with `transfer_agent.bat`
- Run `join.sh` with the same master token and IP
The same join token works for all workers. Each worker picks up fleet definitions and starts receiving game server pod assignments automatically.
After All Workers Are Joined¶
With the cluster ready, deploy your game server fleet:
1. Apply the fleet — from the MIP Control Panel's Kubes tab, or manually with `kubectl apply`.
2. Deploy the backend — use the MIP Control Panel's Backend tab to build and deploy the NestJS backend.
3. Manage from the Control Panel — switch kubectl context to your remote cluster using Kubes → Context, then manage game servers, fleets, and logs from the GUI.
Troubleshooting¶
Worker stuck on NotReady
: Check if the worker can reach the master on port 6443: curl -k https://MASTER_IP:6443. If it times out, check the master's firewall rules.
error: server gave HTTP response to HTTPS client
: The K3s agent can't establish a TLS connection to the master. Verify the token is correct and the master's K3s service is running: sudo systemctl status k3s on the master.
DNS resolution failures inside pods
: The join script disables systemd-resolved. If pods still can't resolve DNS, check that CoreDNS is running: kubectl get pods -n kube-system | grep coredns.
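One way to test pod DNS end to end is a throwaway busybox pod (assumes cluster access from your current kubectl context):

```bash
# Temporary pod; --rm deletes it when the command exits
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup kubernetes.default
```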
Worker disappears after reboot
: The join script runs k3s agent in the foreground. To make it persist across reboots, enable the K3s agent service:
```bash
sudo systemctl enable --now k3s-agent
```