
Deploy Worker Node

Add worker nodes to your K3s cluster. Each worker runs game server pods managed by Agones. Repeat this guide for every additional server you want to add — more workers means more capacity for concurrent game sessions.


Prerequisites

Before joining a worker, complete Setup Server and Production Hardening on the new server, and confirm you can SSH into it as root.

Transfer the Join Script

From your Windows dev machine, run transfer_agent.bat from the MIPScripts/kubes/scripts/ directory:

MIPScripts\kubes\scripts\transfer_agent.bat

It prompts for the server IP, then copies the agent_node/ folder to the server as root via scp.
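The transfer step boils down to a single scp. A shell equivalent of what the batch file does might look like this (the exact source and destination paths are assumptions; check transfer_agent.bat for the real ones):

```shell
#!/bin/bash
# Sketch of transfer_agent.bat as a shell script -- illustrative only.
read -p "Worker server IP: " WORKER_IP

# Copy the agent_node/ folder to the server root as the root user
scp -r MIPScripts/kubes/scripts/agent_node "root@${WORKER_IP}:/"
```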


Run the Join Script

SSH into the worker server as root and run:

ssh root@WORKER_IP
cd /agent_node
chmod +x join.sh
./join.sh

What the script does

  1. Prompts for a hostname — sets the worker's hostname (e.g. mip-worker-1, mip-worker-2).
  2. Fixes DNS resolution — disables systemd-resolved and sets Cloudflare DNS (1.1.1.1) to avoid DNS issues common on minimal cloud images.
  3. Installs K3s v1.28.5 — same version as the master, installed in agent-only mode (no control plane components).
  4. Prompts for the master token — paste the node token from sudo cat /var/lib/rancher/k3s/server/node-token on the master.
  5. Prompts for the master IP — the master node's public IP address.
  6. Joins the cluster — starts the K3s agent and connects to the master at https://MASTER_IP:6443.
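The steps above can be sketched as a minimal script. This is a hypothetical reconstruction, not the actual join.sh; it uses the official K3s install script, where setting K3S_URL switches the install to agent-only mode:

```shell
#!/bin/bash
# Hypothetical sketch of join.sh -- the real script may differ.
set -e

# 1. Set the worker's hostname
read -p "Hostname for this worker (e.g. mip-worker-1): " WORKER_HOSTNAME
hostnamectl set-hostname "$WORKER_HOSTNAME"

# 2. Disable systemd-resolved and point DNS at Cloudflare
systemctl disable --now systemd-resolved
rm -f /etc/resolv.conf
echo "nameserver 1.1.1.1" > /etc/resolv.conf

# 4-5. Collect the master token and IP
read -p "Master node token: " K3S_TOKEN
read -p "Master node IP: " MASTER_IP

# 3 + 6. Install K3s v1.28.5 in agent mode and join the cluster.
# K3S_URL being set makes the installer run as an agent (no control plane).
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION="v1.28.5+k3s1" \
  K3S_URL="https://${MASTER_IP}:6443" \
  K3S_TOKEN="$K3S_TOKEN" \
  sh -s -
```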

Verify

From your dev machine (or the master node):

kubectl get nodes

You should see the master and the new worker:

NAME           STATUS   ROLES                  AGE   VERSION
mip-master     Ready    control-plane,master   10m   v1.28.5+k3s1
mip-worker-1   Ready    <none>                 30s   v1.28.5+k3s1

If the worker shows NotReady, wait 30 seconds and check again — it takes a moment for the node to fully register and pull system containers.
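Instead of polling by hand, you can block until the node registers (the node name here is an example):

```shell
# Wait up to 2 minutes for the worker to report Ready
kubectl wait --for=condition=Ready node/mip-worker-1 --timeout=120s
```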


Firewall Ports

Ensure the following ports are open on each worker node:

| Port      | Protocol | Direction             | Purpose                                        |
|-----------|----------|-----------------------|------------------------------------------------|
| 6443      | TCP      | Worker → Master       | K3s API server (outbound)                      |
| 8472      | UDP      | All nodes ↔ All nodes | Flannel VXLAN (pod networking)                 |
| 10250     | TCP      | Master → Worker       | Kubelet metrics                                |
| 7000–8000 | UDP      | Internet → Worker     | Agones game server ports (players connect here) |
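If the worker uses ufw, the inbound rules might look like the following (assuming ufw; adapt to your firewall of choice):

```shell
# Inbound rules on a worker node
sudo ufw allow 8472/udp comment 'Flannel VXLAN'
sudo ufw allow 10250/tcp comment 'Kubelet metrics'
sudo ufw allow 7000:8000/udp comment 'Agones game servers'
# Port 6443 is outbound from the worker, so no inbound rule is needed here
sudo ufw reload
```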

Tip

If all nodes are on the same Tailscale network, only the game server ports (7000:8000/udp) need to be open on the public interface. All inter-node traffic routes privately over Tailscale.


Adding More Workers

Repeat this entire guide for each new server:

  1. Complete Setup Server and Production Hardening
  2. Transfer the join script with transfer_agent.bat
  3. Run join.sh with the same master token and IP

The same join token works for all workers. Each worker picks up fleet definitions and starts receiving game server pod assignments automatically.


After All Workers Are Joined

With the cluster ready, deploy your game server fleet:

  1. Apply the fleet — from the MIP Control Panel's Kubes tab, or manually:

    kubectl apply -f MIPScripts/kubes/fleet.yaml
    kubectl apply -f MIPScripts/kubes/auto-scale.yaml
    
  2. Deploy the backend — use the MIP Control Panel's Backend tab to build and deploy the NestJS backend.

  3. Manage from the Control Panel — switch kubectl context to your remote cluster using Kubes → Context, then manage game servers, fleets, and logs from the GUI.


Troubleshooting

Worker stuck on NotReady: Check whether the worker can reach the master on port 6443: curl -k https://MASTER_IP:6443. If it times out, check the master's firewall rules.

error: server gave HTTP response to HTTPS client: The K3s agent can't establish a TLS connection to the master. Verify the token is correct and that the master's K3s service is running: sudo systemctl status k3s on the master.

DNS resolution failures inside pods: The join script disables systemd-resolved. If pods still can't resolve DNS, check that CoreDNS is running: kubectl get pods -n kube-system | grep coredns.
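A quick way to test in-cluster DNS is a throwaway busybox pod (the pod name and image tag are arbitrary):

```shell
# One-off pod that runs an nslookup against the cluster DNS and is removed on exit
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```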

Worker disappears after reboot: The join script runs k3s agent in the foreground. To make it persist across reboots, enable the K3s agent service:

```bash
sudo systemctl enable --now k3s-agent
```