homelab/AGENTS.md
Alvin Wang 4301877f33 Update docs to match live cluster state
Audited the running cluster and fixed all .md files:
- Node info: Fedora 43, Lima (not OrbStack), worker IP 10.0.1.58
- Networking: fixed public/internal hostname tables, all *.dog internals
- Storage: removed Longhorn refs (not deployed), documented hostPath/local-path
- Services: moved Seerr to media chart, utils is Zerobyte only
- Bootstrap: reordered steps, MetalLB/traefik-internal as manual pre-deploy
- Headlamp.md/MetalLB.md: added context and explanations

Made-with: Cursor
2026-04-22 14:59:34 -07:00


```
$ kubectl get nodes -o wide   # AGE column omitted
NAME                   STATUS  ROLES          VERSION       INTERNAL-IP  EXTERNAL-IP  OS-IMAGE                          KERNEL-VERSION          CONTAINER-RUNTIME
localhost.localdomain  Ready   control-plane  v1.34.6+k3s1  10.0.1.2     <none>       Fedora Linux 43 (Server Edition)  6.17.1-300.fc43.x86_64  containerd://2.2.2-bd1.34
lima-mac-worker        Ready   <none>         v1.34.6+k3s1  10.0.1.58    <none>       Ubuntu 25.10                      6.17.0-22-generic       containerd://2.2.2-bd1.34
```
The mac-worker node runs inside a Lima VM on macOS.
A DNS rewrite points `*.internal` to 10.0.1.250, which is the traefik-internal load balancer.
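The doc doesn't name the DNS server, but in a dnsmasq-style resolver that wildcard rewrite would be a one-liner (treat this as a sketch, not the actual config):

```
# dnsmasq-style wildcard: resolve *.internal (and "internal" itself) to traefik-internal's IP
address=/internal/10.0.1.250
```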
`/dogstore/` is an NFS path available on all nodes.
Secrets are managed with sops.

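A sops setup is typically driven by a `.sops.yaml` at the repo root; the rule below is a hypothetical sketch (the path regex and age recipient are placeholders, not taken from this repo):

```yaml
# Hypothetical .sops.yaml — encrypt only the data/stringData fields of Secret manifests
creation_rules:
  - path_regex: .*secret.*\.yaml$
    encrypted_regex: ^(data|stringData)$
    age: age1qqqq...   # placeholder recipient public key
```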
## Load balancers
Two LB implementations coexist: k3s klipper (servicelb) and MetalLB. They are
separated by `loadBalancerClass` so they don't conflict.
- **klipper** handles services with NO `loadBalancerClass`. It creates svclb
DaemonSet pods that bind host ports directly on every node.
- **MetalLB** handles services with `loadBalancerClass: metallb`. Its pool has
`autoAssign: false`, so it only assigns IPs to services that explicitly
request a pool via the `metallb.io/address-pool` annotation.
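Under those rules, opting a Service into MetalLB takes both the class and the pool annotation. A sketch, assuming a pool named `internal` in the `metallb-system` namespace (pool name, selector, and ports are placeholders; only the 10.0.1.250 address and `autoAssign: false` come from this doc):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internal             # placeholder pool name
  namespace: metallb-system
spec:
  addresses:
    - 10.0.1.250/32
  autoAssign: false          # IPs handed out only on explicit request
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-internal
  annotations:
    metallb.io/address-pool: internal   # explicitly request the pool
spec:
  type: LoadBalancer
  loadBalancerClass: metallb            # routes this Service to MetalLB, not klipper
  selector:
    app.kubernetes.io/name: traefik-internal   # placeholder selector
  ports:
    - name: websecure
      port: 443
      targetPort: 8443
```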
| Service | loadBalancerClass | LB | External IPs |
|------------------|-------------------|----------|---------------------------|
| traefik | (none) | klipper | 10.0.1.2, 10.0.1.58 |
| traefik-internal | metallb | MetalLB | 10.0.1.250 |
`loadBalancerClass` is immutable on k8s Services. Changing it requires deleting
the Service first, then redeploying (`kubectl delete svc … && helm upgrade`).