GloriousFlywheel Honey On-Prem Rollout 2026-04-16
Snapshot date: 2026-04-16
Historical rollout issue: #208
Merged rollout baseline: PR #209
Purpose
Capture the authoritative on-prem deployment target for GloriousFlywheel after the reset tranche closed.
This note is the shortest execution-facing answer to:
- what cluster GloriousFlywheel should target now
- how node placement should be reasoned about
- what bootstrap and validation steps are expected around that target
- what still blocks a real local apply from this repo
Canonical Target
The on-prem runner-stack target is:
Move runner/control workloads into the `honey` cluster only, keep them tailnet-only, put durable backing state on `bumble`, and use `sting` only for explicit stateless compute capacity.
Interpretation:
- there is one Kubernetes cluster for this rollout: `honey`
- `bumble` and `sting` are not separate clusters
- `bumble` and `sting` matter as node-role and placement inputs inside `honey`, not as alternative kubeconfig targets
Cluster And Access Truth
Primary on-prem target:
- physical cluster: `honey`
- primary context: `honey`
- primary kubeconfig: `~/.kube/kubeconfig-honey.yaml`
- current API server: `https://100.113.89.12:6443`
- management model: private, tailnet-first, no new public management path
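The access facts above can be sanity-checked locally before anything is applied. A minimal sketch, assuming the kubeconfig follows the standard `clusters[].cluster.server` shape; the heredoc here is only a stand-in for the real `~/.kube/kubeconfig-honey.yaml`:

```shell
# Sketch: extract the API server from a kubeconfig and compare it against
# the expected endpoint. The heredoc stands in for
# ~/.kube/kubeconfig-honey.yaml; only the server line matters here.
kubeconfig_sample=$(cat <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://100.113.89.12:6443
  name: honey
EOF
)
expected="https://100.113.89.12:6443"
server=$(printf '%s\n' "$kubeconfig_sample" | awk '$1 == "server:" {print $2; exit}')
[ "$server" = "$expected" ] && echo "API server matches" || echo "mismatch: $server"
```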
Residual compatibility target:
- transitional cloud context:
tinyland-civo-dev - transitional cloud kubeconfig:
~/.kube/kubeconfig-civo.yaml - role: residual or edge-only compatibility, not deployment authority
Node Roles And Placement Bias
`honey`
- control plane
- primary state anchor
- operator-facing services
`bumble`
- durable bulk storage
- stateful backends
- OpenEBS ZFS-backed volumes
`sting`
- stateless compute expansion
- protected scheduling window
- explicit low-blast-radius capacity
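The placement bias above could be expressed as ordinary Kubernetes scheduling constraints. A hypothetical manifest fragment, assuming the nodes carry their hostnames in the standard `kubernetes.io/hostname` label; the Deployment name and image are invented for illustration and do not come from this note:

```yaml
# Hypothetical fragment: bias a stateless runner workload toward sting via
# preferred node affinity. All names here are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: runner-stateless-example   # illustrative, not from this note
spec:
  replicas: 1
  selector:
    matchLabels:
      app: runner-stateless-example
  template:
    metadata:
      labels:
        app: runner-stateless-example
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values: [sting]
      containers:
      - name: runner
        image: example/runner:latest   # placeholder image
```

Preferred (rather than required) affinity keeps `sting` as the low-blast-radius first choice while still letting the scheduler fall back if that node is drained.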
Tailnet Service Model
Private service exposure should stay tailnet-only. Current examples include:
- `grafana-observability.taila4c78d.ts.net:3000`
- `loki-observability.taila4c78d.ts.net:3100`
- `tempo-observability.taila4c78d.ts.net:3200`
- `otlp-observability-grpc.taila4c78d.ts.net:4317`
- `bazel-cache-grpc.taila4c78d.ts.net:9092`
The Kubernetes API should remain private and tailnet-only as well.
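The endpoint list above can be enumerated and probed from a tailnet-joined machine. A sketch that splits each `host:port` pair; the actual reachability probe (for example `nc -z`) is left as a comment because it only works on the tailnet:

```shell
# Sketch: walk the tailnet-only endpoints and split host from port.
# A real probe (e.g. nc -z "$host" "$port") only works from a machine
# joined to the tailnet, so it stays commented out here.
endpoints="grafana-observability.taila4c78d.ts.net:3000
loki-observability.taila4c78d.ts.net:3100
tempo-observability.taila4c78d.ts.net:3200
otlp-observability-grpc.taila4c78d.ts.net:4317
bazel-cache-grpc.taila4c78d.ts.net:9092"
endpoint_count=0
for ep in $endpoints; do
  host=${ep%:*}    # strip everything after the last colon
  port=${ep##*:}   # keep only the part after the last colon
  echo "would probe $host on port $port"
  # nc -z "$host" "$port"   # tailnet-only
  endpoint_count=$((endpoint_count + 1))
done
```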
Operator Bootstrap And Validation
Adjacent cluster-ops bootstrap currently lives outside this repo and should be treated as the preflight step before local GloriousFlywheel deployment:
```shell
just host-secrets-materialize
source /tmp/blahaj-host-secrets.env
```
Relevant local validation from that operator surface:
```shell
just cluster-access-audit-local
just onprem-baseline-check
```
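The preflight order above can be made fail-fast with a small guard around the secrets step. A sketch: the `/tmp/blahaj-host-secrets.env` path comes from this note, but the helper name `require_env_file` is hypothetical and the `just` recipes live outside this repo:

```shell
# Sketch: refuse to continue when the materialized secrets env file is
# missing. The helper name is hypothetical; the /tmp path comes from the
# bootstrap step above.
require_env_file() {
  if [ -f "$1" ]; then
    . "$1"
    echo "loaded $1"
  else
    echo "missing $1" >&2
    return 1
  fi
}
# Intended order, assuming the just recipes exist in the adjacent repo:
#   just host-secrets-materialize
#   require_env_file /tmp/blahaj-host-secrets.env && just cluster-access-audit-local
preflight=ok
require_env_file /tmp/blahaj-host-secrets.env || preflight=blocked
echo "preflight: $preflight"
```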
Immediate Repo Contract For This Rollout
Within GloriousFlywheel itself, the operator path should be:
- use `config/organization.yaml` logical environments such as `dev` or `prod` while mapping both to the physical `honey` cluster context
- point local kubeconfig access at `~/.kube/kubeconfig-honey.yaml`
- keep deployment and management tailnet-first
- keep Civo examples explicitly classified as residual compatibility only
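Since `config/organization.yaml` is absent from this checkout, its real schema is unknown here; a hypothetical sketch of the mapping intent, with every key name invented for illustration:

```yaml
# Hypothetical sketch only: config/organization.yaml is not present in this
# workspace and its real schema is not shown in this note. All key names
# below are invented to illustrate the logical-env -> physical-cluster intent.
environments:
  dev:
    kube_context: honey
    kubeconfig: ~/.kube/kubeconfig-honey.yaml
  prod:
    kube_context: honey
    kubeconfig: ~/.kube/kubeconfig-honey.yaml
```

The point of the shape, whatever the real schema turns out to be, is that both logical environments resolve to the single physical `honey` context rather than to separate clusters.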
Live Validation Readout
Read-only local validation from this machine on 2026-04-16:
- `kubectl --kubeconfig ~/.kube/kubeconfig-honey.yaml config get-contexts` confirms the `honey` context exists
- `kubectl --context honey get nodes -o wide` succeeded
- all three nodes are `Ready`: `honey`, `bumble`, `sting`
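The node readout can be checked mechanically rather than by eye. A sketch that flags any non-Ready node; only the node names and Ready status come from this note, so the roles, age, and version columns in the sample are invented placeholders:

```shell
# Sketch: fail the check if any node is not Ready. The heredoc stands in
# for: kubectl --context honey get nodes
# Only names and Ready status come from the note; other columns are
# placeholders for shape.
nodes_out=$(cat <<'EOF'
NAME     STATUS   ROLES           AGE   VERSION
honey    Ready    control-plane   1d    v0.0.0
bumble   Ready    <none>          1d    v0.0.0
sting    Ready    <none>          1d    v0.0.0
EOF
)
not_ready=$(printf '%s\n' "$nodes_out" | tail -n +2 | awk '$2 != "Ready" {print $1}')
if [ -z "$not_ready" ]; then
  echo "all nodes Ready"
else
  echo "not Ready: $not_ready"
fi
```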
Current namespace read:
- live namespaces include `nix-cache`, `tailscale`, `tinyland-staging`, `tinyland-dev-production`, `mcp-services`, `searxng`, `seaweedfs`, and `openebs`
- there is no live `arc-systems` namespace yet
- there is no live `arc-runners` namespace yet
- there is no live `runner-dashboard` namespace yet
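The namespace gap above can be expressed as a comparison of expected versus live namespaces. A sketch; the live list is hard-coded from the 2026-04-16 read in this note, where in practice it would come from `kubectl --context honey get ns -o name`:

```shell
# Sketch: compare the runner namespaces this rollout expects against the
# live list from the 2026-04-16 read. In practice the live list would come
# from: kubectl --context honey get ns -o name
live="nix-cache tailscale tinyland-staging tinyland-dev-production mcp-services searxng seaweedfs openebs"
missing=""
for ns in arc-systems arc-runners runner-dashboard; do
  case " $live " in
    *" $ns "*) echo "$ns: present" ;;
    *)         echo "$ns: missing"; missing="$missing $ns" ;;
  esac
done
```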
Current live cache surface:
- the cache runtime is already present in namespace `nix-cache`, not `attic-cache-dev`
- live pods there include `attic`, `attic-gc`, `attic-pg`, `attic-rustfs`, and `bazel-cache`
Implication:
- cluster access is not the blocker anymore
- backend authority and local operator config are still blockers
- repo examples that assume `attic-cache-dev` as the live local cache namespace do not match the current `honey` runtime read
Remaining Blocker
The main remaining blocker is not cluster identity anymore.
It is backend authority:
- the repo supports a generic HTTP backend init contract
- the repo no longer needs GitLab HTTP state as the documented default path
- the backend target direction is now narrowed: environment-owned S3-compatible state on the local `honey` environment, with durable backend data biased toward `bumble`
- but the actual non-legacy backend authority for local and CI convergence is still not named and wired end-to-end
There are also immediate local operator blockers in this workspace:
- `config/organization.yaml` is not present
- no `config/backends/*.hcl` files are present
- no `TF_HTTP_*`, `TOFU_BACKEND_CONFIG_FILE`, or `TOFU_BACKEND_CONFIG_DIR` variables are set
So a real `tofu init` or `tofu apply` from this checkout is still blocked even though cluster access itself is working.
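Those local blockers can be turned into a single preflight check run from the repo root. A sketch: the file paths and env-var names come from this note, and the script only reports the blockers, it does not fix anything:

```shell
# Sketch: report the three local operator blockers named above before
# attempting tofu init. Run from the GloriousFlywheel checkout root.
blocked=0
[ -f config/organization.yaml ] || {
  echo "blocker: config/organization.yaml missing"; blocked=1
}
ls config/backends/*.hcl >/dev/null 2>&1 || {
  echo "blocker: no config/backends/*.hcl files"; blocked=1
}
if [ -z "${TOFU_BACKEND_CONFIG_FILE:-}${TOFU_BACKEND_CONFIG_DIR:-}" ] \
   && ! env | grep -q '^TF_HTTP_'; then
  echo "blocker: no TF_HTTP_*, TOFU_BACKEND_CONFIG_FILE, or TOFU_BACKEND_CONFIG_DIR set"
  blocked=1
fi
if [ "$blocked" -eq 0 ]; then
  echo "local backend preflight clean"
else
  echo "tofu init still blocked"
fi
```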
Until that is chosen, the repo can describe the correct honey target and
operator access model honestly, but it cannot yet claim a final stable local
apply contract.