GloriousFlywheel Honey On-Prem Rollout 2026-04-16

Snapshot date: 2026-04-16

Historical rollout issue: #208

Merged rollout baseline: PR #209

Purpose

Capture the authoritative on-prem deployment target for GloriousFlywheel after the reset tranche closed.

This note is the shortest execution-facing answer to:

  • what cluster GloriousFlywheel should target now
  • how node placement should be reasoned about
  • what bootstrap and validation steps are expected around that target
  • what still blocks a real local apply from this repo

Canonical Target

The on-prem runner-stack target is:

Move runner/control workloads into the honey cluster only, keep them tailnet-only, put durable backing state on bumble, and use sting only for explicit stateless compute capacity.

Interpretation:

  • there is one Kubernetes cluster for this rollout: honey
  • bumble and sting are not separate clusters
  • bumble and sting matter as node-role and placement inputs inside honey, not as alternative kubeconfig targets

Cluster And Access Truth

Primary on-prem target:

  • physical cluster: honey
  • primary context: honey
  • primary kubeconfig: ~/.kube/kubeconfig-honey.yaml
  • current API server: https://100.113.89.12:6443
  • management model: private, tailnet-first, no new public management path

Residual compatibility target:

  • transitional cloud context: tinyland-civo-dev
  • transitional cloud kubeconfig: ~/.kube/kubeconfig-civo.yaml
  • role: residual or edge-only compatibility, not deployment authority
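Because management is tailnet-first, one quick sanity check is that the API server address sits inside the Tailscale CGNAT range. A small sketch (hypothetical helper, not part of the repo or the operator tooling):

```shell
#!/usr/bin/env bash
# is_tailnet_ip: succeeds when an IPv4 address falls inside the Tailscale
# CGNAT range 100.64.0.0/10 (100.64.0.0 - 100.127.255.255).
is_tailnet_ip() {
  local ip="$1" o1 o2 rest
  o1=${ip%%.*}          # first octet
  rest=${ip#*.}
  o2=${rest%%.*}        # second octet
  [ "$o1" -eq 100 ] && [ "$o2" -ge 64 ] && [ "$o2" -le 127 ]
}

# The honey API server from this note:
is_tailnet_ip 100.113.89.12 && echo "100.113.89.12 is a tailnet address"
# A public address should fail the check:
is_tailnet_ip 8.8.8.8 || echo "8.8.8.8 is not a tailnet address"
```

This only checks the address shape, not reachability; actual access still requires being on the tailnet.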

Node Roles And Placement Bias

honey

  • control plane
  • primary state anchor
  • operator-facing services

bumble

  • durable bulk storage
  • stateful backends
  • OpenEBS ZFS-backed volumes

sting

  • stateless compute expansion
  • protected scheduling window
  • explicit low-blast-radius capacity
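The placement bias above can be encoded with standard Kubernetes scheduling primitives. A minimal sketch using the built-in kubernetes.io/hostname label; the workload name and image are placeholders, and sting's protected scheduling window would additionally call for a taint plus explicit tolerations on approved stateless workloads:

```yaml
# Hypothetical StatefulSet fragment: pin durable state to bumble via the
# built-in hostname label, so OpenEBS ZFS-backed volumes land on its disks.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-stateful-backend     # illustrative name, not a real workload
spec:
  serviceName: example-stateful-backend
  selector:
    matchLabels:
      app: example-stateful-backend
  template:
    metadata:
      labels:
        app: example-stateful-backend
    spec:
      nodeSelector:
        kubernetes.io/hostname: bumble   # durable bulk storage node
      containers:
        - name: backend
          image: example/backend:latest  # placeholder image
```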

Tailnet Service Model

Private service exposure should stay tailnet-only. Current examples include:

  • grafana-observability.taila4c78d.ts.net:3000
  • loki-observability.taila4c78d.ts.net:3100
  • tempo-observability.taila4c78d.ts.net:3200
  • otlp-observability-grpc.taila4c78d.ts.net:4317
  • bazel-cache-grpc.taila4c78d.ts.net:9092

The Kubernetes API should remain private and tailnet-only as well.
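A hedged reachability sketch over the endpoints listed above (failures are expected when run off the tailnet; `nc` from netcat is assumed to be available):

```shell
#!/usr/bin/env bash
# Probe each tailnet-only endpoint with a 2-second TCP connect timeout.
endpoints="
grafana-observability.taila4c78d.ts.net:3000
loki-observability.taila4c78d.ts.net:3100
tempo-observability.taila4c78d.ts.net:3200
otlp-observability-grpc.taila4c78d.ts.net:4317
bazel-cache-grpc.taila4c78d.ts.net:9092
"
for ep in $endpoints; do
  host=${ep%:*}      # strip trailing :port
  port=${ep##*:}     # keep only the port
  if nc -z -w 2 "$host" "$port" 2>/dev/null; then
    echo "ok   $ep"
  else
    echo "FAIL $ep (expected if you are off the tailnet)"
  fi
done
```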

Operator Bootstrap And Validation

Adjacent cluster-ops bootstrap currently lives outside this repo and should be treated as the preflight step before local GloriousFlywheel deployment:

just host-secrets-materialize
source /tmp/blahaj-host-secrets.env

Relevant local validation from that operator surface:

just cluster-access-audit-local
just onprem-baseline-check

Immediate Repo Contract For This Rollout

Within GloriousFlywheel itself, the operator path should be:

  1. use config/organization.yaml logical environments such as dev or prod while mapping both to the physical honey cluster context
  2. point local kubeconfig access at ~/.kube/kubeconfig-honey.yaml
  3. keep deployment and management tailnet-first
  4. keep Civo examples explicitly classified as residual compatibility only
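The environment-to-context mapping in step 1 can be sketched as a small helper. This is illustrative only, not repo code, and the civo-legacy name is an assumption:

```shell
#!/usr/bin/env bash
# Hypothetical helper: resolve a logical environment from
# config/organization.yaml to the physical kubeconfig context.
# Both dev and prod deliberately resolve to the single honey cluster.
env_to_context() {
  case "$1" in
    dev|prod)    echo "honey" ;;
    civo-legacy) echo "tinyland-civo-dev" ;;  # residual compatibility only
    *)           echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

env_to_context dev    # -> honey
env_to_context prod   # -> honey
```

Collapsing both logical environments onto one context keeps the config/organization.yaml abstraction intact while making honey the only deployment authority.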

Live Validation Readout

Read-only local validation from this machine on 2026-04-16:

  • kubectl --kubeconfig ~/.kube/kubeconfig-honey.yaml config get-contexts confirms the honey context exists
  • kubectl --context honey get nodes -o wide succeeded
  • all three nodes are Ready:
    • honey
    • bumble
    • sting
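The node-readiness read above can be turned into a repeatable check. A sketch with the live read mocked via printf so it runs without cluster access; against honey the pipe input would come from kubectl --context honey get nodes:

```shell
#!/usr/bin/env bash
# all_nodes_ready: succeed only if every node row reports STATUS Ready.
# Skips the header row; reads the second whitespace-separated column.
all_nodes_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

# Mocked `kubectl get nodes` output matching the 2026-04-16 read:
printf 'NAME    STATUS   ROLES\nhoney   Ready    control-plane\nbumble  Ready    <none>\nsting   Ready    <none>\n' \
  | all_nodes_ready && echo "all nodes Ready"
```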

Current namespace read:

  • live namespaces include nix-cache, tailscale, tinyland-staging, tinyland-dev-production, mcp-services, searxng, seaweedfs, and openebs
  • there is no live arc-systems namespace yet
  • there is no live arc-runners namespace yet
  • there is no live runner-dashboard namespace yet

Current live cache surface:

  • the cache runtime is already present in namespace nix-cache, not attic-cache-dev
  • live pods there include attic, attic-gc, attic-pg, attic-rustfs, and bazel-cache

Implication:

  • cluster access is not the blocker anymore
  • backend authority and local operator config are still blockers
  • repo examples that assume attic-cache-dev as the live local cache namespace do not match the current honey runtime read
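The stale attic-cache-dev examples can be located mechanically. A sketch (hypothetical helper; the docs/ argument is illustrative, and the default is the current tree):

```shell
#!/usr/bin/env bash
# stale_cache_ns_refs: list lines still referencing the attic-cache-dev
# namespace, which no longer matches the live nix-cache runtime on honey.
stale_cache_ns_refs() {
  grep -rn 'attic-cache-dev' "${1:-.}" 2>/dev/null || true
}

stale_cache_ns_refs docs/
```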

Remaining Blocker

The main remaining blocker is not cluster identity anymore.

It is backend authority:

  • the repo supports a generic HTTP backend init contract
  • the repo no longer needs GitLab HTTP state as the documented default path
  • the backend target direction is now narrowed: environment-owned S3-compatible state on the local honey environment, with durable backend data biased toward bumble
  • but the actual non-legacy backend authority for local and CI convergence is still not named and wired end-to-end

There is also one immediate local operator blocker in this workspace:

  • config/organization.yaml is not present
  • no config/backends/*.hcl files are present
  • no TF_HTTP_*, TOFU_BACKEND_CONFIG_FILE, or TOFU_BACKEND_CONFIG_DIR are set

So a real tofu init or apply from this checkout is still blocked even though cluster access itself is working.
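The three local blockers above can be checked mechanically before attempting an init. A sketch (hypothetical helper, not a repo recipe; run from the repo root):

```shell
#!/usr/bin/env bash
# preflight_backend_check: report whether the known local blockers for a
# real `tofu init` are still present in this checkout and environment.
preflight_backend_check() {
  local blocked=0
  [ -f config/organization.yaml ] \
    || { echo "missing config/organization.yaml"; blocked=1; }
  ls config/backends/*.hcl >/dev/null 2>&1 \
    || { echo "no config/backends/*.hcl files"; blocked=1; }
  if ! env | grep -qE '^(TF_HTTP_|TOFU_BACKEND_CONFIG_(FILE|DIR)=)'; then
    echo "no TF_HTTP_* / TOFU_BACKEND_CONFIG_* variables set"
    blocked=1
  fi
  if [ "$blocked" -eq 0 ]; then echo "preflight clear"; else echo "tofu init still blocked"; fi
  return "$blocked"
}

preflight_backend_check || true
```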

Until that backend authority is named and wired, the repo can describe the correct honey target and operator access model honestly, but it cannot yet claim a final stable local apply contract.
