Quick Start

This is the shortest honest path to a working GloriousFlywheel stack from this repo against the current on-prem target.

Current Target

For the current Tinyland on-prem rollout:

  • physical cluster: honey
  • primary kubeconfig: ~/.kube/kubeconfig-honey.yaml
  • primary context: honey
  • current API server: https://100.113.89.12:6443
  • former compatibility context: tinyland-civo-dev (Civo decommissioned April 2026)
  • former compatibility kubeconfig: ~/.kube/kubeconfig-civo.yaml

Important:

  • honey is the only cluster target for this rollout
  • bumble and sting are node-role inputs inside honey, not separate clusters
  • GloriousFlywheel dev and prod remain logical deploy environments that can both map to honey

Placement bias:

  • control plane and operator-facing services on honey
  • durable backing state on bumble
  • explicit stateless compute expansion on sting
  • no dependence on Civo (decommissioned April 2026)
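
The placement bias above is typically expressed through node labels and selectors. A minimal sketch, assuming a hypothetical node-role label — the real label keys and values on honey may differ:

```yaml
# hypothetical Deployment fragment pinning durable backing state to bumble;
# the label key/value here are assumptions, not confirmed from this repo
spec:
  template:
    spec:
      nodeSelector:
        tinyland.dev/node-role: bumble
```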

1. Enter The Tooling Environment

git clone https://github.com/tinyland-inc/GloriousFlywheel.git
cd GloriousFlywheel
direnv allow

If you prefer not to use direnv:

nix develop

2. Bootstrap On-Prem Access

The current host-secret materialization and baseline checks live in adjacent operator tooling, not in this repo. Run them before you try to deploy from GloriousFlywheel:

just host-secrets-materialize
source /tmp/blahaj-host-secrets.env
just cluster-access-audit-local
just onprem-baseline-check
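
Since /tmp/blahaj-host-secrets.env is produced by external tooling, a small guard before sourcing avoids silently continuing with an empty environment. A sketch — the filename comes from the step above, the helper itself is illustrative:

```shell
# refuse to source a host-secrets env file that is missing or empty
source_env() {
  if [ -s "$1" ]; then
    # shellcheck disable=SC1090
    . "$1" && echo "sourced: $1"
  else
    echo "skipped (missing or empty): $1"
  fi
}

source_env /tmp/blahaj-host-secrets.env
```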

3. Set Cluster Access

export KUBECONFIG=~/.kube/kubeconfig-honey.yaml
export KUBE_CONTEXT=honey
kubectl --context honey cluster-info
kubectl --context honey get nodes -o wide

The Kubernetes API should remain private and tailnet-only. Do not introduce a new public management path.
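
The 100.x address above sits inside the Tailscale CGNAT range (100.64.0.0/10), which gives a quick sanity check that a kubeconfig still points at a tailnet-only endpoint. An illustrative helper — the address is the one quoted above, the check itself is generic:

```shell
# classify an API server host as tailnet (Tailscale CGNAT 100.64.0.0/10) or not
is_tailnet_addr() {
  first=${1%%.*}
  rest=${1#*.}
  second=${rest%%.*}
  if [ "$first" -eq 100 ] && [ "$second" -ge 64 ] && [ "$second" -le 127 ]; then
    echo tailnet
  else
    echo not-tailnet
  fi
}

is_tailnet_addr 100.113.89.12   # the current honey API server address
```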

The Civo cloud path (tinyland-civo-dev) has been decommissioned as of April 2026 and is no longer available.

4. Copy Local Config

cp config/organization.example.yaml config/organization.yaml
cp .env.example .env
mkdir -p config/backends

Update config/organization.yaml with your real domains and any local backend coordinates you use. In the current Tinyland shape, both logical environments can map to context: honey.
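
A minimal sketch of what that mapping could look like — only the honey context value is confirmed by this guide; the surrounding field names are assumptions about the file's shape (the real schema lives in config/organization.example.yaml):

```yaml
# hypothetical shape; copy config/organization.example.yaml for the real schema
environments:
  dev:
    cluster_context: honey
  prod:
    cluster_context: honey
```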

5. Choose Backend Init

The repo currently supports a generic HTTP backend init contract. For the current honey rollout, treat that as a transitional implementation surface. The remaining open decision is the concrete non-legacy backend authority GloriousFlywheel should converge on. The target direction is environment-owned S3-compatible state on the local cluster once that backend is implemented.

Until that is finalized, provide your current backend authority explicitly.

Preferred:

just tofu-backend-scaffold-s3 arc-runners

The scaffolded file becomes the implicit local default for just tofu-init arc-runners. On current main, that is the live S3-compatible backend family for arc-runners.

Do not fill the scaffold from an assumed future RustFS/S3 target unless that endpoint actually exists in your environment. The adjacent ../blahaj rollout still treats RustFS-backed S3 state as a post-baseline migration, and it keeps historical tofu-state/ out of the live cache object store on purpose.

Migration prep:

just tofu-backend-scaffold-s3 arc-runners

This is the same command as the preferred path above: on current main it writes the correct backend-family scaffold for arc-runners, because all four active stacks already use backend "s3" directly.

Alternative:

export TF_HTTP_ADDRESS=https://state.example.com/terraform/runner-dashboard-dev
export TF_HTTP_LOCK_ADDRESS=https://state.example.com/terraform/runner-dashboard-dev/lock
export TF_HTTP_UNLOCK_ADDRESS=https://state.example.com/terraform/runner-dashboard-dev/lock
export TF_HTTP_USERNAME=operator
export TF_HTTP_PASSWORD=...
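
All five variables must be present for the HTTP backend path to work. A quick preflight loop (illustrative, portable sh) that reports anything still unset:

```shell
# report any required TF_HTTP_* variable that is unset or empty
check_http_backend_env() {
  missing=0
  for v in TF_HTTP_ADDRESS TF_HTTP_LOCK_ADDRESS TF_HTTP_UNLOCK_ADDRESS \
           TF_HTTP_USERNAME TF_HTTP_PASSWORD; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "http backend env complete"
  fi
}

check_http_backend_env
```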

If you already have real values exported in your shell and are repairing a legacy HTTP backend path, you can persist them into a local stack file without editing HCL by hand:

just tofu-backend-materialize-http <legacy-stack>

If you already have real target-direction S3-compatible values exported in your shell and want to capture them for later cutover:

export TOFU_BACKEND_S3_ENDPOINT=http://s3.example.com:9000
export TOFU_BACKEND_S3_BUCKET=gloriousflywheel-state
export TOFU_BACKEND_S3_ACCESS_KEY=...
export TOFU_BACKEND_S3_SECRET_KEY=...
just tofu-backend-materialize-s3 arc-runners

All four active stacks now use this S3-compatible path directly on current main.
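
For orientation, a materialized backend "s3" block might look roughly like this — the attribute set and skip_* flags are assumptions about typical OpenTofu configuration against S3-compatible stores, not a dump of what the recipe actually writes; the key matches the live arc-runners key listed below:

```hcl
# illustrative only — the real file is produced by `just tofu-backend-materialize-s3`
terraform {
  backend "s3" {
    bucket = "gloriousflywheel-state"
    key    = "arc-runners/terraform.tfstate"
    region = "us-east-1"               # placeholder; many S3-compatible stores ignore it
    endpoints = {
      s3 = "http://s3.example.com:9000"
    }
    use_path_style              = true # common for MinIO-style endpoints
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    # credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    # or the TOFU_BACKEND_S3_* variables exported above
  }
}
```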

For the current proven honey baseline, ENV=dev uses these live S3 keys:

  • attic -> attic/terraform.tfstate
  • arc-runners -> arc-runners/terraform.tfstate
  • gitlab-runners -> tinyland-infra/gitlab-runners/terraform.tfstate
  • runner-dashboard -> tinyland-infra/runner-dashboard/terraform.tfstate

If you need a different environment or a different key layout, set TOFU_BACKEND_S3_KEY explicitly before materializing the backend file.

To inspect the current stack contract directly:

just tofu-state-contract runner-dashboard

Legacy compatibility:

just tofu-backend-materialize-gitlab-legacy <legacy-stack>
export TF_HTTP_PASSWORD=glpat-...
just tofu-init-gitlab-legacy <legacy-stack>

Use the GitLab path only if you still rely on legacy GitLab-managed HTTP state. This requires gitlab.project_id in config/organization.yaml. Archived mirrors like tinyland/gf-overlay are not enough by themselves; this path only works when a real GitLab project owns OpenTofu state.
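
The gitlab.project_id requirement above would sit in config/organization.yaml roughly like this — the numeric id is a placeholder and the surrounding shape is an assumption:

```yaml
# placeholder id — must be a real GitLab project that owns OpenTofu state
gitlab:
  project_id: 12345678
```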

Before tofu init, run:

ENV=dev just tofu-backend-audit

That gives a one-screen read across all four active stacks and tells you whether the current blocker is missing local scaffolding or the still-open shared backend-authority decision.

6. Copy The Stack Tfvars You Need

cp tofu/stacks/attic/terraform.tfvars.example tofu/stacks/attic/dev.tfvars
cp tofu/stacks/arc-runners/terraform.tfvars.example tofu/stacks/arc-runners/dev.tfvars
cp tofu/stacks/runner-dashboard/terraform.tfvars.example tofu/stacks/runner-dashboard/dev.tfvars

Set cluster_context = "honey" in the tfvars you actually use unless you are intentionally exercising a transitional compatibility path.

For arc-runners, the local operator path also layers in <env>-policy.tfvars (before local overrides) and <env>-extra-runner-sets.tfvars (when present), so the committed ARC baseline and additive lanes such as tinyland-nix-heavy ship as part of the same local rollout.
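
Only cluster_context is prescribed by this guide; everything else in a stack's dev.tfvars depends on that stack's variables (see its terraform.tfvars.example). A minimal sketch:

```hcl
# tofu/stacks/arc-runners/dev.tfvars — minimal illustrative content
cluster_context = "honey"
# remaining variables: copy from terraform.tfvars.example and adjust
```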

7. Deploy In Order

Cache first:

ENV=dev just tofu-preflight attic
ENV=dev just tofu-init attic
ENV=dev just tofu-plan attic
ENV=dev just tofu-apply attic

Then ARC if you need GitHub Actions capacity:

ENV=dev just tofu-preflight arc-runners
ENV=dev just tofu-init arc-runners
ENV=dev just tofu-plan arc-runners
ENV=dev just tofu-apply arc-runners

Then the dashboard:

ENV=dev just tofu-preflight runner-dashboard
ENV=dev just tofu-init runner-dashboard
ENV=dev just tofu-plan runner-dashboard
ENV=dev just tofu-apply runner-dashboard

Deploy gitlab-runners only if you still need the legacy GitLab runner path.
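
The three stacks above all follow the same four-step recipe, so the dependency order can be scripted. This sketch only echoes the commands (drop the echo to actually run them):

```shell
# print the full deploy sequence in dependency order: cache, then ARC, then dashboard
for stack in attic arc-runners runner-dashboard; do
  for step in tofu-preflight tofu-init tofu-plan tofu-apply; do
    echo "ENV=dev just $step $stack"
  done
done
```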

8. Verify

kubectl --context honey get pods -n nix-cache
kubectl --context honey get pods -n arc-systems
kubectl --context honey get pods -n arc-runners
kubectl --context honey get pods -n runner-dashboard

Notes

  • just tofu-plan and related recipes read cluster_context from config/organization.yaml, unless you override with KUBE_CONTEXT
  • the canonical rollout sentence is: move runner/control workloads into the honey cluster only, keep them tailnet-only, put durable backing state on bumble, and use sting only for explicit stateless compute capacity
  • just tofu-preflight <stack> is the shortest local check before init; it validates organization.yaml, kubeconfig/context resolution, stack tfvars, and the currently configured backend-init path
  • the repo supports generic HTTP backend init at the root today; stack-local GitLab entrypoints are legacy compatibility surfaces, and the longer-term target direction is environment-owned S3-compatible state on honey
  • the current live Tinyland cache runtime is in namespace nix-cache, not attic-cache-dev
  • current dashboard release authority is still the GHCR image path, not a standalone Nix artifact or FlakeHub publication path
