Getting Started
New adopter? Start with the Adoption Quickstart. This guide covers internal development workflow.
This guide walks through the current local-first GloriousFlywheel operator flow.
Current On-Prem Shape
The current on-prem target is one Kubernetes cluster:
- physical cluster: `honey`
- primary context: `honey`
- primary kubeconfig: `~/.kube/kubeconfig-honey.yaml`
- current API server: https://100.113.89.12:6443
`bumble` and `sting` are node-role inputs inside `honey`, not separate cluster targets. `tinyland-civo-dev` was a Civo compatibility context (decommissioned April 2026).
Primary Path
The preferred path today is:
- run OpenTofu locally from this repo
- keep stack tfvars in `tofu/stacks/<stack>/<env>.tfvars`
- provide backend config with `TOFU_BACKEND_CONFIG_FILE`, `TOFU_BACKEND_CONFIG_DIR`, or `TF_HTTP_*`
- use `just tofu-init-gitlab-legacy <stack>` only for legacy GitLab backend compatibility
Overlay and GitLab-first stories still exist in the repo, but they are no longer the primary onboarding path.
Prerequisites
You need:
- Nix with flakes enabled
- `direnv` or willingness to run `nix develop`
- kubeconfig access to your target cluster
- backend credentials or backend config for the state authority you use
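Before cloning, you can sanity-check the tool prerequisites with a small shell loop. This is only a sketch: it reports which tools are missing from PATH, and `direnv` is optional if you plan to run `nix develop` directly.

```shell
# Report which prerequisite tools are available on PATH.
# direnv is optional when you invoke `nix develop` yourself.
for tool in nix direnv kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```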
Clone And Enter The Dev Shell
git clone https://github.com/tinyland-inc/GloriousFlywheel.git
cd GloriousFlywheel
direnv allow
If you prefer not to use direnv:
nix develop
Configure Organization And Environment
Copy the example config:
cp config/organization.example.yaml config/organization.yaml
cp .env.example .env
Update config/organization.yaml with:
- logical environment names
- physical cluster mapping
- kubeconfig context names
- ingress domains
- namespaces you actually use
- optional GitLab compatibility settings only if you still need them
Minimal example:
organization:
  name: myorg
  clusters:
    - name: dev
      role: development
      physical_cluster: honey
      domain: dev.example.com
      context: honey
      kubeconfig_path: ~/.kube/kubeconfig-honey.yaml
    - name: prod
      role: production
      physical_cluster: honey
      domain: example.com
      context: honey
      kubeconfig_path: ~/.kube/kubeconfig-honey.yaml
Both logical environments can target the same physical cluster. The difference between them is deployment intent, domains, state separation, and tfvars, not a different Kubernetes control plane.
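The shared-cluster point can be checked mechanically. This sketch writes a condensed copy of the minimal example to a temporary file and counts how many logical environments resolve to the `honey` context; the file path and YAML shape come from the example above, not from any schema guarantee.

```shell
# Both logical environments should resolve to the same kubeconfig context.
cat > /tmp/organization-example.yaml <<'EOF'
organization:
  name: myorg
  clusters:
    - name: dev
      context: honey
    - name: prod
      context: honey
EOF
grep -c 'context: honey' /tmp/organization-example.yaml
```

For the two-environment example this prints `2`: one physical control plane, two deployment intents.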
Choose A Backend Init Path
Current backend truth:
- the repo currently initializes stacks through a generic HTTP backend contract
- that is the supported transitional path for the `honey` rollout
- the target direction is environment-owned S3-compatible state on `honey` once that backend family is implemented in active stack code
Preferred: Backend HCL Files
Create one backend file per stack and environment:
mkdir -p config/backends
just tofu-backend-scaffold-s3 attic
just tofu-backend-scaffold-s3 arc-runners
Those files become the implicit local default for `just tofu-init <stack>`.
For the four active stacks on current main, that now means the live
S3-compatible backend family. HTTP scaffolds remain compatibility-only for
archived or external legacy paths, not the active platform contract.
Do not point these files at an assumed future RustFS/S3 target unless that endpoint is already real in your environment. The adjacent `../blahaj` program still treats RustFS-backed S3 state as post-baseline work, and keeps historical `tofu-state/` out of the live cache object store on purpose.
If you want to materialize the live S3-compatible backend file for one of the active stacks:
just tofu-backend-scaffold-s3 runner-dashboard
That writes the right backend-family scaffold for runner-dashboard on current main. All four active stacks now use `backend "s3"` directly.
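For orientation, a materialized S3-compatible backend file might look roughly like this. This is a hypothetical sketch, not the scaffold's literal output: the exact option set depends on your endpoint, and path-style addressing options in particular vary between OpenTofu versions.

```hcl
# Hypothetical config/backends file for the attic stack; every value
# here is illustrative and must match your real endpoint and bucket.
bucket   = "gloriousflywheel-state"
key      = "attic/terraform.tfstate"
region   = "us-east-1"
endpoint = "http://s3.example.com:9000"
use_path_style              = true
skip_credentials_validation = true
skip_region_validation      = true
```

Credentials should come from the environment rather than being committed in this file.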
Compatibility Alternative: Raw TF_HTTP_*
Export the backend coordinates directly:
export TF_HTTP_ADDRESS=https://state.example.com/terraform/runner-dashboard-dev
export TF_HTTP_LOCK_ADDRESS=https://state.example.com/terraform/runner-dashboard-dev/lock
export TF_HTTP_UNLOCK_ADDRESS=https://state.example.com/terraform/runner-dashboard-dev/lock
export TF_HTTP_USERNAME=operator
export TF_HTTP_PASSWORD=...
If those are already set and you are repairing a legacy HTTP backend path, you can stamp them into a local backend file:
just tofu-backend-materialize-http <legacy-stack>
Migration Prep: S3-Compatible Backend File
If you already have a real environment-owned S3-compatible endpoint and want to capture it into the local backend file ahead of stack cutover:
export TOFU_BACKEND_S3_ENDPOINT=http://s3.example.com:9000
export TOFU_BACKEND_S3_BUCKET=gloriousflywheel-state
export TOFU_BACKEND_S3_ACCESS_KEY=...
export TOFU_BACKEND_S3_SECRET_KEY=...
just tofu-backend-materialize-s3 arc-runners
All four active stacks now use this S3-compatible path directly on main.
For the current proven `honey` baseline, `ENV=dev` maps to these live S3 keys:
- `attic` -> `attic/terraform.tfstate`
- `arc-runners` -> `arc-runners/terraform.tfstate`
- `gitlab-runners` -> `tinyland-infra/gitlab-runners/terraform.tfstate`
- `runner-dashboard` -> `tinyland-infra/runner-dashboard/terraform.tfstate`
If you are preparing a different environment or a different key layout, set `TOFU_BACKEND_S3_KEY` explicitly before running the helper.
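For example, a hypothetical staging layout might look like this; the key value is purely illustrative, and only the `TOFU_BACKEND_S3_KEY` variable itself comes from the helper's contract.

```shell
# Hypothetical non-default key layout for a staging environment.
export TOFU_BACKEND_S3_KEY=staging/runner-dashboard/terraform.tfstate
echo "state key: $TOFU_BACKEND_S3_KEY"
```

Then run `just tofu-backend-materialize-s3 runner-dashboard` as shown above to stamp it into the backend file.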
You can inspect the current contract for one stack with:
just tofu-state-contract runner-dashboard
Legacy Compatibility: GitLab HTTP State
If you still store tofu state in GitLab-managed HTTP state:
just tofu-backend-materialize-gitlab-legacy <legacy-stack>
export TF_HTTP_PASSWORD=glpat-...
just tofu-init-gitlab-legacy <legacy-stack>
That path is compatibility-only. New local work on current main should prefer the S3-compatible backend file or `TOFU_BACKEND_S3_*`. The materialize helper requires `gitlab.project_id` in `config/organization.yaml`. Archived mirrors like `tinyland/gf-overlay` are not enough by themselves; this path only works when a real GitLab project owns OpenTofu state.
Set Kubeconfig Access
export KUBECONFIG=~/.kube/kubeconfig-honey.yaml
export KUBE_CONTEXT=honey
kubectl --context honey cluster-info
kubectl --context honey get nodes -o wide
The Kubernetes API should remain private and tailnet-only.
Create Stack Tfvars
Copy the example tfvars for the stacks you intend to run:
cp tofu/stacks/attic/terraform.tfvars.example tofu/stacks/attic/dev.tfvars
cp tofu/stacks/arc-runners/terraform.tfvars.example tofu/stacks/arc-runners/dev.tfvars
cp tofu/stacks/runner-dashboard/terraform.tfvars.example tofu/stacks/runner-dashboard/dev.tfvars
Set the fields that actually define your environment:
- `cluster_context`
- `ingress_domain`
- namespaces
- image references where required
- GitHub App or GitLab OAuth/provider inputs as needed by the chosen stack
For arc-runners, the local operator path also includes `<env>-policy.tfvars` before local overrides and `<env>-extra-runner-sets.tfvars` when present, so the committed ARC baseline and additive lanes such as `tinyland-nix-heavy` are not silently dropped from local plan/apply.
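As a sketch, a `dev.tfvars` might contain fields like these. The variable names below are illustrative and vary per stack; the committed `terraform.tfvars.example` for each stack is the authoritative list.

```hcl
# Illustrative dev.tfvars sketch; not a literal copy of any stack's
# example file. Check tofu/stacks/<stack>/terraform.tfvars.example.
cluster_context  = "honey"
ingress_domain   = "dev.example.com"
namespace        = "runner-dashboard"
image_repository = "registry.example.com/runner-dashboard"
```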
Deploy In Order
1. Cache Platform
ENV=dev just tofu-preflight attic
ENV=dev just tofu-init attic
ENV=dev just tofu-plan attic
ENV=dev just tofu-apply attic
2. GitHub ARC Runners
Use this if your current goal is GitHub Actions runner capacity.
ENV=dev just tofu-preflight arc-runners
ENV=dev just tofu-init arc-runners
ENV=dev just tofu-plan arc-runners
ENV=dev just tofu-apply arc-runners
3. Runner Dashboard
ENV=dev just tofu-preflight runner-dashboard
ENV=dev just tofu-init runner-dashboard
ENV=dev just tofu-plan runner-dashboard
ENV=dev just tofu-apply runner-dashboard
4. Legacy GitLab Runners
Only if you still need the GitLab runner surface:
ENV=dev just tofu-preflight gitlab-runners
ENV=dev just tofu-init gitlab-runners
ENV=dev just tofu-plan gitlab-runners
ENV=dev just tofu-apply gitlab-runners
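The ordered sequence above can be scripted as a dry run first. This sketch only echoes the commands so you can review the order; the legacy gitlab-runners stack is deliberately left out of the default list, and you would swap `echo` out for real execution only after reviewing each plan.

```shell
# Dry-run sketch of the ordered deploy loop over the non-legacy stacks.
# Remove `echo` to execute for real, one reviewed plan at a time.
for stack in attic arc-runners runner-dashboard; do
  for step in tofu-preflight tofu-init tofu-plan tofu-apply; do
    echo ENV=dev just "$step" "$stack"
  done
done
```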
Verification
Examples:
kubectl --context honey get pods -n nix-cache
kubectl --context honey get pods -n arc-systems
kubectl --context honey get pods -n arc-runners
kubectl --context honey get pods -n runner-dashboard
If you deploy the legacy GitLab runner stack, also verify the expected runner manager pods and registration state in GitLab.
Current Caveats
- `just tofu-preflight <stack>` is the shortest local prerequisite check before init; it validates `organization.yaml`, kubeconfig/context resolution, stack tfvars, and the current backend-init path
- the dashboard still carries GitLab-oriented auth and control-plane surfaces
- the GitLab runner stack is still present, but it is not the primary future runner story
- FlakeHub publication is not yet part of the active operator path
- the current target sentence is: move runner/control workloads into the `honey` cluster only, keep them tailnet-only, put durable backing state on `bumble`, and use `sting` only for explicit stateless compute capacity