Historical - Attic-IaC Host Tree Considerations 2026-04-15
Snapshot date: 2026-04-15
Status
Historical lineage note.
This document captures older host-tree reasoning around the attic-iac
lineage. It remains useful background, but it is not part of the active
GloriousFlywheel execution surface.
Use these notes first for current execution:
- README.md
- gloriousflywheel-program-surface-2026-04-15.md
- gloriousflywheel-post-209-pr-slice-map-2026-04-16.md
Purpose
Capture the deeper host and repo-tree read around the old attic-iac
lineage now living as GloriousFlywheel, with special focus on:
- `neo`
- `yoga`
- `petting-zoo-mini` (`pzm`)
The goal is to stop treating these machines as a flat fleet and instead place them correctly inside the current runner, cache, MCP, and operator model.
Executive Read
neo, yoga, and petting-zoo-mini are not equivalent infrastructure nodes.
Current best read from lab, blahaj, tinyland-reorg, and linux-xr*:
- `neo` is a primary MCP-enabled workstation and operator surface
- `yoga` is a secondary Linux admin and validation workstation, plus an XR rollout validation host
- `petting-zoo-mini` is the only one of the three that is still explicitly an active runner-bearing host
This matters because older attic-iac / cross-forge docs often blur
workstation, consumer-cluster, and runner responsibilities together.
Host Tree
neo
Source: `lab/inventory/hosts.yml`, `lab/inventory/host_vars/macbook-neo.yml`
Current role:
- primary daily-driver workstation
- Tailscale host: `neo.taila4c78d.ts.net`
- MCP enabled
- LSP enabled
- Nix home-manager host
Important read:
- no explicit `gitlab_runner_install: true`
- no evidence in current host vars that it should be modeled as runner substrate
- should be treated as an operator and development surface, not CI capacity
Practical implication:
- `neo` belongs in the workstation and control-surface subtree
- it is relevant to MCP, planning, and repo editing flows
- it should not silently inherit runner-oriented assumptions from older `attic-iac` or multi-cluster docs
yoga
Source:
- `lab/inventory/hosts.yml`
- `lab/inventory/host_vars/yoga.yml`
- `linux-xr-fast/site/docs/yoga.md`
- `lab/cmd/remote-juggler/docs/TRUSTED_WORKSTATION_SETUP.md`
Current role:
- secondary admin and Linux workstation
- Rocky Linux 10.1 laptop
- MCP enabled
- no GitLab runner install
- XR rollout and validation host
- TPM-capable trusted workstation target
Important read:
- `gitlab_runner_install: false`
- `home_manager_config: "jsullivan2@yoga"`
- `linux-xr-fast` treats `yoga` as the next Rocky 10 rollout target
- RemoteJuggler treats `yoga` as the primary TPM testing workstation
Practical implication:
- `yoga` belongs in the Linux operator and validation subtree
- it is important for builder-validation and XR rollout confidence
- it is not currently a runner host and should not be conflated with `honey` or `petting-zoo-mini`
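The keys quoted above imply a host-vars shape roughly like the following. Only `gitlab_runner_install` and `home_manager_config` are actually quoted in this note; the `mcp_enabled` key name and the surrounding layout are assumptions for illustration.

```yaml
# Hypothetical sketch of lab/inventory/host_vars/yoga.yml.
# Only gitlab_runner_install and home_manager_config are quoted in
# this note; mcp_enabled and the overall layout are assumptions.
gitlab_runner_install: false
home_manager_config: "jsullivan2@yoga"
mcp_enabled: true
```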
petting-zoo-mini
Source:
- `lab/inventory/hosts.yml`
- `lab/inventory/host_vars/petting-zoo-mini.yml`
- `lab/AGENTS.md`
- `blahaj/docs/GITLAB_RUNNER_LANDSCAPE.md`
Current role:
- Darwin admin machine and lab access point
- active runner-bearing host
- external SSD runner-path workstream
- MCP-enabled workstation
- Nix-managed user-space host
Important read:
- `gitlab_runner_install: true`
- external SSD runner storage is live under `/Volumes/TinylandSSD/tinyland`
- `lab/AGENTS.md` explicitly says `petting-zoo-mini` is an active Darwin runner
- `blahaj` still documents long-lived runner issues and legacy Liqo residue around this machine
Practical implication:
- `pzm` sits in both the workstation subtree and the runner-capacity subtree
- it is the only one of the three that still directly carries legacy CI blast radius
- disk, Lima, and runner cleanup pressure on `pzm` remain first-class concerns
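The mixed role shows up directly in host vars: unlike `neo` and `yoga`, this machine carries the runner flag. Only `gitlab_runner_install: true` and the SSD path are quoted in this note; the data-dir key name and `mcp_enabled` are assumptions for illustration.

```yaml
# Hypothetical sketch of lab/inventory/host_vars/petting-zoo-mini.yml.
# gitlab_runner_install and the SSD path are quoted in this note;
# the runner data-dir key name and mcp_enabled are assumptions.
gitlab_runner_install: true
gitlab_runner_data_dir: "/Volumes/TinylandSSD/tinyland"
mcp_enabled: true
```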
Repo Tree Read
GloriousFlywheel
Current role:
- authoritative public runner and cache substrate repo
- current repo name and public positioning no longer match much of the checked-in content
Key drift:
- still carries `attic-iac` identity in core files and docs
- still carries older Civo-first and GitLab-state-first assumptions
- still exposes stale cache defaults in `flake.nix`
- still contains the older `tofu/stacks/attic` lineage rather than a clearly current `nix-cache` local-first shape
blahaj
Current role:
- most current runtime truth for the local-first migration
- current source of truth for the `honey`-anchored `nix-cache` runtime posture
Key read:
- `honey` is the live control plane and state anchor
- local `nix-cache` runtime is authoritative
- public Attic HTTP is `https://nix-cache.tinyland.dev`
- private Bazel gRPC is `bazel-cache-grpc.taila4c78d.ts.net:9092`
- the checked-in GloriousFlywheel Attic stack is called out as stale relative to the live runtime
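On the consumer side, a workstation pointing at the live public endpoint (rather than the stale one) would carry something like the following Nix configuration fragment. The endpoint is quoted above; the cache signing key is a placeholder, not a real value.

```
# Hypothetical nix.conf fragment for a workstation consuming the live
# public Attic endpoint. The URL is quoted in this note; the public
# key value is a placeholder assumption.
substituters = https://nix-cache.tinyland.dev
trusted-public-keys = nix-cache.tinyland.dev:<public-key-placeholder>
```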
lab (formerly crush-dots surface)
Current role:
- operator and workstation fleet source of truth
- SSOT for MCP definitions through `vars/mcp_registry.yml`
Key read:
- all three hosts `neo`, `yoga`, and `petting-zoo-mini` are MCP-enabled
- `lab/docs/superpowers/` is active planning space for MCP rollouts
- stale cache defaults still exist in:
  - `roles/nix-bootstrap/defaults/main.yml`
  - `docs/CI_VARIABLES.md`
  - `docs/MULTIARCH_NIX_CI_ARCHITECTURE.md`
- those still point to `https://nix-cache.fuzzy-dev.tinyland.dev`
Practical implication:
- host-level workstation bootstrap and docs still propagate stale `attic-iac`-era cache assumptions
- even if GloriousFlywheel is fixed first, operator machines will keep re-teaching the old endpoint story until `lab` is cleaned up too
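A quick way to confirm whether a checkout still propagates the stale endpoint is a recursive grep over the repo. This is a generic sketch, not a documented lab tooling command.

```shell
# List files under a given root that still reference the stale
# attic-iac era cache endpoint. Generic sketch; not lab tooling.
find_stale_cache_refs() {
  grep -rl 'nix-cache\.fuzzy-dev\.tinyland\.dev' "$1" 2>/dev/null
}
```

Running it against a `lab` checkout should surface at least the three files listed above until they are cleaned up.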
tinyland-reorg
Current role:
- planning and migration memo repo
Key read:
- still describes GloriousFlywheel as the authoritative bzlmod infra project
- still frames the remaining GitLab dependency as the state backend
- still documents `petting-zoo-mini` and `honey` as important self-hosted surfaces
Practical implication:
- useful as migration archaeology
- not current enough to drive runtime decisions by itself
linux-xr and linux-xr-fast
Current role:
- concrete downstream builder consumers and rollout canaries
Key read:
- builds already run on GloriousFlywheel ARC runners
- `yoga` is explicitly the next Rocky rollout target
- these repos are useful for defining the future Linux builder contract
MCP / Superpowers Read
Source of truth:
`lab/vars/mcp_registry.yml`
Important points:
- MCP registry is treated as the authoritative config surface
- `neo`, `yoga`, and `petting-zoo-mini` all have MCP enabled at the inventory layer
- `lab/docs/superpowers/` is live planning space for MCP fleet work
Practical implication:
- if the goal is to use “superpowers harness MCP” for deeper GloriousFlywheel work, the real fleet control plane for that is `lab`, not GloriousFlywheel itself
- GloriousFlywheel should consume this story as part of operator UX, not try to become the MCP SSOT
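This note confirms only that `lab/vars/mcp_registry.yml` is the authoritative surface; it does not show the file's schema. A purely illustrative shape, where every key name is an assumption, might look like:

```yaml
# Purely illustrative shape for lab/vars/mcp_registry.yml.
# Only the file's role as MCP SSOT is stated in this note;
# every key and value below is an assumption.
mcp_servers:
  example-server:
    enabled: true
    hosts:
      - neo
      - yoga
      - petting-zoo-mini
```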
Tree Classification
The cleanest current tree split is:
- core substrate:
  - `GloriousFlywheel`
  - `blahaj` runtime ownership
- operator and workstation control plane:
  - `lab`
  - `neo`
  - `yoga`
  - `petting-zoo-mini`
- downstream builder and canary consumers:
  - `linux-xr`
  - `linux-xr-fast`
Within that tree:
- `neo` = operator workstation leaf
- `yoga` = Linux validation and trusted-workstation leaf
- `petting-zoo-mini` = mixed operator-plus-runner leaf
They should not all be grouped as “runner hosts.”
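One hedged way to make that split explicit at the inventory layer is separate groups, with `petting-zoo-mini` appearing in both. The actual `hosts.yml` layout is not quoted in this note, so the group names and structure below are assumptions.

```yaml
# Hypothetical hosts.yml grouping that encodes the workstation/runner
# split. Group names and layout are assumptions; only the host names
# and their roles come from this note.
workstations:
  hosts:
    neo:
    yoga:
    petting-zoo-mini:
runners:
  hosts:
    petting-zoo-mini:
```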
Concrete Drift To Fix Next
- Fix GloriousFlywheel’s own stale `attic-iac` identity and cache defaults.
- Fix `lab` workstation/bootstrap docs and defaults that still point to `nix-cache.fuzzy-dev.tinyland.dev`.
- Explicitly document that `neo` and `yoga` are MCP-enabled operator surfaces, not runner-capacity nodes.
- Keep `petting-zoo-mini` modeled as mixed-role until the Darwin runner story is intentionally reduced or replaced.
- Use `linux-xr` and `linux-xr-fast` as concrete proof points when defining the Linux builder and validation contract.
Main Conclusion
The old attic-iac mental model flattened too many concerns together:
- cache platform
- runner platform
- overlay distribution
- operator workstations
- consumer validation hosts
The current tree is sharper.
GloriousFlywheel should be treated as the substrate repo, blahaj as current
runtime truth, and lab as the workstation and MCP control plane.
For the three machines in scope:
- `neo` is primarily an operator workstation
- `yoga` is a Linux validation and admin workstation
- `petting-zoo-mini` is still a runner-bearing mixed-role machine
Any reset that does not preserve those differences will keep reproducing the same stale planning and cache drift.