GloriousFlywheel Nix Builder Bootstrap And Scaling Options 2026-04-16
Snapshot date: 2026-04-16
Purpose
Separate three questions that were being blurred together:
- how self-hosted Nix lanes bootstrap the toolchain
- where FlakeHub fits in the Linux-builder story
- how memory-heavy Nix jobs get more than the default `8Gi` runner envelope
This note is an execution-facing companion to:
- gloriousflywheel-linux-builder-contract-2026-04-15.md
- gloriousflywheel-clean-derivation-promotion-workflow-2026-04-15.md
- gloriousflywheel-honey-runner-memory-envelope-2026-04-16.md
Current Repo Truth
Direct inspection on 2026-04-16 shows:
- the committed baseline `tinyland-nix` envelope is still `nix_cpu_limit = "4"`, `nix_memory_limit = "8Gi"`
- ARC scale sets currently autoscale runner count through `minRunners` and `maxRunners`
- ARC does not automatically resize `cpu_limit` or `memory_limit` for one runner pod based on job demand
- the module already supports additive runner lanes through `extra_runner_sets`
- the current repo-owned additive Linux canary is `linux-xr-docker`
- self-hosted Nix workflows cannot assume Nix is preinstalled on every `tinyland-nix` runner
- the current proven bootstrap pattern is `DeterminateSystems/determinate-nix-action@v3`
- FlakeHub is still future publication and discovery work, not an implemented primary builder bootstrap or memory-scaling feature
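The bootstrap pattern above can be sketched as a minimal workflow job. The runner label and action pin are repo truth; the job name, checkout step, and build command are illustrative:

```yaml
jobs:
  flake-check:
    # self-hosted Nix lane; Nix is NOT assumed to be preinstalled
    runs-on: tinyland-nix
    steps:
      - uses: actions/checkout@v4
      # workflow-owned bootstrap: the current proven pattern
      - uses: DeterminateSystems/determinate-nix-action@v3
      # only after this step can the job assume a working nix binary
      - run: nix flake check
```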
What FlakeHub Is Not
In current GloriousFlywheel repo truth, FlakeHub is not:
- the source of Nix installation on self-hosted runners
- a substitute for workflow-owned Nix bootstrap
- a mechanism that resizes ARC runner pods on demand
- the current publication authority for promoted GloriousFlywheel outputs
Meaning:
- FlakeHub should be discussed after builder bootstrap and runner-envelope design are already clear
- it is not the first answer to either `tinyland-nix` boot failures or Rust-heavy job OOMs
Actual Scaling Boundary
Current ARC behavior gives GloriousFlywheel one kind of autoscaling:
- horizontal runner-count scaling
It does not currently give:
- vertical per-job memory autosizing
That means an `8Gi` `tinyland-nix` runner can still OOM on `honey` even if:
- the cluster has abundant free RAM
- other runner pods can still scale out
- namespace quotas are not exhausted
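As a rough illustration of that boundary, here is the shape of an ARC scale-set values fragment. The `minRunners`/`maxRunners` fields follow the upstream gha-runner-scale-set chart; the resource numbers mirror the committed baseline, and the surrounding template shape is an assumption:

```yaml
minRunners: 1        # ARC scales runner COUNT between these bounds
maxRunners: 8
template:
  spec:
    containers:
      - name: runner
        resources:
          limits:
            cpu: "4"      # static per-pod envelope:
            memory: 8Gi   # never resized per job by ARC
```

Horizontal scaling adds or removes pods of this fixed shape; no field here grows `memory` for a single hungry job.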
Option Set
Option 1: Raise The Default tinyland-nix Envelope
Change the baseline lane itself, for example from 8Gi to a larger static
limit.
Advantages:
- simplest downstream contract
- no new label for consumers to learn
- no workflow split between ordinary and heavy Nix jobs
Costs:
- every default Nix job becomes more expensive
- pod packing efficiency gets worse
- light Nix jobs pay for heavyweight capacity they do not need
Option 2: Add A Heavier Nix Builder Lane
Introduce an additive lane such as `tinyland-nix-heavy` through
`extra_runner_sets`, with a larger static envelope and optional placement bias.
Advantages:
- preserves `tinyland-nix` as the general-purpose lane
- gives Rust-heavy and memory-spiky jobs an explicit destination
- fits the repo’s existing additive-lane governance model
Costs:
- adds another public runner label
- requires downstream repos to opt in intentionally
- needs a clear promotion and ownership rule
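A minimal sketch of Option 2, rendered as YAML: `extra_runner_sets` and the lane name are repo truth, and `16Gi` matches the later canary figure, but the nested key names and value shapes are assumptions about the module schema:

```yaml
extra_runner_sets:
  tinyland-nix-heavy:        # additive lane; baseline tinyland-nix unchanged
    nix_cpu_limit: "4"       # same CPU shape as the baseline (assumed)
    nix_memory_limit: 16Gi   # larger static envelope for memory-spiky jobs
```

Because the lane is additive, no existing consumer of `tinyland-nix` changes behavior; heavy jobs opt in by switching labels.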
Option 3: Add Placement Policy Without Changing The Envelope
Keep `8Gi`, but place the Nix lane or a heavy variant onto preferred nodes.
Advantages:
- improves contention behavior
- compatible with either the default lane or an additive heavy lane
Costs:
- does not solve workloads that genuinely exceed `8Gi`
- only helps if contention, not envelope size, is the dominant issue
Option 4: Keep Workflow-Level Parallelism Caps
Continue to mitigate memory spikes in downstream repos themselves.
Advantages:
- smallest immediate change
- appropriate for one-off workloads that do not justify a new shared lane
Costs:
- pushes platform ambiguity onto repo consumers
- hides runner-envelope truth instead of fixing it
- weakens the case for a reusable Linux-builder product surface
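Option 4 in workflow terms is simply capping build parallelism inside the job. The flags below are standard Nix and Cargo options; the specific values are illustrative:

```yaml
steps:
  - uses: DeterminateSystems/determinate-nix-action@v3
  # cap concurrent derivation builds so peak memory stays inside 8Gi
  - run: nix build .#default --max-jobs 2
  # for Rust-heavy builds, also cap compilation parallelism
  - run: cargo build --release -j 2
```

This works, but every downstream repo has to rediscover and maintain these caps itself, which is exactly the ambiguity the note argues against.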
Recommended Direction
Current recommended direction for GloriousFlywheel:
- keep workflow-owned Nix bootstrap as the contract on self-hosted Nix lanes
- keep FlakeHub out of the bootstrap and memory-scaling story
- keep `tinyland-nix` as the general-purpose baseline lane for now
- add a heavier additive Nix lane for Rust-heavy or memory-spiky jobs before globally raising the baseline
- pair that heavy lane with explicit node-placement guidance when the `honey` topology justifies it
Current execution state:
- the dev ARC additive policy now includes `tinyland-nix-heavy` as a repo-owned heavy Nix canary with a `16Gi` memory limit
- the heavy canary now carries an explicit placement contract:
  - target hostname `sting`
  - tolerate `dedicated.tinyland.dev/compute-expansion:NoSchedule`
- this keeps the heavy lane aligned with the on-prem rule that `sting` is for explicit stateless compute expansion rather than control-plane services on `honey`
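The placement contract above maps onto standard Kubernetes pod-spec fields. The hostname and taint key are from the execution state; the surrounding pod-template shape is assumed:

```yaml
template:
  spec:
    nodeSelector:
      kubernetes.io/hostname: sting   # explicit stateless compute expansion
    tolerations:
      - key: dedicated.tinyland.dev/compute-expansion
        operator: Exists
        effect: NoSchedule            # matches the sting taint
```

`operator: Exists` is used because the taint is stated as key-and-effect only, with no value.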
Why this is the best current fit:
- it matches the repo’s additive-runner governance pattern
- it avoids forcing all Nix jobs to inherit a large memory envelope
- it gives downstream repos a real platform answer instead of only workflow throttling
- it keeps FlakeHub discussion focused on later clean-derivation publication rather than muddying builder runtime behavior
Practical Contract
Until a heavier Nix lane exists:
- `tinyland-nix` should be treated as an `8Gi` general Nix lane
- self-hosted workflows on that lane should bootstrap Nix explicitly
- downstream repos may still cap parallelism when a workload does not justify a new shared lane
Once a heavier lane exists:
- ordinary flake builds stay on `tinyland-nix`
- heavy Rust/Clippy/Nix jobs move to the heavier label
  - current canary label: `tinyland-nix-heavy`
- benchmarking should compare:
- Nix bootstrap time
- cold-cache build time
- warm-cache build time
- heavy-lane versus baseline-lane runtime for memory-heavy jobs
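Those comparisons can be captured with ordinary timed workflow steps; this scaffolding is illustrative, and `--rebuild` is only an approximation of a genuinely cold cache:

```yaml
steps:
  - uses: actions/checkout@v4
  # step duration in the Actions UI approximates Nix bootstrap time
  - uses: DeterminateSystems/determinate-nix-action@v3
  # cold-cache proxy: force a local rebuild of the top-level derivation
  - name: Cold-cache build
    run: time nix build .#default --rebuild
  # warm-cache: immediate second build against the populated store
  - name: Warm-cache build
    run: time nix build .#default
```

Running the same workflow on `tinyland-nix` and the heavy label gives the baseline-versus-heavy comparison directly.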
Exit Condition
- the repo has one explicit answer for self-hosted Nix bootstrap
- the repo has one explicit answer for memory-heavy Nix workloads
- FlakeHub is described only where publication and discovery are the actual topic