# Nix Builds
tinyland-nix is the GloriousFlywheel runner lane for reproducible Nix and
flake-based workflows.
The current contract is:
- bootstrap Nix explicitly in self-hosted workflows
- use Attic and Bazel as acceleration layers
- do not treat Attic or Bazel as publication surfaces
- do not treat FlakeHub as part of the primary runner contract today
## Runtime Hints
On the Nix runner lane, GloriousFlywheel provides runtime hints such as:
- `ATTIC_SERVER`
- `ATTIC_CACHE`
- `BAZEL_REMOTE_CACHE`
- `NIX_CONFIG`
These are useful for cache-aware workflows, but they are not a substitute for installing or verifying the Nix toolchain in the workflow itself.
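As a sketch, a workflow step can log which hints are actually present before a build depends on them. The variable names come from the list above; whether each is set depends on the lane configuration:

```yaml
steps:
  - name: Show runner lane cache hints
    run: |
      # These are hints injected by the lane, not a guarantee;
      # print them so cache behavior is diagnosable from the job log.
      echo "ATTIC_SERVER=${ATTIC_SERVER:-<unset>}"
      echo "ATTIC_CACHE=${ATTIC_CACHE:-<unset>}"
      echo "BAZEL_REMOTE_CACHE=${BAZEL_REMOTE_CACHE:-<unset>}"
      echo "NIX_CONFIG=${NIX_CONFIG:-<unset>}"
```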
## Recommended Patterns
### Preferred: `nix-job`
Use the composite action when you want GloriousFlywheel to bootstrap Nix and configure the runtime contract for you:
```yaml
jobs:
  build:
    runs-on: tinyland-nix
    steps:
      - uses: actions/checkout@v4
      - uses: tinyland-inc/GloriousFlywheel/.github/actions/nix-job@main
        with:
          command: nix build .#default
          push-cache: "false"
```
### Raw Nix Commands
If you want full control over the steps, bootstrap Nix first and then run raw
nix commands:
```yaml
jobs:
  build:
    runs-on: tinyland-nix
    steps:
      - uses: actions/checkout@v4
      - uses: DeterminateSystems/determinate-nix-action@v3
      - run: nix build .#default
      - run: nix flake check
```
## Bootstrap Boundary
tinyland-nix is a Nix-oriented runner lane, not a promise that self-hosted
runner image state will always provide nix and related tooling before the
workflow starts.
That means:
- bootstrap Nix in the workflow
- keep bootstrap overhead separate from build time when benchmarking
- only then reason about cache acceleration or future publication flows
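One way to honor that boundary is to keep the bootstrap and a cheap verification in their own named steps, so their durations show up separately from the build. This is a sketch; the step layout is a suggestion, not a required contract:

```yaml
steps:
  - uses: actions/checkout@v4
  - name: Bootstrap Nix
    uses: DeterminateSystems/determinate-nix-action@v3
  - name: Verify toolchain
    run: nix --version   # fails fast if bootstrap did not produce a working nix
  - name: Build
    run: nix build .#default
```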
## Acceleration, Not Publication
Current roles:
- Attic: mutable Nix acceleration layer
- Bazel remote cache: mutable action/CAS acceleration layer
- GHCR: current durable OCI publication surface where applicable
- FlakeHub: future publication/discovery work, not an active Nix runner feature in primary repo surfaces
## Memory Envelope Boundary
The current baseline tinyland-nix lane is still an 8Gi runner-pod memory
limit, not “all memory on honey.”
That means:
- ARC can scale the number of Nix runner pods horizontally
- ARC does not automatically raise the memory limit of a single `tinyland-nix` pod
- a Rust-heavy or memory-spiky workload can still be OOM-killed inside that 8Gi envelope even when the cluster has abundant free RAM overall
If a workload regularly exceeds that envelope, the right platform answer is usually a heavier additive Nix lane or an explicit limit change, not treating FlakeHub or cache acceleration as a substitute for more memory.
Current canary:
- `tinyland-nix-heavy` is the current repo-owned additive heavy Nix lane in the dev ARC policy
- it is currently pinned to `honey` while the `sting` runner runtime is under repair
- `sting` remains the intended future compute-expansion target once that runtime path is healthy again
- use it for recurring Rust-heavy or memory-spiky Nix jobs rather than silently inflating the baseline lane for every workflow
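Routing a recurring memory-heavy job to the heavy lane is then a one-line change in the job definition. This is a sketch; the `runs-on` label simply follows the lane name above:

```yaml
jobs:
  build-heavy:
    runs-on: tinyland-nix-heavy   # additive heavy lane; baseline jobs stay on tinyland-nix
    steps:
      - uses: actions/checkout@v4
      - uses: DeterminateSystems/determinate-nix-action@v3
      - run: nix build .#default
```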
## Cold Versus Warm Runs
The first build on a cold cache will be slower. Subsequent runs can benefit from Attic and other cached inputs. That is expected.
When comparing Nix workflows, keep these separate:
- toolchain bootstrap time
- cold-cache runtime
- warm-cache runtime
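Naming the workflow steps makes those numbers easy to read off a run's per-step timing. A sketch, assuming the same bootstrap action used earlier in this page:

```yaml
steps:
  - name: Bootstrap Nix   # toolchain bootstrap time
    uses: DeterminateSystems/determinate-nix-action@v3
  - name: Build           # cold- or warm-cache runtime, depending on Attic state
    run: nix build .#default
```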