# GitHub Actions Runners

Self-hosted GitHub Actions runners powered by ARC (Actions Runner Controller) on the honey cluster. GitHub is the primary runner product surface; GitLab remains a compatibility surface elsewhere in the repo.
## Available Labels

Use these `runs-on` values in your GitHub Actions workflows:
| Label | Runner Type | Use Case |
|---|---|---|
| `tinyland-nix` | nix | Nix builds and reproducible flake workflows |
| `tinyland-nix-heavy` | nix | Memory-heavy Rust/Nix jobs in environments that enable the additive heavy lane |
| `tinyland-docker` | docker | General CI: linting, testing, builds |
| `tinyland-dind` | dind | Docker-in-Docker: container image builds |
## Quick Start

```yaml
jobs:
  build:
    runs-on: tinyland-nix
    steps:
      - uses: actions/checkout@v4
      - uses: tinyland-inc/GloriousFlywheel/.github/actions/nix-job@main
        with:
          command: nix build .#default
```
No tokens, no registration. The runner pool is available to all repos in the installed GitHub App organizations.
For a downstream repo migration checklist and a canonical consumer example, see Downstream Migration Checklist.
## Composite Actions
GloriousFlywheel provides composite actions that auto-configure cache endpoints on self-hosted runners.
### setup-flywheel

Base action that detects the runner environment and configures cache endpoints. On self-hosted runners, `ATTIC_SERVER`, `ATTIC_CACHE`, and `BAZEL_REMOTE_CACHE` are set automatically via cluster DNS. This action does not install Nix.

```yaml
steps:
  - uses: tinyland-inc/GloriousFlywheel/.github/actions/setup-flywheel@main
```
### nix-job

Nix job helper. Bootstraps Nix explicitly, configures GloriousFlywheel cache/runtime hints, and runs your command.

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: tinyland-inc/GloriousFlywheel/.github/actions/nix-job@main
    with:
      command: nix build .#default
      push-cache: "true"
```
### docker-job

Standard CI job with the Bazel cache configured.

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: tinyland-inc/GloriousFlywheel/.github/actions/docker-job@main
    with:
      command: make build
```
## Cache Integration

ARC runners access caches via cluster-internal DNS. No public management path, no extra ingress layer, and no extra credential exposure:

- Attic: `http://attic.nix-cache.svc.cluster.local`
- Bazel: `grpc://bazel-cache.nix-cache.svc.cluster.local:9092`
Environment variables are injected automatically:
| Variable | Runner Types | Value |
|---|---|---|
| `ATTIC_SERVER` | nix | Attic API endpoint |
| `ATTIC_CACHE` | nix | Cache name (default: `main`) |
| `BAZEL_REMOTE_CACHE` | nix, docker | Bazel cache gRPC endpoint |
| `NIX_CONFIG` | nix | `experimental-features = nix-command flakes` |
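As an illustration, a job on the docker lane can point Bazel directly at the injected endpoint; this is a sketch (the `//...` target pattern is a placeholder), using Bazel's standard `--remote_cache` flag:

```yaml
jobs:
  test:
    runs-on: tinyland-docker
    steps:
      - uses: actions/checkout@v4
      # BAZEL_REMOTE_CACHE is injected into the runner environment;
      # --remote_cache is the standard Bazel remote-caching flag.
      - run: bazel test --remote_cache="$BAZEL_REMOTE_CACHE" //...
```

The composite actions above wire this up for you; reach for the raw variable only when you need a non-default Bazel invocation.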
## Nix Bootstrap Boundary

`tinyland-nix` is the Nix-oriented runner lane, not a guarantee that Nix will always be preinstalled on every self-hosted runner.

Recommended current rule:

- use `nix-job` when you want GloriousFlywheel to bootstrap Nix for you
- or run `DeterminateSystems/determinate-nix-action@v3` before raw `nix` commands in self-hosted workflows
- treat Attic and Bazel as acceleration layers, not as publication surfaces
- treat FlakeHub as future publication/discovery work, not part of the primary GitHub Actions contract today
- use `tinyland-nix-heavy` when a workflow routinely exceeds the baseline 8Gi `tinyland-nix` envelope and the additive heavy lane is available in the target environment
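For the raw-`nix` path, a minimal sketch (the flake target is a placeholder):

```yaml
jobs:
  build:
    runs-on: tinyland-nix
    steps:
      - uses: actions/checkout@v4
      # Bootstrap Nix explicitly rather than assuming it is preinstalled.
      - uses: DeterminateSystems/determinate-nix-action@v3
      - run: nix build .#default
```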
## Architecture

ARC uses a controller + scale set model. The controller watches for `workflow_job` webhook events and scales runner pods up/down:

```
arc-systems namespace
└── ARC controller (gha-runner-scale-set-controller)

arc-runners namespace
├── gh-nix    (scale set → runs-on: tinyland-nix)
├── gh-docker (scale set → runs-on: tinyland-docker)
└── gh-dind   (scale set → runs-on: tinyland-dind)
```

All scale sets support scale-to-zero. Runner pods are created on demand when a workflow job matches the `runs-on` label.
Important scaling boundary:

- ARC scales the number of runner pods in a scale set
- ARC does not autosize the CPU or memory envelope of one runner pod
- the current `tinyland-nix`, `tinyland-docker`, and `tinyland-dind` pod limits still come from the `arc-runners` stack values
- if one job needs more memory than its runner pod limit, the right fix is runner-envelope or builder-lane design, not assuming cluster-wide free RAM will automatically flow into that pod
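In practice, moving a memory-heavy job to the larger envelope is a label change, not a per-job resource request; a sketch, assuming the heavy lane is deployed in the target environment:

```yaml
jobs:
  heavy-build:
    # Lands on the larger runner-pod envelope; the pod limit itself
    # still comes from the arc-runners stack values, not from this file.
    runs-on: tinyland-nix-heavy
    steps:
      - uses: actions/checkout@v4
      - uses: tinyland-inc/GloriousFlywheel/.github/actions/nix-job@main
        with:
          command: nix build .#default
```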
## Infrastructure

The ARC stack is managed by OpenTofu through the root operator path:

```bash
export KUBECONFIG=~/.kube/kubeconfig-honey.yaml
export KUBE_CONTEXT=honey
cp tofu/stacks/arc-runners/terraform.tfvars.example tofu/stacks/arc-runners/dev.tfvars
ENV=dev just tofu-preflight arc-runners
ENV=dev just tofu-init arc-runners
ENV=dev just tofu-plan arc-runners
ENV=dev just tofu-apply arc-runners
```

Use the transitional cloud context only for explicit compatibility testing, not as the primary deployment authority.
## GitHub App Setup

ARC authenticates via a GitHub App installed on the target organizations. The App requires:

- Self-hosted runners (Organization): Read & Write
- Metadata (Repository): Read-only
- Actions (Repository): Read-only

Webhook event: `workflow_job`

Credentials are stored as a Kubernetes secret in the `arc-systems` namespace.
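The secret shape below is a sketch following the upstream gha-runner-scale-set key convention; the secret name `arc-github-app` and all placeholder values are assumptions, not the deployed configuration (the actual name comes from the `arc-runners` stack values):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: arc-github-app        # assumed name; match the arc-runners stack values
  namespace: arc-systems
stringData:
  github_app_id: "<app-id>"
  github_app_installation_id: "<installation-id>"
  github_app_private_key: |
    <PEM private key downloaded from the GitHub App settings page>
```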
## See Also

- Multi-Org / Cross-Repo Runners — add runners for personal repos or other orgs
- Downstream Migration Checklist — canonical downstream consumer pattern and rollback
- Self-Service Enrollment — GitLab runner enrollment
- Cache Integration — cache configuration details
- Runner Selection — choosing the right runner type