# Platform Architecture
GloriousFlywheel is a self-hosted runner and cache platform organized in four layers.
## Layer 1: FOSS Core Substrate
The shared infrastructure that works without any managed service dependency.
Components:
- Attic: Multi-tenant Nix binary cache backed by S3-compatible storage
- Bazel remote cache: Optional build acceleration via remote cache endpoint
- Runner images: docker, nix, nix-heavy, future hardware classes (gpu, kvm)
- Cluster substrate: Kubernetes + OpenTofu deployment surfaces
- Dashboard: SvelteKit operator UX with WebAuthn auth
- Operator tooling: Just recipes, health checks, MCP server
This layer deploys on any Kubernetes cluster. The reference deployment runs on
an on-prem RKE2 cluster (honey) with Tailscale overlay networking.
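The operator tooling above includes health checks; a minimal probe over the substrate's components might look like the sketch below. The service URLs and the `probe` helper are illustrative assumptions, not part of the GloriousFlywheel API.

```python
# Minimal substrate health probe (sketch; the endpoint URLs below are
# hypothetical in-cluster service names, not real platform endpoints).
from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical service endpoints for the Layer 1 components.
ENDPOINTS = {
    "attic": "http://attic.flywheel.svc/healthz",
    "bazel-cache": "http://bazel-cache.flywheel.svc/status",
    "dashboard": "http://dashboard.flywheel.svc/healthz",
}

def probe(name: str, url: str, timeout: float = 2.0) -> bool:
    """Return True if the component answers with an HTTP 2xx status."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        return False

def substrate_healthy() -> dict:
    """Probe every component and report per-component health."""
    return {name: probe(name, url) for name, url in ENDPOINTS.items()}
```

In practice this logic would live behind a Just recipe or the MCP server rather than a standalone script.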
## Layer 2: Forge Adapters
Each forge has its own runner registration, scope model, token handling, and event routing. The shared substrate stays the same; the adapter handles forge-specific enrollment.
| Forge | Adapter | Scope Model | Status |
|---|---|---|---|
| GitHub | ARC scale sets | repo, org, enterprise | Primary |
| GitLab | GitLab Runner | project, group, instance | Compatibility |
| Forgejo/Codeberg | act_runner | repo, org, account | Proof path |
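The table above can be read as a routing rule: given a forge and a requested scope, pick the adapter and validate the scope against its model. A minimal sketch (the `Adapter` type and `resolve` function are illustrative, not platform API):

```python
# Forge -> adapter routing, mirroring the table above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Adapter:
    name: str                    # registration mechanism
    scopes: tuple[str, ...]      # supported enrollment scopes
    status: str                  # support tier

ADAPTERS = {
    "github": Adapter("ARC scale sets", ("repo", "org", "enterprise"), "primary"),
    "gitlab": Adapter("GitLab Runner", ("project", "group", "instance"), "compatibility"),
    "forgejo": Adapter("act_runner", ("repo", "org", "account"), "proof path"),
}

def resolve(forge: str, scope: str) -> Adapter:
    """Pick the adapter for a forge and validate the requested scope."""
    adapter = ADAPTERS[forge]
    if scope not in adapter.scopes:
        raise ValueError(f"{forge} does not support scope {scope!r}")
    return adapter
```

The point of the shape: the substrate never branches on the forge; only this enrollment edge does.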
## Layer 3: Managed Control Plane (Future)
Optional SaaS layer above the FOSS core for fleet enrollment UX, tenant and pool management, usage reporting, cache and runner observability, policy packs, and support diagnostics.
This layer is not required to deploy the base platform, bootstrap a cluster, use Attic or the Bazel cache, or run GitHub ARC on a self-hosted cluster.
## Layer 4: Compatibility Kit
Covers legacy Bzlmod overlay patterns, `local_path_override` development flows, downstream merge-and-modify examples, and transitional consumers.
This is not the primary adoption path; see Adoption Quickstart for the recommended onboarding flow.
## Multi-Org Enrollment Model
Runner enrollment is modeled along four dimensions:
- Forge scope: GitHub repo/org/enterprise, GitLab project/group/instance, Forgejo repo/org/account
- Operator tenant: team, org, enterprise, managed customer
- Execution pool: docker, nix, nix-heavy, gpu, kvm
- Cache/state plane: Attic tenant/cache view, Bazel cache namespace, state backend authority
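One way to make the four dimensions concrete is a single enrollment record keyed on all of them. The field names mirror the bullets above, but the `Enrollment` class itself is an illustration, not a platform type:

```python
# A runner enrollment as a record over the four dimensions above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Enrollment:
    forge_scope: str   # e.g. "github:org/acme" or "gitlab:group/acme"
    tenant: str        # operator tenant: team, org, enterprise, managed customer
    pool: str          # execution pool: docker, nix, nix-heavy, gpu, kvm
    cache_plane: str   # e.g. an Attic tenant/cache view or Bazel cache namespace

    def key(self) -> str:
        """Stable identifier combining all four dimensions."""
        return "/".join((self.forge_scope, self.tenant, self.pool, self.cache_plane))
```

Keeping the dimensions independent is what lets one tenant run several pools, or several forge scopes share one cache plane.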
## Adopting GloriousFlywheel
- Deploy the core substrate on your cluster
- Enroll your forge using the appropriate adapter
- Choose shared or dedicated runner pools
- Attach caches (Attic for Nix, Bazel remote for Bazel)
- Customize runner images, placement, and policy
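The steps above are ordered: each depends on the one before it. A sketch of that as a checklist runner (the step strings and `run_adoption` helper are illustrative):

```python
# The adoption steps above as an ordered checklist; stop at the first failure.
STEPS = [
    "deploy core substrate",
    "enroll forge adapter",
    "choose runner pools",
    "attach caches",
    "customize images and policy",
]

def run_adoption(executor) -> list[str]:
    """Run each step in order; return the steps that completed."""
    completed = []
    for step in STEPS:
        if not executor(step):  # executor returns True on success
            break
        completed.append(step)
    return completed
```

For example, an executor that fails at cache attachment leaves the first three steps completed, which matches the real ordering constraint: caches cannot be attached before pools exist.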