Forge Adapter Matrix

Snapshot date: 2026-04-19

This document describes the adapter surface for each supported forge: which registration protocol it uses, how caching integrates, how much CI parity exists, and what maturity level the adapter has reached.

Summary

| Forge | Runner | Registration | Cache Integration | CI Parity | Status |
|---|---|---|---|---|---|
| GitHub | ARC scale sets | GitHub App + org scope | Attic + Bazel remote | Full | Primary |
| GitLab | gitlab-runner Helm | Registration token + group/project scope | Attic (partial) | Validation only | Compatibility |
| Forgejo/Codeberg | act_runner | Instance token + org/repo scope | Attic (planned) | Single proof path | Proof |

GitHub Adapter (Primary)

Registration uses a GitHub App with org-wide scope. The ARC controller manages scale sets that register runners on demand.

Scale sets on honey:

  • tinyland-nix — standard Nix workloads
  • tinyland-docker — container-based workloads
  • tinyland-dind — Docker-in-Docker for image builds
  • tinyland-nix-heavy — elevated memory/CPU for heavy Rust and Nix builds

Scope model: org-wide by default, per-repo targeting available via workflow runs-on labels.
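A repo targets a specific scale set by naming it in the workflow's runs-on label. The sketch below is illustrative only; the scale set name comes from the list above, but the workflow file contents, job name, and build command are assumptions:

```yaml
# .github/workflows/ci.yml (illustrative; job and build command are assumptions)
name: build
on: [push]
jobs:
  build:
    # ARC matches this label to the tinyland-nix scale set and
    # provisions an ephemeral runner for the job.
    runs-on: tinyland-nix
    steps:
      - uses: actions/checkout@v4
      - run: nix build .#default
```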

Cache: Attic binary cache for Nix store paths, Bazel remote cache for Bazel build artifacts. Both are production-grade on honey.
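As a sketch, both cache paths are typically exercised from workflow steps like the following. The cache name, endpoint, and build targets are assumptions, not the actual honey configuration:

```yaml
# Illustrative workflow steps; cache name and endpoint are assumptions.
steps:
  # Nix: build, then push the resulting store paths to the Attic binary cache.
  - run: |
      nix build .#default
      attic push tinyland ./result
  # Bazel: point builds at the remote cache endpoint.
  - run: bazel build //... --remote_cache=grpcs://bazel-cache.example.internal
```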

Status: production. This is the only forge adapter with a live fleet on honey.

GitLab Adapter (Compatibility)

Registration uses the gitlab-runner Helm chart with a registration token scoped to instance, group, or project level.
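A minimal sketch of what the Helm release values might look like, assuming the upstream gitlab-runner chart's value names; the instance URL and Secret name are assumptions, not the actual honey configuration:

```yaml
# values.yaml for the gitlab-runner Helm chart (illustrative)
gitlabUrl: https://gitlab.example.internal
runners:
  # Reference a pre-created Kubernetes Secret holding the registration
  # token rather than inlining it. The token's scope (instance, group,
  # or project) determines which jobs the runner picks up.
  secret: gitlab-runner-token
  config: |
    [[runners]]
      executor = "kubernetes"
```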

Scope model: instance, group, or project. Token scope determines which jobs the runner picks up.

Cache: partial Attic integration. Nix store paths can be pushed to Attic, but the GitLab CI pipeline does not exercise the full cache warm-up path that GitHub Actions workflows use.
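The partial path can be sketched as a single pipeline job that pushes store paths after the build; the image, cache name, and variables are assumptions:

```yaml
# .gitlab-ci.yml (illustrative; image, cache name, and variables are assumptions)
build:
  image: nixos/nix
  script:
    - nix build .#default
    # Push store paths to Attic. There is no warm-up job here equivalent
    # to the cache warm-up path the GitHub Actions workflows exercise.
    - attic login tinyland "$ATTIC_ENDPOINT" "$ATTIC_TOKEN"
    - attic push tinyland ./result
```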

CI parity: validation only. The OpenTofu module validates and plans correctly, but there is no live runner fleet on honey for GitLab. The GitLab integration is limited to the OpenTofu state backend (GitLab Managed Terraform State).

Caveat: the tofu module at tofu/stacks/gitlab-runners validates but does not maintain a live fleet. Do not assume GitLab runner capacity exists on honey.

Forgejo/Codeberg Adapter (Proof Path)

Registration uses act_runner with an instance token. Scope can be set to account, org, or repo level.

Cache: Attic integration is planned but not tested. No Bazel remote cache integration exists.

CI parity: one honest disposable proof path is now exercised. On 2026-04-20 a repo-scoped runner on honey executed a push-triggered workflow against an in-cluster disposable Forgejo instance. Forgejo Actions uses GitHub Actions syntax for workflow files, but the runner registration protocol is different from both GitHub ARC and GitLab Runner. See GloriousFlywheel Forgejo Honey Proof 2026-04-20.
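Because Forgejo Actions reuses GitHub Actions workflow syntax, the kind of push-triggered workflow the proof exercised can be sketched as follows. The file contents and the runner label are assumptions, not the actual proof workflow:

```yaml
# .forgejo/workflows/proof.yml (illustrative)
name: proof
on: [push]
jobs:
  proof:
    # Must match a label the repo-scoped act_runner registered with;
    # the label name here is an assumption.
    runs-on: docker
    steps:
      - run: echo "runner reachable"
```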

Status: proof-of-concept exercised on honey. This is not a production adapter: no shared fleet, no durable state, no tested cache path, and no operator automation surface yet.

What Is Shared Across Forges

These components are forge-agnostic and reused by every adapter:

  • Attic: binary cache backend, store signing, tenant isolation
  • Runner images: docker, nix, nix-heavy base images built by Nix
  • Cluster substrate: Kubernetes, OpenTofu modules, Tailscale overlay
  • Operator tooling: Just recipes, dashboard, MCP server, health checks
  • Policy model: pool placement, resource limits, scheduling constraints

What Is Forge-Specific

Each forge adapter owns these concerns independently:

  • Registration protocol and tokens: GitHub App, GitLab registration token, Forgejo instance token
  • Scope model: how runner visibility maps to org/repo/project hierarchy
  • Event routing: how CI events reach the runner controller
  • CI syntax: workflow file format and location (.github/workflows/, .gitlab-ci.yml, .forgejo/workflows/)
  • Enrollment model: how runner enrollment works across forge scope, operator tenant, execution pool, and cache/state plane
  • Platform layers: where forge adapters sit in the 4-layer architecture
