GloriousFlywheel uses a GitHub App to authenticate ARC (Actions Runner Controller) with GitHub. This guide explains how to install the app on your organization and configure runner access.
GloriousFlywheel is a GitHub App (ID 2953466) that enables self-hosted
GitHub Actions runners via ARC. It listens for workflow_job webhook
events and scales runner pods on demand.
| Permission | Scope | Access |
|---|---|---|
| Self-hosted runners | Organization | Read & Write |
| Metadata | Repository | Read-only |
| Actions | Repository | Read-only |
The app subscribes to workflow_job events. These trigger runner
pod creation when a job matches a self-hosted runner label.
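For reference, a queued `workflow_job` delivery looks roughly like this (heavily abbreviated, with illustrative IDs and repository name; the `labels` array is what gets matched against runner labels):

```json
{
  "action": "queued",
  "workflow_job": {
    "id": 123456789,
    "run_id": 987654321,
    "status": "queued",
    "labels": ["tinyland-nix"]
  },
  "repository": { "full_name": "tinyland-inc/example" }
}
```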
ARC authenticates using a Kubernetes secret containing the GitHub App credentials. This secret must exist in both namespaces:
- `arc-systems` – used by the ARC controller for API authentication
- `arc-runners` – used by runner scale sets for registration

```bash
# Create the secret in both namespaces
for ns in arc-systems arc-runners; do
  kubectl create secret generic github-app-secret \
    --namespace="$ns" \
    --from-literal=github_app_id=2953466 \
    --from-literal=github_app_installation_id=<INSTALLATION_ID> \
    --from-file=github_app_pem=<PATH_TO_PEM_FILE>
done
```
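To sanity-check that the secret landed in both namespaces, you can decode a field back out (standard `kubectl`; the jsonpath key matches the `--from-literal` name used above — requires a live cluster):

```bash
# Print the stored App ID from each namespace; both should read 2953466
for ns in arc-systems arc-runners; do
  kubectl get secret github-app-secret --namespace="$ns" \
    -o jsonpath='{.data.github_app_id}' | base64 -d
  echo " ($ns)"
done
```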
The default runner group must allow public repositories if you want self-hosted runners available to public repos:
```bash
gh api -X PATCH /orgs/<ORG>/actions/runner-groups/1 \
  -f allows_public_repositories=true
```
Without this, workflows in public repos will fail to match self-hosted runner labels.
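To confirm the current setting, read the same runner-group object back (standard GitHub REST endpoint; group ID 1 is the default group, as in the PATCH above):

```bash
# Should print "true" after the PATCH has been applied
gh api /orgs/<ORG>/actions/runner-groups/1 \
  --jq '.allows_public_repositories'
```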
Deploy the stack with OpenTofu:

```bash
cd tofu/stacks/arc-runners
tofu init
tofu plan -var-file=tinyland.tfvars \
  -var=cluster_context=tinyland-civo-dev \
  -var=k8s_config_path=$HOME/.kube/config
tofu apply -var-file=tinyland.tfvars \
  -var=cluster_context=tinyland-civo-dev \
  -var=k8s_config_path=$HOME/.kube/config
```
This deploys:
- `arc-systems`
- `arc-runners`

GloriousFlywheel provides composite actions that auto-configure cache endpoints on self-hosted runners:
| Action | Description |
|---|---|
| `setup-flywheel` | Detect runner environment, configure cache endpoints |
| `nix-job` | Nix build with Attic binary cache |
| `docker-job` | Standard CI job with Bazel cache |
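If you only want the cache configuration without a fully wrapped job, the table suggests `setup-flywheel` can run as a standalone step. A sketch, assuming the same repository path and ref as the other composite actions (any inputs it may accept are not shown):

```yaml
steps:
  - uses: actions/checkout@v4
  # hypothetical standalone use: configures cache endpoints, then your own commands run
  - uses: tinyland-inc/GloriousFlywheel/.github/actions/setup-flywheel@main
  - run: nix build
```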
Example using `nix-job`:

```yaml
jobs:
  build:
    runs-on: tinyland-nix
    steps:
      - uses: actions/checkout@v4
      - uses: tinyland-inc/GloriousFlywheel/.github/actions/nix-job@main
        with:
          command: nix build .#default
          push-cache: "true"
```
A minimal workflow on the Nix runner label:

```yaml
name: Build
on: [push, pull_request]
jobs:
  build:
    runs-on: tinyland-nix
    steps:
      - uses: actions/checkout@v4
      - run: nix build
```
A multi-job workflow mixing the docker and dind runner labels:

```yaml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: tinyland-docker
    steps:
      - uses: actions/checkout@v4
      - run: make test
  build-image:
    runs-on: tinyland-dind
    needs: test
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp .
```
By default, ARC runner scale sets register at the organization level —
every repo in that org can use runs-on: tinyland-nix. But personal repos
or repos in other orgs can’t reach those runners.
The extra_runner_sets variable lets you deploy additional scale sets
scoped to a different org or a single repository, all on the same cluster
and sharing the same ARC controller.
How `githubConfigUrl` scoping works:

| URL pattern | Scope |
|---|---|
| `https://github.com/ORG` | All repos in the org |
| `https://github.com/OWNER/REPO` | Single repository only |
Install GloriousFlywheel on the target GitHub account/org (Settings → Applications → Install).
Create a dedicated secret for the new installation in both namespaces (same App ID and PEM, new installation ID):

```bash
for ns in arc-systems arc-runners; do
  kubectl create secret generic github-app-secret-chapel \
    --namespace="$ns" \
    --from-literal=github_app_id=2953466 \
    --from-literal=github_app_installation_id=<NEW_INSTALLATION_ID> \
    --from-file=github_app_pem=<PATH_TO_PEM_FILE>
done
```
Add an `extra_runner_sets` entry in your tfvars:

```hcl
extra_runner_sets = {
  chapel-nix = {
    github_config_url    = "https://github.com/Jesssullivan/chapel"
    github_config_secret = "github-app-secret-chapel"
    runner_label         = "chapel-nix"
    runner_type          = "nix"
    max_runners          = 3
    cpu_limit            = "4"
    memory_limit         = "8Gi"
  }
}
```
Re-apply the stack; the new scale set deploys into `arc-runners` alongside the existing ones:

```bash
tofu apply -var-file=tinyland.tfvars \
  -var=cluster_context=tinyland-civo-dev \
  -var=k8s_config_path=$HOME/.kube/config
```
Workflows in the target repository can then use the new label:

```yaml
jobs:
  build:
    runs-on: chapel-nix
    steps:
      - uses: actions/checkout@v4
      - run: nix build
```