This guide walks you through deploying GloriousFlywheel infrastructure for your organization, from zero to a running Nix cache, GitLab runner pool, and monitoring dashboard.
There are two ways to deploy:
| Approach | When to Use | What You Get |
|---|---|---|
| Direct | Evaluating, single cluster, no private config | Clone upstream, add organization.yaml, deploy |
| Overlay | Production, multiple clusters, private secrets, CI/CD | Your own repo layered on top of upstream |
Most organizations should use an overlay. It keeps your secrets, tfvars, and CI pipeline in a private repo while pulling shared modules from upstream. See Create Your First Overlay for a step-by-step walkthrough.
If you just want to try things out, the direct approach works fine – you can always migrate to an overlay later.
You need three things on your workstation: Nix (with flakes enabled), direnv, and a kubeconfig for your target cluster (covered below).

Install Nix and enable flakes:

```bash
# Official installer (Linux / macOS)
curl -L https://nixos.org/nix/install | sh

# Enable flakes (add to ~/.config/nix/nix.conf)
echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
```

Install direnv:

```bash
# macOS
brew install direnv

# Nix
nix profile install nixpkgs#direnv
```

Then add the direnv hook to your shell (~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish):

```bash
eval "$(direnv hook bash)"   # bash
eval "$(direnv hook zsh)"    # zsh
direnv hook fish | source    # fish
```
You need a kubeconfig file that can reach your target cluster. The cluster should be running Kubernetes 1.28+ with:

- A StorageClass for persistent volumes

Clone the repository and enter the devShell:

```bash
git clone https://github.com/Jesssullivan/GloriousFlywheel.git ~/git/attic-iac
cd ~/git/attic-iac
direnv allow   # or: nix develop
```
The devShell provides pinned versions of tofu, kubectl, pnpm, node, and other tools. You don’t need to install them separately.
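As a quick sanity check before going further, you can confirm the pinned tools resolve and that your kubeconfig reaches the cluster. The exact version output will vary, and the StorageClass check assumes your default context already points at the target cluster:

```bash
# Inside the devShell: confirm the pinned tools are on PATH
tofu version
kubectl version --client

# Confirm the kubeconfig context reaches the cluster and a
# StorageClass exists for persistent volumes
kubectl config current-context
kubectl get storageclass
```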
Next, create your organization config:

```bash
cp config/organization.example.yaml config/organization.yaml
```
Edit organization.yaml with your cluster details. At minimum, set:
```yaml
organization:
  name: your-org

clusters:
  - name: dev
    role: development
    domain: dev.example.com
    context: your-kubeconfig-context

namespaces:
  attic:
    dev: attic-cache-dev
  runners:
    all: gitlab-runners
```
See the Customization Guide for all fields.
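For example, a second cluster can be added alongside `dev` using the same fields. The `production` role value below is an assumption; check the Customization Guide for the accepted values:

```yaml
clusters:
  - name: dev
    role: development
    domain: dev.example.com
    context: your-kubeconfig-context
  - name: prod
    role: production   # assumed value -- see the Customization Guide
    domain: example.com
    context: prod-kubeconfig-context
```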
Then set up the environment file:

```bash
cp .env.example .env
```

Edit `.env` and set the `TF_HTTP_*` credentials to a GitLab Personal Access Token with `api` scope (see `.env.example` for the required variable names). This token is used for the OpenTofu state backend.
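For orientation, OpenTofu's HTTP state backend reads its credentials from standard environment variables. A hypothetical `.env` targeting GitLab-hosted state might look like the sketch below, but defer to `.env.example` for the exact names this repo expects:

```bash
# Sketch only -- the authoritative variable names live in .env.example.
# GitLab's Terraform/OpenTofu state endpoint follows this address pattern.
TF_HTTP_ADDRESS="https://gitlab.example.com/api/v4/projects/<project-id>/terraform/state/<state-name>"
TF_HTTP_USERNAME="<gitlab-username>"
TF_HTTP_PASSWORD="<personal-access-token-with-api-scope>"
```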
Deploy the stacks in order:

```bash
# 1. Cache platform (must be first -- deploys CNPG, MinIO, PostgreSQL,
#    Attic API, GC worker, and DNS. Runners reference the cache endpoint.)
just tofu-plan attic
just tofu-apply attic

# 2. GitLab runners
just tofu-plan gitlab-runners
just tofu-apply gitlab-runners

# 3. GitHub Actions runners (ARC) -- optional, only if using GitHub Actions
just tofu-plan arc-runners
just tofu-apply arc-runners

# 4. Runner dashboard
just tofu-plan runner-dashboard
just tofu-apply runner-dashboard
```
The ARC stack requires a GitHub App secret in both the `arc-systems` and `arc-runners` namespaces. See the GitHub App Adoption Guide for setup instructions.
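As a rough sketch of what that involves, the secret and key names below follow upstream ARC's `gha-runner-scale-set` convention, but the adoption guide is authoritative:

```bash
# Hypothetical -- repeat for the arc-runners namespace; the real secret
# name and values come from the GitHub App Adoption Guide
kubectl create secret generic github-app \
  --namespace arc-systems \
  --from-literal=github_app_id=<app-id> \
  --from-literal=github_app_installation_id=<installation-id> \
  --from-file=github_app_private_key=<path-to-key.pem>
```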
Check that pods are running in the expected namespaces:

```bash
kubectl get pods -n attic-cache-dev    # Attic API + PostgreSQL + MinIO
kubectl get pods -n gitlab-runners     # GitLab runner manager pods
kubectl get pods -n arc-systems        # ARC controller (if deployed)
kubectl get pods -n arc-runners        # ARC runner pods (scale-to-zero, may be empty)
kubectl get pods -n runner-dashboard   # Dashboard pod
```
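If pods are still coming up, you can block until the cache API is available. This sketch assumes `atticd` is exposed as a Deployment, matching the component overview below:

```bash
kubectl rollout status deployment/atticd -n attic-cache-dev --timeout=300s
```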
Check GitLab: Your Group > Settings > CI/CD > Runners should show the registered runners.
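You can also confirm registration from the GitLab API instead of the UI; `GET /groups/:id/runners` is a standard endpoint, and the token is any PAT with at least `read_api` scope:

```bash
curl --header "PRIVATE-TOKEN: <your-pat>" \
  "https://gitlab.com/api/v4/groups/<group-id>/runners"
```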
If you deployed ARC, verify the GitHub App installation is working:
```bash
kubectl get autoscalingrunnersets -n arc-runners
```
For production use, create an overlay repository. This keeps your secrets and configuration private while pulling shared modules from upstream.
Follow the full walkthrough: Create Your First Overlay
The short version:
- Reference the upstream modules from `MODULE.bazel`, `build/overlay.bzl`, and `build/extensions.bzl` (a sketch follows this list)
- Keep your own `config/organization.yaml` and tfvars files in the overlay repo
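As a sketch of the first item, using standard Bzlmod constructs; the module name, version, and pin are placeholders, and the walkthrough above has the real values:

```python
# MODULE.bazel -- hypothetical overlay pulling GloriousFlywheel from upstream
bazel_dep(name = "glorious_flywheel", version = "0.0.0")
git_override(
    module_name = "glorious_flywheel",
    remote = "https://github.com/Jesssullivan/GloriousFlywheel.git",
    commit = "<pinned-commit>",
)
```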
After a successful deployment, you have:

```
Kubernetes Cluster
├── cnpg-system/
│   └── cnpg-controller-manager   (CloudNativePG operator)
├── minio-operator/
│   └── minio-operator            (MinIO operator)
├── attic-cache-dev/
│   ├── atticd                    (Nix binary cache API, HPA-enabled)
│   ├── attic-gc                  (garbage collection worker)
│   ├── attic-pg                  (PostgreSQL cluster via CNPG)
│   ├── attic-minio               (S3-compatible object storage via MinIO)
│   ├── attic-init-cache          (one-shot cache initialization job)
│   └── attic-cache-warm          (daily cache warming CronJob)
├── gitlab-runners/
│   ├── docker-runner             (general-purpose CI)
│   ├── dind-runner               (Docker-in-Docker builds)
│   └── nix-runner                (Nix builds + Attic cache)
├── arc-systems/                  (optional, if ARC deployed)
│   └── arc-controller            (Actions Runner Controller)
├── arc-runners/                  (optional, if ARC deployed)
│   ├── gh-nix                    (Nix builds, scale-to-zero)
│   ├── gh-docker                 (general-purpose CI, scale-to-zero)
│   └── gh-dind                   (Docker-in-Docker, scale-to-zero)
└── runner-dashboard/
    └── dashboard                 (SvelteKit monitoring UI)
```
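To smoke-test the cache from a workstation with the `attic` client installed, something like the following works. The endpoint is derived from the `domain` you set in `organization.yaml`, and the cache name and token are assumptions; the init job and your atticd configuration define the real ones:

```bash
# Hypothetical endpoint, cache name, and token -- substitute your own values
attic login dev https://attic.dev.example.com <admin-token>
attic use dev:main            # configure Nix to pull from the cache
attic push dev:main ./result  # push a local build output as a smoke test
```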
The runners are registered at your GitLab group level. Any project in the group can use them by adding `tags:` to its `.gitlab-ci.yml`:

```yaml
build:
  tags: [docker]
  script:
    - make build
```
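A Nix build job targets the `nix-runner` the same way; the `nix` tag here is an assumption that the tag mirrors the runner name, so check the runner's registered tags in GitLab:

```yaml
nix-build:
  tags: [nix]   # assumed tag
  script:
    - nix build .#default
```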
See also: the `organization.yaml` reference.