GloriousFlywheel

Getting Started

This guide walks you through deploying GloriousFlywheel infrastructure for your organization, from zero to a running Nix cache, GitLab runner pool, and monitoring dashboard.

Decide: Direct or Overlay?

There are two ways to deploy:

Direct
  When to use: evaluating, single cluster, no private config.
  What you get: clone upstream, add organization.yaml, deploy.

Overlay
  When to use: production, multiple clusters, private secrets, CI/CD.
  What you get: your own repo layered on top of upstream.

Most organizations should use an overlay. It keeps your secrets, tfvars, and CI pipeline in a private repo while pulling shared modules from upstream. See Create Your First Overlay for a step-by-step walkthrough.

If you just want to try things out, the direct approach works fine – you can always migrate to an overlay later.

Prerequisites

You need three things installed on your workstation:

  1. Nix with flakes enabled
  2. direnv (optional but recommended)
  3. kubectl access to a Kubernetes cluster

Install Nix

# Official installer (Linux / macOS)
curl -L https://nixos.org/nix/install | sh

# Enable flakes (creates ~/.config/nix/nix.conf if it doesn't exist)
mkdir -p ~/.config/nix
echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
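After restarting your shell, you can confirm that flakes support is active; `nix flake metadata` is a standard flakes subcommand and will fail with an "experimental feature" error if the setting didn't take effect:

```shell
# Should print flake metadata (description, revision, inputs)
# rather than an error about experimental features being disabled
nix flake metadata nixpkgs
```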

Install direnv

# macOS
brew install direnv

# Nix
nix profile install nixpkgs#direnv

# Add hook to your shell (~/.bashrc, ~/.zshrc, or ~/.config/fish/config.fish)
eval "$(direnv hook bash)"    # bash
eval "$(direnv hook zsh)"     # zsh
direnv hook fish | source     # fish

Kubernetes Access

You need a kubeconfig file that can reach your target cluster. The cluster should be running Kubernetes 1.28 or later.
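Before deploying, it's worth a quick sanity check that your kubeconfig points at the right cluster and that the API server is reachable (assuming kubectl is already installed):

```shell
# List available contexts; the target context should be marked current
kubectl config get-contexts

# Confirm the cluster API is reachable and report client/server versions
kubectl version
kubectl cluster-info
```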

Direct Deployment

1. Clone and enter the devShell

git clone https://github.com/Jesssullivan/GloriousFlywheel.git ~/git/attic-iac
cd ~/git/attic-iac
direnv allow  # or: nix develop

The devShell provides pinned versions of tofu, kubectl, pnpm, node, and other tools. You don’t need to install them separately.
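Once the devShell is active, a quick sanity check confirms the pinned tools resolve on your PATH (not part of the deployment, just verification):

```shell
# Each of these should print a version from the devShell-provided binary
tofu version
kubectl version --client
node --version
pnpm --version
```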

2. Configure your organization

cp config/organization.example.yaml config/organization.yaml

Edit organization.yaml with your cluster details. At minimum, set:

organization:
  name: your-org

clusters:
  - name: dev
    role: development
    domain: dev.example.com
    context: your-kubeconfig-context

namespaces:
  attic:
    dev: attic-cache-dev
  runners:
    all: gitlab-runners

See the Customization Guide for all fields.

3. Set up secrets

cp .env.example .env

Edit .env and set the TF_HTTP_ credentials to a GitLab Personal Access Token with api scope (see .env.example for the required variable names). This token is used for the OpenTofu state backend.
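As a sketch of what that looks like: `TF_HTTP_USERNAME` and `TF_HTTP_PASSWORD` are the standard environment variables for the OpenTofu/Terraform HTTP state backend, but defer to .env.example for the exact names this project expects:

```shell
# .env -- GitLab HTTP state backend credentials
# (names are illustrative; .env.example is authoritative)
TF_HTTP_USERNAME=your-gitlab-username
TF_HTTP_PASSWORD=glpat-xxxxxxxxxxxxxxxxxxxx   # PAT with `api` scope
```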

4. Deploy the stacks (in order)

# 1. Cache platform (must be first -- deploys CNPG, MinIO, PostgreSQL,
#    Attic API, GC worker, and DNS. Runners reference the cache endpoint.)
just tofu-plan attic
just tofu-apply attic

# 2. GitLab runners
just tofu-plan gitlab-runners
just tofu-apply gitlab-runners

# 3. GitHub Actions runners (ARC) -- optional, only if using GitHub Actions
just tofu-plan arc-runners
just tofu-apply arc-runners

# 4. Runner dashboard
just tofu-plan runner-dashboard
just tofu-apply runner-dashboard

The ARC stack requires a GitHub App secret in both arc-systems and arc-runners namespaces. See the GitHub App Adoption Guide for setup instructions.
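As an illustrative sketch of that secret (the field names `github_app_id`, `github_app_installation_id`, and `github_app_private_key` follow the upstream Actions Runner Controller convention; the secret name, IDs, and key path here are placeholders -- the GitHub App Adoption Guide is authoritative):

```shell
# Create the GitHub App secret in both ARC namespaces
for ns in arc-systems arc-runners; do
  kubectl create secret generic github-app-secret \
    --namespace "$ns" \
    --from-literal=github_app_id=<app-id> \
    --from-literal=github_app_installation_id=<installation-id> \
    --from-file=github_app_private_key=path/to/private-key.pem
done
```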

5. Verify

kubectl get pods -n attic-cache-dev       # Attic API + PostgreSQL + MinIO
kubectl get pods -n gitlab-runners        # GitLab runner manager pods
kubectl get pods -n arc-systems           # ARC controller (if deployed)
kubectl get pods -n arc-runners           # ARC runner pods (scale-to-zero, may be empty)
kubectl get pods -n runner-dashboard      # Dashboard pod

Check GitLab: Your Group > Settings > CI/CD > Runners should show the registered runners.
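If you prefer the command line, the GitLab REST API exposes the same list via the group runners endpoint (the group ID and token below are placeholders):

```shell
# List runners registered to your group; requires a PAT with `read_api` or `api` scope
curl --header "PRIVATE-TOKEN: <your-pat>" \
  "https://gitlab.com/api/v4/groups/<group-id>/runners"
```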

If you deployed ARC, verify the GitHub App installation is working:

kubectl get autoscalingrunnersets -n arc-runners

Overlay Deployment

For production use, create an overlay repository. This keeps your secrets and configuration private while pulling shared modules from upstream.

Follow the full walkthrough: Create Your First Overlay

The short version:

  1. Create a private GitLab repo for your overlay
  2. Clone upstream as a sibling directory
  3. Add MODULE.bazel, build/overlay.bzl, build/extensions.bzl
  4. Add your config/organization.yaml and tfvars files
  5. Set up a CI pipeline that clones upstream and runs tofu plan/apply
  6. Push to main and let CI deploy
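A minimal pipeline for steps 5 and 6 might look like the following sketch. The stage layout, image, and clone URL are illustrative assumptions (the walkthrough linked above describes the real layout), and it shows only the attic stack; in practice you would repeat the pattern per stack:

```yaml
# .gitlab-ci.yml (illustrative): clone upstream as a sibling, then plan/apply
stages: [plan, apply]

.tofu-base:
  image: nixos/nix:latest          # assumed image with Nix available
  before_script:
    - git clone https://github.com/Jesssullivan/GloriousFlywheel.git ../GloriousFlywheel

plan:
  extends: .tofu-base
  stage: plan
  script:
    - just tofu-plan attic

apply:
  extends: .tofu-base
  stage: apply
  script:
    - just tofu-apply attic
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```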

What Gets Deployed

After a successful deployment, you have:

Kubernetes Cluster
  cnpg-system/
    cnpg-controller-manager (CloudNativePG operator)
  minio-operator/
    minio-operator (MinIO operator)
  attic-cache-dev/
    atticd (Nix binary cache API, HPA-enabled)
    attic-gc (garbage collection worker)
    attic-pg (PostgreSQL cluster via CNPG)
    attic-minio (S3-compatible object storage via MinIO)
    attic-init-cache (one-shot cache initialization job)
    attic-cache-warm (daily cache warming CronJob)
  gitlab-runners/
    docker-runner (general purpose CI)
    dind-runner (Docker-in-Docker builds)
    nix-runner (Nix builds + Attic cache)
  arc-systems/          (optional, if ARC deployed)
    arc-controller (Actions Runner Controller)
  arc-runners/          (optional, if ARC deployed)
    gh-nix (Nix builds, scale-to-zero)
    gh-docker (general purpose CI, scale-to-zero)
    gh-dind (Docker-in-Docker, scale-to-zero)
  runner-dashboard/
    dashboard (SvelteKit monitoring UI)

The runners are registered at your GitLab group level. Any project in the group can use them by adding tags: to their .gitlab-ci.yml:

build:
  tags: [docker]
  script:
    - make build

Next Steps