GPU + CPU runners
for GitHub Actions.
Change one line.
On-demand, ephemeral VMs that launch in seconds. T4G spot from $0.00372/min. No credit card required.
The only line you touch.
CUDA is already installed. torch.cuda.is_available() returns True on the first run.
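Concretely, the one line is the runs-on key in your workflow. A sketch, with a placeholder runner label (the exact label comes from your machine.dev dashboard):

```yaml
jobs:
  test:
    # Before: runs-on: ubuntu-latest
    # After: a placeholder label; substitute the exact one from your machine.dev dashboard
    runs-on: machine-t4g-spot
    steps:
      - uses: actions/checkout@v4
      # The driver and CUDA toolkit are preinstalled, so this passes on a fresh VM
      - run: python -c "import torch; assert torch.cuda.is_available()"
```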
Two failure modes.
Fine until you need more oomph.
The default 2-core runner is enough for a lot of things. But the moment you need to run model inference, build a large Docker image, or test against something that actually resembles production, you're stuck on a 2-core machine with 7GB of RAM.
GitHub does offer larger runners. The 4-core x64 costs $0.012/min. Their GPU runner costs $0.052/min. You can spend that, or you can run a T4 spot here for $0.00480/min.
Expensive in a different way.
You provision an instance. You install the runner. You write the cleanup scripts. You deal with the spot interruption. You maintain the AMI. You handle the GitHub token rotation. It works, and it costs you three afternoons and a recurring maintenance burden that never quite goes away.
There's a middle option.
Three steps. Two minutes.
Connect_GitHub_Org
Sign up and authorise machine.dev on your GitHub org.
Modify_Runs_On
Replace ubuntu-latest with the runner you want. Nothing else changes.
Push_And_Execute
A fresh VM launches in under 60 seconds, runs your job, shuts down. Billed only for runtime.
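Put together, a minimal workflow might look like this. The runner label is illustrative; use the one your dashboard shows for the tier you picked:

```yaml
name: gpu-tests
on: push

jobs:
  inference:
    # Placeholder label; replace with the runner label from your machine.dev dashboard
    runs-on: machine-t4g-spot
    steps:
      - uses: actions/checkout@v4
      - name: Verify the GPU is visible
        run: nvidia-smi
      - name: Run inference tests
        run: |
          pip install torch
          python -c "import torch; assert torch.cuda.is_available()"
```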
Six things, every job.
Cold starts under 60 seconds. Ephemeral VMs. CUDA pre-installed. Bundled egress. One-line setup. Rollover billing.
Launches in seconds
From push to running job in under a minute. No pre-warming, no capacity planning, no waiting.
Clean machine, every job
Every run gets a fresh VM. No shared state, no stale dependencies, no "works on my runner" bugs.
Pre-installed stack
NVIDIA driver 555.58, CUDA 12.1.0, cuDNN 9.2.1, NVIDIA Container Toolkit. All there before step one.
No data transfer fees
Ingress and egress are bundled. Pulling a 10GB model adds zero to your invoice.
Drop-in setup
Change runs-on. That's the full integration. No sidecar, no custom action, no webhook.
Rollover billing
Unused credits carry forward every month. Quiet sprint? You don't lose what you paid for.
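If you want to fail fast when the stack drifts, a sanity-check step is cheap. The versions asserted below are the ones advertised above; adjust them if the preinstalled stack changes:

```yaml
- name: Check preinstalled CUDA stack
  run: |
    # Driver version reported by the GPU; expect 555.58 per the advertised stack
    nvidia-smi --query-gpu=driver_version --format=csv,noheader
    # CUDA toolkit version; fails the job if it is not 12.1
    nvcc --version | grep "release 12.1"
    # Container Toolkit: a CUDA container can see the GPU
    docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
```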
From T4G to RTX 6000.
Six GPU tiers, spot and on-demand, from 16GB to 96GB VRAM. T4G starts at $0.00372/min on spot — about 14× cheaper than GitHub's GPU runner.
| TIER | GPU | VRAM | SPOT |
|---|---|---|---|
| [01] | T4G | 16GB | $0.00372/min |
| [02] | T4 | 16GB | $0.00480/min |
| [03] | L4 | 24GB | $0.00570/min |
| [04] | A10G | 24GB | $0.01277/min |
| [05] | L40S | 48GB | $0.01613/min |
| [06] | RTX 6000 | 96GB | $0.01965/min |
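Switching tiers is just a different runs-on value. The labels below are illustrative, not the service's real ones; a matrix lets you run the same job across several tiers and compare:

```yaml
jobs:
  bench:
    strategy:
      matrix:
        # Illustrative labels; map each to the tier label your dashboard shows
        runner: [machine-t4-spot, machine-l4-spot, machine-l40s-spot]
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - run: nvidia-smi
```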
Start free. Scale linearly.
Sign up and get $10 in free compute, roughly 45 GPU hours on a spot T4G. No credit card required.
System ready.
Start running on
real hardware.
$10 free compute. Two minutes to connect your GitHub org. torch.cuda.is_available() == True on the first run.