GPU runners
for GitHub Actions.
CUDA pre-installed.
T4G, T4, L4, A10G, L40S, and RTX 6000, spot and on-demand. Change runs-on and your next push runs on a real GPU. T4G spot from $0.00372/min.
Five
ML/CI
use cases.
If you're doing any of these in CI, you need a GPU runner.
Running model inference to test outputs before a release.
Fine-tuning or training as part of an automated pipeline.
Evaluating models against a benchmark on every PR.
Building or testing ML pipelines end-to-end.
Running GPU-accelerated tests that fail silently on CPU.
The typical alternative is either skipping GPU tests in CI entirely (and finding breakage later in production) or provisioning your own runners and absorbing the maintenance overhead. Neither is great.
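As a sketch, the per-PR evaluation case looks like an ordinary workflow with a GPU label. The runner label, the eval script, and the benchmark path below are all placeholders for whatever your project uses:

```yaml
# Hypothetical PR evaluation workflow; label and script names are placeholders.
name: model-eval
on: pull_request
jobs:
  eval:
    runs-on: gpu-l4                    # placeholder GPU runner label
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt
      - run: python eval.py --benchmark benchmarks/smoke.json   # hypothetical script
```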
You don't
install CUDA.
It's there.
torch.cuda.is_available() returns True on the first run. No setup step, no custom Docker image, no driver installation script.
CUDA installation is one of the slower parts of a CI job. A typical cuda-toolkit install from scratch adds 3–5 minutes. That time is gone.
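A minimal first job, then, has no setup step at all. This is a sketch; the runner label is a placeholder for whatever label your provider assigns:

```yaml
# Hypothetical smoke test: verify the pre-installed CUDA stack, no install step.
name: gpu-smoke-test
on: push
jobs:
  check:
    runs-on: gpu-t4                    # placeholder GPU runner label
    steps:
      - run: nvidia-smi                # drivers are already present
      - run: pip install torch
      - run: python -c "import torch; assert torch.cuda.is_available()"
```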
GPU & Accelerator Pricing
Live per-minute rates for GPU and AI accelerator runners.
Three
patterns.
One line.
Basic on-demand, spot for cost-sensitive jobs, and RTX 6000 for the heavy stuff. The only thing that changes is the runs-on array.
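Sketched side by side, the three patterns differ only in that one array. The labels below are placeholders; substitute the exact runs-on values from your dashboard:

```yaml
# Placeholder labels; the exact runs-on syntax depends on your provider.
jobs:
  on-demand:
    runs-on: [gpu, t4]                 # basic on-demand
    steps:
      - run: pytest tests/gpu
  spot:
    runs-on: [gpu, t4g, spot]          # spot, for cost-sensitive jobs
    steps:
      - run: pytest tests/gpu
  heavy:
    runs-on: [gpu, rtx6000]            # RTX 6000, for the heavy stuff
    steps:
      - run: python train.py
```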
Common
questions.
Do I need to install CUDA?
No. CUDA 12.1.0, cuDNN 9.2.1, NVIDIA drivers 555.58, and the NVIDIA Container Toolkit are pre-installed on every GPU runner. Your first step can go straight to pip install.
What if the spot runner gets interrupted?
The job fails and can be re-run manually or with automatic retries in your workflow config. For long or expensive jobs, use an on-demand runner instead.
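GitHub Actions has no built-in job-level retry, so "automatic retries" usually means wrapping the flaky step. One common community pattern uses the nick-fields/retry action; treat the command and attempt counts below as placeholders for your own job:

```yaml
# Hypothetical step-level retry for spot interruptions, via a community action.
steps:
  - uses: nick-fields/retry@v3
    with:
      max_attempts: 3
      timeout_minutes: 30
      command: python train.py         # placeholder for your GPU workload
```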
Can I run Docker containers?
Yes. The NVIDIA Container Toolkit is pre-installed, so GPU-accelerated Docker containers work out of the box. docker run --gpus all works.
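For example, a single step can run any CUDA image directly. The image tag below is one published NVIDIA base image; swap in your own:

```yaml
# Run a GPU-enabled container in one step; no toolkit install needed.
steps:
  - run: docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
```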
What CUDA version is installed?
CUDA 12.1.0, cuDNN 9.2.1, NVIDIA drivers 555.58. If your framework needs a different CUDA version, use a Docker container with the version you need; the container toolkit handles the rest.
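One way to pin an older CUDA is to run the whole job inside a container. This is a sketch with a placeholder runner label; the image is a published NVIDIA runtime tag:

```yaml
# Hypothetical job pinned to CUDA 11.8 via a container image.
jobs:
  legacy-cuda:
    runs-on: gpu-t4                    # placeholder GPU runner label
    container:
      image: nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
      options: --gpus all              # expose the host GPU to the container
    steps:
      - run: nvidia-smi
```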
Run your first
GPU job today.
$10 free compute. Connect your GitHub org in two minutes. torch.cuda.is_available() == True on the first run.