SYS.00 // SYSTEM_INTERFACE REV 1.3.1 BUILD 2026.04.26

GPU + CPU runners
for GitHub Actions.
Change one line.

On-demand, ephemeral VMs that launch in seconds. T4G spot from $0.00372/min. No credit card required.

COLD_START
<60 s
T4G_SPOT
$0.00372 /min
FREE_CREDIT
$10 on signup
SETUP
1 line of YAML
01 Drop-in replacement
[REF_001]

The only line you touch.

CUDA is already installed. torch.cuda.is_available() returns True on the first run.

BEFORE.YAML DEFAULT
jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python train.py
AFTER.YAML EXECUTE
jobs:
  train:
    runs-on: [machine, gpu=T4]
    steps:
      - uses: actions/checkout@v4
      - run: python train.py
02 System diagnostic
[REF_002]

Two failure modes.

FAILURE_MODE_01 // GITHUB_HOSTED

Fine until you need more oomph.

The default 2-core runner is enough for a lot of things. But the moment you need to run model inference, build a large Docker image, or test against something that actually resembles production, you're stuck on a 2-core machine with 7GB of RAM.

GitHub does offer larger runners. The 4-core x64 costs $0.012/min. Their GPU runner costs $0.052/min. You can spend that, or you can run a T4 spot here for $0.00480/min.
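Those per-minute rates are easier to compare per job. A quick sketch, assuming a hypothetical 30-minute CI job (the job length is an illustration, not a quoted figure):

```python
# Per-minute rates quoted above (USD/min).
GITHUB_4_CORE = 0.012   # GitHub-hosted 4-core x64 runner
GITHUB_GPU = 0.052      # GitHub-hosted GPU runner
T4_SPOT = 0.00480       # T4 spot runner here

JOB_MINUTES = 30  # hypothetical job length, for illustration

for name, rate in [("GitHub 4-core", GITHUB_4_CORE),
                   ("GitHub GPU", GITHUB_GPU),
                   ("T4 spot", T4_SPOT)]:
    print(f"{name}: ${rate * JOB_MINUTES:.3f} per job")
```

At these rates the same 30-minute job costs $1.56 on GitHub's GPU runner and about $0.14 on a T4 spot, roughly an 11× difference.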

FAILURE_MODE_02 // SELF_HOSTED

Expensive in a different way.

You provision an instance. You install the runner. You write the cleanup scripts. You deal with spot interruptions. You maintain the AMI. You handle GitHub token rotation. It works, and it costs you three afternoons plus a recurring maintenance burden that never quite goes away.

> RESOLUTION

There's a middle option.

03 Execution flow
[REF_003]

Three steps. Two minutes.

01 STEP

Connect_Github_Org

Sign up and authorise machine.dev on your GitHub org. A one-time step.

02 STEP

Modify_Runs_On

Replace ubuntu-latest with the runner you want. Nothing else changes.

03 STEP

Push_And_Execute

A fresh VM launches in under 60 seconds, runs your job, shuts down. Billed only for runtime.

04 Core capabilities
[REF_010]

Six things, every job.

Cold starts under 60 seconds. Ephemeral VMs. CUDA pre-installed. Bundled egress. One-line setup. Rollover billing.

[REF_010] <60s
LAUNCH_TIME

Launches in seconds

From push to running job in under a minute. No pre-warming, no capacity planning, no waiting.

[REF_011] EPHEMERAL
STATE

Clean machine, every job

Every run gets a fresh VM. No shared state, no stale dependencies, no "works on my runner" bugs.

[REF_012] 12.1.0
CUDA

Pre-installed stack

NVIDIA drivers 555.58, CUDA 12.1.0, cuDNN 9.2.1, Container Toolkit. All there before step one.

[REF_013] BUNDLED
EGRESS

No data transfer fees

Ingress and egress are bundled. Pulling a 10GB model adds zero to your invoice.

[REF_014] ONE_LINE
INTEGRATION

Drop-in setup

Change runs-on. That's the full integration. No sidecar, no custom action, no webhook.

[REF_015] NEVER_EXPIRE
CREDITS

Rollover billing

Unused credits carry forward every month. Quiet sprint? You don't lose what you paid for.

05 Compute library
[REF_004]

From T4G to RTX 6000.

Six GPU tiers, spot and on-demand, from 16GB to 96GB VRAM. T4G starts at $0.00372/min on spot — about 14× cheaper than GitHub's GPU runner.

TIER GPU VRAM SPOT
[01] T4G 16GB $0.00372/min
[02] T4 16GB $0.00480/min
[03] L4 24GB $0.00570/min
[04] A10G 24GB $0.01277/min
[05] L40S 48GB $0.01613/min
[06] RTX 6000 96GB $0.01965/min
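Per-minute spot rates can be hard to reason about; converting to hourly cost, and checking the 14× claim against GitHub's $0.052/min GPU runner, is straightforward arithmetic on the table above:

```python
# Spot rates from the table above (USD/min).
spot = {
    "T4G": 0.00372,
    "T4": 0.00480,
    "L4": 0.00570,
    "A10G": 0.01277,
    "L40S": 0.01613,
    "RTX 6000": 0.01965,
}

for gpu, rate in spot.items():
    print(f"{gpu}: ${rate * 60:.4f}/hr")

# GitHub's GPU runner is $0.052/min.
print(f"ratio vs T4G spot: {0.052 / spot['T4G']:.1f}x")
```

The T4G works out to about $0.22/hr, and $0.052 / $0.00372 ≈ 14.0, which is where the 14× figure comes from.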
VIEW_FULL_SPECS
06 Subscription tiers
[REF_005]

Start free. Scale linearly.

Sign up and get $10 in free compute: roughly 45 GPU hours on a spot T4G. No credit card.
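At the T4G spot rate quoted above, the free credit works out as follows:

```python
CREDIT = 10.00       # free signup credit (USD)
T4G_SPOT = 0.00372   # T4G spot rate quoted above (USD/min)

minutes = CREDIT / T4G_SPOT
hours = minutes / 60
print(f"{minutes:.0f} minutes, or about {hours:.0f} GPU hours")
```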

[01] NO_CARD
FREE_TRIAL
$0/mo
$10 VALUE
[02] BONUS_10%
DEVELOPER
$50/mo
$55 VALUE
[03] BONUS_18%
GROWTH
$85/mo
$100 VALUE
[04] BONUS_25%
PRO
$160/mo
$200 VALUE
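The bonus percentages follow directly from each tier's price/value pair, a sketch using the figures listed above:

```python
# (monthly price, credit value) in USD, from the tiers above.
tiers = {
    "DEVELOPER": (50, 55),
    "GROWTH": (85, 100),
    "PRO": (160, 200),
}

for name, (price, value) in tiers.items():
    bonus = (value - price) / price * 100
    print(f"{name}: {bonus:.0f}% bonus")
```

DEVELOPER comes out at 10%, GROWTH at ~18%, and PRO at 25%, matching the labels.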
FULL_PRICING_BREAKDOWN
07 Initialize
[REF_FINAL]

System ready.

READY
> SYSTEM_READY // AWAITING_INPUT

Start running on
real hardware.

$10 free compute. Two minutes to connect your GitHub org. torch.cuda.is_available() == True on the first run.