machine.dev offers CPU runners from 2 to 64 vCPUs in X64 (Intel/AMD) and ARM64 (Graviton) flavors. All runners run Ubuntu 22.04 LTS with a configurable gp3 EBS root volume and high-bandwidth networking.
Configurations and pricing
Live prices and current spot interruption rates are at machine.dev/runners. The tables below show the best rates across all regions.
X64 (Intel/AMD)
| vCPU | RAM | Spot $/min | On-demand $/min |
|---|---|---|---|
| 2 | 4 GB | $0.00032 | $0.00297 |
| 4 | 8 GB | $0.00064 | $0.00595 |
| 8 | 16 GB | $0.00128 | $0.01190 |
| 16 | 32 GB | $0.00255 | $0.02380 |
| 32 | 64 GB | $0.00559 | $0.04760 |
| 48 | 96 GB | $0.00766 | $0.07140 |
| 64 | 128 GB | $0.01021 | $0.09520 |
ARM64 (Graviton)
| vCPU | RAM | Spot $/min | On-demand $/min |
|---|---|---|---|
| 2 | 4 GB | $0.00028 | $0.00241 |
| 4 | 8 GB | $0.00057 | $0.00482 |
| 8 | 16 GB | $0.00103 | $0.00963 |
| 16 | 32 GB | $0.00207 | $0.01927 |
| 32 | 64 GB | $0.00413 | $0.03854 |
| 48 | 96 GB | $0.00620 | $0.05781 |
| 64 | 128 GB | $0.00827 | $0.07708 |
Choose an architecture
| Architecture | Best for | Tradeoffs |
|---|---|---|
| X64 (Intel/AMD) | Maximum compatibility — most prebuilt binaries, Docker images, and CI tooling assume X64. | ~15–20% more expensive than ARM64 at the same vCPU count. |
| ARM64 (Graviton) | Cost. Modern Linux software runs cleanly on ARM. Excellent for Go, Rust, Java, Python, and most containerized workloads. | A handful of legacy x86-only binaries and older proprietary software won’t run. |
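If you want to exercise both architectures in one workflow, the architecture label can be combined with a build matrix. A sketch, assuming the default X64 can also be requested explicitly as architecture=x64:

```yaml
jobs:
  build:
    strategy:
      matrix:
        arch: [x64, arm64]  # build on both architectures
    # architecture=x64 is assumed here; X64 is the default when omitted
    runs-on: [machine, cpu=8, "architecture=${{ matrix.arch }}"]
    steps:
      - uses: actions/checkout@v4
      - run: make -j$(nproc)
```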
Storage
Every runner gets a 100 GB gp3 EBS root volume by default with 6,000 IOPS and 250 MB/s throughput. You can scale up to 16 TB with custom IOPS and throughput using the disk_size, disk_iops, and disk_throughput labels:
```yaml
runs-on:
  - machine
  - cpu=16
  - disk_size=500        # 500 GB root volume
  - disk_iops=10000      # 10,000 IOPS
  - disk_throughput=750  # 750 MB/s throughput
```
| Label | Default | Range |
|---|---|---|
| disk_size=<GB> | 100 | 1 – 16,384 |
| disk_iops=<IOPS> | 6,000 | 6,000 – 16,000 |
| disk_throughput=<MB/s> | 250 | 250 – 1,000 |
Defaults are included at no additional charge. Increasing IOPS above 6,000 or throughput above 250 MB/s incurs prorated EBS charges. See Pricing for the EBS rate breakdown.
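To confirm the provisioned volume from inside a job, a quick df step works. A minimal sketch; the job name is illustrative:

```yaml
jobs:
  check-disk:
    runs-on: [machine, cpu=16, disk_size=500]
    steps:
      - run: df -h /  # the root filesystem should report roughly 500 GB
```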
Instance metrics
machine.dev collects metrics by default for every job and renders them as sparkline charts on the dashboard. Collected metrics include CPU utilization, memory usage, disk I/O, and network bytes in/out.
Control collection per job with the metrics and metrics_interval labels:
```yaml
runs-on:
  - machine
  - cpu=16
  - metrics=true         # Enable (default)
  - metrics_interval=10  # Collect every 10 seconds (default: 60)
```
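Conversely, collection can be turned off for jobs where you don't need the charts. This assumes metrics=false is the inverse of the default shown above:

```yaml
runs-on:
  - machine
  - cpu=16
  - metrics=false  # skip metrics collection for this job
```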
Use it in a workflow
Default (X64):
```yaml
jobs:
  build:
    runs-on: [machine, cpu=16]  # 16 vCPU X64, 32 GB RAM
    steps:
      - uses: actions/checkout@v4
      - run: make -j$(nproc)
```
Switch to ARM64:

```yaml
runs-on: [machine, cpu=16, architecture=arm64]
```

Use spot pricing:

```yaml
runs-on: [machine, cpu=16, tenancy=spot]
```

Pin one or more regions:

```yaml
runs-on: [machine, cpu=16, regions=us-east-1,us-east-2]
```
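Labels compose, so a cost-tuned job can combine several of the options above in a single runs-on line. A sketch using only labels shown on this page:

```yaml
# 16 vCPU ARM64 spot runner pinned to us-east-1 with a 200 GB root volume
runs-on: [machine, cpu=16, architecture=arm64, tenancy=spot, regions=us-east-1, disk_size=200]
```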
See Configuration options for every available label.
When to use CPU runners vs GPU runners
Use a GPU runner only if your code uses CUDA, calls a GPU library (e.g., PyTorch or TensorFlow with CUDA enabled), or invokes nvidia-smi. Otherwise a CPU runner is faster to start, cheaper, and simpler.
See the CPU vs GPU decision guide for a fuller breakdown.
Next steps
- GPU Runners — when you need CUDA or accelerated ML
- Configuration options — every label for runs-on
- Cost Optimization — spot, region picking, right-sizing
- Pricing — full per-minute rates including subscription plans