GPU Acceleration for GitHub Actions Simplified.
Accelerate your machine learning workflows with powerful GPU runners on GitHub Actions—no infrastructure headaches, instant setup.
```yaml
name: Train ML Model

on:
  push:
    branches: [ main ]

jobs:
  train:
    name: Train with GPU
    runs-on:
      - machine
      - gpu=L40S
      - cpu=4
      - ram=32
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Train Model
        run: |
          pip install -r requirements.txt
          python train.py
```
The Problem
Teams wait weeks for computing resources, hampering progress. GitHub Actions is a powerful tool, but lacks robust GPU support.
- Slow CPU jobs waste development time
- Complex self-hosted runners distract from core work
- Delayed infrastructure hurts innovation
- Resource bottlenecks create missed deadlines
The Solution
Machine provides GPU-powered GitHub Actions runners with instant setup. Get immediate access to acceleration without infrastructure complexity.
- On-demand GPU runners available in minutes
- Simple integration with existing workflows
- Pay only for what you use, no contracts
- Secure and reliable with full GitHub integration
Supercharge Your Workflows
Machine is designed specifically for machine learning and data science teams who need reliable GPU access
Lightning-Fast GPU Workflows
Instantly speed up model training, inference, batch processing, and simulations — up to 100× faster than CPU-only workflows.
Significant Cost Savings
Automatically leverage AWS Spot Instances globally, saving up to 85% compared to GitHub's default GPU runners.
No Operational Burden
Managed end-to-end. We provision, configure, and tear down runners, so you can focus solely on your workflows.
Secure and Ephemeral
Every workflow runs on isolated, ephemeral VMs, ensuring maximum security and zero contamination between jobs.
Global and Reliable
Available in multiple AWS regions for low-latency, sovereign access worldwide.
Native GitHub Integration
Works seamlessly with GitHub Actions. Just add a simple runs-on tag — no need to change your existing CI/CD pipelines.
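For instance, a job that currently uses a GitHub-hosted runner only needs its `runs-on` value swapped for Machine labels; every other line of the workflow stays as it is. A minimal sketch, reusing the label syntax from the examples on this page:

```yaml
jobs:
  train:
    # Before: runs-on: ubuntu-latest
    # After: select a Machine GPU runner via labels
    runs-on:
      - machine
      - gpu=L40S
      - cpu=4
      - ram=32
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      - run: python train.py
```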
How Machine Works
Three simple steps to accelerate your GitHub workflows with GPU power
Connect GitHub
Install our GitHub App or connect via OAuth in seconds. Machine securely integrates with your GitHub account to access your repositories and workflows.
No complex setup or infrastructure changes required—just a simple authorization process that gets you up and running quickly.
Configure Your Workflow
Just update your GitHub Actions YAML workflow file:
```yaml
jobs:
  gpu-job:
    name: Train with GPU
    runs-on:
      - machine
      - gpu=L40S
      - cpu=4
      - ram=32
      - tenancy=spot
```
Accelerate Your Workflow
Push your code, and watch Machine rapidly execute your GPU-accelerated workflows. Save time, iterate faster, and maximize developer productivity.
With access to high-performance computing resources on demand, your team can focus on innovation rather than waiting for slow CI/CD runs or managing infrastructure.
Why Machine?
Compare our platform with other options to see why developers choose Machine for their ML CI/CD needs.
Feature | Machine.dev (Managed GPU Runners) | Self-Hosted (Your Own Runners) | GitHub-Hosted (Enterprise GPU Runners) |
---|---|---|---|
GPU Availability | Multiple GPU types | Depends on you | Limited options |
Setup Time | Minutes | Days | Hours (Enterprise only) |
Maintenance | Fully managed | Your responsibility | Managed (limited) |
Cost | Pay only for what you use | High upfront + ongoing | Prohibitively expensive |
ML Optimization | Pre-installed tools | DIY optimization | Limited |
Choose a plan that works for you
Start with our Pay-As-You-Go option or choose a plan with included credits for the best value.
Pay-As-You-Go
What's included
- $0.005 per machine credit
- Max 2 concurrent machines
11,000 credits per month
What's included
- Discount ~9%
- $0.004545 effective cost per machine credit
- Max 3 concurrent machines
- 11,000 included credits per month
- Overages revert to $0.005 per machine credit
20,000 credits per month
What's included
- Discount 15%
- $0.00425 effective cost per machine credit
- Max 5 concurrent machines
- 20,000 included credits per month
- Overages revert to $0.005 per machine credit
40,000 credits per month
What's included
- Discount 20%
- $0.004 effective cost per machine credit
- Max 10 concurrent machines
- 40,000 included credits per month
- Overages revert to $0.005 per machine credit
Need a custom plan?
If you need more credits or concurrent machines for your team, contact us for a custom enterprise solution.
Contact Sales
Available GPU Runners
Choose from a variety of GPU instances to power your machine learning and AI workflows
GPU | vCPU | RAM (GB) | VRAM (GB) | Arch | Spot Credits | On-Demand Credits |
---|---|---|---|---|---|---|
T4 | 4 | 16 | 16 | X64 | 2 | 4 |
T4 | 8 | 32 | 16 | X64 | 2 | 6 |
T4 | 16 | 64 | 16 | X64 | 3 | 9 |
L4 | 4 | 16 | 24 | X64 | 2 | 6 |
L4 | 8 | 32 | 24 | X64 | 2 | 7 |
L4 | 16 | 64 | 24 | X64 | 3 | 9 |
L40S | 4 | 32 | 48 | X64 | 3 | 14 |
L40S | 8 | 64 | 48 | X64 | 5 | 15 |
L40S | 16 | 128 | 48 | X64 | 6 | 21 |
T4G | 4 | 8 | 16 | ARM64 | 1 | 3 |
T4G | 8 | 16 | 16 | ARM64 | 2 | 4 |
T4G | 16 | 32 | 16 | ARM64 | 3 | 6 |
A10G | 4 | 16 | 24 | X64 | 3 | 7 |
A10G | 8 | 32 | 24 | X64 | 3 | 9 |
A10G | 16 | 64 | 24 | X64 | 4 | 11 |
TRAINIUM | 8 | 32 | 32 | X64 | 1 | 9 |
INFERENTIA2 | 4 | 16 | 32 | X64 | 1 | 6 |
INFERENTIA2 | 32 | 128 | 32 | X64 | 3 | 14 |
Note: These rates are indicative only, are subject to change with AWS pricing, and represent the lowest cost achievable across all regions at the time of writing.
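To illustrate how a row of the table translates into runner labels, the sketch below requests the 8-vCPU T4G (ARM64) configuration on spot capacity. It reuses the label syntax shown in the workflow examples above and assumes the runner image ships with NVIDIA drivers so `nvidia-smi` is available:

```yaml
jobs:
  arm-gpu-check:
    # T4G row: 8 vCPU, 16 GB RAM, 16 GB VRAM, ARM64, spot pricing
    runs-on:
      - machine
      - gpu=T4G
      - cpu=8
      - ram=16
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      # Assumes NVIDIA drivers are pre-installed on the runner image
      - run: nvidia-smi
```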
Need a custom runner configuration? Contact us
Use Cases
Discover how Machine.dev's GPU-powered runners enable advanced AI and machine learning workflows.
Conversational AI Training
Quickly build specialized chatbots with supervised fine-tuning on conversational data. Train models like Llama 3.2 efficiently using LoRA optimizations and push directly to Hugging Face.
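A minimal sketch of such a workflow is shown below; the training script, base model ID, output path, and the `HF_TOKEN` secret are illustrative placeholders rather than parts of a provided template, and the runner labels follow the syntax used elsewhere on this page:

```yaml
name: Fine-tune chat model
on:
  workflow_dispatch:  # start training runs manually

jobs:
  finetune:
    runs-on:
      - machine
      - gpu=L40S
      - cpu=8
      - ram=64
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: LoRA fine-tuning
        run: |
          pip install -r requirements.txt
          # finetune_lora.py is a placeholder for your own training script
          python finetune_lora.py --base-model meta-llama/Llama-3.2-1B --output-dir ./model
      - name: Push to Hugging Face
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}  # repository secret holding a Hugging Face token
        run: |
          pip install huggingface_hub
          huggingface-cli upload my-org/my-chat-model ./model
```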
Explore template
Parallel Hyperparameter Tuning
Accelerate model optimization by tuning multiple hyperparameter combinations simultaneously. Discover optimal configurations faster with parallel GPU-powered training jobs.
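One natural way to express this in GitHub Actions is a build matrix, which fans each hyperparameter combination out to its own GPU runner; the script name and flags below are placeholders:

```yaml
jobs:
  tune:
    strategy:
      matrix:
        learning_rate: [0.0001, 0.0003]
        batch_size: [16, 32]
    runs-on:
      - machine
      - gpu=T4
      - cpu=4
      - ram=16
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      - name: Train one combination
        run: |
          pip install -r requirements.txt
          # train.py and its flags are placeholders for your own tuning script
          python train.py --lr ${{ matrix.learning_rate }} --batch-size ${{ matrix.batch_size }}
```

How many combinations actually run in parallel is capped by your plan's concurrent-machine limit; the remaining jobs queue until a runner frees up.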
Explore template
Batch Object Detection
Process thousands of images with GPU-accelerated object detection. Automatically identify, annotate, and catalog objects with high precision and lightning-fast speed.
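A sketch of one way to structure this is below; `detect.py` and the `./images` directory are placeholders, and the annotated results are kept as a workflow artifact:

```yaml
jobs:
  detect:
    runs-on:
      - machine
      - gpu=T4
      - cpu=8
      - ram=32
      - tenancy=spot
    steps:
      - uses: actions/checkout@v3
      - name: Run detection over the image batch
        run: |
          pip install -r requirements.txt
          # detect.py is a placeholder for your own inference script
          python detect.py --input ./images --output ./annotations
      - name: Upload annotations
        uses: actions/upload-artifact@v4
        with:
          name: annotations
          path: ./annotations
```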
Explore template
LLM Benchmark Arena
Compare language models head-to-head across multiple benchmarks. Generate comprehensive performance visualizations to identify the best model for your specific use case.
Explore template
GRPO Fine-tuning
Enhance small language models with powerful reasoning capabilities using Group Relative Policy Optimization. Leverage spot-priced GPUs for cost-effective training that automatically handles interruptions.
Explore template
Build Your Own GPU Workflow
Start creating powerful GPU-accelerated workflows with Machine.dev today.
Get Started
Join The Beta Waitlist
Sign up now to gain:
- Early access to the platform
- Free GPU credits to use immediately
- Priority support from our team
Ready to revolutionize your workflows?
Join the beta waitlist now and be among the first to experience accelerated CI/CD. Secure your spot today and enjoy free GPU credits!