GitHub Actions
Add human QA testing to your CI/CD pipeline with the Runhuman GitHub Action.
Setup: Add `RUNHUMAN_API_KEY` to your repository secrets (get your key at runhuman.com/settings/api-keys).
Quick Start
The Runhuman Action automatically:
- Finds or creates a project for your repository
- Tests linked GitHub issues from your PRs
- Manages labels based on test outcomes
- Reports results with screenshots and videos
```yaml
# .github/workflows/qa.yml
name: QA Test

on:
  pull_request:
    branches: [main]

jobs:
  qa:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          on-success-add-labels: '["qa:passed"]'
          on-failure-add-labels: '["qa:failed"]'
          fail-on-failure: true
```
Authentication
User-Scoped API Keys
Runhuman uses user-scoped API keys that work across all your projects:
- Get your key: Visit runhuman.com/settings/api-keys
- Add to repository: Store as `RUNHUMAN_API_KEY` in your repository secrets
- Use anywhere: The same key works for all your repositories
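You can add the secret through the repository settings UI, or with the GitHub CLI; a minimal sketch (the repository slug and key value are placeholders):

```bash
# Store your user-scoped Runhuman key as a repository secret (placeholder values)
gh secret set RUNHUMAN_API_KEY --repo your-org/your-repo --body "qa_live_xxxxxxxx"
```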
Automatic Project Management
The action automatically:
- Detects your repository (`owner/repo`)
- Finds an existing project or creates one
- Runs tests against that project
No manual project configuration needed!
Usage Examples
Test Linked Issues from PRs (Recommended)
Automatically test all issues linked in the PR body:
```yaml
name: PR Workflow

on:
  pull_request:
    branches: [main]

jobs:
  deploy-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Deploy to staging
      - name: Deploy
        run: ./deploy-staging.sh

      # Test linked issues
      - name: QA Test
        uses: volter-ai/runhuman-action@v1
        with:
          url: ${{ vars.STAGING_URL }}
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          target-duration-minutes: 5
          on-success-add-labels: '["qa:passed"]'
          on-success-remove-labels: '["needs-qa"]'
          on-failure-add-labels: '["qa:failed"]'
          on-not-testable-add-labels: '["not-testable"]'
          fail-on-failure: true
```
How it works:
- Action finds issues linked in PR (e.g., “Closes #123”)
- AI analyzes which issues are testable by humans
- Creates QA jobs for testable issues
- Updates labels based on test outcomes
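The PR body just needs a standard closing keyword for the action to pick up the issue. For example (the issue number is illustrative):

```markdown
## Summary
Fixes the unresponsive checkout button on Safari.

Closes #123
```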
Test Specific Issues
Directly test specific issue numbers:
```yaml
- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    issue-numbers: '[123, 456, 789]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```
Test a URL with Custom Instructions
Run an ad-hoc QA test without issues:
```yaml
- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    description: |
      Test the new checkout flow:
      1. Add items to cart
      2. Complete checkout
      3. Verify confirmation email
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    output-schema: |
      {
        "checkoutWorks": {
          "type": "boolean",
          "description": "Checkout flow works end-to-end"
        },
        "issuesFound": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Any bugs or issues discovered"
        }
      }
```
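If you give the action step an `id` (for example `id: qa`), you can read the structured answers back in a later step. A minimal sketch, assuming the fields defined in `output-schema` appear in the `results` output JSON (see Outputs below):

```yaml
- name: Check structured results
  if: always()
  env:
    QA_RESULTS: ${{ steps.qa.outputs.results }}
  run: |
    # Assumption: the output-schema fields are present in the results JSON;
    # adjust the jq path to match your actual payload
    echo "$QA_RESULTS" | jq '.checkoutWorks'
```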
Combined: PRs + Issues + Custom Tests
Combine issue testing with additional custom tests:
```yaml
- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    issue-numbers: '[789]'  # Also test this specific issue
    description: 'Additionally, verify the header renders correctly on mobile'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```
Exploratory QA on Production
Run comprehensive exploratory tests on a schedule:
```yaml
name: Daily Production QA

on:
  schedule:
    - cron: '0 9 * * *'  # 9 AM daily
  workflow_dispatch:

jobs:
  qa:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://example.com
          description: |
            Explore the application and report any issues:
            - Test core user flows
            - Check for visual bugs
            - Verify responsiveness
            - Test on different browsers
          target-duration-minutes: 15
          can-create-github-issues: true  # Auto-create issues for bugs
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          fail-on-error: false  # Don't fail on QA findings
```
Configuration
Required Inputs
| Input | Description |
|---|---|
| `url` | URL to test (must be publicly accessible) |
| `api-key` | User-scoped Runhuman API key from Settings |
Test Sources
At least one of these is required:
| Input | Description |
|---|---|
| `description` | Custom test instructions |
| `pr-numbers` | PR numbers to find linked issues (JSON array: `'[123, 456]'`) |
| `issue-numbers` | Issue numbers to test directly (JSON array: `'[789, 101]'`) |
Optional Inputs
| Input | Default | Description |
|---|---|---|
| `github-token` | `${{ github.token }}` | GitHub token for reading PR/issue data |
| `api-url` | `https://runhuman.com` | Runhuman API base URL |
| `target-duration-minutes` | `5` | Target test duration (1-60 minutes) |
| `screen-size` | `desktop` | Screen size (`desktop`, `laptop`, `tablet`, `mobile`) |
| `output-schema` | - | JSON schema for structured test results |
| `can-create-github-issues` | `false` | Allow the tester to create GitHub issues for bugs found |
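For example, a mobile-focused run might combine several of these inputs (the URL and description are placeholders):

```yaml
- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    description: 'Verify the signup flow works on a small screen'
    screen-size: mobile              # one of: desktop, laptop, tablet, mobile
    target-duration-minutes: 10      # 1-60 minutes
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```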
Label Management
Automatically update issue labels based on test outcomes:
| Input | Description |
|---|---|
| `on-success-add-labels` | Labels to add on test success (JSON array: `'["qa:passed"]'`) |
| `on-success-remove-labels` | Labels to remove on test success |
| `on-failure-add-labels` | Labels to add on test failure |
| `on-failure-remove-labels` | Labels to remove on test failure |
| `on-not-testable-add-labels` | Labels to add when issue is not testable |
| `on-not-testable-remove-labels` | Labels to remove when issue is not testable |
| `on-timeout-add-labels` | Labels to add on test timeout |
| `on-timeout-remove-labels` | Labels to remove on test timeout |
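A fuller label flow, including the timeout case, could look like this (the label names are just examples):

```yaml
- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    # Move issues between QA states as results come in
    on-success-add-labels: '["qa:passed"]'
    on-success-remove-labels: '["needs-qa", "qa:failed"]'
    on-failure-add-labels: '["qa:failed"]'
    on-failure-remove-labels: '["qa:passed"]'
    on-timeout-add-labels: '["qa:timeout"]'
```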
Workflow Control
Control workflow behavior:
| Input | Default | Description |
|---|---|---|
| `fail-on-error` | `true` | Fail workflow on system errors |
| `fail-on-failure` | `true` | Fail workflow if any test fails |
| `fail-on-timeout` | `false` | Fail workflow if tester times out |
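To gate merges strictly on QA, for instance, you could turn all three on (shown with explicit values for clarity):

```yaml
- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    fail-on-error: true      # default
    fail-on-failure: true    # default
    fail-on-timeout: true    # off by default
```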
Outputs
Access test results in subsequent workflow steps:
```yaml
- name: Run QA Test
  id: qa
  uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}

- name: Report Results
  run: |
    echo "Status: ${{ steps.qa.outputs.status }}"
    echo "Success: ${{ steps.qa.outputs.success }}"
    echo "Tested: ${{ steps.qa.outputs.tested-issues }}"
    echo "Passed: ${{ steps.qa.outputs.passed-issues }}"
    echo "Failed: ${{ steps.qa.outputs.failed-issues }}"
    echo "Cost: ${{ steps.qa.outputs.cost-usd }}"
```
| Output | Description |
|---|---|
| `status` | Test status (`completed`, `error`, `timeout`, `no-issues`) |
| `success` | Whether all tests passed (`true`/`false`) |
| `tested-issues` | JSON array of tested issue numbers |
| `passed-issues` | JSON array of passed issue numbers |
| `failed-issues` | JSON array of failed issue numbers |
| `not-testable-issues` | JSON array of not-testable issue numbers |
| `timed-out-issues` | JSON array of timed-out issue numbers |
| `results` | Full JSON results object |
| `cost-usd` | Total cost in USD |
| `duration-seconds` | Total duration in seconds |
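Outputs also work in `if:` conditions, for example to comment on the PR when QA fails. A sketch using the GitHub CLI; it assumes the workflow runs on `pull_request` events, and the default token may need `pull-requests: write` permission:

```yaml
- name: Comment on PR if QA failed
  if: steps.qa.outputs.success != 'true'
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    # Post a short summary comment with the failed issue numbers
    gh pr comment ${{ github.event.pull_request.number }} \
      --repo ${{ github.repository }} \
      --body "Runhuman QA failed. Failed issues: ${{ steps.qa.outputs.failed-issues }}"
```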
How It Works
- Collect Issues: Finds issues from PRs (via body/title references) and explicit `issue-numbers`
- Deduplicate: Removes duplicate issues
- Auto-Create Project: Finds or creates a project for your repository
- Analyze Testability: AI determines which issues can be tested by a human
- Run Tests: Creates Runhuman jobs for each testable issue
- Apply Labels: Updates issue labels based on outcomes
- Report Results: Sets outputs and optionally fails the workflow
Writing Testable Issues
Make your issues easy to test by including:
✅ Good Example
```markdown
## Bug Description
The checkout button is unresponsive on Safari.

## Test URL
https://staging.myapp.com/checkout

## Steps to Reproduce
1. Add items to cart
2. Go to checkout
3. Click "Place Order"
4. Nothing happens

## Expected Behavior
Order should be submitted and confirmation shown.
```
Key Elements
- Clear description of what to test
- Test URL (specific page where bug occurs)
- Reproduction steps (numbered list)
- Expected behavior (what should happen)
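One way to encourage this structure is an issue template that prompts for these elements; a sketch of a hypothetical `.github/ISSUE_TEMPLATE/bug_report.md` (the name and wording are up to you):

```markdown
---
name: Bug report
about: Report a bug with enough detail for human QA
---

## Bug Description

## Test URL

## Steps to Reproduce
1.

## Expected Behavior
```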
Workflow Triggers
After Deploy (Recommended)
Test after your deployment completes:
```yaml
name: Deploy and QA

on:
  pull_request:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        run: ./deploy-staging.sh

  qa:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```
On PR Merge
Test when merging to main:
```yaml
on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  qa:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://example.com  # Production URL
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```
Manual Trigger
Test on demand via workflow_dispatch:
```yaml
name: Manual QA Test

on:
  workflow_dispatch:
    inputs:
      url:
        description: 'URL to test'
        required: true
      description:
        description: 'Test instructions'
        required: true

jobs:
  qa:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: ${{ inputs.url }}
          description: ${{ inputs.description }}
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```
Scheduled Tests
Run periodic exploratory QA:
```yaml
on:
  schedule:
    - cron: '0 */6 * * *'  # Every 6 hours
```
Advanced Usage
Parallel Tests
Test multiple environments simultaneously:
```yaml
jobs:
  test-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com
          description: 'Test staging environment'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

  test-production:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://example.com
          description: 'Smoke test production'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```
Conditional Testing
Only run tests when specific files change:
```yaml
on:
  pull_request:
    paths:
      - 'src/checkout/**'
      - 'src/payment/**'

jobs:
  qa-checkout:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com/checkout
          description: 'Test checkout and payment flows'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```
Continue on Failure
Keep workflow running even if QA fails:
```yaml
- uses: volter-ai/runhuman-action@v1
  continue-on-error: true
  with:
    url: https://staging.example.com
    description: 'Exploratory testing'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    fail-on-failure: false
```
Troubleshooting
| Problem | Solution |
|---|---|
| "No testable issues found" | Issues may not include test URLs or reproduction steps. Add clear test instructions to issue bodies. |
| "Project not found" | The action auto-creates projects. If this fails, check that your API key is valid. |
| Action times out | Increase `target-duration-minutes` or simplify test instructions. |
| URL not accessible | Ensure the URL is publicly accessible (not localhost or a private network). |
| Authentication failed | Verify `RUNHUMAN_API_KEY` is set in repository secrets and starts with `qa_live_`. |
| High costs | Run tests only on specific branches/paths, or reduce `target-duration-minutes`. |
Migration from Old Actions
If you’re using the legacy runhuman-issue-tester-action or runhuman-qa-test-action:
Old (deprecated):
```yaml
- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    test-url: ${{ vars.RUNHUMAN_TESTING_URL }}
```
New (unified):
```yaml
- uses: volter-ai/runhuman-action@v1
  with:
    url: ${{ vars.RUNHUMAN_TESTING_URL }}
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```
Changes:
- `test-url` → `url`
- Add `pr-numbers` for PR-based testing
- Projects are auto-created (no `project-id` needed)
- User-scoped API keys (one key for all projects)
Next Steps
| Topic | Link |
|---|---|
| More testing recipes and patterns | Cookbook |
| Full technical specification | Reference |
| Direct API integration | REST API |