GitHub Actions

Add human QA testing to your CI/CD pipeline with the Runhuman GitHub Action.

Setup: Add RUNHUMAN_API_KEY to your repository secrets (get your key).


Quick Start

The Runhuman Action automatically:

  • Finds or creates a project for your repository
  • Tests linked GitHub issues from your PRs
  • Manages labels based on test outcomes
  • Reports results with screenshots and videos
# .github/workflows/qa.yml
name: QA Test
on:
  pull_request:
    branches: [main]

jobs:
  qa:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          on-success-add-labels: '["qa:passed"]'
          on-failure-add-labels: '["qa:failed"]'
          fail-on-failure: true

Authentication

User-Scoped API Keys

Runhuman uses user-scoped API keys that work across all your projects:

  1. Get your key: Visit runhuman.com/settings/api-keys
  2. Add to repository: Store as RUNHUMAN_API_KEY in your repository secrets
  3. Use anywhere: The same key works for all your repositories

Automatic Project Management

The action automatically:

  • Detects your repository (owner/repo)
  • Finds an existing project or creates one
  • Runs tests against that project

No manual project configuration needed!


Usage Examples

Automatically test all issues linked in the PR body:

name: PR Workflow
on:
  pull_request:
    branches: [main]

jobs:
  deploy-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Deploy to staging
      - name: Deploy
        run: ./deploy-staging.sh

      # Test linked issues
      - name: QA Test
        uses: volter-ai/runhuman-action@v1
        with:
          url: ${{ vars.STAGING_URL }}
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          target-duration-minutes: 5
          on-success-add-labels: '["qa:passed"]'
          on-success-remove-labels: '["needs-qa"]'
          on-failure-add-labels: '["qa:failed"]'
          on-not-testable-add-labels: '["not-testable"]'
          fail-on-failure: true

How it works:

  1. Action finds issues linked in PR (e.g., “Closes #123”)
  2. AI analyzes which issues are testable by humans
  3. Creates QA jobs for testable issues
  4. Updates labels based on test outcomes

Test Specific Issues

Directly test specific issue numbers:

- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    issue-numbers: '[123, 456, 789]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Test a URL with Custom Instructions

Run an ad-hoc QA test without issues:

- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    description: |
      Test the new checkout flow:
      1. Add items to cart
      2. Complete checkout
      3. Verify confirmation email
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    output-schema: |
      {
        "checkoutWorks": {
          "type": "boolean",
          "description": "Checkout flow works end-to-end"
        },
        "issuesFound": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Any bugs or issues discovered"
        }
      }
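
If later steps need those structured fields, they can be read back from the action's `results` output. This is a sketch only: it assumes the action step carries `id: qa`, and that the schema fields appear at the top level of `results`; the actual payload shape may differ.

```yaml
# Assumption: "id: qa" on the action step and a top-level ".checkoutWorks"
# field in the results object -- neither is a documented guarantee.
- name: Inspect structured results
  if: always()
  run: |
    echo '${{ steps.qa.outputs.results }}' > results.json
    jq '.checkoutWorks, .issuesFound' results.json
```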

Combined: PRs + Issues + Custom Tests

Combine issue testing with additional custom tests:

- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    issue-numbers: '[789]'  # Also test this specific issue
    description: 'Additionally, verify the header renders correctly on mobile'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Exploratory QA on Production

Run comprehensive exploratory tests on a schedule:

name: Daily Production QA
on:
  schedule:
    - cron: '0 9 * * *'  # 9 AM daily
  workflow_dispatch:

jobs:
  qa:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://example.com
          description: |
            Explore the application and report any issues:
            - Test core user flows
            - Check for visual bugs
            - Verify responsiveness
            - Test on different browsers
          target-duration-minutes: 15
          can-create-github-issues: true  # Auto-create issues for bugs
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          fail-on-error: false  # Don't fail on QA findings

Configuration

Required Inputs

| Input | Description |
| --- | --- |
| url | URL to test (must be publicly accessible) |
| api-key | User-scoped Runhuman API key from Settings |

Test Sources

At least one of these is required:

| Input | Description |
| --- | --- |
| description | Custom test instructions |
| pr-numbers | PR numbers to find linked issues (JSON array: '[123, 456]') |
| issue-numbers | Issue numbers to test directly (JSON array: '[789, 101]') |

Optional Inputs

| Input | Default | Description |
| --- | --- | --- |
| github-token | ${{ github.token }} | GitHub token for reading PR/issue data |
| api-url | https://runhuman.com | Runhuman API base URL |
| target-duration-minutes | 5 | Target test duration (1-60 minutes) |
| screen-size | desktop | Screen size (desktop, laptop, tablet, mobile) |
| output-schema | - | JSON schema for structured test results |
| can-create-github-issues | false | Allow tester to create GitHub issues for bugs found |

Label Management

Automatically update issue labels based on test outcomes:

| Input | Description |
| --- | --- |
| on-success-add-labels | Labels to add on test success (JSON array: '["qa:passed"]') |
| on-success-remove-labels | Labels to remove on test success |
| on-failure-add-labels | Labels to add on test failure |
| on-failure-remove-labels | Labels to remove on test failure |
| on-not-testable-add-labels | Labels to add when issue is not testable |
| on-not-testable-remove-labels | Labels to remove when issue is not testable |
| on-timeout-add-labels | Labels to add on test timeout |
| on-timeout-remove-labels | Labels to remove on test timeout |
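
The timeout hooks follow the same pattern as the success and failure hooks, though none of the examples above exercise them. A minimal sketch combining the two (the label names are illustrative, not required values):

```yaml
- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    on-timeout-add-labels: '["qa:timeout"]'    # mark issues the tester could not finish
    on-timeout-remove-labels: '["needs-qa"]'
    fail-on-timeout: false                     # the documented default; shown for clarity
```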

Workflow Control

Control workflow behavior:

| Input | Default | Description |
| --- | --- | --- |
| fail-on-error | true | Fail workflow on system errors |
| fail-on-failure | true | Fail workflow if any test fails |
| fail-on-timeout | false | Fail workflow if tester times out |

Outputs

Access test results in subsequent workflow steps:

- name: Run QA Test
  id: qa
  uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}

- name: Report Results
  run: |
    echo "Status: ${{ steps.qa.outputs.status }}"
    echo "Success: ${{ steps.qa.outputs.success }}"
    echo "Tested: ${{ steps.qa.outputs.tested-issues }}"
    echo "Passed: ${{ steps.qa.outputs.passed-issues }}"
    echo "Failed: ${{ steps.qa.outputs.failed-issues }}"
    echo "Cost: ${{ steps.qa.outputs.cost-usd }}"

| Output | Description |
| --- | --- |
| status | Test status (completed, error, timeout, no-issues) |
| success | Whether all tests passed (true/false) |
| tested-issues | JSON array of tested issue numbers |
| passed-issues | JSON array of passed issue numbers |
| failed-issues | JSON array of failed issue numbers |
| not-testable-issues | JSON array of not-testable issue numbers |
| timed-out-issues | JSON array of timed-out issue numbers |
| results | Full JSON results object |
| cost-usd | Total cost in USD |
| duration-seconds | Total duration in seconds |
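
Beyond echoing them, these outputs can gate later steps. A sketch that fails the job explicitly when any linked issue failed QA (it assumes the action step uses `id: qa`, as in the example above):

```yaml
- name: Gate on QA outcome
  if: steps.qa.outputs.success != 'true'
  run: |
    # failed-issues is a JSON array per the outputs table, e.g. [123, 456]
    echo "QA failed for issues: ${{ steps.qa.outputs.failed-issues }}"
    exit 1
```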

How It Works

  1. Collect Issues: Finds issues from PRs (via body/title references) and explicit issue-numbers
  2. Deduplicate: Removes duplicate issues
  3. Auto-Create Project: Finds or creates a project for your repository
  4. Analyze Testability: AI determines which issues can be tested by a human
  5. Run Tests: Creates Runhuman jobs for each testable issue
  6. Apply Labels: Updates issue labels based on outcomes
  7. Report Results: Sets outputs and optionally fails the workflow

Writing Testable Issues

Make your issues easy to test by including:

✅ Good Example

## Bug Description
The checkout button is unresponsive on Safari.

## Test URL
https://staging.myapp.com/checkout

## Steps to Reproduce
1. Add items to cart
2. Go to checkout
3. Click "Place Order"
4. Nothing happens

## Expected Behavior
Order should be submitted and confirmation shown.

Key Elements

  • Clear description of what to test
  • Test URL (specific page where bug occurs)
  • Reproduction steps (numbered list)
  • Expected behavior (what should happen)
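
One way to make these elements routine is a GitHub issue form that requires the test URL and reproduction steps up front. A sketch (the file path and field ids are illustrative):

```yaml
# .github/ISSUE_TEMPLATE/bug-report.yml (illustrative)
name: Bug report
description: Report a bug in a QA-testable format
body:
  - type: input
    id: test-url
    attributes:
      label: Test URL
      description: Specific page where the bug occurs
    validations:
      required: true
  - type: textarea
    id: steps
    attributes:
      label: Steps to Reproduce
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: Expected Behavior
    validations:
      required: true
```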

Workflow Triggers

Test after your deployment completes:

name: Deploy and QA
on:
  pull_request:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        run: ./deploy-staging.sh

  qa:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
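
If the deploy step computes the staging URL rather than using a fixed one, it can be handed to the QA job through job outputs instead of being hard-coded. A sketch (the `url` output name is illustrative):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    outputs:
      url: ${{ steps.deploy.outputs.url }}
    steps:
      - name: Deploy
        id: deploy
        run: |
          ./deploy-staging.sh
          # Publish the deployed URL as a job output
          echo "url=https://staging.example.com" >> "$GITHUB_OUTPUT"

  qa:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: ${{ needs.deploy.outputs.url }}
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```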

On PR Merge

Test when merging to main:

on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  qa:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://example.com  # Production URL
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Manual Trigger

Test on demand via workflow_dispatch:

name: Manual QA Test
on:
  workflow_dispatch:
    inputs:
      url:
        description: 'URL to test'
        required: true
      description:
        description: 'Test instructions'
        required: true

jobs:
  qa:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: ${{ inputs.url }}
          description: ${{ inputs.description }}
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Scheduled Tests

Run periodic exploratory QA:

on:
  schedule:
    - cron: '0 */6 * * *'  # Every 6 hours

Advanced Usage

Parallel Tests

Test multiple environments simultaneously:

jobs:
  test-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com
          description: 'Test staging environment'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

  test-production:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://example.com
          description: 'Smoke test production'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Conditional Testing

Only run tests when specific files change:

on:
  pull_request:
    paths:
      - 'src/checkout/**'
      - 'src/payment/**'

jobs:
  qa-checkout:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com/checkout
          description: 'Test checkout and payment flows'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Continue on Failure

Keep workflow running even if QA fails:

- uses: volter-ai/runhuman-action@v1
  continue-on-error: true
  with:
    url: https://staging.example.com
    description: 'Exploratory testing'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    fail-on-failure: false

Troubleshooting

| Problem | Solution |
| --- | --- |
| "No testable issues found" | Issues may not include test URLs or reproduction steps. Add clear test instructions to issue bodies. |
| "Project not found" | The action auto-creates projects. If this fails, check that your API key is valid. |
| Action times out | Increase target-duration-minutes or simplify test instructions |
| URL not accessible | Ensure the URL is publicly accessible (not localhost or a private network) |
| Authentication failed | Verify RUNHUMAN_API_KEY is set in repository secrets and starts with qa_live_ |
| High costs | Run tests only on specific branches/paths, or reduce target-duration-minutes |
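
For the URL-accessibility case, a preflight step can fail fast before any QA cost is incurred. A sketch using curl:

```yaml
# Fails the job early if the target URL does not answer a HEAD request
- name: Check URL reachability
  run: curl --silent --fail --head https://staging.example.com > /dev/null
```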

Migration from Old Actions

If you’re using the legacy runhuman-issue-tester-action or runhuman-qa-test-action:

Old (deprecated):

- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    test-url: ${{ vars.RUNHUMAN_TESTING_URL }}

New (unified):

- uses: volter-ai/runhuman-action@v1
  with:
    url: ${{ vars.RUNHUMAN_TESTING_URL }}
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Changes:

  • test-url → url
  • Add pr-numbers for PR-based testing
  • Projects are auto-created (no project-id needed)
  • User-scoped API keys (one key for all projects)

Next Steps

| Topic | Link |
| --- | --- |
| More testing recipes and patterns | Cookbook |
| Full technical specification | Reference |
| Direct API integration | REST API |