
GitHub Actions

Add human QA testing to your CI/CD pipeline with the Runhuman GitHub Action.

Setup: Add RUNHUMAN_API_KEY to your repository secrets (get your key from your organization’s API Keys page).

Using an AI agent? Give it this link for setup instructions:

https://runhuman.com/for_agents_github_actions.md

Or, just install the Runhuman Agent Skill and ask the agent to set up the right workflow for your use case.


Quick Start

The Runhuman Action automatically:

  • Finds or creates a project for your repository
  • Tests linked GitHub issues from your PRs
  • Manages labels based on test outcomes
  • Reports results with screenshots and videos

# .github/workflows/qa.yml
name: QA Test
on:
  pull_request:
    branches: [main]

jobs:
  qa:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          on-success-add-labels: '["qa:passed"]'
          on-failure-add-labels: '["qa:failed"]'
          fail-on-failure: true

Authentication

Organization-Scoped API Keys

Runhuman uses organization-scoped API keys that work across all projects in your organization:

  1. Get your key: Go to your organization’s API Keys page in the Dashboard
  2. Add to repository: Store as RUNHUMAN_API_KEY in your repository secrets
  3. Use anywhere: The same key works for all repositories in your organization

Automatic Project Management

The action automatically:

  • Detects your repository (owner/repo)
  • Finds an existing project or creates one
  • Runs tests against that project

No manual project configuration needed!


Usage Examples

Test Issues Linked in a PR

Automatically test all issues linked in the PR body:

name: PR Workflow
on:
  pull_request:
    branches: [main]

jobs:
  deploy-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Deploy to staging
      - name: Deploy
        run: ./deploy-staging.sh

      # Test linked issues
      - name: QA Test
        uses: volter-ai/runhuman-action@v1
        with:
          url: ${{ vars.STAGING_URL }}
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          target-duration-minutes: 5
          on-success-add-labels: '["qa:passed"]'
          on-success-remove-labels: '["needs-qa"]'
          on-failure-add-labels: '["qa:failed"]'
          on-not-testable-add-labels: '["not-testable"]'
          fail-on-failure: true

How it works:

  1. The action finds issues linked in the PR (e.g., “Closes #123”)
  2. AI analyzes which issues are testable by humans
  3. Creates QA jobs for testable issues
  4. Updates labels based on test outcomes
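
The linked-issue lookup in step 1 can be sketched roughly as follows. This is an illustration built on GitHub's closing keywords (close/fix/resolve and their variants), not the action's actual implementation:

```javascript
// Illustrative sketch: find issues referenced with closing keywords
// ("Closes #123", "Fixes #456") in a PR body. Not the action's real code.
function findLinkedIssues(prBody) {
  const pattern = /\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)/gi;
  const issues = new Set(); // Set deduplicates repeated references
  for (const match of prBody.matchAll(pattern)) {
    issues.add(Number(match[1]));
  }
  return [...issues];
}

console.log(findLinkedIssues('Fixes #123 and closes #456. See also #789.'));
// [ 123, 456 ] — "#789" is mentioned but not linked with a closing keyword
```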

Test Specific Issues

Directly test specific issue numbers:

- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    issue-numbers: '[123, 456, 789]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Test a URL with Custom Instructions

Run an ad-hoc QA test without issues:

- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    description: |
      Test the new checkout flow:
      1. Add items to cart
      2. Complete checkout
      3. Verify confirmation email
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    output-schema: |
      {
        "checkoutWorks": {
          "type": "boolean",
          "description": "Checkout flow works end-to-end"
        },
        "issuesFound": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Any bugs or issues discovered"
        }
      }

Combined: PRs + Issues + Custom Tests

Combine issue testing with additional custom tests:

- uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    issue-numbers: '[789]'  # Also test this specific issue
    description: 'Additionally, verify the header renders correctly on mobile'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Exploratory QA on Production

Run comprehensive exploratory tests on a schedule:

name: Daily Production QA
on:
  schedule:
    - cron: '0 9 * * *'  # 9 AM daily
  workflow_dispatch:

jobs:
  qa:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      contents: read
    steps:
      - name: Run Exploratory QA
        id: qa
        uses: volter-ai/runhuman-action@v1
        with:
          url: https://example.com
          description: |
            Explore the application and report any issues:
            - Test core user flows
            - Check for visual bugs
            - Verify responsiveness
            - Test on different browsers
          target-duration-minutes: 15
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          fail-on-error: false  # Keep the workflow green; findings become issues below

      # Automatically create GitHub issues from QA findings
      - name: Create issues from findings
        if: always() && steps.qa.outputs.extracted-issues != '[]' && steps.qa.outputs.status == 'completed'
        uses: actions/github-script@v7
        env:
          EXTRACTED_ISSUES: ${{ steps.qa.outputs.extracted-issues }}
          JOB_URLS: ${{ steps.qa.outputs.job-urls }}
        with:
          script: |
            const issues = JSON.parse(process.env.EXTRACTED_ISSUES);
            const jobUrls = JSON.parse(process.env.JOB_URLS || '[]');
            const jobLink = jobUrls[0] ? `**Job:** ${jobUrls[0]}` : '';

            for (const issue of issues) {
              const stepsText = issue.reproductionSteps
                .map((step, i) => `${i + 1}. ${step}`)
                .join('\n');

              // Check for duplicate existing issues
              const duplicate = (issue.relatedIssues || [])
                .find(r => r.relation === 'duplicate');

              if (duplicate) {
                // Comment on existing issue instead of creating a new one
                await github.rest.issues.createComment({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  issue_number: duplicate.issueNumber,
                  body: [
                    `## QA Finding: ${issue.title}`,
                    '',
                    `> Duplicate (${Math.round(duplicate.confidence * 100)}% confidence): ${duplicate.reason}`,
                    '',
                    issue.description,
                    '',
                    '### Reproduction Steps',
                    stepsText,
                    '',
                    '---',
                    `**Severity:** ${issue.severity}`,
                    jobLink,
                  ].filter(Boolean).join('\n'),
                });
                core.info(`Commented on #${duplicate.issueNumber}: ${issue.title}`);
              } else {
                // Create new issue, mentioning related issues if any
                const related = (issue.relatedIssues || [])
                  .map(r => `- #${r.issueNumber} (${r.relation}, ${Math.round(r.confidence * 100)}%): ${r.reason}`)
                  .join('\n');

                const created = await github.rest.issues.create({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  title: `[QA] ${issue.title}`,
                  body: [
                    issue.description,
                    '',
                    '## Reproduction Steps',
                    stepsText,
                    related ? `\n## Related Issues\n${related}` : '',
                    '',
                    '---',
                    `**Severity:** ${issue.severity}`,
                    jobLink,
                  ].filter(Boolean).join('\n'),
                  labels: ['runhuman:qa-finding', ...issue.suggestedLabels].slice(0, 10),
                });
                core.info(`Created issue #${created.data.number}: ${issue.title}`);
              }
            }

Configuration

Required Inputs

| Input | Description |
|-------|-------------|
| url | URL to test (must be publicly accessible) |
| api-key | Organization-scoped Runhuman API key from your Dashboard |

Test Sources

Provide at least one of these to define what to test:

| Input | Description |
|-------|-------------|
| description | Custom test instructions |
| pr-numbers | PR numbers to find linked issues (JSON array: '[123, 456]') |
| issue-numbers | Issue numbers to test directly (JSON array: '[789, 101]') |

If none are provided, the action returns status: no-issues without running tests.
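
As a sketch, the "at least one test source" rule above could look like this (hypothetical helper; only the input names come from the table):

```javascript
// Hypothetical sketch of the "at least one test source" rule.
// Input names match the action's inputs; the parsing here is illustrative.
function hasTestSource(inputs) {
  const parse = (s) => JSON.parse(s || '[]');
  return Boolean(inputs.description)
    || parse(inputs['pr-numbers']).length > 0
    || parse(inputs['issue-numbers']).length > 0;
}

hasTestSource({ description: 'Test checkout' }); // true
hasTestSource({ 'pr-numbers': '[123]' });        // true
hasTestSource({});                               // false → status: no-issues
```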

Optional Inputs

| Input | Default | Description |
|-------|---------|-------------|
| github-token | ${{ github.token }} | GitHub token for reading PR/issue data |
| target-duration-minutes | 30 | Target test duration (1-60 minutes) |
| device-class | desktop | Device class: desktop or mobile |
| output-schema | - | JSON schema for structured test results |
| template | - | Template name to use (resolved from repo, project, or builtins). See Templates. |
| template-file | - | Path to local template file (e.g., .runhuman/templates/smoke.md). See Templates. |

Label Management

Automatically update issue labels based on test outcomes:

| Input | Description |
|-------|-------------|
| on-success-add-labels | Labels to add on test success (JSON array: '["qa:passed"]') |
| on-success-remove-labels | Labels to remove on test success |
| on-failure-add-labels | Labels to add on test failure |
| on-failure-remove-labels | Labels to remove on test failure |
| on-not-testable-add-labels | Labels to add when issue is not testable |
| on-not-testable-remove-labels | Labels to remove when issue is not testable |
| on-timeout-add-labels | Labels to add on test timeout |
| on-timeout-remove-labels | Labels to remove on test timeout |

Workflow Control

Control workflow behavior:

| Input | Default | Description |
|-------|---------|-------------|
| fail-on-error | true | Fail workflow on system errors |
| fail-on-failure | true | Fail workflow if any test fails |
| fail-on-timeout | false | Fail workflow if the tester times out |
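
How these flags might combine with the final status can be sketched as follows (illustrative only; the action's real logic may differ):

```javascript
// Illustrative sketch: whether the workflow should fail, given the final
// status and the fail-on-* inputs. Not the action's actual implementation.
function shouldFailWorkflow(status, allPassed, opts) {
  if (status === 'error') return opts.failOnError;
  if (status === 'timeout') return opts.failOnTimeout;
  if (status === 'completed' && !allPassed) return opts.failOnFailure;
  return false; // 'no-issues', or completed with every test passing
}

const defaults = { failOnError: true, failOnFailure: true, failOnTimeout: false };
shouldFailWorkflow('completed', false, defaults); // true  (a test failed)
shouldFailWorkflow('timeout', false, defaults);   // false (fail-on-timeout defaults to false)
```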

Outputs

Access test results in subsequent workflow steps:

- name: Run QA Test
  id: qa
  uses: volter-ai/runhuman-action@v1
  with:
    url: https://staging.example.com
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}

- name: Report Results
  run: |
    echo "Status: ${{ steps.qa.outputs.status }}"
    echo "Success: ${{ steps.qa.outputs.success }}"
    echo "Tested: ${{ steps.qa.outputs.tested-issues }}"
    echo "Passed: ${{ steps.qa.outputs.passed-issues }}"
    echo "Failed: ${{ steps.qa.outputs.failed-issues }}"
    echo "Cost: ${{ steps.qa.outputs.cost-usd }}"
    echo "Job URLs: ${{ steps.qa.outputs.job-urls }}"

| Output | Description |
|--------|-------------|
| status | Test status (completed, error, timeout, no-issues) |
| success | Whether all tests passed (true/false) |
| tested-issues | JSON array of tested issue numbers |
| passed-issues | JSON array of passed issue numbers |
| failed-issues | JSON array of failed issue numbers |
| not-testable-issues | JSON array of not-testable issue numbers |
| timed-out-issues | JSON array of timed-out issue numbers |
| results | Full JSON results object |
| cost-usd | Total cost in USD |
| duration-seconds | Total duration in seconds |
| job-ids | JSON array of Runhuman job IDs |
| job-urls | JSON array of job URLs for viewing results |
| extracted-issues | JSON array of AI-extracted issues with severity, reproduction steps, and related-issue detection |
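
In a follow-up step (for example with actions/github-script), the JSON-array outputs can be parsed into a summary. A minimal sketch, where the output names are real but the summarizing helper is illustrative:

```javascript
// Sketch: summarize the action's outputs in a later step. The output
// names come from the table above; this helper is illustrative.
function summarize(outputs) {
  const passed = JSON.parse(outputs['passed-issues'] || '[]');
  const failed = JSON.parse(outputs['failed-issues'] || '[]');
  const tested = passed.length + failed.length;
  return {
    tested,
    passRate: tested ? passed.length / tested : 1, // no tests counts as passing
    costUsd: Number(outputs['cost-usd'] || 0),
  };
}

summarize({ 'passed-issues': '[123]', 'failed-issues': '[456]', 'cost-usd': '4.20' });
// { tested: 2, passRate: 0.5, costUsd: 4.2 }
```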

Extracted Issues Schema

Each item in the extracted-issues JSON array:

{
  title: string;                    // Short issue title
  description: string;              // Detailed description
  severity: 'critical' | 'high' | 'medium' | 'low';
  reproductionSteps: string[];      // Steps to reproduce
  suggestedLabels: string[];        // Suggested GitHub labels
  relatedIssues?: Array<{
    issueNumber: number;            // GitHub issue number
    title: string;                  // Issue title
    state: 'open' | 'closed';      // Current state
    relation: 'duplicate' | 'related';
    confidence: number;             // 0.0–1.0
    reason: string;                 // Why it's related
  }>;
}
  • duplicate (confidence > 70%): The finding matches an existing issue — comment on it instead of creating a new one.
  • related (confidence 30–70%): Similar but distinct — mention when creating a new issue.
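
The comment-vs-create decision in the scheduled-QA example earlier follows directly from these thresholds. A minimal sketch (hypothetical helper, same schema as above):

```javascript
// Sketch: decide whether a QA finding should become a comment on an
// existing issue or a new issue, per the thresholds above. Illustrative.
function routeFinding(finding) {
  const dup = (finding.relatedIssues || [])
    .find((r) => r.relation === 'duplicate' && r.confidence > 0.7);
  return dup
    ? { action: 'comment', issueNumber: dup.issueNumber }
    : { action: 'create' };
}

routeFinding({
  title: 'Checkout button unresponsive',
  relatedIssues: [{ issueNumber: 42, relation: 'duplicate', confidence: 0.9 }],
});
// { action: 'comment', issueNumber: 42 }
```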

See the Reference for a full example with sample data.


How It Works

  1. Collect Issues: Finds issues from PRs (via body/title references) and explicit issue-numbers
  2. Deduplicate: Removes duplicate issues
  3. Auto-Create Project: Finds or creates a project for your repository
  4. Analyze Testability: AI determines which issues can be tested by a human
  5. Run Tests: Creates Runhuman jobs for each testable issue
  6. Apply Labels: Updates issue labels based on outcomes
  7. Report Results: Sets outputs and optionally fails the workflow
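
Steps 1-2 amount to merging the two issue sources and dropping duplicates; roughly (illustrative, not the action's actual code):

```javascript
// Illustrative sketch of steps 1-2: merge PR-linked issues with explicit
// issue-numbers and remove duplicates.
function collectIssues(fromPRs, explicit) {
  return [...new Set([...fromPRs, ...explicit])].sort((a, b) => a - b);
}

collectIssues([123, 456], [456, 789]); // [ 123, 456, 789 ]
```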

Writing Testable Issues

Make your issues easy to test by including:

✅ Good Example

## Bug Description
The checkout button is unresponsive on Safari.

## Test URL
https://staging.myapp.com/checkout

## Steps to Reproduce
1. Add items to cart
2. Go to checkout
3. Click "Place Order"
4. Nothing happens

## Expected Behavior
Order should be submitted and confirmation shown.

Key Elements

  • Clear description of what to test
  • Test URL (specific page where bug occurs)
  • Reproduction steps (numbered list)
  • Expected behavior (what should happen)

Workflow Triggers

After Deployment

Test after your deployment completes:

name: Deploy and QA
on:
  pull_request:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Deploy
        run: ./deploy-staging.sh

  qa:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

On PR Merge

Test when merging to main:

on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  qa:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://example.com  # Production URL
          pr-numbers: '[${{ github.event.pull_request.number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Manual Trigger

Test on demand via workflow_dispatch:

name: Manual QA Test
on:
  workflow_dispatch:
    inputs:
      url:
        description: 'URL to test'
        required: true
      description:
        description: 'Test instructions'
        required: true

jobs:
  qa:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: ${{ inputs.url }}
          description: ${{ inputs.description }}
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Scheduled Tests

Run periodic exploratory QA:

on:
  schedule:
    - cron: '0 */6 * * *'  # Every 6 hours

Advanced Usage

Parallel Tests

Test multiple environments simultaneously:

jobs:
  test-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com
          description: 'Test staging environment'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

  test-production:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://example.com
          description: 'Smoke test production'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Conditional Testing

Only run tests when specific files change:

on:
  pull_request:
    paths:
      - 'src/checkout/**'
      - 'src/payment/**'

jobs:
  qa-checkout:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: https://staging.example.com/checkout
          description: 'Test checkout and payment flows'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Continue on Failure

Keep workflow running even if QA fails:

- uses: volter-ai/runhuman-action@v1
  continue-on-error: true
  with:
    url: https://staging.example.com
    description: 'Exploratory testing'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    fail-on-failure: false

Troubleshooting

| Problem | Solution |
|---------|----------|
| “No testable issues found” | Issues may not include test URLs or reproduction steps. Add clear test instructions to issue bodies. |
| “Project not found” | The action auto-creates projects. If this fails, check that your API key is valid. |
| Action times out | Increase target-duration-minutes or simplify test instructions. |
| URL not accessible | Ensure the URL is publicly accessible (not localhost or a private network). |
| Authentication failed | Verify RUNHUMAN_API_KEY is set in repository secrets and starts with qa_live_. |
| High costs | Run tests only on specific branches/paths, or reduce target-duration-minutes. |

Migration from Old Actions

If you’re using the legacy runhuman-issue-tester-action or runhuman-qa-test-action:

Old (deprecated):

- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    test-url: ${{ vars.RUNHUMAN_TESTING_URL }}

New (unified):

- uses: volter-ai/runhuman-action@v1
  with:
    url: ${{ vars.RUNHUMAN_TESTING_URL }}
    pr-numbers: '[${{ github.event.pull_request.number }}]'
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Changes:

  • test-urlurl
  • Add pr-numbers for PR-based testing
  • Projects are auto-created (no project-id needed)
  • Organization-scoped API keys (one key for all projects in your organization)

Next Steps

| Topic | Link |
|-------|------|
| More testing recipes and patterns | Cookbook |
| Full technical specification | Reference |
| Direct API integration | REST API |