Create & Manage Test Runs

Test runs capture the execution of a set of test cases. They can originate from manual planning or be ingested automatically from CI/CD pipelines. Runs roll up results, time spent, linked defects, and reruns so you always know release readiness.

Creating a manual run

Provide the following when creating a run:

  • Title & description – Communicate the scope (e.g., "Sprint 24 Regression" or "Mobile Smoke – iOS").
  • Milestone – Tie the run to an upcoming release or sprint for analytics.
  • Environment – Select the target environment (web, staging, device lab, etc.).
  • Test plan or case list – Start from a test plan or manually select individual test cases.
  • Assignee – The coordinating tester; all included cases inherit this assignee by default.
  • Tags – Add labels for filtering and reporting.
  • Configurations – Attach reusable configuration options (browsers, locales, data sets) that testers see during execution.
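The fields above map naturally to a JSON request body. A minimal sketch in Python, assuming a hypothetical run-creation endpoint and illustrative field names (the actual API schema may differ):

```python
import json

def build_run_payload(title, description, milestone, environment, case_ids,
                      assignee, tags=None, configurations=None):
    """Assemble an illustrative JSON body for creating a manual test run."""
    return {
        "title": title,
        "description": description,
        "milestone": milestone,            # release or sprint for analytics
        "environment": environment,        # e.g. "staging", "device-lab"
        "test_case_ids": case_ids,         # or start from a test plan instead
        "assignee": assignee,              # inherited by every included case
        "tags": tags or [],
        "configurations": configurations or [],
    }

payload = build_run_payload(
    title="Sprint 24 Regression",
    description="Full regression for the sprint 24 release candidate",
    milestone="v2.4.0",
    environment="staging",
    case_ids=[101, 102, 103],
    assignee="qa-lead",
    tags=["regression"],
    configurations=[{"browser": "chrome", "locale": "en-US"}],
)
print(json.dumps(payload, indent=2))
```

Every included case inherits the run-level assignee, so the payload only needs one assignee field; individual reassignment happens after creation.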

After creation, TestFish automatically:

  • Creates an initial execution record for every test case in the run.
  • Assigns the run and each of its test cases to the selected assignee (you can update them individually later).
  • Generates analytics (progress, pass/fail rates, time tracking) as execution data arrives.

Managing runs

  • Edit – Update metadata, add/remove cases, or change assignments (if the run is still active).
  • Clone – Produce a copy for future cycles without wiping historical results.
  • Rerun – Spawn a rerun chain that preserves history but gives testers a clean execution slate. Only original runs can spawn reruns.
  • Complete – Lock the run once testing finishes; results remain read-only for auditing.
  • Delete – Soft-delete runs if needed; rerun chains and analytics continue to account for archived runs.
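The clone/rerun distinction can be sketched with a tiny data model. This is a hedged illustration of the semantics described above, not TestFish's actual implementation: a rerun points back at its original run, so history is preserved while testers get a clean slate, and only originals can spawn reruns.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestRun:
    """Illustrative run lineage; names are hypothetical, not TestFish's schema."""
    title: str
    results: dict = field(default_factory=dict)   # case_id -> status
    original: Optional["TestRun"] = None          # set only on reruns

    def rerun(self) -> "TestRun":
        # Only original runs spawn reruns, so chains stay anchored to one root.
        if self.original is not None:
            raise ValueError("reruns must be spawned from the original run")
        # History stays on the original; the rerun starts with no results.
        return TestRun(title=f"{self.title} (rerun)", original=self)

run = TestRun("Sprint 24 Regression")
run.results = {101: "failed", 102: "passed"}
second = run.rerun()   # second.results is empty; run.results is untouched
```

Cloning, by contrast, would copy metadata into an entirely independent run with no back-reference, which is why clones are suited to future cycles rather than retries.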

Automated test runs

CI/CD systems can post execution results via the /automated_test_runs endpoint:

  • Supply run metadata plus an array of test case results, status IDs, timings, attachments, and optional error messages.
  • Automated runs show up alongside manual runs with full analytics, defect linkage, and rerun support.
  • Use scoped API tokens (ci_cd or integration) to authenticate automation pipelines.
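A request to the /automated_test_runs endpoint might look like the sketch below. The endpoint path and token scopes come from the points above; the base URL, header shape, and field names are assumptions:

```python
import json
import urllib.request

BASE_URL = "https://testfish.example.com/api"   # placeholder host
API_TOKEN = "ci-scoped-token"                   # use a ci_cd or integration token

def build_automated_run(title, environment, results):
    """Assemble an illustrative body: run metadata plus per-case results."""
    return {
        "title": title,
        "environment": environment,
        "results": results,  # status IDs, timings, optional error messages
    }

body = build_automated_run(
    title="CI build #1842",
    environment="ci",
    results=[
        {"test_case_id": 101, "status_id": 1, "elapsed_ms": 840},
        {"test_case_id": 102, "status_id": 2, "elapsed_ms": 1200,
         "error_message": "Timeout waiting for login form"},
    ],
)

req = urllib.request.Request(
    f"{BASE_URL}/automated_test_runs",
    data=json.dumps(body).encode(),
    headers={"Authorization": f"Bearer {API_TOKEN}",
             "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted since the host is a placeholder.
```

Once posted, the run appears alongside manual runs with the same analytics, defect linkage, and rerun support.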

Execution insights

  • Real-time progress – Monitor execution status live as testers complete their work.
  • Execution history – A complete record of every run, with detailed per-case execution data.
  • Quick reruns – Rerun failed tests instantly without creating a new run.
  • The run detail page provides a table of all cases with status, workflow state, assignee, external links, and quick filters.
  • Opening a case launches the execution drawer with steps, history, time tracking, attachments, linked defects, and comments.
  • Run stats (pass/fail/blocked/skipped) feed the project overview widgets and exports.