# Getting Started (/docs)
Welcome to Tester.Army, the AI-powered QA testing platform that automates browser testing using natural language prompts.
What is Tester.Army? [#what-is-testerarmy]
Tester.Army uses AI agents to perform real browser-based testing on your web applications. Simply describe what you want to test in plain English, and our AI will navigate your site, perform actions, and report back with detailed results.
Each test run provides you with screenshots captured during the test, a pass/fail result with a detailed description, and auto-generated Playwright test code that you can integrate into your CI/CD pipeline.
Quick Start [#quick-start]
1. **Create an account** — Sign up for a Tester.Army account to access the dashboard.
2. **Pick a workflow**:
* **Prompt-first testing** — use [Quick Test](/docs/platform/quick-test) for fast checks, or continue into project chat for iterative prompt-driven runs.
* **Team workflow setup** — if you want GitHub + Vercel PR automation, use [Project Setup](/docs/platform/getting-started).
Related Guides [#related-guides]
* [Project Setup](/docs/platform/getting-started) for GitHub and Vercel integration.
* [Quick Test](/docs/platform/quick-test) for ad-hoc checks without project setup.
* [TesterArmy CLI](/docs/cli) for agent-first local and CI QA runs.
* [Markdown Test Files](/docs/cli/markdown-tests) for reusable `tests/*.md` suites with shared `TESTER.md` context.
* [API Reference](/docs/api/api-reference) for async runs and webhook integration.
# LLMs (/docs/llms)
* [llms.txt](/docs/llms.txt)
* [llms-full.txt](/docs/llms-full.txt)
# TesterArmy CLI (/docs/cli)
TesterArmy CLI (`testerarmy` / `ta`) is an **agent-first** QA runner.
It is built primarily for agent workflows while still supporting interactive local use.
This gives coding agents such as Claude Code, Codex, or OpenCode testing superpowers: they can spawn multiple testing agents to tighten the feedback loop.
Why? [#why]
* Close the agentic loop, give your agent reliable feedback for iterating on changes.
* Run browser QA checks from prompts or markdown test files.
* Execute markdown test suites in parallel with predictable pass/fail behavior.
* Keep your main agent's context clean: only the important information comes back.
Example [#example]
Install and invoke [#install-and-invoke]
```bash
npm install -g testerarmy
# or no install
npx testerarmy --help
```
Both binaries map to the same CLI:
```bash
testerarmy --help
ta --help
```
Core commands [#core-commands]
ta status [#ta-status]
Show authentication state and API key source.
```bash
ta status
ta status --json
```
`--json` includes `authenticated`, `apiKeySource`, `environmentApiKeySet`, `configApiKeySet`, and `configPath`.
API key source priority:
1. `TESTERARMY_API_KEY`
2. `~/.config/testerarmy/config.json`
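The precedence above can be sketched as a small resolution helper. This is an illustration of the documented order only, not the CLI's actual implementation; the function and config field names are hypothetical:

```python
# Illustrative sketch of the documented key-source precedence:
# the TESTERARMY_API_KEY environment variable wins over the config file.
def resolve_api_key(env: dict, config: dict):
    """Return (key, source), or None if unauthenticated."""
    if env.get("TESTERARMY_API_KEY"):
        return env["TESTERARMY_API_KEY"], "environment"
    if config.get("apiKey"):  # hypothetical field name in config.json
        return config["apiKey"], "config"
    return None
```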
ta auth [#ta-auth]
Validate and save API key in local config.
```bash
ta auth
ta auth --api-key YOUR_KEY
ta auth --api-key YOUR_KEY --base-url https://tester.army
```
ta signout / ta logout [#ta-signout--ta-logout]
Clear stored credentials.
```bash
ta signout
ta logout
```
ta run `<prompt|file|directory>` [#ta-run-promptfiledirectory]
Run a single prompt, markdown file, or a directory of markdown tests.
```bash
# Inline prompt
ta run "check pricing page on https://example.com"
# Single markdown test
ta run tests/01-landing-page.md
# Directory mode (parallel)
ta run tests/
ta run tests/ --parallel 5
```
Directory mode notes:
* Scans only top-level `*.md` files.
* Excludes `TESTER.md` and `README.md` from executable test files.
* Prepends `TESTER.md` content (if found in the directory tree) to each test.
Single file and inline prompt notes:
* If `TESTER.md` is found while walking up directories, it is prepended automatically.
* Target URL is only taken from `--url` or `TESTERARMY_TARGET_URL`.
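The `TESTER.md` lookup described above can be sketched as follows. This is a hypothetical illustration of the documented behavior (walk up from the test file, prepend the nearest `TESTER.md`), not the CLI's actual code:

```python
from pathlib import Path

# Illustrative sketch: for single-file runs, walk up the directory tree
# from the test file and prepend the nearest TESTER.md, if any.
def build_prompt(test_file: Path) -> str:
    test_body = test_file.read_text()
    for parent in test_file.resolve().parents:
        shared = parent / "TESTER.md"
        if shared.exists():
            return shared.read_text() + "\n\n" + test_body
    return test_body
```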
ta list / ta ls [#ta-list--ta-ls]
List recent local runs from `.testerarmy/`.
```bash
ta list
ta ls -n 20
ta list --json
```
run options [#run-options]
* `--url <url>`: Explicit target URL
* `--json`: Print JSON payload (automation friendly)
* `--headed`: Run a visible browser
* `--timeout <ms>`: Positive integer timeout in milliseconds (default: `600000`)
* `--api-key <key>`: Override the API key for this run
* `--base-url <url>`: Override the API base URL
* `--output <file>`: Write a machine-readable envelope/summary to a file
* `--system-prompt-file <path>`: Replace the base system prompt with the file's contents
* `--parallel <n>`: Directory-mode concurrency (default: `3`)
* `--debug`: Persist full debug transcript artifact
Environment variables [#environment-variables]
* `TESTERARMY_API_KEY`: API key override
* `TESTERARMY_BASE_URL`: API base URL override
* `TESTERARMY_TARGET_URL`: Target URL for `run`
* `MCP_TIMEOUT_NAVIGATION_MS`: Playwright navigation timeout override
* `MCP_TIMEOUT_ACTION_MS`: Playwright action timeout override
Local artifacts [#local-artifacts]
Each run creates `.testerarmy/<runId>/` with:
* `run-meta.json`: metadata (`runId`, timestamps, PID, mode, prompt, target URL)
* `result.json`: final QA result payload
* `debug-run.json`: detailed stream/tool artifact (only with `--debug`)
When `--output <file>` is set:
* Single run: writes success/failure envelope with artifact paths.
* Directory run: writes batch summary + per-file results.
Exit behavior [#exit-behavior]
* `0`: success
* `1`: test failure or user cancellation
* `2`: CLI/runtime error
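A wrapper script (for CI or an agent harness) can branch on these exit codes. A minimal hypothetical sketch of that mapping:

```python
# Illustrative mapping of the documented `ta run` exit codes to outcomes,
# e.g. for a CI wrapper. Function name and labels are hypothetical.
def classify_exit(code: int) -> str:
    outcomes = {
        0: "success",
        1: "test failure or user cancellation",
        2: "cli/runtime error",
    }
    return outcomes.get(code, "unknown")
```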
Agent-oriented workflow [#agent-oriented-workflow]
```bash
# 1) Check auth state
ta status --json
# 2) Set target once for scripted runs
export TESTERARMY_TARGET_URL="http://localhost:3000"
# 3) Run focused markdown scenario
ta run tests/01-landing-page.md --json --output .testerarmy/latest.json
# 4) Or run whole suite
ta run tests/ --parallel 3 --json
```
This workflow is the default path for coding agents: deterministic inputs, machine-readable outputs, and reproducible local artifacts.
Related Guides [#related-guides]
* [Markdown Test Files](/docs/cli/markdown-tests) for structuring `tests/` and shared `TESTER.md` instructions.
* [Getting Started](/docs) for platform overview.
* [Project Setup](/docs/platform/getting-started) for GitHub + Vercel project setup.
* [API Reference](/docs/api/api-reference) for async run endpoints and webhooks.
# Markdown Test Files (/docs/cli/markdown-tests)
Use markdown files when you want deterministic, reusable QA checks for local or CI runs.
Recommended structure [#recommended-structure]
```text
TESTER.md
tests/
  01-landing-page.md
  02-quick-test-runner.md
  03-create-project.md
```
* Keep shared setup and environment notes in `TESTER.md`.
* Keep one scenario per file in `tests/`.
* Use numeric prefixes (`01-...`, `02-...`) for stable order in reviews and logs.
How TESTER.md is applied [#how-testermd-is-applied]
* In directory mode (`ta run tests/`), CLI prepends `TESTER.md` to every executable markdown test.
* In single-file mode (`ta run tests/01-landing-page.md`), CLI walks up directories and prepends the nearest `TESTER.md`.
* In directory mode, CLI runs only top-level `*.md` files and skips `README.md` and `TESTER.md`.
This is useful for shared auth flow, target assumptions, and known environment quirks.
Real example (this repo) [#real-example-this-repo]
This repository uses:
* `TESTER.md` in repo root for shared login + environment instructions.
* `tests/*.md` for scenario-specific checks (landing page, project flows, billing checks, and more).
That lets every test stay short and focused, while shared setup stays centralized.
Example files [#example-files]
TESTER.md [#testermd]
```md
# Shared Test Instructions
Use the target URL from `TESTERARMY_TARGET_URL` as the app under test.
## Authentication
1. Open `/sign-in`
2. Sign in as `tester@tester.army`
3. Complete magic link callback flow
## Environment notes
- Minor visual inconsistencies can be expected in staging
- Focus on functional failures for pass/fail
```
tests/01-landing-page.md [#tests01-landing-pagemd]
```md
# Landing page loads
Goal: verify public landing page is reachable and key CTA is visible.
Steps:
1. Navigate to `/`
2. Confirm page title mentions Tester Army
3. Confirm a primary CTA like "Get Started" or "Sign in" is visible
4. Return pass if all checks succeed; otherwise fail with a short reason
```
Run commands [#run-commands]
```bash
# 1) Set target app URL once
export TESTERARMY_TARGET_URL="http://localhost:3000"
# 2) Run one markdown scenario
ta run tests/01-landing-page.md
# 3) Run whole suite (parallel)
ta run tests/ --parallel 3
# 4) CI/machine readable output
ta run tests/ --json --output .testerarmy/latest.json
```
Authoring guidelines [#authoring-guidelines]
* Write clear action + assertion steps in plain English.
* Avoid hardcoded domains in test content; use `TESTERARMY_TARGET_URL` or `--url`.
* Put only shared instructions in `TESTER.md`; keep test files focused on scenario intent.
* Prefer many small tests over one long script for easier failure triage.
Related Guides [#related-guides]
* [TesterArmy CLI](/docs/cli) for full command/reference docs.
* [Quick Test](/docs/platform/quick-test) for ad-hoc prompt-first checks in dashboard.
# API Reference (/docs/api/api-reference)
The Tester.Army API allows you to programmatically run AI-powered QA tests against any web application. This RESTful API returns JSON responses and uses standard HTTP status codes.
Base URL [#base-url]
```
https://tester.army/api/v1
```
Authentication [#authentication]
All API requests require authentication using a Bearer token. Sign in at [/sign-in](/sign-in), then generate an API key in your dashboard.
Include your API key in the `Authorization` header:
```
Authorization: Bearer sk_xxxxxxxxxxxx_xxxxxxxxxxxxxxxx
```
API keys are prefixed with `sk_` and should be kept secret. Do not expose them in client-side code or public repositories.
Endpoints [#endpoints]
The primary API flow is async test runs via `/runs*` endpoints.
Error Handling [#error-handling]
The API uses standard HTTP status codes to indicate success or failure.
Error Response Format [#error-response-format]
```json
{
"error": "ErrorType",
"message": "Human-readable error description"
}
```
Status Codes [#status-codes]
| Status | Error | Description |
| ------ | ----------------- | ------------------------------------------------------------ |
| `200` | - | Success |
| `400` | `ValidationError` | Invalid request body. Check the `message` field for details. |
| `401` | `Unauthorized` | Missing or invalid API key. |
| `500` | `InternalError` | Server error. Try again later. |
Example Error Response [#example-error-response]
```json
{
"error": "ValidationError",
"message": "prompt: Required"
}
```
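Client code can turn this envelope into a typed error. A minimal sketch, assuming only the documented status codes and the `{"error", "message"}` shape; the exception class and function names are hypothetical:

```python
import json

# Illustrative client-side handling of the documented error envelope.
class TesterArmyError(Exception):
    pass

def check_response(status: int, body: str):
    if status == 200:
        return json.loads(body)
    payload = json.loads(body)  # {"error": ..., "message": ...}
    raise TesterArmyError(f"{payload['error']} ({status}): {payload['message']}")
```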
***
Async Test Runs (Recommended) [#async-test-runs-recommended]
The async run endpoints return immediately with a run ID. Poll for completion or provide a `webhookUrl` to receive results.
Submit a QA Test Run [#submit-a-qa-test-run]
```
POST /runs
```
Request Body [#request-body]
| Field | Type | Required | Description |
| ---------------------- | ------ | -------- | ------------------------------------------------------------------- |
| `prompt` | string | Yes | Description of what to test. Include the target URL in your prompt. |
| `credentials` | object | No | Optional auth credentials for testing login-protected pages. |
| `credentials.email` | string | No | Email or username for authentication. |
| `credentials.password` | string | No | Password for authentication. |
| `webhookUrl` | string | No | URL to receive a POST callback when the run completes. |
Example Request [#example-request]
```bash
curl -X POST https://tester.army/api/v1/runs \
  -H "Authorization: Bearer sk_xxxxxxxxxxxx_xxxxxxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Test the signup button on https://example.com",
    "webhookUrl": "https://example.com/webhooks/tester-army"
  }'
```
Response (202) [#response-202]
```json
{
"id": "c8e0f1b1-2f4c-4c2a-b6a6-8f76a6b9f1a2",
"status": "queued",
"createdAt": "2026-02-12T00:00:00.000Z"
}
```
Get a Test Run [#get-a-test-run]
```
GET /runs/{id}
```
Response (200) [#response-200]
```json
{
"id": "c8e0f1b1-2f4c-4c2a-b6a6-8f76a6b9f1a2",
"type": "chat",
"status": "completed",
"input": {
"prompt": "Test the signup button on https://example.com"
},
"output": {
"featureName": "Signup Flow",
"result": "PASS",
"description": "No issues found.",
"issues": [],
"screenshots": []
},
"testPlan": null,
"error": null,
"durationMs": 12345,
"webhookUrl": null,
"webhookStatus": null,
"createdAt": "2026-02-12T00:00:00.000Z",
"startedAt": "2026-02-12T00:00:01.000Z",
"completedAt": "2026-02-12T00:00:12.000Z"
}
```
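A client that submitted a run without a webhook can poll `GET /runs/{id}` until the status is terminal. A minimal sketch with an injected `fetch` function so the transport stays out of the picture; in practice `fetch` would perform an authenticated HTTP GET. All names here are hypothetical:

```python
import time

# Statuses after which polling can stop, per the documented status values.
TERMINAL = {"completed", "failed", "cancelled"}

def wait_for_run(run_id: str, fetch, interval_s: float = 5.0, max_polls: int = 120):
    """Poll GET /runs/{id} via `fetch` until the run reaches a terminal status."""
    for _ in range(max_polls):
        run = fetch(f"/runs/{run_id}")
        if run["status"] in TERMINAL:
            return run
        time.sleep(interval_s)
    raise TimeoutError(f"run {run_id} did not finish after {max_polls} polls")
```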
List Test Runs [#list-test-runs]
```
GET /runs
```
Query Parameters [#query-parameters]
| Field | Type | Description |
| -------- | ------ | --------------------------------------------------------------------------- |
| `limit` | number | Max results per page (default 20, max 100). |
| `status` | string | Filter by status (`queued`, `running`, `completed`, `failed`, `cancelled`). |
| `cursor` | string | Cursor for pagination. |
Response (200) [#response-200-1]
```json
{
"runs": [
{
"id": "c8e0f1b1-2f4c-4c2a-b6a6-8f76a6b9f1a2",
"type": "chat",
"status": "completed",
"input": {
"prompt": "Test the signup button on https://example.com"
},
"output": null,
"testPlan": null,
"error": null,
"durationMs": null,
"webhookUrl": null,
"webhookStatus": null,
"createdAt": "2026-02-12T00:00:00.000Z",
"startedAt": null,
"completedAt": null
}
],
"nextCursor": "2026-02-12T00:00:00.000Z::c8e0f1b1-2f4c-4c2a-b6a6-8f76a6b9f1a2"
}
```
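To walk the full history, pass `nextCursor` back as the `cursor` query parameter until it is absent. A hypothetical sketch with an injected `fetch(path, params)` function:

```python
# Illustrative cursor pagination over GET /runs using the documented
# `cursor` parameter and `nextCursor` response field.
def iter_runs(fetch, limit: int = 20):
    cursor = None
    while True:
        params = {"limit": limit}
        if cursor:
            params["cursor"] = cursor
        page = fetch("/runs", params)
        yield from page["runs"]
        cursor = page.get("nextCursor")
        if not cursor:
            break
```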
Cancel a Queued Run [#cancel-a-queued-run]
```
POST /runs/{id}/cancel
```
Only runs in `queued` status can be cancelled.
Response (200) [#response-200-2]
```json
{
"id": "c8e0f1b1-2f4c-4c2a-b6a6-8f76a6b9f1a2",
"status": "cancelled"
}
```
Webhook Delivery [#webhook-delivery]
If you supply `webhookUrl`, Tester.Army sends a POST request when the run completes (either `completed` or `failed`).
Payload [#payload]
```json
{
"id": "c8e0f1b1-2f4c-4c2a-b6a6-8f76a6b9f1a2",
"type": "chat",
"status": "completed",
"output": {
"featureName": "Signup Flow",
"result": "PASS",
"description": "No issues found.",
"issues": [],
"screenshots": []
},
"testPlan": null,
"error": null,
"durationMs": 12345,
"createdAt": "2026-02-12T00:00:00.000Z",
"completedAt": "2026-02-12T00:00:12.000Z"
}
```
Retries & Timeouts [#retries--timeouts]
* Retries up to 3 times with exponential backoff (1s, 2s, 4s).
* Each request times out after 10 seconds.
* `webhookStatus` updates to `delivered` or `failed` after retries.
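On the receiving side, the handler only needs to parse the payload above; because each request times out after 10 seconds, respond with a 200 quickly and defer heavy work. A minimal sketch of the parsing logic (framework wiring omitted; names hypothetical):

```python
import json

# Illustrative webhook receiver logic for the documented payload shape.
def handle_webhook(raw_body: bytes) -> str:
    event = json.loads(raw_body)
    if event["status"] != "completed":
        return f"run {event['id']} failed: {event.get('error')}"
    result = event["output"]["result"]  # e.g. "PASS"
    return f"run {event['id']} finished with {result}"
```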
Related Guides [#related-guides]
* [Project Setup](/docs/platform/getting-started) to connect GitHub and Vercel.
* [Quick Test](/docs/platform/quick-test) to validate prompts before API automation.
* [Getting Started](/docs) for core platform concepts.
# CI / CD (/docs/platform/ci-cd)
Configure your CI / CD provider so TesterArmy automatically runs tests when your application is deployed. When a preview deployment completes, TesterArmy receives a webhook, resolves the PR, runs tests against the deployed URL, and posts results as a PR comment.
Coolify [#coolify]
Setup [#setup]
1. Go to **Project → Settings → CI / CD** and select **Coolify**.
2. Click **Add Webhook** and copy the generated webhook URL.
3. In Coolify, go to [**Notifications → Webhook**](https://coolify.io/docs/knowledge-base/notifications).
4. Paste the webhook URL and enable deployment events.
TesterArmy uses `pull_request_id` from the Coolify payload to identify the PR, so test results are automatically posted as PR comments.
Docker Compose workaround [#docker-compose-workaround]
Coolify has a [known issue](https://github.com/coollabsio/coolify/issues/8958) where Docker Compose deployments don't send the preview URL (`preview_fqdn` is `null`). Other build packs (Nixpacks, Dockerfile, Static) work out of the box.
**Workaround:** When creating or editing the webhook, set a **Preview URL pattern**:
```
https://{{pr_number}}.myapp.com
```
TesterArmy substitutes the PR number from the webhook payload to build the deployment URL.
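The substitution is a straightforward string replacement of `{{pr_number}}` in the pattern. A hypothetical sketch (the example domain comes from the pattern above):

```python
# Illustrative Preview URL pattern substitution: {{pr_number}} is
# replaced with the PR number taken from the webhook payload.
def build_preview_url(pattern: str, pr_number: int) -> str:
    return pattern.replace("{{pr_number}}", str(pr_number))
```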
Vercel [#vercel]
Vercel deployments are handled automatically through the GitHub App integration — no webhook setup needed. TesterArmy detects Vercel preview deployments via GitHub deployment status events and resolves the PR from the commit SHA.
If Vercel deployment protection blocks preview URLs, go to **Project → Settings → CI / CD**, select **Vercel**, and add a **bypass token**. See [Project Setup](/docs/platform/getting-started#vercel) for details.
Troubleshooting [#troubleshooting]
Missing deployment URL [#missing-deployment-url]
**Error:** `Webhook payload does not contain a deployment URL`
* **Coolify Docker Compose:** Preview URL is `null` due to a [known bug](https://github.com/coollabsio/coolify/issues/8958). Configure a **Preview URL pattern** on the webhook, or switch to Nixpacks/Dockerfile build pack.
* **No domain configured:** Make sure your application has a domain set in your deployment provider.
No PR comment appears [#no-pr-comment-appears]
1. **GitHub App connected?** Go to **Project Settings** and verify the GitHub App is installed and the repository is selected.
2. **PR number in payload?** Coolify sends `pull_request_id` for preview deployments. For non-preview or production deployments, TesterArmy cannot identify the PR.
3. **GitHub App permissions:** The app needs **Pull requests: Read & Write** permissions.
Duplicate events ignored [#duplicate-events-ignored]
Each webhook is deduplicated by deployment ID. If you see `"reason": "Duplicate event"`, the same deployment was already processed. Retries from your provider won't create duplicate test runs.
401 Unauthorized [#401-unauthorized]
The webhook URL contains the secret token. If you get a 401:
* Verify the URL is correct and hasn't been truncated
* Regenerate the webhook in **Project → Settings → CI / CD** if the secret was compromised
422 Invalid deployment URL [#422-invalid-deployment-url]
The deployment URL failed validation:
* URL must be a valid HTTPS URL
* URL must not point to localhost or internal addresses
# Project Setup (/docs/platform/getting-started)
Use this guide when you want to connect a repository and wire GitHub + your deployment provider into your testing workflow.
Before you start [#before-you-start]
* You must be signed in to Tester.Army.
* Connect your GitHub account in Tester.Army before starting.
* Your GitHub account must have access to the target repository.
1) Create the project [#1-create-the-project]
1. Go to **Projects** in your team dashboard.
2. Click **New Project**.
3. Fill in:
* **Project Name**
* **Project URL**
2) Connect the GitHub App [#2-connect-the-github-app]
This step lets Tester.Army identify PRs for your repository and post test results as PR comments.
1. In the GitHub step, if no app is installed yet, click **Install GitHub App** or open the install page directly:
[https://github.com/apps/testerarmy/installations/new](https://github.com/apps/testerarmy/installations/new)
2. Authorize the app for the org/user and select at least one repository.
3. Choose the installation and repository for this project.
3) Connect your deployment provider [#3-connect-your-deployment-provider]
Choose your deployment provider below.
Vercel [#vercel]
If you use Vercel, TesterArmy automatically receives deployment events through the GitHub App — no webhook needed. You only need the bypass token if Vercel deployment protection blocks preview URLs.
1. Open the Vercel token field in the project wizard or later in **Project → Settings → Vercel**.
2. In Vercel, go to:
* **Project Settings**
* **Deployment Protection**
* **Protection Bypass for Automation**
3. Create and copy the token and paste it into Tester.Army.
4. Click **Create Project** (or **Save** in settings).
Coolify [#coolify]
If you use Coolify, TesterArmy receives deployment events via a webhook that you configure in Coolify's [notification settings](https://coolify.io/docs/knowledge-base/notifications).
1. During onboarding, select **Coolify** as your provider. Or go to **Project → Settings → Webhooks** and click **Add Webhook**.
2. Copy the generated webhook URL.
3. In Coolify, go to **Notifications → Webhook**, paste the URL, and enable deployment events.
For Docker Compose deployments, Coolify has a [known issue](https://github.com/coollabsio/coolify/issues/8958) where the preview URL is not sent. Configure a **Preview URL pattern** (e.g., `https://{{pr_number}}.myapp.com`) when creating the webhook as a workaround.
See [CI / CD](/docs/platform/ci-cd) for more details and troubleshooting.
4) Validate the setup [#4-validate-the-setup]
1. Make sure the project appears in your dashboard.
2. Open a connected GitHub PR that triggers a deployment.
3. Confirm a test run is queued and appears in project results.
4. Open the result details for screenshot evidence, issues, and generated Playwright steps.
Quick troubleshooting [#quick-troubleshooting]
* **No installations found**: install the Tester Army GitHub App on the correct account/org and refresh.
* **No repositories available**: expand app permissions to include that repo.
* **Protected preview cannot open**: verify the Vercel bypass token is valid.
* **Coolify webhook not triggering**: check that deployment events are enabled in Coolify **Notifications → Webhook**.
Related Guides [#related-guides]
* [CI / CD](/docs/platform/ci-cd) for Coolify integration details and troubleshooting.
* [Quick Test](/docs/platform/quick-test) for one-off checks without project setup.
* [API Reference](/docs/api/api-reference) for async run endpoints and webhooks.
# Projects & Authentication (/docs/platform/projects)
Projects organize your QA tests around a specific application or website. Each project can store login credentials that the AI agent uses automatically during tests.
All plans include unlimited projects.
Creating a Project [#creating-a-project]
1. From the dashboard, click **New Project**
2. Enter a project name and the target URL
3. Click **Create**
The project appears in your sidebar and is accessible to all team members.
Project Settings [#project-settings]
Access settings via the **Settings** tab in any project. The **Authentication** section is where account access details for the agent are managed.
Authentication Credentials [#authentication-credentials]
Save login credentials so the AI agent can authenticate automatically during tests.
Adding Credentials [#adding-credentials]
1. Go to **Project → Settings**
2. Click **Add Credential**
3. Fill in:
* **Label** — A name to identify this credential (e.g., "admin", "test-user")
* **Username** — Email or username for login
* **Password** — The password (stored encrypted)
4. Click **Save**
You can add multiple credentials per project for different user roles.
How Credentials Work [#how-credentials-work]
When you run a test:
1. The AI agent receives all saved credentials for the project
2. When encountering a login form, it automatically uses the appropriate credentials
3. No need to include passwords in your test prompts
Passwords are encrypted with AES-256-GCM and only decrypted server-side at runtime. They're never exposed to the client or stored in plain text.
Managing Credentials [#managing-credentials]
From the Settings tab:
* **Edit** — Update the label, username, or password
* **Delete** — Remove credentials you no longer need
Deleting a credential takes effect immediately for future tests.
# Quick Test (/docs/platform/quick-test)
Quick Test lets you run ad-hoc tests without creating a project. It's useful for one-off checks or exploring the platform.
When to Use Quick Test [#when-to-use-quick-test]
* **Trying out Tester.Army** — Test the platform before setting up projects
* **One-time checks** — Verify something quickly without project overhead
* **Exploratory testing** — Investigate an issue without saving history
How to Use [#how-to-use]
1. Click **Quick Test** in the sidebar
2. Enter your test prompt (include the target URL)
3. Run the test
The AI agent performs the test and shows results in real time.
Limitations [#limitations]
Quick Test has some restrictions compared to full projects:
| Feature | Quick Test | Projects |
| ----------------- | ------------- | ------------------ |
| Message history | Not saved | Saved |
| Test results | Not saved | Saved & searchable |
| Saved credentials | Not available | Available |
| Prompt templates | Not available | Available |
| Session sharing | Not available | Available |
Usage & Billing [#usage--billing]
Quick Test does not have a separate usage quota. It's included with your team plan, and paid plans are billed per seat.
When to Use Projects Instead [#when-to-use-projects-instead]
Create a project when you need:
* Persistent test history
* Saved login credentials
* Reusable prompt templates
* Team collaboration via session sharing
* Organized test results over time
Related Guides [#related-guides]
* [Project Setup](/docs/platform/getting-started) to connect repository workflows.
* [API Reference](/docs/api/api-reference) for programmatic async test runs.
* [Getting Started](/docs) for first-time setup basics.
# Scheduled Runs (/docs/platform/scheduled-runs)
Use scheduled runs to run project tests repeatedly without waiting for PR activity.
What scheduled runs do [#what-scheduled-runs-do]
When enabled, each schedule runs a test prompt at a fixed time. This is useful for:
* Nightly smoke checks
* Catching regressions during busy development cycles
* Watching key flows outside of PRs
Where to configure [#where-to-configure]
1. Open **Project → Settings → Scheduled Runs**.
2. Click **New Schedule**.
3. Set:
* Frequency: **Hourly**, **Daily**, or **Weekly**
* Time and timezone
* Day of week (for weekly schedules)
* The prompt to execute
4. Save and keep the schedule active.
Limits and behavior [#limits-and-behavior]
* Up to **5 schedules per project**.
* Schedules execute using the same project credentials and prompt context as normal project runs.
* Use clear prompts so each run stays focused on one goal (e.g., checkout flow, login flow).
# Session Sharing (/docs/platform/shared-sessions)
Share test sessions with your team so they can see your testing work. Shared sessions are read-only for viewers.
Sharing a Session [#sharing-a-session]
1. Open a test session in any project
2. Click the menu (three dots) on the session in the history
3. Select **Share with Team**
The session is now visible to all team members.
Viewing Shared Sessions [#viewing-shared-sessions]
Shared sessions appear in the project's chat history alongside your own sessions. They're marked with the owner's avatar and name so you know who created them.
Permissions [#permissions]
| Action | Owner | Team Member |
| ----------------- | ----- | ----------- |
| View messages | ✓ | ✓ |
| View screenshots | ✓ | ✓ |
| View test results | ✓ | ✓ |
| Send new messages | ✓ | ✗ |
| Toggle sharing | ✓ | ✗ |
| Delete session | ✓ | ✗ |
Team members see a notice that the session is read-only.
Making a Session Private [#making-a-session-private]
1. Open the session you own
2. Click the menu on the session
3. Select **Make Private**
The session is immediately hidden from other team members.
Use Cases [#use-cases]
* **Demonstrating issues** — Share a failing test with your team
* **Knowledge sharing** — Show how you tested a complex flow
* **Review** — Let teammates see your testing approach
* **Debugging** — Collaborate on understanding test failures
# Teams & Billing (/docs/platform/teams)
Tester.Army uses teams to organize users, projects, and billing. Every user belongs to at least one team.
Creating a Team [#creating-a-team]
1. Go to the dashboard sidebar
2. Click the team dropdown at the top
3. Select "Create New Team"
4. Enter a team name and submit
New users automatically get a default "Personal" team created on first login. You can rename it during onboarding or later from team settings.
Team Roles [#team-roles]
| Role | Permissions |
| ---------- | ------------------------------------------------------- |
| **Owner** | Full control, manage billing, delete team |
| **Admin** | Invite/remove members, change roles, edit team settings |
| **Member** | Access projects, run tests, view results |
Only owners can access billing settings. Owners cannot leave their team—transfer ownership first or delete the team.
Inviting Members [#inviting-members]
1. Go to **Team Settings → Members**
2. Click **Invite Member**
3. Enter their email address and select a role
4. They'll receive an email invitation (expires in 7 days)
Existing Tester.Army users also see an in-app notification. You cannot invite yourself or existing team members.
Managing Members [#managing-members]
From the Members tab you can:
* **Change roles** — Click the role dropdown next to a member (admin+ only)
* **Remove members** — Click the menu and select "Remove" (admin+ only)
* **Cancel invitations** — Cancel pending invites before they're accepted
* **Leave team** — Non-owners can leave via the menu
Plans & Pricing [#plans--pricing]
Available plans:
| Plan | PR Tests | Price |
| ---------- | --------- | ----------------------------------- |
| Free | Unlimited | Free |
| Pro | Unlimited | $24/seat/mo (annual) or $30/seat/mo |
| Enterprise | Unlimited | Custom (per seat) |
Paid plans are billed per seat (active team member). Free plans include a 7-day Pro trial.
Billing [#billing]
Owners can manage billing from **Team Settings → Billing**:
* View current plan and seat count
* Upgrade or downgrade plans
* Access the Stripe Customer Portal for invoices and payment methods
Plan changes take effect immediately. Downgrades apply at the next billing cycle.
# Prompt Templates (/docs/platform/templates)
Prompt templates let you create reusable test prompts with variables. Instead of typing the same instructions repeatedly, save them as a template and fill in the specifics each time.
Accessing Templates [#accessing-templates]
Templates are project-scoped. Access them via the **Templates** tab in any project.
Creating a Template [#creating-a-template]
1. Go to **Project → Templates**
2. Click **New**
3. Fill in:
* **Name** — A short, descriptive name
* **Description** — Optional context about when to use this template
* **Content** — The prompt text with optional variables
4. Click **Save**
Using Variables [#using-variables]
Variables use double curly braces: `{{variable_name}}`
**Example template:**
```
Test the {{feature}} on {{page_url}}.
Verify that {{expected_behavior}}.
```
When you use this template, you'll be prompted to fill in each variable before the content is inserted into the chat.
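The fill step amounts to replacing each `{{variable_name}}` with its value. A minimal sketch of that substitution, assuming variable names made of letters, digits, and underscores; this is an illustration, not the dashboard's actual code:

```python
import re

# Illustrative variable fill for {{variable_name}} placeholders.
# Matching is case-sensitive; unknown variables are left intact.
def fill_template(content: str, values: dict) -> str:
    def sub(match):
        name = match.group(1)
        return str(values.get(name, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", sub, content)
```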
Variable Names [#variable-names]
* Use descriptive names: `{{user_email}}` not `{{x}}`
* Underscores or camelCase: `{{page_url}}` or `{{pageUrl}}`
* Variables are case-sensitive
Using a Template [#using-a-template]
1. Open the **Templates** tab
2. Find your template and click **Use Template**
3. Fill in any variables in the dialog
4. The completed prompt is inserted into the chat input
Favorites [#favorites]
Star templates you use frequently. Click the star icon on any template card. Use the **Favorites** filter to quickly find them.
Managing Templates [#managing-templates]
All team members with project access can:
* Create new templates
* Edit existing templates
* Delete templates
* Mark templates as favorites (personal preference)
Changes are visible to all team members immediately.