What is this?
This repository is a guided tutorial. Each checkpoint contains a prompt you feed to an AI coding assistant (Claude, GitHub Copilot, OpenAI Codex, etc.) and a reference implementation showing what the AI produces.
The application under test is Issue Tracker — a Jira-like project management app with Kanban boards, task management, comments, and an admin panel.
You practice by running the prompts yourself, then compare your output against the reference. Differences are expected — AI output varies between runs.
Getting Started
Start from scratch
Clone the repo and open `checkpoints/01-setup/PROMPT.md`. Copy the prompt into your AI assistant.
Jump to a checkpoint
Each checkpoint has two git tags: `cpNN-*-prompt` (before) and `cpNN-*-done` (after). Use `git checkout cp03-auth-prompt` to start from checkpoint 3.
Compare your work
Run `git diff cp04-smoke-prompt..cp04-smoke-done` to see what the reference implementation changed.
Test Results
✓ 36 tests passing — all checkpoints complete. The full suite runs in ~90 seconds against the live app.
| CP | Checkpoint | Tests | What it covers |
| --- | --- | --- | --- |
| 01 | Project Setup | 1 | Playwright config, folder structure, placeholder test |
| 02 | Explore & Plan | — | App analysis, 88-case test plan with priorities |
| 03 | Auth & Helpers | 1 | Storage state auth, API helpers, custom fixtures |
| 04 | Smoke Tests | 6 | Login, project creation, task CRUD, comments, search |
| 05 | Task Lifecycle | 9 | Create, edit, status, priority, assignee, due date, delete, list view |
| 06 | Comments | 5 | Add, edit, cancel, delete, cross-user permissions |
| 07 | Project Settings | 6 | Edit details, member management, project deletion |
| 08 | Advanced | 6 | Admin panel, board filters, user profile, multi-user |
Test Accounts
All accounts use the password `password123`.
| Name | Email | Role |
| --- | --- | --- |
| Admin User | admin@example.com | admin |
| Alice | alice@example.com | user |
| Bob | bob@example.com | user |
Checkpoints
**Checkpoint 01 — Project Setup.** Initialize a Playwright E2E project from scratch — `package.json`, `playwright.config.ts`, TypeScript config, folder structure, and a placeholder test that verifies the login page loads.

Prompt:
I need you to set up a Playwright end-to-end testing project from scratch. This project will contain E2E tests for a web application that is already deployed and running.
### About the application under test
- **URL**: https://issues-demo.devplant.academy
- **What it is**: A Jira-like issue tracker with projects, Kanban boards, task management, comments, and an admin panel
- **Auth**: Email/password login with cookie-based sessions
- **Tech**: It's a Next.js app, but that doesn't matter for our tests — we're testing it as a black box through the browser
### What I need you to set up
**Package & dependencies:**
- Initialize a new Node.js project with `package.json`
- Install Playwright with `@playwright/test` and install browsers
- Use TypeScript for all test files
- Add a `tsconfig.json` appropriate for a Playwright project
**Playwright configuration (`playwright.config.ts`):**
- Base URL: `https://issues-demo.devplant.academy`
- Only use Chromium (we don't need cross-browser testing for this course)
- Reasonable timeouts: 30s for tests, 10s for actions/navigation
- Take a screenshot on failure
- HTML reporter for viewing results locally
- Retain test traces on failure (for debugging)
- Run tests sequentially for now (workers: 1) — we'll deal with parallelism later
**Folder structure:**
- `tests/` — where test files will go
- `tests/helpers/` — for shared utilities and helpers
- Create a placeholder test file `tests/example.spec.ts` with a single simple test that verifies the app's login page loads (navigate to `/login` and check that the page contains a sign-in form). This confirms the setup works.
**Scripts in package.json:**
- `test` — run all tests
- `test:ui` — run tests with Playwright UI mode
- `test:headed` — run tests in headed mode (visible browser)
- `test:report` — open the HTML report
**Other:**
- Add a `.gitignore` to exclude `node_modules/`, `test-results/`, `playwright-report/`, `.env` files, and any Playwright state directories
- Do NOT set up authentication helpers yet — that comes in a later checkpoint
After setup, run the placeholder test to verify everything works.
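For reference, a config along these lines satisfies the requirements in this prompt. Treat it as a sketch of the expected shape — your AI assistant's output will differ in detail, and this is not the checkpoint's exact reference file:

```typescript
// Sketch of a playwright.config.ts matching the prompt's requirements:
// Chromium only, sequential, 30s/10s timeouts, HTML report, failure artifacts.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  timeout: 30_000,                          // per-test timeout
  workers: 1,                               // sequential for now
  reporter: [['html', { open: 'never' }]],  // view with `npm run test:report`
  use: {
    baseURL: 'https://issues-demo.devplant.academy',
    actionTimeout: 10_000,
    navigationTimeout: 10_000,
    screenshot: 'only-on-failure',
    trace: 'retain-on-failure',
  },
  projects: [{ name: 'chromium', use: { browserName: 'chromium' } }],
});
```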
**Checkpoint 02 — Explore & Plan.** Study the application spec and OpenAPI docs, then produce a structured test plan with 80+ test cases organized by feature area, priorities (P0/P1/P2), `data-testid` references, and critical-path identification.

Prompt:
I'm setting up E2E tests for an issue tracker web application. Before writing any tests, I need you to study the application thoroughly and produce a structured test plan.
### Application docs
The following documents describe the application in detail:
- `docs/APP_SPEC.md` — Full feature specification with UI behavior, page routes, and `data-testid` selectors
- `docs/openapi.json` — OpenAPI spec for the REST API (base path: `/api/v1`)
Please read both files carefully.
### About the application
- **Live URL**: https://issues-demo.devplant.academy
- **API docs**: https://issues-demo.devplant.academy/api/docs
- **Test accounts** (all use password `password123`):
- `admin@example.com` (admin role)
- `alice@example.com` (regular user)
- `bob@example.com` (regular user)
### What I need you to produce
Create a file `docs/TEST_PLAN.md` with a structured test plan. Organize it as follows:
**1. Test areas** — Group tests by feature area matching the app spec sections:
- Authentication (login, registration, session)
- Projects (list, create, navigation)
- Kanban Board (columns, drag-and-drop, task cards, context menu)
- Task List View (table, sorting, filtering)
- Task Detail (view, inline editing, sidebar fields)
- Comments (add, edit, delete)
- Project Settings (edit details, manage members, delete project)
- User Profile (profile info, notifications, password change)
- Admin Panel (user list, role changes, bulk operations)
- Global Search (Cmd+K, search results, navigation)
**2. For each test area**, list specific test cases with:
- A short descriptive name (this will become the test title)
- Priority: **P0** (critical path — must never break), **P1** (important functionality), or **P2** (edge cases and nice-to-haves)
- Any relevant `data-testid` selectors from the spec
- Notes on test data setup needed
**3. Test data strategy** — Document what test data the tests will need:
- Which test accounts to use and for what purpose
- Whether tests should create their own data or rely on seeded data
- How to handle cleanup between tests
- Recommend using unique names with timestamps to avoid collisions
**4. Critical paths** — Identify the top 5-8 user journeys that are most important to test (these become our smoke tests in a later checkpoint).
### Guidelines
- Be thorough but practical. Focus on what's testable through the UI.
- Don't plan tests for things that are purely visual (animations, exact colors) unless they indicate state.
- Drag-and-drop testing with Playwright is tricky — note it as a test case but flag it as potentially complex.
- For the API, we won't write separate API tests. But note where API calls might be useful for test setup.
- The app uses `data-testid` attributes extensively — prefer these for selectors.
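The plan entries requested above map naturally onto a small record shape. This type is purely hypothetical — the checkpoint produces a markdown file, not code — but it shows the fields each case should carry, and it comes in handy if you later want tooling to count cases per priority:

```typescript
// Hypothetical shape for one TEST_PLAN.md entry. Field names are
// illustrative, not part of the reference implementation.
type Priority = 'P0' | 'P1' | 'P2';

interface PlannedCase {
  area: string;        // feature area, e.g. "Comments"
  name: string;        // becomes the Playwright test title
  priority: Priority;  // P0 = critical path, P1 = important, P2 = edge case
  testIds?: string[];  // data-testid selectors referenced by the case
  dataSetup?: string;  // notes on required test data
}

const example: PlannedCase = {
  area: 'Comments',
  name: 'add a comment to a task',
  priority: 'P0',
  testIds: ['comment-input', 'comment-submit-button'],
  dataSetup: 'project + task created via API as Alice',
};
```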
**Checkpoint 03 — Auth & Helpers.** Scaffold authentication infrastructure — storage state setup for three accounts, API helpers for test data management, custom Playwright fixtures with auto-cleanup, and the setup project pattern.

Prompt:
I have a Playwright E2E testing project set up for an issue tracker app. Now I need to scaffold the authentication infrastructure and reusable test helpers.
### Context
- **App URL**: https://issues-demo.devplant.academy
- **Auth mechanism**: Email/password login with cookie-based sessions
- **Test accounts** (all use password `password123`):
- `admin@example.com` (admin role)
- `alice@example.com` (regular user)
- `bob@example.com` (regular user)
- **API base**: `/api/v1` — supports REST endpoints for projects, tasks, comments (see `docs/openapi.json`)
- **Test selectors**: The app uses `data-testid` attributes — see `docs/APP_SPEC.md`
### What I need
#### 1. Auth storage state setup
Create a Playwright global setup that:
- Logs in as each test account (admin, alice, bob) via the UI
- Saves each session's storage state to separate files (`.auth/admin.json`, `.auth/alice.json`, `.auth/bob.json`)
- Add `.auth/` to `.gitignore`
Update `playwright.config.ts` to:
- Run the auth setup as a dependency project (Playwright's `setup` project pattern)
- Create test projects that depend on the setup (default project using Alice's session)
- Keep Chromium-only, sequential execution
#### 2. Test constants
Create `tests/helpers/constants.ts` with:
- Test account credentials (email, password, role, display name) as a typed object
- Any other commonly-needed constants
#### 3. API helpers
Create `tests/helpers/api.ts` with helper functions for test data setup via REST API:
- `createProject(request, { name, key, description })`
- `deleteProject(request, projectId)`
- `addProjectMember(request, projectId, email)`
- `createTask(request, projectId, { title, status?, priority?, assigneeId?, description?, dueDate? })`
- `deleteTask(request, taskId)`
- `addComment(request, taskId, body)`
Each helper should accept an `APIRequestContext`, call endpoints under `/api/v1`, and throw a descriptive error on failure.
#### 4. Test fixtures
Create `tests/helpers/fixtures.ts` that extends Playwright's `test` with:
- `aliceApi` — authenticated `APIRequestContext` for Alice
- `testProject` — creates a project with unique name and cleans up after
- Export `test` and `expect`
#### 5. Update the example test
Update `tests/example.spec.ts` to use the new auth infrastructure. Verify authenticated user lands on `/projects`.
After making all changes, run the tests to verify the auth setup works.
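A typical helper from `tests/helpers/api.ts` looks roughly like the sketch below. `RequestLike` is a minimal structural stand-in for Playwright's `APIRequestContext` so the sketch stays dependency-free; the real helper would take `APIRequestContext` directly, and the response shape is an assumption:

```typescript
// Hedged sketch of one API helper. Names and shapes are illustrative.
type ApiResponse = {
  ok(): boolean;
  status(): number;
  json(): Promise<unknown>;
};

type RequestLike = {
  post(url: string, options?: { data?: unknown }): Promise<ApiResponse>;
};

async function createProject(
  request: RequestLike,
  opts: { name: string; key: string; description?: string },
): Promise<unknown> {
  const res = await request.post('/api/v1/projects', { data: opts });
  if (!res.ok()) {
    // Descriptive error so fixture failures are easy to diagnose
    throw new Error(`createProject("${opts.name}") failed: HTTP ${res.status()}`);
  }
  return res.json();
}
```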
**Checkpoint 04 — Smoke Tests.** Six critical-path smoke tests: login flow, project creation (wizard), task creation on the Kanban board, task editing, comments, and global search navigation.

Prompt:
I have a Playwright E2E testing project with authentication infrastructure already set up. Now I need smoke tests covering the critical user journeys of the issue tracker app.
### Context
Read these files to understand the app and existing test infrastructure:
- `docs/APP_SPEC.md` — Full app specification with `data-testid` selectors
- `docs/TEST_PLAN.md` — Test plan with critical paths identified
- `tests/helpers/constants.ts` — Test account credentials
- `tests/helpers/api.ts` — API helpers for creating projects, tasks, comments
- `tests/helpers/fixtures.ts` — Custom fixtures (`testProject`, `aliceApi`)
- `playwright.config.ts` — Current Playwright configuration
### Important: Test isolation
Every test must be fully idempotent:
- Create all the data it needs — never rely on pre-existing data
- Clean up after itself
- Use unique names with timestamps
- Use the API helpers for setup/teardown — only use the UI when testing the UI flow itself
### What I need
Create `tests/smoke.spec.ts` with these smoke tests:
1. **Login and view projects** — Log out first, navigate to `/login`, enter Alice's credentials, submit, verify redirect to `/projects`.
2. **Create a project and verify it appears** — Click "Create Project", fill in project name and key, submit, verify the project card appears. Clean up via API.
3. **Create a task on the Kanban board** — Use `testProject` fixture. Navigate to board. Click "+" on "To Do" column. Fill in title, submit. Verify task card appears.
4. **View and edit a task** — Use `testProject`. Create task via API. Navigate to detail. Edit title inline. Change status. Verify changes.
5. **Add a comment to a task** — Use `testProject`. Create task via API. Navigate to detail. Type comment, submit. Verify it appears.
6. **Search for a task and navigate** — Use `testProject`. Create task with distinctive title. Open global search. Type part of title. Verify result appears. Click it. Verify navigation.
### Guidelines
- Import `test` and `expect` from `./helpers/fixtures`
- Use `data-testid` selectors from APP_SPEC.md
- For login test, clear cookies first with `page.context().clearCookies()`
- Use `await expect(...).toBeVisible()` for asserting element presence
After writing the tests, run them to verify they all pass.
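The "unique names with timestamps" rule is easy to centralize in a tiny helper. The name `uniqueName` is hypothetical — the reference code may spell it differently — but the shape is this:

```typescript
// Illustrative helper for timestamp-based unique test data names.
function uniqueName(prefix: string): string {
  // Millisecond timestamp plus a short random suffix guards against
  // two tests starting in the same millisecond.
  const suffix = Math.random().toString(36).slice(2, 6);
  return `${prefix} ${Date.now()}-${suffix}`;
}
```

A smoke test would then create `uniqueName('Smoke Project')` instead of a hard-coded name, so repeated runs against the shared environment never collide.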
**Checkpoint 05 — Task Lifecycle.** Comprehensive task testing — create from board, inline title/description editing with save and cancel, status and priority changes, assignee management, due date set/clear, context-menu deletion, and list-view sorting.

Prompt:
I have a Playwright E2E testing project with auth infrastructure and smoke tests. Now I need detailed tests for the task lifecycle in the issue tracker app.
### Context
Read these files first:
- `docs/APP_SPEC.md` — Full app spec with `data-testid` selectors (especially Task Detail, Kanban Board, Task List sections)
- `tests/helpers/constants.ts` — Test account credentials
- `tests/helpers/api.ts` — API helpers for creating projects, tasks, etc.
- `tests/helpers/fixtures.ts` — Custom fixtures (`testProject`, `aliceApi`)
- `tests/smoke.spec.ts` — Follow the same patterns
### Important: Test isolation
Every test must be idempotent:
- Create all data via API helpers or the `testProject` fixture
- Never rely on pre-existing data
- The `testProject` fixture auto-cleans up (deleting a project removes all its tasks)
- Use unique names with timestamps
### What I need
Create `tests/tasks.spec.ts` with 9 tests using `testProject` fixture:
1. **Create task from board** — Navigate to board. Click "+" on "To Do". Fill title, submit. Verify task card appears.
2. **Edit task title inline** — Create task via API. Navigate to detail. Click title, change text, press Enter. Reload and verify.
3. **Edit task description with save and cancel** — Create task via API. Navigate to detail. Edit description, save, verify. Then edit again, cancel, verify original persists.
4. **Change task status from detail** — Create task with status "todo". Navigate to detail. Change status to "In Progress". Verify on detail and board.
5. **Change task priority** — Create task via API. Navigate to detail. Change priority to "high". Verify.
6. **Change task assignee** — Create task via API. Add Bob as member. Navigate to detail. Change assignee to Bob. Verify.
7. **Set and clear due date** — Create task via API. Navigate to detail. Set date, verify. Clear date, verify.
8. **Delete task from board context menu** — Create task via API. Navigate to board. Open context menu. Click Delete. Verify card gone.
9. **Task list view and sorting** — Create 2-3 tasks via API. Navigate to list view. Verify all appear. Click column header. Verify sort changes.
### Guidelines
- Import `test` and `expect` from `./helpers/fixtures`
- Use `data-testid` selectors from APP_SPEC.md
- The API returns task keys as `taskKey` (e.g., "ABCD-1"), not `key`
- For dropdowns, after clicking trigger, use `getByRole('option', { name: /.../ })`
After writing the tests, run the full suite to verify everything passes.
**Checkpoint 06 — Comments.** Comment CRUD operations plus a cross-user permission test using separate browser contexts — verifying that Bob cannot edit or delete Alice's comments.

Prompt:
I have a Playwright E2E testing project with auth, smoke tests, and task tests. Now I need tests for the comment system.
### Context
Read these files first:
- `docs/APP_SPEC.md` — Comments section and its `data-testid` selectors
- `tests/helpers/constants.ts` — Test account credentials (especially Bob's `storageStatePath`)
- `tests/helpers/api.ts` — API helpers including `addComment` and `addProjectMember`
- `tests/helpers/fixtures.ts` — Custom fixtures
- `tests/tasks.spec.ts` — Follow the same patterns
### Important: Test isolation
Every test must be idempotent:
- Create all data via API helpers or the `testProject` fixture
- The `testProject` fixture auto-cleans up
- Use unique text with timestamps
### What I need
Create `tests/comments.spec.ts` with these tests using `testProject` fixture:
1. **Add a comment** — Navigate to task detail. Type in `comment-input`, click `comment-submit-button`. Verify text appears in `comment-thread`.
2. **Edit own comment** — Add comment via API. Navigate to detail. Find comment by text. Click edit button, change text, save. Verify update.
3. **Cancel comment edit** — Add comment via API. Navigate to detail. Start editing, change text, click cancel. Verify original persists.
4. **Delete own comment** — Add comment via API. Navigate to detail. Click delete. Verify comment removed.
5. **Cannot edit or delete another user's comment** — Add Bob as project member. Create task as Alice. Add comment as Alice. Create Bob's browser context with `browser.newContext({ storageState: TEST_ACCOUNTS.bob.storageStatePath })`. Open task in Bob's context. Verify Bob sees Alice's comment but has NO edit/delete buttons. Close Bob's context.
### Guidelines
- Import `test` and `expect` from `./helpers/fixtures`
- Use `data-testid` selectors from APP_SPEC.md
- Comment selectors use dynamic IDs — find comments by text content within `comment-thread`, then use relative locators for edit/delete buttons
- For cross-user test, you need the `browser` fixture
After writing the tests, run the full suite to verify everything passes.
**Checkpoint 07 — Project Settings.** Project configuration management — view and edit project details, add and remove members, verify read-only fields, and complete project deletion with a confirmation dialog.

Prompt:
I have a Playwright E2E testing project with auth, smoke, task, and comment tests. Now I need tests for project settings: editing project details, managing members, and deleting a project.
### Context
Read these files first:
- `docs/APP_SPEC.md` — Project Settings section and its `data-testid` selectors
- `tests/helpers/api.ts` — API helpers (especially `createProject`, `deleteProject`, `addProjectMember`)
- `tests/helpers/fixtures.ts` — Custom fixtures (`testProject`, `aliceApi`)
- `tests/tasks.spec.ts` — Follow the same patterns
### Important: Test isolation
Every test must be idempotent:
- Create all data via API helpers or the `testProject` fixture
- The `testProject` fixture auto-cleans up
- For delete test, create a **separate** project via API (not `testProject`) since the test deletes it
### What I need
Create `tests/project-settings.spec.ts` with these tests:
1. **Settings page shows current project details** — Navigate to settings. Verify `settings-name-input` has project name. Verify `settings-key-input` has key.
2. **Edit project name and description** — Navigate to settings. Change name and description. Save. Reload. Verify persisted.
3. **Project key is read-only** — Navigate to settings. Verify `settings-key-input` is disabled.
4. **Add member to project** — Navigate to settings. Enter Bob's email. Click add. Verify new row appears.
5. **Remove member from project** — Add Bob via API first. Navigate to settings. Find Bob's row. Click remove. Verify gone.
6. **Delete project** — Do NOT use `testProject` fixture. Create project via `aliceApi`. Navigate to settings. Click delete. Confirm. Verify redirect to `/projects`.
### Guidelines
- Import `test` and `expect` from `./helpers/fixtures`
- Use `data-testid` selectors from APP_SPEC.md
- For delete test, use `aliceApi` fixture directly
- Member rows use dynamic `member-row-{userId}` selectors — find members by text content
After writing the tests, run the full suite to verify everything passes.
**Checkpoint 08 — Advanced.** Multi-user admin panel tests (user list, role changes, bulk operations), Kanban board filtering (search, assignee, priority), and user profile management with a throwaway account for password changes.

Prompt:
I have a Playwright E2E testing project with auth, smoke, task, comment, and project settings tests. Now I need tests for advanced scenarios: admin panel, board filters, and user profile.
### Context
Read these files first:
- `docs/APP_SPEC.md` — Admin Panel, Kanban Board (filters), User Profile sections and their `data-testid` selectors
- `tests/helpers/constants.ts` — Test account credentials (admin, alice, bob and their `storageStatePath`)
- `tests/helpers/api.ts` — API helpers
- `tests/helpers/fixtures.ts` — Custom fixtures
- `tests/comments.spec.ts` — See the cross-user test pattern for creating separate browser contexts
### Important: Test isolation
Every test must be idempotent:
- Create all data via API helpers or fixtures
- Clean up after itself
- Use unique names with timestamps
- For admin role-change tests, **revert the role back** in cleanup
- For password change tests, use a **freshly registered throwaway account**
### What I need
Create `tests/advanced.spec.ts` with these 6 tests:
#### Admin Panel (3 tests)
Use admin browser context: `browser.newContext({ storageState: TEST_ACCOUNTS.admin.storageStatePath })`
1. **Admin sees user list, non-admin is blocked** — Open `/admin/users` in admin context. Verify admin page visible. Open same URL in Alice's page. Verify Alice is redirected/blocked. Check `sidebar-nav-users` visible for admin, not for Alice. Close admin context.
2. **Admin changes a user's role** — Open admin page. Find non-admin user. Change role via dropdown. Verify change. **Revert role** before cleanup.
3. **Bulk role change with confirmation** — Open admin page. Select 2+ users via checkboxes. Verify `bulk-action-bar` appears. Choose role, confirm in dialog. Verify changes. **Revert roles** before cleanup.
#### Board Filters (1 test)
4. **Board filters: search, assignee, priority, and clear** — Use `testProject`. Create 3 tasks with different titles, assignees, and priorities. Navigate to board. Test each filter type. Click `filter-clear` to verify all tasks visible again.
#### Profile (2 tests)
5. **Edit profile display name** — Navigate to `/profile`. Change name. Save. Reload. Verify persisted. **Revert name**.
6. **Change password with throwaway account** — Register new account via UI. Navigate to `/profile`. Click `security-tab-trigger`. Fill current/new password. Submit. Verify success. No cleanup needed.
### Guidelines
- Import `test` and `expect` from `./helpers/fixtures`
- Use `data-testid` selectors from APP_SPEC.md
- For admin tests, use `browser` fixture for separate contexts
- For bulk role change: find user rows by email content, not by IDs
After writing the tests, run the full suite to verify everything passes.
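The "revert the role back" and "revert name" requirements above all follow a capture-change-restore shape. Stripped of Playwright specifics, the pattern can be sketched in plain TypeScript — `get` and `set` are hypothetical accessors for whatever shared state the test mutates:

```typescript
// Dependency-free sketch of the "revert shared state in cleanup" pattern
// used for role and profile-name changes. Names are illustrative.
async function withReverted<T>(
  get: () => Promise<T>,
  set: (value: T) => Promise<void>,
  temp: T,
  body: () => Promise<void>,
): Promise<void> {
  const original = await get();   // capture the pre-test value
  await set(temp);                // apply the test's change
  try {
    await body();                 // run assertions against the new state
  } finally {
    await set(original);          // always restore, even if the body throws
  }
}
```

The `try`/`finally` is the important part: the shared environment returns to a known state even when an assertion inside the body fails.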
Exploring Apps with AI + Playwright MCP
AI coding assistants can explore web applications autonomously using browser automation tools. With Playwright MCP (Model Context Protocol), tools like Claude Code, GitHub Copilot, and OpenAI Codex can launch a browser, navigate pages, interact with elements, and take screenshots — just like a QA engineer would during manual exploratory testing.
This means you can ask your AI assistant to explore an app before writing a single test. It will discover pages, forms, navigation flows, and interactive elements on its own, then document what it found.
Example prompt for app exploration
Copy this into your AI assistant to have it explore any web app:
I need you to explore a web application and document everything you find.
App URL: https://issues-demo.devplant.academy
Login: alice@example.com / password123
Using the browser, please:
1. Log in with the credentials above
2. Systematically visit every page and screen you can find
3. Document all navigation paths, forms, buttons, and interactive elements
4. Note any `data-testid` attributes you find on elements
5. Take screenshots of each major screen
6. Try creating, editing, and deleting data to understand the workflows
Produce a structured document listing:
- Every route/URL you discovered
- Key UI elements and their selectors on each page
- User flows (what leads to what)
- Any error states or edge cases you noticed
The AI will launch a browser, navigate through the app, and return a comprehensive map of the application — ready to inform your test plan.
Writing Effective Test Prompts
The prompts in this course follow a consistent structure that produces reliable results across different AI assistants. Each prompt includes:
Context first
Tell the AI which files to read before starting. This grounds it in your existing code, patterns, and conventions.
Explicit constraints
State requirements like test isolation, cleanup, and unique naming upfront. AI assistants follow explicit rules better than implied ones.
Numbered test specs
Each test is described with a name, specific steps, and expected outcomes. This prevents the AI from improvising or skipping cases.
Selector strategy
Reference data-testid selectors by name. The AI will use stable selectors instead of fragile CSS classes or text matching.
Verify at the end
Always ask the AI to run the tests after writing them. This catches issues in the same conversation where they can be fixed.
Test Isolation Pattern
Every test in this project is fully idempotent — it creates what it needs and cleans up after itself. This is critical for reliable E2E testing against a shared environment.
```typescript
import { test } from './helpers/fixtures';

// `testProject` is created fresh before this test and deleted afterwards
// by the fixture — the test body never touches shared data.
test('create a task', async ({ page, testProject }) => {
  await page.goto(`/projects/${testProject.key}/board`);
});
```
For tests that modify shared state (like admin role changes), the prompt explicitly requires reverting changes in cleanup — so the next test run starts from a known state.
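Under the hood, the `testProject` fixture boils down to create-use-delete. Stripped of Playwright's fixture machinery, the shape is roughly this — the `Api` type and all names here are illustrative stand-ins, not the reference code:

```typescript
// Dependency-free sketch of the create-use-cleanup pattern behind the
// `testProject` fixture. `Api` is a hypothetical stand-in for the real
// API helper layer.
type Project = { id: string; name: string; key: string };
type Api = {
  createProject(opts: { name: string; key: string }): Promise<Project>;
  deleteProject(id: string): Promise<void>;
};

async function withTestProject<T>(
  api: Api,
  fn: (project: Project) => Promise<T>,
): Promise<T> {
  const stamp = Date.now();
  const project = await api.createProject({
    name: `E2E Project ${stamp}`,                          // unique name avoids collisions
    key: `E${stamp.toString(36).toUpperCase()}`.slice(0, 10),
  });
  try {
    return await fn(project);                              // run the test body
  } finally {
    await api.deleteProject(project.id);                   // cleanup even if the body throws
  }
}
```

Playwright's `test.extend` expresses the same thing declaratively: setup before `use()`, teardown after it.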