The automation tests work on your machine. Now what? The hardest part of test automation isn’t writing tests — it’s working with your team. Who writes what? Who reviews test code? What happens when a developer’s code change breaks 15 tests? Who fixes them?

This post covers the collaboration patterns I’ve seen work across multiple teams. It’s about people and process, not just code.

The Collaboration Problem

Here’s what I see on most teams when automation starts:

  • QC writes tests in isolation → Dev doesn’t know they exist
  • Dev changes the UI → 20 tests break → Nobody fixes them
  • QC merges tests without code review → Brittle tests accumulate
  • No shared ownership → Tests become “QC’s problem”
  • Different coding styles → Inconsistent, hard to maintain

The fix is simple in concept: treat test code like product code. Same repo, same reviews, same standards.

Task Splitting: Who Does What

The Clear Division

| Task | Primary Owner | Support |
|---|---|---|
| Test scenario design | QC | PO validates acceptance criteria |
| Page Objects | QC + Dev | Dev adds data-testid attributes |
| E2E test scripts | QC | Dev reviews code quality |
| API tests | Dev | QC reviews test coverage |
| Unit tests | Dev | QC reviews edge cases |
| BDD feature files | QC + PO | Dev reviews step definitions |
| CI pipeline setup | Dev | QC validates test execution |
| Test data management | QC + Dev | Both contribute test fixtures |
| Flaky test fixing | Whoever wrote it | Both investigate |
| Visual regression baselines | QC | Dev reviews UI changes |

The Developer’s Responsibilities

Developers don’t need to write E2E tests. But they need to:

  1. Add data-testid attributes when QC requests them:
<!-- Before: Hard for QC to locate -->
<div class="card-wrapper flex-1 p-4">
  <h3>Product Title</h3>
</div>

<!-- After: Easy for QC to locate -->
<div class="card-wrapper flex-1 p-4" data-testid="product-card">
  <h3 data-testid="product-title">Product Title</h3>
</div>
  2. Keep the test environment stable — Don’t change API contracts without updating test mocks
  3. Fix tests their changes break — If a PR changes the login button text, update the LoginPage locator
  4. Review QC’s test PRs — Check code quality, suggest improvements
  5. Expose test hooks — Provide API endpoints for test data setup/teardown
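
Test hooks work best when the endpoint URLs live in one shared module that QC and Dev both own. Here is a minimal sketch, assuming hypothetical `/test-hooks/seed/:fixture` endpoints — the paths and payload shapes are illustrative, not an existing API:

```typescript
// Shared test-hook request builders. The /test-hooks/seed/:fixture paths are
// hypothetical — adapt them to whatever endpoints your backend actually exposes.
type HookRequest = { method: "POST" | "DELETE"; url: string; body?: unknown };

// Request for seeding a named fixture before a test.
function seedRequest(baseUrl: string, fixture: string, data: unknown): HookRequest {
  return { method: "POST", url: `${baseUrl}/test-hooks/seed/${fixture}`, body: data };
}

// Matching teardown request, so every seed has a cleanup pair.
function teardownRequest(baseUrl: string, fixture: string): HookRequest {
  return { method: "DELETE", url: `${baseUrl}/test-hooks/seed/${fixture}` };
}
```

In a Playwright suite, these requests would typically be sent with the built-in `request` fixture inside `beforeEach`/`afterEach`.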

The QC Team’s Responsibilities

  1. Design test scenarios — What to test, which edge cases, priority order
  2. Write and maintain tests — Page Objects, E2E specs, BDD features
  3. Review developer tests — Check coverage, suggest missing edge cases
  4. Monitor test health — Track flaky tests, report test execution trends
  5. Maintain test data — Keep fixtures, mock data, and golden datasets current
  6. Document test patterns — Contributing guides, naming conventions, prompt templates

Repository Structure for Shared Projects

Tests live in the same repository as the application code. This ensures:

  • Tests are versioned alongside the code they test
  • A PR that changes the UI includes the test updates
  • CI runs tests on the same commit
your-project/
├── src/                          ← Application code (Dev owns)
│   ├── components/
│   ├── pages/
│   └── api/
├── tests/                        ← Test code (QC + Dev shared)
│   ├── pages/                    ← Page Objects (QC writes, Dev reviews)
│   │   ├── LoginPage.ts
│   │   ├── BlogPage.ts
│   │   └── DashboardPage.ts
│   ├── fixtures/                 ← Shared fixtures (QC writes)
│   │   └── base.fixture.ts
│   ├── e2e/                      ← E2E tests (QC writes, Dev reviews)
│   │   ├── auth.spec.ts
│   │   ├── blog.spec.ts
│   │   └── dashboard.spec.ts
│   ├── api/                      ← API tests (Dev writes, QC reviews)
│   │   ├── products.spec.ts
│   │   └── auth.spec.ts
│   ├── unit/                     ← Unit tests (Dev writes)
│   │   └── ...
│   ├── features/                 ← BDD features (QC + PO write)
│   │   ├── login.feature
│   │   └── search.feature
│   ├── steps/                    ← Step definitions (QC writes)
│   │   └── login.steps.ts
│   ├── data/                     ← Test data (QC manages)
│   │   └── search-scenarios.json
│   └── prompts/                  ← AI prompt templates (QC maintains)
│       └── page-object.md
├── playwright.config.ts          ← Config (Dev + QC set up together)
├── .github/
│   └── workflows/
│       └── tests.yml             ← CI pipeline (Dev sets up, QC validates)
└── CONTRIBUTING-TESTS.md         ← Testing conventions (QC writes)

CONTRIBUTING-TESTS.md — The Shared Agreement

Create a contributing guide specifically for tests:

# Test Contribution Guide

## Naming Conventions
- Page Objects: `[PageName]Page.ts` (e.g., `LoginPage.ts`)
- E2E tests: `[feature].spec.ts` (e.g., `auth.spec.ts`)
- API tests: `[endpoint].spec.ts` (e.g., `products.spec.ts`)
- BDD features: `[feature].feature` (e.g., `login.feature`)

## Locator Priority
1. `getByRole()` — buttons, links, headings
2. `getByLabel()` — form inputs
3. `getByPlaceholder()` — search fields
4. `getByText()` — static text
5. `getByTestId()` — last resort

## Writing Tests
- Every test must be independent
- Use `beforeEach` for navigation
- Never use `waitForTimeout()`
- Keep assertions in test files, not Page Objects

## Requesting data-testid
If you need a data-testid attribute added to an element:
1. Create a GitHub issue with the element description
2. Tag the relevant developer
3. Include a screenshot of the element

## PR Review Process
- QC tests reviewed by: at least one developer
- Dev tests reviewed by: at least one QC member
- All tests must pass CI before merge
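
The locator priority in the guide translates into Page Object code roughly like this. A sketch with a hypothetical `SearchPage`; the minimal `PageLike` interface stands in for Playwright's `Page` so the snippet is self-contained:

```typescript
// Minimal structural stand-in for Playwright's Page, so this sketch compiles
// without importing @playwright/test.
interface PageLike {
  getByRole(role: string, options?: { name?: string }): unknown;
  getByLabel(text: string): unknown;
  getByPlaceholder(text: string): unknown;
  getByTestId(id: string): unknown;
}

class SearchPage {
  constructor(private page: PageLike) {}

  // 1. Role-based: resilient to markup and CSS changes
  get searchButton() { return this.page.getByRole("button", { name: "Search" }); }
  // 2. Label-based for form inputs
  get categoryFilter() { return this.page.getByLabel("Category"); }
  // 3. Placeholder for the search field
  get searchInput() { return this.page.getByPlaceholder("Search articles..."); }
  // 5. Test ID only as a last resort, e.g. an unlabeled container
  get resultsList() { return this.page.getByTestId("search-results"); }
}
```

Note the sketch also follows the guide's assertion rule: the Page Object only exposes locators, while `expect(...)` calls stay in the spec files.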

The PR Workflow

Scenario 1: QC Creates New Tests

1. QC creates branch: test/blog-search-automation
2. QC writes Page Object and test files
3. QC opens PR with:
   - Description of what's being tested
   - Which manual test cases are now automated
   - Any new data-testid requests
4. CI runs the tests automatically
5. Dev reviews code quality and locator choices
6. After approval, QC merges

Scenario 2: Dev Changes Break Tests

1. Dev pushes a code change to their feature branch
2. CI runs tests → 3 tests fail
3. Dev checks if the failures are:
   a. Expected (they changed the UI intentionally) → Dev updates tests in the same PR
   b. Unexpected (regression) → Dev fixes the bug
4. If Dev needs help updating tests → tag QC in the PR
5. Both agree on the fix before merging

Scenario 3: New Feature Needs Tests

Sprint Planning:
1. PO describes the feature
2. QC writes acceptance criteria as BDD feature files
3. Dev and QC agree on the Definition of Done

During Sprint:
1. Dev implements the feature
2. QC writes automation tests (can start with feature file)
3. Dev adds data-testid attributes as QC requests
4. Both PRs reference the same ticket

End of Sprint:
1. Feature PR and test PR are merged together
2. CI validates both

Communication Rituals

Weekly Test Review (30 minutes)

Who: QC lead + Dev lead + interested team members
When: Every Monday morning
Agenda:

  1. Test health dashboard (5 min) — Pass rate, flaky tests, coverage trends
  2. Broken tests triage (10 min) — Who fixes what? Root cause?
  3. New test requests (10 min) — QC needs data-testid? Dev changing APIs?
  4. Blockers (5 min) — Test environment issues, missing test data

Slack Channel: #test-automation

Create a dedicated channel for:

  • CI test failure notifications (automated)
  • Questions about test code
  • Data-testid requests
  • Celebrating catches (“Visual regression caught a CSS overflow on mobile!”)

When Developer Changes Break QC Tests

This is the most common source of friction. Here’s the protocol:

1. CI fails on Dev's PR
2. Dev checks: "Did my change cause this?"
   - If YES → Dev updates the test OR asks QC for help
   - If NO → Flaky test. Flag it in #test-automation
3. The rule: whoever changes the UI, updates the Page Object
4. Exception: if it's a major redesign, QC rewrites the tests

Key principle: The person who makes the change owns the fix. If a developer changes a button from “Submit” to “Save”, they update LoginPage.ts. If a designer redesigns the entire page, QC rewrites the Page Object.
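
That ownership rule stays cheap because the button text lives in exactly one place. A minimal sketch (the `ButtonLookup` interface is a stand-in for Playwright's `Page` so the snippet compiles on its own):

```typescript
// Minimal structural stand-in for Playwright's Page.
interface ButtonLookup {
  getByRole(role: "button", options: { name: string }): unknown;
}

class LoginPage {
  constructor(private page: ButtonLookup) {}

  // The one line Dev updates when the button is renamed "Submit" -> "Save".
  // Every spec that calls loginPage.submitButton() picks up the change for free.
  submitButton() {
    return this.page.getByRole("button", { name: "Save" }); // was "Submit"
  }
}
```

If locators were duplicated across spec files instead, the same rename would touch every test that clicks the button — which is exactly how "Dev changes break 15 tests" happens.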

Definition of Done (with Automation)

Update your team’s Definition of Done to include test automation:

## Definition of Done

A user story is "Done" when:

### Code
- [ ] Feature implemented and code reviewed
- [ ] No TypeScript/ESLint errors
- [ ] Deployed to staging environment

### Testing
- [ ] Unit tests written for business logic (Dev)
- [ ] E2E tests written for user flows (QC)
- [ ] Existing tests still pass (CI green)
- [ ] Manual exploratory testing completed (QC)
- [ ] Edge cases documented and tested

### Automation
- [ ] Page Objects updated for new/changed UI elements
- [ ] New data-testid attributes added where needed
- [ ] Test runs in CI pipeline (no manual steps)
- [ ] Visual regression baselines updated (if UI changed)

### Documentation
- [ ] BDD feature file reflects acceptance criteria
- [ ] Test data updated in tests/data/

Handling Conflicts

“The tests are slowing down our releases”

Diagnosis: Tests are too slow or too flaky. Fix:

  • Tag critical tests as @smoke — run these on every PR (fast)
  • Tag full regression as @regression — run on merges to main (comprehensive)
  • Fix flaky tests within 48 hours or quarantine them
  • Parallelize with sharding in CI
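
With Playwright, the @smoke/@regression split can be wired up through the config's `grep` filter, which matches tags embedded in test titles. A sketch — the `CI_EVENT` environment variable is illustrative; use whatever your CI actually provides (e.g. `GITHUB_EVENT_NAME` on GitHub Actions):

```typescript
// playwright.config.ts (fragment)
// Tag tests in their titles: test('login succeeds @smoke', async ({ page }) => { ... })
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // On pull requests run only the fast @smoke subset; elsewhere, run everything.
  grep: process.env.CI_EVENT === "pull_request" ? /@smoke/ : undefined,
  fullyParallel: true, // lets CI shard the suite: --shard=1/4, --shard=2/4, ...
});
```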

“QC is blocking my PR with test failures”

Diagnosis: Tests that the developer didn’t break are failing. Fix:

  • Pre-existing failures should never block PRs
  • Only tests affected by the current change count
  • Flaky test quarantine process (move to non-blocking suite)

“Nobody reviews my test PRs”

Diagnosis: Test code review isn’t prioritized. Fix:

  • Set a 24-hour SLA for test PR reviews
  • Rotate review responsibility (Dev team members take turns)
  • Small PRs get reviewed faster — split large test additions into multiple PRs

“Developers won’t add data-testid attributes”

Diagnosis: Developers don’t see the value. Fix:

  • Show them a test that broke because of a CSS class change
  • Make it easy: provide the exact attributes they need to add
  • Include data-testid in the component template/boilerplate
  • Frame it as “this prevents YOU from debugging test failures”

Measuring Collaboration Success

Track these metrics monthly:

| Metric | Target | Meaning |
|---|---|---|
| Test PR review time | < 24 hours | Team prioritizes test code |
| Tests broken by Dev changes | < 5/sprint | Clean communication about UI changes |
| QC test contribution rate | 10+ tests/sprint | QC is actively automating |
| Dev data-testid response time | < 2 days | Devs support automation efforts |
| Shared test coverage | > 60% critical paths | Both teams contribute |
| Flaky test count | < 5 at any time | Tests are trustworthy |
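
The flaky-test count can be computed from CI retry results instead of tracked by hand. A minimal sketch, assuming a simplified per-test result shape (not Playwright's actual reporter schema):

```typescript
// Simplified result record: one status per attempt, in order.
type TestRecord = { title: string; attempts: ("passed" | "failed")[] };

// A test counts as flaky when it failed at least once but passed on a retry.
function countFlaky(results: TestRecord[]): number {
  return results.filter(
    (r) => r.attempts.includes("failed") && r.attempts[r.attempts.length - 1] === "passed"
  ).length;
}
```

Posting this number to the #test-automation channel each week keeps the "< 5 at any time" target visible without anyone maintaining a spreadsheet.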

Series Navigation

In Part 9, we’ll cover the metrics that actually matter — how to measure quality improvement, track automation ROI, build dashboards, and run retrospectives focused on quality.
