Cypress visual regression testing that stays green

Add screenshot comparison to your Cypress tests without drowning in false positives. Pick the right approach, stabilise your captures, and build an approval workflow that scales.

When Cypress visual testing is worth it

Visual testing adds maintenance overhead. Before setting it up, make sure it solves a real problem for your team.

Critical user flows

Checkout, signup, and dashboard views where visual bugs directly hurt the business. Worth the maintenance cost.

Component libraries

Design systems with many consumers. Catch regressions before they propagate across your apps.

Frequent UI changes

Teams shipping UI updates weekly benefit from automated visual checks. Manual QA can't keep up.

Cross-browser requirements

If you support multiple browsers, visual tests catch rendering differences humans miss.

If you're catching visual bugs in production regularly, or your manual QA process can't keep up with UI changes, visual testing will pay off. If your UI rarely changes and you have few browser requirements, the overhead may not be worth it.

Screenshot comparison approaches

Cypress doesn't include visual comparison out of the box. You have three main options:

Local pixel-diff plugins

Plugins like cypress-image-snapshot compare screenshots locally using pixel differencing. Simple to set up, but you'll need to manage baseline storage yourself—usually by committing images to git or using external storage.
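A minimal setup sketch, assuming the cypress-image-snapshot package and a Cypress 10+ config file (adjust the wiring for your plugin of choice and Cypress version):

```javascript
// cypress.config.js — register the comparison plugin on the Node side
const { defineConfig } = require('cypress');
const { addMatchImageSnapshotPlugin } = require('cypress-image-snapshot/plugin');

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      addMatchImageSnapshotPlugin(on, config);
    },
  },
});

// cypress/support/e2e.js — register the custom command in the browser side
// const { addMatchImageSnapshotCommand } = require('cypress-image-snapshot/command');
// addMatchImageSnapshotCommand({
//   failureThreshold: 0.01,          // tolerate tiny anti-aliasing noise
//   failureThresholdType: 'percent',
// });
```

In a spec you then call cy.matchImageSnapshot('checkout-page'); the first run writes the baseline image, later runs diff against it.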

Cloud visual testing services

Services handle screenshot capture, comparison, baseline storage, and provide review UIs. More expensive, but they solve the hard problems: cross-browser rendering, team collaboration, and CI integration. See our tool comparison for options.

Hybrid approaches

Some teams use local comparison during development and cloud services for CI. This balances cost with developer experience. The key is consistency—your local and CI baselines must match.

Stability: the make-or-break factor

Visual tests are only useful if they're reliable. Flaky tests get ignored, defeating the purpose entirely.

Disable animations

CSS transitions and JavaScript animations cause timing variance. Disable them before capturing screenshots.
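One common pattern is to inject a style override after the page loads and before any capture (a sketch; the helper name is ours):

```javascript
// Call after cy.visit() and before any screenshot. Injects a stylesheet
// that freezes CSS-driven motion so every capture lands on the same frame.
const disableAnimations = () => {
  cy.document().then((doc) => {
    const style = doc.createElement('style');
    style.textContent = `
      *, *::before, *::after {
        transition: none !important;
        animation: none !important;
        caret-color: transparent !important; /* hide the blinking text cursor */
      }
    `;
    doc.head.appendChild(style);
  });
};
```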

Control fonts

System fonts render differently across OS versions. Use web fonts and wait for them to load, or pin your environment.
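The browser's FontFaceSet API makes the wait explicit — document.fonts.status flips to 'loaded' once all requested web fonts resolve, and Cypress retries the assertion until it does (the route and screenshot name are illustrative):

```javascript
cy.visit('/pricing');
// Retry until every requested web font has finished loading
cy.document().its('fonts.status').should('equal', 'loaded');
cy.screenshot('pricing-page');
```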

Fixed viewport

Always specify exact dimensions. Browser defaults vary between local dev and CI runners.
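Pin the viewport globally in your config so local and CI captures start from the same dimensions (the 1280×720 values are illustrative):

```javascript
// cypress.config.js
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  viewportWidth: 1280,
  viewportHeight: 720,
});
```

Individual tests can still override this with cy.viewport(375, 667) to capture mobile layouts.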

Network determinism

Mock API responses and stub external resources. Real network calls introduce timing and data variance.
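With cy.intercept you can pin responses to fixtures so every run renders identical data (the route and fixture names are illustrative):

```javascript
// Serve a fixed fixture instead of hitting the real API
cy.intercept('GET', '/api/products*', { fixture: 'products.json' }).as('products');
cy.visit('/shop');
cy.wait('@products'); // deterministic data is on screen before capture
cy.screenshot('shop-grid');
```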

These aren't optional optimisations—they're requirements. Skip them and you'll spend more time investigating false positives than catching real bugs. For a deeper dive, see reducing visual testing flakiness.

Ownership and approval workflow

Detecting visual changes is the easy part. The hard part is deciding what to do about them.

Without clear ownership, visual diffs pile up. Without a defined workflow, approval degenerates into clicking "accept" on everything.

  • Assign ownership: Who reviews visual changes? The PR author? A design team member? Define this upfront.
  • Scope changes: Keep PRs focused so visual diffs are small and reviewable. Massive diffs get rubber-stamped.
  • Block vs report: Decide whether visual failures block merges or just report. Start with reporting to build trust.

See approval workflow patterns for structured approaches that scale.

Cypress visual testing setup that stays green

  • Choose a screenshot comparison approach (plugin, service, or hybrid)
  • Disable CSS transitions and JavaScript animations in test mode
  • Configure consistent viewport dimensions across all tests
  • Wait for fonts to load before capturing screenshots
  • Mock API responses and external resources
  • Set up baseline storage accessible to CI
  • Define ownership for reviewing and approving diffs
  • Start with 5-10 critical flows, expand once stable
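The checklist above, condensed into a single spec sketch (selectors, routes, fixture and snapshot names are illustrative, and cy.matchImageSnapshot assumes the cypress-image-snapshot plugin — swap in your own comparison step):

```javascript
// cypress/e2e/checkout.visual.cy.js — one stabilised visual test
describe('checkout visual', () => {
  it('renders the checkout page consistently', () => {
    cy.viewport(1280, 720);                                      // fixed dimensions
    cy.intercept('GET', '/api/cart', { fixture: 'cart.json' });  // deterministic data
    cy.visit('/checkout');
    cy.document().then((doc) => {                                // no animation variance
      const style = doc.createElement('style');
      style.textContent =
        '*, *::before, *::after { transition: none !important; animation: none !important; }';
      doc.head.appendChild(style);
    });
    cy.document().its('fonts.status').should('equal', 'loaded'); // fonts settled
    cy.matchImageSnapshot('checkout');                           // compare against baseline
  });
});
```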

Frequently Asked Questions

Does Cypress have built-in visual testing?
Cypress includes cy.screenshot() for capturing images, but no built-in comparison. You need a plugin or external service for actual visual regression testing—comparing screenshots against baselines and detecting differences.
What are my options for Cypress visual testing?
Three main approaches: pixel-diff plugins that compare locally, cloud services that handle comparison and storage, or hybrid solutions. Local plugins are simpler but require you to manage baseline storage. Cloud services add cost but simplify CI workflows.
Why do my Cypress screenshots differ between runs?
Usually animations, fonts, or dynamic content. Animations capture at different frames. System fonts render differently across environments. Timestamps, ads, and user-specific data change between runs. Address each source of variance systematically.
Should I run visual tests on every commit?
Not necessarily. Visual tests are slower than unit tests. Many teams run them on PRs to main branches only, or in a separate pipeline stage. Find the balance between coverage and feedback speed that works for your team.
How do I handle flaky Cypress visual tests?
Identify the source: animations, fonts, network timing, or dynamic content. Add explicit waits for stability rather than arbitrary timeouts. Consider masking dynamic regions. If a test is consistently flaky, it's telling you something about your app's determinism.
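Masking can use Cypress's built-in blackout option on cy.screenshot, which paints the matched elements black before capture (the selectors are illustrative):

```javascript
// Mask regions that legitimately change between runs
cy.screenshot('dashboard', {
  blackout: ['.timestamp', '[data-test="live-ticker"]'],
});
```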
Can I use Cypress visual testing with component testing?
Yes. Cypress component testing mounts components in isolation, which is often more stable for visual tests than full E2E. Fewer moving parts means fewer sources of variance. Consider Storybook as an alternative for component-level visual testing.