Visual regression testing #427
Replies: 16 comments 31 replies
-
Additional features (maybe via a plugin) could be:
-
@daKmoR what's the benefit of saving approved images to an external host rather than in git? MDC used Google Storage (report example) for their screenshot tests before version 5.0 (they have since moved screenshot testing to internal infrastructure).
-
For a bigger list of existing solutions, you can check this:
-
@MathieuPuech thanks for the links, that's really helpful. When you store images in git, each change keeps the old image in history, not just the diff between the two images, which blows up repository size pretty quickly. However, we don't force anything on users here; they can choose how to store the images. We should just make it possible to hook it up to whatever storage you want.
-
When you do visual regression testing, one challenge is that browsers render differently across environments (fonts, anti-aliasing, GPU differences), so the same test can produce different pixels on different machines.
One solution is to run the tests in Docker, but that is slower and requires every developer to have Docker installed.
-
I created a first implementation of visual regression in WTR: https://github.com/modernweb-dev/web/tree/master/packages/test-runner-visual-regression. It implements taking screenshots from the browser and failing when there are diffs. We can use this to experiment and test, and decide on follow-up actions from there. I split the discussion into different topics, which are linked in the first post of this thread.
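For reference, basic usage looks roughly like this (adapted from the package README at the time; check the repository for the current API):

```js
// web-test-runner.config.mjs
import { visualRegressionPlugin } from '@web/test-runner-visual-regression/plugin';

export default {
  plugins: [
    visualRegressionPlugin({
      // regenerate baselines when passing --update-visual-baseline
      update: process.argv.includes('--update-visual-baseline'),
    }),
  ],
};
```

```js
// in a test file: screenshot an element and diff it against the stored baseline
import { visualDiff } from '@web/test-runner-visual-regression';

it('matches the visual baseline', async () => {
  const element = document.createElement('p');
  element.textContent = 'Hello world';
  document.body.appendChild(element);

  await visualDiff(element, 'my-element');
});
```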
-
Hi, I'd like to see the ability to pass image data instead of a DOM node. Currently I'm using jest together with https://github.com/Prior99/jest-screenshot to do visual regression on screenshots of PDFs generated client side. This is one of the few features keeping me from ditching jest completely.
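A hypothetical sketch of what that could look like, assuming visualDiff accepted raw image bytes (it does not today; renderPdfPageToPng is a made-up application helper):

```js
import { visualDiff } from '@web/test-runner-visual-regression';

it('pdf page renders consistently', async () => {
  // renderPdfPageToPng is hypothetical: the app's own client-side PDF renderer
  const pngBytes = await renderPdfPageToPng('/fixtures/invoice.pdf', 1);

  // hypothetical overload: raw PNG bytes (Uint8Array) instead of a DOM node
  await visualDiff(pngBytes, 'invoice-page-1');
});
```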
-
Hi, we started using this in our project and discovered an issue we would like fixed. We've created a PR here: #1068
-
Created a project with an example config of running visual tests in Sauce Labs: |
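For anyone looking for a starting point, a minimal sketch of wiring WTR to Sauce Labs with @web/test-runner-saucelabs could look like this (the capabilities and credentials below are illustrative; see the package docs for details):

```js
// web-test-runner.config.mjs
import { createSauceLabsLauncher } from '@web/test-runner-saucelabs';

const sauceLabsLauncher = createSauceLabsLauncher({
  user: process.env.SAUCE_USERNAME,
  key: process.env.SAUCE_ACCESS_KEY,
});

export default {
  browsers: [
    // each call creates one remote browser from a set of capabilities
    sauceLabsLauncher({
      browserName: 'chrome',
      browserVersion: 'latest',
      platformName: 'Windows 10',
    }),
  ],
};
```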
-
Configuration options that I'd be interested in seeing:
I'm happy to dive into adding these if they make sense. If we can coalesce around an API structure, I can start by converting this conversation into issues for tracking.
-
I really like the idea of not storing baselines at all, but instead running base (e.g. master) and compare (e.g. the new feature/bugfix branch) at the same time and comparing the results. This would remove all the pain of maintaining multiple baselines for each system/browser combination. It's true it would require more CPU time, but it would work out of the box, and if CPU/CI time is your bottleneck you can add caching (e.g. storing baselines) on top 🤔
https://twitter.com/WestbrookJ/status/1365725266291666949?s=20
-
Put together an update to the comparison of non-size-matched screenshots, PTAL: #1352
-
I've been experimenting with adding visual regression testing and was hoping this would be the easiest solution, but it doesn't seem to work. I'm running the test from the README exactly, and it always fails with:

```
Error: Protocol error (Runtime.callFunctionOn): Execution context was destroyed.
    at /home/espeed/projects/rikaikun/node_modules/puppeteer/lib/cjs/puppeteer/common/Connection.js:208:63
    at new Promise (<anonymous>)
    at CDPSession.send (/home/espeed/projects/rikaikun/node_modules/puppeteer/lib/cjs/puppeteer/common/Connection.js:207:16)
    at ExecutionContext._evaluateInternal (/home/espeed/projects/rikaikun/node_modules/puppeteer/lib/cjs/puppeteer/common/ExecutionContext.js:201:50)
    at ExecutionContext.evaluate (/home/espeed/projects/rikaikun/node_modules/puppeteer/lib/cjs/puppeteer/common/ExecutionContext.js:107:27)
    at ElementHandle.evaluate (/home/espeed/projects/rikaikun/node_modules/puppeteer/lib/cjs/puppeteer/common/JSHandle.js:102:46)
    at ElementHandle._scrollIntoViewIfNeeded (/home/espeed/projects/rikaikun/node_modules/puppeteer/lib/cjs/puppeteer/common/JSHandle.js:280:34)
    at ElementHandle.screenshot (/home/espeed/projects/rikaikun/node_modules/puppeteer/lib/cjs/puppeteer/common/JSHandle.js:599:20)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
    at async Object.executeCommand (/home/espeed/projects/rikaikun/node_modules/@web/test-runner-visual-regression/dist/visualRegressionPlugin.js:40:45)
    at async TestRunnerApiPlugin._onCommand (/home/espeed/projects/rikaikun/node_modules/@web/test-runner/node_modules/@web/test-runner-core/dist/server/plugins/api/testRunnerApiPlugin.js:145:32)
```

The only custom part of my config for this test is the browser setup required to run tests at all:

```js
// puppeteerLauncher comes from '@web/test-runner-puppeteer'
browsers: [
  puppeteerLauncher({
    launchOptions: {
      executablePath: '/usr/bin/google-chrome',
      headless: true,
      // disable-gpu required for chrome to run for some reason.
      args: ['--disable-gpu', '--remote-debugging-port=9333'],
    },
  }),
],
```

I haven't been able to find many details about that error on the web, but maybe others have come across it here...
-
Hey! I've been evaluating this tool for use. I have experience using jest/puppeteer to create visual snapshots. One thing I immediately miss from a jest setup is storing the snapshots adjacent to the test file, so that both tests and snapshots live as close as possible to the components they cover.
Another aspect related to this is automatically generating image file names from some combination of test names. For example, given this test:

```js
describe('my-component', () => {
  it('passes some test', async () => {
    const element = document.createElement('p');
    element.textContent = 'Hello world';
    element.style.color = 'blue';
    document.body.appendChild(element);
    // no explicit name passed; it would be derived from the test titles
    await visualDiff(element);
  });
});
```

the file could be automatically named from those titles (something like my-component-passes-some-test.png). It seems the current extensibility points do not allow for this. Is there any plan to support this kind of structure/setup?
-
It could be nice if we could pass a clip of what we want to screenshot instead of screenshotting only the element. Sometimes you want to add some margin around your element to capture shadows, and sometimes your element opens an overlay that you want in the screenshot too (for example an opened menu/select).
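A hypothetical API sketch for that (the clip option below is not implemented; it mirrors Puppeteer's page.screenshot({ clip }) region, and my-menu is an assumed custom element):

```js
import { visualDiff } from '@web/test-runner-visual-regression';

it('captures the menu plus its overlay', async () => {
  const menu = document.createElement('my-menu'); // assumed custom element
  document.body.appendChild(menu);

  // hypothetical third argument: a clip region relative to the element,
  // extended by a 20px margin to include shadows and the opened overlay
  await visualDiff(menu, 'open-menu', {
    clip: { x: -20, y: -20, width: 240, height: 320 },
  });
});
```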
-
I really like how customizable this plugin is through its options. Using that customization, I made it so that when a screenshot comparison fails, the "baseline" screenshot gets overwritten with the new screenshot, but the test still fails (the first time). This makes it very easy to update screenshots. If the new baseline is actually bad, that gets caught in the git diff or pull request code review, and since the first test run fails, CI pipelines will still fail on legitimate errors.
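A minimal sketch of that customization, assuming the plugin exposes a saveImage hook for intercepting screenshot writes and uses a baseline/failed directory layout (both assumptions; verify against the plugin's options before copying):

```js
// web-test-runner.config.mjs
import fs from 'fs';
import path from 'path';
import { visualRegressionPlugin } from '@web/test-runner-visual-regression/plugin';

export default {
  plugins: [
    visualRegressionPlugin({
      async saveImage({ filePath, content }) {
        // default behavior: write the image where the plugin asked
        await fs.promises.mkdir(path.dirname(filePath), { recursive: true });
        await fs.promises.writeFile(filePath, content);

        // assumption: failed screenshots land in a ".../failed/..." folder;
        // copy them over the baseline so the *next* run passes, while this
        // run still reports the failure for CI and code review
        if (filePath.includes(`${path.sep}failed${path.sep}`)) {
          const baselinePath = filePath.replace(
            `${path.sep}failed${path.sep}`,
            `${path.sep}baseline${path.sep}`,
          );
          await fs.promises.mkdir(path.dirname(baselinePath), { recursive: true });
          await fs.promises.writeFile(baselinePath, content);
        }
      },
    }),
  ],
};
```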
-
We should support visual regression testing with the test runner. Because we run tests in a real browser, it will be relatively easy to support; most of the basic building blocks are already present.
Some different topics: