One Eye on Everything

Visual regressions are the quietest way a build fails you.

No exception. No red test. The CI pipeline is green. The button is still there. It just moved three pixels on the tablet breakpoint, or the font color shifted on Safari, or the modal now clips behind the nav on a 375-pixel viewport. Everything passed. You'll find out when someone screenshots it.

I've shipped that kind of failure more than once. Everyone has. The problem isn't carelessness. The problem is that knowing a codebase too well is a form of blindness. You know where the button is supposed to be. Your eyes confirm what they expected to see. The regression lives in the gap between expectation and reality, and familiarity is great at bridging that gap invisibly.

That's what Spectral Cyclops is for.

It crawls your dev server — follows every link, maps every route — so you don't have to maintain that list yourself. Routes drift. You add a page, forget to update your screenshot manifest. The crawler doesn't care. It walks from the root, follows internal links, caps at fifty pages and three levels deep, writes the manifest, and hands it to the capture engine. One eye, no preconceptions.
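The crawl described above is a bounded breadth-first walk. Here's a minimal sketch of that logic, assuming an in-memory link map in place of a live dev server and real Playwright navigation; `crawl`, `MAX_PAGES`, and `MAX_DEPTH` are illustrative names, not the tool's actual API.

```typescript
type LinkMap = Record<string, string[]>;

const MAX_PAGES = 50; // page cap from the post
const MAX_DEPTH = 3;  // depth cap from the post

function crawl(links: LinkMap, root: string): string[] {
  const manifest: string[] = [];
  const seen = new Set<string>([root]);
  // Breadth-first walk: each frontier entry is a route plus its depth from root.
  const frontier: Array<[string, number]> = [[root, 0]];
  while (frontier.length > 0 && manifest.length < MAX_PAGES) {
    const [route, depth] = frontier.shift()!;
    manifest.push(route);
    if (depth >= MAX_DEPTH) continue; // capture this page, but don't follow its links
    for (const next of links[route] ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        frontier.push([next, depth + 1]);
      }
    }
  }
  return manifest;
}
```

The manifest comes out in discovery order, root first, so the capture engine walks pages in the same order a reader would reach them.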

Capture runs through Playwright. Every route, every screen, saved as a PNG. The before shots are your baseline — set once, or pulled from main. The afters are whatever you just built. The differ runs pixelmatch over the pairs at a 0.1 threshold, which catches real changes and ignores anti-aliasing noise, and writes a highlighted diff image for anything that moved. Then the orchestrator branches, commits the changed screenshots, and opens a GitHub PR with before, after, and diff embedded directly in the PR body. Side-by-side tables in GitHub Markdown. No separate dashboard. The review surface is where the work already is.
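The two mechanical pieces of that pipeline can be sketched in a few lines. This is a simplified stand-in, not the tool's real code: actual pixelmatch weighs perceptual color distance in YIQ space, where this sketch uses a plain normalized channel delta with the same 0.1 cutoff, and `diffPixels` and `prBodyRow` are illustrative names.

```typescript
const THRESHOLD = 0.1;

// before/after are flat RGBA buffers (4 bytes per pixel), the shape a decoded
// PNG screenshot would have. Counts pixels whose change clears the threshold:
// anti-aliasing wobble (a few units out of 255) stays below 0.1 and is ignored.
function diffPixels(before: Uint8Array, after: Uint8Array): number {
  let changed = 0;
  for (let i = 0; i < before.length; i += 4) {
    let delta = 0;
    for (let c = 0; c < 3; c++) { // max delta over R, G, B, normalized to 0..1
      delta = Math.max(delta, Math.abs(before[i + c] - after[i + c]) / 255);
    }
    if (delta > THRESHOLD) changed++;
  }
  return changed;
}

// One row of the side-by-side GitHub Markdown table embedded in the PR body.
// `slug` is a hypothetical filename-safe route name.
function prBodyRow(slug: string, dir: string): string {
  return `| ${slug} | ![before](${dir}/before/${slug}.png) | ![after](${dir}/after/${slug}.png) | ![diff](${dir}/diff/${slug}.png) |`;
}
```

Relative image paths in the table resolve against the committed screenshots on the PR branch, which is why the orchestrator commits them before opening the PR.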

That part is useful. The watcher is where it gets interesting.

You save a file. Hot-reload fires. Two seconds later, Spectral Cyclops captures every screen in the current state and logs one line to the terminal:

📸  UI updated! New screenshots saved for AI review.

That's the whole handshake. The screenshots land in screenshots/current_state/. Claude Code, or any AI assistant with filesystem access, can see the actual rendered UI — not the code, not your description of the UI — and work from that.
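The "two seconds later" above is a settle delay: a burst of saves should produce one capture, after the burst goes quiet. Here's a model of that debounce with explicit timestamps instead of setTimeout, so the logic is plain to see; the real watcher presumably wires something like this to a filesystem event source, and `SettleTimer` and its method names are illustrative.

```typescript
class SettleTimer {
  private lastChange: number | null = null;
  constructor(private delayMs: number = 2000) {}

  // Called on every file save / hot-reload event. Each save restarts the window.
  touch(nowMs: number): void {
    this.lastChange = nowMs;
  }

  // Polled periodically. Returns true exactly once per burst of saves,
  // after the burst has been quiet for delayMs.
  shouldCapture(nowMs: number): boolean {
    if (this.lastChange === null) return false;
    if (nowMs - this.lastChange < this.delayMs) return false;
    this.lastChange = null; // re-arm for the next burst
    return true;
  }
}
```

When `shouldCapture` fires, the tool screenshots every route into screenshots/current_state/ and prints that one log line.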

This changes the iteration loop in a specific way. You're not explaining to the AI what the page looks like and hoping the model builds a useful mental image. You're showing it. The screenshots are the source of truth. The AI can tell you the card grid broke at the mobile breakpoint because it can see the card grid breaking. Every save is a new observation. The feedback cycle stays grounded in what actually rendered, not what you said rendered.

Whether you use the AI loop or not, the regression audit stands on its own. Run it before a merge. Run it after a dependency bump. Run it after any change touching shared styles or layout components — the kind of change where the damage is usually somewhere you weren't looking.

The build lies to you. Not on purpose. Just because you know it too well to see it clearly anymore.

One eye on everything, no prior expectations. That's the point.


GhostInThePrompt.com // The regression lives in the gap between expectation and reality. Familiarity bridges that gap invisibly.