SEO · 2 April 2026 · 8 min read

Dwell-time is the new backlink: why QA bugs tank your SEO

Backlinks are not what they used to be. A page can rank at the top of a query for exactly one reason: the people who land on it stay. A page can lose that rank for exactly one reason: they do not.

Since the 2024 and 2025 Helpful Content rollouts, Google has been increasingly transparent about what it already measured quietly for years. User behaviour on search results - clickthrough, time on page, return-to-SERP, pogo-sticking - is now openly described in their documentation as part of the quality signal stack. In practice, the effect is sharper: pages that bleed users lose rank within weeks, not months.

This is where AI-generated UIs fall apart.

The four signals that actually move

There are four friction signals that matter in 2026, and they compound:

  • Largest Contentful Paint (LCP) above 2.5 seconds. The moment the page feels done. If it slips past 2.5s on mobile, the bounce curve cliffs. Google's CrUX field data makes the threshold explicit, and Core Web Vitals feed directly into the page-experience ranking signal.
  • Interaction to Next Paint (INP) above 200ms. INP replaced FID in March 2024 and is more honest: it measures every interaction, not just the first. AI-generated React components tend to ship INP in the 400-800ms range because the main thread is blocked until oversized bundles finish parsing and hydrating.
  • Rage clicks and dead clicks. Captured by every major RUM vendor (Sentry, Datadog, LogRocket, Clarity) and - critically - by Chrome itself in aggregate. A button that looks clickable but is not is a friction signal. An LLM that styles a div like a button and forgets to add an onClick is a friction machine.
  • Pogo-sticking. User clicks your result, bounces back to the SERP within a few seconds, clicks the next result. This is the single most damaging signal in the stack. It tells the ranker, directly, that your page failed the query.
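The numeric thresholds in that list are Google's published Core Web Vitals boundaries. As a quick sketch of how field data gets bucketed (the threshold numbers are Google's; the `rate` helper and table layout are mine):

```javascript
// Google's published Core Web Vitals thresholds (ms, except unitless CLS).
// The three buckets match what CrUX reports for field data.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 },  // Largest Contentful Paint
  inp: { good: 200,  poor: 500  },  // Interaction to Next Paint
  cls: { good: 0.1,  poor: 0.25 },  // Cumulative Layout Shift
};

function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs-improvement";
  return "poor";
}

// The INP range the article attributes to unaudited AI-generated React:
console.log(rate("inp", 600)); // "poor" — triple the 200ms budget
console.log(rate("lcp", 2100)); // "good"
```

The point of the buckets: a page does not need to be fast in your dev tools, it needs to be "good" in the 75th percentile of real-user field data, because that is the distribution CrUX aggregates.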

Why AI-generated UIs fail this specifically

Three structural reasons, in order of impact.

Layout shift on mount

LLMs generate components with missing width and height hints on images, missing aspect-ratio CSS on embeds, and layouts that depend on JS hydration to resolve. Cumulative Layout Shift spikes. The first interaction the user attempts is often misdirected because the page moved. That is a rage click.

Interactive latency under adversarial conditions

The generated code runs fine on the laptop it was generated on. The real user is on a three-year-old mid-range Android on a 4G connection in a tunnel. Every inline <script>, every blocking third-party tag, every un-deferred marketing pixel adds to INP. Generated code is bloated with observers, analytics, and hydration boilerplate that no one asked for and no one audited.
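One mitigation for the third-party weight is to defer those tags until the user's first interaction instead of loading them with the page. A sketch, with a helper name of my own (`onFirstInteraction`); the `target` parameter would be `window` in a browser and is exposed here so the logic is testable anywhere:

```javascript
// Run a loader (e.g. inject a marketing pixel) only after the first
// user interaction, keeping it off the critical path entirely.
function onFirstInteraction(loader, target) {
  const events = ["pointerdown", "keydown", "scroll"];
  const fire = () => {
    // Detach everything before firing so the loader runs exactly once.
    events.forEach((e) => target.removeEventListener(e, fire));
    loader();
  };
  events.forEach((e) =>
    target.addEventListener(e, fire, { passive: true })
  );
}

// Browser usage (hypothetical pixel URL):
// onFirstInteraction(() => {
//   const s = document.createElement("script");
//   s.src = "https://example.com/pixel.js"; // placeholder, not a real tag
//   document.head.append(s);
// }, window);
```

Nothing here is exotic; the design choice is simply that a pixel which loads 300ms after the first tap is invisible to the user and invisible to INP, while the same pixel on page load is neither.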

Content that does not answer the query

The cruellest one. LLM-generated marketing copy optimises for fluency, not specificity. A user searching "how do I rotate my API key" lands on a page that describes the philosophy of API rotation for three paragraphs before getting to the steps. The user hits back before paragraph two. That is pogo-sticking, that is a ranking penalty, and no amount of schema markup will save the page.

What moves the needle

Fixing this is not a branding exercise. It is measurable engineering:

  • Run Lighthouse and PageSpeed Insights on your five highest-traffic landing pages. Aim for LCP under 2.5s and INP under 200ms on mobile, not on the desktop preview.
  • Install a session replay tool on a 5% sample and watch ten sessions. Every rage click tells you where a dead element is.
  • Get the answer to the query above the fold. Not the brand story, not the newsletter pitch. The answer. If a user can Cmd-F the core answer within two seconds, dwell-time rises and pogo-sticking drops.
  • Audit your third-party tags. Segment, HubSpot, Intercom, a marketing team's dozen pixels. Each one is an INP risk. Gate them behind interaction, not page load.
  • Stop shipping AI-generated content unchecked to marketing pages that serve search traffic. It is the fastest way to dissolve rank you spent a year earning.
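For the audit in the first bullet, PageSpeed Insights exposes the same Lighthouse run over a public API, which is easier to script across five pages than the web UI. A sketch of building the request (the endpoint and query parameters are Google's documented v5 API; the helper name is mine):

```javascript
// Build a PageSpeed Insights v5 API request. Pass an API key for
// anything beyond casual one-off use.
function psiRequestUrl(pageUrl, { strategy = "mobile", apiKey } = {}) {
  const u = new URL(
    "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
  );
  u.searchParams.set("url", pageUrl);
  u.searchParams.set("strategy", strategy); // "mobile" or "desktop"
  u.searchParams.set("category", "performance");
  if (apiKey) u.searchParams.set("key", apiKey);
  return u.toString();
}

// fetch(psiRequestUrl("https://example.com")).then((r) => r.json())
// returns lab audits under lighthouseResult and CrUX field data
// under loadingExperience.
```

Run it with `strategy: "mobile"` first; that is the distribution your ranking depends on, and it is routinely two to three times worse than the desktop number.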

The uncomfortable corollary

QA used to be "does it work." Now QA is also "does it work fast enough that the user does not leave." Those are the same question at the engineering level and they have the same answer: measure on real hardware, under real networks, against real queries, and fix the parts that lose users.

The AI-built SaaS products that rank in 2026 are the ones whose founders realised this early. The rest rank for a month and then disappear.

A note on measurement

People ask: "how do you know dwell-time is weighted more than it was in 2022?" You do not, not precisely. Google does not publish the coefficient. What you can observe is behaviour in the SERP: pages that lose HCU rounds almost uniformly show up with weak Core Web Vitals scores and high return-to-SERP rates in Chrome's aggregated UX data. Pages that gain in those same rounds show the opposite. The correlation is tight enough that I treat it as causal for planning purposes, and the advice above is consistent with whatever the actual ranking formula is: make the page faster, make it more honest, make it more useful.

If I am wrong about the mechanism, you still win on conversion rate. That is the best kind of hypothesis.

Written by Vlad Zaiets - Founder, Sarmkadan Labs.

Remote-first senior QA for AI-built SaaS. We audit codebases for exactly the failure modes described above before you ship them.

Ship boring releases.

Book a 20-min call.
