
How a 200ms Page Load Improvement Increased Our Client's Conversions by 34%

We cut 200ms off a marketplace's LCP. Conversions moved 34%. Here's the exact change set, the metrics we tracked, and what we'd do differently.

Performance work has a credibility problem. Engineers know it matters. Founders nod and then de-prioritize it. Designers add a hero video and undo a quarter's worth of optimization work in one PR.

This is the story of one engagement where shaving 200 milliseconds off a single page lifted conversions 34%, what we actually changed, and what we'd push back harder on if we did it again.

The starting state

A B2B marketplace, $4M ARR, 60% of organic traffic landing on a single search-results page. That page was their funnel — get them to scroll the results, click a listing, sign up to contact the seller.

We pulled their RUM data on day one:

  • LCP: 4.2 seconds (75th percentile)
  • CLS: 0.34 (terrible — text was reflowing as fonts loaded)
  • INP: 380ms
  • Sign-up conversion from that page: 2.1%

Their PageSpeed Insights score was 41 on mobile. Lighthouse called it "needs improvement," which in our experience is software-vendor speak for "actively losing you money."

Why 200ms

The team's instinct was to do a full performance overhaul — a quarter-long rebuild. We pushed back. The data said most of the LCP delay was in two places: an unbatched API call that blocked the largest image and a 280KB JavaScript bundle for a dropdown menu. Fix those two things and we'd get most of the gain.

We picked one number to move: the 75th-percentile LCP. We picked one target: under 2.5 seconds (Google's "Good" threshold). We didn't promise a conversion lift. Conversion lift is downstream — promising it makes the engagement feel like marketing, not engineering.
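If you want to track the same number, p75 is easy to compute yourself from raw RUM samples. Here's a minimal sketch using the nearest-rank method; the sample values are made up for illustration:

```javascript
// Sketch: 75th-percentile LCP from raw RUM samples (values in ms).
// Nearest-rank method; `samples` is a hypothetical array of LCP readings.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const lcpSamples = [1200, 1800, 2400, 4200, 5100, 3300, 2900, 4700];
const p75 = percentile(lcpSamples, 75);
console.log(p75); // 4200
```

Whatever tool you use, make sure it reports a percentile and not a mean — averages hide exactly the slow tail you're trying to fix.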

The actual changes

Three weeks of work. Here's the changeset.

1. Streaming the search results page

The page was server-rendered, which sounds fast but wasn't — every request blocked on a single getResults() call that fetched listings, filters, *and* the user's recently-viewed items in series.

We split it. Listings rendered immediately. Filters and recently-viewed streamed in below the fold using React Server Components with Suspense boundaries. The user saw the largest content (the listing grid) 1.8 seconds earlier on average.
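The data-fetching side of that split looks roughly like this. The fetch functions are stand-ins for their API (we've omitted the RSC/Suspense wiring itself), but the shape of the change is the point: stop awaiting data the largest content doesn't need.

```javascript
// Before (hypothetical): everything fetched in series, so the listing
// grid waited on filters and recently-viewed data it didn't need.
async function getResultsSerial(fetchListings, fetchFilters, fetchRecentlyViewed) {
  const listings = await fetchListings();
  const filters = await fetchFilters();
  const recentlyViewed = await fetchRecentlyViewed();
  return { listings, filters, recentlyViewed };
}

// After: kick everything off at once, but only await the LCP-critical
// data. The slower parts stay as pending promises — the same shape a
// Suspense boundary consumes while streaming them in below the fold.
async function getResultsStreaming(fetchListings, fetchFilters, fetchRecentlyViewed) {
  const filters = fetchFilters();               // started, not awaited
  const recentlyViewed = fetchRecentlyViewed(); // started, not awaited
  const listings = await fetchListings();       // the only blocking call
  return { listings, filters, recentlyViewed };
}
```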

2. Killing the 280KB dropdown

Their nav had a country selector built on a UI library that pulled in the entire ICU localization dataset. 280KB gzipped, blocking the main thread for 90ms on mid-range Android.

We replaced it with a 4KB native <select> styled to match. Nobody noticed. Conversion went up.
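The replacement was roughly this shape — class names and option list here are illustrative, not their actual markup:

```html
<!-- A native select ships zero JS; CSS covers most of the design needs. -->
<select class="country-select" name="country" aria-label="Country">
  <option value="us">United States</option>
  <option value="de">Germany</option>
  <!-- remaining countries rendered server-side -->
</select>
```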

This is the part that's hard to do politically and easy to do technically. Someone shipped that dropdown for a reason. Convincing them to revert it took longer than writing the replacement.

3. Font subsetting and font-display: optional

Their custom display font was 184KB across 4 weights. We subset it to Latin-only (cut to 31KB), preloaded the one weight used above the fold, and switched to font-display: optional so a slow font load wouldn't re-flow text.

CLS dropped from 0.34 to 0.02 the day this shipped.
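The loading setup, roughly — font path and family name are placeholders:

```html
<!-- Preload only the weight used above the fold -->
<link rel="preload" href="/fonts/display-latin.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: "DisplayFont"; /* placeholder family name */
    src: url("/fonts/display-latin.woff2") format("woff2");
    font-weight: 600;
    /* optional: a late-arriving font is skipped instead of swapped in,
       so text set in the fallback never reflows */
    font-display: optional;
  }
</style>
```

The trade with `optional` is that a user on a slow connection may never see the custom font on first visit. For a display font on a conversion page, that's a trade worth making.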

4. Image work

Hero images were full-resolution JPEGs served at any viewport. We migrated to AVIF with WebP fallback, added explicit width/height to every <img>, used a sizes attribute that matched the actual layout, and lazy-loaded everything below the fold.
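Put together, the markup for one below-the-fold image looked something like this (paths and breakpoints are illustrative; the LCP image itself stays eager-loaded):

```html
<!-- AVIF with WebP fallback; explicit width/height reserves layout
     space so the image can't cause CLS when it arrives. -->
<picture>
  <source type="image/avif"
          srcset="/img/listing-800.avif 800w, /img/listing-1600.avif 1600w"
          sizes="(max-width: 768px) 100vw, 50vw">
  <source type="image/webp"
          srcset="/img/listing-800.webp 800w, /img/listing-1600.webp 1600w"
          sizes="(max-width: 768px) 100vw, 50vw">
  <img src="/img/listing-800.jpg" width="800" height="450"
       alt="Listing photo" loading="lazy" decoding="async">
</picture>
```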

The result

Three weeks later:

  • LCP: 4.2s → 1.1s (-74%)
  • CLS: 0.34 → 0.02
  • INP: 380ms → 140ms
  • Lighthouse mobile: 41 → 96
  • Sign-up conversion from the search page: 2.1% → 2.82% (+34%)

That conversion lift held for the next 90 days. We checked.

What we'd do differently

Track conversion as a leading indicator from day one. We had RUM, but we weren't joining it to the analytics events. It took us a week to build that join once we needed the answer.
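The join itself is not complicated if both systems share a session ID — a sketch, with field names (`sessionId`, `lcp`, `converted`) that are illustrative rather than what either tool actually emits:

```javascript
// Sketch: joining RUM beacons to conversion events on a shared session ID,
// so each performance sample carries a converted/not-converted label.
function joinRumToConversions(rumBeacons, conversionEvents) {
  const convertedSessions = new Set(conversionEvents.map((e) => e.sessionId));
  return rumBeacons.map((b) => ({
    sessionId: b.sessionId,
    lcp: b.lcp,
    converted: convertedSessions.has(b.sessionId),
  }));
}
```

The hard part in practice is making sure both systems actually record the same session ID — decide that before the engagement starts, not after.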

Push harder on the third-party tag stack. They had nine analytics tools loading on every page. We removed two; we should have removed five. Every script tag is a vote against your LCP.

Don't promise the conversion number. We didn't, but we got close. Conversion is downstream of a hundred things. Promise the engineering metric. Let the business metric come along for the ride.

The takeaway

The biggest performance gains in your app are probably not in the framework you chose. They're in the three or four specific files where someone made a defensible choice that compounded badly with another defensible choice somewhere else. Find those, fix those, ship.

If you want an outside read on where your own page is bleeding milliseconds, we'll do that for free — see /free-audit. One page, three findings, ranked by impact.
