107: Andres Glusman: Why Your A/B Tests Are Lying (and How to Fix Them)
The 89% Rule: Turning Failed Experiments into SaaS Growth Rocket Fuel
Andres Glusman, experimentation expert and former Meetup product leader, unpacks why 89% of experiments fail - and why that’s a good thing. From redesign pitfalls to balancing data with intuition, he shares hard-earned lessons on turning failure into explosive growth.
It’s not about running more tests, but smarter ones.
Timestamps & Segments
00:01:02 – 00:10:00
The Experimentation Conundrum
Andres breaks down why most experiments fail, the "VC mindset" for testing, and why a 65% loss rate beats the industry norm.
00:10:05 – 00:20:00
Low Traffic, Big Swings
Strategies for companies with limited traffic: why timid tweaks waste time and how to bundle changes for measurable impact.
00:20:05 – 00:30:00
Redesigns: When to Rip the Band-Aid
Why most redesigns backfire, how to test without tanking metrics, and the political minefield of shipping "VP-approved" flops.
00:30:05 – 00:40:00
Storing (and Ignoring) Experiment Data
The lifecycle of experiment insights, why social proof is overrated, and how to avoid repeating past mistakes (hint: spreadsheets).
00:40:05 – 00:49:00
Hot Takes & Must-Reads
Andres’ controversial read on stale A/B testing dogma + book recs to overhaul your product playbook.
Hot Takes
🔥 “89% failure rate? You’re doing it right.” – Andres
🔥 “If stats aren’t significant after 4 months, ship it anyway—your samples are probably garbage.” – Leah
Connect
Follow Andres Glusman: LinkedIn
Follow Leah Tharin: LinkedIn
Season 4 of the Product Tea
We spill the tea on how to go to market through Product-led Sales and Product-led Growth in B2B, and on the realities of senior leadership.
I like the way Don Reinertsen describes the science behind the experiments. A well-designed experiment should have an equal chance of succeeding and failing. That's where we get most new information.
If we design an experiment we expect to fail, and it does, we learn little. Ditto when we design one that we expect to succeed, and it does.
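One way to make Reinertsen's point precise (my gloss, not a formula he states) is via the Shannon entropy of a binary outcome with success probability p:

$$H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)$$

H(p) peaks at p = 1/2, where a single experiment yields a full bit of information; as p approaches 0 or 1, the outcome is nearly predetermined and we learn almost nothing.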
However, my experience in product development is similar to what Andres shares. A significant majority of experiments fail. Does it mean we design them wrong?
That's where the payoff function comes in. If the potential payoff is huge, we can be on the "wrong" side of the curve and still get the right outcome. It's like buying a $1 lottery ticket that can win $100: if the odds are, say, 1 in 50, the lottery is worth playing, provided we get enough shots at it.
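Spelling out the arithmetic of that lottery:

$$\mathbb{E}[\text{net payoff}] = \tfrac{1}{50} \times \$100 - \$1 = \$2 - \$1 = +\$1 \text{ per ticket}$$

We lose 49 times out of 50, yet every ticket is worth buying, as long as we can keep buying enough of them for the average to assert itself.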
That's how I perceive experimentation in product development. We lose a lot. However, when we win, those wins are often so big that they more than compensate for all the failed attempts.
In fact, the early Meetup experiments Andres mentioned, the ones that drove exponential growth, are a perfect example. His story also covers both the tangible outcome (traction, traffic, revenue, what have you) *and* the learning. I wrote a bit more on that here: https://pawelbrodzinski.substack.com/p/90-of-times-validation-means-invalidation
Great episode, BTW!