Discussion about this post

Pawel Brodzinski:

I like the way Don Reinertsen describes the science behind experiments. A well-designed experiment should have roughly equal chances of succeeding and failing; that's when we gain the most new information.

If we design an experiment we expect to fail, and it does, we learn little. Ditto when we design one that we expect to succeed, and it does.

However, my experience in product development is similar to what Andres shares: a significant majority of experiments fail. Does that mean we design them wrong?

That's where we should take the payoff function into account. If the potential payoff is huge, we can be on the "wrong" side of the curve and still get the right outcome. It's like buying a $1 lottery ticket that pays out $100. If the odds of winning are, say, 1 in 50, the expected payout is $2 per ticket, so we're good to play this lottery, provided we get enough shots at it.
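To make the arithmetic explicit, here's a minimal sketch of that expected-value calculation (the $1 ticket, $100 payout, and 1-in-50 odds come from the analogy above; the numbers are illustrative, not real experiment data):

```python
# Expected value of the lottery analogy: positive EV despite losing 49 times out of 50.
ticket_cost = 1.0    # $1 per ticket
payout = 100.0       # $100 if we win
p_win = 1 / 50       # assumed 1-in-50 odds

expected_value = p_win * payout - ticket_cost  # = 2.0 - 1.0 = +$1 per ticket
print(f"Expected net value per ticket: ${expected_value:.2f}")
```

The same logic applies to a portfolio of product experiments: individual bets mostly lose, but the portfolio wins as long as the payoff-weighted expectation stays positive and we can afford enough attempts.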

That's how I perceive experimentation in product development. We lose a lot. However, when we win, those wins are often so big that they more than compensate for all the failed attempts.

In fact, what Andres mentioned about the early Meetup experiments (that they drove exponential growth) is a perfect example. Also, Andres' story covers both the tangible outcome (traction, traffic, revenue, what have you) *and* learning. I wrote a little bit more on that here: https://pawelbrodzinski.substack.com/p/90-of-times-validation-means-invalidation

Great episode, BTW!

