Why SaaS got priced out
HubSpot is up 141%. The market doesn't care.
Classical B2B SaaS is not doing that well anymore. Not because the businesses are broken, but because the market has already decided the future isn’t there.
Broad software indices are down ~25% from 12-month highs. The median public SaaS valuation multiple: 18.6× revenue in 2021, 5.1× at the end of 2025. That's 73% compression, while most of the underlying businesses kept growing. The market is saying something.
Everybody’s trying to predict what tech looks like in five years. Fair. Hard problem. But there’s a more useful question: what does it definitely not look like?
Because we know that. And knowing where it’s not going is underrated as a signal. It’s not going where the past was.
HubSpot is the cleanest example.
Not a broken business. Revenue grew from $1.3B in 2021 to $3.1B in 2025. +141%. They’re crushing quarterly results. Their stock is down 71% from its 2021 peak.
That’s not a broken business. That’s a business the market has decided has no future growth story. There’s a difference.
The innovator’s dilemma is playing out exactly as described. HubSpot’s existing customer base limits how aggressively it can transform.
Go too fast: lose current customers.
Go too slow: miss the next paradigm shift.
Catching lightning in a bottle twice is rare. Even if they pivoted completely to AI-first tomorrow, they've already missed the window. The bet that made them great, going from sales-led to product-led, was right. But they were smaller then. This time is different.
Maybe. I was wrong before; maybe the HubSpot brand can turn it all around. The point is, this is a pivotal moment for solutions in an old market.
The input model is what’s actually broken.
Classical SaaS is built on one assumption: a human manually enters structured data into fields, or at least sits as the bottleneck in that relationship. Name. Company. Deal stage. That assumption is becoming indefensible.
Today?
Picture a salesperson driving home from a client meeting. They’re not opening HubSpot and filling in fields. They’re talking. The AI pulls context from the calendar, the conversation, and the conversation history. It asks: do you want to add anything? They correct what’s wrong, out loud.
Done.
That’s not a feature upgrade to the existing model. It’s a different paradigm for interacting with software.
In what world would a product built on individual manual input have a future?
Not this one. If AI has definitively solved one problem, it's understanding fuzzy, unstructured input. Voice, spelling mistakes, whatever: it works.
But at the same time, this is not about speed.
Speed is the parlor trick. Quality is the problem.
AI makes people faster. That’s real. But speed is a demo: easy to show, commoditized fast, and marginal gains on top of it stop mattering within months as everyone ships the same thing.
The actual unsolved problem is quality. Two distinct failure modes:
Model collapse. AI-generated training data fed back into models degrades quality over time. Shumailov et al. (2024), published in Nature: models trained on their own outputs lose the extremes of their data distribution first, then collapse toward homogeneity. Each generation is a little worse than the last.
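A toy illustration of the mechanism, not the paper's actual setup: fit a Gaussian to a finite sample of a model's own output, resample from the fit, and repeat. The spread of the data, a crude stand-in for the "extremes of the distribution", decays across generations.

```python
import random
import statistics

random.seed(0)

n = 20             # samples per generation (small, to exaggerate the effect)
generations = 300

# Generation 0: "real" data from a standard normal.
data = [random.gauss(0.0, 1.0) for _ in range(n)]
initial_spread = statistics.pstdev(data)

for _ in range(generations):
    # "Train" a model on the previous generation's output...
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    # ...then produce the next generation from that model.
    data = [random.gauss(mu, sigma) for _ in range(n)]

final_spread = statistics.pstdev(data)
print(f"spread: {initial_spread:.2f} -> {final_spread:.2f}")
```

Each refit on a finite sample loses a little of the tails, and the loss compounds: the spread drifts toward zero, which is the homogeneity the paper describes.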
Context rot. Wrong information propagates across the web. AI cites it as the consensus truth.
My favorite example from last month: London City Airport has no business lounge. I fly from there every two weeks. But Google will happily serve you articles saying it does, with fake reviews, because enough sources replicated the wrong data. That’s what public training data looks like. A lot of it is opinions. A lot of it is wrong.
And here’s the error rate problem nobody’s talking about because they simply don’t understand it: AI doesn’t reduce your error rate necessarily.
It applies the same error rate to more work. If you’re making 5% errors and you double your throughput, you’ve doubled your absolute mistakes. Your test surface doesn’t scale with output. You still have 20,000 customers. You can’t A/B test more experiments just because you shipped more. Going wrong faster is not neutral.
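The arithmetic, with made-up numbers (the rates and volumes below are illustrative, not from any real company):

```python
error_rate = 0.05        # your error rate, assumed unchanged by AI
output_before = 100      # units of work shipped per month, before AI
output_after = 200       # AI doubles throughput

errors_before = error_rate * output_before  # 5 mistakes per month
errors_after = error_rate * output_after    # 10 mistakes per month

# Review capacity doesn't scale with output: same customers,
# same test surface, same number of experiments you can run.
review_capacity = 100    # units you can actually check per month

unchecked = max(0, output_after - review_capacity)
print(errors_before, errors_after, unchecked)  # 5.0 10.0 100
```

Same rate, double the volume, and everything past your review capacity ships unexamined. That's the sense in which going wrong faster is not neutral.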
This applies to everyone in the company, from the CEO all the way down to the individual contributor who ships without looking at quality that closely anymore.
Why anyone would believe that the problem was that shipping (or making decisions) wasn’t happening fast enough is still beyond me.
If anything, we were already shipping and deciding way too fast.
Future markets will expect speed as table stakes and pay for quality as the differentiator.
This isn’t doomerism. It keeps happening
In 2000, Siebel Systems was the dominant CRM. Profitable, market-leading, untouchable. Salesforce hired 25 actors in “death to software” t-shirts to march outside Siebel’s user conference. Siebel called the police. The story broke in the WSJ, Forbes, and the NYT. Everyone thought it was a joke.
Siebel was acquired by Oracle in 2006. It effectively ceased to exist as a product.
The category didn’t die. CRM is bigger than ever. The paradigm of how you deliver and use it shifted. Then HubSpot disrupted Salesforce on PLG. Now HubSpot is in the Siebel position.
The cycle eats itself. One generation at a time. New demand always emerges. We are catastrophically bad at predicting what that demand looks like, but it always shows up.
The market doesn’t disappear when a paradigm shifts. It transforms.
Domain expertise is what makes AI output usable.
Here’s what I keep seeing: a marketer using AI outperforms a marketer who isn’t. An engineer using AI outperforms an engineer who isn’t. What doesn’t happen is a marketer using AI outperforming a senior engineer on a hard engineering problem. Or the reverse.
AI amplifies what you already know. It doesn’t replace knowing.
The reason is mundane: you can only spot a bad output if you know what good looks like. A marketer can tell when the campaign copy is off. An engineer can tell when the architecture is fragile. AI produces outputs fast. Judgment is what determines whether those outputs ship or get thrown out. And judgment is domain-specific.
This also connects to the error rate problem. If you can’t spot bad AI outputs in your domain, you’re not making faster decisions. You’re shipping bad ones at scale.
The people who are going to be okay are not the ones who used AI to do a different job. They’re the ones who used it to become significantly better at the job they already had.
The table stakes just changed.
I don’t know what the specific services are that emerge from this. But I know the directional signal.
The market that’s forming is people who already use AI in their daily work. I’m one of them. A lot of people reading this are too.
We don’t need more task managers. We don’t need another thing to log into. We don’t need 20 form fields.
We need things that slot into the workflows we already have. And everyone ends up having their own custom model, one way or another.
That’s a fundamental shift in what product-market fit means. The first question used to be: is this useful? Now it’s: Does this work with how I already use AI? If the answer is no, the conversation is probably over before it starts.
And here’s what changes everything on the supply side: anyone can install an API now. Not developers. Anyone that can tell an AI to “install” something.
I dictated the thinking for this article into Claude while working through it. I didn’t sit down to write. That’s a different product relationship than anything SaaS was built around.
The services that win won’t necessarily have interfaces. They’ll have compatibility. They’ll slot in. They’ll make the AI you’re already using better.
That’s a real market. We just don’t know the brand names yet.
I’ve been wrong before.
I said PMs need to know SQL. I haven’t written SQL in three years. I said PMs need deep technical AI knowledge to work effectively with it. Completely wrong. The people using AI best right now are treating it like an assistant, not becoming technical operators.
Double down on your strengths. Build products for problems that persist in the future.
Using AI now to just build what worked in the past, but faster, seems like a waste of time to me.
So take all of this with that caveat.
What I do know about the future markets: speed is the demo. Judgment is the actual job. It always has been.
AI just made the gap between the two more expensive to ignore.
In fairness, three years ago SQL was very useful!