A business case is commonly understood to be an analysis of a potential opportunity, done to evaluate whether it's worth it for a business to invest in it. If we knew with 100% certainty that an opportunity would pan out, we would not need a business case.
In other words, “business casing” is about risk management.
The reason there are almost no resources out there that show you how to do them is that they are highly situational. I'll walk you through a business case example step by step.
In my career, the skill of creating business cases was mostly reserved for senior executives and was never demanded of me. I believe this to be a missed opportunity for companies and individuals alike.
Great business cases surfaced by operational teams (mostly orchestrated by their PMs) are extremely powerful and will increasingly be demanded in the future, specifically from PMs.
If we assume it to be true that shipping something is becoming easier and easier, then the differentiating factor for teams is not who can ship faster but who can ship (and unship) the correct things.
However, aside from measuring, learning, and leveraging those learnings once you've built a product, business cases serve as an important first gate to avoid wasted effort by research or product teams.
Great vs. bad business cases
Great business cases don't just highlight opportunities we aren't aware of; they also examine high-risk failure points before we get to them. They focus on reducing risk through honest examination where it makes the most sense.
Simplicity
Another core component of great business cases is that they are simple. While business cases can certainly examine opportunities in depth, their other purpose is to create alignment.
Alignment between people who do not have your in-depth understanding is created by keeping the case simple and not overcomplicating it. Crafting a simple but well-founded case takes time and expertise, but it is a great way to start off on the right foot.
Context
Great business cases are placed in context. An opportunity of 5mio ARR is not inherently good or bad. It depends on the business context:
Is that affecting a good portion of the company’s revenue?
Is it the best opportunity available to us at this time?
Confirmation Bias
An ambitious company culture has a particularly dangerous side effect: it is extremely easy to bend a business case to look better than it should and to look only for confirming facts so you can build something you simply want to make happen.
I was told several times in my career to look for bigger opportunities, and I confess there were instances where turning a 2mio ARR case into a 4mio ARR case was way too easy, as it often hinges on dialing up just one metric.
This is not just a problem of the PM who does it but also of the company that demands it.
Reward honest assessments, not fairy tales that look good on a scorecard.
Learn with me
I have rewritten my weekend course (7th/8th July 2023) at Maven to focus on Strategy and Business Casing thanks to the community's feedback. This material is a result of that rewrite and will be explored more in-depth in the cohort.
My rough process:
Draft (We build the core assumption and its main value drivers)
Breaking it up (We break the main drivers apart)
We evaluate confidence/impact/effort for each risk
Confidence of Assumptions
Understanding Impact
Effort (Difficult vs. Complex)
We connect it to Value capture (Revenue etc.)
1. Draft - Business case “Collaboration”
Let’s assume we manage a product which is about handling digital documents (PDFs, doc, xls, etc.) and we want to evaluate whether it’s worth it to have our users collaborate around these documents. A “feature” which is completely missing from the product.
Key business assumptions
In order to evaluate whether this makes sense at all, we have to assume some business numbers (sanity-checked in the sketch after this list):
50mio monthly active free users
50mio ARR obtained purely through self-serve.
All of its revenue comes from 1mio paying users
The average lifetime value is 50$
The product subscription costs ~4$ / month per user
Meaning our average customer churns after ~1 year (4$ × 12 ≈ 50$)
We assume that we’re only investigating initiatives for our product that affect at least 10% of our yearly ARR (>5mio ARR)
The very first step is to assume which main channels we affect. In this case, we assume that adding collaboration to our product affects:
Conversion: Adding value to the product should influence our conversion metrics for users to convert to trial and then from trial to pro.
Network Effects: Because collaboration features build on people working with each other, they should have a big impact on users inviting other users, which compounds the benefit.
Retention: Adding this crucial feature will have an effect on our existing customer base. Some of them should have this need; ideally, it reduces churn and increases stickiness.
For the "number", we already make an assumption about how big this opportunity is. In this case, I assume a 6mio ARR uplift.
Our initial assumption of this being worth 6mio ARR clears the threshold (after all, we wouldn't have touched this opportunity otherwise), but it stands on very shaky legs. To evaluate this number, we now break the affected channels up.
2. Breaking up the Draft
Our initial draft looked like this: we have 3 main areas (conversion, network effects, and retention) that we assume to be affected:
In general, it makes sense to also limit yourself to the most important affected subchannels so your case doesn’t get overly complicated.
2.1. Conversion
For the conversion channel I have identified 4 different areas that will be affected by this new feature:
SEO: The company's main strategy to get new users on board is SEO. This existing stream will definitely be affected due to its sheer size.
Paid: We could add paid advertising for the new collaboration feature to test its adoption.
Free to Trial / Trial to Pro: We hypothesize that the funnel from free to paid is definitely affected, more so on trial to pro, once users see how useful it is to collaborate with others around their documents.
LTV Impact: The business has a current lifetime value of 50$ per customer, which seems low. If a portion of our users start to adopt collaboration and invite others, they are locked into the product and LTV should rise as a result. Users who come into the product because of the collaboration feature should also naturally have a higher LTV compared to those who don't.
Let’s visualize it:
Looks good enough, let’s move on to network effects:
2.2 Network effects
For network effects, we have 2 mainly affected areas:
External users: We have people inviting others to the product because they can collaborate. These people would not have come into our product otherwise. We should visualize how we assume they move through the product and at which rate.
LTV: Network effects are some of the strongest drivers of retention if you count being in a team as a network-related effect. Since we already identified it as an effect in our conversion channel, we simply visualize the connection so we don't count it twice.
This is how it looks now:
2.3. Retention effects
For retention, there is a myriad of effects to consider, but when we think about them, they are all connected to each other. If this were a new business, we wouldn't have to think about them; there's not much to defend, and we would focus even more on our conversion channel and new customer acquisition.
Existing Customers: We have an existing customer base of 1mio users. They will definitely be affected by our new feature addition. If our feature affects them, it will also have a direct effect on our churn rate, but those effects are implied and connected to each other.
We also have to be careful when considering churn, as churn directly affects LTV, which we already opened up in our first channel. Let's keep that in mind for the moment.
Our broken-up case looks like this now:
Simplify and sense check
Paid vs SEO
As I look over this case, I'm starting to think that the "Paid" channel doesn't make a lot of sense in relation to the others. While we could definitely have some effect there, it is small compared to the pull we have from our other channels (the main business is driven by SEO), so I will delete it from the graph.
Another reason is that we never proved we can run a good paid acquisition channel for this example business; most of it comes from SEO. Delete.
Let’s add color to it
I use a very simple system which I’ll elaborate more about in the next article but for now:
Red: Wild guess, unclear impact
Yellow: Somewhat grounded guess, medium to high impact
Green: A well-educated guess, high impact
The color should reflect how certain we are that something will be affected by our feature addition, and whether the effect will be large. For the moment, it's a mixture of impact and confidence. At this point, the case should only contain assumptions that can drive enough uplift; that's why nothing so far should be low impact. (If something is, delete it.) The point of this exercise is less to be sure than to think about each point.
Let’s think it through:
SEO: So much of our business comes from SEO that even a small change should have a big impact, but SEO is also notoriously difficult to move. After all, collaboration is not a core use case of our users but more of an added benefit. Yellow
Free to Trial: Same reason as above. Most people won't sign up mainly for this feature; they aren't looking for a collaborative document solution, most of them look for a document solution. Yellow
Trial to Pro: People will love the collaboration aspect of the product if they start using it, which is likely once they enter the trial; that should have a big impact. Green
LTV: We know from experience and other products that LTV is driven a lot by collaborative team use cases. Collaboration is at the heart of this. Green
Externals: Collaboration is a huge network-effect driver. Viewed from afar, it's maybe the main reason to look at this opportunity at all; we also identified it in our strategy as a core hypothesis to fix our bad LTV. Green
Existing Customers: We know from the market that collaboration features have a huge impact on LTV and retention in product usage. It also fits our product, but we're unsure how many of our existing customers really "need" it. They didn't sign up for it, after all. Let's put it at yellow for the moment.
Churn: For the existing customers that ARE affected, though, the churn improvement should be substantial. I know from experience that in some cases you can drive LTV up by almost 200% for the affected customers. Green
This is a good starting point to now put in the actual work and connect our metrics as we know them to the estimated effects.
3. Impact & Confidence
When we think about what can influence the outcome of a business case we have 2 dimensions that are crucial to consider without overcomplicating things:
Impact: How much does a specific change mean to a specific channel/user group compared to what they already have? “Impact” is the delta between the most likely alternative of what someone could do otherwise and what we give them.
Confidence: How confident are we that our assumption about Impact is correct? The less confident we are the bigger the range of potential outcomes is.
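One hypothetical way to make this relationship concrete: treat confidence as the width of the band around an impact estimate. A toy sketch (the spread values are my illustrative assumptions, not part of any formal method):

```python
# Toy model (my illustration, not the author's method): confidence sets how wide
# the band of possible outcomes around an impact estimate is.
SPREAD = {"green": 0.25, "yellow": 0.50, "red": 0.75}  # illustrative +/- fractions

def outcome_range(estimate: float, confidence: str) -> tuple[float, float]:
    """Lower confidence -> wider range of potential outcomes."""
    spread = SPREAD[confidence]
    return estimate * (1 - spread), estimate * (1 + spread)

# E.g. an 8mio ARR estimate held with low (red) confidence:
low, high = outcome_range(8_000_000, "red")
print(f"{low / 1e6:.0f}-{high / 1e6:.0f}mio ARR")  # 2-14mio ARR
```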
For the following sections keep in mind, business cases are not science, and therefore too much validation is counterproductive. It’s a tool to evaluate quickly whether something could be worth it before making substantial investments.
At the same time, we want to understand conceptually whether we can challenge our ideas better than just “Oh this must work! I believe in it!”
Impact
When we think about the Impact of anything on our users I like to think of it in a range from Optimization (low impact) to Innovation (high impact).
Added user value is the delta between what they have otherwise and what we introduce.
In general innovative solutions have a higher impact on users (negative or positive) due to this introduced delta. Optimizations have a smaller impact on our users but are also largely safer bets.
In an ideal world, we would love to have innovative solutions with high confidence. Whenever we find something like that our business case should be able to reflect that.
No workaround to QoL Improvement
Another way to look at impact is on a range from having absolutely no workaround so far (high impact) to a quality-of-life improvement (low impact):
If we have a feature that allows you to export your document as a PDF and we now allow you to batch-export multiple documents, then depending on who you ask this is a small, medium, or even big improvement:
“I need to convert 4 documents”: It’s annoying but I can do it
"I need to convert 100 documents": It's theoretically possible but not feasible.
So why should we bother doing any optimizations if their impact is low? The answer usually lies in the fact that optimizations are far easier to predict than innovations.
The more mature a business is, the more bias there is towards optimizations, since when you change something you can also negatively impact a growing existing user base.
Companies still should do both but bigger companies risk more when innovating compared to new startups.
Confidence
Confidence is a different way of asking, "How sure are we that what we think will happen actually happens?"
By and large, high confidence is tied to quantitative measurements whereas confidence decreases the more we have to rely on qualitative measurements, feelings, and product sense.
Another way of looking at confidence is to compare it with what we already built which has similar conditions.
Domain Closeness & Similar feature
Domain Closeness: If Tesla adds a truck to its offering, that's close to its main domain, which increases our confidence in whatever we assume. If it's a bicycle, we get further out; we're still dealing with transportation, but ultimately a bike is further away from a car than a truck is.
Similar feature proof: Similar to domain closeness, if we already built cars and shipped them to customers, we have proven that we can sell "something" at a specific conversion rate and a specific price point. While we can't deduce from this how big the impact will be, we have already proven that specific conversion rates are possible within our user groups. These are very strong confidence signals for any assumption.
Questioning Customers and Users vs. Market
In general, it is dangerous to extrapolate market adoption from internal interviews, because of two effects:
Existing users are not a representative sample of your overall market. Those who are not interested in your product in its current state but would be if you added the feature are not in the group you questioned. This is especially relevant if your feature is meant to win new market share.
People are bad at telling you what they want. Qualitative interviews are emotionally powerful, but even if they are validated in a quantifiable way (what percentage of the user group says something), we still don't know what people will actually do vs. what they say they will do.
For these reasons, examining past behavior is far superior to hypothetical assumptions. (This is also why prototyping is so popular.)
Modeling Impact & Confidence
Our case from the other two parts looks at the moment like this:
Let's break each one apart. Here's a reminder of the business metrics we work with:
Key business assumptions
In order to evaluate whether this makes sense at all, we have to assume some business numbers:
50mio monthly active free users
50mio ARR (Annual recurring revenue) obtained purely through self-serve.
All of its revenue comes from 1mio paying users
The average lifetime value is 50$
The product subscription costs ~4$ / month per user
Meaning our average customer churns after ~1 year (4$ × 12 ≈ 50$)
100’000 customers churn per month overall
Keep in mind: All following numbers are highly specific to the business you work in but I want to show the typical thought process on arriving at these numbers.
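The churn figure follows from the other numbers: with an average lifetime of about a year, roughly one twelfth of the 1mio customer base churns each month. A quick sketch (the 100'000 above is that figure, rounded up generously):

```python
# Deriving the monthly churn figure (a sketch; inputs are the assumptions above).
paying_customers = 1_000_000
avg_lifetime_months = 12            # from LTV 50$ / 4$ per month (~12.5, rounded)

monthly_churn = paying_customers / avg_lifetime_months
print(f"{monthly_churn:,.0f}")      # ~83,000/month; the article rounds up to 100,000
```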
3.1 Conversion
The question we have to ask ourselves now: how is each channel affected in detail, from start to finish? We try to keep in mind the differentiation between impact and confidence.
Existing free traffic
Of the 50mio monthly active users we currently have, we think 9mio (roughly 20%) are directly interested in this feature because they use our product in smaller teams. We think we know this because of past research studies. If we didn't have those, we could have gone by the ICP groups with the best fit and then assumed their ratio of the overall traffic.
Assumption: 9mio affected users from existing traffic.
SEO
The company's main strategy to get new users on board is SEO. This existing stream will definitely be affected due to its existing size (50mio MAU). We do a quick keyword analysis ("collaborate on a document", "annotate document", "comment document") and compare our existing domain authority against keywords we already rank for (for instance "convert document").
However, SEO is difficult to get right, so I'm ranking this with yellow confidence.
Assumption: We reach 1mio additional users per month across all these keywords (Medium)
LTV
Our current LTV (lifetime value per customer) is at 50$. Collaboration should significantly boost that, since people invite others, which makes it harder to just switch solutions. I'm assuming a hefty 50% improvement there.
50% is a lot. But keep in mind that we only apply this improvement to a conversion funnel that is specifically interested in this feature. We're not saying that our average LTV for all users goes up by 50%.
Assumption: LTV +50% (50$ → 75$) for these users and the new SEO users we convert because of this feature
Visualize
Therefore we have 10mio users affected per month by this feature (SEO + existing traffic). A quick analysis of these users shows a standard adoption rate of 1% from free to trial. We also know that these users then convert to fully paid at an average 45% conversion rate from trial to pro.
This means out of these 10mio users we will have 100’000 (1%) trials started. By adding this feature I expect this conversion to be improved. I have absolutely no data to go by and have to rely on my “product sense”.
Assumption:
Standard trial adoption 1% (high confidence, existing numbers)
20% relative improvement for these users (low confidence)
Impact:
This results in 20'000 new trials starting per month (20% of 100k). If these convert normally (45% conversion rate), we end up with 9'000 new paying customers per month.
Assuming that our LTV for them is 75$ instead of 50$ we arrive at our first number:
~8mio ARR from our "Conversion" channel
What's also helpful: if somebody looks at this from the outside, they can immediately spot where we aren't sure. A lot of this channel hinges on the 20% relative improvement in the actual conversion. On the other hand, we're relatively sure that the LTV for those people will increase a lot.
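The whole chain fits in a few lines. A sketch of the conversion math (inputs are the assumptions above; the ARR figure values each monthly cohort of new customers at its improved LTV, following the article's convention):

```python
# Conversion channel, chained end to end (all inputs are the assumptions above).
affected_users = 10_000_000   # 9mio existing + 1mio new SEO users per month
base_trial_rate = 0.01        # 1% free -> trial (high confidence, existing numbers)
relative_uplift = 0.20        # 20% relative improvement (low confidence)
trial_to_pro = 0.45           # 45% trial -> pro (existing numbers)
ltv_collab = 75               # 50$ LTV +50% for these users

base_trials = affected_users * base_trial_rate   # 100,000 trials/month
extra_trials = base_trials * relative_uplift     # 20,000 additional trials/month
new_customers = extra_trials * trial_to_pro      # 9,000 new customers/month

# Valuing a monthly cohort of new customers at its LTV:
arr_uplift = new_customers * 12 * ltv_collab     # ~8.1mio ARR
print(f"{new_customers:,.0f} customers/month -> {arr_uplift / 1e6:.1f}mio ARR")
```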
3.2 Network effects
(We assume for the following that we keep friction low: only the person who owns the document needs to have a paid account with us; others can collaborate for free.)
We know from our conversion funnel roughly how many additional customers we create per month (9k).
Those additional customers come specifically because of our new feature. We know from preliminary research that an average worker invites about 4 people per month to collaborate on their documents.
We think we can convert 5% of them into paying customers. They use the collab features and may afterward also use the platform to collaborate on their own documents. This is 5 times our standard 1% conversion rate for users above, but keep in mind that someone only invites people they think will collaborate with them.
Assumption:
Each converted “collaboration” customer will invite 4 other people to collaborate on their documents. (yellow)
5% conversion rate for invited users at some point (red)
These invited users also have an increased LTV (see above) of 75$ (yellow)
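Chaining these assumptions onto the 9k monthly customers from the conversion channel gives the network-effects number we'll reference later. A sketch (same LTV convention as above):

```python
# Network effects channel, chained onto the conversion output (assumptions above).
new_collab_customers = 9_000   # per month, from the conversion channel
invites_per_customer = 4       # invited collaborators per customer per month (yellow)
invitee_conversion = 0.05      # 5% of invitees eventually pay (red)
ltv_collab = 75                # invited users get the boosted LTV too (yellow)

invited_users = new_collab_customers * invites_per_customer  # 36,000/month
converted = invited_users * invitee_conversion               # 1,800 customers/month

arr_uplift = converted * 12 * ltv_collab                     # ~1.6mio ARR
print(f"{converted:,.0f} customers/month -> {arr_uplift / 1e6:.1f}mio ARR")
```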
3.3 Retention
We not only have new customers and users but also an existing customer base (1mio). We know from preliminary research that 25% of those have said that collaboration around documents is important to them.
For those, we should be able to increase yearly retention by 5% as an effect. Retention effects are notoriously difficult to anticipate, though.
To complicate it further, we assume that, based on this increased retention, 1.25% of all churning customers (25% × 5%) will survive that would have churned otherwise.
Assumption:
25% of existing customers have this need (yellow, survey done)
5% increased absolute yearly retention (red)
1.25% of all churned customers survive as a result (25% × 5%) (red)
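Chained together, the retention assumptions work out as follows. A sketch; note that valuing each saved customer at the boosted 75$ LTV is my assumption to reproduce the channel figure that appears later:

```python
# Retention channel: customers saved that would otherwise have churned.
monthly_churned = 100_000     # rounded churn figure from the metrics above
need_share = 0.25             # 25% of existing customers have the need (yellow)
retention_uplift = 0.05       # +5% absolute yearly retention for them (red)

survival_rate = need_share * retention_uplift      # 1.25% of all churning customers
saved_per_month = monthly_churned * survival_rate  # 1,250 customers/month

# Assumption on my part: saved customers are valued at the boosted 75$ LTV,
# which reproduces the ~1.1mio ARR figure used later in the article.
arr_uplift = saved_per_month * 12 * 75             # ~1.1mio ARR
print(f"{saved_per_month:,.0f} saved/month -> {arr_uplift / 1e6:.2f}mio ARR")
```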
This is how the first version of our business case looks with numbers. While this is already valuable, we will now do an analysis and try to connect it to a tangible decision.
4. “Value Capture” - Making sense
We could end here and be happy with ourselves; after all, we created a good-looking business case that already provides value.
In reality, there's an important piece missing that we can still optimize. Revenue (capturing value) or uplift does not exist in a vacuum; we need to put everything into context.
Here's where we left off last time:
We “found” revenue in these 3 channels:
Conversion: 8mio ARR
Network Effects: 1.6mio ARR
Retention: 1.1mio ARR
Which adds up to roughly 10.7mio ARR. For our example business case, we determined that our business is pushing 50mio ARR.
We also said that we're only investigating initiatives that affect at least 10% of our yearly ARR. At roughly 21%, we are far above that, so it's looking good enough for the moment.
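As a final sanity check, the roll-up in a few lines (channel figures from above, in mio ARR):

```python
# Rolling up the three channels against the investigation threshold.
channels_mio = {"conversion": 8.0, "network effects": 1.6, "retention": 1.1}

total = sum(channels_mio.values())   # ~10.7mio ARR
share = total / 50.0                 # the business does 50mio ARR

print(f"total uplift: {total:.1f}mio ARR ({share:.0%} of ARR)")  # ~21%
assert share >= 0.10, "below the 10% investigation threshold"
```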
Low Confidence - Failure points
In our visualization, we already marked wild assumptions in red; we have little to no verification for them. We have two options for each of them:
Try to find alternative ways to drive confidence up
Accept the risk and consider them in our final proposal
Our low confidence points are (red post-its):
20% improvement in conversion for our existing traffic
Each new customer because of collaboration will invite 4 users per month
The feature will increase our absolute yearly retention by 5%
Which leads to an increased 1.25% survival rate for our churn
Driving confidence up
Our first low confidence point is especially problematic. It drives the majority of the business case uplift (8mio ARR vs. roughly 3mio ARR from the other channels), so it is the one we can least afford to get wrong.
To complicate the issue further, the network effects channel depends entirely on this one. If it fails, the second channel will likely fail as well.
A good way to think about it: the earlier in a funnel a risk sits, the more potential it has to swing the entire case (up or down).
So what should we do in this case? I’m not sure yet but I see two options which I will consider later on:
If the entire feature is not that costly to build, we just live with the uncertainty but pay increased attention to this particular conversion improvement. It looks like a good point at which to measure the success of this project and decide whether we should keep going once we start building.
Luckily, we can at least test some of the conversion improvements with a painted-door test or something similar. While that won't give us complete visibility over the entire activation funnel, we will still have real-life data on the first step of the new feature.
Why did I not make a decision yet? Because it depends on how complex the entire thing is to build. Blanket statements like "Let's build an MVP" are missing the most important thing: context.
But we’ll get to it:
For the other low confidence points
I will accept the low confidence for the moment; I don't see a way of validating these short of building the actual feature out. Especially our assumption about people inviting others seems impossible to anticipate.
On the retention argument, we have the problem of 2 highly uncertain assumptions stacked right after each other.
It's possible that our yearly retention goes up but doesn't translate into a better survival curve. Beyond "it makes sense," I have nothing.
Effort
How difficult all of this is to build, i.e., how expensive it is, will heavily influence our decision on whether we should build out what we suggest here and what our immediate plan of action is.
Do not overcomplicate this step. It makes no sense to do story point estimations, especially for bigger projects. But it’s definitely worth it to think it through with your engineering managers.
Complex and cheap vs. Simple but expensive
Let’s compare two scenarios and consider only time as the expense:
Complex and cheap: I have to make one pizza and never made pizza myself before
Simple but expensive: I have to make 10 pizzas that I already made in the past
Between figuring out what to buy, following a recipe, and getting it right, I'd be careful and say the first scenario probably takes me about triple the time per pizza compared to the second.
That should give me enough safety margin. Right?
The problem is, we’re inherently bad at estimating anything that we haven’t done before. “It can’t be *that* complicated” right?
When you combine multiple steps of something you have never done before, the likelihood of one showstopper among them increases dramatically. What if the recipe lists something you have never bought before? Tomato sauce is *that* complicated? Do I need baking paper? And so forth.
I've been doing this for quite some time now, and I know there is always something that goes wrong with anything you haven't done a million times before. Especially in business.
I'm not trying to convince you that scenario 1 or 2 is going to be more expensive, since that depends on what you consider a good pizza (or outcome).
The point is… scenario 2 can be estimated with a high degree of certainty.
Or in other words:
The length of an effort is not the only factor when figuring out development time. The main factor is actually whether we have done something similar before.
Expect difficulties that you don’t see yet. The others are manageable.
We tend to move on too quickly from things that “look” simple without evaluating whether they are complex.
Another factor that you should consider is whether what you build can be built by one team or requires heavy cross-team collaboration. The latter introduces a ton of complexity.
Bringing it back into the case
If I had to make a quick estimate now of how long it takes to build the entire collaboration case into the product to a good-enough state, I would talk to my engineering team. Let's say the answer is:
I have to put an entire team on it fully for 2 quarters. (Along with their other maintenance duties)
A simplifying factor in our example is that I think the team can build it themselves. They have the technical knowledge, but we have never built anything comparable in the company before, which introduces complexity again.
While they don't have to rely on outside expertise, they're building a tool that affects everything: Marketing, Growth teams, and potentially Sales… smells difficult.
On second thought, let’s make this 4 quarters.
But wait
Remember when I said the riskiest assumption could be evaluated with a painted-door test? This becomes crucial now. We can de-risk and learn about this project along the way without having to build it for 4 quarters before we see an impact. We need to reflect this in our final proposal.
5. The final proposal - do or don’t?
We have a very rough idea of the effort (4 quarters). We have our assumptions for the impact (~11mio ARR). Are we doing it or not?
The answer is still “it depends”:
What other opportunities do we have? If this is the best thing in terms of uplift and effort then yes. For this, we need to make comparable cases. That’s why creating cases fast is such a valuable skill.
Does this fit into our strategy? Not every opportunity is worth doing, especially if it doesn’t fit into your overall strategy. (I touch on simple strategy creation as a condition for a business case in my course)
For this comparative analysis, there's one thing missing.
A summary of our case serves as a great "note" to compare it with other opportunities:
“Adding Collaboration to our Product”
"We see an overall possible uplift of ~11mio ARR to our core business in this opportunity once fully delivered, mainly through 3 channels:
Direct conversion effects from our existing traffic
Network effects
Retention effects on our existing customer base.
While we believe it takes 4 quarters for one team to fully build out our collaboration feature, we think we can de-risk this case dramatically by delivering an MVP within one quarter that focuses on assessing the free-to-trial adoption rate as a baseline. If we cannot achieve at least a 15% increase in conversion on the affected traffic (10mio MAU) after one quarter, the project should be reevaluated.
This can either be done as a painted door test or a working prototype.
→ Link to a notion page with a more detailed breakdown”
It is up to you how detailed you present or compare your own business cases. I like to keep them very simple and keep my more complicated assessment in my back pocket. I usually have 1 summary, a Miro visualization, and a more detailed Notion analysis with sources.
The above is a great little summary that can live next to other similar summaries. It’s good enough.
Now we are in a position to either put our research teams on it, build it out, or throw it into a Notion page that no one ever reads again.
Either way: It’s a muscle you have to exercise as a product manager.
The better you are at it, the more you can protect other resources in your company.
Summary
In the end, the goal of a business case is to create something comparable and simple with which to assess an opportunity. To get there, you need to think about it at some varying level of detail.
That process in itself is highly valuable, but keep it simple.