Wrong Facts, Perfect Systems
How I taught my AI to catch what my team keeps missing
Some version of this happened to me about three times last week:
A PM presented a report showing a significant traffic spike in a specific period. The numbers were pulled correctly. The charts looked clean. The analysis was thorough.
I asked one question: did you check whether that date range includes the weeks we had a bot network scraping our sites?
They hadn’t. The inflated traffic was baked into the baseline. Every comparison, every trend line, every conclusion built on top of it was wrong. Not because the data was fabricated. Because the data was polluted and nobody flagged it.
The processing was correct. The “fact” underneath it was not. And the knowledge that those weeks were compromised? It lived in my head, not in any system the PM could have checked. I only knew because I had analyzed that same faulty traffic three weeks earlier.
When I ran the same report through my AI, it flagged the issue immediately, because I had taught it that fact back then.
This is the problem we should be talking about with AI
The entire AI engineering discourse right now is about retrieval: how to find information accurately and fast, how to pull the right data from massive datasets. These are real engineering challenges. They matter.
But for leadership roles and product managers, they solve the wrong problem. For us, information is not a retrieval problem; it is an accuracy problem.
You can retrieve the exact correct data point from your database, run it through the best analysis model, and still get something objectively wrong, because the fact itself was wrong in the first place: it was pulled without the context needed to interpret it.
The retrieval was “technically” perfect. The system worked flawlessly. The output is still garbage.
Retrieval correctness is not factual correctness.
And no amount of retrieval precision, no endlessly improved generic model, will fix that.
You can either correct the data in the beginning, or catch it at the end before it is accepted as new fact and compounds.
---
Here’s a simpler way to see it.
Who is the most important person in your life? If I look at your email data, I’d say it’s whoever you message most. That’s a reasonable heuristic. For most people, it’s probably close enough.
But maybe the most important person in your life gets one message a month. Both can be true. And in the ratio that matters, the heuristic is completely wrong.
If you use a heuristic at the input layer, no system downstream will ever correct for it.
Not fine-tuning. Not a bigger context window. The bias is baked in before retrieval even starts.
You can’t always catch a wrong fact at the source. Bot traffic doesn’t announce itself. Polluted data looks like clean data. The only reliable gate is at the other end: when someone who understands the process in detail (an analyst looking at an analysis, a PM at a summary of an interview they attended) actually looks at the output and doesn’t blindly trust it.
The problem is that person’s correction usually dies in the meeting where they made it. It never enters a system anyone else can check.
---
So what’s the fix?
It’s embarrassingly simple. A verified context layer. A place where leadership writes down what they know to be true, in plain text, so an AI can check new information against it.
Here’s what that looks like in practice:
Every Monday, leadership reviews a dashboard. Someone flags churn as up 15% month-over-month. The room tenses. But the CEO knows this happens every Q1 because of annual renewal cycles. It’s seasonal. Same conversation every year, same panic, same correction.
After the meeting, the CEO writes one line in a text file: *“Q1 churn is seasonal. Do not compare MoM without adjusting for renewal cycle.”*
Next week, the AI checks the new deck against that file. Catches it before the meeting starts. No drama. No wasted executive time. No bad decision made in a moment of panic.
The AI didn’t generate the insight. The CEO took 30 seconds to write down something they already knew. The AI just made sure that knowledge wasn’t lost the next time it mattered.
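That check is small enough to sketch. Assuming the facts live in a plain-text file called verified_facts.md (my name for it, not a standard), the whole job is to put those committed facts in front of the model before the new deck. The sketch below only assembles the review prompt; the actual model call is left out:

```python
from pathlib import Path

def build_review_prompt(report_text: str, facts_file: str = "verified_facts.md") -> str:
    """Prepend every verified fact to the review request so the model
    checks the report against them before anything else."""
    facts = Path(facts_file).read_text(encoding="utf-8").strip()
    return (
        "You are reviewing a report before a leadership meeting.\n"
        "VERIFIED FACTS (treat these as ground truth):\n"
        f"{facts}\n\n"
        "REPORT:\n"
        f"{report_text}\n\n"
        "Flag any claim in the report that contradicts a verified fact."
    )
```

The point is the order of operations: the verified context is loaded first, every single time, so the model never sees the deck without it.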
You should do the exact same thing. Every day.
---
I think about this like code commits.
A report is a pull request against your organization’s knowledge. Before it gets merged into a decision, it should be checked against committed facts. If it contradicts something leadership has verified to be true, it gets flagged. If it surfaces something new that leadership validates, that gets committed too.
Your verified context layer is the repository. Leadership decisions are the merge approvals. The AI is the CI pipeline that runs the checks.
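The commit side of that analogy fits in a few lines as well. The file name and the approved_by field here are my own assumptions about how a team might record approvals, not a prescribed format:

```python
from datetime import date
from pathlib import Path

# The "repository" of committed knowledge; the name is a placeholder.
FACTS = Path("verified_facts.md")

def commit_fact(fact: str, approved_by: str) -> None:
    """Merge approval: only facts leadership has validated get appended,
    dated and attributed, so every future check can see them."""
    with FACTS.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()} ({approved_by}): {fact}\n")
```

Appending rather than overwriting matters: the file is a log of everything the organization has verified, and each entry keeps its date and approver.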
Every organization already has this knowledge. It just lives in people’s heads instead of in a system.
The PM who built their analysis on bot-inflated traffic wasn’t wrong. They just didn’t have access to the context that would have told them to exclude those weeks. That context existed. It was verified. It just wasn’t written down anywhere a person or an AI could find it.
How I do it
I still make mistakes. I ship reports with errors in them despite being extremely careful and checking things in detail before they go out. That hasn’t changed.
What has changed is the system I’ve built around my AI. Every time I catch an inaccuracy, I write the correction back into the context it loads before running any analysis. The same query that gave me a wrong output last week gives me the correct one this week, automatically.
In practice, this looks mundane:
Output format: “When I say ‘report,’ produce an HTML file. Nothing else.”
Traceability: “Always include a methodology section with sources.”
Data anomalies: “Before any 2025 analysis, read timeline_2025.md” — a simple chronological list of events that affected the data.
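Loading that context is deliberately boring. A minimal sketch, assuming the instructions live in small markdown files next to the analysis (timeline_2025.md is the file mentioned above; the other names are placeholders):

```python
from pathlib import Path

# timeline_2025.md is from the list above; the other names are placeholders.
CONTEXT_FILES = ["output_format.md", "traceability.md", "timeline_2025.md"]

def preamble(base: str = ".") -> str:
    """Concatenate every context file that exists, in order, so the
    model sees the same corrections before every single analysis."""
    parts = []
    for name in CONTEXT_FILES:
        path = Path(base, name)
        if path.exists():
            parts.append(f"## {name}\n{path.read_text(encoding='utf-8').strip()}")
    return "\n\n".join(parts)
```

Missing files are simply skipped, so the list can grow one correction at a time without breaking anything that already works.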
It’s Sisyphean at first. Every correction feels like one more rock up the hill. But unlike Sisyphus, this one stays at the top. Each fix makes every future workflow more efficient and, more importantly, more accurate.
---
The hard question isn’t whether this works. It does. The hard question is the system around it: who contributes, who approves, how do you incentivize people to do this consistently? That’s a different piece for a different day.
For now, the minimum viable version is this: when you catch something wrong in a report and you correct it, write down the correction. One line. Plain text. Make it findable.
"You don't need a better model. You need to write down what you already know. And then make it findable."