Thinking in Bets
How to embrace uncertainty, minimise the cost of being wrong, and turn every initiative into a calculated experiment.
TL;DR
- Product management is not about making perfect decisions. It's about making high-quality, calculated bets.
- Frame every initiative as a hypothesis. If you can't state what you'll measure, you're guessing.
- AI has collapsed the cost of bets. You can now test more hypotheses, faster, at lower cost. Use that.
You will be wrong. Regularly. The skill isn't avoiding mistakes. It's minimising the cost of being wrong and maximising the learning from every outcome.
This is the engine of learning. It transforms uncertainty from a threat into a strategic advantage.
The four-step betting framework
1. Frame every initiative as a hypothesis
Before you build anything, articulate a clear hypothesis that links a proposed solution to a specific, measurable outcome:
"We believe that [this action] for [this user] will result in [this outcome]. We will know this is true when [this metric changes]."
This forces critical thinking about assumptions and what success actually looks like. If you can't fill in those blanks, you're not ready to build.
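The template above can be captured as a simple record, so every bet is stated in the same checkable shape. A minimal sketch, with illustrative field names and an invented example bet:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    action: str    # what we will build or change
    user: str      # who it is for
    outcome: str   # the behaviour we expect to see
    metric: str    # what we will measure
    target: float  # the threshold that counts as "true"

    def statement(self) -> str:
        return (f"We believe that {self.action} for {self.user} "
                f"will result in {self.outcome}. We will know this is "
                f"true when {self.metric} reaches {self.target}.")

# Hypothetical example bet
bet = Hypothesis(
    action="one-click reorder",
    user="returning shoppers",
    outcome="more repeat purchases",
    metric="30-day repeat-purchase rate",
    target=0.18,
)
print(bet.statement())
```

If any field is hard to fill in, that gap is the signal: the bet isn't ready to be placed.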
AI introduces new categories of bets worth framing explicitly:
- Model selection bets: "We believe Claude Opus 4.6 will outperform GPT-5.4 on this extraction task at 40% lower inference cost."
- Prompt strategy bets: "We believe a chain-of-thought prompt with examples will reduce hallucination rates from 12% to under 3%."
- Architecture bets: "We believe a multi-agent pipeline will produce higher-quality outputs than a single model call, and the latency trade-off is acceptable for this use case."
These are real product decisions with measurable outcomes. Treat them the same way you'd treat any feature hypothesis.
2. Test assumptions with minimal effort
Use a hierarchy of evidence to validate hypotheses, starting with the riskiest assumptions first. Test them with the least effort possible:
- AI-coded prototypes (vibe-code a working version in hours, not weeks)
- Eval suites (run structured evaluations against test datasets before shipping AI features)
- Landing page tests (gauge interest before building)
- Concierge MVPs (manually perform the service before automating)
- AI-simulated user behaviour (generate synthetic test scenarios to stress-test assumptions)
AI tools have compressed the time and cost of every testing method on this list. A prototype that took two sprints now takes an afternoon. An eval suite that required a data science team can be built by a PM with the right tooling. This doesn't change the principle (test before you commit). It changes the economics radically. The excuse "we didn't have time to validate" no longer holds.
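To make the eval-suite idea concrete: a minimal harness needs only labelled cases and a pass rate. The `extract_total` function and its cases below are hypothetical stand-ins; a real suite would call your model where the stand-in runs:

```python
import re

# Labelled test cases: (input, expected output). Illustrative only.
cases = [
    ("Invoice total: $120.50", "120.50"),
    ("Amount due - 99.00 USD", "99.00"),
    ("Total: $1,042.00", "1042.00"),
]

def extract_total(text: str) -> str:
    # Stand-in for a model call; a real eval would invoke your LLM here.
    m = re.search(r"\d[\d,]*\.\d{2}", text)
    return m.group(0).replace(",", "") if m else ""

def run_eval(fn, cases) -> float:
    # Fraction of cases where the candidate matches the expected output.
    passed = sum(fn(text) == expected for text, expected in cases)
    return passed / len(cases)

print(f"pass rate: {run_eval(extract_total, cases):.0%}")
```

Run the same cases against each candidate model or prompt, and the bet from step 1 resolves on data rather than opinion.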
3. Instrument everything to measure impact
If a feature ships without a defined mechanism for measuring its impact, it is not "done." Instrument products from the outset to capture the data needed to prove or disprove hypotheses. Use the data to understand what changed, then go back to customers to understand why.
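A sketch of what "instrumented from the outset" means in practice, assuming a generic analytics-style `track` call (the event names are invented; real clients such as Segment or PostHog expose a similar interface):

```python
import json
import time

def track(event: str, **properties) -> dict:
    # Stand-in for an analytics client: emit one structured record per event.
    record = {"event": event, "ts": time.time(), **properties}
    print(json.dumps(record))
    return record

# Instrument the decision points your hypothesis names, not everything:
track("reorder_clicked", user_id="u_123", surface="order_history")
track("reorder_completed", user_id="u_123", basket_value=42.50)
```

The discipline is in the event choice: each tracked event should map back to a metric named in a hypothesis, so the data can prove or disprove the bet.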
4. Conduct post-launch reviews
After a feature has been in the market long enough to generate meaningful data, review it. This is not a celebration. It's a critical analysis of your initial hypothesis. Review the data, compare results to predicted outcomes, and document key learnings.
The best product teams treat failures as valuable data. A hypothesis that's proven wrong is a success if it generates learning that improves the next bet.
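The review step can be reduced to a comparison of predicted versus observed outcomes for each bet. A sketch, with invented bet names and numbers:

```python
def verdict(predicted: float, observed: float) -> str:
    # A bet is confirmed if the observed metric met or beat the prediction.
    return "confirmed" if observed >= predicted else "disproven"

# Hypothetical post-launch ledger: predicted vs observed metric values.
bets = [
    {"name": "one-click reorder", "predicted": 0.18, "observed": 0.21},
    {"name": "smart search", "predicted": 0.30, "observed": 0.12},
]

for bet in bets:
    print(f"{bet['name']}: predicted {bet['predicted']:.0%}, "
          f"observed {bet['observed']:.0%} -> "
          f"{verdict(bet['predicted'], bet['observed'])}")
```

The disproven row is as valuable as the confirmed one: it is recorded, not hidden, and it recalibrates the next prediction.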
The collapsed cost of bets
When inference is cheap and prototyping takes hours, bet size shrinks and cycle time accelerates. This changes the maths on experimentation.
Previously, testing a hypothesis might cost two engineers for six weeks. That's an expensive bet, so you'd better be fairly confident before placing it. Now, the same test might cost one person and an afternoon of AI-assisted building. You can afford to be less certain going in, because the downside of being wrong is small.
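The shift in economics can be made concrete with a simple expected-value sketch. All figures here are hypothetical, chosen only to illustrate the asymmetry:

```python
def expected_value(p_win: float, payoff: float, cost: float) -> float:
    # Classic bet EV: probability of winning times payoff, minus the stake.
    return p_win * payoff - cost

# The same long-shot hypothesis (20% confidence, illustrative $200k payoff):
ev_old = expected_value(p_win=0.2, payoff=200_000, cost=60_000)  # two engineers, six weeks
ev_new = expected_value(p_win=0.2, payoff=200_000, cost=1_000)   # one person, one afternoon

print(f"old economics: ${ev_old:,.0f}")  # negative: not worth placing
print(f"new economics: ${ev_new:,.0f}")  # positive: place it
```

At the old cost the bet is negative expected value and gets killed in prioritisation; at the new cost the identical hypothesis is worth testing. Nothing about the idea changed, only the stake.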
The practical effect: run more experiments per quarter. Kill losing bets faster. Double down on winners sooner. Teams that still agonise over whether to build a prototype are being outpaced by teams that build three prototypes and let the data decide.
Hallmarks of a disciplined bettor
| Behaviour | In practice |
|---|---|
| Data-driven | Points to specific metrics that prove success and can articulate what they'll do if the data isn't positive. |
| Disciplined | Actively uses the hypothesis framework and prioritises tasks that provide the most learning for the least effort. |
| Resilient | Not emotionally attached to solutions. Can pivot when a hypothesis is proven wrong. |
| Transparent | Communicates bets and their results (both successes and failures) as learning opportunities. |