Latent Demand Is Your AI Roadmap

TL;DR
- The strongest product signals come from watching users misuse your product to do something it wasn't designed for
- This principle applies doubly to AI products, where the model itself exhibits latent demand by trying to use tools in ways you didn't anticipate
- I built three products at Cotality by spotting latent demand in user behaviour long before any roadmap committee identified the opportunity
The best products aren't invented. They're extracted from behaviour that already exists.
Facebook Marketplace started because 40% of posts in Facebook Groups were people buying and selling things. Nobody at Facebook designed Groups for commerce. Users figured it out because the tool was close enough to what they needed. The gap between "close enough" and "purpose-built" was Marketplace, one of the most successful product launches in Facebook's history.
This is latent demand: users bending a product toward a use case it wasn't designed for, because the underlying need is strong enough to tolerate the friction.
For AI products, latent demand is the single most reliable signal for what to build next.
The pattern at Cotality
I saw this pattern three times across a portfolio of eight products, and each time it produced a product that outperformed anything generated by a planning process.
OnTheHouse and the give-to-get funnel. OnTheHouse was a free consumer property portal. We built it for brand awareness and SEO. What we observed was that consumers weren't just browsing. They were using the free property reports as ammunition for conversations with real estate agents. They'd walk into an agent's office with a printed OnTheHouse report and use our data to negotiate. The agents noticed. They started asking us: can we get the same data, but formatted for our use case?
That observation, consumers weaponising free data in professional contexts, was the latent demand signal for the give-to-get lead generation model. We didn't invent the insight. We watched it emerge from user behaviour and built the infrastructure to capture it. The result was a new product category connecting consumer tools to agent workflows. No roadmap committee would have surfaced that. It required watching what people actually did with the product, not what we designed it for.
Rita and SMS prospecting. When we acquired Rita in 2021, it was an AI-powered SMS prospecting tool for real estate agents. The original use case was outbound lead generation. What we observed post-acquisition was that agents were using the two-way SMS capability not just for prospecting but for ongoing relationship management. They were treating Rita as a CRM they could talk to. The messages weren't cold outreach. They were follow-ups, check-ins, appointment reminders.
The latent demand signal was clear: agents didn't want another prospecting tool. They wanted a system that maintained relationships at scale through conversational channels. That insight reshaped how we positioned and developed the product within the broader portfolio.
Plezzel and portfolio-as-funnel. When we acquired Plezzel in 2023, it was an advertising platform for real estate agents. Standalone, it was a digital ad product competing in a crowded market. What I observed was that the agents using Plezzel were also using OnTheHouse data, also getting leads from our consumer funnel, also interacting with Rita. They weren't using four separate products. They were operating across one ecosystem, even though we hadn't designed it that way.
That accidental overlap was latent demand for portfolio integration. The funnel wasn't consumer traffic → leads → prospecting → advertising because we planned it that way. It was because users were already connecting the dots themselves. We just needed to remove the friction.

How latent demand works in AI products
AI products generate a second layer of latent demand that traditional products don't. The model itself has latent demand.
When you give an LLM a set of tools, it doesn't just use them the way you intended. It discovers uses you didn't anticipate. This is the model's version of users misusing your product: the model is "misusing" tools to accomplish goals you didn't explicitly design for.
I saw this building AI features for OpenChair and OpenTradie. The AI receptionist for beauty salons was designed to handle booking calls. What I observed in production was the model using its available tools to handle enquiries that weren't bookings at all: pricing questions, service availability checks, even rudimentary customer complaints. It was routing around the limitations of its prompt by creatively combining tools I'd given it for other purposes.
That's latent demand from the model. The correct response isn't to constrain the model back into its box. It's to ask: what tool would make this emergent behaviour work properly?
The practical framework is:
- Give the model more tools than it strictly needs. Not every tool needs to serve the primary use case. Some tools exist to see what the model does with them. A model with access to your knowledge base, your calendar, your CRM, and your analytics will combine those tools in ways you didn't predict. Those combinations are product features waiting to be discovered.
- Instrument what the model tries to do, not just what it succeeds at. Most teams log successful tool calls. Few log the attempts that fail. A model that repeatedly tries to access a data source it doesn't have, or tries to perform an action it lacks permissions for, is telling you what tool to build next.
- Watch for off-label use by non-target users. When data scientists start using your coding agent for SQL analysis, that's latent demand. When marketers use your developer tool to generate landing pages, that's latent demand. The users who shouldn't be using your product but are anyway are the ones showing you where the market is.
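The second point is the easiest to operationalise. A minimal sketch of what it looks like in code, with hypothetical tool names and a made-up record shape (not any real agent SDK): wrap tool dispatch so every attempt gets logged, including calls to tools the model doesn't actually have.

```python
# Sketch: log every tool attempt, not just successes.
# Tool names and the record shape are illustrative assumptions.
TRACE_LOG = []  # in production this would be a log sink, not a list


def call_tool(name, registry, **kwargs):
    """Dispatch a model-requested tool call and record the attempt."""
    record = {"tool": name, "args": kwargs}
    if name not in registry:
        # The model asked for a tool we never gave it: a latent
        # demand signal, so log it before refusing.
        record["outcome"] = "unknown_tool"
        TRACE_LOG.append(record)
        return None
    try:
        result = registry[name](**kwargs)
        record["outcome"] = "ok"
        return result
    except Exception as exc:
        record["outcome"] = "error"
        record["error"] = repr(exc)
        return None
    finally:
        TRACE_LOG.append(record)
```

The design choice that matters is the `unknown_tool` branch: a refusal that never reaches a log is a signal thrown away.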
The anti-pattern: roadmap-driven AI products
The opposite of latent demand is roadmap-driven development, where a planning committee decides what to build based on competitive analysis, stakeholder requests, and strategic assumptions.
I've run roadmap processes for nine years. They're useful for incremental improvement. They're terrible for discovering new products. Every product I built from a roadmap was good. Every product I built from latent demand was great. The difference is that roadmap products solve problems leadership can articulate. Latent demand products solve problems users demonstrate through behaviour.
For AI products, roadmap-driven development is particularly dangerous because the capability surface changes every few months. You can't roadmap against a moving target. By the time you've planned, scoped, and resourced a feature based on today's model capabilities, the model has moved on and the feature either works better than expected or is already obsolete.
Latent demand works because it's empirical. You're not predicting what will matter. You're observing what already matters and building the infrastructure to support it.
How to find latent demand in your product
If you're a product leader looking for latent demand signals, here's where to look:
Support tickets that aren't complaints. When users contact support to ask "can your product do X?" and X isn't a feature, that's a signal. Track these requests separately from bug reports. Cluster them by theme. The themes that recur across unrelated users are latent demand.
Workarounds users build themselves. When users export data from your product into spreadsheets, build Zapier automations around your API, or use your tool in combination with three other tools to accomplish a workflow, they're demonstrating a need your product doesn't serve. Map those workarounds. The most common ones are your roadmap.
Usage patterns that surprise you. When a feature designed for weekly use gets used daily, or a tool designed for analysts gets adopted by sales teams, or an enterprise product starts showing up in small business contexts, something is happening that your product assumptions didn't predict. Don't normalise the surprise. Investigate it.
Model behaviour logs. For AI products specifically, examine the traces where the model went off-script. Where did it try to use tools in unexpected combinations? Where did it attempt actions it didn't have permission for? Where did it hallucinate capabilities it wished it had? Each of these is a signal about what the product should do that it currently doesn't.
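A sketch of that last signal source, assuming a simple trace format (a list of steps, each with a tool name and outcome) and a hypothetical designed workflow for a booking receptionist — none of this is a real SDK:

```python
# Sketch: flag traces where the model went off-script.
# DESIGNED_SEQUENCES and step/outcome fields are illustrative assumptions.
DESIGNED_SEQUENCES = {("check_availability", "create_booking")}


def off_script_events(traces):
    """Yield the three off-script signals: denied actions,
    hallucinated tools, and unexpected tool sequences."""
    for trace in traces:
        steps = trace["steps"]
        tools = tuple(s["tool"] for s in steps)
        denied = [s["tool"] for s in steps if s.get("outcome") == "permission_denied"]
        unknown = [s["tool"] for s in steps if s.get("outcome") == "unknown_tool"]
        if denied:
            yield {"signal": "permission_denied", "trace": trace["id"], "tools": denied}
        if unknown:
            yield {"signal": "hallucinated_tool", "trace": trace["id"], "tools": unknown}
        if not denied and not unknown and tools not in DESIGNED_SEQUENCES:
            yield {"signal": "unexpected_sequence", "trace": trace["id"], "tools": list(tools)}
```

Each yielded event is one row in the telemetry view: something the model tried to do that the product currently won't let it do.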
The best AI roadmap isn't a spreadsheet. It's a telemetry dashboard that shows you what your users and your model are trying to do that your product won't let them do yet.
Build that.
Frequently Asked Questions
How do you distinguish latent demand from noise?
Frequency and diversity. A single user doing something weird is noise. Ten unrelated users doing the same weird thing is latent demand. The signal strengthens when the users come from different segments, geographies, or use cases, because it means the underlying need is structural, not idiosyncratic.
Does latent demand replace strategic planning?
No. Strategic planning sets direction and constraints. Latent demand identifies opportunities within those constraints. You still need a thesis about who you serve and what problems matter. Latent demand tells you which specific products to build within that thesis. The two are complementary, not competing.
How do you instrument model behaviour to find latent demand?
Start with trace logging. Record every tool call the model makes, including failed attempts. Build dashboards that surface the most common failure patterns, unexpected tool combinations, and tool sequences that differ from your designed workflows. Review these weekly. The patterns that recur across users are your next features.
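The weekly review step can be sketched as a rollup that ranks patterns by both frequency and user diversity, matching the noise filter described above. The record fields (`signal`, `tool`, `user_id`) are assumptions about your trace schema, not a standard:

```python
# Sketch: roll up off-script trace records into recurring patterns,
# keeping only those seen across several distinct users.
from collections import defaultdict


def recurring_patterns(records, min_users=3):
    """Group records by (signal, tool); rank by user diversity, then count."""
    buckets = defaultdict(lambda: {"count": 0, "users": set()})
    for r in records:
        key = (r["signal"], r["tool"])
        buckets[key]["count"] += 1
        buckets[key]["users"].add(r["user_id"])
    ranked = [
        {"signal": s, "tool": t, "count": b["count"], "users": len(b["users"])}
        for (s, t), b in buckets.items()
        if len(b["users"]) >= min_users  # one weird user is noise
    ]
    return sorted(ranked, key=lambda p: (p["users"], p["count"]), reverse=True)
```

Sorting by distinct users before raw count encodes the frequency-and-diversity test: ten attempts from one user ranks below three attempts from three users.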
Logan Lincoln
Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms.