TL;DR
- Established the AI Governance Working Group at Cotality, defining responsible AI guardrails for a regulated data environment
- Delivered the company's first commercial GenAI integration (Claude Sonnet), including unit economics and investment case
- Scaled from zero to 10 production AI features serving clients including CBA, NAB and ANZ
The Problem
Cotality (formerly CoreLogic) is one of Australia's leading property data and analytics companies. Our clients included financial institutions holding an Australian Financial Services Licence (AFSL): banks, mortgage brokers and insurers. Introducing AI into this environment required more than technical capability. It required governance frameworks that could satisfy regulatory scrutiny while still enabling product teams to ship.
The risk wasn't just technical. A hallucinated property valuation shown to a bank that uses it for lending decisions is a regulatory incident. The governance needed to be real.
The Approach
I treated AI governance as a product discipline rather than a compliance exercise, an argument I make in detail in the public case for risk-tiered AI governance. The product leader owns the user experience, the unit economics and the deployment risk. Legal and compliance are critical partners, but they can't own the framework, because they don't own the decisions about what to build, how to build it, or how to price it. The comprehensive AI governance framework captures the full model that emerged from this work.
Key Decisions
- Product-led governance working group. I established a cross-functional AI Governance Working Group with representation from product, engineering, legal, compliance and data science. Product chaired the group because product owns the accountability.
- Risk-tiered feature classification. Not all AI features carry the same risk. An AI-generated property description has different risk characteristics from an AI-generated valuation estimate. We classified features into risk tiers with corresponding governance requirements.
- Unit economics before build. Every AI feature required an investment case: inference cost per request, expected usage volume, pricing impact, margin analysis. This prevented the "add AI to everything" trap.
- Claude Sonnet as first commercial integration. Selected for its quality-cost-latency trade-offs on our specific use cases. We built the commercial wrapper: metering, billing, usage tracking, quality monitoring.
- Incremental rollout with enterprise partners. Co-designed AI features with Tier 1 clients (CBA, NAB, ANZ) rather than shipping and hoping. Enterprise clients want to be involved in AI governance decisions that affect their compliance posture.
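The risk-tiered classification can be sketched in a few lines. Everything here is illustrative: the tier names, the example features and the per-tier requirements are hypothetical stand-ins for the internal framework, not its actual contents.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers; the real tier definitions were internal."""
    LOW = "low"        # e.g. AI-generated property descriptions
    MEDIUM = "medium"  # e.g. summarisation of market commentary
    HIGH = "high"      # e.g. AI-assisted valuation estimates

# Hypothetical governance requirements per tier. In practice these were
# review gates owned by the working group, not a static checklist.
GOVERNANCE_REQUIREMENTS = {
    RiskTier.LOW: ["prompt review", "output disclaimer"],
    RiskTier.MEDIUM: ["prompt review", "output disclaimer",
                      "sampled human QA"],
    RiskTier.HIGH: ["prompt review", "human-in-the-loop sign-off",
                    "client notification", "regulatory impact assessment"],
}

def requirements_for(tier: RiskTier) -> list[str]:
    """Look up the governance requirements attached to a risk tier."""
    return GOVERNANCE_REQUIREMENTS[tier]
```

The point of the structure is that the requirements scale with the tier: a high-risk feature cannot ship on the low-risk checklist.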
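The "unit economics before build" gate reduces to simple arithmetic. A minimal sketch, with placeholder token prices, volumes and feature pricing (not actual Claude rates or Cotality figures):

```python
def investment_case(input_tokens: int, output_tokens: int,
                    price_in_per_mtok: float, price_out_per_mtok: float,
                    monthly_requests: int, price_per_request: float) -> dict:
    """Back-of-envelope investment case for a single AI feature.

    Token prices are per million tokens; all inputs are illustrative.
    """
    cost_per_request = ((input_tokens / 1e6) * price_in_per_mtok
                        + (output_tokens / 1e6) * price_out_per_mtok)
    monthly_cost = cost_per_request * monthly_requests
    monthly_revenue = price_per_request * monthly_requests
    gross_margin = (monthly_revenue - monthly_cost) / monthly_revenue
    return {"cost_per_request": cost_per_request,
            "monthly_cost": monthly_cost,
            "monthly_revenue": monthly_revenue,
            "gross_margin": gross_margin}
```

For example, a feature averaging 2,000 input and 500 output tokens at hypothetical prices of $3/$15 per million tokens costs about $0.0135 per request; at 100,000 requests a month priced at $0.05 each, the gross margin is 73%. Features that couldn't clear a margin threshold didn't get built.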
Results
- AI Governance Working Group established and operational
- First commercial GenAI integration delivered (Claude Sonnet)
- Zero to 10 production AI features in regulated environment
- Investment case framework adopted across product organisation
- No AI-related regulatory incidents during my tenure
What I'd Do Differently
- Push for dedicated AI engineering capacity earlier (initial features were built by generalist engineers)
- Establish automated eval pipelines from the first AI feature, not the fifth
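On the second point, an automated eval pipeline doesn't need to be elaborate to be useful: a golden set of inputs with must-contain and must-not-contain checks, run against the model on every change. The cases, checks and the `generate` function below are all hypothetical:

```python
# Illustrative golden set for an AI property-description feature.
GOLDEN_SET = [
    {"input": "3-bed house, Parramatta NSW",
     "must_contain": ["Parramatta"],
     "must_not_contain": ["$"]},  # descriptions must never quote a price
]

def run_evals(generate, golden_set=GOLDEN_SET) -> float:
    """Return the pass rate of `generate` (a model-call function,
    prompt -> text) over the golden set."""
    passed = 0
    for case in golden_set:
        output = generate(case["input"])
        ok = (all(s in output for s in case["must_contain"])
              and not any(s in output for s in case["must_not_contain"]))
        passed += ok
    return passed / len(golden_set)
```

Wiring a check like this into CI from feature one means prompt or model changes that regress a known-good behaviour fail loudly instead of reaching a client.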
Tech Stack
This was enterprise product leadership, not a solo build. The relevant "stack" was organisational:
- Governance: Cross-functional working group, risk classification framework, investment case templates
- Integration: Claude Sonnet via API, usage metering, quality monitoring dashboards
- Partners: Tier 1 enterprise clients (CBA, NAB, ANZ) as co-design partners
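The usage-metering layer above can be sketched as a per-client token ledger that billing reads from. The token prices and client identifier are placeholders, not actual rates or clients:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class UsageMeter:
    """Per-client usage ledger: a sketch of the metering layer only,
    not the production billing system. Prices are hypothetical,
    expressed per million tokens."""
    price_in_per_mtok: float = 3.0
    price_out_per_mtok: float = 15.0
    ledger: dict = field(
        default_factory=lambda: defaultdict(lambda: {"in": 0, "out": 0}))

    def record(self, client_id: str,
               input_tokens: int, output_tokens: int) -> None:
        """Accumulate token usage for one request against a client."""
        self.ledger[client_id]["in"] += input_tokens
        self.ledger[client_id]["out"] += output_tokens

    def bill(self, client_id: str) -> float:
        """Cost-to-serve for a client at the configured token prices."""
        usage = self.ledger[client_id]
        return ((usage["in"] / 1e6) * self.price_in_per_mtok
                + (usage["out"] / 1e6) * self.price_out_per_mtok)
```

Recording usage per request is what makes both the billing and the investment-case margin analysis auditable after the fact.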