AI Product Building · AI SaaS · Solo Operator · Multi-Model Orchestration · AI Voice Agent · React Native

OpenChair: Building a Production AI SaaS Platform as a Solo Operator

End-to-end case study of building a multi-tenant SaaS platform with 50+ AI features for beauty and wellness venues, as a single operator using AI at every layer of the stack.

TL;DR

  • Built a production multi-tenant SaaS platform with 50+ AI features, native mobile apps, and Stripe billing as a solo operator
  • Designed multi-model orchestration across six LLMs via OpenRouter, with prompt caching achieving a 90% cost reduction
  • Shipped AI voice receptionist, vision-based recommendations, AI growth coach, smart scheduling and business intelligence narratives to production

The Problem

Beauty and wellness venues run on fragmented tooling: separate systems for bookings, payments, marketing, staff management and client communication. Venue owners cobble together five to eight different SaaS products, none of which talk to each other, and none of which use AI beyond basic automation.

The hypothesis: a single, AI-native platform could replace this entire stack, and a single AI-augmented operator could build it.

The Approach

I treated this as a deliberate validation exercise, grounded in my identity as a product builder. Could one person, using AI tools at every layer (code generation, testing, design, copy, ops), match the output velocity of a full-stack product team?

The answer required building a real product, not a prototype. Production infrastructure, real users, App Store deployment, Stripe billing, the works. I wrote the full build narrative covering how this came together end-to-end.

Key Decisions

  1. Multi-model orchestration over single-model dependence. Different AI tasks have different cost-quality tradeoffs. I route across six LLMs (including Claude Sonnet, Claude Haiku, GPT-4o Mini and Gemini) via OpenRouter, matching model capability to task complexity. Prompt caching via OpenRouter reduced inference costs by 90%.
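To make the routing idea concrete, here is a minimal sketch of tier-based model selection with a cacheable system prompt. The task tiers, model slugs and `buildRequest` helper are illustrative assumptions, not the production routing table; OpenRouter accepts Anthropic-style `cache_control` breakpoints on content parts, which is what makes repeated system prompts cheap.

```typescript
// Hypothetical task tiers and model slugs -- illustrative, not the production routing table.
type TaskTier = "fast" | "standard" | "complex";

const MODEL_BY_TIER: Record<TaskTier, string> = {
  fast: "anthropic/claude-3.5-haiku",     // cheap classification, short replies
  standard: "openai/gpt-4o-mini",         // mid-complexity drafting
  complex: "anthropic/claude-3.5-sonnet", // reasoning-heavy BI narratives
};

// Build an OpenRouter chat-completions body, marking the shared system
// prompt as cacheable so repeated calls can hit the prompt cache.
function buildRequest(tier: TaskTier, systemPrompt: string, userMessage: string) {
  return {
    model: MODEL_BY_TIER[tier],
    messages: [
      {
        role: "system",
        content: [
          {
            type: "text",
            text: systemPrompt,
            cache_control: { type: "ephemeral" }, // prompt-caching breakpoint
          },
        ],
      },
      { role: "user", content: userMessage },
    ],
  };
}

const req = buildRequest("complex", "You are a salon business analyst.", "Summarise this week's bookings.");
console.log(req.model); // → "anthropic/claude-3.5-sonnet"
```

The routing function is pure, so the cost-quality policy can be unit-tested and adjusted without touching any call sites.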

  2. tRPC over REST. End-to-end type safety across 63 routers eliminated an entire class of integration bugs. As a solo operator, there's nobody else to catch mismatched contracts in code review.

  3. Supabase with aggressive RLS. 68+ row-level security policies enforce multi-tenant isolation at the database layer. This is non-negotiable for a platform handling business data across venues.

  4. AI voice receptionist in production. Built on Retell AI and Twilio, handling real phone calls for venue bookings. This required solving latency, conversation flow design, and graceful fallback to human handoff.
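Graceful human handoff is ultimately a small state machine over the conversation. The sketch below is a hypothetical fallback policy (thresholds and field names are assumptions, not the production Retell AI integration): hand off immediately when the caller asks for a human, or after consecutive misunderstood turns.

```typescript
// Hypothetical fallback policy for the voice receptionist -- a sketch of the
// guardrail logic needed, not the production Retell AI integration.
type TurnResult = { intentRecognised: boolean; userAskedForHuman: boolean };

interface CallState {
  failedTurns: number; // consecutive turns where intent recognition failed
  handedOff: boolean;
}

const MAX_FAILED_TURNS = 2; // assumption: hand off after two misunderstandings

function nextState(state: CallState, turn: TurnResult): CallState {
  if (state.handedOff) return state;
  if (turn.userAskedForHuman) return { ...state, handedOff: true };
  const failedTurns = turn.intentRecognised ? 0 : state.failedTurns + 1;
  return { failedTurns, handedOff: failedTurns >= MAX_FAILED_TURNS };
}

let state: CallState = { failedTurns: 0, handedOff: false };
state = nextState(state, { intentRecognised: false, userAskedForHuman: false });
state = nextState(state, { intentRecognised: false, userAskedForHuman: false });
console.log(state.handedOff); // → true
```

Keeping the policy as a pure function makes the handoff behaviour testable offline, independent of telephony latency.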

  5. React Native (Expo) for native mobile. Shipped to both App Store and Play Store from a shared codebase, with real-time booking sync and offline-capable architecture.

Results

  • 50+ AI features in production
  • 63 tRPC routers, 180+ database migrations
  • 130+ screens across web and native mobile
  • 12+ third-party integrations (Stripe, Twilio, Xero, Google Calendar, Microsoft Graph)
  • Deployed to App Store, Play Store and Vercel
  • Solo-built replacement for 8 specialised roles

What I'd Do Differently

  • Start with fewer AI features and deeper integration testing on the core booking flow before expanding
  • Invest in automated eval pipelines earlier (added Langfuse mid-build, should have been day one)
  • Be more disciplined about mobile-first design from the start rather than retrofitting responsive patterns

Tech Stack

Frontend: Next.js 16, React 19, Tailwind CSS 4, Tamagui, NativeWind v4

Mobile: React Native 0.83, Expo SDK 55, custom Stripe Terminal modules

Backend: tRPC 11, Drizzle ORM, PostgreSQL (Supabase)

AI: OpenRouter (Claude Sonnet, Claude Haiku, GPT-4o Mini, Gemini), Vercel AI SDK, Retell AI

Infrastructure: Supabase (Auth, Realtime, RLS, pg_cron), Stripe (Billing, Connect, Terminal), Vercel, GitHub Actions

Integrations: Twilio, Resend, Google Calendar, Microsoft Graph, Xero, Firecrawl, PostHog, Langfuse