Scaling an AI Supernova: Lessons from Anthropic, Cursor, and fal
SaaS frameworks are collapsing under AI’s speed and scale. Here’s how founders are rewriting the rules of growth, pricing, and go-to-market.
The SaaS formula of the 2010s was elegant in its simplicity: startups with an efficient sales motion generated recurring revenue at predictable margins. CEOs got inventive with business models, pricing, and multiple product acts, and software markets matured into predictability. Today, founders are building in the dynamic Wild West of AI, where products from Anthropic, Cursor, and fal are not just an evolution but a step function in technology, delivery, and business model. The old SaaS playbooks are becoming obsolete as these startups write new ones.
At SaaStr Annual this year, Partner Talia Goldberg hosted a discussion with three leaders at the forefront of AI: Kelly Loftus, GTM leader at Anthropic; Jacob Jackson, Machine Learning leader at Cursor and founder of Supermaven; and Gorkem Yurtseven, co-founder of fal. Together, they explored how the formulas of SaaS are being rewritten for AI.
“No one in AI really has 80–90% gross margins,” said fal’s co-founder Gorkem Yurtseven. “The cost to serve each customer is real. Everyone has less margin—but they’re growing like crazy.”
The AI economy is rewriting startup physics. Compute replaces code as the unit of cost. Margins compress, velocity accelerates, and growth outpaces every known benchmark. Here’s what early-stage founders need to understand to build enduring AI companies in this new landscape.
Takeaways for founders in < 240 characters
- Orient towards new AI metrics: Traditional SaaS benchmarks don’t apply. Track compute efficiency, usage, retention, and growth velocity—not Rule of 40—to measure real AI business health.
- COGS = CAC: In AI, compute is the new customer-acquisition cost. Keep margins tight by building self-selling products that grow through usage and community.
- Price for outcomes: Move from flat subscriptions to usage- and outcome-based pricing that ties revenue directly to delivered results and productivity gains.
- Build an adaptive GTM: Replace rigid sales quotas with shadow targets. Build smaller, technical GTM teams focused on learning loops, automation, and customer feedback.
- Embrace new North Stars: Success equals use, love, and leverage. Track engagement, wallet share, and team adoption—because customer devotion drives durable growth.
Eight realities when scaling AI startups
1. The old SaaS metrics don’t fit perfectly in the new AI economy.
In traditional SaaS, every new user was almost pure profit. But for AI-native companies, each additional customer consumes GPU cycles, electricity, and model inference time.
These non-zero marginal costs compress gross margins, even as revenue growth skyrockets. Some AI companies are scaling from zero to $50 million ARR in record time—but with margin profiles closer to 40–50%, not 80–90%.
“Advancements in software are happening faster than advancements in hardware,” said Gorkem. “It’s getting more expensive to run the best models—even as we scale.”
For founders, this means redefining what “healthy growth” looks like. Focus less on legacy benchmarks like Rule of 40 or gross margin expansion, and more on unit economics that balance growth with compute efficiency.
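To see why compute efficiency belongs on the dashboard next to growth, here is a back-of-the-envelope sketch in Python. Every figure in it (seat price, tokens consumed, blended inference cost) is a hypothetical placeholder for illustration, not a benchmark shared by the panel.

```python
# Back-of-the-envelope unit economics for an AI-native product.
# All numbers below are hypothetical placeholders for illustration only.

monthly_price_per_user = 40.00            # subscription revenue per user
tokens_per_user_per_month = 15_000_000    # model tokens a typical user consumes
inference_cost_per_million_tokens = 1.20  # blended GPU/inference cost

cogs_per_user = (tokens_per_user_per_month / 1_000_000) * inference_cost_per_million_tokens
gross_margin = (monthly_price_per_user - cogs_per_user) / monthly_price_per_user

print(f"COGS per user: ${cogs_per_user:.2f}/month")
print(f"Gross margin:  {gross_margin:.0%}")
# With these placeholder numbers, margin lands around 55%: far from the
# 80-90% of classic SaaS, which is why compute efficiency per user
# becomes a first-class metric alongside growth velocity.
```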
2. COGS is the new CAC.
Talia summed it up succinctly: “COGS (cost of goods sold) is the new CAC.” In SaaS, the constraint was customer acquisition cost.
In AI, the constraint is compute cost. Founders can’t afford both high COGS and high CAC, so the best products are designed to sell themselves—driving adoption through product quality, virality, and community.
This new dynamic of “high cost to serve, low cost to acquire” rewards teams that build sticky, habit-forming products and lean on organic distribution instead of outbound motions.
3. Design pricing for outcomes.
AI isn’t just changing what we build—it’s changing how we monetize it. Companies like Cursor and fal are moving away from traditional seat-based SaaS models. Flat monthly subscriptions are giving way to usage-based and outcome-based frameworks, where revenue scales directly with results delivered.
“Running the same model might get cheaper,” said Gorkem of fal, “but everyone wants the best model—and those are more expensive to run.”
Unlike software-as-a-service, where each new user costs almost nothing to serve, AI applications carry a real marginal cost per inference. Every token, frame, or call draws on GPUs, electricity, and compute.
As Jacob of Cursor put it: “When you receive $10 from the customer, you can’t just spend 10 cents on AWS. GPUs are expensive, and they have a real footprint in electricity and heat.”
Founders must now design pricing systems that balance customer value with infrastructure economics—tying revenue to measurable productivity or outcomes.
Leading models include:
- Usage-based pricing: Customers pay for consumption—API calls, tokens, or tasks completed.
- Outcome-based pricing: Customers pay when the AI delivers a quantifiable business result, such as resolved tickets or generated content.
- Hybrid models: A base subscription provides predictability, while variable tiers capture upside as usage scales.
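To make the hybrid model concrete, here is a minimal invoice calculation in Python. The base fee, included allowance, and overage rate are hypothetical placeholders, not the actual pricing of any company on the panel.

```python
# Minimal sketch of a hybrid pricing model: a flat base subscription plus
# usage-based overage once the included allowance is exhausted.
# All rates and allowances are hypothetical placeholders.

BASE_FEE = 20.00         # predictable monthly subscription
INCLUDED_REQUESTS = 500  # requests covered by the base fee
OVERAGE_RATE = 0.04      # price per request beyond the allowance

def monthly_invoice(requests_used: int) -> float:
    """Return the total charge for one customer for one month."""
    overage = max(0, requests_used - INCLUDED_REQUESTS)
    return BASE_FEE + overage * OVERAGE_RATE

print(monthly_invoice(300))    # light user: 20.00 (base fee only)
print(monthly_invoice(2_000))  # heavy user: 20.00 + 1,500 * 0.04 = 80.00
```

The base fee keeps billing predictable for lighter users, while the overage tier lets revenue scale with the compute a heavy user actually consumes.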
“When I first sold developer tools for $49,” Jacob recalled, “I realized a 0.01% productivity boost made it worthwhile. Now, with AI accelerating developers two-fold, pricing should map to that impact.”
AI monetization is evolving from access to outcomes. Founders who tie revenue directly to measurable value—usage, productivity, or results—will build more durable and defensible businesses. Every model call now carries a real cost, and the companies that design their business models around this reality, rather than fight it, will define the economics of the AI era.
4. Shadow targets are the new quotas for AI GTM teams.
At Anthropic, sales operates without traditional quotas. “When I joined, we couldn’t predict adoption,” said Kelly Loftus, who leads startup sales. “So instead of quotas, we built around feedback and mission. We call them ‘shadow targets.’”
The unpredictability of model adoption makes long-term forecasting nearly impossible. Instead, AI companies are creating smaller, technical GTM teams that deeply understand the product and focus on learning loops over pipeline management.
Cursor’s salespeople, for example, also write code and automate parts of their own workflow using Cursor itself. “We use AI to qualify inbound leads,” said Jacob. “Many on our sales team are also building internal tools to help the org.”
The takeaway: GTM structures must be adaptive, not rigid. Hire builders and feedback gatherers, not just sellers.
5. AI is operationalized within and across the organization.
AI-native companies operationalize AI across every facet of the organization from day one. Each panelist shared examples of how they use AI internally to scale their own teams:
- Anthropic built a Claude-powered Slack assistant that searches internal knowledge bases to answer employee questions, improving onboarding and time-to-insight (see the sketch after this list).
- Cursor uses asynchronous “background agents” that complete tasks developers can audit, correct, and iterate on—improving precision and control.
- fal hires top researchers through an open compute grants program, where candidates propose and run experiments on the platform before joining full-time.
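The first bullet above describes a common pattern: ground a model on internal documents and let employees ask questions in plain language. Below is a minimal sketch of that pattern using the Anthropic Python SDK; it is illustrative only, not Anthropic’s internal implementation, and the document list, question, and model alias are placeholders. A real deployment would add retrieval over the knowledge base and a Slack integration.

```python
# Minimal knowledge-base Q&A helper: pass internal doc snippets to Claude
# and ask it to answer only from that context. Illustrative sketch only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def answer_from_docs(question: str, docs: list[str]) -> str:
    context = "\n\n".join(docs)
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder alias; pin a specific model in production
        max_tokens=500,
        system="Answer employee questions using only the provided internal documents.",
        messages=[{"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"}],
    )
    return message.content[0].text


# Hypothetical snippet standing in for a real internal knowledge base.
docs = ["Expense reports are due by the 5th of each month via the finance portal."]
print(answer_from_docs("When are expense reports due?", docs))
```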
These tactics show how AI can transform workflows and culture, helping teams operate leaner, smarter, and faster.
6. Focus to dominate an emerging AI category.
fal’s breakthrough came when the team narrowed its focus: rather than serving inference for every model and modality, including LLMs, fal pivoted to own the “generative media inference” category. This decision sharpened positioning and accelerated adoption.
“Everyone assumed all AI models were one market,” said Gorkem. “We saw early that buyers for media models were different. That focus changed everything.”
In a market overflowing with generalist AI tools, focus is the ultimate differentiator. The winners will define clear categories and dominate them before expanding horizontally.
7. Growth will stem from collaboration, not competition.
Anthropic and Cursor represent a new kind of partnership in AI: co-opetition, a blend of “cooperation” and “competition.” Though Anthropic builds Claude Code and Cursor serves as an AI coding assistant, the two companies collaborate closely, sharing feedback and improving together.
“We work with partners like Cursor to push model capabilities forward,” Kelly said.
“Whenever the models get better, Cursor gets better,” Jacob added.
This dynamic shows how AI ecosystems thrive through mutual acceleration. Model providers and application developers are not zero-sum; they’re compounding allies.
8. The new North Stars focus on usage, customer love, and leverage.
In the SaaS era, investor decks measured success in ARR, gross margins, and net retention. But in the AI era, these metrics only tell part of the story.
The most forward-looking founders are tracking a new set of signals: ones that measure usage, customer love, and leverage, not just revenue.
- Usage and engagement over time: the clearest leading indicators of retention and expansion.
- Internal NPS: whether your own team genuinely loves using the product.
- Logo diversity & new logo acquisition: revenue distributed across 30+ accounts rather than concentrated in a few whales, with new logo acquisition still ramping.
- Wallet share: the percentage of a customer’s AI or media spend flowing through your platform.
As Jacob of Cursor put it: “Revenue lags users, and users lag product quality. Our top metric is whether we personally want to use it every day.”
That’s the signal of real product-market fit in AI: not just adoption but devotion, as AI solutions become not only tools but, often, coworkers. Customer love and product utility are the new North Stars that drive engagement, value, and ultimately revenue.
Finding new order in the chaos
For two decades, SaaS gave us predictability. The formulas worked. Growth was methodical, efficient, and compounding. But AI has reopened the frontier.
The leaders behind Anthropic, Cursor, and fal are scaling faster than any companies before them while building amidst uncertainty. But that’s where enduring companies are born. The old benchmarks—Rule of 40, gross margin targets, sales quotas—can’t yet guide this exploration. New laws will emerge from experimentation, feedback loops, and product love. As in every technological revolution, the pioneers who master the chaos will define the order.
At Bessemer, we’re partnering with AI builders and leaders and learning from our field notes on how these astonishing AI Supernovas and Shooting Stars are changing AI business models and building differently. Don’t miss out on the new playbooks, insights, and benchmarks — subscribe to Atlas.