Developer laws in the AI era
AI developer platforms are rewriting the rules of pricing, design, and defensibility in the age of agentic development. Here are eight (work in progress) laws for founders building for humans and agents.
Software is being rewritten for a world where AI agents and human developers are simultaneously end users and collaborators. Old patterns from the traditional SaaS era are being replaced by new realities in pricing, product design, and what it means to build defensible developer businesses. Twelve years after publishing our original developer laws and six years after publishing our v1 amendments, we’re attempting to redraft and contextualize our developer laws in the AI paradigm.
These rules are still a work in progress, but they are the v2 amendments for building developer platforms in a landscape shaped by AI agents, with direct insight from leaders at Anthropic, Cursor, Port, Fal AI, Fern, Render, Appwrite, Netlify, Recall, Vapi, Resolve AI, Graphite, Marimo, and Resend.
As enterprises navigate this next wave of AI-driven software innovation, developer platforms lead the charge for new-age infrastructure as they always have. The original "Eight laws of developer platforms" (2013) and their 2019 amendments traced the rise of DevOps, open source, cloud-native architectures, and API-first ecosystems. Now, in 2025, a new paradigm has emerged: agentic development, where AI agents collaborate with developers to design, build, deploy, and maintain software at scale.
The eight laws of AI dev platforms
Law #1: Agent Experience (AX) matters as much as Developer Experience (DX)
Today’s developer platforms require equal attention to both the Agent Experience (AX) and the Developer Experience (DX). In many ways, DX directly informs and augments AX. This includes comprehensiveness of documentation (discussed further in Law #2 – “Documentation must serve models as well as humans”), API surface area, and easily understandable schemas. All the investment made in OpenAPI specs, REST APIs, and SDKs in the last five to ten years makes it easier for both humans and agents to use the product.
“Instead of viewing DX as an antagonist to AX, we discovered that all the changes we made to DX are actually enhancing AX. For example, we invested a lot of time trying to make the onboarding flow as easy as possible for humans. Turns out, that made a huge difference in getting agents to use Resend, too."
– Zeno Rocha, CEO of Resend
However, there are also capabilities and structures that diverge for humans versus agents. While human developers can interpret ambiguous documentation and adapt to inconsistent APIs, agents need structured, predictable interfaces. This means OpenAPI schemas with comprehensive error handling, session persistence for multi-step workflows, and real-time feedback mechanisms like WebSocket streams. Netlify's deployment agents, for instance, must maintain state across entire CI/CD pipelines while providing immediate build feedback, requirements that traditional developer tools weren't designed to handle.
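To make "structured, predictable interfaces" concrete, a minimal OpenAPI fragment might declare machine-readable error responses explicitly, so an agent can branch on a schema rather than parse prose (the service, endpoint, and field names here are hypothetical, a sketch rather than any particular platform's API):

```yaml
openapi: 3.1.0
info:
  title: Example Deploy API   # hypothetical service
  version: 1.0.0
paths:
  /deployments/{id}:
    get:
      operationId: getDeployment   # stable, descriptive IDs help agents select tools
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: Deployment found
          content:
            application/json:
              schema: { $ref: "#/components/schemas/Deployment" }
        "404":
          description: Unknown deployment ID
          content:
            application/json:
              schema: { $ref: "#/components/schemas/Error" }
components:
  schemas:
    Deployment:
      type: object
      required: [id, status]
      properties:
        id: { type: string }
        status: { type: string, enum: [queued, building, live, failed] }
    Error:
      type: object
      required: [code, message]
      properties:
        code: { type: string }      # machine-readable error code, not just prose
        message: { type: string }
```

An enum of deployment states and a typed error object are trivial for a human to infer from docs, but for an agent they are the difference between a reliable tool call and a guess.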
Second, the emergence of protocols like Model Context Protocol (MCP) represents a fundamental shift in how developer tools serve their users. Many companies are now using solutions like FastMCP by Prefect to host their own MCP servers because they know their developers are working in Cursor and Claude Code. This means within their IDEs, developers can supercharge their agents to directly access platforms’ live data and perform actions on their behalf.
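For intuition, the tool-call pattern that MCP formalizes can be sketched in plain Python. This is a toy illustration, not the actual MCP or FastMCP API: a server advertises typed tools, and an agent discovers them and invokes one by name with JSON arguments.

```python
import json

# A toy tool registry: each tool advertises a name, description, and parameter
# schema, so an agent can discover what it may call and with which arguments.
TOOLS = {}

def tool(name, description, params):
    """Register a function as an agent-callable tool."""
    def decorator(fn):
        TOOLS[name] = {"description": description, "params": params, "fn": fn}
        return fn
    return decorator

@tool("get_build_status", "Return the status of a build", {"build_id": "string"})
def get_build_status(build_id):
    # In a real server this would query the platform's live data.
    return {"build_id": build_id, "status": "passing"}

def list_tools():
    """What an agent sees when it discovers the server's capabilities."""
    return [{"name": n, "description": t["description"], "params": t["params"]}
            for n, t in TOOLS.items()]

def call_tool(request_json):
    """Dispatch a JSON tool-call request to the registered function."""
    req = json.loads(request_json)
    fn = TOOLS[req["name"]]["fn"]
    return fn(**req["arguments"])

result = call_tool(json.dumps(
    {"name": "get_build_status", "arguments": {"build_id": "b-123"}}))
# result["status"] is "passing"
```

The real protocol adds transports, sessions, and capability negotiation on top, but the core exchange, discover tools, then call them with structured arguments, is exactly this shape.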
Lastly, we’ve started thinking about the role of dashboards in debugging, monitoring, and logging. Today, humans log in directly to dashboards as the central pane of glass for information gathering. Teams like Recall have started to make all dashboard functionality accessible via APIs so agents today can also contribute to issue resolution. Relatedly, there remain open questions around reducing or even eliminating context switching (version control, integrations, using APIs, pushing to prod) for agents. This trend is already happening as MCP servers enable agents to pull real-time information and execute commands without the developer context-switching to dashboards or CLIs.
Law #2: Documentation must serve models as well as humans
Documentation is often well-intentioned but poorly maintained within an engineering team, failing to capture real-time changes and reflecting outdated guidance. Developers accessing these docs are often frustrated, but they bring a natural tolerance for incompleteness and imperfection.
Conversely, for LLMs, converting complex HTML pages with navigation, ads, and JavaScript into LLM-friendly plain text is both difficult and imprecise. Instead, agents strongly benefit from concise, expert-level information gathered in a single, accessible location. This is particularly important for use cases like development environments, where LLMs need quick access to programming documentation and APIs. LLMs also require up-to-date, structured API references and audit logs that track both human and agent actions. These requirements force a fundamental rethink of information architecture.
This is also where Generative Engine Optimization (GEO) comes in. Just as SEO ensures discoverability for search engines, GEO ensures that models can quickly parse and surface accurate answers within documentation, keeping developers in flow rather than interrupting them with context-switching searches.
With the proliferation of coding agents, technical documentation becomes a dual-purpose product asset – companies need docs that serve both agent and human developer audiences effectively, with proper versioning, change management, and discoverability for agents while remaining useful for human developers.
“Developers want a polished docs site, while agents need clean markdown to parse. Teams are trending toward a docs-as-code approach, where documentation is first written in markdown and then published as developer-friendly websites and machine-readable files like llms.txt.”
– Co-founders of Fern
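As a concrete example of the machine-readable half of that docs-as-code approach, an llms.txt file is itself plain markdown: an H1 title, a short blockquote summary, and curated links to markdown versions of the docs. The product and URLs below are invented for illustration:

```markdown
# ExampleDB

> ExampleDB is a hosted Postgres platform. This file lists the docs an LLM
> should read first when helping a developer use ExampleDB.

## Docs

- [Quickstart](https://exampledb.dev/docs/quickstart.md): create a database and connect
- [API reference](https://exampledb.dev/docs/api.md): full REST endpoint reference

## Optional

- [Changelog](https://exampledb.dev/docs/changelog.md): recent breaking changes
```

The same markdown sources feed both audiences: a static-site generator renders the polished human docs, while llms.txt hands a model a short, prioritized reading list.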
Law #3: Pricing strategies remain focused on reducing friction to onboard
Pricing must account for both cost structure and value delivery. This is particularly salient for AI companies as the cost of serving the marginal user goes from zero in traditional SaaS to a meaningful line item for AI-native applications, driven by inference costs. To address this, we have observed a few pricing paths that companies servicing developers are currently experimenting with:
1. Usage-based pricing with massive intra-customer account expansion driven by the incredible utility of the product. All platforms are being re-integrated with AI and, as with every wave, developers are leading the charge and driving infrastructure and tooling spend. Usage and monetization grow with customers – we’ve observed this is currently the most common pricing pattern.
2. Seat-based pricing with usage overages. Enterprises prize predictability of spend, so vendors incorporate AI into the core, seat-based product experience rather than as an add-on, often paired with a usage-based overage fee.
3. Outcomes-based pricing or bundling activities into meaningful business processes and charging based on a completed workflow.
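A back-of-the-envelope sketch of the first two paths, with entirely hypothetical prices and allowances, shows how a free usage tier, seat fees, and overages combine into one invoice:

```python
def monthly_invoice(seats, requests, seat_price=20.0,
                    free_requests=100_000, per_request=0.0005):
    """Hypothetical hybrid pricing: seat-based core plus usage overage.

    A free request allowance keeps onboarding friction low; spend then
    scales with usage as the customer's agents and workloads expand.
    """
    seat_fee = seats * seat_price
    overage = max(0, requests - free_requests) * per_request
    return round(seat_fee + overage, 2)

# A small team inside the free allowance pays a predictable seat fee only.
starter = monthly_invoice(seats=3, requests=80_000)    # 60.0
# A scaled account adds a usage line item on top of the same seats.
scaled = monthly_invoice(seats=3, requests=1_100_000)  # 60.0 + 500.0 = 560.0
```

The interesting design question is where to set `free_requests`: generous enough that onboarding is frictionless, small enough that successful accounts expand into meaningful usage revenue.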
Early data also suggest that upsell triggers could differ between a traditional developer and a vibe-coder (to be explored further in Law #5 – “The definition of developer continues to widen dramatically”). In other words, the gating factors of building and shipping will meaningfully inform what software creators are willing to pay for (e.g., CI/CD functionality for vibe coders vs. traditional developers).
No matter which path companies choose, every platform is still most focused on reducing friction to onboard: a compelling free tier, great documentation, and a robust developer community are all ways to scalably reduce onboarding friction.
“We are not trying to force an old SaaS model onto a new kind of product. Value should map to outcomes…pricing makes sense when the agent is doing real engineering work and the paywall belongs where the system is delivering measurable value, actually taking responsibility for outcomes like reducing downtime, keeping systems stable, or accelerating delivery.”
– Spiros Xanthos, CEO of Resolve
Law #4: AI developer tooling spend is breaking out of traditional budgets
New categories of spend are emerging as enterprises create dedicated AI budgets, moving initially from the CIO through to all parts of the organization. Many companies are already making trade-offs between spending on AI tools versus hiring additional engineers, constantly asking whether they can accomplish goals with agents instead of adding headcount.
As we’ve observed across other vertical software companies selling into historically services-heavy industries, the delegation of workflows to coding agents, for example, is beginning to both supplement and supplant the junior engineer. The focus is also not just on productivity gains and cost savings but on the maximization of skills (i.e., giving individuals altogether new capabilities so they are less reliant on others to get things done).
Budget sources also reveal a more complex, multi-stakeholder purchasing environment. While developer-led GTM remains king in a noisy competitive landscape, within the enterprise, CIOs, engineering leaders, product teams, and individual developers all influence buying decisions differently than in the previous generation of developer tools because of the level of guardrails required to integrate a non-deterministic system.
Success metrics are shifting toward consumer-like expectations of immediate value and magical experiences. Traditional developer tool metrics around productivity are being supplemented by outcome-based measures: time from idea to working prototype, reduction in total development cycles, and business user productivity gains. Cursor’s analytics, for example, track granular metrics including number of suggestions shown, accepted suggestions, lines of code produced with AI assistance and even acceptance rates of AI-generated suggestions.
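Metrics like those can be derived from simple event counts. This sketch (the field names are hypothetical, not Cursor's actual telemetry schema) computes an acceptance rate and the share of lines written with AI assistance:

```python
def assist_metrics(shown, accepted, ai_lines, total_lines):
    """Summarize AI-assistance telemetry for a team over some period.

    shown/accepted: counts of AI suggestions displayed and accepted.
    ai_lines/total_lines: lines produced with AI assistance vs. overall.
    """
    return {
        "acceptance_rate": accepted / shown if shown else 0.0,
        "ai_line_share": ai_lines / total_lines if total_lines else 0.0,
    }

week = assist_metrics(shown=4_000, accepted=1_200,
                      ai_lines=9_000, total_lines=30_000)
# acceptance_rate = 0.3, ai_line_share = 0.3
```

The ratios matter more than the raw counts: a rising acceptance rate signals that suggestions are landing, while the AI line share tracks how much of the team's output is actually agent-produced.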
Law #5: The definition of developer continues to widen dramatically
AI is making software creation accessible to more people, fundamentally expanding who counts as a “developer” (a trend we were privileged to bear witness to starting nearly a decade ago with our seed investment in Zapier). The vast proliferation of vibe coding and AI-assisted development is creating new categories of builders who create custom software without writing or caring about code directly.
Platforms like Lovable, Bolt, Create, and v0 are already driving users to developer platforms that traditionally only service technical users. This cohort is also easily identifiable by the types of questions they ask: they don’t yet have the ability to troubleshoot, read error codes, or understand what it means to have a database server separate from a web server, a load balancer, etc. And because these users often get stuck in the stages between prototyping and production, we’ve found companies bucket this usage less as quality revenue and more as efficient marketing, though we suspect this will also shift over time as developers start to live at a higher level of abstraction. Further, we’ve also seen how non-technical team members are helping to free up precious developer time for coding and engineering tasks outside of the company’s primary product. For example, given the right tools, AEs can now create custom demos for technical products, marketers can create sample apps to share on X, and content marketers can write technical blog posts.
Above all, this shift fundamentally redefines valuable skills: domain expertise and customer communication now matter more than coding ability across all roles, while systems thinking becomes more critical as work evolves from low-level implementation to orchestration and strategy. Success hinges on individuals and teams understanding how complex pieces connect, knowing where to trust automation, and recognizing when human intervention is essential. Despite software being faster and easier to ship than ever, the changing definition of the developer reinstates the importance of the fundamentals of durable businesses.
“Today there are 17 million JavaScript developers; these are traditional developers. But we expect that number to reach 100 million in the next 10 years.”
– Mathias Biilmann Christensen, CEO of Netlify
Law #6: Stronger network effects incentivize early ecosystem positioning
Traditional developer companies cultivated network effects through a few mechanisms, particularly open source and community contributions, as well as integrations and plug-ins. Now, network effects are being redefined and reimagined with the proliferation of agentic development.
First, agent-to-agent network effects have begun emerging, where AI agents become more useful when they can communicate and compose with other agents. For example, a scheduling AI agent that can book meetings becomes more powerful once it can communicate with other people's travel agents, expense management agents, and calendar agents (which is made possible through protocols like MCP). Second, data network effects have become amplified by context: the more context an AI agent has, the more of your intended work it can complete. Products that possess that context become increasingly valuable. As an example, Linear's Product Intelligence can suggest task assignments, categorize issues, and streamline product operations because it has years of accumulated data on how thousands of engineering teams actually work.
However, network effects have weakened where integration lock-in effects traditionally created switching costs. As David Gu, CEO of Recall, noted, “It is now easier than ever to switch between different APIs, because instead of you as a human needing to manually write that integration code, you have AI agents help you.” MCP further reduces lock-in by enabling AI agents to discover and integrate tools automatically, and LLMs in general make it easier for anyone to research and synthesize options during their evaluation processes.
Finally, in ecosystems where AI is driving developer tooling recommendation decisions, the role of subjective feedback from humans presents a paradox. AI agents may ignore subjective preferences, like ease of use, and focus purely on objective metrics, like raw performance and latency. On the other hand, AI agents may lean more on subjective human feedback as they learn over time. This paradox means the highest-quality products benefit regardless – developer-led growth, product launches, documentation, educational content, conferences, community forums, and reviews all become even more crucial, and speed matters more than ever as first-mover advantages compound.
As we noted earlier though, these laws are a WIP and company leaders reveal diverging perspectives. For example, Nikhil Gupta, CTO of Vapi, believes, “AI weakens non-objective network effects and strengthens objective network effects. For example, people might find Stripe’s API easiest to use relative to others, but AI agents probably don't care about the ease of use when comparing the Stripe API vs the Adyen API. However, if Stripe is more reliable, all AI agents will pick that.” In a similar vein, Spiros Xanthos, CEO of Resolve, affirms, “An agent-first GTM is about proof, not hype. Show up in a customer’s environment, deliver outcomes that matter, and adoption grows naturally. That is the new evangelism.”
Law #7: Platform engineers are evolving into autonomous flow architects
The role of platform engineering is expanding from managing software to creating autonomous engineering flows. Platform engineers are becoming responsible for the user experience of all technical teams and their importance in the organization is increasingly reflected in the urgency with which they’re being hired.
The transformation affects multiple areas of responsibility. Platform engineers now need skills in designing agentic flows with clear human oversight stages, enforcing robust guardrails to manage risks from agents performing incorrect actions, and owning system and information architecture beyond just uptime and reliability. They're building what amounts to AI control centers for the most complex strategic decisions while letting agents handle routine operations.
As AI agents handle more of the actual code generation, software engineers are transitioning from craftspeople to product owners of their own systems. This fundamental shift means engineers increasingly care about outcomes rather than implementation details. This creates new workflow requirements: robust testing and monitoring become critical, documentation must explain system behavior rather than just code structure, and code review transforms from checking syntax to validating business logic and architectural decisions. The implications extend beyond individual productivity – teams need new processes for knowledge transfer, incident response becomes more challenging when no human fully understands the generated code, and technical debt accumulates differently when the original implementation logic isn't human-readable. Organizations must invest heavily in observability, automated testing, and architectural governance to maintain system reliability when their engineers become operators rather than authors of their own code.
As AI generates code at unprecedented speed, the primary bottleneck shifts from writing code to verifying its correctness. This fundamentally changes development velocity – teams can produce thousands of lines of code in minutes, but validating that it works as intended, integrates properly with existing systems, and meets security and performance requirements takes significantly longer. Companies that optimize for verification speed through better testing frameworks, real-time validation tools, and visual confirmation systems will have significant advantages in AI-assisted development cycles.
“The most significant ongoing shift in platform management is transitioning from managing infrastructure to optimizing the developer workflow. Engineering teams now see that building and maintaining a bespoke internal development and deployment platform is often undifferentiated work that drains resources from the core business. By leveraging managed platforms like Render to handle the underlying infrastructure, platform engineers can focus on higher-value automation.”
– Anurag Goel, CEO of Render
Law #8: Defensibility is about continuous evolution and platform control
At its core, being a platform is about creating extensible infrastructure for third parties to build alongside and on top of – enabling an ecosystem that grows more valuable as more users contribute and exhibit real community love.
While this concept remains constant from the SaaS era, the AI era has elevated certain pillars of defensibility. First, entry point control (e.g., GitHub owning code repositories or VS Code dominating text editing) grants platforms the strategic right to expand functionality on top of established user behaviors. Second, data advantages manifest through proprietary product datasets and company-specific context that enable features competitors cannot replicate.
The most fundamental shift, however, is that continuous evolution is now paramount. The best platforms actively coordinate multiple AI models, data sources, and workflows to take autonomous action. They tend to both possess unique data from their ecosystems and are also able to quickly utilize that data for real-time feedback loops from agentic and customer interactions.
On top of all this, speed is key, both in terms of shipping additional capabilities and establishing strategy. Companies must think through their Act 2 and Act 3 visions dramatically earlier than was necessary in the SaaS era and we are excited to see how this continues to evolve.
“It’s about being the first one to change how things are done, and from a product perspective, building something that will continuously evolve. A platform like a CRM, for example: someone manages it, controls it, has opinions for it, and iterates from its core building blocks.”
– Zohar Einy, CEO of Port
Open sourcing our research in building stellar B2D businesses
The playbook grows continually more complex for building stellar B2D businesses, and winning requires building products and companies that treat agents as first-class citizens, design for the full maturity curve in pricing, and invest in continuous adaptation and defensibility.
As our team continues to evolve our investment roadmaps in the tooling and platforms for R&D teams, these laws aim to help guide founders building on the edges of innovation. We want to know how these rules are helping you build — so if you have feedback for us or want to contribute to our ongoing qualitative research — don’t hesitate to reach out: Lindsey Li (lli@bvp.com) and Libbie Frost (lfrost@bvp.com).
Further recommended reading
- Roadmap: Developer tooling for Software 3.0
- How to activate the developer relations flywheel with the why, try, buy, fly method
- Scaling your engineering team from one to 50 and beyond
- Research to Runtime
- AI-powered R&D—vibecoding, taste, and the evolution of full-stack design
- Bessemer’s AI agent autonomy scale—a new way to understand use case maturity
- Seven product strategies to prevent churn for B2B AI app leaders
- What’s driving the Data Shift Right market?
- Eight laws for developer platforms (2017)
- New developer laws that are harder, better, faster, stronger (2019)