11.18.25

From tasks to systems: A practical playbook for operationalizing AI

How to automate real workflows and upskill your team with the CRAFT Cycle framework.

Task-level AI was just the warm-up. The real transformation happens when organizations move beyond using AI to complete isolated tasks and start using it to run entire workflows. That’s how small teams are now operating with the leverage and speed of organizations ten times their size.
 
“AI offers arbitrage across workflows and gives internal business users technical superpowers,” says Nick Scavone, CEO of Seam AI, whose 10-person team is scaling like a much larger org. As adoption deepens, employees evolve into managers of AI systems: directing, reviewing, and refining outputs instead of starting from scratch.
 
Most organizations are already tapping AI for productivity—synthesizing information, brainstorming ideas, generating code—but that’s still a small fraction of AI’s role at work. The next competitive frontier is workflow automation: using AI not just to accelerate work, but to fundamentally rewire how work gets done.
 
“A lot of us have gotten into the habit of using AI to take over the mundane work we’d rather not do,” says Rachel Woods, former Meta data scientist and founder of DiviUp. “But the real value comes when you automate entire processes—scaling what your team does best and unlocking ‘infinite time’ for the work only humans can do.”
 
When operationalized well, AI doesn’t just make teams faster—it makes them stronger. It enhances existing skills, unlocks new capabilities, and compounds innovation across every function.
 
This guide is your playbook for getting there. Drawing on the expertise of Rachel Woods, Nick Scavone, and the Operating Advisor Network at Bessemer, we introduce CRAFT Cycles, a step-by-step methodology for process automation developed by Rachel Woods. You’ll learn how to identify high-ROI use cases, design effective automations, assign the right roles, and build systems that improve over time—turning AI from a helpful tool into a true operating advantage. Plus, we share the best practices early-stage founders are implementing today.

Top takeaways for leaders on process automation 

  • Move from tasks to systems (workflow automation > task automation). The competitive edge comes from automating end-to-end processes, not just isolated tasks—so AI consistently executes, humans review and refine, and the org compounds speed and quality over time.
  • Adopt the CRAFT Cycle to operationalize AI reliably. Follow each stage: Clear Picture → Realistic Design → AI-ify → Feedback → Team rollout to turn documented workflows into stable automations with tight feedback loops, measurable outcomes, and ongoing iteration.
  • Start with “tiny but useful” automations, then scale via progressive delegation. Pick low-risk, high-clarity slices of a process first; get them working well and then expand scope. Avoid boiling the ocean or chasing the hardest use case first.
  • Integrate the right roles on a team: CAIO, AI operator, AI implementer. A Chief AI Officer sets vision, governance, and change management; an AI operator (often PM-like) owns discovery, design, adoption, and iteration; AI implementers build and integrate the solutions. 
  • Enablement is the #1 ROI. Prioritize use cases that unlock new capabilities (custom demos, data extraction, faster research) before counting cost savings and productivity gains. Tie each automation to a clear business outcome.
  • Build the foundation: tools, policies, and upskilling. Provide access to general LLM tools (e.g., Anthropic/Perplexity), publish responsible-AI guidelines, protect data, and create lightweight learning loops (e.g., peer demos, office hours) to drive safe, confident usage.
  • Institutionalize adoption and re-adoption. Make enablement someone’s job, including training, playbooks, governance, and metrics. Revisit “failed” use cases every ~6 months—models improve quickly, and yesterday’s misses can become tomorrow’s wins.

How to build the foundation of AI-first work

Leaders achieve the most success with AI adoption when they approach it for what it really is: technology implementation and a form of robust organizational change.

Most founders and teams reading this guide are already in the AI “tinkering phase,” Rachel Woods says. It’s a critical on-ramp to larger, more ambitious use cases and creates a culture of experimentation and technology skill-building.
 
But start small: automate simple, “smart intern”-like tasks, such as summarizing notes or organizing and cleaning up unstructured data. In parallel, leaders can begin purchasing solutions, designing policies, and creating upskilling opportunities based on their team’s experiences with AI instead of relying on general best practices alone.
 
Part one of this AI upskilling series offers a detailed guide on how to lead the necessary cultural shift and set the conditions for bold experimentation. But at a high level, you need to establish:
  1. Comfort with AI: Operationalizing AI requires team-wide trust and participation. Leaders can build that comfort through transparency, positive incentives, and by modeling AI use—not mandating it. “While not required, AI use is culturally encouraged,” explains Nick. “Over time, certain tools (like Cursor) have become the de facto standards.”
  2. Upskilling opportunities: You don’t need a formal upskilling program to start using AI, but your team should have ways to learn and ask questions—through peer mentorship, demos, “lunch and learns,” or short courses.
  3. Feedback loops: As leaders outline their AI goals, employees need spaces to share ideas, experiments, and concerns. Individual contributors often know which processes are best for automation and can spot issues early—so it’s essential their voices are heard.
  4. Tools: Even if you plan to build custom or specialized AI solutions, your team still needs access to general tools like ChatGPT, Gemini, or Perplexity to experiment and run early automations using CRAFT Cycles or similar methods.
  5. Policies and safeguards: Automating core or support workflows carries reputational, legal, and security risks. Leaders must implement AI safely, protecting employees, customers, and the business, by proactively following responsible AI practices. This includes vetting use cases, setting clear usage guidelines, and ensuring employees don’t share sensitive data with public models.

Process automation playbook

“Most of us already know how to use AI to help with narrowly-defined tasks, but automating a process requires a fundamentally different approach,” explains Rachel. 

Tasks have the benefit of short feedback loops—you provide an input and immediately get AI’s output, which allows for rapid iteration and a high level of oversight. “You can have ChatGPT write you an email by just providing specific points to include and samples of your past writing. If the first email it creates is missing context or is too formal, you can provide feedback, and continuously go back and forth until you end up with a satisfactory result. Then you can save that entire exchange in a doc so you don’t have to go through as many revisions next time.”
 
But when automating a process, you don’t have the opportunity to intervene at each step. If the first step is poorly executed, it will impact the second step, and so on. By the time a human finally enters the loop to review and make final adjustments, the result may not be usable at all. 

Task vs. process automation

 
Recruiting
  • Task-based use case: Draft initial email outreach for an applicant based on a prompt
  • Process automation: Scan candidate resumes, extract key skills and experiences, and match them against open roles
Marketing
  • Task-based use case: Provide copy ideas for a social post based on a prompt
  • Process automation: Review a report, pull important quotes and stats, and draft social posts and email blurbs tailored to different audiences
Sales
  • Task-based use case: Turn bullet points into a slide deck for a pitch meeting
  • Process automation: Research a prospect, pull insights from a discovery call, create a deck, and draft speaker notes for the call
 
Rachel approaches delegating processes to AI the same way you would an intern. “If you give an intern a huge, disorganized internal doc and ask them to figure out how to run a process on their own, there’s a good chance they're going to fail and not learn anything that will help them improve. It’s the same with AI. If we try to get AI to execute a process by putting a bunch of random context and examples in one really long prompt, we’re essentially saying: ‘This is your job. Now, go read all this and figure out how to do it yourself.’ And you end up with slow feedback loops, a high chance of error, and a lack of control over the final outcome.”
 
Before building an automation, you first need to make sure your process is clearly defined and unambiguous, determine whether AI could reasonably execute the process (provided the right context and training), and finally, take the time to prepare that context and training. The goal of CRAFT Cycles isn't to spin up as many automations as possible, as quickly as possible; it’s automating well-designed, high-impact processes so that AI can consistently deliver a good result.

How to run a CRAFT Cycle

The CRAFT Cycle, a framework created by Rachel Woods, is a system for continuously operationalizing AI.

“There are five steps to a CRAFT Cycle, and when you take the time to do each one well, teams find it to be one of the easiest, fastest, and most dependable ways to get AI to reliably execute a process—and do it effectively enough that you can actually leave it to AI versus having to redo the work later,” explains Rachel. 
 
C: Clear picture. Define the process, who's involved, and what success looks like. Document your existing workflows, identifying pain points and establishing goals for AI integration.
R: Realistic design. Define a minimum viable AI solution that would be useful to implement. Focus on the smallest version that delivers value while intentionally limiting scope for future iterations.
A: AI-ify. Build out and implement the AI solution, whether through prompts, automations, or more sophisticated agent-based approaches. Success at this step requires being thorough in the previous two.
F: Feedback. Test your AI implementation and gather feedback, focusing on clear, actionable, and necessary improvements. Track what works and what doesn't across multiple test runs.
T: Team rollout. Create a plan to launch, train, and maintain the AI solution, including designating who will use it, what training they need, and how to measure success.
This framework was developed by Rachel Woods and The AI Exchange, shared with Bessemer Venture Partners for this guide.

Step 1: Define the process 

Ambiguity is the CRAFT Cycle’s kryptonite. Before involving AI, define and document your existing workflows as precisely as possible, including: the goal, the people involved and their roles, the inputs required, the steps of the process, the output of each of those steps, potential pain points, and the success indicators and ideal outcomes (aka what good looks like). 

Make sure the people who execute the given process—not just those who design or manage it—are in the room when you’re doing this exercise. Individual contributors, who are often closest to the work, will know how that work gets done tactically and whether actual practice deviates from the envisioned process or any documentation that exists. 
 
“Let’s say you create a bi-weekly customer newsletter that highlights news, resources, and expert advice that’s relevant to the persona or industry that you serve,” says Rachel. “If you want to delegate some (and eventually all) of the work to AI, our first step is to document the ideal process—how your team would go about doing this if time wasn’t a constraint.”
 
What is the goal of this process? Create a newsletter with curated insights to boost brand credibility and engagement
Who is involved and what are their roles? Content marketing manager who researches and writes; executive who provides POV
What are the inputs to start the process? Monthly topic, information on target audience, past examples of successful newsletters
What are the steps and the output of each? e.g. Summarize each resource into 2–3 bullet points; generate a clean, formatted list
What are the qualities of a good deliverable? e.g. No article has a paywall, and all were published within the past two weeks
What are the success indicators? e.g. Click-through rate (as compared to previous newsletters)
Where are the pain points or time sinks? e.g. Time-intensive sourcing and summarization
 
The more you invest in step one, the simpler and more successful the rest of the steps—and future CRAFT Cycles—will be. Teams may have to repeat step one with multiple workflows before selecting one to automate first. But that effort is far from wasted. While some processes won’t be viable or high-impact use cases now, you can revisit them as priorities shift, workflows change and improve, and AI capabilities advance. 
 
Seam’s CEO encourages teams to revisit attempted use cases and AI tools about every six months. “Just because something might not work well today doesn't mean it's not going to be great in the near future, and it’s important to try things again, even failed experiments. I remember our designer tried using AI to generate design files. The first time she tried it, it just didn’t work. Once I noticed the platform’s models improving, I had the team test the tool again, and it turned out to be really useful. So leaders need to help their teams keep an open mind.”
 
We provide more guidance on what to look for and avoid when selecting process automation use cases in this section below.

Step 2: Create a realistic design

The best automations are built incrementally. After you’ve mapped out an entire process, the next step is to home in on the minimum viable process: a portion of the steps you’ve laid out that would be manageable to delegate to AI while still providing value.

“Instead of boiling the ocean and trying to get AI to do every single step of your process right off the bat, carve out a chunk of your process that you feel confident you can get AI doing well quickly. The smaller the initial bite you take, the faster you can get this working and move to the next step. Think of continuous development and continuous improvement,” says Rachel.
 
Once you go through a CRAFT Cycle for that chunk, you can come back and do a second cycle for the next one. “We’ve found it’s better to get AI to do a smaller portion very well than it is to invest a ton of time on something that’s ultimately too ambitious, and have nothing to show for the hours you put into the process.”
 
Returning to the newsletter example, you might find that the beginning portion (sourcing resources, summarizing them, and creating interview questions) is the most tedious and best suited to AI, and decide to leave the actual drafting of content out of scope for the initial go-around. 
 
For the MVP you’ve portioned, you’ll create what Rachel calls an AI playbook. Similar to a Standard Operating Procedure (SOP) or onboarding document, the playbook breaks down a process, step by step, so that AI can “learn” to execute each step. “Instead of trying to teach AI to do everything all in one go, you separate the process into small steps and teach AI to do each step well before moving on to the next one. There are four key benefits to this approach: faster feedback loops, built-in iteration, scalability, and easier control,” explains Rachel. 

Sample playbook outline 

Playbook inputs
  • Monthly topic (optional)
  • Newsletter date
  • Context on customer base
 
Playbook steps
  • Source resources
    • Tools to use: Perplexity Pro
  • Summarize resources
    • Tools to use: Anthropic Claude
  • Determine monthly topic
    • Tools to use: Perplexity Pro
  • Create interview questions
    • Tools to use: Anthropic Claude
 
Playbook outputs
  • Summarized resources with links
  • Monthly topic
  • Interview questions
 
This framework was developed by Rachel Woods and The AI Exchange, shared with Bessemer Venture Partners for this guide.
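A playbook like the one above can also live as plain data, so each step carries its own instructions and preferred tool. Here's a minimal sketch: the structure, step names, and helper function are illustrative (they mirror the sample outline, not any tool's real schema).

```python
# Hypothetical sketch: the sample newsletter playbook captured as data.
# Nothing here calls a real API; it's just a way to keep the playbook
# structured so each step knows its tool and order.

NEWSLETTER_PLAYBOOK = {
    "inputs": ["monthly_topic", "newsletter_date", "customer_context"],
    "steps": [
        {"name": "source_resources",           "tool": "Perplexity Pro"},
        {"name": "summarize_resources",        "tool": "Anthropic Claude"},
        {"name": "determine_monthly_topic",    "tool": "Perplexity Pro"},
        {"name": "create_interview_questions", "tool": "Anthropic Claude"},
    ],
    "outputs": ["summarized_resources", "monthly_topic", "interview_questions"],
}

def steps_for_tool(playbook: dict, tool: str) -> list[str]:
    """List the step names a given tool is responsible for, in order."""
    return [s["name"] for s in playbook["steps"] if s["tool"] == tool]
```

Keeping the playbook as data rather than prose makes the "own the playbook, rent the tech" advice from Step 3 concrete: swapping a tool means editing one field, not rewriting the process.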

Step 3: Build the automation

With your playbook in hand, it’s time to start building your automation. Approaches can vary from entirely prompt-based solutions to agents and custom-built AI applications. Even though it can be tempting to spend a lot of time and energy picking the right AI tool, Rachel suggests that the important part is actually the playbook, and that you can run it in nearly any AI tool. The mantra they use is “own the playbook, rent the tech.”

“AI is changing so quickly and there’s a concern that you’ll spend all this time building an automation only to find there’s a better way to do it next week. But if you invest in documenting your process, and providing clear instructions with plenty of context, you can feed it to another AI tool if a better option becomes available and avoid having to start from scratch.”

Types of automations

Prompt-based
  • Overview: AI completes steps and learns what to do, but the process is not executed automatically; you input each prompt yourself.
  • Example tools: Claude, Perplexity, ChatGPT
  • Approach: Document the process with a prompt for each step. Then, copy and paste the prompts one at a time into an LLM of your choice.
Prompts and automations
  • Overview: Similar to the prompt-based approach, but the process is executed automatically versus requiring you to trigger each step individually.
  • Example tools: Zapier, Airtable
  • Approach: For each step, write a prompt (as you did with the previous approach). Then, connect those prompts in an automation tool and create a trigger that initiates the process.
Agents
  • Overview: As with the previous approach, you’ll teach AI the process and AI will execute it automatically. Agents can handle more complex decision-making but are harder to control.
  • Example tools: Relevance AI, Claude Code
  • Approach: The setup and tech will depend on the agent framework you’re using, but Rachel recommends delegating one step in the process to each agent rather than delegating the entire process to one agent.
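The "prompts and automations" pattern (one prompt per step, connected so each step's output feeds the next) can be sketched in a few lines. This is a hypothetical illustration: `run_llm` is a stub standing in for a real model call, and the prompt templates are made up for the newsletter example.

```python
def run_llm(prompt: str) -> str:
    """Stub for a real model call (in practice, an API client or a
    Zapier/Airtable AI action would go here)."""
    return f"[output for: {prompt.splitlines()[0]}]"

# One prompt per playbook step; {previous} carries the prior step's output.
STEP_PROMPTS = [
    "Source 5 recent articles about {topic}.",
    "Summarize each article below in 2-3 bullets.\n{previous}",
    "Draft interview questions based on these summaries.\n{previous}",
]

def run_chain(topic: str) -> list[str]:
    """Execute each step's prompt in order, feeding each output forward.
    A trigger (e.g. a schedule or form submission) would call this."""
    outputs, previous = [], ""
    for template in STEP_PROMPTS:
        prompt = template.format(topic=topic, previous=previous)
        previous = run_llm(prompt)
        outputs.append(previous)
    return outputs
```

The design point matches Rachel's warning about long, all-in-one prompts: because each step is its own small prompt, a failure at step two is visible and fixable at step two, rather than buried inside one monolithic instruction.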
 

Case study: Building custom GPTs

Seam AI has integrated several out-of-the-box AI solutions to expand the capabilities of their lean team, such as Claude Code or Cursor to help junior engineers with code generation and Lovable to allow business teams to spin up web applications. The team has also built custom GPTs to execute common internal processes with a higher level of specificity. 

For example, Seam’s LinkedIn GPT is trained on a repository of the team’s past posts so that it can generate draft content matching the team’s tone of voice; the marketing team only has to refine before publishing. Their data-extraction GPT writes SQL queries and pulls custom datasets from the internal warehouse, allowing business users to run deeper analysis.

The Seam team’s approach to building custom GPTs is similar to using CRAFT Cycles for prompt-based process automations:

  1. Identify repetitive workflows such as writing social media posts, or answering recurring sales or support questions.
  2. Create a knowledge repository on Notion or Google Docs with written context and any reference materials.
  3. Upload that context into your GPT of choice.
  4. Run the prompt, and test and iterate until results are consistent and high-quality.
  5. Deploy internally and share across the team for feedback and adoption.
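Steps 2 and 3 above (build a knowledge repository, then upload that context) amount to assembling reference material into one system prompt. A minimal sketch, assuming a local folder of Markdown reference docs; the function name, folder layout, and file names are all hypothetical, not how any particular GPT builder works internally.

```python
from pathlib import Path

def build_system_prompt(instructions: str, repo_dir: str) -> str:
    """Concatenate every .md reference doc in a folder under the base
    instructions, approximating 'uploading context' to a custom GPT.
    Docs are included in sorted filename order for reproducibility."""
    sections = [instructions]
    for doc in sorted(Path(repo_dir).glob("*.md")):
        sections.append(f"--- {doc.name} ---\n{doc.read_text()}")
    return "\n\n".join(sections)
```

Keeping the repository in plain files (Notion exports, Google Docs downloads) means the same context can be re-uploaded to a different GPT builder later, in line with "own the playbook, rent the tech."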

Step 4: Give feedback and improve

Iteration is a three-part, continuous cycle: identify issue areas, give feedback, and re-run the automation. “When a prompt-based AI executes a task poorly, most of us tend to upload more context or examples. A better approach is to give feedback by editing your initial prompt, and then run it again to see if it makes the same mistake or if the change introduces new mistakes.”

Effective feedback for AI is clear, actionable, and necessary. For each step of the process, log all your feedback, make sure it meets the criteria, and create a way to track progress (i.e. whether the feedback has been implemented, and whether it’s resolved or another solution is needed). 
 
Feedback: The articles have a paywall
  • Clear? Yes
  • Actionable? Maybe; need to test whether the AI is able to check for paywalls
  • Necessary? Yes; if customers can’t read the article, there’s no value
Feedback: Some of the articles are from sources that aren’t credible
  • Clear? Yes
  • Actionable? No; need to list out traits of trustworthy sources
  • Necessary? Yes; inaccurate information damages the brand
Feedback: Interview questions for the subject matter expert are generic
  • Clear? No; unclear what makes a question generic vs. not
  • Actionable? No; need to provide context, examples, and counter-examples
  • Necessary? No; it’s possible to elicit insights with general questions
This framework was developed by Rachel Woods and The AI Exchange, shared with Bessemer Venture Partners for this guide.
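A feedback log like the table above is easy to keep as structured data and filter, so only feedback meeting all three criteria flows into the next playbook revision. A hypothetical sketch; the field names and the decision to treat the "maybe" as actionable-for-now are illustrative choices, not part of the framework.

```python
# Illustrative feedback log mirroring the table above. The paywall item's
# "maybe" is recorded as actionable pending a test of the AI's ability
# to detect paywalls (an assumption for this sketch).
FEEDBACK_LOG = [
    {"note": "Articles have a paywall",         "clear": True,  "actionable": True,  "necessary": True},
    {"note": "Some sources aren't credible",    "clear": True,  "actionable": False, "necessary": True},
    {"note": "Interview questions are generic", "clear": False, "actionable": False, "necessary": False},
]

def ready_to_implement(log: list[dict]) -> list[str]:
    """Return the feedback notes that meet all three criteria and should
    be folded into the playbook's next revision."""
    return [f["note"] for f in log
            if f["clear"] and f["actionable"] and f["necessary"]]
```

Items that fail a criterion aren't discarded; they stay in the log until someone sharpens them (e.g. by listing traits of trustworthy sources) and flips the relevant flag.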
 
If feedback meets the criteria but clarifying the instructions in the playbook still isn’t effective, it may be time to try a new tool (e.g. Perplexity Pro versus ChatGPT). The issues that you aren’t able to solve become known limitations. In those cases, it may be time to refine the process so that it can be executed by AI, find a targeted out-of-the-box solution specializing in the use case, function, or industry, or, as a last resort, return to step two to change your scope. 
 
It’s important to remember that the way you initially taught AI to execute a process will likely not be the best way to execute it in the long run. “Set it and forget it” is a recipe for disaster when operationalizing AI. At our best, we’re always looking for opportunities to improve the way we work, and the same should be true even when we’re automating that work. 

Step 5: Roll it out to your team

Now that you have a working automation, it’s time to prepare for launch! While the momentum of a successful build can make it tempting to rush into the next CRAFT cycle, at least one person needs to be responsible for ensuring every automation is understood, used, and well-maintained. “This person should identify who will use the given playbook and who else needs to know about it, and make those people aware and answer their questions,” explains Rachel.
 
If the owner of the rollout is an individual contributor or mid-level manager, it’s best if a senior leader also updates the team, especially for early automations when the team may be unfamiliar with AI and need more encouragement to get involved. Leaders also need to ensure that the owner has the bandwidth, forum, and support to provide tactical training to those who need it. 
 
“Adoption doesn’t happen on its own—just because you built the automation doesn’t mean it’ll get used. Someone has to be responsible for enablement: training the team and making sure it sticks. In that sense, AI operators are building habits as much as they’re building tools.” 
 
Want to learn how to build an effective upskilling program to support AI adoption within your organization? Read part one of this series.

How to select use cases

Before building automations, each department or functional area should index and review their business processes and rank them according to priority. While processes that are mundane, time-consuming, or low-impact are common targets of automation, these aren’t actually strong indicators of good use cases. Instead, teams should assess use cases based on business impact and technical feasibility, and meet in the middle. 

Below we provide more detail on what makes a great process automation use case, but at a high-level, you’re looking for:
  1. Clear ROI
  2. A precisely-defined, repeatable process that your team has mastered (or is close to mastering)
  3. A reasonable expectation that an AI can execute that process safely and effectively without degrading your customer experience or creating more work for your team 
  4. Or, even better, cases where AI can actually improve the process by introducing new capabilities that your team can’t do or doesn’t have time for now

Narrow scope and low risk (to start)

“I encourage teams to make a list of the highest ROI use cases and start with what’s ‘tiny but useful,’” says Rachel. “Leaders often want to tackle the highest ROI use cases right off the bat, but you can lose momentum if you start with something that’s too difficult to automate and your first attempts fail. If you start with easier use cases, and let your team get a few cycles under their belt, those harder, higher ROI use cases won’t actually feel as hard because the process will be so familiar. That’s the beauty of CRAFT Cycles.”

It’s the same with risk. If delegating a process to AI could potentially compromise your customer experience or their privacy, or introduce security, fairness, or compliance risks, be honest about whether you can sufficiently address those risks before proceeding. Even if you think the chances of a problem arising are low, it’s worth waiting until your team is more experienced (as even making an accurate risk assessment requires expertise).

Standardized repeatable processes

Look for processes that your team has down pat: ones that are performed regularly, done the same way every time, and generate consistently positive results. Newer or more ambiguous processes should continue to be done by a person who can refine, improve, and standardize the process until it’s ready for automation. 

Seemingly straightforward processes may actually require someone to make a lot of judgment calls in order to get to the desired outcome. “A marketing ops person might ask me to build an automation that pulls reports for a monthly meeting. On the surface, it’s a good use case for AI, but when I start to ask simple questions like—‘What reports do you pull every month?’—they’ll tell me it depends on the goals, the campaigns they ran that month, etc. It’s okay to have variance, but that means you’ll need to codify, very precisely, how you come to each decision at each step of the process. Otherwise, AI is not going to give you the result that you’re looking for.”

Aim for progressive delegation

“AI doesn’t have to fully replace a workflow in order to be valuable,” says Rachel. In fact, making complete automation the goal can sometimes be detrimental. “If your team believes AI has to fully replace a workflow to be worth using, that mindset can stop any AI operations project in its tracks. The real wins come from what our team calls ‘progressive delegation.’”

Only a portion of the process needs to be suitable for automation (the “manageable chunk” you carved out in the realistic design stage). An employee can continue executing the rest, and, in doing so, find ways to standardize, mitigate risks, and codify decision-making associated with the remaining tasks until those pieces can be automated too. 

Enablement is the gold standard of ROI

Increased productivity is the most commonly touted form of ROI for AI, but not necessarily the strongest one. Seam’s CEO categorizes ROI into three types: enablement, cost savings, and productivity gains, and views enablement as the most tangible and strategic form of AI ROI. 

  1. Enablement: AI unlocks new skills or capabilities you could not access before (e.g. whipping up custom demos for customers without requiring engineering time).
  2. Cost savings: AI allows companies to reduce hiring and contractor needs, consolidate or buy fewer seats for certain SaaS tools, or otherwise cut down on operational costs. 
  3. Productivity gains: AI saves time on tasks which can be redirected towards strategic work (but remember: this is only valuable if people actually reinvest that time into work).
In the beginning, it’s important to be choosy about what you automate. While it can certainly pay off in the long run, operationalizing AI into your internal workflows is not free (from a technology or time standpoint), and choosing use cases based on preferences or weak criteria backfires.

New AI roles and responsibilities

As AI becomes integral to how organizations operate, new roles are emerging, existing ones are evolving, and org charts are being reimagined. While structures differ by industry and business model, most companies are converging around three core areas of opportunity:

  • Chief AI Officer (CAIO)
  • AI operators (including roles like GTM engineers)
  • AI implementers

Leadership

Ideas for AI automation shouldn’t flow only from the top down. They should also rise from the bottom up. The best leaders create the conditions for everyone, not just executives or engineers, to contribute to how AI transforms their organization.

At the same time, leadership plays a guiding role. Founders and executives must determine how AI advances their strategic goals—whether by reinventing internal workflows, enabling new business models, or driving product innovation. The most successful companies treat AI not as a one-off project but as a system-level shift in how work gets done and value is created.
 
In startups, these efforts often begin with the founders themselves. “It’s common for founders to become the startup’s AI visionary,” says Rachel. “They’re the ones lying awake at night, thinking about the problems their teams face and imagining how AI could change the equation. I encourage founders to embrace that role—it’s both natural and necessary.”

Chief AI Officer (CAIO)

The CAIO role is gaining momentum as companies move from experimentation to fully embedding AI as a strategic differentiator. This leader oversees AI governance, risk, value creation, and company-wide integration.

A CAIO is far more than a technical appointment—it’s a strategic bridge between business ambition and machine intelligence. The best CAIOs define the company’s AI vision, orchestrate change management, and lead the upskilling required for teams to thrive in the AI era. They’re not just implementing tools; they’re rearchitecting how the organization learns, adapts, and competes.

AI operators

Operationalizing AI is a complex, ongoing effort that requires more than a single visionary. Even with a strong AI leader, turning ideas into scalable, programmatic solutions demands dedicated ownership.

“People are realizing how valuable it is to have someone focused solely on this work,” says Rachel. “That’s why we’re seeing the AI operator role explode right now.”
 
Sometimes called an AI automation specialist or AI enablement lead, the AI operator functions like the product manager of the CRAFT Cycle. They manage the full automation process—from identifying and prioritizing opportunities, to implementation, iteration, and adoption—ensuring systems work reliably and improve over time.
 
“The AI operator role requires at least 20 hours per week when done well,” Rachel adds. “You can hire internally or externally, but if you promote from within, give people time to learn and rebalance their workload. Don’t just pile it on.”

What makes a great AI operator

AI operators come from diverse backgrounds, both technical and non-technical, but share a common mindset. They are:

  • Holistic systems thinkers
  • Process- and user-oriented (like product managers)
  • Experienced in project management or operations
  • Skilled communicators and stakeholder managers
That last quality is critical. The AI operator is the connective tissue across teams—translating institutional knowledge into automation playbooks. “If we’re building a house, the AI operator is the general contractor,” says Rachel. “They need to understand what the people living in the house actually want, and then organize the resources to make it happen.”
 
For example, an AI operator might sit down with the sales team and ask: “Can you share your screen and walk me through your inbound deal vetting process? Why do you do it this way? What works and what’s frustrating?” This discovery process allows them to reimagine workflows that AI can later execute—anchored in real human needs.

AI operators must drive adoption and change

The work doesn’t end once an automation goes live. The AI operator often leads Phase Five of the CRAFT Cycle—rollout and adoption.

Zapier’s SPARK framework, for instance, includes a dedicated AI enablement lead whose full-time role is to drive adoption, curate tools, train teams, track wins, and maintain governance.
 
“Adoption doesn’t happen on its own,” says Rachel. “Just because you built the workflow doesn’t mean it’ll get used. Someone has to own enablement—training the team and making sure it sticks. AI operators are building habits as much as they’re building tools.”

Emerging models of AI operations

Depending on stage and resources, companies may embed AI operators within specific functions—similar to decentralized data teams. 

  • Sales teams are becoming more data-driven, using AI to streamline pipelines and boost efficiency.
  • Marketing teams are merging previously siloed roles, enabling smaller teams to accomplish more with AI—forcing CMOs to become more hands-on as they scale the GTM AI stack.

A prime example is the rise of the GTM Engineer: a hybrid role bridging engineering and revenue operations. Their mandate is to design scalable AI workflows for go-to-market teams that tie automation directly to customer outcomes.
 
Engineering leadership is evolving too. Traditional VP or Head of Engineering roles are becoming far more cross-functional and externally engaged, reflecting AI’s deep integration into both the product and the business. Technical depth, architecture vision, and model operationalization are now baseline leadership expectations.
 
For smaller startups, Zapier’s model offers a practical blueprint: appoint one AI operator supported by AI ambassadors across teams. These ambassadors coach peers, help build lightweight automations, and escalate complex projects to the operator.

AI implementers

The counterpart to AI operators, AI implementers are responsible for the technical side of building automations. “The AI implementer is more focused on making sure solutions work effectively, whereas the AI operator is focused on making them easy to use.”

It’s typical to have separate AI implementers and operators, though some organizations may find one person with the right skill set for both roles.
 
AI implementers should have:
  • An aptitude for technical problem-solving
  • The ability to build solutions from requirements
  • Up-to-date knowledge of the AI tool landscape

Parting advice for CEOs on operationalizing AI effectively 

As we cover in more depth in part one of this series, the challenge for founders and CEOs when operationalizing AI is not just buying or building the right tools and making the right hires, but also fostering a culture of experimentation, resilience, and trust that encourages employees’ active participation in AI initiatives. 

Nick of Seam AI offers this distilled advice for leaders in the early stages of operationalizing AI within their organizations:
  1. Focus on business-side workflows first: Enablement gains are often more significant in sales, marketing, and operations than in engineering.
  2. Encourage continuous re-adoption: Tools are improving rapidly, and teams shouldn’t abandon use cases just because of one failed attempt—or even several.
  3. Prioritize ROI by enablement, then cost savings, then productivity: To capture the true value of AI, look for opportunities to unlock new capabilities, not just incremental efficiency.

If you are a leader looking to upskill your organization on operationalizing AI and turning tasks into systems, reach out to Rachel Woods of DiviUp at rachel@diviupagency.com. The discount code BESSEMER is available to Atlas readers interested in signing up for the AI Operator Bootcamp.