Most teams bolt AI onto their work ad hoc. The teams that win document AI steps in their SOPs: specific tools, specific prompts, specific review standards. Here's how to build them, with 3 real examples.
Quick Answer
AI SOPs document which tools to use at which workflow steps, with specific approved prompts and quality review standards. Teams that document AI in their SOPs achieve 3x higher adoption rates than teams that leave AI usage to individual discretion. Three real SOP examples included.
An AI-enabled SOP is a standard operating procedure where AI tool usage is a documented, specific step — not a suggestion tacked on at the end.
Here's the difference:
❌ Traditional SOP (AI as afterthought)
"Step 3: Draft the client summary. You can use AI tools to help speed this up if desired."
Vague. Different employees do wildly different things here.
✅ AI-enabled SOP (AI as documented step)
"Step 3: Open Claude. Run the [Client Summary Prompt] from Atlas. Input: project brief + last 3 meeting notes. Review output for accuracy. Send to client."
Specific. Every employee does it the same way.
The five elements of a good AI step in an SOP: (1) which AI tool, (2) which prompt (linked to a prompt library), (3) what input to provide, (4) what the expected output is, and (5) what human review is required before the output moves forward.
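If your SOPs live in a system that can validate them, it can help to encode those five elements as structured data. Here is a minimal sketch in Python; the AIStep class and its field names are hypothetical, not part of Atlas or any particular SOP tool:

```python
from dataclasses import dataclass

@dataclass
class AIStep:
    """One AI step in an SOP, covering the five required elements.

    Hypothetical structure for illustration; adapt the field names
    to whatever system actually stores your SOPs.
    """
    tool: str             # (1) which AI tool to open
    prompt_name: str      # (2) which prompt, referenced by name in the library
    inputs: list[str]     # (3) what the employee provides as input
    expected_output: str  # (4) what a good result looks like
    review: str           # (5) required human review before the output moves on

# The "Step 3" example from above, expressed in this structure:
client_summary = AIStep(
    tool="Claude",
    prompt_name="Client Summary Prompt",
    inputs=["project brief", "last 3 meeting notes"],
    expected_output="1-page client summary",
    review="Check accuracy before sending to the client",
)
```

The point is the shape, not the language: a step is too vague to document until all five fields can be filled in.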
When a team starts using AI tools, the SOPs that governed their work become outdated almost immediately. Here's what typically happens:
Shadow AI usage: Employees start using AI for SOP steps but don't document it. The official SOP says one thing; what people actually do is different.
Inconsistent results: Employee A uses a well-crafted prompt. Employee B wings it. They submit work product of wildly different quality for the same task.
No knowledge transfer: The best AI prompt for client emails exists in one person's brain. When they leave, it leaves too.
Compliance gaps: Your AI use policy says "approved tools only." But your SOPs don't specify which tools are approved for which tasks, so employees don't know what compliant use looks like.
The fix is straightforward: rebuild your SOPs to include AI steps as first-class process steps, not footnotes. Here's how.
Inventory your existing SOPs
List all your documented processes. For each one, identify which steps currently involve AI (even informally) and which steps could benefit from AI.
Map the AI tools to specific steps
For each AI-eligible step, decide: which tool (ChatGPT, Claude, Gemini, etc.), what type of AI task (writing, analysis, summarization, classification), and what level of human review the output requires.
Write or collect the prompts
For each AI step, create the specific prompt that produces the best output. Store all prompts in Atlas so every SOP can reference them by name. Never embed prompts directly in SOPs — they'll go stale.
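The reason name-based references hold up is indirection: the SOP stores a pointer to the prompt, not a copy of it, so an update in one place propagates everywhere. A minimal sketch of that lookup, assuming a hypothetical in-memory library rather than Atlas itself:

```python
# Hypothetical prompt library: one place where prompt text lives.
# SOPs store only the name; the text is resolved when the step runs.
PROMPT_LIBRARY = {
    "Client Summary Prompt": "Summarize this project for the client...",
    "Meeting Brief Email Prompt": "Draft a short email framing the brief...",
}

def get_prompt(name: str) -> str:
    """Resolve a prompt name to its current text.

    Because every SOP goes through this lookup, updating the library
    updates every SOP that references the prompt, with no edits to
    the SOP documents themselves.
    """
    try:
        return PROMPT_LIBRARY[name]
    except KeyError:
        raise KeyError(
            f"Prompt {name!r} is not in the library; "
            "the SOP that references it needs updating."
        )

# An SOP step carries only the name:
prompt_text = get_prompt("Client Summary Prompt")
```

Embedding the prompt text directly in the SOP would copy it into every document that uses it, and each copy would drift out of date independently.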
Rewrite the AI steps specifically
Replace vague AI references ("use AI to help") with specific instructions: which tool, which prompt (linked), what input, expected output, required review.
Add a review gate
Every AI output that has external impact (goes to clients, informs decisions, gets published) needs a documented human review step. Define who reviews, what they're checking for, and what "approved" means.
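To make the gate concrete, here is a hypothetical sketch of an approval check: the output moves forward only when a named reviewer has confirmed every checklist item. The checklist contents and function are illustrative, not a prescribed standard:

```python
# Hypothetical review gate: an AI output with external impact cannot
# move forward until a named reviewer has approved it against a checklist.
REVIEW_CHECKLIST = [
    "Facts and figures verified against source data",
    "No confidential information included",
    "Tone appropriate for the audience",
]

def release(output: str, reviewer: str, checks_passed: set[str]) -> str:
    """Return the output only if a named reviewer passed every check."""
    if not reviewer:
        raise ValueError("External-impact output requires a named reviewer.")
    missing = [c for c in REVIEW_CHECKLIST if c not in checks_passed]
    if missing:
        raise ValueError(f"Review incomplete; unchecked items: {missing}")
    return output  # "approved": safe to send, publish, or act on
```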
Test with a new employee
The real test of an AI SOP: hand it to someone who has never done the task. Can they execute it start to finish using only the documented steps? If not, the SOP has gaps.
These are actual, usable SOP examples with specific AI steps. Adapt them for your team.
Client Meeting Prep SOP (with AI)
Triggered by: a confirmed calendar invite for a client meeting
Gather context
Pull the last 3 meeting notes, the current project brief, and any open action items from the project folder.
Generate meeting brief
AI STEP: Open Claude. Run the [Client Meeting Brief Prompt] from Atlas. Input: paste last meeting notes + open action items + today's agenda.
Tool: Claude (Atlas: Client Meeting Brief Prompt)
Expected output: A 1-page brief with context summary, open items status, suggested agenda, and 3 questions to ask.
Human review
Read the brief. Verify that action item statuses are accurate. Remove any confidential info that shouldn't be in a shareable doc. Add any context the AI missed.
Send to attendees
AI STEP: Email the brief to all meeting attendees 30 minutes before the call. Use the [Meeting Brief Email Prompt] from Atlas to draft the email.
Tool: Claude (Atlas: Meeting Brief Email Prompt)
Expected output: Short email with brief attached, 1–2 sentence framing.
Outcome: Consistent meeting prep; every attendee receives context in advance; no duplicate work across team members.
New Employee Onboarding SOP (with AI)
Triggered by: employee start date confirmed in the HR system
Create employee profile
Add new hire to HRIS, Slack, email, and relevant project management tools.
Draft welcome message
AI STEP: Open ChatGPT. Run the [New Employee Welcome Message Prompt] from Atlas. Input: new employee name, role, start date, team, and 2–3 things you want to call out specifically.
Tool: ChatGPT (Atlas: New Employee Welcome Message Prompt)
Expected output: Personalized Slack welcome message, email welcome, and first-week agenda overview.
Generate 30-day plan
AI STEP: Open Claude. Run the [30-Day Onboarding Plan Prompt] from Atlas. Input: role description, key projects, team structure, and 3 priority goals for Q1.
Tool: Claude (Atlas: 30-Day Onboarding Plan Prompt)
Expected output: Week-by-week plan with learning objectives, key meetings to schedule, and success metrics.
Review and send
Manager reviews the 30-day plan, adjusts for specifics the AI couldn't know (personality, current team dynamics, urgent projects), and sends it to the new employee along with their welcome message.
Outcome: Every new hire gets a consistent, personalized onboarding experience. Manager prep time cut from 3 hours to 45 minutes.
Weekly Ops Report SOP (with AI)
Triggered by: Friday at 3 PM (recurring)
Pull data
Export the week's key metrics: project status updates, pipeline changes, support ticket volume, and any anomalies flagged during the week.
Generate narrative summary
AI STEP: Open Claude. Run the [Weekly Ops Report Prompt] from Atlas. Input: paste raw data, bullet list of key events, any concerns to highlight.
Tool: Claude (Atlas: Weekly Ops Report Prompt)
Expected output: Executive-ready narrative summary: what happened this week, what's on track, what needs attention, recommended actions.
Review and calibrate
Ops lead reads the summary. Verifies that priority flags match reality. Adds any context the AI wouldn't have (politics, client relationships, upcoming decisions).
Distribute
AI STEP: Send to leadership via email. Use the [Ops Report Distribution Email Prompt] from Atlas to write a 2-sentence framing email. Post the summary in the #ops-updates Slack channel.
Tool: Claude (Atlas: Ops Report Distribution Email Prompt)
Expected output: Short framing email and Slack post.
Outcome: Leadership gets a consistent, readable summary every Friday. Ops lead time cut from 2 hours to 30 minutes. No more "what happened this week?" Slack messages.
What is an AI-enabled SOP?
An AI-enabled SOP is a standard operating procedure that includes specific steps for using AI tools — the exact tool to use, the exact prompt to run, what to do with the output, and how to review or verify the result. It's different from a traditional SOP that just says "use AI to help with this step." An AI-enabled SOP is specific enough that a new employee could follow it on day one.
Why do traditional SOPs break down once teams adopt AI?
Traditional SOPs describe processes as they existed before AI. When employees start using AI tools, they modify steps informally — they add a ChatGPT step here, paste output somewhere else — but those changes don't get documented. The result: your SOPs are out of date the day after AI adoption starts. Different employees run different AI processes for the same task. Quality becomes inconsistent. And when someone new joins, they have no documented AI workflow to follow.
How do I keep the prompts in my SOPs up to date?
This is exactly what Atlas is built for. Your SOP references a prompt by name ("Run the [Client Onboarding Summary Prompt] from Atlas"), and employees click through to Atlas to find it. When you update the prompt, every SOP that references it automatically uses the new version. This is much better than embedding prompts directly in the SOP document, which creates maintenance nightmares when prompts change.
How specific should the AI steps in an SOP be?
Specific enough that a new employee could follow them without guessing. That means: (1) Which tool to use (ChatGPT, Claude, Gemini — be specific), (2) Where to find the prompt (Atlas, a specific folder, etc.), (3) What input to provide, (4) What to do with the output, (5) What human review is required before the output moves forward. If any of those five are missing, the step is too vague.
What's the difference between a prompt library and an AI SOP?
A prompt library is a collection of good prompts. An AI SOP is a documented workflow that includes AI steps. You need both. The SOP tells employees when and how to use AI. The prompt library gives them the specific inputs to run. Atlas connects both: your SOPs can reference prompts directly from your Atlas library, so employees follow the documented process and use the vetted prompt without jumping between tools.
Atlas stores your prompts alongside your SOPs — so AI steps reference your approved prompts directly. One tool for your entire operational AI stack.