A complete, free AI use policy for teams of 10–500. Covers all 8 critical areas — approved tools, data handling, IP ownership, prohibited uses, and more. Ready to customize and deploy.
Quick Answer
An AI use policy for small business covers 8 areas: approved tools, prohibited uses, data handling rules, output review requirements, IP and ownership, incident reporting, training requirements, and policy review cadence. This guide includes a complete template for teams of 10–500 that you can customize in under 2 hours.
An AI use policy is a written document that tells employees how they are and aren't allowed to use artificial intelligence tools at work. It covers which AI tools are approved, what kinds of data can be shared with them, how AI-generated outputs should be reviewed, and what happens when things go wrong.
Think of it like your internet use policy from 2005 — except the stakes are higher. AI tools can accidentally expose confidential client data, generate incorrect information presented as fact, create IP ownership confusion, and introduce regulatory risk. A policy doesn't prevent all of this, but it dramatically reduces the blast radius of mistakes and makes your expectations clear.
An AI use policy is different from an AI strategy (what you're trying to accomplish with AI) and different from AI training (how employees learn to use AI). It's specifically the governance document that sets the rules.
If you don't have an AI use policy, your employees already have one — it's just unwritten, inconsistent, and based on individual judgment. That's a problem.
Data exposure risk: Without guidelines, employees will paste confidential client data, financial information, or PII into consumer AI tools. Once it's in there, you've lost control of it.
Quality inconsistency: When every employee prompts AI differently and reviews outputs differently, you get wildly inconsistent quality across deliverables.
IP ambiguity: Who owns the copyright on AI-generated work? If you don't have a policy, the answer is unclear — and that matters for client contracts.
Compliance exposure: GDPR, HIPAA, SOC 2, and other frameworks increasingly require documentation of how you handle data. AI use is now part of that.
Legal liability: If an employee uses AI to write discriminatory content, fabricate research, or create false impressions, your company is exposed. A policy is your first line of defense.
The good news: you don't need a 40-page legal document. A clear, concise 2–3 page policy covers everything most small businesses need. Here's exactly what to include.
List the specific AI tools your company has reviewed and approved for use. Don't write "AI tools that are appropriate" — name them. Approved tools typically include ChatGPT (Team or Enterprise plans only), Microsoft Copilot, Google Gemini for Workspace, and any purpose-built tools your team uses.
Separately, address tools that require approval before use (shadow IT risk) and tools that are prohibited outright (consumer-tier tools with unclear data practices, tools hosted outside your jurisdiction, etc.).
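Some teams keep this three-way distinction (approved, needs approval, prohibited) as a small machine-readable registry so scripts and onboarding checklists can consult it. A minimal sketch — the tool names and statuses below are illustrative assumptions, not a recommended list:

```python
# Hypothetical approved-tool registry. Tool names and statuses are examples only;
# replace with the tools your company has actually reviewed.
TOOL_REGISTRY = {
    "chatgpt-enterprise": "approved",
    "microsoft-copilot": "approved",
    "gemini-workspace": "approved",
    "chatgpt-free": "prohibited",  # consumer tier, unclear data practices
}

def tool_status(name: str) -> str:
    """Return 'approved', 'prohibited', or 'needs-approval'.

    Unknown tools default to 'needs-approval', matching the policy rule that
    anything not on the list requires written sign-off before use.
    """
    return TOOL_REGISTRY.get(name.lower(), "needs-approval")

print(tool_status("ChatGPT-Enterprise"))  # approved
print(tool_status("some-new-tool"))       # needs-approval
```

Defaulting unknown tools to "needs-approval" rather than "prohibited" mirrors the shadow-IT handling above: new tools get reviewed, not silently used.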
This is the most critical section. Be explicit about what data can and cannot be shared with AI tools. Most policies use a tiered system:
Typical data tiers:
Green (share freely): public information, published marketing content, and anything already on your website.
Yellow (anonymize first): internal documents and real examples, with names, identifiers, and client details removed.
Red (never share): confidential client data, PII, financial records, credentials, and trade secrets.
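To make a tiered system enforceable rather than aspirational, the tiers can back a simple pre-submission check. The sketch below assumes a common green / yellow / red scheme; the category names are illustrative assumptions to adapt to your own data:

```python
# Hypothetical data-tier map: green = safe to share, yellow = anonymize first,
# red = never share with external AI tools. Categories are examples only.
DATA_TIERS = {
    "public_marketing": "green",
    "internal_docs": "yellow",
    "client_pii": "red",
    "financials": "red",
    "credentials": "red",
}

def may_share(category: str, anonymized: bool = False) -> bool:
    """True if data in this category may be pasted into an approved AI tool."""
    # Unknown categories default to the strictest tier ("when in doubt, ask").
    tier = DATA_TIERS.get(category, "red")
    if tier == "green":
        return True
    if tier == "yellow":
        return anonymized  # yellow data is shareable only after anonymization
    return False
```

Note the fail-closed default: data that hasn't been classified is treated as red, which matches the "when in doubt, anonymize or ask" rule in the template below.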
AI makes mistakes. Confidently. Your policy should specify that AI-generated content requires human review before use, especially for: client-facing content, factual claims, legal or financial advice, code going to production, and anything with numbers.
Specify who is responsible for review. "The employee who used the AI tool is responsible for verifying accuracy before use" is a clear, enforceable standard.
Be explicit about what AI cannot be used for. Common prohibitions include:
Address three IP questions:
All employees using AI tools should complete baseline AI literacy training before using AI in client-facing work. Specify what that training covers (prompt writing, output evaluation, data handling) and who provides it.
Link to your team's AI training program and your shared prompt library if you have them.
Define what constitutes an AI incident (data exposure, significant output error used externally, AI-related security issue) and how to report it. Employees shouldn't fear punishment for reporting — early reporting usually means smaller problems.
Include: who to report to, what information to capture, and the response timeline. Most incidents should be acknowledged within 24 hours.
AI moves fast. Your policy should specify a review schedule (minimum every 6 months) and a process for adding or removing approved tools between scheduled reviews. Assign an owner — typically the operations lead, IT manager, or founder — who is responsible for keeping the policy current.
Copy the template below. Replace bracketed fields with your company's information. For regulated industries (healthcare, finance, legal), have your attorney review before publishing.
[COMPANY NAME] AI USE POLICY
Version 1.0 | Effective: [DATE] | Owner: [NAME/ROLE]
1. PURPOSE
This policy establishes guidelines for the responsible use of artificial intelligence (AI) tools at [Company Name]. Our goal is to enable employees to use AI productively while protecting client data, company information, and our business reputation.
2. SCOPE
This policy applies to all employees, contractors, and interns who use AI tools in connection with their work at [Company Name], regardless of whether the tool is company-provided or personal.
3. APPROVED AI TOOLS
The following AI tools are approved for business use:
Tools not on this list require written approval from [Owner] before use. Do not use consumer-tier AI tools (free ChatGPT, etc.) for work involving company or client data.
4. DATA HANDLING
APPROVED to share with AI tools:
PROHIBITED from sharing with AI tools:
When in doubt, anonymize before sharing or ask [Owner].
5. OUTPUT REVIEW REQUIREMENTS
All AI-generated content must be reviewed by the employee who used the AI tool before use. Do not publish, send to clients, or submit AI-generated content without review for:
You are responsible for the content you submit, even if AI assisted in its creation.
6. PROHIBITED USES
You may not use AI tools to:
7. INTELLECTUAL PROPERTY
Work product created using approved AI tools in the course of employment is owned by [Company Name], subject to each tool's terms of service. AI-generated content may not be protected by copyright — employees should not represent AI-generated work as fully original creative work in client-facing contexts without disclosure. When client contracts address AI usage, those terms govern.
8. TRAINING
Before using AI tools for client-facing work, employees must complete [Company Name]'s baseline AI literacy training. This covers prompt writing fundamentals, output evaluation, and data handling. Training is provided [via / by] [training provider/method]. Contact [Owner] to schedule.
9. INCIDENT REPORTING
Report AI incidents immediately to [Owner] at [contact]. An AI incident includes: inadvertent sharing of prohibited data with an AI tool, significant AI output errors used externally before detection, and any AI-related security concern. Prompt reporting is encouraged — we will not penalize good-faith reporting of mistakes. Incidents will be acknowledged within 24 hours.
10. POLICY REVIEW
This policy will be reviewed every 6 months by [Owner]. Employees may request changes at any time by contacting [Owner]. When significant changes occur (new tool approvals, regulatory changes, incidents), an updated policy will be communicated within 30 days.
11. QUESTIONS
Contact [Owner] at [contact] with questions about this policy.
💡 Pro tip: Store your policy in Atlas
Atlas lets you store governance documents like this policy alongside your team's prompts and SOPs — so everything is in one place.
Do we really need an AI use policy?
Yes — if your employees use any AI tools at work (ChatGPT, Copilot, Gemini, etc.), you need a policy. Without one, employees make their own decisions about what data to share, which tools to use, and how to verify AI outputs. A policy protects your clients, your IP, and your business from liability. It also helps employees use AI more confidently because they know what's allowed.
How long should the policy be?
For a small business (under 200 employees), 2–4 pages is the right length: long enough to cover the critical areas (approved tools, data handling, IP, prohibited uses), short enough that employees will actually read it. The template in this guide is approximately 1,200 words — about 3 pages. Avoid creating a 20-page compliance document nobody will read.
Can we use this template as-is?
Yes, that's the point. The template is designed to be copy-paste ready. You'll need to fill in your company name, add your specific approved tools, and adjust any sections that don't fit your industry. For regulated industries (healthcare, finance, legal), have your attorney review before publishing. For most small businesses, the template works as-is with minor customization.
How often should we update the policy?
At minimum, review it every 6 months. AI tools change fast — a tool that was approved 8 months ago may have changed its data practices. Review more frequently if you onboard major new AI tools, there's a high-profile AI data breach in your industry, you change how you handle customer data, or new AI regulations affect your business. Set a calendar reminder.
What happens if someone violates the policy?
Your policy should specify consequences — typically following your existing disciplinary process. Minor violations (using an unapproved tool for a low-risk task) might warrant a conversation and retraining. Major violations (sharing confidential client data with an external AI tool) might warrant more serious action. The goal of the policy is prevention, not punishment — make sure employees understand the policy before you enforce it.
The ShiftWorks AI Governance Launchpad delivers 6 custom governance documents for your team in 2 weeks — AI use policy, data classification framework, approved tool registry, incident response plan, training curriculum, and ROI tracking template. Flat price: $2,500.