
AI Use Policy Template for Small Business

A complete, free AI use policy for teams of 10–500. Covers all 8 critical areas — approved tools, data handling, IP ownership, prohibited uses, and more. Ready to customize and deploy.

Quick Answer

An AI use policy for small business covers 8 areas: approved tools, prohibited uses, data handling rules, output review requirements, IP and ownership, incident reporting, training requirements, and policy review cadence. This guide provides a complete template for teams of 10–500 that you can customize in under 2 hours.

Key Takeaways

  • The most important policy element is data handling — what can and cannot go into AI tools.
  • Prohibited uses should be specific, not vague: list exact scenarios, not general principles.
  • Include an IP/ownership clause — AI-generated work product ownership varies by jurisdiction.
  • Review and update your policy every 6 months; AI capabilities and risks change fast.
  • Distribute via Atlas so employees must acknowledge it — not just email it and hope.
12 min read · Updated June 2025 · By ShiftWorks AI

What is an AI use policy?

An AI use policy is a written document that tells employees how they are and aren't allowed to use artificial intelligence tools at work. It covers which AI tools are approved, what kinds of data can be shared with them, how AI-generated outputs should be reviewed, and what happens when things go wrong.

Think of it like your internet use policy from 2005 — except the stakes are higher. AI tools can accidentally expose confidential client data, generate incorrect information presented as fact, create IP ownership confusion, and introduce regulatory risk. A policy doesn't prevent all of this, but it dramatically reduces the blast radius of mistakes and makes your expectations clear.

An AI use policy is different from an AI strategy (what you're trying to accomplish with AI) and different from AI training (how employees learn to use AI). It's specifically the governance document that sets the rules.

Why your team needs one now

If you don't have an AI use policy, your employees already have one — it's just unwritten, inconsistent, and based on individual judgment. That's a problem.

Data exposure risk: Without guidelines, employees will paste confidential client data, financial information, or PII into consumer AI tools. Once it's in there, you've lost control of it.

Quality inconsistency: When every employee prompts AI differently and reviews outputs differently, you get wildly inconsistent quality across deliverables.

IP ambiguity: Who owns the copyright on AI-generated work? If you don't have a policy, the answer is unclear — and that matters for client contracts.

Compliance exposure: GDPR, HIPAA, SOC 2, and other frameworks increasingly require documentation of how you handle data. AI use is now part of that.

Legal liability: If an employee uses AI to write discriminatory content, fabricate research, or create false impressions, your company is exposed. A policy is your first line of defense.

The good news: you don't need a 40-page legal document. A clear, concise 2–3 page policy covers everything most small businesses need. Here's exactly what to include.

What to include: 8 sections

1. Approved AI tools

List the specific AI tools your company has reviewed and approved for use. Don't write "AI tools that are appropriate" — name them. Approved tools typically include ChatGPT (Teams or Enterprise only), Microsoft Copilot, Google Gemini for Workspace, and any purpose-built tools your team uses.

Separately, address tools that require approval before use (shadow IT risk) and tools that are prohibited outright (consumer-tier tools with unclear data practices, tools hosted outside your jurisdiction, etc.).

2. Data handling rules

This is the most critical section. Be explicit about what data can and cannot be shared with AI tools. Most policies use a tiered system:

Typical data tiers:

  • ✓ Approved: General business writing, internal processes, non-confidential research, marketing copy
  • ⚠ Review required: Client project details (anonymized), internal strategy, financial data
  • ✗ Prohibited: PII, client contracts, passwords/credentials, HR data, trade secrets, attorney-client privileged info
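
Teams that want a technical backstop for these tiers sometimes add a lightweight pre-submission check. The sketch below is purely illustrative — the patterns, labels, and function name are assumptions, not part of the policy template, and a handful of regexes is no substitute for a real DLP tool:

```python
import re

# Illustrative patterns for prohibited-tier data (assumptions for this
# sketch). A production deployment would rely on a dedicated DLP tool.
PROHIBITED_PATTERNS = {
    "email address (possible PII)": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credential-like string": re.compile(
        r"(?i)\b(password|api[_ ]?key|secret)\s*[:=]\s*\S+"
    ),
}

def check_before_sharing(text: str) -> list[str]:
    """Return the reasons, if any, that text should NOT go into an AI tool."""
    return [
        label
        for label, pattern in PROHIBITED_PATTERNS.items()
        if pattern.search(text)
    ]

# Non-empty result -> block the submission or anonymize first.
flags = check_before_sharing("Reply to jane.doe@example.com, password: hunter2")
```

Even a rough check like this turns the "when in doubt, anonymize" rule into a nudge at the moment of sharing, rather than a line in a document nobody rereads.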

3. Output review requirements

AI makes mistakes. Confidently. Your policy should specify that AI-generated content requires human review before use, especially for: client-facing content, factual claims, legal or financial advice, code going to production, and anything with numbers.

Specify who is responsible for review. "The employee who used the AI tool is responsible for verifying accuracy before use" is a clear, enforceable standard.

4. Prohibited use cases

Be explicit about what AI cannot be used for. Common prohibitions include:

  • Generating content that misrepresents facts or fabricates information
  • Impersonating real people or companies
  • Creating discriminatory, harassing, or defamatory content
  • Bypassing security controls or accessing unauthorized systems
  • Submitting AI-generated work as original human work in contexts where that matters (RFPs, certifications, etc.)
  • Using AI to make final decisions on hiring, firing, or performance reviews

5. Intellectual property

Address three IP questions:

  1. Ownership of outputs: Work product created using company-approved AI tools in the course of employment belongs to the company, subject to review of any tool-specific terms.
  2. Copyright risk: AI-generated content may not be copyrightable. Employees should disclose to managers when work product is substantially AI-generated.
  3. Client work: Check client contracts — some explicitly address AI-generated content. Don't assume it's fine.

6. Employee training requirements

All employees using AI tools should complete baseline AI literacy training before using AI in client-facing work. Specify what that training covers (prompt writing, output evaluation, data handling) and who provides it.

Link to your team's AI training program and your shared prompt library if you have them.

7. Incident reporting

Define what constitutes an AI incident (data exposure, significant output error used externally, AI-related security issue) and how to report it. Employees shouldn't fear punishment for reporting — early reporting usually means smaller problems.

Include: who to report to, what information to capture, and the response timeline. Most incidents should be acknowledged within 24 hours.

8. Review cadence

AI moves fast. Your policy should specify a review schedule (minimum every 6 months) and a process for adding or removing approved tools between scheduled reviews. Assign an owner — typically the operations lead, IT manager, or founder — who is responsible for keeping the policy current.

Free AI Use Policy Template

Copy the template below. Replace bracketed fields with your company's information. For regulated industries (healthcare, finance, legal), have your attorney review before publishing.

[COMPANY NAME] AI USE POLICY

Version 1.0 | Effective: [DATE] | Owner: [NAME/ROLE]

1. PURPOSE

This policy establishes guidelines for the responsible use of artificial intelligence (AI) tools at [Company Name]. Our goal is to enable employees to use AI productively while protecting client data, company information, and our business reputation.

2. SCOPE

This policy applies to all employees, contractors, and interns who use AI tools in connection with their work at [Company Name], regardless of whether the tool is company-provided or personal.

3. APPROVED AI TOOLS

The following AI tools are approved for business use:

  • [Tool 1] — for [use cases]
  • [Tool 2] — for [use cases]
  • [Tool 3] — for [use cases]

Tools not on this list require written approval from [Owner] before use. Do not use consumer-tier AI tools (free ChatGPT, etc.) for work involving company or client data.

4. DATA HANDLING

APPROVED to share with AI tools:

  • General business writing and communication drafts
  • Internal process documentation
  • Non-confidential research and analysis
  • Marketing and public-facing content

PROHIBITED from sharing with AI tools:

  • Personally identifiable information (PII) of clients, employees, or third parties
  • Client contracts, proposals, or confidential project details
  • Financial data, forecasts, or proprietary business information
  • Passwords, API keys, or authentication credentials
  • Protected health information (PHI) or legally privileged information
  • Trade secrets or competitive intelligence

When in doubt, anonymize before sharing or ask [Owner].

5. OUTPUT REVIEW REQUIREMENTS

All AI-generated content must be reviewed by the employee who used the AI tool before use. Do not publish, send to clients, or submit AI-generated content without reviewing it for:

  • Factual accuracy (AI frequently hallucinates facts, citations, and statistics)
  • Appropriateness for context and audience
  • Consistency with company values and standards
  • Legal or compliance requirements where applicable

You are responsible for the content you submit, even if AI assisted in its creation.

6. PROHIBITED USES

You may not use AI tools to:

  • Generate false, misleading, or fabricated information
  • Impersonate individuals, companies, or organizations
  • Create content that is discriminatory, harassing, defamatory, or illegal
  • Circumvent security controls or access unauthorized systems
  • Make final employment decisions (hiring, firing, performance ratings)
  • Submit work as entirely original when it is substantially AI-generated, in contexts where originality is represented (certifications, RFPs, academic submissions)

7. INTELLECTUAL PROPERTY

Work product created using approved AI tools in the course of employment is owned by [Company Name], subject to each tool's terms of service. AI-generated content may not be protected by copyright — employees should not represent AI-generated work as fully original creative work in client-facing contexts without disclosure. When client contracts address AI usage, those terms govern.

8. TRAINING

Before using AI tools for client-facing work, employees must complete [Company Name]'s baseline AI literacy training. This covers prompt writing fundamentals, output evaluation, and data handling. Training is provided [via / by] [training provider/method]. Contact [Owner] to schedule.

9. INCIDENT REPORTING

Report AI incidents immediately to [Owner] at [contact]. An AI incident includes: inadvertent sharing of prohibited data with an AI tool, significant AI output errors used externally before detection, and any AI-related security concern. Prompt reporting is encouraged — we will not penalize good-faith reporting of mistakes. Incidents will be acknowledged within 24 hours.

10. POLICY REVIEW

This policy will be reviewed every 6 months by [Owner]. Employees may request changes at any time by contacting [Owner]. When significant changes occur (new tool approvals, regulatory changes, incidents), an updated policy will be communicated within 30 days.

11. QUESTIONS

Contact [Owner] at [contact] with questions about this policy.

Last reviewed: [DATE] | Next review: [DATE] | Policy owner: [NAME]

💡 Pro tip: Store your policy in Atlas

Atlas lets you store governance documents like this policy alongside your team's prompts and SOPs — so everything is in one place. Start free →

Frequently Asked Questions

Does my small business actually need an AI use policy?

Yes — if your employees use any AI tools at work (ChatGPT, Copilot, Gemini, etc.), you need a policy. Without one, employees make their own decisions about what data to share, which tools to use, and how to verify AI outputs. A policy protects your clients, your IP, and your business from liability. It also helps employees use AI more confidently because they know what's allowed.

How long should an AI use policy be?

For a small business (under 200 employees), 2–4 pages is the right length. Long enough to cover the critical areas (approved tools, data handling, IP, prohibited uses), short enough that employees will actually read it. The template in this guide is approximately 1,200 words — about 3 pages. Avoid creating a 20-page compliance document nobody will read.

Can I just copy the template in this guide and use it?

Yes, that's the point. The template is designed to be copy-paste ready. You'll need to fill in your company name, add your specific approved tools, and adjust any sections that don't fit your industry. For regulated industries (healthcare, finance, legal), have your attorney review before publishing. For most small businesses, the template works as-is with minor customization.

How often should we update our AI use policy?

At minimum, review it every 6 months. AI tools change fast — a tool that was approved 8 months ago may have changed its data practices. Review more frequently if: you onboard major new AI tools, there's a high-profile AI data breach in your industry, you change how you handle customer data, or new AI regulations affect your business. Set a calendar reminder.

What happens if an employee violates the AI use policy?

Your policy should specify consequences — typically following your existing disciplinary process. Minor violations (using an unapproved tool for a low-risk task) might warrant a conversation and retraining. Major violations (sharing confidential client data with an external AI tool) might warrant more serious action. The goal of the policy is prevention, not punishment — make sure employees understand the policy before you enforce it.

Need all 6 governance docs, not just one?

The ShiftWorks AI Governance Launchpad delivers 6 custom governance documents for your team in 2 weeks — AI use policy, data classification framework, approved tool registry, incident response plan, training curriculum, and ROI tracking template.

$2,500 flat