If you searched "AI governance template," I'm guessing your CEO dropped this on your plate sometime in the last few weeks. Maybe legal flagged it. Or maybe a customer asked about your AI policy and you realized you don't have one. Either way, you're now the person at your company expected to figure out what one of these things should actually look like.

I've spent the last few months looking at every AI governance template I could find. The ones companies publish, the ones lawyers draft, the ones HR teams cobble together from Google searches. Most of them are bad. Either they're a generic doc that ChatGPT obviously wrote, or they're a 40-page enterprise framework nobody at a normal-sized company is ever going to read.

So this post is the middle path. The practical framework. The sections that actually need to be in your AI governance template, why each one matters, and the questions you have to answer for each. Not the boilerplate language (a lawyer should write that part). The structure that has to be in front of you before any of that starts.

Why a real AI governance template matters

Here's what actually happens at companies without one:

  • Marketing pastes customer emails into ChatGPT to draft replies. Nobody told them not to.
  • An engineer drops production source code into Cursor or Copilot. There's no rule against it, so technically it's not a violation.
  • An AI-drafted proposal goes out to a client with no disclosure. The client notices. And it gets awkward.
  • Someone signs up for a new AI tool with a personal account and a corporate credit card. Nobody approved it. Nobody even knows it's there.

And none of these are hypothetical. I see this kind of thing happen every week at companies without a written AI policy.

But here's the thing: nobody's acting in bad faith. They just don't have any rules to follow, so every employee is quietly making up their own. Twenty different sets of private rules, none written down, none consistent. That's the actual problem.

A written AI governance template fixes this. Not because it's bureaucratic, but because it puts everyone on the same page. Acceptable use, data handling, disclosure, escalation. All in one document anyone on the team can read in five minutes and know exactly what's OK and what isn't.

It's also the artifact regulators and auditors are going to ask for. ISO/IEC 42001, the international standard for AI management systems, treats a written governance policy as a baseline. A growing list of U.S. state laws (Colorado, California, New York City) are heading the same direction. You'll need this anyway, so you might as well do it once and do it right.

What to include in a company AI governance template (quick list)

Here's the structure. Each section is explained below.

  1. Purpose & scope: who the policy covers and what actually counts as an "AI tool."
  2. Core rules: five to seven commandments everyone has to know by heart.
  3. Risk tiers: the difference between "drafting an internal email" and "shipping an autonomous agent."
  4. Data handling: what never goes into an AI tool, no matter what.
  5. Disclosure: when AI assistance has to be flagged externally, and when it doesn't.
  6. Prohibited uses: the short list of things that are off-limits, period.
  7. Approval process: how a new AI tool gets onto your approved list.
  8. Roles & responsibilities: who owns this, who reviews, who decides.
  9. Incident reporting: what counts as an incident and the deadline to report it.
  10. Review cadence & training: how the policy stays current as AI keeps changing.

1. Purpose & scope

This section answers two questions: who does the policy apply to, and what counts as an "AI tool."

Both look obvious until you sit down to write them. "Personnel" should explicitly cover employees, contractors, interns, and advisors, not just full-time staff. Otherwise when a contractor pastes customer data into ChatGPT, the policy doesn't technically apply to them, and you don't have grounds to do anything about it.

Same with "AI tool." Chatbots are obvious. But you also need to cover code assistants, image generators, transcription services, autonomous agents, and (this is the one most companies miss) AI features baked into software you already use. The AI summary in Notion is an AI tool. The AI rewrite in Gmail is an AI tool. So is Slack AI. If you don't include these, half your AI usage sits outside your own policy.

One more thing: the scope clause should state that the policy applies regardless of who paid for the tool or whose device it's running on. If the work involves company information, the policy applies. That's the line.

2. Core rules

These are five to seven sentences anyone at your company should be able to recite from memory. Long policies fail because nobody remembers them. But short, memorable rules survive contact with the actual workday.

The set we use in the AI Policy Template covers seven things: who owns the output, sticking to approved tools only, what data can never go in, when to disclose AI assistance, when prior approval is required, how fast to report an incident, and using the current version of the policy. Exact wording matters less than keeping the list short and obvious.

If your core rules section runs to fourteen bullet points, it's not a core rules section. It's a glossary, and nobody's going to read it.

3. Risk tiers

This is the part most generic AI governance templates miss, and it's the part that does the most work.

Not every AI use carries the same risk. Drafting an internal Slack message isn't the same as shipping a production agent that takes actions inside customer accounts. If your policy treats them identically, one of two things happens. Either the rules are too loose and the agent slips through, or they're too tight and people stop using AI for the easy stuff (which is where most of the productivity gain actually lives).

So the fix is a tiered system. Something like:

  • Low risk: internal-only, low-stakes, no regulated data. No approval needed.
  • Medium risk: customer-facing copy, marketing, non-production code. Tell your manager.
  • High risk: production code, regulated data (HIPAA, PCI, SOC 2 scope), customer-affecting decisions, autonomous agents, anything you'd be embarrassed to see on the front page. Formal sign-off required.
  • Prohibited: see the next section.

This shape borrows from the NIST AI Risk Management Framework, which is the closest thing to a public-sector reference for structuring AI governance by risk level. You don't have to follow it to the letter. The underlying logic of "categorize before you control" is the durable part.

One rule that's worth writing down: when in doubt, classify upward. The cost of over-escalating is one extra approval email. The cost of under-escalating is the kind of incident that ends up in a press release.
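
To make the tiers concrete, here's a minimal sketch of the classification logic in Python. The attribute names and cut-offs are illustrative (they mirror the list above, not any standard), and your own tiers will slice differently; the only point is how little logic "categorize before you control" actually needs.

```python
from dataclasses import dataclass

# Hypothetical attributes of a proposed AI use. The field names mirror the
# tier list above and are illustrative only; adapt them to your own categories.
@dataclass
class AIUse:
    regulated_data: bool        # HIPAA, PCI, SOC 2 scope
    customer_affecting: bool    # decisions or actions that touch customer accounts
    autonomous_agent: bool      # acts without a human approving each step
    production_code: bool
    external_facing: bool       # customer-facing copy, marketing, non-production code

def classify_tier(use: AIUse) -> str:
    """Categorize before you control."""
    if (use.regulated_data or use.customer_affecting
            or use.autonomous_agent or use.production_code):
        return "high"    # formal sign-off required
    if use.external_facing:
        return "medium"  # acknowledge to your manager
    return "low"         # internal-only, low-stakes: no approval needed

# Drafting an internal Slack message: nothing flagged, so it's low risk.
print(classify_tier(AIUse(False, False, False, False, False)))  # -> low
```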

4. Data handling

If only one section of your AI governance template gets read carefully, this should be it. The single most common cause of an AI incident is the wrong data going into the wrong tool.

The data handling section needs to do three things:

  1. Define restricted inputs. These are categories of data that can't go into any AI tool unless that specific tool is approved for that specific category. Customer PII, employee data, source code, financial records, legal correspondence, anything under NDA, anything regulated.
  2. Set prompt hygiene rules. Replace identifying names with placeholders. Strip credentials and internal URLs. Summarize before pasting raw exports.
  3. Cover conversation history. Turn off training-on-chat where the option exists. Clear sensitive sessions. Prefer enterprise accounts over personal ones.

And one thing that's worth writing down explicitly because most policies miss it: an AI output that contains restricted inputs is itself restricted. If you summarized a customer list and the summary still has names in it, that summary is now a restricted document. Treat it accordingly.
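
Prompt hygiene is also the one rule you can partially automate. Here's a minimal sketch of a pre-paste scrubber, assuming a simple regex pass; the patterns and the internal domain are placeholders, and this is a seatbelt for the rules above, not a replacement for them.

```python
import re

# Minimal pre-paste scrubber. The patterns and the "internal.example.com"
# domain are placeholders; extend them to match your own restricted inputs.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*[^\s,]+"), "[CREDENTIAL]"),
    (re.compile(r"https?://\S*internal\.example\.com\S*"), "[INTERNAL URL]"),
]

def scrub(text: str) -> str:
    """Replace identifying or sensitive strings with placeholders before pasting."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Ping jane.doe@acme.com, api_key=sk-12345, "
            "see https://wiki.internal.example.com/runbook"))
# -> Ping [EMAIL], [CREDENTIAL], see [INTERNAL URL]
```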

5. Disclosure

When does AI assistance have to be flagged, and when is it just a productivity tool you can use silently?

The framing we use is material AI assistance. If AI generated the structural draft, the substantive content, or the final form of work going to a customer, partner, regulator, or the public, that's material. Disclose it. Spell-check, light rephrasing, and personal productivity use don't need disclosure.

Disclosure doesn't need to be heavy. A footnote, a metadata tag, a sentence in a methodology note, or a line like "Portions of this document were drafted with the assistance of [Tool] and reviewed by [Author]" is enough. The point is that the receiving party knows.

6. Prohibited uses

This is the short list of things that are off-limits regardless of risk tier or approval. Keep it tight. If it sprawls, it loses force.

Categories worth covering:

  • Final decisions about a person's employment or pay
  • Generating content meant to deceive
  • Harassing or unlawful content
  • Bypassing security controls
  • Putting restricted inputs into unapproved tools
  • AI-generated legal or medical advice
  • Training outside models on your restricted data

The principle behind all of these is simple: AI can't be used in ways that create risks your company wouldn't accept from a person acting alone. If a human employee can't legally or ethically do it, AI can't do it for them.

7. Approval process for new tools

New AI tools ship every week. Without a structured approval process, one of two things happens: people just start using whatever they want (shadow AI), or the process is so slow that they go around it on principle.

So a workable approval process is mostly about what you ask for. At minimum:

  • Tool name and vendor
  • Intended use case
  • Data categories involved
  • The vendor's data processing terms
  • Whether an enterprise contract exists

A turnaround time matters too. Without one, requests die in a queue. We use ten business days. The number is less important than committing to one and meeting it.

Two things that are non-obvious but worth including: the process applies to free tiers and personal accounts (this is where shadow AI actually lives), and a denied request can't be held against the person who asked. If people get punished for asking, they stop asking. And now you don't know what's being used.
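
If you want the request itself to be structured, a fixed intake shape does most of the work. Here's a minimal sketch; the field names and example values are hypothetical, but they map one-to-one to the minimum list above.

```python
from dataclasses import dataclass

# Minimal intake record for a new-tool request. Field names are hypothetical;
# the point is that every request arrives with the same information attached.
@dataclass
class ToolRequest:
    tool_name: str
    vendor: str
    intended_use: str
    data_categories: list[str]     # what will actually go into the tool
    vendor_data_terms_url: str     # the vendor's data processing terms
    enterprise_contract: bool      # free tiers and personal accounts still file one of these
    requested_by: str
    decision_due_days: int = 10    # commit to a turnaround and meet it

request = ToolRequest(
    tool_name="ExampleDraftBot",   # hypothetical tool and vendor
    vendor="Example Inc.",
    intended_use="First drafts of outbound marketing emails",
    data_categories=["marketing copy"],
    vendor_data_terms_url="https://example.com/dpa",
    enterprise_contract=False,
    requested_by="j.doe",
)
```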

8. Roles & responsibilities

A policy without an owner is a wishlist.

The roles section names the actual human accountable for each piece. At minimum:

  • AI Governance Lead. Owns the approved-tools register, reviews high-risk submissions.
  • People Managers. Acknowledge medium-risk AI use on their teams.
  • Department Heads. Sign off on department-wide AI workflows.
  • Legal Counsel. Vendor and regulatory questions.
  • IT and Security. Technical controls, incident investigation.
  • Executive Sponsor. Owns the policy itself, approves changes.

If your company is small, one person can hold several of these roles. The point isn't headcount. The point is that each role has an actual name attached, not a job title in a vacuum.

9. Incident reporting

Define what counts as an incident, and the deadline to report it.

The definition usually covers: suspected exposure of restricted inputs through an AI tool, an AI output that has caused or could cause harm, the discovery of shadow AI in regular use, or a vendor security disclosure affecting an approved tool. The reporting window we recommend is 24 hours from discovery, sent to a single named email address.

One thing the policy has to say: reporting in good faith is protected. If someone reports promptly and completely, they shouldn't be disciplined for the underlying issue. Without that protection, incidents go unreported. And the unreported ones are always the ones that grow into the actual problem.
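
If it helps, the incident report itself can be a fixed shape with the 24-hour deadline built in. A minimal sketch, with hypothetical field names and a placeholder reporting address:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Minimal incident record. Field names and the reporting address are
# placeholders; the 24-hour window matches the recommendation above.
REPORTING_WINDOW = timedelta(hours=24)
REPORT_TO = "ai-incidents@example.com"   # single named address, hypothetical

@dataclass
class AIIncident:
    discovered_at: datetime
    tool: str
    what_happened: str                    # e.g. suspected exposure of restricted inputs
    restricted_data_involved: bool
    reported_by: str

    def report_deadline(self) -> datetime:
        return self.discovered_at + REPORTING_WINDOW

incident = AIIncident(
    discovered_at=datetime(2025, 3, 3, 9, 30),
    tool="ExampleDraftBot",               # hypothetical tool
    what_happened="Customer list pasted into an unapproved chatbot",
    restricted_data_involved=True,
    reported_by="j.doe",
)
print(incident.report_deadline())         # 24 hours from discovery
```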

10. Review cadence & training

AI capabilities change monthly. A policy written in 2024 that listed specific tools by name is already out of date.

Two mechanisms keep yours current:

  • Scheduled review. At least every six months, plus any time a new category of capability becomes broadly available, a regulation changes, or an incident reveals a gap.
  • Onboarding and refresh training. New hires complete AI training within their first 30 days. Everyone refreshes annually. Material policy updates trigger a quick supplement to the team.

The discipline that matters most: never hardcode tool names into the policy itself. The approved tools list belongs in a separate register the AI Governance Lead maintains. That way the policy stays stable while the tool list updates quietly in the background.
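
What that separate register looks like matters less than that it exists outside the policy. Here's one possible shape, sketched as data; the products, fields, and dates are made up for illustration.

```python
# Approved-tools register, kept outside the policy document and owned by the
# AI Governance Lead. Every entry here is made up for illustration.
APPROVED_TOOLS = [
    {
        "tool": "ExampleDraftBot",        # hypothetical product
        "category": "chatbot",
        "approved_tiers": ["low", "medium"],
        "approved_data": ["marketing copy"],
        "enterprise_account": True,
        "last_reviewed": "2025-06-01",
    },
    {
        "tool": "ExampleCodeAssist",      # hypothetical product
        "category": "code assistant",
        "approved_tiers": ["low"],
        "approved_data": ["non-production code"],
        "enterprise_account": False,
        "last_reviewed": "2025-06-01",
    },
]
```

The policy names the categories; only this list names products, so updating it doesn't mean re-approving the policy.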

Common pitfalls

A few patterns I've seen when companies try to write their AI governance template from scratch:

  • Listing tools by name in the policy itself. Six months later, half the names are dead products and the whole policy looks neglected. Use a separate register.
  • Writing 30 pages. Nobody finishes a 30-page policy. Three pages plus an appendix register is the right size for most companies under 5,000 people.
  • No risk tiering. Treat the autonomous agent the same as the email summarizer and you'll either over-restrict the email or under-restrict the agent. Both fail.
  • No incident reporting deadline. Without a number, incidents drift. Pick one. 24 hours from discovery is a sensible default.
  • No owner. If you can't name the AI Governance Lead, you don't have an AI governance template. You have a document.
  • Skipping legal review. Even a strong template should go past your counsel before rollout. The structure can be standard; the wording often needs to fit your jurisdiction and industry.

FAQ

Do I need separate AI governance and AI use policies?

For most companies under 5,000 people, no. One document covering acceptable use, governance, and disclosure is easier to maintain and easier for employees to remember. Larger or more regulated companies sometimes split governance (the framework) from use (the rules), but that's a structural choice, not a requirement.

How long should an AI governance template be?

Three to five pages of policy, plus an appendix-style register for approved tools and roles. Longer than that and the people who need to read it won't.

Will a template that's a year old still be relevant?

Only if the framework is structural, not tool-specific. A template that names "ChatGPT" or "GitHub Copilot" in the policy itself is dated the day a new model ships. A template that defines categories like "code assistant" or "agentic system" stays useful, and only the tools register needs updating.

Can I just have my legal team write one from scratch?

You can. Expect 2 to 4 weeks of legal time and somewhere between $2,000 and $5,000 in fees. Starting from a solid template and having counsel review it ends up faster and cheaper, and the result is usually about the same.

Do small companies actually need this?

If your team uses AI tools at all, yes. The risk doesn't scale with headcount. A five-person team can leak customer data into ChatGPT just as easily as a five-thousand-person team. The smaller the company, the lighter the document can be, but it still has to exist.

The bottom line

The structure above is the framework. Filling it in with the actual policy language (the sentences that show up in front of your team) is the work. You can do that from scratch (slow), with a lawyer ($2,000 to $5,000), or by starting from a template that's already drafted and reviewed.

So if you want to skip the drafting and just get something rolled out this week, that's literally what I built the AI Policy Template for. Word doc and Notion, lawyer-approved, $52, 7-day money-back guarantee. Same structure as the framework above, with the wording already filled in.

Either way: the part that actually matters is that the document exists. Once it does, AI use stops being something every employee quietly figures out on their own and becomes something your company has actually decided about.

That's it. Now go write your policy.

Get the lawyer-approved template for $52.

3-page Word doc + Notion template. Fill in the blanks and roll it out this week.

Get the AI policy template