A Practical Framework for Drafting Your Organization’s AI Policy

Published on: Aug 18, 2025

Artificial Intelligence is moving from pilot projects to embedded business operations. For in-house legal teams and general counsel, the question is no longer whether to have an AI policy—it’s how to create one that is credible, practical, and fit for purpose.

Below is a pragmatic framework you can use to structure your organization’s AI policy, alongside examples from leading institutes that have already published templates.

Step 1: Understand Your Organizational Processes and Needs

A policy is not just words on paper pulled from the internet. It needs to be an actionable guide for your organization. So before you can draft an AI policy, you need to start with a map, a course, and a model.

The Map

Because your policy should both set rules on AI use and establish processes for approving it, start with a map of which teams and roles in your organization own decision-making around AI. This is unlikely to be a single person or team; more likely, multiple teams are involved, potentially including procurement, IT, security, legal, compliance, data governance, operations, and the teams that own particular data sets.
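One way to make the map actionable is to keep it as a small, machine-readable register that an approval workflow can consult. Here is a minimal sketch in Python, assuming hypothetical team names and decision areas (your map will differ):

```python
# Hypothetical decision-ownership map; team names and decision areas
# are illustrative, not prescriptive.
AI_DECISION_MAP = {
    "tool_procurement": ["procurement", "it", "security"],
    "training_data_use": ["data_governance", "legal"],
    "customer_facing_features": ["legal", "compliance", "operations"],
}

def owners_for(decision_area: str) -> list[str]:
    """Return the teams that must weigh in on a given AI decision area."""
    # Default to legal review when an area has no mapped owner yet.
    return AI_DECISION_MAP.get(decision_area, ["legal"])

print(owners_for("customer_facing_features"))
# -> ['legal', 'compliance', 'operations']
```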

The Course

AI opens the door to huge opportunities and huge risks. Before you can finalize an AI policy, the leadership team needs a general meeting of the minds on which risks are acceptable, which are not, and which require individual review. Because the AI landscape is evolving so rapidly, these assessments should be revisited more often than they would be for other technologies.
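One way to record the outcome of that alignment is a simple three-tier rubric. The sketch below uses hypothetical use cases and assignments; the real thresholds are for your leadership team to set:

```python
from enum import Enum

class RiskTier(Enum):
    ACCEPTABLE = "acceptable"            # pre-approved, no individual review
    REVIEW_REQUIRED = "review_required"  # routed for case-by-case sign-off
    PROHIBITED = "prohibited"            # never permitted

# Illustrative assignments only; revisit regularly as the landscape shifts.
USE_CASE_TIERS = {
    "summarizing_public_documents": RiskTier.ACCEPTABLE,
    "drafting_customer_contracts": RiskTier.REVIEW_REQUIRED,
    "automated_employment_decisions": RiskTier.PROHIBITED,
}
```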

The Model

Your AI policy should match the format, tone, and complexity of your organization’s other policies. AI policies can be extremely detailed and complex, or short and simple; the form that is right for your organization should be informed by how you have historically written policies.

Step 2: Define the Purpose and Scope

  • Why the policy exists (protecting stakeholders, compliance, trust).

  • Where it applies (enterprise-wide, or limited to specific teams/uses).

  • Who it applies to (employees, contractors, vendors).

Step 3: Understand the Current Regulatory and Standards Landscape

Your AI policy should enable your organization to meet its regulatory obligations, adhere to your current security and governance standards, and meet your customers’ expectations. Laws and frameworks to consider include ISO/IEC 42001, the NIST AI Risk Management Framework, the OECD AI Principles, the EU AI Act, the laws of any other jurisdiction in which you do business, and industry-specific regulations. In addition, be aware of laws that are not specific to AI but may affect how it can be used, such as privacy laws, intellectual property laws, and employment laws.

Leverage resources from organizations such as the AI Governance Library and the Responsible AI Institute, or from your outside counsel.

Step 4: Start Drafting

A robust AI policy typically covers:

  1. Responsible Use Principles – Transparency, fairness, accountability, security, privacy.

  2. Acceptable vs. Prohibited Use – What AI tools are sanctioned and what uses are out of bounds (e.g., discriminatory practices, sensitive data risks).

  3. Governance & Oversight – Who is accountable (AI steering committee, legal, compliance).

  4. Risk Management – Process for reviewing, testing, and monitoring AI systems.

  5. Compliance – References to the EU AI Act, sector-specific regulations, and internal company policies.

  6. Enforcement & Training – Expectations, disciplinary measures, and employee education.
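Items 2 through 4 above are where the policy turns into day-to-day decisions. A minimal, hypothetical intake gate (all tool and use-case names invented for illustration) might combine them like this:

```python
from dataclasses import dataclass

# Hypothetical policy data; your actual policy defines the real lists.
SANCTIONED_TOOLS = {"approved_vendor_assistant", "internal_llm"}
PROHIBITED_USES = {"automated_employment_decisions"}

@dataclass
class AIUseRequest:
    tool: str
    use_case: str
    handles_sensitive_data: bool
    has_committee_signoff: bool

def review(req: AIUseRequest) -> str:
    """Apply acceptable-use, governance, and risk checks in order."""
    if req.use_case in PROHIBITED_USES:
        return "rejected: prohibited use"
    if req.tool not in SANCTIONED_TOOLS:
        return "escalated: unsanctioned tool, route to IT and security"
    if req.handles_sensitive_data and not req.has_committee_signoff:
        return "escalated: sensitive data needs steering-committee sign-off"
    return "approved"

print(review(AIUseRequest("internal_llm", "meeting_summaries", False, False)))
# -> approved
```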

Instead of starting from scratch, you can begin with a template provided by experts and modify it to meet the needs you identified in the steps above. Here are four you may want to consider as starting points.

  • Responsible AI Institute – Hybrid template integrating ISO 42001 and the NIST AI RMF, with EU AI Act and U.S. Executive Order terminology. Complexity: high (comprehensive, broad coverage). Best for: enterprises needing a credible “all of the above” policy with regulatory credibility.

  • SANS (CRF + SANS Institute) – AI policy with explicit EU annexes, written from a cybersecurity-first perspective. Complexity: medium (balances governance and security). Best for: companies needing EU compliance coverage and a strong security emphasis.

  • ISACA – Readable, medium-length AI policy with governance guardrails. Complexity: medium-low (digestible but credible). Best for: mid-sized organizations seeking a balance between compliance and accessibility.

  • AIHR – Simple, employee-facing AI usage guidelines. Complexity: low (short, clear, high-level). Best for: startups and HR teams needing quick rollout and workforce awareness.

Step 5: Keep It Alive

AI policies are not “set and forget.” Establish a review cadence (e.g., quarterly or every six months) to account for regulatory updates, new use cases, and lessons learned from deployments.

Final Thought

Your AI policy is both a shield (against regulatory, reputational, and security risks) and a compass (guiding responsible innovation). The examples above give you a starting point; the framework ensures you can tailor them to your organization’s risk appetite and culture.