Artificial Intelligence is moving from pilot projects to embedded business operations. For in-house legal teams and general counsel, the question is no longer whether to have an AI policy—it’s how to create one that is credible, practical, and fit for purpose.
Below is a pragmatic framework you can use to structure your organization’s AI policy, alongside examples from leading institutes that have already published templates.
A policy is not just words on paper pulled from the internet; it needs to be an actionable guide for your organization. Before you can draft an AI policy, you need to start with a map, a course, and a model.
The Map
Because your policy should both set rules for AI use and establish processes for approving it, start with a map of which teams and roles in your organization own decision-making around AI. This is unlikely to be a single person or team; more likely, multiple groups are involved, potentially including procurement, IT, security, legal, compliance, data governance, operations, and the teams that own particular data sets.
The Course
AI opens the door to huge opportunities and huge risks. Before you can finalize an AI policy, the leadership team needs a general meeting of the minds on which risks are acceptable, which are not, and which require individual review. Because the AI landscape is evolving so rapidly, these assessments should be revisited more regularly than is typical for other technologies.
The Model
Your AI policy should match the format, tone, and complexity of your organization’s other policies. AI policies can be extremely detailed and complex or they can be short and simple. The form that is right for your organization should be influenced by how you have historically written policies.
Your AI policy should be sufficient to enable your organization to meet its regulatory obligations, adhere to your current security and governance standards, and meet your customers' expectations. Laws and frameworks to consider include ISO/IEC 42001, the NIST AI Risk Management Framework, the OECD AI Principles, the EU AI Act, the laws of any other jurisdiction in which you do business, and industry-specific regulations. In addition, be aware of laws that are not specific to AI but may affect how it can be used, such as privacy, intellectual property, and employment laws.
Leverage resources published by organizations such as the AI Governance Library and Responsible AI, or consult your outside counsel.
A robust AI policy typically covers: scope and definitions; acceptable and prohibited uses; the approval process for new AI tools and use cases; roles and responsibilities for governance; data handling and confidentiality requirements; vendor and procurement review; and monitoring, training, and periodic review.
Instead of starting from scratch, you can begin with a template provided by experts and modify it to meet the needs you identified in the steps above. Here are four you may want to consider as starting places.
AI policies are not "set and forget." Establish a review cadence (quarterly or semiannually) to account for regulatory updates, new use cases, and lessons learned from deployments.
Your AI policy is both a shield (against regulatory, reputational, and security risks) and a compass (guiding responsible innovation). The examples above give you a starting point; the framework ensures you can tailor them to your organization’s risk appetite and culture.