Generative AI – framing a business-centric policy to address opportunities and risks
As companies consider the benefits of using generative artificial intelligence, they must also balance such benefits against certain legal and business risks.
In this article, we look at possible frameworks for a policy that strategically leverages the benefits from using generative AI while mitigating risks.
The objective of a generative AI policy
There are several benefits associated with developing and implementing a policy to manage the use of generative AI across a business.
First, having a policy should help the company realize the benefits of using certain generative AI tools and services, while ensuring the company addresses the risks of such use from a company infrastructure, legal, and liability standpoint.
Second, having an established policy should provide guidance on the ways generative AI tools and services can be used in a manner that preserves the company’s intellectual property rights and proprietary information, in particular its rights in trade secrets and other confidential information.
Third, the policy should align with the terms and conditions governing the use of approved generative AI tools or services for the company to minimize the likelihood of incurring liability from breaching those terms and conditions.
Scope of your policy
Who: In practice, a policy should apply to anyone who uses generative AI in their work: employees (including directors and officers), independent contractors and consultants, and any third-party service providers involved in product development and business operations.
When and how: Your generative AI policy should set out how generative AI may be used within a company. You may allow it to be used in such areas as marketing, advertising, human resources, or software and product development, while limiting its use in other areas. You may also want to set certain limits on such usage. For example, you may allow use of generative AI when you are developing brief, simple marketing materials, but not when you are developing a broader advertising campaign. The policy should provide sufficient guidance to cover any uses of generative AI tools and services applicable to the company and the industry in which it operates.
Why: The policy should identify potential risks associated with the use of generative AI and, where applicable, ways to mitigate those risks. We believe it is also important to provide context that explains the purpose of the policy, why certain uses of generative AI are problematic, why you’ve put certain elements of the policy in place, and ways risks may be mitigated.
What: Your generative AI policy should be consistent and compatible with your other existing policies. Importantly, a new generative AI policy should not, in most cases, replace any existing company policies, including those regarding the protection of intellectual property, employee responsibilities, and any open source policy. Company personnel and third-party service providers should be required to comply with the generative AI policy separate and distinct from the requirement to comply with all other company policies, procedures, and regulatory requirements.
Defining a framework for your policy
At one extreme, some companies may ban all use of generative AI outright. At the other extreme, companies may have no generative AI policy at all, leaving company personnel and third-party service providers free to use generative AI so long as they do not violate any other company policy. Between these extremes, companies may permit the use of generative AI for some use cases while prohibiting it for others where the attendant risk is perceived as too high (for example, product development). In the case of product development involving software, a company may permit the use of generative AI for debugging or to develop code that will be used internally, but not for code that is distributed externally.
Where a company wants to consciously put in place a strategic, tailored approach to using generative AI, we suggest adopting a policy that includes a procedure for reviewing and approving the use of generative AI prior to its implementation and deployment.
In such an approach, any proposed use of generative AI would be reviewed and approved, typically by the company’s legal department, before being implemented. By requiring an approval process for each use case, a company would also be able to track how generative AI is being used within the company, as well as how those within the company are hoping to use it.
A request to use generative AI for a particular use case would include sufficient information for the request to be fully analyzed, such as:
- The identity of the business group intending to use the generative AI tool (for instance, human resources, marketing, product development)
- The name of the generative AI tool to be used
- The current stage of the AI tool’s lifecycle (in development or ready to deploy)
- The estimated cost of use
- A detailed description of the proposed use case, including:
  - the intended operator and end user
  - whether the use case is high impact or high risk
  - the nature of the inputs
  - the likely or desired output from the generative AI tool
  - the benefits the user hopes to derive from the use of the generative AI tool, including how the output will be used by the business
  - the identification of any of the risks set forth in the policy that are likely to be most relevant, and how they are relevant, and
  - how the user intends to mitigate those risks.
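For companies that formalize this intake process, the fields above map naturally onto a structured request record. The sketch below is a minimal, hypothetical illustration in Python; the class and field names are our own assumptions, not part of any standard policy template or tool.

```python
from dataclasses import dataclass, field

@dataclass
class GenAIUseRequest:
    """Hypothetical intake record for a generative AI use-case request.

    Fields mirror the information a reviewer (e.g. the legal department)
    would need to fully analyze the request.
    """
    business_group: str              # e.g. "human resources", "marketing"
    tool_name: str                   # generative AI tool to be used
    lifecycle_stage: str             # "in development" or "ready to deploy"
    estimated_cost: float            # estimated cost of use
    use_case_description: str        # detailed description of the proposed use
    operator_and_end_user: str       # who runs the tool and who consumes output
    high_impact_or_high_risk: bool   # flags the case for closer review
    input_nature: str                # nature of the inputs
    expected_output: str             # likely or desired output
    expected_benefits: str           # benefits, including how output is used
    relevant_risks: list[str] = field(default_factory=list)  # risks from the policy
    mitigation_plan: str = ""        # how the user intends to mitigate those risks
```

Capturing requests in a structured form like this also supports the tracking goal discussed above: each approved record doubles as an entry in the company's repository of generative AI use.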
Any request should be assessed in light of the potential benefits and risks of using generative AI for the requested use case. Further, for high-impact or high-risk use cases, a company may want to consider reviewing inputs proposed for use and any outputs produced.
Of course, if a proposed use is not approved, then the generative AI tool may not be used for that use case.
In some cases, the policy might either prohibit or pre-approve at the outset certain uses of generative AI. In policies that adopt such a framework, “prohibited” would refer to use of specific generative AI tools for specific use cases which present or have the potential to present risks a company determines are not easily or reasonably mitigated. “Pre-approved” would refer to uses of specific generative AI tools for specific use cases which a company determines to be lower and manageable risk, or where the benefits of using generative AI have been determined to outweigh the corresponding risks. Ideally, even where a use case has been deemed “pre-approved,” the company would track the use to maintain an accurate repository of generative AI use within the company’s business operations.
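The prohibited / pre-approved / requires-review split described above amounts to a simple lookup before the full approval process is invoked. The sketch below is a hypothetical illustration in Python; the group names, use cases, and function are invented for the example and are not drawn from any actual policy.

```python
# Hypothetical policy tables: entries are (business group, use case).
# Real policies would define these per tool and per use case.
PROHIBITED = {
    ("marketing", "broad advertising campaign"),
}
PRE_APPROVED = {
    ("engineering", "internal code debugging"),
}

def triage(group: str, use_case: str) -> str:
    """Classify a proposed use before routing it for review.

    Pre-approved uses should still be tracked so the company maintains
    an accurate repository of generative AI use; everything else goes
    through the case-by-case approval process.
    """
    key = (group, use_case)
    if key in PROHIBITED:
        return "prohibited"
    if key in PRE_APPROVED:
        return "pre-approved"
    return "requires review"
```

Even in this simplified form, the design choice is visible: the default outcome is "requires review", so any use case not explicitly listed still passes through the approval procedure rather than slipping through untracked.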
Developing and adopting a generative AI policy is a key step for companies considering their approach to this brave new world. Of course, business uses of generative AI are still in their infancy. We can confidently predict that as the legal considerations surrounding artificial intelligence continue to evolve, more commonly accepted industry practices will emerge, and as new features and functionalities appear, the terms governing the use of generative AI tools and services will themselves change. While we have provided a framework for a business-centric policy that is relevant today, we encourage companies to review their policy regularly to ensure that their use of generative AI tools and services remains current, consistent with industry standards, and compliant with any newly applicable laws. For further guidance on developing or updating a generative AI policy, please contact any of us.