
13 June 2023 | 10 minute read

Generative AI for essential corporate functions: Use cases and legal considerations

As technology transactions lawyers, we are always thinking about ways new innovations influence our practice – and that has been especially so as we consider generative artificial intelligence (AI). As this new technology races to the fore, what should companies know about it? How might it affect the ways they do business, hire employees, and craft contracts?

In this article, we look at a few possible use cases for generative AI tools through the lens of a typical corporation's essential functions and think through some legal considerations for each.

What is generative AI?

At its most basic level, a generative AI tool generates “output,” typically in response to instructions from a user, referred to as the “input” or “prompt.” The output is based on an algorithmic model that has been trained on vast amounts of data – which could be text, images, music, computer code or virtually any other type of content. Given the breadth of this training data and the range of possible prompts, the potential use cases are theoretically unlimited – and so are the risks.

For some time now, we've been relying on algorithms to analyze inputs and make predictive outputs. We've all experienced this when we apply for a credit card and our application is scored based on a set of data. But what makes generative AI different from more familiar algorithm-based machine learning technology is that it draws on enormous – and potentially unlimited – sources of data to almost instantaneously create seemingly new, task-appropriate rich content: essays, blog posts, poetry, designs, images, videos, and software code.

Use cases

Employment, HR and data privacy

Bias. If a business uses a generative AI tool in making employment decisions, such as creating a set of interview questions for a specific position, algorithmic bias may be a key issue. Biased data inadvertently or deliberately introduced at the input stage may result in biased hiring decisions, which may in turn lead to liability for the company.

In 2022, more than a dozen states introduced AI bills addressing this concern, and further legislation is anticipated this year. At the federal level, the Equal Employment Opportunity Commission (EEOC) has provided guidance on the use of AI in the context of the Americans with Disabilities Act (ADA), and EEOC enforcement actions involving AI bias are expected to increase. One recent case brought by the EEOC alleges that the defendant programmed its tutoring application software to reject female applicants aged 55 or older and male applicants aged 60 or older.

Privacy. When providing input to a generative AI tool, remember that personal data likely falls within the scope of applicable privacy laws. Using personal data as an input to a third-party generative AI tool likely requires disclosure to the individual whose data is being used, as well as that individual's consent to the use. In fact, the terms of use of some generative AI platforms require users who input personal data to represent that they are using that data in accordance with applicable laws, including providing all required notices and obtaining all necessary consents.

Additionally, to the extent applicable law gives a data subject the right to request the deletion of their data, is it clear that the vendor of a generative AI tool is capable of, and indeed will, carry out the deletion request? Further, the user of the generative AI tool may have granted the vendor an ongoing right to use the input, even after the user stops using the tool. For example, some terms of use provide that the platform vendor's rights to the input, including any personal data, survive termination.

Key takeaways. Have a governance process in place to assess the risks of using AI in employment decisions: identify potential sources of bias, consider whether it is possible to shift risk to the tool vendor, and reserve rights to remedies in the contract terms in case a claim is brought based on decisions made using the tool. Also consider implementing a way to monitor employment decisions that are based on AI output for possible discrimination or bias. On the privacy side, ensure that privacy laws are followed if personal data is used as an input, and de-identify personal data before inputting it (a simple illustration follows).
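To make the de-identification step concrete, below is a minimal, purely illustrative Python sketch that redacts a few common identifiers (email addresses, phone numbers, Social Security numbers) before a prompt leaves the company's systems. The patterns and function names are hypothetical examples, not a reference to any particular platform; a production program would typically rely on dedicated PII-detection and data-governance tooling rather than simple regular expressions, and would also address names, addresses, and other identifiers.

import re

# Illustrative only: redact common identifiers before text is sent to a
# third-party generative AI tool. Real de-identification programs typically
# use dedicated PII-detection tooling rather than simple regular expressions.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_PATTERN = re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def deidentify(text: str) -> str:
    """Replace common personal identifiers with placeholder tokens."""
    text = EMAIL_PATTERN.sub("[EMAIL]", text)
    text = PHONE_PATTERN.sub("[PHONE]", text)
    text = SSN_PATTERN.sub("[SSN]", text)
    return text

prompt = ("Draft interview questions for the analyst role; the candidate's "
          "resume lists jane.doe@example.com and 555-123-4567.")
print(deidentify(prompt))  # identifiers are replaced before any external call

Even with a step like this in place, confirm that the tool's terms of use and the company's privacy notices cover whatever data is ultimately submitted.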


Marketing and intellectual property

There may be several potential use cases for generative AI in a company's marketing function – and they typically raise intellectual property concerns.

Copyright. A generative AI tool's terms of use may state that the user owns the input and the output. However, ownership of the original source material behind the output – i.e., the data the tool was trained on – is likely unclear. That material may well be owned by others who have not authorized the use of their content, and so your use of output derived from it could violate their ownership rights. A recent example is the Getty Images case, in which the stock photography company objects to the use of its copyrighted works by a generative AI tool that allegedly incorporated millions of Getty-owned images into its training data without permission or compensation.

Similarly, perhaps a marketing manager is facing a tight deadline to draft text for a report for the company website. A simple prompt to a generative AI tool may provide what looks like the perfect text. But is copyright protection available when that work was created by a generative AI tool? For the time being, at least in the US, copyright protection is only available for works where a human is the author or where human creativity was involved in the selection, coordination, and arrangement of the work.

Key takeaways. The output you receive from a generative AI tool may not be eligible for copyright protection in the US and may even infringe someone else's rights. Also, note that the availability of the fair use defense in the context of generative AI is still unsettled. Furthermore, fair use is a courtroom defense to a claim of copyright infringement – by the time your company gets to that point, it is already dealing with the expense, publicity, and reputational harm of litigation.


Trademarks. In another potential use case, a user prompts a generative AI tool to create a new trademark. It may be simple to generate an attractive set of potential marks from a modest prompt, but that is only the start of the strategic IP protection process. For instance, is the output protectable as a trademark? Does it create a likelihood of confusion with another mark?

Key takeaways. The use of generative AI may supplement the company's research or ideation process, but it should not replace an intellectual property program. A key element of good governance is having a human in the loop at key decision points. Protecting your intangible assets, including through a proper clearance process, remains as essential as ever.


Product development/support

Many companies' engineering and product development teams are expressing interest in generative AI tools – attracted by benefits similar to those of open source software, such as the possibility of accelerating product development.

Software development. Consider the potential risks in this example: a software developer inputs the source code of the company's proprietary program into a generative AI tool in order to convert it from one programming language to another. But here's the rub: such code is enormously valuable – indeed, tech companies often refer to their software source code as their crown jewels. Unless the company has made the decision to open source its software, that code is treated as a trade secret – i.e., information that provides an economic advantage because it is not publicly known and reasonable efforts are taken to keep it secret.

So what happens when code that was considered trade secret information is input into a generative AI tool? Your formerly proprietary information could then be seen and used by others, and you may have destroyed the legal argument that reasonable efforts were being taken to maintain its secrecy. In other words, information risks losing its status as a trade secret once it is used as an input to a generative AI tool.

Notably, one popular generative AI platform's terms of use include a confidentiality clause that obligates only the user to protect confidential information made available to them; there is no express obligation of confidentiality protecting the user's inputs – though there is an option to opt out of having inputs used to train the tool.

Key takeaways. As we saw in the marketing use cases, enforcing and protecting a company's intellectual property is a high priority in today's knowledge-based economy. For software developers, inputting your code into a generative AI tool may put your company's proprietary information at risk. Similarly, using code generated by an AI tool may put you at risk of infringing someone else's IP. If that output is then monetized and made available to the market commercially, the potential liability for infringement could multiply with each customer that uses the output.
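As a purely illustrative sketch of the kind of guardrail that can support these takeaways, the Python snippet below refuses to forward source files carrying a confidentiality marker to an external generative AI tool. The marker strings, file-handling policy, and function names are hypothetical assumptions for this example; in practice such controls are usually enforced through access controls, network proxies, and developer policy rather than a single check.

from pathlib import Path

# Illustrative only: a pre-submission check that blocks files marked as
# confidential or proprietary from being sent to an external generative AI
# tool. The marker strings and policy below are hypothetical.
CONFIDENTIAL_MARKERS = ("CONFIDENTIAL", "TRADE SECRET", "PROPRIETARY")

def is_safe_to_submit(path: Path) -> bool:
    """Return True only if the file carries no confidentiality marker."""
    text = path.read_text(errors="ignore").upper()
    return not any(marker in text for marker in CONFIDENTIAL_MARKERS)

def convert_with_external_tool(path: Path) -> None:
    """Refuse to send marked files to an external conversion service."""
    if not is_safe_to_submit(path):
        raise PermissionError(
            f"{path} appears to contain proprietary code; "
            "do not submit it to an external generative AI tool."
        )
    # The actual call to the external conversion service would go here.
    print(f"{path} passed the pre-submission check.")

A check like this does not substitute for contractual protections or for reviewing the tool's terms of use; it simply reduces the chance of an inadvertent disclosure.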


Going forward

In sum, while generative AI can help accelerate the development of your intangible assets, careful use of these tools is essential to preserve the value of your intellectual property and to ensure you are not inadvertently infringing assets owned by others. And before using a generative AI tool, it would be wise to review the applicable terms of use or service to understand the rights a business may be giving to the tool provider and the rights the user may be receiving, if any.

This article originally was published by Bloomberg Law on April 27, 2023.

