Generative AI for essential corporate functions: Use cases and legal considerations
As technology transactions lawyers, we are always thinking about how new innovations influence our practice – and never more so than with generative artificial intelligence (AI). As this new technology races to the fore, what should companies know about it? How might it affect the ways they do business, hire employees, and craft contracts?
In this article, we look at a few possible use cases for generative AI tools through the lens of a typical corporation's essential functions and think through some legal considerations for each.
What is generative AI?
At its most basic level, a generative AI tool generates "output," typically in response to instructions from a user, referred to as the "input" or "prompt." The output is based on an algorithmic model that has been trained on vast amounts of data – which could be text, images, music, computer code, or virtually any other type of content. Given the breadth of this training data, the potential use cases are theoretically unlimited – and so are the risks.
For some time now, we have been relying on algorithms to analyze inputs and make predictive outputs. We've all experienced this when we apply for a credit card and our application is scored based on a set of data. What makes generative AI different from that more familiar algorithm-based machine learning technology is that it draws on enormous – and potentially unlimited – sources to almost instantaneously create seemingly new, task-appropriate rich content: essays, blog posts, poetry, designs, images, videos, and software code.
Employment, HR and data privacy
Bias. If a business uses a generative AI tool in making employment decisions, such as creating a set of interview questions for a specific position, algorithmic bias may be a key issue. Data inadvertently or deliberately introduced in the input phase may result in biased hiring decisions, which may lead to liability for the company.
In 2022, more than a dozen states introduced AI bills addressing this concern, and further legislation is anticipated this year. At the federal level, the Equal Employment Opportunity Commission (EEOC) has provided guidance on the use of AI in the context of the Americans with Disabilities Act (ADA), and EEOC enforcement actions involving AI bias are expected to increase. One recent case brought by the EEOC alleges that the defendant programmed its tutor application software to reject female applicants aged 55 or older and male applicants aged 60 or older.
Key takeaways. Put a governance process in place to assess the risks of using AI in employment decisions by identifying potential sources of bias. Consider whether risk can be shifted to the tool vendor, and reserve contractual rights to remedies in case a claim is brought based on decisions made using the tool. Also consider implementing a way to monitor AI-based employment decisions for possible discrimination or bias. On the privacy front, ensure that applicable privacy laws are followed if personal data is used as an input, and de-identify personal data before inputting it.
Marketing and intellectual property
There are several potential use cases for generative AI in a company's marketing function – most of them involving intellectual property concerns.
Perhaps a marketing manager is facing a tight deadline to draft text for a report on the company website. A simple prompt to a generative AI tool may produce what looks like the perfect text. But is copyright protection available when that work was created by a generative AI tool? For the time being, at least in the US, copyright protection is available only for works where a human is the author or where human creativity was involved in the selection, coordination, and arrangement of the work.
Key takeaways. The output you receive from a generative AI tool may not be eligible for copyright protection in the US and may even infringe someone else's rights. Note, too, that the availability of the fair use defense in the context of generative AI is still unsettled. And fair use is a courtroom defense to a claim of copyright infringement – by the time your company gets to that point, it is already dealing with the expense, publicity, and reputational harm of litigation.
Trademarks. In another potential use case, a user prompts a generative AI tool to create a new trademark. It may be simple to generate an attractive set of potential trademarks based on a modest prompt, but that is only the start of the strategic IP protection process. For instance, is this output protectable as a trademark? Does it create the likelihood of confusion with another mark?
Key takeaways. The use of generative AI may supplement the company's research or ideation process, but it should not replace an intellectual property program. A key element of good governance is having a human in the loop at key points. Protecting your intangible assets, including through a proper clearance process, remains as essential as ever.
Engineering, product development and trade secrets
Many companies' engineering and product development teams are expressing interest in generative AI tools – attracted by benefits similar to those of open source software, such as the possibility of accelerating product development.
Software development. Consider the potential risks in this example: a software developer inputs the source code of the company's proprietary program into a generative AI tool in order to convert it from one programming language to another. But here's the rub: such code is enormously valuable – indeed, tech companies often refer to their software source code as their crown jewels. Unless the company has made the decision to open source its software, that code is treated as a trade secret – ie, information that provides an economic advantage because it is not publicly known and reasonable efforts are taken to keep it secret.
So what happens when code that was trade secret information is input into a generative AI tool? Your formerly proprietary information could then be seen and used by others, and you may have destroyed the legal argument that the information remained subject to reasonable secrecy efforts. In other words, information risks losing its status as a trade secret if it is used as an input to a generative AI tool.
Key takeaways. As we saw in the marketing use cases, enforcing and protecting a company's intellectual property is a high priority in today's knowledge-based economy. For software developers, inputting your code into a generative AI tool may put your company's proprietary information at risk. Similarly, using code generated by an AI tool may put you at risk of infringing someone else's IP. If that output is then monetized and made available to the market commercially, the potential liability for infringement could multiply with each customer that uses the output.
This article was originally published by Bloomberg Law on April 27, 2023.