
Artificial Intelligence
27 February 2023 · 2 minute read

Regulatory, litigation and disclosure considerations concerning artificial intelligence

Question: What is the future of AI regulation?

ChatGPT: It is difficult to predict the future of AI regulation with certainty, as it will likely depend on a number of factors, including advances in AI technology and how it is used, as well as changes in public attitudes towards the use of AI. However, it is likely that as AI technology continues to advance and become more widespread, there will be an increased need for regulation to ensure that it is used safely and ethically. This may involve establishing guidelines for the development and use of AI, as well as creating oversight mechanisms to enforce those guidelines. Ultimately, the goal of AI regulation would be to ensure that the technology is used in a way that benefits society, while minimizing any potential negative impacts.

Note to readers: This article was not generated by ChatGPT. However, the attention surrounding the newest chatbot is representative of the growth in the use and influence of artificial intelligence (AI) and its potential implications across industries and society.
Global investment in AI is accelerating rapidly. According to the IBM AI Adoption Report (2022), 68% of US companies have knowingly deployed AI or are exploring its adoption. Far more are using AI algorithms without knowing it. Private investment in AI more than doubled from 2020 to 2021, reaching $93.5 billion. Nearly 80% of everyday devices now employ some form of AI. McKinsey has predicted AI could deliver “an additional global economic output of about $13 trillion by 2030, boosting global GDP by about 1.2 percent a year.”

The disruption and potential risks of AI are increasingly the focus of regulators around the world, and for good reason. AI differs in many respects from past technological iterations and disruptions: it is difficult to understand, challenging to oversee, adaptable and influential. As the US Department of Commerce’s National Institute of Standards and Technology recently pointed out in its long-anticipated, congressionally mandated risk framework, AI is “a uniquely challenging technology to deploy and utilize both for organizations and within society,” with at least 14 distinct types of risk “that are not comprehensively addressed by current risk frameworks and approaches.”

Further, AI isn’t limited to chatbots and image generators. It is increasingly being used in critical systems, from defense to healthcare to financial services. As a result, AI has the ability to amplify benefits and risks throughout key industries at enterprise scale. One healthcare AI with discriminatory outputs, for example, was deployed in a system covering hundreds of millions of patients.

At the same time, disclosure by public companies regarding the use and influence (and attendant risks) of AI within their businesses remains limited. As the use and influence of AI grow in significance, companies may need to consider the resultant impacts on their businesses from economic, regulatory, risk management and control perspectives and provide appropriate public disclosures regarding the same.

In this paper, we set out the regulatory, litigation and disclosure considerations concerning the use of AI that organizations should be keeping in mind today.
