
23 June 2025 · 1 minute read

Legal red teaming, one year in: Reports from the field

In our June 2024 white paper, Legal red teaming: A systematic approach to assessing legal risk of generative AI models, we introduced legal red teaming, a methodology that helps organizations developing and deploying generative artificial intelligence (GenAI) systems proactively surface and address legal and compliance risks through adversarial testing. Legal red teaming introduces law and regulation as a grounding framework for adversarial testing of GenAI, complementing more traditional technical and "sociotechnical" attack vectors.

Since the publication of our white paper, the landscape of AI risk assessment and the legal scrutiny surrounding AI has continued to evolve. Consequently, many organizations are no longer asking whether to engage in adversarial testing of their GenAI systems, but rather how and when.

In our latest white paper, we provide an update on legal red teaming, including lessons we have learned from deploying it in practice over the past year. Drawing on real-world engagements and feedback from our clients, we highlight common challenges, share patterns that have emerged across diverse use cases, and propose strategies and best practices for using legal red teaming and technical testing in a complementary way.
