
21 April 2026

FDA Warning Letter highlights risks of using AI in drug manufacturing

In the United States, facilities that manufacture drugs must implement quality systems that comply with current good manufacturing practice (cGMP) regulations in 21 CFR Parts 210 and 211. While cGMP requirements are not new, the manner in which companies have implemented them has evolved since the 1970s, from paper records to computerized systems and, more recently, to tools incorporating artificial intelligence (AI).[1] Failure to implement these systems correctly can lead to adverse inspectional findings and enforcement action by the US Food and Drug Administration (FDA).

Case in point: In April 2026, FDA for the first time cited a drug manufacturer for improper reliance on AI in carrying out its cGMP obligations. The Warning Letter discusses several examples of how the company used AI agents to generate core documents and records that manufacturers are required to maintain under cGMP regulations, such as drug product specifications, procedures, and master production records.

The Warning Letter also revealed that when investigators probed the company’s failure to comply with cGMP obligations, personnel responded that they were unaware of certain legal requirements because the AI agent the company relied on had not flagged them. FDA cautioned the company that AI-generated outputs or recommendations must undergo review and approval by an authorized Quality Unit (QU) representative.

FDA tied these violations to two specific cGMP regulations. Under 21 CFR § 211.22(c), the QU is responsible for approving or rejecting all procedures and specifications that affect the identity, strength, quality, or purity of a drug product. In this case, FDA concluded that the company improperly relied on AI-generated procedures and specifications without adequate review and approval by authorized QU personnel, underscoring that responsibility for cGMP compliance cannot be delegated entirely to AI tools. This finding reinforces the principle that for many good practice (GxP) functions, especially those that directly impact product quality or patient safety, a “human-in-the-loop” remains the norm and the expectation.

FDA also cited 21 CFR § 211.100, which requires drug manufacturers to establish and follow written production and process control procedures designed to ensure products have the identity, strength, quality, and purity they purport to possess.

During the inspection, FDA determined that the company had distributed drug products without first performing required process validation. When notified of this deficiency, the firm stated that it was unaware of the validation requirement because the AI agent did not identify it. FDA’s response makes clear that reliance on AI does not excuse the failure to implement fundamental controls. It also emphasizes (1) that AI used for GxP or other product development support functions must be validated for those purposes, and (2) that appropriate AI validation documentation should be integrated into quality system records and standard operating procedures (SOPs). FDA described these principles in its draft guidance, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products (Jan. 2025), which we discuss here.

Takeaways for the life sciences industry

To be clear, FDA does not prohibit the use of AI to assist with cGMP activities. However, this Warning Letter demonstrates the risks of overreliance on AI, which can lead to broad quality system failures. AI adoption does not minimize or eliminate a manufacturer’s ultimate responsibility to understand and comply with its regulatory obligations under cGMP. The letter is an important reminder that a human-in-the-loop must oversee, review, and approve all AI-generated outputs; according to FDA, the failure to do so constitutes a cGMP violation. Further, cGMP records and documentation must be accurate, complete, and compliant, regardless of how they were created.

For companies that have already integrated AI into their workflows or are considering available options, deploying AI in a cGMP environment raises unique regulatory compliance risks and considerations.

Companies may consider the following steps to proactively mitigate these issues:

  • Document the human-in-the-loop review and decision-making. In workflows with AI-generated outputs, companies are encouraged to provide clear instructions on how, when, and where to document that the outputs were reviewed and approved by qualified individuals in the QU.

  • Evaluate AI use within existing validation and governance frameworks. Before onboarding new AI tools, companies are encouraged to evaluate whether the tools are properly validated for the contemplated use cases consistent with their internal AI governance policies and FDA’s draft guidance, as discussed above.

  • Review written agreements with AI vendors to address regulatory expectations and responsibilities. To the extent that companies are procuring AI tools from third-party vendors to support regulated activities, they should carefully review their written agreements to ensure that regulatory compliance matters are addressed appropriately and not treated as an afterthought. Contractual provisions addressing data integrity, validation, change management, and representations around cGMP compliance may be critical depending on the context and application.

  • Update FDA inspection readiness training. The Warning Letter signals that investigators are increasingly scrutinizing how AI tools are being used in cGMP settings and how companies are exercising control and oversight. FDA inspection readiness training for staff should include the dos and don’ts around using AI to carry out cGMP activities. Training should reinforce that AI tools should never be treated as compliance shortcuts.

  • Consider broader implications for other GxP applications. While this Warning Letter focused on cGMP requirements, the same compliance risks apply across other GxP functions that have incorporated AI tools. Companies should consider conducting a cross-functional review of where AI is being used, how outputs are being reviewed and vetted, and whether existing SOPs adequately address these points.

DLA Piper is here to help

DLA Piper’s team of FDA and AI lawyers and data scientists assists organizations in navigating the complex workings of their AI systems to ensure compliance with current and developing regulatory requirements. We continuously monitor AI developments and their impacts on industries around the world.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI. Gain insights and perspectives that will help shape your AI strategy through our newly released AI ChatRoom series.

For further information or if you have any questions, please contact any of the authors.

[1] US Food and Drug Admin., “Guidance for Industry, Quality Systems Approach to Pharmaceutical CGMP Regulations” (Sept. 2006), https://www.fda.gov/media/71023/download
