16 May 2024 | 5 minute read

DOL and OFCCP release guidance on AI in employment

On April 29, 2024, the US Department of Labor (DOL) Wage and Hour Division (WHD) and the Office of Federal Contract Compliance Programs (OFCCP) issued separate guidance in response to President Joe Biden’s call for a coordinated US government approach to ensuring the responsible and safe development and use of artificial intelligence (AI). 

As discussed below, the guidance from both federal agencies carries largely the same underlying message: although the use of AI in the workplace has the potential to improve legal compliance, as well as productivity and efficiency in employment decision-making, employers ultimately remain responsible for their legal obligations under applicable federal and state laws.

OFCCP’s guidance

The OFCCP’s guidance relates to federal contractors’ and subcontractors’ Equal Employment Opportunity (EEO) compliance obligations. The two-part guidance includes (1) an FAQ about the use of AI and EEO and (2) “promising practices” for the development and use of AI in the EEO context. 

The FAQ portion of the guidance reminds federal contractors that the use of AI does not insulate them from the risk of violating EEO and nondiscrimination obligations in the workplace. The guidance also outlines recommendations (which the OFCCP clarified are not requirements) that contractors can incorporate into their practices to help avoid potential harm to workers and promote trustworthy development and use of AI. 

These “promising practices” include suggestions on notice to employees, general use of AI systems, due diligence on vendor-created AI systems, and accessibility and disability inclusion in the AI space. For example, the OFCCP encourages federal contractors to: 

  • Provide advance notice and appropriate disclosure to applicants, employees, and their representatives if the contractor intends to use AI in hiring or other employment decisions, so that individuals can understand how they are being evaluated

  • Routinely monitor and analyze whether the use of the AI system is causing a disparate or adverse impact – before implementation, during use at regular intervals, and after use, and

  • Retain and safely store documentation of the data used to develop or deploy the AI system with the contractor or ensure that such documentation is easily obtainable from the vendor.

While the OFCCP’s guidance does not elaborate on next steps for implementing these promising practices, employers may look to similar requirements that are already law in New York City. For example, New York City requires that AI tools used to make hiring and promotion decisions undergo a bias audit at least annually. That city’s law also requires employers to provide at least ten business days’ advance notice before using certain AI tools in connection with hiring and promotion decisions. Such notice need not be provided on an individualized basis; the requirement can be satisfied through a general notice in the employment section of the company’s website (for job applicants) or in its existing written policies or procedures (for employees).

WHD’s guidance

In Field Assistance Bulletin No. 2024-1 (FAB), the WHD provides guidance on compliance with federal labor standards for employers that use AI and other automated systems in the workplace. The FAB focuses on employers’ legal obligations under the Fair Labor Standards Act (FLSA), the Family and Medical Leave Act (FMLA), the Providing Urgent Maternal Protections for Nursing Mothers Act, and the Employee Polygraph Protection Act of 1988. These laws impose various obligations on employers, including tracking working time, monitoring break times, tracking waiting time, calculating wages due for different work rates and duties, keeping track of FMLA leave, and ensuring breaks and other protections for nursing mothers.

The FAB’s overarching guidance is that the same obligations apply to all employers, regardless of whether AI is used in the workplace. For example, the WHD emphasized that an employer that uses AI to assist with its wage and hour-related obligations is ultimately responsible for any noncompliance.

Accordingly, employers are encouraged to ensure that any use of AI in the workplace is paired with proper human supervision. Similarly, if an employer uses an employee monitoring tool to track work time based on the measurement and analysis of metrics of worker productivity or activity (eg, computer keystrokes, mouse clicks, and website browsing), there should also be human oversight to ensure that any time worked that may not be accurately captured by such metrics is still included in the company’s timekeeping system as “hours worked” under the FLSA.

Key takeaways

Employers using AI in the workplace are encouraged to ensure that humans review AI-generated recommendations and serve as the final decision-makers. Employers may also want to develop criteria for users to report discrepancies or other anomalies that could be indicative of bias, and to implement both external and internal auditing of tools by quality control reviewers. Further, federal contractors and subcontractors are urged to review the promising practices detailed in the OFCCP’s guidance and assess how they apply to current policies and procedures.

For more information, please contact the authors or your DLA Piper relationship attorney.