
3 October 2025
California AI employment regulations take effect: Top points
On October 1, 2025, regulations took effect that amend the California Fair Employment and Housing Act (FEHA)’s regulatory framework and regulate the use of artificial intelligence (AI) in employment within the state.
Originally published in June 2025, the regulations follow the September 2024 enactment of more than 30 bills governing AI in the California workplace.
In this alert, we summarize the regulations and discuss compliance obligations for employers going forward.
Key aspects of the regulations
1. Definition
The latest regulations apply to “automated-decision systems” (ADS), which are defined as a “computational process that makes a decision or facilitates human decision making regarding an employment benefit.” The regulations specifically identify the following non-exhaustive list of potential employment uses for ADS:
- Computer-based assessments or tests to:
  - Make predictive assessments about applicants or employees
  - Measure skills, dexterity, reaction time, and other abilities or characteristics
  - Measure personality traits, aptitude, attitude, and/or cultural fit
  - Screen, evaluate, categorize, or recommend applicants or employees
- Direct job advertising or recruiting materials to targeted groups
- Screen resumes for particular terms or patterns
- Analyze facial expressions, word choice, and/or voice in online interviews
- Analyze employee or applicant data obtained from third parties
Excluded from the definition of ADS are “word processing software, spreadsheet software, map navigation systems, web hosting, domain registration, networking, caching, website loading, data storage, firewalls, anti-virus, anti-malware, spam- and robocall-filtering, spellchecking, calculators, database, or similar technologies, provided that these technologies do not make a decision regarding an employment benefit.”
2. Unlawful discrimination
The regulations prohibit employers and other entities covered under FEHA from using an ADS or selection criteria that discriminates against applicants or employees on the basis of a characteristic protected by FEHA. The regulations expressly provide that evidence, or the lack thereof, “of anti-bias testing”; “similar proactive efforts to avoid unlawful discrimination”; “quality, efficacy, recency, and scope of such effort”; “the results of such testing”; “and the [employer’s] response to the results” are relevant to an employee’s discrimination claims and the employer’s defenses.
3. Employers’ agents
The term “agent” of an employer has been expanded to include third parties that “exercise a function traditionally exercised by the employer… includ[ing] applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave, including when such activities and decisions are conducted in whole or in part through the use of an automated decision system.” Under this definition, vendors that develop, administer, and/or provide AI tools to employers for use in employment-related decisions qualify as agents of the employer.
4. Prohibited medical inquiries
Assessments conducted by an ADS, such as tests, questionnaires, puzzles, games, and other challenges that elicit information about a disability, may constitute an unlawful medical inquiry.
5. Record maintenance and retention
Employers and covered entities are required to maintain records of “automated-decision system data” for a minimum of four years. This includes any data used in or resulting from the application of an ADS, data reflecting employment decisions and outcomes, and data used to develop or customize an ADS for use by a particular employer.
Key takeaways
Potential liability
The new regulations could increase risks related to the use of ADS. These include the following:
- Employers may be liable for utilizing an ADS in employment decisions that produces results that disproportionately impact members of a protected group (ie, disparate impact).
- Every applicant belonging to the affected group who was denied employment, and every employee belonging to the affected group who was denied an employment benefit (eg, promotion or raise) or experienced an adverse employment action (eg, layoff) resulting from the use of an ADS, may raise a potential claim.
- Employers can be potentially liable even if the ADS was developed or administered by a third-party vendor.
Audits and testing
Evidence of anti-bias testing can support the employer’s defenses to potential discrimination claims, demonstrate that an ADS did not produce a disparate impact against any protected group, and/or help establish that the employer did not have any unlawful or discriminatory intent. To mitigate risk, employers may consider the following measures:
- Obtaining an ADS vendor’s data and bias audits and testing before contracting services
- Performing regular bias audits and testing on any ADS currently in use
How DLA Piper can help
With a fully integrated technical and legal team, DLA Piper is capable of coupling AI testing solutions with legal analysis. Testing solutions may vary by client and include, for example, evaluating AI applications at the code base level; assessing AI against an array of standards and frameworks, such as those of the National Institute of Standards and Technology (NIST), International Organization for Standardization (ISO) 42001, and the EU AI Act; evaluating risks of internal and external AI models and systems; and offering legal recommendations.
For more information about California’s new AI employment regulations, including using AI to assist in employment and human resource functions under federal law and other state laws, as well as other considerations and risks related to AI, please contact any of the authors or your DLA Piper relationship attorney.


