September 2, 2025 – The California Civil Rights Department (CRD) has enacted new regulations establishing that employers may face liability for deploying artificial intelligence (AI) in employment decisions when the result is discriminatory. The new rules take effect on October 1, 2025, making California one of the first states to adopt extensive regulations addressing the use of AI as an Automated Decision System (“ADS”) in employment decisions.
The regulations broadly define an ADS as technology that assists human decision-making, whether through AI or other tools that provide automated decision-making capability. The definition can encompass machine learning, algorithms, statistics, and other data-processing techniques used to make employment decisions. For example, an ADS may evaluate an applicant’s tone of voice, facial expressions, or other traits connected to characteristics protected under the FEHA. Covered systems do not include word processing software, spreadsheet software, spellcheck, or similar technologies.
The new California regulations prohibit employers from using ADS to discriminate against an applicant or employee on the basis of a characteristic protected by the Fair Employment and Housing Act (FEHA), the California state law barring discrimination in employment and housing. In addition to clarifying existing anti-discrimination laws in the context of new technology, the regulations will require employers to maintain employment records that include automated-decision data. Importantly, the regulations:
- Prohibit employers from using ADS that discriminates against an applicant or employee on any basis covered by the FEHA.
- Establish that third-party “agents” using ADS for employment decisions are covered under the FEHA.
- Impose the duty to maintain employment records, including ADS decision-making data, for a minimum period of four years, extending the previous two-year requirement.
- Require employers to consider the need for reasonable accommodation if ADS screening is used during the hiring process.
- Provide a potential affirmative defense for employers who engage in “anti-bias testing or similar proactive efforts to avoid unlawful discrimination.”
- Affirm that the use of an automated decision-making system alone does not replace the need for individualized assessment when considering an applicant’s criminal history.
- Provide examples of tests or challenges used in ADS assessments that may constitute unlawful medical or psychological inquiries.
- Clarify that any effort to audit, test, or otherwise prevent ADS from engaging in unlawful discrimination will be considered in any claim or defense.
Coverage extends to ADS used through a third-party “agent” utilized or contracted by a covered employer. The regulations define “agent” as “any person acting on behalf of an employer, directly or indirectly, to exercise a function that is traditionally exercised by the employer.”
Employers are also prohibited from using ADS to discriminate by “proxy.” The regulations define “proxy” as a “characteristic or category closely correlated with a basis protected by the Act.” Employers whose ADS evaluates applicants or employees based on certain facial expressions, tone of voice, or other mannerisms may therefore be engaging in unlawful conduct, for example where those traits serve as a stand-in for screening based on national origin, race, or perceived disability.
Safe use of ADS requires adequate human oversight. Employers who choose to use ADS should monitor its impact, engage regularly in anti-bias testing, and ask vendors about their safeguards. In addition, the new regulations appear to leave room for the argument that even an ADS tool that appears on its face to have a disparate impact on the basis of a protected characteristic may still be defensible if the screening criteria at issue, such as degree, certification, or experience requirements, are job-related and consistent with business necessity. California employers may therefore want to scrutinize carefully any criteria their AI tools apply to confirm that this standard is met.
New regulations can be complex and burdensome to manage. Employers should take proactive measures now to avoid compliance issues. Please contact one of our Kullman attorneys for assistance in preparing for the regulations to take effect.