The Challenge Ahead: Addressing the Risks of Using Virtual Tools in Employment Decision Making

CATEGORY: Blog Posts
CLIENT TYPE: Nonprofit, Private Education, Public Education, Public Employers, Public Safety
DATE: Oct 30, 2024

OpenAI’s launch of ChatGPT nearly two years ago kicked off the rapid integration of artificial intelligence into society’s daily activities. Today, established tech giants and emerging startups alike seem to be adding some level of AI to every product. This developing landscape presents employers with the possibility of both increased efficiency and increased liability.

While the technology is new, the potential harms are familiar. In a lawsuit pending in the United States District Court for the Northern District of California, Mobley v. Workday, Inc., a plaintiff is suing the HR software provider Workday, alleging that its algorithmic decision-making tools screened employment applications in a discriminatory manner. The litigation has so far focused on whether the plaintiff could bring such a lawsuit in the first place. On that question, the court recently reasoned, “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject. Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one.” At least for this judge, employers must ensure that their AI tools comply with existing employment laws.

The EEOC’s Guidance on AI and Hiring

Absent new laws specifically addressing AI use, regulators aim to address potential AI risks under existing legal frameworks. The Equal Employment Opportunity Commission (“EEOC”) published guidance earlier this year focusing on actions employers may take to monitor their AI tools. The EEOC has taken the position that employers are responsible under Title VII for their use of AI tools even if another entity designed or administered them. The EEOC also noted that employers may be held responsible for the actions of their agents, such as software vendors.

The EEOC specifically focused on employers’ obligations to prevent “disparate impact” or “adverse impact” absent a business necessity. A disparate impact occurs when a selection procedure has the effect of disproportionately screening out members of a protected classification. For example, if an existing workforce has a large number of male supervisors, AI software may inappropriately correlate being male with success and favor male candidates for hiring and promotion.

As a rule of thumb, the EEOC uses the “four-fifths rule” to determine disproportionate impact. The selection rate of one group is substantially different from the selection rate of another if the ratio between the two rates is less than four-fifths, or 80%. For example, if a selection procedure selects 30% of Black applicants but only 60% of White applicants, the procedure may have a disparate impact on Black applicants. This is because the ratio of the selection rates (30% ÷ 60% = 1/2) is 50%, which is less than 80%.

Analyzing an AI tool for potential adverse impact is a relatively straightforward step because it focuses on the tool’s output data rather than attempting to decipher the technical parameters of the underlying algorithm. However, adverse impact is only one form of discrimination, and the “four-fifths” rule is only a general rule of thumb. Employers should still establish additional guardrails over AI use.
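For illustration only, the following is a minimal Python sketch of a four-fifths rule check applied to a tool’s output data, mirroring the 30%/60% example above. It assumes a hypothetical export of screening outcomes as (group, selected) pairs; the group labels, data format, and function names are placeholders and are not part of the EEOC guidance.

```python
# A minimal sketch of a four-fifths rule check over hypothetical screening output.
from collections import defaultdict

def selection_rates(records):
    """Compute the selection rate for each group from (group, selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag any group whose selection rate is below 80% of the highest group's rate."""
    highest = max(rates.values())
    return {group: rate / highest < threshold for group, rate in rates.items()}

# Hypothetical data mirroring the example above: 30% vs. 60% selection rates.
records = ([("Black", True)] * 30 + [("Black", False)] * 70
           + [("White", True)] * 60 + [("White", False)] * 40)
rates = selection_rates(records)
print(rates)                     # {'Black': 0.3, 'White': 0.6}
print(four_fifths_flags(rates))  # {'Black': True, 'White': False}
```

A check like this is only a screening heuristic; as noted above, it does not capture other forms of discrimination and does not substitute for a fuller validation of the selection procedure.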

Indeed, the EEOC’s recent Title VII guidance supplements 2022 guidance on the risk of violating the ADA through the use of AI tools. In that guidance, the EEOC noted areas of concern such as failing to provide a reasonable accommodation to applicants who cannot be fairly rated by automated application procedures or whose interactions with those procedures may reveal a medical restriction.

California’s Proposed Regulations

Late last year, Governor Gavin Newsom signed Executive Order N-12-23. The Executive Order instructed several California agencies to analyze and report on potential risks of AI on governmental functions. It also directed the agencies to establish guidelines ensuring responsible development of AI systems and to prepare the government for AI use.

Significantly, there may be new AI-focused state regulations on the horizon. On May 17, 2024, the Civil Rights Department’s Civil Rights Council (“Council”) noticed its Proposed Regulations to Protect Against Employment Discrimination in Automated Decision-Making Systems. The initial public comment period for the proposed regulations closed on July 18, 2024.

On October 17, 2024, the Council noticed its first modification of the proposed regulations. The comment period for the proposed modifications closes on November 18, 2024. Significantly, the Council is taking the position that an “agent” that utilizes an automated decision-making tool, directly or indirectly, on behalf of an employer to facilitate decision-making traditionally exercised by an employer is also an “employer.” The Council may be relying on the California Supreme Court’s recent holding in Raines v. U.S. Healthworks Medical Group (2023) 15 Cal.5th 268 for this position. Raines concluded that an employer’s business entity agents could be directly liable under the Fair Employment and Housing Act (“FEHA”) when they carry out FEHA-regulated activities on behalf of an employer.

The regulations also broadly define automated decision systems as a “computational process that makes a decision or facilitates human decision making.” The Council initially tried to carve out basic tools like calculators or Excel spreadsheets, but the amended regulations appear to reverse course where those tools facilitate human decision-making. Thus, employers need to have some level of confidence that any calculation or formula used to make employment-related decisions does not create a disparate impact. The proposed regulations note that proof of anti-bias testing or similar proactive efforts to avoid algorithmic discrimination may be relevant evidence in a claim of employment discrimination. However, the Council recently deleted a previously articulated business necessity defense, leaving it to the courts to determine the appropriate nature and scope of that defense (if it exists at all).

The Council maintains that the proposed regulations do not impose any new requirements. Instead, it asserts that they are only clarifying how existing regulations apply to AI tools. Both employers and software vendors are likely to test that assertion in court.

The October 17, 2024 modifications reflect that the Council is receptive to at least some concerns. In particular, the original proposal would have defined “medical or psychological examinations” to include “personality-based” questions, such as questions that measure optimism/positive attitudes, personal/emotional stability, extroversion/introversion, and “intensity.” The original proposed regulations did not limit the definition to AI use, nor did they clearly limit the scope of “personality-based” questions. Thus, an employer could potentially have violated the law by asking any pre-offer interview question that attempted to gauge a candidate’s personality in any way. In the modified draft regulations, the Council more plainly defined medical or psychological examinations to “include a test, question, puzzle, game, or other challenge that leads to the identification of a disability.”

AI at Work

Beyond management’s use of AI tools, employers should also be aware of their employees’ use of AI tools for work. More likely than not, at least some employees in any workplace have already used AI tools. Because AI is increasingly integrated into existing products, employees may have used AI without even realizing it. For example, Google searches now often return an “AI Overview” that summarizes several webpages into a single result.

When employees use AI tools, the general risks of AI apply. One primary concern is accuracy: AI systems may “hallucinate” false information, and even Google’s AI Overview is prone to mistakes. Employers should instruct employees not to rely on AI summaries alone, but instead to confirm the information by visiting the underlying sources.

Agencies also often handle sensitive information from members of the public. For example, employees could use AI tools to draft incident reports or personnel documents. Employers should specifically contemplate whether to allow such use, and if so, employees should receive guidance on how to use AI without jeopardizing information security.

Further, agencies must remain mindful of their obligations under the Public Records Act. A member of the public may argue that “communications” between employees and AI tools are public records that must be disclosed.

Evolving Scene

Unquestionably, the impact of AI on the employment landscape will continue to develop quickly. It is unclear when or whether the Council’s regulations will be implemented, or whether the state legislature (which is actively working on AI-related statutes) will beat them to the punch. What is certain, however, is that employers have an opportunity now to take a hard look at the formulas and software being used to assist with their employment decisions, whether directly or indirectly through a vendor. Employers should actively question whether anti-bias testing or other proactive measures have been implemented and could be cited as a potential defense, and should consider negotiating indemnity provisions in contracts with software and recruitment vendors.

AI will transform our world in the coming years, and its adoption and utilization will become ubiquitous. Employers must be mindful, however, of the risks associated with AI and ensure they are considering the ways it can be a double-edged sword. LCW continues to monitor these issues with specific attention to how AI will affect California’s public employers.
