AI and Employment Decisions: Is the Risk Worth the Reward?
More than 80% of all employers, and more than 90% of Fortune 500 companies, now report using some form of artificial intelligence (AI) in employment, according to Charlotte A. Burrows, chair of the U.S. Equal Employment Opportunity Commission (EEOC). While putting employment decisions in the hands of an emotionless software program may seem like a leap forward in ensuring fairer and more merit-based employment decisions, this practice has alarmed interest groups across the United States. And state and federal agencies are taking action.
This is because AI and other automated decision-making tools run the risk of screening out applicants or employees based on protected characteristics or conditions, whether by inadvertently replicating human biases or by relying on data that serves as a proxy for those characteristics. For example, a program that favors applicants with residential zip codes near an employer’s business may inadvertently discriminate against other qualified applicants if the preferred zip codes are proxies for certain racial groups.
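For readers curious how this kind of disparate impact is typically quantified, the sketch below is a minimal illustration, using hypothetical applicant data and the EEOC's long-standing four-fifths (80%) guideline as a rough screening threshold. It is not a bias audit, and the group labels and selection outcomes are invented purely for illustration.

```python
# Illustrative only: a simple disparate-impact screen using the EEOC's
# "four-fifths" (80%) guideline. The applicant records below are hypothetical.
from collections import defaultdict

applicants = [
    # (group, selected_by_screening_tool)
    ("Group A", True), ("Group A", True), ("Group A", False), ("Group A", True),
    ("Group B", True), ("Group B", False), ("Group B", False), ("Group B", False),
]

# Selection rate per group: selected / total applicants in that group.
counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in applicants:
    counts[group][1] += 1
    if selected:
        counts[group][0] += 1

rates = {group: sel / total for group, (sel, total) in counts.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "review for adverse impact" if impact_ratio < 0.8 else "within 4/5 guideline"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")
```

A formal bias audit would go further, but even this simple ratio shows how a facially neutral criterion (such as zip code) can produce very different selection rates across groups.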
The EEOC recently released technical guidance on the use of algorithmic decision-making tools in the context of Title I of the Americans with Disabilities Act (ADA). The ADA prohibits private employers, as well as state and local governments, from discriminating against qualified individuals with disabilities. The U.S. Department of Justice (DOJ) has also released guidance that explains how artificial intelligence can lead to discrimination under the ADA. The EEOC and DOJ’s focus on the ADA in particular is likely due to that population’s unique susceptibility to the effects of automated decision-making tools.
State and local governments have also taken steps to address the use of AI in employment decision-making. The City of New York, for instance, adopted legislation, due to go into effect in 2023, that prohibits the use of “automated employment decision tools” in employment decisions unless the tool has undergone a “bias audit.” Not to be outdone, California, home to some of the strictest employment laws in the country, now has AI firmly in its crosshairs.
California’s Civil Rights Council has been workshopping regulations aimed at addressing the intersection of AI and predictive algorithms with the Fair Employment and Housing Act (“FEHA”), the statutory framework that protects California employees from discrimination, retaliation or harassment based on protected characteristics or conditions. The draft regulations seek to cover the full panoply of employment decision-making, from hiring to firing and everything in between.
On Aug. 10, 2022, the Council agreed to move the draft regulations into a formal rulemaking phase, meaning it intends to publicly release a formal notice of proposed rulemaking with a public comment period. (Employers can subscribe to receive updates from the Council here.)
As a general overview, the current version of the proposed regulations would:
- Define and incorporate the term “automated decision systems” (“ADS”) throughout the existing regulatory framework. The proposed regulations would prohibit ADS that have a disparate impact on, or constitute disparate treatment of, an applicant or employee or class of applicants or employees, unless the practice is job-related and consistent with business necessity. ADS is defined as “a computational process, including one derived from machine-learning, statistics or other data processing or artificial intelligence, that screens, evaluates, categorizes, recommends or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.”
- Prohibit the use of ADS that utilize proxies for protected characteristics or conditions.
- Specify that employers are responsible for their own ADS and their vendors’ automated decision systems. That is, if an employer uses a third party to assist with any form of employment decision-making (such as recruiting), that third party would be considered an “agent” of the employer. If the third-party vendor uses any form of ADS that directly or indirectly discriminates against a person or class of persons, the employer can be liable.
- Require employers to retain “machine-learning data” for four years, and also require entities that sell, advertise or facilitate the use of ADS to retain records of the assessment criteria used by the employer for the same period.
- Specify that third parties who sell, advertise or facilitate the use of ADS for an unlawful purpose can be subject to aider and abettor liability.
- Specify that prohibited forms of pre-offer physical, medical and psychological examinations can include those utilizing ADS. As examples, the regulations identify personality-based questions, puzzles, games, and other gamified challenges.
California’s draft regulations serve as a not-so-subtle warning to the state’s employers, and the consequences of utilizing improper ADS could range from individual lawsuits to messy class actions. In other words, the stakes are very high. Importantly, the Council has repeatedly stressed that the draft regulations do not create “new” liabilities, but rather reflect how existing laws apply to ADS right now.
Even if that general proposition is true, the draft regulations are dense, confusing, at times commentary-like, and offer little practical guidance for well-intentioned employers (or recognition of the obstacles they will face navigating this complex area). For instance, third-party vendors who rely on proprietary or confidential software are unlikely to simply turn over their source materials. Likewise, employers interested in a bias audit will face the reality that standards in this area are likely to vary widely.
Meanwhile, the public’s awareness of AI as a tool in employment decision-making is only going to grow, and that is likely one of the principal goals of the regulations (after all, you are reading this article). It is only a matter of time before requests for AI and other predictive tools become a ubiquitous component of routine employment litigation.
What’s Coming Down the Pike
Fortunately, California’s draft regulations serve as a window into the future. Employers utilizing AI or other automated decision-making tools need to evaluate and understand them to ensure they do not discriminate. Employers may also want to consider developing practices around the use of AI. Some “promising practices” noted by the EEOC include:
- Disclosing to applicants and employees the traits the algorithm is designed to assess, the methods by which those traits are assessed, and the factors that affect the rating;
- Informing all applicants and employees who are being rated that reasonable accommodations are available, with clear instructions for requesting them (e.g., taking a test in an alternative format or being assessed in alternative ways); and
- Measuring only the abilities or qualifications necessary for the job, and measuring them directly, rather than relying on characteristics or scores merely correlated with the desired abilities.
If an employer relies on a third-party vendor’s ADS, it cannot simply stick its head in the sand. It will need to understand that software or explore other areas of risk mitigation.
What the above makes clear is that an employer’s decision to use AI or other automated decision tools is actually quite complicated. There is a whole host of considerations to field before pressing the AI button. Using AI or other automated tools is also fraught with risk, especially in the absence of recognized industry standards or clearer guidance for employers. And, in this environment, employers should ask themselves whether the risks are worth the reward.
You can access the EEOC and DOJ’s technical guidance here and here.
______________________________________________
Republished with permission from HR News