With the rise of career aggregation websites, mass applying to job positions without reviewing the necessary qualifications, or even knowing what the jobs entail, is as simple as clicking a button. Human resources departments, in turn, save money and time by using artificial intelligence (AI) to sort resumes automatically, scan them for key terms, and apply “knock-out” questions to winnow applicant pools.
However, when barriers to applying for jobs are lowered, employers and employment agencies receive significantly more applications. A job that once received fewer than a hundred applications might now receive thousands. To deal with this deluge, HR departments have inevitably been forced to rely on AI to sort applicants.
But are the tools working? Many employers believe that qualified, highly skilled candidates have been screened out of the applicant pool by automated technology simply because they did not match the exact criteria in the job description.
In addition, few employers feel that the vendors they use for resume analysis are transparent about the methods they employ to prevent discrimination or bias. There are many examples of AI engaging in apparent discrimination because it was not trained on data that included diverse sets of people. Discrimination arises mainly because companies rely on limited historical data sets, most commonly built from data collected in previous decades; in many industries, however, that data reflects previous decades of potential discrimination.
The takeaway is not that AI should never be used, but rather that AI is not a “magic bullet.” When used responsibly and within appropriate boundaries to make a company’s human capital function more strategic, AI can prove highly useful to employers. For example, AI trained on diverse data sets could help combat implicit bias by sorting resumes with the candidates’ names removed, unlike a human reviewer, who implicitly considers the name at the top of the resume.
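To make that concrete, here is a minimal Python sketch of name-blind screening. Everything in it is a hypothetical assumption for illustration: the field names, the IDENTIFYING_FIELDS set, and the score_resume() scorer do not describe any vendor’s actual product.

```python
# Minimal sketch of "name-blind" screening (hypothetical fields and scorer,
# not any vendor's actual product): identifying fields are stripped before
# a resume ever reaches the scoring step.

IDENTIFYING_FIELDS = {"name", "email", "photo_url"}

def redact(resume: dict) -> dict:
    """Return a copy of the resume with identifying fields removed."""
    return {k: v for k, v in resume.items() if k not in IDENTIFYING_FIELDS}

def score_resume(resume: dict) -> float:
    """Toy scorer: the fraction of posted requirements the resume covers."""
    required = {"python", "sql", "project management"}
    skills = {s.lower() for s in resume.get("skills", [])}
    return len(required & skills) / len(required)

applicant = {
    "name": "A. Candidate",            # never seen by the scorer
    "email": "candidate@example.com",
    "skills": ["Python", "SQL"],
}
print(f"{score_resume(redact(applicant)):.2f}")  # 0.67, scored name-blind
```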
Legal regulation is coming. Illinois recently amended its Artificial Intelligence Video Interview Act to require employers who rely solely on AI to determine whether an applicant will receive an in-person interview to collect and report certain demographic information to the Illinois Department of Commerce and Economic Opportunity.
New York City was the first jurisdiction in the United States to add notice and audit requirements for AI. Under those requirements, an employer or employment agency may not use an Automated Employment Decision Tool (AEDT) unless (1) the tool has been subject to a bias audit completed by an independent auditor no more than one year prior to the tool’s use; and (2) a summary of the most recent bias audit and the distribution date of the tool have been made publicly available on the employer’s or the employment agency’s website prior to the use of the tool. Bias audits are subject to explicit requirements depending on how the tool operates.
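To illustrate the arithmetic such an audit involves, here is a minimal Python sketch. NYC’s implementing rules describe audits in terms of selection rates and impact ratios; the impact_ratios() helper and the sample numbers below are hypothetical assumptions, and the rules themselves, not this sketch, govern what an actual audit must contain.

```python
# Minimal sketch of the arithmetic behind a bias audit (hypothetical
# numbers, not results from any real tool). NYC's rules describe audits
# in terms of selection rates and impact ratios.

def impact_ratios(outcomes: dict) -> dict:
    """outcomes maps category -> (number selected, number of applicants).

    Each category's selection rate is divided by the highest selection
    rate; ratios well below 1.0 flag possible adverse impact.
    """
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical screening outcomes by demographic category.
sample = {"category_a": (40, 100), "category_b": (22, 100)}
for cat, ratio in impact_ratios(sample).items():
    print(f"{cat}: impact ratio {ratio:.2f}")
# category_a: impact ratio 1.00
# category_b: impact ratio 0.55
```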
In 2021, the Equal Employment Opportunity Commission (EEOC) launched an agency-wide Artificial Intelligence and Algorithmic Fairness Initiative to ensure that AI, machine learning, and other emerging technologies used in hiring and employment decisions comply with the federal civil rights laws the agency enforces.
California, New York, and New Jersey have each introduced bills similar to the NYC AEDT law. California’s bill would impose obligations similar to those in the NYC AEDT law, requiring employers to evaluate the impact of an automated decision tool through an impact assessment, provide notice regarding its use, and establish a governance program. The bill would further prohibit a deployer of an automated decision tool from using it in a way that contributes to algorithmic discrimination.
In July, Senate Democrats introduced the No Robot Bosses Act. The bill would bar employers from using automated programs alone to make employment decisions and would require them to implement training on such systems. Similar to the NYC law, the bill would also require employers to have humans oversee automated programs and regularly test and validate the programs for bias and discrimination.
In addition, the Federal Trade Commission, the Department of Justice, and the Consumer Financial Protection Bureau have all indicated that they will apply existing discrimination laws to developing AI, and the National Labor Relations Board General Counsel has indicated that automated surveillance and management tools might violate the National Labor Relations Act.
So, what do employers who need to utilize AI to sort through an avalanche of resumes need to know?
• KNOWLEDGE IS POWER
Have a basic understanding of the AI being used, even if the company outsources the algorithms. If HR departments outsource their AI programs, they should thoroughly vet the program and discuss bias concerns with third-party vendors, who should provide transparency as to how their AI works. Consider consulting outside experts with AI-specific knowledge to discuss the pitfalls of AI resume review.
• AI IS A WORK IN PROGRESS
Carefully consider initiatives that handpick candidates who fall outside the scope of the AI resume pull, regardless of whether the AI’s results seem skewed.
• GIVE PEOPLE A CHANCE
Consider interviewing candidates who do not meet all the qualifications the AI was programmed to screen for, as their other qualifications might still indicate satisfactory job performance.
• CONSIDER BACK-END EFFECTS
HR departments should monitor whether the AI appears to be selecting members of only one ethnicity or gender; a simple selection-rate check like the impact-ratio sketch above can surface such skew early.
• DO NOT “OVER TAILOR” AI PROGRAMS
AI programs often become too restrictive when HR departments add skills and qualifications to a job description that the job does not actually require. The AI then screens on too many baseline factors (e.g., requiring a college degree or an unbroken work history) and yields inaccurate results.
• KEEP A PAPER TRAIL
Whether HR departments design their own AI or deploy a third party’s, they should document how the AI is used and its typical outcomes. When in doubt, consult with counsel. The attorneys at BakerHostetler stand ready to assist.
Shareef Farag is a partner with BakerHostetler. Learn more at bakerlaw.com.