The Equal Employment Opportunity Commission (“EEOC”) recently released new guidance on how Artificial Intelligence (“AI”) software used in the employment selection process could violate Title VII of the Civil Rights Act of 1964 (“Title VII”). The document comes one year after the EEOC released guidance on AI software and the risk of violating the Americans with Disabilities Act (the “ADA”). The EEOC recognizes that companies want to use AI software in a range of employee-selection decisions, and in both publications it discusses how a company can monitor its AI software so that it does not violate the ADA or Title VII.
What is Artificial Intelligence Software?
Congress defines AI to mean a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
How Does AI Work in a Human Resources Context?
Software or applications that include AI are used at various stages of employment, including hiring, performance evaluation, promotion, and termination. Examples include, but are not limited to: resume scanners; employee-monitoring software that rates employees on the number or accuracy of their keystrokes or other factors; and “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet predefined requirements, as sketched below.
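To make the “predefined requirements” idea concrete, here is a minimal Python sketch of the kind of rule-based screening a hiring chatbot might perform. The candidate fields, thresholds, and names are hypothetical and are not drawn from any actual vendor’s product.

```python
# Minimal illustration of rule-based candidate screening; the
# requirements and thresholds below are hypothetical examples only.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    years_experience: float
    has_required_license: bool


def meets_predefined_requirements(c: Candidate) -> bool:
    """Advance only candidates who pass a fixed checklist."""
    return c.years_experience >= 3 and c.has_required_license


applicants = [
    Candidate("A. Rivera", years_experience=5.0, has_required_license=True),
    Candidate("B. Chen", years_experience=2.0, has_required_license=True),
]

for c in applicants:
    decision = "advance" if meets_predefined_requirements(c) else "reject"
    print(f"{c.name}: {decision}")
```

The point of the sketch is that the tool applies its checklist mechanically: whatever requirements the employer encodes, the software enforces without any judgment about whether those requirements are actually job related.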
Potential Title VII Violations
Companies risk violating Title VII if they use AI-based tests or selection procedures that appear neutral but end up excluding people based on race, color, religion, sex, or national origin. If a company uses such tests or procedures to assist in hiring, the measures or filters must be job related for the position in question and consistent with business necessity.
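One common rough screen for this kind of exclusion is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate is less than 80% of the rate of the most-selected group, the procedure may be having an adverse impact. The Python sketch below shows the arithmetic; the applicant counts are hypothetical.

```python
# Illustrative four-fifths-rule arithmetic; all numbers are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants


# Hypothetical outcomes from an AI resume screener.
groups = {
    "Group A": selection_rate(selected=48, applicants=80),  # 60%
    "Group B": selection_rate(selected=12, applicants=40),  # 30%
}

highest = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / highest
    flag = "potential adverse impact" if impact_ratio < 0.8 else "within four-fifths"
    print(f"{name}: selection rate {rate:.0%}, ratio {impact_ratio:.2f} -> {flag}")
```

Here Group B’s 30% selection rate is only half of Group A’s 60%, well below the four-fifths threshold, so an employer running this check would want to scrutinize the tool before relying on it.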
As an example of how a seemingly neutral algorithm can have a disparate impact on a protected class, in 2015 a large tech company discovered that the AI software it used to filter resumes was heavily biased against women. The algorithm had been trained on resumes submitted to the company over the previous ten years and learned to favor the patterns it associated with “top talent” in that pool. Because most of those applicants were men, the algorithm taught itself to prefer male candidates: it penalized resumes containing the word “women’s” and downgraded graduates of two all-women’s colleges.
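The mechanism is easy to reproduce in miniature. The toy Python sketch below, with entirely fabricated resumes and tokens, shows how a scorer built on skewed hiring history ends up penalizing a word like “women’s” simply because it rarely co-occurred with past hires.

```python
# Toy demonstration of bias learned from skewed history;
# the resumes and hiring outcomes below are fabricated.
from collections import Counter

# Hypothetical history: (resume keywords, hired?) -- mostly male hires.
history = [
    (["chess", "captain"], True),
    (["chess", "club"], True),
    (["football", "captain"], True),
    (["women's", "chess", "captain"], False),
    (["women's", "soccer"], False),
]

hired = Counter()
total = Counter()
for tokens, was_hired in history:
    for t in tokens:
        total[t] += 1
        hired[t] += was_hired  # True counts as 1


def token_score(t: str) -> float:
    """Hire rate among past resumes containing the token."""
    return hired[t] / total[t]


# "women's" never co-occurs with a hire in this skewed history, so any
# scorer built on these rates penalizes the word itself, not the skill.
for t in ["captain", "chess", "women's"]:
    print(t, token_score(t))
```

Nothing in the data says women are less qualified; the word is penalized only because the historical labels were skewed, which is exactly the failure mode the EEOC guidance warns employers to monitor for.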