When It Comes to AI-based Recruiting Tech, Tread Carefully
Workspan Daily
February 14, 2023
Key Takeaways

  • EEOC prioritizing AI inspection. The EEOC recently indicated it will prioritize an investigation into how AI is used by employers in hiring. The decision could have a major impact on employers in their quest to bring new talent aboard if certain precautions are ignored.  
  • Existing laws in place. Several states and localities have enacted laws that outline what data can be collected and some may require notice and consent or other legal obligations.  
  • Consider the risks of overreliance. While using AI in hiring can help expedite a lengthy process, it also could shrink the talent pool and lead to less diverse hiring. 
  • Important steps to take. Employers must be involved in developing a tool that will meet business recruiting objectives while complying with legal obligations. Additionally, employers should know what data is feeding the algorithm and conduct due diligence after implementation.  

The Equal Employment Opportunity Commission (EEOC) recently indicated it will prioritize its investigation of the use of employment screening tools that rely on artificial intelligence (AI), or otherwise use algorithms to assist employers in making recruiting, hiring and other employment decisions. 

That EEOC decision could have a major impact on employers in their quest to bring new talent aboard if certain precautions are ignored, according to experts. 

Jim Paretti, a shareholder in the Washington, D.C., office of the employment law firm Littler Mendelson P.C., says states and localities are following the EEOC’s lead, with some laws already enacted (New York City is one example) and dozens more pending in other states and municipalities.  

“Each of these laws varies in their particulars, but all are focused on ensuring that these automated tools do not intentionally or unintentionally screen out employees based on protected traits,” Paretti explained.  

Ensuring Data Privacy and Diversity 

Paretti says firms preparing to deploy algorithmic hiring tools should familiarize themselves with data privacy laws, which will impact the types of data that may be collected and may require notice and consent or other legal obligations.  

Paul Lewis, chief customer officer at Adzuna, a job search engine company, said one of the pitfalls of AI is that by imitating human behaviors, it can reinforce certain biases. 

“Therefore, when training AI to make current hiring decisions based on past hiring choices, it’s critical to make sure the underlying data is diverse enough to make this a fair process,” he said. Lewis adds that AI systems should be audited for bias before use, and regular reviews must be held to interrogate the inclusivity of the hiring process. 
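One common check in the kind of bias audit Lewis describes is the "four-fifths rule" of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures: the selection rate for any group should be at least 80% of the rate for the most-selected group. A minimal sketch in Python (the group labels and counts are illustrative, not real data):

```python
def adverse_impact_ratios(outcomes):
    """outcomes maps group -> (selected, applicants).

    Returns each group's selection rate divided by the highest
    group's rate; ratios below 0.8 flag potential adverse impact
    under the four-fifths rule of thumb."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Illustrative counts only
results = adverse_impact_ratios({
    "group_a": (50, 100),   # 50% selection rate
    "group_b": (30, 100),   # 30% selection rate
})
flagged = [g for g, r in results.items() if r < 0.8]
```

A check like this is a screening heuristic, not a legal determination; results that trip the threshold are a signal to investigate the tool and the underlying data, ideally with counsel.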

Also, Lewis says the rules for any AI screening must be carefully considered. For example, does the employer really want to exclude all resumes other than those featuring an Ivy League education, or mentioning certain GPA scores?  

“This is a one-way ticket to missing out on great, diverse talent,” he said. 

Lewis notes that no discussion of AI is complete without mentioning ChatGPT. One simple way hiring managers can leverage this technology to make their jobs easier, he says, is to generate an expanded list of interview questions for candidates for a specific role. For example, they could ask for a set of first-round interview questions based on the text of a specific job posting. 
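As a sketch of the workflow Lewis describes, the prompt can be assembled from the job posting text and sent to whichever chat-style model API an organization uses (the `ask_llm` call below is a hypothetical stand-in for that client, not a real library function):

```python
def build_interview_prompt(job_posting: str, n_questions: int = 10) -> str:
    """Assemble a first-round interview-question prompt
    from the text of a specific job posting."""
    return (
        f"Based on the job posting below, write {n_questions} "
        "first-round interview questions that probe the skills "
        "and experience the role requires.\n\n"
        f"Job posting:\n{job_posting}"
    )

# Illustrative posting text
prompt = build_interview_prompt("Senior payroll analyst; owns ADP integrations.")

# The prompt would then be sent to a chat-model API, e.g.:
# questions = ask_llm(prompt)   # ask_llm is a hypothetical client wrapper
```

Keeping the prompt construction in code, rather than typing it ad hoc, makes the questions reproducible across roles and easier to review for the bias concerns discussed above.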

“When it comes to crafting the job ad itself, AI is also useful,” he says, adding that AI can generate a list of keywords to include in job postings to help candidates find the role.  

“It could also be used to create a first draft of a job posting that a hiring manager could then improve on,” he said. 

Another example is to lessen the burden of communication within the interview process. Lewis says it’s best practice to respond to every application received for a role, but sometimes this can create a behemoth of a task. Enter AI, which can be used to create template outreach emails to get back to candidates, either moving them to the next stage, or letting them down gently.  
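The candidate-communication templates Lewis mentions can be as simple as parameterized strings that a chatbot or automated emailing system fills in per candidate; a minimal sketch (the template wording and field names are illustrative):

```python
# Illustrative templates for the two outcomes Lewis describes:
# advancing a candidate, or letting them down gently.
TEMPLATES = {
    "advance": (
        "Hi {name},\n\nThanks for applying for the {role} role. "
        "We'd like to invite you to the next stage: {next_step}."
    ),
    "decline": (
        "Hi {name},\n\nThank you for your interest in the {role} role. "
        "We won't be moving forward this time, but we appreciate "
        "the time you invested in your application."
    ),
}

def outreach_email(decision: str, **fields) -> str:
    """Render the advance/decline template for one candidate."""
    return TEMPLATES[decision].format(**fields)

msg = outreach_email("advance", name="Ada", role="Data Analyst",
                     next_step="a 30-minute phone screen")
```

Generating the templates once (with or without AI help) and filling them programmatically is what lets every applicant receive a response without the task becoming unmanageable.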

“Chatbots and automated emailing systems do exactly the same thing here,” Lewis said.  

Finally, it’s also important to be aware that candidates may themselves be using AI like ChatGPT within their job applications. Any written tasks, cover letters or answers to questions may be generated by the tool.  

“Luckily, there is a quick way to check and rule out candidates using AI for written tasks,” he explained. “OpenAI has released an AI classifier for indicating AI-written text, which applications should be checked against.” 

Risks for Employers  

Littler’s Paretti says employers are well aware of the risks of using pre-hire assessments and recognize that algorithmic selection tools have the potential to amplify those risks due to the sheer volume of potential decisions. With that in mind, Littler has identified a few key steps that employers considering deploying algorithmic hiring tools should take: 

  • Become an educated consumer. Understand the potential and limits of the tools, and be involved in developing a tool that will meet business recruiting objectives, while complying with legal obligations. 
  • Know your data. Decisions will largely be a function of the training data, so firms must have a strong grasp of the types of information they use to train an algorithm. For instance, are there objective measurements of performance, or subjective summaries? Are there historical hiring patterns evident in the data that the firm wishes to correct? Here, the adage, “garbage in, garbage out” applies in full force. 
  • Post-adoption due diligence. After adoption, an employer should put in place systems to review its processes. “A ‘set it and forget it’ mentality will undoubtedly create risk,” Paretti said. In that light, one of the best uses of algorithmic processes is to run an algorithmic program in tandem with human recruiters to augment, and not replace, the process. The algorithm can help to identify “hidden gems” that human recruiters may have missed. Paretti said including some role for human decision-making also ensures that any potential issues with screening tools can be spotted and corrected sooner rather than later. 

The “sheer power” of algorithms and computing can fundamentally improve the recruiting process, but great care must be taken to ensure firms obtain the desired results and do so in compliance with existing and developing law, Paretti said.   

“Given the complexity and rapidly evolving standards around these issues,” he advised, “employers using AI in their employment decisions should work closely with counsel to ensure compliance and minimize the risk of violating any one of a number of laws.”  
