What the Loper Bright Decision Means for AI Usage in HR/TR
Workspan Daily
October 02, 2024

In the absence of U.S. government legislation regulating artificial intelligence (AI) development and/or usage, several federal agencies — including the Equal Employment Opportunity Commission (EEOC), the National Labor Relations Board (NLRB) and the Department of Labor (DOL) — have taken it upon themselves to roll out guidance and initiatives that caution about, curtail or confine AI usage in the workplace.

For instance, in May 2024, the DOL issued “Artificial Intelligence and Worker Well-being: Principles for Developers and Employers,” a “roadmap” serving as “a guiding framework for businesses.” In 2021, the EEOC created the “Artificial Intelligence and Algorithmic Fairness Initiative,” outlining what it saw as potentially unlawful AI applications in hiring and employment use cases.

Those agency efforts to control AI usage and proliferation, however, were hamstrung as a result of the U.S. Supreme Court’s June 28 decision in Loper Bright Enterprises v. Raimondo (and the companion case of Relentless, Inc. v. Department of Commerce). The Loper Bright decision overturned what was known as the Chevron doctrine, which underpinned federal agencies’ power to create enforceable rules and guidance around such things as AI. Under that doctrine:

  • Federal agencies had implied authority to interpret ambiguous statutes in cases where Congress had not provided clear legal guidance, and
  • If Congress had not directly addressed the question at the center of a dispute, a court was required to uphold the agency’s interpretation of the statute, as long as it was deemed reasonable.

“Courts would typically defer to those agencies’ interpretation, as long as it was reasonable,” said Lulu Seikaly, senior employment counsel at Payscale. “Since the Loper Bright decision, federal agency deference has weakened considerably.”

According to Michael Piker, the vice president for total rewards at Shiseido, a global cosmetics company, the Supreme Court’s Loper Bright decision, in simple terms, means the judicial branch holds the ultimate power in the U.S. to review and reject the regulatory interpretations enforced by federal agencies, including those addressing the use of AI.

“U.S. states, on the other hand, are moving faster than Congress to write AI legislation for the workplace,” Piker noted.

California and New York are among the states with pending or proposed bills regulating AI in the workplace.

Piker added that the National Conference of State Legislatures reported that 45 states are actively studying AI policy, although none has yet passed legislation regulating AI usage in the workplace.

“Therefore, employers that operate in states with pending AI legislation must take heed of this latest Supreme Court decision when shaping workplace policies,” he said.

Decision May Lead to More AI Exploration, Implementation

The bottom line is that Loper Bright, in effect, leaves AI usage in the workplace less encumbered, opening the door for HR and total rewards functions to more formally examine, pilot and implement related technologies within their areas.

This appears to be welcome news to the HR/TR profession. A recent Gartner report found that a majority of HR leaders are hungry to implement AI and feel a sense of urgency about moving it forward within their operations. Seventy-six percent of surveyed HR leaders believe that if their organization does not adopt and implement AI solutions, such as generative AI, in the next 12 to 24 months, it will lag in organizational success compared to organizations that do.

Thirty-eight percent of surveyed HR leaders also said they have explored or even implemented AI solutions to improve process efficiency within their organization. That percentage will likely rise in the months to come.

How AI Is Currently Being Used and Considered in HR

According to Payscale’s recent Compensation Best Practices Report, although forms of AI have been used reliably for several years in HR, organizations have approached the technology with caution. Among surveyed organizations, the report found AI usage or development in the following areas:

  • 21% for managing or generating job descriptions,
  • 17% to create or support learning and development or standard HR documents, and
  • 18% to parse resumes and identify candidates.

The last of those use cases, parsing resumes and identifying candidates, will likely receive the most scrutiny and is ripe for unintentional discrimination claims in the post-Loper Bright world.

Furthermore, Seikaly added, some organizations are exploring using AI to:

  • Benchmark and price jobs or predict pay ranges;
  • Monitor for pay equity and suggest pay increases;
  • Collect intelligence on skills for recruiting, upskilling and compensation;
  • Write offer letters and generate total rewards statements; and,
  • Identify career pathing and opportunities for internal mobility.

Additional Considerations for a ‘Post-Loper Bright World’

In terms of how Loper Bright and the end of Chevron deference may affect AI implementation within the HR/TR function, agencies such as the EEOC, which has offered guidance on AI utilization in hiring, could be challenged in court more frequently, Seikaly said. She added that this could morph into a potential nightmare for organizations that rely on AI for recruiting, as it may introduce additional legal risk if courts choose to interpret those federal agency guidelines differently.

“In a post-Loper Bright world, organizations may face increased lawsuits over their use of AI, especially if it leads to perceived discriminatory or unequitable treatment in the hiring process,” Seikaly said.

While the EEOC and other federal agencies generally have stepped back from regulating AI usage, she added, the EEOC also has been very vocal in stating that if an organization makes a discriminatory decision based on AI, that employer can be held liable.

“In this day and age, organizations should ensure that any AI processes are transparent and can provide clear, nondiscriminatory explanations for their decisions,” Seikaly warned.

She added that any HR/TR decisions made via AI should always be verified by a human professional in that function to ensure any biases in tech-centered decision-making are considered.
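To make that human-verification point concrete, a minimal sketch of one such check appears below. It is illustrative only and not drawn from the article or from Payscale: it assumes a hypothetical set of AI screening outcomes labeled by demographic group and applies the EEOC’s long-standing four-fifths rule of thumb to flag selection rates that may warrant a closer look.

    # Illustrative sketch only: a simple adverse-impact check a human reviewer
    # might run on hypothetical AI screening outcomes. The group labels, data
    # and variable names are assumptions for this example, not the article's.
    from collections import defaultdict

    # Each record: (demographic_group, advanced_by_ai)
    screening_results = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    advanced = defaultdict(int)
    total = defaultdict(int)
    for group, decision in screening_results:
        total[group] += 1
        advanced[group] += int(decision)

    # Selection rate per group: share of candidates the AI advanced.
    rates = {group: advanced[group] / total[group] for group in total}
    highest_rate = max(rates.values())

    for group, rate in rates.items():
        # The impact ratio compares each group's rate to the most-selected
        # group's. A ratio under 0.8 (the four-fifths rule of thumb) is a
        # common flag for further human and legal review, not a legal
        # conclusion in itself.
        ratio = rate / highest_rate
        status = "flag for review" if ratio < 0.8 else "within guideline"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({status})")

The 0.8 threshold mirrors the four-fifths guideline only as a screening heuristic; a flagged ratio is a prompt for the kind of human and legal review Seikaly describes, not a finding of discrimination.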

“Legal counsel should be an HR professional’s best friend, especially for organizations that rely on AI,” Seikaly said.  

For example, she explained, legal counsel can help organizations stay informed about court decisions, impending/trending case law and regulatory changes that could affect an organization’s use of emerging technologies.

“HR professionals can benefit greatly from incorporating AI tools into their workflows, but AI will not replace HR professionals,” Seikaly said. “HR professionals are essential in ensuring that any policies that AI drafts or decisions AI makes are nondiscriminatory.”
