Related Resources
For WorldatWork Members
- AI and the Skills Evolution: Where the Total Rewards Function Fits In, Workspan Daily Plus+ article
- Internal Talent Marketplaces Get a Boost from AI, Workspan Magazine article
- 3 Sales Compensation Challenges that AI Can Help Tackle, Workspan Magazine article
- TR Is Key to Successfully Integrating AI and Work, Journal of Total Rewards article
For Everyone
- How Will AI Impact TR’s Roles and Strategies Over the Next 5 Years? Workspan Daily article
- Rethinking Equity and Diversity: Can AI Improve Workplaces for All? Workspan Daily article
- What TR Pros Can Expect with Regulations and Compliance in 2025, Workspan Daily article
- Mini-Study: AI in Total Rewards, research
Artificial intelligence (AI) is appearing nearly everywhere these days, including in the world of total rewards (TR). According to WTW’s 2024 Global Salary Budget Planning Report, approximately 10% of respondents globally have started using AI-related technologies.
The most common area of AI usage was salary benchmarking and rewards, said Russ Wakelin, the head of global product development, rewards data intelligence at WTW. Other AI usage areas included:
- Learning and development
- Market insights and trends
- Skills management
- Performance management
- Job architecture
This aligns with WorldatWork research: Its mini-study of emerging insights on AI in total rewards found that 33% of TR leaders used AI for recruitment and administrative tasks.
“It appears companies are using AI to streamline compensation processes, enhance decision-making accuracy, and ensure the effective targeting of the right employees for salary adjustments and rewards,” Wakelin said.
AI’s appeal is undeniable. It can offer TR pros the ability to quickly sift and sort data and streamline compensation and TR tasks. It can also help with monitoring and analyzing industry trends and regulatory changes. But the technology is still new, and potential pitfalls abound. The imperfect inputs AI is trained on can sometimes lead to biased — and illegal — outputs.
Additionally, while the administration of President Donald Trump has signaled a hands-off approach to federal AI regulation, states and municipalities are passing AI-related legislation at a rapid pace. These laws are designed to protect job applicants and employees (and, in some cases, consumers more generally) from AI-generated bias, and many may require employers to make brand-new disclosures.
New and Pending AI Legislation
The National Law Review recently reported that more than 400 AI-related bills were introduced in 41 states in 2024 alone, and new bills continue to appear at a steady pace. Here are some current and pending laws to keep an eye on (note that remote workers and applicants can sometimes extend a law’s reach beyond its home jurisdiction).
Enacted legislation:
- Colorado’s Artificial Intelligence Act, effective Feb. 1, 2026, requires that deployers of high-risk AI systems use reasonable care to protect consumers (which includes employees and job applicants) “from any known or reasonably foreseeable risks of algorithmic discrimination.” Consumers also must be told they are interacting with an AI system.
- Illinois House Bill 3773, which takes effect Jan. 1, 2026, amends the Illinois Human Rights Act to explicitly prohibit AI-driven bias in recruiting, hiring, discipline, and any terms, privileges or conditions of employment. It also requires notice to employees when AI is used for any of these purposes.
- New York City’s Local Law 144 (2023) prohibits employers and employment agencies from using automated employment decision tools unless they have been subject to a publicly available bias audit within one year of use and certain notices have been provided to employees or job candidates.
Pending legislation:
- The California Privacy Protection Agency released proposed regulations that would require certain businesses to conduct risk assessments and complete annual cybersecurity audits, and allow consumers to access and opt out of the use of automated decision-making technologies. The public comment period runs through Feb. 19, 2025.
- The proposed Texas Responsible AI Governance Act would regulate high-risk AI systems — defined as those that make, or contribute to making, “consequential” decisions, including those affecting employment or employment opportunities. Developers and deployers of these systems could be fined up to $100,000 per violation.
- New York Gov. Kathy Hochul is reportedly seeking to expand the state’s already-extensive Worker Adjustment and Retraining Notification (WARN) Act provisions — which are triggered by plant closings and mass layoffs — to include mandatory disclosures if the layoffs are related to the employer’s AI usage.
- Virginia House Bill 2094, introduced on Feb. 2, contains text similar to Colorado’s AI Act. It creates requirements for the development, deployment and use of high-risk AI systems, as defined in the bill, and establishes civil penalties for noncompliance, to be enforced by the attorney general. The bill has a delayed effective date of July 1, 2026.
What About Trump’s AI Executive Order?
On Jan. 23, 2025, President Trump released an AI-related executive order titled “Removing Barriers to American Leadership in Artificial Intelligence.” The document seeks to roll back Biden-era policies and directives on AI and orders various departments and agencies to create an action plan for doing so within 180 days.
The executive order signals reduced federal regulatory attention on potential discrimination and other legal issues in the use of AI, said Bill Nolan, a partner at the Barnes & Thornburg law firm.
“Most notably for employers, for at least the next two years, the EEOC [Equal Employment Opportunity Commission] is unlikely to focus on how AI systems might discriminate in the data or algorithms they are using,” he said.
Nolan also cautioned that this does not mean employers should become complacent about AI usage. Trump’s executive order “does not affect employees’ ability to bring private discrimination lawsuits or states’ ability to address AI in employment.”
Bradford J. Kelley, a shareholder at employment law firm Littler, concurred that AI legislation will continue advancing at the state and local levels, particularly in Democratic-led states.
Even in states without explicit AI legislation, employers remain vulnerable to various claims, including disparate impact, negligence and privacy-related allegations, said Moiré Morón, vice president for advocacy and technical claims at advisory firm NFP.
“The onus is now on companies to develop and implement robust AI policies that address issues like bias and privacy concerns when utilizing AI in the workplace if they want to mitigate their risk to discrimination claims,” she said. “They are still being held accountable.”
Strategies for Staying AI-Safe and Compliant
Trump’s executive order does not (at least for now) directly impact private-sector employers, Kelley said, and the administration will likely have a limited impact on AI regulation because most of the regulatory efforts are happening closer to home.
“State and sometimes even municipal law is going to be where the action is on AI,” Nolan added.
Unfortunately, Nolan said, because no two state legislatures are approaching AI exactly the same way, “Employers will just need to address varied requirements as they arise. There is no shortcut to staying on top of state and local developments — employers need to be tied into legal and HR resources that will help them stay abreast.”
Here are some tips for employers to consider:
- Keep an eye on your AI usage. Kelley recommended that employers regularly assess their AI use and the impact of AI systems in the workplace upon employees and applicants.
- Don’t forget about your vendors. “Employers who use outside companies for vetting of candidates would be wise to ask them if and how they use AI and if their process is scrubbed of any biases that might arise,” said Morón. “Vendors should also conduct regular audits.”
- Disclose as needed. While no two state laws are exactly the same, Nolan said employers will at least need to disclose their use of AI processes in jurisdictions with applicable laws.
- Be proactive. Employers should implement transparency measures and proactive audits to manage the risk of AI bias, Kelley said. Morón also advocated for establishing policies and best practices regarding AI use, such as when and how it is used in hiring, promotion and compensation decisions.
- Train your people. Morón recommended training employees on the proper and ethical use of AI and having them sign acknowledgements as they do for other types of training.
- Think old-school. “Twenty years ago, what would you have wanted to know about your compensation and other HR functions to ensure they are not discriminatory? You wanted to understand the data and how we are analyzing it,” said Nolan. “That’s still true with AI and other technologies. Employers need to work with their vendors, technology leaders and counsel to make sure they have a deep understanding of what AI tools are doing and make sure they can explain to a jury why the tools do not discriminate.”