A couple of months ago, I'd bring up ChatGPT in casual conversation, only to be dismissed by family, friends and longtime colleagues as a nerd with too much free time on my hands. It's amazing how quickly ChatGPT has taken root, adopted by millions for scores of business duties, day-to-day life tasks and beyond.
Want to learn a language, write a song, design a chair? There’s an AI app for that now.
ChatGPT makes everyone an expert — and when everyone’s an expert, no one is. The years you’ve spent studying, learning and refining your skills are no match for the power, intellect and speed of ChatGPT. Feed it an article, a business plan or a piece of code, then ask ChatGPT to improve it, and in seconds, it will.
About five years ago, I wrote a column on HR and AI and the stages HR professionals are likely to go through:
- Stage 1: As AI replaces entire categories of workers, ministering to and attempting to motivate a workforce that's suffering various degrees of AI-induced PTSD will become a major focus of the day-to-day HR function (giving EAPs a robust new line of business).
- Stage 2: When AI develops the skills necessary to master just about every modern situation, it will be entrusted with more complex interpersonal tasks such as conflict resolution and move HR further to the margins.
- Stage 3: Realizing AI is no longer a high-performing co-worker or colleague but your boss, one that views you as just another data point.
- Stage 4: It’s time to prepare your resume, with the sickening knowledge that there’s another robot on the other end evaluating it.
I think this list still holds up, but I’d add a new stage: the weaponization of AI by co-workers using it to check, critique and provide alternatives to your work.
The weaponizing comes in the form of co-workers asking ChatGPT to take what you’ve created, then improve or grade it. It’s one thing for members of your team with the requisite skills, background and knowledge base to weigh in, but what about an unsolicited critique — or worse, an improved version from someone who has no history of such expertise? I’ve experienced this once or twice now, and it is deeply unpleasant and seldom productive.
The widespread use of ChatGPT to project unearned skills and expertise to gain an advantage is not unlike the widespread use of performance-enhancing drugs (PEDs) in sports. Organizations would be foolish if they didn’t use every tool at their disposal to get a competitive business edge, so long as it’s legal and ethical. ChatGPT can be such a tool. The problem is when individuals within the organization use it to game and undermine their co-workers and claim expertise they don’t legitimately possess.
Don’t get me wrong. I’m all for people across the organization learning how to use ChatGPT and other AI platforms to glean deeper insights and improve their work. Building an all-in culture is about welcoming great ideas and input from anyone and everyone. But insisting that everyone stay in their lane is not productive and can discourage initiative. It’s a fine line that organizations will need to be aware of and police.
PEDs show up in lab results, and in the sports world, there’s a remedy — you fine, suspend or deny Hall of Fame access to proven users. Until AI-enhanced work is easier to detect, it will inject uncertainty and resentment every time a colleague who is suspected of using an AI performance enhancer is recognized and rewarded for “their” work. I can’t currently think of a handy remedy — ChatGPT, if you’re so smart, what do you suggest we do?