When AI meets HR: Prepare your policies now

Tan KW
Publish date: Sun, 09 Apr 2023, 12:31 PM

There are many amazing things about ChatGPT and the other AI tools now available. AI can do everything from writing your headlines for you to drawing the presidents of the United States as if they were supervillains.

It's fun. But it's also useful, and your business has probably been using AI already, especially if you don't have a policy prohibiting employees from using ChatGPT for work purposes. And why not? If it saves time, it's worth it, right?

Yes, and no. While AI is great for generating headlines, you need to be prepared if you start using it to make hiring decisions.

Data privacy

ChatGPT isn't really intelligent. It has no ideas of its own - it's all based on existing ideas. It constantly "learns" by acquiring new data, including anything you post.

Craig Balding, the author of the cybersecurity and AI newsletter and blog on ThreatPrompt.com, told Worklife that what your employees submit could show up in someone else's answer: "You could have your own password showing up somewhere else. Although there's no attribution, there are identifiers like company name, well-known project names, and more."
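
If employees will use these tools regardless, one practical safeguard is to screen prompts for obvious secrets before they leave your network. What follows is a minimal Python sketch of that idea, not a description of any real product: the patterns, the hypothetical "Acme Corp" name, and the notion of an internal gateway are all illustrative assumptions.

    import re

    # Illustrative patterns only - a real deployment would use a proper
    # data-loss-prevention tool tuned to your own secrets and identifiers.
    SECRET_PATTERNS = [
        re.compile(r"(?i)password\s*[:=]\s*\S+"),     # password assignments
        re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # API keys
        re.compile(r"(?i)\bacme corp\b"),             # hypothetical company name
    ]

    def is_safe_to_submit(prompt: str) -> bool:
        """Return False if a prompt appears to contain secrets or identifiers."""
        return not any(p.search(prompt) for p in SECRET_PATTERNS)

    print(is_safe_to_submit("Summarize this error log"))        # True
    print(is_safe_to_submit("Debug this: password = hunter2"))  # False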

This is a concern in all facets of business, including hiring. For instance, if a recruiter wants to give hiring managers a quick summary of candidates, ChatGPT summarizes resumes well. But now you've put someone's whole work history, address, and phone number into the data set. Did your candidates consent to this?
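
That risk can be reduced, though not eliminated, by stripping obvious personal details before a resume ever reaches the tool. Here is a minimal Python sketch of the idea; the regular expressions are simplistic assumptions, and real redaction would need a dedicated PII-detection library plus legal review of what your candidates actually consented to.

    import re

    # Simplistic patterns for illustration - they will miss plenty of PII.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(resume_text: str) -> str:
        """Replace email addresses and phone numbers with placeholder tokens."""
        text = EMAIL.sub("[EMAIL]", resume_text)
        return PHONE.sub("[PHONE]", text)

    print(redact("Jane Doe | jane.doe@example.com | +1 (555) 123-4567"))
    # Jane Doe | [EMAIL] | [PHONE]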

Bias and error

We know ChatGPT, like all artificial intelligence, is biased: it's only as good as its programmers and its training data, and both are flawed. Amazon built an AI system to help with hiring and had to stop using it because of bias. Illinois is suing HireVue, a video-interviewing program that evaluates candidates' facial geometry to determine "cognitive ability, personality traits, emotional intelligence, and social aptitude," because of bias.

And bias aside, ChatGPT makes things up when it doesn't know the answer. I asked it to summarize a newly released New York State case that dealt with abortion rights in the workplace, and ChatGPT returned information about gun rights.

Because I was familiar with the case before I asked for the summary, I knew it was wrong. But if you use this tool to help answer legal questions, you need an educated person asking the question. You cannot trust that ChatGPT will return the correct information.

You're legally responsible for everything your company does

It would be nice to say that any illegal bias in your hiring or employee retention efforts could be blamed on ChatGPT or another AI program, but you cannot. Whatever decision your company makes is your responsibility.

In a recent webinar, employment attorney and HR consultant Kate Bischoff discussed the important legalities you need to consider in staffing. Among the critical points:

You can't illegally discriminate against candidates and employees - even if it was the black box of AI that made the suggestion, you're still the one who acted.

If you don't have a policy, your employees will use ChatGPT and other AI programs. Make sure you're OK with that and get your policies together.

 

 - TNS
