It’s 2018, and artificial intelligence is a topic of discussion rapidly spreading across almost every industry. Naturally, technology companies that sell software and hardware for improving I.T. infrastructure have been talking about it non-stop. But what about everyone else? Well, we’re in HR (more specifically, our company does HR outsourcing), and sure enough, the conversation has reached our community too.
HR could benefit from A.I., but you’d assume a lot of testing needs to be done before any A.I. provider can attract real angel investors or crowdfunding. The real challenge is proving that it works: analyzing real, active employees, matching up their skillsets side by side, and producing productivity scores and other metrics that roll up into a final assessment for an HR manager.
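To make that “final assessment” idea concrete, here is a minimal sketch of the kind of scoring such a tool might perform: combining per-employee metrics into one weighted number. The metric names and weights are entirely hypothetical, invented for illustration; a real product would use its own data model.

```python
# Hypothetical sketch: roll several normalized metrics (each 0..1)
# into a single weighted assessment score for an HR manager.

def assessment_score(metrics, weights):
    """Weighted average of the employee's metric values."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total_weight

# Hypothetical weights an HR team might configure.
WEIGHTS = {"productivity": 0.5, "skill_match": 0.3, "tenure": 0.2}

employee = {"productivity": 0.8, "skill_match": 0.9, "tenure": 0.5}
print(round(assessment_score(employee, WEIGHTS), 2))  # 0.77
```

Note what this sketch makes obvious: everything the score “knows” about a person is whatever someone chose to measure and weight, which is exactly why character never shows up in it.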
A.I. certainly should never replace an HR manager or director, but it could save them time on a lot of menial tasks, and it could compare current and future employees against one another to determine what’s best for the business. However, A.I. can’t run an analysis on one very important factor: character. There is no metric for rating one’s character or personality. Interpersonal skills could only be measured by surveying employees on how well they get along with each of their co-workers, and how accurate could that really be?
Some major questions we really need to ask ourselves about A.I. being integrated into standard operating procedures within HR are the following:
- Will this bot have a major influence on, or serve as a base point for, decisions that could potentially hurt people’s professional lives?
If an A.I. ranks someone poorly and flags that person as a bad candidate for a potential role, then this person just lost an opportunity based on a machine’s decision-making, which rests on data analysis alone and has nothing to do with the individual’s unique character.
- Can available data actually lead to a good outcome for potential candidates?
We need to ask ourselves additional questions about correlation vs. causation, and whether every data point being scored is a genuine and valid proxy for the outcome we care about. If the system leads us to hire the wrong person, who do you blame then, the A.I.?
- Are algorithms fair?
Many organizations are already turning to AI-powered candidate assessment and ranking to remove human bias altogether. But “fairness” is a hard thing to prove. The right balance of AI and personnel combined should be making final decisions, not one or the other.
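“Fairness is hard to prove,” but there are at least rough checks an HR team can run on an algorithm’s output. One widely cited rule of thumb in U.S. hiring is the “four-fifths rule”: the selection rate for any group should be at least 80% of the rate for the most-selected group. The sketch below uses made-up applicant counts purely for illustration, and passing this check alone does not prove a system is fair.

```python
# Toy check of the "four-fifths rule" on an AI ranking system's output.
# Group names and counts are hypothetical, for illustration only.

def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def passes_four_fifths(rates):
    """True if every group's rate is at least 80% of the highest rate."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

rates = {
    "group_a": selection_rate(30, 100),  # 0.30
    "group_b": selection_rate(18, 100),  # 0.18
}
print(passes_four_fifths(rates))  # False: 0.18 < 0.8 * 0.30 = 0.24
```

A check like this is exactly the kind of oversight a human should keep owning: the algorithm ranks, but a person audits the outcome.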
- How will results be used by humans?
Once results from algorithms are returned, how will they be used? Will those results be the sole determining factor in the decision-making process, or will they just be used as a general reference for who to consider in your hiring process?
- Will those affected by this system have any influence over the system?
Job candidates who are not selected for interviews because of a poor or lower relative AI-driven ranking will never have any ability to influence that system or process. But many rejected candidates have valid questions about why they weren’t selected for a role and seek further counsel. HR leaders must respond to requests like these, because with the power of social media and online ranking and review sites, anyone can publicly call out an organization for unfair treatment.
HR hiring decisions are huge, life-changing decisions. And if future HR leaders are going to trust an algorithm with even part of these decisions, they need to hold that AI accountable (and themselves), because after all, people’s lives depend on it.