
3 Ways To De-Risk AI For Hiring Decisions: Beyond Plug-And-Play

by admin

AI is not just transforming work; it is transforming critical talent decisions. Gallup reports that 93% of Fortune 500 Chief Human Resources Officers (CHROs) are using AI tools to improve business operations and report significant cost decreases from doing so. The recruiting function within human resources departments is responsible for attracting, screening, and hiring job candidates, and AI’s ability to process large volumes of data is a potential operational advantage for these departments. The chair of the Equal Employment Opportunity Commission (EEOC) recently said that more than 80% of employers use AI for work and for making employment decisions. However, while AI offers significant opportunities, it also presents critical risks, especially regarding diversity, equity, and inclusion (DEI). Leaders must ensure that the AI they adopt does not exacerbate existing inequalities or create new ones. To do this, they need a nuanced approach from the very beginning.

Both Oprah Winfrey (in her ABC special, “AI and the Future of Us”) and Bill Gates (in the first episode of his Netflix series, “What’s Next?”) highlighted the duality of AI as both an improvement tool and a potential disruptor. For leaders, understanding “what AI can do for us and what it can do to us” (the subtitle of the first episode of the Gates series) is the key to leveraging it responsibly. To do this well, particularly in talent selection, leaders need to think beyond the “plug-and-play” mentality and adopt strategies that fit their organizations like a well-tailored suit. I recently interviewed two experts about the critical considerations when AI will be used to make talent selection and hiring decisions. For the perspective of psychological science, I spoke with Dr. Fred Oswald, a professor of industrial and organizational psychology at Rice University, who also chairs the Board on Human-Systems Integration (BOHSI) and serves on the federal National AI Advisory Committee (NAIAC). For the business perspective, I interviewed Jonathan Brill, a business futurist. They recommended three key actions to help de-risk AI implementation for talent management.

1. Build The Right Culture To Lead AI

Although you will have a specific talent challenge in mind as you begin your AI journey, Brill reminds us that the first step is not to select a tool but to build a “culture of experimentation.” By this, he means you should not expect your effort to be perfect on the first go-round; you must plan for trial and error. Organizations should nurture curiosity, reward innovation, and make it safe for employees to explore new ideas without fear of retaliation if those ideas fail. In AI implementation, failure should be treated as a feature, not a bug. Brill also recommends that organizations develop governance mechanisms such as AI steering committees, risk assessments, and ethical review boards to oversee AI initiatives, and he urges leaders to start now. “It might seem too early to invest in the necessary training and culture shifts, but it will be too late by the time AI is ready,” he said.

2. Demand Fairness, Validity, And Transparency

One of AI’s promises is its potential to democratize access to opportunities. Gates has highlighted AI’s power to bring education and healthcare to those who have been historically underserved. However, as noted in Oprah Winfrey’s TV special, the intention behind AI will decide whether it divides or unites us. Since intentionality is crucial, it is essential to understand that many large language models (LLMs) are trained on data that are not representative of the larger population. As a result, some AI tools may be inherently discriminatory in their composition, and the selection decisions powered by these tools may also be biased.

Oswald said, “AI tools should be well-developed and backed by reasonable evidence that they measure job-relevant characteristics.” He added, “Unless AI hiring tools come with supportive evidence, and unless organizations ask the right questions before investing in them, then I worry that employees and organizations will not fully benefit.”

The Society for Industrial and Organizational Psychology (SIOP) has outlined clear recommendations for using AI in hiring. Oswald helped develop the main recommendations, which are to check a tool’s fairness, job-relatedness, and predictive validity, and to ensure that the tool does not adversely impact historically underrepresented groups.

Asking the right questions on the front end is the most effective way to avoid using AI-powered tools that might reinforce biases rather than eliminate them.
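To make the adverse-impact recommendation concrete, here is a minimal sketch, in Python, of the EEOC’s widely cited four-fifths (80%) rule: each group’s selection rate is compared with that of the highest-rate group, and ratios below 0.80 are flagged for closer review. The group names and counts are hypothetical, and a check like this is only one piece of the validation evidence SIOP recommends; it does not establish job-relatedness or predictive validity.

```python
# Minimal sketch of a four-fifths (80%) rule adverse-impact check.
# Group labels and applicant counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratios(group_stats: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    group_stats maps group name -> (number selected, number of applicants).
    Ratios below 0.80 are commonly treated as a signal to investigate further.
    """
    rates = {group: selection_rate(sel, total) for group, (sel, total) in group_stats.items()}
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes from an AI-scored screening step.
    stats = {"Group A": (48, 120), "Group B": (25, 100)}
    for group, ratio in adverse_impact_ratios(stats).items():
        status = "flag for review" if ratio < 0.80 else "ok"
        print(f"{group}: impact ratio = {ratio:.2f} ({status})")
```

A flagged ratio is not proof of discrimination, but it is exactly the kind of front-end question worth putting to a vendor before adopting its tool.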

3. Carefully Evaluate “Off-the-Shelf” Solutions

According to Oswald, the algorithms on which AI-powered assessments are built should be transparent: the content, the scoring, and the system’s performance should all be understandable. One risk of relying on AI in talent selection is that, however compelling the marketing, vendors might not provide technical reports explaining how their tools were designed. Or, instead of the necessary technical reports that provide research-based statistical evidence, they might substitute informal explainer documents. Some vendors may fear that providing technical details will compromise their competitive advantage. However, if they don’t (or can’t) tell you precisely what the algorithm is doing, you willingly take on two risks. First, you may not be able to explain why particular candidates were recommended or rejected, and employees and candidates could lose trust in your selection processes. Second, you may expose your organization to legal challenges.

Brill notes, “AI isn’t something you buy off the shelf and expect to transform your company overnight.” He suggests that AI is better considered a “component technology that must be integrated with your organization’s unique data, culture, and strategic goals.”

Companies are understandably attracted to the simplicity of off-the-shelf AI products for talent management. But, as Brill put it, “Leaders should now consider how AI aligns with their vision for the future rather than hastily adopting a generic solution that may not yield the expected outcomes—or worse, have unintended negative consequences.”

To de-risk AI for talent selection, consider custom rather than off-the-shelf solutions, demand transparency and rigorous validation, and cultivate an organizational culture that embraces experimentation and inclusion.
