Artificial intelligence is sweeping through workplaces, bringing all kinds of potential and excitement. But it also brings mystery, and that mystery can itself cause problems.
This is known as the “black box” problem. When people inside an organization don’t understand how a tool functions, they can face a series of cascading problems. Researchers exploring this challenge are finding that it can affect daily operations in numerous ways.
“The use of ‘black box’ (or unexplainable) AI in workplaces may degrade workers’ skill cultivation, use, and feelings of competence,” Sarah Bankins and Paul Formosa wrote in a study published in the Journal of Business Ethics. “For example, where workers are highly reliant on the decision making of an AI, they may feel lower levels of competence in their use of it due to little understanding of its functioning. This effect will likely be more acutely felt where workers are expected to understand and explain what the AI is doing.”
That’s not all. Lack of explainability can also lead to “opaque chains of accountability for decisions informed by AI,” they added. And workers can become “overly dependent” on an AI tool, perhaps believing its abilities extend beyond the tasks it can actually perform.
The problem can get even worse, creating a new kind of challenge, one that chips away at how people perceive themselves.
‘IT Identity Threat’
Over the last couple of decades, as technology has become fundamental to daily life, many people have incorporated their use of it into their perceptions of who they are and the role they play in the world. Researchers have coined a term for this: “IT identity.”
New technologies can shake the foundations of that sense of self. So now there’s a new term, “IT identity threat.” As four German professors (Milad Mirbabaie, Felix Brünker, Nicholas R. J. Möllmann Frick and Stefan Stieglitz) explain in a study, this term is defined as “the anticipation of harm to an individual’s self-beliefs, caused by the use of an IT.”
The often mysterious nature of AI makes this threat all the more likely. When people don’t understand the full scope of a new tool’s capabilities, they are even more likely to feel their “IT identity” threatened. This makes AI different from traditional IT tools. “AI further entails challenges such as replacing human tasks or black-boxing the process of decision-making,” the professors write in the journal Electronic Markets. “These aspects may evoke a unique perception of employees toward AI compared to IT.”
Making AI Tools Explainable
Recognizing this problem, some designers of AI tools are embracing an “XAI” model, which stands for “explainable artificial intelligence.” These are also sometimes referred to as “white box” or “transparent” models.
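For readers who want a concrete sense of the difference, here is a minimal sketch in Python. It assumes the widely used scikit-learn library and a toy dataset; none of it comes from the tools or studies named in this article. It simply contrasts a “black box” ensemble model with a “white box” decision tree whose rules can be printed and read.

```python
# A minimal, illustrative sketch (not from the article): contrasting a
# "black box" model with a "white box" one, using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# "Black box": an ensemble of hundreds of trees. Often accurate, but no
# single human-readable rule explains any individual prediction.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# "White box": a single shallow decision tree. Its entire decision logic
# can be printed and read as plain if/then rules.
white_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(white_box, feature_names=list(data.feature_names)))
```

The point of the sketch isn’t the particular library or dataset; it’s that some models can have their reasoning laid out in plain language, and that kind of transparency is exactly what workplace training can build on.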
But even when the tools are not specifically created based on such models, employees can still be trained to understand a great deal about how they work. This requires a new mindset among leaders and those who oversee learning and development initiatives.
At times, when organizations train people in how a machine “works,” they’re really only focused on how to “work” the machine. With AI, it’s helpful to delve further, with as much clarity as possible, and explain what the machine itself is doing.
This is also a part of modern workforce engagement management (which also has its own acronym, WEM). As Alex Doan with Nextiva explained in a blog post, part of what it takes to “create a positive work environment that prioritizes your employees’ well-being and growth” is to “support employees with the training, tools, and resources they need to succeed.”
In this era, that training should include, to whatever extent possible, a simple explanation of what goes on inside the “black box.”
Allay the Fears
It’s also essential to make clear to any team that they should not fear the rise of AI. As a business leader, I do not view the future as a competition between humans and machines. I don’t see us losing our relevance or our value as employees. I see us taking on different tasks.
Even the parts of AI that are difficult (or may even seem impossible) to understand are still not indications that humans won’t be needed to run organizations. As with every technological revolution, this one will open up new opportunities, new jobs, new possibilities.
So-called “black box” technologies have existed for decades. They haven’t replaced us. And there’s no reason to fear that they will.