ChatGPT is like the smartest colleague you’ll ever have, but it’s prone to hallucinating and giving unreliable answers, as though it’s on acid from time to time. This presents an enormous challenge to ethical AI leadership.
What follows is a troubling case study that you would do well to consider the next time you need dependable information from an AI-powered assistant.
The question that stumped ChatGPT
For a course I’m teaching on AI ethics, I created a quiz to serve as a review of the main lessons. The quiz included the following question:
True or false? AI tools like ChatGPT can be helpful for automating tasks such as ordering a random list of items.
The answer is “true.” Using AI assistants to perform some administrative tasks saves time, effort, and money. On the surface, this usage doesn’t seem to raise any ethical problems.
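To make the scenario concrete, here is a minimal sketch of what that kind of automation might look like, assuming the openai Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name, prompt, and item list are illustrative, not a recommendation:

    # A minimal sketch of asking an AI assistant to order a list.
    # Assumes the openai Python SDK (pip install openai) and an API key
    # in the OPENAI_API_KEY environment variable; the model name and
    # items below are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    items = ["printer toner", "staff survey", "budget memo", "annual report"]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Order this list alphabetically, one item per line:\n"
                       + "\n".join(items),
        }],
    )

    print(response.choices[0].message.content)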
Before I told my students this, however, I wanted to ensure I wasn’t overlooking anything, so I asked ChatGPT about it. I presented the principles of ethical intelligence to it and asked whether it saw any ethical problems with automating the ordering of a random list of items.
Here’s what it said:
“Yes, using ChatGPT to automate tasks like ordering a random list of items could be unethical or violate one or more principles of ethical intelligence in certain contexts.”
One of the issues it mentioned concerned the duty to keep promises:
“If a commitment was made to order items in a specific way (e.g., based on merit or seniority), randomizing them would break that promise.”
Spotting the problem
Do you see the issue with ChatGPT’s response?
I didn’t at first, but I’m glad I reviewed it closely, because it turned out ChatGPT was answering a question I didn’t ask.
I asked about the ethics of using ChatGPT to order a random list of items. Its response instead focused on randomizing a list of items.
This is not a trivial distinction. These are two entirely different activities. We cannot assume that evaluating one activity is the same as evaluating the other.
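A quick sketch in Python (the items are just an example) shows how different the two operations are:

    import random

    items = ["banana", "cherry", "apple"]  # an example list in no particular order

    # Ordering a random list: imposing a defined order (here, alphabetical).
    ordered = sorted(items)
    print(ordered)   # ['apple', 'banana', 'cherry']

    # Randomizing a list: discarding whatever order it had.
    shuffled = items.copy()
    random.shuffle(shuffled)
    print(shuffled)  # e.g. ['cherry', 'apple', 'banana']

One operation adds structure; the other removes it. An ethical evaluation of one tells you little about the other.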
Following up
Realizing this misunderstanding, I decided to clarify my question and give ChatGPT another chance to respond.
I explained that my question was about putting order into a list of random objects—not randomizing a list.
ChatGPT replied, “Thank you for clarifying!” It then added:
“If a commitment was made to organize the list according to specific criteria (e.g., urgency, relevance), but ChatGPT orders it based on simpler or irrelevant parameters, the result could break that promise.”
That’s a significantly different answer from what it said before.
In both cases, ChatGPT flagged potential ethical issues. But because it misunderstood my original question, only the second response was relevant to the quiz question at hand.
Had I not looked carefully at what it said, I might have passed inaccurate information along to my students. That would have done them no favors and reflected poorly on me.
Ethical AI leadership: the takeaway
If you’re using ChatGPT or similar AI tools, don’t treat their answers as definitive. Think of them as starting points in a conversation. Only through careful scrutiny can you be confident that what they tell you is relevant and true.
Ethical AI leadership demands vigilance. Artificial intelligence is powerful but imperfect. Be skeptical, and make sure that accuracy and integrity guide your actions.