Whether AI is helpful or harmful will be the top challenge facing businesses in 2025. Much of the news surrounding artificial intelligence focuses on technological issues, such as how to build faster computers, make AI programs work more efficiently, and get new AI tools to work with older technology.
Unless we use technology for good purposes and guard against abuse, however, it doesn’t matter how technically sophisticated AI becomes. The harms will overshadow the advances.
Let’s consider three ethical challenges that AI raises for businesses around the world. We’ll also look at how some businesses address these challenges and what all of this means for you and your own organization.
1. Saving lives, preventing disaster: Why AI safety must come first
The most fundamental ethical principle of all is “Do no harm.” It applies not just to physicians and other health care workers but to leaders in the AI space and everybody else.
AI safety is not optional. It is an urgent necessity. Speaking at the AI Safety Summit in November 2023, Dario Amodei, CEO of Anthropic, emphasized the importance of ongoing risk assessment and response. “We need both a way to frequently monitor these emerging risks, and a protocol for responding appropriately when they occur,” he said.
Although the summit took place in 2023, Amodei’s insights remain critical for 2025. The challenges he outlined—establishing rigorous evaluations and proactive response mechanisms—are foundational principles for managing AI risks that continue to grow.
Examples of AI-caused harm
Some of the grievous harms AI can cause include:
- Bias and discrimination, in which AI models perpetuate unfair practices in hiring, lending, and law enforcement, to name a few areas of concern
- Privacy violations, since AI-powered surveillance systems can compromise individual security
- Autonomous weapons, whereby artificial intelligence makes life-or-death decisions without human oversight
- Manipulation and misinformation through deepfakes and other AI-generated falsehoods, which can erode trust and incite political or social unrest
How companies are managing this ethical challenge
The company Anthropic, cited above, uses the process called “red teaming” to stress-test their AI systems. This means simulating adversarial attacks to identify weaknesses like biased outputs or harmful behaviors. Red teaming helps to ensure AI models are safe, reliable, and resilient before a company uses them.
By prioritizing safety over speed, delaying product launches when necessary, and collaborating with regulators to establish industry-wide safety standards, companies like Anthropic demonstrate how rigorous testing can build trust and prevent harmful outcomes.
For reflection
How can you prioritize safety in AI development without sacrificing innovation or speed?
2. Fighting the chaos: Will regulations catch up in time?
When I began my career at the West Virginia University Health Sciences Center in Morgantown, I took a seminar in time management. The instructor, law professor Forest “Jack” Bowman, told us, “If you don’t manage your time, someone else will.”
That wise saying could be updated to: “If you don’t manage your AI systems, someone else must.”
At a conference sponsored by Reuters yesterday, Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute, highlighted the challenges policymakers face in developing AI safeguards because of the rapidly evolving nature of the technology.
Kelly noted that in areas like cybersecurity, it can be easy to bypass AI security measures. These workarounds are known as “jailbreaks” and can be easily executed by tech-savvy people.
Recall, in David Fincher’s The Girl with the Dragon Tattoo, the look of disbelief that Rooney Mara’s hacker Lisbeth Salander (Rooney Mara) gives Mikael Blomkvist (Daniel Craig) when he asks her about the difficulty of breaking into a computer system. And that was in 2011! (Written by Steven Zaillian, the film was based on the novel by Stieg Larsson.)
The European Union is one part of the world that is tackling the need for government regulation of AI systems. Its Artificial Intelligence Act (AI Act), which went into effect on August 1 this year, bans AI with unacceptable risks, like social scoring, in which individuals are given scores based on their behavior and actions. Social scoring can unfairly limit access to financial services, employment, travel, education, housing, and public benefits.
How companies are managing this ethical challenge
IBM has already taken proactive steps to address the concerns of the EU’s legislation through initiatives such as its Precision Regulation Policy. That policy addresses three components of AI ethics: 1) accountability, 2) transparency, and 3) fairness.
It’s worth taking a look at this document, because it presents a blueprint for how any company, and not just IBM, can use AI in the right way and for the right reasons.
For reflection
What is your company doing to align your AI systems with emerging regulations and thus avoid potential legal or ethical risks?
3. AI and the future of work: will technology leave us behind?
Earlier we considered the ethical principle Do No Harm with respect to safety. That fundamental ethical imperative also applies to employment. Whatever euphemism you wish to use—reduction in force, downsizing—the effect is the same: letting loyal, hardworking employees go causes harm, even if there are financial benefits for the companies that do this.
Andrew Yang, former presidential candidate and founder of the Forward Party, has been a vocal advocate on this issue. “The IMF [International Monetary Fund] said that about 40 percent of global jobs could be affected,” he noted earlier this year. “That’s hundreds of millions of workers around the world.”
How companies are managing this ethical challenge
In response to these concerns, some companies are forging mutually beneficial relationships with nonprofit organizations. “Nonprofits can often connect businesses with underrepresented talent in the knowledge workforce,” writes Cognizant Chief People Officer Kathy Diaz in an article for the World Economic Forum. “The IT Senior Management Forum is one of the many nonprofits leading the way in this area.”
For reflection
How can your organization ensure both technological advancement and job security with respect to its use of AI?
The takeaway
In 2025, businesses will have to answer the crucial question, “How can we use AI for good and prevent abuse?” If your organization takes this question seriously, you will go a long way toward ensuring that your own AI systems don’t wind up like HAL 9000 from 2001: A Space Odyssey and become humanity’s worst nightmare.