
80% Of Surveyed Businesses Don’t Have Plans For An AI-Related Crisis

by admin

A fundamental best practice for managing crisis situations is to prepare for known risks. Business leaders who ignore potential threats could create a self-inflicted crisis for their companies.

A case in point is the danger posed by AI, a technology that can also deliver important advantages and benefits to companies and organizations that use it with appropriate safeguards.

Despite news coverage and warnings about the threats from this rapidly improving technology, 80% of surveyed organizations still don’t have a dedicated plan to address generative AI risks, including AI-driven fraud attacks.

That’s according to the 2024 New Generation of Risk Report, released last month by Riskonnect, a risk management software company.

Awareness Of Growing Risks And Threats

Of the 218 risk, compliance, and resilience professionals around the world who responded to the survey:

  • 72% said cybersecurity risks are having a significant or severe impact on their organization, a notable increase from last year’s 47%.
  • 24% said AI-powered cybersecurity threats—such as ransomware, phishing, and deepfakes—will have the biggest impact on businesses over the next 12 months.
  • 65% of companies don’t have a policy in place to govern the use of generative AI by partners and suppliers, even though third parties are a common entry point for fraudsters, according to Riskonnect.

Increased Concerns

“Concerns over AI ethics, privacy, and security continue to mount,” according to Riskonnect’s report.

“AI also tentacles into cybersecurity, geopolitics, and other areas, supercharging the risks of everything in its path. Hackers, for instance, are getting smarter, more sophisticated, and dangerous by the minute as they leverage the latest AI advancements to infiltrate organizations,” it observed.

Despite growing concern about the crisis situations that AI could cause, efforts to address those concerns are lagging behind.

The report points out that “while companies’ top concerns [about AI] have shifted over the past year, risk management approaches largely haven’t evolved fast enough, and key gaps remain. The data also suggests that risk management is increasingly seen as a strategic business function, but continued investment is necessary to keep up with the changing risk landscape.”

Internal Threats

Internal threats can be just as damaging to companies as external ones. One example is the use of generative AI by companies to create marketing-related content.

“While well-prompted AI is an excellent starting point for written text, marketers need to ensure that ad copy, emails, and text messages are carefully proofread by human editors and not merely resubmitted to the same or a different AI program for proofing. This is because generative AI is focused on writing for clarity, but not necessarily for persuasion, which should be a primary communication goal for marketers,” Anthony Miyazaki, a professor of marketing at Florida International University, recommended in an email interview.

There’s another way in which reliance on generative AI can backfire for companies.

“More concerning is using AI to generate website content. Google has already warned web developers that AI content will be deprioritized if it is used to try to game the search process, and this would severely damage organic and even paid SEO,” Miyazaki pointed out.

Internal Safeguards

“A lot of organizational AI policies are heavily focused on protecting the organization from internal use of AI,” Andrew Gamino-Chong, chief technology officer and co-founder of Trustible, observed via email.

But organizations need to ensure that their policies cover all the bases.

Companies “want to ensure confidential data isn’t leaked, that AI chatbots are secure, and comply with relevant regulations. However, those policies sometimes omit setting clear standards for the AI systems they are building for customers; many regulations specifically want organizations to consider the downstream effects of their AI systems on individuals, groups, and communities,” he noted.

One Company’s Proactive Steps

“The risks are very real, and we’ve taken deliberate steps to mitigate them,” Ryan Doser, vice president of inbound marketing at Empathy First Media, a digital marketing agency, commented via email.

He said the company has implemented the following guidelines and procedures to help ensure the responsible use of AI by employees:

Privacy

  • It prohibits entering a client’s proprietary or sensitive data into generative AI tools (a minimal sketch of one way such a rule could be enforced appears after this list).

Quality Control

  • It does not allow generative AI responses to be copied and pasted verbatim, and requires them to be reviewed and polished by humans to help ensure their accuracy and alignment with client expectations.

Regulatory Compliance

  • The company avoids using the technology where it could conflict with industry-specific compliance standards.

Transparency

  • The company tells clients when generative AI has been used to create content.

“Transparency builds trust and helps educate our clients on how these tools are being used to enhance their campaigns,” Doser concluded.
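
To be clear, Doser’s list describes policies, not tooling, and the company did not share an implementation. As a purely illustrative sketch, though, a team could enforce a privacy rule like the first one with a short pre-submission check that blocks prompts containing client-flagged terms before they ever reach a generative AI tool. Every name and pattern below is invented for the example:

    # Hypothetical sketch of a privacy guardrail; not any company's actual tooling.
    # It screens a prompt against a deny-list before the prompt is sent to a
    # generative AI tool, and blocks it if any sensitive pattern matches.
    import re

    # Example deny-list. A real deployment would load client-specific terms from
    # a managed source; these entries are invented for illustration.
    SENSITIVE_PATTERNS = [
        re.compile(r"\bacme\s+corp\b", re.IGNORECASE),  # hypothetical client name
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-like number
        re.compile(r"\bconfidential\b", re.IGNORECASE), # marked documents
    ]

    def screen_prompt(prompt: str) -> list[str]:
        """Return the patterns a prompt violates; an empty list means it may be sent."""
        return [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]

    if __name__ == "__main__":
        prompt = "Draft an email for Acme Corp about their confidential Q3 results."
        violations = screen_prompt(prompt)
        if violations:
            print("Blocked: prompt matches sensitive patterns:", violations)
        else:
            print("OK to send to the generative AI tool.")

A deny-list like this is deliberately crude; the point is simply that a written policy can be checked automatically before client data leaves the building.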

Why Wait?

As I noted in a story about Riskonnect’s 2023 report, “The longer companies wait to prepare themselves for the risks and dangers associated with AI, the longer they will be unprotected from this potential crisis.

“Why should business leaders wait any longer to do the right thing?”

Given the growing sophistication of AI, there’s an even more urgent need today for business leaders to protect their organizations from the threats posed by this technology.
