
Forging A Global Path to Responsible AI To Achieve a Beneficial AI Future


The transformational potential of AI is undisputed. In a recent survey of global AI and data leaders, 98.4% stated that their organizations were increasing their investments in AI and the data that underpins it, and 96.6% expressed the view that the overall impact of AI will be beneficial.

This optimism is encouraging but tempered by a consensus that, to deliver on its transformational potential, AI must be implemented in a responsible and thoughtful manner: 97.5% of executives believe that safeguards and guardrails for governing AI must be in place.

The Forum for Cooperation on Artificial Intelligence (FCAI)

International recognition that AI must be implemented responsibly to deliver its many potential benefits was the impetus behind the establishment, more than five years ago, of the Forum for Cooperation on Artificial Intelligence (FCAI), a collaboration between the Brookings Institution, a Washington, D.C.-based public policy think tank, and the Centre for European Policy Studies (CEPS), a European Union think tank based in Brussels, Belgium.

The FCAI will soon host its 25th roundtable of high-level officials from seven governments – Australia, Canada, the European Union, Japan, Singapore, the United Kingdom, and the United States – along with experts from industry and academia. Their focus has been on identifying opportunities for cooperation, and on establishing a common understanding and alignment, on AI regulation, industry standards, and research and development, including sensitive areas such as standards for privacy and data protection. The organization's work has accelerated over the past two years since the release of ChatGPT and the resulting explosion in AI adoption.

It was over a decade ago that I was first introduced to Cameron (Cam) Kerry, whom I met in the context of his work as a visiting scholar at the MIT Media Lab. Kerry was also serving at that time as senior counsel with Sidley Austin LLP in Boston, MA, where his practice focused on privacy, security, and international trade issues. He had previously served as general counsel and acting secretary of the U.S. Department of Commerce during the Obama administration, where he directed development of the White House blueprint on consumer privacy, “Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy”.

Today, Kerry serves as the Ann R. and Andrew H. Tisch Distinguished Visiting Fellow for Governance Studies in the Center for Technology Innovation at the Brookings Institution, where he is a leader of the Brookings FCAI. On February 10 of this year, the FCAI released its latest report, “Network architecture for global AI policy”, which outlines a global path for collaboration and cooperation on AI governance and AI standards. The report was co-authored by Kerry, Joshua Meltzer, and Andrew Wyckoff of the FCAI team, and Andrea Renda of the Centre for European Policy Studies (CEPS).

The Artificial Intelligence (AI) Action Summit in Paris

Kerry had just returned from the Artificial Intelligence (AI) Action Summit, held February 10-11 at the Grand Palais in Paris, when I spoke with him about the current state and future prospects of responsible AI global governance and standards. The Paris event, which brought together world leaders, industry executives, academics, and other AI stakeholders, achieved some dubious notoriety. The Economist referred to it as “the Paris discord”, observing that “The attempt at global harmony ended in cacophony” and highlighting the tension between “cutting through the red tape that prevents companies from innovating and adopting AI” and the need for privacy rules.

Kerry recognizes that balancing innovation – and the benefits that flow from the responsible application of AI – with a thoughtful, consensus-driven international governance approach is a long-term process and commitment. Kerry comments, “The distributed and iterative network of networks is presented as an alternative to more centralized or consolidated approaches”. He continues, “As I put it, a multiplicity of diverse initiatives is a feature, not a bug considering the scale, complexity, and uncertainty around generative AI”. Kerry adds that as these efforts develop, the FCAI team has been meeting with AI leaders and representatives from Latin America and Africa as part of this global application of AI policy.

Network architecture for global AI policy

AI safety is a growing concern for governments and citizens. In the survey of global AI and data leaders, 53.2% cited fears of the spread of misinformation and disinformation, and another 19.8% expressed concern about ethical bias in AI. The recent FCAI report notes, “Development of AI standards in global standards bodies is a key aspect of AI governance”. The report goes on to state, “Broadening access for governments, industry, and civil society is needed to strengthen the legitimacy of these standards and ensure that the resulting standards respond to differing AI needs”.

While recognizing that we are at a moment in history when global anti-regulatory momentum is high, Kerry remains committed and strikes a note of cautious optimism. The FCAI report summarizes its conclusions, stating, “As AI governance initiatives progress, networked and distributed governance will remain the singular form of international cooperation that can respond to the rapid pace at which AI is accelerating”.

The FCAI report concludes, “While the future of AI is uncertain, it is certain that AI models and applications will develop in ways that raise new challenges and opportunities”, adding, “for AI governance to be effective, it must adapt and respond to rapid and dazzling technological changes”.

It is amid this rapidly evolving landscape and these regulatory headwinds that Kerry’s work forges ahead. Kerry concludes, “I see reasons for optimism, including bi-partisan support in the U.S. congress for responsible AI legislation”. The coming years will be telling. If executed thoughtfully and effectively, responsible AI guardrails, safeguards, and governance can only help enable and accelerate a transformational and ultimately beneficial AI future.
