
‘We Know How To Build AGI’


OpenAI CEO Sam Altman said his super-startup, maker of ChatGPT, the software that reignited the AI space in November 2022, knows how to build artificial general intelligence.

“We are now confident we know how to build AGI as we have traditionally understood it,” Altman posted to his personal blog over the weekend. “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”

But AGI is about much more than agents, which business software companies have been talking about for a year. Artificial general intelligence is about “the glorious future,” Altman said, beyond agents that do business tasks for us.

“We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.”

That will worry many luminaries in technology, such as the “godfather of AI,” Geoff Hinton, who has sounded alarms about current AI research. Many others, including Elon Musk, Apple co-founder Steve Wozniak and Bulletin of the Atomic Scientists president Rachel Bronson, signed an open letter with thousands of others in early 2023 calling for a pause on “giant AI experiments.”

Some AI researchers, such as Roman Yampolskiy, a professor at the University of Louisville, believe we already have AGI, under a narrow definition. Case in point: the already-outdated GPT-4 is itself generally better than a human across hundreds of domains.

“It can write poetry, generate art, play games,” Yampolskiy told me in a TechFirst podcast. “No human being can compete in all those domains, even very capable ones. So truly, if you average over all existing and hypothetical future tasks, it’s already dominating just because it’s so universal.”

Altman, of course, is talking about yet another level: superintelligent AI that can conduct research, create new fields of knowledge and invent entirely new things, possibly with, but possibly without, ongoing input from and partnership with humans.

Altman knows how that sounds.

“This sounds like science fiction right now, and somewhat crazy to even talk about it,” he said.

But he’s not worried about sounding crazy.

“We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important,” Altman said.

Talk of imminent AGI tends to bring up the concept of the singularity, a hypothetical point in the future when technological growth driven by artificial general intelligence becomes so fast and so profound that it is essentially uncontrollable and irreversible, resulting in massive and unpredictable changes to human civilization.

Last year, Dr. Ben Goertzel, CEO of SingularityNET, chairman of the Artificial General Intelligence Society and former chief scientist at Hanson Robotics, told me AGI was just three to eight years away.

“If we wanted to define AGI as the creation of machines with the general intelligence of a really smart human on their best day, I would say we’re three to eight years from that,” Goertzel says. “So I think we’re pretty close.”

But Goertzel was not confident that large language models were the path to AGI, nor that adding a few more bells and whistles to LLMs or making them bigger would result in artificial general intelligence.

“On the other hand, I think they can be a powerful accelerant toward the creation of AGI,” Goertzel told me.

There are also researchers who think the whole concept of artificial general intelligence is misguided. One of them is Neil Lawrence, an author, DeepMind Professor of Machine Learning at the University of Cambridge, and senior fellow at the Alan Turing Institute.

“I think the notion of AGI is kind of nonsense because it’s a misunderstanding of the nature of intelligence,” says Lawrence, who wrote The Atomic Human partially to counteract this tendency. “We have a spectrum of intelligence, a spectrum of capabilities. There is no One Ring to rule them all. There’s a diversity of intelligences.”

All that said, Altman is forging ahead. And given what OpenAI has achieved already — ChatGPT is my primary search engine and knowledge engine — it would be fairly challenging to bet against him.

What’s clear is that if OpenAI does succeed in achieving some version of AGI, many things will change very, very quickly.

Dan Faggella, the CEO and founder of Emerj Artificial Intelligence Research, has interviewed nearly 1,000 AI experts and business leaders. He says those changes could include:

  • massive automation
  • significant workforce disruption
  • potential existential threats
  • global economic and military power shifts
  • and much more

In short, AGI is kind of a big deal, and Altman understands that.

“Given the possibilities of our work, OpenAI cannot be a normal company,” Altman said. “How lucky and humbling it is to be able to play a role in this work.”
