In 1937, Clinton Davisson was awarded the Nobel Prize in Physics for his discovery of electron diffraction in what became known as the Davisson-Germer experiment. Though Davisson studied at Princeton and taught at the Carnegie Institute of Technology, he spent most of his professional career, and did his breakthrough research, at Bell Labs, at the time an enterprise owned by American Telephone and Telegraph (AT&T) and Western Electric.
In the following decades, scientists at Bell Labs won nine more Nobel Prizes in physics and chemistry. Funded by private industry, Bell Labs came to be regarded as the premier industrial research facility of its era, developing important new technologies such as the transistor, the laser, and the Unix computer operating system.
This week, three research scientists—David Baker, Demis Hassabis, and John Jumper—won the 2024 Nobel Prize in Chemistry for predicting the structure of proteins and designing new ones, work that has produced advances in areas such as drug development. The work of Hassabis and Jumper has been underwritten by DeepMind, the AI research subsidiary of Google. Hassabis is the CEO of DeepMind, and Jumper works there as a senior research scientist.
It is not surprising that today the largest tech firms are supporting cutting-edge science, as Bell Labs began doing a century ago. Alphabet (the holding company for Google and YouTube), Microsoft, Apple, Amazon, and Meta (Facebook and Instagram) are now among the largest U.S.-based companies and drive much of our economic growth. These companies are innovation machines, and they have been quick to acquire firms that help them maintain their technical advantage. This is what Google did with DeepMind, a British artificial intelligence research laboratory co-founded by Hassabis in 2010 and acquired by Google in 2014. The company started by building neural network models to play video and board games. In 2016, its AlphaGo program beat world champion Go player Lee Sedol in a five-game match.
Founded in 1998, Google quickly emerged as one of the world’s most valuable brands, dominating the online search sector while also becoming a leading provider of advertising services, video sharing (YouTube), web navigation (Chrome), operating systems (Android), cloud computing, and, most recently, generative AI. Last year, the company’s revenue topped $300 billion; its market capitalization is about $2 trillion. Google’s massive size gives it the resources to fund constructive scientific efforts such as those undertaken by Hassabis and Jumper, which the Swedish Nobel committee recognized this week.
But Google’s size and dominance also pose challenges to the company and, more importantly, to society at large, raising concerns that competition is being stifled. This week, as part of its successful antitrust litigation against Google, the U.S. Department of Justice filed court papers saying that the government is considering a “full range of tools” to restore greater competition in this space. Antitrust litigation is slow and difficult, but Google now faces serious regulatory pressures in both the U.S. and Europe. A federal judge is considering the government’s proposed remedies and Google’s inevitable objections to them.
On a parallel track, the company faces challenges relating to the social and political impact of YouTube, its massive video-sharing service. In 2022, my colleagues at the NYU Stern Center for Business and Human Rights published a report on the challenges the company faces in addressing harmful content on the site. They concluded that while YouTube provides its two billion users with access to news, entertainment, and do-it-yourself videos, “it also serves as a venue for political disinformation, public health myths and incitement of violence.”
Google’s successes and contributions with DeepMind coexist with still other challenges the company faces in deploying new AI tools. Earlier this year, the company was forced to pause its Gemini AI image generator after the model churned out historically inaccurate images, prompting a backlash that targeted the company’s diversity and inclusion efforts. More broadly, as my colleague Paul Barrett has explained in his report, “Safeguarding AI,” companies like Google have rushed to release generative AI models without thoroughly testing them to minimize “hallucination” and harmful content.
Google has the resources and technical capacity to help create online environments that promote democracy and strengthen society in the spirit of the Nobel accolades its scientists have recently received.