Geoffrey Hinton from University of Toronto and John Hopfield from Princeton University both received this year’s Nobel Prize in physics Tuesday for their research in the field of artificial intelligence. Specifically, the duo was recognized for their foundational work in neural networks, which laid the groundwork for today’s large language models and generative AI.
Hinton And His Work Developing Backpropagation
Hinton is widely recognized as the godfather of artificial intelligence, and he made headlines last year when he quit working for Google so he could more freely speak against the risks of AI — a technology he played a significant role in creating.
Hinton was instrumental in developing a technique in the 1980s called backpropagation, which enables algorithms to learn. Here’s how the concept works.
If you were teaching a robot to distinguish between different animals, its learning process would consist of three major steps:
- Noticing mistakes: It views pictures and attempts to identify which animal it is seeing. Then, it goes back and sees how many mistakes it made.
- Figuring out why: If the robot makes a mistake, it tries to trace which part of its “brain” led to this error.
- The improvement step: It then makes slight changes in its “brain,” so it won’t make those mistakes again.
So after many repetitions, during which tens of thousands of images may be reviewed, the robot gets better and better at identifying the correct animal — which is basically how computers “learn.”
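The three steps above can be sketched in code. This is a deliberately simplified, single-layer illustration (the data, names, and numbers here are invented for the example, not taken from Hinton’s work); backpropagation proper traces blame through many layers of a network, but the guess-notice-correct loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "animal" data: two features per picture, label 0 = cat, 1 = dog.
X = rng.normal(0, 1, (100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the rule the robot must discover

w = rng.normal(0, 0.1, 2)   # the robot's "brain": two adjustable weights
b = 0.0
lr = 0.5                    # how big each correction is

def predict(X, w, b):
    # Squash the weighted sum into a 0..1 guess ("how dog-like is this?").
    return 1 / (1 + np.exp(-(X @ w + b)))

for epoch in range(200):
    p = predict(X, w, b)             # step 1: guess, then notice mistakes
    error = p - y
    grad_w = X.T @ error / len(y)    # step 2: trace which weights caused them
    grad_b = error.mean()
    w -= lr * grad_w                 # step 3: slightly adjust the "brain"
    b -= lr * grad_b

accuracy = ((predict(X, w, b) > 0.5) == y).mean()
```

After the loop, the weights have been nudged thousands of times, and the model classifies the toy data almost perfectly, mirroring how the robot in the analogy improves with repetition.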
Hopfield’s Work On Associative Memory
John Hopfield’s contribution concerns the concept of “associative memory”: he developed a type of computer memory that works a lot like the human brain. His associative memory model is similar to a huge connect-the-dots picture, where every dot stands for a piece of information.
- Connecting memories: Similar to how our brains create associations between concepts, the associative memory system forms links between related information by connecting the dots.
- Remembering incomplete things: What is amazing is that this kind of memory can complete a piece of information when given only a fragment of it. In other words, if someone showed you half of a smiley face drawing in a game of Pictionary, you could fill in the rest.
- How it works: You give it some information, such as part of a picture or a few words of a sentence, and it starts from there. The system follows connections to other dots, kind of like following a trail of breadcrumbs. It keeps doing this until it finds a pattern that makes sense, which is the complete memory.
- Learning and improving: The more these types of memory systems are used, the better they get at drawing connections and remembering things with accuracy.
- Why it’s special: This variety of memory can handle “noisy” or unclear information, just as you can recognize the sound of a friend’s voice at a crowded party.
What Hopfield did was bring the way computers store and recall information closer to the way human brains do it. In doing so, he enabled software to recognize patterns and fill in missing pieces.
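The connect-the-dots idea can be sketched as a tiny Hopfield-style network. This is a minimal illustration under simplifying assumptions (a single stored pattern, synchronous updates), not Hopfield’s original formulation: the “dots” are +1/−1 units, the connections are learned with a simple Hebbian rule, and recall works by repeatedly following the connections until the pattern settles.

```python
import numpy as np

def train(patterns):
    # Build the connection matrix: units that fire together get linked.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)      # no unit connects to itself
    return W / len(patterns)

def recall(W, state, steps=10):
    # Follow the "trail of breadcrumbs": each unit adopts the sign
    # its neighbors suggest, until the pattern stops changing.
    state = state.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Store one 6-bit "memory", then hand the network a corrupted copy.
memory = np.array([[1, 1, 1, -1, -1, -1]])
W = train(memory)
noisy = np.array([1, -1, 1, -1, -1, -1])   # second bit flipped
restored = recall(W, noisy)
```

Even though the input is “noisy,” following the learned connections pulls the state back to the complete stored pattern, which is the pattern-completion behavior described above.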
Both Hopfield’s and Hinton’s research paved the way for AI as we know it today.
Details About The Various Nobel Prize Awards
The annual honor comes with a cash award of $1 million from a bequest left by the award’s creator, Swedish inventor Alfred Nobel. The winners within each category — economics, physics, peace, literature, medicine and chemistry — are all invited to receive their awards at ceremonies on Dec. 10, to commemorate the anniversary of Nobel’s death in 1896.
Aside from establishing the award, Nobel is best known for inventing dynamite.