Geoffrey Hinton, regarded as one of the “godfathers of AI,” was recently awarded the 2024 Nobel Prize in Physics. Despite this honor, his views on artificial intelligence remain unchanged. In one of his first interviews since receiving the award, Hinton reiterated his concerns about the existential threat posed by AI, a threat he now believes is more imminent than previously thought.
An existential threat in 20 years or less. Hinton has long expressed concerns about the dangers of artificial intelligence, but he now sees the issue as increasingly urgent. Not long ago, he believed the risk was distant, perhaps 50 to 100 years away, suggesting we had time to prepare. However, he recently told Bloomberg, “I think it’s quite likely that sometime in the next 20 years, [these AI models] will get smarter than us, and we really need to worry about what happens then.” This concern echoes similar sentiments he shared in an interview with the BBC in May.
Control is needed. In the Bloomberg interview, Hinton emphasizes the importance of allocating more resources to ensure that humans retain control over the development and operation of artificial intelligence. However, he notes that large companies, rather than governments, have the necessary resources for this purpose.
A third of the computing power, rather than the revenue. He also suggests that companies shouldn’t commit a percentage of their revenue to AI safety, because the way they report those figures can be confusing and misleading. Instead, they should commit a percentage of their computing power.
One in four GPUs should be dedicated to addressing risks associated with AI. According to Hinton, the ideal figure is 33%: one third of the computing resources at companies like Microsoft, Google, Amazon, and Meta should go to research aimed at preventing AI from becoming a threat to humanity. However, he would settle for a quarter (25%) of those resources.
He left Google to raise awareness about the risks of AI. Hinton, a key figure in advancing machine learning, contributed to many important breakthroughs in the field. After working in Google’s AI division, he departed in the spring of 2023, shortly after the rapid rise of generative AI. He left so he could freely speak out about the dangers of unregulated artificial intelligence.
Other experts, like Yann LeCun, criticize this pessimistic perspective. LeCun, head of AI at Meta and a prominent figure in the field, has a markedly different outlook on the future of AI. In a recent interview with The Wall Street Journal, he dismissed the views of Hinton and others as “complete B.S.” LeCun believes that while AI holds significant potential, current generative AI systems are fundamentally limited and will never reach the level of human intelligence.
Image | Collision Conf