Hinton Asserts: AI Nearing Parity with Human Brain's Capabilities

2024-03-21

In a lecture at the University of Oxford, Geoffrey Hinton, one of the three researchers often called the godfathers of AI, said that digital models are approaching the capabilities of the human brain and will eventually surpass them. "A large language model has one trillion weights. You have one hundred trillion weights. Even if you only use 10% of them, you still have 10 trillion weights," Hinton said. He added that, within their roughly one trillion weights, large language models hold thousands of times more knowledge than we do. "It has more knowledge, partly because it has seen more data," he explained, "and it may also be because it has better learning algorithms."

The human brain, he argued, has never been optimized to compress a vast amount of experience into its connections. "We are optimized for not having much experience. You can only live for about a billion seconds," Hinton added, noting that humans learn relatively little after the age of 30. "You have far more parameters than you have experience. Our brains are optimized for not having much experience."

Separately, Elon Musk recently shared a clip from Joe Rogan's podcast featuring Ray Kurzweil, in which Kurzweil predicted that artificial intelligence will reach the level of any individual human by 2029. "AI could be smarter than any single person possibly next year. By 2029, AI could be much smarter than the sum of all human intelligence," Musk said. Kurzweil argued that people consistently underestimate the pace of technological progress. "In fact, technology doubles every 14 years," he said, whereas people assume it grows by only 2% per year. On computing speed, he said it has risen to 35 billion calculations per second, which he described as a 24-trillion-fold increase over the 0.00007 calculations per second available in 1939.

Hinton, who left Google last year, has also spoken openly about the threats posed by artificial intelligence. He has compared the potential risks of AI to the building of atomic bombs during World War II, warning that profit-driven AI development could flood the world with AI-generated content that outstrips what humans produce and ultimately endanger our existence.
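As a rough sanity check on the figures quoted above, the short Python sketch below redoes the back-of-the-envelope arithmetic. The one-trillion and one-hundred-trillion weight counts and the 10% figure come from Hinton's remarks; the 80-year lifespan used for comparison is an illustrative assumption added here, not something from the lecture.

```python
# Back-of-the-envelope check of the quoted orders of magnitude.
# Figures from Hinton's remarks: ~1e12 LLM weights, ~1e14 brain weights, 10% usage.
# The 80-year lifespan below is an illustrative assumption for comparison.

llm_weights = 1e12          # ~1 trillion weights in a large language model
brain_weights = 1e14        # ~100 trillion weights (connections) in the brain
usable_fraction = 0.10      # Hinton's "even if you only use 10% of them"

print(f"Usable brain weights: {brain_weights * usable_fraction:.0e}")   # ~1e13, i.e. 10 trillion
print(f"Brain-to-LLM weight ratio: {brain_weights / llm_weights:.0f}x") # ~100x

seconds_per_year = 365.25 * 24 * 3600
print(f"One billion seconds is about {1e9 / seconds_per_year:.1f} years")   # ~31.7 years
print(f"An 80-year life is about {80 * seconds_per_year:.1e} seconds")      # ~2.5e9 seconds
```

A billion seconds works out to roughly 31.7 years, which lines up with Hinton's remark that humans learn relatively little after the age of 30.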