Artificial General Intelligence (AGI), often referred to as "strong AI," "full AI," "human-level AI," or "general intelligent behavior," represents a major future leap for the field of artificial intelligence. Unlike narrow AI, which is designed for specific tasks such as detecting product defects, summarizing the news, or building a website for you, AGI would be able to perform a broad range of cognitive tasks at or above human level. Speaking to the press at NVIDIA's annual GTC developer conference this week, CEO Jensen Huang appeared tired of discussing the topic, partly, he said, because he is often misquoted.
It is reasonable that the question comes up often: the concept raises existential questions about humanity's role in, and control over, a future where machines can outthink, outlearn, and outperform humans in virtually every field. The core concern is the unpredictability of AGI's decision-making processes and goals, which may not align with human values or priorities (a concept explored in science fiction since at least the 1940s). Some worry that once AGI reaches a certain level of autonomy and capability, it could become impossible to contain or constrain, leading to scenarios where its actions cannot be predicted or reversed.
When sensation-seeking media ask for a timeline, they are often trying to get AI professionals to put a date on the end of humanity, or at least the end of the current status quo. Needless to say, the CEOs of AI companies are not always eager to discuss the subject.
Huang, however, did spend some time sharing his views. He argued that predicting when we will see a workable AGI depends on how you define AGI, and he offered a few analogies: despite the complications of time zones, you know when New Year's Day arrives and when 2025 begins. If you are driving to the San Jose Convention Center (the venue for this year's GTC conference), you generally know you have arrived when you can see the enormous GTC banners. The key point is that we can agree on how to measure whether you have reached your destination, whether in time or in space.
"If we define AGI as something very specific, a series of tests in which a software program can do very well - or better than most people by 8% - I believe we will achieve it within 5 years," Jensen Huang explained. He suggested that these tests could be legal qualification exams, logic tests, economic tests, or the ability to pass a medical pre-medical exam. Unless the questioner can specify very specifically what AGI means in the context of the question, he is unwilling to make predictions. This is normal.
AI Hallucination Is Solvable
In the Q&A session on Tuesday, Jensen Huang was asked how to deal with AI illusions - that is, when some AI tends to generate answers that sound reasonable but are not based on facts. He appeared very frustrated with this question and suggested solving the illusion problem by ensuring that answers are thoroughly researched.
"Add a rule: for every answer, you must look it up," Jensen Huang said, referring to this approach as "retrieval-augmented generation," which describes a method very similar to basic media literacy: checking sources and context. Compare the facts contained in the source with known facts, and if the answer is inaccurate - even partially - abandon the entire source and move on to the next one. "AI shouldn't just answer questions; it should research first to determine which answers are the best."
For critical tasks such as health advice, Jensen Huang suggests that perhaps checking multiple resources and known sources of facts is the way forward. Of course, this means that the answer generator needs to have options such as saying, "I don't know the answer to your question," or "I cannot reach a consensus on the correct answer to this question," or even something like, "Hey, the Super Bowl hasn't happened yet, so I don't know who won."
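A small follow-up sketch, under the same assumptions as above, shows what consulting multiple sources could look like in practice: tally the answers the vetted sources give and abstain when they do not agree. The threshold and messages are hypothetical choices for illustration.

```python
# Illustrative consensus check across several vetted sources, with an explicit
# option to abstain rather than guess.
from collections import Counter

def consensus_answer(candidates, min_agreement=2):
    """Return the majority answer across sources, or abstain if there is no consensus."""
    if not candidates:
        return "I don't know the answer to your question."
    best, count = Counter(candidates).most_common(1)[0]
    if count < min_agreement:
        return "I can't reach a consensus on the right answer to this question."
    return best

# Three retrieved sources disagree, so the system abstains instead of answering.
print(consensus_answer(["canberra", "sydney", "melbourne"]))
```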