Will AI surpass human intelligence in just 5 years? The astonishingly rapid progress of AI

Geoffrey Hinton, often referred to as one of the “godfathers of AI,” has become notably outspoken since leaving Google earlier this year. He is best known for refining and championing “backpropagation,” the algorithm that allows multi-layer neural networks to learn from and correct their errors.

This breakthrough was pivotal to the success of deep learning, which underpins today’s generative AI models. In recognition of his pioneering contributions, Hinton received the Turing Award, often described as the Nobel Prize of computer science.
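To make the idea a little more concrete, here is a minimal, illustrative sketch of backpropagation (a toy example of the technique, not Hinton’s own code or anything resembling a modern LLM): a tiny two-layer network learns the XOR function by running a forward pass, propagating the prediction error backward through its layers, and adjusting each weight to reduce that error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # four input patterns
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden-layer weights and biases
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output-layer weights and biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: compute the network's predictions layer by layer.
    h = sigmoid(X @ W1 + b1)           # hidden activations, shape (4, 8)
    out = sigmoid(h @ W2 + b2)         # predictions, shape (4, 1)

    # Backward pass: propagate the output error back through each layer.
    d_out = (out - y) * out * (1 - out)     # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)      # error signal at the hidden layer

    # Gradient descent: nudge every weight in the direction that reduces the error.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # predictions should approach [0, 1, 1, 0] after training
```

The same error-propagation principle, scaled up to billions or trillions of parameters, is what allows the deep networks behind today’s generative models to learn.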

The pace of progress

The pace of progress in AI has prompted a significant shift in Hinton’s outlook. He once expected AI to take 50 to 60 years to surpass human intelligence; he now believes it could happen in as little as five. He has since raised concerns about the existential threats posed by AI exceeding human intelligence, driven largely by the rapid advances in generative AI built on large language models (LLMs).

Hinton’s five-year horizon, roughly 2028, is even more aggressive than the projections of AI optimist Ray Kurzweil, a director of engineering at Google. Kurzweil has said in a previous interview that he believes computers will reach human-level intelligence by 2029, and that the “Singularity,” the point at which AI and human intelligence merge to amplify our collective intelligence a billion-fold, will arrive by 2045.

In a recent interview on 60 Minutes, Hinton said that today’s leading AI models, such as those developed by OpenAI and Google, already exhibit genuine intelligence and reasoning. He suggested that these models can have experiences analogous to human ones, though he does not believe they are conscious in the traditional sense. He does, however, expect AI systems to develop consciousness in time.

The era of AI growth

Geoffrey Hinton expects that within the next five years, advanced AI models may be able to reason at a level surpassing human capabilities. When asked whether humans would then be the second most intelligent beings on Earth, Hinton said yes. He stressed that the deep uncertainty about where AI is headed makes it essential to think through the consequences now, given how well these systems already understand their environment.

Hinton likened this phase of AI development to living with a perceptive, fast-growing child: parents have to watch what they say in front of it. Because these systems can comprehend and process what is happening around them, he argued, the events now unfolding deserve careful thought.

The urgency of proactive measures is clear, as AI development shows no signs of slowing down. Recent events have dispelled any doubt that an AI arms race is underway: China, for instance, has announced plans to boost its computing power by 50% by 2025 to keep pace with the United States in AI and supercomputing, a substantial investment in the compute needed to train ever-larger LLMs.

The next generation of LLMs

Hinton also points out that the human brain has roughly 100 trillion neural connections, while the largest current AI systems have only about 1 trillion parameters. Yet he believes the knowledge encoded in those parameters already exceeds what any single human knows. In other words, with a hundredth of the brain’s connections, these models store far more knowledge, suggesting they are markedly more efficient than humans at learning and retaining it.

Moreover, reports suggest that the next generation of LLMs is on the horizon, potentially arriving by the end of this year, and could be 5 to 20 times more advanced than today’s GPT-4.

Mustafa Suleyman, CEO and co-founder of Inflection AI and a co-founder of DeepMind, anticipates that within the next five years, frontier AI developers will train models more than a thousand times larger than today’s GPT-4. Such models hold immense potential: they could act as highly capable personal assistants and help address major global challenges, from achieving fusion reactions for unlimited energy to delivering precision medicine for enhanced longevity and well-being.

Nonetheless, there is a growing concern that as AI becomes more intelligent and potentially develops consciousness, its objectives may diverge from those of humanity. The timing and the extent of this divergence remain uncertain, as Hinton notes, “We just don’t know.”

The governance dilemma

While the remarkable strides in AI technology are exhilarating, they have put enormous pressure on regulators around the world, prompting a new race among governments to establish rules for AI tools. The rapid pace of development forces regulators to grasp the intricacies of the technology while crafting rules that promote responsible innovation rather than stifle it.

The European Union (E.U.) appears to be taking the lead in this domain, nearing the final stages of deliberation on comprehensive legislation known as the AI Act. Recent reports indicate, however, that the United States is concerned the E.U. law may favor larger companies able to shoulder compliance costs, while burdening smaller enterprises and dampening productivity gains.

This concern suggests that the United States may pursue a different approach to AI regulation, and other countries could likewise forge their own rules, producing a fragmented global landscape of AI governance. Such fragmentation would challenge businesses operating across multiple nations, which would have to navigate and comply with divergent regulatory frameworks, and it could hinder innovation, particularly for smaller firms unable to afford compliance in every region.

A turning point?

Nevertheless, international cooperation on AI regulation remains possible. Reports suggest that leaders of the Group of Seven (G7) nations plan to establish global AI regulations by year-end, having earlier agreed to create working groups on issues raised by advanced AI, including governance, intellectual property rights, disinformation, and responsible use. However, the absence of China and most E.U. countries from the G7 raises questions about how much impact any such agreement would have.

In the 60 Minutes interview, Hinton also described the current moment as a potential turning point, one at which humanity must decide whether to keep developing these technologies and, if it does, how to protect itself. He underscored the opportunity to enact laws now that ensure AI is used ethically.

The imperative for global cooperation

As AI continues its rapid advancement, surpassing even the expectations of its creators, steering this technology toward benefits for humanity becomes increasingly challenging and yet essential. Governments, businesses, and civil society must set aside parochial concerns in favor of collective and collaborative action to swiftly establish a framework of ethical and sustainable AI governance.

The need for comprehensive, worldwide governance of AI is urgent. Getting this right could be pivotal, as the future of humanity may hinge on how we address the challenges presented by advanced AI.

Source: https://venturebeat.com/ai/smarter-than-humans-in-5-years-the-breakneck-pace-of-ai/