The dream of Artificial General Intelligence (AGI), a machine with human-like intelligence, traces back to foundational computational theories of the 1950s, when pioneers like John von Neumann explored the possibility of replicating the human brain’s functions.
AGI represents a paradigm shift from the wide variety of narrow AI tools and algorithms in use today, which excel at specific tasks, toward a form of intelligence that can learn, understand, and apply its knowledge across a wide range of tasks at or beyond the human level.
While the precise definition of AGI is not broadly agreed upon, the term “Artificial General Intelligence” has multiple closely related meanings, all referring to the capacity of an engineered system to learn, reason, and apply its intelligence across domains much as a human can.
The journey to AGI has seen numerous theories and conceptual frameworks, each contributing to our understanding of, and aspirations for, this seemingly imminent revolution in technology.
Let’s take a look back and explore some of the core theories and conceptualizations that, over the decades, gave rise to the concept we know today as AGI.
Turing and the Turing Test (1950)
Alan Turing’s seminal paper, “Computing Machinery and Intelligence,” introduced the idea that machines could potentially exhibit intelligent behavior indistinguishable from humans.
The Turing Test, which evaluates a machine’s ability to exhibit human-like responses, became a foundational concept, emphasizing the importance of behavior in defining intelligence.
Soon after, in 1958, John von Neumann’s book, “The Computer and the Brain,” explored parallels between neural processes and computational systems, sparking early interest in neurocomputational models.
From the 1950s through the 1960s, Allen Newell and Herbert A. Simon proposed the Physical Symbol System Hypothesis, asserting that a physical symbol system has the necessary and sufficient means for general intelligent action.
These ideas, although not foundational AGI theories per se, underpinned much of early AI research, leading to the development of symbolic AI, which focuses on high-level, human-readable symbolic representations of problems and logic.
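To make the symbolic style concrete, here is a minimal toy sketch in Python (a hypothetical illustration, not a historical system): knowledge lives in human-readable facts and if-then rules, and inference is simple forward chaining until no new facts appear.

```python
# Toy symbolic AI: human-readable facts plus if-then rules,
# with inference by forward chaining to a fixed point.
facts = {"bird(tweety)"}
rules = [
    ("bird(tweety)", "has_wings(tweety)"),    # premise -> conclusion
    ("has_wings(tweety)", "can_fly(tweety)"),
]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # ['bird(tweety)', 'can_fly(tweety)', 'has_wings(tweety)']
```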
By the end of the 1960s, Marvin Minsky and Seymour Papert’s book, “Perceptrons,” critically examined early neural network models, highlighting their limitations. This work, while initially seen as a setback for connectionist models, eventually spurred deeper research into neural networks and their capabilities, influencing later developments in machine learning.
In 1956, Newell and Simon developed the Logic Theorist, considered by many to be the first real AI program. It was able to prove theorems in symbolic logic, marking a significant milestone in AI research and development. Shortly after, in 1958, John McCarthy developed LISP, a programming language that became fundamental to AI research.
In the 1970s, the early promise of AI faced significant setbacks. Expectations were high, but the technology could not deliver the grandiose benefits that had been promised.
Systems struggled with complex problems, and the limitations of early neural networks and symbolic AI became apparent. Due to the lack of progress and overhyped expectations, funding for AI research dried up. This period of reduced funding and interest is known as the first AI winter.
In the 1980s, a resurgence in neural network research occurred.
The development and commercialization of expert systems brought AI back into the spotlight. These systems, which used knowledge bases and inference rules to mimic human expertise in specific domains, proved to be practically useful in industries like medicine, finance, and manufacturing.
Meanwhile, advances in computer hardware provided the computational power needed to run more complex AI algorithms. This led to new techniques and algorithms, increased commercial interest, and increased investment in AI products.
David Rumelhart, Geoffrey Hinton, and Ronald Williams drove this resurgence with their development of the backpropagation algorithm.
This breakthrough enabled multi-layered neural networks to learn from data, making it practical to train complex models and rekindling interest in connectionist approaches to AI.
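As a rough illustration of the idea (a minimal sketch, not the original formulation), the NumPy snippet below trains a two-layer network on XOR: the forward pass computes predictions, the backward pass propagates the error gradient through each layer, and gradient descent updates the weights. The layer sizes, learning rate, and iteration count are illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)    # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)    # output layer
lr = 0.5

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the error gradient back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```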
John Hopfield introduced Hopfield networks in 1982, demonstrating how neural networks could solve optimization problems. Between 1983 and 1985, Geoffrey Hinton and Terry Sejnowski developed Boltzmann machines, further advancing neural network theory by demonstrating the potential of neural networks to solve complex problems through distributed representations and probabilistic reasoning.
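A minimal sketch of the Hopfield idea, assuming the standard Hebbian outer-product storage rule: one pattern is written into a weight matrix, and a corrupted probe settles back to the stored memory under the sign-threshold update.

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])  # stored +/-1 memory
W = np.outer(pattern, pattern).astype(float)       # Hebbian storage
np.fill_diagonal(W, 0)                             # no self-connections

probe = pattern.copy()
probe[:3] *= -1                                    # corrupt three bits

for _ in range(5):                                 # iterate the update rule
    probe = np.where(W @ probe >= 0, 1, -1)

print(np.array_equal(probe, pattern))              # True: memory recalled
```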
Hebbian Learning and Self-Organizing Maps (1949, 1982)
Donald Hebb’s principle, often summarized as “cells that fire together, wire together,” laid the foundation for unsupervised learning algorithms. Finnish Professor Teuvo Kohonen’s self-organizing maps in 1982 built on these principles, showing how systems could self-organize to form meaningful patterns without explicit supervision.
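Here is a short sketch of Hebbian learning in this spirit, using Oja’s normalized variant of the rule (a later refinement that keeps the weights bounded): a single linear neuron, fed inputs whose components tend to fire together, grows weights aligned with that correlation without any supervision.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, 2)       # synaptic weights of one linear neuron
lr = 0.01

for _ in range(2000):
    # Two inputs that fire together: a shared signal plus small noise
    x = rng.normal() * np.array([1.0, 1.0]) + rng.normal(0.0, 0.1, 2)
    y = w @ x                     # post-synaptic activity
    w += lr * y * (x - y * w)     # Hebbian term y*x, with Oja's normalization

print(w.round(2))  # aligns with the correlated direction, ~[0.71, 0.71]
```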
Deep Learning and the ImageNet Breakthrough (2012)
The ImageNet breakthrough in 2012, marked by the success of AlexNet, revolutionized the field of AI and deep learning. AlexNet, developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, used a deep convolutional neural network architecture featuring innovations like ReLU activations, dropout, and GPU training to achieve a top-5 error rate of 15.3%, vastly outperforming previous models.
This success demonstrated the power of deep learning for image classification and ignited widespread interest and advancements in computer vision and natural language processing.
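For concreteness, here is a small NumPy sketch of two ingredients named above, the ReLU activation and (inverted) dropout; the values and the 0.5 drop rate follow common practice rather than AlexNet’s exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # max(0, x): cheap to compute and does not saturate for positive inputs
    return np.maximum(0.0, x)

def dropout(x, p=0.5):
    # Randomly silence units during training; rescale the survivors so the
    # expected activation is unchanged ("inverted" dropout).
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

h = np.array([-1.0, 0.5, 2.0, -0.3])
print(relu(h))           # [0.  0.5 2.  0. ]
print(dropout(relu(h)))  # surviving units, doubled in magnitude
```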
Cognitive architectures like SOAR (State, Operator, And Result) and ACT-R (Adaptive Control of Thought-Rational) emerged as comprehensive models of human cognition. Developed by John Laird, Allen Newell, and Paul Rosenbloom, SOAR aimed to replicate general intelligent behavior through problem-solving and learning. ACT-R, developed by John Anderson, focused on simulating human cognitive processes, providing insights into memory, attention, and learning.
Theories of embodied cognition emphasized the role of the body and environment in shaping intelligent behavior. Researchers like Rodney Brooks argued that true intelligence arises from the interaction between an agent and its environment, leading to the development of embodied AI systems that learn and adapt through physical experiences.
Marcus Hutter’s Universal Artificial Intelligence theory and the AIXI model provided a mathematical framework for AGI. Hutter designed AIXI, an idealized agent, to achieve optimal behavior by maximizing expected rewards weighted by algorithmic probability. While AIXI is computationally infeasible, it offers a theoretical benchmark for AGI research.
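For reference, AIXI’s action rule is usually written as follows, where a, o, and r denote actions, observations, and rewards, m is the planning horizon, U is a universal Turing machine, and ℓ(q) is the length of program q:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl[\, r_t + \cdots + r_m \,\bigr]
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Intuitively, the agent picks the action maximizing total future reward, averaged over every program q consistent with its interaction history and weighted by the prior 2^{-ℓ(q)}; the sums over all programs are what make AIXI uncomputable in practice.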
One of the significant developments in AGI theory is OpenCog, an open-source software framework for artificial general intelligence research. Founded by Ben Goertzel, who popularized the term AGI, OpenCog Classic focuses on integrating various AI methodologies, including symbolic AI, neural networks, and evolutionary programming, into a unified architecture capable of achieving human-like intelligence.
Efforts to integrate neural and symbolic approaches aim to combine the strengths of both paradigms. Neural-symbolic systems pair the learning capabilities of neural networks with the interpretability and reasoning of symbolic AI, offering a promising pathway toward AGI.
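A minimal, hypothetical sketch of the pattern: a stand-in “neural” scorer assigns confidences to perceptual facts, and a symbolic rule reasons over whatever clears a threshold. The perceive() scores below are placeholders for a trained network’s outputs.

```python
def perceive(image_id):
    # Stand-in for a trained network; these confidences are illustrative.
    return {"cat": 0.92, "indoors": 0.81}

def reason(scores, threshold=0.5):
    facts = {f for f, p in scores.items() if p >= threshold}
    # Symbolic rule: cat(x) AND indoors(x) -> pet(x)
    if {"cat", "indoors"} <= facts:
        facts.add("pet")
    return facts

print(reason(perceive("img_001")))  # {'cat', 'indoors', 'pet'}
```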
2000s-2010s: Engineering Specialized AI Capabilities
Algorithmic architectures displayed superhuman proficiency at specialized tasks such as game playing, image classification, and statistical prediction. However, they remained constrained in generalizability and lacked adaptability across multiple domains.
2020s: Large Language Models
Foundation models like GPT-3 show initial promise in text generation, displaying some cross-contextual transfer learning. However, they remain limited in full-spectrum reasoning, emotional intelligence, and transparency, highlighting the challenges of integrating them safely and responsibly.
2020s: OpenCog Hyperon
Building on the foundations of OpenCog Classic, OpenCog Hyperon represents the next generation of AGI architecture. This open-source software framework synergizes multiple AI paradigms within a unified cognitive architecture, propelling us toward the realization of human-level AGI and beyond. With the recent release of OpenCog Hyperon Alpha, SingularityNET has created a framework for collaborative innovation within the AGI community.
Reflecting on this history, SingularityNET founder Dr. Ben Goertzel believes most of the key theories in the commercial field today already existed in the 1960s and 1970s, when the first practical AI systems were rolled out. That said, AI has come a long way since its inception in the mid-20th century.
For example, the 1960s already had neural networks, including deep networks with multiple layers of simulated neurons modeled on brain cells, as well as automatic logical reasoning systems that used formal logic to draw conclusions from evidence.
He then discussed the current state of AI, highlighting how AI systems are capable of doing incredible things even if they are not yet at human level.
Large language models (LLMs) are a good example. They can generate text, translate languages, write different kinds of creative content, and answer questions in an informative way, but they can only “be very smart in the context of carrying out one of these narrow functions.” So what is next?
Dr. Goertzel stated, “It’s intuitively clear AGI is now within reach, and it’s likely to be achieved within the next few years.” This is because several plausible approaches to AGI now exist. Some research teams and companies, like OpenAI, pursue the approach of upgrading LLMs. Another approach is to connect different kinds of deep neural networks together. A third is to combine neural nets with other sorts of AI tools in a distributed, metagraph-based architecture like OpenCog Hyperon.
He reminds everyone that achieving AGI raises some interesting social, economic, and ethical issues, but that he’s “not as worried as some people are.” This is because, if we can keep the deployment of AGI decentralized and its governance participatory and democratic, we can have a lot of faith that AGI will grow up to be beneficial to humanity and help us lead more fulfilling lives.
But one thing is clear – we are standing on the shoulders of giants. From the early days of Turing and von Neumann to the pioneering work in symbolic AI, neural networks, and deep learning, each milestone has brought us closer to realizing the dream of AGI.
As we continue to push these boundaries with integrated cognitive architectures like OpenCog Hyperon, the horizon of AGI draws nearer.
The path is fraught with challenges, yet the collective effort of researchers, visionaries, and practitioners continues to propel us forward.
Together, we are creating the future of intelligence, transforming the abstract into the tangible, and turning AGI theories into reality. We are inching closer to machines that can think, learn, and understand as profoundly as humans do.
SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI): one that is not dependent on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country. The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. Our core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.