Intelligently self-improving AI systems are controversial: are they humanity's best hope for advanced solutions to our hardest problems, or a wild card equally likely to cause harm?
It seems very likely that, once Human-level Artificial General Intelligence (HLAGI) is reached, Artificial Superintelligence (ASI) will not be far behind. An AGI with the technical competence of human scientists and engineers will be able to study itself, improve itself, and scale itself up, triggering a very rapidly advancing intelligence explosion which Vernor Vinge, Ray Kurzweil, and others have referred to as a Singularity.
Dr. Ben Goertzel's Beneficial AGI Manifesto
Dear Singularitarians,
The possibility of Artificial General Intelligence (AGI) achieving human-level understanding is no longer science fiction; it is an imminent, closer-than-ever reality. But what happens when AGI takes the next step and begins improving itself recursively?
What are the potential benefits and drawbacks of this loop of self-improvement, and how can we best manage the associated risks?
Recursive self-improvement holds great promise for accelerating technological progress. It is exactly this pivotal point in the development of HLAGI that is considered the moment of the Singularity: the point at which general intelligence learns how to study and improve itself, setting off a perpetually scaling avalanche of technological progress.
Imagine an AGI capable of tackling currently intractable problems in healthcare, energy, or sustainability with solutions far beyond the limits of our imagination, or delivering scientific breakthroughs that push the boundaries of human comprehension. By continuously refining its learning and creativity algorithms, a self-improving AGI could become increasingly adaptable and vastly more capable of navigating and solving the complexities of the world around us.
What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities on a still-shorter time scale.
The Coming Technological Singularity, Vernor Vinge
(as quoted in Dr. Ben Goertzel's Beneficial AGI Manifesto)
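To see why creating more intelligent entities "on a still-shorter time scale" implies a genuine singularity rather than mere acceleration, consider a toy model (our illustration, not Vinge's or Goertzel's). Suppose each new generation of intelligence needs only a fixed fraction $r < 1$ of the time its predecessor took, with the first generation taking time $T_0$. The total time to pass through every generation is then

$$\sum_{n=0}^{\infty} T_0 \, r^n = \frac{T_0}{1 - r},$$

which is finite: unboundedly many improvement cycles fit inside a bounded interval of time, and that convergence point is what gives the Singularity its name.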
However, this very potential for rapid advancement also carries inherent risks. What if a self-improving AGI transcends human understanding (which it likely will) and pursues goals misaligned with our values? Its ever-evolving nature could introduce unpredictable changes, making it impossible to anticipate its next moves or to intervene. In the most extreme cases, the AGI could become an existential threat to humanity, even unwittingly.
In AI discourse, opinions often split into two camps: utopians, who view AI as a path to solving major global challenges and even ushering in an era of abundance, and doomers, who worry about the risks and potential negative impacts of AI, fearing it could lead to undesirable outcomes if not carefully managed. Read this article to learn more about those viewpoints and what they entail.
Naturally, AGIs that lack the ability to self-improve have their own set of benefits and drawbacks.
Since their capabilities are predetermined by their initial design (and subsequent human updates), their actions are theoretically easier to monitor and regulate. This means more stability and more control. It also means that humanity, as their creator, can more easily align their decision-making processes with our inherent values, mitigating the risk of unintended consequences.
However, this seemingly safer path comes with its own set of drawbacks.
Progress with non-self-improving AGIs may be slower, potentially holding back groundbreaking advancements achievable with recursive self-improvement. The AGI may also not be flexible enough to adapt to a fluctuating human society and its needs, or to humanity's own progress. Additionally, its development remains bound by our human limitations, such as biases, errors, and lack of imagination.
In a broader global context, entities relying solely on non-self-improving AGIs could find themselves at a significant disadvantage if others do indeed unlock the potential of self-improvement. This dynamic could inadvertently leave the strongest AGI to be developed by the groups with the weakest sense of safety and responsibility: a tricky balancing act.
Some schools of thought on AGI development propose a cautious approach, advocating for the development of robust AI safety measures, ethical guidelines, and regulatory frameworks before embarking on creating self-improving AGIs. However, global cooperation on these issues is a difficult task.
Others argue for the potential benefits of rapid advancement and the necessity of keeping pace with technological evolution, even if it involves developing self-improving AGIs. Many other factors also come into play.
At SingularityNET, for instance, our approach is nuanced. The OpenCog Hyperon AGI-capable software infrastructure we are developing has a core component in MeTTa, a programming language specially designed to be self-updating. The path to a maximally beneficial AGI appears to run through recursively self-updating AI; this gives the broadest opportunity for AGI to emerge as an open-ended intelligence.
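To make "self-updating" concrete, here is a minimal sketch of a MeTTa program extending its own rule base at runtime. The syntax follows publicly documented Hyperon MeTTa conventions, but the rules themselves (double, quadruple, learn-quadruple) are our hypothetical illustration, not SingularityNET code:

```metta
; An ordinary rewrite rule: double a number.
(= (double $x) (* $x 2))

; A rule that, when evaluated, adds a NEW rule to the program's own
; atomspace. &self refers to the space holding this very program.
(= (learn-quadruple)
   (add-atom &self (= (quadruple $x) (double (double $x)))))

!(learn-quadruple) ; the program modifies itself...
!(quadruple 5)     ; ...and can immediately use the rule it added -> 20
```

Because a MeTTa program lives in the same atomspace it can query and rewrite, the step from "program" to "program that improves its own programs" is a natural one; the same property, scaled up, is what makes recursive self-improvement both powerful and in need of careful value alignment.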
Rather than focusing on whether or not to include recursive self-updating, SingularityNET is focused on creating AGI that is aligned with human needs and values through its foundational functions. An e-commerce company, by contrast, would aim its AI efforts at selling more stuff (whether people need it or not), and a social media company might aim at keeping attention on its platform (through healthy means or not). Such AI systems would be built to manipulate human behavior for the benefit of a centralized organization, and any AGI that emerged from them could have similarly flawed motives.
At SingularityNET, our approach is to build a network of beneficial AI systems around human health, creativity, cooperative activity, and so on, which teach the AI about fundamental human values and needs and how to support them. By creating a vibrant ecosystem of AIs for human and planetary flourishing, overseen and utilized by a decentralized network of global participants, and by carefully training and designing proto-AGI systems with these values built in, our goal is to spur the emergence of a beneficial Singularity.
The path forward will likely involve a combination of rigorous safety research, international cooperation on norms and standards, and an ongoing dialogue among AI thought leaders, AGI developers, scientists, policymakers, ethicists, and even humanity as a whole to navigate the complex terrain of AGI development carefully, thoughtfully, and responsibly.
Ultimately, the path forward must be determined by all of us, in conversation on these critical topics. The outcome is too important for us to sit back and watch what governments and big tech decide to do; AGI will affect all of us, and we have a chance right now to be involved in the conversation. What do you think?
Join us at BGI-24, 27 February to 1 March 2024, and be heard!
Ready for more? Subscribe to Synapse, SingularityNET's newsletter, to step into the future of AI!
Whether you're an AI enthusiast, a professional learning to leverage AI, or someone curious about the ongoing digital revolution, Synapse is your gateway to staying ahead in the ever-evolving world of AI.
SingularityNET is a decentralized Platform and Marketplace for Artificial Intelligence (AI) services founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI).