SingularityNET lets anyone create, share, and monetize AI services at scale. It's a full-stack AI solution powered by a decentralized protocol. The initiatives outlined below consist of core research areas that will strengthen the long-term AI capabilities of the network by supporting and improving the components detailed in our Platform and Services roadmaps.
SingularityNET & OpenCog-powered Sophia
Sophia will soon be controllable by a combination of cloud services powered by SingularityNET and OpenCog technology.
In a collaboration between the SingularityNET Foundation and Hanson Robotics, the OpenCog AGI toolkit and the SingularityNET framework are being used to create the next level of intelligence for the world-famous Sophia robot, and also for toy-scale Hanson robots. These tools will be packaged as a set of solutions for social robotics, and can also be used as a base for further research.
Specifically, we will use OpenCog AI to control Sophia in two research projects. The first aims to show that, when controlled by OpenCog, Sophia exhibits a higher level of Tononi Phi (a popular quantitative measure of “consciousness” in human brains) when responding to complex information than when responding to simple information.
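The flavor of such a measure can be illustrated with a toy proxy: the mutual information between two halves of a small system, which is high when the parts are integrated and near zero when they are independent. This is only a sketch under simplifying assumptions — actual Tononi Phi (Integrated Information Theory) involves minimizing over all partitions of the system and is far more involved.

```python
# Crude proxy for "integration": mutual information between two halves
# of a small binary system, estimated from observed state pairs.
# Real Tononi Phi is much more elaborate; this only conveys the idea
# that coupled parts carry information about each other.
from math import log2
from collections import Counter
import random

def mutual_information(samples):
    """samples: list of (a, b) state pairs observed from the system."""
    n = len(samples)
    joint = Counter(samples)
    pa = Counter(a for a, _ in samples)
    pb = Counter(b for _, b in samples)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# A tightly coupled system (b always echoes a) shows integration...
coupled = [(0, 0), (1, 1)] * 50
# ...while an independent one does not.
random.seed(0)
independent = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(200)]

print(mutual_information(coupled))      # ≈ 1 bit
print(mutual_information(independent))  # near 0
```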
The second uses Sophia, along with a specially-developed methodology, to interact with people to improve their emotional and mental state (and in some cases, induce transformative states of bliss). These are the “Loving AI” experiments. The trials are being conducted in California.
We'll also work on giving Sophia more human-like, expressive face and head movements by training neural networks on motion-capture data.
We aim to explore neural-symbolic learning, where we couple deep neural nets with probabilistic logic inference.
Developing deep neural net models that learn probabilistic models of the key semantic variables and relationships in the data they model is the key to creating deep nets capable of transfer learning and lifelong learning, and to combining neural and symbolic systems into hybrid intelligence. R&D in this direction has immediate practical value for applications such as image and video processing, speech processing, and robotic movement. It also builds toward general intelligence by creating the means for interfacing neural-net-based perceptual systems with symbolic cognitive systems. Probabilistic programming is a powerful tool for bridging the neural and symbolic sides of such hybrid systems.
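A minimal sketch of the probabilistic-programming idea: a generative model produces a symbolic latent variable and a noisy perceptual score (standing in for a deep net's output), and inference inverts the model to recover a posterior over the latent variable. All priors, likelihood parameters, and the rejection-sampling scheme below are illustrative assumptions, not a representation of our actual systems.

```python
# Tiny probabilistic program bridging "neural" perception with a
# symbolic latent variable via rejection sampling.
import random
random.seed(1)

def model():
    is_cat = random.random() < 0.3     # symbolic latent variable (prior assumed)
    mean = 0.8 if is_cat else 0.2      # class-conditional score (assumed)
    score = random.gauss(mean, 0.15)   # stand-in for a deep net's output
    return is_cat, score

# Condition on observing a perceptual score near 0.75:
samples = [model() for _ in range(50000)]
accepted = [c for c, s in samples if abs(s - 0.75) < 0.05]
posterior = sum(accepted) / len(accepted)
print(posterior)  # high posterior probability that the latent class is "cat"
```

The same conditioning step is what lets a symbolic reasoner consume uncertain perceptual evidence in a principled way, rather than treating net outputs as hard labels.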
OfferNets (Offer Networks) is a research project that aims to create an alternative to currency-based exchanges.
Offer Networks provide an alternative to currency-based exchange by using constraint satisfaction algorithms to optimize complex barter networks. In various situations, such as those where revealed preferences are prevalent, they can lead to greater overall satisfaction than a currency-based exchange. A future version of SingularityNET may feature Offer Networks alongside currency-based exchanges, with each leveraged for the transactions where it fits best. Current research focuses on prototyping a version of Offer Networks on top of SingularityNET APIs to explore how Offer Network dynamics play out in an AI-agent-to-community context.
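The core matching problem can be shown in miniature: if each agent offers one item and wants one item, a currency-free exchange is possible wherever the "wants" graph contains a cycle. The greedy cycle search and the agents below are illustrative; real Offer Networks optimize much richer constraint sets over many simultaneous offers.

```python
# Toy barter matcher: find exchange cycles (A gives to B, B to C,
# C back to A) so every agent in a cycle gets what they want
# without currency changing hands.
agents = {
    "alice": {"offers": "apples", "wants": "bread"},
    "bob":   {"offers": "bread",  "wants": "cheese"},
    "carol": {"offers": "cheese", "wants": "apples"},
    "dave":  {"offers": "dates",  "wants": "eggs"},   # unmatched
}

def find_cycles(agents):
    # Edge u -> v if v offers what u wants.
    edges = {u: [v for v in agents
                 if v != u and agents[v]["offers"] == agents[u]["wants"]]
             for u in agents}
    cycles, used = [], set()
    for start in agents:                 # greedy: takes the first cycle found
        path, node = [start], start
        while True:
            nxts = [v for v in edges[node] if v not in used]
            if not nxts:
                break
            node = nxts[0]
            if node == start:            # closed a barter loop
                cycles.append(path[:])
                used.update(path)
                break
            if node in path:
                break
            path.append(node)
    return cycles

print(find_cycles(agents))  # → [['alice', 'bob', 'carol']]
```

Dave stays unmatched because nobody offers eggs — exactly the kind of residual demand where a currency-based exchange running alongside the Offer Network would help.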
A simulation model of the dynamics of a large, complex SingularityNET.
An initial simulation model of the dynamics of a large, complex SingularityNET has been created and used to study the flow of information in such a network. This initial model will be fleshed out and applied as a powerful means of exploring and understanding the dynamics of the SingularityNET, and of evaluating potential decisions regarding network regulation and governance.
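As a flavor of what agent-based simulation of such a network looks like, here is a deliberately tiny sketch: requesters sample a few candidate providers and call the best one, and call volume concentrates on high-quality agents over time. All parameters and the selection rule are invented for illustration; the actual model is far more detailed.

```python
# Agent-based sketch of call flow in a toy service network.
import random
random.seed(42)

N_AGENTS, N_STEPS = 20, 1000
quality = [random.random() for _ in range(N_AGENTS)]  # hidden service quality
calls = [0] * N_AGENTS

for _ in range(N_STEPS):
    # A requester discovers 3 random providers and calls the best one.
    candidates = random.sample(range(N_AGENTS), 3)
    best = max(candidates, key=lambda a: quality[a])
    calls[best] += 1

# Call volume ends up concentrated on the highest-quality agents.
ranked = sorted(range(N_AGENTS), key=lambda a: -calls[a])
print([(a, calls[a]) for a in ranked[:5]])
```

Even this toy version makes a governance-relevant dynamic visible: the discovery mechanism (here, "sample 3 at random") directly shapes how winner-take-all the network becomes.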
We are researching a unique approach to AI service provider reputation which can be extended into a new consensus framework: Proof of Reputation.
Measuring reputation and quality in various respects is a complex problem for most social networks and online platforms. AI, if implemented appropriately, can help. SingularityNET plays a dual role here: it requires a sophisticated reputation system to achieve its ultimate goals, and it also provides the AI needed to power a sophisticated reputation system. Our detailed reputation system design is under development and will be prototyped for future implementation within SingularityNET. This reputation system will also be applicable in other contexts, and we'll be collaborating with a group of external blockchain projects to bring it to fruition.
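One ingredient such a system needs is a score that weights each rating by the rater's own standing and discounts stale ratings. The sketch below shows that shape under stated assumptions — the half-life decay, the weighting rule, and the neutral prior are all illustrative, not the detailed design under development.

```python
# Sketch of a rater-weighted, time-decayed reputation score.
def reputation(ratings, now, half_life=30.0):
    """ratings: list of (value in [0,1], rater_reputation, timestamp_days)."""
    num = den = 0.0
    for value, rater_rep, t in ratings:
        # Ratings from reputable raters count more; old ratings decay
        # with the given half-life (in days).
        w = rater_rep * 2 ** (-(now - t) / half_life)
        num += w * value
        den += w
    return num / den if den else 0.5  # neutral prior when unrated

ratings = [
    (1.0, 0.9, 0),   # old glowing rating from a reputable rater
    (0.2, 0.1, 25),  # recent pan from a low-reputation rater
    (0.9, 0.8, 28),  # recent praise from a reputable rater
]
print(round(reputation(ratings, now=30), 3))
```

Note how the low-reputation rater's pan barely moves the score — which is also why rater reputation itself must be hard to farm, the key problem a Proof of Reputation consensus framework has to solve.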
Reputation System for Marketplaces
Which product, supplied by which supplier, at what price, with what delivery option, best fulfills a particular buyer's needs?
To address this question, marketplaces often use ratings systems coupled with recommendation engines to aid consumers in finding products.
But can we trust these ratings systems and recommendation systems?
Resisting Reputation Manipulation in Marketplaces - Business Analysis
How can the average consumer wade through the overwhelming mass of data to find the products that best fit their needs at the best price?
To address this question, many shopping sites use product ratings drawn from user reviews, along with recommendation engines. Yet this leads us to another pertinent question:
how trustworthy are these ratings and recommendation systems?
Learning the grammar of natural language from large unlabeled corpora is one of the great unsolved problems of the AI field, but SingularityNET’s AI team (working together with the OpenCog Foundation and Hanson Robotics) is making significant progress.
This is both a deep theoretical pursuit and something with potentially important practical applications, from recognizing relevant meanings and relationships in social media posts to automating the creation of linguistics tools for developing-world languages without available computational-linguistics databases. The algorithms under development here combine OpenCog-based information-theoretic symbolic inference with neural net based learning in unique ways.
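The information-theoretic first step can be shown in miniature: score adjacent word pairs by pointwise mutual information (PMI) over a corpus, and treat high-PMI pairs as candidate syntactic links. The toy corpus below is invented, and the real pipeline works on large corpora and considers non-adjacent pairs as well.

```python
# Miniature PMI scoring of adjacent word pairs, the seed of
# information-theoretic grammar induction.
from collections import Counter
from math import log2

corpus = ["the cat sat", "the dog sat", "the cat ran",
          "a cat sat", "the dog ran"]

unigrams, bigrams, total = Counter(), Counter(), 0
for sentence in corpus:
    words = sentence.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))
    total += len(words)
n_pairs = sum(bigrams.values())

def pmi(w1, w2):
    """log2 of how much more often w1 w2 co-occur than chance predicts."""
    p12 = bigrams[(w1, w2)] / n_pairs
    return log2(p12 / ((unigrams[w1] / total) * (unigrams[w2] / total)))

# Highest-PMI pairs become candidate grammatical links.
for pair in sorted(bigrams, key=lambda p: -pmi(*p))[:3]:
    print(pair, round(pmi(*pair), 2))
```

From such link statistics, later stages can cluster words into grammatical categories and induce grammar rules — the part where the symbolic inference and neural learning mentioned above come in.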
An offshoot of this work combines language learning algorithms with deep neural nets carrying out image understanding to infer grounded semantics alongside grammar from unlabeled corpora of captioned images.
Meta-learning for Inference Control
Probabilistic meta-learning is a key AGI component, ultimately leading to a system that can self-optimize its behavior.
The OpenCog AGI toolkit contains multiple AI algorithms with common elements related to probabilistic learning. Examples include the PLN probabilistic logic engine, the MOSES evolutionary program learning framework, the hypergraph Pattern Miner, and tools for natural language comprehension and generation. All these algorithms, and others, can be cast mathematically within a common framework of historical-probability-guided forward, backward, and forward-backward inference.
Our current development initiative is focused on bringing this mathematics to reality by implementing these algorithms within OpenCog in a common way using OpenCog’s Unified Rule Engine. This is a first step toward a more AGI-oriented approach to learning, in which probabilistic meta-learning is used to study the patterns that have helped learning to be successful in the past, which can then be used to guide future learning and make it more efficient.
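The core feedback loop can be sketched very simply: record how often each inference rule has led to success, then prefer rules with the best track record. The rule names, the Laplace-smoothed estimate, and the greedy selection below are illustrative assumptions, not the Unified Rule Engine's actual API or control policy.

```python
# History-guided rule selection: a minimal meta-learning loop.
from collections import defaultdict

stats = defaultdict(lambda: [0, 0])  # rule -> [successes, attempts]

def record(rule, success):
    """Log the outcome of applying a rule in an inference episode."""
    stats[rule][1] += 1
    stats[rule][0] += int(success)

def rule_score(rule):
    s, n = stats[rule]
    return (s + 1) / (n + 2)  # Laplace-smoothed success probability

def pick_rule(candidates):
    """Prefer the rule with the best historical track record."""
    return max(candidates, key=rule_score)

# Assumed history from past inference episodes:
for _ in range(8): record("modus_ponens", True)
for _ in range(2): record("modus_ponens", False)
for _ in range(3): record("abduction", True)
for _ in range(7): record("abduction", False)

print(pick_rule(["modus_ponens", "abduction"]))  # modus_ponens
```

In practice one would condition these statistics on the context (the kind of goal, the shape of the premises), which is where the "patterns that have helped learning to be successful" become a genuine learning problem rather than a lookup table.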
Benefit Project: Longevity Data Analysis
We will apply our Biomedical Data Analysis tools to genetic data for people who lived to the age of 110 or older.
We want to discover the combinations of genetic variations that differentiate supercentenarians (people aged 110 or older) from ordinary people, and to use our biomedical data analysis tools to better understand the consequences of those variations and the roles they play in biological processes and different cellular components. Insights from this research will be shared openly with the life sciences community and can drive further research into the mechanisms of aging.
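A basic building block of such an analysis is a single-variant case/control association test: compare how often an allele appears in supercentenarians versus controls. The counts below are made up for illustration, and the Wald test shown is only one of several standard choices; the real work targets combinations of variants, not single ones.

```python
# Single-variant case/control association via odds ratio and a
# Wald test on the log odds ratio. Counts are hypothetical.
from math import log, sqrt, erf

def odds_ratio(case_alt, case_ref, ctrl_alt, ctrl_ref):
    return (case_alt * ctrl_ref) / (case_ref * ctrl_alt)

def p_value(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Two-sided Wald p-value for log-odds-ratio != 0."""
    or_ = odds_ratio(case_alt, case_ref, ctrl_alt, ctrl_ref)
    se = sqrt(1/case_alt + 1/case_ref + 1/ctrl_alt + 1/ctrl_ref)
    z = log(or_) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical allele counts: 100 cases (supercentenarians), 1000 controls.
print(odds_ratio(60, 140, 180, 1820))  # ≈ 4.33: allele enriched in cases
print(p_value(60, 140, 180, 1820))     # very small: unlikely by chance
```

Genome-wide, millions of such tests run at once, so multiple-testing correction and then pathway-level interpretation (the "biological processes and cellular components" above) follow as the interesting part.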
Benefit Project: Diagnose & Understand Crop Diseases
In collaboration with Addis Ababa-based AI firm iCog labs and the Leshan Agricultural Research Institute in Szechuan, China, we will apply deep neural nets and other AI tools to diagnose crop diseases based on plant images.
We will be tuning deep neural net architectures and other AI tools to work on datasets of plant leaf images, so that we can not only diagnose diseases from images but also understand the key latent variables implicit in the corpus of diseased plants.
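The classification step, stripped to its essentials, looks like this: represent each leaf by features and assign it to the nearest class centroid. The disease names, the two hand-picked features (fractions of yellow and spotted pixels), and all values below are invented for illustration — the actual system works from raw images with deep convolutional nets rather than hand-crafted features.

```python
# Toy stand-in for image-based disease diagnosis: nearest-centroid
# classification over two hand-picked leaf features.
from math import dist

# (yellow_fraction, spotted_fraction) per training leaf, per class:
train = {
    "healthy":     [(0.05, 0.02), (0.08, 0.01), (0.04, 0.03)],
    "leaf_blight": [(0.55, 0.40), (0.60, 0.35), (0.52, 0.45)],
    "rust":        [(0.30, 0.70), (0.28, 0.75), (0.33, 0.68)],
}

# Mean feature vector per class:
centroids = {
    label: tuple(sum(v) / len(v) for v in zip(*feats))
    for label, feats in train.items()
}

def diagnose(features):
    """Assign a leaf to the class with the closest centroid."""
    return min(centroids, key=lambda label: dist(features, centroids[label]))

print(diagnose((0.58, 0.38)))  # leaf_blight
```

A deep net effectively learns such features itself — and inspecting its learned representation is what "understanding the key latent variables" in the diseased-plant corpus refers to.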