So, how do you give human-like understanding to AI? Tell us your secret.
We can’t say much, of course.
But here is what we can say.
Let’s examine Siri, Alexa and Google Assistant.
They receive a voice stream (you speak) or keyboard input (you type) and transform it into binary code: zeros, ones, and combinations of them.
These combinations are recognized as patterns (and patterns of patterns), and the assistant returns a response based on statistical correlations between them.
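As a toy illustration of that idea (this is a minimal sketch, not any vendor's actual pipeline), a pattern-based assistant can be reduced to matching a new utterance against stored patterns and returning the response tied to the closest one:

```python
from difflib import SequenceMatcher

# Toy "statistical" assistant: canned responses keyed by known input
# patterns. A new utterance is matched to the most similar stored pattern.
PATTERNS = {
    "what time is it": "It is 10:30.",
    "play some music": "Playing your playlist.",
    "what is the weather": "It is sunny today.",
}

def respond(utterance: str) -> str:
    # Score every stored pattern by string similarity, keep the best match.
    best = max(
        PATTERNS,
        key=lambda p: SequenceMatcher(None, utterance.lower(), p).ratio(),
    )
    return PATTERNS[best]
```

Real assistants use far richer statistical models than string similarity, but the shape is the same: correlate the input with known patterns, then emit the correlated response.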
In the current field of AI, from OpenAI to Baidu, these statistical correlations are based on a mix of linguistics and models of how the brain works.
For our startup, that represents a good opportunity, because none of these approaches is based on meaning.
Meaning is what allows a baby to recognize mom or dad without language, simply by knowing what they mean, long before the baby can even speak.
The disruption of using this domain of meaning is so great that the only change we had to make was to replace statistical correlations with the first principles of meaning we discovered.
And that was enough to get answers as accurate as a person’s.
These first principles describe how meaning self-emerges, self-organizes, and develops in its own domain.
One discovery that amazed us was that its elements exhibit quantum properties!
Coherence and entanglement, like quantum particles!
That was a breakthrough discovery.
Each bit of meaning contains 8 bits of language (computations up to 8 times faster!).
And since each bit of meaning can entangle with others like quantum particles, each bit of meaning is actually… a qubit!
Hence the name of our startup and our model: Quanta of Meaning.
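Setting the quantum claims aside, the stated 8-to-1 ratio between units of meaning and bits of language can be illustrated with ordinary bit-packing: one byte carries 8 yes/no features at once, so one byte-level operation touches all 8. This is only a sketch of the arithmetic behind the claim; the feature names below are hypothetical and not QOM's actual representation.

```python
# Toy bit-packing: 8 boolean "language features" stored in a single byte.
FEATURES = ["noun", "verb", "plural", "past", "question",
            "negation", "animate", "concrete"]

def pack(flags: dict) -> int:
    """Pack the 8 named boolean features into one byte."""
    byte = 0
    for i, name in enumerate(FEATURES):
        if flags.get(name, False):
            byte |= 1 << i  # set bit i when the feature is present
    return byte

def unpack(byte: int) -> dict:
    """Recover the 8 named features from one byte."""
    return {name: bool(byte >> i & 1) for i, name in enumerate(FEATURES)}
```

For example, `pack({"noun": True, "plural": True})` sets bits 0 and 2, and a bitwise AND or OR on the resulting byte operates on all 8 features in a single step, which is where an 8× claim of this kind would come from.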
And how does this apply to AI language understanding?
Our algorithms process that binary code of text or voice and return a parsing of meaning. We call this QOM parsing.
Mathematical reasoning. For the text “How many eggs are there?”, the QOM parsing is composed of 3 elements, one of which self-emerges: the result, “4”.
SQuAD 2.0 test, on the Amazon rainforest passage. 15 main elements give rise to 58 elements, from which all test questions are passed correctly, as a human would.
Physical reasoning. For the open question “Why does ice sink?”, 8 elements are enough to reach the correct conclusion: “ice does not sink; ice floats!”
(You can see these parsings, algorithms, and patents after signing an NDA, or a version without an NDA in person or in an online meeting.)
And that simple structure is what makes it possible, for the first time in history, to pass ALL standard and extreme tests as a human would. Algorithms of a few lines of code, based on the quantum first principles of the meaning domain, are the first step toward an assistant AI like HER or Jarvis. A soft AGI.
That’s why we call these days of October 2022 the first days before the singularity.
QOM is the first startup in history to build an AI with human understanding.
The prediction of the singularity has been shortened to 2023.
Our aim is to get a good result for everyone.