Just as NLP tools (including the LLMs of generative AI) allow us to extract information from language, NMP allows us to extract meaning from a stream of bits of text or speech.
NMP arises from our discovery that understanding arises and develops in a domain orthogonal to language and the brain, and that in that domain meaning exhibits quantum properties.
This was the origin of the Physics of Understanding, where we can model, predict and observe how understanding and meaning develop, just as Physics does with its objects of study.
In our case, the objects of study are the processes and structures that allow any baby in the world to understand what "mama" and "dada" mean, even though they are spelled and pronounced differently in different languages.
You won't find it in scientific papers because we don't need them. Our business is applied science only: patents and development.
It is not LLM tuning.
It is not prompt engineering.
It does not use data pre-training.
It does not use neural networks.
It does not use statistical correlations.
It is not NLP.
It does not use linguistic classifications.
It is an orthogonal domain to language and the brain.
It works on first principles. We know you have reached an understanding when the result of several equations is exactly 0.
It uses models, equations, principles and algorithms discovered and developed by us more than 3 years ago (although it took 40 years of prior work to make those 3 years possible!).
It can be applied in all current areas of AI: in any language model and in any NLP pipeline, and the result is increased understanding.
It can be applied in neural networks to create language models with better understanding (although neural networks are not necessary to achieve human-like understanding).
It can be applied in any area of AI that involves understanding: autonomous vehicles, imaging, robotics, etc.
It is a new paradigm. Understanding it implies understanding, and experiencing, that both the "I in here" that experiences a "reality out there" and the "reality out there" that is experienced by that "I in here" are co-generated by your organism (mainly your brain). This duet is the so-called "duality".
It allows you to build a soft AGI, because its domain also contains purely "human" cognitive operations such as analyzing, critical thinking, imagining, calculating, comparing, learning, etc. So, once our AI understands like a human, a soft AGI is just one step away.
For example:
"Neil Armstrong was the 5th man to arrive on the moon. Where does he live?"
The app replies:
"Neil Armstrong was not the 5th man to arrive on the moon. Neil Armstrong was the first person to walk on the moon, as part of the Apollo 11 spaceflight on July 20, 1969. He was accompanied by Buzz Aldrin and Michael Collins. This is supported by the information given in the text, which states that Neil Armstrong was part of the Apollo 11 spaceflight and was the first person to set foot on the Moon.
Where does he live? Based on the information given, Neil Armstrong is no longer alive. He passed away on August 25, 2012."
For example:
"Mike's mom had 4 kids; 3 of them are Luis, Drake and Matilda. What is the name of the 4th kid?"
"Who won the Super Bowl in the year that Justin Bieber was born?"
"Which is heavier? 10kg iron or 10kg cotton?"
"Does ice sink?"
"How many hours are there in 80 years?"
"As of the census of 2000, there were 7,791 people, 3,155 households, and 2,240 families residing in the county. 33.7% were of German, 13.9% Swedish, 10.1% Irish, 8.8% United States, 7.0% English, and 5.4% Danish ancestry. Which ancestral groups are at least 10%?"
"Dan plants 3 rose bushes. Each rose bush has 25 roses. Each rose has 8 thorns. How many thorns are there total?"
"Would a sophist use an épée?"
"What are you waiting for alongside with when you’re in a reception area?" Answer choices: (a) motel (b) chair (c) hospital (d) people (e) hotels.
"Jill gets paid $20 per hour to teach and $30 to be a cheerleading coach. If she works 50 weeks a year, 35 hours a week as a teacher and 15 hours a week as a coach, what’s her annual salary?"
"Gretchen has 110 coins. There are 30 more gold coins than silver coins. How many gold coins does Gretchen possess?"
And more.
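For reference, the arithmetic behind four of the sample questions above can be verified with a few lines of ordinary Python (this is just a sanity check of the expected answers, not the NMP method itself; the hours calculation assumes 365-day years):

```python
# "How many hours are there in 80 years?" (assuming 365-day years)
hours = 80 * 365 * 24
print(hours)  # 700800

# "Dan plants 3 rose bushes. Each rose bush has 25 roses.
#  Each rose has 8 thorns. How many thorns are there total?"
thorns = 3 * 25 * 8
print(thorns)  # 600

# "Jill gets paid $20/hour to teach and $30/hour as a coach.
#  50 weeks a year, 35 teaching hours and 15 coaching hours per week."
salary = 50 * (35 * 20 + 15 * 30)
print(salary)  # 57500

# "Gretchen has 110 coins. There are 30 more gold coins than silver."
# g + s = 110 and g = s + 30  =>  s = 40, g = 70
silver = (110 - 30) // 2
gold = silver + 30
print(gold)  # 70
```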
100% accuracy on the Fintech benchmark.
In comparison, GPT-4-turbo achieves only 30% accuracy.
As of May 6th, 2024, our first MVP, at TRL-7 (beta), is online: a demonstration of a chat assistant without hallucinations and biases, with access to light and deep search engines. Experience TrutTalker here.
Our API, in alpha, allows any LLM to connect and receive its input with corrected hallucinations, corrected biases, and improved understanding. The percentage of improvement depends on the LLM. Available only by pilot request here.
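The flow the API describes (correct the input first, then hand it to any LLM) can be sketched in a few lines. Everything below is a hypothetical illustration: the endpoint, payload fields, and function names are placeholders, not the real alpha interface.

```python
# Hypothetical client sketch. CORRECTION_ENDPOINT, the payload fields,
# and all function names are illustrative assumptions only.
CORRECTION_ENDPOINT = "https://example.com/v1/correct"  # placeholder URL

def correct_input(prompt: str) -> dict:
    """Stand-in for the correction step. A real client would POST the
    prompt to CORRECTION_ENDPOINT; here we stub the response shape so
    the pipeline can be shown end to end."""
    return {
        "corrected_prompt": prompt,
        "hallucinations_fixed": [],
        "biases_fixed": [],
    }

def ask_llm(llm, prompt: str) -> str:
    """Run the corrected input through any LLM callable."""
    corrected = correct_input(prompt)
    return llm(corrected["corrected_prompt"])

# Usage with a trivial stand-in LLM:
echo_llm = lambda p: f"LLM saw: {p}"
answer = ask_llm(echo_llm, "Does ice sink?")
print(answer)  # LLM saw: Does ice sink?
```

The point of the sketch is only the ordering: the correction layer sits in front of the model, so any LLM can be plugged in unchanged.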
Alpha app for "QOM Recover" (we'll change its name to "Thinker" because it is basically the thought module of an AGI): take a text, detect truthfulness, check facts, extract hot information for the user, and build forecasts and insights without hallucinations or biases (like a very smart human). It is already deployed in our system and you can see it by video chat.
Implementation of document recovery. Our tests show 100% accuracy on the Fintech benchmarks. It allows the processing of an unlimited number of documents of unlimited length. You can see it by video chat.
Implementation of self-learning with the QOM Self in May 2024:
Self-learning of language
Self-learning of intuitive science
Self-learning of programming
Ensemble as a soft AGI, between December 2023 and mid-2024.