How do you measure the understanding of an AI?

Ask it as you would ask a person.
(If you know what that involves and what a person does, you won’t make the mistake of Google’s LaMDA engineer. If you don’t know, ask any expert meditator or psychotherapist.)
You can talk to the AI, or write to it, as in a chat.
In chat tests with multiple-choice questions, AI already ranks things better than the Amazon Mechanical Turk workers who took the test for US$23 an hour.
In those multiple-choice tests, no one would guess it is an AI answering; you cannot tell who is answering.

Like IBM Watson winning at Jeopardy, Deep Blue beating Kasparov, AlphaGo beating Ke Jie at Go.

AI 1, humans 0.

Now get them to talk to you, the way someone chats over a beer, and watch them skirt the issue.

“Why did you answer this?” “Why did you do this?”

That’s as far as it goes.

Nobody expects their calculator to talk, do they?

It’s not programmed for that, nor does anyone know how to program it for that (at QOM, we do).

AI 1, humans 1.

Now let’s test their understanding.

We use the Stanford Question Answering Dataset (SQuAD 2.0) as a benchmark because it is the most accessible for exploring the wrong replies.

Strictly speaking, SQuAD measures how good an AI is at detecting similarity between texts.
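To make “similarity” concrete, here is a simplified sketch of the token-overlap F1 score that SQuAD-style evaluation uses to compare a predicted answer against the gold answer (the official script additionally strips punctuation and articles):

```python
# Simplified sketch of the SQuAD answer-F1 metric: it rewards token overlap
# between predicted and gold answer spans, i.e. it is a text-similarity score.
# (The official evaluation script also strips punctuation and articles.)
from collections import Counter

def answer_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        # A "no answer" prediction only scores if the gold answer is also empty.
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(answer_f1("7,000,000 square kilometres", "7,000,000"))  # 0.5: partial overlap
print(answer_f1("390 billion", "16,000"))                      # 0.0: no overlap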

Amazon rainforest text:

“The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain “Amazonas” in their names. The Amazon represents over half of the planet’s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.”

Let’s ask things like:

“How many square kilometers is the Amazon basin?”

  • Microsoft Research Asia NLNet: no answer (incorrect).
  • AllenAI BiDAF + Attention + ELMo: 2,700,000 (incorrect).
  • Google BERT: 7,000,000 (correct).

“What is the estimate of the number of tree species in the Amazon rainforest?”

  • Microsoft Research Asia NLNet: 390 billion (incorrect).
  • AllenAI BiDAF + Attention + ELMo: 390 billion (incorrect).
  • Google BERT: 7,000,000,000 (incorrect).

And so on.
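Anyone can reproduce this kind of probe. Below is a minimal sketch using the Hugging Face transformers question-answering pipeline; the checkpoint name is an assumption (any BERT model fine-tuned on SQuAD 2.0 works), and it is not the exact leaderboard systems quoted above, but it shows how the probing is done.

```python
# Minimal sketch: probing a SQuAD 2.0-style QA model on the Amazon passage.
# Requires the Hugging Face `transformers` library; the checkpoint name is
# illustrative, not one of the leaderboard systems quoted above.
from transformers import pipeline

context = (
    "The Amazon rainforest ... is a moist broadleaf forest that covers most of "
    "the Amazon basin of South America. This basin encompasses 7,000,000 square "
    "kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres "
    "(2,100,000 sq mi) are covered by the rainforest. ... an estimated "
    "390 billion individual trees divided into 16,000 species."
)

questions = [
    "How many square kilometers is the Amazon basin?",
    "What is the estimate of the number of tree species in the Amazon rainforest?",
]

qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")

for question in questions:
    # handle_impossible_answer lets a SQuAD 2.0 model abstain ("no answer").
    result = qa(question=question, context=context, handle_impossible_answer=True)
    print(f"{question}\n  -> {result['answer']!r} (score={result['score']:.3f})")
```

Comparing each printed span against the passage is exactly the exercise above: a correct-looking extraction is not the same thing as understanding.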

The same goes for mathematical reasoning, open questions, and so on.

The image accompanying this article shows a summary (contrasted with what our model answers).

AI 1, humans 2.

“Although such models are sometimes impressive — generating poetry or correctly answering trivia questions — they have no sense of the meaning of language, which causes them to also create gibberish.”

Nature, June 2022, doi: https://doi.org/10.1038/d41586-022-01705-z

So, does AI understand?

AIs are still calculators. And still, nobody expects their calculator to understand.

In the meantime, what happens in a person or any living being?

Understanding is intrinsically tied to lived experience.

In the Quanta of Meaning (QOM) model, there is no need to spend around US$3 million on training per language model (training GPT-3 cost US$4.3M).

QOM works from first principles that involve knowing, and understanding how understanding occurs, including who understands and what is understood.

First you understand; only then do you “live” your experience (two different, orthogonal domains); and only then are you aware that you experience it. These are three different, orthogonal domains in humans and some animals.

First you understand, and only then language emerges.

People have asked us (yes, we edited the question ;)):

“The QOM AI that you will build when this funding ends: can it surpass humans in understanding?”

On the tests in this article?

Yes.

In tests of the deeper dimensions of what it means to be human, related to what is discussed in the Mind and Life dialogues?

Not yet. That would require a hard AGI.

Although we have hypotheses about what that might look like, and about how to measure the presence of consciousness experimentally, our intention at QOM goes only as far as a soft AGI.


