MIT Scientists Discover That Computers Can Understand Complex Words and Concepts



Models for natural language processing use statistics to gather a great deal of information about word meanings.

In "Through the Looking Glass," Humpty Dumpty declares scornfully, "When I use a word, it means just what I choose it to mean – neither more nor less." "The question is," Alice replies, "whether you can make words mean so many different things."

The meanings of words have long been a subject of study. To understand them, the human mind must navigate a complex web of flexible yet detailed information.

A newer question about word meaning has now emerged: can artificially intelligent computers mimic human thought processes and grasp language in a similar way? A study conducted by scientists from UCLA, MIT, and the National Institutes of Health offers an answer.

The research, published in the journal Nature Human Behaviour, shows that machine learning systems can learn very complicated word meanings, and the researchers also found a simple technique for probing that sophisticated knowledge. They discovered that the AI system they studied represents word meanings in a way that corresponds closely to human judgment.

The AI system the authors studied has been widely used to examine word meaning over the past decade. It learns word meanings by "reading" vast amounts of text available online, comprising tens of billions of words.

Figure: A depiction of semantic projection, which can determine how similar two words are in a given context. In this grid, the animals are compared based on their size. Credit: Idan Blank/UCLA

Because words that are frequently used together, like "table" and "chair," tend to have related meanings, the system learns that "table" and "chair" are similar. And because words with very different meanings, like "table" and "planet," occur together relatively rarely, it learns that they are dissimilar.
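As a rough illustration of that co-occurrence idea, and not of the authors' actual system, the Python sketch below counts how often pairs of words appear within a few tokens of each other in a tiny invented corpus; the corpus, window size, and function name are assumptions made purely for illustration.

```python
from collections import Counter

# Tiny invented corpus; real systems read tens of billions of words.
corpus = [
    "the table and the chair stood in the kitchen",
    "she pushed the chair under the table",
    "the planet orbits a distant star",
]

WINDOW = 3  # treat words as co-occurring if they are within 3 tokens

def cooccurrence_counts(sentences, window):
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, left in enumerate(tokens):
            for right in tokens[i + 1 : i + 1 + window]:
                counts[tuple(sorted((left, right)))] += 1
    return counts

counts = cooccurrence_counts(corpus, WINDOW)
print(counts[("chair", "table")])   # appear together often -> related meanings
print(counts[("planet", "table")])  # almost never together -> unrelated meanings
```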

That approach seems like a reasonable starting point, but consider how well humans would understand the world if the only way to grasp meaning were to count how often words appear near one another, with no opportunity to interact with other people or our surroundings.

The goal of the study, according to co-lead author and UCLA associate professor of psychology and linguistics Idan Blank, was to determine what the system knew about the words it learned and what type of "common sense" it possessed.

Before the study, the system appeared to have one significant limitation, according to Blank: "As far as the system is concerned, any two words have only a single numerical value that represents how similar they are."
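In embedding models of this kind, that single number is typically the cosine similarity between two word vectors. The sketch below is a minimal illustration of that, not the system described in the study; the three-dimensional vectors are invented for the example.

```python
import numpy as np

# Hand-made three-dimensional vectors for illustration; real embeddings
# have hundreds of dimensions learned from text.
vectors = {
    "table":  np.array([0.9, 0.1, 0.0]),
    "chair":  np.array([0.8, 0.2, 0.1]),
    "planet": np.array([0.0, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    # One number per word pair: the cosine of the angle between the vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["table"], vectors["chair"]))   # close to 1
print(cosine_similarity(vectors["table"], vectors["planet"]))  # close to 0
```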

Consider alligators and dolphins, Blank suggested. "On a scale of size, from 'little' to 'big,' the two are comparable. In terms of intelligence, they are somewhat different. And on a scale from 'safe' to 'dangerous,' they differ greatly in the threat they pose to humans. A word's meaning depends on context."

"We wanted to discover if this system genuinely understands these minute distinctions — if its notion of similarity is flexible in the same way that it is for humans."

To find out, the authors developed a technique they call "semantic projection." One can draw a line between the model's representations of the words "big" and "small," for instance, and see where the representations of various animals fall along that line.
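A minimal sketch of that idea, assuming invented low-dimensional embeddings rather than the model used in the study, projects each animal's vector onto the direction running from "small" to "big" and ranks the animals by the resulting scores.

```python
import numpy as np

# Invented three-dimensional embeddings, used only to illustrate the idea;
# a real model such as GloVe or word2vec supplies learned, higher-dimensional vectors.
emb = {
    "big":     np.array([ 1.0, 0.0, 0.2]),
    "small":   np.array([-1.0, 0.0, 0.2]),
    "whale":   np.array([ 0.8, 0.5, 0.1]),
    "dolphin": np.array([ 0.3, 0.6, 0.2]),
    "mouse":   np.array([-0.7, 0.4, 0.3]),
}

def semantic_projection(word, low="small", high="big"):
    # Project the word's vector onto the line running from "small" to "big";
    # larger values mean the word sits closer to the "big" end of the scale.
    axis = emb[high] - emb[low]
    return float(emb[word] @ axis / np.linalg.norm(axis))

animals = ["whale", "dolphin", "mouse"]
print(sorted(animals, key=semantic_projection))  # expected order: mouse, dolphin, whale
```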

Using that technique, the researchers examined 52 groups of words to see whether the system could rank meanings correctly, such as ordering animals by their size or by how dangerous they are to humans, or grouping U.S. states by their weather or total income.

Other word groups included first names, occupations, sports, mythical creatures, and clothing. Each category was assigned multiple contexts or dimensions, such as size, danger, intelligence, age, and speed.

Across those many objects and contexts, the researchers found that their method agreed closely with human intuition. (To make that comparison, they also asked groups of 25 human participants each to make the same assessments for each of the 52 word groups.)
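One common way to quantify such agreement, not necessarily the exact analysis used in the study, is a rank correlation between the model's scores and the averaged human ratings; the numbers in the sketch below are invented for illustration.

```python
from scipy.stats import spearmanr

# Invented scores; in the study, model rankings were compared with ratings
# gathered from groups of 25 human participants per category.
model_size_scores  = [0.80, 0.30, -0.70, 0.10]  # e.g. whale, dolphin, mouse, fox
human_size_ratings = [9.5, 6.0, 1.0, 3.5]       # averaged human judgments

rho, p_value = spearmanr(model_size_scores, human_size_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```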

Remarkably, the system learned that although the names "Betty" and "George" are similar in that both are fairly "old" names, they refer to different genders; and that although "fencing" and "weightlifting" both typically take place indoors, they differ in how much intelligence they require.

It is a remarkably simple and completely intuitive procedure, according to Blank. "We place animals on a mental scale that runs from 'little' to 'large.'"

Blank said he actually did not expect the technique to work, and was thrilled when it did.

"It turns out that this machine learning system is much smarter than we initially believed; it contains very complex forms of knowledge, and that knowledge is organized in a very intuitive structure," he added. "Just by keeping track of which words co-occur with one another in language, you can learn a lot about the world."

The study was funded by the Office of the Director of National Intelligence's Intelligence Advanced Research Projects Activity, via the Air Force Research Laboratory.


One Comment
To be genuinely "understood," something must be comprehended by a sentient being. It would be more accurate to say that a computer, or the algorithm it runs, can analyze "complex word meanings" as though it understood them. No one has yet made the case that computer algorithms are sentient; a computer engineer recently lost his job for making such a claim.

 By UNIVERSITY OF CALIFORNIA 

