Despite advancements, AI models still lack common sense

A team of scientists from the University of Southern California (USC), the University of Washington, and the Allen Institute for Artificial Intelligence in the United States has developed a new test of verbal reasoning in machine-learning systems. The AI software had to compose a sentence from a handful of simple nouns and verbs.


The test was designed to check whether AI models could generate a sentence that is both grammatically correct and sensible. One model did produce a coherent sentence (‘Two dogs are throwing frisbees at each other’), but it isn’t plausible: in the real world, humans throw frisbees for dogs to catch, not dogs at each other.
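To make the task concrete, here is a minimal sketch of the kind of exercise described above: a language model is given a few simple concept words and asked to compose one plausible sentence. The model choice (gpt2) and the prompt wording are illustrative assumptions, not the researchers’ exact setup.

```python
# Sketch of the generation task: produce one everyday sentence from a small
# set of concept words. Model and prompt are illustrative, not the study's setup.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

concepts = ["dog", "frisbee", "throw", "catch"]
prompt = "Write one everyday sentence using the words " + ", ".join(concepts) + ": "

outputs = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

A small off-the-shelf model like this will often return text that is fluent but, as in the frisbee example, describes something that could not actually happen.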


The researchers built a dataset of 35,141 scenarios described by 77,449 human-written sentences, and they have tested eight different language models so far. All of the AI systems scored lower than humans.
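For illustration only, the sketch below shows the rough shape of such a comparison: a model-generated sentence is scored against human-written reference sentences for the same scenario using a simple token-overlap measure. The actual study relies on proper generation metrics and human judgments; the helper function here is hypothetical.

```python
# Toy comparison of a model sentence against human references for one scenario.
# Simple Jaccard token overlap stands in for the study's real evaluation metrics.
def token_overlap(candidate: str, references: list[str]) -> float:
    cand = set(candidate.lower().split())
    best = 0.0
    for ref in references:
        ref_tokens = set(ref.lower().split())
        union = cand | ref_tokens
        if union:
            best = max(best, len(cand & ref_tokens) / len(union))
    return best

human_refs = [
    "A man throws a frisbee and his dog catches it.",
    "The dog catches the frisbee its owner throws.",
]
model_sentence = "Two dogs are throwing frisbees at each other."
print(f"Best overlap with a human reference: {token_overlap(model_sentence, human_refs):.2f}")
```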


It seems, then, that AI software is not yet trained for plausibility: it still cannot think like humans and lacks common sense. The scientists suggest that, without common sense, robots or voice-activated assistants would make quite a few mistakes while interacting with humans. One likely reason is that neural networks do not develop reasoning skills: they rely on memorizing their training data rather than on real-world experience.


The team is continuing to develop the common-sense test so that future algorithms can be built against it and AI models can interact with humans in everyday situations.


Although AI seems to be more and more like us, it still has a long way to go…
