
Two thumbs up! Computer accurately judges written sentiment

by CIFAR, November 4, 2013

A new algorithm allows computers to accurately analyze the sentiments expressed in a sentence – in this case, whether a reviewer meant to award a movie a thumbs up or a thumbs down.

Photo credit: iStock

“Teaching computers to understand natural language is hard,” says Andrew Ng, an Associate in CIFAR’s Learning in Machines & Brains program (formerly known as Neural Computation & Adaptive Perception) and a computer scientist at Stanford University. “Human languages have figures of speech, allusion, complex forms of negation, and other features that aren’t easy to write rules for. This method allows a computer to learn – from data – many of the basic properties of how language works.”

Most automated methods use a “bag of words” technique, in which the sentiments expressed by individual words are simply added together. But that doesn’t always work, says Richard Socher, a Ph.D. student at Stanford and first author on the paper. A sentence like, “This film doesn’t care about cleverness, wit, or any other kind of intelligent humor,” would be rated as positive with that technique.
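As a rough illustration of why that fails – a minimal sketch with made-up word scores, not the code behind any of the programs discussed here – a bag-of-words scorer simply sums per-word sentiment values, so the lone negation word cannot cancel several positive ones:

```python
# Hypothetical word scores, for illustration only.
WORD_SCORES = {
    "cleverness": 1.0,
    "wit": 1.0,
    "intelligent": 1.0,
    "humor": 0.5,
    "doesn't": -0.5,   # a single negation word can't outweigh several positives
}

def bag_of_words_score(sentence: str) -> float:
    """Sum the per-word scores, ignoring word order and sentence structure."""
    return sum(WORD_SCORES.get(word.strip(",."), 0.0)
               for word in sentence.lower().split())

review = "This film doesn't care about cleverness, wit, or any other kind of intelligent humor"
print(bag_of_words_score(review))  # comes out positive, despite the negative meaning
```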

The new program, called NaSent (for Neural Analysis of Sentiment), starts at the bottom, learning the values of individual words, then building up to phrases, and finally to whole sentences. In this way it teaches itself that “great movie” and “not such a great movie” have different meanings, and learns to apply the rule about negation more generally to other sentences.
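A toy sketch of that bottom-up idea – with made-up vector sizes, random weights, and a generic composition function standing in for whatever NaSent actually learns – assigns each word a vector and then repeatedly merges two child vectors into a parent vector, so that a phrase like “not such a great movie” gets a representation of its own that can be scored for sentiment:

```python
import numpy as np

np.random.seed(0)
DIM = 4  # toy vector size for illustration; a real model would use larger vectors

# Hypothetical learned word vectors and composition weights.
word_vectors = {w: np.random.randn(DIM) for w in ["not", "such", "a", "great", "movie"]}
W = np.random.randn(DIM, 2 * DIM)       # merges two child vectors into one parent vector
W_sentiment = np.random.randn(5, DIM)   # maps any vector to 5 sentiment classes

def compose(left, right):
    """Combine two child representations into a parent representation."""
    return np.tanh(W @ np.concatenate([left, right]))

def sentiment(vec):
    """Pick the best of 5 classes (very negative .. very positive) for a phrase vector."""
    return int(np.argmax(W_sentiment @ vec))

# Build "great movie" first, then fold in the earlier words one at a time,
# mimicking how a parse tree is collapsed from the bottom up.
phrase = compose(word_vectors["great"], word_vectors["movie"])
for w in ["a", "such", "not"]:
    phrase = compose(word_vectors[w], phrase)

print(sentiment(phrase))  # with trained weights, this would score lower than "great movie" alone
```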

This bottom-up method is called “recursive deep learning,” a special type of deep learning that researchers in CIFAR’s LMB program are also using to create better methods for computer vision and speech recognition.

To train the network, the researchers took a dataset of 11,855 individual sentences extracted from movie reviews on the website rottentomatoes.com. They broke those sentences down into 215,154 phrases, and had hundreds of humans rate each phrase from very negative to very positive. Then they turned their computer program loose on the data.
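For illustration only, here is one way such crowd-sourced phrase ratings could be turned into five-way training labels; the very-negative-to-very-positive scale comes from the description above, but the actual file format and label granularity of the researchers’ dataset may differ:

```python
LABELS = ["very negative", "negative", "neutral", "positive", "very positive"]

def rating_to_label(rating: float) -> int:
    """Map a human rating in [0, 1] to one of five sentiment classes."""
    return min(int(rating * len(LABELS)), len(LABELS) - 1)

# Each training example pairs a phrase with its crowd-sourced class (ratings are made up here).
examples = [
    ("great movie", rating_to_label(0.9)),
    ("not such a great movie", rating_to_label(0.3)),
    ("slow and repetitive parts", rating_to_label(0.2)),
]

for phrase, label in examples:
    print(f"{phrase!r} -> {LABELS[label]}")
```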

The program learned to correctly interpret difficult sentences such as “There are slow and repetitive parts, but it has just enough spice to keep it interesting.”

Overall, the program achieved an accuracy of 85 percent, compared to 80 percent for other programs tested on the same dataset – cutting the error rate from 20 percent to 15 percent, a 25 percent reduction. Socher thinks the program will get better with more training, since many of its mistakes were made on words and phrases it had never seen before.

Eventually, programs like this could be used to automate opinion research. Companies might use them to quickly search social media and understand the response to a new movie, video, or other product they have just released.

The paper was presented in October at the Conference on Empirical Methods in Natural Language Processing in Seattle.