Sentiment analysis


Sentiment analysis (also known as opinion mining) refers to the use of natural language processing, text analysis and computational linguistics to identify and extract subjective information in source materials. Sentiment analysis is widely applied to reviews and social media for a variety of applications, ranging from marketing to customer service.

Generally speaking, sentiment analysis aims to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a document. The attitude may be his or her judgment or evaluation (see appraisal theory), affective state (that is to say, the emotional state of the author when writing), or the intended emotional communication (that is to say, the emotional effect the author wishes to have on the reader).

Types of sentiment analysis

A basic task in sentiment analysis is classifying the polarity of a given text at the document, sentence, or feature/aspect level — whether the expressed opinion in a document, a sentence or an entity feature/aspect is positive, negative, or neutral. Advanced, "beyond polarity" sentiment classification looks, for instance, at emotional states such as "angry," "sad," and "happy."

Early work in that area includes Turney[1] and Pang,[2] who applied different methods for detecting the polarity of product reviews and movie reviews respectively. This work is at the document level. One can also classify a document's polarity on a multi-way scale, which was attempted by Pang[3] and Snyder,[4] among others: Pang and Lee[3] expanded the basic task of classifying a movie review as either positive or negative to predicting star ratings on either a 3- or a 4-star scale, while Snyder[4] performed an in-depth analysis of restaurant reviews, predicting ratings for various aspects of the given restaurant, such as the food and atmosphere (on a five-star scale). Although most statistical classification methods ignore the neutral class under the assumption that neutral texts lie near the boundary of the binary classifier, several researchers suggest that, as in every polarity problem, three categories must be identified. Moreover, it has been shown that specific classifiers such as maximum entropy[5] and SVMs[6] can benefit from the introduction of a neutral class and improve the overall accuracy of the classification.
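
As an illustration of document-level polarity classification with an explicit neutral class, the following minimal sketch trains a three-way bag-of-words classifier using scikit-learn's logistic regression (the standard maximum-entropy model). The tiny training set is invented for demonstration; real systems are trained on large annotated review corpora.

```python
# Minimal sketch: three-class (positive / negative / neutral) polarity
# classification with bag-of-words features and logistic regression
# (a maximum-entropy classifier). Training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "great phone and excellent battery life",   # positive
    "i love this camera",                       # positive
    "terrible screen and awful support",        # negative
    "worst purchase i have ever made",          # negative
    "the package arrived on tuesday",           # neutral
    "the manual is forty pages long",           # neutral
]
train_labels = ["pos", "pos", "neg", "neg", "neu", "neu"]

model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

print(model.predict(["excellent camera, i love it",
                     "the box arrived on tuesday"]))
```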

A different method for determining sentiment is the use of a scaling system in which words commonly associated with a negative, neutral or positive sentiment are given a number on a scale from -10 (most negative) to +10 (most positive). When a piece of unstructured text is analyzed using natural language processing, each concept extracted from the text is scored according to the sentiment words that relate to it and their associated scores. This allows a more sophisticated understanding of sentiment based on an 11-point scale. Alternatively, texts can be given separate positive and negative sentiment strength scores if the goal is to measure the strength of sentiment in a text rather than its overall polarity.[7]
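
A minimal sketch of such a scaling scheme is shown below; the word scores are invented for illustration, and a production system would derive them from a curated lexicon and relate them to extracted concepts rather than to raw tokens.

```python
# Toy lexicon scoring on a -10 (most negative) to +10 (most positive) scale.
# The lexicon fragment below is made up for demonstration purposes.
SENTIMENT_LEXICON = {
    "excellent": 9, "good": 5, "mediocre": -2,
    "bad": -5, "terrible": -8, "horrible": -9,
}

def score_text(text):
    """Average the lexicon scores of the sentiment words found in the text."""
    words = text.lower().split()
    scores = [SENTIMENT_LEXICON[w] for w in words if w in SENTIMENT_LEXICON]
    return sum(scores) / len(scores) if scores else 0  # 0 = no signal / neutral

def strength_scores(text):
    """Separate positive and negative strength scores, as in dual-score schemes."""
    scores = [SENTIMENT_LEXICON[w] for w in text.lower().split() if w in SENTIMENT_LEXICON]
    pos = max([s for s in scores if s > 0], default=0)
    neg = min([s for s in scores if s < 0], default=0)
    return pos, neg

print(score_text("the food was excellent but the service was mediocre"))      # 3.5
print(strength_scores("the food was excellent but the service was mediocre")) # (9, -2)
```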

Subjectivity/objectivity identification

This task is commonly defined as classifying a given text (usually a sentence) into one of two classes: objective or subjective.[8] This problem can sometimes be more difficult than polarity classification.[9] The subjectivity of words and phrases may depend on their context and an objective document may contain subjective sentences (e.g., a news article quoting people's opinions). Moreover, as mentioned by Su,[10] results are largely dependent on the definition of subjectivity used when annotating texts. However, Pang[11] showed that removing objective sentences from a document before classifying its polarity helped improve performance.
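
As a rough illustration of the finding cited above that removing objective sentences can improve polarity classification, the hypothetical sketch below keeps only sentences containing at least one word from a small opinion lexicon before scoring a document. The lexicon is invented; real subjectivity classifiers are trained on annotated sentences.

```python
# Hypothetical sketch: drop (presumably objective) sentences that contain no
# opinion words before document-level polarity scoring. Lexicon is invented.
OPINION_WORDS = {"great": 1, "love": 1, "excellent": 1,
                 "awful": -1, "terrible": -1, "boring": -1}

def subjective_sentences(document):
    """Keep only sentences that contain at least one opinion word."""
    sentences = [s.strip() for s in document.lower().split(".") if s.strip()]
    return [s for s in sentences if any(w in OPINION_WORDS for w in s.split())]

def document_polarity(document):
    kept = subjective_sentences(document)
    score = sum(OPINION_WORDS[w] for s in kept for w in s.split() if w in OPINION_WORDS)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

doc = "The film was released in 2007. The acting was excellent. I love the soundtrack."
print(subjective_sentences(doc))   # the two opinionated sentences are kept
print(document_polarity(doc))      # positive
```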

Feature/aspect-based sentiment analysis

Feature/aspect-based sentiment analysis refers to determining the opinions or sentiments expressed on different features or aspects of entities, e.g., of a cell phone, a digital camera, or a bank.[12] A feature or aspect is an attribute or component of an entity, e.g., the screen of a cell phone, the service for a restaurant, or the picture quality of a camera. The advantage of feature-based sentiment analysis is the possibility of capturing nuances about objects of interest. Different features can generate different sentiment responses; for example, a hotel can have a convenient location, but mediocre food.[13] This problem involves several sub-problems, e.g., identifying relevant entities, extracting their features/aspects, and determining whether an opinion expressed on each feature/aspect is positive, negative or neutral.[14] The automatic identification of features can be performed with syntactic methods or with topic modeling.[15][16] More detailed discussions about this level of sentiment analysis can be found in Liu's work.[17]
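
The sketch below gives a deliberately simplified flavour of the aspect-level task: it matches aspect keywords and attributes the polarity of opinion words immediately preceding them, in the spirit of adjective-noun patterns. Both word lists are invented; real systems rely on syntactic parsing or topic models, as noted above.

```python
# Hypothetical aspect-level scoring: attribute the polarity of opinion words
# found just before an aspect keyword to that aspect. Lexicons are invented.
ASPECTS = {"food": ["food", "meal"], "location": ["location"], "service": ["service", "staff"]}
OPINIONS = {"convenient": 1, "great": 1, "friendly": 1, "mediocre": -1, "rude": -1}

def aspect_sentiment(review, window=2):
    words = review.lower().replace(",", " ").split()
    results = {}
    for aspect, keywords in ASPECTS.items():
        for i, w in enumerate(words):
            if w in keywords:
                # Look at the words immediately preceding the aspect term
                # (a crude stand-in for adjective-noun dependency patterns).
                preceding = words[max(0, i - window): i]
                hits = [OPINIONS[p] for p in preceding if p in OPINIONS]
                if hits:
                    results[aspect] = "positive" if sum(hits) > 0 else "negative"
    return results

print(aspect_sentiment("The hotel has a convenient location, but mediocre food"))
# {'food': 'negative', 'location': 'positive'}
```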

Methods and features

Existing approaches to sentiment analysis can be grouped into four main categories: keyword spotting, lexical affinity, statistical methods, and concept-level techniques.[18] Keyword spotting classifies text by affect categories based on the presence of unambiguous affect words such as happy, sad, afraid, and bored.[19] Lexical affinity not only detects obvious affect words, it also assigns arbitrary words a probable "affinity" to particular emotions.[20] Statistical methods leverage elements from machine learning such as latent semantic analysis, support vector machines, "bag of words" and semantic orientation based on pointwise mutual information (see Peter Turney's[1] work in this area). More sophisticated methods try to detect the holder of a sentiment (i.e., the person who maintains that affective state) and the target (i.e., the entity about which the affect is felt).[21] To mine an opinion in context and identify the feature it is expressed about, the grammatical relationships of words are used. Grammatical dependency relations are obtained by deep parsing of the text.[22] Unlike purely syntactical techniques, concept-level approaches leverage elements from knowledge representation such as ontologies and semantic networks and, hence, are also able to detect semantics that are expressed in a subtle manner, e.g., through the analysis of concepts that do not explicitly convey relevant information, but which are implicitly linked to other concepts that do so.[23]
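
The following toy sketch illustrates the semantic-orientation idea: a word's orientation is estimated as the difference between its pointwise mutual information with a positive seed word and with a negative seed word. The five-document "corpus" is invented; Turney's original method estimated the statistics from web search hit counts.

```python
# Toy semantic orientation via pointwise mutual information (PMI):
# SO(word) = PMI(word, "excellent") - PMI(word, "poor").
# The corpus below is invented for demonstration.
import math

corpus = [
    "excellent camera superb pictures",
    "excellent service friendly staff",
    "poor battery dies quickly",
    "poor screen washed out colours",
    "superb lens excellent value",
]

def occurs(word):
    return sum(1 for doc in corpus if word in doc.split())

def co_occurs(word, seed):
    return sum(1 for doc in corpus if word in doc.split() and seed in doc.split())

def pmi(word, seed):
    joint = co_occurs(word, seed) or 0.01  # crude smoothing to avoid log(0)
    return math.log2(joint * len(corpus) / (occurs(word) * occurs(seed)))

def semantic_orientation(word):
    return pmi(word, "excellent") - pmi(word, "poor")

print(semantic_orientation("superb"))  # > 0: leans positive
print(semantic_orientation("dies"))    # < 0: leans negative
```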

Open source software tools deploy machine learning, statistics, and natural language processing techniques to automate sentiment analysis on large collections of texts, including web pages, online news, internet discussion groups, online reviews, web blogs, and social media.[24] Knowledge-based systems, instead, make use of publicly available resources, e.g., WordNet-Affect,[25] SentiWordNet,[26] and SenticNet,[27][28] to extract the semantic and affective information associated with natural language concepts. Sentiment analysis can also be performed on visual content, i.e., images and videos. One of the first approaches in this direction was SentiBank,[29] which utilizes an adjective-noun pair representation of visual content.
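
As an example of the knowledge-based approach, the sketch below looks up the publicly available SentiWordNet resource through NLTK's corpus reader and averages the positive-minus-negative score over a word's senses. This is a crude heuristic, since scores are defined per word sense and no word-sense disambiguation is performed; the relevant NLTK data packages must be downloaded first.

```python
# Knowledge-based lookup using SentiWordNet via NLTK. Scores are defined per
# word sense; averaging over senses is a simplistic heuristic used here only
# for illustration.
import nltk
nltk.download("wordnet", quiet=True)
nltk.download("sentiwordnet", quiet=True)
from nltk.corpus import sentiwordnet as swn

def word_polarity(word, pos="a"):
    """Average (positive - negative) score over the word's senses."""
    senses = list(swn.senti_synsets(word, pos))
    if not senses:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in senses) / len(senses)

print(word_polarity("happy"))     # positive
print(word_polarity("horrible"))  # negative
```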

A human analysis component is required in sentiment analysis, as automated systems are not able to analyze the historical tendencies of the individual commenter or of the platform, and so often classify the expressed sentiment incorrectly. Automation affects approximately 23% of comments that are correctly classified by humans.[30]

Sometimes, the structure of sentiments and topics is fairly complex. Also, the problem of sentiment analysis is non-monotonic with respect to sentence extension and stop-word substitution (compare "THEY would not let my dog stay in this hotel" vs. "I would not let my dog stay in this hotel"). To address this issue, a number of rule-based and reasoning-based approaches have been applied to sentiment analysis, including Defeasible Logic Programming.[31] There are also a number of tree-traversal rules applied to syntactic parse trees to extract the topicality of sentiment in an open-domain setting.[32][33]
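
In the spirit of such rule-based approaches, the hedged sketch below applies one simple rule over a dependency parse: the contribution of a sentiment word is inverted when it has a negation dependent. It assumes spaCy with the en_core_web_sm model is installed, and the tiny lexicon is invented; actual rule systems encode far richer traversals of the parse tree.

```python
# Hypothetical single-rule example over a dependency parse: invert a sentiment
# word's contribution when a negation ("neg") dependent is attached to it.
# Assumes spaCy and the en_core_web_sm model are installed; lexicon is invented.
import spacy

nlp = spacy.load("en_core_web_sm")
LEXICON = {"recommend": 1, "good": 1, "bad": -1}

def rule_based_polarity(sentence):
    score = 0
    for token in nlp(sentence):
        base = LEXICON.get(token.lemma_.lower(), 0)
        negated = any(child.dep_ == "neg" for child in token.children)
        score += -base if negated else base
    return score

print(rule_based_polarity("I would recommend this hotel"))      # 1 (positive)
print(rule_based_polarity("I would not recommend this hotel"))  # -1 (negative)
```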

Evaluation

The accuracy of a sentiment analysis system is, in principle, how well it agrees with human judgments. This is usually measured by precision and recall. However, according to research, human raters typically agree about 79% of the time[34] (see Inter-rater reliability).

Thus, a program that is 70% accurate is doing nearly as well as humans, even though such accuracy may not sound impressive. If a program were "right" 100% of the time, humans would still disagree with it about 20% of the time, since they disagree that much about any answer.[35] More sophisticated measures can be applied, but evaluation of sentiment analysis systems remains a complex matter. For sentiment analysis tasks that return a scale rather than a binary judgement, correlation is a better measure than precision because it takes into account how close the predicted value is to the target value.
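
The sketch below computes these measures on made-up system output: agreement, precision and recall against human labels for a categorical task, and Pearson correlation for a task that predicts a rating scale.

```python
# Illustrative evaluation on invented data: categorical agreement measures and,
# for scale-valued output, Pearson correlation (which rewards being close to
# the target value, not just exactly right).
from sklearn.metrics import accuracy_score, precision_score, recall_score
from scipy.stats import pearsonr

human  = ["pos", "neg", "pos", "neu", "neg", "pos"]
system = ["pos", "neg", "neg", "neu", "neg", "pos"]
print(accuracy_score(human, system))                   # overall agreement
print(precision_score(human, system, average="macro"))
print(recall_score(human, system, average="macro"))

human_stars  = [5, 4, 1, 2, 3, 5]
system_stars = [4, 4, 1, 3, 3, 5]
print(pearsonr(human_stars, system_stars)[0])          # near 1.0 is good
```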

Sentiment analysis and Web 2.0

The rise of social media such as blogs and social networks has fueled interest in sentiment analysis. With the proliferation of reviews, ratings, recommendations and other forms of online expression, online opinion has turned into a kind of virtual currency for businesses looking to market their products, identify new opportunities and manage their reputations. As businesses look to automate the process of filtering out the noise, understanding the conversations, identifying the relevant content and acting on it appropriately, many are now looking to the field of sentiment analysis.[36] Further complicating the matter is the rise of anonymous social media platforms such as 4chan and Reddit.[37] If Web 2.0 was all about democratizing publishing, then the next stage of the web may well be based on democratizing data mining of all the content that is being published.[38]

Research is one step towards this aim. Several research teams in universities around the world currently focus on understanding the dynamics of sentiment in e-communities through sentiment analysis.[39] The CyberEmotions project, for instance, recently identified the role of negative emotions in driving social network discussions.[40]

The problem is that most sentiment analysis algorithms use simple terms to express sentiment about a product or service. However, cultural factors, linguistic nuances and differing contexts make it extremely difficult to turn a string of written text into a simple pro or con sentiment.[36] The fact that humans often disagree on the sentiment of text illustrates how big a task it is for computers to get this right. The shorter the string of text, the harder it becomes.

Even though short text strings might be a problem, sentiment analysis within microblogging has shown that Twitter can be seen as a valid online indicator of political sentiment. Tweets’ political sentiment demonstrates close correspondence to parties’ and politicians’ political positions, indicating that the content of Twitter messages plausibly reflects the offline political landscape.[41]

Resources for sentiment analysis

Sentiment vocabularies and annotated word lists include the Affective Norms for English Words (ANEW),[42][43] SentiWordNet,[44] and WordNet Domains.[45]

Online sentiment analyzers include AlchemyAPI,[46] Bitext,[47] Semantria,[48] Sentiment140,[49] Stanford's "Deeply Moving" demo,[50] Twinword,[51] werfamous,[52] Provalis Research's content analysis software,[53] and Buzzlogix.[54]

Annotated corpora (documents with manual annotations of sentiment that can be used to evaluate algorithms) include the DAI-Labor annotated sentiment dataset.[55]

References

  1. Turney, Peter (2002). "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews". Proceedings of the Association for Computational Linguistics. pp. 417–424. arXiv:cs.LG/0212032.
  2. Pang, Bo; Lee, Lillian; Vaithyanathan, Shivakumar (2002). "Thumbs up? Sentiment Classification using Machine Learning Techniques". Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). pp. 79–86.
  3. Pang, Bo; Lee, Lillian (2005). "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales". Proceedings of the Association for Computational Linguistics (ACL). pp. 115–124.
  4. Snyder, Benjamin; Barzilay, Regina (2007). "Multiple Aspect Ranking using the Good Grief Algorithm". Proceedings of the Joint Human Language Technology/North American Chapter of the ACL Conference (HLT-NAACL). pp. 300–307.
  5. Vryniotis, Vasilis (2013). The importance of Neutral Class in Sentiment Analysis.
  6. Koppel, Moshe; Schler, Jonathan (2006). "The Importance of Neutral Examples for Learning Sentiment". Computational Intelligence 22. pp. 100–109. CiteSeerX: 10.1.1.84.9735.
  7. Thelwall, Mike; Buckley, Kevan; Paltoglou, Georgios; Cai, Di; Kappas, Arvid (2010). "Sentiment strength detection in short informal text". Journal of the American Society for Information Science and Technology 61 (12): 2544–2558. doi:10.1002/asi.21416.
  8. Pang, Bo; Lee, Lillian (2008). "4.1.2 Subjectivity Detection and Opinion Identification". Opinion Mining and Sentiment Analysis. Now Publishers Inc.
  9. Mihalcea, Rada; Banea, Carmen; Wiebe, Janyce (2007). "Learning Multilingual Subjective Language via Cross-Lingual Projections" (PDF). Proceedings of the Association for Computational Linguistics (ACL). pp. 976–983.
  10. Su, Fangzhong; Markert, Katja (2008). "From Words to Senses: a Case Study in Subjectivity Recognition" (PDF). Proceedings of Coling 2008, Manchester, UK.
  11. Pang, Bo; Lee, Lillian (2004). "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts". Proceedings of the Association for Computational Linguistics (ACL). pp. 271–278.
  12. Hu, Minqing; Liu, Bing (2004). "Mining and Summarizing Customer Reviews". Proceedings of KDD 2004.
  13. Cataldi, Mario; Ballatore, Andrea; Tiddi, Ilaria; Aufaure, Marie-Aude (2013-06-22). "Good location, terrible food: detecting feature sentiment in user-generated reviews". Social Network Analysis and Mining 3 (4): 1149–1163. doi:10.1007/s13278-013-0119-7. ISSN 1869-5450.
  14. Liu, Bing; Hu, Minqing; Cheng, Junsheng (2005). "Opinion Observer: Analyzing and Comparing Opinions on the Web". Proceedings of WWW 2005.
  15. Zhai, Zhongwu; Liu, Bing; Xu, Hua; Jia, Peifa (2011-01-01). Huang, Joshua Zhexue; Cao, Longbing; Srivastava, Jaideep, eds. Constrained LDA for Grouping Product Features in Opinion Mining. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 448–459. doi:10.1007/978-3-642-20841-6_37. ISBN 978-3-642-20840-9.
  16. Titov, Ivan; McDonald, Ryan (2008-01-01). "Modeling Online Reviews with Multi-grain Topic Models". Proceedings of the 17th International Conference on World Wide Web. WWW '08 (New York, NY, USA: ACM): 111–120. doi:10.1145/1367497.1367513. ISBN 978-1-60558-085-2.
  17. Liu, Bing (2010). "Sentiment Analysis and Subjectivity" (PDF). In Indurkhya, N.; Damerau, F. J. Handbook of Natural Language Processing (Second ed.).
  18. Cambria, Erik; Schuller, Björn; Xia, Yunqing; Havasi, Catherine (2013). "New Avenues in Opinion Mining and Sentiment Analysis". IEEE Intelligent Systems 28 (2): 15–21. doi:10.1109/MIS.2013.30.
  19. Ortony, Andrew; Clore, G; Collins, A (1988). The Cognitive Structure of Emotions (PDF). Cambridge Univ. Press.
  20. Stevenson, Ryan; Mikels, Joseph; James, Thomas (2007). "Characterization of the Affective Norms for English Words by Discrete Emotional Categories" (PDF). Behavior Research Methods 39 (4): 1020–1024.
  21. Kim, S. M.; Hovy, E. H. (2006). "Identifying and Analyzing Judgment Opinions." (PDF). Proceedings of the Human Language Technology / North American Association of Computational Linguistics conference (HLT-NAACL 2006). New York, NY.
  22. Dey, Lipika; Haque, S. K. Mirajul (2008). "Opinion Mining from Noisy Text Data". Proceedings of the second workshop on Analytics for noisy unstructured text data, p.83-90.
  23. Cambria, Erik; Hussain, Amir (2012). Sentic Computing: Techniques, Tools, and Applications (PDF). Springer.
  24. Akcora, Cuneyt Gurcan; Bayir, Murat Ali; Demirbas, Murat; Ferhatosmanoglu, Hakan (2010). "Identifying breakpoints in public opinion". SigKDD, Proceedings of the First Workshop on Social Media Analytics.
  25. Strapparava, Carlo; Valitutti, Alessandro (2004). "WordNet-Affect: An affective extension of WordNet" (PDF). Proceedings of LREC. pp. 1083–1086.
  26. Baccianella, Stefano; Esuli, Andrea; Sebastiani, Fabrizio (2010). "Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining" (PDF). Proceedings of LREC. pp. 2200–2204. Retrieved 2014-04-05.
  27. "SenticNet". sentic.net.
  28. Cambria, Erik; Olsher, Daniel; Rajagopal, Dheeraj (2014). "SenticNet 3: A common and common-sense knowledge base for cognition-driven sentiment analysis" (PDF). Proceedings of AAAI. pp. 1515–1521.
  29. Borth, Damian; Ji, Rongrong; Chen, Tao; Breuel, Thomas; Chang, Shih-Fu (2013). "Large-scale Visual Sentiment Ontology and Detectors Using Adjective Noun Pairs". Proceedings of ACM Int. Conference on Multimedia. pp. 223–232.
  30. "Case Study: Advanced Sentiment Analysis". Retrieved 18 October 2013.
  31. Galitsky, Boris; McKenna, Eugene William. "Sentiment Extraction from Consumer Reviews for Providing Product Recommendations". Retrieved 18 November 2013.
  32. Galitsky, Boris; Dobrocsi, Gabor; de la Rosa, Josep Lluís (2010). "Inverting Semantic Structure Under Open Domain Opinion Mining". FLAIRS Conference.
  33. Galitsky, Boris; Chen, Huanjin; Du, Shaobin (2009). "Inversion of Forum Content Based on Authors' Sentiments on Product Usability". AAAI Spring Symposium: Social Semantic Web: Where Web 2.0 Meets Web 3.0: 33–38.
  34. Ogneva, M. "How Companies Can Use Sentiment Analysis to Improve Their Business". Mashable. Retrieved 2012-12-13.
  35. Roebuck, K. Sentiment Analysis: High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors.
  36. Wright, Alex. "Mining the Web for Feelings, Not Facts", New York Times, 2009-08-23. Retrieved on 2009-10-01.
  37. "Sentiment Analysis on Reddit". Retrieved 10 October 2014.
  38. Kirkpatrick, Marshall. ReadWriteWeb, 2009-04-15. Retrieved on 2009-10-01.
  39. CORDIS. "Collective emotions in cyberspace (CYBEREMOTIONS)", European Commission, 2009-02-03. Retrieved on 2010-12-13.
  40. Condliffe, Jamie. "Flaming drives online social networks", NewScientist, 2010-12-07. Retrieved on 2010-12-13.
  41. Tumasjan, Andranik; Sprenger, Timm O.; Sandner, Philipp G.; Welpe, Isabell M. (2010). "Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment". Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media.
  42. "generating the Affective Norms for English Words (ANEW) dataset". tomlee.wtf.
  43. Stevenson, Ryan A.; Mikels, Joseph A.; James, Thomas W. (2007-11-01). "Characterization of the Affective Norms for English Words by discrete emotional categories". Behavior Research Methods 39 (4): 1020–1024. doi:10.3758/BF03192999. ISSN 1554-351X.
  44. "SentiWordNet". cnr.it.
  45. Manuela Speranza, FBK. "WordNet Domains". fbk.eu.
  46. http://www.alchemyapi.com/
  47. https://www.bitext.com/
  48. "Semantria Web Demo - semantria.com". semantria.com.
  49. "API - Sentiment140 - A Twitter Sentiment Analysis Tool". sentiment140.com.
  50. "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank". Deeply Moving: Deep Learning for Sentiment Analysis.
  51. "Twinword Sentiment Analysis API Web Demo". twinword.com.
  52. "A Twitter and web sentiment analysis tool". werfamous.com.
  53. "Content analysis software for sentiment analysis". provalisresearch.com.
  54. "Text Analysis & Sentiment Analysis API | Buzzlogix". Buzzlogix | Text Analysis API. Buzzlogix.com. Retrieved 2015-11-23.
  55. "DAI-Labor > Competence Centers > CC IRML > Datasets > Annotated Sentiment Dataset". dai-labor.de.
