Optimization of sentiment analysis using machine learning classifiers

Human-centric Computing and Information Sciences, Dec 2017

Words and phrases on social media convey people's views about products, services, governments and events. Extracting positive or negative polarity from social media text is the task of sentiment analysis in the field of natural language processing. The exponential growth of demand from business organizations and governments impels researchers to pursue research in sentiment analysis. This paper leverages four state-of-the-art machine learning classifiers, viz. Naïve Bayes, J48, BFTree and OneR, for optimization of sentiment analysis. The experiments are performed using three manually compiled datasets; two are captured from Amazon and one is assembled from IMDB movie reviews. The efficacies of these four classification techniques are examined and compared. Naïve Bayes is found to be quite fast in learning, whereas OneR seems more promising, attaining 91.3% precision, 97% F-measure and 92.34% correctly classified instances.


Jaspreet Singh (profjaspreetbatth@gmail), Gurvinder Singh, Rajinder Singh

Keywords: Sentiment analysis; Social media text; Movie reviews; Product reviews; Machine learning classifiers

The remainder of this paper is organized as follows. The proposed approach is presented in "Proposed methodology for optimization of sentiment prediction using weka" section. The related work with recent contributions of machine learning in the field of sentiment classification is described in "Related work" section. In "Datasets taken" section, the three manually annotated datasets are described along with their preprocessing. The experimental results and discussion of the efficacies of the classifiers are cataloged in "Results and discussions" section, followed by ending remarks and a future direction in "Conclusion" section.
Levels of sentiment

Due to the scarcity of opinion text available in digital form, little research interest in computational linguistics was witnessed during the last decade of the twentieth century [2–4]. The escalation of social media text on the internet has attracted young researchers to define levels of granularity of text. Web text is classified into three levels, viz. document level, sentence level and word level. In [5], a fourth level of granularity is defined using a deep convolution neural network. This fourth level is a character-level feature extraction approach used for extracting features of each character window from a given word (Table 1).

Machine learning techniques for sentiment analysis

Social networking sites dispense their data conveniently and freely on the web. This availability of data entices young researchers to plunge into the field of sentiment analysis. People express their emotions and perspectives on social media discussion forums [6]. Business organizations employ researchers to investigate unrevealed facts about their products and services. Spontaneous and automatic determination of sentiments from reviews is the main concern of multinational organizations [7–10]. Machine learning techniques have improved the accuracy of sentiment analysis and expedite the automatic evaluation of data. This work utilizes four machine learning techniques for the task of sentiment analysis. The modeling of the four techniques is briefly discussed below.

Naïve Bayes used for sentiment classification

The dichotomy of sentiment is generally decided by the mindset of the author of a text, whether he is positively or negatively oriented towards his saying [6, 11–13]. The Naïve Bayes classifier is a popular supervised classifier that furnishes a way to express positive, negative and neutral feelings in web text. It utilizes conditional probability to classify words into their respective categories.
The benefit of using Naïve Bayes for text classification is that it needs only a small dataset for training. The raw data from the web undergoes preprocessing: removal of numerals, foreign words, HTML tags and special symbols, yielding a set of words. The tagging of words with positive, negative and neutral labels is performed manually by human experts. This preprocessing produces word-category pairs for the training set. Consider a word 'y' from the test set (unlabeled word set) and a window of n words (x1, x2, ..., xn) from a document. The conditional probability of the given data point 'y' belonging to the category of the n words from the training set is given by:

P(y | x1, x2, ..., xn) = P(y) · ∏_{i=1}^{n} P(xi | y) / P(x1, x2, ..., xn)   (1)

Consider an example of a movie review for the movie "Exposed". The experimentation with Naïve Bayes yields the following results.

J48 algorithm used for sentiment prediction

The hierarchical mechanism divides the feature space into distinct regions, followed by the categorization of samples into category labels. J48 is a decision-tree-based classifier used to generate rules for the prediction of target terms. It has the ability to deal with larger training datasets than other classifiers [14]. The word features for sentences of the corpus, taken from the labeled ARFF file of the training set, are represented in the leaf nodes of the decision tree. In the test set, every time a near feature qualifies the label condition of an internal feature node, its level is lifted up in the same branch of the decision tree. The assignment of labels to the word features of the test set gradually generates two different branches of the decision tree. The J48 algorithm uses an entropy function for testing the classification of terms from the test set:

Entropy(Term) = − Σ_{j=1}^{n} (|Term_j| / |Term|) · log2(|Term_j| / |Term|)   (2)

where (Term) can be a unigram, bigram or trigram. In this study we consider unigrams and bigrams.
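As an aside, the Naïve Bayes scoring of Eq. (1) and the entropy criterion of Eq. (2) can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: the reviews below are invented, and Laplace smoothing is added so unseen words do not zero out a score.

```python
from collections import Counter, defaultdict
import math

# Toy labeled reviews -- illustrative only, not the paper's datasets.
train = [
    (["great", "acting", "enjoyable", "plot"], "Pos"),
    (["great", "fun", "enjoyable"], "Pos"),
    (["horrible", "acting", "bad", "writing"], "Neg"),
    (["very", "misleading", "bad", "plot"], "Neg"),
]

class_counts = Counter(label for _, label in train)
word_counts = defaultdict(Counter)            # per-class word frequencies
for words, label in train:
    word_counts[label].update(words)
vocab = {w for counts in word_counts.values() for w in counts}

def predict(words, alpha=1.0):
    """Pick argmax_y of log P(y) + sum_i log P(x_i | y), as in Eq. (1);
    the denominator P(x1..xn) is constant across classes and is dropped."""
    best, best_score = None, float("-inf")
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for w in words:
            score += math.log((word_counts[label][w] + alpha) /
                              (total + alpha * len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

def entropy(labels):
    """Entropy(Term) = -sum_j (|Term_j|/|Term|) log2(|Term_j|/|Term|), Eq. (2)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

print(predict(["bad", "acting"]))          # -> Neg
print(entropy([lbl for _, lbl in train]))  # 1.0 for an even 2/2 split
```

The same `entropy` helper is what a J48-style tree would evaluate at each candidate split over the word features.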
The example in Table 2 contains bigrams like "Horrible acting", "Bad writing" and "Very misleading" labeled with negative sentiment, whereas the term "More enjoyable" reflects positive sentiment towards the movie. The decision tree of the J48 algorithm for obtaining sentiment from text is represented in Fig. 1 below.

BFTree algorithm used for sentiment prediction

Another classification approach outperforms J48, C4.5 and CART by expanding only the best node in depth-first order. The BFTree algorithm excavates the training file to locate the best supporting matches of positive and negative terms in the test file. BFTree keeps heuristic information gain to identify the best node by probing all collected word features. The only difference between the J48 and BFTree classifiers is the computation order in which the decision tree is built. The decision tree separates feature terms of plain text taken from movie reviews and classifies them at the document level by tagging appropriate labels. BFTree extracts the best node from labeled and trained binary tree nodes to reduce the error computed from information gain:

InfoGain(S, A) = Entropy(S) − Σ_{i ∈ V(A)} (|S_i| / |S|) · Entropy(S_i)   (3)

where S is a word feature term of the test set and A is the attribute of a sampled term from the training set. V(A) denotes the set of all possible values of A. The binary tree stops growing when attribute A captures a single value or when the value of information gain vanishes.

OneR algorithm used for sentiment prediction

The OneR algorithm is a classification approach that restricts the decision tree to level one, thereby generating one rule. The single rule makes predictions on word feature terms with minimal error rate due to repetitive assessment of word occurrences. The classification of the most frequent terms of a particular sentence is made on the basis of the class of featured terms from the training set.
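This one-rule procedure can be sketched compactly in Python. The attribute names and toy rows below are invented for illustration, not taken from the paper's datasets.

```python
from collections import Counter, defaultdict

def one_r(rows, target):
    """OneR: for each predictor, map each of its values to the most frequent
    target class; keep the predictor whose rule set makes the fewest errors."""
    best_attr, best_rules, best_errors = None, None, float("inf")
    attrs = [a for a in rows[0] if a != target]
    for attr in attrs:
        # Count target-class frequencies for every value of this predictor.
        freq = defaultdict(Counter)
        for row in rows:
            freq[row[attr]][row[target]] += 1
        rules = {v: counts.most_common(1)[0][0] for v, counts in freq.items()}
        errors = sum(sum(c.values()) - c.most_common(1)[0][1]
                     for c in freq.values())
        if errors < best_errors:
            best_attr, best_rules, best_errors = attr, rules, errors
    return best_attr, best_rules

# Toy review features -- illustrative only.
rows = [
    {"has_bad": "yes", "has_great": "no",  "sentiment": "Neg"},
    {"has_bad": "yes", "has_great": "no",  "sentiment": "Neg"},
    {"has_bad": "no",  "has_great": "yes", "sentiment": "Pos"},
    {"has_bad": "no",  "has_great": "no",  "sentiment": "Pos"},
]
attr, rules = one_r(rows, "sentiment")
print(attr, rules)   # has_bad {'yes': 'Neg', 'no': 'Pos'}
```

Here the predictor `has_bad` classifies every toy row correctly, so it becomes the single rule.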
The demonstration of the OneR algorithm for sentiment prediction with the smallest classification error is given below:

Step 1 Select a featured term from the training set.
Step 2 For each predictor and for each value of that predictor, count the frequency of each value of the target term.
Step 3 Find the most frequent class.
Step 4 Make a rule and assign that class to the predictor.
Step 5 Train the model using steps 3 and 4, calculate the total error of the rules of each predictor, and choose the predictor with the smallest error.

Proposed methodology for optimization of sentiment prediction using weka

The preprocessing of raw text from the web is done in Python 3.5 using the NLTK and bs4 libraries. Each review in the first dataset is parsed with NLTK's parser, and the title of the review is considered as a feature. We obtained 15 features from the first dataset and 42 features from each of the second and third datasets. The CSV files generated from Python are converted to ARFF files for WEKA 3.8. Only two sentiment labels, namely Pos for positive and Neg for negative, are used for labeling sentences. The working methodology of the proposed work for optimization of sentiment prediction is given below in Fig. 2. After loading files with the ARFF loader, the class assigner picks up appropriate class labels from the dataset and performs feature selection on the basis of frequently used headings and most frequent titles. The feature selector module is implemented using three feature selection methods, namely Document Frequency (DF), Mutual Information (MI) and Information Gain (IG). The mathematical modeling of these feature selection methods requires some probability distributions and statistical notations, described below:

P(w): probability that a document 'd' contains term 'w'.
P(c'): probability that document 'd' does not belong to category 'c'.
P(w, c): joint probability that document 'd' contains word term 'w' of category 'c'.
P(c | w): conditional probability that a document 'd' belongs to category 'c' under the condition that 'd' contains word term 'w'. Similarly, other notations like P(w'), P(w | c), P(w | c'), P(c | w') and P(c' | w) are taken, and {c} is the set of categories.
N1: number of documents that belong to category 'c' and contain term 'w'.
N2: number of documents that do not belong to category 'c' but contain term 'w'.
N3: number of documents that belong to category 'c' and do not contain term 'w'.
N4: number of documents that neither belong to category 'c' nor contain term 'w'.
N: total number of document reviews.

The DF method qualifies only those documents in which higher-frequency terms are considered:

DF = Σ_{i=1}^{m} N1_i   (4)

The MI method measures features of text by computing the similarity of word term 'w' and category 'c':

SimInfo(w, c) = log( P(w | c) / P(w) )   (5)

MI = log( (N1 × N) / ((N1 + N3)(N1 + N2)) )   (6)

The IG construct measures similarity information for a category by exploiting probabilities of the absence or presence of terms in a document review:

IG(w) = − Σ_c P(c) · log P(c) + P(w) Σ_c P(c | w) · log P(c | w) + P(w') Σ_c P(c | w') · log P(c | w')   (7)

The normalization module converts all letters into lowercase, removes punctuation marks and special symbols, converts numbers into words, expands abbreviations, and limits the average length of a sentence to twenty words. Each sentence is delimited by a newline character. Python's NLTK and bs4 libraries are used for this purpose. The data splitter takes a Train:Test ratio of 80:20. We used manual splitting of the dataset at the time of retrieval of data from the web. The four classifiers are trained with the training subsets, followed by performance evaluation. The evaluation metrics taken in the experiment are precision, recall, accuracy and F-measure.

Related work

Existing approaches to sentiment prediction and optimization widely include SVM and Naïve Bayes classifiers.
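As a side note, the feature-selection measures of Eqs. (6) and (7) can be computed directly from the document counts N1 to N4 introduced above. The sketch below assumes a binary split into category c and its complement c', with toy counts that are not the paper's data.

```python
import math

def mutual_information(n1, n2, n3, n4):
    """MI = log( N1 * N / ((N1 + N3)(N1 + N2)) ), Eq. (6)."""
    n = n1 + n2 + n3 + n4
    return math.log((n1 * n) / ((n1 + n3) * (n1 + n2)))

def information_gain(n1, n2, n3, n4):
    """IG(w) of Eq. (7) for a binary split {c, c'}: category entropy minus
    the entropy conditioned on presence/absence of the term w."""
    n = n1 + n2 + n3 + n4
    def h(*ps):                        # entropy of a distribution (nats)
        return -sum(p * math.log(p) for p in ps if p > 0)
    p_c = (n1 + n3) / n                # P(c)
    p_w = (n1 + n2) / n                # P(w)
    p_c_w = n1 / (n1 + n2)             # P(c | w)
    p_c_nw = n3 / (n3 + n4)            # P(c | w')
    return (h(p_c, 1 - p_c)
            - p_w * h(p_c_w, 1 - p_c_w)
            - (1 - p_w) * h(p_c_nw, 1 - p_c_nw))

# A term spread evenly across categories carries no information:
print(mutual_information(25, 25, 25, 25))   # 0.0
# A term concentrated in one category is informative:
print(information_gain(40, 10, 10, 40))     # > 0
```

Terms whose MI or IG falls below a threshold would be dropped by the feature selector.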
Hierarchical machine learning approaches yield moderate performance in classification tasks, whereas SVM and Multinomial Naïve Bayes have proved better in terms of accuracy and optimization. Sentiment analysis using neural network architectures has appeared in very few works. Sentiment prediction methods using recursive neural networks and deep convolution neural networks are somewhat complex in capturing the compositionality of words. Extracting character-level features and embeddings of complex words is found to be hard in many neural network architectures, whereas extracting sentence-level or word-level features such as morphological tags and stems is more effectively achieved in convolutional neural networks. Very few researchers have used J48, BFTree and OneR for the task of sentiment prediction. These three classifiers have been utilized for other classification tasks like emotion recognition from text and Twitter text categorization. The summary of benchmarks related to machine learning techniques in terms of classification accuracy is listed in Table 3. SVM and Naïve Bayes prove better in terms of benchmarks than other machine learning techniques (Table 3).

Datasets taken

Three datasets are manually annotated, the first two from http://www.amazon.in and the third from http://www.imdb.com. The first dataset consists of product reviews of a Woodland wallet: reviews from 12th October 2016 to 25th October 2016 form the training set of 88 reviews, and reviews from 25th October 2016 to 30th October 2016 form the testing set of 12 randomly chosen product reviews, whose sentiments are predicted using the four machine learning algorithms. The second dataset consists of 7465 Sony digital camera reviews taken from 01st October 2016 to 25th October 2016 for the training set, with 1000 reviews from 25th October 2016 to 30th October 2016 for the test set. The third dataset consists of movie reviews taken from http://www.imdb.com, with 2421 reviews for the training set and 500 reviews for the test set.
Results and discussions

The experiment is carried out using the freeware WEKA software tool for classification of sentiments in text. Standard implementations of the Naïve Bayes, J48, BFTree and OneR algorithms are taken from WEKA version 3.8. The first dataset shows 100% classification accuracy with Naïve Bayes in some of the epochs because of the small size of the dataset. The average of 29 epochs for all four classifiers on the second and third datasets is presented in Table 4 below. Naïve Bayes shows the fastest learning among the four classifiers, whereas J48 is found to be slower. The OneR classifier leads the other three classifiers in the percentage of correctly classified instances. The accuracy of the J48 algorithm is promising in true positive and false positive rates. Results of classification accuracies for the test subsets with 42 and 15 attributes are recorded. The average accuracies of 29 runs on the three datasets are presented in Table 5 below.
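The evaluation metrics reported above (precision, recall, F-measure and accuracy) follow directly from the confusion counts of a classifier. A toy sketch with invented counts, not the paper's results:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F-measure and accuracy from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f_measure, accuracy

# Toy confusion counts -- illustrative only, not the paper's results.
p, r, f, acc = metrics(tp=90, fp=10, fn=5, tn=95)
print(f"precision={p:.3f} recall={r:.3f} F={f:.3f} accuracy={acc:.3f}")
```

WEKA reports these same quantities per class in its classifier evaluation output.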
[Table 3 Summary of classification accuracies of machine learning techniques reported in related studies]
All four classifiers improved in accuracy with the increase in features from 15 to 42. This shows the direct proportionality between multiple features and the learning capability of machine learning algorithms.

Conclusion

This paper exploits four machine learning classifiers for sentiment analysis using three manually annotated datasets. The mean of 29 epochs of experimentation recorded in Table 4 shows that OneR is more precise in terms of the percentage of correctly classified instances. On the other hand, Naïve Bayes exhibits a faster learning rate, and J48 reveals adequacy in the true positive and false positive rates.
Table 5 shows that J48 and OneR are better for the smaller dataset of Woodland wallet reviews. A limitation of the proposed preprocessing is that it does not extract foreign words, emoticons and elongated words with their appropriate sentiments. Future work in the task of sentiment analysis has scope to improve preprocessing with word embeddings using deep neural networks, and this study can also be extended through convolution neural networks.

D1: Woodland's wallet reviews, D2: Sony digital camera reviews, D3: IMDB movie reviews

Authors' contributions
JS made substantial contributions to conception and design, acquisition of data, and analysis and interpretation of data. GS helped in revision and has given final approval of the version to be published. RS agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors read and approved the final manuscript.

Acknowledgements
This research was supported by the Department of Computer Science, Guru Nanak Dev University, Amritsar. I thank Dr. Gurvinder Singh and Dr. Rajinder Singh for their participation in the experimental work and their assistance in improving the manuscript.

Competing interests
This research work has non-financial academic and intellectual competing interests.

Availability of data and materials
I submit that I can make the experimental data and materials available after the completion of my thesis.

Consent for publication
We hereby grant and assign all rights to Human-centric Computing and Information Sciences for publication.

Ethics approval and consent to participate
The article submitted is an original work and has neither been published in any other peer-reviewed journal nor is under consideration for publication by any other journal.

Funding information
No funding was received from any funder.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References
1. Parvathy G, Bindhu JS (2016) A probabilistic generative model for mining cybercriminal network from online social media: a review. Int J Comput Appl 134(14):1–4. doi:10.5120/ijca2016908121
2. Cambria E, White B (2014) Jumping NLP curves: a review of natural language processing research. IEEE Comput Intell Mag 9(2):48–57. doi:10.1109/mci.2014.2307227
3. Pang B, Lee L, Vaithyanathan S (2002) Thumbs up? In: Proceedings of the ACL-02 conference on empirical methods in natural language processing, EMNLP '02. doi:10.3115/1118693.1118704
4. Poria S, Cambria E, Gelbukh A, Bisio F, Hussain A (2015) Sentiment data flow analysis by means of dynamic linguistic patterns. IEEE Comput Intell Mag 10(4):26–36. doi:10.1109/mci.2015.2471215
5. Nogueira dos Santos C, Gatti M (2014) Deep convolution neural networks for sentiment analysis of short texts. In: Proceedings of COLING 2014, the 25th international conference on computational linguistics. p 69–78
6. Kiritchenko S, Mohammad S, Salameh M (2016) SemEval-2016 task 7: determining sentiment intensity of English and Arabic phrases. In: Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016). doi:10.18653/v1/s16-1004
7. Cernian A, Sgarciu V, Martin B (2015) Sentiment analysis from product reviews using SentiWordNet as lexical resource. In: 2015 7th international conference on electronics, computers and artificial intelligence (ECAI). doi:10.1109/ecai.2015.7301224
8. Hammer HL, Solberg PE, Øvrelid L (2014) Sentiment classification of online political discussions: a comparison of a word-based and dependency-based method. In: Proceedings of the 5th workshop on computational approaches to subjectivity, sentiment and social media analysis. doi:10.3115/v1/w14-2616
9. Zadeh L (2006) Toward human-level machine intelligence. In: 2006 18th IEEE international conference on tools with artificial intelligence (ICTAI'06). doi:10.1109/ictai.2006.114
10. Joachims T (2002) Text classification. In: Learning to classify text using support vector machines. p 7–33. doi:10.1007/978-1-4615-0907-3_2
11. Wanton TM, Porrata AP, Guijarro AM, Balahur A (2010) Opinion polarity detection: using word sense disambiguation to determine the polarity of opinions. In: Proceedings of the 2nd international conference on agents and artificial intelligence. doi:10.5220/0002703504830486
12. Xia Y, Cambria E, Hussain A, Zhao H (2014) Word polarity disambiguation using bayesian model and opinion-level features. Cogn Comput 7(3):369–380. doi:10.1007/s12559-014-9298-4
13. Dey L, Chakraborty S, Biswas A, Bose B, Tiwari S (2016) Sentiment analysis of review datasets using Naïve Bayes' and K-NN classifier. Int J Inform Eng Electron Bus 8(4):54–62. doi:10.5815/ijieeb.2016.04.07
14. Nie CY, Wang J, He F, Sato R (2015) Application of J48 decision tree classifier in emotion recognition based on chaos characteristics. In: Proceedings of the 2015 international conference on automation, mechanical control and computational engineering. doi:10.2991/amcce-15.2015.330
15. Tan S, Zhang J (2008) An empirical study of sentiment analysis for Chinese documents. Expert Syst Appl 34(4):2622–2629. doi:10.1016/j.eswa.2007.05.028
16. Mohammad SM, Zhu X, Kiritchenko S, Martin J (2015) Sentiment, emotion, purpose, and style in electoral tweets. Inf Process Manage 51(4):480–499. doi:10.1016/j.ipm.2014.09.003
17. Kiritchenko S, Mohammad SM (2016) Sentiment composition of words with opposing polarities. In: Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies. doi:10.18653/v1/n16-1128
18. Dashtipour K, Poria S, Hussain A, Cambria E, Hawalah AY, Gelbukh A, Zhou Q (2016) Multilingual sentiment analysis: state of the art and independent comparison of techniques. Cogn Comput 8(4):757–771. doi:10.1007/s12559-016-9415-7
19. Sobhani P, Mohammad S, Kiritchenko S (2016) Detecting stance in tweets and analyzing its interaction with sentiment. In: Proceedings of the 5th joint conference on lexical and computational semantics. doi:10.18653/v1/s16-2021
20. Poria S, Cambria E, Winterstein G, Huang G (2014) Sentic patterns: dependency-based rules for concept-level sentiment analysis. Knowl Based Syst 69:45–63. doi:10.1016/j.knosys.2014.05.005
21. Socher R (2016) Deep learning for sentiment analysis: invited talk. In: Proceedings of the 7th workshop on computational approaches to subjectivity, sentiment and social media analysis. doi:10.18653/v1/w16-0408
22. Turney PD, Mohammad SM (2014) Experiments with three approaches to recognizing lexical entailment. Nat Lang Eng 21(03):437–476. doi:10.1017/s1351324913000387
23. Mohammad S, Kiritchenko S, Sobhani P, Zhu X, Cherry C (2016) SemEval-2016 task 6: detecting stance in tweets. In: Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016). doi:10.18653/v1/s16-1003
24. Poria S, Chaturvedi I, Cambria E, Hussain A (2016) Convolutional MKL based multimodal emotion recognition and sentiment analysis. In: 2016 IEEE 16th international conference on data mining (ICDM). doi:10.1109/icdm.2016.0055
25. Wang J, Yu L, Lai KR, Zhang X (2016) Dimensional sentiment analysis using a regional CNN-LSTM model. In: Proceedings of the 54th annual meeting of the association for computational linguistics, vol 2: short papers. doi:10.18653/v1/p16-2037


Jaspreet Singh, Gurvinder Singh, Rajinder Singh. Optimization of sentiment analysis using machine learning classifiers, Human-centric Computing and Information Sciences, 2017, 32, DOI: 10.1186/s13673-017-0116-3