TY - JOUR
T1 - Classifying True and False Hebrew Stories Using Word N-Grams
AU - HaCohen-Kerner, Yaakov
AU - Dilmon, Rakefet
AU - Friedlich, Shimon
AU - Cohen, Daniel Nissim
N1 - Publisher Copyright:
© 2016 Taylor & Francis Group, LLC.
PY - 2016/11/16
Y1 - 2016/11/16
N2 - False story detection is an important and challenging problem. This paper presents a simple and sound methodology that is able to automatically distinguish between true and false Hebrew stories using either psychological or semantic information. The examined corpus contains 96 stories that were composed by 48 native Hebrew speakers who were asked to tell both true and false stories. The features used by the classification model are word unigrams, bigrams, and trigrams. Different experiments on various combinations of these feature sets using five supervised machine learning (ML) methods, the InfoGain feature filtering method, and parameter tuning have been performed. We report on the success of this approach in identifying the correct types of stories. The word unigrams set was superior to all other feature sets. For the first classification task (true and false stories), the logistic regression ML method was the best method, achieving an accuracy of 91.67%. The two decision tree ML methods (J48 and REPTree) also present high accuracy results (90.63% and 87.5%) using only 5 and 4 unigrams, respectively.
KW - False stories
KW - story classification
KW - supervised learning
KW - text classification
KW - true stories
KW - word N-grams
UR - http://www.scopus.com/inward/record.url?scp=84992163226&partnerID=8YFLogxK
U2 - 10.1080/01969722.2016.1232119
DO - 10.1080/01969722.2016.1232119
M3 - Article
AN - SCOPUS:84992163226
SN - 0196-9722
VL - 47
SP - 629
EP - 649
JO - Cybernetics and Systems
JF - Cybernetics and Systems
IS - 8
ER -