Classifying True and False Hebrew Stories Using Word N-Grams

Yaakov HaCohen-Kerner, Rakefet Dilmon, Shimon Friedlich, Daniel Nissim Cohen

Research output: Contribution to journal › Article › peer-review

2 Scopus citations


False story detection is an important and challenging problem. This paper presents a simple and sound methodology that automatically distinguishes between true and false Hebrew stories using either psychological or semantic information. The examined corpus contains 96 stories composed by 48 native Hebrew speakers, each of whom was asked to tell both a true and a false story. The features used by the classification model are word unigrams, bigrams, and trigrams. Experiments were performed on various combinations of these feature sets using five supervised machine learning (ML) methods, the InfoGain feature filtering method, and parameter tuning. We report on the success of this approach in identifying the correct types of stories. The word unigram set was superior to all other feature sets. For the first classification task (true vs. false stories), logistic regression was the best ML method, achieving an accuracy of 91.67%. The two decision tree ML methods (J48 and REPTree) also achieved high accuracy (90.63% and 87.5%) using only 5 and 4 unigrams, respectively.
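The pipeline described in the abstract (word n-gram features, InfoGain-style feature filtering, then a supervised classifier) can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the toy English sentences stand in for the Hebrew corpus, which is not reproduced here, and scikit-learn's mutual information scorer is used as a stand-in for Weka's InfoGain filter.

```python
# Hedged sketch of the paper's approach: word unigram/bigram/trigram
# features, information-gain-style feature selection, and logistic
# regression. Toy data only; the real corpus had 96 Hebrew stories.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Stand-in "stories" and labels (the paper used 48 speakers, each
# telling one true and one false story).
stories = [
    "i saw the dog yesterday in the park",
    "the dog saw me and ran across the park",
    "honestly i definitely never took anything at all",
    "i swear i was never anywhere near that place",
]
labels = ["true", "true", "false", "false"]

pipeline = Pipeline([
    # word unigrams, bigrams, and trigrams, as in the paper
    ("ngrams", CountVectorizer(ngram_range=(1, 3))),
    # mutual information as an analogue of the InfoGain filter
    ("select", SelectKBest(mutual_info_classif, k=10)),
    ("clf", LogisticRegression(max_iter=1000)),
])

pipeline.fit(stories, labels)
print(pipeline.predict(["i never took the dog anywhere"]))
```

The `k=10` selection threshold is an arbitrary choice for this sketch; the paper reports that its decision-tree models needed only 4–5 selected unigrams.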

Original language: English
Pages (from-to): 629-649
Number of pages: 21
Journal: Cybernetics and Systems
Issue number: 8
State: Published - 16 Nov 2016

Bibliographical note

Publisher Copyright:
© 2016 Taylor & Francis Group, LLC.


Keywords

  • False stories
  • story classification
  • supervised learning
  • text classification
  • true stories
  • word N-grams


