Discriminative model checking

Peter Niebert, Doron Peled, Amir Pnueli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

12 Scopus citations


Model checking typically compares a system description with a formal specification, and returns either a counterexample or an affirmation of compatibility between the two descriptions. A counterexample provides evidence of the existence of an error, but it can still be very difficult to understand what the cause of that error is. We propose a model checking methodology that uses two levels of specification. Under this methodology, we group executions as good or bad with respect to satisfying a base LTL specification. We then use an analysis specification, in CTL* style, that quantifies over the good and bad executions. This specification allows checking not only whether the base specification holds or fails to hold in a system, but also how it does so. We propose a model checking algorithm in the style of the standard CTL* decision procedure. This framework can be used to compare good and bad executions both within a system and outside it, providing assistance in locating design or programming errors.

Original language: English
Title of host publication: Computer Aided Verification - 20th International Conference, CAV 2008, Proceedings
Number of pages: 13
State: Published - 2008
Event: 20th International Conference on Computer Aided Verification, CAV 2008 - Princeton, NJ, United States
Duration: 7 Jul 2008 - 14 Jul 2008

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5123 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 20th International Conference on Computer Aided Verification, CAV 2008
Country/Territory: United States
City: Princeton, NJ


