
Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models

Figure: A knights and knaves scene, generated by DALL·E 3.


Knights and knaves problems represent a classic genre of logical puzzles where characters either tell the truth or lie. The objective is to logically deduce each character's identity based on their statements. The challenge arises from the truth-telling or lying behavior, which influences the logical implications of each statement. Solving these puzzles requires not only direct deductions from individual statements, but also the ability to assess the truthfulness of statements by reasoning through various hypothetical scenarios. As such, knights and knaves puzzles serve as compelling examples of suppositional reasoning. In this paper, we introduce TruthQuest, a benchmark for suppositional reasoning based on the principles of knights and knaves puzzles. Our benchmark presents problems of varying complexity, considering both the number of characters and the types of logical statements involved. Evaluations on TruthQuest show that large language models like Llama 3 and Mixtral-8x7B exhibit significant difficulties solving these tasks. A detailed error analysis of the models' outputs reveals that lower-performing models exhibit a diverse range of reasoning errors, frequently failing to grasp the concept of truth and lies. In comparison, more proficient models primarily struggle with accurately inferring the logical implications of potentially false statements.
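To make the suppositional nature of these puzzles concrete, the following minimal sketch (illustrative only, not part of this repository) solves a two-character puzzle by supposing every possible knight/knave assignment and keeping those under which each knight's statement is true and each knave's statement is false:

from itertools import product

# Example puzzle (illustrative): A says "B is a knave.", B says "A and I are both knights."
# True = knight (always truthful), False = knave (always lying).
statements = {
    "A": lambda a, b: not b,     # "B is a knave."
    "B": lambda a, b: a and b,   # "A and I are both knights."
}

def consistent(a, b):
    # A knight's statement must evaluate to True, a knave's to False.
    return all(
        statements[name](a, b) == is_knight
        for name, is_knight in (("A", a), ("B", b))
    )

solutions = [
    {"A": a, "B": b}
    for a, b in product([True, False], repeat=2)
    if consistent(a, b)
]
print(solutions)  # [{'A': True, 'B': False}]: A is a knight, B is a knave

The brute-force search stands in for the hypothetical reasoning a solver must carry out: each candidate assignment is a supposition whose consequences are checked against all statements.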

Table of Contents

- Setup
- Generate Puzzles
- Run Models
- Evaluate Performance
- LLM-Based and Human Annotations
- Human-Annotated CoT Prompts
- License
- Citation

Setup

All code was developed and tested on Ubuntu 22.04 with Python 3.11.

To run the code, we recommend using Poetry:

poetry install                          # Install dependencies
poetry shell                            # Activate virtual environment
# Work for a while
deactivate

Please make sure to configure your HuggingFace credentials in order to download the respective models.
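If you prefer to authenticate from Python rather than via the command line, one option (an assumption about your setup, not a repository requirement) is the login helper from huggingface_hub; alternatively, set the HF_TOKEN environment variable or run huggingface-cli login:

# Authenticate with the Hugging Face Hub so that gated models can be downloaded.
from huggingface_hub import login

login(token="hf_xxx")  # placeholder; replace with your own access token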

Generate Puzzles

To generate puzzles, run the following command:

python gen_data.py --from-yaml

This will generate the corresponding data. Alternatively, you can fetch the data from here.
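For orientation, a single generated puzzle might look roughly like the sketch below; the field names are purely illustrative and do not reflect the actual TruthQuest schema:

# Hypothetical puzzle instance (illustrative field names only, not the actual schema).
example = {
    "n_characters": 2,
    "statements": {
        "A": "B is a knave.",
        "B": "A and I are both knights.",
    },
    "solution": {"A": "knight", "B": "knave"},
}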

Run Models

To run a model, use the following command:

python run.py --model <hf-model-name>

In this project, we used models such as Llama 3 and Mixtral-8x7B.

Note that in order to use a new model, you need to add a configuration file in this folder.
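For reference, the sketch below shows one way a HuggingFace chat model could be prompted on a single puzzle using the transformers pipeline API; it is not the repository's run.py, and the model name is only an example:

# Minimal sketch (not run.py): prompt a HuggingFace chat model on one puzzle.
from transformers import pipeline

pipe = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")  # example model
puzzle = (
    "A and B are each either a knight (always tells the truth) or a knave (always lies). "
    "A says: 'B is a knave.' B says: 'A and I are both knights.' "
    "Who is a knight and who is a knave?"
)
messages = [{"role": "user", "content": puzzle}]
output = pipe(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])  # assistant reply (recent transformers versions)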

Evaluate Performance

To evaluate the performance of the models, run the following command:

python evaluate_conclusion.py

For specific command-line arguments, please refer to the code.
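As a rough illustration of conclusion-level evaluation (a sketch, not the logic of evaluate_conclusion.py), one could count a prediction as correct only if every character's predicted identity matches the gold solution:

# Illustrative accuracy computation over predicted identity assignments.
def conclusion_accuracy(predictions, gold):
    correct = sum(pred == sol for pred, sol in zip(predictions, gold))
    return correct / len(gold)

gold = [{"A": "knight", "B": "knave"}, {"A": "knave", "B": "knave"}]
predictions = [{"A": "knight", "B": "knave"}, {"A": "knight", "B": "knave"}]
print(conclusion_accuracy(predictions, gold))  # 0.5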

LLM-Based and Human Annotations

We publish all LLM-based and human annotations in our HuggingFace data repository. The TruthQuest dataset can be found here.

Human-Annotated CoT Prompts

We provide up to 8 human-annotated CoT examples for each dataset configuration. Please see this folder for further information.
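As an illustration of how such examples could be assembled into a few-shot prompt (the repository's actual prompt format may differ):

# Illustrative few-shot CoT prompt assembly; not the repository's prompt template.
cot_examples = [
    {
        "puzzle": "A says: 'B is a knave.' B says: 'A and I are both knights.'",
        "reasoning": "Suppose B is a knight. Then B's statement is true, so A is also a knight, "
                     "and A's claim that B is a knave would have to be true, contradicting the "
                     "supposition. Hence B is a knave; A's statement is then true, so A is a knight.",
        "answer": "A is a knight, B is a knave.",
    },
]

def build_prompt(examples, new_puzzle):
    blocks = [
        f"Puzzle: {ex['puzzle']}\nReasoning: {ex['reasoning']}\nAnswer: {ex['answer']}"
        for ex in examples
    ]
    blocks.append(f"Puzzle: {new_puzzle}\nReasoning:")
    return "\n\n".join(blocks)

print(build_prompt(cot_examples, "A says: 'We are both knaves.'"))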

License


This work is licensed under a CC BY-SA 4.0 License.

Citation

If you find our work helpful, you can cite this paper as:

@article{mondorf2024liar,
  title={Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models},
  author={Mondorf, Philipp and Plank, Barbara},
  journal={arXiv preprint arXiv:2406.12546},
  year={2024}
}
