Voight-Kampff Generative AI Authorship Verification 2024

Synopsis

  • Task: Given two texts, one authored by a human, one by a machine: pick out the human.
  • Input: Pairs of texts, one of which was written by a human. [download]
  • Evaluation: ROC-AUC, Brier, C@1, F1, F0.5u [code]
  • Submission: Deployment on TIRA [submit]
  • Baselines: PPMd CBC, Authorship Unmasking, Binoculars, DetectLLM, DetectGPT, Fast-DetectGPT, Text length [code]

Task

With Large Language Models (LLMs) improving at breakneck speed and seeing more widespread adoption every day, it is getting increasingly hard to discern whether a given text was authored by a human being or a machine. Many classification approaches have been devised to help humans distinguish between human- and machine-authored text, though often without questioning the fundamental feasibility of the task itself.

With years of experience in a related but much broader field, authorship verification, we set out to answer whether this task can be solved at all. We start with the simplest suitable task setup: given two texts, one authored by a human and one by a machine, pick out the human.

The Generative AI Authorship Verification Task @ PAN is organized in collaboration with the Voight-Kampff Task @ ELOQUENT Lab in a builder-breaker style. PAN participants will build systems to tell human and machine apart, while ELOQUENT participants will investigate novel text generation and obfuscation methods for avoiding detection.

Data

Test data for this task will be compiled from the submissions of ELOQUENT participants and will comprise multiple text genres, such as news articles, Wikipedia intro texts, and fanfiction. Additionally, PAN participants will be provided with a bootstrap dataset of real and fake news articles covering a range of 2021 U.S. news headlines. The bootstrap dataset can be used for training, though we strongly recommend drawing on other sources as well.

Download instructions: The dataset is available via Zenodo. Please register at Tira first and then request access on Zenodo using the same email address. The dataset contains copyrighted material and may be used for research purposes only. Redistribution is not allowed.

The bootstrap dataset is provided as a set of newline-delimited JSON files. Each file contains a list of articles, written either by (any number of) human authors or a single machine. That is, the file human.jsonl contains only human texts, whereas a file gemini-pro.jsonl contains articles about the same topics, but written entirely by Google's Gemini Pro. The file format is as follows:

{"id": "gemini-pro/news-2021-01-01-2021-12-31-kabulairportattack/art-081", "text": "..."}
{"id": "gemini-pro/news-2021-01-01-2021-12-31-capitolriot/art-050", "text": "..."}
...

The article IDs and line orderings are the same across all files (except for the model-specific prefix before the first /), so the same line always corresponds to the same topic, but from different “authors”.
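Because IDs and line orderings are aligned, same-topic training pairs can be built by zipping two files line by line and matching IDs with the prefix stripped. A minimal sketch (the two inline records stand in for one line each read from human.jsonl and gemini-pro.jsonl; the topic_key helper and pair layout are illustrative):

```python
import json

def topic_key(article_id: str) -> str:
    """Drop the model-specific prefix before the first '/'."""
    return article_id.split("/", 1)[1]

# Inline stand-ins for one line each of human.jsonl and gemini-pro.jsonl.
humans = [json.loads('{"id": "human/news-2021-01-01-2021-12-31-kabulairportattack/art-081", "text": "..."}')]
machines = [json.loads('{"id": "gemini-pro/news-2021-01-01-2021-12-31-kabulairportattack/art-081", "text": "..."}')]

# The same line always corresponds to the same topic; verify via the
# prefix-stripped IDs before pairing.
pairs = [
    {"topic": topic_key(h["id"]), "human": h["text"], "machine": m["text"]}
    for h, m in zip(humans, machines)
    if topic_key(h["id"]) == topic_key(m["id"])
]
```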

The test dataset will be provided in a different format. Instead of individual files, only a single JSONL file will be given, each line containing a pair of texts:

{"id": "iixcWBmKWQqLAwVXxXGBGg", "text1": "...", "text2": "..."}
{"id": "y12zUebGVHSN9yiL8oRZ8Q", "text1": "...", "text2": "..."}
...

The IDs will be scrambled and the participant's task is to generate an appropriate output file with predictions for which of the two texts is the human one (see Submission below).

Evaluation

Systems will be evaluated with the same measures as previous installments of the PAN authorship verification tasks. The following metrics will be used:

  • ROC-AUC: The area under the ROC (Receiver Operating Characteristic) curve.
  • Brier: The complement of the Brier score (mean squared loss).
  • C@1: A modified accuracy score that assigns non-answers (score = 0.5) the average accuracy of the remaining cases.
  • F1: The harmonic mean of precision and recall.
  • F0.5u: A modified F0.5 measure (precision-weighted F measure) that treats non-answers (score = 0.5) as false negatives.
  • The arithmetic mean of all the metrics above.
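C@1 and F0.5u are less standard than the other measures. Here is a sketch of both, following our reading of the published definitions (Peñas and Rodrigo for C@1, Bevendorff et al. for F0.5u), with truth as booleans and scores in [0, 1] where exactly 0.5 marks a non-answer:

```python
def c_at_1(truth, scores):
    """C@1: non-answers (score == 0.5) are credited with the accuracy
    achieved on the answered cases."""
    n = len(truth)
    answered = [(t, s) for t, s in zip(truth, scores) if s != 0.5]
    n_correct = sum(1 for t, s in answered if (s > 0.5) == t)
    n_unanswered = n - len(answered)
    return (n_correct + n_unanswered * n_correct / n) / n

def f05_u(truth, scores):
    """F0.5u: precision-weighted F-measure that counts non-answers
    (score == 0.5) as false negatives."""
    tp = sum(1 for t, s in zip(truth, scores) if t and s > 0.5)
    fn = sum(1 for t, s in zip(truth, scores) if t and s < 0.5)
    fp = sum(1 for t, s in zip(truth, scores) if not t and s > 0.5)
    u = sum(1 for s in scores if s == 0.5)
    return 1.25 * tp / (1.25 * tp + 0.25 * (fn + u) + fp)
```

Answering 0.5 on a hard case thus costs less under both measures than a confident wrong answer.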

The evaluator for the task will output the above measures as JSON like so:

{
    "roc-auc": 0.992,
    "brier": 0.979,
    "c@1": 0.978,
    "f1": 0.978,
    "f05u": 0.978,
    "mean": 0.981
}

Submission

Participants will submit their systems as Docker images through the Tira platform. Submitted systems need not be trained on Tira, but they must be standalone and runnable on the platform without contact with the outside world (evaluation runs will be sandboxed).

The submitted software must be executable inside the container via a command line call taking two arguments: an input file (the absolute path to the input JSONL file) and an output directory (the absolute path to which the results will be written).

Within Tira, the input file will be called dataset.jsonl, so with the pre-defined Tira placeholders, your software should be invoked like this:

$ mySoftware $inputDataset/dataset.jsonl $outputDir

Within $outputDir, a single (!) file with the file extension *.jsonl must be created with the following format:

{"id": "iixcWBmKWQqLAwVXxXGBGg", "is_human": 1.0}
{"id": "y12zUebGVHSN9yiL8oRZ8Q", "is_human": 0.3}
...

For each test case in the input file, an output line must be written with the ID of the input text pair and a confidence score between 0.0 and 1.0. A score < 0.5 means that text1 is believed to be human-authored. A score > 0.5 means that text2 is believed to be human-authored. A score of exactly 0.5 means the case is undecidable. Participants are encouraged to answer with 0.5 rather than making a wrong prediction.

All test cases must be processed in isolation without information leakage between them! Even though systems may be given an input file with multiple JSON lines at once for reasons of efficiency, these inputs must be processed and answered just the same as if only a single line were given. Answers for any one test case must not depend on other cases in the input dataset!
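Putting these requirements together, a submission skeleton might look as follows. This is a minimal sketch: predict is a placeholder for an actual system, and the output file name predictions.jsonl is an arbitrary choice, since any single *.jsonl file in the output directory is accepted.

```python
import json
import os
import sys

def predict(text1, text2):
    """Placeholder for an actual detector. Returns a confidence in [0, 1]:
    < 0.5 means text1 is human, > 0.5 means text2, 0.5 means undecidable."""
    return 0.5

def main(input_file, output_dir):
    out_path = os.path.join(output_dir, "predictions.jsonl")
    with open(input_file, encoding="utf-8") as fin, \
            open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            case = json.loads(line)
            # Each case is scored in isolation: no state is shared between lines.
            score = predict(case["text1"], case["text2"])
            fout.write(json.dumps({"id": case["id"], "is_human": score}) + "\n")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```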

Tip: You can test your submission using the pan24-generative-authorship-smoke-test dataset and evaluator (which is different from the final test dataset).

Baselines

We provide seven baselines: six LLM detection approaches re-implemented from the original papers, plus a trivial text-length baseline.

With PPMd CBC and authorship unmasking, we provide two bag-of-words authorship verification models. Binoculars, DetectLLM, DetectGPT, and Fast-DetectGPT use large language models to measure text perplexity. The text-length baseline serves mainly as a data sanity check and is designed to perform at chance level.
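To illustrate the idea behind the PPMd CBC baseline: the compression-based cosine (after Sculley and Brodley) measures how much better two texts compress together than separately. The sketch below substitutes zlib for a PPMd compressor purely for illustration; it is not the baseline implementation itself.

```python
import zlib

def c(text: str) -> int:
    """Compressed size of a text in bytes (zlib standing in for PPMd)."""
    return len(zlib.compress(text.encode("utf-8"), 9))

def cbc_distance(x: str, y: str) -> float:
    """Compression-based cosine distance:
    1 - (C(x) + C(y) - C(xy)) / sqrt(C(x) * C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return 1.0 - (cx + cy - cxy) / (cx * cy) ** 0.5
```

A lower distance means the two texts share more compressible regularities; see the GitHub README for how the actual baseline turns this into a prediction.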

The baselines are published on GitHub. You can run them locally, in a Docker container or using tira-run. All baselines come with a CLI and usage instructions. Their general usage is:

$ baseline BASELINENAME [OPTIONS] INPUT_FILE OUTPUT_DIRECTORY
Use --help on any subcommand for more information:
$ baseline --help
Usage: baseline [OPTIONS] COMMAND [ARGS]...

  PAN'24 Generative Authorship Detection baselines.

Options:
  --help  Show this message and exit.

Commands:
  binoculars     PAN'24 baseline: Binoculars.
  detectgpt      PAN'24 baseline: DetectGPT.
  detectllm      PAN'24 baseline: DetectLLM.
  fastdetectgpt  PAN'24 baseline: Fast-DetectGPT.
  length         PAN'24 baseline: Text length.
  ppmd           PAN'24 baseline: Compression-based cosine.
  unmasking      PAN'24 baseline: Authorship unmasking.

More information on how to install and run the baselines can be found in the README on GitHub.

Task Committee