Synopsis

  • Task: Given two texts, determine if they are written by the same author.
  • Input: [data]
  • Evaluation: [code]
  • Submission: [submit]
  • Baseline: [code]

Task

Authorship verification is the task of deciding whether two texts were written by the same author, based on a comparison of their writing styles.

Over the three years from PAN 2020 to PAN 2022, we are developing a new experimental setup that addresses three key questions in authorship verification that have not been studied at scale to date:

  • Year 1 (PAN 2020): Closed-set verification.
    Given a large training dataset comprising known authors who have written about a given set of topics, the test dataset contains verification cases from a subset of the authors and topics found in the training data.

  • Year 2 (PAN 2021): Open-set verification.
    Given the training dataset of Year 1, the test dataset contains verification cases from previously unseen authors and topics.

  • Year 3 (PAN 2022): Surprise task.
    The task of the last year of this evaluation cycle (to be announced at a later time) will be designed with an eye toward realism and practical application.

This evaluation cycle on authorship verification provides a renewed challenge of increasing difficulty within a large-scale evaluation. We invite you to plan ahead and participate in all three of these tasks.

Data

The dataset comes in two variants: a smaller dataset intended particularly for symbolic machine learning methods and a larger dataset suitable for deep learning. Participants must specify which of the two datasets they used to train their model; models trained on the small set will be evaluated separately from models trained on the large set. We encourage participants to tackle the small dataset as a challenge, though separate approaches may be submitted for either one or both.

Both the small and the large dataset come with two newline delimited JSON files each (*.jsonl). The first file contains pairs of texts (each pair has a unique ID) and their fandom labels:

{"id": "6cced668-6e51-5212-873c-717f2bc91ce6", "fandoms": ["Fandom 1", "Fandom 2"], "pair": ["Text 1...", "Text 2..."]}
                        {"id": "ae9297e9-2ae5-5e3f-a2ab-ef7c322f2647", "fandoms": ["Fandom 3", "Fandom 4"], "pair": ["Text 3...", "Text 4..."]}
...

The second file, ending in *_truth.jsonl, contains the ground truth for all pairs. The ground truth consists of a boolean flag indicating whether the texts in a pair are from the same author, along with the numeric author IDs:

{"id": "6cced668-6e51-5212-873c-717f2bc91ce6", "same": true, "authors": ["1446633", "1446633"]}
                        {"id": "ae9297e9-2ae5-5e3f-a2ab-ef7c322f2647", "same": false, "authors": ["1535385", "1998978"]}
...

Data and ground truth are in the same order and can be ingested line-wise in parallel without the need for a reshuffle based on the pair ID.
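As an illustration, a minimal Python sketch of such a line-wise parallel read might look as follows. The file names are placeholders, and only the field names shown in the examples above are assumed:

import json

# Hypothetical file names; substitute the actual paths of the downloaded dataset.
PAIRS_FILE = "pairs.jsonl"
TRUTH_FILE = "pairs_truth.jsonl"

def read_pairs_with_truth(pairs_path, truth_path):
    """Yield (pair_record, truth_record) tuples, relying on the identical line order."""
    with open(pairs_path, encoding="utf-8") as pairs_file, \
         open(truth_path, encoding="utf-8") as truth_file:
        for pair_line, truth_line in zip(pairs_file, truth_file):
            pair = json.loads(pair_line)
            truth = json.loads(truth_line)
            # Sanity check: both files list the pairs in the same order.
            assert pair["id"] == truth["id"]
            yield pair, truth

if __name__ == "__main__":
    for pair, truth in read_pairs_with_truth(PAIRS_FILE, TRUTH_FILE):
        text1, text2 = pair["pair"]
        same_author = truth["same"]
        # ... train or evaluate a verification model here ...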

The fandom labels will be given in both the training and testing datasets. The ground truth file will only be available for the training data.

Evaluation

More details will be shared soon.

Results

More details will be shared soon.

Task Committee