Synopsis

  • Task: Given a fanfiction text, determine its author among a list of candidates.
  • Input: [data]
  • Output: [example] [schema] [verifier]
  • Evaluation: [code]
  • Submission: [submit]
  • Baseline: [code]

Introduction

Authorship attribution is an important problem in information retrieval and computational linguistics, but also in applied areas such as law and journalism, where knowing the author of a document (such as a ransom note) may enable, e.g., law enforcement to save lives. The most common framework for testing candidate algorithms is the closed-set attribution task: given a sample of reference documents from a restricted and finite set of candidate authors, the task is to determine the most likely author of a previously unseen document of unknown authorship. This task may be quite challenging when documents of known and unknown authorship come from different domains (e.g., thematic area, genre).

In this edition of PAN, for the first time, we focus on cross-domain attribution applied to fanfiction. Fanfiction refers to fictional forms of literature which are nowadays produced by admirers ('fans') of a certain author (e.g. J.K. Rowling), novel ('Pride and Prejudice'), TV series ('Sherlock Holmes'), etc. The fans heavily borrow from the original work's theme, atmosphere, style, characters, story world, etc. to produce new fictional literature, i.e. the so-called fanfics. This is why fanfiction is also known as transformative literature; it has generated a number of controversies in recent years related to the intellectual property rights of the original authors (cf. plagiarism). Fanfiction, however, is typically produced by fans without any explicit commercial goals. The publication of fanfics typically happens online, on informal community platforms that are dedicated to making such literature accessible to a wider audience (e.g. fanfiction.net). The original work of art or genre is typically referred to as a fandom.

The cross-domain attribution task in this edition of PAN can be more accurately described as cross-fandom attribution in fanfiction. In more detail, all documents of unknown authorship are fanfics of the same fandom (the target fandom), while the documents of known authorship by the candidate authors are fanfics of several fandoms other than the target fandom.

Task

Given a set of documents (known fanfics) by a small number (up to 20) of candidate authors, identify the authors of another set of documents (unknown fanfics). Each candidate author has contributed at least one of the unknown fanfics, which all belong to the same target fandom. The known fanfics belong to several fandoms (excluding the target fandom), although not necessarily the same fandoms for all candidate authors. An equal number of known fanfics per candidate author is provided; in contrast, the unknown fanfics are not equally distributed over the authors. The length of each fanfic varies from 500 to 1,000 tokens. All documents within a problem are in the same language, which may be English, French, Italian, Polish, or Spanish.

Development Phase

To develop your software, we provide you with a corpus whose characteristics are highly similar to those of the evaluation corpus. It comprises a set of cross-domain authorship attribution problems in each of the following 5 languages: English, French, Italian, Polish, and Spanish. Note that we specifically avoid using the term 'training corpus' because the sets of candidate authors in the development and evaluation corpora do not overlap. Therefore, your approach should not be designed to handle the candidate authors of the development corpus in particular.

Each problem consists of a set of known fanfics by each candidate author and a set of unknown fanfics, located in separate folders. The file problem-info.json, found in the main folder of each problem, lists the name of the folder of unknown documents and the names of the candidate-author folders:

{
    "unknown-folder": "unknown",
    "candidate-authors": [
        { "author-name": "candidate00001" },
        { "author-name": "candidate00002" },
        ...
    ]
}
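For illustration, this file can be parsed in a few lines of Python; the following is only a sketch (the function name is ours, not part of the task materials) based on the structure shown above:

import json
import os

def load_problem_info(problem_dir):
    # Read problem-info.json from the problem's main folder and return the
    # name of the unknown-documents folder plus the candidate-author folder names.
    with open(os.path.join(problem_dir, "problem-info.json"), encoding="utf-8") as f:
        info = json.load(f)
    unknown_folder = info["unknown-folder"]
    candidates = [c["author-name"] for c in info["candidate-authors"]]
    return unknown_folder, candidates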

The true author of each unknown document can be seen in the file ground-truth.json, also found in the main folder of each problem.

In addition, to handle a collection of such problems, the file collection-info.json includes all relevant information. In more detail, for each problem it lists its main folder, the language (either "en", "fr", "it", "pl", or "sp"), and the encoding (always UTF-8) of its documents:

[
    { "problem-name": "problem00001",
      "language": "en",
      "encoding": "UTF-8" },
    { "problem-name": "problem00002",
       "language": "fr",
       "encoding": "UTF-8" },
	  ...
]
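As a sketch of how this file might be used (assuming the problem folders are named after "problem-name" and located next to collection-info.json; the function name is ours), all problems of a collection can be traversed as follows:

import json
import os

def iter_problems(collection_dir):
    # Yield the main-folder path and language of every problem
    # listed in collection-info.json.
    with open(os.path.join(collection_dir, "collection-info.json"), encoding="utf-8") as f:
        collection = json.load(f)
    for entry in collection:
        yield os.path.join(collection_dir, entry["problem-name"]), entry["language"]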

Evaluation Phase

Once you have finished tuning your approach to achieve satisfying performance on the development corpus, your software will be tested on the evaluation corpus. During the competition, the evaluation corpus will not be released publicly. Instead, we ask you to submit your software for evaluation at our site as described below.

After the competition, the evaluation corpus, including ground truth data, will become available. This way, you will have everything you need to evaluate your approach on your own while remaining comparable to those who took part in the competition.

Output

Your system should produce one output file in JSON format for each authorship attribution problem. The output files should be named answers-PROBLEMNAME.json (e.g., answers-problem00001.json, answers-problem00002.json) and contain the list of unknown documents and their predicted authors:

[
    { "unknown-text": "unknown00001.txt",
      "predicted-author": "candidate00003" },
    { "unknown-text": "unknown00002.txt",
      "predicted-author": "candidate00005" },
    ...
]
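A minimal Python sketch of how such an answers file could be written (the predictions dictionary and function name are hypothetical, introduced only for illustration):

import json
import os

def write_answers(output_dir, problem_name, predictions):
    # predictions maps unknown file names (e.g. "unknown00001.txt")
    # to predicted candidate-author names (e.g. "candidate00003").
    answers = [{"unknown-text": doc, "predicted-author": author}
               for doc, author in sorted(predictions.items())]
    path = os.path.join(output_dir, "answers-{}.json".format(problem_name))
    with open(path, "w", encoding="utf-8") as f:
        json.dump(answers, f, indent=4)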

Performance Measures

Submissions will be evaluated on each attribution problem separately, based on the macro-averaged F1 score. Participants will be ranked according to their average macro-F1 across all attribution problems of the evaluation corpus.

We provide you with a Python script that calculates macro-F1 (and optionally the confusion matrix) of a single attribution problem.
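For reference, the measure can also be reproduced with scikit-learn; the following is only a sketch of the computation, not the official evaluation script:

from sklearn.metrics import confusion_matrix, f1_score

def evaluate_problem(true_authors, predicted_authors):
    # Macro-averaged F1 over the candidate authors of a single attribution
    # problem, plus the corresponding confusion matrix.
    macro_f1 = f1_score(true_authors, predicted_authors, average="macro")
    cm = confusion_matrix(true_authors, predicted_authors)
    return macro_f1, cm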

Submission

We ask you to prepare your software so that it can be executed via command line calls. The command shall take as input (i) an absolute path to the directory of the evaluation corpus and (ii) an absolute path to an existing empty output directory:

mySoftware -i EVALUATION-DIRECTORY -o OUTPUT-DIRECTORY

Within EVALUATION-DIRECTORY, a collection-info.json file and a number of folders, one for each attribution problem, will be found (similar to the development corpus described above). For each attribution problem, the corresponding output file should be written to OUTPUT-DIRECTORY.
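If your software is written in Python, for example, the required command-line interface could be handled as follows (a sketch; solve_collection is a placeholder for your own attribution approach, not a provided function):

import argparse

def solve_collection(evaluation_dir, output_dir):
    # Placeholder: read collection-info.json, solve each attribution problem
    # independently, and write one answers-PROBLEMNAME.json file per problem
    # into output_dir.
    pass

def main():
    parser = argparse.ArgumentParser(description="PAN cross-domain authorship attribution")
    parser.add_argument("-i", dest="evaluation_dir", required=True,
                        help="absolute path to the evaluation corpus directory")
    parser.add_argument("-o", dest="output_dir", required=True,
                        help="absolute path to an existing empty output directory")
    args = parser.parse_args()
    solve_collection(args.evaluation_dir, args.output_dir)

if __name__ == "__main__":
    main()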

Note: Each attribution problem should be solved independently of other problems in the collection.

You can choose freely among the available programming languages and among the operating systems Microsoft Windows and Ubuntu. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via ssh and via remote desktop.

Once your software is deployed on your virtual machine, we ask you to access TIRA at www.tira.io, where you can self-evaluate your software on the test data.

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Task Committee