Synopsis

  • Task: Given a fanfiction text, determine its author among a list of candidates.
  • Input: [data]
  • Evaluation: [code]
  • Submission: [submit]
  • Baselines: [code]

Introduction

Authorship attribution is an important problem not only in information retrieval and computational linguistics but also in applied areas such as law and journalism, where knowing the author of a document (e.g., a ransom note) may enable law enforcement to save lives. The most common framework for testing candidate algorithms is the closed-set attribution task: given samples of reference documents from a restricted and finite set of candidate authors, determine the most likely author of a previously unseen document of unknown authorship. This task can be quite challenging under cross-domain conditions, when documents of known and unknown authorship come from different domains (e.g., thematic area, genre). In addition, it is often more realistic to assume that the true author of a disputed document is not necessarily included in the list of candidates.

Fanfiction refers to fictional literature that is nowadays produced by admirers ('fans') of a certain author (e.g. J.K. Rowling), novel ('Pride and Prejudice'), TV series (Sherlock Holmes), etc. The fans borrow heavily from the original work's theme, atmosphere, style, characters, story world, etc. to produce new fictional literature, the so-called fanfics. This is why fanfiction is also known as transformative literature; it has generated a number of controversies in recent years related to the intellectual property rights of the original authors (cf. plagiarism). Fanfiction, however, is typically produced by fans without any explicit commercial goals. Fanfics are typically published online, on informal community platforms dedicated to making such literature accessible to a wider audience (e.g. fanfiction.net). The original work of art or genre is typically referred to as a fandom.

This edition of PAN focuses on cross-domain attribution in fanfiction, a task that can more accurately be described as cross-fandom attribution in fanfiction. In more detail, all documents of unknown authorship are fanfics of the same fandom (the target fandom), while the documents of known authorship by the candidate authors are fanfics of several fandoms other than the target fandom. In contrast to the PAN-2018 edition of this task, we focus on open-set attribution conditions: the true author of a text in the target fandom is not necessarily included in the list of candidate authors.

Task

Given a set of documents (known fanfics) by a small number (up to 10) of candidate authors, identify the authors of another set of documents (unknown fanfics) in another, target fandom. Each candidate author has contributed at least one of the unknown fanfics, which all belong to the same target fandom. Some of the fanfics in the target fandom were not written by any of the candidate authors. The known fanfics belong to several fandoms (excluding the target fandom), although not necessarily the same fandoms for all candidate authors. An equal number of known fanfics per candidate author is provided. In contrast, the unknown fanfics are not equally distributed over the authors. The text length of the fanfics varies from 500 to 1,000 tokens. All documents of a problem are in the same language, which may be English, French, Italian, or Spanish.

Development Phase

To develop your software, we provide you with a corpus with highly similar characteristics to the evaluation corpus. It comprises a set of cross-domain authorship attribution problems in each of the following four languages: English, French, Italian, and Spanish. Note that we deliberately avoid the term 'training corpus' because the sets of candidate authors of the development and the evaluation corpora do not overlap. Therefore, your approach should not be tailored to the candidate authors of the development corpus.

Each problem consists of a set of known fanfics by each candidate author and a set of unknown fanfics, located in separate folders. The file problem-info.json, which can be found in the main folder of each problem, lists the name of the folder of unknown documents and the names of the candidate-author folders.

{
    "unknown-folder": "unknown",
    "candidate-authors": [
        { "author-name": "candidate00001" },
        { "author-name": "candidate00002" },
        ...
    ]
}
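
For illustration, the following is a minimal sketch (in Python, with hypothetical paths and variable names) of how this file could be read to list the candidate authors and the unknown documents of one problem.

import json
from pathlib import Path

problem_dir = Path("problem00001")  # example path; adjust to your local copy

# Read the list of candidate authors and the name of the unknown-documents folder.
with open(problem_dir / "problem-info.json", encoding="utf-8") as f:
    problem_info = json.load(f)

candidates = [c["author-name"] for c in problem_info["candidate-authors"]]
unknown_dir = problem_dir / problem_info["unknown-folder"]
unknown_texts = sorted(unknown_dir.glob("*.txt"))

print(len(candidates), "candidate authors,", len(unknown_texts), "unknown documents")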

The fanfics of known authorship belong to several fandoms (excluding the target fandom). The file fandom-info.json, also found in the main folder of each problem, provides the fandom of each fanfic of known authorship, as follows.

[
    { "author-name": "candidate00001",
      "known-text": "known00001.txt",
      "fandom": "fandom00210"},
    { "author-name": "candidate00001",
      "known-text": "known00002.txt",
      "fandom": "fandom00051"},
    ...
]
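
As a further illustration, the known texts can be grouped per candidate author with the following sketch; it assumes (as suggested by problem-info.json above) that each candidate's known fanfics are stored in a folder named after that candidate.

import json
from collections import defaultdict
from pathlib import Path

problem_dir = Path("problem00001")  # example path

with open(problem_dir / "fandom-info.json", encoding="utf-8") as f:
    fandom_info = json.load(f)

# Map each candidate author to (path of known text, fandom) pairs.
known_per_author = defaultdict(list)
for entry in fandom_info:
    text_path = problem_dir / entry["author-name"] / entry["known-text"]
    known_per_author[entry["author-name"]].append((text_path, entry["fandom"]))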

The true author of each unknown document can be seen in the file ground-truth.json, also found in the main folder of each problem. Note that all unknown documents that are not written by any of the candidate authors belong to the <UNK> class.

{
    "ground_truth": [
        { "unknown-text": "unknown00001.txt",
          "true-author": "candidate00002" },
        { "unknown-text": "unknown00002.txt",
          "true-author": "<UNK>"},
        ...
    ]
}
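
During development, this file can be loaded to score your own predictions; a minimal sketch in Python with a hypothetical path:

import json
from pathlib import Path

problem_dir = Path("problem00001")  # example path

# Map each unknown file name to its true author (or "<UNK>").
with open(problem_dir / "ground-truth.json", encoding="utf-8") as f:
    ground_truth = {entry["unknown-text"]: entry["true-author"]
                    for entry in json.load(f)["ground_truth"]}

# e.g. ground_truth["unknown00002.txt"] == "<UNK>"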

In addition, the file collection-info.json includes all the information needed to handle a collection of such problems. In more detail, for each problem it lists its main folder, the language (either "en", "fr", "it", or "sp"), and the encoding (always UTF-8) of its documents.

[
    { "problem-name": "problem00001",
      "language": "en",
      "encoding": "UTF-8" },
    { "problem-name": "problem00002",
      "language": "fr",
      "encoding": "UTF-8" },
    ...
]
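
A sketch of how a system could iterate over all problems of such a collection (again Python with a hypothetical path):

import json
from pathlib import Path

collection_dir = Path("pan19-development-corpus")  # example path

with open(collection_dir / "collection-info.json", encoding="utf-8") as f:
    collection = json.load(f)

for problem in collection:
    problem_dir = collection_dir / problem["problem-name"]
    language = problem["language"]   # "en", "fr", "it", or "sp"
    encoding = problem["encoding"]   # always "UTF-8"
    # ... solve the attribution problem located in problem_dir ...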

Baselines

We provide implementations of baseline methods that can help you estimate the efficacy of your approach. You may also start from a baseline method and attempt to improve it for this specific task. The following baselines are available:

  • BASELINE-SVM: This is a language-independent authorship attribution approach based on a character 3-gram representation and a linear SVM classifier with a reject option. It estimates the probabilities of the output classes and assigns an unknown document to the <UNK> class when the difference between the probabilities of the top two candidates is less than a threshold (a rough sketch of this kind of approach is given after this list).

  • BASELINE-COMPRESSOR: Another language-independent approach that uses text compression to estimate the distance of an unknown document to each of the candidate authors. It assigns an unknown document to the <UNK> class when the difference between the two most likely candidates is lower than a threshold.

  • BASELINE-IMPOSTERS: This baseline offers an implementation of the language-independent "imposters" approach for authorship verification (Koppel & Winter, 2014), based on character tetragram features. During a bootstrapped procedure, the technique iteratively compares an unknown text to each candidate author's stylistic profile, as well as to a set of imposter documents, on the basis of a random feature set. If the highest ranking candidate author does not pass a fixed similarity threshold after this procedure, the document is assigned to the <UNK> class and left unattributed.

    We also provide a set of imposter documents required by this baseline approach for each of the four languages. This is a password-protected file (use the same password as for the development corpus).
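
To make the reject-option idea concrete, the following is a minimal sketch of a character 3-gram plus linear SVM approach, assuming Python with scikit-learn and NumPy. It is not the official baseline code; the function name and the threshold value are only examples.

import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def attribute_with_reject(known_texts, known_authors, unknown_texts, threshold=0.1):
    # Character 3-gram tf-idf representation of all documents.
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 3), sublinear_tf=True)
    X_train = vectorizer.fit_transform(known_texts)
    X_test = vectorizer.transform(unknown_texts)

    # Calibrate a linear SVM so that class probabilities are available
    # (cv=3 keeps the folds small, since few known texts per author exist).
    classifier = CalibratedClassifierCV(LinearSVC(), cv=3)
    classifier.fit(X_train, known_authors)
    probabilities = classifier.predict_proba(X_test)

    predictions = []
    for row in probabilities:
        order = np.argsort(row)[::-1]
        # Reject (assign <UNK>) when the top two candidates are too close.
        if row[order[0]] - row[order[1]] < threshold:
            predictions.append("<UNK>")
        else:
            predictions.append(classifier.classes_[order[0]])
    return predictions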

Evaluation Phase

Once you have finished tuning your approach to achieve satisfactory performance on the development corpus, your software will be tested on the evaluation corpus. During the competition, the evaluation corpus will not be released publicly. Instead, we ask you to submit your software for evaluation at our site, as described below.

After the competition, the evaluation corpus, including the ground truth data, will become available. This way, you will have everything you need to evaluate your approach on your own while remaining comparable to those who took part in the competition.

Output

Your system should produce one JSON output file for each authorship attribution problem. The output files should be named answers-PROBLEMNAME.json (e.g., answers-problem00001.json, answers-problem00002.json) and list the unknown documents together with their predicted authors:

[
    { "unknown-text": "unknown00001.txt",
      "predicted-author": "candidate00003" },
    { "unknown-text": "unknown00002.txt",
      "predicted-author": "<UNK>" },
    ...
]
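
For example, given a dictionary mapping each unknown file name to a predicted author (or "<UNK>"), such a file could be produced with the following sketch in Python; the function and variable names are placeholders.

import json
from pathlib import Path

def write_answers(output_dir, problem_name, predictions):
    # predictions: dict mapping unknown file name -> predicted author or "<UNK>".
    answers = [{"unknown-text": name, "predicted-author": author}
               for name, author in sorted(predictions.items())]
    out_path = Path(output_dir) / ("answers-" + problem_name + ".json")
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(answers, f, indent=4)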

Performance Measures

The submissions will be evaluated on each attribution problem separately based on their open-set macro-averaged F1 score, calculated over the candidate-author classes only (i.e., the <UNK> class is excluded) (Mendes et al. 2017). Participants will be ranked according to their average open-set macro-F1 across all attribution problems of the evaluation corpus.

We provide you with a Python script that calculates the open-set macro-F1 for a collection of attribution problems.
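
The official script should be used for comparability; still, the measure can be sketched as follows, assuming scikit-learn, where y_true and y_pred are the true and predicted labels of one problem's unknown documents and candidate_authors is the list of candidate author names (without <UNK>).

from sklearn.metrics import f1_score

def open_set_macro_f1(y_true, y_pred, candidate_authors):
    # Macro-average F1 over the candidate-author classes only. <UNK> is not a
    # class of its own, but attributing an <UNK> document to a candidate still
    # counts as a false positive, and predicting <UNK> for a document written
    # by a candidate counts as a false negative for that candidate.
    return f1_score(y_true, y_pred, labels=candidate_authors, average="macro")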

Submission

We ask you to prepare your software so that it can be executed via command line calls. The command shall take as input (i) an absolute path to the directory of the evaluation corpus and (ii) an absolute path to an existing empty output directory:

mySoftware -i EVALUATION-DIRECTORY -o OUTPUT-DIRECTORY

Within EVALUATION-DIRECTORY, you will find a collection-info.json file and one folder per attribution problem (structured like the development corpus described above). For each attribution problem, the corresponding output file should be written to OUTPUT-DIRECTORY.

Note: Each attribution problem should be solved independently of other problems in the collection.
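
A minimal command-line skeleton following this calling convention might look as follows. It is only a sketch in Python, and solve_problem is a hypothetical placeholder (here a stub that assigns every unknown document to <UNK>) that you would replace with your own attribution method.

import argparse
import json
from pathlib import Path

def solve_problem(problem_dir, language):
    # Placeholder: replace with your own attribution method. This stub simply
    # assigns every unknown document to the <UNK> class.
    with open(problem_dir / "problem-info.json", encoding="utf-8") as f:
        info = json.load(f)
    unknown_dir = problem_dir / info["unknown-folder"]
    return {path.name: "<UNK>" for path in sorted(unknown_dir.glob("*.txt"))}

def main():
    parser = argparse.ArgumentParser(description="Cross-domain authorship attribution")
    parser.add_argument("-i", required=True, help="absolute path to the evaluation corpus")
    parser.add_argument("-o", required=True, help="absolute path to an empty output directory")
    args = parser.parse_args()

    corpus_dir, output_dir = Path(args.i), Path(args.o)
    with open(corpus_dir / "collection-info.json", encoding="utf-8") as f:
        collection = json.load(f)

    # Each problem is solved independently of the others in the collection.
    for problem in collection:
        problem_dir = corpus_dir / problem["problem-name"]
        predictions = solve_problem(problem_dir, problem["language"])
        answers = [{"unknown-text": name, "predicted-author": author}
                   for name, author in sorted(predictions.items())]
        out_path = output_dir / ("answers-" + problem["problem-name"] + ".json")
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(answers, f, indent=4)

if __name__ == "__main__":
    main()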

You can choose freely among the available programming languages and among the operating systems Microsoft Windows and Ubuntu. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via ssh and via remote desktop. More information about how to access the virtual machines can be found in the user guide linked above.

Once your software is deployed on your virtual machine, we ask you to access TIRA at www.tira.io, where you can self-evaluate your software on the test data.

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Task Committee