Author Identification
2018

This task is divided into cross-domain authorship attribution and style change detection. You can choose to solve one or both of them.

Cross-domain Authorship Attribution

Authorship attribution is an important problem in information retrieval and computational linguistics, but also in applied areas such as law and journalism, where knowing the author of a document (such as a ransom note) may enable, e.g., law enforcement to save lives. The most common framework for testing candidate algorithms is the closed-set attribution task: given a sample of reference documents from a restricted and finite set of candidate authors, determine the most likely author of a previously unseen document of unknown authorship. This task can be quite challenging when the documents of known and unknown authorship come from different domains (e.g., thematic area, genre).

In this edition of PAN, for the first time, we focus on cross-domain attribution applied to fanfiction. Fanfiction refers to fictional forms of literature which are nowadays produced by admirers ('fans') of a certain author (e.g., J.K. Rowling), novel ('Pride and Prejudice'), TV series (Sherlock Holmes), etc. The fans borrow heavily from the original work's theme, atmosphere, style, characters, story world, etc. to produce new fictional literature, the so-called fanfics. This is why fanfiction is also known as transformative literature; it has generated a number of controversies in recent years related to the intellectual property rights of the original authors (cf. plagiarism). Fanfiction, however, is typically produced by fans without any explicit commercial goals. The publication of fanfics typically happens online, on informal community platforms that are dedicated to making such literature accessible to a wider audience (e.g., fanfiction.net). The original work of art or genre is typically referred to as a fandom.

The cross-domain attribution task in this edition of PAN can be more accurately described as cross-fandom attribution in fanfiction. In more detail, all documents of unknown authorship are fanfics of the same fandom (the target fandom), while the documents of known authorship by the candidate authors are fanfics of several fandoms (other than the target fandom).

Task
Given a set of documents (known fanfics) by a small number (up to 20) of candidate authors, identify the authors of another set of documents (unknown fanfics). Each candidate author has contributed at least one of the unknown fanfics, which all belong to the same target fandom. The known fanfics belong to several fandoms (excluding the target fandom), although not necessarily the same fandoms for all candidate authors. An equal number of known fanfics per candidate author is provided. In contrast, the unknown fanfics are not equally distributed over the authors. The text length of the fanfics varies from 500 to 1,000 tokens. All documents are in the same language, which may be English, French, Italian, Polish, or Spanish.
Development Phase

To develop your software, we provide you with a corpus that has highly similar characteristics to the evaluation corpus. It comprises a set of cross-domain authorship attribution problems in each of the following five languages: English, French, Italian, Polish, and Spanish. Note that we specifically avoid using the term 'training corpus' because the sets of candidate authors in the development and evaluation corpora do not overlap. Therefore, your approach should not be designed to specifically handle the candidate authors of the development corpus.

Each problem consists of a set of known fanfics by each candidate author and a set of unknown fanfics, located in separate folders. The file problem-info.json, found in the main folder of each problem, gives the name of the folder of unknown documents and the list of candidate-author folder names:


{
    "unknown-folder": "unknown",
    "candidate-authors": [
        { "author-name": "candidate00001" },
        { "author-name": "candidate00002" },
        ...
    ]
}

The true author of each unknown document can be seen in the file ground-truth.json, also found in the main folder of each problem.

In addition, to handle a collection of such problems, the file collection-info.json includes all relevant information. In more detail, for each problem it lists its main folder, the language (either "en", "fr", "it", "pl", or "sp"), and the encoding (always UTF-8) of its documents:


[  { "problem-name": "problem00001", 
     "language": "en", 
     "encoding": "UTF-8" },
   { "problem-name": "problem00002", 
     "language": "fr", 
     "encoding": "UTF-8" },
	  ...
]
	

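For illustration only, the two JSON files above could be read as follows; this is a sketch of one possible loading routine (not part of the provided material) and assumes that the fanfics are stored as plain .txt files in the respective folders:

import glob
import json
import os

def load_collection(collection_dir):
    # Read the list of problems (name, language, encoding) of the whole collection.
    with open(os.path.join(collection_dir, 'collection-info.json'), encoding='utf-8') as f:
        collection = json.load(f)
    problems = []
    for entry in collection:
        problem_dir = os.path.join(collection_dir, entry['problem-name'])
        with open(os.path.join(problem_dir, 'problem-info.json'), encoding='utf-8') as f:
            info = json.load(f)
        # Known fanfics: one folder per candidate author.
        known = {}
        for cand in info['candidate-authors']:
            author = cand['author-name']
            paths = sorted(glob.glob(os.path.join(problem_dir, author, '*.txt')))
            known[author] = [open(p, encoding=entry['encoding']).read() for p in paths]
        # Unknown fanfics: all documents in the unknown folder.
        unknown_paths = sorted(glob.glob(os.path.join(problem_dir, info['unknown-folder'], '*.txt')))
        unknown = {os.path.basename(p): open(p, encoding=entry['encoding']).read()
                   for p in unknown_paths}
        problems.append((entry['problem-name'], entry['language'], known, unknown))
    return problems
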
Download corpus

This is a password-protected file. To obtain the password, first register for the author identification task at PAN-2018, and then notify PAN organizers.

In addition, a language-independent baseline approach based on a character n-gram representation and a linear SVM classifier is provided.

Download baseline
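
Purely as an illustration of this kind of approach (the sketch below is not the official baseline, and all parameter choices are assumptions), a character n-gram representation combined with a linear SVM could be set up with scikit-learn as follows:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def attribute(known, unknown):
    # known: dict mapping candidate author -> list of known fanfics (texts)
    # unknown: dict mapping file name -> unknown fanfic (text)
    # Returns a dict mapping file name -> predicted candidate author.
    train_texts, train_labels = [], []
    for author, texts in known.items():
        train_texts.extend(texts)
        train_labels.extend([author] * len(texts))
    # Character 3-grams as a simple, language-independent representation.
    vectorizer = TfidfVectorizer(analyzer='char', ngram_range=(3, 3), lowercase=False)
    clf = LinearSVC()
    clf.fit(vectorizer.fit_transform(train_texts), train_labels)
    names = sorted(unknown)
    predictions = clf.predict(vectorizer.transform([unknown[n] for n in names]))
    return dict(zip(names, predictions))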

Evaluation Phase
Once you have finished tuning your approach to achieve satisfying performance on the development corpus, your software will be tested on the evaluation corpus. During the competition, the evaluation corpus will not be released publicly. Instead, we ask you to submit your software for evaluation at our site, as described below.

After the competition, the evaluation corpus, including the ground truth data, will become available. This way, you have everything you need to evaluate your approach on your own while remaining comparable to those who took part in the competition.
Output

Your system should produce one JSON output file for each authorship attribution problem. The output files should be named answers-PROBLEMNAME.json (e.g., answers-problem00001.json, answers-problem00002.json) and list the unknown documents together with their predicted authors:

[
    { "unknown-text": "unknown00001.txt",
      "predicted-author": "candidate00003" },
    { "unknown-text": "unknown00002.txt",
      "predicted-author": "candidate00005" },
    ...
]
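
Assuming your predictions are held in a dictionary that maps unknown file names to candidate authors, such an answers file could be written along these lines (a sketch, not a required implementation):

import json
import os

def write_answers(output_dir, problem_name, predictions):
    # predictions: dict mapping unknown file name -> predicted candidate author
    answers = [{'unknown-text': name, 'predicted-author': author}
               for name, author in sorted(predictions.items())]
    path = os.path.join(output_dir, 'answers-{}.json'.format(problem_name))
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(answers, f, indent=4)
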
Performance Measures

Submissions will be evaluated on each attribution problem separately based on their macro-averaged F1 score. Participants will be ranked according to their average macro-F1 across all attribution problems of the evaluation corpus.

We provide you with a Python script that calculates macro-F1 (and optionally the confusion matrix) of a single attribution problem:

Download evaluation script

and another Python script that calculates macro-F1 for a collection of attribution problems:

Download evaluation script
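
For orientation, macro-averaged F1 for a single problem can also be computed with scikit-learn, as in the following illustrative snippet (the official scripts above remain the reference implementation):

from sklearn.metrics import f1_score

# Illustrative ground truth and predictions for one attribution problem.
true_authors = ['candidate00001', 'candidate00002', 'candidate00001', 'candidate00003']
predicted_authors = ['candidate00001', 'candidate00002', 'candidate00003', 'candidate00003']
print(f1_score(true_authors, predicted_authors, average='macro'))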

Submission

We ask you to prepare your software so that it can be executed via command line calls. The command shall take as input (i) an absolute path to the directory of the evaluation corpus and (ii) an absolute path to an existing empty output directory:

> mySoftware -i EVALUATION-DIRECTORY -o OUTPUT-DIRECTORY
	

Within EVALUATION-DIRECTORY, you will find a collection-info.json file and a number of folders, one for each attribution problem (similar to the development corpus described above). For each attribution problem, the corresponding output file should be written to OUTPUT-DIRECTORY.

Note: Each attribution problem should be solved independently of other problems in the collection.
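
A minimal command-line skeleton that follows this calling convention could look as follows; solve_problem is a placeholder for your own attribution method and is not part of any provided code:

import argparse
import json
import os

def solve_problem(problem_dir):
    # Placeholder: replace with your attribution method. It should return a dict
    # mapping each unknown file name to a predicted candidate author.
    return {}

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', dest='input_dir', required=True)   # EVALUATION-DIRECTORY
    parser.add_argument('-o', dest='output_dir', required=True)  # OUTPUT-DIRECTORY
    args = parser.parse_args()
    with open(os.path.join(args.input_dir, 'collection-info.json'), encoding='utf-8') as f:
        collection = json.load(f)
    for entry in collection:
        # Each attribution problem is solved independently of the others.
        predictions = solve_problem(os.path.join(args.input_dir, entry['problem-name']))
        answers = [{'unknown-text': n, 'predicted-author': a}
                   for n, a in sorted(predictions.items())]
        out_path = os.path.join(args.output_dir, 'answers-{}.json'.format(entry['problem-name']))
        with open(out_path, 'w', encoding='utf-8') as f:
            json.dump(answers, f, indent=4)

if __name__ == '__main__':
    main()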

You can choose freely among the available programming languages and among the operating systems Microsoft Windows and Ubuntu. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via ssh and via remote desktop. More information about how to access the virtual machines can be found in the user guide below:

PAN Virtual Machine User Guide »

Once your software is deployed in your virtual machine, we ask you to access TIRA at www.tira.io, where you can self-evaluate your software on the test data.

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Task Committee

Mike Kestemont, University of Antwerp
Efstathios Stamatatos, University of the Aegean
Walter Daelemans, University of Antwerp
Martin Potthast, Bauhaus-Universität Weimar
Benno Stein, Bauhaus-Universität Weimar

Style Change Detection

While many approaches target the problem of identifying the authors of whole documents, research on investigating multi-authored documents is sparse. In the last two PAN editions we therefore aimed to narrow this gap, first by proposing a task to cluster text within documents by author (Author Diarization, 2016). The follow-up task relaxed the problem (Style Breach Detection, 2017) and focused only on identifying style breaches, i.e., finding text positions where the authorship and thus the style changes. Nevertheless, the participants' results revealed relatively low accuracies and indicated that this task is still too hard to tackle.

Consequently, this year we propose a substantially simplified task that is still a continuation of last year's task: the only question participants should answer is whether or not there exists a style change in a given document. Furthermore, we changed the name to Style Change Detection in order to reflect the task more intuitively.

Given a document, participants should thus apply intrinsic analyses to decide whether the document was written by one or by more authors, i.e., whether there are style changes. While the preceding task demanded that the exact positions of such changes be located, this year we only ask for a binary answer per document:
  • yes: the document contains at least one style change (is written by at least two authors)
  • no: the document has no style changes (is written by a single author)

In this sense, it is irrelevant to identify the number of style changes or their specific positions, or to build clusters of authors. You may adapt existing algorithms for other problem types such as intrinsic plagiarism detection or text segmentation. For example, if you already have an intrinsic plagiarism detection system, you can apply your method to this task by outputting yes if you found a plagiarism case and no otherwise (please note that intrinsic plagiarism detection methods may need adaptations, as they are naturally not designed to handle uniformly distributed author texts).
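
To make this concrete, the following sketch shows how such an adaptation might look; detect_suspicious_segments is a hypothetical stand-in for an existing intrinsic analysis and is not provided:

def detect_suspicious_segments(text):
    # Hypothetical placeholder for an existing intrinsic analysis,
    # e.g. an intrinsic plagiarism detector returning suspicious segments.
    return []

def has_style_change(text):
    # Binary decision required by this task: True if there is at least one
    # style change (at least two authors), False otherwise.
    return len(detect_suspicious_segments(text)) > 0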

The following figure illustrates some possible scenarios and the expected output:

Task

Given a document, determine whether it contains style changes or not, i.e., if it was written by a single or multiple authors.

All documents are provided in English and may contain anywhere from zero to arbitrarily many style changes.

Development Phase

To develop your algorithms, a training data set including the corresponding solutions will be provided. Moreover, for your convenience, the training data set also contains the exact locations of all style changes, as these may be helpful when developing your algorithms.

The data set will be provided soon.

For each problem instance X, two files are provided:

  • problem-X.txt contains the actual text
  • problem-X.truth contains the ground truth, i.e., the correct solution in JSON format:
    {
        "changes": true/false,
        "positions": [
            character_position_change_1,
            character_position_change_2,
            …
        ]
    }
    

    If present, the absolute character position of the first non-whitespace character of each new segment is provided in positions. Please note that this information is provided only for development purposes and is not used for the evaluation.
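
For development, a problem instance and its ground truth could be loaded along these lines (a sketch that assumes UTF-8 encoded files and the naming scheme described above):

import glob
import json
import os

def load_training_instances(train_dir):
    # Yields (problem name, text, ground truth) for every problem-X in train_dir.
    for txt_path in sorted(glob.glob(os.path.join(train_dir, 'problem-*.txt'))):
        name = os.path.splitext(os.path.basename(txt_path))[0]
        with open(txt_path, encoding='utf-8') as f:
            text = f.read()
        with open(os.path.join(train_dir, name + '.truth'), encoding='utf-8') as f:
            truth = json.load(f)   # {"changes": ..., "positions": [...]}
        yield name, text, truth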

Evaluation Phase
Once you have finished tuning your approach to achieve satisfying performance on the training corpus, your software will be tested on the evaluation corpus. During the competition, the evaluation corpus will not be released publicly. Instead, we ask you to submit your software for evaluation at our site, as described below.

After the competition, the evaluation corpus, including the ground truth data, will become available. This way, you have everything you need to evaluate your approach on your own while remaining comparable to those who took part in the competition.
Output

In general, the data structure during the evaluation phase will be similar to that of the training phase, except that the ground truth files are missing. Thus, for each given problem problem-X.txt, your software should output the missing solution file problem-X.truth. The output should be a JSON object consisting of a single property:

{
    "changes": true/false
}
Output "changes" : true if there are style changes in the document, and "changes" : false otherwise.

Performance Measures

The performance of the approaches will be measured and ranked by accuracy.
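
In other words, the score is the fraction of documents for which the predicted changes value matches the ground truth, e.g.:

# Illustrative example: ground truth and predictions for four documents.
truth = [True, False, True, True]
predictions = [True, False, False, True]
accuracy = sum(t == p for t, p in zip(truth, predictions)) / len(truth)
print(accuracy)  # 0.75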

Submission

We ask you to prepare your software so that it can be executed via command line calls. The command shall take as input (i) an absolute path to the directory of the evaluation corpus and (ii) an absolute path to an empty output directory:

> mySoftware -i EVALUATION-DIRECTORY -o OUTPUT-DIRECTORY

Within EVALUATION-DIRECTORY, you will find a list of problem instances, i.e., [filename].txt files. For each problem instance, you should produce the solution file [filename].truth in OUTPUT-DIRECTORY. For instance, you read EVALUATION-DIRECTORY/problem-12.txt, process it, and write your results to OUTPUT-DIRECTORY/problem-12.truth.
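
A minimal skeleton following this convention could look as follows; has_style_change is a placeholder for your own analysis (e.g., as sketched earlier) and is not provided:

import argparse
import glob
import json
import os

def has_style_change(text):
    # Placeholder: replace with your own intrinsic analysis.
    return False

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', dest='input_dir', required=True)   # EVALUATION-DIRECTORY
    parser.add_argument('-o', dest='output_dir', required=True)  # OUTPUT-DIRECTORY
    args = parser.parse_args()
    for txt_path in sorted(glob.glob(os.path.join(args.input_dir, '*.txt'))):
        with open(txt_path, encoding='utf-8') as f:
            text = f.read()
        name = os.path.splitext(os.path.basename(txt_path))[0]
        with open(os.path.join(args.output_dir, name + '.truth'), 'w', encoding='utf-8') as f:
            json.dump({'changes': has_style_change(text)}, f)

if __name__ == '__main__':
    main()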

You can choose freely among the available programming languages and among the operating systems Microsoft Windows and Ubuntu. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via ssh and via remote desktop. More information about how to access the virtual machines can be found in the user guide below:

PAN Virtual Machine User Guide »

Once your software is deployed in your virtual machine, we ask you to access TIRA at www.tira.io, where you can self-evaluate your software on the test data.

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Task Committee

Michael Tschuggnall, University of Innsbruck
Günther Specht, University of Innsbruck
Martin Potthast, Bauhaus-Universität Weimar
Benno Stein, Bauhaus-Universität Weimar