Synopsis

  • Task: Given two documents, are they written by the same author?
  • Input: [data]
  • Evaluation: [code]
  • Submission: [submit]

Introduction

Authorship attribution is an important problem in many areas, including information retrieval and computational linguistics, but also in applied areas such as law and journalism, where knowing the author of a document (such as a ransom note) may save lives. The most common framework for testing candidate algorithms is a text classification problem: given known sample documents from a small, finite set of candidate authors, which of them, if any, wrote a questioned document of unknown authorship? It has been commented, however, that this may be an unreasonably easy task. A more demanding problem is author verification: given a set of documents by a single author and a questioned document, determine whether the questioned document was written by that particular author or not. This may more accurately reflect the experience of professional forensic linguists, who are often called upon to answer this kind of question.

This is the third year that PAN focuses on the author verification problem. The major difference from previous PAN editions is that this year we no longer consider cases where all texts within a verification problem are in the same genre or the same thematic area. We are focusing on cross-genre and cross-topic author verification, a more challenging version of the problem that better resembles real-world applications.

A note to forensic linguists: In order to bridge the gap between linguistics and computer science, we strongly encourage submissions from researchers in both fields. We understand that research groups with expertise in linguistics use manual or semi-automated methods and are therefore not able to submit their software. To enable their participation, we will provide them with the opportunity to analyze the test corpus after the deadline of software submission (mid-April). Their results will be ranked in a separate list alongside the performance of the software submissions, and they will be entitled to describe their approach in a paper. Any scholar or research group with expertise in linguistics wishing to participate in this framework should contact the Task Chair.

Task

Given a small set (no more than five, possibly as few as one) of "known" documents by a single person and a "questioned" document, the task is to determine whether the questioned document was written by the same person who wrote the known document set. The genre and/or topic may differ significantly between the known and questioned documents.

Data

To develop your software, we provide you with a training corpus that comprises a set of author verification problems in several languages/genres. Each problem consists of some (up to five) known documents by a single person and exactly one questioned document. All documents within a single problem instance will be in the same language. However, their genre and/or topic may differ significantly. The document lengths vary from a few hundred to a few thousand words.

The documents of each problem are located in a separate folder, the name of which (the problem ID) encodes the language of the documents. The following table shows the available sub-corpora, including their language, type (cross-genre or cross-topic), code, and examples of problem IDs:

Language   Type          Code   Problem IDs
Dutch      Cross-genre   DU     DU001, DU002, DU003, etc.
English    Cross-topic   EN     EN001, EN002, EN003, etc.
Greek      Cross-topic   GR     GR001, GR002, GR003, etc.
Spanish    Cross-genre   SP     SP001, SP002, SP003, etc.
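For illustration, here is a minimal sketch (in Python) of how the training corpus layout described above could be traversed, grouping problem folders by their language code; the corpus path argument and function name are placeholders, not part of the official material:

import os

def group_problems_by_language(corpus_dir):
    """Group problem folders (DU001, EN001, ...) by their two-letter language code."""
    groups = {}
    for name in sorted(os.listdir(corpus_dir)):
        if os.path.isdir(os.path.join(corpus_dir, name)):
            code = name[:2]  # DU, EN, GR or SP, as in the table above
            groups.setdefault(code, []).append(name)
    return groups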

The ground truth data of the training corpus, found in the file truth.txt, include one line per problem with the problem ID and the correct binary answer (Y means the known and the questioned documents are by the same author, N means the opposite). For example:

EN001 N
EN002 Y
EN003 N
...
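A minimal sketch (in Python) for reading truth.txt into a dictionary, following the format shown above; the function name and path argument are placeholders:

def load_truth(truth_path):
    """Read truth.txt: one 'PROBLEM_ID Y/N' pair per line."""
    truth = {}
    with open(truth_path, encoding="utf-8-sig") as f:  # tolerates a leading BOM, if any
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                problem_id, answer = parts
                truth[problem_id] = (answer == "Y")  # True: same author
    return truth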

Output

Your software must take as input the absolute path to a set of problems. For each problem there is a separate sub-folder within that path containing the set of known documents and the single unknown document of that problem (similarly to the training corpus). The software has to output a single text file, answers.txt, with the produced answers for the whole set of evaluation problems. Each line of this file corresponds to one problem instance: it starts with the problem ID, followed by a score, a real number in [0,1] corresponding to the probability of a positive answer. That is, 0 means it is absolutely certain the questioned document is not by the author of the known documents, 1 means it is absolutely certain the questioned document and the known documents are by the same author, and 0.5 means that a positive and a negative answer are equally likely. The probability scores should be rounded to three decimal digits. Use a single whitespace character to separate the problem ID and the probability score.
For example, an answers.txt file may look like this:
EN001 0.031
EN002 0.874
EN003 0.500
...
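The following sketch (in Python) illustrates producing answers.txt in the required format; score_fn stands for your own verification method (a placeholder that maps a problem folder to a probability in [0,1]) and is not part of the task material:

import os

def write_answers(problems_dir, output_path, score_fn):
    """Write 'PROBLEM_ID score' lines, one per problem sub-folder."""
    lines = []
    for problem_id in sorted(os.listdir(problems_dir)):
        problem_path = os.path.join(problems_dir, problem_id)
        if not os.path.isdir(problem_path):
            continue
        score = score_fn(problem_path)  # probability of a positive answer
        lines.append("%s %.3f" % (problem_id, score))  # rounded to three decimals
    with open(output_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")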

Evaluation

Once you have finished tuning your approach to achieve satisfactory performance on the training corpus, your software will be tested on the evaluation corpus. During the competition, the evaluation corpus will not be released publicly. Instead, we ask you to submit your software for evaluation at our site as described below.

After the competition, the evaluation corpus, including the ground truth data, will become available. This way, you will have everything you need to evaluate your approach on your own, while remaining comparable to those who took part in the competition.

For your convenience, we provide an evaluator script written in Octave. It takes three parameters: (-i) an input directory (the data set, including a 'truth' folder), (-a) an answers directory (your software output), and (-o) an output directory where the evaluation results are written. Of course, you are free to modify the script according to your needs.

The participants' answers will be evaluated according to the area under the ROC curve (AUC) of their probability scores.

In addition, the performance of the binary classification results will be measured based on c@1 (Peñas & Rodrigo, 2011). Binary answers are extracted automatically from the probability scores: every score greater than 0.5 corresponds to a positive answer, every score lower than 0.5 to a negative answer, and a score of exactly 0.5 to an unanswered problem (an "I don't know" answer). The measure is defined as:

  • c@1 = (1/n)*(nc+(nu*nc/n))

where:

  • n = #problems
  • nc = #correct_answers
  • nu = #unanswered_problems

Note: when positive/negative answers are provided for all available problems (i.e., all probability scores differ from 0.5), c@1 equals accuracy. However, c@1 rewards approaches that maintain the same number of correct answers while decreasing the number of incorrect answers by leaving some problems unanswered (probability score equal to 0.5).

The final ranking of the participants will be based on the product of AUC and c@1.
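To estimate your performance locally, the following sketch (in Python) computes the measures described above from two dictionaries keyed by problem ID: gold labels (True/False for same author) and submitted probability scores. It uses scikit-learn only for the ROC AUC computation and is not the official evaluator (that is the Octave script mentioned above):

from sklearn.metrics import roc_auc_score

def evaluate(truth, scores):
    """Return (AUC, c@1, AUC * c@1) for the given gold labels and scores."""
    ids = sorted(truth)
    y_true = [1 if truth[i] else 0 for i in ids]
    y_score = [scores[i] for i in ids]

    auc = roc_auc_score(y_true, y_score)

    n = len(ids)
    # A score of exactly 0.5 counts as an unanswered problem.
    nu = sum(1 for i in ids if scores[i] == 0.5)
    nc = sum(1 for i in ids
             if scores[i] != 0.5 and (scores[i] > 0.5) == truth[i])
    c_at_1 = (nc + nu * nc / n) / n  # c@1 = (1/n) * (nc + nu*nc/n)

    return auc, c_at_1, auc * c_at_1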

Task Committee