Author Identification
2014

Authorship attribution is an important problem in many areas, including information retrieval and computational linguistics, but also in applied areas such as law and journalism, where knowing the author of a document (such as a ransom note) may save lives. The most common framework for testing candidate algorithms is a text classification problem: given known sample documents from a small, finite set of candidate authors, which of them, if any, wrote a questioned document of unknown authorship? It has been commented, however, that this may be an unreasonably easy task. A more demanding problem is author verification: given a set of documents by a single author and a questioned document, determine whether the questioned document was written by that particular author or not. This may more accurately reflect the experience of professional forensic linguists, who are often called upon to answer this kind of question. This is the second year that PAN focuses on the author verification problem.

A note to forensic linguists: In order to bridge the gap between linguistics and computer science, we strongly encourage submissions from researchers from both fields. We understand that research groups with expertise in linguistics use manual or semi-automated methods and are therefore not able to submit their software. To enable their participation, we will provide them with the opportunity to analyze the test corpus after the deadline of software submission (mid-April). Their results will be ranked in a separate list with respect to the performance of the software submissions, and they will be entitled to describe their approach in a paper. In this framework, any scholar or research group with expertise in linguistics wishing to participate should contact the Task Chair.

Task
Given a small set (no more than 5, possibly as few as one) of "known" documents by a single person and a "questioned" document, the task is to determine whether the questioned document was written by the same person who wrote the known document set.
Training Corpus

To develop your software, we provide you with a training corpus that comprises a set of author verification problems in several languages/genres. Each problem consists of some (up to five) known documents by a single person and exactly one questioned document. All documents within a single problem instance are in the same language, and best efforts have been made to ensure that the documents within a problem are matched for genre, register, theme, and date of writing. The document lengths vary from a few hundred to a few thousand words.

Download corpus (updated April 22, 2014)

The documents of each problem are located in a separate folder, the name of which (problem ID) encodes the language/genre of the documents. The following list shows the available languages/genres, their codes, and examples of problem IDs:

Language  Genre     Code  Problem IDs
Dutch     essays    DE    DE001, DE002, DE003, etc.
Dutch     reviews   DR    DR001, DR002, DR003, etc.
English   essays    EE    EE001, EE002, EE003, etc.
English   novels    EN    EN001, EN002, EN003, etc.
Greek     articles  GR    GR001, GR002, GR003, etc.
Spanish   articles  SP    SP001, SP002, SP003, etc.

The ground truth data of the training corpus, found in the file truth.txt, includes one line per problem: the problem ID followed by the correct binary answer (Y means the known and the questioned documents are by the same author; N means the opposite). For example:

EN001 N
EN002 Y
EN003 N
...
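
To make the corpus layout concrete, the following minimal Python sketch loads such a training corpus and parses truth.txt. The file names known*.txt and unknown.txt inside each problem folder are an assumption for illustration; only the folder-per-problem layout and the truth.txt format are specified above.

import glob
import os


def load_corpus(corpus_dir):
    """Return {problem_id: (known_texts, unknown_text)} for every problem folder."""
    problems = {}
    for problem_dir in sorted(glob.glob(os.path.join(corpus_dir, "*"))):
        if not os.path.isdir(problem_dir):
            continue  # skip truth.txt and any other plain files
        problem_id = os.path.basename(problem_dir)
        known = []
        # Assumed file naming: known01.txt, known02.txt, ... and unknown.txt
        for path in sorted(glob.glob(os.path.join(problem_dir, "known*.txt"))):
            with open(path, encoding="utf-8") as f:
                known.append(f.read())
        with open(os.path.join(problem_dir, "unknown.txt"), encoding="utf-8") as f:
            unknown = f.read()
        problems[problem_id] = (known, unknown)
    return problems


def load_truth(corpus_dir):
    """Parse truth.txt into {problem_id: True (same author) / False}."""
    truth = {}
    with open(os.path.join(corpus_dir, "truth.txt"), encoding="utf-8") as f:
        for line in f:
            if line.strip():
                problem_id, answer = line.split()
                truth[problem_id] = (answer == "Y")
    return truth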
Output

Your software must take as input the absolute path to a set of problems. For each problem there is a separate sub-folder within that path containing the set of known documents and the single unknown document of that problem (similarly to the training corpus). The software has to output a single text file answers.txt with the produced answers for the whole set of evaluation problems. Each line of this file corresponds to one problem instance: it starts with the ID of the problem, followed by a score, a real number in [0,1] inclusive, corresponding to the probability of a positive answer. That is, 0 means it is absolutely certain that the questioned document is not by the author of the known documents, 1.0 means it is absolutely certain that the questioned document and the known documents are by the same author, and 0.5 means that a positive and a negative answer are equally likely. The probability scores should be rounded to three decimal digits. Use a single whitespace to separate the problem ID and the probability score.
For example, an answers.txt file may look like this:

EN001 0.031
EN002 0.874
EN003 0.500
...
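
A minimal sketch of producing answers.txt in this format, assuming the scores are held in a dictionary keyed by problem ID:

import os


def write_answers(scores, output_dir):
    """scores: {problem_id: probability of a positive answer, a float in [0, 1]}."""
    with open(os.path.join(output_dir, "answers.txt"), "w", encoding="utf-8") as f:
        for problem_id in sorted(scores):
            # One "<problem ID> <score>" line per problem, rounded to three decimals
            f.write("{} {:.3f}\n".format(problem_id, scores[problem_id]))


# Example: write_answers({"EN001": 0.031, "EN002": 0.874, "EN003": 0.5}, "output_dir")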
Performance Measures

The participants’ answers will be evaluated according to the area under the ROC curve (AUC) of their probability scores.

In addition, the performance of the binary classification results (automatically extracted from probability scores where every score greater than 0.5 corresponds to a positive answer, every score lower than 0.5 corresponds to a negative answer, while 0.5 corresponds to an unanswered problem, or an "I don’t know" answer) will be measured based on c@1 (Peñas & Rodrigo, 2011):

  • c@1 = (1/n)*(nc+(nu*nc/n))

where:

  • n = #problems
  • nc = #correct_answers
  • nu = #unanswered_problems

Note: when positive/negative answers are provided for all available problems (probability scores different than 0.5), then c@1=accuracy. However, c@1 rewards approaches that maintain the same number of correct answers and decrease the number of incorrect answers by leaving some problems unanswered (when probability score equals 0.5).

The final ranking of the participants will be based on the product of AUC and c@1.
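
For self-evaluation on the training corpus, these measures can be computed roughly as in the following sketch. It uses scikit-learn's roc_auc_score for the AUC, while c@1 follows the formula above (scores of exactly 0.5 count as unanswered); the function names are illustrative.

from sklearn.metrics import roc_auc_score


def c_at_1(scores, truth):
    """scores: list of probabilities; truth: parallel list of booleans (True = same author)."""
    n = len(scores)
    nc = sum(1 for s, t in zip(scores, truth)
             if (s > 0.5 and t) or (s < 0.5 and not t))  # correct binary answers
    nu = sum(1 for s in scores if s == 0.5)              # unanswered problems
    return (nc + nu * nc / n) / n


def final_score(scores, truth):
    """Final ranking measure: AUC times c@1."""
    auc = roc_auc_score([1 if t else 0 for t in truth], scores)
    return auc * c_at_1(scores, truth)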

Test Corpus

Once you have finished tuning your approach to achieve satisfactory performance on the training corpus, your software will be tested on the evaluation corpus.

During the competition, the evaluation corpus will not be released publicly. Instead, we ask you to submit your software for evaluation at our site as described below.

After the competition, the evaluation corpus will become available, including ground truth data. This way, you have everything you need to evaluate your approach on your own while remaining comparable to those who took part in the competition.

Download corpus 1 Download corpus 2

Submission

We ask you to prepare your software so that it can be executed via command line calls. To maximize the sustainability of software submissions for this task, we encourage you to prepare your software so it can be re-trained on demand, i.e., by offering two commands, one for training and one for testing. This way, your software can be reused on future evaluation corpora as well as on private collections submitted to PAN via our data submission initiative.

The training command shall take as input (i) an absolute path to a training corpus formatted as described above, and (ii) an absolute path to an empty output directory:

> myTrainingSoftware -i path/to/training/corpus -o path/to/output/directory

Based on the training corpus, and perhaps based on its language and genre found within, your software shall train a classification model, and save the trained model to the specified output directory in serialized or binary form.

The testing command shall take as input (i) an absolute path to a test corpus (not containing the ground truth), (ii) an absolute path to a previously trained classification model, and (iii) an absolute path to an empty output directory:

> myTestingSoftware -i path/to/test/corpus -m path/to/classification/model -o path/to/output/directory

Based on the classification model, the software shall classify each case found in the test corpus and write an output file as described above to the output directory.

However, offering a command for training is optional, so if you face difficulties in doing so, you may skip the training command and omit the model option -m from the testing command.
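
To illustrate the expected command-line contract, here is a sketch of a testing command in Python. It assumes load_corpus and write_answers as defined in the sketches above, treats the -m model option as optional, and uses a placeholder score_problem that always abstains with 0.5; serializing the model with pickle is likewise an assumption about how a training command might save it.

import argparse
import pickle


def score_problem(model, known_texts, unknown_text):
    """Placeholder verifier: always abstains. Replace with your own method."""
    return 0.5


def main():
    parser = argparse.ArgumentParser(description="PAN 2014 author verification (testing command)")
    parser.add_argument("-i", dest="corpus", required=True,
                        help="absolute path to the test corpus")
    parser.add_argument("-m", dest="model", default=None,
                        help="absolute path to a previously trained model (optional)")
    parser.add_argument("-o", dest="output", required=True,
                        help="absolute path to an empty output directory")
    args = parser.parse_args()

    model = None
    if args.model is not None:
        with open(args.model, "rb") as f:
            model = pickle.load(f)  # assumes the training command pickled the model

    problems = load_corpus(args.corpus)   # from the corpus-loading sketch above
    scores = {pid: score_problem(model, known, unknown)
              for pid, (known, unknown) in problems.items()}
    write_answers(scores, args.output)    # from the answers.txt sketch above


if __name__ == "__main__":
    main()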

You can choose freely among the available programming languages and between the operating systems Microsoft Windows 7 and Ubuntu 12.04. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via ssh and via remote desktop. More information about how to access the virtual machines can be found in the user guide below.

PAN Virtual Machine User Guide »

Once your software is deployed in your virtual machine, you can move on to submitting it. Before doing so, we provide you with a software submission readiness tester. Please use this tester to verify that your software works. Since we will be calling your software automatically in much the same way as the tester does, this lowers the risk of errors.

Download PAN Software Submission Readiness Tester

When your software is submission-ready, please mail the filled-out submission.txt file found alongside the software submission readiness tester to pan@webis.de.

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Contributions

For your convenience, we summarize the main contributions of the 2014 edition of the author identification task with respect to previous editions:

Novelties:

  • The output of your software must be composed of real (probability) scores rather than binary Y/N answers
  • The maximum number of documents of known authorship within a problem is 5 (instead of 10)
  • The evaluation measures used for ranking are (ROC) AUC and c@1 instead of recall, precision and F1
  • More languages/genres are represented in the corpus
  • The training/evaluation corpora are larger
  • It is possible (optionally) to submit a trainable version of your approach to be used with any given training corpus

Unchanged:

  • The task definition is the same
  • The format of corpus and ground truth is the same
  • The positive/negative problems are equally distributed

Results

The following table lists the performances achieved by the participating teams:

Authorship attribution performance
Final Score  Team
0.566        Meta Classifier
0.490        Mahmoud Khonji and Youssef Iraqi
             Khalifa University, United Arab Emirates
0.484        Jordan Fréry°, Christine Largeron°, and Mihaela Juganaru-Mathieu*
             °Université de Lyon and *École Nationale Supérieure des Mines, France
0.461        Esteban Castillo°, Ofelia Cervantes°, Darnes Vilariño*, David Pinto*, and Saul León*
             °Universidad de las Américas Puebla and *Benemérita Universidad Autónoma de Puebla, Mexico
0.451        Erwan Moreau, Arun Jayapal, and Carl Vogel
             Trinity College Dublin, Ireland
0.450        Cristhian Mayor, Josue Gutierrez, Angel Toledo, Rodrigo Martinez, Paola Ledesma, Gibran Fuentes, and Ivan Meza
             Universidad Nacional Autonoma de Mexico, Mexico
0.426        Hamed Zamani, Hossein Nasr, Pariya Babaie, Samira Abnar, Mostafa Dehghani, and Azadeh Shakery
             University of Tehran, Iran
0.400        Satyam, Anand, Arnav Kumar Dawn, and Sujan Kumar Saha
             Birla Institute of Technology, India
0.375        Pashutan Modaresi and Philipp Gross
             pressrelations GmbH, Germany
0.367        Magdalena Jankowska, Vlado Kešelj, and Evangelos Milios
             Dalhousie University, Canada
0.335        Oren Halvani and Martin Steinebach
             Fraunhofer Institute for Secure Information Technology SIT, Germany
0.325        Baseline
0.308        Anna Vartapetiance and Lee Gillam
             University of Surrey, UK
0.306        Robert Layton
             Federation University, Australia
0.304        Sarah Harvey
             University of Waterloo, Canada

A more detailed analysis of the detection performances can be found in the overview paper accompanying this task.

Learn more »

Related Work

We refer you to:

Task Chair

Efstathios Stamatatos
University of the Aegean

Task Committee

Walter Daelemans
University of Antwerp

Patrick Juola
Duquesne University

Martin Potthast
Bauhaus-Universität Weimar

Benno Stein
Bauhaus-Universität Weimar

Miguel Angel Sánchez Pérez
National Polytechnic Institute, Mexico

Ben Verhoeven
University of Antwerp

Alberto Barrón-Cedeño
Universitat Politècnica de Catalunya