Multi-Author Writing Style Analysis 2024
Synopsis
- Task: Given a document, determine at which positions the author changes.
- Input: Reddit comments, combined into documents [data].
- Output: The positions at which authorship changes, on the paragraph level [validator].
- Evaluation: F1 [code].
- Submission: Deployment on TIRA [submit].
Task
The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches. Hence, a fundamental question is the following: if multiple authors have written a text together, can we find evidence for this fact, i.e., do we have a means to detect variations in the writing style? Answering this question is among the most difficult and most interesting challenges in author identification: style change detection is the only means to detect plagiarism in a document if no comparison texts are given; likewise, style change detection can help to uncover gift authorships, to verify a claimed authorship, or to develop new technology for writing support.
Previous editions of the multi-author writing style analysis task aimed at, e.g., detecting whether a document is single- or multi-authored (2018), determining the actual number of authors within a document (2019), detecting whether there was a style change between two consecutive paragraphs (2020, 2021, 2022), and locating the actual style changes (2021, 2022). In 2022, style changes also had to be detected on the sentence level. The previously used datasets exhibited high topic diversity, which allowed participants to leverage topic information as a style change signal. In this year's edition of the writing style analysis task, special attention is paid to this issue.
We ask participants to solve the following intrinsic style change detection task: for a given text, find all positions of writing style change on the paragraph level (i.e., for each pair of consecutive paragraphs, assess whether there was a style change). The simultaneous change of authorship and topic will be carefully controlled, and we will provide participants with datasets of three difficulty levels:
- Easy: The paragraphs of a document cover a variety of topics, allowing approaches to make use of topic information to detect authorship changes.
- Medium: The topical variety in a document is small (though still present), forcing approaches to focus more on style to effectively solve the detection task.
- Hard: All paragraphs in a document are on the same topic.
All documents are provided in English and may contain an arbitrary number of style changes. However, style changes may only occur between paragraphs (i.e., a single paragraph is always authored by a single author and contains no style changes).
Data [download]
To develop and then test your algorithms, three datasets including ground truth information are provided (easy for the easy task, medium for the medium task, and hard for the hard task).
Each dataset is split into three parts:
- training set: Contains 70% of the whole dataset and includes ground truth data. Use this set to develop and train your models.
- validation set: Contains 15% of the whole dataset and includes ground truth data. Use this set to evaluate and optimize your models.
- test set: Contains 15% of the whole dataset; no ground truth data is given. This set is used for evaluation.
You are free to use additional external data for training your models. However, we ask you to make any additional data you use freely available under a suitable license.
Input Format
The datasets are based on user posts from various subreddits of the Reddit platform. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem. We provide one folder for train, validation, and test data for each dataset, respectively.
For each problem instance X (i.e., each input document), two files are provided:
- problem-X.txt contains the actual text. When reading in files via Python, please use open(path, "r", newline="") to prevent any errors.
- truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format. An example file is listed in the following:
{ "authors": NUMBER_OF_AUTHORS, "changes": RESULT_ARRAY_TASK }
The result (key "changes") is represented as an array, holding a binary for each pair of consecutive paragraphs within the document (0 if there was no style change, 1 if there was a style change).
An example of a multi-author document with a style change between the third and fourth paragraph could be described as follows (we only list the relevant key/value pairs here):
{ "changes": [0,0,1,...] }
Output Format [validator]
To evaluate the solutions for the tasks, the results have to be stored in a single file for each of the input documents and each of the datasets. Please note that we require a solution file to be generated for each input problem for each dataset. The data structure during the evaluation phase will be similar to that in the training phase, with the exception that the ground truth files are missing.
For each given problem problem-X.txt, your software should output the missing solution file solution-problem-X.json, containing a JSON object holding the solution to the respective task. The solution is an array containing a binary value for each pair of consecutive paragraphs.
An example solution file is featured in the following:
{
"changes": [0,0,1,0,0,...]
}
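As a sketch of how such a file might be produced, the helper function, the output location, and the example values below are illustrative only, not a prescribed implementation:

import json
import os

def write_solution(problem_id, changes, output_dir):
    # `changes` is a list of 0/1 values, one per pair of consecutive paragraphs.
    out_path = os.path.join(output_dir, f"solution-problem-{problem_id}.json")
    with open(out_path, "w") as f:
        json.dump({"changes": changes}, f)

# Example: a predicted change between the third and fourth paragraph.
write_solution("12", [0, 0, 1, 0, 0], "output")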
Evaluation [code]
Submissions are evaluated by the macro-averaged F1-score across all paragraph pairs. The solutions for each of the three datasets are evaluated independently.
We provide you with a script to compute the F1-score based on the produced output files [evaluator and tests].
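As a rough sketch of the metric (not the official evaluator), the macro-averaged F1-score can be computed, e.g., with scikit-learn over the flattened labels of all paragraph pairs of one dataset; the label lists below are made up:

from sklearn.metrics import f1_score

# Flat lists of 0/1 labels over all paragraph pairs of one dataset
# (made-up values; ground truth comes from the truth-problem-X.json files,
# predictions from the corresponding solution files).
truth = [0, 0, 1, 0, 1, 1]
pred = [0, 1, 1, 0, 1, 0]

print(f1_score(truth, pred, average="macro"))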
Submission
Once you finish tuning your approach on the validation set, your software will be tested on the test set. During the competition, the test sets will not be released publicly. Instead, we ask you to submit your software for evaluation at our site as follows.
We ask you to prepare your software so that it can be executed via command line calls. The command shall take as input (i) an absolute path to the directory of the test corpora and (ii) an absolute path to an empty output directory:
mySoftware -i INPUT-DIRECTORY -o OUTPUT-DIRECTORY
Within INPUT-DIRECTORY, you will find the set of problem instances (i.e., problem-[id].txt files) for each of the three datasets, respectively. For each problem instance you should produce the solution file solution-problem-[id].json in the respective OUTPUT-DIRECTORY. For instance, you read INPUT-DIRECTORY/problem-12.txt, process it, and write your results to OUTPUT-DIRECTORY/solution-problem-12.json.
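A minimal skeleton of such a command-line program is sketched below; the recursive directory search and the placeholder prediction function are assumptions about the setup and must be replaced by your own logic:

import argparse
import glob
import json
import os

def predict_changes(text):
    # Placeholder: one 0/1 label per pair of consecutive paragraphs.
    # Assumes one paragraph per line; replace with your trained model.
    paragraphs = [p for p in text.split("\n") if p.strip()]
    return [0] * (len(paragraphs) - 1)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", dest="input_dir", required=True)
    parser.add_argument("-o", dest="output_dir", required=True)
    args = parser.parse_args()

    # The datasets may sit in subdirectories of the input directory,
    # hence the recursive search for problem files (an assumption).
    pattern = os.path.join(args.input_dir, "**", "problem-*.txt")
    for path in glob.glob(pattern, recursive=True):
        with open(path, "r", newline="") as f:
            text = f.read()
        problem_id = os.path.basename(path)[len("problem-"):-len(".txt")]
        out_path = os.path.join(args.output_dir, f"solution-problem-{problem_id}.json")
        with open(out_path, "w") as f:
            json.dump({"changes": predict_changes(text)}, f)

if __name__ == "__main__":
    main()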
In general, this task follows PAN's software submission strategy described here.
Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.
Related Work
- Style Change Detection, PAN@CLEF'23
- Style Change Detection, PAN@CLEF'22
- Style Change Detection, PAN@CLEF'21
- Style Change Detection, PAN@CLEF'20
- Style Change Detection, PAN@CLEF'19
- Style Change Detection, PAN@CLEF'18
- Style Breach Detection, PAN@CLEF'17
- PAN@CLEF'16 (Clustering by Authorship Within and Across Documents and Author Diarization section)
- J. Cardoso and R. Sousa. Measuring the Performance of Ordinal Classification. International Journal of Pattern Recognition and Artificial Intelligence, Volume 25, Issue 8, pages 1173-1195, 2011.
- Benno Stein, Nedim Lipka and Peter Prettenhofer. Intrinsic Plagiarism Analysis. In Language Resources and Evaluation, Volume 45, Issue 1, pages 63-82, 2011.
- Efstathios Stamatatos. A Survey of Modern Authorship Attribution Methods. Journal of the American Society for Information Science and Technology, Volume 60, Issue 3, pages 538-556, March 2009.