Intrinsic Plagiarism Detection in Arabic Text (AraPlagDet) 2015

Synopsis

  • Given a document written in Arabic, does it contain plagiarism?
  • Input: [data] [truth]
  • Evaluator: [code]

Task

Identify the text fragments that are inconsistent with the rest of the document in terms of writing style. The suspicious document should not be compared to any other documents.

Input

You will be provided with a training corpus to use while developing your method. The corpus consists of a collection of suspicious documents. Each suspicious document is associated with an XML document that specifies the positions of the plagiarised fragments, which allows you to check the correctness of your detections.
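
If you want to inspect these annotations programmatically, the following minimal Python sketch reads the ground-truth XML of one training document, assuming the annotations carry the this_offset/this_length attributes of the PAN format shown in the Output section; the file name is hypothetical.

import xml.etree.ElementTree as ET

# hypothetical file name; one annotation XML accompanies each suspicious document
tree = ET.parse("suspicious-documentXYZ.xml")
for feature in tree.getroot().iter("feature"):
    if "this_offset" in feature.attrib:
        offset = int(feature.get("this_offset"))
        length = int(feature.get("this_length"))
        print(f"annotated fragment: characters {offset} to {offset + length}")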

Output

Participants should submit their runs on the test corpus in the form of XML documents following the annotation format of the actual plagiarism provided in the training corpus (the PAN format). Below is an example of the content of the XML documents to be submitted by participants.

<document reference="suspicious-documentXYZ.txt">
<feature
  name="detected-plagiarism"
  this_offset="5"
  this_length="1000"
/>
<feature ... />
...
</document>
  • Each <feature> tag describes one detected fragment.
  • The attribute this_offset gives the position of the first character of the detected plagiarism fragment in the suspicious document.
  • The attribute this_length gives the length of the detected plagiarism fragment in characters.
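
As an illustration, here is a minimal Python sketch that writes the detections of one suspicious document in this format; the document name and the (offset, length) pair are hypothetical placeholders.

import xml.etree.ElementTree as ET

def write_detections(doc_name, detections, out_path):
    # one <feature> element per detected (offset, length) pair
    root = ET.Element("document", reference=doc_name)
    for offset, length in detections:
        ET.SubElement(root, "feature",
                      name="detected-plagiarism",
                      this_offset=str(offset),
                      this_length=str(length))
    ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)

# hypothetical example: one detection of 1000 characters starting at character 5
write_detections("suspicious-documentXYZ.txt", [(5, 1000)], "suspicious-documentXYZ.xml")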

Performance Measures

The following measures are used to evaluate the performance of methods:

  • Precision and recall at the character level,
  • Granularity: accounts for overlapping or multiple detections of a single plagiarism case,
  • Plagdet: combines the former measures into an overall score. This score is used to rank the methods.

To compute these measures, we use the same code provided at PAN@CLEF (author: Martin Potthast). [Learn more]
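
For orientation, Plagdet as defined by Potthast et al. is the F1 score of the character-level precision and recall, divided by log2(1 + granularity). A minimal sketch, which reproduces the baseline score in the results table below:

import math

def plagdet(precision, recall, granularity):
    if precision + recall == 0:
        return 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return f1 / math.log2(1 + granularity)

print(plagdet(0.2691282, 0.7792149, 1.0934164))  # ~0.3754 (the baseline row)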

Submission

(1) Run your method(s) on the test corpus and generate, for each suspicious document, an XML document (with the same name) that contains the detected plagiarism (please follow the format shown above).
Please record the runtime of your method in seconds on both the training and the test corpus.
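
A minimal sketch of such a run, assuming a flat directory of *.txt files (the actual corpus layout may differ); detect() is a stub standing in for your method, and write_detections() is the helper sketched in the Output section above.

import time
from pathlib import Path

def detect(text):
    # stub: replace with your intrinsic detection method;
    # it should return a list of (offset, length) pairs
    return []

start = time.perf_counter()
for txt_file in sorted(Path("test-corpus").glob("*.txt")):  # hypothetical layout
    detections = detect(txt_file.read_text(encoding="utf-8"))
    write_detections(txt_file.name, detections, str(txt_file.with_suffix(".xml")))
print(f"runtime: {time.perf_counter() - start:.0f} sec")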

(2) Send your runs on the test corpus as well as on the training corpus, in separate zip files, to araplagdet.fire2015@gmail.com (you can also upload them to a site such as Google Drive or Dropbox and send us the links).
The names of the zip files should follow the format below:
- for the runs on the training corpus: intrinsic_train_your.family.name.zip
- for the runs on the test corpus: intrinsic_test_your.family.name.zip

If you submit more than one run for a subtask, please add a number to the names of your files, e.g. intrinsic_test_myname_1.zip and intrinsic_test_myname_2.zip.

In the email, mention the names of the sent zip files along with the runtime needed to generate each of them (preferably in seconds, but an approximate time in hours or minutes is acceptable), e.g.
intrinsic_test_yourname.zip 1000 sec
intrinsic_train_yourname.zip 1022 sec

Results

The winner is the Magooda team. Congratulations!

Method     Plagdet     Precision   Recall      Granularity
Baseline   0.3753558   0.2691282   0.7792149   1.0934164
Magooda    0.1926474   0.1879309   0.1976069   1.0000000

Participants

Magooda: Ahmed Ezzat abdelGawad Magooda, Ashraf Youssef Mahgoub, Mohsen Rashwan (RDI, Egypt)

The intrinsic plagiarism detection task on English documents was run from PAN'09 to PAN'12. Here is a quick list of the respective proceedings and overviews:

The following references provide further information on intrinsic plagiarism detection, the development of its evaluation corpora, and the Arabic language:

Task Committee