Profiling Hate Speech Spreaders on Twitter 2021

Sponsored by
Symanto Research

Synopsis

  • Task: Given a Twitter feed, determine whether its author spreads hate speech.
  • Input:
    • Timelines of users sharing hate speech towards, for instance, immigrants and women.
    • English and Spanish, 200 training cases/authors each (with 200 tweets per author) [data]
  • Evaluation: Accuracy
  • Submission: Deployment on TIRA [submit]
  • Baselines: Character n-Grams+Logistic, Word n-Grams+SVM, USE+LSTM, XLMR+LSTM, MBERT+LSTM, TFIDF+LSTM, LDSE

Task

Hate speech (HS) is commonly defined as any communication that disparages a person or a group on the basis of some characteristic such as race, colour, ethnicity, gender, sexual orientation, nationality, or religion. Given the huge amount of user-generated content on Twitter, detecting hate speech, and thereby potentially curbing its diffusion, is becoming fundamental, for instance in the fight against misogyny and xenophobia. To this end, in this task we aim at identifying possible hate speech spreaders on Twitter as a first step towards preventing hate speech from being propagated among online users.

After having addressed several aspects of author profiling in social media from 2013 to 2020 (fake news spreaders, bot detection, age and gender, also together with personality, gender and language variety, and gender from a multimodality perspective), this year we aim at investigating whether it is possible to discriminate authors that have shared some hate speech in the past from those that, to the best of our knowledge, have never done so.

As in previous years, we propose the task from a multilingual perspective:

  • English
  • Spanish
NOTE: Although we recommend participating in both languages (English and Spanish), it is possible to address the problem for just one language.

Award

We are happy to announce that the best performing team at the 9th International Competition on Author Profiling will be awarded 300 Euros, sponsored by Symanto.
This year, the winner of the task is:

  • Marco Siino, Elisa Di Nuovo, Ilenia Tinnirello and Marco La Cascia, Università degli Studi di Palermo and Università degli Studi di Torino, Italy

Data

Input

The uncompressed dataset consists of one folder per language (en, es). Each folder contains:
  • An XML file per author (Twitter user) with 200 tweets. The name of the XML file corresponds to the unique author id.
  • A truth.txt file with the list of authors and the ground truth.
The format of the XML files is:
    <author lang="en">
        <documents>
            <document>Tweet 1 textual contents</document>
            <document>Tweet 2 textual contents</document>
            ...
        </documents>
    </author>
      
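For illustration, a minimal Python sketch of how such a file can be parsed with the standard library (the function name read_author is ours, not part of the task):

    import xml.etree.ElementTree as ET

    def read_author(xml_path):
        """Parse one author XML file into (lang, list of tweet texts)."""
        root = ET.parse(xml_path).getroot()            # <author lang="en|es">
        tweets = [doc.text or "" for doc in root.iter("document")]
        return root.get("lang"), tweets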
The format of the truth.txt file is as follows. The first column corresponds to the author id. The second column contains the truth label.
    b2d5748083d6fdffec6c2d68d4d4442d:::0
    2bed15d46872169dc7deaf8d2b43a56:::0
    8234ac5cca1aed3f9029277b2cb851b:::1
    5ccd228e21485568016b4ee82deb0d28:::0
    60d068f9cafb656431e62a6542de2dc0:::1
    ...
    

Output

Your software must take as input the absolute path to an unpacked dataset, and has to output for each author of the dataset a corresponding XML file that looks like this:

    <author id="author-id"
        lang="en|es"
        type="0|1"
    />
                              

The naming of the output files is up to you. However, we recommend using the author id as the filename and "xml" as the extension.

IMPORTANT! Languages must not be mixed. Create a folder for each language and place in it only the prediction files for that language.
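A sketch of how predictions could be written in this layout (directory and function names are illustrative only, not prescribed by the task):

    import os

    def write_prediction(output_dir, lang, author_id, label):
        """Write one <author .../> prediction file into a per-language folder."""
        lang_dir = os.path.join(output_dir, lang)   # e.g. output/en, output/es
        os.makedirs(lang_dir, exist_ok=True)
        xml = '<author id="{}" lang="{}" type="{}" />\n'.format(author_id, lang, label)
        with open(os.path.join(lang_dir, author_id + ".xml"), "w", encoding="utf-8") as fh:
            fh.write(xml)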

Evaluation

The performance of your system will be ranked by accuracy. For each language, we will calculate the accuracy in discriminating between the two classes. Finally, we will average the per-language accuracy values to obtain the final ranking.
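In other words, the final score reduces to the mean of the two per-language accuracies; a minimal sketch (gold and pred are hypothetical dicts mapping author id to label):

    def accuracy(gold, pred):
        """Fraction of authors whose predicted label matches the truth."""
        return sum(pred[a] == lab for a, lab in gold.items()) / len(gold)

    # final_score = (accuracy(gold_en, pred_en) + accuracy(gold_es, pred_es)) / 2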

Results

Baseline systems (no POS value) are interleaved at their rank positions.

    POS  Team                    EN    ES    AVG
    1    SiinoDiNuovo            73.0  85.0  79.0
    2    MUCIC                   73.0  83.0  78.0
    2    UO-UPV                  74.0  82.0  78.0
    4    andujar                 72.0  82.0  77.0
    4    anitei                  72.0  82.0  77.0
    4    anwar                   72.0  82.0  77.0
    7    pagnan                  73.0  80.0  76.5
         LDSE [20]               70.0  82.0  76.0
         char nGrams+Logistic    69.0  83.0  76.0
    8    hoellig                 73.0  79.0  76.0
    9    bañuls                  68.0  83.0  75.5
    9    supaca                  69.0  82.0  75.5
    9    oleg                    67.0  83.0  75.0
    9    moreno                  69.0  81.0  75.0
    9    cervero                 70.0  80.0  75.0
    14   katona                  70.0  79.0  74.5
         word nGrams+SVM         65.0  83.0  74.0
    15   bagdon                  67.0  81.0  74.0
    15   das                     67.0  81.0  74.0
    17   ikae                    66.0  81.0  73.5
    17   mata                    70.0  77.0  73.5
    19   lai                     62.0  84.0  73.0
    19   jain                    66.0  80.0  73.0
    19   villarroya              67.0  79.0  73.0
    19   mktung                  64.0  82.0  73.0
    19   sercopa                 67.0  79.0  73.0
    19   castro                  67.0  79.0  73.0
    25   giglou                  65.0  80.0  72.5
    25   huertas                 67.0  78.0  72.5
    25   wentao                  68.0  77.0  72.5
    28   rus                     61.0  83.0  72.0
    28   tudo                    65.0  79.0  72.0
    30   jaiferhu                61.0  82.0  71.5
    30   joshi                   65.0  78.0  71.5
    32   valiense                63.0  79.0  71.0
    32   krstev                  65.0  77.0  71.0
    34   martin                  65.0  77.0  71.0
    35   gomez                   58.0  83.0  70.5
    35   bakhteev                58.0  83.0  70.5
    35   MaNa                    64.0  77.0  70.5
    38   cabrera                 62.0  78.0  70.0
    38   esam                    63.0  77.0  70.0
    38   zhang                   63.0  77.0  70.0
    41   dudko                   61.0  78.0  69.5
    41   meghana                 64.0  75.0  69.5
    43   rubio                   59.0  79.0  69.0
    43   uzan                    62.0  76.0  69.0
    45   herrero                 57.0  80.0  68.5
    46   puertas                 60.0  76.0  68.0
         USE-LSTM                56.0  79.0  67.5
         XLMR-LSTM               62.0  73.0  67.5
    47   ipek                    58.0  77.0  67.5
    47   schlicht21              58.0  77.0  67.5
    47   peirano                 59.0  76.0  67.5
    47   russo                   55.0  80.0  67.5
         MBERT-LSTM              59.0  75.0  67.0
    51   kazzaz                  55.0  77.0  66.0
    52   dorado                  60.0  71.0  65.5
    53   kobby                   53.0  77.0  65.0
    53   kern                    54.0  76.0  65.0
    53   espinosa                64.0  66.0  65.0
    56   labadie                 51.0  78.0  64.5
    57   silva                   56.0  69.0  62.5
    57   garibo                  57.0  68.0  62.5
    59   estepicursor            51.0  72.0  61.5
    60   spears                  52.0  68.0  60.0
         TFIDF-LSTM              61.0  51.0  56.0
    61   barbas                  46.0  50.0  48.0
    62   dukic                   75.0  -     -
    63   tosev                   70.0  -     -
    64   amir                    68.0  -     -
    65   siebert                 68.0  -     -
    66   iteam                   65.0  -     -
    67   amina*                  63.0  -     -
* Result submitted after the deadline.

Task Committee