Profiling Hate Speech Spreaders on Twitter 2021
Synopsis
- Task: Given a Twitter feed, determine whether its author spreads hate speech.
- Input:
- Timelines of users sharing hate speech towards, for instance, immigrants and women.
- English and Spanish, 200 training cases/authors each (with 200 tweets per author) [data]
- Evaluation: Accuracy
- Submission: Deployment on TIRA [submit]
- Baselines: character n-grams + logistic regression, word n-grams + SVM, USE+LSTM, XLM-R+LSTM, mBERT+LSTM, TF-IDF+LSTM, LDSE [20] (a sketch of the first baseline follows this list)
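To make the first baseline concrete, here is a minimal sketch assuming scikit-learn. Concatenating each author's 200 tweets into one document and the 5-fold cross-validation are our assumptions for illustration, not the official configuration (parsing the dataset itself is shown in the Data section below):

```python
# Minimal sketch of the character n-grams + logistic regression baseline.
# Assumptions: scikit-learn is available; `texts` holds one string per
# author (their 200 tweets concatenated) and `labels` the 0/1 truth values.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def ngram_baseline_accuracy(texts, labels):
    """Mean 5-fold cross-validated accuracy on one language's training set."""
    model = make_pipeline(
        # Character n-grams within word boundaries; the (2, 5) range is
        # an illustrative choice, not the official baseline's setting.
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    return cross_val_score(model, texts, labels, cv=5, scoring="accuracy").mean()
```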
Task
Hate speech (HS) is commonly defined as any communication that disparages a person or a group on the basis of some characteristic such as race, colour, ethnicity, gender, sexual orientation, nationality, or religion. Given the huge amount of user-generated content on Twitter, detecting HS, and thereby making it possible to counter its diffusion, is becoming fundamental, for instance in the fight against misogyny and xenophobia. To this end, in this task we aim at identifying possible hate speech spreaders on Twitter as a first step towards preventing hate speech from being propagated among online users.
After having addressed several aspects of author profiling in social media from 2013 to 2020 (fake news spreaders; bot detection; age and gender, also together with personality; gender and language variety; and gender from a multimodal perspective), this year we aim at investigating whether it is possible to discriminate authors that have shared some hate speech in the past from those that, to the best of our knowledge, have never done so.
As in previous years, we propose the task from a multilingual perspective:
- English
- Spanish
Award
We are happy to announce that the best-performing team of the 9th International Competition on Author Profiling will be awarded 300 Euro, sponsored by Symanto.
This year, the winner of the task is:
- Marco Siino, Elisa Di Nuovo, Ilenia Tinnirello and Marco La Cascia, Università degli Studi di Palermo and Università degli Studi di Torino, Italy
Data
Input
The uncompressed dataset consists of one folder per language (en, es). Each folder contains:
- An XML file per author (Twitter user) with 200 tweets. The name of the XML file corresponds to the unique author id.
- A truth.txt file with the list of authors and the ground truth.
```xml
<author lang="en">
  <documents>
    <document>Tweet 1 textual contents</document>
    <document>Tweet 2 textual contents</document>
    ...
  </documents>
</author>
```

The format of the truth.txt file is as follows. The first column corresponds to the author id. The second column contains the truth label.
```
b2d5748083d6fdffec6c2d68d4d4442d:::0
2bed15d46872169dc7deaf8d2b43a56:::0
8234ac5cca1aed3f9029277b2cb851b:::1
5ccd228e21485568016b4ee82deb0d28:::0
60d068f9cafb656431e62a6542de2dc0:::1
...
```
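For concreteness, a sketch of how one language folder could be parsed with Python's standard library; the helper name load_language and the dict-based return values are our own, illustrative choices:

```python
# Sketch: read one language folder (e.g. .../en) of the unpacked dataset,
# assuming the XML layout and truth.txt format shown above.
import os
import xml.etree.ElementTree as ET

def load_language(folder):
    """Return {author_id: [tweet texts]} and {author_id: 0/1 label}."""
    truth = {}
    with open(os.path.join(folder, "truth.txt"), encoding="utf-8") as f:
        for line in f:
            author_id, label = line.strip().split(":::")
            truth[author_id] = int(label)
    tweets = {}
    for author_id in truth:
        # One XML file per author, named after the author id.
        tree = ET.parse(os.path.join(folder, author_id + ".xml"))
        tweets[author_id] = [doc.text or "" for doc in tree.iter("document")]
    return tweets, truth
```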
Output
Your software must take as input the absolute path to an unpacked dataset and output, for each author of the dataset, a corresponding XML file that looks like this:
```xml
<author id="author-id" lang="en|es" type="0|1" />
```
The naming of the output files is up to you. However, we recommend using the author id as the filename and "xml" as the extension.
IMPORTANT! Languages must not be mixed. Create a folder for each language and place inside it only the prediction files for that language.
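A sketch of a writer that conforms to this layout; write_predictions is a hypothetical helper, and predictions is assumed to map author ids to 0/1 labels:

```python
# Sketch: emit one <author .../> prediction file per author into a
# per-language subfolder, matching the output format above.
import os
import xml.etree.ElementTree as ET

def write_predictions(output_dir, lang, predictions):
    lang_dir = os.path.join(output_dir, lang)
    os.makedirs(lang_dir, exist_ok=True)
    for author_id, label in predictions.items():
        elem = ET.Element("author", id=author_id, lang=lang, type=str(label))
        # Recommended naming: author id as filename, "xml" as extension.
        ET.ElementTree(elem).write(
            os.path.join(lang_dir, author_id + ".xml"), encoding="utf-8"
        )
```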
Evaluation
The performance of your system will be ranked by accuracy. For each language, we will calculate the accuracy in discriminating between the two classes. Finally, we will average the two per-language accuracy values to obtain the final ranking.
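In code, the ranking score amounts to the following sketch (the function name final_score and the dict-based inputs are ours, not part of the official evaluator):

```python
# Sketch: accuracy per language, then the unweighted average used for ranking.
def final_score(truth_en, pred_en, truth_es, pred_es):
    def accuracy(truth, pred):
        return sum(pred[a] == truth[a] for a in truth) / len(truth)
    return (accuracy(truth_en, pred_en) + accuracy(truth_es, pred_es)) / 2
```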
Results
| POS | Team | EN | ES | AVG |
|---|---|---|---|---|
| 1 | SiinoDiNuovo | 73.0 | 85.0 | 79.0 |
| 2 | MUCIC | 73.0 | 83.0 | 78.0 |
| 2 | UO-UPV | 74.0 | 82.0 | 78.0 |
| 4 | andujar | 72.0 | 82.0 | 77.0 |
| 4 | anitei | 72.0 | 82.0 | 77.0 |
| 4 | anwar | 72.0 | 82.0 | 77.0 |
| 7 | pagnan | 73.0 | 80.0 | 76.5 |
| | LDSE [20] | 70.0 | 82.0 | 76.0 |
| | char nGrams+Logistic | 69.0 | 83.0 | 76.0 |
| 8 | hoellig | 73.0 | 79.0 | 76.0 |
| 9 | bañuls | 68.0 | 83.0 | 75.5 |
| 9 | supaca | 69.0 | 82.0 | 75.5 |
| 9 | oleg | 67.0 | 83.0 | 75.0 |
| 9 | moreno | 69.0 | 81.0 | 75.0 |
| 9 | cervero | 70.0 | 80.0 | 75.0 |
| 14 | katona | 70.0 | 79.0 | 74.5 |
| | word nGrams+SVM | 65.0 | 83.0 | 74.0 |
| 15 | bagdon | 67.0 | 81.0 | 74.0 |
| 15 | das | 67.0 | 81.0 | 74.0 |
| 17 | ikae | 66.0 | 81.0 | 73.5 |
| 17 | mata | 70.0 | 77.0 | 73.5 |
| 19 | lai | 62.0 | 84.0 | 73.0 |
| 19 | jain | 66.0 | 80.0 | 73.0 |
| 19 | villarroya | 67.0 | 79.0 | 73.0 |
| 19 | mktung | 64.0 | 82.0 | 73.0 |
| 19 | sercopa | 67.0 | 79.0 | 73.0 |
| 19 | castro | 67.0 | 79.0 | 73.0 |
| 25 | giglou | 65.0 | 80.0 | 72.5 |
| 25 | huertas | 67.0 | 78.0 | 72.5 |
| 25 | wentao | 68.0 | 77.0 | 72.5 |
| 28 | rus | 61.0 | 83.0 | 72.0 |
| 28 | tudo | 65.0 | 79.0 | 72.0 |
| 30 | jaiferhu | 61.0 | 82.0 | 71.5 |
| 30 | joshi | 65.0 | 78.0 | 71.5 |
| 32 | valiense | 63.0 | 79.0 | 71.0 |
| 32 | krstev | 65.0 | 77.0 | 71.0 |
| 34 | martin | 65.0 | 77.0 | 71.0 |
| 35 | gomez | 58.0 | 83.0 | 70.5 |
| 35 | bakhteev | 58.0 | 83.0 | 70.5 |
| 35 | MaNa | 64.0 | 77.0 | 70.5 |
| 38 | cabrera | 62.0 | 78.0 | 70.0 |
| 38 | esam | 63.0 | 77.0 | 70.0 |
| 38 | zhang | 63.0 | 77.0 | 70.0 |
| 41 | dudko | 61.0 | 78.0 | 69.5 |
| 41 | meghana | 64.0 | 75.0 | 69.5 |
| 43 | rubio | 59.0 | 79.0 | 69.0 |
| 43 | uzan | 62.0 | 76.0 | 69.0 |
| 45 | herrero | 57.0 | 80.0 | 68.5 |
| 46 | puertas | 60.0 | 76.0 | 68.0 |
| | USE-LSTM | 56.0 | 79.0 | 67.5 |
| | XLMR-LSTM | 62.0 | 73.0 | 67.5 |
| 47 | ipek | 58.0 | 77.0 | 67.5 |
| 47 | schlicht21 | 58.0 | 77.0 | 67.5 |
| 47 | peirano | 59.0 | 76.0 | 67.5 |
| 47 | russo | 55.0 | 80.0 | 67.5 |
| | MBERT-LSTM | 59.0 | 75.0 | 67.0 |
| 51 | kazzaz | 55.0 | 77.0 | 66.0 |
| 52 | dorado | 60.0 | 71.0 | 65.5 |
| 53 | kobby | 53.0 | 77.0 | 65.0 |
| 53 | kern | 54.0 | 76.0 | 65.0 |
| 53 | espinosa | 64.0 | 66.0 | 65.0 |
| 56 | labadie | 51.0 | 78.0 | 64.5 |
| 57 | silva | 56.0 | 69.0 | 62.5 |
| 57 | garibo | 57.0 | 68.0 | 62.5 |
| 59 | estepicursor | 51.0 | 72.0 | 61.5 |
| 60 | spears | 52.0 | 68.0 | 60.0 |
| | TFIDF-LSTM | 61.0 | 51.0 | 56.0 |
| 61 | barbas | 46.0 | 50.0 | 48.0 |
| 62 | dukic | 75.0 | - | - |
| 63 | tosev | 70.0 | - | - |
| 64 | amir | 68.0 | - | - |
| 65 | siebert | 68.0 | - | - |
| 66 | iteam | 65.0 | - | - |
| 67 | amina* | 63.0 | - | - |
Rows without a position are the baselines; teams ranked 62-67 participated in English only.
Related Work
- [1] Valerio Basile, Cristina Bosco, Elisabetta Fersini, Dora Nozza, Viviana Patti, Francisco Rangel, Paolo Rosso, Manuela Sanguinetti (2019). SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. Proc. SemEval 2019
- [2] Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, Viviana Patti (2020). Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources & Evaluation. https://doi.org/10.1007/s10579-020-09502-8
- [3] Paula Fortuna, Sérgio Nunes (2018). A survey on automatic detection of hate speech in text. ACM Computing Surveys (CSUR) 51.4
- [4] Maria Anzovino, Elisabetta Fersini, Paolo Rosso (2018). Automatic Identification and Classification of Misogynistic Language on Twitter. In: Proc. 23rd Int. Conf. on Applications of Natural Language to Information Systems, NLDB-2018, Springer-Verlag, LNCS(10859), pp. 57-64
- [5] Elisabetta Fersini, Paolo Rosso, Maria Anzovino (2018). Overview of the task on automatic misogyny identification at IberEval 2018. Proc. IberEval 2018
- [6] Elisabetta Fersini, Dora Nozza, Paolo Rosso (2018). Overview of the Evalita 2018 task on automatic misogyny identification (AMI). Proc. EVALITA 2018
- [7] Cristina Bosco, Felice Dell'Orletta, Fabio Poletto, Manuela Sanguinetti, Maurizio Tesconi (2018). Overview of the EVALITA 2018 hate speech detection task. Proc. EVALITA 2018
- [8] Samuel Caetano da Silva, Thiago Castro Ferreira, Ricelli Moreira Silva Ramos, Ivandre Paraboni (2020). Data-driven and psycholinguistics motivated approaches to hate speech detection. Computación y Sistemas, 24(3): 1179–1188
- [9] Steven Zimmerman, Udo Kruschwitz, Chris Fox (2018). Improving hate speech detection with deep learning ensembles. In Proc. of the Eleventh Int. Conf. on Language Resources and Evaluation (LREC 2018)
- [10] Simona Frenda, Bilal Ghanem, Manuel Montes-y-Gómez, Paolo Rosso (2019). Online hate speech against women: Automatic identification of misogyny and sexism on Twitter. Journal of Intelligent & Fuzzy Systems, 36(5): 4743-4752
- [11] Francisco Rangel, Anastasia Giachanou, Bilal Ghanem, Paolo Rosso (2020). Overview of the 8th Author Profiling Task at PAN 2020: Profiling Fake News Spreaders on Twitter. In: L. Cappellato, C. Eickhoff, N. Ferro, A. Névéol (eds.) CLEF 2020 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings. CEUR-WS.org, vol. 2696
- [12] Francisco Rangel, Paolo Rosso (2019). Overview of the 7th Author Profiling Task at PAN 2019: Bots and Gender Profiling in Twitter. In: L. Cappellato, N. Ferro, D. E. Losada, H. Müller (eds.) CLEF 2019 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings. CEUR-WS.org, vol. 2380
- [13] Francisco Rangel, Paolo Rosso, Martin Potthast, Benno Stein (2018). Overview of the 6th Author Profiling Task at PAN 2018: Multimodal Gender Identification in Twitter. In: CLEF 2018 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings. CEUR-WS.org, vol. 2125
- [14] Francisco Rangel, Paolo Rosso, Martin Potthast, Benno Stein (2017). Overview of the 5th Author Profiling Task at PAN 2017: Gender and Language Variety Identification in Twitter. In: Cappellato L., Ferro N., Goeuriot L., Mandl T. (eds.) CLEF 2017 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings. CEUR-WS.org, vol. 1866
- [15] Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, Benno Stein (2016). Overview of the 4th Author Profiling Task at PAN 2016: Cross-Genre Evaluations. In: Balog K., Cappellato L., Ferro N., Macdonald C. (eds.) CLEF 2016 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings. CEUR-WS.org, vol. 1609, pp. 750-784
- [16] Francisco Rangel, Fabio Celli, Paolo Rosso, Martin Potthast, Benno Stein, Walter Daelemans (2015). Overview of the 3rd Author Profiling Task at PAN 2015. In: Linda Cappellato, Nicola Ferro, Gareth Jones, Eric San Juan (eds.) CLEF 2015 Labs and Workshops, Notebook Papers, 8-11 September, Toulouse, France. CEUR Workshop Proceedings. ISSN 1613-0073, http://ceur-ws.org/Vol-1391/
- [17] Francisco Rangel, Paolo Rosso, Irina Chugur, Martin Potthast, Martin Trenkmann, Benno Stein, Ben Verhoeven, Walter Daelemans (2014). Overview of the 2nd Author Profiling Task at PAN 2014. In: Cappellato L., Ferro N., Halvey M., Kraaij W. (eds.) CLEF 2014 Labs and Workshops, Notebook Papers. CEUR-WS.org, vol. 1180, pp. 898-927
- [18] Francisco Rangel, Paolo Rosso, Moshe Koppel, Efstathios Stamatatos, Giacomo Inches (2013). Overview of the Author Profiling Task at PAN 2013. In: Forner P., Navigli R., Tufis D. (eds.) Notebook Papers of CLEF 2013 LABs and Workshops. CEUR-WS.org, vol. 1179
- [19] Francisco Rangel, Paolo Rosso (2018). On the Implications of the General Data Protection Regulation on the Organisation of Evaluation Tasks. In: Language and Law / Linguagem e Direito, Vol. 5(2), pp. 80-102
- [20] Francisco Rangel, Marc Franco-Salvador, Paolo Rosso (2018). A Low Dimensionality Representation for Language Variety Identification. In: Postproc. 17th Int. Conf. on Comput. Linguistics and Intelligent Text Processing, CICLing-2016, Springer-Verlag, Revised Selected Papers, Part II, LNCS(9624), pp. 156-169 (arXiv:1705.10754)