Social media bots pose as humans to influence users for commercial, political, or ideological purposes. For example, bots can artificially inflate the popularity of a product by promoting it and/or writing positive reviews, or undermine the reputation of competing products through negative ratings. The threat is even greater when the purpose is political or ideological (consider the Brexit referendum or the US presidential elections). Fearing the effect of this influence, German political parties rejected the use of bots in their campaigns for the general elections. Furthermore, bots are commonly involved in spreading fake news. Approaching the identification of bots from an author profiling perspective is therefore of high importance from the points of view of marketing, forensics, and security.
After having addressed several aspects of author profiling in social media from 2013 to 2018 (age and gender, also together with personality; gender and language variety; and gender from a multimodal perspective), this year we aim at investigating whether the author of a Twitter feed is a bot or a human. Furthermore, in the case of a human, the task is to profile the gender of the author.
As in previous years, we propose the task from a multilingual perspective, with data in English and Spanish.
Unlike previous years, and with the aim of maintaining a realistic scenario, we have not performed any cleaning on the tweets: they remain as the users tweeted them. This means that retweets (RTs) have not been removed, and tweets in more than one language may appear.
Data [Download Training Data]
Input
The uncompressed dataset consists of one folder per language (en, es). Each folder contains:
- An XML file per author (Twitter user) with 100 tweets. The name of the XML file corresponds to the unique author id.
- A truth.txt file with the list of authors and the ground truth.
The format of the XML files is as follows:

<author lang="en">
  <documents>
    <document>Tweet 1 textual contents</document>
    <document>Tweet 2 textual contents</document>
    ...
  </documents>
</author>

The format of the truth.txt file is as follows. The first column corresponds to the author id. The second and third columns contain the ground truth for the human/bot and bot/male/female tasks, respectively.
b2d5748083d6fdffec6c2d68d4d4442d:::bot:::bot
2bed15d46872169dc7deaf8d2b43a56:::bot:::bot
8234ac5cca1aed3f9029277b2cb851b:::human:::female
5ccd228e21485568016b4ee82deb0d28:::human:::female
60d068f9cafb656431e62a6542de2dc0:::human:::male
c6e5e9c92fb338dc0e029d9ea22a4358:::human:::male
...
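For reference, the dataset in this format can be loaded with a short script like the following. This is a minimal sketch using only the standard library; the function names are our own, not part of the task.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def load_truth(truth_file):
    """Parse a truth.txt file into {author_id: (type, gender)}."""
    truth = {}
    for line in Path(truth_file).read_text(encoding="utf-8").splitlines():
        if line.strip():
            author_id, author_type, gender = line.split(":::")
            truth[author_id] = (author_type, gender)
    return truth

def load_tweets(xml_file):
    """Return the list of tweet texts from one author's XML file."""
    root = ET.parse(xml_file).getroot()
    return [doc.text or "" for doc in root.iter("document")]
```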
Your software must take as input the absolute path to an unpacked dataset, and must output, for each author of the dataset, a corresponding XML file that looks like this:
<author id="author-id" lang="en|es" type="bot|human" gender="bot|male|female" />
The naming of the output files is up to you. However, we recommend using the author id as the filename and "xml" as the extension.
IMPORTANT! Languages should not be mixed. Create a folder for each language and place inside it only the files with the predictions for that language.
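Writing the output in the required per-language layout can be sketched as follows (a standard-library example; the helper name is ours, adapt as needed):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def write_prediction(output_dir, lang, author_id, author_type, gender):
    """Write one prediction as <output_dir>/<lang>/<author_id>.xml."""
    lang_dir = Path(output_dir) / lang   # one folder per language, never mixed
    lang_dir.mkdir(parents=True, exist_ok=True)
    author = ET.Element("author", id=author_id, lang=lang,
                        type=author_type, gender=gender)
    ET.ElementTree(author).write(lang_dir / f"{author_id}.xml",
                                 encoding="utf-8")
```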
IMPORTANT! To avoid overfitting when experimenting with the training set, we recommend using the provided train/dev split (files truth-train.txt and truth-dev.txt).
Evaluation
The performance of your author profiling solution will be ranked by accuracy. For each language, we will calculate individual accuracies: first, the accuracy of identifying bots vs. humans; then, for humans, the accuracy of identifying males vs. females. Finally, we will average the accuracy values per language to obtain the final ranking.
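This scoring scheme can be sketched as below. Note this is our reading of the description; the official scorer may differ in details such as how bot authors are handled in the gender step.

```python
def evaluate(truth, predictions):
    """truth, predictions: {author_id: (type, gender)} for one language.
    Returns (bot_vs_human_acc, gender_acc, averaged_score)."""
    ids = list(truth)
    # Accuracy of identifying bots vs. humans over all authors.
    acc_type = sum(predictions[a][0] == truth[a][0] for a in ids) / len(ids)
    # Gender accuracy computed over the human authors only.
    humans = [a for a in ids if truth[a][0] == "human"]
    acc_gender = (sum(predictions[a][1] == truth[a][1] for a in humans)
                  / len(humans) if humans else 0.0)
    return acc_type, acc_gender, (acc_type + acc_gender) / 2
```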
Submission [Submit software to PAN]
This task follows PAN's software submission strategy described here.
- Andrew Guess, Jonathan Nagler, and Joshua Tucker. Less than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook. Science Advances, vol. 5 (2019)
- Kai Shu, Suhang Wang, and Huan Liu. Understanding user profiles on social media for fake news detection. IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), pp. 430--435 (2018)
- Massimo Stella, Emilio Ferrara, and Manlio De Domenico. Bots sustain and inflate striking opposition in online social systems. arXiv preprint arXiv:1802.07292 (2018)
- Massimo Stella, Emilio Ferrara, and Manlio De Domenico. Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences, vol. 115 (49), pp. 12435-12440 (2018)
- Onur Varol, Clayton A. Davis, Filippo Menczer, Alessandro Flammini. Feature Engineering for Social Bot Detection. In: Guozhu Dong and Huan Liu (Eds.), Feature Engineering for Machine Learning and Data Analytics, chapter 12. CRC Press (2018)
- Emilio Ferrara, Onur Varol, Filippo Menczer, Alessandro Flammini. Detection of Promoted Social Media Campaigns. The 10th International AAAI Conference on Web and Social Media - ICWSM, pp. 563-566 (2016)
- Zakaria el Hjouji, D. Scott Hunter, Nicolas Guenon des Mesnards, Tauhid Zaman. The Impact of Bots on Opinions in Social Networks. arXiv preprint arXiv:1810.12398 (2018)
- John P. Dickerson, Vadim Kagan, V.S. Subrahmanian. Using Sentiment to Detect Bots on Twitter: Are Humans More Opinionated than Bots? Proceedings of the 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 620-627. IEEE Press (2014)
- Kai-Cheng Yang, Onur Varol, Clayton A. Davis, Emilio Ferrara, Alessandro Flammini, Filippo Menczer. Arming the Public with AI to Counter Social Bots. arXiv preprint arXiv:1901.00912 (2019)
- Chiyu Cai, Linjun Li, Daniel Zeng. Behavior Enhanced Deep Bot Detection in Social Media. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI), pp. 128-130 (2017)
- Andrew Hall, Loren Terveen, Aaron Halfaker. Bot Detection in Wikidata Using Behavioral and Other Informal Cues. Proceedings of the ACM on Human-Computer Interaction, vol. 2 (CSCW), article 64 (2018)
- Mariona Taulé, M. Antonia Martí, Francisco Rangel, Paolo Rosso, Cristina Bosco, and Viviana Patti. Overview of the Task on Stance and Gender Detection in Tweets on Catalan Independence at IberEval 2017. In: 2nd Workshop on Evaluation of Human Language Technologies for Iberian Languages, IberEval 2017. CEUR Workshop Proceedings. CEUR-WS.org, vol. 1881.
- Francisco Rangel, Paolo Rosso, Martin Potthast, Benno Stein. Overview of the 6th Author Profiling Task at PAN 2018: Multimodal Gender Identification in Twitter. In: CLEF 2018 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings. CEUR-WS.org, vol. 2125.
- Francisco Rangel, Paolo Rosso, Martin Potthast, Benno Stein. Overview of the 5th Author Profiling Task at PAN 2017: Gender and Language Variety Identification in Twitter. In: Cappellato L., Ferro N., Goeuriot L, Mandl T. (Eds.) CLEF 2017 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings. CEUR-WS.org, vol. 1866.
- Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, Benno Stein. Overview of the 4th Author Profiling Task at PAN 2016: Cross-Genre Evaluations. In: Balog K., Cappellato L., Ferro N., Macdonald C. (Eds.) CLEF 2016 Labs and Workshops, Notebook Papers. CEUR Workshop Proceedings. CEUR-WS.org, vol. 1609, pp. 750-784.
- Francisco Rangel, Fabio Celli, Paolo Rosso, Martin Potthast, Benno Stein, Walter Daelemans. Overview of the 3rd Author Profiling Task at PAN 2015. In: Linda Cappellato, Nicola Ferro, Gareth Jones, and Eric San Juan (Eds.) CLEF 2015 Labs and Workshops, Notebook Papers, 8-11 September, Toulouse, France. CEUR Workshop Proceedings. ISSN 1613-0073, http://ceur-ws.org/Vol-1391/, 2015.
- Francisco Rangel, Paolo Rosso, Irina Chugur, Martin Potthast, Martin Trenkmann, Benno Stein, Ben Verhoeven, Walter Daelemans. Overview of the 2nd Author Profiling Task at PAN 2014. In: Cappellato L., Ferro N., Halvey M., Kraaij W. (Eds.) CLEF 2014 Labs and Workshops, Notebook Papers. CEUR-WS.org, vol. 1180, pp. 898-927.
- Francisco Rangel, Paolo Rosso, Moshe Koppel, Efstathios Stamatatos, Giacomo Inches. Overview of the Author Profiling Task at PAN 2013. In: Forner P., Navigli R., Tufis D. (Eds.) Notebook Papers of CLEF 2013 Labs and Workshops. CEUR-WS.org, vol. 1179.