
Bots and Gender Profiling

PAN @ CLEF 2019

Given a Twitter feed, determine whether its author is a bot or a human. If the author is a human, identify their gender.

Task

Social media bots pose as humans to influence users for commercial, political or ideological purposes. For example, bots could artificially inflate the popularity of a product by promoting it and/or writing positive ratings, as well as undermine the reputation of competing products through negative reviews. The threat is even greater when the purpose is political or ideological (see the Brexit referendum or the US presidential elections). Fearing the effect of this influence, the German political parties rejected the use of bots in their campaigns for the general elections. Furthermore, bots are commonly linked to the spreading of fake news. Approaching the identification of bots from an author profiling perspective is therefore of high importance for marketing, forensics and security.

After addressing several aspects of author profiling in social media from 2013 to 2018 (age and gender, also together with personality; gender and language variety; and gender from a multimodal perspective), this year we aim to investigate whether the author of a Twitter feed is a bot or a human and, if human, to profile the author's gender.

As in previous years, we propose the task from a multilingual perspective:

  • English
  • Spanish
NOTE: Although we recommend participating in both bots and gender profiling, it is possible to address just one of the two problems, and to do so for just one language: English or Spanish.

Unlike in previous years, and with the aim of maintaining a realistic scenario, we have not performed any cleaning on the tweets: they remain as users tweeted them. This means that retweets (RTs) have not been removed and that tweets in more than one language may appear.


Data [Download Training Data]

Input

The uncompressed dataset consists of one folder per language (en, es). Each folder contains:
  • An XML file per author (Twitter user) with 100 tweets. The name of the XML file corresponds to the unique author id.
  • A truth.txt file with the list of authors and the ground truth.
The format of the XML files is:
	<author lang="en">
	    <documents>
	        <document>Tweet 1 textual contents</document>
	        <document>Tweet 2 textual contents</document>
	        ...
	    </documents>
	</author>
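
As an illustration, here is a minimal Python sketch that reads one author file with the standard library (the path in the usage comment is hypothetical):

	import xml.etree.ElementTree as ET

	def read_author(xml_path):
	    """Return (lang, list of tweet texts) for one author XML file."""
	    root = ET.parse(xml_path).getroot()   # <author lang="...">
	    lang = root.get("lang")
	    tweets = [doc.text or "" for doc in root.iter("document")]
	    return lang, tweets

	# Hypothetical usage:
	# lang, tweets = read_author("en/b2d5748083d6fdffec6c2d68d4d4442d.xml")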
                              
The format of the truth.txt file is as follows. The first column corresponds to the author id; the second and third columns contain the ground truth for the bot/human task and the bot/male/female task, respectively.
	b2d5748083d6fdffec6c2d68d4d4442d:::bot:::bot
	2bed15d46872169dc7deaf8d2b43a56:::bot:::bot
	8234ac5cca1aed3f9029277b2cb851b:::human:::female
	5ccd228e21485568016b4ee82deb0d28:::human:::female
	60d068f9cafb656431e62a6542de2dc0:::human:::male
	c6e5e9c92fb338dc0e029d9ea22a4358:::human:::male
	...
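
A minimal Python sketch for loading this file, assuming the ":::" separator shown above:

	def read_truth(path):
	    """Return {author_id: (type, gender)} from a truth file."""
	    truth = {}
	    with open(path, encoding="utf-8") as f:
	        for line in f:
	            author_id, kind, gender = line.strip().split(":::")
	            truth[author_id] = (kind, gender)
	    return truth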
                              

Output

Your software must take as input the absolute path to an unpacked dataset and must output, for each author in the dataset, a corresponding XML file that looks like this:

	<author id="author-id"
		lang="en|es"
		type="bot|human"
		gender="bot|male|female"
	/>
                              

The naming of the output files is up to you. However, we recommend using the author-id as the filename and "xml" as the extension.

IMPORTANT! Languages must not be mixed. Create a folder for each language and place inside it only the files with the predictions for that language.
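
For instance, a minimal Python sketch that follows both recommendations and writes one prediction file per author into a per-language folder (the function and variable names are our own, not part of the task):

	import os
	import xml.etree.ElementTree as ET

	def write_prediction(out_dir, lang, author_id, author_type, gender):
	    """Write an <author/> element to out_dir/<lang>/<author_id>.xml."""
	    folder = os.path.join(out_dir, lang)
	    os.makedirs(folder, exist_ok=True)
	    author = ET.Element("author", {
	        "id": author_id,
	        "lang": lang,             # "en" or "es"
	        "type": author_type,      # "bot" or "human"
	        "gender": gender,         # "bot", "male" or "female"
	    })
	    ET.ElementTree(author).write(os.path.join(folder, author_id + ".xml"))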

IMPORTANT! To avoid overfitting when experimenting with the training set, we recommend using the provided train/dev split (files truth-train.txt and truth-dev.txt).

Evaluation

The performance of your author profiling solution will be ranked by accuracy. For each language, we will calculate individual accuracies: first, the accuracy of identifying bots vs. humans; then, for the human authors, the accuracy of identifying males vs. females. Finally, we will average the accuracy values per language to obtain the final ranking.
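
The following Python sketch mirrors this description for a single language. It is our reading of the evaluation, not the official scoring code; in particular, it assumes gender accuracy is computed over the human authors only:

	def rank_score(truth, pred):
	    """truth and pred map author_id -> (type, gender); return the score."""
	    ids = list(truth)
	    acc_type = sum(pred[a][0] == truth[a][0] for a in ids) / len(ids)
	    humans = [a for a in ids if truth[a][0] == "human"]
	    acc_gender = sum(pred[a][1] == truth[a][1] for a in humans) / len(humans)
	    return (acc_type + acc_gender) / 2   # average of the two accuracies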

Submission [Submit software to PAN]

This task follows PAN's software submission strategy described here.

Related Work