Author Profiling
2017

Sponsor: MeaningCloud

Authorship analysis deals with the classification of texts into classes based on the stylistic choices of their authors. Beyond the author identification and author verification tasks, where the style of individual authors is examined, author profiling distinguishes between classes of authors by studying their sociolect aspect, that is, how language is shared by people. This helps in identifying profiling aspects such as gender, age, native language, or personality type. Author profiling is a problem of growing importance in applications in forensics, security, and marketing. For example, from a forensic linguistics perspective, one would like to be able to determine the linguistic profile of the author of a harassing text message (language used by a certain type of people) and identify certain characteristics (language as evidence). Similarly, from a marketing viewpoint, companies may be interested in knowing, on the basis of the analysis of blogs and online product reviews, the demographics of people that like or dislike their products. The focus is on author profiling in social media, since we are mainly interested in everyday language and how it reflects basic social and personality processes.

Award

We are happy to announce that the best performing team at the 4th International Competition on Author Profiling will be awarded 300,- Euro sponsored by MeaningCloud.

Task

Gender and language variety identification in Twitter. Demographic traits such as gender and language variety have so far been investigated separately. In this task we provide participants with a Twitter corpus annotated with the authors' gender and the specific variety of their native language:

  • English (Australia, Canada, Great Britain, Ireland, New Zealand, United States)
  • Spanish (Argentina, Chile, Colombia, Mexico, Peru, Spain, Venezuela)
  • Portuguese (Brazil, Portugal)
  • Arabic (Egypt, Gulf, Levantine, Maghrebi)

Although we suggest participating in both subtasks (gender and language variety identification) and in all languages, it is possible to participate in only one of them and in only some of the languages.
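
For illustration only, the sketch below trains two independent classifiers for a single language, one for gender and one for variety, using character n-grams and a linear SVM. The function name, the feature set, and the in-memory data format are assumptions made for this example; they are not part of the task definition, and any approach may be used.

  # Illustrative baseline sketch (not the official setup): character n-grams
  # and a linear SVM, with one gender model and one variety model per language.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.pipeline import make_pipeline
  from sklearn.svm import LinearSVC

  def train_models(texts, genders, varieties):
      """texts: one concatenated tweet string per author (assumed preprocessing);
      genders, varieties: the corresponding labels from the training corpus."""
      def new_classifier():
          return make_pipeline(
              TfidfVectorizer(analyzer="char", ngram_range=(2, 4), min_df=2),
              LinearSVC(),
          )
      gender_model = new_classifier().fit(texts, genders)
      variety_model = new_classifier().fit(texts, varieties)
      return gender_model, variety_model

Predictions for an unseen author are then obtained with gender_model.predict([text]) and variety_model.predict([text]).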

Training corpus

To develop your software, we provide you with a training data set that consists of tweets in English, Spanish, Portuguese, and Arabic, labeled with gender and language variety.

Download corpus (Updated March 10, 2017)

Info about additional training material (although domains are different): http://ttg.uni-saarland.de/resources/DSLCC

Output

Your software must take as input the absolute path to an unpacked dataset, and has to output for each document of the dataset a corresponding XML file that looks like this:

  <author id="author-id"
          lang="en|es|pt|ar"
          variety="australia|canada|great britain|ireland|new zealand|united states|
                   argentina|chile|colombia|mexico|peru|spain|venezuela|
                   portugal|brazil|
                   gulf|levantine|maghrebi|egypt"
          gender="male|female"
  />
  

The naming of the output files is up to you; we recommend using the author-id as the filename and "xml" as the extension.

IMPORTANT! Languages must not be mixed: create a folder for each language and place inside it only the files with the predictions for that language.
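
For concreteness, one way to write such files from Python is sketched below; the helper name and its arguments are our own choices for this example, not something required by the task, and any equivalent solution is fine.

  import os
  import xml.etree.ElementTree as ET

  def write_prediction(output_dir, lang, author_id, variety, gender):
      """Write one <author/> file into the per-language folder."""
      lang_dir = os.path.join(output_dir, lang)          # one folder per language
      os.makedirs(lang_dir, exist_ok=True)
      author = ET.Element("author", {
          "id": author_id,
          "lang": lang,                                  # en | es | pt | ar
          "variety": variety,                            # e.g. "great britain"
          "gender": gender,                              # "male" or "female"
      })
      # Recommended naming: the author-id as filename and "xml" as extension.
      path = os.path.join(lang_dir, author_id + ".xml")
      ET.ElementTree(author).write(path, encoding="utf-8", xml_declaration=True)

For example, write_prediction("/path/to/output", "en", "a1b2c3", "ireland", "female") creates /path/to/output/en/a1b2c3.xml.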

Performance Measures

The performance of your author profiling solution will be ranked by accuracy.

Concretely, we will calculate individual accuracies for each language, gender, and variety. Then, we will average the accuracy values to obtain a joint identification of variety and gender in each language.
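
The snippet below shows one way to compute these scores for a single language from true and predicted labels. Counting an author as jointly correct only when both variety and gender match is our reading of the joint identification; the official evaluation script may differ in detail.

  def accuracy(true, pred):
      """Fraction of authors whose predicted label equals the true label."""
      return sum(t == p for t, p in zip(true, pred)) / len(true)

  def evaluate_language(true_gender, pred_gender, true_variety, pred_variety):
      """Per-language gender accuracy, variety accuracy, and joint accuracy
      (an author counts as jointly correct only if both labels are correct)."""
      gender_acc = accuracy(true_gender, pred_gender)
      variety_acc = accuracy(true_variety, pred_variety)
      joint_acc = sum(
          g == pg and v == pv
          for g, pg, v, pv in zip(true_gender, pred_gender, true_variety, pred_variety)
      ) / len(true_gender)
      return gender_acc, variety_acc, joint_acc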

Submission

We ask you to prepare your software so that it can be executed via command line calls. More details will be released here soon.
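
Until the exact call signature is published, a thin command-line wrapper along the following lines is easy to adapt later; the -i and -o flag names below are placeholders, not the official parameters.

  # Hypothetical command-line entry point; the flag names are placeholders
  # until the official call signature is announced.
  import argparse

  def main():
      parser = argparse.ArgumentParser(description="PAN 2017 author profiling run")
      parser.add_argument("-i", dest="input_dir", required=True,
                          help="absolute path to the unpacked (test) dataset")
      parser.add_argument("-o", dest="output_dir", required=True,
                          help="directory for the per-language XML output files")
      args = parser.parse_args()
      # Load the trained models, predict for each author found in input_dir,
      # and write one XML file per author into output_dir (see the Output section).
      print("reading from", args.input_dir, "and writing to", args.output_dir)

  if __name__ == "__main__":
      main()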

You can choose freely among the available programming languages and among the operating systems Microsoft Windows and Ubuntu. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via ssh and via remote desktop. More information about how to access the virtual machines can be found in the user guide below:

PAN Virtual Machine User Guide »

Once deployed in your virtual machine, we ask you to access TIRA at www.tira.io, where you can self-evaluate your software on the test data.

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Related Work and Corpora

We refer you to:

Task Chair

Paolo Rosso

Universitat Politècnica de València

Task Committee

Francisco Rangel

Autoritas Consulting

Benno Stein

Bauhaus-Universität Weimar

Martin Potthast

Bauhaus-Universität Weimar

Walter Daelemans

University of Antwerp

Efstathios Stamatatos

University of the Aegean

© pan.webis.de