Given the text of a news article, decide whether it follows hyperpartisan argumentation, i.e., whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person.
Thanks to the 322 teams who registered for this task!
The competition is over, but you can still request access to the data on Zenodo.
The main performance measure is accuracy on a balanced set of articles. In addition, we measure precision, recall, and F1-score for the hyperpartisan class (evaluation script). Please find the SemEval leaderboard here. Note, however, that you can still submit your approaches (see Ongoing Submissions).
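To make the measures concrete, here is a minimal sketch of how accuracy, precision, recall, and F1 for the hyperpartisan (positive) class are computed from per-article labels. This is an illustration, not the official evaluation script; the label strings "true"/"false" and the function name are assumptions.

```python
def evaluate(true_labels, pred_labels, positive="true"):
    """Accuracy over all articles, plus precision/recall/F1 for the
    positive (hyperpartisan) class. Illustrative only."""
    # Counts for the positive class
    tp = sum(1 for t, p in zip(true_labels, pred_labels) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(true_labels, pred_labels) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(true_labels, pred_labels) if t == positive and p != positive)

    accuracy = sum(1 for t, p in zip(true_labels, pred_labels) if t == p) / len(true_labels)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1
```

On a balanced set (half hyperpartisan, half not), a constant or random predictor scores around 0.5 accuracy, which is why accuracy is meaningful here.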
In order to encourage developers to share their software so that everyone can profit from their work, we announced a grand prize of $1,000 for the best-performing submission whose code was published open source before the SemEval conference (held at NAACL-HLT on June 6-7 in Minneapolis, USA).
The grand prize has been won by team bertha-von-suttner (Ye Jiang, Johann Petrak, Xingyi Song, Kalina Bontcheva, Diana Maynard from the University of Sheffield, United Kingdom). Well done!
We continue to allow submissions. New submissions will appear on the TIRA leaderboard [labels-by-article, labels-by-publisher]. When you publish your code on GitHub, we will add it to our list. To participate, please send an email to the organizers' mailing list with your choice of operating system (Ubuntu/Windows) and a code name, and we will provide you with a virtual machine. You can find help here and on the PAN mailing list.
Your software must be executable from the command line and must not require Internet access during the evaluation. Note that you retain full copyright of your software; you only grant us usage rights for the purpose of the competition.
We provide a random baseline to illustrate the output of a submission and a term frequency extractor to illustrate how to read the dataset. For features, see the code from our ACL'18 publication for inspiration.
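As a rough picture of what the random baseline does, the sketch below reads article ids from an input XML file and writes a random hyperpartisan prediction per article. The element and attribute names (`<article id="...">`) and the output format (one `id label` pair per line) are assumptions about the data format; consult the provided baseline code for the exact schema.

```python
import random
import xml.etree.ElementTree as ET

def random_baseline(articles_xml_path, predictions_path, seed=42):
    """Write a random 'true'/'false' prediction for each <article> element.
    Assumes <article id="..."> elements; illustrative only."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    with open(predictions_path, "w", encoding="utf-8") as out:
        # iterparse streams the XML, so large corpora do not need to fit in memory
        for _, elem in ET.iterparse(articles_xml_path, events=("start",)):
            if elem.tag == "article" and "id" in elem.attrib:
                out.write(f"{elem.attrib['id']} {rng.choice(['true', 'false'])}\n")
```

A real submission would replace the random choice with a classifier's decision while keeping the same command-line, file-in/file-out structure required by the evaluation.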