Given the text of a news article, decide whether it exhibits hyperpartisan argumentation, i.e., blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person.
When you register, you will get remote access to a virtual machine (Windows or Linux) in which to deploy your task software. Your software must be executable from the command line and must not require Internet access during the evaluation period.
Note that you retain full copyright of your software, but agree to grant us usage rights solely for the purposes of the competition.
We provide a random baseline to illustrate the output of a submission and a term frequency extractor to illustrate how to read the dataset. For features, see the code from our ACL'18 publication for inspiration.
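To illustrate what a baseline submission might look like, here is a minimal sketch of a random baseline in Python. The function name and the label representation (a boolean per article ID) are assumptions for illustration; they do not reflect the exact input/output format required by the task software.

```python
import random

def random_baseline(article_ids, seed=42):
    """Assign each article a random hyperpartisan label.

    Illustrative only: the real task software must read the provided
    dataset format and write predictions in the required output format.
    A fixed seed makes the baseline's output reproducible.
    """
    rng = random.Random(seed)
    return {aid: rng.choice([True, False]) for aid in article_ids}
```

A classifier submission would replace the random choice with a prediction derived from features of the article text.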
You will self-evaluate your software using TIRA. The main performance measure will be accuracy on a balanced set of articles. In addition, we will measure precision, recall, and F1-score for the hyperpartisan class (evaluation script).
You will be able to designate up to three test-data runs for this competition: one submitted before 12 Dec ("Early Bird") and two more submitted before 23 Jan.
After evaluation, participants are required to submit and review task description papers following the SemEval timeline.
We want to encourage developers to share their software so that everyone can profit from their work. We are thus excited to announce a grand prize of $1,000 for the best-performing submission that has its code published open source before the SemEval conference (held at NAACL-HLT on June 6-7 in Minneapolis, USA).