Given a news article text, decide whether it follows a hyperpartisan argumentation, i.e., whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person.
We will provide 1 million articles, labeled by the overall tendency of their publisher, for training your algorithm, as well as about 500 manually labeled articles for validation.
Upon registration, you will get remote access to a virtual machine (Windows or Linux) on which to deploy your task software. Your software must be executable from the command line and must not require Internet access during the evaluation period.
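To illustrate what a command-line-executable submission might look like, here is a minimal sketch. The flag names (`--input`, `--output`), the prediction file name, and the toy keyword classifier are all assumptions for illustration, not the task's official interface.

```python
# Hypothetical command-line skeleton for a submission.
# The --input/--output flags and the predictions.txt format are
# assumptions for illustration, not the official interface.
import argparse
import pathlib


def classify(article_text):
    # Toy placeholder: a real submission would load and apply a trained model.
    return "hyperpartisan" if "outrageous" in article_text.lower() else "mainstream"


def main(argv=None):
    parser = argparse.ArgumentParser(description="Hyperpartisan news classifier")
    parser.add_argument("--input", required=True, help="directory of article .txt files")
    parser.add_argument("--output", required=True, help="directory for the prediction file")
    args = parser.parse_args(argv)

    in_dir = pathlib.Path(args.input)
    out_file = pathlib.Path(args.output) / "predictions.txt"
    with out_file.open("w") as out:
        for article in sorted(in_dir.glob("*.txt")):
            label = classify(article.read_text())
            out.write(f"{article.stem}\t{label}\n")


if __name__ == "__main__":
    main()
```

Because the script takes its directories as arguments and touches no network resources, it satisfies the offline, command-line execution requirement by construction.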
Note that you retain full copyright of your software but agree to grant us usage rights solely for the purpose of the competition.
We will soon provide the code from our ACL'18 publication to get you started.
You will be able to self-evaluate your software using the TIRA service. The main performance measure will be accuracy on a balanced set of articles. In addition, we will measure precision, recall, and F1-score for the hyperpartisan class.
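The measures above can be sketched in a few lines of plain Python. This is not the official TIRA evaluator; the label strings and the `evaluate` helper are assumptions for illustration, and precision, recall, and F1 are computed for the hyperpartisan class only, as the text specifies.

```python
# Sketch of the evaluation measures (not the official TIRA evaluator).
# Label strings and function name are illustrative assumptions.
def evaluate(gold, predicted, positive="hyperpartisan"):
    """Return accuracy plus precision/recall/F1 for the positive class."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, predicted) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, predicted) if g == positive and p != positive)
    correct = sum(1 for g, p in zip(gold, predicted) if g == p)

    accuracy = correct / len(gold)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}


gold      = ["hyperpartisan", "mainstream", "hyperpartisan", "mainstream"]
predicted = ["hyperpartisan", "hyperpartisan", "mainstream", "mainstream"]
scores = evaluate(gold, predicted)
```

Because the official test set is balanced, accuracy is a meaningful headline number; the per-class precision/recall/F1 additionally reveal whether a system over- or under-predicts the hyperpartisan label.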
After the evaluation, participants are required to submit and review task description papers following the SemEval timeline.
We want to encourage developers to share their software so that everyone can benefit from their work. We are thus excited to announce a grand prize of $1,000 for the best-performing submission that has its code published open source before the SemEval conference.