Workshop Keynotes

PAN is co-located with the CLEF conference and will be held from September 09 to 12, 2019.

Giancarlo Ruffo
Hoax vs fact checking: understanding and predicting the diffusion of low quality information on communication networks
University of Turin, Italy

The Internet and online social networks have amplified information diffusion processes, but at the same time they provide fertile ground for the spread of misinformation, rumors, and hoaxes. The goal of this work is to introduce a simple modeling framework to study these phenomena: following the epidemic approach and motivated by results in the literature, we treat misinformation as an instance of the more general concept of information diffusion, and we propose an adaptation of the classic SIS (Susceptible-Infected-Susceptible) model to the case of misinformation by adding two essential socio-cognitive features: forgetting and competition with fact-checking efforts.

First, we focus on how the availability of debunking information may contain the diffusion of misinformation; our approach allows us to quantitatively gauge the minimal fact-checking reaction necessary to eradicate a hoax. Second, we simulate the spreading dynamics on networks with two communities of gullible and skeptic users, with different propensities to believe hoaxes and a segregation parameter that represents the sparsity of links between the two communities. Simulations show that segregation plays an important role in the diffusion of misinformation, but its effect changes as other parameters vary. Finally, we validate our model on Twitter data (covering both fake news and debunking), obtaining good results.

Our encouraging findings suggest that fact-checking can still be considered useful in fighting misinformation, but also that the structure of the underlying social network strongly shapes how the spreading process evolves; further investigation in this direction is therefore necessary in order to develop new tools and solutions that limit the diffusion of fake news.
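The compartmental dynamics sketched in the abstract can be illustrated with a minimal mean-field simulation. Everything below, including the parameter names, values, and update rules, is an illustrative assumption rather than the keynote's actual model: susceptible users may adopt either the hoax or the debunking, believers may verify the hoax and switch sides, and both kinds of spreaders eventually forget and become susceptible again.

```python
def simulate_hoax(beta=0.4, alpha=0.3, p_verify=0.05, p_forget=0.1, steps=500):
    """Mean-field hoax dynamics with three compartments:
    S (susceptible), B (hoax believers), F (fact-checkers).

    beta:     rate at which believers convince susceptibles
    alpha:    rate at which fact-checkers convince susceptibles
    p_verify: probability a believer checks the facts and switches to F
    p_forget: probability a spreader (B or F) forgets and returns to S
    All values are hypothetical.
    """
    S, B, F = 0.99, 0.01, 0.0
    for _ in range(steps):
        infect = beta * S * B     # S -> B: hoax adoption
        debunk = alpha * S * F    # S -> F: debunking adoption
        verify = p_verify * B     # B -> F: believer fact-checks
        forget_B = p_forget * B   # B -> S: forgetting
        forget_F = p_forget * F   # F -> S: forgetting
        S += forget_B + forget_F - infect - debunk
        B += infect - verify - forget_B
        F += debunk + verify - forget_F
    return S, B, F
```

In this toy setting, eradication of the hoax corresponds to the believer fraction B approaching zero, which happens when the verification and debunking rates are large enough relative to the infection rate; the fractions are conserved (S + B + F = 1) by construction.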

Giancarlo Ruffo, Ph.D., has been Associate Professor of Computer Science at the University of Turin, Italy, since 2006, Adjunct Professor at the School of Informatics and Computing, Indiana University, since 2011, and an ISI fellow (awarded by the ISI Foundation) since 2015; he is also the coordinator of the master's degree program in "Networks and Computational Systems" (Reti e Sistemi Informatici) at the University of Turin. His current research interests fall in the multidisciplinary area of Computational Social Science and Network Science, with a focus on data visualization and data-driven approaches to modeling the diffusion of misinformation and opinion polarization in social media. He has also investigated research problems in web and data mining, recommendation systems, social media, distributed applications, peer-to-peer systems, security, and micro-payment schemes. He is the principal investigator of the ARCS group and has led several research projects. He has published about 50 peer-reviewed papers in international journals and conferences. Aside from his academic work, he has been involved in many other professional activities as a freelance consultant over the last 20 years. In 2013 he co-founded NetAtlas s.r.l., a tech company specialized in data modeling, analysis and management, data visualization, and ICT solutions.

Preslav Nakov
Exposing Paid Opinion Manipulation Trolls
Qatar Computing Research Institute (QCRI), HBKU

The practice of using opinion manipulation trolls has been a reality since the rise of the Internet and community forums. It has been shown that user opinions about products, companies, and politics can be influenced by posts from other users in online forums and social networks. This makes it easy for companies and political parties to gain popularity by paying for "reputation management": hiring people or companies that post fake opinions from fake profiles in discussion forums and social networks.

A natural question is whether such trolls can be found and exposed automatically. This is hard, as there is not enough data to train a classifier; it is, however, possible to obtain some test data, since such trolls are sometimes caught and widely exposed. Still, one needs training data. We solve the problem by assuming that a user who is called a troll by several different people is likely to be one, and that a user who has never been called a troll is unlikely to be one. We compare the profiles of (i) paid trolls vs. (ii) "mentioned" trolls vs. (iii) non-trolls, and we further show that a classifier trained to distinguish (ii) from (iii) also does quite well at telling apart (i) from (iii).
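The weak-labeling assumption described above can be sketched as a simple distant-supervision rule. The function name, the threshold, and the data shape here are illustrative assumptions, not the authors' implementation:

```python
def weak_label(accusations, min_accusers=3):
    """Distant supervision for troll detection (illustrative sketch).

    accusations maps each user to the set of distinct users who called
    them a troll. Users accused by at least min_accusers distinct people
    are labeled "mentioned-troll"; users never accused are "non-troll";
    the rest are too ambiguous to use as training data and get no label.
    """
    labels = {}
    for user, accusers in accusations.items():
        if len(accusers) >= min_accusers:
            labels[user] = "mentioned-troll"
        elif not accusers:
            labels[user] = "non-troll"
    return labels
```

A classifier over profile features (e.g. posting times, reply patterns, vocabulary) could then be trained on these weak labels and evaluated against the smaller set of exposed paid trolls.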

Preslav Nakov, Ph.D., is a Principal Scientist at the Qatar Computing Research Institute (QCRI), HBKU. His research interests include computational linguistics, "fake news" detection, fact-checking, machine translation, question answering, sentiment analysis, lexical semantics, Web as a corpus, and biomedical text processing. At QCRI, he leads the Tanbih project, developed in collaboration with MIT, which aims to limit the effect of "fake news", propaganda, and media bias by making users aware of what they are reading. Dr. Nakov is the Secretary of ACL SIGLEX and ACL SIGSLAV, and a member of the EACL advisory board. He also serves on the editorial boards of several journals, including Transactions of the Association for Computational Linguistics, Computer Speech and Language, Natural Language Engineering, AI Communications, and Frontiers in AI. Dr. Nakov received his Ph.D. from the University of California at Berkeley (supported by a Fulbright grant). He is the recipient of the Bulgarian President's John Atanasoff award, named after the inventor of the first automatic electronic digital computer.
