Broadly speaking, stereotypes are features perceived to be associated with particular categories of people, and stereotyping corresponds to the characterisation of a group of people as sharing the same behaviours and attributes. According to the economic approach, stereotypes are beliefs about a group member in terms of the aggregate statistical distribution of group traits. In the sociological approach, stereotypes are described as oversimplified, derogatory and fundamentally incorrect generalisations about social groups. Finally, in the social cognition approach, stereotypes are seen as special cases of cognitive schemas: limited-capacity human minds create shortcuts via judgmental heuristics that save cognitive resources. These judgments are based on data of limited validity, processed according to heuristic rules, which can lead to biased conclusions. More recently, stereotype formation and function have been explored in the framework of Bayesian predictive processing. Learning for the predictive brain involves testing predictions against data obtained from the world and applying Bayes’ theorem to update probabilities. Under this view, instead of conceptualising stereotypes as a problem of individual cognitive bias, they can be regarded as “culture in mind”, influencing the cognition of cultural group members. In this talk, I will discuss statistical aspects of the stereotyping process and its biases and present some results of our research on stereotypes.
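The Bayesian updating process mentioned in the abstract can be illustrated with a minimal sketch. All numbers below are invented for illustration; they are not data or a model from the talk.

```python
# Minimal sketch of Bayesian belief updating, as in the predictive-processing
# account of stereotype learning: a prior belief about a binary hypothesis
# (e.g. "this trait is typical of that group") is repeatedly updated on
# observations of limited validity.

def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior P(H|D) via Bayes' theorem for a binary hypothesis H.

    likelihood     = P(D|H), likelihood_alt = P(D|not H).
    """
    evidence = prior * likelihood + (1 - prior) * likelihood_alt
    return prior * likelihood / evidence

# Start from an uninformative prior (0.5), then update on three observations
# that only weakly favour the hypothesis. Even mildly skewed data can drive
# the belief far from the prior, which is one route to a biased conclusion.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, likelihood=0.8, likelihood_alt=0.4)
print(round(belief, 3))
```

Note how three repetitions of weak evidence push the posterior close to 0.9: if the observed data are unrepresentative of the group, the same mechanism that saves cognitive resources also entrenches the biased generalisation.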
PAN at CLEF 2022
Shared Tasks
- Authorship Verification
- Profiling Irony and Stereotype Spreaders on Twitter (IROSTEREO)
- Style Change Detection
Important Dates
- April 26, 2022: Early bird software submission phase (optional)
- May 24, 2022: Software submission deadline
- May 27, 2022: Participant paper submission
- June 13, 2022: Peer review notification
- July 1, 2022: Camera-ready participant papers submission
- TBA: Early bird conference registration
- September 5-9, 2022: Conference
The timezone of all deadlines is Anywhere on Earth.
Keynotes
A variety of typologies of figurative messages can be recognised in social media: from irony to sarcastic posts, and to facetious tweets that can be playful, aimed at amusing or at strengthening ties with other users. In the last decade, irony and sarcasm have been proven to be pervasive in social media, posing a challenge to sentiment analysis systems. They are creative linguistic phenomena where affect-related aspects play a key role. They can influence and twist the affect of an utterance in complex and different ways, they can elicit various affective reactions, and can behave differently with respect to the polarity reversal phenomenon. Recently, awareness of the importance of automatically detecting irony and other figures of speech to correctly recognise hate speech in social media has grown, hand in hand with the need to create computational models capable of going beyond the surface and identifying implicit and indirect expressions of abuse, where the main challenges are related, on the one hand, to the use of figurative devices (i.e., irony and sarcasm) and, on the other hand, to the recall of inner ideologies (e.g., sexist ideology) and cognitive schemas (e.g., stereotypes). In fact, in the case of negative and hateful opinions, social media users may tend to be less explicit, employing irony and sarcasm in their claims in order to limit their exposure. In particular, sarcasm - a sharp and very effective form of irony used for mocking and ridiculing a victim - often recurs in hateful messages, lowering the social cost of what has been said. Identifying such indirect and implicit forms characterising hate speech is crucial not only to gain a richer understanding of the phenomena, but also because they often increase the viral load of the hate message (and its dangerousness, or the possibility of fuelling a hate campaign): users sharing such contents do so with more levity when the message does not contain explicit insults.
Along these lines, from a computational linguistics perspective, it is interesting to study how to make abusive language detection systems sensitive to implicit expressions of hate, and how the injection of linguistic (and affective) knowledge into the detection models can help capture such implicit levels of meaning, with the final aim of investigating whether awareness of the presence of irony and sarcasm increases the performance of abusive language detection systems.
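The idea of injecting irony awareness into an abusive language detector can be sketched as follows. This is a toy illustration, not a system from the keynote: the marker lists, feature names and weights are all invented assumptions.

```python
# Hypothetical sketch: an irony flag is injected as an extra feature
# alongside simple lexical abuse cues, so that sarcastic phrasing can raise
# the abusiveness score even when no explicit insult is present.

IRONY_MARKERS = {"yeah right", "#irony", "#sarcasm"}   # invented marker list
ABUSE_CUES = {"idiot", "loser"}                        # invented cue list

def features(text):
    """Extract a minimal feature dict: lexical abuse cues plus an irony flag."""
    t = text.lower()
    return {
        "abuse_cues": sum(cue in t for cue in ABUSE_CUES),
        "irony_flag": int(any(m in t for m in IRONY_MARKERS)),
    }

def abusive_score(text, w_abuse=1.0, w_irony=0.7):
    """Linear score; the irony weight encodes that sarcasm may mask abuse."""
    f = features(text)
    return w_abuse * f["abuse_cues"] + w_irony * f["irony_flag"]

print(abusive_score("Great job, genius #sarcasm"))  # irony marker, no cue
print(abusive_score("what an idiot"))               # explicit abuse cue
```

A real system would of course learn such weights and use richer affective resources; the sketch only shows the architectural point of adding figurative-language knowledge as a feature next to surface cues.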
Program
PAN's program is part of the CLEF 2022 conference program.
The schedule below is tentative; it may be adjusted once we have participants' confirmations. Please note that all session times below are given in Bologna local time.
Monday, September 5
11:10-12:30 | CLEF Session: Lab overviews (ArqMath, BioAsq, iDPP, PAN)
15:20-16:50 | Keynote and Lab Session: IROSTEREO, Chair: Paolo Rosso
15:20-15:25 | Welcome
15:25-16:20 | Keynote: Stereotyping: explanation and fallacies from a probabilistic and statistical perspective (Lara Fontanella)
16:20-16:50 | Overview of the IROSTEREO task (Reynier Ortega Bueno, Berta Chulvi, Francisco Rangel, Paolo Rosso and Elisabetta Fersini)
17:10-18:40 | Keynote and Lab Session: IROSTEREO, Chair: Francisco Rangel
17:10-17:15 | Best system award of the IROSTEREO task (Symanto)
17:15-17:30 | BERT-based ironic authors profiling (Wentao Yu, Benedikt Boenninghoff, Dorothea Kolossa)
17:30-17:45 | Exploiting Affective-based Information for Profiling Ironic Users on Twitter (Delia Irazu Hernandez Farias, Manuel Montes-y-Gómez)
17:45-18:40 | Keynote: Fast and furious: when irony meets hatred and prejudice in social media (Viviana Patti)
Tuesday, September 6
15:30-17:00 | Lab Session: Style Change Detection, Chair: Eva Zangerle
15:30-16:00 | Overview of the Style Change Detection task (Eva Zangerle, Maximilian Mayerl, Martin Potthast and Benno Stein)
16:00-16:15 | Ensemble Pre-trained Transformer Models for Writing Style Change Detection (Tzu-Mi Lin, Chao-Yi Chen, YuWen Tzeng, Lung-Hao Lee)
16:15-16:30 | Style Change Detection using Discourse Markers (Faisal Alvi, Hasan Algafri, Naif Alqahtani)
17:20-18:50 | Lab Session: Authorship Verification, Chair: Benno Stein
17:20-18:00 | Overview of the Authorship Verification task (Efstathios Stamatatos, Mike Kestemont, Krzysztof Kredens, Piotr Pezik, Annina Heini, Janek Bevendorff, Martin Potthast and Benno Stein)
18:00-18:20 | Graph-Based Siamese Network for Authorship Verification (Jorge Alfonso Martinez Galicia, Daniel Embarcadero Ruiz, Alejandro Ríos Orduña, Helena Gómez Adorno)
18:20-18:40 | Different Encoding Approaches for Authorship Verification (Stefanos Konstantinou, Jinqiao Li, Angelos Zinonos)
Wednesday, September 7
8:50-10:20 | Posters |