Call for Papers
Disinformation spreads easily in online social networks, propagated by social media actors and network communities to achieve specific, mostly malevolent, objectives. It has deleterious effects on users’ real lives, as it distorts their views on societally-sensitive topics such as politics, health and religion. Ultimately, it undermines the very fabric of democratic societies and should be countered through an effective combination of human and technical means.
Disinformation campaigns are increasingly powered by advanced AI techniques, and considerable effort has been put into the detection of fake content. While important, detection is only one piece of the puzzle if the phenomenon is to be addressed in a comprehensive manner. Whether a piece of information is considered fake or true often depends on the temporal and cultural contexts in which it is interpreted. This is the case, for instance, for scientific knowledge, which evolves at a fast pace and whose use in mainstream content should be updated accordingly.
Multimedia content is often at the core of AI-assisted disinformation campaigns, and the impact of such campaigns is directly related to the perceived credibility of the content they circulate. Dedicated deep learning techniques have brought significant advances in the automatic generation and manipulation of each modality. Visual content can be tampered with to produce manipulated yet realistic versions of it. Synthesized speech has attained a level of quality that makes it increasingly difficult to distinguish from a genuine voice. Deep language models trained on huge corpora can generate text that closely resembles human writing. Combining these advances has the potential to boost the effectiveness of disinformation campaigns, and countering such combinations is an open research topic that needs to be addressed in order to reduce their effects. This workshop welcomes contributions related to different aspects of AI-powered disinformation.
Topics of interest include but are not limited to:
- Disinformation detection in multimedia content (video, audio, text, images)
- Multimodal verification methods
- Synthetic and manipulated media detection
- Multimedia forensics
- Disinformation spread and effects in social media
- Analysis of disinformation campaigns in societally-sensitive domains (e.g., politics, health)
- Explaining disinformation to non-experts
- Disinformation detection technologies for non-expert users
- Dataset sharing and governance in AI for disinformation
- Temporal and cultural aspects of disinformation
- Datasets for disinformation detection and multimedia verification
- Multimedia verification systems and applications
- System fusion, ensembling and late fusion techniques
- Benchmarking and evaluation frameworks
- Open resources, e.g., datasets, tools
The workshop is supported by the H2020 project AI4Media, "A European Excellence Centre for Media, Society and Democracy" (grant #951911), under call ICT-48-2020 "Towards a vibrant European network of AI excellence centres", https://www.ai4media.eu/.