Fake News, Disinformation, Propaganda, and Media Bias

Tutorial Website https://propaganda.math.unipd.it/cikm21-tutorial/

Tutorial Schedule

  • The tutorial will start on Nov. 1st at 7:00 UTC.
  • 7:00 Introduction and Fact checking
  • 8:00 Q&A and short break
  • 8:10 Fake news, Stance detection and Media bias
  • 9:10 Q&A and short break
  • 9:20 Propaganda Detection, Final Remarks
  • 10:20 Q&A

Tutorial Description

Social media have democratized content creation and have made it easy for anybody to spread information online. However, stripping traditional media of their gate-keeping role has left the public unprotected against biased, deceptive, and disinformative content, which can now travel online at breaking-news speed and influence major public events. For example, during the COVID-19 pandemic, a new blend of medical and political disinformation gave rise to the first global infodemic.

We offer an overview of the emerging and inter-connected research areas of fact-checking, disinformation, “fake news”, propaganda, and media bias detection. We explore the general fact-checking pipeline and important elements thereof such as check-worthiness estimation, spotting previously fact-checked claims, stance detection, source reliability estimation, detection of persuasion techniques, and detecting malicious users in social media. We also cover large-scale pre-trained language models, and the challenges and opportunities they offer for generating and for defending against neural fake news.
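The general fact-checking pipeline described above can be sketched as a minimal toy program. Everything here is an illustrative assumption: the claim database, the length-based check-worthiness heuristic, the string-similarity matcher, and the 0.6 threshold are hypothetical stand-ins for the trained models a real system would use at each stage.

```python
from difflib import SequenceMatcher

# Hypothetical toy database of previously fact-checked claims.
FACT_CHECKED = {
    "5g towers spread covid-19": "refuted",
    "masks reduce virus transmission": "supported",
}

def is_check_worthy(claim: str) -> bool:
    """Stage 1: check-worthiness estimation.
    Toy heuristic: very short inputs are unlikely to be factual claims."""
    return len(claim.split()) >= 3

def find_previously_checked(claim: str, threshold: float = 0.6):
    """Stage 2: spotting previously fact-checked claims.
    String similarity is a crude stand-in for learned semantic matching."""
    best, best_score = None, 0.0
    for known in FACT_CHECKED:
        score = SequenceMatcher(None, claim.lower(), known).ratio()
        if score > best_score:
            best, best_score = known, score
    if best_score >= threshold:
        return best, FACT_CHECKED[best]
    return None, None

def fact_check(claim: str) -> str:
    """Run the pipeline; 'unverified' marks claims that would continue
    to evidence retrieval, stance detection, and source reliability
    estimation in a full system."""
    if not is_check_worthy(claim):
        return "not check-worthy"
    match, verdict = find_previously_checked(claim)
    return verdict if verdict else "unverified"
```

In a deployed system, each placeholder function would be replaced by a dedicated model, and the stages would be connected to retrieval over fact-checking databases and news sources.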

Tutorial Organisers

  • Preslav Nakov
    Qatar Computing Research Institute, HBKU
    Dr. Preslav Nakov is a Principal Scientist at the Qatar Computing Research Institute (QCRI), HBKU. His research interests include computational linguistics, "fake news" detection, fact-checking, machine translation, question answering, sentiment analysis, lexical semantics, Web as a corpus, and biomedical text processing. He received his PhD degree from the University of California at Berkeley (supported by a Fulbright grant), and he was a Research Fellow at the National University of Singapore, an honorary lecturer at Sofia University, and a member of the research staff at the Bulgarian Academy of Sciences. At QCRI, he leads the Tanbih project, developed in collaboration with MIT, which aims to limit the effect of "fake news", propaganda, and media bias by making users aware of what they are reading. Dr. Nakov is President of ACL SIGLEX, Secretary of ACL SIGSLAV, and a member of the EACL advisory board. He is a member of the editorial boards of TACL, CS&L, NLE, AI Communications, and Frontiers in AI, as well as of the Language Science Press Book Series on Phraseology and Multiword Expressions. He co-authored a Morgan & Claypool book on Semantic Relations between Nominals, two books on computer algorithms, and many research papers in top-tier conferences and journals. He received the Young Researcher Award at RANLP'2011, and he was the first recipient of the Bulgarian President's John Atanasoff award, named after the inventor of the first automatic electronic digital computer. Dr. Nakov's research on "fake news" has been featured by over 100 news outlets, including Forbes, Boston Globe, Aljazeera, MIT Technology Review, Science Daily, Popular Science, Fast Company, The Register, WIRED, and Engadget.
  • Giovanni Da San Martino
    University of Padova
    Giovanni Da San Martino is a Senior Assistant Professor at the University of Padova, Italy. His research interests lie at the intersection of machine learning and natural language processing. He has been working on these topics for 10+ years, with more than 70 publications in top-tier conferences and journals. He has worked on several NLP tasks, including paraphrase recognition, stance detection, and community question answering. Currently, he is actively involved in research on disinformation and propaganda detection. He is a co-organiser of the CheckThat! labs at CLEF 2018-2021, of the NLP4IF workshops on censorship, disinformation, and propaganda and their shared tasks, of the 2019 Hack the News Datathon, of SemEval 2020 Task 11 on "Detection of Propaganda Techniques in News Articles", and of SemEval 2021 Task 6 on "Detection of Persuasive Techniques in Texts and Images".

Tutorial Abstract

The rise of the Internet and social media changed not only how we consume information, but also democratized content creation and dissemination, making them available to anybody. Despite the hugely positive impact, this development has a downside: the public was left unprotected against biased, deceptive, and disinformative content, which can now travel online at breaking-news speed and allegedly influence major events such as political elections, or disturb the efforts of governments and health officials to fight the ongoing COVID-19 pandemic. The research community has responded by proposing a number of inter-connected research directions, such as fact-checking, disinformation, misinformation, "fake news", propaganda, and media bias detection. Below, we cover the mainstream research, and we also pay attention to less popular but emerging research directions, such as propaganda detection, check-worthiness estimation, detecting previously fact-checked claims, and multimodality, which are of interest to human fact-checkers and journalists. We further cover relevant topics such as stance detection, source reliability estimation, detection of persuasion techniques in text and memes, and detecting malicious users in social media. Moreover, we discuss large-scale pre-trained language models, and the challenges and opportunities they offer for generating and for defending against neural fake news. Finally, we explore some recent efforts aiming at flattening the curve of the COVID-19 infodemic.