Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications

Tutorial Website http://tutorial-cikm.trustai.uk/

Tutorial Schedule

  • The tutorial will be delivered via pre-recorded videos and online Q&A sessions. We will release the tutorial videos on the website http://tutorial-cikm.trustai.uk/ by 10 Oct
  • On the tutorial day, 1 Nov, we will hold three online Q&A sessions via Zoom:
  • Q&A Session 1: 11:00 – 11:30 (UTC, 1 Nov)
  • Q&A Session 2: 17:30 – 18:00 (UTC, 1 Nov)
  • Q&A Session 3: 21:00 – 21:30 (UTC, 1 Nov)

Tutorial Description

This tutorial introduces the fundamentals of adversarial robustness in deep learning, presenting a well-structured review of up-to-date techniques for assessing the vulnerability of various types of deep learning models to adversarial examples. It particularly highlights state-of-the-art techniques in adversarial attacks and robustness verification of deep learning models. We will also introduce effective countermeasures for improving the robustness of deep learning models, with a particular focus on adversarial training. We aim to provide a comprehensive picture of this emerging direction and to make the community aware of the urgency and importance of designing robust deep learning models for safety-critical data analytical applications, ultimately enabling end-users to trust deep learning classifiers. We will also summarize potential research directions concerning the adversarial robustness of deep learning and its potential to enable accountable and trustworthy deep learning-based data analytical systems and applications.
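To give a flavour of the attack techniques the tutorial covers, the following is a minimal, self-contained sketch of the Fast Gradient Sign Method (FGSM), one of the simplest ways to craft an adversarial example. It is applied here to a toy logistic-regression classifier; the weights, input, and perturbation budget `eps` are illustrative values chosen for this sketch, not taken from the tutorial materials.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w):
    """Linear classifier: sign of w.x, with labels in {-1, +1}."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

def fgsm(x, y, w, eps):
    """One FGSM step against the logistic loss L(x) = -log(sigmoid(y * w.x)).

    The input gradient is dL/dx = -y * sigmoid(-y * w.x) * w; FGSM perturbs
    each coordinate of x by eps in the sign direction of this gradient.
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coeff = -y * sigmoid(-margin)  # scalar factor of the gradient
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [xi + eps * sign(coeff * wi) for xi, wi in zip(x, w)]

# Illustrative weights and input (hypothetical, not from the tutorial)
w = [1.0, -2.0]
x = [0.6, 0.1]
print(predict(x, w))                  # prints 1: correctly classified
x_adv = fgsm(x, +1, w, eps=0.3)
print(predict(x_adv, w))              # prints -1: a small perturbation flips the label
```

The same idea scales to deep networks, where the input gradient is obtained by backpropagation rather than in closed form; stronger iterative variants of this attack, and defences such as adversarial training, are among the topics reviewed in the tutorial.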

Tutorial Organisers
