Faculty Job Talk by ZHANG Jingfeng | Adversarial Robustness: Adversarial Training and Its Applications

Adversarial Robustness: Adversarial Training and Its Applications

Speaker(s):

ZHANG Jingfeng
Postdoctoral Researcher
RIKEN-AIP, Japan

Date: 13 May 2022, Friday

Time: 2:30pm - 3:45pm

Venue: This is a virtual seminar. Please register by 9 May 2022; the meeting link will be sent the following day to those who have registered.

We look forward to seeing you at this research seminar.

About the Talk

When we deploy models trained by standard training (ST), they work well on natural test data. However, those models cannot handle adversarial test data (also known as adversarial examples) that are algorithmically generated by adversarial attacks. An adversarial attack is an algorithm that applies specially designed tiny perturbations to natural data to transform them into adversarial data, misleading a trained model into giving wrong predictions. Adversarial robustness aims to improve the robust accuracy of trained models against adversarial attacks, which can be achieved by adversarial training (AT). What is AT? Given the knowledge that the test data may be adversarial, AT carefully simulates adversarial attacks during training. The model thus sees many adversarial examples during training and, hopefully, generalizes to adversarial test data in the future. AT has two purposes: (1) correctly classify the data (same as ST) and (2) make the decision boundary thick so that no data lie near the decision boundary.
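The idea of simulating attacks during training can be sketched in a few lines. The example below is an illustration only, not the speaker's method: it uses a one-step FGSM-style attack (perturbing the input in the sign of the loss gradient) on a toy logistic-regression model; the data, step sizes, and perturbation budget `eps` are all assumptions made for the sketch.

```python
# Minimal adversarial-training sketch on a linear classifier.
# Illustrative only; model, data, and hyperparameters are assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step attack: nudge x in the direction (sign of the input
    gradient) that increases the logistic loss -- a 'tiny perturbation'."""
    grad_x = (sigmoid(x @ w) - y) * w       # dL/dx for logistic loss
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = np.zeros(2)
# Toy data: the label is the sign of the first coordinate.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

for _ in range(500):                         # adversarial training loop
    i = rng.integers(len(X))
    x_adv = fgsm(X[i], y[i], w, eps=0.1)     # simulate the attack...
    grad_w = (sigmoid(x_adv @ w) - y[i]) * x_adv
    w -= 0.1 * grad_w                        # ...and train on the result

acc = np.mean((sigmoid(X @ w) > 0.5) == (y == 1.0))
```

Standard training would update `w` on the clean point `X[i]` instead; the only change AT makes is the extra attack step before each update, which pushes the decision boundary away from the training points.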

In this talk, the speaker will introduce two effective AT strategies in detail, i.e., friendly adversarial training and geometry-aware instance-dependent adversarial training. He will also introduce AT’s modifications, AT’s intriguing property (i.e., its smoothing effect), and AT’s applications for enhancing the reliability of AI-powered tools.

About the Speaker

Jingfeng Zhang is a researcher in the Imperfect Information Learning Team at RIKEN-AIP, led by Prof. Masashi Sugiyama. Prior to RIKEN-AIP, Jingfeng obtained his Ph.D. degree (in 2020) under Prof. Mohan Kankanhalli at the School of Computing, National University of Singapore. Jingfeng is the recipient of JST Strategic Basic Research Programs ACT-X funding (2021 - 2023), JSPS Grants-in-Aid for Scientific Research (KAKENHI) Early-Career Scientists funding (2022 - 2023), and the RIKEN Ohbu Award 2022. Jingfeng serves as a long-standing reviewer for prestigious ML conferences and journals such as ICLR, ICML, NeurIPS, CVPR, AAAI, IJCAI, and ACM MM. Jingfeng's long-term research strives to develop safe, trustworthy, reliable, and extensible machine learning (ML) technology.

He is a tenure-track faculty candidate in the areas of Artificial Intelligence & Data Science, and Machine Learning & Intelligence.