
Thesis Defense by WANG Zilin | Robust Learning with Probabilistic Relaxation Using Hypothesis-Test-Based Sampling


 
 

Robust Learning with Probabilistic Relaxation Using Hypothesis-Test-Based Sampling

WANG Zilin

MPhil (IS) Student
School of Computing and Information Systems
Singapore Management University
 


Research Area

Dissertation Committee

Research Advisor
Committee Members
 

Date

12 December 2024 (Thursday)

Time

1:00pm - 2:00pm

Venue

Meeting room 4.4, Level 4, School of Computing and Information Systems 1, Singapore Management University,
80 Stamford Road,
Singapore 178902

Please register by 11 December 2024.

We look forward to seeing you at this research seminar.

 

About The Talk

In recent years, deep learning has become a vital tool for a wide range of tasks. The performance of a neural network is usually evaluated through empirical risk minimization. However, robustness issues, which can be fatal in safety-critical applications, have drawn growing concern. Adversarial training mitigates the issue by minimizing the loss under worst-case perturbations of the data. While effective at improving a model's robustness, it is overly conservative, and the model's clean performance can suffer. Probabilistic Robust Learning (PRL) empirically balances average- and worst-case performance, but in most current work the robustness of the model is not provable. This thesis proposes a novel approach to robust learning that samples perturbations based on hypothesis testing. The approach guides training to improve robustness in a highly efficient probabilistic robustness setting while also enforcing provably certified robustness. We evaluate the new framework by generating adversarial samples from several popular datasets and comparing its performance with other state-of-the-art works. Our approach achieves comparable performance on simple classification tasks and better performance on more difficult tasks than the state of the art.
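The general idea behind hypothesis-test-based robustness certification can be illustrated with a minimal, generic sketch (this is an illustrative assumption, not the thesis's actual algorithm): sample random perturbations of an input, count misclassifications, and apply an exact one-sided binomial test to certify, with confidence 1 − α, that the misclassification probability under random perturbation is below a tolerance κ. The function and parameter names (`probably_robust`, `kappa`, `alpha`) are hypothetical.

```python
import math
import random

def binom_cdf(k, n, p):
    """P[X <= k] for X ~ Binomial(n, p), computed exactly with math.comb."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def probably_robust(classify, x, y, eps, kappa=0.1, alpha=0.01, n=1000, seed=0):
    """One-sided binomial test for probabilistic robustness.

    H0: P[classify(x + delta) != y] >= kappa, for delta ~ Uniform([-eps, eps]^d).
    Rejecting H0 at level alpha certifies, with confidence 1 - alpha, that the
    misclassification probability under random perturbation is below kappa.
    """
    rng = random.Random(seed)
    errors = 0
    for _ in range(n):
        perturbed = [xi + rng.uniform(-eps, eps) for xi in x]
        errors += classify(perturbed) != y
    # p-value: chance of observing this few errors if the true error rate were kappa
    p_value = binom_cdf(errors, n, kappa)
    return p_value < alpha, p_value
```

For a toy linear classifier such as `lambda v: int(v[0] + v[1] > 0)`, the point `[1.0, 1.0]` is certified at κ = 0.1 with ε = 0.5, since no perturbation in that range can flip the prediction, whereas `[0.0, 0.0]` is not, since roughly half of the sampled perturbations are misclassified.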

 

About The Speaker

I am WANG Zilin, a Master's student supervised by Professor SUN Jun. During my studies at SMU, I worked on AI semantic testing, became familiar with the semantics and operations of TensorFlow layers, and successfully reproduced the semantics of most TensorFlow layers in Prolog. I also researched AI robust learning and proposed a novel approach to effective and efficient learning with guaranteed robustness.