
Thesis Defense by NGUYEN Hua Gia Phuc | DEEPCAUSE: VERIFYING NEURAL NETWORKS WITH ABSTRACTION REFINEMENT

DEEPCAUSE: VERIFYING NEURAL NETWORKS WITH ABSTRACTION REFINEMENT

NGUYEN Hua Gia Phuc

MPhil (IS) Candidate
School of Computing and Information Systems
Singapore Management University
 

 
Date

8 November 2022 (Tuesday)

Time

10:00am - 11:00am

Venue

Meeting room 5.1, Level 5,
School of Computing and Information Systems 1,
Singapore Management University,
80 Stamford Road, Singapore 178902

We look forward to seeing you at this research seminar.

 
About The Talk

Neural networks are becoming essential components of many safety-critical systems, such as self-driving cars and medical diagnosis. It is therefore desirable that neural networks not only achieve high accuracy (which can traditionally be validated using a test set) but also satisfy safety properties such as robustness, fairness, or freedom from backdoors. To verify neural networks against desired safety properties, many approaches have been developed based on classical abstract interpretation. However, as in program verification, these approaches suffer from false alarms, which may hinder the deployment of the networks.
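As a rough illustration of the abstract-interpretation style of verification mentioned above, the sketch below propagates an input box through one affine layer and a ReLU using interval arithmetic, the simplest abstract domain (DeepPoly uses a more precise relational domain). This is a minimal sketch for intuition only; all names and numbers are illustrative and not taken from the thesis.

# Minimal sketch: interval (box) abstract interpretation for one
# affine layer followed by ReLU. All values here are illustrative.
import numpy as np

def affine_interval(lo, hi, W, b):
    """Soundly propagate the input box [lo, hi] through x -> Wx + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b  # per-neuron worst-case lower bound
    out_hi = W_pos @ hi + W_neg @ lo + b  # per-neuron worst-case upper bound
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so the box is simply clamped at zero."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Example: a 2-neuron layer over the input box [-1, 1]^2.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.3])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = relu_interval(*affine_interval(lo, hi, W, b))
print(lo, hi)  # sound (but possibly loose) output bounds

Because such bounds are sound but coarse, a property check on them can fail even when the property actually holds on the network, which is exactly the false-alarm problem described above.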

One natural remedy to this problem, adopted from the program verification community, is counterexample-guided abstraction refinement (CEGAR). Applying CEGAR to neural network verification is, however, highly non-trivial due to complications arising from both the neural networks and the abstractions. In this thesis, we propose a method to enhance abstract interpretation in verifying neural networks through an application of CEGAR in two steps. First, we employ an optimization-based procedure to validate the abstraction at each propagation step and identify problematic abstractions via counterexample search. Then, we leverage a causality-based approach to select the most likely problematic components of the abstraction and refine them accordingly.
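To make the refinement loop concrete, the toy verifier below reuses affine_interval and relu_interval from the sketch above to check a simple output-bound property, and refines by bisecting the widest input dimension whenever the abstract bounds are too loose to decide. The helper names (output_bounds, verify_upper_bound) and the input-splitting refinement are assumptions made for illustration; the thesis instead validates the abstraction with an optimization-based counterexample search and refines the abstraction components that a causality analysis flags as problematic.

# Toy refinement loop (not the DeepCause algorithm): reuses
# affine_interval and relu_interval from the sketch above, and
# refines by splitting the input box when the bounds are too loose.
import numpy as np

def output_bounds(lo, hi, layers):
    """Interval bounds for a ReLU network given as a list of (W, b)."""
    for W, b in layers[:-1]:
        lo, hi = relu_interval(*affine_interval(lo, hi, W, b))
    W, b = layers[-1]
    return affine_interval(lo, hi, W, b)

def verify_upper_bound(lo, hi, layers, threshold, depth=12):
    """Check that every output stays below `threshold` on the box [lo, hi]."""
    out_lo, out_hi = output_bounds(lo, hi, layers)
    if np.all(out_hi < threshold):
        return True                      # property proved on this box
    if depth == 0:
        return False                     # give up: possibly a false alarm
    d = int(np.argmax(hi - lo))          # refine: split the widest dimension
    mid = 0.5 * (lo[d] + hi[d])
    lo1, hi1 = lo.copy(), hi.copy(); hi1[d] = mid
    lo2, hi2 = lo.copy(), hi.copy(); lo2[d] = mid
    return (verify_upper_bound(lo1, hi1, layers, threshold, depth - 1) and
            verify_upper_bound(lo2, hi2, layers, threshold, depth - 1))

# Example: a tiny 2-layer network on the box [-1, 1]^2.
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
          (np.array([[1.0, 1.0]]), np.array([0.0]))]
print(verify_upper_bound(np.array([-1.0, -1.0]), np.array([1.0, 1.0]),
                         layers, threshold=10.0))

The key difference in the thesis is the second step: rather than splitting inputs blindly, a causality analysis pinpoints which parts of the abstraction most likely caused the spurious counterexample, so only those are refined.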

To evaluate our approach, we have implemented a prototype named DeepCause on top of DeepPoly, taking local robustness as the target safety property to verify. The evaluation shows that our proposal outperforms DeepPoly and RefinePoly on all benchmark networks. We note that our idea is not limited to a specific abstract domain, and we believe it is a promising step towards enhancing the verification of complex neural network systems.

 
Speaker Biography

Phuc joined Singapore Management University as a postgraduate student. He has conducted research on smart contracts and neural networks; in particular, he is interested in verifying the local robustness of deep neural networks. Before joining SMU, Phuc graduated with a Bachelor's degree from Ho Chi Minh University of Technology in Vietnam. He presented a paper at the ICCCI 2018 conference and worked as an intern under Professor Sun Jun at SMU for around six months in 2019. In his leisure time, he enjoys reading books, playing games, and watching anime.