DEEPCAUSE: VERIFYING NEURAL NETWORKS WITH ABSTRACTION REFINEMENT

NGUYEN Hua Gia Phuc
Master Student
School of Computing and Information Systems
Singapore Management University
Research Area
Dissertation Committee
Research Advisor
Committee Members
Date
16 September 2022 (Friday)
Time
3:30pm - 4:30pm
Venue
Meeting room 5.1, Level 5,
School of Computing and Information Systems 1,
Singapore Management University,
80 Stamford Road, Singapore 178902
We look forward to seeing you at this research seminar.

About The Talk
Neural networks are becoming essential components of many safety-critical systems, such as self-driving cars and medical diagnosis. It is therefore desirable that neural networks not only achieve high accuracy (which can traditionally be validated using a test set) but also satisfy safety properties such as robustness, fairness, or freedom from backdoors. Many approaches for verifying neural networks against such properties have been developed based on classical abstract interpretation. However, as in program verification, these approaches suffer from false alarms, which may hinder the deployment of the networks.
One natural remedy, adopted from the program verification community, is counterexample-guided abstraction refinement (CEGAR). Applying CEGAR to neural network verification is, however, highly non-trivial due to complications arising from both the neural networks and the abstractions. In this thesis, we propose a method that enhances abstract interpretation for neural network verification by applying CEGAR in two steps. First, we employ an optimization-based procedure to validate the abstraction at each propagation step and identify problematic abstractions via counterexample search. Then, we leverage a causality-based approach to select the most likely problematic components of the abstraction and refine them accordingly.
To evaluate our approach, we implemented a prototype named DeepCause on top of DeepPoly, taking local robustness as the target safety property. The evaluation shows that our proposal outperforms DeepPoly on all benchmark networks.
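For readers unfamiliar with CEGAR, the following is a minimal toy sketch of the idea, not the DeepCause or DeepPoly implementation. An interval abstraction of a tiny piecewise-linear function raises a spurious alarm, a sampled counterexample search (standing in for the optimization-based search) finds no real violation, and the abstraction is refined by splitting the input interval. The function names and the example network are illustrative assumptions.

```python
def relu(v):
    return max(v, 0.0)

def forward_concrete(x):
    # A tiny piecewise-linear "network"; its true maximum output is 1.
    return relu(x) - relu(x - 1.0)

def forward_abstract(lo, hi):
    # Interval propagation (the abstraction): sound but imprecise,
    # because the dependency between the two relu terms is lost.
    a_lo, a_hi = relu(lo), relu(hi)
    b_lo, b_hi = relu(lo - 1.0), relu(hi - 1.0)
    return a_lo - b_hi, a_hi - b_lo

def verify(lo, hi, bound, depth=0, max_depth=20):
    """True iff we can prove forward_concrete(x) <= bound on [lo, hi]."""
    _, out_hi = forward_abstract(lo, hi)
    if out_hi <= bound:
        return True                      # the abstraction proves the bound
    # Step 1: the abstract bound is violated; look for a real
    # counterexample at a few concrete points.
    if any(forward_concrete(x) > bound for x in (lo, (lo + hi) / 2, hi)):
        return False                     # genuine violation found
    if depth >= max_depth:
        return False                     # give up; alarm may be spurious
    # Step 2: the alarm looks spurious; refine the problematic
    # abstraction by splitting the input interval and recursing.
    mid = (lo + hi) / 2
    return (verify(lo, mid, bound, depth + 1)
            and verify(mid, hi, bound, depth + 1))
```

On the input range [-2, 3], plain interval propagation yields the loose output bound (-2.0, 3.0), so it cannot prove the (true) property "output <= 1.1" in one shot; with refinement, `verify(-2.0, 3.0, 1.1)` succeeds, while `verify(-2.0, 3.0, 0.5)` correctly fails because x = 3 is a real counterexample. Real verifiers such as DeepPoly use far richer abstract domains and refinement strategies; the sketch only illustrates the alternation of counterexample search and refinement described above.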
Speaker Biography
My name is Nguyen Hua Gia Phuc, and I am a Master's student in the School of Computing and Information Systems, Singapore Management University, supervised by Professor SUN Jun. My research focuses on developing tools to verify the local robustness of deep neural networks.