The past decade has demonstrated the great potential of applying deep neural network (DNN) based software to safety-critical scenarios such as image classification, audio recognition, and autonomous driving. Similar to traditional software, DNNs can exhibit incorrect behaviors caused by hidden defects, leading to severe accidents and losses. Quality and security assurance is therefore essential for deep learning systems, especially those applied in safety- and mission-critical scenarios. However, the quality and security assurance of deep learning software is still at a very early stage. Due to the black-box nature of deep learning software, its behaviors are challenging to analyze and explain. In this talk, the speaker will present a full-stack analysis framework for the quality and security assurance of deep learning systems. In particular, he will present model-based analysis techniques adapted from traditional software to deep learning software. Based on the model, this work takes a first step toward the interpretation, automated testing, fault localization, automated repair, adversarial attack detection, and robustness analysis of deep neural networks.
About the Speaker
Xiaofei Xie is a Wallenberg-NTU Presidential Postdoctoral Fellow at Nanyang Technological University, Singapore. He received his Ph.D. from Tianjin University and won the CCF Outstanding Doctoral Dissertation Award (2019) in China. His research mainly focuses on traditional software analysis, including loop analysis, software testing, and vulnerability detection, as well as the quality and security assurance of deep learning software. He has published in top-tier conferences and journals in the software engineering, security, and AI domains, such as ISSTA, FSE, TSE, ASE, ICSE, AAAI, IJCAI, ICLR, CCS, TDSC, and TIFS. In particular, he won two ACM SIGSOFT Distinguished Paper Awards (FSE'16 and ASE'19).
He is a candidate for a tenure-track faculty position in the Information Systems & Technology, Software Engineering & Systems cluster.