
Pre-Conference Talk by ZHOU Xin


DATE:

11 April 2024, Thursday

TIME:

6:00pm - 6:30pm

VENUE:

Meeting room 5.1, Level 5
School of Computing and Information Systems 1,
Singapore Management University,
80 Stamford Road,
Singapore 178902

Please register by 10 April 2024.


This session features two pre-conference talks for the 46th International Conference on Software Engineering (ICSE 2024).

 

About the Talks

Talk #1: Large Language Model for Vulnerability Detection: Emerging Results and Future Directions

Recently, rapid advancements in Large Language Models (LLMs) have garnered attention for their remarkable few-shot learning capabilities, with notable attention directed toward ChatGPT. Despite the widespread adoption of ChatGPT, its effectiveness and potential in detecting software vulnerabilities remain largely unexplored. This work aims to bridge this gap by investigating the efficacy of ChatGPT (built on GPT-3.5 and GPT-4) with diverse prompts. Experimental results demonstrate that, with the incorporation of our designed prompts, ChatGPT (GPT-3.5) exhibits a significant improvement of 25.4% in terms of accuracy. With these prompts, ChatGPT (GPT-3.5) achieves performance competitive with the state-of-the-art vulnerability detection approach, and ChatGPT (GPT-4) outperforms the state of the art by 34.8% in terms of accuracy.
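To illustrate the kind of prompting the talk describes, here is a minimal sketch of wrapping a code snippet in a vulnerability-detection prompt. The template wording and function names are hypothetical illustrations, not the prompt design actually evaluated in the talk.

```python
# Hypothetical prompt template for LLM-based vulnerability detection.
# The wording below is illustrative only, not the authors' designed prompt.
PROMPT_TEMPLATE = (
    "You are a security expert. Review the following C function and answer "
    "YES or NO: does it contain a security vulnerability?\n\n{code}"
)

def build_vulnerability_prompt(code: str) -> str:
    """Fill the template with the code snippet under analysis."""
    return PROMPT_TEMPLATE.format(code=code)

# Example: a classic unbounded-copy pattern.
sample = "void copy(char *dst, char *src) { strcpy(dst, src); }"
prompt = build_vulnerability_prompt(sample)
```

The resulting string would then be sent to the model (e.g. GPT-3.5 or GPT-4) and the YES/NO answer parsed from the response.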

Talk #2: Out of Sight, Out of Mind: Better Automatic Vulnerability Repair by Broadening Input Ranges and Sources

Advances in deep learning (DL) have paved the way for automatic software vulnerability repair approaches, which effectively learn the mapping from vulnerable code to fixed code. Nevertheless, existing DL-based vulnerability repair methods face notable limitations. To address this, we propose VulMaster, a Transformer-based neural network model that excels at generating vulnerability repairs by comprehensively understanding the entire vulnerable code, irrespective of its length. The model also integrates diverse information, encompassing vulnerable code structures and expert knowledge from the CWE system. Experimental results demonstrate that VulMaster achieves substantial improvements over the learning-based state-of-the-art vulnerability repair approach.
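The idea of handling arbitrarily long vulnerable code and pairing it with CWE expert knowledge can be sketched roughly as follows. The function name, the fixed character-based chunk size, and the input layout are assumptions for illustration; the talk's actual model processes inputs differently.

```python
def build_repair_inputs(vuln_code: str, cwe_id: str, cwe_fix_hint: str,
                        chunk_size: int = 512) -> list[str]:
    """Split long vulnerable code into chunks so no part is truncated,
    and pair each chunk with CWE expert knowledge.

    This is a simplified illustration of combining long inputs with
    external knowledge, not the VulMaster architecture itself.
    """
    chunks = [vuln_code[i:i + chunk_size]
              for i in range(0, len(vuln_code), chunk_size)]
    return [
        f"CWE: {cwe_id}\nFix hint: {cwe_fix_hint}\nChunk {idx}:\n{chunk}"
        for idx, chunk in enumerate(chunks)
    ]

# Example: a 1000-character function body yields two annotated chunks.
inputs = build_repair_inputs("x" * 1000, "CWE-787",
                             "add a bounds check before writing")
```

Each annotated chunk could then be fed to a sequence-to-sequence model whose outputs are combined into a single repaired function.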

 

About the Speaker

 

ZHOU Xin is a Ph.D. candidate in SCIS, under the supervision of Prof. David LO. Xin's research focuses on pre-trained code representation and automation for software maintenance and development.