
SCIS Research Cluster Seminars (February 2024)


 

Date:

23 February 2024, Friday

Time:

3:30pm to 5:15pm

Venue:

Seminar Room B1-2, Basement 1 
School of Economics/School of Computing & Information Systems 2, Singapore Management University, 90 Stamford Road, Singapore 178903

Limited seating. Registration is required for attendance and will close on 15 February 2024 or once maximum capacity is reached.

Research Cluster: Artificial Intelligence & Data Science

 

Topic:

Stable Diffusion and its Adaptation to Non-Visible Light Imagery 
 

Speaker:

SUN Qianru, Assistant Professor of Computer Science 
 

Abstract:

Stable Diffusion (SD) has achieved nearly perfect synthesis of regular images. In our recent work, we explore its "hidden" ability in non-visible light domains, taking Synthetic Aperture Radar (SAR) data as a case study. Due to the inherent challenges in capturing satellite data, acquiring ample SAR training samples is infeasible. For instance, for a particular category of ship in the open sea, we can collect only a few SAR images, which are too limited to derive effective ship classification or detection models. If large-scale models pre-trained on regular images (such as SD) can be adapted to generate novel SAR images, the problem is solved. In a preliminary study, we found that fine-tuning or LoRA-based adaptation of an SD model with few-shot SAR images does not work at all: the models cannot capture, from those few-shot samples, the two primary differences between SAR and regular images, namely structure and modality. In this talk, I will introduce how we address this issue and adapt the models to synthesize high-utility SAR images. I will also show the results of using these images for data augmentation in SAR recognition tasks.
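The LoRA-based adaptation mentioned in the abstract freezes the pre-trained weights and learns only a low-rank update to each adapted layer. A minimal NumPy sketch of the idea (the layer sizes and rank below are illustrative choices, not details from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 512, 512, 4              # layer dimensions and LoRA rank (illustrative)
W = rng.standard_normal((d, k))    # frozen pre-trained weight, never updated

# LoRA trains only B (d x r) and A (r x k); effective weight is W + B @ A.
B = np.zeros((d, r))               # zero-initialised so training starts exactly at W
A = rng.standard_normal((r, k)) * 0.01

def forward(x):
    """Adapted layer output x @ (W + B A)^T, computed without materialising W + B A."""
    return x @ W.T + (x @ A.T) @ B.T

full_params = d * k                # parameters updated by full fine-tuning
lora_params = d * r + r * k        # parameters updated by LoRA
print(f"trainable: {lora_params} (LoRA) vs {full_params} (full fine-tuning)")
```

With rank 4, the trainable parameter count drops from 262,144 to 4,096 for this layer, which is what makes few-shot adaptation attractive in the first place; the abstract's observation is that such a small update can nonetheless fail to bridge large structural and modality gaps.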

 

Research Cluster: Human-Machine Collaborative Systems

 

Topic:

Understanding Videos from Egocentric Perspective 
 

Speaker:

Bin ZHU, Assistant Professor of Computer Science 
 

Abstract:

Egocentric videos, captured through wearable devices such as GoPro cameras and Google Glass, offer a unique first-person viewpoint that mirrors the wearer's own experiences and interactions. This perspective not only provides rich visual data but also benefits the immersive quality and user-centric design of virtual and augmented reality (VR and AR) technologies. In this talk, I will introduce my recent efforts in egocentric video understanding. I will present how to recognize human actions within these videos, even in the absence of human annotations in target domains. I will also present work on object segmentation and hand-object interaction in egocentric videos.

 

Research Cluster: Information Systems & Technology

 

Topic:

Runtime Enforcement for Autonomous Vehicles 
 

Speaker:

SUN Yang, PhD Candidate  
 

Abstract:

Autonomous driving systems (ADSs) integrate sensing, perception, drive control, and several other critical tasks in autonomous vehicles, motivating research into techniques for assessing their safety. While there are several approaches for testing and analysing ADSs in high-fidelity simulators, they may still encounter critical scenarios beyond those covered once deployed on real roads. An additional level of confidence can be established by monitoring and enforcing critical properties while the ADS is running.

Existing work, however, is only able to monitor simple safety properties (e.g., avoidance of collisions) and is limited to blunt enforcement mechanisms such as hitting the emergency brakes. 

In this work, we propose REDriver, a general and modular approach to runtime enforcement, in which users can specify a broad range of properties (e.g., national traffic laws) in a specification language based on signal temporal logic (STL). REDriver monitors the planned trajectory of the ADS based on a quantitative semantics of STL, and uses a gradient-driven algorithm to repair the trajectory when a violation of the specification is likely.
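As a rough illustration of the quantitative STL semantics described above (the property, trace, and repair step below are a toy example of my own, not REDriver's actual implementation): the robustness of "always, clearance > d_min" over a trajectory is its worst-case margin, and a gradient-driven step can nudge a violating trajectory back toward satisfaction.

```python
import numpy as np

D_MIN = 2.0  # required clearance in metres (illustrative threshold)

def robustness(clearance):
    """Quantitative semantics of G(clearance > D_MIN): positive iff the
    property holds over the whole trace; magnitude is the worst-case margin."""
    return float(np.min(clearance - D_MIN))

def repair(clearance, step=0.5, max_iters=100):
    """Toy gradient-driven repair: push the trace point attaining the
    minimum (where the gradient of robustness is 1) until robustness > 0."""
    clearance = clearance.astype(float).copy()
    for _ in range(max_iters):
        if robustness(clearance) > 0:
            break
        i = int(np.argmin(clearance))  # the violating time step
        clearance[i] += step
    return clearance

trace = np.array([3.0, 2.5, 1.2, 2.8])  # planned clearances; 1.2 m violates
print(robustness(trace))                # -0.8: property violated on this trace
print(robustness(repair(trace)))        # positive after repair
```

A real enforcement setting would differentiate robustness through the vehicle dynamics and handle richer STL operators (nesting, time bounds), but the core loop (monitor robustness, take a gradient step when a violation is likely) has this shape.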

 
 

ABOUT THE SPEAKER(S)

  

Qianru is an Assistant Professor at the SCIS of SMU and was awarded the Lee Kong Chian Fellowship from 2021 to 2023. She currently leads an eight-member research team working on computer vision and machine learning. She has published over 50 papers in prestigious conferences and journals such as CVPR, ICCV, ECCV, NeurIPS (NIPS before 2018), ICLR, and T-PAMI. In terms of service, she works actively as an Area Chair for CVPR, ICCV, and ECCV, and as an Associate Editor for Pattern Recognition and IEEE Transactions on Multimedia.
 

  

Bin ZHU is currently an Assistant Professor of Computer Science at the School of Computing and Information Systems, Singapore Management University (SMU). Before joining SMU, he was a postdoctoral researcher at the Department of Computer Science, University of Bristol, UK. Prior to that, he obtained his PhD degree in Computer Science from City University of Hong Kong. His research interest lies in Human-Centered Multimedia Analysis, including cross-modal search and creation, egocentric video understanding, multimodal large language models, and AI for Healthcare. He serves as one of the organizing committee members for the EPIC-KITCHENS-100 Challenges.
 

  

SUN Yang is a PhD candidate at SMU SCIS, supervised by Prof. SUN Jun. His research focuses on evaluating and improving autonomous driving systems.

 

  

SEMINAR MODERATOR

  

Jing JIANG  
Professor of Computer Science 
Director, Artificial Intelligence & Data Science Cluster