
SCIS Research Cluster Seminars (February 2025)

 

Date:

28 February 2025, Friday

Time:

3:45pm to 4:45pm

Venue:

School of Computing & Information Systems 1 (SCIS1), Level 2, Seminar Room 2-4, Singapore Management University, 80 Stamford Road, Singapore 178902

Limited seating. Registration is required for attendance and will close on 16 February 2025 or once maximum capacity is reached. Light refreshments will be provided after the talks.

Research Cluster: Human-Machine Collaborative Systems

 

Topic:

Event-Based Eye Tracking: Challenges, Innovations, and Applications in Real-Time Sensing

Speaker:

Thivya KANDAPPU, Assistant Professor of Computer Science

Abstract:

Neuromorphic cameras have garnered significant interest in the eye-tracking research community, owing to their sub-microsecond latency in capturing intensity changes resulting from eye movements. Unlike traditional RGB cameras that capture frames at a fixed rate, event cameras produce events asynchronously, on a per-pixel basis, whenever the incident intensity changes, with microsecond latency. These unique characteristics make event cameras well-suited for capturing fast and dynamic pupil kinematics, such as fine-grained and/or micro eye movements, with high temporal resolution and low latency. This talk will first highlight the core shortcomings of event-based eye tracking and then present recent advancements in two critical areas: (i) novel dynamic event representations and (ii) methods for processing spatiotemporal data with high accuracy. Finally, it will explore applications of on-device pupillary kinematics tracking, such as enhancing user authentication, detecting the early onset of Parkinson’s disease, and measuring cognitive load.

 

Topic:

Mixed-Modality Interfaces for Effective Human-AI Interaction

Speaker:

LI Jiannan, Assistant Professor of Computer Science

Abstract:

With the development of large language models, language has become the dominant interface for humans to communicate with AI. In this talk, I argue that language has its limits as a communication medium because it is often not grounded in the contexts of the tasks. Through two projects in two distinct domains, I show that the visual modality can provide the missing contexts, and that the complementary use of language and visual modalities enables more effective interactions with AI. I first present a robot programming system, ImageInThat, that allows users to program a robot by directly manipulating images of the robot’s task environment. Experiments showed that users were able to program a robot to complete household tasks more efficiently and accurately with ImageInThat than with language instructions alone. I then introduce two experiments demonstrating that combining in-context visual and textual guidance can improve learning outcomes for software skill acquisition in intelligent tutoring systems. These examples suggest that mixed-modality interaction not only empowers humans to exert more precise control over AI but also enables AI to communicate more effectively and contextually with humans, fostering more productive human-AI relationships.

 

Topic:

Proactive Conversational Agents -- A Glimpse into the World of Emotional Support

Speaker:

LIAO Lizi, Assistant Professor of Computer Science, Lee Kong Chian Fellow

Abstract:

Emotional support chatbots have made significant strides in assisting users with their mental and emotional well-being. However, traditional reactive systems often fall short in providing sustained engagement and personalized support. Proactive conversational agents take this a step further by not only responding to users but also anticipating their needs and strategically planning interactions to foster deeper engagement. This talk will explore the evolution of emotional support chatbots, focusing on enhancing the anticipation and planning abilities of proactive agents, as well as the role of multimodal interactions in enriching user experiences. The talk will briefly highlight key research efforts in this domain and conclude with insights into future challenges and opportunities.

 

ABOUT THE SPEAKER(S)

 

Thivya KANDAPPU is an Assistant Professor at the School of Computing and Information Systems at Singapore Management University. Her research mainly focuses on mobile and ubiquitous computing and sensing, particularly in designing wearable devices for personalized monitoring of human cognitive behaviors and behavioral analytics through sensing low-level physical and physiological proxies. Her work has been widely recognized in premier conferences such as UbiComp, MobiSys, NeurIPS, and CSCW.

 

LI Jiannan is an Assistant Professor of Computer Science at the School of Computing and Information Systems, Singapore Management University. His research on human-agent interaction and novel spatial interfaces has led to more than ten papers and a best paper honorable mention award at top-tier HCI conferences (CHI, UIST, CSCW, HRI). He has also served on several program and organizing committees of these conferences, including ACM CHI 24/25 and IEEE VR 20.
 

Dr. Lizi LIAO is an Assistant Professor and Lee Kong Chian Fellow at the School of Computing and Information Systems, Singapore Management University. She earned her Ph.D. from the National University of Singapore. Her research focuses on advancing human conversational understanding and response generation through machine learning models. Her work spans areas such as task-oriented dialogues, proactive conversational agents, and multimodal conversational systems, with applications in search and recommendation. Dr. Liao is actively engaged in the academic community, serving as an Area Chair for major conferences such as ACL and EMNLP, and taking on important roles at SIGIR and ACM Multimedia. She also serves as an Associate Editor for ACM TOIS and ACM TOMM and was honored with the Google South Asia & Southeast Asia Research Award 2023.

 

SEMINAR MODERATOR

 

NGO Chong Wah
Lee Kong Chian Professor of Computer Science,
Director, Human-Machine Collaborative Systems Cluster