WU Min, Principal Scientist, Institute for Infocomm Research, A*STAR
Date
14 November 2024 (Thursday)
Time
2:00pm – 3:00pm
Venue
Meeting room 4.4, Level 4, School of Computing and Information Systems 1, Singapore Management University, 80 Stamford Road, Singapore 178902
Please register by 13 November 2024.
We look forward to seeing you at this research seminar.
ABOUT THE TALK
Recommendation systems have been widely deployed across scenarios and applications such as e-commerce, social media, and streaming services, and they have significantly influenced how we interact with items on a wide range of platforms. They help users discover preferred items and provide efficient, enjoyable experiences; they also help item providers and platforms quickly find potential customers, increasing total revenue and user engagement. The majority of existing recommendation systems focus merely on matching users with items, aiming for higher recommendation accuracy. Collaborative filtering is regarded as one of the most successful paradigms, as it can accurately model user-item interaction patterns. However, traditional recommendation systems rarely consider all stakeholders involved in the broader context of trustworthy human-machine interaction, which encompasses qualities such as adaptability, fairness, explainability, and robustness. These qualities do not directly contribute to accuracy, but they benefit the sustainable, long-term development of recommendation systems. As a result, there is a growing demand for trustworthy recommendation systems that not only provide accurate recommendations but also adhere to key principles of trustworthiness. In this dissertation, we focus on several important principles of a trustworthy recommendation system: adaptability, fairness, explainability, and robustness. These principles play crucial roles in trustworthiness; they are multi-faceted and deeply interconnected, calling for a wide range of objectives and methodologies.
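For readers unfamiliar with the collaborative filtering paradigm mentioned above, a minimal matrix-factorization sketch illustrates the idea: user and item latent factors are learned from observed interactions so that missing user-item scores can be predicted. The function name, hyperparameters, and toy data below are illustrative assumptions, not part of the dissertation.

```python
import numpy as np

# Minimal matrix-factorization collaborative filtering sketch.
# All names, hyperparameters, and data here are hypothetical.
def factorize(R, k=2, lr=0.01, reg=0.02, epochs=1000, seed=0):
    """Learn user/item latent factors from observed ratings in R.

    R: 2-D array with np.nan marking unobserved user-item entries.
    Returns (P, Q) such that P @ Q.T approximates the observed ratings.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))  # user factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item factors
    obs = np.argwhere(~np.isnan(R))               # observed (u, i) pairs
    for _ in range(epochs):
        for u, i in obs:
            err = R[u, i] - P[u] @ Q[i]           # error on one rating
            # SGD step with L2 regularization on both factor vectors.
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Tiny toy matrix: rows = users, cols = items, nan = unseen item.
R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0],
              [np.nan, 2.0, 5.0]])
P, Q = factorize(R)
pred = P @ Q.T  # predicted scores, including the missing entries
```

The unseen (nan) entries of `pred` serve as recommendation scores; this accuracy-only objective is exactly the baseline the dissertation argues should be extended with trustworthiness criteria.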
Specifically, we delve into these topics from the following four distinct angles: (1) adaptability of learning fine-grained preferences, (2) fairness learning for popularity bias, (3) propensity estimation for causal effect modeling, and (4) robustness in large language models for recommendations.
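As context for angle (3), a common use of propensity estimation in recommendation is inverse propensity scoring (IPS), which reweights observed feedback by the probability of exposure so that frequently shown (popular) items do not dominate. The sketch below is a generic IPS estimator with hypothetical data, not the specific method of the dissertation.

```python
import numpy as np

# Hedged sketch of an inverse-propensity-scored (IPS) estimator.
# Names and data are illustrative assumptions.
def ips_estimate(rewards, observed, propensities):
    """Estimate mean reward as if every pair had been exposed.

    rewards:      reward recorded for each user-item pair
    observed:     0/1 indicator of whether the pair was exposed
    propensities: probability each pair was exposed (must be > 0)
    """
    rewards = np.asarray(rewards, dtype=float)
    observed = np.asarray(observed, dtype=float)
    propensities = np.asarray(propensities, dtype=float)
    # Each observed outcome is upweighted by 1/propensity, so rarely
    # exposed items count more, offsetting popularity-driven exposure.
    return np.mean(observed * rewards / propensities)

# Popular items exposed 80% of the time, niche items only 20%.
rewards = [1.0, 1.0, 0.0, 1.0]
observed = [1, 1, 0, 1]
props = [0.8, 0.8, 0.2, 0.2]
est = ips_estimate(rewards, observed, props)
```

Estimating the propensities themselves from logged interactions is the hard part, which is where the causal effect modeling of angle (3) comes in.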
ABOUT THE SPEAKER
LIU Zhongzhou is a Ph.D. candidate in Computer Science at the SMU School of Computing and Information Systems, supervised by Assistant Professor FANG Yuan. His research investigates aspects of trustworthiness that go beyond traditional user-item collaborative filtering, including adaptability, fairness, causality-based recommendation, robust LLMs for recommendation, and more.