No Experts, No Problem: Avoidance Learning from Bad Demonstrations

Speaker(s): HOANG Minh Huy
PhD Student
School of Computing and Information Systems
Singapore Management University
Date: 11 November 2025, Tuesday
Time: 11:00am – 11:30am
Venue: Meeting room 4.4, Level 4, School of Computing and Information Systems 1, Singapore Management University, 80 Stamford Road, Singapore 178902

We look forward to seeing you at this research seminar. Please register by 9 November 2025.
About the Talk
This paper addresses the problem of learning avoidance behavior within the context of offline imitation learning. In contrast to conventional methodologies that prioritize the replication of expert or near-expert demonstrations, our work investigates a setting where expert (or desirable) data is absent, and the objective is to learn to avoid undesirable actions by leveraging demonstrations of such behavior (i.e., learning from negative examples). To address this challenge, we propose a novel training objective grounded in the maximum entropy principle. We further characterize the fundamental properties of this objective function, reformulating the learning process as a cooperative inverse Q-learning task. Moreover, we introduce an efficient strategy for integrating unlabeled data (i.e., data of indeterminate quality) to facilitate unbiased and practical offline training. The efficacy of our method is evaluated across standard benchmark environments, where it consistently outperforms state-of-the-art baselines.
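As a rough illustration of the idea described above, a minimal sketch of an entropy-regularized avoidance loss appears below, assuming a discrete action space and a PyTorch policy network. The function name avoidance_loss, its arguments, and the specific loss form are hypothetical, chosen only to convey the general principle; they are not taken from the paper.

    import torch
    import torch.nn.functional as F

    def avoidance_loss(policy_logits, bad_actions, alpha=0.1):
        # Illustrative only: push probability mass away from actions seen in
        # undesirable demonstrations while regularizing toward maximum entropy.
        log_probs = F.log_softmax(policy_logits, dim=-1)   # (batch, n_actions)
        entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
        bad_logp = log_probs.gather(1, bad_actions.unsqueeze(1)).squeeze(1)
        # Minimizing this lowers the likelihood of demonstrated bad actions
        # and raises policy entropy elsewhere.
        return (bad_logp - alpha * entropy).mean()

In an offline setting, such a loss would be evaluated on batches of (state, action) pairs drawn from the undesirable demonstrations, possibly combined with unlabeled data as the abstract suggests.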
This is a pre-conference talk for the Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025).

About the Speaker
Hoang Minh Huy is a PhD student in Computer Science at Singapore Management University, under the supervision of Assistant Professor Mai Anh Tien and Dr. Pavitra Krishnaswamy. He is supported by the SINGA A*STAR Merit Award. His research focuses on deep reinforcement learning and imitation learning, particularly on developing novel algorithms for offline imitation learning from mixed-quality data (including suboptimal and undesirable demonstrations) and for safe/constrained reinforcement learning.