Mitigating the Out-Of-Distribution Gap in Natural Language Processing
YU Sicheng
PhD Candidate
School of Computing and Information Systems
Singapore Management University
Research Area
Dissertation Committee
Research Advisor
Co-Research Advisor
Dissertation Committee
Date
26 November 2021 (Friday)
Time
3:30pm - 4:30pm
Venue
This is a virtual seminar. Please register by 24 November; the Zoom link will be sent on the following day to those who have registered.
We look forward to seeing you at this research seminar.

About The Talk
Most traditional machine learning and deep learning methods are based on the premise that training data and test data are independently and identically distributed (IID). However, this is an idealized assumption. In real-world applications, the test data often follow a different distribution from the training data, a situation we refer to as the out-of-distribution (OOD) setting. As a result, models trained with traditional methods often suffer an undesirable performance drop on the OOD test set, so techniques that mitigate this problem are necessary for real applications. In this proposal, I present my work so far in this direction across three different natural language processing scenarios.
Speaker Biography
Sicheng Yu received the B.E. degree in electronic and information engineering from Dalian University of Technology, China, in 2017, and the M.S. degree in signal processing from Nanyang Technological University, Singapore, in 2018. He is currently pursuing the Ph.D. degree in computer science at Singapore Management University. His research focuses on natural language processing.