Feature-Based Transfer Learning in Natural Language Processing

YU Jianfei, PhD Candidate
School of Information Systems
Singapore Management University

Date: November 2, 2018 (Friday)
Time: 9.30am - 10.30am
Venue: Meeting Room 4.4, Level 4, School of Information Systems, Singapore Management University, 80 Stamford Road, Singapore 178902

We look forward to seeing you at this research seminar.
About The Talk

In the past few decades, supervised machine learning has been one of the most important methodologies in the Natural Language Processing (NLP) community. Although various supervised learning methods have been proposed and achieve state-of-the-art performance across most NLP tasks, their bottleneck lies in a heavy reliance on large amounts of manually annotated data, which is not always available in the desired target domain or task. To alleviate this data sparsity issue, an attractive solution is to find sufficient labeled data in a related source domain or task. However, for most NLP applications, directly training a supervised model only on labeled source data usually yields poor performance in the target domain or task, owing to the discrepancy between the distributions of the two domains or tasks. It is therefore necessary to develop effective transfer learning techniques that leverage rich annotations in the source domain or task to improve model performance in the target one.

My dissertation proposes transfer learning approaches that leverage annotations from resource-rich domains or tasks to improve model performance in a target domain or task, under two settings: unsupervised transfer learning and supervised transfer learning. In the unsupervised setting, we first propose a simple yet effective domain adaptation method that derives shared representations from instance similarity features. We then target a specific NLP task, sentiment classification, and propose a neural domain adaptation framework that jointly learns the actual sentiment classification task and several manually designed, domain-independent auxiliary tasks to produce representations shared across domains.
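The instance-similarity idea can be illustrated with a minimal sketch: each instance, from either domain, is re-described by its cosine similarities to a common set of reference ("landmark") instances, so both domains end up in the same feature space. All names here are illustrative assumptions, not the dissertation's actual method or landmark-selection strategy:

```python
import numpy as np

def similarity_features(X, landmarks):
    """Map each instance to its cosine similarities with landmark instances.

    The similarity vector acts as a domain-shared representation: source and
    target instances are both described by how close they are to the same
    landmarks, rather than by their raw, domain-specific features.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)          # row-normalize instances
    Ln = landmarks / np.linalg.norm(landmarks, axis=1, keepdims=True)
    return Xn @ Ln.T                                            # cosine similarity matrix

# Toy example: 4 instances with 5 raw features, 3 landmark instances.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
L = rng.normal(size=(3, 5))
F = similarity_features(X, L)
print(F.shape)  # (4, 3): one similarity score per landmark
```

A downstream classifier trained on such similarity features in the source domain can then be applied to target-domain instances mapped through the same landmarks.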
In the supervised transfer learning setting, we first propose a neural domain adaptation approach for retrieval-based question answering systems that simultaneously learns shared feature representations and models inter-domain and intra-domain relationships in a unified model. We further improve multi-label emotion classification with the help of sentiment classification by proposing a dual attention transfer network, in which a shared feature space captures general sentiment words while a task-specific space captures emotion-specific words.

Speaker Biography

YU Jianfei is a PhD candidate in the School of Information Systems, Singapore Management University, under the supervision of Associate Professor Jing Jiang. He received his Bachelor's and Master's degrees from Nanjing University of Science and Technology, China, in 2012 and 2015 respectively. He works in the area of text mining, with a focus on applying deep learning and transfer learning techniques to NLP tasks such as Sentiment Analysis, Question Answering, and Relation Extraction.