Learning Natural Language Inference with LSTM

Speaker(s):

WANG Shuohang

PhD Candidate

School of Information Systems

Singapore Management University


Date: June 6, 2016, Monday

Time: 1:30pm - 2:00pm

Venue: Meeting Room 4.4, Level 4
School of Information Systems
Singapore Management University
80 Stamford Road
Singapore 178902

We look forward to seeing you at this research seminar.

About the Talk

Natural language inference (NLI) is a fundamentally important task in natural language processing with many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods, such as deep neural networks, for NLI. In this work, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the state of the art.
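The word-by-word matching idea described above can be sketched roughly as follows. This is a simplified NumPy illustration, not the authors' implementation: dot-product attention stands in for the paper's learned scoring function, and a plain linear projection stands in for a full LSTM cell. All names (`match_step`, `W`, etc.) are hypothetical.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def match_step(H_p, h_k, W):
    """One word-by-word matching step (simplified sketch):
    attend over premise states H_p given hypothesis state h_k,
    then project the combined vector [a_k; h_k], which in the
    full model would feed a match-LSTM cell."""
    scores = H_p @ h_k                # dot-product attention scores
    alpha = softmax(scores)           # attention over premise words
    a_k = alpha @ H_p                 # attended premise summary
    m_k = np.concatenate([a_k, h_k])  # match-LSTM input [a_k; h_k]
    return W @ m_k                    # toy linear stand-in for the LSTM

rng = np.random.default_rng(0)
d = 4
H_p = rng.standard_normal((5, d))    # premise: 5 words, d-dim states
h_k = rng.standard_normal(d)         # one hypothesis word state
W = rng.standard_normal((d, 2 * d))  # projection weights (hypothetical)
out = match_step(H_p, h_k, W)        # d-dim matching state for word k
```

Iterating this step over each hypothesis word, with a real LSTM cell carrying state between steps, is what lets the model accumulate (and remember) important word-level mismatches.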

This is a pre-conference talk for the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2016).

About the Speaker

WANG Shuohang is a PhD student in the School of Information Systems, Singapore Management University. He is advised by Assistant Professor JIANG Jing and Associate Professor ZHENG Baihua. His main research interest is natural language processing. He is currently focusing on natural language inference and machine comprehension with recurrent neural networks.