
Pre-Conference Talk by LAN Yunshi | Embedding WordNet Knowledge for Textual Entailment


 


 

Speaker(s):

LAN Yunshi

PhD Candidate

School of Information Systems

Singapore Management University

 

 

Date: August 15, 2018, Wednesday

Time: 10:00am - 10:30am

Venue: Meeting Room 4.4, Level 4
School of Information Systems
Singapore Management University
80 Stamford Road
Singapore 178902
 

We look forward to seeing you at this research seminar.


 

 

About the Talk
In this paper, we study how we can improve a deep learning approach to textual entailment by incorporating lexical entailment relations from WordNet. Our idea is to embed the lexical entailment knowledge contained in WordNet in specially-learned word vectors, which we call “entailment vectors.” We present a standard neural network model and a novel set-theoretic model to learn these entailment vectors from word pairs with known lexical entailment relations derived from WordNet. We further incorporate these entailment vectors into a decomposable attention model for textual entailment and evaluate the model on the SICK and SNLI datasets. We find that using these special entailment word vectors, we can significantly improve the performance of textual entailment compared with a baseline that uses only standard word2vec vectors. The final performance of our model is close to or above the state of the art, but our method does not rely on any manually-crafted rules or extensive syntactic features.
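The set-theoretic intuition behind lexical entailment can be illustrated with a toy sketch (this is an illustrative analogy, not the model from the paper): if words are represented as non-negative feature vectors, a word u lexically entails a word v when v's feature mass is largely contained in u's. The `entailment_score` function and the example vectors below are hypothetical, chosen only to make the containment idea concrete.

```python
import numpy as np

def entailment_score(u, v):
    """Containment score: fraction of v's feature mass covered by u.
    A score near 1 suggests u lexically entails v (e.g. hyponym -> hypernym)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.minimum(u, v).sum() / v.sum())

# Toy non-negative "feature" vectors (purely illustrative):
cat    = np.array([1.0, 1.0, 1.0, 0.0])  # furry, four-legged, animate, ...
animal = np.array([0.0, 0.0, 1.0, 0.0])  # animate

print(entailment_score(cat, animal))  # 1.0: "cat" entails "animal"
print(entailment_score(animal, cat))  # ~0.33: the reverse does not hold
```

The asymmetry of the score mirrors the asymmetry of entailment itself, which is why ordinary symmetric similarity measures (e.g. cosine over word2vec vectors) are a poor fit for this task.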

This is a pre-conference talk for the 27th International Conference on Computational Linguistics (COLING 2018).

About the Speaker

Yunshi LAN is a PhD candidate at the School of Information Systems, Singapore Management University. She is advised by Associate Professor Jing Jiang and Associate Professor Feida Zhu. Her research interests are in applications of knowledge bases in Natural Language Processing, such as textual entailment and knowledge base question answering.