Pre-Conference Talks by LE Duy Dung, LAN Yunshi & YU Jianfei

DATE: 1 August 2019, Thursday
TIME: 10.30am - 12.00pm (Lunch provided to confirmed registrants)
VENUE: Meeting Room 5.1, Level 5
SMU School of Information Systems
80 Stamford Road
Singapore 178902
There are 3 talks in this session; each talk is approximately half an hour.

*Please register by 27 July 2019 for catering purposes.*

About the Talks

Talk #1: Learning Multiple Maps from Conditional Ordinal Triplets
by LE Duy Dung, PhD Candidate

Ordinal embedding seeks a low-dimensional representation of objects based on relative comparisons of their similarities. This low-dimensional representation lends itself to visualization on a Euclidean map. Classical assumptions admit only one valid aspect of similarity. However, there are increasing scenarios involving ordinal comparisons that inherently reflect multiple aspects of similarity, which would be better represented by multiple maps. We formulate this problem as conditional ordinal embedding, which learns a distinct low-dimensional representation conditioned on each aspect, yet allows collaboration across aspects via a shared representation. Our geometric approach is novel in its use of a shared spherical representation and multiple aspect-specific projection maps on tangent hyperplanes. Experiments on public datasets showcase the utility of collaborative learning over baselines that learn multiple maps independently.

Talk #2: Knowledge Base Question Answering with Topic Units
by LAN Yunshi, PhD Candidate

Knowledge base question answering (KBQA) is an important task in natural language processing. Existing methods for KBQA usually start with entity linking, which considers mostly named entities found in a question as the starting points in the KB to search for answers to the question. However, relying only on entity linking to look for answer candidates may not be sufficient. In this work, we propose to perform topic unit linking, where topic units cover a wider range of units of a KB. We use a generation-and-scoring approach to gradually refine the set of topic units. Furthermore, we use reinforcement learning to jointly learn the parameters for topic unit linking and answer candidate ranking in an end-to-end manner. Experiments on three commonly used benchmark datasets show that our method consistently works well and outperforms the previous state of the art on two datasets.

Talk #3: Adapting BERT for Target-Oriented Multimodal Sentiment Classification
by YU Jianfei, Research Scientist

As an important task in sentiment analysis, Target-oriented Sentiment Classification (TSC) aims to identify the sentiment polarity towards each opinion target in a sentence. However, existing approaches to this task rely primarily on textual content, ignoring other increasingly popular multimodal data sources (e.g., images) that can enhance the robustness of these text-based models. Motivated by this observation and inspired by the recently proposed BERT architecture, we study Target-oriented Multimodal Sentiment Classification (TMSC) and propose a multimodal BERT architecture. To model intra-modality dynamics, we first apply BERT to obtain target-sensitive textual representations. We then borrow the idea of self-attention and design a target attention mechanism that performs target-image matching to derive target-sensitive visual representations. To model inter-modality dynamics, we further propose to stack a set of self-attention layers on top to capture multimodal interactions. Experimental results show that our model outperforms several highly competitive approaches for TSC and TMSC.
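The target-attention step described in the abstract above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: it assumes plain scaled dot-product attention in which the BERT-derived target representation acts as the query and precomputed image-region features act as keys and values; all function names and shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def target_attention(target_repr, image_regions):
    """Derive a target-sensitive visual representation.

    target_repr:   (d,) target-sensitive textual representation (e.g. from BERT)
    image_regions: (num_regions, d) image-region features
    Returns a (d,) weighted sum of region features, weighted by
    how well each region matches the target (scaled dot product).
    """
    d = target_repr.shape[-1]
    scores = image_regions @ target_repr / np.sqrt(d)  # (num_regions,)
    weights = softmax(scores)                          # attention over regions
    return weights @ image_regions                     # target-sensitive visual repr
```

In the full model the abstract describes, this target-sensitive visual representation would then be stacked with the textual one and passed through further self-attention layers to capture inter-modality dynamics.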
These are pre-conference talks for the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019).

About the Speakers

Dung D. LE is a PhD candidate in the Information Systems program at Singapore Management University (SMU). He earned his Degree of Engineer in Mathematics and Informatics from Hanoi University of Science and Technology in 2014. His research interests include recommender systems, information retrieval, and visual analytics, with publications in major data mining venues such as CIKM and SDM.

Yunshi LAN is a PhD candidate at the School of Information Systems, Singapore Management University. She is advised by Associate Professor Jing Jiang and Associate Professor Feida Zhu. Her research interests are in applications of knowledge bases in natural language processing, such as textual entailment and knowledge base question answering.

YU Jianfei is a research scientist in the School of Information Systems, Singapore Management University (SMU), under the supervision of Associate Professor Jing Jiang. He received his Ph.D. degree from SMU in 2018, and his B.Sc. and M.Eng. degrees from Nanjing University of Science & Technology, China, in 2012 and 2015, respectively. He currently works in the area of text mining, with a focus on applying deep learning to NLP tasks such as relation extraction, sentiment analysis, and question answering. He is also interested in domain adaptation problems in NLP.