
PhD Dissertation Defense by LIM Jia Peng | Perspectives on Interpretability of Neural Models for Representing Text


 
Perspectives on Interpretability of Neural Models for Representing Text

LIM Jia Peng

PhD Candidate
School of Computing and Information Systems
Singapore Management University
 


Research Area

Dissertation Committee

Research Advisor

  • Hady W. LAUW, School of Computing and Information Systems, Singapore Management University
Dissertation Committee Member

External Member

  • Arunesh SINHA, Assistant Professor, Department of Management Science & Information Systems, Rutgers Business School, Rutgers University
 

Date

21 May 2026 (Thursday)

Time

9:00am – 10:00am

Venue

Meeting room 5.1, Level 5
School of Computing and Information Systems 1, 
Singapore Management University, 
80 Stamford Road, 
Singapore 178902

Please register by 19 May 2026.

We look forward to seeing you at this research seminar.

 

ABOUT THE TALK

In this dissertation, we investigate interpretability across the three elements of learning neural text representations: inputs, models, and outputs. Throughout, we emphasise perspective, presenting novel alternative methods to mine and organise meaning. First, examining models, we propose an alternative angle for interpreting the word-topic distributions of Neural Topic Models, producing better topic representations for interpretation; we then apply these findings to mine interpretations from the weights of transformer-based Large Language Models. Second, for outputs, since observation is critical to interpretability evaluation, we examine text representations through the human mental model: we formulate a large-scale correlation analysis, with accompanying user studies, comparing automated coherence metrics against human evaluations. Finally, for inputs, the model and its outputs learn within the pre-defined boundaries of the input space, which for neural text representations is the token space of words and subwords. To better understand this space, we pose an interpretation problem over the word space and propose a novel perspective for solving the token space. Across these three elements, we explore the notion of interpretability with creative methodology, yielding interesting results.
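As a concrete illustration of the automated coherence metrics mentioned above, the sketch below computes a document-level NPMI (normalized pointwise mutual information) coherence score for a topic's top words over a toy corpus. This is a minimal sketch for illustration only, not the dissertation's evaluation code; the function name `npmi_coherence`, the toy corpus, and the smoothing constant are assumptions.

```python
import math
from collections import Counter
from itertools import combinations

def npmi_coherence(topic_words, documents, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words.

    Probabilities are estimated from document-level co-occurrence,
    as is common for automated topic-coherence metrics.
    """
    n_docs = len(documents)
    doc_sets = [set(doc) for doc in documents]

    # Document frequencies for single words and word pairs.
    df = Counter()
    for ds in doc_sets:
        for w in topic_words:
            if w in ds:
                df[w] += 1
        for w1, w2 in combinations(topic_words, 2):
            if w1 in ds and w2 in ds:
                df[(w1, w2)] += 1

    scores = []
    for w1, w2 in combinations(topic_words, 2):
        p1 = df[w1] / n_docs
        p2 = df[w2] / n_docs
        p12 = df[(w1, w2)] / n_docs
        if p12 == 0:
            scores.append(-1.0)  # never co-occur: minimum NPMI
            continue
        pmi = math.log(p12 / (p1 * p2))
        # Normalize by -log p(w1, w2); eps guards against log(1) = 0.
        scores.append(pmi / -math.log(p12 + eps))
    return sum(scores) / len(scores)

docs = [
    ["neural", "topic", "model", "text"],
    ["topic", "coherence", "metric"],
    ["neural", "network", "text"],
    ["coherence", "human", "evaluation"],
]
# "neural" and "text" always appear together, so coherence is near 1.
print(round(npmi_coherence(["neural", "text"], docs), 3))
```

Correlating scores like these with human judgements of topic quality, at scale, is the kind of analysis the second part of the talk describes.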

ABOUT THE SPEAKER

Jia Peng joined SMU in 2017 as an undergraduate studying Information Systems. After graduating in 2021, he continued on to pursue a Ph.D. in Computer Science under the supervision of Prof. Hady W. Lauw, working on Natural Language Processing research. Over the course of his Ph.D., he has published at ACL, EMNLP, COLING, and NeurIPS. He was also awarded the AISG Ph.D. Fellowship and the Singapore Data Science Consortium Fellowship.