
PhD Dissertation Defense by BO Jianyuan | Enhancing Graph Representation Learning Through Self-Supervision: An Augmentation Perspective


 

Enhancing Graph Representation Learning Through Self-Supervision: An Augmentation Perspective

BO Jianyuan

PhD Candidate
School of Computing and Information Systems
Singapore Management University
 


Research Area

Dissertation Committee

Research Advisor
  • Yuan FANG, School of Computing and Information Systems, Singapore Management University

Committee Members

External Member
  • Dawei ZHOU, Assistant Professor, Department of Computer Science, Virginia Tech
 

Date

10 July 2025 (Thursday)

Time

9:30am - 10:30am

Venue

Meeting Room 5.1, Level 5
School of Computing and Information Systems 1
Singapore Management University
80 Stamford Road
Singapore 178902

Please register by 8 July 2025.

We look forward to seeing you at this research seminar.

 

ABOUT THE TALK

Graph representation learning is pivotal for analyzing graph-structured data across diverse domains, yet traditional supervised methods are hindered by data scarcity and extensive labeling effort. Graph self-supervised learning (SSL) has emerged as an effective alternative that leverages inherent graph structure without explicit labels. However, current approaches face three critical limitations: (1) manual augmentation design that requires extensive domain expertise, (2) the separation between contrastive and generative learning paradigms, and (3) challenges in integrating graph structures with large language models (LLMs) for text-attributed graphs.

To address these limitations, this dissertation explores graph SSL from an augmentation perspective and makes three contributions. (1) Adaptive augmentation selection: our Graph-centric Contrastive framework for Graph Matching (GCGM), coupled with a Boosting-inspired Adaptive Augmentation Sampler (BiAS), automatically selects effective augmentations from a comprehensive pool without manual tuning, and consistently outperforms state-of-the-art self-supervised baselines on graph matching without requiring side information. (2) Node feature masking as a unifying augmentation: our Contrastive Masked Feature Reconstruction (CORE) framework theoretically demonstrates that masked feature reconstruction converges with contrastive learning, and achieves superior performance on node and graph classification tasks by combining discriminative and generative strengths. (3) Quantizing text-attributed graphs: our Soft Tokenization for Text-attributed Graphs (STAG) framework employs vector quantization as an augmentation strategy to bridge continuous graph embeddings and discrete tokens, enabling effective zero-shot transfer learning and consistent performance across different LLM architectures.
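To make the augmentation-centric idea concrete, the short Python sketch below shows node feature masking used as the sole augmentation for a toy graph contrastive objective. It is an illustrative sketch only, not the GCGM, CORE, or STAG implementation described in the dissertation; the mask_features, gcn_layer, and nt_xent helpers, the single-layer GCN, and the toy 4-node graph are all assumptions made for this example.

# Illustrative sketch (not the dissertation's code): node feature masking as the
# augmentation for a minimal graph contrastive objective, using plain PyTorch.
import torch
import torch.nn.functional as F

def mask_features(x, mask_rate=0.3):
    """Zero out the features of a random fraction of nodes (the augmentation)."""
    mask = torch.rand(x.size(0), device=x.device) < mask_rate
    x_aug = x.clone()
    x_aug[mask] = 0.0
    return x_aug

def gcn_layer(adj_norm, x, weight):
    """One propagation step of a minimal GCN: neighborhood smoothing + linear map."""
    return torch.relu(adj_norm @ x @ weight)

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE / NT-Xent loss between two views of the same set of nodes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # cross-view similarity of every node pair
    labels = torch.arange(z1.size(0))     # positive pair = same node in the other view
    return F.cross_entropy(logits, labels)

# Toy data: 4 nodes with 8-dim features and a small undirected graph.
x = torch.randn(4, 8)
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 1., 0.],
                    [1., 1., 0., 1.],
                    [0., 0., 1., 0.]])
adj_hat = adj + torch.eye(4)                          # add self-loops
deg_inv_sqrt = adj_hat.sum(1).pow(-0.5)
adj_norm = deg_inv_sqrt[:, None] * adj_hat * deg_inv_sqrt[None, :]

weight = torch.randn(8, 16, requires_grad=True)
opt = torch.optim.Adam([weight], lr=1e-2)

for _ in range(10):
    z1 = gcn_layer(adj_norm, mask_features(x), weight)  # view 1: masked features
    z2 = gcn_layer(adj_norm, mask_features(x), weight)  # view 2: independently masked
    loss = nt_xent(z1, z2)
    opt.zero_grad()
    loss.backward()
    opt.step()

In this toy setup the two views differ only in which node features were masked, so the contrastive loss pushes the encoder to produce embeddings that are robust to missing features; the dissertation's frameworks build on this kind of augmentation but with adaptive selection, reconstruction objectives, and quantization that the sketch does not attempt to reproduce.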

Comprehensive experiments across diverse datasets and challenging scenarios demonstrate the effectiveness of this augmentation-centric approach, establishing augmentation as a unifying perspective for advancing graph self-supervised learning.

 

SPEAKER BIOGRAPHY

Jianyuan is a PhD candidate in Computer Science at Singapore Management University under the supervision of Prof. Yuan Fang. He conducts research on graph neural networks, self-supervised learning, and graph foundation models. 

Before his PhD, Jianyuan completed his Master of IT in Business (AI) at SMU and his Master of Science in Mechanical Engineering at USC. He also holds a Bachelor's degree in Mechatronics Engineering from Huazhong Agricultural University, China.

Jianyuan has published at top-tier conferences, including KDD 2025 and IJCAI 2024.