
Research Seminar by Hong Cheng | Graph Prompt Learning and Pre-training


Graph Prompt Learning and Pre-training

Speaker(s):



Hong Cheng
Professor
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong

Date: 15 August 2024, Thursday

Time: 10:00am – 11:00am

Venue:
School of Computing & Information Systems 1 (SCIS 1)
Level 4, Meeting Room 4-4
Singapore Management University
80 Stamford Road
Singapore 178902

Please register by 14 August 2024.

We look forward to seeing you at this research seminar.

About the Talk

Recently, “pre-training and fine-tuning” has become a standard workflow for many graph tasks, since it transfers general graph knowledge to applications where graph annotations are scarce. However, node-level, edge-level, and graph-level tasks are highly diverse, so the pre-training pretext is often incompatible with these downstream tasks. This gap can even cause negative transfer to a specific application, leading to poor results. Inspired by prompt learning in natural language processing (NLP), which has proven highly effective at leveraging prior knowledge for various NLP tasks, our first work studies prompting for graphs, with the motivation of bridging the gap between pre-trained models and diverse graph tasks. We propose a novel multi-task prompting method for graph models. To narrow the gap between various graph tasks and state-of-the-art pre-training strategies, we study the task space of various graph applications and reformulate downstream problems as graph-level tasks. We then introduce meta-learning to efficiently learn a better initialization for the multi-task graph prompt, so that our prompting framework is more reliable and general across tasks. In the second work, we study cross-domain graph pre-training and propose a novel approach called Graph COordinators for PrEtraining (GCOPE), which harnesses the underlying commonalities across diverse graph datasets to enhance few-shot learning. By successfully leveraging the synergistic potential of multiple graph datasets for pretraining, our work is a pioneering contribution toward graph foundation models.
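The core idea behind graph prompt learning described above can be illustrated in a few lines. The toy sketch below is our own construction, not the speaker's actual method: a "pre-trained" graph encoder is kept frozen, and only a small learnable prompt vector, added to every node feature before encoding, is trained for a graph-level downstream objective.

```python
import numpy as np

# Hypothetical sketch of graph prompt learning with a frozen encoder.
# All names and dimensions here are illustrative assumptions.

rng = np.random.default_rng(0)
D, H = 8, 4                          # node feature dim, embedding dim
W_frozen = rng.normal(size=(D, H))   # stands in for a pre-trained GNN layer

def encode_graph(X, A, prompt):
    """Frozen one-layer message passing + mean readout -> graph embedding."""
    Xp = X + prompt                      # inject the prompt into node features
    A_hat = A + np.eye(len(X))           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    return ((A_hat @ Xp / deg) @ W_frozen).mean(axis=0)

# Toy graph-level downstream objective: move the embedding toward a target.
n = 5
X = rng.normal(size=(n, D))
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T           # symmetric adjacency, no self-loops

target = rng.normal(size=(H,))
prompt = np.zeros(D)                     # the ONLY trainable parameter
loss0 = np.sum((encode_graph(X, A, prompt) - target) ** 2)

for _ in range(300):
    g = encode_graph(X, A, prompt)
    # The prompt enters this encoder linearly, so the exact gradient of the
    # squared loss w.r.t. the prompt is 2 * (g - target) @ W_frozen.T.
    prompt -= 0.01 * 2.0 * (g - target) @ W_frozen.T

final_loss = np.sum((encode_graph(X, A, prompt) - target) ** 2)
```

Only the prompt vector is updated; the encoder weights never change, which is what makes prompting attractive when downstream annotations are scarce.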
 

About the Speaker

Hong Cheng is a Professor in the Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong. She received the Ph.D. degree from the University of Illinois at Urbana-Champaign in 2008. Her research interests include data mining, database systems, and machine learning. She received best research paper awards at SIGKDD'23, ICDE'07, SIGKDD'06, and SIGKDD'05. She received the 2010 Vice-Chancellor's Exemplary Teaching Award at The Chinese University of Hong Kong.