Privacy-preserving Decision Tree Training and Inference

Speaker: Shujie Cui, Lecturer, Faculty of Information Technology, Monash University

Date: 19 February 2025, Wednesday
Time: 10:00am – 11:00am
Venue: School of Computing & Information Systems 2 (SCIS 2), Level 4, Meeting Room 4-1, Singapore Management University, 90 Stamford Road, Singapore 178903

Please register by 18 February 2025. We look forward to seeing you at this research seminar.
About the Talk

The use of machine learning on personal and sensitive data raises significant privacy concerns, with the potential for inadvertent information leakage, such as the extraction of text messages or images from generative models. Nevertheless, analyzing such data can yield substantial benefits for individuals and society, particularly in domains like healthcare and transportation. To reconcile these conflicting objectives, it is essential to deploy data analysis methods that provide robust confidentiality guarantees and are securely implemented.
This talk will explore the challenges and strategies for achieving these goals in the context of decision tree (DT) training and inference, a widely adopted machine learning model known for its versatility, speed, and interpretability. The speaker will begin with a concise overview of the DT training and inference process, followed by a discussion of the requirements for effective data protection. The talk will then focus on three distinct approaches to meeting these requirements, encompassing both trusted hardware-based and pure software-based solutions.

About the Speaker

Dr. Shujie Cui is a Lecturer in the Faculty of Information Technology at Monash University. She obtained her PhD degree from the University of Auckland in 2019. Before joining Monash University, she was a postdoctoral researcher in the Large-Scale Data & Systems (LSDS) group in the Department of Computing at Imperial College London, UK. Her main research interests include applied cryptography, information security, trusted execution environments, side-channel attacks, and privacy-preserving machine learning.