
Joint Singapore OpenMined & SIGKDD Meetup - ML, Privacy & Explainability
DATE:   October 24, 2019, Thursday
TIME:   6.45pm - 9.00pm (beginning with light refreshments)
VENUE:  SIS Seminar Room 2.4, Level 2
        SMU School of Information Systems
        80 Stamford Road
        Singapore 178902


Please register on the Meetup page by October 23, 2019, Wednesday.

 

OpenMined and SIGKDD Singapore are jointly organising this seminar. The event is supported by SMU School of Information Systems.

 

About the Talks

Talk #1: Data Privacy in Machine Learning: from Centralized Platforms to Federated Learning

Reza Shokri
NUS Presidential Young Professor of Computer Science

In this talk, I will give a broad overview of data privacy risks in machine learning systems. I will show how an adversary can exploit the privacy vulnerabilities of machine learning algorithms using inference attacks. I will then present privacy-enhancing algorithms that can limit information leakage about sensitive data while enabling meaningful computations. Examples of these approaches include differential privacy, trusted hardware, secure multi-party computation, and federated learning.
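
To give a flavour of one of the approaches mentioned above, here is a minimal Python sketch of the Laplace mechanism, a basic building block of differential privacy: calibrated noise is added to a query result so that the output reveals little about any single record. The function and the toy dataset are invented for this illustration and are not material from the talk; smaller values of epsilon give stronger privacy at the cost of noisier answers.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        # Return a differentially private estimate of true_value.
        # sensitivity: max change in the query result when one record
        #              is added or removed (L1 sensitivity).
        # epsilon:     privacy budget; smaller epsilon means more noise.
        rng = rng or np.random.default_rng()
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Example: a private count query over a toy dataset.
    ages = np.array([34, 29, 41, 57, 23, 38])
    true_count = int(np.sum(ages > 30))   # a count query has sensitivity 1
    noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
    print(f"true count: {true_count}, private estimate: {noisy_count:.2f}")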

     
Talk #2: Tests and Metrics to Evaluate ML Model Explanations

Naresh Rajendra Shah
Co-founder and CTO at Untangle AI

With the variety of explanation methods available today, how do we understand their limitations and pitfalls? Explanation methods are themselves ML problems, and in that view, making them robust calls for a test suite similar to the metrics and tests available for most ML methods today. If you have a new explainability method, you can evaluate it against these tests. If you want to use an explainability method, you will be aware of its limitations and know when a problem arises from the explainability method rather than from the model itself. Lastly, this approach allows for incremental progress and, in some cases, defence against specific kinds of adversarial attacks. A minimal sketch of one such test appears below.
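
As a concrete flavour of what such a test might look like, the Python sketch below implements a model-parameter randomization check in the spirit of Adebayo et al.'s "Sanity Checks for Saliency Maps": if an explanation barely changes when the model's weights are replaced with random ones, it cannot be explaining the model. The toy linear model and gradient saliency here are assumptions made for the sketch, not part of the speaker's actual test suite.

    import numpy as np

    def saliency(weights, x):
        # Gradient-based saliency for a toy linear model f(x) = w . x:
        # the input gradient is simply the weight vector itself.
        return weights

    def randomization_check(weights, x, rng=None):
        # Compare the explanation of the trained model with that of a
        # weight-randomized model. Returns the rank correlation between
        # the two explanations; values near 1.0 are a red flag, since
        # the explanation is then insensitive to the model's parameters.
        rng = rng or np.random.default_rng(0)
        original = saliency(weights, x)
        randomized = saliency(rng.standard_normal(weights.shape), x)
        ranks_orig = np.argsort(np.argsort(original))
        ranks_rand = np.argsort(np.argsort(randomized))
        return np.corrcoef(ranks_orig, ranks_rand)[0, 1]

    w = np.array([0.9, -0.1, 0.4, 0.2, 0.7])
    x = np.ones_like(w)
    print(f"rank correlation after weight randomization: "
          f"{randomization_check(w, x):.2f}")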