
Faculty Job Seminar by JIN Di


 
Adversarial Robustness and Generalization
for Natural Language Processing

Speaker(s):

JIN Di
Research Scientist
Amazon Alexa AI

Date: 20 April 2021, Tuesday

Time: 10:00am - 11:15am

Venue: This is a virtual seminar. Please register by 8 April 2021; the WebEx link will be sent to registrants on the following day.

We look forward to seeing you at this research seminar.

About the Talk

Deep learning and large-scale unsupervised pre-training have remarkably accelerated the development of natural language processing (NLP). The best models can now achieve performance comparable to, or even better than, that of humans, which may give the impression that NLP problems have been solved. However, when we deploy these models in real-world applications, considerable evidence shows that they are still not robust to real data, which may contain some level of noise. This underscores the importance of examining and enhancing model robustness. In this presentation, we will introduce approaches to evaluating and improving the robustness of NLP models based on adversarial attack and learning. We will see that exposing these models to adversarial samples can make them more robust and thus generalize better to unseen data.
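To give a flavor of the adversarial attacks the abstract refers to, here is a minimal, self-contained sketch of a word-substitution attack on a toy keyword-based classifier. All names (the classifier, the synonym table) are illustrative stand-ins: real attack methods query a neural model, draw substitution candidates from word embeddings, and check semantic similarity, none of which is modeled here.

```python
# Toy sentiment "classifier": score = number of positive keywords.
POSITIVE = {"good", "great", "excellent", "enjoyable"}

def classify(tokens):
    return sum(t in POSITIVE for t in tokens)

# Tiny hand-made synonym table, standing in for embedding-based candidates.
SYNONYMS = {
    "great": ["fine", "decent"],
    "excellent": ["acceptable"],
}

def attack(tokens):
    """Greedily swap each word for a synonym that lowers the score,
    keeping the sentence superficially similar while degrading the
    classifier's output."""
    tokens = list(tokens)
    for i, tok in enumerate(tokens):
        best, best_score = tok, classify(tokens)
        for cand in SYNONYMS.get(tok, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            score = classify(trial)
            if score < best_score:
                best, best_score = cand, score
        tokens[i] = best
    return tokens

original = ["the", "movie", "was", "great", "and", "excellent"]
adversarial = attack(original)
print(classify(original))     # 2
print(classify(adversarial))  # 0
```

Adversarial training then adds such perturbed samples back into the training set, which is the sense in which exposing models to adversarial samples improves robustness.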

About the Speaker

Di Jin is a research scientist at Amazon Alexa AI, USA, working on conversational modeling. He received his PhD from MIT in September 2020, supervised by Prof. Peter Szolovits. He works on Natural Language Processing (NLP) and its applications in the healthcare domain. His previous work focused on sequential sentence classification, transfer learning for low-resource data, adversarial attack and defense, and text editing/rewriting.

He is a tenure-track faculty candidate for the Artificial Intelligence & Data Science, Data Management & Analytics cluster.