
Pre-Conference Talk by ZHOU Kankan | VLStereoSet: A Study of Stereotypical Bias in Pre-trained Vision-Language Models


 
VLStereoSet: A Study of Stereotypical Bias in Pre-trained Vision-Language Models

Speaker(s):

ZHOU Kankan
PhD Candidate
School of Computing and Information Systems
Singapore Management University

Date:

11 November 2022, Friday

Time:

1:30pm - 2:30pm

Venue:

Meeting room 5.1, Level 5
School of Computing and Information Systems 1,
Singapore Management University,
80 Stamford Road
Singapore 178902

About the Talk

Recently there has been much interest in adapting foundation models such as ALBERT, RoBERTa, T5, GPT-3 and CLIP for different downstream tasks. These models demonstrate powerful transfer capabilities largely because they have acquired the rich body of knowledge contained in their pre-training data. However, their pre-training data may also contain social biases and stereotypes, especially when the data are crawled from the internet without cleaning. As a result, pre-trained models may “inherit” these biases and stereotypes, affecting the fairness of systems derived from these foundation models for downstream tasks. In this work, we study how to measure stereotypical bias in pre-trained vision-language models. We leverage StereoSet, a recently released text-only dataset that covers a wide range of stereotypical biases, and extend it into a vision-language probing dataset called VLStereoSet to measure stereotypical bias in vision-language models. We analyze the differences between text and image and propose a probing task that detects bias by evaluating a model's tendency to pick stereotypical statements as captions for anti-stereotypical images. We further define several metrics to measure both a vision-language model's overall stereotypical bias and its intra-modal and inter-modal bias.
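The probing task described above can be illustrated with a small sketch: for each anti-stereotypical image, the model picks one of three candidate captions (stereotypical, anti-stereotypical, or irrelevant), and we summarize how often the stereotypical caption wins. The function name and the simple selection-rate metric below are illustrative placeholders, not the paper's exact metric definitions.

```python
# Hypothetical sketch of the caption-selection probe: each entry in
# `choices` records which candidate caption a vision-language model
# preferred for one anti-stereotypical image.
from collections import Counter

STEREO, ANTI, IRRELEVANT = "stereotype", "anti-stereotype", "irrelevant"

def stereotype_selection_rate(choices):
    """Fraction of relevant picks that were the stereotypical caption
    (higher suggests stronger stereotypical bias)."""
    counts = Counter(choices)
    relevant = counts[STEREO] + counts[ANTI]  # ignore irrelevant picks
    return counts[STEREO] / relevant if relevant else 0.0

# Toy example: model choices over five anti-stereotypical images.
choices = [STEREO, ANTI, STEREO, IRRELEVANT, STEREO]
print(stereotype_selection_rate(choices))  # 3 of 4 relevant picks -> 0.75
```

In practice the per-image choice would come from scoring each candidate caption against the image with a pre-trained vision-language model; the paper's actual metrics additionally separate intra-modal from inter-modal bias.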

This is a Pre-Conference talk for the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (AACL 2022).

About the Speaker

Kankan Zhou received his bachelor’s degree in computer science from Nanyang Technological University (NTU), Singapore, in 2014 and his master’s degree in computing from the National University of Singapore (NUS) in 2016. He has more than 10 years of working experience in the AI and analytics field with companies such as Oracle Singapore, Aon Singapore, and Accenture Singapore. He is now pursuing a part-time Ph.D. in computer science at Singapore Management University (SMU) under the supervision of Prof. Jing Jiang, and works as a full-time SCIS undergraduate instructor at SMU. His research focuses on natural language processing.