
Research Seminar by Yi Ding | Echoes of Authenticity: Reclaiming Human Sentiment in the LLM Era


Echoes of Authenticity: Reclaiming Human Sentiment in the LLM Era

Speaker(s):



Yi Ding
Assistant Professor of Information Systems, 
The University of Warwick

Date: 30 July 2024, Tuesday

Time: 10:30am – 12:00pm

Venue:
School of Economics/School of Computing & Information Systems 2 (SOE/SCIS 2)
Level 2, Seminar Room 2-8
Singapore Management University
90 Stamford Road
Singapore 178903

Please register by 29 July 2024.

We look forward to seeing you at this research seminar.

About the Talk

This paper scrutinizes the unintended consequences of employing large language models (LLMs) such as ChatGPT to edit user-generated content, focusing in particular on alterations in sentiment. Through a detailed analysis of a climate change tweet dataset, we find that LLM-rephrased tweets tend to display a more neutral sentiment than their original counterparts. By replicating an established study on public opinion about climate change, we illustrate how such sentiment alterations can skew the results of research that relies on user-generated content. To counteract the biases introduced by LLMs, our research outlines two effective strategies. First, we employ predictive models capable of retroactively identifying the true human sentiment underlying the original communications, using the altered sentiment expressed in LLM-rephrased tweets as a basis. While useful, this approach faces limitations when the origin of the text—whether directly crafted by a human or modified by an LLM—remains uncertain. To address scenarios where the text's provenance is ambiguous, we develop a second approach based on fine-tuning LLMs. This fine-tuning process not only aligns the sentiment of LLM-generated texts more closely with human sentiment but also offers a robust solution to the challenges posed by the indeterminate origins of digital content. This research highlights the impact of LLMs on the linguistic characteristics and sentiment of user-generated content and, more importantly, offers practical solutions to mitigate these biases, thereby ensuring the continued reliability of sentiment analysis in research and policy.

About the Speaker

Yi Ding is an assistant professor in the Information Systems Management and Analytics Group at Warwick Business School, University of Warwick. She obtained her PhD in information systems and analytics from the National University of Singapore and a bachelor's degree in management information systems from Fudan University. Her research interests include the social impact of emerging technologies and fintech. Her work has been published in journals such as MIS Quarterly, Information Systems Research (ISR), and Research Policy. She has also presented papers at international conferences, including ICIS, WITS, CIST, and HICSS.