
Pre-Conference Talk by YING Jiahao | Intuitive or Dependent? Investigating LLMs’ Behavior Style to Conflicting Prompts


 


Intuitive or Dependent? Investigating LLMs’ Behavior Style to Conflicting Prompts
 

Speaker(s):


YING Jiahao
PhD Candidate
School of Computing and Information Systems
Singapore Management University

Date:
1 August 2024, Thursday

Time:
2:00pm – 2:30pm

Venue:
Meeting room 4.4, Level 4
School of Computing and Information Systems 1,
Singapore Management University,
80 Stamford Road,
Singapore 178902

We look forward to seeing you at this research seminar.

Please register by 31 July 2024.

About the Talk

This study investigates the behaviors of Large Language Models (LLMs) when the prompt conflicts with their internal memory. Understanding this not only sheds light on LLMs' decision mechanisms but also benefits real-world applications such as retrieval-augmented generation (RAG). Drawing on cognitive theory, we first target decision-making styles in the scenario where neither side of the conflict is superior, and categorize LLMs' preferences into dependent, intuitive, and rational/irrational styles. A second scenario, factual robustness, considers the correctness of the prompt and the memory in knowledge-intensive tasks, and also distinguishes whether LLMs behave rationally or irrationally in the first scenario. To quantify these behaviors, we establish a complete benchmarking framework comprising a dataset, a robustness evaluation pipeline, and corresponding metrics. Extensive experiments with seven LLMs reveal their varying behaviors. Moreover, with role-play intervention we can change the styles, although different models show distinct adaptivity and upper bounds. One of our key takeaways is to optimize models or prompts according to the identified style. For instance, RAG models with high role-play adaptability may dynamically adjust the intervention according to the quality of the retrieval results: being dependent to better leverage informative context, and being intuitive when the external prompt is noisy.
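As a rough illustration of the adaptive-intervention idea above, here is a minimal sketch of how a RAG pipeline might switch between a "dependent" and an "intuitive" role-play instruction based on an estimated retrieval-quality score. It is not taken from the talk; all names, prompts, and the threshold are hypothetical.

```python
# Hypothetical sketch: pick a role-play intervention from retrieval quality.
# These prompts, names, and the threshold are illustrative assumptions only.

DEPENDENT_ROLE = (
    "You are a careful analyst. Rely primarily on the provided context "
    "when answering."
)
INTUITIVE_ROLE = (
    "You are a confident expert. Trust your own knowledge and disregard "
    "the context if it seems unreliable."
)

def choose_intervention(retrieval_score: float, threshold: float = 0.7) -> str:
    """Return a role-play instruction given an estimated retrieval quality in [0, 1]."""
    # High-quality retrieval: encourage a dependent style to exploit the context.
    # Noisy retrieval: encourage an intuitive style to fall back on parametric memory.
    return DEPENDENT_ROLE if retrieval_score >= threshold else INTUITIVE_ROLE

def build_prompt(question: str, context: str, retrieval_score: float) -> str:
    """Assemble the final prompt with the selected role-play intervention."""
    role = choose_intervention(retrieval_score)
    return f"{role}\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    print(build_prompt(
        "Who wrote 'Hamlet'?",
        "Hamlet is a tragedy by William Shakespeare.",
        retrieval_score=0.9,
    ))
```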

This is a Pre-Conference talk for The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).
 

About the Speaker

Jiahao YING is a PhD candidate in Computer Science at the School of Computing and Information Systems, Singapore Management University, under the supervision of Assistant Professor SUN Qianru and external co-supervisor CAO Yixin. His research interests lie in the evaluation and improvement of LLMs.