AI Navigate

Enhancing Debunking Effectiveness through LLM-based Personality Adaptation

arXiv cs.AI / 11 Mar 2026

Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • The study introduces a novel method for generating personalized fake news debunking messages by prompting Large Language Models (LLMs) with inputs based on the Big Five personality traits.
  • The approach converts generic debunking content into tailored messages aligned with specific personality profiles, enhancing persuasiveness (a minimal code sketch follows this list).
  • An automated evaluation is conducted using separate LLMs simulating personality traits, thus avoiding expensive human evaluation panels.
  • Results show personalized debunking messages are generally more persuasive, with traits like Openness increasing persuadability and Neuroticism decreasing it.
  • The research highlights the practical potential of LLMs for targeted debunking while raising ethical concerns about the use of such personalized AI-driven messaging techniques.
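To make the generation step concrete, here is a minimal sketch of what persona-conditioned rewriting of a generic debunking message could look like. The prompt wording, trait descriptions, and model name are illustrative assumptions, not the authors' actual prompts or models.

```python
# Minimal sketch of persona-conditioned debunking rewrites (illustrative only;
# prompt wording, trait descriptions, and model choice are assumptions, not the
# authors' exact setup).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Short persona descriptions for the Big Five traits (placeholder wording).
BIG_FIVE_PERSONAS = {
    "Openness": "curious, imaginative, receptive to new ideas and evidence",
    "Conscientiousness": "organized, detail-oriented, values accuracy and rigor",
    "Extraversion": "sociable, energetic, responsive to engaging, direct language",
    "Agreeableness": "cooperative, empathetic, responsive to a respectful tone",
    "Neuroticism": "anxious, sensitive to threat, needs reassurance and calm framing",
}

def personalize_debunk(generic_debunk: str, trait: str, model: str = "gpt-4o") -> str:
    """Rewrite a generic debunking message for a reader high in `trait`."""
    prompt = (
        f"Rewrite the following fake-news debunking message so that it is most "
        f"persuasive for a reader who is high in {trait} "
        f"({BIG_FIVE_PERSONAS[trait]}). Keep every factual claim unchanged.\n\n"
        f"Generic debunking message:\n{generic_debunk}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: produce one tailored variant per trait from a single generic message.
generic = "The viral claim that the new vaccine contains tracking microchips is false; independent labs have analyzed its ingredients."
for trait in BIG_FIVE_PERSONAS:
    print(f"--- {trait} ---")
    print(personalize_debunk(generic, trait))
```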


arXiv:2603.09533 (cs)
[Submitted on 10 Mar 2026]

Title: Enhancing Debunking Effectiveness through LLM-based Personality Adaptation

Authors: Pietro Dell'Oglio, Alessandro Bondielli, Francesco Marcelloni, Lucia C. Passaro
Abstract: This study proposes a novel methodology for generating personalized fake news debunking messages by prompting Large Language Models (LLMs) with persona-based inputs aligned to the Big Five personality traits: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness. Our approach guides LLMs to transform generic debunking content into personalized versions tailored to specific personality profiles. To assess the effectiveness of these transformations, we employ a separate LLM as an automated evaluator simulating corresponding personality traits, thereby eliminating the need for costly human evaluation panels. Our results show that personalized messages are generally seen as more persuasive than generic ones. We also find that traits like Openness tend to increase persuadability, while Neuroticism can lower it. Differences between LLM evaluators suggest that using multiple models provides a clearer picture. Overall, this work demonstrates a practical way to create more targeted debunking messages by exploiting LLMs, while also raising important ethical questions about how such technology might be used.
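The automated evaluation described in the abstract can be pictured as a second, independent LLM instructed to role-play a given Big Five trait and rate how persuasive a debunking message is. The sketch below shows one plausible way to set this up; the rating scale, rubric wording, and model choice are assumptions for illustration, not the paper's protocol.

```python
# Sketch of the LLM-as-evaluator step: a separate model role-plays a Big Five
# trait and rates the persuasiveness of a debunking message. Scale, rubric
# wording, and model name are assumptions, not the paper's actual setup.
from openai import OpenAI

client = OpenAI()

def rate_persuasiveness(message: str, trait: str, model: str = "gpt-4o-mini") -> int:
    """Return a 1-7 persuasiveness rating from an LLM simulating `trait`."""
    system = (
        f"You are a person whose dominant personality trait is {trait} "
        f"(Big Five). Answer strictly as that person would."
    )
    user = (
        "On a scale from 1 (not at all persuasive) to 7 (extremely persuasive), "
        "how persuasive do you find the following debunking message? "
        f"Reply with a single integer.\n\n{message}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return int(response.choices[0].message.content.strip())

# Comparing generic vs. personalized messages for one simulated trait
# (personalize_debunk and generic are from the earlier sketch):
# score_generic  = rate_persuasiveness(generic, "Openness")
# score_personal = rate_persuasiveness(personalize_debunk(generic, "Openness"), "Openness")
```

Since the abstract notes that different evaluator LLMs can disagree, in practice one would aggregate ratings across several evaluator models and repeated samples rather than rely on a single judge.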
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2603.09533 [cs.AI]
  (or arXiv:2603.09533v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09533
Related DOI: https://doi.org/10.1007/978-3-032-15632-7_23

Submission history

From: Alessandro Bondielli
[v1] Tue, 10 Mar 2026 11:44:17 UTC (1,579 KB)