WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback

arXiv cs.CL / 4/20/2026


Key Points

  • WildFeedback is a new framework for aligning LLMs with human preferences by using feedback gathered during real user conversations (in-situ feedback) rather than relying only on costly annotated datasets.
  • Given a corpus of multi-turn user–LLM dialogues, it automatically identifies and classifies user feedback on model responses between turns, turning that feedback into preference data with preferred vs. dispreferred examples.
  • Experiments show that LLMs fine-tuned on the WildFeedback dataset achieve significantly better alignment with user preferences, validated by both standard benchmarks and a checklist-guided evaluation method.
  • The approach is intended to improve scalability and reduce issues like subjectivity and feedback-loop amplification of biases found in traditional alignment workflows.
  • Overall, WildFeedback aims to produce LLMs that respond more effectively to users’ diverse and changing needs by continuously leveraging interaction-derived signals.
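The mining step described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual pipeline: it uses a simple keyword heuristic as a stand-in for WildFeedback's feedback classifier, and assumes that when a user turn signals dissatisfaction, the preceding assistant reply is dispreferred and the assistant's revised reply is preferred.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "user" or "assistant"
    text: str

# Hypothetical keyword cues standing in for the paper's
# (unspecified here) feedback-classification model.
NEGATIVE_CUES = ("that's wrong", "not what i asked", "no,", "try again")

def is_negative_feedback(user_text: str) -> bool:
    t = user_text.lower()
    return any(cue in t for cue in NEGATIVE_CUES)

def mine_preference_pairs(dialogue: list[Turn]) -> list[dict]:
    """Scan a multi-turn dialogue; when a user turn carries negative
    feedback, pair the preceding assistant reply (dispreferred) with
    the assistant's revised follow-up reply (preferred)."""
    pairs = []
    for i, turn in enumerate(dialogue):
        if turn.role != "user" or not is_negative_feedback(turn.text):
            continue
        # Need: assistant reply before the feedback and a revision after it.
        if (i >= 1 and dialogue[i - 1].role == "assistant"
                and i + 1 < len(dialogue)
                and dialogue[i + 1].role == "assistant"):
            pairs.append({
                "prompt": dialogue[i - 2].text if i >= 2 else "",
                "dispreferred": dialogue[i - 1].text,
                "preferred": dialogue[i + 1].text,
            })
    return pairs

dialogue = [
    Turn("user", "Summarize the report in one sentence."),
    Turn("assistant", "Here is a three-paragraph summary of the report..."),
    Turn("user", "No, that's not what I asked -- one sentence only."),
    Turn("assistant", "The report finds revenue grew 12% year over year."),
]
pairs = mine_preference_pairs(dialogue)
print(pairs)
```

Each mined pair has the shape expected by preference-tuning methods such as DPO (prompt, preferred response, dispreferred response); a production system would replace the keyword heuristic with a learned classifier over the feedback turn.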

Abstract

As large language models (LLMs) continue to advance, aligning these models with human preferences has emerged as a critical challenge. Traditional alignment methods, relying on human- or LLM-annotated datasets, are limited by their resource-intensive nature, inherent subjectivity, misalignment with real-world user preferences, and the risk of feedback loops that amplify model biases. To overcome these limitations, we introduce WildFeedback, a novel framework that leverages in-situ user feedback during conversations with LLMs to create preference datasets automatically. Given a corpus of multi-turn user-LLM conversations, WildFeedback identifies and classifies user feedback on LLM responses between conversation turns. The user feedback is then used to create examples of preferred and dispreferred responses according to users' preferences. Our experiments demonstrate that LLMs fine-tuned on the WildFeedback dataset exhibit significantly improved alignment with user preferences, as evidenced by both traditional benchmarks and our proposed checklist-guided evaluation. By incorporating in-situ feedback from actual users, WildFeedback addresses the scalability, subjectivity, and bias challenges that plague existing approaches, marking a significant step toward developing LLMs that are more responsive to the diverse and evolving needs of their users.