CHAIRO: Contextual Hierarchical Analogical Induction and Reasoning Optimization for LLMs

arXiv cs.AI / 4/14/2026

Key Points

  • The paper proposes CHAIRO, a new LLM-based content moderation framework that uses contextual hierarchical analogical induction to improve rule induction and decision reliability.
  • Instead of relying on static rules, CHAIRO performs end-to-end optimization of analogical retrieval, rule generation, and moderation classification to dynamically adapt to diverse and ambiguous user-generated content.
  • Experiments indicate CHAIRO substantially outperforms rule-injected fine-tuning baselines and multi-stage static RAG pipelines in moderation accuracy and the quality of generated moderation rules.
  • The authors support the approach with human evaluations and external model generalization tests, reporting improved clarity, interpretability, and real-world applicability of the produced rules.

Abstract

Content moderation in online platforms faces persistent challenges due to the evolving complexity of user-generated content and the limitations of traditional rule-based and machine learning approaches. While recent advances in large language models (LLMs) have enabled more sophisticated moderation via direct prompting or fine-tuning, these approaches often exhibit limited generalization, interpretability, and adaptability to unseen or ambiguous cases. In this work, we propose a novel moderation framework that leverages analogical examples to enhance rule induction and decision reliability. Our approach integrates end-to-end optimization of analogical retrieval, rule generation, and moderation classification, enabling the dynamic adaptation of moderation rules to diverse content scenarios. Through comprehensive experiments, we demonstrate that our method significantly outperforms both rule-injected fine-tuning baselines and multi-stage static RAG pipelines in terms of moderation accuracy and rule quality. Further evaluations, including human assessments and external model generalization tests, confirm that our framework produces rules with better clarity, interpretability, and applicability. These findings show that analogical example-driven methods can advance robust, explainable, and generalizable content moderation in real-world applications.
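The abstract describes a three-stage pipeline: retrieve analogical examples for a piece of content, induce a moderation rule from them, and classify the content. The paper does not publish implementation details here, so the following is only a minimal sketch of that flow under loudly stated assumptions: a toy bag-of-words embedding stands in for a learned retriever, and simple string/majority-vote placeholders stand in for the LLM-based rule generator and classifier. The names `retrieve_analogies`, `generate_rule`, and `moderate` are hypothetical, not CHAIRO's API.

```python
from dataclasses import dataclass
from collections import Counter
import math


@dataclass
class Example:
    text: str
    label: str  # e.g. "allow" or "remove"


def embed(text: str) -> Counter:
    # Toy bag-of-words embedding; a real system would use a learned encoder.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve_analogies(query: str, corpus: list[Example], k: int = 2) -> list[Example]:
    # Stage 1: find the k most similar labeled examples (analogical retrieval).
    q = embed(query)
    return sorted(corpus, key=lambda ex: cosine(q, embed(ex.text)), reverse=True)[:k]


def generate_rule(analogies: list[Example]) -> str:
    # Stage 2: placeholder for an LLM prompt that induces a natural-language rule
    # from the retrieved analogies.
    removed = [ex.text for ex in analogies if ex.label == "remove"]
    if removed:
        return f"Remove content resembling: {removed[0]!r}"
    return "Allow by default."


def moderate(query: str, corpus: list[Example]) -> tuple[str, str]:
    # Stage 3: placeholder classification via majority label among analogies;
    # CHAIRO instead optimizes all three stages end to end.
    analogies = retrieve_analogies(query, corpus)
    rule = generate_rule(analogies)
    votes = Counter(ex.label for ex in analogies)
    return votes.most_common(1)[0][0], rule


corpus = [
    Example("buy cheap pills now", "remove"),
    Example("great recipe for pasta", "allow"),
    Example("cheap pills discount offer", "remove"),
]
decision, rule = moderate("cheap pills for sale", corpus)
print(decision)  # → remove
```

The sketch makes the key contrast concrete: a static pipeline wires these stages together with fixed heuristics, whereas the paper's claim is that jointly optimizing retrieval, rule induction, and classification lets the rules adapt to ambiguous content.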