Fact4ac at the Financial Misinformation Detection Challenge Task: Reference-Free Financial Misinformation Detection via Fine-Tuning and Few-Shot Prompting of Large Language Models
arXiv cs.CL / 4/17/2026
Key Points
- The paper describes a winning approach for the Reference-Free Financial Misinformation Detection shared task, where models must judge claim veracity without external references or evidence.
- It builds on the RFC-BENCH framework and reframes detection as relying on internal semantic reasoning and contextual consistency rather than fact-checking.
- The proposed system combines in-context learning (zero-shot and few-shot prompting) with parameter-efficient fine-tuning using LoRA to better capture subtle linguistic cues of financial manipulation.
- The method achieved first place on both official leaderboards, reporting 95.4% accuracy on the public test set and 96.3% on the private test set.
- The authors release their fine-tuned models (14B and 32B parameters) on Hugging Face to support further research on context-aware misinformation detection in financial NLP.
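The few-shot prompting component described above can be illustrated with a minimal sketch. The example labels, claims, and prompt template below are hypothetical and chosen for clarity; they do not reproduce the paper's actual prompts or models.

```python
# Illustrative few-shot prompt construction for reference-free financial
# claim classification. The model is asked to judge veracity from internal
# consistency and linguistic cues alone, with no external evidence.
# All claims and labels here are invented examples, not from the paper.

FEW_SHOT_EXAMPLES = [
    ("Acme Corp tripled its revenue overnight after a secret merger.",
     "misinformation"),
    ("The central bank held its benchmark rate steady this quarter.",
     "credible"),
]

def build_prompt(claim: str) -> str:
    """Assemble a few-shot prompt ending at the label slot the LLM fills in."""
    lines = [
        "Classify each financial claim as 'credible' or 'misinformation' "
        "based only on its internal consistency and linguistic cues.",
        "",
    ]
    for example_claim, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Claim: {example_claim}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Claim: {claim}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_prompt(
    "Investors are guaranteed 50% monthly returns with zero risk."
)
```

The resulting string would be sent to the LLM (zero-shot prompting is the same template with the example pairs omitted); the paper additionally fine-tunes the model with LoRA rather than relying on prompting alone.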

