FinSafetyBench: Evaluating LLM Safety in Real-World Financial Scenarios

arXiv cs.CL / 5/4/2026


Key Points

  • The paper proposes FinSafetyBench, a bilingual (English-Chinese) red-teaming benchmark that evaluates whether LLMs refuse requests violating financial compliance requirements.
  • FinSafetyBench is built from real-world financial crime cases and ethics standards and includes 14 subcategories covering financial crimes and ethical violations.
  • Experiments on both general-purpose and finance-specialized LLMs under three representative attack settings reveal vulnerabilities that allow adversarial prompts to bypass compliance safeguards.
  • The analysis finds that models are more susceptible to these attacks in Chinese-language contexts and that prompt-level defenses offer limited protection against sophisticated or implicit manipulation strategies.

Abstract

Large language models (LLMs) are increasingly applied in financial scenarios. However, they may produce harmful outputs, such as facilitating illegal activities or unethical behavior, posing serious compliance risks. To systematically evaluate LLM safety in finance, we propose FinSafetyBench, a bilingual (English-Chinese) red-teaming benchmark designed to test an LLM's refusal of requests that violate financial compliance. Grounded in real-world financial crime cases and ethics standards, the benchmark comprises 14 subcategories spanning financial crimes and ethical violations. Through extensive experiments on general-purpose and finance-specialized LLMs under three representative attack settings, we identify critical vulnerabilities that allow adversarial prompts to bypass compliance safeguards. Further analysis reveals stronger susceptibility in Chinese-language contexts and highlights the limitations of prompt-level defenses against sophisticated or implicit manipulation strategies.