Call-Chain-Aware LLM-Based Test Generation for Java Projects
arXiv cs.AI / 4/27/2026
💬 Opinion · Tools & Practical Usage · Models & Research
Key Points
- The paper introduces CAT, a call-chain-aware LLM-based method for generating Java unit tests using static analysis to add call-chain and dependency context to prompts.
- CAT goes beyond execution-path-only prompting by modeling caller–callee relationships, object constructors, and third-party dependencies, giving the LLM the context it needs to produce executable, semantically valid tests.
- It includes an iterative test-fixing mechanism to recover from generation failures, improving robustness when tests initially cannot run.
- On the Defects4J benchmark, CAT raises line coverage by 18.04% and branch coverage by 21.74% compared with the state-of-the-art approach PANTA.
- CAT also outperforms PANTA on four real-world GitHub projects released after the LLM's training cutoff, and an ablation study confirms that both the call-chain and dependency contexts contribute to the gains.
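To make the two core ideas above concrete, here is a minimal sketch in Java of (1) folding call-chain and dependency context into a generation prompt and (2) an iterative fix loop that retries generation with feedback. All names (`CallChainPromptBuilder`, `buildPrompt`, `fixLoop`, the example classes) are illustrative assumptions, not the paper's actual implementation:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Hypothetical sketch of CAT's two key mechanisms; names are illustrative.
public class CallChainPromptBuilder {

    // Assemble a prompt that adds call chains (from static analysis) and
    // the constructor signatures of required dependencies to the focal method.
    static String buildPrompt(String focalMethod,
                              List<String> callChains,
                              List<String> dependencyCtors) {
        StringBuilder sb = new StringBuilder();
        sb.append("Generate a JUnit test for: ").append(focalMethod).append("\n");
        sb.append("Call chains reaching this method:\n");
        for (String chain : callChains) sb.append("  ").append(chain).append("\n");
        sb.append("Constructors for required dependencies:\n");
        for (String ctor : dependencyCtors) sb.append("  ").append(ctor).append("\n");
        return sb.toString();
    }

    // Iterative test fixing: keep regenerating (with error feedback baked into
    // the regenerate function) until the candidate test passes the check or
    // the round budget is exhausted.
    static String fixLoop(UnaryOperator<String> regenerate,
                          Predicate<String> passesCheck,
                          String candidate, int maxRounds) {
        for (int i = 0; i < maxRounds && !passesCheck.test(candidate); i++) {
            candidate = regenerate.apply(candidate);
        }
        return candidate;
    }

    public static void main(String[] args) {
        String prompt = buildPrompt(
            "OrderService.placeOrder(Order)",
            List.of("Controller.submit -> OrderService.placeOrder",
                    "BatchJob.run -> OrderService.placeOrder"),
            List.of("new OrderService(PaymentClient, OrderRepo)",
                    "new Order(String id, int qty)"));
        System.out.println(prompt);
    }
}
```

In a real pipeline, `passesCheck` would compile and run the generated test, and `regenerate` would re-prompt the LLM with the compiler or runtime errors appended; the sketch mocks both as simple functions.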