A Systematic Literature Review for Transformer-Based Software Vulnerability Detection
arXiv cs.LG / 4/29/2026
Key Points
- The paper presents a systematic literature review of 80 studies (2021–2025) on using transformer models to detect software vulnerabilities.
- It categorizes transformer architectures into encoder, decoder, and combined designs, and compares both pre-trained and fine-tuned approaches across inputs like source code, logs, and smart contracts.
- The review evaluates multiple research dimensions including trends, datasets/sources, programming languages, transformer frameworks, detection granularity, metrics, reference models, vulnerability types, and experimental setups.
- It highlights common benchmarks and baselines used in the literature, while identifying key technical challenges such as data imbalance, limited interpretability, scalability constraints, and weak cross-language generalization.
- The authors conclude that synthesizing these findings can help researchers and practitioners build more reliable, accurate, and interpretable transformer-based vulnerability detection systems, while pointing to open research gaps.
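Among the review's dimensions are evaluation metrics and detection granularity, and one of its flagged challenges is data imbalance, under which plain accuracy is misleading because vulnerable samples are rare. As a hedged illustration (not code from the paper; the labels below are invented), here is how the positive-class precision, recall, and F1 commonly reported for function-level detection are computed:

```python
# Illustrative sketch: precision, recall, and F1 for the positive
# (vulnerable) class, the metrics typically reported in the surveyed
# studies for function-level detection. Labels: 1 = vulnerable, 0 = safe.

def detection_metrics(y_true, y_pred):
    """Return (precision, recall, f1) for the vulnerable class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented, imbalanced example: 2 vulnerable functions out of 10.
# The classifier finds one, misses one, and raises one false alarm,
# yet its accuracy would still be 80% -- hence the focus on P/R/F1.
y_true = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
precision, recall, f1 = detection_metrics(y_true, y_pred)
```

On this toy split, precision, recall, and F1 are all 0.5 even though accuracy is 0.8, which is the imbalance effect the review highlights.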