Beyond the Parameters: A Technical Survey of Contextual Enrichment in Large Language Models: From In-Context Prompting to Causal Retrieval-Augmented Generation
arXiv cs.CL / 4/6/2026
Key Points
- The paper surveys how large language models can be augmented with more structured information at inference time, framing methods along a single axis of “degree of structured context.”
- It connects in-context prompting and prompt engineering with retrieval-based approaches such as RAG, GraphRAG, and CausalRAG, all aimed at overcoming the limitations of static parameters and finite context windows.
- The survey proposes an explicit literature-screening protocol and a claim-audit framework to separate higher-confidence results from emerging findings across the reviewed work.
- It concludes with a deployment-oriented decision framework and concrete research priorities aimed at improving trustworthiness in retrieval-augmented NLP systems.
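The retrieval-augmented methods the survey covers all share one core move: fetch relevant passages at inference time and place them in the prompt, so a frozen model can use information outside its parameters. A minimal sketch of that retrieve-then-prompt loop, using a toy bag-of-words similarity as a stand-in for a real embedding model (all names and the corpus here are illustrative, not from the paper):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts.
    # A real RAG system would use a dense encoder instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Prepend retrieved passages as context before the question --
    # the core step that lets a frozen model use fresh information.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "GraphRAG organizes retrieved passages into a knowledge graph.",
    "Context windows limit how many tokens a model can attend to.",
    "CausalRAG filters retrieved evidence by causal relevance.",
]
print(build_prompt("What does GraphRAG do with retrieved passages?", corpus))
```

GraphRAG and CausalRAG differ from this sketch chiefly in the retrieval stage: rather than ranking flat passages, they structure the retrieved evidence (as a graph, or by causal relevance) before it is injected into the prompt.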