LLM attribution analysis across different fine-tuning strategies and model scales for automated code compliance
arXiv cs.AI / 4/20/2026
Key Points
- The paper tackles a gap in automated code compliance research by analyzing how training choices influence LLM interpretive behavior rather than treating the models as black boxes.
- Using a perturbation-based attribution method (sketched below, after this list), it compares interpretive behavior across full fine-tuning (FFT), LoRA, and quantized LoRA, as well as across model scales (parameter counts).
- It finds that FFT yields attribution patterns that are statistically distinct and more focused than those from parameter-efficient fine-tuning approaches.
- As model scale increases, the models adopt more specific interpretive strategies (e.g., emphasizing numerical constraints and rule identifiers), but semantic-similarity scores (illustrated in the second sketch below) plateau for models larger than 7B parameters.
- The findings are intended to improve explainability for critical, regulation-based applications in the AEC (Architecture, Engineering, and Construction) industry.
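
The paper's exact attribution procedure is not reproduced in this summary. The snippet below is a minimal sketch of one common perturbation-based scheme, assuming a HuggingFace causal LM, word-level deletion of the code clause as the perturbation, and the drop in log-probability of a fixed compliance verdict as the attribution score. The model name, clause, and question are illustrative placeholders, not the paper's setup.

```python
# Minimal perturbation-based attribution sketch for a causal LM (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; substitute the fine-tuned model under study
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def answer_logprob(prompt: str, answer: str) -> float:
    """Sum of log-probabilities the model assigns to `answer` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at positions P-1 .. end-1 predict the answer tokens.
    answer_logits = logits[0, prompt_ids.shape[1] - 1 : -1]
    log_probs = torch.log_softmax(answer_logits, dim=-1)
    token_lp = log_probs.gather(1, answer_ids[0].unsqueeze(1)).squeeze(1)
    return token_lp.sum().item()

def perturbation_attribution(clause: str, question: str, answer: str):
    """Score each clause word by the drop in answer log-prob when that word is removed."""
    words = clause.split()  # word-level perturbation for simplicity
    base = answer_logprob(f"{clause}\n{question}", answer)
    scores = []
    for i in range(len(words)):
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - answer_logprob(f"{perturbed}\n{question}", answer)))
    return scores

# Hypothetical example: which parts of a code clause drive the compliance verdict?
clause = "Stair risers shall not exceed 7.75 inches in height per rule R311.7.5.1."
attributions = perturbation_attribution(clause, "Is an 8-inch riser compliant?", "No")
print(sorted(attributions, key=lambda kv: -kv[1])[:5])
```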
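
The plateau claim presupposes some semantic-similarity metric between generated interpretations and reference answers; the summary does not specify the paper's metric, so the sketch below assumes a common setup, cosine similarity between sentence embeddings from an off-the-shelf encoder. The embedding model, outputs, and reference text are invented for illustration.

```python
# Hedged sketch of semantic-similarity scoring (assumed setup, not the paper's).
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder

def semantic_similarity(generated: str, reference: str) -> float:
    """Cosine similarity between sentence embeddings of two texts."""
    emb = embedder.encode([generated, reference], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

# Hypothetical outputs from models of different scales, scored against one reference.
reference = "The riser height exceeds the 7.75-inch maximum, so the design is non-compliant."
outputs = {
    "1B-model": "The stair is not allowed because it is too tall.",
    "7B-model": "An 8-inch riser exceeds the 7.75-inch limit in R311.7.5.1, so it is non-compliant.",
    "13B-model": "Non-compliant: 8 in > 7.75 in maximum riser height (R311.7.5.1).",
}
for name, text in outputs.items():
    print(name, round(semantic_similarity(text, reference), 3))
```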