Generating and Evaluating Sustainable Procurement Criteria for the Swiss Public Sector using In-Context Prompting with Large Language Models
arXiv cs.CL · 2026-03-25
💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage · Models & Research
Key Points
- The paper tackles the labor-intensive challenge of converting Swiss and EU sustainability regulations into concrete, verifiable public procurement criteria used in tenders.
- It proposes a configurable, LLM-assisted pipeline that generates and evaluates catalogs of sustainability-oriented selection/award/technical criteria using in-context prompting and interchangeable LLM backends.
- Automated output validation and an LLM-based evaluation component are used to improve auditability and reduce errors compared with purely manual drafting.
- A proof-of-concept instantiates the system by ingesting structured official guidelines, and evaluation combines automated quality checks with expert comparison to a manually curated “gold standard.”
- Results indicate substantial reductions in manual drafting effort while maintaining consistency with official guidelines, and the paper documents limitations and failure modes relevant to real-world deployment.
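The pipeline described above (in-context prompting, interchangeable LLM backends, automated output validation) can be sketched roughly as follows. This is a hypothetical illustration under stated assumptions, not the paper's implementation: the prompt template, the `Criterion` structure, the criterion-type labels, and the `stub_backend` are all invented here for demonstration; a real backend would wrap an actual LLM API call.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch only -- the paper describes its pipeline at a high
# level, so all names and formats below are assumptions, not its API.

PROMPT_TEMPLATE = (
    "You are drafting sustainable public-procurement criteria.\n"
    "Guideline excerpt:\n{guideline}\n"
    "Example criteria (in-context demonstrations):\n{examples}\n"
    "Return one criterion per line as '<type>: <text>', where <type> is "
    "selection, award, or technical."
)

VALID_TYPES = {"selection", "award", "technical"}

@dataclass
class Criterion:
    kind: str
    text: str

def validate(raw: str) -> list[Criterion]:
    """Automated output validation: keep only well-formed lines."""
    out = []
    for line in raw.splitlines():
        kind, sep, text = line.partition(":")
        if sep and kind.strip().lower() in VALID_TYPES and text.strip():
            out.append(Criterion(kind.strip().lower(), text.strip()))
    return out

def generate_criteria(guideline: str, examples: list[str],
                      backend: Callable[[str], str]) -> list[Criterion]:
    """Backend is interchangeable: any prompt -> completion callable."""
    prompt = PROMPT_TEMPLATE.format(guideline=guideline,
                                    examples="\n".join(examples))
    return validate(backend(prompt))

# Stub standing in for a real LLM backend, to keep the sketch runnable.
def stub_backend(prompt: str) -> str:
    return ("award: Share of recycled material >= 30% (verified by EPD)\n"
            "not-a-type: malformed line, dropped by validation\n"
            "technical: Appliances meet EU energy label class A")

criteria = generate_criteria("Minimise embodied carbon in construction.",
                             ["award: Low-emission delivery fleet"],
                             stub_backend)
print([c.kind for c in criteria])  # → ['award', 'technical']
```

The validation step mirrors the auditability idea in the summary: the generator's free-text output is parsed against a fixed schema, and anything that does not conform is rejected rather than silently passed into a tender catalog.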