The PICCO Framework for Large Language Model Prompting: A Taxonomy and Reference Architecture for Prompt Structure

arXiv cs.CL · April 17, 2026


Key Points

  • The paper introduces PICCO, a reference framework for structuring large language model prompts, aiming to reduce inconsistencies in how prompt design is described and applied.
  • PICCO was derived by rigorously synthesizing 11 previously published prompting frameworks found via a multi-database search.
  • It provides a taxonomy that clarifies distinct but related concepts including prompt frameworks, prompt elements, prompt generation, prompting techniques, and prompt engineering.
  • It proposes a five-element prompt-generation reference architecture—Persona, Instructions, Context, Constraints, and Output (PICCO)—defining each element’s function, scope, and relationships.
  • The work also discusses implementation-relevant concepts such as common prompting techniques (e.g., zero-shot, few-shot, chain-of-thought), iterative prompt engineering approaches, and responsible prompting concerns like security, privacy, bias, and trust.
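The five-element architecture above lends itself to a simple template. The sketch below is an illustration of that structure, not code from the paper: the field names mirror PICCO's element names, but the section labels, ordering, and separators are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PiccoPrompt:
    """Hypothetical container for the five PICCO elements.

    The assembly format (labeled sections joined by blank lines)
    is an illustrative assumption, not the paper's specification.
    """
    persona: str
    instructions: str
    context: str
    constraints: str
    output: str

    def render(self) -> str:
        # Emit each element as a labeled section, in PICCO order.
        sections = [
            ("Persona", self.persona),
            ("Instructions", self.instructions),
            ("Context", self.context),
            ("Constraints", self.constraints),
            ("Output", self.output),
        ]
        return "\n\n".join(f"{name}: {text}" for name, text in sections)

prompt = PiccoPrompt(
    persona="You are a careful technical editor.",
    instructions="Summarize the attached abstract in two sentences.",
    context="The abstract describes a prompting framework for LLMs.",
    constraints="Do not exceed 50 words; avoid jargon.",
    output="Plain text, no bullet points.",
)
print(prompt.render())
```

Keeping the elements as separate fields, rather than one string, makes it easy to vary a single element (say, Constraints) while holding the others fixed during iterative prompt engineering.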

Abstract

Large language model (LLM) performance depends heavily on prompt design, yet prompt construction is often described and applied inconsistently. Our purpose was to derive a reference framework for structuring LLM prompts. This paper presents PICCO, a framework derived through a rigorous synthesis of 11 previously published prompting frameworks identified through a multi-database search. The analysis yields two main contributions. First, it proposes a taxonomy that distinguishes prompt frameworks, prompt elements, prompt generation, prompting techniques, and prompt engineering as related but non-equivalent concepts. Second, it derives a five-element reference architecture for prompt generation: Persona, Instructions, Context, Constraints, and Output (PICCO). For each element, we define its function, scope, and relationship to other elements, with the goal of improving conceptual clarity and supporting more systematic prompt design. Finally, to support application of the framework, we outline key concepts relevant to implementation, including prompting techniques (e.g., zero-shot, few-shot, chain-of-thought, ensembling, decomposition, and self-critique, with selected variants), human and automated approaches to iterative prompt engineering, responsible prompting considerations such as security, privacy, bias, and trust, and priorities for future research. This work is a conceptual and methodological contribution: it formalizes a common structure for prompt specification and comparison, but does not claim empirical validation of PICCO as an optimization method.
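Three of the prompting techniques the abstract surveys can be contrasted at the prompt-string level. The sketch below is an invented illustration (the sentiment task and wording are not from the paper) of how the techniques differ in what they place before the model's query.

```python
def zero_shot(task: str) -> str:
    # Zero-shot: ask directly, with no worked examples.
    return f"Classify the sentiment of: {task}"

def few_shot(task: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: prepend labeled demonstrations before the query.
    demos = "\n".join(f"Text: {t}\nSentiment: {s}" for t, s in examples)
    return f"{demos}\nText: {task}\nSentiment:"

def chain_of_thought(task: str) -> str:
    # Chain-of-thought: elicit intermediate reasoning steps
    # before the final answer.
    return f"Classify the sentiment of: {task}\nLet's think step by step."

print(few_shot("The plot dragged.", [("A delight!", "positive"),
                                     ("Waste of time.", "negative")]))
```

Under the paper's taxonomy, these are prompting techniques layered on top of a prompt's element structure, distinct from the PICCO architecture itself.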