Mapping the Course for Prompt-based Structured Prediction

arXiv cs.CL / 3/27/2026


Key Points

  • The paper argues that while LLMs perform well across language tasks without task-specific fine-tuning, they still suffer from hallucinations, inconsistencies, and weaker complex reasoning tied to autoregressive generation limits.
  • It proposes improving structured prediction by combining LLM prompting with combinatorial (symbolic) inference to enforce structural consistency during prediction.
  • Through exhaustive experiments, the authors evaluate multiple prompting strategies for estimating confidence values used by downstream symbolic inference and find that adding symbolic inference improves accuracy and consistency regardless of the prompting approach.
  • The work further shows that applying calibration and fine-tuning using structured learning objectives boosts performance on hard tasks, suggesting structured learning remains important even with modern LLMs.
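The core idea in the key points above — take per-item confidence scores from an LLM, then run combinatorial inference that only considers structurally valid outputs — can be illustrated with a minimal sketch. This is not the paper's implementation; it is a toy BIO sequence-tagging example where hypothetical per-token confidences (standing in for LLM-estimated values) are decoded with a Viterbi-style search that forbids invalid transitions, while a greedy per-token argmax can produce an inconsistent sequence.

```python
LABELS = ["O", "B", "I"]

def valid(prev, cur):
    # BIO constraint: an "I" (inside) tag may only follow "B" or "I".
    return not (cur == "I" and prev == "O")

def constrained_decode(scores):
    """Viterbi-style argmax over label sequences that satisfy valid()."""
    # Treat the sequence start like following "O", so "I" cannot open a span.
    dp = [{lab: (scores[0][lab], None) for lab in LABELS if valid("O", lab)}]
    for i in range(1, len(scores)):
        row = {}
        for cur in LABELS:
            cands = [(dp[-1][prev][0] + scores[i][cur], prev)
                     for prev in dp[-1] if valid(prev, cur)]
            if cands:
                row[cur] = max(cands)
        dp.append(row)
    # Backtrack from the highest-scoring final label.
    last = max(dp[-1], key=lambda lab: dp[-1][lab][0])
    path = [last]
    for i in range(len(scores) - 1, 0, -1):
        path.append(dp[i][path[-1]][1])
    return path[::-1]

# Toy per-token confidences (stand-ins for LLM-estimated scores).
scores = [
    {"O": 0.6, "B": 0.3, "I": 0.1},
    {"O": 0.2, "B": 0.3, "I": 0.5},
    {"O": 0.1, "B": 0.2, "I": 0.7},
]

greedy = [max(s, key=s.get) for s in scores]  # ["O", "I", "I"] -- violates BIO
print("greedy:", greedy, "constrained:", constrained_decode(scores))
```

Here the greedy decode yields `["O", "I", "I"]`, an invalid structure, while the constrained search returns `["O", "B", "I"]`, the best-scoring sequence that respects the BIO constraint — the same consistency-by-construction effect the paper attributes to adding symbolic inference on top of prompting.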

Abstract

Large language models (LLMs) have demonstrated strong performance in a wide range of language tasks without requiring task-specific fine-tuning. However, they remain prone to hallucinations and inconsistencies, and often struggle with complex reasoning, in part due to the limitations of autoregressive generation. We propose to address some of these issues, particularly for structured prediction, by combining LLMs with combinatorial inference to marry the predictive power of LLMs with the structural consistency provided by inference methods. We perform exhaustive experiments in an effort to understand which prompting strategies can best estimate confidence values for downstream symbolic inference, and find that, independent of prompting strategy, incorporating symbolic inference yields more consistent and accurate predictions than prompting alone. Finally, we show that calibration and fine-tuning with structured learning objectives further increases performance on challenging tasks, highlighting that structured learning remains valuable in the era of LLMs.