Is Large Language Model Performance on Reasoning Tasks Impacted by Different Ways Questions Are Asked?

arXiv cs.CL, 29 Apr 2026


Key Points

  • The study examines whether changing the way questions are phrased (multiple-choice, true/false, short/long answers) affects LLM accuracy on reasoning tasks.
  • Across five LLMs and two evaluation dimensions (reasoning-step accuracy and final-answer selection accuracy), performance varies significantly by question type.
  • Reasoning-step accuracy does not always predict how accurately the model picks the final answer, indicating a potential mismatch between intermediate reasoning and outcome selection.
  • The number of answer options and specific wording in the questions can meaningfully influence LLM performance.
  • Overall, the paper highlights that evaluation results for reasoning benchmarks may depend heavily on prompt/question formatting rather than only model reasoning capability.
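To make the comparison concrete, here is a minimal sketch of how a single reasoning question can be rendered in the three formats the study compares. This is not the paper's evaluation harness; the function names, prompt wording, and example question are illustrative assumptions.

```python
# Sketch (not the paper's harness): rendering one reasoning question in the
# three question types compared by the study. All wording is an assumption.

def as_multiple_choice(question: str, options: list[str]) -> str:
    """Render as a multiple-choice prompt with lettered options."""
    letters = "ABCDEFGH"
    lines = [question]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(options)]
    lines.append("Answer with the letter of the correct option.")
    return "\n".join(lines)

def as_true_false(claim: str) -> str:
    """Render as a true/false judgment over a single claim."""
    return f"Statement: {claim}\nIs this statement true or false?"

def as_open_ended(question: str) -> str:
    """Render as a short-answer prompt that also elicits reasoning steps."""
    return f"{question}\nShow your reasoning step by step, then state the final answer."

q = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
mc = as_multiple_choice(q, ["45 km/h", "60 km/h", "80 km/h", "90 km/h"])
tf = as_true_false("A train that travels 60 km in 45 minutes averages 80 km/h.")
oe = as_open_ended(q)
```

The same underlying problem yields three quite different prompts, which is exactly the variable the study manipulates: any accuracy gap between `mc`, `tf`, and `oe` reflects formatting sensitivity rather than a change in the reasoning task itself.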

Abstract

Large Language Models (LLMs) have been evaluated using diverse question types, e.g., multiple-choice, true/false, and short/long answers. This study addresses a previously unexplored question: how different question types affect LLM accuracy on reasoning tasks. We investigate the performance of five LLMs on three types of questions using quantitative and deductive reasoning tasks. The performance metrics include accuracy in the reasoning steps and in choosing the final answer. Key findings: (1) Significant differences exist in LLM performance across question types. (2) Reasoning accuracy does not necessarily correlate with final-selection accuracy. (3) The number of options and the choice of words influence LLM performance.
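The abstract's second finding rests on scoring the reasoning steps and the final answer as separate metrics. The sketch below illustrates that separation with invented grading records (an assumption, not the paper's data or scoring rubric), showing how the two metrics can diverge.

```python
# Sketch of the two metrics the study separates: reasoning-step accuracy vs.
# final-answer accuracy. The records below are invented to show divergence.

def step_accuracy(records):
    """Fraction of graded reasoning steps marked correct, pooled over records."""
    steps = [ok for r in records for ok in r["steps_correct"]]
    return sum(steps) / len(steps)

def final_accuracy(records):
    """Fraction of records whose final answer matches the gold answer."""
    return sum(r["final"] == r["gold"] for r in records) / len(records)

records = [
    # Sound intermediate steps but the wrong final pick...
    {"steps_correct": [True, True, True], "final": "B", "gold": "C"},
    # ...and a correct final pick despite one flawed step.
    {"steps_correct": [True, False], "final": "A", "gold": "A"},
]
print(step_accuracy(records))   # 0.8
print(final_accuracy(records))  # 0.5
```

A model can thus score well on one dimension and poorly on the other, which is why reporting only final-answer accuracy can mask (or exaggerate) genuine reasoning ability.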