To LLM, or Not to LLM: How Designers and Developers Navigate LLMs as Tools or Teammates

arXiv cs.AI / April 20, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The study, based on interviews with 33 designers and developers at three large tech organizations, finds that decisions to use LLMs in workflows are not purely technical but depend on how practitioners frame the model’s role.
  • When LLMs are treated as tools under clear human control, participants generally view their use as acceptable and compatible with existing governance and oversight structures.
  • When LLMs are framed as teammates with shared or ambiguous agency, participants report hesitation—especially when it is unclear who is accountable for outcomes.
  • The authors propose an analytic rubric showing how “tool” versus “teammate” framing affects decision authority, accountability ownership, oversight strategies, and overall organizational acceptability, positioning the issue as a sociotechnical design-time concern.
  • Rather than focusing only on model capability after deployment, the paper argues for evaluating and designing around role framing during system design to support responsible adoption.

Abstract

Large language models (LLMs) are increasingly integrated into design and development workflows, yet decisions about their use are rarely binary or purely technical. We report findings from a constructivist grounded theory study based on interviews with 33 designers and developers across three large technology organisations. Rather than evaluating LLMs solely by capability, participants reasoned about the role an LLM could occupy within a workflow and how that role would interact with existing structures of responsibility and organisational accountability. When LLMs were framed as tools under clear human control, their use was typically acceptable and could be integrated within existing governance structures. When framed as teammates with shared or ambiguous agency, practitioners expressed hesitation, particularly when responsibility for outcomes could not be clearly justified. At the same time, participants also described productive teammate configurations in which LLMs supported collaborative reasoning while remaining embedded within explicit oversight structures. We identify tool and teammate framings as recurring ways in which designers and developers position LLMs relative to human work and present an analytic rubric describing how role framing shapes decision authority, accountability ownership, oversight strategies, and organisational acceptability. By foregrounding design-time reasoning, this work reframes “To LLM or Not to LLM” as a sociotechnical positioning problem that emerges during system design rather than during post-deployment evaluation.