Programming Manufacturing Robots with Imperfect AI: LLMs as Tuning Experts for FDM Print Configuration Selection

arXiv cs.RO / 2026/3/24


Key Points

  • The paper studies how manufacturing robots can use imperfect AI for process expertise, using fused deposition modeling (FDM) 3D printing where print configuration strongly affects quality.
  • It proposes a modular closed-loop system that embeds an LLM’s tuning expertise inside a Bayesian optimization loop rather than using the LLM as a direct end-to-end decision oracle.
  • An approximate evaluator scores candidate configurations and returns structured diagnostics, which the LLM turns into natural-language adjustments that are then compiled into machine-actionable guidance.
  • On 100 Thingi10k parts, the LLM-guided optimization loop found the best configuration for 78% of objects with 0% likely-to-fail cases, outperforming single-shot recommendations that rarely matched the best and had 15% likely-to-fail cases.
  • The authors conclude that LLMs are most effective as constrained decision modules within evidence-driven optimization for robot programming, and expect similar benefits beyond FDM.
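The closed loop described above can be sketched in miniature. The snippet below is illustrative only: the paper's evaluator, LLM, and guidance compiler are not public, so `approximate_evaluator`, `llm_adjustment`, and `compile_guidance` are hypothetical stubs, and a simple greedy feedback loop stands in for the paper's Bayesian optimization. It shows the data flow the bullets describe: score a configuration, get structured diagnostics, turn them into natural-language advice, and compile that advice into the next machine-actionable candidate.

```python
# Hypothetical sketch of the evaluate -> diagnose -> advise -> compile loop.
# All function names and the toy quality model are assumptions, not the
# paper's actual implementation.

def approximate_evaluator(config):
    """Score a print configuration and return structured diagnostics."""
    # Toy quality model: penalize squared deviation from a hidden optimum.
    target = {"layer_height": 0.2, "print_speed": 50}
    score = -sum((config[k] - target[k]) ** 2 for k in target)
    diagnostics = {k: ("too_high" if config[k] > target[k] else "too_low")
                   for k in target}
    return score, diagnostics

def llm_adjustment(diagnostics):
    """Stand-in for the LLM: map diagnostics to natural-language advice."""
    return [f"decrease {k}" if v == "too_high" else f"increase {k}"
            for k, v in diagnostics.items()]

def compile_guidance(advice, config, step=0.1):
    """Compile natural-language advice into a machine-actionable candidate."""
    new = dict(config)
    for line in advice:
        verb, key = line.split(" ", 1)
        delta = step * abs(new[key]) or step  # proportional nudge
        new[key] += delta if verb == "increase" else -delta
    return new

def optimize(initial, iterations=30):
    """Closed loop: evaluate, diagnose, ask the LLM, compile, repeat."""
    score, diag = approximate_evaluator(initial)
    best_config, best_score = initial, score
    config = initial
    for _ in range(iterations):
        advice = llm_adjustment(diag)
        config = compile_guidance(advice, config)
        score, diag = approximate_evaluator(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

The key design point the paper argues for is visible here: the LLM never emits a configuration directly. It only produces adjustment advice grounded in evaluator diagnostics, which a deterministic compiler translates into the next candidate, keeping the optimizer in control of the search.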

Abstract

We use fused deposition modeling (FDM) 3D printing as a case study of how manufacturing robots can use imperfect AI to acquire process expertise. In FDM, print configuration strongly affects output quality. Yet novice users typically rely on default configurations, trial-and-error, or recommendations from generic AI models (e.g., ChatGPT). These strategies can produce complete prints, but they do not reliably meet specific objectives. Experts, by contrast, iteratively tune print configurations using evidence from prior prints. We present a modular closed-loop approach that treats an LLM as a source of tuning expertise, embedding it within a Bayesian optimization loop. An approximate evaluator scores each print configuration and returns structured diagnostics, which the LLM uses to propose natural-language adjustments that are compiled into machine-actionable guidance for optimization. On 100 Thingi10k parts, our LLM-guided loop achieves the best configuration on 78% of objects with 0% likely-to-fail cases, while single-shot AI model recommendations are rarely best and exhibit 15% likely-to-fail cases. These results suggest that LLMs provide more value as constrained decision modules in evidence-driven optimization loops than as end-to-end oracles for print configuration selection. We expect this result to extend to broader LLM-based robot programming.