Intelligent Materials Modelling: Large Language Models Versus Partial Least Squares Regression for Predicting Polysulfone Membrane Mechanical Performance

arXiv cs.AI / 3/17/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • LLMs outperformed the chemometric baseline for elongation at break (EL), delivering about 40% RMSE reductions and lowering MAE from 11.63% to 5.18% in data-scarce settings.
  • For Young's modulus (E) and tensile strength (TS), LLMs showed statistical parity with PLS, indicating linear methods can be competitive when structure-property correlations are strong.
  • LLMs exhibited much lower run-to-run variability (≤3%) compared with PLS (up to 47%), suggesting greater robustness in small-data regimes.
  • The study proposes a hybrid architecture that combines LLM-encoded knowledge with interpretable, latent-variable models to optimize small-data materials discovery.
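The headline improvements above follow directly from the reported error values. A quick arithmetic sketch (the error figures come from the key points; the helper function name is ours):

```python
def pct_reduction(baseline, improved):
    """Relative error reduction, in percent."""
    return 100.0 * (baseline - improved) / baseline

# MAE for elongation at break: 11.63% (PLS baseline) down to 5.18% (LLM)
mae_gain = pct_reduction(11.63, 5.18)
print(round(mae_gain, 1))  # ≈ 55.5
```

Note that the relative MAE reduction (≈55%) is larger than the ~40% RMSE reduction: RMSE weights large errors more heavily, so the two metrics need not shrink by the same proportion.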

Abstract

Predicting the mechanical properties of polysulfone (PSF) membranes from structural descriptors remains challenging due to the extreme data scarcity typical of experimental studies. To investigate this issue, this study benchmarked knowledge-driven inference using four large language models (LLMs: DeepSeek-V3, DeepSeek-R1, ChatGPT-4o, and GPT-5) against partial least squares (PLS) regression for predicting Young's modulus (E), tensile strength (TS), and elongation at break (EL) from pore diameter (PD), contact angle (CA), thickness (T), and porosity (P) measurements. These knowledge-driven approaches demonstrated property-specific advantages over the chemometric baseline. For EL, LLMs achieved statistically significant improvements, with DeepSeek-R1 and GPT-5 delivering root mean square error (RMSE) reductions of 40.5% and 40.3%, respectively, and reducing mean absolute error from 11.63 ± 5.34% to 5.18 ± 0.17%. Run-to-run variability was markedly lower for LLMs (≤ 3%) than for PLS (up to 47%). E and TS predictions showed statistical parity between approaches (q ≥ 0.05), indicating sufficient performance of linear methods for properties with strong structure-property correlations. Error topology analysis revealed systematic regression-to-the-mean behaviour dominated by data-regime effects rather than model-family limitations. These findings establish that LLMs excel for non-linear, constraint-sensitive properties under bootstrap instability, while PLS remains competitive for linear relationships requiring interpretable latent-variable decompositions. The demonstrated complementarity suggests hybrid architectures leveraging LLM-encoded knowledge within interpretable frameworks may optimise small-data materials discovery.
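To make the chemometric baseline concrete, the sketch below implements a minimal single-component PLS1 model (NIPALS-style) and evaluates it with RMSE, mirroring the paper's descriptor-to-property setup. All data here are synthetic placeholders standing in for the four descriptors (PD, CA, T, P), not the study's measurements:

```python
import numpy as np

def pls1_fit(X, y):
    """Fit a one-latent-variable PLS1 model; return prediction parameters."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    w = Xc.T @ yc                 # weight vector: direction of max covariance with y
    w /= np.linalg.norm(w)
    t = Xc @ w                    # latent scores
    q = (yc @ t) / (t @ t)        # regression of y on the latent score
    return x_mean, y_mean, w, q

def pls1_predict(params, X):
    x_mean, y_mean, w, q = params
    return y_mean + q * ((X - x_mean) @ w)

rng = np.random.default_rng(0)
# 30 samples x 4 hypothetical descriptors (pore diameter, contact angle,
# thickness, porosity); the target loads on two of them plus noise.
X = rng.normal(size=(30, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(scale=0.1, size=30)

params = pls1_fit(X, y)
pred = pls1_predict(params, X)
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```

In practice a multi-component PLS (e.g. scikit-learn's `PLSRegression`) would be used; the single-component version above just shows the latent-variable idea that makes PLS interpretable in small-data regimes.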