WARP: Guaranteed Inner-Layer Repair of NLP Transformers

arXiv cs.LG / 4/2/2026


Key Points

  • The paper introduces WARP (Weight-Adjusted Repair with Provability), a constraint-based framework to repair adversarial vulnerabilities in NLP Transformer models beyond the final layer.
  • WARP formulates repair as a convex quadratic program using a first-order linearization of the logit gap, enabling tractable optimization over a high-dimensional parameter space.
  • For each input, WARP provides three types of guarantees: a positive margin for correct classification, preservation constraints over a chosen remain set, and a certified robustness radius via Lipschitz continuity.
  • To maintain feasibility across different Transformer architectures, the method adds a sensitivity-based preprocessing step that conditions the optimization landscape.
  • Experiments on encoder-only Transformers with varying layer architectures show that the theoretical guarantees hold in practice while the repairs improve robustness to adversarial perturbations.
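The core idea behind the convex formulation can be illustrated with a toy version of the margin constraint. The sketch below is not the paper's full quadratic program (which handles many samples, preservation constraints, and high-dimensional inner-layer weights); it shows the single-constraint case, where the minimum-norm weight update satisfying a linearized logit-gap constraint has a closed form. All names and numbers are hypothetical.

```python
import numpy as np

def minimal_norm_repair(grad_gap, gap_0, target_margin):
    """Toy single-constraint repair step (illustrative, not the paper's full QP).

    Find the minimum-norm weight update delta such that the first-order
    linearization of the logit gap,  gap_0 + grad_gap @ delta,  meets the
    target margin. With one linear constraint this QP has a closed-form
    solution: project zero onto the feasible half-space.
    """
    slack = target_margin - gap_0
    if slack <= 0:
        # Constraint already satisfied: the zero update is optimal.
        return np.zeros_like(grad_gap)
    # Projection of the origin onto {delta : grad_gap @ delta >= slack}.
    return (slack / np.dot(grad_gap, grad_gap)) * grad_gap

# Hypothetical input: current gap is -0.5 (misclassified), target margin 0.1.
g = np.array([1.0, -2.0, 0.5])
delta = minimal_norm_repair(g, gap_0=-0.5, target_margin=0.1)
new_gap = -0.5 + g @ delta  # meets the target margin of 0.1
```

With many samples and preservation constraints the projection no longer has a closed form, which is why the paper casts repair as a general convex QP solved iteratively.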

Abstract

Transformer-based NLP models remain vulnerable to adversarial perturbations, yet existing repair methods face a fundamental trade-off: gradient-based approaches offer flexibility but lack verifiability and often overfit; methods that do provide repair guarantees are restricted to the final layer or small networks, significantly limiting the parameter search space available for repair. We present WARP (Weight-Adjusted Repair with Provability), a constraint-based repair framework that extends repair beyond the last layer of Transformer models. WARP formulates repair as a convex quadratic program derived from a first-order linearization of the logit gap, enabling tractable optimization over a high-dimensional parameter space. Under the condition that the first-order approximation holds, this formulation induces three per-sample guarantees: (i) a positive margin constraint ensuring correct classification on repaired inputs, (ii) preservation constraints over a designated remain set, and (iii) a certified robustness radius derived from Lipschitz continuity. To ensure feasibility across varying model architectures, we introduce a sensitivity-based preprocessing step that conditions the optimization landscape accordingly. We further show that the iterative optimization procedure converges to solutions satisfying all repair constraints under mild assumptions. Empirical evaluation on encoder-only Transformers with varying layer architectures validates that these guarantees hold in practice while improving robustness to adversarial inputs. Our results demonstrate that guaranteed, generalizable Transformer repair is achievable through principled constraint-based optimization.
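The third guarantee, a certified robustness radius from Lipschitz continuity, follows a standard argument: if the logit-gap function is L-Lipschitz in the input and the repaired input has gap g > 0, no perturbation of norm below g / L can flip the prediction. The sketch below uses this simplified form; the paper's exact constant and norm may differ, and the values are hypothetical.

```python
def certified_radius(logit_gap, lipschitz_const):
    """Illustrative Lipschitz certificate (simplified form).

    If the gap between the correct-class logit and the runner-up is
    logit_gap >= 0, and the gap function is L-Lipschitz in the input,
    then any perturbation of norm < logit_gap / L leaves the gap
    positive, so the prediction cannot flip within that radius.
    """
    if logit_gap <= 0:
        return 0.0  # No certificate for misclassified inputs.
    return logit_gap / lipschitz_const

# Hypothetical values: gap 0.8, Lipschitz constant 4.0.
radius = certified_radius(logit_gap=0.8, lipschitz_const=4.0)
```

This also explains why the margin constraint and the robustness radius are linked: enlarging the post-repair margin directly enlarges the certified radius for the same Lipschitz bound.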