Inverting Neural Networks: New Methods to Generate Neural Network Inputs from Prescribed Outputs

arXiv cs.CV / 3/24/2026


Key Points

  • The paper tackles the inverse problem of finding input images that a neural network maps to specified class outputs, aiming to reveal what recognizable features correspond to those classes.
  • It proposes two general inversion strategies: a forward-pass approach using root-finding with the input Jacobian, and a backward-pass approach that inverts layers iteratively while injecting random null-space vectors.
  • The authors validate the methods on both transformer-based architectures and simpler sequential linear-layer networks.
  • Results show the techniques can generate random-like input images that still achieve near-perfect classification scores, highlighting vulnerabilities in how these networks learn and represent input spaces.
  • The work argues these methods provide broader “coverage” of possible inputs that satisfy inverse mappings, potentially improving understanding of network behavior and security weaknesses.
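The forward-pass strategy summarized above, root-finding on the network output using the Jacobian with respect to the input, can be sketched on a toy network. Everything here (the two-layer tanh classifier, its sizes, and the Gauss-Newton solver) is an illustrative stand-in, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a classifier: logits(x) = W2 @ tanh(W1 @ x).
# Sizes and weights are illustrative, not from the paper.
W1 = 0.5 * rng.normal(size=(8, 4))
W2 = 0.5 * rng.normal(size=(3, 8))

def logits(x):
    return W2 @ np.tanh(W1 @ x)

def input_jacobian(x):
    # d(logits)/dx = W2 @ diag(1 - tanh(W1 @ x)**2) @ W1
    h = np.tanh(W1 @ x)
    return W2 @ ((1 - h**2)[:, None] * W1)

def invert(target, x0, steps=50):
    """Gauss-Newton root finding for logits(x) = target."""
    x = x0.copy()
    for _ in range(steps):
        r = logits(x) - target
        # Least-squares step: the Jacobian is non-square because the
        # input has more dimensions than there are classes.
        dx, *_ = np.linalg.lstsq(input_jacobian(x), r, rcond=None)
        x = x - dx
    return x

# Pick a reachable target by running a known input forward,
# then recover a preimage starting from a different point.
x_true = 0.3 * rng.normal(size=4)
target = logits(x_true)
x_inv = invert(target, x0=np.zeros(4))
```

Because the input space has more dimensions than there are classes, the root-finding problem is underdetermined, which is why many different inputs, including random-looking ones, can satisfy the same prescribed output.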

Abstract

Neural network systems describe complex mappings that can be very difficult to understand. In this paper, we study the inverse problem of determining the input images that get mapped to specific neural network classes. Ultimately, we expect that these images contain recognizable features that are associated with their corresponding classes. We introduce two general methods for solving the inverse problem. In our forward pass method, we develop an inversion scheme based on a root-finding algorithm and the Jacobian with respect to the input image. In our backward pass method, we iteratively invert each layer, starting at the top. During the inversion process, we add random vectors sampled from the null-space of each linear layer. We demonstrate our new methods on both transformer architectures and sequential networks based on linear layers. Unlike previous methods, we show that our new methods are able to produce random-like input images that yield near-perfect classification scores in all cases, revealing vulnerabilities in the underlying networks. Hence, we conclude that the proposed methods provide a more comprehensive coverage of the input image spaces that solve the inverse mapping problem.
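The null-space injection step in the backward pass can be illustrated on a single linear layer. In this hypothetical sketch (sizes chosen for illustration), a minimum-norm preimage comes from the Moore-Penrose pseudoinverse, and adding any vector projected into the null space of the weight matrix yields a different input with the exact same layer output:

```python
import numpy as np

rng = np.random.default_rng(1)

# One linear layer y = W @ x with more inputs than outputs, so the
# inverse is underdetermined (sizes are illustrative, not from the paper).
m, n = 3, 6
W = rng.normal(size=(m, n))
y = rng.normal(size=m)           # prescribed layer output

W_pinv = np.linalg.pinv(W)
x_min = W_pinv @ y               # minimum-norm preimage of y
P_null = np.eye(n) - W_pinv @ W  # projector onto the null space of W

# Injecting a random null-space vector gives another valid preimage.
z = rng.normal(size=n)
x_alt = x_min + P_null @ z
```

Since `W @ P_null` is (numerically) zero, both `x_min` and `x_alt` map to the same output `y`; repeating this per layer while walking backward through the network produces many distinct inputs for one prescribed classification.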