Sparse $\epsilon$ insensitive zone bounded asymmetric elastic net support vector machines for pattern classification

arXiv stat.ML / 4/10/2026


Key Points

  • The paper proposes an $\varepsilon$-insensitive bounded asymmetric elastic net loss integrated into an SVM (called $\varepsilon$-BAEN-SVM) to jointly improve robustness to noise and promote sparsity in pattern classification.
  • It provides theoretical results that samples within the $\varepsilon$-insensitive band are not support vectors (support sparsity), and that the loss has a bounded influence function (robustness).
  • The authors address a non-convex optimization challenge by designing a half-quadratic algorithm with clipping dual coordinate descent, which solves the problem through a sequence of weighted subproblems for better computational efficiency.
  • Experiments on simulated and real datasets, including tests under a Gaussian kernel, show that $\varepsilon$-BAEN-SVM outperforms traditional and existing robust SVM variants, achieving higher accuracy and stronger noise insensitivity.
  • Overall, the method is positioned as a practical approach that balances sparsity and robustness, with statistical tests supporting its superiority.
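The paper's exact loss formula is not reproduced in this summary, but the three ingredients the bullets name ($\varepsilon$-insensitive zone, elastic-net growth, bounded clipping) can be sketched as a hypothetical one-sided margin loss. The function name, the bound `theta`, and the mixing weight `lam` below are illustrative assumptions, not the authors' notation:

```python
import numpy as np

def eps_baen_loss(margin, eps=0.1, theta=2.0, lam=0.5):
    # Schematic sketch (NOT the paper's exact formula):
    # - zero inside the eps-insensitive zone around the margin target 1,
    # - elastic-net (L1 + L2) growth outside the zone,
    # - clipped at theta, so outliers have bounded influence.
    r = np.maximum(1.0 - margin - eps, 0.0)   # violation beyond the band
    en = lam * r + (1.0 - lam) * r ** 2       # elastic-net mixture
    return np.minimum(en, theta)              # bound -> robustness
```

The clipping at `theta` is what makes the influence function bounded (and the problem non-convex), while the flat $\varepsilon$-zone is what removes well-classified samples from the support set.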

Abstract

Existing support vector machine (SVM) models are sensitive to noise and lack sparsity, which limits their performance. To address these issues, we combine the elastic net loss with a robust loss framework to construct a sparse $\varepsilon$-insensitive bounded asymmetric elastic net loss, and integrate it with SVM to build the $\varepsilon$-Insensitive Zone Bounded Asymmetric Elastic Net Loss-based SVM ($\varepsilon$-BAEN-SVM). $\varepsilon$-BAEN-SVM is both sparse and robust. Sparsity is proven by showing that samples inside the $\varepsilon$-insensitive band are not support vectors. Robustness is theoretically guaranteed because the influence function is bounded. To solve the non-convex optimization problem, we design a half-quadratic algorithm based on clipping dual coordinate descent. It transforms the problem into a series of weighted subproblems, improving computational efficiency via the $\varepsilon$ parameter. Experiments on simulated and real datasets show that $\varepsilon$-BAEN-SVM outperforms traditional and existing robust SVMs, balancing sparsity and robustness well in noisy environments. Statistical tests confirm its superiority. Under the Gaussian kernel, it achieves better accuracy and noise insensitivity, validating its effectiveness and practical value.
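The half-quadratic idea of handling a bounded non-convex loss by alternating between fixing per-sample weights and solving a weighted convex subproblem can be illustrated with a toy reweighting loop. This is a deliberate simplification: the inner solve below uses plain gradient steps on a weighted hinge objective, not the paper's clipping dual coordinate descent, and all parameter names are assumptions:

```python
import numpy as np

def half_quadratic_svm(X, y, lam=0.1, theta=2.0, n_outer=10):
    """Toy half-quadratic scheme (illustrative only).

    Outer step: fix binary half-quadratic weights s_i that zero out
    samples whose (unbounded) loss exceeds the clipping bound theta.
    Inner step: approximately solve the resulting weighted convex
    hinge-loss subproblem with a few gradient steps.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_outer):
        margins = y * (X @ w)
        hinge = np.maximum(1.0 - margins, 0.0)
        # down-weight (here: drop) samples whose loss is clipped
        s = (hinge < theta).astype(float)
        for _ in range(100):
            m = y * (X @ w)
            active = (m < 1.0).astype(float) * s   # weighted hinge subgradient
            grad = lam * w - X.T @ (active * y) / n
            w -= 0.1 * grad
    return w
```

Because the weights `s` are fixed inside each inner solve, every subproblem is convex, which is what makes the overall non-convex objective tractable as a short sequence of easy problems.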