Improving Performance in Classification Tasks with LCEN and the Weighted Focal Differentiable MCC Loss

arXiv cs.LG / 4/24/2026


Key Points

  • The paper introduces a modified version of LASSO-Clip-EN (LCEN) that extends it from regression to nonlinear, interpretable feature selection for classification tasks.
  • Experiments on four common binary and multiclass datasets show LCEN consistently achieves strong macro F1 and Matthews correlation coefficient (MCC) performance compared with 10 other model types.
  • LCEN maintains sparsity in classification settings by removing an average of 56% of input features, and using LCEN-selected features to retrain models yields statistically significant improvements in three experiments.
  • The study also evaluates a weighted focal differentiable MCC (diffMCC) loss function, finding it produces the best results across experiments, averaging 4.9% higher macro F1 and 8.5% higher MCC than models trained with weighted cross-entropy.
  • Overall, the results suggest LCEN can be effective for classification feature selection while diffMCC loss can train more accurate models than weighted cross-entropy for the tested tasks.
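The paper's exact weighted focal diffMCC formulation is not reproduced here, but the core idea behind a differentiable MCC loss can be sketched: replace the hard confusion-matrix counts with "soft" counts computed from predicted probabilities, then minimize 1 − MCC. The sketch below is a minimal binary NumPy illustration under that assumption; the paper's version additionally applies class weighting and a focal modulation, and in practice the loss would be written in an autodiff framework (e.g., PyTorch) so gradients flow through it.

```python
import numpy as np

def soft_mcc_loss(y_true, p_pred, eps=1e-8):
    """Soft (differentiable) binary MCC loss: 1 - MCC, where the
    confusion-matrix entries are probabilistic rather than hard 0/1 counts.
    Illustrative sketch only; the paper's diffMCC adds class weights
    and a focal term."""
    y = np.asarray(y_true, dtype=float)
    p = np.asarray(p_pred, dtype=float)
    tp = np.sum(p * y)               # soft true positives
    fp = np.sum(p * (1.0 - y))       # soft false positives
    fn = np.sum((1.0 - p) * y)       # soft false negatives
    tn = np.sum((1.0 - p) * (1.0 - y))  # soft true negatives
    # Standard MCC formula on the soft counts; eps guards the denominator.
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn) + eps)
    return 1.0 - mcc
```

Perfect probabilistic predictions drive the loss toward 0, while maximally wrong ones drive it toward 2, so minimizing it directly optimizes a smooth surrogate of the MCC metric reported in the experiments.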

Abstract

The LASSO-Clip-EN (LCEN) algorithm was previously introduced for nonlinear, interpretable feature selection and machine learning. However, its design and use were limited to regression tasks. In this work, we create a modified version of the LCEN algorithm that is suitable for classification tasks and maintains its desirable properties, such as interpretability. This modified LCEN algorithm is evaluated on four widely used binary and multiclass classification datasets. In these experiments, LCEN is compared against 10 other model types and consistently reaches high test-set macro F_1 score and Matthews correlation coefficient (MCC) metrics, higher than those of the majority of the models investigated. LCEN models for classification remain sparse, eliminating an average of 56% of all input features in the experiments performed. Furthermore, LCEN-selected features are used to retrain all models using the same data, leading to statistically significant performance improvements in three of the experiments and insignificant differences in the fourth when compared to using all features or other feature selection methods. Simultaneously, the weighted focal differentiable MCC (diffMCC) loss function is evaluated on the same datasets. Models trained with the diffMCC loss function are always the best-performing methods in these experiments, and reach test-set macro F_1 scores that are, on average, 4.9% higher and MCCs that are 8.5% higher than those obtained by models trained with the weighted cross-entropy loss. These results highlight that LCEN is an effective feature selection and machine learning algorithm for classification tasks as well, and that the diffMCC loss function can train very accurate models, surpassing the weighted cross-entropy loss in the tasks investigated.
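The abstract does not spell out the LCEN procedure, but its name suggests a three-stage pipeline: an L1-penalized (LASSO-style) fit for an initial feature selection, a clipping step that discards features with near-zero coefficients, and an elastic-net refit on the survivors. The sketch below illustrates that pipeline for a classification task using scikit-learn; the estimator choices, the `cutoff` threshold, and the penalty strengths are hypothetical placeholders, and the actual LCEN algorithm includes further steps (e.g., nonlinear feature expansion and hyperparameter search) not shown here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Synthetic data: only features 0 and 1 carry signal; the rest are noise.
y = (X[:, 0] + 2.0 * X[:, 1] > 0).astype(int)

# Stage 1 (LASSO-like): L1-penalized logistic regression for initial selection.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
coef = lasso.coef_.ravel()

# Stage 2 (Clip): drop features whose coefficient magnitude is below a cutoff.
cutoff = 1e-2  # hypothetical threshold; in LCEN this would be tuned
selected = np.flatnonzero(np.abs(coef) > cutoff)

# Stage 3 (EN): refit an elastic-net-penalized model on the surviving features.
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=1.0, max_iter=5000)
enet.fit(X[:, selected], y)
print("selected features:", selected.tolist())
```

The clipping step is what keeps the final model sparse: features that the L1 fit already pushed near zero never reach the elastic-net refit, which mirrors the paper's observation that LCEN eliminates over half of the input features on average.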