Benchmarking Logistic Regression, SVM, and LightGBM Against BiLSTM with Attention for Sentiment Analysis on Indonesian Product Reviews

arXiv cs.CL / 4/29/2026

📰 News · Models & Research

Key Points

  • The study benchmarks an AutoML-based machine learning pipeline (via PyCaret) against a deep learning BiLSTM with attention model for binary sentiment analysis on Indonesian e-commerce product reviews.
  • The dataset contains 19,728 balanced samples (equal positive/negative), enabling evaluation using 10-fold stratified cross-validation for the ML models and a held-out test set for the DL model.
  • Among ML methods, Logistic Regression performed best, reaching 97.26% accuracy and 97.26% F1-score, outperforming linear-kernel SVM and LightGBM in the reported setup.
  • The BiLSTM with Attention model achieved nearly matching results, with 97.24% accuracy and 97.24% F1-score on 3,946 held-out test samples.
  • The authors conclude that well-preprocessed traditional ML approaches with good feature extraction can closely match or slightly beat more complex sequential DL architectures while requiring less computation.
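The classical ML protocol described above can be sketched in plain scikit-learn. This is a hedged illustration, not the paper's actual pipeline: the study ran its models through PyCaret, and TF-IDF is an assumed feature extractor (the paper's exact preprocessing recipe is not given here). The review texts below are synthetic placeholders.

```python
# Sketch of the classical ML setup: text features + Logistic Regression
# (the study's best ML model) evaluated with 10-fold stratified CV.
# NOTE: the paper used PyCaret; scikit-learn and TF-IDF are assumptions here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Tiny synthetic stand-in for the 19,728 balanced Indonesian reviews.
reviews = [
    "barang bagus sesuai deskripsi",      # positive
    "pengiriman cepat penjual ramah",     # positive
    "produk mengecewakan tidak sesuai",   # negative
    "kualitas buruk cepat rusak",         # negative
] * 5  # repeat so every fold contains both classes
labels = [1, 1, 0, 0] * 5

pipeline = make_pipeline(
    TfidfVectorizer(),                  # sparse lexical features
    LogisticRegression(max_iter=1000),  # best-performing ML model in the study
)

# 10-fold stratified cross-validation, mirroring the paper's ML evaluation.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(pipeline, reviews, labels, cv=cv, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.4f}")
```

Swapping `LogisticRegression` for `LinearSVC` or `lightgbm.LGBMClassifier` reproduces the other two baselines under the same protocol.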

Abstract

Sentiment analysis of product reviews on e-commerce platforms plays a critical role in automatically understanding customer satisfaction and providing actionable insights for sellers seeking to improve product quality. This paper presents a comprehensive benchmarking study comparing a Machine Learning (ML) approach via the PyCaret AutoML framework against a Deep Learning (DL) approach based on a Bidirectional Long Short-Term Memory (BiLSTM) architecture with an Attention mechanism for binary sentiment classification on Indonesian product reviews. The dataset comprises 19,728 samples balanced equally between positive and negative reviews. For the ML approach, three prominent algorithms were evaluated via 10-fold stratified cross-validation: Logistic Regression (LR), Support Vector Machine (SVM) with a linear kernel, and Light Gradient Boosting Machine (LightGBM). Logistic Regression achieved the best ML performance with an accuracy of 97.26% and an F1-score of 97.26%. The BiLSTM with Attention model, evaluated on 3,946 held-out test samples, achieved an accuracy of 97.24% and an F1-score of 97.24%. These comparative results demonstrate that traditional ML algorithms with proper preprocessing and feature extraction can compete closely with, and even marginally outperform, more complex sequential DL architectures on high-dimensional datasets, while simultaneously offering greater computational efficiency.
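For readers unfamiliar with the DL side of the comparison, a BiLSTM-with-attention classifier can be sketched as follows. This is a minimal PyTorch illustration of the general architecture class, not the authors' model: all hyperparameters (embedding size, hidden size, vocabulary size) and the additive attention scorer are illustrative assumptions.

```python
# Minimal sketch of a BiLSTM + attention binary classifier (PyTorch).
# Architecture class only; hyperparameters are NOT from the paper.
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # scores each time step
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)              # (batch, seq, embed)
        h, _ = self.bilstm(x)                      # (batch, seq, 2*hidden)
        # Attention: softmax over time steps, then weighted sum of states.
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (batch, seq)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)          # (batch, 2*hidden)
        return self.fc(context)                    # class logits

model = BiLSTMAttention(vocab_size=5000)
dummy_batch = torch.randint(1, 5000, (4, 32))      # 4 sequences of 32 token ids
logits = model(dummy_batch)                        # shape: (4, 2)
```

The attention layer lets the classifier weight sentiment-bearing tokens instead of relying solely on the final hidden state, which is the usual motivation for adding it on top of a BiLSTM.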