MyoVision: A Mobile Research Tool and NEATBoost-Attention Ensemble Framework for Real-Time Chicken Breast Myopathy Detection

arXiv cs.LG / 4/16/2026


Key Points

  • The paper introduces MyoVision, a low-cost smartphone-based transillumination imaging framework to classify chicken breast myopathies (Normal, Woody Breast, Spaghetti Meat) without destructive testing.
  • It captures 14-bit RAW images, extracts structural texture descriptors for detecting internal tissue abnormalities, and applies a NEATBoost-Attention Ensemble for multi-class classification.
  • The proposed NEATBoost-Attention model uses NeuroEvolution of Augmenting Topologies (NEAT) to automatically discover model hyperparameters and maintain architecture diversity, then combines LightGBM and attention-based MLP components through weighted fusion.
  • On a dataset of 336 fillets, the approach reports 82.4% test accuracy (F1 = 0.83), outperforming conventional ML/DL baselines and approaching performance claimed by much more expensive hyperspectral systems.
  • Beyond classification, the work presents a reproducible consumer-grade RGB-D acquisition pipeline intended to support scalable multimodal meat quality research.
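The ensemble's core operation, as described above, is a weighted fusion of class probabilities from the LightGBM and attention-MLP branches. The paper does not publish its fusion code, so the sketch below is a minimal illustration under assumed details: the fusion weight `w=0.6`, the three-class label order, and the per-branch probabilities are all hypothetical, standing in for trained model outputs.

```python
import numpy as np

def fuse_probabilities(p_gbm, p_mlp, w):
    """Weighted fusion of two classifiers' class-probability matrices.

    w is the weight on the boosted-tree branch; (1 - w) goes to the
    attention-MLP branch. Rows are renormalized so each sums to 1.
    """
    fused = w * p_gbm + (1.0 - w) * p_mlp
    return fused / fused.sum(axis=1, keepdims=True)

# Illustrative (made-up) probabilities over the paper's three classes.
labels = ["Normal", "Woody Breast", "Spaghetti Meat"]
p_gbm = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3]])   # stand-in for LightGBM predict_proba
p_mlp = np.array([[0.5, 0.2, 0.3],
                  [0.1, 0.7, 0.2]])   # stand-in for attention-MLP softmax

fused = fuse_probabilities(p_gbm, p_mlp, w=0.6)  # hypothetical weight
preds = [labels[i] for i in fused.argmax(axis=1)]
```

In the paper's framework the weight itself is not hand-set: it is one of the quantities the NEAT search optimizes alongside the component hyperparameters.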

Abstract

Woody Breast (WB) and Spaghetti Meat (SM) myopathies significantly impact poultry meat quality, yet current detection methods rely either on subjective manual evaluation or costly laboratory-grade imaging systems. We address the problem of low-cost, non-destructive multi-class myopathy classification using consumer smartphones. MyoVision is introduced as a mobile transillumination imaging framework in which 14-bit RAW images are captured and structural texture descriptors indicative of internal tissue abnormalities are extracted. To classify three categories (Normal, Woody Breast, Spaghetti Meat), we propose a NEATBoost-Attention Ensemble model, which is a neuroevolution-optimized weighted fusion of LightGBM and attention-based MLP models. Hyperparameters are automatically discovered using NeuroEvolution of Augmenting Topologies (NEAT), eliminating manual tuning and enabling architecture diversity for small tabular datasets. On a dataset of 336 fillets collected from a commercial processing facility, our method achieves 82.4% test accuracy (F1 = 0.83), outperforming conventional machine learning and deep learning baselines and matching performance reported by hyperspectral imaging systems costing orders of magnitude more. Beyond classification performance, MyoVision establishes a reproducible mobile RGB-D acquisition pipeline for multimodal meat quality research, demonstrating that consumer-grade imaging can support scalable internal tissue assessment.
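The abstract's claim that NEAT replaces manual tuning comes down to an evolutionary loop: sample candidate configurations, score them, keep the fittest, and mutate. The toy sketch below shows only that loop shape; it is a mutation-only search over a hypothetical two-parameter space (fusion weight and MLP hidden size), with a made-up fitness surface in place of validation accuracy. Real NEAT additionally evolves network topology, which this sketch omits.

```python
import random

def evolve(fitness, init, mutate, generations=30, pop_size=8, seed=0):
    """Minimal truncation-selection evolutionary search (toy stand-in
    for NEAT, which also evolves network topology)."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half
        pop = parents + [mutate(rng.choice(parents), rng)
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

# Hypothetical search space: fusion weight w and MLP hidden width.
def init(rng):
    return {"w": rng.uniform(0.0, 1.0), "hidden": rng.choice([16, 32, 64, 128])}

def mutate(parent, rng):
    child = dict(parent)
    child["w"] = min(1.0, max(0.0, child["w"] + rng.gauss(0, 0.1)))
    if rng.random() < 0.3:
        child["hidden"] = rng.choice([16, 32, 64, 128])
    return child

# Made-up fitness peaking at w = 0.6, hidden = 64; in the paper this
# role is played by held-out classification performance.
def fitness(genome):
    return -((genome["w"] - 0.6) ** 2) - 0.001 * abs(genome["hidden"] - 64)

best = evolve(fitness, init, mutate)
```

In practice `fitness` would train and validate the LightGBM/MLP ensemble for each candidate, which is why the paper pairs this search with a small tabular feature set rather than raw images.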