Fusing Memory and Attention: A study on LSTM, Transformer and Hybrid Architectures for Symbolic Music Generation

arXiv cs.LG / 2026-03-24


Key Points

  • The study systematically compares LSTM and Transformer architectures for symbolic music generation (SMG) by evaluating how each models local melodic continuity versus global structural coherence.
  • On the Deutschl dataset, LSTMs are found to capture local patterns better but struggle to preserve long-range dependencies, while Transformers maintain global structure but often generate irregular phrasing.
  • The authors propose a Hybrid model (Transformer Encoder + LSTM Decoder) designed to leverage LSTM strengths for local consistency and Transformer strengths for global coherence.
  • Across evaluations of 1,000 generated melodies per architecture using 17 quality metrics, the hybrid approach improves both local and global continuity and coherence versus the two baselines.
  • Ablation studies and human perceptual evaluations further support the quantitative findings, validating the rationale behind the architectural fusion.

Abstract

Machine learning techniques such as Transformers and Long Short-Term Memory (LSTM) networks play a crucial role in Symbolic Music Generation (SMG). Existing literature suggests that LSTMs and Transformers differ in their ability to model local melodic continuity versus global structural coherence, but these properties have not been systematically studied in the context of SMG. This paper addresses that gap with a fine-grained comparative analysis of LSTMs and Transformers for SMG, examining local and global properties using 17 musical quality metrics on the Deutschl dataset. We find that LSTM networks excel at capturing local patterns but fail to preserve long-range dependencies, while Transformers model global structure effectively but tend to produce irregular phrasing. Building on this analysis, we propose a Hybrid architecture that combines a Transformer Encoder with an LSTM Decoder to leverage the strengths of both, and we evaluate 1,000 generated melodies from each of the three architectures on the Deutschl dataset. The results show that the hybrid method achieves better local and global continuity and coherence than either baseline. Our work highlights the key characteristics of these models and demonstrates how their complementary properties can be combined to design stronger models. Ablation studies and human perceptual evaluations statistically support these findings and provide robust validation for this work.
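The paper itself does not include code, but the Encoder-Decoder fusion it describes can be sketched in a toy form: a single-head self-attention pass stands in for the Transformer Encoder, and an LSTM cell that attends over the encoder memory stands in for the LSTM Decoder. Everything below (dimensions, parameter names, greedy decoding, random untrained weights) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def self_attention(X, Wq, Wk, Wv):
    # Single-head self-attention: stand-in for the Transformer Encoder.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V              # (T, d) encoder memory

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell update: stand-in for the LSTM Decoder.
    d = h.shape[0]
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:d]), sigmoid(z[d:2*d])
    o, g = sigmoid(z[2*d:3*d]), np.tanh(z[3*d:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

d, V, T = 8, 16, 5                          # toy model/vocab/sequence sizes
emb = rng.normal(size=(V, d))               # note-token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W = rng.normal(size=(4 * d, 2 * d))         # decoder input is [token; context]
U = rng.normal(size=(4 * d, d))
b = np.zeros(4 * d)
Wout = rng.normal(size=(V, d))              # projection back to note logits

tokens = rng.integers(0, V, size=T)         # a toy "melody" of note IDs
memory = self_attention(emb[tokens], Wq, Wk, Wv)  # encoder pass (global view)

h, c = np.zeros(d), np.zeros(d)
tok, generated = int(tokens[-1]), []
for _ in range(4):                          # autoregressive decoding
    attn = softmax(memory @ h / np.sqrt(d)) # attend over encoder memory
    context = attn @ memory
    h, c = lstm_step(np.concatenate([emb[tok], context]), h, c, W, U, b)
    tok = int(np.argmax(Wout @ h))          # greedy choice of the next note
    generated.append(tok)

print(generated)
```

The design mirrors the paper's rationale: the encoder memory gives every decoding step access to the whole input (global coherence), while the recurrent state `h, c` carries step-to-step information (local continuity). A trained model would replace the random weights and greedy argmax with learned parameters and sampling.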