BiScale-GTR: Fragment-Aware Graph Transformers for Multi-Scale Molecular Representation Learning

arXiv cs.LG / 4/9/2026


Key Points

  • BiScale-GTR is proposed as a unified self-supervised molecular representation learning framework that addresses the limitations of GNN-dominated hybrid graph-transformer architectures.
  • The approach improves graph BPE tokenization to yield consistent, chemically valid, and high-coverage fragment tokens.
  • It uses a parallel GNN–Transformer architecture where atom-level GNN representations are pooled into fragment embeddings, fused with fragment token embeddings, and then processed for multi-scale reasoning.
  • The method targets multi-granularity molecular patterns by jointly capturing local chemical environments, substructure motifs, and long-range dependencies.
  • Experiments on MoleculeNet, PharmaBench, and LRGB report state-of-the-art results for both classification and regression, with attribution analysis indicating the model learns chemically meaningful functional motifs; code is planned for release after acceptance.
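
The fragment tokenizer builds on graph Byte Pair Encoding, and the code has not yet been released, so the details are not public. As a rough intuition pump only, here is a toy sketch of BPE-style merging on labeled molecular graphs; the `graph_bpe` function, the `(labels, edges)` representation, and the greedy one-pass contraction are all illustrative assumptions, and a real graph BPE tokenizer would additionally enforce the chemical validity of fragments, which this sketch omits.

```python
from collections import Counter

def graph_bpe(molecules, num_merges):
    """Toy graph-BPE sketch (illustrative; NOT the paper's exact algorithm).

    Each molecule is a (labels, edges) pair: `labels` maps node id -> label
    string, `edges` is a list of (u, v) node-id tuples. As in textual BPE,
    we repeatedly find the most frequent adjacent label pair across the
    corpus and contract matching edges into a single fragment node.
    """
    merge_rules = []
    for _ in range(num_merges):
        counts = Counter()
        for labels, edges in molecules:
            for u, v in edges:
                counts[tuple(sorted((labels[u], labels[v])))] += 1
        if not counts:
            break  # no edges left to merge
        pair = counts.most_common(1)[0][0]
        merge_rules.append(pair)
        new_label = "(" + "".join(pair) + ")"
        for labels, edges in molecules:
            used = set()  # nodes already absorbed in this merge round
            for u, v in list(edges):
                if u in used or v in used:
                    continue
                if tuple(sorted((labels[u], labels[v]))) == pair:
                    used.update((u, v))
                    labels[u] = new_label       # u becomes the fragment node
                    edges.remove((u, v))
                    # redirect v's remaining bonds to the merged node u
                    for i, (x, y) in enumerate(edges):
                        if x == v:
                            edges[i] = (u, y)
                        elif y == v:
                            edges[i] = (x, u)
                    del labels[v]
    return merge_rules
```

Each returned merge rule plays the role of a fragment token; repeated application contracts frequently co-occurring substructures into single vocabulary entries, which is the mechanism the "high-coverage fragment tokens" claim refers to.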

Abstract

Graph Transformers have recently attracted attention for molecular property prediction by combining the inductive biases of graph neural networks (GNNs) with the global receptive field of Transformers. However, many existing hybrid architectures remain GNN-dominated, leaving the resulting representations heavily shaped by local message passing. Moreover, most existing methods operate at only a single structural granularity, limiting their ability to capture patterns that span multiple molecular scales. We introduce BiScale-GTR, a unified framework for self-supervised molecular representation learning that combines chemically grounded fragment tokenization with adaptive multi-scale reasoning. Our method improves graph Byte Pair Encoding (BPE) tokenization to produce consistent, chemically valid, and high-coverage fragment tokens, which are used as fragment-level inputs to a parallel GNN-Transformer architecture. Architecturally, atom-level representations learned by a GNN are pooled into fragment-level embeddings and fused with fragment token embeddings before Transformer reasoning, enabling the model to jointly capture local chemical environments, substructure-level motifs, and long-range molecular dependencies. Experiments on MoleculeNet, PharmaBench, and the Long Range Graph Benchmark (LRGB) demonstrate state-of-the-art performance across both classification and regression tasks. Attribution analysis further shows that BiScale-GTR highlights chemically meaningful functional motifs, providing interpretable links between molecular structure and predicted properties. Code will be released upon acceptance.
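
The abstract describes pooling atom-level GNN states into fragment embeddings and fusing them with fragment token embeddings before Transformer reasoning. A minimal NumPy sketch of that fusion step, assuming mean pooling and concatenation-plus-projection fusion (the paper does not specify either choice; all names and shapes below are illustrative stand-ins, with random arrays in place of learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

num_atoms, d = 6, 8
atom_h = rng.normal(size=(num_atoms, d))  # stand-in for GNN atom-level outputs

# fragment assignment from the tokenizer: atoms 0-2 -> frag 0, atoms 3-5 -> frag 1
frag_of_atom = np.array([0, 0, 0, 1, 1, 1])
num_frags = 2

# mean-pool atom states into fragment-level embeddings (one row per fragment)
pooled = np.stack([atom_h[frag_of_atom == f].mean(axis=0)
                   for f in range(num_frags)])

# fragment token embeddings from a learned lookup table (random stand-in)
token_emb = rng.normal(size=(num_frags, d))

# fuse by concatenation, then project back to d dims -> Transformer input sequence
W = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)  # stand-in for a learned projection
fused = np.concatenate([pooled, token_emb], axis=1) @ W

assert fused.shape == (num_frags, d)  # one fused embedding per fragment
```

The point of the fusion is that each Transformer input position carries both a structure-derived view of the fragment (from message passing) and a vocabulary-derived view (from the tokenizer), so attention over fragments can reason at the substructure scale while long-range dependencies are handled globally.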