LongCat-Next: Lexicalizing Modalities as Discrete Tokens

Reddit r/LocalLLaMA / 3/31/2026


Key Points

  • The paper proposes DiNA (Discrete Native Autoregressive), a unified framework that represents multimodal inputs in a shared discrete token space to enable consistent autoregressive modeling across text, vision, and audio.
  • It introduces dNaViT, a “discrete native any-resolution” visual tokenizer/decoder that converts continuous images into hierarchical discrete tokens at arbitrary resolutions.
  • Based on this approach, the authors develop LongCat-Next, claiming strong “see, paint, and talk” performance by using a single autoregressive objective with minimal modality-specific engineering.
  • The work targets the known limitations of discrete vision modeling on understanding tasks and frames LongCat-Next as a way to reconcile understanding vs. generation in a unified multimodal model.
  • The authors open-source the LongCat-Next model and tokenizers, aiming to accelerate further research and development in native multimodality.
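The core idea in the key points above is a shared discrete token space: if every modality's tokens live in non-overlapping ID ranges of one vocabulary, a standard next-token-prediction LM can model interleaved multimodal sequences with plain cross-entropy. The sketch below illustrates that layout; the vocabulary sizes and the `to_unified` helper are illustrative assumptions, not LongCat-Next's actual configuration.

```python
# Hypothetical sketch of a shared discrete token space (not the paper's
# actual vocabulary layout): text, vision, and audio codebooks are offset
# into one vocabulary so a single autoregressive objective covers them all.

TEXT_VOCAB = 50_000       # assumed sizes, for illustration only
VISION_CODEBOOK = 8_192
AUDIO_CODEBOOK = 4_096

# Non-overlapping ID ranges in the unified vocabulary.
VISION_OFFSET = TEXT_VOCAB
AUDIO_OFFSET = TEXT_VOCAB + VISION_CODEBOOK
UNIFIED_VOCAB = TEXT_VOCAB + VISION_CODEBOOK + AUDIO_CODEBOOK

def to_unified(modality: str, local_id: int) -> int:
    """Map a modality-local token ID into the shared vocabulary."""
    if modality == "text":
        assert 0 <= local_id < TEXT_VOCAB
        return local_id
    if modality == "vision":
        assert 0 <= local_id < VISION_CODEBOOK
        return VISION_OFFSET + local_id
    if modality == "audio":
        assert 0 <= local_id < AUDIO_CODEBOOK
        return AUDIO_OFFSET + local_id
    raise ValueError(f"unknown modality: {modality}")

# An interleaved multimodal sequence becomes one flat ID stream that a
# standard next-token-prediction model can train on directly.
sequence = (
    [to_unified("text", t) for t in [17, 923, 4_110]]
    + [to_unified("vision", v) for v in [5, 8_191]]
    + [to_unified("audio", a) for a in [0, 77]]
)
assert all(0 <= tok < UNIFIED_VOCAB for tok in sequence)
```

The payoff the post highlights is that generation and understanding then share one objective: emitting an image is just predicting IDs in the vision range, with no diffusion head or modality-specific loss bolted on.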

Paper: https://arxiv.org/abs/2603.27538

Code: https://github.com/meituan-longcat/LongCat-Next

Blog: https://longcat.chat/longcat-next/intro

Model: https://huggingface.co/meituan-longcat/LongCat-Next

MIT License: https://huggingface.co/meituan-longcat/LongCat-Next/blob/main/LICENSE

Abstract

The prevailing Next-Token Prediction (NTP) paradigm has driven the success of large language models through discrete autoregressive modeling. However, contemporary multimodal systems remain language-centric, often treating non-linguistic modalities as external attachments, leading to fragmented architectures and suboptimal integration. To transcend this limitation, we introduce Discrete Native Autoregressive (DiNA), a unified framework that represents multimodal information within a shared discrete space, enabling consistent and principled autoregressive modeling across modalities. A key innovation is the Discrete Native Any-resolution Visual Transformer (dNaViT), which performs tokenization and de-tokenization at arbitrary resolutions, transforming continuous visual signals into hierarchical discrete tokens. Building on this foundation, we develop LongCat-Next, a native multimodal model that processes text, vision, and audio under a single autoregressive objective with minimal modality-specific design. As an industrial-strength foundation model, it excels at seeing, painting, and talking within a single framework, achieving strong performance across a wide range of multimodal benchmarks. In particular, LongCat-Next addresses the long-standing performance ceiling of discrete vision modeling on understanding tasks and provides a unified approach that effectively reconciles the conflict between understanding and generation. As a step toward native multimodality, we open-source LongCat-Next and its tokenizers, hoping to foster further research and development in the community. GitHub: https://github.com/meituan-longcat/LongCat-Next
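The abstract's "any-resolution" tokenization can be pictured with a minimal vector-quantization sketch: patchify an image of arbitrary (patch-aligned) size, then snap each patch vector to its nearest codebook entry, yielding a variable-sized grid of discrete IDs. The codebook size, patch size, and raw-pixel "embedding" below are illustrative stand-ins, not dNaViT's actual hierarchical design.

```python
# Minimal VQ-style sketch of any-resolution discrete tokenization
# (illustrative only; dNaViT's real tokenizer is hierarchical and learned).

import numpy as np

rng = np.random.default_rng(0)
PATCH = 16
CODEBOOK = rng.normal(size=(512, PATCH * PATCH * 3))  # 512 discrete codes

def tokenize(image: np.ndarray) -> np.ndarray:
    """Map an (H, W, 3) image, H and W divisible by PATCH, to a grid of code IDs."""
    h, w, _ = image.shape
    assert h % PATCH == 0 and w % PATCH == 0, "pad to a multiple of PATCH first"
    gh, gw = h // PATCH, w // PATCH
    # Flatten each PATCH x PATCH x 3 patch into one vector.
    patches = (
        image.reshape(gh, PATCH, gw, PATCH, 3)
        .transpose(0, 2, 1, 3, 4)
        .reshape(gh * gw, -1)
    )
    # Nearest codebook entry per patch (squared Euclidean distance).
    d = ((patches[:, None, :] - CODEBOOK[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1).reshape(gh, gw)

# Any resolution that is a multiple of the patch size yields a token grid:
# a 48 x 80 image becomes a 3 x 5 grid of discrete tokens.
tokens = tokenize(rng.normal(size=(48, 80, 3)))
assert tokens.shape == (3, 5)
```

De-tokenization would run the mapping in reverse, looking up each ID's codebook vector and decoding it back to pixels, which is what lets one sequence model both read and paint.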

submitted by /u/ninjasaid13