
Image Captioning via Compact Bidirectional Architecture

arXiv cs.CL / 3/11/2026

Ideas & Deep Analysis | Models & Research

Key Points

  • The paper introduces a Compact Bidirectional Transformer model for image captioning that leverages both past and future context simultaneously, unlike traditional unidirectional models.
  • This model tightly couples left-to-right (L2R) and right-to-left (R2L) flows into a compact architecture, enabling parallel execution and regularization for improved context usage.
  • Extensive ablation studies on the MSCOCO benchmark show that the compact bidirectional design and the sentence-level ensemble strategy (sketched just after this list) contribute most to the performance gain, outperforming refinement-based models.
  • The approach extends conventional self-critical training to a two-flow setup and achieves state-of-the-art results among non-vision-language-pretraining methods.
  • The generality of the architecture is validated by its successful adaptation to an LSTM backbone, and the source code is openly available for further use and research.
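
The sentence-level ensemble can be pictured with a short sketch. The snippet below is a hypothetical Python illustration (the function and variable names are ours, not the paper's code): each flow decodes its own caption together with a sentence score, the R2L caption is flipped back into natural word order, and the higher-scoring sentence is kept as the final output.

def sentence_level_ensemble(l2r_caption, l2r_score, r2l_caption, r2l_score):
    """Pick the final caption from either flow by comparing sentence scores
    (e.g. length-normalized log-probabilities)."""
    # The R2L flow generates words back-to-front, so reverse it before output.
    r2l_in_order = list(reversed(r2l_caption))
    if l2r_score >= r2l_score:
        return l2r_caption, "L2R"
    return r2l_in_order, "R2L"

# Toy example with made-up scores:
caption, flow = sentence_level_ensemble(
    ["a", "dog", "runs", "on", "grass"], -1.2,
    ["grass", "on", "runs", "dog", "a"], -0.9,
)
print(flow, " ".join(caption))  # -> R2L a dog runs on grass

A word-level ensemble (averaging the two flows' token distributions at each step) can be layered on top of this; per the abstract, combining the two further enlarges the effect of the sentence-level ensemble.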


arXiv:2201.01984 (cs)
[Submitted on 6 Jan 2022 (v1), last revised 10 Mar 2026 (this version, v3)]

Title: Image Captioning via Compact Bidirectional Architecture

Authors: Zijie Song and 6 other authors
Abstract: Most current image captioning models generate captions from left to right. This unidirectional property means they can leverage only past context, not future context. Although refinement-based models can exploit both past and future context by generating a new caption in a second stage from pre-retrieved or pre-generated captions produced in a first stage, their decoders generally consist of two networks (i.e., a retriever or captioner in the first stage and a captioner in the second stage), which can only be executed sequentially. In this paper, we introduce a Compact Bidirectional Transformer model for image captioning that can leverage bidirectional context both implicitly and explicitly while its decoder can be executed in parallel. Specifically, it tightly couples the left-to-right (L2R) and right-to-left (R2L) flows into a single compact model, which serves as a regularization that implicitly exploits bidirectional context and optionally allows explicit interaction between the two flows, while the final caption is chosen from either the L2R or the R2L flow in a sentence-level ensemble manner. We conduct extensive ablation studies on the MSCOCO benchmark and find that the compact bidirectional architecture and the sentence-level ensemble play more important roles than the explicit interaction mechanism. By combining seamlessly with a word-level ensemble, the effect of the sentence-level ensemble is further enlarged. We further extend conventional one-flow self-critical training to a two-flow version under this architecture and achieve new state-of-the-art results in comparison with non-vision-language-pretraining models. Finally, we verify the generality of this compact bidirectional architecture by extending it to an LSTM backbone. Source code is available at this https URL.
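
To make the coupling concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' released code; class names, dimensions, and the image-feature interface are assumptions). One shared Transformer decoder consumes the L2R caption and its reversed R2L counterpart as two halves of a single batch, so both flows execute in parallel and the shared parameters provide the implicit bidirectional regularization; positional encodings and the optional explicit interaction between flows are omitted for brevity.

import torch
import torch.nn as nn

class CompactBidirectionalDecoder(nn.Module):
    def __init__(self, vocab_size, d_model=512, n_layers=3, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=n_heads, batch_first=True)
        # A single decoder stack shared by both the L2R and R2L flows.
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, l2r_tokens, r2l_tokens, image_feats):
        # Stack the two flows along the batch dimension: the first half is the
        # caption in natural order, the second half is the same caption reversed.
        tokens = torch.cat([l2r_tokens, r2l_tokens], dim=0)      # (2B, T)
        memory = torch.cat([image_feats, image_feats], dim=0)    # (2B, N, d_model)
        tgt = self.embed(tokens)
        T = tokens.size(1)
        causal = torch.triu(                                      # standard causal mask
            torch.ones(T, T, dtype=torch.bool, device=tokens.device), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        logits = self.out(hidden)                                 # (2B, T, vocab)
        B = l2r_tokens.size(0)
        return logits[:B], logits[B:]                             # per-flow predictions

On this reading, the two-flow self-critical training mentioned in the abstract would compute sequence-level rewards separately for the two halves of the batch, and the sentence-level ensemble at inference simply keeps whichever flow's finished caption scores higher.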
Subjects: Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL)
Cite as: arXiv:2201.01984 [cs.CV]
  (or arXiv:2201.01984v3 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2201.01984

Submission history

From: Yuanen Zhou
[v1] Thu, 6 Jan 2022 09:23:18 UTC (4,110 KB)
[v2] Tue, 29 Jul 2025 10:32:39 UTC (838 KB)
[v3] Tue, 10 Mar 2026 04:54:20 UTC (3,653 KB)