
The Temporal Markov Transition Field

arXiv cs.LG / 3/11/2026


Key Points

  • The Temporal Markov Transition Field (TMTF) is an extension of the Markov Transition Field (MTF) designed to better capture changes in time series dynamics by partitioning the series into contiguous temporal chunks.
  • TMTF estimates a separate transition matrix for each chunk rather than relying on a single global matrix, allowing the representation to reflect temporal regime changes and their timing.
  • The resulting TMTF image contains horizontal bands, each encoding distinct local transition dynamics; the representation is order-preserving and amplitude-agnostic, making it suitable as input to convolutional neural networks for time series tasks.
  • The paper presents formal definitions, establishes structural properties of the representation, works through a numerical example that makes the contrast with the global MTF concrete, and discusses geometric interpretations of the local transition matrices in terms of process characteristics such as persistence, mean reversion, and trending behaviour.
  • The paper also analyses the bias-variance trade-off introduced by temporal chunking, and argues that the representation improves the interpretability and utility of Markov transition fields for characterising time series under non-stationary conditions.
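In the notation of the abstract (with $q_t$ the quantile state of $x_t$; the symbol $c(i)$ for the chunk containing step $i$ is shorthand introduced here, not necessarily the paper's), the two constructions summarised above differ only in which transition matrix indexes each row of the image:

```latex
\[
  M_{ij} \;=\; W_{q_i,\,q_j}
  \qquad \text{(global MTF: one matrix } W \text{ for the whole series)}
\]
\[
  \widetilde{M}_{ij} \;=\; W^{(c(i))}_{q_i,\,q_j}
  \qquad \text{(TMTF: local matrix of the chunk containing row } i\text{)}
\]
```

Because $c(i)$ is constant within a chunk, all rows of a chunk share one matrix, which is what produces the $K$ horizontal bands of distinct texture.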

Computer Science > Machine Learning

arXiv:2603.08803 (cs)
[Submitted on 9 Mar 2026]

Title: The Temporal Markov Transition Field

Authors: Michael Leznik
Abstract:The Markov Transition Field (MTF), introduced by Wang and Oates (2015), encodes a time series as a two-dimensional image by mapping each pair of time steps to the transition probability between their quantile states, estimated from a single global transition matrix. This construction is efficient when the transition dynamics are stationary, but produces a misleading representation when the process changes regime over time: the global matrix averages across regimes and the resulting image loses all information about \emph{when} each dynamical regime was active. In this paper we introduce the \emph{Temporal Markov Transition Field} (TMTF), an extension that partitions the series into $K$ contiguous temporal chunks, estimates a separate local transition matrix for each chunk, and assembles the image so that each row reflects the dynamics local to its chunk rather than the global average. The resulting $T \times T$ image has $K$ horizontal bands of distinct texture, each encoding the transition dynamics of one temporal segment. We develop the formal definition, establish the key structural properties of the representation, work through a complete numerical example that makes the distinction from the global MTF concrete, analyse the bias--variance trade-off introduced by temporal chunking, and discuss the geometric interpretation of the local transition matrices in terms of process properties such as persistence, mean reversion, and trending behaviour. The TMTF is amplitude-agnostic and order-preserving, making it suitable as an input channel for convolutional neural networks applied to time series characterisation tasks.
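A minimal NumPy sketch of the construction the abstract describes. The exact conventions are assumptions where the abstract is silent: quantile bins are computed globally, chunks are equal-length and contiguous, transitions are counted within each chunk only, and the function names (`quantile_states`, `transition_matrix`, `tmtf`) are illustrative, not from the paper. Setting `n_chunks=1` recovers the ordinary global MTF.

```python
import numpy as np

def quantile_states(x, n_bins=4):
    """Map each observation to its quantile bin (state 0 .. n_bins-1)."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def transition_matrix(states, n_bins):
    """Row-normalised first-order transition matrix of a state sequence."""
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(states[:-1], states[1:]):
        W[a, b] += 1
    rows = W.sum(axis=1, keepdims=True)
    # Rows with no observed transitions stay zero instead of dividing by 0.
    return np.divide(W, rows, out=np.zeros_like(W), where=rows > 0)

def tmtf(x, n_bins=4, n_chunks=4):
    """Temporal MTF: row i of the T x T image uses the transition matrix
    of the contiguous chunk containing time step i."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    states = quantile_states(x, n_bins)          # global quantile bins
    chunk_of = (np.arange(T) * n_chunks) // T    # chunk index per time step
    local_W = [transition_matrix(states[chunk_of == k], n_bins)
               for k in range(n_chunks)]         # one matrix per chunk
    M = np.empty((T, T))
    for i in range(T):
        # Row i: probability of moving from state q_i to q_j under the
        # dynamics local to i's chunk, for every column j.
        M[i, :] = local_W[chunk_of[i]][states[i], states]
    return M
```

With a regime-switching input (e.g. a sine segment followed by a random walk), the rows belonging to different chunks draw on different local matrices, giving the banded texture the abstract describes; a global MTF of the same series would use one averaged matrix for every row.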
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:2603.08803 [cs.LG]
  (or arXiv:2603.08803v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.08803

Submission history

From: Dr. Michael Leznik
[v1] Mon, 9 Mar 2026 18:04:40 UTC (110 KB)