MuViS: Multimodal Virtual Sensing Benchmark

arXiv cs.AI / 3/27/2026


Key Points

  • MuViS is introduced as a domain-agnostic benchmark suite for multimodal virtual sensing, unifying diverse datasets under a standardized interface for preprocessing and evaluation.
  • The paper highlights that virtual sensing research is currently fragmented across processes, modalities, and sensing configurations, with no established default approach that generalizes.
  • Using MuViS, the authors benchmark multiple established methods, including gradient-boosted decision trees and deep neural networks, finding that none consistently delivers a universal advantage.
  • The release is positioned as an open-source, extensible platform to enable reproducible comparisons and to support future dataset and model integrations.
  • Overall, the results point to the need for more generalizable virtual sensing architectures rather than relying on any single existing method class.
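To make the idea of a standardized benchmarking interface concrete, the sketch below wraps a dataset in a common (features, target) container and scores two simple model classes under a shared RMSE protocol. This is an illustrative sketch only: MuViS's actual API is not shown in the article, so `VirtualSensingTask`, `evaluate`, and the toy models here are hypothetical stand-ins for the unified interface the paper describes.

```python
# Hypothetical sketch of a unified virtual-sensing benchmark interface;
# all names are illustrative, not MuViS's real API.
from dataclasses import dataclass
import numpy as np

@dataclass
class VirtualSensingTask:
    """One dataset exposed through a shared (features, target) interface."""
    name: str
    X_train: np.ndarray
    y_train: np.ndarray
    X_test: np.ndarray
    y_test: np.ndarray

class MeanBaseline:
    """Predicts the training-set mean; a trivial reference model."""
    def fit(self, X, y):
        self.mu = float(np.mean(y))
    def predict(self, X):
        return np.full(len(X), self.mu)

class LinearModel:
    """Ordinary least squares with a bias term, via lstsq."""
    def fit(self, X, y):
        A = np.c_[X, np.ones(len(X))]  # append bias column
        self.w, *_ = np.linalg.lstsq(A, y, rcond=None)
    def predict(self, X):
        return np.c_[X, np.ones(len(X))] @ self.w

def evaluate(task, model):
    """Fit on the task's training split and return test RMSE."""
    model.fit(task.X_train, task.y_train)
    err = model.predict(task.X_test) - task.y_test
    return float(np.sqrt(np.mean(err ** 2)))

# Toy task: a noisy linear sensing relationship.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
task = VirtualSensingTask("toy-process", X[:150], y[:150], X[150:], y[150:])

scores = {m.__class__.__name__: evaluate(task, m)
          for m in (MeanBaseline(), LinearModel())}
print(scores)  # per-task RMSE; rankings can flip on other tasks
```

Because every dataset presents the same splits and metric, swapping in other model classes (e.g. gradient-boosted trees) is a one-line change, which is the kind of reproducible, apples-to-apples comparison the benchmark is positioned to enable.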

Abstract

Virtual sensing aims to infer hard-to-measure quantities from accessible measurements and is central to perception and control in physical systems. Despite rapid progress from first-principles and hybrid models to modern data-driven methods, research remains siloed, leaving no established default approach that transfers across processes, modalities, and sensing configurations. We introduce MuViS, a domain-agnostic benchmarking suite for multimodal virtual sensing that consolidates diverse datasets into a unified interface for standardized preprocessing and evaluation. Using this framework, we benchmark established approaches spanning gradient-boosted decision trees and deep neural network (NN) architectures, and show that none of these provides a universal advantage, underscoring the need for generalizable virtual sensing architectures. MuViS is released as an open-source, extensible platform for reproducible comparison and future integration of new datasets and model classes.