
NymeriaPlus: Enriching Nymeria Dataset with Additional Annotations and Data

arXiv cs.CV / March 20, 2026


Key Points

  • NymeriaPlus upgrades Nymeria by adding improved human motion representations in Momentum Human Rig and SMPL formats.
  • It provides dense 3D and 2D bounding box annotations for indoor objects and structural elements, along with instance-level 3D object reconstructions.
  • It introduces additional modalities, such as basemap recordings, audio, and wristband videos, to create a more multimodal egocentric benchmark.
  • The dataset consolidation is expected to bridge gaps in existing egocentric resources and enable broader research in multimodal learning for embodied AI.
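To make the range of annotations concrete, the sketch below models one hypothetical NymeriaPlus-style sequence record as a plain data structure. All names here (`NymeriaPlusRecord`, `BoundingBox3D`, the field names) are illustrative assumptions, not the dataset's actual schema or API:

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox3D:
    """Illustrative axis-aligned 3D box for an indoor object or structural element."""
    label: str
    center: tuple  # (x, y, z) in meters, assumed world frame
    size: tuple    # (width, height, depth) in meters

@dataclass
class NymeriaPlusRecord:
    """Hypothetical per-sequence record grouping the annotations and modalities
    the article lists; real file layouts and formats may differ."""
    # (1) Human motion ground truth in two parametric-body formats
    mhr_motion_path: str                 # Momentum Human Rig (MHR) sequence
    smpl_motion_path: str                # SMPL parameter sequence
    # (2) Dense scene annotations
    boxes_3d: list = field(default_factory=list)       # BoundingBox3D per object
    boxes_2d: list = field(default_factory=list)       # per-frame 2D boxes
    # (3) Instance-level 3D object reconstructions
    object_mesh_paths: list = field(default_factory=list)
    # (4) Additional modalities
    audio_path: str = ""
    wristband_video_path: str = ""
    basemap_path: str = ""

# Example: assembling a record for one recording session
record = NymeriaPlusRecord(
    mhr_motion_path="seq001/motion.mhr",
    smpl_motion_path="seq001/motion.smpl",
    boxes_3d=[BoundingBox3D("chair", (1.2, 0.0, 0.4), (0.5, 0.9, 0.5))],
    audio_path="seq001/audio.wav",
)
```

Grouping every modality under one record mirrors the article's point that the value of NymeriaPlus lies in consolidating complementary annotations into a single coherent benchmark rather than in any one modality alone.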

Abstract

The Nymeria Dataset, released in 2024, is a large-scale collection of in-the-wild human activities captured with multiple egocentric wearable devices that are spatially localized and temporally synchronized. It provides body-motion ground truth recorded with a motion-capture suit, device trajectories, semi-dense 3D point clouds, and in-context narrations. In this paper, we upgrade Nymeria and introduce NymeriaPlus. NymeriaPlus features: (1) improved human motion in Momentum Human Rig (MHR) and SMPL formats; (2) dense 3D and 2D bounding box annotations for indoor objects and structural elements; (3) instance-level 3D object reconstructions; and (4) additional modalities, e.g., basemap recordings, audio, and wristband videos. By consolidating these complementary modalities and annotations into a single, coherent benchmark, NymeriaPlus strengthens Nymeria into a more powerful in-the-wild egocentric dataset. We expect NymeriaPlus to bridge a key gap in existing egocentric resources and to support a broader range of research, including unique explorations of multimodal learning for embodied AI.