A Multi-Modal Dataset for Ground Reaction Force Estimation Using Consumer Wearable Sensors
arXiv cs.AI / 4/1/2026
Key Points
- The paper introduces a fully open, multi-modal dataset for estimating vertical ground reaction force (vGRF) using consumer Apple Watch IMU sensors with laboratory force-plate ground truth.
- The dataset covers five activities (walking, jogging, running, heel drops, step drops) collected from 10 adults, providing 492 validated, time-aligned trials pairing IMU recordings (~100 Hz) with force-plate measurements (~1000 Hz).
- It includes both raw and processed time series, trial-level metadata, quality-control flags, and machine-readable data dictionaries, along with trial matching manifests for cross-modality alignment.
- Quality and reliability are assessed via a multi-phase cross-sensor plausibility/consistency framework, repeatability analysis of peak vGRF (ICC ~0.871–0.990), and robustness testing using Monte Carlo timing perturbations.
- A subset of 395 trials includes wrist, waist, and force-plate data (triad-complete), enabling sensor-placement studies and reproducible benchmarking for machine-learning vGRF estimation, released under CC BY 4.0 with archived analysis scripts on GitHub.
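The cross-modality alignment described above pairs a ~100 Hz watch IMU stream with a ~1000 Hz force-plate stream. A minimal sketch of how such time alignment can be done, assuming synchronized timestamps and using linear interpolation onto the force-plate clock (the function name and synthetic signals are illustrative, not from the dataset's scripts):

```python
import numpy as np

def align_imu_to_force_plate(t_imu, imu, t_fp):
    """Linearly interpolate one IMU channel onto force-plate timestamps."""
    return np.interp(t_fp, t_imu, imu)

# Synthetic 2-second trial as a stand-in for a real recording
t_imu = np.arange(0.0, 2.0, 1 / 100)       # ~100 Hz IMU timestamps
t_fp = np.arange(0.0, 2.0, 1 / 1000)       # ~1000 Hz force-plate timestamps
imu_acc = np.sin(2 * np.pi * 1.5 * t_imu)  # stand-in vertical acceleration

imu_on_fp_clock = align_imu_to_force_plate(t_imu, imu_acc, t_fp)
print(imu_on_fp_clock.shape)  # one interpolated IMU sample per force-plate tick
```

In practice the dataset's trial-matching manifests would supply which IMU and force-plate files belong to the same trial; this sketch only covers the per-trial resampling step.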
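The robustness testing via Monte Carlo timing perturbations can be illustrated with a toy check in the same spirit: jitter the assumed event time by small random offsets and see how much the extracted peak vGRF varies. Everything here (window size, jitter magnitude, the synthetic force curve) is an assumption for illustration, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(0.0, 2.0, 1 / 1000)                      # ~1000 Hz force-plate clock
vgrf = 800 + 600 * np.exp(-((t - 1.0) ** 2) / 0.002)   # synthetic vGRF impact peak (N)

def peak_in_window(signal, center_idx, half_width):
    """Max of the signal in a window around an assumed event index."""
    lo = max(center_idx - half_width, 0)
    hi = min(center_idx + half_width, len(signal))
    return signal[lo:hi].max()

nominal_idx = 1000   # nominal event index (t = 1.0 s)
half_width = 100     # +/- 100 ms search window (assumed)

peaks = []
for _ in range(500):
    jitter = int(rng.normal(0, 10))  # ~10 ms timing perturbation (assumed)
    peaks.append(peak_in_window(vgrf, nominal_idx + jitter, half_width))

print(f"peak vGRF spread under timing jitter: {np.std(peaks):.3f} N")
```

A small spread under realistic jitter suggests the peak-vGRF metric is insensitive to alignment error, which is the kind of conclusion the paper's perturbation analysis is meant to support.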