FEEL (Force-Enhanced Egocentric Learning): A Dataset for Physical Action Understanding
arXiv cs.CV / 3/18/2026
📰 News · Tools & Practical Usage · Models & Research
Key Points
- FEEL (Force-Enhanced Egocentric Learning) is the first large-scale dataset pairing force measurements from custom piezoresistive gloves with egocentric video to enable force-informed physical action understanding.
- It contains approximately 3 million force-synchronized frames of natural, unscripted kitchen manipulation, of which roughly 45% involve hand-object contact.
- FEEL supports two task families: (1) contact understanding, via temporal contact segmentation and pixel-level segmentation of contacted objects, and (2) action representation learning, with force prediction as a self-supervised pretraining objective for video backbones (see the sketch after this list).
- The work reports state-of-the-art results on temporal contact segmentation, competitive pixel-level segmentation, and transfer gains on action understanding tasks across EPIC-Kitchens, Something-Something V2, Ego-Exo4D, and MECCANO, without manual labels.
- By treating force as a primitive for physical interaction, FEEL enables scalable data collection and improved generalization for action understanding models.
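
The summary does not spell out the paper's training recipe, but the core idea of force prediction as a self-supervised pretraining objective is straightforward: regress the synchronized glove force signals from video features, so the backbone learns contact-aware representations without manual annotation. Below is a minimal PyTorch sketch of that setup. Everything here is an illustrative assumption, not the authors' implementation: `TinyFrameEncoder` stands in for a real video backbone, and the feature dimension and 16-channel force vector are arbitrary.

```python
import torch
import torch.nn as nn

class TinyFrameEncoder(nn.Module):
    """Stand-in per-frame encoder. A real setup would use a video
    backbone (e.g. a ViT or SlowFast network); this toy CNN only
    exists to make the sketch self-contained and runnable."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(16, feat_dim)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (B, T, 3, H, W) -> per-frame features (B, T, feat_dim)
        b, t, c, h, w = clips.shape
        x = self.conv(clips.reshape(b * t, c, h, w)).flatten(1)
        return self.fc(x).reshape(b, t, -1)

class ForceHead(nn.Module):
    """Linear head regressing per-frame force readings from features."""
    def __init__(self, feat_dim: int, num_sensors: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, num_sensors)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)  # (B, T, num_sensors)

def force_prediction_loss(encoder, head, clips, forces):
    """Self-supervised objective: predict the glove force signals
    synchronized to each video frame. The targets come from the
    sensors, so no manual labels are needed."""
    pred = head(encoder(clips))  # (B, T, S)
    return nn.functional.mse_loss(pred, forces)

# Toy usage: 2 clips x 8 frames, a hypothetical 16 force channels.
encoder, head = TinyFrameEncoder(), ForceHead(128, 16)
clips = torch.randn(2, 8, 3, 64, 64)
forces = torch.randn(2, 8, 16)
loss = force_prediction_loss(encoder, head, clips, forces)
loss.backward()  # gradients flow into the encoder, pretraining it
```

After pretraining with an objective like this, the head would be discarded and the encoder fine-tuned on downstream action understanding tasks, which matches how the key points describe the transfer experiments.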