FEEL (Force-Enhanced Egocentric Learning): A Dataset for Physical Action Understanding
arXiv cs.CV / 3/18/2026
Key Points
- FEEL (Force-Enhanced Egocentric Learning) is the first large-scale dataset pairing force measurements from custom piezoresistive gloves with egocentric video to enable force-informed physical action understanding.
- It contains approximately 3 million force-synchronized frames of natural, unscripted kitchen manipulation, with 45% of frames involving hand-object contact.
- FEEL supports two task families: (1) contact understanding via temporal contact segmentation and pixel-level segmentation of contacted objects, and (2) action representation learning that uses force prediction as a self-supervised pretraining objective for video backbones (see the sketch after this list).
- The work reports state-of-the-art results on temporal contact segmentation, competitive pixel-level segmentation, and transfer gains on action understanding tasks across EPIC-Kitchens, Something-Something V2, Ego-Exo4D, and MECCANO, all without manual labels.
- By treating force as a primitive for physical interaction, FEEL enables scalable data collection and improved generalization for action understanding models.
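
The self-supervised objective in the second task family can be pictured as a per-frame regression problem: a video backbone encodes an egocentric clip, and a small head predicts the synchronized glove readings. The sketch below illustrates that idea in PyTorch; `TinyVideoBackbone`, `ForcePretrainer`, the 16-sensor channel count, and all tensor shapes are illustrative assumptions, not the paper's actual architecture or data format.

```python
# Minimal sketch of force prediction as a self-supervised pretraining
# objective, in the spirit of FEEL's second task family. All names,
# shapes, and the tiny backbone are illustrative assumptions.
import torch
import torch.nn as nn


class TinyVideoBackbone(nn.Module):
    """Stand-in for a real video backbone (e.g. a 3D CNN or video transformer)."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(),
            nn.Conv3d(32, feat_dim, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep the time axis, pool space
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 3, T, H, W) -> per-frame features: (batch, T, feat_dim)
        return self.conv(clip).flatten(3).squeeze(-1).transpose(1, 2)


class ForcePretrainer(nn.Module):
    """Regress per-frame glove force readings from video; no manual labels needed."""

    def __init__(self, num_sensors: int = 16, feat_dim: int = 256):
        super().__init__()
        self.backbone = TinyVideoBackbone(feat_dim)
        self.force_head = nn.Linear(feat_dim, num_sensors)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return self.force_head(self.backbone(clip))  # (batch, T, num_sensors)


model = ForcePretrainer()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Dummy batch: 2 clips of 8 RGB frames, each frame paired with a
# 16-channel glove reading (the synchronization FEEL provides).
clip = torch.randn(2, 3, 8, 64, 64)
force = torch.randn(2, 8, 16)

pred = model(clip)
loss = nn.functional.mse_loss(pred, force)  # self-supervised regression target
loss.backward()
optimizer.step()
```

After pretraining along these lines, the force head would typically be discarded and the backbone fine-tuned on a downstream benchmark, which is presumably how the transfer gains on the action understanding datasets above are measured.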