Hoi! -- A Multimodal Dataset for Force-Grounded, Cross-View Articulated Manipulation
arXiv cs.RO / 4/17/2026
Key Points
- The paper introduces Hoi!, a dataset for force-grounded, cross-view articulated manipulation that links visual inputs, performed actions, and measured interaction forces.
- The dataset includes 3,048 sequences involving 381 articulated objects across 38 environments, providing broad coverage for interaction research.
- Each object is manipulated with four embodiments (a human hand, a hand with a wrist-mounted camera, a handheld UMI gripper, and a custom Hoi! gripper), so robot and human perspectives can be compared directly.
- Equipping the tool embodiment with end-effector force and tactile sensing lets the dataset support evaluation of transfer between human and robot viewpoints and the study of underused modalities such as interaction forces (see the sketch after this list).
- The dataset and further details are available on the project website: https://timengelbracht.github.io/Hoi-Dataset-Website/.
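
To make the dataset structure described above concrete, the following is a minimal Python sketch of how Hoi!-style sequences might be organized for cross-embodiment comparison. Every class, field, and embodiment identifier here is an illustrative assumption rather than the dataset's published format; consult the project website for the actual layout.

```python
"""Minimal sketch of organizing Hoi!-style sequences for cross-embodiment
comparison. All names below are assumptions for illustration, not the
dataset's actual schema."""
from dataclasses import dataclass, field
from typing import Dict, List, Optional

import numpy as np

# The four embodiments described in the paper (string identifiers are assumed).
EMBODIMENTS = ("human_hand", "hand_wrist_cam", "umi_gripper", "hoi_gripper")

@dataclass
class Frame:
    rgb: np.ndarray                       # camera image for this timestep
    action: np.ndarray                    # performed action, e.g. end-effector pose
    force: Optional[np.ndarray] = None    # end-effector force (tool embodiment only)
    tactile: Optional[np.ndarray] = None  # tactile reading (tool embodiment only)

@dataclass
class Sequence:
    object_id: str                        # one of the 381 articulated objects
    environment: str                      # one of the 38 environments
    embodiment: str                       # one of EMBODIMENTS
    frames: List[Frame] = field(default_factory=list)

def group_by_object(sequences: List[Sequence]) -> Dict[str, Dict[str, List[Sequence]]]:
    """Index sequences by object, then by embodiment, so a human-hand
    demonstration can be paired with gripper recordings of the same object."""
    grouped: Dict[str, Dict[str, List[Sequence]]] = {}
    for seq in sequences:
        grouped.setdefault(seq.object_id, {}).setdefault(seq.embodiment, []).append(seq)
    return grouped
```

Grouping by object first makes cross-view transfer experiments straightforward: for each object, a human-hand recording can be compared against the corresponding gripper recordings, with force and tactile signals available on the tool side.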