NBAvatar: Neural Billboards Avatars with Realistic Hand-Face Interaction

arXiv cs.CV / 3/13/2026

📰 News · Models & Research

Key Points

  • NBAvatar proposes a method for realistic rendering of head avatars that accounts for non-rigid deformations caused by hand-face interaction.
  • It combines trained oriented planar primitives with neural rendering, yielding a representation that preserves temporally consistent, pose-consistent geometry while delivering fine-grained appearance detail.
  • Experiments show that NBAvatar implicitly learns the color changes caused by face-hand interactions and achieves up to a 30% LPIPS reduction in high-resolution (megapixel) rendering, with PSNR and SSIM gains over Gaussian-based avatars and higher structural similarity than the InteractAvatar method.
  • The work suggests applications in animated avatars for AR/VR and telepresence where hand-face interactions are common.
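The paper itself does not spell out its rendering pipeline here, but the core idea of billboard-style avatars can be illustrated in miniature. The sketch below is a hedged, illustrative toy, not NBAvatar's actual method: it rasterizes a set of oriented planar primitives by intersecting camera rays with each plane and alpha-compositing the hits front-to-back. The `Billboard` class, the constant per-primitive RGBA (standing in for a learned neural texture), and the pinhole-camera parameters are all assumptions made for this example.

```python
# Toy sketch of rendering oriented planar "billboard" primitives.
# All names and parameters are illustrative assumptions, not NBAvatar's code.
from dataclasses import dataclass
import numpy as np

@dataclass
class Billboard:
    center: np.ndarray   # (3,) world-space position of the quad center
    basis: np.ndarray    # (2, 3) in-plane axes, scaled by the quad's half-extents
    rgba: np.ndarray     # (4,) constant color + opacity (stand-in for a neural texture)

def render(billboards, f=100.0, res=64):
    """Rasterize billboards with a pinhole camera at the origin looking down +z."""
    img = np.zeros((res, res, 3))
    acc = np.zeros((res, res))                     # accumulated opacity per pixel
    ys, xs = np.mgrid[0:res, 0:res]
    # one camera ray per pixel, principal point at the image center
    dirs = np.stack([(xs - res / 2) / f,
                     (ys - res / 2) / f,
                     np.ones_like(xs, float)], axis=-1)
    # composite front-to-back: sort primitives by depth
    for bb in sorted(billboards, key=lambda b: b.center[2]):
        n = np.cross(bb.basis[0], bb.basis[1])     # plane normal
        denom = dirs @ n
        t = (bb.center @ n) / np.where(np.abs(denom) < 1e-8, np.inf, denom)
        hit = dirs * t[..., None]                  # ray-plane intersection points
        rel = hit - bb.center
        # local plane coordinates; the quad covers |u| <= 1, |v| <= 1
        u = rel @ bb.basis[0] / (bb.basis[0] @ bb.basis[0])
        v = rel @ bb.basis[1] / (bb.basis[1] @ bb.basis[1])
        inside = (t > 0) & (np.abs(u) <= 1) & (np.abs(v) <= 1)
        w = np.where(inside, bb.rgba[3] * (1 - acc), 0.0)   # front-to-back alpha
        img += w[..., None] * bb.rgba[:3]
        acc += w
    return img
```

In NBAvatar-style systems the per-primitive texture would come from a neural decoder conditioned on pose (and here, hand-face contact), which is what lets the model absorb deformation and color changes the explicit geometry alone cannot express.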

Abstract

We present NBAvatar, a method for realistic rendering of head avatars that handles non-rigid deformations caused by hand-face interaction. We introduce a novel representation for animated avatars by combining the training of oriented planar primitives with neural rendering. This combination of explicit and implicit representations enables NBAvatar to maintain temporally and pose-consistent geometry, along with fine-grained appearance details provided by the neural rendering technique. In our experiments, we demonstrate that NBAvatar implicitly learns color transformations caused by face-hand interactions and surpasses existing approaches in novel-view and novel-pose rendering quality. Specifically, NBAvatar achieves up to a 30% LPIPS reduction under high-resolution megapixel rendering compared to Gaussian-based avatar methods, while also improving PSNR and SSIM, and achieves higher structural similarity than the state-of-the-art hand-face interaction method InteractAvatar.