EfficientMonoHair: Fast Strand-Level Reconstruction from Monocular Video via Multi-View Direction Fusion

arXiv cs.CV / 4/8/2026


Key Points

  • EfficientMonoHair is presented as a fast, accurate framework for reconstructing strand-level hair geometry from monocular video, aiming to reduce the accuracy–efficiency trade-off in existing approaches.
  • The method combines an implicit neural representation with multi-view geometric fusion, using a fusion-patch-based multi-view optimization to cut down the number of optimization iterations needed for point cloud direction estimation.
  • It introduces a parallel “hair-growing” strategy that relaxes voxel occupancy constraints, improving stability and robustness for large-scale strand tracing even when orientation fields are noisy or inaccurate.
  • Experiments on real-world hairstyles reportedly produce high-fidelity strand reconstructions, while synthetic benchmarks show quality comparable to state-of-the-art methods with nearly an order-of-magnitude runtime improvement.
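The fusion-patch-based optimization itself is not detailed in this summary, but the core idea of fusing per-view orientation cues into a 3D strand direction can be illustrated as a small least-squares problem: each view's 2D orientation at a point, back-projected through the camera center, defines a plane that must contain the 3D direction, and the fused direction is the unit vector most nearly orthogonal to all plane normals. This is a generic multi-view direction-triangulation sketch, not the paper's actual formulation; the function name and input layout are assumptions.

```python
import numpy as np

def fuse_direction(plane_normals):
    """Fuse per-view 2D orientation cues into one 3D direction (illustrative).

    plane_normals: (V, 3) array; row i is the normal of the plane spanned by
    view i's camera ray and its observed 2D strand orientation. The fused
    direction d minimizes sum_i (n_i . d)^2 subject to |d| = 1, i.e. it is the
    eigenvector of N^T N with the smallest eigenvalue.
    """
    N = np.asarray(plane_normals, dtype=float)
    _, vecs = np.linalg.eigh(N.T @ N)  # eigenvalues in ascending order
    return vecs[:, 0]                  # smallest-eigenvalue eigenvector
```

With two or more views whose planes intersect in a line, this recovers the strand direction up to sign; in practice a closed-form solve like this replaces many per-point iterative refinements.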

Abstract

Strand-level hair geometry reconstruction is a fundamental problem in virtual human modeling and the digitization of hairstyles. However, existing methods still suffer from a significant trade-off between accuracy and efficiency: implicit neural representations can capture the global hair shape but often fail to preserve fine-grained strand details, while explicit optimization-based approaches achieve high-fidelity reconstructions at the cost of heavy computation and poor scalability. To address this, we propose EfficientMonoHair, a fast and accurate framework that combines an implicit neural representation with multi-view geometric fusion for strand-level reconstruction from monocular video. Our method introduces a fusion-patch-based multi-view optimization that reduces the number of optimization iterations needed for point-cloud direction estimation, as well as a novel parallel hair-growing strategy that relaxes voxel occupancy constraints, allowing large-scale strand tracing to remain stable and robust even under inaccurate or noisy orientation fields. Extensive experiments on representative real-world hairstyles demonstrate that our method robustly reconstructs high-fidelity strand geometries. On synthetic benchmarks, it achieves reconstruction quality comparable to state-of-the-art methods while improving runtime efficiency by nearly an order of magnitude.
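The parallel hair-growing strategy described above can be sketched as tracing all strands in lockstep through a voxelized orientation field, with a soft occupancy threshold in place of a hard one-strand-per-voxel rule. The sketch below is an assumption-laden toy, not the paper's implementation: the array layouts, the Euler integration step, and the scalar `occ_thresh` relaxation are all illustrative choices.

```python
import numpy as np

def grow_strands_parallel(seeds, orient, occ, step=1.0, max_steps=50,
                          occ_thresh=0.2):
    """Grow all strands simultaneously through a voxel grid (toy sketch).

    seeds:  (N, 3) starting points on the scalp, in voxel coordinates.
    orient: (X, Y, Z, 3) unit growth direction per voxel (hypothetical input).
    occ:    (X, Y, Z) hair occupancy in [0, 1]; a strand keeps growing while
            the occupancy at its tip stays above occ_thresh -- a relaxed
            constraint that tolerates noisy orientation/occupancy estimates.
    Returns (N, max_steps + 1, 3) strand polylines.
    """
    pts = np.asarray(seeds, dtype=float).copy()
    alive = np.ones(len(pts), dtype=bool)
    strands = [pts.copy()]
    shape = np.array(occ.shape)
    for _ in range(max_steps):
        # Nearest voxel for every strand tip, clamped to the grid.
        idx = np.clip(pts.round().astype(int), 0, shape - 1)
        vox_occ = occ[idx[:, 0], idx[:, 1], idx[:, 2]]
        alive &= vox_occ > occ_thresh          # relaxed occupancy test
        dirs = orient[idx[:, 0], idx[:, 1], idx[:, 2]]
        # One Euler step along the local orientation; dead strands freeze.
        pts = np.where(alive[:, None], pts + step * dirs, pts)
        strands.append(pts.copy())
    return np.stack(strands, axis=1)
```

Because every step is a batched array operation over all strands at once, tracing tens of thousands of strands costs roughly the same number of iterations as tracing one, which is the kind of parallelism the runtime claims rely on.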