FaceLiVTv2: An Improved Hybrid Architecture for Efficient Mobile Face Recognition
arXiv cs.CV / 4/13/2026
📰 News · Signals & Early Trends · Models & Research
Key Points
- The paper introduces FaceLiVTv2, a lightweight hybrid CNN–Transformer architecture aimed at improving the accuracy–efficiency trade-off for mobile and edge face recognition under tight latency, memory, and energy constraints.
- FaceLiVTv2's key innovation is Lite MHLA, which replaces the heavier multi-head linear attention design of its predecessor with lightweight multi-head linear token projections and affine rescale transformations, reducing redundancy while preserving diversity across attention heads.
- The model integrates Lite MHLA into a unified RepMix block to coordinate global–local feature interactions and uses global depthwise convolution for adaptive spatial aggregation during embedding generation.
- Experiments on benchmarks including LFW, CA-LFW, CP-LFW, CFP-FP, AgeDB-30, and IJB show consistent accuracy improvements over existing lightweight methods while boosting runtime efficiency.
- Reported performance gains include a 22% reduction in mobile inference latency vs. FaceLiVTv1 and up to 30.8% speedups over GhostFaceNets, with additional 20–41% latency improvements over EdgeFace and KANFace while retaining higher recognition accuracy.
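The Lite MHLA mechanism described above can be sketched in NumPy. This is an illustrative reconstruction under stated assumptions, not the paper's actual implementation: the positive feature map, parameter names (`Wq`, `Wk`, `Wv`, `gamma`, `beta`), and head dimensions are all hypothetical. The core idea it demonstrates is real linear attention, where the key–value product is computed once per head so cost grows linearly in token count, with a per-head affine rescale (`gamma * out + beta`) standing in for the paper's affine rescale transformations.

```python
import numpy as np

def lite_mhla(x, Wq, Wk, Wv, gamma, beta):
    """Sketch of multi-head linear attention with per-head affine rescale.

    Assumed shapes (illustrative, not from the paper):
      x:  (N, d) token features
      Wq, Wk, Wv: (H, d, dh) per-head linear token projections
      gamma, beta: (H, dh) affine rescale parameters per head
    Returns (N, H * dh) concatenated head outputs.
    """
    heads = []
    for h in range(Wq.shape[0]):
        # Positive feature map (assumed ReLU-like) keeps the normalizer valid.
        q = np.maximum(x @ Wq[h], 0.0) + 1e-6
        k = np.maximum(x @ Wk[h], 0.0) + 1e-6
        v = x @ Wv[h]
        # (dh, dh) key-value summary: linear in N, the efficiency win
        # of linear attention over quadratic softmax attention.
        kv = k.T @ v
        z = q @ k.sum(axis=0)          # per-token normalizer, shape (N,)
        out = (q @ kv) / z[:, None]
        # Affine rescale: cheap per-head modulation to keep heads diverse.
        heads.append(gamma[h] * out + beta[h])
    return np.concatenate(heads, axis=-1)
```

A quick shape check: with N=5 tokens, d=8 channels, H=2 heads of dh=4, the output is (5, 8), and each head costs O(N·d·dh) rather than the O(N²) of softmax attention.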