[D] On-Device Real-Time Visibility Restoration: Deterministic CV vs. Quantized ML Models. Looking for insights on Edge Preservation vs. Latency.

Reddit r/MachineLearning / 4/3/2026


Key Points

  • A team building an iOS real-time camera engine describes a baseline deterministic computer-vision method that removes severe atmospheric interference (smog, heavy rain, murky water) at 1080p/30 fps on-device with negligible added latency and high edge preservation.
  • They are considering adding an optional ML-based toggle using quantized models (e.g., lightweight U-Net/MobileNet via CoreML) to improve structural integrity in highly degraded frames while minimizing battery drain and FPS impact.
  • The request seeks community guidance on the trade-off between classical CV and quantized ML for edge preservation, latency, and power consumption in real-time edge deployments.
  • They provide an App Store link to a testing build and share side-by-side technical comparison materials to evaluate whether ML accuracy gains justify computational overhead.
  • The discussion is framed as an architectural comparison for on-device video restoration pipelines, emphasizing operational constraints rather than offline benchmark accuracy.

Hey everyone,

We have been working on a real-time camera engine for iOS that uses a purely deterministic computer-vision approach to mathematically strip away extreme atmospheric interference (smog, heavy rain, murky water). It currently runs locally on the CPU at 1080p/30 fps with negligible added latency and high edge preservation.
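For context on what a deterministic pass in this family can look like, here is a minimal sketch of a simplified dark-channel-prior dehaze. This is an illustration only: the method, function name, and parameter defaults are assumptions for discussion, not our actual pipeline.

```python
import numpy as np

def dehaze_dark_channel(img, patch=15, omega=0.95, t_min=0.1):
    """Simplified dark-channel-prior dehazing sketch.

    img: float32 HxWx3 array in [0, 1]. All parameters here are
    illustrative defaults, not a production-tuned configuration.
    """
    h, w, _ = img.shape
    # Dark channel: per-pixel min over channels, then a local min filter.
    dark = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dark, pad, mode="edge")
    dark_min = np.empty_like(dark)
    for y in range(h):
        for x in range(w):
            dark_min[y, x] = padded[y:y + patch, x:x + patch].min()
    # Atmospheric light: mean color of the brightest ~0.1% dark-channel pixels.
    n = max(1, (h * w) // 1000)
    idx = np.argsort(dark_min.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate, then scene-radiance recovery.
    t = 1.0 - omega * (img / A).min(axis=2)
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

The nested min-filter loop is written for clarity, not speed; a real-time implementation would use a separable min filter (or vImage/Metal on iOS).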

We are now looking to implement an optional ML-based engine toggle. The goal is to see if a quantized model (e.g., a lightweight U-Net or MobileNet via CoreML) can improve the structural integrity of objects in heavily degraded frames without the massive battery drain and FPS drop usually associated with on-device inference.
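The hard constraint for the toggle is the frame budget: at 30 fps the whole pipeline gets roughly 33 ms per frame. A tiny harness along these lines (names and defaults are illustrative) is how we'd sanity-check whether any candidate engine, classical or quantized ML, fits:

```python
import time

FRAME_BUDGET_MS = 1000.0 / 30.0  # ~33.3 ms per frame at 30 fps

def fits_realtime(infer_fn, frame, warmup=3, runs=20, budget_ms=FRAME_BUDGET_MS):
    """Median per-frame latency of `infer_fn`, compared to the frame budget.

    `infer_fn` stands in for any per-frame engine (a classical CV pass
    or a quantized CoreML model wrapped in a Python callable).
    """
    for _ in range(warmup):          # discard cold-start runs
        infer_fn(frame)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer_fn(frame)
        times.append((time.perf_counter() - t0) * 1000.0)
    times.sort()
    median_ms = times[len(times) // 2]
    return median_ms, median_ms <= budget_ms
```

Note that fitting the median budget isn't sufficient on its own; tail latency and sustained thermal throttling matter just as much on a phone.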

For those with experience in deploying real-time video processing models on edge devices, what are your thoughts on the trade-off between classical CV and ML for this specific use case? Is the leap in accuracy worth the computational overhead?
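To keep the edge-preservation comparison concrete rather than eyeball-only, one rough proxy is the correlation of gradient magnitudes between a reference frame and a restored frame. This is a sketch of one possible metric, not our actual evaluation protocol:

```python
import numpy as np

def edge_preservation_score(ref, out):
    """Correlation of gradient magnitudes between two grayscale frames.

    Returns ~1.0 when edge structure is fully preserved, 0.0 when the
    output has no edge content. A rough proxy metric, for illustration.
    """
    def grad_mag(g):
        gy, gx = np.gradient(g.astype(np.float64))
        return np.hypot(gx, gy)

    a, b = grad_mag(ref), grad_mag(out)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```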

App Store link (Completely ad-free Lite version for testing the current baseline): https://apps.apple.com/us/app/clearview-cam-lite/id6760249427

We've linked a side-by-side technical comparison image and a baseline stress-test video below. Looking forward to any architectural feedback from the community!

submitted by /u/tknzn