RandMark: On Random Watermarking of Visual Foundation Models
arXiv cs.CV / 3/12/2026
Key Points
- RandMark proposes an ownership verification framework for visual foundation models by embedding digital watermarks into internal representations with a small encoder-decoder network.
- The watermarking uses random embedding on a hold-out set of input images, making watermark statistics detectable in functional copies of watermarked models.
- Theoretical and empirical results show a low probability of falsely detecting a watermark in non-watermarked models, and a low probability of failing to detect the watermark in watermarked models.
- This work supports IP protection for VFMs by enabling reliable ownership verification with minimal impact on model utility.
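The detection step described above amounts to a hypothesis test: bits decoded from a suspect model's internal representations are compared against the owner's secret key, and ownership is claimed only if the match rate is too high to occur by chance. The sketch below is an illustrative toy version of such a test, not RandMark's actual procedure; the function name, bit-matching scheme, and significance threshold are assumptions for demonstration.

```python
import numpy as np
from math import erfc, sqrt

def detect_watermark(decoded_bits, key_bits, alpha=1e-6):
    """Toy ownership test (illustrative, not the paper's method):
    compare watermark bits decoded from a model's representations
    against the owner's key, and test whether the match rate
    exceeds chance (0.5) at significance level alpha."""
    decoded_bits = np.asarray(decoded_bits)
    key_bits = np.asarray(key_bits)
    n = key_bits.size
    matches = int((decoded_bits == key_bits).sum())
    # Under H0 (non-watermarked model) each bit matches with
    # probability 1/2, so matches ~ Binomial(n, 0.5). Use a normal
    # approximation for a one-sided p-value; a tiny p-value bounds
    # the false-detection probability on non-watermarked models.
    z = (matches - 0.5 * n) / np.sqrt(0.25 * n)
    p_value = 0.5 * erfc(z / sqrt(2.0))
    return p_value < alpha, p_value

# A watermarked model decodes the key almost perfectly: simulate a
# 256-bit key recovered with 10 bit errors.
rng = np.random.default_rng(0)
key = rng.integers(0, 2, size=256)
noisy = key.copy()
flip = rng.choice(256, size=10, replace=False)
noisy[flip] ^= 1
owned, p = detect_watermark(noisy, key)
```

With 246 of 256 bits matching, the match rate is far above the 50% expected by chance, so the test reports ownership with an extremely small p-value; a model decoding unrelated bits would match roughly half the key and fail the test.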