Pointy - A Lightweight Transformer for Point Cloud Foundation Models
arXiv cs.CV / 3/12/2026
📰 News · Models & Research
Key Points
- Pointy introduces a lightweight transformer-based architecture for point cloud foundation models that reduces reliance on cross-modal supervision.
- The model is trained on just 39k point clouds yet outperforms several larger foundation models trained on 200k+ samples, challenging the assumption that strong point cloud foundation models require massive pre-training corpora.
- The authors perform a comprehensive replication study with standardized training regimes to isolate architectural contributions and compare tokenizer-free backbones (a minimal sketch of that idea follows this list).
- Results show that simple backbones can approach the state of the art set by data- and modality-rich models, highlighting the value of careful architectural design.
- The work provides open-source code, pre-trained models, and training protocols on GitHub for broader replication and use.
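To make the "tokenizer-free backbone" idea concrete, here is a minimal sketch of what such a point cloud transformer can look like. This is an illustrative assumption, not Pointy's actual architecture: the class name `MiniPointTransformer`, the layer sizes, and the max-pool readout are all invented for the example. The defining property is that raw xyz coordinates are embedded directly per point and fed to a standard Transformer encoder, with no learned tokenizer or grouping stage in between.

```python
# Hypothetical sketch of a tokenizer-free point-cloud transformer.
# All names and hyperparameters are illustrative, not the paper's design.
import torch
import torch.nn as nn

class MiniPointTransformer(nn.Module):
    """Embeds raw xyz points directly (no learned tokenizer) and runs a
    plain Transformer encoder over the resulting per-point tokens."""
    def __init__(self, dim=256, depth=4, heads=8, num_classes=40):
        super().__init__()
        self.embed = nn.Linear(3, dim)  # per-point linear embedding of xyz
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, pts):                # pts: (B, N, 3) raw coordinates
        tokens = self.embed(pts)           # (B, N, dim); one token per point
        feats = self.encoder(tokens)       # (B, N, dim) contextualized features
        pooled = feats.max(dim=1).values   # permutation-invariant global pooling
        return self.head(pooled)           # (B, num_classes) logits

# Usage: classify a batch of two clouds with 1024 points each.
model = MiniPointTransformer()
logits = model(torch.randn(2, 1024, 3))
print(logits.shape)  # torch.Size([2, 40])
```

The point of the sketch is what is absent: where many point cloud transformers first group points (e.g., farthest-point sampling plus kNN) and encode each group with a small PointNet-style tokenizer, here every point becomes a token through a single linear layer, which keeps the backbone simple and lightweight.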