"I get that you don’t always want to give away your best stuff, but man, I would hate if they didn’t put this out to us Local folks. Fingers crossed 🤞 that they give it a full open source / open weights release."
Will they or won’t they? Why they gotta toy with our emotions?
Reddit r/LocalLLaMA / 3/22/2026
💬 Opinion · Signals & Early Trends · Models & Research
Key Points
- The Reddit post on r/LocalLLaMA expresses hope that the model in question gets a full open-source/open-weights release for the local community.
- It favors openness over withholding the best assets, signaling a preference for transparency in AI releases.
- The post frames the potential release as a community-driven hope rather than an official corporate announcement.
- Overall, the message reflects ongoing interest in accessible AI models and open-source releases within the r/LocalLLaMA community.
Related Articles
We Scanned 11,529 MCP Servers for EU AI Act Compliance
Dev.to

Math needs thinking time, everyday knowledge needs memory, and a new Transformer architecture aims to deliver both
THE DECODER
Kreuzberg v4.5.0: We loved Docling's model so much that we gave it a faster engine
Reddit r/LocalLLaMA
Today, what hardware to get for running large-ish local models like qwen 120b ?
Reddit r/LocalLLaMA
Running mistral locally for meeting notes and it's honestly good enough for my use case
Reddit r/LocalLLaMA