| Saw this on a YouTube video; the repo is https://github.com/MiniMax-AI/OpenRoom, a MiniMax project. I'm running Qwen_Qwen3.5-35B-A3B-Q6_K in the image mainly because that's what was loaded in memory, and I have also tested with 27B (obviously a lot slower) on my inference setup. I imagine https://huggingface.co/ArliAI/Qwen3.5-27B-Derestricted would be used by a lot of guys with this project for ... planning to build thermonuclear devices to take over the world, or just gooning or whatever. I just submitted https://github.com/MiniMax-AI/OpenRoom/pull/29 to add llama.cpp support; it's a pretty simple change that mainly removes the required API key and adds a dropdown option for llama.cpp. |
LocalLLaMA men of culture, MiniMax OpenRoom seems to work fine on Qwen 27B.
Reddit r/LocalLLaMA / 3/27/2026
💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage
Key Points
- Reddit users report that MiniMax’s OpenRoom project appears to work well locally when paired with Qwen 27B, though the dense 27B model runs noticeably slower than the Qwen3.5-35B-A3B MoE model the poster normally keeps loaded.
- The post points to MiniMax-AI/OpenRoom on GitHub and suggests Qwen3.5-27B-Derestricted is likely to be a commonly used choice with this setup.
- A contributor says they opened a pull request to add llama.cpp support by removing the API key requirement and adding a UI dropdown option for selecting llama.cpp.
- The discussion frames OpenRoom as part of the broader “LocalLLaMA” ecosystem for running language models locally via community integration efforts.
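The PR described above reportedly makes the API key optional and adds llama.cpp as a backend choice. A minimal sketch of what that kind of provider logic could look like is below; llama.cpp's built-in server exposes an OpenAI-compatible endpoint and ignores the Authorization header, so a placeholder key suffices. All names (`PROVIDERS`, `build_client_config`, the localhost URL) are illustrative assumptions, not OpenRoom's actual code or the contents of PR #29.

```python
from typing import Optional

# Hypothetical backend registry: llama.cpp's OpenAI-compatible server
# (e.g. `llama-server` on port 8080) does not validate API keys,
# so the key is only enforced for hosted providers.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "requires_api_key": True},
    "llama.cpp": {"base_url": "http://localhost:8080/v1", "requires_api_key": False},
}

def build_client_config(provider: str, api_key: Optional[str] = None) -> dict:
    """Return connection settings, requiring a key only where the backend needs one."""
    spec = PROVIDERS[provider]
    if spec["requires_api_key"] and not api_key:
        raise ValueError(f"{provider} requires an API key")
    return {
        "base_url": spec["base_url"],
        # llama.cpp ignores the key, so any non-empty placeholder works
        "api_key": api_key or "not-needed",
    }
```

With a dropdown selecting `"llama.cpp"`, the UI can skip the key field entirely, while selecting `"openai"` still enforces it.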