Why doesn't any OSS tool treat llama.cpp as a first-class citizen?

Reddit r/LocalLLaMA / 4/21/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • Many open-source AI tools (including editor integrations and “open source” copilot-like extensions) do not treat llama.cpp as a first-class backend, instead defaulting to Ollama and sometimes LM Studio.
  • The author argues that adding llama.cpp support takes little engineering effort, for example by exposing an OpenAI API–compatible endpoint configurable via port/URL settings.
  • The post claims Ollama has gained mindshare despite (in the author’s view) being less aligned with OSS principles, especially compared with llama.cpp.
  • The author notes llama.cpp is already usable for average developers and would benefit from broader tooling support.
  • The goal of the post is to reach developers maintaining these OSS tools and encourage them to improve llama.cpp integration.

Be it opencode, the VS Code Copilot extension, or whatever "open source" AI tool, I rarely see llama.cpp treated as a first-class provider. Every single one of them has Ollama and sometimes LM Studio. Engineering-wise there's essentially zero effort to list llama.cpp the same as Ollama. Or better yet, simply make it a label-agnostic, OpenAI API-compatible endpoint and let me fill in the port number/URL (a sketch of how little that takes follows below). This is especially annoying because Ollama, the scummy turncoat stealing from llama.cpp, still has the mindshare despite it being clear as day that they are not good members of the OSS ecosystem. llama.cpp is now very usable for the average dev (the majority of the current user base) and reasonably so for the average joe.
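
For what it's worth, the integration really is that small. Here is a minimal sketch in TypeScript using the official openai npm client, assuming llama.cpp's bundled llama-server on its default port 8080; the base URL, placeholder API key, and model label are the assumptions a tool would surface as user settings, not any particular tool's actual code.

```typescript
import OpenAI from "openai";

// Minimal sketch: llama.cpp's llama-server exposes OpenAI-compatible
// routes under /v1. Start it with something like:
//   llama-server -m model.gguf --host 127.0.0.1 --port 8080
// Base URL and port below are assumptions a tool would make configurable.
const client = new OpenAI({
  baseURL: "http://localhost:8080/v1",
  apiKey: "sk-no-key-needed", // llama-server skips auth unless --api-key is set
});

async function main() {
  const completion = await client.chat.completions.create({
    // llama-server serves whichever model it was launched with,
    // so the model field is effectively just a label here.
    model: "local",
    messages: [{ role: "user", content: "Say hello from llama.cpp." }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```

Any tool that already speaks the OpenAI API gets llama.cpp support essentially for free by making the base URL and API key user-configurable, which is the whole point of the complaint above.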

I'm high-key hoping this post reaches the devs who are making these tools.

submitted by /u/rm-rf-rm