Guys this is so fun!

Reddit r/LocalLLaMA / 4/28/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The author shares their experience moving from vLLM to LM Studio to get local models running reliably on a 24GB MacBook Air.
  • They connect multiple devices using “LM Link,” including two laptops to an RTX Pro 6000 Blackwell workstation and a phone running “LM Mini.”
  • They are testing a range of models, with Qwen3.5 9B currently running and Qwen3.6 variants downloading, plus planned experiments with Llama 3.3 70B and DeepSeek R1 Distill in quantized formats.
  • The post emphasizes the excitement of building a personal multi-device local LLM setup and invites others to try similar configurations.

Running my own models. I was having some trouble getting vLLM going, so I dropped down to LM Studio, which I've used on my 24GB MacBook Air.

I now have LM Link running across both laptops into the AI workstation with the RTX Pro 6000 Blackwell, and my phone is on LM Mini. It's so cool, and I'm just getting started.
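If anyone wants to wire up something similar without LM Link, LM Studio exposes an OpenAI-compatible server (default port 1234), so other devices on the LAN can talk to the workstation directly. Here's a minimal Python sketch, assuming a hypothetical workstation address of 192.168.1.50 and a placeholder model identifier:

```python
# Minimal sketch: query an LM Studio server on another machine over the LAN.
# Assumptions: the workstation is reachable at 192.168.1.50 (hypothetical IP),
# the server listens on LM Studio's default port 1234, and a model is loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.1.50:1234/v1",  # the workstation, not localhost
    api_key="lm-studio",  # LM Studio ignores the key, but the client needs one
)

response = client.chat.completions.create(
    model="qwen3.5-9b",  # hypothetical identifier; use whatever /v1/models reports
    messages=[{"role": "user", "content": "Say hello from the workstation."}],
)
print(response.choices[0].message.content)
```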

Currently have Qwen3.5 9B going, with Qwen3.6 27B and 35B A3B downloading. Going to play with some Llamas too: 3.3 70B Instruct Q8, DeepSeek R1 Distill Q8, 3.3 70B Q4, and 3.2 11B Vision Instruct.
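With that many downloads in flight, it helps to ask the server which models it actually has before pointing a client at one. A small sketch against the standard /v1/models listing endpoint (same assumed workstation address as above):

```python
# Sketch: list the models an LM Studio server currently reports,
# handy for checking which downloads have finished.
# The 192.168.1.50 address is a hypothetical stand-in for the workstation.
import requests

resp = requests.get("http://192.168.1.50:1234/v1/models", timeout=5)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # e.g. a quantized Llama or Qwen identifier
```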

Wow, what a time to be alive!

submitted by /u/Perfect-Flounder7856