Running my own models. I was having some trouble getting vLLM going, so I dropped down to LM Studio, which I've used on my 24GB MacBook Air.
I now have LM Link running across both laptops into the AI workstation's RTX Pro 6000 Blackwell, and my phone on LM Mini. It's so cool, and I'm just getting started.
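For anyone curious how the cross-machine setup works: LM Studio can expose an OpenAI-compatible local server (port 1234 by default), so other devices on the LAN just send standard chat-completion requests to the workstation. Here's a minimal sketch — the IP address and model name are placeholders, not my actual config:

```python
import json

# Hypothetical workstation address; LM Studio's local server defaults to
# port 1234 and speaks the OpenAI-compatible chat completions API.
BASE_URL = "http://192.168.1.50:1234/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload for LM Studio's server."""
    return {
        "model": model,  # must match a model loaded in LM Studio
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_chat_request("qwen-9b", "Hello from my laptop!")
print(json.dumps(payload, indent=2))
```

POST that payload to `{BASE_URL}/chat/completions` with `curl` or the official `openai` Python client (set `base_url=BASE_URL` and any dummy API key, since LM Studio doesn't check it by default).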
Currently I have Qwen3.5 9B running, with Qwen3.6 27B and 35B A3B downloading. Going to play with some Llamas too: Llama 3.3 70B Instruct Q8, DeepSeek R1 Distill Q8, Llama 3.3 70B Q4, and Llama 3.2 11B Vision Instruct.
Wow what a time to be alive!