I'm continuing to play around with local LLMs on my Framework 13 laptop.
Limited memory bandwidth and processing power means exploring quantized MoE models below 40B params.
Surprisingly for me, gpt-oss-20B did pretty well.
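A rough sketch of the memory math behind the sub-40B constraint (the 4-bit figure is an assumed quantization level, not stated in the post):

```python
def quantized_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GB for a quantized model."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A ~20B-parameter model quantized to 4 bits per weight needs roughly
# 10 GB for weights alone, which leaves headroom on a 32 GB laptop.
print(round(quantized_size_gb(20, 4), 1))  # → 10.0
```

MoE models help on bandwidth-limited hardware because only a fraction of the parameters are active per token, so the per-token memory traffic is much lower than the total parameter count suggests.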
Reddit r/LocalLLaMA / 4/11/2026