New Unsloth Studio Release!

Reddit r/LocalLLaMA / 3/28/2026


Key Points

  • Unsloth announced a new Unsloth Studio beta release and reports shipping 50+ features, updates, and fixes within a week of launch.
  • The update includes pre-compiled llama.cpp/mamba_ssm binaries for faster ~1-minute installs and smaller footprint, along with better auto-detection of existing models across LM Studio and Hugging Face.
  • Inference performance improvements are claimed, including 20–30% faster inference and corrected token/s reporting to reflect true inference speed rather than startup overhead.
  • Tool calling was upgraded with improved parsing/accuracy, faster execution, and a dedicated Tool Outputs panel with timers, plus “one line” uv install/update commands.
  • Major platform stability fixes target Windows/macOS setups (including crash and non-NVIDIA install issues), along with fixes for CPU RAM spikes, persistence of custom system prompts, and a Colab free T4 notebook issue.

Hey guys, it's been a week since we launched Unsloth Studio (Beta). Thanks so much for trying it out and for all the support and feedback! We've shipped 50+ new features, updates, and fixes.

New features / major improvements:

  • Pre-compiled llama.cpp / mamba_ssm binaries for ~1-minute installs and ~50% smaller size
  • Auto-detection of existing models from LM Studio, Hugging Face etc.
  • 20–30% faster inference, now similar to llama-server / llama.cpp speeds.
  • Tool calling: better parsing, better accuracy, faster execution, no raw tool markup in chat, plus a new Tool Outputs panel and timers.
  • New one-line uv install and update commands
  • New Desktop app shortcuts that close properly.
  • Data Recipes now supports macOS, CPU and multi-file uploads.
  • Preliminary AMD support for Linux.
  • Inference token/s reporting fixed so it reflects actual inference speed instead of including startup time.
  • Revamped docs with detailed guides on uninstalling, deleting models, etc.
  • Lots of new settings added, including context length, detailed prompt info, web sources, etc.

Important fixes / stability:

  • Major Windows and Mac setup fixes: silent exits, conda startup crashes, broken non-NVIDIA installs, and setup validation issues.
  • CPU RAM spike fixed.
  • Custom system prompts/presets now persist across reloads.
  • Colab free T4 notebook fixed.

macOS, Linux, WSL Install:

curl -fsSL https://unsloth.ai/install.sh | sh 

Windows Install:

irm https://unsloth.ai/install.ps1 | iex 

Launch via:

unsloth studio -H 0.0.0.0 -p 8888 
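The -H 0.0.0.0 -p 8888 flags bind the Studio server to all interfaces on port 8888. If you script the launch (e.g. in CI or a remote box), a generic TCP readiness check works; this is a plain-Python sketch, not part of Unsloth, and the demo uses a throwaway local listener in place of the real server:

```python
import socket

def wait_for_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP server is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: start a throwaway listener on an OS-assigned free port and
# confirm the check sees it. Against a real launch you would poll
# wait_for_port("localhost", 8888) in a loop until it returns True.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(wait_for_port("127.0.0.1", port))  # True
srv.close()
```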

Update (for Linux / Mac / WSL):

unsloth studio update 

Update (for Windows; we're still working on a faster method like on Linux):

irm https://unsloth.ai/install.ps1 | iex 

Thanks so much, everyone! Please note that because this is a Beta, we're still going to push a lot of new features and fixes over the next few weeks.

If you have any suggestions for what you'd like us to add please let us know!
MLX, AMD, and API calls are coming early next month! :)

See our change-log for more details on changes: https://unsloth.ai/docs/new/changelog

submitted by /u/danielhanchen