AI Navigate

[P] mlx-tune – Fine-tune LLMs on Apple Silicon with MLX (SFT, DPO, GRPO, VLM)

Reddit r/MachineLearning / 3/17/2026


Key Points

  • mlx-tune is a Python library that enables fine-tuning LLMs natively on Apple Silicon using Apple's MLX framework.
  • It supports SFT, DPO, ORPO, GRPO, KTO, and SimPO trainers with proper loss implementations, plus vision-language model fine-tuning (tested with Qwen3.5); the API mirrors Unsloth/TRL, so the same training script runs on Mac and CUDA by simply changing the import line.
  • It runs on 8GB+ unified RAM and is built on mlx-lm and mlx-vlm, with LoRA/QLoRA, chat templates for 15 model families, and GGUF export.
  • It is not a replacement for Unsloth on NVIDIA and is intended for local prototyping on Mac before scaling to cloud GPUs; GitHub: https://github.com/ARahim3/mlx-tune

Sharing mlx-tune, a Python library for fine-tuning LLMs natively on Apple Silicon using Apple's MLX framework.

It supports SFT, DPO, ORPO, GRPO, KTO, and SimPO trainers with proper loss implementations, plus vision-language model fine-tuning (tested with Qwen3.5). The API mirrors Unsloth/TRL, so the same training script runs on Mac and CUDA — you only change the import line.
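To make the "proper loss implementations" claim concrete, here is the standard DPO objective (Rafailov et al., 2023) in a minimal sketch. This is the published formula, not mlx-tune's actual code, which the post does not show:

```python
import math

# Standard DPO loss: given per-sequence log-probs from the policy being
# trained and from a frozen reference model, for one chosen and one
# rejected completion, the loss is
#   -log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))
def dpo_loss(policy_chosen, policy_rejected,
             ref_chosen, ref_rejected, beta=0.1):
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# At initialization the policy equals the reference, so the margin is 0
# and the loss is exactly log(2).
assert abs(dpo_loss(0.0, 0.0, 0.0, 0.0) - math.log(2.0)) < 1e-12

# Once the policy prefers the chosen completion more strongly than the
# reference does, the margin is positive and the loss drops below log(2).
assert dpo_loss(-1.0, -3.0, -2.0, -2.5) < math.log(2.0)
```

ORPO, KTO, and SimPO follow the same pattern with different preference objectives, which is why a TRL-style trainer API can expose them uniformly.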

Built on top of mlx-lm and mlx-vlm. LoRA/QLoRA, chat templates for 15 model families, GGUF export. Runs on 8GB+ unified RAM.
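LoRA is what makes fine-tuning feasible in 8GB of unified RAM: instead of updating the full weight matrices, it trains a small low-rank delta. A minimal NumPy sketch of the idea (illustrative only; mlx-tune would do this with MLX arrays):

```python
import numpy as np

# LoRA: keep the base weight W (d_out x d_in) frozen and train two small
# factors, A (r x d_in) and B (d_out x r), adding their product scaled by
# alpha / r to the forward pass.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, init 0

def lora_forward(x):
    # Base path plus adapter path; with B = 0 the adapter is a no-op,
    # so training starts from exactly the pretrained behavior.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r*(d_in + d_out) = 1024 here, versus
# d_in*d_out = 4096 for full fine-tuning of this one matrix.
assert r * (d_in + d_out) < d_in * d_out
```

QLoRA applies the same adapter on top of a quantized base model, shrinking the frozen weights' memory footprint further.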

Not a replacement for Unsloth on NVIDIA — this is for prototyping locally on Mac before scaling to cloud GPUs.
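The prototype-locally-then-scale workflow implied above could be wired up with a simple platform check. The module names (`mlx_tune`, `unsloth`) and the idea of a drop-in import swap come from the post; the selection helper below is a hypothetical sketch, not mlx-tune's API:

```python
import platform

def pick_backend() -> str:
    """Choose a fine-tuning backend by platform, per the post's workflow:
    mlx-tune on Apple Silicon Macs, Unsloth on CUDA machines.
    Module names are assumptions, not confirmed package names."""
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mlx_tune"   # assumed import name on Apple Silicon
    return "unsloth"        # Unsloth's actual package name on CUDA boxes

backend = pick_backend()
print(f"would import trainers from: {backend}")
```

In practice the post says only the import line changes between the two scripts, so a check like this (or just two one-line variants of the script) is all the portability glue needed.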

GitHub: https://github.com/ARahim3/mlx-tune

submitted by /u/A-Rahim