[D] How's MLX and jax/ pytorch on MacBooks these days?

Reddit r/MachineLearning / 4/7/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • A user is comparing MacBook Pro options (M5 Pro vs M4 Max with similar CPU and memory) specifically for machine-learning workflows, including local LLM experiments, potential fine-tuning/training, and running VMs/containers.
  • They want to know whether GPU-accelerated ML/DL is realistically feasible on these MacBooks and how different frameworks (MLX, JAX, PyTorch) perform in practice.
  • The discussion focuses on whether Apple’s “neural accelerator”/matmul engines on newer chips (M5 Pro) provide meaningful benefits for ML workloads compared with the M4 Max’s larger GPU and bandwidth.
  • The user is seeking guidance on whether they should prioritize GPU-focused hardware versus CPU and memory depending on their development and experimentation goals.
  • The post is framed as a request for personal experience and up-to-date feasibility/efficiency insights rather than a single new product release.

So I'm looking at buying a new 14-inch MacBook Pro with an M5 Pro and 64 GB of memory vs. an M4 Max with the same specs.

My priorities are professional software development, including running multiple VMs, agents, and containers, plus playing around with local LLMs, maybe some fine-tuning, and also training regular old machine-learning models.

It seems like I'd go for the M4 Max because of the extra GPU cores, the much higher memory bandwidth, and the only marginal difference in CPU performance, but I'm wondering about the neural accelerator stuff.

However, I'm posting here to get some insight on whether it's even feasible to do GPU-accelerated machine learning and deep learning on these machines at all, or if I should just focus on CPU and memory. How are MLX, JAX, and PyTorch for training these days? Do the matmul neural engines on the M5 help?
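For anyone wondering the same thing, a quick way to answer the "is GPU acceleration even feasible" part is to probe the backends directly. This is a minimal sketch assuming `torch` (and optionally `mlx`) are pip-installed; it falls back gracefully if either is missing. `torch.backends.mps.is_available()` reports whether PyTorch's Metal (MPS) backend can see the Apple-silicon GPU, and MLX defaults to the GPU device on Apple silicon.

```python
def detect_backends():
    """Return a dict describing which Apple-silicon ML backends are usable."""
    backends = {}
    try:
        import torch
        # MPS is PyTorch's Metal backend for Apple-silicon GPUs.
        backends["torch_mps"] = torch.backends.mps.is_available()
    except ImportError:
        backends["torch_mps"] = None  # torch not installed
    try:
        import mlx.core as mx
        # MLX picks the GPU by default on Apple silicon.
        backends["mlx"] = str(mx.default_device())
    except ImportError:
        backends["mlx"] = None  # mlx not installed
    return backends

if __name__ == "__main__":
    print(detect_backends())
```

If `torch_mps` comes back `True`, you can move tensors with `.to("mps")` and train small models on the GPU; JAX on macOS needs the separate `jax-metal` plugin and is generally the least mature of the three.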

Would appreciate any insights on this and if anyone has personal experience. thanks!

submitted by /u/Busy_Alfalfa1104