Anyone tried models created by AMD?

Reddit r/LocalLLaMA / 4/1/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Models & Research

Key Points

  • A Reddit user asks why AMD appears less active than NVIDIA in releasing widely used model families, citing NVIDIA’s popular Nemotron releases as a comparison point.
  • The user notes that Hugging Face lists many AMD models (about 400) and is especially surprised to see more than 20 releases in the MXFP4 format.
  • They point to specific large models (e.g., Qwen3.5-397B-A17B-MXFP4, GLM-5-MXFP4, MiniMax-M2.5-MXFP4, Kimi-K2.5-MXFP4, and Qwen3-Coder-Next-MXFP4) and ask whether anyone has tested them.
  • The post expresses hope that AMD-produced MXFP4 models will perform better than MXFP4 models created by third-party quantizers, and suggests AMD should release MXFP4 for more small-to-medium models.

I had a question: why isn't AMD creating models the way NVIDIA does? NVIDIA's Nemotron models are so popular (e.g., Nemotron-3-Nano-30B-A3B, Llama-3_3-Nemotron-Super-49B, and the recent Nemotron-3-Super-120B-A12B).

Not sure whether anyone has brought this topic up here before.

But when I searched HF, I found AMD's page, which has 400 models.

https://huggingface.co/amd/models?sort=created

I was a little surprised to see that they've released 20+ models in MXFP4 format.

https://huggingface.co/amd/models?sort=created&search=mxfp4

Has anyone tested these models? I see models such as Qwen3.5-397B-A17B-MXFP4, GLM-5-MXFP4, MiniMax-M2.5-MXFP4, Kimi-K2.5-MXFP4, and Qwen3-Coder-Next-MXFP4. I wish they'd release MXFP4 for more small and medium models. Hopefully they do from now on.

I hope these MXFP4 models are better (since they come from AMD itself) than the typical MXFP4 quants produced by third-party quantizers.
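For context on why a first-party MXFP4 release might matter: MXFP4 is the OCP Microscaling format in which a block of values shares a single power-of-two (E8M0) scale and each element is stored as 4-bit FP4 (E2M1), so quantization quality hinges on how the scale and rounding are chosen per block. The sketch below is my own illustration of that encoding under my reading of the spec (the helper names are hypothetical, and it uses simple round-to-nearest rather than any vendor's actual calibration pipeline):

```python
import math

# FP4 (E2M1) representable magnitudes per the OCP Microscaling spec
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_mxfp4_block(block):
    """Quantize a block of floats (up to 32 in real MXFP4) to a shared
    power-of-two scale plus FP4 elements. Returns (scale, elements)."""
    amax = max(abs(x) for x in block)
    if amax == 0:
        return 1.0, [0.0] * len(block)
    # Pick a power-of-two scale so the largest magnitude lands near
    # FP4's max representable value (6.0 = 1.5 * 2**2, hence the -2).
    scale = 2.0 ** (math.floor(math.log2(amax)) - 2)
    # Round each scaled element to the nearest FP4 grid point,
    # preserving its sign.
    elems = [math.copysign(min(FP4_GRID, key=lambda g: abs(g - abs(x) / scale)), x)
             for x in block]
    return scale, elems

def dequantize(scale, elems):
    """Recover approximate values: element * shared scale."""
    return [scale * v for v in elems]

scale, q = quantize_mxfp4_block([0.1, -0.7, 2.5, 6.3])
print(scale, dequantize(scale, q))  # coarse 4-bit approximations of the inputs
```

The coarseness of the FP4 grid is exactly why block-scale selection and rounding decisions differ between quantizers, and why an AMD-produced quant could plausibly differ in quality from a third-party one.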

submitted by /u/pmttyji