FINAL-Bench/Darwin-36B-Opus · Hugging Face

Reddit r/LocalLLaMA / 4/26/2026


Key Points

  • Darwin-36B-Opus is a 36B-parameter Mixture-of-Experts (MoE) language model released on Hugging Face, built via the “Darwin V7” evolutionary breeding engine.
  • The model is created by recombining two publicly available parent models: Qwen3.6-35B-A3B (as the MoE expert-structure “Father”) and a Claude-Opus-4.6 reasoning-distilled variant (as the behavior “Mother”).
  • The Darwin V7 breeding process is described as fully automated and capable of producing a deployable bfloat16 checkpoint in under an hour on a single GPU.
  • On the GPQA Diamond benchmark (physics, chemistry, biology), Darwin-36B-Opus reportedly scores 88.4%, positioning it as the top performer within the Darwin family.

https://huggingface.co/bartowski/FINAL-Bench_Darwin-36B-Opus-GGUF

Darwin-36B-Opus is a 36-billion-parameter mixture-of-experts (MoE) language model produced by the Darwin V7 evolutionary breeding engine from two publicly available parents:

  • Qwen3.6-35B-A3B — the "Father," contributing the MoE expert structure.
  • A Claude-Opus-4.6 reasoning-distilled variant — the "Mother," contributing the model's behavior.

Darwin V7 recombines these two parents into a single descendant that preserves the Mother's distilled chain-of-thought behavior while retaining the structural fidelity of the Father's expert topology. The breeding process is fully automated and produces a deployable bfloat16 checkpoint in under an hour on a single GPU.
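The post does not document how Darwin V7 actually recombines the parents, so the following is only a minimal sketch of one common recombination approach (layer-wise linear interpolation of matching weights), with the `breed` function name and plain-list weights chosen here for illustration:

```python
def breed(father, mother, alpha=0.5):
    """Hypothetical sketch of parent recombination: linearly interpolate
    matching weight tensors from two parent state dicts. This is NOT the
    documented Darwin V7 algorithm, which the model card does not describe.
    Weights are plain Python lists to keep the example self-contained."""
    child = {}
    for name, fw in father.items():
        # Fall back to the Father's weights where the Mother has no match,
        # mirroring the idea of preserving the Father's expert topology.
        mw = mother.get(name, fw)
        child[name] = [(1 - alpha) * f + alpha * m for f, m in zip(fw, mw)]
    return child
```

In a real merge the state dicts would hold tensors (e.g. via `torch`), and `alpha` could vary per layer to weight the Mother's behavior more heavily in attention blocks than in the expert routing.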

On the GPQA Diamond benchmark — 198 graduate-level questions in physics, chemistry, and biology — Darwin-36B-Opus achieves 88.4%, establishing it as the highest-performing model in the Darwin family and extending the series' record of producing state-of-the-art open models through evolution rather than retraining.
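As a quick sanity check on the reported figure, a score of 88.4% on a 198-question benchmark works out to 175 correct answers:

```python
# Sanity-check the reported GPQA Diamond score: on 198 questions,
# 88.4% accuracy corresponds to 175 correct answers.
total = 198
correct = round(0.884 * total)          # 175
accuracy = round(100 * correct / total, 1)  # 88.4
```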

submitted by /u/jacek2023