AI Navigate

Bielik-Minitron-7B: Compressing Large Language Models via Structured Pruning and Knowledge Distillation for the Polish Language

arXiv cs.CL / March 13, 2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • Bielik-Minitron-7B is a compressed 7.35B parameter version of Bielik-11B-v3.0 optimized for European languages (including Polish) using a two-stage compression approach inspired by the NVIDIA Minitron method.
  • The compression reduces parameters by 33.4%, from 11.04B to 7.35B, using structured hybrid pruning with NVIDIA Model Optimizer and logit-based distillation with NVIDIA NeMo.
  • After distillation, an alignment pipeline consisting of Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO-P), and Reinforcement Learning with GRPO was applied to recover model quality.
  • The final model reportedly recovers about 90% of the baseline performance while offering up to 50% faster inference, enabling cheaper deployment for less-represented languages.
  • This work illustrates a practical pathway to deploy efficient language models for European languages, preserving quality while reducing inference costs, supported by NVIDIA tooling.
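The logit-based distillation step mentioned above is commonly implemented as minimizing the KL divergence between the teacher's and student's softened output distributions. The sketch below is a minimal stdlib-only illustration of that general technique, not the paper's or NeMo's actual implementation; the temperature value and the `T^2` scaling are conventional choices, not details from the report.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution,
    optionally softened by a temperature > 1."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    scaled by T^2 as is conventional in logit distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2

# Identical logits give zero loss; diverging logits give a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))      # ~0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)  # True
```

In practice this loss is computed per token over the full vocabulary and backpropagated through the student only, so the pruned model learns to mimic the 11B teacher's output distribution.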

Abstract

This report details the creation of Bielik-Minitron-7B, a compressed 7.35B parameter version of the Bielik-11B-v3.0 model, specifically optimized for European languages. By leveraging a two-stage compression methodology inspired by the NVIDIA Minitron approach, we combined structured hybrid pruning and knowledge distillation to reduce the model's parameter count by 33.4%, from 11.04B to 7.35B. We utilized the NVIDIA Model Optimizer for structural pruning and the NVIDIA NeMo Framework for logit-based distillation to recover quality. Following distillation, the model underwent a rigorous alignment pipeline consisting of Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO-P), and Reinforcement Learning (GRPO). Our final model successfully recovered approximately 90% of the baseline model's performance while providing up to 50% inference speedup. This approach demonstrates an efficient pathway to create language models for less-represented languages, preserving the original model quality while reducing inference deployment costs.
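To make the "structured" in structured pruning concrete: rather than zeroing individual weights, structured (width) pruning removes whole units, such as attention heads or MLP neurons, so the resulting matrices are genuinely smaller and faster at inference time. The toy sketch below illustrates that idea by ranking neurons (rows of a weight matrix) by L2 norm and keeping the top fraction; the ranking criterion and `keep_ratio` are illustrative assumptions, not the importance metrics used by the NVIDIA Model Optimizer.

```python
def prune_neurons(weight_rows, keep_ratio):
    """Structured width pruning sketch: score each neuron (one row
    of the weight matrix) by its L2 norm and keep only the
    top-scoring fraction. Dropping whole rows shrinks the layer,
    unlike unstructured pruning, which merely zeroes entries."""
    scores = [(sum(w * w for w in row) ** 0.5, i)
              for i, row in enumerate(weight_rows)]
    scores.sort(reverse=True)
    n_keep = max(1, int(len(weight_rows) * keep_ratio))
    keep = sorted(i for _, i in scores[:n_keep])  # preserve row order
    return [weight_rows[i] for i in keep]

# A toy 4-neuron layer pruned to roughly two-thirds of its width,
# mirroring the ~33% parameter reduction reported for the model.
layer = [[0.9, 0.8], [0.01, 0.02], [0.5, 0.4], [0.03, 0.01]]
pruned = prune_neurons(layer, keep_ratio=0.667)
print(len(pruned))  # 2
```

A distillation phase like the one described in the report is then needed precisely because removing units this way degrades quality, which the student model recovers by matching the teacher's outputs.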