We've been quietly working on a fine-tuned model and finally decided to put it out there. The idea was to fine-tune a really small LLM that could be mediocre at general CodeGen, then enhance it by feeding it higher-quality code for one very niche CodeGen task (to be precise: UI gen, in one particular framework, language, and CSS library). We got the idea from this paper: https://arxiv.org/abs/2506.02153
Overview
Qwendean is a 4-billion-parameter model fine-tuned on top of Qwen3-4B for UI gen tasks. It was trained on a JSONL dataset of {prompt, completion} pairs, around 4K samples in total. We won't get into the minute details here since you can check out the Colab notebook directly: https://colab.research.google.com/drive/1r7g7xyG1tegQJntL82cIwu-iog-fhv0i?usp=sharing
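For anyone unfamiliar with the format, a {prompt, completion} JSONL dataset like the one described above is just one JSON object per line. A minimal sketch of writing and reading it (the field names and sample content here are assumptions for illustration; the actual schema is in the notebook):

```python
import json

# Hypothetical sample; the real dataset pairs a UI description
# prompt with framework-specific component code as the completion.
samples = [
    {
        "prompt": "Create a centered login card with email and password fields.",
        "completion": "<LoginCard>...</LoginCard>",  # placeholder code
    },
]

# Write one JSON object per line (the JSONL convention).
with open("uigen_dataset.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# Read it back the same way: parse each line independently.
with open("uigen_dataset.jsonl") as f:
    loaded = [json.loads(line) for line in f]
```

Most fine-tuning toolchains (including the one in the Colab) can consume this shape directly.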
The end goal is to build something like Vercel's v0.dev. For that, we're currently building a LangGraph system where a bigger model delegates UI-generation tasks to these SLMs, whose outputs then go into a synthesizer. Once we get some time after finishing our academic thesis, we'll put out a clean repo covering all the training and LangGraph code under Apache 2.0.
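The delegation flow described above can be sketched in plain Python with stub functions standing in for the actual model calls (the real system uses LangGraph; every name and the split/stitch logic here are our assumptions, not the authors' implementation):

```python
from typing import Callable

def planner(task: str) -> list[str]:
    # Stand-in for the bigger model: break the UI request
    # into component-level subtasks.
    return [f"generate component: {part}" for part in task.split(" and ")]

def slm_uigen(subtask: str) -> str:
    # Stand-in for the fine-tuned SLM (e.g. qwendean-4b):
    # turn one subtask into a UI code fragment.
    return f"<!-- code for: {subtask} -->"

def synthesizer(fragments: list[str]) -> str:
    # Final step: stitch the generated fragments into one page.
    return "\n".join(fragments)

def run_pipeline(task: str, gen: Callable[[str], str] = slm_uigen) -> str:
    subtasks = planner(task)
    fragments = [gen(s) for s in subtasks]
    return synthesizer(fragments)

page = run_pipeline("navbar and hero section")
```

In the actual system each stub would be a node in a LangGraph graph, with the planner routing between them instead of a fixed loop.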
We're not great fine-tuning wizards like others here, better vibecoders maybe, so it's not the best out there. But we're looking for honest feedback from the community, especially from people who work on fine-tuning.
Model: https://huggingface.co/iamdyeus/qwendean-4b
Quantised: https://huggingface.co/iamdyeus/qwendean-4b-GGUF