First off, hello r/LocalLLaMA, nice to join! Today I'm announcing the first two models I'm posting here: the General Reasoning Agent for Project Exploration, dubbed GRaPE. GRaPE is now on its second generation and comes in two models:
- GRaPE Mini
- GRaPE Flash
These models are 5B and 9B parameters respectively, and they support 6 thinking modes for allocating reasoning budgets, so you don't get the overthinking you see in the Qwen3.5 models. All of this is detailed in the Hugging Face repo linked at the end of this post. I've generally found medium or low to be the sweet spot, but minimal exists if you can't bear any thinking at all.
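To make the budget idea concrete, here is a minimal sketch of how a thinking mode might map to a reasoning-token budget injected into a system prompt. This is purely illustrative: only minimal, low, and medium are named in this post, so the fourth mode name and all of the token budgets below are assumptions, and the real interface is whatever the Hugging Face repo documents.

```python
# Hypothetical mapping from thinking mode to a reasoning-token budget.
# Only "minimal", "low", and "medium" are mentioned in the post; "high"
# and every budget number here are made-up placeholders.
THINKING_BUDGETS = {
    "minimal": 0,      # no thinking at all
    "low": 1024,       # assumed budget
    "medium": 4096,    # assumed budget
    "high": 16384,     # assumed mode name and budget
}

def thinking_system_prompt(mode: str) -> str:
    """Build a system prompt that caps the model's reasoning budget."""
    if mode not in THINKING_BUDGETS:
        raise ValueError(f"unknown thinking mode: {mode}")
    budget = THINKING_BUDGETS[mode]
    if budget == 0:
        return "You are a helpful assistant. Answer directly without thinking."
    return (
        "You are a helpful assistant. Limit your internal reasoning "
        f"to at most {budget} tokens before answering."
    )

print(thinking_system_prompt("medium"))
```

The point is just that each mode trades latency for reasoning depth; how GRaPE actually signals the budget (prompt tag, chat-template kwarg, or special token) is up to the repo.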
GRaPE 2 was trained on lots and lots of examples of acting as an agent (code agent, browser agent, etc.), and the models have decent coding performance!
Huge thanks to r/unsloth for making GRaPE 2 possible.
