Which of these do you think we'll get in May? Also, feel free to pick or rank the ones you'd want most:
more Gemma4 models (124b?) (other sizes?)
more Qwen3.6 models (9b? 122b? 397b?)
new Qwen Coder model (an even newer 80b Next?) (~397b/400b+ coder?)
new GLM model in the 100b-300b size range?
small Kimi model of some sort?
more Nvidia/Nemotron models?
new Stepfun model?
new OpenAI OSS model(s)?
Meta Avocado/Paricado model(s)?
more MiniMax model(s)? (maybe in different sizes?)
more MiMo model(s)? (maybe in different sizes?)
more Mistral models?
new Devstral models?
more DeepSeekv4 sizes?
more Granite models?
new Phi model(s)?
new NousResearch finetunes of any really big models?
more Bonsai models?
a model with a significantly improved version/implementation of engram?
Any new Taalas-style model-on-a-chip burners? (maybe even for bigger models?)
Any surprise new models from hardware players other than Nvidia (e.g. a local LLM from AMD, Intel, Samsung, Micron, or someone like that)?
other models?
Any interesting tech/methods/concepts/improvements you're predicting or hoping for?