AI Navigate

Experiment: How far can a 28M model go in business email generation?

Reddit r/LocalLLaMA / 3/20/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • A ~28M parameter Transformer model trained on synthetic business emails can sometimes generate coherent email-like text, despite struggles with instruction-following.
  • The post frames this as an experiment to explore how far small models can go with structured generation under tight parameter constraints, acknowledging it is not perfect.
  • A sample output from the prompt "Write a polite refusal email" demonstrates that the model can occasionally produce near-usable text but remains inconsistent and prone to drift.
  • Links to the GitHub and HuggingFace repositories are provided; the implementation is described as loosely based on TRM experiments and mlx-trm implementations, and the author invites feedback on improving consistency and instruction-following.

I’ve been experimenting with training a small (~28M parameter) Transformer model on synthetic business email data.
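The post doesn't describe how the synthetic business email data was produced. One common approach for this kind of dataset is simple template filling; the sketch below is purely illustrative (all templates, names, and topics are hypothetical, not from the author's pipeline):

```python
import random

# Hypothetical template-based generator -- NOT the author's actual data pipeline.
GREETINGS = ["Hi {name},", "Dear {name},", "Hello {name},"]
BODIES = [
    "Thank you for reaching out about {topic}. Unfortunately, we won't be able to proceed at this time.",
    "Following up on {topic}: could you share the latest figures before Friday?",
    "I'm writing to confirm our meeting about {topic} next week.",
]
CLOSINGS = ["Best,\n{sender}", "Kind regards,\n{sender}", "Thanks,\n{sender}"]

def synth_email(rng: random.Random) -> str:
    """Assemble one synthetic email from randomly chosen template parts."""
    fields = {
        "name": rng.choice(["Alex", "Sam", "Jordan"]),
        "topic": rng.choice(["the Q3 budget", "the vendor contract", "your proposal"]),
        "sender": rng.choice(["Priya", "Daniel", "Mei"]),
    }
    parts = [rng.choice(GREETINGS), rng.choice(BODIES), rng.choice(CLOSINGS)]
    return "\n\n".join(p.format(**fields) for p in parts)

rng = random.Random(0)  # seeded for reproducibility
print(synth_email(rng))
```

A generator like this produces structurally consistent emails, which may be part of why a tiny model can learn the email "shape" even when instruction-following lags behind.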

It’s definitely not perfect and still struggles with instruction-following, but I was surprised that it can sometimes produce reasonably coherent email-like text.

The model is very small compared to typical LLMs, so this was more of an experiment to see how far structured generation can go under tight parameter constraints.

Some generations are messy or drift off-topic, but occasionally it produces outputs that almost look usable.

I’d be interested in any feedback, especially ideas on improving consistency or instruction-following in small models.

Here’s one sample output:

Prompt: "Write a polite refusal email"

Output:

I understand this is a Friday evening, but I'm happy to provide more information.
I’ll do my best to discuss the details and explore possible alternatives.

We’ll keep you updated on our progress. Please let me know if this is something you’d be interested in.

Best,

[name]

This is from a ~28M parameter model, so it's still inconsistent but occasionally gets close.

If anyone’s interested:
GitHub: https://github.com/kamisori-daijin/textrm
HuggingFace: https://huggingface.co/Kamisori-daijin/textrm-28M-bizmail

(Implementation is loosely based on some TRM experiments and mlx-trm implementations.)

submitted by /u/AdhesivenessSea9511