Hi everyone,
I’ve been experimenting with building a local book-generation pipeline that tries to solve the common problem with AI-generated novels: they often feel repetitive, lose track of characters, and have no real narrative structure.
Instead of just prompting a model to “write a book”, the system breaks the process into multiple stages.
Current pipeline looks roughly like this:
INPUT
→ World / setting generator
→ Character architect
→ Story synopsis
→ Chapter planner
→ Scene planner
→ Scene writer
→ Critic
→ Rewrite
→ Continuity memory
Each step produces structured outputs that the next step consumes.
The goal is to mimic how a writers’ room might structure a story rather than letting the model improvise everything.
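The stage-to-stage handoff can be sketched roughly like this — a minimal, illustrative version where each stage is just a function that consumes and returns a shared structured state. All names (`StoryState`, `run_pipeline`, `character_architect`) are placeholders, not my actual code:

```python
from dataclasses import dataclass, field

@dataclass
class StoryState:
    # Structured state that flows through the pipeline stages.
    setting: str = ""
    characters: list = field(default_factory=list)
    synopsis: str = ""
    chapter_plans: list = field(default_factory=list)

def run_pipeline(premise, stages):
    """Feed the premise through each stage in order; every stage
    returns an updated StoryState for the next one to consume."""
    state = StoryState(setting=premise)
    for stage in stages:
        state = stage(state)
    return state

# Example stage: a character architect that appends cast entries.
# In the real pipeline this would call the writer model with a
# structured prompt; here it's a stub.
def character_architect(state):
    state.characters.append({"name": "placeholder", "goal": "tbd"})
    return state

final = run_pipeline("a city that forgets itself every night",
                     [character_architect])
```

The point of the shared state object is that later stages (scene writer, critic) can read what earlier stages decided instead of re-deriving it from raw prompt history.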
Current stack:
• Writer model: qwen3.5:9b
• Critic / editor: qwen3.5:27b
• Runtime: Ollama

The critic step checks for things like:
• character consistency
• pacing problems
• repetitive dialogue
• plot drift
Then it sends rewrite instructions back to the writer.
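The critic loop looks conceptually like this — a toy sketch where the "critic" is a simple repetition check and the "rewrite" is a deduplication pass standing in for a rewrite prompt to the writer model. Function names and the max-rounds cap are assumptions for illustration:

```python
def critique(scene_text):
    """Toy critic: flag consecutive duplicate sentences as issues.
    The real critic is an LLM returning structured feedback."""
    sentences = [s.strip() for s in scene_text.split(".") if s.strip()]
    return [f"repetitive sentence: {a!r}"
            for a, b in zip(sentences, sentences[1:]) if a == b]

def revise(scene_text, max_rounds=3):
    """Critic/rewrite loop: keep sending the scene back until the
    critic finds nothing or we hit the round limit."""
    for _ in range(max_rounds):
        issues = critique(scene_text)
        if not issues:
            break
        # Stand-in for the writer model acting on rewrite instructions:
        # drop consecutive duplicate sentences.
        sentences = [s.strip() for s in scene_text.split(".") if s.strip()]
        deduped = [s for i, s in enumerate(sentences)
                   if i == 0 or s != sentences[i - 1]]
        scene_text = ". ".join(deduped) + "."
    return scene_text
```

Capping the number of rounds matters in practice, otherwise a picky critic and a stubborn writer can ping-pong forever.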
One thing I’m experimenting with now is adding emotion / tension curves per chapter, so the story has a measurable rise and fall rather than staying flat.
Example structure per chapter:
• tension
• conflict
• reveal
• shift
• release
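Concretely, each beat gets a target tension value so the curve is something the planner can actually hit (and you can plot it or inject it into each scene prompt). The numbers below are illustrative, not tuned:

```python
# Per-chapter tension curve: (beat name, target tension in [0, 1]).
CHAPTER_BEATS = [
    ("tension",  0.3),   # establish unease
    ("conflict", 0.6),   # stakes escalate
    ("reveal",   0.9),   # peak of the curve
    ("shift",    0.7),   # consequences reframe the situation
    ("release",  0.4),   # partial resolution, hook for next chapter
]

def curve_targets(beats):
    """Map each beat to its target tension, e.g. for embedding in
    the scene-writer prompt ('write this scene at tension 0.9')."""
    return {name: level for name, level in beats}

targets = curve_targets(CHAPTER_BEATS)
peak = max(CHAPTER_BEATS, key=lambda b: b[1])[0]
```

Giving the scene writer an explicit numeric target seems to work better than vague instructions like "make this scene tense".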
So far this has improved the output quite a lot compared to single-prompt generation.
I’m curious if anyone else here has experimented with multi-stage narrative pipelines like this, or has ideas for improving long-form generation.
Some things I’m considering next:
• persistent character memory
• story arc tracking (act 1 / 2 / 3)
• training a small LoRA on novels for better prose style
Would love to hear thoughts or suggestions.