
I’m building a local AI system that generates full novels

Reddit r/LocalLLaMA / 3/13/2026

💬 Opinion · Developer Stack & Infrastructure · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The post describes building a local AI system that generates long-form novels using a multi-stage pipeline (world/setting, character architect, story synopsis, chapter planner, scene planner, scene writer, critic, rewrite, continuity memory) instead of relying on a single prompt.
  • The goal is to mimic a writers' room to improve character consistency, pacing, and overall narrative structure.
  • The current setup uses locally hosted models—qwen3.5:9b for writing and qwen3.5:27b for critique—with Ollama as the runtime, and the critic assesses issues like character consistency, pacing problems, repetitive dialogue, and plot drift.
  • The author is experimenting with per-chapter emotion/tension curves to create measurable rise and fall, including a structure of tension, conflict, reveal, shift, and release.
  • Future directions include persistent character memory, act-based story arcs, and training a small LoRA on novels to improve prose style, with the author seeking feedback from the community.

Hi everyone,

I’ve been experimenting with building a local book-generation pipeline that tries to solve the common problem with AI-generated novels: they often feel repetitive, lose track of characters, and have no real narrative structure.

Instead of just prompting a model to “write a book”, the system breaks the process into multiple stages.

Current pipeline looks roughly like this:

INPUT
→ World / setting generator
→ Character architect
→ Story synopsis
→ Chapter planner
→ Scene planner
→ Scene writer
→ Critic
→ Rewrite
→ Continuity memory

Each step produces structured outputs that the next step consumes.

The goal is to mimic how a writers’ room might structure a story rather than letting the model improvise everything.
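The staged hand-off can be sketched as a chain of functions, each consuming the structured state the previous stage produced. This is only an illustration of the data flow: the stage names come from the pipeline above, but the bodies here are placeholder stubs, and in the real system each stage would prompt a local model and parse its response.

```python
# Sketch of a staged pipeline: each stage takes the accumulated state dict
# and returns it enriched with its own structured output. The bodies are
# placeholders standing in for actual model calls.

def world_generator(state: dict) -> dict:
    # Placeholder: would prompt the writer model for a setting bible.
    return {**state, "setting": "a drowned coastal city"}

def character_architect(state: dict) -> dict:
    # Placeholder: would derive a cast consistent with the setting.
    return {**state, "characters": ["Mara the salvager", "Ilo the cartographer"]}

def story_synopsis(state: dict) -> dict:
    # Placeholder: would condense setting + cast into a synopsis.
    return {**state, "synopsis": "Mara finds a map that predates the flood."}

# Later stages (chapter planner, scene planner, writer, critic, rewrite,
# continuity memory) would slot into the same list.
PIPELINE = [world_generator, character_architect, story_synopsis]

def run_pipeline(premise: str) -> dict:
    state = {"premise": premise}
    for stage in PIPELINE:
        state = stage(state)
    return state
```

Because every stage sees the full accumulated state, later stages can be checked against earlier commitments instead of improvising from scratch.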

Current stack:

• Writer model: qwen3.5:9b
• Critic / editor: qwen3.5:27b
• Runtime: Ollama
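For anyone who hasn't driven Ollama programmatically: it exposes a local HTTP API (by default on port 11434), and a stage like the critic is just a POST to `/api/generate`. The sketch below only builds the request so it can be inspected without a running server; the prompt text and model name are illustrative.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_critique_request(model: str, scene_text: str) -> urllib.request.Request:
    """Build (but do not send) a request asking the critic model for notes."""
    payload = {
        "model": model,  # e.g. "qwen3.5:27b" for the critic role
        "prompt": f"Critique this scene for consistency and pacing:\n{scene_text}",
        "stream": False,  # ask for one complete JSON response
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
```

Sending the request with `urllib.request.urlopen(...)` requires `ollama serve` to be running locally with the model pulled.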

The critic step checks for things like:

• character consistency
• pacing problems
• repetitive dialogue
• plot drift

Then it sends rewrite instructions back to the writer.
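Some of these checks can also be pre-screened deterministically before burning critic-model tokens. As a toy example of one check (repetitive dialogue), here's a heuristic that flags quoted lines recurring within a scene; the real system delegates this judgment to the 27b critic, and this function name is my own invention:

```python
import re
from collections import Counter

def find_repeated_dialogue(scene: str, threshold: int = 2) -> list[str]:
    """Flag quoted dialogue lines that appear `threshold` or more times."""
    lines = re.findall(r'"([^"]+)"', scene)  # extract double-quoted spans
    counts = Counter(line.strip().lower() for line in lines)
    return [line for line, n in counts.items() if n >= threshold]
```

Cheap passes like this can filter obvious problems so the critic model's rewrite instructions focus on subtler issues such as plot drift.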

One thing I’m experimenting with now is adding emotion / tension curves per chapter, so the story has a measurable rise and fall rather than staying flat.

Example structure per chapter:

tension → conflict → reveal → shift → release
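One way to make that rise and fall measurable is to assign each beat a numeric tension target and validate that the resulting curve actually climbs to a peak and descends. The specific numbers below are illustrative, not from the post:

```python
# Illustrative tension targets per beat (0.0 = calm, 1.0 = peak).
BEAT_TENSION = {
    "tension": 0.4,   # establish unease
    "conflict": 0.7,  # escalate stakes
    "reveal": 0.9,    # peak of the chapter
    "shift": 0.6,     # consequences settle in
    "release": 0.3,   # come back down
}

def chapter_curve(beats=("tension", "conflict", "reveal", "shift", "release")):
    """Return the target tension value for each beat, in order."""
    return [BEAT_TENSION[b] for b in beats]

def rises_and_falls(curve):
    """True if the curve strictly climbs to a single peak, then descends."""
    peak = curve.index(max(curve))
    climbs = all(a < b for a, b in zip(curve[:peak], curve[1 : peak + 1]))
    falls = all(a > b for a, b in zip(curve[peak:], curve[peak + 1 :]))
    return climbs and falls
```

A planner could reject or reorder a chapter outline whose predicted curve fails this shape check, instead of discovering the flatness after the prose is written.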

So far this has already improved the output quite a lot compared to single-prompt generation.

I’m curious if anyone else here has experimented with multi-stage narrative pipelines like this, or has ideas for improving long-form generation.

Some things I’m considering next:

• persistent character memory
• story arc tracking (act 1 / 2 / 3)
• training a small LoRA on novels for better prose style
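For the persistent character memory idea, a minimal starting point is a per-character fact store persisted to disk between chapters, whose recalled facts get injected into the scene writer's prompt. This is a sketch under my own assumptions (JSON file, list of facts per character), not the post's design:

```python
import json
from pathlib import Path

class CharacterMemory:
    """Toy persistent store of per-character facts, kept across chapters."""

    def __init__(self, path: str = "character_memory.json"):
        self.path = Path(path)
        # Reload any facts saved by a previous run.
        self.facts: dict[str, list[str]] = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def remember(self, character: str, fact: str) -> None:
        self.facts.setdefault(character, []).append(fact)

    def recall(self, character: str) -> list[str]:
        # These facts would be prepended to the scene writer's prompt.
        return self.facts.get(character, [])

    def save(self) -> None:
        self.path.write_text(json.dumps(self.facts, indent=2))
```

A real version would likely need summarization or retrieval as the fact list grows past what fits in the writer model's context.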

Would love to hear thoughts or suggestions.

submitted by /u/Worldly_Code_4146