I Built an AI Video Factory That Runs 24/7 — Fully Open Source

Dev.to / 5/9/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The article describes an open-source, fully automated YouTube “AI video factory” that runs 24/7 to discover trends, write scripts, generate TTS voiceovers, gather B-roll, render videos with FFmpeg, and upload to YouTube.
  • Its key differentiator is “Dual Parallel AI” scripting, where Qwen and Ollama run simultaneously and cross-score each other to reduce low-quality AI output before production.
  • The workflow is organized as a 12-stage pipeline covering trend detection, hook injection, memory to avoid repetition, web research, series generation, cinematic FFmpeg rendering, and thumbnail/CTR optimization.
  • The author explains the motivation—personal time spent editing and high costs of hiring editors, plus generic results from existing AI tools—and shares a quick-start setup requiring Ollama and FFmpeg.
  • The project is built with Python 3.11, FFmpeg, Ollama, Qwen, and F5-TTS, and the repository is released under the MIT License.

I wanted to share a project I've been building: Mesin Cuan Viral Architect — a fully automated YouTube content pipeline.

What it does:
Discovers trending topics, writes scripts with dual parallel AI, generates TTS voiceover, fetches B-roll footage, renders cinematic video via FFmpeg, and uploads to YouTube — all autonomous, 24/7.
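As a rough illustration of the render-and-mux step, here is a minimal Python sketch that muxes a TTS voiceover onto B-roll with FFmpeg. The file names and FFmpeg flags are assumptions for illustration, not the project's actual render engine:

```python
import subprocess

def build_render_cmd(broll: str, voice: str, out: str) -> list[str]:
    """FFmpeg command that muxes a voiceover track onto B-roll footage."""
    return [
        "ffmpeg", "-y",
        "-i", broll,        # input 0: B-roll video
        "-i", voice,        # input 1: TTS voiceover audio
        "-map", "0:v:0",    # take the video stream from input 0
        "-map", "1:a:0",    # take the audio stream from input 1
        "-c:v", "libx264",  # H.264 video, widely compatible
        "-c:a", "aac",      # AAC audio, YouTube-friendly
        "-shortest",        # stop when the shorter stream ends
        out,
    ]

def render(broll: str = "broll.mp4",
           voice: str = "voiceover.wav",
           out: str = "final.mp4") -> None:
    subprocess.run(build_render_cmd(broll, voice, out), check=True)
```

The real pipeline adds overlays, SFX mixing, and color grading on top of a basic mux like this.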

The secret sauce — Dual Parallel AI Scripting:
Most AI content tools use a single LLM → generic output. Mesin Cuan runs Qwen + Ollama simultaneously, then they cross-score each other to eliminate AI slop before production. The higher-scoring script wins.
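The cross-scoring idea can be sketched in a few lines of Python: each model drafts a script, each draft is then scored by the *other* model(s), and the draft with the higher cross-score wins. The scorer below is a stub (vocabulary richness) standing in for a real LLM judging call; all names here are illustrative, not the project's actual API:

```python
def cross_score(drafts: dict, score_fn):
    """Each draft is scored by every *other* author; highest average wins."""
    results = {}
    for author, text in drafts.items():
        scores = [score_fn(judge, text)
                  for judge in drafts if judge != author]
        results[author] = sum(scores) / len(scores)
    winner = max(results, key=results.get)
    return winner, results

# Stub scorer: in the real pipeline this would be an LLM call
# returning a quality score for the other model's draft.
def toy_scorer(judge: str, text: str) -> int:
    return len(set(text.split()))  # crude proxy: vocabulary richness

drafts = {
    "qwen": "a hook that promises a payoff and delivers it fast",
    "ollama": "a hook a hook a hook",
}
winner, scores = cross_score(drafts, toy_scorer)
print(winner)  # → qwen
```

The point of cross-scoring rather than self-scoring is that a model tends to rate its own output favorably; having each model judge the other's draft gives a cheap adversarial filter.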

12-engine pipeline:

  • Viral Loop Engine — real-time trend detection
  • Dual Parallel AI — Qwen + Ollama cross-scoring
  • Script Quality Scorer — multi-dimensional validation
  • Hook Engine — auto-injects high-retention openers
  • Memory Engine — never repeats content
  • Research Engine — web research before scripting
  • Series Engine — auto-generates multi-part content
  • Neon Visuals — cinematic FFmpeg rendering
  • Smart SFX Mixer — niche-aware sound effects
  • Thumbnail Intelligence — AI-driven CTR optimization
  • OAuth2 Analytics — per-channel retention dashboard
  • Pipeline Estimator — ETA prediction for batch renders
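A multi-engine pipeline like the one above is essentially a chain of stages passing a shared job object along. Here is a hedged sketch of that orchestration pattern; the stage functions and the `Job` structure are hypothetical stand-ins, not the repo's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    """Shared state threaded through every pipeline stage."""
    topic: str
    artifacts: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

def run_pipeline(job: Job, stages) -> Job:
    """Run each (name, fn) stage in order; each stage mutates the job."""
    for name, stage in stages:
        stage(job)
        job.log.append(name)
    return job

# Two toy stages standing in for the real engines.
def research(job: Job) -> None:
    job.artifacts["facts"] = [f"fact about {job.topic}"]

def script(job: Job) -> None:
    job.artifacts["script"] = " ".join(job.artifacts["facts"])

job = run_pipeline(Job("deep-sea horror"),
                   [("research", research), ("script", script)])
print(job.log)  # → ['research', 'script']
```

Keeping every stage behind the same `(job) -> None` interface is what makes it easy to add, reorder, or skip engines (e.g. a `--skip-qc` flag simply drops the quality-scoring stage from the list).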

Why I built this:
I was spending 8+ hours editing a single video. Hiring editors was too expensive, and existing AI tools produced generic content. So I built my own video factory.

Tech stack: Python 3.11 + FFmpeg + Ollama + Qwen + F5-TTS

Quick start (only needs Ollama + FFmpeg):

git clone https://github.com/algojogacor/mesin-cuan.git
cd mesin-cuan
python main.py --channel ch_id_horror --skip-qc

Repo: https://github.com/algojogacor/mesin-cuan

MIT License — free to use, modify, distribute. Built solo at 18.