CutClaw: Agentic Hours-Long Video Editing via Music Synchronization

arXiv cs.CV / 4/1/2026


Key Points

  • CutClaw is presented as an autonomous multi-agent framework that transforms hours of raw footage into short, meaningful videos with music-synchronized editing.
  • The system uses hierarchical multimodal decomposition to capture both fine-grained visual details and global structure while also processing audio for alignment.
  • A “Playwriter Agent” coordinates narrative consistency over long horizons by anchoring visual scenes to musical shifts.
  • “Editor” and “Reviewer” agents collaborate to optimize the final cut using aesthetic and semantic criteria, improving the selection of fine-grained clips.
  • Experiments on hours-long-to-short generation report significant gains over state-of-the-art baselines, and the authors provide code via GitHub.
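The pipeline described above (decompose footage and music, let a Playwriter anchor scenes to musical shifts, then let Editor and Reviewer agents refine the cut) can be sketched conceptually. The paper does not publish this interface; the classes, scores, and greedy selection below are illustrative assumptions, not CutClaw's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the paper's multimodal decomposition:
# each Shot is a fine-grained visual segment; beats are musical shift times.

@dataclass
class Shot:
    start: float      # seconds into the raw footage
    end: float
    aesthetic: float  # assumed per-shot aesthetic score in [0, 1]
    semantic: float   # assumed narrative-relevance score in [0, 1]

def playwriter(shots, beats):
    """Anchor candidate shots to each inter-beat interval (hypothetical)."""
    plan = []
    for b0, b1 in zip(beats, beats[1:]):
        duration = b1 - b0
        # Candidates: shots at least as long as the musical interval.
        candidates = [s for s in shots if (s.end - s.start) >= duration]
        plan.append((duration, candidates))
    return plan

def editor(plan, w_aesthetic=0.5, w_semantic=0.5):
    """Greedily pick the best-scoring clip per interval -- a stand-in for
    the Editor Agent's aesthetic/semantic selection."""
    cut = []
    for duration, candidates in plan:
        best = max(candidates,
                   key=lambda s: w_aesthetic * s.aesthetic + w_semantic * s.semantic)
        # Trim the chosen shot to exactly fill the musical interval.
        cut.append(Shot(best.start, best.start + duration,
                        best.aesthetic, best.semantic))
    return cut

def reviewer(cut, threshold=0.5):
    """Accept the cut only if every clip clears a combined-quality bar."""
    return all((s.aesthetic + s.semantic) / 2 >= threshold for s in cut)

shots = [Shot(0.0, 5.0, 0.9, 0.8), Shot(10.0, 12.5, 0.6, 0.7)]
beats = [0.0, 2.0, 4.0]  # two 2-second musical intervals
final_cut = editor(playwriter(shots, beats))
accepted = reviewer(final_cut)
```

In the real system these roles are played by MLLM agents reasoning over video and audio; the greedy per-interval scoring here merely illustrates the division of labor among the three agent roles.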

Abstract

Editing video content with audio alignment has become a form of digital, human-made art on today's social media. However, the time-consuming and repetitive nature of manual video editing has long been a challenge for filmmakers and professional content creators alike. In this paper, we introduce CutClaw, an autonomous multi-agent framework that leverages multiple Multimodal Large Language Models (MLLMs) as an agent system to edit hours-long raw footage into meaningful short videos. It produces videos with synchronized music that follow user instructions and have a visually appealing appearance. Specifically, our approach begins with a hierarchical multimodal decomposition that captures both fine-grained details and global structure across the visual and audio footage. Then, to ensure narrative consistency, a Playwriter Agent orchestrates the overall storytelling flow and structures the long-term narrative, anchoring visual scenes to musical shifts. Finally, to construct the short edited video, Editor and Reviewer Agents collaboratively optimize the final cut by selecting fine-grained visual content based on rigorous aesthetic and semantic criteria. Detailed experiments demonstrate that CutClaw significantly outperforms state-of-the-art baselines in generating high-quality, rhythm-aligned videos. The code is available at: https://github.com/GVCLab/CutClaw.