ContentV: Efficient Training of Video Generation Models with Limited Compute

Bytedance Douyin Content Team

ContentV is an 8B-parameter text-to-video model that generates high-quality videos across multiple resolutions and durations from text prompts.

Abstract

Recent advances in video generation demand increasingly efficient training recipes to mitigate escalating computational costs. In this report, we present ContentV, an 8B-parameter text-to-video model that achieves state-of-the-art performance (85.14 on VBench) after training on 256×64GB Neural Processing Units (NPUs) for merely four weeks. ContentV generates diverse, high-quality videos across multiple resolutions and durations from text prompts, enabled by three key innovations: (1) A minimalist architecture that maximizes reuse of pre-trained image generation models for video generation; (2) A systematic multi-stage training strategy leveraging flow matching for enhanced efficiency; and (3) A cost-effective reinforcement learning with human feedback framework that improves generation quality without requiring additional human annotations.
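For intuition on the flow-matching objective mentioned above, the sketch below shows what a single flow-matching training step for a text-to-video latent diffusion model might look like. The function, tensor shapes, and variable names are illustrative assumptions, not the actual ContentV implementation; the paper describes the real architecture, conditioning, and multi-stage schedule.

import torch

def flow_matching_step(model, latents, text_emb):
    """One illustrative flow-matching training step (not ContentV's exact code).

    latents:  (B, C, T, H, W) video latents from a pre-trained VAE
    text_emb: (B, L, D) text-encoder embeddings used for conditioning
    """
    noise = torch.randn_like(latents)                        # x_1 ~ N(0, I)
    t = torch.rand(latents.shape[0], device=latents.device)  # t ~ U(0, 1)
    t_ = t.view(-1, 1, 1, 1, 1)

    # Linear interpolation between data and noise: x_t = (1 - t) * x_0 + t * x_1
    x_t = (1.0 - t_) * latents + t_ * noise

    # The network is trained to regress the constant velocity v = x_1 - x_0
    target = noise - latents
    pred = model(x_t, t, text_emb)

    return torch.mean((pred - target) ** 2)  # MSE flow-matching loss

In practice this loss would be minimized over video-text pairs at progressively higher resolutions and durations, following the multi-stage strategy outlined in the abstract.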

Human Videos

Animal Videos

Scene Videos

Creative & Unusual Videos

Portrait Videos

More Samples

Quality & Consistency

ContentV ensures high temporal consistency and visual quality across generated videos, maintaining coherent motion and realistic appearance throughout the sequence.

Diverse Content

ContentV generates diverse content across the categories shown above, including humans, animals, scenes, portraits, and creative or unusual concepts, demonstrating its versatility in video synthesis.

BibTeX

@article{contentv2025,
  title={ContentV: Efficient Training of Video Generation Models with Limited Compute}, 
  author={Wenfeng Lin and Renjie Chen and Boyuan Liu and Shiyue Yan and Ruoyu Feng and Jiangchuan Wei and Yichen Zhang and Yimeng Zhou and Chao Feng and Jiao Ran and Qi Wu and Zuotao Liu and Mingyu Guo},
  journal={arXiv preprint arXiv:2506.05343},
  year={2025}
}