MicroCinema:

A Divide-and-Conquer Approach for Text-to-Video Generation

Yanhui Wang*, Jianmin Bao*, Wenming Weng, Ruoyu Feng, Dacheng Yin, Tao Yang, Jingxu Zhang, Qi Dai, Zhiyuan Zhao, Chunyu Wang, Kai Qiu, Yuhui Yuan, Chuanxin Tang, Xiaoyan Sun, Chong Luo, Baining Guo

¹University of Science and Technology of China, ²Microsoft Research Asia, ³Xi'an Jiaotong University

*Equal contribution. This work was done during an internship at MSRA. Project lead.

Abstract

We present MicroCinema, a straightforward yet effective framework for high-quality and coherent text-to-video generation. Unlike existing approaches that align text prompts with video directly, MicroCinema introduces a Divide-and-Conquer strategy that splits text-to-video generation into a two-stage process: text-to-image generation and image&text-to-video generation. This strategy offers two significant advantages. a) It allows us to take full advantage of recent advances in text-to-image models, such as Stable Diffusion, Midjourney, and DALL·E, to generate photorealistic and highly detailed images. b) With the generated image as a reference, the model can devote less attention to fine-grained appearance details and prioritize efficient learning of motion dynamics.
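To make the two-stage strategy concrete, the sketch below wires the stages together in Python. It assumes an off-the-shelf Stable Diffusion checkpoint (via the diffusers library) for the text-to-image stage and takes the image&text-to-video generator as a callable, since MicroCinema's second-stage model is not a released API; the names divide_and_conquer_t2v and image_text_to_video are illustrative placeholders.

```python
# Minimal sketch of the divide-and-conquer pipeline (illustrative names).
from typing import Callable

import torch
from PIL import Image
from diffusers import StableDiffusionPipeline


def divide_and_conquer_t2v(
    prompt: str,
    image_text_to_video: Callable[[Image.Image, str], torch.Tensor],
) -> torch.Tensor:
    # Stage 1: text -> image, using any strong pretrained T2I model.
    t2i = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    key_frame = t2i(prompt).images[0]  # photorealistic, highly detailed key frame

    # Stage 2: image&text -> video; this stage only has to model motion
    # around the appearance fixed by the key frame.
    return image_text_to_video(key_frame, prompt)
```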

To implement this strategy effectively, we introduce two core designs. First, we propose the Appearance Injection Network, which strengthens preservation of the given image's appearance. Second, we introduce the Appearance Noise Prior, a novel mechanism for retaining the capabilities of pre-trained 2D diffusion models. Together, these designs enable MicroCinema to generate high-quality videos with precise motion, guided by the provided text prompts. Extensive experiments demonstrate the superiority of the proposed framework. Concretely, MicroCinema achieves a state-of-the-art zero-shot FVD of 342.86 on UCF-101 and 377.40 on MSR-VTT.
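To give a flavor of how an appearance-injection branch can feed the given image into a video diffusion backbone, the toy module below encodes the conditional image and adds zero-initialized projections of its features to the UNet's hidden states, broadcast over the frame axis. This follows the generic ControlNet-style additive-fusion pattern and is only a sketch under that assumption; the actual AppearNet architecture is specified in the paper.

```python
# Toy appearance-injection branch (ControlNet-style additive fusion);
# an illustrative assumption, not the paper's exact AppearNet design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AppearanceInjection(nn.Module):
    def __init__(self, in_ch: int = 4, channels=(320, 640, 1280)):
        super().__init__()
        # One small conv stage per UNet resolution level.
        self.stages = nn.ModuleList()
        prev = in_ch
        for ch in channels:
            self.stages.append(
                nn.Sequential(nn.Conv2d(prev, ch, 3, stride=2, padding=1), nn.SiLU())
            )
            prev = ch
        # Zero-initialized 1x1 projections so training starts from the
        # unmodified behaviour of the pretrained UNet.
        self.zero_proj = nn.ModuleList(nn.Conv2d(ch, ch, 1) for ch in channels)
        for proj in self.zero_proj:
            nn.init.zeros_(proj.weight)
            nn.init.zeros_(proj.bias)

    def forward(self, image_latent, unet_feats):
        # image_latent: (B, C, H, W); unet_feats: list of (B, C_l, T, H_l, W_l).
        fused, h = [], image_latent
        for stage, proj, feat in zip(self.stages, self.zero_proj, unet_feats):
            h = stage(h)
            inj = F.interpolate(proj(h), size=feat.shape[-2:], mode="bilinear")
            fused.append(feat + inj.unsqueeze(2))  # broadcast over the frame axis
        return fused
```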

Overall architecture of MicroCinema


Overall architecture of our proposed diffusion-based image&text-to-video model in MicroCinema. The proposed AppearNet is designed to furnish appearance information for video generation. In addition, we introduce the Appearance Noise Prior, which adds an appropriate amount of the center frame to the noise, enhancing the model's ability to generate videos consistent with the appearance of the conditional image.
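As a rough sketch of the Appearance Noise Prior described above, the helper below biases the initial Gaussian noise toward the conditional image by adding a scaled copy of the center-frame latent across all frames. The mixing weight lam and the point in the sampling schedule where the prior is applied are illustrative assumptions; the precise formulation is given in the paper.

```python
# Hedged sketch of an appearance noise prior: Gaussian noise shifted toward
# the center-frame latent. The weight `lam` is illustrative, not the
# paper's setting.
import torch


def appearance_noise_prior(center_latent: torch.Tensor,
                           num_frames: int,
                           lam: float = 0.1) -> torch.Tensor:
    # center_latent: (B, C, H, W) latent of the generated key frame.
    # Returns (B, C, T, H, W) starting noise that is already correlated with
    # the appearance of the conditional image.
    b, c, h, w = center_latent.shape
    eps = torch.randn(b, c, num_frames, h, w,
                      device=center_latent.device, dtype=center_latent.dtype)
    return eps + lam * center_latent.unsqueeze(2)  # broadcast over the frame axis
```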

Experimental Results

Comparison of zero-shot text-to-video generation performance on UCF-101 and MSR-VTT
Methods              | UCF-101 FVD ↓ | UCF-101 IS ↑ | MSR-VTT FVD ↓ | MSR-VTT CLIPSIM ↑

Using WebVid-10M and additional data for training:
Make-A-Video         | 367.23        | 33.00        | -             | 0.3049
VideoFactory         | 410.00        | -            | -             | 0.3005
ModelScope           | 410.00        | -            | 550.00        | 0.2930
LaVie                | 526.30        | -            | -             | 0.2949
VidRD                | 363.19        | 39.37        | -             | -
PYoCo                | 355.19        | 47.76        | -             | 0.3204

Using WebVid-10M only for training:
LVDM                 | 641.80        | -            | 742.00        | 0.2381
CogVideo             | 701.59        | 25.27        | 1294.00       | 0.2631
MagicVideo           | 699.00        | -            | 998.00        | -
Video LDM            | 550.61        | 33.45        | -             | 0.2929
VideoComposer        | -             | -            | 580.00        | 0.2932
VideoFusion          | 639.90        | 17.49        | 581.00        | 0.2795
SimDA                | -             | -            | 456.00        | 0.2945
Show-1               | 394.46        | 35.42        | 538.00        | 0.3072
MicroCinema (Ours)   | 342.86        | 37.46        | 377.40        | 0.2967