ModelGrow: Continual Text-to-Video Pre-training with Model Expansion and Language Understanding Enhancement

Hong Kong University of Science and Technology¹, Renmin University of China²
*Indicates Equal Contribution

†Indicates Corresponding Authors

Abstract

Text-to-video (T2V) generation has gained significant attention recently. However, the cost of training a T2V model from scratch remains persistently high, and there is considerable room for improving generation performance, especially under limited computation resources. This work explores continual general pre-training of text-to-video models, enabling a model to "grow" its abilities on top of a pre-trained foundation, analogous to how humans acquire new knowledge from past experience. Continual pre-training techniques for T2V generation have not yet been studied extensively. In this work, we take an initial step toward exploring this task systematically and propose ModelGrow. Specifically, we break the task into two key aspects: increasing model capacity and improving semantic understanding. For model capacity, we introduce several novel techniques to expand the model size, enabling it to store new knowledge and improve generation performance. For semantic understanding, we propose a method that leverages large language models as advanced text encoders, integrating them into T2V models to enhance language comprehension and guide generation according to detailed prompts. This approach enables the model to achieve better semantic alignment, particularly in response to complex user prompts. Extensive experiments demonstrate the effectiveness of our method across various metrics. The source code and models of ModelGrow will be made publicly available.
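To make the model-expansion aspect concrete, below is a minimal sketch of one common expansion strategy: depth expansion by duplicating pre-trained transformer blocks, so the enlarged backbone starts from learned weights rather than random initialization. The module names, the growth factor, and the choice of depth (rather than width) expansion are illustrative assumptions, not the paper's exact recipe.

    # Minimal sketch (assumption, not ModelGrow's exact method): grow a pre-trained
    # transformer backbone by duplicating its blocks, then continue pre-training.
    import copy
    import torch.nn as nn

    def expand_depth(pretrained_blocks: nn.ModuleList, growth_factor: int = 2) -> nn.ModuleList:
        """Duplicate each pre-trained block so the expanded model inherits learned weights."""
        expanded = []
        for block in pretrained_blocks:
            expanded.append(block)                      # original block, weights kept as-is
            for _ in range(growth_factor - 1):
                expanded.append(copy.deepcopy(block))   # duplicated block, refined during continual pre-training
        return nn.ModuleList(expanded)

    # Hypothetical usage with a 0.7B backbone exposing a `.blocks` ModuleList:
    # model.blocks = expand_depth(model.blocks, growth_factor=2)   # roughly doubles parameters (~1.4B)
    # The expanded model is then continually pre-trained on text-video data.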
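Similarly, the sketch below illustrates one way to plug a large language model in as the text encoder: the LLM is kept frozen, its last hidden states encode the prompt, and a small trainable projection maps them into the cross-attention conditioning space of the video backbone. The specific checkpoint name, the 1152-dimensional conditioning size, and the frozen-LLM-plus-projector design are assumptions for illustration and may differ from the released model.

    # Minimal sketch (assumptions noted above): frozen LLM as text encoder
    # plus a trainable projection into the T2V model's conditioning space.
    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    LLM_NAME = "meta-llama/Llama-2-7b-hf"   # placeholder checkpoint; the paper's LLM may differ

    tokenizer = AutoTokenizer.from_pretrained(LLM_NAME)
    llm = AutoModel.from_pretrained(LLM_NAME, torch_dtype=torch.float16).eval()
    for p in llm.parameters():
        p.requires_grad_(False)             # keep the LLM frozen; only the projector is trained

    proj = nn.Linear(llm.config.hidden_size, 1152)   # 1152 = assumed cross-attention width of the T2V backbone

    @torch.no_grad()
    def encode_prompt(prompt: str) -> torch.Tensor:
        tokens = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=256)
        return llm(**tokens).last_hidden_state        # (1, seq_len, hidden_size)

    # text_context = proj(encode_prompt("A cat wearing sunglasses at a pool.").float())
    # `text_context` would serve as the cross-attention context in place of the original text embeddings.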

Video Presentation

Main Result Comparisons

Each example compares four models side by side: Base Model, LoRA-0.7B, Expansion-1.4B, and Expansion-LLM-1.4B, on the following prompts:

    Black swan gliding past a green lily pad.
    Girl with curly hair riding a red bike.
    A cat wearing sunglasses at a pool.
    Pyramidal tent sheltering a round grill.

BibTeX

@article{rao2024modelgrowcontinualtexttovideopretraining,
  title={ModelGrow: Continual Text-to-Video Pre-training with Model Expansion and Language Understanding Enhancement},
  author={Zhefan Rao and Liya Ji and Yazhou Xing and Runtao Liu and Zhaoyang Liu and Jiaxin Xie and Ziqiao Peng and Yingqing He and Qifeng Chen},
  year={2024},
  eprint={2412.18966},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.18966},
}