WAVE: Weight Template for Adaptive Initialization of Variable-sized Models

F Feng, Y Xie, J Wang, X Geng - arXiv preprint arXiv:2406.17503, 2024 - arxiv.org
The expansion of model parameters underscores the significance of pre-trained models; however, the constraints encountered during model deployment necessitate models of variable sizes. Consequently, the traditional pre-training and fine-tuning paradigm fails to address the initialization problem when target models are incompatible with pre-trained models. We tackle this issue from a multitasking perspective and introduce \textbf{WAVE}, which incorporates a set of shared \textbf{W}eight templates for \textbf{A}daptive initialization of \textbf{V}ariable-siz\textbf{E}d Models. During initialization, each target model instantiates weight scalers tailored to its size; these scalers are sufficient to learn, from a limited amount of data, the connection rules that combine the weight templates via the Kronecker product. To construct the weight templates, WAVE employs the \textit{Learngene} framework, which structurally condenses common knowledge from ancestry models into weight templates (the learngenes) through knowledge distillation. This process integrates the pre-trained models' knowledge into structured knowledge according to the rules of the weight templates. We provide a comprehensive benchmark for the learngenes, and extensive experiments demonstrate the efficacy of WAVE. The results show that WAVE achieves state-of-the-art performance when initializing models of various depths and widths, and even outperforms direct pre-training of entire models, particularly for smaller models, while saving computational and storage resources. WAVE simultaneously achieves the most efficient knowledge transfer across a series of datasets, achieving average improvements of 1.8\% and 1.2\% on 7 downstream datasets.
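To make the initialization scheme described in the abstract concrete, the sketch below illustrates one plausible reading of the connection rule: a target layer's weight is assembled by summing Kronecker products of shared weight templates with model-specific weight scalers. This is a minimal illustration, not the authors' released implementation; the function name `build_weight`, the number and shapes of templates, and the summation-of-Kronecker-products rule are assumptions made for the example.

```python
import torch

def build_weight(templates, scalers):
    """Combine shared weight templates with per-model weight scalers.

    templates: list of (t_rows, t_cols) tensors shared across model sizes.
    scalers:   list of (s_rows, s_cols) tensors learned per target model;
               each Kronecker product yields a (s_rows*t_rows, s_cols*t_cols)
               block of the same shape, and the blocks are summed.
    """
    weight = torch.zeros(
        scalers[0].shape[0] * templates[0].shape[0],
        scalers[0].shape[1] * templates[0].shape[1],
    )
    for template, scaler in zip(templates, scalers):
        # Kronecker product connects the small template to the target layer size.
        weight += torch.kron(scaler, template)
    return weight

# Example: four shared 64x64 templates expanded to a 384x384 layer of a target model.
templates = [torch.randn(64, 64) for _ in range(4)]                      # shared learngenes
scalers = [torch.randn(6, 6, requires_grad=True) for _ in range(4)]      # tuned on limited data
layer_weight = build_weight(templates, scalers)
print(layer_weight.shape)  # torch.Size([384, 384])
```

Because only the small scalers depend on the target size, models of different depths and widths can reuse the same templates and adapt them from a limited amount of data, which matches the resource savings the abstract claims.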