Abstract
Knowledge Distillation (KD) is a promising technique for reducing the high
computational demand of large language models (LLMs). However, previous KD
methods have mainly been applied to white-box classification models or to
training small models that imitate black-box model APIs such as ChatGPT.
How to effectively distill knowledge from white-box generative LLMs is still
under-explored, and it becomes increasingly important with the rapid growth
of LLMs. In this work, we propose MiniLLM, which distills smaller language
models from larger generative language models. We first replace the forward
Kullback-Leibler divergence (KLD) objective in standard KD approaches with
reverse KLD, which is more suitable for KD on generative language models, to
prevent the student model from overestimating the low-probability regions of
the teacher distribution. We then derive an effective optimization approach
to learn this objective. Extensive experiments in the instruction-following
setting show that MiniLLM models generate more precise responses with higher
overall quality, lower exposure bias, better calibration, and better long-text
generation performance. Our method also scales across model families with
120M to 13B parameters. We will release our code and model checkpoints at
https://aka.ms/MiniLLM.
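
To make the contrast concrete, below is a minimal token-level sketch of the reverse KLD objective KL(q_student || p_teacher), as opposed to the forward KLD KL(p_teacher || q_student) used in standard KD. The function name, tensor shapes, and the use of PyTorch are illustrative assumptions; the paper's actual optimization samples sequences from the student and uses a more involved derivation rather than this simple per-token form.

import torch
import torch.nn.functional as F

def reverse_kld_loss(student_logits: torch.Tensor,
                     teacher_logits: torch.Tensor) -> torch.Tensor:
    # Token-level reverse KLD: KL(q_student || p_teacher).
    # Both tensors have shape (batch, seq_len, vocab_size).
    log_q = F.log_softmax(student_logits, dim=-1)            # student log-probs
    log_p = F.log_softmax(teacher_logits.detach(), dim=-1)   # teacher log-probs (no grad)
    q = log_q.exp()
    # Per token: sum over the vocabulary of q(v) * (log q(v) - log p(v)),
    # then average over all tokens in the batch.
    kl_per_token = (q * (log_q - log_p)).sum(dim=-1)
    return kl_per_token.mean()

Because the expectation is taken under the student distribution q, this objective is mode-seeking: the student is not penalized for assigning low probability where the teacher also does, which matches the motivation above of preventing the student from overestimating the teacher's low-probability regions.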