Abstract
Building accurate language models that capture meaningful long-term
dependencies is a core challenge in natural language processing. Towards this
end, we present a calibration-based approach to measure long-term discrepancies
between a generative sequence model and the true distribution, and use these
discrepancies to improve the model. Empirically, we show that state-of-the-art
language models, including LSTMs and Transformers, are miscalibrated:
the entropy rates of their generations drift dramatically upward over time. We
then provide provable methods to mitigate this phenomenon. Furthermore, we show
how this calibration-based approach can also be used to measure the amount of
memory that language models use for prediction.
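As a rough illustration of the entropy-rate drift described above, the following minimal sketch (not the paper's method verbatim) samples sequences from an autoregressive model and records the entropy of its predictive distribution at each position; here next_token_dist is a hypothetical placeholder for any language model returning a probability vector over the vocabulary.

```python
import numpy as np

def next_token_dist(prefix, vocab_size=50):
    # Hypothetical placeholder model: a fixed pseudo-random distribution per
    # prefix. Replace with a real LM's softmax output in practice.
    rng = np.random.default_rng(abs(hash(tuple(prefix))) % (2**32))
    logits = rng.normal(size=vocab_size)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def entropy(p):
    # Shannon entropy (in nats) of a probability vector.
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def generation_entropy_profile(horizon=200, n_samples=20, seed=0):
    # Average the predictive entropy at each generation step over several
    # sampled sequences; an upward trend in this curve over time is the
    # miscalibration phenomenon discussed in the abstract.
    rng = np.random.default_rng(seed)
    profile = np.zeros(horizon)
    for _ in range(n_samples):
        prefix = []
        for t in range(horizon):
            p = next_token_dist(prefix)
            profile[t] += entropy(p)
            prefix.append(int(rng.choice(len(p), p=p)))
    return profile / n_samples

if __name__ == "__main__":
    curve = generation_entropy_profile()
    print("mean entropy at t=1: %.3f nats, at t=200: %.3f nats"
          % (curve[0], curve[-1]))
```

With a real model plugged in, comparing this curve against the entropy rate of held-out human text gives a simple diagnostic for long-term miscalibration.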