@misc{borsos2022audiolm,
abstract = {We introduce AudioLM, a framework for high-quality audio generation with
long-term consistency. AudioLM maps the input audio to a sequence of discrete
tokens and casts audio generation as a language modeling task in this
representation space. We show how existing audio tokenizers provide different
trade-offs between reconstruction quality and long-term structure, and we
propose a hybrid tokenization scheme to achieve both objectives. Namely, we
leverage the discretized activations of a masked language model pre-trained on
audio to capture long-term structure and the discrete codes produced by a
neural audio codec to achieve high-quality synthesis. By training on large
corpora of raw audio waveforms, AudioLM learns to generate natural and coherent
continuations given short prompts. When trained on speech, and without any
transcript or annotation, AudioLM generates syntactically and semantically
plausible speech continuations while also maintaining speaker identity and
prosody for unseen speakers. Furthermore, we demonstrate how our approach
extends beyond speech by generating coherent piano music continuations, despite
being trained without any symbolic representation of music.},
added-at = {2023-06-13T13:11:45.000+0200},
author = {Borsos, Zalán and Marinier, Raphaël and Vincent, Damien and Kharitonov, Eugene and Pietquin, Olivier and Sharifi, Matt and Teboul, Olivier and Grangier, David and Tagliasacchi, Marco and Zeghidour, Neil},
biburl = {https://www.bibsonomy.org/bibtex/2e4cb2f62b378c7b24aa548d7e2cdff3a/martinr},
description = {[2209.03143] AudioLM: a Language Modeling Approach to Audio Generation},
keywords = {audio dmir-readinggroup idea:bee_audio_llm llm},
note = {cite arxiv:2209.03143},
timestamp = {2023-06-13T13:11:45.000+0200},
title = {AudioLM: a Language Modeling Approach to Audio Generation},
url = {http://arxiv.org/abs/2209.03143},
year = 2022
}