@misc{bai2019equilibrium,
abstract = {We present a new approach to modeling sequential data: the deep equilibrium
model (DEQ). Motivated by an observation that the hidden layers of many
existing deep sequence models converge towards some fixed point, we propose the
DEQ approach that directly finds these equilibrium points via root-finding.
Such a method is equivalent to running an infinite depth (weight-tied)
feedforward network, but has the notable advantage that we can analytically
backpropagate through the equilibrium point using implicit differentiation.
Using this approach, training and prediction in these networks require only
constant memory, regardless of the effective "depth" of the network. We
demonstrate how DEQs can be applied to two state-of-the-art deep sequence
models: self-attention transformers and trellis networks. On large-scale
language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs
1) often improve performance over these state-of-the-art models (for similar
parameter counts); 2) have similar computational requirements to existing
models; and 3) vastly reduce memory consumption (often the bottleneck for
training large sequence models), demonstrating up to an 88% memory reduction in
our experiments. The code is available at https://github.com/locuslab/deq.},
added-at = {2021-02-24T12:23:48.000+0100},
author = {Bai, Shaojie and Kolter, J. Zico and Koltun, Vladlen},
biburl = {https://www.bibsonomy.org/bibtex/295192d754b793d6f7196fce1e1b1ee2d/adulny},
description = {1909.01377.pdf},
interhash = {544502ee091d70bebd82574e19bc9f47},
intrahash = {95192d754b793d6f7196fce1e1b1ee2d},
keywords = {deep_implicit_learning equilibrium_models from:adulny implicit implicit_function implicit_layer implicit_model thema:ba thema:eqilibrium},
note = {arXiv:1909.01377; NeurIPS 2019 Spotlight Oral},
timestamp = {2022-04-19T12:50:32.000+0200},
title = {Deep Equilibrium Models},
url = {http://arxiv.org/abs/1909.01377},
year = 2019
}