@misc{tian2019latent,
abstract = {End-to-end optimization has achieved state-of-the-art performance on many
specific problems, but there is no straight-forward way to combine pretrained
models for new problems. Here, we explore improving modularity by learning a
post-hoc interface between two existing models to solve a new task.
Specifically, we take inspiration from neural machine translation, and cast the
challenging problem of cross-modal domain transfer as unsupervised translation
between the latent spaces of pretrained deep generative models. By abstracting
away the data representation, we demonstrate that it is possible to transfer
across different modalities (e.g., image-to-audio) and even different types of
generative models (e.g., VAE-to-GAN). We compare to state-of-the-art techniques
and find that a straight-forward variational autoencoder is able to best bridge
the two generative models through learning a shared latent space. We can
further impose supervised alignment of attributes in both domains with a
classifier in the shared latent space. Through qualitative and quantitative
evaluations, we demonstrate that locality and semantic alignment are preserved
through the transfer process, as indicated by high transfer accuracies and
smooth interpolations within a class. Finally, we show this modular structure
speeds up training of new interface models by several orders of magnitude by
decoupling it from expensive retraining of base generative models.},
added-at = {2019-02-26T15:39:24.000+0100},
author = {Tian, Yingtao and Engel, Jesse},
biburl = {https://www.bibsonomy.org/bibtex/28e4bda031a9f8ea7ebfa1d4921fde927/bechr7},
description = {Latent Translation: Crossing Modalities by Bridging Generative Models},
interhash = {edb6007ca699ee4a6eebd2b99be57382},
intrahash = {8e4bda031a9f8ea7ebfa1d4921fde927},
keywords = {dl gan},
note = {cite arxiv:1902.08261},
timestamp = {2019-02-26T15:39:24.000+0100},
title = {Latent Translation: Crossing Modalities by Bridging Generative Models},
url = {http://arxiv.org/abs/1902.08261},
year = 2019
}