@misc{weder2020neuralfusion,
abstract = {We present a novel online depth map fusion approach that learns depth map
aggregation in a latent feature space. While previous fusion methods use an
explicit scene representation like signed distance functions (SDFs), we propose
a learned feature representation for the fusion. The key idea is a separation
between the scene representation used for the fusion and the output scene
representation, via an additional translator network. Our neural network
architecture consists of two main parts: a depth and feature fusion
sub-network, which is followed by a translator sub-network to produce the final
surface representation (e.g. TSDF) for visualization or other tasks. Our
approach is an online process, handles high noise levels, and is particularly
able to deal with gross outliers common for photometric stereo-based depth
maps. Experiments on real and synthetic data demonstrate improved results
compared to the state of the art, especially in challenging scenarios with
large amounts of noise and outliers.},
added-at = {2021-06-26T11:15:58.000+0200},
author = {Weder, Silvan and Schönberger, Johannes L. and Pollefeys, Marc and Oswald, Martin R.},
biburl = {https://www.bibsonomy.org/bibtex/2f6e4098790c0f81588197268ce2442fe/shuncheng.wu},
description = {[2011.14791] NeuralFusion: Online Depth Fusion in Latent Space},
interhash = {c7fc42fcafad3111e9d24d5f69821c2b},
intrahash = {f6e4098790c0f81588197268ce2442fe},
keywords = {3d_reconstruction cvpr21 deeplearning neural_reconstruction},
note = {cite arxiv:2011.14791},
timestamp = {2021-06-26T11:15:58.000+0200},
title = {NeuralFusion: Online Depth Fusion in Latent Space},
url = {http://arxiv.org/abs/2011.14791},
year = 2020
}