Abstract
Previous online dense 3D reconstruction methods struggle to balance memory
footprint against surface quality, largely because they rely on a fixed
underlying geometric representation, such as a TSDF (truncated signed distance
function) or surfels, without any knowledge of scene priors. In
this paper, we present DI-Fusion (Deep Implicit Fusion), based on a novel 3D
representation, i.e. Probabilistic Local Implicit Voxels (PLIVoxs), for online
3D reconstruction with a commodity RGB-D camera. Our PLIVox encodes scene
priors considering both the local geometry and uncertainty parameterized by a
deep neural network. With such deep priors, we are able to perform online
implicit 3D reconstruction, achieving state-of-the-art camera trajectory
estimation accuracy and mapping quality while requiring less storage than
previous online 3D reconstruction approaches. Our
implementation is available at https://www.github.com/huangjh-pub/di-fusion.
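To make the idea of a PLIVox concrete, the sketch below shows a minimal, hypothetical decoder of the kind the abstract describes: given a query point in a voxel's local coordinate frame and that voxel's latent code, a small network predicts both a signed-distance mean and an uncertainty. All names, dimensions, and the network itself are illustrative assumptions (random, untrained weights in NumPy), not the authors' implementation; see the linked repository for the real one.

```python
import numpy as np


class PLIVoxDecoder:
    """Illustrative sketch (not the paper's code): a tiny MLP that maps
    a point in local voxel coordinates plus a per-voxel latent code to
    a signed-distance mean and a variance (uncertainty). Weights are
    random stand-ins for a trained network."""

    def __init__(self, latent_dim=8, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 + latent_dim  # local xyz concatenated with latent code
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 2))  # -> (mean, log-variance)
        self.b2 = np.zeros(2)

    def __call__(self, xyz_local, latent):
        # Tile the voxel's latent code across all query points.
        lat = np.broadcast_to(latent, (xyz_local.shape[0], latent.shape[-1]))
        x = np.concatenate([xyz_local, lat], axis=-1)
        h = np.maximum(x @ self.w1 + self.b1, 0.0)  # ReLU hidden layer
        out = h @ self.w2 + self.b2
        mean, log_var = out[:, 0], out[:, 1]
        # exp() keeps the predicted variance strictly positive.
        return mean, np.exp(log_var)


decoder = PLIVoxDecoder()
pts = np.random.default_rng(1).uniform(-0.5, 0.5, (16, 3))  # queries in one voxel
latent = np.zeros(8)  # per-voxel code (would come from a learned encoder)
mean, var = decoder(pts, latent)
```

In an online system, each observed RGB-D frame would update the per-voxel latent codes, and the decoded mean/variance pairs would drive both surface extraction and uncertainty-aware camera tracking.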