
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?

A. Kendall and Y. Gal (2017). arXiv:1703.04977. Comment: NIPS 2017.

Abstract

There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
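The learned loss attenuation mentioned in the abstract can be made concrete with a short sketch. Below is a minimal PyTorch illustration, assuming a regression network with an extra head predicting a per-pixel log-variance; the function and variable names are illustrative and not the authors' code. The first function implements the heteroscedastic (aleatoric) regression loss of the form 0.5 * exp(-s) * ||y - f(x)||^2 + 0.5 * s with s = log sigma^2, and the second shows the common Monte Carlo dropout approach to epistemic uncertainty via repeated stochastic forward passes.

```python
import torch

def attenuated_regression_loss(y_pred, log_var, y_true):
    """Aleatoric regression loss with learned attenuation.

    The network predicts a mean `y_pred` and a log-variance `log_var` per
    output. Residuals on noisy targets are down-weighted by exp(-log_var),
    while the 0.5 * log_var term penalises predicting unbounded variance.
    """
    precision = torch.exp(-log_var)
    return (0.5 * precision * (y_true - y_pred) ** 2 + 0.5 * log_var).mean()

def mc_dropout_predict(model, x, n_samples=20):
    """Epistemic uncertainty via Monte Carlo dropout (a sketch).

    Dropout is kept active at test time; the sample mean of several
    stochastic forward passes is the prediction, and the sample variance
    approximates model (epistemic) uncertainty.
    """
    model.train()  # keeps dropout layers stochastic (note: also affects batch norm)
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)
```

In the paper's framework the two quantities are combined: the predictive variance is the sum of the sampled aleatoric variances and the variance of the sampled means across dropout passes.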

