Despite the widespread consensus on the brain's complexity, sprouts of the
single-neuron revolution emerged in neuroscience in the 1970s. They brought
many unexpected discoveries, including grandmother (concept) cells and sparse
coding of information in the brain.
In machine learning, the famous curse of dimensionality long seemed an
insurmountable problem. Nevertheless, the idea of the blessing of
dimensionality is gradually becoming more popular. Ensembles of
non-interacting or weakly interacting simple units prove to be an effective
tool for solving essentially multidimensional problems. This approach is
especially useful for one-shot (non-iterative) correction of errors in large
legacy artificial intelligence systems.
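To make the idea concrete, here is a minimal sketch (my own illustration, not the authors' code) of such a one-shot corrector: a single Fisher-type linear discriminant, fitted non-iteratively, isolates one sample that a hypothetical legacy system misclassifies from the rest of a high-dimensional data cloud. The data, names, and threshold rule are assumptions made only for this demonstration.

# Minimal sketch, not the authors' implementation: a one-shot linear corrector
# that separates a single misclassified sample from the rest of the data cloud.
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_points = 200, 5000                      # high-dimensional feature space
cloud = rng.standard_normal((n_points, n_dim))   # samples the legacy system handles correctly
x_err = rng.standard_normal(n_dim)               # the one sample it gets wrong

# Fisher-type discriminant: w = Sigma^{-1}(x_err - mean), with the threshold
# placed halfway between the projections of x_err and of the cloud mean.
mean = cloud.mean(axis=0)
cov = np.cov(cloud, rowvar=False) + 1e-3 * np.eye(n_dim)   # regularised covariance
w = np.linalg.solve(cov, x_err - mean)
threshold = 0.5 * (w @ x_err + w @ mean)

def corrector_fires(x):
    # The wrapper around the legacy system flips or flags its decision
    # only when this single linear unit fires.
    return w @ x > threshold

print("fires on the erroneous sample:", corrector_fires(x_err))
print("false firings on the cloud:", int(np.sum(cloud @ w > threshold)), "of", n_points)

In high dimension the single hyperplane typically fires on the erroneous sample and on virtually none of the other points, which is why such a correction can be attached without retraining the legacy system.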
These simplicity revolutions in the era of complexity have deep fundamental
reasons grounded in the geometry of multidimensional data spaces. To explore
and understand these reasons, we revisit the background ideas of statistical
physics, which were developed over the course of the 20th century into the
theory of concentration of measure. New stochastic separation theorems reveal
the fine structure of data clouds.
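A quick numerical check (my own sketch, assuming points drawn uniformly from the unit n-ball) conveys the flavour of such separation results: as the dimension grows, almost every point of a random sample becomes separable from all the others by the simple hyperplane (x, x_i) = (x_i, x_i).

# Illustrative sketch, not taken from the paper: fraction of sample points x_i
# that the hyperplane (x, x_i) = (x_i, x_i) separates from the rest of the
# sample, for points drawn uniformly from the unit n-ball.
import numpy as np

rng = np.random.default_rng(1)

def separable_fraction(n_dim, n_points, trials=5):
    fractions = []
    for _ in range(trials):
        # Uniform sample in the unit n-ball: random direction times radius u**(1/n).
        g = rng.standard_normal((n_points, n_dim))
        g /= np.linalg.norm(g, axis=1, keepdims=True)
        pts = g * (rng.random(n_points) ** (1.0 / n_dim))[:, None]
        gram = pts @ pts.T
        diag = np.diag(gram).copy()        # (x_i, x_i)
        np.fill_diagonal(gram, -np.inf)    # exclude j = i from the maximum
        fractions.append(np.mean(gram.max(axis=1) < diag))
    return float(np.mean(fractions))

for n_dim in (2, 10, 50, 200):
    print(f"dimension {n_dim:>3}: separable fraction {separable_fraction(n_dim, 1000):.3f}")

In low dimension only a few extreme points pass the test, while in high dimension the fraction approaches one; this is the geometric effect that the blessing of dimensionality, and the one-shot corrector sketched above, rely on.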
We review and analyse biological, physical, and mathematical problems at the
core of the fundamental question: how can a high-dimensional brain organise
reliable and fast learning in a high-dimensional world of data using simple
tools? Two critical applications are reviewed to exemplify the approach:
one-shot correction of errors in intelligent systems and the emergence of
static and associative memories in ensembles of single neurons.
Description
[1809.07656] The unreasonable effectiveness of small neural ensembles in high-dimensional brain
@article{gorban2018unreasonable,
added-at = {2019-06-13T15:21:54.000+0200},
author = {Gorban, A. N. and Makarov, V. A. and Tyukin, I. Y.},
biburl = {https://www.bibsonomy.org/bibtex/2ccb713b3f5517fa782b69a4ec5a54728/kirk86},
description = {[1809.07656] The unreasonable effectiveness of small neural ensembles in high-dimensional brain},
doi = {10.1016/j.plrev.2018.09.005},
interhash = {f458f1eca4896156603c81869f90bfdd},
intrahash = {ccb713b3f5517fa782b69a4ec5a54728},
keywords = {deep-learning machine-learning},
note = {cite arxiv:1809.07656. Comment: Review paper, accepted in Physics of Life Reviews; minor corrections},
timestamp = {2019-06-13T15:21:54.000+0200},
title = {The unreasonable effectiveness of small neural ensembles in
high-dimensional brain},
url = {http://arxiv.org/abs/1809.07656},
year = 2018
}