Abstract
Despite the widespread consensus on the complexity of the brain, the sprouts of
a single-neuron revolution emerged in neuroscience in the 1970s. They brought
many unexpected discoveries, including grandmother or concept cells and sparse
coding of information in the brain.
In machine learning, the famous curse of dimensionality long seemed to be an
unsolvable problem. Nevertheless, the idea of the blessing of dimensionality
has gradually become more and more popular. Ensembles of non-interacting or
weakly interacting simple units prove to be an effective tool for solving
essentially multidimensional problems. This approach is especially useful for
one-shot (non-iterative) correction of errors in large legacy artificial
intelligence systems.
These simplicity revolutions in the era of complexity have deep fundamental
reasons grounded in the geometry of multidimensional data spaces. To explore
and understand these reasons, we revisit the background ideas of statistical
physics, which in the course of the 20th century were developed into the
theory of concentration of measure. New stochastic separation theorems reveal
the fine structure of data clouds.
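
To make the flavour of such separation theorems concrete, the following minimal numerical sketch (our illustration, not a result from the text; the uniform-ball data model, the 0.8 threshold, and the sample sizes are illustrative assumptions) estimates how often one randomly chosen point among a thousand is cut off from all the others by a single linear threshold functional as the dimension n grows.

    # Minimal sketch of a stochastic separation effect: for i.i.d. points
    # uniform in the unit n-ball, a randomly chosen point is, with probability
    # approaching one as n grows, separated from all the others by one linear
    # threshold functional. All parameters below are illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_unit_ball(m, n):
        """Draw m i.i.d. points uniformly from the unit ball in R^n."""
        x = rng.standard_normal((m, n))
        x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform on the sphere
        r = rng.uniform(0.0, 1.0, size=m) ** (1.0 / n)  # radial density ~ r^(n-1)
        return x * r[:, None]

    def separable_fraction(n, m=1000, trials=200, threshold=0.8):
        """Fraction of trials in which the query point q is separated from the
        other m-1 points by the functional l(x) = <x, q/||q||> at the threshold."""
        successes = 0
        for _ in range(trials):
            points = sample_unit_ball(m, n)
            q, others = points[-1], points[:-1]
            projections = others @ (q / np.linalg.norm(q))
            if np.linalg.norm(q) > threshold and np.all(projections < threshold):
                successes += 1
        return successes / trials

    for n in (10, 50, 100, 500):
        print(f"n = {n:4d}: point separable in ~{separable_fraction(n):.2f} of trials")
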
We review and analyse biological, physical, and mathematical problems at the
core of the fundamental question: how can the high-dimensional brain organise
reliable and fast learning in a high-dimensional world of data using simple
tools?
Two critical applications are reviewed to exemplify the approach: one-shot
correction of errors in intelligent systems and the emergence of static and
associative memories in ensembles of single neurons.
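
The corrector idea can be sketched as follows (a hypothetical illustration, not the authors' implementation; the whitened Fisher-type discriminant, the synthetic data model, and the 0.5 margin are our assumptions): a single linear threshold element, built non-iteratively from the feature vector of one misclassified input and the statistics of correctly processed inputs, flags future occurrences of that error so the legacy output can be overridden.

    # Hypothetical sketch of a one-shot, non-iterative corrector: one linear
    # threshold unit separating the feature vector of a single misclassified
    # input from features of inputs the legacy system handles correctly.
    # Dimension, data model, and the 0.5 margin are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    n = 200                                    # feature dimension of the legacy system
    correct = rng.standard_normal((5000, n))   # features of correctly handled inputs
    error = rng.standard_normal(n)             # feature vector of the one known error

    # Whiten with the statistics of the "correct" cloud.
    mean = correct.mean(axis=0)
    cov = np.cov(correct, rowvar=False) + 1e-3 * np.eye(n)  # regularised covariance
    L = np.linalg.cholesky(np.linalg.inv(cov))               # whitening: z = L.T @ (x - mean)

    z_err = L.T @ (error - mean)               # whitened, centred error features
    w = z_err / np.linalg.norm(z_err)          # discriminant direction: one "neuron"
    threshold = 0.5 * np.linalg.norm(z_err)    # fire only well inside the error's half-space

    def corrector_fires(x):
        """True if the single-neuron corrector flags the input with features x."""
        return (L.T @ (x - mean)) @ w > threshold

    # The corrector fires on the stored error and, in high dimension, on almost
    # none of the correctly handled inputs, so the legacy output is overridden
    # only when the flag is raised.
    false_alarms = sum(corrector_fires(x) for x in correct)
    print("fires on the error:", corrector_fires(error))
    print("false alarms among", len(correct), "correct inputs:", false_alarms)
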