Abstract
Geometric analysis is a powerful framework for understanding how the high
dimensionality of input data affects machine learning (ML) and knowledge
discovery (KD). Our approach makes it possible to assess the extent to which
the application of a specific KD/ML algorithm to a concrete data set is prone
to the curse of dimensionality. To this end we extend V.~Pestov's axiomatic
approach to the intrinsic dimension of data sets, based on the seminal work by
M.~Gromov on concentration phenomena, and provide an adaptable and
computationally feasible model for studying observable geometric invariants
associated to features that are natural to both the data and the learning
procedure. In particular, we investigate data represented by formal contexts
and give first theoretical as well as experimental insights into the intrinsic
dimension of a concept lattice. Because of the correspondence between formal
concepts and maximal cliques in graphs, applications to social network
analysis follow naturally.
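To make the notion of a formal context concrete, the following minimal sketch (not taken from the paper) enumerates the formal concepts of a small, hypothetical toy context (G, M, I) by brute force via the two derivation operators; the object set, attribute set, and incidence relation below are illustrative assumptions only.

```python
from itertools import combinations

# Hypothetical toy formal context: objects G, attributes M, incidence I.
G = ["g1", "g2", "g3"]
M = ["a", "b", "c"]
I = {("g1", "a"), ("g1", "b"), ("g2", "b"), ("g2", "c"), ("g3", "a"), ("g3", "c")}

def derive_attrs(A):
    """A' : attributes shared by every object in A."""
    return {m for m in M if all((g, m) in I for g in A)}

def derive_objects(B):
    """B' : objects possessing every attribute in B."""
    return {g for g in G if all((g, m) in I for m in B)}

def formal_concepts():
    """Enumerate all formal concepts (A, B) with A' = B and B' = A."""
    seen = set()
    for r in range(len(G) + 1):
        for subset in combinations(G, r):
            intent = derive_attrs(set(subset))
            extent = derive_objects(intent)      # closure of the subset
            seen.add((frozenset(extent), frozenset(intent)))
    return seen

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```

Ordering these concepts by inclusion of their extents yields the concept lattice whose intrinsic dimension the abstract refers to; for a symmetric, reflexive context derived from a graph's adjacency relation, the concepts correspond to the graph's maximal cliques.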