J. Hirth and T. Hanika. (2022). Formal Conceptual Views in Neural Networks. arXiv:2209.13517. Comment: 17 pages, 8 figures, 9 tables.
Abstract
Explaining neural network models is a challenging task that remains unsolved
in its entirety to this day. This is especially true for high-dimensional and
complex data. With the present work, we introduce two notions of conceptual
views of a neural network, specifically a many-valued and a symbolic view. Both
provide novel analysis methods that enable a human AI analyst to gain deeper
insights into the knowledge captured by the neurons of a network. We
test the conceptual expressivity of our novel views through different
experiments on the ImageNet and Fruit-360 data sets. Furthermore, we show to
what extent the views allow one to quantify the conceptual similarity of different
learning architectures. Finally, we demonstrate how conceptual views can be
applied for the abductive learning of human-comprehensible rules from neurons. In
summary, with our work we contribute to the highly relevant task of globally
explaining neural network models.
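To make the distinction between the two views more concrete, the following is a minimal sketch of how a real-valued activation matrix (a many-valued view) might be binarized into a boolean incidence table, as used in formal concept analysis (FCA). The activation data, the median-based thresholding rule, and all names are illustrative assumptions, not the authors' exact construction from the paper.

```python
# Hypothetical sketch: from a many-valued view to a symbolic (binary) view.
import numpy as np

rng = np.random.default_rng(0)

# Rows: input objects (e.g. images); columns: neurons of a hidden layer.
activations = rng.normal(size=(6, 4))       # many-valued view (real-valued)

# Binarize each neuron at its median activation: object g has attribute m
# iff neuron m fires above its threshold on g. This yields a formal context
# that FCA tooling could analyze further.
thresholds = np.median(activations, axis=0)
context = activations > thresholds          # symbolic view (boolean incidence)

print(context.astype(int))
```

The resulting binary table could then serve as input to standard FCA algorithms (concept lattices, implications), which is the kind of symbolic analysis the abstract alludes to.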
@misc{hirth2022formal,
  author   = {Hirth, Johannes and Hanika, Tom},
  title    = {Formal Conceptual Views in Neural Networks},
  year     = {2022},
  url      = {http://arxiv.org/abs/2209.13517},
  note     = {arXiv:2209.13517. 17 pages, 8 figures, 9 tables},
  keywords = {2022 NN conceptual fca kde kdepub myown network neural publist views}
}