A fuzzy loss for ontology classification

NeSy 2024: The 18th International Conference on Neural-Symbolic Learning and Reasoning, volume 14979 of Lecture Notes in Computer Science, pages 101–118. Springer, 2024.
DOI: https://doi.org/10.1007/978-3-031-71167-1_6

Abstract

Deep learning models are often unaware of the inherent constraints of the task they are applied to. However, many downstream tasks require logical consistency. For ontology classification tasks, such constraints include subsumption and disjointness relations between classes. In order to increase the consistency of deep learning models, we propose a fuzzy loss that combines a label-based loss with terms penalising subsumption or disjointness violations. Our evaluation on the ChEBI ontology shows that the fuzzy loss decreases the number of consistency violations by several orders of magnitude without degrading classification performance. In addition, we use the fuzzy loss for unsupervised learning. We show that this can further improve consistency on data from a distribution outside the scope of the supervised training.
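A loss of this shape can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function name, the choice of penalty terms (a Gödel-style subsumption residual and a Łukasiewicz-style disjointness overlap), and the weighting parameter `lam` are all assumptions made for the example.

```python
import math

def fuzzy_loss(probs, labels, subsumptions, disjointnesses, lam=1.0):
    """Illustrative consistency-aware loss for multi-label ontology
    classification.

    probs[i]        -- predicted membership probability of class i
    labels[i]       -- 0/1 ground-truth label of class i
    subsumptions    -- list of (sub, super) index pairs (sub is a subclass)
    disjointnesses  -- list of (a, b) index pairs of disjoint classes
    lam             -- weight of the consistency penalties (assumed)
    """
    eps = 1e-7
    # Standard label-based term: mean binary cross-entropy over classes.
    bce = -sum(
        y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
        for p, y in zip(probs, labels)
    ) / len(probs)
    # Subsumption penalty: a subclass should never be rated more
    # probable than its superclass.
    sub_pen = sum(max(0.0, probs[c] - probs[s]) for c, s in subsumptions)
    # Disjointness penalty: two disjoint classes should not both be
    # (fuzzily) true, i.e. their probabilities should not exceed 1 in sum.
    dis_pen = sum(max(0.0, probs[a] + probs[b] - 1.0) for a, b in disjointnesses)
    return bce + lam * (sub_pen + dis_pen)
```

With a prediction that respects the ontology (subclass below superclass, disjoint classes not jointly probable) both penalty terms vanish and the loss reduces to plain cross-entropy; violations add a positive penalty that gradient descent can push down even on unlabelled data, which is the mechanism the unsupervised setting in the abstract relies on.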

Links and resources
