Abstract
In this article, we consider the problem of privacy-preserving distributed deep learning in which data privacy is protected by fully homomorphic encryption. The aim is a practical and scalable method for distributed deep learning on fully homomorphically encrypted data, which requires addressing the large computational cost of computing on such data. We adopt an approach that leverages fuzzy-theoretic membership mappings for data representation learning. The method introduces globally convergent and robust variational membership mappings to build local deep models. These local models are then combined, in a robust and flexible manner, by means of fuzzy attributes into a global model that can be homomorphically evaluated efficiently. The resulting membership-mappings-based privacy-preserving distributed deep learning method is accurate, practical, and scalable. This is verified through numerous experiments, including demonstrations on the MNIST and Freiburg Groceries datasets and a biomedical application concerned with detecting mental stress in individuals.