Abstract
Differential privacy (DP) is a popular mechanism for training machine
learning models with bounded leakage about the presence of specific points in
the training data. The cost of differential privacy is a reduction in the
model's accuracy. We demonstrate that this cost is not borne equally: the accuracy
of DP models drops much more for underrepresented classes and subgroups.
For example, a DP gender classification model exhibits much lower accuracy
for black faces than for white faces. Critically, this gap is bigger in the DP
model than in the non-DP model, i.e., if the original model is unfair, the
unfairness becomes worse once DP is applied. We demonstrate this effect for a
variety of tasks and models, including sentiment analysis of text and image
classification. We then explain why DP training mechanisms such as gradient
clipping and noise addition have a disproportionate effect on
underrepresented and more complex subgroups, resulting in a disparate reduction
in model accuracy.
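To make the mechanism concrete, below is a minimal NumPy sketch of one DP-SGD update in the style of Abadi et al. (per-example gradient clipping followed by Gaussian noise addition). The function name, arguments, and toy data are illustrative assumptions, not the paper's implementation; per-example gradients are assumed to be available as a 2-D array.

```python
import numpy as np

def dp_sgd_update(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One illustrative DP-SGD step (hypothetical helper, not the paper's code).

    params:            1-D parameter vector
    per_example_grads: array of shape (batch_size, n_params)
    """
    # Clip each example's gradient to L2 norm at most clip_norm.
    # Gradients of atypical (e.g., underrepresented) examples tend to be
    # large, so they are distorted most by this step.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))

    # Add Gaussian noise calibrated to the clipping bound, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / per_example_grads.shape[0]

    return params - lr * noisy_grad

# Toy usage with random "gradients".
rng = np.random.default_rng(0)
params = np.zeros(10)
grads = rng.normal(size=(32, 10))
params = dp_sgd_update(params, grads, clip_norm=1.0,
                       noise_multiplier=1.1, lr=0.1, rng=rng)
```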