Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations

Alex Beutel
Zhe Zhao
FAT/ML 2017 -- Workshop at KDD 2017 (2017)

Abstract

How can we learn a classifier that is "fair" with respect to a protected or sensitive group when we do not know whether the input to the classifier affects that group? How can we train such a classifier when data on the protected group is difficult to obtain? In many settings, determining the sensitive input attribute can be prohibitively expensive even during model training, and possibly impossible during model serving. For example, in recommender systems, if we want to predict whether a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., language or topic. Thus, it is not feasible to use a separate classifier calibrated on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a remarkably small amount of data is needed to train these models, and there is still a gap between the theoretical implications and the empirical results.
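A minimal sketch of the general idea described above, assuming a PyTorch setup with synthetic data: a shared encoder feeds both a task head and an adversary head that tries to predict the sensitive attribute, with a gradient-reversal layer pushing the encoder to remove that information from the latent representation. The layer sizes, loss weights, and the gradient-reversal formulation here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class FairClassifier(nn.Module):
    def __init__(self, n_features, hidden=32, adv_weight=1.0):
        super().__init__()
        self.adv_weight = adv_weight
        # Shared encoder producing the latent representation.
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        # Main task head: predicts the target label from the latent code.
        self.task_head = nn.Linear(hidden, 1)
        # Adversary head: tries to recover the sensitive attribute from the same code.
        self.adv_head = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.encoder(x)
        y_logit = self.task_head(z)
        # Gradient reversal: the adversary learns to predict the sensitive
        # attribute, while the encoder receives the negated gradient and is
        # therefore pushed to strip that information out of z.
        a_logit = self.adv_head(GradReverse.apply(z, self.adv_weight))
        return y_logit, a_logit


# Toy training loop on synthetic data (shapes only; no real dataset).
torch.manual_seed(0)
x = torch.randn(256, 10)
y = torch.randint(0, 2, (256, 1)).float()  # task label
a = torch.randint(0, 2, (256, 1)).float()  # sensitive attribute, needed only at training time

model = FairClassifier(n_features=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    y_logit, a_logit = model(x)
    loss = bce(y_logit, y) + bce(a_logit, a)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that the sensitive attribute `a` is consumed only by the adversary's loss during training, which matches the serving constraint in the abstract: at prediction time the model needs only `x`.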
