Improving Smiling Detection with Race and Gender Diversity
Recent progress in deep learning has been accompanied by growing concern about whether models are fair for users, delivering equally good performance across different demographics. In computer vision research, such questions are relevant to face detection and the related task of face attribute detection, among others. We measure race and gender inclusion in the context of smiling detection, and introduce a method for improving smiling detection across demographic groups. Our method introduces several modifications to existing detection methods, leveraging twofold transfer learning to better model facial diversity. Results show that this technique improves accuracy over strong baselines for most demographic groups as well as overall. Our best-performing model defines a new state-of-the-art for smiling detection, reaching 91% on the Faces of the World dataset. The accompanying multi-head diversity classifier also defines a new state-of-the-art for gender classification, reaching 93.87% on the Faces of the World dataset. This research demonstrates the utility of modeling race and gender to improve a face attribute detection task, using a twofold transfer learning framework that preserves the privacy of individuals in a target dataset.
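One plausible reading of the twofold transfer setup described above can be sketched as follows: a shared backbone is first trained with multi-head race and gender outputs on a demographically labeled source dataset, and the backbone is then frozen and reused for the target smiling task, so target-dataset faces never update the demographic heads or shared weights. The dimensions, layer shapes, and function names here are illustrative assumptions, not the paper's actual architecture; random matrices stand in for learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- illustration only, not the paper's architecture.
FEAT_DIM, HIDDEN = 128, 32
N_RACE, N_GENDER = 4, 2

def linear(in_dim, out_dim):
    """Random weight matrix standing in for a learned layer."""
    return rng.normal(scale=0.1, size=(in_dim, out_dim))

# Stage 1 (assumed): fit a shared backbone with multi-head race/gender
# outputs on a demographically labeled source dataset.
backbone = linear(FEAT_DIM, HIDDEN)
race_head = linear(HIDDEN, N_RACE)
gender_head = linear(HIDDEN, N_GENDER)

def diversity_forward(x):
    h = np.tanh(x @ backbone)              # shared facial representation
    return h @ race_head, h @ gender_head  # multi-head demographic logits

# Stage 2 (assumed): transfer the frozen backbone and fit only a new
# smiling head on the target data, so target faces never touch the
# demographic heads -- one way the framework can afford privacy.
smile_head = linear(HIDDEN, 1)

def smile_forward(x):
    h = np.tanh(x @ backbone)                    # reused, frozen backbone
    return 1 / (1 + np.exp(-(h @ smile_head)))   # smiling probability

faces = rng.normal(size=(5, FEAT_DIM))  # stand-in face embeddings
race_logits, gender_logits = diversity_forward(faces)
smile_prob = smile_forward(faces)
print(race_logits.shape, gender_logits.shape, smile_prob.shape)
```

In this sketch the only parameters a target dataset would ever train are those of `smile_head`; the demographic modeling lives entirely in the source-dataset stage.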