Inverting Face Embeddings with Convolutional Neural Networks

Abstract

Deep neural networks have dramatically advanced the state of the art in many areas of machine learning. Recently, they have been shown to have a remarkable ability to generate highly complex visual artifacts such as images and text rather than simply recognize them. In this work we use neural networks to effectively invert low-dimensional face embeddings while producing realistic-looking, consistent images. Our contribution is twofold: first, we show that a gradient ascent style approach can be used to reproduce consistent images with the help of a guiding image. Second, we demonstrate that we can train a separate neural network to effectively solve the minimization problem in a single pass and generate images in real time. We then evaluate the loss incurred by using a neural network instead of gradient descent by comparing the final values of the minimized loss function.
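The guided optimization described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the embedding network `embed_net`, the pixel-space penalty toward the guiding image, the `guide_weight` coefficient, and the use of the Adam optimizer are all assumptions made for the example.

```python
# Sketch: invert a face embedding by optimizing an image so that its embedding
# matches a target, regularized toward a guiding image. Assumes a PyTorch-style
# embedding network `embed_net`; hyperparameters are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def invert_embedding(embed_net, target_embedding, guide,
                     steps=500, lr=0.05, guide_weight=0.1):
    x = guide.clone().requires_grad_(True)            # start the search from the guide image
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        emb = embed_net(x.unsqueeze(0)).squeeze(0)    # embedding of the current image
        embedding_loss = F.mse_loss(emb, target_embedding)
        guide_loss = F.mse_loss(x, guide)             # keep the result close to the guide
        loss = embedding_loss + guide_weight * guide_loss
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)                       # keep pixel values in a valid range
    return x.detach()
```

The second contribution, a feed-forward network trained to produce the inverted image in a single pass, would replace this iterative loop with one forward call, which is what enables real-time generation.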