Variable Rate Image Compression with Recurrent Neural Networks
Venue
International Conference on Learning Representations (2016)
Publication Year
2016
Authors
George Toderici, Sean M. O'Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, Rahul Sukthankar
Abstract
A large fraction of Internet traffic is now driven by requests from mobile devices
with relatively small screens and often stringent bandwidth requirements. Due to
these factors, it has become the norm for modern graphics-heavy websites to
transmit low-resolution, low-bytecount image previews (thumbnails) as part of the
initial page load process to improve apparent page responsiveness. Increasing
thumbnail compression beyond the capabilities of existing codecs is therefore a
current research focus, as any byte savings will significantly enhance the
experience of mobile device users. Toward this end, we propose a general framework
for variable-rate image compression and a novel architecture based on convolutional
and deconvolutional LSTM recurrent networks. Our models address the main issues
that have prevented autoencoder neural networks from competing with existing image
compression algorithms: (1) our networks only need to be trained once (not
per-image), regardless of input image dimensions and the desired compression rate;
(2) our networks are progressive, meaning that the more bits are sent, the more
accurate the image reconstruction; and (3) the proposed architecture is at least as
efficient as a standard purpose-trained autoencoder for a given number of bits. On
a large-scale benchmark of 32×32 thumbnails, our LSTM-based approaches provide
better visual quality than (headerless) JPEG, JPEG2000 and WebP, with a storage
size that is reduced by 10% or more.
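
The abstract describes the approach only at a high level, so the following is a minimal, hypothetical sketch of the progressive, residual coding loop it alludes to, not the authors' code. Plain convolution and transposed-convolution layers stand in for the paper's convolutional/deconvolutional LSTM units, and TinyEncoder, TinyDecoder, Binarizer, and all layer sizes are illustrative assumptions; the untrained networks demonstrate only the control flow, not the reported quality.

```python
# Hypothetical sketch of a progressive, residual image coder (PyTorch).
# Simplification: plain conv/deconv layers instead of the paper's conv/deconv LSTMs.
import torch
import torch.nn as nn

class Binarizer(nn.Module):
    """Quantizes features to {-1, +1}; training would use a straight-through estimator."""
    def forward(self, x):
        return torch.sign(torch.tanh(x))

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 32, kernel_size=4, stride=4)   # 32x32 -> 8x8
        self.to_bits = nn.Conv2d(32, 8, kernel_size=1)          # 8 "bit planes" per step
    def forward(self, x):
        return self.to_bits(torch.tanh(self.conv(x)))

class TinyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(8, 32, kernel_size=4, stride=4)  # 8x8 -> 32x32
        self.to_rgb = nn.Conv2d(32, 3, kernel_size=1)
    def forward(self, bits):
        return self.to_rgb(torch.relu(self.deconv(bits)))

encoder, binarizer, decoder = TinyEncoder(), Binarizer(), TinyDecoder()

image = torch.rand(1, 3, 32, 32)            # a 32x32 thumbnail
residual = image.clone()
reconstruction = torch.zeros_like(image)

# Each iteration spends a fixed number of bits on the current residual, so the
# bitstream can be truncated after any step and still decode to a coarser image.
with torch.no_grad():
    for step in range(4):
        bits = binarizer(encoder(residual))            # quantized code for this step
        reconstruction = reconstruction + decoder(bits)
        residual = image - reconstruction
        print(f"step {step}: residual MSE = {residual.pow(2).mean().item():.4f}")
```

Because every iteration emits the same fixed number of bits, stopping after any step yields a valid reconstruction; running more steps sends more bits and refines it, which is the sense in which the codes are both variable-rate and progressive.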
