D-Nets: Beyond Patch-Based Image Descriptors

Felix von Hundelshausen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012)

Abstract

Despite much research on patch-based descriptors, SIFT remains the gold standard for finding correspondences across images, and recent descriptors focus primarily on improving speed rather than accuracy. In this paper we propose Descriptor-Nets (D-Nets), a computationally efficient method that significantly improves the accuracy of image matching by going beyond patch-based approaches. D-Nets constructs a network in which nodes correspond to traditional sparsely or densely sampled keypoints, and image content is sampled along selected edges of this net. Not only is our proposed representation invariant to cropping, translation, scale, reflection and rotation, but it is also significantly more robust to severe perspective and non-linear distortions. We present several variants of our algorithm, including one that tunes itself to the image complexity and an efficient parallelized variant that employs a fixed grid. Comprehensive direct comparisons against SIFT and ORB on standard datasets demonstrate that D-Nets dominates existing approaches in terms of precision and recall while retaining computational efficiency.
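
The edge-sampling idea can be illustrated with a minimal sketch in Python with NumPy: for one directed edge of the net connecting two keypoints, intensities are sampled along the connecting line, pooled into sections, normalized, and quantized into a discrete token. The function name d_token and all parameter values (num_sections, samples_per_section, bits) below are illustrative assumptions, not the paper's exact configuration, and the matching stage that compares such tokens across two images is not shown.

    import numpy as np

    def d_token(image, p, q, num_sections=8, samples_per_section=4, bits=2):
        """Sketch: build a discrete token for the directed edge p -> q.

        Assumes `image` is a 2-D grayscale NumPy array and that both
        keypoints p, q (given as (x, y)) lie inside the image. All
        parameter choices here are illustrative assumptions.
        """
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        # Sample only the middle portion of the edge, skipping the
        # extremes near the keypoints to tolerate localization noise.
        ts = np.linspace(0.1, 0.9, num_sections * samples_per_section)
        pts = p[None, :] + ts[:, None] * (q - p)[None, :]
        # Nearest-pixel sampling; bilinear interpolation would be an
        # obvious refinement.
        vals = image[pts[:, 1].astype(int), pts[:, 0].astype(int)]
        # Pool the samples into per-section means.
        sections = vals.reshape(num_sections, samples_per_section).mean(axis=1)
        # Normalize to [0, 1] so the token is insensitive to affine
        # brightness changes, then quantize each section to `bits` bits.
        lo, hi = sections.min(), sections.max()
        if hi > lo:
            norm = (sections - lo) / (hi - lo)
        else:
            norm = np.zeros_like(sections)
        levels = np.minimum((norm * (1 << bits)).astype(int), (1 << bits) - 1)
        # Concatenate the quantized levels into one integer token.
        token = 0
        for lv in levels:
            token = (token << bits) | int(lv)
        return token

Applied to every selected edge of the keypoint net, such tokens describe image content along lines rather than within patches, which is what underlies the invariance and robustness properties claimed above.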
