dc.description.abstract | Many photographs are taken in perspective. Techniques for rectifying the resulting perspective distortions typically rely on the existence of parallel lines in the scene. In scenarios where such parallel lines are hard to extract automatically or annotate manually, the unwarping process remains a challenge. In this paper, we introduce an automatic algorithm for rectifying images containing textures of repeated elements lying on an unknown plane. We unwarp the input by maximizing image self-similarity over the space of homography transformations. We map a set of detected regional descriptors to surfaces in a transformation space, compute the intersection points among triplets of such surfaces, and then use consensus among the projected intersection points to extract the correcting transform. Our algorithm is global, robust, and does not require explicit or accurate detection of similar elements. We evaluate our method on a variety of challenging textures and images. The rectified outputs are directly useful for various tasks, including texture synthesis and image completion. | en_US |