
dc.contributor.author: Cai, Yiqi (en_US)
dc.contributor.author: Guo, Xiaohu (en_US)
dc.contributor.editor: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi (en_US)
dc.date.accessioned: 2016-10-11T05:19:48Z
dc.date.available: 2016-10-11T05:19:48Z
dc.date.issued: 2016
dc.identifier.issn: 1467-8659
dc.identifier.uri: http://dx.doi.org/10.1111/cgf.13017
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13017
dc.description.abstract: Superpixels have been widely used as a preprocessing step in various computer vision tasks. Spatial compactness and color homogeneity are the two key factors determining the quality of the superpixel representation. In this paper, these two objectives are considered separately and anisotropic superpixels are generated to better adapt to local image content. We develop a unimodular Gaussian generative model to guide the color homogeneity within a superpixel by learning local pixel color variations. It turns out that maximizing the log-likelihood of our generative model is equivalent to solving a Centroidal Voronoi Tessellation (CVT) problem. Moreover, we provide a theoretical guarantee that the CVT result is invariant to affine illumination change, which makes our anisotropic superpixel generation algorithm well suited for image/video analysis in varying illumination environments. The effectiveness of our method in image/video superpixel generation is demonstrated through comparison with other state-of-the-art methods. (en_US)
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. (en_US)
dc.title: Anisotropic Superpixel Generation Based on Mahalanobis Distance (en_US)
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Image Processing
dc.description.volume: 35
dc.description.number: 7
dc.identifier.doi: 10.1111/cgf.13017
dc.identifier.pages: 199-207
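
The abstract describes maximizing the log-likelihood of a per-superpixel Gaussian color model, which reduces to a Centroidal Voronoi Tessellation computed with a Mahalanobis distance in color space. Below is a minimal, hypothetical Lloyd-style sketch of that idea in Python/NumPy; the function name, the grid initialization, and the fixed spatial weighting term are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of anisotropic superpixels via a Lloyd-style CVT
    # with a per-cluster Mahalanobis color distance. Dense/toy implementation
    # intended only for small images.
    import numpy as np

    def anisotropic_superpixels(image, n_segments=100, n_iters=5, spatial_weight=0.5):
        """image: (H, W, 3) float array; returns an (H, W) label map."""
        h, w, _ = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        coords = np.stack([ys, xs], axis=-1).reshape(-1, 2).astype(float)
        colors = image.reshape(-1, 3).astype(float)

        # Initialize seeds on a regular grid.
        step = max(int(np.sqrt(h * w / n_segments)), 1)
        seeds = np.array([(y, x) for y in range(step // 2, h, step)
                                  for x in range(step // 2, w, step)], dtype=float)
        k = len(seeds)
        means = colors[seeds[:, 0].astype(int) * w + seeds[:, 1].astype(int)]
        covs = np.tile(np.eye(3), (k, 1, 1))  # per-cluster color covariance

        for _ in range(n_iters):
            # Assignment: weighted spatial distance + Mahalanobis color distance.
            d_spatial = ((coords[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
            diff = colors[:, None, :] - means[None, :, :]              # (N, k, 3)
            inv_covs = np.linalg.inv(covs + 1e-6 * np.eye(3))          # (k, 3, 3)
            d_color = np.einsum('nki,kij,nkj->nk', diff, inv_covs, diff)
            labels = np.argmin(d_color + spatial_weight * d_spatial / step ** 2, axis=1)

            # Update: spatial centroid, mean color, and color covariance per cluster.
            for j in range(k):
                mask = labels == j
                if mask.any():
                    seeds[j] = coords[mask].mean(0)
                    means[j] = colors[mask].mean(0)
                    c = colors[mask] - means[j]
                    covs[j] = c.T @ c / len(c)

        return labels.reshape(h, w)

For example, `anisotropic_superpixels(img, n_segments=200)` on a small float image returns an integer label map; the learned per-cluster covariances are what make the color term anisotropic rather than a plain Euclidean (SLIC-like) distance.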

