
dc.contributor.author: Guerrero-Viu, Julia (en_US)
dc.contributor.author: Subias, Jose Daniel (en_US)
dc.contributor.author: Serrano, Ana (en_US)
dc.contributor.author: Storrs, Katherine R. (en_US)
dc.contributor.author: Fleming, Roland W. (en_US)
dc.contributor.author: Masia, Belen (en_US)
dc.contributor.author: Gutierrez, Diego (en_US)
dc.contributor.editor: Bermano, Amit H. (en_US)
dc.contributor.editor: Kalogerakis, Evangelos (en_US)
dc.date.accessioned: 2024-04-16T14:40:44Z
dc.date.available: 2024-04-16T14:40:44Z
dc.date.issued: 2024
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.15037
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf15037
dc.description.abstract: Estimating perceptual attributes of materials directly from images is a challenging task due to their complex, not fully understood interactions with external factors, such as geometry and lighting. Supervised deep learning models have recently been shown to outperform traditional approaches, but they rely on large datasets of human-annotated images for accurate perception predictions. Obtaining reliable annotations is a costly endeavor, aggravated by the limited ability of these models to generalise to different aspects of appearance. In this work, we show how a much smaller set of human annotations ("strong labels") can be effectively augmented with automatically derived "weak labels" in the context of learning a low-dimensional image-computable gloss metric. We evaluate three alternative weak labels for predicting human gloss perception from limited annotated data. Incorporating weak labels enhances our gloss prediction beyond the current state of the art. Moreover, it enables a substantial reduction in human annotation costs without sacrificing accuracy, whether working with rendered images or real photographs. (en_US)
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. (en_US)
dc.rights: Attribution-NonCommercial 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by-nc/4.0/
dc.subject: CCS Concepts: Computing methodologies -> Perception; Dimensionality reduction and manifold learning; Supervised learning
dc.subject: Computing methodologies
dc.subject: Perception
dc.subject: Dimensionality reduction and manifold learning
dc.subject: Supervised learning
dc.title: Predicting Perceived Gloss: Do Weak Labels Suffice? (en_US)
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Perceptual Rendering
dc.description.volume: 43
dc.description.number: 2
dc.identifier.doi: 10.1111/cgf.15037
dc.identifier.pages: 13 pages

