dc.contributor.author | Sedlmair, Michael | en_US |
dc.contributor.author | Aupetit, Michael | en_US |
dc.contributor.editor | H. Carr, K.-L. Ma, and G. Santucci | en_US |
dc.date.accessioned | 2015-05-22T12:51:23Z | |
dc.date.available | 2015-05-22T12:51:23Z | |
dc.date.issued | 2015 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1111/cgf.12632 | en_US |
dc.description.abstract | Visual quality measures seek to algorithmically imitate human judgments of patterns such as class separability, correlation, or outliers. In this paper, we propose a novel data-driven framework for evaluating such measures. The basic idea is to take a large set of visually encoded data, such as scatterplots, with reliable human "ground truth" judgments, and to use this human-labeled data to learn how well a measure would predict human judgments on previously unseen data. Measures can then be evaluated based on predictive performance, an approach that is crucial for generalizing across datasets but has gained little attention so far. To illustrate our framework, we use it to evaluate 15 state-of-the-art class separation measures, using human ground truth data from 828 class separation judgments on color-coded 2D scatterplots. | en_US |
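The abstract's core idea, scoring a visual quality measure by how well it predicts held-out human judgments, can be sketched in a few lines. The following is a minimal illustration only, not the paper's method: the scatterplot generator, the `centroid_distance` measure, and the synthetic "human" labels are all hypothetical stand-ins for the paper's 828 human-labeled scatterplots and its 15 separation measures.

```python
# Minimal sketch of data-driven evaluation of a visual quality measure.
# Everything below (generator, measure, labels) is a hypothetical stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_scatterplot(offset):
    """Two Gaussian classes in 2D, `offset` apart along x."""
    a = rng.normal([0.0, 0.0], 1.0, size=(50, 2))
    b = rng.normal([offset, 0.0], 1.0, size=(50, 2))
    return np.vstack([a, b]), np.array([0] * 50 + [1] * 50)

def centroid_distance(points, labels):
    """Hypothetical separation measure: distance between class centroids."""
    return np.linalg.norm(points[labels == 0].mean(0) - points[labels == 1].mean(0))

# Synthetic stand-in for a corpus of human-labeled scatterplots.
offsets = rng.uniform(0, 6, size=200)
plots = [make_scatterplot(o) for o in offsets]
human_judgments = (offsets > 3).astype(int)  # pretend "separable" labels

# Score each plot with the measure, then evaluate how well the scores
# predict the human labels on unseen plots via cross-validation.
scores = np.array([[centroid_distance(p, l)] for p, l in plots])
acc = cross_val_score(LogisticRegression(), scores, human_judgments, cv=5)
print(f"predictive accuracy of the measure: {acc.mean():.2f}")
```

Under this framing, competing measures are compared by swapping in their scoring functions and ranking them by cross-validated predictive accuracy.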
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | H.5.0 [Information Interfaces and Presentation] | en_US |
dc.subject | General | en_US |
dc.title | Data-driven Evaluation of Visual Quality Measures | en_US |
dc.description.seriesinformation | Computer Graphics Forum | en_US |
dc.description.sectionheaders | Evaluation and Design | en_US |
dc.description.volume | 34 | en_US |
dc.description.number | 3 | en_US |
dc.identifier.doi | 10.1111/cgf.12632 | en_US |
dc.identifier.pages | 201-210 | en_US |