dc.contributor.author: Xiao, Shishi [en_US]
dc.contributor.author: Hou, Yihan [en_US]
dc.contributor.author: Jin, Cheng [en_US]
dc.contributor.author: Zeng, Wei [en_US]
dc.contributor.editor: Bujack, Roxana [en_US]
dc.contributor.editor: Archambault, Daniel [en_US]
dc.contributor.editor: Schreck, Tobias [en_US]
dc.date.accessioned: 2023-06-10T06:17:02Z
dc.date.available: 2023-06-10T06:17:02Z
dc.date.issued: 2023
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.14832
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14832
dc.description.abstract: Retrieving charts from a large corpus is a fundamental task that can benefit numerous applications such as visualization recommendations. The retrieved results are expected to conform to both explicit visual attributes (e.g., chart type, colormap) and implicit user intents (e.g., design style, context information) that vary across application scenarios. However, existing example-based chart retrieval methods are built upon non-decoupled and low-level visual features that are hard to interpret, while definition-based ones are constrained to pre-defined attributes that are hard to extend. In this work, we propose a new framework, namely WYTIWYR (What-You-Think-Is-What-You-Retrieve), that integrates user intents into the chart retrieval process. The framework consists of two stages: first, the Annotation stage disentangles the visual attributes within the query chart; second, the Retrieval stage embeds the user's intent with a customized text prompt as well as the bitmap query chart to recall the targeted retrieval results. We develop a prototype WYTIWYR system leveraging a contrastive language-image pre-training (CLIP) model to achieve zero-shot classification as well as multi-modal input encoding, and test the prototype on a large corpus of charts crawled from the Internet. Quantitative experiments, case studies, and qualitative interviews are conducted. The results demonstrate the usability and effectiveness of our proposed framework. [en_US]
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. [en_US]
dc.subject: CCS Concepts: Human-centered computing -> Visualization; Information systems -> Query intent; Computing methodologies -> Artificial intelligence
dc.subject: Human centered computing
dc.subject: Visualization
dc.subject: Information systems
dc.subject: Query intent
dc.subject: Computing methodologies
dc.subject: Artificial intelligence
dc.title: WYTIWYR: A User Intent-Aware Framework with Multi-modal Inputs for Visualization Retrieval [en_US]
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Interaction and Accessibility
dc.description.volume: 42
dc.description.number: 3
dc.identifier.doi: 10.1111/cgf.14832
dc.identifier.pages: 311-322
dc.identifier.pages: 12 pages
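
The abstract above describes zero-shot classification of visual attributes with CLIP in the Annotation stage and multi-modal (text prompt plus bitmap chart) encoding in the Retrieval stage. The following is a minimal, hypothetical sketch of how those two steps could look with the open-source `clip` package; the model variant ("ViT-B/32"), the candidate label list, the example intent prompt, the file name `query_chart.png`, and the simple embedding-averaging fusion are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two stages described in the abstract, using the
# open-source `clip` package (https://github.com/openai/CLIP). Model choice,
# labels, and the fusion strategy are assumptions for illustration only.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# --- Annotation stage: zero-shot classification of a visual attribute -------
chart_types = ["bar chart", "line chart", "scatter plot", "pie chart", "heatmap"]
labels = clip.tokenize([f"a {t}" for t in chart_types]).to(device)
query = preprocess(Image.open("query_chart.png")).unsqueeze(0).to(device)

with torch.no_grad():
    img_emb = model.encode_image(query)
    lbl_emb = model.encode_text(labels)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    lbl_emb = lbl_emb / lbl_emb.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_emb @ lbl_emb.T).softmax(dim=-1)
print("predicted chart type:", chart_types[probs.argmax().item()])

# --- Retrieval stage: fuse the query chart with a user-intent text prompt ---
intent = clip.tokenize(["minimalist design style"]).to(device)
with torch.no_grad():
    txt_emb = model.encode_text(intent)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    # Naive fusion: average the image and text embeddings, then rank the
    # corpus by cosine similarity to the joint query vector.
    joint = (img_emb + txt_emb) / 2
    joint = joint / joint.norm(dim=-1, keepdim=True)

# corpus_embs stands in for precomputed, L2-normalized CLIP image embeddings
# of the chart corpus (ViT-B/32 produces 512-dimensional vectors).
corpus_embs = torch.randn(1000, 512, device=device)
corpus_embs = corpus_embs / corpus_embs.norm(dim=-1, keepdim=True)
scores = (joint.to(corpus_embs.dtype) @ corpus_embs.T).squeeze(0)
print("top-10 corpus indices:", scores.topk(10).indices.tolist())
```

The averaging-based fusion is only one plausible way to combine the two modalities; the key point the sketch illustrates is that CLIP places text and image embeddings in a shared space, so a single joint vector can rank the corpus.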



This item appears in the following Collection(s)

  • 42-Issue 3
    EuroVis 2023 - Conference Proceedings