dc.contributor.author | Gao, Jinzhu | en_US |
dc.contributor.author | Liu, Huadong | en_US |
dc.contributor.author | Huang, Jian | en_US |
dc.contributor.author | Beck, Micah | en_US |
dc.contributor.author | Wu, Qishi | en_US |
dc.contributor.author | Moore, Terry | en_US |
dc.contributor.author | Kohl, James | en_US |
dc.contributor.editor | Jean M. Favre and Kwan-Liu Ma | en_US |
dc.date.accessioned | 2014-01-26T16:43:25Z | |
dc.date.available | 2014-01-26T16:43:25Z | |
dc.date.issued | 2008 | en_US |
dc.identifier.isbn | 978-3-905674-04-0 | en_US |
dc.identifier.issn | 1727-348X | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/EGPGV/EGPGV08/065-072 | en_US |
dc.description.abstract | It is often desirable or necessary to perform scientific visualization in geographically remote locations, away from the centralized data storage systems that hold massive amounts of scientific results. The larger such scientific datasets are, the less practical it becomes to move them to remote locations for collaborators. In such scenarios, efficient remote visualization solutions can be crucial. Yet the use of distributed or heterogeneous computing resources raises several challenges for large-scale data visualization. Algorithms must be robust and incorporate advanced load balancing and scheduling techniques. In this paper, we propose a time-critical remote visualization system that can be deployed over distributed and heterogeneous computing resources. We introduce an "importance" metric to measure the need for processing each data partition based on its degree of contribution to the final visual image. Factors contributing to this metric include specific application requirements, value distributions inside the data partition, and viewing parameters. We also incorporate "visibility" into the measurement so that empty or invisible blocks are not processed. Guided by the data blocks' importance values, our dynamic scheduling scheme determines the rendering priority of each visible block, so that more important blocks are rendered first. In time-critical scenarios, the scheduling algorithm also dynamically reduces the level of detail for less important regions so that visualization can be completed within a user-specified time limit at the highest possible image quality. This system enables interactive sharing of visualization results. To evaluate its performance, we present a case study using a 250 GB dataset on 170 distributed processors. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Categories and Subject Descriptors (according to ACM CCS): I.3.2 [Graphics Systems]: Distributed/network graphics; I.3.6 [Methodology and Techniques]: Graphics data structures and data types | en_US |
dc.title | Time-Critical Distributed Visualization with Fault Tolerance | en_US |
dc.description.seriesinformation | Eurographics Symposium on Parallel Graphics and Visualization | en_US |
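The abstract above outlines an importance-guided, time-budgeted scheduling loop: cull empty and invisible blocks, rank the remaining blocks by an importance metric, and coarsen the level of detail of less important blocks when the time budget is tight. The Python sketch below illustrates that control flow only and is not the paper's implementation; the Block fields, the multiplicative importance score, the four-level LOD cap, and the helper callables (render, estimate_cost_s) are illustrative assumptions.

import time
from dataclasses import dataclass

@dataclass
class Block:
    """A data partition with precomputed statistics (field names are illustrative)."""
    block_id: int
    visible: bool          # result of a visibility test against the current view
    empty: bool            # true if the block contains no values of interest
    value_weight: float    # contribution from the value distribution inside the block
    view_weight: float     # contribution from viewing parameters (e.g. distance to camera)
    app_weight: float      # contribution from application-specific requirements

def importance(block: Block) -> float:
    # Combined importance score; this weighting is a stand-in for the paper's metric.
    return block.app_weight * block.value_weight * block.view_weight

def schedule(blocks, time_budget_s, render, estimate_cost_s):
    """Render visible, non-empty blocks in order of decreasing importance.

    When the remaining budget cannot cover a block at full resolution,
    fall back to a coarser level of detail for the less important blocks.
    """
    candidates = [b for b in blocks if b.visible and not b.empty]
    candidates.sort(key=importance, reverse=True)

    deadline = time.monotonic() + time_budget_s
    for block in candidates:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # budget exhausted; the least important blocks are skipped
        lod = 0  # 0 = full resolution
        # Drop to coarser levels until the estimated cost fits the remaining time.
        while estimate_cost_s(block, lod) > remaining and lod < 3:
            lod += 1
        render(block, lod)

A caller would supply a render callback that performs the actual (possibly remote) rendering of one block at the chosen level of detail, and a cost estimator calibrated to the processor handling that block.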