Temporal Coherence Predictor for Time Varying Volume Data Based on Perceptual Functions
Abstract
This paper introduces an empirical, perceptually based method that exploits the temporal coherence between consecutive frames to reduce CPU-GPU traffic during real-time visualization of time-varying volume data. In this scheme, a multi-threaded CPU mechanism simulates GPU pre-rendering functions to characterize the local behaviour of the volume. These functions exploit the temporal coherence in the data to avoid sending a complete dataset to the GPU for every frame. The predictive computations are designed to be simple enough to run in parallel on the CPU while improving the overall performance of GPU rendering. Our tests provide evidence that we can considerably reduce the texture size transferred at each frame without losing visual quality, while maintaining performance compared to sending entire frames to the GPU. The proposed framework is designed to scale to client/server network-based implementations for multi-user systems.
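To illustrate the general idea of exploiting temporal coherence to cut per-frame transfer, the sketch below flags only those bricks of a volume that changed noticeably between consecutive frames, so a renderer could upload just the dirty bricks instead of the full texture. This is a minimal illustration, not the paper's method: the brick size, the mean-absolute-difference metric, and the threshold are illustrative assumptions, whereas the paper uses perceptual functions evaluated on the CPU.

```python
import numpy as np

def changed_bricks(prev, curr, brick=8, threshold=0.01):
    """Return the origins of bricks whose content changed enough
    between two consecutive frames to warrant re-uploading.

    Illustrative stand-in for a predictor: mean absolute difference
    per brick, compared against a fixed threshold (both are
    assumptions, not the paper's perceptual functions)."""
    assert prev.shape == curr.shape
    dz, dy, dx = prev.shape
    dirty = []
    for z in range(0, dz, brick):
        for y in range(0, dy, brick):
            for x in range(0, dx, brick):
                a = prev[z:z+brick, y:y+brick, x:x+brick].astype(np.float32)
                b = curr[z:z+brick, y:y+brick, x:x+brick].astype(np.float32)
                # Simple change metric; a perceptual function would go here.
                if np.abs(a - b).mean() > threshold:
                    dirty.append((z, y, x))
    return dirty
```

In a GPU renderer, the returned brick origins would drive partial texture updates (e.g. sub-region uploads) rather than a full per-frame transfer, which is the traffic reduction the abstract describes.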
BibTeX
@inproceedings{10.2312:vmv.20151255,
booktitle = {Vision, Modeling \& Visualization},
editor = {David Bommes and Tobias Ritschel and Thomas Schultz},
title = {{Temporal Coherence Predictor for Time Varying Volume Data Based on Perceptual Functions}},
author = {Noonan, Tom and Campoalegre, Lazaro and Dingliana, John},
year = {2015},
publisher = {The Eurographics Association},
ISBN = {978-3-905674-95-8},
DOI = {10.2312/vmv.20151255}
}