dc.description.abstract | Hierarchical culling is a key acceleration technique for efficiently handling massive models in applications such as ray tracing and collision detection. To support such hierarchical culling, bounding volume hierarchies (BVHs) combined with meshes are widely used. However, BVHs may require a very large amount of memory, which can negate the benefits of using them. To address this problem, we present a novel hierarchical-culling-oriented compact mesh representation, HCCMesh, which tightly integrates a mesh and a BVH. As an in-core representation of the HCCMesh, we propose the i-HCCMesh, which provides efficient random hierarchical traversal and high culling efficiency with a small runtime decompression overhead. To further reduce storage requirements, the in-core representation is compressed into our out-of-core representation, the o-HCCMesh, using a simple dictionary-based compression method. At runtime, o-HCCMeshes are fetched from an external drive and decompressed into i-HCCMeshes stored in main memory. The i-HCCMesh and o-HCCMesh achieve compression ratios of 3.6:1 and 10.4:1 on average, respectively, compared to a naively compressed (e.g., quantized) mesh and BVH representation. We test the HCCMesh representations with ray tracing, collision detection, photon mapping, and non-photorealistic rendering. Because of reduced data access time, a smaller working set size, and low runtime decompression overhead, we can handle models ten times larger on commodity hardware without expensive disk I/O thrashing. By avoiding disk I/O thrashing with our representation, we improve runtime performance by up to two orders of magnitude over a naively compressed representation. | en_US |