dc.description.abstract | This paper proposes a new preprocessing method for interactive rendering of complex polygonal virtual environments. The approach divides the space that the observer can reach into many rectangular viewpoint regions. For each region, an outer rectangular volume (ORV) is established to enclose it. By adaptively partitioning the boundary of the ORV together with the viewpoint region, all rays originating from the viewpoint region are divided into beams whose number of potentially visible polygons is below a preset threshold. If a resultant beam has reached the minimum size yet still intersects many potentially visible polygons, it is simplified to a fixed number of rays, and the averaged color of the hit polygons is recorded. For the other beams, their potentially visible sets (PVSs) of polygons are stored individually. During an interactive walkthrough, the visual information relevant to the current viewpoint is retrieved from storage. View-volume clipping, visibility culling, and detail simplification are efficiently supported by the stored data, and the rendering time is independent of the scene complexity. | en_US |