dc.description.abstract | The performance of commodity computer components continues to increase dramatically. Processors, internal I/O buses, graphics cards, and network adapters have all exhibited substantial improvements without significant increases in cost. Because of the resulting improvement in the price/performance ratio of computers built from such components, clusters of commodity machines have become commonplace in today's computing world and are steadily displacing specialized, high-end, shared-memory machines for many graphics and visualization workloads. Acceptance, and more importantly utilization, of commodity clusters has been hampered, however, by the significant challenges introduced when switching from a shared-memory architecture to a distributed-memory one. Such challenges range from redesigning applications for distributed computing to gathering pixels from multiple sources and, finally, synchronizing multiple video outputs when driving large displays. In addition to these impediments for the application developer, there are many mundane problems that arise when working with clusters, including installation and general system administration. This paper details these challenges and the many solutions that have been developed in recent years. As the nature of commodity hardware components suggests, the solutions to these research challenges are largely software-based, and include middleware layers for distributing the graphics workload across the cluster as well as for aggregating the final results for display to the user. At the forefront of this discussion is IBM's Deep View project, whose goal has been the design and implementation of a scalable, affordable, high-performance visualization system for parallel rendering. Over the past six years, Deep View has undergone numerous redesigns to make it as efficient as possible. We highlight the issues involved in this process, up to and including the current incarnation of Deep View, as well as what is on the horizon for cluster-based rendering. | en_US