Massively parallel software rendering for visualizing large-scale data sets

Kwan-Liu Ma, S. Parker
2001 IEEE Computer Graphics and Applications  
For some time, researchers have done production visualization almost exclusively on high-end graphics workstations. They routinely archived and analyzed the outputs of simulations running on massively parallel supercomputers. Generally, a feature extraction step and a geometric modeling step that significantly reduced the data's size preceded the actual rendering. Researchers also used this procedure to visualize large-scale data produced by high-resolution sensors and scanners. While the graphics workstation allowed interactive visualization of the extracted data, looking only at a lower-resolution, polygonal representation of the data defeated the original purpose of performing the high-resolution simulation or scanning.

To look at the data more closely, researchers could run batch-mode software rendering at the highest possible resolution on a parallel supercomputer, using the rendering parameters suggested by the interactive viewing. However, they frequently didn't, for several reasons. First, a supercomputer is a precious resource: scientists wanted to reserve their limited computer time for running simulations rather than visualization calculations. Second, many parallel-rendering algorithms don't scale well, so the large number of processors in a massively parallel computer couldn't be fully and efficiently used. Third, most parallel-rendering algorithms were developed to satisfy research curiosity rather than for production use. As a result, large and complex data couldn't be rendered cost-effectively.

The current technology trend of cheaper, more powerful computers prompted us to revisit the option of parallel software rendering (and, in some cases, discarding hardware rendering entirely). Most graphics cards are optimized mainly for polygon rendering and texture mapping, while scientists can now model physical phenomena with greater accuracy and complexity; analyzing the resulting data demands advanced rendering features that commercial graphics workstations generally didn't offer. In addition, the short lifespan, limited resolution, and high cost of graphics workstations constrain what scientists can do. Meanwhile, the decreasing cost and rapidly increasing performance of commodity PC and network technologies have let us build powerful PC clusters for large-scale computing. Supercomputing is no longer a shared resource.
Scientists can build cluster systems dedicated to their own research, and they can build such systems incrementally to solve problems of increasing complexity and scale. More importantly, they can now afford to use the same machine for visualization calculations, whether for runtime visual monitoring of a simulation or for postprocessing visualization. Parallel software rendering is therefore becoming a viable solution for visualizing large-scale data sets.

In this tutorial, we describe two highly scalable, parallel software volume-rendering algorithms. We designed one for distributed-memory parallel architectures to render unstructured-grid volume data,[1] and the other for shared-memory parallel architectures to directly render isosurfaces.[2] Through the discussion of these two algorithms, we address the most relevant issues in using massively parallel computers to render large-scale volumetric data. Our focus is direct rendering of volumetric data, so we don't consider other techniques for treating the large-scale data visualization problem, such as feature extraction, multiresolution schemes, and compression.
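To give a flavor of the shared-memory approach, here is a minimal sketch (not the authors' implementation) of scanline-parallel ray casting of an isosurface: each worker traces all rays in one block of scanlines, marching each ray through a scalar field until it crosses the isovalue. The analytic `field` function, the pixel grid, and the `trace_row` helper are all illustrative assumptions; a real renderer would sample a stored volume and use far better acceleration and shading.

```python
# Hypothetical sketch of shared-memory parallel isosurface ray casting.
# Workers each render whole scanlines, so no synchronization is needed
# on the output image beyond collecting the rows.
from concurrent.futures import ThreadPoolExecutor

W, H = 64, 64
ISOVALUE = 1.0  # render the isosurface x^2 + y^2 + z^2 = 1 (unit sphere)

def field(x, y, z):
    # Illustrative analytic scalar field; a real system samples a volume.
    return x * x + y * y + z * z

def trace_row(j):
    """Cast one orthographic ray per pixel in scanline j.

    March each ray in +z until the field crosses ISOVALUE; shade the hit
    by depth (nearer hits are brighter). Return the scanline as a list.
    """
    row = []
    for i in range(W):
        # Map pixel (i, j) into the [-2, 2]^2 viewing window.
        x = (i + 0.5) / W * 4.0 - 2.0
        y = (j + 0.5) / H * 4.0 - 2.0
        z, dz, hit = -2.0, 0.01, 0.0
        prev = field(x, y, z) - ISOVALUE
        while z < 2.0:
            z += dz
            cur = field(x, y, z) - ISOVALUE
            if prev > 0.0 >= cur or prev < 0.0 <= cur:  # sign change: crossing
                hit = 1.0 - (z + 2.0) / 4.0             # simple depth shading
                break
            prev = cur
        row.append(hit)
    return row

# Scanlines are independent, so a thread pool can map over them directly.
with ThreadPoolExecutor(max_workers=4) as pool:
    image = list(pool.map(trace_row, range(H)))
```

Because scanlines are independent, this image-space partitioning scales with the number of workers as long as rays are evenly distributed; rays that hit the surface early cost less than rays that march the full depth, which is one source of the load imbalance that scalable parallel renderers must address.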
doi:10.1109/38.933526