Efficiently combining positions and normals for precise 3D geometry

Diego Nehab, Szymon Rusinkiewicz, James Davis, Ravi Ramamoorthi
ACM SIGGRAPH 2005 Papers (SIGGRAPH '05), 2005
Figure 1: Rendering comparisons. (a) Rendering of a 3D scanned range image; (b) the same scanned geometry, augmented with a measured normal map (from photometric stereo); (c) our hybrid surface reconstruction, which combines both position and normal constraints; (d) photograph. Renderings in this paper do not use color information, in order to focus on geometric aspects. Note how our method eliminates noise from the range image while introducing real detail. The surface normals are of the same quality as or better than those from photometric stereo, while most of the low-frequency bias has been eliminated.

Abstract

Range scanning, manual 3D editing, and other modeling approaches can provide information about the geometry of surfaces in the form of either 3D positions (e.g., triangle meshes or range images) or orientations (normal maps or bump maps). We present an algorithm that combines these two kinds of estimates to produce a new surface that approximates both. Our formulation is linear, allowing it to operate efficiently on the complex meshes commonly used in graphics. It also treats high- and low-frequency components separately, allowing it to optimally combine the outputs of data sources, such as stereo triangulation and photometric stereo, that have different error-vs.-frequency characteristics. We demonstrate the ability of our technique both to recover high-frequency details and to avoid low-frequency bias, producing surfaces that are more widely applicable than position or orientation data alone.
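The core idea of combining position and normal constraints in a single linear least-squares problem can be illustrated in one dimension. The sketch below is an assumption-laden toy version, not the paper's actual mesh formulation: a noisy height profile (standing in for range-scan positions) is fused with accurate slope measurements (standing in for photometric-stereo normals) by minimizing ||z - p||² + λ||Dz - g||², where D is a finite-difference operator. The positions anchor the low frequencies while the slopes contribute the high-frequency detail, mirroring the error-vs.-frequency trade-off described above.

```python
import numpy as np

# Toy 1D analogue of combining positions and normals (illustrative only;
# the paper solves an analogous linear system on a triangle mesh).
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 2.0 * np.pi, n)
truth = np.sin(x) + 0.1 * np.sin(10.0 * x)    # ground-truth height profile
p = truth + rng.normal(0.0, 0.05, n)          # noisy "range scan" positions
h = x[1] - x[0]
g = np.diff(truth) / h                        # accurate per-interval slopes
                                              # (stand-in for measured normals)

# Forward-difference operator D of shape (n-1, n): (Dz)_i = (z_{i+1} - z_i)/h
D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h

# Stack both constraint sets and solve the joint linear least-squares problem
lam = 1.0                                     # relative weight of the normal term
A = np.vstack([np.eye(n), np.sqrt(lam) * D])
b = np.concatenate([p, np.sqrt(lam) * g])
z, *_ = np.linalg.lstsq(A, b, rcond=None)

err_raw = np.linalg.norm(p - truth)           # error of positions alone
err_fused = np.linalg.norm(z - truth)         # error of the fused surface
print(err_fused, err_raw)
```

In this toy setting the fused profile tracks the ground truth far more closely than the noisy positions alone, since the slope constraints suppress high-frequency noise while the position constraints prevent the integrated slopes from drifting at low frequencies.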
doi:10.1145/1186822.1073226