"Three-dimensional range geometry compression via phase encoding," Appl. Opt. (2017)

[106] T. Bell, B. Vlahov, J.P. Allebach, and S. Zhang, "Three-dimensional range geometry compression via phase encoding," Appl. Opt., 56(33), 9285-9292, (2017); doi: 10.1364/AO.56.009285

Abstract

One of the state-of-the-art methods for three-dimensional (3D) range geometry compression is to encode 3D data within a regular 24-bit 2D color image. However, most existing methods use all three color channels solely to encode the 3D data, leaving no room to store other information (e.g., texture) within the same image. This paper presents a novel method which utilizes geometric constraints, inherent to the structured light 3D scanning device, to reduce the amount of data that needs to be stored within the output image. The proposed method thus requires only two color channels to represent the 3D data, leaving one channel free to store additional information (such as a texture image). Experimental results verify the overall robustness of the proposed method. For example, a compression ratio of 3038:1 versus the STL format can be achieved with a root-mean-square (RMS) error of 0.47% when the output image is compressed with JPEG 80%.
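The core encoding idea, depth mapped to a phase whose sine and cosine occupy two color channels while the third stays free for texture, can be sketched as below. This is a simplified illustration, not the paper's implementation: the fringe period, the 8-bit normalization, and the function names are assumptions, and the geometric-constraint-based unwrapping used to recover absolute depth is omitted.

```python
import numpy as np

def encode_depth_two_channels(z, texture, period=10.0):
    """Pack a depth map into two 8-bit channels via phase encoding.

    z       : (H, W) float array of depth values (hypothetical input, in mm).
    texture : (H, W) uint8 grayscale texture kept in the spare third channel.
    period  : fringe period in mm; a free parameter here, not the paper's value.
    """
    phase = 2.0 * np.pi * z / period                    # depth expressed as phase
    ch_sin = np.round(127.5 * (1.0 + np.sin(phase)))    # map [-1, 1] -> [0, 255]
    ch_cos = np.round(127.5 * (1.0 + np.cos(phase)))
    return np.stack([ch_sin, ch_cos, texture], axis=-1).astype(np.uint8)

def decode_wrapped_phase(rgb):
    """Recover the wrapped phase from the two encoded channels; unwrapping
    (e.g., using the scanner's geometric constraints, as in the paper) is
    still required to obtain absolute depth."""
    s = rgb[..., 0].astype(np.float64) / 127.5 - 1.0
    c = rgb[..., 1].astype(np.float64) / 127.5 - 1.0
    return np.arctan2(s, c)                             # wrapped to (-pi, pi]
```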

Technical Paper

"Method for large-range structured light system calibration," Appl. Opt., (2016)

[91] Y. An, T. Bell, B. Li, J. Xu, and S. Zhang, "Method for large-range structured light system calibration," Appl. Opt., 55(33), 9563-9572, 2016; doi: 10.1364/AO.55.009563

Structured light system calibration often requires a calibration target of a size similar to the field of view (FOV), which makes large-range structured light system calibration challenging since fabricating large calibration targets is difficult and expensive. This paper presents a large-range system calibration method that does not need a large calibration target. The proposed method includes two stages: 1) accurately calibrate the intrinsic parameters (i.e., focal lengths and principal points) at a near range where both the camera and projector are out of focus; and 2) calibrate the extrinsic parameters (translation and rotation) from camera to projector with the assistance of a low-accuracy, large-range 3D sensor (e.g., Microsoft Kinect). We have developed a large-scale 3D shape measurement system with an FOV of (1120 × 1900 × 1000) mm^3. Experiments demonstrate that our system can achieve measurement accuracy as high as 0.07 mm with a standard deviation of 0.80 mm when measuring a 304.8 mm diameter sphere. As a comparison, the Kinect V2 only achieved a mean error of 0.80 mm with a standard deviation of 3.41 mm over the same FOV.
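As a rough illustration of the two-stage idea (not the paper's actual procedure), the stages map onto standard OpenCV calls: intrinsic parameters from near-range target images, then extrinsic parameters from 3D-2D correspondences supplied by the coarse wide-range sensor. All input names below are hypothetical placeholders.

```python
import cv2
import numpy as np

def calibrate_two_stage(obj_pts, img_pts, image_size, world_pts, pix_pts):
    """Two-stage calibration sketch (hedged; inputs are placeholders).

    Stage 1: intrinsics from near-range images of a normal-sized target
             (obj_pts / img_pts: per-pose 3D board corners and detected pixels).
    Stage 2: extrinsics over the large range from sparse 3D points reported by
             a low-accuracy wide-range sensor (world_pts) and the pixels at
             which the calibrated device observes them (pix_pts).
    """
    # Stage 1: standard pinhole intrinsic calibration (focal lengths, principal point).
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)

    # Stage 2: pose (rotation + translation) from 3D-2D correspondences.
    _, rvec, tvec = cv2.solvePnP(np.asarray(world_pts, np.float32),
                                 np.asarray(pix_pts, np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)            # 3x3 rotation matrix
    return K, dist, np.hstack([R, tvec])  # intrinsics, distortion, 3x4 [R | t]
```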

"High dynamic range real-time 3D shape measurement," Opt. Express, (2016)

[80] C. Jiang, T. Bell, and S. Zhang, "High dynamic range real-time 3D shape measurement," Opt. Express, 24(7), 7337-7346, 2016 (Cover feature); doi: 10.1364/OE.24.007337

Abstract

This paper proposes a method that can measure high-contrast surfaces in real time without changing camera exposures. We propose to use 180-degree phase-shifted (or inverted) fringe patterns to complement the regular fringe patterns. If not all of the regular patterns are saturated, the inverted fringe patterns are used in lieu of the saturated originals for phase retrieval; if all of the regular fringe patterns are saturated, both the original and inverted fringe patterns are used together for phase computation to reduce the phase error. Experimental results demonstrate that three-dimensional (3D) shape measurement can be achieved in real time by adopting the proposed high dynamic range method.
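One simplified way to realize this complementary use of regular and inverted fringes is a per-pixel least-squares phase fit that simply discards saturated samples; the sketch below illustrates that idea and is not the paper's exact algorithm. The saturation threshold and the unvectorized loops are assumptions made for clarity.

```python
import numpy as np

def hdr_wrapped_phase(regular, inverted, sat_level=250):
    """Wrapped-phase retrieval that tolerates saturated fringe samples.

    regular, inverted: (N, H, W) arrays of N equally phase-shifted fringe
    images and their 180-degree phase-inverted counterparts. sat_level is a
    hypothetical 8-bit saturation threshold.
    """
    N, H, W = regular.shape
    theta = 2.0 * np.pi * np.arange(N) / N

    # Linear model I = a + b*cos(theta) + c*sin(theta); the inverted patterns
    # flip the sign of the modulated terms, and phi = atan2(-c, b).
    rows_reg = np.stack([np.ones(N), np.cos(theta), np.sin(theta)], axis=1)
    rows_inv = np.stack([np.ones(N), -np.cos(theta), -np.sin(theta)], axis=1)

    phase = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            A, y = [], []
            for k in range(N):
                if regular[k, i, j] < sat_level:    # keep unsaturated regular sample
                    A.append(rows_reg[k])
                    y.append(regular[k, i, j])
                if inverted[k, i, j] < sat_level:   # keep unsaturated inverted sample
                    A.append(rows_inv[k])
                    y.append(inverted[k, i, j])
            if len(y) >= 3:                         # need 3 samples to solve (a, b, c)
                a, b, c = np.linalg.lstsq(np.asarray(A), np.asarray(y), rcond=None)[0]
                phase[i, j] = np.arctan2(-c, b)     # wrapped phase in (-pi, pi]
    return phase
```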

"Method for out-of-focus camera calibration," Appl. Opt., (2016)

[79] T. Bell, J. Xu, and S. Zhang, "Method for out-of-focus camera calibration," Appl. Opt., 55(9), 2346-2352, 2016; doi: 10.1364/AO.55.002346

Abstract

State-of-the-art camera calibration methods assume that the camera is at least nearly in focus and thus fail if the camera is substantially defocused. This paper presents a method which enables the accurate calibration of an out-of-focus camera. Specifically, the proposed method uses a digital display (e.g., a liquid crystal display monitor) to generate fringe patterns that encode feature points into the carrier phase; these feature points can be accurately recovered even if the fringe patterns are substantially blurred (i.e., the camera is substantially defocused). Experiments demonstrated that the proposed method can accurately calibrate a camera regardless of the amount of defocus: the estimated focal length differs by only approximately 0.2% between the in-focus and substantially defocused cases.
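The reason blur is tolerable is that defocus attenuates the fringe amplitude but, ideally, does not shift the phase of the underlying sinusoid, so phase-encoded feature coordinates survive. A minimal sketch of standard N-step wrapped-phase recovery is given below; how the paper maps recovered phase values back to metric feature locations on the monitor is omitted, and the function name is an assumption.

```python
import numpy as np

def wrapped_phase(images):
    """Least-squares wrapped phase from N equally phase-shifted fringe images.

    images: (N, H, W) array captured while the display shows sinusoids shifted
    by 2*pi*k/N (N >= 3). Defocus blur lowers the modulation but leaves the
    phase of the fundamental sinusoid intact, so the result stays usable even
    for a substantially out-of-focus camera.
    """
    N = images.shape[0]
    delta = 2.0 * np.pi * np.arange(N).reshape(-1, 1, 1) / N
    num = np.sum(images * np.sin(delta), axis=0)
    den = np.sum(images * np.cos(delta), axis=0)
    return np.arctan2(-num, den)        # wrapped phase in (-pi, pi]
```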

Technical Paper

"Multiwavelength depth encoding method for 3D range geometry compression," Appl. Opt., (2015)

[78] T. Bell and S. Zhang, "Multiwavelength depth encoding method for 3D range geometry compression," Appl. Opt., 54(36), 10684-10691, 2015; doi: 10.1364/AO.54.010684

Abstract

This paper presents a novel method for representing three-dimensional (3D) range data within regular two-dimensional (2D) images using multiwavelength encoding. These 2D images can then be further compressed using traditional lossless (e.g., PNG) or lossy (e.g., JPEG) image compression techniques. Current 3D range data compression methods require significant filtering to reduce lossy compression artifacts. The nature of the proposed encoding, however, offers a significant level of robustness to such artifacts brought about by high levels of JPEG compression. This enables extremely high compression ratios while maintaining a very low reconstruction error percentage, with little to no filtering required to remove compression artifacts. For example, when encoding 3D geometry with the proposed method and storing the resulting 2D image with MATLAB R2014a JPEG 80 image compression, compression ratios of approximately 935:1 versus the OBJ format can be achieved at an error rate of approximately 0.027% without any filtering.
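A hypothetical decoder illustrating the multiwavelength idea is sketched below: two channels carry the sine and cosine of a fine-period phase, and a third, coarse channel resolves the fringe order. The channel layout, period, and depth range here are assumptions and do not reproduce the paper's exact encoding.

```python
import numpy as np

def decode_multiwavelength(rgb, fine_period=10.0, z_range=1000.0):
    """Decode a hypothetical two-wavelength depth image.

    Assumes channels 0/1 hold sin/cos of a fine-period phase and channel 2
    holds depth normalized over the full range (the coarse 'wavelength');
    fine_period and z_range are illustrative values in mm.
    """
    s = rgb[..., 0].astype(np.float64) / 127.5 - 1.0
    c = rgb[..., 1].astype(np.float64) / 127.5 - 1.0
    z_coarse = rgb[..., 2].astype(np.float64) / 255.0 * z_range   # rough depth

    frac = np.arctan2(s, c) / (2.0 * np.pi)          # fine depth, fraction of a period
    order = np.round(z_coarse / fine_period - frac)  # fringe order from coarse channel
    return (order + frac) * fine_period              # absolute depth at fine precision
```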

Technical Paper

"Towards superfast three-dimensional optical metrology with digital micromirror device (DMD) platforms," Opt. Eng., (2015)

[67] T. Bell* and S. Zhang, "Towards superfast three-dimensional optical metrology with digital micromirror device (DMD) platforms," Opt. Eng., 53(11), 112206, 2014; doi: 10.1117/1.OE.53.11.112206

Decade-long research efforts toward superfast three-dimensional (3-D) shape measurement leveraging the digital micromirror device (DMD) platforms are summarized. Specifically, we will present the following technologies: (1) high-resolution real-time 3-D shape measurement technology that achieves 30 Hz simultaneous 3-D shape acquisition, reconstruction, and display with more than 300,000 points per frame; (2) superfast 3-D optical metrology technology that achieves 3-D measurement at a rate of tens of kilohertz utilizing the binary defocusing method we invented; and (3) the improvement of the binary defocusing technology for superfast and high-accuracy 3-D optical metrology using the DMD platforms. Both principles and experimental results are presented.
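The binary defocusing method mentioned in items (2) and (3) can be illustrated with a short numerical sketch: a 1-bit square-wave fringe, once blurred (a Gaussian stands in here for projector defocus), becomes quasi-sinusoidal, which is what allows patterns to be switched at the DMD's full binary frame rate. The pattern width, pitch, and blur amount below are illustrative assumptions.

```python
import numpy as np

def binary_defocus_demo(width=512, period=32, sigma=6.0):
    """Emulate binary defocusing on a 1D fringe cross-section.

    width  : number of pixels in the pattern (illustrative).
    period : fringe pitch in pixels (illustrative).
    sigma  : Gaussian blur width emulating projector defocus (illustrative).
    Returns the 1-bit pattern and its blurred, quasi-sinusoidal version.
    """
    x = np.arange(width)
    binary = (np.sin(2 * np.pi * x / period) >= 0).astype(float)  # 1-bit square fringe

    # Normalized Gaussian defocus kernel, applied with wrap-around padding so
    # the output keeps the same length as the input.
    t = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    padded = np.pad(binary, len(t) // 2, mode='wrap')
    blurred = np.convolve(padded, kernel, mode='valid')           # quasi-sinusoid
    return binary, blurred
```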