Improved 3D measurement with a novel preprocessing method in DFP

Robotics and Biomimetics, Nov 2017

Shadow and background are two common factors in digital fringe projection that lead to ambiguity in three-dimensional measurement and therefore need to be considered carefully. Preprocessing is often needed to segment the object from invalid points. Existing segmentation approaches based on modulation normally perform well against a pure dark background but lose accuracy when the background is white or complex. In this paper, an accurate shadow and background removal technique is proposed, which segments the shadow with a threshold obtained from the modulation histogram and the background with a threshold obtained from the intensity histogram. Experiments are designed and conducted to verify the effectiveness and reliability of the proposed method.


Yi Xiao and You-Fu Li
Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong

Keywords: Modulation histogram; Coding map; Segmentation; Preprocessing; Binary defocusing; Digital fringe projection

Background

Digital fringe projection (DFP) techniques are widely employed in flexible, non-contact and high-speed 3D shape measurement [1]. In a DFP system, a sequence of phase-shifted sinusoidal fringes is projected onto the object by a projector; the fringes are distorted by the object surface and captured by a camera. The phase map can be retrieved from the deformed fringes, and the object height information is calculated from the phase map in a calibrated DFP system [2]. However, shadow and background are inevitable, since the projector and camera are arranged at different viewpoints. Invalid points such as shadow and background should be identified and removed from the object.

Researchers have made great efforts to remedy the influence of invalid points, including shadow and background. Skydan et al. [3] utilized multiple projectors to probe the object from different viewpoints and achieve shadow-free reconstruction; however, the increased hardware cost prevents this method from being commonly used. Zhang [4] proposed to apply a Gaussian filter to the fringes to remove random noise and to identify invalid points by the monotonicity of the unwrapped phase; however, the Gaussian filter introduces errors into object details. Chen et al. [5] applied a threshold to the least-squares fitting errors in temporal phase unwrapping for invalid-point detection; however, this method is vulnerable to noise [6]. Huang and Asundi [6] proposed a compact framework combining modulation, RMS error and monotonicity for shadow and background removal and error detection. Intensity modulation is very effective in measuring how informative each pixel is and can be used to detect background and shadow; however, manually adjusting the threshold is time-consuming, and in practice the threshold selection depends on measurement conditions such as environmental illumination and object surface characteristics. Lu et al. [7] proposed a technique that removes shadow points by mapping the 3D results into projector coordinates, so that modulation is not needed; however, this method can only detect shadow caused by the DFP system [8]. Otsu's method [9] is widely utilized for thresholding in image segmentation and is automatic and efficient.
However, it fails to provide an optimal threshold when the number of classes to be separated increases or when the intensity histogram is close to a unimodal distribution [10]. Ng [10] improved this technique with a weighting factor that considers the occurrence probability of the threshold point. Both Otsu's method and Ng's method aim at image segmentation based on the intensity histogram. The literature [8] utilized the automatic thresholding method on the modulation histogram for object detection. However, that method can only deal with a dark background of low modulation: the background and shadow have similarly low modulation while the object has a clearly higher modulation level, so a single threshold suffices to segment the object. When the background is a white board, or is complex with a higher or similar modulation level, it is difficult to segment the background from the object. In this situation there are three classes in the modulation map, and two thresholds are needed to separate the object from the background and shadow, as shown in Fig. 1. The method in [8] cannot deal well with this situation.

In this paper, we apply a multi-thresholding technique to the modulation histogram and propose a preprocessing method that detects the valid points of the object by first segmenting the shadow using one threshold from the modulation histogram. Second, we project one more picture onto the object and reference plane, calculate the intensity difference of the captured images, and analyze the histogram of the difference map for background detection. We call this additional picture the coding map.

The rest of this paper is organized as follows. "Related work" introduces the related principles and existing methods. "Methods" details how to implement the proposed object segmentation technique. "Experiments and results" presents and compares segmentation results obtained with our method and with the expanded conventional method, together with the 3D shape reconstruction result. "Conclusion" summarizes the paper.

Related work

N-step phase shifting and modulation

Phase-shifting algorithms are widely utilized in stationary object measurement due to their high accuracy and flexibility [11]. They carry out point-by-point measurement and calculate wrapped phase values from −π to π. For the N-step phase-shifting method, sinusoidal fringes with the following intensity distribution are often used [4],

I_n(x, y) = I_a + I_m \cos\left( \phi(x, y) + \frac{2\pi(n - 1)}{N} \right)    (1)

where n is the phase-shifting index and N is the total number of phase-shifting steps. I_n is the intensity map of the nth sinusoidal fringe, and I_a and I_m are the average intensity and the modulation intensity, respectively. The wrapped phase \phi_w can be calculated as [6],

\phi_w = -\tan^{-1}\left[ \frac{\sum_{n=0}^{N-1} I_n \sin(2n\pi/N)}{\sum_{n=0}^{N-1} I_n \cos(2n\pi/N)} \right]    (2)

The modulation M is defined as,

M = \frac{2}{N} \sqrt{ \left( \sum_{n=0}^{N-1} I_n \sin(2n\pi/N) \right)^2 + \left( \sum_{n=0}^{N-1} I_n \cos(2n\pi/N) \right)^2 }    (3)

It shows how much useful information is contained in each pixel and is usually selected as the reliability map to guide phase unwrapping and object segmentation [12].
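To make Eqs. (1)–(3) concrete, the following is a minimal NumPy sketch of the wrapped-phase and modulation computation (our own illustration, not code from the paper; it assumes the N captured fringe images are stacked along the first axis of one array):

```python
import numpy as np

def phase_and_modulation(frames):
    """Wrapped phase (Eq. (2)) and modulation (Eq. (3)) from N
    phase-shifted fringe images.

    frames: array of shape (N, H, W) holding the captured intensity
    maps I_n, n = 0, ..., N-1.
    """
    N = frames.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    s = np.sum(frames * np.sin(2 * np.pi * n / N), axis=0)
    c = np.sum(frames * np.cos(2 * np.pi * n / N), axis=0)
    # arctan2 resolves the quadrant, so the wrapped phase covers (-pi, pi].
    phase_wrapped = -np.arctan2(s, c)
    modulation = (2.0 / N) * np.sqrt(s**2 + c**2)
    return phase_wrapped, modulation
```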
If a proper threshold t is found, the object can be identified from the background, shadow and less informative pixels. However, manually adjusting the modulation threshold is tedious and unstable, since the modulation varies with measuring conditions such as incoherent light, the reflectance of object and background, and the occlusion caused by object step height.

Existing methods of threshold selection

Otsu's method is commonly utilized for quick segmentation of object and background based on image intensity. For a given image, distribute the gray levels into L bins ranging from 1 to L; let k_i represent the total number of pixels with gray level i and K the total number of pixels of the given image, K = k_1 + k_2 + ... + k_L. The occurrence probability of gray level i is calculated as,

p_i = \frac{k_i}{K}, \quad p_i \ge 0, \quad \sum_{i=1}^{L} p_i = 1    (4)

When a single-value threshold is applied, the pixels of the given image are divided into two classes (typically the object, and the background with shadow): class C_0 includes the pixels with levels 1, 2, ..., t, and class C_1 includes the pixels with levels t+1, t+2, ..., L, where t is the threshold to be determined. The occurrence probability of each class can be calculated as,

\omega_0 = \Pr(C_0) = \sum_{i=1}^{t} p_i = \omega(t)    (5)

\omega_1 = \Pr(C_1) = \sum_{i=t+1}^{L} p_i = 1 - \omega(t)    (6)

and the class mean levels are,

\mu_0 = \sum_{i=1}^{t} i \, p_i / \omega_0 = \mu(t)/\omega(t)    (7)

\mu_1 = \sum_{i=t+1}^{L} i \, p_i / \omega_1 = \frac{\mu_\Gamma - \mu(t)}{1 - \omega(t)}    (8)

where \omega(t) and \mu(t) are the zeroth-order and first-order cumulative moments of the histogram up to the tth level, respectively. The total average gray level of the whole image is calculated as,

\mu_\Gamma = \sum_{i=1}^{L} i \, p_i    (9)

For any selection of t, it is easily verified that

\omega_0 \mu_0 + \omega_1 \mu_1 = \mu_\Gamma    (10)

\omega_0 + \omega_1 = 1    (11)

According to the discriminant criterion analysis [9], Otsu showed that the optimal threshold t^* can be calculated by maximizing the between-class variance,

t^* = \operatorname{Arg\,Max} \; \sigma_B^2(t)    (12)

where the between-class variance \sigma_B^2 is defined as,

\sigma_B^2 = \omega_0 (\mu_0 - \mu_\Gamma)^2 + \omega_1 (\mu_1 - \mu_\Gamma)^2    (13)

The optimal threshold t^* is often calculated by an equivalent but simpler equation [13],

t^* = \operatorname{Arg\,Max} \left\{ \omega_0 \mu_0^2 + \omega_1 \mu_1^2 \right\}    (14)

Otsu's method works well on histograms with a bimodal distribution, but it is not robust for histograms that are unimodal or close to unimodal [10]. Ng [10] developed a valley-emphasis method to improve Otsu's method: by adding a weighting factor, the threshold is chosen by considering two elements, a small occurrence probability and a large between-class variance. The threshold of Ng's method is calculated as,

t_v^* = \operatorname{Arg\,Max} \left\{ (1 - p_t) \, \sigma_B^2(t) \right\}    (15)

The above two methods for automatic threshold selection are intended for image segmentation based on the gray-level histogram. The literature [8] utilizes them on the modulation histogram for object segmentation. However, in that work the background is dark, so invalid points in shadow and background have a low modulation level while the object has a higher modulation level, and only one threshold is needed to segment the object. In Fig. 1, Fig. 1a shows a captured fringe on the object with a dark background, Fig. 1b shows the modulation map of the captured fringes, and Fig. 1c shows the histogram of the modulation map. The modulation histogram contains two classes, and it is easy to find the threshold t1 that separates the valid points from the invalid points. In practice, however, the modulation histogram is not necessarily in two classes, for example when a white board is used as the background for system calibration, as shown in Fig. 1d. Figure 1e shows the modulation map of Fig. 1d, and Fig. 1f shows the histogram of the modulation map. When the background is a white board, the modulation level of the background is high, and the modulation histogram in Fig. 1f falls into three categories: the background has a middle-to-high modulation level, the object a medium level, and the shadow a low level.
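For illustration, a compact sketch of single-threshold selection using the equivalent criterion of Eq. (14), with Ng's valley-emphasis weighting of Eq. (15) as an option (function name and interface are hypothetical, not from the paper):

```python
import numpy as np

def otsu_threshold(hist, valley_emphasis=False):
    """Single-threshold selection on a histogram with L bins.

    hist: 1-D array of bin counts. Returns the bin index t* that
    maximizes omega0*mu0^2 + omega1*mu1^2 (Eq. (14)), optionally
    weighted by (1 - p_t) as in Ng's valley emphasis (Eq. (15)).
    """
    p = hist.astype(float) / hist.sum()    # occurrence probabilities, Eq. (4)
    levels = np.arange(1, len(p) + 1)
    omega = np.cumsum(p)                   # zeroth-order moment w(t)
    mu = np.cumsum(levels * p)             # first-order moment mu(t)
    mu_total = mu[-1]
    best_t, best_score = 0, -np.inf
    for t in range(len(p) - 1):
        w0, w1 = omega[t], 1.0 - omega[t]
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = mu[t] / w0
        mu1 = (mu_total - mu[t]) / w1
        score = w0 * mu0**2 + w1 * mu1**2  # equivalent between-class term
        if valley_emphasis:
            score *= (1.0 - p[t])          # favor low-occurrence valleys
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

The cumulative sums give ω(t) and μ(t) in one pass, so the search over all candidate thresholds costs O(L).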
Two thresholds thus need to be calculated, to segment the shadow and the background separately. For this situation, the conventional method cannot be used directly.

Methods

To segment the object from a white or complex background, we first apply the expanded Ng's method for multi-threshold calculation on the modulation histogram, and then propose our method for shadow and background detection. Figure 2 shows the flowchart of our method. The first threshold calculated from the modulation histogram is utilized for shadow segmentation. For background segmentation, we project one coding image onto the object and calculate the intensity difference between the object and the background; the threshold in the intensity histogram is used for background segmentation. Details on how to segment the shadow and the background are given below.

Expanded thresholding method

The literature [8] improved and applied Ng's method for single thresholding on the fringe modulation histogram for object detection in digital fringe projection, but it only discussed the situation of a dark background, in which one threshold is needed for object segmentation. For a DFP system with a white or complex background, we apply the multi-thresholding Ng's method on the modulation. The expanded Ng's method can be described by [9],

\{ t_1^*, t_2^*, \ldots, t_{M-1}^* \} = \operatorname{Arg\,Max} \left\{ \left( 1 - \sum_{j=1}^{M-1} p_{t_j} \right) \sum_{k=1}^{M} \omega_k \mu_k^2 \right\}    (16)

Utilizing this equation, the two thresholds t1 and t2 in Fig. 1f can be calculated. Pixels with a modulation level smaller than t1 are regarded as shadow, pixels with a modulation level larger than t2 are regarded as background, and the object pixels have a medium modulation level. However, the multi-threshold calculation is less credible [9]. Worse, when the background is complex, with modulation levels distributed over a large range, it is difficult to segment the background by modulation alone. In our method, only t1 is utilized, for shadow detection, and the background is segmented from image intensity. Figure 3 shows the preliminary detection results; black pixels are shadow and invalid points.
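The paper does not prescribe an implementation of Eq. (16); as a sketch, for M = 3 classes an exhaustive search over the two thresholds is affordable on a 256-bin modulation histogram:

```python
import numpy as np

def two_thresholds_valley(hist):
    """Two-threshold valley-emphasis selection (Eq. (16) with M = 3).

    hist: 1-D array of modulation-histogram counts. Returns (t1, t2)
    maximizing (1 - p_t1 - p_t2) * sum_k omega_k * mu_k^2.
    """
    p = hist.astype(float) / hist.sum()
    levels = np.arange(1, len(p) + 1)
    omega = np.cumsum(p)
    mu = np.cumsum(levels * p)
    L = len(p)

    def class_term(lo, hi):
        # omega_k * mu_k^2 for the class covering bins (lo, hi], 0-based.
        w = omega[hi] - (omega[lo] if lo >= 0 else 0.0)
        m = mu[hi] - (mu[lo] if lo >= 0 else 0.0)
        return 0.0 if w == 0.0 else m * m / w

    best = (0, 1, -np.inf)
    for t1 in range(L - 2):
        for t2 in range(t1 + 1, L - 1):
            score = (1.0 - p[t1] - p[t2]) * (
                class_term(-1, t1) + class_term(t1, t2) + class_term(t2, L - 1))
            if score > best[2]:
                best = (t1, t2, score)
    return best[0], best[1]
```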
Intensity-based background segmentation

For background segmentation, we project an extra coding image with the intensity of Eq. (17) onto the object and background, and analyze the intensity of their difference to calculate a reliable threshold t_in,

I(x, y) = 255 \times \frac{x}{N}    (17)

where 255 is the total gray-level range and N is the number of columns of the projected image. The coding image for projection is shown in Fig. 4. The captured coding image on the reference plane, I_flat, is shown in Fig. 5a, and the captured coding image on the object, I_obj, is shown in Fig. 5b. The intensity difference map I_diff shown in Fig. 5c is calculated by subtracting I_flat from I_obj,

I_{\text{diff}} = I_{\text{obj}} - I_{\text{flat}}    (18)

where (x, y) is omitted for simplicity. Since the extra projected image contains much useful information for background detection, we call it the coding map. The histogram of the difference map I_diff is shown in Fig. 6a. Utilizing the single-threshold criterion in [10], we can calculate a reliable intensity threshold t_in for segmenting the background. The 150th-row cross-section intensity of Fig. 5a–c is shown in Fig. 6b.

With the multi-thresholding Ng's method applied to the modulation histogram, the object valid-point matrix V_valid is computed as,

V_{\text{valid}} = B(M, t_1) \circ \neg B(M, t_2)    (19)

where B is a matrix of the same size as M, calculated as,

B_{ij}(M, t) = \begin{cases} 1, & \text{where } M_{ij} > t \\ 0, & \text{where } M_{ij} \le t \end{cases}

M is the matrix of the modulation map, t1 and t2 are the first and second thresholds of the modulation histogram calculated by (16), \circ represents the Hadamard product of two matrices, and \neg means negation. The multi-threshold calculation is less credible [9], and the background may be complex, so we analyze the intensity difference of the coding map to find t_in for background segmentation, while the lower threshold t1 from the modulation is still used for shadow detection. The proposed object valid-point matrix V_pro is calculated as,

V_{\text{pro}} = B(M, t_1) \circ \neg B(I_{\text{diff}}, t_{\text{in}})    (20)

where I_diff is the intensity difference map calculated from Eq. (18) and t_in is the intensity threshold.
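In code, Eqs. (18)–(20) reduce to simple mask algebra; a minimal sketch (array and variable names are illustrative; the Hadamard product of binary masks is realized as an elementwise logical AND):

```python
import numpy as np

def binary_mask(a, t):
    """B(A, t) of Eq. (19): 1 where A_ij > t, 0 where A_ij <= t."""
    return a > t

def proposed_valid_points(modulation, coding_obj, coding_flat, t1, t_in):
    """V_pro = B(M, t1) o (not B(I_diff, t_in)), per Eqs. (18) and (20).

    modulation: modulation map M; coding_obj, coding_flat: coding image
    captured on the object and on the reference plane (Fig. 5a, b);
    t1: shadow threshold from the modulation histogram; t_in: background
    threshold from the difference-map histogram.
    """
    i_diff = coding_obj.astype(float) - coding_flat.astype(float)  # Eq. (18)
    # For binary masks the Hadamard product is an elementwise AND and
    # the negation is a logical NOT.
    return binary_mask(modulation, t1) & ~binary_mask(i_diff, t_in)
```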
Experiments and results

Experiments are carried out to test the proposed shadow and background removal technique. A DFP 3D shape measurement system (Fig. 7), with a defocused projector projecting binary fringes of width T = 30, is employed to measure the 3D objects; utilizing defocused binary fringes avoids nonlinear gamma correction [14]. The projected fringes are deformed by the object and captured by a camera, the phase of the object surface is retrieved by the phase-shifting technique, and the height information is calculated after system calibration [15]. The hardware includes a DLP projector (AAXA P4-X, native resolution 480 × 854 pixels) and a CCD camera (Point Grey FL3-U3-13S2M-CS, resolution 1328 × 1048 pixels) fitted with a 6-mm focal-length lens (Kowa LM6JC). The projection distance is about 40 cm.

Shadow and background segmentation

In this experiment, two different objects are tested and segmented; the results are shown in Fig. 8 for the first object and in Fig. 9 for the second. The calculated thresholds are listed in Table 1. Three different defocusing levels of the projector are used to produce different fringe contrasts and modulation levels. Figure 8a shows the modulation histogram of the captured fringe patterns, and Fig. 8b shows the histogram of the intensity difference of the captured coding image. Figure 8c shows the object segmentation by a single threshold: one threshold is not enough to segment the whole object when the background has a high modulation level, as it only separates the shadow from the object. Figure 8d shows the object detected by the modulation thresholds t1 and t2: it can segment the shadow and background from the object, but part of the background is detected as valid object points. There are two reasons: first, the multi-threshold calculation is not always credible [9]; second, when the background is complicated, with modulation levels distributed over both the second and third clusters, background segmentation based on modulation alone is prone to error. Figure 8e shows the object detected by our proposed method, where the background is segmented based on the intensity difference histogram of the coding map shown in Fig. 8b with threshold t_in; the detected object is more accurate than in Fig. 8c. Similar trends are shown in Fig. 8f–j for a slightly defocused projector and in Fig. 8k–o for a strongly defocused projector, which provide different fringe contrasts and modulation levels. As the projector defocusing level increases, the modulation thresholds t1 and t2 become smaller, because defocusing depresses the overall fringe modulation level. The same experiments are also performed on the second object, with similar results shown in Fig. 9.

To demonstrate that the proposed method can also handle a more complex background, we put a small statue near the measuring object. Results are shown in Fig. 10. Figure 10a shows the modulation histogram of the captured fringes, Fig. 10b the histogram of the intensity difference of the captured coding map, and Fig. 10c the object with the small statue beside it. The object segmented by Ng's method based on modulation is shown in Fig. 10d, and that by our proposed method in Fig. 10e. Our method accurately segments the object from the background, while the modulation-based method cannot cope with the complex background. In most practical conditions, the proposed method segments the valid points of the object more accurately than modulation alone.

3D reconstruction

After the phase map of the object is retrieved, the height information can be calculated by system calibration [15]. One commonly utilized method calibrates the camera and the projector separately to find the system parameters [16]. This kind of method is easy to understand, because each system parameter has its geometric meaning, but it is also time-consuming and error-prone [17]: because the projector is regarded as an inverse camera, its calibration accuracy depends on the camera calibration process. In this work, we apply the calibration framework presented in [15] to calculate the height information of the object. For a general DFP system with an arbitrary arrangement, the governing equation of the 3D height is [18, 19],

z = f_c / f_d,
f_c = 1 + c_1 \phi + (c_2 + c_3 \phi) i + (c_4 + c_5 \phi) j + (c_6 + c_7 \phi) i^2 + (c_8 + c_9 \phi) j^2,
f_d = d_0 + d_1 \phi + (d_2 + d_3 \phi) i + (d_4 + d_5 \phi) j + (d_6 + d_7 \phi) i^2 + (d_8 + d_9 \phi) j^2,    (21)

where z is the height at pixel (i, j) and \phi is the phase value of the projected fringe at that pixel. c1–c9 and d0–d9 are constants related to the system parameters. To determine the 19 coefficients, we need the known heights of some sample points on the calibration board, their corresponding phase \phi and pixel positions (i, j), and a least-squares algorithm to find the coefficients.

In our experiment, a 2D checkerboard with 12 × 16 black and white squares is utilized as the calibration object. The calibration includes obtaining the 3D coordinates and phase values of all calibration points on the checkerboard at ten different positions. Phase-shifted sinusoidal fringes and an extra white image are projected onto the calibration board and captured by the camera. The camera intrinsic and extrinsic parameters are calibrated with the captured clean checkerboard. We define the points in the world and camera coordinate systems as (x_w, y_w, z_w)^T and (x_c, y_c, z_c)^T, respectively. Generally, z_w is set to zero, so the relationship between the world and camera coordinate systems is expressed by,

\begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix} = \begin{pmatrix} R_{11} & R_{12} & T_1 \\ R_{21} & R_{22} & T_2 \\ R_{31} & R_{32} & T_3 \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ 1 \end{pmatrix}    (22)

where R and T represent the rotation and translation elements of the camera extrinsic parameters. Using Eq. (22), we can find all the calibration points in the camera coordinate system. We set the first calibration board position as the reference plane and its coordinate system as the world coordinate system. The literature [15] computes the reference-plane equation in the camera coordinate system and takes the distance of each calibration point to this plane as the point's height. In our experiments, all the calibration points are transformed to the world coordinate system according to their respective transformation matrices; then z_w is the point's height. The system coefficients c1–c9 and d0–d9 are computed by minimizing the nonlinear least-squares error function,

\operatorname*{arg\,min}_{c,d} \sum_{k=1}^{m} \left( \frac{f_c}{f_d} - z_k^b \right)^2    (23)

where k is the ordinal number of each point and m denotes the total number of points. An initial guess of the coefficients c1–c9 and d0–d9 is obtained by minimizing the linear least-squares error S = \sum_{k=1}^{m} ( f_c - f_d \, z_k^b )^2, and the Levenberg–Marquardt algorithm is then utilized to refine the results.
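The paper specifies the two cost functions but not their implementation. A sketch of fitting the 19 coefficients of Eq. (21), assuming SciPy is available (all function and variable names are ours): the linear problem supplies the initial guess, and Levenberg–Marquardt refines it against the nonlinear residual of Eq. (23).

```python
import numpy as np
from scipy.optimize import least_squares

def basis(phi, i, j):
    """The 10-term polynomial basis shared by f_c and f_d in Eq. (21)."""
    return np.column_stack([np.ones_like(phi), phi, i, phi*i, j, phi*j,
                            i**2, phi*i**2, j**2, phi*j**2])

def calibrate(phi, i, j, z):
    """Fit c1..c9 and d0..d9 of Eq. (21) to calibration points (Eq. (23)).

    phi, i, j, z: 1-D arrays of phase, pixel coordinates and known
    heights z_k^b of the calibration points.
    """
    A = basis(phi, i, j)
    # Linear initial guess: minimize S = sum (f_c - f_d * z_k)^2, which is
    # linear in the coefficients because f_c's constant term is fixed at 1.
    lhs = np.hstack([A[:, 1:], -z[:, None] * A])   # columns for c1..c9, d0..d9
    x0, *_ = np.linalg.lstsq(lhs, -A[:, 0], rcond=None)

    def residuals(x):
        fc = A[:, 0] + A[:, 1:] @ x[:9]            # 1 + c1*phi + ...
        fd = A @ x[9:]                             # d0 + d1*phi + ...
        return fc / fd - z                         # Eq. (23) residual
    # Refine with Levenberg-Marquardt on the nonlinear residual.
    sol = least_squares(residuals, x0, method='lm')
    return sol.x[:9], sol.x[9:]                    # c1..c9, d0..d9
```

Fixing the constant term of f_c at 1 removes the common scale between numerator and denominator and rules out the trivial all-zero solution in the linear step.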
The reconstructed 3D object is shown in Fig. 11. The object in Fig. 11a is preprocessed by object segmentation based on the modulation histogram alone, while that in Fig. 11b is preprocessed by our proposed method, with both the modulation and intensity histograms analyzed. The modulation-based segmentation removes the shadow correctly, as does our proposed method. However, in Fig. 11a part of the measurement platform, which should be removed as background, is segmented as part of the object, whereas our proposed method accurately removes both the shadow and the complex background from the object points.

Conclusion

In this paper, we proposed a novel preprocessing method for object segmentation in DFP 3D shape measurement. We first applied the multi-threshold Ng's method to the modulation histogram and then proposed our method for shadow and background detection based on the modulation and intensity histograms. Experiments verified that the proposed method can improve 3D shape measurement with white and complex backgrounds.

Authors' contributions

YX built the experiment system, implemented the algorithm, collected and analyzed the data, and wrote the manuscript. YL supervised the main idea and revised the manuscript. Both authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Funding

This work was financially supported by the Research Grants Council of Hong Kong (Project No. CityU 11205015), the National Natural Science Foundation of China (Grant No. 61673329) and the Center for Robotics and Automation (CRA) at CityU. The funding body had no direct input on data collection, experiment design or execution, or the writing of the manuscript.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Gorthi SS, Rastogi P. Fringe projection techniques: whither we are? Opt Lasers Eng. 2010;48(2):133–40.
2. Guo Q, Xi J, Song L. Fringe pattern analysis with message passing based expectation maximization for fringe projection profilometry. IEEE Access. 2016;4:4310–20.
3. Skydan OA, Lalor MJ, Burton DR. Using coloured structured light in 3-D surface measurement. Opt Lasers Eng. 2005;43:801–14.
4. Zhang S. Phase unwrapping error reduction framework for a multiple-wavelength phase-shifting algorithm. Opt Eng. 2009;48(10):105601.
5. Chen F, Su X, Xiang L. Analysis and identification of phase error in phase measuring profilometry. Opt Express. 2010;18(11):11300–7.
6. Huang L, Asundi AK. Phase invalidity identification framework with the temporal phase unwrapping method. Meas Sci Technol. 2011;22(3):035304.
7. Lu L, Xi J, Yu Y, Guo Q, Yin Y, Song L. Shadow removal method for phase-shifting profilometry. Appl Opt. 2015;54(19):6059.
8. Zhang W, Li W, Yan J, Yu L. Adaptive threshold selection for background removal in fringe projection profilometry. Opt Lasers Eng. 2017;90:209–16.
9. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern. 1979;9(1):62–6.
10. Ng HF. Automatic thresholding for defect detection. Pattern Recognit Lett. 2006;27(14):1644–9.
11. Malacara D. Optical shop testing, vol. 59. New York: Wiley; 2007.
12. Su X, Chen W. Reliability-guided phase unwrapping algorithm: a review. Opt Lasers Eng. 2004;42(3):245–61.
13. Gdeisat M, Burton D, Lilley F, Arevalillo-Herráez M. Fast fringe pattern phase demodulation using FIR Hilbert transformers. Opt Commun. 2016;359:200–6.
14. Xiao Y, Li Y. High-quality binary fringe generation via joint optimization on intensity and phase. Opt Lasers Eng. 2017;90:19–26.
15. Vo M, Wang Z, Hoang T, Nguyen D. Flexible calibration technique for fringe-projection-based three-dimensional imaging. Opt Lett. 2010;35(15):3192–4.
16. Li Z, et al. Accurate calibration method for a structured light system. Opt Eng. 2008;47(5):053604. http://dx.doi.org/10.1117/1.2931517
17. Zhang X, Zhu L. Projector calibration from the camera image point of view. Opt Eng. 2009;48(11):117208. http://dx.doi.org/10.1117/1.3265551
18. Huang L, Chua P, Asundi A. Least-squares calibration method for fringe projection profilometry considering camera lens distortion. Appl Opt. 2010;49(9):1539–48.
19. Wang Z, Nguyen D, Barnes J. Some practical considerations in fringe projection profilometry. Opt Lasers Eng. 2010;48(2):218–25.



Yi Xiao, You-Fu Li. Improved 3D measurement with a novel preprocessing method in DFP. Robotics and Biomimetics, 2017, 21. DOI: 10.1186/s40638-017-0077-z