A fusion approach based on infrared finger vein transmitting model by using multi-light-intensity imaging

Human-centric Computing and Information Sciences, Oct 2017

This paper proposes an infrared transmitting model of the finger, built from vein images observed under multi-light-intensity imaging. The model is estimated from the values of many pixels captured in the same scene under different light intensities. Because the fusion method can be applied in a biometric system, the finger vein images captured by the proposed system are normalized while preserving the intact vein patterns of the subject's biometric data. From the pixels observed under multiple light intensities, the curve of the transmitting model is recovered by sliding the sampled curve segments and applying curve fitting. A pixel-level weighted fusion method based on the proposed transmitting-model curve is then combined with spatial smoothing and block-quality estimation. The results show that the approach is a convenient and practicable method for infrared image fusion and subsequent processing in biometric applications.



Liukui Chen (3), Hsing-Chung Chen (0, 1), Zuojin Li (3), Ying Wu (3)

(0) Dept. of Medical Research, China Medical University Hospital, China Medical University, Taichung 404, Taiwan
(1) Dept. of Computer Science and Information Engineering, Asia University, Taichung 41354, Taiwan
(2) Chongqing 401331, China
(3) Chongqing University of Science and Technology, Huxi Street 200

Keywords: Vein image; Multi-light-intensity; Transmitting model; Image fusion

Introduction

Finger vein authentication is highly accurate and convenient because it uses an individual's unique biological characteristics. Vascular patterns are unique to each individual; even identical twins have different patterns. Finger vein authentication works on the vein patterns in the superficial subcutaneous tissue of the finger, which are unique [1-3].
Vein authentication has three main advantages. (1) Because the finger veins are hidden inside the body, there is little risk of forgery or theft in daily activities, and the surface condition of the finger skin, e.g. dry or wet, has no effect on authentication. (2) Finger vein imaging is non-invasive and contactless, which is convenient and hygienic for users. (3) The stability and complexity of finger vein patterns are better than those of other biometric features of the human body, which gives a higher security level for personal identification [4]. Physiological information extracted from the human body, including facial features, palm print or fingerprint, hand shape, skin, temperature, arterial pulse and so on, is used to recognize personal identity and to diagnose some diseases. This information, together with the subcutaneous superficial vein pattern, can be extracted and digitized as biometric data, and further represented as a typical pattern for identifying an individual [5-9]. It is convenient to use the identified biometric as an access credential; typical applications are remote access control on websites, e.g. finance or banking sites. However, biometric image data are sensitive to physiological conditions and environments. For example, a captured facial image may contain many shadows or much noise, so its illumination distribution and direction should be corrected or normalized before the image is stored; otherwise the extracted features will be strongly influenced by the shadows or noise [10]. Moreover, non-uniform illumination increases interference and redundant information, or submerges some patterns, which distorts the feature representation.
It is therefore very important to normalize captured biometric information before storing it in a biometric system [11, 12]. Similar problems also appear in finger vein image capture [13-18]. The width of a vein in the captured image changes under near-infrared light of different intensities. Because the thickness of each finger differs, under- or over-exposure may appear in the thick or thin areas of the finger when one fixed light intensity is used, and the vein pattern is inundated in those areas. Vein pattern integrity is very important for a biometric system, so it is necessary to normalize the illumination during vein image capture before the images are stored in biometric databases.

The main task of finger vein authentication is collecting the data: finger vein images. The quality of the images directly affects the accuracy and speed of recognition. This paper presents a detailed analysis of infrared finger vein images, builds a transmitting model from observed data, i.e. multi-light-intensity vein images, and proposes a pixel-level fusion method based on the transmitting model together with spatial smoothing.

The remainder of this paper is organized as follows. In "The infrared light transmission model of the finger" section, we introduce the infrared light transmission model of the finger. In "Multi-light-intensity finger vein images' fusion based on the transmitting model" section, we formalize the fusion of multi-light-intensity finger vein images based on the transmitting model. Next, we present examinations and discussions in "Examinations and discussions" section. Finally, we draw our conclusions and further works in "Conclusions and further works" section.

The infrared light transmission model of the finger

This model is extended and modified from Ref. [3].
The steps of the basic workflow from bioinformation to biometric data in this model are described in "Basic works from bioinformation to the biometric data" section, and the single infrared transmitting model is described in "A single infrared transmitting model" section.

Basic works from bioinformation to the biometric data

Applications of biometric data include personal identification and disease diagnosis. The system architecture from bioinformation to biometric data for a single infrared transmitting model in a biometric system is illustrated in Fig. 1. Clearly, the capturing, digitizing and normalizing of bioinformation must be efficient so that the complete pattern or texture feature information, uniform gray distribution and contrast are recorded before further use. This paper presents a single transmitting model of finger vein imaging in a biometric system and uses it to fuse the multi-light-intensity finger vein images into one image that integrates the vein pattern information of each source image and keeps the vein pattern information complete.

A single infrared transmitting model

The single infrared transmitting model is described in this subsection. It is popular to achieve angiogram imaging with near-infrared (NIR) light transmitted through the finger. Because the oxyhemoglobin (HbO) content in venous blood far exceeds that in arterial blood and other tissue, such as fat and muscle, the absorption of the transmitted light there is relatively high. Thus, based on the absorption rates of water, oxyhemoglobin (HbO) and deoxyhemoglobin (Hb) shown in Fig. 2, the 760-1100 nm band is suitable for angiogram imaging. This higher absorption by HbO causes the vein pattern region to appear darker than the surrounding region after the NIR light has transmitted through the finger. The technology is widely used in vascular vein imaging of the breast and brain.
Tissue optical properties have been modeled with photon diffusion theory. The epidermis (the outermost layer of skin) accounts for only 6% of scattering and can be regarded as a primarily absorptive medium; a simplified model of the reflectance of blood and tissue therefore considers only the reflectance from the scattering tissue beneath the epidermis [12]. The skin is assumed to be a semi-infinite homogeneous medium under uniform, diffusive illumination. A photon has a relatively long residence time, which allows it to perform a random walk within the medium. Photon diffusion depends on the absorption and scattering properties of the skin, whose penetration depth for different wavelengths is shown in Fig. 3. Considering all these factors, i.e. the absorption of tissue (water, Hb and HbO) in the vein and the depth of penetration, the infrared band used for finger vein imaging in practice is about 850 nm. Because finger thickness is a nonlinear variable, it is hard to image veins at 850 nm with only one invariable light intensity, so overexposure and underexposure often appear in infrared finger vein images. These over- or under-exposed areas cannot be enhanced, which causes loss of vein pattern in the biometric data extraction. An infrared multi-light-intensity finger vein imaging technique was used in [13] to solve this problem by extending the dynamic range of infrared vein imaging [14]; the complementary vein information must then be fused in the next step. This paper presents a method for computing the infrared finger vein transmitting model from multi-light-intensity imaging. The model captures the monotonically increasing nonlinear relationship between light intensity and pixel gray value; it can be built with a genetic algorithm and used for imaging-quality estimation in the pixel-level fusion of infrared multi-light-intensity finger vein images.
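The exposure problem described above can be illustrated with a small numerical sketch. All constants here are assumed for illustration, not taken from the paper: a Beer-Lambert-style attenuation over an assumed thickness profile shows that any single light intensity clips part of the finger to 0 or 255, while several intensities together leave every pixel usable in at least one exposure.

```python
import numpy as np

# Assumed relative thickness profile along one line of the finger.
thickness = np.linspace(0.5, 2.0, 100)

def image(intensity):
    """8-bit gray values for one exposure: transmitted light, clipped to [0, 255]."""
    transmitted = intensity * np.exp(-5.0 * thickness)  # Beer-Lambert-style attenuation
    return np.clip(np.round(transmitted), 0, 255)

def clipped(img):
    """Under- or over-exposed pixels carry no usable vein information."""
    return (img == 0) | (img == 255)

# One fixed intensity clips part of the profile ...
single_bad = clipped(image(1000.0)).sum()
# ... while three intensities together cover every pixel:
# a pixel is lost only if it is clipped in ALL exposures.
multi = [image(i) for i in (1000.0, 5500.0, 30000.0)]
multi_bad = np.logical_and.reduce([clipped(img) for img in multi]).sum()
```

With the assumed attenuation, `single_bad` is nonzero while `multi_bad` is zero, which is exactly the motivation for fusing multi-light-intensity images.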
The infrared finger vein transmitting model [3] is defined as follows. X is the irradiance of the infrared light transmitted through the finger, and B is the resulting pixel gray value; the gray level of a pixel is generally 8 bits. The transmitting model function B = f(X) [3] is written explicitly as

B = B_min = 0,   if X ≤ X_min
B = f(X),        if X_min < X < X_max
B = B_max = 255, if X_max ≤ X

Assume there are N vein images captured under increasing light intensities X_p, p = 1, ..., N. The size of each image is m × n; let K = m × n. The q-th pixel of the p-th light-intensity image is denoted B_pq, and the set {B_pq}, p = 1, ..., N and q ∈ {1, ..., K}, represents the known observations. The goal is to determine the underlying light values, or irradiances, X_q that gave rise to the observations B_pq. The N vein images have been properly registered at the pixel level, so that at each position q the same light value contributes to B_pq for every p; a normalized cross-correlation function is used as the matching criterion to register the images to 1/2-pixel resolution [15]. The model can be rewritten as

B_pq = f_q(X_pq), p = 1, ..., N, q ∈ {1, ..., K},

i.e. the transmitting model differs at each position q. Nevertheless, the shape of each model is similar, which gives an easy way to estimate the transmitting model of each pixel for the application. Since f is a monotonic and invertible function, its inverse can be written as g:

X_pq = g_q(B_pq), p = 1, ..., N, q ∈ {1, ..., K}.   (4)

It is necessary to recover the function g and the irradiances that satisfy the set of equations arising from Eq. (4) in a least-squared-error sense. Recovering g only requires recovering the finite number of values that g(B) can take, since the domain of g, the pixel brightness values, is finite.
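The clamped model B = f(X) can be sketched in a few lines. The limits X_min, X_max and the response between them are assumed here for illustration; the paper recovers the actual curve from observed data.

```python
import numpy as np

# Assumed exposure limits of the transmitting model for this illustration.
X_MIN, X_MAX = 10.0, 200.0

def transmit(x):
    """Map irradiance x to an 8-bit gray value B = f(X), clamping the
    under-exposed range to B_min = 0 and the over-exposed range to B_max = 255."""
    x = np.asarray(x, dtype=float)
    # Illustrative monotone response between the two exposure limits.
    t = np.clip((x - X_MIN) / (X_MAX - X_MIN), 0.0, 1.0)
    return np.round(255.0 * t).astype(int)

grays = transmit([5.0, 105.0, 250.0])  # under-, well-, and over-exposed irradiances
```

The two clamped branches are where vein information is lost, which is why the fusion below weights blocks by how far their gray level sits from these limits.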
Letting B_min and B_max be the least and greatest pixel values (integers), K be the number of pixel locations and N be the number of photographs, we formulate the problem as one of finding the [B_min, B_max] values of g(B) and the values of X that minimize the following quadratic objective function [3]:

ξ = Σ_{p=1}^{N} Σ_{q=1}^{K} [g(B_pq) − X_pq]² + λ Σ_{b=B_min+1}^{B_max−1} [g″(b)]²   (5)

The first term ensures that the solution satisfies the set of equations arising from Eq. (4) in a least-squares sense. The second term is a smoothness term on the sum of squared values of the second derivative of g; in this discrete setting the second derivative is calculated by formula (6):

g″(b) = g(b + 1) + g(b − 1) − 2g(b)   (6)

This smoothness term is essential to the formulation in that it couples the values g(b) in the minimization. The scalar λ weights the smoothness term relative to the data-fitting term and should be chosen according to the amount of noise expected in the B_pq measurements. Because ξ is quadratic in the X_pq and the g(b), minimizing it is a straightforward linear least-squares problem, and the overdetermined system of linear equations is robustly solved with the singular value decomposition (SVD). An intuitive explanation of the procedure may be found in the corresponding section and Fig. 2 of reference [15]. In reference [16], the noise in the X_p is an independent Gaussian random variable with variance σ², and the joint probability density function can be written as

P(X | B) ∝ exp( −(1/(2σ²)) Σ_{p,q} w_pq [g(B_pq) − X_q]² )   (7)

A maximum-likelihood (ML) approach is taken to find the high-dynamic-range image values: the ML solution finds the values X_q that maximize the probability in Eq. (7). Maximizing Eq.
(7) is equivalent to minimizing the negative of its natural logarithm, which leads to the following objective function to be minimized:

ξ(X) = Σ_{p,q} w_pq [g(B_pq) − X_q]²   (8)

Even with the simplifying Gaussian approximation, the noise variances σ²_pq would be difficult to characterize accurately: detailed knowledge of the image-capture process would be required, and the noise characterization would have to be repeated each time a different device captures the image. Equation (8) can be minimized by setting the gradient of ξ(X) equal to zero. If the X_p were unknown at each pixel, one could jointly estimate X_p and X_q by arbitrarily fixing one of the q positions and then iteratively optimizing Eq. (8) with respect to both X_p and X_q. Without an analytic expression of the transmitting model, however, these estimates are difficult to solve. From the observed pixels, this paper estimates the transmitting-model curve by sliding the sampled curve segments and blending them into a monotonically increasing curve with a genetic algorithm. Once the blended curve is built, the rest of the function curve can be redrawn from several sample points; the recovered blended curve is shown in Fig. 4. The complete mixed curve g can then be used to obtain the transmitting model function f, which is shown in Fig. 5.

Multi-light-intensity finger vein images' fusion based on the transmitting model

This section presents a fusion algorithm for multi-light-intensity finger vein images based on the transmitting model. In pixel-level image fusion, estimating the imaging quality of each pixel is very important. The transmitting model established from the observed data in the previous section has the derivative curve shown in Fig. 6. The derivative B′ is about zero in the underexposed and overexposed ranges, which means that infrared light intensities in those ranges are not suitable for finger vein imaging.
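Returning to the recovery of g: a minimal numpy sketch of the smoothness-regularised least-squares estimate described above, under the simplifying assumption that the relative log light intensities log t_p are known (the paper instead blends slid curve segments with a genetic algorithm). The synthetic data and all names are illustrative.

```python
import numpy as np

def recover_response(B, log_t, lam=10.0, levels=256):
    """Recover the inverse response g and per-pixel irradiances X_q from
    observations B[p, q] by linear least squares: a data term
    g(B_pq) - X_q = log t_p, a smoothness term lam * g''(b) = 0, and one
    anchor g(128) = 0 that removes the additive ambiguity."""
    N, K = B.shape
    n_rows = N * K + (levels - 2) + 1
    A = np.zeros((n_rows, levels + K))   # unknowns: g(0..255), then X_1..X_K
    rhs = np.zeros(n_rows)
    r = 0
    for p in range(N):                   # data-fitting rows
        for q in range(K):
            A[r, B[p, q]] = 1.0
            A[r, levels + q] = -1.0
            rhs[r] = log_t[p]
            r += 1
    for z in range(1, levels - 1):       # smoothness rows: g(z-1) - 2g(z) + g(z+1) = 0
        A[r, z - 1:z + 2] = lam * np.array([1.0, -2.0, 1.0])
        r += 1
    A[r, levels // 2] = 1.0              # anchor the mid-gray value of g
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol[:levels], sol[levels:]

# Synthetic check: a linear log response g(b) = (b - 128) / 32.
X_true = np.linspace(-1.0, 1.0, 20)
log_t = np.array([-1.0, 0.0, 1.0])
B = np.clip(np.round(128 + 32 * (X_true[None, :] + log_t[:, None])), 0, 255).astype(int)
g_rec, X_rec = recover_response(B, log_t)
```

The smoothness rows correspond to formula (6); without them the unobserved gray levels would be unconstrained.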
On the other hand, the derivative value B′ can be used to evaluate the fitness of the infrared irradiance. In this paper, the fusion method operates at the pixel level. First, the infrared multi-light-intensity finger vein images are divided into R independent blocks by column. Each block is denoted T_rp, r = 1, 2, ..., R and p = 1, 2, ..., N, where r is the block index and p is the image number. To estimate the quality of each T_rp, the average gray value of the block is calculated as its quality value:

ḡ_rp = mean2(T_rp), r = 1, 2, ..., R and p = 1, 2, ..., N.

Then ḡ_rp is put into the derivative curve of Fig. 6 to calculate the value B′(ḡ_rp) used in the fusion. The fusion weight of block T_rp is defined [3] as

S_rp = exp(α · B′(ḡ_rp))   (9)

where the constant parameter α is the smoothing coefficient. To avoid a checkerboard edge between two adjacent blocks, an additional spatial smoothing weight G_rp is defined:

G_rp(x, y) = exp( −(y − y_c)² / (2σ²) )   (10)

where the constant parameter σ is the variance of the Gaussian coefficient, x is the row number and y the column number in the finger vein image, and y_c is the center column of the block. The overall weighting is the joint value of the gray-information coefficient S_rp and the spatial smoothing coefficient G_rp:

ω_rp = G_rp · S_rp   (11)

Its normalized value is defined as

ϖ_rp = ω_rp / Σ_{p=1}^{N} ω_rp   (12)

In the fusion, each fused block I_r, r = 1, 2, ..., R, is calculated by Eq. (13) [3]:

I_r = Σ_{p=1}^{N} I_rp · ϖ_rp, r = 1, 2, ..., R   (13)

Examinations and discussions

A sample of infrared multi-light-intensity finger vein images, captured by a self-developed platform (Fig. 8), is shown in Fig. 7. The infrared light intensity depends on the duty cycle of the PWM signal that drives the infrared LED. The transmitting model is shown in Fig. 9 and its differential curve in Fig. 10.
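The block-wise weighting and fusion of Eqs. (9)-(13) can be sketched as follows. The values of R, alpha and sigma, and the toy derivative curve standing in for B′, are assumptions for the illustration.

```python
import numpy as np

def fuse(images, deriv_curve, R=10, alpha=5.0, sigma=20.0):
    """Fuse N registered images: split into R column blocks, weight each block
    by exp(alpha * B'(mean gray)) (Eq. (9)) times a column-wise Gaussian
    (Eq. (10)), normalise the joint weights over the N images (Eq. (12)),
    and sum (Eq. (13))."""
    m, n = images[0].shape
    col_blocks = np.array_split(np.arange(n), R)
    fused = np.zeros((m, n))
    weight_sum = np.zeros((m, n))
    for img in images:
        for cols in col_blocks:
            block = img[:, cols].astype(float)
            S = np.exp(alpha * deriv_curve[int(block.mean())])      # Eq. (9)
            G = np.exp(-(cols - cols.mean()) ** 2 / (2.0 * sigma ** 2))  # Eq. (10)
            w = S * G                                               # Eq. (11)
            fused[:, cols] += block * w
            weight_sum[:, cols] += w
    return fused / weight_sum                                       # Eqs. (12)-(13)

# Toy derivative curve peaking at mid-gray: well-exposed blocks dominate.
deriv = np.exp(-((np.arange(256) - 128) / 10.0) ** 2)
imgs = [np.full((8, 40), v, dtype=float) for v in (50.0, 128.0, 200.0)]
out = fuse(imgs, deriv, R=4)
```

Because the derivative is near zero in the clamped ranges, the under- and over-exposed images receive weights near exp(0) = 1 while the well-exposed image receives exp(alpha), so the fused result stays close to the well-exposed gray level.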
In the fusion step, three finger vein images, Fig. 7c-e, are selected for weighted fusion [17, 18]. Each of them is divided into ten blocks by column, as shown in Fig. 11. According to the transmitting model curve, the most suitable blocks are blended into one finger vein image, shown in Fig. 12. The weighting values S_rp are calculated by Eq. (9) and shown in Fig. 13; the weighting values G_rp are calculated by Eq. (10) and shown in Fig. 14; the joint weighting values ω_rp are calculated by Eq. (11) and shown in Fig. 15. The fused finger vein image, blended by Eq. (13), is shown in Fig. 16.

Two other fusion methods are tested for performance comparison in this paper. One is the discrete wavelet transform (DWT) and the other is the contrast pyramid; their flow charts are shown in Fig. 17. In the first method, the source images are decomposed by the discrete wavelet transform and the maximum coefficient at each pixel is chosen before the image is rebuilt. In the second, the source images are pyramid-decomposed by down-sampling, the contrast at each pixel is calculated, and the pyramid layer with the maximum contrast value is chosen before the pyramid image is rebuilt.

The fusion performance is evaluated with the following statistics [3]. The standard deviation of an image is defined by formula (14), where µ is the mean value of the image I of size m × n and σ is the standard deviation:

σ = sqrt( (1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} [I(i, j) − µ]² )   (14)

The Shannon information entropy of the image is defined by formula (15), where P(gray) is the probability of gray level gray among the pixels of image I:

H(I) = − Σ_{gray=0}^{255} P(gray) log₂ P(gray)   (15)

The standard deviation and information entropy of the multi-light-intensity finger vein images, together with those of the image fused by the proposed method, are shown in Table 1 [3]. However, the standard deviation and information entropy of the fused image are lower than those of Figs. 2, 3 and 4, which means that the gray uniformity and consistency of the fused image are better than those of Figs. 2, 3 and 4. For the image of Fig.
10a, the gray contrast is quite low; the image is nearly under-exposed. The degree of dependence between a source image and the fused image can be measured by the fusion mutual information (FMI), which is calculated by formula (16):

FMI = Σ_i MI(I_i, I_f)   (16)

The mutual information (MI) between each source image and the fused image is shown in Table 2 [3]; the MI between the three source images and the fused image is the sum of the MI of each source image with the fused image. The amount of information fused from the source images can be measured by the fusion quality index (FQI), calculated by Eq. (18), in which QI(I_i, I_f | w) is the quality index over a window w for a given source image and the fused image, computed over windows w as in formula (19), and c(w) is a normalized version of C(w), calculated by formula (20):

C(w) = max(σ²_I1, σ²_I2, ..., σ²_I4)   (20)

In the test, the size of the window is 8 × 8. The FQI values of the fusion quality index are shown in Table 3 [3]. To compare the fusion performance further, the structural similarity index measure (SSIM) is also applied in this test; the results are shown in Table 4 [3]. The results of Tables 1, 2, 3 and 4 show that the proposed fusion method, based on column blocking of the image, is effective for infrared multi-light-intensity finger vein images.

Conclusions and further works

This paper proposes an infrared finger-transmitting model that can easily be built from the observed data of multiple light-intensity images. The model provides a better approach to obtaining intact vein patterns in the vein biometric data captured from the bioinformation. The features of the captured images are estimated and fused using the model's differential curve.
The examinations in this paper show that the proposed fusion approach via the infrared transmitting model is an efficient and practical method, suitable for the fusion of infrared images in a biometric system. Finally, detailed applications and analyses of applying the transmitting-model-based fusion of multi-light-intensity finger vein images to big-data environments will be presented in future works.

Authors' contributions
LC made substantial contributions to conception and design and was involved in drafting the manuscript. ZL and YW carried out the acquisition, analysis and interpretation of data. HCC revised the critically important intellectual content of this manuscript. All authors read and approved the final manuscript.

Acknowledgements
This study was funded in part by the Natural Science Foundation Project of CQ CSTC (cstc2011jjA40012), the Foundation and Frontier Project of CQ CSTC (cstc2014jcyjA40006), and the Campus Research Foundation of Chongqing University of Science and Technology (CK2011B09, CK2011B05). This work was also supported in part by Asia University, Taiwan, and China Medical University Hospital, China Medical University, Taiwan, under Grant ASIA-105-CMUH-04.

Competing interests
The authors declare that they have no competing interests.

Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References
1. Shin KY, Park YH, Nguyen DT (2014) Finger-vein image enhancement using a fuzzy-based fusion method with Gabor and Retinex filtering. Sensors 14(2): 3095-3129
2. Tistarelli M, Schouten B (2011) Biometrics in ambient intelligence. J Ambient Intell Human Comput 2(2): 113-126
3.
Liukui C, Zuojin L, Ying W, Lixiao F (2014) A principal component analysis fusion method on infrared multi-light-intensity finger vein images. BWCCA, pp 281-286
4. Kikuchi H, Nagai K, Ogata W, Nishigaki M (2010) Privacy-preserving similarity evaluation and application to remote biometrics authentication. Soft Comput 14(5): 529-536
5. Greene CS, Tan J, Ung M, Moore JH, Cheng C (2014) Big data bioinformatics. J Cell Physiol 229(12): 1896-1900
6. Ogiela MR, Ogiela L, Ogiela U (2015) Biometric methods for advanced strategic data sharing protocols. In: Barolli L, Palmieri F, Silva HDD, et al. (eds) 9th international conference on innovative mobile and internet services in ubiquitous computing (IMIS), Blumenau, pp 179-183
7. Ogiela MR, Ogiela U, Ogiela L (2012) Secure information sharing using personal biometric characteristics. In: Kim TH, Kang JJ, Grosky WI, et al. (eds) 4th international mega-conference on future generation information technology (FGIT 2012), Korea Woman Train Ctr, Kangwondo, South Korea, Dec 16-19, 2012. Communications in computer and information science, vol 353, pp 369-373
8. Ogiela L, Ogiela MR (2016) Bio-inspired cryptographic techniques in information management applications. In: Barolli L, Takizawa M, Enokido T, et al. (eds) IEEE 30th international conference on advanced information networking and applications (IEEE AINA), Switzerland, Mar 23-25, 2016, pp 1059-1063
9. Chen HC, Kuo SS, Sun SC, Chang CH (2016) A distinguishing arterial pulse waves approach by using image processing and feature extraction technique. J Med Syst 40: 215. doi:10.1007/s10916-016-0568-4
10. Chen W, Er MJ, Wu S (2006) Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain.
IEEE Trans Syst Man Cybern B (Cybernetics) 36(2): 458-466
11. Wu X, Zhu X, Wu GQ, Ding W (2014) Data mining with big data. IEEE Trans Knowl Data Eng 26(1): 97-107
12. Urbach R (1969) The biologic effects of ultraviolet radiation. Pergamon Press, New York. http://www.inchem.org/documents/ehc/ehc/ehc23.htm#SubSectionNumber:2.2.1
13. Chen LK, Li ZJ, Wu Y, Xiang Y (2013) Dynamic range extend on finger vein image based on infrared multi-light-intensity vascular imaging. MEIMEI2013, ChongQing, vol 427-429, pp 1832-1835
14. Jacobs K, Loscos C, Ward G (2008) Automatic high-dynamic range image generation for dynamic scenes. IEEE Comput Gr Appl 28(2): 84-93
15. Debevec PE, Malik J (1997) Recovering high dynamic range radiance maps from photographs. In: Whitted T, Mones-Hattal B, Owen SG (eds) Proc. of the ACM SIGGRAPH. ACM Press, New York, pp 369-378
16. Rovid A, Hashimoto T, Varlaki P (2007) Improved high dynamic range image reproduction method. In: Fodor J, Prostean O (eds) Proc. of the 4th Int'l Symp. on applied computational intelligence and informatics. IEEE Computer Society, Washington, pp 203-207
17. Yang J, Shi Y (2014) Towards finger-vein image restoration and enhancement for finger-vein recognition. Inf Sci 268: 33-52
18. Zhang J, Dai X, Sun QD, Wang BP (2011) Directly fusion method for combining variable exposure value images (in Chinese). J Software 22(4): 813-825
19. Delpy DT, Cope M (1997) Quantification in tissue near-infrared spectroscopy. Philos Trans R Soc B Biol Sci 352: 649-659



Liukui Chen, Hsing-Chung Chen, Zuojin Li, Ying Wu. A fusion approach based on infrared finger vein transmitting model by using multi-light-intensity imaging, Human-centric Computing and Information Sciences, 2017, 35, DOI: 10.1186/s13673-017-0110-9