Accepted Manuscript
Automatic 3D Reconstruction of SEM images based on Nano-robotic Manipulation and Epipolar Plane Images
Weili Ding, Yanxin Zhang, Haojian Lu, Wenfeng Wan, Yajing Shen
PII: S0304-3991(18)30050-0
DOI: https://doi.org/10.1016/j.ultramic.2019.02.014
Reference: ULTRAM 12740
To appear in:
Ultramicroscopy
Received date: 19 February 2018
Revised date: 16 February 2019
Accepted date: 18 February 2019
Please cite this article as: Weili Ding , Yanxin Zhang , Haojian Lu , Wenfeng Wan , Yajing Shen , Automatic 3D Reconstruction of SEM images based on Nano-robotic Manipulation and Epipolar Plane Images, Ultramicroscopy (2019), doi: https://doi.org/10.1016/j.ultramic.2019.02.014
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights:
- We report a new 3D reconstruction algorithm that uses light field theory to reconstruct the 3D surface model of SEM samples.
- A nano-robotic system is employed to automatically capture a group of SEM images along a linear path with a fixed step size.
- The depth image of the input SEM image is reconstructed from the specific linear structures emerging in the epipolar-plane images (EPI), which are generated from the captured SEM images.
- The proposed algorithm yields satisfactory 3D reconstruction results for nearly all kinds of SEM objects, especially samples with highly complex micro surfaces, samples with nearly flat surfaces, and samples whose 3D surface extends beyond the field of view of the SEM camera.
- A microscopy database of different types of SEM samples is built in this paper.
Automatic 3D Reconstruction of SEM images based on Nano-robotic Manipulation and Epipolar Plane Images Weili Ding 1*, Yanxin Zhang 1, Haojian Lu2, Wenfeng Wan2 and Yajing Shen 2,3*
1 Institute of Electrical Engineering, Yanshan University, 438 West of Hebei Avenue, Haigang District, Qinhuangdao 066004, China; E-Mail:
[email protected](D.W.)
2 Mechanical and Biomedical Engineering Department, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong 00852, China; E-Mails:
[email protected] (Y.S.);
[email protected] (H.L.),
[email protected] (W.W.)
3 CityU Shenzhen Research Institute, 8 Yuexing 1st Road, Shenzhen 518000, China
* Author to whom correspondence should be addressed; E-Mail:
[email protected](D.W.);
[email protected] (Y.S.); Tel.: 13722579226(D.W.); TEL.: +852-34422045 (Y.S.).
Abstract This paper reports a new and general 3D reconstruction algorithm that uses light field reconstruction theory to construct 3D SEM images over a large range effectively. Firstly, a nano-robotic system is employed to automatically capture a group of SEM images along a linear path with a fixed step size, so that the 3D SEM images can be reconstructed beyond the field of view (FOV) of the SEM. Then, the epipolar-plane images (EPI) are generated, and the depth image is reconstructed based on the specific linear structures emerging in the EPI and an automatic depth estimation algorithm. After that, the depth images are stitched and the dense 3D point cloud is obtained with the Delaunay technique. In the proposed algorithm, the depth reconstruction process does not depend on matching corresponding points, so nearly all kinds of SEM samples can be reconstructed, even samples with simple texture structure or an almost flat surface. In addition, the proposed method allows constructing 3D images beyond the FOV of the SEM with the assistance of the nanorobot. The performance of the proposed algorithm is tested on our self-built database of several microscopic samples; the results verify that the proposed algorithm is general and effective, and that it is particularly well suited to reconstructing highly complex micro surfaces and very flat surfaces over a large range.
Keywords: light field; SEM surface reconstruction; epipolar-plane images; depth estimation
1 Introduction
Scanning electron microscope (SEM) is one of the most commonly used instruments in the biological, mechanical, and materials sciences. However, most existing SEM systems can only provide 2D images due to the imaging mechanism. To effectively measure and visualize the surface structures of microscopic samples, the reconstruction of a 3D surface model from 2D SEM images receives increasing
attention. Nowadays, 3D reconstruction has become an important topic in microscopy vision: it can provide quantitative and visual information about the microscopic objects being investigated, and it finds various applications in fields such as medicine, pharmacology, chemistry, and mechanics. Over the past few years, several SEM 3D reconstruction algorithms have been proposed [1], and most of them are based on 3D computer vision algorithms. Photometric stereo (PS) [2] is a commonly used algorithm in 3D SEM reconstruction. In PS-based methods, a set of 2D images from a single viewpoint with varying light directions is captured, and the 3D geometry of the SEM sample is then rapidly computed. Researchers have focused on improving the algorithm and making it more applicable to SEM images. For example, Paluszynski and Slowko [3] developed a PS method to reconstruct the third dimension of smooth objects. Yuniarti [4] solved the PS problem for 3D reconstruction from noisy images. Vynnyk et al. [5] focused on the detector efficiency and the distribution of the electron beams. Estellers et al. [6] developed an optimization-based surface reconstruction algorithm. Wang et al. [7] used the shape-from-shading (SFS) technique to analyze the gray-scale information of a single top-view SEM image and thereby reconstruct the 3D surface morphology. Generally, this class of methods can be implemented at reasonable computational cost, but it cannot create a whole 3D model since it only uses a single perspective. Moreover, it requires additional lighting, and it is difficult to produce images under different illumination directions with standard SEM machines. Stereo-vision and structure from motion (SFM) [8] form another class of methods that is also widely applied in SEM reconstruction. The existing algorithms, such as [9-13], mainly use corresponding feature points in multiple image pairs to reconstruct the 3D surface of SEM samples. Initially, researchers (e.g.
Raspanti et al. [9] and Samak et al. [10]) focused on using SFM or stereo-vision based algorithms to reconstruct 3D micro-structure surfaces from SEM images. The advantages and limitations of these approaches for 3D reconstruction of SEM images were then studied. Carli et al. [11] carried out a theoretical uncertainty evaluation of the stereo-pair technique for the problem of 3D SEM surface modeling. Zolotukhin et al. [12] examined the advantages and limitations of the SFM approach for 3D reconstruction of SEM images. Recently, more and more researchers, such as Eulitz [13], Tafti [14, 17], [15], Ball [16], and Baghaie [18], have worked deeply on the SEM reconstruction problem. Some software systems have also been developed for 3D roughness reconstruction applications [19, 20]. Among these works, Eulitz et al. [13] studied the 3D surface reconstruction problem for SEM samples based on the principles of optical close-range photogrammetry, and their results showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry software such as 123D Catch. Ball et al. [16] showed that SFM photogrammetry software (e.g. Visual SFM, Insight 3D, 123D Catch, and Agisoft Photoscan) is efficient for generating high-resolution 3D models over a wide range of samples and several different SEM configurations. Tafti et al. discussed recent techniques and algorithms in SEM reconstruction [1], and designed an optimized, adaptive, and
intelligent multi-view approach named 3DSEM++ [14, 17] for 3D surface reconstruction of SEM images. He also made a 3D SEM dataset publicly and freely available to the research community [21]. Baghaie et al. proposed a novel framework using sparse-dense correspondence [18], and an accurate approach for high-fidelity 3D reconstruction of highly complex microscopic samples [22]. Compared with PS-based methods, stereo-vision and SFM based methods can reconstruct a complete 3D model without additional lighting, and the multiple images can be captured easily by moving the camera around the sample or rotating the sample within the field of view. However, they still have some limitations for 3D SEM surface reconstruction. First, matching the corresponding points in the stereo pair remains a challenge. One reason is that SEM images contain various kinds of noise, highlights, and multiple shadows, which decrease the matching accuracy and the reliability of 3D SEM surface modeling. Another reason is that the threshold for detecting inlier corresponding points, as well as the number of outliers, is difficult to set in an adaptive and intelligent way. Thus, some researchers currently perform the point matching manually to obtain good reconstruction results, which is a tedious task for the operator. Automatic matching methods also exist, but the SEM images must be produced by a high-quality stage so that the specimen can be rotated accurately and centrically. Unfortunately, this is not easy, because precise alignment is required to obtain satisfactory 3D reconstructions [1]. Second, not all SEM samples are suitable for 3D reconstruction based on stereo-vision or SFM methods. Generally, if the surface of the sample is too complex, with shaded or too-dark areas that the SEM detector cannot capture, 3D reconstruction is not possible or not accurate. Moreover, flat samples with an almost 2D surface are not suitable either.
Third, only rare public datasets of SEM images are currently available on the Internet [21]. This makes it difficult to examine and analyze the quality attributes of 3D SEM surface reconstruction algorithms, such as accuracy, reliability, robustness, and efficiency. To avoid the problems of stereo-vision and SFM based methods, we propose a new 3D reconstruction algorithm for SEM images in this paper. Unlike existing methods, the proposed algorithm is based on the depth reconstruction theory of 4D light fields [23], which does not depend on feature matching and thus leads to satisfactory 3D reconstruction results for nearly all kinds of objects, especially samples that cannot be reconstructed by stereo-vision and SFM based methods. The input of our algorithm is a dense set of photographs captured along a linear path by nano-robotic manipulation in the SEM system [24]. Then the epipolar-plane images (EPI) are generated, and an adaptive 3D reconstruction algorithm based on the EPI is presented. The proposed reconstruction algorithm includes four steps: pre-processing, depth reconstruction, depth image stitching, and 3D modeling. At the image pre-processing stage, a bilateral filter is used to denoise the EPI, and the edges of the EPI are detected with the Sobel operator. Then, the depth image is reconstructed based on the slopes of the traces extracted from the EPI following [23]. Next, the generated depth images are stitched into a wide field-of-view depth image. Finally, the 3D model is generated with the Delaunay triangulation technique.
Additionally, we also built a database including several real microscopic samples to analyze the quality attributes of our 3D SEM surface reconstruction algorithm.
2 EPI generation using the nano-robotic manipulation system
In our system, the 2D SEM images are captured by the designed nano-robotic manipulation system [24]. Fig.1.a illustrates the EPI generation procedure for multi-directional image sensing at a small scale. During experiment preparation, the sample is fixed on the sample stage with glue, tape, or customized grippers, according to the sample properties. After assembling the nano-robotic manipulation system inside the SEM, the linear positioner (ECS3030, Attocube Inc., Muenchen, Germany) is used to move step by step along the x axis with a fixed step size, and one SEM image is taken after each step.
Fig.1. The process of EPI generation. (a) The nano-robotic manipulation system for EPI generation at small scale inside SEM; (b) a series of captured SEM images; (c) the generated EPI.
As shown in Fig.1.b, micro-images are captured densely along a linear path. The image sequences are then superimposed to form a three-dimensional image collection (see Fig.1.c). Next, a slice operation extracts the pixels at the same height from the three-dimensional image collection, and a series of new images with linear characteristics, called EPI, is obtained. In an EPI, every captured point corresponds to a linear trace, and the relationship between the slope of the trace and the distance to the camera can be written as [23, 27]
Z = f / slope = f·s / d,  (1)

where Z is the distance of the point P to the camera, f is the focal length, d is the parallax, and s is the moving distance of the SEM system.
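As a sketch of this slicing step and of Eq. (1), the following toy example stacks synthetic frames, extracts one EPI row, and recovers the slope of a trace; all numeric values (focal length, step size, frame contents) are illustrative assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def extract_epi(frames, v):
    """frames: list of N grayscale images (H x W); returns the N x W EPI at image row v."""
    stack = np.stack(frames, axis=0)          # (N, H, W): axis 0 is the capture step s
    return stack[:, v, :]

def depth_from_slope(slope, f):
    """Eq. (1): Z = f / slope, with slope = d / s (parallax per capture step)."""
    return f / slope

# toy example: 5 frames of a bright column that shifts 2 px per step
H, W, N = 4, 16, 5
frames = []
for k in range(N):
    img = np.zeros((H, W))
    img[:, 3 + 2 * k] = 1.0                   # the moving feature
    frames.append(img)

epi = extract_epi(frames, v=1)                # one EPI slice (N x W)
cols = np.argmax(epi, axis=1)                 # trace position u at each step s
slope = np.polyfit(np.arange(N), cols, 1)[0]  # fitted slope du/ds of the linear trace
```

With the toy frames above the fitted slope is 2 px/step, and `depth_from_slope` then maps it to a depth under the assumed focal length.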
3 3D reconstruction algorithm of SEM images
As shown in Algorithm 1, the process of the micro-nano images reconstruction follows four steps: EPI pre-processing, depth reconstruction, image stitching and 3D modeling.
3.1 EPI pre-processing
Algorithm 1: The micro-nano image reconstruction
Input: img[N] – N micro-images
Output: Depth – the depth image
1. EPI generation
2. Pre-processing of the EPI images
3. Depth reconstruction
   a. Maximum between-cluster variance
   b. Edge depth estimation
   c. Non-edge depth diffusion
4. Depth image stitching
5. 3D modeling
Because of the influence of lighting conditions and the quantization error of the sensors, the original EPI contains considerable noise. To suppress the noise while retaining edge information, a bilateral filter [28] is chosen to process every slice of the EPI. Compared with general Gaussian filtering, bilateral filtering jointly considers the spatial distance between pixels and the similarity of their intensities, so it can smooth the bulk of the original image while preserving the edge information. The bilateral filter BF is defined as follows [28]:

BF[E]_p = (1/W_p) Σ_{q∈S} G_s(‖p − q‖) G_r(|E_p − E_q|) E_q,  (2)

where W_p is the normalization coefficient:

W_p = Σ_{q∈S} G_s(‖p − q‖) G_r(|E_p − E_q|),  (3)

p = (u, s) is the current pixel to filter, and S is the 2D window around p. In this paper, the window size is set to a 7×7 template. G_s and G_r are Gaussian functions of the form

G_σ(x) = (1/(√(2π)σ)) exp(−x²/(2σ²)).

Using this Gaussian model, the algorithm calculates the geometric distance coefficient and the brightness difference coefficient between the center pixel and each pixel inside the window; the two coefficients are then multiplied and normalized, yielding the bilateral weight coefficient. Fig.2 compares an EPI before and after bilateral filtering, showing that most of the noise is removed while the edge information is preserved.
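A minimal bilateral filter following Eqs. (2)-(3) can be sketched as below. The window radius and the two sigmas are illustrative choices (the paper fixes only the 7×7 window), and the step-edge test image is synthetic.

```python
import numpy as np

def bilateral_filter(E, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Eq. (2): spatial Gaussian G_s times range Gaussian G_r, normalized by W_p (Eq. (3))."""
    H, W = E.shape
    pad = np.pad(E, radius, mode='edge')
    out = np.zeros_like(E, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    Gs = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))           # spatial kernel, fixed
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            Gr = np.exp(-(patch - E[i, j])**2 / (2 * sigma_r**2))  # range kernel
            w = Gs * Gr
            out[i, j] = np.sum(w * patch) / np.sum(w)          # normalized weighted mean
    return out

# a noisy step edge: filtering reduces the noise but keeps the step
rng = np.random.default_rng(0)
epi = np.hstack([np.zeros((8, 8)), np.ones((8, 8))])
noisy = epi + 0.05 * rng.standard_normal(epi.shape)
smooth = bilateral_filter(noisy)
```

Because the range kernel nearly vanishes across the intensity step, the edge survives while the flat regions are averaged, which is exactly the property the pre-processing step relies on.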
Fig.2. (a) EPI before bilateral filtering; (b) EPI after bilateral filtering.
After filtering, the edges are detected with the cross edge-detection model [26], which is defined as follows:

S(u, s) = Σ_{(u′,s′) ∈ H(u,s) ∪ V(u,s)} (E(u, s) − E(u′, s′))²,  (4)

where H(u, s) and V(u, s) are the horizontal and vertical windows around (u, s). The edge pixels are then detected according to the following condition:

Edge(u, s) = 1 if S(u, s) > ε_e, and Edge(u, s) = 0 otherwise,  (5)

where ε_e is a threshold on the edge energy (here, its value is set to 0.015). That is, if S(u, s) > ε_e, the reference point p(u, s) is taken as an edge pixel.
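The cross edge-energy test of Eqs. (4)-(5) can be sketched as follows; the neighborhood radius is an illustrative assumption, while the 0.015 threshold is the paper's value.

```python
import numpy as np

def edge_energy(E, u, s, r=1):
    """Eq. (4): sum of squared differences over horizontal and vertical neighborhoods."""
    center = E[s, u]                                # rows index s, columns index u
    energy = 0.0
    for k in range(-r, r + 1):
        if k == 0:
            continue
        if 0 <= u + k < E.shape[1]:
            energy += (center - E[s, u + k])**2     # horizontal window H(u, s)
        if 0 <= s + k < E.shape[0]:
            energy += (center - E[s + k, u])**2     # vertical window V(u, s)
    return energy

def is_edge(E, u, s, eps=0.015):
    """Eq. (5): threshold the edge energy."""
    return 1 if edge_energy(E, u, s) > eps else 0

# a vertical step edge in a toy EPI: only pixels next to the step fire
epi = np.hstack([np.zeros((5, 5)), np.ones((5, 5))])
```

On this toy EPI, pixels adjacent to the intensity step exceed the threshold and are marked as edges, while interior pixels are not.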
3.2 Depth reconstruction
By (1), the slope is proportional to the depth and inversely proportional to the parallax. For convenience of expression, depth is replaced with parallax in the following sections, and the parallax of each point in the SEM image is calculated with the depth reconstruction algorithm proposed in [26, 27], which follows two main steps:
(1) Edge depth estimation
For any reference point p(u, s) on the extracted edges (see Fig.3 for an example), if the parallax d is given, we can obtain the corresponding point set R_d, and all points in R_d lie on a straight line l_d. Here, R_d can be described as follows:

R_d = {(u′, s′) | u′ = u + (s′ − s)d, s′ = 1, 2, …, n}.  (6)

Fig.3. The straight line in EPI.

Then a consistency evaluation function is defined using the depth estimation method in [26]:

P_d(u, s) = (1/|R_d|) Σ_{(u′,s′)∈R_d} K(E(u, s) − E(u′, s′)),  (7)

where K(x) = exp(−x²/(2σ²)), E(u, s) − E(u′, s′) is the edge value difference between p(u, s) and p(u′, s′) in R_d, and σ is set to 1.5/255. The consistency evaluation function measures the correlation of the pixels in R_d: the larger its value, the higher the probability that the pixels in R_d come from the same straight line. Then the winner-takes-all (WTA) strategy is used to estimate the final depth value D(u, s) of p(u, s) within a searching window [26]:

D(u, s) = argmax_d P_d(u, s).  (8)
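A sketch of this edge-depth estimation follows: for an edge pixel (u, s), each candidate parallax d is scored by the consistency of the EPI samples along the line of Eq. (6), and the best score wins (Eq. (8)). The σ value follows the paper (1.5/255); the candidate range and the synthetic EPI are assumptions.

```python
import numpy as np

def consistency(epi, u, s, d, sigma=1.5 / 255):
    """Eq. (7): mean Gaussian kernel of intensity differences along the line u' = u + (s'-s)d."""
    n, width = epi.shape
    ref = epi[s, u]
    vals = []
    for sp in range(n):
        up = int(round(u + (sp - s) * d))     # Eq. (6): point on candidate line l_d
        if 0 <= up < width:
            diff = epi[sp, up] - ref
            vals.append(np.exp(-diff**2 / (2 * sigma**2)))
    return np.mean(vals)

def wta_depth(epi, u, s, candidates):
    """Eq. (8): winner-takes-all over the candidate parallaxes."""
    scores = [consistency(epi, u, s, d) for d in candidates]
    return candidates[int(np.argmax(scores))]

# synthetic EPI containing one edge trace with true parallax d = 2 px/step
n, width, d_true = 6, 40, 2
epi = np.zeros((n, width))
for sp in range(n):
    epi[sp, 5 + sp * d_true] = 1.0

est = wta_depth(epi, u=5, s=0, candidates=[0, 1, 2, 3])
```

Only the correct candidate keeps every sample on the bright trace, so its consistency score dominates and the WTA step returns the true parallax.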
In order to make the result more robust, we choose a window of size 1×5 and set the coefficients to 0.1, 0.2, 0.4, 0.2, 0.1 to filter out some noise.
(2) Non-edge depth diffusion
After estimating the edge depth values, the depth values of the internal points are estimated with a depth diffusion strategy. Suppose the left and right edge points nearest to an internal point p(u, s) are p1(u1, s) and p2(u2, s), with corresponding depth values d(u1, s) and d(u2, s). Then we obtain the prior depth ε(u, s) of p(u, s) through bilateral linear interpolation:

ε(u, s) = [r_dist · r_range / (l_dist · l_range + r_dist · r_range)] d(u1, s) + [l_dist · l_range / (l_dist · l_range + r_dist · r_range)] d(u2, s),  (9)

where
l_dist = (u − u1)/(u2 − u1), r_dist = 1 − l_dist,
l_range = |E(u, s) − E(u1, s)| / (|E(u, s) − E(u1, s)| + |E(u, s) − E(u2, s)|), r_range = 1 − l_range.
After obtaining the prior depth ε, we calculate the prior energy using a Gaussian distribution within the 3σ scope of the prior depth:

E_prior = exp(−(d − ε)²/(2σ₀²)) if |d − ε| ≤ 3σ₀, and E_prior = 0 otherwise.  (10)
Within the searching range, the measurement set R_d can be obtained with the EPI adaptive framework [26] for a given parallax d. Based on the Laplace distribution, the likelihood energy is defined as

E_likelihood = (1/|R_d|) Σ_{(u′,s′)∈R_d} exp(−|E(u′, s′) − E(u, s)| / β),  (11)

where β is a constant coefficient; in this paper, β = 5. For all damaged points, the likelihood is uniformly set to 0 because no valid measurement set R_d exists, namely E_likelihood = 0.

Using the prior depth and the likelihood depth, the final depth energy function is defined as

E(d) = log E_prior + log E_likelihood,  (12)

and the depth values of the non-edge region are calculated with the winner-takes-all (WTA) strategy.
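The diffusion step can be sketched as below: the prior depth of an interior pixel is interpolated between its two edge neighbors with the normalized spatial and intensity weights of Eq. (9), and the prior energy is the truncated Gaussian of Eq. (10). The weight definitions are one plausible reading of the (garbled) printed formula and σ₀ is an assumed value, so treat this as a sketch rather than the authors' exact implementation.

```python
import numpy as np

def prior_depth(u, u1, u2, d1, d2, Eu, E1, E2, eps=1e-9):
    """Eq. (9): bilateral linear interpolation between edge depths d1 and d2 (weights assumed)."""
    l_dist = (u - u1) / (u2 - u1)              # normalized spatial offset to the left edge
    r_dist = 1.0 - l_dist
    dl, dr = abs(Eu - E1), abs(Eu - E2)
    l_range = dl / (dl + dr + eps)             # normalized intensity offset to the left edge
    r_range = 1.0 - l_range
    wl = r_dist * r_range                      # near + similar to u1 -> large weight on d1
    wr = l_dist * l_range
    return (wl * d1 + wr * d2) / (wl + wr + eps)

def prior_energy(d, eps_prior, sigma0=1.0):
    """Eq. (10): Gaussian prior, truncated outside the 3-sigma scope."""
    if abs(d - eps_prior) <= 3 * sigma0:
        return np.exp(-(d - eps_prior)**2 / (2 * sigma0**2))
    return 0.0

# an interior pixel spatially close to, and intensity-identical with, the left edge point
p = prior_depth(u=4, u1=3, u2=9, d1=2.0, d2=8.0, Eu=0.5, E1=0.5, E2=0.9)
```

In this example the pixel sits next to the left edge and matches its intensity, so the prior depth collapses onto d1.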
3.3 Depth image stitching and 3D modeling
From the 2D SEM image acquisition method, we know that there is only a 2D translation between any two images, and that the internal parameters, external parameters, and object distance of the SEM camera are the same. That means that if the fields of view of the camera overlap, the overlap width of the fields of view is determined, and the two images can be stitched into a wide-field image. The image stitching algorithm used in this study contains three steps: bilateral filtering, overlap width calculation, and image fusion. Here, the bilateral filter is based on (2), and a nearest-neighbor interpolation is also applied after the bilateral filter to de-noise the depth image. For the overlap width calculation, a column-metric matching strategy based on minimum curvature deviation is proposed. Firstly, a reference area between l_p and l_p + w of the input depth image is chosen, and a series of corners and their curvatures in the reference area are calculated with the Harris algorithm [29]. Then, we search for the matching corners in the same row of the matched image according to the minimum curvature deviation E_min:

E_min = min_m Σ_i |κ_Right(i + m) − κ_Left(i)|,  (13)

where κ_Left(i) is the curvature of corner i in the input image and κ_Right(i) is the curvature of the corresponding corner in the matched image. Finally, the distance between the input image and the matched image is obtained, and the two images are stitched into a new depth image. According to the depth information of each pixel in the depth image, the 3D coordinate information (x, y, z) of any point p(x_d, y_d) in the depth image can be calculated; the transformation relation is as follows:
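The overlap-width search of Eq. (13) amounts to sliding one image's feature profile against the other and keeping the offset with the minimum accumulated deviation. The sketch below uses 1-D "curvature" profiles as a stand-in for the Harris-corner curvatures of the paper, an illustrative simplification.

```python
import numpy as np

def overlap_width(left_prof, right_prof, max_shift):
    """Eq. (13), simplified: pick the overlap m minimizing the sum of absolute deviations."""
    best_m, best_e = 0, np.inf
    for m in range(1, max_shift + 1):
        a = left_prof[-m:]                 # trailing columns of the input image
        b = right_prof[:m]                 # leading columns of the matched image
        e = np.sum(np.abs(a - b))          # accumulated curvature deviation
        if e < best_e:
            best_m, best_e = m, e
    return best_m

# toy profiles sharing a true overlap of 3 columns
left = np.array([0.1, 0.4, 0.9, 0.3, 0.7, 0.2])
right = np.concatenate([left[-3:], [0.5, 0.8]])
m = overlap_width(left, right, max_shift=5)
```

The deviation is exactly zero at the true overlap, so the search recovers it; with noisy curvatures the minimum is merely the best match rather than zero.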
x = (x_d − x_o)·s_x, y = (y_d − y_o)·s_y, z = f·s/d,  (14)

where f and (x_o, y_o) are the effective focal distance and principal point of the SEM microscope, which can be calibrated with the FSM algorithm [25]; s_x and s_y are the scale factors in the x and y directions (each equal to the scale of the SEM image divided by the corresponding pixel number); s is the moving step; and d is the gray value (parallax) at p(x_d, y_d) in the reconstructed depth image. After obtaining the 3D coordinates of all pixels, 4 adjacent pixels are defined as a basic adjacent unit. In this unit, seven linking modes of the triangle meshes can be obtained (Fig. 4) based on the distances between any two adjacent points, and the mesh model of the sample is generated with the Delaunay triangulation method [30]. Here, the mesh model is saved as a *.ply file, which can be opened with the Meshlab software [31].
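The conversion of Eq. (14) from a depth (parallax) image to a 3-D point cloud can be sketched as follows; the focal length, principal point, scale factors, and step size are toy values, not the calibrated SEM parameters.

```python
import numpy as np

def depth_to_cloud(depth, f, x0, y0, sx, sy, step):
    """Eq. (14): (x, y) from pixel coordinates and scale factors, z = f * s / d."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - x0) * sx                       # lateral coordinate, Eq. (14) first line
    y = (ys - y0) * sy
    with np.errstate(divide='ignore'):       # guard against zero-parallax pixels
        z = f * step / depth
    return np.dstack([x, y, z]).reshape(-1, 3)

# a uniform parallax of 2 px maps every pixel to the same depth plane
depth = np.full((4, 4), 2.0)
cloud = depth_to_cloud(depth, f=10.0, x0=2.0, y0=2.0, sx=0.5, sy=0.5, step=1.0)
```

The resulting N×3 array is what a Delaunay triangulation (e.g. `scipy.spatial.Delaunay` over the x-y plane) would mesh and export to a *.ply file.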
Fig.4. Seven linking modes of the triangle meshes.
4 Results and Discussion
4.1 Experimental database description
To highlight the effectiveness of the proposed reconstruction algorithm, we built a microscopy database of different types of SEM samples. Each sample includes a series of original 2D SEM images, the reconstructed depth image, and the 3D surface model. The raw 2D SEM images were captured with the nano-robotic manipulation system, and the samples had to remain stable during the linear movement, without deformation, throughout image acquisition. The reconstructed depth image and 3D surface model were generated with the algorithms described in Sections 3.2 and 3.3. Here, the 3D surface models are stored as 3D point clouds (.ply format). The database contains 16 samples, and their 3D models are divided into two classes: non-stitched models (see Fig.5.a for an example) and stitched models (see Fig.5.b for an example). For some samples, the complete structure is visible in a single depth image, so the image stitching algorithm of Section 3.3 is unnecessary and the non-stitched model is generated directly from one depth image. Table 1 gives further details on the names, attributes, and SEM configurations of the samples.

TABLE 1 The samples' names, attributes, and capture parameters

| Material | Sample name | Image properties | Step size | Number of images | Model type |
|---|---|---|---|---|---|
| PDMS | Regular Pillar 1 | Magnification: 200×, WD: 15.4 mm | 2 μm | 55 | stitched |
| PDMS | Irregular Pillar 2 | Magnification: 400×, WD: 9.2 mm | 1 μm | 62 | non-stitched |
| PDMS | Irregular Pillar 3 | Magnification: 400×, WD: 10.9 mm | 1 μm | 60 | non-stitched |
| PDMS | Irregular Pillar 4 | Magnification: 500×, WD: 10.4 mm | 1 μm | 60 | non-stitched |
| PDMS | Surface Particle 1 | Magnification: 1000×, WD: 9.7 mm | 0.5 μm | 60 | non-stitched |
| PDMS | Surface Particle 2 | Magnification: 400×, WD: 9.6 mm | 1 μm | 53 | non-stitched |
| PDMS | Surface Particle 3 | Magnification: 1000×, WD: 10.6 mm | 1 μm | 60 | non-stitched |
| PDMS | Surface Particle 4 | Magnification: 2000×, WD: 10.7 mm | 1 μm | 43 | non-stitched |
| PDMS | Surface Particle 5 | Magnification: 500×, WD: 13.6 mm | 1 μm | 56 | stitched |
| PDMS | Surface Particle 6 | Magnification: 1000×, WD: 13.5 mm | 0.5 μm | 55 | stitched |
| PDMS | Surface Particle 7 | Magnification: 1000×, WD: 13.6 mm | 0.5 μm | 55 | stitched |
| PDMS | Bulk | Magnification: 200×, WD: 12.8 mm | 10 μm | 60 | non-stitched |
| Silicon | Pillar | Magnification: 1000×, WD: 12.8 mm | 1 μm | 60 | stitched |
| Silicon | AFM Cantilever | Magnification: 3000×, WD: 11.3 mm | 0.5 μm | 50 | non-stitched |
| Copper | Fracture Surface | Magnification: 1000×, WD: 7.9 mm | 0.5 μm | 50 | stitched |
| SU8 | Multihole Cube | Magnification: 500×, WD: 21.2 mm | 0.5 μm | 130 | non-stitched |
Fig.5. Visualization of two samples in the database using the proposed method. (a) The original 2D SEM images (the 1st, 2nd, and last images), the depth image corresponding to the first original SEM image, and the non-stitched model; (b) the original 2D SEM images (the 1st, 26th, and last images), the stitched depth image, and the stitched model.
4.2 3D reconstruction results
In our experiments, all reconstruction results were obtained with a C++ implementation (Visual Studio 2010) on a PC with a 2.70 GHz Intel(R) Core(TM) i7-2620M CPU (4 logical cores) and 8 GB of RAM. The 3D visualization results for six real microscopic samples are shown in Fig.6-Fig.8. Panel (a) in these figures presents the captured 2D SEM images in the database; the corresponding 2D depth images and 3D point clouds, reconstructed with our proposed algorithm, are shown in panels (b) and (c) of each figure, respectively.
Firstly, typical samples with regular and irregular structures are shown in Fig.6 and Fig.7, respectively. Because the complete structure of each sample is visible in a single view image, the non-stitched 3D models are illustrated in Fig.6 and Fig.7. From these two figures, we see that the basic 3D geometric structures of both regular and irregular samples can be reconstructed effectively, although there is a small amount of noise in the background. In Fig.6, the samples have simple shapes, simple texture, and nearly flat surfaces. Generally, micrographs of this kind of sample are difficult to acquire with a fixed tilt angle difference, so their 3D models are also difficult to reconstruct with multi-view based algorithms [1]. In contrast, with our proposed algorithm, the series of micrographs can easily be captured by the nano-robot with a fixed step size along a linear path, and the reconstructed depth image and 3D model are obtained easily. In Fig.6.b and Fig.6.c, we see that the reconstructed structure of the regular sample is within a reasonable scope, and even when a sample has simple texture (e.g. the samples in the first and last rows), dense 3D point clouds can still be generated.
Fig.6. Visualization of the regular samples in the database using the proposed method. (a) 2D SEM image, (b) the reconstructed depth image, and (c) 3D point clouds.
Fig.7. Visualization of the irregular samples in the database using the proposed method. (a) 2D SEM images, (b) the reconstructed depth image, and (c) 3D point clouds.
In Fig.7, the samples have irregular shapes, and the three samples in Fig.7.a represent different cases. The first sample has an irregular and nearly flat surface; the second sample is highly complex, with some highlighted and shaded areas; and the third sample has an irregular structure with some highlighted areas. The visualization results in Fig.7.c indicate that all three kinds of SEM objects are suitable for reconstruction with our algorithm. Even when the surface is highly complex or flat, or has highlighted and shaded areas that the SEM detector cannot avoid, 3D reconstruction with our algorithm is still possible.
Fig.8. Visualization of samples with highly complex or nearly flat surfaces in the database using the proposed method. (a) The first, middle, and last 2D SEM images used as input, (b) the stitched depth image, and (c) 3D point clouds of the stitched 3D model.
Fig.8 shows another four samples, which are large and extend beyond the field of view of the SEM camera; that is, the complete structure of the sample cannot be acquired in a single SEM image. With our image acquisition mode, however, a larger field of view of the sample can be obtained easily by moving the nano-robot with a fixed step size along a linear path. Moreover, the samples in the first three rows have highly complex structures and nearly flat surfaces, and the sample in the fourth row has a simple shape and texture. After obtaining the depth image of each input image with the algorithm in Section 3, the stitched depth image and stitched 3D model of each sample are generated, as shown in Fig.8.b and Fig.8.c. The stitched depth images clearly reconstruct a complete 3D surface covered by all input SEM images. In particular, the depth image in the last row of Fig.8.b successfully recovers nine similar reconstructed pillars after the stitching process, which shows that our stitching algorithm is effective and introduces no shape distortion. The 3D point clouds in Fig.8.c show the 3D geometric structures of the reconstructed models, which indicates that the proposed method can obtain a wide-field 3D model of the sample by capturing micro-images along a linear path, and that samples with nearly flat surfaces can still be reconstructed with the proposed algorithm. Overall, Fig.6-Fig.8 indicate that nearly all SEM objects are accessible for
reconstruction with our algorithm. To further estimate the accuracy of our algorithm, a standard micro-cube was prepared for 3D reconstruction with the proposed algorithm. The cube is made of SU-8 photoresist, fabricated with a 3D laser lithography system (Photonic Professional GT, Nanoscribe GmbH), and has an edge length of 47.0 μm. The micro-cube was moved 129 steps, so 130 SEM images were captured. Fig.9 shows some of these SEM images. The micro-cube contains rich edge and corner structures, which makes the calibration and reconstruction steps more accurate and easier to compare.
Fig.9. 2D SEM images of the micro-cube. (a) The first image, (b) the 40th image, (c) the 80th image, and (d) the 130th image.
Fig.10 shows the 3D reconstruction results from different perspectives based on our proposed algorithm (Fig.10.a) and the Visual SFM [25] algorithm (Fig.10.b). Apart from a small amount of noise interference, the 3D model reconstructed by our algorithm is similar in shape to the real micro-cube, and the three-dimensional cavity structure can be seen clearly in the reconstructed results. In contrast, the traditional Visual SFM algorithm reconstructs only a few discrete points, and the three-dimensional cavity structure of the micro-cube cannot be recovered efficiently (see the last image in Fig.10.b).

Fig.10. Reconstructed result of the micro-cube using (a) the proposed algorithm, (b) the Visual SFM algorithm [25].

To verify the accuracy of our reconstruction algorithm, the real value and the measured values of the side lengths of the micro-cube are compared one by one. As shown in Fig.11, the vertices A-G are selected first, and their 3D coordinates are calculated based on (14). Then the measured length between any two vertices is obtained with the distance formula between two points. Table 2 lists all measured values. We see that the calculated 3D model gives a mean side
length of 43.82 μm, i.e., a 6.77% mean error with respect to the real value of 47.0 μm. Further, compared with the measured lengths of AE, CF, and DG, the values of AB, BD, AC, CD, EF, and FG are closer to the real value, with a minimum error of 0.45%. That means the proposed 3D reconstruction method can estimate the original shape of the sample well, and it is useful for 3D shape measurement of micro-samples.
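The error figures quoted here follow directly from the measured edge lengths in Table 2; a short sketch of the arithmetic:

```python
# Measured edge lengths (um) from Table 2, compared against the real 47.0 um edge.
measured = [47.6614, 47.2133, 45.3108, 46.4292, 35.8806,
            38.1434, 49.1806, 38.1434, 46.4006]
real = 47.0

errors = [abs(m - real) / real for m in measured]   # per-edge relative error
mean_len = sum(measured) / len(measured)            # mean side length
mean_err = abs(mean_len - real) / real              # mean relative error
```

Evaluating this gives a mean length of about 43.82 μm (6.77% mean error), a minimum per-edge error of about 0.45% (edge AC), and a maximum of about 23.66% (edge AE), matching the figures in the text and table.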
Fig.11. Reconstructed depth image of the micro-cube and the selected points used to calculate the estimated 3D coordinates.
TABLE 2
The distances between the selected points in Fig.11

Distance    Calculated value    Distance    Calculated value
AB          47.6614 μm          BD          46.4292 μm
AC          47.2133 μm          AE          35.8806 μm
CD          45.3108 μm          CF          38.1434 μm
EF          49.1806 μm          DG          38.1434 μm
FG          46.4006 μm
Mean        43.8181 μm          Real value  47.0 μm
Mean Error  6.77%               Min Error   0.45%
Max Error   23.66%

Fig.12 shows the comparison results for the same irregular sample at different magnification rates. Fig.12.b gives the 3D model reconstructed by our proposed algorithm, and Fig.12.c gives the 3D model reconstructed by Visual SFM [25]. From the same viewpoint, we see that the 3D point clouds reconstructed by the two methods have similar 3D surface structure. However, the point clouds reconstructed by the Visual SFM algorithm are sparser than those reconstructed by our algorithm. Especially when the magnification rate is larger than 500 and the texture features gradually disappear, our algorithm can still reconstruct dense and accurate point cloud data successfully; on the contrary, the Visual SFM algorithm can only generate very few 3D points. This means that, compared with the traditional SFM reconstruction algorithms for SEM, our proposed algorithm can reconstruct dense point clouds without depending on texture features.
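The summary statistics reported for Table 2 can be reproduced directly from the listed side lengths. The following short script (using the measured values exactly as transcribed) recomputes the mean length and the error figures:

```python
# Measured side lengths (μm) of the micro-cube, as listed in Table 2,
# and the nominal edge length of the fabricated cube.
measured = {"AB": 47.6614, "AC": 47.2133, "CD": 45.3108, "BD": 46.4292,
            "AE": 35.8806, "CF": 38.1434, "EF": 49.1806, "DG": 38.1434,
            "FG": 46.4006}
real = 47.0

mean_len = sum(measured.values()) / len(measured)
errors = {k: abs(v - real) / real * 100 for k, v in measured.items()}

print(round(mean_len, 4))                           # 43.8181 (μm)
print(round(abs(mean_len - real) / real * 100, 2))  # 6.77 (% mean error)
print(round(min(errors.values()), 2))               # 0.45 (% min error, AC)
print(round(max(errors.values()), 2))               # 23.66 (% max error, AE)
```

This confirms that the mean-error figure consistent with the tabulated lengths is 6.77%, with the minimum per-edge error given by AC and the maximum by AE.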
4.3 Discussion
Fig.12. (a) 2D SEM images of an irregular sample, captured at different magnification rates; (b) 3D point clouds generated by our proposed algorithm; (c) 3D point clouds generated by the Visual SFM [25] 3D reconstruction algorithm.
SEM 3D reconstruction is a critical problem in microscope imaging. Although single-view, multi-view and hybrid approaches have been proposed or applied in this field [1], and remarkable progress has been made in recent years, not all SEM samples are suitable for reconstruction. This study demonstrates a new and general way to reconstruct the 3D surface of SEM samples. It is based not on the theory of PS, stereo vision or SFM, but on light field reconstruction theory. Because the depth image is estimated from the specific linear structures emerging in a densely sampled 3D light field, our algorithm avoids the image acquisition problems, camera calibration and feature matching of multi-view approaches, and it does not need special lighting conditions. Firstly, in multi-view approaches the 2D images are acquired with a fixed tilt angle difference. As a result, the micrographs of some samples, such as those with a flat surface or a large surface beyond the field of view of the SEM camera, cannot be captured and reconstructed successfully. Our algorithm acquires the images with a fixed step size along a linear path, so it can avoid the above problem and capture the complete 2D SEM images by adjusting the step size and path. The tests in Fig. 8 show that our
algorithm is well adapted to the reconstruction of highly complex micro surfaces with flat regions, and to the 3D modelling of micro-samples beyond the field of view of the SEM. These characteristics make our algorithm robust and efficient. Secondly, because the proposed algorithm does not depend on matched pixels/patches, nearly all kinds of SEM samples can be reconstructed, even samples with simple or sparse texture features. In other words, if a feature detector (e.g., the SIFT algorithm) cannot detect a reasonable number of feature points, 3D SEM reconstruction may fail in multi-view approaches, but it is still possible to reconstruct a dense 3D point cloud using our algorithm (see Fig.6 as an example). Thirdly, the proposed algorithm can overcome some of the influence of highlights and shadows (see Fig.7 and Fig.12 for examples). Because the 2D SEM images are acquired along a linear path and the number of captured images is not limited, a highlight or shadow area in one SEM image may be absent in other SEM images. Moreover, our algorithm uses a depth-diffusion strategy to obtain the depth information of the non-edge regions; thus, if the depth of the edge regions is estimated accurately, the depth values of regions with highlights or shadows can still be reconstructed. In the proposed method, the quality of the reconstructed 3D SEM model is mainly determined by two factors: the quality of the SEM image and the positioning accuracy of the nanorobot. During SEM imaging, noise usually exists due to electric charging, especially when the sample is non-conductive. As the results in Fig.9-Fig.11 show, when the sample is made from SU-8 (a non-conductive material), noise exists in the original images, resulting in noise in the final reconstructed 3D model. To reduce this noise, a better SEM imaging strategy should be used, for example coating the sample with a conductive material.
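The core depth cue discussed above — the slope of the linear structures traced in the epipolar-plane image as the sample translates with a fixed step size — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual estimator: the conversion from EPI slope to metric depth via the calibrated step size (equation (3)) is omitted, and only a simple gradient-based slope estimate is shown.

```python
import numpy as np

def build_epi(stack, y):
    """Extract the epipolar-plane image for image row y from a stack of
    frames captured along a linear path: result has shape (n_frames, width)."""
    return stack[:, y, :]

def epi_disparity(epi):
    """Estimate the dominant EPI line slope (pixels of image motion per
    capture step). Along a constant-intensity line x = x0 + d*t we have
    gt + d*gx = 0, so d = -gt/gx at textured pixels."""
    epi = epi.astype(float)
    gt = np.gradient(epi, axis=0)   # intensity change frame-to-frame
    gx = np.gradient(epi, axis=1)   # intensity change along the scan row
    mask = np.abs(gx) > 0.1         # keep only well-textured pixels
    return float(np.median(-gt[mask] / gx[mask]))
```

Given the nanorobot's step size and the calibration in (3), a disparity-per-step estimate of this kind would then be mapped to a depth value per edge pixel; the median here is only a stand-in for a per-pixel structure-tensor estimate.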
For positioning, if the SEM images are not captured along a strictly linear path, inaccurate edges will appear in the EPI images. To reduce this error, a better positioning platform, such as a nanorobotic system, could be employed. As future work, we plan to further improve the 3D model accuracy by optimizing the above two factors.
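The depth-diffusion strategy mentioned in the discussion — propagating reliable edge depths into non-edge (or highlighted/shadowed) regions — is not specified in detail in the paper. The following is a minimal sketch of the general idea, assuming a simple Jacobi-style relaxation in which known edge depths act as boundary conditions:

```python
import numpy as np

def diffuse_depth(depth, known, iters=500):
    """Fill unreliable depth pixels by repeated neighbour averaging: known
    pixels are held fixed, and unknown pixels relax toward the discrete
    harmonic interpolant of the known depths. np.roll wraps at the image
    borders, which is an acceptable simplification for this sketch."""
    d = depth.astype(float).copy()
    d[~known] = d[known].mean()           # neutral initial guess
    for _ in range(iters):
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                      np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d[~known] = avg[~known]           # relax only the unknown pixels
    return d
```

With accurate depths at edge pixels, this kind of diffusion interpolates smoothly across textureless, highlighted or shadowed interiors, which is consistent with the behaviour described for the proposed algorithm.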
5 Conclusion
In this study, a new approach is proposed for the 3D reconstruction of SEM samples. The results indicate that the proposed algorithm is well suited to many kinds of microscopic samples, including samples with sparse texture features and samples with highly complex or flat surfaces. The entire workflow does not need any specific lighting condition or feature matching operation, which makes it a more general solution than the existing single-view and multi-view based approaches. This work offers new opportunities for observing and examining micro surfaces, which could benefit the characterization of micro-nano samples and potentially promote a wide range of scientific disciplines including the biological, mechanical, and materials sciences.
Acknowledgement
This work was supported by the National Natural Science Foundation of China (No.
61773326), the Natural Science Foundation of Hebei Province (No. F2016203211).
References
[1] A.P. Tafti, A.B. Kirkpatrick, Z. Alavi, et al. Recent advances in 3D SEM surface reconstruction. Micron, 2015, 78: 54-66.
[2] R.J. Woodham. Photometric method for determining surface orientation from multiple images. Optical Engineering, 1980, 19(1): 139-144.
[3] J. Paluszynski, W. Slowko. Surface reconstruction with the photometric method in SEM. Vacuum, 2005, 78: 533-537.
[4] A. Yuniarti. 3D Surface Reconstruction of Noisy Photometric Stereo. University of Western Australia, 2007.
[5] T. Vynnyk, T. Schultheis, T. Fahlbusch, et al. 3D-measurement with the stereo scanning electron microscope on sub-micrometer structures. Journal of the European Optical Society - Rapid Publications, 2010, 5(1): 138-138.
[6] V. Estellers, J.P. Thiran, M. Gabrani. Surface reconstruction from microscopic images in optical lithography. IEEE Transactions on Image Processing, 2014, 23(8): 3560-3573.
[7] Q. Wang, F. Zhu, et al. Three-dimensional reconstruction techniques based on one single SEM image. Nanotechnology and Precision Engineering, 2013, 11(6): 541-545.
[8] R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
[9] M. Raspanti, E. Binaghi, I. Gallo, A. Manelli. A vision-based 3D reconstruction technique for scanning electron microscopy: direct comparison with atomic force microscopy. Microscopy Research and Technique, 2005, 67: 1-7.
[10] D. Samak, A. Fischer, D. Rittel. 3D reconstruction and visualization of microstructure surfaces from 2D images. CIRP Annals, 2007, 56(1): 149-152.
[11] L. Carli, G. Genta, A. Cantatore, et al. Uncertainty evaluation for three-dimensional scanning electron microscope reconstructions based on the stereo-pair technique. Measurement Science and Technology, 2011, 22(3): 035103.
[12] A. Zolotukhin, I. Safonov, K. Kryzhanovskii. 3D reconstruction for a scanning electron microscope. Pattern Recognition and Image Analysis, 2013, 23(1): 168-174.
[13] M. Eulitz, G. Reiss. 3D reconstruction of SEM images by use of optical photogrammetry software. Journal of Structural Biology, 2015, 191(2): 190-196.
[14] A.P. Tafti. 3D SEM Surface Reconstruction: An Optimized, Adaptive, and Intelligent Approach. Doctoral dissertation, The University of Wisconsin-Milwaukee, 2016.
[15] A.V. Kudryavtsev, S. Dembélé, N. Piat. Stereo-image rectification for dense 3D reconstruction in scanning electron microscope. International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), 2017, 1-6.
[16] A.D. Ball, P.A. Job, A.E.L. Walker. SEM-microphotogrammetry, a new take on an old method for generating high-resolution 3D models from SEM images. Journal of Microscopy, 2017, 267(2): 214-226.
[17] A.P. Tafti, J.D. Holz, A. Baghaie, et al. 3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction. Micron, 2016, 87: 33-45.
[18] A. Baghaie, A.P. Tafti, H.A. Owen, et al. SD-SEM: sparse-dense correspondence
for 3D reconstruction of microscopic samples. Micron, 2017, 97: 41-55.
[19] https://www.phenom-world.com/software/3d-reconstruction.
[20] http://www.digitalsurf.fr/en/index.html.
[21] A.P. Tafti, A.B. Kirkpatrick, J.D. Holz, et al. 3DSEM: A 3D microscopy dataset. Data in Brief, 2016, 6: 112-116.
[22] A. Baghaie, A.P. Tafti, H.A. Owen, et al. Three-dimensional reconstruction of highly complex microscopic samples using scanning electron microscopy and optical flow estimation. PLoS ONE, 2017, 12(4): e0175078.
[23] C. Kim, H. Zimmer, Y. Pritch, et al. Scene reconstruction from high spatio-angular resolution light fields. ACM Transactions on Graphics, 2013, 32(4): 73-83.
[24] Y.J. Shen, W.F. Wan, et al. Multidirectional image sensing for microscopy based on a rotatable robot. Sensors, 2015, 15: 31566-31580.
[25] http://ccwu.me/vsfm/.
[26] W.L. Ding, P.C. Ma, et al. High resolution light field depth reconstruction algorithm based on priori likelihood. Acta Optica Sinica, 2015, 35(7): 0715002.
[27] W.L. Ding, C. Yu, et al. Study on adaptive light field 3D reconstruction algorithm based on array image. Chinese Journal of Scientific Instrument, 2016, 37(9): 2156-2165.
[28] C. Tomasi, R. Manduchi. Bilateral filtering for gray and color images. Sixth International Conference on Computer Vision, Bombay, 1998, 839-846.
[29] C. Harris, M. Stephens. A combined corner and edge detector. Proceedings of the 4th Alvey Vision Conference, 1988, 147-151.
[30] B. Delaunay. Sur la sphère vide. Bulletin de l'Académie des Sciences de l'URSS, Classe des sciences mathématiques et naturelles, 1934, 6: 793-800.
[31] http://www.meshlab.net/