Flow Measurement and Instrumentation 49 (2016) 70–88
Parametric study on light field volumetric particle image velocimetry

Shengxian Shi a,*, Jianhua Wang b, Junfei Ding a, Zhou Zhao a, T.H. New c

a Gas Turbine Research Institute, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
b Systems Engineering Research Institute of China State Shipbuilding Corporation, Beijing 100094, China
c School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
Article history: Received 30 December 2015; Received in revised form 5 March 2016; Accepted 1 May 2016; Available online 3 May 2016

Keywords: Volumetric particle image velocimetry (3D-PIV); Light field imaging; Plenoptic camera; Multiplicative algebraic reconstruction technique (MART)

Abstract

This paper presents a comprehensive investigation of how key design features affect the performance of a plenoptic camera used for a single-camera volumetric velocity measurement technique. It first presents the prototyping of an in-house high resolution plenoptic camera, followed by an introduction to the framework for reconstructing 3D particle images from 2D light field images. Based on linear optics, a set of synthetic light field images was then generated by tracing light rays from a point light source to the plenoptic camera sensor. Detailed analyses were performed on these images to examine the effects of key parameters such as pixel microlens ratio (PMR), microlens geometry, reconstruction iteration number, relaxation factor and voxel to pixel ratio on the resolution of the plenoptic camera and the final particle reconstruction quality. It is found that the microlens geometry is the vital parameter affecting overall system performance: hexagonal microlenses generally outperform square ones in terms of resolution and reconstruction quality. Another important parameter is the PMR, which affects the resolution in the x-, y- and z-directions; a high PMR does not necessarily lead to better reconstruction quality.
1. Introduction

As a non-intrusive planar velocity measurement technique, two-dimensional particle image velocimetry (2D-PIV) has progressed rapidly over the past thirty years, and is maturing into a standard fluid diagnostic method widely used in areas such as fundamental fluid mechanics, micro-fluidics, bio-fluids, aerodynamics, combustion and turbomachinery [1–4]. However, many fluid phenomena are highly complex and three-dimensional in nature, and two-dimensional velocity measurements are therefore insufficient to elucidate their complicated fluid physics completely. In view of this limitation, the principles and techniques of 2D-PIV have been extended by multiple studies to enable measurements of two-dimensional three-component (2D-3C) and full volumetric (3D-3C) velocity fields. One of the first attempts was to introduce one additional camera into the traditional 2D-PIV system and measure the third velocity component according to stereoscopic imaging principles (Stereo-PIV) [5,6]. As Stereo-PIV can only provide an additional third velocity component within a 2D velocity field, a natural extension was to simultaneously measure 2D-3C velocity slices over multiple planes by using a series of scanning laser sheets and a pair of high-speed cameras [7,8].
* Corresponding author. E-mail address: [email protected] (S. Shi).
http://dx.doi.org/10.1016/j.flowmeasinst.2016.05.006
The so-called Scanning PIV is fundamentally a 2D-3C method, and its measurable velocity cannot exceed 1 m/s due to limitations in camera frame rate, laser repetition rate or scanning mirror speed [9]. On the other hand, instead of measuring 3D velocity through multiple-view geometry, Defocusing Digital PIV (DDPIV) recovers depth information from defocused images which are normally produced by a three-aperture mask. As DDPIV estimates a particle's 3D coordinates from its triple defocused images, a single-camera DDPIV system is limited to flows with very low particle density, and normally a triple-camera arrangement is needed to resolve the flow field with satisfactory accuracy [10,11]. One of the truly volumetric velocity measurement techniques is Holographic PIV (HPIV), which records three-dimensional particle displacement by in-line or off-axis holography and subsequently calculates the velocity distribution by particle tracking or cross-correlation from reconstructed digital holograms. The application of this technique, however, is greatly limited by its cumbersome experimental setup and small measurement volume when holograms are recorded by CCD/CMOS sensors [12–14]. A significant step forward in the development of three-dimensional velocity measurement techniques is Tomographic PIV (Tomo-PIV), which typically uses four cameras to capture particle images from different viewing angles and reconstructs the 3D particle image via the multiplicative algebraic reconstruction technique (MART) [9,15]. Tomo-PIV has the advantages of high spatial resolution as well as a relatively large measurable volume (although the measurable range along the optical axis is smaller than in the lateral directions), and is being widely used in experimental fluid mechanics studies.
Table 1. Summary of current volumetric PIV techniques.

Type               Number of cameras   Measurement volume (mm³)   Seeding density (ppp)
DDPIV [10,11]      1–3                 150 × 150 × 150            0.034
HPIV [12–14]       1                   10 × 10 × 10               0.0015–0.014
Tomo-PIV [9,15]    2–8                 80 × 100 × 20              0.02–0.08
SAPIV [16]         8–15                65 × 40 × 32               0.015–0.125
Fig. 3. Light ray path of plenoptic camera.
Fig. 1. Light field parameterisation methods [22].
Another multi-camera 3D velocity measurement technique is synthetic aperture PIV (SAPIV) [16]. It uses a large camera array (normally 8–15 cameras) to capture the light field image of the seeding particles and reconstructs the 3D particle image through a synthetic aperture refocusing method. SAPIV can tolerate a much higher particle density than Tomo-PIV, and its measurable range along the optical axis can be of the same order as in the lateral directions. Characteristics of the volumetric PIV techniques discussed above, in terms of number of cameras, typical measurement volume and seeding density (particles per pixel, ppp), are summarised in Table 1. Note that key parameters such as the measurement volume and seeding density of these volumetric PIV techniques may change in the future with the advancement of CCD/CMOS sensor technology and 3D reconstruction algorithms. For more details on the respective volumetric velocity measurement techniques, readers are referred to the above-mentioned papers.

The above-mentioned 3D-PIV techniques employ either highly complex optical systems or multi-camera arrangements, which not only complicates experimental procedures and increases hardware cost but, most importantly, prevents these techniques from being applied in many flow scenarios where
optical access is limited. As such, measuring volumetric velocity fields with a single camera is highly desirable for the experimental community. One recently developed single-camera 3D velocity measurement technique employs a three-vision prism to realize triple-view particle image recording with one CCD sensor [17]. Following similar data processing procedures to Tomo-PIV, this technique can provide accurate 3D velocity measurements for a relatively small volume. Another single-camera volumetric velocity measurement technique is light field photography based PIV (shortened to LFPIV hereafter). Unlike the camera array system used by SAPIV, LFPIV records 4D particle light field images through the combination of a high resolution micro-lens array (MLA) and a high resolution CCD sensor (the so-called plenoptic camera). Studies have demonstrated that LFPIV can resolve 3D velocity fields through ray tracing based reconstruction and 3D cross-correlation [18,20]. Although LFPIV is still at an early development stage, attempts have been made to measure IC-engine flows with LFPIV, showing its great potential in resolving complex 3D flows [20,21]. In particular, the authors are motivated to employ the present light field based volumetric particle image velocimetry technique in the areas of complex jet flows [22–26] and flapping membranes [27–29] in the future. Preceding studies have demonstrated that while 2D-PIV techniques may be able to shed light on certain aspects of these flow scenarios, full appreciation and quantification of the 3D flow fields remains elusive. A good understanding of the 3D flow fields is essential towards optimizing complex jet flows and flapping membrane dynamics for mixing enhancement and renewable energy generation, respectively.

The current work presents a systematic analysis of the effects of key optical and experimental parameters on the particle image reconstruction accuracy as well as the measurement resolution of LFPIV, which is currently lacking but fundamentally important for evaluating the performance of this novel volumetric velocity measurement technique. In the following sections, the basics of light field imaging are introduced in Section 2. The methodology of single plenoptic camera based volumetric velocimetry and the construction of the in-house high resolution plenoptic camera system are outlined in Section 3. Section 4 evaluates the performance of the proposed LFPIV technique via ray tracing simulations. Section 5 concludes the current work and points out directions for future studies.
Fig. 2. Schematic of (a) plenoptic camera and (b) focused plenoptic camera.
Fig. 4. In-house developed plenoptic camera. (a) Schematic of the Plenoptic camera. (b) Assembled Plenoptic camera. (c) MLA calibration image.
Table 2. Key parameters of the in-house plenoptic camera and the corresponding values used in the ray tracing simulations.

Symbol   Parameter                 Camera value   Simulation value (square)       Simulation value (hexagonal)
                                                  PMR8     PMR14    PMR28         PMR8     PMR14    PMR28
nlx      MLA resolution: X         458            63       31       15            63       31       15
nly      MLA resolution: Y         301            63       31       15            72       36       18
pl       Microlens pitch           77 μm          44 μm    77 μm    154 μm        44 μm    77 μm    154 μm
fl       Microlens focal length    308 μm         308 μm (all simulation cases)
npx      Camera resolution: X      6600           448 (all simulation cases)
npy      Camera resolution: Y      4400           448 (all simulation cases)
pp       Pixel pitch               5.5 μm         5.5 μm (all simulation cases)
fm       Main lens focal length    –              50 mm (all simulation cases)
Pm       Main lens aperture        –              25 mm (PMR14 cases)
So       Object distance           –              100 mm (all simulation cases)
Si       Image distance            –              100 mm (all simulation cases)
M        Magnification factor      –              1 (all simulation cases)
(f/#)m   Main lens f-number        –              3.5      2        1             3.5      2        1
(f/#)l   Microlens f-number        –              7        4        2             7        4        2
2. Basics of light field imaging

2.1. Light field modelling

The term "light field" was first coined by Arun Gershun in 1936, and was defined as the collection of light rays travelling in all directions in three-dimensional space [30]. The spatial distribution of the energy or radiance density L of such light rays was modelled as the following 5D plenoptic function (the word plenoptic comes from the Latin plenus, meaning complete or full, and optic) [31]:
Fig. 5. (a) and (b) Raw light field image taken by the plenoptic camera; (c) digitally refocus on the turbine blade, (d) digitally refocus on the notebook; (e) and (f) perspective shift (the viewing angle difference between figure (e) and (f) is highlighted by the red boxes). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
L = L(x, y, z, θ, φ)     (1)

where (x, y, z) and (θ, φ) denote the spatial position and angular direction of a ray, respectively (Fig. 1a). It was later argued that this 5D plenoptic function contains redundant information if only the light field outside an object's convex hull is considered. As such, when a ray travels in regions free of occlusion or through transparent media (e.g. water and air), its radiance density can be determined by the 4D plenoptic function [32,33]:

L = L(u, v, s, t)     (2)
where (u, v) and (s, t) are the locations at which the ray intersects two parallel planes (Fig. 1b). Based on these parameterisations, a straightforward but awkward method of recording the light field is to capture a series of images of the st plane from different perspectives in the uv plane. For static light fields, this can be done by flying a camera around the scene with a precise translation/rotation stage or robotic arm [33,34]. If a dynamic scene is involved, the light field can be captured by a 1D or 2D camera array, as realised in the Stanford Computer Graphics Laboratory [35]. This is essentially the method used by Techet et al. [16] for recording the light field of PIV seeding
particles. A more practical way of measuring the light field is to integrate an MLA into a single camera, such that each ray of the light field can be determined by its intersections with the aperture plane and the sensor plane. This so-called plenoptic camera is a perfect implementation of the two-plane parameterisation of the 4D plenoptic function, and is superior to the camera-array arrangement in being physically compact [36,37]. The configuration employed in Ng's design [37,38] positions the MLA one microlens focal length (all microlenses in the MLA have the same focal length) away from the camera sensor plane, which provides the best angular resolution (determined by the number of pixels beneath each microlens) but moderate spatial resolution (determined by the number of microlenses) for the light field (Fig. 2a). Another configuration, termed the focused plenoptic camera or Plenoptic camera 2.0, positions the MLA somewhere between the main lens and the camera sensor in order to maximise the spatial resolution by sacrificing directional sampling resolution (Fig. 2b) [39]. From the volumetric velocity measurement point of view, a higher angular resolution is desirable to better resolve the seeding particles' locations along the optical axis (z-direction), which is critical for accurately reconstructing 3D particle images. Based on these considerations, we confine ourselves to applying
the plenoptic camera framework (Ng's design [37]) to volumetric velocity measurements in the current study. However, extending the current analysis to a volumetric velocity measurement technique based on the focused plenoptic camera would be a very intriguing topic in the near future, when CCD/CMOS sensor resolutions increase to such a level that the light field can be sampled with nearly equal resolution in both the angular and spatial domains.

To lay a foundation for plenoptic camera prototyping and camera performance evaluation, the paths of light rays that enter the plenoptic camera need to be precisely modelled. For PIV-related light field recording, this means spatially tracing each ray from its initial scattering by a seeding particle to its final intersection with the camera sensor. As demonstrated by Georgiev et al. [40], such a ray tracing procedure can be realised using linear optics. To do so, the two-plane parameterisation is slightly modified from L(u, v, s, t) to L(x, y, θ, φ), where (x, y) is the intersection location of the ray with a plane perpendicular to the camera optical axis and (θ, φ) are the angles the ray makes with that plane. Fig. 3 illustrates the path of a light ray through a plenoptic camera (shown in 1D for simplicity), from which the spatial location of the light ray as it travels from point O to the CCD plane can be calculated according to Eqs. (3)–(7). In the equations, $s_o$ is the object distance, $s_i$ is the image distance, $f_m$ is the focal length of the main lens, $f_l$ is the focal length of the microlens, $p_m$ is the aperture of the main lens, $p_l$ is the microlens pitch, $p_p$ is the pixel pitch, and $S_x$ and $S_y$ are the shifts of the microlens centre from the main optical axis.

Particle O → main lens:

$$\begin{pmatrix} x' \\ y' \\ \theta' \\ \phi' \end{pmatrix} = \begin{pmatrix} 1 & 0 & s_o & 0 \\ 0 & 1 & 0 & s_o \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ \theta \\ \phi \end{pmatrix} \qquad (3)$$

Through the main lens:

$$\begin{pmatrix} x' \\ y' \\ \theta' \\ \phi' \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -1/f_m & 0 & 1 & 0 \\ 0 & -1/f_m & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ \theta \\ \phi \end{pmatrix} \qquad (4)$$

Main lens → MLA:

$$\begin{pmatrix} x' \\ y' \\ \theta' \\ \phi' \end{pmatrix} = \begin{pmatrix} 1 & 0 & s_i & 0 \\ 0 & 1 & 0 & s_i \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ \theta \\ \phi \end{pmatrix} \qquad (5)$$

Through the MLA:

$$\begin{pmatrix} x' \\ y' \\ \theta' \\ \phi' \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -1/f_l & 0 & 1 & 0 \\ 0 & -1/f_l & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ \theta \\ \phi \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ S_x/f_l \\ S_y/f_l \end{pmatrix} \qquad (6)$$

MLA → CCD:

$$\begin{pmatrix} x' \\ y' \\ \theta' \\ \phi' \end{pmatrix} = \begin{pmatrix} 1 & 0 & f_l & 0 \\ 0 & 1 & 0 & f_l \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ \theta \\ \phi \end{pmatrix} \qquad (7)$$
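To make the ray model concrete, the following is a minimal sketch (not the authors' code) of Eqs. (3)–(7) in a 2D y–z simplification, written in Python/NumPy. The numerical values follow the PMR14 column of Table 2, the point source is simply placed a distance s_o − dz from the main lens rather than propagated with Eq. (3), and the small Monte Carlo accumulation at the end mirrors, on a much smaller scale, the synthetic image generation described in Section 4.

```python
import numpy as np

# Optical parameters for the PMR14 simulation case of Table 2; the model is a
# 2D (y-z) simplification of Eqs. (3)-(7): the ray state is (y, phi) only.
s_o, s_i = 100e-3, 100e-3        # object and image distance [m]
f_m, f_l = 50e-3, 308e-6         # main lens / microlens focal length [m]
p_l, p_p = 77e-6, 5.5e-6         # microlens pitch and pixel pitch [m]
aperture = 25e-3                 # main lens aperture [m]

def propagate(d):
    """Free-space propagation over a distance d (Eqs. (3), (5) and (7))."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Refraction by a thin lens of focal length f (Eqs. (4) and (6), no offset)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def trace_to_sensor(y_src, dz, y_lens):
    """Trace one ray from a point source at lateral position y_src, a distance
    dz off the nominal object plane, through the point y_lens on the main lens,
    and return its arrival coordinate on the sensor."""
    phi = (y_lens - y_src) / (s_o - dz)            # ray angle towards the main lens
    ray = np.array([y_lens, phi])                  # state just before the main lens
    ray = propagate(s_i) @ (thin_lens(f_m) @ ray)  # Eq. (4), then Eq. (5)
    s_y = np.round(ray[0] / p_l) * p_l             # centre of the microlens that is hit
    ray = thin_lens(f_l) @ ray + np.array([0.0, s_y / f_l])   # Eq. (6)
    ray = propagate(f_l) @ ray                     # Eq. (7)
    return ray[0]

# Monte Carlo accumulation of a synthetic point-source image, in the spirit of
# Section 4 (which uses five million rays; far fewer are used here).
rng = np.random.default_rng(0)
hits = np.array([trace_to_sensor(0.0, 0.385e-3, u)
                 for u in rng.uniform(-aperture / 2, aperture / 2, 20000)])
pixel_index, counts = np.unique(np.round(hits / p_p).astype(int), return_counts=True)
print(pixel_index, counts)
```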
Fig. 6. Schematic of light field based particle image velocimetry.

2.2. Plenoptic camera prototyping

Based on the two-plane parameterisation of the 4D plenoptic function, an in-house high resolution plenoptic camera was constructed to capture light field images of PIV seeding particles. There were two main considerations in building the camera, namely achieving the maximum sampling resolution for the light field and maintaining a short inter-frame time for the subsequent cross-correlation analysis. The first consideration essentially calls for a densely packed CCD or CMOS photo sensor. Although many high resolution medium format or full-frame digital single-lens reflex (DSLR) cameras are commercially available (e.g. the Mamiya Phase One iXR has an effective resolution of 80 Megapixels, and the Canon EOS 5DS has an effective resolution of 50.6 Megapixels), they are inherently unsuitable for PIV applications due to their relatively large time interval between two consecutive frames. When the in-house plenoptic camera was built in December 2014, the highest resolution PIV camera available was the Imperx B6640, which uses a KAI-29050 CCD sensor with a resolution of 6600 × 4400 pixels. To the authors' best knowledge, a higher resolution PIV camera currently available is the Imperx T88H0, which uses a KAI-47051 CCD sensor and provides a resolution of 8880 × 5304 (this camera had not been officially released when this paper was written). On the other hand, a high resolution MLA is also desirable, as it directly determines the spatial resolution according to the two-plane parameterisation. Conventionally available MLAs normally contain only a few hundred microlenses, and are insufficient for plenoptic cameras. Ng [38] and Fahringer et al. [18] used customised square-packing MLAs from Adaptive Optics Associates (296 × 296 and 189 × 193, respectively) for their plenoptic cameras. The current plenoptic camera uses a customised hexagonal-packing MLA with a resolution of 408 × 314. While it will be shown here that MLA resolution and geometry significantly affect performance, the choice of these parameters in most plenoptic cameras is generally the result of the cost and availability of MLA arrays.

One of the challenges encountered in assembling the plenoptic camera was to precisely position the MLA one focal length away from the CCD plane, which is 308 μm for the current configuration.
Fig. 7. Schematic of the ray tracing based weighting coefficient algorithms. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)
Fig. 8. Comparison between (a) synthetic light field image, (b) weighting coefficients by the ray tracing based method and (c) the sphere–cylinder intersection algorithm, for a particle at (dx = 0, dy = 0, dz = 0.385 mm).
This small separation requires removing the protective cover glass from the CCD sensor, as its thickness is on the order of millimetres. The assembly work was therefore carried out in a Class 1000 clean room to avoid contamination of the CCD and MLA by dust in the air. According to the geometry of the Imperx B6640, the height between the CCD plane and the bottom of the lens mount is only a few centimetres. This small volume is where the MLA has to be fitted, since it is made as large as the CCD sensor in order to make the best use of the pixel resolution. Based on this
constraint, the MLA mount, which consists of a base ring, an MLA holder and a cover plate, was designed with tight tolerances and is only 16 mm high in its final assembled state (as schematically shown in Fig. 4a; patent pending). The MLA was positioned above the CCD by the mount in the following three steps. Firstly, the base ring was fixed onto the camera body, replacing the original CCD protection frame. Secondly, the MLA was tightly fitted into the holder and fixed with the cover plate. Lastly, the assembly of the MLA and MLA holder was fixed onto the base ring by a set of springs and M1.4 screws with a thread pitch of 0.3 mm.
Fig. 9. Light field images generated by ray tracing simulation: (a) light source at the focal plane centre; (b) light source offset from the focal plane, dz = 0.385 mm; (c) light source offset from the focal plane, dz = 0.385 mm, dy = 0.0055 mm; (d) light source offset from the focal plane, dz = 0.405 mm. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)
By carefully tuning the adjusting screws, the separation between the MLA and the CCD could be controlled with an accuracy of less than 1 μm. To ascertain that the separation between the MLA and the CCD was exactly one focal length, the plenoptic camera took a set of images of a point light source without the main lens attached. The light source was placed a few metres away from the camera to essentially simulate parallel light rays, which would create a sharp dot image beneath each microlens if the one-focal-length separation was reached. The entire calibration process took about 10 screw adjustments and, while tedious, no further
calibration is needed for future use of the plenoptic camera. Fig. 4b shows the assembled plenoptic camera without the main lens. Key parameters of the in-house plenoptic camera, together with the corresponding values used in the ray tracing simulations, are listed in Table 2. As an example, Fig. 5a and b show a raw light field image captured by the in-house plenoptic camera, while Fig. 5c–f show the digitally refocused and perspective-shifted images, which are calculated from the raw image according to the rendering method described in [38].
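To illustrate what such rendering involves, the sketch below extracts a single perspective (sub-aperture) view and a crude shift-and-sum refocused image from a decoded light field array. It is a simplified stand-in for the rendering method of [38], not the authors' implementation: it assumes the raw image has already been resampled onto a regular (s, t, u, v) grid, and the array shape, function names and the shift parameter are illustrative assumptions.

```python
import numpy as np

def subaperture_view(lightfield, u, v):
    """Return the perspective view seen through one sub-aperture (u, v).

    `lightfield` is assumed to be a 4D array indexed as [s, t, u, v]:
    (s, t) runs over microlenses (spatial samples) and (u, v) over the
    pixels beneath each microlens (angular samples). Picking a fixed
    (u, v) for every microlens yields one viewing direction."""
    return lightfield[:, :, u, v]

def refocus_shift_sum(lightfield, shift):
    """Crude shift-and-sum refocusing: translate each sub-aperture view in
    proportion to its (u, v) offset from the aperture centre, then average.
    `shift` (in microlens units per angular sample) selects the refocus depth."""
    n_s, n_t, n_u, n_v = lightfield.shape
    out = np.zeros((n_s, n_t))
    for u in range(n_u):
        for v in range(n_v):
            du = int(round(shift * (u - n_u // 2)))
            dv = int(round(shift * (v - n_v // 2)))
            out += np.roll(subaperture_view(lightfield, u, v), (du, dv), axis=(0, 1))
    return out / (n_u * n_v)

# Toy usage with random data standing in for a decoded light field
lf = np.random.rand(31, 36, 14, 14)          # (s, t, u, v), PMR = 14
view = subaperture_view(lf, 7, 7)            # central perspective
img = refocus_shift_sum(lf, shift=0.2)       # refocus at one synthetic depth
```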
Fig. 10. Formation of an unresolvable block by back tracing outermost light rays of discretised light columns from edges of a microlens. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)
3. Volumetric PIV based on a single plenoptic camera

3.1. Outline of the technique

Fig. 6 illustrates the principle of measuring a 3D velocity field with a single plenoptic camera, which consists of four steps, namely camera calibration, light field image acquisition, 3D particle image reconstruction and 3D cross-correlation. As introduced in the previous section, the one-time calibration was accomplished when the MLA was positioned one focal length away from the CCD. The objective of camera calibration during image processing is to calculate the image coordinates of the microlens centres, which is achieved by performing Gaussian curve-fitting on the calibration image (Fig. 4c). This is a much simpler process than camera calibration in Tomo-PIV, where the camera matrix needs to be calculated for every camera each time the experimental setup is changed. In the next step, particle light field images are captured in a similar fashion to 2D-PIV, without the need to pan or tilt Scheimpflug lenses for each camera, which is a cumbersome but necessary step in Tomo-PIV to ensure that every camera focuses on the same area. After acquiring two consecutive particle light field images, reconstructing the 3D particle images is the key to LFPIV, and it is even more complicated than the 3D reconstruction in Tomo-PIV; this is detailed in the following analysis. Finally, 3D velocity fields can be calculated from 3D particle image pairs by using a 3D extension of a classic 2D cross-correlation method such as the iterative multi-grid algorithm [41].

3.2. Dense ray tracing based image reconstruction

Earlier studies have shown that the multiplicative algebraic reconstruction technique (MART) can be applied to plenoptic light field images to reconstruct 3D particle images [18,19]. Given an initial guess of the voxel intensities of a 3D particle image, MART iteratively updates the voxel values according to the following equation [43,44]:
$$E(X_j, Y_j, Z_j)^{k+1} = E(X_j, Y_j, Z_j)^{k} \left( \frac{I(x_i, y_i)}{\sum_{j \in N_i} w_{i,j}\, E(X_j, Y_j, Z_j)^{k}} \right)^{\mu w_{i,j}} \qquad (8)$$
where $E(X_j, Y_j, Z_j)$ is the intensity of the j-th voxel, $I(x_i, y_i)$ is the intensity of the i-th pixel, which is known from the captured light field image, $w_{i,j}$ is the weighting coefficient, which quantifies the contribution of light from the j-th voxel to the intensity of the i-th pixel, and μ is the relaxation factor. Another reconstruction algorithm, the computational refocusing based method, is supposed to be more efficient, but its reconstruction accuracy is not as good as that of MART [42].
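As a concrete illustration of Eq. (8), the following is a minimal sketch of the MART update loop (not the authors' implementation). It assumes the weighting coefficients are already available as a pixel-by-voxel matrix W, flattens the voxel grid to a vector, and uses a dense matrix purely for brevity.

```python
import numpy as np

def mart(I, W, n_iter=10, mu=1.0, eps=1e-12):
    """Multiplicative algebraic reconstruction technique, Eq. (8).

    I : (n_pixels,) recorded pixel intensities.
    W : (n_pixels, n_voxels) weighting coefficients w_ij (dense here for
        brevity; a sparse matrix is essential at realistic problem sizes).
    Returns E : (n_voxels,) reconstructed voxel intensities."""
    E = np.ones(W.shape[1])                       # uniform initial guess
    for _ in range(n_iter):
        for i in range(W.shape[0]):               # one multiplicative update per pixel
            j = np.nonzero(W[i])[0]               # voxels N_i seen by pixel i
            if j.size == 0:
                continue
            w = W[i, j]
            proj = w @ E[j] + eps                 # sum_j w_ij * E_j^k
            E[j] *= (I[i] / proj) ** (mu * w)     # multiplicative correction of Eq. (8)
    return E

# Toy usage: 4 pixels, 6 voxels
rng = np.random.default_rng(1)
W = rng.random((4, 6)) * (rng.random((4, 6)) > 0.5)
I = np.array([1.0, 0.5, 0.2, 0.8])
E = mart(I, W, n_iter=5)
```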
Calculating the weighting coefficients accurately is the key to accurately reconstructing the 3D particle image (voxel intensities). Tomo-PIV uses the sphere–cylinder intersection method, which models the voxel as a sphere and the pixel line-of-sight as a cylinder, and calculates the weighting coefficient according to the intersection volume between the two [9,15]. This method was directly applied to plenoptic image reconstruction by Fahringer et al. [18]. However, the pixel line-of-sight of a plenoptic camera is not as straightforward as that of Tomo-PIV cameras, as can be seen in Fig. 3, and calculating the weighting coefficients by the sphere–cylinder intersection method may not faithfully reflect the light intensity distribution of a plenoptic camera. It has been shown that the weighting coefficients can be determined more accurately by a ray tracing method [19]. According to the two-plane model, the angular resolution of a plenoptic camera is decided by the number of pixels beneath each microlens; that is to say, each pixel beneath a microlens views the object from a specific angle. Hence, the main lens can be discretised according to the number of pixels below each microlens. As shown in Fig. 7a, the weighting coefficient is calculated in two steps. Firstly, for a given voxel, its discretised light rays (e.g. two light rays are plotted in Fig. 7a) are traced through the main lens to the affected microlenses. The first portion of the weighting coefficient (w1) is calculated according to the overlap between the light ray and the affected microlenses (e.g. the overlap area between the yellow light ray and the three affected microlenses, as shown in Fig. 7b). In the second step, the light ray is traced to the sensor plane, and the second portion of the weighting coefficient (w2) is calculated according to the overlap between the light ray and the affected pixels (e.g. the overlap area between the green light ray and the four affected pixels, as shown in Fig. 7c). Finally, the weighting coefficient is calculated as the product of w1 and w2. To validate this new method, the calculated weighting coefficients were plotted as a grey scale image and compared with the synthetic light field image as well as with the result from the sphere–cylinder intersection method. As can be seen from Fig. 8, the weighting coefficients calculated by the new algorithm match the synthetic light field image better.
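The two-step weighting can be phrased compactly in code. The sketch below is a deliberately simplified 1D version (intervals rather than 2D microlens and pixel footprints), so it only illustrates the w = w1 · w2 idea rather than the actual geometry of [19]; the normalisation by microlens and pixel size is an illustrative assumption.

```python
def overlap(a0, a1, b0, b1):
    """Length of the overlap between the intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def ray_weight(ray_mla, lens, ray_ccd, pixel):
    """Weight contributed by one discretised light ray to one (microlens, pixel)
    pair: w1 is the ray/microlens overlap on the MLA plane (Fig. 7b) and w2 is
    the overlap between the refracted ray and the pixel on the sensor (Fig. 7c).

    All arguments are (start, end) intervals in metres; normalising by the
    microlens and pixel size is one possible convention, assumed here."""
    w1 = overlap(*ray_mla, *lens) / (lens[1] - lens[0])
    w2 = overlap(*ray_ccd, *pixel) / (pixel[1] - pixel[0])
    return w1 * w2

# Toy usage with a 77 um microlens and a 5.5 um pixel
w = ray_weight(ray_mla=(60e-6, 120e-6), lens=(77e-6, 154e-6),
               ray_ccd=(80e-6, 95e-6), pixel=(82.5e-6, 88e-6))
print(w)
```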
4. Simulation study

According to the ray propagation equations (Eqs. (3)–(7)), the light field image of a point light source or a seeding particle can be simulated by randomly emanating a large number of rays which sweep across the main lens aperture. It was found that five million rays are sufficient to generate a statistically meaningful light field image. As an example, Fig. 9a shows a synthetic light field image for a point source located at the centre of the focal plane; Fig. 9b shows the light field image for a point source located on the optical axis but offset by 0.385 mm from the focal plane; Fig. 9c presents the light field image for a point source offset by 0.385 mm from the focal plane and by 0.0055 mm from the optical axis; and Fig. 9d shows the light field image for a point source located on the optical axis but offset by 0.405 mm from the focal plane. In the following sections, the ray tracing method is used to simulate rays from a point light source at various locations to study the effects of PMR and microlens geometry on the spatial resolution of a plenoptic camera. A set of synthetic light field particle images was also generated to study the effects of the reconstruction parameters and particle density on the performance of the light field PIV technique.

4.1. Effects of pixel microlens ratio on plenoptic camera resolution

Analysis by Georgiev et al. [45] pointed out that the spatial and angular resolution of a plenoptic camera is determined
Fig. 11. Schematic of resolution variation in the y–z plane for (a) PMR = 8, (b) PMR = 14 and (c) PMR = 28. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)
Fig. 12. Comparison of pixel usage (available versus unusable area) between hexagonal and square microlenses.
by the resolution of the MLA and the number of pixels beneath each microlens, respectively. For volumetric velocity measurements, the angular resolution is an important factor in determining the resolution in the z-direction (along the camera optical axis). As such, a qualitative ray tracing analysis is made in this section to study the effect of PMR on the y–z plane resolution of the plenoptic camera. Note that the current analysis focuses on the resolution limit of the plenoptic light field imaging technique only; the effect of image sensor sensitivity is omitted since it depends strongly on the specific sensor employed. For brevity, the analysis is only made for square microlenses, but the conclusions can generally be extended to hexagonal microlenses as well. During the ray tracing analysis, the pixel size and the number of pixels were kept constant at 5.5 μm and 109, respectively, and the PMR was varied through the values 8, 14 and 28 by changing the microlens size. In plenoptic light field imaging, the angular resolution is determined by the number of pixels per microlens, i.e. the PMR. That is to say, the light field of a point source is discretised into a number of portions equal to the PMR, as shown in Fig. 9 (for brevity, the PMR in Figs. 9 and 10 is limited to 5). Two spatially separated point light sources are said to be resolved if the change in location results in any light column being captured by a different microlens. Taking Fig. 9b and c as an example, a shift of 0.0055 mm in the y-direction causes the orange and yellow light columns to move across two adjacent microlenses, and results in a detectable pixel intensity variation. However, when the separation of two point light sources is too small, the light columns from the two sources will be captured by the same group of microlenses and result in nearly identical pixel intensities (as shown in Fig. 9b and d).
Fig. 13. Square microlens: variation of z-direction resolution at (a) x = 0 mm, y = 0 mm; (b) x = 0 mm, y = 0.0385 mm; (c) x = 0 mm, y = −0.0385 mm; and (d) detailed z-direction resolution near the focal plane at x = 0 mm, y = 0 mm.
Fig. 14. Hexagonal microlens: variation of z-direction resolution at (a) x = 0 mm, y = 0.044456 mm; (b) x = 0 mm, y = −0.044456 mm; (c) x = 0.0385 mm, y = 0 mm; (d) x = −0.0385 mm, y = 0 mm; (e) x = 0 mm, y = 0 mm; and (f) detailed z-direction resolution near the focal plane at x = 0 mm, y = 0 mm.
To better illustrate the resolution limit, the outermost light rays (or boundaries) of the discretised light columns are plotted in Fig. 10. Take the top green line as an example, which is plotted by tracing a light ray from the lower edge of the centre microlens through the top portion of the discretised main lens, and back to the object side. It is clear that any point light source moving across such a line will cause the top light column to move across the centre microlens, and hence lead to pixel intensity variations.
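The resolvability criterion described above can also be phrased in code: two point sources are distinguishable only if at least one discretised light column lands on a different microlens. The sketch below reuses the simplified 2D ray model from the sketch following Eq. (7) (same assumptions, PMR14 parameter values from Table 2, centre rays of each light column only), so it illustrates the criterion rather than reproducing the authors' analysis.

```python
import numpy as np

# PMR14 simulation parameters (Table 2), 1D lateral coordinate y
s_o, s_i, f_m, f_l = 100e-3, 100e-3, 50e-3, 308e-6   # distances and focal lengths [m]
p_l, aperture, pmr = 77e-6, 25e-3, 14                # microlens pitch, main lens aperture, PMR

def microlens_signature(y_src, dz):
    """Index of the microlens hit by the centre ray of each discretised light
    column emitted by a point source at lateral position y_src, located a
    distance dz off the nominal object plane (the sign convention is a choice)."""
    u = (np.arange(pmr) + 0.5) / pmr * aperture - aperture / 2   # sub-aperture centres
    phi = (u - y_src) / (s_o - dz)     # ray angles from the source to the main lens
    phi_out = phi - u / f_m            # refraction by the main lens (Eq. (4))
    y_mla = u + s_i * phi_out          # arrival position on the MLA plane (Eq. (5))
    return np.round(y_mla / p_l).astype(int)

def resolvable(src_a, src_b):
    """True if the two (y, dz) point sources illuminate different microlens sets."""
    return not np.array_equal(microlens_signature(*src_a), microlens_signature(*src_b))

# Compare source positions in the spirit of Fig. 9b-d (values from Section 4)
print(microlens_signature(0.0, 0.385e-3))
print(resolvable((0.0, 0.385e-3), (0.0055e-3, 0.385e-3)))
```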
Fig. 15. Square microlens: (a) variation of y-direction resolution at x = 0 mm, z = 5 mm; (b) distribution of the lowest y-direction resolution; (c) detailed y-direction resolution near the optical axis at x = 0 mm, z = 5 mm.
Based on such analysis, performing back ray tracing for the outermost light rays of the discretised light columns at the edges of the centre microlens forms a series of closed blocks. Any point light sources inside the same block cannot be distinguished from one another. As an example, Fig. 9b and d are images of two point light sources inside the highlighted block in Fig. 10. To qualitatively demonstrate the spatial resolution limitation for different PMRs, a series of point light sources was positioned in the y–z plane in the range −1 mm < z < 1 mm and −0.3 mm < y < 0.3 mm, with the focal plane located at z = 0 mm and the optical axis at y = 0 mm. Detailed simulation parameters are listed in Table 2. Traces of the discretised light columns are plotted in Fig. 11 for the three PMR values, where blank blocks represent unresolvable areas and the separation between two red lines represents one microlens size. An immediate observation from Fig. 11 is that the y-resolution decreases with increasing PMR, especially near the focal plane, whereas the z-resolution increases with increasing PMR. From the volumetric PIV point of view, a higher z-resolution is desirable; hence a very small pixel size as well as a very dense microlens array is preferred. However, too small a pixel size will greatly reduce the camera sensitivity, and a densely packed microlens array will significantly increase the manufacturing cost or even be impossible to fabricate. As such, prototyping the in-house plenoptic camera with PMR 14 can be seen as a compromise between performance and cost.

4.2. Effects of microlens geometry on plenoptic camera resolution

Ng [38] reported that hexagonal microlenses make much better use of the pixels than square ones, which can be clearly seen from Fig. 12: the unusable pixel area under a hexagonal microlens is much smaller than that under a square microlens. To quantitatively assess the effect of microlens geometry on the camera resolution, detailed resolution distributions in the x-, y- and z-directions were extracted using the above-mentioned ray tracing method and are presented in this section. For all tests discussed in this section, the PMR is fixed at 14.

Fig. 13 plots the z-direction resolution for the square microlens. As the resolution distribution is symmetrical about the y-axis for the square microlens, only three representative planes, i.e. along the optical axis (y = 0) and around the optical axis (y = ±0.0385 mm), are plotted here. Note that the microlens diameter is 0.077 mm and the optical axis (y = 0) passes through the microlens centre, as shown in Fig. 11b. In Fig. 13a, b and c, the y-axis represents the residual between the real z coordinate of the point light source and the coordinate resolved by ray tracing. It shows that the z-direction resolution on both sides of one microlens is much higher than that along the microlens centre, which is consistent with the blank grid size shown in Fig. 11b.
Fig. 16. Hexagonal microlens: (a) variation of y-direction resolution at x = 0 mm, z = 5 mm; (b) distribution of the lowest y-direction resolution; (c) detailed y-direction resolution near the optical axis at x = 0 mm, z = 5 mm.
Fig. 13d shows the detailed z-resolution distribution over the small range −0.25 mm < z < 0.25 mm, which clearly demonstrates the resolution variation; the lowest resolution is located at the focal plane. A similar analysis was performed for the hexagonal microlens and the results are plotted in Fig. 14. Five representative planes, i.e. along and around the optical axis, are selected since the resolution distribution is not symmetrical about either the x- or the y-axis. Clearly, the z-resolution of the hexagonal microlens is much better than that of the square microlens.

Fig. 15 plots the y-direction resolution for the square microlens; the x-direction resolution would be the same due to symmetry. Fig. 15a shows that the y-direction resolution presents a quasi-periodicity over a small range (−0.4 mm < y < 0.4 mm), and Fig. 15c shows the y-resolution variation within one "cycle". Plotting only the lowest y-resolution along the optical axis (−20 mm < z < 20 mm) shows that the y-resolution in the near field is higher than in the far field. Note that the optical axis points away from the CCD plane. By contrast, Figs. 16 and 17 show the y- and x-direction resolution for the hexagonal microlens. Comparing Fig. 16a, c and Fig. 17a, c with Fig. 15a, c, it can be seen that although the y- and x-direction resolutions of the hexagonal microlens present a quasi-periodicity similar to that of the square microlens, the resolution is generally higher. Meanwhile, the y- and x-direction resolutions of the hexagonal microlens do not decay along the optical axis (Fig. 16b and Fig. 17b), whereas those of the square microlens do (Fig. 15b). Also, comparing Fig. 15 with Fig. 13, and Figs. 16 and 17 with Fig. 14, it can be concluded that the resolutions in the x- and y-directions are nearly one order of magnitude higher than that in the z-direction for both microlens configurations.

4.3. Effects of reconstruction parameters and particle density on particle reconstruction quality

To explore the effects of the reconstruction parameters, namely the relaxation factor μ and the iteration number, as well as the seeding particle density, on the reconstruction quality of LFPIV, a set of light field particle images was generated for the hexagonal microlens configuration by ray tracing simulation, and the reconstruction quality index was calculated according to Eq. (9). For all reconstruction tests, the voxel domain was discretised into 448 × 448 × 53 voxels in such a way that the voxel to pixel ratio in the x- and y-directions is 1:1 and the voxel to pixel ratio in the z-direction is 1:14 (i.e. one voxel to one microlens size, similar to [18]).
Fig. 17. Hexagonal microlens: (a) variation of x-direction resolution at y = 0 mm, z = 5 mm; (b) distribution of the lowest x-direction resolution; (c) detailed x-direction resolution near the optical axis at y = 0 mm, z = 5 mm.
To speed up the simulation process, all calculations were performed via GPU parallel computation using an NVIDIA GeForce 980 processor.
$$Q = \frac{\sum E_1(x, y, z)\, E_0(x, y, z)}{\sqrt{\sum E_1^2(x, y, z) \cdot \sum E_0^2(x, y, z)}} \qquad (9)$$

where $E_0(x, y, z)$ is the exact voxel intensity, which is approximated by a Gaussian distribution, and $E_1(x, y, z)$ is the voxel intensity of the reconstructed particle image.
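Evaluating Eq. (9) is straightforward; a minimal sketch (with the square root in the denominator as written above) is:

```python
import numpy as np

def quality_factor(E1, E0, eps=1e-12):
    """Reconstruction quality factor of Eq. (9): normalised correlation between
    the reconstructed (E1) and exact (E0) voxel intensity fields."""
    return np.sum(E1 * E0) / (np.sqrt(np.sum(E1**2) * np.sum(E0**2)) + eps)

# Toy usage: the full volumes in the paper are 448 x 448 x 53 (or x 106) voxels;
# a small random volume keeps this example fast.
rng = np.random.default_rng(0)
E0 = rng.random((32, 32, 8))
print(quality_factor(E0 + 0.1 * rng.random((32, 32, 8)), E0))
```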
Fig. 18 plots the reconstruction quality variation for different MART iteration numbers and relaxation factors. The results were calculated from the light field image of a single particle located at x = 0, y = 0 and z = 0.385 mm, where the plenoptic camera provides the highest resolution (Fig. 14e). As the figure shows, a larger relaxation factor helps MART converge more quickly to a solution, and a higher iteration number helps to increase the reconstruction quality, which agrees well with the conclusions of [15,42]. Fig. 19 plots the reconstruction quality distribution for various particle densities, quantified as the number of particles per microlens (ppm). For each ppm test, the particle locations were randomly generated, and the final Q value was averaged over 50 simulation results. As expected, the reconstruction quality decreases gradually with increasing particle density, which is consistent with the results reported in [42].

4.4. Effects of microlens geometry on particle reconstruction quality
From the above analysis, it is clear that the microlens geometry and the PMR greatly affect the resolution of a plenoptic camera system. This section further explores the effects of these parameters on the reconstruction quality of LFPIV. Using the simulation parameters listed in Table 2, a set of synthetic light field images was generated for hexagonal and square microlenses with PMR values of 8, 14 and 28 (Fig. 20). Note that the exact particle number for these synthetic light field images was kept at 60 for all cases (corresponding to ppm = 0.05 for the PMR = 14 case). The difference in pixel intensity pattern among these figures is caused by the variation of resolution, which is a direct result of the differences in microlens shape and PMR. These images were reconstructed with MART via GPU parallel computation, with the relaxation factor and iteration number kept at 1 and 10, respectively.
Fig. 18. Reconstruction quality distributions for different iteration number and relaxation factor.
Fig. 19. Reconstruction quality variation with particle density (particles per microlens, ppm).
Before discretising the measurement volume for reconstruction, consideration was given to how to properly determine the voxel to pixel size. From the above resolution analysis, it is known that the x- and y-resolution of both hexagonal and square microlenses is around 1 pixel (Figs. 15–17), whereas the z-resolution is around 7 and 14 pixels for hexagonal and square microlenses, respectively (Figs. 13 and 14). As such, the following presents the reconstruction results for two voxel to pixel ratios, namely 1:1:7 and 1:1:14.

For the voxel to pixel ratio of 1:1:7, the reconstruction area was discretised into 448 × 448 × 106 voxels, and Fig. 21 shows the voxel intensity contours for a sub-region of 1 × 95 × 106 voxels (corresponding to x = 0, y = −0.891 to −0.374 mm, z = −2.04 to 2.04 mm) for the various configurations. The exact distribution of the voxel intensity contours is shown in Fig. 21(a), which was generated in such a way that the voxel intensity in the x- and y-directions follows a Gaussian distribution of three voxels diameter, similar to [15]. For the z-direction, however, each voxel was first divided into 7 sections (so that each "sub-voxel" has a voxel to pixel ratio of 1); the intensities of these "sub-voxels" were linearly interpolated from the Gaussian distribution according to the separation between the "sub-voxel" centres and the particle centre. The final voxel intensity in the z-direction is then the sum of the corresponding "sub-voxel" intensities.

A straightforward observation from Fig. 21 is that hexagonal microlenses provide generally better reconstruction results than square ones. Take the particles highlighted by the red oval as an example: the hexagonal microlens correctly separates these two particles regardless of the PMR, whereas the square microlens can only differentiate them when the PMR is 14 or 28. On the other hand, the effect of PMR on the x- and y-resolution for the same microlens geometry can also be readily seen in the figure. Take the particles highlighted by the black oval as an example: with increasing PMR, the reconstructed particle images gradually become thicker in the y-direction (especially for the square microlens) due to the reduced resolution in the x–y plane. Fig. 21(g) shows an extreme example of this: two particles highlighted by the black oval were reconstructed as one single particle, while they are correctly separated at PMR = 8 and 14 (Fig. 21(c) and (e)).

When the reconstruction area is discretised with a voxel to pixel ratio of 1:1:14 (448 × 448 × 53 voxels), similar conclusions can be drawn for hexagonal and square microlenses. As the voxel intensity contours in Fig. 22 (1 × 95 × 53 voxel sub-region) show, hexagonal microlenses consistently produce better reconstruction results than square ones. Similarly, with increasing PMR, the reconstruction results of both microlens geometries show a clear reduction in y-resolution. On the other hand, the larger voxel to pixel ratio leads to a slight decrease in the capability of resolving closely located particles for both hexagonal and square microlenses. For example, the two particles highlighted by the red oval in Fig. 22(b), (d) and (e) cannot be correctly distinguished with the current voxel to pixel ratio, while they can be correctly reconstructed with the 1:1:7 voxel to pixel ratio, as shown in Fig. 21(b), (d) and (e). There are, however, two advantages of using the larger voxel to pixel ratio. Firstly, the reconstructed particle image is spread over a smaller range of voxels and therefore has a voxel intensity profile closer to the exact intensity distribution (e.g. Fig. 22(d) vs. Fig. 21(d)). Secondly, the computational time for particle reconstruction is half of that with the 1:1:7 voxel to pixel ratio; under the current GPU acceleration settings, reconstructing a full size light field image takes about 10 hours when the voxel to pixel ratio is 1:1:7.

To better present the effects of PMR and microlens geometry on the reconstruction quality, the variation of the reconstruction quality factor Q with PMR and voxel to pixel ratio is plotted in Fig. 23. As already discussed, hexagonal microlenses produce better reconstruction results than square ones under the same PMR and voxel to pixel ratio settings. For the same microlens shape, the reconstruction quality reaches a peak within the tested PMR range.
Further increasing PMR, if it is practically possible, would lead to a clear decrease in x–y plane resolution as shown in Figs. 21 and 22.
5. Conclusions

The current work presents a volumetric velocity measurement technique based on a single in-house developed high resolution plenoptic camera. A ray tracing based method was developed to render light field images, to generate synthetic light field particle images, and to construct a ray tracing based MART reconstruction algorithm. By using synthetic light field particle images, the effects of key parameters such as pixel microlens ratio, microlens geometry, MART iteration number and particle density on the performance of this new technique have been studied in detail.
Fig. 20. Synthetic light field images for: (a) hexagonal microlens, PMR = 8; (b) square microlens, PMR = 8; (c) hexagonal microlens, PMR = 14; (d) square microlens, PMR = 14; (e) hexagonal microlens, PMR = 28; (f) square microlens, PMR = 28.
It is found that the z-resolution is of the order of one microlens size, while the x- and y-resolution is of the order of the pixel size. Hexagonal microlenses provide much higher volumetric resolution than square ones. The current study also reveals that although the z-direction resolution of a hexagonal microlens based plenoptic camera can reach half of the microlens size, the MART reconstruction algorithm cannot fully utilise this potential. New reconstruction methods dedicated to such dense viewing-angle scenarios need to be developed to further improve the reconstruction performance.
Fig. 21. (a) Voxel intensity contours of the true 3D particle images; voxel intensity contours of reconstructed 3D particle images for: (b) hexagonal microlens, PMR = 8; (c) square microlens, PMR = 8; (d) hexagonal microlens, PMR = 14; (e) square microlens, PMR = 14; (f) hexagonal microlens, PMR = 28; (g) square microlens, PMR = 28. Voxel to pixel ratio 1:1:7 in the x-, y- and z-directions. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)
Fig. 22. (a) Voxel intensity contours of the true 3D particle images; voxel intensity contours of reconstructed 3D particle images for: (b) hexagonal microlens, PMR = 8; (c) square microlens, PMR = 8; (d) hexagonal microlens, PMR = 14; (e) square microlens, PMR = 14; (f) hexagonal microlens, PMR = 28; (g) square microlens, PMR = 28. Voxel to pixel ratio 1:1:14 in the x-, y- and z-directions. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)
Fig. 23. Reconstruction quality distributions for various PMRs and voxel to pixel ratios.
Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant no. 11472175) and the Shanghai Rising-Star Program (Grant no. 15QA1402400). The authors would like to gratefully acknowledge Dr. B.S. Thurow from Auburn University for helpful discussions on constructing the in-house plenoptic camera.
References

[1] R.J. Adrian, C.S. Yao, Development of pulsed laser velocimetry (PLV) for measurement of turbulent flow, in: Int. Symposium on Turbulence, University of Missouri, Rolla, 1984, pp. 170–186.
[2] M. Raffel, C.E. Willert, S. Wereley, J. Kompenhans, Particle Image Velocimetry: A Practical Guide, 2nd ed., Springer, Berlin, Heidelberg, New York, 2007.
[3] A. Schroeder, C.E. Willert, Particle Image Velocimetry: New Developments and Recent Applications, Springer, New York, 2008.
[4] R.J. Adrian, J. Westerweel, Particle Image Velocimetry, Cambridge University Press, Cambridge, United Kingdom, 2010.
[5] M.P. Arroyo, C.A. Greated, Stereoscopic particle image velocimetry, Meas. Sci. Technol. 2 (1991) 1181–1186.
[6] A.K. Prasad, R.J. Adrian, Stereoscopic particle image velocimetry applied to liquid flows, Exp. Fluids 15 (1993) 49–60.
[7] C. Brucker, 3-D scanning-particle-image-velocimetry: technique and application to a spherical cap wake flow, Appl. Sci. Res. 56 (1996) 157–179.
[8] T. Hori, J. Sakakibara, High-speed scanning stereoscopic PIV for 3D vorticity measurement in liquids, Meas. Sci. Technol. 15 (2004) 1067–1078.
[9] F. Scarano, Tomographic PIV: principles and practice, Meas. Sci. Technol. 24 (2013) 1–28.
[10] C. Willert, M. Gharib, Three-dimensional particle imaging with a single camera, Exp. Fluids 12 (1992) 353–358.
[11] F. Pereira, M. Gharib, D. Dabiri, M. Modarress, Defocusing PIV: a three-component 3-D PIV measurement technique. Application to bubbly flows, Exp. Fluids 29 (2000) S78–S84.
[12] K. Hinsch, Holographic particle image velocimetry, Meas. Sci. Technol. 13 (2002) R61–R72.
[13] M. Arroyo, K. Hinsch, Recent developments of PIV towards 3D measurements, in: A. Schroeder, C.E. Willert (Eds.), Particle Image Velocimetry: New Developments and Recent Applications, Springer, New York, 2008.
[14] J. Katz, J. Sheng, Applications of holography in fluid mechanics and particle dynamics, Annu. Rev. Fluid Mech. 42 (2010) 531–555.
[15] G. Elsinga, F. Scarano, B. Wieneke, B. van Oudheusden, Tomographic particle image velocimetry, Exp. Fluids 41 (2006) 933–947.
[16] J. Belden, T. Truscott, M. Axiak, A. Techet, Three-dimensional synthetic aperture particle image velocimetry, Meas. Sci. Technol. 21 (2010) 1–21.
[17] Q. Gao, H.P. Wang, J.J. Wang, A single camera volumetric particle image velocimetry and its application, Sci. China Technol. Sci. 55 (2012) 2501–2510.
[18] T.W. Fahringer, K.P. Lynch, B.S. Thurow, Volumetric particle image velocimetry with a single plenoptic camera, Meas. Sci. Technol. 26 (2015) 115201.
[19] J.F. Ding, J.H. Wang, Y.Z. Liu, S.X. Shi, Dense ray tracing based reconstruction algorithm for light field volumetric particle image velocimetry, in: 7th Australian Conference on Laser Diagnostics in Fluid Mechanics and Combustion, Melbourne, Australia, 2015.
[20] T. Nonn, J. Kitzhofer, D. Hess, C. Brucker, Measurements in an IC-engine flow using light-field volumetric velocimetry, in: 16th Int. Symp. on Applications of Laser Techniques to Fluid Mechanics, Lisbon, 2012.
[21] H. Chen, V. Sick, Plenoptic particle tracking velocimetry for internal combustion engine measurements, in: 11th International Symposium on Particle Image Velocimetry (PIV15), Santa Barbara, California, 2015.
[22] K. Zaman, A. Hussain, Vortex pairing in a circular jet under controlled excitation. Part 1. General jet response, J. Fluid Mech. 101 (1980) 449–491.
[23] A. Hussain, K. Zaman, Vortex pairing in a circular jet under controlled excitation. Part 2. Coherent structure dynamics, J. Fluid Mech. 101 (1980) 493–544.
[24] T. New, W. Tay, Effects of cross-stream radial injections on a round jet, J. Turbul. 7 (2006), No. 57.
[25] T. New, D. Tsovolos, Influence of nozzle sharpness on the flow fields of V-notched nozzle jets, Phys. Fluids 21 (2009) 084107.
[26] S. Shi, T. New, Some observations in the vortex-turning behaviour of noncircular inclined jets, Exp. Fluids 54 (2013) 1–11.
[27] J. Allen, A. Smits, Energy harvesting eel, J. Fluids Struct. 15 (2001) 629–640.
[28] S. Shi, T. New, Y. Liu, Flapping dynamics of a low aspect-ratio energy-harvesting membrane immersed in a square cylinder wake, Exp. Therm. Fluid Sci. 46 (2013) 151–161.
[29] S. Shi, T. New, Y. Liu, Effects of aspect-ratio on the flapping behaviour of energy-harvesting membrane, Exp. Therm. Fluid Sci. 52 (2014) 339–346.
[30] M. Levoy, Light fields and computational imaging, Computer 8 (2006) 46–55.
[31] E.H. Adelson, J.R. Bergen, The plenoptic function and the elements of early vision, in: M. Landy, J.A. Movshon (Eds.), Computational Models of Visual Processing, MIT Press, Cambridge, Mass, 1991.
[32] M. Levoy, P. Hanrahan, Light field rendering, ACM Trans. Graph. (1996) 31–42.
[33] S.J. Gortler, R. Grzeszczuk, R. Szeliski, M.F. Cohen, The lumigraph, ACM Trans. Graph. (1996) 43–54.
[34] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, D. Fulk, The Digital Michelangelo Project, in: Proc. ACM SIGGRAPH, ACM Press, 2000, pp. 131–144.
[35] B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, M. Levoy, High-performance imaging using large camera arrays, ACM Trans. Graph. 24 (2005) 765–776.
[36] T. Adelson, J. Wang, Single lens stereo with a plenoptic camera, IEEE Trans. Pattern Anal. Mach. Intell. (1992) 99–106.
[37] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, P. Hanrahan, Light Field Photography with a Hand-Held Plenoptic Camera, Tech. Report CTSR 2005-02, Stanford University, 2005.
[38] R. Ng, Digital Light Field Photography, PhD thesis, Stanford University, Stanford, CA, USA, 2006.
[39] A. Lumsdaine, T. Georgiev, The focused plenoptic camera, in: 2009 IEEE International Conference on Computational Photography (ICCP), 2009.
[40] T. Georgiev, C. Intwala, Light-Field Camera Design for Integral View Photography, Adobe Tech Report, 2003.
[41] J. Soria, An investigation of the near wake of a circular cylinder using a video-based digital cross-correlation particle image velocimetry technique, Exp. Therm. Fluid Sci. 12 (1996) 221–233.
[42] T. Fahringer, B.S. Thurow, On the development of filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV, in: 11th International Symposium on Particle Image Velocimetry (PIV15), Santa Barbara, California, 2015.
[43] N.A. Worth, T.B. Nickels, Acceleration of Tomo-PIV by estimating the initial volume intensity distribution, Exp. Fluids 45 (2008) 847–856.
[44] C. Atkinson, J. Soria, An efficient simultaneous reconstruction technique for tomographic particle image velocimetry, Exp. Fluids 47 (2009) 553–568.
[45] T. Georgiev, K. Zheng, B. Curless, D. Salesin, S. Nayar, C. Intwala, Spatio-angular resolution tradeoff in integral photography, in: Eurographics Symposium on Rendering, 2006.