A simplified holography based superresolution system


Optics and Lasers in Engineering 75 (2015) 27–38


Asloob Ahmad Mudassar
Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan


Article history: Received 13 February 2015; received in revised form 11 June 2015; accepted 11 June 2015.

Abstract

In this paper we propose a simple holography-based idea to achieve superresolution. The object is illuminated by three fibers which maintain mutual coherence between the light waves. Object in-plane rotation combined with the fiber-based illumination is used to achieve superresolution. The object in a 4f optical system is first illuminated by an on-axis fiber so that the central part of the object's spectrum passes through the limiting square aperture placed at the Fourier plane, and the corresponding hologram of the image is recorded at the image plane. The on-axis fiber is then switched off and the two off-axis fibers (one positioned on the vertical axis and the other on the diagonal) are switched on one by one for each orientation of the object. Four in-plane orientations of the object, differing by 90°, are used. This allows the recording of eight holographic images in addition to the one recorded with the on-axis fiber. The three fibers lie at the vertices of a right-angled isosceles triangle and are aimed at the centre of the lens that follows the fiber plane, so as to generate plane waves for object illumination. The nine holographic images are processed to reconstruct the object's original spectrum, the inverse Fourier transform of which gives the super-resolved image of the original object. Mathematical modeling and simulations are reported.

Keywords: Superresolution; 4f imaging system; Band-limited systems; Holography; Imaging

1. Introduction

Superresolution imaging mainly addresses techniques used to resolve the object's contents beyond the classical limit. Optical resolution is mainly limited by the finite size of apertures in an imaging system and by diffraction at the exit pupil of the imaging system. The finite size of the CCD pixels and the separation between them also limit the resolution; techniques addressing this limit are known as geometric superresolution. In the present work we assume that the size of the CCD pixels is smaller than the size of the diffraction-limited spot and that the separation between the CCD pixels is smaller than required to sample a given signal at the Nyquist rate. This implies that we are ignoring the resolution limit due to the CCD pixels and are addressing only the optical aspect of the resolution limit. With the above-mentioned characteristics the CCD may be regarded as an ideal imaging device. To overcome the resolution limit due to diffraction, the imaging lens may be replaced with a lens of larger numerical aperture (NA), or superresolution techniques may be employed with the lower-NA lens to effectively improve its numerical aperture. The numerical aperture of an objective lens can be increased by using tilted-beam illumination, employing either a vertical-cavity surface-emitting laser array, an array of lenses, or diffraction grating elements [1–5]. Using these techniques, the higher-order


spatial frequencies of the target object are recovered and thus the optical resolution is enhanced. Structured illumination in the form of fringes projected on the target object for enhanced resolution has also been reported [6,7]. To obtain super-resolved images while keeping the three-dimensional aspect of the object intact, holography-coupled superresolution techniques have been reported in [8–11]. These techniques are also referred to as digital holographic microscopy. Structured illumination using an array of fibers is one of the recent techniques, reported in [12], for the recovery of higher spatial frequencies of the object based on holographic superresolution. In this technique an array of fibers is used to illuminate the object from different directions, different portions of the object's spectrum are passed through the passband of the imaging system, and holographic images are recorded corresponding to the different portions of the object's spectrum. The recorded holographic images are processed using an algorithm in which the images are Fourier transformed and the desired portions of the object's spectrum are filtered and combined to synthesize the overall spectrum of the object. The object is then recovered by inverse Fourier transforming the synthesized spectrum. The main drawback of the technique in [12] is that it involves an array with a large number of fibers for the synthesis of the object. The present work addresses this problem: the large fiber array used in the previous work reported in [12] is replaced by a small number of fibers at the expense of


object rotation. Three fibers are enough for three-times superresolution imaging when combined with object rotation. The three fibers are arranged on the vertices of a right-angled isosceles triangle, and one fiber at a time is used to illuminate the object transparency placed at the object plane of a 4f system. During the illumination the object is rotated in steps of π/2, π and 3π/2 with respect to its initial orientation, and the images corresponding to the different portions of the object's spectrum obtained in each step are recorded at the image plane in the presence of a reference beam, thus recording image holograms corresponding to the different parts of the object's spectrum. The holograms are processed computationally, the desired portions of the object's spectrum are retrieved and then combined to obtain the synthesized spectrum and the synthesized image, which we refer to as the super-resolved image. The next section outlines the proposed scheme for superresolution.

2. System description

The proposed technique is a superresolution technique based on holographic imaging. It involves recording different images at the image plane in the presence of a reference beam mutually coherent with the object beams. The proposed experimental arrangement consists of a He–Ne laser coupled to a 1-by-4 coupler with equal intensity outputs from the four output fibers. One of the output fibers acts as the reference beam and illuminates the image plane. The proposed technique uses a very simple illumination system consisting of three fibers which emit mutually coherent beams. The fibers are arranged with their distal ends lying in a plane perpendicular to the optical axis of a 4f imaging system, at the vertices of a right-angled isosceles triangle. Fiber 1 is positioned on the optical axis of the imaging system and fibers 2 and 3 are located at the other two vertices of the right-angled isosceles triangle. The scheme is shown in Fig. 1. Along the equal sides of the triangle the fibers are separated by the size of the limiting square aperture, so that adjacent parts of the spectrum pass through the limiting aperture without overlap and without discontinuity when the fibers are operated one at a time. The limiting aperture is a square aperture centred on the optical axis, placed at the Fourier transform plane of the 4f imaging system parallel to the object plane. We have deliberately chosen the dimensions of this limiting aperture to make the

imaging system a band-limited system. The aperture allows only a small part of the object's original spectrum to pass through and contribute as an image at the image plane, owing to the inverse Fourier transforming nature of lens 3. Lenses 1, 2 and 3 all have the same focal length f and are positioned after the fiber plane in the given order. The distal ends of the three fibers, forming the object illumination system, act as three point sources placed at the front focal plane of lens 1. Lens 1 generates three plane waves corresponding to the three fibers to illuminate the object. Only one fiber is used at a time, and fiber 1 needs to be used only once during the whole experiment. When fiber 1 is ON and fibers 2 and 3 are OFF, the part of the object spectrum that passes through the limiting aperture is the central portion of the object's spectrum, equal in size to the limiting aperture of the imaging system, marked by 1 in the assumed object spectrum shown in Fig. 2(a). The spectral portion marked by 1 is the band-pass spectrum of the object (under no rotation) when illuminated by fiber 1. The image corresponding to this portion of the spectrum is recorded as a hologram in the presence of the reference beam. When the object is illuminated by switching ON fiber 2, the portion of the object spectrum marked by 2 in Fig. 2(a) passes through the limiting aperture of the optical system (Fig. 1) and the corresponding image hologram is recorded on the charge-coupled device (CCD). In the next step, fiber 3 is switched ON and the object is illuminated; the portion of the object's spectrum marked by 3 in Fig. 2(a) then passes through the limiting aperture and the corresponding image hologram is recorded. With fiber 1 the image hologram corresponding to the on-axis portion of the object spectrum, with dimensions equal to the limiting aperture of the optical system, is recorded, whereas with fibers 2 and 3 the image holograms corresponding to the off-axis portions of the object spectrum (labeled 2 and 3 in Fig. 2(a)) are recorded. The object is now rotated counter-clockwise by π/2 so that the portion of the spectrum labeled "a" in Fig. 2(a) takes the position of portion 2 and the portion labeled "b" takes the position of portion 3. The rotated spectrum is shown in Fig. 2(b). The object after rotation is illuminated by fibers 2 and 3 one by one, and the π/2 counter-clockwise rotated portions of the spectrum labeled "a" and "b", respectively, pass through the limiting aperture of the optical system and contribute as images at the image plane, where the holograms of these images corresponding to these rotated portions of the spectrum are recorded. The object

Fig. 1. Proposed experimental setup for holography based super-resolution using in-plane object rotation and simple fiber based illumination.


Fig. 2. Rotation of an assumed object spectrum with division into 9 square portions of equal size, (a) spectrum of object in reference position, (b) rotation of spectrum in (a) counter-clockwise by π/2, (c) rotation of spectrum in (a) counter-clockwise by π, (d) rotation of spectrum in (a) counter-clockwise by 3π/2.

is further rotated counter-clockwise by another π/2 from its previous position, or by π from its initial position. This changes the spectral positions such that the spectral portion marked "c" in Fig. 2(a) now takes the position marked "2" and the spectral portion labeled "d" now takes the position of the portion labeled "3". This is shown in Fig. 2(c). The positions and orientations of the letters "c" and "d" in Fig. 2(c) show the status after the rotation has taken place. The portions of the spectrum labeled "c" and "d" in Fig. 2(a) have undergone a counter-clockwise rotation of π as shown in Fig. 2(c); these portions pass through the optical system and contribute as images at the image plane. The object is then given a further counter-clockwise rotation of another π/2 with respect to its previous position (Fig. 2(c)), or 3π/2 with respect to its initial position (Fig. 2(a)). The corresponding object spectrum is shown in Fig. 2(d). Illumination of the object with fibers 2 and 3 separately allows the spectral portions "e" and "f", respectively, to pass through the system. The proposed technique thus involves recording nine different holograms, one for each portion of the spectrum, with the orientations shown in Fig. 2. The digital holograms are imported into a computer and processed in the Fourier domain, where each hologram contributes its respective portion of the spectrum. The selected portions obtained from the spectral processing of each hologram are given the desired position and orientation in the reconstruction of the object's original spectrum. The next section presents the mathematical modeling of the proposed technique.
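Before moving to the mathematical model, the acquisition plan described above can be summarized programmatically. The following Python sketch is purely illustrative (the tile-offset convention and function names are ours, not the paper's); it lists which tile of the 3 x 3 spectral grid of Fig. 2 each (fiber, rotation) pair recovers once the filtered spectrum is rotated back, and checks that the nine recordings cover all nine tiles.

    import numpy as np

    # Tile offsets are in units of the limiting-aperture size at the Fourier plane.
    # Fiber base offsets follow the (m, n) indices introduced in Section 3.
    FIBERS = {1: (0, 0), 2: (0, 1), 3: (1, 1)}
    ROTATIONS_DEG = [0, 90, 180, 270]          # object in-plane rotations, counter-clockwise

    def recovered_tile(fiber, rot_deg):
        """Tile of the un-rotated object spectrum recovered by a (fiber, rotation) pair.
        Rotating the object CCW by theta and rotating the filtered spectrum back CW by
        theta is equivalent to rotating the fiber's base tile by -theta."""
        mx, ny = FIBERS[fiber]
        t = np.deg2rad(-rot_deg)
        c, s = int(round(np.cos(t))), int(round(np.sin(t)))
        return (c * mx - s * ny, s * mx + c * ny)

    tiles = {(1, 0): recovered_tile(1, 0)}      # central fiber, used only once
    for rot in ROTATIONS_DEG:
        for fiber in (2, 3):
            tiles[(fiber, rot)] = recovered_tile(fiber, rot)

    assert len(set(tiles.values())) == 9        # all nine tiles of the 3 x 3 grid are covered
    for (fiber, rot), tile in sorted(tiles.items()):
        print(f"fiber {fiber}, rotation {rot:3d} deg -> tile offset {tile}")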

3. Mathematical modeling of the proposed technique

In the proposed holography-based superresolution system the input object is illuminated by plane waves from different angles, produced by the combination of an input fiber array and an achromatic double-convex lens 1 as shown in Fig. 1. The fiber array consists of three fibers and the object is illuminated by three plane waves, one at a time. The fibers marked 2 and 3 are used with in-plane rotation of the object by 0, π/2, π and 3π/2 for the full recovery of the object spectrum. We set the origin of the coordinate system at the central fiber 1, with the Z-axis along the optical axis of the system shown in Fig. 1. The plane containing the distal ends of the three fibers is perpendicular to the Z-axis. The coordinates of the three fibers can be written in the form $(ma\,\hat{i},\ na\,\hat{j})$, where m and n are integer indices running along the X-axis and Y-axis, a is the side length of the square limiting aperture placed at the Fourier transform plane in Fig. 1, and $\hat{i}$ and $\hat{j}$ are the unit vectors along these axes. For fibers 1, 2 and 3 the values of (m, n) are (0, 0), (0, 1) and (1, 1), respectively. The distal ends of the fibers are directed toward the centre of lens 1 of Fig. 1, which pierces the Z-axis at (0, 0, f), where f is the focal length of lens 1.

The unit vector pointing in the direction of propagation of the plane waves generated by the combination of the fibers and lens 1 is given by

$$\hat{p} = \gamma\,(ma,\ na,\ f), \qquad \gamma = \frac{1}{\sqrt{(ma)^2 + (na)^2 + f^2}} \qquad (1)$$
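A minimal numerical check of Eq. (1); the aperture size and focal length used here are assumed values for illustration only.

    import numpy as np

    def propagation_unit_vector(m, n, a, f):
        """Unit vector p_hat = gamma*(m*a, n*a, f) of Eq. (1) for fiber index (m, n)."""
        v = np.array([m * a, n * a, f], dtype=float)
        return v / np.linalg.norm(v)            # gamma = 1/sqrt((ma)^2 + (na)^2 + f^2)

    a, f = 2e-3, 0.1                            # assumed: 2 mm aperture side, 100 mm focal length
    for (m, n) in [(0, 0), (0, 1), (1, 1)]:     # fibers 1, 2 and 3
        print((m, n), propagation_unit_vector(m, n, a, f))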

The parameter a (apart from a scaling factor) limits the bandwidth of the imaging system shown in Fig. 1; the square aperture is placed at the Fourier transform plane and is held with its axes parallel to the dimensions of the input object. Let $\vec{r} = (x, y, z)$ be any general point on the plane waves of amplitude $A_o$; then the field at the input plane ($F_{ip}$) illuminating the object is given by

$$F_{ip} = A_o\, e^{ik\,\hat{p}\cdot\vec{r}} \qquad (2)$$

The input object is held nearly in contact with lens 1, just after it. For simplicity of calculation we assume an object of infinite dimensions or, equivalently, that the maximum spatial frequency along any direction in the object is much greater than the reciprocal of the length of the object in that direction. Let the input object be represented by $g(x\cos\theta + y\sin\theta,\ -x\sin\theta + y\cos\theta)$, where θ is defined in the counter-clockwise sense and can take the four values 0, π/2, π and 3π/2 for the proposed technique. The field just after the input object ($M_{m,n,\theta}$) is

$$M_{m,n,\theta}(x, y) = A_o\, e^{ik\,\hat{p}\cdot\vec{r}}\; g(x\cos\theta + y\sin\theta,\ -x\sin\theta + y\cos\theta) \qquad (3)$$

Let $f_x$ and $f_y$ be the variables at the Fourier transform plane; the square limiting aperture of size $a/\lambda f$ can then be defined by the product of two rect functions, $\mathrm{rect}\!\left(\frac{f_x}{a/\lambda f}\right)\mathrm{rect}\!\left(\frac{f_y}{a/\lambda f}\right)$. The optical system shown in Fig. 1 is symmetric about the Fourier transform plane when considered from the object plane to the image plane. Due to this symmetry, and due to the unit magnification of the system, we may take (x, y) as the coordinates at the image plane. The field at the image plane ($U_{m,n,\theta}$) can be found from



$$U_{m,n,\theta}(x, y) = \mathfrak{F}^{-1}\!\left\{ \mathfrak{F}\{M_{m,n,\theta}(x, y)\}\ \mathrm{rect}\!\left(\frac{f_x}{a/\lambda f}\right) \mathrm{rect}\!\left(\frac{f_y}{a/\lambda f}\right) \right\} = M_{m,n,\theta}(x, y) \ast \left(\frac{a}{\lambda f}\right)^2 \mathrm{sinc}\!\left(\frac{a}{\lambda f}x\right) \mathrm{sinc}\!\left(\frac{a}{\lambda f}y\right) \qquad (4)$$

where $\ast$ is the convolution operator and $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$. The field given by Eq. (4) is made to interfere at the image plane with a reference beam. We assume that the propagation vector of the reference beam lies in the (–X)Z plane, making an angle α with the positive Z-axis. The reference beam can be written in the form

$$R_{\alpha,\phi} = A_i\, e^{\,ik(z\cos\alpha \,-\, x\sin\alpha) \,+\, i\phi\cos\beta} \qquad (5)$$

where $\phi$ is the phase of the reference beam, which is


controllable with four possible values, 0, π/2, π and 3π/2; this helps to determine the unknown phase factor $\cos\beta$ [13]. In the absence of this unknown phase factor we may set $\phi = 0$. The variable z takes a constant value at the image plane, equal to 5f for the scheme shown in Fig. 1. The constant phase factor $\exp(ikz\cos\alpha)$ may also be dropped. The simplified form of the reference beam in Eq. (5), appropriate for the present case, can then be written as

$$R_{\alpha}(x, y) \approx A_i\, e^{-ik\sin(\alpha)\,x} \qquad (6)$$
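The fields of Eqs. (2)–(6) are straightforward to build numerically. The sketch below is an illustrative implementation under assumed parameters (grid size, pixel pitch, wavelength, focal length, aperture side and tilt are not taken from the paper); it forms the illuminated, possibly rotated object, clips its spectrum with the square aperture to obtain U of Eq. (4), and builds the tilted reference of Eq. (6) on the same grid.

    import numpy as np

    N = 512                                       # samples per side (assumed)
    lam, f, a = 633e-9, 0.1, 2e-3                 # wavelength, focal length, aperture side (assumed)
    dx = 5e-6                                     # image-plane pixel pitch (assumed)
    x = (np.arange(N) - N // 2) * dx
    X, Y = np.meshgrid(x, x)
    fx = np.fft.fftshift(np.fft.fftfreq(N, dx))
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / lam

    def image_field(g, m, n, theta, A_o=1.0):
        """Band-limited image field U_{m,n,theta} of Eq. (4) for a square object array g."""
        g_rot = np.rot90(g, k=int(round(theta / (np.pi / 2))))     # in-plane rotation of the object
        gamma = 1.0 / np.sqrt((m * a) ** 2 + (n * a) ** 2 + f ** 2)
        tilt = np.exp(1j * k * gamma * (m * a * X + n * a * Y))    # plane-wave illumination, Eq. (2)
        M = A_o * tilt * g_rot                                     # field just after the object, Eq. (3)
        passband = (np.abs(FX) <= a / (2 * lam * f)) & (np.abs(FY) <= a / (2 * lam * f))
        spec = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(M)))
        return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(spec * passband)))  # Eq. (4)

    def reference(alpha, A_i=1.0):
        """Tilted reference beam of Eq. (6)."""
        return A_i * np.exp(-1j * k * np.sin(alpha) * X)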

The two fields, one given by Eq. (4) and the other given by Eq. (6), interfere at the image plane where the CCD camera records the hologram. The interferogram recorded by the CCD is given by

$$I_{m,n,\theta}(x, y) = \left| U_{m,n,\theta} + R_{\alpha} \right|^2 = U_{m,n,\theta} R_{\alpha}^{*} + U_{m,n,\theta}^{*} R_{\alpha} + \left| U_{m,n,\theta} \right|^2 + \left| R_{\alpha} \right|^2 \qquad (7)$$

Either the term $U_{m,n,\theta} R_{\alpha}^{*}$ or $U_{m,n,\theta}^{*} R_{\alpha}$ can be processed to obtain $U_{m,n,\theta}$, provided $\alpha \geq \sin^{-1}(a/f)$. After recording the holograms on the CCD (Eq. (7)), the next step is to process them numerically on a computer to obtain $U_{m,n,\theta}$. The first numerical step is to take the Fourier transform of Eq. (7) and then separate out either $\mathfrak{F}\{U_{m,n,\theta} R_{\alpha}^{*}\}$ or $\mathfrak{F}\{U_{m,n,\theta}^{*} R_{\alpha}\}$ for further processing. Labeling the desired term (des for desired), e.g. the first term on the right hand side of Eq. (7), by $I^{des}_{m,n,\theta}$, we obtain Eq. (8) as follows (G denotes the Fourier transform of the object function g):

$$\mathfrak{F}\left\{ I^{des}_{m,n,\theta} \right\} = \mathfrak{F}\left\{ U_{m,n,\theta} R_{\alpha}^{*} \right\} = \left[ \mathfrak{F}\{ M_{m,n,\theta} \}\ \mathrm{rect}\!\left(\frac{f_x}{a/\lambda f}\right) \mathrm{rect}\!\left(\frac{f_y}{a/\lambda f}\right) \right] \ast \mathfrak{F}\left\{ R_{\alpha}^{*} \right\}$$
$$= A_o A_i\, e^{i\gamma f z}\, G\!\left[ \left(f_x - \frac{a}{\lambda f} + \frac{\gamma m a}{\lambda}\right)\cos\theta + \left(f_y + \frac{\gamma n a}{\lambda}\right)\sin\theta,\ -\left(f_x - \frac{a}{\lambda f} + \frac{\gamma m a}{\lambda}\right)\sin\theta + \left(f_y + \frac{\gamma n a}{\lambda}\right)\cos\theta \right] \mathrm{rect}\!\left(\frac{f_x - a/\lambda f}{a/\lambda f}\right) \mathrm{rect}\!\left(\frac{f_y}{a/\lambda f}\right) \qquad (8)$$
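The numerical recovery described around Eqs. (7)–(9) can be sketched as follows, reusing the illustrative grid and helpers defined after Eq. (6) (all names and parameter choices are ours): the hologram intensity is formed, Fourier transformed, and the band carrying the U·R* term is cropped around its carrier frequency and re-centred in an otherwise blank spectrum.

    def record_hologram(U, R):
        """CCD interferogram of Eq. (7)."""
        return np.abs(U + R) ** 2

    def extract_centred_band(I, alpha):
        """Filter the U*conj(R) term from the hologram spectrum and re-centre it (Eqs. (8)-(9)).
        Assumes the carrier tilt alpha is large enough to separate the three spectral terms."""
        S = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(I)))
        carrier = np.sin(alpha) / lam                  # carrier frequency along +fx
        half = a / (2 * lam * f)                       # half-width of the pass band
        step = fx[1] - fx[0]
        ic = int(np.argmin(np.abs(fx - carrier)))      # column index of the carrier
        w = int(round(half / step))                    # half-width in pixels
        rows = slice(N // 2 - w, N // 2 + w + 1)
        centred = np.zeros_like(S)
        centred[rows, N // 2 - w:N // 2 + w + 1] = S[rows, ic - w:ic + w + 1]
        return centred                                 # centred band of Eq. (9), still rect-limited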

As is obvious, the terms in Eq. (8) are shifted to the right by $a/\lambda f$ (i.e. along the $+f_x$ axis); this shift is caused by the tilted reference beam used in recording the image hologram of Eq. (7). Eq. (8) also indicates a shift of the object spectrum due to the plane-wave illumination, the amount of this shift being $\left(\frac{\gamma m a}{\lambda},\ \frac{\gamma n a}{\lambda}\right)$. The shift caused by the reference beam makes the desired term in Eq. (8) separable from the Fourier transforms of the other three terms present in Eq. (7). The spectral part given on the right hand side of Eq. (8) is shifted to the centre of a blank image; by doing so we remove the spectral offset caused by the reference beam, and Eq. (8) takes the following form.

$$\mathfrak{F}\left\{ I^{des}_{m,n,\theta} \right\}_{\mathrm{centred}} = A_o A_i\, e^{i\gamma f z}\, G\!\left[ \left(f_x + \frac{\gamma m a}{\lambda}\right)\cos\theta + \left(f_y + \frac{\gamma n a}{\lambda}\right)\sin\theta,\ -\left(f_x + \frac{\gamma m a}{\lambda}\right)\sin\theta + \left(f_y + \frac{\gamma n a}{\lambda}\right)\cos\theta \right] \mathrm{rect}\!\left(\frac{f_x}{a/\lambda f}\right) \mathrm{rect}\!\left(\frac{f_y}{a/\lambda f}\right) \qquad (9)$$

The object rotation parameter θ takes four values (0, π/2, π, 3π/2) and, corresponding to each value of θ, three combinations of plane waves due to the three fibers are available. Eq. (9) thus gives nine different spectral parts of the whole object spectrum. The nine spectral parts bounded by the two rect functions are shifted to their actual positions, the shift of each spectral part being $f_x = -\frac{\gamma m a}{\lambda}$, $f_y = -\frac{\gamma n a}{\lambda}$. After making the necessary shift to each spectral part given by Eq. (9), the following modified equation can be written; in writing Eq. (10) we have also dropped the constant phase term $e^{i\gamma f z}$.

$$\mathfrak{F}\left\{ I^{des}_{m,n,\theta} \right\}_{\mathrm{centred+shifted}} = A_o A_i\, G\!\left[ f_x\cos\theta + f_y\sin\theta,\ -f_x\sin\theta + f_y\cos\theta \right] \mathrm{rect}\!\left(\frac{f_x - \gamma m a/\lambda}{a/\lambda f}\right) \mathrm{rect}\!\left(\frac{f_y - \gamma n a/\lambda}{a/\lambda f}\right) \qquad (10)$$

Eq. (10) stands for nine measurements. Combining these nine measurements into a single equation requires rect functions broadened three times, and the resulting equation takes the following form:

$$\mathfrak{F}\left\{ I^{des} \right\} = A_o A_i\, G(f_x, f_y)\, \mathrm{rect}\!\left(\frac{f_x}{3a/\lambda f}\right) \mathrm{rect}\!\left(\frac{f_y}{3a/\lambda f}\right) \qquad (11)$$

Eq. (11) gives the synthesized spectrum, and $I^{des}$ is the synthesized image, which is obtained from

$$I^{des} = |A_o|^2 |A_i|^2\, \left| g(x, y) \ast \left(\frac{3a}{\lambda f}\right)^2 \mathrm{sinc}\!\left(\frac{3a}{\lambda f}x\right) \mathrm{sinc}\!\left(\frac{3a}{\lambda f}y\right) \right|^2 \qquad (12)$$

The band-limited image can be obtained from Eqs. (3) and (4) by using the single fiber 1 (m = 0, n = 0) and the un-rotated input object (θ = 0):

$$I^{\mathrm{band\ limited}} = |A_o|^2\, \left| g(x, y) \ast \left(\frac{a}{\lambda f}\right)^2 \mathrm{sinc}\!\left(\frac{a}{\lambda f}x\right) \mathrm{sinc}\!\left(\frac{a}{\lambda f}y\right) \right|^2 \qquad (13)$$

Comparison of the band-limited image given by Eq. (13) with the synthesized image given by Eq. (12) reveals that the synthesized image is three times better resolved, because the main lobe of the point spread function (given by the sinc functions) in Eq. (12) is three times narrower than that of the band-limited image. Thus Eq. (12) gives a three-times super-resolved image for the proposed scheme of Fig. 1. The next section presents simulation results to support this idea.
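Before turning to the simulations, a quick numeric illustration of the resolution gain implied by Eqs. (12) and (13), using assumed values of wavelength, focal length and aperture side: the first zero of the synthesized-image sinc point spread function falls at one third of the distance of the band-limited one.

    lam, f, a = 633e-9, 0.1, 2e-3                  # assumed wavelength, focal length, aperture side
    first_zero_band_limited = lam * f / a          # first zero of sinc(a*x/(lam*f)), Eq. (13)
    first_zero_synthesized = lam * f / (3 * a)     # first zero of sinc(3*a*x/(lam*f)), Eq. (12)
    print(first_zero_band_limited / first_zero_synthesized)   # -> 3.0, i.e. a 3x narrower main lobe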

4. Simulation results

For the simulation of the proposed technique we have chosen the Lena image as the input object transparency. The original image is shown in Fig. 3(a) with its corresponding spectrum in Fig. 3(b). The object shown in Fig. 3(a) is regarded as the input object without any rotation. The object in Fig. 3(a) is illuminated by fiber 1 using the setup shown in Fig. 1. The limiting aperture at the Fourier plane is a square aperture that passes only one third of the input object's spatial information towards the image plane. In the presence of the coherent reference beam at the image plane, as shown in Fig. 1, interference occurs between the image formed through the 4f system and the reference beam. A CCD at the image plane (not shown in Fig. 1) records the hologram of the input object. The hologram recorded when the input object is illuminated by the light from fiber 1 is shown in Fig. 4(a). With illumination from fiber 1, the central portion of the object spectrum, indicated by label 1 in Fig. 2(a), passes through the 4f system of Fig. 1. The input object is held in the same orientation and is then illuminated by the light from fiber 2. The recorded hologram of the input object with illumination from fiber 2 is shown in Fig. 4(e). With illumination from fiber 2, the portion of the object spectrum indicated by label 2 in Fig. 2(a) passes through the 4f system of Fig. 1 and contributes at the image plane, where the hologram is recorded on the CCD. Keeping the object fixed, it is then illuminated by the light from fiber 3 and the corresponding hologram is shown in Fig. 4(i). With illumination from fiber 3, the portion of the object spectrum indicated by label 3 in Fig. 2(a) passes through the 4f system of Fig. 1 and contributes at the image plane, where the hologram is recorded on the CCD.


Fig. 3. Image of Lena in (a) with its spectrum in (b).

Three holograms are recorded using fibers 1, 2 and 3 for one orientation of the input object. The object is then rotated counter-clockwise by π/2 and three more holograms are recorded using fibers 1, 2 and 3. These are shown in Fig. 5(a), (e) and (i), corresponding to illumination by fibers 1, 2 and 3, respectively. The parts of the spectrum labeled '1', 'a' and 'b' in Fig. 2(b), corresponding to illumination by fibers 1, 2 and 3, then pass through the limiting aperture of the imaging system and contribute at the image plane in the form of an image, where the corresponding holograms are recorded. The input object is further rotated counter-clockwise by another π/2 from its previous position, or by π from its initial position shown in Fig. 3(a). Using illumination from fibers 1, 2 and 3, three more holograms are recorded, which are shown in Fig. 6(a), (e) and (i), respectively. The parts of the spectrum labeled '1', 'c' and 'd' in Fig. 2(c), corresponding to illumination by fibers 1, 2 and 3, then pass through the limiting aperture of the imaging system and contribute at the image plane, where the corresponding holograms are recorded. The input object is given a further counter-clockwise rotation of another π/2 from its previous position, or 3π/2 from its initial position shown in Fig. 3(a), and three more holograms are recorded, which are shown in Fig. 7(a), (e) and (i), corresponding to illumination by fibers 1, 2 and 3, respectively. The parts of the spectrum labeled '1', 'e' and 'f' in Fig. 2(d), corresponding to illumination by fibers 1, 2 and 3, then pass through the limiting aperture of the imaging system and contribute at the image plane, where the corresponding holograms are recorded. A total of 12 holograms are recorded (8 non-redundant and 4 redundant; in practice 9 holograms suffice, obtained when fiber 1 is used once and fibers 2 and 3 are used for all rotations of the object), that is, three holograms corresponding to each rotation of the input object, with the possible rotation angles 0, π/2, π and 3π/2, all in the same sense of rotation, either clockwise or counter-clockwise. The recording of these holograms allows an extension of the passband of the imaging system by three times. If the bandwidth extension in a super-resolved image is required to be N times the bandwidth of the passband, then the number of fibers required in the setup is (N² + 3)/4, the number of holograms to be recorded is (N² + 3), and the positions of the fibers on the illumination plane are (–ma, –na, 0), where m and n are integer indices running along the X- and Y-axes. For each n = 1, …, (N–1)/2, m takes the values 0, …, (N–1)/2, in addition to the central fiber for which n = 0 and m = 0. The number of plane waves $\exp(i\vec{k}\cdot\vec{r})$ generated by the fibers equals the number of fibers, with propagation vector $\vec{k} = k\hat{p}$, where $\hat{p}$ is defined by Eq. (1).
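The fiber-count rule quoted above can be checked with a short sketch. The enumeration of (m, n) indices below (central fiber plus m = 0, …, (N–1)/2 for each n = 1, …, (N–1)/2) is our reading of the text rather than a verified layout, but it reproduces the (N² + 3)/4 fiber count and the hologram counts for N = 3.

    def fiber_layout(n_ext):
        """Fiber indices (m, n) for an n_ext-times bandwidth extension (n_ext odd)."""
        half = (n_ext - 1) // 2
        fibers = [(0, 0)]                                        # central fiber
        fibers += [(m, n) for n in range(1, half + 1) for m in range(0, half + 1)]
        return fibers

    for n_ext in (3, 5, 7):
        fibers = fiber_layout(n_ext)
        non_redundant = 1 + 4 * (len(fibers) - 1)                # central fiber once, off-axis fibers at 4 rotations
        assert len(fibers) == (n_ext ** 2 + 3) // 4
        print(f"N={n_ext}: {len(fibers)} fibers, {non_redundant} non-redundant holograms, "
              f"{n_ext ** 2 + 3} holograms if the central fiber is used at every rotation")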

After recording the 12 holograms for the case of N = 3 (extension of bandwidth to three times the passband bandwidth), the next step is to process these holograms numerically on a computer for the recovery of the other parts of the spectrum, which are obtained by illumination from fibers 2 and 3 along with rotation of the object by π/2, π and 3π/2. Each hologram is Fourier transformed and each transformed image shows three parts, as is obvious from Figs. 4(b, f, j), 5(b, f, j), 6(b, f, j) and 7(b, f, j). In each of these images, the spectrum on the right hand side and the spectrum on the left hand side are complex conjugates of each other; the right hand side spectrum is the actual part of the original object's spectrum and is thus filtered out. The central part of the spectrum contains the spectrum of the square of the original object plus the reference beam and is thus not useful. After filtering the right hand spectra from each of the images shown in Figs. 4(b, f, j), 5(b, f, j), 6(b, f, j) and 7(b, f, j), these spectra are positioned in the centre of blank images, as shown in Figs. 4(c, g, k), 5(c, g, k), 6(c, g, k) and 7(c, g, k), so that the inverse Fourier transform algorithms can be applied correctly. The filtered and centred spectra corresponding to each hologram are inverse Fourier transformed and the images corresponding to these parts of the spectrum (not mandatory for the proposed technique) are obtained; they are shown in Figs. 4(d, h, l), 5(d, h, l), 6(d, h, l) and 7(d, h, l). For each rotational position of the original object, three holograms are recorded and three filtered spectral parts are obtained. These spectral parts are then repositioned according to the fibers used to obtain them. Four blank images are required to combine the spectral parts obtained with the three fibers for the four rotational positions of the original object shown in Fig. 3(a). With fiber 1, the filtered spectra shown in Figs. 4(c), 5(c), 6(c) and 7(c) are positioned at the centre of the blank images; with fiber 2, the filtered spectra shown in Figs. 4(g), 5(g), 6(g) and 7(g) are positioned at the position labeled '2' in Fig. 2(a) of the same blank images; and with fiber 3, the filtered spectra shown in Figs. 4(k), 5(k), 6(k) and 7(k) are positioned at the position labeled '3' in Fig. 2(a) of the same blank images. For each rotational position of the original object with illumination by fibers 1, 2 and 3, the combined spectra are shown in Figs. 4(m), 5(m), 6(m) and 7(m), respectively. The inverse Fourier transforms of these combined spectra are shown respectively in Figs. 4(n), 5(n), 6(n) and 7(n). These images (not mandatory for the proposed technique) can be compared qualitatively with the band-pass image shown in Fig. 4(d). The combined spectra shown in Figs. 4(m), 5(m), 6(m) and 7(m) correspond to the labels (1, 2, 3), (1, a, b), (1, c, d) and (1, e, f) in Fig. 2(a), (b), (c) and (d), respectively. The spectrum in Fig. 4(m) does not require any rotation, as the three spectral parts in


Fig. 4. Recording of image holograms and their computational processing for the Lena's image as the object with 0° rotation and under illumination by three fibres: (a), (e), (i) are holograms recorded with fibres 1, 2 and 3, respectively, with respective spectra shown in (b), (f) and (j) with filtering and centering of right hand side spectra from each are shown in (c), (g) and (k) and the images corresponding to the filtered spectra are shown in (d), (h) and (l). Synthesized spectrum obtained by combining the spectra shown in (c), (g) and (k) is shown in (m) with the corresponding synthesized image in (n).

Fig. 4(m) are in their correct positions and orientations. To correct the positions and orientations of the three spectral parts in each of Figs. 5(m), 6(m) and 7(m), they are rotated clockwise by π/2, π and 3π/2, respectively; the resulting spectra are shown in Figs. 5(o), 6(o) and 7(o), respectively, with their corresponding images (not mandatory for the proposed technique) in Figs. 5(p), 6(p) and 7(p).
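The assembly described in this and the preceding paragraphs can be sketched as below, reusing the illustrative grid and the extract_centred_band helper from Section 3 (all names and index conventions are ours, not the paper's): each centred band is shifted to its fiber's tile, the per-rotation composites are rotated back clockwise, and the results are merged into one synthesized spectrum whose inverse transform is the super-resolved image.

    def reposition(band, m, n):
        """Shift a centred band to its tile in the composite spectrum (Eq. (10))."""
        gamma = 1.0 / np.sqrt((m * a) ** 2 + (n * a) ** 2 + f ** 2)
        step = fx[1] - fx[0]
        sx = int(round(gamma * m * a / lam / step))
        sy = int(round(gamma * n * a / lam / step))
        return np.roll(np.roll(band, sx, axis=1), sy, axis=0)

    def synthesize(centred_bands):
        """centred_bands[(fiber, k)]: centred band for fiber 1, 2 or 3 and rotation index k
        (object rotated counter-clockwise by k*pi/2)."""
        total = centred_bands[(1, 0)].copy()                     # central tile, fiber 1 used once
        for rot_index in range(4):
            composite = np.zeros_like(total)
            for fiber, (m, n) in [(2, (0, 1)), (3, (1, 1))]:
                composite += reposition(centred_bands[(fiber, rot_index)], m, n)
            total += np.rot90(composite, k=-rot_index)           # undo the object's CCW rotation
        image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(total)))
        return total, np.abs(image)                              # synthesized spectrum and image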

The final step in the computational processing of the proposed technique is to combine the four sets of spectra shown in Figs. 4(m), 5(o), 6(o) and 7(o) to obtain the synthesized spectrum. The synthesized image and the corresponding synthesized spectrum are shown in Fig. 8(e) and (f), respectively, and, for comparison, the original object used in the simulation and its corresponding spectrum, and the band-limited object and the


Fig. 5. Recording of image holograms and their computational processing for the Lena's image as the object with π/2 counter-clockwise rotation and under illumination by three fibres: (a), (e), (i) are holograms recorded with fibres 1, 2 and 3, respectively, with respective spectra shown in (b), (f) and (j) with filtering and centering of right hand side spectra from each are shown in (c), (g) and (k) and the images corresponding to the filtered spectra are shown in (d), (h) and (l). Synthesized spectrum obtained by combining the spectra shown in (c), (g) and (k) is shown in (m) with the corresponding synthesized image in (n). The clockwise rotation of the synthesised spectrum in (m) and the synthesised image in (n) by π/2 are shown in (o) and (p), respectively.

corresponding spectrum, are also shown in Fig. 8(a, b) and (c, d), respectively. The synthesized spectrum in Fig. 8(f), when normalized, matches the original object spectrum shown in Fig. 8(b) to four decimal places. The synthesized image shown in Fig. 8(e) also matches the original object shown in Fig. 8(a) when the two are compared in normalized form.

5. General discussion

It is appropriate to give a brief comparison between the proposed technique and the technique given in [12]. The two techniques are based on the same principle: in both, holograms of the band-limited images of the object


Fig. 6. Recording of image holograms and their computational processing for the Lena's image as the object with π counter-clockwise rotation and under illumination by three fibres: (a), (e), (i) are holograms recorded with fibres 1, 2 and 3, respectively, with respective spectra shown in (b), (f) and (j) with filtering and centering of right hand side spectra from each are shown in (c), (g) and (k) and the images corresponding to the filtered spectra are shown in (d), (h) and (l). Synthesized spectrum obtained by combining the spectra shown in (c), (g) and (k) is shown in (m) with the corresponding synthesized image in (n). The clockwise rotation of the synthesised spectrum in (m) and the synthesised image in (n) by π are shown in (o) and (p), respectively.

are recorded. Both systems are 4f systems in which a limiting aperture is placed at the Fourier plane, and both use a fixed reference beam. The illumination system in [12] is based on an array of 25 fibers arranged in 5 rows and 5 columns, whereas 3 fibers are arranged as shown in Fig. 1 of the proposed method, with the size of the limiting aperture in [12] equal to half the size

of the limiting aperture used in the proposed method. Due to different sizes of limiting apertures, the technique in [12] addresses extension of bandwidth five times that of the corresponding limiting aperture, whereas the proposed method addresses extension of bandwidth three times that of the corresponding limiting aperture but the overall achievement in terms of


Fig. 7. Recording of image holograms and their computational processing for the Lena's image as the object with 3π/2 counter-clockwise rotation and under illumination by three fibres: (a), (e), (i) are holograms recorded with fibres 1, 2 and 3, respectively, with respective spectra shown in (b), (f) and (j) with filtering and centering of right hand side spectra from each are shown in (c), (g) and (k) and the images corresponding to the filtered spectra are shown in (d), (h) and (l). Synthesized spectrum obtained by combining the spectra shown in (c), (g) and (k) is shown in (m) with the corresponding synthesized image in (n). The clockwise rotation of the synthesised spectrum in (m) and the synthesised image in (n) by 3π/2 are shown in (o) and (p), respectively.

super resolution is the same in both cases. The planar input object was fixed in the work contained in [12], whereas the input object is rotated (in-plane rotation) by 0, π/2, π, 3π/2 during illumination by the three fibers in the proposed work. The reduction in the number of fibers in the proposed method in comparison with the method in [12] is due to the use of in-plane rotation of the input object by angles separated by π/2.

For a true comparison, we are now assuming that both the systems use the same size of the limiting aperture equal to the size of the limiting aperture mentioned in the proposed method. For a three times super resolution, the proposed technique would require three fibers to be arranged as shown in Fig. 1 with object in-plane rotation by 0, π/2, π, 3π/2, i.e. illumination of the object by three fixed plane waves along with object in-plane rotation at


Fig. 8. Original image of Lena in (a) with its spectrum in (b) (from Fig. 3), band-limited image in (c) with its spectrum in (d) (from Fig. 4(c, d)) and the synthesised image of Lena in (e) with its synthesised spectrum in (f) (from Figs. 4(m), 5(o), 6(o), and 7(o)).

four discrete positions achieves three-times superresolution, whereas the work contained in [12] would require 9 fibers arranged in 3 rows and 3 columns but with the input object fixed, i.e. illumination of the object by nine fixed plane waves with no object rotation. With the same size of limiting aperture, the technique reported in [12] uses more fibers than the current technique, and handling more fibers is a relatively more difficult task, so the current technique has an advantage in this respect over the previous technique [12]. The current technique additionally uses in-plane object rotation at four discrete positions separated by π/2. Object rotation is a relatively convenient task in comparison with the task of arranging

six additional fibers as in [12], although object in-plane rotation in the current technique is of course an overhead compared with the fixed object used in [12]. The holograms recorded in the two techniques are equal in number; however, the processing of the recorded holograms for the synthesis of the super-resolved object spectrum involves more steps and more challenges for the current technique than for the technique in [12]. The technique in [12] requires more input optical power and poses more challenges in setting up the larger number of fibers in comparison with the current technique. Due to the similarity of the experimental set-ups of the two techniques, the remaining experimental challenges and difficulties may be considered equal.


In view of the above discussion, the current technique may be given an edge over the technique reported in [12] due to its simple illumination structure.

The formation of the synthesized spectrum in the proposed technique was achieved from band-limited spectral parts obtained by illuminating the object with the three fibers one by one, coupled with in-plane rotation of the object. The formation of the synthesized spectrum as explained in Section 4 was based purely on ideal conditions. In a real physical situation there may be a separation or an overlap of neighboring spectral parts, depending on whether the separation between the fibers is larger or smaller than that dictated by the limiting aperture, and the synthesized spectrum may be corrupted. To avoid this situation we suggest a calibration of the experimental setup using two amplitude cosinusoidal object transparencies overlapped on a single glass plate, with spatial cutoff frequencies along the vertical and diagonal axes, respectively, equal to twice the cutoff frequencies set by the limiting aperture along these two axes. Illumination of such an object with the three fibers should focus the corresponding point sources at the centre of the limiting aperture, and the holograms would be recorded at the image plane. The synthesized spectrum would be reconstructed from these holograms and finally a synthesized image would be formed. The spatial frequencies in the synthesized image in the vertical and diagonal directions can be compared with the corresponding data of the original object in order to estimate and measure any separation or overlap between the neighboring spectral parts, and corrections can be made at the illumination end to resolve such problems. In a simulation code modeling the current technique, the fibers can be set up very accurately both in position and in orientation, and an accurately made cosinusoidal object transparency can be used to evaluate the performance of the system, where the effect of overlapping spectral contents of the object can be demonstrated easily. In simulation, system parameters can be modeled very accurately and their effects on the synthesized images can be demonstrated, which is sometimes not possible with experimentation alone. Experimental work in the field of digital holography, part of which has been reported in [13], provides useful information concerning the construction and reconstruction of holograms and related issues, and such technical information can be straightforwardly extended to the current technique as far as experimentation is concerned.

The illumination system of the current technique is based on the use of three fibers, and we assume that they have nearly equal mode field diameters. If the fibers have different mode field diameters, there may be an overlap of the neighboring spectral parts while passing through the limiting aperture; this would require a calibration procedure, similar to the one mentioned in the foregoing paragraph, for the correction of spectral overlap during the experiment.

The current technique has been demonstrated with a 4f imaging system as far as the simulation is concerned. This is because a 4f system is generally used as a coherent imaging system for proof-of-principle demonstrations, as it provides the desired flexibility in controlling the bandwidth of the imaging system by manipulating the size of the limiting aperture.
This flexibility is missing when working with real imaging systems like microscopes. Once the technique has been demonstrated using a 4f system, it may be adapted to work with real imaging systems such as ordinary or specialized microscopes. To adapt the current 4f-based technique to a general microscope-based system, the configuration of the fibers in the illumination system remains the same; however, the separation between the fibers would change according to the cutoff frequency of the microscope objective lens, given by (2NA)/λ (where NA is


the numerical aperture of the objective lens and the factor 2 is due to the equal numerical aperture of the illuminating system); for the 4f system of Fig. 1 this quantity has been denoted by the reciprocal of the parameter a. We have given a relationship in Section 3 between the cutoff frequency of the 4f system and the separation between the fibers. The same relationship can also be used to work out the separation between the fibers for a microscopy-based imaging system. The part of the 4f system of Fig. 1 just after the object plane is replaced by the microscope objective lens, the microscope tube (if necessary) and the eyepiece. The eyepiece may be coupled with a CCD device, where a coherent reference beam from the other fiber, similar to fiber 4 in Fig. 1, may be used to record band-limited holograms, which can similarly be processed to obtain the synthesized spectrum and synthesized image. An appropriate alternative scheme for converting the current technique into a microscopy-based system would work equally well. If we take NA = 0.6 (which is typical for an ordinary microscope), the size of the limiting aperture becomes 0.53 mm. In Section 3 we have given calculations for the general size of the limiting aperture, which may be specified for any given numerical aperture of the objective lens.

6. Conclusion

Among the many other holography-based superresolution techniques, the main emphasis of our technique is to provide a simpler method for obtaining three-times super-resolved images. The technique uses a simple illumination method based on three fibers and simple in-plane object rotation by 0, π/2, π and 3π/2. The three fibers lie at the vertices of a right-angled isosceles triangle with one vertex (the central fiber) on the optical axis of the system. The central fiber is used only once, whereas the other two fibers are used for each in-plane rotation of the object. A total of nine non-redundant holograms are recorded to synthesize the object spectrum. If the central fiber is also used for all rotational positions of the object, then in addition to the eight non-redundant off-axis holograms, four redundant holograms are also obtained, and the additional information from these redundant holograms can be used to determine the phase of the reference beam if it changes from measurement to measurement. The mathematical modeling has been presented in a general form and does not require extension for five-times or seven-times superresolution. The general form of the illumination has also been addressed in the section on simulation results, outlining the shape of the fiber array which can be used for five-times or seven-times superresolution. The simulation results have been carefully obtained to support the mathematical foundation of the proposed technique, and each aspect of the proposed method has been elaborated with simulation results. Digital recording of holograms and extraction of object spectra are no longer new; keeping this fact in mind, we have not performed the experimental part of the technique, in the belief that no new information would be obtained and that the simulation results are sufficient to establish and elaborate the main features of the proposed method. We believe that the proposed technique may be useful as a simplified holography-based method of superresolution.

Acknowledgment We acknowledge Higher Education Commission of Pakistan (HEC) for provision of literature and PIEAS for providing technical facilities.


References [1] Mico V, Zalevsky Z, García-Martínez P, García J. Superresolved imaging in digital holography by superposition of tilted wavefronts. Appl Opt 2006;45:822–8. [2] Mico V, Zalevsky Z, García J. Superresolution optical system by common-path interferometry. Opt Express 2006;14:5168–77. [3] Mico V, Zalevsky Z, Ferreira C, García J. Superresolution digital holographic microscopy for three dimensional samples. Opt Express 2008;16:19260–70. [4] Phan AH, Park JH, Kim N. Super-resolution digital holographic microscopy for three dimensional sample using multipoint light source illumination. Jpn J Appl Phys 2011;50:092503. [5] Paturzo M, Merola F, Grilli S, De Nicola S, Finizio A, Ferraro P. Super-resolution in digital holography by a two dimensional dynamic phase grating. Opt Express 2008;16:17107–18.

[6] Mudassar AA, Hussain A. Superresolution of active spatial frequency heterodyning using holographic approach. Appl Opt 2010;49:3434–41. [7] Mudassar A, Harvey AR, Greenaway AH, Jones JDC. Resolution beyond classical limits with spatial frequency heterodyning. Chin Opt Lett 2006;4:148. [8] Mico V, Zalevsky Z, Garcia Martinez P, Garcia J. Synthetic aperture super resolution with multiple off-axis holograms. J Opt Soc Am A 2006;23:3162–70. [9] Ueda M, Sato T. Superresolution by holography. J Opt Soc Am A 1971;61:418–9. [10] Mitsuhiro U, Takuso S, Masato K. Superresolution by multiple super-position of image holograms having different carrier frequencies. J Mod Opt 1973;20:403–10. [11] Massig JH. Digital off-axis holography with a synthetic aperture. Opt Lett 2002;27:2179–81. [12] Hussain A, Mudassar AA. Holography based super resolution. Opt Comm 2012;285:2303–10. [13] Rastogi PK. Digital speckle pattern interferometry and related techniques. England: John Wiley and Sons Ltd.; 2001.