
Showing papers on "Depth of field published in 2022"


Journal ArticleDOI
TL;DR: A new multi-focus image fusion method based on sparse representation (DWT-SR) is proposed; decomposing the image into multiple frequency bands reduces the computational burden, and multi-channel GPU-parallel processing further reduces the algorithm's running time.
Abstract: In lens imaging, when a three-dimensional object is projected through a convex lens onto a photosensitive element, object points on the focal plane form sharp image points, while points far from the focal plane form blurred ones. Within a limited range in front of and behind the focal plane the image is considered sharp; outside this range it is considered blurred. In microscopic scenes, an electron microscope is usually used as the imaging device, which essentially eliminates defocus between the lens and the object; most of the blur is instead caused by the microscope's shallow depth of field. This paper analyzes the causes of defocus in a video microscope, identifies the shallow depth of field as the main one, and accordingly chooses multi-focus image fusion as the deblurring method. We propose a new multi-focus image fusion method based on sparse representation (DWT-SR). Decomposing the image into multiple frequency bands reduces the computational burden, and multi-channel GPU-parallel processing further shortens the running time. The results indicate that the DWT-SR algorithm yields higher contrast and much more detail, and it mitigates the long runtime of dictionary-trained sparse approximation.
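The wavelet side of such a fusion scheme can be sketched without the sparse-representation stage. Below is a minimal, hypothetical illustration (not the paper's DWT-SR implementation): a single-level 2D Haar transform, an average rule for the approximation band, and a max-absolute rule for the detail bands, so each pixel's high-frequency content comes from whichever input is sharper there.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition -> (LL, LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def fuse(img1, img2):
    """Average the approximation band; keep the larger-magnitude detail coefficients."""
    c1, c2 = haar2d(img1), haar2d(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(b1) >= np.abs(b2), b1, b2)
               for b1, b2 in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *details)

img = np.arange(64.0).reshape(8, 8)
print(np.allclose(ihaar2d(*haar2d(img)), img))  # → True
```

Fusing two identical images returns the image unchanged, which is a quick sanity check on the transform pair.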

48 citations


Proceedings ArticleDOI
27 Jul 2022
TL;DR: Holographic Glasses as discussed by the authors are composed of a pupil-replicating waveguide, a spatial light modulator, and a geometric phase lens to create holographic images in a lightweight and thin form factor.
Abstract: We present Holographic Glasses, a holographic near-eye display system with an eyeglasses-like form factor for virtual reality. Holographic Glasses are composed of a pupil-replicating waveguide, a spatial light modulator, and a geometric phase lens to create holographic images in a lightweight and thin form factor. The proposed design can deliver full-color 3D holographic images using an optical stack of 2.5 mm thickness. A novel pupil-high-order gradient descent algorithm is presented for the correct phase calculation with the user’s varying pupil size. We implement benchtop and wearable prototypes for testing. Our binocular wearable prototype supports 3D focus cues and provides a diagonal field of view of 22.8° with a 2.3 mm static eye box and additional capabilities of dynamic eye box with beam steering, while weighing only 60 g excluding the driving board.

20 citations



Journal ArticleDOI
TL;DR: In this paper, a bifunctional reconfigurable metalens for 3D depth imaging is proposed that dynamically switches between an extended depth-of-field (EDOF) PSF and a depth-sensitive double-helix PSF (DH-PSF), using the former to reconstruct clear images at each depth and the latter to estimate depth accurately.
Abstract: Depth imaging is very important for many emerging technologies, such as artificial intelligence, driverless vehicles, and facial recognition. However, these applications demand compact, low-power systems beyond the capabilities of most state-of-the-art depth cameras. Recently, metasurface-based depth imaging that exploits point spread function (PSF) engineering has been demonstrated to be miniaturized and single-shot, requiring neither active illumination nor multiple-viewpoint exposures. A pair of spatially adjacent metalenses, one with an extended depth-of-field (EDOF) PSF and one with a depth-sensitive double-helix PSF (DH-PSF), was used: the former reconstructs clear images at each depth, while the latter accurately estimates depth. However, because the two metalenses are non-coaxial, parallax in capturing scenes is inevitable, which limits the depth precision and field of view. In this work, a bifunctional reconfigurable metalens for 3D depth imaging is proposed that dynamically switches between the EDOF-PSF and DH-PSF. Specifically, a polarization-independent metalens working at 1550 nm with a compact 1 mm² aperture was realized, which generates a focused accelerating beam and a focused rotating beam in the crystalline and amorphous phases of Ge2Sb2Te5 (GST), respectively. Combined with a deconvolution algorithm, we demonstrate good scene reconstruction and depth imaging capabilities in simulation and achieve a depth measurement error of only 3.42%.
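The deconvolution step mentioned above is commonly implemented as a Wiener filter when the engineered PSF is known. A minimal sketch, using a synthetic scene and a Gaussian stand-in for the PSF (the paper's actual PSFs are an accelerating and a rotating beam, so every value here is illustrative):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Wiener filter in the Fourier domain: X = conj(H) / (|H|^2 + NSR) * Y."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    Y = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + nsr) * Y))

# Synthetic test scene: a bright square blurred by a centered Gaussian PSF
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2)); psf /= psf.sum()

H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))   # circular convolution
restored = wiener_deconvolve(blurred, psf)
print(np.mean((restored - img)**2) < np.mean((blurred - img)**2))  # → True
```

The noise-to-signal ratio `nsr` regularizes frequencies where the PSF transfers almost no energy; with a measured PSF it would be tuned to the sensor noise level.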

11 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a hologram that imitates the defocus blur of incoherent light by engineering the diffracted pattern of coherent light through multi-plane holography, thereby offering real-world-like defocus blur and photorealistic reconstruction.
Abstract: Holography is one of the most prominent approaches to realizing true-to-life reconstructions of objects. However, owing to the limited resolution of spatial light modulators compared to static holograms, reconstructed objects exhibit various coherent properties, such as content-dependent defocus blur and interference-induced noise. These coherent properties severely distort depth perception, the core capability that lets holographic displays realize 3D scenes beyond 2D displays. Here, we propose a hologram that imitates the defocus blur of incoherent light by engineering the diffracted pattern of coherent light through multi-plane holography, thereby offering real-world-like defocus blur and photorealistic reconstruction. The proposed hologram is synthesized by optimizing a wave field to reconstruct numerous varifocal images after propagating the corresponding focal distances, where the varifocal images are rendered using a physically based renderer. Moreover, to reduce the computational costs associated with rendering and optimization, we also demonstrate a network-based synthesis method that requires only an RGB-D image.

9 citations


Journal ArticleDOI
30 Nov 2022-PhotoniX
TL;DR: In this paper, a hardware-modification-free method is proposed that modulates the phase-space point-spread functions (PSFs) to extend the effective high-resolution range along the z-axis by about 3 times.
Abstract: High-speed visualization of three-dimensional (3D) processes across a large field of view with cellular resolution is essential for understanding living systems. Light-field microscopy (LFM) has emerged as a powerful tool for fast volumetric imaging. However, one inherent limitation of LFM is that the achievable lateral resolution degrades rapidly as the distance from the focal plane increases, which hinders its application to thick samples. Here, we propose Spherical-Aberration-assisted scanning LFM (SAsLFM), a hardware-modification-free method that modulates the phase-space point-spread functions (PSFs) to extend the effective high-resolution range along the z-axis by ~ 3 times. By transferring the foci to different depths, we take full advantage of the redundant light-field data to preserve finer details over an extended depth range and reduce artifacts near the original focal plane. Experiments on a USAF resolution chart and zebrafish vasculature were conducted to verify the effectiveness of the method. We further investigated the capability of SAsLFM in dynamic samples by imaging large-scale calcium transients in the mouse brain, tracking freely moving jellyfish, and recording the development of Drosophila embryos. In addition, combined with deep-learning approaches, we accelerated the three-dimensional reconstruction of SAsLFM by three orders of magnitude. Our method is compatible with various phase-space imaging techniques without increasing system complexity and can facilitate high-speed, large-scale volumetric imaging of thick samples.

8 citations


Journal ArticleDOI
TL;DR: In this paper, a high-efficiency extended depth-of-focus metalens is proposed via an adjoint-based topology-shape optimization approach, wherein the theoretical electric field intensity corresponding to a variable focal-length phase is used as the figure of merit.
Abstract: Longitudinal optical field modulation is of critical importance in a wide range of applications, including optical imaging, spectroscopy, and optical manipulation. However, it remains a considerable challenge to realize a uniformly distributed light field with an extended depth of focus. Here, a high-efficiency extended depth-of-focus metalens is designed via an adjoint-based topology-shape optimization approach, wherein the theoretical electric field intensity corresponding to a variable focal-length phase is used as the figure of merit. Starting from a dozen metalenses with random structural parameters as initial structures, the average focal depth of the topology-shape-optimized metalenses is greatly improved, up to 18.80 μm (about 29.7λ), which is 1.54 times the diffraction-limited focal depth. Moreover, all the topology-shape-optimized metalenses exhibit high diffraction efficiency, exceeding 0.7 over the whole focal depth range, approximately three times that of the forward design. Our results offer new insight into the design of extended depth-of-focus metalenses and may find potential applications in imaging, holography, and optical fabrication.

7 citations


Journal ArticleDOI
TL;DR: In this paper, a real-time fiber-optic infrared imaging system is proposed that captures infrared images with a flexible wide field of view (FOV) and a large depth of field.
Abstract: A key limitation in observing instruments and heart sutures during an operation is the scattering and absorption of light by blood during optical imaging. Therefore, we propose a novel fiber-optic infrared imaging system that simultaneously captures a flexible wide field of view (FOV) and a large depth of field in real time. The assessment criteria for the imaging quality of the objective and coupling lenses have been optimized and evaluated. Furthermore, the feasibility of manufacturing and assembly has been demonstrated with tolerance sensitivity and Monte Carlo analyses. Simulated results show that the optical system achieves a large working distance of 8 to 25 mm and a wide FOV of 120°, with relative illuminance over 0.98 across the entire FOV. To achieve high imaging quality, the modulation transfer function is over 0.661 at 16.7 lp/mm for a 320×256 short-wavelength infrared camera sensor with a pixel size of 30 µm.
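The 16.7 lp/mm evaluation frequency follows directly from the sensor's 30 µm pixel pitch: it is the Nyquist limit 1/(2p), the highest spatial frequency the sensor can sample without aliasing. A quick check:

```python
# Nyquist frequency of a sampled sensor: f_N = 1 / (2 * pixel pitch)
pixel_pitch_mm = 0.030  # 30 µm pixel pitch of the SWIR sensor
nyquist_lp_per_mm = 1.0 / (2 * pixel_pitch_mm)
print(round(nyquist_lp_per_mm, 1))  # → 16.7
```

Quoting the MTF at the sensor's Nyquist frequency is the standard way to match lens performance to a given detector.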

7 citations


Proceedings ArticleDOI
04 Mar 2022
TL;DR: In this article, a miniature objective that minimizes spherical aberration across a large range of focusing depths is presented; simulations show it achieves on-axis diffraction-limited focusing in water for depths from 0 μm to just over 1400 μm and a diffraction-limited field of view of up to 290 μm for a 790 nm laser.
Abstract: One major advantage of multiphoton microscopy (MPM) is that it can image below the tissue surface and produce a stack of images showing sample structure at various depths. A miniature objective with depth-scanning capability is needed for MPM endoscopy. Spherical aberration may be induced when changing the focusing depth during depth scanning, limiting the range over which images can be acquired. A specially designed miniature objective that minimizes spherical aberration across a large range of focusing depths is presented. Simulations show that the 0.53 numerical aperture design can achieve on-axis diffraction-limited focusing in water for depths from 0 μm to just over 1400 μm and a diffraction-limited field of view of up to 290 μm for a 790 nm laser. In experiments, our multiphoton microscope demonstrates a field of view of 64 μm by 100 μm and a depth-scanning range of 440 μm, limited by the scanning hardware. Depth-scanning capability is confirmed by imaging 0.1 μm diameter fluorescent beads across the 440 μm range. Biological samples are imaged to a depth of 150 μm using the custom objective; the imaging depth is mainly limited by absorption and scattering in the sample.

6 citations


Journal ArticleDOI
TL;DR: In this article, the authors derive a concise imaging model by absorbing the sensor tilt angles and the lens magnification into a simplified intrinsic matrix, and develop an integrated calibration algorithm that avoids solving for the tilt angles explicitly, along with a stereo-rectification method for stereo matching.

6 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a Gates' interferometer configuration with an LED source that projects a sinusoidal fringe pattern free of speckle noise and with a very large depth of field.

Journal ArticleDOI
TL;DR: In this paper, a chip-scale metalens device is implemented as a SiNx metalens array with a co- and cross-polarization multiplexed dual-phase design and a dispersive spectrum zoom effect.
Abstract: Microscopy is very important in research and industry, yet traditional optical microscopy suffers from the limited field-of-view (FOV) and depth-of-field (DOF) in high-resolution imaging. We demonstrate a simultaneous large FOV and DOF microscope imaging technology based on a chip-scale metalens device that is implemented by a SiNx metalens array with a co- and cross-polarization multiplexed dual-phase design and dispersive spectrum zoom effect. A 4-mm × 4-mm FOV is obtained with a resolution of 1.74 μm and DOF of 200 μm within a wavelength range of 450 to 510 nm, which definitely exceeds the performance of traditional microscopes with the same resolution. Moreover, it is realized in a miniaturized compact prototype, showing an overall advantage for portable and convenient microscope technology.

Journal ArticleDOI
TL;DR: In this article, the authors propose compact light field photography for acquiring large-scale light fields with simple optics and a small number of sensors in arbitrary formats, ranging from two-dimensional area detectors to single-point detectors, culminating in dense multi-view measurements with orders of magnitude lower data load.
Abstract: Inspired by natural living systems, modern cameras can attain three-dimensional vision via multi-view geometry like compound eyes in flies, or time-of-flight sensing like echolocation in bats. However, high-speed, accurate three-dimensional sensing capable of scaling over an extensive distance range and coping well with severe occlusions remains challenging. Here, we report compact light field photography for acquiring large-scale light fields with simple optics and a small number of sensors in arbitrary formats ranging from two-dimensional area to single-point detectors, culminating in a dense multi-view measurement with orders of magnitude lower dataload. We demonstrated compact light field photography for efficient multi-view acquisition of time-of-flight signals to enable snapshot three-dimensional imaging with an extended depth range and through severe scene occlusions. Moreover, we show how compact light field photography can exploit curved and disconnected surfaces for real-time non-line-of-sight 3D vision. Compact light field photography will broadly benefit high-speed 3D imaging and open up new avenues in various disciplines.

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed pupil-aware holography that maximizes the perceptual image quality irrespective of the size, location, and orientation of the eye pupil in a near-eye holographic display.
Abstract: Holographic displays promise to deliver unprecedented display capabilities in augmented reality applications, featuring a wide field of view, wide color gamut, spatial resolution, and depth cues all in a compact form factor. While emerging holographic display approaches have been successful in achieving large étendue and high image quality as seen by a camera, the large étendue also reveals a problem that makes existing displays impractical: the sampling of the holographic field by the eye pupil. Existing methods have not investigated this issue due to the lack of displays with large enough étendue, and, as such, they suffer from severe artifacts with varying eye pupil size and location. We show that the holographic field as sampled by the eye pupil is highly varying for existing display setups, and we propose pupil-aware holography that maximizes the perceptual image quality irrespective of the size, location, and orientation of the eye pupil in a near-eye holographic display. We validate the proposed approach both in simulations and on a prototype holographic display and show that our method eliminates severe artifacts and significantly outperforms existing approaches.

Journal ArticleDOI
TL;DR: In this article, the authors propose the concept of an encoding metalens based on the principle of digital addition to freely control the deflection angles of the focal spot in two-dimensional space.

Journal ArticleDOI
27 May 2022-Sensors
TL;DR: In this paper, a multi-point displacement measurement method for bridges based on a low-cost Scheimpflug camera is proposed; the Scheimpflug camera enlarges the depth of field without reducing the lens aperture or magnification, so that when the measurement points are aligned along the depth direction, all of them can be observed sharply in a single field of view with a high-power zoom lens.
Abstract: Owing to the limited field of view (FOV) and depth of field (DOF) of a conventional camera, it is quite difficult to employ a single conventional camera to simultaneously measure high-precision displacements at many points on a bridge of dozens or hundreds of meters. Researchers have attempted to obtain a large FOV and wide DOF by a multi-camera system; however, with the growth of the camera number, the cost, complexity and instability of multi-camera systems will increase exponentially. This study proposes a multi-point displacement measurement method for bridges based on a low-cost Scheimpflug camera. The Scheimpflug camera, which meets the Scheimpflug condition, can enlarge the depth of field of the camera without reducing the lens aperture and magnification; thus, when the measurement points are aligned in the depth direction, all points can be clearly observed in a single field of view with a high-power zoom lens. To reduce the impact of camera motions, a motion compensation method applied to the Scheimpflug camera is proposed according to the characteristic that the image plane is not perpendicular to the lens axis in the Scheimpflug camera. Several tests were conducted for performance verification under diverse settings. The results showed that the motion errors in x and y directions were reduced by at least 62% and 92%, respectively, using the proposed method, and the measurements of the camera were highly consistent with LiDAR-based measurements.
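The Scheimpflug condition states that the object plane, lens plane, and image plane meet in a common line, so a tilted object plane can be held in focus by tilting the sensor rather than stopping down the aperture. A small numeric sketch with a thin-lens model (illustrative focal length and distances, not the paper's setup) verifies that the images of points on a tilted object plane remain collinear, i.e., one tilted sensor plane captures them all in focus:

```python
import numpy as np

def image_point(u, y, f):
    """Thin lens: object at distance u, height y -> (image distance v, height y').
    Uses 1/f = 1/u + 1/v and lateral magnification m = -v/u."""
    v = f * u / (u - f)
    return v, -y * v / u

f = 50.0  # focal length in mm (illustrative)
# Object points on a plane tilted w.r.t. the lens: distance u grows with height y
obj = [(200.0 + dy, dy) for dy in (0.0, 10.0, 20.0)]
img = [image_point(u, y, f) for u, y in obj]

# Collinearity of the three image points => a single tilted sensor plane
# (the Scheimpflug geometry) holds the whole tilted object plane in focus.
(v0, y0), (v1, y1), (v2, y2) = img
cross = (v1 - v0) * (y2 - y0) - (v2 - v0) * (y1 - y0)
print(abs(cross) < 1e-9)  # → True
```

The cross product vanishes (up to floating-point error) for any tilt, which is exactly why aligning measurement points in depth works with a tilted sensor.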

Journal ArticleDOI
TL;DR: A real-time, full-resolution depth estimation device is presented that can turn any high-speed camera into a 3D camera with true depth output; comparison with other state-of-the-art algorithms shows advantages in computational time and precision.
Abstract: This work introduces a real-time, full-resolution depth estimation device, which allows integral displays to be fed with a real-time light field. The core of the technique is a high-speed focal stack acquisition method combined with an efficient implementation of the depth estimation algorithm, allowing the generation of real-time, high-resolution depth maps. As the procedure does not depend on any custom hardware, if the requirements are met, the described method can turn any high-speed camera into a 3D camera with true depth output. The concept was tested with an experimental setup consisting of an electronically variable focus lens, a high-speed camera, a GPU for processing, and a control board for lens and image sensor synchronization. Comparison with other state-of-the-art algorithms shows its advantages in computational time and precision.
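The underlying idea of depth estimation from a focal stack can be sketched independently of the paper's GPU implementation: apply a per-pixel focus measure to each plane of the stack and take the argmax along the stack axis as the depth index. A toy example with a Laplacian-energy focus measure and a synthetic two-plane stack (all data here is illustrative):

```python
import numpy as np

def laplacian_energy(img):
    """Simple focus measure: squared 4-neighbour Laplacian (edges wrap via roll)."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def depth_from_focus(stack):
    """stack: (n_planes, H, W) focal stack -> per-pixel index of the sharpest plane."""
    scores = np.stack([laplacian_energy(s) for s in stack])
    return np.argmax(scores, axis=0)

# Synthetic stack: plane 0 is sharp (textured) on the left, plane 1 on the right
rng = np.random.default_rng(0)
noise = rng.random((60, 60))
plane0 = np.full((60, 60), 0.5); plane0[:, :30] = noise[:, :30]
plane1 = np.full((60, 60), 0.5); plane1[:, 30:] = noise[:, 30:]

depth = depth_from_focus(np.stack([plane0, plane1]))
print(depth[30, 5], depth[30, 55])  # → 0 1
```

A real pipeline would add windowed aggregation of the focus measure and sub-plane interpolation of the argmax; the variable-focus lens supplies the mapping from plane index to metric depth.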

Journal ArticleDOI
TL;DR: In this article, a view- and scanning-depth-expansion photographic microscope is presented that uses ultrafast switching mirrors and high-speed vision to simultaneously extend both the field of view (FOV) and the scanning-depth range of a typical optical microscope in real time.
Abstract: This article presents a novel view and scanning-depth expansion photographic microscope using ultrafast switching mirrors and high-speed vision that can simultaneously extend both the field of view (FOV) and the scanning-depth range of a typical optical microscope in real-time. Two ultrafast switching mirrors that can switch at several hundred hertz are used for micro-gaze control via synchronization through a high-speed vision system. The FOV can be changed while maintaining a constant length of the optical path by adjusting the angles of the two mirrors, which ensures that the extended FOV is within the focal range. We evaluated and designed mirror angle combinations for view expansion at multiple depths, making it possible to extend view and depth in real-time without considering the complex design and computing required by computational imaging micro view expansion approaches. The effectiveness of our system was demonstrated by several experiments, including micro view expansion at a single depth and multiple depths in real-time. In addition, by evaluating the image quality of the view expansion image, our proposed system can automatically output always-focused view expansion images in real-time even when samples move at different depths.

Journal ArticleDOI
TL;DR: In this article, a focused-sound-field equivalent model is proposed that overcomes the sound field distortion caused by the transducer's finite-aperture effect and can quickly and effectively restore the elongated tangential resolution in eccentric imaging regions.
Abstract: The existing back-projection algorithm in photoacoustic tomography assumes the ultrasonic transducer scanning around the target to be an ideal point detector, which leads to notable tangential blur in eccentric imaging regions and thus seriously degrades image quality. In this paper, we propose a novel photoacoustic tomography reconstruction algorithm that employs a focused-sound-field equivalent model to overcome the sound field distortion caused by the transducer's finite-aperture effect and can quickly and effectively restore the elongated tangential resolution in eccentric imaging regions. Simulation results show that with this new method the tangential resolution can be improved by 5 times for a target 6 mm from the rotation center. Experimental results show that the method effectively removes tangential blur in off-center regions, where the tiny structures of complex targets can be detected. This new method provides a valuable alternative to the conventional back-projection method and plays an important guiding role in the design of photoacoustic tomography systems based on circular/spherical scanning.

Journal ArticleDOI
TL;DR: In this paper , a light field imaging is introduced for microscopic fringe projection profilometry (MFPP) to obtain a larger depth of field, where the depth information is estimated based on the epipolar plane image (EPI) of light field.
Abstract: Fringe projection profilometry (FPP) has been widely researched for three-dimensional (3D) microscopic measurement during recent decades. Nevertheless, some disadvantages arising from the limited depth of field and occlusion still exist and need to be further addressed. In this paper, light field imaging is introduced for microscopic fringe projection profilometry (MFPP) to obtain a larger depth of field. Meanwhile, this system is built with a coaxial structure to reduce occlusion, where the principle of triangulation is no longer applicable. In this situation, the depth information is estimated based on the epipolar plane image (EPI) of light field. In order to make a quantitative measurement, a metric calibration method which establishes the mapping between the slope of the line feature in EPI and the depth information is proposed for this system. Finally, a group of experiments demonstrate that the proposed LF-MFPP system can work well for depth estimation with a large DOF and reduced occlusion.
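The metric calibration described above amounts to fitting a mapping from measured EPI line slopes to known depths. A hedged sketch with made-up calibration data (real slope-depth pairs depend on the optical setup, and the true mapping need not be linear):

```python
import numpy as np

# Hypothetical calibration data: EPI line slopes measured for calibration
# targets placed at known depths (mm). Values are illustrative only.
slopes = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
depths = np.array([2.0, 3.5, 5.0, 6.5, 8.0])

# Least-squares fit of a linear mapping: depth = a * slope + b
a, b = np.polyfit(slopes, depths, 1)

def slope_to_depth(s):
    """Convert a measured EPI line slope to metric depth via the fitted model."""
    return a * s + b

print(round(slope_to_depth(0.50), 2))  # → 6.0
```

With a real system one would collect many slope-depth pairs across the measurement volume and possibly raise the polynomial order if residuals show curvature.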

Journal ArticleDOI
TL;DR: In this paper , a content-aware multi-focus image fusion approach based on deep learning was proposed to extend the depth-of-field of high magnification objectives effectively, using 2-fold fewer focal planes than normally required.
Abstract: Automated digital high-magnification optical microscopy is key to accelerating biology research and improving pathology clinical pathways. High magnification objectives with large numerical apertures are usually preferred to resolve the fine structural details of biological samples, but they have a very limited depth-of-field. Depending on the thickness of the sample, analysis of specimens typically requires the acquisition of multiple images at different focal planes for each field-of-view, followed by the fusion of these planes into an extended depth-of-field image. This translates into low scanning speeds, increased storage space, and processing time not suitable for high-throughput clinical use. We introduce a novel content-aware multi-focus image fusion approach based on deep learning which extends the depth-of-field of high magnification objectives effectively. We demonstrate the method with three examples, showing that highly accurate, detailed, extended depth of field images can be obtained at a lower axial sampling rate, using 2-fold fewer focal planes than normally required.
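The storage and speed argument can be made concrete with a back-of-the-envelope count of focal planes; the numbers below are illustrative, not taken from the paper:

```python
import math

# Illustrative values: a high-magnification, high-NA objective has a depth of
# field of roughly 1 µm, while a tissue section may be tens of µm thick.
sample_thickness_um = 30.0
dof_per_plane_um = 1.0

# Planes needed to cover the sample at the conventional axial sampling rate
planes_required = math.ceil(sample_thickness_um / dof_per_plane_um)
print(planes_required)       # → 30

# A fusion method that tolerates 2x coarser axial sampling halves the stack,
# halving acquisition time and storage per field-of-view.
planes_with_fusion = math.ceil(sample_thickness_um / (2 * dof_per_plane_um))
print(planes_with_fusion)    # → 15
```

Since a whole-slide scan repeats this stack for thousands of fields of view, halving the plane count scales directly into scan time and storage savings.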

Journal ArticleDOI
TL;DR: In this article, a planar multi-focus MLA (MF-MLA) is fabricated on a polydimethylsiloxane (PDMS) substrate by a combination of femtosecond laser wet etching (FLWE) and soft lithography techniques.
Abstract: The microlens array (MLA) is a key optical element for biomedical inspection, lab-on-a-chip devices, and light-field cameras. However, conventional MLAs have only one focal plane, which makes it difficult for them to capture targets at different positions. To solve this problem, a planar multi-focus MLA (MF-MLA) was fabricated on a polydimethylsiloxane (PDMS) substrate by a combination of femtosecond laser wet etching (FLWE) and soft lithography techniques. The modulation transfer function (MTF) values of the microlenses all exceeded 0.2 at spatial frequencies below 200 lp/mm, indicating the high imaging ability of the lenses. The reported MF-MLA has three different focal lengths, so targets at different positions can be imaged on a single image sensor. In addition, MLAs with more focal points can be fabricated by selecting appropriate processing parameters. It is anticipated that the as-fabricated MF-MLA will become a promising device for improving the performance of optical systems, especially in optical remote sensing, biomedical imaging, and machine vision.

Journal ArticleDOI
TL;DR: In this article, the authors propose using Fresnel lenses for holographic sound-field imaging; although a Fresnel lens offers low imaging quality, it has several desirable properties, including thinness, light weight, low cost, and ease of making a large aperture.
Abstract: In this Letter, we propose to use Fresnel lenses for holographic sound-field imaging. Although a Fresnel lens has never been used for sound-field imaging mainly due to its low imaging quality, it has several desired properties, including thinness, lightweight, low cost, and ease of making a large aperture. We constructed an optical holographic imaging system composed of two Fresnel lenses used for magnification and demagnification of the illuminating beam. A proof-of-concept experiment verified that the sound-field imaging with Fresnel lenses is possible by using the spatiotemporally harmonic nature of sound.

Posted ContentDOI
05 Aug 2022-bioRxiv
TL;DR: The EDoF-Miniscope as discussed by the authors integrates an optimized thin and lightweight binary diffractive optical element (DOE) onto the gradient refractive index (GRIN) lens of a head-mounted fluorescence miniature microscope.
Abstract: Extended depth of field (EDoF) microscopy has emerged as a powerful solution to greatly increase access to neuronal populations in table-top imaging platforms. Here, we present EDoF-Miniscope, which integrates an optimized thin and lightweight binary diffractive optical element (DOE) onto the gradient refractive index (GRIN) lens of a head-mounted fluorescence miniature microscope, i.e. “miniscope”. We achieve an alignment accuracy of 70 μm to allow a 2.8X depth-of-field extension between the twin foci. We optimize the phase profile across the whole back aperture through a genetic algorithm that considers the primary GRIN lens aberrations, the optical properties of the submersion media, and axial intensity loss from tissue scattering in a Fourier optics forward model. Compared to other computational miniscopes, our EDoF-Miniscope produces high-contrast signals that can be recovered by a simple algorithm and can successfully capture volumetrically distributed neuronal signals without significantly compromising the speed, signal-to-noise ratio, or signal-to-background ratio, while maintaining a comparable 0.9-μm lateral spatial resolution and the size and weight of the miniature platform. We demonstrate the robustness of EDoF-Miniscope against scattering by characterizing its performance on 5-μm and 10-μm beads embedded in scattering phantoms. We demonstrate that EDoF-Miniscope facilitates deeper interrogation of neuronal populations in a 100-μm-thick mouse brain sample, as well as of vessels in a mouse brain. Built from off-the-shelf components augmented by a customizable DOE, we expect that this low-cost EDoF-Miniscope may find utility in a wide range of neural recording applications.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a system consisting of a transmissive mirror device (TMD), a semi-transparent mirror (STM), and two integral imaging (II) display units.
Abstract: Depth of field (DOF) and resolution are mutually restricted in integral imaging (II) display. To overcome this trade-off, we propose an II display system that simultaneously enhances the DOF and resolution. The system consists of a transmissive mirror device (TMD), a semi-transparent mirror (STM), and two II display units. Each II display unit consists of a 4K display screen and a micro-lens array (MLA). Benefiting from the parallel placement of the TMD and the STM, two central depth planes are reconstructed, which effectively enhances the DOF. Meanwhile, the resolution in the overlapping DOF region is doubled by the interpolation of the light field information from the two II display units. The impact of the distance between the two II display units and the TMD on the 3D image quality is analyzed. In geometric optics, the distance between the two II display units and the TMD is optimized to eliminate ghost images. In wave optics, the distance is optimized to eliminate 3D pixel gaps by exploiting the diffraction effect of the TMD. Both geometric and wave optics are considered simultaneously to obtain a high-quality 3D image without ghost images or 3D pixel gaps. A DOF- and resolution-enhanced II display system is developed, and experimental results verify its feasibility.

Journal ArticleDOI
Yani Chen, Hang Liu, Yin Zhou, Feng-Lin Kuang, Lei Li 
TL;DR: Chen et al. as mentioned in this paper developed an extended depth-of-field (EDOF) zoom microscope, which realizes EDOF with constant magnification and high resolution and achieves optical axial scanning at different NAs and magnifications in real time without any mechanical movement.
Abstract: Extending the depth of field (DOF) is especially essential in thick and 3D sample imaging. However, it is difficult to achieve both large DOF and high resolution in a zoom microscope. Currently, the use of optical sectioning to expand the DOF still suffers from inconstant magnification. Here, we develop an extended depth of field (EDOF) zoom microscope that realizes EDOF with constant magnification and high resolution. In addition, the proposed microscope can achieve optical axial scanning at different NA and magnifications in real time without any mechanical movement. The proposed varifocal lens is employed to realize optical axial scanning, zooming, and keeping the magnification constant while extending the DOF. Experimental results show that the proposed microscope realizes a continuous optical zoom of 10-40×, NA from 0.14 to 0.54, and a DOF extendable to 1.2 mm.
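The DOF-versus-NA trade-off the authors address follows the classical microscope depth-of-field estimate. A quick sketch for the two ends of the reported zoom range; the wavelength, immersion index, and sensor pixel size are assumed values, not taken from the paper:

```python
def microscope_dof_um(wavelength_um, n, na, mag, pixel_um):
    """Classical DOF estimate: wave-optics term plus geometric (detector) term,
    DOF = lambda * n / NA**2 + n * e / (M * NA)."""
    return wavelength_um * n / na**2 + n * pixel_um / (mag * na)

# Assumed: green light (0.55 um), air immersion (n = 1), 6.5 um camera pixels
low = microscope_dof_um(0.55, 1.0, 0.14, 10, 6.5)   # 10x end of the zoom range
high = microscope_dof_um(0.55, 1.0, 0.54, 40, 6.5)  # 40x end
print(round(low, 1), round(high, 1))  # DOF shrinks sharply as NA grows
```

The roughly 15-fold drop in DOF across the zoom range is why an EDOF mechanism with constant magnification matters at the high-NA end.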

Journal ArticleDOI
TL;DR: In this paper, wavefront coding technology is used to modulate the imaging wavefront of the deflectometry system, thereby making the measuring system insensitive to defocus and other low-order aberrations, including astigmatism and field curvature.
Abstract: Phase measuring deflectometry is a powerful measuring method for complex optical surfaces, which captures the reflected fringe images encoded on the screen under the premise of focusing on the measured specular surface. Due to the limited depth of field of the camera, the captured images and the measured surface cannot be in focus at the same time. To solve this position-angle uncertainty issue, in this Letter, wavefront coding technology is used to modulate the imaging wavefront of the deflectometry system, making the measuring system insensitive to defocus and other low-order aberrations, including astigmatism and field curvature. To obtain an accurate phase, the captured fringe images are deconvolved using the modulated point spread function to reduce the phase error. Demonstrated on a highly curved spherical surface, the measurement accuracy is improved by a factor of four. Experiments demonstrate that the proposed method can successfully reconstruct complex surfaces even when the captured images are defocused, which greatly relaxes the focusing requirement and improves measurement accuracy.
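The deconvolution step can be sketched as a frequency-domain Wiener filter with the known (modulated) point spread function. The Gaussian PSF and cosine fringe pattern below are stand-ins for the wavefront-coded PSF and the reflected fringe images, and the regularization constant is an assumption:

```python
import numpy as np

def wiener_deconvolve(img, psf, k=1e-2):
    """Wiener filter: X = conj(H) / (|H|^2 + k) * Y, with the PSF centered
    in the array (ifftshift moves its center to the origin)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * W))

N = 128
yy, xx = np.mgrid[0:N, 0:N]
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * 8 * xx / N)  # 8 full periods, FFT-friendly

# Stand-in PSF: centered Gaussian, sigma = 2 px, normalized to unit sum
g = np.exp(-((xx - N / 2) ** 2 + (yy - N / 2) ** 2) / (2 * 2.0**2))
psf = g / g.sum()

# Simulate the defocused capture, then restore it with the known PSF
blurred = np.real(np.fft.ifft2(np.fft.fft2(fringe) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(rmse(blurred, fringe), rmse(restored, fringe))  # restoration reduces the error
```

In the actual method the PSF is the one produced by the wavefront-coding mask, which stays nearly invariant through focus, so a single filter serves the whole measurement volume.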

Journal ArticleDOI
TL;DR: In this article, the authors investigated the competing effects of both depth of field and Airy disk size in three test cases: higher magnification and shorter working distance, lower magnification and shorter working distance.

Journal ArticleDOI
TL;DR: In this article, a wearable transparent ultrasonic transducer, fabricated as a sandwich structure of 150 nm thick amorphous indium tin oxide layers, was developed and applied in a photoacoustic imaging (PAI) system with excellent performance.
Abstract: Photoacoustic imaging (PAI) has become a popular technique for biomedical diagnosis. However, current systems are overly complex or use bulky piezoelectric materials, motivating alternative approaches for future PAI systems. In this study, a wearable transparent ultrasonic transducer was developed and applied in a PAI system with excellent performance. The acoustic sensor was fabricated with a sandwich structure consisting of 150 nm thick amorphous indium tin oxide on both sides of a 110 μm commercial polyvinylidene fluoride film. This single-element sensor exhibited a 6.7 MHz center frequency, an 86.3% −3 dB fractional bandwidth, and a 20 × 20 × 36 mm³ volumetric imaging field. Modified imaging phantoms were made to evaluate the lateral resolution, imaging depth, ability to form a convex photoacoustic image, quality of 3D imaging, and capability to perform PAI directly in the ambient environment with a customized phantom. In summary, the study demonstrates the sensor's potential for low-cost, large-field PAI in wearable health devices.
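Figures of merit like the center frequency and −3 dB fractional bandwidth are read off the transducer's amplitude spectrum. A sketch using a synthetic Gaussian spectrum; the resulting numbers come from the synthetic data, not from the device measurement in the paper:

```python
import numpy as np

def center_freq_and_fbw(freqs_mhz, mag):
    """Center frequency (MHz) and -3 dB fractional bandwidth (%) of a
    unimodal amplitude spectrum; -3 dB is a 1/sqrt(2) amplitude factor."""
    band = freqs_mhz[mag >= mag.max() / np.sqrt(2.0)]
    f_lo, f_hi = band[0], band[-1]
    fc = 0.5 * (f_lo + f_hi)
    return fc, 100.0 * (f_hi - f_lo) / fc

# Synthetic spectrum: Gaussian centered at 6.7 MHz (illustrative only)
f = np.linspace(0.0, 15.0, 1501)
mag = np.exp(-((f - 6.7) ** 2) / (2 * 2.5**2))
fc, fbw = center_freq_and_fbw(f, mag)
print(fc, fbw)
```

For a real pulse-echo measurement, `mag` would be the magnitude of the FFT of the received echo rather than an analytic Gaussian.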

Proceedings ArticleDOI
01 Jun 2022
TL;DR: Zhang et al. as discussed by the authors proposed a novel deep defocus deblurring network that leverages the strength and overcomes the shortcoming of light fields, and fine-tuned the network using feature loss on another dataset collected by the two-shot method.
Abstract: Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur. While deep learning approaches show great promise in solving image restoration problems, defocus deblurring demands accurate training data consisting of all-in-focus and defocused image pairs, which are difficult to collect. Naive two-shot capturing cannot achieve pixel-wise correspondence between the defocused and all-in-focus image pairs. Synthetic aperture imaging with light fields is suggested to be a more reliable way to generate accurate image pairs. However, the defocus blur generated from light field data differs from that of images captured with a traditional digital camera. In this paper, we propose a novel deep defocus deblurring network that leverages the strengths and overcomes the shortcomings of light fields. We first train the network on a light field-generated dataset for its highly accurate image correspondence. Then, we fine-tune the network using a feature loss on another dataset collected by the two-shot method to alleviate the differences between the defocus blur in the two domains. This strategy proves highly effective, achieving state-of-the-art performance both quantitatively and qualitatively on multiple test sets. Extensive ablation studies have been conducted to analyze the effect of each network module on the final performance.
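The fine-tuning stage compares deep features rather than raw pixels, which tolerates the slight misalignment of two-shot pairs. A minimal numpy sketch of such a feature (perceptual) loss, with a fixed random filter bank standing in for a pretrained backbone, since the abstract does not specify one:

```python
import numpy as np

rng = np.random.default_rng(0)
FILTERS = rng.standard_normal((8, 5, 5))  # stand-in for pretrained conv features

def feature_maps(img):
    """Convolve the image with each filter via the FFT (circular convolution)."""
    F = np.fft.fft2(img)
    return np.stack(
        [np.real(np.fft.ifft2(F * np.fft.fft2(k, s=img.shape))) for k in FILTERS]
    )

def feature_loss(pred, target):
    """Mean squared error measured in feature space instead of pixel space."""
    return float(np.mean((feature_maps(pred) - feature_maps(target)) ** 2))

sharp = rng.random((64, 64))
blurry = 0.5 * (sharp + np.roll(sharp, 1, axis=1))  # crude 2-tap blur
print(feature_loss(blurry, sharp))  # positive: features differ
print(feature_loss(sharp, sharp))   # exactly 0.0
```

In the paper's setting, the first training stage would still use a pixel-wise loss on the light-field pairs; only the two-shot fine-tuning swaps in a loss of this form.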