
Showing papers by "José A. Ferrari published in 2015"


Journal ArticleDOI
TL;DR: The simulations and experimental results show that the proposed generalized phase-shifting algorithm with arbitrary phase-shift values can significantly reduce the influence of the color crosstalk.
Abstract: In order to overcome the limitations of the sequential phase-shifting fringe pattern profilometry for dynamic measurements, a color-channel-based approach is presented. The proposed technique consists of projecting and acquiring a colored image formed by three sinusoidal phase-shifted patterns. Therefore, by using the conventional three-step phase-shifting algorithm, only one color image is required for phase retrieval each time. However, the use of colored fringe patterns leads to a major problem, the color crosstalk, which introduces phase errors when conventional phase-shifting algorithms with fixed phase-shift values are utilized to retrieve the phase. To overcome the crosstalk issue, we propose the use of a generalized phase-shifting algorithm with arbitrary phase-shift values. The simulations and experimental results show that the proposed algorithm can significantly reduce the influence of the color crosstalk.

49 citations
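The arbitrary-shift retrieval described in this abstract can be illustrated as a per-pixel least-squares problem: each pattern obeys I_k = A + B·cos(φ + δ_k), which is linear in (A, B·cos φ, B·sin φ) once the shifts δ_k are known. The sketch below is illustrative, not the authors' implementation; all names are placeholders.

```python
import numpy as np

def generalized_phase_shift(images, deltas):
    """Retrieve the wrapped phase from N >= 3 fringe patterns with
    arbitrary (but known) phase shifts via per-pixel least squares.
    images: (N, H, W) array; deltas: (N,) shifts in radians."""
    d = np.asarray(deltas, dtype=float)
    # I_k = A + B*cos(phi)*cos(d_k) - B*sin(phi)*sin(d_k)
    M = np.stack([np.ones_like(d), np.cos(d), -np.sin(d)], axis=1)  # (N, 3)
    I = np.asarray(images, dtype=float).reshape(len(d), -1)         # (N, H*W)
    x, *_ = np.linalg.lstsq(M, I, rcond=None)                       # (3, H*W)
    phi = np.arctan2(x[2], x[1])
    return phi.reshape(np.asarray(images).shape[1:])

# synthetic check: recover a known phase map with non-uniform shifts
H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W]
phi_true = 0.5 + 2 * np.pi * xx / W
deltas = [0.3, 2.0, 4.1]                      # deliberately arbitrary shifts
imgs = np.stack([10 + 5 * np.cos(phi_true + d) for d in deltas])
phi_est = generalized_phase_shift(imgs, deltas)
err = np.angle(np.exp(1j * (phi_est - phi_true)))  # wrap-aware error
print(np.max(np.abs(err)) < 1e-6)
```

Because the shifts need not be uniform, the same solver tolerates the effective shift distortions that color crosstalk introduces in each channel.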


Journal ArticleDOI
TL;DR: A new physically based method with a space-variant point spread function (PSF) to accomplish all-in-focus reconstruction (image fusion) from a multi-focus image sequence in order to extend the depth-of-field.
Abstract: Limited depth-of-focus is a problem in many fields of optics, e.g., microscopy and macro-photography. We propose a new physically based method with a space-variant point spread function (PSF) to accomplish all-in-focus reconstruction (image fusion) from a multi-focus image sequence in order to extend the depth-of-field. The proposed method works well under strong defocus conditions for color image stacks of arbitrary length. Experimental results are provided to demonstrate that our method outperforms state-of-the-art image fusion algorithms under strong defocus on both synthetic and real images.

32 citations
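For context, a common baseline that the paper's PSF-based method is compared against is per-pixel selection by a local focus measure. The sketch below implements that baseline (Laplacian energy), not the authors' space-variant PSF method; it is a minimal illustration on synthetic data.

```python
import numpy as np

def fuse_stack(stack, win=7):
    """Baseline all-in-focus fusion: for each pixel, pick the slice with
    the largest local Laplacian energy (a common focus measure).
    stack: (N, H, W) grayscale multi-focus stack."""
    stack = np.asarray(stack, dtype=float)
    n, h, w = stack.shape
    # discrete Laplacian via shifted copies (reflect-padded)
    pad = np.pad(stack, ((0, 0), (1, 1), (1, 1)), mode="reflect")
    lap = (pad[:, :-2, 1:-1] + pad[:, 2:, 1:-1] +
           pad[:, 1:-1, :-2] + pad[:, 1:-1, 2:] - 4 * stack)
    # local focus energy: box filter of the squared Laplacian
    k = win // 2
    e = np.pad(lap ** 2, ((0, 0), (k, k), (k, k)), mode="reflect")
    energy = np.zeros_like(stack)
    for dy in range(win):
        for dx in range(win):
            energy += e[:, dy:dy + h, dx:dx + w]
    idx = np.argmax(energy, axis=0)           # best-focused slice per pixel
    return np.take_along_axis(stack, idx[None], axis=0)[0], idx

# toy stack: slice 0 is sharp on the left half, slice 1 on the right half
rng = np.random.default_rng(0)
tex = rng.normal(size=(64, 64))
s0 = tex.copy(); s0[:, 32:] = 0.0
s1 = tex.copy(); s1[:, :32] = 0.0
fused, idx = fuse_stack(np.stack([s0, s1]))
print(np.all(idx[:, :28] == 0) and np.all(idx[:, 36:] == 1))
```

Selection-based fusion like this degrades under strong defocus, which is the regime the paper's physically based approach targets.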


Journal ArticleDOI
TL;DR: This work proposes a 3D scanning technique based on the combination of orthogonal fringe projections, which allows depth-field gradient maps to be computed quickly and efficiently by measuring the local bending of the projected fringes.

11 citations


Journal ArticleDOI
TL;DR: This work presents pattern recognition applications of a generalized optical Hough transform, together with temporal multiplexing strategies for dynamic scale- and orientation-variant detection; validation experiments are presented.
Abstract: We present some pattern recognition applications of a generalized optical Hough transform and the temporal multiplexing strategies for dynamic scale and orientation-variant detection. Unlike computer-based implementations of the Hough transform, in principle its optical implementation does not impose restrictions on the execution time or on the resolution of the images or frame rate of the videos to be processed, which is potentially useful for real-time applications. Validation experiments are presented.

11 citations


Journal ArticleDOI
TL;DR: This work presents an efficient optical implementation of the generalized Hough transform using an electrical lens with variable focal length and a rotating pupil mask matching the pattern to be found; validation experiments showing its real-time application are presented.
Abstract: The generalized Hough transform is a well-established technique for detecting complex shapes in images containing noisy or missing data. We present an efficient optical implementation of this transform using an electrical lens with variable focal length and a rotating pupil mask matching the pattern to be found. The proposed setup works under fully (i.e., both spatially and temporally) incoherent illumination and can handle orientation changes or scale variations in the pattern. Validation experiments showing its real-time application are presented.

10 citations
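The optical setups in these papers are analog counterparts of the conventional digital Hough transform. As a point of reference, the digital circle Hough transform (the computation the annular-pupil setup performs optically) can be sketched as follows; this is the textbook algorithm, not the optical implementation.

```python
import numpy as np

def circle_hough(edges, radius):
    """Vote for circle centers of a fixed radius from a binary edge map:
    each edge pixel casts votes on a circle of that radius around itself,
    so all votes intersect at the true center."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    ys, xs = np.nonzero(edges)
    for t in thetas:
        cy = np.round(ys - radius * np.sin(t)).astype(int)
        cx = np.round(xs - radius * np.cos(t)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# synthetic circle of radius 10 centered at (row 30, col 40)
h, w, r = 64, 64, 10
yy, xx = np.mgrid[0:h, 0:w]
edges = np.abs(np.hypot(yy - 30, xx - 40) - r) < 0.5
acc = circle_hough(edges, r)
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(peak)
```

The per-pixel voting loop is exactly the workload that the optical implementation parallelizes, which is why the optical version imposes no restriction on resolution or frame rate.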


Journal ArticleDOI
TL;DR: A novel phase retrieval method is presented that completely sidesteps the phase unwrapping process, eliminating the ambiguity in phase reconstruction and thus reducing data processing time.
Abstract: Phase unwrapping is probably the most challenging step in the phase retrieval process in phase-shifting and spatial-carrier interferometry. Likewise, phase unwrapping is required in 3D-shape profiling and deflectometry. In this paper, we present a novel phase retrieval method that completely sidesteps the phase unwrapping process, eliminating the ambiguity in phase reconstruction and thus reducing data processing time. The proposed wrapping-free method is based on the direct integration of the spatial derivatives of the interference patterns under the single assumption that the phase is continuous. This assumption is valid in most physical applications. Validation experiments are presented confirming the robustness of the proposed method.

10 citations
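The wrapping-free principle can be illustrated in one dimension. Assuming quadrature signals B·cos φ and B·sin φ are available (e.g., from phase shifting), the derivative of φ = atan2(S, C) is (C·S′ − S·C′)/(C² + S²), which stays continuous even where the wrapped phase would jump by 2π, so direct integration recovers a continuous phase with no unwrapping step. This is a sketch of the idea, not the paper's full 2D method.

```python
import numpy as np

# phase spanning ten fringes -> conventional atan2 would wrap heavily
x = np.linspace(0.0, 1.0, 4000)
phi_true = 20 * np.pi * x ** 2
C, S = np.cos(phi_true), np.sin(phi_true)

# derivative of the phase from derivatives of the fringe signals
dC, dS = np.gradient(C, x), np.gradient(S, x)
phi_x = (C * dS - S * dC) / (C ** 2 + S ** 2)

# trapezoidal integration from a known starting value: no unwrapping
h = x[1] - x[0]
phi_est = phi_true[0] + np.concatenate(
    [[0.0], np.cumsum(0.5 * (phi_x[1:] + phi_x[:-1]) * h)])
print(np.max(np.abs(phi_est - phi_true)) < 0.05)
```

The only assumption used is the one stated in the abstract: the phase is continuous, so its derivative is well defined everywhere.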


Journal ArticleDOI
TL;DR: The compensation of bending-induced linear birefringence in single-mode fibers coiled in a nonplanar path by alternating orthogonal bending planes can be applied to the construction of birefringence-free fiber coils in Faraday sensor heads to improve their sensitivity.
Abstract: We demonstrate the compensation of bending-induced linear birefringence in single-mode fibers coiled in a nonplanar path by alternating orthogonal bending planes. This effect can be applied for the construction of birefringence-free fiber coils in Faraday sensor heads (e.g., in current sensors) to improve their sensitivity. Validation experiments are presented.

7 citations
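The cancellation can be checked with a simple Jones-matrix model. This is an idealized sketch in which each bend is modeled as a pure linear retarder; real fiber coils involve additional effects (twist, temperature) not captured here.

```python
import numpy as np

def retarder(delta, theta):
    """Jones matrix of a linear retarder with retardance delta (rad)
    and fast axis at angle theta (rad) from the x axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    D = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return R @ D @ R.T

# one fiber bend modeled as a retarder; an equal bend in the orthogonal
# plane reverses the roles of fast and slow axes, so the net retardance
# accumulated over the pair is zero
delta = 0.7                                   # arbitrary bend-induced retardance
J = retarder(delta, np.pi / 2) @ retarder(delta, 0.0)
print(np.allclose(J, np.eye(2)))              # -> True: net birefringence cancels
```

In this model the Faraday rotation (a circular effect) would survive the pairing, which is why the scheme improves current-sensor sensitivity.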


Proceedings ArticleDOI
TL;DR: An optical implementation of the circle Hough transform with an electrical lens with variable focal length and annular pupil is presented, suitable for real-time applications.
Abstract: We present an optical implementation of the circle Hough transform with an electrical lens with variable focal length and an annular pupil. The system works under incoherent light and is suitable for real-time applications. Experimental validation results are provided.

2 citations


Proceedings ArticleDOI
TL;DR: For optical systems under severe defocus, a method is proposed to estimate the focus slices (i.e., the in-focus region of each of the acquired images of a stack) from a Fourier-based all-in-focus reconstructed image.
Abstract: For optical systems under severe defocus, we propose a method to estimate the focus slices (i.e., the in-focus region of each of the acquired images of a stack) from a Fourier-based all-in-focus reconstructed image. Experimental results are provided.

2 citations
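The abstract does not detail the Fourier-based estimator, but the underlying idea can be illustrated generically: given an all-in-focus reference, assign each pixel to the stack slice that matches the reference best in a small local window. The sketch below is this generic variant, not the paper's method.

```python
import numpy as np

def focus_slices(stack, fused, win=5):
    """Assign each pixel to the stack slice that best matches the
    all-in-focus reference in a local window (sum of squared differences).
    stack: (N, H, W); fused: (H, W) all-in-focus reference."""
    stack = np.asarray(stack, dtype=float)
    n, h, w = stack.shape
    diff = (stack - fused[None]) ** 2
    k = win // 2
    p = np.pad(diff, ((0, 0), (k, k), (k, k)), mode="reflect")
    cost = np.zeros_like(diff)
    for dy in range(win):                      # box-filter the SSD map
        for dx in range(win):
            cost += p[:, dy:dy + h, dx:dx + w]
    return np.argmin(cost, axis=0)             # best-matching slice per pixel

# toy stack: slice 0 is in focus on the left half, slice 1 on the right
rng = np.random.default_rng(1)
tex = rng.normal(size=(64, 64))
s0 = np.where(np.arange(64)[None, :] < 32, tex, 0.0)
s1 = np.where(np.arange(64)[None, :] < 32, 0.0, tex)
idx = focus_slices(np.stack([s0, s1]), tex)
print(np.all(idx[:, :30] == 0) and np.all(idx[:, 34:] == 1))
```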


Journal ArticleDOI
TL;DR: This work exploits temporal consistency in the scene to ensure integrability and improve the accuracy of the results, reviews two known integration algorithms, and presents experiments showing some potential applications of the proposed framework.

1 citation
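One classical gradient-field integration algorithm of the kind reviewed in such work is the Fourier-domain least-squares method of Frankot and Chellappa; whether it is one of the two algorithms the paper reviews is not stated in the TL;DR, so the sketch below is offered only as background.

```python
import numpy as np

def frankot_chellappa(p, q):
    """Least-squares integration of a gradient field (p = dz/dx, q = dz/dy)
    in the Fourier domain (Frankot-Chellappa). Assumes periodic boundaries."""
    h, w = p.shape
    u = 2j * np.pi * np.fft.fftfreq(w)[None, :]   # frequency along x
    v = 2j * np.pi * np.fft.fftfreq(h)[:, None]   # frequency along y
    denom = u * np.conj(u) + v * np.conj(v)
    denom[0, 0] = 1.0                             # avoid division by zero at DC
    Z = (np.conj(u) * np.fft.fft2(p) + np.conj(v) * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                                 # free constant of integration
    return np.real(np.fft.ifft2(Z))

# check on a band-limited periodic surface with analytic gradients
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
z = np.sin(2 * np.pi * x / w) * np.cos(2 * np.pi * y / h)
p = (2 * np.pi / w) * np.cos(2 * np.pi * x / w) * np.cos(2 * np.pi * y / h)
q = -(2 * np.pi / h) * np.sin(2 * np.pi * x / w) * np.sin(2 * np.pi * y / h)
z_rec = frankot_chellappa(p, q)
print(np.max(np.abs(z_rec - z)) < 1e-10)
```

Noisy measured gradients are generally non-integrable (curl ≠ 0); least-squares integration projects them onto the nearest integrable field, which is the role temporal consistency further supports in the paper.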


Book ChapterDOI
09 Nov 2015
TL;DR: A novel one-shot face recognition setup in which, instead of using a 3D scanner to reconstruct the face, a single photo of a person's face is acquired while a rectangular pattern is being projected onto it, making it possible to extract low-level 3D geometrical features without explicit 3D reconstruction.
Abstract: In this work we describe a novel one-shot face recognition setup. Instead of using a 3D scanner to reconstruct the face, we acquire a single photo of a person's face while a rectangular pattern is being projected onto it. Using this single image, it is possible to extract low-level 3D geometrical features without explicit 3D reconstruction. To handle expression variations and occlusions that may occur (e.g., wearing a scarf or a bonnet), we extract information only from the eyes-forehead and nose regions, which tend to be less influenced by facial expressions. Once features are extracted, SVM hyperplanes are obtained for each subject in the database (one-vs-all approach); new instances can then be classified according to their distance to each of those hyperplanes. The advantage of our method with respect to others published in the literature is that we do not need an explicit 3D reconstruction. Experiments with the Texas 3D Database and with newly acquired data are presented, which show the potential of the presented framework to handle different illumination conditions, poses, and facial expressions.
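The one-vs-all decision rule (classify by the most favorable hyperplane score) can be sketched with a regularized least-squares linear classifier standing in for the SVMs; the toy 2D features below are synthetic stand-ins, not face descriptors.

```python
import numpy as np

def fit_one_vs_all(X, y, n_classes, reg=1e-3):
    """Minimal one-vs-all linear classifiers (least-squares stand-in for
    SVMs): one hyperplane (w, b) per class, trained as +1 vs rest."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias term
    A = Xb.T @ Xb + reg * np.eye(Xb.shape[1])        # regularized normal matrix
    W = np.zeros((n_classes, Xb.shape[1]))
    for c in range(n_classes):
        t = np.where(y == c, 1.0, -1.0)
        W[c] = np.linalg.solve(A, Xb.T @ t)
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W.T, axis=1)               # most favorable score wins

# toy check: three well-separated feature clusters, one per "subject"
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + 0.3 * rng.normal(size=(30, 2)) for c in centers])
y = np.repeat(np.arange(3), 30)
W = fit_one_vs_all(X, y, 3)
print(np.mean(predict(W, X) == y))
```

With margin-maximizing SVMs in place of the least-squares fit, the signed score divided by the hyperplane norm gives the geometric distance the paper classifies by.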

Book ChapterDOI
09 Nov 2015
TL;DR: The proposed technique bypasses explicit 3D mapping of the face by exploiting the information available in the depth gradient map, using nose candidates to estimate the regions where the eyes are expected to be found, and vice versa.
Abstract: In the present work we propose a method for detecting the nose and eye positions when observing a scene that contains a face. The main feature of the proposed technique is that it is capable of bypassing the explicit 3D mapping of the face and instead takes advantage of the information available in the depth gradient map of the face. To this end we introduce a simple false-positive rejection approach that restricts the distance between the eyes, and between the eyes and the nose. The main idea is to use nose candidates to estimate the regions where the eyes are expected to be found, and vice versa. Experiments with the Texas database are presented, and the proposed approach is tested when the data present different levels of noise and when faces are in different positions with respect to the camera.
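The distance-based false-positive rejection can be sketched as a filter over candidate (eye, eye, nose) triples; the thresholds and candidate coordinates below are illustrative placeholders (in pixels), not values from the paper.

```python
import numpy as np

def filter_candidates(eyes, noses, eye_dist=(50, 80), eye_nose=(40, 70)):
    """Keep only (eye1, eye2, nose) triples whose inter-eye and eye-nose
    distances fall inside plausible ranges; rejects stray detections."""
    keep = []
    for i, e1 in enumerate(eyes):
        for e2 in eyes[i + 1:]:
            d = np.linalg.norm(np.subtract(e1, e2))
            if not eye_dist[0] <= d <= eye_dist[1]:
                continue                      # eyes too close or too far apart
            for n in noses:
                d1 = np.linalg.norm(np.subtract(e1, n))
                d2 = np.linalg.norm(np.subtract(e2, n))
                if (eye_nose[0] <= d1 <= eye_nose[1] and
                        eye_nose[0] <= d2 <= eye_nose[1]):
                    keep.append((tuple(e1), tuple(e2), tuple(n)))
    return keep

eyes = [(100, 80), (100, 145), (30, 30)]      # last candidate is a false positive
noses = [(150, 112), (200, 200)]              # last candidate is a false positive
print(filter_candidates(eyes, noses))         # -> [((100, 80), (100, 145), (150, 112))]
```

In the paper's setting the surviving triples are the ones used to cross-predict search regions: each nose candidate constrains where its eyes should lie, and vice versa.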