
Showing papers in "International Journal of Image and Graphics in 2015"


Journal ArticleDOI
TL;DR: This paper offers a review of the state-of-the-art document image processing methods and their classification by identifying new trends for automatic document processing and understanding, and presents a comparative survey based on important aspects of a marketable system that is dependent on document image processing techniques.
Abstract: This paper offers a review of the state-of-the-art document image processing methods and their classification by identifying new trends for automatic document processing and understanding. Document image processing (DIP) is an important problem related to most of the challenges coming from the image processing field, with applications to digital document summarization, readers for the visually impaired, etc. Difficulties in the processing of documents can arise from lighting conditions, page curl, page rotation in 3D, and page layout segmentation. Document image processing is usually performed in the context of higher-level applications that require an undistorted document image, such as optical character recognition and document restoration/preservation. Typically, assumptions are made to constrain the processing problem in the context of a particular application. In this survey, we categorize document image processing methods on the basis of the technique, provide detailed descriptions of representative methods in each category, and examine their pros and cons. It is important to note here that the DIP field is broad; thus we try to provide a top–down/horizontal survey rather than a bottom-up one. At the same time, we target the area of document readers for the blind, and use this application to guide us in a top–down survey of DIP. Moreover, we present a comparative survey based on important aspects of a marketable system that is dependent on document image processing techniques.

11 citations


Journal ArticleDOI
TL;DR: Design principles and guidelines that can guide the design of the view for an Information Visualization solution are presented and identified by means of a literature review.
Abstract: The construction of an artifact to visually represent information is usually required by Information Visualization research projects. The end product of design science research is also an artifact and therefore it can be argued that design science research is an appropriate research paradigm for conducting Information Visualization research. Design science research requires that, during the Rigor Cycle, the design of the artifacts should be based on a scientific knowledge base. This article provides a knowledge base in the form of design guidelines that can guide the design of the view for an Information Visualization solution. The design principles and guidelines presented in this article are identified by means of a literature review.

8 citations


Journal ArticleDOI
TL;DR: This work demonstrates this by introducing grammars and rules to generate 108 words of a Persian poem Neyname from Rumi, and proposes hierarchies of complexity of the words to be used in future automatic visual generations of Persian poems.
Abstract: Lindenmayer systems (L-systems) allow us to grow sophisticated patterns by applying just a few simple rules. L-systems are now universal tools for the abstract representation of plant development. Can L-systems be used to "grow" Persian words and sentences? Yes. We demonstrate this by introducing grammars and rules to generate 108 words of the Persian poem Neyname by Rumi. The proposed method is scalable. We also construct hierarchies of complexity of the words and provide some insights on how the complexity of the poem changes during its development. The grammars and rules proposed could be used in future automatic visual generations of Persian poems.
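To make the string-rewriting mechanism concrete, here is a minimal L-system sketch in Python. The production rules below are illustrative placeholders, not the grammar the paper derives for the Persian words.

```python
# Minimal parallel string-rewriting L-system (hypothetical rules, not the
# paper's Persian grammar): every symbol is simultaneously replaced by the
# string its production rule maps it to.

def lsystem(axiom, rules, iterations):
    """Rewrite the axiom in parallel for the given number of iterations."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

if __name__ == "__main__":
    rules = {"A": "AB", "B": "A"}          # illustrative productions only
    for n in range(5):
        print(n, lsystem("A", rules, n))
```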

5 citations


Journal ArticleDOI
TL;DR: This work presents a novel method for 3D edge detection based on Boolean functions and local operators, which is an extension of the 2D edge detector introduced by Vemis et al.
Abstract: Edge detection is one of the most commonly used operations in image processing and computer vision areas. Edges correspond to the boundaries between regions in an image, which are useful for object segmentation and recognition tasks. This work presents a novel method for 3D edge detection based on Boolean functions and local operators, which is an extension of the 2D edge detector introduced by Vemis et al. [Signal Processing 45(2), 161–172 (1995)]. The proposed method is composed of two main steps. An adaptive binarization process is initially applied to blocks of the image and the resulting binary map is processed with a set of Boolean functions to identify edge points within the blocks. A global threshold, calculated to estimate image intensity variation, is then used to reduce false edges in the image blocks. The proposed method is compared to other 3D gradient filters: Canny, Monga–Deriche, Zucker–Hummel and Sobel operators. Experimental results demonstrate the effectiveness of the proposed technique when applied to several 3D synthetic and real data sets.
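As a rough illustration of the two-step pipeline described above (adaptive block binarization followed by an edge test and a global variation threshold), the following Python sketch uses a simple mixed-neighborhood test in place of the Boolean edge functions of Vemis et al., which are not reproduced here; the block size and threshold factor are assumptions.

```python
# Hedged sketch of the two-step idea: (1) adaptive binarization per block,
# (2) mark voxels whose 3x3x3 binary neighborhood is mixed, (3) prune weak
# candidates with a global threshold on local intensity variation.

import numpy as np
from scipy.ndimage import uniform_filter

def edge_candidates_3d(volume, block=8, k_global=1.0):
    vol = volume.astype(np.float64)
    binary = np.zeros(vol.shape, dtype=np.float64)
    # Step 1: adaptive binarization, one threshold per block (the block mean).
    for z in range(0, vol.shape[0], block):
        for y in range(0, vol.shape[1], block):
            for x in range(0, vol.shape[2], block):
                b = vol[z:z + block, y:y + block, x:x + block]
                binary[z:z + block, y:y + block, x:x + block] = (b > b.mean())
    # Step 2: a voxel is an edge candidate if its binary neighborhood is mixed.
    frac = uniform_filter(binary, size=3)
    candidates = (frac > 1e-6) & (frac < 1 - 1e-6)
    # Step 3: global threshold on local intensity variation to prune weak edges.
    local_mean = uniform_filter(vol, size=3)
    local_var = uniform_filter(vol ** 2, size=3) - local_mean ** 2
    local_std = np.sqrt(np.maximum(local_var, 0.0))
    return candidates & (local_std > k_global * local_std.mean())

edges = edge_candidates_3d(np.random.rand(32, 32, 32))
print(int(edges.sum()), "candidate edge voxels")
```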

4 citations


Journal ArticleDOI
TL;DR: If CSA receives centered input samples and full projection matrices are considered, then the obtained solution is equal to the one generated by MPCA; the general problem of ranking tensor components is also considered.
Abstract: In the area of multi-dimensional image databases modeling, the multilinear principal component analysis (MPCA) and concurrent subspace analysis (CSA) approaches were independently proposed and applied for mining image databases. The former follows the classical principal component analysis (PCA) paradigm that centers the sample data before subspace learning. The CSA, on the other hand, performs the learning procedure using the raw data. In addition, the corresponding tensor components have been ranked in order to identify the principal tensor subspaces for separating sample groups for face image analysis and gait recognition. In this paper, we first demonstrate that if CSA receives centered input samples and we consider full projection matrices, then the obtained solution is equal to the one generated by MPCA. Then, we consider the general problem of ranking tensor components. We examine the theoretical aspects of typical solutions in this field: (a) Estimating the covariance structure of the database; (b) Computing discriminant weights through separating hyperplanes; (c) Application of the Fisher criterion. We discuss these solutions for tensor subspaces learned using centered data (MPCA) and raw data (CSA). In the experimental results we focus on tensor principal components selected by the mentioned techniques for face image analysis considering gender classification as well as reconstruction problems.
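The centering point can be illustrated with a toy computation: the mode-n scatter matrices from which tensor projection matrices are learned are built below from raw samples (CSA-style) and from mean-centered samples (MPCA-style). This is only a sketch of the role of centering, not either method's full iterative algorithm.

```python
# Toy comparison of mode-n scatter matrices with and without centering.

import numpy as np

def mode_n_unfold(tensor, mode):
    """Unfold a 3rd-order tensor along the given mode into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_n_scatter(samples, mode, center=True):
    samples = np.asarray(samples, dtype=np.float64)
    if center:
        samples = samples - samples.mean(axis=0)   # MPCA centers the data
    dim = samples.shape[1 + mode]
    scatter = np.zeros((dim, dim))
    for x in samples:
        u = mode_n_unfold(x, mode)
        scatter += u @ u.T
    return scatter

rng = np.random.default_rng(0)
data = rng.normal(size=(30, 8, 6, 4))              # 30 samples of 8x6x4 tensors
for mode in range(3):
    s_raw = mode_n_scatter(data, mode, center=False)   # CSA-style (raw data)
    s_cen = mode_n_scatter(data, mode, center=True)    # MPCA-style (centered)
    print("mode", mode, "raw == centered:", np.allclose(s_raw, s_cen))
```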

4 citations


Journal ArticleDOI
TL;DR: The phase congruency method is used to extract features from palm print ROI images and achieves genuine acceptance rate (GAR) of 100% and FAR of 0.65%.
Abstract: Palm print authentication is a biometric technology for verifying a person's identity. In this paper, the phase congruency method is used to extract features from palm print ROI images. Phase congruency is an efficient method for extracting features under varying illumination conditions and is invariant to image contrast. By applying this method, local phase congruency (LPC), local orientation (LO) and local phase (LP) are extracted individually and fused using score-level fusion. To reduce the false acceptance rate (FAR), a Min-Max threshold range is employed, and the proposed method is tested on the PolyU database of 7480 images from 374 individuals, with 20 image samples per individual. The proposed system achieves a genuine acceptance rate (GAR) of 100% and an FAR of 0.65%.
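A minimal sketch of the score-level fusion step with min-max normalization, the generic scheme named in the abstract. The phase-congruency feature extraction (LPC, LO, LP) is not reproduced; the matcher scores, weights and acceptance threshold below are placeholders.

```python
# Score-level fusion with min-max normalization (illustrative values only).

import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] using the observed min and max."""
    scores = np.asarray(scores, dtype=np.float64)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse(score_lpc, score_lo, score_lp, weights=(1/3, 1/3, 1/3)):
    """Weighted sum of the three normalized matcher scores."""
    normalized = [min_max_normalize(s) for s in (score_lpc, score_lo, score_lp)]
    return sum(w * s for w, s in zip(weights, normalized))

# Hypothetical scores for a batch of comparisons; accept when the fused score
# clears a threshold chosen to trade FAR against GAR.
fused = fuse([0.2, 0.8, 0.5], [0.3, 0.9, 0.4], [0.1, 0.7, 0.6])
accepted = fused >= 0.5
print(fused, accepted)
```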

4 citations


Journal ArticleDOI
TL;DR: An improved version, weighted Felzenszwalb and Huttenlocher (WFH), of the well-known graph-based segmentation method of Felzenszwalb and Huttenlocher, which uses a nonlinear discrimination function based on the polynomial Mahalanobis distance (PMD) as the color similarity metric.
Abstract: We present a new segmentation method called weighted Felzenszwalb and Huttenlocher (WFH), an improved version of the well-known graph-based segmentation method of Felzenszwalb and Huttenlocher (FH). Our algorithm uses a nonlinear discrimination function based on the polynomial Mahalanobis distance (PMD) as the color similarity metric. Two empirical validation experiments were performed using, as a gold standard, ground truths (GTs) from a publicly available source, the Berkeley dataset, and an objective segmentation quality measure, the Rand dissimilarity index. In the first experiment the results were compared against the original FH method. In the second, WFH was compared against several well-known segmentation methods. In both cases, WFH produced significantly better similarity results when compared with the gold standard, and its segmentations showed a reduction in over-segmented regions.
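To illustrate the kind of color-similarity edge weight involved, the sketch below computes an ordinary Mahalanobis distance between two colors under a locally estimated covariance; the paper's polynomial Mahalanobis distance (PMD) is a nonlinear extension and is not reproduced here.

```python
# Mahalanobis color distance as a stand-in for the PMD edge weight.

import numpy as np

def mahalanobis_weight(c1, c2, color_sample):
    """Distance between two colors under the covariance of a local color sample."""
    cov = np.cov(np.asarray(color_sample, dtype=np.float64).T)
    cov_inv = np.linalg.pinv(cov)                 # pseudo-inverse for robustness
    d = np.asarray(c1, dtype=np.float64) - np.asarray(c2, dtype=np.float64)
    return float(np.sqrt(d @ cov_inv @ d))

# In an FH-style segmentation, a weight like this would replace the Euclidean
# RGB difference on each graph edge before the usual merging step.
sample = np.random.randint(0, 256, size=(200, 3))     # hypothetical local colors
print(mahalanobis_weight([120, 60, 30], [130, 70, 20], sample))
```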

3 citations


Journal ArticleDOI
TL;DR: A 3D communication system supported by Kinect and a head mounted display to provide users with communication that has a realistic sensation and intuitive manipulation, and that enables users to transfer and share information by intuitively manipulating augmented reality (AR) objects directed toward the other users in the future.
Abstract: Existing video communication systems, used in private business or video teleconferencing, show only part of each user's body on the display. This lacks a realistic sensation, because only part of the body is shown and the other users appear in 2D, so users feel as if they are communicating with the other users from a long distance. Furthermore, although these existing communication systems have file transfer functions such as sending and receiving file data, they do not use intuitive manipulation; they rely only on a mouse or a touch display. In order to solve these problems, we propose a 3D communication system supported by Kinect and a head mounted display (HMD) to provide users with communication that has a realistic sensation and intuitive manipulation. This system is able to show the whole body of each user on the HMD, as if they were in the same room, by 3D reconstruction. It will also enable users to transfer and share information by intuitively manipulating augmented reality (AR) objects directed toward the other users in the future. The result of this paper is a system that extracts the human body by using Kinect, reconstructs the extracted body on the HMD, and also recognizes the user's hands so that AR objects can be manipulated by hand.

3 citations


Journal ArticleDOI
TL;DR: The proposed algorithm is able to automatically sample and align the sensed images to form the final map; comparison with satellite images shows reasonable performance with geometrically correct registration.
Abstract: Aerial mapping is attracting more attention due to the development of unmanned aerial vehicles (UAVs), their availability, and the vast range of applications that require a wide aerial photograph of a region at a specific time. Cross-modality, as well as translation, rotation, scale change and illumination, are the main challenges in aerial image registration. This paper concentrates on an algorithm for aerial image registration that overcomes the aforementioned issues. The proposed method is able to automatically sample and align the sensed images to form the final map. The results are compared with satellite images, which shows reasonable performance with geometrically correct registration.

3 citations


Journal ArticleDOI
TL;DR: A new approach that uses zonewise profile features to identify and segment text regions from low resolution images of display boards captured from mobile phone cameras is presented.
Abstract: Automated systems for understanding display boards have many useful applications in guiding tourists, assisting the visually challenged and providing location-aware information. Such systems require an automated method to detect and extract text prior to further image analysis. In this paper, a new approach that uses zonewise profile features to identify and segment text regions from low resolution images of display boards captured with mobile phone cameras is presented. The method computes zonewise profile features on every 40 × 40 pixel image block and identifies potential text blocks using newly defined discriminant functions. Further, a merging algorithm is used to merge text blocks to obtain text regions. The method is implemented using the Android software development kit and tested on a Sony X-Peria™ Z C6603/C6602 mobile. The proposed methodology is evaluated on 3240 low resolution images of display boards captured from 2- and/or 5-megapixel cameras on mobile phones at various pixel...
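A rough sketch of block-wise profile features: the image is split into 40 × 40 blocks and horizontal/vertical projection profiles are computed per block. The paper's specific zonewise features and discriminant functions are not given in the abstract, so the statistics and thresholds below are illustrative stand-ins.

```python
# Block-wise projection-profile statistics as illustrative text/non-text cues.

import numpy as np

def block_profile_features(gray, block=40):
    h, w = gray.shape
    feats = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = gray[y:y + block, x:x + block].astype(np.float64)
            row_profile = b.sum(axis=1)            # horizontal projection
            col_profile = b.sum(axis=0)            # vertical projection
            # Text blocks tend to show strong alternation (high variance)
            # in their projection profiles; these stats are stand-ins.
            feats[(y, x)] = (row_profile.std(), col_profile.std())
    return feats

gray = np.random.randint(0, 256, size=(200, 320)).astype(np.uint8)
text_blocks = [pos for pos, (rs, cs) in block_profile_features(gray).items()
               if rs > 500 and cs > 500]           # hypothetical thresholds
print(len(text_blocks), "candidate text blocks")
```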

3 citations


Journal ArticleDOI
TL;DR: A novel computer modeling and simulation method for fruit sunscald disease, which focuses on the morphology change of fruit appearance affected by sunscald disease under the condition of water loss; an improved mass-spring model is proposed by combining cell turgor pressure with the mass-spring formulation.
Abstract: Although the simulation of many kinds of natural phenomena has been studied in the field of computer simulation and graphics, the reproduction of natural fruit disease processes has not received much attention. Sunscald is representative of physiological diseases. This paper presents a novel computer modeling and simulation method for fruit sunscald. We mainly focus on the morphological change of fruit appearance affected by sunscald disease under the condition of water loss. An improved mass-spring model is proposed by combining cell turgor pressure with the mass-spring formulation. We adopt this physical deformation model for dynamic simulation of fruit sunscald disease. We calculate the cell turgor pressure variation due to water loss and thereby obtain the displacement change of every mass particle in our simulation system. Finally, Maya is used to render the deformed model and produce the simulation result. Experiments demonstrate the effectiveness of our method.

Journal ArticleDOI
TL;DR: This method detects red areas using hue information from a source image and judges whether contours are mini tomatoes or not by using the curvature; it is compared with a circle detection method using the Hough transform.
Abstract: In this paper, we propose a robust recognition method for occluded mini tomatoes based on hue information and curvature. This method is intended for a robot-based management system for hydroponics that we have previously proposed. In this system, robots need to recognize mini tomatoes to manage farmlands. In many cases, mini tomatoes are partially covered by leaves or other tomatoes. Hence, the system needs a mini tomato recognition method that works in situations involving occlusion. First, the method detects red areas using hue information from a source image. Second, the method detects contours from these areas by using contour tracking. Finally, the method judges whether contours are mini tomatoes or not by using the curvature. We compared our method with a circle detection method using the Hough transform. Experimental results showed that the recognition rate of our method was 78.8%, whereas the recognition rate of the comparative method was 47.9%. Therefore, we consider that the proposed method is appropriate for mini tomato recognition in situations involving occlusion.
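The three-step pipeline (red-area detection in hue, contour extraction, and a curvature-based roundness check) can be sketched with OpenCV as follows; a simple circularity ratio stands in for the paper's curvature analysis, and the hue ranges, area and circularity thresholds, and input file name are assumptions.

```python
# Hue thresholding + contour extraction + roundness test (illustrative values).

import cv2
import numpy as np

def detect_red_mask(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in OpenCV's 0-179 hue range.
    m1 = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    m2 = cv2.inRange(hsv, (170, 80, 60), (179, 255, 255))
    return cv2.bitwise_or(m1, m2)

def find_tomato_like_contours(mask, min_area=100, min_circularity=0.6):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    tomatoes = []
    for c in contours:
        area = cv2.contourArea(c)
        peri = cv2.arcLength(c, True)
        if area < min_area or peri == 0:
            continue
        circularity = 4 * np.pi * area / (peri * peri)  # 1.0 for a perfect circle
        if circularity > min_circularity:
            tomatoes.append(c)
    return tomatoes

img = cv2.imread("tomatoes.jpg")                 # hypothetical input image
if img is not None:
    mask = detect_red_mask(img)
    print(len(find_tomato_like_contours(mask)), "tomato-like contours")
```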

Journal ArticleDOI
TL;DR: This paper presents a computer vision-based real-time 3D gesture recognition system using depth image which tracks 3D joint position of head, neck, shoulder, arms, hands and legs and 3D motion gesture is recognized using the movement trajectory of those joints.
Abstract: Gesture is one of the fundamental modes of natural human-machine interaction. To understand gestures, the system should be able to interpret 3D movements of the human. This paper presents a computer vision-based real-time 3D gesture recognition system using depth images, which tracks the 3D joint positions of the head, neck, shoulders, arms, hands and legs. This tracking is done by the Kinect motion sensor with the OpenNI API, and 3D motion gestures are recognized using the movement trajectories of those joints. The user-to-Kinect-sensor distance is adapted using the proposed center of gravity (COG) correction method, and 3D joint positions are normalized using the proposed joint position normalization method. For gesture learning and recognition, data mining classification algorithms such as Naive Bayes and neural networks are used. The system is trained to recognize 12 gestures used by umpires in a cricket match. It is trained and tested using about 2000 training instances for 12 gestures from 15 persons. The system is tested using the 5-fold cross-validation method and achieves 98.11% accuracy with the neural network and 88.84% accuracy with the Naive Bayes classification method.
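A minimal sketch of the recognition stage: 3D joint positions are normalized to a body-centered origin and scale and the flattened trajectories are classified with off-the-shelf Naive Bayes and neural network classifiers from scikit-learn. The paper's COG correction and the Kinect/OpenNI capture code are not reproduced; the data below is synthetic.

```python
# Joint normalization + classification on synthetic joint trajectories.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

def normalize_joints(frames):
    """frames: (T, J, 3) joint positions. Center on the per-frame mean joint
    (a crude stand-in for the COG correction) and scale by the mean distance
    of the joints from that center."""
    frames = np.asarray(frames, dtype=np.float64)
    centered = frames - frames.mean(axis=1, keepdims=True)
    scale = np.linalg.norm(centered, axis=2).mean() or 1.0
    return (centered / scale).reshape(len(frames), -1)

# Hypothetical tiny dataset: 40 sequences of 10 frames x 15 joints, 2 classes.
rng = np.random.default_rng(0)
X = np.stack([normalize_joints(rng.normal(size=(10, 15, 3))).ravel()
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

for clf in (GaussianNB(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)):
    clf.fit(X, y)
    print(type(clf).__name__, "training accuracy:", clf.score(X, y))
```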

Journal ArticleDOI
TL;DR: A new method for inverse geometric reconstruction of conics in 3D space using a ray–surface intersection that requires only three intersection points and does not require establishing correspondence between the two perspective views.
Abstract: This paper presents a new method for inverse geometric reconstruction of conics in 3D space using a ray–surface intersection. The perspective views of the conic in both image planes are used as the input of the reconstruction algorithm. Least-squares curve fitting is used in one of the 2D image planes to obtain the algebraic equation of the projected conic. The ray–surface intersection is performed using a second-order method, where a new criterion is given to provide the unique intersection. A plane is fitted through the evolved intersection points. The constructed plane cuts the conical surface to yield the desired conic. The proposed method does not require establishing correspondence between the two perspective views. Moreover, it requires only three intersection points. Various experiments are presented to support the validity of the proposed algorithm. Simulation studies are also performed to observe the effect of noise on reconstruction errors. The effect of quantization errors is also considered in the final reconstruction.
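The first stage, least-squares fitting of the projected conic in one image plane, can be sketched as a homogeneous linear system solved by SVD; the ray–surface intersection and the 3D plane fitting are not reproduced here.

```python
# Least-squares fit of the implicit conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

import numpy as np

def fit_conic(points):
    """Return conic coefficients (a, b, c, d, e, f), up to scale, via SVD."""
    pts = np.asarray(points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    # Design matrix of the homogeneous system D @ coeffs = 0.
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]          # right singular vector of the smallest singular value

# Hypothetical noisy samples on an ellipse.
t = np.linspace(0, 2 * np.pi, 50)
pts = np.column_stack([3 * np.cos(t) + 1, 2 * np.sin(t) - 0.5])
pts += np.random.normal(scale=0.01, size=pts.shape)
print(fit_conic(pts))
```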

Journal ArticleDOI
TL;DR: In this work, optical flow based on Horn–Schunck with the Barron, Fleet and Beauchemin (BFB) kernel has been employed to estimate the motion vectors, and the proposed GMSC distance function has shown a significant change in the tracking outcome.
Abstract: The process of estimating the path of an object as it moves through a region of the scene in the image plane is the principle of tracking. In other words, it is a strategy to detect and track a moving object through a sequence of frames. In this work, optical flow based on Horn–Schunck with the Barron, Fleet and Beauchemin (BFB) kernel has been employed to estimate the motion vectors. The peripheries of moving objects are extracted for different shape signatures such as the boundary, edge, area, curvature and centroid distance functions. Fourier descriptors (FD) of a particular shape signature are computed for each of the candidate templates and for the model template. Similarity between the model template and the candidate templates is confirmed by the minimum Minkowski distance (MD). Subsequently, the best-matching candidate template is updated by the model template in the tracking process. The centroid distance function showed particular promise, which further motivated us to extend it into the proposed criterion, the geometric mean of segmented centroid (GMSC) distance function, to track the object. The proposed GMSC distance function has shown a significant change in the tracking outcome.
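A sketch of the baseline shape-matching stage: the centroid distance signature of a closed contour, its Fourier descriptors, and a Minkowski distance between templates. The optical-flow motion estimation and the proposed GMSC variant are not reproduced; the contours below are synthetic.

```python
# Centroid distance signature -> Fourier descriptors -> Minkowski matching.

import numpy as np

def centroid_distance_signature(contour, n_samples=64):
    """contour: (N, 2) boundary points; returns a resampled distance signature."""
    contour = np.asarray(contour, dtype=np.float64)
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)
    idx = np.linspace(0, len(dist) - 1, n_samples).astype(int)
    return dist[idx]

def fourier_descriptors(signature, keep=16):
    """Magnitude spectrum normalized by the DC term, so the descriptor is
    insensitive to starting point (phase) and uniform scaling."""
    spectrum = np.abs(np.fft.fft(signature))
    return spectrum[1:keep + 1] / (spectrum[0] + 1e-12)

def minkowski(a, b, p=2):
    return float(np.sum(np.abs(np.asarray(a) - np.asarray(b)) ** p) ** (1.0 / p))

# Hypothetical model/candidate contours (a circle vs. a slightly squashed one).
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
model = np.column_stack([np.cos(t), np.sin(t)])
candidate = np.column_stack([1.1 * np.cos(t), 0.9 * np.sin(t)])
d = minkowski(fourier_descriptors(centroid_distance_signature(model)),
              fourier_descriptors(centroid_distance_signature(candidate)))
print("Minkowski distance between shape descriptors:", d)
```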

Journal ArticleDOI
TL;DR: A novel texture descriptor called generic weighted cubicle pattern (GWCP) is proposed and the proposed operator for texture image classification outperforms other descriptors in terms of abnormality detection in mammogram images.
Abstract: Digital image processing techniques are very useful for abnormality detection in digital mammogram images. Nowadays, texture-based image segmentation of digital mammogram images is very popular due to its better accuracy and precision. The local binary pattern (LBP) descriptor has attracted many researchers working in the field of texture analysis of digital images. Because of its success, many texture descriptors have been introduced as variants of LBP. In this work, we propose a novel texture descriptor called the generic weighted cubicle pattern (GWCP) and analyze the proposed operator for texture image classification. We also perform abnormality detection through mammogram image segmentation using the k-nearest neighbors (KNN) algorithm and compare the performance of the proposed texture descriptor with LBP and other variants of LBP, namely the local ternary pattern (LTPT), extended local texture pattern (ELTP) and local texture pattern (LTPS). For evaluation, we use performance metrics such as accuracy, error rate, sensitivity, specificity, underestimation fraction and overestimation fraction. The results prove that the proposed method outperforms the other descriptors in terms of abnormality detection in mammogram images.
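Since the GWCP definition is not given in the abstract, the sketch below implements the comparison baseline instead: a plain LBP histogram per patch classified with k-nearest neighbors, on synthetic texture patches.

```python
# LBP histogram features + KNN classification (baseline only, not GWCP).

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(gray):
    """8-neighbor LBP over the interior pixels, returned as a 256-bin histogram."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neighbor >= c).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

# Hypothetical training patches with two texture classes (smooth vs. noisy).
rng = np.random.default_rng(1)
smooth = [rng.normal(128, 5, (32, 32)) for _ in range(20)]
noisy = [rng.normal(128, 60, (32, 32)) for _ in range(20)]
X = np.array([lbp_histogram(p) for p in smooth + noisy])
y = np.array([0] * 20 + [1] * 20)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("training accuracy:", knn.score(X, y))
```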

Journal ArticleDOI
TL;DR: The proposed Magno (M)-channel filter enables the modeling, simulation and better understanding of some of the initial mechanisms in the visual pathway, while simultaneously providing a fast, biologically inspired algorithm for digital image preprocessing.
Abstract: We propose that the Magno (M)-channel filter, belonging to the extended classical receptive field (ECRF) model, provides us with "vision at a glance", by performing smoothing with edge preservation. We compare the performance of the M-channel filter with the well-known bilateral filter in achieving such "vision at a glance", which is akin to image preprocessing in the computer vision domain. We find that at higher noise levels, the M-channel filter performs better than the bilateral filter in terms of reducing noise while preserving edge details. The M-channel filter is also significantly simpler and therefore faster than the bilateral filter. Overall, the M-channel filter enables us to model, simulate and arrive at a better understanding of some of the initial mechanisms in the visual pathway, while simultaneously providing a fast, biologically inspired algorithm for digital image preprocessing.
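The M-channel filter itself is not specified in the abstract; for reference, this is how the bilateral filter it is compared against would be applied in OpenCV, with illustrative parameters and a hypothetical input file.

```python
# Edge-preserving smoothing with the bilateral filter (the comparison baseline).

import cv2

img = cv2.imread("noisy.png")                    # hypothetical noisy input image
if img is not None:
    # Arguments: neighborhood diameter, sigma in color space, sigma in pixel space.
    smoothed = cv2.bilateralFilter(img, 9, 75, 75)
    cv2.imwrite("smoothed.png", smoothed)
```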

Journal ArticleDOI
TL;DR: This work presents a framework to design force fields that drive particles to follow a path under a physics-based animation system and uses B-splines to define the steering force that best approximates the user-specified path.
Abstract: We present a framework to design force fields that drive particles to follow a path under the physics-based animation system. In this framework, a user interactively specifies the desired path, represented by a Bezier curve using a GUI and the attraction force that drives a particle toward the target location. Then, the framework automatically defines the steering force to make a particle follow the desired path. To this end, we use B-splines to define the steering force that best approximates the user-specified path. We demonstrate the effectiveness of our method by showing a large number of particles following the desired path and forming an animated human figure. Our method creates a stable behavior of particles and is fast enough to run in real time.
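A simple sketch of the path-following idea: a steering force proportional to the offset toward the nearest sample of the desired path, integrated with explicit Euler steps. The paper derives the force from a fitted B-spline; the dense polyline, gain and damping used here are assumptions.

```python
# Steering force toward the nearest point of a sampled path (illustrative only).

import numpy as np

def steering_force(position, path_points, gain=2.0):
    """Force vector pushing the particle toward its closest path sample."""
    path = np.asarray(path_points, dtype=np.float64)
    pos = np.asarray(position, dtype=np.float64)
    d = path - pos
    nearest = path[np.argmin(np.einsum("ij,ij->i", d, d))]
    return gain * (nearest - pos)

# Hypothetical circular target path; integrate one particle with explicit
# Euler steps and a damping term to watch it converge onto the path.
t = np.linspace(0, 2 * np.pi, 200)
path = np.column_stack([np.cos(t), np.sin(t)])
pos, vel, dt = np.array([2.0, 0.0]), np.zeros(2), 0.02
for _ in range(500):
    vel += dt * (steering_force(pos, path) - 0.5 * vel)
    pos += dt * vel
print("final distance to path:", np.min(np.linalg.norm(path - pos, axis=1)))
```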

Journal ArticleDOI
TL;DR: A new system for double-sided Braille dot recognition is proposed, which employs a highly efficient two-stage adaptive technique to differentiate the recto and verso dots of inter-point Braille, exploiting the horizontal and vertical projection profiles along with distance thresholding for Braille character segmentation.
Abstract: Problem statement: An optical Braille character recognition (OBR) system is in substantial need in order to preserve Braille documents and make them available in the future to the large section of visually impaired people. The recognition and transcription of a double-sided Braille document into its corresponding natural text is indeed a challenging task. This difficulty is due to the overlapping of the front side dots (recto) with the back side dots (verso) in an inter-point Braille document. In such settings, the usual method of template matching to distinguish recto and verso dots is unproductive. Approach: A new system for double-sided Braille dot recognition is proposed, which employs a highly efficient two-stage adaptive technique to differentiate the recto and verso dots of inter-point Braille, exploiting the horizontal and vertical projection profiles along with distance thresholding for Braille character segmentation. Materials: The efficacy of this segmentation technique is demonstrated on a large dataset consisting of Hindi Devanagari Braille documents with varying image resolution and diverse word patterns. The primary reason for choosing Hindi Devanagari Braille is that Hindi is the national language of India and an OBR for Hindi Devanagari Braille is not available. Results: Braille line segmentation accuracy of 100%, word segmentation accuracy of 99.8% and character segmentation accuracy of 99.4% have been accomplished. Conclusion: This is the first effort at OBR development for Hindi Devanagari Braille. The proposed method is tolerant to merging of Braille dots and the presence of half characters.
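The projection-profile segmentation idea can be sketched as follows: a binarized dot map is summed along rows (line segmentation) or columns (cell segmentation) and split wherever the profile stays low for long enough. Recto/verso discrimination and the paper's distance thresholding are not reproduced; the dot map and gap threshold are illustrative.

```python
# Projection-profile segmentation of a binary Braille dot map (illustrative).

import numpy as np

def split_by_profile(profile, min_gap=5, thresh=0):
    """Return [start, end) index pairs of runs where the profile exceeds thresh,
    merging runs separated by gaps shorter than min_gap."""
    active = profile > thresh
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append([start, i])
            start = None
    if start is not None:
        segments.append([start, len(active)])
    merged = []
    for seg in segments:
        if merged and seg[0] - merged[-1][1] < min_gap:
            merged[-1][1] = seg[1]
        else:
            merged.append(seg)
    return merged

# Hypothetical binary dot map (1 = Braille dot pixel): the horizontal profile
# gives line bands; within a line, the vertical profile separates cells.
dots = np.zeros((60, 120), dtype=np.uint8)
dots[10:14, 20:24] = dots[30:34, 50:54] = 1
print("detected Braille line bands:", split_by_profile(dots.sum(axis=1)))
```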

Journal ArticleDOI
TL;DR: This paper presents an algorithm to construct initial contours for active contour models using a heuristic method and shows how an accurate contour can be extracted in the current slice.
Abstract: Automatically extracting contours in every slice using active contour models is an important approach for segmenting CT/MRI images. The key point of this segmentation approach is to automatically construct initial contours for the active contour models, because any active contour model is sensitive to its initial contour. This paper presents an algorithm to construct such initial contours using a heuristic method. Assuming that the contour in the previous slice (previous contour) is accurate, the contour in the current slice (current contour) is constructed from the previous contour by recognizing and linking edge points of tissues according to the previous contour. The contour linking these edge points is used as the initial contour for the distance regularized level set evolution (DRLSE) method, from which an accurate contour can then be extracted in the current slice.

Journal ArticleDOI
TL;DR: An upper human body tracking system with agent-based architecture that departs from the process-centric model and introduces a novel model by which agents are bound to the objects or sub-objects being recognized or tracked.
Abstract: In this paper, we present an upper human body tracking system with an agent-based architecture. Our agent-based approach departs from the process-centric model, where agents are bound to specific processes, and introduces a novel model by which agents are bound to the objects or sub-objects being recognized or tracked. To demonstrate the effectiveness of our system, we use stereo video streams, captured by calibrated stereo cameras, as inputs and synthesize human animations represented by 3D skeletal motion data. Unlike our previous research, the new system does not require a restricted capture environment with special lighting conditions and projected patterns, and subjects can wear daily clothes (we do not use any markers). Building on the success of the previous research, our pre-designed agents are autonomous, self-aware entities that are capable of communicating with other agents to perform tracking within agent coalitions. Each agent with high-level abstracted knowledge seeks 'evidence' for its existence both from low-level features (e.g. motion vector fields, color blobs) and from its peers (other agents representing body parts with which it is compatible). The power of the agent-based approach is the flexibility with which domain information may be encoded within each agent to produce an overall tracking solution.

Journal ArticleDOI
TL;DR: In many documents such as maps, engineering drawings and artistic documents, there exist many printed as well as handwritten materials where text regions and text-lines are not parallel to each other, are curved in nature, and contain various types of text, such as different font sizes, text and non-text areas lying close to each other, and non-straight, skewed and warped text-lines.
Abstract: In many documents such as maps, engineering drawings and artistic documents, there exist many printed as well as handwritten materials in which text regions and text-lines are not parallel to each other, are curved in nature, and contain various types of text, such as different font sizes, text and non-text areas lying close to each other, and non-straight, skewed and warped text-lines. Commercially available optical character recognition (OCR) systems, such as ABBYY FineReader and Free OCR, are not capable of handling the full range of stylistic document images containing curved, multi-oriented and stylish-font text-lines. Extraction of individual text-lines and words from these documents is generally not straightforward. Most of the segmentation work reported is on simple documents, and it remains a highly challenging task to implement an OCR system that works under all possible conditions and gives highly accurate results, especially in the case of stylistic documents. This paper presents dilation and floo...

Journal ArticleDOI
TL;DR: This paper introduces a data-driven approach for human locomotion generation that takes as input a set of example locomotion clips and a motion path specified by an animator, and suggests several techniques to synthesize a convincing output animation.
Abstract: This paper introduces a data-driven approach for human locomotion generation that takes as input a set of example locomotion clips and a motion path specified by an animator. Significantly, the approach only requires a single example of straight-path locomotion for each style expressed and can produce a continuous output sequence on an arbitrary path. Our approach considers quantitative and qualitative aspects of motion and suggests several techniques to synthesize a convincing output animation: motion path generation, interactive editing, and physical enhancement for the output animation. Initiated with an example clip, this process produces motion that differs stylistically from any in the example set, yet preserves the high quality of the example motion. As shown in the experimental results, our approach provides efficient locomotion generation by editing motion capture clips, especially for a novice animator, at interactive speed.

Journal ArticleDOI
TL;DR: This work presents fast algorithms for detection and removal of ruled background lines which intersect and mix with the text; the results show the benefits of the proposed algorithms, with F1-measures of 91.43% and 88.52% for detection and removal, respectively.
Abstract: Automation is becoming the standard in nearly all aspects of life, including text analysis, translation and retrieval. These tasks require a machine-typed format as a preprocessing step. Converting handwritten text into its machine-printed counterpart requires an optical character recognition (OCR) system, which in turn requires clean text as input. One of the problems in obtaining clean handwritten text is ruled background lines, which intersect and mix with the text. In this work, we present fast algorithms for the detection and removal of these ruled lines. The detection stage uses only a centered square part of the document image, instead of wasting time on the whole document image, and uses the Hough transform to obtain the location and direction of the ruled lines. The removal algorithm uses color histogram segmentation to separate the text from the ruled lines; the hue component is used to represent color instead of all color components. The segmented document image is then morphologically enhanced and converted to a binary image suitable for OCR. The results show the benefits of the proposed algorithms, with F1-measures of 91.43% and 88.52% for detection and removal, respectively.
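A hedged sketch of the detection stage: the probabilistic Hough transform is run on a central crop of the page to estimate ruled-line directions. The hue-histogram removal stage is not reproduced, and the Canny/Hough parameters and input file name are assumptions.

```python
# Ruled-line detection on a central crop via Canny + probabilistic Hough transform.

import cv2
import numpy as np

def detect_ruled_lines(gray, crop_frac=0.5):
    h, w = gray.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = gray[y0:y0 + ch, x0:x0 + cw]          # central crop, as described above
    edges = cv2.Canny(crop, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=cw // 2, maxLineGap=10)
    if lines is None:
        return []
    return [np.degrees(np.arctan2(y2 - y1, x2 - x1))
            for x1, y1, x2, y2 in lines[:, 0]]

img = cv2.imread("ruled_page.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
if img is not None:
    print("estimated ruled-line angles (deg):", detect_ruled_lines(img))
```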

Journal ArticleDOI
TL;DR: This paper analyzes the underlying reason for false minutiae generated by state-of-the-art amplitude modulation–frequency modulation (AM–FM)-based methods and proposes an improved approach by devising a better way to cope with the branch cuts (or discontinuities) in the fingerprint ridge orientation fields, and introducing an effective scheme to remove false minutiae from the reconstructed fingerprint images.
Abstract: Reconstructing fingerprint images from a given set of minutiae is an important issue in analyzing the masquerade attack of automated fingerprint recognition systems (AFRSs) and in generating large scale databases of synthetic fingerprint images for the performance evaluation of AFRSs. Existing fingerprint reconstruction methods either cannot generate visually plausible or realistic fingerprint images, or suffer from the occurrence of false minutiae in the reconstructed fingerprint images. In this paper, we analyze the underlying reason for the false minutiae generated by state-of-the-art amplitude modulation–frequency modulation (AM–FM)-based methods. Furthermore, we propose an improved approach by devising a better way to cope with the branch cuts (or discontinuities) in the fingerprint ridge orientation fields, and by introducing an effective scheme to remove false minutiae from the reconstructed fingerprint images. Compared with previous AM–FM based methods, the proposed method gets rid of block effects and successfully reduces the number of false minutiae. Theoretical proofs are provided for the effectiveness of the proposed method for fingerprints with multiple singular points. The proposed method has also been evaluated on public fingerprint databases. The results demonstrate that it is superior to the existing methods in reconstructing realistic fingerprint images with fewer false minutiae.
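As a toy illustration of the AM–FM ridge model these methods build on, the snippet below renders a pure FM ridge pattern cos(phase) with constant amplitude, frequency and orientation; real reconstructions use a spatially varying orientation field, which is where the branch cuts and false minutiae the paper addresses arise. This shows only the representation, not the reconstruction algorithm.

```python
# Pure FM ridge pattern under the AM-FM fingerprint model: constant amplitude,
# constant ridge frequency and a single constant orientation (no minutiae,
# no singular points, no branch-cut handling).

import numpy as np

h, w = 256, 256
freq = 0.1                                   # ridge frequency in cycles/pixel
theta = np.pi / 6                            # constant ridge-normal direction
y, x = np.mgrid[0:h, 0:w].astype(np.float64)
phase = 2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta))
ridges = 0.5 + 0.5 * np.cos(phase)           # ridge intensity in [0, 1]
print(ridges.shape, float(ridges.min()), float(ridges.max()))
```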