Mamdouh F. Fahmy
Bio: Mamdouh F. Fahmy is an academic researcher from Assiut University. The author has contributed to research in topics: Wavelet & Wavelet transform. The author has an h-index of 11, co-authored 94 publications receiving 427 citations. Previous affiliations of Mamdouh F. Fahmy include University of Leeds & University of Kent.
Papers published on a yearly basis
01 Dec 2013
TL;DR: Experimental results have shown that when analyzing the FVC2004, FVC2002, and FVC2000 databases using the proposed algorithm, the average classification error rates are much lower than those obtained by other approaches.
Abstract: Fingerprint segmentation is one of the most important preprocessing steps in an automatic fingerprint identification system (AFIS). It is used to separate the fingerprint area (foreground) from the image background. Accurate segmentation of a fingerprint greatly reduces the computation time of the subsequent processing steps and discards many spurious minutiae. In this paper, a new segmentation algorithm is presented. Apart from its simplicity, it is characterized by depending neither on empirical thresholds chosen by experts nor on a model trained on manually segmented fingerprints. The algorithm uses the block range as a feature to achieve fingerprint segmentation. Then, morphological closing and opening operations are performed to extract the foreground from the image. The performance of the proposed technique is checked by evaluating the classification error (Err). Experimental results have shown that when analyzing the FVC2004, FVC2002, and FVC2000 databases using the proposed algorithm, the average classification error rates are much lower than those obtained by other approaches. Several illustrative examples are given to verify this conclusion.
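The block-range feature followed by morphological cleanup can be sketched as below. The block size, the mean-range threshold, and the 3x3 cross structuring element are illustrative assumptions only, not the paper's choices (the paper is explicitly threshold-free):

```python
import numpy as np

def _dilate(m):
    # 3x3 cross dilation of a boolean mask (borders padded with False).
    p = np.pad(m, 1)
    return p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]

def _erode(m):
    # 3x3 cross erosion (borders padded with True so edges are not eaten).
    p = np.pad(m, 1, constant_values=True)
    return p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]

def segment_fingerprint(img, block=16):
    """Sketch of block-range segmentation for a 2-D grayscale array."""
    hb, wb = img.shape[0] // block, img.shape[1] // block
    blocks = img[:hb * block, :wb * block].reshape(hb, block, wb, block)
    # Block range (max - min) as the feature: ridge/valley contrast is
    # high in the foreground and low in the smooth background.
    rng = blocks.max(axis=(1, 3)) - blocks.min(axis=(1, 3))
    # The paper avoids empirically tuned thresholds; the mean block range
    # is used here only as a simple stand-in.
    mask = rng > rng.mean()
    mask = _erode(_dilate(mask))   # morphological closing
    mask = _dilate(_erode(mask))   # morphological opening
    # Expand the block-level mask back to pixel resolution.
    return mask.repeat(block, axis=0).repeat(block, axis=1)
```

The closing first bridges small holes in the foreground, and the subsequent opening removes isolated foreground blocks, mirroring the abstract's order of operations.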
TL;DR: This paper proposes a novel technique for enhanced B-spline-based compression in different image coders: the image is preprocessed prior to the decomposition stage of any image coder to reduce data correlation and allow more compression, as quantified by the authors' correlation metric.
Abstract: In this paper we propose novel techniques for signal/image decomposition and reconstruction based on B-spline mathematical functions. Our proposed B-spline multiscale/multiresolution representation is built upon a perfect-reconstruction analysis/synthesis point of view. The proposed B-spline analysis can be utilized for different signal/imaging applications such as compression, prediction, and denoising. We also present a straightforward, computationally efficient approach for B-spline basis calculation that is based upon matrix multiplication and avoids any extra generated basis. We then propose a novel technique for enhanced B-spline-based compression for different image coders, by preprocessing the image prior to the decomposition stage in any image coder. This reduces the amount of data correlation and allows for more compression, as shown with our correlation metric. Extensive simulations carried out on the well-known SPIHT image coder, with and without the proposed correlation-removal methodology, are presented. Finally, we utilize our proposed B-spline basis for denoising and estimation applications. Illustrative results that demonstrate the efficiency of the proposed approaches are presented.
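The abstract's matrix-multiplication approach to basis calculation is not spelled out here. As a hedged illustration of the general idea, the standard uniform cubic B-spline segment (not necessarily the paper's exact construction) can be evaluated purely by matrix products, with no explicit basis-function recursion:

```python
import numpy as np

# Standard uniform cubic B-spline basis matrix.
M = np.array([[-1.0,  3.0, -3.0, 1.0],
              [ 3.0, -6.0,  3.0, 0.0],
              [-3.0,  0.0,  3.0, 0.0],
              [ 1.0,  4.0,  1.0, 0.0]]) / 6.0

def bspline_eval(ctrl, t):
    """Evaluate one cubic B-spline segment at parameter t in [0, 1] by
    matrix multiplication: s(t) = [t^3, t^2, t, 1] @ M @ ctrl,
    where ctrl holds the four surrounding control points."""
    T = np.array([t**3, t**2, t, 1.0])
    return T @ M @ ctrl
```

Because the rows of `M` sum to the vector [0, 0, 0, 1], the four weights form a partition of unity at every t, which is the property that makes B-spline analysis/synthesis well behaved for reconstruction.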
TL;DR: In this paper, transfer functions were constructed which exhibit flat amplitude and linear phase characteristics over a finite band, and reciprocal realizations for the implementation in distributed structures were derived, together with a non-reciprocal version suitable for recursive digital filter design.
Abstract: Using the recently derived formula which interpolates to a linear phase response at equidistant frequency intervals in a distributed variable, transfer functions are constructed which exhibit flat amplitude and linear phase characteristics over a finite band. Reciprocal realizations for the implementation in distributed structures are derived, together with a non-reciprocal version ideally suited for recursive digital filter design. Their superiority over the corresponding maximally flat solutions is also illustrated.
TL;DR: An exact estimation of the PSF size is presented, which yields the optimum restored image quality for both noisy and noiseless images, and a technique is proposed to improve the sharpness of the deconvolved images, by constrained maximization of some of the detail wavelet packet energies.
Abstract: Successful blind image deconvolution algorithms require exact estimation of the point spread function (PSF) size. In the absence of any a priori information about the imaging system and the true image, this estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. This paper presents an exact estimation of the PSF size, which yields the optimum restored image quality for both noisy and noiseless images. It is based on evaluating the detail energy of the wavelet packet decomposition of the blurred image; the minimum detail energies occur at the optimum PSF size. Having accurately estimated the PSF, the paper also proposes a fast double-updating algorithm for improving the quality of the restored image. This is achieved by least-squares minimization of a system of linear equations that minimizes error functions derived from the blurred image. Moreover, a technique is also proposed to improve the sharpness of the deconvolved images, by constrained maximization of some of the detail wavelet packet energies. Simulation results of several examples have verified that the proposed technique manages to yield a sharper image with higher PSNR than classical approaches.
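A minimal sketch of the selection idea follows. A one-level 2-D Haar transform stands in for the paper's wavelet packet decomposition, and `deconvolve` is a caller-supplied routine (both are hypothetical stand-ins, not the paper's method):

```python
import numpy as np

def haar_detail_energy(img):
    """One-level 2-D Haar transform of an even-sized image; return the
    total energy of the three detail subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return float((lh**2).sum() + (hl**2).sum() + (hh**2).sum())

def estimate_psf_size(blurred, candidate_sizes, deconvolve):
    """Hypothetical selection loop: restore with each trial PSF size and,
    following the abstract's criterion, keep the size whose result has
    minimum detail energy."""
    energies = [haar_detail_energy(deconvolve(blurred, s))
                for s in candidate_sizes]
    return candidate_sizes[int(np.argmin(energies))]
```

The point of the sketch is only the structure of the search: a scalar detail-energy criterion turns PSF-size selection into a one-dimensional minimization instead of visual trial and error.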
TL;DR: In this paper, an analytical solution is obtained for the transfer function of a digital filter that exhibits an optimum maximally flat amplitude characteristic and a maximally flat delay characteristic simultaneously; explicit multiplier values are given for the direct realization in terms of the degree of the network and an arbitrary bandwidth scaling factor.
Abstract: An analytical solution is obtained for the transfer function of a digital filter which exhibits an optimum maximally flat amplitude characteristic and a maximally flat delay characteristic simultaneously. Explicit values for the multipliers are given for the direct realization in terms of the degree of the network and an arbitrary bandwidth scaling factor. Finally, it is concluded that this type of filter is useful in the area where the degree of a non-recursive filter becomes excessive to fulfil an amplitude requirement (e.g. narrow bandwidth) and where recursive filters designed solely on an amplitude basis are too dispersive.
01 Feb 1986
TL;DR: Wave digital filters (WDFs) as discussed by the authors are modeled after classical filters, preferably in lattice or ladder configurations or generalizations thereof, and have very good properties concerning coefficient accuracy requirements, dynamic range, and especially all aspects of stability under finite-arithmetic conditions.
Abstract: Wave digital filters (WDFs) are modeled after classical filters, preferably in lattice or ladder configurations or generalizations thereof. They have very good properties concerning coefficient accuracy requirements, dynamic range, and especially all aspects of stability under finite-arithmetic conditions. A detailed review of WDF theory is given. For this several goals are set: to offer an introduction for those not familiar with the subject, to stress practical aspects in order to serve as a guide for those wanting to design or apply WDFs, and to give insight into the broad range of aspects of WDF theory and its many relationships with other areas, especially in the signal-processing field. Correspondingly, mathematical analyses are included only if necessary for gaining essential insight, while for all details of more special nature reference is made to existing literature.
01 Jan 1979
TL;DR: In this article, blind separation of nonstationary sources in the underdetermined case, where there are more sources than sensors, is studied under the assumption that the original sources are disjoint in the time-frequency domain.
Abstract: We examine the problem of blind separation of nonstationary sources in the underdetermined case, where there are more sources than sensors. Since time-frequency (TF) signal processing provides effective tools for dealing with nonstationary signals, we propose a new separation method that is based on time-frequency distributions (TFDs). The underlying assumption is that the original sources are disjoint in the time-frequency (TF) domain. The successful method recovers the sources by performing the following four main procedures. First, the spatial time-frequency distribution (STFD) matrices are computed from the observed mixtures. Next, the auto-source TF points are separated from cross-source TF points thanks to the special structure of these mixture STFD matrices. Then, the vectors that correspond to the selected auto-source points are clustered into different classes according to the spatial directions which differ among different sources; each class, now containing the auto-source points of only one source, gives an estimation of the TFD of this source. Finally, the source waveforms are recovered from their TFD estimates using TF synthesis. Simulated experiments indicate the success of the proposed algorithm in different scenarios. We also contribute with two other modified versions of the algorithm to better deal with auto-source point selection.
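A rough DUET-style sketch of the TF-disjointness idea for two sensors is given below. The energy-based point selection and the crude 1-D k-means over amplitude ratios are simplifications standing in for the paper's STFD-matrix auto-source selection and spatial-direction clustering:

```python
import numpy as np

def tf_disjoint_separate(x1, x2, n_sources, frame=64):
    """Separate n_sources from two sensor signals, assuming each TF point
    is occupied by at most one source, so the mixing ratio |X2|/|X1|
    at that point identifies the active source."""
    X1 = np.fft.rfft(x1.reshape(-1, frame), axis=1)
    X2 = np.fft.rfft(x2.reshape(-1, frame), axis=1)
    ratio = np.abs(X2) / (np.abs(X1) + 1e-12)
    # Keep only high-energy TF points (stand-in for auto-source selection).
    sig = np.abs(X1) > 0.01 * np.abs(X1).max()
    r = ratio[sig]
    centers = np.quantile(r, np.linspace(0.1, 0.9, n_sources))
    for _ in range(10):  # crude 1-D k-means on the mixing ratio
        lab = np.argmin(np.abs(r[:, None] - centers), axis=1)
        for k in range(n_sources):
            if np.any(lab == k):
                centers[k] = r[lab == k].mean()
    labels = np.argmin(np.abs(ratio[..., None] - centers), axis=-1)
    # Binary-mask each cluster and resynthesize from sensor 1.
    return [np.fft.irfft(np.where(labels == k, X1, 0), n=frame, axis=1).ravel()
            for k in range(n_sources)]
```

Even this toy version shows why underdetermined separation is possible at all under the disjointness assumption: each TF point needs only one label, so the number of sources is limited by clustering resolution, not by the number of sensors.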
TL;DR: In this article, a modification of the least squares Prony's method for power quality analysis in terms of estimation of harmonics and interharmonics in an electric power signal is presented.
Abstract: This paper presents a new modification of the least-squares Prony's method for power-quality analysis, in terms of estimation of harmonics and interharmonics in an electric power signal. The so-called reduced Prony's method can be competitive, in some specific cases, with the Fourier transform method and the classical LS Prony's method. The modification consists of a specific selection of a constant frequency vector in a Fourier-like manner, leading to a remarkable reduction of the computational burden and enabling online real-time computation. In addition, the sampling frequency and the analysis window length can be selected to ensure the numerical stability of the new algorithm.
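With the frequency vector fixed in advance, the nonlinear frequency-estimation step of classical Prony analysis disappears and only a linear least-squares solve remains. A sketch of that reduced step (the function name and frequency grid are illustrative, not the paper's exact formulation):

```python
import numpy as np

def reduced_prony(signal, fs, freqs):
    """Least-squares fit of sinusoids at the fixed frequencies `freqs`
    (Hz) to `signal` sampled at rate `fs`. Returns (amplitudes, phases),
    modeling the signal as sum_i a_i * cos(2*pi*f_i*t + phi_i)."""
    t = np.arange(len(signal)) / fs
    # Design matrix: [cos(2*pi*f*t) | sin(2*pi*f*t)] for each fixed f.
    A = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),
                   np.sin(2 * np.pi * np.outer(t, freqs))])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    c, s = coef[:len(freqs)], coef[len(freqs):]
    # a*cos(wt + phi) = a*cos(phi)*cos(wt) - a*sin(phi)*sin(wt).
    return np.hypot(c, s), np.arctan2(-s, c)
```

Unlike the DFT, the fixed grid here need not be tied to the window length, which is why interharmonics off the Fourier bins can be modeled directly; the trade-off is that a poorly conditioned grid requires the careful window/sampling choices the abstract mentions.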