
Showing papers in "IEEE Transactions on Image Processing in 1993"


Journal ArticleDOI
Luc Vincent
TL;DR: An algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.
Abstract: Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications is discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.
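As a rough illustration of the operation being computed (this is the simple iterative reconstruction the paper reviews as a baseline, not Vincent's fast queue-based hybrid algorithm), here is a minimal sketch of gray-scale reconstruction by iterated geodesic dilation under 8-connectivity:

```python
def reconstruct(marker, mask):
    # Gray-scale reconstruction by dilation: repeatedly dilate the marker
    # (3x3 neighborhood) and clamp it under the mask until nothing changes.
    # Assumes marker[y][x] <= mask[y][x] everywhere.
    h, w = len(marker), len(marker[0])
    out = [row[:] for row in marker]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                best = max(out[ny][nx]
                           for ny in range(max(0, y - 1), min(h, y + 2))
                           for nx in range(max(0, x - 1), min(w, x + 2)))
                v = min(best, mask[y][x])
                if v != out[y][x]:
                    out[y][x] = v
                    changed = True
    return out
```

The marker value spreads along connected regions of the mask but can never exceed it, which is what makes reconstruction useful for suppressing structures not touched by the marker.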

2,064 citations


Journal ArticleDOI
TL;DR: A progressive texture classification algorithm which is not only computationally attractive but also has excellent performance is developed and is compared with that of several other methods.
Abstract: A multiresolution approach based on a modified wavelet transform called the tree-structured wavelet transform or wavelet packets is proposed. The development of this transform is motivated by the observation that a large class of natural textures can be modeled as quasi-periodic signals whose dominant frequencies are located in the middle frequency channels. With the transform, it is possible to zoom into any desired frequency channels for further decomposition. In contrast, the conventional pyramid-structured wavelet transform performs further decomposition in low-frequency channels. A progressive texture classification algorithm which is not only computationally attractive but also has excellent performance is developed. The performance of the present method is compared with that of several other methods.
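The idea of following the dominant-energy channel, rather than always the low band as the pyramid transform does, can be sketched with a toy 1-D Haar decomposition (the Haar filters and function names here are illustrative, not the paper's filters):

```python
def haar_split(x):
    # One analysis level: average (low) and difference (high) channels.
    lo = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return lo, hi

def energy(x):
    return sum(v * v for v in x)

def tree_decompose(x, depth):
    # Tree-structured decomposition: at each level, recurse into the
    # channel carrying more energy instead of always taking the low band.
    path = []
    for _ in range(depth):
        lo, hi = haar_split(x)
        if energy(hi) > energy(lo):
            x, label = hi, 'H'
        else:
            x, label = lo, 'L'
        path.append(label)
    return path, x
```

For a quasi-periodic texture-like signal whose energy sits in a high band, the pyramid transform's low channel is nearly empty, while the tree follows the band that actually carries the signal.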

1,507 citations


Journal ArticleDOI
TL;DR: In this article, a generalized Gaussian Markov random field (GGMRF) is proposed for image reconstruction in low-dosage transmission tomography, which satisfies several desirable analytical and computational properties for MAP estimation, including continuous dependence of the estimate on the data and invariance of the character of solutions to scaling of data.
Abstract: The authors present a Markov random field model which allows realistic edge modeling while providing stable maximum a posteriori (MAP) solutions. The model, referred to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distribution used in robust detection and estimation. The model satisfies several desirable analytical and computational properties for MAP estimation, including continuous dependence of the estimate on the data, invariance of the character of solutions to scaling of data, and a solution which lies at the unique global minimum of the a posteriori log-likelihood function. The GGMRF is demonstrated to be useful for image reconstruction in low-dosage transmission tomography.
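The flavor of MAP estimation under such a prior can be sketched on a 1-D signal with a grid search per site. This is illustrative only: the paper works with images and principled optimization, and the weight `lam`, the candidate grid, and the ICM-style sweep below are assumptions of this sketch.

```python
def ggmrf_penalty(x, p):
    # GGMRF prior energy on a 1-D chain: sum |x[s] - x[r]|^p over neighbor
    # pairs. p = 2 is the Gaussian MRF; p near 1 penalizes edges less.
    return sum(abs(a - b) ** p for a, b in zip(x, x[1:]))

def map_step(x, y, sigma2, p, lam, grid):
    # One coordinate-descent sweep: at each site pick the candidate value
    # minimizing data misfit plus the GGMRF penalty to its neighbors.
    x = list(x)
    for s in range(len(x)):
        def cost(v):
            c = (y[s] - v) ** 2 / (2 * sigma2)
            if s > 0:
                c += lam * abs(v - x[s - 1]) ** p
            if s < len(x) - 1:
                c += lam * abs(v - x[s + 1]) ** p
            return c
        x[s] = min(grid, key=cost)
    return x
```

With p close to 1 and a clean step in the data, the sweep leaves the step intact instead of smoothing across it, which is the edge-preserving behavior the model is built for.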

978 citations


Journal ArticleDOI
TL;DR: A fast rate-distortion (R-D) optimal scheme for coding adaptive trees whose individual nodes spawn descendants forming a disjoint and complete basis cover for the space spanned by their parent nodes is presented.
Abstract: A fast rate-distortion (R-D) optimal scheme for coding adaptive trees whose individual nodes spawn descendants forming a disjoint and complete basis cover for the space spanned by their parent nodes is presented. The scheme guarantees operation on the convex hull of the operational R-D curve and uses a fast dynamic programming pruning algorithm to markedly reduce computational complexity. Applications for this coding technique include R. Coifman et al.'s (Yale Univ., 1990) generalized multiresolution wavelet packet decomposition, iterative subband coders, and quadtree structures. Applications to image processing involving wavelet packets as well as discrete cosine transform (DCT) quadtrees are presented.

798 citations


Journal ArticleDOI
TL;DR: It is shown that VDF can achieve very good filtering results for various noise source models, and that VDF provides a link between single-channel image processing and multichannel image processing, where both the direction and the magnitude of the image vectors play an important role in the resulting image.
Abstract: Vector directional filters (VDF) for multichannel image processing are introduced and studied. These filters separate the processing of vector-valued signals into directional processing and magnitude processing. This provides a link between single-channel image processing, where only magnitude processing is essentially performed, and multichannel image processing, where both the direction and the magnitude of the image vectors play an important role in the resulting (processed) image. VDFs find applications in satellite image data processing, color image processing, and multispectral biomedical image processing. Results are presented here for the case of color images, as an important example of multichannel image processing. It is shown that VDF can achieve very good filtering results for various noise source models.
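The directional half of this split can be illustrated by the basic vector directional filter, which outputs the window sample whose direction minimizes the summed angle to all other samples. This is a minimal sketch under assumed conventions (zero vectors are not handled, and the paper studies a broader family):

```python
import math

def bvdf(window):
    # Basic vector directional filter: return the input vector whose
    # direction is closest, in summed angular distance, to all others.
    def angle(a, b):
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        c = sum(x * y for x, y in zip(a, b)) / (na * nb)
        return math.acos(max(-1.0, min(1.0, c)))
    return min(window, key=lambda v: sum(angle(v, w) for w in window))
```

Because the output is always one of the input vectors, chromaticity outliers (vectors pointing in an atypical color direction) are rejected without blurring the remaining colors.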

363 citations


Journal ArticleDOI
TL;DR: An approach to obtaining high-resolution image reconstruction from low-resolution, blurred, and noisy multiple-input frames is presented and a recursive-least-squares approach with iterative regularization is developed in the discrete Fourier transform (DFT) domain.
Abstract: An approach to obtaining high-resolution image reconstruction from low-resolution, blurred, and noisy multiple-input frames is presented. A recursive-least-squares approach with iterative regularization is developed in the discrete Fourier transform (DFT) domain. When the input frames are processed recursively, the reconstruction does not converge in general due to the measurement noise and ill-conditioned nature of the deblurring. Through the iterative update of the regularization function and the proper choice of the regularization parameter, good high-resolution reconstructions of low-resolution, blurred, and noisy input frames are obtained. The proposed algorithm minimizes the computational requirements and provides a parallel computation structure since the reconstruction is done independently for each DFT element. Computer simulations demonstrate the performance of the algorithm.

270 citations


Journal ArticleDOI
TL;DR: A simultaneous version of the multiplicative algebraic reconstruction technique (MART) algorithm, called SMART, is introduced, and its convergence is proved.
Abstract: The related problems of minimizing the functionals F(x)= alpha KL(y,Px)+(1- alpha )KL(p,x) and G(x)= alpha KL(Px,y)+(1- alpha )KL(x,p), respectively, over the set of vectors x >= 0 are considered. KL(a, b) is the cross-entropy (or Kullback-Leibler) distance between two nonnegative vectors a and b. Iterative algorithms for minimizing both functionals using the method of alternating projections are derived. A simultaneous version of the multiplicative algebraic reconstruction technique (MART) algorithm, called SMART, is introduced, and its convergence is proved.
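A minimal sketch of the SMART multiplicative update, assuming for simplicity that the columns of P are normalized to sum to 1 (which can always be arranged by rescaling, and makes the per-column scaling factor drop out):

```python
import math

def smart(P, y, x, iters):
    # Simultaneous MART: multiplicative update driving Px toward y by
    # minimizing the KL distance KL(Px, y). All entries of y and x are
    # assumed positive; columns of P are assumed to sum to 1.
    m, n = len(P), len(P[0])
    for _ in range(iters):
        Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(m)]
        x = [x[j] * math.exp(sum(P[i][j] * math.log(y[i] / Px[i])
                                 for i in range(m)))
             for j in range(n)]
    return x
```

Unlike MART, which cycles through the equations one at a time, this simultaneous form uses all measurements in each update, which is what the paper's convergence proof addresses.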

232 citations


Journal ArticleDOI
TL;DR: It is shown that such a detector has better operating characteristics than a conventional matched filter in the presence of correlated clutter, and for very low signal-to-background ratios, TDLMS-based detection systems show a considerable reduction in the number of false alarms.
Abstract: This work studies the performance of two-dimensional least mean square (TDLMS) adaptive filters as prewhitening filters for the detection of small objects in image data. The object of interest is assumed to have a very small spatial spread and is obscured by correlated clutter of much larger spatial extent. The correlated clutter is predicted and subtracted from the input signal, leaving components of the spatially small signal in the residual output. The receiver operating characteristics of a detection system augmented by a TDLMS prewhitening filter are plotted using Monte-Carlo techniques. It is shown that such a detector has better operating characteristics than a conventional matched filter in the presence of correlated clutter. For very low signal-to-background ratios, TDLMS-based detection systems show a considerable reduction in the number of false alarms. The output energy in both the residual and prediction channels of such filters is shown to be dependent on the correlation length of the various components in the input signal. False alarm reduction and detection gains obtained by using this detection scheme on thermal infrared sensor data with known object positions are presented.

180 citations


Journal ArticleDOI
TL;DR: An approach to designing multidimensional linear-phase FIR diamond subband filters having the perfect reconstruction property is presented, based on a transformation of variables technique and is equivalent to the generalized McClellan transformation.
Abstract: An approach to designing multidimensional linear-phase FIR diamond subband filters having the perfect reconstruction property is presented. It is based on a transformation of variables technique and is equivalent to the generalized McClellan transformation. Methods for designing a whole class of transformations are given. The approach consists of two parts: the design of the transformation and the design of the 1-D filters. The use of Lagrange halfband filters to design the 1-D filters is discussed. The modification of a particular Lagrange halfband filter which gives a pair of simple 1-D filters that are almost similar to each other in their frequency characteristics but still form a perfect reconstruction pair is presented. The design technique is extended to other types of two-channel sampling lattices and subband shapes, in particular, the parallelogram and the diagonally quadrant subband cases. Several numerical design examples are presented to illustrate the flexibility of the design method.

177 citations


Journal ArticleDOI
TL;DR: A method is proposed whereby a color image is treated as a vector field and the edge information carried directly by the vectors is exploited and the efficiency of the detector is demonstrated.
Abstract: A method is proposed whereby a color image is treated as a vector field and the edge information carried directly by the vectors is exploited. A class of color edge detectors is defined as the minimum over the magnitudes of linear combinations of the sorted vector samples. From this class, a specific edge detector is obtained and its performance characteristics studied. Results of a quantitative evaluation and comparison to other color edge detectors, using Pratt's (1991) figure of merit and an artificially generated test image, are presented. Edge detection results obtained for real color images demonstrate the efficiency of the detector.
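One simple member of this family, shown here purely for illustration (the paper's detector takes a minimum over several such differences, which improves noise robustness), is the vector range built on R-ordering of the window samples:

```python
def vector_range(window):
    # Vector order-statistics edge strength: rank the color vectors by
    # their aggregate distance to all others (R-ordering), then return
    # the distance between the highest- and lowest-ranked vectors.
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    ranked = sorted(window, key=lambda v: sum(dist(v, w) for w in window))
    return dist(ranked[-1], ranked[0])
```

On a uniform window the response is zero; a window straddling a color edge (or containing an outlier) produces a large response, without ever reducing the color vectors to scalars first.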

168 citations


Journal ArticleDOI
TL;DR: A Markov random field model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described, and results show that this approach provides good blur estimates and restored images.
Abstract: A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are more visually pleasing.

Journal ArticleDOI
TL;DR: In this article, a method is presented for estimating the point spread function (PSF) for blur identification, often a necessary first step in the restoration of real images. The PSF estimate is chosen from a collection of candidate PSFs, which may be constructed using a parametric model or from experimental measurements.
Abstract: A method is presented for estimating the point spread function (PSF) for blur identification, often a necessary first step in the restoration of real images. The PSF estimate is chosen from a collection of candidate PSFs, which may be constructed using a parametric model or from experimental measurements. The PSF estimate is selected to provide the best match between the restoration residual power spectrum and its expected value, derived under the assumption that the candidate PSF is equal to the true PSF. Several distance measures were studied to determine which one provides the best match. The a priori knowledge required is the noise variance and the original image spectrum. The estimation of these statistics is discussed, and the sensitivity of the method to the estimates is examined analytically and by simulations. The method successfully identified blurs in both synthetically and optically blurred images.

Journal ArticleDOI
TL;DR: A frequency-domain algorithm for motion estimation based on overlapped transforms of the image data is developed as an alternative to block matching methods, and gives comparable or smaller prediction errors than standard models using exhaustive search block matching.
Abstract: A frequency-domain algorithm for motion estimation based on overlapped transforms of the image data is developed as an alternative to block matching methods. The complex lapped transform (CLT) is first defined by extending the lapped orthogonal transform (LOT) to have complex basis functions. The CLT basis functions decay smoothly to zero at their end points, and overlap by 2:1 when a data sequence is transformed. A method for estimating cross-correlation functions in the CLT domain is developed. This forms the basis of a motion estimation algorithm that calculates vectors for overlapping, windowed regions of data. The overlapping data window used has no block edge discontinuities and results in smoother motion fields. Furthermore, when motion compensation is performed using similar overlapping regions, the algorithm gives comparable or smaller prediction errors than standard models using exhaustive search block matching, and computational load is lower for larger displacement ranges and block sizes.

Journal ArticleDOI
TL;DR: A new directed-search binary-splitting method which reduces the complexity of the LBG algorithm, and a new initial codebook selection method which can obtain a good initial codebook is presented.
Abstract: A review and a performance comparison of several often-used vector quantization (VQ) codebook generation algorithms are presented. The codebook generation algorithms discussed include the Linde-Buzo-Gray (LBG) binary-splitting algorithm, the pairwise nearest-neighbor algorithm, the simulated annealing algorithm, and the fuzzy c-means clustering analysis algorithm. A new directed-search binary-splitting method which reduces the complexity of the LBG algorithm is presented. Also, a new initial codebook selection method which can obtain a good initial codebook is presented. By using this initial codebook selection algorithm, the overall LBG codebook generation time can be reduced by a factor of 1.5-2.
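A minimal sketch of the plain LBG binary-splitting baseline the paper starts from (the perturbation `eps` and iteration count are arbitrary choices here; the paper's directed-search variant additionally restricts which codewords are searched):

```python
def lbg(data, splits, eps=0.01, iters=20):
    # LBG codebook generation: start from the global centroid, double the
    # codebook by perturbing each codeword by +/- eps, then refine with
    # nearest-neighbour assignment / centroid (k-means) iterations.
    def centroid(vecs):
        return tuple(sum(c) / len(vecs) for c in zip(*vecs))

    def nearest(v, book):
        return min(range(len(book)),
                   key=lambda k: sum((a - b) ** 2 for a, b in zip(v, book[k])))

    book = [centroid(data)]
    for _ in range(splits):
        book = [tuple(c + s for c in cw) for cw in book for s in (eps, -eps)]
        for _ in range(iters):
            cells = [[] for _ in book]
            for v in data:
                cells[nearest(v, book)].append(v)
            book = [centroid(cell) if cell else cw
                    for cell, cw in zip(cells, book)]
    return book
```

Each split doubles the codebook size, so `splits` levels yield 2^splits codewords; the full nearest-neighbour search inside the refinement loop is exactly the cost the directed-search method reduces.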

Journal ArticleDOI
TL;DR: The generalized forward (FRT) and inverse (IFRT) algorithms proposed are fast, eliminate interpolation calculations, and convert directly between a raster scan grid and a rectangular/polar grid in one step.
Abstract: An inversion scheme for reconstruction of images from projections based on the slope-intercept form of the discrete Radon transform is presented. A seminal algorithm for the forward and the inverse transforms proposed by G. Beylkin (1987) demonstrated poor dispersion characteristics for steep slopes and could not invert transforms based on nonlinear slope variations. By formulating a discrete computation of the continuous Radon transform, the authors explicitly derive fast generalized inversion methods that overcome the original shortcomings. The generalized forward (FRT) and inverse (IFRT) algorithms proposed are fast, eliminate interpolation calculations, and convert directly between a raster scan grid and a rectangular/polar grid in one step.

Journal ArticleDOI
TL;DR: Two classes of algorithms for modeling camera motion in video sequences are proposed and the rate distortion characteristics of the algorithms are compared with that of the block matching algorithm and show that they provide performance characteristics similar to those of the latter with reduced computational complexity.
Abstract: Two classes of algorithms for modeling camera motion in video sequences are proposed. The first class can be applied when there is no camera translation and the motion of the camera can be adequately modeled by zoom, pan, and rotation parameters. The second class is more general in that it can be applied when the camera is undergoing translation as well as rotation, zoom, and pan. This class uses seven parameters to describe the motion of the camera and requires the depth map to be known at the receiver. The salient feature of both algorithms is that the camera motion is estimated using binary matching of the edges in successive frames. The rate distortion characteristics of the algorithms are compared with that of the block matching algorithm and show that the former provide performance characteristics similar to those of the latter with reduced computational complexity.

Journal ArticleDOI
TL;DR: The authors provide a general framework for performing processing of stationary multichannel (MC) signals that is linear shift-invariant within channel and shift varying across channels.
Abstract: The authors provide a general framework for performing processing of stationary multichannel (MC) signals that is linear shift-invariant within channel and shift varying across channels. Emphasis is given to the restoration of degraded signals. It is shown that, by utilizing the special structure of semiblock circulant and block diagonal matrices, MC signal processing can be easily carried out in the frequency domain. The generalization of many frequency-domain single-channel (SC) signal processing techniques to the MC case is presented. It is shown that in MC signal processing each frequency component of a signal and system is represented, respectively, by a small vector and a matrix (of size equal to the number of channels), while in SC signal processing each frequency component in both cases is a scalar.

Journal ArticleDOI
TL;DR: An approach to the lossy compression of color images with limited palette that does not require color quantization of the decoded image is presented, which significantly reduces the decoder computational complexity.
Abstract: An approach to the lossy compression of color images with limited palette that does not require color quantization of the decoded image is presented. The algorithm is particularly suited for coding images using an image-dependent palette. The technique restricts the pixels of the decoded image to take values only in the original palette. Thus, the decoded image can be readily displayed without having to be quantized. For comparable quality and bit rates, the technique significantly reduces the decoder computational complexity.

Journal ArticleDOI
TL;DR: The proposed signal processing method formulates the multi-line-fitting problem in a special parameter estimation framework such that a signal structure similar to the sensor array processing signal representation is obtained and can be exploited to produce super-resolution estimates for line parameters.
Abstract: A signal processing method is developed for solving the problem of fitting multiple lines in a two-dimensional image. It formulates the multi-line-fitting problem in a special parameter estimation framework such that a signal structure similar to the sensor array processing signal representation is obtained. Then the recently developed algorithms in that formalism can be exploited to produce super-resolution estimates for line parameters. The number of lines may also be estimated in this framework. The signal representation used can be generalized to handle problems of line fitting and of straight edge detection. Details of the proposed algorithm and several experimental results are presented. The method exhibits considerable computational speed superiority over existing single- and multiple-line-fitting algorithms such as the Hough transform method. Potential applications include road tracking in robotic vision, mask wafer alignment in semiconductor manufacturing, aerial image analysis, text alignment in document analysis, and particle tracking in bubble chambers.

Journal ArticleDOI
TL;DR: A self-governing rate buffer control strategy that can automatically steer the coder to a pseudoconstant bit rate is considered and constrains quantizer adjustments so that a smoother quality transition can be attained.
Abstract: Video coding is a key to successful visual communications. An interframe video coding algorithm using hybrid motion-compensated prediction and interpolation is considered for coding studio quality video at a bit rate of over 5 Mb/s. Interframe coding without a buffer control strategy usually results in variable bit rates. Although packet networks may be capable of handling variable bit rates, in some applications, a constant bit rate is more desirable either for a simpler network configuration or for channels with fixed bandwidth. A self-governing rate buffer control strategy that can automatically steer the coder to a pseudoconstant bit rate is considered. This self-governing rate buffer control strategy employs more progressive quantization parameters, and constrains quantizer adjustments so that a smoother quality transition can be attained. Simulation results illustrate the performance of the pseudoconstant bit rate coder with this buffer control strategy.

Journal ArticleDOI
TL;DR: A new variable-rate side-match finite-state vector quantization with a block classifier (CSMVQ) algorithm is described, and the improvement over VQ can be up to 3 dB at nearly the same bit rate.
Abstract: Future B-ISDN (broadband integrated services digital network) users will be able to send various kinds of information, such as voice, data, and image, over the same network and send information only when necessary. It has been recognized that variable-rate encoding techniques are more suitable than fixed-rate techniques for encoding images in a B-ISDN environment. A new variable-rate side-match finite-state vector quantization with a block classifier (CSMVQ) algorithm is described. In an ordinary fixed-rate SMVQ, the size of the state codebook is fixed. In the CSMVQ algorithm presented, the size of the state codebook is changed according to the characteristics of the current vector which can be predicted by a block classifier. In experiments, the improvement over SMVQ was up to 1.761 dB at a lower bit rate. Moreover, the improvement over VQ can be up to 3 dB at nearly the same bit rate.

Journal ArticleDOI
TL;DR: Methods for projecting a point onto the intersection of closed and convex sets in a Hilbert space are introduced and applied to signal recovery by best feasible approximation of a reference signal.
Abstract: The objective of set theoretical signal recovery is to find a feasible signal in the form of a point in the intersection S of sets modeling the information available about the problem. For problems in which the true signal is known to lie near a reference signal r, the solution should not be any feasible point but one which best approximates r, i.e., a projection of r onto S. Such a solution cannot be obtained by the feasibility algorithms currently in use, e.g., the method of projections onto convex sets (POCS) and its offspring. Methods for projecting a point onto the intersection of closed and convex sets in a Hilbert space are introduced and applied to signal recovery by best feasible approximation of a reference signal. These algorithms are closely related to the above projection methods, to which they add little computational complexity.
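A classical algorithm in this family is Dykstra's method, sketched here for two sets (the paper's algorithms are related but not identical to this one; the two half-plane projections in the usage below are a toy example):

```python
def dykstra(r, proj_a, proj_b, iters=100):
    # Dykstra's algorithm: converges to the projection of r onto the
    # intersection of two closed convex sets. Plain alternating POCS only
    # guarantees some feasible point, not the nearest one to r; the
    # correction terms p and q are what restore the best-approximation
    # property.
    x = list(r)
    p = [0.0] * len(r)
    q = [0.0] * len(r)
    for _ in range(iters):
        y = proj_a([xi + pi for xi, pi in zip(x, p)])
        p = [xi + pi - yi for xi, pi, yi in zip(x, p, y)]
        x = proj_b([yi + qi for yi, qi in zip(y, q)])
        q = [yi + qi - xi for yi, qi, xi in zip(y, q, x)]
    return x
```

For example, projecting the origin onto the intersection of the half-planes x1 >= 1 and x2 >= 1 yields (1, 1), the point of the intersection closest to the reference.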

Journal ArticleDOI
TL;DR: An approach to image restoration that recovers image detail using a constrained optimization theoretic approach is introduced and it is argued that a direct measure of image sparseness is the appropriate optimization criterion for deconvolving the image blurring function.
Abstract: The problem of removing blur from, or sharpening, astronomical star field intensity images is discussed. An approach to image restoration that recovers image detail using a constrained optimization theoretic approach is introduced. Ideal star images may be modeled as a few point sources in a uniform background. It is argued that a direct measure of image sparseness is the appropriate optimization criterion for deconvolving the image blurring function. A sparseness criterion based on the l_p norm is presented, and candidate algorithms for solving the ensuing nonlinear constrained optimization problem are presented and reviewed. Synthetic and actual star image reconstruction examples are presented to demonstrate the method's superior performance as compared with several image deconvolution methods.
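A toy version of such a sparseness measure (the normalization and the value of p are illustrative; the paper defines its criterion precisely and optimizes it under constraints):

```python
def lp_sparseness(x, p=0.5):
    # l_p-style concentration measure with 0 < p < 1: smallest when the
    # intensity sits in a few pixels (an ideal star field), larger when
    # the same total intensity is smeared out by blur.
    total = sum(abs(v) for v in x)
    return sum((abs(v) / total) ** p for v in x)
```

Minimizing such a measure over candidate deconvolved images therefore favors solutions made of a few point sources, matching the star-field model.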

Journal ArticleDOI
TL;DR: It is demonstrated through simulations that a synthetic-aperture signal processing technique called image addition can be used to reduce the sidelobes associated with the square boundary array, thereby improving the image quality.
Abstract: The effectiveness of a square boundary array in finite-range, pulse-echo imaging is investigated. The images produced by such an array are quite poor when no additional signal processing is used. It is demonstrated through simulations that a synthetic-aperture signal processing technique called image addition can be used to reduce the sidelobes associated with the square boundary array, thereby improving the image quality. Image addition was originally proposed for narrowband imaging of far-field scenes, but it is also useful for finite-range, pulse-echo imaging. The conclusions are expected to apply to other, nonsquare boundary array geometries.

Journal ArticleDOI
C. J. Hughes, Mohammed Ghanbari, D.E. Pearson, V. Seferidis, J. Xiong
TL;DR: Measurements of subjective picture impairment as a function of network loading in a simulated ATM network are reported, indicating that cells tend to be discarded in bursts, the frequency and severity of which can be related to the loading by a threshold model.
Abstract: Measurements of subjective picture impairment as a function of network loading in a simulated ATM network are reported. The simulation indicated that cells tend to be discarded in bursts, the frequency and severity of which can be related to the loading by a threshold model. The effect of the discards on broadcast-style video, coded using a single-layer H.261-type method, was found to be a function of scene content and movement at the instant of occurrence. If the visibility of cell discards is maintained at or below threshold in worst-case scenes, the study indicated that network loadings around 55% for a multiplex of 16 video sources and around 70% for a multiplex of 48 video sources are achievable.

Journal ArticleDOI
TL;DR: This work has shown that the geometrical PSF can be used in place of the physical PSF without significant loss in restoration quality when the SNR is less than 30 dB.
Abstract: Point spread function (PSF) models derived from physical optics provide a more accurate representation of real blurs than simpler models based on geometrical optics. However, the physical PSF models do not always result in a significantly better restoration, due to the coarse sampling of the recording device and insufficiently high signal-to-noise ratio (SNR) levels. Low recording resolutions result in aliasing errors in the PSF and suboptimal restorations. A high-resolution representation of the PSF where aliasing errors are minimized is used to obtain improved restorations. The SNR is the parameter which ultimately limits the restoration quality and determines the need for an accurate PSF model. As a rule of thumb, the geometrical PSF can be used in place of the physical PSF without significant loss in restoration quality when the SNR is less than 30 dB.

Journal ArticleDOI
TL;DR: A fast algorithm is suggested to compute the inverse of the window function matrix, enabling discrete signals to be transformed into generalized nonorthogonal Gabor representations efficiently.
Abstract: Properties of the Gabor transformation used for image representation are discussed. The properties can be expressed in matrix notation, and the complete Gabor coefficients can be found by multiplying the inverse of the Gabor (1946) matrix and the signal vector. The Gabor matrix can be decomposed into the product of a sparse constant complex matrix and another sparse matrix that depends only on the window function. A fast algorithm is suggested to compute the inverse of the window function matrix, enabling discrete signals to be transformed into generalized nonorthogonal Gabor representations efficiently. A comparison is made between this method and the analytical method. The relation between the window function matrix and the biorthogonal functions is demonstrated. A numerical computation method for the biorthogonal functions is proposed.

Journal ArticleDOI
TL;DR: The application of adaptive (i.e., data-dependent) mathematical morphology techniques to range imagery using structuring elements that automatically adjust to the gray-scale values in a range image in order to deal with features of known physical sizes is discussed.
Abstract: The application of adaptive (i.e., data-dependent) mathematical morphology techniques to range imagery, i.e., the use of structuring elements (SEs) that automatically adjust to the gray-scale values in a range image in order to deal with features of known physical sizes, is discussed. The technique is applicable to any type of image for which the distance to a scene element is available for each pixel.

Journal ArticleDOI
TL;DR: It is shown that SAR/ISAR imaging of a moving target can be converted into imaging the target in a stationary squint-mode SAR problem where the parameters of the squint-mode geometry depend on the target's velocity.
Abstract: A synthetic aperture radar/inverse synthetic aperture radar (SAR/ISAR) coherent system model and inversion to image a target moving with an unknown constant velocity in a stationary background are presented. The approach is based on a recently developed system modelling and inversion principle for SAR/ISAR imaging that utilizes the spatial Fourier decomposition of SAR data in the synthetic aperture domain to convert the SAR system model's nonlinear phase functions into linear phase functions suitable for a computationally manageable inversion. It is shown that SAR/ISAR imaging of a moving target can be converted into imaging the target in a stationary squint-mode SAR problem where the parameters of the squint-mode geometry depend on the target's velocity. A method for estimating the moving target's velocity that utilizes a spatial Doppler analysis of the SAR data within overlapping subapertures is presented. The spatial Doppler technique does not require the radar signal to be narrowband, so the reconstructed image's resolution is not sacrificed to improve the target's velocity estimator.

Journal ArticleDOI
TL;DR: An algorithm for processing closely spaced edges and accurately restoring their locations is presented and it is revealed that this approach outperforms the current commercial bar code readers.
Abstract: An algorithm for processing closely spaced edges and accurately restoring their locations is presented. The convolution distortion model is based on interacting edges. The restoration algorithm takes three edges as input and forces the effects of two of them to cancel each other. Thus the third edge appears to be isolated and is located using a traditional edge detector. Experiments on bar code waveforms reveal that this approach outperforms the current commercial bar code readers.