
Showing papers on "Signature recognition published in 1992"


Proceedings ArticleDOI
23 Mar 1992
TL;DR: The authors extend the dynamic time warping algorithm, widely used in automatic speech recognition (ASR), to a dynamic plane warping (DPW) algorithm, for application in the field of optical character recognition (OCR) or similar applications.
Abstract: The authors extend the dynamic time warping (DTW) algorithm, widely used in automatic speech recognition (ASR), to a dynamic plane warping (DPW) algorithm, for application in the field of optical character recognition (OCR) or similar applications. Although direct application of the optimality principle reduced the computational complexity somewhat, the DPW (or image alignment) problem is exponential in the dimensions of the image. It is shown that by applying constraints to the image alignment problem, e.g., limiting the class of possible distortions, one can reduce the computational complexity dramatically, and find the optimal solution to the constrained problem in linear time. A statistical model, the planar hidden Markov model (PHMM), describing statistical properties of images is proposed. The PHMM approach was evaluated using a set of isolated handwritten digits. An overall digit recognition accuracy of 95% was achieved. It is expected that the advantage of this approach will be even more significant for harder tasks, such as cursive-writing recognition and spotting.
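For context, the 1-D DTW algorithm that the paper generalizes to two dimensions can be sketched in a few lines. This is the textbook dynamic-programming recurrence, not the authors' DPW extension:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic 1-D dynamic time warping: cost of optimally aligning
    sequence x to sequence y via dynamic programming."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # optimality principle: extend the cheapest partial alignment
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

The paper's point is that the naive 2-D analogue of this recurrence is exponential in the image dimensions, and only becomes linear-time once the class of admissible distortions is constrained.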

162 citations


Proceedings ArticleDOI
Philip Lockwood1, J. Boudy1, M. Blanchet1
23 Mar 1992
TL;DR: The authors extend their previous work where a robust hidden Markov model (HMM) training/recognition framework is proposed and several new aspects are introduced: use of enhanced nonlinear spectral subtraction (NSS) schemes, introduction of root-MFCC parameters, and training of HMMs by a dynamic inference scheme (DIHMM).
Abstract: The authors address the problem of speaker-dependent discrete utterance recognition in noise. Special reference is made to the mismatch effects due to the fact that training and testing are carried out in different environments. The authors extend their previous work (Lockwood and Boudy, 1991) where a robust hidden Markov model (HMM) training/recognition framework is proposed. Several new aspects are introduced: use of enhanced nonlinear spectral subtraction (NSS) schemes, introduction of root-MFCC parameters, use of dynamic features, and training of HMMs by a dynamic inference scheme (DIHMM). These enhancements are discussed from tests performed on bandlimited signals (200-3000 Hz). The authors show that these various optimizations raise recognition performance from 20% to over 99%. A 93% recognition rate is already achievable on raw data using a weighted modified projection and a root-MFCC dynamic representation.
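A basic power spectral subtraction step, the family of techniques the paper's enhanced NSS schemes build on, can be sketched as follows. This is a minimal illustration, not the authors' nonlinear scheme; the over-subtraction factor `alpha` and floor `beta` are illustrative parameter names:

```python
import numpy as np

def spectral_subtract(frame, noise_psd, alpha=2.0, beta=0.01):
    """Minimal power spectral subtraction sketch: subtract a scaled
    estimate of the noise power spectrum and floor the result to
    avoid negative energies, then resynthesise with the noisy phase."""
    spec = np.fft.rfft(frame)
    power = np.abs(spec) ** 2
    clean = np.maximum(power - alpha * noise_psd, beta * power)
    return np.fft.irfft(np.sqrt(clean) * np.exp(1j * np.angle(spec)),
                        n=len(frame))
```

Real NSS schemes make `alpha` frequency- and SNR-dependent, which is part of what the paper evaluates.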

53 citations


Proceedings ArticleDOI
15 Jun 1992
TL;DR: A complete scheme for totally unconstrained handwritten word recognition based on a single contextual hidden Markov model (HMM) is proposed, which includes a morphology- and heuristics-based segmentation algorithm and a modified Viterbi algorithm that searches the globally best path based on the previous l best paths.
Abstract: A complete scheme for totally unconstrained handwritten word recognition based on a single contextual hidden Markov model (HMM) is proposed. The scheme includes a morphology- and heuristics-based segmentation algorithm and a modified Viterbi algorithm that searches the (l+1)st globally best path based on the previous l best paths. The results of detailed experiments for which the overall recognition rate is up to 89.4% are reported.
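The idea of extending Viterbi decoding beyond the single best path can be illustrated with a simple l-best trellis pass, where each cell keeps its l best partial-path scores instead of only the maximum. This is a sketch of the general technique, not the paper's specific modified algorithm:

```python
from heapq import nlargest

def l_best_viterbi(log_pi, log_A, log_B, obs, l=3):
    """l-best Viterbi sketch: cell[j] holds up to l (score, path)
    pairs for partial paths ending in state j at the current time."""
    N = len(log_pi)
    cell = [[(log_pi[j] + log_B[j][obs[0]], [j])] for j in range(N)]
    for t in range(1, len(obs)):
        new = []
        for j in range(N):
            cands = [(s + log_A[i][j] + log_B[j][obs[t]], p + [j])
                     for i in range(N) for (s, p) in cell[i]]
            # keep only the l best partial paths into state j
            new.append(nlargest(l, cands, key=lambda sp: sp[0]))
        cell = new
    finals = [sp for j in range(N) for sp in cell[j]]
    return nlargest(l, finals, key=lambda sp: sp[0])
```

For a word-recognition postprocessor, the l best paths give alternative segmentations/letter sequences that a lexicon can then rescore.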

27 citations


Proceedings Article
19 Aug 1992
TL;DR: A new method of feature-based facial coding allowing an entire face to be represented in less than two hundred bytes of information is introduced; an a priori model of the face helps to guide the location and storage of the most important facial parts.
Abstract: A review of competing facial recognition techniques is presented. The authors then go on to introduce a new method of feature-based facial coding allowing an entire face to be represented in less than two hundred bytes of information. Crucial to this coding process is the use of an a priori model of the face, which helps to guide the location and storage of the most important facial parts. The data reduction is thus achieved while still preserving many of the intrinsic facial recognition features. The algorithm used to perform the data reduction of the face is described. Results, for verification and recognition trials, are presented for a software implementation of the algorithm.

20 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: A performance analysis of a first order hidden Markov model based OCR system is presented, finding that for most fonts, optimal performance is achieved with 6-state models.
Abstract: Presents a performance analysis of a first order hidden Markov model based OCR system. Trade-offs between accuracy in terms of recognition rates and complexity in terms of the number of states in the model are discussed. For most fonts, optimal performance is achieved with 6-state models. With adequate heuristics and reliable post-processors, 5-state and even 4-state models give reasonable performance (up to 99.60% with 4-state models).

13 citations


24 Jan 1992
TL;DR: A novel means of facial coding allowing an entire face to be represented in less than two hundred bytes of information is introduced, achieved while still preserving many of the intrinsic facial recognition features.
Abstract: Introduces a novel means of facial coding allowing an entire face to be represented in less than two hundred bytes of information. This data reduction is achieved while still preserving many of the intrinsic facial recognition features. The system could thus reduce an input face into a sufficiently small amount of information that it could be stored on a smart-card. The algorithm used to perform the data reduction of the face is described and the results, in verification and recognition trials, are presented for a software implementation of the algorithm.

9 citations


I. Craw1
24 Jan 1992
TL;DR: The author describes one such hybrid scheme, presenting details of a working system for locating face features; and a coding scheme, based on accurate feature location, which is useful for recognition.
Abstract: In order to develop a successful face-recognition system, it is necessary to remove instance-specific detail from an incoming image before attempting to match against previously stored codes. Very recently, hybrid methods have emerged which make use of known feature locations, either implicitly or explicitly, to provide much better input to a recognition component that has many of the characteristics of a net-based method. The author describes one such hybrid scheme, presenting details of a working system for locating face features, and a coding scheme, based on accurate feature location, which is useful for recognition. The author describes some applications. An advantage of feature recognition over net-based methods is the detailed understanding available at an intermediate stage; this can sometimes be valuable in its own right.

8 citations



Journal ArticleDOI
TL;DR: A new signature approach in which the sizes of the signature files are dependent on the number of unique symbols in the alphabet, and therefore for all documents containing English text, the size is constant.
Abstract: Among the techniques used for retrieval of information from free-text or document databases, signature methods have proven to be more efficient in terms of storage overhead and processing speed. Signature methods, however, present the problem of “false drops” in which a document is identified but does not satisfy the user query. In signature approaches such as Word Signature and Superimposed Coding, the number of false drops is directly related to the hashing function selected, the signature size, and the number of signature buffers used for each document. Hashing functions also generate collisions, which result in false drops. In addition, these signature methods do not take into account the length of the words or the positional information of the characters that constitute the word. The use of “Don't Care Characters” in queries, therefore, is not possible. This paper presents a new signature approach in which the sizes of the signature files depend on the number of unique symbols in the alphabet, and therefore the size is constant for all documents containing English text. The signature generated in this technique maintains the positional information of characters and therefore allows Don't Care Characters to be used in queries. Implementation results and a comparison of this technique to the Superimposed Coding method are presented.
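The core idea of an alphabet-sized, position-preserving signature can be sketched as a bitmap with one bit-row per alphabet symbol and one column per character position. This is a toy illustration in the spirit of the abstract, not the paper's actual encoding; `width` and the `?` don't-care convention are assumptions:

```python
def make_signature(word, alphabet="abcdefghijklmnopqrstuvwxyz", width=8):
    """One bitmask per alphabet symbol: bit p is set if the symbol
    occurs at position p. Signature size depends only on the alphabet,
    so it is constant for English text, and positions are preserved."""
    sig = {c: 0 for c in alphabet}
    for pos, ch in enumerate(word[:width]):
        if ch in sig:
            sig[ch] |= 1 << pos
    return sig

def matches(sig, query):
    """Probe the signature; '?' is a don't-care character that
    matches any symbol at that position."""
    return all(q == "?" or sig.get(q, 0) & (1 << pos)
               for pos, q in enumerate(query))
```

Because each character is checked at its own position, queries like `"si?n"` work, and positional mismatches that would be false drops under superimposed coding are rejected.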

5 citations


Proceedings ArticleDOI
23 Mar 1992
TL;DR: A theorem for self-structuring neural models is presented, stating that these models are universal approximators and thus relevant for real-world pattern recognition.
Abstract: The majority of neural models for pattern recognition have fixed architecture during training. A typical consequence is nonoptimal and often too large networks. A self-structuring hidden control (SHC) neural model for pattern recognition that establishes a near-optimal architecture during training is proposed. A network architecture reduction of approximately 80-90% in terms of the number of hidden processing elements (PEs) is typically achieved. The SHC model combines self-structuring architecture generation with nonlinear prediction and hidden Markov modeling. A theorem for self-structuring neural models that states that these models are universal approximators and thus relevant for real-world pattern recognition is presented. Using SHC models containing as few as five hidden PEs each for an isolated word recognition task resulted in a recognition rate of 98.4%. SHC models can furthermore be applied to continuous speech recognition.

4 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: The author constructs a Markov model of a matching for two graphs taking into account local geometric constraints and proposes a distance between two graphs to be able to discriminate their shapes.
Abstract: Presents an algorithm for recognition of plane and rigid objects. The method represents the shape of these objects by valued graphs. The author constructs a Markov model of a matching for two graphs taking into account local geometric constraints. The author finally proposes a distance between two graphs to be able to discriminate their shapes.

Proceedings ArticleDOI
07 Jul 1992
TL;DR: A decision tree method for ship noise classification is presented: the noise, transformed to the frequency domain, is classified by a tree computed during a training phase in which a significant set of situations is shown to the algorithm.
Abstract: Signature recognition can be useful in a wide range of applications. A decision tree method for ship noise classification is presented. The ship noise, once transformed to the frequency domain, is classified by a decision tree previously calculated during a training phase in which a significant set of situations must be shown to the algorithm. The tree calculation process is explained and results from real experiments are presented. Parallel implementations for improvement of performance are suggested. A well-suited transputer-based architecture is also presented for the solution of the problem.
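The pipeline described (frequency-domain features fed to a decision tree) can be illustrated with a toy example. The band split, thresholds, and class labels below are invented for illustration; in the paper the tree is learned from labelled recordings during the training phase:

```python
import numpy as np

def spectral_features(signal):
    """Toy frequency-domain features: total magnitude in the lower
    and upper halves of the one-sided spectrum."""
    spec = np.abs(np.fft.rfft(signal))
    half = len(spec) // 2
    return spec[:half].sum(), spec[half:].sum()

def classify(low, high):
    """A hand-built one-level decision tree over the two band
    energies (a stand-in for the trained tree)."""
    if low > high:
        return "low-band source"
    return "broadband source"
```

Each tree node tests one feature against a threshold, which is what makes the per-frame classification cheap enough to parallelise across transputers.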

Proceedings ArticleDOI
30 Aug 1992
TL;DR: A novel approach to automatic signature verification based on the optimisation of a descriptive feature vector for each individual signer is described, which requires a parallel processing approach to system implementation.
Abstract: Describes a novel approach to automatic signature verification based on the optimisation of a descriptive feature vector for each individual signer. The nature of the optimisation process is such that user enrolment becomes highly computationally intensive and hence a parallel processing approach to system implementation is necessary for practical applications. The design approach is evaluated and practical results are presented.


Proceedings ArticleDOI
31 Aug 1992
TL;DR: The authors present a theorem for self-structuring neural models stating that these models are universal approximators and thus relevant to real-world pattern recognition.
Abstract: The authors propose a self-structuring hidden control (SHC) neural model for pattern recognition which establishes a near-optimal architecture during training. A significant network architecture reduction in terms of the number of hidden processing elements (PEs) is typically achieved. The SHC model combines self-structuring architecture generation with nonlinear prediction and hidden Markov modelling. The authors present a theorem for self-structuring neural models stating that these models are universal approximators and thus relevant to real-world pattern recognition. Using SHC models containing as few as five hidden PEs each for an isolated word recognition task resulted in a recognition rate of 98.4%. SHC models can also be applied to continuous speech recognition.

Proceedings ArticleDOI
30 Aug 1992
TL;DR: The main aim of the system is to perform an object recognition based on contour description and being independent of noise and indeterminations in the digitization process.
Abstract: The interest in fuzzy algorithms is increasing in a wide range of pattern recognition applications. This paper describes a recognition system using fuzzy algorithms and parameters, with particular attention to the system architecture. The system is able to recognize flat polygonal objects in real time. Its main aim is to perform object recognition based on contour description while remaining independent of noise and uncertainties introduced in the digitization process. The system is made up of three modules: the first is a break-point detector; the second produces a contour description by means of fuzzy parameters; and the third uses that description to recognize the picture.

Proceedings ArticleDOI
23 Mar 1992
TL;DR: The authors applied the modified LVQ2 algorithm to a multi-speaker-dependent phoneme recognition task for continuous speech uttered Bunsetsu-by-Bunsetsu, and found that the phoneme recognition scores obtained were higher than those obtained by the original LVQ2 algorithm.
Abstract: The authors propose a phoneme recognition method based on the learning vector quantization (LVQ2) algorithm. They propose three kinds of modified training algorithms for the LVQ2 algorithm. In the recognition stage, the likelihood matrix is computed using the reference vectors and then the optimum phoneme sequence is computed from the matrix using the two-level dynamic programming (DP)-matching with duration constraints. The recognition score of phonemes in isolated spoken words was 89.1% for the test set. The phoneme recognition scores obtained by the modified LVQ2 algorithm were higher than those obtained by the original LVQ2 algorithm. The authors applied this method to a multi-speaker-dependent phoneme recognition task for continuous speech uttered Bunsetsu-by-Bunsetsu. The phoneme recognition score was 85.5% for the test speech samples in continuous speech.
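The baseline LVQ2 learning rule that the paper modifies can be sketched as follows. This is the standard Kohonen LVQ2 window update, not any of the authors' three modified variants; `lr` and `window` are illustrative hyperparameter names:

```python
import numpy as np

def lvq2_update(refs, labels, x, y, lr=0.05, window=0.3):
    """One standard LVQ2 step: if the nearest reference vector has the
    wrong class, the runner-up has the right class, and the sample lies
    in a window around the decision border, repel the wrong prototype
    and attract the right one."""
    d = np.linalg.norm(refs - x, axis=1)
    i, j = np.argsort(d)[:2]           # nearest and second-nearest
    in_window = min(d[i] / d[j], d[j] / d[i]) > (1 - window) / (1 + window)
    if labels[i] != y and labels[j] == y and in_window:
        refs[i] -= lr * (x - refs[i])  # push incorrect prototype away
        refs[j] += lr * (x - refs[j])  # pull correct prototype closer
    return refs
```

Because only borderline misclassified samples trigger an update, LVQ2 refines the decision boundary between phoneme classes rather than the class means.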