
Optical character recognition

About: Optical character recognition is a research topic. Over its lifetime, 7,342 publications have been published within this topic, receiving 158,193 citations. The topic is also known as OCR or optical character reader.


Papers
Journal ArticleDOI
TL;DR: This paper designs an image processing module for a mobile device based on the characteristics of a CNN, and proposes a lightweight network structure for optical character recognition (OCR) on specific data sets.
Abstract: Deep learning (DL) is a hot topic in current pattern recognition and machine learning. DL has unprecedented potential to solve many complex machine learning problems and is clearly attractive in the framework of mobile devices. The availability of powerful pattern recognition tools creates tremendous opportunities for next-generation smart applications. A convolutional neural network (CNN) enables data-driven learning and extraction of highly representative, hierarchical image features from appropriate training data. However, for some data sets, the CNN classification method needs adjustments in its structure and parameters. Mobile computing places strict requirements on the running time and weight size of the neural network. In this paper, we first design an image processing module for a mobile device based on the characteristics of a CNN. Then, we describe how to use the mobile device to collect data, process the data, and construct the data set. Finally, considering the computing environment and data characteristics of mobile devices, we propose a lightweight network structure for optical character recognition (OCR) on specific data sets. The proposed CNN-based method has been validated by comparison with the results of existing OCR methods.
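The abstract does not spell out the network itself; as a rough illustration of what a lightweight character-classification CNN for a mobile OCR pipeline might look like, here is a minimal PyTorch sketch. The input size (32x32 grayscale), filter counts, and the 36-class alphabet are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch of a lightweight CNN for character classification,
# in the spirit of the mobile OCR paper above. Input size and class count
# are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class LightweightOCRNet(nn.Module):
    def __init__(self, num_classes: int = 36):  # e.g. digits + uppercase letters (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # few filters keep the weight file small
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Quick shape check on a dummy batch of 32x32 grayscale character crops.
logits = LightweightOCRNet()(torch.randn(4, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 36])
```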

43 citations

Patent
11 Jul 1994
TL;DR: In this article, a post-processing method for optical character recognition (OCR) is presented that combines different OCR engines to identify and resolve characters, and attributes of characters, that are erroneously recognized by individual OCR engines.
Abstract: A post-processing method for optical character recognition (OCR) combines different OCR engines to identify and resolve characters, and attributes of those characters, that are erroneously recognized by individual OCR engines. The characters can originate from many different types of character environments. OCR engine outputs are synchronized using synchronization heuristics in order to detect matches and mismatches between them. The mismatches are resolved using resolution heuristics and neural networks. The resolution heuristics and neural networks are based on observing many different conventional OCR engines in different character environments to find which specific OCR engine correctly identifies a certain character having particular attributes. The results are encoded into the resolution heuristics and neural networks to create an optimal OCR post-processing solution.
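As a rough illustration of the synchronize-and-resolve idea (not the patent's actual heuristics or neural networks), the following Python sketch aligns two engine outputs with a generic sequence matcher, keeps spans where the engines agree, and resolves disagreements with a toy placeholder rule.

```python
# Hypothetical sketch of combining two OCR engine outputs, in the spirit of the
# patent above. Outputs are synchronized with difflib's sequence aligner;
# agreeing spans are kept, and mismatches are resolved with a toy placeholder
# rule (prefer engine A unless it dropped the span). The patent's actual
# synchronization/resolution heuristics and neural networks are not reproduced.
from difflib import SequenceMatcher

def combine_ocr_outputs(out_a: str, out_b: str) -> str:
    """Align two OCR engine outputs and merge them span by span."""
    merged = []
    for tag, a0, a1, b0, b1 in SequenceMatcher(None, out_a, out_b).get_opcodes():
        if tag == "equal":
            merged.append(out_a[a0:a1])               # both engines agree
        else:
            seg_a, seg_b = out_a[a0:a1], out_b[b0:b1]
            merged.append(seg_a if seg_a else seg_b)  # toy resolution rule
    return "".join(merged)

# Engine A missed the period; engine B misread a digit. The toy rule takes
# B's text where A has a gap and keeps A's reading on conflicts.
print(combine_ocr_outputs("Invoice No 1234", "Invoice No. 12E4"))
# -> "Invoice No. 1234"
```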

42 citations

Proceedings ArticleDOI
16 Jun 2012
TL;DR: In this work, a Gaussian Hidden Markov Model (GHMM) based automatic sign language recognition system is built on the SIGNUM database; a feature-combination technique improves the word error rate of the best system by more than 8% relative and outperforms the best published results on this database by about 6% relative.
Abstract: In this work, a Gaussian Hidden Markov Model (GHMM) based automatic sign language recognition system is built on the SIGNUM database. The system is trained on appearance-based features as well as on features derived from a multilayer perceptron (MLP). Appearance-based features are directly extracted from the original images without any colored gloves or sensors. The posterior estimates are derived from a neural network. Whereas MLP-based features are well known in speech and optical character recognition, this is the first time that these features are used in a sign language system. The MLP-based features improve the word error rate (WER) of the system from 16% to 13% compared to the appearance-based features. In order to benefit from the different feature types, we investigate a combination technique: the models trained on each feature set are combined during the recognition step. By means of this combination technique, we could improve the word error rate of our best system by more than 8% relative and outperform the best published results on this database by about 6% relative.
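A minimal sketch of per-class GHMM training and score-level model combination in the spirit of the paragraph above, assuming the hmmlearn library; the state counts, feature layout, and combination weight are made-up illustration values, not the paper's setup.

```python
# Hypothetical sketch: one Gaussian HMM per sign is trained on each feature
# stream (appearance-based and MLP-based), and at recognition time the
# per-stream log-likelihoods are combined with a simple weighted sum.
# Requires hmmlearn (pip install hmmlearn).
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(train_data, n_states=3):
    """train_data: dict mapping sign -> list of (T_i, dim) feature sequences."""
    models = {}
    for sign, seqs in train_data.items():
        X = np.concatenate(seqs)             # stack frames of all sequences
        lengths = [len(s) for s in seqs]     # per-sequence frame counts
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[sign] = m
    return models

def recognize(appearance_seq, mlp_seq, app_models, mlp_models, weight=0.5):
    """Pick the sign whose combined log-likelihood over both streams is highest."""
    best_sign, best_score = None, -np.inf
    for sign in app_models:
        score = (weight * app_models[sign].score(appearance_seq)
                 + (1.0 - weight) * mlp_models[sign].score(mlp_seq))
        if score > best_score:
            best_sign, best_score = sign, score
    return best_sign
```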

42 citations

Proceedings ArticleDOI
23 Aug 2010
TL;DR: An edge-based binarization method for video text images, especially images with complex background or low contrast, is presented; it utilizes a local thresholding method to decide the inner side of the text contour and fills up the contour to form characters that are recognizable to OCR software.
Abstract: This paper introduces an edge-based binarization method for video text images, especially for images with complex background or low contrast. The method first detects the contour of the text, utilizes a local thresholding method to decide the inner side of the contour, and then fills up the contour to form characters that are recognizable to OCR software. Experimental results show that the method is especially effective on complex-background and low-contrast images.
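A rough OpenCV sketch of the pipeline described above (edge/contour detection, a local threshold to decide which side of the contour is the character, then contour filling); the edge thresholds and size filter are assumptions, not the paper's parameters.

```python
# Hypothetical sketch of edge-based binarization for video text frames,
# in the spirit of the paper above. Requires OpenCV (pip install opencv-python).
import cv2
import numpy as np

def binarize_text_image(gray: np.ndarray) -> np.ndarray:
    edges = cv2.Canny(gray, 50, 150)                        # text contours
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    out = np.zeros_like(gray)
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if w * h < 20:                                      # skip tiny noise blobs
            continue
        patch = gray[y:y + h, x:x + w]
        # Local Otsu threshold decides whether the character is darker or
        # brighter than its surrounding background within this patch.
        t, _ = cv2.threshold(patch, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        mask = np.zeros_like(gray)
        cv2.drawContours(mask, [cnt], -1, 255, thickness=cv2.FILLED)
        inside = gray[mask == 255]
        # Fill the contour as foreground (white) if its interior lies on the
        # darker side of the local threshold, i.e. is likely ink.
        if inside.size and inside.mean() < t:
            out[mask == 255] = 255
    return out

# Usage: binary = binarize_text_image(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
```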

42 citations

Proceedings ArticleDOI
Xi-Ping Luo, Jun Li, Li-Xin Zhen
23 Aug 2004
TL;DR: This paper introduces the design and implementation of a business card reader based on a built-in camera and proposes a new method based on multi-resolution analysis of document images that improves computation speed and reduces the memory requirement of the image-processing step.
Abstract: With the availability of high-resolution cameras and increased computation power, it becomes possible to implement OCR applications such as business card readers on mobile devices. In this paper, we introduce the design and implementation of a business card reader based on a built-in camera. In order to deal with the challenge of limited resources in a mobile device, we propose a new method based on multi-resolution analysis of document images. This method improves computation speed and reduces the memory requirement of the image-processing step by detecting the text areas in the downscaled image and then analyzing each detected area in the original image. For the OCR engine, we use a two-layer classifier to improve speed. Our experiments give satisfactory results.
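A rough OpenCV sketch of the multi-resolution idea described above (locate text areas on a downscaled copy, then crop the corresponding regions at full resolution for OCR); the scale factor and morphology parameters are assumptions, not the paper's values.

```python
# Hypothetical sketch of multi-resolution text-area detection for a business
# card image, in the spirit of the paper above. Requires OpenCV.
import cv2
import numpy as np

def detect_text_regions(image: np.ndarray, scale: float = 0.25):
    """image: full-resolution BGR card photo. Returns full-resolution text crops."""
    small = cv2.resize(image, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)        # cheap analysis copy
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    # Morphological gradient + closing merges the characters of a text line
    # into one connected blob in the downscaled image.
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    _, bw = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE,
                              cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1)))
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if w < 10 or h < 4:                                  # discard tiny blobs
            continue
        # Map the box back to the original resolution and crop there,
        # so the OCR engine sees full-detail pixels.
        x0, y0 = int(x / scale), int(y / scale)
        x1, y1 = int((x + w) / scale), int((y + h) / scale)
        crops.append(image[y0:y1, x0:x1])
    return crops
```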

42 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (87% related)
Feature (computer vision): 128.2K papers, 1.7M citations (85% related)
Image segmentation: 79.6K papers, 1.8M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (84% related)
Deep learning: 79.8K papers, 2.1M citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    186
2022    425
2021    333
2020    448
2019    430
2018    357