Open Access · Posted Content

Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition

TLDR
This work presents a framework for the recognition of natural scene text that does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past.
Abstract
In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine -- synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one "reading" words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.
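The abstract mentions three ways of encoding a word for recognition; as a minimal sketch of one of them, the bag-of-N-grams encoding represents a word as a binary indicator vector over a fixed vocabulary of character N-grams. The tiny vocabulary and `n_max` below are illustrative only, not the paper's actual N-gram set:

```python
def ngrams(word, n_max=4):
    # Collect every character N-gram (1 <= N <= n_max) occurring in the word.
    grams = set()
    for n in range(1, n_max + 1):
        for i in range(len(word) - n + 1):
            grams.add(word[i:i + n])
    return grams

def encode(word, vocab):
    # Binary indicator vector over a fixed N-gram vocabulary (a toy
    # stand-in for the much larger vocabulary a real model would use).
    present = ngrams(word)
    return [1 if g in present else 0 for g in vocab]

vocab = ["s", "p", "sp", "spi", "ires", "xyz"]
print(encode("spires", vocab))  # -> [1, 1, 1, 1, 1, 0]
```

A classifier trained on synthetic word images can then predict this vector directly, turning word recognition into multi-label N-gram detection.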



Citations
Journal Article (DOI)

Action Recognition Based on Efficient Deep Feature Learning in the Spatio-Temporal Domain

TL;DR: A simple yet robust 2-D convolutional neural network is extended to a concatenated 3-D network that learns to extract features from the spatio-temporal domain of raw video data, and is used for content-based recognition of videos.
Posted Content

TextSR: Content-Aware Text Super-Resolution Guided by Recognition.

TL;DR: This work proposes a content-aware text super-resolution network that uses the text recognition loss as a Text Perceptual Loss to guide training of the super-resolution network, so that it attends to the text content rather than the irrelevant background area.
Proceedings Article (DOI)

On Vocabulary Reliance in Scene Text Recognition

TL;DR: This paper establishes an analytical framework, in which different datasets, metrics and module combinations for quantitative comparison are devised, to conduct an in-depth study of the problem of vocabulary reliance in scene text recognition.
Book Chapter (DOI)

RD-GAN: Few/Zero-Shot Chinese Character Style Transfer via Radical Decomposition and Rendering.

TL;DR: A novel radical-decomposition-and-rendering-based GAN (RD-GAN) is proposed to exploit the radical-level compositions of Chinese characters, achieving few-shot/zero-shot Chinese character style transfer.
Journal Article (DOI)

Adaptive embedding gate for attention-based scene text recognition

TL;DR: The proposed adaptive embedding gate (AEG) introduces high-order character language models into the attention mechanism by controlling the information transmission between adjacent characters, and can be easily integrated into state-of-the-art attentional methods.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: State-of-the-art ImageNet classification performance is achieved with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Journal Article (DOI)

Gradient-based learning applied to document recognition

TL;DR: Multilayer neural networks trained with gradient-based learning are shown to synthesize complex decision surfaces that can classify high-dimensional patterns such as handwritten characters, and a graph transformer network (GTN) architecture is proposed for globally training document recognition systems composed of multiple modules.
Posted Content

Improving neural networks by preventing co-adaptation of feature detectors

TL;DR: Half of the feature detectors are randomly omitted on each training case, preventing complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors.
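The random omission of feature detectors described above can be sketched in a few lines of NumPy. This is the "inverted dropout" variant commonly used today, in which surviving units are rescaled at training time so the expected activation is unchanged; the shapes and drop probability are illustrative:

```python
import numpy as np

def dropout(activations, p_drop=0.5, rng=np.random.default_rng(0)):
    # Zero each unit independently with probability p_drop, then rescale
    # the survivors by 1 / (1 - p_drop) so the expected value is preserved.
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones((4, 8))        # a batch of hidden-layer activations
h_train = dropout(h, 0.5)  # roughly half the units are zeroed, the rest doubled
```

At test time no units are dropped and, thanks to the rescaling, no further correction of the weights is needed.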
Journal Article (DOI)

DRC: a dual route cascaded model of visual word recognition and reading aloud.

TL;DR: The DRC model is a computational realization of the dual-route theory of reading, and is the only computational model of reading that can perform the 2 tasks most commonly used to study reading: lexical decision and reading aloud.
Proceedings Article

OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks

TL;DR: A multiscale, sliding-window approach within a convolutional network is proposed to predict object boundaries; the resulting bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. OverFeat was the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013.