Open Access · Posted Content

Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition

TL;DR
This work presents a framework for the recognition of natural scene text that does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character-based recognition systems of the past.
Abstract
In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character-based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine -- synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This abundance of data opens up new possibilities for word recognition models, and here we consider three models, each one "reading" words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language-based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.
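
To make the three output encodings concrete, the sketch below is illustrative only and is not the paper's code: the word list, alphabet, maximum word length, and N-gram vocabulary are made-up placeholders. It shows how a single word could be turned into a dictionary class index, a padded character-sequence label vector, and a bag-of-N-grams indicator vector.

```python
# Illustrative sketch (not the paper's code) of the three word encodings
# named in the abstract. All vocabularies and sizes below are placeholders.
import numpy as np

DICTIONARY = ["the", "hello", "text", "scene"]      # stand-in for a ~90k-word lexicon
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"   # 36 characters + 1 padding class
MAX_LEN = 10                                        # assumed maximum word length
NGRAMS = ["he", "el", "ll", "lo", "hel", "llo"]     # stand-in bag-of-N-grams vocabulary


def dictionary_encoding(word):
    """Dictionary encoding: one class index per word in the lexicon."""
    return DICTIONARY.index(word)


def char_sequence_encoding(word):
    """Character sequence encoding: one class label per character position,
    padded with a 'null' class (index len(ALPHABET)) up to MAX_LEN."""
    null = len(ALPHABET)
    labels = [ALPHABET.index(c) for c in word] + [null] * (MAX_LEN - len(word))
    return np.array(labels)


def bag_of_ngrams_encoding(word):
    """Bag-of-N-grams encoding: a binary vector marking which N-grams occur."""
    return np.array([1.0 if g in word else 0.0 for g in NGRAMS])


if __name__ == "__main__":
    print(dictionary_encoding("hello"))      # 1
    print(char_sequence_encoding("hello"))   # [ 7  4 11 11 14 36 36 36 36 36]
    print(bag_of_ngrams_encoding("hello"))   # [1. 1. 1. 1. 1. 1.]
```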



Citations
Posted Content

Inductive Visual Localisation: Factorised Training for Superior Generalisation

TL;DR: In this article, the problem of text spotting is decomposed into a sequence of inductive steps and a recurrent neural network (RNN) is then trained to reproduce these steps; the RNN is not allowed to learn an arbitrary internal state but is instead tasked with mimicking the evolution of a valid state.
Proceedings Article (DOI)

Unsupervised Adaptation for Synthetic-to-Real Handwritten Word Recognition

TL;DR: This paper proposes an unsupervised writer adaptation approach that automatically adjusts a generic handwritten word recognizer, fully trained with synthetic fonts, to a new incoming writer, coping with the variability among writing styles and the scarcity of labeled data.
Posted Content

Self-supervised Tumor Segmentation through Layer Decomposition

TL;DR: In this paper, a self-supervised approach for tumor segmentation is proposed, in which models from self-supervised learning are applied directly to the downstream task without using any manual annotations whatsoever.
Proceedings Article (DOI)

Detecting Text in News Images with Similarity Embedded Proposals

TL;DR: An effective news text detection framework is developed by introducing a novel similarity-embedded proposal mechanism that predicts a similarity score for each fine-scale proposal to help construct text bounding boxes.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art image classification performance.
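
As a rough illustration of the architecture this summary describes, here is a minimal PyTorch sketch with five convolutional layers, interleaved max-pooling, and three fully-connected layers ending in a 1000-way output. The channel counts and kernel sizes follow the commonly cited AlexNet configuration and should be treated as assumptions rather than details taken from this page.

```python
# Minimal sketch of a five-conv / three-FC network with a 1000-way output.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # 1000-way output; softmax is applied by the loss
)

logits = alexnet_like(torch.randn(1, 3, 224, 224))   # -> shape (1, 1000)
```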
Journal Article (DOI)

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for document recognition; gradient-based learning is used to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters.
Posted Content

Improving neural networks by preventing co-adaptation of feature detectors

TL;DR: The authors randomly omit half of the feature detectors on each training case to prevent complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors.
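
A minimal NumPy sketch of this idea, assuming the simplest form of the scheme: each hidden activation is kept with probability 0.5 during training, and activations are rescaled at test time to compensate.

```python
# Toy dropout sketch: randomly omit half of the hidden units during training,
# keep all units but halve their activations at test time.
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_keep=0.5, training=True):
    if training:
        mask = rng.random(activations.shape) < p_keep   # randomly omit units
        return activations * mask
    return activations * p_keep                         # rescale at test time

hidden = np.array([0.7, 1.2, 0.0, 2.3, 0.5, 1.8])
print(dropout(hidden, training=True))    # roughly half the units zeroed out
print(dropout(hidden, training=False))   # all units kept, scaled by 0.5
```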
Journal Article (DOI)

DRC: a dual route cascaded model of visual word recognition and reading aloud.

TL;DR: The DRC model is a computational realization of the dual-route theory of reading, and is the only computational model of reading that can perform the two tasks most commonly used to study reading: lexical decision and reading aloud.
Proceedings Article

OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks

TL;DR: In this article, a multiscale, sliding-window approach is proposed to predict object boundaries, which are then accumulated rather than suppressed in order to increase detection confidence; OverFeat won the ImageNet Large Scale Visual Recognition Challenge 2013.
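
The "accumulate rather than suppress" step can be illustrated with a toy merging routine; the greedy IoU-based merge below is an assumption made for illustration, not OverFeat's actual procedure.

```python
# Toy sketch: overlapping predicted boxes are merged (coordinates averaged,
# scores summed) instead of being discarded by non-maximum suppression.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def accumulate(boxes, scores, thresh=0.5):
    """Greedily merge overlapping boxes; confidence grows as boxes agree."""
    boxes, scores = list(map(np.asarray, boxes)), list(scores)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > thresh:
                    w = scores[i] + scores[j]
                    boxes[i] = (scores[i] * boxes[i] + scores[j] * boxes[j]) / w
                    scores[i] = w
                    del boxes[j], scores[j]
                    merged = True
                    break
            if merged:
                break
    return boxes, scores

print(accumulate([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], [0.9, 0.8, 0.6]))
```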