Open Access Proceedings ArticleDOI

Focusing Attention: Towards Accurate Text Recognition in Natural Images

TLDR
Cheng et al. propose the Focusing Attention Network (FAN), which employs a focusing attention mechanism to automatically draw back drifted attention. The motivation is that existing attention-based methods perform poorly on complicated and low-quality images because they cannot obtain accurate alignments between feature areas and targets for such images.
Abstract
Scene text recognition has been a hot research topic in computer vision due to its various applications. The state of the art is the attention-based encoder-decoder framework that learns the mapping between input images and output sequences in a purely data-driven way. However, we observe that existing attention-based methods perform poorly on complicated and/or low-quality images. One major reason is that existing methods cannot get accurate alignments between feature areas and targets for such images. We call this phenomenon “attention drift”. To tackle this problem, in this paper we propose the FAN (Focusing Attention Network) method, which employs a focusing attention mechanism to automatically draw back the drifted attention. FAN consists of two major components: an attention network (AN) that is responsible for recognizing character targets as in the existing methods, and a focusing network (FN) that is responsible for adjusting attention by evaluating whether AN pays attention properly to the target areas in the images. Furthermore, different from the existing methods, we adopt a ResNet-based network to enrich deep representations of scene text images. Extensive experiments on various benchmarks, including the IIIT5k, SVT and ICDAR datasets, show that the FAN method substantially outperforms the existing methods.
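The two-component design above (AN for recognition, FN for attention correction) can be illustrated with a short sketch. This is a minimal illustration in PyTorch, not the authors' released code: module names and layer sizes are assumptions, and the focusing network is simplified to re-classifying the attended glimpse, whereas the paper localizes and crops an attention region on the feature maps.

```python
# Hypothetical sketch of the FAN idea: an attention network (AN) that decodes
# characters, plus a focusing network (FN) whose extra classification loss
# penalizes drifted attention weights. Sizes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionStep(nn.Module):
    """One decoding step of the attention network (AN)."""
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=37):
        super().__init__()
        self.score = nn.Linear(feat_dim + hidden_dim, 1)  # alignment score per position
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)
        self.cls = nn.Linear(hidden_dim, num_classes)

    def forward(self, feats, h):
        # feats: (B, T, feat_dim) encoder features; h: (B, hidden_dim) decoder state
        B, T, _ = feats.shape
        h_exp = h.unsqueeze(1).expand(B, T, h.size(-1))
        alpha = F.softmax(self.score(torch.cat([feats, h_exp], -1)).squeeze(-1), dim=1)
        glimpse = (alpha.unsqueeze(-1) * feats).sum(1)    # attended context vector
        h = self.rnn(glimpse, h)
        return self.cls(h), alpha, h

class FocusingStep(nn.Module):
    """Focusing network (FN): re-classifies the attended region so a focusing
    loss (cross-entropy against the same character label) can pull misaligned
    attention back onto the right feature area."""
    def __init__(self, feat_dim=512, num_classes=37):
        super().__init__()
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, feats, alpha):
        glimpse = (alpha.unsqueeze(-1) * feats).sum(1)
        return self.cls(glimpse)
```

Training would then minimize the usual decoding loss from AN plus a weighted focusing loss from FN, so gradients flow back into the attention weights themselves.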


Citations
Posted Content

Reciprocal Feature Learning via Explicit and Implicit Tasks in Scene Text Recognition

TL;DR: The authors design a two-branch reciprocal feature learning framework to adequately utilize the features from both tasks; by exploiting the complementary effect between the explicit and implicit tasks, the features are reliably enhanced.
Journal ArticleDOI

Condition Monitoring and Fault Detection of Wind Turbine Driveline With the Implementation of Deep Residual Long Short-Term Memory Network

TL;DR: In this article, a new method utilizing a deep residual LSTM network with an attention model (ResLSTM-AM) is proposed. In view of the wide application of wind power, it is critically essential to develop solutions to the high fault rate and the long mean time to repair (MTTR) of wind turbine drivelines.
Posted Content

Handwritten Mathematical Expression Recognition with Bidirectionally Trained Transformer

TL;DR: In this article, a transformer-based decoder is employed to replace RNN-based ones, which makes the whole model architecture very concise, and a novel training strategy is introduced to fully exploit the potential of the transformer in bidirectional language modeling.
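The bidirectional training strategy mentioned in this TL;DR amounts to supervising one decoder with both reading directions of each target sequence. A hypothetical Python helper, assuming the reversed direction simply swaps the start and end markers (conventions vary by implementation):

```python
def bidirectional_targets(tokens, sos="<sos>", eos="<eos>"):
    """Build left-to-right and right-to-left targets for one decoder.
    Hypothetical helper; marker conventions vary by implementation."""
    l2r = [sos] + tokens + [eos]
    r2l = [eos] + list(reversed(tokens)) + [sos]  # reversed sequence, markers swapped
    return l2r, r2l
```

Both sequences are fed to the same decoder during training, so a single model learns the target language in both directions.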
Book ChapterDOI

CATNet: Scene Text Recognition Guided by Concatenating Augmented Text Features

TL;DR: In this paper, an end-to-end trainable text recognition model consisting of an auxiliary augmentation module and a text recognizer was proposed, where the auxiliary network acts as an image preprocessing module that extracts rich augmented features from the input image to ease the downstream recognition difficulty.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
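The residual reformulation is compact enough to show directly: each block outputs F(x) + x, so its layers only need to fit a residual correction. A minimal sketch (PyTorch here for consistency with the other examples; the original work was implemented in Caffe):

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x, so the stacked layers
    learn a residual rather than the full mapping."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut makes deep nets trainable
```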
Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
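The procedure this describes is gradient descent via the chain rule on a squared-error measure. A self-contained toy sketch in NumPy, with assumed layer sizes and synthetic data:

```python
import numpy as np

# Two-layer network trained by back-propagating the output error.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 4))                       # toy inputs
Y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy desired outputs
W1 = 0.5 * rng.standard_normal((4, 8))
W2 = 0.5 * rng.standard_normal((8, 1))

for _ in range(1000):
    H = np.tanh(X @ W1)                     # forward pass: hidden layer
    P = 1.0 / (1.0 + np.exp(-(H @ W2)))     # forward pass: sigmoid output
    # Backward pass: chain rule from squared error down to each weight.
    dZ2 = (P - Y) * P * (1 - P) / len(X)
    dW2 = H.T @ dZ2
    dZ1 = (dZ2 @ W2.T) * (1 - H**2)         # tanh derivative
    dW1 = X.T @ dZ1
    W1 -= 0.5 * dW1                         # repeated weight adjustments
    W2 -= 0.5 * dW2
```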
Proceedings Article

Neural Machine Translation by Jointly Learning to Align and Translate

TL;DR: It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend it by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
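The (soft-)search described here is additive attention: score every encoder position against the current decoder state, normalize the scores with a softmax, and take the weighted sum of encoder states as the context. A minimal sketch with assumed dimensions:

```python
import torch
import torch.nn.functional as F

def additive_attention(s, H, Wa, Ua, va):
    """Soft alignment over the source sentence.
    s: (B, d) decoder state; H: (B, T, d) encoder states;
    Wa, Ua: (d, d) projections; va: (d,) scoring vector. All shapes assumed."""
    e = torch.tanh(H @ Ua + (s @ Wa).unsqueeze(1)) @ va  # (B, T) alignment scores
    a = F.softmax(e, dim=1)                              # attention weights
    context = (a.unsqueeze(-1) * H).sum(1)               # (B, d) expected annotation
    return context, a
```

Because the weights are differentiable, the alignment is learned jointly with translation instead of being fixed in advance.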
Proceedings Article

Sequence to Sequence Learning with Neural Networks

TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
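A minimal sketch of that fixed-vector encoder-decoder, with hypothetical sizes:

```python
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Encoder LSTM compresses the source into a fixed-size state;
    a second LSTM decodes the target sequence from that state."""
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.enc = nn.LSTM(dim, dim, num_layers=2, batch_first=True)
        self.dec = nn.LSTM(dim, dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt_in):
        _, state = self.enc(self.src_emb(src))        # state = fixed-dimensional (h, c)
        y, _ = self.dec(self.tgt_emb(tgt_in), state)  # teacher-forced decoding
        return self.out(y)                            # next-token logits
```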
Proceedings ArticleDOI

Caffe: Convolutional Architecture for Fast Feature Embedding

TL;DR: Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.