Open Access Proceedings ArticleDOI

Focusing Attention: Towards Accurate Text Recognition in Natural Images

TLDR
This paper proposes the Focusing Attention Network (FAN), which employs a focusing attention mechanism to automatically draw back drifted attention. The motivation is that existing attention-based methods perform poorly on complicated and low-quality images because they cannot obtain accurate alignments between feature areas and targets.
Abstract
Scene text recognition has been a hot research topic in computer vision due to its various applications. The state of the art is the attention-based encoder-decoder framework that learns the mapping between input images and output sequences in a purely data-driven way. However, we observe that existing attention-based methods perform poorly on complicated and/or low-quality images. One major reason is that existing methods cannot get accurate alignments between feature areas and targets for such images. We call this phenomenon “attention drift”. To tackle this problem, in this paper we propose the FAN (the abbreviation of Focusing Attention Network) method that employs a focusing attention mechanism to automatically draw back the drifted attention. FAN consists of two major components: an attention network (AN) that is responsible for recognizing character targets as in the existing methods, and a focusing network (FN) that is responsible for adjusting attention by evaluating whether AN pays attention properly to the target areas in the images. Furthermore, different from the existing methods, we adopt a ResNet-based network to enrich deep representations of scene text images. Extensive experiments on various benchmarks, including the IIIT5k, SVT and ICDAR datasets, show that the FAN method substantially outperforms the existing methods.
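The abstract describes FAN at the block level: an attention network (AN) that decodes characters, and a focusing network (FN) that checks where the attention actually landed. The PyTorch sketch below is one minimal reading of that two-head design, not the authors' implementation: the module names, the window-pooling focusing step over 1-D encoder features, and all dimensions are illustrative assumptions (the paper's FN operates on localized image regions with convolutional layers).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionNetwork(nn.Module):
    """One decoding step: align the decoder state with encoder features,
    then predict the next character from the attended glimpse."""
    def __init__(self, feat_dim, hidden_dim, num_classes):
        super().__init__()
        self.score = nn.Linear(feat_dim + hidden_dim, 1)
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, feats, state):
        # feats: (B, T, feat_dim); state: (B, hidden_dim)
        expanded = state.unsqueeze(1).expand(-1, feats.size(1), -1)
        energy = self.score(torch.cat([feats, expanded], dim=-1)).squeeze(-1)
        alpha = F.softmax(energy, dim=1)                    # attention weights
        glimpse = (alpha.unsqueeze(-1) * feats).sum(dim=1)  # attended feature
        state = self.rnn(glimpse, state)
        return self.classifier(state), alpha, state

class FocusingNetwork(nn.Module):
    """Re-predicts the character from a small window of features around the
    attention center; supervising this head penalizes drifted attention."""
    def __init__(self, feat_dim, num_classes, k=3):
        super().__init__()
        self.k = k
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, feats, alpha):
        center = alpha.argmax(dim=1, keepdim=True)              # (B, 1)
        offsets = torch.arange(-(self.k // 2), self.k // 2 + 1,
                               device=feats.device)
        idx = (center + offsets).clamp(0, feats.size(1) - 1)    # (B, k)
        idx = idx.unsqueeze(-1).expand(-1, -1, feats.size(-1))
        window = feats.gather(1, idx)                           # (B, k, feat_dim)
        return self.classifier(window.mean(dim=1))
```

Training would sum the cross-entropy losses of both heads at each time step, so gradients from the focusing head pull the attention weights back toward the correct region; the relative weight of the two losses is a free hyperparameter in this sketch.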


Citations
Journal ArticleDOI

Enhanced Spectral–Spatial Residual Attention Network for Hyperspectral Image Classification

TL;DR: Wang et al. propose an enhanced spectral–spatial residual attention network (ESSRAN) for hyperspectral image classification, which combines spectral and spatial attention networks to extract more discriminative spectral features.
Book ChapterDOI

Image-Enhanced Multi-Modal Representation for Local Topic Detection from Social Media

TL;DR: Zhang et al. propose IEMM-LTD, an effective local topic detection method with two major modules; it adopts different prior distributions to model multi-modal information separately and can determine the number of topics automatically.
Proceedings ArticleDOI

Improving Irregular Text Recognition by Integrating Gabor Convolutional Network

TL;DR: This work proposes an end-to-end trainable model that combines a Gabor Convolutional Network (GCN) with a Sequence Recognition Network (SRN); the GCN extracts features that are more robust to orientation, and recognition accuracy is evaluated on various scene-text benchmarks containing both regular and irregular text.
Proceedings ArticleDOI

Fast Distributional Smoothing for Regularization in CTC Applied to Text Recognition

TL;DR: Fast distributional smoothing (FDS) is proposed as a method for drastically reducing computational cost by minimizing an upper bound on distributional divergence; experiments show that FDS enables efficient semi-supervised learning in sequential-label prediction tasks and that it outperforms a conventional semi-supervised method.
Journal ArticleDOI

Robust Sewer Defect Detection With Text Analysis Based on Deep Learning

TL;DR: A new automated framework for detecting sewer pipe defects from CCTV videos is introduced, based on an attention mechanism, an improved YOLOv5 architecture, and location-information recognition.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the approach won 1st place in the ILSVRC 2015 classification task.
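To make the residual idea concrete, here is a minimal basic block in PyTorch; the 3×3-conv/BatchNorm layout follows the common pattern, but the channel count and block structure are illustrative placeholders rather than the paper's exact architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Stacked layers learn a residual F(x); the identity shortcut adds x
    back, so the block outputs F(x) + x and very deep nets stay trainable."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # identity shortcut: y = F(x) + x
```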
Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal units come to represent important features of the task domain.
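The rule summarized above fits in a few lines of NumPy; the toy two-layer network, target function, and learning rate below are illustrative assumptions, with the backward pass hand-coded to show the repeated weight adjustment.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = x[:, :1] * x[:, 1:]                  # toy target that needs a hidden layer

w1 = rng.normal(scale=0.5, size=(2, 8))  # input -> hidden weights
w2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output weights
lr = 0.05
for _ in range(2000):
    h = np.tanh(x @ w1)                  # forward pass
    y_hat = h @ w2
    err = y_hat - y                      # output error to be minimized
    grad_w2 = h.T @ err / len(x)         # gradient at the output layer
    dh = (err @ w2.T) * (1 - h ** 2)     # back-propagate through tanh
    grad_w1 = x.T @ dh / len(x)
    w2 -= lr * grad_w2                   # repeated weight adjustments
    w1 -= lr * grad_w1
```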
Proceedings Article

Neural Machine Translation by Jointly Learning to Align and Translate

TL;DR: It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder-decoder architecture, and the authors propose to extend it by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
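A minimal sketch of the additive (soft-search) alignment this summary describes, assuming plain tensors W_q, W_k, and v as stand-ins for the learned parameters:

```python
import torch

def additive_attention(query, keys, W_q, W_k, v):
    """query: (B, Dq) decoder state; keys: (B, T, Dk) encoder states.
    Scores every source position, softmaxes, and returns the weighted
    context vector plus the alignment weights."""
    hidden = torch.tanh((query @ W_q).unsqueeze(1) + keys @ W_k)  # (B, T, H)
    scores = hidden @ v                                           # (B, T)
    alpha = torch.softmax(scores, dim=1)                          # soft search
    context = (alpha.unsqueeze(-1) * keys).sum(dim=1)             # (B, Dk)
    return context, alpha

# Example shapes only; real models learn these parameters jointly.
B, T, D, H = 2, 7, 16, 32
context, alpha = additive_attention(
    torch.randn(B, D), torch.randn(B, T, D),
    torch.randn(D, H), torch.randn(D, H), torch.randn(H))
```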
Proceedings Article

Sequence to Sequence Learning with Neural Networks

TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
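A minimal fixed-vector encoder-decoder in the spirit of this summary; the two-layer LSTMs mirror the "multilayered" description, while all sizes and the decoder inputs are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.LSTM(num_classes, hidden, num_layers=2,
                               batch_first=True)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, src, tgt_in):
        _, state = self.encoder(src)              # input -> fixed-size state
        dec_out, _ = self.decoder(tgt_in, state)  # decode conditioned on it
        return self.out(dec_out)

model = Seq2Seq(in_dim=32, hidden=64, num_classes=40)
logits = model(torch.randn(2, 10, 32),            # source sequence
               torch.randn(2, 5, 40))             # shifted target inputs
```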
Proceedings ArticleDOI

Caffe: Convolutional Architecture for Fast Feature Embedding

TL;DR: Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.