Open Access Proceedings Article (DOI)

Focusing Attention: Towards Accurate Text Recognition in Natural Images

TLDR
This paper proposes the Focusing Attention Network (FAN), which employs a focusing attention mechanism to automatically draw back drifted attention. Existing attention-based methods perform poorly on complicated and low-quality images because they cannot obtain accurate alignments between feature areas and targets for such images.
Abstract
Scene text recognition has been a hot research topic in computer vision due to its various applications. The state of the art is the attention-based encoder-decoder framework that learns the mapping between input images and output sequences in a purely data-driven way. However, we observe that existing attention-based methods perform poorly on complicated and/or low-quality images. One major reason is that existing methods cannot get accurate alignments between feature areas and targets for such images. We call this phenomenon “attention drift”. To tackle this problem, in this paper we propose the FAN (the abbreviation of Focusing Attention Network) method that employs a focusing attention mechanism to automatically draw back the drifted attention. FAN consists of two major components: an attention network (AN) that is responsible for recognizing character targets as in the existing methods, and a focusing network (FN) that is responsible for adjusting attention by evaluating whether AN pays attention properly on the target areas in the images. Furthermore, different from the existing methods, we adopt a ResNet-based network to enrich deep representations of scene text images. Extensive experiments on various benchmarks, including the IIIT5k, SVT and ICDAR datasets, show that the FAN method substantially outperforms the existing methods.
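The abstract describes two cooperating components: an attention network (AN) that aligns decoder steps with feature areas, and a focusing network (FN) that checks whether the attention actually lands on the target character region. The NumPy sketch below is illustrative only; the function names and the centre-of-mass drift measure are assumptions chosen for exposition, not the authors' FAN implementation.

```python
import numpy as np

def attention_weights(features, query):
    """Soft attention: score each feature column against the query,
    then normalize with a numerically stable softmax."""
    scores = features @ query              # one score per feature column
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def attention_center(alpha):
    """Expected position of the attention mass along the feature axis."""
    return float(np.sum(alpha * np.arange(len(alpha))))

def focusing_penalty(alpha, target_center):
    """Distance between the attended centre and a ground-truth character
    centre; an FN-style objective would penalize such drift."""
    return abs(attention_center(alpha) - target_center)

# Toy example: 8 feature columns of dimension 4.
rng = np.random.default_rng(0)
features = rng.normal(size=(8, 4))
query = rng.normal(size=4)

alpha = attention_weights(features, query)   # AN: where are we looking?
drift = focusing_penalty(alpha, 3.0)         # FN: how far off target?
```

In the paper the focusing supervision comes from character-level position annotations; here `target_center=3.0` is just a placeholder ground-truth position.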


Citations

An End-to-End Marking Recognition System for PCB Optical Inspection

TL;DR: In this article, the authors identify the relevant circumstances with sound justification and use carefully selected data augmentation approaches to create a better-performing end-to-end marking recognition system for PCB optical inspection.
Book Chapter (DOI)

Character Flow Detection and Rectification for Scene Text Spotting.

TL;DR: This paper proposes a three-stage bottom-up scene text spotter comprising text segmentation, text rectification, and text recognition. It adopts a feature pyramid network (FPN) to extract character instances by combining local and global information, and a joint network of FPN and a bidirectional LSTM then explores the affinity among the isolated characters, which are grouped into character flows.
Posted Content

MetaHTR: Towards Writer-Adaptive Handwritten Text Recognition

TL;DR: The authors propose a meta-learning framework that exploits additional new-writer data through a support set and outputs a writer-adapted model via a single gradient-step update, all during inference.
Proceedings Article (DOI)

A Review of Optical Text Recognition from Distorted Scene Image

TL;DR: In this paper, a PRISMA flow diagram is used to compare different scene text recognition algorithms on various common datasets and to identify the weaknesses and inconsistencies of those algorithms across datasets.
Journal Article (DOI)

Word Recognition Method Using Convolution Deep Learning Approach Used in Smart Cities for Vehicle Identification

TL;DR: This work proposes combining a Key Pixel Locator with a convolutional neural network to achieve high recognition and identification rates, and demonstrates that the variable-length approach outperforms the fixed-length strategy in both accuracy and speed.
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the approach won 1st place in the ILSVRC 2015 classification task.
Journal Article (DOI)

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
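The idea summarized above, repeatedly adjusting weights to shrink the gap between actual and desired outputs, can be shown with a single linear unit. This is a minimal sketch of gradient-based weight updates, not the paper's multi-layer procedure; all variable names are illustrative.

```python
import numpy as np

# Desired outputs come from a known linear map so convergence is visible.
rng = np.random.default_rng(1)
x = rng.normal(size=(50, 3))           # input vectors
true_w = np.array([1.0, -2.0, 0.5])    # weights that generated the data
y = x @ true_w                         # desired output for each input

w = np.zeros(3)                        # initial connection weights
lr = 0.1                               # learning rate
for _ in range(200):
    err = x @ w - y                    # actual minus desired output
    grad = x.T @ err / len(x)          # gradient of 0.5 * mean(err**2)
    w -= lr * grad                     # adjust weights down the gradient
```

After the loop, `w` has moved close to `true_w`: each update reduces the squared-error measure the TL;DR describes.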
Proceedings Article

Neural Machine Translation by Jointly Learning to Align and Translate

TL;DR: It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder-decoder architecture, and the authors propose to extend it by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
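The soft-search idea above replaces the single fixed-length vector with a fresh context vector per target word: score every source position against the current decoder state, normalize, and blend. The sketch below is an assumed simplification (a bilinear score with a random matrix `W`), not the paper's trained alignment model.

```python
import numpy as np

def soft_search_context(annotations, decoder_state, W):
    """One decoding step: score all source positions against the decoder
    state, softmax into alignment weights, and return the blended context."""
    scores = annotations @ (W @ decoder_state)   # one score per source position
    alpha = np.exp(scores - scores.max())        # stable softmax
    alpha /= alpha.sum()
    return alpha @ annotations, alpha            # context vector, weights

rng = np.random.default_rng(2)
annotations = rng.normal(size=(6, 4))  # 6 source positions, 4-dim annotations
W = rng.normal(size=(4, 4))            # illustrative scoring matrix
state = rng.normal(size=4)             # current decoder hidden state

context, alpha = soft_search_context(annotations, state, W)
```

Because `alpha` is recomputed at every target step, no single vector has to summarize the whole source sentence, which is exactly the bottleneck the TL;DR points at.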
Proceedings Article

Sequence to Sequence Learning with Neural Networks

TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
Proceedings Article (DOI)

Caffe: Convolutional Architecture for Fast Feature Embedding

TL;DR: Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.