Journal ArticleDOI

An attention-based network for serial number recognition on banknotes

TLDR
Zhang et al. propose an attention-based sequence model that recognizes serial number characters from the rectified image holistically, rather than segmenting and recognizing individual characters.
Abstract
The serial number recognition (SNR) on banknotes is essential for currency circulation. The performance of existing SNR methods is significantly influenced by character segmentation, which is challenging due to uneven illumination and complex backgrounds. In this paper, we apply deep learning techniques to SNR by proposing an attention-based network, which can be trained end to end to avoid the problem of character segmentation. The proposed framework contains two parts: rectification and recognition. First, the rectification network, which can be trained in a weakly supervised manner without additional manual annotations, is built to automatically rectify tilted and loosely-bounded images and reduce the difficulty of recognition. Then, the recognition network, an attention-based sequence model, recognizes serial number characters from the rectified image holistically, rather than segmenting and recognizing individual characters. To address the problem of complex textures on banknotes, we integrate deformable convolution into the recognition network, which adaptively focuses on the character regions by using flexible receptive fields to accurately extract optimal character features while ignoring redundant background information. Extensive experiments conducted on CNY, KRW, EUR, and JPY banknotes demonstrate that the proposed method achieves higher accuracy than existing methods.
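As a rough illustration of the deformable-convolution idea in the abstract, the PyTorch sketch below shows a conv block whose 3x3 sampling grid is predicted per pixel with torchvision's DeformConv2d, so the receptive field can bend toward character strokes and away from the banknote's textured background. The block name and channel sizes are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch, not the paper's code: a deformable conv block
# that could replace a standard conv in the recognition encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        # A plain conv predicts per-pixel (x, y) offsets for every tap
        # of the 3x3 kernel: 2 * 3 * 3 = 18 offset channels.
        self.offset = nn.Conv2d(c_in, 18, kernel_size=3, padding=1)
        self.conv = DeformConv2d(c_in, c_out, kernel_size=3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x, self.offset(x)))

feat = torch.randn(4, 64, 8, 24)          # CNN features of a rectified crop
print(DeformBlock(64, 128)(feat).shape)   # torch.Size([4, 128, 8, 24])
```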


Citations
Journal ArticleDOI

Parallel Recurrent Module with Inter-layer Attention for Capturing Long-range Feature Relationships

TL;DR: This work proposes a novel module, referred to as a Parallel Recurrent Module with Inter-layer Attention (PI module), which exhibits several unique characteristics, including the ability to memorize information from earlier layers and to mitigate gradient vanishing, issues not addressed by existing attention modules.
References
Proceedings ArticleDOI

Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
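The encoder-decoder scheme this TL;DR describes can be sketched in a few lines of PyTorch; the dimensions and vocabulary sizes below are illustrative assumptions, not the paper's setup.

```python
# Minimal RNN encoder-decoder sketch: the encoder compresses the source
# into a fixed-size state, on which the decoder is conditioned.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        _, h = self.encoder(self.src_emb(src))      # source summary state
        out, _ = self.decoder(self.tgt_emb(tgt), h) # teacher forcing
        return self.proj(out)                       # per-step logits

# Joint training maximizes the conditional log-probability of the
# target sequence given the source, via cross-entropy on the logits.
src = torch.randint(0, 100, (8, 12))
tgt = torch.randint(0, 90, (8, 10))
logits = EncoderDecoder(100, 90)(src, tgt[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 90), tgt[:, 1:].reshape(-1))
```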
Posted Content

PyTorch: An Imperative Style, High-Performance Deep Learning Library

TL;DR: PyTorch is a machine learning library that provides an imperative and Pythonic programming style that makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs.
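A toy snippet illustrating the imperative, define-by-run style the TL;DR refers to (illustrative only):

```python
import torch

# The graph is built as ordinary Python executes, so standard tools
# (print, pdb, exceptions) work on intermediate values.
x = torch.randn(4, 3, requires_grad=True)
y = x.relu()
if y.sum() > 0:          # plain Python control flow inside the model
    y = y * 2
print(y.shape)           # inspect intermediates like any Python object
y.sum().backward()       # gradients via autograd
print(x.grad.shape)
```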
Journal ArticleDOI

Bidirectional recurrent neural networks

TL;DR: It is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution.
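In PyTorch, the bidirectional structure amounts to a single flag: the sequence is read left-to-right and right-to-left, and each timestep's output concatenates both directions, so it is conditioned on the full input context. A minimal sketch (sizes are illustrative):

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=32, hidden_size=64,
              batch_first=True, bidirectional=True)
x = torch.randn(8, 20, 32)   # (batch, time, features)
out, _ = rnn(x)
print(out.shape)             # torch.Size([8, 20, 128]) = 2 directions * 64
```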
Proceedings Article

Spatial transformer networks

TL;DR: This work introduces a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network, and can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps.
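A spatial transformer of this kind underlies weakly supervised rectification networks like the one in the main paper. A minimal affine version can be written with PyTorch's affine_grid/grid_sample; the localization-network layout below is an assumption for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """Minimal affine spatial transformer: a small localization net
    predicts a 2x3 transform, which resamples the input feature map."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 6))
        # Initialize to the identity transform so training starts stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

x = torch.randn(4, 1, 32, 96)   # e.g. tilted serial-number crops
print(STN()(x).shape)           # same shape, spatially transformed
```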
Proceedings ArticleDOI

Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequence data directly, removing the need for pre-segmented training data and for external post-processing of the outputs.
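CTC is the segmentation-free alternative to the attention decoding used in the main paper: it marginalizes over all alignments between per-frame predictions and the unsegmented label sequence. A minimal sketch with PyTorch's built-in loss (shapes and class counts are illustrative):

```python
import torch
import torch.nn.functional as F

T, B, C = 30, 4, 11   # frames, batch, classes (class 0 is the blank)
log_probs = torch.randn(T, B, C).log_softmax(2).detach().requires_grad_()
targets = torch.randint(1, C, (B, 8))            # 8-character labels
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 8, dtype=torch.long)
loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                  blank=0)
loss.backward()       # gradients flow without any character segmentation
```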