Open Access Journal ArticleDOI

NGFICA Based Digitization of Historic Inscription Images

TLDR
The proposed NGFICA-based enhancement improves the word and character recognition accuracies of the OCR system by 65.3% and 54.3%, respectively; NGFICA is well suited to separating signals from a mixture of highly correlated signals.
Abstract
This paper addresses the problems encountered during digitization and preservation of inscriptions, such as perspective distortion and minimal distinction between foreground and background. In general, inscriptions possess neither a standard size and shape nor a colour difference between the foreground and background. Hence, existing methods such as variance-based extraction and FastICA-based analysis fail to extract text from these inscription images. Natural Gradient Flexible ICA (NGFICA) is well suited to separating signals from a mixture of highly correlated signals, as it minimizes the dependency among the signals by considering the slope of the signal at each point. We propose an NGFICA-based enhancement of inscription images. The proposed method improves word and character recognition accuracies of the OCR system by 65.3% (from 10.1% to 75.4%) and 54.3% (from 32.4% to 86.7%), respectively.
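The abstract describes the enhancement only at this level of detail. As a rough illustration of the underlying idea, the sketch below applies a natural-gradient ICA update to the colour channels of an inscription image and keeps the separated component with the largest variance as the candidate text layer; the file names, learning rate, iteration count, fixed tanh score function and variance-based component selection are illustrative assumptions, not details taken from the paper.

import numpy as np
from PIL import Image

def natural_gradient_ica(X, n_iter=200, lr=0.01):
    # X: mixed observations of shape (n_channels, n_samples)
    n = X.shape[0]
    X = X - X.mean(axis=1, keepdims=True)               # centre each mixture
    W = np.eye(n)                                        # unmixing matrix estimate
    for _ in range(n_iter):
        Y = W @ X                                        # current source estimates
        phi = np.tanh(Y)                                 # score function (assumed, fixed)
        # Natural-gradient update: W <- W + lr * (I - E[phi(y) y^T]) W
        W += lr * (np.eye(n) - (phi @ Y.T) / Y.shape[1]) @ W
    return W @ X

# Treat the three RGB channels as mixed observations (an assumption for
# illustration) and separate them into statistically independent layers.
img = np.asarray(Image.open("inscription.jpg").convert("RGB"), dtype=np.float64) / 255.0
h, w, _ = img.shape
sources = natural_gradient_ica(img.reshape(-1, 3).T)    # shape (3, h*w)
# Keep the highest-variance component as the candidate text layer and rescale.
text = sources[np.argmax(sources.var(axis=1))].reshape(h, w)
text = (text - text.min()) / (text.max() - text.min() + 1e-12)
Image.fromarray((text * 255).astype(np.uint8)).save("text_layer.png")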



Citations
Book ChapterDOI

Enhancement and Retrieval of Historic Inscription Images

TL;DR: The text layer is separated from the non-text layer using the proposed cumulants-based Blind Source Extraction method, and the enhanced images are stored in a digital library with their corresponding historic information; these images are then retrieved from the database using image search based on the Bag-of-Words (BoW) method.
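As a rough illustration of the Bag-of-Words retrieval step mentioned in this summary (not the authors' implementation; the ORB features, vocabulary size of 64 and file names are assumptions):

import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create()

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return orb.detectAndCompute(img, None)[1]            # local ORB descriptors

# Hypothetical digital library of enhanced inscription images.
database = ["inscription_001.png", "inscription_002.png"]
all_desc = np.vstack([descriptors(p) for p in database]).astype(np.float32)
vocab = KMeans(n_clusters=64, n_init=10).fit(all_desc)   # visual-word vocabulary

def bow_histogram(path):
    words = vocab.predict(descriptors(path).astype(np.float32))
    hist = np.bincount(words, minlength=64).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-12)          # L2-normalised BoW vector

# Rank database images by cosine similarity to the query histogram.
query = bow_histogram("query.png")
ranked = sorted(database, key=lambda p: -float(query @ bow_histogram(p)))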
Journal ArticleDOI

Texture Detection for Letter Carving Segmentation of Ancient Copper Inscriptions

TL;DR: Segmentation of letters on ancient copper inscriptions using the proposed texture-detection method achieves an average accuracy of 90%, making the method suitable for letter carving segmentation of such inscriptions.
Proceedings ArticleDOI

Binarization of stone inscripted documents

TL;DR: A new technique for converting images of inscribed stone to binary form is presented and compared with existing binarization algorithms; the results of the proposed method are better than those of the competing algorithms.
Journal ArticleDOI

Transformation of the Lontar Babad Lombok Toward Digitization Based on Natural Gradient Flexible (NGF)

TL;DR: The Sasak people, who live on the island of Lombok in West Nusa Tenggara, have a tradition of writing on dried lontar (Borassus flabellifer) leaves; one such manuscript is the Lontar Babad Lombok. Over time these manuscripts become brittle and break easily, and therefore require preservation.
Proceedings ArticleDOI

Enhancement and Segmentation of Historical Records

TL;DR: This paper addresses pre-processing and segmentation of ancient scripts, as an initial step to automate the task of an epigraphist in reading and deciphering inscriptions.
References
Book

Independent Component Analysis

TL;DR: Independent component analysis is presented as a statistical generative model closely related to sparse coding: it gives a proper probabilistic formulation of the ideas underpinning sparse coding and can be interpreted as providing a Bayesian prior.
Journal ArticleDOI

Independent Component Analysis

Seungjin Choi, Applied Mathematical Sciences
TL;DR: The standardization of the IC model is discussed; on the basis of n independent copies of x, the aim is to find an estimate of an unmixing matrix Γ such that Γx has independent components.
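For context, the linear IC model this summary refers to is conventionally written as follows (a standard formulation, not quoted from the article):

\[
\mathbf{x} = A\,\mathbf{s}, \qquad \hat{\mathbf{s}} = \Gamma\,\mathbf{x},
\]

where the observed mixture \(\mathbf{x}\) is generated from independent sources \(\mathbf{s}\) by an unknown mixing matrix \(A\), and the unmixing matrix \(\Gamma\) is estimated from \(n\) independent copies of \(\mathbf{x}\) so that the components of \(\Gamma\mathbf{x}\) are approximately independent, i.e. \(\Gamma \approx A^{-1}\) up to permutation and scaling of its rows.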
Proceedings Article

A New Learning Algorithm for Blind Signal Separation

TL;DR: A new on-line learning algorithm that minimizes the statistical dependency among outputs is derived for blind separation of mixed signals; it has an equivariant property and is easily implemented on a neural-network-like model.
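To the best of our knowledge, the on-line rule at the heart of this algorithm is the natural-gradient update (stated here for context rather than quoted from the paper):

\[
\mathbf{y} = W\mathbf{x}, \qquad \Delta W = \eta\,\bigl(I - \varphi(\mathbf{y})\,\mathbf{y}^{\mathsf{T}}\bigr)\,W,
\]

where \(\varphi\) is a component-wise nonlinearity chosen to match the source statistics. The trailing multiplication by \(W\) is what gives the rule its equivariant property: convergence behaviour does not depend on the conditioning of the unknown mixing matrix.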
Proceedings ArticleDOI

Text localization, enhancement and binarization in multimedia documents

TL;DR: An algorithm is presented that localizes artificial text in images and videos using a measure of accumulated gradients and morphological post-processing; the quality of the localized text is then improved by robust multiple-frame integration.
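A rough sketch of the accumulated-gradient idea, using OpenCV 4.x; the window sizes, kernel shape and file name below are illustrative assumptions, not the authors' parameters:

import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# Text regions are rich in vertical strokes, so the horizontal gradient
# magnitude |dI/dx| is high there; accumulate it over a local window.
gx = np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))
acc = cv2.boxFilter(gx, -1, (15, 3))                     # local accumulation window
acc8 = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(acc8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Morphological closing merges character responses into word/line blobs.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 3))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]          # candidate text boxes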