Proceedings ArticleDOI

Digitization of Historic Inscription Images using Cumulants based Simultaneous Blind Source Extraction

TL;DR: The proposed technique separates the text layer from historic inscription images by treating the problem as blind source separation, which estimates the independent components of a linear mixture of source signals by maximizing a contrast function based on higher-order cumulants.
Abstract: In this paper, a novel method for the enhancement and binarization of historic inscription images is presented. Inscription images generally show no distinction between the text layer and the background layer, owing to the absence of color differences, and contain highly correlated signals and noise. The proposed technique separates the text layer from historic inscription images by treating the problem as blind source separation, which estimates the independent components of a linear mixture of source signals by maximizing a contrast function based on higher-order cumulants. The results are further compared with existing ICA-based techniques such as NGFICA and FastICA.
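A minimal sketch of this idea, using FastICA (one of the baseline methods the abstract compares against) rather than the proposed cumulant-based simultaneous extraction, which is not available in standard libraries: the three color channels of the photograph are treated as observed linear mixtures, the sparsest separated component is taken as the text layer, and Otsu thresholding produces the binary output. The file names and the kurtosis heuristic for picking the text component are illustrative assumptions.

```python
# Minimal sketch: separate an inscription photograph into layers with FastICA,
# a baseline ICA method (not the paper's cumulant-based simultaneous extraction).
# File names, component-selection heuristic, and parameters are assumptions.
import cv2
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

img = cv2.imread("inscription.jpg")                  # hypothetical input, H x W x 3 (BGR)
h, w, _ = img.shape
X = img.reshape(-1, 3).astype(np.float64)            # each pixel: 3 mixed observations

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)                             # estimated source layers, one per column

# Heuristic: text layers tend to be sparse (super-Gaussian), so pick the
# component with the largest absolute kurtosis as the text layer.
text = S[:, np.argmax(np.abs(kurtosis(S, axis=0)))].reshape(h, w)

# Rescale to 8-bit and binarize with Otsu thresholding.
text_u8 = cv2.normalize(text, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, binary = cv2.threshold(text_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("text_layer.png", binary)
```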
Citations
Book ChapterDOI
11 Nov 2016
TL;DR: An analysis of the different methods used for the enhancement of ancient images degraded by low resolution, minimal intensity difference between text and background, show-through effects and uneven backgrounds.
Abstract: The article describes the most recent developments in the field of enhancement and digitization of ancient manuscripts and inscriptions. Digitization of ancient sources of information is essential for gaining insight into the rich culture of previous civilizations, which in turn requires a high rate of accuracy in word and character recognition. To improve the accuracy of an Optical Character Recognition (OCR) system, the degraded images must first be made compatible with it: the image has to be pre-processed with filtering techniques, segmented with thresholding methods, and then refined with post-processing operations. Digitizing ancient artefacts preserves the information contained in ancient manuscripts and can also improve the country's tourism by attracting more visitors. This article analyses the different methods used for the enhancement of ancient images degraded by low resolution, minimal intensity difference between text and background, show-through effects and uneven backgrounds. The techniques reviewed include ICA, NGFICA, cumulants-based ICA and a novel thresholding technique for text extraction.
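A minimal sketch of the generic pre-process / threshold / post-process pipeline described above, using standard OpenCV operations; the particular filters, kernel sizes, and file names are illustrative assumptions rather than the specific methods reviewed in the chapter.

```python
# Illustrative sketch of a pre-process / threshold / post-process pipeline
# for degraded document images. Filters, kernel sizes, and file names are
# assumptions, not the chapter's reviewed methods.
import cv2
import numpy as np

gray = cv2.imread("degraded_manuscript.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Pre-processing: suppress noise and flatten the uneven background illumination.
denoised = cv2.medianBlur(gray, 3)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
background = cv2.morphologyEx(denoised, cv2.MORPH_CLOSE, kernel)    # background estimate
flattened = cv2.divide(denoised, background, scale=255)

# Segmentation: adaptive thresholding handles the remaining local contrast variation.
binary = cv2.adaptiveThreshold(flattened, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 35, 15)

# Post-processing: remove isolated specks before handing the image to an OCR engine.
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((2, 2), np.uint8))
cv2.imwrite("binarized.png", cleaned)
```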

1 citation

Book ChapterDOI
01 Jan 2017
TL;DR: The study and analysis of epigraphy is important for understanding the past, as discussed by the authors; from around the third century to modern times, about 90,000 inscriptions have been discovered across different parts of India.
Abstract: The study and analysis of epigraphy is important for understanding the past. From around the third century to modern times, about 90,000 inscriptions have been discovered across different parts of India.
References
Journal ArticleDOI

37,017 citations


"Digitization of Historic Inscriptio..." refers methods in this paper

  • ...Keywords Blind Source Extraction, ICA, Inscription Images, Binarization...


01 Jan 2012
TL;DR: The standardization of the IC model is discussed; on the basis of n independent copies of x, the aim is to find an estimate of an unmixing matrix Γ such that Γx has independent components.
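For clarity, the IC model referenced in this summary can be written out explicitly; the following is a brief restatement in the TL;DR's notation, not material from the entry itself.

```latex
% The observed vector x is modelled as a linear mixture of a latent source
% vector s whose components are mutually independent:
%   x = A s, with A an unknown, invertible mixing matrix.
% Given n independent copies of x, the goal is to estimate an unmixing
% matrix \Gamma (ideally \Gamma \approx A^{-1}) so that \Gamma x has
% independent components.
\[
  x = A s, \qquad \hat{s} = \Gamma x, \qquad \Gamma \approx A^{-1}.
\]
```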

2,296 citations

Book
01 Sep 2002
TL;DR: This volume unifies and extends the theories of adaptive blind signal and image processing and provides practical and efficient algorithms for blind source separation, Independent, Principal, Minor Component Analysis, and Multichannel Blind Deconvolution (MBD) and Equalization.
Abstract: From the Publisher: With solid theoretical foundations and numerous potential applications, Blind Signal Processing (BSP) is one of the hottest emerging areas in Signal Processing. This volume unifies and extends the theories of adaptive blind signal and image processing and provides practical and efficient algorithms for blind source separation, Independent, Principal and Minor Component Analysis, and Multichannel Blind Deconvolution (MBD) and Equalization. Containing over 1400 references and mathematical expressions, Adaptive Blind Signal and Image Processing delivers an unprecedented collection of useful techniques for adaptive blind signal/image separation, extraction, decomposition and filtering of multi-variable signals and data.

  • Offers a broad coverage of blind signal processing techniques and algorithms both from a theoretical and practical point of view
  • Presents more than 50 simple algorithms that can be easily modified to suit the reader's specific real-world problems
  • Provides a guide to the fundamental mathematics of multi-input, multi-output and multi-sensory systems
  • Includes illustrative worked examples, computer simulations, tables, detailed graphs and conceptual models within self-contained chapters to assist self-study
  • Accompanying CD-ROM features an electronic, interactive version of the book with fully coloured figures and text; C and MATLAB user-friendly software packages are also provided (MATLAB is a registered trademark of The MathWorks, Inc.)

By providing a detailed introduction to BSP, as well as presenting new results and recent developments, this informative and inspiring work will appeal to researchers, postgraduate students, engineers and scientists working in biomedical engineering, communications, electronics, computer science, optimisation, finance, geophysics and neural networks.

1,578 citations


"Digitization of Historic Inscriptio..." refers methods in this paper

  • ...Keywords Blind Source Extraction, ICA, Inscription Images, Binarization...


Proceedings ArticleDOI
13 Jun 2010
TL;DR: A novel image operator is presented that seeks to find the value of stroke width for each image pixel, and its use on the task of text detection in natural images is demonstrated.
Abstract: We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.
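As a rough illustration of the stroke-width idea (the published Stroke Width Transform traces rays from edge pixels along the gradient direction; this simplified stand-in instead approximates per-pixel stroke width as twice the distance-transform value inside a binarized stroke mask), the sketch below flags connected components with near-constant stroke width as text candidates. The file name and thresholds are assumptions.

```python
# Simplified stand-in for the stroke-width idea, not the published SWT:
# approximate stroke width as twice the distance to the nearest background
# pixel, then keep components whose stroke width is roughly constant.
import cv2
import numpy as np

gray = cv2.imread("scene_text.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical input
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)               # distance to background
stroke_width = 2.0 * dist                                        # ~ local stroke width

n_labels, labels = cv2.connectedComponents((mask > 0).astype(np.uint8))
for i in range(1, n_labels):
    w = stroke_width[labels == i]
    # Text strokes have nearly constant width: low coefficient of variation.
    if w.size and np.std(w) / (np.mean(w) + 1e-6) < 0.5:
        print(f"component {i}: mean stroke width {np.mean(w):.1f} px -> text candidate")
```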

1,531 citations


"Digitization of Historic Inscriptio..." refers background in this paper

  • ...Many of these works are dependent on background light intensity normalization [14] and exploitation of edge information [6]....
