Proceedings ArticleDOI

Enhancement of inscription images

28 Mar 2013-pp 1-5
TL;DR: NGFICA is a suitable method for separating signals from a mixture of highly correlated signals; the proposed NGFICA-based enhancement improves the word and character recognition accuracies of the OCR system by 65.3% and 54.3%, respectively.
Abstract: This paper addresses the problems encountered during digitization and preservation of inscriptions, such as perspective distortion and minimal distinction between foreground and background. In general, inscriptions possess neither a standard size and shape nor a colour difference between foreground and background. Hence, existing methods such as variance-based extraction and FastICA-based analysis fail to extract text from these inscription images. Natural gradient Flexible ICA (NGFICA) is a suitable method for separating signals from a mixture of highly correlated signals, as it minimizes the dependency among the signals by considering the slope of the signal at each point. We propose an NGFICA-based enhancement of inscription images. The proposed method improves the word and character recognition accuracies of the OCR system by 65.3% (from 10.1% to 75.4%) and 54.3% (from 32.4% to 86.7%), respectively.
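The abstract names NGFICA but gives no formulation here; as a simplified stand-in, the sketch below applies the plain natural-gradient ICA update with a fixed tanh score to a toy two-signal mixture (the actual NGFICA additionally adapts the nonlinearity per source, which is what makes it "flexible"). All parameter values are illustrative.

```python
import numpy as np

def natural_gradient_ica(X, lr=0.02, iters=3000, seed=0):
    """Demix signals with the natural-gradient ICA update
    dW = lr * (I - E[phi(y) y^T]) W, using a fixed tanh score.
    X: (n_signals, n_samples) mixture; returns the demixing matrix W.
    NGFICA adapts the nonlinearity per source; this fixed-score
    version is only a simplified stand-in.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    for _ in range(iters):
        Y = W @ X
        phi = np.tanh(Y)  # score function suited to super-Gaussian sources
        W += lr * (np.eye(n) - phi @ Y.T / m) @ W
    return W

# toy demo: two independent Laplacian (super-Gaussian) sources, mixed
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 4000))
A = np.array([[0.9, 0.6], [0.5, 0.8]])  # unknown mixing matrix
X = A @ S
W = natural_gradient_ica(X)
Y = W @ X  # recovered sources, up to permutation and sign
```

Because the update multiplies the gradient by W on the right (the natural-gradient correction), convergence does not depend on the conditioning of the mixing matrix, which is why this family of algorithms copes with highly correlated mixtures.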
Citations

Book ChapterDOI
13 Sep 2017-
TL;DR: A new approach for enhancement of Epigraphic Document images using Retinex method is presented in this paper, which enhances the visual clarity of the degraded images by highlighting the foreground text and suppressing the background noise.
Abstract: Epigraphic Documents are the ancient handwritten text documents inscribed on stone, metals, wood and shell. They are the most authentic, solitary and unique documented evidences available for the study of ancient history. In recent years, Archeological Departments worldwide have taken up the massive initiative of converting their repositories of ancient Epigraphic Documents into digital libraries for the perennial purpose of their preservation and easy dissemination. The visual quality of the digitized Epigraphic Document images is poor, as they are captured from sources that have suffered various kinds of degradation such as aging, depositions and risky handling. Enhancement of these images is an essential prerequisite to make them suitable for automatic character recognition and machine translation. A new approach for enhancement of Epigraphic Document images using the Retinex method is presented in this paper. This method enhances the visual clarity of the degraded images by highlighting the foreground text and suppressing the background noise. The method has been tested on digitized estampages of ancient stone inscriptions of the 11th century written in old Kannada language. The results achieved are effective in terms of root-mean-square contrast and standard deviation.
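The chapter does not publish its exact Retinex variant; as a hedged illustration of the general idea, a generic single-scale Retinex (log of the image minus log of its Gaussian-smoothed illumination estimate) can be written in plain NumPy. The sigma value and padding mode are illustrative choices.

```python
import numpy as np

def single_scale_retinex(img, sigma=15.0):
    """Single-scale Retinex: log(image) - log(smoothed illumination).

    img: 2-D float array of positive intensities. The Gaussian surround
    is applied separably with reflective padding; sigma sets the scale.
    This is a generic sketch, not the chapter's specific variant.
    """
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()  # normalized 1-D Gaussian kernel

    def blur1d(a, axis):
        pad = [(0, 0), (0, 0)]
        pad[axis] = (radius, radius)
        a = np.pad(a, pad, mode="reflect")
        return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), axis, a)

    illum = blur1d(blur1d(img.astype(float), 0), 1)  # separable Gaussian blur
    return np.log1p(img) - np.log1p(illum)
```

Subtracting the smoothed illumination in the log domain flattens slowly varying background (stains, uneven lighting on the estampage) while leaving the high-frequency stroke detail, which is the effect the abstract describes.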

4 citations


Book ChapterDOI
01 Nov 2014-
TL;DR: A novel interactive technique for extraction of text characters from the images of stone inscriptions is introduced in this paper, designed particularly for on-site processing of inscription images acquired at various historic palaces, monuments, and temples.
Abstract: A novel interactive technique for extraction of text characters from the images of stone inscriptions is introduced in this paper. It is designed particularly for on-site processing of inscription images acquired at various historic palaces, monuments, and temples. Its underlying principle is made of several robust character-analytic elements like HoG features, vowel diacritics, and location-bounded scan lines. Since the process involves character spotting and extraction of the inscribed information to editable text, it would subsequently help the archaeologists for epigraphy, transliteration, and translation of rock inscriptions, particularly for the ones having high degradations, noise, and a variety of styles according to the mason origin and reign. The spotted characters can also be used to create a database for ancient script analysis and related archaeological work. We have tested our method on various stone inscriptions collected from some of the heritage sites of Karnataka, India, and the results are quite promising. An Android application of the proposed work is also developed to aid the epigraphers in the study of inscriptions using a tablet or a mobile phone.

2 citations


Cites methods from "Enhancement of inscription images"

  • ...In [5], enhancement of inscription images for recognizing the text using OCR is performed using natural gradient based flexible ICA (NGFICA)....


  • ...To identify the dating of a stone inscription by identifying its writer, other methodologies can be seen in [4, 5, 8, 9, 11]....



Proceedings ArticleDOI
01 Feb 2018-
TL;DR: The proposed model combines phase congruency with Gaussian-model-based background elimination using the expectation-maximization (EM) algorithm, together with preprocessing and binarization; the EM step removes the background noise completely while leaving the foreground characters untouched.
Abstract: Epigraphs are important sources for reshaping our culture and history. They have a remarkable importance to mankind. But modern epigraphists find it difficult to interpret the information in scripts, mainly because inscriptions have eroded over time due to natural calamities. Scripts of ancient times are largely unknown, and the character sets used have changed from one form to another over the centuries. Therefore, for reading ancient scripts the characters have to be extracted. In this paper, a model for enhancement and binarization of historical epigraphs is proposed. This model consists of phase congruency, Gaussian-model-based background elimination using the expectation-maximization (EM) algorithm, preprocessing, and binarization. In binarization, phase-based features are used with specialised filters. Adaptive Gaussian filters are used to smoothen the output images. A weighted mean angle is calculated to differentiate the foreground from the background. The EM algorithm removes the background noise completely while the foreground characters are untouched. The proposed method is tested on different datasets of inscriptions and epigraphs, and the obtained results are compared with existing classical algorithms.
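As a rough illustration of the EM-based background elimination step (not the authors' exact formulation, which also involves phase congruency and a weighted mean angle), a two-component 1-D Gaussian mixture can be fitted to pixel intensities so that one mode captures the background and the other the foreground strokes:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Fit a 2-component 1-D Gaussian mixture to the samples x with EM.

    Returns (means, stds, weights, responsibilities). Pixels can then be
    assigned to the background or foreground component via the
    responsibilities. Generic sketch, not the paper's exact model.
    """
    mu = np.percentile(x, [25, 75]).astype(float)  # crude initialization
    sd = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = (w * np.exp(-(x[:, None] - mu) ** 2 / (2 * sd ** 2))
               / (sd * np.sqrt(2 * np.pi)))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and stds from responsibilities
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return mu, sd, w, r
```

On a degraded epigraph image, thresholding the responsibilities of the brighter (background) component removes the background while the darker stroke pixels survive, which matches the behaviour the abstract claims for the EM stage.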

2 citations


Cites methods from "Enhancement of inscription images"

  • ...In [3], author worked on the extraction of foreground object from the palm leaf using binarization method based on clustering traditional threshold computation....



Proceedings ArticleDOI
08 Sep 2014-
TL;DR: The proposed work describes the difficulties encountered during inscription digitization and preservation, including the minimal distinction between foreground and background, and proposes a methodology that enhances the words and recognizes the characters.
Abstract: The proposed work describes the difficulties encountered during inscription digitization and preservation, including the trifling dissimilarity between foreground and background. Inscriptions generally retain neither a standard size nor a standard shape, and often show no colour discrepancy between foreground and background. The prevailing technique extracts text from inscriptions using the NGFICA method. Our method enhances the words and recognizes the individual characters. With the proposed methodology, the enhanced inscription is recognized effortlessly.

1 citation


Book ChapterDOI
01 Jan 2017-
Abstract: The study and analysis of epigraphy is important for knowing about the past. From around the third century to modern times, about 90,000 inscriptions have been discovered in different parts of India.

References

01 Jan 2012-
TL;DR: The standardization of the IC model is discussed, and on the basis of n independent copies of x, the aim is to find an estimate of an unmixing matrix Γ such that Γx has independent components.

2,296 citations


Additional excerpts

  • ...Natural gradient based independent component analysis learning algorithm with flexible nonlinearity as described in [12] [13] gives better results than other algorithms as it is more efficient in minimizing dependence among correlated signals....



Proceedings Article
27 Nov 1995-
TL;DR: A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals and has an equivariant property and is easily implemented on a neural network like model.
Abstract: A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of sources. The Gram-Charlier expansion, instead of the Edgeworth expansion, is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm, which has an equivariant property and is easily implemented on a neural-network-like model. The validity of the new learning algorithm is verified by computer simulations.
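For reference, the on-line rule derived in this paper takes the natural-gradient form (with φ the score function obtained from the Gram-Charlier expansion and η the learning rate):

```latex
\Delta W = \eta \left( I - \varphi(\mathbf{y})\,\mathbf{y}^{\top} \right) W,
\qquad \mathbf{y} = W\mathbf{x}
```

The right-multiplication by W is the natural-gradient correction that gives the algorithm its equivariant property: the dynamics of the combined system WA do not depend on the unknown mixing matrix A.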

2,107 citations


"Enhancement of inscription images" refers background in this paper

  • ...The NGFICA [14] can be explained in the simplest possible way as follows....



Journal Article
TL;DR: Several techniques for edge detection in image processing are compared, using various well-known measuring metrics applied to standard images.
Abstract: Edge detection is one of the most commonly used operations in image analysis, and there are probably more algorithms in the literature for enhancing and detecting edges than for any other single subject. The reason for this is that edges form the outline of an object. An edge is the boundary between an object and the background, and indicates the boundary between overlapping objects. This means that if the edges in an image can be identified accurately, all of the objects can be located and basic properties such as area, perimeter, and shape can be measured. Since computer vision involves the identification and classification of objects in an image, edge detection is an essential tool. In this paper, we have compared several techniques for edge detection in image processing. We consider various well-known measuring metrics used in image processing, applied to standard images, in this comparison.
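The survey's scope can be made concrete with one of the classical detectors it covers. Below is a minimal Sobel gradient-magnitude sketch in plain NumPy; the kernels are the standard Sobel masks, while the border handling (reflective padding) is an illustrative choice.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via the Sobel operator, one of the classical
    detectors compared in edge-detection surveys. img: 2-D array;
    returns an array of the same shape (reflective border padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # d/dx mask
    ky = kx.T                                                   # d/dy mask
    p = np.pad(img.astype(float), 1, mode="reflect")
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(3):
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)  # per-pixel gradient magnitude
```

High values in the returned map mark the object outlines the abstract describes; the metrics the paper compares (e.g. on standard test images) score how well such maps align with ground-truth edges.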

257 citations


"Enhancement of inscription images" refers background in this paper

  • ...Simple edge-based approaches [9] are also considered useful to identify regions with high edge density and strength....



Proceedings ArticleDOI
10 Dec 2002-
TL;DR: An algorithm is presented that localizes artificial text in images and videos using a measure of accumulated gradients and morphological post-processing; the quality of the localized text is improved by robust multiple-frame integration.
Abstract: The systems currently available for content-based image and video retrieval work without semantic knowledge, i.e. they use image processing methods to extract low-level features of the data. The similarity obtained by these approaches does not always correspond to the similarity a human user would expect. A way to include more semantic knowledge in the indexing process is to use the text included in the images and video sequences. It is rich in information but easy to use, e.g. by keyword-based queries. In this paper we present an algorithm to localize artificial text in images and videos using a measure of accumulated gradients and morphological post-processing to detect the text. The quality of the localized text is improved by robust multiple-frame integration. A new technique for the binarization of the text boxes is proposed. Finally, detection and OCR results for a commercial OCR are presented.
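The accumulated-gradients idea can be sketched roughly as follows: text regions accumulate strong horizontal gradients, so summing gradient energy over local windows and thresholding yields candidate text regions. The window size and threshold factor below are illustrative guesses, not the paper's parameters, and the morphological post-processing and multiple-frame integration stages are omitted.

```python
import numpy as np

def text_candidate_mask(img, win=8, k=2.0):
    """Crude text localization in the spirit of accumulated gradients.

    img: 2-D intensity array. Returns a boolean mask of windows whose
    accumulated horizontal-gradient energy exceeds k times the mean.
    """
    # horizontal gradient magnitude (text strokes produce dense columns of it)
    g = np.abs(np.diff(img.astype(float), axis=1))
    g = np.pad(g, ((0, 0), (0, 1)))
    # integral image so each window sum is four lookups
    c = np.pad(g.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    H, W = img.shape
    acc = np.zeros_like(g)
    for r in range(H - win):
        for col in range(W - win):
            acc[r, col] = (c[r + win, col + win] - c[r, col + win]
                           - c[r + win, col] + c[r, col])
    return acc > k * acc.mean()
```

A real pipeline would follow this with morphological opening/closing to merge strokes into text boxes, as the abstract describes.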

249 citations


"Enhancement of inscription images" refers methods in this paper

  • ...Several methods have been proposed for detection of text, localization and extraction of text from images of inscriptions [1] [2]....



Journal ArticleDOI
01 Aug 2000-
TL;DR: This paper addresses an independent component analysis (ICA) learning algorithm with flexible nonlinearity that is able to separate instantaneous mixtures of sub- and super-Gaussian source signals and employs the parameterized generalized Gaussian density model for hypothesized source distributions.
Abstract: This paper addresses an independent component analysis (ICA) learning algorithm with flexible nonlinearity, so named as flexible ICA, that is able to separate instantaneous mixtures of sub- and super-Gaussian source signals. In the framework of natural Riemannian gradient, we employ the parameterized generalized Gaussian density model for hypothesized source distributions. The nonlinear function in the flexible ICA algorithm is controlled by the Gaussian exponent according to the estimated kurtosis of demixing filter output. Computer simulation results and performance comparison with existing methods are presented.
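Schematically, the "flexible nonlinearity" comes from modelling each hypothesized source with a generalized Gaussian density, whose exponent α (chosen from the estimated kurtosis of the demixing-filter output) determines the score function used in the update:

```latex
p(y) \propto \exp\!\left( - \left| \frac{y}{\sigma} \right|^{\alpha} \right),
\qquad \varphi(y) = |y|^{\alpha - 1}\,\operatorname{sign}(y)
```

Small α (heavy tails) suits super-Gaussian sources and large α suits sub-Gaussian ones, which is how a single algorithm separates instantaneous mixtures of both kinds.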

183 citations


Performance Metrics
No. of citations received by the Paper in previous years

Year  Citations
2021  1
2018  1
2017  2
2014  2