
Showing papers by "K. P. Soman published in 2012"


Journal ArticleDOI
TL;DR: This paper demonstrates that the proposed preprocessor with a Shannon energy envelope (SEE) estimator is better able to detect R-peaks than other well-known methods in case of noisy or pathological signals.

325 citations


Proceedings ArticleDOI
09 Aug 2012
TL;DR: This paper presents a new image encryption scheme that employs both compressive sensing and the Arnold scrambling method, providing additional security for the data.
Abstract: This paper deals with a new image encryption scheme which employs both compressive sensing and the Arnold scrambling method. The compressed sensing (CS) paradigm unifies sensing and compression of sparse signals in a single linear measurement step. The compressed measurements are then scrambled using the Arnold transform, so the system provides additional security for the data.
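The Arnold scrambling step in the scheme above can be sketched in a few lines. This is an illustrative NumPy version, not the authors' implementation; the map matrix [[1,1],[1,2]] is one common form of the Arnold cat map, and the paper may use a different variant or iteration count:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod N to an N x N image."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_unscramble(img, iterations=1):
    """Invert the map: the inverse matrix of [[1,1],[1,2]] is [[2,-1],[-1,1]]."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        restored = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                restored[(2 * x - y) % n, (-x + y) % n] = out[x, y]
        out = restored
    return out

img = np.arange(64).reshape(8, 8)
scrambled = arnold_scramble(img, iterations=3)
restored = arnold_unscramble(scrambled, iterations=3)
```

Because the map matrix has determinant 1, it permutes pixel positions without loss, so applying the inverse map the same number of times restores the image exactly.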

27 citations


Journal ArticleDOI
TL;DR: This work is a short tutorial for new users in the field of software-defined radio, giving practical exposure to communication concepts such as basic signal generation, signal operations, multi-rate concepts, analog and digital modulation schemes, and finally multiplexing schemes, with the help of GNU Radio.

25 citations


Proceedings ArticleDOI
01 Dec 2012
TL;DR: In this paper, a new watermarking scheme is proposed which uses both compressed sensing and the Arnold scrambling method for efficient data compression and encryption. Compressive sensing aims at the reconstruction of a sparse signal from a small number of linear measurements; the compressed measurements are then encrypted using the Arnold transform.
Abstract: Watermarking is an information-hiding technique used for authentication and copyright protection. In this paper, a new watermarking scheme is proposed which uses both compressed sensing and the Arnold scrambling method for efficient data compression and encryption. Compressive sensing aims at the reconstruction of a sparse signal from a small number of linear measurements. The compressed measurements are then encrypted using the Arnold transform. The proposed encryption scheme is computationally more secure against the investigated attacks on digital multimedia signals.

20 citations


Journal Article
TL;DR: This literature survey is groundwork for understanding the different morphology and parser developments for Indian languages, and the various approaches used to develop morphological analyzer and generator tools and natural language parsers.
Abstract: Computational morphology and natural language parsing are two important and essential tasks required for a number of natural language processing applications, including machine translation. Developing fully fledged morphological analyzer and generator (MAG) tools or natural language parsers for highly agglutinative languages is a challenging task. The function of a morphological analyzer is to return all the morphemes and their grammatical categories associated with a particular word form. For a given root word and grammatical information, a morphological generator will generate the particular word form of that word. Parsing, on the other hand, is used to understand the syntax and semantics of natural language sentences confined to a grammar. This literature survey is groundwork for understanding the different morphology and parser developments for Indian languages. In addition, the paper also deals with the various approaches that are used to develop morphological analyzer and generator tools and natural language parsers.

19 citations


Journal ArticleDOI
TL;DR: It is proposed that the concept of fractals and their implementation in spreadsheets can be one of the starting points at high school level to induce students into computational thinking, and it is shown how to create various kinds of fractals in spreadsheets without any programming.
Abstract: For most primary and high school students, the computer is a game-playing tool. They might have been taught word processing, a presentation tool such as PowerPoint, and very basic spreadsheet usage for computing, but one of the essential skills required for survival in a modern technological society is "computational thinking", which combines the power of human intelligence and computing agents for solving complex problems facing society. It is found that this skill is not imparted to primary and high school students. In this context, the concept of computational thinking, the need for it, and the attempts being made worldwide to impart this skill at various levels of education are discussed in Part 1 (of four) of this article. It is proposed that the concept of fractals and their implementation in spreadsheets can be one of the starting points at high school level to induce students into computational thinking. It is also shown how to create various kinds of fractals in spreadsheets without using any programming. Different fractals require different computational strategies to implement in a spreadsheet. It is hypothesized that practice in the development of such strategies improves the 'abstraction' and computational-thinking capabilities of students.
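The kind of iterative fractal construction the article builds in spreadsheet cells can be mimicked in a few lines of code. The sketch below plays the chaos game for the Sierpinski triangle, where each spreadsheet row would hold one iteration step; the vertex coordinates, starting point, and point count are arbitrary illustrative choices, not taken from the article:

```python
import numpy as np

def chaos_game(n_points=5000, seed=0):
    """Sierpinski triangle by iteration: repeatedly jump halfway towards a
    randomly chosen vertex of the triangle."""
    rng = np.random.default_rng(seed)
    vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    pts = np.empty((n_points, 2))
    p = np.array([0.25, 0.25])             # any interior starting point works
    for i in range(n_points):
        v = vertices[rng.integers(3)]      # pick one of the three corners
        p = (p + v) / 2.0                  # the midpoint rule drives the fractal
        pts[i] = p
    return pts

pts = chaos_game()
```

Plotting `pts` as a scatter (or conditionally formatting the spreadsheet cells) reveals the Sierpinski triangle.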

17 citations


Journal ArticleDOI
TL;DR: The design and optimization of a parallel-coupled microstrip bandpass filter for Software Defined Telescope and simulation results reveal that the filter operation is optimum over the frequency range 1.41 GHz to 1.44 GHz.
Abstract: The design and optimization of a parallel-coupled microstrip bandpass filter for a Software Defined Telescope is presented in this paper. The simulation and optimization are done using ADS and Momentum. The filter is designed and optimized at a center frequency of 1.42 GHz. The filter is built on a relatively cheap FR-4 substrate with permittivity εr = 4.4 and low loss tangent. Simulation results reveal that the filter operation is optimum over the frequency range 1.41 GHz to 1.44 GHz; the 3 dB bandwidth is thus 30 MHz. The return loss is below -10 dB over the passband, and the insertion loss is -2.806 dB in the passband. The filter is almost matched to the characteristic impedance Z0 = 50 Ohms. It is also observed that the phase varies linearly with frequency. Once fabricated, the filter could be used at the radio receiver of the Software Radio Telescope to filter out terrestrial radio interference.

16 citations


Journal Article
TL;DR: It is shown that hyperspectral image classification based on sparse representation can be significantly improved by using an image enhancement step; this step leads to 97.53% classification accuracy, which is high compared with the accuracy obtained without the spatial preprocessing technique.
Abstract: In this paper, we show that hyperspectral image classification based on sparse representation can be significantly improved by using an image enhancement step. Spatial enhancement allows further analysis of hyperspectral imagery, as it reduces the intensity variations within the image. Perona-Malik, a partial differential equation based non-linear diffusion scheme, is used for the enhancement of the hyperspectral imagery prior to classification. The diffusion technique smooths the homogeneous areas of the imagery and thereby increases the separability of the classes. The diffusion scheme is applied individually to each band of the hyperspectral imagery and does not take into account the spectral relationship among different bands. Experiments are performed on the real hyperspectral dataset AVIRIS (Airborne Visible/IR Imaging Spectrometer) 1992 Indiana Indian Pines imagery. We compared the classification statistics of the hyperspectral imagery before and after the spatial preprocessing step in order to prove the effectiveness of the proposed method. The experimental results show that hyperspectral image classification using sparse representation together with the spatial enhancement step leads to 97.53% classification accuracy, which is high compared with the accuracy obtained without the spatial preprocessing technique.
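Perona-Malik diffusion, applied band by band as described, can be sketched as follows. This is a generic textbook discretization with an exponential edge-stopping function; the parameter values (kappa, step size, iteration count) are illustrative guesses, not those used in the paper:

```python
import numpy as np

def perona_malik(band, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik anisotropic diffusion on one image band.
    g(d) = exp(-(d/kappa)^2) suppresses diffusion across strong edges."""
    u = band.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (zero-flux at the borders)
        dn = np.zeros_like(u)
        dn[1:, :] = u[:-1, :] - u[1:, :]
        ds = np.zeros_like(u)
        ds[:-1, :] = u[1:, :] - u[:-1, :]
        de = np.zeros_like(u)
        de[:, :-1] = u[:, 1:] - u[:, :-1]
        dw = np.zeros_like(u)
        dw[:, 1:] = u[:, :-1] - u[:, 1:]
        # edge-stopping function weights each directional flux
        u = u + lam * (dn * np.exp(-(dn / kappa) ** 2) +
                       ds * np.exp(-(ds / kappa) ** 2) +
                       de * np.exp(-(de / kappa) ** 2) +
                       dw * np.exp(-(dw / kappa) ** 2))
    return u

# noisy piecewise-constant "band": diffusion smooths within regions
# while the large step at the class boundary blocks the flux
rng = np.random.default_rng(0)
band = np.zeros((32, 32))
band[:, 16:] = 1.0
noisy = band + 0.05 * rng.standard_normal(band.shape)
smoothed = perona_malik(noisy)
```

Small within-class gradients get a weight near 1 and are averaged away, while the unit step between classes gets a weight near exp(-100) and survives, which is exactly the separability-increasing effect the abstract describes.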

11 citations


Journal ArticleDOI
TL;DR: This last article in the series explains the L-system formalism, initially developed for modeling plant growth, and shows how to draw the resulting plots in a spreadsheet; it is the first attempt at drawing all of these fractals in a spreadsheet.
Abstract: String rewriting systems are another iterative process for creating fractals. The additional grammar formalism in L-systems allows us to build a richer variety of shapes. L-systems are an efficient way to encode complicated images. With L-systems, different replacements can be made in different parts of the picture. L-systems can be extended to three dimensions and have been used to make realistic forgeries of plants. They provide a good laboratory for learning about recursive processes and pattern recognition. In this last article in the series, we explain the L-system formalism initially developed for modeling plant growth. The concept can also be used for creating space-filling curves. The drawing of these plots in a spreadsheet is explained. It is the first attempt at drawing all of these fractals in a spreadsheet.
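The parallel string rewriting at the heart of an L-system takes only a few lines. The sketch below uses Lindenmayer's classic algae rules and a Koch-style rule as examples; neither is claimed to be the exact grammar used in the article:

```python
def l_system(axiom, rules, iterations):
    """Rewrite every symbol of the string in parallel, once per iteration.
    Symbols with no rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae model: A -> AB, B -> A
algae = l_system("A", {"A": "AB", "B": "A"}, 5)

# a Koch-style curve: F draws a segment, + and - turn the pen
koch = l_system("F", {"F": "F+F-F-F+F"}, 2)
```

Feeding the final string to a turtle interpreter (F = draw forward, +/- = turn) produces the plant shapes and space-filling curves the abstract mentions; string lengths in the algae model grow as Fibonacci numbers.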

11 citations


Journal ArticleDOI
TL;DR: This article shows how Newton's iterative method for finding the roots of a polynomial equation can be used to create fractals in spreadsheets, using Microsoft Excel's What-If Analysis tool to automate the repeated computation.
Abstract: This article shows how Newton's iterative method for finding the roots of a polynomial equation can be used to create fractals in spreadsheets. Newton's method has served as one of the most fruitful paradigms in the development of complex iteration theory. The process of iteration is impossible to carry out by hand but extremely easy to carry out with a computer. By doing such experiments, students get a feeling that they have the power to explore the uncharted wilderness of the dynamics of Newton's method. It gives mathematics an experimental component. It also illustrates a symbiotic relationship between technology and mathematics [1]: technology can be used to develop our intuition, and mathematics is used to prove that our intuition is correct. The article explores an innovative use of Microsoft Excel's What-If Analysis tool to automate repeated computation. The method employed can also be used for neural network training and data clustering [9] in Excel. A wide variety of fractals can be created by using different polynomial equations [2-7].
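The Newton-fractal iteration that the article carries out with Excel's What-If Analysis can be sketched directly. The example below uses p(z) = z³ − 1, a standard choice; the grid size, iteration count, and polynomial are illustrative, not necessarily those in the article:

```python
import numpy as np

# the three cube roots of unity, i.e. the roots of z^3 - 1
roots = np.array([1.0, -0.5 + np.sqrt(3) / 2 * 1j, -0.5 - np.sqrt(3) / 2 * 1j])

def newton_basins(n=200, iters=30):
    """Run Newton's iteration z <- z - p(z)/p'(z) from a grid of complex
    starting points and colour each point by the root it converges to."""
    # tiny offset keeps the exact point z = 0 (where p'(z) = 0) off the grid
    xs = np.linspace(-1.5, 1.5, n) + 1e-6
    z = xs[None, :] + 1j * xs[:, None]
    for _ in range(iters):
        z = z - (z**3 - 1) / (3 * z**2)    # Newton step for p(z) = z^3 - 1
    # index of the nearest root gives the basin colouring
    return np.argmin(np.abs(z[..., None] - roots), axis=-1)

basins = newton_basins()
```

Rendering `basins` as a coloured grid (or conditionally formatted spreadsheet cells) shows the three intertwined basins of attraction whose fractal boundary the article explores.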

11 citations


Journal ArticleDOI
TL;DR: This article shows how the Mandelbrot Set, the daddy of all fractals, is drawn in a spreadsheet; the methodology employed is the same as the one used for Newton's fractal.
Abstract: The Mandelbrot Set is the most complex object in mathematics, its admirers like to say. An eternity would not be enough time to see it all, its disks studded with prickly thorns, its spirals and filaments curling outward and around, bearing bulbous molecules that hang, infinitely variegated, like grapes on God's personal vine [1]. In this article we show how it is drawn in a spreadsheet. The methodology employed is the same as the one used for Newton's fractal. Since it is the daddy of all fractals, a separate article is devoted to it. The same principle is extended to draw fractals based on transcendental functions.
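The escape-time iteration behind the Mandelbrot Set, which the article carries out in spreadsheet cells, looks like this in code. The grid bounds, resolution, and iteration cap below are arbitrary illustrative choices:

```python
import numpy as np

def mandelbrot(n=200, max_iter=50):
    """Escape-time counts for z <- z^2 + c over a grid of c values.
    A point belongs to the set if |z| never exceeds 2."""
    re = np.linspace(-2.0, 0.8, n)
    im = np.linspace(-1.4, 1.4, n)
    c = re[None, :] + 1j * im[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2              # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] += 1
    return counts

counts = mandelbrot()
```

Colouring each cell by its count produces the familiar picture; interior points reach the iteration cap, while points outside escape after only a few steps.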

Journal ArticleDOI
TL;DR: The use of a recently introduced convex optimization method, the selective local/global segmentation (SLGS) algorithm, for simultaneous binarization and segmentation of human face images is investigated.
Abstract: Face segmentation plays an important role in applications such as human-computer interaction, video surveillance, biometric systems, and face recognition for purposes including authentication and authorization. The accuracy of a face classification system depends on the correctness of the segmentation. The robustness of the face classification system is determined by the segmentation algorithm used and its effectiveness in segmenting images of a similar kind. This paper explains level set based segmentation for human face images. The process is done in two stages. First, to get better accuracy, binarization of the image to be segmented is performed. Binarization is the process of setting pixel intensity values greater than some threshold to "on" and the rest to "off"; it converts the input image into a binary image, which is then used for segmentation. Second, segmentation is applied to eliminate the background portion from the binarized image obtained in the first stage. Conventional approaches use separate methods for binarization and segmentation. In this paper we investigate the use of a recently introduced convex optimization method, the selective local/global segmentation (SLGS) algorithm [16], for simultaneous binarization and segmentation. The approach is tested in MATLAB and satisfactory results were obtained.

Journal Article
TL;DR: This paper addresses the problem of reducing additive white Gaussian noise from a speech signal while preserving its intelligibility and quality, using a Savitzky-Golay smoothing filter based denoising method.
Abstract: Denoising is the process of removing unwanted sounds from a speech signal. In the presence of noise, it is difficult for the listener to understand the message of the speech signal. The presence of noise in a speech signal will also degrade the performance of various signal processing tasks like speech recognition, speaker recognition, and speaker verification. Many methods have been widely used to eliminate noise from speech signals, such as linear and nonlinear filtering, total variation denoising, and wavelet based denoising. This paper addresses the problem of reducing additive white Gaussian noise from a speech signal while preserving its intelligibility and quality. The method is based on the Savitzky-Golay smoothing filter, which is basically a low-pass filter that performs a polynomial regression on the signal values. The results of the S-G filter based denoising method are compared against two widely used enhancement methods, spectral subtraction and total variation denoising. Objective and subjective quality evaluations are performed for the three speech enhancement schemes. The results show that the S-G based method is ideal for the removal of additive white Gaussian noise from speech signals.
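A Savitzky-Golay filter is just a convolution whose kernel comes from a least-squares polynomial fit, which is easy to sketch from scratch. The window length and polynomial order below are illustrative defaults, not the settings evaluated in the paper:

```python
import numpy as np

def savgol_coeffs(window, order):
    """Least-squares coefficients that evaluate the fitted polynomial at the
    window centre; convolving the signal with these smooths it."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)   # columns 1, x, x^2, ...
    # first row of the pseudo-inverse gives the fitted value at x = 0
    return np.linalg.pinv(A)[0]

def savgol_smooth(signal, window=11, order=3):
    c = savgol_coeffs(window, order)
    return np.convolve(signal, c[::-1], mode="same")

# toy "speech" signal: a sine corrupted by additive white Gaussian noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
smoothed = savgol_smooth(noisy)
```

In practice one would use `scipy.signal.savgol_filter(noisy, 11, 3)`, which computes the same kernel; the from-scratch version just makes the polynomial-regression interpretation in the abstract explicit.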

01 Jan 2012
TL;DR: In this paper, a data driven approach is presented which is simple and efficient and does not require any rules or a morpheme dictionary; morphological generation is the reverse process of morphological analysis.
Abstract: Tamil is a morphologically rich language. Being an agglutinative language, most of its grammatical categories are expressed through suffixes. Tamil is a postpositional, inflectional language. A morphological generator takes a lemma and a morpho-lexical description as input and gives a word form as output; it is the reverse process of a morphological analyzer. The morphological generator system implemented here is a new data driven approach which is simple and efficient and does not require any rules or a morpheme dictionary. We have developed individual systems to handle nouns and verbs. Any automated machine translation system requires a morphological analyzer for the source language and a morphological generator for the target language. Using this morphological generator we have also developed a verb conjugator and a noun declension tool.

Journal ArticleDOI
TL;DR: This paper provides a study on character binarization and segmentation of Hindi document images, which are essential pre-processing steps in several applications like the digitization of historically relevant books.
Abstract: Hindi is the national language of India, spoken by more than 500 million people, and is the second most popular spoken language in the world, after Chinese. Digital document imaging is gaining popularity for applications serving libraries, government offices, banks, etc. In this paper, we provide a study on character binarization and segmentation of Hindi document images, which are essential pre-processing steps in several applications like the digitization of historically relevant books. In the case of historical documents, the document image may have stains, may not be readable, and the background may be non-uniform and faded because of aging. In those cases the task of binarization and segmentation becomes challenging, and it affects the overall accuracy of the system, so these processes should be carried out accurately and efficiently. Here we experiment with the level set method in combination with diffusion techniques for improving the accuracy of segmentation in the document processing task.
Keywords—Level Set Method, Binarization, Segmentation, Convex Optimization.

Journal ArticleDOI
TL;DR: In this paper, a hierarchical inpainting algorithm using wavelets is proposed, which tries to keep the mask size smaller while wavelets help in handling the high pass structure information and low pass texture information separately.
Abstract: Inpainting is the technique of reconstructing unknown or damaged portions of an image in a visually plausible way. An inpainting algorithm automatically fills the damaged region of an image using the information available in the undamaged region. Propagation of structure and texture information becomes a challenge as the size of the damaged area increases. In this paper, a hierarchical inpainting algorithm using wavelets is proposed. The hierarchical method tries to keep the mask size small, while wavelets help in handling the high-pass structure information and low-pass texture information separately. The performance of the proposed algorithm is tested using different factors. The results of our algorithm are compared with existing methods such as interpolation, diffusion, and exemplar techniques.

Journal ArticleDOI
TL;DR: This paper investigates the use of recently introduced convex optimization methods, the selective local/global segmentation (SLGS) algorithm and the fast global minimization (FGM) algorithm, for simultaneous binarization and segmentation in the OCR task.
Abstract: The most challenging task in OCR is getting the characters segmented properly. The accuracy of segmentation depends on the quality of the binarization technique applied. Binarization is the process of setting all intensity values greater than some threshold to "on". It converts the document image into a binary image by extracting the text and eliminating the background; this process also removes noise. The output of this process is used as input to the image segmentation process. Conventionally, separate methods are used for binarization and segmentation. In this paper we investigate the use of recently introduced convex optimization methods, the selective local/global segmentation (SLGS) algorithm (16) and the fast global minimization (FGM) algorithm (15), for simultaneous binarization and segmentation. Of the two methods we tried, one is found to be suitable for the OCR task: the FGM algorithm provides an average accuracy of 89.97% for Tamil character segmentation.
Keywords—Level Set, Active Contours, Binarization, Segmentation.

Journal ArticleDOI
01 Jan 2012
TL;DR: In this paper, a trapezoidal monopole antenna with a truncated ground plane is designed for the 2.5 GHz band; the ground plane is defected to achieve good impedance matching over the desired band, and the proposed antenna has a voltage standing wave ratio of less than 2 in the frequency band 2.4 GHz to 2.75 GHz.
Abstract: In this paper, we share our experience of designing a trapezoidal monopole antenna with a truncated ground plane for the 2.5 GHz band. A trapezoidal monopole antenna has an omnidirectional radiation pattern, and a point-to-multipoint WiFi system requires an omnidirectional common main antenna to distribute the signal to wireless devices and computers. The proposed antenna uses a relatively cheap FR-4 substrate with permittivity εr = 4.4 and loss tangent tanδ = 0.02. The size of the antenna is 12×12 cm². The ground plane is defected to achieve good impedance matching over the desired frequency band. A 50 Ohm microstrip transmission line acts as the feed for the proposed antenna. The antenna has a Voltage Standing Wave Ratio (VSWR) of less than 2 in the frequency band 2.4 GHz to 2.75 GHz. The fractional bandwidth is about 13.5% with respect to the center frequency of 2.52 GHz. The design and simulation results are presented in this paper.

Proceedings ArticleDOI
09 Aug 2012
TL;DR: This work focuses on explaining the OFDM system from a linear algebra point of view; an OFDM communication system is also simulated using Excel, which makes it easy for anyone to experiment with OFDM and understand the underlying principle.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) is one of the leading technologies ruling the communication field, but unfortunately it is shrouded in mystery. A good knowledge of linear algebra is required to appreciate the technology properly, so this work focuses on explaining the OFDM system from a linear algebra point of view. An OFDM communication system is also simulated using Excel, which makes it easy for anyone to experiment with OFDM and understand the underlying principle. The paper aims to provide a strong foundation on the concepts behind OFDM without the need for much background in electronics.
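The linear algebra view of OFDM reduces to the fact that the IFFT matrix is unitary, so the FFT at the receiver undoes it exactly. Below is a minimal round-trip sketch over an ideal channel; the QPSK constellation, subcarrier count, and cyclic-prefix length are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sub = 64                                  # number of subcarriers
cp = 16                                     # cyclic-prefix length in samples

# one OFDM symbol's worth of QPSK data, one complex symbol per subcarrier
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=n_sub)

tx = np.fft.ifft(symbols)                   # modulate onto orthogonal subcarriers
tx_cp = np.concatenate([tx[-cp:], tx])      # prepend the cyclic prefix

rx = tx_cp[cp:]                             # ideal channel: receiver drops the prefix
recovered = np.fft.fft(rx)                  # demodulate; inverts the IFFT exactly
```

Over a real multipath channel the cyclic prefix turns linear convolution into circular convolution, so each subcarrier sees only a complex scaling; that diagonalization is the linear-algebra insight the paper builds on.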

01 Jan 2012
TL;DR: This paper describes the application of interpolation, diffusion and exemplar based concepts to inpainting, gives the limitations of each technique, and suggests the choice of the appropriate technique for a given scenario.
Abstract: There are various real-world situations where a portion of an image is lost, damaged, or hidden by an unwanted object and needs restoration. Digital image inpainting is a technique which addresses this issue. Inpainting techniques are based on interpolation, diffusion, or exemplar-based concepts. This paper briefly describes the application of such concepts to inpainting and provides a detailed performance analysis. It is observed that the performance of these techniques varies when restoring the structure and texture present in an image. This paper gives the limitations of each technique and suggests the choice of the appropriate technique for a given scenario.
Keywords—digital image inpainting, exemplar based inpainting, TV inpainting, isotropic diffusion, anisotropic diffusion.

1. INTRODUCTION
A photographic picture is a two-dimensional image which can contain many objects. One may be interested in an object or scene that is hidden by another. For example, a beautiful picture may contain some letters written on it, a view of the Taj Mahal may be occluded, or a historic painting may be torn or damaged. Here the picture below the letters, the occluded portion of the Taj Mahal, and the damaged portion of the painting need to be restored. This problem is addressed under various headings like disocclusion, object removal, image inpainting, etc. Retrieving the information that is hidden or missing becomes difficult when there is no prior knowledge or reference image. Here the information surrounding the missing area and the other known areas has to be utilized for the restoration. Usually the user, in the form of a mask, specifies the unwanted foreground, the object to be removed, or the portion of the image to be retrieved.

The Clone Brush tool of Adobe Photoshop restores the image when a sample of the image to be placed in the missing area is selected by the user, whereas in inpainting the missing area is automatically filled in by the algorithm. Digital image inpainting is a kind of digital image processing that modifies a portion of the image based on the surrounding area in an undetectable way. The techniques rely mainly on diffusion and sampling processes. Inpainting has a wide variety of applications in the restoration of deteriorated photos, denoising images, creating special effects in movies, digital zoom-in, and edge-based image compression.

2. STATE OF ART
The inpainting problem can be considered as assigning gray levels to the missing area, called Ω, with the help of the gray levels in the known area Φ, as shown in Fig. 2.1. The boundary δΩ between the two plays a major role in deciding the intensities in Ω. All the algorithms are iterative and try to fill in δΩ first, moving inwards and successively altering δΩ each time. The algorithm stops when all the pixels in Ω have been assigned values. The restoration of structural information like edges, or textural information like repeating patterns, poses a major challenge for inpainting techniques. Based on the nature of filling, the algorithms can be classified into structure-based and texture-based methods.

Comparative Analysis of Structure and Texture based Image Inpainting Techniques. ISSN-2277-1956/V1N3-1062-1069

Figure 2.1. The digital image inpainting problem.

Structure-based methods use geodesic curves and the points of isophotes arriving at the boundary for inpainting. Isophotes are lines joining the same gray levels, and geodesic curves are lines following the shortest possible paths between two points. When used in its primitive form, this may result in disconnected objects.

This is illustrated in Fig. 2.2: while inpainting the black square in Fig. 2.2a, a horizontal bar is expected, but the algorithm produces two disconnected bars as in Fig. 2.2b. The mathematical models for deterministic and variational PDEs are explained in detail in [4] and [6]. A series of partial differential equations is used to extend isophotes into the missing area in [1], [2] and [11]. In [12] a convolution mask is used to extend the gray levels into the inpainting area. The curvatures are extended into the inpainting area in [5].

The texture-based methods mainly rely on texture synthesis, [3] and [9], which grows a new image outward from an initial seed. Before a pixel is synthesized, its neighbors are sampled. Then the whole image is queried to find a source pixel with similar neighbors. At this point, the source pixel is copied to the pixel to be synthesized in the missing area. This is called exemplar-based synthesis. Based on whether a pixel or a sub-window is used for sampling, it is further classified into pixel-based sampling and patch-based sampling. The patch size, matching criteria, and order of filling vary between algorithms. Exemplar-based inpainting is used in [7] and [15].

3. DIGITAL IMAGE INPAINTING TECHNIQUES
Digital image inpainting involves two major steps. The first step is the selection of the area to be inpainted; the second is the inpainting algorithm, which gives appropriate values to the selected area.

3.1 Inpainting area selection
The area to be inpainted is selected based on color, a region selected by the user, or a binary image specifying the missing area. Color-based selection is more flexible and can be used to specify the area irrespective of the shape, area, and number of regions. Instead of looking for the exact color value, color values close to it are also taken into account, because of quantization effects.

This method requires the missing area to be in a unique color different from the rest of the image. Alternatively, the user can select the missing area through a free-hand or polygon selection. This method is capable of selecting the missing area irrespective of its color and could be used predominantly on black-and-white images. However, the missing area cannot be precisely specified in this method, and it becomes tedious to select more than one area, as in the case of text imposed on the image. If the area to be inpainted remains constant across various images, or the template of the damage is known, the missing area is specified in the form of a binary image of the same size as the input image. This method is best suited to specifying black text imposed on black-and-white images. In practice, the missing area is selected using any image-manipulation software and given a different color, which is then used by the inpainting algorithm. The user-selected area is usually called the mask or the region to be inpainted.

3.2 Structure based inpainting
These methods are based on partial differential equations, which contribute to the structural information in an image. The differential equations which use the concepts of interpolation, diffusion, and total variational PDEs are discussed in this paper.

IJECSE, Volume 1, Number 3. S. Padmavathi and K. P. Soman.

3.2.1 Interpolation Based Inpainting
The simplest method uses the soap-film PDE, where δΩ supplies its boundary conditions. A set of linear equations is formed from the known values on δΩ and the unknown values of Ω in the four major directions, namely north, south, east, and west. Interpolation of the four neighbors is used to frame each equation, with the values on δΩ forming the right-hand side. The equations are solved to get the intensities of Ω.

3.2.2 Anisotropic Diffusion Based Inpainting
The inpainting problem is considered as diffusion of gray levels from the boundary δΩ into the unknown area Ω. Level-set theory is used to describe the diffusion boundaries at various stages. If the diffusion process does not depend on the direction or the presence of edges, it is called isotropic diffusion; the interpolation technique is isotropic in this sense. Anisotropic diffusion [13] is used to avoid blurring across edges. Equation 3.1 shows the anisotropic diffusion, where g represents a smooth function, K represents the curvature, ∇I represents the gradient of the image, and Φ represents the area other than Ω. The curvature is given by equation 3.2.
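The interpolation-based (soap-film) inpainting of Section 3.2.1 can be sketched by iterating the discrete Laplace equation on the masked region Ω while holding the known pixels fixed. This Jacobi-style sketch is illustrative, not the paper's implementation; the image, mask, and iteration count are made up for the example:

```python
import numpy as np

def laplace_inpaint(img, mask, n_iter=500):
    """Fill masked pixels by iterating the soap-film (Laplace) equation:
    each unknown pixel becomes the average of its four neighbours, with the
    known pixels acting as fixed boundary conditions."""
    u = img.astype(float).copy()
    u[mask] = 0.0                           # discard the damaged values
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]                 # only Ω is updated; δΩ stays fixed
    return u

# smooth horizontal ramp with a square hole Ω in the middle
img = np.tile(np.linspace(0, 1, 32), (32, 1))
mask = np.zeros(img.shape, dtype=bool)
mask[12:20, 12:20] = True
filled = laplace_inpaint(img, mask)
```

Because the test image is a linear ramp, the harmonic fill reproduces it exactly, which illustrates the method's strength on smooth regions; across edges, this isotropic scheme blurs, which is what motivates the anisotropic variant of Section 3.2.2.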

Proceedings Article
08 Oct 2012
TL;DR: A dependency parser is used to identify clause boundaries in a sentence; the dependency tag set, which contains 11 tags, is useful for identifying the boundary of a clause along with the subject and object information of the sentence.
Abstract: Clause boundary identification is a very important task in natural language processing. Identifying the clauses in a sentence becomes a tough task if the clauses are embedded inside other clauses. In our approach, we use a dependency parser to identify the boundary of each clause. The dependency tag set, which contains 11 tags, is useful for identifying the boundary of the clause along with the identification of the subject and object information of the sentence. The MALT parser is used to get the required information about the sentence.

Proceedings ArticleDOI
03 Apr 2012
TL;DR: In this paper, image denoising is formulated as an optimization problem in which a function with two terms is minimized: the first is the regularization term, some form of energy of the image, and the second is the data fidelity term, which measures the similarity between the original image and the processed image.
Abstract: The idea of this paper is to model image denoising using an approach based on partial differential equations (PDEs) that describes two-dimensional heat diffusion. The two-dimensional image function is taken to be harmonic when it can be obtained as the solution of the equation describing heat diffusion. To achieve this, image denoising is formulated as an optimization problem in which a function with two terms is to be minimized. The first term is called the regularization term, which is some form of energy of the image (like the Sobolev energy), and the second term is called the data fidelity term, which measures the similarity between the original image and the processed image. The two terms are combined using a control parameter whose value decides which term is minimized more. The image denoising problem can then be solved by a simple iterative equation derived using the gradient descent method.
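The iterative equation derived from gradient descent can be sketched directly: each step adds a Laplacian (diffusion) term from the Sobolev regularizer and subtracts a fidelity term that pulls the estimate back toward the noisy input. The control parameter lam, step size, and iteration count below are illustrative guesses, not the paper's settings:

```python
import numpy as np

def denoise(f, lam=1.0, dt=0.2, n_iter=100):
    """Gradient descent on E(u) = 0.5*||grad u||^2 + 0.5*lam*||u - f||^2,
    i.e. u <- u + dt * (laplacian(u) - lam * (u - f)).
    Larger lam favours fidelity to f; smaller lam favours smoothness."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * (lap - lam * (u - f))
    return u

# smooth test image plus white Gaussian noise
rng = np.random.default_rng(3)
clean = np.tile(np.sin(np.linspace(0, np.pi, 48)), (48, 1))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = denoise(noisy)
```

With dt = 0.2 and lam = 1 each update is a convex combination of the four neighbours and the noisy datum, so the explicit scheme stays stable while the diffusion term averages the noise away.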

Proceedings ArticleDOI
09 Aug 2012
TL;DR: The combined character segmentation algorithm, based on the active contour model and nonlinear diffusion techniques, provides promising results on scanned documents with characters of different font sizes and font styles, and under the various artifacts and background noise caused by the aging of the paper and by diffusion.
Abstract: In this paper, we present a combined character segmentation algorithm based on the active contour model and nonlinear diffusion techniques. The active contour model is used to segment printed characters. The coherence-enhancing diffusion technique is proposed to smooth out artifacts and background noise without destroying the edges. The performance of two character segmentation methods, i) the combined ACM-FGM and CED algorithm and ii) the ACM-FGM algorithm alone, has been validated on a large collection of printed documents in Hindi, Malayalam and Telugu. The combined algorithm achieves an average segmentation accuracy of 89.08%, whereas the ACM-FGM algorithm alone had an average accuracy of 52.63%. The total character segmentation time is also less than that of the ACM-FGM algorithm alone. Experiments show that the combined algorithm provides promising results on scanned documents with characters of different font sizes and font styles, and under the various artifacts and background noise caused by the aging of the paper and by diffusion.
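The noise-smoothing step can be illustrated with a simple nonlinear diffusion scheme. The paper uses coherence-enhancing (tensor-driven) diffusion; the sketch below shows the closely related Perona-Malik edge-stopping diffusion instead, which likewise removes background noise while preserving character edges. All parameter values are illustrative.

```python
# Perona-Malik edge-stopping diffusion (a simpler relative of the
# coherence-enhancing diffusion used in the paper): smooths flat
# regions while leaving strong edges almost untouched.
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")
        # finite differences towards the four neighbours
        dn = p[:-2, 1:-1] - u
        ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u
        dw = p[1:-1, :-2] - u
        # conductivity is small where the local gradient is large,
        # so diffusion stops at edges
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

With `kappa` set between the noise amplitude and the edge contrast, noise diffuses away while the character boundaries survive.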

Journal Article
TL;DR: The concept of convolution, which everybody implicitly uses when performing multiplication, is taken as a vehicle to develop exercises that enhance computational thinking, and the use of the spreadsheet as a tool for developing computational-thinking capabilities by integrating it with existing curricula is explored.
Abstract: Modern-day innovations in science and engineering are a direct outcome of the human capacity for abstract thinking, which creates effective computational models of problems that can then be solved efficiently by the number-crunching and massive data-handling capabilities of modern networked computers. The survival of any economy now depends on the innovating capacity of its citizens, so the capacity for computational thinking has become an essential skill for survival in the 21st century. This necessitates a fundamental change in our school curricula. Computational thinking needs to be introduced incrementally along with standard content, in a way that makes the standard content easier to learn and vice versa. When learners successfully combine disciplinary knowledge and computational methods, they develop their identity as computational thinkers. The need for trainers, training content and training methodology for imparting computational thinking has become a subject of discussion in many international forums. In this article the use of the spreadsheet as a tool for developing computational-thinking capabilities, by integrating it with existing curricula, is explored. The concept of convolution, which everybody implicitly uses when performing multiplication, is taken as a vehicle to develop exercises that enhance computational thinking. It is shown how convolution can be visualized and implemented, and a wide variety of computational experiments that students at various levels can carry out with the help of a spreadsheet are discussed.
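The link between multiplication and convolution mentioned above is easy to demonstrate: the digit-by-digit partial products of a long multiplication, before carries are propagated, are exactly the convolution of the two digit sequences. A small sketch (the example numbers are arbitrary):

```python
# Long multiplication as convolution: the digit-products of 12 x 13,
# before carries are propagated, are the convolution of [1, 2] and [1, 3].
import numpy as np

digits_a = [1, 2]          # 12
digits_b = [1, 3]          # 13
partial = np.convolve(digits_a, digits_b)   # [1, 5, 6]

def carry(ds):
    """Propagate carries through a most-significant-first digit list."""
    out, c = [], 0
    for d in reversed(ds):
        c, r = divmod(d + c, 10)
        out.append(r)
    while c:
        c, r = divmod(c, 10)
        out.append(r)
    return out[::-1]

product = int("".join(map(str, carry(list(partial)))))   # 156 == 12 * 13
```

The same column-wise sum-of-products is exactly what a spreadsheet exercise on convolution asks students to build by hand.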

Journal ArticleDOI
TL;DR: The paper implements four of the major spectrum sensing algorithms in MATLAB-Simulink and compares their performance.
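As an illustration of what such an algorithm looks like, the sketch below shows energy detection, one widely used spectrum sensing method. This is an assumption for illustration only: the TL;DR does not state which four algorithms the paper implements, and the paper works in MATLAB-Simulink rather than Python.

```python
# Energy detection, a common spectrum sensing algorithm (illustrative;
# not confirmed to be among the four implemented in the paper):
# declare the channel occupied if the average signal energy exceeds
# a threshold calibrated from the noise floor.
import numpy as np

def energy_detect(samples, threshold):
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold
```

In practice the threshold is chosen from the noise variance to meet a target false-alarm probability.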

Journal ArticleDOI
TL;DR: Hierarchical search-space refinement and hierarchical filling are proposed in this paper, which increase the accuracy and better handle the extra cost of multi-resolution processing.
Abstract: There are many real-world scenarios where a portion of an image is damaged or lost. Restoring such an image without prior knowledge or a reference image is a difficult task. Image inpainting is a method that focuses on reconstructing the damaged or missing portion of an image based on the information available from the undamaged areas of the same image. Existing methods fill the missing area from its boundary inwards; their performance varies when reconstructing structures versus textures, and many of them restrict the size of the area to be inpainted. In this paper exemplar-based inpainting is adopted in a hierarchical framework. Hierarchical search-space refinement and hierarchical filling are proposed, which increase the accuracy and better handle the extra cost of multi-resolution processing. The former tries to select an exemplar suitable at all resolution levels by restricting the search space from the lower resolution level. The latter fills the region at the lower resolution level, and the results are carried up to the higher levels. This makes the non-boundary pixels known at the higher resolution level, which in turn helps the search-space refinement while increasing accuracy.
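The exemplar-selection step at the heart of any exemplar-based inpainting scheme can be sketched as below. This shows only a single-level brute-force search over fully known patches; the hierarchical refinement proposed in the paper is not reproduced, and all names are illustrative.

```python
# Core exemplar-selection step of exemplar-based inpainting: among all
# candidate patches taken entirely from the undamaged area, pick the one
# whose pixels best match the known pixels of the target patch
# (sum of squared differences).  Single-level sketch only.
import numpy as np

def best_exemplar(image, mask, ty, tx, size):
    """mask is True where pixels are missing; (ty, tx) is the top-left
    corner of the target patch of the given size."""
    target = image[ty:ty + size, tx:tx + size]
    known = ~mask[ty:ty + size, tx:tx + size]
    best, best_cost = None, np.inf
    h, w = image.shape
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            if mask[y:y + size, x:x + size].any():
                continue   # exemplars must come from fully known regions
            cand = image[y:y + size, x:x + size]
            cost = np.sum((known * (cand - target)) ** 2)
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best
```

The hierarchical variant restricts this double loop to candidates surviving from the coarser level, which is where the cost saving comes from.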

Journal ArticleDOI
TL;DR: In this paper, a hierarchical method is proposed in which the area to be inpainted is reduced over multiple levels, and the Total Variation (TV) method is used to inpaint at each level.
Abstract: The art of recovering an image from damage in an undetectable way is known as inpainting. Manual inpainting is most often a very time-consuming process; digitizing the technique makes it automatic and faster. In this paper, after the user selects the regions to be reconstructed, the algorithm automatically reconstructs the lost regions with the help of the information surrounding them. Existing methods perform very well when the region to be reconstructed is very small, but fail to reconstruct properly as the area increases. This paper describes a hierarchical method in which the area to be inpainted is reduced over multiple levels, and the Total Variation (TV) method is used to inpaint at each level. The algorithm gives better performance when compared with other existing algorithms such as nearest-neighbor interpolation, inpainting through blurring and Sobolev inpainting.
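A single level of the TV inpainting step can be sketched as gradient descent on a smoothed total-variation energy, updating only the missing pixels. This is a generic illustration, not the paper's exact scheme; the hierarchy is omitted and all parameter values are invented.

```python
# Single-level Total Variation inpainting sketch: descend on the
# (eps-smoothed) TV energy, keeping the known pixels fixed.
import numpy as np

def tv_inpaint(img, mask, n_iter=300, dt=0.1, eps=1e-3):
    """Fill pixels where mask is True; known pixels stay untouched."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")
        ux = p[1:-1, 2:] - u                 # forward difference in x
        uy = p[2:, 1:-1] - u                 # forward difference in y
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        nx, ny = ux / norm, uy / norm
        # backward-difference divergence of the normalised gradient
        div = (nx - np.pad(nx, ((0, 0), (1, 0)))[:, :-1]
               + ny - np.pad(ny, ((1, 0), (0, 0)))[:-1, :])
        u[mask] += dt * div[mask]            # update only missing pixels
    return u
```

In the hierarchical scheme the same step runs first on a downsampled image, and its result initialises the finer level, which is why large holes become tractable.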

Journal ArticleDOI
TL;DR: This paper proposes a novel way of processing signals using a group of transformations within the framework of group theory; it is found that a signal can be processed at multiple resolutions, and that this can be extended to edge detection, denoising, face recognition, etc., by filtering the local features.

01 Jan 2012
TL;DR: An efficient and reliable method for implementing a Morphological Analyzer for Malayalam using a machine learning approach is presented; a morphological analyzer segments words into morphemes and analyzes word formation.
Abstract: An efficient and reliable method for implementing a Morphological Analyzer for Malayalam using a machine learning approach is presented here. A morphological analyzer segments words into morphemes and analyzes word formation. Morphemes are the smallest meaning-bearing units in a language. Morphological analysis is one of the techniques used in formal reading and writing. Rule-based approaches are generally used for building morphological analyzers. Their disadvantage is that if one rule fails, it affects all the rules that follow, since each rule works on the output of the previous one. The significance of the machine learning approach lies in the fact that the rules are learned automatically from data, using learning and classification algorithms to build models and make predictions. The results show that the system is very effective: after learning, it predicts the correct grammatical features even for words that are not in the training set.
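The learning idea can be illustrated with a toy suffix-based predictor: grammatical features are learned from labelled word endings, and unseen words are handled by backing off to shorter suffixes. The English toy data below is purely illustrative; the paper works on Malayalam morphemes and presumably uses richer features and classifiers.

```python
# Toy illustration of learning morphological features from data:
# count which label each word-ending predicts, and classify new words
# by their longest known suffix.  Data is invented English, not the
# paper's Malayalam training set.
from collections import Counter, defaultdict

def train(pairs, max_suffix=3):
    table = defaultdict(Counter)
    for word, label in pairs:
        for k in range(1, max_suffix + 1):
            table[word[-k:]][label] += 1
    return table

def predict(table, word, max_suffix=3):
    for k in range(max_suffix, 0, -1):   # longest suffix first
        s = word[-k:]
        if s in table:
            return table[s].most_common(1)[0][0]
    return None

model = train([("cats", "plural"), ("dogs", "plural"),
               ("cat", "singular"), ("dog", "singular")])
```

The back-off is what lets the model label words outside the training set, mirroring the abstract's claim that the system generalizes to unseen words.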

Book ChapterDOI
01 Jan 2012
TL;DR: A method is described for extracting a Texel from a given textured image using the K-means clustering algorithm and validating it against the entire image using the normalized gray-level co-occurrence matrix in the maximum-gradient direction.
Abstract: Identifying the smallest portion of an image that represents the entire image is a basic need for its efficient storage. Texture can be defined as a pattern that is repeated in a specific manner; the basic pattern that is repeated is called a Texel (texture element). This paper describes a method of extracting a Texel from a given textured image using the K-means clustering algorithm and validating it against the entire image. The number of gray levels in the image is first reduced using a linear transformation function. The image is then divided into sub-windows of a certain size, and these sub-windows are clustered using the K-means algorithm. Finally, a heuristic algorithm is applied to the cluster labels to identify the Texel, which may yield more than one Texel candidate. The best among them is then chosen based on its similarity to the overall image, calculated using the normalized gray-level co-occurrence matrix in the maximum-gradient direction. Experiments are conducted on various texture images for various block sizes, and the results are summarized.
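The sub-window clustering step can be sketched as follows: the image is cut into fixed-size blocks and the blocks are grouped with a minimal K-means. Block size, K, and the initialisation are illustrative choices, not the paper's; the subsequent heuristic Texel identification is not shown.

```python
# Sketch of the sub-window clustering step: each fixed-size block of
# the image becomes one feature vector, then a minimal K-means groups
# the blocks.  The resulting label grid exposes the repeating pattern.
import numpy as np

def block_labels(img, block=4, k=2, n_iter=20):
    h, w = img.shape
    blocks = (img[:h - h % block, :w - w % block]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block)
              .astype(float))
    # farthest-point initialisation keeps the sketch deterministic
    centers = [blocks[0]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(blocks - c, axis=1) for c in centers],
                   axis=0)
        centers.append(blocks[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        d = np.linalg.norm(blocks[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = blocks[labels == j].mean(axis=0)
    return labels.reshape(h // block, w // block)
```

On a periodic texture the label grid itself is periodic, which is what the heuristic Texel-identification step exploits.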