Other affiliations: Madras Institute of Technology
Bio: P. Anandhakumar is an academic researcher from Anna University. The author has contributed to research in topics: Image segmentation & Cloud computing. The author has an h-index of 6, has co-authored 42 publications receiving 129 citations. Previous affiliations of P. Anandhakumar include Madras Institute of Technology.
15 Jul 2011
TL;DR: The FFT-based EM-GMM algorithm improves classification accuracy because it accounts for the spatial correlation of neighboring pixels and performs the segmentation in the Fourier domain instead of the spatial domain.
Abstract: This paper proposes MR image segmentation based on a Fast Fourier Transform (FFT) based Expectation-Maximization Gaussian Mixture Model (EM-GMM) algorithm. A standard GMM assumes no spatial correlation when classifying tissue types and describes each tissue class by a single Gaussian distribution, but these assumptions lead to poor performance: the model fails to exploit the strong spatial correlation between neighboring pixels. The FFT-based EM-GMM algorithm improves classification accuracy because it accounts for this spatial correlation and performs the segmentation in the Fourier domain instead of the spatial domain. The solution via FFT is also significantly faster than the classical solution in the spatial domain, O(N log2 N) instead of O(N^2), which enables the use of EM-GMM in high-throughput and real-time applications.
01 Dec 2008
TL;DR: An integrated color- and texture-feature-based content-based image retrieval approach using the 2D Discrete Wavelet Transform (2D-DWT), designed to be efficient in terms of retrieval accuracy and precision.
Abstract: This paper introduces integrated color- and texture-feature-based content-based image retrieval using the 2D Discrete Wavelet Transform (2D-DWT). Most image retrieval systems are still incapable of providing results with high retrieval accuracy and low computational complexity. To address this problem, an effective integrated framework combining color and texture features is developed. In this approach, the color features of the query image and the database images are computed, and a quadratic distance measure is used as the similarity metric to retrieve relevant images; the texture features extracted using the 2D-DWT are then compared between the query image and the database images using the Euclidean distance measure. The combined features of the proposed system are designed to be efficient in terms of retrieval accuracy and precision. Precision improved from 67% to 95% and the average recall rate from 67% to 95% for a general-purpose database of 10,000 images; the system also achieves improved precision of 67% to 95% and average recall of 67% to 95% for the Brodatz album of 116 different textures comprising 1,856 texture images.
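A minimal sketch of the two feature channels the abstract names: a color histogram compared with a quadratic-form distance, and one level of a Haar 2D-DWT whose detail-subband energies serve as the texture descriptor compared with Euclidean distance. The histogram bin count, the similarity matrix A and the single decomposition level are illustrative choices, not the paper's parameters.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D Haar DWT -> (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2          # rows: average
    d = (img[0::2] - img[1::2]) / 2          # rows: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def texture_feature(gray):
    """Energy of each detail subband, a common DWT texture descriptor."""
    _, lh, hl, hh = haar_dwt2(gray)
    return np.array([np.mean(s ** 2) for s in (lh, hl, hh)])

def color_histogram(gray, bins=8):
    h, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def quadratic_distance(h1, h2, A):
    """Quadratic-form histogram distance d = sqrt((h1-h2)^T A (h1-h2))."""
    d = h1 - h2
    return float(np.sqrt(d @ A @ d))
```

In a retrieval loop, the two distances would be fused (e.g. by a weighted sum) to rank database images against the query; the fusion rule is not specified by the abstract.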
03 Aug 2012
TL;DR: The present study demonstrates a method for routing data from sensor nodes to the Base Station (BS) using two cluster heads, prolonging the battery life of sensor nodes with little compromise in energy consumption and hop count compared to the single-cluster-head method.
Abstract: Most sensor nodes in a Wireless Sensor Network (WSN) have limited energy, and several energy-efficient algorithms have been proposed to increase network lifetime. Designing wireless sensor networks for applications such as oceanographic data collection, pollution monitoring, offshore exploration, disaster prevention, assisted navigation and tactical surveillance remains a challenge. The main objective of this work is to find a data routing method that prolongs the battery life of sensor nodes. In this paper, a Two Cluster Head Energy-efficient Wireless Sensor Network (TCHE-WSN) algorithm is put forward. A WSN consists of sensor nodes that sense physical parameters such as temperature, humidity, pressure and light, and send them to a fusion center, the Base Station (BS), from which the values of those parameters can be obtained at any time. Monitoring may be required anywhere, for example in the middle of the sea or under the earth, where the batteries that supply a node's sensing device, transceiver and memory unit cannot be recharged often, so battery power must be used judiciously. The present study demonstrates a method for routing data from nodes to the BS using two cluster heads to prolong the battery life of sensor nodes. The two-cluster-head scheme reduces the overhead of a single cluster head, avoids packet collisions, and improves reliable data transmission, with little compromise in energy consumption and hop count compared to the single-cluster-head method.
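The lifetime argument can be made concrete with the standard first-order radio energy model. The sketch below compares the worst-case per-node energy of one cluster head against a pair that split gathering and long-haul transmission; the constants E_ELEC and EPS_AMP are common textbook values, and the model is an illustration, not the paper's TCHE-WSN algorithm.

```python
# First-order radio energy model (textbook assumptions, not from this paper)
E_ELEC = 50e-9      # J/bit, electronics energy per bit sent or received
EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy
K = 2000            # bits per packet

def e_tx(bits, dist):
    """Energy to transmit over distance `dist` (free-space d^2 model)."""
    return bits * E_ELEC + bits * EPS_AMP * dist ** 2

def e_rx(bits):
    return bits * E_ELEC

def single_ch_load(members, d_bs):
    """One head receives from all members and sends the fused packet to the BS."""
    return members * e_rx(K) + e_tx(K, d_bs)

def two_ch_load(members, d_bs, d_ch):
    """Head 1 gathers and fuses; head 2 handles the long-haul hop to the BS.
    Returns the maximum per-node energy, which bounds node lifetime."""
    ch1 = members * e_rx(K) + e_tx(K, d_ch)   # gather + short inter-head hop
    ch2 = e_rx(K) + e_tx(K, d_bs)             # forward fused packet to the BS
    return max(ch1, ch2)
```

With 20 members, a distant BS and a nearby second head, the worst-case per-node drain roughly halves while total energy rises only slightly (the extra hop), matching the abstract's "little compromise in energy consumption and hop count".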
01 Dec 2013
TL;DR: This paper presents a technique for storing encrypted numeric data in fingerprint images through watermarking: each image is divided into 4 quadrants, and each quadrant is watermarked with an encrypted numeric digit.
Abstract: This paper presents a technique for storing encrypted numeric data in fingerprint images through watermarking. Four fingerprint images are used; each image is divided into 4 quadrants, and each quadrant is watermarked with an encrypted numeric digit. As the four fingerprints are watermarked with an altered ATM PIN of the same user, the proposed work finds application in security implementations based on cryptographic fingerprint watermarking. This combination of encryption and watermarking provides a level of security and, owing to the robustness of the technique, further protects the identity of the user from attacks. The experimental study was done on a limited number of users, and the results show that the hybrid approach gives improved results compared with other existing approaches in the literature.
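A toy sketch of the quadrant idea: split an 8-bit image into four quadrants and hide one encrypted decimal digit (4 bits) in the least significant bits of each quadrant's first pixels. The LSB embedding and the 4-bit XOR "cipher" are stand-ins chosen for brevity; the paper's actual watermarking scheme and encryption are not specified in the abstract.

```python
import numpy as np

KEY = 0b1011  # toy 4-bit key for the illustrative XOR cipher

def embed_digit(quad, digit):
    """Encrypt one digit (0-9) and write its 4 bits into the quadrant's LSBs."""
    bits = [(digit ^ KEY) >> i & 1 for i in range(4)]
    out = quad.copy().ravel()
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b          # overwrite the least significant bit
    return out.reshape(quad.shape)

def extract_digit(quad):
    """Read the 4 LSBs back and decrypt."""
    bits = quad.ravel()[:4] & 1
    enc = sum(int(b) << i for i, b in enumerate(bits))
    return enc ^ KEY

def watermark_pin(img, pin_digits):
    """Embed one digit per quadrant of a single 8-bit grayscale image."""
    h, w = img.shape
    out = img.copy()
    views = [out[:h//2, :w//2], out[:h//2, w//2:],
             out[h//2:, :w//2], out[h//2:, w//2:]]
    for v, d in zip(views, pin_digits):
        v[...] = embed_digit(v, d)
    return out
```

Because only LSBs change, the watermarked image differs from the original by at most one gray level per pixel, so the fingerprint remains usable for matching.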
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
TL;DR: In this article, the authors explore the effect of dimensionality on the nearest neighbor problem and show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance of the farthest data point.
Abstract: We explore the effect of dimensionality on the nearest neighbor problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality!
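The central observation is easy to reproduce numerically. The sketch below measures, for i.i.d. uniform data (one instance of the paper's "broad set of conditions"), the median ratio of the farthest to the nearest distance from random query points; the sample sizes and dimensions are arbitrary choices.

```python
import numpy as np

def contrast(dim, n_points=1000, n_queries=20, seed=0):
    """Median farthest/nearest distance ratio over several random queries."""
    rng = np.random.default_rng(seed)
    data = rng.random((n_points, dim))       # uniform points in the unit cube
    ratios = []
    for _ in range(n_queries):
        q = rng.random(dim)
        d = np.linalg.norm(data - q, axis=1)
        ratios.append(d.max() / d.min())
    return float(np.median(ratios))
```

In low dimension the nearest point is far closer than the farthest (a large ratio); by around 100 dimensions the ratio has collapsed toward 1, i.e. nearest and farthest neighbors become nearly indistinguishable.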
TL;DR: A hybrid intelligent machine learning technique for a computer-aided detection system that automatically detects brain tumors in magnetic resonance images is proposed, and it demonstrates its effectiveness compared with other recently published machine learning techniques.
Abstract: Computer-aided detection/diagnosis (CAD) systems can enhance the diagnostic capabilities of physicians and reduce the time required for accurate diagnosis. The objective of this paper is to review recently published segmentation and classification techniques and their state of the art for human brain magnetic resonance images (MRI). The review reveals that CAD systems for human brain MRI are still an open problem. In light of this review, we propose a hybrid intelligent machine learning technique for a computer-aided detection system that automatically detects brain tumors in magnetic resonance images. The proposed technique is based on the following computational methods: the feedback pulse-coupled neural network for image segmentation, the discrete wavelet transform for feature extraction, principal component analysis for reducing the dimensionality of the wavelet coefficients, and a feed-forward back-propagation neural network to classify inputs as normal or abnormal. The experiments were carried out on 101 images consisting of 14 normal and 87 abnormal (malignant and benign tumors) from a real human brain MRI dataset. The classification accuracy on both training and test images is 99%, which is significantly good. Moreover, the proposed technique demonstrates its effectiveness compared with other recently published machine learning techniques. The results revealed that the proposed hybrid approach is accurate, fast, and robust. Finally, possible future directions are suggested.
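A heavily condensed sketch of the pipeline's last two stages on synthetic features: PCA for dimensionality reduction, and a single logistic unit trained by gradient descent standing in for the feed-forward back-propagation network. The segmentation (PCNN) and DWT stages are omitted, and the data below are random stand-ins, not MRI features.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T, vt[:k]

def train_logistic(X, y, lr=0.5, epochs=300):
    """Minimal gradient-descent training of one logistic unit (log-loss)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # forward pass
        g = p - y                            # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)       # 1 = "abnormal"
```

A real reproduction would replace the synthetic features with DWT coefficients of the segmented tumor region and the logistic unit with a multi-layer network, as the paper describes.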
01 Jan 2000
TL;DR: Clusters, groupings of clients that are close together topologically and likely to be under common administrative control, are introduced and identified using a "network-aware" method based on information available from BGP routing-table snapshots.
Abstract: Being able to identify the groups of clients that are responsible for a significant portion of a Web site's requests can be helpful to both the Web site and the clients. In a Web application, it is beneficial to move content closer to groups of clients that are responsible for large subsets of requests to an origin server. We introduce clusters: a grouping of clients that are close together topologically and likely to be under common administrative control. We identify clusters using a "network-aware" method, based on information available from BGP routing table snapshots.
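The clustering step can be sketched with the standard library: map each client IP to its longest matching prefix from a BGP routing-table snapshot, so clients sharing a prefix form one cluster. The prefixes and client addresses below are illustrative documentation-range values, not the paper's data.

```python
import ipaddress
from collections import defaultdict

def cluster_clients(client_ips, bgp_prefixes):
    """Group client IPs by their longest matching BGP prefix."""
    # Sort longest prefix first so the first match is the longest match.
    nets = sorted((ipaddress.ip_network(p) for p in bgp_prefixes),
                  key=lambda n: n.prefixlen, reverse=True)
    clusters = defaultdict(list)
    for ip in client_ips:
        addr = ipaddress.ip_address(ip)
        for net in nets:
            if addr in net:
                clusters[str(net)].append(ip)
                break
        else:
            clusters["unmatched"].append(ip)
    return dict(clusters)
```

A linear scan over prefixes is quadratic in the worst case; a production version would use a radix trie, but the grouping it produces is the same.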
TL;DR: The proposed eigenbrain method was effective for AD subject prediction and discriminant brain-region detection in MRI scanning, and the results were consistent with the existing literature.
Abstract: (Purpose) Early diagnosis or detection of Alzheimer's disease (AD) versus normal elder controls (NC) is very important. However, computer-aided diagnosis (CAD) is not widely used, and classification performance has not reached the standard of practical use. We propose a novel CAD system for MR brain images based on eigenbrains and machine learning, with two goals: accurate detection of both AD subjects and AD-related brain regions. (Method) First, we used maximum inter-class variance to select key slices from 3D volumetric data. Second, we generated an eigenbrain set for each subject. Third, the most important eigenbrain (MIE) was obtained by Welch's t-test. Finally, kernel support vector machines with different kernels, trained by particle swarm optimization, were used to make an accurate prediction of AD subjects. Coefficients of the MIE with values above the 0.98 quantile were highlighted to obtain the discriminant regions that distinguish AD from NC. (Results) The experiments showed that the proposed method predicts AD subjects with performance competitive with existing methods; in particular, the accuracy of the polynomial kernel (92.36±0.94) was better than that of the linear kernel (91.47±1.02) and the radial basis function (RBF) kernel (86.71±1.93). The proposed eigenbrain-based CAD system detected 30 AD-related brain regions, and the results were consistent with the existing literature. (Conclusion) The eigenbrain method was effective for AD subject prediction and discriminant brain-region detection in MRI scanning.
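The eigenbrain construction is in the spirit of eigenfaces, and that core step can be sketched compactly: stack key MR slices as row vectors, subtract the mean brain, and take principal components via SVD. Slice selection, the Welch's t-test step and the kernel-SVM classifier are omitted, and the data in the usage example are random stand-ins.

```python
import numpy as np

def eigenbrains(slices, k):
    """slices: (n_subjects, h, w) array of key MR slices.
    Returns the top-k eigenbrains and each subject's projection coefficients."""
    X = slices.reshape(len(slices), -1).astype(float)
    mean_brain = X.mean(0)
    Xc = X - mean_brain                       # center on the mean brain
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    eb = vt[:k]                               # each row is one eigenbrain
    coords = Xc @ eb.T                        # per-subject coefficients
    return eb.reshape(k, *slices.shape[1:]), coords
```

The `coords` matrix is what a downstream classifier (the paper's PSO-trained kernel SVM) would consume, and the eigenbrain images themselves are what get thresholded to highlight discriminant regions.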