
Showing papers presented at "International Conference on Bioinformatics in 2005"


Journal ArticleDOI
01 Jul 2005
TL;DR: An L1-penalized Cox regression fitted with the least-angle regression (LARS) algorithm, termed the LARS-Cox procedure, is proposed to select genes relevant to patients' survival and to build a parsimonious model for identifying genes related to time to death due to cancer and for predicting the survival of future patients.
Abstract: Motivation: An important application of microarray technology is to relate gene expression profiles to various clinical phenotypes of patients. Success has been demonstrated in molecular classification of cancer in which the gene expression data serve as predictors and different types of cancer serve as a categorical outcome variable. However, there has been less research in linking gene expression profiles to the censored survival data such as patients' overall survival time or time to cancer relapse. It would be desirable to have models with good prediction accuracy and parsimony property. Results: We propose to use the L1 penalized estimation for the Cox model to select genes that are relevant to patients' survival and to build a predictive model for future prediction. The computational difficulty associated with the estimation in the high-dimensional and low-sample size settings can be efficiently solved by using the recently developed least-angle regression (LARS) method. Our simulation studies and application to real datasets on predicting survival after chemotherapy for patients with diffuse large B-cell lymphoma demonstrate that the proposed procedure, which we call the LARS--Cox procedure, can be used for identifying important genes that are related to time to death due to cancer and for building a parsimonious model for predicting the survival of future patients. The LARS--Cox regression gives better predictive performance than the L2 penalized regression and a few other dimension-reduction based methods. Conclusions: We conclude that the proposed LARS--Cox procedure can be very useful in identifying genes relevant to survival phenotypes and in building a parsimonious predictive model that can be used for classifying future patients into clinically relevant high- and low-risk groups based on the gene expression profile and survival times of previous patients. Supplementary information: http://dna.ucdavis.edu/~hli/LARSCox-Appendix.pdf Contact: hli@ucdavis.edu
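
As a rough illustration of the kind of L1-penalized Cox gene selection described above (hedged: this is not the authors' LARS-Cox code; it uses the coordinate-descent "coxnet" solver from scikit-survival as a stand-in, on simulated data):

```python
# Minimal sketch, assuming scikit-survival is installed; the data are simulated.
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis

rng = np.random.default_rng(0)
n_patients, n_genes = 80, 500                      # high-dimensional, low-sample setting
X = rng.normal(size=(n_patients, n_genes))         # stand-in gene expression matrix
time = rng.exponential(scale=np.exp(-X[:, 0] + 0.5 * X[:, 1]))  # genes 0 and 1 drive survival
event = rng.random(n_patients) < 0.7               # roughly 30% censoring
y = np.array(list(zip(event, time)), dtype=[("event", bool), ("time", float)])

# l1_ratio=1.0 gives a pure L1 (lasso-type) penalty on the Cox partial likelihood
model = CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01)
model.fit(X, y)

# walk the regularization path and report the first penalty level with >= 5 active genes
n_active = np.count_nonzero(model.coef_, axis=0)
k = int(np.argmax(n_active >= 5))
print("alpha:", model.alphas_[k], "selected genes:", np.flatnonzero(model.coef_[:, k]))
```

The non-zero coefficients at a chosen penalty level play the role of the "selected genes"; a Cox model refit on them could then be used to score new patients into high- and low-risk groups.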

483 citations


Journal ArticleDOI
30 Nov 2005
TL;DR: This paper discusses some of the existing load balancing algorithms in cloud computing and also their challenges.
Abstract: Cloud computing is an emerging computing paradigm that aims to share data, computation, and services transparently over a scalable network of nodes. Because cloud computing stores data and disseminates resources in an open environment, the volume of stored data grows quickly, and load balancing becomes a key issue in cloud storage. Maintaining load information is costly, since the system is too large to disperse load in a timely manner. Load balancing is one of the main challenges in cloud computing: the dynamic workload must be distributed across multiple nodes so that no single node is overwhelmed. It helps in the optimal utilization of resources and hence enhances system performance. A few existing scheduling algorithms can maintain load balance and provide better strategies through efficient job scheduling and resource allocation techniques. To gain maximum profit with optimized load balancing algorithms, resources must be utilized efficiently. This paper discusses some of the existing load balancing algorithms in cloud computing and their challenges.
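
For readers new to the topic, a minimal sketch of the simplest dynamic strategy that the surveyed algorithms refine, a least-loaded dispatcher, might look as follows (node names and task costs are invented; this is not one of the surveyed algorithms):

```python
# Illustrative only: dispatch each task to the node with the lightest current load.
import heapq

class LeastLoadedBalancer:
    def __init__(self, nodes):
        # heap of (current_load, node_name)
        self.heap = [(0.0, n) for n in nodes]
        heapq.heapify(self.heap)

    def dispatch(self, task_cost):
        load, node = heapq.heappop(self.heap)      # node with the lightest load
        heapq.heappush(self.heap, (load + task_cost, node))
        return node

lb = LeastLoadedBalancer(["vm-1", "vm-2", "vm-3"])
for cost in [5, 3, 8, 2, 7, 4]:
    print(cost, "->", lb.dispatch(cost))
```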

112 citations


Journal ArticleDOI
30 Nov 2005
TL;DR: The existing issues in cloud computing such as security, privacy, reliability and so on are introduced and the security problems of current cloud computing are surveyed.
Abstract: Cloud computing is Internet-based computing, whereby shared resources, software and information are provided to computers and devices on demand, like the electricity grid. It aims to construct a system with powerful computing capability from a large number of relatively low-cost computing entities, and to deliver that capacity into end users' hands through business models such as SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service). Cloud computing represents a new computing model that poses many demanding security issues at all levels, e.g., the network, host, application, and data levels. The variety of delivery models presents different security challenges depending on the model and on consumers' Quality of Service (QoS) requirements. Confidentiality, integrity, availability, authenticity, and privacy are essential concerns for both cloud providers and consumers. This paper introduces the existing issues in cloud computing, such as security, privacy and reliability, and surveys the security problems of current cloud computing.

92 citations


Journal ArticleDOI
07 Feb 2005
TL;DR: This work presents a novel normalization technique, STEPNORM, for data-dependent and adaptive normalization of two-channel spotted microarrays that performs a stepwise interrogation of a range of different normalization models and selects the appropriate method based on formal model selection criteria.
Abstract: Intensity measurements from spotted microarrays embody many undesirable systematic variations, and varying amounts and types of such variations are commonly observed across arrays. Although various normalization methods have been proposed to remove these systematic effects, how to assess or select the most appropriate method for different arrays and data sets has not been well studied. To address this issue, we present a novel normalization technique, STEPNORM, for data-dependent and adaptive normalization of two-channel spotted microarrays. STEPNORM performs a stepwise interrogation of a range of different normalization models and selects the appropriate method based on formal model selection criteria. In addition, we evaluate the effectiveness of STEPNORM and other commonly used normalization methods using a set of specially constructed splicing arrays.
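
A toy re-creation of the stepwise idea (hedged: not the authors' STEPNORM model set) is to fit increasingly flexible normalization curves to the M-versus-A data of one array and keep the one preferred by a formal criterion such as BIC:

```python
# Hypothetical sketch: compare candidate normalization models by BIC on simulated M-A data.
import numpy as np

def bic(residuals, n_params):
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + n_params * np.log(n)

rng = np.random.default_rng(1)
A = rng.uniform(6, 14, size=2000)                              # average log intensity
M = 0.4 - 0.05 * (A - 10) ** 2 + rng.normal(0, 0.3, A.size)    # intensity-dependent dye bias + noise

candidates = {
    "none":         (np.zeros_like(M), 0),
    "median":       (np.full_like(M, np.median(M)), 1),
    "linear(A)":    (np.polyval(np.polyfit(A, M, 1), A), 2),
    "quadratic(A)": (np.polyval(np.polyfit(A, M, 2), A), 3),
}
scores = {name: bic(M - fit, k) for name, (fit, k) in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "-> selected:", best)
```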

62 citations


Journal ArticleDOI
30 Nov 2005
TL;DR: The state of the art in DSV is presented, focusing on the relationship between the verification approach used (the nature of the classifier) and the type of features used to represent the signature.
Abstract: Online, or dynamic, signature verification (DSV) is one of the most acceptable, intuitive, fast and cost-effective tools for user authentication. DSV uses dynamics such as speed, pressure, direction, stroke length and pen-ups/pen-downs to verify the signer's identity. The state of the art in DSV is presented in this paper. Several approaches to DSV are compared and the most influential techniques in this field are highlighted. We concentrate on the relationship between the verification approach used (the nature of the classifier) and the type of features used to represent the signature.
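
As one representative technique from the family this survey covers (hedged: not a specific system from the paper), dynamic features such as pen speed can be compared to an enrolled reference with dynamic time warping and thresholded to accept or reject:

```python
# Illustrative DTW-based matcher on made-up feature sequences.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

reference = np.sin(np.linspace(0, 3, 50))            # enrolled feature sequence (e.g. pen speed)
genuine   = np.sin(np.linspace(0, 3, 60)) + 0.05     # same dynamics, different pace
forgery   = np.cos(np.linspace(0, 3, 55))            # different dynamics
threshold = 10.0                                     # illustrative; in practice tuned on enrollment data
for name, sample in [("genuine", genuine), ("forgery", forgery)]:
    d = dtw_distance(reference, sample)
    print(name, "distance", round(d, 2), "accepted" if d < threshold else "rejected")
```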

16 citations


Journal ArticleDOI
16 Sep 2005
TL;DR: (tree-based) ensemble classifiers for MHC class I and class II molecules are introduced and it is shown that the ensemble methods are consistently more accurate than the other three alternatives and robust with respect to parameter tuning.
Abstract: Identification of peptides binding to Major Histocompatibility Complex (MHC) molecules is important for accelerating vaccine development and improving immunotherapy. Accordingly, a wide variety of prediction methods have been applied in this context. In this paper, we introduce (tree-based) ensemble classifiers for such problems and contrast their predictive performance with forefront existing methods for both MHC class I and class II molecules. In addition, we investigate the impact of differing peptide representation schemes on performance. Finally, classifier predictions are used to conduct genomewide scans of a diverse collection of HIV-1 strains, enabling assessment of epitope conservation. We investigated all combinations of six classification methods (classification trees, artificial neural networks, support vector machines, and the more recently devised ensemble methods: bagging, random forests, and boosting) with four peptide representation schemes (amino acid sequence, select biophysical properties, select quantitative structure-activity relationship (QSAR) descriptors, and the combination of the latter two) in predicting peptide binding to an MHC class I molecule (HLA-A2) and an MHC class II molecule (HLA-DR4). Our results show that the ensemble methods are consistently more accurate than the other three alternatives. Furthermore, they are robust with respect to parameter tuning. Among the four representation schemes, the amino acid sequence representation gave consistently (across classifiers) the best results. This finding obviates the need for feature selection strategies incurred by the use of biophysical and/or QSAR properties. We obtained, and aligned, a diverse set of 32 HIV-1 genomes and pursued genomewide HLA-DR4 epitope profiling by querying with respect to classifier predictions, as obtained under each of the four peptide representation schemes. We validated those epitopes conserved across strains against known T-cell epitopes. Once again, amino acid sequence representation was at least as effective as using properties. Assessment of novel epitope predictions awaits experimental verification.
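
A hedged sketch of one combination named in the abstract, a random forest trained on a one-hot amino-acid-sequence encoding of fixed-length peptides, is shown below; the peptides and binding labels are synthetic placeholders, not real HLA-A2 data:

```python
# Illustrative ensemble classifier on one-hot encoded 9-mer peptides.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AA = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(peptide):
    m = np.zeros((len(peptide), len(AA)))
    for i, aa in enumerate(peptide):
        m[i, AA.index(aa)] = 1.0
    return m.ravel()

rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(list(AA), size=9)) for _ in range(300)]   # random 9-mers
labels = [int(p[1] in "LMIV") for p in peptides]                         # toy "anchor position" rule

X = np.array([one_hot(p) for p in peptides])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```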

15 citations


Journal ArticleDOI
30 Aug 2005

9 citations


Journal ArticleDOI
30 Nov 2005
TL;DR: The paper evaluates the bit error rate performance of a Free Space Optics (FSO) system; the results compare Bit Error Rate (BER) against Signal-to-Noise Ratio (SNR), which is useful for analysing system performance under different channel models and modulation techniques.
Abstract: Free Space Optics (FSO) is an emerging technology that is becoming increasingly popular and that basically uses optical signals for communication. The paper evaluates the bit error rate performance of the FSO system. To design a high-performance communication link for the atmospheric FSO channel, it is of great importance to characterize the channel with a proper model. The performance of this highly efficient, high-data-rate system is limited by constraints such as scintillation effects and atmospheric turbulence. The FSO communication system uses the Orthogonal Frequency Division Multiplexing (OFDM) technique, which is known for its increased robustness against frequency-selective fading and narrow-band interference and for its high channel efficiency. The performance of the modulation techniques Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK) and 8-PSK is studied in the lognormal and negative exponential channels. The results obtained compare Bit Error Rate (BER) against Signal-to-Noise Ratio (SNR), which is quite useful for analysing system performance under different channel models for the various modulation techniques.
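
A small numerical sketch of the BER-versus-SNR comparison described above (hedged: a simplified model with an arbitrary scintillation variance): the BPSK error probability is Monte-Carlo averaged over a unit-mean lognormal irradiance channel.

```python
# Hypothetical sketch: average BPSK BER over lognormal scintillation.
import numpy as np
from scipy.special import erfc

def ber_bpsk_lognormal(snr_db, sigma_x=0.2, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    # unit-mean lognormal irradiance: I = exp(2X), X ~ N(-sigma_x^2, sigma_x^2)
    X = rng.normal(-sigma_x ** 2, sigma_x, n_samples)
    I = np.exp(2 * X)
    # BPSK in AWGN: Pb = Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR)); fading scales the SNR
    return np.mean(0.5 * erfc(np.sqrt(I * snr)))

for snr_db in range(0, 21, 5):
    print(snr_db, "dB ->", ber_bpsk_lognormal(snr_db))
```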

9 citations


Journal ArticleDOI
30 Aug 2005
TL;DR: This work strives to frame the full space of cloud-computing security issues, attempting to separate justified concerns from possible over-reactions, and argues that two facets are to some degree new and fundamental to cloud computing: the complexities of multi-party trust considerations, and the ensuing need for mutual auditability.
Abstract: In this work we strive to frame the full space of cloud-computing security issues, attempting to separate justified concerns from possible over-reactions. We examine contemporary and historical perspectives from industry, academia, government, and "black hats". We argue that few cloud computing security issues are fundamentally new or fundamentally intractable; often what appears "new" is so only relative to "traditional" computing of the past several years. Looking back further to the time-sharing era, many of these problems already received attention. On the other hand, we argue that two facets are to some degree new and fundamental to cloud computing: the complexities of multi-party trust considerations, and the ensuing need for mutual auditability.

9 citations


Journal ArticleDOI
30 Nov 2005
TL;DR: A classification approach using an Artificial Neural Network with back-propagation learning is introduced for human diseases such as cancer and heart problems from clinical diagnosis data, together with a simple privacy-preserving technique to encrypt the identity of the patients as well as the critical data.
Abstract: In this paper, the author introduces a classification approach using an Artificial Neural Network (ANN) with the back-propagation learning technique for human diseases such as cancer and heart problems, based on clinical diagnosis data. Clinical diagnosis is done mostly by experienced doctors with expertise in this field; in many cases, the test results are not effective for diagnosing the disease, and a wrong diagnosis leads to a wrong treatment. The author uses the artificial neural network technique to classify the disease with a reduced number of DNA sequences. The accuracy differs depending on the training and validation data sets. The other major issue is preserving the privacy of the patients: since critical data is shared from clinical diagnostic centres, there is a good chance that a patient's anonymity is revealed. To avoid this, the author uses a simple Privacy Preserving in Data Mining (PPDM) technique to encrypt the identity of the patients as well as the critical data, disclosing only the required data, such as the DNA sequence, to the research team, as they are not much interested in the identity or the owner of the diagnosis report.
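
A hedged sketch of the classification step only (not the authors' dataset, DNA encoding or privacy scheme): a back-propagation neural network from scikit-learn trained on a synthetic feature matrix standing in for the clinical/DNA-derived features.

```python
# Illustrative back-propagation classifier on placeholder diagnosis features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))                        # placeholder feature vectors
y = (X[:, :3].sum(axis=1) > 0).astype(int)            # toy "disease" rule

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)                             # weights learned by back-propagation
print("validation accuracy:", clf.score(X_val, y_val))
```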

7 citations


Journal ArticleDOI
30 Nov 2005
TL;DR: This paper proposes a scheme for large scale WSNs which effectively uses the spatial correlation and temporal correlation of the data for effective aggregation and thereby preserving precious energy.
Abstract: Wireless sensor networks (WSNs) can be used effectively in several application areas such as agriculture, military surveillance, environmental monitoring and forest fire detection. Since they monitor large geographic areas, numerous sensor nodes must be deployed, and their radio range is very short; hence they depend on the cooperative effort of these densely deployed sensor nodes for reporting the sensed data. Any change in the environment or event of interest is typically observed first in a particular area; in other words, the observations are correlated in the space domain. Many nodes in that area may detect and report the same event. This redundant information is of no use to the system and depletes the precious energy of the intermediate sensor nodes. Sensor nodes have very limited energy, which needs to be conserved to attain maximum network lifetime. Data aggregation is an effective technique for conserving energy by reducing packet transmissions. Many aggregation systems are available, but they are less effective when employed for large-scale wireless sensor networks. In this paper we propose a scheme for large-scale WSNs that effectively uses the spatial and temporal correlation of the data for effective aggregation, thereby preserving precious energy.
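
An illustrative sketch of the general idea (hedged: not the paper's exact scheme): a cluster head collapses near-duplicate readings from co-located nodes within a round (spatial correlation) and forwards a value only when it changes significantly between rounds (temporal correlation).

```python
# Toy aggregation point suppressing spatially and temporally redundant reports.
class ClusterHead:
    def __init__(self, tolerance=0.5):
        self.tolerance = tolerance
        self.last_forwarded = None

    def aggregate_round(self, readings):
        # spatial suppression: collapse the round's readings to one representative value
        representative = sum(readings) / len(readings)
        # temporal suppression: forward only on a significant change
        if (self.last_forwarded is None
                or abs(representative - self.last_forwarded) > self.tolerance):
            self.last_forwarded = representative
            return representative          # one packet instead of len(readings)
        return None                        # suppressed

head = ClusterHead(tolerance=0.5)
for readings in [[24.1, 24.3, 24.2], [24.2, 24.4, 24.3], [26.0, 26.2, 25.9]]:
    print(head.aggregate_round(readings))
```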

Journal ArticleDOI
30 Oct 2005
TL;DR: This paper presents a review of various gateway discovery approaches used to interconnect mobile ad hoc networks and the Internet.
Abstract: A mobile ad hoc network (MANET) consists of wireless mobile nodes without a fixed infrastructure. Communication between these mobile nodes is carried out without any centralized control, through a routing protocol within the infrastructure-less network. However, whenever a mobile node wants to communicate with a node outside the ad hoc network, such as on the Internet, an appropriate mechanism is needed to establish this connection. Gateway discovery is a fundamental process in connecting a MANET to the Internet: a mobile node can connect to the Internet by discovering specialized nodes called gateway nodes, which act as a bridge between the mobile ad hoc network and the Internet. The basic aim of a gateway discovery approach is to modify the route discovery process so that it discovers not only destination mobile nodes but also gateways. In this paper we present a review of various gateway discovery approaches used to establish the interconnection of mobile ad hoc networks and the Internet.

Journal ArticleDOI
30 Nov 2005
TL;DR: This paper presents a graph-based approach that extends frequent pattern mining to an incremental setting and gives good results for incremental mining.
Abstract: Extracting useful information from huge amounts of data is known as data mining. It happens at the intersection of artificial intelligence and statistics, and is also defined as the use of computer algorithms to discover hidden patterns and interesting relationships between items in large datasets. Candidate generation and test, pattern growth, etc. are the common approaches to finding frequent patterns in a database. Incremental mining is a crucial requirement for industry nowadays. Many tree-based approaches have tried to extend frequent pattern mining to an incremental approach, but most of this research was limited to interactive mining only. Here, instead of a tree-based approach, a graph-based approach is presented that also gives good results for incremental mining.

Journal ArticleDOI
30 Aug 2005
TL;DR: This paper describes defences against denial-of-sleep attacks, analysing each of the proposed solutions to identify its strengths and limitations.
Abstract: With advances in Wireless Sensor Networks (WSNs), sensors are gaining importance in the physical world. Despite the low power of the sensor nodes used, sensors are widely deployed for detecting temperature, pollution, pressure and various other applications. Energy-constrained sensor networks periodically put nodes to sleep in order to extend the network lifetime. Denial-of-sleep attacks are a great threat to the lifetime of sensor networks, as they prevent the nodes from going into sleep mode. In this paper we describe defences against denial-of-sleep attacks, analysing each of the proposed solutions to identify its strengths and limitations. Index terms: Wireless Sensor Network (WSN), denial of sleep, Medium Access Control (MAC), heterogeneous network, cluster.

Journal ArticleDOI
30 Oct 2005
TL;DR: A mathematical model for allocating “M” tasks of distributed program to “N” multiple processors (M>N) that minimizes the total cost of the program.
Abstract: Distributed Computing Systems [DCS] have attracted many researchers by posing several challenging problems. In this paper we develop a mathematical model for allocating "M" tasks of a distributed program to "N" processors (M>N) that minimizes the total cost of the program. Relocating tasks from one processor to another at certain points during the execution of the program, which contributes to the total cost of the running program, has been taken into account. Most researchers have treated the cost of relocating a task from one processor to another at the end of a phase as a constant, but in real-life situations the relocation cost of a task may vary from processor to processor owing to differences in the execution efficiency of the processors. Phase-wise execution cost [EC], inter-task communication cost [ITCT], residence cost [RC] of each task on different processors and relocation cost [REC] for each task have been considered while preparing the dynamic task allocation model.
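
A hedged illustration of how the named cost terms combine (not the authors' model or data): the total cost of one phase-wise assignment is the sum of execution, inter-task communication, residence and relocation costs, with all matrices below invented placeholders.

```python
# Toy evaluation of the total cost of one phase-wise task assignment.
import numpy as np

M, N, phases = 4, 2, 2
exec_cost = np.array([[[2, 3], [4, 1], [3, 3], [1, 5]],      # phase 0: EC[task][processor]
                      [[3, 2], [2, 2], [4, 1], [2, 4]]])     # phase 1
comm_cost = np.array([[0, 1, 0, 2],                          # communication between task pairs
                      [1, 0, 3, 0],
                      [0, 3, 0, 1],
                      [2, 0, 1, 0]])
residence = np.array([[1, 2], [2, 1], [1, 1], [2, 2]])       # RC[task][processor]
reloc_cost = np.array([1, 2, 1, 3])                          # REC per task move

assignment = np.array([[0, 1, 0, 1],                         # phase 0: task -> processor
                       [1, 1, 0, 0]])                        # phase 1

total = 0.0
for p in range(phases):
    a = assignment[p]
    total += exec_cost[p, np.arange(M), a].sum()             # execution cost
    total += residence[np.arange(M), a].sum()                # residence cost
    diff = (a[:, None] != a[None, :])                        # pairs placed on different processors
    total += (comm_cost * diff).sum() / 2                    # communication cost (each pair once)
    if p > 0:
        total += reloc_cost[assignment[p - 1] != a].sum()    # relocation between phases
print("total cost:", total)
```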

Journal ArticleDOI
30 Aug 2005
TL;DR: This paper proposes a policy-based system for granting security during web service composition, making use of a Finite State Machine model that clearly portrays the business and flow logic.
Abstract: The revolution brought about by web services as a solution for business and enterprise application integration highlights the significance of the security provided by web services during web service composition. Satisfying the security requirements is truly a demanding task because of the dynamic and capricious nature of the Web. Web service composition associates web services to create a high-level business process that matches and conforms appropriately to the service requestor's needs. It involves customizing services, often by locating, assimilating and deploying elementary services. Our paper proposes a policy-based system for granting security during the process of web service composition. Policies defined for effective and secure composition analyse and verify the conditions under which the task of the web service is allowed or rejected. To achieve this, we make use of a Finite State Machine (FSM) model, which clearly portrays the business and flow logic. Nodes in the finite state machine represent rules. Upon successful fulfilment of the policies defined at the node access points, a transition between rules is triggered. A service composition is successfully incorporated only if there are no policy violations when combining the policies of the elementary services. The simulated FSM, which extracts the rules and policies of the web services and correctly matches and satisfies the policy constraints defined at the access points, ensures security for the composite web service. Keywords: Composition System, Finite State Machine, Policy Manager, Web Service Composition, Quality Measurement Manager.
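
A hedged, much-simplified illustration of the FSM idea described above: states stand for composition steps, and a transition fires only when the policy attached to that step's access point is satisfied; the policy names and rules here are invented.

```python
# Toy policy-gated finite state machine for a service composition.
class PolicyFSM:
    def __init__(self, start, transitions):
        # transitions: {state: [(next_state, policy_fn), ...]}
        self.state = start
        self.transitions = transitions

    def step(self, request):
        for next_state, policy in self.transitions.get(self.state, []):
            if policy(request):                 # policy satisfied -> transition allowed
                self.state = next_state
                return True
        return False                            # policy violation: composition rejected

transitions = {
    "start":         [("authenticated", lambda r: r.get("token") == "valid")],
    "authenticated": [("composed",      lambda r: r.get("encryption") == "TLS")],
}
fsm = PolicyFSM("start", transitions)
request = {"token": "valid", "encryption": "TLS"}
print(fsm.step(request), fsm.step(request), "final state:", fsm.state)
```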

Journal ArticleDOI
30 Nov 2005
TL;DR: Development and climate change scenarios were developed to simulate the Lake Tana water level: i) a baseline scenario (1991-2000), ii) a short-term future development scenario (2031-2040), and iii) a long-term future development scenario (2091-2100).
Abstract: This paper presents a simulation of future water use from the Lake Tana reservoir under emerging scenarios with and without climate change impacts. Development and climate change scenarios were developed to simulate the Lake Tana water level: i) a baseline scenario (1991-2000), ii) a future development scenario for the short term (2031-2040), and iii) a future development scenario for the long term (2091-2100). River head flow estimated by the Soil and Water Assessment Tool (SWAT) was used as input to the Water Evaluation And Planning (WEAP) model to simulate the lake level for each scenario. Based on the WEAP model simulation results, demand coverage and reliability of 100% were observed in all scenarios for the Tana-Beles hydropower project. For scenarios without climate change impacts, there are longer periods when mean monthly lake levels fall below 1785 masl (the minimum lake level required for shipping). Under natural conditions (lake level without the project), levels exceed this threshold in 100% of the months; under current conditions (baseline scenario, BLS), they exceed it in 89% of the months; and in the full development scenario (FDSCE¹), this decreases to 83%. For all scenarios with climate change impacts, the lake water level will not be significantly affected by climate change.

Journal ArticleDOI
30 Aug 2005
TL;DR: A statistical analysis of the raga structure of Bhairav, the first raga of the morning, in Indian classical music is given.
Abstract: A raga, in Indian classical music, is a melodic structure with fixed notes and a set of rules characterizing a particular mood conveyed by performance. Bhairav is the first raga of the morning. The present paper gives a statistical analysis of this raga structure.


Proceedings Article
02 Feb 2005
TL;DR: A simple formula quantifies the information content of any combined phenotyping and genotyping design in a backcross and is extended to cover multi-genotype crosses such as the F2 intercross, and multiple QTL models.
Abstract: We examine the efficiency of different genotyping and phenotyping strategies in inbred line crosses from an information perspective. This provides a mathematical framework for the statistical aspects of QTL experimental design, while guiding our intuition. Our central result is a simple formula that quantifies the fraction of missing information of any genotyping strategy in a backcross. It includes the special case of selectively genotyping only the phenotypic extreme individuals. The formula is a function of the square of the phenotype, and the uncertainty in our knowledge of the genotypes at a locus. This result is used to answer a variety of questions. First, we examine the cost-information tradeoff varying the density of markers, and the proportion of extreme phenotypic individuals genotyped. Then we evaluate the information content of selective phenotyping designs and the impact of measurement error in phenotyping. A simple formula quantifies the information content of any combined phenotyping and genotyping design. We extend our results to cover multi-genotype crosses such as the F2 intercross, and multiple QTL models. We find that when the QTL effect is small, any contrast in a multi-genotype cross benefits from selective genotyping in the same manner as in a backcross. The benefit remains in the presence of a second unlinked QTL with small effect (explaining less than 20% of the variance), but diminishes if the second QTL has a large effect. Software for performing power calculations for backcross and F2 intercross incorporating selective genotyping and marker spacing is available [in related files].

Journal ArticleDOI
30 Oct 2005
TL;DR: This growing technology is presented in terms of its basic characteristics, the services it provides, its models, and why cloud computing is widely accepted in business and software enterprises.
Abstract: Cloud computing is an emerging model of "computing as a utility" that provides convenient, on-demand access to a shared pool of resources. In this paper, this growing technology is presented in terms of its basic characteristics, the services it provides, its models, and why cloud computing is widely accepted in business and software enterprises.

Journal ArticleDOI
30 Oct 2005
TL;DR: This paper proposes an ontology-based, content-focused crawler that analyses its crawl boundary to find the links most likely to be relevant for the crawl while avoiding irrelevant regions of the web.
Abstract: The enormous growth of the World Wide Web in recent years has made it important to perform resource discovery efficiently. The rapid growth of the web (which doubles in size approximately every eight months) poses unprecedented scaling challenges for general-purpose crawlers and search engines. Finding useful information on the web, with its large and distributed structure, requires efficient search strategies. Ontology plays an important role here by providing a controlled vocabulary of concepts, each with explicitly defined, machine-processable semantics. In this paper, we propose an ontology-based, content-focused crawler that analyses its crawl boundary to find the links that are likely to be most relevant for the crawl while avoiding irrelevant regions of the web. Through this focused crawling technique we address the semantic-net problems of polysemy (a word with multiple meanings) and synonymy (multiple words with the same meaning). Also, instead of searching the whole web, the proposed technique searches an ontology that we build and update periodically at short intervals, and instead of displaying information unrelated to the user's need, it displays only relevant and related information. The proposed work gives a two-fold benefit: first, only focused results are retrieved, which reduces the number of results returned; second, focused searching prunes irrelevant results, which reduces the time taken.
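
A hedged sketch of the link-scoring idea (not the authors' crawler): candidate links are ranked by the overlap between their anchor-text terms and the ontology's concept terms plus synonyms, and the crawl frontier is a priority queue so that the most relevant links are fetched first.

```python
# Toy ontology-guided focused-crawl frontier; URLs, anchors and ontology are invented.
import heapq

ontology = {                          # concept -> synonym set (handles synonymy)
    "car":    {"car", "automobile", "vehicle"},
    "engine": {"engine", "motor"},
}
vocabulary = set().union(*ontology.values())

def relevance(anchor_text):
    terms = set(anchor_text.lower().split())
    return len(terms & vocabulary) / max(len(terms), 1)

frontier = []                         # max-heap via negated scores
for url, anchor in [("http://a.example", "cheap automobile engine parts"),
                    ("http://b.example", "celebrity news and gossip"),
                    ("http://c.example", "motor vehicle maintenance guide")]:
    heapq.heappush(frontier, (-relevance(anchor), url))

while frontier:
    score, url = heapq.heappop(frontier)
    print(url, "score:", -score)      # crawl the most relevant links first
```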

Journal ArticleDOI
30 Aug 2005
TL;DR: A multi-attribute Complex Proportional Assessment (COPRAS) of alternatives with grey numbers is used, together with AHP under fuzziness, to evaluate the overall efficiency of a project with the criterion values expressed as intervals.
Abstract: Multi-criteria decision support systems are used in various fields of human activity. Every alternative in a multi-criteria decision-making problem can be represented by a set of properties or constraints, which may be qualitative or quantitative. These properties are measured in different units, and different optimization techniques apply; depending on the desired goal, normalization aims to obtain reference scales for the values of these properties. This paper deals with the multi-attribute Complex Proportional Assessment of alternatives. In order to make the appropriate decision and to compare the available alternatives properly, the Analytic Hierarchy Process (AHP) under fuzziness and the COPRAS method with grey numbers (COPRAS-G) are used: AHP under fuzziness is used to analyse the structure of the project selection problem and to assign the weights of the properties, and the COPRAS-G method is used to obtain the final ranking and select the best among the projects. To illustrate these methods, survey data on the expansion of optical fibre for a telecommunication sector is reused. The decision maker can also use different weight combinations in the decision-making process according to the demands of the system. The COPRAS-G method is used to evaluate the overall efficiency of a project with the criterion values expressed as intervals; it is based on the real conditions of decision making and applications of grey number theory.
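
For orientation, a hedged sketch of the crisp COPRAS ranking step is given below; the paper itself uses the grey-interval COPRAS-G variant with AHP-derived fuzzy weights, and the decision matrix and weights here are invented.

```python
# Simplified (crisp, non-grey) COPRAS ranking of three hypothetical projects.
import numpy as np

# rows = alternative projects, columns = criteria
X = np.array([[80, 12, 3.0],
              [65, 10, 2.2],
              [90, 15, 4.1]], float)
weights = np.array([0.5, 0.3, 0.2])          # e.g. taken from an AHP pairwise comparison
benefit = np.array([True, False, False])     # criterion 0 is a benefit, 1-2 are costs

D = weights * X / X.sum(axis=0)              # weighted, column-normalized decision matrix
S_plus = D[:, benefit].sum(axis=1)           # sums over benefit criteria
S_minus = D[:, ~benefit].sum(axis=1)         # sums over cost criteria
Q = S_plus + S_minus.sum() / (S_minus * (1.0 / S_minus).sum())   # relative significance
ranking = np.argsort(-Q)
print("relative significance:", np.round(Q, 4), "best alternative:", ranking[0])
```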

Journal ArticleDOI
15 Oct 2005
TL;DR: This paper provides a survey of various methods for segmenting the palmprint into a region of interest (ROI) and extracting the principal lines, and offers some conclusions and suggestions.
Abstract: The palmprint is one of the most reliable physiological characteristics that can be used to distinguish between individuals. The palmprint recognition process consists of image acquisition, pre-processing, feature extraction, matching and the result. One of the most important stages in these methods is pre-processing, which contains operations such as filtering, Region Of Interest (ROI) extraction and normalization. This paper provides a survey of various methods for segmenting the palmprint into an ROI and extracting the principal lines. ROI segmentation aims to automatically and reliably segment a small region from the captured palmprint image. We pay particular attention to the essential stages of palm localization, segmentation and ROI extraction. Finally, some conclusions and suggestions are offered.

Journal ArticleDOI
30 Nov 2005
TL;DR: In this paper, a hybrid ANFIS (Adaptive Neuro-Fuzzy Inference System) controller with DMRAC (Direct Model Reference Adaptive Control) and mathematical modeling of the kinematic and dynamic solutions was proposed.
Abstract: This paper proposes a hybrid ANFIS (Adaptive Neuro-Fuzzy Inference System) controller with DMRAC (Direct Model Reference Adaptive Control), together with mathematical modelling of the kinematic and dynamic solutions. The controller was hybridized with a classical controller and designed for a spherical-wristed 6-DOF elbow manipulator. The manipulator's trajectory overshoot and settling time affect its movement, so their minimization was the aim. The whole manipulator-controller system was modelled and simulated in MATLAB version 2011a with Robotics Toolbox 9. To increase accuracy, the ANFIS controller was trained on many paths for rule and membership selection. A 3D display model of the manipulator was built in MATLAB, and the design was simulated in MATLAB/Simulink by connecting it to the 3D model. Satisfactory results show the hybrid controller's capacity for precision and speed, both of which are higher than those of a classical controller alone.

Proceedings Article
01 Jan 2005
TL;DR: A new algorithm for linkage disequilibrium based on independent component analysis (ICA) is proposed and analyzed and is able in some cases to discover new patterns due to the inherent properties of ICA and is more robust compared to other techniques since it estimates the missing values.
Abstract: Linkage disequilibrium has gained a lot of attention recently since it can be effectively utilized in various problems in the field of statistical genetics, for example gene mapping and evolutionary inference. In this work, we propose and analyze a new algorithm for linkage disequilibrium based on independent component analysis (ICA). The results comply with results obtained using other published methods. However, the proposed algorithm is able in some cases to discover new patterns due to the inherent properties of ICA and is more robust compared to other techniques since it estimates the missing values.
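
A hedged sketch in the spirit of the abstract (not the authors' algorithm): FastICA from scikit-learn applied to a simulated 0/1/2 genotype matrix with two artificial LD blocks, so that markers loading strongly on the same independent component reveal a block of correlated loci.

```python
# Illustrative ICA decomposition of a simulated genotype matrix.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_individuals, n_markers = 200, 40
# two "LD blocks": markers within a block share a latent haplotype signal
latent = rng.binomial(1, 0.5, size=(n_individuals, 2))
block = np.repeat([0, 1], n_markers // 2)               # first half block 0, second half block 1
noise = rng.binomial(1, 0.1, size=(n_individuals, n_markers))
genotypes = (latent[:, block] + noise).clip(0, 2).astype(float)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(genotypes)                  # individual-level component scores
loadings = ica.mixing_                                  # marker loadings, shape (n_markers, 2)
print("markers with the largest loading on component 0:",
      np.argsort(-np.abs(loadings[:, 0]))[:5])
```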

Journal ArticleDOI
30 Oct 2005
TL;DR: In this article, a cylindrical magneto-hydrodynamics antenna with circular patch and two annular rings is proposed for the wide band of frequency range between 7.9 and 27 GHz.
Abstract: Dielectric Resonator Antennas (DRAs) have received much attention in the last two decades due to several attractive characteristics such as high radiation efficiency, light weight and low profile. There are also increasing challenges in the design of high-bandwidth and multi-band antennas, which can be met using MHD antennas for high-speed and reconfigurable applications in wireless communication. The objective of this work is to design and develop a cylindrical MHD antenna with a circular patch and two annular rings. A magneto-hydrodynamics (MHD) antenna is a fluid-based antenna in which the fluid resonator provides excellent coupling of RF energy into the fluid. Fluid resonator volume, chemical properties, and electric and magnetic fields are factors determining the resonant frequency, gain and return loss. The proposed antenna is tuned over the wide frequency range of 7.9-27 GHz. Simulations using HFSS and measurements have been carried out on a design prototype for 'air' and BSTO (Barium Strontium Titanate Oxide) microwave fluid. The findings of this work are that the fluid-resonator-based hybrid approach enhances the bandwidth by a large factor, and that annular rings with a circular patch in the proper geometry provide multiband operation. Varying the volume of the fluid shifts the resonant frequency of the solid structure within the wideband, and when a magnetic field is applied, a significant improvement is observed in the return loss of the proposed antenna.

Journal ArticleDOI
30 Oct 2005
TL;DR: A new MAC protocol named REMAC is introduced that minimizes idle listening by allowing nodes to remain in the sleep state until it is necessary to wake up, and that also allows participating nodes to wake up during the sleep period, perform the data transfer and return to the sleep state, thereby minimizing the chances of overhearing.
Abstract: Wireless sensor networks are considered a promising way to equip scientists with the capability of developing real-time monitoring systems. This paper discusses the design and development of a wireless sensor network (WSN) that can be used for monitoring agricultural fields. Battery-powered sensor nodes make network deployment easy but limit the lifetime of the network to the capacity of the batteries. The main sources of energy wastage in modern sensor networks are idle listening and overhearing, and duty cycling is a proven mechanism to overcome the energy wasted by idle listening. In this paper we introduce a new MAC protocol named REMAC that minimizes idle listening by allowing nodes to remain in the sleep state until it is necessary to wake up. It also allows participating nodes to wake up during the sleep period, perform the data transfer and return to the sleep state, thereby minimizing the chances of overhearing. We show the performance of REMAC through detailed simulations in NS-2 and compare it with similar synchronous protocols that employ duty cycling. In this analysis, REMAC proves to save much more energy than the others.
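
A back-of-envelope sketch (not REMAC itself) of why duty cycling matters: the radio energy a node spends in an hour, always listening versus sleeping 95% of the time, using made-up power draws.

```python
# Illustrative energy comparison for an always-on radio vs. a 5% duty cycle.
listen_mw, sleep_mw = 60.0, 0.09          # hypothetical radio power draws (milliwatts)
hour_s = 3600
duty_cycle = 0.05                         # awake 5% of the time

always_on = listen_mw * hour_s / 1000.0                                   # joules
duty_cycled = (listen_mw * duty_cycle + sleep_mw * (1 - duty_cycle)) * hour_s / 1000.0
print(f"always-on: {always_on:.1f} J, duty-cycled: {duty_cycled:.1f} J")
```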

Journal ArticleDOI
30 Aug 2005
TL;DR: This paper addresses web service negotiation using AHP for the business-oriented design of Service Level Agreements.
Abstract: Not available; only the title-page header was extracted: "Web Service Negotiation Using AHP for Business Oriented Design of Service Level Agreements" — R. Raju (Associate Professor and Head, Department of Information Technology), D. Dhivya, R. Saranya and S. I. Abbinaya, Sri Manakula Vinayagar Engineering College, Pondicherry 605104 (rajupdy@gmail.com, dhivya.blu@gmail.com, rsaranya.kavya@gmail.com, saiabbinaya333@gmail.com).

Journal ArticleDOI
30 Nov 2005
TL;DR: A FWOWA approach is proposed that discards irrelevant features, avoiding overfitting and improving the accuracy of the clustering.
Abstract: Feature reduction finds the optimal feature subset using machine learning techniques and evaluation criteria. Real-world datasets contain irrelevant features that should be removed using a multi-criterion decision approach. The relevant features are determined using the WOWA criterion in fuzzy set theory. Two important criteria are considered: the preferential weights and the importance weights of the features. These weights are used to find the irrelevant features so that they can be removed from the mixture. In this context, the WOWA operator has the capability of assigning both preferential and importance weights to the features, which helps to identify the irrelevant features by selecting the relevant ones using these weights in the feature reduction process. The objective of this paper is to propose a FWOWA approach that discards irrelevant features, avoiding overfitting and improving the accuracy of the clustering. The irrelevant features are determined by applying WOWA: they are identified and removed from the Gaussian mixture using RPEM.
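
A hedged illustration of a WOWA-style aggregation (Torra's operator with a piecewise-linear quantifier), combining importance and preferential weights into one relevance score per feature; the weights and feature scores below are invented, and this is not the paper's FWOWA pipeline.

```python
# Toy WOWA aggregation of per-criterion relevance scores for each feature.
import numpy as np

def wowa(values, importance, owa_weights):
    values = np.asarray(values, float)
    p = np.asarray(importance, float); p = p / p.sum()     # importance weights
    w = np.asarray(owa_weights, float); w = w / w.sum()    # preferential (OWA) weights
    n = len(values)
    order = np.argsort(-values)                            # arguments in decreasing order
    # piecewise-linear interpolation of the cumulative OWA weights (the quantifier w*)
    xs = np.linspace(0, 1, n + 1)
    ys = np.concatenate(([0.0], np.cumsum(w)))
    cum_p = np.concatenate(([0.0], np.cumsum(p[order])))
    w_star = np.interp(cum_p[1:], xs, ys) - np.interp(cum_p[:-1], xs, ys)
    return float(np.dot(w_star, values[order]))

# per-feature relevance scores under three criteria, aggregated feature by feature
scores = {"f1": [0.9, 0.8, 0.7], "f2": [0.2, 0.3, 0.1], "f3": [0.6, 0.4, 0.9]}
importance = [0.5, 0.3, 0.2]            # importance of each criterion
owa = [0.6, 0.3, 0.1]                   # preferential weights favouring high scores
agg = {f: round(wowa(v, importance, owa), 3) for f, v in scores.items()}
print(agg, "-> keep features whose aggregated score clears a chosen threshold")
```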