Book ChapterDOI

Soft Computing in Bioinformatics

01 Jan 2021-pp 431-446
TL;DR: In this chapter, the authors explore soft-computing-based techniques for bioinformatics and discuss the necessity of soft computing techniques and their suitability for solving a wide spectrum of bioinformatics-related problems.
Abstract: In this chapter, we explore soft-computing-based techniques for bioinformatics. The necessity of soft computing techniques and their suitability for solving a wide spectrum of bioinformatics-related problems is reviewed. The basics of soft computing techniques are discussed, and their relevance to many bioinformatics problems is elaborated. Experimental results on two real-world bioinformatics datasets demonstrate the efficacy of soft computing techniques over conventional ones for biological data problems.
References
Journal ArticleDOI
TL;DR: GARS has proved to be a suitable tool for performing feature selection on high-dimensional data and could be adopted when standard feature selection approaches do not provide satisfactory results or when there is a huge amount of data to be analyzed.
Abstract: Feature selection is a crucial step in machine learning analysis. Many current feature selection approaches do not ensure satisfactory results, in terms of accuracy and computational time, when the amount of data is huge, as in 'Omics' datasets. Here, we propose an innovative implementation of a genetic algorithm, called GARS, for fast and accurate identification of informative features in multi-class, high-dimensional datasets. In all simulations, GARS outperformed two standard filter-based, two 'wrapper', and one 'embedded' selection method, showing high classification accuracy in a reasonable computational time. GARS proved to be a suitable tool for performing feature selection on high-dimensional data and could therefore be adopted when standard feature selection approaches do not provide satisfactory results or when there is a huge amount of data to be analyzed.
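The GA-driven selection loop described in the abstract can be sketched as follows. This is a toy illustration, not the actual GARS implementation: the binary-mask encoding is the usual GA feature-selection setup, but the fitness surrogate, the `INFORMATIVE` feature set, and all parameters are hypothetical stand-ins.

```python
# Toy GA feature selection in the spirit of the abstract (NOT the actual
# GARS implementation). The fitness surrogate and the INFORMATIVE set are
# hypothetical stand-ins for cross-validated classifier accuracy.
import random

random.seed(0)

N_FEATURES = 20
INFORMATIVE = {2, 5, 11}  # hypothetical "truly informative" features

def fitness(mask):
    # Reward selecting informative features; small penalty on mask size
    # (a parsimony pressure, as wrapper-style GAs often apply).
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.05 * sum(mask)

def evolve(pop_size=30, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection + elitism
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ (random.random() < p_mut)  # bit-flip mutation
                             for bit in child])
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
selected = {i for i, bit in enumerate(best) if bit}
```

A real deployment would replace `fitness` with cross-validated classifier accuracy computed on the selected columns of the dataset.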

26 citations

Journal ArticleDOI
01 Jan 2011
TL;DR: The architecture of a neuron with a non-linear aggregation function for complex-valued signals is presented, and the superiority of the proposed neuron-based network over real and complex multilayer perceptrons is demonstrated through a variety of experiments.
Abstract: The key element of neurocomputing research in the complex domain is the development of an artificial neuron model with improved computational power and generalization ability. Non-linear activities in neuronal interactions are observed in biological neurons. This paper presents the architecture of a neuron with a non-linear aggregation function for complex-valued signals. The proposed aggregation function is conceptually based on the generalized mean of the signals impinging on a neuron. This function is general enough to realize various conventional aggregation functions as special cases. The generalized-mean neuron has a simpler structure, and varying the value of the generalization parameter embraces the higher-order structure of a neuron. Hence, it can be used without the risk of combinatorial explosion that arises in higher-order neurons. The superiority of the proposed neuron-based network over real and complex multilayer perceptrons is demonstrated through a variety of experiments.
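The generalized-mean aggregation described above can be illustrated as follows. The cited paper develops it for complex-valued signals; this sketch uses real inputs and omits synaptic weighting for simplicity.

```python
# Sketch of a generalized-mean aggregation function (illustrative; the
# cited paper works in the complex domain, here real inputs are used and
# synaptic weighting is omitted).
import math

def generalized_mean(x, p):
    """((1/n) * sum(x_i ** p)) ** (1/p), with p the generalization parameter."""
    n = len(x)
    if p == 0:  # limiting case p -> 0: the geometric mean
        return math.exp(sum(math.log(v) for v in x) / n)
    return (sum(v ** p for v in x) / n) ** (1.0 / p)

# One parameterized function realizes several conventional aggregations:
x = [1.0, 2.0, 4.0]
arithmetic = generalized_mean(x, 1)   # plain averaging / summation-style
geometric = generalized_mean(x, 0)    # product-like aggregation
harmonic = generalized_mean(x, -1)
```

Sweeping the single parameter `p` thus moves the neuron between summation-like and product-like behavior, which is what lets one model subsume several conventional aggregation functions.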

23 citations

Journal ArticleDOI
01 Aug 2010
TL;DR: Two novel compensatory-type aggregation functions for artificial neurons are proposed that produce the net potential as a linear or non-linear composition of basic summation and radial basis operations over a set of input signals.
Abstract: The computational power of a neuron lies in the spatial grouping of synapses belonging to a dendrite tree. Attempts to give a mathematical representation to this grouping process continue to be a fascinating field of work for researchers in the neural network community. In the literature, we generally find neuron models that comprise a summation, radial basis, or product aggregation function as the basic unit of a feed-forward multilayer neural network. All these models and their corresponding networks have their own merits and demerits. The MLP constructs a global approximation to the input–output mapping, while an RBF network, using an exponentially decaying localized non-linearity, constructs a local approximation. In this paper, we propose two novel compensatory-type aggregation functions for artificial neurons. They produce the net potential as a linear or non-linear composition of basic summation and radial basis operations over a set of input signals. The neuron models based on these aggregation functions ensure faster convergence and better training and prediction accuracy. The learning and generalization capabilities of these neurons have been tested on various classification and functional mapping problems. These neurons have also shown excellent generalization ability on two-dimensional transformations.
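A minimal sketch of such a compensatory aggregation, assuming a simple convex (linear) composition of the summation and radial-basis parts; the weights, centers, and the mixing parameter `gamma` are illustrative, and the paper's exact formulation may differ.

```python
# Sketch of a compensatory aggregation: the net potential is a linear
# (convex) composition of a summation part and a radial-basis part.
# Weights, centers, and the mixing parameter gamma are illustrative.
import math

def summation_part(x, w, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def rbf_part(x, c, sigma):
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def compensatory_net(x, w, b, c, sigma, gamma):
    # gamma trades global (MLP-like) against local (RBF-like) behavior
    return gamma * summation_part(x, w, b) + (1.0 - gamma) * rbf_part(x, c, sigma)

x = [0.5, -1.0]
net = compensatory_net(x, w=[0.8, 0.2], b=0.1,
                       c=[0.0, 0.0], sigma=1.0, gamma=0.6)
```

At `gamma = 1` the unit reduces to an ordinary summation neuron and at `gamma = 0` to an RBF unit, which is the compensatory idea: one neuron can interpolate between global and local approximation.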

22 citations

Proceedings ArticleDOI
23 Jul 2007
TL;DR: It is shown that the same semantic constraints used to ensure linguistic interpretability in the data-driven design of fuzzy systems are also useful in the design of evolutive fuzzy clustering algorithms.
Abstract: In this paper it is shown that the same semantic constraints used to ensure linguistic interpretability in the data-driven design of fuzzy systems are also useful in the design of evolutive fuzzy clustering algorithms. Specifically, it is shown that these constraints generalize those used in popular fuzzy clustering algorithms such as FCM. Experimental studies illustrate the effectiveness of this approach to clustering. The algorithm optimizes the clusters' parameters as well as the number of clusters (using chromosomes of dynamically variable length).
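For reference, the standard FCM membership update that such constrained evolutive variants generalize can be sketched as follows (1-D points and fuzzifier m = 2, for illustration only).

```python
# Standard FCM membership update, the baseline that constrained evolutive
# fuzzy clustering variants generalize (1-D points, fuzzifier m = 2,
# values illustrative).
def fcm_memberships(point, centers, m=2.0):
    d = [abs(point - c) for c in centers]
    if 0.0 in d:  # point coincides with a center: crisp membership
        return [1.0 if di == 0.0 else 0.0 for di in d]
    expo = 2.0 / (m - 1.0)
    # u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)); memberships sum to 1
    return [1.0 / sum((di / dj) ** expo for dj in d) for di in d]
```

An evolutive variant would encode the cluster centers (and the number of clusters, via variable-length chromosomes) in a genetic search rather than alternating FCM update steps.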

13 citations

Journal ArticleDOI
TL;DR: This article deals with a 3D back-propagation (3D-BP) learning algorithm, an extension of the conventional single-dimension back-propagation algorithm.
Abstract: In this paper, we investigate a neural network with three-dimensional parameters for applications such as 3D image processing, interpretation of 3D transformations, and 3D object motion. A 3D vector represents a point in 3D space, and an object can be represented by a set of such points. It is therefore desirable to have a 3D vector-valued neural network that treats three signals as one cluster. In such a network, 3D signals flow through the network and are the unit of learning. This article also deals with a related 3D back-propagation (3D-BP) learning algorithm, an extension of the conventional single-dimension back-propagation algorithm. 3D-BP has an inherent ability to learn and generalize 3D motion. The computational experiments presented in this paper evaluate the performance of the learning machine in generalizing 3D transformations and in 3D pattern recognition.
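The idea of treating three signals as one cluster can be sketched with a forward pass of a 3D vector-valued neuron. Representing each weight as a 3x3 matrix and applying a component-wise tanh are assumptions for illustration, not necessarily the paper's exact formulation.

```python
# Sketch of a 3D vector-valued neuron: the three signal components are
# handled as one cluster. The weight-as-3x3-matrix choice and the
# component-wise tanh are illustrative assumptions.
import math

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def neuron_3d(inputs, weights, bias):
    net = list(bias)
    for W, v in zip(weights, inputs):
        Wv = mat_vec(W, v)
        net = [ni + wi for ni, wi in zip(net, Wv)]
    return [math.tanh(c) for c in net]  # output stays a 3D vector

# A 90-degree rotation about z maps the x unit vector to the y unit vector,
# the kind of 3D transformation such a network is meant to learn:
Rz = [[0.0, -1.0, 0.0],
      [1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0]]
out = neuron_3d([[1.0, 0.0, 0.0]], [Rz], [0.0, 0.0, 0.0])
```

Because the weight acts on the whole 3D vector at once, a learned transformation (e.g. a rotation) applies coherently to every point of an object, which is the advantage over training three independent scalar networks.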

11 citations