Author

Prabhat Mahanti

Bio: Prabhat Mahanti is an academic researcher from the University of New Brunswick. The author has contributed to research in topics including ant colony optimization and metaheuristics. The author has an h-index of 11 and has co-authored 46 publications receiving 622 citations. Previous affiliations of Prabhat Mahanti include the Birla Institute of Technology and Science.

Papers
Book Chapter
28 May 2001
TL;DR: Presents the application of hybridized soft computing techniques to automated stock market forecasting and trend analysis, making use of a neural network for one-day-ahead stock forecasting and a neuro-fuzzy system for analyzing the trend of the predicted stock values.
Abstract: The use of intelligent systems for stock market prediction has been widely established. This paper deals with the application of hybridized soft computing techniques for automated stock market forecasting and trend analysis. We make use of a neural network for one-day-ahead stock forecasting and a neuro-fuzzy system for analyzing the trend of the predicted stock values. To demonstrate the proposed technique, we considered the popular Nasdaq-100 index of the Nasdaq Stock Market. We analyzed 24 months of stock data for the Nasdaq-100 main index as well as for six of the companies listed in the Nasdaq-100 index. Input data were preprocessed using principal component analysis and fed to an artificial neural network for stock forecasting. The predicted stock values are then fed to a neuro-fuzzy system to analyze the trend of the market. The forecasting and trend-prediction results obtained with the proposed hybrid system are promising and certainly warrant further research and analysis.

162 citations
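The PCA-plus-neural-network pipeline described in the abstract can be sketched in a few lines of scikit-learn. This is a minimal illustration on a synthetic price series, not the paper's Nasdaq-100 data, and it substitutes a simple threshold rule for the neuro-fuzzy trend stage:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for a daily closing-price series (not the paper's data).
prices = np.cumsum(rng.normal(0, 1, 600)) + 100

# Sliding windows: the last 5 closes predict the next day's close.
window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

# PCA pre-processing followed by a feed-forward neural network,
# mirroring the pipeline described in the abstract.
model = make_pipeline(
    PCA(n_components=3),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X[:-50], y[:-50])        # train on all but the last 50 days
pred = model.predict(X[-50:])      # one-day-ahead forecasts for the held-out tail

# Crude "trend" signal in place of the neuro-fuzzy stage: up if the
# forecast exceeds the previous close, down otherwise.
trend = np.where(pred > X[-50:, -1], "up", "down")
print(trend[:5])
```

The window length, network size, and number of components are illustrative choices; the paper does not specify them here.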

Journal Article
TL;DR: An optimization method based on a real-coded GA with an elitist strategy is proposed for thinning a large linear array of uniformly excited isotropic antennas so that the maximum relative sidelobe level (SLL) is equal to or below a fixed level.
Abstract: In this paper, we propose an optimization method based on a real-coded genetic algorithm (GA) with an elitist strategy for thinning a large linear array of uniformly excited isotropic antennas so that the maximum relative sidelobe level (SLL) is equal to or below a fixed level. The percentage of thinning is always kept equal to or above a fixed value. Two examples are posed and solved with different objectives and with different values of the percentage of thinning that produce nearly the same sidelobe level. Directivities of the thinned arrays are computed, and simulation results for the different problems are compared with published results to illustrate the effectiveness of the proposed method.

116 citations
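The thinning idea can be illustrated with a small genetic algorithm. This sketch uses a binary-coded GA (the paper uses a real-coded one) on a hypothetical 20-element half-wavelength-spaced array, with elitism and a penalty term that enforces a minimum thinning percentage; all sizes and rates are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20                       # elements, half-wavelength spacing (illustrative)
u = np.linspace(-1, 1, 801)  # u = cos(theta)
steer = np.exp(1j * np.pi * np.outer(np.arange(N), u))

def max_sll_db(on):
    """Peak relative sidelobe level (dB) of a thinned, uniformly excited array."""
    af = np.abs(on @ steer)
    af /= af.max()
    sidelobes = af[np.abs(u) > 2.5 / N]   # crude exclusion of the main lobe
    return 20 * np.log10(sidelobes.max())

def fitness(on):
    if on.sum() == 0:
        return 1e9                        # empty array: invalid chromosome
    thin_pct = 1 - on.mean()
    # Penalise chromosomes that thin fewer than 30% of the elements.
    return max_sll_db(on) + (100.0 if thin_pct < 0.30 else 0.0)

pop = (rng.random((40, N)) > 0.4).astype(float)
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[scores.argmin()].copy()   # elitist strategy: best survives intact
    parents = pop[scores.argsort()[:20]]  # truncation selection
    children = []
    for _ in range(len(pop) - 1):
        a, b = parents[rng.integers(0, 20, 2)]
        cut = int(rng.integers(1, N))
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(N) < 0.05                  # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([elite] + children)

best = pop[np.array([fitness(p) for p in pop]).argmin()]
print(f"thinned {100 * (1 - best.mean()):.0f}% of elements, "
      f"SLL = {max_sll_db(best):.1f} dB")
```

Directivity computation and the paper's specific design objectives are omitted; the sketch only shows the SLL-with-thinning-constraint search.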

Journal Article
TL;DR: The paper proposes an initial heuristic algorithm that applies a modified ant colony optimization approach to the diversified service allocation and scheduling mechanism in the cloud paradigm.
Abstract: Scheduling of diversified service requests in distributed computing is a critical design issue. A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtual computers. It comprises not only clusters and grids but also next-generation data centers. The paper proposes an initial heuristic algorithm that applies a modified ant colony optimization approach to the diversified service allocation and scheduling mechanism in the cloud paradigm. The proposed optimization method aims to minimize the scheduling time needed to service all the diversified requests across the different resource allocators available in a cloud computing environment. Keywords: Ant colony, cloud computing, grid, resource allocator, service request.

64 citations
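A toy version of the idea — ants probabilistically assigning service requests to virtual machines, with pheromone reinforcement of the best schedule found so far — might look like this. The workload, VM speeds, and parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical workload: 12 service requests (instruction counts) and 3 VMs (speeds).
tasks = rng.integers(100, 1000, 12).astype(float)
speeds = np.array([1.0, 2.0, 4.0])

pheromone = np.ones((len(tasks), len(speeds)))

def makespan(assign):
    """Completion time of the slowest VM under a task-to-VM assignment."""
    loads = np.zeros(len(speeds))
    for t, vm in enumerate(assign):
        loads[vm] += tasks[t] / speeds[vm]
    return loads.max()

best, best_span = None, np.inf
for _ in range(60):                        # iterations
    for _ in range(10):                    # ants per iteration
        assign = []
        for t in range(len(tasks)):
            # Probabilistic choice: pheromone times a greedy heuristic (VM speed).
            weights = pheromone[t] * speeds
            assign.append(rng.choice(len(speeds), p=weights / weights.sum()))
        span = makespan(assign)
        if span < best_span:
            best, best_span = assign, span
    pheromone *= 0.9                       # evaporation
    for t, vm in enumerate(best):          # reinforce the best-so-far schedule
        pheromone[t, vm] += 1.0 / best_span

print(f"best makespan: {best_span:.1f}")
```

Makespan is used here as the quantity being minimized; the paper's exact objective and its modifications to standard ACO are not spelled out in the abstract.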

Journal Article
TL;DR: A new technique, called DEPRO, utilizes a Differential Evolution (DE) algorithm to learn and optimize the output of the classification method PROAFTN, and provides excellent results, outperforming the most common classification algorithms.
Abstract: This paper introduces a new learning technique for the multicriteria classification method PROAFTN. This new technique, called DEPRO, utilizes a Differential Evolution (DE) algorithm for learning and optimizing the output of the classification method PROAFTN. The limitation of the PROAFTN method is largely due to the set of parameters (e.g., intervals and weights) required to be obtained to perform the classification procedure. Therefore, a learning method is needed to induce and extract these parameters from data. DE is an efficient metaheuristic optimization algorithm based on a simple mathematical structure to mimic a complex process of evolution. Some of the advantages of DE over other global optimization methods are that it often converges faster and with more certainty than many other methods and it uses fewer control parameters. In this work, the DE algorithm is proposed to inductively obtain PROAFTN's parameters from data to achieve a high classification accuracy. Based on results generated from 12 public datasets, DEPRO provides excellent results, outperforming the most common classification algorithms.

37 citations
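The DE mechanics that DEPRO relies on (DE/rand/1 mutation, binomial crossover, greedy one-to-one selection) can be sketched as below. The objective here is a stand-in quadratic; in DEPRO it would score PROAFTN's classification performance for a candidate set of interval and weight parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in objective: a simple quadratic keeps the DE mechanics visible.
# DEPRO would instead evaluate PROAFTN's accuracy for the candidate parameters.
def objective(x):
    return float(np.sum((x - 0.5) ** 2))

dim, pop_size, F, CR = 4, 20, 0.8, 0.9   # dimension, population, scale factor, crossover rate
pop = rng.random((pop_size, dim))
scores = np.array([objective(p) for p in pop])

for _ in range(200):
    for i in range(pop_size):
        # Pick three distinct vectors other than the current one.
        idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)                 # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True          # guarantee at least one mutant gene
        trial = np.where(cross, mutant, pop[i])  # binomial crossover
        s = objective(trial)
        if s <= scores[i]:                       # greedy one-to-one selection
            pop[i], scores[i] = trial, s

print(f"best objective value: {scores.min():.6f}")
```

The few control parameters (F, CR, population size) are exactly the economy the abstract credits DE with; the values above are conventional defaults, not the paper's.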

01 Jan 2009
TL;DR: The paper focuses on the soft computing paradigm in bioinformatics, with particular emphasis on integrative research.
Abstract: Bioinformatics is a promising and innovative research field in the 21st century. Despite the high number of techniques specifically dedicated to bioinformatics problems, as well as many successful applications, we are at the beginning of a process to massively integrate the aspects and experiences of the different core subjects such as biology, medicine, computer science, engineering, chemistry, physics, and mathematics. Recently, the use of soft computing tools for solving bioinformatics problems has been gaining the attention of researchers because of their ability to handle imprecision and uncertainty in large and complex search spaces. The paper focuses on the soft computing paradigm in bioinformatics, with particular emphasis on integrative research.

37 citations


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
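The mail-filtering example from the fourth category can be made concrete with a few lines of scikit-learn: a naive Bayes classifier learns accept/reject rules from labelled examples instead of having them hand-coded. The tiny corpus below is invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data standing in for a user's past accept/reject decisions.
mails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting agenda for monday", "project report attached",
    "claim your free prize", "lunch tomorrow with the team",
]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

# Learn the filtering rules from examples rather than hand-coding them.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(mails, labels)

print(clf.predict(["free prize offer", "monday project meeting"]))
# → ['spam' 'ham']
```

As new messages are accepted or rejected, refitting on the growing corpus keeps the rules up to date automatically, which is exactly the maintenance burden the abstract says learning removes.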

Journal Article
TL;DR: This paper surveys more than 100 published articles that focus on neural and neuro-fuzzy techniques derived and applied to forecast stock markets, showing that soft computing techniques are widely accepted for studying and evaluating stock market behavior.
Abstract: The key to successful stock market forecasting is achieving the best results with the minimum required input data. Given stock market model uncertainty, soft computing techniques are viable candidates for capturing stock market nonlinear relations, returning significant forecasting results without necessarily requiring prior knowledge of the statistical distributions of the input data. This paper surveys more than 100 related published articles that focus on neural and neuro-fuzzy techniques derived and applied to forecast stock markets. Classifications are made in terms of input data, forecasting methodology, performance evaluation and performance measures used. Through the surveyed papers, it is shown that soft computing techniques are widely accepted for studying and evaluating stock market behavior.

714 citations

Journal Article
TL;DR: The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research.
Abstract: In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semiautomated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.

711 citations

Journal Article
TL;DR: Experimental results show that the performance of all the prediction models improves when these technical parameters are represented as trend-deterministic data, and that random forest outperforms the other three prediction models on overall performance.
Abstract: Highlights: Four machine learning algorithms are used for prediction in stock markets. Focus is on data pre-processing to improve the prediction accuracy. Technical indicators are discretised by exploiting the inherent opinion. Prediction accuracy of the algorithms increases when discrete data is used. This paper addresses the problem of predicting the direction of movement of stock prices and stock price indices for Indian stock markets. The study compares four prediction models, Artificial Neural Network (ANN), Support Vector Machine (SVM), random forest and naive Bayes, with two approaches for input to these models. The first approach for input data involves computation of ten technical parameters using stock trading data (open, high, low and close prices), while the second approach focuses on representing these technical parameters as trend-deterministic data. The accuracy of each of the prediction models for each of the two input approaches is evaluated. Evaluation is carried out on 10 years of historical data, from 2003 to 2012, of two stocks, namely Reliance Industries and Infosys Ltd., and two stock price indices, CNX Nifty and S&P Bombay Stock Exchange (BSE) Sensex. The experimental results suggest that for the first approach of input data, where the ten technical parameters are represented as continuous values, random forest outperforms the other three prediction models on overall performance. The experimental results also show that the performance of all the prediction models improves when these technical parameters are represented as trend-deterministic data.

657 citations
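The paper's second input approach — turning a continuous technical indicator into trend-deterministic data — can be illustrated on a synthetic price series. Here a 10-day simple moving average is discretised to +1 ("up") or -1 ("down"); the choice of indicator and window is an assumption for illustration, not the paper's full set of ten parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
close = np.cumsum(rng.normal(0, 1, 120)) + 50   # synthetic closes, not the paper's data

def sma(x, n):
    """n-day simple moving average (valid region only)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

# Continuous indicator: 10-day simple moving average.
sma10 = sma(close, 10)
aligned_close = close[9:]    # align closes with the SMA's valid region

# Trend-deterministic representation: +1 if the close is above its SMA
# (an "up" opinion), -1 otherwise. The paper applies this kind of
# discretisation to ten indicators before feeding RF/ANN/SVM/naive Bayes.
signal = np.where(aligned_close > sma10, 1, -1)

print(signal[:10])
```

Each of the other indicators (RSI, MACD, stochastic oscillators, and so on) would be discretised by its own conventional up/down rule in the same fashion.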