scispace - formally typeset

Showing papers on "Soft computing published in 2016"


Journal ArticleDOI
TL;DR: This paper presents a survey of state-of-the-art work in all aspects of approximate computing and highlights future research challenges in this field.
Abstract: As one of the most promising energy-efficient computing paradigms, approximate computing has gained a lot of research attention in the past few years. This paper presents a survey of state-of-the-art work in all aspects of approximate computing and highlights future research challenges in this field.

420 citations



Journal ArticleDOI
TL;DR: The results show that the proposed MONF model outperforms the benchmark models above; it is concluded that the MONF model is a new alternative tool that should be used in flood susceptibility mapping.

246 citations


Journal ArticleDOI
TL;DR: A taxonomy of indexing techniques is developed to provide insight that enables researchers to understand and select a technique as a basis for designing an indexing mechanism with reduced time and space consumption for BD-MCC.
Abstract: The explosive growth in volume, velocity, and diversity of data produced by mobile devices and cloud applications has contributed to the abundance of data, or `big data.' Available solutions for efficient data storage and management cannot fulfill the needs of such heterogeneous data, whose amount is continuously increasing. For efficient retrieval and management, existing indexing solutions become inefficient as index size and seek time grow rapidly, so an optimized index scheme is required for big data. Regarding real-world applications, the indexing issue with big data in cloud computing is widespread in healthcare, enterprises, scientific experiments, and social networks. To date, diverse soft computing, machine learning, and other artificial intelligence techniques have been utilized to satisfy the indexing requirements, yet the literature contains no state-of-the-art survey investigating the performance and consequences of these techniques for solving indexing issues as big data enters cloud computing. The objective of this paper is to investigate and examine the existing indexing techniques for big data. A taxonomy of indexing techniques is developed to provide insight that enables researchers to understand and select a technique as a basis for designing an indexing mechanism with reduced time and space consumption for BD-MCC. In this study, 48 indexing techniques have been studied and compared based on 60 articles related to the topic. The indexing techniques' performance is analyzed based on their characteristics and big data indexing requirements. The main contribution of this study is a taxonomy of indexing techniques categorized by method: non-artificial intelligence, artificial intelligence, and collaborative artificial intelligence indexing methods. In addition, the significance of different procedures and their performance is analyzed, along with the limitations of each technique. In conclusion, several key future research topics with the potential to accelerate the progress and deployment of artificial intelligence-based cooperative indexing in BD-MCC are elaborated on.

222 citations


BookDOI
31 Aug 2016
TL;DR: The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control, and discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes.
Abstract: The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo from August 31 to September 2, 2013. The book consists of twenty-nine selected contributions, which were thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft computing. Besides providing readers with soft computing fundamentals and soft-computing-based inductive methodologies/algorithms, the book also discusses key industrial soft computing applications, as well as multidisciplinary solutions developed for a variety of purposes, such as windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, researchers and practitioners in the area of soft computing, systems modeling and control.

147 citations


Journal ArticleDOI
TL;DR: An overview of the current state of soft computing techniques is given, and the advantages and disadvantages of soft computing compared to traditional hard computing techniques are described.

146 citations


01 Jan 2016
TL;DR: People have searched hundreds of times for their chosen books like this learning and soft computing support vector machines neural networks and fuzzy logic models, but end up with malicious downloads.
Abstract: Thank you for reading learning and soft computing support vector machines neural networks and fuzzy logic models. As you may know, people have searched hundreds of times for their chosen books like this one, but end up with malicious downloads. Rather than enjoying a good book with a cup of coffee in the afternoon, they instead juggled with infectious bugs inside their computer.

125 citations


Journal ArticleDOI
TL;DR: Estimation and prediction results of the ELM models were compared with genetic programming (GP) and artificial neural network (ANN) models; the results indicate that, on the whole, the new algorithm achieves good generalization performance.
Abstract: Evaluation of the parameters affecting the shear strength and ductility of steel–concrete composite beams is the goal of this study. This study focuses on predicting the beam's strength and ductility from the relevant inputs using a soft computing scheme, the extreme learning machine (ELM). Estimation and prediction results of the ELM models were compared with genetic programming (GP) and artificial neural network (ANN) models. Referring to the experimental results, as opposed to the GP and ANN methods, the ELM approach enhanced generalization ability and predictive accuracy. Moreover, the achieved results indicated that the developed ELM models can be used with confidence for further work on formulating a novel model predictive strategy for the shear strength and ductility of steel–concrete composites. Furthermore, the experimental results indicate that, on the whole, the new algorithm achieves good generalization performance. In comparison to other widely used conventional learning algorithms, the ELM has a much faster learning ability.
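The ELM scheme this abstract relies on is compact enough to sketch in full: input-side weights are drawn at random and left untrained, and only the output weights are solved in closed form by least squares. This is a generic illustration of the technique on a toy regression task, not the authors' beam model; all names and parameters are illustrative.

```python
import numpy as np

def elm_fit(X, y, n_hidden=30, seed=0):
    """Basic extreme learning machine: random, untrained hidden layer;
    output weights obtained in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression stand-in for the strength/ductility data: y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

The fast learning ability cited in the abstract comes from this closed-form solve: there is no iterative backpropagation at all.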

116 citations


Journal ArticleDOI
01 Jan 2016
TL;DR: A new soft computing framework is developed for solving nanofluidic problems based on fluid flow and heat transfer of multi-walled carbon nanotube along a flat plate with Navier slip boundary with the help of artificial neural networks (ANNs), Genetic Algorithms (GAs), Interior-Point Algorithm (IPA), and hybridized approach GA-IPA.
Abstract: Novel design of unsupervised ANNs for solving nanofluidic problems in mechanics. Hybrid computing GA-IPA is exploited for finding design parameters of networks. The design scheme is tested effectively on variant fluid flow and heat transfer scenarios. Correctness of the scheme is verified by closely matched results from standard solutions. Statistical performance indices validate consistent accuracy and convergence. In the present study, a new soft computing framework is developed for solving nanofluidic problems based on fluid flow and heat transfer of multi-walled carbon nanotubes (MWCNTs) along a flat plate with a Navier slip boundary, with the help of artificial neural networks (ANNs), Genetic Algorithms (GAs), the Interior-Point Algorithm (IPA), and the hybridized approach GA-IPA. The original PDEs associated with the problem are transformed into a system of nonlinear ODEs using a similarity transformation. A mathematical model of the transformed system is constructed by exploiting the universal function approximation ability of ANNs, and an unsupervised error function is formulated for the system in a least-mean-square sense. Learning of the design variables of the networks is carried out with GAs supported by IPA for rapid local convergence. The design scheme is applied to solve a number of variants by taking water, engine oil, and kerosene oil as base fluids mixed with different concentrations of MWCNTs. The reliability and effectiveness of the design scheme are measured with the help of statistical analysis based on a sufficiently large number of independent runs of the algorithms rather than a single successful run. Comparative studies of the proposed solution are made against standard numerical results in order to establish the correctness of the given scheme.

74 citations



Journal ArticleDOI
01 Aug 2016
TL;DR: The research findings show that the new integrated framework can help identify a set of relevant groutability influencing factors and deliver superior prediction performance compared with other state-of-the-art approaches.
Abstract: Differential Flower Pollination-optimized Support Vector Machine for Groutability Prediction (DFP-SVMGP). A soft computing method for groutability estimation is proposed. A hybrid metaheuristic is constructed to optimize the SVM-based model. The effect of evaluation functions on model performance is studied. Relevant influencing factors in two datasets have been revealed. The new approach attains high prediction accuracy. This research presents a soft computing methodology for groutability estimation of grouting processes that employ cement grouts. The method integrates a hybrid metaheuristic and the Support Vector Machine (SVM) with evolutionary input factor and hyper-parameter selection. The new prediction model is constructed and verified using two datasets of grouting experiments. The contribution of this study to the body of knowledge is multifold. First, the efficacies of the Flower Pollination Algorithm (FPA) and Differential Evolution (DE) are combined to establish an integrated metaheuristic approach, named Differential Flower Pollination (DFP). The integration of the FPA and the DE aims at harnessing the strengths and compensating for the disadvantages of each individual optimization algorithm. Second, the DFP is employed to optimize the input factor selection and hyper-parameter tuning processes of the SVM-based groutability prediction model. Third, this study conducts a comparative work to investigate the effects of different evaluation functions on model performance. Finally, the research findings show that the new integrated framework can help identify a set of relevant groutability influencing factors and deliver superior prediction performance compared with other state-of-the-art approaches.

Journal ArticleDOI
01 Jan 2016
TL;DR: Experimental results on the prototype system demonstrate that NeverStop can efficiently help reduce the average waiting time for vehicles; a genetic algorithm illustrating how the average waiting time is derived is also presented.
Abstract: We present NeverStop, which utilizes genetic algorithms and fuzzy control methods in big data intelligent transportation systems. We integrate the fuzzy control module into the NeverStop system design; the fuzzy control architecture completes the integration and modeling of the traffic control systems. We present a genetic algorithm illustrating how the average waiting time is derived; its involvement amplifies the NeverStop system and facilitates the fuzzy control module. NeverStop utilizes the fuzzy control method and genetic algorithm to adjust the waiting time for the traffic lights, so the average waiting time can be significantly reduced. Academia and industry have entered the big data era in many computer software and embedded system related fields. The intelligent transportation system problem is one of the important areas among real big data application scenarios. However, managing traffic lights efficiently poses a significant challenge due to the scale of accumulated dynamic car-flow data. In this paper, we present NeverStop, which utilizes genetic algorithms and fuzzy control methods in big data intelligent transportation systems. NeverStop is constructed with sensors to control the traffic lights at an intersection automatically. It utilizes the fuzzy control method and genetic algorithm to adjust the waiting time for the traffic lights, so the average waiting time can be significantly reduced. A prototype system has been implemented on an EBox-II terminal device, running the fuzzy control and genetic algorithms. Experimental results on the prototype system demonstrate that NeverStop can efficiently reduce the average waiting time for vehicles.
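The waiting-time optimization can be sketched with a toy genetic algorithm (not the NeverStop implementation): candidate green-time splits for a two-phase intersection evolve under a Webster-style quadratic delay cost. The delay model, arrival rates and GA parameters are all invented for illustration.

```python
import random

def avg_wait(g, r1=0.6, r2=0.3, cycle=60.0):
    """Toy delay model: each approach's waiting cost grows with its
    arrival rate times the square of its red time (a Webster-style term)."""
    red1, red2 = cycle * (1.0 - g), cycle * g
    return r1 * red1 ** 2 + r2 * red2 ** 2

def evolve(pop_size=30, gens=40, seed=1):
    """Evolve the fraction g of the cycle given as green to approach 1."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.1, 0.9) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=avg_wait)
        survivors = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a + b) / 2 + rng.gauss(0, 0.05)  # blend crossover + mutation
            children.append(min(0.9, max(0.1, child)))
        pop = survivors + children
    return min(pop, key=avg_wait)

best_g = evolve()
# analytic optimum of this toy cost: g* = r1 / (r1 + r2) = 2/3
```

With arrival rates 0.6 and 0.3 the quadratic cost is minimized at a green fraction of 2/3, which the GA approaches after a few dozen generations.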

Journal ArticleDOI
TL;DR: In this article, some soft topological properties are given in detail, namely soft compactness, soft sequential compactness, and continuity and uniform continuity of soft functions between soft topological spaces.
Abstract: After the famous article of Molodtsov [10] in 1999, which initiated the theory of soft sets as a mathematical theory to deal with uncertainty problems, many research works on soft mathematics and its applications in various fields have appeared. In [17], the authors introduced a new definition of the soft metric function using soft elements. By this definition, each soft metric in the sense of Das and Samanta [6] is also a soft metric in our concept, but the converse is not true. In the present paper, some soft topological properties are given in detail, namely soft compactness, soft sequential compactness, and continuity and uniform continuity of soft functions between soft topological spaces. We hope that the findings in this paper will help researchers enhance and promote the further study of soft topology to build a general framework for its applications in practical life.

Journal ArticleDOI
06 Aug 2016-Sensors
TL;DR: The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy.
Abstract: In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
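Range-based RSSI localization builds on the log-distance path-loss model; the ANFIS/ANN models in the paper effectively learn this RSSI-to-distance mapping from data rather than assuming its parameters. A minimal analytic sketch follows, where the reference power, reference distance and path-loss exponent are illustrative values:

```python
def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=2.0):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * n))

# at the reference distance the model returns d0; with n = 2,
# a reading 20 dB below the reference means 10x the distance
d1 = rssi_to_distance(-40.0)   # -> 1.0
d2 = rssi_to_distance(-60.0)   # -> 10.0
```

In practice the exponent n varies with the environment (e.g. indoor vs. outdoor velodromes), which is one reason learned models can beat the fixed analytic formula.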

Journal ArticleDOI
01 Mar 2016
TL;DR: The experimental results validate the improved performance of the machine, with less computation time compared to prior studies; in the proposed system, the fuzzy layer parameters are not tuned.
Abstract: Prediction of ground motion parameters using a hybrid soft computing technique. The neuro-fuzzy inference system uses Sugeno-type fuzzy rules with a randomized fuzzy layer and a linear neural network output layer. Faster prediction of peak ground acceleration, velocity and displacement with increased accuracy. In this paper, a novel neuro-fuzzy learning machine called the randomized adaptive neuro-fuzzy inference system (RANFIS) is proposed for predicting the parameters of ground motion associated with seismic signals. This advanced learning machine integrates the explicit knowledge of fuzzy systems with the learning capabilities of neural networks, as in the conventional adaptive neuro-fuzzy inference system (ANFIS). In RANFIS, to accelerate the learning speed without compromising the generalization capability, the fuzzy layer parameters are not tuned. The three time-domain ground motion parameters predicted by the model are peak ground acceleration (PGA), peak ground velocity (PGV) and peak ground displacement (PGD). The model is developed using the database released by PEER (Pacific Earthquake Engineering Research Center). Each ground motion parameter is related mainly to four seismic parameters, namely earthquake magnitude, faulting mechanism, source-to-site distance and average soil shear wave velocity. The experimental results validate the improved performance of the machine, with less computation time compared to prior studies.

BookDOI
TL;DR: This two-volume set (CCIS 610 and 611) constitutes the proceedings of the 16th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2016, held in Eindhoven, The Netherlands, in June 2016.
Abstract: This two-volume set (CCIS 610 and 611) constitutes the proceedings of the 16th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2016, held in Eindhoven, The Netherlands, in June 2016. The revised full papers, presented together with four invited talks, were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on fuzzy measures and integrals; uncertainty quantification with imprecise probability; textual data processing; belief functions theory and its applications; graphical models; fuzzy implications functions; applications in medicine and bioinformatics; real-world applications; soft computing for image processing; clustering; fuzzy logic, formal concept analysis and rough sets; graded and many-valued modal logics; imperfect databases; multiple criteria decision methods; argumentation and belief revision; databases and information systems; conceptual aspects of data aggregation and complex data fusion; fuzzy sets and fuzzy logic; decision support; comparison measures; machine learning; social data processing; temporal data processing; aggregation.

Journal ArticleDOI
TL;DR: Experimental results show that the key to a successful defect classifier is the feature extraction method – namely, the novel CBIR-based one outperforms all competitors – and they illustrate the greater effectiveness of the U-BRAIN algorithm and the MLP neural network among the soft computing methods in this kind of application.

Journal ArticleDOI
01 Jan 2016
TL;DR: The proposed soft computing method has been tested and compared to other algorithms on the problem of predicting SVI in WWTPs, and it demonstrates its effectiveness in achieving considerably better prediction performance for SVI values.
Abstract: The structure of the RSONN can be self-organized based on the contributions of each hidden node, using not only past states but also current states. The appropriately adjusted learning rates of the RSONN are derived based on the Lyapunov stability theorem, and the convergence of the proposed RSONN is discussed. An experimental hardware setup, including the proposed soft computing method, is built. The experimental results have confirmed that the soft computing method exhibits satisfactory prediction performance for SVI. In this paper, a soft computing method based on a recurrent self-organizing neural network (RSONN) is proposed for predicting the sludge volume index (SVI) in the wastewater treatment process (WWTP). For this soft computing method, a growing and pruning method is developed to tune the structure of the RSONN by sensitivity analysis (SA) of the hidden nodes. Redundant hidden nodes are removed and new hidden nodes are inserted when the SA values of the hidden nodes meet the criteria. The structure of the RSONN is thus able to self-organize to maintain prediction accuracy. Moreover, the convergence of the RSONN is discussed both in the self-organizing phase and in the phase following the modification of the structure. Finally, the proposed soft computing method has been tested and compared to other algorithms by applying it to the problem of predicting SVI in a WWTP. Experimental results demonstrate its effectiveness in achieving considerably better prediction performance for SVI values.
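The growing-and-pruning idea can be illustrated with a crude stand-in for the paper's sensitivity analysis: score each hidden node by the variance of its weighted output over the data, and remove nodes that contribute (almost) nothing. This is a simplified sketch, not the RSONN algorithm; the tiny network, data and threshold are illustrative.

```python
import math

def node_contributions(xs, weights, biases, out_w):
    """Score each hidden node by the variance of its weighted output
    over the data -- a crude stand-in for sensitivity analysis."""
    scores = []
    for j in range(len(weights)):
        vals = [out_w[j] * math.tanh(weights[j] * x + biases[j]) for x in xs]
        mean = sum(vals) / len(vals)
        scores.append(sum((v - mean) ** 2 for v in vals) / len(vals))
    return scores

def prune(weights, biases, out_w, xs, threshold=1e-4):
    """Drop hidden nodes whose contribution falls below the threshold."""
    scores = node_contributions(xs, weights, biases, out_w)
    keep = [j for j, s in enumerate(scores) if s >= threshold]
    return ([weights[j] for j in keep],
            [biases[j] for j in keep],
            [out_w[j] for j in keep])

xs = [i / 10 for i in range(-20, 21)]
weights = [1.0, 0.5, 0.0]   # third node has zero input weight: constant output
biases  = [0.0, 1.0, 0.3]
out_w   = [2.0, -1.0, 5.0]
w2, b2, o2 = prune(weights, biases, out_w, xs)   # the constant node is removed
```

Growing works the mirror way in the paper: new nodes are inserted when the sensitivity criteria indicate the current structure is too small.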

Journal ArticleDOI
TL;DR: A simulation-optimization approach to achieving an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX compounds is presented, and results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM.

Journal ArticleDOI
TL;DR: In this study several soft computing methods, including the extreme learning machine (ELM) and support vector regression (SVR), are analyzed for robotic gripper applications; the ELM approach shows the highest accuracy.
Abstract: Adaptive grippers should be able to detect and recognize grasped objects. To make this possible, a control algorithm needs to be established to control gripper tasks. Since gripper movements are highly nonlinear, it is desirable to avoid conventional control strategies for robotic manipulators. Instead of the conventional control strategies, more advanced algorithms can be used. In this study several soft computing methods are analyzed for robotic gripper applications. The gripper structure is fully compliant, with embedded sensors. The sensors can be used for grasping-shape detection. As soft computing methods, the extreme learning machine (ELM) and support vector regression (SVR) were established. Other soft computing methods are also analyzed, namely fuzzy, neuro-fuzzy and artificial neural network approaches. The results show the highest accuracy with the ELM approach compared with the other soft computing methods. Controlling the input displacement of a new adaptive compliant gripper. A design of the gripper with embedded sensors as part of its structure. Building an effective prediction model of the input displacement of the gripper. The impact of variation in the input parameters.

BookDOI
01 Jan 2016
TL;DR: This book constitutes the post-conference proceedings of the 12th International Conference on Artificial Evolution, EA 2015, held in Lyon, France, in October 2015; the focus of the conference is on the following topics: Evolutionary Computation, Evolutionary Optimization, Co-evolution, Artificial Life, Population Dynamics, Theory, Algorithmics and Modeling, Implementations, Application of Evolutionary Paradigms to the Real World, Dynamic Optimization, Machine Learning and hybridization with other soft computing techniques.
Abstract: This book constitutes the thoroughly refereed post-conference proceedings of the 12th International Conference on Artificial Evolution, EA 2015, held in Lyon, France, in October 2015. The 18 revised papers were carefully reviewed and selected from 31 submissions. The focus of the conference is on the following topics: Evolutionary Computation, Evolutionary Optimization, Co-evolution, Artificial Life, Population Dynamics, Theory, Algorithmics and Modeling, Implementations, Application of Evolutionary Paradigms to the Real World, Dynamic Optimization, Machine Learning and hybridization with other soft computing techniques.

Journal ArticleDOI
TL;DR: In this paper, a classifying artificial neural network (ANN) ensemble approach is proposed for estimating the required time for a simulation task and reduced the estimation time considerably.
Abstract: Cloud manufacturing (CMfg) is an extension of cloud computing in the manufacturing sector. The CMfg concept of simulating a factory online by using Web services is a topic of interest. To distribute a simulation workload evenly among simulation clouds, a simulation task is typically decomposed into small parts that are simultaneously processed. Therefore, the time required to complete a simulation task must be estimated in advance. However, this topic is seldom discussed. In this paper, a classifying artificial neural network (ANN) ensemble approach is proposed for estimating the required time for a simulation task. In the proposed methodology, simulation tasks are classified using k-means before their simulation times are estimated. Subsequently, for each task category, an ANN is constructed to estimate the required task time in the category. However, to reduce the impact of ANN overfitting, the required time for each simulation task is estimated using the ANNs of all categories, and the estimation results are then weighted and summed. Thus, the ANNs form an ensemble. In addition to the proposed methodology, six statistical and soft computing methods were applied to real tasks. According to the experimental results, compared with the six existing methods, the proposed methodology reduced the estimation error considerably. In addition, this advantage was statistically significant according to the results of the paired t test.
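The classify-then-estimate pipeline can be sketched end to end: cluster the tasks, keep one model per cluster, and blend every cluster's estimate with distance-based weights so no single (possibly overfit) model dominates. Here the per-cluster "models" are plain functions standing in for trained ANNs, and the softening weights are an assumed choice, not the paper's exact scheme.

```python
import math

def kmeans_1d(xs, k=2, iters=20):
    """Tiny 1-D k-means for classifying tasks by size."""
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * j / (k - 1) for j in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda c: abs(x - centers[c]))
            groups[nearest].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def ensemble_estimate(x, centers, models):
    """Blend every cluster's model, weighted by the task's closeness
    to each cluster center (softens over-fitting of a single model)."""
    w = [math.exp(-abs(x - c)) for c in centers]
    s = sum(w)
    return sum(wi / s * m(x) for wi, m in zip(w, models))

xs = [1, 2, 3, 10, 11, 12]          # toy task sizes
centers = kmeans_1d(xs, k=2)        # converges to [2.0, 11.0]
models = [lambda x: 2 * x,          # stand-in "ANN" for small tasks
          lambda x: 3 * x]          # stand-in "ANN" for large tasks
est = ensemble_estimate(2, centers, models)   # dominated by the small-task model
```

For a task near a cluster center the blend is almost entirely that cluster's model, while borderline tasks get a genuine mixture, which is the point of the weighted sum in the abstract.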

Journal ArticleDOI
01 Dec 2016
TL;DR: Two problem decomposition methods for training feedforward and recurrent neural networks on chaotic time series problems are evaluated, and it is shown that recurrent neural networks have better generalisation ability than feedforward networks for real-world time series problems.
Abstract: In this paper, we evaluate the performance of coevolutionary feedforward and recurrent neural network architectures for chaotic time series problems. We further apply them to financial prediction problems selected from the NASDAQ stock exchange. We highlight the challenges in real-time implementation and present a mobile application framework for financial time series prediction. The results, in general, show that recurrent neural networks have better generalisation ability than feedforward networks for real-world time series problems. The fusion of soft computing methods such as neural networks and evolutionary algorithms has given very promising performance for time series prediction problems. In order to fully harness their strengths for wider impact, effective real-world implementation of prediction systems must incorporate innovative technologies such as mobile computing. Recently, co-evolutionary algorithms have been shown to be very promising for training neural networks for time series prediction. Cooperative coevolution decomposes a problem into subcomponents that are evolved in isolation; cooperation typically involves fitness evaluation. The challenge has been in developing effective subcomponent decomposition methods for different neural network architectures. In this paper, we evaluate the performance of two problem decomposition methods for training feedforward and recurrent neural networks on chaotic time series problems. We further apply them to financial prediction problems selected from the NASDAQ stock exchange. We highlight the challenges in real-time implementation and present a mobile application framework for financial time series prediction. The results, in general, show that recurrent neural networks have better generalisation ability than feedforward networks for real-world time series problems.
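Cooperative coevolution as described here, with subcomponents evolved in isolation and cooperating only through fitness evaluation against representatives of the other subpopulations, can be sketched on a toy sphere function. This is a generic illustration of the scheme, not the paper's neural-network decomposition methods; all parameters are illustrative.

```python
import random

def fitness(vec):
    """Sphere function: global minimum 0 at the origin."""
    return sum(v * v for v in vec)

def cooperative_coevolve(dim=4, n_sub=2, pop=20, gens=30, seed=2):
    rng = random.Random(seed)
    size = dim // n_sub
    # one subpopulation per slice of the full parameter vector
    subs = [[[rng.uniform(-5, 5) for _ in range(size)] for _ in range(pop)]
            for _ in range(n_sub)]
    best = [s[0][:] for s in subs]  # current representative of each subpopulation

    def full(i, cand):
        # assemble a full vector: candidate for slice i, best-so-far elsewhere
        ctx = [b[:] for b in best]
        ctx[i] = cand
        return [v for part in ctx for v in part]

    for _ in range(gens):
        for i in range(n_sub):  # round-robin over subcomponents
            subs[i].sort(key=lambda c: fitness(full(i, c)))
            best[i] = subs[i][0][:]
            elite = subs[i][:pop // 2]
            subs[i] = elite + [
                [v + rng.gauss(0, 0.3) for v in rng.choice(elite)]
                for _ in range(pop - pop // 2)
            ]
    return fitness([v for part in best for v in part])

final = cooperative_coevolve()
```

Each subpopulation only ever sees its own slice of the parameter vector; the full vector needed for a fitness evaluation is assembled from the current best of every other subpopulation, which is exactly the "cooperation through fitness evaluation" the abstract refers to.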

Journal ArticleDOI
01 Nov 2016
TL;DR: A novel soft computing framework using a modified clustering technique, an innovative hourly time-series classification method, a new cluster selection algorithm and a multilayer perceptron neural network (MLPNN) to increase the solar radiation forecasting accuracy is proposed.
Abstract: This paper proposes a new hybrid soft computing framework to increase solar radiation forecasting accuracy. An improved version of the K-means algorithm is proposed to provide fixed, definitive clustering results. A new classification approach is developed to better characterize the irregularities and variations of solar radiation. The new method is important for very short-term forecasting, where the forecast horizon can be as short as a few seconds. Accurate forecasting of renewable-energy sources plays a key role in their integration into the grid. This paper proposes a novel soft computing framework using a modified clustering technique, an innovative hourly time-series classification method, a new cluster selection algorithm and a multilayer perceptron neural network (MLPNN) to increase the solar radiation forecasting accuracy. The proposed clustering method is an improved version of the K-means algorithm that provides more reliable results than standard K-means. The time series classification method is specifically designed for solar data to better characterize its irregularities and variations. Several different solar radiation datasets for different U.S. states are used to evaluate the performance of the proposed forecasting model. The proposed forecasting method is also compared with existing state-of-the-art techniques. The comparison results show the higher accuracy of the proposed model.

Journal ArticleDOI
TL;DR: This research comparatively investigates applicable soft computing techniques for localization across a variety of settings and proposes an alternative scheme that utilizes an extreme learning machine to increase estimation accuracy, demonstrating its effectiveness compared to other state-of-the-art soft-computing-based range-free localization schemes.

Journal ArticleDOI
01 Sep 2016
TL;DR: This article focuses on the relevant hybrid soft computing techniques in practice for content-based image and video retrieval, which serve to enhance the overall performance and robustness of the system with reduced human interference.
Abstract: There has been unrestrained growth of videos on the Internet due to the proliferation of multimedia devices. These videos are mostly stored in unstructured repositories, which pose enormous challenges for both image and video retrieval. Users aim to retrieve videos of interest whose content is relevant to their needs. Traditionally, low-level visual features have been used for content-based video retrieval (CBVR). Consequently, a gap existed between these low-level features and the high-level semantic content. The semantic gap was partially bridged by the proliferation of research on interest point detectors and descriptors, which represent mid-level features of the content. The computational time and human interaction involved in the classical approaches to CBVR are quite cumbersome. In order to increase the accuracy, efficiency and effectiveness of the retrieval process, researchers resorted to soft computing paradigms. The entire retrieval task was automated to a great extent using individual soft computing components. Due to the voluminous growth in the size of multimedia databases, augmented by an exponential rise in the number of users, integration of two or more soft computing techniques is desirable for enhanced efficiency and accuracy of the retrieval process. The hybrid approaches serve to enhance the overall performance and robustness of the system with reduced human interference. This article focuses on the relevant hybrid soft computing techniques in practice for content-based image and video retrieval.

Journal ArticleDOI
TL;DR: In this paper, the authors identified the various soft computing approaches which are used for diagnosing and predicting the diseases and identified various diseases for which these approaches are applied, and categories the soft computing approach for clinical support system.
Abstract: In the present era, soft computing approaches play a vital role in solving different kinds of problems and provide promising solutions. Due to their popularity, these approaches have also been applied to healthcare data for effectively diagnosing diseases and obtaining better results than traditional approaches. Soft computing approaches can adapt themselves to the problem domain. Another aspect is a good balance between exploration and exploitation processes. These aspects make soft computing approaches powerful, reliable and efficient, and thus well suited to healthcare data. The first objective of this review paper is to identify the various soft computing approaches which are used for diagnosing and predicting diseases. The second objective is to identify the various diseases to which these approaches are applied. The third objective is to categorize the soft computing approaches for clinical support systems. In the literature, it is found that a large number of soft computing approaches have been applied for effectively diagnosing and predicting diseases from healthcare data; some of these are particle swarm optimization, genetic algorithm, artificial neural network and support vector machine. A detailed discussion of these approaches is presented in the literature section. This work summarizes the various soft computing approaches used in the healthcare domain in the last decade. These approaches are grouped into five categories based on methodology: classification model based systems, expert systems, fuzzy and neuro-fuzzy systems, rule based systems and case based systems. Many techniques are discussed under these categories and all discussed techniques are also summarized in tables. This work also reports the accuracy rate of each soft computing technique, with tabular information for each category including author details, technique, disease and utility/accuracy.
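The exploration/exploitation balance credited to these approaches is easiest to see in particle swarm optimization. The sketch below minimizes a simple sphere function; swapping in a real clinical objective (e.g., scoring a candidate feature subset) is a hypothetical substitution, not a method from any surveyed paper.

```python
import numpy as np

# Minimal particle swarm optimization (PSO) sketch. Each particle is
# pulled toward its personal best (exploitation) and the swarm's global
# best, with random scaling terms that keep the search exploratory.

rng = np.random.default_rng(1)

def pso(objective, dim=4, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

best, best_val = pso(lambda x: float(np.sum(x ** 2)))
print(best_val)  # shrinks toward 0 as the swarm converges
```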

Posted Content
TL;DR: In this article, the authors used fuzzy logic and neural networks to improve the accuracy of the use case points method and showed that an improvement up to 22% can be obtained using the proposed approach.
Abstract: Software estimation is a crucial task in software engineering. Software estimation encompasses cost, effort, schedule, and size. The importance of software estimation becomes critical in the early stages of the software life cycle, when the details of the software have not yet been revealed. Several commercial and non-commercial tools exist to estimate software in the early stages. Most software effort estimation methods require software size as one of their important metric inputs, and consequently software size estimation in the early stages becomes essential. One of the approaches that has been used for about two decades in early size and effort estimation is called use case points. The use case points method relies on the use case diagram to estimate the size and effort of software projects. Although the use case points method has been widely used, it has some limitations that might adversely affect the accuracy of estimation. This paper presents techniques using fuzzy logic and neural networks to improve the accuracy of the use case points method. Results showed that an improvement of up to 22% can be obtained using the proposed approach.
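For context, the crisp computation that such fuzzy/neural refinements start from can be sketched with Karner's standard use case points weights; the counts and factor sums below are illustrative, not taken from the paper.

```python
# Crisp use case points (UCP) computation. Weights follow Karner's
# standard method; all inputs here are made-up example values.

UC_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}

def use_case_points(use_cases, actors, tcf_sum, ecf_sum):
    uucw = sum(UC_WEIGHTS[c] * n for c, n in use_cases.items())
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    tcf = 0.6 + 0.01 * tcf_sum   # technical complexity factor
    ecf = 1.4 - 0.03 * ecf_sum   # environmental complexity factor
    return (uucw + uaw) * tcf * ecf

ucp = use_case_points(
    use_cases={"simple": 4, "average": 6, "complex": 2},
    actors={"simple": 2, "average": 2, "complex": 1},
    tcf_sum=25,   # sum of weighted T1..T13 ratings
    ecf_sum=18,   # sum of weighted E1..E8 ratings
)
effort_hours = ucp * 20  # 20 person-hours per UCP is a common default
print(round(ucp, 2), round(effort_hours, 2))
```

The hard, crisp classification of each use case as simple/average/complex is exactly the step the fuzzy-logic refinement softens.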

Journal ArticleDOI
TL;DR: The Response Surface Method (RSM) is extended with high-order polynomial functions for forecasting river stream-flow, termed the High-Order Response Surface (HORS) method, which showed outstanding performance for monthly stream-flow forecasting at AHD.
Abstract: Accurate and reliable stream-flow forecasting has a key role in water resources planning and management. Most recently, soft computing approaches have become progressively prevalent in modelling hydrological variables, and stream-flows specifically. This is due to their ability to capture the non-linearity and non-stationarity characteristics of hydrological variables with minimal information requirements. Despite this, they present several challenges in the modelling architecture, as a suitable pre-processing method for the stream-flow data must be established and an appropriate optimization model integrated in order to re-adjust the weights and biases associated with the model structure. On top of that, artificial intelligence models require "trial and error" procedures in order to be properly tuned (number of hidden layers, number of neurons within the hidden layers and the type of the transfer function). Soft computing approaches also face problems during calibration, such as over-fitting. In this research, the Response Surface Method (RSM) is extended with high-order polynomial functions for forecasting river stream-flow, termed the High-Order Response Surface (HORS) method. Second-, third-, fourth- and fifth-order polynomial functions were examined in order to identify the best fit able to mimic the stream-flow pattern. In order to demonstrate the effectiveness of the proposed model, monthly stream-flow time series data from the Aswan High Dam (AHD) were examined. A detailed analysis of the overall statistical indicators revealed that the proposed method showed outstanding performance for monthly stream-flow forecasting at AHD. It could be concluded that the fifth-order polynomial function outperforms the other orders, especially in the May model, which achieved minimum MAE 0.12, NRMSE 0.07, MSE 0.03 and maximum SF and R2 of 0.97 and 0.99 respectively.
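The core of such a fit, trying several polynomial orders and comparing their errors, can be sketched with ordinary least squares. The synthetic seasonal series below stands in for the AHD data, which is not reproduced here.

```python
import numpy as np

# Fit response surfaces of increasing polynomial order to a seasonal
# "stream-flow" series and compare mean absolute error per order.
# The sinusoidal series is synthetic, for illustration only.

rng = np.random.default_rng(2)
t = np.arange(120)                                    # 120 monthly steps
flow = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=2, size=t.size)

x = (t % 12) / 11.0                                   # month position in [0, 1]
errors = {}
for order in (2, 3, 4, 5):
    coeffs = np.polyfit(x, flow, order)               # least-squares fit
    fitted = np.polyval(coeffs, x)
    errors[order] = float(np.mean(np.abs(fitted - flow)))  # MAE

best_order = min(errors, key=errors.get)
print(best_order, round(errors[best_order], 2))
```

Since the lower-order polynomial bases are nested inside the higher-order ones, the least-squares fit error can only improve with order; the practical question the paper addresses is whether the improvement persists out of sample.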

Journal ArticleDOI
01 May 2016
TL;DR: From the experimental results, it is inferred that the proposed methods take less time to determine optimal thresholds than existing methods such as the Otsu and Kapur methods.
Abstract: Multilevel thresholding is the method applied to segment a given image into unique sub-regions when the gray value distribution of the pixels is not distinct. The segmentation results are affected by factors such as the number of thresholds and the threshold values. Hence, this paper proposes different methods for determining optimal thresholds using optimization techniques, namely GA, PSO and a hybrid model. Parallel algorithms are also proposed and implemented for these methods to reduce the execution time. From the experimental results, it is inferred that the proposed methods take less time to determine the optimal thresholds than existing methods such as the Otsu and Kapur methods.
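The Otsu baseline the proposed methods are compared against amounts to exhaustively maximizing between-class variance over the gray-level histogram. A minimal single-threshold sketch on a synthetic bimodal histogram (the multilevel GA/PSO variants generalize this search to several cut points):

```python
import numpy as np

# Minimal Otsu thresholding: exhaustively pick the cut that maximizes
# between-class variance of the two resulting gray-level classes.

def otsu_threshold(hist):
    p = hist / hist.sum()                      # gray-level probabilities
    levels = np.arange(len(hist))
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal synthetic histogram: dark peak near 60, bright peak near 190.
rng = np.random.default_rng(3)
pixels = np.concatenate([
    rng.normal(60, 10, 5000),
    rng.normal(190, 12, 5000),
]).clip(0, 255).astype(int)
hist = np.bincount(pixels, minlength=256)
print(otsu_threshold(hist))  # a cut between the two peaks
```

The exhaustive scan is cheap for one threshold but grows combinatorially with the number of levels, which is exactly why the paper turns to GA/PSO search and parallel implementations.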