
Showing papers in "International Journal of Machine Learning and Cybernetics in 2015"


Journal ArticleDOI
TL;DR: In this article, the authors present an overview of linear discriminant analysis (LDA) techniques for solving the small sample size (SSS) problem and highlight some important datasets and software packages.
Abstract: Dimensionality reduction is an important aspect of the pattern classification literature, and linear discriminant analysis (LDA) is one of the most widely studied dimensionality reduction techniques. Variants of the LDA technique for solving the small sample size (SSS) problem are applied in many research areas, e.g., face recognition, bioinformatics, and text recognition. Improving the performance of these variants has great potential in various fields of research. In this paper, we present an overview of these methods. We cover the types, characteristics, and taxonomy of the methods that can overcome the SSS problem, and we also highlight some important datasets and software packages.
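The SSS problem arises when the within-class scatter matrix is singular because there are fewer samples than dimensions. A minimal sketch of one common remedy surveyed in this line of work, regularized LDA (the ridge parameter and toy data here are illustrative, not from the paper):

```python
import numpy as np

def lda_direction(X, y, reg=1e-3):
    """Two-class Fisher discriminant direction with a ridge term added to
    the within-class scatter S_w -- one common remedy when n < d leaves
    S_w singular (the small-sample-size case).  `reg` is illustrative."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    Sw += reg * np.eye(X.shape[1])   # regularization restores invertibility
    return np.linalg.solve(Sw, m1 - m0)

# Toy SSS setting: 4 samples in 10 dimensions (n < d).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 10))
X[2:] += 2.0                         # shift class 1 away from class 0
y = np.array([0, 0, 1, 1])
w = lda_direction(X, y)
# The projected class means are separated along the learned direction.
assert (X[y == 1] @ w).mean() > (X[y == 0] @ w).mean()
```

Without the ridge term, `np.linalg.solve` would fail here because four samples cannot span a ten-dimensional scatter matrix.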

136 citations


Journal ArticleDOI
TL;DR: This paper investigates the parallelization and scalability of a common and effective fuzzy clustering algorithm, the fuzzy c-means (FCM) algorithm, which is parallelized using the MapReduce paradigm, outlining how the Map and Reduce primitives are implemented.
Abstract: The management and analysis of big data has been identified as one of the most important emerging needs in recent years, because of the sheer volume and increasing complexity of data being created or collected. Current clustering algorithms cannot handle big data, and therefore scalable solutions are necessary. Since fuzzy clustering algorithms have been shown to outperform hard clustering approaches in terms of accuracy, this paper investigates the parallelization and scalability of a common and effective fuzzy clustering algorithm, the fuzzy c-means (FCM) algorithm. The algorithm is parallelized using the MapReduce paradigm, and we outline how the Map and Reduce primitives are implemented. A validity analysis is conducted to show that the implementation works correctly, achieving competitive purity results compared to state-of-the-art clustering algorithms. Furthermore, a scalability analysis demonstrates the performance of the parallel FCM implementation with an increasing number of computing nodes.
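FCM parallelizes naturally because the center update only needs per-chunk sums of membership-weighted points. A single-process sketch of the Map and Reduce primitives (function names and the driver loop are illustrative; the paper's actual Hadoop implementation is not shown in the abstract):

```python
import numpy as np

def fcm_map(chunk, centers, m=2.0):
    """Map phase: for one data chunk, emit per-center partial sums of
    u^m * x and u^m -- the sufficient statistics for the center update."""
    d2 = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    d2 = np.maximum(d2, 1e-12)                      # avoid divide-by-zero
    inv = d2 ** (-1.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)        # fuzzy memberships
    um = u ** m
    return um.T @ chunk, um.sum(axis=0)             # shapes (k, d), (k,)

def fcm_reduce(partials):
    """Reduce phase: aggregate the partial sums and form the new centers."""
    num = sum(p[0] for p in partials)
    den = sum(p[1] for p in partials)
    return num / den[:, None]

# Driver: two well-separated clusters, data split into 4 "mapper" chunks.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centers = np.array([[1.0, 1.0], [4.0, 4.0]])
for _ in range(10):
    partials = [fcm_map(c, centers) for c in np.array_split(data, 4)]
    centers = fcm_reduce(partials)
```

Because each mapper sees only its chunk and the reducer only sums fixed-size statistics, the iteration scales with the number of nodes rather than the dataset size.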

81 citations


Journal ArticleDOI
TL;DR: This paper applies the concept of the artificial bee colony (ABC) to design an ABC-based path planning algorithm for sparse wireless sensor networks, observing that the problem can be formulated as the traveling salesman problem with neighborhoods, which is known to be NP-hard.
Abstract: In sparse wireless sensor networks, a mobile robot is usually exploited to collect the sensing data. Each sensor has a limited transmission range, and the mobile robot must get into the coverage of each sensor node to obtain the sensing data. To minimize the energy consumed by the traveling of the mobile robot, it is important to plan a data collection path of minimum length that completes the data collection task. In this paper, we observe that this problem can be formulated as the traveling salesman problem with neighborhoods, which is known to be NP-hard. To address this problem, we apply the concept of the artificial bee colony (ABC) and design an ABC-based path planning algorithm. Simulation results validate the correctness and high efficiency of our proposal.

71 citations


Journal ArticleDOI
TL;DR: A hybrid PSO-GA optimization method for automatic design of fuzzy logic controllers (FLC) to minimize the steady state error of a plant’s response is proposed.
Abstract: In this paper we propose the use of a hybrid PSO-GA optimization method for automatic design of fuzzy logic controllers (FLC) to minimize the steady state error of a plant’s response. We test the optimal FLC obtained by the hybrid PSO-GA method using benchmark control plants. The bio-inspired and the evolutionary methods are used to find the parameters of the membership functions of the FLC to obtain the optimal controller. Simulation results are obtained to show the feasibility of the proposed approach. A comparison is also made among the proposed Hybrid PSO-GA, GA and PSO to determine if there is a significant difference in the results.

62 citations


Journal ArticleDOI
TL;DR: The proposed approach of illumination normalization is expected to nullify the effect of illumination variations as well as to preserve the low-frequency details of a face image in order to achieve a good recognition performance.
Abstract: We develop a new approach of illumination normalization for face recognition under varying lighting conditions. The effect of illumination variations decreases over the low-frequency discrete cosine transform (DCT) coefficients. The proposed approach is expected to nullify the effect of illumination variations as well as to preserve the low-frequency details of a face image in order to achieve good recognition performance. This has been accomplished by applying a fuzzy filter over the low-frequency DCT (LFDCT) coefficients. A simple classification technique (k-nearest neighbor classification) is used to establish the performance improvement of the present approach of illumination normalization under high and unpredictable illumination variations. Our fuzzy-filter-based illumination normalization approach achieves a zero error rate on the Yale face database B (named the Yale B database in this work) and the CMU PIE database. Excellent performance is achieved on the extended Yale B database. The present approach is also tested on the Yale face database, which comprises illumination variations together with expression variations and misalignment. A significant reduction in the error rate is achieved by the present approach on this database as well. These results establish the superiority of the proposed approach of illumination normalization over the existing ones.
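The core mechanism is to transform the image to the DCT domain and rescale the low-frequency band where illumination variation concentrates. A minimal sketch of that generic step (the paper's fuzzy filter replaces the flat `gain` below with a fuzzy weighting not detailed in the abstract; `cutoff` and `gain` are illustrative):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    D[0] /= np.sqrt(2.0)
    return D

def rescale_low_freq(img, cutoff=3, gain=0.2):
    """Take the 2-D DCT of a square image, rescale the block of
    lowest-frequency coefficients by `gain`, and invert."""
    D = dct_matrix(img.shape[0])
    coef = D @ img @ D.T                  # forward 2-D DCT
    coef[:cutoff, :cutoff] *= gain        # damp the illumination band
    return D.T @ coef @ D                 # inverse 2-D DCT

img = np.arange(64, dtype=float).reshape(8, 8)
# With gain = 1 the transform round-trips exactly (orthonormal basis).
assert np.allclose(rescale_low_freq(img, gain=1.0), img)
```

Because the basis is orthonormal, any energy removed is removed only from the targeted low-frequency block, leaving the rest of the facial detail untouched.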

56 citations


Journal ArticleDOI
TL;DR: The results of numerical experiments demonstrate that, compared to DTW, the proposed approach measures the similarity of time series quickly and validly, which improves the performance of algorithms applied in the field of time series data mining.
Abstract: Dynamic time warping (DTW) is a robust method for measuring the similarity of time series. To speed up the calculation of DTW, an online dynamic time warping method is proposed for the field of time series data mining. A sliding window is used to segment a long time series into several short subsequences, and an efficient DTW is proposed to measure the similarity of each pair of short subsequences. Meanwhile, a forward factor is proposed to set an overlapping warping path for two adjacent subsequences, which brings the final warping path close to the best warping path between the two time series. The results of numerical experiments demonstrate that, compared to DTW, the proposed approach measures the similarity of time series quickly and validly, which improves the performance of algorithms applied in the field of time series data mining.
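The speedup comes from running the quadratic-cost DTW recursion only on short windows instead of the full series. A sketch of classic DTW plus a simplified windowed variant (the `overlap` parameter stands in for the paper's forward factor; equal-length series are assumed for brevity):

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-programming DTW distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def windowed_dtw(a, b, window=8, overlap=2):
    """Sliding-window variant in the spirit of the paper: segment both
    (equal-length) series, run DTW per segment, and let adjacent segments
    overlap so warping paths can connect across boundaries.
    Requires overlap < window to make progress."""
    total, i = 0.0, 0
    while i < len(a):
        j = min(i + window, len(a))
        total += dtw(a[i:j], b[i:j])
        i = j - overlap if j < len(a) else j
    return total
```

Each window costs O(window²) instead of O(n²) for the whole series, so the total work grows linearly in the series length.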

44 citations


Journal ArticleDOI
TL;DR: This paper determines the weights of decision makers by measuring the support degree of each pair of ordinal rankings, defining the similarity degree of dominance granular structures, and proposing an improved programming model to compute the consensus rankings.
Abstract: Deriving the consensus ranking(s) from a set of rankings plays an important role in group decision making. However, the relative importance, i.e. weight of a decision maker, is ignored in most of the ordinal ranking methods. This paper aims to determine the weights of decision makers by measuring the support degree of each pair of ordinal rankings. We first define the similarity degree of dominance granular structures to depict the mutual relations of the ordinal rankings. Then, the support degree, which is obtained from similarity degree, is presented to determine weights of decision makers. Finally, an improved programming model is proposed to compute the consensus rankings by minimizing the violation with the weighted ranking(s). Two examples are given to illustrate the rationality of the proposed model.
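The weighting idea can be illustrated with a simple pairwise-agreement similarity as a stand-in for the paper's similarity of dominance granular structures (the exact measure and programming model are not given in the abstract, so the functions below are an assumed simplification):

```python
from itertools import combinations

def dominance(ranking):
    """Set of ordered pairs (a, b) meaning `a is ranked above b`."""
    return {(a, b) for i, a in enumerate(ranking) for b in ranking[i + 1:]}

def similarity(r1, r2):
    """Fraction of item pairs on which two rankings agree -- a simple
    stand-in for the similarity of dominance granular structures."""
    n_pairs = len(r1) * (len(r1) - 1) // 2
    return len(dominance(r1) & dominance(r2)) / n_pairs

def support_weights(rankings):
    """Weight each decision maker by the summed similarity of their
    ranking to everyone else's, normalized to sum to 1."""
    support = [sum(similarity(r, s) for s in rankings if s is not r)
               for r in rankings]
    total = sum(support)
    return [s / total for s in support]

# Two similar rankings and one complete reversal: the outlier gets
# the smallest weight.
rankings = [list('abcd'), list('abdc'), list('dcba')]
w = support_weights(rankings)
```

A decision maker whose ranking is well supported by the group thus contributes more to the consensus, which is the intent of the support degree.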

39 citations


Journal ArticleDOI
TL;DR: The relationships between the proposed multigranulation decision-theoretic rough set models and other related rough set models are investigated, and some basic properties of these models are studied.
Abstract: We study multigranulation decision-theoretic rough sets in incomplete information systems. Based on Bayesian decision procedure, we propose the notions of weighted mean multigranulation decision-theoretic rough sets, optimistic multigranulation decision-theoretic rough sets, and pessimistic multigranulation decision-theoretic rough sets in an incomplete information system. We investigate the relationships between the proposed multigranulation decision-theoretic rough set models and other related rough set models. We also study some basic properties of these models. We give an example to illustrate the application of the proposed models.

36 citations


Journal ArticleDOI
TL;DR: Simulation results prove that Simplex-PSO efficiently minimizes the total design error to a greater extent in comparison with previously reported optimization techniques.
Abstract: The simplex particle swarm optimization (Simplex-PSO) is a swarm-intelligence-based evolutionary computation method. Simplex-PSO is the hybridization of the Nedler–Mead simplex method and particle swarm optimization (PSO) without the velocity term. Simplex-PSO has fast optimizing capability and high computational precision for high-dimensional functions. In this paper, Simplex-PSO is employed for the selection of optimal discrete component values, such as resistors and capacitors, for a fourth-order Butterworth low-pass analog active filter and a second-order State Variable low-pass analog active filter, respectively. Simplex-PSO performs the dual task of efficiently selecting the component values and minimizing the total design errors of the low-pass analog active filters. The component values of the filters are selected so that they are E12/E24/E96 series compatible. The simulation results prove that Simplex-PSO minimizes the total design error to a greater extent than previously reported optimization techniques.
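The E-series compatibility constraint means a continuous value proposed by the optimizer must be mapped to a standard preferred component value. A minimal sketch of that snapping step (the series values are the standard E12 mantissas; the function is illustrative, not the paper's exact procedure):

```python
import math

# Standard E12 preferred-value mantissas (one decade).
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def snap_to_series(value, series=E12):
    """Map a continuous component value (e.g. an optimizer's output, in
    ohms or farads) to the nearest preferred value in a standard series."""
    decade = math.floor(math.log10(value))
    candidates = [m * 10 ** d
                  for d in (decade - 1, decade, decade + 1)
                  for m in series]
    return min(candidates, key=lambda c: abs(c - value))

# 4.7 kOhm is already an E12 value; 9 kOhm snaps to the nearer of
# 8.2 kOhm and 10 kOhm.
assert abs(snap_to_series(4700.0) - 4700.0) < 1e-6
assert abs(snap_to_series(9000.0) - 8200.0) < 1e-6
```

Applying this mapping inside the fitness evaluation lets the optimizer score only manufacturable designs.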

33 citations


Journal ArticleDOI
TL;DR: A chaotic PSO, namely totally disturbed particle swarm optimization (TDPSO), an enhanced variant of PSO, is employed for obtaining the optimal machining conditions during multi-pass turning operations subject to various constraints.
Abstract: Determination of an optimal set of machining parameters is needed to produce a product of considerable quality at minimal manufacturing cost. The nonlinear and highly constrained nature of machining models restricts the application of classical gradient-based techniques to such problems. The present study focuses on obtaining the optimal machining conditions during multi-pass turning operations. A chaotic PSO, namely totally disturbed particle swarm optimization (TDPSO), an enhanced variant of PSO, is employed for obtaining the optimal machining conditions during multi-pass turning operations subject to various constraints. In TDPSO, the phenomenon of chaos is embedded at different stages of PSO in order to make the search process more efficient. Results obtained by TDPSO are compared with results available in the literature, and it is observed that TDPSO is quite efficient for dealing with such problems.

33 citations


Journal ArticleDOI
TL;DR: It is shown that typical periodic structures embedded in a chaotic region, called shrimps, organize themselves in two independent ways, one of which is as spirals that individually coil up toward a focal point while undergoing period-adding bifurcations.
Abstract: This work reports two-dimensional parameter-space plots for a three-dimensional Hopfield-type neural network with a hyperbolic tangent activation function. It shows that typical periodic structures embedded in a chaotic region, called shrimps, organize themselves in two independent ways: (i) as spirals that individually coil up toward a focal point while undergoing period-adding bifurcations, and (ii) as a sequence with a well-defined law of formation, constituted by two different period-adding sequences inserted between each other.

Journal ArticleDOI
TL;DR: A new privacy-preserving proximal support vector machine (P3SVM) is formulated for classification of vertically partitioned data based on the concept of global random reduced kernel which is composed of local reduced kernels, leading to an extremely simple and fast privacy- Preserving algorithm.
Abstract: A new privacy-preserving proximal support vector machine (P3SVM) is formulated for classification of vertically partitioned data. Our classifier is based on the concept of global random reduced kernel which is composed of local reduced kernels. Each of them is computed using local reduced matrix with Gaussian perturbation, which is privately generated by only one of the parties, and never made public. This formulation leads to an extremely simple and fast privacy-preserving algorithm, for generating a linear or nonlinear classifier that merely requires the solution of a single system of linear equations. Comprehensive experiments are conducted on multiple publicly available benchmark datasets to evaluate the performance of the proposed algorithms and the results indicate that: (a) Our P3SVM achieves better performance than the recently proposed privacy-preserving SVM via random kernels in terms of both classification accuracy and computational time. (b) A significant improvement of accuracy is attained by our P3SVM when compared to classifiers generated only using each party’s own data. (c) The generated classifier has comparable accuracy to an ordinary PSVM classifier trained on the entire dataset, without releasing any private data.

Journal ArticleDOI
TL;DR: In this paper, an experimental study on the stability of an ELM and its generalization capability is presented, focusing on the relationship between the uncertainty of the ELM's output on the training set and the generalization ability.
Abstract: This paper gives an experimental study on the stability of an extreme learning machine (ELM) and its generalization capability. Focusing on the relationship between uncertainty of an ELM’s output on the training set and the ELM’s generalization capability, the experiments show some new results in the viewpoint of classical pattern recognition. The study provides some useful guidelines to choose a fraction of ELMs with better generalization from an ensemble for classification problems.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed implicit Lagrangian twin support vector machine (TWSVM) classifier yields significantly better generalization performance in both computational time and classification accuracy.
Abstract: In this paper, we propose implicit Lagrangian twin support vector machine (TWSVM) classifiers by formulating a pair of unconstrained minimization problems (UMPs) in dual variables whose solutions are obtained using the finite Newton method. The advantage of the generalized Hessian approach is that our modified UMPs reduce to solving just two systems of linear equations, as opposed to the two quadratic programming problems in TWSVM and TBSVM, which leads to an extremely simple and fast algorithm. Unlike the classical TWSVM and least squares TWSVM (LSTWSVM), the structural risk minimization principle is implemented by adding a regularization term to the primal problems of our proposed algorithm, embodying the essence of statistical learning theory. Computational comparisons of our proposed method against GEPSVM, TWSVM, STWSVM, and LSTWSVM have been made on both synthetic and well-known real-world benchmark datasets. Experimental results show that our method yields significantly better generalization performance in both computational time and classification accuracy.

Journal ArticleDOI
TL;DR: ISbFIM, an Iterative Sampling based Frequent Itemset Mining method, can be easily parallelized and applied to mine item sets, sequences, or structures, and is implemented in a Map-Reduce version to demonstrate its scalability on big data.
Abstract: Frequent pattern mining has attracted extensive research interest over the past two decades, including mining frequent item sets from transactions, extracting frequent sequences from bio-arrays, and detecting common subgraphs in molecular structures. In the era of big data, the explosive data volume brings new challenges to frequent pattern mining: (1) space complexity: the input data, intermediate results, and output patterns may all be too large to fit into memory, which prevents many algorithms from executing; (2) time complexity: many existing approaches rely on exhaustive search or complicated data structures to mine frequent patterns, which prove to be inapplicable for big data. To deal with these two challenges, we propose ISbFIM, an Iterative Sampling based Frequent Itemset Mining method. Rather than process the entire dataset at once, ISbFIM samples computationally manageable subsets and extracts frequent itemsets from these subsets. By repeating this process a sufficient number of times, we can guarantee both theoretically and empirically that the frequent itemsets are enumerated without running into a combinatorial explosion. ISbFIM can be easily parallelized and applied to mine item sets, sequences, or structures. We implement a Map-Reduce version of ISbFIM to demonstrate its scalability on big data.
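The sample-then-union loop at the heart of ISbFIM can be sketched in a few lines (the function names, sample sizes, and the brute-force per-sample miner below are illustrative simplifications, not the paper's implementation):

```python
import random
from itertools import combinations

def frequent_in_sample(sample, min_support, max_len=2):
    """Brute-force frequent itemsets (up to `max_len` items) in one
    computationally manageable sample of transactions."""
    counts = {}
    for t in sample:
        for r in range(1, max_len + 1):
            for iset in combinations(sorted(t), r):
                counts[iset] = counts.get(iset, 0) + 1
    thresh = min_support * len(sample)
    return {i for i, c in counts.items() if c >= thresh}

def isbfim(transactions, min_support=0.4, rounds=20, sample_size=50, seed=0):
    """ISbFIM-style driver: mine many random samples and accumulate the
    union of their frequent itemsets.  Each round is independent, which
    is what makes the scheme trivially parallelizable (one mapper per
    sample, a set-union reducer)."""
    rng = random.Random(seed)
    found = set()
    for _ in range(rounds):
        sample = [rng.choice(transactions) for _ in range(sample_size)]
        found |= frequent_in_sample(sample, min_support)
    return found

# 60% of transactions contain {a, b}, so ('a', 'b') is truly frequent.
data = [{'a', 'b'}] * 60 + [{'b', 'c'}] * 30 + [{'c'}] * 10
result = isbfim(data)
```

Each round touches only `sample_size` transactions, so memory stays bounded regardless of how large `transactions` is.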

Journal ArticleDOI
TL;DR: This paper presents a set of group recommender systems that automatically detect groups of users by clustering them, in order to respect a constraint on the maximum number of recommendation lists that can be produced.
Abstract: Group recommender systems provide suggestions when more than one person is involved in the recommendation process. A particular context in which group recommendation is useful is when the number of recommendation lists that can be generated is limited (i.e., it is not possible to suggest a list of items to each user). In such a case, grouping users and producing recommendations for groups becomes necessary. None of the approaches in the literature is able to automatically group the users in order to overcome this limitation. This paper presents a set of group recommender systems that automatically detect groups of users by clustering them, in order to respect a constraint on the maximum number of recommendation lists that can be produced. The proposed systems have been extensively evaluated on two real-world datasets through hundreds of experiments and statistical tests, in order to validate the results. Moreover, we introduce a set of best practices that help in the development of group recommender systems in this context.
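The cluster-then-recommend pipeline can be sketched as follows: cluster users by their rating vectors into at most `max_lists` groups, then build one list per group by ranking items by the group's mean rating. This is a minimal assumed pipeline (tiny k-means, mean aggregation), not the paper's exact systems:

```python
import numpy as np

def group_recommendations(R, max_lists, iters=20, seed=0):
    """Cluster users (rows of rating matrix R) into at most `max_lists`
    groups with a tiny k-means, then rank items per group by the group's
    mean rating."""
    rng = np.random.default_rng(seed)
    centers = R[rng.choice(len(R), max_lists, replace=False)]
    for _ in range(iters):
        # Assign each user to the nearest center, then recompute centers.
        labels = np.argmin(((R[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for g in range(max_lists):
            if np.any(labels == g):
                centers[g] = R[labels == g].mean(axis=0)
    lists = {g: np.argsort(-R[labels == g].mean(axis=0)).tolist()
             for g in range(max_lists) if np.any(labels == g)}
    return labels, lists

# Two obvious taste groups over four items, but only two lists allowed.
R = np.array([[5, 5, 1, 1], [4, 5, 1, 2], [1, 1, 5, 5], [2, 1, 4, 5]], float)
labels, lists = group_recommendations(R, max_lists=2)
```

The `max_lists` parameter is exactly the constraint from the abstract: however many users there are, at most that many recommendation lists are produced.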

Journal ArticleDOI
TL;DR: In this paper, the synchronization issues for chaotic memristive neural networks with time delays using sampled-data feedback controller are formulated and studied and some new testable algebraic criteria are obtained for ensuring the synchronization goal.
Abstract: In this paper, the synchronization issues for chaotic memristive neural networks with time delays using sampled-data feedback controller are formulated and studied. By constructing a useful Lyapunov functional and using inequality techniques, some new testable algebraic criteria are obtained for ensuring the synchronization goal. Finally, an illustrative example is exploited to demonstrate the effectiveness of the proposed sampled-data control scheme.

Journal ArticleDOI
TL;DR: The sketching with words (SWW) technique is applied to design a system that can simulate a face sketch expert, and different types of faces are generated after applying the ‘fairly’ and ‘very’ linguistic hedges to face components.
Abstract: The face sketch of a criminal may be one of the crucial pieces of evidence in catching the criminal. A face sketch is drawn by a sketch expert on the basis of an onlooker’s statement, which describes different parts of the human face, such as the forehead, eyes, nose, and chin. These statements are full of uncertainties, e.g., ‘His eyes were not fairly small’. Since the precise interpretation of such natural language statements is a very difficult task, we need a system that can convert an imprecise face description into a complete face. We have therefore applied the sketching with words (SWW) technique to design a system that can simulate a face sketch expert. SWW is a methodology in which the objects of computation are fuzzy geometric objects, e.g., the fuzzy line, fuzzy circle, fuzzy triangle, and fuzzy parallel. These fuzzy objects (f-objects) are formalized by the fuzzy geometry (f-geometry) of Zadeh. SWW is inspired by computing with words and fuzzy geometry. Since the onlooker has to granulate the face into granule labels, the concept of the fuzzy granule is applied for face recognition. Different types of faces are generated after applying the ‘fairly’ and ‘very’ linguistic hedges to face components.

Journal ArticleDOI
TL;DR: This paper addresses the issue of observer-based control problem for a class of switched networked control systems (NCSs) by considering the packet loss and time delay in the network, and designs an observer- based state feedback controller for NCSs with random packet loss.
Abstract: This paper addresses the observer-based control problem for a class of switched networked control systems (NCSs). In particular, by considering the packet loss and time delay in the network, a discrete-time switched system is formulated. Moreover, the packet loss in the network is assumed to occur in a random way, which is described by introducing Bernoulli distributed white sequences. First, results for the exponential stabilization of discrete-time switched NCSs without random packet loss are derived by using the average dwell time approach and a multiple Lyapunov–Krasovskii function. Next, attention is focused on designing an observer-based state feedback controller for NCSs with random packet loss which ensures that the resulting error system is exponentially stable. Further, the sufficient conditions for the existence of control parameters are formulated in the form of linear matrix inequalities (LMIs), which can be easily solved using standard numerical packages. Also, the observer and control gains can be calculated from the solutions of a set of LMIs. Finally, a numerical example based on a DC motor model is provided to illustrate the applicability and effectiveness of the proposed design procedure.

Journal ArticleDOI
TL;DR: A delay dependent condition is developed to estimate the neuron states through observed output measurements such that the error system is globally asymptotically stable and a less conservative sufficient condition for the existence of state estimator is formulated in terms of linear matrix inequality.
Abstract: This paper is concerned with the state estimation problem for a class of memristor-based neural networks with time-varying delay. A delay dependent condition is developed to estimate the neuron states through observed output measurements such that the error system is globally asymptotically stable. By constructing more effective Lyapunov functionals, and combining with Jensen integral inequality and free-weighting matrix approach, a less conservative sufficient condition for the existence of state estimator is formulated in terms of linear matrix inequality, which can be checked efficiently by using some standard numerical packages. Finally, a numerical example is given to demonstrate the effectiveness of the presented results.

Journal ArticleDOI
TL;DR: This ITSVR employs two nonparallel functions to identify the upper and lower sides of the interval output data, respectively, in which the Hausdorff distance is incorporated into the Gaussian kernel as the interval kernel for interval input data.
Abstract: It is necessary to use interval data to define terms or describe extreme behaviors because of the existence of uncertainty in many real-world problems. In this paper, a novel efficient interval twin support vector regression (ITSVR) is proposed to handle such interval data. ITSVR employs two nonparallel functions to identify the upper and lower sides of the interval output data, respectively, and incorporates the Hausdorff distance into the Gaussian kernel as the interval kernel for interval input data. Compared with other support vector regression (SVR)-based interval regression methods, such as the interval support vector interval regression networks (ISVIRN), the ITSVR algorithm is more efficient since only two smaller-sized quadratic programming problems (QPPs) need to be solved. The experimental results on several artificial datasets and three stock index datasets show the validity of ITSVR.
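The interval kernel can be sketched directly: replace the Euclidean distance inside the Gaussian kernel with a per-feature Hausdorff distance between intervals. The exact composition used by the paper is not given in the abstract, so the form below (summing squared per-feature Hausdorff distances) is an assumption:

```python
import math

def hausdorff(i1, i2):
    """Hausdorff distance between closed intervals [a, b] and [c, d]:
    for intervals it reduces to max(|a - c|, |b - d|)."""
    return max(abs(i1[0] - i2[0]), abs(i1[1] - i2[1]))

def interval_rbf(u, v, gamma=0.5):
    """Gaussian kernel with the per-feature Hausdorff distance plugged
    in place of the usual Euclidean distance.  u and v are sequences of
    (lo, hi) interval features."""
    d2 = sum(hausdorff(a, b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * d2)

x = [(0.0, 1.0), (2.0, 3.0)]
assert interval_rbf(x, x) == 1.0                       # zero distance -> 1
assert 0.0 < interval_rbf(x, [(0.5, 1.5), (2.0, 3.0)]) < 1.0
```

Since the resulting kernel matrix is built exactly like an ordinary RBF Gram matrix, it drops into the two TWSVR QPPs unchanged.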

Journal ArticleDOI
TL;DR: This work presents an image analysis based automatic road crack detection method for conducting smooth driving on non-smooth road surfaces by analyzing the color variance on norm and introducing a method on discriminant analysis.
Abstract: We present an image-analysis-based automatic road crack detection method for conducting smooth driving on non-smooth road surfaces. In the new proposal, the road surface areas which include cracks are first extracted as crack images by analyzing the color variance on the norm. Then cracks are extracted from those areas by introducing a method based on discriminant analysis. In experiments using images of different road surfaces, the new proposal showed better performance than the conventional approaches.

Journal ArticleDOI
TL;DR: This paper treats the exponential synchronization issue of delayed Cohen–Grossberg neural networks with discontinuous activations; by utilizing Lyapunov stability theory, an adaptive controller is designed such that the response system can be exponentially synchronized with a drive system.
Abstract: This paper treats the exponential synchronization issue of delayed Cohen–Grossberg neural networks with discontinuous activations. By utilizing Lyapunov stability theory, an adaptive controller is designed such that the response system can be exponentially synchronized with a drive system. Our synchronization criteria are easily verified, and the obtained results are also applicable to neural networks with continuous activations, since these are a special case of neural networks with discontinuous activations. The results of this paper improve upon several previously known results. Finally, numerical simulations are given to verify the effectiveness of the theoretical results.

Journal ArticleDOI
TL;DR: It is concluded that despite the shape ambiguities in Indian scripts, proposed classification algorithm could be a dominant technique in the field of handwritten character recognition.
Abstract: With advances in the field of digitization, document analysis and handwriting recognition have emerged as key research areas. The authors present a handwritten character recognition system for Gujrati, an Indian language spoken by 40 million people. The proposed system extracts four features. A unique pattern descriptor and the Gabor phase XNOR pattern are two features newly proposed for the isolated handwritten character set of Gujrati. In addition to these two features, we use contour direction probability distribution function and autocorrelation features. The next contribution is a weighted k-NN classifier. The final contribution is a novel mean χ² distance measure. The proposed classifier exploits a combination of feature weights and the new distance measure, along with a triangular distance and the Euclidean distance, for performance that improves on the conventional k-NN classifier. The implementation on a comprehensive data set shows 86.33% recognition efficiency. The results show that the proposed approach outperforms conventional k-NN. It is concluded that, despite the shape ambiguities in Indian scripts, the proposed classification algorithm could be a dominant technique in the field of handwritten character recognition.
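The general scheme the classifier builds on, k-NN with per-feature weights folded into the distance, can be sketched briefly (the paper's mean χ² and triangular distances are specific choices of the `dist` argument; the toy data below is illustrative):

```python
from collections import defaultdict

def weighted_knn(train, query, k=3, weights=None, dist=None):
    """k-NN with per-feature weights folded into the distance.
    `train` is a list of (feature_vector, label) pairs; pass a custom
    `dist(a, b, w)` to swap in another measure (e.g. chi-square)."""
    if dist is None:                       # default: weighted squared L2
        dist = lambda a, b, w: sum(wi * (ai - bi) ** 2
                                   for ai, bi, wi in zip(a, b, w))
    if weights is None:
        weights = [1.0] * len(query)
    neighbors = sorted(train, key=lambda t: dist(t[0], query, weights))[:k]
    votes = defaultdict(float)
    for feats, label in neighbors:
        votes[label] += 1.0                # unweighted majority vote
    return max(votes, key=votes.get)

train = [([0, 0], 'a'), ([0, 1], 'a'),
         ([5, 5], 'b'), ([5, 6], 'b'), ([6, 5], 'b')]
assert weighted_knn(train, [0.2, 0.4], k=3) == 'a'
```

Learning the `weights` vector (rather than fixing it to ones) is what distinguishes the weighted variant from conventional k-NN.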

Journal ArticleDOI
TL;DR: This work presents a novel online ensemble approach, Diversified online ensembles detection (DOED), for handling drifting concepts in data streams and proves it to be highly resource effective achieving very high accuracies even in a resource constrained environment.
Abstract: Data streams are continuous data instances arriving at very high speed with a varying underlying conceptual distribution. We present a novel online ensemble approach, Diversified online ensembles detection (DOED), for handling these drifting concepts in data streams. Our approach maintains two ensembles of weighted experts, an ensemble with low diversity and an ensemble with high diversity, which are updated according to their accuracy in classifying new data instances. Our approach detects drifts by comparing two accuracies: the accuracy of an ensemble on recent examples and its accuracy since the beginning of learning. The final prediction for an instance is the class predicted by the ensemble which gives better accuracy in classifying the recent examples. When a drift is detected by an ensemble, it is reinitialized while still maintaining its diversity levels. Experimental evaluation using various artificial and real-world datasets proves that DOED provides very high accuracy in classifying new data instances, irrespective of the size of the dataset, type of drift, or presence of noise. We compare DOED with other learners in terms of performance metrics such as the kappa statistic, model cost, evaluation time, and memory requirements. Our approach proved to be highly resource effective, achieving very high accuracies even in a resource-constrained environment.
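The drift signal itself, recent accuracy falling well below the accuracy accumulated since learning began, is easy to sketch in isolation (the window size and drop threshold below are illustrative; DOED applies this test per ensemble):

```python
from collections import deque

class DriftDetector:
    """Drift signal in the spirit of DOED: compare accuracy on recent
    examples against accuracy since learning began; a large drop flags
    a drift."""
    def __init__(self, window=30, drop=0.25):
        self.recent = deque(maxlen=window)  # sliding window of outcomes
        self.correct = 0
        self.total = 0
        self.drop = drop

    def update(self, was_correct):
        """Feed one prediction outcome; return True when drift is flagged."""
        self.recent.append(was_correct)
        self.correct += was_correct
        self.total += 1
        if len(self.recent) < self.recent.maxlen:
            return False                    # window not yet full
        overall = self.correct / self.total
        recent = sum(self.recent) / len(self.recent)
        return recent < overall - self.drop

# Simulate a stream: predictions correct for 100 steps, then wrong.
det = DriftDetector()
flags = [det.update(t < 100) for t in range(150)]
assert not any(flags[:100])   # stable while predictions are correct
assert any(flags[100:])       # drift flagged after accuracy collapses
```

On a flagged drift, DOED reinitializes the affected ensemble while keeping its diversity level, which this sketch leaves out.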

Journal ArticleDOI
TL;DR: A smooth objective function is introduced to approximate the original loss function in the primal form of SVR, transforming the original quadratic programming problem into a convex unconstrained minimization problem.
Abstract: The support vector regression (SVR) model is usually fitted by solving a quadratic programming problem, which is computationally expensive. To improve computational efficiency, we propose to directly minimize the objective function in the primal form. However, the loss function used by SVR is not differentiable, which prevents the well-developed gradient-based optimization methods from being applicable. As such, we introduce a smooth function to approximate the original loss function in the primal form of SVR, which transforms the original quadratic programming problem into a convex unconstrained minimization problem. The properties of the proposed smoothed objective function are discussed, and we prove that the solution of the smoothly approximated model converges to the original SVR solution. A conjugate gradient algorithm is designed for minimizing the proposed smoothly approximated objective function in a sequential minimization manner. Extensive experiments on real-world datasets show that, compared to the quadratic programming based SVR, the proposed approach achieves similar prediction accuracy with significantly improved computational efficiency; specifically, it is hundreds of times faster for the linear SVR model and multiple times faster for the nonlinear SVR model.
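The idea can be sketched with one standard smoothing choice: replacing each max(0, ·) in the ε-insensitive loss with a softplus, which makes the primal objective differentiable everywhere. The paper's exact smooth function and its conjugate-gradient solver are not given in the abstract, so both the softplus smoothing and the plain gradient-descent loop below are assumptions kept deliberately simple:

```python
import numpy as np

def softplus(x, beta=20.0):
    """Numerically stable smooth max(0, x): (1/beta) * log(1 + e^(beta x))."""
    return np.maximum(x, 0.0) + np.log1p(np.exp(-beta * np.abs(x))) / beta

def smoothed_eps_loss(r, eps=0.1):
    """Differentiable surrogate for the eps-insensitive loss max(0, |r| - eps)."""
    return softplus(r - eps) + softplus(-r - eps)

def fit_linear_svr(X, y, lam=1e-3, lr=0.1, steps=500, eps=0.1, beta=20.0):
    """Minimize the smoothed primal objective by plain gradient descent
    (the paper uses conjugate gradients; plain GD keeps the sketch short)."""
    w = np.zeros(X.shape[1])
    sig = lambda x: 1.0 / (1.0 + np.exp(np.clip(-beta * x, -60, 60)))
    for _ in range(steps):
        r = X @ w - y
        # d/dr of the smoothed loss is sig(r - eps) - sig(-r - eps).
        grad = X.T @ (sig(r - eps) - sig(-r - eps)) / len(y) + lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.5, -2.0])
w = fit_linear_svr(X, y)
obj = lambda v: smoothed_eps_loss(X @ v - y).mean() + 1e-3 * (v @ v) / 2
assert obj(w) < obj(np.zeros(2))   # descent actually reduced the objective
```

Because the surrogate is smooth and convex, any first-order method applies; the conjugate-gradient solver of the paper simply converges faster than the plain loop shown here.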

Journal ArticleDOI
TL;DR: A new MOcGA, namely the cosine multiobjective cellular genetic algorithm (C-MCGA), is proposed for continuous multiobjective optimization; it outperforms two typical MOcGAs and two state-of-the-art algorithms, NSGA-II and SPEA2, on a given set of test instances.
Abstract: Multiobjective cellular genetic algorithms (MOcGAs) are variants of evolutionary computation algorithms that organize the population into grid structures, usually 2D grids. This paper proposes a new MOcGA, namely the cosine multiobjective cellular genetic algorithm (C-MCGA), for continuous multiobjective optimization. The C-MCGA introduces two new components: a 3D grid structure and a cosine crowding measurement. The first component is used to organize the population; compared with a 2D grid, the 3D grid offers a vertical expansion of cells. The second simultaneously considers crowding distances and location distributions when measuring the crowding degree of solutions. The simulation results show that C-MCGA outperforms two typical MOcGAs and two state-of-the-art algorithms, NSGA-II and SPEA2, on a given set of test instances. Furthermore, the proposed crowding measurement is compared with that of NSGA-II and is demonstrated to yield a more diverse population on most of the test instances.
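For reference, the standard NSGA-II crowding distance that C-MCGA's cosine measurement is compared against can be sketched as follows. The cosine crowding measurement itself is not reproduced here, since the abstract does not specify its formula:

```python
import numpy as np

def crowding_distance(F):
    """NSGA-II crowding distance for objective vectors F (n points x m
    objectives): for each objective, sort the points and accumulate the
    normalized gap between each point's two neighbors; boundary points
    get infinite distance so they are always preserved."""
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary points
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            continue                                 # degenerate objective
        for k in range(1, n - 1):
            dist[order[k]] += (F[order[k + 1], j] - F[order[k - 1], j]) / span
    return dist
```

C-MCGA's measurement additionally accounts for the angular (location) distribution of solutions, which this purely distance-based metric ignores.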

Journal ArticleDOI
TL;DR: New hypotheses related to the structure of visual streams of the Algorithm of Discovery include the nested streams, the programmed experiments, the visual model, the construction set and the Ghost play, which are tested on the thought experiments for development of two algorithms in LG.
Abstract: In our previous research we investigated the structure of the Primary Language of the human brain as introduced by J. von Neumann in 1957. Two components have been investigated: the algorithm optimizing warfighting, Linguistic Geometry (LG), and the algorithm for inventing new algorithms, the Algorithm of Discovery. The ultimate goal of this research, and the emphasis of this paper, is to take the next step toward understanding the Algorithm of Discovery. Our results, introduced in recent papers, show that this algorithm is based on multiple thought experiments which manifest themselves via mental visual streams (“mental movies”). It appears that those streams are the only interface available for this brain phenomenon. The visual streams can run concurrently and can exchange information with each other. The streams may initiate additional thought experiments, program them, and execute them in due course. Our research reveals the role of analogy in constructing a visual model by erasing the particulars of simple examples, then utilizing this model as a reference for constructing a new visual object and a symbolic shell attached to this object in the form of symbolic tags. This paper investigates new hypotheses related to the structure of visual streams of the Algorithm of Discovery: the nested streams, the programmed experiments, the visual model, the construction set, and the Ghost play. These hypotheses are tested on the thought experiments for the development of two algorithms in LG: the functions next and med for the Grammars of Shortest and Admissible Trajectories, and the No-Search Approach based on the State Space Chart. While those algorithms were partly investigated in our earlier papers, this paper goes much deeper, i.e., closer to revealing the nature of the Algorithm of Discovery.

Journal ArticleDOI
TL;DR: It turns out that the disambiguation effect is much better when the semi-supervised graph clustering method is integrated with the expert-associated relationships.
Abstract: In order to utilize the associated relationships in expert pages efficiently, we introduce a Chinese expert disambiguation method based on semi-supervised graph clustering that integrates various associated relationships. First, the correlation characteristics of the expert attributes are extracted through correlation analysis on the expert pages. Second, a similarity matrix between the documents on different expert pages is constructed using the attribute characteristics and the associated relationships of the expert pages. Finally, with the attribute correlations adopted as semi-supervised constraints, an expert disambiguation model is constructed by applying a graph-based clustering approach, and the model is solved with a kernel-based method to achieve expert name disambiguation. Contrast experiments on Chinese expert disambiguation show that the semi-supervised graph clustering method integrated with the expert-associated relationships achieves much better disambiguation performance.
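As a generic illustration of the overall recipe (similarity matrix plus pairwise constraints plus graph clustering), a minimal two-way spectral variant might look as follows. This is not the paper's kernel-based model, and the constraint handling shown (boosting or zeroing similarity entries) is an assumed simplification:

```python
import numpy as np

def constrained_spectral_2way(S, must_link=(), cannot_link=(), boost=1.0):
    """Sketch of semi-supervised graph clustering: fold pairwise
    constraints into the similarity matrix S, then take a 2-way cut
    from the Fiedler vector of the unnormalized graph Laplacian."""
    S = S.astype(float).copy()
    for i, j in must_link:                 # encourage same cluster
        S[i, j] = S[j, i] = S[i, j] + boost
    for i, j in cannot_link:               # discourage same cluster
        S[i, j] = S[j, i] = 0.0
    d = S.sum(axis=1)
    L = np.diag(d) - S                     # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    fiedler = vecs[:, 1]                   # second-smallest eigenvector
    return (fiedler > 0).astype(int)       # its sign gives the 2-way split
```

In the paper's setting, each node would be a document from an expert page, the similarities would come from the attribute and relationship features, and the clusters would correspond to distinct experts sharing a name.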

Journal ArticleDOI
TL;DR: The proposed method adopts least squares support vector machine (LSSVM) to approximate a nonlinear autoregressive with exogenous inputs (NARX) model whose orders are determined via the Lipschitz quotient criterion, and tunes the LSSVM hyper-parameters to obtain an efficient model.
Abstract: In this paper, we present a method for nonlinear system identification. The proposed method adopts least squares support vector machine (LSSVM) to approximate a nonlinear autoregressive with exogenous inputs (NARX) model. First, the orders of the NARX model are determined from input–output data via the Lipschitz quotient criterion. Then, an LSSVM model is used to approximate the NARX model. To obtain an efficient LSSVM model, a novel particle swarm optimization algorithm with an adaptive inertia weight is proposed to tune the hyper-parameters of the LSSVM. Two experimental results are given to illustrate the effectiveness of the proposed method.
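Since the abstract does not specify the novel adaptive inertia weight, the sketch below uses the common baseline of a linearly decreasing inertia weight to show where such a scheme plugs into PSO-based hyper-parameter tuning; all parameter values here are assumptions, not the paper's:

```python
import numpy as np

def pso_minimize(f, dim, bounds, n_particles=20, iters=200, seed=0):
    """Baseline PSO with a linearly decreasing inertia weight (the
    paper proposes a novel adaptive weight instead). f would be, e.g.,
    a cross-validation error over LSSVM hyper-parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()            # global best
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                     # inertia: 0.9 -> 0.4
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

An adaptive scheme would replace the `w = 0.9 - 0.5 * t / iters` line with a rule that reacts to the swarm's state (e.g. search progress) rather than just the iteration count.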