
Showing papers published in "Information Sciences" in 2016


Journal ArticleDOI

[...]

TL;DR: An extended TOPSIS method and an aggregation-based method are developed for multi-attribute group decision making (MAGDM) with probabilistic linguistic information, and applied to a practical case concerning strategy initiatives.
Abstract: When expressing preferences in a qualitative setting, several possible linguistic terms with different weights (represented by probabilities) may be considered at the same time. The probability distribution is usually hard to provide completely, and ignorance may exist. In this paper, we first propose a novel concept called the probabilistic linguistic term set (PLTS) to serve as an extension of the existing tools. Then we put forward some basic operational laws and aggregation operators for PLTSs. After that, we develop an extended TOPSIS method and an aggregation-based method for multi-attribute group decision making (MAGDM) with probabilistic linguistic information, and apply them to a practical case concerning strategy initiatives. Finally, the strengths and weaknesses of our methods are clarified by comparing them with some similar techniques.
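A minimal sketch of the PLTS idea described above, assuming a simple expectation-based score and a proportional normalization rule for incomplete probability distributions; the paper's exact operational laws may differ.

```python
# Illustrative sketch (not the paper's exact definitions): a PLTS over a
# linguistic term set {s_0, ..., s_t} as (subscript, probability) pairs,
# where probabilities may sum to less than 1 (partial ignorance).

def normalize(plts):
    """Spread any missing probability mass proportionally (one plausible rule)."""
    total = sum(p for _, p in plts)
    return [(r, p / total) for r, p in plts]

def score(plts):
    """Expected subscript of the normalized PLTS, usable for ranking."""
    return sum(r * p for r, p in normalize(plts))

# Two alternatives assessed over {s_0, ..., s_6}; a higher score is better.
a = [(4, 0.6), (5, 0.2)]   # incomplete: probabilities sum to 0.8
b = [(3, 0.5), (6, 0.5)]
print(score(a), score(b))
```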

520 citations


Journal ArticleDOI

[...]

TL;DR: A novel approach based on deep learning for active classification of electrocardiogram (ECG) signals by learning a suitable feature representation from the raw ECG data in an unsupervised way using stacked denoising autoencoders (SDAEs) with sparsity constraint.
Abstract: In this paper, we propose a novel approach based on deep learning for active classification of electrocardiogram (ECG) signals. To this end, we learn a suitable feature representation from the raw ECG data in an unsupervised way using stacked denoising autoencoders (SDAEs) with a sparsity constraint. After this feature learning phase, we add a softmax regression layer on top of the resulting hidden representation layer, yielding the so-called deep neural network (DNN). During the interaction phase, we allow the expert at each iteration to label the most relevant and uncertain ECG beats in the test record, which are then used for updating the DNN weights. As ranking criteria, the method relies on the DNN posterior probabilities to associate confidence measures such as entropy and Breaking-Ties (BT) with each test beat in the ECG record under analysis. In the experiments, we validate the method on the well-known MIT-BIH arrhythmia database as well as on two other databases, INCART and SVDB. Furthermore, we follow the recommendations of the Association for the Advancement of Medical Instrumentation (AAMI) for class labeling and results presentation. The results obtained show that the newly proposed approach provides significant accuracy improvements with less expert interaction and faster online retraining compared to state-of-the-art methods.
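A minimal sketch of the two uncertainty criteria named in the abstract, entropy and Breaking-Ties, applied to DNN posteriors to pick the beats the expert should label; array shapes and the selection size are illustrative assumptions.

```python
import numpy as np

def breaking_ties(probs):
    """BT margin per beat: difference between the two largest posteriors.
    Smaller margins mean more uncertain beats (candidates for labeling)."""
    part = np.sort(probs, axis=1)
    return part[:, -1] - part[:, -2]

def entropy(probs, eps=1e-12):
    """Posterior entropy per beat; larger means more uncertain."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

# probs: (n_beats, n_classes) DNN posteriors for the test record.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
query = np.argsort(breaking_ties(probs))[:1]   # most uncertain beat(s) first
print(query, entropy(probs))
```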

395 citations


Journal ArticleDOI

[...]

TL;DR: A two-dimensional Logistic-adjusted-Sine map (2D-LASM) is proposed that has better ergodicity and unpredictability, and a wider chaotic range than many existing chaotic maps.
Abstract: With the complex properties of ergodicity, unpredictability and sensitivity to initial states, chaotic systems are widely used in cryptography. This paper proposes a two-dimensional Logistic-adjusted-Sine map (2D-LASM). Performance evaluations show that it has better ergodicity and unpredictability, and a wider chaotic range, than many existing chaotic maps. Using the proposed map, this paper further designs a 2D-LASM-based image encryption scheme (LAS-IES). The principles of diffusion and confusion are strictly fulfilled, and a mechanism of adding random values to the plain-image is designed to enhance the security level of the cipher-image. Simulation results and security analysis show that LAS-IES can efficiently encrypt different kinds of images into random-like ones with a strong ability to resist various security attacks.
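A sketch of iterating the 2D-LASM as it is commonly quoted in the literature; the exact definition, parameter range and seed values should be checked against the paper, so treat the form below as an assumption.

```python
import math

def lasm_2d(x, y, mu, n):
    """Iterate the 2D Logistic-adjusted-Sine map (commonly quoted form;
    consult the paper for the exact definition and chaotic range of mu)."""
    seq = []
    for _ in range(n):
        x = math.sin(math.pi * mu * (y + 3) * x * (1 - x))
        y = math.sin(math.pi * mu * (x + 3) * y * (1 - y))
        seq.append((x, y))
    return seq

# Keystream material for an encryption scheme could be derived from seq.
print(lasm_2d(0.3, 0.7, mu=0.9, n=5))
```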

368 citations


Journal ArticleDOI

[...]

TL;DR: A multi-population based approach is proposed to realize an adaptive ensemble of multiple differential evolution strategies, resulting in a new DE variant named multi-population ensemble DE (MPEDE) which simultaneously employs three mutation strategies.
Abstract:
- A multi-population based approach is proposed to realize the adaptive ensemble of multiple strategies of differential evolution.
- The control parameters of each mutation strategy are adapted independently.
- Extensive experiments are conducted to test the performance of multi-population ensemble DE (MPEDE).

Differential evolution (DE) is among the most efficient evolutionary algorithms (EAs) for global optimization and is now widely applied to diverse real-world problems. As the most appropriate configuration of DE for efficiently solving different optimization problems can differ significantly, an appropriate combination of multiple strategies in one DE variant has recently attracted increasing attention. In this study, we propose a multi-population based approach to realize an ensemble of multiple strategies, resulting in a new DE variant named multi-population ensemble DE (MPEDE), which simultaneously employs three mutation strategies: "current-to-pbest/1", "current-to-rand/1" and "rand/1". There are three equally sized smaller indicator subpopulations and one much larger reward subpopulation, with one indicator subpopulation per constituent mutation strategy. After every certain number of generations, the current best-performing mutation strategy is determined according to the ratios between fitness improvements and consumed function evaluations, and the reward subpopulation is then dynamically allocated to that strategy. As a result, better mutation strategies obtain more computational resources in an adaptive manner during the evolution. The control parameters of each mutation strategy are adapted independently as well. Extensive experiments on the suite of CEC 2005 benchmark functions and comprehensive comparisons with several other efficient DE variants show the competitive performance of the proposed MPEDE (Matlab codes of MPEDE are available from http://guohuawunudt.gotoip2.com/publications.html).
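A hedged sketch of MPEDE's credit-assignment step as the abstract describes it: every few generations, the strategy with the best fitness-improvement per consumed function evaluation wins the reward subpopulation. All names and bookkeeping here are hypothetical, not the authors' code.

```python
# improvements[k] and evaluations[k] are accumulated over the last ng
# generations for mutation strategy k (bookkeeping assumed, not from the paper).

def best_strategy(improvements, evaluations):
    """Return the index of the strategy with the best improvement/FEs ratio."""
    ratios = [imp / fes if fes else 0.0
              for imp, fes in zip(improvements, evaluations)]
    return max(range(len(ratios)), key=ratios.__getitem__)

strategies = ["current-to-pbest/1", "current-to-rand/1", "rand/1"]
k = best_strategy([12.5, 3.1, 7.8], [5000, 5200, 4900])
print("reward subpopulation ->", strategies[k])
```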

288 citations


Journal ArticleDOI

[...]

TL;DR: A closeness index-based Pythagorean fuzzy QUALIFLEX method is developed to address hierarchical multicriteria decision making problems within a Pythagorean fuzzy environment based on PFNs and IVPFNs, and can deal effectively with the hierarchical structure of criteria.
Abstract:
- A new closeness index-based ranking method of PFNs is presented.
- A novel concept of interval-valued Pythagorean fuzzy set is proposed.
- A new interval-valued Pythagorean fuzzy distance measure is presented.
- A new hierarchical Pythagorean fuzzy QUALIFLEX method is developed.
- The proposed method is further extended to manage heterogeneous information.

The Pythagorean fuzzy set, initially developed by Yager (2014), is a new tool to model imprecise and ambiguous information in multicriteria decision making problems. In this paper, we propose a novel closeness index for Pythagorean fuzzy numbers (PFNs) and introduce a closeness index-based ranking method for PFNs. Next, we extend the Pythagorean fuzzy set to present the concept of the interval-valued Pythagorean fuzzy set (IVPFS), which parallels the interval-valued intuitionistic fuzzy set. The elements of an IVPFS are called interval-valued Pythagorean fuzzy numbers (IVPFNs). We further introduce the basic operations of IVPFNs and investigate their desirable properties, and also explore the ranking method and the distance measure for IVPFNs. Afterwards, we develop a closeness index-based Pythagorean fuzzy QUALIFLEX method to address hierarchical multicriteria decision making problems within a Pythagorean fuzzy environment based on PFNs and IVPFNs. This hierarchical decision problem includes a main-criteria layer and a sub-criteria layer, in which the relationships among main-criteria are interdependent, the relationships among sub-criteria are independent, and the weights of sub-criteria take the form of IVPFNs. In the developed method, we first define the concept of the concordance/discordance index based on the closeness index-based ranking methods, and compute the sub-weighted concordance/discordance indices by employing the weighted averaging aggregation operator on the closeness indices of IVPFNs. To take main-criteria interactions into account, we further employ the Choquet integral to calculate the main-weighted concordance/discordance indices. By investigating all possible permutations of alternatives with the level of concordance and discordance of the complete preference order, we finally obtain the optimal rankings of alternatives. The proposed method is implemented in a risk evaluation problem to demonstrate its applicability and superiority. The salient features of the proposed method, compared to the state-of-the-art QUALIFLEX-based methods, are: (1) it can take the interactive phenomena among criteria into account; (2) it can manage the PFN and IVPFN decision data simultaneously; (3) it can deal effectively with the hierarchical structure of criteria. The proposed method provides a useful way to handle hierarchical multicriteria decision making problems within Pythagorean fuzzy contexts. In addition, we extend the proposed method to manage heterogeneous information, which includes five different types: real numbers, interval numbers, fuzzy numbers, PFNs and hesitant fuzzy elements.
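To make the PFN ranking idea concrete, here is one plausible TOPSIS-style closeness index for a PFN (mu, nu) with mu^2 + nu^2 <= 1, using distances to the positive ideal (1, 0) and negative ideal (0, 1). Both the distance and the index below are illustrative assumptions, not necessarily the paper's exact formulas.

```python
import math

def dist(p, q):
    """Distance on squared membership grades (illustrative choice)."""
    (m1, n1), (m2, n2) = p, q
    return math.sqrt((m1**2 - m2**2) ** 2 + (n1**2 - n2**2) ** 2) / math.sqrt(2)

def closeness(pfn):
    """TOPSIS-style closeness to the positive ideal PFN (1, 0); larger is
    better. Not necessarily the paper's exact definition."""
    pos, neg = (1.0, 0.0), (0.0, 1.0)
    return dist(pfn, neg) / (dist(pfn, pos) + dist(pfn, neg))

# Rank two PFNs (mu, nu):
print(closeness((0.8, 0.4)), closeness((0.6, 0.5)))
```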

273 citations


Journal ArticleDOI

[...]

TL;DR: A comparative study of generalized type-2 fuzzy logic systems with respect to interval type-2 and type-1 fuzzy logic systems, showing the efficiency and performance of a generalized type-2 fuzzy logic controller (GT2FLC) for designing the fuzzy controllers of complex non-linear plants.
Abstract: This paper presents a comparative study of type-2 fuzzy logic systems with respect to interval type-2 and type-1 fuzzy logic systems, showing the efficiency and performance of a generalized type-2 fuzzy logic controller (GT2FLC). We used different types of fuzzy logic systems for designing the fuzzy controllers of complex non-linear plants. The theory of alpha planes is used to approximate generalized type-2 fuzzy logic in fuzzy controllers, and the Karnik-Mendel algorithm is used in the defuzzification process. Simulation results with a type-1 fuzzy logic controller (T1FLC), an interval type-2 fuzzy logic controller (IT2FLC) and a generalized type-2 fuzzy logic controller (GT2FLC) for benchmark plants are presented. The advantage of using generalized type-2 fuzzy logic in fuzzy controllers is verified on four benchmark problems. We considered different levels of noise, numbers of alpha planes and four types of membership functions in the simulations to compare and analyze the behavior of generalized type-2 fuzzy logic systems when applied to fuzzy control.

264 citations


Journal ArticleDOI

[...]

TL;DR: Surprisingly, it is found that the direct link plays an important performance-enhancing role in RVFL, while the bias term in the output neuron has no significant effect, and the ridge regression based closed-form solution is better than the Moore-Penrose pseudoinverse.
Abstract: With randomly generated weights between the input and hidden layers, a random vector functional link (RVFL) network is a universal approximator for continuous functions on compact sets with a fast learning property. Though it was proposed two decades ago, the classification ability of this family of networks has not been fully investigated. Through a very comprehensive evaluation using 121 UCI datasets, this work investigates the effect of bias in the output layer, direct links from the input layer to the output layer, the type of activation function in the hidden layer, the scaling of parameter randomization, and the solution procedure for the output weights. Surprisingly, we found that the direct link plays an important performance-enhancing role in RVFL, while the bias term in the output neuron has no significant effect. The ridge regression based closed-form solution was better than the Moore-Penrose pseudoinverse. Instead of using a uniform randomization in [-1, +1] for all datasets, tuning the scaling of the uniform randomization range for each dataset enhances the overall performance. Six commonly used activation functions were investigated, and we found that the hardlim and sign activation functions degrade the overall performance. These basic conclusions can serve as general guidelines for designing RVFL network based classifiers.
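A minimal RVFL sketch reflecting the configuration the abstract reports as best: direct input-output links, no output bias, and a ridge-regression closed-form solution. The tanh activation, hidden size and regularization constant are illustrative assumptions.

```python
import numpy as np

def rvfl_train(X, Y, n_hidden=100, scale=1.0, ridge=1e-3, rng=None):
    """X: (n, d) inputs; Y: (n, k) one-hot targets. Random hidden weights are
    uniform in [-scale, scale]; `scale` is tuned per dataset per the paper."""
    rng = np.random.default_rng(rng)
    W = rng.uniform(-scale, scale, (X.shape[1], n_hidden))
    b = rng.uniform(-scale, scale, n_hidden)
    H = np.hstack([X, np.tanh(X @ W + b)])          # direct links + hidden
    # Ridge-regression closed form: beta = (H'H + ridge*I)^-1 H'Y
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta
```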

242 citations


Journal ArticleDOI

[...]

TL;DR: A comprehensive survey of the earliest work and recent advances on network training is presented as well as some suggestions for future research.
Abstract: As a powerful tool for data regression and classification, neural networks have received considerable attention from researchers in fields such as machine learning, statistics and computer vision. There exists a large body of research work on network training, most of which tunes the parameters iteratively. Such methods often suffer from local minima and slow convergence. It has been shown that randomization based training methods can significantly boost the performance or efficiency of neural networks. Among these methods, most approaches use randomization either to change the data distribution and/or to fix a part of the parameters or network configurations. This article presents a comprehensive survey of the earliest work and recent advances, as well as some suggestions for future research.

231 citations


Journal ArticleDOI

[...]

TL;DR: The results proved that the combinatorial double auction-based resource allocation model is an appropriate market-based model for cloud computing because it allows double-sided competition and bidding on an unrestricted number of items, which causes it to be economically efficient.
Abstract: Users and providers have different requirements and objectives in an investment market. Users want to pay the lowest price possible with certain guaranteed levels of service at a minimum, while providers follow the strategy of achieving the highest return on their investment. Designing an optimal market-based resource allocation that considers the benefits of both users and providers is a fundamental criterion of resource management in distributed systems, especially in cloud computing services. Most current market-based resource allocation models are biased in favor of the provider over the buyer in an unregulated trading environment. This study addresses the problem by proposing a new market model called Combinatorial Double Auction Resource Allocation (CDARA), which is applicable to cloud computing environments. CDARA was prototyped and simulated using CloudSim, a Java-based simulator for cloud computing environments, to evaluate its efficiency from an economic perspective. The results proved that the combinatorial double auction-based resource allocation model is an appropriate market-based model for cloud computing because it allows double-sided competition and bidding on an unrestricted number of items, which makes it economically efficient. Furthermore, the proposed model is incentive-compatible, which motivates the participants to reveal their true valuation during bidding.

214 citations


Journal ArticleDOI

[...]

TL;DR: This paper redefines some more logical operational laws for linguistic terms, hesitant fuzzy linguistic elements (HFLEs) and probabilistic linguistic term sets (PLTSs) based on two equivalent transformation functions, keeping the operation results more reasonable in decision making with linguistic information.
Abstract: In the process of decision making, people sometimes feel more comfortable expressing their preferences by linguistic terms instead of in quantitative form. However, as the basic premise of operations, the existing operational laws of linguistic terms and the extended linguistic term sets are quite unreasonable. To overcome this issue, in this paper we redefine some more logical operational laws for linguistic terms, hesitant fuzzy linguistic elements (HFLEs) and probabilistic linguistic term sets (PLTSs) based on two equivalent transformation functions. These novel operational laws not only avoid the operation values exceeding the bounds of LTSs, but also keep the operation results more reasonable in decision making with linguistic information. Furthermore, the operational laws keep the probability information complete when computing with PLTSs. Additionally, many properties of the operational laws are discussed, and some three-dimensional figures are drawn to show the regions of different operational laws of linguistic terms more vividly.

210 citations


Journal ArticleDOI

[...]

TL;DR: An improved method to construct the BPA based on the confusion matrix is proposed that takes into account both the precision rate and the recall rate of each class.
Abstract: The determination of basic probability assignment (BPA) is a crucial issue in the application of Dempster-Shafer evidence theory. Classification is a process of determining the class label that a sample belongs to. In classification problem, the construction of BPA based on the confusion matrix has been studied. However, the existing methods do not make full use of the available information provided by the confusion matrix. In this paper, an improved method to construct the BPA is proposed based on the confusion matrix. The proposed method takes into account both the precision rate and the recall rate of each class. An illustrative case regarding the prediction of transmembrane protein topology is given to demonstrate the effectiveness of the proposed method.
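A hedged illustration of the general idea, not the paper's exact construction: for a sample predicted as class j, derive belief from both the precision and the recall of class j in the confusion matrix, and assign the leftover mass to the frame of discernment (ignorance). The combination rule below is one plausible choice.

```python
import numpy as np

def bpa_from_confusion(C, j):
    """C: confusion matrix with rows = true class, columns = predicted class.
    Return a BPA for a sample whose predicted class is j (illustrative)."""
    C = np.asarray(C, dtype=float)
    precision = C[j, j] / C[:, j].sum()
    recall = C[j, j] / C[j, :].sum()
    belief = precision * recall          # one plausible way to combine both
    return {f"class_{j}": belief, "Theta": 1.0 - belief}

C = [[50, 5, 2],
     [4, 60, 6],
     [1, 7, 45]]
print(bpa_from_confusion(C, 1))
```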

Journal ArticleDOI

[...]

TL;DR: A novel classification scheme is introduced by distinguishing between methods for deterministic and random graphs, enabling a better understanding of the methods and their challenges and helping to apply them efficiently in interdisciplinary data science settings involving comparative network analysis.
Abstract: In this paper we survey methods for performing a comparative graph analysis and explain the history, foundations and differences of such techniques of the last 50 years. While surveying these methods, we introduce a novel classification scheme by distinguishing between methods for deterministic and random graphs. We believe that this scheme is useful for a better understanding of the methods, their challenges and, finally, for applying the methods efficiently in an interdisciplinary setting of data science to solve a particular problem involving comparative network analysis.

Journal ArticleDOI

[...]

TL;DR: A novel framework combining data prediction, compression, and recovery to simultaneously achieve accuracy and efficiency of data processing in clustered WSNs, reducing the communication cost while guaranteeing data-processing and data-prediction accuracy.
Abstract: Environmental monitoring is one of the most important applications of wireless sensor networks (WSNs), usually requiring a lifetime of several months, or even years. However, the inherent restriction of the energy carried within the battery of sensor nodes makes it extremely difficult to obtain a satisfactory network lifetime, which becomes a bottleneck for scaling up such applications in WSNs. In this paper, we propose a novel framework with a dedicated combination of data prediction, compression, and recovery to simultaneously achieve accuracy and efficiency of data processing in clustered WSNs. The main aim of the framework is to reduce the communication cost while guaranteeing data-processing and data-prediction accuracy. In this framework, data prediction is achieved by implementing the Least Mean Square (LMS) dual prediction algorithm with an optimal step size obtained by minimizing the mean-square deviation (MSD), so that the cluster heads (CHs) can obtain a good approximation of the real data from the sensor nodes. On this basis, a centralized Principal Component Analysis (PCA) technique is utilized to perform compression and recovery of the predicted data on the CHs and the sink, respectively, in order to save communication cost and eliminate the spatial redundancy of the sensed environmental data. All errors generated in these processes are evaluated theoretically and shown to be controllable. Based on the theoretical analysis, we design a number of algorithms for implementation. Simulation results using real-world data demonstrate that our framework provides a cost-effective solution for applications such as environmental monitoring in cluster-based WSNs.
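A sketch of the LMS dual-prediction idea: sensor and cluster head run identical filters, so the sensor transmits only samples whose prediction error exceeds a tolerance. The fixed step size and filter order below are assumptions; the paper derives an optimal step size by minimizing the MSD.

```python
import numpy as np

def lms_dual_prediction(signal, order=4, mu=0.01, tol=0.5):
    """Count transmissions a sensor would make under dual prediction.
    Both ends keep the same filter state, updated only on transmitted samples."""
    w = np.zeros(order)              # shared filter weights
    history = np.zeros(order)        # shared recent-sample buffer
    transmitted = 0
    for x in signal:
        x_hat = w @ history          # both sides predict the same value
        err = x - x_hat
        send = abs(err) > tol        # only then does the sensor transmit
        if send:
            transmitted += 1
            w = w + mu * err * history   # both sides update identically
        received = x if send else x_hat  # what the cluster head records
        history = np.roll(history, 1)
        history[0] = received
    return transmitted

sig = np.sin(np.linspace(0, 20, 500)) + 0.05 * np.random.randn(500)
print("transmissions:", lms_dual_prediction(sig), "of", len(sig))
```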

Journal ArticleDOI

[...]

TL;DR: This study provides a connection between two different models based on linguistic 2-tuples and proves the equivalence of the linguistic computational models for handling ULTSs, and proposes a novel CW methodology where hesitant fuzzy linguistic term sets (HFLTSs) can be constructed based on ULTSs using a numerical scale.
Abstract: The 2-tuple linguistic representation model is widely used as a basis for computing with words (CW) in linguistic decision making problems. Two different models based on linguistic 2-tuples (i.e., the model of the use of a linguistic hierarchy and the numerical scale model) have been developed to address term sets that are not uniformly and symmetrically distributed, i.e., unbalanced linguistic term sets (ULTSs). In this study, we provide a connection between these two different models and prove the equivalence of the linguistic computational models to handle ULTSs. Further, we propose a novel CW methodology where the hesitant fuzzy linguistic term sets (HFLTSs) can be constructed based on ULTSs using a numerical scale. In the proposed CW methodology, we present several novel possibility degree formulas for comparing HFLTSs, and define novel operators based on the mixed 0-1 linear programming model to aggregate the hesitant unbalanced linguistic information.
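For readers unfamiliar with the underlying representation, this is the standard 2-tuple linguistic model (Herrera and Martinez): a value beta in [0, t] maps to a pair (s_i, alpha) where i = round(beta) and alpha = beta - i is the symbolic translation. The rounding convention below is a minimal sketch.

```python
import math

def delta(beta):
    """beta in [0, t] -> (index of s_i, symbolic translation in [-0.5, 0.5))."""
    i = math.floor(beta + 0.5)     # round half up
    return i, beta - i

def delta_inv(i, alpha):
    """(s_i, alpha) -> the numerical value beta."""
    return i + alpha

# e.g. aggregating terms s_2 and s_5 by their mean on a 7-term scale:
beta = (2 + 5) / 2
print(delta(beta))                 # -> (4, -0.5), i.e. (s_4, -0.5)
```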

Journal ArticleDOI

[...]

TL;DR: The RVFL network overall outperforms the non-ensemble methods, namely the persistence method, seasonal autoregressive integrated moving average (sARIMA) and artificial neural networks (ANN).
Abstract: Short-term electricity load forecasting plays an important role in the energy market, as accurate forecasting is beneficial for power dispatching, unit commitment, fuel allocation and so on. This paper reviews a few single hidden layer network configurations with random weights (RWSLFN). The RWSLFN was extended to eight variants based on the presence or absence of input layer bias, hidden layer bias and direct input-output connections. In order to avoid mapping the weighted inputs into the saturation region of the enhancement nodes' activation function and to suppress outliers in the input data, a quantile scaling algorithm to re-distribute the randomly weighted inputs is proposed. The eight variants of RWSLFN are assessed using six generic time series datasets and 12 load demand time series datasets. The results show that the RWSLFNs with direct input-output connections (known as the random vector functional link network or RVFL network) have statistically significantly better performance than the RWSLFN configurations without direct input-output connections, possibly because the direct input-output connections in the RVFL network emulate a time-delayed finite impulse response (FIR) filter. However, the RVFL network has simpler training and higher accuracy than the FIR based two-stage neural network. The RVFL network is also compared with some reported forecasting methods; it overall outperforms the non-ensemble methods, namely the persistence method, seasonal autoregressive integrated moving average (sARIMA) and artificial neural networks (ANN). In addition, the testing time of the RVFL network is the shortest, while the training time is comparable to the other reported methods. Finally, possible future research directions are pointed out.

Journal ArticleDOI

[...]

TL;DR: The experimental results demonstrate that the proposed clustering algorithm can find cluster centers, recognizes clusters regardless of their shape and the dimension of the space in which they are embedded, is unaffected by outliers, and often outperforms DPC, AP, DBSCAN and K-means.
Abstract: Clustering by fast search and find of Density Peaks (referred to as DPC) was introduced by Alex Rodriguez and Alessandro Laio. The DPC algorithm is based on the idea that cluster centers are characterized by having a higher density than their neighbors and by being at a relatively large distance from points with higher densities. The power of DPC was demonstrated on several test cases. It can intuitively find the number of clusters and can detect and exclude outliers automatically, while recognizing clusters regardless of their shape and the dimensions of the space containing them. However, DPC does have some drawbacks to be addressed before it may be widely applied. First, the local density ρi of point i is affected by the cutoff distance dc, and is computed in different ways depending on the size of the dataset, which can influence the clustering, especially for small real-world cases. Second, the assignment strategy for the remaining points, after the density peaks (that is, the cluster centers) have been found, can create a "Domino Effect", whereby once one point is assigned erroneously, many more points may subsequently be mis-assigned. This is especially the case in real-world datasets where several clusters of arbitrary shape may overlap each other. To overcome these deficiencies, a robust clustering algorithm is proposed in this paper. To find the density peaks, this algorithm computes the local density ρi of point i relative to its K-nearest neighbors for any size of dataset, independent of the cutoff distance dc, and assigns the remaining points to the most probable clusters using two new point assignment strategies. The first strategy assigns non-outliers by undertaking a breadth-first search of the K-nearest neighbors of a point, starting from the cluster centers. The second strategy assigns outliers and the points left unassigned by the first procedure using the technique of fuzzy weighted K-nearest neighbors. The proposed clustering algorithm is benchmarked on publicly available synthetic and real-world datasets which are commonly used for testing the performance of clustering algorithms. Its clustering results are compared not only with those of DPC but also with those of several well-known clustering algorithms, including Affinity Propagation (AP), Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and K-means. The benchmarks used are clustering accuracy (Acc), Adjusted Mutual Information (AMI) and Adjusted Rand Index (ARI). The experimental results demonstrate that our proposed clustering algorithm can find cluster centers, recognizes clusters regardless of their shape and the dimension of the space in which they are embedded, is unaffected by outliers, and often outperforms DPC, AP, DBSCAN and K-means.
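A sketch of the density-peaks machinery with a K-nearest-neighbor density in place of the cutoff distance dc, as the abstract describes; the exact density kernel below (exponential of the mean KNN distance) is an assumption, not necessarily the paper's formula.

```python
import numpy as np

def knn_density_and_delta(X, k=5):
    """rho: KNN-based local density; delta: distance to the nearest point
    of higher density. Density peaks have large rho * delta."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    knn = np.sort(D, axis=1)[:, 1:k + 1]      # skip the zero self-distance
    rho = np.exp(-knn.mean(axis=1))           # higher = denser (one choice)
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = D[i, rho > rho[i]]
        delta[i] = higher.min() if higher.size else D[i].max()
    return rho, delta

X = np.random.rand(200, 2)
rho, delta = knn_density_and_delta(X)
centers = np.argsort(rho * delta)[-3:]        # pick 3 density peaks
print(centers)
```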

Journal ArticleDOI

[...]

TL;DR: An optimization method is used to efficiently design the generalized type-2 fuzzy system and improve control performance, and the proposed control method is applied to a non-linear control problem to test the advantages of the approach.
Abstract: In this paper, a granular approach for intelligent control using generalized type-2 fuzzy logic is presented. Granularity is used to divide the design of the global controller into several simpler individual controllers. The theory of alpha planes is used to implement the generalized type-2 fuzzy systems. The proposed control method is applied to a non-linear control problem to test the advantages of the approach. An optimization method is also used to efficiently design the generalized type-2 fuzzy system and improve the control performance.

Journal ArticleDOI

[...]

TL;DR: An overview of Big Data covering four issues, namely: concepts, characteristics and processing paradigms of Big Data; the state-of-the-art techniques for decision making in Big Data; felicitous decision making applications of Big Data in social science; and the current challenges of Big Data as well as possible future directions.
Abstract: The era of Big Data has arrived, along with the large-volume, complex and growing data generated by many distinct sources. Nowadays, nearly every aspect of modern society is impacted by Big Data, including medicine, health care, business, management and government. It has been receiving growing attention from researchers in many disciplines, including the natural sciences, life sciences, engineering and even the arts & humanities. It also leads to new research paradigms and ways of thinking on the path of development. Many developed and developing tools improve our ability to make more felicitous decisions than ever before. This paper presents an overview of Big Data covering four issues, namely: (i) concepts, characteristics and processing paradigms of Big Data; (ii) the state-of-the-art techniques for decision making in Big Data; (iii) felicitous decision making applications of Big Data in social science; and (iv) the current challenges of Big Data as well as possible future directions.

Journal ArticleDOI

[...]

TL;DR: A hybrid MCDM method combining simple additive weighting (SAW), the technique for order preference by similarity to an ideal solution (TOPSIS) and grey relational analysis (GRA), which can guide a decision maker in making a reasonable judgment without requiring professional skills or extensive experience.
Abstract:
- An experimental design technique is used for the weight assignment.
- A mathematical model is constructed to help the DMs make reasonable decisions.
- Different MCDM evaluation methods are combined to solve the same MCDM problem.
- The top-ranked alternatives exactly match those derived by past researchers.

Multiple criteria decision-making (MCDM) is a difficult task because the existing alternatives are frequently in conflict with each other. This study presents a hybrid MCDM method combining the simple additive weighting (SAW), technique for order preference by similarity to an ideal solution (TOPSIS) and grey relational analysis (GRA) techniques. A feature of this method is that it employs an experimental design technique to assign attribute weights and then combines different MCDM evaluation methods to construct the hybrid decision-making model. This model can guide a decision maker in making a reasonable judgment without requiring professional skills or extensive experience. Ranking results agreed upon by multiple MCDM methods are more trustworthy than those generated by a single MCDM method. The proposed method is illustrated in a practical application scenario involving an IC packaging company, and four additional numerical examples are provided to demonstrate its applicability. In all of the cases, the results obtained using the proposed method were highly similar to those derived by previous studies, thus proving the validity and capability of this method to solve real-life MCDM problems.
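A hedged sketch of the three constituent scorers on a common normalized decision matrix, so their rankings can be compared or combined; the paper's actual model assigns weights via an experimental design technique, and the data, weights and distinguishing coefficient below are illustrative.

```python
import numpy as np

def saw(R, w):
    """Simple additive weighting score per alternative."""
    return R @ w

def topsis(R, w):
    """Relative closeness to the ideal solution (benefit criteria assumed)."""
    V = R * w
    pos, neg = V.max(axis=0), V.min(axis=0)
    dp = np.linalg.norm(V - pos, axis=1)
    dn = np.linalg.norm(V - neg, axis=1)
    return dn / (dp + dn)

def gra(R, w, xi=0.5):
    """Grey relational grade against the per-criterion reference series."""
    d = np.abs(R - R.max(axis=0))
    coef = (d.min() + xi * d.max()) / (d + xi * d.max())
    return coef @ w

R = np.array([[0.8, 0.6, 0.9],      # alternatives x normalized criteria
              [0.7, 0.9, 0.5],
              [0.6, 0.7, 0.8]])
w = np.array([0.5, 0.3, 0.2])
for f in (saw, topsis, gra):
    print(f.__name__, np.argsort(-f(R, w)) + 1)   # ranking, best first
```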

Journal ArticleDOI

[...]

TL;DR: In this article, the negation, union, and intersection operations on PHFLTSs are defined, and a transformation algorithm is proposed to convert the proportional comparative linguistic pairs into PHFLTSs.
Abstract:
- We propose the general concept of PHFLTS.
- We define the negation, union, and intersection operations on PHFLTSs.
- We present the PHFLWA and PHFLOWA operators.
- A transformation algorithm is proposed to convert proportional comparative linguistic pairs into PHFLTSs.
- We develop a proportional hesitant fuzzy linguistic MCGDM model.

The theory of hesitant fuzzy linguistic term sets (HFLTSs) is a powerful technique for describing hesitant situations, which are typically assessed by experts using several possible linguistic values or rich expressions instead of a single term. The union of HFLTSs with respect to each expert, that is, an extended HFLTS (EHFLTS), further facilitates the elicitation of linguistic assessments for group decision-making problems because EHFLTSs can deal with generalized (either consecutive or non-consecutive) linguistic terms. In this study, we propose proportional HFLTSs (PHFLTSs), which include the proportional information of each generalized linguistic term. The mathematical form of a PHFLTS is consistent with that of a linguistic distribution assessment, although the underlying meanings of the proportions associated with generalized linguistic terms differ. PHFLTSs can be viewed as a special method for performing linguistic distribution assessments, and are recognized as a useful extension of HFLTSs and a possibility distribution for HFLTSs under different assumptions. We define the basic operations with closed properties among PHFLTSs on the basis of t-norms and t-conorms. We then propose a probability theory-based outranking method for PHFLTSs by providing possibility degree formulas. We also study two fundamental aggregation operators for PHFLTSs, namely the proportional hesitant fuzzy linguistic weighted averaging (PHFLWA) operator and the proportional hesitant fuzzy linguistic ordered weighted averaging (PHFLOWA) operator, and investigate several of their important properties. Finally, we use the proposed multiple criteria group decision-making model in practical applications.
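A small sketch of forming a PHFLTS from several experts' HFLTSs, keeping the proportion of each generalized linguistic term. The normalization over total term occurrences is one plausible reading of the concept; the paper's formal construction may differ in detail.

```python
from collections import Counter

def phflts_from_hfltss(hfltss):
    """hfltss: one HFLTS per expert, each a set of term labels (possibly
    non-consecutive). Returns {term: proportion}, proportions summing to 1."""
    counts = Counter(term for h in hfltss for term in h)
    total = sum(counts.values())
    return {term: c / total for term, c in sorted(counts.items())}

experts = [{"s2", "s3"}, {"s3"}, {"s3", "s5"}]
print(phflts_from_hfltss(experts))   # {'s2': 0.2, 's3': 0.6, 's5': 0.2}
```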

Journal ArticleDOI

[...]

TL;DR: The proposed fuzzy forecasting method uses particle swarm optimization techniques to obtain an optimal partition of the intervals in the universe of discourse, and uses the K-means clustering algorithm to cluster the subscripts of the fuzzy sets of the current states of the fuzzy logical relationships and divide them into groups, increasing the forecasting accuracy rates.
Abstract: In this paper, we propose a new fuzzy time series forecasting method for forecasting the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) based on fuzzy time series, fuzzy logical relationships (FLRs), particle swarm optimization (PSO) techniques, the K-means clustering algorithm, and similarity measures between the subscript of the fuzzy set of the fuzzified historical testing datum on the previous trading day and the subscripts of the fuzzy sets appearing in the current states of the FLRs in the chosen FLR group. The PSO techniques are used to obtain an optimal partition of the intervals in the universe of discourse. The K-means clustering algorithm is used to cluster the subscripts of the fuzzy sets of the current states of the FLRs, obtaining the center of each cluster and dividing the constructed FLRs into FLR groups. The experimental results show that the proposed fuzzy forecasting method achieves higher forecasting accuracy rates than the existing methods.

Journal ArticleDOI

[...]

TL;DR: This paper proposes quaternion-valued neural networks (QVNNs) with unbounded time-varying delays and provides sufficient conditions for global µ-stability in the form of both complex-valued and real-valued linear matrix inequalities (LMIs).
Abstract: In this paper, we first propose quaternion-valued neural networks (QVNNs) with unbounded time-varying delays. Some sufficient conditions for global µ-stability in the form of both complex-valued and real-valued linear matrix inequalities (LMIs) are provided by resolving two difficulties. One is decomposing the QVNN into two complex-valued systems with the plural decomposition method for quaternions, which reduces the complexity of the calculations by avoiding the non-commutativity of quaternion multiplication. The other is choosing an appropriate Lyapunov-Krasovskii functional in the form of Hermitian matrices, which is a considerable challenge. Finally, two numerical examples are provided to verify the effectiveness of the obtained results.
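A minimal sketch of the plural (Cayley-Dickson) decomposition the abstract relies on: every quaternion q = a + bi + cj + dk can be written as q = z1 + z2*j with complex z1 = a + bi and z2 = c + di, which is what lets one QVNN be rewritten as two coupled complex-valued systems.

```python
def plural_decompose(a, b, c, d):
    """q = a + b i + c j + d k  ->  (z1, z2) with q = z1 + z2 * j."""
    z1 = a + 1j * b
    z2 = c + 1j * d
    return z1, z2

def plural_compose(z1, z2):
    """Back to quaternion components (a, b, c, d)."""
    return z1.real, z1.imag, z2.real, z2.imag

q = (1.0, -2.0, 0.5, 3.0)
z1, z2 = plural_decompose(*q)
assert plural_compose(z1, z2) == q   # round-trip is exact
print(z1, z2)
```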

Journal ArticleDOI

[...]

TL;DR: Experimental results demonstrate that the proposed method, based on collaborative representation fusion of local and global spatial features, significantly outperforms the state-of-the-art methods on scene classification.
Abstract:
- A scene classification method based on collaborative representation fusion is proposed.
- The complementary nature of local and global spatial features is investigated.
- Weighted fusion is designed based on residuals from the two types of features.
- The proposed LGF overcomes difficulties residing in feature- or decision-level fusion.

This paper presents an effective scene classification approach based on collaborative representation fusion of local and global spatial features. First, a visual word codebook is constructed by partitioning an image into dense regions, followed by typical k-means clustering. Locality-constrained linear coding is employed on the dense regions via the visual codebook, and a spatial pyramid matching strategy is then used to combine the local features of the entire image. For global feature extraction, multiscale completed local binary patterns (MS-CLBP) are applied to both the original gray-scale image and its Gabor feature images. Finally, kernel collaborative representation-based classification (KCRC) is employed on the extracted local and global features, and the class label of the testing image is assigned according to the minimal approximation residual after fusion. The proposed method is evaluated on four commonly used datasets, including two remote sensing image datasets, an indoor and outdoor scenes dataset, and a sports action dataset. Experimental results demonstrate that the proposed method significantly outperforms the state-of-the-art methods.
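A minimal collaborative-representation classifier showing the residual-based label rule the abstract refers to; the paper uses a kernel variant (KCRC) and fuses the residuals of local and global features with weights, whereas this sketch handles a single feature type with an assumed regularization constant.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """X: (d, n) training features as columns; labels: length-n class labels;
    y: (d,) test feature. Returns the class with the minimal residual."""
    # Collaborative code over ALL training samples (ridge-regularized).
    P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    alpha = P @ y
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        residuals[c] = np.linalg.norm(y - X[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

X = np.random.randn(20, 12)                 # 12 training samples (columns)
labels = ["a"] * 6 + ["b"] * 6
y = X[:, 3] + 0.01 * np.random.randn(20)    # near a class-"a" sample
print(crc_classify(X, labels, y))           # expected: "a"
```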

Journal ArticleDOI

[...]

TL;DR: This paper proposes a unified framework to classify and compare existing studies of rough set approximations in a multigranulation space, explaining them through rough sets derived from individual equivalence relations.
Abstract: There exist several approaches to rough set approximations in a multigranulation space, namely, a family of equivalence relations. In this paper, we propose a unified framework to classify and compare existing studies. An underlying principle is to explain rough sets in a multigranulation space through rough sets derived by using individual equivalence relations. Two basic models are suggested. One model is based on a combination of a family of equivalence relations into an equivalence relation and the construction of approximations with respect to the combined relation. By combining equivalence relations through set intersection and union, respectively, we construct two sub-models. The other model is based on the construction of a family of approximations from a set of equivalence relations and a combination of the family of approximations. By using set intersection and union to combine a family of approximations, respectively, we again build two sub-models. As a result, we have a total of four models. We examine these models and give conditions under which some of them become the same.
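A small sketch of the second kind of model described above: compute the lower approximation of a target set under each equivalence relation (partition) separately, then combine the approximations by union or by intersection; the "optimistic"/"pessimistic" labels follow common multigranulation rough set terminology, which the paper's four-model framework generalizes.

```python
def block(partition, x):
    """Equivalence class of x in a partition (list of frozensets)."""
    return next(B for B in partition if x in B)

def lower(partition, target, U):
    """Lower approximation: points whose whole class lies inside target."""
    return {x for x in U if block(partition, x) <= target}

U = set(range(6))
P1 = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]
P2 = [frozenset({0, 1, 2}), frozenset({3, 4, 5})]
target = {0, 1, 2, 3}

opt = lower(P1, target, U) | lower(P2, target, U)   # union of approximations
pes = lower(P1, target, U) & lower(P2, target, U)   # intersection of them
print(opt, pes)   # {0, 1, 2, 3} {0, 1, 2}
```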

Journal ArticleDOI

[...]

TL;DR: Experimental results and security analysis demonstrate that the proposed algorithm offers high security and fast speed, and can resist various attacks.
Abstract: This paper proposes a new lossless encryption algorithm for color images based on a six-dimensional (6D) hyperchaotic system and the two-dimensional (2D) discrete wavelet transform (DWT). Unlike current image encryption methods, our image encryption scheme is constructed using the 2D DWT and the 6D hyperchaotic system in both the frequency domain and the spatial domain, where the key streams depend not only on the hyperchaotic system but also on the plain-image. In the presented algorithm, the plain-image is first divided into four image sub-bands by means of the 2D DWT. Secondly, the sub-bands are permutated by a key stream, and their size is decreased by a constant factor. Thirdly, the 2D inverse DWT is employed to reconstruct an intermediate image from the four encrypted image sub-bands. Finally, to further enhance the security, the pixel values of the intermediate image are modified using another key stream. Experimental results and security analysis demonstrate that the proposed algorithm offers high security and fast speed, and can resist various attacks.

Journal ArticleDOI

[...]

TL;DR: The experimental results show that the proposed similarity measure between intuitionistic fuzzy sets can overcome the drawbacks of the existing similarity measures.
Abstract: In this paper, we propose a new similarity measure between intuitionistic fuzzy values based on the centroid points of transformed right-angled triangular fuzzy numbers. We also prove some properties of the proposed similarity measure between intuitionistic fuzzy values. Based on the proposed similarity measure between intuitionistic fuzzy values, we propose a new similarity measure between intuitionistic fuzzy sets. We also apply the proposed similarity measure between intuitionistic fuzzy sets to deal with pattern recognition problems. The experimental results show that the proposed similarity measure between intuitionistic fuzzy sets can overcome the drawbacks of the existing similarity measures. The proposed similarity measure provides us with a useful way for dealing with pattern recognition problems in intuitionistic fuzzy environments.

Journal ArticleDOI

[...]

TL;DR: The advantage and effectiveness of the proposed criteria are shown by comparing maximum delay bounds with results from recently published papers via four numerical examples.
Abstract: This paper is concerned with the problem of stability and stabilization for Takagi-Sugeno (T-S) fuzzy systems with time-varying delays. By constructing a suitable Lyapunov-Krasovskii functional, sufficient conditions for ensuring the asymptotic stability and stabilization of the concerned fuzzy systems are derived within the framework of linear matrix inequalities (LMIs). The advantage and effectiveness of the proposed criteria are shown by comparing maximum delay bounds with results obtained in recently published papers via four numerical examples.
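To show what "conditions in the framework of LMIs" means computationally, here is a much-simplified, delay-free analogue: certify asymptotic stability of dx/dt = A x by finding P > 0 with A^T P + P A < 0 via a semidefinite program. The paper's actual conditions are built from a Lyapunov-Krasovskii functional for T-S fuzzy dynamics with time-varying delays; the matrix A and solver setup below are illustrative.

```python
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)        # pure feasibility
prob.solve()
print(prob.status, P.value)   # "optimal" means a certificate P was found
```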

Journal ArticleDOI

[...]

TL;DR: A novel approach to visual object classification based on generating simple fuzzy classifiers from local image features to distinguish between one known class and other classes, with boosting meta-learning used to find the most representative local features.
Abstract: This paper presents a novel approach to visual object classification based on generating simple fuzzy classifiers using local image features to distinguish between one known class and other classes. Boosting meta-learning is used to find the most representative local features. The proposed approach is tested on a state-of-the-art image dataset and compared with the bag-of-features image representation model combined with Support Vector Machine classification. The novel method gives better classification accuracy, and the learning and testing time is more than 30% shorter.

Journal ArticleDOI

[...]

TL;DR: The purpose of the addressed problem is to design a full-order filter such that, in the simultaneous presence of distributed delays, randomly occurring nonlinearities, sensor saturation and missing measurements, the filtering dynamic system is guaranteed to be exponentially mean-square stable, and the H∞ filtering performance index is achieved.
Abstract: This paper is concerned with the H∞ filtering problem for a class of networked systems subject to randomly occurring distributed state delays, nonlinearities, and sensor saturation, as well as missing measurements via unreliable communication channels. The output measurements are affected by sensor saturation, which is described by sector-nonlinearities. Missing measurements are modeled by a random variable satisfying the Bernoulli distribution, and the nonlinearities, which satisfy a global Lipschitz condition, cover well-known nonlinear functions. The purpose of the addressed problem is to design a full-order filter such that, in the simultaneous presence of distributed delays, randomly occurring nonlinearities, sensor saturation and missing measurements, the filtering dynamic system is guaranteed to be exponentially mean-square stable and the H∞ filtering performance index is achieved. A sufficient condition for the solution of the addressed problem is derived, and the explicit expression of the desired filter gains is described in terms of the solution to a linear matrix inequality (LMI). Finally, a numerical example is provided to show the effectiveness of the designed method.

Journal ArticleDOI

[...]

TL;DR: This paper proposes a novel local feature descriptor, called a local feature statistics histogram (LFSH), for efficient 3D point cloud registration, and an optimized sample consensus (OSAC) algorithm is developed to iteratively estimate the optimum transformation from point correspondences.
Abstract: This paper proposes a novel local feature descriptor, called a local feature statistics histogram (LFSH), for efficient 3D point cloud registration. An LFSH forms a comprehensive description of local shape geometries by encoding their statistical properties on local depth, point density, and angles between normals. The sub-features in the LFSH descriptor are low-dimensional and quite efficient to compute. In addition, an optimized sample consensus (OSAC) algorithm is developed to iteratively estimate the optimum transformation from point correspondences. OSAC can handle the challenging cases of matching highly self-similar models. Based on the proposed LFSH and OSAC, a coarse-to-fine algorithm can be formed for 3D point cloud registration. Experiments and comparisons with the state-of-the-art descriptors demonstrate that LFSH is highly discriminative, robust, and significantly faster than other descriptors. Meanwhile, the proposed coarse-to-fine registration algorithm is demonstrated to be robust to common nuisances, including noise and varying point cloud resolutions, and can achieve high accuracy on both model data and scene data.
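A sketch of one of the LFSH sub-features the abstract lists, a histogram of angles between a local reference normal and its neighbors' normals; the bin count and normalization are our illustrative choices, and the full descriptor would concatenate this with the local-depth and point-density sub-histograms.

```python
import numpy as np

def normal_angle_histogram(n_ref, neighbor_normals, bins=10):
    """Normalized histogram of angles (0..pi) between n_ref and each
    neighbor normal; one sub-feature of an LFSH-style descriptor."""
    dots = np.clip(neighbor_normals @ n_ref, -1.0, 1.0)
    angles = np.arccos(dots)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    return hist / hist.sum()

# Toy data: unit normals for a reference point and its neighborhood.
normals = np.random.randn(50, 3)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(normal_angle_histogram(normals[0], normals[1:]))
```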