
Showing papers by "Shu-Chuan Chu published in 2014"


Journal ArticleDOI
TL;DR: A novel dictionary training method for sparse reconstruction that enhances the similarity of sparse representations between low-resolution and high-resolution MRI block pairs by simultaneously training two dictionaries.

73 citations


Book ChapterDOI
01 Jan 2014
TL;DR: A novel algorithm, the compact Bat Algorithm (cBA), is proposed for solving numerical optimization problems based on the framework of the original Bat Algorithm; the population is replaced by a probability vector that is updated through single competitions.

Abstract: Addressing the computational requirements of hardware devices with limited resources, such as small memory or low price, is a critical issue. In this paper, a novel algorithm, the compact Bat Algorithm (cBA), is proposed for solving numerical optimization problems based on the framework of the original Bat Algorithm (oBA). A probabilistic representation of the bats' behavior is employed, in which the population is replaced by a probability vector updated after each single competition. As a result, the entire algorithm runs with modest memory usage. The simulations compare both algorithms in terms of solution quality, speed, and memory savings. The results show that cBA, despite its modest memory usage, solves the optimization problems with performance as good as the complex population-based oBA, while requiring only the storage space of six solutions.

23 citations
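The probability-vector mechanism described in the abstract (a virtual population compressed into per-dimension statistics, updated after each single competition) can be sketched as follows. This is a minimal illustration of the compact-algorithm idea, not the paper's exact update rules: the sphere objective, the Gaussian probability-vector model, and the virtual population size NP are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                     # toy objective (an assumption, not from the paper)
    return float(np.sum(x ** 2))

dim, NP, iters = 5, 50, 2000       # NP is the *virtual* population size
mu = np.zeros(dim)                 # probability-vector mean
sigma = np.full(dim, 10.0)         # probability-vector spread
elite = rng.uniform(-10, 10, dim)
f0 = sphere(elite)

for _ in range(iters):
    trial = rng.normal(mu, np.maximum(sigma, 1e-6))  # sample one virtual bat
    # single competition: trial vs. elite decides winner and loser
    if sphere(trial) < sphere(elite):
        winner, loser = trial, elite.copy()
        elite = trial
    else:
        winner, loser = elite, trial
    # PV update pulls the virtual population toward the winner
    old_mu = mu.copy()
    mu = mu + (winner - loser) / NP
    sigma = np.sqrt(np.abs(sigma ** 2 + old_mu ** 2 - mu ** 2
                           + (winner ** 2 - loser ** 2) / NP))
```

Only `mu`, `sigma`, and the single elite solution are stored, which is why memory usage stays constant regardless of the virtual population size.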


Journal ArticleDOI
TL;DR: In this paper, a uniform framework for kernel self-optimization with the ability to adjust the data structure is presented: the data-dependent kernel is extended and applied to kernel learning, and optimization equations with two criteria for measuring data discrimination are used to solve for the optimal parameter values.

18 citations


Book ChapterDOI
03 Jun 2014
TL;DR: The results show that cABC, despite its modest memory usage, solves optimization problems with performance as good as the complex population-based original ABC (oABC).

Abstract: Another version of the Artificial Bee Colony (ABC) optimization algorithm, called the compact Artificial Bee Colony (cABC) optimization, is proposed in this paper for numerical optimization problems. Its aim is to address the computational requirements of hardware devices with limited resources, such as small memory or low price. A probabilistic representation of the collective behavior of the social bee colony is employed, in which the population is replaced by a probability vector updated after each single competition. As a result, the entire algorithm runs with modest memory usage. The simulations compare both algorithms in terms of solution quality, speed, and memory savings. The results show that cABC, despite its modest memory usage, solves the optimization problems with performance as good as the complex population-based original ABC (oABC), while requiring only the storage space of six solutions.

16 citations


Book ChapterDOI
01 Jan 2014
TL;DR: This paper proposes an efficient image encryption scheme in which a logistic chaos-based stream cipher permutes the color image, and the MD5 hash function and the ZUC stream cipher algorithm are combined to diffuse it.

Abstract: Digital color image encryption differs from text encryption because of inherent features of images such as huge data capacity and high correlation among neighboring pixels. Because of the desirable cryptographic properties of chaotic maps, such as sensitivity to initial conditions and random-like behavior, more and more research uses these properties for encryption. This paper proposes an efficient image encryption scheme. A logistic chaos-based stream cipher is utilized to permute the color image, and the MD5 hash function and the ZUC stream cipher algorithm are combined to diffuse it. Theoretical and experimental analyses both confirm the security and validity of the proposed algorithm.

13 citations
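The permute-then-diffuse pipeline the abstract describes can be illustrated with a toy sketch. This is not the paper's cipher: MD5 here derives the diffusion keystream directly (the real scheme uses ZUC, replaced by a plain XOR keystream for brevity), and the logistic-map parameters `x0` and `r` are arbitrary illustrative values.

```python
import hashlib
import numpy as np

def logistic_sequence(x0, r, n, burn=100):
    """Iterate the logistic map x <- r*x*(1-x), discarding a transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def md5_keystream(key, n):
    """Toy keystream from chained MD5 digests (stand-in for ZUC)."""
    blocks = b"".join(hashlib.md5(key + i.to_bytes(4, "big")).digest()
                      for i in range((n + 15) // 16))
    return np.frombuffer(blocks, dtype=np.uint8)[:n]

def encrypt(img, key=b"secret", x0=0.3567, r=3.99):
    flat = img.flatten()
    # permutation stage: the sort order of the chaotic sequence scrambles pixels
    perm = np.argsort(logistic_sequence(x0, r, flat.size))
    # diffusion stage: XOR with the keystream
    cipher = flat[perm] ^ md5_keystream(key, flat.size)
    return cipher.reshape(img.shape), perm

def decrypt(cipher, perm, key=b"secret"):
    flat = cipher.flatten()
    permuted = flat ^ md5_keystream(key, flat.size)
    out = np.empty_like(permuted)
    out[perm] = permuted          # invert the permutation
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher, perm = encrypt(img)
restored = decrypt(cipher, perm)
```

In a real design the permutation would be regenerated from the key rather than passed around, and a cryptographic stream cipher would replace the MD5 chain.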


Proceedings Article
01 Jan 2014
TL;DR: A method of optimizing matrix mapping with a data-dependent kernel for image feature extraction and classification is presented; it adaptively optimizes the kernel parameter of the nonlinear mapping.

Abstract: Kernel-based nonlinear feature extraction is feasible for extracting image features for classification. Current kernel-based methods suffer from two problems: 1) they operate on data vectors obtained by transforming the image matrix into a vector, which causes storage and computation burdens; 2) the parameter of the kernel function heavily influences kernel-based learning. To solve these two problems, we present a method of optimizing matrix mapping with a data-dependent kernel for image feature extraction and classification. The method implements the algorithm without transforming the matrix into a vector, and it adaptively optimizes the kernel parameter of the nonlinear mapping. Comprehensive experiments are implemented to evaluate the performance of the algorithms.

12 citations
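A common construction behind data-dependent kernels of this kind is the conformal transform k~(x, y) = q(x) q(y) k(x, y), where q is built from a small set of expansion vectors and the coefficients of q are what get optimized. The sketch below only illustrates that construction; the choice of q, the expansion vectors, and the RBF widths are assumptions, not the paper's settings.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian RBF Gram matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def factor_q(X, expansion_vectors, alpha, delta=1.0):
    """q(x) = alpha_0 + sum_i alpha_i * exp(-delta * ||x - e_i||^2)."""
    return alpha[0] + rbf(X, expansion_vectors, delta) @ alpha[1:]

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
E = X[:3]                  # expansion vectors (here simply the first samples)
alpha = np.ones(4)         # combination coefficients: the values one would optimize

q = factor_q(X, E, alpha)
K = q[:, None] * q[None, :] * rbf(X, X)   # data-dependent kernel matrix
```

Because the transform multiplies a positive-semidefinite Gram matrix by a positive rank-one factor, the result remains a valid kernel matrix whatever the (positive) coefficients; a discrimination criterion would then drive the choice of `alpha`.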


Book ChapterDOI
01 Jan 2014
TL;DR: This work jointly optimizes the embedding dimension and delay time by particle swarm optimization to obtain their optimal values in RBF single-step and multi-step prediction models.

Abstract: The radial basis function (RBF) neural network performs very well on prediction of chaotic time series, but the precision of prediction is greatly affected by the embedding dimension and delay time of the phase-space reconstruction used in the prediction process. To address these problems, we jointly optimize the embedding dimension and delay time by particle swarm optimization to obtain their optimal values in RBF single-step and multi-step prediction models. In addition, we perform single-step and multi-step prediction of the Lorenz system with this method; the results show that the prediction accuracy of the optimized prediction model is clearly improved.

8 citations
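The quantity being tuned here is the delay embedding (m, tau) that turns a scalar series into phase-space vectors. The sketch below evaluates candidate (m, tau) pairs by the error of a one-step nearest-neighbour predictor; it uses an exhaustive search over a small grid and a logistic-map series purely for illustration, where the chapter uses PSO and an RBF network.

```python
import numpy as np

def embed(series, m, tau):
    """Delay embedding: row i is (s[i], s[i+tau], ..., s[i+(m-1)tau])."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(m)])

def nn_error(series, m, tau):
    """RMSE of one-step nearest-neighbour prediction on the embedding."""
    X = embed(series, m, tau)
    last = np.arange(len(X)) + (m - 1) * tau   # index of each vector's last sample
    keep = last + 1 < len(series)
    X, y = X[keep], series[last[keep] + 1]
    half = len(X) // 2
    lib_X, lib_y, qry_X, qry_y = X[:half], y[:half], X[half:], y[half:]
    d = ((qry_X[:, None, :] - lib_X[None, :, :]) ** 2).sum(-1)
    pred = lib_y[d.argmin(1)]                  # predict the neighbour's successor
    return float(np.sqrt(np.mean((pred - qry_y) ** 2)))

# chaotic test series from the logistic map (stand-in for the Lorenz system)
x, vals = 0.4, []
for _ in range(500):
    x = 3.9 * x * (1 - x)
    vals.append(x)
series = np.array(vals)

# exhaustive search over small (m, tau); the chapter performs this search with PSO
best = min((nn_error(series, m, tau), m, tau)
           for m in range(1, 5) for tau in range(1, 4))
```

PSO replaces the grid scan when the (m, tau) space, or a joint search including RBF parameters, is too large to enumerate.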


Proceedings Article
01 Jan 2014
TL;DR: A no-reference quality metric for evaluating the blocking artifacts in images based on a finding that the blocking artifact has direct effect on the distribution of discrete Tchebichef moments is presented.
Abstract: This paper presents a no-reference quality metric for evaluating the blocking artifacts in images. It is based on a finding that the blocking artifact has direct effect on the distribution of discrete Tchebichef moments. The image is first divided into target blocks that cover potential artifacts. Tchebichef moments are then extracted as the block features. A local artifact score is computed by comparing the coefficients of the moments and the overall quality metric is obtained by taking the average of local estimates. Simulation results and comparisons demonstrate the advantage of the method.

6 citations
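The core observation, that a hard step at a block boundary shifts energy into higher-order Tchebichef moments while smooth content does not, can be demonstrated compactly. As a stand-in for the recurrence-based Tchebichef polynomials used in practice, the sketch orthonormalises the monomial basis on the block's sample points, which yields discrete orthogonal polynomials of the same family up to sign and normalisation; the 8x8 test patterns are illustrative.

```python
import numpy as np

def discrete_orthopolys(N):
    """Rows: orthonormal polynomials of degree 0..N-1 on the points 0..N-1."""
    V = np.vander(np.arange(N), N, increasing=True).astype(float)
    Q, _ = np.linalg.qr(V)        # Gram-Schmidt of the monomial basis
    return Q.T

def moments(block):
    """Moment matrix T = P @ block @ P.T for a square block."""
    P = discrete_orthopolys(block.shape[0])
    return P @ block @ P.T

smooth = np.tile(np.linspace(0, 1, 8), (8, 1))            # gentle horizontal ramp
blocky = np.hstack([np.zeros((8, 4)), np.ones((8, 4))])   # hard step at a boundary

Ts, Tb = moments(smooth), moments(blocky)
```

For the linear ramp, all horizontal moments of order 2 and above vanish (a linear signal lies in the span of the degree-0 and degree-1 polynomials); the step spreads energy into those higher orders, which is the signature a blockiness score can compare against.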


Book ChapterDOI
01 Jan 2014
TL;DR: As a sub-field of pattern recognition, face recognition (or face classification) has become a hot research topic, and feature extraction based on dimensionality reduction plays an important role in the related areas.

Abstract: As a sub-field of pattern recognition, face recognition (or face classification) has become a hot research topic. In pattern recognition and image processing, feature extraction based on dimensionality reduction plays an important role in the related areas. Feature extraction simplifies the amount of resources required to describe a large set of data accurately for classification and clustering. Algorithmically, when the input data are too large to be processed and are suspected to be notoriously redundant (much data, but not much information), the input data are transformed into a reduced representation set of features, also called a feature vector, by a linear or nonlinear transformation. Transforming the input data into this set of features is called feature extraction. If the extracted features are carefully chosen, it is expected that the feature set will capture the relevant information from the input data, so that the desired task can be performed using this reduced representation instead of the full-size input.

3 citations


Book ChapterDOI
03 Jun 2014
TL;DR: A new distortion-based method is proposed that hides sensitive rules by removing items from the database so as to reduce the support or confidence of sensitive rules below specified thresholds, and it achieves satisfactory results with fewer side effects and less data loss.

Abstract: Today, people can use various database techniques to discover useful knowledge from large collections of data. However, people also face the risk of disclosing sensitive information to competitors when data is shared between different organizations. Thus, a balance must be struck between legitimate mining needs and the protection of confidential knowledge when people release or share data. In this paper, we study privacy preservation in association rule mining. A new distortion-based method is proposed that hides sensitive rules by removing items from the database so as to reduce the support or confidence of sensitive rules below specified thresholds. To minimize side effects, the number of sensitive rules and the number of non-sensitive rules supported by each transaction are used to sort the transactions, and the candidates containing the most sensitive rules and the fewest non-sensitive rules are selected for modification. Comparative experiments on real datasets showed that the new method achieves satisfactory results with fewer side effects and less data loss.

3 citations
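The distortion idea, deleting items from chosen transactions until a sensitive rule's support falls below threshold, reduces to a small loop. The toy database and the simple victim-transaction choice below stand in for the chapter's sorting heuristic over sensitive and non-sensitive rule counts.

```python
def support(db, itemset):
    """Fraction of transactions containing every item of itemset."""
    return sum(itemset <= t for t in db) / len(db)

def hide_rule(db, antecedent, consequent, min_sup):
    """Delete a consequent item from supporting transactions until hidden."""
    rule_items = antecedent | consequent
    while support(db, rule_items) >= min_sup:
        # victim choice simplified: first transaction supporting the rule
        idx = next(i for i, t in enumerate(db) if rule_items <= t)
        db[idx] = db[idx] - {next(iter(consequent))}
    return db

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "c"}]
db = hide_rule(db, {"a"}, {"b"}, min_sup=0.4)
```

Note that transactions are modified, never dropped, which is what keeps data loss low; the quality of the victim-selection ordering is what controls the side effects on non-sensitive rules.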


Book ChapterDOI
01 Jan 2014
TL;DR: The kernel trick is used to represent the complicated nonlinear relationships of input data, extending linear discriminant analysis (LDA), a traditional dimensionality reduction technique for feature extraction, into the kernel discriminant analysis (KDA) algorithm.
Abstract: Linear discriminant analysis (LDA) is a traditional dimensionality reduction technique for feature extraction. It has been widely used and proven successful in a lot of real-world applications. LDA works well in some cases, but it fails to capture a nonlinear relationship with a linear mapping. In order to overcome this weakness of LDA, the kernel trick is used to represent the complicated nonlinear relationships of input data to develop kernel discriminant analysis (KDA) algorithm.
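The kernel trick on which KDA rests is that an inner product in an explicit nonlinear feature space equals a kernel evaluated on the raw inputs, so an algorithm written purely in inner products (like LDA's scatter computations) never needs the mapping itself. A minimal check for the degree-2 polynomial kernel:

```python
import numpy as np

def phi(x):
    """Explicit feature map whose inner product is the degree-2 polynomial kernel."""
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

def poly_kernel(x, y):
    """k(x, y) = (x . y + 1)^2 -- computed without ever forming phi."""
    return (np.dot(x, y) + 1.0) ** 2

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
lhs = np.dot(phi(x), phi(y))   # inner product in the 6-D feature space
rhs = poly_kernel(x, y)        # kernel evaluated on the 2-D inputs
```

For a Gaussian kernel the feature space is infinite-dimensional, which is exactly why the implicit formulation is the only practical one.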

Book ChapterDOI
01 Jan 2014
TL;DR: A comprehensive survey on face recognition covering practical applications, sensory inputs, methods, and application conditions, with face recognition methods reviewed from the viewpoints of signal processing and machine learning.

Abstract: Face recognition enjoys wide research and applications in many areas, and many surveys of face recognition have been written. Unlike previous surveys written from a single viewpoint of application, method, or condition, this book surveys face recognition comprehensively across practical applications, sensory inputs, methods, and application conditions. Regarding sensory inputs, we review image-based, video-based, 3D-based, and hyperspectral image-based face recognition, and we survey face recognition methods from the viewpoints of signal processing and machine learning, such as kernel learning and manifold learning methods. Moreover, we discuss single-training-sample-based face recognition and recognition under variable poses. The prominent algorithms are described and critically analyzed, and relevant issues such as data collection, the influence of small sample size, and system evaluation are discussed.

Book ChapterDOI
01 Jan 2014
TL;DR: The famous paired living habit of sleepy lizards is verified with the proposed algorithm, and some common population characteristics of the lizards are examined using traditional social network algorithms.

Abstract: The K-Nearest Neighbor algorithm is one of the commonly used methods for classification in machine learning and computational intelligence. A new research method for studying sleepy lizards, based on the K-Nearest Neighbor algorithm and traditional social network algorithms, together with its improvement, is proposed in this chapter. The famous paired living habit of sleepy lizards is verified with our proposed algorithm. In addition, some common population characteristics of the lizards are examined using the traditional social network algorithms. The good performance shown in the experimental results demonstrates the efficiency of the new research method.
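A bare-bones k-nearest-neighbour classifier, the building block the chapter starts from, fits in a few lines; the toy 2-D points standing in for lizard observation features are purely illustrative.

```python
from collections import Counter

import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority label among the k training points nearest to x."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# two well-separated clusters with labels 0 and 1
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [2.0, 2.0], [2.1, 1.9], [1.9, 2.2]])
y = np.array([0, 0, 0, 1, 1, 1])
```

In a social-network setting the same idea applies with a graph-derived distance in place of the Euclidean one.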

Book ChapterDOI
03 Jun 2014
TL;DR: The proposed GGDA uses a genetic algorithm to search for a more discriminant diffusing map and then performs LDA in the new space; experimental results confirm the efficiency of the proposed algorithm.

Abstract: In this paper, a novel Genetic Generalized Discriminant Analysis (GGDA) is proposed. GGDA is a generalized version of Exponential Discriminant Analysis (EDA). The EDA algorithm is equivalent to mapping the samples to a new space and then performing LDA. However, is this space optimal for classification? The proposed GGDA uses a genetic algorithm to search for a more discriminant diffusing map and then performs LDA in the new space. Experimental results confirm the efficiency of the proposed algorithm.

Book ChapterDOI
01 Jan 2014
TL;DR: It is proved that the two algorithms are effective and can distinguish chaotic sequences of different complexity, and that the complexity of the Logistic map is greater than that of the other chaotic systems.

Abstract: The complexity of a sequence is an important index for quantifying the performance of a chaotic sequence. In order to select a chaotic sequence of higher complexity and apply it in a hardware encryption system, this paper analyzes quantitative methods for chaotic complexity and presents approximate entropy and permutation entropy as criteria for measuring the complexity of chaotic sequences. Taking the Tent, Logistic, and Henon chaotic systems as examples, we analyze and compare their complexity. It is proved that the two algorithms are effective and can distinguish chaotic sequences of different complexity. The research shows that the complexity of the Logistic map is greater than that of the other chaotic systems. The results of the study provide a theoretical and experimental basis for the application of chaotic sequences in hardware encryption systems and secure information communication.
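Permutation entropy, one of the two measures the chapter adopts, counts the ordinal patterns of consecutive samples and computes the entropy of their distribution. The sketch below uses order m = 3 and compares a logistic-map sequence against a strictly periodic one; both test signals are illustrative choices, not the chapter's exact experiments.

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(series, m=3):
    """Normalised permutation entropy in [0, 1] for embedding order m."""
    counts = Counter(
        tuple(np.argsort(series[i:i + m])) for i in range(len(series) - m + 1))
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))   # divide by log(m!) to normalise

# chaotic sequence from the logistic map vs. a strictly periodic one
x, vals = 0.4, []
for _ in range(1000):
    x = 3.99 * x * (1 - x)
    vals.append(x)
chaotic = np.array(vals)
periodic = np.array([0.0, 1.0] * 500)

pe_chaotic = permutation_entropy(chaotic)
pe_periodic = permutation_entropy(periodic)
```

The periodic signal visits only a couple of ordinal patterns, so its entropy is low; the chaotic sequence spreads over many patterns and scores markedly higher, which is exactly the discrimination the chapter relies on.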

01 Jan 2014
TL;DR: The proposed guidable bat algorithm (GBA) is a paradigm of EC 2.0, and the simulation results show that the solving efficiency and solution quality of GBA are better than BA's, and even better than the well-known HBA's.

Abstract: In this study, an innovative conception is conceived to break the development bottleneck of the traditional evolutionary computations (ECs). This innovative conception is bio-inspired evolutionary computing with context-awareness and collective effect, called Next-Generation ECs (EC 2.0). Owing to the context-awareness property of EC 2.0, the individuals are able to observe environmental information through physical properties, and each individual can steadily and closely move toward the objective. In addition, the individual behaviors in the collective effect include competition, cooperation, and conflict. Conflict behaviors of individuals, such as difference, contradiction, or inconsistency, are considered in designing the search strategy. The proposed guidable bat algorithm (GBA) is a paradigm of EC 2.0. The bats governed by GBA are able to rapidly and precisely discover the global optimal solution. The simulation results show that the solving efficiency and solution quality of GBA are better than BA's, and even better than the well-known HBA's.

Book ChapterDOI
01 Jan 2014
TL;DR: Feature extraction is an important step and essential process in many data analysis areas, such as face recognition, handwriting recognition, human facial expression analysis, and speech recognition.
Abstract: Feature extraction is an important step and essential process in many data analysis areas, such as face recognition, handwriting recognition, human facial expression analysis, and speech recognition.

Book ChapterDOI
01 Jan 2014
TL;DR: A novel image feature extraction algorithm, entitled Feature Line-based Local Discriminant Analysis (FLLDA), is proposed: a subspace learning algorithm based on the Feature Line (FL) metric, whose effectiveness is confirmed experimentally.

Abstract: In this paper, a novel image feature extraction algorithm, entitled Feature Line-based Local Discriminant Analysis (FLLDA), is proposed. FLLDA is a subspace learning algorithm based on the Feature Line (FL) metric. The FL metric is used for the evaluation of the local within-class scatter and local between-class scatter in the proposed FLLDA approach. Experimental results on the COIL20 image database confirm the effectiveness of the proposed algorithm.
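The feature-line metric underlying this family of methods measures the distance from a query point to the straight line through two same-class prototypes, rather than to the prototypes themselves. A minimal sketch of that distance (the 2-D test points are illustrative):

```python
import numpy as np

def feature_line_distance(x, x1, x2):
    """Distance from x to the feature line through prototypes x1 and x2."""
    d = x2 - x1
    t = np.dot(x - x1, d) / np.dot(d, d)   # projection parameter along the line
    foot = x1 + t * d                      # foot point of x on the line
    return float(np.linalg.norm(x - foot))

x1, x2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
d_off = feature_line_distance(np.array([1.0, 1.0]), x1, x2)   # off the line
d_on = feature_line_distance(np.array([5.0, 0.0]), x1, x2)    # on its extension
```

Because the line interpolates (and extrapolates) between prototypes, each pair of samples virtually enlarges the class, which is what the local scatter evaluation exploits.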

Book ChapterDOI
01 Jan 2014
TL;DR: The facial feature extraction plays an important role in face recognition and should sufficiently consider the following two issues: what features are used to represent a face image and how to classify a new face image based on this representation.
Abstract: Face recognition and its relative research have become the very active research topics in recent years due to its wide applications. An excellent face recognition algorithm should sufficiently consider the following two issues: what features are used to represent a face image and how to classify a new face image based on this representation. So the facial feature extraction plays an important role in face recognition.

Book ChapterDOI
01 Jan 2014
TL;DR: This chapter focuses on developing a meaningful low-dimensional subspace of a high-dimensional input space through linear dimensionality reduction methods such as PCA and LDA.
Abstract: Feature extraction with dimensionality reduction is an important step and essential process in embedding data analysis. Linear dimensionality reduction, as in PCA and LDA, aims to develop a meaningful low-dimensional subspace of a high-dimensional input space. LDA finds the optimal projection matrix under the Fisher criterion by considering the class labels, while PCA seeks to minimize the mean-square error criterion.
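The PCA side of this comparison reduces to an eigendecomposition of the covariance matrix: projecting centred data onto the top eigenvectors minimises the mean-square reconstruction error. A minimal sketch on synthetic data that genuinely lives near a 2-D plane (the generating matrix and noise level are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# 3-D data lying near a 2-D plane, plus a little noise
latent = rng.normal(size=(200, 2))
X = latent @ np.array([[1.0, 0.0, 2.0],
                       [0.0, 1.0, -1.0]]) + 0.01 * rng.normal(size=(200, 3))

Xc = X - X.mean(0)                        # centre the data
cov = Xc.T @ Xc / (len(X) - 1)            # sample covariance matrix
vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
W = vecs[:, -2:]                          # top-2 principal directions
Z = Xc @ W                                # reduced 2-D representation
recon_err = float(np.mean((Xc - Z @ W.T) ** 2))
```

LDA would instead solve a generalized eigenproblem of the between-class and within-class scatter matrices, which is why it needs class labels while PCA does not.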

Book ChapterDOI
01 Jan 2014
TL;DR: Semi-supervised learning methods attempt to improve the performance of supervised or unsupervised learning in the presence of side information.
Abstract: Semi-supervised learning methods attempt to improve the performance of supervised or unsupervised learning in the presence of side information. This side information can take the form of unlabeled samples in the supervised case or pairwise constraints in the unsupervised case.

Book ChapterDOI
21 Nov 2014
TL;DR: In this paper, an innovative conception is conceived to break the development bottleneck of the traditional evolutionary computations (ECs): bio-inspired evolutionary computing with context-awareness and collective effect, called Next-Generation ECs (EC 2.0).
Abstract: In this study, an innovative conception is conceived to break the development bottleneck of the traditional evolutionary computations (ECs). This innovative conception is bio-inspired evolutionary computing with context-awareness and collective effect, called Next-Generation ECs (EC 2.0). Owing to the context-awareness property of EC 2.0, the individuals are able to observe environmental information through physical properties, and each individual can steadily and closely move toward the objective. In addition, the individual behaviors in the collective effect include competition, cooperation, and conflict. Conflict behaviors of individuals, such as difference, contradiction, or inconsistency, are considered in designing the search strategy. The proposed guidable bat algorithm (GBA) is a paradigm of EC 2.0. The bats governed by GBA are able to rapidly and precisely discover the global optimal solution. The simulation results show that the solving efficiency and solution quality of GBA are better than BA's, and even better than the well-known HBA's.

Book ChapterDOI
01 Jan 2014
TL;DR: Multimedia multisensor systems are used in various monitoring settings such as buses, homes, shopping malls, and schools, where multiple sensors such as audio and video are used for identification and for ensuring safety.
Abstract: Multimedia multisensor systems are used in various monitoring settings such as buses, homes, shopping malls, schools, and so on. Accordingly, these systems are implemented in an ambient space. Multiple sensors such as audio and video are used for identification and for ensuring safety, while a wrist pulse signal detector is used for health analysis. These multisensor multimedia systems record, process, and analyze the sensory media streams and provide high-level information.

Book ChapterDOI
01 Jan 2014
TL;DR: Kernel methods are algorithms that, by replacing the inner product with an appropriate positive definite function, implicitly perform a nonlinear mapping of the input data to a high-dimensional feature space.
Abstract: Nonlinear information processing algorithms can be designed by means of linear techniques in implicit feature spaces induced by kernel functions. Kernel methods are algorithms that, by replacing the inner product with an appropriate positive definite function, implicitly perform a nonlinear mapping of the input data to a high-dimensional feature space.