
Showing papers by "University of Macau" published in 2015


Journal ArticleDOI
TL;DR: Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)].
Abstract: In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.

1,034 citations
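The abstract above describes the first PCANet stage as PCA applied to mean-removed image patches, with the leading eigenvectors reshaped into convolution filters. Below is a minimal sketch of that single stage, assuming grayscale images and an illustrative patch size k; it is not the authors' released implementation.

```python
import numpy as np

def pca_filters(images, k=7, num_filters=8):
    """Learn one PCANet-style stage: PCA filters from mean-removed k x k patches.

    images: array of shape (N, H, W), grayscale floats.
    Returns filters of shape (num_filters, k, k)."""
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())      # remove the patch mean, as in PCANet
    X = np.stack(patches, axis=1)                 # shape (k*k, num_patches)
    cov = X @ X.T / X.shape[1]                    # leading eigenvectors give the filter bank
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:num_filters]]
    return top.T.reshape(num_filters, k, k)

# Usage (toy data): filters = pca_filters(np.random.rand(10, 32, 32))
```

A second stage would repeat the same procedure on the filter responses of the first stage, followed by binary hashing and blockwise histograms for the final feature.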


Journal ArticleDOI
TL;DR: A new two-dimensional Sine Logistic modulation map (2D-SLMM), derived from the Logistic and Sine maps, is introduced; it has a wider chaotic range, better ergodicity, a hyperchaotic property, and relatively low implementation cost.

585 citations


Journal ArticleDOI
TL;DR: A new approach is advocated that, for selected groups of taxa, combines the best use of single-locus barcodes and super-barcodes for efficient plant identification, and the feasibility of using the chloroplast genome as a super-barcode is discussed.
Abstract: DNA barcoding is currently a widely used and effective tool that enables rapid and accurate identification of plant species; however, none of the available loci work across all species. Because single-locus DNA barcodes lack adequate variations in closely related taxa, recent barcoding studies have placed high emphasis on the use of whole-chloroplast genome sequences, which are now more readily available as a consequence of improving sequencing technologies. While chloroplast genome sequencing can already deliver a reliable barcode for accurate plant identification, it is not yet resource-effective and does not yet offer the speed of analysis provided by single-locus barcodes to unspecialized laboratory facilities. Here, we review the development of candidate barcodes and discuss the feasibility of using the chloroplast genome as a super-barcode. We advocate a new approach for DNA barcoding that, for selected groups of taxa, combines the best use of single-locus barcodes and super-barcodes for efficient plant identification. Specific barcodes might enhance our ability to distinguish closely related plants at the species and population levels.

536 citations


Journal ArticleDOI
Jianbo Xiao
TL;DR: With in vivo (oral) treatment, flavonoid glycosides showed similar or even higher antidiabetes, anti-inflammatory, antidegranulating, antistress, and antiallergic activity compared with their flavonoid aglycones.
Abstract: The dietary flavonoids, especially their glycosides, are the most vital phytochemicals in diets and are of great general interest due to their diverse bioactivity. The natural flavonoids almost all exist as their O-glycoside or C-glycoside forms in plants. In this review, we summarized the existing knowledge on the different biological benefits and pharmacokinetic behaviors between flavonoid aglycones and their glycosides. Due to various conclusions from different flavonoid types and health/disease conditions, it is very difficult to draw general or universally applicable comments regarding the impact of glycosylation on the biological benefits of flavonoids. It seems as though O-glycosylation generally reduces the bioactivity of these compounds - this has been observed for diverse properties including antioxidant activity, antidiabetes activity, anti-inflammation activity, antibacterial, antifungal activity, antitumor activity, anticoagulant activity, antiplatelet activity, antidegranulating activity, antitrypanosomal activity, influenza virus neuraminidase inhibition, aldehyde oxidase inhibition, immunomodulatory, and antitubercular activity. However, O-glycosylation can enhance certain types of biological benefits including anti-HIV activity, tyrosinase inhibition, antirotavirus activity, antistress activity, antiobesity activity, anticholinesterase potential, antiadipogenic activity, and antiallergic activity. However, there is a lack of data for most flavonoids, and their structures vary widely. There is also a profound lack of data on the impact of C-glycosylation on flavonoid biological benefits, although it has been demonstrated that in at least some cases C-glycosylation has positive effects on properties that may be useful in human healthcare such as antioxidant and antidiabetes activity. Furthermore, there is a lack of in vivo data that would make it possible to make broad generalizations concerning the influence of glycosylation on the benefits of flavonoids for human health. It is possible that the effects of glycosylation on flavonoid bioactivity in vitro may differ from that seen in vivo. With in vivo (oral) treatment, flavonoid glycosides showed similar or even higher antidiabetes, anti-inflammatory, antidegranulating, antistress, and antiallergic activity than their flavonoid aglycones. Flavonoid glycosides keep higher plasma levels and have a longer mean residence time than those of aglycones. We should pay more attention to in vivo benefits of flavonoid glycosides, especially C-glycosides.

394 citations


Journal ArticleDOI
01 Sep 2015
TL;DR: A comprehensive survey of the state-of-the-art distributed evolutionary algorithms and models, which have been classified into two groups according to their task division mechanism, and insights into the models are presented and discussed.
Abstract: Highlights: Provide an updated and systematic review of distributed evolutionary algorithms. Classify the models into population- and dimension-distributed groups semantically. Analyze the parallelism, search behaviors, communication costs, scalability, etc. Highlight recent research hotspots in this field. Discuss challenges and potential research directions in this field. The increasing complexity of real-world optimization problems raises new challenges to evolutionary computation. Responding to these challenges, distributed evolutionary computation has received considerable attention over the past decade. This article provides a comprehensive survey of the state-of-the-art distributed evolutionary algorithms and models, which have been classified into two groups according to their task division mechanism. Population-distributed models are presented with master-slave, island, cellular, hierarchical, and pool architectures, which parallelize an evolution task at population, individual, or operation levels. Dimension-distributed models include coevolution and multi-agent models, which focus on dimension reduction. Insights into the models, such as synchronization, homogeneity, communication, topology, speedup, advantages and disadvantages are also presented and discussed. The study of these models helps guide future development of different and/or improved algorithms. Also highlighted are recent hotspots in this area, including the cloud and MapReduce-based implementations, GPU and CUDA-based implementations, distributed evolutionary multiobjective optimization, and real-world applications. Further, a number of future research directions have been discussed, with a conclusion that the development of distributed evolutionary computation will continue to flourish.

332 citations
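As a concrete illustration of one population-distributed architecture surveyed above, the sketch below implements a toy island model: several subpopulations evolve independently and periodically migrate their best individuals along a ring. The function and parameter names are illustrative assumptions, not taken from the article.

```python
import numpy as np

def island_ga(fitness, dim=10, islands=4, pop=20, gens=100, migrate_every=10, rng=None):
    """Toy island-model GA minimizing `fitness` over [-5, 5]^dim."""
    rng = rng or np.random.default_rng(0)
    pops = [rng.uniform(-5, 5, size=(pop, dim)) for _ in range(islands)]
    for g in range(gens):
        for idx in range(islands):
            P = pops[idx]
            f = np.apply_along_axis(fitness, 1, P)
            parents = P[np.argsort(f)[:pop // 2]]                    # truncation selection
            children = parents + rng.normal(0, 0.1, parents.shape)   # Gaussian mutation
            pops[idx] = np.vstack([parents, children])
        if g % migrate_every == 0:                                   # ring migration of best individuals
            bests = [p[np.argmin(np.apply_along_axis(fitness, 1, p))] for p in pops]
            for idx in range(islands):
                worst = np.argmax(np.apply_along_axis(fitness, 1, pops[idx]))
                pops[idx][worst] = bests[(idx - 1) % islands]
    all_pop = np.vstack(pops)
    return all_pop[np.argmin(np.apply_along_axis(fitness, 1, all_pop))]

# Example: best = island_ga(lambda x: np.sum(x**2))
```

In a real distributed deployment, each island would run on its own worker and migration would be a message exchange; the serial loop above only shows the logical structure.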


Journal ArticleDOI
TL;DR: The general architecture of locally connected ELM is studied, showing that: 1) ELM theories are naturally valid for local connections, thus introducing local receptive fields to the input layer; 2) each hidden node in ELM can be a combination of several hidden nodes (a subnetwork), which is also consistent with ELM theory.
Abstract: Extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden layer feedforward neural networks (SLFNs), provides efficient unified learning solutions for the applications of feature learning, clustering, regression and classification. Different from the common understanding and tenet that hidden neurons of neural networks need to be iteratively adjusted during training stage, ELM theories show that hidden neurons are important but need not be iteratively tuned. In fact, all the parameters of hidden nodes can be independent of training samples and randomly generated according to any continuous probability distribution. And the obtained ELM networks satisfy universal approximation and classification capability. The fully connected ELM architecture has been extensively studied. However, ELM with local connections has not attracted much research attention yet. This paper studies the general architecture of locally connected ELM, showing that: 1) ELM theories are naturally valid for local connections, thus introducing local receptive fields to the input layer; 2) each hidden node in ELM can be a combination of several hidden nodes (a subnetwork), which is also consistent with ELM theories. ELM theories may shed a light on the research of different local receptive fields including true biological receptive fields of which the exact shapes and formula may be unknown to human beings. As a specific example of such general architectures, random convolutional nodes and a pooling structure are implemented in this paper. Experimental results on the NORB dataset, a benchmark for object recognition, show that compared with conventional deep learning solutions, the proposed local receptive fields based ELM (ELM-LRF) reduces the error rate from 6.5% to 2.7% and increases the learning speed up to 200 times.

321 citations
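For readers unfamiliar with the ELM building block referenced above, here is a minimal fully connected ELM sketch: random, untrained hidden weights and a closed-form ridge solution for the output weights. It illustrates the basic ELM idea only, not the local-receptive-field variant (ELM-LRF) proposed in the paper; layer size and the regularization constant are illustrative.

```python
import numpy as np

def elm_train(X, T, hidden=200, C=1.0, rng=None):
    """Basic ELM: random hidden layer, ridge-regularized output weights.

    X: (N, d) inputs; T: (N, m) one-hot targets. Returns (W, b, beta)."""
    rng = rng or np.random.default_rng(0)
    W = rng.normal(size=(X.shape[1], hidden))      # random input weights (never tuned)
    b = rng.normal(size=hidden)                    # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoid hidden activations
    # beta = (I/C + H^T H)^{-1} H^T T  (regularized least squares)
    beta = np.linalg.solve(np.eye(hidden) / C + H.T @ H, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta                                # class = argmax over columns
```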


Journal ArticleDOI
TL;DR: Recent advances in the versatile applications of gelatin within biomedical context are reviewed and an attempt is made to draw upon its advantages and potential challenges.
Abstract: The biomacromolecule, gelatin, has increasingly been used in biomedicine—beyond its traditional use in food and cosmetics. The appealing advantages of gelatin, such as its cell-adhesive structure, low cost, off-the-shelf availability, high biocompatibility, biodegradability and low immunogenicity, among others, have made it a desirable candidate for the development of biomaterials for tissue engineering and drug delivery. Gelatin can be formulated in the form of nanoparticles, employed as size-controllable porogen, adopted as surface coating agent and mixed with synthetic or natural biopolymers forming composite scaffolds. In this article, we review recent advances in the versatile applications of gelatin within biomedical context and attempt to draw upon its advantages and potential challenges.

282 citations


Journal ArticleDOI
TL;DR: It is argued that κ² is not an appropriate effect size measure for mediation models because it lacks the property of rank preservation and can lead to paradoxical results in multiple mediation models.
Abstract: Mediation analysis is important for research in psychology and other social and behavioral sciences. Great progress has been made in testing mediation effects and in constructing their confidence intervals. Mediation effect sizes have also been considered. Preacher and Kelley (2011) proposed and recommended κ² as an effect size measure for a mediation effect. In this article, we argue that κ² is not an appropriate effect size measure for mediation models, because of its lack of the property of rank preservation (e.g., the magnitude of κ² may decrease when the mediation effect that κ² represents increases). Furthermore, κ² can lead to paradoxical results in multiple mediation models. We show that the problem of κ² is due to (a) the improper calculation of the maximum possible value of the indirect effect, and (b) mathematically, the maximum possible indirect effect is infinity, implying that the definition of κ² is mathematically incorrect. At this time, it appears that the traditional mediation effect size measure PM (the ratio of the indirect effect to the total effect), together with some other statistical information, should be preferred for basic mediation models. But for inconsistent mediation models where the indirect effect and the direct effect have opposite signs, the situation is less clear. Other considerations and suggestions for future research are also discussed.

273 citations
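For readers unfamiliar with the measure the authors prefer, PM is simply the indirect effect expressed as a proportion of the total effect. In the standard single-mediator model with path a (X to M), path b (M to Y controlling for X), and direct effect c', it can be written as follows (the numerical example is illustrative):

```latex
% Single-mediator model: X -> M -> Y, with direct path c'
% Indirect effect: ab; total effect: ab + c'
P_M = \frac{ab}{ab + c'}
% Example: a = 0.5,\; b = 0.4,\; c' = 0.3 \;\Rightarrow\; P_M = \frac{0.2}{0.5} = 0.4,
% i.e., 40\% of the total effect is transmitted through the mediator.
```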


Journal ArticleDOI
TL;DR: This paper introduces a general chaotic framework called the cascade chaotic system (CCS), along with a pseudo-random number generator (PRNG) and a data encryption system using a chaotic map generated by CCS.
Abstract: Chaotic maps are widely used in different applications. Motivated by the cascade structure in electronic circuits, this paper introduces a general chaotic framework called the cascade chaotic system (CCS). Using two 1-D chaotic maps as seed maps, CCS is able to generate a huge number of new chaotic maps. Examples and evaluations show the CCS’s robustness. Compared with corresponding seed maps, newly generated chaotic maps are more unpredictable and have better chaotic performance, more parameters, and complex chaotic properties. To investigate applications of CCS, we introduce a pseudo-random number generator (PRNG) and a data encryption system using a chaotic map generated by CCS. Simulation and analysis demonstrate that the proposed PRNG has high quality of randomness and that the data encryption system is able to protect different types of data with a high-security level.

263 citations
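A minimal sketch of the cascade idea described above, assuming the Logistic and Sine maps as the two 1-D seed maps (the paper's framework allows any pair of 1-D chaotic maps); the composed map is iterated as x_{n+1} = f(g(x_n)).

```python
import numpy as np

def logistic(x, r=4.0):
    return r * x * (1.0 - x)          # Logistic map on [0, 1]

def sine(x, a=1.0):
    return a * np.sin(np.pi * x)      # Sine map on [0, 1]

def cascade(x0, n, f=logistic, g=sine):
    """Iterate the cascade map x_{k+1} = f(g(x_k)) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(g(xs[-1])))
    return np.array(xs)

# A crude PRNG-style bit stream could threshold the orbit, e.g. bits = cascade(0.3, 1000) > 0.5
```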


Journal ArticleDOI
TL;DR: In this article, a joint multi-task learning algorithm is proposed to better predict attributes in images using deep convolutional neural networks (CNN), where each CNN will predict one binary attribute.
Abstract: This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model’s parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.

255 citations
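The parameter decomposition mentioned above can be stated compactly: stacking the per-attribute classifier weights into a matrix W, the model factorizes it into a shared latent task matrix and per-attribute combination coefficients. A generic statement of this kind of factorization follows; the notation and regularizers are illustrative, not necessarily the paper's exact objective.

```latex
% W = [w_1, \dots, w_T]: one weight vector per attribute classifier
W = L\,S, \qquad L \in \mathbb{R}^{d \times k},\; S \in \mathbb{R}^{k \times T}
% L: latent task (basis) matrix shared across attributes
% S: combination matrix; column s_t mixes the latent tasks into attribute t's classifier
% A typical multi-task objective combines a per-attribute loss with regularizers on L and S, e.g.
\min_{L,S}\; \sum_{t=1}^{T} \ell\big(y_t,\, X_t (L s_t)\big) \;+\; \lambda_1 \lVert L \rVert_F^2 \;+\; \lambda_2 \lVert S \rVert_1
```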


Journal ArticleDOI
TL;DR: The proposed forgery region extraction algorithm replaces the feature points with small superpixels as feature blocks and then merges neighboring blocks that have similar local color features into the feature blocks, generating merged regions from which the detected forgery regions are obtained.
Abstract: A novel copy–move forgery detection scheme using adaptive oversegmentation and feature point matching is proposed in this paper. The proposed scheme integrates both block-based and keypoint-based forgery detection methods. First, the proposed adaptive oversegmentation algorithm segments the host image into nonoverlapping and irregular blocks adaptively. Then, the feature points are extracted from each block as block features, and the block features are matched with one another to locate the labeled feature points; this procedure can approximately indicate the suspected forgery regions. To detect the forgery regions more accurately, we propose the forgery region extraction algorithm, which replaces the feature points with small superpixels as feature blocks and then merges the neighboring blocks that have similar local color features into the feature blocks to generate the merged regions. Finally, it applies the morphological operation to the merged regions to generate the detected forgery regions. The experimental results indicate that the proposed copy–move forgery detection scheme can achieve much better detection results even under various challenging conditions compared with the existing state-of-the-art copy–move forgery detection methods.
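The full scheme above combines adaptive oversegmentation with keypoint matching; the hedged sketch below shows only the keypoint-matching core (SIFT descriptors matched against the same image, with the trivial self-match discarded), using OpenCV. It is an illustrative simplification, not the authors' algorithm, and the ratio and distance thresholds are assumptions.

```python
import cv2
import numpy as np

def copy_move_candidates(gray, ratio=0.6, min_dist=10):
    """Return pairs of keypoint coordinates whose descriptors also match elsewhere in the image."""
    sift = cv2.SIFT_create()
    kps, des = sift.detectAndCompute(gray, None)
    if des is None:
        return []
    matches = cv2.BFMatcher().knnMatch(des, des, k=3)    # k=3: self-match plus two neighbors
    pairs = []
    for m in matches:
        if len(m) < 3:
            continue
        _, best, second = m                              # m[0] is (normally) the keypoint matched to itself
        if best.distance < ratio * second.distance:      # Lowe-style ratio test
            p1 = np.array(kps[best.queryIdx].pt)
            p2 = np.array(kps[best.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_dist:       # ignore matches between nearly identical locations
                pairs.append((tuple(p1), tuple(p2)))
    return pairs
```

A block-based scheme like the one in the paper would additionally restrict matching to segmented blocks and grow the matched keypoints into superpixel regions before the final morphological cleanup.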

Journal ArticleDOI
TL;DR: In this paper, a research model that incorporates hotel website quality, eTrust, and online booking intentions was put forward, and the software AMOS 20.0 was adopted to analyze the proposed inter-variable relationships.

Journal ArticleDOI
TL;DR: In this article, a moderated mediation model was proposed to predict abusive supervision behavior through emotional exhaustion, with leader-member exchange (LMX) acting as the contextual condition, and they found that abused subordinates resort to remain silent in the workplace due to their feelings of emotional exhaustion.
Abstract: Abusive supervision is a dysfunctional leadership behavior that adversely affects its targets and the organization as a whole. Drawing on conservation of resources (COR) theory, the present research expands our knowledge on its destructive impact. Specifically, we propose a moderated mediation model wherein abusive supervision predicts subordinate's silence behavior through emotional exhaustion, with leader–member exchange (LMX) acting as the contextual condition. Two-wave data collected from 152 employees in the service industry in Macau supported our hypothesized model. We found that abused subordinates resort to remain silent in the workplace due to their feelings of emotional exhaustion. Further, the presence of high LMX makes the adverse impact of abusive supervision even worse. Theoretical and practical implications are discussed. We also offer several promising directions for future research.

Journal ArticleDOI
TL;DR: This meta-analysis confirms that 40 miRNAs are significantly dysregulated in type 2 diabetes, and miR-199a-3p and miR-223 are potential tissue biomarkers of type 2 diabetes.
Abstract: Aims/hypothesis The aim was to identify potential microRNA (miRNA) biomarkers of type 2 diabetes.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed SSRLDE significantly outperforms the state-of-the-art DR methods for HSI classification.
Abstract: Dimension reduction (DR) is a necessary and helpful preprocessing for hyperspectral image (HSI) classification. In this paper, we propose a spatial and spectral regularized local discriminant embedding (SSRLDE) method for DR of hyperspectral data. In SSRLDE, hyperspectral pixels are first smoothed by the multiscale spatial weighted mean filtering. Then, the local similarity information is described by integrating a spectral-domain regularized local preserving scatter matrix and a spatial-domain local pixel neighborhood preserving scatter matrix. Finally, the optimal discriminative projection is learned by minimizing a local spatial-spectral scatter and maximizing a modified total data scatter. Experimental results on benchmark hyperspectral data sets show that the proposed SSRLDE significantly outperforms the state-of-the-art DR methods for HSI classification.
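SSRLDE itself combines spectral- and spatial-domain scatter matrices, but its computational core, shared with other local discriminant embedding methods, is a generalized eigenproblem: maximize one scatter while minimizing another. A minimal sketch with placeholder scatter matrices is below, assuming both matrices are symmetric and the minimized one is positive definite; it is a generic illustration, not the SSRLDE formulation itself.

```python
import numpy as np
from scipy.linalg import eigh

def discriminant_projection(S_max, S_min, n_dims):
    """Projection maximizing w^T S_max w relative to w^T S_min w (generalized eigenproblem).

    S_max, S_min: (d, d) symmetric scatter matrices, S_min assumed positive definite.
    Returns a (d, n_dims) matrix whose columns are the top generalized eigenvectors."""
    eigvals, eigvecs = eigh(S_max, S_min)           # solves S_max v = lambda S_min v
    order = np.argsort(eigvals)[::-1][:n_dims]      # keep the largest generalized eigenvalues
    return eigvecs[:, order]

# Usage: W = discriminant_projection(S_total, S_local, n_dims=20); Y = X @ W
```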

Journal ArticleDOI
Jingjing Liu, Xiudan Zhan, Jian-Bo Wan, Yitao Wang, Chunming Wang
TL;DR: This review of the latest literature on the development of CRG-based pharmaceutical vehicles, and of the prospects of using CRG for broader biomedical applications, focuses on how current strategies exploit the unique gelling mechanisms, strong water absorption, and abundant functional groups of the three major CRG varieties.

Journal ArticleDOI
TL;DR: Results demonstrate that the stacked supervised autoencoders-based face representation significantly outperforms the commonly used image representations in single sample per person face recognition, and it achieves higher recognition accuracy compared with other deep learning models, including the deep Lambertian network.
Abstract: This paper targets learning robust image representation for single training sample per person face recognition. Motivated by the success of deep learning in image representation, we propose a supervised autoencoder, which is a new type of building block for deep architectures. Two features distinguish our supervised autoencoder from a standard autoencoder. First, we enforce faces with variations to be mapped onto the canonical face of the person, for example, the frontal face with neutral expression and normal illumination; second, we enforce features corresponding to the same person to be similar. As a result, our supervised autoencoder extracts features that are robust to variations in illumination, expression, occlusion, and pose, and facilitates face recognition. We stack such supervised autoencoders to get the deep architecture and use it for extracting features in image representation. Experimental results on the AR, Extended Yale B, CMU-PIE, and Multi-PIE data sets demonstrate that, coupled with the commonly used sparse representation-based classification, our stacked supervised autoencoder-based face representation significantly outperforms the commonly used image representations in single sample per person face recognition, and it achieves higher recognition accuracy than other deep learning models, including the deep Lambertian network, in spite of much less training data and without any domain information. Moreover, the supervised autoencoder can also be used for face verification, which further demonstrates its effectiveness for face representation.
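A hedged PyTorch sketch of the two constraints described above: each variant face is mapped toward its person's canonical face, and hidden features of the same person are pulled together. The layer sizes, loss weight, and helper names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedAutoencoder(nn.Module):
    def __init__(self, dim=1024, hidden=256):
        super().__init__()
        self.enc = nn.Linear(dim, hidden)
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return self.dec(h), h

def sae_loss(model, x_variant, x_canonical, person_ids, lam=0.1):
    """Reconstruction toward the canonical face + similarity of same-person features."""
    recon, h = model(x_variant)
    loss = F.mse_loss(recon, x_canonical)            # constraint 1: map variants to the canonical face
    for pid in person_ids.unique():                  # constraint 2: pull same-person features together
        hp = h[person_ids == pid]
        if hp.shape[0] > 1:
            loss = loss + lam * ((hp - hp.mean(0)) ** 2).mean()
    return loss
```

Stacking would train such blocks layer by layer and feed the learned hidden codes of one block as the input of the next.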

Journal ArticleDOI
TL;DR: For a high-order system, attention is focused on the construction of a reduced-order model that not only approximates the original system well with a Hankel-norm performance but also translates it into a lower-dimensional fuzzy switched system.
Abstract: In this paper, the model approximation problem is investigated for a Takagi–Sugeno fuzzy switched system with stochastic disturbance. For the high-order system under consideration, our attention is focused on the construction of a reduced-order model, which not only approximates the original system well with a Hankel-norm performance but also translates it into a lower-dimensional fuzzy switched system. By using the average dwell time approach and the piecewise Lyapunov function technique, a sufficient condition is first proposed to guarantee the mean-square exponential stability with a Hankel-norm error performance for the error system. The model approximation is then converted into a convex optimization problem by using a linearization procedure. Finally, simulations are provided to illustrate the effectiveness of the proposed theory.

Journal ArticleDOI
TL;DR: It is shown for the first time that HCC-derived exosomes could mobilize normal hepatocytes, which may have implications for facilitating the protrusive activity of HCC cells through liver parenchyma during the process of metastasis.
Abstract: Exosomes are increasingly recognized as important mediators of cell-cell communication in cancer progression through the horizontal transfer of RNAs and proteins to neighboring or distant cells. Hepatocellular carcinoma (HCC) is a highly malignant cancer, whose metastasis is largely influenced by the tumor microenvironment. The possible role of exosomes in the interactions between HCC tumor cell and its surrounding hepatic milieu are however largely unknown. In this study, we comprehensively characterized the exosomal RNA and proteome contents derived from three HCC cell lines (HKCI-C3, HKCI-8 and MHCC97L) and an immortalized hepatocyte line (MIHA) using Ion Torrent sequencing and mass spectrometry, respectively. RNA deep sequencing and proteomic analysis revealed exosomes derived from metastatic HCC cell lines carried a large number of protumorigenic RNAs and proteins, such as MET protooncogene, S100 family members and the caveolins. Of interest, we found that exosomes from motile HCC cell lines could significantly enhance the migratory and invasive abilities of non-motile MIHA cell. We further demonstrated that uptake of these shuttled molecules could trigger PI3K/AKT and MAPK signaling pathways in MIHA with increased secretion of active MMP-2 and MMP-9. Our study showed for the first time that HCC-derived exosomes could mobilize normal hepatocyte, which may have implication in facilitating the protrusive activity of HCC cells through liver parenchyma during the process of metastasis.

Proceedings ArticleDOI
24 Aug 2015
TL;DR: This work proposes a Lyapunov-based VCG auction policy for the on-line sensor selection, which converges asymptotically to the optimal off-line benchmark performance, even with no future information and under asymmetry of current information.
Abstract: Providing an adequate long-term user participation incentive is important for a participatory sensing system to maintain a sufficient number of active users (sensors), so as to collect enough data samples and support a desired level of service quality. In this work, we consider the sensor selection problem in a general time-dependent and location-aware participatory sensing system, taking the long-term user participation incentive into explicit consideration. We study the problem systematically under different information scenarios, regarding both future information and current information (realization). In particular, we propose a Lyapunov-based VCG auction policy for the on-line sensor selection, which converges asymptotically to the optimal off-line benchmark performance, even with no future information and under asymmetry of current information. Extensive numerical results show that our proposed policy outperforms the state-of-the-art policies in the literature, in terms of both user participation (e.g., reducing the user dropping probability by 25% ∼ 90%) and social performance (e.g., increasing the social welfare by 15% ∼ 80%).

Journal ArticleDOI
TL;DR: The results demonstrate the feasibility and effectiveness of measurement-based care for outpatients with moderate to severe major depression, suggesting that this approach can be incorporated in the clinical care of patients with major depression.
Abstract: Objective:The authors compared measurement-based care with standard treatment in major depression.Methods:Outpatients with moderate to severe major depression were consecutively randomized to 24 weeks of either measurement-based care (guideline- and rating scale-based decisions; N=61), or standard treatment (clinicians’ choice decisions; N=59). Pharmacotherapy was restricted to paroxetine (20–60 mg/day) or mirtazapine (15–45 mg/day) in both groups. Depressive symptoms were measured with the Hamilton Depression Rating Scale (HAM-D) and the Quick Inventory of Depressive Symptomatology–Self-Report (QIDS-SR). Time to response (a decrease of at least 50% in HAM-D score) and remission (a HAM-D score of 7 or less) were the primary endpoints. Outcomes were evaluated by raters blind to study protocol and treatment.Results:Significantly more patients in the measurement-based care group than in the standard treatment group achieved response (86.9% compared with 62.7%) and remission (73.8% compared with 28.8%). Simil...

Journal ArticleDOI
TL;DR: The fuzzy restricted Boltzmann machine (FRBM), in which the parameters governing the model are replaced by fuzzy numbers, and its learning algorithm are proposed in this paper; experiments show that the representation capability of the FRBM is significantly better than that of the traditional RBM.
Abstract: In recent years, deep learning has carved out a research wave in machine learning. With outstanding performance, more and more applications of deep learning in pattern recognition, image recognition, speech recognition, and video processing have been developed. The restricted Boltzmann machine (RBM) plays an important role in current deep learning techniques, as most existing deep networks are based on or related to it. For a regular RBM, the relationships between visible units and hidden units are restricted to be constants. This restriction will certainly downgrade the representation capability of the RBM. To avoid this flaw and enhance deep learning capability, the fuzzy restricted Boltzmann machine (FRBM) and its learning algorithm are proposed in this paper, in which the parameters governing the model are replaced by fuzzy numbers. This way, the original RBM becomes a special case of the FRBM, when there is no fuzziness in the FRBM model. In the process of learning the FRBM, the fuzzy free energy function is defuzzified before the probability is defined. The experimental results based on the bars-and-stripes benchmark inpainting and MNIST handwritten digits classification problems show that the representation capability of the FRBM model is significantly better than that of the traditional RBM. Additionally, the FRBM also reveals better robustness properties compared with the RBM when the training data are contaminated by noise.
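For context, the quantity the FRBM defuzzifies is the standard RBM free energy. The sketch below gives that standard formula in numpy, plus an interval-valued (lower/upper bound) weight representation defuzzified by its midpoint as one simple way fuzzy parameters could be handled; the interval treatment is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def rbm_free_energy(v, W, b_vis, b_hid):
    """Standard RBM free energy: F(v) = -v.b_vis - sum_j log(1 + exp(b_hid_j + v.W[:, j]))."""
    pre = b_hid + v @ W                      # v: (d_vis,), W: (d_vis, d_hid)
    return -v @ b_vis - np.sum(np.logaddexp(0.0, pre))

def defuzzified_free_energy(v, W_lo, W_hi, b_vis, b_hid):
    """Illustrative centroid defuzzification of an interval-valued weight matrix."""
    W_center = 0.5 * (W_lo + W_hi)           # assumption: symmetric interval/triangular fuzzy numbers
    return rbm_free_energy(v, W_center, b_vis, b_hid)
```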

Journal ArticleDOI
TL;DR: Under the assumption that the time-varying delays exist in the system output, only one NN is employed to compensate for all unknown nonlinear terms depending on the delayed output, and the NN parameters to be estimated are greatly decreased and the online learning time is dramatically decreased.
Abstract: This paper presents an adaptive output-feedback neural network (NN) control scheme for a class of stochastic nonlinear time-varying delay systems with unknown control directions. To make the controller design feasible, the unknown control coefficients are grouped together and the original system is transformed into a new system using a linear state transformation technique. Then, the Nussbaum function technique is incorporated into the backstepping recursive design technique to solve the problem of unknown control directions. Furthermore, under the assumption that the time-varying delays exist in the system output, only one NN is employed to compensate for all unknown nonlinear terms depending on the delayed output. Moreover, by estimating the maximum of NN parameters instead of the parameters themselves, the NN parameters to be estimated are greatly decreased and the online learning time is also dramatically decreased. It is shown that all the signals of the closed-loop system are bounded in probability. The effectiveness of the proposed scheme is demonstrated by the simulation results.

Journal ArticleDOI
TL;DR: It is proven that the tracking errors, the adaptation laws, and the control inputs are uniformly bounded using Lyapunov analysis method and the simulation examples are employed to illustrate the effectiveness of the proposed algorithm.
Abstract: Based on the neural network (NN) approximator, an online reinforcement learning algorithm is proposed for a class of affine multiple input and multiple output (MIMO) nonlinear discrete-time systems with unknown functions and disturbances. In the design procedure, two networks are provided where one is an action network to generate an optimal control signal and the other is a critic network to approximate the cost function. An optimal control signal and adaptation laws can be generated based on two NNs. In the previous approaches, the weights of critic and action networks are updated based on the gradient descent rule and the estimations of optimal weight vectors are directly adjusted in the design. Consequently, compared with the existing results, the main contributions of this paper are: 1) only two parameters are needed to be adjusted, and thus the number of the adaptation laws is smaller than the previous results and 2) the updating parameters do not depend on the number of the subsystems for MIMO systems and the tuning rules are replaced by adjusting the norms on optimal weight vectors in both action and critic networks. It is proven that the tracking errors, the adaptation laws, and the control inputs are uniformly bounded using Lyapunov analysis method. The simulation examples are employed to illustrate the effectiveness of the proposed algorithm.

Journal ArticleDOI
TL;DR: In this paper, the Deift-Zhou method is used to obtain, in the solitonless sector, the leading-order asymptotics of the solution to the Cauchy problem of the Fokas-Lenells equation.

Journal ArticleDOI
TL;DR: A sophisticated deep-learning technique for short-term and long-term wind speed forecasting, the predictive deep Boltzmann machine (PDBM), is presented together with its learning algorithm; the prediction accuracy of the PDBM model outperforms existing methods by more than 10%.
Abstract: It is important to forecast the wind speed for managing operations in wind power plants. However, wind speed prediction is extremely complex and difficult due to the volatility and deviation of the wind. As existing forecasting methods directly model the raw wind speed data, it is difficult for them to provide higher inference accuracy. In contrast, this paper presents a sophisticated deep-learning technique for short-term and long-term wind speed forecasting, i.e., the predictive deep Boltzmann machine (PDBM) and a corresponding learning algorithm. The proposed deep model forecasts wind speed by analyzing the higher-level features abstracted from lower-level features of the wind speed data. These automatically learnt features are very informative and appropriate for the prediction. The proposed PDBM is a deep stochastic model that can represent the wind speed very well, and is inspired by two aspects: 1) the stochastic model is suitable to capture the probabilistic characteristics of wind speed; 2) recent developments in neural networks with deep architectures show that deep generative models have competitive capability to approximate nonlinear and nonsmooth functions. The proposed PDBM model is evaluated through both hour-ahead and day-ahead prediction experiments based on real wind speed datasets. The prediction accuracy of the PDBM model outperforms existing methods by more than 10%.

Journal ArticleDOI
TL;DR: Two spatial-spectral composite kernel ELM classification methods are proposed that outperform the general ELM, SVM, and SVM with CK methods on the hyperspectral images.
Abstract: Due to its simple, fast, and good generalization ability, extreme learning machine (ELM) has recently drawn increasing attention in the pattern recognition and machine learning fields. To investigate the performance of ELM on the hyperspectral images (HSIs), this paper proposes two spatial–spectral composite kernel (CK) ELM classification methods. In the proposed CK framework, the single spatial or spectral kernel consists of activation–function-based kernel and general Gaussian kernel, respectively. The proposed methods inherit the advantages of ELM and have an analytic solution to directly implement the multiclass classification. Experimental results on three benchmark hyperspectral datasets demonstrate that the proposed ELM with CK methods outperform the general ELM, SVM, and SVM with CK methods.
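A hedged sketch of the composite-kernel idea for a kernel-based ELM: a weighted sum of a spectral kernel and a spatial kernel (the latter computed on, e.g., neighborhood-mean features), followed by the standard kernel-ELM closed-form solution. The weight mu, the Gaussian kernels, and the regularization constant are illustrative choices, not the paper's tuned settings.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def ck_elm_train(X_spec, X_spat, T, mu=0.5, C=100.0):
    """Composite-kernel ELM: K = mu*K_spectral + (1-mu)*K_spatial; alpha = (I/C + K)^-1 T."""
    K = mu * rbf_kernel(X_spec, X_spec) + (1 - mu) * rbf_kernel(X_spat, X_spat)
    return np.linalg.solve(np.eye(K.shape[0]) / C + K, T)

def ck_elm_predict(Xs_spec, Xs_spat, Xtr_spec, Xtr_spat, alpha, mu=0.5):
    K_test = mu * rbf_kernel(Xs_spec, Xtr_spec) + (1 - mu) * rbf_kernel(Xs_spat, Xtr_spat)
    return K_test @ alpha     # class = argmax over columns for one-hot targets T
```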

Journal ArticleDOI
TL;DR: This study confirms some well-known functions of FSH and LH in fish while also providing evidence for novel functions, which would be difficult to reveal using traditional biochemical and physiological approaches.
Abstract: Vertebrate reproduction is controlled by two gonadotropins (FSH and LH) from the pituitary. Despite numerous studies on FSH and LH in fish species, their functions in reproduction still remain poorly defined. This is partly due to the lack of powerful genetic approaches for functional studies in adult fish. This situation is now changing with the emergence of genome-editing technologies, especially Transcription Activator-Like Effector Nuclease (TALEN) and Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR). In this study, we deleted the hormone-specific β-genes of both FSH and LH in the zebrafish using TALEN. This was followed by a phenotype analysis for key reproductive events, including gonadal differentiation, puberty onset, gametogenesis, final maturation, and fertility. FSH-deficient zebrafish (fshb−/−) were surprisingly fertile in both sexes; however, the development of both the ovary and testis was significantly delayed. In contrast, LH-deficient zebrafish (lhb−/−) showed normal gon...

Journal ArticleDOI
TL;DR: This work proposes a novel algorithm that outperformed existing methods on accelerometer-based gait recognition, even when the step cycles were perfectly detected for those methods.
Abstract: Gait, as a promising biometric for recognizing human identities, can be nonintrusively captured as a series of acceleration signals using wearable or portable smart devices. It can be used for access control. Most existing methods on accelerometer-based gait recognition require explicit step-cycle detection, suffering from cycle detection failures and intercycle phase misalignment. We propose a novel algorithm that avoids both the above two problems. It makes use of a type of salient points termed signature points (SPs), and has three components: 1) a multiscale SP extraction method, including the localization and SP descriptors; 2) a sparse representation scheme for encoding newly emerged SPs with known ones in terms of their descriptors, where the phase propinquity of the SPs in a cluster is leveraged to ensure the physical meaningfulness of the codes; and 3) a classifier for the sparse-code collections associated with the SPs of a series. Experimental results on our publicly available dataset of 175 subjects showed that our algorithm outperformed existing methods, even if the step cycles were perfectly detected for them. When the accelerometers at five different body locations were used together, it achieved the rank-1 accuracy of 95.8% for identification, and the equal error rate of 2.2% for verification.

Journal ArticleDOI
TL;DR: A novel luminescent G-quadruplex-selective iridium(iii) complex was employed in a G-quadruplex-based detection assay for PTK7.
Abstract: A series of luminescent iridium(iii) complexes were synthesised and evaluated for their ability to act as luminescent G-quadruplex-selective probes. The iridium(iii) complex 9 [Ir(pbi)2(5,5-dmbpy)]PF6 (where pbi = 2-phenyl-1H-benzo[d]imidazole; 5,5-dmbpy = 5,5'-dimethyl-2,2'-bipyridine) exhibited high luminescence for G-quadruplex DNA compared to dsDNA and ssDNA, and was employed to construct a G-quadruplex-based assay for protein tyrosine kinase-7 (PTK7) in aqueous solution. PTK7 is an important biomarker for a range of leukemias and solid tumors. In the presence of PTK7, the specific binding of the sgc8 aptamer sequence triggers a structural transition and releases the G-quadruplex-forming sequence. The formation of the nascent G-quadruplex structure is then detected by the G-quadruplex-selective iridium(iii) complex with an enhanced luminescent response. Moreover, the application of the assay for detecting PTK7 in cellular debris and membrane protein extract was demonstrated. To our knowledge, this is the first G-quadruplex-based assay for PTK7.