Institution
National University of Defense Technology
Education • Changsha, China
About: National University of Defense Technology is an education organization based in Changsha, China. It is known for research contributions in the topics of Radar & Synthetic aperture radar. The organization has 39430 authors who have published 40181 publications receiving 358979 citations. The organization is also known as: Guófáng Kēxuéjìshù Dàxué & NUDT.
Topics: Radar, Synthetic aperture radar, Laser, Fiber laser, Radar imaging
Papers published on a yearly basis
Papers
31 May 2014. TL;DR: Presents a new automated repair technique based on random search, commonly considered much simpler than genetic programming, implemented in a prototype tool called RSRepair; experiments suggest that random search is stronger than genetic programming for guiding patch search.
Abstract: Automated program repair has recently received considerable attention, and many techniques in this research area have been proposed. Among them, two genetic-programming-based techniques, GenProg and Par, have shown promising results. In particular, GenProg has been used as the baseline technique to check the repair effectiveness of new techniques in many studies. Although GenProg and Par have shown a strong ability to fix real-life bugs in nontrivial programs, to what extent GenProg and Par benefit from genetic programming, which they use to guide the patch search process, is still unknown. To address this question, we present a new automated repair technique using random search, which is commonly considered much simpler than genetic programming, and implement a prototype tool called RSRepair. Experiments on 7 programs with 24 versions shipping with real-life bugs suggest that RSRepair, in most cases (23/24), outperforms GenProg in terms of both repair effectiveness (requiring fewer patch trials) and efficiency (requiring fewer test case executions), demonstrating the strength of random search relative to genetic programming. Based on these experimental results, we suggest that every proposed technique using an optimization algorithm should check its effectiveness by comparing it with random search.
316 citations
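The random-search repair loop described in the abstract above can be sketched as a toy example. This is an illustrative sketch, not RSRepair's actual implementation: the program representation (a list of values), the donor pool, and the test oracle are all assumptions made for demonstration.

```python
import random

# Toy sketch of random-search program repair: a "program" is a list of
# integer-valued statements, a patch replaces one statement with a candidate
# drawn from a donor pool, and a candidate is accepted if it passes the tests.

def random_search_repair(program, donor_pool, passes_tests,
                         max_trials=1000, seed=0):
    rng = random.Random(seed)
    for trial in range(1, max_trials + 1):
        candidate = list(program)
        idx = rng.randrange(len(candidate))      # pick a random edit location
        candidate[idx] = rng.choice(donor_pool)  # pick a random replacement
        if passes_tests(candidate):              # validate against the test suite
            return candidate, trial
    return None, max_trials

# Toy buggy "program": the test suite requires the statements to sum to 10.
buggy = [3, 99, 4]
fixed, trials = random_search_repair(buggy, donor_pool=[1, 2, 3, 4, 5],
                                     passes_tests=lambda p: sum(p) == 10)
```

The point of the paper is precisely that a loop this simple, with no fitness function or crossover, already matches or beats the genetic-programming search in GenProg.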
TL;DR: Introduces manifold regularization and margin maximization into NMF, obtaining the manifold regularized discriminative NMF (MD-NMF), to address NMF's neglect of the local geometry of data and the discriminative information of different classes.
Abstract: Nonnegative matrix factorization (NMF) has become a popular data-representation method and has been widely used in image processing and pattern-recognition problems. This is because the learned bases can be interpreted as a natural parts-based representation of data, and this interpretation is consistent with the psychological intuition of combining parts to form a whole. For practical classification tasks, however, NMF ignores both the local geometry of data and the discriminative information of different classes. In addition, existing research results show that the learned basis is not necessarily parts-based because there is neither an explicit nor an implicit constraint to ensure that the representation is parts-based. In this paper, we introduce manifold regularization and margin maximization to NMF and obtain the manifold regularized discriminative NMF (MD-NMF) to overcome the aforementioned problems. The multiplicative update rule (MUR) can be applied to optimizing MD-NMF, but it converges slowly. In this paper, we propose a fast gradient descent (FGD) to optimize MD-NMF. FGD contains a Newton method that searches for the optimal step length, and thus FGD converges much faster than MUR. In addition, FGD includes MUR as a special case and can be applied to optimizing NMF and its variants. For a problem with 165 samples in R^1600, FGD converges in 28 s, while MUR requires 282 s. We also apply FGD to a variant of MD-NMF, and experimental results confirm its efficiency. Experimental results on several face image datasets suggest the effectiveness of MD-NMF.
312 citations
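The multiplicative update rule (MUR) mentioned in the abstract can be sketched for plain NMF as below. This is a minimal sketch only: the manifold-regularization and margin terms of MD-NMF, and the FGD Newton step for the step length, are omitted, and the rank, iteration count, and synthetic data are illustrative assumptions.

```python
import numpy as np

# Standard multiplicative updates for plain NMF, minimizing ||X - WH||_F^2.
# MD-NMF adds regularization terms on top of updates like these (not shown).

def nmf_mur(X, rank, n_iter=200, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update basis
    return W, H

rng = np.random.default_rng(1)
X = rng.random((20, 5)) @ rng.random((5, 15))  # exactly rank-5 nonnegative data
W, H = nmf_mur(X, rank=5)
```

Because each update multiplies by a nonnegative ratio, W and H stay nonnegative throughout, which is what makes MUR attractive despite its slow convergence.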
TL;DR: This paper revisits existing security threats and gives a systematic survey of them from two aspects, the training phase and the testing/inferring phase, and categorizes current defensive techniques of machine learning into four groups: security assessment mechanisms; countermeasures in the training phase; countermeasures in the testing or inferring phase; and data security and privacy.
Abstract: Machine learning is one of the most prevalent techniques in computer science, and it has been widely applied in image processing, natural language processing, pattern recognition, cybersecurity, and other fields. Despite successful applications of machine learning algorithms in many scenarios, e.g., facial recognition, malware detection, automatic driving, and intrusion detection, these algorithms and their training data are vulnerable to a variety of security threats that induce significant performance decreases. Hence, it is vital to call for further attention to the security threats and corresponding defensive techniques of machine learning, which motivates the comprehensive survey in this paper. To date, researchers from academia and industry have found many security threats against a variety of learning algorithms, including naive Bayes, logistic regression, decision trees, support vector machines (SVM), principal component analysis, clustering, and prevailing deep neural networks. Thus, we revisit existing security threats and give a systematic survey of them from two aspects, the training phase and the testing/inferring phase. After that, we categorize current defensive techniques of machine learning into four groups: security assessment mechanisms; countermeasures in the training phase; countermeasures in the testing or inferring phase; and data security and privacy. Finally, we identify five notable trends in research on security threats and defensive techniques of machine learning that merit in-depth study in the future.
312 citations
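As a concrete toy example of the testing-phase threats the survey covers, the well-known evasion attack on a linear classifier perturbs each input feature against the sign of the model's weights. The weights, input, and step size below are illustrative assumptions, not material from the survey.

```python
import numpy as np

# Toy testing-phase evasion on a fixed linear classifier: an FGSM-style step
# x_adv = x - eps * sign(w) pushes a +1-classified input across the boundary.

def predict(w, b, x):
    return 1 if w @ x + b > 0 else -1

w = np.array([1.0, -1.5, 0.5, 2.0, -0.5])  # assumed model weights
b = 0.0
x = 0.2 * w                                 # a clean input classified as +1

eps = 0.5
x_adv = x - eps * np.sign(w)                # bounded per-feature perturbation

clean_label = predict(w, b, x)              # +1
adv_label = predict(w, b, x_adv)            # -1: label flips under a small change
```

The per-feature change is at most eps, yet the score shifts by eps times the L1 norm of w, which is why linear models are especially easy to evade.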
TL;DR: A novel approach for texture classification that generalizes the well-known local binary pattern (LBP) approach; it produces the best classification results on KTH-TIPS2b and results comparable to the state of the art on CUReT.
311 citations
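For reference, the basic 8-neighbor LBP that the paper generalizes can be sketched as follows; the image size and contents below are illustrative, and the paper's actual generalization is not reproduced here.

```python
import numpy as np

# Classic 8-neighbor local binary pattern: threshold each pixel's neighbors
# against the center, read the 8 bits as a code in [0, 255], and describe the
# image by the histogram of codes.

def lbp_codes(image):
    c = image[1:-1, 1:-1]  # centers (border pixels skipped)
    neighbors = [image[0:-2, 0:-2], image[0:-2, 1:-1], image[0:-2, 2:],
                 image[1:-1, 2:],   image[2:,   2:],   image[2:,   1:-1],
                 image[2:,   0:-2], image[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbors):
        codes |= ((nb >= c).astype(np.uint8) << bit)  # set bit if neighbor >= center
    return codes

def lbp_histogram(image):
    n = (image.shape[0] - 2) * (image.shape[1] - 2)
    return np.bincount(lbp_codes(image).ravel(), minlength=256) / n

img = np.random.default_rng(0).integers(0, 256, (16, 16))
hist = lbp_histogram(img)
```

Because the code depends only on sign comparisons with the center pixel, the descriptor is invariant to monotonic changes in illumination, which is the property generalizations of LBP try to preserve.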
TL;DR: The proposed unconventional random feature extraction is simple, yet by leveraging the sparse nature of texture images, the approach outperforms traditional feature extraction methods that involve careful design and complex steps, leading to significant improvements in classification accuracy and reductions in feature dimensionality.
Abstract: Inspired by theories of sparse representation and compressed sensing, this paper presents a simple, novel, yet very powerful approach for texture classification based on random projection, suitable for large texture database applications. At the feature extraction stage, a small set of random features is extracted from local image patches. The random features are embedded into a bag-of-words model to perform texture classification; thus, learning and classification are carried out in a compressed domain. The proposed unconventional random feature extraction is simple, yet by leveraging the sparse nature of texture images, our approach outperforms traditional feature extraction methods which involve careful design and complex steps. We have conducted extensive experiments on each of the CUReT, the Brodatz, and the MSRC databases, comparing the proposed approach to four state-of-the-art texture classification methods: Patch, Patch-MRF, MR8, and LBP. We show that our approach leads to significant improvements in classification accuracy and reductions in feature dimensionality.
310 citations
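The random-feature pipeline in the abstract can be sketched as below. The patch size, projected dimension, codebook construction, and image are illustrative assumptions, not the paper's settings (the paper learns its codebook rather than reusing raw features).

```python
import numpy as np

# Sketch of random-projection texture features: compress each local patch with
# a fixed random matrix, quantize against a codebook, and represent the image
# as a normalized histogram of codeword counts (bag of words).

def extract_random_features(image, patch=7, dim=10, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    P = rng.normal(size=(dim, patch * patch)) / np.sqrt(dim)  # random projection
    h, w = image.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            v = image[i:i + patch, j:j + patch].ravel()
            feats.append(P @ v)                               # compressed patch
    return np.array(feats)

def bow_histogram(feats, codebook):
    # assign each compressed patch to its nearest codeword and count occurrences
    d = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    return np.bincount(labels, minlength=len(codebook)) / len(feats)

img = np.random.default_rng(1).random((28, 28))
feats = extract_random_features(img)       # 16 patches, each compressed to 10-D
codebook = feats[:4]                       # toy codebook: first 4 patches
hist = bow_histogram(feats, codebook)
```

The key point is that classification operates entirely on the 10-dimensional compressed features, never on the original 49-dimensional patches, which is what the abstract means by working "in a compressed domain".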
Authors
Showing all 39659 results
Name | H-index | Papers | Citations |
---|---|---|---|
Rui Zhang | 151 | 2625 | 107917 |
Jian Li | 133 | 2863 | 87131 |
Chi Lin | 125 | 1313 | 102710 |
Wei Xu | 103 | 1492 | 49624 |
Lei Liu | 98 | 2041 | 51163 |
Xiang Li | 97 | 1472 | 42301 |
Chang Liu | 97 | 1099 | 39573 |
Jian Huang | 97 | 1189 | 40362 |
Tao Wang | 97 | 2720 | 55280 |
Wei Liu | 96 | 1538 | 42459 |
Jian Chen | 96 | 1718 | 52917 |
Wei Wang | 95 | 3544 | 59660 |
Peng Li | 95 | 1548 | 45198 |
Jianhong Wu | 93 | 726 | 36427 |
Jianhua Zhang | 92 | 415 | 28085 |