scispace - formally typeset
Institution

Harbin Institute of Technology

Education · Harbin, China
About: Harbin Institute of Technology is an educational organization based in Harbin, China. It is known for its research contributions in the topics of Microstructure and Control theory. The organization has 88,259 authors who have published 109,297 publications receiving 1,603,393 citations. The organization is also known as HIT.


Papers
Journal ArticleDOI
TL;DR: It is proved that WNNP is equivalent to a standard quadratic programming problem with linear constraints, which facilitates solving the original problem with off-the-shelf convex optimization solvers; an automatic weight-setting method is also presented, which greatly facilitates the practical implementation of WNNM.
Abstract: As a convex relaxation of the rank minimization model, the nuclear norm minimization (NNM) problem has been attracting significant research interest in recent years. The standard NNM regularizes each singular value equally, composing an easily calculated convex norm. However, this restricts its capability and flexibility in dealing with many practical problems, where the singular values have clear physical meanings and should be treated differently. In this paper, we study the weighted nuclear norm minimization (WNNM) problem, which adaptively assigns weights to different singular values. As the key step in solving general WNNM models, the theoretical properties of the weighted nuclear norm proximal (WNNP) operator are investigated. Albeit nonconvex, we prove that WNNP is equivalent to a standard quadratic programming problem with linear constraints, which facilitates solving the original problem with off-the-shelf convex optimization solvers. In particular, when the weights are sorted in a non-descending order, its optimal solution can be easily obtained in closed form. With WNNP, the solving strategies for multiple extensions of WNNM, including robust PCA and matrix completion, can be readily constructed under the alternating direction method of multipliers paradigm. Furthermore, inspired by the reweighted sparse coding scheme, we present an automatic weight setting method, which greatly facilitates the practical implementation of WNNM. The proposed WNNM methods achieve state-of-the-art performance in typical low-level vision tasks, including image denoising, background subtraction and image inpainting.
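When the weights are sorted in non-descending order, the WNNP solution reduces to weighted singular value soft-thresholding: shrink each singular value by its corresponding weight. A minimal numpy sketch of that closed-form step (function name and shapes are illustrative, not taken from the paper's code):

```python
import numpy as np

def wnnp(Y, weights):
    """Weighted singular value thresholding: the closed-form WNNP solution
    when the weights are sorted in non-descending order."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)  # shrink each singular value by its weight
    return U @ np.diag(s_shrunk) @ Vt
```

Since singular values come out in descending order, non-descending weights shrink the dominant singular values least, preserving the main low-rank structure.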

608 citations

Journal ArticleDOI
01 Jan 2017-Carbon
TL;DR: In this paper, uniform core-shell Co@C microspheres are innovatively fabricated through an in situ transformation from a Co3O4@phenolic resin precursor; this route restrains the agglomeration of Co particles during high-temperature treatment, which accounts for the survival of the uniform core-shell microstructure.

604 citations

Journal ArticleDOI
TL;DR: FFDNet is a fast and flexible denoising convolutional neural network that takes a tunable noise level map as input, allowing a single network to handle a wide range of noise levels effectively.
Abstract: Due to the fast inference and good performance, discriminative learning methods have been widely studied in image denoising. However, these methods mostly learn a specific model for each noise level, and require multiple models for denoising images with different noise levels. They also lack flexibility to deal with spatially variant noise, limiting their applications in practical denoising. To address these issues, we present a fast and flexible denoising convolutional neural network, namely FFDNet, with a tunable noise level map as the input. The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance. In contrast to the existing discriminative denoisers, FFDNet enjoys several desirable properties, including (i) the ability to handle a wide range of noise levels (i.e., [0, 75]) effectively with a single network, (ii) the ability to remove spatially variant noise by specifying a non-uniform noise level map, and (iii) faster speed than benchmark BM3D even on CPU without sacrificing denoising performance. Extensive experiments on synthetic and real noisy images are conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The results show that FFDNet is effective and efficient, making it highly attractive for practical denoising applications.
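The input construction described above (downsampled sub-images plus a noise level map) can be sketched with a reversible space-to-depth split; the following numpy code is an illustrative reconstruction under my reading of the abstract, not the authors' implementation:

```python
import numpy as np

def space_to_depth(img, r=2):
    """Reversibly split an HxW image into r*r downsampled sub-images."""
    return np.stack([img[i::r, j::r] for i in range(r) for j in range(r)])

def build_ffdnet_input(img, sigma, r=2):
    """Concatenate the sub-images with a noise level map (uniform sigma here);
    a spatially variant map could be supplied instead."""
    subs = space_to_depth(img, r)             # (r*r, H/r, W/r)
    noise_map = np.full_like(subs[0], sigma)  # one channel encoding the noise level
    return np.concatenate([subs, noise_map[None]], axis=0)
```

Working on the downsampled sub-images is what buys the speed, while the extra noise-map channel is what lets one network cover the whole [0, 75] noise range.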

602 citations

Journal ArticleDOI
TL;DR: This research compared three supervised machine learning algorithms, Naive Bayes, SVM and the character-based N-gram model, for sentiment classification of the reviews on travel blogs for seven popular travel destinations in the US and Europe.
Abstract: The rapid growth of Internet applications in tourism has led to an enormous amount of personal reviews of travel-related information on the Web. These reviews can appear in different forms, such as BBS posts, blogs, wikis or forum websites. More importantly, the information in these reviews is valuable to both travelers and practitioners for various understanding and planning processes. An intrinsic problem of the overwhelming information on the Internet, however, is information overload, as users are simply unable to read all the available information. Query functions in search engines such as Yahoo and Google can help users find some of the reviews they need about specific destinations, but the pages returned by these search engines are still beyond the visual processing capacity of humans. In this research, sentiment classification techniques were incorporated into the domain of mining reviews from travel blogs. Specifically, we compared three supervised machine learning algorithms, Naive Bayes, SVM and the character-based N-gram model, for sentiment classification of reviews on travel blogs for seven popular travel destinations in the US and Europe. Empirical findings indicated that the SVM and N-gram approaches outperformed the Naive Bayes approach, and that when training datasets contained a large number of reviews, all three approaches reached accuracies of at least 80%.
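As a toy illustration of the kind of baseline compared in this study, here is a from-scratch Naive Bayes classifier over character n-grams with Laplace smoothing; the class name, tiny training set and trigram choice are illustrative assumptions, not the authors' actual experimental setup:

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram features, e.g. 'hotel' -> 'hot', 'ote', 'tel'."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

class NgramNaiveBayes:
    """Multinomial Naive Bayes over character n-grams with Laplace smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = Counter(labels)  # unnormalized: a constant shift in the argmax
        self.counts = {c: Counter() for c in self.classes}
        for doc, y in zip(docs, labels):
            self.counts[y].update(char_ngrams(doc))
        self.vocab_size = len(set().union(*self.counts.values()))
        return self

    def predict(self, doc):
        def log_posterior(c):
            total = sum(self.counts[c].values())
            lp = math.log(self.prior[c])
            for g in char_ngrams(doc):
                lp += math.log((self.counts[c][g] + 1) / (total + self.vocab_size))
            return lp
        return max(self.classes, key=log_posterior)
```

Character n-grams need no tokenizer and tolerate spelling variation, which is part of why the N-gram approach held up well against the word-based Naive Bayes baseline in the reported results.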

598 citations

Journal ArticleDOI
TL;DR: The proposed group-based sparse representation (GSR) is able to sparsely represent natural images in the group domain, enforcing the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework.
Abstract: Traditional patch-based sparse representation modeling of natural images usually suffers from two problems. First, it has to solve a large-scale optimization problem with high computational complexity in dictionary learning. Second, each patch is considered independently in dictionary learning and sparse coding, which ignores the relationships among patches and results in inaccurate sparse coding coefficients. In this paper, instead of using the patch as the basic unit of sparse representation, we exploit the concept of a group, composed of nonlocal patches with similar structures, as the basic unit of sparse representation, and establish a novel sparse representation model of natural images called group-based sparse representation (GSR). The proposed GSR is able to sparsely represent natural images in the group domain, enforcing the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework. In addition, an effective self-adaptive dictionary learning method with low complexity is designed for each group, rather than learning a dictionary from natural images. To make GSR tractable and robust, a split Bregman-based technique is developed to efficiently solve the proposed GSR-driven l0 minimization problem for image restoration. Extensive experiments on image inpainting, image deblurring and image compressive sensing recovery demonstrate that the proposed GSR modeling outperforms many current state-of-the-art schemes in both peak signal-to-noise ratio and visual perception.
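The grouping step, collecting nonlocal patches similar to a reference patch into one group matrix, can be sketched as follows; patch size, stride and function names are illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

def extract_patches(img, p=4, stride=2):
    """Flatten all p x p patches of a grayscale image into rows."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, stride)
                     for j in range(0, W - p + 1, stride)])

def build_group(patches, ref, k=8):
    """Stack the k patches most similar to patches[ref] (by squared
    Euclidean distance) as columns of a group matrix."""
    dists = np.sum((patches - patches[ref]) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]
    return patches[nearest].T  # shape (p*p, k), one group
```

In GSR, each such group matrix then gets its own low-complexity self-adaptive dictionary (e.g., derived from the group's SVD), so sparsity is enforced jointly over similar patches rather than patch by patch.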

597 citations


Authors

Showing all 89023 results

Name | H-index | Papers | Citations
Jiaguo Yu | 178 | 730 | 113,300
Lei Jiang | 170 | 2,244 | 135,205
Gang Chen | 167 | 3,372 | 149,819
Xiang Zhang | 154 | 1,733 | 117,576
Hui-Ming Cheng | 147 | 880 | 111,921
Yi Yang | 143 | 2,456 | 92,268
Bruce E. Logan | 140 | 591 | 77,351
Bin Liu | 138 | 2,181 | 87,085
Peng Shi | 137 | 1,371 | 65,195
Hui Li | 135 | 2,982 | 105,903
Lei Zhang | 135 | 2,240 | 99,365
Jie Liu | 131 | 1,531 | 68,891
Lei Zhang | 130 | 2,312 | 86,950
Zhen Li | 127 | 1,712 | 71,351
Kurunthachalam Kannan | 126 | 820 | 59,886
Network Information
Related Institutions (5)

South China University of Technology: 69.4K papers, 1.2M citations, 95% related
Tianjin University: 79.9K papers, 1.2M citations, 95% related
Tsinghua University: 200.5K papers, 4.5M citations, 94% related
University of Science and Technology of China: 101K papers, 2.4M citations, 94% related
Nanyang Technological University: 112.8K papers, 3.2M citations, 93% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 383
2022 | 1,895
2021 | 10,083
2020 | 9,817
2019 | 9,659
2018 | 8,215