Author

Lei Huang

Bio: Lei Huang is an academic researcher from Ocean University of China. The author has contributed to research in topics including computer science and convolutional neural networks. The author has an h-index of 18, has co-authored 129 publications, and has received 1449 citations. Previous affiliations of Lei Huang include the Chinese Academy of Sciences and Beihang University.


Papers
Journal ArticleDOI
TL;DR: This review highlights the advances of state-of-the-art activity recognition approaches, especially the activity representation and classification methods, and classifies the existing literature with a detailed taxonomy covering representation and classification methods as well as the datasets they used.
Abstract: Human activity recognition (HAR) aims to recognize activities from a series of observations on the actions of subjects and the environmental conditions. Vision-based HAR research is the basis of many applications including video surveillance, health care, and human-computer interaction (HCI). This review highlights the advances of state-of-the-art activity recognition approaches, especially the activity representation and classification methods. For the representation methods, we sort out a chronological research trajectory from global representations to local representations and recent depth-based representations. For the classification methods, we follow the categorization of template-based methods, discriminative models, and generative models, and review several prevalent methods. Next, representative and available datasets are introduced. Aiming to provide an overview of those methods and a convenient way of comparing them, we classify the existing literature with a detailed taxonomy including representation and classification methods, as well as the datasets they used. Finally, we investigate directions for future research.

239 citations

Proceedings ArticleDOI
01 Jun 2019
TL;DR: This paper proposes a collaborative learning method to jointly improve the performance of disease grading and lesion segmentation by semi-supervised learning with an attention mechanism and achieves consistent improvements over state-of-the-art methods on three public datasets.
Abstract: Medical image analysis has two important research areas: disease grading and fine-grained lesion segmentation. Although the former problem often relies on the latter, the two are usually studied separately. Disease severity grading can be treated as a classification problem, which only requires image-level annotations, while the lesion segmentation requires stronger pixel-level annotations. However, pixel-wise data annotation for medical images is highly time-consuming and requires domain experts. In this paper, we propose a collaborative learning method to jointly improve the performance of disease grading and lesion segmentation by semi-supervised learning with an attention mechanism. Given a small set of pixel-level annotated data, a multi-lesion mask generation model first performs the traditional semantic segmentation task. Then, based on initially predicted lesion maps for large quantities of image-level annotated data, a lesion attentive disease grading model is designed to improve the severity classification accuracy. Meanwhile, the lesion attention model can refine the lesion maps using class-specific information to fine-tune the segmentation model in a semi-supervised manner. An adversarial architecture is also integrated for training. With extensive experiments on a representative medical problem called diabetic retinopathy (DR), we validate the effectiveness of our method and achieve consistent improvements over state-of-the-art methods on three public datasets.
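The collaborative setup described above couples a pixel-supervised segmentation model with an image-supervised grading model through lesion attention. The sketch below shows one way such lesion-attentive grading could be wired up; the module names (seg_net, backbone) and the attention gating are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LesionAttentiveGrader(nn.Module):
    def __init__(self, seg_net: nn.Module, backbone: nn.Module, feat_dim: int, num_grades: int):
        super().__init__()
        self.seg_net = seg_net        # multi-lesion mask generator (trained with pixel-level labels)
        self.backbone = backbone      # grading feature extractor (trained with image-level labels)
        self.classifier = nn.Linear(feat_dim, num_grades)

    def forward(self, image):
        lesion_maps = torch.sigmoid(self.seg_net(image))           # (B, L, H, W) soft lesion masks
        attention = lesion_maps.max(dim=1, keepdim=True).values    # (B, 1, H, W) lesion attention
        feats = self.backbone(image * (1.0 + attention))           # emphasize lesion regions
        logits = self.classifier(feats.mean(dim=(2, 3)))           # global average pool, then grade
        return logits, lesion_maps
```

In this sketch, the grading loss would back-propagate through the soft lesion maps, which is one way the class-specific signal could refine the segmentation model in a semi-supervised manner.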

189 citations

Proceedings Article
29 Apr 2018
TL;DR: In this article, an orthogonal weight normalization method was proposed to solve OMDSM in feed-forward Neural Networks (FNNs), which can stabilize the distribution of network activations and regularize FNNs.
Abstract: Orthogonal matrices have shown advantages in training Recurrent Neural Networks (RNNs), but such matrices are limited to being square for the hidden-to-hidden transformation in RNNs. In this paper, we generalize the square orthogonal matrix to an orthogonal rectangular matrix and formulate this problem in feed-forward Neural Networks (FNNs) as Optimization over Multiple Dependent Stiefel Manifolds (OMDSM). We show that the orthogonal rectangular matrix can stabilize the distribution of network activations and regularize FNNs. We propose a novel orthogonal weight normalization method to solve OMDSM. In particular, it constructs an orthogonal transformation over proxy parameters to ensure the weight matrix is orthogonal. To guarantee stability, we minimize the distortion between the proxy parameters and the canonical weights over all tractable orthogonal transformations. In addition, we design an orthogonal linear module (OLM) to learn orthogonal filter banks in practice, which can be used as an alternative to the standard linear module. Extensive experiments demonstrate that by simply substituting OLM for the standard linear module without revising any experimental protocols, our method improves the performance of state-of-the-art networks, including Inception and residual networks, on the CIFAR and ImageNet datasets.
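As a concrete illustration of orthogonalizing a rectangular weight, the sketch below builds an orthogonal matrix from a proxy parameter V via the symmetric construction W = (V V^T)^(-1/2) V, one common minimal-distortion orthogonalization; the eps value and the use of an eigendecomposition are implementation assumptions, not details taken from the paper.

```python
import torch

def orthogonalize(v: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # v: (out_features, in_features) proxy parameter; rows are made orthonormal
    gram = v @ v.t()                               # (out, out) Gram matrix
    eigvals, eigvecs = torch.linalg.eigh(gram)     # symmetric eigendecomposition
    inv_sqrt = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.t()
    return inv_sqrt @ v                            # W = (V V^T)^(-1/2) V

w = orthogonalize(torch.randn(64, 256))
print((w @ w.t() - torch.eye(64)).abs().max())     # close to zero: rows are orthonormal
```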

100 citations

Posted Content
TL;DR: Decorrelated Batch Normalization (DBN) extends Batch Normalization, which accelerates the training of deep models by centering and scaling activations within mini-batches, by additionally whitening the activations.
Abstract: Batch Normalization (BN) is capable of accelerating the training of deep models by centering and scaling activations within mini-batches. In this work, we propose Decorrelated Batch Normalization (DBN), which not just centers and scales activations but whitens them. We explore multiple whitening techniques, and find that PCA whitening causes a problem we call stochastic axis swapping, which is detrimental to learning. We show that ZCA whitening does not suffer from this problem, permitting successful learning. DBN retains the desirable qualities of BN and further improves BN's optimization efficiency and generalization ability. We design comprehensive experiments to show that DBN can improve the performance of BN on multilayer perceptrons and convolutional neural networks. Furthermore, we consistently improve the accuracy of residual networks on CIFAR-10, CIFAR-100, and ImageNet.
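The core operation behind DBN is ZCA whitening of mini-batch activations. The sketch below is a minimal version of that step; group-wise whitening, running statistics for inference, and the learnable affine transform are omitted simplifications, not part of the paper's full procedure.

```python
import torch

def zca_whiten(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # x: (batch, features) mini-batch of activations
    xc = x - x.mean(dim=0, keepdim=True)                          # center
    cov = xc.t() @ xc / x.size(0)                                 # (d, d) covariance
    eigvals, eigvecs = torch.linalg.eigh(cov)
    # ZCA: rotate to the PCA basis, scale, rotate back, avoiding stochastic axis swapping
    whitening = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.t()
    return xc @ whitening

x = torch.randn(128, 16) @ torch.randn(16, 16)                    # correlated activations
xw = zca_whiten(x)
print((xw.t() @ xw / x.size(0) - torch.eye(16)).abs().max())      # close to zero: decorrelated
```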

87 citations

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper proposes to reparameterize the input weight of each neuron in deep neural networks by normalizing it to zero mean and unit norm, followed by a learnable scalar parameter that adjusts the norm of the weight.
Abstract: Training deep neural networks is difficult due to the pathological curvature problem. Re-parameterization is an effective way to relieve the problem by learning the curvature approximately or by constraining the weights to solutions with properties that favor optimization. This paper proposes to reparameterize the input weight of each neuron in deep neural networks by normalizing it to zero mean and unit norm, followed by a learnable scalar parameter to adjust the norm of the weight. This technique effectively stabilizes the activation distribution implicitly. In addition, it improves the conditioning of the optimization problem and thus accelerates the training of deep neural networks. It can be wrapped as a linear module in practice and plugged into any architecture to replace the standard linear module. We highlight the benefits of our method on both multi-layer perceptrons and convolutional neural networks, and demonstrate its scalability and efficiency on the SVHN, CIFAR-10, CIFAR-100, and ImageNet datasets.
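The reparameterization described above is simple to express as a drop-in linear module. The sketch below follows the abstract directly (zero-mean, unit-norm weight rows scaled by a learnable scalar); the class name, initialization, and bias handling are illustrative choices, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenteredWeightNormLinear(nn.Module):
    """Linear layer whose per-neuron input weight is zero-mean, unit-norm, scaled by g."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.v = nn.Parameter(torch.randn(out_features, in_features) * 0.05)  # proxy weights
        self.g = nn.Parameter(torch.ones(out_features))                       # learnable scale
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        v = self.v - self.v.mean(dim=1, keepdim=True)                 # zero-mean each neuron's weight
        w = self.g.unsqueeze(1) * v / v.norm(dim=1, keepdim=True)     # unit norm, then rescale
        return F.linear(x, w, self.bias)

layer = CenteredWeightNormLinear(256, 64)
print(layer(torch.randn(8, 256)).shape)                               # torch.Size([8, 64])
```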

70 citations


Cited by
Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
30 Apr 2014 - Sensors
TL;DR: A significant aim of this review is to provide a distinct categorization pursuant to state of the art humidity sensor types, principles of work, sensing substances, transduction mechanisms, and production technologies.
Abstract: Humidity measurement is one of the most significant issues in various areas of application such as instrumentation, automated systems, agriculture, climatology, and GIS. Numerous sorts of humidity sensors fabricated and developed for industrial and laboratory applications are reviewed and presented in this article. The survey concentrates mainly on RH sensors based upon their organic and inorganic functional materials, e.g., porous ceramics (semiconductors), polymers, ceramic/polymer composites, and electrolytes, as well as their conduction mechanisms and fabrication technologies. A significant aim of this review is to provide a distinct categorization according to state-of-the-art humidity sensor types, principles of operation, sensing substances, transduction mechanisms, and production technologies. Furthermore, performance characteristics of the different humidity sensors, such as electrical and statistical data, are detailed, adding value to the report. A comparison of the overall prospects of the sensors reveals that there are still drawbacks in the efficiency of sensing elements and conduction values. The flexibility offered by thick-film and thin-film processes, either in the preparation of materials or in the choice of shape and size of the sensor structure, provides advantages over other technologies. These ceramic sensors show faster response than other types.

895 citations