scispace - formally typeset
Author

Yi Yang

Bio: Yi Yang is an academic researcher from Tsinghua University. The author has contributed to research in topics: Medicine & Computer science. The author has an h-index of 143 and has co-authored 2,456 publications receiving 92,268 citations. Previous affiliations of Yi Yang include Jiangsu University & Texas State University.


Papers
Journal ArticleDOI
22 May 2009-Science
TL;DR: This study identifies interactors of ABI1 and ABI2 in Arabidopsis, named regulatory components of the ABA receptor (RCARs), and suggests that the ABA receptor may be a class of closely related complexes, which may explain previous difficulties in establishing its identity.
Abstract: The plant hormone abscisic acid (ABA) acts as a developmental signal and as an integrator of environmental cues such as drought and cold. Key players in ABA signal transduction include the type 2C protein phosphatases (PP2Cs) ABI1 and ABI2, which act by negatively regulating ABA responses. In this study, we identify interactors of ABI1 and ABI2 which we have named regulatory components of ABA receptor (RCARs). In Arabidopsis, RCARs belong to a family with 14 members that share structural similarity with class 10 pathogen-related proteins. RCAR1 was shown to bind ABA, to mediate ABA-dependent inactivation of ABI1 or ABI2 in vitro, and to antagonize PP2C action in planta. Other RCARs also mediated ABA-dependent regulation of ABI1 and ABI2, consistent with a combinatorial assembly of receptor complexes.

1,854 citations

Proceedings ArticleDOI
26 Jan 2017
TL;DR: A simple semi-supervised pipeline that uses only the original training set, without collecting extra data; it proposes label smoothing regularization for outliers (LSRO), which effectively improves the discriminative ability of learned CNN embeddings.
Abstract: The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37%, +1.6% and +2.46% improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6% improvement over a strong baseline. The code is available at https://github.com/layumi/Person-reID_GAN.
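The LSRO target assignment described above can be sketched in a few lines of numpy: real images keep one-hot labels, GAN-generated images get a uniform distribution over classes, and both are trained under the same cross-entropy loss. This is a minimal illustrative sketch, not the paper's training code; the function names are assumptions.

```python
import numpy as np

def lsro_targets(num_real, num_gen, num_classes, real_labels):
    """Build training targets: one-hot rows for real (labeled) images,
    uniform rows (the LSRO assignment) for GAN-generated ones."""
    targets = np.zeros((num_real + num_gen, num_classes))
    targets[np.arange(num_real), real_labels] = 1.0   # one-hot for labeled data
    targets[num_real:] = 1.0 / num_classes            # uniform for generated samples
    return targets

def cross_entropy(probs, targets, eps=1e-12):
    """Mean cross-entropy between predicted class probabilities and targets.
    With uniform targets this term regularizes the model toward
    predicting no particular identity for generated images."""
    return -np.mean(np.sum(targets * np.log(probs + eps), axis=1))
```

Because the uniform rows penalize confident predictions on the generated samples, the same softmax classifier can consume real and generated images in one batch.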

1,789 citations

Journal ArticleDOI
03 Apr 2020
TL;DR: Random Erasing as mentioned in this paper randomly selects a rectangle region in an image and erases its pixels with random values, which reduces the risk of overfitting and makes the model robust to occlusion.
Abstract: In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: https://github.com/zhunzhong07/Random-Erasing.
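The select-and-erase procedure above can be sketched in numpy: with probability p, sample a rectangle whose area ratio and aspect ratio fall in given ranges, then overwrite its pixels with random values. A minimal sketch, assuming an (H, W, C) float image; the default hyperparameters match the values commonly reported for Random Erasing, and the function name is illustrative.

```python
import numpy as np

def random_erasing(img, p=0.5, sl=0.02, sh=0.4, r1=0.3, rng=None):
    """Randomly erase one rectangle in img (H, W, C) with random values.
    sl/sh bound the erased area ratio, r1 bounds the aspect ratio."""
    rng = rng or np.random.default_rng()
    if rng.random() > p:                      # skip erasing with probability 1 - p
        return img
    h, w = img.shape[:2]
    area = h * w
    for _ in range(100):                      # retry until the rectangle fits
        target_area = rng.uniform(sl, sh) * area
        aspect = rng.uniform(r1, 1.0 / r1)
        eh = int(round(np.sqrt(target_area * aspect)))
        ew = int(round(np.sqrt(target_area / aspect)))
        if eh < h and ew < w:
            y = rng.integers(0, h - eh)       # top-left corner of the rectangle
            x = rng.integers(0, w - ew)
            img = img.copy()
            img[y:y + eh, x:x + ew] = rng.random((eh, ew, img.shape[2]))
            return img
    return img                                # no valid rectangle found; image unchanged
```

Because the transform only touches pixel values, it composes freely with random cropping and flipping, as the abstract notes.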

1,748 citations

Book ChapterDOI
08 Sep 2018
TL;DR: In this paper, a part-based convolutional baseline (PCB) is proposed to learn discriminative part-informed features for person retrieval, with two contributions: (i) a network named Part-based Convolutional Baseline (PCB), which outputs a convolutional descriptor consisting of several part-level features, and (ii) a refined part pooling (RPP) method that enhances within-part consistency.
Abstract: Employing part-level features offers fine-grained information for pedestrian image description. A prerequisite of part discovery is that each part should be well located. Instead of using external resources like pose estimator, we consider content consistency within each part for precise part location. Specifically, we target at learning discriminative part-informed features for person retrieval and make two contributions. (i) A network named Part-based Convolutional Baseline (PCB). Given an image input, it outputs a convolutional descriptor consisting of several part-level features. With a uniform partition strategy, PCB achieves competitive results with the state-of-the-art methods, proving itself as a strong convolutional baseline for person retrieval. (ii) A refined part pooling (RPP) method. Uniform partition inevitably incurs outliers in each part, which are in fact more similar to other parts. RPP re-assigns these outliers to the parts they are closest to, resulting in refined parts with enhanced within-part consistency. Experiment confirms that RPP allows PCB to gain another round of performance boost. For instance, on the Market-1501 dataset, we achieve (77.4+4.2)% mAP and (92.3+1.5)% rank-1 accuracy, surpassing the state of the art by a large margin. Code is available at: https://github.com/syfafterzy/PCB_RPP
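The uniform partition step behind PCB can be sketched as average pooling over horizontal stripes of the backbone's feature map. A minimal numpy sketch, assuming the stripe count divides the feature-map height; the function name and the default of six stripes are illustrative choices, not the paper's code.

```python
import numpy as np

def pcb_part_features(feat_map, num_parts=6):
    """Uniformly partition a conv feature map (C, H, W) into num_parts
    horizontal stripes and average-pool each stripe, producing
    num_parts part-level column vectors of dimension C."""
    c, h, w = feat_map.shape
    assert h % num_parts == 0, "stripe height must divide H evenly"
    stripe_h = h // num_parts
    parts = [feat_map[:, i * stripe_h:(i + 1) * stripe_h, :].mean(axis=(1, 2))
             for i in range(num_parts)]
    return np.stack(parts)                    # shape (num_parts, C)
```

In the full method, each part vector then feeds its own classifier; RPP would additionally re-assign outlier locations to their nearest part instead of using these fixed stripes.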

1,633 citations

01 Jan 2009
TL;DR: In this paper, interactors of the ABI1 and ABI2 phosphatases were identified as regulatory components of the ABA receptor (RCARs) for the plant hormone abscisic acid (ABA) in Arabidopsis.
Abstract: ABA Receptor Rumbled? The plant hormone abscisic acid (ABA) is critical for normal development and for mediating plant responses to stressful environmental conditions. Now, two papers present analyses of candidate ABA receptors (see the news story by Pennisi). Ma et al. (p. 1064; published online 30 April) and Park et al. (p. 1068, published online 30 April) used independent strategies to search for proteins that physically interact with ABI family phosphatase components of the ABA response signaling pathway. Both groups identified different members of the same family of proteins, which appear to interact with ABI proteins to form a heterocomplex that can act as the ABA receptor. The variety of both families suggests that the ABA receptor may not be one entity, but rather a class of closely related complexes, which may explain previous difficulties in establishing its identity. Links between two ancient multimember protein families signal responses to the plant hormone abscisic acid. The plant hormone abscisic acid (ABA) acts as a developmental signal and as an integrator of environmental cues such as drought and cold. Key players in ABA signal transduction include the type 2C protein phosphatases (PP2Cs) ABI1 and ABI2, which act by negatively regulating ABA responses. In this study, we identify interactors of ABI1 and ABI2 which we have named regulatory components of ABA receptor (RCARs). In Arabidopsis, RCARs belong to a family with 14 members that share structural similarity with class 10 pathogen-related proteins. RCAR1 was shown to bind ABA, to mediate ABA-dependent inactivation of ABI1 or ABI2 in vitro, and to antagonize PP2C action in planta. Other RCARs also mediated ABA-dependent regulation of ABI1 and ABI2, consistent with a combinatorial assembly of receptor complexes.

1,506 citations


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book ChapterDOI
06 Sep 2014
TL;DR: A new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding by gathering images of complex everyday scenes containing common objects in their natural context.
Abstract: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.

30,462 citations