Author

Yi Yang

Bio: Yi Yang is an academic researcher from Tsinghua University. The author has contributed to research in topics including Convolutional neural network and Feature (computer vision). The author has an h-index of 143 and has co-authored 2,456 publications receiving 92,268 citations. Previous affiliations of Yi Yang include Jiangsu University and Texas State University.


Papers
Journal ArticleDOI
22 May 2009-Science
TL;DR: This study identifies interactors of ABI1 and ABI2, named regulatory components of ABA receptor (RCARs), in Arabidopsis and suggests that the ABA receptor may be a class of closely related complexes, which may explain previous difficulties in establishing its identity.
Abstract: The plant hormone abscisic acid (ABA) acts as a developmental signal and as an integrator of environmental cues such as drought and cold. Key players in ABA signal transduction include the type 2C protein phosphatases (PP2Cs) ABI1 and ABI2, which act by negatively regulating ABA responses. In this study, we identify interactors of ABI1 and ABI2 which we have named regulatory components of ABA receptor (RCARs). In Arabidopsis, RCARs belong to a family with 14 members that share structural similarity with class 10 pathogen-related proteins. RCAR1 was shown to bind ABA, to mediate ABA-dependent inactivation of ABI1 or ABI2 in vitro, and to antagonize PP2C action in planta. Other RCARs also mediated ABA-dependent regulation of ABI1 and ABI2, consistent with a combinatorial assembly of receptor complexes.

1,553 citations

01 Jan 2009
TL;DR: In this paper, interactors of the PP2C phosphatases ABI1 and ABI2, named regulatory components of ABA receptor (RCARs), were identified in Arabidopsis and shown to mediate ABA-dependent regulation of ABI1 and ABI2.
Abstract: ABA Receptor Rumbled? The plant hormone abscisic acid (ABA) is critical for normal development and for mediating plant responses to stressful environmental conditions. Now, two papers present analyses of candidate ABA receptors (see the news story by Pennisi). Ma et al. (p. 1064; published online 30 April) and Park et al. (p. 1068, published online 30 April) used independent strategies to search for proteins that physically interact with ABI family phosphatase components of the ABA response signaling pathway. Both groups identified different members of the same family of proteins, which appear to interact with ABI proteins to form a heterocomplex that can act as the ABA receptor. The variety of both families suggests that the ABA receptor may not be one entity, but rather a class of closely related complexes, which may explain previous difficulties in establishing its identity. Links between two ancient multimember protein families signal responses to the plant hormone abscisic acid. The plant hormone abscisic acid (ABA) acts as a developmental signal and as an integrator of environmental cues such as drought and cold. Key players in ABA signal transduction include the type 2C protein phosphatases (PP2Cs) ABI1 and ABI2, which act by negatively regulating ABA responses. In this study, we identify interactors of ABI1 and ABI2 which we have named regulatory components of ABA receptor (RCARs). In Arabidopsis, RCARs belong to a family with 14 members that share structural similarity with class 10 pathogen-related proteins. RCAR1 was shown to bind ABA, to mediate ABA-dependent inactivation of ABI1 or ABI2 in vitro, and to antagonize PP2C action in planta. Other RCARs also mediated ABA-dependent regulation of ABI1 and ABI2, consistent with a combinatorial assembly of receptor complexes.

1,506 citations

Proceedings ArticleDOI
26 Jan 2017
TL;DR: A simple semi-supervised pipeline that uses only the original training set without collecting extra data; it introduces the label smoothing regularization for outliers (LSRO) and effectively improves the discriminative ability of learned CNN embeddings.
Abstract: The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. Two points make this challenging: 1) how to obtain more training data from the training set alone, and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37%, +1.6% and +2.46% improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6% improvement over a strong baseline. The code is available at https://github.com/layumi/Person-reID_GAN.
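
As a reading aid, here is a minimal sketch of the LSRO idea summarized above: real images keep one-hot person-ID targets, GAN-generated images receive a uniform distribution over all classes, and both are trained with a soft-target cross-entropy. This is a plain NumPy illustration, not the released code; the function names and the use of -1 to mark generated samples are assumptions.

import numpy as np

def lsro_targets(labels, num_classes):
    # Build per-sample target distributions for a mixed batch: real images keep
    # one-hot person-ID targets, while GAN-generated images (marked with -1) get
    # the uniform 1/K distribution that LSRO assigns to unlabeled samples.
    n = len(labels)
    targets = np.full((n, num_classes), 1.0 / num_classes)
    for i, y in enumerate(labels):
        if y >= 0:
            targets[i] = 0.0
            targets[i, y] = 1.0
    return targets

def soft_cross_entropy(logits, targets):
    # Soft-target cross-entropy averaged over the batch (numerically stable).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(targets * log_probs).sum(axis=1).mean()

# Example: three real images (IDs 0, 2, 1) and two GAN-generated images, 4 IDs total.
labels = np.array([0, 2, 1, -1, -1])
targets = lsro_targets(labels, num_classes=4)
logits = np.random.randn(5, 4)                 # stand-in for CNN outputs
print(soft_cross_entropy(logits, targets))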

1,305 citations

Journal ArticleDOI
03 Apr 2020
TL;DR: Random Erasing as mentioned in this paper randomly selects a rectangle region in an image and erases its pixels with random values, which reduces the risk of overfitting and makes the model robust to occlusion.
Abstract: In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: https://github.com/zhunzhong07/Random-Erasing.
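
A minimal sketch of the Random Erasing operation described above, written in plain NumPy for a single H x W x C uint8 image. The default probability and the area/aspect-ratio ranges below are illustrative choices rather than guaranteed defaults; the linked repository should be treated as the authoritative implementation.

import random
import numpy as np

def random_erasing(img, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3)):
    # With probability p, erase one rectangle of the image with random pixel values.
    if random.random() > p:
        return img
    out = img.copy()
    h, w, c = out.shape
    for _ in range(100):                      # retry until a rectangle fits
        target_area = random.uniform(*area_range) * h * w
        aspect = random.uniform(*aspect_range)
        eh = int(round(np.sqrt(target_area * aspect)))
        ew = int(round(np.sqrt(target_area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            top = random.randint(0, h - eh)
            left = random.randint(0, w - ew)
            out[top:top + eh, left:left + ew, :] = np.random.randint(0, 256, size=(eh, ew, c))
            return out
    return out

# Example: augment a dummy 128x64 RGB image.
img = np.zeros((128, 64, 3), dtype=np.uint8)
augmented = random_erasing(img)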

1,210 citations

Proceedings ArticleDOI
20 Jun 2011
TL;DR: A general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations, and it is shown that such relations can capture notions of local rigidity.
Abstract: We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster.
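
Because the key computational claim above is that tree-structured co-occurrence and spatial relations can be optimized exactly with dynamic programming, here is a toy max-sum sketch of that inference step in NumPy. The part tree, score arrays, and function names are made up for illustration; this is not the authors' released code, and the appearance and spring/co-occurrence scores are assumed to be precomputed.

import numpy as np

def best_parts(parents, unary, pairwise):
    # Max-sum dynamic programming over a tree of parts.
    # parents:  dict part -> parent part (the root part maps to None)
    # unary:    dict part -> 1-D array of appearance scores per candidate placement
    # pairwise: dict part -> 2-D array [part_placement, parent_placement] holding
    #           the combined spring (deformation) and co-occurrence scores
    # Returns (best total score, chosen placement for every part).
    def depth(p):
        d = 0
        while parents[p] is not None:
            p, d = parents[p], d + 1
        return d

    score = {p: unary[p].astype(float).copy() for p in unary}
    back = {}
    root = None
    for p in sorted(unary, key=depth, reverse=True):   # process leaves first
        if parents[p] is None:
            root = p
            continue
        table = pairwise[p] + score[p][:, None]        # part x parent placements
        score[parents[p]] += table.max(axis=0)         # pass the message upward
        back[p] = table.argmax(axis=0)                 # remember the best choice

    placement = {root: int(score[root].argmax())}      # pick the best root placement
    for p in sorted(unary, key=depth):                 # then walk back down the tree
        if parents[p] is not None:
            placement[p] = int(back[p][placement[parents[p]]])
    return float(score[root].max()), placement

# Toy example: a 3-part torso/head/arm tree with 4 candidate placements per part.
rng = np.random.default_rng(0)
parents = {"torso": None, "head": "torso", "arm": "torso"}
unary = {p: rng.normal(size=4) for p in parents}
pairwise = {p: rng.normal(size=(4, 4)) for p in parents if parents[p] is not None}
print(best_parts(parents, unary, pairwise))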

1,131 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which at first seemed an odd beast: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

30,199 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
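
To make the "increased depth and width under a constant computational budget" idea above concrete, here is a minimal sketch of one Inception-style block: parallel 1x1, 3x3, and 5x5 convolution branches plus a pooled branch, with 1x1 reductions before the larger filters, concatenated along the channel axis. PyTorch is used here only for brevity, and the branch widths are illustrative rather than GoogLeNet's exact configuration.

import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    # Parallel 1x1 / 3x3 / 5x5 / pool branches concatenated on the channel axis.
    def __init__(self, in_ch, c1, c3_reduce, c3, c5_reduce, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, c1, kernel_size=1), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_reduce, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_reduce, c3, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_reduce, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_reduce, c5, kernel_size=5, padding=2), nn.ReLU(inplace=True))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

# Example: one block on a 28x28 feature map with 192 input channels.
x = torch.randn(1, 192, 28, 28)
block = InceptionBlock(192, 64, 96, 128, 16, 32, 32)
print(block(x).shape)   # torch.Size([1, 256, 28, 28])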

29,453 citations

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

26,972 citations

28 Jul 2005
TL;DR: PfEMP1 interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. The var gene family, with about 60 members encoded per haploid genome, provides the molecular basis for antigenic variation by switching transcription among different var gene variants.

18,940 citations