Jianchao Yang

Researcher at Adobe Systems

Publications: 184
Citations: 28,091

Jianchao Yang is an academic researcher at Adobe Systems whose work centers on convolutional neural networks and sparse approximation. He has an h-index of 60 and has co-authored 183 publications receiving 24,321 citations. His previous affiliations include the University of Illinois at Urbana-Champaign.

Papers
Journal Article

Image and Video Restorations via Nonlocal Kernel Regression

TL;DR: A nonlocal kernel regression (NL-KR) model is presented for a range of image and video restoration tasks, and the proposed framework is shown to compare favorably with previous work both qualitatively and quantitatively.
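As a rough illustration of the idea (not the paper's full higher-order formulation), the zeroth-order case reduces to a similarity-weighted average over a nonlocal search window, essentially nonlocal means. The sketch below, with hypothetical parameter values, estimates one pixel this way.

```python
import numpy as np

def nlkr_pixel(img, y, x, patch=3, search=7, h=0.1):
    """Zeroth-order nonlocal kernel regression estimate for pixel (y, x):
    a weighted average of pixels in a search window, where each weight
    comes from a Gaussian kernel on the distance between patches."""
    r = patch // 2
    H, W = img.shape
    ref = img[y - r:y + r + 1, x - r:x + r + 1]
    num = den = 0.0
    for yy in range(max(r, y - search), min(H - r, y + search + 1)):
        for xx in range(max(r, x - search), min(W - r, x + search + 1)):
            cand = img[yy - r:yy + r + 1, xx - r:xx + r + 1]
            w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)  # nonlocal weight
            num += w * img[yy, xx]
            den += w
    return num / den

# Denoise the interior of a small noisy test image pixel by pixel.
noisy = np.clip(np.eye(32) + 0.1 * np.random.randn(32, 32), 0, 1)
denoised = np.array([[nlkr_pixel(noisy, y, x)
                      for x in range(1, 31)] for y in range(1, 31)])
```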
Posted Content

Self-Tuned Deep Super Resolution

TL;DR: This paper proposes a deep joint super resolution (DJSR) model that exploits both external and self-similarities for SR: the network is first pre-trained on external examples with suitable data augmentation, then fine-tuned on multi-scale self-examples drawn from each input image.
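A minimal sketch of the self-tuning half of that recipe, assuming an already pre-trained 2x SR network (the `TinySR` stand-in and all hyperparameters below are hypothetical): build multi-scale LR/HR self-example pairs from the input itself and fine-tune on them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    """Stand-in for a pre-trained 2x super-resolution network."""
    def __init__(self, factor=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, factor * factor, 3, padding=1),
            nn.PixelShuffle(factor))

    def forward(self, x):
        return self.body(x)

def self_example_pairs(img, scales=(0.6, 0.8, 1.0), factor=2):
    """Multi-scale self-examples: each downscaled version of the input
    serves as an HR target; downscaling it again gives the LR input."""
    pairs = []
    for s in scales:
        hr = F.interpolate(img, scale_factor=s, mode='bicubic',
                           align_corners=False)
        lr = F.interpolate(hr, scale_factor=1 / factor, mode='bicubic',
                           align_corners=False)
        pairs.append((lr, hr))
    return pairs

def self_tune(model, img, steps=50, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        for lo, hi in self_example_pairs(img):
            sr = model(lo)
            h = min(sr.shape[-2], hi.shape[-2])  # guard rounding mismatch
            w = min(sr.shape[-1], hi.shape[-1])
            loss = F.l1_loss(sr[..., :h, :w], hi[..., :h, :w])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

model = self_tune(TinySR(), torch.rand(1, 1, 64, 64))
```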
Proceedings Article

AtomNAS: Fine-Grained End-to-End Neural Architecture Search

TL;DR: A fine-grained search space composed of atomic blocks, a minimal search unit much smaller than those used in recent NAS algorithms, is proposed; it achieves state-of-the-art performance under several FLOPs budgets on ImageNet at negligible search cost.
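One way to picture an "atom", as a hedged sketch rather than the paper's exact formulation: each output channel of a set of parallel depthwise branches is its own search unit, gated by a BatchNorm scale that an L1 penalty during training can drive toward zero so the atom can be pruned. The class and threshold below are illustrative.

```python
import torch
import torch.nn as nn

class MixedAtoms(nn.Module):
    """Parallel depthwise convolutions with different kernel sizes; each
    output channel is one 'atom', gated by its BatchNorm scale."""
    def __init__(self, ch, kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch, bias=False),
                nn.BatchNorm2d(ch))
            for k in kernels)

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)

def prunable_atoms(block, threshold=1e-2):
    """Atoms whose BN scale fell below the threshold (after training
    with an L1 penalty on the scales) can be removed from the network."""
    scales = torch.cat([b[1].weight.abs() for b in block.branches])
    return (scales < threshold).nonzero().flatten()

block = MixedAtoms(16)
print(prunable_atoms(block))  # empty before any training, by construction
```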
Proceedings Article

Joint Visual-Textual Sentiment Analysis with Deep Neural Networks

TL;DR: This work first fine-tunes a convolutional neural network for image sentiment analysis and trains a paragraph vector model for textual sentiment analysis, then shows that joint visual-textual features outperform textual-only and visual-only sentiment analysis algorithms, achieving state-of-the-art performance.
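The summary does not spell out the fusion scheme, so the sketch below shows one common late-fusion baseline as an assumption, not necessarily the paper's exact method: concatenate the CNN image features with the paragraph-vector text features and fit a linear classifier on top. The feature arrays here are random stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def joint_sentiment_clf(img_feats, txt_feats, labels):
    """Late fusion: concatenate visual and textual features per sample
    and fit a linear sentiment classifier on the joint representation."""
    X = np.concatenate([img_feats, txt_feats], axis=1)
    return LogisticRegression(max_iter=1000).fit(X, labels)

# Stand-ins for CNN penultimate-layer features and paragraph vectors.
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(200, 512))
txt_feats = rng.normal(size=(200, 100))
labels = rng.integers(0, 2, size=200)
clf = joint_sentiment_clf(img_feats, txt_feats, labels)
```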
Proceedings Article

GPU asynchronous stochastic gradient descent to speed up neural network training

TL;DR: GPU A-SGD makes use of both model parallelism and data parallelism to speed up the training of large convolutional neural networks for computer vision, making it possible to train larger networks on larger training sets in a reasonable amount of time.
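A minimal sketch of the data-parallel half of that idea, with threads standing in for GPUs and lock-free (Hogwild-style) updates; `toy_grad` and the quadratic objective are made up for illustration. Each worker repeatedly computes a gradient on its own data shard and applies it to the shared parameters asynchronously.

```python
import threading
import numpy as np

def async_sgd(grad_fn, w, shards, lr=0.01, steps=200):
    """Asynchronous SGD: each worker repeatedly computes a gradient on
    its shard and applies it to the shared weights without locking."""
    def worker(shard):
        for _ in range(steps):
            g = grad_fn(w, shard)
            np.subtract(w, lr * g, out=w)  # lock-free in-place update
    threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w

# Toy objective: 0.5 * ||w - shard_mean||^2, whose gradient is below.
def toy_grad(w, shard):
    return w - shard.mean(axis=0)

rng = np.random.default_rng(0)
shards = [rng.normal(loc=3.0, size=(100, 4)) for _ in range(4)]
w = async_sgd(toy_grad, np.zeros(4), shards)
print(w)  # converges toward the overall mean, roughly 3.0
```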