Author

Jian Sun

Bio: Jian Sun is an academic researcher from Xi'an Jiaotong University. The author has contributed to research in topics: Object detection & Computer science. The author has an h-index of 109 and has co-authored 360 publications receiving 239,387 citations. Previous affiliations of Jian Sun include the French Institute for Research in Computer Science and Automation & Tsinghua University.


Papers
Proceedings ArticleDOI
08 Jun 2009
TL;DR: This paper is the first to consider the important problem of estimating integrals from point cloud data (PCD) under the more general non-statistical setting, and proposes two weighting schemes: the Voronoi and the Principal Eigenvector schemes.
Abstract: Integration over a domain, such as a Euclidean space or a Riemannian manifold, is a fundamental problem across scientific fields. Many times, the underlying domain is only accessible through a discrete approximation, such as a set of points sampled from it, and it is crucial to be able to estimate integrals in such discrete settings. In this paper, we study the problem of estimating the integral of a function defined over a k-submanifold embedded in d-dimensional space, from its function values at a set of sample points. Previously, such estimation was usually obtained in a statistical setting, where the input data are typically assumed to be drawn from a certain probability distribution. Our paper is the first to consider this important problem of estimating integrals from point cloud data (PCD) under the more general non-statistical setting, and to provide certain theoretical guarantees. Our approaches consider the problem from a geometric point of view. Specifically, we estimate the integral by computing a weighted sum, and propose two weighting schemes: the Voronoi and the Principal Eigenvector schemes. The running time of both methods depends mostly on the intrinsic dimension of the underlying manifold, instead of on the ambient dimension. We show that the estimation based on the Voronoi scheme converges to the true integral under the so-called (ε, δ)-sampling condition, with an explicit error bound presented. This is the first result of this sort for estimating integrals from general PCD. For the Principal Eigenvector scheme, although no theoretical guarantee is established, we present its connection to the heat diffusion operator and illustrate the justifications behind its construction. Experimental results show that both new methods consistently produce more accurate integral estimations than common statistical methods under various sampling conditions.
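
To make the weighted-sum idea concrete, here is a minimal Python sketch of a Voronoi-style weighting in the simplest planar case (k = d = 2), where each sample is weighted by the area of its Voronoi cell. Unbounded boundary cells are simply skipped here, whereas the paper's scheme handles general k-submanifolds with explicit guarantees; all names and the test function are illustrative.

# Sketch: estimate the integral of f over [0, 1]^2 from scattered samples
# by weighting each sample with the area of its Voronoi cell.
import numpy as np
from scipy.spatial import Voronoi

def shoelace_area(poly):
    """Area of a 2-D polygon given as an (n, 2) array of vertices."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def voronoi_integral(points, values):
    vor = Voronoi(points)
    total = 0.0
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:
            continue  # unbounded boundary cell; a real implementation clips it
        cell = vor.vertices[region]
        total += shoelace_area(cell) * values[i]
    return total

rng = np.random.default_rng(0)
pts = rng.random((2000, 2))
f = lambda p: np.sin(np.pi * p[:, 0]) * np.sin(np.pi * p[:, 1])
print(voronoi_integral(pts, f(pts)))  # exact value is 4 / pi^2 ≈ 0.405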

21 citations

Patent
29 May 2007
TL;DR: In this paper, a flash-based strategy is used to separate foreground information from background information within image information, where the foreground information in the flash image is illuminated by the flash to a much greater extent than the background information.
Abstract: A flash-based strategy is used to separate foreground information from background information within image information. In this strategy, a first image is taken without the use of flash. A second image is taken of the same subject matter with the use of flash. The foreground information in the flash image is illuminated by the flash to a much greater extent than the background information. Based on this property, the strategy applies processing to extract the foreground information from the background information. The strategy supplements the flash information by also taking into consideration motion information and color information.
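
A toy numpy sketch of the core cue, assuming aligned grayscale exposures: pixels brightened substantially by the flash are marked as likely foreground. The threshold value and function names are illustrative; the described strategy additionally fuses motion and color information to refine this raw mask.

# Toy sketch of the flash/no-flash cue: foreground pixels receive far more
# flash illumination than distant background pixels.
import numpy as np

def flash_foreground_mask(no_flash, flash, ratio_thresh=1.5):
    """no_flash, flash: float arrays in [0, 1], same shape (H, W), grayscale.
    Returns a boolean mask that is True where the flash brightened the
    pixel by more than ratio_thresh, i.e. likely foreground."""
    eps = 1e-6  # avoid division by zero in dark regions
    ratio = (flash + eps) / (no_flash + eps)
    return ratio > ratio_thresh

In practice the raw mask would be cleaned up (e.g. with morphological filtering) and combined with the motion and color cues the abstract mentions.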

21 citations

Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this paper, a self-guided upsample module is proposed to tackle the interpolation blur problem caused by bilinear upsampling between pyramid levels, and a pyramid distillation loss is proposed to add supervision for intermediate levels by distilling the finest flow as pseudo labels.
Abstract: We present an unsupervised learning approach for optical flow estimation by improving the upsampling and learning of pyramid network. We design a self-guided upsample module to tackle the interpolation blur problem caused by bilinear upsampling between pyramid levels. Moreover, we propose a pyramid distillation loss to add supervision for intermediate levels via distilling the finest flow as pseudo labels. By integrating these two components together, our method achieves the best performance for unsupervised optical flow learning on multiple leading benchmarks, including MPI-Sintel, KITTI 2012 and KITTI 2015. In particular, we achieve EPE=1.4 on KITTI 2012 and F1=9.38% on KITTI 2015, which outperform the previous state-of-the-art methods by 22.2% and 15.7%, respectively.
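
A hedged PyTorch sketch of the pyramid distillation idea: the finest predicted flow, detached so that no gradient flows into it, is downsampled (with its magnitude rescaled) to serve as a pseudo label for each intermediate level. The function name, the L1 loss form, and the flow layout are assumptions, not the paper's exact formulation.

# Sketch of a pyramid distillation loss: the finest predicted flow,
# downsampled and detached, supervises the coarser pyramid levels.
import torch
import torch.nn.functional as F

def pyramid_distillation_loss(flows):
    """flows: list of predicted flows [finest, ..., coarsest],
    each of shape (B, 2, H_l, W_l). Returns a scalar loss."""
    teacher = flows[0].detach()  # pseudo label; no gradient to the teacher
    loss = 0.0
    for flow in flows[1:]:
        h, w = flow.shape[-2:]
        scale = w / teacher.shape[-1]
        # Resize the teacher flow and rescale its magnitude to this level.
        target = F.interpolate(teacher, size=(h, w), mode='bilinear',
                               align_corners=False) * scale
        loss = loss + (flow - target).abs().mean()
    return loss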

21 citations

Posted Content
Dong Chen1, Gang Hua1, Fang Wen1, Jian Sun1
TL;DR: The authors propose a new cascaded Convolutional Neural Network (CNN), dubbed the Supervised Transformer Network, to address the problem of large pose variations in real-world face detection by mapping detected facial landmarks to their canonical positions to better normalize the face patterns.
Abstract: Large pose variations remain a challenge for real-world face detection. We propose a new cascaded Convolutional Neural Network, dubbed the Supervised Transformer Network, to address this challenge. The first stage is a multi-task Region Proposal Network (RPN), which simultaneously predicts candidate face regions along with associated facial landmarks. The candidate regions are then warped by mapping the detected facial landmarks to their canonical positions to better normalize the face patterns. The second stage, which is an RCNN, then verifies whether the warped candidate regions are valid faces or not. We conduct end-to-end learning of the cascaded network, including optimizing the canonical positions of the facial landmarks. This supervised learning of the transformations automatically selects the best scale to differentiate face/non-face patterns. By combining feature maps from both stages of the network, we achieve state-of-the-art detection accuracies on several public benchmarks. For real-time performance, we run the cascaded network only on regions of interest produced by a boosting cascade face detector. Our detector runs at 30 FPS on a single CPU core for a VGA-resolution image.
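
A sketch of the warping step using scikit-image: a least-squares similarity transform maps detected landmarks to canonical positions, and the candidate region is resampled accordingly. The canonical layout below is illustrative and fixed, whereas the paper learns the canonical positions end-to-end.

# Sketch of the warping step: map detected facial landmarks to canonical
# positions with a least-squares similarity transform, then warp the patch.
import numpy as np
from skimage import transform

# Illustrative canonical layout (eyes, nose, mouth corners) in a 64x64 patch.
CANONICAL = np.array([[20, 24], [44, 24], [32, 38], [23, 50], [41, 50]],
                     dtype=np.float64)

def warp_candidate(image, landmarks, out_size=(64, 64)):
    """landmarks: (5, 2) array of (x, y) points detected in `image`."""
    tform = transform.SimilarityTransform()
    tform.estimate(src=CANONICAL, dst=np.asarray(landmarks, dtype=float))
    # warp() pulls output pixels via the given map, so we pass the
    # canonical -> image transform to produce a normalized patch.
    return transform.warp(image, tform, output_shape=out_size)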

21 citations

Patent
Jian Sun1, Heung-Yeung Shum1, Hai Tao1
01 Apr 2004
TL;DR: In this paper, a method for generating high-resolution images from a generic low-resolution image is described. The method is based on extracting, at a training phase, a plurality of primal sketch priors from training data.
Abstract: Techniques are disclosed to enable generation of a high-resolution image from any generic low-resolution image. In one described implementation, a method includes extracting, at a training phase, a plurality of primal sketch priors from training data. At a synthesis phase, the plurality of primal sketch priors are utilized to improve a low-resolution image by replacing one or more low-frequency primitives extracted from the low-resolution image with corresponding ones of the plurality of primal sketch priors.
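
A toy sketch of the synthesis phase, under the assumption of paired patch dictionaries: each low-frequency primitive patch is matched to its nearest prior, and the corresponding high-frequency detail is added back. All array names are hypothetical.

# Toy sketch of the synthesis phase: for each low-frequency primitive patch,
# look up the nearest prior and add back its high-frequency counterpart.
import numpy as np

def synthesize(low_freq_patches, prior_low, prior_high):
    """low_freq_patches: (N, P) patches from the upsampled low-res image.
    prior_low / prior_high: (M, P) paired patches learned at training time.
    Returns (N, P) patches with high-frequency detail restored."""
    # Nearest-neighbor match in low-frequency space (brute force for clarity).
    d = ((low_freq_patches[:, None, :] - prior_low[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)
    return low_freq_patches + prior_high[nearest]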

21 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting residual nets won first place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
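
A minimal PyTorch sketch of the basic residual block behind this design: the stacked convolutions learn a residual F(x), and the block outputs F(x) + x through an identity shortcut (striding and projection shortcuts are omitted here for brevity).

# Sketch of a basic residual block: the stacked layers learn a residual
# F(x), and the block outputs F(x) + x via an identity shortcut.
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut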

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
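
A brief PyTorch sketch of the paper's central design choice: depth built from stacks of very small 3x3 convolutions. Two stacked 3x3 layers cover a 5x5 receptive field with fewer parameters and an extra nonlinearity. The stage and channel layout below follows the 16-layer configuration, with the classifier head omitted.

# Sketch of a VGG-style stage: stacked 3x3 convolutions followed by pooling.
import torch.nn as nn

def vgg_stage(in_ch, out_ch, num_convs):
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# e.g. the five convolutional stages of the 16-layer configuration:
features = nn.Sequential(vgg_stage(3, 64, 2), vgg_stage(64, 128, 2),
                         vgg_stage(128, 256, 3), vgg_stage(256, 512, 3),
                         vgg_stage(512, 512, 3))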

55,235 citations

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Book ChapterDOI
05 Oct 2015
TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Abstract: There is broad consensus that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
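
A minimal two-level PyTorch sketch of the architecture's shape: a contracting path for context, an expanding path for localization, and a skip connection that concatenates encoder features into the decoder. Channel widths and depth are trimmed for brevity; the actual network uses more levels.

# Minimal sketch of the U-Net idea: a contracting path, an expanding path,
# and skip connections that concatenate encoder features for localization.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(True),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)
        self.enc2 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)   # 64 skip + 64 upsampled channels
        self.head = nn.Conv2d(64, classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)                  # contracting path
        b = self.enc2(self.pool(s1))       # bottleneck
        u = self.up(b)                     # expanding path
        u = torch.cat([s1, u], dim=1)      # skip connection
        return self.head(self.dec1(u))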

49,590 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations