Author

Jian Sun

Bio: Jian Sun is an academic researcher from Xi'an Jiaotong University. The author has contributed to research in topics including object detection and computer science, has an h-index of 109, and has co-authored 360 publications receiving 239,387 citations. Previous affiliations of Jian Sun include the French Institute for Research in Computer Science and Automation (Inria) and Tsinghua University.


Papers
Journal ArticleDOI
21 Jul 2013
TL;DR: This paper presents a content-aware warping algorithm that generates rectangular images from stitched panoramic images, and demonstrates that the results are often visually plausible and the introduced distortion often unnoticeable.
Abstract: Stitched panoramic images mostly have irregular boundaries. Artists and common users generally prefer rectangular boundaries, which can be obtained through cropping or image completion techniques. In this paper, we present a content-aware warping algorithm that generates rectangular images from stitched panoramic images. Our algorithm consists of two steps. The first local step is mesh-free and preliminarily warps the image into a rectangle. With a grid mesh placed on this rectangle, the second global step optimizes the mesh to preserve shapes and straight lines. In various experiments we demonstrate that the results of our approach are often visually plausible, and the introduced distortion is often unnoticeable.
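To make the two-step pipeline concrete, here is a minimal Python sketch of a global mesh-optimization step: a grid mesh is pinned to the target rectangle by a least-squares solve, with a simple edge-preservation term standing in for the paper's shape- and line-preservation energies. The energy, weights, and function names are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def optimize_mesh(v0, width, height, lam=10.0):
    """v0: (rows, cols, 2) mesh positions from the preliminary local warp."""
    rows, cols, _ = v0.shape
    idx = lambda r, c: r * cols + c       # vertex (r, c) -> variable pair
    eqs, rhs = [], []                     # equation: [(var, coeff), ...], target

    # Shape term (simplified): keep every grid edge close to its original vector.
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, c + dc
                if r2 < rows and c2 < cols:
                    for axis in (0, 1):   # 0 = x, 1 = y
                        eqs.append([(2 * idx(r2, c2) + axis, 1.0),
                                    (2 * idx(r, c) + axis, -1.0)])
                        rhs.append(float(v0[r2, c2, axis] - v0[r, c, axis]))

    # Boundary term: pin boundary vertices onto the target rectangle.
    for r in range(rows):
        for c in range(cols):
            if c == 0:
                eqs.append([(2 * idx(r, c), lam)]); rhs.append(0.0)
            if c == cols - 1:
                eqs.append([(2 * idx(r, c), lam)]); rhs.append(lam * width)
            if r == 0:
                eqs.append([(2 * idx(r, c) + 1, lam)]); rhs.append(0.0)
            if r == rows - 1:
                eqs.append([(2 * idx(r, c) + 1, lam)]); rhs.append(lam * height)

    A = lil_matrix((len(eqs), 2 * rows * cols))
    for i, eq in enumerate(eqs):
        for var, coeff in eq:
            A[i, var] = coeff
    v = lsqr(A.tocsr(), np.asarray(rhs))[0]
    return v.reshape(rows, cols, 2)

# Toy usage: a jittered 4x5 mesh pulled to a 100x80 rectangle.
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.linspace(0, 100, 5),
                            np.linspace(0, 80, 4)), axis=-1)
print(optimize_mesh(grid + rng.normal(0, 2, grid.shape), 100, 80).shape)
```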

76 citations

Journal ArticleDOI
27 Jul 2009
TL;DR: This paper presents SkyFinder, an interactive sky-image search system that computes all sky attributes offline and provides an interactive online search engine; it also builds a sky graph based on the sky attributes, so that the user can smoothly explore and find a path within the space of skies.
Abstract: In this paper, we present SkyFinder, an interactive search system of over a half million sky images downloaded from the Internet. Using a set of automatically extracted, semantic sky attributes (category, layout, richness, horizon, etc.), the user can find a desired sky image, such as "a landscape with rich clouds at sunset" or "a whole blue sky with white clouds". The system is fully automatic and scalable. It computes all sky attributes offline, then provides an interactive online search engine. Moreover, we build a sky graph based on the sky attributes, so that the user can smoothly explore and find a path within the space of skies. We also show how our system can be used for controllable sky replacement.
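As a toy illustration of the offline/online split described above, the sketch below precomputes attribute records once and filters them at query time. The attribute names, values, and file paths are invented placeholders, not the system's actual schema.

```python
from dataclasses import dataclass

@dataclass
class SkyRecord:
    path: str
    category: str      # e.g. "blue", "sunset", "overcast"
    richness: float    # cloud richness in [0, 1]
    horizon: float     # horizon height as a fraction of image height

# Offline stage: attributes are extracted once and stored with each image.
index = [
    SkyRecord("sky_001.jpg", "sunset", 0.8, 0.30),
    SkyRecord("sky_002.jpg", "blue",   0.4, 0.50),
    SkyRecord("sky_003.jpg", "sunset", 0.9, 0.25),
]

# Online stage: a query like "rich clouds at sunset" becomes attribute predicates.
def search(index, category=None, min_richness=0.0):
    return [r for r in index
            if (category is None or r.category == category)
            and r.richness >= min_richness]

print(search(index, category="sunset", min_richness=0.7))
```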

75 citations

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A new scalable face representation is developed using both local and global features and it is shown that the inverted index based on local features provides candidate images with good recall, while the multi-reference re-ranking with global hamming signature leads to good precision.
Abstract: State-of-the-art image retrieval systems achieve scalability by using a bag-of-words representation and textual retrieval methods, but their performance degrades quickly in the face image domain, mainly because they 1) produce visual words with low discriminative power for face images, and 2) ignore the special properties of faces. The leading features for face recognition can achieve good retrieval performance, but these features are not suitable for inverted indexing as they are high-dimensional and global, and thus not scalable in either computational or storage cost. In this paper we aim to build a scalable face image retrieval system. For this purpose, we develop a new scalable face representation using both local and global features. In the indexing stage, we exploit special properties of faces to design new component-based local features, which are subsequently quantized into visual words using a novel identity-based quantization scheme. We also use a very small Hamming signature (40 bytes) to encode the discriminative global feature for each face. In the retrieval stage, candidate images are first retrieved from the inverted index of visual words. We then use a new multi-reference distance to re-rank the candidate images using the Hamming signature. On a one-million face database, we show that our local features and global Hamming signatures are complementary: the inverted index based on local features provides candidate images with good recall, while multi-reference re-ranking with the global Hamming signature leads to good precision. As a result, our system is not only scalable but also outperforms the linear-scan retrieval system using the state-of-the-art face recognition feature in terms of quality.
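The two-stage retrieve-then-re-rank design can be sketched compactly. The code below builds an inverted index over hypothetical visual-word IDs and re-ranks candidates by plain Hamming distance on a compact signature; the component-based features, the identity-based quantizer, and the paper's multi-reference distance are all replaced by simple stand-ins.

```python
from collections import defaultdict

def build_index(db):
    """db: {image_id: (set_of_word_ids, signature_bits)}."""
    inverted = defaultdict(set)
    for img, (words, _) in db.items():
        for w in words:
            inverted[w].add(img)
    return inverted

def hamming(a, b):
    # Signatures as Python ints; 40 bytes would be 320 bits.
    return bin(a ^ b).count("1")

def retrieve(query_words, query_sig, db, inverted, topk=5):
    # Stage 1 (recall): candidates sharing at least one visual word.
    candidates = set().union(*(inverted[w] for w in query_words if w in inverted))
    # Stage 2 (precision): re-rank by global-signature Hamming distance.
    return sorted(candidates, key=lambda i: hamming(db[i][1], query_sig))[:topk]

db = {"a": ({1, 2, 3}, 0b1010), "b": ({2, 9}, 0b1000), "c": ({7}, 0b0001)}
inv = build_index(db)
print(retrieve({2, 3}, 0b1011, db, inv))
```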

75 citations

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A novel adaptive bottom-up approach to parallelize the BK algorithm that is more cache-friendly within smaller subgraphs; it keeps balanced workloads among computing cores; and it causes little overhead and is adaptable to the number of available cores.
Abstract: Graph-cuts optimization is prevalent in vision and graphics problems, so it is of great practical importance to parallelize it on today's ubiquitous multi-core machines. However, the best serial algorithm, by Boykov and Kolmogorov (the BK algorithm), still has superior empirical performance, and it is non-trivial to parallelize: expensive synchronization overhead easily offsets the advantage of parallelism. In this paper, we propose a novel adaptive bottom-up approach to parallelizing the BK algorithm. We first uniformly partition the graph into a number of regularly shaped disjoint subgraphs and process them in parallel, then incrementally merge the subgraphs in an adaptive way to obtain the global optimum. The new algorithm has three benefits: 1) it is more cache-friendly within smaller subgraphs; 2) it keeps workloads balanced among computing cores; 3) it incurs little overhead and adapts to the number of available cores. Extensive experiments on common applications such as 2D/3D image segmentation and 3D surface fitting demonstrate the effectiveness of our approach.
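Structurally, the approach is a partition / solve-in-parallel / merge loop. The sketch below shows only that orchestration, with stub functions standing in for a real BK max-flow solver and for the seam repair; it also omits the flow and search-tree reuse across merges that makes the actual algorithm fast, and merges pairwise rather than adaptively.

```python
from concurrent.futures import ProcessPoolExecutor

def solve_maxflow(subgraph):
    # Placeholder: run the BK algorithm on one subgraph and return its
    # state (residual capacities, labels). Here we just echo the input.
    return subgraph

def merge(a, b):
    # Placeholder: stitch two solved regions along their shared boundary
    # and fix up violated optimality conditions near the seam.
    return a + b

def parallel_graph_cut(subgraphs, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        regions = list(pool.map(solve_maxflow, subgraphs))
    # Bottom-up merging until one globally solved region remains.
    while len(regions) > 1:
        pairs = [(regions[i], regions[i + 1])
                 for i in range(0, len(regions) - 1, 2)]
        merged = [merge(a, b) for a, b in pairs]
        if len(regions) % 2:
            merged.append(regions[-1])
        regions = merged
    return regions[0]

if __name__ == "__main__":
    print(parallel_graph_cut([[1], [2], [3], [4]]))
```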

75 citations

Journal ArticleDOI
TL;DR: This paper proposes a structure-constrained cycleGAN for unsupervised MR-to-CT synthesis that adds a structure-consistency loss based on the modality-independent neighborhood descriptor (MIND); it also uses spectral normalization to stabilize training and a self-attention module to model long-range spatial dependencies in the synthetic images.
Abstract: Synthesizing a CT image from an available MR image has recently emerged as a key goal in radiotherapy treatment planning for cancer patients. CycleGANs have achieved promising results on unsupervised MR-to-CT image synthesis; however, because they impose no direct constraints between the input and synthetic images, cycleGANs do not guarantee structural consistency between these two images. This means that anatomical geometry can be shifted in the synthetic CT images, clearly a highly undesirable outcome in the given application. In this paper, we propose a structure-constrained cycleGAN for unsupervised MR-to-CT synthesis by defining an extra structure-consistency loss based on the modality-independent neighborhood descriptor (MIND). We also utilize spectral normalization to stabilize the training process and a self-attention module to model the long-range spatial dependencies in the synthetic images. Results on unpaired brain and abdomen MR-to-CT image synthesis show that our method produces synthetic CT images that are better in both accuracy and visual quality than those of other unsupervised synthesis methods. We also show that an approximate affine pre-registration of the unpaired training data can improve synthesis results.
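A hedged PyTorch sketch of the structure-consistency idea: compute a simplified MIND-style self-similarity descriptor on both the input MR and the synthetic CT, then penalize their L1 difference. The shift set, window size, and normalization below are simplifications assumed for illustration, not the authors' exact descriptor or hyper-parameters.

```python
import torch
import torch.nn.functional as F

def mind_like(img, patch=3, eps=1e-6):
    """img: (B, 1, H, W). Returns one descriptor channel per spatial shift."""
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]     # 4-neighbourhood
    feats = []
    for dy, dx in shifts:
        shifted = torch.roll(img, shifts=(dy, dx), dims=(2, 3))
        # Patch-wise squared distance, smoothed over a small window.
        d = F.avg_pool2d((img - shifted) ** 2, patch,
                         stride=1, padding=patch // 2)
        feats.append(d)
    dist = torch.cat(feats, dim=1)                  # (B, 4, H, W)
    var = dist.mean(dim=1, keepdim=True) + eps      # local variance estimate
    return torch.exp(-dist / var)

def structure_consistency_loss(real_mr, fake_ct):
    # Descriptors are modality-insensitive, so they should agree even
    # though MR and CT intensities differ.
    return F.l1_loss(mind_like(real_mr), mind_like(fake_ct))

mr, ct = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(structure_consistency_loss(mr, ct))
```

In a full training loop this term would be added to the usual cycleGAN objective with some weight, e.g. `loss_gan + lambda_cyc * loss_cycle + lambda_s * structure_consistency_loss(mr, fake_ct)`.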

74 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: The authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; an ensemble of these residual nets won 1st place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers (8× deeper than VGG nets [40]) but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
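The core reformulation is easy to state in code: a residual block learns F(x) and outputs F(x) + x through an identity shortcut, so stacked layers fit a residual rather than an unreferenced mapping. A minimal PyTorch sketch of a basic block (the projection shortcut used when dimensions change is omitted):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut: F(x) + x

x = torch.randn(1, 64, 32, 32)
print(BasicBlock(64)(x).shape)      # torch.Size([1, 64, 32, 32])
```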

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
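The recipe, in miniature: depth is built by stacking 3x3 convolutions between max-pooling stages. The PyTorch sketch below follows that pattern; the three-stage channel plan is illustrative, not a full VGG-16/19 configuration.

```python
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))      # halve spatial resolution per stage
    return nn.Sequential(*layers)

net = nn.Sequential(
    vgg_stage(3, 64, 2),      # two 3x3 convs, then pool
    vgg_stage(64, 128, 2),
    vgg_stage(128, 256, 3),   # deeper stages stack three 3x3 convs
)
print(net(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 256, 8, 8])
```

Two stacked 3x3 convolutions see the same 5x5 receptive field as one 5x5 convolution, but with fewer parameters and an extra non-linearity, which is the argument the abstract's "very small filters" design rests on.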

55,235 citations

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Book ChapterDOI
05 Oct 2015
TL;DR: Ronneberger et al. propose a network and training strategy that relies on strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks.
Abstract: There is broad agreement that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopy stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC), we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast: segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
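The architecture reduces to a contracting path, a symmetric expanding path, and skip connections that concatenate encoder features into the decoder. Below is a one-level PyTorch sketch; it uses padded convolutions for simplicity (the original network uses unpadded ones), and the real model is several levels deep with more channels.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, classes=2):
        super().__init__()
        self.enc = double_conv(in_ch, 64)          # contracting path
        self.down = nn.MaxPool2d(2)
        self.mid = double_conv(64, 128)            # bottleneck
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec = double_conv(128, 64)            # 128 = 64 skip + 64 up
        self.head = nn.Conv2d(64, classes, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        d = self.dec(torch.cat([e, u], dim=1))     # skip connection
        return self.head(d)

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```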

49,590 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers (8x deeper than VGG nets) but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations