scispace - formally typeset
Author

Jian Sun

Bio: Jian Sun is an academic researcher from Xi'an Jiaotong University. The author has contributed to research in topics including object detection and computer science, has an h-index of 109, and has co-authored 360 publications receiving 239,387 citations. Previous affiliations of Jian Sun include the French Institute for Research in Computer Science and Automation (INRIA) and Tsinghua University.


Papers
Journal ArticleDOI
TL;DR: This work proposes a novel area-preserving parameterization method based on optimal mass transport theory and convex geometry, and formulates the optimal transport map as the unique optimum of a convex energy.
Abstract: In geometric modeling, surface parameterization plays an important role in converting triangle meshes to spline surfaces. Parameterization inevitably introduces distortions. Conventional parameterization methods emphasize angle preservation, which may induce large area distortions, cause large spline fitting errors, and trigger numerical instabilities. To overcome this difficulty, this work proposes a novel area-preserving parameterization method based on optimal mass transport theory and convex geometry. An optimal mass transport map is measure-preserving and minimizes the transportation cost. According to Brenier's theorem, for quadratic transportation costs the optimal mass transport map is the gradient of a convex function. The graph of the convex function is a convex polyhedron with prescribed normals and areas. The existence and uniqueness of such a polyhedron were proved by the Minkowski-Alexandrov theorem in convex geometry. This work gives an explicit method to construct such a polyhedron based on the variational principle, and formulates the solution to the optimal transport map as the unique optimum of a convex energy. In practice, the energy optimization can be carried out using Newton's method, where each iteration constructs a power Voronoi diagram dynamically. We tested the proposed algorithms on 3D surfaces scanned from real life. Experimental results demonstrate the efficiency and efficacy of the proposed variational approach for computing the optimal transport map.

5 citations

Proceedings ArticleDOI
18 May 2017
TL;DR: This paper designs an energy function for unsupervised PolSAR image classification by combining a supervised softmax regression model with a Markov random field (MRF) smoothness constraint, and iteratively optimizes the classifiers and class labels by alternately minimizing the energy function with respect to each.
Abstract: This paper presents a novel unsupervised image classification method for polarimetric synthetic aperture radar (PolSAR) data. The proposed method is based on a discriminative clustering framework that explicitly relies on a discriminative supervised classification technique to perform unsupervised clustering. To implement this idea, we design an energy function for unsupervised PolSAR image classification by combining a supervised softmax regression model with a Markov Random Field (MRF) smoothness constraint. In this model, both the pixel-wise class labels and classifiers are taken as unknown variables to be optimized. Starting from the initialized class labels generated by Cloude-Pottier decomposition and K-Wishart distribution hypothesis, we iteratively optimize the classifiers and class labels by alternately minimizing the energy function w.r.t. them. Finally, the optimized class labels are taken as the classification result, and the classifiers for different classes are also derived as a side effect. We apply this approach to real PolSAR benchmark data. Extensive experiments justify that our approach can effectively classify the PolSAR image in an unsupervised way, and produce higher accuracies than the compared state-of-the-art methods.

5 citations

Posted Content
TL;DR: Joint multi-dimension pruning is presented: a new perspective that prunes a network along three crucial dimensions (spatial, depth, and channel) simultaneously, with the three dimensions optimized collaboratively in a single end-to-end training.
Abstract: We present joint multi-dimension pruning (named JointPruning), a new perspective that prunes a network along three crucial dimensions: spatial, depth, and channel, simultaneously. The joint strategy enables searching for a better configuration than previous studies that focused on a single dimension, as our method is optimized collaboratively across the three dimensions in a single end-to-end training. Moreover, each dimension we consider can promote better performance by cooperating with the other two. Our method is realized by an adapted stochastic gradient estimation. Extensive experiments on the large-scale ImageNet dataset across a variety of network architectures (MobileNet V1&V2 and ResNet) demonstrate the effectiveness of our proposed method. For instance, we achieve significant margins of 2.5% and 2.6% improvement over the state-of-the-art approach on the already compact MobileNet V1&V2 under an extremely large compression ratio.

4 citations
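The stochastic gradient estimation over the three pruning ratios can be caricatured with an evolution-strategies-style estimator on a smooth proxy objective. Everything here is a hypothetical stand-in: the quadratic `proxy_score` replaces the real validation accuracy of a weight-shared network, and the step sizes are arbitrary.

```python
import random

random.seed(0)

TARGETS = (0.7, 0.8, 0.6)   # optimum of the toy objective

def proxy_score(ratios):
    # hypothetical smooth stand-in for validation accuracy as a function of the
    # (spatial, depth, channel) pruning ratios; the real method evaluates a
    # weight-shared network instead
    return -sum((r - t) ** 2 for r, t in zip(ratios, TARGETS))

def estimate_gradient(theta, sigma=0.1, samples=32):
    # evolution-strategies-style estimate with antithetic sampling:
    # grad_i ~ E[(f(theta + sigma*eps) - f(theta - sigma*eps)) * eps_i] / (2*sigma)
    grad = [0.0] * len(theta)
    for _ in range(samples):
        eps = [random.gauss(0.0, 1.0) for _ in theta]
        f_plus = proxy_score([t + sigma * e for t, e in zip(theta, eps)])
        f_minus = proxy_score([t - sigma * e for t, e in zip(theta, eps)])
        for i, e in enumerate(eps):
            grad[i] += (f_plus - f_minus) * e / (2 * sigma * samples)
    return grad

theta = [0.5, 0.5, 0.5]                  # initial pruning ratios
for _ in range(300):
    g = estimate_gradient(theta)
    # gradient ascent on the proxy score, ratios kept in a valid range
    theta = [min(1.0, max(0.1, t + 0.05 * gi)) for t, gi in zip(theta, g)]
```

The point of the sketch is only that a black-box objective over continuous pruning ratios can be optimized jointly through sampled gradient estimates, without per-dimension heuristics.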

Posted Content
TL;DR: For compact Riemannian manifolds isometrically embedded in Euclidean spaces, possibly with boundary, the eigenvalues and eigenvectors obtained by the Point Integral Method (PIM) are shown to converge to the eigenvalues and eigenfunctions of the Laplace-Beltrami operator with the Neumann boundary condition.
Abstract: The spectral structure of the Laplace-Beltrami operator (LBO) on manifolds has been widely used in many applications, including spectral clustering, dimensionality reduction, mesh smoothing, compression and editing, shape segmentation, matching, and parameterization. Typically, the underlying Riemannian manifold is unknown and is given only by a set of sample points. The spectral structure of the LBO is then estimated from some discrete Laplace operator constructed from this set of sample points. In our previous papers, we proposed the point integral method to discretize the LBO from point clouds, which is also capable of solving the eigenproblem. One fundamental issue is then the convergence of the eigensystem of the discrete Laplacian to that of the LBO. In this paper, for compact manifolds isometrically embedded in Euclidean spaces, possibly with boundary, we show that the eigenvalues and eigenvectors obtained by the point integral method converge to the eigenvalues and eigenfunctions of the LBO with the Neumann boundary condition, and in addition we give an estimate of the convergence rate. This result provides a solid mathematical foundation for the point integral method in the computation of Laplacian spectra from point clouds.

4 citations
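The kind of convergence statement made above can be checked numerically in the simplest setting, the unit circle, where the LBO eigenvalues are k². The sketch below uses a standard finite-difference Laplacian as a stand-in for the point integral discretization; it illustrates spectral convergence of a discrete Laplacian, not the PIM itself.

```python
import math

n, k = 200, 2                       # sample count and eigenfunction frequency
h = 2 * math.pi / n                 # spacing between samples on the circle
theta = [h * j for j in range(n)]
u = [math.sin(k * t) for t in theta]    # sampled LBO eigenfunction

# discrete Laplacian with periodic indexing: (2u_j - u_{j-1} - u_{j+1}) / h^2
Lu = [(2 * u[j] - u[j - 1] - u[(j + 1) % n]) / h ** 2 for j in range(n)]

# Rayleigh quotient of the sampled eigenfunction; the continuous LBO
# eigenvalue on the unit circle is k^2, and the error shrinks as O(h^2)
rayleigh = sum(a * b for a, b in zip(u, Lu)) / sum(a * a for a in u)
```

With n = 200 samples the Rayleigh quotient sits within about 10⁻³ of k² = 4, and refining the sampling tightens the estimate, mirroring the convergence-rate result in the abstract.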

Journal ArticleDOI
TL;DR: A novel adversarial domain adaptation approach defined in the spherical feature space, in which a spherical classifier for label prediction and a spherical domain discriminator for discriminating domain labels are defined, together with a robust pseudo-label loss developed to utilize pseudo-labels robustly.
Abstract: Adversarial domain adaptation has been an effective approach for learning domain-invariant features by adversarial training. In this paper, we propose a novel adversarial domain adaptation approach defined in the spherical feature space, in which we define a spherical classifier for label prediction and a spherical domain discriminator for discriminating domain labels. In the spherical feature space, we develop a robust pseudo-label loss to utilize pseudo-labels robustly, which weights the importance of the estimated labels of target data by the posterior probability of correct labeling, modeled by a Gaussian-uniform mixture model in the spherical space. Our proposed approach can be generally applied to both unsupervised and semi-supervised domain adaptation settings. In particular, to tackle the semi-supervised setting where a few labeled target data are available for training, we propose a novel reweighted adversarial training strategy for effectively reducing the intra-domain discrepancy within the target domain. We also present theoretical analysis of the proposed method based on domain adaptation theory. Extensive experiments are conducted on benchmarks for multiple applications, including object recognition, digit recognition, and face recognition. The results show that our method either surpasses or is competitive with recent methods for both unsupervised and semi-supervised domain adaptation.

4 citations
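The two spherical ingredients can be sketched directly: project features onto the unit sphere, then weight each pseudo-label by the posterior that it is correct under a Gaussian-uniform mixture on angular distance. The prototype, features, and mixture parameters (prior, sigma, uniform density) below are illustrative assumptions, not the paper's learned values.

```python
import math

def normalize(v):
    """Project a feature vector onto the unit sphere."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def angular_distance(u, v):
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(dot)

def correctness_weight(dist, prior=0.7, sigma=0.3, uniform_density=1.0 / math.pi):
    # posterior that the pseudo-label is correct, under a mixture of a Gaussian
    # (correct labels concentrate at small angular distance to the class
    # prototype) and a uniform component (wrong labels); parameters illustrative
    gauss = math.exp(-dist ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    return prior * gauss / (prior * gauss + (1 - prior) * uniform_density)

prototype = normalize([1.0, 1.0, 0.0])     # hypothetical class prototype
near = normalize([0.9, 1.1, 0.1])          # feature close to the prototype
far = normalize([-1.0, 0.2, 0.5])          # feature far from the prototype
w_near = correctness_weight(angular_distance(prototype, near))
w_far = correctness_weight(angular_distance(prototype, far))
```

A feature near its class prototype gets a weight close to 1 and a distant feature a weight close to 0, so unreliable pseudo-labels contribute little to the loss.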


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers (8× deeper than VGG nets [40]) but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

123,388 citations
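The core idea, learning a residual F(x) and outputting x + F(x) rather than an unreferenced mapping, can be caricatured with scalars: with small weights, a deep plain stack drives the signal toward zero while the residual stack stays near the identity. This is a toy illustration of signal propagation, not the paper's architecture:

```python
def plain_stack(x, weight=0.001, depth=50):
    # unreferenced mapping y = F(x): the signal is rescaled at every layer
    for _ in range(depth):
        x = weight * x
    return x

def residual_stack(x, weight=0.001, depth=50):
    # residual mapping y = x + F(x): the identity shortcut carries the signal
    for _ in range(depth):
        x = x + weight * x
    return x

x0 = 1.0
plain = plain_stack(x0)       # collapses toward zero
res = residual_stack(x0)      # stays close to the input
```

Because the shortcut carries the input forward unchanged, each layer only needs to learn a small correction, which is one intuition for why very deep residual nets remain optimizable.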

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations
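The paper's case for small filters is simple arithmetic: three stacked 3x3 convolutions have the same 7x7 effective receptive field as a single 7x7 convolution, but fewer parameters (27C² vs. 49C² for C input and output channels). A quick check, with the channel count C chosen arbitrarily:

```python
def receptive_field(num_3x3_layers):
    """Effective receptive field of a stack of 3x3 convs with stride 1."""
    rf = 1
    for _ in range(num_3x3_layers):
        rf += 2            # each 3x3 layer extends the field by 2
    return rf

def conv_params(kernel, channels):
    """Weights of one conv layer with equal in/out channels, ignoring biases."""
    return kernel * kernel * channels * channels

C = 64
stacked = 3 * conv_params(3, C)   # three 3x3 layers: 27 * C^2 weights
single = conv_params(7, C)        # one 7x7 layer:    49 * C^2 weights
rf = receptive_field(3)           # same 7x7 receptive field as a single 7x7
```

The stack also interleaves three non-linearities instead of one, which the paper credits for making the decision function more discriminative.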

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Book ChapterDOI
05 Oct 2015
TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Abstract: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .

49,590 citations
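The contracting/expanding architecture fixes the output size by simple arithmetic: each unpadded 3x3 convolution shrinks the map by 2 pixels, each 2x2 max pooling halves it, and each up-convolution doubles it. Tracking the sizes reproduces the paper's 572x572 input to 388x388 output:

```python
def two_valid_convs(size):
    return size - 4        # two unpadded 3x3 convolutions: -2 pixels each

size = 572                 # input tile size from the paper
skips = []                 # feature-map sizes saved for the skip connections
for _ in range(4):         # contracting path
    size = two_valid_convs(size)
    skips.append(size)
    size //= 2             # 2x2 max pooling
size = two_valid_convs(size)       # bottleneck
for _ in range(4):         # expanding path
    size *= 2              # 2x2 up-convolution
    size = two_valid_convs(size)   # convs after concatenating the cropped skip
# size is now 388, matching the paper's 388x388 output segmentation map
```

The stored skip sizes (568, 280, 136, 64) are larger than the corresponding expanding-path maps, which is why the architecture crops the contracting-path features before concatenation.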

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations