Author

Tieniu Tan

Bio: Tieniu Tan is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in topics: Feature extraction & Iris recognition. The author has an h-index of 96 and has co-authored 704 publications receiving 39,487 citations. Previous affiliations of Tieniu Tan include the Association for Computing Machinery & the Center for Excellence in Education.


Papers
Proceedings ArticleDOI
31 Oct 2008
TL;DR: A key-binding scheme based on iris data is proposed, in which a reliable region is selected to reduce intra-class variation, and an error-control technique that combines RS and convolutional codes is used to increase the key length.
Abstract: The combination of biometrics and cryptography is a promising information security technique that offers an efficient way to protect the biometric template, as well as to facilitate user authentication and key management. We propose a key-binding scheme based on iris data, in which a reliable region is selected to reduce intra-class variation, and an error-control technique that combines Reed-Solomon (RS) and convolutional codes is used to increase the key length. The scheme does not reveal any significant information about the key or the original iris template, and the system achieves a false rejection rate (FRR) of less than 0.5% with a key length of 218 bits.
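As a hedged illustration of the key-binding idea (not the paper's exact construction), the sketch below shows a fuzzy-commitment-style binding in Python: the key is encoded with a toy repetition code standing in for the RS-plus-convolutional cascade, XORed with the iris code, and released only when a fresh sample is close enough for decoding to succeed. All names, sizes and the code choice are illustrative assumptions.

```python
import hashlib
import numpy as np

def repetition_encode(key_bits: np.ndarray, n: int = 3) -> np.ndarray:
    """Toy stand-in for the paper's RS + convolutional coding: repeat each bit n times."""
    return np.repeat(key_bits, n)

def repetition_decode(codeword: np.ndarray, n: int = 3) -> np.ndarray:
    """Majority-vote decoding of the repetition code."""
    return (codeword.reshape(-1, n).sum(axis=1) > n // 2).astype(np.uint8)

def bind_key(iris_code: np.ndarray, key_bits: np.ndarray):
    """Lock the key: store only codeword XOR iris code plus a hash of the key."""
    codeword = repetition_encode(key_bits)
    helper_data = codeword ^ iris_code[:codeword.size]
    return helper_data, hashlib.sha256(key_bits.tobytes()).hexdigest()

def release_key(helper_data: np.ndarray, key_hash: str, iris_probe: np.ndarray):
    """Unlock with a fresh iris sample; reject if residual bit errors corrupt the key."""
    codeword = helper_data ^ iris_probe[:helper_data.size]
    key_bits = repetition_decode(codeword)
    if hashlib.sha256(key_bits.tobytes()).hexdigest() == key_hash:
        return key_bits
    return None

# Usage with synthetic data: a 128-bit key bound to a 384-bit iris code.
rng = np.random.default_rng(0)
iris_code = rng.integers(0, 2, 384, dtype=np.uint8)
key = rng.integers(0, 2, 128, dtype=np.uint8)
helper, digest = bind_key(iris_code, key)
noisy = iris_code.copy()
noisy[::50] ^= 1                      # a few intra-class bit flips
assert release_key(helper, digest, noisy) is not None
```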
Journal ArticleDOI
01 Jan 2020
TL;DR: In this paper, two 325 MHz Nb/Cu QWR cavities have been fabricated at IMP to demonstrate whether the niobium-coated copper cavity technique can meet the requirements of CiADS.
Abstract: The possibility of adopting niobium thin-film coated copper (Nb/Cu) quarter-wave resonators (QWRs) in the low-energy section of the CiADS project [1] is being evaluated. Compared with bulk niobium cavities, Nb/Cu cavities feature much better thermal and mechanical stability at 4.5 K. Two 325 MHz Nb/Cu QWR cavities have been fabricated at IMP to demonstrate whether the niobium-coated copper cavity technique can meet the requirements of CiADS. The cavities are coated with a biased DC diode sputtering technique. This paper covers the resulting film characteristics, vertical test results alongside the evolution of the sputtering process, and improvements made to mitigate the issues encountered.
Patent
13 Sep 2018
TL;DR: Zhang et al. as mentioned in this paper proposed an image tampering forensics method and device that marks observation clues of an image to be detected, constructs a 3D morphable model of the object class to which a target object belongs, and estimates the three-dimensional normal vector of a support plane from the observation clues.
Abstract: An image tampering forensics method and device. The method comprises: marking observation clues of an image to be detected (S101); constructing a three-dimensional morphable model of a class object to which a target object belongs (S102); estimating a three-dimensional normal vector of a support plane according to the observation clues (S103); estimating a three-dimensional posture of the target object according to the observation clues and the three-dimensional morphable model to further obtain a normal vector of a plane at which one side, contacting the support plane, of the target object is located (S104); calculating the parallelism between the target object and the support plane, and/or among a plurality of target objects, and determining, according to the parallelism, whether the image to be detected is a tampered image (S105). Compared with the prior art, the image tampering forensics method and device can determine, according to the parallelism between the target object and the support plane, and/or among a plurality of target objects, whether the image to be detected is a tampered image, and can thus effectively determine whether a low-quality image is a tampered image.
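A minimal numeric sketch of the parallelism cue described above, assuming the two plane normals have already been estimated; the angular threshold is a hypothetical value chosen for illustration, not one taken from the patent.

```python
import numpy as np

def parallelism_deg(n1: np.ndarray, n2: np.ndarray) -> float:
    """Angle in degrees between two plane normals; 0 means perfectly parallel."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))

def looks_tampered(object_plane_normal, support_plane_normal, threshold_deg=10.0):
    """Flag the image if the object's contact plane deviates too far from the support plane."""
    return parallelism_deg(object_plane_normal, support_plane_normal) > threshold_deg

# Example: a box resting on a nearly horizontal table should pass this check.
print(looks_tampered(np.array([0.02, 0.01, 1.0]), np.array([0.0, 0.0, 1.0])))
```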
Journal ArticleDOI
TL;DR: In this article, the authors divide human image generation techniques into three paradigms, i.e., data-driven methods, knowledge-guided methods and hybrid methods, and summarize the advantages and characteristics of different methods in terms of model architectures and input/output requirements.
Abstract: Image and video synthesis has become a flourishing topic in the computer vision and machine learning communities alongside the development of deep generative models, owing to its great academic and application value. Many researchers have been devoted to synthesizing high-fidelity human images, as humans are one of the most commonly seen object categories in daily life, and a large number of studies have been performed based on various deep generative models, task settings and applications. It is therefore necessary to give a comprehensive overview of these methods for human image generation. In this paper, we divide human image generation techniques into three paradigms, i.e., data-driven methods, knowledge-guided methods and hybrid methods. For each route, the most representative models and their variants are presented, and the advantages and characteristics of different methods are summarized in terms of model architectures and input/output requirements. In addition, the main public human image datasets and evaluation metrics in the literature are summarized. Furthermore, owing to their wide application potential, two typical downstream usages of synthesized human images are covered, i.e., data augmentation for person recognition tasks and virtual try-on for fashion customers. Finally, we discuss the challenges and potential directions of human image generation to shed light on future research.
Proceedings ArticleDOI
01 Aug 2018
TL;DR: The experiments demonstrate that the proposed inception-block donut convolutional network is orthogonal to, and can further improve the performance of, most off-the-shelf bottom-up based methods.
Abstract: One recent trend in network architecture design confirms that the inception-block convolutional group is efficient, since it can aggregate spatial context information in lower dimensions without causing a significant loss in representative capability. We believe that not only the strong correlation between adjacent cells but also multi-scale feature extraction plays a vital role in this module. In this paper, we extend the benefits of the block to a top-down donut convolutional network for the semantic segmentation task. Our network automatically learns rich convolution kernels to capture more structural priors. The inception-block design overcomes the limitations of larger kernel sizes and adaptively captures contexts at different object scales without chained sampling. Our experiments demonstrate that the proposed inception-block donut convolutional network is orthogonal to, and can further improve the performance of, most off-the-shelf bottom-up based methods.
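To make the inception-block idea concrete, here is a generic multi-branch block in PyTorch (assumed framework; this is not the paper's donut network): 1x1 convolutions reduce dimensionality before the larger kernels, and the parallel branches aggregate context at several scales before being concatenated.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Generic inception-style block: parallel branches with different kernel
    sizes, each preceded by a 1x1 convolution for dimensionality reduction."""
    def __init__(self, in_ch: int, branch_ch: int = 32):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1),
        )
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=5, padding=2),
        )
        self.pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the multi-scale responses along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)

# Usage: a 64-channel feature map becomes a 128-channel multi-scale map.
print(InceptionBlock(64)(torch.rand(1, 64, 32, 32)).shape)  # torch.Size([1, 128, 32, 32])
```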

Cited by
Proceedings ArticleDOI
23 Jun 2014
TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.

21,729 citations
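A simplified sketch of the R-CNN-style pipeline described above, written in Python with PyTorch/torchvision as assumed tooling: each bottom-up proposal is cropped, warped to a fixed size, passed through a CNN backbone, and scored by a per-class classifier. The backbone, class count and proposal source are illustrative stand-ins, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torchvision import models

def rcnn_style_scores(image: torch.Tensor, proposals, backbone: nn.Module, classifier: nn.Module):
    """Crop each bottom-up proposal, warp it to a fixed size, extract CNN
    features, and classify every region independently (R-CNN-style scoring)."""
    scores = []
    for (x1, y1, x2, y2) in proposals:
        crop = image[:, y1:y2, x1:x2]                  # (3, h, w) region
        warped = TF.resize(crop, [224, 224]).unsqueeze(0)
        feats = backbone(warped).flatten(1)            # (1, feature_dim)
        scores.append(classifier(feats))
    return torch.cat(scores)                           # (num_proposals, num_classes)

# Illustrative wiring: a small ResNet stands in for the pre-trained backbone;
# proposals would normally come from a bottom-up method such as selective search.
backbone = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1]).eval()
classifier = nn.Linear(512, 21)                        # e.g. 20 VOC classes + background
image = torch.rand(3, 480, 640)
proposals = [(10, 20, 210, 220), (100, 50, 300, 400)]
with torch.no_grad():
    print(rcnn_style_scores(image, proposals, backbone, classifier).shape)  # torch.Size([2, 21])
```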

28 Jul 2005
TL;DR: PfEMP1 interacts with one or more receptors on infected erythrocytes, dendritic cells and the placenta, playing a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells and the placenta, and plays a key role in adhesion and immune evasion. The var gene family encodes about 60 members per haploid genome, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal ArticleDOI
TL;DR: A generalized gray-scale and rotation invariant operator presentation is derived that allows for detecting the "uniform" patterns for any quantization of the angular space and for any spatial resolution, along with a method for combining multiple operators for multiresolution analysis.
Abstract: Presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed "uniform," are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the "uniform" patterns for any quantization of the angular space and for any spatial resolution, and present a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Experimental results demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns.

14,245 citations
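As a concrete illustration of the rotation-invariant "uniform" LBP operator described above, here is a minimal Python sketch that computes the LBP code of a patch's center pixel with nearest-neighbor circular sampling; the bilinear interpolation and multiresolution histogramming of the paper are omitted.

```python
import numpy as np

def lbp_riu2(patch: np.ndarray, P: int = 8, R: int = 1) -> int:
    """Rotation-invariant 'uniform' LBP code (LBP^riu2 with P neighbors, radius R)
    of the center pixel of a small gray-scale patch."""
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    gc = patch[cy, cx]
    # Sample P neighbors on a circle of radius R (nearest-neighbor sampling).
    angles = 2 * np.pi * np.arange(P) / P
    ys = np.rint(cy - R * np.sin(angles)).astype(int)
    xs = np.rint(cx + R * np.cos(angles)).astype(int)
    bits = (patch[ys, xs] >= gc).astype(int)
    # 'Uniform' patterns have at most two 0/1 transitions around the circle;
    # all non-uniform patterns collapse into a single label P + 1.
    transitions = np.count_nonzero(bits != np.roll(bits, 1))
    return int(bits.sum()) if transitions <= 2 else P + 1

# The occurrence histogram of these codes over an image is the texture feature.
patch = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
print(lbp_riu2(patch))  # 4
```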

Posted Content
TL;DR: This paper proposes a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%.
Abstract: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012---achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also compare R-CNN to OverFeat, a recently proposed sliding-window detector based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset. Source code for the complete system is available at this http URL.

13,081 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered in this book, along with neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations