Author

Taemin Kim

Bio: Taemin Kim is an academic researcher from Ames Research Center. The author has contributed to research in topics: Terrain & Histogram. The author has an h-index of 10 and has co-authored 42 publications receiving 750 citations. Previous affiliations of Taemin Kim include Oak Ridge Associated Universities & KAIST.

Papers
Proceedings ArticleDOI
01 Jan 2009
TL;DR: RANSAC (Random Sample Consensus) has been popular for regression problems with samples contaminated by outliers, but there have been few surveys or performance analyses of its many variants.
Abstract: RANSAC (Random Sample Consensus) has been popular for regression problems with samples contaminated by outliers. It has been a milestone for much research on robust estimators, but there have been few surveys or performance analyses of these methods. This paper categorizes them by their objectives: being accurate, being fast, and being robust. Performance evaluation was performed on line fitting with various data distributions, and planar homography estimation was used to assess performance on real data.
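To make the baseline these variants build on concrete, here is a minimal sketch of vanilla RANSAC applied to the paper's line-fitting setting; the function name, parameters, and defaults are illustrative assumptions, not the authors' code.

```python
import numpy as np

def ransac_line(points, n_iters=100, threshold=0.1, seed=0):
    """Vanilla RANSAC for 2-D line fitting: repeatedly fit a line through a
    random pair of points and keep the hypothesis with the largest consensus."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        normal = np.array([-(q - p)[1], (q - p)[0]])   # perpendicular to the segment
        if np.linalg.norm(normal) == 0:
            continue                                   # degenerate sample, redraw
        normal = normal / np.linalg.norm(normal)
        dist = np.abs((points - p) @ normal)           # point-to-line distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers  # a final least-squares refit on the inliers is customary
```

The paper's accurate/fast/robust taxonomy describes different modifications to this basic hypothesize-and-verify loop.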

449 citations

Journal ArticleDOI
03 May 2010-PLOS ONE
TL;DR: A microfluidics-based multiplexed IHC (MMIHC) platform is reported that significantly improves IHC performance by reducing time and tissue consumption and by improving quantification, consistency, sensitivity, specificity, and cost-effectiveness.
Abstract: Background Biomarkers play a key role in risk assessment, assessing treatment response, and detecting recurrence, and the investigation of multiple biomarkers may also prove useful in accurate prediction and prognosis of cancers. Immunohistochemistry (IHC) has been a major diagnostic tool to identify therapeutic biomarkers and to subclassify breast cancer patients. However, there is no suitable IHC platform for multiplex assays toward personalized cancer therapy. Here, we report a microfluidics-based multiplexed IHC (MMIHC) platform that significantly improves IHC performance by reducing time and tissue consumption and improving quantification, consistency, sensitivity, specificity and cost-effectiveness. Methodology/Principal Findings By creating a simple and robust interface between the device and human breast tissue samples, we not only applied conventional thin-section tissues on-chip without any additional modification process, but also attained perfect fluid control for various solutions, without any leakage, bubble formation, or cross-contamination. Four biomarkers, estrogen receptor (ER), human epidermal growth factor receptor 2 (HER2), progesterone receptor (PR) and Ki-67, were examined simultaneously on breast cancer cells and human breast cancer tissues. The MMIHC method improved immunoreaction, reducing time and reagent consumption. Moreover, it showed the feasibility of semi-quantitative analysis by comparison with Western blot. A concordance study proved strong consensus between conventional whole-section analysis and MMIHC (n = 105, lowest Kendall's coefficient of concordance, 0.90). To demonstrate the suitability of MMIHC for scarce samples, it was also applied successfully to tissues from needle biopsies. Conclusions/Significance The microfluidic system was, for the first time, successfully applied to human clinical tissue samples, and histopathological diagnosis was realized for breast cancers. Our results showing substantial agreement indicate that several cancer-related proteins can be simultaneously investigated on a single tumor section, giving clear advantages and technical advances over the standard immunohistochemical method. This novel concept will enable histopathological diagnosis using numerous specific biomarkers at a time, even for small-sized specimens, thus facilitating the individualization of cancer therapy.
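Since the concordance study summarizes agreement with Kendall's coefficient of concordance (W), a short sketch of how that statistic is computed may help; this is the textbook formula without a tie correction, and the function name is ours.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores):
    """Kendall's W for an (m raters x n items) score matrix: 1.0 means the
    raters rank the items identically, 0.0 means no agreement at all."""
    ranks = np.apply_along_axis(rankdata, 1, scores)   # rank items within each rater
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                      # total rank received by each item
    s = ((rank_sums - m * (n + 1) / 2.0) ** 2).sum()   # deviation from the mean rank sum
    return 12.0 * s / (m ** 2 * (n ** 3 - n))
```

A value of 0.90, the lowest reported here, indicates very strong agreement between MMIHC and whole-section scoring.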

79 citations

Proceedings ArticleDOI
Taemin Kim, Hyun S. Yang
08 Oct 2006
TL;DR: A novel method that extends grayscale histogram equalization (GHE) to multi-dimensional color images and can generate a uniform histogram, thus minimizing the disparity between the histogram and the uniform distribution.
Abstract: In this paper, a novel method that extends grayscale histogram equalization (GHE) to color images in multiple dimensions is proposed. Unlike most current techniques, the proposed method can generate a uniform histogram, thus minimizing the disparity between the histogram and the uniform distribution. A histogram of any dimension is regarded as a mixture of isotropic Gaussians. This method is a natural extension of GHE to multiple dimensions. An efficient algorithm for the histogram equalization is provided. The results show that this approach is valid, and a psycho-visual study of the target distribution would improve the practical use of the proposed method.
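For reference, the one-dimensional baseline being extended here is classic grayscale histogram equalization, which remaps intensities through the normalized cumulative histogram; a minimal sketch, assuming an 8-bit image (our own naming):

```python
import numpy as np

def equalize_grayscale(img):
    """Grayscale histogram equalization: map each intensity through the
    normalized cumulative histogram so the output histogram is roughly uniform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)         # per-intensity lookup table
    return lut[img]
```

The paper's extension replaces this scalar mapping with one derived from modeling the multi-dimensional color histogram as a mixture of isotropic Gaussians.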

37 citations

Journal ArticleDOI
Minseok Kim, Seyong Kwon, Taemin Kim, Eun Sook Lee, Je-Kyun Park
TL;DR: The applicability of the microfluidic IHC/ICC platform for quantitative proteomic profiling to clinical samples and to human breast cancer tissue is demonstrated, indicating that this platform is useful for accurate histopathological diagnoses using numerous specific biomarkers simultaneously.

34 citations

Proceedings ArticleDOI
10 Oct 2009
TL;DR: Adaptive RANSAC, based on Maximum Likelihood Sample Consensus (MLESAC), is proposed to solve this problem; it estimates the ratio of outliers through expectation maximization (EM), which yields the necessary number of iterations for each frame.
Abstract: The core step of video stabilization is to estimate global motion from locally extracted motion clues. Outlier motion clues generated by moving objects in the image sequence cause incorrect global motion estimates. Random Sample Consensus (RANSAC) is popularly used to solve such outlier problems, but its parameters need to be tuned to the given motion clues, so it sometimes fails when the proportion of outlier clues increases. Adaptive RANSAC, based on Maximum Likelihood Sample Consensus (MLESAC), is proposed to solve this problem. It estimates the ratio of outliers through expectation maximization (EM), which yields the necessary number of iterations for each frame. The adaptation sustains high accuracy under a varying outlier ratio and is faster than RANSAC when fewer iterations suffice. The performance of adaptive RANSAC is verified in experiments using four image sequences.
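The link between the estimated outlier ratio and the per-frame iteration budget is the standard RANSAC stopping criterion; a sketch of that arithmetic (names and defaults are illustrative):

```python
import math

def required_iterations(outlier_ratio, sample_size, confidence=0.99):
    """Number of random samples needed so that, with probability `confidence`,
    at least one sample contains only inliers. An adaptive scheme re-evaluates
    this whenever the outlier-ratio estimate (e.g. from EM) is updated."""
    inlier_sample_prob = (1.0 - outlier_ratio) ** sample_size
    if inlier_sample_prob <= 0.0:
        return math.inf                 # hopeless: every sample has an outlier
    if inlier_sample_prob >= 1.0:
        return 1                        # no outliers at all
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - inlier_sample_prob))

# e.g. 2-point motion samples at a 50% outlier ratio: required_iterations(0.5, 2) -> 17
```

When the estimated outlier ratio is low, this number drops sharply, which is where the adaptive method gains its speed over a fixed iteration count.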

28 citations


Cited by
Proceedings Article
01 Jan 2007
TL;DR: In this paper, the Gaussian Process Latent Variable Model (GPLVM) is used to reconstruct a topological connectivity graph from a signal strength sequence, which can be used to perform efficient WiFi SLAM.
Abstract: WiFi localization, the task of determining the physical location of a mobile device from wireless signal strengths, has been shown to be an accurate method of indoor and outdoor localization and a powerful building block for location-aware applications. However, most localization techniques require a training set of signal strength readings labeled against a ground truth location map, which is prohibitive to collect and maintain as maps grow large. In this paper we propose a novel technique for solving the WiFi SLAM problem using the Gaussian Process Latent Variable Model (GPLVM) to determine the latent-space locations of unlabeled signal strength data. We show how GPLVM, in combination with an appropriate motion dynamics model, can be used to reconstruct a topological connectivity graph from a signal strength sequence which, in combination with the learned Gaussian Process signal strength model, can be used to perform efficient localization.
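The Gaussian Process signal-strength model mentioned at the end is, in essence, GP regression from 2-D location to received signal strength; a minimal sketch of that building block with a squared-exponential kernel (our own naming and hyperparameters; GPLVM itself, which additionally infers the latent locations, is beyond a few lines):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=5.0, variance=1.0):
    """Squared-exponential kernel between two sets of 2-D locations."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return variance * np.exp(-0.5 * sq_dists / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise_var=1.0):
    """GP posterior mean: predicted signal strength at unvisited locations."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)
```

Localization then amounts to finding the location whose predicted signal-strength vector best matches a new observation.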

488 citations

Journal ArticleDOI
TL;DR: Modifications to the automated, open source NASA Ames Stereo Pipeline to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, and point cloud co-registration.
Abstract: We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ∼0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ∼2 m with direct geolocation accuracy of […]

470 citations

Proceedings ArticleDOI
01 Jan 2018
TL;DR: In this paper, a multi-layer perceptron operating on pixel coordinates rather than directly on the image is proposed to learn to find good correspondences for wide-baseline stereo.
Abstract: We develop a deep architecture to learn to find good correspondences for wide-baseline stereo. Given a set of putative sparse matches and the camera intrinsics, we train our network in an end-to-end fashion to label the correspondences as inliers or outliers, while simultaneously using them to recover the relative pose, as encoded by the essential matrix. Our architecture is based on a multi-layer perceptron operating on pixel coordinates rather than directly on the image, and is thus simple and small. We introduce a novel normalization technique, called Context Normalization, which allows us to process each data point separately while embedding global information in it, and also makes the network invariant to the order of the correspondences. Our experiments on multiple challenging datasets demonstrate that our method is able to drastically improve the state of the art with little training data.
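The Context Normalization operation described above is simple enough to sketch; per the abstract, each feature channel is normalized by statistics taken across the whole set of correspondences for one image pair (the function name and epsilon are our assumptions):

```python
import numpy as np

def context_normalize(features, eps=1e-8):
    """Normalize each channel across the set of putative correspondences.
    Each point is still processed separately, but the shared mean/variance
    embed global context, and the output is invariant to point ordering."""
    mean = features.mean(axis=0, keepdims=True)   # statistics over the whole set
    std = features.std(axis=0, keepdims=True)
    return (features - mean) / (std + eps)

# features: (num_correspondences, channels) activations for one image pair
```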

456 citations

Journal ArticleDOI
TL;DR: In this letter, a new satellite image contrast enhancement technique based on the discrete wavelet transform (DWT) and singular value decomposition is proposed; it reconstructs the enhanced image by applying the inverse DWT.
Abstract: In this letter, a new satellite image contrast enhancement technique based on the discrete wavelet transform (DWT) and singular value decomposition has been proposed. The technique decomposes the input image into the four frequency subbands by using DWT and estimates the singular value matrix of the low-low subband image, and, then, it reconstructs the enhanced image by applying inverse DWT. The technique is compared with conventional image equalization techniques such as standard general histogram equalization and local histogram equalization, as well as state-of-the-art techniques such as brightness preserving dynamic histogram equalization and singular value equalization. The experimental results show the superiority of the proposed method over conventional and state-of-the-art techniques.
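A minimal sketch of the general DWT-plus-SVD recipe the abstract describes, using PyWavelets; the fixed `gain` here is a stand-in for the paper's singular-value correction factor, which is estimated from the image rather than hard-coded:

```python
import numpy as np
import pywt

def dwt_svd_enhance(img, gain=1.5, wavelet="haar"):
    """Decompose into subbands, rescale the singular values of the low-low
    (LL) subband to stretch contrast, then reconstruct with the inverse DWT."""
    ll, (lh, hl, hh) = pywt.dwt2(img.astype(np.float64), wavelet)
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    ll_enhanced = u @ np.diag(gain * s) @ vt           # boost LL singular values
    out = pywt.idwt2((ll_enhanced, (lh, hl, hh)), wavelet)
    return np.clip(out, 0, 255).astype(np.uint8)
```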

310 citations

Journal ArticleDOI
TL;DR: The theory of robust regression (RR) is developed and an effective convex approach that uses recent advances in rank minimization is presented; the framework applies to a variety of problems in computer vision, including robust linear discriminant analysis, regression with missing data, and multi-label classification.
Abstract: Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features ($\mathbf{X}$) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables $\mathbf{X}$ to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR.
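The convex handle on rank minimization that such approaches rely on is the nuclear norm, whose proximal operator is singular-value thresholding; a sketch of that generic primitive (not the paper's full algorithm):

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Proximal operator of the nuclear norm: shrink every singular value of M
    by tau and discard the rest, yielding a low-rank estimate of the data."""
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt
```

Iterating this shrinkage inside a convex splitting scheme is how the clean low-rank component of the training data can be separated from gross outliers.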

268 citations