Author

Rae-Hong Park

Bio: Rae-Hong Park is an academic researcher from Sogang University. The author has contributed to research in topics including Motion estimation and Motion compensation. The author has an h-index of 37, has co-authored 370 publications, and has received 5,856 citations. Previous affiliations of Rae-Hong Park include Seoul National University and the University of Maryland, College Park.


Papers
Journal ArticleDOI
TL;DR: Computer simulation results reveal that most algorithms perform consistently well on images with a bimodal histogram; however, all algorithms break down for a certain ratio of the populations of object and background pixels in an image, which in practice may arise quite frequently.
Abstract: A comparative performance study of five global thresholding algorithms for image segmentation was conducted. An image database with a wide variety of histogram distributions was constructed. The histogram distribution was changed by varying the object size and the mean difference between object and background. The performance of the five algorithms was evaluated using criterion functions such as the probability of error and the shape and uniformity measures. Attempts have also been made to evaluate the performance of each algorithm on noisy images. Computer simulation results reveal that most algorithms perform consistently well on images with a bimodal histogram. However, all algorithms break down for a certain ratio of the populations of object and background pixels in an image, which in practice may arise quite frequently. Our experiments also show that the performance of the thresholding algorithms discussed in this paper is data-dependent. Some analysis is presented for each of the five algorithms based on the performance measures.
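As an illustration of the probability-of-error criterion used in such comparisons, the sketch below thresholds a synthetic bimodal image with known ground truth and counts misclassified pixels. It is not code from the paper; the names and parameter values are illustrative only.

import numpy as np

def misclassification_error(image, ground_truth, threshold):
    # Probability of error: fraction of pixels assigned to the wrong class by a
    # global threshold, given the known object/background ground truth.
    segmented = image > threshold
    return float(np.mean(segmented != ground_truth))

# Synthetic bimodal test image: dark background (mean 80), bright square object
# (mean 150), additive Gaussian noise. Object size and mean difference are the
# two factors varied in the study; the values here are arbitrary.
rng = np.random.default_rng(0)
ground_truth = np.zeros((128, 128), dtype=bool)
ground_truth[32:96, 32:96] = True
image = np.where(ground_truth, 150.0, 80.0) + rng.normal(0.0, 20.0, (128, 128))

for t in (100, 115, 130):
    print(t, misclassification_error(image, ground_truth, t))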

556 citations

Journal ArticleDOI
TL;DR: This work analyzes the conventional Hausdorff distance (HD) measures and proposes two robust HD measures, based on M-estimation and least trimmed squares, which are more efficient than the conventional HD measures.
Abstract: The Hausdorff distance (HD) is one of the commonly used measures for object matching. This work analyzes the conventional HD measures and proposes two robust HD measures, based on M-estimation and least trimmed squares (LTS), which are more efficient than the conventional HD measures. The matching performance of the conventional and proposed HD measures is compared by computer simulation on synthetic and real images.
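A minimal sketch of the idea behind the robust measures: the conventional directed HD takes a maximum of the nearest-neighbour distances, so one outlier dominates it, while a trimmed (LTS-style) variant averages only the smallest ranked distances. This is an illustrative simplification under assumed definitions, not the paper's exact M-HD or LTS-HD formulations.

import numpy as np

def nearest_neighbour_distances(A, B):
    # For every point in set A, the Euclidean distance to its nearest point in B.
    diffs = A[:, None, :] - B[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

def directed_hausdorff(A, B):
    # Conventional directed HD: the worst-case nearest-neighbour distance.
    return nearest_neighbour_distances(A, B).max()

def trimmed_hausdorff(A, B, keep=0.8):
    # LTS-style robust variant: average only the smallest `keep` fraction of the
    # nearest-neighbour distances, discarding the largest ones as outliers.
    d = np.sort(nearest_neighbour_distances(A, B))
    k = max(1, int(keep * d.size))
    return float(d[:k].mean())

rng = np.random.default_rng(1)
B = rng.normal(size=(50, 2))                 # model point set
A = B + 0.02 * rng.normal(size=B.shape)      # slightly perturbed observation
A[0] = [10.0, 10.0]                          # one gross outlier
print(directed_hausdorff(A, B))              # dominated by the outlier
print(trimmed_hausdorff(A, B))               # close to the true small misalignment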

275 citations

Journal ArticleDOI
TL;DR: An integrated system for navigation parameter estimation using sequential aerial images is presented, where the navigation parameters represent the position and velocity information of an aircraft for autonomous navigation.
Abstract: Presents an integrated system for navigation parameter estimation using sequential aerial images, where the navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of the aircraft by accumulating relative displacement estimates extracted from two successive aerial images. Simple accumulation of parameter values reduces the reliability of the extracted estimates as the aircraft continues to navigate, resulting in a large positional error. Therefore, absolute position estimation is required to compensate for the positional error generated by relative position estimation. Absolute position estimation algorithms using image matching and digital elevation model (DEM) matching are presented. In the image matching, a robust oriented Hausdorff measure (ROHM) is employed, whereas in the DEM matching, an algorithm using multiple image pairs is used. Experiments with four real aerial image sequences show the effectiveness of the proposed integrated position estimation algorithm.
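A toy sketch of the two-part structure described above: relative displacement estimates are accumulated frame by frame, and occasional absolute fixes replace the drifting estimate. The function and variable names are hypothetical, and the inputs are simulated numbers rather than the paper's image- and DEM-based estimates.

import numpy as np

def integrate_position(relative_displacements, absolute_fixes):
    # relative_displacements: per-frame (dx, dy) estimates from successive aerial images.
    # absolute_fixes: {frame_index: (x, y)} positions from image or DEM matching.
    # Relative estimation accumulates displacements, so its error grows over time;
    # an absolute fix, when available, resets the accumulated drift.
    position = np.zeros(2)
    track = []
    for k, d in enumerate(relative_displacements):
        position = position + np.asarray(d, dtype=float)
        if k in absolute_fixes:
            position = np.asarray(absolute_fixes[k], dtype=float)
        track.append(position.copy())
    return np.array(track)

# Simulated straight flight: noisy unit steps along x, one absolute fix at frame 49.
rng = np.random.default_rng(2)
steps = np.array([1.0, 0.0]) + rng.normal(0.0, 0.1, size=(100, 2))
track = integrate_position(steps, {49: (50.0, 0.0)})
print(track[-1])   # final estimate; without the fix the accumulated drift would be larger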

207 citations

Journal ArticleDOI
01 Feb 2005
TL;DR: A fast noise estimation algorithm using a Gaussian pre-filter is proposed that, owing to its performance and simplicity, can be applied to noise reduction in commercial image- or video-based applications such as digital cameras and digital television (DTV).
Abstract: This paper proposes a fast noise estimation algorithm using a Gaussian filter. It is based on block-based noise estimation, in which the input image is assumed to be contaminated by additive white Gaussian noise and filtering is performed by an adaptive Gaussian filter. The coefficients of the Gaussian filter are selected as functions of the standard deviation of the Gaussian noise, which is estimated from the noisy input image. To estimate the amount of noise (i.e., the standard deviation of the Gaussian noise), we split the image into a number of blocks and select smooth blocks, classified by the standard deviation of block intensity, where the noise standard deviation is computed from the difference between the selected blocks of the noisy input image and those of its filtered version. In the experiments, the performance of the proposed algorithm is compared with that of three conventional (block-based and filtering-based) noise estimation methods. Experiments with several still images show the effectiveness of the proposed algorithm. Owing to its performance and simplicity, the proposed noise estimation algorithm can be efficiently applied to noise reduction in commercial image- or video-based applications such as digital cameras and digital television (DTV).
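A rough sketch of the block-based scheme described above, under assumed details: smooth blocks are selected by their intensity standard deviation, and the noise level is read off the difference between the noisy image and its Gaussian-filtered version on those blocks. The block size, filter sigma, and selection fraction are illustrative, not the paper's values.

import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_noise_std(image, block=16, smooth_fraction=0.1, sigma=1.5):
    # Difference between the noisy image and its Gaussian-filtered version; on
    # smooth regions this difference is dominated by the noise itself.
    image = image.astype(float)
    diff = image - gaussian_filter(image, sigma)
    stats = []
    h, w = image.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            stats.append((patch.std(), diff[y:y + block, x:x + block].std()))
    stats.sort(key=lambda s: s[0])                    # flattest blocks first
    n = max(1, int(smooth_fraction * len(stats)))
    return float(np.mean([d for _, d in stats[:n]]))  # slightly biased low by the filter

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0.0, 255.0, 256), (256, 1))   # smooth ramp image
noisy = clean + rng.normal(0.0, 10.0, clean.shape)
print(estimate_noise_std(noisy))                          # close to the true value of 10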

197 citations

Journal ArticleDOI
TL;DR: In transmitting moving pictures, interframe coding is shown to be effective for compressing video data, and a hierarchical motion vector estimation algorithm using a mean pyramid is proposed, which greatly reduces the computational complexity with performance comparable to that of the full search.
Abstract: In transmitting moving pictures, interframe coding is shown to be effective for compressing video data. A hierarchical motion vector estimation algorithm using a mean pyramid is proposed. Using the same measurement window at each level of the pyramid, the proposed algorithm, based on tree pruning, greatly reduces the computational complexity with performance comparable to that of the full search (FS). By varying the number of candidate motion vectors that are used as the initial search points for motion vector estimation at the next level, the mean squared error of the proposed algorithm ranges between those of the FS and three-step search (TSS) methods. Also, depending on the number of candidate motion vectors, the computational complexity of the proposed hierarchical algorithm ranges from 1/8 to 1/2 of that of the FS. Computer simulation results of the proposed technique are compared with those of conventional methods for various test sequences.
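The sketch below shows the general mean-pyramid, coarse-to-fine idea, not the paper's exact algorithm: it propagates only the single best candidate between levels (rather than several candidates with tree pruning) and uses a SAD criterion; all names and parameters are assumptions.

import numpy as np

def mean_pyramid(frame, levels=3):
    # Each level halves the resolution by averaging non-overlapping 2x2 blocks.
    pyr = [frame.astype(float)]
    for _ in range(levels - 1):
        f = pyr[-1]
        h, w = (f.shape[0] // 2) * 2, (f.shape[1] // 2) * 2
        pyr.append(f[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr                              # pyr[0] full resolution, pyr[-1] coarsest

def block_sad(cur, ref, by, bx, dy, dx, bs):
    # Sum of absolute differences for one candidate displacement (dy, dx);
    # infinite cost if either block falls outside its frame.
    y, x = by + dy, bx + dx
    if (y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]
            or by + bs > cur.shape[0] or bx + bs > cur.shape[1]):
        return np.inf
    return np.abs(cur[by:by + bs, bx:bx + bs] - ref[y:y + bs, x:x + bs]).sum()

def hierarchical_mv(cur_pyr, ref_pyr, by, bx, bs=8, radius=2):
    # Coarse-to-fine search with the same small measurement window at every level.
    # Simplification: only the single best candidate is carried to the next
    # finer level, rather than several candidates with tree pruning.
    mv = np.zeros(2, dtype=int)
    for level in range(len(cur_pyr) - 1, -1, -1):
        if level < len(cur_pyr) - 1:
            mv = mv * 2                     # scale the coarser estimate up one level
        s = 2 ** level
        best_cost, best_mv = np.inf, mv
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cand = mv + np.array([dy, dx])
                cost = block_sad(cur_pyr[level], ref_pyr[level],
                                 by // s, bx // s, cand[0], cand[1], max(bs // s, 2))
                if cost < best_cost:
                    best_cost, best_mv = cost, cand
        mv = best_mv
    return mv

# Smooth synthetic scene shifted by (3, 5) between the reference and current frames.
ys, xs = np.mgrid[0:64, 0:64]
ref = 100.0 + 40.0 * np.sin(xs / 5.0) + 40.0 * np.cos(ys / 7.0)
cur = np.roll(ref, shift=(3, 5), axis=(0, 1))
print(hierarchical_mv(mean_pyramid(cur), mean_pyramid(ref), by=24, bx=24))
# expected output: approximately [-3 -5] (the block's displacement back into the reference)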

182 citations


Cited by
Journal ArticleDOI
TL;DR: 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images, and the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications are identified.
Abstract: We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry out their performance comparison. The thresholding methods are categorized according to the information they exploit, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on the combined performance measures. We identify the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1631316)
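As a concrete instance of one of the surveyed categories (clustering of the gray-level histogram), here is a compact implementation of Otsu's threshold. It is offered only as an illustration of that category, not as the survey's recommended method.

import numpy as np

def otsu_threshold(image):
    # Clustering-based global threshold: pick the gray level that maximises the
    # between-class variance of the two classes induced by the threshold.
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0
        m1 = (levels[t:] * p[t:]).sum() / w1
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

rng = np.random.default_rng(5)
image = np.concatenate([rng.normal(80, 15, 5000), rng.normal(170, 15, 5000)])
print(otsu_threshold(image))   # roughly midway between the two histogram modes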

4,543 citations

Book
24 Oct 2001
TL;DR: Digital Watermarking covers the crucial research findings in the field and explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.
Abstract: Digital watermarking is a key ingredient to copyright protection. It provides a solution to illegal copying of digital material and has many other useful applications such as broadcast monitoring and the recording of electronic transactions. Now, for the first time, there is a book that focuses exclusively on this exciting technology. Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied. As a result, additional groundwork is laid for future developments in this field, helping the reader understand and anticipate new approaches and applications.

2,849 citations

Journal ArticleDOI

2,415 citations

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing the use of co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results with state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
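To make the co-occurrence idea concrete, here is a minimal co-occurrence matrix in its classical gray-level form for a single displacement; the deliverable's own matrices may be built over richer descriptors, so this is only an illustration of the underlying principle.

import numpy as np

def cooccurrence_matrix(image, dx=1, dy=0, levels=8):
    # Gray-level co-occurrence matrix for one non-negative displacement (dy, dx):
    # counts how often a pixel of quantised level i has a neighbour of level j at
    # that offset, normalised to a joint probability.
    q = np.clip((image.astype(int) * levels) // 256, 0, levels - 1)
    h, w = q.shape
    a = q[:h - dy, :w - dx]          # pixel i
    b = q[dy:h, dx:w]                # its neighbour j at offset (dy, dx)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    return glcm / glcm.sum()

rng = np.random.default_rng(6)
image = np.repeat(rng.integers(0, 256, (32, 16)), 2, axis=1)   # horizontally repetitive texture
print(cooccurrence_matrix(image, dx=1, dy=0).round(2))
# over half of the probability mass lies on the diagonal, reflecting the horizontal structure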

2,134 citations

Proceedings Article
01 Jan 1989
TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot and equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
Abstract: A scheme is developed for classifying the types of motion perceived by a humanlike robot. It is assumed that the robot receives visual images of the scene using a perspective system model. Equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
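The "perspective system model" is presumably the standard pinhole projection; the sketch below shows the generic textbook projection equation and how lateral motion maps to image-plane motion. It is not the paper's own notation or equations.

import numpy as np

def project(points_3d, focal_length=1.0):
    # Standard pinhole (perspective) projection onto the focal plane:
    # (X, Y, Z) -> (f * X / Z, f * Y / Z), assuming the camera looks along +Z.
    P = np.asarray(points_3d, dtype=float)
    return focal_length * P[:, :2] / P[:, 2:3]

# The same lateral motion produces a smaller image displacement for the farther
# point, one of the depth/motion cues such classification schemes rely on.
P0 = np.array([[1.0, 0.0, 5.0],
               [1.0, 0.0, 50.0]])
P1 = P0 + np.array([0.5, 0.0, 0.0])       # both points translate 0.5 along X
print(project(P1) - project(P0))          # image motion: [[0.1, 0.0], [0.01, 0.0]]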

2,000 citations