Author

Sang-Hoon Kim

Bio: Sang-Hoon Kim is an academic researcher from Cheju Halla University. The author has contributed to research in topics: Camera resectioning & Stereoscopy. The author has an h-index of 2 and has co-authored 4 publications receiving 159 citations.

Papers
Journal ArticleDOI
TL;DR: This paper proposes a real-time, illumination-invariant lane detection method for a lane departure warning system that works well under various illumination conditions, such as bad weather and at night.
Abstract: The invariant property of lane color under various illuminations is utilized for lane detection. Computational complexity is reduced using vanishing point detection and an adaptive ROI. The evaluation datasets include various environments captured from several devices. A simulation demo demonstrates fast and powerful performance for real-time applications. Lane detection is an important element in improving driving safety. In this paper, we propose a real-time, illumination-invariant lane detection method for a lane departure warning system. The proposed method works well under various illumination conditions, such as bad weather and at night. It includes three major components: First, we detect a vanishing point based on a voting map and define an adaptive region of interest (ROI) to reduce computational complexity. Second, we utilize the distinct property of lane colors to achieve illumination-invariant lane marker candidate detection. Finally, we find the main lane using a clustering method applied to the lane marker candidates. In the case of a lane departure, the system sends the driver an alarm signal. Experimental results show satisfactory performance, with an average detection rate of 93% under various illumination conditions. Moreover, the overall process takes only 33 ms per frame.
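To make the pipeline concrete, below is a minimal Python/OpenCV sketch of the candidate-detection step: an adaptive ROI below the vanishing point and a color-based lane-marker mask. The HSV color cue and all thresholds are illustrative assumptions, not the exact rules used in the paper.

```python
# Hedged sketch: adaptive ROI + color-based lane-marker candidates.
# The HSV ranges are illustrative, not the paper's thresholds.
import cv2
import numpy as np

def lane_candidates(frame_bgr, vanishing_y):
    """Return a binary mask of lane-marker candidates below the vanishing point."""
    h, w = frame_bgr.shape[:2]
    roi = frame_bgr[vanishing_y:h]                  # adaptive ROI below the vanishing point
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

    # White markers: low saturation, high value; yellow markers: hue roughly 20-35.
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
    yellow = cv2.inRange(hsv, (20, 80, 100), (35, 255, 255))
    mask = cv2.bitwise_or(white, yellow)

    full = np.zeros((h, w), np.uint8)
    full[vanishing_y:h] = mask                      # place the ROI mask back into full-frame coords
    return full
```

In a full system, the resulting candidate pixels would then be grouped by a clustering step to select the main lane, as the abstract describes.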

194 citations

Journal ArticleDOI
31 Jul 2011
TL;DR: A hybrid stereoscopic camera system is proposed that acquires and utilizes stereoscopic images from two different camera modules, the main-camera module and the sub-camera module.
Abstract: In this paper, we propose a hybrid stereoscopic camera system which acquires and utilizes stereoscopic images from two different camera modules, the main-camera module and the sub-camera module. A hybrid stereoscopic camera can effectively reduce the price and size of a stereoscopic camera by using a relatively small and cheap sub-camera module, such as a mobile phone camera. Images from the two camera modules differ greatly from each other in terms of color, angle of view, scale, resolution, and so on. The proposed system performs an efficient hybrid stereoscopic image registration algorithm that transforms hybrid stereoscopic images into normal stereoscopic images based on camera geometry. The registered stereoscopic images and applications of the proposed system are presented as experimental results to demonstrate the performance and functionality of the proposed camera system.
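As a rough illustration of what such a registration step involves, the following Python/OpenCV sketch aligns a sub-camera image to the main camera view with a feature-based homography. The paper's registration is derived from camera geometry, so this is only an approximation under assumed conditions.

```python
# Illustrative sketch: register the sub-camera image to the main camera view
# with a feature-based homography (an approximation of geometry-based registration).
import cv2
import numpy as np

def register_sub_to_main(main_img, sub_img):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(main_img, None)
    k2, d2 = orb.detectAndCompute(sub_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = main_img.shape[:2]
    # Warp the sub-camera image into the main camera's frame (scale, rotation, viewpoint).
    return cv2.warpPerspective(sub_img, H, (w, h))
```

A practical system would also need a color-correction step, since the two modules differ in color response as noted in the abstract.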

7 citations

Journal ArticleDOI
30 Jul 2012
TL;DR: In this article, the authors used depth images to estimate the light transmission and the degradation factor caused by scattered light, and recovered scatter-free images by removing the scattered components from the image according to the estimated transmission.
Abstract: In the underwater environment, light is absorbed and scattered by water and floating particles, which makes underwater images suffer from color degradation and limited visibility. Physically, the amount of scattered light transmitted to the image is proportional to the distance between the camera and the object. In this paper, the proposed visibility enhancement method utilizes depth images to estimate the light transmission and the degradation factor caused by the scattered light. To recover the scatter-free images without unnatural artifacts, the proposed method normalizes the degradation factor based on the value of each pixel of the image. Finally, the scatter-free images are obtained by removing the scattered components from the image according to the estimated transmission. The proposed method also considers the color discrepancies of underwater stereo images so that the stereo images have the same color appearance after the visibility enhancement. The experimental results show that the proposed method improves the color contrast by 5% to 14%, depending on the experimental images.
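The description matches the common scattering image-formation model, and a minimal sketch of depth-guided scatter removal under that assumption is shown below; the attenuation coefficient and airlight values are illustrative choices, not values taken from the paper.

```python
# Minimal sketch of depth-guided scatter removal, assuming the common model
# I = J*t + A*(1 - t) with transmission t = exp(-beta * depth).
# beta and airlight are illustrative assumptions.
import numpy as np

def remove_scatter(image, depth, airlight, beta=0.1, t_min=0.1):
    """image: HxWx3 float in [0,1]; depth: HxW metric depth; airlight: per-channel (3,)."""
    t = np.exp(-beta * depth)[..., None]           # transmission estimated from depth
    t = np.clip(t, t_min, 1.0)                     # avoid amplifying noise where t is tiny
    restored = (image - airlight * (1.0 - t)) / t  # invert the scattering model
    return np.clip(restored, 0.0, 1.0)
```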
Journal ArticleDOI
31 Aug 2015
TL;DR: In this paper, Moon et al. presented a depth-map-based scene detection method for S3D animation based on depth values and a depth chart.
Abstract: (original abstract not recoverable from the source) Keywords: S3D animation, depth value, depth chart, depth map, scene detection. ※ Corresponding Author: Moon Suk Hwan. Received: May 31, 2015; Revised: August 19, 2015; Accepted: August 31, 2015. * Cheju Halla University, Dept. of Broadcasting & Film, Tel: +82-64-741-7466, Fax: +82-64-741-7465, email: shkim0207@chu.ac.kr. ** Cheju Halla University, Dept. of Digital Contents, Tel: +82-64-741-7668, Fax: +82-64-741-7465, email: msh@chu.ac.kr. ▣ This research was supported by a technology development R&D grant from the 2014 Regional Specialized Industry Development Program of the Ministry of Trade, Industry and Energy.
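Since only the keywords survive in the source, the following Python sketch is a heavily hedged illustration of one way a depth chart could drive scene detection: a per-frame depth statistic is tracked over time and large jumps are flagged as scene boundaries. The statistic and threshold are assumptions, not the paper's method.

```python
# Hedged sketch: flag scene changes where the per-frame depth statistic jumps.
# The mean-depth statistic and the threshold are illustrative assumptions.
import numpy as np

def detect_scene_changes(depth_maps, threshold=0.15):
    """depth_maps: sequence of HxW normalized depth maps; returns change-frame indices."""
    chart = np.array([d.mean() for d in depth_maps])  # depth chart: one value per frame
    jumps = np.abs(np.diff(chart))
    return (np.nonzero(jumps > threshold)[0] + 1).tolist()
```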

Cited by
Proceedings Article
27 Apr 2018
TL;DR: This paper proposes Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer.
Abstract: Convolutional neural networks (CNNs) are usually built by stacking convolutional operations layer by layer. Although CNNs have shown a strong capability to extract semantics from raw pixels, their capacity to capture spatial relationships of pixels across rows and columns of an image is not fully explored. These relationships are important for learning semantic objects with strong shape priors but weak appearance coherence, such as traffic lanes, which are often occluded or not even painted on the road surface. In this paper, we propose Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer. Such an SCNN is particularly suitable for long continuous shape structures or large objects with strong spatial relationships but few appearance clues, such as traffic lanes, poles, and walls. We apply SCNN to a newly released, very challenging traffic lane detection dataset and the Cityscapes dataset. The results show that SCNN can learn the spatial relationships for structured output and significantly improves performance. We show that SCNN outperforms the recurrent neural network (RNN) based ReNet and MRF+CNN (MRFNet) on the lane detection dataset by 8.7% and 4.6%, respectively. Moreover, our SCNN won first place in the TuSimple Benchmark Lane Detection Challenge, with an accuracy of 96.53%.
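For readers who want to see the slice-by-slice idea in code, here is a compact PyTorch sketch of a single downward (top-to-bottom) pass, where each row of the feature map receives a convolved message from the row above; the kernel size, channel count, and use of a single direction are illustrative choices rather than the released implementation.

```python
# Compact sketch of a downward slice-by-slice message-passing pass.
# Kernel size and channel count are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCNNDown(nn.Module):
    def __init__(self, channels=128, kernel=9):
        super().__init__()
        # 1-D convolution along the width dimension, mixing channels.
        self.conv = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)

    def forward(self, x):                 # x: (N, C, H, W)
        rows = list(x.unbind(dim=2))      # H slices of shape (N, C, W)
        for i in range(1, len(rows)):
            # Message passing: add the convolved previous row to the current row.
            rows[i] = rows[i] + F.relu(self.conv(rows[i - 1]))
        return torch.stack(rows, dim=2)
```

In the paper, such passes are applied in four directions (downward, upward, rightward, leftward); the sketch shows only one of them.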

539 citations

Posted Content
TL;DR: Wang et al. proposed Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer.

200 citations

Journal ArticleDOI
TL;DR: The designed method for autonomous lane change solves two crucial issues, trajectory planning and trajectory tracking; its feasibility and effectiveness are demonstrated, and it can be extended and applied to intelligent vehicles in the future.
Abstract: An autonomous lane change maneuver was developed using a cooperative strategy. The proposed system has the potential to prevent lane change crashes and thus reduce injuries and fatalities. A polynomial-based trajectory planning method was developed. A trajectory tracking controller with global convergence ability was designed. Simulations and experimental results were presented to validate the method. The lane change maneuver is one of the most common behaviors in driving. Unsafe lane change maneuvers are a key factor in traffic accidents and traffic congestion. For drivers' safety, comfort, and convenience, advanced driver assistance systems (ADAS) have been developed. The main problem discussed in this paper is the development of an autonomous lane change system. The system can be extended and applied to intelligent vehicles in the future. It solves two crucial issues: trajectory planning and trajectory tracking. A polynomial method was adopted to describe the trajectory planning problem. The movement of the host vehicle was abstracted into functions of time. Moreover, collision detection was mapped into a parameter space by adopting infinite dynamic circles. The second issue was addressed using the backstepping principle. Using a Lyapunov function, a tracking controller with a global convergence property was verified. Both the simulations and the experimental results demonstrate the feasibility and effectiveness of the designed method for autonomous lane change.
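To illustrate the polynomial trajectory planning component, the sketch below computes a quintic lateral lane-change trajectory with zero lateral velocity and acceleration at both endpoints, which is a standard formulation for this kind of maneuver; the lane width and maneuver duration are illustrative parameters, not values from the paper.

```python
# Sketch: quintic polynomial lateral trajectory for a lane change.
# Boundary conditions: y(0)=y'(0)=y''(0)=0 and y(T)=lane_width, y'(T)=y''(T)=0.
# Lane width and duration are illustrative assumptions.
import numpy as np

def lane_change_trajectory(lane_width=3.5, duration=4.0, dt=0.05):
    T = duration
    # Solve for coefficients a3, a4, a5 (a0=a1=a2=0 from the initial conditions).
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,     12*T**2,  20*T**3]])
    b = np.array([lane_width, 0.0, 0.0])
    a3, a4, a5 = np.linalg.solve(A, b)
    t = np.arange(0.0, T + dt, dt)
    y = a3*t**3 + a4*t**4 + a5*t**5   # lateral offset as a function of time
    return t, y
```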

200 citations

Journal ArticleDOI
TL;DR: An overview of current LDW systems is provided, describing in particular pre-processing, lane models, lane detection techniques, and the departure warning system.

196 citations

Journal ArticleDOI
TL;DR: A stacked extreme learning machine (ELM) architecture in the CNN framework is proposed, and the backpropagation algorithm is modified to find the targets of hidden layers and effectively learn network weights while maintaining performance.
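As background on the ELM component, the following Python sketch shows a minimal single-hidden-layer extreme learning machine, where the hidden weights are random and fixed and the output weights are solved in closed form by least squares; the stacked, CNN-embedded variant with modified backpropagation described in the paper builds on this idea, and the details here are assumptions.

```python
# Minimal single-hidden-layer ELM sketch: random hidden weights, closed-form output weights.
import numpy as np

class ELM:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # random hidden weights, never trained
        self.b = rng.standard_normal(n_hidden)
        self.beta = None

    def fit(self, X, T):
        H = np.tanh(X @ self.W + self.b)                # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ T               # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```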

167 citations