Author

Jongin Son

Bio: Jongin Son is an academic researcher from Yonsei University. The author has contributed to research in topics: Object detection & Feature (computer vision). The author has an h-index of 6 and has co-authored 10 publications receiving 229 citations.

Papers
Journal ArticleDOI
TL;DR: This paper proposes a real-time, illumination-invariant lane detection method for lane departure warning systems that works well in various illumination conditions, such as bad weather and at night.
Abstract: Highlights: The invariant property of lane color under various illuminations is utilized for lane detection. Computational complexity is reduced using vanishing point detection and an adaptive ROI. Datasets for evaluation include various environments captured by several devices. A simulation demo demonstrates fast and robust performance for real-time applications. Lane detection is an important element in improving driving safety. In this paper, we propose a real-time, illumination-invariant lane detection method for lane departure warning systems. The proposed method works well in various illumination conditions, such as bad weather and at night. It includes three major components: first, we detect a vanishing point based on a voting map and define an adaptive region of interest (ROI) to reduce computational complexity. Second, we utilize the distinct property of lane colors to achieve illumination-invariant lane marker candidate detection. Finally, we find the main lane using a clustering method applied to the lane marker candidates. In a lane departure situation, our system sends the driver an alarm signal. Experimental results show satisfactory performance, with an average detection rate of 93% under various illumination conditions. Moreover, the overall process takes only 33 ms per frame.
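The abstract does not give the voting-map details, but the general idea of voting for a vanishing point with extended line segments and then cropping an adaptive ROI below it can be sketched as follows (function names and parameters here are hypothetical, not the paper's):

```python
import numpy as np

def vanishing_point(segments, shape):
    """Vote for a vanishing point: extend each detected line segment
    across the image and accumulate votes in the pixels it crosses;
    the cell where many extended lanes agree is the vanishing point."""
    h, w = shape
    votes = np.zeros((h, w), dtype=np.int32)
    for (x1, y1, x2, y2) in segments:
        n = max(h, w)
        t = np.linspace(-2.0, 3.0, 4 * n)          # sample beyond the endpoints
        xs = np.round(x1 + t * (x2 - x1)).astype(int)
        ys = np.round(y1 + t * (y2 - y1)).astype(int)
        ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
        votes[ys[ok], xs[ok]] += 1
    vy, vx = np.unravel_index(votes.argmax(), votes.shape)
    return vx, vy

def adaptive_roi(vp_y, shape, margin=10):
    """Restrict further lane processing to the rows below the
    vanishing point, which cuts per-frame computation."""
    h, w = shape
    top = min(max(vp_y - margin, 0), h - 1)
    return top, h                                   # row range of the ROI
```

Three segments whose extensions meet at one pixel make that pixel the clear vote maximum, which is the property the adaptive ROI relies on.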

194 citations

Journal ArticleDOI
TL;DR: A robust image matching method is proposed that provides invariance to illumination and viewpoint variations, and this scheme is incorporated into a vision-based localization system.
Abstract: Highlights: Robust feature point extraction provides viewpoint invariance using virtual viewpoints. A robust feature descriptor provides illumination invariance using a modified binary pattern. An efficient computational model reduces the time complexity of the localization system. Extensive experiments and simulation demos provide objective evaluations. A sensor-based vision localization system is one of the most essential technologies in computer vision applications such as autonomous navigation and surveillance, among many others. Conventionally, sensor-based vision localization systems have three inherent limitations: sensitivity to illumination variations, sensitivity to viewpoint variations, and high computational complexity. To overcome these problems, we propose a robust image matching method that provides invariance to illumination and viewpoint variations, and we incorporate this scheme into a vision-based localization system. Based on the proposed image matching method, we design a robust localization system that provides satisfactory localization performance with low computational complexity. Specifically, to address illumination and viewpoint variations, we extract key points using virtual views of the query image and compute a descriptor based on the local average patch difference, similar to HC-LBP. Moreover, we propose a key frame selection method and a simple tree scheme for fast image search. Experimental results show that the proposed localization system is four times faster than existing systems and exhibits better matching performance than existing algorithms in challenging environments with difficult illumination and viewpoint conditions.

24 citations

Proceedings ArticleDOI
25 Oct 2012
TL;DR: By incorporating geometric information with a color-based road probability map, the proposed method robustly detects road regions in real scenes containing illumination variations such as shadows and mixed artificial lights.
Abstract: Road detection is an important task in intelligent transportation systems (ITS). Over the past few decades, several vision-based approaches for road detection have been proposed, and most of them are based on color information. However, color information may result in false road detections under varying illumination conditions. To deal with illumination problems, we propose an illumination-invariant road detection method using geometric information. By incorporating geometric information with a color-based road probability map, the proposed method robustly detects road regions in real scenes containing illumination variations such as shadows and mixed artificial lights. Experimental results show that the proposed method outperforms conventional methods.
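The paper's geometric cue is not detailed in the abstract; purely as an illustration, fusing a color-based road probability map with a simple vertical-position prior (an assumed stand-in, not the authors' actual geometric model) could look like:

```python
import numpy as np

def road_mask(color_prob, threshold=0.5):
    """Fuse a colour-based road probability map with a geometric prior
    and threshold the product.  Here the prior is a hypothetical linear
    ramp: road pixels are more likely near the bottom of the image
    (0 at the top row, 1 at the bottom row)."""
    h, w = color_prob.shape
    prior = np.linspace(0.0, 1.0, h)[:, None] * np.ones((1, w))
    posterior = color_prob * prior               # fuse the two cues
    return posterior >= threshold
```

The fusion suppresses color false positives high in the image (sky, buildings under mixed lighting) even when their color score alone is high.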

18 citations

Journal ArticleDOI
TL;DR: A reliability factor is introduced to quantitatively measure the inhomogeneity of regions; a disparity feature weighted by reliability votes for obstacle locations, and the dominant candidates in the voting map are selected as initial obstacle regions. The method shows satisfactory performance under various real parking environments.
Abstract: Highlights: Three features are utilized: disparity, superpixel segments, and pixel-wise gradients. The reliability of disparity is computed from superpixel segments and pixel-wise gradients. A voting map is developed to reduce the time complexity of finding initial obstacle regions. Superior performance is achieved with erroneous disparity information and in complex environments. A vision-based real-time rear obstacle detection system is one of the most essential technologies, with applications such as parking assistance systems and intelligent vehicles. Although disparity is a useful feature for detecting obstacles, estimating a correct disparity map is a hard problem due to matching ambiguity and noise sensitivity, especially in homogeneous regions. To overcome these problems, we leverage only reliable disparities for obstacle detection. A reliability factor is introduced to quantitatively measure the inhomogeneity of a region. It is computed per superpixel to account for the noise sensitivity of pixel-wise gradients and to assign similar reliability values within the same object. The method includes two major components. First, in a feature extraction and combination stage, we extract three features from stereo images (disparity, superpixel segments, and pixel-wise gradients) and compute the reliability of the disparity from the superpixel segments and the pixel-wise gradients. Second, in an obstacle detection stage, the disparity feature weighted by reliability votes for obstacle locations, and the dominant candidates in the voting map are selected as initial obstacle regions. The initial obstacle regions are then expanded into neighboring superpixels based on CIELAB color similarity and distance similarity between superpixels. Experimental results show satisfactory performance under various real parking environments: the detection rate is at least 4% higher than those of existing methods, and the false detection rate is more than 10% lower, so the system can be used for parking assistance.
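As a rough illustration of reliability-weighted disparity voting (the paper's actual reliability factor and voting map are not given in the abstract; the mean-gradient weighting below is an assumption):

```python
import numpy as np

def dominant_disparity(disparity, gradient, labels, n_disp=64):
    """Reliability-weighted disparity voting (sketch).  Each superpixel's
    reliability is taken as its mean gradient magnitude, so homogeneous
    regions, where stereo matching is ambiguous, contribute little.
    Every pixel votes for its disparity with that weight, and the
    dominant bin is the initial obstacle disparity."""
    votes = np.zeros(n_disp)
    for sp in np.unique(labels):
        mask = labels == sp
        reliability = gradient[mask].mean()        # inhomogeneity proxy
        d, counts = np.unique(disparity[mask], return_counts=True)
        votes[d] += reliability * counts
    return int(votes.argmax())
```

A large but textureless region (reliability near zero) is outvoted by a smaller textured obstacle, which is exactly the failure mode of unweighted voting that the reliability factor addresses.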

16 citations

Proceedings ArticleDOI
18 Nov 2011
TL;DR: This paper adopts a learning method to estimate an illumination-invariant direction specific to the road surface, and incorporates the scene layout of the road image to reduce the false positive detection rate outside the road.
Abstract: Road detection is an essential component of intelligent transportation systems (ITS). Generally, most road detection methods are sensitive to illumination variations, which increases the false detection rate. In this paper, we propose an illumination-invariant road detection method to deal with illumination variations. We adopt a learning method to estimate an illumination-invariant direction specific to the road surface. Once this direction is estimated, each image pixel can be classified as road or non-road. By incorporating the scene layout of the road image, we reduce the false positive detection rate outside the road. Experimental results on real road scenes show the effectiveness of the proposed method.
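The learned illumination-invariant direction is reminiscent of log-chromaticity approaches. Assuming such a direction theta has already been learned, the projection step could be sketched as follows (a hypothetical sketch under that assumption, not the paper's exact formulation):

```python
import numpy as np

def illumination_invariant(img, theta):
    """Project 2-D log-chromaticities onto the direction theta (radians).
    Under a Planckian-illuminant / narrow-band sensor model, pixels of
    one surface under different lighting move along a single
    illumination direction in log-chromaticity space; projecting onto
    the orthogonal (learned) direction removes that lighting component,
    e.g. shadows on the road surface."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-6                                   # avoid log(0)
    log_rg = np.log(r + eps) - np.log(g + eps)   # log-chromaticity axis 1
    log_bg = np.log(b + eps) - np.log(g + eps)   # log-chromaticity axis 2
    return log_rg * np.cos(theta) + log_bg * np.sin(theta)
```

A channel-wise rescaling that moves pixels along the direction orthogonal to theta leaves the projected value unchanged, which is the invariance the classifier relies on.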

14 citations


Cited by
Proceedings Article
27 Apr 2018
TL;DR: This paper proposes Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer.
Abstract: Convolutional neural networks (CNNs) are usually built by stacking convolutional operations layer by layer. Although CNNs have shown a strong capability to extract semantics from raw pixels, their capacity to capture spatial relationships of pixels across the rows and columns of an image is not fully explored. These relationships are important for learning semantic objects with strong shape priors but weak appearance coherence, such as traffic lanes, which are often occluded or not even painted on the road surface, as shown in Fig. 1(a). In this paper, we propose Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer. SCNN is particularly suitable for long continuous shape structures or large objects with strong spatial relationships but few appearance clues, such as traffic lanes, poles, and walls. We apply SCNN to a newly released, very challenging traffic lane detection dataset and to the Cityscapes dataset. The results show that SCNN can learn the spatial relationships for structured output and significantly improves performance. We show that SCNN outperforms the recurrent neural network (RNN) based ReNet and MRF+CNN (MRFNet) on the lane detection dataset by 8.7% and 4.6%, respectively. Moreover, our SCNN won 1st place in the TuSimple Benchmark Lane Detection Challenge, with an accuracy of 96.53%.
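The core slice-by-slice idea can be sketched in NumPy for one propagation direction (a simplified, hypothetical sketch; the real SCNN uses learned kernels in all four directions inside a deep network):

```python
import numpy as np

def scnn_downward(x, w):
    """One SCNN message-passing direction (downward).  The feature map
    x (C, H, W) is treated as H slices of shape (C, W); each row i adds
    a ReLU'd 1-D convolution of the *updated* row i-1, so information
    propagates sequentially down the columns, unlike a standard conv
    layer where rows are processed independently."""
    c, h, width = x.shape
    out = x.copy()
    k = w.shape[-1]                              # w: (C_out, C_in, k) row-to-row kernel
    pad = k // 2
    for i in range(1, h):
        prev = np.pad(out[:, i - 1, :], ((0, 0), (pad, pad)))
        msg = np.zeros((c, width))
        for j in range(width):
            # 1-D convolution over channels and kernel width
            msg[:, j] = np.tensordot(w, prev[:, j:j + k], axes=([1, 2], [0, 1]))
        out[:, i, :] += np.maximum(msg, 0.0)     # ReLU before residual add
    return out
```

Because each row consumes the already-updated previous row, evidence from the top of a lane marking can reach the bottom of the image in a single layer, which is why SCNN handles occluded or unpainted lane stretches well.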

539 citations

Book
01 Jan 1997
TL;DR: This book is a good overview of the most important and relevant literature regarding color appearance models and offers insight into the preferred solutions.
Abstract: Color science is a multidisciplinary field with broad applications in industries such as digital imaging, coatings and textiles, food, lighting, archiving, art, and fashion. Accurate definition and measurement of color appearance is a challenging task that directly affects color reproduction in such applications. Color Appearance Models addresses those challenges and offers insight into the preferred solutions. Extensive research on the human visual system (HVS) and color vision has been performed in the last century, and this book contains a good overview of the most important and relevant literature regarding color appearance models.

496 citations

Posted Content
TL;DR: Spatial CNN (SCNN) is proposed, which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer.

200 citations

Journal ArticleDOI
TL;DR: The designed method for autonomous lane change solves two crucial issues, trajectory planning and trajectory tracking; its feasibility and effectiveness are demonstrated, and it can be extended to intelligent vehicles in the future.
Abstract: Highlights: An autonomous lane change maneuver was developed using a cooperative strategy. The proposed system has the potential to prevent lane change crashes and thus reduce injuries and fatalities. A trajectory planning method based on polynomials was developed. A trajectory tracking controller with global convergence ability was designed. Simulations and experimental results are presented to validate the method. The lane change maneuver is one of the most common behaviors in driving. Unsafe lane change maneuvers are a key factor in traffic accidents and traffic congestion. For drivers' safety, comfort, and convenience, advanced driver assistance systems (ADAS) have been introduced. The main problem discussed in this paper is the development of an autonomous lane change system, which could be extended to intelligent vehicles in the future. It solves two crucial issues: trajectory planning and trajectory tracking. A polynomial method was adopted to describe the trajectory planning problem, and the movement of the host vehicle was abstracted into time functions. Moreover, collision detection was mapped into a parameter space using infinite dynamic circles. The second issue was addressed using the backstepping principle; based on a Lyapunov function, a tracking controller with a global convergence property was derived. Both simulations and experimental results demonstrate the feasibility and effectiveness of the designed method for autonomous lane change.
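The polynomial trajectory planning idea can be illustrated with a quintic polynomial whose boundary conditions enforce smooth start and end states (a common formulation; the paper's exact polynomial order and constraints are not given in the abstract):

```python
import numpy as np

def lane_change_trajectory(d, T):
    """Quintic-polynomial lateral trajectory y(t) for a lane change:
    lateral offset, velocity, and acceleration are all zero at t = 0
    and t = T, and the vehicle ends displaced by the lane width d.
    Six boundary conditions determine the six coefficients of
    y(t) = a0 + a1 t + ... + a5 t^5 via a linear solve."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],        # y(0)   = 0
        [0, 1, 0,    0,       0,        0],        # y'(0)  = 0
        [0, 0, 2,    0,       0,        0],        # y''(0) = 0
        [1, T, T**2, T**3,    T**4,     T**5],     # y(T)   = d
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],   # y'(T)  = 0
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],  # y''(T) = 0
    ], dtype=float)
    b = np.array([0, 0, 0, d, 0, 0], dtype=float)
    coeffs = np.linalg.solve(A, b)
    return lambda t: sum(c * t**i for i, c in enumerate(coeffs))
```

The zero end-point velocity and acceleration are what make the maneuver comfortable: the resulting curve is the smoothstep-like profile d(10s^3 - 15s^4 + 6s^5) with s = t/T, crossing half the lane width exactly at mid-maneuver.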

200 citations

Journal ArticleDOI
TL;DR: An overview of current LDW systems is provided, describing in particular pre-processing, lane models, lane detection techniques, and departure warning systems.

196 citations