
Showing papers on "Corner detection published in 2014"


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed local descriptor based registration method can achieve reliable registration outcome, and the LSS-based similarity metric is robust to non-linear intensity differences among multispectral remote sensing images.
Abstract: Image registration is a crucial step for remote sensing image processing. Automatic registration of multispectral remote sensing images can be challenging due to the significant non-linear intensity differences caused by radiometric variations among such images. To address this problem, this paper proposes a local descriptor based registration method for multispectral remote sensing images. The proposed method includes a two-stage process: pre-registration and fine registration. The pre-registration is achieved using the Scale Restriction Scale Invariant Feature Transform (SR-SIFT) to eliminate the obvious translation, rotation, and scale differences between the reference and the sensed image. In the fine registration stage, evenly distributed interest points are first extracted in the pre-registered image using the Harris corner detector. Then, we integrate the local self-similarity (LSS) descriptor as a new similarity metric to detect the tie points between the reference and the pre-registered image, followed by a global consistency check to remove matching blunders. Finally, image registration is achieved using a piecewise linear transform. The proposed method has been evaluated with three pairs of multispectral remote sensing images from TM, ETM+, ASTER, WorldView, and QuickBird sensors. The experimental results demonstrate that the proposed method can achieve a reliable registration outcome, and the LSS-based similarity metric is robust to non-linear intensity differences among multispectral remote sensing images.

163 citations
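
As a rough illustration of the fine-registration interest-point step described above, the sketch below extracts evenly distributed Harris corners by keeping the strongest response per grid cell. It is not the paper's SR-SIFT pre-registration or LSS matching, and the file name, grid size, and response threshold are assumptions.

```python
# Sketch of the fine-registration interest-point step: evenly distributed
# Harris corners selected block-wise. Illustration only; the file name, grid
# size and threshold are hypothetical.
import cv2
import numpy as np

img = cv2.imread("pre_registered_band.tif", cv2.IMREAD_GRAYSCALE)
response = cv2.cornerHarris(np.float32(img), blockSize=3, ksize=3, k=0.04)

block = 64  # grid cell size in pixels; keep one corner per cell for even coverage
points = []
h, w = response.shape
for y in range(0, h, block):
    for x in range(0, w, block):
        cell = response[y:y + block, x:x + block]
        dy, dx = np.unravel_index(np.argmax(cell), cell.shape)
        if cell[dy, dx] > 0.01 * response.max():  # skip cells with only weak responses
            points.append((x + dx, y + dy))

print(f"{len(points)} evenly distributed Harris corners")
```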


Journal ArticleDOI
TL;DR: A performance analysis of the FPGA and the GPU implementations, and an extra CPU reference implementation, shows the competitive throughput of the proposed architecture even at a much lower clock frequency than those of the GPU and the CPU.
Abstract: This work presents a new flexible, parameterizable architecture for image and video processing with reduced latency and memory requirements, supporting a variable input resolution. The proposed architecture is optimized for feature detection, more specifically the Canny edge detector and the Harris corner detector. The architecture contains neighborhood extractors and threshold operators that can be parameterized at runtime. Also, algorithm simplifications are employed to reduce mathematical complexity, memory requirements, and latency without losing reliability. Furthermore, we present the proposed architecture implementation on an FPGA-based platform and its analogous optimized implementation on a GPU-based architecture for comparison. A performance analysis of the FPGA and GPU implementations, together with an extra CPU reference implementation, shows the competitive throughput of the proposed architecture even at a much lower clock frequency than those of the GPU and the CPU. The results also show a clear advantage of the proposed architecture in terms of power consumption, and it maintains reliable performance with noisy images while keeping latency and memory requirements low.

91 citations
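
For readers who want a point of comparison, a minimal CPU reference for the two operators the architecture targets (Canny and Harris) can be written with OpenCV as below; the parameter values are illustrative, not those used in the paper's benchmarks.

```python
# Minimal CPU reference for the two operators the architecture targets:
# Canny edge detection and the Harris corner response. Parameters are
# illustrative, not those from the paper's benchmarks.
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

edges = cv2.Canny(frame, threshold1=50, threshold2=150)       # binary edge map
harris = cv2.cornerHarris(np.float32(frame), 2, 3, 0.04)      # corner response map
corners = harris > 0.01 * harris.max()                        # runtime-tunable threshold

print("edge pixels:", int(np.count_nonzero(edges)),
      "corner pixels:", int(np.count_nonzero(corners)))
```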


Journal ArticleDOI
TL;DR: A framework of a complete image stitching system based on feature based approaches will be introduced and the current challenges of image stitching will be discussed.
Abstract: Image stitching (mosaicing) is considered an active research area in computer vision and computer graphics. Image stitching is concerned with combining two or more images of the same scene into one high-resolution image, which is called a panoramic image. Image stitching techniques can be categorized into two general approaches: direct and feature-based techniques. Direct techniques compare all the pixel intensities of the images with each other, whereas feature-based techniques aim to determine a relationship between the images through distinct features extracted from the processed images. The latter approach has the advantage of being more robust against scene movement, faster, and able to automatically discover the overlapping relationships among an unordered set of images. The purpose of this paper is to present a survey of feature-based image stitching. The main components of image stitching will be described. A framework of a complete image stitching system based on feature-based approaches will be introduced. Finally, the current challenges of image stitching will be discussed. Keywords: stitching/mosaicing, panoramic image, feature-based detection, SIFT, SURF, image blending.

83 citations
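
A minimal sketch of the feature-based pipeline the survey describes (detection, matching, homography estimation, warping) is given below. ORB is used only because it ships with core OpenCV, the file names are hypothetical, and blending is reduced to a direct overlay rather than the blending stage a full system would use.

```python
# Minimal feature-based stitching sketch: detect, match, estimate homography, warp.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")   # reference image (hypothetical file names)
img2 = cv2.imread("right.jpg")  # image to be warped onto the reference

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

pano = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))
pano[:img1.shape[0], :img1.shape[1]] = img1   # crude overlay instead of blending
cv2.imwrite("panorama.jpg", pano)
```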


Journal ArticleDOI
TL;DR: This work implements pedestrian dead reckoning (PDR) for indoor localization: a waist-mounted, smartphone-based PDR system estimates the user's step length from the height change of the waist, based on the Pythagorean Theorem.
Abstract: We implement pedestrian dead reckoning (PDR) for indoor localization. With a waist-mounted, smartphone-based PDR system, we estimate the user's step length by utilizing the height change of the waist, based on the Pythagorean Theorem. We propose a zero velocity update (ZUPT) method to address sensor drift error: simple harmonic motion and a low-pass filtering mechanism are combined with an analysis of gait characteristics. This method does not require training to develop the step length model. Exploiting the geometric similarity between the user trajectory and the floor map, our map matching algorithm includes three different filters to calibrate the direction errors from the gyro using building floor plans. A sliding-window-based algorithm detects corners. The system achieved 98% accuracy in estimating user walking distance with a waist-mounted phone and 97% accuracy when the phone is in the user's pocket. ZUPT mitigates sensor drift error (accuracy drops from 98% to 84% without ZUPT), using 8 Hz as the cut-off frequency to filter out sensor noise. Corner length impacted the corner detection algorithm. In our experiments, the overall location error is about 0.48 meter.

74 citations
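
Two of the ideas above can be sketched in a few lines, with heavy caveats: the 8 Hz low-pass filter on the inertial signal, and a sliding-window check on heading change as a stand-in for the corner (turn) detector. The sample rate, window length, and turn threshold are assumptions, not values from the paper.

```python
# Sketch only: low-pass filtering of the accelerometer signal (the paper uses an
# 8 Hz cut-off) and a sliding-window heading check to flag corners (turns).
# Sample rate, window length and turn threshold are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                  # assumed IMU sample rate (Hz)
b, a = butter(2, 8.0 / (fs / 2.0), "low")   # 8 Hz low-pass, as in the paper

def smooth_accel(accel_z):
    """Filter out sensor noise above 8 Hz from the vertical acceleration."""
    return filtfilt(b, a, accel_z)

def detect_corners(heading, window=100, turn_deg=60.0):
    """Flag samples where heading changes by more than turn_deg within a window."""
    corners = []
    for i in range(window, len(heading)):
        if abs(heading[i] - heading[i - window]) > np.deg2rad(turn_deg):
            corners.append(i)
    return corners
```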


Journal ArticleDOI
TL;DR: This paper presents software that implements optical flow motion tracking using the Lucas-Kanade algorithm; it is immensely fast, allowing real-time motion tracking on videos in Full HD or even 4K format, and it also supports multiple-GPU systems, where it scales up very well.
Abstract: Motion tracking algorithms are widely used in computer vision related research. However, the new video standards, especially those in high resolutions, mean that current implementations, even running on modern hardware, no longer meet the needs of real-time processing. To overcome this challenge, several GPU (Graphics Processing Unit) computing approaches have recently been proposed. Although they demonstrate the great potential of the GPU platform, hardly any of them can process high definition video sequences efficiently. Thus, a need arose to develop a tool able to address the outlined problem. In this paper we present software that implements optical flow motion tracking using the Lucas-Kanade algorithm. It is also integrated with the Harris corner detector, and therefore the algorithm may perform sparse tracking, i.e. tracking of the meaningful pixels only. This substantially lowers the computational burden of the method. Moreover, both parts of the algorithm, i.e. corner selection and tracking, are implemented on the GPU and, as a result, the software is immensely fast, allowing for real-time motion tracking on videos in Full HD or even 4K format. In order to deliver the highest performance, it also supports multiple-GPU systems, where it scales up very well.

46 citations
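
A CPU sketch of the same sparse-tracking idea, using OpenCV's Harris-based corner selection followed by pyramidal Lucas-Kanade optical flow, is shown below; the paper's contribution is the GPU and multi-GPU implementation, which this sketch does not reproduce. The video file name and parameters are hypothetical.

```python
# CPU sketch of sparse tracking: Harris-based corner selection followed by
# pyramidal Lucas-Kanade optical flow.
import cv2

cap = cv2.VideoCapture("input.mp4")          # hypothetical video file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1000, qualityLevel=0.01,
                              minDistance=7, useHarrisDetector=True, k=0.04)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
    pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)  # keep tracked points
    prev_gray = gray
```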


Journal ArticleDOI
28 Feb 2014-Sensors
TL;DR: The proposed detector, which combines the Mean Projection Transform as a corner classifier with a parabolic fit approximation, yields fewer false-positive and false-negative points than recent standard corner detection techniques, especially curvature scale space (CSS) methods.
Abstract: Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images.

40 citations


Book ChapterDOI
01 Jun 2014
TL;DR: The object extraction method builds on two well-known algorithms, Canny edge detection and quadrilateral detection, and allows unnecessary key points to be filtered out of the background image.
Abstract: In this paper we present a new method for obtaining a list of interest objects from a single image. Our object extraction method builds on two well-known algorithms: Canny edge detection and quadrilateral detection. Our approach allows us to select only the significant elements of the image. In addition, this method makes it possible to filter out, in a simple way, unnecessary key points (for example those obtained by the SIFT algorithm) belonging to the background image. The effectiveness of the method is confirmed by experimental research.

37 citations
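
The two building blocks named in the abstract can be sketched as follows: Canny edges plus quadrilateral detection via contour approximation, with the detected quadrilaterals then used as a mask to discard SIFT keypoints belonging to the background. The thresholds, minimum area, and keypoint filter are illustrative assumptions.

```python
# Sketch: Canny edges, quadrilateral detection via contour approximation, and
# masking out SIFT keypoints that fall outside the detected quadrilaterals.
import cv2
import numpy as np

img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

mask = np.zeros_like(gray)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4 and cv2.contourArea(approx) > 500:   # quadrilateral found
        cv2.drawContours(mask, [approx], -1, 255, thickness=-1)

sift = cv2.SIFT_create()
keypoints = sift.detect(gray, None)
kept = [kp for kp in keypoints if mask[int(kp.pt[1]), int(kp.pt[0])] > 0]
print(f"kept {len(kept)} of {len(keypoints)} keypoints inside detected objects")
```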


Proceedings ArticleDOI
23 Jun 2014
TL;DR: This work studies visual subcategorization as a means of capturing appearance variation in object detection schemes using color and gradient features and an ensemble of models that capture visual variation due to varying orientation, truncation, and occlusion degree.
Abstract: Object classes generally contain large intra-class variation, which poses a challenge to object detection schemes. In this work, we study visual subcategorization as a means of capturing appearance variation. First, training data is clustered using color and gradient features. Second, the clustering is used to learn an ensemble of models that capture visual variation due to varying orientation, truncation, and occlusion degree. Fast object detection is achieved with integral image features and pixel lookup features. The framework is studied in the context of vehicle detection on the challenging KITTI dataset.

35 citations


Journal ArticleDOI
TL;DR: The proposed corner detector, which uses the magnitude responses of the imaginary part of Gabor filters on contours, is compared with five state-of-the-art detectors; the comparison reveals that the proposed detector is more competitive with respect to detection accuracy, localisation accuracy, robustness to affine transforms, and noise-robustness.
Abstract: This study proposes a contour-based corner detector using the magnitude responses of the imaginary part of the Gabor filters on contours. Unlike traditional contour-based methods that detect corners by analysing the shape of the edge contours and searching for local curvature maxima on planar curves, the proposed corner detector combines the pixels of the edge contours with their corresponding grey-variation information. Firstly, edge contours are extracted from the original image using the Canny edge detector. Secondly, the imaginary parts of the Gabor filters are used to smooth the pixels on the edge contours. At each edge pixel, the magnitude responses at each direction are normalised by their values, and the sum of the normalised magnitude responses over the directions is used to extract corners from the edge contours. Thirdly, both a magnitude response threshold and an angle threshold are used to remove weak or false corners. Finally, the proposed detector is compared with five state-of-the-art detectors on several grey-level images. The experimental results reveal that the proposed detector is more competitive with respect to detection accuracy, localisation accuracy, robustness to affine transforms, and noise-robustness.

32 citations
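
A loose sketch of the core measurement is given below: magnitude responses of odd (imaginary-part) Gabor filters sampled at Canny edge pixels. The normalisation, the summation over directions, and the thresholds follow the abstract only approximately, and all parameter values are assumptions.

```python
# Rough sketch: odd (imaginary-part) Gabor responses evaluated on Canny edge
# pixels and summed over directions. Follows the abstract only loosely; all
# parameter values are assumptions.
import cv2
import numpy as np

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)

orientations = [i * np.pi / 8 for i in range(8)]
responses = []
for theta in orientations:
    # psi = pi/2 selects the odd (sine) component of the Gabor kernel
    kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                              lambd=10.0, gamma=0.5, psi=np.pi / 2)
    responses.append(np.abs(cv2.filter2D(np.float32(gray), cv2.CV_32F, kern)))

stack = np.stack(responses)                       # shape: (directions, H, W)
corner_measure = stack.sum(axis=0) * (edges > 0)  # evaluate on edge pixels only
candidates = np.argwhere(corner_measure > 0.8 * corner_measure.max())
```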


Journal ArticleDOI
TL;DR: This paper first extracts the prominent feature points from each target object, and then uses a particle filter-based approach to track the feature points in image sequences based on various attributes such as location, velocity and other descriptors.

31 citations


Proceedings ArticleDOI
20 Oct 2014
TL;DR: This paper proposes an FPGA implementation of the Harris corner algorithm based on a sliding processing window; it achieves very good performance with significantly less BRAM usage than other approaches.
Abstract: This paper proposes an FPGA implementation of the Harris corner algorithm based on a sliding processing window. Harris corner detection is one of the most frequently used pre-processing methods for a wide variety of image processing algorithms, such as feature detection, motion tracking, image registration, etc. It relies on a series of sequential steps, each processing an image output by the previous step. The purpose of the sliding window is to avoid storing intermediate results of processing stages in the external FPGA memory and to avoid using large line buffers typically implemented with BRAM blocks. Therefore, the entire processing pipeline benefits from data locality. Implementation results for Virtex-5 and Spartan-6 devices show that the proposed solution has very good performance (more than 130 fps for 1280×720 images on a Xilinx Spartan-6) with significantly less BRAM usage than other approaches.
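
For reference, the Harris pipeline stages that the sliding-window hardware streams through (gradients, windowed second-moment sums, corner response) can be modelled in software as below; this says nothing about the BRAM and line-buffer behaviour that is the actual point of the paper.

```python
# Software model of the Harris pipeline stages the hardware streams through.
# Written with NumPy/OpenCV for clarity; it does not model the line-buffer or
# BRAM behaviour discussed in the paper.
import cv2
import numpy as np

def harris_response(gray, window=5, k=0.04):
    gray = np.float32(gray)
    ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # stage 1: gradients
    iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    # stage 2: windowed sums of the second-moment matrix entries
    sxx = cv2.boxFilter(ix * ix, cv2.CV_32F, (window, window), normalize=False)
    syy = cv2.boxFilter(iy * iy, cv2.CV_32F, (window, window), normalize=False)
    sxy = cv2.boxFilter(ix * iy, cv2.CV_32F, (window, window), normalize=False)
    # stage 3: corner response R = det(M) - k * trace(M)^2
    return (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
```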

Journal ArticleDOI
TL;DR: This paper investigates a novel approach for point matching of multi-sensor satellite imagery that incorporates an angle criterion, distance condition and point matching condition in the multi-objective fitness function to match corresponding corner-points between the reference image and the sensed image.

Proceedings ArticleDOI
01 Dec 2014
TL;DR: The paper presents two advanced methods for a comparative study in the field of computer vision: an implementation of the Scale Invariant Feature Transform (SIFT) algorithm for leaf recognition based on key descriptor values, and a contour-based corner detection and classification method using the Mean Projection algorithm.
Abstract: The paper presents two advanced methods for a comparative study in the field of computer vision. The first method involves the implementation of the Scale Invariant Feature Transform (SIFT) algorithm for leaf recognition based on the key descriptor values. The second method involves contour-based corner detection and classification, which is done with the help of the Mean Projection algorithm. The advantage of this system over other curvature scale space (CSS) systems is that there are fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques. The performance analysis of both algorithms was done on the Flavia database.

Proceedings ArticleDOI
01 Aug 2014
TL;DR: This paper proposes to use a local sliding window approach with Integral Channel Features (ICF) and AdaBoost classifier for moving vehicle detection and segmentation and evaluates various object segmentation approaches based on contour extraction, blob extraction, or machine learning to handle such effects.
Abstract: Moving objects play a key role in gaining scene understanding in aerial surveillance tasks. The detection of moving vehicles can be challenging due to high object distance, simultaneous object and camera motion, shadows, or weak contrast. In scenarios where vehicles are driving on busy urban streets, this is even more challenging due to possible merged detections. In this paper, a video processing chain is proposed for moving vehicle detection and segmentation. The foundation for detecting motion that is independent of the camera motion is the tracking of local image features such as Harris corners. Independently moving features are clustered. Since motion clusters are prone to merging similarly moving objects, we evaluate various object segmentation approaches based on contour extraction, blob extraction, or machine learning to handle such effects. We propose to use a local sliding window approach with Integral Channel Features (ICF) and an AdaBoost classifier.

Journal ArticleDOI
TL;DR: This paper proposes novel eye inner corner detection methods based on AAM and Harris corner detector and demonstrates that a method based on a neural network presents the best performance even in light changing scenarios.
Abstract: Accurate detection of iris center and eye corners appears to be a promising approach for low cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance and feature based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We have demonstrated that a method based on a neural network presents the best performance even in light changing scenarios. In addition to this method, algorithms based on AAM and Harris corner detector present better accuracies than recent high performance face points tracking methods such as Intraface.

Proceedings ArticleDOI
TL;DR: A scale-invariant feature descriptor for representation of light-field images can significantly improve tasks such as object recognition and tracking on images taken with recently popularized light field cameras.
Abstract: We propose a scale-invariant feature descriptor for representation of light-field images. The proposed descriptor can significantly improve tasks such as object recognition and tracking on images taken with recently popularized light-field cameras. We test our proposed representation using various light-field images of different types, both synthetic and real. Our experiments show very promising results in terms of retaining invariance under various scaling transformations. Keywords: Feature Extraction, Transform, Scale Invariance, Plenoptic Function, Light Field Imaging

Journal ArticleDOI
01 Sep 2014-Optik
TL;DR: An improved Harris corner detection algorithm based on the Barron operator is proposed, since the Harris corner detection algorithm has poor positioning accuracy for complex corners and may miss certain real corners.

Posted Content
TL;DR: Image mosaicing is a method of combining multiple images of the same scene into a larger image using image-mosaicing algorithms; various corner detection algorithms are used for feature extraction.
Abstract: Image mosaicing is a method of combining multiple images of the same scene into a larger image. The output of the image mosaic is the union of the two input images. Image-mosaicing algorithms are used to obtain the mosaiced image. The image mosaicing process is basically divided into five phases: feature point extraction, image registration, homography computation, warping, and blending of the image. Various corner detection algorithms are used for feature extraction; these corners produce an efficient and informative output mosaiced image. Image mosaicing is widely used in creating 3D images, medical imaging, computer vision, data from satellites, and military automatic target recognition.

Proceedings ArticleDOI
08 Jan 2014
TL;DR: Speed detection with an average error of +/-2 km/h was achieved for different video sequences, and frame masking is used to differentiate between multiple vehicles.
Abstract: Vehicle speed detection is used to estimate the velocity of a moving vehicle using image and video processing techniques. Without any camera calibration, video is captured and analyzed for speed in real time. By employing frame subtraction and masking techniques, moving vehicles are segmented out. Speed is calculated from the time taken between frames and the distance the corner-detected object traverses in those frames. Finally, frame masking is used to differentiate between multiple vehicles. Speed detection with an average error of +/-2 km/h was achieved for different video sequences.
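
A back-of-the-envelope version of this speed estimate might look as follows: frame differencing to find the moving blob, then speed from the centroid displacement, the frame rate, and an assumed ground-plane scale. The scale factor and thresholds are placeholders, not values from the paper.

```python
# Rough sketch: frame differencing to locate the moving blob, then speed from
# centroid displacement, frame rate and an assumed metres-per-pixel scale.
import cv2
import numpy as np

METERS_PER_PIXEL = 0.05   # assumed ground-plane scale (placeholder)
FPS = 30.0                # assumed capture frame rate

def blob_centroid(prev_gray, gray):
    """Centroid of the moving region obtained by frame subtraction."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def speed_kmh(c_prev, c_curr, frames_elapsed=1):
    """Speed implied by the centroid displacement between two frames, in km/h."""
    pixels = np.linalg.norm(c_curr - c_prev)
    return pixels * METERS_PER_PIXEL * FPS / frames_elapsed * 3.6
```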

Proceedings ArticleDOI
14 Jul 2014
TL;DR: The proposed method to identify corners without the manual selection of a threshold parameter makes it ideal for corner detection on a wide range of imagery where the quantity and quality of corners is not known a priori such as in video processing applications.
Abstract: Widely-used corner detectors such as Shi-Tomasi and Harris necessitate the manual selection of a threshold parameter in order to identify good quality corners. Recent attempts based on trial-and-error methods for threshold setting are time-consuming, making them unsuitable for low-cost and embedded video processing applications. In this paper we propose a novel automated thresholding technique for the Shi-Tomasi and Harris corner detectors based on an iterative pruning strategy. The proposed pruning strategy involves the rapid extraction of potential corner regions and their evaluation for detecting corners. This pruning strategy is applied iteratively until the required number of corners is identified, without necessitating the selection of a threshold parameter. As the complex corner measure computations of the Shi-Tomasi and Harris detectors are only applied to very small regions selected by the proposed pruning method, significant savings in computation are also achieved. In addition, the pruning strategy is computationally simpler, making it suitable for deployment in low-cost and embedded applications. Our evaluations on the Nios II embedded platform show that the proposed automated thresholding technique achieves an average speedup of 67% in Shi-Tomasi and 51% in Harris, with almost no loss in accuracy. The proposed method identifies corners without the manual selection of a threshold parameter, making it ideal for corner detection on a wide range of imagery where the quantity and quality of corners are not known a priori, such as in video processing applications.
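
A simplified software model of the threshold-free idea is sketched below: the Shi-Tomasi minimum-eigenvalue response is computed once and a candidate threshold is lowered iteratively until the requested number of corners is found. This mimics the goal (no manual threshold) but not the paper's region-pruning strategy or its embedded-platform savings; the starting level and decay factor are assumptions.

```python
# Simplified model of threshold-free corner selection: lower a candidate
# threshold iteratively until the requested number of Shi-Tomasi corners is
# found. Not the paper's region-pruning strategy; parameters are assumptions.
import cv2
import numpy as np

def corners_without_threshold(gray, required=200, start=0.5, decay=0.7):
    response = cv2.cornerMinEigenVal(np.float32(gray), blockSize=3, ksize=3)
    level = start * response.max()
    while True:
        ys, xs = np.nonzero(response > level)
        if len(xs) >= required or level < 1e-6:
            order = np.argsort(response[ys, xs])[::-1][:required]
            return list(zip(xs[order], ys[order]))
        level *= decay   # prune less aggressively on the next pass
```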

Journal ArticleDOI
TL;DR: This paper proposes a novel image mosaic method based on SIFT (Scale Invariant Feature Transform) feature of line segment, aiming to resolve incident scaling, rotation, changes in lighting condition, and so on between two images in the panoramic image mosaic process.
Abstract: This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) feature of line segments, aiming to handle the scaling, rotation, changes in lighting conditions, and so on that arise between two images in the panoramic image mosaic process. The method firstly uses the Harris corner detection operator to detect key points. Secondly, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. The results from experiments based on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling changes.
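
The first and last steps of this pipeline can be sketched with OpenCV as below: Harris corners used as keypoints, SIFT descriptors computed at those corners, and RANSAC used to reject wrong matches. The directed-line-segment description that is the paper's actual contribution is not reproduced; file names and parameters are assumptions.

```python
# Sketch of the first and last steps: Harris corners as keypoints, SIFT
# descriptors at those corners, RANSAC to reject wrong pairs. The paper's
# directed-line-segment description is not reproduced here.
import cv2
import numpy as np

def harris_sift(gray):
    pts = cv2.goodFeaturesToTrack(gray, 1000, 0.01, 7, useHarrisDetector=True, k=0.04)
    kps = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in pts]
    return cv2.SIFT_create().compute(gray, kps)

img1 = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("b.jpg", cv2.IMREAD_GRAYSCALE)
k1, d1 = harris_sift(img1)
k2, d2 = harris_sift(img2)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # drops wrong pairs
```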

Book ChapterDOI
25 Feb 2014
TL;DR: A new resource-aware Harris corner-detection algorithm that can adapt itself to the dynamically varying load on a many-core processor to process the frame within a predefined time interval is presented.
Abstract: Corner-detection techniques are widely used in computer vision — for example in object recognition, to find suitable candidate points for feature registration and matching. Most computer-vision applications have to operate on real-time video sequences, hence maintaining a consistent throughput and high accuracy are important constraints that ensure high-quality object recognition. A high throughput can be achieved by exploiting the inherent parallelism within the algorithm on massively parallel architectures like many-core processors. However, accelerating such algorithms on many-core CPUs poses several challenges, as the achieved speedup depends on the instantaneous load on the processing elements. In this work, we present a new resource-aware Harris corner-detection algorithm for many-core processors. The novel algorithm can adapt itself to the dynamically varying load on a many-core processor to process each frame within a predefined time interval. The results show a 19% improvement in throughput and an 18% improvement in accuracy.

Proceedings ArticleDOI
01 Oct 2014
TL;DR: This paper presents SICK (Scale-Invariant Corner Keypoints), which is a novel method for fast keypoint detection and is faster to compute and more robust than recent state-of-the-art methods.
Abstract: Effective and efficient generation of keypoints from images is the first step of many computer vision applications, such as object matching. The last decade presented us with an arms race toward faster and more robust keypoint detection, feature description, and matching. This resulted in several new algorithms, for example the Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), and Binary Robust Invariant Scalable Keypoints (BRISK). Keypoint detection has been improved using various techniques in most of these algorithms. However, in the search for faster computing, the accuracy of the algorithms is decreasing. In this paper, we present SICK (Scale-Invariant Corner Keypoints), a novel method for fast keypoint detection. Our experimental results show that SICK is faster to compute and more robust than recent state-of-the-art methods.

Proceedings ArticleDOI
08 Jan 2014
TL;DR: This paper presents an image matching and registration method that is invariant to scale, rotation, translation, and illumination changes, known as the Scale Invariant Feature Transform (SIFT).
Abstract: This paper presents an image matching and registration method that is invariant to scale, rotation, translation, and illumination changes. The method is known as the Scale Invariant Feature Transform (SIFT). The algorithm detects and describes image features such as contours, points, and corners. SIFT descriptors are the characteristic signature of the feature. The features calculated from the image to be registered should be distinctive so that they can be matched. The method is useful in object recognition, image mosaicing, 3D reconstruction, and video tracking. The simulation results show that the algorithm works well in all types of cases involving scale and rotation differences, and it also registers objects under occlusion and cluttered backgrounds.

Proceedings ArticleDOI
24 Aug 2014
TL;DR: A human detection method is proposed that combines range image segmentation with human detection based on local image features (Joint HOG features), improving detection performance for occluded humans.
Abstract: This paper proposes a human detection method that combines range image segmentation and human detection based on local image features. The method uses a stereo vision system called Subtraction Stereo, which extracts a range image of the foreground regions. An extracted range image is segmented for each object by mean shift clustering. Human detection based on local features is applied to each segment of the foreground regions to detect humans. In this process, the regions over which a detection window is scanned for extracting local features are restricted. In addition, the size of the detection window is obtained using the distance information of the range image and the camera parameters. Therefore, processing time and false detections can be reduced. Joint HOG features are used as the local image features. When applying Joint HOG based human detection, occlusion of multiple humans is considered in the construction of the classifier and in the integration of detection windows, which improves the detection performance for occluded humans. The proposed method is evaluated by experiments comparing it with a method using Joint HOG features only. Fast human detection at 11 fps is achieved.
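
The detection half alone can be approximated with OpenCV's stock HOG person detector, with the window scale bounded by a distance value such as one taken from a range image. This uses plain HOG rather than the Joint HOG features or the Subtraction Stereo system of the paper, and the distance-to-scale rule below is an assumed stand-in for the camera-parameter-based window sizing.

```python
# Sketch of the detection half only: OpenCV's stock HOG person detector run on
# an image segment, with an assumed distance-to-scale rule standing in for the
# paper's camera-parameter-based window sizing.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(segment_bgr, distance_m):
    # Closer segments show larger people, so they are downscaled more before
    # running the fixed-size HOG window (linear rule assumed for illustration).
    scale = max(1.0, 4.0 / max(distance_m, 0.5))
    small = cv2.resize(segment_bgr, None, fx=1.0 / scale, fy=1.0 / scale)
    boxes, weights = hog.detectMultiScale(small, winStride=(8, 8), padding=(8, 8))
    return [(int(x * scale), int(y * scale), int(w * scale), int(h * scale))
            for (x, y, w, h) in boxes]
```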

Patent
Yunfang Zhu, Li Shuiping, Du Xin
08 Dec 2014
TL;DR: A calibration template image is obtained by photographing a calibration template; corner detection is performed on the image to extract image corners, and a radial distortion parameter is calculated from the extracted corners.
Abstract: Embodiments of the present invention disclose a parameter calibration method. The method includes: acquiring a calibration template image, where the calibration template image is obtained by photographing a calibration template; performing corner detection on the calibration template image to extract image corners; calculating a radial distortion parameter according to the extracted image corners; performing radial distortion correction according to the calculated radial distortion parameter, so as to reconstruct a distortion correction image; and according to a perspective projection relationship between the calibration template and the reconstructed distortion correction image, calculating intrinsic and extrinsic parameters to implement parameter calibration, where the intrinsic and extrinsic parameters include: a matrix of intrinsic parameters, a rotational vector, and a translational vector. The present invention may be applied to parameter calibration for an imaging apparatus such as a camcorder and a camera in a case of a high distortion.
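
For comparison, the standard OpenCV calibration flow (chessboard corner detection followed by joint estimation of intrinsics, distortion, and extrinsics) is sketched below. The patent differs in that it solves the radial distortion from the detected corners first and reconstructs a corrected image before recovering the remaining parameters; the pattern size and file names here are assumptions.

```python
# Standard OpenCV calibration flow for comparison with the patented method:
# chessboard corner detection, then joint estimation of intrinsics, distortion
# and extrinsics. Pattern size and file names are assumptions.
import cv2
import numpy as np
import glob

pattern = (9, 6)                                   # inner corners per row/column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_*.png"):              # hypothetical capture names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("intrinsic matrix:\n", K, "\ndistortion:", dist.ravel())
```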

Journal ArticleDOI
Xin Ye, Jun Gao, Zhijing Zhang, Chao Shao, Guangyuan Shao
TL;DR: In this paper, a sub-pixel calibration method for a microassembly system with coaxial alignment function (MSCA) is proposed, where the relative discrepancy between the two parts can be calculated from image plane coordinates instead of calculating a space transformation matrix.
Abstract: Purpose – The purpose of this paper is to propose a sub-pixel calibration method for a microassembly system with coaxial alignment function (MSCA), because traditional sub-pixel calibration approaches cannot be used in this system. Design/methodology/approach – The in-house microassembly system comprises a six degrees of freedom (6-DOF) large-motion serial robot with microgrippers, a hexapod 6-DOF precision alignment worktable, and a vision system whose microscope optical axis is parallel with the horizontal plane. A prism with a special coating is fixed in front of the objective lens; thus, images of the two parts, namely the target and the base part, can be acquired simultaneously. The relative discrepancy between the two parts can be calculated from image plane coordinates instead of calculating a space transformation matrix. Therefore, the traditional calibration method cannot be applied in this microassembly system. An improved calibration method including the check corner detection solves the disto...

01 Jan 2014
TL;DR: The SURF (Speeded Up Robust Features) algorithm is used here for continuous image recognition and tracking in video, for visual object tracking in surveillance applications.
Abstract: Visual object tracking for surveillance applications is an important task in computer vision. Tracking of an object is a matching problem. One main difficulty in object tracking is choosing suitable features and models for recognizing and tracking the target. The SURF (Speeded Up Robust Features) algorithm is used here for continuous image recognition and tracking in video. The SURF feature descriptor operates by reducing the search space of possible interest points inside the scale space image pyramid. SURF adds a lot of features to improve the speed in every step. The resulting tracked interest points are more repeatable and noise free. SURF is good at handling images with blurring and rotation. Corner detection is well suited to obtaining image features for object tracking and recognition. Interest points in an image are located using a corner detector. By using the Harris corner detection algorithm along with the SURF feature descriptor, tracking efficiency is improved.

Patent
01 Jan 2014
TL;DR: In this paper, a method for measuring distance and height by a vehicle-mounted monocular camera based on a vertical type target is presented, which belongs to the technical field of intelligent vehicle environmental perception.
Abstract: The invention discloses a method for measuring distance and height with a vehicle-mounted monocular camera based on a vertical-type target, and belongs to the technical field of intelligent vehicle environmental perception. Through template matching, candidate point clustering and screening, and accurate positioning on a region of interest of the vertical-type target image, the method achieves detection and positioning of sub-pixel-level corner points, combines a projective geometry model, and creates a mapping relation between an image ordinate and the actual imaging angle, thereby realizing the measurement of distance and height. The intrinsic and extrinsic parameters of the camera do not need to be calibrated, and a calibration board or reference object does not need to be placed repeatedly, which reduces the possibility of error; not only are the operation steps reduced, but the measuring accuracy is also improved. Compared with conventional corner detection, the target point can be detected more accurately, so the computation of the subsequent clustering and screening is reduced; height measurement with the monocular camera is realized on the basis of calculating the actual imaging angle and distance, and the cost is greatly reduced.
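
The ordinate-to-angle mapping that such monocular ranging rests on can be illustrated with a small worked example: for a camera at height h looking slightly downward, a pixel row maps to a ray angle below the horizon, and the ground distance follows from simple trigonometry. The camera height, pitch, and intrinsics below are assumed values, not parameters from the patent.

```python
# Worked example of the ordinate-to-angle mapping used in monocular ranging.
# Camera height, pitch and intrinsics are assumed values, not from the patent.
import math

h = 1.2                   # camera height above the ground (m), assumed
pitch = 0.05              # downward pitch of the optical axis (rad), assumed
fy, cy = 1400.0, 540.0    # assumed focal length and principal point (pixels)

def ground_distance(v):
    """Distance to the ground point imaged at pixel row v (below the horizon)."""
    angle = pitch + math.atan((v - cy) / fy)   # ray angle below the horizontal
    return h / math.tan(angle) if angle > 0 else float("inf")

print(round(ground_distance(700.0), 2), "m")   # e.g. the target's bottom corner row
```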

Proceedings ArticleDOI
29 Sep 2014
TL;DR: This work proposes a hardware architecture for motion detection based on the background subtraction algorithm, implemented in FPGAs; the Sobel edge detection algorithm is used to identify object edges.
Abstract: Image and video processing applications tend to be real time constraints. Applications like visual surveillance, traffic monitoring, vehicle tracking, autonomous navigation, computer vision etc. has the basic requirement of identifying moving object in real time. Hardware based approaches are well suited for real time motion detection as they results in high performance and low cost. For rapid development of real time motion detection systems, we propose hardware architecture for motion detection based on the background subtraction algorithm, which is implemented in FPGAs. The steps involved in the process are: (a) a grey level background image is stored in an SRAM memory in FPGA, (b) color reduction is applied to both the background and current images, (c) Both filtered images are then subtracted using image subtraction (d) the gravity center of the object of resultant image is calculated and sent to a PC (via RS-232 interface) (e) Sobel edge detection algorithm is used to identify object's edges. Identifying object's edge could be extended by classifying objects based on their shapes.