
Showing papers on "Corner detection published in 2019"


Proceedings ArticleDOI
01 Jun 2019
TL;DR: In this article, the authors propose a learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions, and compare it to the state-of-the-art with extensive experiments, showing better performance.
Abstract: We propose a learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions. Event-based cameras offer high temporal resolution, power efficiency, and high dynamic range. However, the properties of event-based data are very different compared to standard intensity images, and simple extensions of corner detection methods designed for these images do not perform well on event-based data. We first introduce an efficient way to compute a time surface that is invariant to the speed of the objects. We then show that we can train a Random Forest to recognize events generated by a moving corner from our time surface. Random Forests are also extremely efficient, and therefore a good choice to deal with the high capture frequency of event-based cameras ---our implementation processes up to 1.6Mev/s on a single CPU. Thanks to our time surface formulation and this learning approach, our method is significantly more robust to abrupt changes of direction of the corners compared to previous ones. Our method also naturally assigns a confidence score for the corners, which can be useful for postprocessing. Moreover, we introduce a high-resolution dataset suitable for quantitative evaluation and comparison of corner detection methods for event-based cameras. We call our approach SILC, for Speed Invariant Learned Corners, and compare it to the state-of-the-art with extensive experiments, showing better performance.

65 citations
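The speed-invariance idea above can be illustrated with a toy sketch: if each pixel stores the arrival rank of its latest event rather than a raw timestamp, a local patch of the surface depends only on the order of events, not on how fast the corner moved. This is a simplified stand-in for the paper's time-surface formulation, not the authors' SILC implementation; all names and sizes are illustrative.

```python
# Toy speed-invariant time surface: store the arrival *rank* of the most
# recent event at each pixel, so the surface depends only on event ordering.

def update_time_surface(surface, counter, x, y):
    """Record the arrival rank of the newest event at (x, y)."""
    counter += 1
    surface[y][x] = counter
    return counter

def normalized_patch(surface, counter, x, y, r=1):
    """Local patch normalized by the global rank -> values in [0, 1]."""
    h, w = len(surface), len(surface[0])
    patch = []
    for dy in range(-r, r + 1):
        row = []
        for dx in range(-r, r + 1):
            yy, xx = y + dy, x + dx
            v = surface[yy][xx] if 0 <= yy < h and 0 <= xx < w else 0
            row.append(v / counter if counter else 0.0)
        patch.append(row)
    return patch

# Feed a short event stream; only the order of events matters for the patch.
surface = [[0] * 5 for _ in range(5)]
counter = 0
for (x, y) in [(1, 1), (2, 1), (2, 2), (3, 2)]:
    counter = update_time_surface(surface, counter, x, y)
print(normalized_patch(surface, counter, 3, 2))
# → [[0.5, 0.0, 0.0], [0.75, 1.0, 0.0], [0.0, 0.0, 0.0]]
```

Because the patch is built from ranks, replaying the same events twice as fast yields the identical patch, which is the property the paper exploits for learning.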


Journal ArticleDOI
16 Jan 2019
TL;DR: In this paper, the authors propose a method to compute the convolution of a linear spatial kernel with the output of an event camera, which operates on the event stream output of the camera directly without synthesizing pseudo-image frames as is common in the literature.
Abstract: Spatial convolution is arguably the most fundamental of two-dimensional image processing operations. Conventional spatial image convolution can only be applied to a conventional image, that is, an array of pixel values (or similar image representation) that are associated with a single instant in time. Event cameras have serial, asynchronous output with no natural notion of an image frame, and each event arrives with a different timestamp. In this letter, we propose a method to compute the convolution of a linear spatial kernel with the output of an event camera. The approach operates on the event stream output of the camera directly without synthesising pseudoimage frames as is common in the literature. The key idea is the introduction of an internal state that directly encodes the convolved image information, which is updated asynchronously as each event arrives from the camera. The state can be read off as often as and whenever required for use in higher level vision algorithms for real-time robotic systems. We demonstrate the application of our method to corner detection, providing an implementation of a Harris corner-response “state” that can be used in real time for feature detection and tracking on robotic systems.

50 citations
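The internal-state idea described above admits a very small sketch: for each incoming event, add the polarity-signed kernel centred at the event's pixel, and read the state whenever a downstream algorithm needs it. This is illustrative Python, not the letter's implementation; the box kernel and variable names are made up.

```python
# Event-driven convolution state: the state directly encodes the convolved
# image and is updated asynchronously, one event at a time, with no frames.

def apply_event(state, kernel, x, y, polarity):
    """Update the convolved-image state for one event at (x, y)."""
    h, w = len(state), len(state[0])
    k = len(kernel)      # assume a square, odd-sized kernel
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                state[yy][xx] += polarity * kernel[dy + r][dx + r]
    return state

# A 3x3 box kernel; two positive events land next to each other.
box = [[1, 1, 1]] * 3
state = [[0] * 5 for _ in range(5)]
apply_event(state, box, 2, 2, +1)
apply_event(state, box, 3, 2, +1)
print(state[2])  # → [0, 1, 2, 2, 1]
```

Swapping the box for a Harris-style gradient kernel pair gives the flavour of the corner-response "state" the letter demonstrates.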


Journal ArticleDOI
TL;DR: It is shown that the proposed method has high corner resolution (the ability to accurately detect neighboring corners), and a corresponding corner resolution constant is derived; the method is also less sensitive to local variations and noise on the contour, making false corner detection less likely.
Abstract: Image corner detection is very important in the fields of image analysis and computer vision. Curvature calculation techniques are used in many contour-based corner detectors. We identify that existing calculation of curvature is sensitive to local variation and noise in the discrete domain and does not perform well when corners are closely located. In this paper, discrete curvature representations of single and double corner models are investigated and obtained. A number of model properties have been discovered, which help us detect corners on contours. It is shown that the proposed method has a high corner resolution (the ability to accurately detect neighboring corners), and a corresponding corner resolution constant is also derived. Meanwhile, this method is less sensitive to any local variations and noise on the contour; and false corner detection is less likely to occur. The proposed detector is compared with seven state-of-the-art detectors. Three test images with ground truths are used to assess the detection capability and localization accuracy of these methods in cases with noise-free and different noise levels; 24 images with various scenes without ground truths are used to evaluate their repeatability under affine transformation, JPEG compression, and noise degradations. The experimental results show that our proposed detector attains a better overall performance.

35 citations
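The sensitivity of discrete curvature estimation mentioned above can be seen with the classic k-cosine measure, a common discrete curvature proxy (not the paper's single- and double-corner model): the cosine of the angle at contour point i between the vectors to points i-k and i+k.

```python
# k-cosine curvature proxy on a discrete contour: near -1 on straight runs,
# near 0 (or above) at corners. Illustrative only.
import math

def k_cosine(contour, i, k=2):
    """Cosine of the angle at contour[i] spanned by points i-k and i+k."""
    (x, y) = contour[i]
    (xa, ya) = contour[(i - k) % len(contour)]
    (xb, yb) = contour[(i + k) % len(contour)]
    ax, ay = xa - x, ya - y
    bx, by = xb - x, yb - y
    dot = ax * bx + ay * by
    return dot / (math.hypot(ax, ay) * math.hypot(bx, by))

# An L-shaped contour: index 3 is the corner and scores highest.
contour = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
corner_idx = max(range(2, 5), key=lambda i: k_cosine(contour, i, k=2))
print(corner_idx)  # → 3
```

Note how the choice of k smooths noise but merges neighbouring corners when they are closer than k samples apart — exactly the resolution trade-off the paper targets.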


Journal ArticleDOI
TL;DR: The proposed method to count wheat ears based on fully convolutional network (FCN) and Harris corner detection achieves good results and provides an important technique for studying wheat phenotyping.
Abstract: Accurate counting of wheat ears in field conditions is vital to predict yield and for crop breeding. To quickly and accurately obtain the number of wheat ears in a field, we propose herein a method to count wheat ears based on a fully convolutional network (FCN) and Harris corner detection. The technical procedure consists essentially of 1) constructing a dataset of wheat-ear images from acquired red-green-blue (RGB) images; 2) training an FCN as the wheat-ear segmentation model by using the constructed image dataset; 3) preparing testing images and inputting them into the segmentation model to get the initial segmentation results; 4) binarizing the initial segmentation by using the Otsu algorithm (to facilitate subsequent processing); and 5) applying Harris corner detection after extracting the wheat-ear skeleton to obtain the number of wheat ears in the images. The segmentation results show that the proposed FCN-based segmentation model segments wheat ears with an average accuracy of 0.984 and at low computational cost. An average of only 0.033 s is required to segment a 256 × 256-pixel wheat-ear image. Moreover, the segmentation result is improved by nearly 10% compared with previous segmentation methods under conditions of wheat-ear occlusion, leaf occlusion, uneven illumination, and soil disturbance. Subsequently, the proposed counting method achieves good results, with an average accuracy of 0.974, a coefficient of determination (R^2) of 0.983, and a root mean square error (RMSE) of 14.043. These metrics are all improved by 10% compared with the previous methods. These results show that the proposed method accurately counts wheat ears even under conditions of wheat-ear adhesion. Furthermore, the results provide an important technique for studying wheat phenotyping.

33 citations
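Step 4 above binarizes the FCN output with the Otsu algorithm. A compact pure-Python Otsu, maximizing the between-class variance over a grayscale histogram, gives the flavour (illustrative, not the authors' code):

```python
# Otsu's method: pick the threshold that maximizes between-class variance
# w0 * w1 * (m0 - m1)^2 over the intensity histogram.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # background pixel count so far
    sum0 = 0.0  # background intensity sum so far
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy data: dark soil/background near 30, bright ears near 200.
pixels = [30] * 50 + [35] * 30 + [200] * 20 + [210] * 10
t = otsu_threshold(pixels)
print(t)  # → 35
```

Any threshold between the two modes separates them; Otsu picks it automatically, which is why it is a convenient bridge between the soft FCN output and the skeleton/corner-counting steps.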


Proceedings ArticleDOI
01 Nov 2019
TL;DR: The proposed G-SAE maintenance algorithm and corner candidate selection algorithm greatly enhance the real-time performance for corner detection, while the corner candidate refinement algorithm maintains the accuracy of performance by using an improved event-based Harris detector.
Abstract: Recently, the emerging bio-inspired event cameras have demonstrated potentials for a wide range of robotic applications in dynamic environments. In this paper, we propose a novel fast and asynchronous event-based corner detection method which is called FA-Harris. FA-Harris consists of several components, including an event filter, a Global Surface of Active Events (G-SAE) maintaining unit, a corner candidate selecting unit, and a corner candidate refining unit. The proposed G-SAE maintenance algorithm and corner candidate selection algorithm greatly enhance the real-time performance for corner detection, while the corner candidate refinement algorithm maintains the accuracy of performance by using an improved event-based Harris detector. Additionally, FA-Harris does not require artificially synthesized event-frames and can operate on asynchronous events directly. We implement the proposed method in C++ and evaluate it on public Event Camera Datasets. The results show that our method achieves approximately 8× speed-up when compared with previously reported event-based Harris detector, and with no compromise on the accuracy of performance.

32 citations
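FA-Harris builds on a Surface of Active Events (SAE): per pixel, the timestamp of the latest event. The sketch below maintains such a surface and uses a crude recency test as a stand-in for the paper's G-SAE-based candidate selection; the actual method then refines candidates with an event-based Harris score. All names and thresholds here are illustrative.

```python
# Toy SAE plus a cheap corner-candidate test: is the new event more recent
# than most of its neighbourhood? (Stand-in for G-SAE selection.)

def update_sae(sae, x, y, t):
    sae[y][x] = t

def is_candidate(sae, x, y, t, r=1, frac=0.75):
    """True if the event at (x, y, t) is newer than `frac` of its
    neighbourhood -- a crude stand-in for candidate selection."""
    h, w = len(sae), len(sae[0])
    newer = total = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dx == 0 and dy == 0:
                continue
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                total += 1
                if t > sae[yy][xx]:
                    newer += 1
    return total > 0 and newer / total >= frac

sae = [[0] * 5 for _ in range(5)]
for i, (x, y) in enumerate([(1, 1), (2, 1), (2, 2)], start=1):
    update_sae(sae, x, y, i)
print(is_candidate(sae, 2, 2, 3))  # → True
```

The point of the cheap pre-test is the same as in the paper: discard most events before the (more expensive) Harris-style refinement ever runs.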


Proceedings ArticleDOI
01 Feb 2019
TL;DR: The results reflect that the combination of SURF and MSER performs better compared to other algorithm combinations when an image is scaled and rotated, however there are no good matches when there is also a pose variation.
Abstract: Feature extraction and matching are in the limelight in almost all fields, ranging from biomedical to exploratory research, and have ubiquitous applications in a world moving at a breathtaking pace towards automation. The algorithms used for feature extraction are application specific, i.e. one that yields better performance for face recognition does not guarantee the same performance for lane detection. A lot of time is invested in identifying the algorithms best suited to an application. In the interest of time, an attempt has been made to develop a few good algorithm combinations that assist in the selection of algorithms. The features of the input image and the target image are extracted, described, and matched using various combinations of algorithms such as SURF, FAST, MSER, and the Harris corner detector. The combinations are simulated in MATLAB using the computer vision and image processing toolboxes, and a Graphical User Interface (GUI) is developed for a better user experience. Face recognition is considered as an example for the simulation. The results reflect that the combination of SURF and MSER performs better than the other algorithm combinations when an image is scaled and rotated; however, there are no good matches when there is also a pose variation. Proper thresholding of ‘Match Threshold’, ‘Reject Ratio’, and ‘Inlier Threshold’ must be carried out through trial and error to get better results.

23 citations
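The 'Match Threshold' and 'Reject Ratio' tuning mentioned above maps onto two standard tests in descriptor matching: an absolute distance cutoff and a nearest-to-second-nearest ratio test. A minimal sketch with toy float descriptors (parameter names are illustrative, not the MATLAB toolbox API):

```python
# Nearest-neighbour descriptor matching with a distance cutoff and a
# Lowe-style ratio test to reject ambiguous matches.
import math

def match_descriptors(desc_a, desc_b, max_dist=0.5, reject_ratio=0.8):
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        best_d, best_j = dists[0]
        second_d = dists[1][0] if len(dists) > 1 else float("inf")
        # keep a match only if it is both close and unambiguous
        if best_d <= max_dist and best_d <= reject_ratio * second_d:
            matches.append((i, best_j))
    return matches

a = [[0.0, 0.0], [1.0, 1.0]]
b = [[0.1, 0.0], [0.9, 1.1], [5.0, 5.0]]
print(match_descriptors(a, b))  # → [(0, 0), (1, 1)]
```

Tightening `max_dist` trades recall for precision, and lowering `reject_ratio` discards matches whose runner-up is almost as close — the same trial-and-error trade-offs the paper describes.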


Journal ArticleDOI
Ali Almagbile1
TL;DR: A new testing procedure based on features from accelerated segment test (FAST) algorithms is introduced to detect crowd features from UAV images taken from different camera orientations and positions, and the results show that the proposed algorithms are able to extract crowd features from different UAV images.

Abstract: With rapid developments in platform and sensor technology in terms of digital cameras and video recordings, crowd monitoring has attracted considerable attention in many disciplines such as psycho...

22 citations


Journal ArticleDOI
TL;DR: Improved ORB (Oriented FAST and Rotated BRIEF) based real-time image registration and target localization algorithm for high-resolution video images is proposed, focusing on the parallelization of three of the most time-consuming parts: improved ORB feature extraction, feature matching based on Hamming distance for matching rough points, and Random Sample Consensus algorithm for precise matching and achieving transformation model parameters.
Abstract: High-resolution video images contain a huge amount of data, so the real-time capability of image registration and target localization algorithms is difficult to achieve when they run on central processing units (CPUs). In this paper, an improved ORB-based (Oriented FAST and Rotated BRIEF; FAST, “Features from Accelerated Segment Test”, is a corner detection method used for feature point extraction, and BRIEF, “Binary Robust Independent Elementary Features”, is a binary string used to describe features) real-time image registration and target localization algorithm for high-resolution video images is proposed. We focus on the parallelization of the three most time-consuming parts: improved ORB feature extraction, feature matching based on Hamming distance for rough point matching, and the Random Sample Consensus algorithm for precise matching and obtaining the transformation model parameters. The realization of a Compute Unified Device Architecture (CUDA)-based real-time image registration and target localization parallel algorithm for high-resolution video images is also emphasized. The experimental results show that, for similar registration and localization quality, the algorithm implemented with CUDA is roughly 20 times faster than the CPU implementation, meeting the requirement of real-time processing.

21 citations
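The rough-matching stage above compares binary BRIEF descriptors by Hamming distance, which in software is an XOR followed by a popcount. A sketch with descriptors packed as Python ints (toy values):

```python
# Hamming distance between packed binary descriptors: XOR, then count bits.

def hamming(d1: int, d2: int) -> int:
    """Number of differing bits between two packed binary descriptors."""
    return bin(d1 ^ d2).count("1")

def best_match(query: int, candidates):
    """Index of the candidate with the smallest Hamming distance."""
    return min(range(len(candidates)), key=lambda j: hamming(query, candidates[j]))

q = 0b10110010
c = [0b11110000, 0b10110011, 0b00001111]
print([hamming(q, x) for x in c], best_match(q, c))  # → [2, 1, 6] 1
```

Because the whole comparison is bitwise, it parallelizes trivially — one thread per descriptor pair — which is why this stage is such a good fit for the CUDA port described in the paper.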


Posted Content
TL;DR: A learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions and significantly more robust to abrupt changes of direction of the corners compared to previous ones is proposed.
Abstract: We propose a learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions. Event-based cameras offer high temporal resolution, power efficiency, and high dynamic range. However, the properties of event-based data are very different compared to standard intensity images, and simple extensions of corner detection methods designed for these images do not perform well on event-based data. We first introduce an efficient way to compute a time surface that is invariant to the speed of the objects. We then show that we can train a Random Forest to recognize events generated by a moving corner from our time surface. Random Forests are also extremely efficient, and therefore a good choice to deal with the high capture frequency of event-based cameras ---our implementation processes up to 1.6Mev/s on a single CPU. Thanks to our time surface formulation and this learning approach, our method is significantly more robust to abrupt changes of direction of the corners compared to previous ones. Our method also naturally assigns a confidence score for the corners, which can be useful for postprocessing. Moreover, we introduce a high-resolution dataset suitable for quantitative evaluation and comparison of corner detection methods for event-based cameras. We call our approach SILC, for Speed Invariant Learned Corners, and compare it to the state-of-the-art with extensive experiments, showing better performance.

21 citations


Journal ArticleDOI
TL;DR: Experimental results illustrate that this gesture recognition method, based on improved features from accelerated segment test (FAST) corner detection, has strong anti-interference ability against complex backgrounds and performs well.

Abstract: Mobile-terminal gesture recognition is extremely challenging, not only because limited computing resources make it complicated to identify feature points, but also because a complex background can easily affect the recognition result. This study proposes a gesture recognition method based on improved features from accelerated segment test (FAST) corner detection. First, in order to eliminate the effects of complex backgrounds and lighting, the intersection of two frame images is obtained through background subtraction and a multi-colour space to detect the hand. Second, in order to improve the performance of the algorithm, an improved FAST corner detection method combined with a back propagation neural network (BPNN) is proposed in accordance with the characteristics of fingertips. Subsequently, the feature points are screened by non-maximum suppression. Finally, gesture recognition is realised by matching feature points. Experimental results illustrate that this method has strong anti-interference ability against complex backgrounds and performs well.

17 citations
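The FAST test underlying the method above declares a pixel a corner candidate when n contiguous pixels on a 16-pixel Bresenham circle are all brighter, or all darker, than the centre by a threshold. A direct sketch using the standard circle offsets (the threshold and test image are toy values):

```python
# FAST segment test: n contiguous circle pixels all brighter or all darker
# than the centre by at least `thresh`.

CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def fast_test(img, x, y, thresh=20, n=9):
    c = img[y][x]
    # classify each circle pixel: +1 brighter, -1 darker, 0 similar
    labels = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        labels.append(1 if p >= c + thresh else (-1 if p <= c - thresh else 0))
    run, prev = 0, 0
    for lab in labels + labels:      # doubled list handles circular wrap-around
        if lab != 0 and lab == prev:
            run += 1
        elif lab != 0:
            run = 1
        else:
            run = 0
        prev = lab
        if run >= n:
            return True
    return False

# A 7x7 toy image with a bright quadrant: its corner at (3, 3) fires the
# test, while the same pixel in a flat image does not.
img = [[255 if xx >= 3 and yy >= 3 else 0 for xx in range(7)] for yy in range(7)]
flat = [[0] * 7 for _ in range(7)]
print(fast_test(img, 3, 3), fast_test(flat, 3, 3))  # → True False
```

The paper's improvement replaces this brute-force scan of the 16 pixels with a gravitation-inspired grouping and a BPNN filter; the basic segment test above is the baseline being improved.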


Journal ArticleDOI
TL;DR: Computational simulations demonstrate that the proposed CNN approach presents competitive results in comparison with other algorithms in terms of accuracy and robustness.

Journal ArticleDOI
23 Apr 2019-Sensors
TL;DR: Experimental results show that the proposed building corner detection approach achieves an F-measure of 0.83 in the test image set and outperforms a number of state-of-the-art corner detectors by a large margin.
Abstract: In aerial images, corner points can be detected to describe the structural information of buildings for city modeling, geo-localization, and so on. For this specific vision task, the existing generic corner detectors perform poorly, as they are incapable of distinguishing corner points on buildings from those on other objects such as trees and shadows. Recently, fully convolutional networks (FCNs) have been developed for semantic image segmentation that are able to recognize a designated kind of object through a training process with a manually labeled dataset. Motivated by this achievement, an FCN-based approach is proposed in the present work to detect building corners in aerial images. First, a DeepLab model comprised of improved FCNs and fully-connected conditional random fields (CRFs) is trained end-to-end for building region segmentation. The segmentation is then further improved by using a morphological opening operation to increase its accuracy. Corner points are finally detected on the contour curves of building regions by using a scale-space detector. Experimental results show that the proposed building corner detection approach achieves an F-measure of 0.83 in the test image set and outperforms a number of state-of-the-art corner detectors by a large margin.

Journal ArticleDOI
TL;DR: In this paper, a three-step procedure using wavelet de-noising, shape from shading (SFS) method and Harris corner detector (HCD) for improved defect detection from radiographic images was described.
Abstract: A three-step procedure is described which utilises wavelet de-noising, shape from shading (SFS) method and the Harris corner detector (HCD) for improved defect detection from radiographic images of...

Journal ArticleDOI
TL;DR: In this article, a system calibration method for a trifocal sensor, which is sensitive to different spectral bands, is presented, which consists of a stereo camera operating in the visual (VIS) spectrum and a thermal imaging camera, operating in Long-Wave Infrared (LWIR) spectrum.
Abstract: This paper presents a system calibration method for a trifocal sensor that is sensitive to different spectral bands. The trifocal camera system consists of a stereo camera operating in the visual (VIS) spectrum and a thermal imaging camera operating in the Long-Wave Infrared (LWIR) spectrum. Intrinsic parameters and spatial alignment are determined simultaneously. A passive aluminium chessboard is used as the calibration target. Corner detection and subsequent bundle adjustment are performed on all synchronized image triplets. The remaining reprojection errors are in the sub-pixel range and enable the system to generate metric point clouds, colored with thermal intensities, in real time.

Journal ArticleDOI
TL;DR: This paper presents a three-dimensional graphical simulation (virtual world) that replicates a complete aerial refueling scenario including the tanker, the tanker’s stereo vision system, and an app that automates the entire process.
Abstract: This Paper presents a three-dimensional graphical simulation (virtual world) that replicates a complete aerial refueling scenario including the tanker, the tanker’s stereo vision system, and an app...

Journal ArticleDOI
TL;DR: Numerical experiments demonstrate that the proposed ACRA corner detection algorithm outperforms the CPDA approach and other seven state-of-the-art methods in terms of the repeatability and localization error evaluation metrics.
Abstract: As one of the most significant image local features, corner is widely utilized in many computer vision applications. A number of contour-based corner detection algorithms have been proposed over the last decades, among which the chord-to-point distance accumulation (CPDA) corner detector is reported to produce robust performance in corner detection, especially compared with curvature scale-space (CSS) based corner detectors, which are sensitive to local variation and noise on the contour. In this paper, we investigate the CPDA algorithm in terms of its limitations, and then propose the altitude-to-chord ratio accumulation (ACRA) corner detector based on CPDA approach. Altitude-to-chord ratio is insensitive to the selection of chord length compared with chord-to-point distance, which allows us utilize a single chord instead of the three chords used in CPDA algorithm. Besides, we replace the maximum normalization used in CPDA algorithm with the linear normalization to avoid the uneven data projection. Numerical experiments demonstrate that the proposed ACRA corner detection algorithm outperforms the CPDA approach and other seven state-of-the-art methods in terms of the repeatability and localization error evaluation metrics.
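The ACRA measure can be sketched straight from its description: at contour point p_i, take the chord between p_{i-L} and p_{i+L} and use the ratio of the altitude (point-to-chord distance) to the chord length as the corner cue; straight runs give values near 0, corners give larger values. This toy version uses a single chord and no accumulation over scales, so it is an illustration of the geometry, not the full detector.

```python
# Altitude-to-chord ratio at a contour point (single chord, toy version).
import math

def acr(contour, i, L=3):
    x, y = contour[i]
    x1, y1 = contour[max(i - L, 0)]
    x2, y2 = contour[min(i + L, len(contour) - 1)]
    chord = math.hypot(x2 - x1, y2 - y1)
    if chord == 0:
        return 0.0
    # point-to-line distance via the cross product
    altitude = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / chord
    return altitude / chord

# An L-shaped contour: the ratio peaks at the corner, index 4.
contour = [(i, 0) for i in range(5)] + [(4, j) for j in range(1, 5)]
vals = [acr(contour, i) for i in range(2, 7)]
print(vals.index(max(vals)) + 2)  # → 4
```

Being a ratio of two lengths measured on the same chord, the cue changes little when L changes, which is the insensitivity to chord length the paper exploits to replace CPDA's three chords with one.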

Journal ArticleDOI
TL;DR: Experiments on distance estimation show that the method achieves estimation precision with less than 5% error within 500 cm, and it has many advantages, such as easy implementation, fast processing speed, high estimation accuracy, and robustness.

Abstract: Obstacle detection and distance estimation play an important role in vision navigation for an inspection robot for high-voltage transmission lines that walks along the overhead ground wire. For images from the inspection site, Harris corner matching is used to detect background motion caused by camera jitter, the motion is eliminated through motion compensation, and a method for selecting effective matching point pairs is proposed. The frame-difference image and the binary image are then processed together, and complete moving objects are segmented. On the basis of the obstacle detection, an algorithm for estimating the distance to the obstacles is put forward. The relation between the distance to be estimated and the coordinate difference between both edges of the ground wire in the image region where the objects lie is obtained, so the distance can be acquired by using the pose of the camera relative to the wire together with the pin-hole imaging model. Experiments on distance estimation show that the method achieves estimation precision with less than 5% error within 500 cm, and it has many advantages, such as easy implementation, fast processing speed, high estimation accuracy, and robustness.
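The distance relation above follows from the pin-hole model: an object of known physical width W, imaged with focal length f (expressed in pixels) at a pixel width w, lies at range Z = f·W / w. The numbers below are made up for illustration, not taken from the paper.

```python
# Pin-hole range estimate from a known object width: Z = f * W / w.

def distance_from_width(f_px, width_m, width_px):
    """Range in metres from focal length (px), physical width (m),
    and observed pixel width (px)."""
    return f_px * width_m / width_px

# e.g. ground-wire edges 0.03 m apart, imaged 12 px apart at f = 800 px:
z = distance_from_width(800, 0.03, 12)
print(z)  # → 2.0
```

The paper's formulation additionally folds in the camera's pose relative to the wire, but the inverse-proportionality between pixel separation and range is the core of the estimate.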

Journal ArticleDOI
TL;DR: An improved features from accelerated segment test (FAST) corner-detection method for suppressing SAR image speckle noise is proposed, in which clustered corners are eliminated using non-maximum suppression.

Abstract: An improved features from accelerated segment test (FAST) corner-detection method for suppressing SAR image speckle noise is proposed here. The direction of the corner candidate is first computed by the law of universal gravitation and is introduced into corner detection to divide the 16 pixels around the corner candidate into two groups without repeated searching. Then, the similarity between the corner candidate and its surrounding pixels is computed to suppress points affected by SAR speckle noise. Finally, clustered corners are eliminated using non-maximum suppression. The experimental results show that the authors' method has a better suppression effect compared with the FAST-8 and SIFT corner-detection methods.
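The final step above thins out densely clustered detections with non-maximum suppression: keep a response only if it is the maximum within its neighbourhood. A plain-Python sketch over a toy response map:

```python
# Non-maximum suppression on a 2D response map: keep (x, y, v) only when v
# is the neighbourhood maximum within radius r.

def non_max_suppression(resp, r=1):
    h, w = len(resp), len(resp[0])
    keep = []
    for y in range(h):
        for x in range(w):
            v = resp[y][x]
            if v <= 0:
                continue
            is_max = all(
                v >= resp[yy][xx]
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))
                if (yy, xx) != (y, x)
            )
            if is_max:
                keep.append((x, y, v))
    return keep

resp = [[0, 0, 0, 0],
        [0, 5, 4, 0],
        [0, 3, 9, 0],
        [0, 0, 0, 0]]
print(non_max_suppression(resp))  # → [(2, 2, 9)]
```

On speckle-heavy SAR responses this is exactly what collapses a cluster of near-duplicate detections into a single corner.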

Journal ArticleDOI
TL;DR: This research presents a novel automatic method to estimate 3D affine measurements from a single perspective image, which has been used to perform tracking and manipulation of an object in a real-time environment.

Abstract: The approximation of 3D geometry from a single image is a particular instance of 3D reconstruction from several images. Advance information or user input must be provided to retrieve or infer depth information. This research presents a novel automatic method to estimate 3D affine measurements from a single perspective image. The minimal geometric information is determined from the image of the scene: a vanishing line and a vanishing point are the two pieces of information required for reconstruction, and the affine scene structure can be reconstructed from the image of the scene. The proposed approach has many advantages: there is no need for the camera's intrinsic matrix or the explicit relation between camera and scene (pose), no need for selecting Vx, Vy, Vz points, and a novel dexterous robot architecture for manipulation. In this paper, the following approaches have been implemented: (1) the three sets of vanishing points in the X, Y, and Z axes; (2) the vanishing lines of the image; (3) the distance between planes parallel to the reference plane; (4) image warping; (5) corner detection (an algorithm has been implemented to make the process automatic). An indigenous data set has been used for the experiment. The results are compared with Zhang- and ArUco-based calibration. This novel approach has been used to perform tracking and manipulation of an object in a real-time environment.


Posted Content
TL;DR: FA-Harris, as discussed by the authors, is a fast and asynchronous event-based corner detection method consisting of several components, including an event filter, a Global Surface of Active Events (G-SAE) maintaining unit, a corner candidate selecting unit, and a corner candidate refining unit.
Abstract: Recently, the emerging bio-inspired event cameras have demonstrated potentials for a wide range of robotic applications in dynamic environments. In this paper, we propose a novel fast and asynchronous event-based corner detection method which is called FA-Harris. FA-Harris consists of several components, including an event filter, a Global Surface of Active Events (G-SAE) maintaining unit, a corner candidate selecting unit, and a corner candidate refining unit. The proposed G-SAE maintenance algorithm and corner candidate selection algorithm greatly enhance the real-time performance for corner detection, while the corner candidate refinement algorithm maintains the accuracy of performance by using an improved event-based Harris detector. Additionally, FA-Harris does not require artificially synthesized event-frames and can operate on asynchronous events directly. We implement the proposed method in C++ and evaluate it on public Event Camera Datasets. The results show that our method achieves approximately 8x speed-up when compared with previously reported event-based Harris detector, and with no compromise on the accuracy of performance.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A new image feature extraction algorithm is proposed in this paper: the image is first screened twice, the Harris corner detection algorithm is then used to detect corner locations, and the corner positions are optimized to sub-pixel level by an iterative algorithm.

Abstract: The Harris corner detection algorithm can detect stable image feature points. However, problems such as a large amount of computation, low positioning accuracy, and detecting only corner positions without a feature descriptor limit the application of the algorithm. In order to solve these problems, a new image feature extraction algorithm is proposed in this paper: the image is first screened twice, the Harris corner detection algorithm is then used to detect corner positions, and the corner positions are optimized to sub-pixel level by an iterative algorithm. Finally, the rotation invariant fast extraction (RIFD) descriptor is used to represent the feature point information. The experimental results show that the proposed algorithm effectively overcomes the shortcomings of the Harris algorithm and can quickly and accurately extract stable features in the image. It has great application prospects in many image matching systems.

Journal ArticleDOI
TL;DR: This study investigated the development of visual recognition and stereoscopic imaging technology, applying them to the construction of an image processing system for autonomous underwater vehicles (AUVs) and compared the analysis results obtained from various brightness and turbidity conditions in out-of-water and underwater environments.
Abstract: This study investigated the development of visual recognition and stereoscopic imaging technology, applying them to the construction of an image processing system for autonomous underwater vehicles (AUVs). For the proposed visual recognition technology, a Hough transform was combined with an optical flow algorithm to detect the linear features and movement speeds of dynamic images; the proposed stereoscopic imaging technique employed a Harris corner detector to estimate the distance of the target. A physical AUV was constructed with a wide-angle lens camera and a binocular vision device mounted on the bow to provide image input. Subsequently, a simulation environment was established in Simscape Multibody and used to control the post-driver system of the stern, which contained horizontal and vertical rudder planes as well as the propeller. In static testing at National Cheng Kung University, physical targets were placed in a stability water tank; the study compared the analysis results obtained from various brightness and turbidity conditions in out-of-water and underwater environments. Finally, the dynamic testing results were combined with a fuzzy controller to output the real-time responses of the vehicle regarding the angles, rates of the rudder planes, and the propeller revolution speeds at various distances.

Journal ArticleDOI
TL;DR: In this paper, a multi-threshold corner detection and region matching algorithm based on texture classification is proposed to address the unreasonable distributed corners in single threshold Harris detection and expensive computation cost incurred from image region matching performed by normalized cross correlation.
Abstract: In order to address the unreasonable distributed corners in single threshold Harris detection and expensive computation cost incurred from image region matching performed by normalized cross correlation (NCC) algorithm, multi-threshold corner detection and region matching algorithm based on texture classification are proposed. Firstly, the input image is split into sub-blocks which are classified into four different categories based on the specific texture: flat, weak, middle texture and strong regions. Subsequently, an algorithm is suggested to decide threshold values for different texture type, and interval calculation for the sub-blocks is performed to improve operation efficiency in the algorithm implementation. Finally, based on different texture characteristics, Census, interval-sampled NCC, and complete NCC are employed to perform image matching. As demonstrated by the experimental results, corner detection based on texture classification is capable to obtain a reasonable corner number as well as a more uniform spatial distribution, when compared to the traditional Harris algorithm. If combined with the interval classification, speedup for texture classification is approximately 30%. In addition, the matching algorithm based on texture classification is capable to improve the speed of 26.9%~29.9% while maintaining the comparable accuracy of NCC. In general, for better splicing quality, the overall stitching speed is increased by 14.1%~18.4%. Alternatively, for faster speed consideration, the weak texture region which accounts for a large proportion of an image and provides less effective information can be ignored, for which 23.9%~28.4% speedup can be achieved at the cost of a 1.9%~3.9% reduction in corner points. Therefore, the proposed algorithm is made potentially suited to uniformly distributed corner point calculation and high computation efficiency requirement scenarios.
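The matching stage above rests on normalized cross correlation: subtract each patch's mean before correlating, then normalize, which makes the score invariant to brightness and contrast offsets. A small sketch on flattened patches:

```python
# Normalized cross correlation (NCC) of two equal-length patches; the score
# lies in [-1, 1] and ignores additive brightness shifts.
import math

def ncc(a, b):
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

p = [10, 20, 30, 40]
q = [110, 120, 130, 140]   # same pattern, shifted brightness
r = [40, 30, 20, 10]       # reversed pattern
print(ncc(p, q), ncc(p, r))  # → 1.0 -1.0
```

Full NCC is expensive (the motivation for the paper's texture-dependent fallback to Census and interval-sampled NCC); note that the mean subtraction and normalization dominate the per-patch cost.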

Journal ArticleDOI
TL;DR: This work uses the Viola-Jones algorithm to detect the eyes, and detects eye blinks via background subtraction and gradient-based corner detection; by tracking the blink rate, it can detect common cases of fatigued behaviour linked with prolonged computer use.
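As a rough illustration of the background-subtraction part of such a pipeline (not the authors' actual method; `eye_box`, `alpha`, and `thresh` are invented parameters), a running-average background model over the eye region can flag blink-like intensity changes:

```python
import numpy as np

def blink_events(frames, eye_box, alpha=0.1, thresh=25.0):
    """Count blink-like events via background subtraction in an eye region.

    frames:  iterable of 2-D grayscale arrays
    eye_box: (y0, y1, x0, x1) region assumed to contain one eye
    """
    y0, y1, x0, x1 = eye_box
    bg = None
    events = 0
    active = False
    for f in frames:
        roi = f[y0:y1, x0:x1].astype(float)
        if bg is None:
            bg = roi.copy()        # first frame seeds the background model
            continue
        diff = np.abs(roi - bg).mean()       # mean change in the eye region
        bg = (1 - alpha) * bg + alpha * roi  # running-average background
        if diff > thresh and not active:     # rising edge = one blink event
            events += 1
            active = True
        elif diff <= thresh:
            active = False
    return events
```

Dividing the event count by the elapsed time gives the blink rate, which is the quantity the TL;DR says is tracked to infer fatigue.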

Journal ArticleDOI
TL;DR: A method for segmenting overlapping fish images in aquaculture that uses the shape factor to determine whether an overlap exists and the improved Zhang-Suen thinning algorithm to extract the skeleton, achieving better segmentation accuracy and effectiveness.
Abstract: Individual fish segmentation is a prerequisite for feature extraction and object identification in any machine vision system. In this paper, a method for segmenting overlapping fish images in aquaculture is proposed. First, the shape factor is used to determine whether an overlap exists in the picture. Then, corner points are extracted using the curvature scale space algorithm, and the skeleton is obtained with the improved Zhang-Suen thinning algorithm. Finally, intersecting points are obtained and the overlapped region is segmented. The results show that the average error rate and average segmentation efficiency of this method were 10% and 90%, respectively. Compared with the traditional watershed method, the separation points are accurate and the segmentation accuracy is high. Thus, the proposed method achieves better performance in segmentation accuracy and effectiveness. This method can be applied to multi-target segmentation and fish behavior analysis systems, and it can effectively improve recognition precision. Keywords: aquaculture, image processing, overlapping segmentation, corner detection, improved Zhang-Suen algorithm DOI: 10.25165/j.ijabe.20191206.3217 Citation: Zhou C, Lin K, Xu D M, Liu J T, Zhang S, Sun C H, et al. Method for segmentation of overlapping fish images in aquaculture. Int J Agric & Biol Eng, 2019; 12(6): 135–142.
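For reference, the classic (unimproved) Zhang-Suen thinning algorithm that the paper builds on looks roughly like this — a straightforward sketch of the standard two-subiteration scheme, not the paper's improved variant:

```python
import numpy as np

def zhang_suen_thin(img):
    """Classic Zhang-Suen thinning of a binary image (1 = foreground)."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two alternating subiterations
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # 8-neighbours P2..P9 in clockwise order from north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    B = sum(p)  # number of foreground neighbours
                    # A = number of 0->1 transitions around the pixel
                    A = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if not (2 <= B <= 6 and A == 1):
                        continue
                    if step == 0:
                        if p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0:
                            to_delete.append((y, x))
                    else:
                        if p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0:
                            to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img
```

Applied to a binary fish mask, this reduces the blob to a one-pixel-wide skeleton whose branch and intersection points can then be combined with the detected corners to place the separation cut.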

Journal ArticleDOI
TL;DR: A method for detecting building changes from multitemporal high-resolution aerial images is proposed; it analyzes building changes in the spatial domain and treats change detection as a classification process, so that the optimal solution indicates the corners belonging to changed buildings.
Abstract: With the rapid development of urban areas, construction areas are constantly appearing. Those changed areas require timely monitoring to provide up-to-date information for urban planning and mapping. As a result, it is a challenge to develop an effective change detection technique. In this work, a method for detecting building changes from multitemporal high-resolution aerial images is proposed. Different from traditional methods, which usually depict building changes in the color domain (e.g., using pixel values or its variants as features), this work focuses on analyzing building changes in the spatial domain. Moreover, contextual relations are explored as well, in order to achieve a robust detection result. In detail, corners are first extracted from the image and an irregular Markov random field model is then constructed based on them. Energy terms in the model are appropriately designed for describing the geometric characteristics of the building. Change detection is treated as a classification process, so that the optimal solution indicates corners belonging to changed buildings. Finally, changed areas are illustrated by linking preserved corners followed by postprocessing steps. Experimental results demonstrate the capabilities of the proposed method for change detection.
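The "corners as nodes of an irregular MRF, solved as classification" idea can be caricatured with a tiny iterated-conditional-modes (ICM) sketch. The paper designs purpose-built energy terms for building geometry; here every term, name, and parameter (`beta`, `radius`, the unary scores) is invented purely for illustration:

```python
import numpy as np

def icm_corner_labels(points, unary, beta=1.0, radius=30.0, iters=5):
    """Label each corner changed (1) / unchanged (0) by greedy energy descent.

    points: (N, 2) corner coordinates; unary: (N,) evidence for "changed"
    (positive = more likely changed). Corners closer than `radius` form the
    edges of an irregular graph and are encouraged to agree (Potts prior)."""
    pts = np.asarray(points, float)
    u = np.asarray(unary, float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    nbrs = (d < radius) & (d > 0)          # irregular neighbourhood graph
    labels = (u > 0).astype(int)           # initialise from unary evidence
    for _ in range(iters):
        for i in range(len(pts)):
            agree1 = np.sum(labels[nbrs[i]] == 1)
            agree0 = np.sum(labels[nbrs[i]] == 0)
            e1 = -u[i] - beta * agree1     # energy if corner i is "changed"
            e0 = -beta * agree0            # energy if "unchanged"
            labels[i] = 1 if e1 < e0 else 0
    return labels
```

The contextual (pairwise) term is what lets a corner with weak individual evidence inherit the label of its spatial neighbours, mirroring the abstract's point that contextual relations make the detection robust.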

Journal ArticleDOI
TL;DR: This work presents a framework for measuring the geometric precision of a classification map, and shows that superpixels are the best candidate for the local statistics features, as they modestly improve the classification accuracy, while preserving the geometric elements in the image.
Abstract: Land cover maps are a key resource for many studies in Earth Observation, and thanks to the high temporal, spatial, and spectral resolutions of systems like Sentinel-2, maps with a wide variety of land cover classes can now be automatically produced over vast areas. However, certain context-dependent classes, such as urban areas, remain challenging to classify correctly with pixel-based methods. Including contextual information into the classification can either be done at the feature level with texture descriptors or object-based approaches, or in the classification model itself, as is done in Convolutional Neural Networks. This improves recognition rates of these classes, but sometimes deteriorates the fine-resolution geometry of the output map, particularly in sharp corners and in fine elements such as rivers and roads. However, the quality of the geometry is difficult to assess in the absence of dense training data, which is usually the case in land cover mapping, especially over wide areas. This work presents a framework for measuring the geometric precision of a classification map, in order to provide deeper insight into the consequences of the use of various contextual features, when dense validation data is not available. This quantitative metric, named the Pixel Based Corner Match (PBCM), is based on corner detection and corner matching between a pixel-based classification result, and a contextual classification result. The selected case study is the classification of Sentinel-2 multi-spectral image time series, with a rich nomenclature containing context-dependent classes. To demonstrate the added value of the proposed metric, three spatial support shapes (window, object, superpixel) are compared according to their ability to improve the classification performance on this challenging problem, while paying attention to the geometric precision of the result. 
The results show that superpixels are the best candidate for the local statistics features, as they modestly improve the classification accuracy, while preserving the geometric elements in the image. Furthermore, the density of edges in a sliding window provides a significant boost in accuracy, and maintains a high geometric precision.
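A stripped-down corner-matching score in the spirit of the PBCM — only the general flavour; the paper's exact definition differs, and `tol` is an invented tolerance — might look like:

```python
import numpy as np

def corner_match_score(corners_ref, corners_test, tol=2.0):
    """Fraction of reference corners that have a test corner within `tol` px.

    corners_ref / corners_test: sequences of (y, x) corner coordinates,
    e.g. extracted from a pixel-based and a contextual classification map."""
    ref = np.asarray(corners_ref, float)
    test = np.asarray(corners_test, float)
    if len(ref) == 0:
        return 1.0   # nothing to match
    if len(test) == 0:
        return 0.0   # no test corners survive
    # distance from every reference corner to its nearest test corner
    d = np.linalg.norm(ref[:, None] - test[None, :], axis=2).min(axis=1)
    return float((d <= tol).mean())
```

A contextual classifier that rounds off sharp building corners or erases fine roads loses those corners entirely, so its score against the pixel-based result drops even when per-pixel accuracy improves.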

Book ChapterDOI
12 Apr 2019
TL;DR: A next-generation digital watermarking scheme based on specific features of the digital content, using the Harris (strongest-corner) detection method; it shows an adequate level of resilience against the majority of image-processing attacks as well as affine transformations.
Abstract: Digital watermarking has attracted the attention of researchers from the early nineties to the present, in the ever-changing world of digital creation, transmission, and modification of information. In this paper we turn our attention to a next generation of digital watermarking based on specific features of the digital content, using the Harris corner detection method. In this scheme, the first twenty strongest corner points obtained with the Harris method are used as a reference for embedding the watermark information and for recovering synchronization after distortions. The scheme is time-efficient, as only the twenty strongest corner points per frame are computed, which makes it useful for time-constrained applications. The experimental results confirm that the suggested scheme shows an adequate level of resilience against the majority of image-processing attacks as well as affine transformations.
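Selecting a fixed number of strongest corners from a response map is a generic step; one way to sketch it is greedy picking with non-maximum suppression (illustrative only — `nms_radius` is an invented parameter and this is not the paper's code):

```python
import numpy as np

def strongest_corners(response, k=20, nms_radius=5):
    """Pick the k strongest local maxima of a corner-response map,
    suppressing a neighbourhood around each pick so they are spread out."""
    R = response.astype(float).copy()
    pts = []
    for _ in range(k):
        y, x = np.unravel_index(np.argmax(R), R.shape)
        if R[y, x] <= 0:
            break  # no positive response left
        pts.append((int(y), int(x)))
        # greedy non-maximum suppression around the accepted corner
        y0, y1 = max(0, y - nms_radius), y + nms_radius + 1
        x0, x1 = max(0, x - nms_radius), x + nms_radius + 1
        R[y0:y1, x0:x1] = -np.inf
    return pts
```

Because the same k strongest corners can be recomputed on the watermarked (and possibly attacked) frame, they serve as the synchronization anchors the abstract describes.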

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a corner detection scheme that combines a corner detector based on curvature scale space with a GGI system, which can directly extract corner points from the reconstruction results of ghost imaging.
Abstract: Gradient ghost imaging (GGI) is a new imaging method that can directly extract the edges of a target from the correlation of light-intensity fluctuations. However, GGI suffers from poor image quality and limited practicability. Corner points are important target features with wide application in image processing and machine vision. In this paper, we propose a corner detection scheme that combines a corner detection algorithm based on curvature scale space with a GGI system, which can extract corner points directly from the reconstruction results of ghost imaging. Simulation and experimental results show that our method acquires precise and robust corner information of an unknown target even in undersampled cases, which promotes the practical development of ghost imaging.
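The curvature computation at the heart of such a detector can be sketched at a single scale — full curvature-scale-space (CSS) detection tracks curvature maxima across many smoothing scales, which this crude stand-in omits; `sigma` and `thresh` are invented parameters:

```python
import numpy as np

def curvature_corners(contour, sigma=3.0, thresh=0.1):
    """Single-scale curvature corner picker on a closed contour.

    contour: sequence of (x, y) points sampled along a closed curve,
    e.g. traced from a reconstructed edge map."""
    c = np.asarray(contour, float)
    n = len(c)
    # circular Gaussian smoothing of the coordinates via FFT convolution
    t = np.arange(n) - n // 2
    g = np.exp(-t**2 / (2.0 * sigma**2))
    g /= g.sum()
    G = np.fft.fft(np.fft.ifftshift(g))
    x = np.real(np.fft.ifft(np.fft.fft(c[:, 0]) * G))
    y = np.real(np.fft.ifft(np.fft.fft(c[:, 1]) * G))
    # circular central differences for first and second derivatives
    dx = (np.roll(x, -1) - np.roll(x, 1)) / 2.0
    dy = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
    ddx = np.roll(x, -1) - 2.0 * x + np.roll(x, 1)
    ddy = np.roll(y, -1) - 2.0 * y + np.roll(y, 1)
    # absolute curvature |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2 + 1e-12) ** 1.5
    # corners = local maxima of curvature above the threshold
    return [i for i in range(n)
            if kappa[i] > thresh
            and kappa[i] >= kappa[(i - 1) % n]
            and kappa[i] >= kappa[(i + 1) % n]]
```

On a smooth contour such as a circle the curvature stays below the threshold and no corners fire, while sharp bends survive the Gaussian smoothing as curvature peaks — the property that makes curvature-based detection robust to the noise of undersampled ghost-imaging reconstructions.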