
Showing papers on "Corner detection published in 2018"


Journal ArticleDOI
22 Jun 2018
TL;DR: This letter proposes a new, purely event-based corner detector, and a novel corner tracker, demonstrating that it is possible to detect corners and track them directly on the event stream in real time, promising great impact in high-speed applications.
Abstract: The recent emergence of bioinspired event cameras has opened up exciting new possibilities in high-frequency tracking, bringing robustness to common problems in traditional vision, such as lighting changes and motion blur. In order to leverage these attractive attributes of event cameras, research has been focusing on understanding how to process their unusual output: an asynchronous stream of events. With the majority of existing techniques discretizing the event stream, essentially forming frames of events grouped according to their timestamps, we have yet to fully exploit the power of these cameras. In this spirit, this letter proposes a new, purely event-based corner detector and a novel corner tracker, demonstrating that it is possible to detect corners and track them directly on the event stream in real time. Evaluation on benchmarking datasets reveals that the proposed approach significantly boosts the number of detected corners and the repeatability of such detections over the state of the art, even in challenging scenarios, while enabling more than a 4× speed-up over the most efficient algorithm in the literature. The proposed pipeline detects and tracks corners at a rate of more than 7.5 million events per second, promising great impact in high-speed applications.
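As a rough sketch of what operating directly on an event stream can look like in code (this is not the authors' detector), the snippet below keeps a per-pixel latest-timestamp map, often called a surface of active events, and updates it as each event arrives; an event-based corner test would then inspect the local timestamp neighbourhood. The (x, y, t, polarity) event tuple format and the sensor resolution are assumptions.

    import numpy as np

    # Illustrative only: per-event update of a "surface of active events".
    # Events are assumed to arrive as (x, y, t, polarity) tuples; the
    # resolution matches a 240x180 sensor such as a DAVIS240.
    H, W = 180, 240
    sae = np.zeros((2, H, W))            # one timestamp surface per polarity

    def process_event(x, y, t, p, radius=4):
        sae[p, y, x] = t                 # asynchronous state update, no frames
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        return sae[p, y0:y1, x0:x1]      # patch a corner test would classify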

140 citations


Journal ArticleDOI
TL;DR: Theoretical and experimental results show that the proposed feature and contrast enhancement of images using the Riemann–Liouville fractional differential operator outperforms the existing methods under comparison.
Abstract: Edge detection is an important aspect of image processing to improve image edge quality. In the literature, there exist various edge detection techniques in the spatial and frequency domains that use integer-order differentiation operators. In this paper, we implement feature and contrast enhancement of images using the Riemann–Liouville fractional differential operator. Based on the direction of the strong edge, we evaluate edge components and carry out a performance analysis based on several well-known metrics. We also improve pixel contrast based on foreground and background gray levels. Moreover, theoretical and experimental results show that the proposed feature and contrast enhancement outperforms the existing methods under comparison. We discuss how the edge components calculated using the fractional derivative can be used for texture and contrast enhancement. The paper relies on fractional-order differentiation to detect edges with the help of directional edge components across eight directions. The experimental comparison results are shown in tabular form and as qualitative texture results. Six experimental input images are used to analyze various performance metrics. The experiments show that for any grayscale image the proposed method outperforms classical edge detection operators.
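For context only (this is not the paper's Riemann–Liouville construction), fractional differential masks are often discretized with the Grünwald–Letnikov approximation, whose leading coefficients are 1, -v, v(v-1)/2, ...; the sketch below builds such coefficients and applies them along one image direction. The order v = 0.5 and the function names are illustrative choices.

    import numpy as np
    from scipy.signal import convolve2d
    from scipy.special import binom

    # Gruenwald-Letnikov coefficients c_k = (-1)^k * C(v, k): 1, -v, v(v-1)/2, ...
    def gl_coeffs(v, n_terms=3):
        return np.array([(-1) ** k * binom(v, k) for k in range(n_terms)])

    # Fractional "derivative" of order v along the x direction (one of the
    # eight directions a directional mask would cover).
    def fractional_edge_x(image, v=0.5):
        kernel = gl_coeffs(v)[np.newaxis, :]
        return convolve2d(image.astype(float), kernel, mode="same")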

52 citations


Journal ArticleDOI
TL;DR: This paper proposes a Harris image matching method that combines an adaptive threshold and random sample consensus (RANSAC); compared with existing algorithms, the proposed method achieves a matching accuracy more than 20% higher than that of Cui's algorithm while reducing corner detection and image matching time by more than 30%.
Abstract: Education plays an increasingly important role in disseminating knowledge because of the explosive growth of knowledge. As one kind of carrier delivering knowledge, images also show explosive growth and play an increasingly important role in education, medicine, advertising, entertainment, and so on. Massive image feature extraction in the construction of a smart campus is time-consuming, and the traditional Harris corner detector suffers from low detection efficiency and many non-maximal pseudo-corner points. This paper proposes a Harris image matching method that combines an adaptive threshold and random sample consensus (RANSAC). First, the Harris feature points are selected based on the adaptive threshold and the Forstner algorithm: candidate points are filtered by the adaptive threshold, and the Forstner algorithm is then used to further select the corner points. Second, normalized cross-correlation matching and RANSAC are applied to precisely match the detected Harris corners. The experimental results show that, compared with existing algorithms, the proposed method achieves a matching accuracy more than 20% higher than that of Cui's algorithm while reducing corner detection and image matching time by more than 30%. Furthermore, the proposed method achieves a matching accuracy more than 50% higher than that of Cui's algorithm and reduces corner detection and image matching time by more than 50%.
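The standard building blocks named in the abstract (Harris corners, normalized cross-correlation matching, RANSAC) can be assembled with OpenCV roughly as below; this is a generic sketch, not the paper's adaptive-threshold/Forstner selection, and the image file names are placeholders.

    import cv2
    import numpy as np

    img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
    img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Harris-based corner selection (qualityLevel acts as a relative threshold).
    pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01,
                                   minDistance=7, useHarrisDetector=True, k=0.04)

    def ncc_match(pt, half=7, search=25, min_score=0.8):
        # Match one corner from img1 into img2 by normalized cross-correlation.
        x, y = pt.ravel().astype(int)
        if not (search <= x < img1.shape[1] - search and search <= y < img1.shape[0] - search):
            return None                                  # skip corners too close to the border
        tmpl = img1[y - half:y + half + 1, x - half:x + half + 1]
        roi = img2[y - search:y + search + 1, x - search:x + search + 1]
        res = cv2.matchTemplate(roi, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score < min_score:
            return None
        return (x - search + loc[0] + half, y - search + loc[1] + half)

    src, dst = [], []
    for p in pts1:
        m = ncc_match(p)
        if m is not None:
            src.append(p.ravel())
            dst.append(m)

    # RANSAC rejects the remaining mismatches while estimating a homography.
    H, inliers = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 3.0)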

51 citations


Journal ArticleDOI
TL;DR: This work presents an implementation and thorough study of the Harris corner detector, a feature detector that relies on the analysis of the eigenvalues of the autocorrelation matrix, and proposes several alternatives for improving its precision and speed.
Abstract: In this work, we present an implementation and thorough study of the Harris corner detector. This feature detector relies on the analysis of the eigenvalues of the autocorrelation matrix. The algorithm comprises seven steps, including several measures for the classification of corners, a generic non-maximum suppression method for selecting interest points, and the possibility of obtaining corner positions with subpixel accuracy. We study each step in detail and propose several alternatives for improving precision and speed. The experiments analyze the repeatability rate of the detector under different types of transformations. Source code: the reviewed source code and documentation for this algorithm are available from the web page of this article; compilation and usage instructions are included in the README.txt file of the archive.
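As a pointer to what the article studies, a minimal NumPy version of the Harris–Stephens corner response (the eigen-structure of the smoothed gradient autocorrelation matrix, summarised as R = det(M) - k·trace(M)²) might look as follows; parameter values are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def harris_response(img, sigma=1.5, k=0.04):
        img = img.astype(float)
        ix = sobel(img, axis=1)                  # x-gradient
        iy = sobel(img, axis=0)                  # y-gradient
        # Window-smoothed entries of the autocorrelation (structure) matrix M.
        sxx = gaussian_filter(ix * ix, sigma)
        syy = gaussian_filter(iy * iy, sigma)
        sxy = gaussian_filter(ix * iy, sigma)
        det = sxx * syy - sxy ** 2
        trace = sxx + syy
        return det - k * trace ** 2              # large positive values mark corners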

47 citations


Journal ArticleDOI
01 Oct 2018 - Optik
TL;DR: A robust image matching algorithm based on the combination of the wavelet transform and the scale-invariant feature transform (SIFT), which exploits information from SIFT to build matching constraints and uses them to obtain more correct matches.

38 citations


Journal ArticleDOI
TL;DR: This research aims to improve the extraction of road centerlines from both very-high-resolution aerial images and light detection and ranging (LiDAR) data by accounting for road connectivity; it applies the fractal net evolution approach to segment remote sensing images into image objects and then classifies the image objects with a machine learning classifier, random forest.
Abstract: Road networks provide key information for a broad range of applications such as urban planning, urban management, and navigation. The fast-developing technology of remote sensing, which acquires high-resolution observational data of the land surface, offers opportunities for the automatic extraction of road networks. However, road networks extracted from remote sensing images are often affected by shadows and trees, making the road map irregular and inaccurate. This research aims to improve the extraction of road centerlines using both very-high-resolution (VHR) aerial images and light detection and ranging (LiDAR) data by accounting for road connectivity. The proposed method first applies the fractal net evolution approach (FNEA) to segment remote sensing images into image objects and then classifies the image objects using a machine learning classifier, random forest. A post-processing approach based on the minimum area bounding rectangle (MABR) is proposed, and a structure feature index is adopted to obtain the complete road networks. Finally, a multistep approach, that is, morphology thinning, Harris corner detection, and least squares fitting (MHL), is designed to accurately extract the road centerlines from the complex road networks. The proposed method is applied to three datasets: the New York dataset obtained from the object identification dataset, the Vaihingen dataset obtained from the International Society for Photogrammetry and Remote Sensing (ISPRS) 2D semantic labelling benchmark, and the Guangzhou dataset. Compared with two state-of-the-art methods, the proposed method obtains the highest completeness, correctness, and quality for the three datasets. The experimental results show that the proposed method is an efficient solution for extracting road centerlines in complex scenes from VHR aerial images and LiDAR data.
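A minimal sketch of the thinning-plus-corner part of the pipeline (the "M" and "H" of the MHL step) is shown below, assuming road_mask is a binary road map produced by the earlier classification stage; the least-squares centreline fitting and all LiDAR handling are omitted, and the file name is a placeholder.

    import cv2
    import numpy as np
    from skimage.morphology import skeletonize

    road_mask = cv2.imread("road_mask.png", cv2.IMREAD_GRAYSCALE) > 0   # placeholder input
    skeleton = skeletonize(road_mask).astype(np.float32)                # 1-pixel-wide centrelines
    response = cv2.cornerHarris(skeleton, blockSize=3, ksize=3, k=0.04)
    junctions = np.argwhere(response > 0.1 * response.max())            # candidate corner/junction points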

29 citations


Journal ArticleDOI
TL;DR: A direct and explicit implementation of common and novel optimization strategies, together with a NUMA-aware parallelization, is studied, showing noticeably good scalability on a dual-socket Intel Broadwell-E/EP system.

27 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: In this article, the authors proposed edge and corner detection algorithms for unorganized point clouds, which evaluate symmetry in a local neighborhood and use an adaptive density based threshold to differentiate 3D edge points.
Abstract: In this paper, we propose novel edge and corner detection algorithms for unorganized point clouds. Our edge detection method evaluates symmetry in a local neighborhood and uses an adaptive density-based threshold to differentiate 3D edge points. We extend this algorithm to propose a novel corner detector that clusters curvature vectors and uses their geometrical statistics to classify a point as a corner. We perform a rigorous evaluation of the algorithms on RGB-D semantic segmentation and 3D washer models from the ShapeNet dataset and report higher precision and recall scores. Finally, we also demonstrate how our edge and corner detectors can be used as a novel approach towards automatic weld seam detection for robotic welding. We propose to generate weld seams directly from a point cloud as opposed to using 3D models for offline planning of welding paths. For this application, we show a comparison between Harris 3D and our proposed approach on a panel workpiece.
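The authors' symmetry-based test is not reproduced here; for context, a common eigenvalue-based way of scoring edge-like points in an unorganized cloud (local covariance "surface variation") is sketched below, with points assumed to be an (N, 3) NumPy array.

    import numpy as np
    from scipy.spatial import cKDTree

    def edge_scores(points, k=20):
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)             # k nearest neighbours per point
        scores = np.empty(len(points))
        for i, nb in enumerate(idx):
            cov = np.cov(points[nb].T)               # 3x3 neighbourhood covariance
            w = np.linalg.eigvalsh(cov)              # ascending eigenvalues
            scores[i] = w[0] / w.sum()               # "surface variation": high near edges/corners
        return scores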

26 citations


Journal ArticleDOI
28 Mar 2018 - Sensors
TL;DR: Modelsim simulation results show that the proposed architecture reads sub-images from DDR3 at minimum cost and that the FPGA implementation is correct and efficient for corner detection and matching, with average matching rates of approximately 67% for natural areas and 83% for artificial areas.
Abstract: Although some researchers have proposed Field Programmable Gate Array (FPGA) architectures for the Features from Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF) algorithms, these traditional architectures do not consider image data storage, so no image data can be reused by follow-up algorithms. This paper proposes a new FPGA architecture that considers the reuse of sub-image data. In the proposed architecture, a remainder-based method is first designed for reading the sub-image, and a FAST detector and a BRIEF descriptor are combined for corner detection and matching. Six pairs of satellite images with different textures, located in the Mentougou district, Beijing, China, are used to evaluate the performance of the proposed architecture. The Modelsim simulation results show that: (i) the proposed architecture is effective for reading sub-images from DDR3 at minimum cost; (ii) the FPGA implementation is correct and efficient for corner detection and matching, with average matching rates of approximately 67% for natural areas and 83% for artificial areas, which are close to those obtained on a PC, and a processing speed approximately 31 times faster than PC processing and 2.5 times faster than GPU processing.
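A software reference of the FAST-plus-BRIEF combination that the FPGA pipeline implements in hardware can be put together with OpenCV as below (BRIEF lives in the opencv-contrib package); the file names are placeholders and none of the DDR3/sub-image logic is represented.

    import cv2

    img1 = cv2.imread("tile_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
    img2 = cv2.imread("tile_b.png", cv2.IMREAD_GRAYSCALE)

    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()   # requires opencv-contrib

    kp1, des1 = brief.compute(img1, fast.detect(img1, None))
    kp2, des2 = brief.compute(img2, fast.detect(img2, None))

    # Hamming-distance matching, as is usual for binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)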

22 citations


Proceedings ArticleDOI
01 Aug 2018
TL;DR: The new method produces corners that are evenly distributed and highly accurate, combines scale invariance with rotational invariance, and greatly improves corner detection performance compared with the traditional Harris algorithm.
Abstract: The Harris algorithm is a classical corner detection algorithm with affine invariance and partial rotational invariance, but it is not scale invariant and has poor real-time performance and adaptability. This paper puts forward a new corner detection algorithm based on an improved Canny edge detection algorithm and an improved multi-scale Harris corner detection algorithm. First, the edge information of the target is obtained with the improved Canny algorithm. Then the original image is processed with a Mean-Shift filter. Finally, the improved multi-scale Harris algorithm detects corners from the edge information and marks their positions in the original image. According to the experimental results, the corners found by the new method are evenly distributed and highly accurate, and the method combines scale invariance with rotational invariance. The new method greatly improves corner detection performance compared with the traditional Harris algorithm.
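In plain OpenCV, the overall idea of restricting the Harris response to edge pixels can be sketched as below; the paper's improved Canny, Mean-Shift filtering and multi-scale Harris are not reproduced, and the thresholds and file name are illustrative.

    import cv2
    import numpy as np

    gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)          # placeholder input
    edges = cv2.Canny(gray, 80, 160)
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    response[edges == 0] = 0                                      # keep responses on edge pixels only
    corners = np.argwhere(response > 0.01 * response.max())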

21 citations


Proceedings ArticleDOI
27 Jun 2018
TL;DR: In order to enable optical flow to track larger and faster moving targets, the pyramid Lucas-Kanade optical flow method is used to detect and track moving targets.
Abstract: In order to enable optical flow to track larger and faster moving targets, the pyramid Lucas-Kanade optical flow method is used to detect and track moving targets. First, corners that are easy to track are detected and refined to sub-pixel accuracy to improve tracking precision. Then, each frame of the video is layered into an image pyramid: the optical flow is calculated at the top level, the result is used as the starting point for the next level, and the process is repeated down to the bottom level. This overcomes the inability of the basic Lucas-Kanade method to track faster and larger motions and achieves tracking of moving targets.
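The pipeline described above maps closely onto standard OpenCV calls (corner detection, sub-pixel refinement, pyramidal Lucas-Kanade); a minimal version, with a placeholder video path and illustrative parameters, is sketched below.

    import cv2

    cap = cv2.VideoCapture("video.mp4")                    # placeholder video path
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    pts = cv2.cornerSubPix(prev_gray, pts, (5, 5), (-1, -1), criteria)   # sub-pixel corners

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # maxLevel sets the pyramid depth, which is what lets large/fast motion be tracked.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
        pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)
        prev_gray = gray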

Journal ArticleDOI
TL;DR: A robust, accurate, and efficient universal algorithm that incorporates preprocessing, coarse positioning, and fine positioning stages to solve the online component positioning problem based on corner points is proposed.
Abstract: Component pick-and-place technology has been widely used to improve production efficiency and reduce common defects. The vision-driven measurement system of a component pick-and-place machine requires an appropriate positioning algorithm with low computational complexity, high accuracy, and high generalizability. To satisfy these attributes is rather challenging. This paper focuses on the online component positioning problem based on corner points. Thus, we propose a robust, accurate, and efficient universal algorithm that incorporates preprocessing, coarse positioning, and fine positioning stages. Two types of model key points are introduced for interpreting the model component. To enhance positioning accuracy and robustness against illumination changes, the Harris corners and subpixel corner points are extracted from the images of real components. In the coarse positioning step, distance and shape feature matching methods are introduced to, respectively, compute the coarse and correct correspondences between type I model key points and Harris corner points. After the corresponding point pairs have been obtained, the coarse and fine positioning problems are formulated as least squares error problems. The effectiveness of the proposed method was verified by applying the method in several real component positioning experiments.
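The least-squares formulation mentioned at the end of the abstract is, in its generic form, the classic Procrustes/Kabsch problem: given matched model and image corner points, find the rotation and translation minimising the squared error. A small sketch of that generic step (not the paper's two-stage solver) follows.

    import numpy as np

    def fit_rigid_2d(model_pts, image_pts):
        """model_pts, image_pts: (N, 2) arrays of corresponding corner points."""
        mu_m, mu_i = model_pts.mean(0), image_pts.mean(0)
        A, B = model_pts - mu_m, image_pts - mu_i
        U, _, Vt = np.linalg.svd(A.T @ B)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_i - R @ mu_m
        return R, t                        # image_pts ~= (R @ model_pts.T).T + t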

Journal ArticleDOI
TL;DR: The proposed scheme has better detection and robustness against some geometric transformation attacks than state-of-the-art methods; local feature points are described using the Multi-support Region Order-based Gradient Histogram descriptor.
Abstract: In digital image forensics, copy-move or region duplication forgery detection has recently become a vital research topic. Most of the existing keypoint-based forgery detection methods fail to detect forgery in smooth regions, apart from their sensitivity to geometric changes. To solve these problems and detect keypoints that cover all regions, we propose a two-step keypoint detection scheme. First, we employ the scale-invariant feature operator to detect spatially distributed keypoints in the textured regions. Second, keypoints in the missing regions are detected using the Harris corner detector with non-maximal suppression to distribute the detected keypoints evenly. To improve matching performance, the local feature points are described using the Multi-support Region Order-based Gradient Histogram descriptor. A comprehensive performance evaluation is carried out based on precision-recall rates and a commonly tested dataset. The results demonstrate that the proposed scheme has better detection and robustness against some geometric transformation attacks than state-of-the-art methods.

Posted Content
TL;DR: The proposed edge and corner detectors can be used as a novel approach towards automatic weld seam detection for robotic welding and a comparison between Harris 3D and the proposed approach on a panel workpiece is shown.
Abstract: In this paper, we propose novel edge and corner detection algorithms for unorganized point clouds. Our edge detection method evaluates symmetry in a local neighborhood and uses an adaptive density-based threshold to differentiate 3D edge points. We extend this algorithm to propose a novel corner detector that clusters curvature vectors and uses their geometrical statistics to classify a point as a corner. We perform a rigorous evaluation of the algorithms on RGB-D semantic segmentation and 3D washer models from the ShapeNet dataset and report higher precision and recall scores. Finally, we also demonstrate how our edge and corner detectors can be used as a novel approach towards automatic weld seam detection for robotic welding. We propose to generate weld seams directly from a point cloud as opposed to using 3D models for offline planning of welding paths. For this application, we show a comparison between Harris 3D and our proposed approach on a panel workpiece.

Book ChapterDOI
09 Aug 2018
TL;DR: A novel detection algorithm that maintains high accuracy on inputs from multiple scenarios without any prior knowledge of the checkerboard pattern is proposed; it shows superior robustness, accuracy, and wide applicability in quantitative comparisons with state-of-the-art methods.
Abstract: Aiming to improve the robustness of checkerboard corner detection on images of poor quality, such as those with lens distortion, extreme poses, and noise, we propose a novel detection algorithm that maintains high accuracy on inputs from multiple scenarios without any prior knowledge of the checkerboard pattern. The algorithm comprises a checkerboard corner detection network and some post-processing techniques. The network is a fully convolutional network with improvements to the loss function and learning rate; it can handle images of arbitrary size and produce a correspondingly sized output with a corner score for each pixel by efficient inference and learning. Besides, in order to remove false positives, we employ three post-processing techniques: a threshold related to the maximum response, non-maximum suppression, and clustering. Evaluations on two different datasets show its superior robustness, accuracy, and wide applicability in quantitative comparisons with state-of-the-art methods such as MATE, ChESS, ROCHADE, and OCamCalib.
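Two of the post-processing steps named above (the threshold relative to the maximum response and non-maximum suppression) are generic enough to sketch directly on a per-pixel corner score map; the clustering step is omitted and the parameter values are illustrative.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def pick_corners(score_map, rel_thresh=0.5, nms_size=5):
        local_max = maximum_filter(score_map, size=nms_size) == score_map
        keep = local_max & (score_map > rel_thresh * score_map.max())
        return np.argwhere(keep)          # (row, col) corner candidates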

Proceedings ArticleDOI
01 Mar 2018
TL;DR: An overview of various corner detection approaches that have been proposed in the last four decades is presented and the problems of the existing corner detection methods are given.
Abstract: In this paper, we present an overview of various corner detection approaches that have been proposed in the last four decades. Corner detection algorithms can be divided into intensity-based, contour-based, and model-based methods. Intensity-based methods mainly measure local intensity variation of the image. Contour-based methods find corners by analyzing the shape of edge contours. Model-based methods extract corners by fitting the local image to a predefined model. The problems of the existing corner detection methods are also discussed.
Keywords: corner detection; feature extraction; algorithm research

Journal ArticleDOI
10 Oct 2018
TL;DR: The purpose of this work is to detect whether the lower leg bone (tibia) in an x-ray image is fractured and, if so, to classify the type of fracture; the system achieves 82% accuracy for classifying fracture types.
Abstract: Nowadays, computer-aided diagnosis (CAD) systems have become popular because they improve the interpretation of medical images and support early diagnosis of various diseases for doctors and medical specialists. Bone fracture is a common problem caused by pressure, accidents, and osteoporosis; since bone is rigid and supports the whole body, bone fracture has become an important problem in recent years. Bone fracture detection using computer vision is increasingly important in CAD systems because it can help reduce the workload of doctors by screening out easy cases. In this paper, recognition of lower leg bone (tibia) fracture types is developed using various image processing techniques. The purpose of this work is to detect whether the tibia in an x-ray image is fractured and to classify the type of fracture. The system consists of three main steps, preprocessing, feature extraction, and classification, which classify the fracture type and locate the fracture position. In preprocessing, Unsharp Masking (USM), a sharpening technique, is applied to enhance the image and highlight its edges. The sharpened image is then processed with the Harris corner detection algorithm to extract corner feature points. Two classification approaches are then used: a simple Decision Tree (DT) decides whether the bone is fractured, and K-Nearest Neighbour (KNN) classifies the fracture type. In this work, Normal, Transverse, Oblique, and Comminuted are defined as the four fracture classes. Moreover, fracture locations are indicated by the produced Harris corner points. Finally, the outputs of the system are evaluated with two performance assessment methods: the first evaluates the fracture/non-fracture (normal) decision using the four possible outcomes TP, TN, FP, and FN; the second analyzes the accuracy of each fracture type under error conditions using the Kappa assessment method. The system is implemented in MATLAB with its wide range of image processing tools. The system achieves 82% accuracy for classifying fracture types.
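The first two stages (unsharp-mask sharpening followed by Harris corner points) can be approximated with OpenCV as below; the decision-tree/KNN classification is not shown, the file name is a placeholder, and the sharpening weights are illustrative.

    import cv2
    import numpy as np

    xray = cv2.imread("tibia.png", cv2.IMREAD_GRAYSCALE)            # placeholder input
    blur = cv2.GaussianBlur(xray, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(xray, 1.5, blur, -0.5, 0)           # unsharp masking
    response = cv2.cornerHarris(np.float32(sharpened), blockSize=2, ksize=3, k=0.04)
    corner_pts = np.argwhere(response > 0.01 * response.max())      # candidate feature points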

Proceedings ArticleDOI
01 Nov 2018
TL;DR: An improved Harris corner detection algorithm that uses a B-spline function instead of a Gaussian window for smoothing, pre-selects candidate corners, and applies an adaptive threshold during non-maximum suppression to improve detection accuracy and efficiency.
Abstract: The traditional Harris corner detection algorithm extracts many false corner points and is computationally expensive when performing corner extraction on an image. To address these problems, an improved Harris corner detection algorithm is proposed. First, a B-spline function is used instead of a Gaussian window function for smoothing filtering; then the corner points are pre-selected to obtain candidate corners. Finally, in order to improve the adaptability of the algorithm, an adaptive threshold is used during non-maximum suppression. Experimental results show that the algorithm improves detection accuracy and efficiency and has good corner detection performance.

Journal ArticleDOI
TL;DR: A bit-width optimization strategy for designing a hardware-efficient HCD that exploits the thresholding step used to determine interest points from the corner responses, relying on the threshold as a guide to truncate the bit-widths of the operators at various stages of the HCD pipeline with only marginal loss of accuracy.
Abstract: High-speed corner detection is an essential step in many real-time computer vision applications, e.g., object recognition, motion analysis, and stereo matching. Hardware implementation of corner detection algorithms, such as the Harris corner detector (HCD), has become a viable solution for meeting the real-time requirements of these applications. A major challenge lies in the design of power-, energy- and area-efficient architectures that can be deployed in tightly constrained embedded systems while still meeting real-time requirements. In this paper, we propose a bit-width optimization strategy for designing a hardware-efficient HCD that exploits the thresholding step in the algorithm used to determine interest points from the corner responses. The proposed strategy relies on the threshold as a guide to truncate the bit-widths of the operators at various stages of the HCD pipeline with only marginal loss of accuracy. Synthesis results based on 65-nm CMOS technology show that the proposed strategy leads to a power-delay reduction of 35.2% and an area reduction of 35.4% over the baseline implementation. In addition, through careful retiming, the proposed implementation achieves over a 2.2 times increase in maximum frequency while achieving an area reduction of 35.1% and a power-delay reduction of 35.7% over the baseline implementation. Finally, we performed repeatability tests to show that the optimized HCD architecture achieves accuracy comparable with the baseline implementation (the average decrease in repeatability is less than 0.6%).

Journal ArticleDOI
TL;DR: In this article, the authors propose a method to compute the convolution of a linear spatial kernel with the output of an event camera, which operates on the event stream output of the camera directly without synthesizing pseudo-image frames as is common in the literature.
Abstract: Spatial convolution is arguably the most fundamental of 2D image processing operations. Conventional spatial image convolution can only be applied to a conventional image, that is, an array of pixel values (or similar image representation) that are associated with a single instant in time. Event cameras have serial, asynchronous output with no natural notion of an image frame, and each event arrives with a different timestamp. In this paper, we propose a method to compute the convolution of a linear spatial kernel with the output of an event camera. The approach operates on the event stream output of the camera directly without synthesising pseudo-image frames as is common in the literature. The key idea is the introduction of an internal state that directly encodes the convolved image information, which is updated asynchronously as each event arrives from the camera. The state can be read-off as-often-as and whenever required for use in higher level vision algorithms for real-time robotic systems. We demonstrate the application of our method to corner detection, providing an implementation of a Harris corner-response "state" that can be used in real-time for feature detection and tracking on robotic systems.
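A minimal reading of the idea described above is that, for a linear kernel, each incoming event simply adds the polarity-signed kernel centred at its pixel to an internal "convolved image" state that can be read at any time; a small sketch, assuming (x, y, polarity) events with polarity ±1, follows.

    import numpy as np

    H, W = 180, 240
    kernel = np.array([[1, 0, -1],
                       [2, 0, -2],
                       [1, 0, -1]], dtype=float)      # e.g. a Sobel-x kernel
    state = np.zeros((H, W))                          # internal convolved-image state
    r = kernel.shape[0] // 2

    def on_event(x, y, polarity):
        # Asynchronous update: add the signed kernel centred at (x, y),
        # clipped at the image borders; `state` can be read off at any time.
        y0, y1 = max(0, y - r), min(H, y + r + 1)
        x0, x1 = max(0, x - r), min(W, x + r + 1)
        ky0, kx0 = y0 - (y - r), x0 - (x - r)
        state[y0:y1, x0:x1] += polarity * kernel[ky0:ky0 + (y1 - y0), kx0:kx0 + (x1 - x0)]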

Journal ArticleDOI
TL;DR: The proposed method treats traffic peak period detection as a salient point detection problem and uses image processing strategies to solve it, making the detection process more intuitive.
Abstract: Traffic peak period detection is very important for the guidance and control of traffic flow. Most common methods for traffic peak period detection are based on data analysis and have achieved good performance; however, their detection processes are not intuitive enough, and their accuracy needs further improvement. From an image processing point of view, we introduce a concept from corner detection, sharpness, to detect traffic peak periods in this paper. The proposed method treats the traffic peak period detection problem as a salient point detection problem and uses image processing strategies to solve it. First, it generates a speed curve image from the speed data. A salient point detection method is then applied to this image to obtain peak point candidates, and the candidate with the lowest speed value is taken as the peak point. Finally, the peak period is obtained by extending the time of the peak point forward and backward by a time interval. Experimental results show that the proposed method achieves higher accuracy. More importantly, because it solves the traffic peak period detection problem from an image processing point of view, it is more intuitive.

Journal ArticleDOI
TL;DR: The main contribution of this paper is that the proposed global QHT can reduce the search range effectively for candidates, and the proposed SALSM can notably improve the accuracy of the corresponding corners.
Abstract: Corner feature matching remains a difficult task for wide-baseline images because of viewpoint distortion, surface discontinuities, and partial occlusions. In this paper, we propose a robust Harris corner matching method based on the quasi-homography transform (QHT) and self-adaptive window. Our method is divided into three steps. First, high-quality Harris corners were extracted from stereo images using optimal detecting, and initial matches were simultaneously acquired by integrating complementary affine-invariant features and the scale-invariant feature transform descriptor. Second, the pair of fundamental matrices was estimated based on the initial matches and improved random sample consensus algorithm. Subsequently, the global QHT was produced by duplicate epipolar geometries. Third, conjugate Harris corners were obtained by combining QHT and normalized cross correlation, and the accuracy of the corresponding points was further improved based on self-adaptive least-squares matching (SALSM). Experiments on six groups of wide-baseline images demonstrate the effectiveness of the proposed method, and a comprehensive comparison with the existing corner matching algorithms indicates that our method has notable superiority in terms of accuracy and distribution. The main contribution of this paper is that the proposed global QHT can reduce the search range effectively for candidates, and the proposed SALSM can notably improve the accuracy of the corresponding corners.

Journal ArticleDOI
TL;DR: To solve the problems of the traditional SUSAN algorithm, namely the fixed gray-level difference threshold and the failure to detect symmetric corners, an adaptive threshold extraction method is proposed in this study, and better performance is achieved.
Abstract: Infrared (IR) image fusion is designed to fuse several IR images into a comprehensive image to boost imaging quality and reduce redundant information, and image matching is an indispensable step. However, conventional matching techniques are susceptible to the noise and fuzzy edges in IR images, so a matching algorithm that is tolerant to them is very desirable. This paper presents a method for infrared image matching based on SUSAN corner detection. To solve the problems of the traditional SUSAN algorithm, namely the fixed threshold on the gray-value difference and the failure to detect symmetric corners, an adaptive threshold extraction method is proposed in this study. Furthermore, an attached double-ring mask is used to improve the detection of complex corners. A constraint condition and a principle of gravity are adopted to filter the candidate corners. The proposed method is qualitatively and quantitatively evaluated on IR images in the experiments, and better performance is achieved in comparison with other methods.
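As a baseline for the adaptive variant described above, the classic fixed-threshold SUSAN test (circular mask, USAN area compared with a geometric threshold) is sketched below; it is a slow reference loop with illustrative parameter values and contains neither the adaptive threshold nor the double-ring mask.

    import numpy as np

    def susan_corners(img, t=25, radius=3, geometric_frac=0.5):
        img = img.astype(float)
        H, W = img.shape
        yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        mask = xx ** 2 + yy ** 2 <= radius ** 2              # circular mask
        n_max = mask.sum()
        corners = []
        for y in range(radius, H - radius):
            for x in range(radius, W - radius):
                patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
                similar = np.exp(-((patch - img[y, x]) / t) ** 6)   # similarity to the nucleus
                usan = (similar * mask).sum()                # USAN area
                if usan < geometric_frac * n_max:            # small USAN -> corner candidate
                    corners.append((y, x))
        return corners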

Journal ArticleDOI
TL;DR: The experimental results prove that the proposed method is promising in terms of robustness, imperceptibility and security to most notable image processing attacks, geometrical attacks, and video processing attacks among the various conventional watermarking algorithms.
Abstract: Digital images are widely distributed today over the internet and through other media. Powerful digital image processing tools have made perfect image duplication a trivial procedure. Therefore, security, copyright protection, and integrity verification of digital video have become urgent issues in the digital world, which can be addressed by means of a technique called digital watermarking. The challenge in watermarking is to achieve imperceptibility, robustness, payload, and security simultaneously. This paper presents a new Quaternion Curvelet Transform (QCT) based video watermarking technique for embedding a video in another video in a secure and optimized way. Many existing techniques are unable to handle rotation, translation, and scaling attacks on watermarked video; this study presents a novel embedding approach in which the quaternion transform followed by the traditional Curvelet transform overcomes these disadvantages. We first represent the color cover video frames in quaternion form and then generate Quaternion Curvelet Transform (QCT) coefficients for each quaternion frame. Second, in order to withstand the desynchronization caused by geometric attacks, the Harris corner detection algorithm is used to determine invariant feature points on the QCT coefficients, followed by calculation of the energy of each block centered on those feature points. Third, in order to maintain the trade-off among the watermarking characteristics, the optimal locations for embedding the watermark frames are determined using the cuckoo search optimization algorithm. Finally, to attain authenticity, an authentication key generation procedure is employed using the owner's biometric image and the watermarked frames. The experimental results show that the proposed method is promising in terms of robustness, imperceptibility, and security against the most notable image processing, geometric, and video processing attacks among the various conventional watermarking algorithms.

Journal ArticleDOI
TL;DR: A ship target detection algorithm for SAR images based on information theory and Harris corner detection is proposed, and its effectiveness and superiority are verified by comparing it with a constant false alarm rate (CFAR) detection algorithm combined with morphological processing and with other ship target detection algorithms.
Abstract: To make up for the shortcomings of some existing ship target detection algorithms for high-resolution synthetic aperture radar (SAR) images, a ship target detection algorithm based on information theory and Harris corner detection is proposed in this paper. First, the SAR image is preprocessed and then divided into superpixel patches using an improved simple linear iterative clustering (SLIC) superpixel generation algorithm. Then, the self-information value of the superpixel patches is calculated, and a threshold T1 is set to select candidate superpixel patches. Next, a threshold T2 on the growth rate of the extended-neighborhood weighted information entropy is set to eliminate false-alarm candidate superpixel patches. Finally, the Harris corner detection algorithm is applied to the detection result, and a threshold T3 on the number of corners is set to filter out false-alarm patches, yielding the final SAR image target detection result. The effectiveness and superiority of the proposed algorithm are verified by comparing it with a constant false alarm rate (CFAR) detection algorithm combined with morphological processing and with other ship target detection algorithms.

Journal ArticleDOI
TL;DR: Computer simulation results show that the proposed illumination-invariant method yields superior feature detection and matching performance under illumination changes, noise degradation, and slight geometric distortions compared with state-of-the-art descriptors.
Abstract: An illumination-invariant method for computing local feature points and descriptors, referred to as the LUminance Invariant Feature Transform (LUIFT), is proposed. The method helps extract the most significant local features in images degraded by nonuniform illumination, geometric distortions, and heavy scene noise. The proposed method utilizes image phase information rather than intensity variations, unlike most state-of-the-art descriptors; it is therefore robust to nonuniform illumination and noise degradation. In this work, we first use the monogenic scale-space framework to compute the local phase, orientation, energy, and phase congruency of the image at different scales. Then, a modified Harris corner detector is applied to compute the feature points of the image using the monogenic signal components. The final descriptor is created from histograms of oriented gradients of phase congruency. Computer simulation results show that the proposed method yields superior feature detection and matching performance under illumination changes, noise degradation, and slight geometric distortions compared with state-of-the-art descriptors.

Proceedings ArticleDOI
01 Sep 2018
TL;DR: A novel algorithm for detection and measurement of indentations in Vickers hardness testing images, within a specific case of applied research on material quality evaluation based on image processing, which achieves competitive accuracy compared to the best known methods but is simpler and hence more efficient.
Abstract: The paper presents a novel algorithm for the detection and measurement of indentations in Vickers hardness testing images, within a specific case of applied research on material quality evaluation based on image processing. The algorithm performs image segmentation by binarization, morphological filtering, and region growing, where the binarization threshold is automatically obtained from the input image. After identification of the rhombus-shaped indentation footprint, its four vertices are determined using corner detection and are used to calculate the diagonal lengths and the Vickers hardness number. The proposed procedure has been tested on 185 images of real data obtained with the Mitutoyo HM-124 micro-hardness machine, mostly from Steel-316 specimens but also from Hafnium Nitride. The test images include specular-polished and rough surfaces, specimens with artifacts or imperfections, indentations with deformed or damaged edges, and low-contrast images. Ground-truth diagonal lengths obtained in the conventional manual manner by an expert were compared with the results determined by our method. The proposed method achieves competitive accuracy compared with the best known methods, but it is simpler and hence more efficient.
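A rough sketch of the measurement chain is given below: Otsu binarization, the largest footprint contour, four vertices taken from the contour extremes, and the standard Vickers relation HV = 1.8544·F/d² (F in kgf, d the mean diagonal in mm). The pixel-to-millimetre factor, the test load and the file name are assumptions, and the paper's region growing and corner detection details are not reproduced.

    import cv2
    import numpy as np

    img = cv2.imread("indentation.png", cv2.IMREAD_GRAYSCALE)       # placeholder input
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea).reshape(-1, 2)           # indentation footprint

    left, right = c[c[:, 0].argmin()], c[c[:, 0].argmax()]          # horizontal diagonal ends
    top, bottom = c[c[:, 1].argmin()], c[c[:, 1].argmax()]          # vertical diagonal ends
    px_per_mm = 1000.0                                              # assumed calibration factor
    d1 = np.linalg.norm((right - left).astype(float)) / px_per_mm
    d2 = np.linalg.norm((bottom - top).astype(float)) / px_per_mm
    F_kgf = 0.5                                                     # assumed test load
    HV = 1.8544 * F_kgf / ((0.5 * (d1 + d2)) ** 2)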

Proceedings ArticleDOI
13 Jul 2018
TL;DR: A method for distance measurement using only a single target image, without any internal camera parameters, is developed; it satisfies the real-time and accuracy requirements of distance detection.
Abstract: In this paper, a method for distance measurement using only a single target image, without any internal camera parameters, is developed. The mapping between image row pixel values and actual distances is established by detecting and locating the corners on a reference target image. The distance information is extracted in real time by combining a moving target detection method based on a Gaussian mixture model (GMM) with shadow elimination in the Hue-Saturation-Intensity (HSI) color space. The method has a simple calibration procedure requiring a single image, which makes it suitable for practical application. The experimental results show that the algorithm is effective and satisfies the real-time and accuracy requirements of distance detection.
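For the motion-segmentation step, OpenCV's Gaussian-mixture background subtractor with its built-in shadow flag is a standard stand-in (the paper's own HSI-space shadow elimination is not reproduced); the video path is a placeholder.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("traffic.mp4")                    # placeholder video
    mog2 = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = mog2.apply(frame)
        fg[fg == 127] = 0                                    # drop pixels flagged as shadow
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))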

Proceedings ArticleDOI
13 Jun 2018
TL;DR: The experimental results show that the improved ORB algorithm proposed in this paper can effectively improve the matching accuracy of the ORB algorithm while maintaining real-time performance.
Abstract: In order to improve matching accuracy and reduce mismatches when the ORB algorithm is applied directly to image matching, an improved ORB algorithm based on the LATCH feature and an improved LBP feature is proposed. For feature point detection, the multi-scale FAST corner detection algorithm is used to find feature points with scale invariance. For feature point description, the LATCH feature and the LBP feature are used to generate the feature point descriptor. Since the LBP feature is too sensitive to noise and local image changes, an image block method is applied to generate the improved LBP feature proposed in this paper. These two features are then combined into a new binary LATCH/LBP feature by a multi-feature fusion method, and, combined with the center-of-gravity method used in the ORB algorithm, the new LATCH/LBP feature obtains strong robustness. For feature matching, the nearest neighbor matching algorithm is used to match the feature vectors of the two images. Finally, the matching results are screened with the RANSAC algorithm to eliminate mismatched points. The experimental results show that the improved ORB algorithm proposed in this paper can effectively improve the matching accuracy of the ORB algorithm while maintaining real-time performance.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: A crowd motion estimation system based on multi-scale optical flow and corner detection is proposed, exploiting the all-day operating capability of infrared cameras and the high flexibility of the UAV.
Abstract: Crowd motion estimation is an important part of the detection and analysis of abnormal crowd behavior. Crowd motion analysis in special places is necessary for maintaining safety and social stability in public places, and it remains a research challenge in the field of intelligent video monitoring in unexpected, dynamic, open environments. Unmanned Aerial Vehicles (UAVs) have become a flexible monitoring platform in recent years. Existing approaches to crowd motion estimation based on traditional visible-light cameras are limited in their ability to see warm objects clearly at night. This paper proposes a crowd motion estimation system based on multi-scale optical flow and corner detection, exploiting the all-day operating capability of infrared cameras and the high flexibility of the UAV. First, the original infrared images are captured with the airborne thermal infrared imager TAU2-336, and the preprocessed images are obtained by median filtering. Second, corner detection and tracking are carried out using multiscale analysis. Finally, the average velocity is calculated for crowd motion estimation. The experimental results show that the proposed approach is effective for estimating motion speed and crowd behavior status.
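The final stage (average motion speed from corner tracks) reduces to a few OpenCV calls; the sketch below assumes prev_ir and curr_ir are consecutive, already median-filtered 8-bit infrared frames and fps is the frame rate.

    import cv2
    import numpy as np

    def mean_speed(prev_ir, curr_ir, fps):
        pts = cv2.goodFeaturesToTrack(prev_ir, maxCorners=300, qualityLevel=0.01, minDistance=5)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_ir, curr_ir, pts, None, maxLevel=3)
        good = status.ravel() == 1
        flow = (nxt[good] - pts[good]).reshape(-1, 2)        # per-corner pixel displacement
        return np.linalg.norm(flow, axis=1).mean() * fps     # average speed in pixels per second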