
Showing papers on "Centroid published in 2016"


Journal ArticleDOI
TL;DR: This paper exploits a recent result by Petitjean et al. to allow meaningful averaging of "warped" time series, which then allows the creation of super-efficient nearest "centroid" classifiers that are at least as accurate as their more computationally challenged nearest neighbor relatives.
Abstract: A concerted research effort over the past two decades has heralded significant improvements in both the efficiency and effectiveness of time series classification. The consensus that has emerged in the community is that the best solution is a surprisingly simple one. In virtually all domains, the most accurate classifier is the nearest neighbor algorithm with dynamic time warping as the distance measure. The time complexity of dynamic time warping means that successful deployments on resource-constrained devices remain elusive. Moreover, the recent explosion of interest in wearable computing devices, which typically have limited computational resources, has greatly increased the need for very efficient classification algorithms. A classic technique to obtain the benefits of the nearest neighbor algorithm, without inheriting its undesirable time and space complexity, is to use the nearest centroid algorithm. Unfortunately, the unique properties of (most) time series data mean that the centroid typically does not resemble any of the instances, an unintuitive and underappreciated fact. In this paper we demonstrate that we can exploit a recent result by Petitjean et al. to allow meaningful averaging of "warped" time series, which then allows us to create super-efficient nearest "centroid" classifiers that are at least as accurate as their more computationally challenged nearest neighbor relatives. We demonstrate empirically the utility of our approach by comparing it to all the appropriate strawmen algorithms on the ubiquitous UCR Benchmarks and with a case study in supporting insect classification on resource-constrained sensors.
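For a concrete picture of the decision rule being accelerated, here is a minimal Python sketch of a nearest-centroid classifier under DTW. The DTW-aware averaging the paper relies on (DBA, from Petitjean et al.) is not reproduced; the centroids are assumed precomputed, and the DTW below is the textbook quadratic-time version, not the optimized variant the authors deploy.

```python
import numpy as np

def dtw_distance(a, b):
    """Textbook O(n*m) dynamic time warping distance between 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

def nearest_centroid_label(centroids, x):
    """One DTW computation per class instead of one per training instance."""
    return min(centroids, key=lambda lbl: dtw_distance(centroids[lbl], x))

# hypothetical usage: centroids = {"bee": series_a, "fly": series_b}
```

The classification cost drops from O(training set size) DTW computations per query to O(number of classes), which is the property that makes deployment on resource-constrained sensors plausible.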

125 citations


Journal ArticleDOI
TL;DR: A hybrid method to model and analyze the dynamic coupling of a space robotic system is proposed; at the position level it avoids the singularity problem of solving differential equations, and at the velocity level each type of coupling motion is separately modeled and analyzed for different requirements.
Abstract: Resolving linear and angular momentum conservation equations in different ways, a hybrid method was proposed to model and analyze the dynamic coupling of a space robotic system. This method dealt with the coupling problems for the base's centroid position at the position level and for its attitude at the velocity level. Based on the base centroid virtual manipulator concept, the coupled space was introduced to represent the base's centroid position coupling. For different cases, the reachable coupled space, attitude-constrained coupled space, and free coupled space were defined. The coupling for the base's velocities was decomposed into joint-to-base rotation, joint-to-base translation, end-to-base rotation, and end-to-base translation coupling types. The dependence between the rotation and translation coupling was revealed, and coupling factors were determined to measure the coupling degree. Then, the coupling effect for different loads, installation positions, and joint configurations was analyzed. Coupled maps were established to plan trajectories that minimize disturbance. Compared with previous works, dynamic coupling at the position level avoids the singularity problem of solving differential equations, while at the velocity level each type of coupling motion is separately modeled and analyzed for different requirements. The proposed method is useful for practical applications, such as designing a new manipulator or using an existing robotic system.

101 citations


Proceedings ArticleDOI
Yi Wang, Yuexian Zou
19 Aug 2016
TL;DR: Experimental results show that the proposed fast DE-VOC method is comparable with mainstream ones on counting accuracy while running much faster in testing phase.
Abstract: Density estimation based visual object counting (DE-VOC) methods estimate the count of objects in an image by integrating over its predicted density map. They are effective but inefficient. This paper proposes a fast DE-VOC method that maintains this effectiveness. Essentially, the feature space of image patches from VOC can be clustered into subspaces, and the examples of each subspace can be collected to learn its embedding. It is also assumed that the neighborhood embeddings of image patches and of their corresponding density maps generated from training images are similar. With these principles, a closed-form DE-VOC algorithm is derived, where the embedding and centroid of each neighborhood are precomputed from the training samples. Consequently, the density map of a given patch is estimated by simple classification and mapping. Experimental results show that our proposed method is comparable with mainstream ones on counting accuracy while running much faster in the testing phase.
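At test time the closed-form pipeline reduces to a classify-then-map step per patch. The sketch below illustrates that idea under assumed shapes: a feature vector per patch, one precomputed centroid and one linear mapping per neighborhood. The names and the linear form of the mapping are stand-ins, not the paper's exact formulation.

```python
import numpy as np

def estimate_count(patch_features, centroids, mappings):
    """Assign each patch to its nearest precomputed neighborhood centroid,
    map its feature vector to a density patch with that neighborhood's
    precomputed embedding, and integrate (sum) to obtain the count."""
    count = 0.0
    for f in patch_features:
        k = int(np.argmin([np.linalg.norm(f - c) for c in centroids]))
        count += (mappings[k] @ f).sum()  # a density map integrates to a count
    return count
```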

88 citations


Journal ArticleDOI
16 Jun 2016
TL;DR: The results indicated that k-means has the potential to classify the BCW dataset and provided an extensive understanding of the computational parameters that can be used with k-means.
Abstract: Breast cancer is one of the most common cancers worldwide and is most frequently found in women. Early detection of breast cancer provides the possibility of a cure; therefore, a large number of studies are currently under way to identify methods that can detect breast cancer in its early stages. This study aimed to determine the effects of the k-means clustering algorithm with different computational measures such as centroid, distance, split method, epoch, attribute, and iteration, and to carefully identify the combination of measures with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for centroid initialization. For the foggy centroid, the first centroid was calculated from random values; for the random centroid, the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases with variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2–9), and iteration (4–10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved using the Euclidean and Manhattan distances were better than those using the Pearson correlation. The findings of this work provide an extensive understanding of the computational parameters that can be used with k-means. The results indicated that k-means has the potential to classify the BCW dataset.
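As a rough illustration of the parameter grid being explored, here is a minimal k-means sketch with a switchable distance measure. The study's foggy/random initializations and split methods are not reproduced; plain random sampling stands in for initialization.

```python
import numpy as np

def kmeans(X, k, distance="euclidean", n_iter=20, seed=0):
    """Minimal k-means with the two distance options compared in the study."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        diff = X[:, None, :] - centroids[None, :, :]
        if distance == "euclidean":
            d = (diff ** 2).sum(axis=-1)
        else:  # "manhattan"
            d = np.abs(diff).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                # the mean is the exact minimizer for Euclidean distance;
                # for Manhattan, the per-coordinate median would be exact
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```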

85 citations


Journal ArticleDOI
TL;DR: A novel interferometric method for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion.
Abstract: A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramer-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.

75 citations


Journal ArticleDOI
16 Sep 2016 - Forests
TL;DR: A method for automatic stem detection and stem profile estimation based on terrestrial laser scanning (TLS) was validated; it contains a new way of extracting the flatness saliency feature using the centroid of the subset of a point cloud within a voxel cell, which approximates the point-by-point calculations.
Abstract: A method for automatic stem detection and stem profile estimation based on terrestrial laser scanning (TLS) was validated. The root-mean-square error was approximately 1 cm for stem diameter estimations. The method contains a new way of extracting the flatness saliency feature using the centroid of the subset of a point cloud within a voxel cell, which approximates the point-by-point calculations. The loss of accuracy is outweighed by a much higher computational speed, making it possible to cover large datasets. The algorithm introduces a new way to connect surface patches belonging to a stem and investigates whether they belong to curved surfaces. Thereby, cylindrical objects, like stems, are found in the pre-filtering stage. The algorithm uses a new cylinder fitting method that estimates the axis direction by transforming the TLS points into a radial-angular coordinate system and evaluates the deviations with a moving-window convex hull algorithm. Once the axis direction is found, the cylinder center is chosen as the position with the smallest radial deviations. The cylinder fitting method works on a point cloud in both the single-scan and multiple-scan setups of a TLS system.
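The speedup rests on replacing per-point neighborhood computations with one centroid per voxel cell. A minimal sketch of that binning step follows; the cell size is an assumed parameter, not a value from the paper.

```python
import numpy as np

def voxel_centroids(points, cell=0.05):
    """Bin an (N, 3) point cloud into voxel cells and return, per occupied
    cell, the centroid of its points; downstream saliency features are then
    computed on these centroids instead of on every point."""
    sums, counts = {}, {}
    for key, p in zip(map(tuple, np.floor(points / cell).astype(np.int64)), points):
        sums[key] = sums.get(key, 0.0) + p
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}
```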

53 citations


Journal ArticleDOI
TL;DR: Novel algorithms and the underlying mathematics to process photographs of illuminated planetary bodies and use them for navigation purposes are introduced, providing autonomous navigation capabilities to spacecraft observing a planet or a moon.
Abstract: This study introduces novel algorithms and the underlying mathematics to process photographs of illuminated planetary bodies and use them for navigation purposes. The goal is to accurately estimate the observer-to-body relative position in inertial coordinates. The main motivation is to provide autonomous navigation capabilities to spacecraft by observing a planet or a moon. This is needed, for example, in human-rated vehicles in order to provide navigation capabilities in a loss-of-communications scenario. The algorithm is derived for the general case of a triaxial ellipsoid that is observed bounded by an elliptical cone. The orientation of the elliptical cone reference frame is obtained by eigenanalysis, and the offset between the elliptical cone axis and the body center direction as well as the equation of the terminator are quantified. The main contribution of this paper is in the image-processing approach adopted to derive the centroid and distance to the body. This is done by selecting a set of pixels ...

39 citations


Journal ArticleDOI
TL;DR: A new gene selection method is proposed to choose the best subset of features for microarray data with the irrelevant and redundant features removed, based on a newly defined linear discriminant analysis criterion.

32 citations


Journal ArticleDOI
TL;DR: A new unsupervised domain adaptation algorithm based on class centroid alignment (CCA) is proposed for the classification of remote sensing images; better moving directions can be determined by preserving the local similarity in the changed target domain, resulting in neighborhood-based CCA (NCCA).

31 citations


Journal ArticleDOI
TL;DR: This paper proposes to adapt the classical Lloyd k-means algorithm to the context of Shape Analysis, focusing on the three-dimensional case, and presents a study comparing its performance with the Hartigan-Wong k-means algorithm, one that was previously adapted to the field of Statistical Shape Analysis.
Abstract: Clustering of objects according to shapes is of key importance in many scientific fields. In this paper we focus on the case where the shape of an object is represented by a configuration matrix of landmarks. It is well known that this shape space has a finite-dimensional Riemannian manifold structure (non-Euclidean) which makes it difficult to work with. Papers about clustering on this space are scarce in the literature. The basic foundation of the k-means algorithm is the fact that the sample mean is the value that minimizes the Euclidean distance from each point to the centroid of the cluster to which it belongs, so our idea is to integrate Procrustes-type distances and the Procrustes mean into the k-means algorithm to adapt it to the shape analysis context. As far as we know, there have been just two attempts in that direction. In this paper we propose to adapt the classical k-means Lloyd algorithm to the context of Shape Analysis, focusing on the three-dimensional case. We present a study comparing its performance with the Hartigan-Wong k-means algorithm, one that was previously adapted to the field of Statistical Shape Analysis. We demonstrate the better performance of the Lloyd version and, finally, we propose to add a trimming procedure. We apply both to a 3D database obtained from an anthropometric survey of the Spanish female population conducted in 2006. The algorithms presented in this paper are available in the Anthropometry R package, whose most current version is always available from the Comprehensive R Archive Network.
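The adaptation keeps Lloyd's two alternating steps and swaps in Procrustes machinery. The skeleton below shows that structure with the distance and the mean passed as callables; in the paper these would be the Procrustes distance and Procrustes mean, while Euclidean stand-ins suffice for testing the skeleton itself.

```python
import random

def lloyd(shapes, k, dist, mean, n_iter=20, seed=0):
    """Generic Lloyd iteration: assign each shape to its nearest centroid
    under `dist`, then recompute each centroid with `mean`."""
    rng = random.Random(seed)
    centroids = rng.sample(list(shapes), k)
    clusters = [[] for _ in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for s in shapes:
            j = min(range(k), key=lambda i: dist(s, centroids[i]))
            clusters[j].append(s)
        # empty clusters keep their previous centroid
        centroids = [mean(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids, clusters
```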

24 citations


Journal ArticleDOI
01 Oct 2016
TL;DR: An application of a genetic algorithm (GA) based segmentation algorithm is presented for automatic grouping of unlabeled pixels of MR images into different homogeneous clusters, and a performance comparison is presented between the fuzzy intercluster hostility index based GA method and the well-known automatic clustering using differential evolution (ACDE) algorithm.
Abstract: Segmentation of magnetic resonance (MR) images plays an important role in medical science and clinical research. In this article, an application of a genetic algorithm (GA) based segmentation algorithm is presented for automatic grouping of unlabeled pixels of MR images into different homogeneous clusters. This method does not require prior information about the optimal number of segments or the underlying pixel distribution of an image. The centroids of the different segments are demarcated as active/inactive centroids by the fuzzy intercluster hostility index. After that, the test images are segmented by the selected active centroids. The optimal number of segments and their respective centroids are determined by this method. A performance comparison is presented between the fuzzy intercluster hostility index based GA method, the well-known automatic clustering using differential evolution (ACDE) algorithm, and a genetic algorithm based non-automatic algorithm, using two real-life MR images. The comparison shows the superiority of the GA based automatic image segmentation method with the fuzzy intercluster hostility index over the other two algorithms.

Journal ArticleDOI
TL;DR: A two-step extraction method for the star centroid with sub-pixel accuracy is proposed; it can be implemented in hardware to increase processing speed, using the Verilog hardware description language.
Abstract: A spacecraft's attitude information plays an important role in celestial navigation. The attitude is mainly determined by matching the star's centroid in the obtained image with its corresponding information in the star catalog. Generally, the star image can be regarded as a spot with a diameter of less than 5 pixels. Therefore, it is very difficult to extract the star centroid with sub-pixel accuracy, especially in a hardware system such as an FPGA. The existing high-accuracy spot centroid extraction methods require plenty of pixels to realize their complex computations. Limited by the star's diameter and the hardware requirements, such methods are not suitable for star centroid extraction in a hardware system. To solve this problem, a two-step extraction method for the star centroid with sub-pixel accuracy is proposed. The maximum pixel-level center is located through the zero crossing of the first derivative in a small region. Taking the pixel-level center as the middle of a window of fixed size, the sub-pixel offsets to the sub-pixel center are calculated using a fixed-window weighted centroid method. The sub-pixel center of the star is then obtained by adding the offsets to the pixel-level center. This method can be implemented in hardware to increase processing speed, using the Verilog hardware description language. A simulation is performed on a computer and an FPGA. Experimental results show the excellent accuracy and processing speed of the two-step method. In addition, the two-step method has a strong ability to resist noise and good robustness compared to other methods.
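A minimal numerical sketch of the two-step idea: the brightest pixel stands in for the paper's first-derivative zero-crossing search for the pixel-level center, and a fixed-window intensity-weighted centroid supplies the sub-pixel offsets. Border handling and the FPGA pipeline are out of scope here.

```python
import numpy as np

def star_centroid(img, win=5):
    """Two-step centroid: pixel-level center first, then sub-pixel offsets
    from a fixed-window weighted centroid added to it (the star is assumed
    to lie at least win // 2 pixels away from the image border)."""
    r, c = np.unravel_index(np.argmax(img), img.shape)       # pixel-level center
    h = win // 2
    w = img[r - h:r + h + 1, c - h:c + h + 1].astype(float)  # fixed window
    ys, xs = np.mgrid[-h:h + 1, -h:h + 1]
    total = w.sum()
    return r + (ys * w).sum() / total, c + (xs * w).sum() / total
```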

Journal ArticleDOI
TL;DR: It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).
Abstract: Some problems of partitioning a finite set of points of Euclidean space into two clusters are considered. In these problems, the following criteria are minimized: (1) the sum over both clusters of the sums of squared pairwise distances between the elements of the cluster and (2) the sum of the (multiplied by the cardinalities of the clusters) sums of squared distances from the elements of the cluster to its geometric center, where the geometric center (or centroid) of a cluster is defined as the mean value of the elements in that cluster. Additionally, another problem close to (2) is considered, where the desired center of one of the clusters is given as input, while the center of the other cluster is unknown (is the variable to be optimized) as in problem (2). Two variants of the problems are analyzed, in which the cardinalities of the clusters are (1) parts of the input or (2) optimization variables. It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).
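Transcribing the prose into formulas: for a 2-partition {C_1, C_2} of the input set, the two minimized criteria described above read as follows, with the centroid defined as the mean of a cluster's elements.

```latex
F_1 = \sum_{i=1}^{2} \sum_{x \in C_i} \sum_{z \in C_i} \lVert x - z \rVert^2,
\qquad
F_2 = \sum_{i=1}^{2} |C_i| \sum_{x \in C_i} \lVert x - \bar{y}(C_i) \rVert^2,
\quad \text{where } \bar{y}(C_i) = \frac{1}{|C_i|} \sum_{x \in C_i} x .
```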

Journal ArticleDOI
01 Jan 2016
TL;DR: Experimental results demonstrated the effectiveness of the proposed approach for shape recognition with high accuracy; a Probabilistic Neural Network was used to classify the leaf shape.
Abstract: This research recognizes leaf shape using the Centroid Contour Distance (CCD) as the shape descriptor. CCD is a contour-based shape representation algorithm which exploits only boundary information: it calculates the distance between the centroid and the points on the edge corresponding to each interval angle. The leaf shapes included in this study are elliptic, cordate, ovate, and lanceolate. We analyzed 200 leaf images of tropical plants, with 50 images per class. A Probabilistic Neural Network was used to classify the leaf shape, and the best accuracy obtained was 96.67%. Experimental results demonstrated the effectiveness of the proposed approach for shape recognition with high accuracy.
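A sketch of the CCD computation described above, assuming the contour is given as an (N, 2) array of boundary points; the angular binning used here is one plausible reading of "points on the edge corresponding to each interval angle", not the paper's exact procedure.

```python
import numpy as np

def ccd(contour, n_angles=36):
    """Distance from the shape centroid to the boundary, sampled at fixed
    angular intervals: for each interval, take the contour point whose
    angle is closest to the interval's center."""
    c = contour.mean(axis=0)
    rel = contour - c
    ang = np.arctan2(rel[:, 1], rel[:, 0])
    dist = np.hypot(rel[:, 0], rel[:, 1])
    mids = -np.pi + (np.arange(n_angles) + 0.5) * (2 * np.pi / n_angles)
    # wrap angular differences into [-pi, pi) before picking the closest point
    idx = [np.argmin(np.abs(np.angle(np.exp(1j * (ang - m))))) for m in mids]
    return dist[idx]
```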



Journal ArticleDOI
TL;DR: In this paper, a high-accuracy algorithm is presented to extract the planet centroid from its raw image by segmenting the planet image block to eliminate noise and to reduce the computation load.
Abstract: A planet centroid is an important observable object in autonomous optical navigation. A high-accuracy algorithm is presented to extract the planet centroid from its raw image. First, we proposed a planet segmentation algorithm to segment the planet image block to eliminate noise and to reduce the computation load. Second, we developed an effective algorithm based on Prewitt-Zernike moments to detect sub-pixel real edges by determining possible edges with the Prewitt operator, removing pseudo-edges in backlit shady areas, and relocating real edges to sub-pixel accuracy with the Zernike moments. Third, we proposed an elliptical model to fit the sub-pixel edge points. Finally, we verified the performance of this algorithm against real images from the Cassini-Huygens mission and against synthetic simulated images. Simulation results showed that the accuracy of the planet centroid is up to 0.3 pixels and that of the line-of-sight vector is about 2.1 × 10⁻⁵ rad.
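The final elliptical-model step can be illustrated with a plain linear least-squares conic fit; this is a simple stand-in for the paper's fitting procedure, not its exact algorithm.

```python
import numpy as np

def ellipse_center(xs, ys):
    """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to sub-pixel edge points by
    linear least squares; the center is where the conic's gradient vanishes."""
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)[0]
    # grad F = 0  =>  [2a b; b 2c] [x; y] = [-d; -e]
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
```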

Journal ArticleDOI
TL;DR: A novel primary user localization algorithm based on compressive sensing (PU-CSL) in cognitive radio networks (CRNs) is proposed, which shows higher locating accuracy than existing centroid locating algorithms by fully exploring the correlation between the source signal and secondary users (SUs).
Abstract: In order to locate a source signal more accurately in authorized frequency bands, a novel primary user localization algorithm based on compressive sensing (PU-CSL) in cognitive radio networks (CRNs) is proposed in this paper. In comparison to existing centroid locating algorithms, PU-CSL shows higher locating accuracy because it fully explores the correlation between the source signal and the secondary users (SUs). Energy detection is first adopted to collect the energy fingerprint of the source signal at each SU; the degree of correlation between the source signal and the SUs is then reconstructed based on compressive sensing (CS), which determines the weights of the centroid coordinates. A weighted centroid scheme is finally utilized to estimate the source position. Simulation results show that PU-CSL has a smaller maximum positioning error and root-mean-square error. Moreover, the proposed PU-CSL algorithm possesses excellent location accuracy and strong anti-noise performance.
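The last step of PU-CSL is a standard weighted centroid. In the sketch below the CS-reconstructed correlation degrees play the role of the weights; variable names are illustrative.

```python
import numpy as np

def weighted_centroid(su_positions, correlation_weights):
    """Source estimate = weight-normalized average of the SU positions."""
    P = np.asarray(su_positions, dtype=float)       # shape (n_SUs, 2)
    w = np.asarray(correlation_weights, dtype=float)
    return (w[:, None] * P).sum(axis=0) / w.sum()
```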

Journal ArticleDOI
TL;DR: A novel circle views (CVs) shape signature for recognizing 2-D object silhouettes is proposed that provides a promising retrieval rate, and a slight modification to the shape context learning technique is proposed that reduces its computational cost significantly.
Abstract: An important problem in computer vision is object recognition, which has received considerable attention in the literature. The performance of any object recognition system depends on the shape representation used and on the matching algorithm applied. In this paper, we propose a novel circle views (CVs) shape signature for recognizing 2-D object silhouettes. Many views from one circular orbit (or more) centered at the shape centroid are defined based on the distances from each viewing point on the circular orbit to a fixed number of sampled shape contour points. One compact and robust shape descriptor is obtained by applying the Fourier transform to the proposed signature. The obtained descriptor is translation, rotation, and scale invariant. Two popular shape benchmarks have been used for testing: 1) MPEG-7 and 2) Kimia’s-99 databases. The proposed CVs signature provides a promising retrieval rate (83.71% on MPEG-7 database). A further increase in the retrieval rate (90.35%) has been achieved by applying a shape context learning technique. A slight modification to the learning technique has been proposed that reduces its computational cost significantly. An attractive feature of the proposed CVs signature is its simplicity and computational efficiency, which makes the CVs signature more practical for different application areas.
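A rough sketch of the CVs construction under assumptions: one circular orbit whose radius is a scaled maximum contour extent (the scale factor is an assumed parameter), and a simple zero-frequency normalization standing in for the paper's exact invariance construction.

```python
import numpy as np

def circle_views_signature(contour, n_views=60, n_samples=100, radius_scale=2.0):
    """Distances from viewpoints on one circular orbit centered at the shape
    centroid to a fixed number of sampled contour points; the FFT magnitude
    over the view axis turns a rotation (a cyclic shift of views) into an
    invariant, and dividing by the zero-frequency row normalizes for scale."""
    pts = contour[np.linspace(0, len(contour) - 1, n_samples).astype(int)]
    c = pts.mean(axis=0)
    r = radius_scale * np.linalg.norm(pts - c, axis=1).max()
    t = 2 * np.pi * np.arange(n_views) / n_views
    views = c + r * np.stack([np.cos(t), np.sin(t)], axis=1)
    sig = np.linalg.norm(views[:, None, :] - pts[None, :, :], axis=2)
    spec = np.abs(np.fft.fft(sig, axis=0))
    return spec / spec[0]
```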

Book ChapterDOI
20 Nov 2016
TL;DR: In this paper, the Signature of Geometric Centroids descriptor is proposed to support direct shape matching on the scans, without requiring any preprocessing such as scan denoising or converting into a mesh.
Abstract: Depth scans acquired from different views may contain nuisances such as noise, occlusion, and varying point density. We propose a novel Signature of Geometric Centroids descriptor, supporting direct shape matching on the scans, without requiring any preprocessing such as scan denoising or converting into a mesh. First, we construct the descriptor by voxelizing the local shape within a uniquely defined local reference frame and concatenating geometric centroid and point density features extracted from each voxel. Second, we compare two descriptors by employing only corresponding voxels that are both non-empty, thus supporting matching incomplete local shape such as those close to scan boundary. Third, we propose a descriptor saliency measure and compute it from a descriptor-graph to improve shape matching performance. We demonstrate the descriptor’s robustness and effectiveness for shape matching by comparing it with three state-of-the-art descriptors, and applying it to object/scene reconstruction and 3D object recognition.
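The first step of the construction can be sketched as follows, assuming the local support has already been expressed in its uniquely defined local reference frame; the grid resolution and the exact feature layout are assumptions, not the paper's specification.

```python
import numpy as np

def sgc_descriptor(local_points, support_radius, n_cells=4):
    """Voxelize the local support and store, per voxel, the centroid of its
    points (3 values) plus a point count; the non-empty mask lets matching
    compare only voxels that are non-empty in both descriptors."""
    edge = 2.0 * support_radius / n_cells
    idx = np.clip(np.floor((local_points + support_radius) / edge).astype(int),
                  0, n_cells - 1)
    feat = np.zeros((n_cells, n_cells, n_cells, 4))  # centroid xyz + count
    for (i, j, k), p in zip(idx, local_points):
        feat[i, j, k, :3] += p
        feat[i, j, k, 3] += 1
    nonempty = feat[..., 3] > 0
    feat[nonempty, :3] /= feat[nonempty, 3:]         # sums -> centroids
    return feat, nonempty
```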

Book ChapterDOI
19 Sep 2016
TL;DR: An approximation algorithm is presented for the strongly NP-hard problem of partitioning a set of Euclidean points into two clusters and it is proved that it is a fully polynomial-time approximation scheme when the space dimension is bounded by a constant.
Abstract: We consider the strongly NP-hard problem of partitioning a set of Euclidean points into two clusters so as to minimize the sum (over both clusters) of the weighted sum of the squared intracluster distances from the elements of the clusters to their centers. The weights of sums are the cardinalities of the clusters. The center of one of the clusters is given as input, while the center of the other cluster is unknown and determined as the geometric center (centroid), i.e. the average value over all points in the cluster. We analyze the variant of the problem with cardinality constraints. We present an approximation algorithm for the problem and prove that it is a fully polynomial-time approximation scheme when the space dimension is bounded by a constant.

Patent
16 Dec 2016
TL;DR: In this paper, a selection of data types is defined from available log data for an evaluation of events associated with an entity, one or more evaluations associated with the entity are defined and reference data is generated from the selection of the data types based on the defined evaluations.
Abstract: A selection of data types is defined from available log data for an evaluation of events associated with an entity. One or more evaluations associated with the entity are defined, and reference data is generated from the selection of data types based on the one or more defined evaluations. The one or more evaluations are grouped into a pattern. A three-dimensional (3D) score diversity diagram visualization is initialized for display in a graphical user interface, where a point representing the entity in the visualization is localized in 3D space at a coordinate based on the two-dimensional (2D) coordinates, in a 2D coordinate system, of the centroid of the calculated area of a polygon placed into the 2D coordinate system and defined by the values of each evaluation associated with the entity.

Journal ArticleDOI
TL;DR: Centroid precision and orientation precision, as developed here, are useful concepts due to the generality of the expressions and the widespread interest in localization microscopy for super‐resolution imaging and particle tracking.
Abstract: The concept of localization precision, which is essential to localization microscopy, is formally extended from optical point sources to microscopic rigid bodies. Measurement functions are presented to calculate the planar pose and motion of microscopic rigid bodies from localization microscopy data. Physical lower bounds on the associated uncertainties - termed centroid precision and orientation precision - are derived analytically in terms of the characteristics of the optical measurement system and validated numerically by Monte Carlo simulations. The practical utility of these expressions is demonstrated experimentally by an analysis of the motion of a microelectromechanical goniometer indicated by a sparse constellation of fluorescent nanoparticles. Centroid precision and orientation precision, as developed here, are useful concepts due to the generality of the expressions and the widespread interest in localization microscopy for super-resolution imaging and particle tracking.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: The distance-based control law is proposed based on the estimations, such that the weighted centroid of the formation is driven to track the assigned time-varying reference, meanwhile maintaining the prescribed formation shape.
Abstract: This paper investigates the weighted centroid formation tracking control for multi-agent systems. First, a class of novel distributed observers is developed for each agent to infer the formation's weighted centroid in finite time. Then, the distance-based control law is proposed based on the estimations, such that the weighted centroid of the formation is driven to track the assigned time-varying reference, meanwhile maintaining the prescribed formation shape. Moreover, the formation stabilization error is shown to converge to zero using the proposed observer-controller scheme utilizing the finite-time Lyapunov stability of the observers. Finally, all the theoretical results are further validated through numerical simulations.

Journal ArticleDOI
TL;DR: A new robust estimation method for the central value of a set of covariance matrices, called Huber's centroid, is described starting from the expression of two well-known methods that are the center of mass and the median.
Abstract: This letter introduces a new robust estimation method for the central value of a set of N covariance matrices. This estimator, called Huber's centroid, is described starting from the expressions of two well-known methods, the center of mass and the median. In addition, a computation algorithm based on gradient descent is proposed. Moreover, the performance of Huber's centroid is analyzed on simulated data to identify the impact of outliers on the estimation process. In the end, the algorithm is applied to brain decoding, based on magnetoencephalography data. For both simulated and real data, the covariance matrices are considered as realizations of Riemannian Gaussian distributions, and the results are compared to those given by the center of mass and the median.

Patent
09 Mar 2016
TL;DR: A borehole seismic quality factor inversion method integrating centroid frequency and spectral ratio is proposed; it builds an inversion equation for calculating the stratum attenuation coefficient from the spectral ratio, builds a joint inversion equation among the spectral ratio, the centroid frequency, and the stratum attenuation coefficient, and solves the objective function of the joint inversion with a damped LSQR algorithm.
Abstract: The invention discloses a borehole seismic quality factor inversion method that integrates centroid frequency and spectral ratio. The method comprises the steps of: building an inversion equation for calculating the stratum attenuation coefficient from the spectral ratio; building an inversion equation relating the shift of the centroid frequency to the stratum attenuation coefficient; building a joint inversion equation among the spectral ratio, the centroid frequency, and the stratum attenuation coefficient; and solving the objective function of the joint inversion with a damped LSQR algorithm. The method retains the stability of spectral-ratio results and is only weakly affected by amplitude attenuation from non-stratum factors. It combines the high calculation precision of the centroid frequency method and its sensitivity to attenuation anomalies, makes full use of the frequency changes caused by the attenuation of seismic waves, and builds a joint inversion equation that solves for the attenuation coefficient from the centroid frequency and the spectral ratio. Spectral-ratio information constrains the centroid frequency inversion, which improves the inversion result, effectively reduces noise and other interference, and improves the stability of the absorption-attenuation parameter inversion results.

Patent
03 Feb 2016
TL;DR: A rapid light-stripe center extraction method based on the gray centroid method is provided; it belongs to the field of computer vision measurement and relates to the effective acquisition of feature information in vision measurement when the left and right camera fields of view and photographing angles are inconsistent.
Abstract: The invention provides a rapid light-stripe center extraction method based on the gray centroid method. It belongs to the field of computer vision measurement and relates to the effective acquisition of feature information in vision measurement when the left and right camera fields of view and photographing angles are inconsistent. In the method, laser light stripes on the surface to be measured are photographed by a binocular camera, center points are coarsely extracted in each line of stripe pixels using the conventional gray centroid method, and a boundary recognition threshold is set to determine the effective measurement area of the stripes. The pixel coordinates along the stripe direction are then linearly subdivided, using the determined left and right image information, to obtain the sub-pixel coordinates of the stripe centers in that direction. Accurate stripe-center extraction is performed using the boundary point information and the subdivision result, yielding effective stripe center point coordinates. The method achieves equivalent, rapid, and high-precision extraction of the light-stripe center points on the measured object surface, so that problems in subsequent matching can be effectively reduced while meeting the real-time requirements of measurement, thus enhancing the reconstruction precision of subsequent binocular vision measurement.

Book ChapterDOI
16 Jul 2016
TL;DR: This research developed a new version of the well-known k-means clustering algorithm that deals with incomplete datasets; experiments on six standard numerical datasets from different fields compared the performance of the proposed k-means to other basic methods.
Abstract: Missing values in data are common in real world applications. In this research we developed a new version of the well-known k-means clustering algorithm that deals with such incomplete datasets. The k-means algorithm has two basic steps, performed at each iteration: it associates each point with its closest centroid and then it computes the new centroids. So, to run it we need a distance function and a mean computation formula. To measure the similarity between two incomplete points, we use the distribution of the incomplete attributes. We propose several directions for computing the centroids. In the first, each incomplete point is dealt with as one point and the centroid is computed according to the formula derived in this research. In the second and the third, each incomplete point is replaced with a large number of points according to the data distribution, and from these points the centroid is computed. Even so, the runtime complexity of the suggested k-means is the same as that of the standard k-means over complete datasets. We experimented on six standard numerical datasets from different fields and compared the performance of our proposed k-means to other basic methods. Our experiments show that our suggested k-means algorithms outperform previously published methods.
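One simple way to realize the "use the distribution of the incomplete attributes" idea, sketched under assumptions: observed coordinates contribute squared differences, and each missing coordinate contributes its attribute's variance as an expected penalty. This illustrates the flavor of the approach, not the paper's exact formulas.

```python
import numpy as np

def incomplete_sq_distance(x, centroid, attr_var):
    """Squared distance between a possibly-incomplete point (NaNs mark
    missing attributes) and a complete centroid: exact terms where observed,
    the attribute's variance as the expected contribution where missing."""
    miss = np.isnan(x)
    return ((x[~miss] - centroid[~miss]) ** 2).sum() + attr_var[miss].sum()
```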

Journal ArticleDOI
TL;DR: This paper defines the mean points in BEMD 'sifting' processing as the centroids of neighbouring extrema points in a Delaunay triangulation and proposes using this mean approximation instead of the envelope mean in 'sifting'.

Proceedings ArticleDOI
29 Aug 2016
TL;DR: A new robust estimation method for the central value of a set of N covariance matrices, called Huber's centroid, is described starting from the expression of two well-known methods that are the center of mass and the median.
Abstract: Many signal and image processing applications, including texture analysis, radar detection or EEG signal classification, require the computation of a centroid from a set of covariance matrices. The most popular approach consists in considering the center of mass. While efficient, this estimator is not robust to outliers arising from the inherent variability of the data or from faulty measurements. To overcome this, some authors have proposed to use the median as a more robust estimator. Here, we propose an estimator which takes advantage of both efficiency and robustness by combining the concepts of Riemannian center of mass and median. Based on the theory of M-estimators, this robust centroid estimator is issued from the so-called Huber's function. We present a gradient descent algorithm to estimate it. In addition, an experiment on both simulated and real data is carried out to evaluate the influence of outliers on the estimation and classification performances.
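To make the center-of-mass/median interpolation concrete, here is a flat-space (Euclidean) illustration of the Huber M-estimator centroid via iteratively reweighted averaging. The paper itself works on the Riemannian manifold of covariance matrices with a gradient descent; this sketch only shows the robustness mechanism, and the threshold delta is an assumed parameter.

```python
import numpy as np

def huber_centroid(X, delta=1.0, n_iter=100, tol=1e-9):
    """Points within delta of the current estimate get weight 1 (center-of-
    mass behavior); farther points get weight delta/distance (median-like
    behavior), which caps the influence of outliers."""
    mu = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X - mu, axis=1)
        w = np.where(d <= delta, 1.0, delta / np.maximum(d, 1e-12))
        new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(new - mu) < tol:
            break
        mu = new
    return mu
```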