
Showing papers on "Centroid published in 2012"


Journal Article
TL;DR: A survey of various algorithms for computing matrix geometric means and new second-order optimization algorithms to compute the Karcher mean are presented and it is concluded that currently first-order algorithms are best suited for this optimization problem as the size and/or number of the matrices increase.
Abstract: In this paper we present a survey of various algorithms for computing matrix geometric means and derive new second-order optimization algorithms to compute the Karcher mean. These new algorithms are constructed using the standard definition of the Riemannian Hessian. The survey includes the ALM list of desired properties for a geometric mean, the analytical expression for the mean of two matrices, algorithms based on the centroid computation in Euclidean (flat) space, and Riemannian optimization techniques to compute the Karcher mean (preceded by a short introduction into differential geometry). A change of metric is considered in the optimization techniques to reduce the complexity of the structures used in these algorithms. Numerical experiments are presented to compare the existing and the newly developed algorithms. We conclude that currently first-order algorithms are best suited for this optimization problem as the size and/or number of the matrices increase. Copyright © 2012, Kent State University.
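As a concrete illustration of the first-order approach the survey favours, the classical fixed-point (gradient-descent) iteration for the Karcher mean of symmetric positive-definite matrices can be sketched as follows; the function name, starting point, and tolerance are illustrative choices, not details from the paper:

```python
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power

def karcher_mean(mats, iters=50, tol=1e-10):
    """Minimal sketch of the first-order fixed-point iteration
    X <- X^(1/2) exp( (1/N) sum_i log(X^(-1/2) A_i X^(-1/2)) ) X^(1/2)
    for the Karcher mean of SPD matrices."""
    X = sum(mats) / len(mats)          # arithmetic mean as a starting point
    for _ in range(iters):
        Xh = fractional_matrix_power(X, 0.5)
        Xmh = fractional_matrix_power(X, -0.5)
        # average of the logs in the tangent space at X (Riemannian gradient)
        S = sum(logm(Xmh @ A @ Xmh) for A in mats) / len(mats)
        X_new = Xh @ expm(S) @ Xh
        if np.linalg.norm(X_new - X) < tol:
            return X_new.real
        X = X_new
    return X.real
```

For two matrices this converges to the analytical geometric mean A^(1/2)(A^(-1/2)BA^(-1/2))^(1/2)A^(1/2) mentioned in the survey.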

128 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm to rapidly determine the moment tensor and centroid location for large earthquakes employing local and regional real-time high-rate displacement records from GPS.
Abstract: We present an algorithm to rapidly determine the moment tensor and centroid location for large earthquakes employing local and regional real-time high-rate displacement records from GPS. The algorithm extracts the coseismic offset from the displacement waveforms and uses the information to invert for the moment tensor. The Green's functions for a layered earth are computed numerically with the open-source code EDGRN. To determine the centroid, multiple inversions are performed simultaneously within a grid of inversion nodes, and the node with the smallest misfit is assigned as the centroid location. We show results for two large earthquakes replayed in simulated real-time mode using recorded 1 Hz GPS displacements: the 2003 Mw 8.3 Tokachi-oki and the 2010 Mw 7.2 El Mayor-Cucapah earthquakes. We demonstrate that it is feasible to obtain accurate CMT solutions within the first 2–3 min after rupture initiation without any prior assumptions on fault characteristics, an order-of-magnitude improvement in latency compared to existing seismic methods for the two earthquakes studied. This methodology is useful for rapid earthquake response, tsunami prediction and as a starting point for rapid finite fault modelling.
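The centroid grid search described above amounts to running one linear inversion per node and keeping the node with the smallest residual. A hedged sketch; the function name, arguments, and plain least-squares misfit are assumptions, not the paper's implementation:

```python
import numpy as np

def grid_search_centroid(nodes, greens, data):
    """Pick the centroid as the grid node whose linear moment-tensor
    inversion best fits the observed static offsets (smallest misfit).
    greens[i] is an (n_obs x n_params) Green's-function matrix for
    node i; data holds the observed coseismic offsets."""
    best, best_misfit = None, np.inf
    for i, G in enumerate(greens):
        m, *_ = np.linalg.lstsq(G, data, rcond=None)  # linear inversion
        misfit = np.linalg.norm(data - G @ m)          # residual norm
        if misfit < best_misfit:
            best, best_misfit = i, misfit
    return nodes[best], best_misfit
```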

112 citations


Journal ArticleDOI
TL;DR: This work proposes a new way to solve the auxiliary problem of finding a column with negative reduced cost based on geometric arguments that greatly improves the efficiency of the whole algorithm and leads to exact solution of instances with over 2,300 entities.
Abstract: Given a set of entities associated with points in Euclidean space, minimum sum-of-squares clustering (MSSC) consists in partitioning this set into clusters such that the sum of squared distances from each point to the centroid of its cluster is minimized. A column generation algorithm for MSSC was given by du Merle et al. in SIAM Journal on Scientific Computing, 21:1485–1505. The bottleneck of that algorithm is the resolution of the auxiliary problem of finding a column with negative reduced cost. We propose a new way to solve this auxiliary problem based on geometric arguments. This greatly improves the efficiency of the whole algorithm and leads to exact solution of instances with over 2,300 entities, i.e., more than 10 times larger than those solved previously.
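The MSSC objective that the column-generation algorithm minimises can be stated compactly. This sketch only evaluates the objective for a given partition (names are illustrative); it is not the exact solver from the paper:

```python
import numpy as np

def mssc_objective(points, labels):
    """Minimum sum-of-squares clustering objective: total squared
    distance from each point to the centroid of its cluster."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    total = 0.0
    for c in np.unique(labels):
        cluster = points[labels == c]
        centroid = cluster.mean(axis=0)  # the centroid minimises within-cluster SSQ
        total += ((cluster - centroid) ** 2).sum()
    return total
```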

85 citations


Journal ArticleDOI
TL;DR: A local mean-based k-nearest centroid neighbor classifier that assigns to each query pattern a class label with nearest local centroid mean vector so as to improve the classification performance.
Abstract: The K-nearest neighbor (KNN) rule is a simple and effective algorithm in pattern classification. In this article, we propose a local mean-based k-nearest centroid neighbor classifier that assigns to each query pattern the class label with the nearest local centroid mean vector, so as to improve the classification performance. The proposed scheme not only takes into account the proximity and spatial distribution of the k neighbors, but also utilizes the local mean vector of the k neighbors from each class in making the classification decision. In the proposed classifier, a local mean vector of the k nearest centroid neighbors from each class for a query pattern is well positioned to sufficiently capture the class distribution information. In order to investigate the classification behavior of the proposed classifier, we conduct extensive experiments on real and synthetic data sets in terms of the classification error. Experimental results demonstrate that our proposed method performs well, particularly in small sample size cases, compared with the state-of-the-art KNN-based algorithms.
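A stripped-down version of the local-mean decision rule can be sketched as follows. Note the paper uses nearest "centroid" neighbours, a more elaborate neighbourhood definition; this sketch falls back to ordinary per-class k-nearest neighbours, so it illustrates the rule, not the paper's exact classifier:

```python
import numpy as np

def local_mean_predict(X_train, y_train, x, k=3):
    """For each class, average the k training points nearest to the
    query x and assign the class whose local mean is closest."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train)
    best_class, best_dist = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x, axis=1)
        local_mean = Xc[np.argsort(d)[:k]].mean(axis=0)  # local mean vector
        dist = np.linalg.norm(local_mean - x)
        if dist < best_dist:
            best_class, best_dist = c, dist
    return best_class
```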

79 citations


Journal ArticleDOI
TL;DR: This work proposes a flexible location estimation algorithm using generalized regression neural network (GRNN) and weighted centroid localization that achieves remarkably good accuracy given its simplicity and lack of additional hardware requirements.
Abstract: Traditional received signal strength (RSS)-based localizations are often erroneous for low-cost WSN devices. The reason is that the wireless channel is vulnerable to so many factors that deriving an appropriate propagation loss model for the WSN device is difficult. We propose a flexible location estimation algorithm using generalized regression neural network (GRNN) and weighted centroid localization. In the first phase of the proposed scheme, two GRNNs are trained separately for the x and y coordinates, using RSS data gathered at the access points from the reference nodes. The networks are then used to estimate the approximate location of the target node and its close neighbors. In the second phase, the target node position is determined by calculating the weighted centroid of the Nc closest neighbors. Performance of the proposed algorithm is compared with some existing RSS-based techniques. Simulation and experimental results indicate that the location accuracy is satisfactory, and the system performs remarkably well given its simplicity and the fact that it requires no additional hardware.
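The second-phase estimate is a plain weighted centroid. A minimal sketch; the normalised weighting is an assumption here, whereas the paper derives weights from the GRNN stage:

```python
import numpy as np

def weighted_centroid(neighbors, weights):
    """Weighted centroid of neighbour coordinates: each position
    contributes in proportion to its (normalised) weight."""
    neighbors = np.asarray(neighbors, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * neighbors).sum(axis=0) / w.sum()
```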

73 citations


Proceedings ArticleDOI
02 Jul 2012
TL;DR: A head detection algorithm for depth video provided by a Kinect camera and its application to fall detection, which is more robust to human articulation and back bending and less affected by the centroid fluctuation.
Abstract: This article proposes a head detection algorithm for depth video provided by a Kinect camera and its application to fall detection. The proposed algorithm first detects possible head positions and then based on these positions, recognizes people by detecting the head and the shoulders. Searching for head positions is rapid because we only look for the head contour on the human outer contour. The human recognition is a modification of HOG (Histogram of Oriented Gradient) for the head and the shoulders. Compared with the original HOG, our algorithm is more robust to human articulation and back bending. The fall detection algorithm is based on the speed of the head and the body centroid and their distance to the ground. By using both the body centroid and the head, our algorithm is less affected by the centroid fluctuation. Besides, we also present a simple but effective method to verify the distance from the ground to the head and the centroid.
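The final decision combines height above ground and speed for both the head and the body centroid. A toy sketch with illustrative thresholds; the paper's actual thresholds and feature computation are not reproduced:

```python
def is_fall(head_height, centroid_height, head_speed, centroid_speed,
            h_thresh=0.4, v_thresh=1.2):
    """Flag a fall when both the head and the body centroid end up
    close to the ground AND at least one of them moved fast.
    Thresholds (metres, m/s) are illustrative assumptions."""
    near_ground = head_height < h_thresh and centroid_height < h_thresh
    fast = head_speed > v_thresh or centroid_speed > v_thresh
    return near_ground and fast
```

Using both the head and the centroid, as the paper argues, makes the test less sensitive to centroid fluctuation than a centroid-only rule.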

62 citations


Journal ArticleDOI
TL;DR: In this paper, the authors unify and slightly improve several bounds on the isotropic constant of high-dimensional convex bodies; in particular, a linear dependence on the body's ψ 2 constant is obtained.

57 citations


Journal ArticleDOI
TL;DR: A novel method named as inverse class frequency is proposed to increase the quality of the centroid values, which involves an update of the classical values to decrease the training and classification times.

51 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: The proposed algorithm uses weight of anchors to improve localization accuracy and the simulation results show that the localization error of the weighted centroid localization algorithm is smaller than DV-Hop and centroid location algorithm.
Abstract: Localization is an important problem in wireless sensor networks (WSNs), since event information with little node location information is meaningless in practical applications. The well-known DV-hop and centroid localization algorithms can be simply implemented in real WSNs. In this paper, we analyze the drawbacks of these two algorithms and propose a new weighted centroid localization algorithm based on DV-hop. The proposed algorithm uses the weights of anchors to improve localization accuracy. The simulation results show that the localization error of the weighted centroid localization algorithm is smaller than that of the DV-hop and centroid localization algorithms.
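The DV-hop step that precedes the weighted centroid stage converts hop counts into distance estimates via an average hop size. A sketch of the standard DV-hop computation (function and argument names are illustrative, and the paper's anchor weighting is not reproduced):

```python
import numpy as np

def dv_hop_distances(anchor_pos, anchor_hops, node_hops):
    """Standard DV-hop distance estimation: an average hop size (metres
    per hop) is computed from inter-anchor distances and hop counts,
    then a node's hop counts to the anchors are scaled by it.
    anchor_hops is the (n x n) matrix of hops between anchor pairs."""
    P = np.asarray(anchor_pos, dtype=float)
    H = np.asarray(anchor_hops, dtype=float)
    n = len(P)
    dists = np.linalg.norm(P[:, None] - P[None], axis=2)
    iu = np.triu_indices(n, k=1)                   # each anchor pair once
    hop_size = dists[iu].sum() / H[iu].sum()       # metres per hop
    return hop_size * np.asarray(node_hops, dtype=float)
```

These estimated distances (or their reciprocals in hops) are what a weighted centroid scheme can then use as weights.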

47 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of projected large-scale structure (LSS), smoothing of mass maps, and shape noise on the weak-lensing peak positions was studied, and the authors concluded that projected LSS, although a major contaminant for weaklensing mass estimates, is not a source of confusion for identifying halo centres.
Abstract: Centroid positions of peaks identified in weak-lensing mass maps often show offsets with respect to other means of identifying halo centres, such as position of the brightest cluster galaxy or X-ray emission centroid. Here we study the effect of projected large-scale structure (LSS), smoothing of mass maps, and shape noise on the weak-lensing peak positions. In addition, we compare the offsets in mass maps to those found in parametric model fits. Using ray-tracing simulations through the Millennium RunN-body simulation, we find that projected LSS does not alter the weak-lensing peak position within the limits of our simulations’ spatial resolution, which exceeds the typical resolution of weak-lensing maps. We conclude that projected LSS, although a major contaminant for weak-lensing mass estimates, is not a source of confusion for identifying halo centres. The typically reported offsets in the literature are caused by a combination of shape noise and smoothing alone. This is true for centroid positions derived both from mass maps and model fits.

44 citations


Journal ArticleDOI
TL;DR: This paper addresses the type-reduction phase in GT2 FLSs, using GT2 fuzzy sets (FSs) represented in the α-plane framework using the monotone centroid flow (MCF) algorithm.
Abstract: Recently, type-2 fuzzy logic systems (T2 FLSs) have received increased research attention due to their potential to model and cope with the dynamic uncertainties ubiquitous in many engineering applications. However, because of the complex nature and the computational intensity of the inference process, only the constrained version of T2 FLSs, i.e., the interval T2 FLSs, was typically used. Fortunately, the very recently introduced concepts of α-planes and zSlices allow for efficient representation, as well as a computationally fast inference process, with general T2 (GT2) FLSs. This paper addresses the type-reduction phase in GT2 FLSs, using GT2 fuzzy sets (FSs) represented in the α-plane framework. The monotone property of centroids of a set of α-planes is derived and leveraged toward developing a simple to implement but fast algorithm for type reduction of GT2 FSs - i.e., the monotone centroid flow (MCF) algorithm. When compared with the centroid flow (CF) algorithm, which was previously developed by Zhai and Mendel, the MCF algorithm features the following advantages. 1) The MCF algorithm computes numerically identical centroid as the Karnik-Mendel (KM) iterative algorithms, unlike the approximated centroid which is obtained with the CF algorithm; 2) the MCF algorithm is faster than the CF algorithm, as well as the independent application of the KM algorithms; 3) the MCF algorithm is easy to implement, unlike the CF algorithm, which requires computation of the derivatives of the centroid; and 4) the MCF algorithm completely eliminates the need to apply the KM iterative procedure to any α-planes of the GT2 FS. The performance of the algorithm is presented on benchmark problems and compared with other type-reduction techniques that are available in the literature.
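For reference, the Karnik-Mendel iteration that the MCF algorithm avoids re-running on every α-plane can be sketched for the right end-point of an interval centroid. This is a minimal textbook version of KM, not the paper's MCF code, and the names are illustrative:

```python
import numpy as np

def km_right_endpoint(x, lower, upper, iters=100):
    """Karnik-Mendel iteration for the right end-point y_r of the
    centroid of an interval type-2 fuzzy set sampled at points x with
    lower/upper membership grades.  (The left end-point swaps the roles
    of the lower and upper grades around the switch point.)"""
    x = np.asarray(x, dtype=float)
    lo = np.asarray(lower, dtype=float)
    up = np.asarray(upper, dtype=float)
    theta = (lo + up) / 2.0                       # initial weights
    y = (x * theta).sum() / theta.sum()
    for _ in range(iters):
        # to maximise the weighted average, use lower grades left of y
        # and upper grades right of y
        theta = np.where(x <= y, lo, up)
        y_new = (x * theta).sum() / theta.sum()
        if np.isclose(y_new, y):
            return y_new
        y = y_new
    return y
```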

Journal ArticleDOI
TL;DR: This paper proposes a method that, exploiting the knowledge provided by background ontologies (like WordNet), is able to construct the centroid of multivariate datasets described by means of textual attributes, able to provide optimal centroids according to the exploited background ontology and a semantic similarity measure.
Abstract: Centroids are key components in many data analysis algorithms such as clustering or microaggregation. They are considered as the central value that minimises the distance to all the objects in a dataset or cluster. Methods for centroid construction are mainly devoted to datasets with numerical and categorical attributes, focusing on the numerical and distributional properties of data. Textual attributes, on the contrary, consist of term lists referring to concepts with a specific semantic content (i.e., meaning), which cannot be evaluated by means of classical numerical operators. Hence, the centroid of a dataset with textual attributes should be the term that minimises the semantic distance against the members of the set. Semantically-grounded methods aiming to construct centroids for datasets with textual attributes are scarce and, as it will be discussed in this paper, they are hampered by their limited semantic analysis of data. In this paper, we propose a method that, exploiting the knowledge provided by background ontologies (like WordNet), is able to construct the centroid of multivariate datasets described by means of textual attributes. Special efforts have been put in the minimisation of the semantic distance between the centroid and the input data. As a result, our method is able to provide optimal centroids (i.e., those that minimise the distance to all the objects in the dataset) according to the exploited background ontology and a semantic similarity measure. Our proposal has been evaluated by means of a real dataset consisting on short textual answers provided by visitors of a natural park. Results show that our centroids retain the semantic content of the input data better than related works.

Journal ArticleDOI
Changsoo Je1, Min Tang1, Youngeun Lee1, Minkyoung Lee1, Young J. Kim1 
TL;DR: A real-time algorithm that finds the Penetration Depth between general polygonal models based on iterative and local optimization techniques and solves the Linear Complementarity Problem (LCP) using a type of Gauss-Seidel iterative algorithm.
Abstract: We present a real-time algorithm that finds the Penetration Depth (PD) between general polygonal models based on iterative and local optimization techniques. Given an in-collision configuration of an object in configuration space, we find an initial collision-free configuration using several methods such as centroid difference, maximally clear configuration, motion coherence, random configuration, and sampling-based search. We project this configuration on to a local contact space using a variant of continuous collision detection algorithm and construct a linear convex cone around the projected configuration. We then formulate a new projection of the in-collision configuration onto the convex cone as a Linear Complementarity Problem (LCP), which we solve using a type of Gauss-Seidel iterative algorithm. We repeat this procedure until a locally optimal PD is obtained. Our algorithm can process complicated models consisting of tens of thousands triangles at interactive rates.

Posted Content
TL;DR: This work theoretically investigate major existing methods of partitional clustering, and alternatively propose a well-founded approach to clustering uncertain data based on a novel notion of cluster centroid, which allows for better representing a cluster of uncertain objects.
Abstract: Clustering uncertain data has emerged as a challenging task in uncertain data management and mining. Thanks to a computational complexity advantage over other clustering paradigms, partitional clustering has been particularly studied and a number of algorithms have been developed. While existing proposals differ mainly in the notions of cluster centroid and clustering objective function, little attention has been given to an analysis of their characteristics and limits. In this work, we theoretically investigate major existing methods of partitional clustering, and alternatively propose a well-founded approach to clustering uncertain data based on a novel notion of cluster centroid. A cluster centroid is seen as an uncertain object defined in terms of a random variable whose realizations are derived based on all deterministic representations of the objects to be clustered. As demonstrated theoretically and experimentally, this allows for better representing a cluster of uncertain objects, thus supporting a consistently improved clustering performance while maintaining comparable efficiency with existing partitional clustering algorithms.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed method outperforms weighted centroid and multilateration methods using single mobile beacon, and can localize all the sensor nodes with appropriate parameters.

Journal ArticleDOI
TL;DR: The proposed approach introduces many floating centroids, which are spread throughout the partition space and obtained by using K-Means algorithm, which has favorable performance especially with respect to the training accuracy, generalization accuracy, and average F-measures.
Abstract: This paper presents a novel technique, the Floating Centroids Method (FCM), designed to improve the performance of a conventional neural network classifier. The partition space is the space used to categorize data samples after they are mapped by the neural network. In the partition space, a centroid is a point that denotes the center of a class. In a conventional neural network classifier, the positions of centroids and the relationship between centroids and classes are set manually. In addition, the number of centroids is fixed with reference to the number of classes. The proposed approach introduces many floating centroids, which are spread throughout the partition space and obtained by using the K-Means algorithm. Moreover, class labels are attached to these centroids automatically. A sample is predicted as a certain class if the closest centroid of its corresponding mapped point is labeled with this class. Experimental results illustrate that the proposed method has favorable performance, especially with respect to training accuracy, generalization accuracy, and average F-measures.
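The centroid-labelling idea, minus the neural-network mapping stage, can be sketched as follows. Assume the K-Means centroids are already computed; the majority-vote labelling used here is a simplification and an assumption, not the paper's exact procedure:

```python
import numpy as np

def label_centroids(points, labels, centroids):
    """Attach a class label to each centroid by majority vote of the
    training samples assigned (nearest) to it."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    centroids = np.asarray(centroids, dtype=float)
    assign = np.argmin(((points[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    centroid_label = {}
    for j in range(len(centroids)):
        members = labels[assign == j]
        if members.size:
            vals, counts = np.unique(members, return_counts=True)
            centroid_label[j] = vals[np.argmax(counts)]
    return centroid_label

def predict(x, centroids, centroid_label):
    """Classify x by the label of its closest centroid."""
    j = int(np.argmin(((np.asarray(centroids, dtype=float) - x) ** 2).sum(axis=1)))
    return centroid_label[j]
```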

Journal ArticleDOI
08 May 2012-PLOS ONE
TL;DR: This work proposes a new method that can directly extract nuclei centroids from fluorescence microscopy images and reveals a promising achievement of the technique presented in terms of average sensitivity and precision.
Abstract: Accurate identification of cell nuclei and their tracking using three dimensional (3D) microscopic images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish due to low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, which sharply contrasts with 2D analysis, where many methods already exist. In addition, most methods essentially adopt segmentation, for which a reliable solution is still unknown, especially for 3D bio-images having juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. This method involves three steps: (i) Pre-processing, (ii) Local enhancement, and (iii) Centroid extraction. The first step includes two variations: the first variation (Variant-1) uses the whole 3D pre-processed image, whereas the second one (Variant-2) reduces the pre-processed image to the candidate regions or the candidate hybrid image for further processing. At the second step, multiscale cube filtering is employed in order to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 processing removes spurious centroids from the Stage-1 results by analyzing the shapes of intensity profiles from the enhanced image. An iterative procedure based on the nearest neighborhood principle is then proposed to merge fragmented nuclei. Both qualitative and quantitative analyses on a set of 100 images of 3D mouse embryo are performed.
Investigations reveal a promising achievement of the technique presented in terms of average sensitivity and precision (i.e., 88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2), when compared with an existing method (86.06% and 90.11%), originally developed for analyzing C. elegans images.

Journal ArticleDOI
TL;DR: An improved version of the CF algorithm is introduced, which is called enhanced CF algorithm, that reduces such accumulative errors by half and, therefore, greatly improves the computational accuracy.
Abstract: Recently, a centroid-flow (CF) algorithm has been proposed to compute the centroid of a type-2 fuzzy set A. This algorithm utilizes the Karnik-Mendel (KM) or the enhanced KM (EKM) algorithm only at the α = 0 level of A and then lets its result "flow" upward to the α = 1 level of A. It avoids having to apply the KM/EKM algorithms at every α-level, which significantly improves its computational efficiency; however, the CF algorithm's approximation errors gradually accumulate as the algorithm "flows" upward, and in some cases this can cause the centroid at the α = 1 level of A to differ from its theoretical value. This paper introduces an improved version of the CF algorithm, called the enhanced CF algorithm, that reduces such accumulative errors by half and, therefore, greatly improves the computational accuracy.

Journal ArticleDOI
TL;DR: An anchorless distributed technique for estimating the centroid of a network of agents from noisy relative measurements and shows that such a centroid-based representation produces results that are more accurate than anchor-based ones, irrespective of the selected anchor.

Journal ArticleDOI
Chandan Singh1, Pooja1
TL;DR: An effective descriptor based on angular radial transform (ART) and polar Hough transform (PHT) and the combination of both global and local features yields improved retrieval accuracy, which is analyzed over various sorts of image databases.
Abstract: Retrieval efficiency and accuracy are the important issues in designing content based image retrieval system. Thus, in this paper, we propose an effective descriptor based on angular radial transform (ART) and polar Hough transform (PHT), which is capable of fulfilling the above requirements. ART is used as a region based shape descriptor, which represents the global aspects of an image. PHT is used as a local shape descriptor for detecting linear edges in an edge image. The detected linear edges represent the association among adjacent edge points. The perpendicular distance of each linear edge to the centroid of the edge image is computed to build histograms, which exhibit rotation, scale and translation invariant properties. The combination of both global (ART based) and local (PHT based) features yields improved retrieval accuracy, which is analyzed over various sorts of image databases. An extensive set of experiments witness the superiority of the proposed hybrid system over other major contour based, region based and hybrid approaches.

Journal ArticleDOI
TL;DR: In this article, the source, significance and properties of such centroid origin shift and the characteristics of the resultant shifted quantum similarity matrices are discussed in deep, although all the procedures in this work are described by means of a quantum similarity theoretical background, based on QOS structure within the space of molecules.
Abstract: Shifting the origin of a known quantum object set (QOS) or of some discrete molecular point cloud (MPC) by choosing the centroid of such sets provides a way to produce quantum similarity matrices (SM) and tensors, which can be systematically referred to a canonical origin, whatever their nature, dimension or cardinality. In this paper the source, significance and properties of such a centroid origin shift and the characteristics of the resultant shifted SM are discussed in depth. From such an analysis some interesting applications emerge, for instance a new collection of MPC ordering possibilities. In addition, although all the procedures in this work are described by means of a quantum similarity theoretical background, based on the QOS structure within the space of molecules, everything can also be easily implemented in a classical QSPR framework made of molecular numerical images attached to discrete molecular vectors, constructed with well-defined descriptor parameters.

Journal ArticleDOI
TL;DR: In this article, a tuning method for stabilizing PI controllers that utilizes the stability region centroid in the controller parameter space is presented. But, it does not rely on predetermined information with regard to the nature or range of parameter v...
Abstract: In this paper we offer a tuning method for the design of stabilizing PI controllers that utilizes the stability region centroid in the controller parameter space. To this end, analytical formulas are derived to describe the stability boundaries of a class of relative-degree-one linear time invariant second-order systems, the stability region of which has a closed convex shape. The so-called centroid stable point is then calculated analytically and the resultant set of algebraic formulas are utilized to tune the controller parameters. The freedom to choose the surface density function in the calculation of centroid stable point provides the designer with the possibility to incorporate optimal or robustness requirements in the controller design process. The proposed method uses the stability regions in the controller parameter space to ensure closed-loop stability, and, while offering robust stability properties, it does not rely on predetermined information with regard to the nature or range of parameter v...

Journal ArticleDOI
TL;DR: A ranking method for ordering fuzzy numbers based on area, mode, spreads and weights of generalized fuzzy numbers is described, which can rank various types of fuzzy numbers and also crisp numbers which are considered to be a special case of fuzzyNumbers.
Abstract: This paper describes a ranking method for ordering fuzzy numbers based on the area, mode, spreads and weights of generalized fuzzy numbers. The area used in this method is obtained from the generalized trapezoidal fuzzy number: the number is first split into three plane figures, the centroid of each figure is calculated, the centroid of these centroids is then found, and finally the area determined by this centroid and the origin is computed, which is the defuzzification process proposed in this paper. This method is simple to evaluate and can rank various types of fuzzy numbers, as well as crisp numbers, which are considered a special case of fuzzy numbers.
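Under the splitting described above, the score of a generalized trapezoidal fuzzy number (a, b, c, d; w) can be sketched as follows. Taking x0*y0 as the final "area from the origin" is a common convention in centroid-of-centroids ranking and an assumption here, not a formula verified against the paper:

```python
def rank_index(a, b, c, d, w=1.0):
    """Centroid-of-centroids ranking score for a generalized
    trapezoidal fuzzy number (a, b, c, d; w): split into a left
    triangle, a middle rectangle and a right triangle, average the
    three centroids, then take the rectangle area x0*y0 from the origin."""
    g1 = ((a + 2 * b) / 3.0, w / 3.0)      # centroid of the left triangle
    g2 = ((b + c) / 2.0, w / 2.0)          # centroid of the middle rectangle
    g3 = ((2 * c + d) / 3.0, w / 3.0)      # centroid of the right triangle
    x0 = (g1[0] + g2[0] + g3[0]) / 3.0     # centroid of the three centroids
    y0 = (g1[1] + g2[1] + g3[1]) / 3.0
    return x0 * y0
```

For a crisp number k = (k, k, k, k; 1) the score reduces to 7k/18, so crisp numbers rank in their natural order, consistent with the special-case claim in the abstract.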

Journal ArticleDOI
TL;DR: It is proved theoretically that the shape features from the modified Fourier descriptor are invariant to translation, rotation, scaling, and change of start point, and it is verified by measuring retrieval performance that they are more discriminative than those from other Fourier descriptors.
Abstract: A modified Fourier descriptor is presented that uses information from a local space more efficiently. After the boundary pixel set of an object is computed, the centroid distance approach is used to compute a shape signature in the local space. A pair consisting of the shape signature and the boundary pixel gray value is used as a point in a feature space. The Fourier transform is then applied to compose the point information in the feature space so that the shape features can be computed. It is proved theoretically that the shape features from the modified Fourier descriptor are invariant to translation, rotation, scaling, and change of start point. It is also verified, by measuring the retrieval performance of the systems, that the shape features from the modified Fourier descriptor are more discriminative than those from other Fourier descriptors.
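The classical centroid-distance signature plus Fourier magnitudes, which is the core of such descriptors minus the paper's gray-value extension, can be sketched as:

```python
import numpy as np

def centroid_distance_fd(boundary, n_coeffs=16):
    """Centroid-distance Fourier descriptor: the signature r(t) is the
    distance of each boundary pixel to the shape centroid; keeping FFT
    magnitudes removes start-point and rotation phase, and dividing by
    the DC term normalises scale."""
    p = np.asarray(boundary, dtype=float)
    r = np.linalg.norm(p - p.mean(axis=0), axis=1)  # translation-invariant signature
    F = np.abs(np.fft.fft(r))                       # |.| kills start-point phase
    return F[1:n_coeffs + 1] / F[0]                 # scale-normalised coefficients
```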

Journal ArticleDOI
TL;DR: A novel method based on combination of the feature space and the visual perception theory to achieve an accurate and robust classification of ISAR images is proposed and results show a significant improvement on recognition accuracy and robustness.
Abstract: The problem of target classification using inverse synthetic aperture radar (ISAR) images is studied under conditions of mass data processing, sparse scattering centre distribution, image deterioration and variation with the radar imaging view, all of which make target classification difficult. In this study, the authors propose a novel method based on combination of the feature space and the visual perception theory to achieve an accurate and robust classification of ISAR images. In order to make full use of local spatial structure information for classification, the local non-negative matrix factorisation (LNMF) is employed to construct an initial feature space, which is then optimised to calculate more discriminable feature projection vectors of each target. The approaches including speckle noise and stripes suppression, centroid and scale normalisation, LNMF, feature space optimisation with the maximum intersubject variation and minimum intrasubject variation and feature projection vectors calculation are detailed. Finally, the classification is performed with a k neighbours classifier. ISAR images used are obtained by range–Doppler imaging method with radar echoes of aircraft models generated by RadBase. Simulation results show a significant improvement on recognition accuracy and robustness of the proposed method.

Journal ArticleDOI
TL;DR: In this article, the authors propose an approach for determining the centroid of a star spot based on deep coupling of MEMS-Gyro data; it achieves deep fusion of star tracker and MEMS-Gyro data at the star map level through the introduction of an EKF.
Abstract: The traditional star tracker approach for determining the centroid of a spot requires sufficient energy and a good spot shape, so a relatively long exposure time and a stable three-axis state become necessary conditions for maintaining high accuracy; these limit its update rate and dynamic performance. In view of these issues, this paper presents an approach for determining the centroid of a star spot based on deep coupling of MEMS-Gyro data, which achieves deep fusion of the star tracker and MEMS-Gyro data at the star map level through the introduction of an EKF. The trajectory predicted using the angular velocities of the three axes can be used to set the extraction window, which enhances dynamic performance by enabling accurate extraction when the satellite has angular speed. The optimal estimates of the centroid position and of the drift in the MEMS-Gyro output signal obtained with this approach reduce the influence of detector noise on the accuracy of the traditional centroiding approach and effectively correct the MEMS-Gyro output signal. At the end of this paper, the feasibility of this approach is verified by simulation.

Proceedings ArticleDOI
01 Mar 2012
TL;DR: This paper presents range-free weighted centroid localization for 3D WSN using Mamdani and Sugeno fuzzy inference systems, and compares the weighted centroid technique with the simple centroid through extensive simulation.
Abstract: Localization in WSN is used to determine the position of a node with the help of the known positions of anchor nodes. In many applications where coarse accuracy is sufficient, range-free localization mechanisms are pursued as an alternative to range-based mechanisms, because range-free localization schemes are low cost and consume little energy. In this paper, we present range-free weighted centroid localization for 3D WSN using the Mamdani and Sugeno fuzzy inference systems. First, anchor nodes are connected to the unknown (sensor) nodes to be localized; then the edge weights of the anchor nodes are calculated from received signal strength indicator (RSSI) information using the Mamdani and Sugeno inference systems. We compare the weighted centroid technique with the simple centroid through extensive simulation. The simulation results demonstrate the effectiveness of the 3D weighted centroid scheme.
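A minimal sketch of the RSSI-weighted centroid idea follows. The simple inverse-path-loss weight used here is a placeholder assumption; the paper derives its edge weights from Mamdani/Sugeno fuzzy inference instead.

```python
def weighted_centroid(anchors, rssi):
    """Range-free weighted centroid: anchors with stronger RSSI pull harder.

    `anchors` -- list of (x, y, z) anchor positions
    `rssi`    -- RSSI in dBm from each anchor (more negative = farther)
    The 10**(r/20) weight is an illustrative stand-in for fuzzy weights.
    """
    weights = [10 ** (r / 20.0) for r in rssi]
    total = sum(weights)
    x = sum(w * a[0] for w, a in zip(weights, anchors)) / total
    y = sum(w * a[1] for w, a in zip(weights, anchors)) / total
    z = sum(w * a[2] for w, a in zip(weights, anchors)) / total
    return x, y, z

anchors = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
rssi = [-40, -60, -60, -60]     # node closest to the first anchor
pos = weighted_centroid(anchors, rssi)
```

With uniform weights this degenerates to the simple centroid (2.5, 2.5, 2.5); the RSSI weighting pulls the estimate toward the nearest anchor, which is exactly the improvement the paper quantifies.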

Journal ArticleDOI
Tie Qiu1, Yu Zhou1, Feng Xia1, Naigao Jin1, Lin Feng1 
TL;DR: The N‐times trilateral centroid weighted localization algorithm, which can reduce the error considerably, is presented, which uses the weighted average of many RSSIs as current RSSI to improve the accuracy.
Abstract: Localization based on received signal strength indication (RSSI) is a low cost and low complexity technology, and it is widely applied in distance-based localization of wireless sensor networks. The error of existing localization technologies is significant. This paper presents the N-times trilateral centroid weighted localization algorithm, which can reduce the error considerably. Considering the instability of RSSI, we use the weighted average of many RSSIs as current RSSI. To improve the accuracy, we select a number of (no less than three) reliable beacon nodes to increase the localization times. Then we calculate the distances between reliable beacon nodes and the mobile node using an empirical formula. The mobile node is located N times using the trilateral centroid algorithm. Finally, we take the weighted average of the filtered reference coordinates as the mobile node's coordinates. We conduct experiments with the STM32W108 chip, which supports IEEE 802.15.4. The results show that the proposed algorithm performs better than the trilateral centroid algorithm. Copyright © 2012 John Wiley & Sons, Ltd.
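The N-times idea, locating the node once per triple of reliable beacons and then averaging the fixes, can be sketched as below. Uniform averaging stands in for the paper's reliability weighting, and the beacon distances are assumed to have already been estimated from filtered RSSI via the empirical formula.

```python
import itertools

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve the linearized 2-D trilateration equations for one beacon triple."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d                      # Cramer's rule for the 2x2 system
    return ((c * e - b * f) / det, (a * f - c * d) / det)

def n_times_centroid(beacons, dists):
    """Average the fixes from every beacon triple (uniform weights here;
    the paper weights each fix and filters the reference coordinates)."""
    fixes = [trilaterate(*[beacons[i] for i in tri], *[dists[i] for i in tri])
             for tri in itertools.combinations(range(len(beacons)), 3)]
    n = len(fixes)
    return (sum(f[0] for f in fixes) / n, sum(f[1] for f in fixes) / n)

beacons = [(0, 0), (10, 0), (0, 10), (10, 10)]
true = (4.0, 3.0)
dists = [((true[0] - bx)**2 + (true[1] - by)**2) ** 0.5 for bx, by in beacons]
est = n_times_centroid(beacons, dists)
```

With noisy RSSI-derived distances each triple gives a slightly different fix, and averaging N fixes is what suppresses the per-measurement error.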

Proceedings ArticleDOI
21 Apr 2012
TL;DR: An improved DV-Hop location algorithm is presented that reduces the accumulated error in the average hop distance of unknown nodes in wireless sensor networks by using a weighted-average method to calculate the average hop distance in the second stage.
Abstract: The localization of unknown nodes is essential for many applications in wireless sensor networks (WSN), but the traditional DV-Hop algorithm does not localize nodes with high accuracy. In this paper, we propose an improved DV-Hop location algorithm that reduces the accumulated error in the average hop distance of the unknown nodes. We analyse the DV-Hop algorithm in detail and introduce a weighted-average method for calculating the average hop distance in the second stage. Then an estimated region based on two beacon nodes is built, and the centroid of that region is taken as the coordinates of the unknown node. Simulation experiments comparing the traditional and improved DV-Hop algorithms show that the proposed algorithm achieves better localization accuracy for the same or different numbers of sensor nodes and communication radii.
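The second-stage weighted averaging can be sketched as below. The 1/hops weighting (nearer beacons count more, since their hop-size estimate accumulates less error) is an illustrative choice; the paper's exact weight formula may differ.

```python
def dv_hop_distances(hop_sizes, hop_counts):
    """Improved DV-Hop second stage: weighted average of per-beacon hop sizes.

    `hop_sizes[i]`  -- average distance per hop computed at beacon i (stage 2)
    `hop_counts[i]` -- hop count from the unknown node to beacon i (stage 1)
    Returns the estimated distance to each beacon (used in stage 3).
    """
    weights = [1.0 / h for h in hop_counts]      # illustrative weighting
    total = sum(weights)
    avg_hop = sum(w * s for w, s in zip(weights, hop_sizes)) / total
    return [avg_hop * h for h in hop_counts]

# Three beacons: per-hop sizes 9, 10, 11 m, reached in 2, 4 and 8 hops
dists = dv_hop_distances([9.0, 10.0, 11.0], [2, 4, 8])
```

In the plain algorithm the unknown node simply adopts the hop size of its nearest beacon; blending all beacons' hop sizes is what reduces the accumulated error before the centroid step.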

Journal ArticleDOI
TL;DR: A numerical procedure to obtain the centroid IRC from first principles is provided by combining ab initio path integral simulation with the string method, and it is found that, in the intramolecular proton transfer of NH3, the free energy barrier for the centroid variables decreases by about 20% compared to the classical one at room temperature.
Abstract: We propose a generalization of the intrinsic reaction coordinate (IRC) for quantum many-body systems described in terms of the mass-weighted ring polymer centroids in the imaginary-time path integral theory. This novel kind of reaction coordinate, which may be called the "centroid IRC," corresponds to the minimum free energy path connecting reactant and product states with the least amount of reversible work applied to the centers of mass of the quantum nuclei, i.e., the centroids. We provide a numerical procedure to obtain the centroid IRC from first principles by combining ab initio path integral simulation with the string method. This approach is applied to the NH3 molecule and the N2H5− ion, as well as their deuterated isotopomers, to study the importance of nuclear quantum effects in intramolecular and intermolecular proton transfer reactions. We find that, in the intramolecular proton transfer (inversion) of NH3, the free energy barrier for the centroid variables decreases by about 20% compared to the classical one at room temperature.
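The centroid variable itself is simple: for each nucleus it is the mean of the ring-polymer bead positions over imaginary time. A toy one-dimensional sketch (the bead distribution here is an invented stand-in for a sampled thermal path, not output of a path integral simulation):

```python
import math

def bead_positions(n_beads, amplitude=0.1, centre=1.5):
    """Toy ring polymer: beads spread around `centre` by a made-up
    sinusoidal imaginary-time fluctuation (illustrative only)."""
    return [centre + amplitude * math.sin(2 * math.pi * k / n_beads)
            for k in range(n_beads)]

def centroid(beads):
    """The path-integral centroid is the mean over imaginary-time beads."""
    return sum(beads) / len(beads)

beads = bead_positions(16)
c = centroid(beads)
```

The centroid IRC then traces the minimum free energy path in the space of these centroid coordinates, rather than in the classical nuclear coordinates.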