
Showing papers on "Centroid" published in 2021


Journal ArticleDOI
TL;DR: In this article, the authors proposed novel distance measures for the intuitionistic fuzzy set (IFS) to discuss the decision-making problems, which are based on four different notions of centers, namely centroid, orthocenter, circumcenter and incenter of the triangle.
Abstract: The paper aims at introducing novel distance measures for the intuitionistic fuzzy set (IFS) to discuss the decision-making problems. The current work exploits four different notions of centers, namely centroid, orthocenter, circumcenter and incenter of the triangle. First, we mold knowledge embedded in IFSs into isosceles TFN (triangular fuzzy number). Hence, based on these TFNs, we design four-novel distance/similarity measures for IFSs using the structures of the four aforementioned centers and inspect their properties. To avoid the loss of information during the conversion of IFSs into isosceles TFNs, we included the degree of hesitation (t) between the pairs of the membership function in the process. The compensations and authentication of the proposed measures are established with diverse counter-intuitive patterns and decision-making obstacles. Further, a clustering algorithm is also given to match the objects based on confidence levels. The performed analysis shows that the proposed measures give distinguishable and compatible results as contrasted to existing ones.

40 citations
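
As a rough, self-contained illustration of the centroid-based idea (using one plausible IFS-to-isosceles-TFN mapping, not necessarily the paper's exact construction), the following Python sketch compares two intuitionistic fuzzy values by the Euclidean distance between the centroids of their associated triangles:

```python
import math

def ifs_to_tfn(mu, nu):
    """Map an intuitionistic fuzzy value (mu, nu) to an isosceles triangular
    fuzzy number (a, b, c). Illustrative mapping, not necessarily the paper's."""
    pi = 1.0 - mu - nu            # hesitation degree
    a = mu                        # left endpoint of the support
    c = mu + pi                   # right endpoint (= 1 - nu); support width equals pi
    b = (a + c) / 2.0             # peak midway -> isosceles triangle
    return a, b, c

def tfn_centroid(a, b, c, height=1.0):
    """Centroid of the triangle with vertices (a, 0), (b, height), (c, 0)."""
    return (a + b + c) / 3.0, height / 3.0

def centroid_distance(ifs1, ifs2):
    """Distance between two IFS values as the Euclidean distance
    between the centroids of their associated triangles."""
    x1, y1 = tfn_centroid(*ifs_to_tfn(*ifs1))
    x2, y2 = tfn_centroid(*ifs_to_tfn(*ifs2))
    return math.hypot(x1 - x2, y1 - y2)

print(centroid_distance((0.6, 0.2), (0.3, 0.5)))   # ~0.3 for these two values
```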


Journal ArticleDOI
TL;DR: A robust learning-based method using a single convolution neural network for analyzing particle shadow images using a two-channel-output U-net model to generate a binary particle image and a particle centroid image.
Abstract: Conventional image processing for particle shadow images is usually time-consuming and suffers from degraded image segmentation when dealing with images consisting of complex-shaped and clustered particles with varying backgrounds. In this paper, we introduce a robust learning-based method using a single convolutional neural network (CNN) for analyzing particle shadow images. Our approach employs a two-channel-output U-net model to generate a binary particle image and a particle centroid image. The binary particle image is subsequently segmented through a marker-controlled watershed approach with the particle centroid image as the marker image. The assessment of this method on both synthetic and experimental bubble images has shown better performance compared to the state-of-the-art non-machine-learning method. The proposed machine learning shadow image processing approach provides a promising tool for real-time particle image analysis.

27 citations
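
Independently of the CNN itself, the post-processing step described in the abstract (marker-controlled watershed seeded by the predicted centroid image) can be sketched with scikit-image; the two input arrays below stand in for the network's binary-mask and centroid-map outputs:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_particles(binary_mask, centroid_map, centroid_thresh=0.5):
    """Split touching particles in `binary_mask` using predicted centroid
    blobs as watershed markers (marker-controlled watershed post-processing)."""
    markers, _ = ndi.label(centroid_map > centroid_thresh)
    # Watershed on the inverted distance transform: basins grow outward
    # from each marker until they meet.
    distance = ndi.distance_transform_edt(binary_mask)
    return watershed(-distance, markers, mask=binary_mask.astype(bool))

# toy example: two touching rectangular blobs with separate centroid peaks
mask = np.zeros((64, 64))
mask[20:45, 15:35] = 1
mask[20:45, 30:50] = 1
cmap = np.zeros((64, 64))
cmap[32, 24] = 1.0
cmap[32, 41] = 1.0
print(np.unique(segment_particles(mask, cmap)))   # [0 1 2]: background + two particles
```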


Journal ArticleDOI
TL;DR: In this paper, a single-pixel imaging technique that measures zero-order and first-order geometric moments, which are leveraged to reconstruct and track the centroid of a fast-moving object in real time is presented.
Abstract: Real-time tracking of fast-moving objects has many important applications in various fields. However, it is a great challenge to track a fast-moving object at a high frame rate in real time with a single-pixel imaging technique. In this paper, we present the first single-pixel imaging technique that measures zero-order and first-order geometric moments, which are leveraged to reconstruct and track the centroid of a fast-moving object in real time. This method requires only 3 geometric moment patterns to illuminate a moving object in one frame, and the corresponding intensities collected by a single-pixel detector are equivalent to the values of the zero-order and first-order geometric moments. We apply this new approach of measuring geometric moments to object tracking by detecting the centroid of the object in two experiments. The root mean squared errors in the transverse and axial directions are 5.46 pixels and 5.53 pixels, respectively, according to a comparison with data captured by a camera system. In the second experiment, we successfully track a moving magnet with a frame rate up to 7400 Hz. The proposed scheme provides a new method for ultrafast target tracking applications.

23 citations
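
The centroid recovery relies on the standard relations x_c = M10/M00 and y_c = M01/M00. The sketch below simulates the three moment patterns and assumes the single-pixel intensities are exactly the pattern-weighted sums of the scene (an idealized, noise-free assumption):

```python
import numpy as np

def moment_patterns(h, w):
    """Illumination patterns whose single-pixel readings are proportional
    to the zero-order (M00) and first-order (M10, M01) moments."""
    y, x = np.mgrid[0:h, 0:w]
    return np.ones((h, w)), x / (w - 1), y / (h - 1)

def centroid_from_measurements(m00, m10, m01, h, w):
    """Object centroid (x_c, y_c) from the three moment measurements."""
    return (w - 1) * m10 / m00, (h - 1) * m01 / m00

# simulate one frame: a small bright object on a dark background
scene = np.zeros((128, 128))
scene[40:50, 90:100] = 1.0
p0, px, py = moment_patterns(*scene.shape)
m00, m10, m01 = (np.sum(scene * p) for p in (p0, px, py))
print(centroid_from_measurements(m00, m10, m01, *scene.shape))   # ~(94.5, 44.5)
```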


Journal ArticleDOI
TL;DR: A new cluster validity index, Saraswat-and-Mittal index, has been proposed in this article for hyperellipsoid or hyperspherical shape close clusters with distant centroids, generated by fuzzy c-means, and validated against ten state-of-the-art cluster validity indices.
Abstract: Determining the correct number of clusters is essential for efficient clustering and cluster validity indices are widely used for the same. Generally, the effectiveness of a cluster validity index relies on two factors: first, separation, defined by the distance between a pair of cluster centroids or a pair of data points belonging to different clusters and second, compactness, which is determined in terms of the distance between a data point and a centroid or between a pair of data points belonging to the same cluster. However, the existing cluster validity indices for centroid-based clustering are unreliable when the clusters are too close, but corresponding centroids are distant. To mitigate this, a new cluster validity index, Saraswat-and-Mittal index, has been proposed in this article for hyperellipsoid or hyperspherical shape close clusters with distant centroids, generated by fuzzy c-means. The proposed index computes compactness in terms of the distance between data points and corresponding centroids, whereas the distance between data points of disjoint clusters defines separation. These parameters benefit the proposed index in the analysis of close clusters with distinct centroids efficiently. The performance of the proposed index is validated against ten state-of-the-art cluster validity indices on artificial, UCI, and image datasets, clustered by the fuzzy c-means.

22 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a k-centroid link method, which considers the effect of the objects around cluster centers to provide a better solution than the traditional linkage methods, such as single link, complete link, average link, mean link, centroid link and Ward method.
Abstract: In hierarchical clustering, the most important factor is the selection of the linkage method which is the decision of how the distances between clusters will be calculated. It extremely affects not only the clustering quality but also the efficiency of the algorithm. However, the traditional linkage methods do not consider the effect of the objects around cluster centers. Based on this motivation, in this article, we propose a novel linkage method, named k-centroid link, in order to provide a better solution than the traditional linkage methods. In the proposed k-centroid link method, the dissimilarity between two clusters is mainly defined as the average distance between all pairs of k data objects in each cluster, which are the k closest ones to the centroid of each cluster. In the experimental studies, the proposed method was tested on 24 different publicly available benchmark datasets. The results demonstrate that by hierarchical clustering via the k-centroid link method, it is possible to obtain better performance in terms of clustering quality compared to the conventional linkage methods such as single link, complete link, average link, mean link, centroid link, and the Ward method.

21 citations
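
Reading the definition directly, the k-centroid link between two clusters is the average pairwise distance between the k members of each cluster lying closest to that cluster's own centroid. A minimal NumPy sketch of that dissimilarity (not the authors' implementation):

```python
import numpy as np

def k_centroid_link(cluster_a, cluster_b, k=3):
    """Dissimilarity between two clusters: the average distance over all pairs
    formed by the k points of each cluster nearest to its own centroid."""
    def k_core(points):
        centroid = points.mean(axis=0)
        idx = np.argsort(np.linalg.norm(points - centroid, axis=1))[:k]
        return points[idx]

    core_a = k_core(np.asarray(cluster_a, dtype=float))
    core_b = k_core(np.asarray(cluster_b, dtype=float))
    # all pairwise distances between the two "core" point sets
    diffs = core_a[:, None, :] - core_b[None, :, :]
    return np.linalg.norm(diffs, axis=-1).mean()

a = np.random.default_rng(0).normal(0.0, 1.0, size=(50, 2))
b = np.random.default_rng(1).normal(5.0, 1.0, size=(50, 2))
print(k_centroid_link(a, b, k=5))   # roughly the inter-centroid distance
```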


Journal ArticleDOI
Yang Chen, Guanlan Liu, Yaming Xu, Pai Pan, Yin Xing 
TL;DR: In this article, the authors proposed a modified PointNet++ network architecture which concentrates the point-level and global features on the centroid point towards the local features to facilitate classification.
Abstract: Airborne laser scanning (ALS) point cloud has been widely used in the fields of ground powerline surveying, forest monitoring, urban modeling, and so on because of the great convenience it brings to people’s daily life. However, the sparsity and uneven distribution of point clouds increases the difficulty of setting uniform parameters for semantic classification. The PointNet++ network is an end-to-end learning network for irregular point data and highly robust to small perturbations of input points along with corruption. It eliminates the need to calculate costly handcrafted features and provides a new paradigm for 3D understanding. However, each local region in the output is abstracted by its centroid and local feature that encodes the centroid’s neighborhood. The feature learned on the centroid point may not contain relevant information of itself for random sampling, especially in large-scale neighborhood balls. Moreover, the centroid point’s global-level information in each sample layer is also not marked. Therefore, this study proposed a modified PointNet++ network architecture which concentrates the point-level and global features on the centroid point towards the local features to facilitate classification. The proposed approach also utilizes a modified Focal Loss function to solve the extremely uneven category distribution on ALS point clouds. An elevation- and distance-based interpolation method is also proposed for the objects in ALS point clouds which exhibit discrepancies in elevation distributions. The experiments on the Vaihingen dataset of the International Society for Photogrammetry and Remote Sensing and the GML(B) 3D dataset demonstrate that the proposed method which provides additional contextual information to support classification achieves high accuracy with simple discriminative models and new state-of-the-art performance in power line categories.

21 citations


Posted Content
TL;DR: In this article, the mean centroid representation is used both during training and retrieval to reduce the intra-class variance caused by changes in view angle, lighting, background clutter or occlusion.
Abstract: Image retrieval task consists of finding similar images to a query image from a set of gallery (database) images. Such systems are used in various applications e.g. person re-identification (ReID) or visual product search. Despite active development of retrieval models it still remains a challenging task mainly due to large intra-class variance caused by changes in view angle, lighting, background clutter or occlusion, while inter-class variance may be relatively low. A large portion of current research focuses on creating more robust features and modifying objective functions, usually based on Triplet Loss. Some works experiment with using centroid/proxy representation of a class to alleviate problems with computing speed and hard samples mining used with Triplet Loss. However, these approaches are used for training alone and discarded during the retrieval stage. In this paper we propose to use the mean centroid representation both during training and retrieval. Such an aggregated representation is more robust to outliers and assures more stable features. As each class is represented by a single embedding - the class centroid - both retrieval time and storage requirements are reduced significantly. Aggregating multiple embeddings results in a significant reduction of the search space due to lowering the number of candidate target vectors, which makes the method especially suitable for production deployments. Comprehensive experiments conducted on two ReID and Fashion Retrieval datasets demonstrate effectiveness of our method, which outperforms the current state-of-the-art. We propose centroid training and retrieval as a viable method for both Fashion Retrieval and ReID applications.

20 citations
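
The retrieval-time idea (represent each gallery class by the mean of its embeddings and rank queries against class centroids instead of individual images) can be sketched independently of any particular backbone; the embeddings below are random placeholders:

```python
import numpy as np

def build_centroids(embeddings, labels):
    """Mean-pool the gallery embeddings of each class into one centroid."""
    labels = np.asarray(labels)
    classes = sorted(set(labels.tolist()))
    centroids = [embeddings[labels == c].mean(axis=0) for c in classes]
    return np.stack(centroids), classes

def retrieve(query, centroids, classes):
    """Rank classes by cosine similarity between the query and the centroids."""
    q = query / np.linalg.norm(query)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    order = np.argsort(-(c @ q))
    return [classes[i] for i in order]

gallery = np.random.default_rng(0).normal(size=(300, 128))   # placeholder embeddings
labels = [i % 30 for i in range(300)]                        # 30 identities
centroids, classes = build_centroids(gallery, labels)
print(retrieve(gallery[7], centroids, classes)[0])           # identity ranked most similar to the query
```

Because each identity is reduced to a single centroid vector, the candidate set shrinks from one vector per gallery image to one per class, which is the source of the storage and retrieval-time savings described above.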


Journal ArticleDOI
TL;DR: The model construction and validation of the centroid deformation method are carried out on the world’s highest arch dam by evaluating its structural behavior and predicting its development trend; the method can provide strong technical support for better grasping the dam’s performance during long-term operation.

19 citations


Journal ArticleDOI
TL;DR: In this paper, a three-stage MST-based hierarchical clustering algorithm (CTCEHC) is proposed, in which a preliminary partition is performed with the degrees of vertices in MST and small subclusters are merged via the geodesic distance between the centroids of MST in two clusters and the cut edge constraint I.

17 citations


Journal ArticleDOI
TL;DR: This article presents a k-means clustering and weighted k-nearest neighbor regression-based algorithm for the protection of transmission lines, and validates the robustness of the algorithm for different fault parameters such as fault impedance and fault location.
Abstract: This article presents a k-means clustering and weighted k-nearest neighbor (k-NN) regression-based algorithm for the protection of a transmission line. Three-phase current signals of both the terminals are synchronized and sampled with a sampling frequency of 3.84 kHz. Cumulative differential sum (CDS) is computed by subtracting the samples of the current cycle from the previous cycle at both terminals of the transmission line. k-means clustering is applied on the CDS to compute two centroids using a moving window of width equal to one cycle. The difference between the absolute values of the centroids is computed at both terminals and represented by the centroid difference (CD). The CD of both terminals is added to compute the fault index. The computed fault index is used to detect and classify the types of faults. The location of the fault is estimated by the weighted k-NN regression method. Various case studies are performed to validate the robustness of the algorithm for different fault parameters such as fault impedance and fault location. The effect of noise is also considered to check the accuracy of the proposed algorithm in the noisy environment.

16 citations
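
A schematic version of the described signal chain (cumulative differential sum, two-centroid k-means over a one-cycle window, centroid difference, and the summed fault index) is sketched below; the sampling constants, the toy waveforms, and any threshold on the index are placeholders rather than values from the article:

```python
import numpy as np
from sklearn.cluster import KMeans

SAMPLES_PER_CYCLE = 64   # placeholder: 3.84 kHz sampling at 60 Hz gives 64 samples per cycle

def cds(current):
    """Cumulative differential sum: each sample minus the sample one cycle earlier."""
    return current[SAMPLES_PER_CYCLE:] - current[:-SAMPLES_PER_CYCLE]

def centroid_difference(window):
    """Gap between the absolute values of the two k-means centroids of one CDS window."""
    km = KMeans(n_clusters=2, n_init=10).fit(window.reshape(-1, 1))
    c1, c2 = np.abs(km.cluster_centers_.ravel())
    return abs(c1 - c2)

def fault_index(cds_terminal_1, cds_terminal_2):
    """Sum of the centroid differences computed at the two line terminals."""
    return centroid_difference(cds_terminal_1) + centroid_difference(cds_terminal_2)

rng = np.random.default_rng(0)
t = np.arange(4 * SAMPLES_PER_CYCLE)
healthy = np.sin(2 * np.pi * t / SAMPLES_PER_CYCLE) + 0.01 * rng.normal(size=t.size)
faulted = healthy.copy()
faulted[3 * SAMPLES_PER_CYCLE:] *= 5.0                     # crude fault in the last cycle

# one-cycle window straddling the fault inception (both "terminals" see the same toy signal)
w = slice(2 * SAMPLES_PER_CYCLE - SAMPLES_PER_CYCLE // 2,
          2 * SAMPLES_PER_CYCLE + SAMPLES_PER_CYCLE // 2)
print("healthy:", fault_index(cds(healthy)[w], cds(healthy)[w]))   # near zero
print("faulted:", fault_index(cds(faulted)[w], cds(faulted)[w]))   # clearly larger
```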


Proceedings ArticleDOI
21 May 2021
TL;DR: In this article, the authors improve localization accuracy with a two-stage mechanism: a conventional weighted centroid method that is then fine-tuned using propagation-model-based computation while minimizing the number of reference nodes.
Abstract: Localization of sensor nodes in wireless sensor networks is a significant issue in many respects. The location of sensor nodes can be identified through different methods, categorized as range-based and range-free. In this research work, we improve localization accuracy by using a two-stage mechanism. We use a range-based location identification method with the received signal strength indicator (RSSI) as the primary input. A conventional weighted centroid method is applied first and then fine-tuned using propagation-model-based computation while minimizing the number of reference nodes. We tested our approach using the Cooja simulator and observed that the method can identify locations with up to 97 percent accuracy.
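
A minimal sketch of the first stage (an RSSI-weighted centroid over the reference nodes) is given below; the path-loss constants used to turn RSSI into weights are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rssi_to_weight(rssi_dbm, path_loss_exponent=2.0, tx_power_dbm=-40.0):
    """Turn RSSI into a weight via a log-distance path-loss model:
    stronger (less negative) RSSI -> shorter estimated distance -> larger weight.
    The constants are illustrative placeholders."""
    distance = 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
    return 1.0 / distance

def weighted_centroid(anchor_positions, rssi_values):
    """Weighted centroid localization: the position estimate is the
    RSSI-weighted average of the reference (anchor) node positions."""
    weights = np.array([rssi_to_weight(r) for r in rssi_values])
    anchors = np.asarray(anchor_positions, dtype=float)
    return (weights[:, None] * anchors).sum(axis=0) / weights.sum()

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
rssi = [-55, -70, -62, -75]                     # dBm readings from the four anchors
print(weighted_centroid(anchors, rssi))         # estimate pulled toward the strongest anchor
```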

Journal ArticleDOI
TL;DR: It is shown how different graph representations can be developed for accurately summarizing first-person (egocentric) videos in a computationally efficient manner.

Journal ArticleDOI
TL;DR: In this paper, an improved Min-Max algorithm with area partition strategy (Min-Max-APS) is proposed to achieve better localization performance, in which the area of interest is first partitioned into four subareas, each of which contains a vertex of the original area and a minimum range difference criterion is designed to determine the target affiliated subarea whose vertex is "closest" to the target node.
Abstract: The Min-Max algorithm has been widely used as a simple received signal strength (RSS)-based algorithm for indoor localization due to its easy implementation. However, the original Min-Max algorithm only achieves coarse estimation in which the target node (TN) is regarded as the geometric centroid of the area of interest determined by measured RSS values. Although extended Min-Max (E-Min-Max) methods using weighted centroid instead of geometric centroid were recently proposed to cope with this problem, the improvement in the localization accuracy is still limited. In this paper, an improved Min-Max algorithm with area partition strategy (Min-Max-APS) is proposed to achieve better localization performance. In the proposed algorithm, the area of interest is first partitioned into four subareas, each of which contains a vertex of the original area of interest. Moreover, a minimum range difference criterion is designed to determine the target affiliated subarea whose vertex is “closest” to the target node. Then the target node’s location is estimated as the weighted centroid of the target affiliated subarea. Since the target affiliated subarea is smaller than the original area of interest, the weighted centroid of the target affiliated subarea will be more accurate than that of the original area of interest. Simulation results show that the localization error (LE) of the proposed Min-Max-APS algorithm can drop below 0.16 meters, which is less than one-half of that of the E-Min-Max algorithm, and is also less than one-seventh of that of the original Min-Max algorithm. Moreover, for the proposed Min-Max-APS, 90% of the LE are smaller than 0.38 meters, while the same percentage of the LE are as high as 0.49 meters for the E-Min-Max and 1.12 meters for the original Min-Max, respectively.
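
For reference, the baseline the paper improves on can be sketched compactly: the original Min-Max box is the intersection of per-anchor squares, and a weighted centroid then replaces the plain geometric centroid. The snippet below shows only this baseline flavor with a simple vertex weighting; the proposed area-partition strategy itself is not reproduced:

```python
import numpy as np

def min_max_box(anchors, ranges):
    """Original Min-Max: intersect the per-anchor squares
    [x_i - d_i, x_i + d_i] x [y_i - d_i, y_i + d_i]."""
    anchors, ranges = np.asarray(anchors, float), np.asarray(ranges, float)
    lower = (anchors - ranges[:, None]).max(axis=0)
    upper = (anchors + ranges[:, None]).min(axis=0)
    return lower, upper

def weighted_box_centroid(lower, upper, anchors, ranges):
    """Weight each box vertex by the inverse range of its nearest anchor
    (a simplified stand-in for a weighted-centroid step)."""
    vertices = np.array([[lower[0], lower[1]], [lower[0], upper[1]],
                         [upper[0], lower[1]], [upper[0], upper[1]]])
    anchors, ranges = np.asarray(anchors, float), np.asarray(ranges, float)
    d = np.linalg.norm(vertices[:, None, :] - anchors[None, :, :], axis=-1)
    w = 1.0 / ranges[d.argmin(axis=1)]
    return (w[:, None] * vertices).sum(axis=0) / w.sum()

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
ranges = [4.2, 7.1, 7.0, 9.3]        # RSS-derived distance estimates to a target near (3, 3)
lo, hi = min_max_box(anchors, ranges)
print(lo, hi, weighted_box_centroid(lo, hi, anchors, ranges))
```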

Journal ArticleDOI
TL;DR: In this article, a modification of the k-means algorithm is proposed based on Tukey's rule in conjunction with a new distance metric to improve the clustering accuracy and centroids convergence.

Journal ArticleDOI
TL;DR: This work presents a concept factorization with the local centroids (CFLCs) approach for data clustering that is formulated as a bipartite graph partitioning task, and an efficient algorithm is designed for optimization.
Abstract: Data clustering is a fundamental problem in the field of machine learning. Among the numerous clustering techniques, matrix factorization-based methods have achieved impressive performances because they are able to provide a compact and interpretable representation of the input data. However, most of the existing works assume that each class has a global centroid, which does not hold for data with complicated structures. Besides, they cannot guarantee that the sample is associated with the nearest centroid. In this work, we present a concept factorization with the local centroids (CFLCs) approach for data clustering. The proposed model has the following advantages: 1) the samples from the same class are allowed to connect with multiple local centroids such that the manifold structure is captured; 2) the pairwise relationship between the samples and centroids is modeled to produce a reasonable label assignment; and 3) the clustering problem is formulated as a bipartite graph partitioning task, and an efficient algorithm is designed for optimization. Experiments on several data sets validate the effectiveness of the CFLC model and demonstrate its superior performance over the state of the arts.

Proceedings Article
18 May 2021
TL;DR: Huang et al. as discussed by the authors proposed a balanced Open Set Domain Adaptation (OSDA) method which could recognize the unknown samples while maintaining high classification performance for the known samples.
Abstract: Open Set Domain Adaptation (OSDA) is a challenging domain adaptation setting which allows the existence of unknown classes on the target domain. Although existing OSDA methods are good at classifying samples of known classes, they ignore the classification ability for the unknown samples, making them unbalanced OSDA methods. To alleviate this problem, we propose a balanced OSDA method which can recognize the unknown samples while maintaining high classification performance for the known samples. Specifically, to reduce the domain gaps, we first project the features to a hyperspherical latent space. In this space, we propose to bound the centroid deviation angles to not only increase the intra-class compactness but also enlarge the inter-class margins. With the bounded centroid deviation angles, we employ the statistical Extreme Value Theory to recognize the unknown samples that are misclassified into known classes. In addition, to learn better centroids, we propose an improved centroid update strategy based on sample reweighting and adaptive update rate to cooperate with centroid alignment. Experimental results on three OSDA benchmarks verify that our method can significantly outperform the compared methods and reduce the proportion of the unknown samples being misclassified into known classes.

Journal ArticleDOI
TL;DR: It is demonstrated that velocity-model uncertainties can profoundly affect parameter estimation and that their inclusion leads to more realistic parameter uncertainty quantification, although not all approaches perform equally well.
Abstract: Centroid moment tensor (CMT) parameters can be estimated from seismic waveforms. Since these data indirectly observe the deformation process, CMTs are inferred as solutions to inverse problems which are generally underdetermined and require significant assumptions, including assumptions about data noise. Broadly speaking, we consider noise to include both theory and measurement errors, where theory errors are due to assumptions in the inverse problem and measurement errors are caused by the measurement process. While data errors are routinely included in parameter estimation for full CMTs, less attention has been paid to theory errors related to velocity-model uncertainties and how these affect the resulting moment-tensor (MT) uncertainties. Therefore, rigorous uncertainty quantification for CMTs may require theory-error estimation which becomes a problem of specifying noise models. Various noise models have been proposed, and these rely on several assumptions. All approaches quantify theory errors by estimating the covariance matrix of data residuals. However, this estimation can be based on explicit modelling, empirical estimation and/or ignore or include covariances. We quantitatively compare several approaches by presenting parameter and uncertainty estimates in nonlinear full CMT estimation for several simulated data sets and regional field data of the Ml 4.4, 2015 June 13 Fox Creek, Canada, event. While our main focus is at regional distances, the tested approaches are general and implemented for arbitrary source model choice. These include known or unknown centroid locations, full MTs, deviatoric MTs and double-couple MTs. We demonstrate that velocity-model uncertainties can profoundly affect parameter estimation and that their inclusion leads to more realistic parameter uncertainty quantification. However, not all approaches perform equally well. Including theory errors by estimating non-stationary (non-Toeplitz) error covariance matrices via iterative schemes during Monte Carlo sampling performs best and is computationally most efficient. In general, including velocity-model uncertainties is most important in cases where velocity structure is poorly known.

Journal ArticleDOI
TL;DR: The proposed algorithm addresses two main weaknesses of the k-prototype algorithm, a well-known algorithm for clustering mixed data: the use of the mode as a cluster center for categorical attributes cannot accurately represent the objects, and the algorithm may stop at a local optimum solution.

Journal ArticleDOI
TL;DR: This work illustrates how ensemble processing mechanisms and mental shortcuts can significantly distort visual summaries of data, and can lead to misjudgments like the demonstrated weighted average illusion.
Abstract: Scatterplots can encode a third dimension by using additional channels like size or color (e.g. bubble charts). We explore a potential misinterpretation of trivariate scatterplots, which we call the weighted average illusion, where locations of larger and darker points are given more weight toward x- and y-mean estimates. This systematic bias is sensitive to a designer's choice of size or lightness ranges mapped onto the data. In this paper, we quantify this bias against varying size/lightness ranges and data correlations. We discuss possible explanations for its cause by measuring attention given to individual data points using a vision science technique called the centroid method. Our work illustrates how ensemble processing mechanisms and mental shortcuts can significantly distort visual summaries of data, and can lead to misjudgments like the demonstrated weighted average illusion.
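
The bias can be made concrete numerically: under the weighted average illusion, the perceived x-mean behaves like a size-weighted centroid rather than the true unweighted mean. A toy computation (illustrative only, not the authors' stimulus design):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 100, size=200)            # x-values of scatterplot points
size = rng.uniform(1, 10, size=200)          # third variable encoded as point size

true_mean = x.mean()                         # correct x-mean, ignoring point size
illusory_mean = np.average(x, weights=size)  # size-weighted centroid the illusion predicts

print(f"true x-mean: {true_mean:.1f}, size-weighted estimate: {illusory_mean:.1f}")
```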

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a new fire identification algorithm by merging fire segmentation and multifeature fusion of fire, which improved the accuracy of fire identification based on video in the Internet of Things environment.
Abstract: In order to improve the accuracy of fire identification based on video in the Internet-of-Things environment, this article proposes a new fire identification algorithm by merging fire segmentation and multifeature fusion of fire. First, according to the relationship between R and Y channels, the improved YCbCr models are established for initial fire segmentation under reflection and nonreflection conditions, respectively. Simultaneously, the reflection and nonreflection conditions are judged by comparing the areas obtained by the two improved YCbCr models. Second, an improved region growing algorithm is proposed for fine fire segmentation by making use of the relationship between the seed point and its adjacent points. The seed points are determined using the weighted average of centroid coordinates of each segmented image. Finally, the quantitative indicators of fire identification are given according to the variation coefficient of fire area, the dispersion of centroid, and the circularity. Extensive experiments were conducted, and the experimental results demonstrate that the proposed fire detection method considerably outperforms the traditional methods on average in terms of three performance indexes: precision, recall, and F1-score. Specifically, compared with the deep learning method, the precision of the proposed method is slightly higher. Although the recall of the proposed method is slightly lower than the deep learning method, its computation complexity is low.

Journal ArticleDOI
TL;DR: A comprehensive comparison and analysis across mean, variance and classification success rate shows that the proposed NCOHHO algorithm for optimizing FNNs has the best overall performance and outperforms the alternatives in terms of the performance measures.
Abstract: The Harris Hawks Optimization Algorithm is a new metaheuristic optimization that simulates the process of Harris Hawks hunting prey (rabbit) in nature. The global and local search processes of the algorithm are performed by simulating several stages of cooperative behavior during hunting. To enhance the performance of this algorithm, in this paper we propose a neighborhood centroid opposite-based learning Harris Hawks optimization algorithm (NCOHHO). Under the premise of opposite-based learning, the neighborhood centroid is used as the reference point for generating the opposite particle; this maintains the diversity of the population and makes full use of the swarm's search experience to expand the search range of the opposite solutions, enhancing the probability of finding the optimal solution. The improved algorithm is superior to the original Harris Hawks Optimization algorithm in all aspects. We apply NCOHHO to the training of feed-forward neural networks (FNN). To confirm that using NCOHHO to train FNNs is more effective, five classification datasets are used to benchmark the performance of the proposed method. A comprehensive comparison and analysis across mean, variance and classification success rate shows that the proposed NCOHHO algorithm for optimizing FNNs has the best overall performance and outperforms other metaheuristic algorithms in terms of the performance measures.
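
The core opposite-based step can be written compactly: reflect each candidate solution through the centroid of its k nearest neighbours and keep whichever of the pair is fitter. The sketch below is a generic illustration of that mechanism on a toy objective, not the NCOHHO implementation:

```python
import numpy as np

def neighborhood_centroid_opposites(population, k=5):
    """For each individual, reflect it through the centroid of its k nearest
    neighbours: x_opp = 2 * centroid_neighborhood - x."""
    pop = np.asarray(population, float)
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    neighbours = np.argsort(d, axis=1)[:, 1:k + 1]          # skip self (distance 0)
    centroids = pop[neighbours].mean(axis=1)
    return 2.0 * centroids - pop

def select_better(population, opposites, fitness):
    """Keep, per position, whichever of the original or opposite solution is fitter."""
    f_pop, f_opp = fitness(population), fitness(opposites)
    return np.where((f_opp < f_pop)[:, None], opposites, population)

sphere = lambda x: (np.asarray(x) ** 2).sum(axis=-1)        # toy minimisation objective
pop = np.random.default_rng(0).uniform(-5, 5, size=(30, 10))
improved = select_better(pop, neighborhood_centroid_opposites(pop), sphere)
print(sphere(improved).mean() <= sphere(pop).mean())        # True: never worse per individual
```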

Journal ArticleDOI
TL;DR: Some theoretical results about the Chebyshev integral inequality for a class of Interval Type-2 fuzzy numbers, which lead to non-iterative closed forms for the centroid and its bounds for Type-1 fuzzy and, by extension, Interval Type-2 fuzzy numbers.

Posted Content
TL;DR: In this paper, a branch-and-bound algorithm was proposed to solve the minimum sum-of-squares clustering problem (MSSC) in which the objective is to minimize the sum of squared distances from the points to the centroid of their cluster.
Abstract: The minimum sum-of-squares clustering problem (MSSC) consists in partitioning $n$ observations into $k$ clusters in order to minimize the sum of squared distances from the points to the centroid of their cluster. In this paper, we propose an exact algorithm for the MSSC problem based on the branch-and-bound technique. The lower bound is computed by using a cutting-plane procedure where valid inequalities are iteratively added to the Peng-Wei SDP relaxation. The upper bound is computed with the constrained version of k-means where the initial centroids are extracted from the solution of the SDP relaxation. In the branch-and-bound procedure, we incorporate instance-level must-link and cannot-link constraints to express knowledge about which instances should or should not be grouped together. We manage to reduce the size of the problem at each level preserving the structure of the SDP problem itself. The obtained results show that the approach allows to successfully solve for the first time real-world instances up to 4000 data points.
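
For reference, the MSSC objective that the branch-and-bound method optimizes exactly is just the within-cluster sum of squared distances to each cluster centroid (the same quantity k-means minimizes only locally). A small helper for evaluating it, not the exact solver itself:

```python
import numpy as np

def mssc_objective(points, assignment, k):
    """Sum of squared distances from each point to the centroid of its cluster."""
    points = np.asarray(points, float)
    assignment = np.asarray(assignment)
    total = 0.0
    for c in range(k):
        members = points[assignment == c]
        if len(members):
            total += ((members - members.mean(axis=0)) ** 2).sum()
    return total

pts = [(0, 0), (0, 1), (5, 5), (6, 5)]
print(mssc_objective(pts, [0, 0, 1, 1], k=2))   # 0.5 + 0.5 = 1.0
```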

Journal ArticleDOI
TL;DR: The mathematical analysis and experimental results of this paper can provide a theoretical tool for using PEDCC to solve the key problems in the field of pattern recognition, such as interpretable supervised/unsupervised learning, incremental learning, uncertainty analysis and so on.
Abstract: Predefined evenly-distributed class centroids (PEDCC) can be widely used in models and algorithms of pattern classification, such as CNN classifiers, classification autoencoders, clustering, and semi-supervised learning, etc. Its basic idea is to predefine the class centers, which are evenly-distributed on the unit hypersphere in feature space, to maximize the inter-class distance. The previous method of generating PEDCC uses an iterative algorithm based on a charge model. The generated class centers will have some errors with the theoretically evenly-distributed points, and the generation time is long. This paper takes advantage of regular polyhedron in high-dimensional space and the evenly distributed points on the n-dimensional hypersphere to generate PEDCC mathematically. Then, we discussed the basic and extended characteristics of the frames formed by PEDCC, and some meaningful conclusions are obtained. Finally, the effectiveness of the new algorithm and related conclusions are proved by experiments. The mathematical analysis and experimental results of this paper can provide a theoretical tool for using PEDCC to solve the key problems in the field of pattern recognition, such as interpretable supervised/unsupervised learning, incremental learning, uncertainty analysis and so on.
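
The regular-simplex construction behind evenly distributed class centroids has a short closed form: centre the k standard basis vectors and normalise them; the resulting unit vectors are pairwise equidistant on the hypersphere (pairwise cosine -1/(k-1)) and can be zero-padded to any feature dimension n >= k, even though they span only k - 1 dimensions. This is the generic construction, not necessarily the paper's exact frame:

```python
import numpy as np

def simplex_centroids(num_classes, feature_dim):
    """k evenly distributed unit vectors (regular simplex vertices) embedded
    in an n-dimensional feature space (simple zero-padding needs n >= k)."""
    assert feature_dim >= num_classes
    e = np.eye(num_classes)                        # k basis vectors in R^k
    v = e - e.mean(axis=0)                         # centre at the origin
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # project onto the unit sphere
    out = np.zeros((num_classes, feature_dim))
    out[:, :num_classes] = v                       # pad with zeros up to feature_dim
    return out

c = simplex_centroids(num_classes=10, feature_dim=64)
gram = c @ c.T
print(np.allclose(np.diag(gram), 1.0))                       # unit norm
print(np.allclose(gram[~np.eye(10, dtype=bool)], -1.0 / 9))  # equal pairwise cosine -1/(k-1)
```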

Proceedings ArticleDOI
12 Jan 2021
TL;DR: In this article, a Gaussian Mixture Model (GMM) was used to estimate the human body portion, the position of body portion (head, torso, arms, and legs), size and orientation within the scene to recognize the action.
Abstract: Estimating and detecting different human body portions from different scenes in videos and images is an important step for most model-based systems. Human body-portion detection from a single still image estimates the layout of the body portions, the position of each portion (head, torso, arms, and legs), and their size and orientation within the scene in order to recognize the action. For foreground segmentation, we use salient-object detection via Structured Matrix Decomposition (SMD) and skin-tone detection. After extraction of the silhouette, body-portion estimation is performed using a Gaussian Mixture Model (GMM). Five basic parts are determined using the classical expectation-maximization (EM) algorithm, and a minimum of twelve ellipsoids are represented on the image together with the centroid of each ellipse. The estimated distances between the centroids of the ellipses are then compared. The experimental results on the PAMI09_release dataset show an accuracy of 86.2%. Our proposed pose descriptors outperform other state-of-the-art body-portion detection models.
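
The body-portion step itself (fit a Gaussian mixture with EM to the foreground pixel coordinates and read off the component means as part centroids) can be sketched with scikit-learn; the silhouette below is a crude synthetic stand-in for an SMD/skin-tone segmentation output:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def body_part_centroids(silhouette, n_parts=5):
    """Fit a GMM (EM algorithm) to the foreground pixel coordinates and
    return the component means as rough body-part centroids."""
    coords = np.column_stack(np.nonzero(silhouette))        # (row, col) of foreground pixels
    gmm = GaussianMixture(n_components=n_parts, covariance_type='full',
                          random_state=0).fit(coords)
    return gmm.means_                                        # one centroid per ellipse/part

def centroid_distances(centroids):
    """Pairwise distances between part centroids (the pose-descriptor idea)."""
    c = np.asarray(centroids)
    return np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)

# crude synthetic silhouette: a vertical "torso" with a "head" blob on top
mask = np.zeros((120, 60), dtype=bool)
mask[30:100, 25:35] = True      # torso
mask[10:28, 22:38] = True       # head
parts = body_part_centroids(mask, n_parts=2)
print(parts)                     # two centroids, one near the head (~row 18), one near the torso (~row 64)
print(centroid_distances(parts))
```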

Journal ArticleDOI
TL;DR: Experimental results show that the superpixels generated by this algorithm not only adhere to object boundaries closely but also preserve regular shape, and it outperforms state-of-the-art methods, especially for the images with strong gradient textures.


Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this article, a scalable approach for nuclei centroid detection of 3D microscopy volumes is presented, where 3D agglomerative hierarchical clustering (AHC) is used to estimate the 3D centroids of nuclei in a volume.
Abstract: Robust and accurate nuclei centroid detection is important for the understanding of biological structures in fluorescence microscopy images. Existing automated nuclei localization methods face three main challenges: (1) Most object detection methods work only on 2D images and are difficult to extend to 3D volumes; (2) Segmentation-based models can be used on 3D volumes, but they are computationally expensive for large microscopy volumes and have difficulty distinguishing different instances of objects; (3) Hand-annotated ground truth is limited for 3D microscopy volumes. To address these issues, we present a scalable approach for nuclei centroid detection of 3D microscopy volumes. We describe RCNN-SliceNet, which detects 2D nuclei centroids for each slice of the volume from different directions, and 3D agglomerative hierarchical clustering (AHC) is used to estimate the 3D centroids of nuclei in a volume. The model was trained with the synthetic microscopy data generated using Spatially Constrained Cycle-Consistent Adversarial Networks (SpCycle-GAN) and tested on different types of real 3D microscopy data. Extensive experimental results demonstrate that our proposed method can accurately count and detect the nuclei centroids in a 3D microscopy volume.
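
The final aggregation step (grouping per-slice 2D centroid detections into 3D nuclei centroids with agglomerative hierarchical clustering) can be sketched with scikit-learn; the detections below are synthetic placeholders rather than RCNN-SliceNet outputs:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def centroids_3d_from_slices(detections, distance_threshold=6.0):
    """Cluster (z, y, x) detections from individual slices with AHC and
    return one averaged 3D centroid per cluster."""
    pts = np.asarray(detections, float)
    ahc = AgglomerativeClustering(n_clusters=None, linkage='average',
                                  distance_threshold=distance_threshold).fit(pts)
    return np.array([pts[ahc.labels_ == c].mean(axis=0)
                     for c in np.unique(ahc.labels_)])

# two nuclei seen in several adjacent slices as (z, y, x), with small jitter
detections = [(10, 40, 40), (11, 41, 39), (12, 40, 41),     # nucleus A
              (30, 80, 15), (31, 79, 16), (32, 80, 14)]     # nucleus B
print(centroids_3d_from_slices(detections))                  # ~[[11, 40.3, 40], [31, 79.7, 15]]
```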

Journal ArticleDOI
TL;DR: This paper aims to present the novel information measures using four different centers namely centroid, orthocenter, circumcenter and incenter under the IFS environment to address the cognitive-based human decision-making problems.
Abstract: Cognitive computing has deep extents, which embrace different features of cognition. In the decision-making process, multi-criteria decision making is credited as a cognitive-based human action. However, to treat and unite the information from several resources, the most vital stage is data collection. Intuitionistic fuzzy set (IFS) is one of the most robust and trustworthy tools to accomplish the imprecise information with the help of the membership degrees. In addition to this, an information measure plays an essential role in treating uncertain information to reach the final decision based on the degree of the separation between the pairs of the numbers. Motivated by these, this paper aims to present the novel information measures using four different centers namely centroid, orthocenter, circumcenter and incenter under the IFS environment to address the cognitive-based human decision-making problems. The present work is divided into three folds. The first fold is to propose a technique of transforming intuitionistic fuzzy values into general triangular fuzzy numbers (TFNs). The right-angled and isosceles TFNs are special cases of the proposed transformation technique. The second fold is to develop distance and similarity measures using four different centers namely centroid, orthocenter, circumcenter and incenter of transformed TFNs. The basic axioms of the proposed measures are investigated in detail. The third fold is to justify superiority and validity of the proposed measures. The effectiveness of the developed measures is examined by applying it in clustering as well as the pattern recognition problems, and their results are correlated with some prevailing studies. Additionally, a clustering technique is discussed based on the stated measures to classify the objects. A detailed comparative analysis is done with some of the existing measures and concludes that several existing measures fail to discriminate the results under the different instances such as division by zero problems or counter-intuitive cases while the proposed measure has successfully overcome this drawback.

Journal ArticleDOI
TL;DR: In this article, the maximal hyperplane sections of the regular n-simplex were determined, if the distance of the hyperplane to the centroid is fairly large, i.e. larger than the distance from the centre to the edges.