Journal ArticleDOI

Granular Flow Graph, Adaptive Rule Generation and Tracking

01 Dec 2017 - IEEE Transactions on Cybernetics (IEEE Trans Cybern), Vol. 47, Iss. 12, pp. 4096-4107

TL;DR: A new method of adaptive rule generation in granular computing framework is described based on rough rule base and granular flow graph, and applied for video tracking, and it is shown that the neighborhood granulation provides a balanced tradeoff between speed and accuracy as compared to pixel level computation.

Abstract: A new method of adaptive rule generation in the granular computing framework, based on a rough rule base and a granular flow graph, is described and applied to video tracking. In the process, several new concepts and operations are introduced, and methodologies with superior performance are formulated. The flow graph enables the definition of an intelligent technique for rule-base adaptation, exploiting its ability to map the relevance of attributes and rules in a decision-making system. Two new features, namely, the expected flow graph and the mutual dependency between flow graphs, are defined to make the flow graph applicable to both training and validation. All these techniques operate at the neighborhood granular level. A way of forming spatio-temporal 3-D granules of arbitrary shape and size is introduced. The resulting rough flow graph-based adaptive granular rule-based system for unsupervised video tracking is capable of handling uncertainty and incompleteness in frames, overcoming the lack of information that arises in the absence of initial manual interaction, providing superior performance, and saving computation time. The cases of partial overlapping and of detecting unpredictable changes are handled efficiently. It is shown that neighborhood granulation provides a balanced tradeoff between speed and accuracy as compared to pixel-level computation. The quantitative indices used for evaluating tracking performance do not require any ground-truth information, unlike other methods. The superiority of the algorithm over nonadaptive and other recent methods is demonstrated extensively.
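The speed/accuracy tradeoff of neighborhood granulation over pixel-level computation can be illustrated with a minimal sketch. Here the granule size and the mean-intensity summary are illustrative assumptions, not the paper's exact granulation operators:

```python
def neighborhood_granules(frame, k):
    """Partition a 2-D frame (list of lists of intensities) into k-by-k
    neighborhood granules, each summarized by its mean intensity."""
    h, w = len(frame), len(frame[0])
    granules = []
    for i in range(0, h, k):
        for j in range(0, w, k):
            block = [frame[y][x]
                     for y in range(i, min(i + k, h))
                     for x in range(j, min(j + k, w))]
            granules.append(sum(block) / len(block))
    return granules

# Toy 8x8 frame with a deterministic intensity pattern.
frame = [[(y * 8 + x) % 256 for x in range(8)] for y in range(8)]
pixels = 8 * 8
granules = neighborhood_granules(frame, 4)
print(pixels, len(granules))  # 64 pixel-level units reduce to 4 granules
```

Downstream rule evaluation then touches 4 units instead of 64, which is the source of the computational gain, at the cost of sub-granule detail.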



Citations
Journal ArticleDOI
TL;DR: This study identifies the principles of Granular Computing, shows how information granules are constructed and subsequently used in describing relationships present among the data, and advocates that the level of abstraction can be flexibly adjusted through Granular computing.
Abstract: In the plethora of conceptual and algorithmic developments supporting data analytics and system modeling, human-centric pursuits assume a particular position owing to the ways they emphasize and realize interaction between users and the data. We advocate that the level of abstraction, which can be flexibly adjusted, is conveniently realized through Granular Computing. Granular Computing is concerned with the development and processing of information granules: formal entities that facilitate a way of organizing knowledge about the available data and the relationships existing there. This study identifies the principles of Granular Computing and shows how information granules are constructed and subsequently used in describing relationships present among the data.

68 citations

Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results show that FGKNN and BFGKNN have better performance than that of the methods mentioned above if the appropriate parameters are given.
Abstract: K-nearest neighbor (KNN) is a classic classifier, which is simple and effective. Adaboost combines several weak classifiers into a strong classifier to improve the classification effect. These two classifiers have been widely used in the field of machine learning. In this paper, based on information fuzzy granulation, KNN and Adaboost, we propose two algorithms for classification: fuzzy granule K-nearest neighbor (FGKNN) and boosted fuzzy granule K-nearest neighbor (BFGKNN). By introducing granular computing, we normalize the problem-solving process as a structured and hierarchical one. Structured information processing is the focus, so performance, including accuracy and robustness, can be enhanced for data classification. First, a fuzzy set is introduced, and an atom attribute fuzzy granulation is performed on samples in the classified system to form fuzzy granules. Then, a fuzzy granule vector is created from multiple attribute fuzzy granules. We design the operators and define the measure of fuzzy granule vectors in the fuzzy granule space, and we also prove the monotonic principle of the distance of fuzzy granule vectors. Furthermore, we give the definition of the K-nearest neighbor fuzzy granule vector and present the FGKNN and BFGKNN algorithms. Finally, we compare the performance of KNN, Back Propagation Neural Network (BPNN), Support Vector Machine (SVM), Logistic Regression (LR), FGKNN and BFGKNN on UCI data sets. Theoretical analysis and experimental results show that FGKNN and BFGKNN perform better than the methods mentioned above when appropriate parameters are given.
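The core idea of KNN over fuzzy granule vectors can be sketched minimally as follows. The triangular-granule representation and the distance used here are illustrative stand-ins; the paper's actual granule operators, measure, and monotonicity proof are more involved:

```python
from collections import Counter

def granulate(sample, width=0.1):
    """Turn a raw feature vector into fuzzy granules, represented here
    as (center, width) pairs of triangular membership functions."""
    return [(x, width) for x in sample]

def granule_distance(g1, g2):
    """Illustrative distance between two fuzzy granule vectors:
    mean absolute difference of centers plus width mismatch."""
    return sum(abs(a - c) + abs(b - d)
               for (a, b), (c, d) in zip(g1, g2)) / len(g1)

def fgknn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest
    fuzzy-granule neighbors."""
    q = granulate(query)
    dists = sorted(
        (granule_distance(granulate(s), q), y)
        for s, y in zip(train, labels)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
labels = ["a", "a", "b", "b"]
print(fgknn_predict(train, labels, [0.15, 0.15]))  # prints "a"
```

The boosted variant (BFGKNN) would wrap such a base learner in an Adaboost-style reweighting loop, which is omitted here.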

15 citations

Journal ArticleDOI
TL;DR: An attribute-driven granular model (AGrM) under a machine-learning scheme is proposed; it achieved comparable pinch-recognition accuracy at the lowest computational cost and with the highest force grand prediction accuracy.
Abstract: Fine multifunctional prosthetic hand manipulation requires precise control of the pinch type and the corresponding force, and it is a challenge to decode both aspects from myoelectric signals. This paper proposes an attribute-driven granular model (AGrM) under a machine-learning scheme to solve this problem. The model utilizes an additionally captured attribute as the latent variable for a supervised granulation procedure. It was applied to EMG-based pinch-type classification and fingertip force grand prediction. In the experiments, 16 channels of surface electromyographic signals (i.e., main attribute) and continuous fingertip force (i.e., subattribute) were simultaneously collected while subjects performed eight types of hand pinches. The use of AGrM improved the pinch-type recognition accuracy by 1.8%, to around 97.2%, when constructing eight granules for each grasping type, and achieved more than 90% force grand prediction accuracy at any granular level greater than six. Further, sensitivity analysis verified its robustness with respect to different channel combinations and interferences. In comparison with other clustering-based granulation methods, AGrM achieved comparable pinch-recognition accuracy but had the lowest computational cost and the highest force grand prediction accuracy.

11 citations


Additional excerpts

  • ...groups, classes or clusters of a universe, which is closely related to the cognitive strategy of human beings in problem solving and is technically transferable to the design of human-centric intelligent systems [17]; it has been applied in image-based crowd segmentation [18], long-term prediction models for energy systems [19], [20], video-based object tracking [21] and principal curve extraction [22], etc....

    [...]

Journal ArticleDOI
06 Mar 2019
TL;DR: A node reduction algorithm based on significance degree of condition attribute node is developed to delete redundant or irrelevant condition attribute nodes, which can improve clustering distribution and reduce computational complexity.
Abstract: To address the problem that redundant condition attribute nodes and the poor reasoning ability of a flow graph may lead to a high computational burden and low diagnosis accuracy, a fault-severity identification method for roller bearings using a flow graph and non-naive Bayesian inference is put forward in this paper. First, a normalized flow graph, constructed from fault features of roller bearings extracted from training samples, is used to represent and describe the causal relationships among attributes. Then, the significance degree of a condition attribute node with respect to the decision attribute node set is defined to quantitatively measure the impact of the node on the decision-making ability of the flow graph. A node reduction algorithm based on this significance degree is developed to delete redundant or irrelevant condition attribute nodes, which can improve the clustering distribution and reduce computational complexity. Finally, non-naive Bayesian inference is utilized...
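The normalize-then-prune step described above can be sketched on a toy flow graph. The significance measure here is simply a node's normalized outflow, used as an illustrative proxy; the paper's actual significance degree is defined with respect to the decision attribute node set:

```python
def normalize(graph, total):
    """Normalize branch flows of a flow graph so they sum to 1.
    `graph` maps (source_node, target_node) -> flow (case count)."""
    return {edge: flow / total for edge, flow in graph.items()}

def outflow(graph, node):
    """Total normalized flow leaving `node`."""
    return sum(f for (src, _), f in graph.items() if src == node)

def prune(graph, nodes, threshold):
    """Keep only condition-attribute nodes whose outflow (an illustrative
    proxy for the significance degree) meets the threshold."""
    return [n for n in nodes if outflow(graph, n) >= threshold]

# Toy flow graph: condition nodes c1..c3 feed decision nodes d1, d2;
# flows are case counts from 100 training samples.
edges = {("c1", "d1"): 50, ("c1", "d2"): 8,
         ("c2", "d1"): 2,  ("c2", "d2"): 38,
         ("c3", "d1"): 1,  ("c3", "d2"): 1}
g = normalize(edges, sum(edges.values()))
kept = prune(g, ["c1", "c2", "c3"], threshold=0.05)
print(kept)  # c3 carries too little flow and is deleted
```

Deleting low-flow nodes shrinks the graph that the subsequent Bayesian inference must traverse, which is where the computational saving comes from.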

10 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive overview of object detection and tracking using deep learning (DL) networks and compare the performance of different object detectors and trackers, including the recent development in granulated DL models.
Abstract: Object detection and tracking is one of the most important and challenging branches of computer vision and has been widely applied in various fields, such as health-care monitoring, autonomous driving, anomaly detection, and so on. With the rapid development of deep learning (DL) networks and GPU computing power, the performance of object detectors and trackers has been greatly improved. To thoroughly understand the main development status of the object detection and tracking pipeline, in this survey we critically analyze the existing DL network-based methods of object detection and tracking and describe various benchmark datasets. This includes the recent development in granulated DL models. Primarily, we provide a comprehensive overview of both generic and specific object detection models. We enlist various comparative results for obtaining the best detector, tracker, and their combination. Moreover, we list the traditional and new applications of object detection and tracking, showing its developmental trends. Finally, challenging issues, including the relevance of granular computing, are elaborated as a future scope of research, together with some concerns. An extensive bibliography is also provided.

6 citations


References
Book
31 Oct 1991
Abstract (table of contents):
I. Theoretical Foundations.
1. Knowledge: Introduction; Knowledge and Classification; Knowledge Base; Equivalence, Generalization and Specialization of Knowledge.
2. Imprecise Categories, Approximations and Rough Sets: Introduction; Rough Sets; Approximations of Set; Properties of Approximations; Approximations and Membership Relation; Numerical Characterization of Imprecision; Topological Characterization of Imprecision; Approximation of Classifications; Rough Equality of Sets; Rough Inclusion of Sets.
3. Reduction of Knowledge: Introduction; Reduct and Core of Knowledge; Relative Reduct and Relative Core of Knowledge; Reduction of Categories; Relative Reduct and Core of Categories.
4. Dependencies in Knowledge Base: Introduction; Dependency of Knowledge; Partial Dependency of Knowledge.
5. Knowledge Representation: Introduction; Examples; Formal Definition; Significance of Attributes; Discernibility Matrix.
6. Decision Tables: Introduction; Formal Definition and Some Properties; Simplification of Decision Tables.
7. Reasoning about Knowledge: Introduction; Language of Decision Logic; Semantics of Decision Logic Language; Deduction in Decision Logic; Normal Forms; Decision Rules and Decision Algorithms; Truth and Indiscernibility; Dependency of Attributes; Reduction of Consistent Algorithms; Reduction of Inconsistent Algorithms; Reduction of Decision Rules; Minimization of Decision Algorithms.
II. Applications.
8. Decision Making: Introduction; Optician's Decisions Table; Simplification of Decision Table; Decision Algorithm; The Case of Incomplete Information.
9. Data Analysis: Introduction; Decision Table as Protocol of Observations; Derivation of Control Algorithms from Observation; Another Approach; The Case of Inconsistent Data.
10. Dissimilarity Analysis: Introduction; The Middle East Situation; Beauty Contest; Pattern Recognition; Buying a Car.
11. Switching Circuits: Introduction; Minimization of Partially Defined Switching Functions; Multiple-Output Switching Functions.
12. Machine Learning: Introduction; Learning from Examples; The Case of an Imperfect Teacher; Inductive Learning.
(Each chapter closes with a Summary, Exercises, and References.)
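The rough set approximations central to this book (Chapter 2) reduce to a short computation: partition the universe into indiscernibility classes and test each class against the target concept. A minimal sketch, with a toy single-attribute decision table:

```python
def approximations(universe, target, attr):
    """Pawlak lower and upper approximations of `target` under the
    indiscernibility relation induced by `attr` (a function mapping
    each object to its attribute value)."""
    # Group objects into equivalence (indiscernibility) classes.
    classes = {}
    for x in universe:
        classes.setdefault(attr(x), set()).add(x)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:   # class lies entirely inside the target set
            lower |= cls
        if cls & target:    # class overlaps the target set
            upper |= cls
    return lower, upper

# Toy decision table: objects 1..5 described by one condition attribute.
table = {1: "red", 2: "red", 3: "blue", 4: "blue", 5: "green"}
target = {1, 2, 3}          # the concept to approximate
lower, upper = approximations(table, target, table.get)
print(sorted(lower), sorted(upper))  # [1, 2] [1, 2, 3, 4]
```

Objects 3 and 4 are indiscernible ("blue") yet only 3 belongs to the concept, so they fall in the boundary region (upper minus lower), which is exactly the imprecision a rough set quantifies.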

7,826 citations

Journal ArticleDOI
TL;DR: The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, identify new trends, and discuss the important issues related to tracking, including the use of appropriate image features, selection of motion models, and detection of objects.
Abstract: The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.

5,085 citations


"Granular Flow Graph, Adaptive Rule ..." refers background in this paper

  • ...Most of the approaches are partially supervised [12]–[20]....

    [...]

Journal ArticleDOI
TL;DR: Modes of information granulation (IG) in which the granules are crisp (c-granular) play important roles in a wide variety of methods, approaches and techniques, but crisp granulation does not reflect the fact that in almost all human reasoning and concept formation the granules are fuzzy (f-granular).
Abstract: There are three basic concepts that underlie human cognition: granulation, organization and causation. Informally, granulation involves decomposition of whole into parts; organization involves integration of parts into whole; and causation involves association of causes with effects. Granulation of an object A leads to a collection of granules of A, with a granule being a clump of points (objects) drawn together by indistinguishability, similarity, proximity or functionality. For example, the granules of a human head are the forehead, nose, cheeks, ears, eyes, etc. In general, granulation is hierarchical in nature. A familiar example is the granulation of time into years, months, days, hours, minutes, etc. Modes of information granulation (IG) in which the granules are crisp (c-granular) play important roles in a wide variety of methods, approaches and techniques. Crisp IG, however, does not reflect the fact that in almost all of human reasoning and concept formation the granules are fuzzy (f-granular). The granules of a human head, for example, are fuzzy in the sense that the boundaries between cheeks, nose, forehead, ears, etc. are not sharply defined. Furthermore, the attributes of fuzzy granules, e.g., length of nose, are fuzzy, as are their values: long, short, very long, etc. The fuzziness of granules, their attributes and their values is characteristic of ways in which humans granulate and manipulate information.

2,402 citations


Additional excerpts

  • ...This concept was first introduced by Zadeh [11]....

    [...]

Journal ArticleDOI
TL;DR: A comprehensive review of recent Kinect-based computer vision algorithms and applications is presented, covering topics including preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping.
Abstract: With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

1,377 citations


"Granular Flow Graph, Adaptive Rule ..." refers background in this paper

  • ...Some such examples are, multiple cameras [27], PTZ camera [28], and Kinect sensor [29], [30]....

    [...]

Journal ArticleDOI
TL;DR: An object detection scheme with three innovations over existing approaches is presented: the background is modeled as a single probability density, temporal persistence is used as a detection criterion, and the posterior function is maximized efficiently by finding the minimum cut of a capacitated graph.
Abstract: Accurate detection of moving objects is an important precursor to stable tracking or recognition. In this paper, we present an object detection scheme that has three innovations over existing approaches. First, the model of image pixel intensities as independent random variables is challenged, and it is asserted that useful correlation exists in the intensities of spatially proximal pixels. This correlation is exploited to sustain high levels of detection accuracy in the presence of dynamic backgrounds. By using a nonparametric density estimation method over a joint domain-range representation of image pixels, multimodal spatial uncertainties and complex dependencies between the domain (location) and range (color) are directly modeled, and we propose a model of the background as a single probability density. Second, temporal persistence is proposed as a detection criterion. Unlike previous approaches, which detect objects by building adaptive models of the background alone, the foreground is also modeled to augment the detection of objects (without explicit tracking), since objects detected in the preceding frame contain substantial evidence for detection in the current frame. Finally, the background and foreground models are used competitively in a MAP-MRF decision framework, stressing spatial context as a condition for detecting interesting objects, and the posterior function is maximized efficiently by finding the minimum cut of a capacitated graph. Experimental validation of the proposed method is performed and presented on a diverse set of dynamic scenes.
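The background-estimation idea underlying such unsupervised methods can be reduced to a minimal per-pixel sketch: maintain a running-average background and flag pixels that deviate from it. The kernel-density and MAP-MRF machinery of the paper above is far richer; `alpha` and `tau` here are illustrative parameters:

```python
def update_background(bg, frame, alpha=0.1):
    """Running-average background model: bg <- (1 - alpha) * bg + alpha * frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def detect_foreground(bg, frame, tau=20):
    """Mark pixels whose deviation from the background exceeds tau."""
    return [abs(f - b) > tau for b, f in zip(bg, frame)]

# Four-pixel toy scanline: the background sits near intensity 10.
bg = [10.0, 10.0, 10.0, 10.0]
frame = [11.0, 9.0, 200.0, 12.0]   # third pixel: a moving object appears
mask = detect_foreground(bg, frame)
bg = update_background(bg, frame)
print(mask)  # only the third pixel is flagged as foreground
```

Slowly updating the background (small `alpha`) lets it absorb gradual illumination change while transient objects still trigger the threshold, which is the basic tradeoff every background-subtraction tracker tunes.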

663 citations


"Granular Flow Graph, Adaptive Rule ..." refers methods in this paper

  • ...The unsupervised methods are mostly based on background estimation [21], [22]....

    [...]