
Showing papers in "IEEE Transactions on Systems, Man, and Cybernetics in 1994"


Journal ArticleDOI
TL;DR: An efficient differential box-counting approach to estimate fractal dimension is proposed, and by comparison with four other methods it is shown that the method is both efficient and accurate.
Abstract: Fractal dimension is an interesting feature proposed to characterize roughness and self-similarity in a picture. This feature has been used in texture segmentation and classification, shape analysis and other problems. An efficient differential box-counting approach to estimate fractal dimension is proposed in this note. By comparison with four other methods, it has been shown that the authors' method is both efficient and accurate. Practical results on artificial and natural textured images are presented.
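
To make the box-counting idea concrete, here is a minimal sketch of a differential box-counting estimator in Python; the box sizes, the synthetic test image, and the exact counting details are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def differential_box_counting(image, box_sizes):
    """Estimate the fractal dimension of a grayscale image by
    differential box counting (an illustrative sketch)."""
    M = image.shape[0]          # assume a square M x M image
    G = int(image.max()) + 1    # number of gray levels
    log_inv_r, log_N = [], []
    for s in box_sizes:                      # box size s x s in the image plane
        h = max(1, s * G // M)               # box height along the gray-level axis
        N_r = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = image[i:i + s, j:j + s]
                # differential count: number of gray-level boxes spanned
                # between the block's minimum and maximum intensity
                N_r += int(block.max()) // h - int(block.min()) // h + 1
        log_inv_r.append(np.log(1.0 / s))
        log_N.append(np.log(N_r))
    # fractal dimension = slope of log N_r against log(1/r)
    slope, _ = np.polyfit(log_inv_r, log_N, 1)
    return slope

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) * 256).astype(int)   # rough synthetic texture
print(differential_box_counting(img, box_sizes=[2, 4, 8, 16]))
```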

767 citations


Journal ArticleDOI
TL;DR: A decision making procedure is proposed to rank alternatives in multiple-attribute decision making (MADM) problems with uncertainty, dealing with uncertain decision knowledge in problems with both quantitative and qualitative attributes.
Abstract: A new evidential reasoning based approach is proposed that may be used to deal with uncertain decision knowledge in multiple-attribute decision making (MADM) problems with both quantitative and qualitative attributes. This approach is based on an evaluation analysis model and the evidence combination rule of the Dempster-Shafer theory. It is akin to a preference modeling approach, comprising an evidential reasoning framework for evaluation and quantification of qualitative attributes. Two operational algorithms have been developed within this approach for combining multiple uncertain subjective judgments. Based on this approach and a traditional MADM method, a decision making procedure is proposed to rank alternatives in MADM problems with uncertainty. A numerical example is discussed to demonstrate the implementation of the proposed approach. A multiple-attribute motor cycle evaluation problem is then presented to illustrate the hybrid decision making procedure.
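
The evidence combination rule at the core of this approach is Dempster's rule. A minimal sketch on a toy evaluation problem follows; the grade set and the two experts' basic probability assignments are invented for illustration.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments given
    as dicts from frozenset-of-grades to mass (assumes the two pieces
    of evidence are not totally conflicting)."""
    combined, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y
    k = 1.0 - conflict                     # normalization constant
    return {h: v / k for h, v in combined.items()}

# two experts assess one qualitative attribute on grades good/average/poor
m1 = {frozenset({"good"}): 0.6, frozenset({"good", "average"}): 0.4}
m2 = {frozenset({"good"}): 0.5, frozenset({"average"}): 0.3,
      frozenset({"good", "average", "poor"}): 0.2}
print(dempster_combine(m1, m2))
```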

743 citations


Journal ArticleDOI
TL;DR: A simple and effective approach, the mountain method, is proposed for approximate estimation of cluster centers on the basis of the concept of a mountain function; it is based upon a gridding of the space, the construction of a mountain function from the data, and then a destruction of the mountains to obtain the cluster centers.
Abstract: We develop a simple and effective approach for approximate estimation of the cluster centers on the basis of the concept of a mountain function. We call the procedure the mountain method. It can be useful for obtaining the initial values of the clusters that are required by more complex clustering algorithms. It also can be used as a stand-alone simple approximate clustering technique. The method is based upon a gridding of the space, the construction of a mountain function from the data and then a destruction of the mountains to obtain the cluster centers.
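
A compact sketch of the procedure, assuming Gaussian-shaped mountain and destruction kernels with hand-picked widths (the paper's exact functional forms and constants may differ):

```python
import numpy as np

def mountain_method(data, grid_pts, sigma=0.5, beta=0.5, n_clusters=2):
    """Build a mountain function on the grid, then repeatedly take the
    highest peak as a cluster center and subtract its influence
    (the 'destruction' step)."""
    d = np.linalg.norm(grid_pts[:, None, :] - data[None, :, :], axis=2)
    m = np.exp(-d ** 2 / (2 * sigma ** 2)).sum(axis=1)   # mountain heights
    centers = []
    for _ in range(n_clusters):
        peak = int(np.argmax(m))
        centers.append(grid_pts[peak])
        dpk = np.linalg.norm(grid_pts - grid_pts[peak], axis=1)
        m = m - m[peak] * np.exp(-dpk ** 2 / (2 * beta ** 2))
    return np.array(centers)

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.2, (50, 2)),    # blob near (0, 0)
                  rng.normal(3.0, 0.2, (50, 2))])   # blob near (3, 3)
xs = np.linspace(-1, 4, 26)
grid = np.array([[x, y] for x in xs for y in xs])
print(mountain_method(data, grid))
```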

622 citations


Journal ArticleDOI
LiMin Fu
TL;DR: This paper shows how to interpret neural network knowledge in symbolic form, lays down the required definitions for this treatment, formulates the interpretation algorithm, and formally verifies its soundness.
Abstract: The neural network approach has proven useful for the development of artificial intelligence systems. However, a disadvantage with this approach is that the knowledge embedded in the neural network is opaque. In this paper, we show how to interpret neural network knowledge in symbolic form. We lay down required definitions for this treatment, formulate the interpretation algorithm, and formally verify its soundness. The main result is a formalized relationship between a neural network and a rule-based system. In addition, it has been demonstrated that the neural network generates rules of better performance than the decision tree approach in noisy conditions.

376 citations


Journal ArticleDOI
TL;DR: It is proved that all stable stationary points of the algorithm are Nash equilibria for the game and it is shown that the algorithm always converges to a desirable solution.
Abstract: A multi-person discrete game where the payoff after each play is stochastic is considered. The distribution of the random payoff is unknown to the players and, further, none of the players knows the strategies or the actual moves of the other players. A learning algorithm for the game based on a decentralized team of learning automata is presented. It is proved that all stable stationary points of the algorithm are Nash equilibria for the game. Two special cases of the game are also discussed, namely, the game with common payoff and the relaxation labelling problem. The former has applications such as pattern recognition and the latter is a problem widely studied in computer vision. For the two special cases it is shown that the algorithm always converges to a desirable solution.
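
For the common-payoff special case, a decentralized team of learning automata can be sketched as below using the classical linear reward-inaction (L_R-I) update; the two-player payoff table and learning rate are illustrative assumptions and may differ from the paper's exact scheme.

```python
import random

def lri_team(n_players, n_actions, payoff, steps=20000, lr=0.05):
    """Each player is a linear reward-inaction automaton that only sees
    its own action and the common random payoff."""
    probs = [[1.0 / n_actions] * n_actions for _ in range(n_players)]
    for _ in range(steps):
        acts = [random.choices(range(n_actions), p)[0] for p in probs]
        r = payoff(acts)            # stochastic common payoff in [0, 1]
        for p, a in zip(probs, acts):
            # L_R-I: shift probability toward the chosen action by lr * r
            for j in range(n_actions):
                p[j] += lr * r * ((1.0 if j == a else 0.0) - p[j])
    return probs

random.seed(0)
success = [[0.2, 0.4], [0.4, 0.8]]   # common payoff table; optimum at (1, 1)
payoff = lambda a: 1.0 if random.random() < success[a[0]][a[1]] else 0.0
print(lri_team(2, 2, payoff))        # both players should concentrate on action 1
```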

316 citations


Journal ArticleDOI
TL;DR: It is shown that if knowledge of the domain is available, it is exploited by the genetic algorithm leading to an even better performance of the fuzzy controller.
Abstract: The successful application of fuzzy reasoning models to fuzzy control systems depends on a number of parameters, such as fuzzy membership functions, that are usually decided upon subjectively. It is shown in this paper that the performance of fuzzy control systems may be improved if the fuzzy reasoning model is supplemented by a genetic-based learning mechanism. The genetic algorithm enables us to generate an optimal set of parameters for the fuzzy reasoning model based either on their initial subjective selection or on a random selection. It is shown that if knowledge of the domain is available, it is exploited by the genetic algorithm leading to an even better performance of the fuzzy controller.

291 citations


Journal ArticleDOI
TL;DR: A general multilevel evaluation process is developed in this paper for dealing with a multiple attribute decision making (MADM) problem with both quantitative and qualitative attributes.
Abstract: Based on an evidential reasoning framework, a general multilevel evaluation process is developed in this paper for dealing with a multiple attribute decision making (MADM) problem with both quantitative and qualitative attributes. In this new process, a qualitative attribute may be evaluated by uncertain subjective judgments through multiple levels of factors and each of the judgments may be assigned by single or multiple experts in any rational way within the evidential reasoning framework. The qualitative attributes can then be quantified by means of general evaluation analysis and evidential reasoning. A few evaluation analysis models and the corresponding evidential reasoning algorithms are explored for parallel combination and hierarchical propagation of factor evaluations. With all the qualitative attributes being quantified by this rational process, the MADM problem represented by an extended decision matrix is then transformed into an ordinary decision matrix, which can be dealt with using a traditional MADM method. This new general evaluation process and the hybrid decision making procedure are demonstrated using a multiple attribute motor cycle evaluation problem with uncertainty.

282 citations


Journal ArticleDOI
TL;DR: Experimental results show that GAMAS consistently outperforms simple genetic algorithms and alleviates the problem of premature convergence.
Abstract: Much research has been done in developing improved genetic algorithms (GA's). Past research has focused on the improvement of operators and parameter settings and indicates that premature convergence is still the preeminent problem in GA's. This paper presents an improved genetic algorithm based on migration and artificial selection (GAMAS). GAMAS is an algorithm whose architecture is specifically designed to confront the causes of premature convergence. Though based on simple genetic algorithms, GAMAS is not concerned with the evolution of a single population, but instead is concerned with macroevolution, or the creation of multiple populations or species, and the derivation of solutions from the combined evolutionary effects of these species. New concepts that are emphasized in this architecture are artificial selection, migration, and recycling. Experimental results show that GAMAS consistently outperforms simple genetic algorithms and alleviates the problem of premature convergence.
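
GAMAS's exact multi-species architecture is specific to the paper; the sketch below shows only the general island-model ingredients it builds on (multiple populations plus migration of good individuals), on a one-max toy problem with assumed operator settings.

```python
import random

def island_ga(fitness, n_bits=20, n_islands=4, pop=20, gens=100, migrate_every=10):
    islands = [[[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
               for _ in range(n_islands)]

    def step(population):
        scored = sorted(population, key=fitness, reverse=True)
        next_pop = scored[:2]                          # elitism
        while len(next_pop) < pop:
            a, b = random.sample(scored[:pop // 2], 2) # truncation selection
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < 0.1:                  # bit-flip mutation
                i = random.randrange(n_bits)
                child[i] ^= 1
            next_pop.append(child)
        return next_pop

    for g in range(gens):
        islands = [step(p) for p in islands]
        if g % migrate_every == 0:
            # ring migration: the best of each island replaces a random
            # member of the next island
            bests = [max(p, key=fitness) for p in islands]
            for i, b in enumerate(bests):
                islands[(i + 1) % n_islands][random.randrange(pop)] = b[:]
    return max((max(p, key=fitness) for p in islands), key=fitness)

random.seed(0)
best = island_ga(fitness=sum)     # one-max: fitness is the number of 1-bits
print(sum(best))                  # should be at or near n_bits
```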

245 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to provide a thorough survey of papers on group technology and cellular manufacturing system design and to state some important design factors that cannot be ignored.
Abstract: A number of survey papers on group technology and cellular manufacturing system design have been published. Many of them focus primarily on clustering techniques that manipulate rows and columns of the part-machine processing indicator matrix to form a block diagonal structure. Since the last survey paper was published, there have been some developments in cellular manufacturing system design. A number of papers that consider practical design constraints while designing cellular manufacturing systems have been published. The purpose of this paper is to provide a thorough survey of papers on group technology and cellular manufacturing system design. Its purpose is also to state some important design factors that cannot be ignored.

202 citations


Journal ArticleDOI
TL;DR: It is concluded that the successful solution of the control problem has implications for biological visuo-motor control because the neural network employed in the control of the SoftArm bears close analogies to a network which successfully models visual brain maps.
Abstract: A neural map algorithm has been employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. The pneumatically driven robot arm (SoftArm) employed in this investigation shares essential mechanical characteristics with skeletal muscle systems. To control the position of the arm, 200 neurons formed a network representing the three-dimensional workspace embedded in a four-dimensional system of coordinates from the two cameras, and learned a three-dimensional set of pressures corresponding to the end effector positions, as well as a set of 3×4 Jacobian matrices for interpolating between these positions. The gripper orientation was achieved through adaptation of a 1×4 Jacobian matrix for a fourth joint. Because of the properties of the rubber-tube actuators of the SoftArm, the position as a function of supplied pressure is nonlinear, nonseparable, and exhibits hysteresis. Nevertheless, through the neural network learning algorithm the position could be controlled to an accuracy of about one pixel (∼3 mm) after 200 learning steps and the orientation could be controlled to two pixels after 800 learning steps. This was achieved through employment of a linear correction algorithm using the Jacobian matrices mentioned above. Application of repeated corrections in each positioning and grasping step leads to a very robust control algorithm since the Jacobians learned by the network have to satisfy the weak requirement that the Jacobian yields a reduction of the distance between gripper and target. The neural network employed in the control of the SoftArm bears close analogies to a network which successfully models visual brain maps. It is concluded, therefore, from this fact and from the close analogy between the SoftArm and natural muscle systems that the successful solution of the control problem has implications for biological visuo-motor control.

172 citations


Journal ArticleDOI
TL;DR: Algorithms for constructing fuzzy rules from input-output training data, which require only a single pass through the training set, are examined to produce a computationally efficient method of learning.
Abstract: Fuzzy inference systems and neural networks both provide mathematical systems for approximating continuous real-valued functions. Historically, fuzzy rule bases have been constructed by knowledge acquisition from experts while the weights on neural nets have been learned from data. This paper examines algorithms for constructing fuzzy rules from input-output training data. The antecedents of the rules are determined by a fuzzy decomposition of the input domains. The decomposition localizes the learning process, restricting the influence of each training example to a single rule. Fuzzy learning proceeds by determining entries in a fuzzy associative memory using the degree to which the training data matches the rule antecedents. After the training set has been processed, similarity to existing rules and interpolation are used to complete the rule base. Unlike the neural network algorithms, fuzzy learning algorithms require only a single pass through the training set. This produces a computationally efficient method of learning. The effectiveness of the fuzzy learning algorithms is compared with that of a feedforward neural network trained with back-propagation.
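
The single-pass construction resembles the well-known Wang-Mendel procedure; here is a sketch along those lines for a one-input function, with triangular fuzzy sets and parameters chosen for illustration (the paper's own decomposition and rule-completion steps are richer).

```python
import numpy as np

def learn_rules(X, y, n_sets=5, lo=0.0, hi=1.0):
    """One pass over the data: each example updates at most one cell of
    the fuzzy associative memory, keeping the best-matching consequent."""
    centers = np.linspace(lo, hi, n_sets)
    width = centers[1] - centers[0]
    rules, weights = {}, {}
    for x, target in zip(X, y):
        mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
        cell = int(np.argmax(mu))        # the single rule this example influences
        if mu[cell] > weights.get(cell, 0.0):
            rules[cell], weights[cell] = target, mu[cell]
    return centers, rules

def infer(x, centers, rules):
    width = centers[1] - centers[0]
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    act = {c: mu[c] for c in rules if mu[c] > 0.0}
    # weighted average of the firing rules' consequents
    return sum(m * rules[c] for c, m in act.items()) / sum(act.values())

X = np.linspace(0, 1, 50)
centers, rules = learn_rules(X, np.sin(np.pi * X))
print(infer(0.37, centers, rules), np.sin(np.pi * 0.37))
```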

Journal ArticleDOI
TL;DR: This paper investigates the relationship between conditional objects obtained as a qualitative counterpart to conditional probabilities, and nonmonotonic reasoning, and proposes a logic of conditional objects that is more elementary and intuitive than the preferential semantics of Lehmann and colleagues and does not require probabilistic semantics.
Abstract: This paper investigates the relationship between conditional objects obtained as a qualitative counterpart to conditional probabilities, and nonmonotonic reasoning. Viewed as an inference rule expressing a contextual belief, the conditional object is shown to possess all properties of a well-behaved nonmonotonic consequence relation when a suitable choice of connectives and deduction operation is made. Using previous results from Adams' conditional probabilistic logic, a logic of conditional objects is proposed. Its axioms and inference rules are those of preferential reasoning logic of Lehmann and colleagues. But the semantics relies on a three-valued truth valuation first suggested by De Finetti. It is more elementary and intuitive than the preferential semantics of Lehmann and colleagues and does not require probabilistic semantics. The analysis of a notion of consistency of a set of conditional objects is studied in the light of such a three-valued semantics and higher level counterparts of deduction theorem, modus ponens, resolution and refutation are suggested. Limitations of this logic are discussed.

Journal ArticleDOI
TL;DR: This paper shows that further extension can be made by considering the interactive fuzzy subtraction and by observing that only the nonnegative part of the fuzzy numbers can have physical interpretation, and the formulas for the latest allowable time and slack for each event are presented.
Abstract: There have been several attempts in the literature to apply fuzzy numbers to the critical path method. The result delivers the earliest expected time for each event of the project. This paper shows that further extension can be made by considering the interactive fuzzy subtraction and by observing that only the nonnegative part of the fuzzy numbers can have physical interpretation. Based on these two observations, the formulas for the latest allowable time and slack for each event are presented. The availability of fuzzy slacks provides enough information, at least for certain α-level of the slack, to identify the critical path in the network model of the project. Thus, practically we can generalize the critical path method by accepting imprecise, fuzzy data for the duration of the activities.
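
A minimal sketch of the forward/backward pass with triangular fuzzy numbers, where "interactive subtraction" is modeled component-wise so spreads do not accumulate; the small event network, the durations, and that reading of the subtraction are illustrative assumptions rather than the paper's exact formulas.

```python
# Triangular fuzzy numbers represented as (low, mode, high)
def f_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def f_sub_interactive(a, b):
    # interactive subtraction: subtract component-wise so spreads do not
    # grow (the non-interactive rule would be (a0-b2, a1-b1, a2-b0))
    return tuple(x - y for x, y in zip(a, b))

def f_max(a, b):
    return tuple(max(x, y) for x, y in zip(a, b))

def f_min(a, b):
    return tuple(min(x, y) for x, y in zip(a, b))

# activities: (from event, to event, fuzzy duration); a 4-event project
acts = [(0, 1, (2, 3, 4)), (0, 2, (1, 2, 3)),
        (1, 3, (4, 5, 6)), (2, 3, (5, 6, 8))]
n = 4
early = [(0, 0, 0)] + [(-1e9,) * 3] * (n - 1)
for i, j, d in acts:                  # forward pass: earliest expected times
    early[j] = f_max(early[j], f_add(early[i], d))
late = [(1e9,) * 3] * (n - 1) + [early[-1]]
for i, j, d in reversed(acts):        # backward pass, interactive subtraction
    late[i] = f_min(late[i], f_sub_interactive(late[j], d))
slack = [f_sub_interactive(l, e) for l, e in zip(late, early)]
print(slack)   # events with (near-)zero slack lie on the critical path
```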

Journal ArticleDOI
TL;DR: A monitoring system is described which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world: a Bayesian network incorporating temporal aspects, called a dynamic belief network, used to reason under uncertainty about both the causes and consequences of the events being monitored.
Abstract: We describe the development of a monitoring system which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world. This model is a Bayesian network incorporating temporal aspects, which we call a dynamic belief network; it is used to reason under uncertainty about both the causes and consequences of the events being monitored. The basic dynamic construction of the network is data-driven. However, the model construction process combines sensor data about events with externally provided information about agents' behavior, and knowledge already contained within the model, to control the size and complexity of the network. This means that both the network structure within a time interval, and the amount of history and detail maintained, can vary over time. We illustrate the system with the example domain of monitoring robot vehicles and people in a restricted dynamic environment using light-beam sensor data. In addition to presenting a generic network structure for monitoring domains, we describe the use of more complex network structures which address two specific monitoring problems, sensor validation and the data association problem.

Journal ArticleDOI
TL;DR: The main results demonstrate that the introduced coherence conditions are necessary and sufficient for the existence of a de Finetti coherent probability, agreeing with the generalized probabilistic assessment.
Abstract: Conditions of coherence are given for generalized assessments of probability on arbitrary sets of conditional events, that is assessments including also imprecise values (probability intervals) and ordinal evaluations (comparative probabilities). Such coherence conditions ensure, like the well known de Finetti coherence condition for numerical probabilities, the possibility of extending generalized assessments of probability and preserving coherence. The main results demonstrate that the introduced coherence conditions are necessary and sufficient for the existence of a de Finetti coherent probability, agreeing with the generalized probabilistic assessment.

Journal ArticleDOI
TL;DR: A local search method with a search space smoothing technique that is capable of smoothing the rugged terrain surface of the search space and has significantly improved the performance of existing heuristic search algorithms.
Abstract: Local search is very efficient for solving combinatorial optimization problems. Due to the rugged terrain surface of the search space, however, it often gets stuck at a locally optimum configuration. In this paper, we give a local search method with a search space smoothing technique. It is capable of smoothing the rugged terrain surface of the search space. Any conventional heuristic search algorithm can be used in conjunction with this smoothing method. In a parameter space, by altering the shape of the objective function, the original problem instance is transformed into a series of gradually simplified problem instances with smoother terrain surfaces. Using an existing local search algorithm, an instance with the simplest terrain structure is solved first, the original problem instance with more complicated terrain structure is solved last, and the solutions of the simplified problem instances are used to guide the search of more complicated ones. A case study of using this technique to solve the traveling salesman problem (TSP) is described. We tested this method with numerous randomly generated TSP instances. We found that it has significantly improved the performance of existing heuristic search algorithms.
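
In outline, a smoothing transform pulls all distances toward their mean, and a schedule of decreasing smoothing exponents hands each solution to the next, less-smoothed instance. The sketch below pairs such a transform with plain 2-opt on a random TSP; the exponent schedule and the exact flattening formula are illustrative assumptions, not the paper's definitions.

```python
import random, math

def smooth(dist, alpha):
    """Flatten the distance matrix toward its mean; alpha = 1 recovers
    the original instance."""
    n = len(dist)
    mean = sum(map(sum, dist)) / (n * n)
    return [[mean + (d - mean) ** alpha if d >= mean
             else mean - (mean - d) ** alpha for d in row] for row in dist]

def tour_len(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Plain local search: keep reversing segments while it helps."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_len(cand, dist) < tour_len(tour, dist):
                    tour, improved = cand, True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour = list(range(30))
for alpha in (6, 4, 2, 1):   # solve smoothed instances, hardest (alpha=1) last
    tour = two_opt(tour, smooth(dist, alpha))
print(tour_len(tour, dist))
```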

Journal ArticleDOI
TL;DR: A simplified mathematical model is introduced which is used to determine the workspace related to the reachability of the wrist and represents the spatial motion of two characteristic points, epicondylus lateralis and proc. styloideus.
Abstract: The paper introduces a simplified mathematical model of the human arm kinematics which is used to determine the workspace related to the reachability of the wrist. The model contains six revolute degrees of freedom, five in the shoulder complex and one in the elbow joint. It is not directly associated with the anatomical structure of the arm, but represents the spatial motion of two characteristic points, epicondylus lateralis and proc. styloideus. Use of this simplified model for the determination of reachable workspace offers several advantages versus direct measurement: (i) the workspace can be obtained in few minutes on a micro VAX II computer, (ii) patients with various injuries in various stages of recovery can be treated since only a few brief and simple measurements of the model's parameters are needed, and (iii) the calculated workspace includes complete information of the envelope, as well as inside characteristics.

Journal ArticleDOI
TL;DR: A new multilayer neural network system to identify computer users is presented, with input vectors made up of the time intervals between successive keystrokes created by users while typing a known sequence of characters.
Abstract: This paper presents a new multilayer neural network system to identify computer users. The input vectors were made up of the time intervals between successive keystrokes created by users while typing a known sequence of characters. Each input vector was classified into one of several classes, thereby identifying the user who typed the character sequence. Three types of networks were discussed: a multilayer feedforward network trained using the backpropagation algorithm, a sum-of-products network trained with a modification of backpropagation, and a new hybrid architecture that combines the two. A maximum classification accuracy of 97.5% was achieved using a neural network based pattern classifier. Such an approach can improve computer access security.
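
A toy version of the feedforward-plus-backpropagation variant: synthetic inter-keystroke timing vectors for two made-up users, and a one-hidden-layer network trained with plain gradient descent. All profiles, network sizes, and rates are invented for illustration and are not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic inter-keystroke intervals (seconds) for a fixed phrase;
# the two rhythm profiles below are invented for illustration
def samples(profile, n):
    return rng.normal(profile, 0.02, size=(n, len(profile)))

X = np.vstack([samples([.12, .20, .15, .18, .22, .11, .16, .19], 80),
               samples([.09, .25, .11, .22, .17, .14, .21, .13], 80)])
y = np.array([0] * 80 + [1] * 80)

# one hidden layer, trained with backpropagation on cross-entropy loss
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0, 0.5, (8, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.5, (6, 1)); b2 = np.zeros(1)
for _ in range(2000):
    h = sig(X @ W1 + b1)                      # forward pass
    out = sig(h @ W2 + b2).ravel()
    dz2 = ((out - y) / len(y))[:, None]       # gradient at output pre-activation
    dh = (dz2 @ W2.T) * h * (1 - h)           # backpropagate through hidden layer
    W2 -= h.T @ dz2;  b2 -= dz2.sum(0)
    W1 -= X.T @ dh;   b1 -= dh.sum(0)

pred = (sig(sig(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```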

Journal ArticleDOI
TL;DR: The focus of this paper is the development of a stand-alone system capable of determining exercise progression and remediation automatically during a training session in a simulator-based trainer, on the basis of the students' past performance.
Abstract: As simulator-based training systems become more complex, the amount of effort required to generate, monitor, and maintain training exercises multiplies greatly. This has significantly increased the burden on the instructors, potentially making the training experience less efficient as well as less effective. Research on intelligent tutoring systems (ITS) has largely addressed this issue by replacing the instructor with a computer model of the appropriate pedagogical concepts and the domain expertise. While this approach is highly desirable, the effort required to develop and maintain an ITS can be quite significant. A more modest as well as practical alternative to an ITS is the development of intelligent computer-based tools that can support the instructors in their tasks. The advantage of this approach is that various tools can be developed to address the different aspects of the instructor's duties. Moreover, without the burden of having to replace the instructor, these tools are more easily developed and fielded in existing trainers. One aspect of an instructor's task is to assess the students' performance after each training exercise and select the next exercise based on their previous performances. It would clearly be advantageous if this exercise selection process were to be automated, thus relieving the instructor of a significant burden and allowing him to concentrate on other tasks. Therefore, the focus of this paper is the development of a stand-alone system capable of determining exercise progression and remediation automatically during a training session in a simulator-based trainer, on the basis of the students' past performance. Instructional heuristics were developed to carry out the exercise progression process. A prototype was developed and applied to gunnery training on the Army M1 main battle tank.

Journal ArticleDOI
TL;DR: A new process for risk assessment and an appropriate reasoning algorithm for choice have been developed to support the human operator in analyzing risks and making decisions in real-time during unexpected disruptions in the operations of large-scale systems.
Abstract: The need for more effective ways to manage the risk and safety of technological systems has been widely recognized and accepted by government and industry. Traditionally, risk analysis has been considered as part of the process of planning a technological system and addressed the risk inherent in its day-to-day operations. However, risk must also be considered when responding to episodic events whose uniqueness requires taking actions that are variants of, or different from, planned operational procedures. The purpose of this paper is to present a new paradigm for real-time risk analysis that capitalizes upon the advances in computer power, human-machine interfaces, and communication technology. A new process for risk assessment and an appropriate reasoning algorithm for choice have been developed to support the human operator in analyzing risks and making decisions in real-time during unexpected disruptions in the operations of large-scale systems. The process recognizes that although advances in technology may automate many tasks, humans will always be an integral part of managing large-scale systems. The practical realism of the new approach of operational risk management is illustrated by two examples, hazardous material transportation and emergency management. The first example is discussed within the context of a prototype decision support system for interactive real-time risk management.

Journal ArticleDOI
TL;DR: It is shown how the four basic operations of a fuzzy logic controller can be parameterized using S-OWA operators, giving a new class of flexible structured fuzzy logic controllers (FS-FLC), and it is suggested that one can improve the structure of fuzzy logic controllers by learning the values of the parameters introduced.
Abstract: We suggest a new approach to the construction of fuzzy logic controllers based upon the selection of systems parameters. We first show that the standard fuzzy logic controllers have four basic operations which determine the nature of their functioning: the aggregation process used to combine individual antecedent firing levels to give a rule firing level, the determination of rule output based on antecedent firing level, the aggregation of individual rule outputs to find the combined output, and the defuzzification process. We next show how we can parameterize these operations using S-OWA operators. These parameterized models give us a new class of flexible structured fuzzy logic controllers (FS-FLC). We look at the structure and performance of these controllers. We then suggest that one can improve the structure of fuzzy logic controllers by learning the values of the parameters introduced.
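
One way to read the parameterization: an S-OWA operator slides between a pure min or max and the plain average, with a single parameter controlling where the aggregation sits. The sketch below uses that convex-combination form, which is an assumption about the family's shape rather than the paper's exact definition.

```python
import numpy as np

def s_owa_and(a, alpha):
    """'And-like' S-OWA: slides between pure min (alpha = 0) and the
    plain average (alpha = 1); usable as an antecedent aggregator."""
    a = np.asarray(a, float)
    return (1 - alpha) * a.min() + alpha * a.mean()

def s_owa_or(a, beta):
    """'Or-like' S-OWA: slides between pure max and the average."""
    a = np.asarray(a, float)
    return (1 - beta) * a.max() + beta * a.mean()

# firing level of a two-antecedent rule under different alpha values
memberships = [0.9, 0.4]
for alpha in (0.0, 0.5, 1.0):
    print(alpha, s_owa_and(memberships, alpha))
```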

Journal ArticleDOI
TL;DR: A new modification of the BAM is made and a new model named asymmetric bidirectional associative memory (ABAM) is proposed, which can cater for the logical asymmetry of interconnections but also is capable of accommodating a larger number of non-orthogonal patterns.
Abstract: Bidirectional associative memory (BAM) is a potentially promising model for heteroassociative memories. However, its applications are severely restricted to networks with logical symmetry of interconnections and pattern orthogonality or small pattern size. Although the restrictions on pattern orthogonality and pattern size can be relaxed to a certain extent, all previous efforts are at the cost of increase in connection complexity. In this paper, a new modification of the BAM is made and a new model named asymmetric bidirectional associative memory (ABAM) is proposed. This model not only can cater for the logical asymmetry of interconnections but also is capable of accommodating a larger number of non-orthogonal patterns. Furthermore, all these properties of the ABAM are achieved without increasing the connection complexity of the network. Theoretical analysis and simulation results all demonstrate that the ABAM indeed outperforms the BAM and its existing variants in all aspects of storage capacity, error-correcting capability and convergence.
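
For reference, the classical symmetric BAM that the ABAM modifies can be sketched in a few lines; the bipolar pattern pairs are toy data, and the ABAM's asymmetric weight construction itself is not reproduced here.

```python
import numpy as np

def bam_train(pairs):
    """Hebbian weight matrix W = sum_k x_k y_k^T over bipolar pairs."""
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall(W, x, steps=10):
    """Bidirectional recall: iterate x -> y -> x until (hopefully) stable."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        y = sign(x @ W)
        x = sign(W @ y)
    return x, y

pairs = [(np.array([1, -1, 1, -1, 1]), np.array([1, 1, -1])),
         (np.array([-1, -1, 1, 1, -1]), np.array([-1, 1, 1]))]
W = bam_train(pairs)
noisy = np.array([1, 1, 1, -1, 1])   # first pattern with one bit flipped
print(bam_recall(W, noisy))          # should recover the first stored pair
```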

Journal ArticleDOI
TL;DR: An algorithm for the detection of dominant points and for building a hierarchical approximation of a digital curve is proposed and is shown to perform well for a wide variety of shapes, including scaled and rotated ones.
Abstract: An algorithm for the detection of dominant points and for building a hierarchical approximation of a digital curve is proposed. The algorithm does not require any parameter tuning and is shown to perform well for a wide variety of shapes, including scaled and rotated ones. Dominant points are first located by a coarse-to-fine detector scheme. They constitute the vertices of a polygon closely approximating the curve. Then, a criterion of perceptual significance is used to repeatedly remove suitable vertices until a stable polygonal configuration, the contour sketch, is reached. A highly compressed hierarchical description of the shape also becomes available.

Journal ArticleDOI
TL;DR: The authors propose a hierarchical approach, using voting methods to build associations through consensus and relational graphs to represent the organization at each level, which is very efficient in terms of time and space and performs impressively for a wide range of organizations.
Abstract: Presents an efficient computational structure for preattentive perceptual organization. By perceptual organization the authors refer to the ability of a vision system to organize features detected in images based on viewpoint consistency and other Gestaltic perceptual phenomena. This usually has two components, a primarily bottom up preattentive part and a top down attentive part, with meaningful features emerging in a synergistic fashion from the original set of (very) primitive features. In this work the authors advance a computational structure for preattentive perceptual organization. The authors propose a hierarchical approach, using voting methods to build associations through consensus and relational graphs to represent the organization at each level. The voting method is very efficient in terms of time and space and performs impressively for a wide range of organizations. The graphical representation allows the ready extraction of higher order features, or perceptual tokens, because the relational information is rendered explicit.

Journal ArticleDOI
TL;DR: A global optimum path planning scheme for redundant space robotic manipulators to be used in such missions and a planar redundant space manipulator consisting of three arms and a base is considered to demonstrate the feasibility of the formulation.
Abstract: Robotic manipulators will play a significant role in the maintenance and repair of space stations and satellites, and other future space missions. Robot path planning and control for the above applications should be optimum, since any inefficiency in the planning may considerably risk the success of the space mission. This paper presents a global optimum path planning scheme for redundant space robotic manipulators to be used in such missions. In this formulation, a variational approach is used to minimize the objective functional. It is assumed that the gravity is zero in space, and the robotic manipulator is mounted on a completely free-flying base (spacecraft) and the attitude control (reaction wheels or thrust jets) is off. Linear and angular momentum conditions for this system lead to a set of mixed holonomic and nonholonomic constraints. These equations are adjoined to the objective functional using a Lagrange multiplier technique. The formulation leads to a system of differential and algebraic equations (DAEs). A numerical scheme for forward integration of this system is presented. A planar redundant space manipulator consisting of three arms and a base is considered to demonstrate the feasibility of the formulation. The approach to optimum path planning of redundant space robots is significant since most robots that have been developed for space applications so far are redundant. The kinematic redundancy of space robots offers efficient control and provides the necessary dexterity for extra-vehicular activity that exceeds human capacity.

Journal ArticleDOI
TL;DR: This paper describes an interactive (decision maker-computer) methodology for multiple response optimization of simulation models based on a multiple criteria optimization technique called the STEP method.
Abstract: Simulation is a popular tool for the design and analysis of manufacturing systems. The popularity of simulation is due to its flexibility, its ability to model systems when analytical methods have failed, and its ability to model the time dynamic behavior of systems. However, in and of itself, simulation is not a design tool; therefore, in order to optimize a simulation model, it often must be used in conjunction with an optimum-seeking method. This paper describes an interactive (decision maker-computer) methodology for multiple response optimization of simulation models. This approach is based on a multiple criteria optimization technique called the STEP method. The proposed methodology is illustrated with an example involving the optimization of a manufacturing system.

Journal ArticleDOI
TL;DR: A combined approach for discrete-time fuzzy model identification is proposed and a recursive identification algorithm based on the prediction-error method is derived for optimally resolving the numerical fuzzy relational equation by minimizing a quadratic performance index.
Abstract: A combined approach for discrete-time fuzzy model identification is proposed. By this approach, the identification is performed in two stages. First, the linguistic approach is utilized to obtain an approximate fuzzy relation from the sampled nonfuzzy input-output data. This approximate fuzzy relation is then used as the initial estimate for the second stage in which a more accurate fuzzy relation is determined by the approach of numerical resolution of fuzzy relational equation. A recursive identification algorithm based on the prediction-error method is derived for optimally resolving the numerical fuzzy relational equation by minimizing a quadratic performance index. This algorithm makes the proposed approach particularly attractive to online applications. Two numerical examples are provided to show the superiority of the combined approach over other methods.

Journal ArticleDOI
TL;DR: The generalized Hough transform and geometric hashing are two contemporary paradigms for model-based object recognition that are put in perspective and differences and similarities are examined.
Abstract: The generalized Hough transform and geometric hashing are two contemporary paradigms for model-based object recognition. Both schemes simultaneously find instances of objects in a scene and determine the location and orientation of these instances. The methods encode the models for the objects in a similar fashion and object recognition is achieved by image features "voting" for object models. For both schemes, the object recognition time is largely independent of the number of objects that are encoded in the object-model database. This paper puts the two schemes in perspective and examines differences and similarities. The authors also study object representation techniques and discuss how object representations are used for object recognition and position estimation.
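
A minimal geometric-hashing sketch for 2-D point features under similarity transformations: offline, every ordered point pair of each model defines a basis and the remaining points are hashed in that frame; online, scene bases vote for (model, basis) entries. The quantization step, models, and scene are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def basis_coords(pts, b0, b1):
    """Express all points in the frame where b0 is the origin and b1
    lies at (1, 0); this is invariant to similarity transformations."""
    v = b1 - b0
    scale = np.hypot(*v)
    c, s = v / scale
    R = np.array([[c, s], [-s, c]])        # rotates v onto the x-axis
    return (pts - b0) @ R.T / scale

def build_table(models, q=0.25):
    """Offline stage: hash every model point in every ordered basis pair."""
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j in permutations(range(len(pts)), 2):
            for u in basis_coords(pts, pts[i], pts[j]):
                table[tuple(np.round(u / q))].append((name, (i, j)))
    return table

def recognize(table, scene, q=0.25):
    """Online stage: scene basis pairs vote for (model, basis) entries."""
    votes = defaultdict(int)
    for i, j in permutations(range(len(scene)), 2):
        for u in basis_coords(scene, scene[i], scene[j]):
            for entry in table.get(tuple(np.round(u / q)), ()):
                votes[entry] += 1
    return max(votes, key=votes.get)

models = {"triangle": np.array([[0., 0.], [1., 0.], [0., 1.]]),
          "square": np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])}
table = build_table(models)
th = 0.7
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
scene = models["square"] @ R.T * 2.0 + np.array([3.0, -1.0])  # moved square
print(recognize(table, scene)[0])      # expected: square
```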

Journal ArticleDOI
Y. Yao
TL;DR: This paper presents a general model based on timed Petri nets, capable of handling both qualitative and quantitative temporal information; graphical representation of a timed Petri net gives a straightforward view of relations between temporal objects.
Abstract: This paper presents a general model based on timed Petri nets, capable of handling both qualitative and quantitative temporal information. Both metric relations between time points and qualitative relations between time intervals can be encoded in a model. This model also allows the representation of higher-order expressions and repeated activities, which constitute a large part of the normal schedule. In addition, graphical representation of a timed Petri net gives a straightforward view of relations between temporal objects.

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed inductive method, implemented in a program called OBSERVER-II, is capable of discovering underlying patterns and explaining the behaviour of certain sequence-generation processes that are not obvious or easily understood.
Abstract: Suppose we are given a sequence of events that are generated probabilistically in the sense that the attributes of one event are dependent, to a certain extent, on those observed before it. This paper presents an inductive method that is capable of detecting the inherent patterns in such a sequence and of making predictions about the attributes of future events. Unlike previous AI-based prediction methods, the proposed method is particularly effective in discovering knowledge in ordered event sequences even if noisy data are being dealt with. The method can be divided into three phases: (i) detection of underlying patterns in an ordered event sequence; (ii) construction of sequence-generation rules based on the detected patterns; and (iii) use of these rules to predict the attributes of future events. The method has been implemented in a program called OBSERVER-II, which has been tested with both simulated and real-life data. Experimental results indicate that it is capable of discovering underlying patterns and explaining the behaviour of certain sequence-generation processes that are not obvious or easily understood. The performance of OBSERVER-II has been compared with that of existing AI-based prediction systems, and it is found to be able to successfully solve prediction problems that programs such as SPARC have failed on.