Showing papers in "International Journal of Software Science and Computational Intelligence in 2011"
TL;DR: The FKRS system is implemented in Java as a core component towards the development of the CLE and other knowledge-based systems in cognitive computing and computational intelligence.
Abstract: It is recognized that the generic form of machine learning is a knowledge acquisition and manipulation process mimicking the brain. Therefore, knowledge representation as a dynamic concept network is central to the design and implementation of the intelligent knowledge base of a Cognitive Learning Engine (CLE). This paper presents a Formal Knowledge Representation System (FKRS) for autonomous concept formation and manipulation based on concept algebra. The Object-Attribute-Relation (OAR) model for knowledge representation is adopted in the design of FKRS. The conceptual model, architectural model, and behavioral models of the FKRS system are formally designed and specified in Real-Time Process Algebra (RTPA). The FKRS system is implemented in Java as a core component towards the development of the CLE and other knowledge-based systems in cognitive computing and computational intelligence.
52 citations
TL;DR: A set of cognitive models for causation analyses and causal inferences that enables machines to mimic complex human reasoning mechanisms in cognitive informatics, cognitive computing, and computational intelligence is presented.
Abstract: Human thought, perception, reasoning, and problem solving are highly dependent on causal inferences. This paper presents a set of cognitive models for causation analyses and causal inferences. The taxonomy and mathematical models of causations are created. The framework and properties of causal inferences are elaborated. Methodologies for uncertain causal inferences are discussed. The theoretical foundation of humor and jokes as false causality is revealed. The formalization of causal inference methodologies enables machines to mimic complex human reasoning mechanisms in cognitive informatics, cognitive computing, and computational intelligence.
51 citations
TL;DR: The results indicated that the model could represent the change in the cognitive mental load based on measurable data, which means that the framework of this paper will be useful for designing user interfaces for next-generation systems that actively employ user situations.
Abstract: This paper explores applying qualitative reasoning to a driver's mental state in real driving situations so as to develop a workload model for intelligent transportation systems. The authors identify the cognitive state that determines whether a driver will be ready to operate a device in car navigation. In order to identify the driver's cognitive state, the authors measure eye movements during car-driving situations. Data can be acquired for the various actions of a car driver, in particular braking, acceleration, and steering angles, from the experimental car. The authors constructed a model of the driver's cognitive mental load using the framework of qualitative reasoning. The response of the model was checked by qualitative simulation. The authors also verified the model using real data collected by driving an actual car. The results indicated that the model could represent the change in cognitive mental load based on measurable data. This means that the framework of this paper will be useful for designing user interfaces for next-generation systems that actively employ user situations.
32 citations
TL;DR: A new approach for automated diagnosis and classification of Magnetic Resonance (MR) human brain images is proposed, which segregates MR brain images into normal and abnormal and employs a genetic algorithm for feature selection, imposing a much lighter computational burden.
Abstract: A new approach for automated diagnosis and classification of Magnetic Resonance (MR) human brain images is proposed. The proposed method uses the Wavelet Transform (WT) as the input module to a Genetic Algorithm (GA) and Support Vector Machine (SVM). It segregates MR brain images into normal and abnormal. This contribution employs a genetic algorithm for feature selection, which imposes a much lighter computational burden in comparison with the Sequential Floating Backward Selection (SFBS) and Sequential Floating Forward Selection (SFFS) methods. A percentage reduction rate of 88.63% is achieved. An excellent classification rate of 100% could be achieved using the support vector machine. The observed results are significantly better than those reported in a previous research work employing the Wavelet Transform and Support Vector Machine.
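The pipeline the abstract outlines (wavelet features, genetic feature selection, SVM classification) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the wavelet family, GA parameters, and the commented data-loading names (X_img, y) are assumptions.

```python
# Hedged sketch: wavelet features -> GA feature selection -> SVM (illustrative only).
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def wavelet_features(image, wavelet="haar", level=3):
    """Flatten the approximation coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()                      # low-frequency sub-band as features

def ga_select(X, y, n_gen=20, pop_size=16, seed=0):
    """Toy genetic algorithm: evolve binary masks over feature columns."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()

    for _ in range(n_gen):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the best half
        children = parents.copy()
        cut = rng.integers(1, n_feat)
        children[:, cut:] = parents[::-1, cut:]                   # one-point crossover
        flip = rng.random(children.shape) < 0.02                  # mutation
        children[flip] ^= 1
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(m) for m in pop])].astype(bool)

# Usage (with hypothetical data): X_img would be an array of MR slices, y the normal/abnormal labels.
# X = np.array([wavelet_features(img) for img in X_img])
# mask = ga_select(X, y)
# clf = SVC(kernel="rbf").fit(X[:, mask], y)
```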
28 citations
TL;DR: Two types of multilateral shared control architecture based on the passive four-channel architecture, which exists in space teleoperation, are put forward, and simulations show that these four architectures can maintain stability in the presence of large time delays.
Abstract: The four-channel architecture in teleoperation with force feedback has been studied in the existing literature. However, most of that work focused on the Lawrence architecture and did not investigate other cases. This paper proposes two other four-channel architectures: a passive four-channel architecture and a passive four-channel architecture with operator force. Furthermore, two types of multilateral shared control architecture based on the passive four-channel architecture, which exists in space teleoperation, are put forward. One is a dual-master multilateral shared control architecture, and the other is a dual-slave multilateral shared control architecture. Simulations show that these four architectures can maintain stability in the presence of large time delays.
23 citations
TL;DR: The quadratic neural unit (QNU) is an important midpoint between linear systems and highly nonlinear neural networks because it is relatively strong in nonlinear approximation, while its optimization and performance have a fast and convex-like nature.
Abstract: The paper discusses the quadratic neural unit (QNU) and highlights its attractiveness for industrial applications such as plant modeling, control, and time series prediction. Linear systems are still often preferred in industrial control applications for their solvable, single-solution nature and for their clarity to most application engineers. Artificial neural networks are powerful cognitive nonlinear tools, but their nonlinear strength is naturally repaid with the local-minima problem, overfitting, and high demands for an application-correct neural architecture and optimization technique that often require skilled users. The QNU is an important midpoint between linear systems and highly nonlinear neural networks because the QNU is relatively strong in nonlinear approximation; however, its optimization and performance have a fast and convex-like nature, and its mathematical structure and the derivation of its learning rules are very comprehensible and efficient to implement. These advantages of the QNU are demonstrated using real and theoretical examples.
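A quadratic neural unit is commonly formulated as a quadratic form over the augmented input vector, trained by gradient descent. The numpy sketch below illustrates that common formulation under those assumptions; the learning rate and toy target are illustrative, not taken from the paper.

```python
# Hedged sketch of a quadratic neural unit (QNU): y = x_a^T W x_a with augmented input x_a = [1, x].
import numpy as np

class QNU:
    def __init__(self, n_inputs, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_inputs + 1, n_inputs + 1))  # covers bias and linear terms
        self.lr = lr

    def forward(self, x):
        xa = np.concatenate(([1.0], x))          # augment with constant 1
        return xa @ self.W @ xa, xa

    def train_step(self, x, target):
        y, xa = self.forward(x)
        err = target - y
        self.W += self.lr * err * np.outer(xa, xa)   # gradient-descent update of the quadratic weights
        return err

# Toy usage: learn y = 2*x0*x1 + x0 from random samples.
rng = np.random.default_rng(1)
qnu = QNU(n_inputs=2)
for _ in range(2000):
    x = rng.uniform(-1, 1, size=2)
    qnu.train_step(x, 2 * x[0] * x[1] + x[0])
```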
20 citations
TL;DR: The authors believe that inconsistency can serve as one of the important learning stimuli toward building perpetual learning agents that incrementally improve their performance over time.
Abstract: One of the long-term research goals in machine learning is how to build never-ending learners. The state of the practice in the field of machine learning thus far is still dominated by the one-time learner paradigm: some learning algorithm is utilized on data sets to produce a certain model or target function, and then the learner is put away and the model or function is put to work. Such a learn-once-apply-next (LOAN) approach may not be adequate in dealing with many real-world problems and is in sharp contrast with humans' lifelong learning process. On the other hand, learning can often be brought on through overcoming some inconsistent circumstances. This paper proposes a framework for perpetual learning agents that are capable of continuously refining or augmenting their knowledge through overcoming inconsistencies encountered during their problem-solving episodes. The never-ending nature of a perpetual learning agent is embodied in the framework as the agent's continuous inconsistency-induced belief revision process. The framework hinges on the agents recognizing inconsistency in data, information, knowledge, or meta-knowledge, identifying the cause of the inconsistency, and revising or augmenting beliefs to explain, resolve, or accommodate the inconsistency. The authors believe that inconsistency can serve as one of the important learning stimuli toward building perpetual learning agents that incrementally improve their performance over time.
18 citations
TL;DR: A strategy of automatic answer retrieval for repeated or similar questions in user-interactive systems by employing semantic question patterns that enhance the semantic representation and greatly reduce the ambiguity of a question instance when asked by a user using such a pattern.
Abstract: A strategy of automatic answer retrieval for repeated or similar questions in user-interactive systems by employing semantic question patterns is proposed in this paper. A semantic question pattern is a generalized representation of a group of questions with both similar structure and relevant semantics. Specifically, it consists of semantic annotations or constraints for the variable components in the pattern, and hence enhances the semantic representation and greatly reduces the ambiguity of a question instance asked by a user using such a pattern. The proposed method consists of four major steps: structure processing, similar pattern matching and filtering, automatic pattern generation, and question similarity evaluation and answer retrieval. Preliminary experiments in a real question answering system show a precision of more than 90% for the method.
18 citations
TL;DR: This paper provides an overview of the relationship between Petri Nets and Discrete Event Systems, as they have been proved to be key factors in the cognitive processes of perception and memorization.
Abstract: This paper provides an overview of the relationship between Petri Nets and Discrete Event Systems, as they have been proved to be key factors in the cognitive processes of perception and memorization. In this sense, different aspects of encoding Petri Nets as Discrete Dynamical Systems are reviewed, which try to advance not only the problem of reachability but also that of describing the periodicity of markings and their similarity. A metric for the case of non-bounded Petri Nets is also provided.
17 citations
TL;DR: The proposed entropy quad-trees technique is used in the initial stage of a crater detection algorithm using digital images taken from the surface of Mars and can be generalized to higher dimensional data.
Abstract: This paper introduces entropy quad-trees, which are structures derived from quad-trees by allowing nodes to split only when they correspond to sufficiently complex sub-domains of a data domain. Complexity is evaluated using an information-theoretic measure based on analysis of the entropy associated with the sets of objects designated by nodes. An alternative measure related to the concept of box-counting dimension is also explored. Experimental results demonstrate the efficiency of entropy quad-trees in mining complex regions. As an application, the proposed technique is used in the initial stage of a crater detection algorithm using digital images taken from the surface of Mars. Additional experimental results are provided that demonstrate the crater detection performance and analyze the effectiveness of entropy quad-trees for detecting high-complexity regions in pixel space with a significant presence of noise. This work focuses on 2-dimensional image domains but can be generalized to higher dimensional data.
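A minimal sketch of the idea, assuming Shannon entropy of an intensity histogram as the complexity measure; the paper's exact measure and its box-counting alternative are not reproduced here, and the threshold and synthetic image are illustrative.

```python
# Hedged sketch of an entropy quad-tree: split a block only if its (Shannon) entropy is high enough.
import numpy as np

def block_entropy(block, bins=16):
    """Shannon entropy of the intensity histogram of a block (illustrative complexity measure)."""
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_quadtree(img, x, y, size, threshold, min_size=4, leaves=None):
    """Recursively collect (x, y, size) leaves; complex blocks are subdivided into four quadrants."""
    if leaves is None:
        leaves = []
    block = img[y:y + size, x:x + size]
    if size <= min_size or block_entropy(block) < threshold:
        leaves.append((x, y, size))              # simple block: keep as a leaf
        return leaves
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        entropy_quadtree(img, x + dx, y + dy, half, threshold, min_size, leaves)
    return leaves

# Usage with a synthetic image: the noisy (complex) region is refined, flat regions stay coarse.
img = np.zeros((256, 256))
img[64:128, 64:128] = np.random.default_rng(0).random((64, 64))
leaves = entropy_quadtree(img, 0, 0, 256, threshold=1.0)
```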
16 citations
TL;DR: This paper examines the development of an intelligent fault recognition and monitoring system, Melvin I, which detects and diagnoses rotating machine conditions according to changes in fault frequency indicators and is a smart tool for both system vibration analysts and industrial machine operators.
Abstract: Monitoring industrial machine health in real time is not only in high demand, it is also complicated and difficult. Possible reasons for this include: (a) access to the machines on site is sometimes impracticable, and (b) the environment in which they operate is usually not human-friendly due to pollution, noise, hazardous wastes, etc. Despite theoretically sound findings on developing intelligent solutions for machine condition-based monitoring, few commercial tools exist in the market that can be readily used. This paper examines the development of an intelligent fault recognition and monitoring system, Melvin I, which detects and diagnoses rotating machine conditions according to changes in fault frequency indicators. The signals and data are remotely collected from designated sections of machines via data acquisition cards. They are processed by a signal processor to extract characteristic vibration signals of ten key performance indicators (KPIs). A 3-layer neural network is designed to recognize and classify faults based on a pre-determined set of KPIs. The system, implemented in the laboratory and applied in the field, can also incorporate new experiences into the knowledge base without overwriting previous training. Results show that Melvin I is a smart tool for both system vibration analysts and industrial machine operators.
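A minimal sketch of the classification step, using scikit-learn's MLPClassifier as a stand-in for the paper's 3-layer network; the KPI data, class count, and hidden-layer size are placeholders, not Melvin I's actual configuration.

```python
# Hedged sketch: classify fault conditions from 10 vibration KPIs with a small feed-forward network.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 10))                     # placeholder: 200 samples of 10 KPI values each
y = rng.integers(0, 3, size=200)              # placeholder: 3 hypothetical fault classes

# One hidden layer gives an input-hidden-output (3-layer) topology; the size 12 is illustrative.
clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
# MLPClassifier also offers partial_fit, one way to fold in new samples without full retraining.
```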
TL;DR: This paper presents an empirical study on the functional complexity of software, known as cognitive complexity, based on large-scale samples using a Software Cognitive Complexity Analysis Tool (SCCAT).
Abstract: Functional complexity is one of the most fundamental properties of software because almost all other software attributes and properties, such as functional size, development effort, costs, quality, and project duration, are highly dependent on it. The functional complexity of software is a macro-scope problem concerning the semantic properties of software and human cognitive complexity towards a given software system, while computational complexity is a micro-scope problem concerning algorithmic analyses towards machine throughput and time/space efficiency. This paper presents an empirical study on the functional complexity of software, known as cognitive complexity, based on large-scale samples using a Software Cognitive Complexity Analysis Tool (SCCAT). Empirical data are obtained with SCCAT on 7,531 programs and five formally specified software systems. The theoretical foundation of software functional complexity is introduced and the metric of software cognitive complexity is formally modeled. The functional complexities of a large-scale software system and the air traffic control system (ATCS) are rigorously analyzed. A novel approach to representing software functional complexities and their distributions in software systems is developed. The nature of the functional complexity of software in software engineering is rigorously explained. The relationship between the symbolic and functional complexities of software is quantitatively analyzed.
TL;DR: A rigorous denotational mathematics, Real-Time Process Algebra (RTPA), is adopted, which allows both architectural and behavioral models of files and FMS to be rigorously designed and implemented in a top-down approach.
Abstract: Files are a typical abstract data type for data objects and software modeling, which provides a standard encapsulation and access interface for manipulating large-volume information and persistent data. File management systems are an indispensable component of operating systems and real-time systems for file manipulations. This paper develops a comprehensive design pattern of files and a File Management System (FMS). A rigorous denotational mathematics, Real-Time Process Algebra (RTPA), is adopted, which allows both the architectural and behavioral models of files and the FMS to be rigorously designed and implemented in a top-down approach. The conceptual model, architectural model, and static/dynamic behavioral models of files and the FMS are systematically presented. This work has been applied in the design and modeling of the real-time operating system RTOS+.
TL;DR: The proposed framework incorporates general principles in value-based software testing and makes it possible to prioritize testing decisions that are rooted in the stakeholder value propositions; it allows for a cost-effective way to fulfill the most valuable testing objectives first and a graceful degradation when the planned testing process has to be shortened.
Abstract: The fundamental objective in value-based software engineering is to integrate consistent stakeholder value propositions into the full extent of software engineering principles and practices so as to increase the value of software assets. In such a value-based setting, artifacts in software development, such as requirement specifications, use cases, test cases, or defects, are not treated as equally important during the development process. Instead, they are differentiated according to how much they contribute, directly or indirectly, to the stakeholder value propositions. The higher the contributions, the more important the artifacts become. In turn, development activities involving more important artifacts should be given higher priority and greater consideration in the development process. In this paper, a value-based framework is proposed for carrying out software evolutionary testing with a focus on test data generation through genetic algorithms. The proposed framework incorporates general principles in value-based software testing and makes it possible to prioritize testing decisions that are rooted in the stakeholder value propositions. It allows for a cost-effective way to fulfill the most valuable testing objectives first and a graceful degradation when the planned testing process has to be shortened.
TL;DR: Experimental results show that, without the time-consuming parameter optimization, the sparse representation based algorithm achieves performance comparable to SVM, and demonstrate that the algorithm is robust to a certain degree of background clutter and intra-class variation with bag-of-visual-words representations.
Abstract: The sparse representation based classification algorithm has been used to solve the problem of human face recognition, but the image database is restricted to human frontal faces with only slight illumination and expression changes. This paper applies the sparse representation based algorithm to the problem of generic image classification, with a certain degree of intra-class variation and background clutter. Experiments are conducted with the sparse representation based algorithm and Support Vector Machine (SVM) classifiers on 25 object categories selected from the Caltech101 dataset. Experimental results show that, without the time-consuming parameter optimization, the sparse representation based algorithm achieves performance comparable to SVM. The experiments also demonstrate that the algorithm is robust to a certain degree of background clutter and intra-class variation with bag-of-visual-words representations. The sparse representation based algorithm can also be applied to generic image classification tasks when the appropriate image feature is used.
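Sparse representation based classification is usually stated as: code the test sample over a dictionary of all training samples and assign the class whose coefficients give the smallest reconstruction residual. The sketch below follows that formulation, approximating the l1 problem with scikit-learn's Lasso (an assumption, not the authors' solver); the bag-of-visual-words data are placeholders.

```python
# Hedged sketch of sparse-representation-based classification (SRC):
# code a test vector over all training vectors, then pick the class with the smallest residual.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x_test, alpha=0.01):
    A = X_train.T                                        # dictionary: columns are training samples
    coef = Lasso(alpha=alpha, max_iter=10000).fit(A, x_test).coef_
    best_cls, best_res = None, np.inf
    for cls in np.unique(y_train):
        c = np.where(y_train == cls, coef, 0.0)          # keep only this class's coefficients
        res = np.linalg.norm(x_test - A @ c)             # class-specific reconstruction residual
        if res < best_res:
            best_cls, best_res = cls, res
    return best_cls

# Usage with placeholder bag-of-visual-words histograms (sizes and names are illustrative).
rng = np.random.default_rng(0)
X_train = rng.random((50, 300))                          # 50 training images, 300-word codebook
y_train = rng.integers(0, 5, size=50)
print(src_predict(X_train, y_train, rng.random(300)))
```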
TL;DR: This paper examines RS Cognition, which consists of many software functions for perceiving various situations, such as events or humans' activities in RS, and develops two perceptual functions, sitting posture recognition and human location estimation, as RS perception tasks.
Abstract: Realization of human-computer symbiosis is an important idea in the context of ubiquitous computing. Symbiotic Computing is a concept that bridges the gap between situations in Real Space (RS) and data in Digital Space (DS). The main purpose is to develop an intelligent software application as well as to establish the next-generation information platform on which to develop the symbiotic system. In this paper, the authors argue that it is necessary to build 'Mutual Cognition' between human and system. Mutual cognition consists of two functions: 'RS Cognition' and 'DS Cognition'. This paper examines RS Cognition, which consists of many software functions for perceiving various situations, such as events or humans' activities in RS. The authors develop two perceptual functions, sitting posture recognition and human location estimation, as RS perception tasks. In the resulting experiments, the developed functions prove quite competent at recognizing a human's activities.
TL;DR: Using the operation flow in generic cabling, the index constraints affecting generic cabling are derived, and the introduced retrospective algorithm makes the ants avoid paths marked "invalid" in the subsequent search process, improving the search performance and convergence speed of the ant colony algorithm.
Abstract: Generic cabling is a key component of multiplex cable wiring. It is one of the basic foundations of intelligent buildings. Using the operation flow in generic cabling, the index constraints affecting generic cabling are derived in this paper. A mathematical model is built based on the ant colony algorithm with multiple constraints, and improvements are made on the original basis to extend the ant colony algorithm from the regular simple ant colony and structure to a multi-ant colony and structure. The equilibrium settlement of multiplex wiring is realized through the introduction of the multi-ant colony model. The ant cycle model is combined to extend the optimization target from the local wiring path to the entire wiring path, and to overcome the drawbacks of the regular ant colony algorithm and other search algorithms that take the local wiring path as the optimization target. The introduced retrospective algorithm makes the ants avoid paths marked "invalid" in the subsequent search process and improves the search performance and convergence speed of the ant colony algorithm.
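The multi-colony construction and the exact retrospective rule are not recoverable from the abstract; the sketch below only illustrates a plain ant system on a small weighted graph in which dead-end edges are marked "invalid" so later ants avoid them, which is the general idea of the retrospective step. The graph and parameters are made up.

```python
# Hedged sketch: basic ant-colony search for a shortest path, with dead ends marked "invalid"
# so later ants avoid them (the spirit of the retrospective step; not the paper's multi-colony model).
import random

graph = {            # illustrative wiring graph: node -> {neighbor: cable length}
    "A": {"B": 2.0, "C": 4.0},
    "B": {"C": 1.0, "D": 7.0},
    "C": {"D": 3.0},
    "D": {},
}

def aco_shortest_path(graph, start, goal, n_ants=20, n_iters=30, rho=0.5, q=1.0, seed=0):
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}      # pheromone per edge
    invalid = set()                                           # edges that led to dead ends
    best_path, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            node, path, length = start, [start], 0.0
            while node != goal:
                choices = [(v, w) for v, w in graph[node].items()
                           if v not in path and (node, v) not in invalid]
                if not choices:                               # dead end: mark the edge we came from
                    if len(path) > 1:
                        invalid.add((path[-2], path[-1]))
                    break
                weights = [tau[(node, v)] / w for v, w in choices]
                v, w = rng.choices(choices, weights=weights)[0]
                path.append(v)
                length += w
                node = v
            if node == goal:
                if length < best_len:
                    best_path, best_len = path, length
                for u, v in zip(path, path[1:]):              # deposit pheromone along the tour
                    tau[(u, v)] += q / length
        for e in tau:                                         # evaporation
            tau[e] *= (1 - rho)
    return best_path, best_len

print(aco_shortest_path(graph, "A", "D"))                     # e.g. (['A', 'B', 'C', 'D'], 6.0)
```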
TL;DR: The author provides a formal definition of the locality of inconsistency and describes how to identify clusters of inconsistent circumstances in a knowledge base, paving the way for a disciplined approach to managing knowledge inconsistency.
Abstract: Inconsistency is commonplace in the real world, in long-term memory, and in knowledge-based systems. Managing inconsistency is considered a hallmark of the plasticity of human intelligence. Belief revision is an important mental process that underpins human intelligence. To facilitate belief revision, it is necessary to know the localities and contexts of inconsistency and how different types of inconsistency are clustered. In this paper, the author provides a formal definition of the locality of inconsistency and describes how to identify clusters of inconsistent circumstances in a knowledge base. The results pave the way for a disciplined approach to managing knowledge inconsistency.
TL;DR: This paper develops a comprehensive design pattern of formal lists using a doubly-linked-circular (DLC) list architecture and adopts a rigorous denotational mathematics, Real-Time Process Algebra (RTPA), which allows both architectural and behavioral models of lists to be rigorously designed and implemented in a top-down approach.
Abstract: Abstract Data Types (ADTs) are a set of highly generic and rigorously modeled data structures in type theory. Lists, as a finite sequence of elements, are one of the most fundamental and widely used ADTs in system modeling, which provide a standard encapsulation and access interface for manipulating large-volume information and persistent data. This paper develops a comprehensive design pattern of formal lists using a doubly-linked-circular (DLC) list architecture. A rigorous denotational mathematics, Real-Time Process Algebra (RTPA), is adopted, which allows both architectural and behavioral models of lists to be rigorously designed and implemented in a top-down approach. The architectural models of DLC-Lists are created using RTPA architectural modeling methodologies known as the Unified Data Models (UDMs). The behavioral models of DLC-Lists are specified and refined by a set of Unified Process Models (UPMs) in three categories, namely management operations, traversal operations, and node I/O operations. This work has been applied in a number of real-time and nonreal-time system designs such as the real-time operating system RTOS+, a file management system (FMS), and the ADT library for an RTPA-based automatic code generation tool.
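A compact sketch of the DLC structure itself (illustrative Python, not the RTPA UDM/UPM models), touching the three operation categories the abstract names: management (create), node I/O (append), and traversal.

```python
# Hedged sketch of a doubly-linked circular (DLC) list: every node links forward and backward,
# and the last node links back to the head, so traversal can start anywhere.
class Node:
    def __init__(self, value):
        self.value, self.prev, self.next = value, self, self     # a lone node points at itself

class DLCList:
    def __init__(self):
        self.head = None                      # management: create an empty list

    def append(self, value):                  # node I/O: insert at the tail
        node = Node(value)
        if self.head is None:
            self.head = node
            return
        tail = self.head.prev
        tail.next, node.prev = node, tail
        node.next, self.head.prev = self.head, node

    def traverse(self):                       # traversal: yield each value once, wrapping at the tail
        node = self.head
        while node is not None:
            yield node.value
            node = node.next
            if node is self.head:
                break

lst = DLCList()
for v in (1, 2, 3):
    lst.append(v)
print(list(lst.traverse()))                   # [1, 2, 3]
```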
TL;DR: The discrete wavelet transform was used to decompose the average power of the C3 and C4 electrodes during left-right hand imagery movement over certain periods of time, and the results showed that the false classification rate of the Support Vector Machine was lower, yielding ideal classification results.
Abstract: Accurate classification of EEG left and right hand motor imagery is an important issue in brain-computer interfaces. First, the discrete wavelet transform was used to decompose the average power of the C3 and C4 electrodes during left-right hand imagery movement over certain periods of time. The reconstructed signal of the approximation coefficient A6 on the sixth level was selected to build a feature signal. Second, the performances of Fisher Linear Discriminant Analysis, with two different threshold calculation methods, and of Support Vector Machine methods were compared. The final classification results showed that the false classification rate of the Support Vector Machine was lower, yielding ideal classification results.
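A minimal sketch of the described feature extraction and classification, assuming a Daubechies wavelet, synthetic C3/C4 trials, and the average power of the reconstructed A6 band as the feature; none of these choices are confirmed by the abstract beyond the level-6 approximation and the SVM.

```python
# Hedged sketch: level-6 DWT of C3/C4 signals, A6 reconstruction as the feature, SVM as the classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def a6_power(signal, wavelet="db4"):
    """Reconstruct the level-6 approximation (A6) and return its average power."""
    coeffs = pywt.wavedec(signal, wavelet, level=6)
    keep = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]   # zero out all detail bands
    a6 = pywt.waverec(keep, wavelet)
    return float(np.mean(a6 ** 2))

# Placeholder trials: each trial is (C3 samples, C4 samples); labels 0 = left, 1 = right imagery.
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 2, 1024))
labels = rng.integers(0, 2, size=40)

X = np.array([[a6_power(t[0]), a6_power(t[1])] for t in trials])  # 2 features: C3 power, C4 power
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```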
TL;DR: A semantic network-based association analysis model including three spreading activation methods is presented; it is applied to assess the quality of a dataset and to generate semantically valid new hypotheses for adaptive study design, which is especially useful in medical studies.
Abstract: Association mining aims to find valid correlations among data attributes and has been widely applied to many areas of data analysis. This paper presents a semantic network-based association analysis model including three spreading activation methods. It applies this model to assess the quality of a dataset and to generate semantically valid new hypotheses for adaptive study design, which is especially useful in medical studies. The approach is evaluated on a real public health dataset, the Heartfelt study, and the experiment shows promising results.
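The paper's three spreading activation methods are not specified in the abstract; the sketch below shows only the generic spreading-activation step over a weighted semantic network, with made-up nodes, weights, decay factor, and firing threshold.

```python
# Hedged sketch of spreading activation over a semantic network (weighted adjacency dict).
def spread(network, seeds, decay=0.5, threshold=0.1, max_iters=3):
    """Propagate activation from seed nodes; a node fires once when its activation passes the threshold."""
    activation = dict(seeds)                     # node -> initial activation
    fired = set()
    for _ in range(max_iters):
        pulses = {}
        for node, a in activation.items():
            if a >= threshold and node not in fired:
                fired.add(node)
                for neigh, w in network.get(node, {}).items():
                    pulses[neigh] = pulses.get(neigh, 0.0) + a * w * decay
        if not pulses:
            break
        for node, p in pulses.items():
            activation[node] = activation.get(node, 0.0) + p
    return activation

# Illustrative network of study attributes (edge weights stand for association strengths).
net = {
    "obesity": {"blood_pressure": 0.8, "diet": 0.6},
    "diet": {"exercise": 0.5},
    "blood_pressure": {"heart_rate": 0.7},
}
print(spread(net, {"obesity": 1.0}))
```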
TL;DR: This paper studies the fundamental properties of the Universal Array (UA) and presents a comprehensive design pattern that has been applied in a number of real-time and nonreal-time systems such as compilers, a file management system, the real-time operating system RTOS+, and the ADT library for an RTPA-based automatic code generation tool.
Abstract: Arrays are one of the most fundamental and widely applied data structures, which are useful for modeling both the logical designs and the physical implementations of multi-dimensional data objects sharing the same type of homogeneous elements. However, there is a lack of a formal model of the universal array based on which any array instance can be derived. This paper studies the fundamental properties of the Universal Array (UA) and presents a comprehensive design pattern. A denotational mathematics, Real-Time Process Algebra (RTPA), allows both architectural and behavioral models of the UA to be rigorously designed and refined in a top-down approach. The conceptual model of the UA is rigorously described by tuple- and matrix-based mathematical models. The architectural models of the UA are created using RTPA architectural modeling methodologies known as the Unified Data Models (UDMs). The physical model of the UA is implemented using a linear list indexed by an offset pointer of elements. The behavioral models of the UA are specified and refined by a set of Unified Process Models (UPMs). As a case study, the formal UA models are implemented in Java. This work has been applied in a number of real-time and nonreal-time systems such as compilers, a file management system, the real-time operating system RTOS+, and the ADT library for an RTPA-based automatic code generation tool.
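The physical model described (a linear list indexed by an offset pointer) corresponds to the familiar row-major offset computation. A small sketch with illustrative dimensions follows; the paper's Java implementation and RTPA models are not reproduced.

```python
# Hedged sketch: an n-dimensional array stored in a flat (linear) list, addressed by a row-major offset.
class UniversalArray:
    def __init__(self, dims, fill=0):
        self.dims = tuple(dims)
        self.data = [fill] * self._size()         # physical model: one contiguous linear list

    def _size(self):
        n = 1
        for d in self.dims:
            n *= d
        return n

    def _offset(self, index):
        """Row-major offset: offset = ((i0 * d1 + i1) * d2 + i2) * ..."""
        off = 0
        for i, d in zip(index, self.dims):
            if not 0 <= i < d:
                raise IndexError(index)
            off = off * d + i
        return off

    def get(self, index):
        return self.data[self._offset(index)]

    def put(self, index, value):
        self.data[self._offset(index)] = value

ua = UniversalArray((3, 4, 5))
ua.put((2, 1, 3), 42)
print(ua.get((2, 1, 3)), ua._offset((2, 1, 3)))   # 42 48  (= (2*4 + 1)*5 + 3)
```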
TL;DR: The authors have used a new distance measure to compare the promoter sequences of citrate synthase of different mammals and reveal that there exists more similarity between organisms in the same chromosome.
Abstract: Understanding how the regulation of gene networks is orchestrated is an important challenge for characterizing complex biological processes. The DNA sequences that comprise promoters do not provide much direct information about regulation. A substantial part of the regulation results from the interaction of transcription factors (TFs) with specific cis-regulatory DNA sequences. These regulatory sequences are organized in a modular fashion, with each module (enhancer) containing one or more binding sites for a specific combination of TFs. In the present work, the authors propose to investigate the inter-motif distance between the important motifs in the promoter sequences of citrate synthase of different mammals. The authors have used a new distance measure to compare the promoter sequences. Results reveal that there exists more similarity between organisms in the same chromosome.
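The abstract does not define the new distance measure, so the sketch below only illustrates the general idea of locating motifs in promoter sequences and comparing inter-motif distance profiles; the motif set, example sequences, and the Euclidean comparison are all assumptions, not the authors' method.

```python
# Hedged sketch: locate motif occurrences in promoter sequences and compare inter-motif distance profiles.
# The motifs, sequences, and the Euclidean comparison below are illustrative, not the paper's measure.
import re
import math

def motif_positions(seq, motifs):
    """First occurrence of each motif (or -1 if absent)."""
    return [m.start() if (m := re.search(p, seq)) else -1 for p in motifs]

def inter_motif_distances(seq, motifs):
    pos = sorted(p for p in motif_positions(seq, motifs) if p >= 0)
    return [b - a for a, b in zip(pos, pos[1:])]              # gaps between consecutive motifs

def profile_distance(d1, d2):
    """Euclidean distance between two (equal-length) inter-motif distance profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

motifs = ["TATA", "CAAT", "GC"]                               # hypothetical motif set
human = "GGCAATCCTATAAAGGGCGCGT"                              # made-up promoter fragments
mouse = "GCAATCCCTATAAAGCGCGTTT"
print(profile_distance(inter_motif_distances(human, motifs),
                       inter_motif_distances(mouse, motifs)))
```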
TL;DR: In QsaBC, a Search-space-zoomed factor and an Attractor are introduced according to the dynamic behavior and stability of particles, which not only reduce subjective interference and reinforce the capability of global search, but also enhance the power of local search and of escaping from an inferior local optimum.
Abstract: To control particles so that they fly inside the limited search space, and to deal with the problems of slow search speed and premature convergence of the particle swarm optimization algorithm, this paper applies the theory of topology and proposes a quotient space-based boundary condition, named QsaBC, using the properties of quotient spaces and homeomorphism. In QsaBC, a Search-space-zoomed factor and an Attractor are introduced according to the dynamic behavior and stability of particles, which not only reduce subjective interference and reinforce the capability of global search, but also enhance the power of local search and of escaping from an inferior local optimum. Four CEC'2008 benchmark functions are selected to evaluate the performance of QsaBC. Comparative experiments show that QsaBC can achieve a satisfactory optimization solution with fast convergence speed. Furthermore, QsaBC is more effective with errant particles, and has easier calculation and better robustness than other methods.
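The quotient-space construction itself cannot be reconstructed from the abstract; the sketch below only shows where a boundary-condition handler plugs into a standard PSO loop, using a simple clamp-and-damp rule as a stand-in for QsaBC. The objective function, bounds, and parameters are illustrative.

```python
# Hedged sketch of PSO with a pluggable boundary-condition handler (clamp-and-damp stand-in for QsaBC).
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def pso(f, dim=10, bounds=(-100.0, 100.0), n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()

    def boundary(x, v):
        """Boundary condition: here errant particles are clamped and their velocity damped.
        QsaBC would instead map them back via the quotient-space construction (not shown)."""
        out = (x < lo) | (x > hi)
        v[out] *= -0.5
        return np.clip(x, lo, hi), v

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x, v = boundary(x + v, v)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

print(pso(sphere)[1])          # best sphere-function value found
```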
TL;DR: A binary tree (B-Tree), a non-linear hierarchical structure of linked nodes, is a typical balanced tree where the fan-out of each node is at most t... as mentioned in this paper.
Abstract: Trees are one of the most fundamental and widely used non-linear hierarchical structures of linked nodes. A binary tree (B-Tree) is a typical balanced tree where the fan-out of each node is at most t...
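Independently of the truncated abstract, a binary tree is a structure of linked nodes with at most two children per node. A minimal Python sketch of insertion and in-order traversal follows (illustrative only, not the paper's RTPA design model).

```python
# Hedged sketch: a binary (search) tree as linked nodes, each with at most two children.
class BTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert a key, keeping the binary-search-tree ordering."""
    if root is None:
        return BTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    """Left subtree, node, right subtree: yields keys in sorted order."""
    if root is not None:
        yield from inorder(root.left)
        yield root.key
        yield from inorder(root.right)

root = None
for k in (5, 2, 8, 1, 3):
    root = insert(root, k)
print(list(inorder(root)))     # [1, 2, 3, 5, 8]
```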