
Showing papers in "Journal of Computer Science and Technology in 2000"


Journal Article
TL;DR: Multiagent Systems is the title of a collection of papers dedicated to surveying specific themes of Multiagent Systems (MAS) and Distributed Artificial Intelligence (DAI).
Abstract: Multiagent Systems is the title of a collection of papers dedicated to surveying specific themes of Multiagent Systems (MAS) and Distributed Artificial Intelligence (DAI), all of them authored by leading researchers of this dynamic, multidisciplinary field.

635 citations


Journal Article
TL;DR: This book makes a clear presentation of the traditional topics included in a course of undergraduate parallel programming, and can be used almost directly to teach basic parallel programming.
Abstract: This book makes a clear presentation of the traditional topics included in a course of undergraduate parallel programming. As explained by the authors, it was developed from their own experience in classrooms, introducing their students to parallel programming. It can be used almost directly to teach basic parallel programming.

249 citations


Journal ArticleDOI
Zhou Aoying1, Zhou Shuigeng1, Cao Jing1, Fan Ye1, Hu Yunfa1 
TL;DR: Several approaches are proposed to scale the DBSCAN algorithm to large spatial databases, and experimental results are given to demonstrate the effectiveness and efficiency of these algorithms.
Abstract: The huge amount of information stored in databases owned by corporations (e.g., retail, financial, telecom) has spurred a tremendous interest in the area of knowledge discovery and data mining. Clustering, in data mining, is a useful technique for discovering interesting data distributions and patterns in the underlying data, and has many application fields, such as statistical data analysis, pattern recognition, image processing, and other business applications. Although researchers have been working on clustering algorithms for decades, and a lot of algorithms for clustering have been developed, there is still no efficient algorithm for clustering very large databases and high dimensional data. As an outstanding representative of clustering algorithms, the DBSCAN algorithm shows good performance in spatial data clustering. However, for large spatial databases, DBSCAN requires a large volume of memory and can incur substantial I/O costs because it operates directly on the entire database. In this paper, several approaches are proposed to scale the DBSCAN algorithm to large spatial databases. To begin with, a fast DBSCAN algorithm is developed, which considerably speeds up the original DBSCAN algorithm. Then a sampling-based DBSCAN algorithm, a partitioning-based DBSCAN algorithm, and a parallel DBSCAN algorithm are introduced in turn. Following that, based on the above-proposed algorithms, a synthetic algorithm is also given. Finally, some experimental results are given to demonstrate the effectiveness and efficiency of these algorithms.
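For orientation, the sketch below (Python, not from the paper) shows the baseline DBSCAN procedure that the proposed fast, sampling-based, partitioning-based and parallel variants all try to scale; the brute-force region query and the example parameters are illustrative assumptions.

```python
import math

def region_query(points, i, eps):
    # Brute-force eps-neighborhood scan; the paper's fast, sampling-based,
    # partitioning-based and parallel variants all aim to cut the cost of this step.
    return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

def dbscan(points, eps, min_pts):
    NOISE = -1
    labels = {}                      # point index -> cluster id or NOISE
    cluster = 0
    for i in range(len(points)):
        if i in labels:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = NOISE        # may later be relabelled as a border point
            continue
        labels[i] = cluster
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if labels.get(j) == NOISE:
                labels[j] = cluster  # border point of the current cluster
            if j in labels:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:   # j is also a core point: keep expanding
                seeds.extend(j_neighbors)
        cluster += 1
    return labels

# Toy example: two dense groups and one outlier.
print(dbscan([(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)], eps=2.0, min_pts=2))
```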

83 citations


Journal ArticleDOI
TL;DR: The problem of discovering association rules between items in a large database of sales transactions is discussed, and a novel algorithm, BitMatrix, is proposed, which outperforms the known ones for large databases.
Abstract: In this paper, the problem of discovering association rules between items in a large database of sales transactions is discussed, and a novel algorithm, BitMatrix, is proposed. The proposed algorithm is fundamentally different from the known algorithms Apriori and AprioriTid. Empirical evaluation shows that the algorithm outperforms the known ones for large databases. Scale-up experiments show that the algorithm scales linearly with the number of transactions.
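The abstract does not spell out the BitMatrix data structure, so the following Python code is only a generic illustration of the underlying idea of counting itemset support with bit vectors and bitwise AND instead of re-scanning the transaction table; the level-wise candidate generation shown is Apriori-style and not necessarily the authors' procedure.

```python
from itertools import combinations

def bit_columns(transactions, items):
    # One bit column per item: bit t is set iff transaction t contains the item.
    cols = {it: 0 for it in items}
    for t, basket in enumerate(transactions):
        for it in basket:
            cols[it] |= 1 << t
    return cols

def frequent_itemsets(transactions, min_support):
    items = sorted({it for basket in transactions for it in basket})
    cols = bit_columns(transactions, items)
    n = len(transactions)
    support = lambda bits: bin(bits).count("1") / n
    # Support of a candidate itemset is a bitwise AND plus a popcount.
    current = {(it,): cols[it] for it in items if support(cols[it]) >= min_support}
    result = dict(current)
    while current:
        next_level = {}
        for a, b in combinations(sorted(current), 2):
            if a[:-1] == b[:-1]:                       # Apriori-style join step
                bits = current[a] & cols[b[-1]]
                if support(bits) >= min_support:
                    next_level[a + (b[-1],)] = bits
        result.update(next_level)
        current = next_level
    return {iset: support(bits) for iset, bits in result.items()}

tx = [{"bread", "milk"}, {"bread", "beer"}, {"bread", "milk", "beer"}, {"milk"}]
print(frequent_itemsets(tx, min_support=0.5))
```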

34 citations


Journal Article
TL;DR: A new algorithm for generating antibody strings is presented that allows finding in advance the number of strings which cannot be detected by an "ideal" receptor repertoire.
Abstract: The paper provides a brief introduction to a relatively new discipline: artificial immune systems (AIS). These are computer systems exploiting the natural immune system (or NIS for brevity) metaphor: protect an organism against invaders. Hence, a natural field of application of AIS is computer security. But the notion of invader can be extended further: for instance, a fault occurring in a system disturbs the patterns of its regular functioning. Thus fault or anomaly detection is another field of application. It is convenient to represent the information about normal and abnormal functioning of a system in binary form (e.g., computer programs/viruses are binary files). Now the problem can be stated as follows: given a set of self patterns representing the normal behaviour of a system under consideration, find a set of detectors (i.e., antibodies, or more precisely, receptors) identifying all non-self strings corresponding to abnormal states of the system. A new algorithm for generating antibody strings is presented. Its interesting property is that it allows finding in advance the number of strings which cannot be detected by an "ideal" receptor repertoire.
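The paper's own generating algorithm is not described in this summary; as background, here is a minimal negative-selection sketch in Python with r-contiguous-bit matching (a common choice in the AIS literature), in which random candidate receptors are kept only if they match no self string. All parameters are illustrative.

```python
import random

def r_contiguous_match(a, b, r):
    # Detector a matches string b iff they agree on at least r contiguous positions.
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length, r, max_tries=100_000):
    # Negative selection: discard any candidate that matches a self (normal) pattern.
    detectors, tries = [], 0
    while len(detectors) < n_detectors and tries < max_tries:
        tries += 1
        cand = "".join(random.choice("01") for _ in range(length))
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(detectors, string, r):
    return any(r_contiguous_match(d, string, r) for d in detectors)

self_set = {"00000000", "00000011", "00001100"}
detectors = generate_detectors(self_set, n_detectors=10, length=8, r=6)
print(is_anomalous(detectors, "11110000", r=6))
```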

29 citations


Journal Article
TL;DR: A new model for non-stationary problems and a classification of these problems by the type of changes is proposed, and the evolutionary algorithm is extended by two mechanisms dedicated to non-stationary optimization: redundant genetic memory structures and a diversity maintenance technique, the random immigrants mechanism.
Abstract: As most real-world problems are dynamic, it is not sufficient to "solve" the problem for the current scenario; it is also necessary to modify the current solution due to various changes in the environment (e.g., machine breakdowns, sickness of employees, etc.). Thus it is important to investigate properties of adaptive algorithms which do not require a re-start every time a change is recorded. In this paper such non-stationary problems (i.e., problems which change in time) are considered. We describe different types of changes in the environment. A new model for non-stationary problems and a classification of these problems by the type of changes is proposed. We apply evolutionary algorithms to non-stationary problems. We extend the evolutionary algorithm by two mechanisms dedicated to non-stationary optimization: redundant genetic memory structures and a diversity maintenance technique, the random immigrants mechanism. We report on experiments with evolutionary optimization employing the two mechanisms (separately and together); the results of the experiments are discussed and some observations are made.
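As a concrete illustration of the diversity-maintenance side, the Python sketch below shows the random immigrants mechanism inside a plain generational GA: each generation a fixed fraction of the population is replaced by fresh random individuals so the search can track a changing optimum. The redundant genetic memory structures from the paper are not modelled here, and all parameters are assumptions.

```python
import random

def evolve_nonstationary(fitness, genome_len=20, pop_size=50,
                         generations=200, immigrant_rate=0.2, p_mut=0.02):
    rand_ind = lambda: [random.randint(0, 1) for _ in range(genome_len)]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        # fitness may change over time in a non-stationary problem;
        # no re-start is performed when that happens.
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                               # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = elite + children
        # Random immigrants: overwrite a fraction of the population with
        # new random individuals to keep diversity high.
        for i in random.sample(range(pop_size), int(immigrant_rate * pop_size)):
            pop[i] = rand_ind()
    return max(pop, key=fitness)

print(evolve_nonstationary(fitness=sum))   # OneMax as a stand-in objective
```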

26 citations


Journal ArticleDOI
Sun Wei1
TL;DR: The current computer-aided technologies in design and product development, the evolution of CAD modeling, and a framework of multi-volume CAD modeling system for heterogeneous object design and fabrication are presented in this paper.
Abstract: The current computer-aided technologies in design and product development, the evolution of CAD modeling, and a framework of multi-volume CAD modeling system for heterogeneous object design and fabrication are presented in this paper. The multi-volume CAD modeling system is presented based on nonmanifold topological elements. Material identifications are defined as design attributes introduced along with geometric and topological information at the design stage. Extended Euler operation and reasoning Boolean operations for merging and extraction are executed according to the associated material identifications in the developed multi-volume modeling system for heterogeneous object. An application example and a pseudo-processing algorithm for prototyping of heterogeneous structure through solid free-form fabrication are also described.

24 citations


Journal Article
TL;DR: This book is an applied introduction to the problems and solutions of modern computer vision and offers a collection of selected, well-tested methods (theory and algorithms), aiming to balance difficulty and applicability.
Abstract: This book is an applied introduction to the problems and solutions of modern computer vision. It offers a collection of selected, well-tested methods (theory and algorithms), aiming to balance difficulty and applicability. It can be considered a starting point to understand and investigate the literature of computer vision, including conferences, journals, and Internet sites.

24 citations


Journal ArticleDOI
TL;DR: A new notion of bisimulation is introduced in which the manipulation of maximally consistent conditions is replaced with a systematic employment of schematic names; it is shown to capture symbolic bisimulation in a precise sense, and an efficient algorithm is presented to check bisimulations for finite-control π-calculus.
Abstract: Symbolic bisimulation avoids the infinite branching problem caused by instantiating input names with all names in the standard definition of bisimulation in π-calculus. However, it does not automatically lead to an efficient algorithm, because symbolic bisimulation is indexed by conditions on names, and directly manipulating such conditions can be computationally costly. In this paper a new notion of bisimulation is introduced, in which the manipulation of maximally consistent conditions is replaced with a systematic employment of schematic names. It is shown that the new notion captures symbolic bisimulation in a precise sense. Based on the new definition an efficient algorithm, which instantiates input names “on-the-fly”, is presented to check bisimulations for finite-control π-calculus.

17 citations


Journal ArticleDOI
TL;DR: Bounds on the average-case number of stacks (queues) required for sorting by sequential or parallel Queuesort or Stacksort are proved, demonstrating the incompressibility method.
Abstract: Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms. We have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting by sequential or parallel Queuesort or Stacksort.

12 citations


Journal ArticleDOI
TL;DR: A Virtual Organization Integrated Support Environment (VOISE) is presented, which provides the computer-aided support for rapidly building an optimal AVE.
Abstract: With the trend of worldwide market competition, global agile manufacturing will become an advanced manufacturing technology of the 21st century, and the Agile Virtual Enterprise (AVE) will become a new organizational form of manufacturing enterprises. As an AVE is a complicated enterprise, how to build an optimal AVE organization is a difficult problem. In this paper, based on an analysis of the AVE organization, a methodology for AVE is proposed, including enterprise architecture, a reference model, enterprise modeling methods and a toolkit, and guidelines for system implementation. This paper also presents a Virtual Organization Integrated Support Environment (VOISE), which provides computer-aided support for rapidly building an optimal AVE.

Journal ArticleDOI
TL;DR: The numerical results of solving a non-linear heat transfer equation show that the optimum implementation of multi-grid parallel algorithm with virtual boundary forecast method is much better than the non-optimum one.
Abstract: In this paper, an optimum tactic of the multi-grid parallel algorithm with the virtual boundary forecast method is discussed, and a two-stage implementation is presented. The numerical results of solving a non-linear heat transfer equation show that the optimum implementation is much better than the non-optimum one.

Journal Article
TL;DR: This work proposes new sources of parallelism that can be implicitly exploited in the defeasible argumentation formalism implemented through DLP, and shows how the argumentation process and the dialectical analysis benefit from exploiting those sources of parallelism.
Abstract: Implicitly exploitable parallelism for Logic Programming has received ample attention. Defeasible Argumentation is especially apt for this optimizing technique. Defeasible Logic Programming (DLP), which is based on a defeasible argumentation formalism, could take full advantage of this type of parallel evaluation to improve the computational response of its proof procedure. In a defeasible argumentation formalism, a conclusion q is accepted only when the argument A that supports q becomes a justification. To decide if an argument A is a justification, a dialectical analysis is performed. This analysis considers arguments and counter-arguments. DLP extends conventional Logic Programming, capturing common sense reasoning features and providing a knowledge representation language for defeasible argumentation. Since DLP is an extension of Logic Programming, the different types of parallelism studied for Logic Programming could be applied. We propose new sources of parallelism that can be implicitly exploited in the defeasible argumentation formalism implemented through DLP. Both the argumentation process and the dialectical analysis benefit from exploiting those sources of parallelism.

Journal ArticleDOI
TL;DR: A text-based transformation method for Chinese-Chinese sign language machine translation is proposed, gesture and facial expression models are created, and a practical system is implemented.
Abstract: Sign language is a visual-gestural language mainly used by hearing-impaired people to communicate with each other. Gesture and facial expression are important grammar parts of sign language. In this paper, a text-based transformation method of Chinese-Chinese sign language machine translation is proposed. Gesture and facial expression models are created. And a practical system is implemented. The input of the system is Chinese text. The output of the system is “graphics person” who can gesticulate Chinese sign language accompanied by facial expression that corresponds to the Chinese text entered so as to realize automatic translation from Chinese text to Chinese sign language.

Journal ArticleDOI
Xiang Fei1, Xiaoyan He1, Junzhou Luo1, Jieyi Wu1, Guanqun Gu1 
TL;DR: The fuzzy neural network developed in this paper solves the traffic prediction and preventive congestion control problems satisfactorily, and simulations compare the no-feedback, reactive, and neural-network-based control schemes.
Abstract: Congestion control is one of the key problems in high-speed networks, such as ATM. In this paper, a traffic prediction and preventive congestion control scheme is proposed using a neural network approach. A traditional predictor using a BP neural network suffers from long convergence time and unsatisfactory error. The fuzzy neural network developed in this paper solves these problems satisfactorily. Simulations compare the no-feedback control scheme, the reactive control scheme, and the neural-network-based control scheme.

Journal ArticleDOI
TL;DR: A novel word-segmentation algorithm is presented to delimit words in Chinese natural language queries in the NChiql system, a Chinese natural language query interface to databases, based on database semantics, namely a Semantic Conceptual Model (SCM) for specific domain knowledge.
Abstract: In this paper a novel word-segmentation algorithm is presented to delimit words in Chinese natural language queries in NChiql system, a Chinese natural language query interface to databases. Although there are sizable literatures on Chinese segmentation, they cannot satisfy particular requirements in this system. The novel word-segmentation algorithm is based on the database semantics, namely Semantic Conceptual Model (SCM) for specific domain knowledge. Based on SCM, the segmenter labels the database semantics to words directly, which eases the disambiguation and translation (from natural language to database query) in NChiql.
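The SCM-driven disambiguation itself is not reproducible from this summary; the Python sketch below only illustrates the general flavour of segmenting a query by longest match against a domain lexicon whose entries already carry database-semantic labels. The lexicon contents and label names are hypothetical.

```python
def segment(query, lexicon, max_len=4):
    # Forward maximum matching: prefer the longest lexicon entry at each position;
    # characters not covered by the lexicon fall through as single "unknown" tokens.
    tokens, i = [], 0
    while i < len(query):
        for l in range(min(max_len, len(query) - i), 0, -1):
            word = query[i:i + l]
            if l == 1 or word in lexicon:
                tokens.append((word, lexicon.get(word, "unknown")))
                i += l
                break
    return tokens

# Hypothetical SCM-style lexicon mapping surface words to database semantics.
lexicon = {"职工": "table:Employee", "工资": "column:Employee.salary", "大于": "operator:>"}
print(segment("职工工资大于5000", lexicon))
```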

Journal Article
TL;DR: This work proposes a multiresolution triangulation scheme that eliminates the restrictions of the restricted quadtree triangulation and obtains better results.
Abstract: Interactive visualisation of triangulated terrain surfaces is still a problem for virtual reality systems. A polygonal model of very large terrain data requires a large number of triangles. The main problems are the rendering efficiency of the representation and its transmission over networks. The major challenge is to simplify a model while preserving its appearance. A multiresolution model represents different levels of detail of an object. We can choose the preferable level of detail according to the position of the observer to improve rendering, and we can make a progressive transmission of the different levels. We propose a multiresolution triangulation scheme that eliminates the restrictions of the restricted quadtree triangulation and obtains better results.
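The abstract's point about choosing the level of detail from the observer's position can be pictured with the small Python sketch below; the distance thresholds are purely illustrative, and the paper's actual criteria for its quadtree-restriction-free triangulation are not reproduced here.

```python
import math

def choose_level_of_detail(viewer_pos, patch_center, num_levels, base_distance=100.0):
    # Level 0 is the finest triangulation; farther patches get coarser levels,
    # which reduces the triangle count sent to the renderer (and over the network).
    d = math.dist(viewer_pos, patch_center)
    return min(int(d // base_distance), num_levels - 1)

print(choose_level_of_detail((0.0, 0.0, 1.8), (50.0, 20.0, 0.0), num_levels=5))    # near -> 0
print(choose_level_of_detail((0.0, 0.0, 1.8), (900.0, 400.0, 0.0), num_levels=5))  # far  -> 4
```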

Journal Article
TL;DR: In this article, an application of image compression to patients' computed tomographies using neural networks is presented, which allows carrying out both compression and decompression of the images with a fixed ratio of 8:1 and a loss of 2%.
Abstract: Image compression is a widely studied topic. Conventional methods offer variable compression ratios depending on the image in question and, in general, do not yield good results for images that are rich in tones. This work is an application of image compression to patients' computed tomographies using neural networks, which allows carrying out both compression and decompression of the images with a fixed ratio of 8:1 and a loss of 2%.
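The network architecture is not given in this summary; the fixed 8:1 ratio can nevertheless be pictured with a minimal linear bottleneck autoencoder over flattened 8x8 pixel blocks (64 inputs, 8 code values), sketched below in Python/NumPy. This is an assumption-laden illustration, not the authors' network, and it ignores the quantization a real codec would need.

```python
import numpy as np

def train_block_autoencoder(blocks, hidden=8, epochs=500, lr=0.05):
    # blocks: array of flattened 8x8 pixel blocks (64 values each).
    # A hidden layer of 8 units gives the fixed 8:1 compression ratio.
    rng = np.random.default_rng(0)
    X = np.asarray(blocks, dtype=float) / 255.0
    W_enc = rng.normal(scale=0.1, size=(64, hidden))
    W_dec = rng.normal(scale=0.1, size=(hidden, 64))
    for _ in range(epochs):
        H = X @ W_enc                       # 8-value code per block (compression)
        X_hat = H @ W_dec                   # reconstruction (decompression)
        err = X_hat - X
        grad_dec = H.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc, W_dec

blocks = np.random.randint(0, 256, size=(200, 64))      # stand-in for CT image blocks
W_enc, W_dec = train_block_autoencoder(blocks)
code = (blocks[0] / 255.0) @ W_enc                       # 8 numbers instead of 64
reconstruction = code @ W_dec
```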

Journal ArticleDOI
TL;DR: It is proved in this paper that checking a timed automaton ℳ with respect to a linear duration property 𝒟 can be done by investigating only the integral timed states of ℳ, and an efficient algorithm based on equivalence classes of these states is presented.
Abstract: It is proved in this paper that checking a timed automaton ℳ with respect to a linear duration property 𝒟 can be done by investigating only the integral timed states of ℳ. An equivalence relation is introduced in this paper to divide the infinite number of integral timed states into a finite number of equivalence classes. Based on this, a method is proposed for checking whether ℳ satisfies 𝒟. In some cases, the number of equivalence classes is too large for a computer to manipulate. A technique for reducing the search-space for checking linear duration properties is also described. This technique is more suitable for the case in this paper than those in the literature, because most of those techniques are designed for reachability analysis.

Journal ArticleDOI
TL;DR: An efficient algorithm is presented that implements one-to-many, or multicast, communication in one-port wormhole-routed cube-connected cycles (CCCs) in the absence of hardware multicast support by exploiting the properties of the switching technology and the use of virtual channels.
Abstract: This paper presents an efficient algorithm that implements one-to-many, or multicast, communication in one-port wormhole-routed cube-connected cycles (CCCs) in the absence of hardware multicast support. By exploiting the properties of the switching technology and the use of virtual channels, a minimum-time multicast algorithm is presented for n-dimensional CCCs that use deterministic routing of unicast messages. The algorithm can deliver a multicast message to m−1 destinations in ⌈log₂ m⌉ message-passing steps, while avoiding contention among the constituent unicast messages. Performance results of a simulation study on CCCs with up to 10,240 nodes are also given.
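The contention-free scheduling on the CCC topology is the hard part and is not reproduced here; the Python sketch below only illustrates the generic recursive-doubling structure behind the ⌈log₂ m⌉ bound: in each message-passing step every node that already holds the message forwards one unicast copy to a node that does not, so coverage doubles.

```python
def multicast_schedule(source, destinations):
    # Returns a list of steps; each step is a list of (sender, receiver) unicasts.
    # After step k, 2**k nodes hold the message, so m nodes need ceil(log2 m) steps.
    holders, pending, steps = [source], list(destinations), []
    while pending:
        sends = []
        for h in list(holders):   # only nodes that held the message before this step send
            if not pending:
                break
            d = pending.pop(0)
            sends.append((h, d))
            holders.append(d)
        steps.append(sends)
    return steps

sched = multicast_schedule(0, list(range(1, 8)))   # m = 8 nodes in total
print(len(sched), "steps:", sched)                 # 3 steps = ceil(log2 8)
```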

Journal ArticleDOI
TL;DR: A novel approach for detection of suspicious regions in digitized mammograms is presented, where the edges of the suspicious region in the mammogram are enhanced using an improved logic filter and the result is a gray-level histogram with distinguished characteristics, which facilitates the segmentation of the suspicious masses.
Abstract: This paper presents a novel approach for detection of suspicious regions in digitized mammograms. The edges of the suspicious region in the mammogram are enhanced using an improved logic filter. The result of further image processing is a gray-level histogram with distinguished characteristics, which facilitates the segmentation of the suspicious masses. The experimental results, based on 25 digitized sample mammograms with definite diagnoses, are analyzed and evaluated briefly.

Journal ArticleDOI
TL;DR: The paper gives the objective model and the theoretic satisfaction conditions of three kinds of speech acts in MASr-s, a kind of multi-agent system with requirements/services cooperation style.
Abstract: Adopting three kinds of speech acts (request, promise, and inform), this paper analyses the interaction among agents in a kind of multi-agent systems with requirements/services cooperation style (MASr-s). The paper gives the objective model and the theoretic satisfaction conditions of the three kinds of speech acts in MASr-s. The formal definition of MASr-s is presented. To evaluate concrete implementation architectures and mechanisms of the variant MASr-s, including the client/server computing architecture and mechanism, a spectrum of MASr-s is proposed, which captures the direct request/passive service mechanism, the direct request/active service mechanism, the indirect request/active service mechanism, and the peer-to-peer request/service mechanism. The spectrum shows a thread for improving traditional client/server computing.

Journal ArticleDOI
TL;DR: The Genetic Algorithm is used to construct an accurate model of some widely used existing test generation algorithms, with objective quantitative measures serving as an effective tool for choosing which algorithm to apply to a given circuit.
Abstract: In this era of VLSI circuits, testability is a truly crucial issue. To generate a test set for a given circuit, the choice of which of the many existing test generation algorithms to apply is bound to vary from circuit to circuit. In this paper, the Genetic Algorithm is used to construct an accurate model of some existing test generation algorithms that are in wide use. Some objective quantitative measures are used as an effective tool in making such a choice. Such measures are so important to the analysis of algorithms that they become one of the subjects of this work.

Journal ArticleDOI
TL;DR: This paper demonstrates the testability and communication speed of the tree-topology atomic SYN-event under different numbers of branches in order to achieve a more satisfactory tradeoff between testability and communication efficiency.
Abstract: Testing of parallel programs involves two parts: testing of control-flow within the processes and testing of timing-sequence. This paper focuses on the latter, particularly on the timing-sequence of message-passing paradigms. First, the coarse-grained SYN-sequence model is built up to describe the execution of distributed programs. All of the topics discussed in this paper are based on it. The most direct way to test a program is to run it. A fault-free parallel program should produce both correct computing results and a proper SYN-sequence. In order to analyze the validity of an observed SYN-sequence, this paper presents the formal specification (Backus Normal Form) of the valid SYN-sequence. Until now there has been little work on testing coverage for distributed programs. Calculating the number of valid SYN-sequences is the key to the coverage problem, but the number of valid SYN-sequences is extremely large and it is very hard to obtain the combination law among SYN-events. In order to resolve this problem, this paper proposes an efficient testing strategy, atomic SYN-event testing, which first linearizes the SYN-sequence (making it consist only of serial atomic SYN-events) and then tests each atomic SYN-event independently. This paper provides the calculating formula for the number of valid SYN-sequences for tree-topology atomic SYN-events (broadcast and combine). Furthermore, the number of valid SYN-sequences also, to some degree, mirrors the testability of parallel programs. Taking the tree-topology atomic SYN-event as an example, this paper demonstrates its testability and communication speed under different numbers of branches in order to achieve a more satisfactory tradeoff between testability and communication efficiency.

Journal ArticleDOI
TL;DR: A novel text representation and matching scheme for Chinese text retrieval that uses both proximity and mutual information of word pairs to represent the text content, so as to overcome the high false drop, new word, and phrase problems that exist in character-based and word-based systems.
Abstract: This paper proposes a novel text representation and matching scheme for Chinese text retrieval. At present, the indexing methods of Chinese retrieval systems are either character-based or word-based. The character-based indexing methods, such as bi-gram or tri-gram indexing, have high false drops due to the mismatches between queries and documents. On the other hand, it is difficult to efficiently identify all the proper nouns, terminology of different domains, and phrases in word-based indexing systems. The new indexing method uses both proximity and mutual information of word pairs to represent the text content, so as to overcome the high false drop, new word, and phrase problems that exist in character-based and word-based systems. The evaluation results indicate that the average query precision of proximity-based indexing is 5.2% higher than the best results of TREC-5.
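As a rough illustration of the pair statistics involved (not the paper's exact weighting or thresholds), the Python sketch below counts word-pair co-occurrences inside a small proximity window and scores each pair with pointwise mutual information; high-PMI, short-distance pairs would be the candidate index terms.

```python
import math
from collections import Counter

def pair_pmi(documents, window=2):
    # documents: lists of already-segmented words.
    word_freq, pair_freq, total = Counter(), Counter(), 0
    for words in documents:
        total += len(words)
        word_freq.update(words)
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                pair_freq[(w, words[j])] += 1           # co-occurrence within the window
    def pmi(a, b):
        p_ab = pair_freq[(a, b)] / total                # crude normalization for a sketch
        p_a, p_b = word_freq[a] / total, word_freq[b] / total
        return math.log2(p_ab / (p_a * p_b))
    return {pair: pmi(*pair) for pair in pair_freq}

docs = [["information", "retrieval", "system"], ["text", "retrieval", "system"]]
print(pair_pmi(docs))
```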

Journal ArticleDOI
TL;DR: An approach of motion-based segmentation is proposed to realize the extraction of spatial-temporal features of gesturing, and based on the dominant motion model the gesturing region is extracted, i.e., the dominant object.
Abstract: One of the key problems in a vision-based gesture recognition system is the extraction of spatial-temporal features of gesturing. In this paper an approach of motion-based segmentation is proposed to realize this task. The direct method, combined with a robust M-estimator, is used to estimate the affine parameters of the gesturing motion, and based on the dominant motion model the gesturing region, i.e., the dominant object, is extracted. Thus the spatial-temporal features of gestures can be extracted. Finally, the dynamic time warping (DTW) method is used directly to match 12 control gestures (6 for “translation” orders, 6 for “rotation” orders). A small demonstration system has been set up to verify the method, in which a panorama image viewer (built by mosaicing a sequence of standard “Garden” images) can be controlled with recognized gestures instead of a 3-D mouse tool.
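For reference, the classic DTW recurrence used in this kind of template matching is short enough to sketch in Python; each of the 12 control gestures would be a stored template, and the observed motion-feature sequence is matched against all templates, taking the one with the smallest warped cost. The 1-D toy example and template names below stand in for per-frame feature vectors and the real gesture set.

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    # Dynamic time warping cost between sequences a and b.
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

templates = {"translate_left": [0, 1, 2, 3], "rotate_cw": [3, 2, 1, 0]}
observed = [0, 1, 1, 2, 3]
print(min(templates, key=lambda name: dtw_distance(observed, templates[name])))
```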

Journal ArticleDOI
TL;DR: For finite automaton public key cryptosystems whose public-key automata belong to such a kind of compound finite automata, the search amounts of exhaustive search algorithms in the average case and in the worst case, and the success probabilities of stochastic search algorithms, are evaluated for both encryption and signature.
Abstract: In this paper, weights of the output set and of the input set for finite automata are discussed. For a weakly invertible finite automaton, we prove that for states with minimal output weight, the distribution of input sets is uniform. Then for a kind of compound finite automata, we give the weights of the output set and of the input set explicitly, and a characterization of their input-trees. For finite automaton public key cryptosystems whose public-key automata belong to such a kind of compound finite automata, we evaluate the search amounts of exhaustive search algorithms in the average case and in the worst case, and the success probabilities of stochastic search algorithms, for both encryption and signature. In addition, a result on the mutual invertibility of finite automata is also given.

Journal ArticleDOI
TL;DR: In this paper, the self fault-tolerance of protocols is discussed, a semantics-based approach for achieving self fault-tolerant protocols is presented, and some main characteristics of self fault-tolerance concerning liveness, nontermination and infinity are presented.
Abstract: The cooperation of different processes may be lost by mistake when a protocol is executed. The protocol cannot operate normally under this condition. In this paper, the self fault-tolerance of protocols is discussed, and a semantics-based approach for achieving self fault-tolerance of protocols is presented. Some main characteristics of self fault-tolerance of protocols concerning liveness, nontermination and infinity are also presented. Meanwhile, the sufficient and necessary conditions for achieving self fault-tolerance of protocols are given. Finally, a typical protocol that does not satisfy self fault-tolerance is investigated, and a redesigned version of this existing protocol using the proposed approach is given.

Journal ArticleDOI
TL;DR: A new data-summarizing technique based on confidence intervals is proposed that fits not only for evaluating DSM systems, but also for evaluating other systems, such as memory systems and communication systems.
Abstract: Distributed Shared Memory (DSM) systems have gained popular acceptance by combining the scalability and low cost of distributed systems with the ease of use of a single address space. Many new hardware DSM and software DSM systems have been proposed in recent years. In general, benchmarking is widely used to demonstrate the performance advantages of new systems. However, the common method used to summarize the measured results is the arithmetic mean of ratios, which is incorrect in some cases. Furthermore, many published papers list a lot of data only, and do not summarize them effectively, which confuses users greatly. In fact, many users want to get a single number as a conclusion, which older summarizing techniques do not provide. Therefore, a new data-summarizing technique based on confidence intervals is proposed in this paper. The new technique includes two data-summarizing methods: (1) the paired confidence interval method; (2) the unpaired confidence interval method. With this new technique, it can be concluded at a given confidence level that one system is better than the others. Four examples are shown to demonstrate the advantages of this new technique. Furthermore, with the help of the confidence level, it is proposed to standardize the benchmarks used for evaluating DSM systems so that convincing results can be obtained. In addition, the new summarizing technique fits not only for evaluating DSM systems, but also for evaluating other systems, such as memory systems and communication systems.
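A minimal version of the paired confidence interval method can be sketched as follows (Python); the benchmark times and the Student-t critical value are illustrative assumptions. If the interval on the mean paired difference excludes zero, one system can be declared better than the other at the chosen confidence level, which is exactly the kind of single-number conclusion the paper argues for.

```python
import math, statistics

def paired_confidence_interval(times_a, times_b, t_crit=2.262):
    # t_crit = Student-t critical value for the desired confidence and n-1 degrees
    # of freedom (2.262 corresponds to 95% confidence with n = 10 paired runs).
    diffs = [a - b for a, b in zip(times_a, times_b)]
    n = len(diffs)
    mean = statistics.mean(diffs)
    half = t_crit * statistics.stdev(diffs) / math.sqrt(n)
    return mean - half, mean + half

# Hypothetical execution times (seconds) of two DSM systems on the same 10 benchmarks.
a = [12.1, 8.4, 15.0, 9.7, 11.2, 13.5, 7.9, 10.4, 14.2, 9.1]
b = [11.0, 8.9, 13.2, 9.1, 10.8, 12.0, 8.2, 9.9, 13.0, 8.8]
lo, hi = paired_confidence_interval(a, b)
print((lo, hi))   # interval excludes zero -> system b is faster at 95% confidence
```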

Journal ArticleDOI
TL;DR: Compared with the previous version of SSNS algorithm, the new version decreases the Chinese character error rate (CCER) in the word decoding by 42.1% across a database consisting of a large number of testing sentences (syllable strings).
Abstract: The previously proposed syllable-synchronous network search (SSNS) algorithm plays a very important role in the word decoding of continuous Chinese speech recognition and achieves satisfying performance. Several related key factors that may affect the overall word decoding effect are carefully studied in this paper, including the perfecting of the vocabulary, the big-discount Turing re-estimation of the N-gram probabilities, and the management of the search path buffers. Based on these discussions, corresponding approaches to improving the SSNS algorithm are proposed. Compared with the previous version of the SSNS algorithm, the new version decreases the Chinese character error rate (CCER) in word decoding by 42.1% across a database consisting of a large number of testing sentences (syllable strings).