
Showing papers by "Shojiro Nishio published in 1998"


Journal ArticleDOI
01 Mar 1998
TL;DR: The study shows that a set of sophisticated generalization operators can be constructed for generalization of complex data objects, a dimension-based class generalization mechanism can be developed for object cube construction, and sophisticated rule formation methods can be developed for extraction of different kinds of knowledge from data.
Abstract: Data mining is the discovery of knowledge and useful information from the large amounts of data stored in databases. With the increasing popularity of object-oriented database systems in advanced database applications, it is important to study the data mining methods for object-oriented databases because mining knowledge from such databases may improve understanding, organization, and utilization of the data stored there. In this paper, issues on generalization-based data mining in object-oriented databases are investigated in three aspects: (1) generalization of complex objects, (2) class-based generalization, and (3) extraction of different kinds of rules. An object cube model is proposed for class-based generalization, on-line analytical processing, and data mining. The study shows that (i) a set of sophisticated generalization operators can be constructed for generalization of complex data objects, (ii) a dimension-based class generalization mechanism can be developed for object cube construction, and (iii) sophisticated rule formation methods can be developed for extraction of different kinds of knowledge from data, including characteristic rules, discriminant rules, association rules, and classification rules. Furthermore, the application of such discovered knowledge may substantially enhance the power and flexibility of browsing databases, organizing databases and querying data and knowledge in object-oriented databases.
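
To make the generalization idea concrete, the following sketch performs a simple attribute-oriented generalization by climbing a concept hierarchy until the number of distinct values falls below a threshold. The concept hierarchy, the threshold, and the toy objects are assumptions made for the example; this is not the paper's object cube implementation.

```python
# Minimal sketch of attribute-oriented generalization over object attributes.
# The concept hierarchy, the threshold, and the toy data are illustrative
# assumptions, not taken from the paper.

from collections import Counter

# A tiny concept hierarchy: each value maps to its more general concept.
CONCEPT_HIERARCHY = {
    "city": {"Osaka": "Japan", "Kyoto": "Japan", "Toronto": "Canada"},
    "status": {"MSc": "graduate", "PhD": "graduate", "BSc": "undergraduate"},
}

def generalize(objects, attribute, threshold=2):
    """Climb the concept hierarchy on one attribute until the number of
    distinct values does not exceed the threshold (a simplified version
    of dimension-based class generalization)."""
    values = [obj[attribute] for obj in objects]
    hierarchy = CONCEPT_HIERARCHY.get(attribute, {})
    while len(set(values)) > threshold:
        new_values = [hierarchy.get(v, v) for v in values]
        if new_values == values:      # no further generalization possible
            break
        values = new_values
    return values

students = [
    {"city": "Osaka", "status": "MSc"},
    {"city": "Kyoto", "status": "PhD"},
    {"city": "Toronto", "status": "BSc"},
]

generalized = generalize(students, "city", threshold=1)
# A characteristic rule can then be read off the generalized counts,
# e.g. "most students come from Japan".
print(Counter(generalized))           # Counter({'Japan': 2, 'Canada': 1})
```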

78 citations


Journal ArticleDOI
TL;DR: The results demonstrate that the effective use of database migration produces better performance than the conventional method based on the two-phase commit protocol.
Abstract: Due to recent developments in network technologies, broader channel bandwidth is becoming prevalent in worldwide networks. As one of the new technologies making good use of such broadband channels, dynamic relocation of databases through networks, database migration, will soon be used in practice as a powerful and basic database operation. We propose two transaction processing methods that take advantage of database migration in broadband networks. These methods choose the more efficient of two transaction processing approaches: the conventional method based on the two-phase commit protocol and our method using database migration. We also propose a concurrency control mechanism and a recovery mechanism for our proposed methods. Simulation results are presented comparing the performance of our proposed methods and the conventional transaction processing method based on the two-phase commit protocol. The results demonstrate that effective use of database migration produces better performance than the conventional method.
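
The choice between the two methods can be pictured as a simple cost comparison: pay a network round trip per operation (plus the commit rounds) under two-phase commit, or pay once to ship the database and then operate locally under migration. The sketch below illustrates this trade-off; the cost model and parameter values are assumptions for the example, not the paper's simulation model.

```python
# Illustrative cost comparison between two ways of running a transaction on
# a remote database: (a) conventional 2PC, paying a round trip per operation
# plus the commit protocol, and (b) migrating the database first, then
# operating locally.  The cost model and the numbers are assumptions made
# for this sketch, not the paper's simulation model.

def cost_two_phase_commit(num_ops, round_trip, commit_rounds=2):
    """Every remote operation plus the 2PC message rounds crosses the network."""
    return (num_ops + commit_rounds) * round_trip

def cost_migration(db_size, bandwidth, num_ops, local_op_cost):
    """Ship the database once, then execute all operations locally."""
    return db_size / bandwidth + num_ops * local_op_cost

def choose_method(num_ops, round_trip, db_size, bandwidth, local_op_cost):
    c_2pc = cost_two_phase_commit(num_ops, round_trip)
    c_mig = cost_migration(db_size, bandwidth, num_ops, local_op_cost)
    return "migration" if c_mig < c_2pc else "two-phase commit"

# With a broadband link, moving a small database can beat many round trips.
print(choose_method(num_ops=50, round_trip=0.01,
                    db_size=10e6, bandwidth=100e6, local_op_cost=0.0001))
```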

15 citations


Proceedings ArticleDOI
23 Feb 1998
TL;DR: This work proposes a distributed database system, DB-MAN (distributed database system based on DataBase Migration in ATM Networks), which takes advantage of database migration in virtual LANs (local area networks) of ATM networks, and presents simulation results comparing the performance of the proposed system with that of a conventional distributed database system.
Abstract: Because of the recent development of network technologies such as ATM (Asynchronous Transfer Mode), broader channel bandwidth is becoming available throughout worldwide networks. As one of the new technologies that make good use of such broadband channels, dynamic relocation of databases through networks, which we call database migration, will soon become a powerful and basic database operation of practical use. We discuss our proposal of a distributed database system, DB-MAN (distributed database system based on DataBase Migration in ATM Networks), which takes advantage of database migration in virtual LANs (local area networks) of ATM networks. DB-MAN has two notable mechanisms: a mechanism for selecting the transaction processing method and a mechanism for concurrency control with database migration. The former is a mechanism that chooses the more efficient of two transaction processing methods: the conventional method based on the two-phase commit protocol and our method employing database migration. The latter is a mechanism that prevents the transaction processing throughput from deteriorating in environments where data contention is a significant factor. We then present simulation results comparing the performance of our proposed system with that of the conventional distributed database system based on the two-phase commit protocol. The results demonstrate that effective use of database migration gives higher performance than the conventional system.
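
As a rough illustration of how migration and ordinary transactions can be kept from interfering, the toy model below only allows a migration when no transaction is active on the database and redirects transactions that arrive during a move. This is an assumed design for illustration only, not DB-MAN's actual concurrency control mechanism.

```python
# Toy model: migration is only permitted when no transaction is running on
# the database, and transactions started during a move are told to retry at
# the new site.  An illustrative design sketch, not DB-MAN's mechanism.

class MigratableDatabase:
    def __init__(self, location):
        self.location = location          # site currently holding the data
        self.active_transactions = 0
        self.migrating = False

    def begin_transaction(self):
        if self.migrating:
            raise RuntimeError("database is moving; retry at the new site")
        self.active_transactions += 1

    def commit_transaction(self):
        self.active_transactions -= 1

    def migrate(self, new_site):
        if self.active_transactions > 0:
            raise RuntimeError("migration must wait for running transactions")
        self.migrating = True
        self.location = new_site          # ship the data, update the catalog
        self.migrating = False

db = MigratableDatabase("site_A")
db.begin_transaction()
db.commit_transaction()
db.migrate("site_B")                      # allowed: no transaction is active
print(db.location)                        # site_B
```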

13 citations



01 Jan 1998
TL;DR: In this paper, the authors propose a database compression technique that requires only partial decompression for read operations and no decompression for write operations, which is suitable for databases in active use and can be used to compress data in relational databases.
Abstract: Despite the drop in prices, storage cost is still a major cost factor in large-scale database applications, such as data warehouses. Data compression is needed to reduce this cost. Many data compression techniques have been proposed, and the issue of database compression has been discussed. Conventional data compression techniques require that compressed data be decompressed before read or write operations can be carried out. As a result, it is not practical to compress databases in active use with these techniques. In this chapter, we propose a database compression technique which needs only partial decompression for read operations and no decompression for write operations. It is suitable for databases in active use and can be used to compress data in relational databases. The proposed technique finds rules in a relational database using the Apriori algorithm and stores data using these rules to achieve high compression ratios. The rules are in turn stored in a deductive database to enable easy data access.
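
The following sketch illustrates the general idea of rule-based compression: when a mined rule such as "city determines country" holds for a tuple, the implied value is omitted at storage time and re-derived at read time, while violating tuples keep the value as an explicit exception. The rule, the data, and the storage format are assumptions made for the example and are not taken from the chapter.

```python
# Illustrative sketch of compressing a relation with rules: when a rule of
# the form "A=a implies B=b" holds, the B value is omitted from the stored
# tuple and reconstructed at read time; tuples that violate the rule keep
# their B value as an explicit exception.  The rule below is assumed to have
# been mined beforehand (e.g. with Apriori); this is not the chapter's
# storage format, just an illustration of the idea.

# One mined rule: city determines country.
RULES = {("city", "country"): {"Osaka": "Japan", "Kyoto": "Japan"}}

def compress(tuple_):
    """Drop attribute values that the rules can reconstruct."""
    stored = dict(tuple_)
    for (lhs, rhs), mapping in RULES.items():
        if mapping.get(tuple_.get(lhs)) == tuple_.get(rhs):
            stored.pop(rhs)            # value is implied by the rule
    return stored

def read(stored):
    """Partial decompression: re-derive only the missing attributes."""
    tuple_ = dict(stored)
    for (lhs, rhs), mapping in RULES.items():
        if rhs not in tuple_ and tuple_.get(lhs) in mapping:
            tuple_[rhs] = mapping[tuple_[lhs]]
    return tuple_

table = [compress(t) for t in [
    {"city": "Osaka", "country": "Japan"},     # compressed: country dropped
    {"city": "Nara",  "country": "Japan"},     # no rule applies: stored as-is
]]
print(table)
print([read(t) for t in table])

# A write simply compresses the new tuple and appends it; existing
# compressed tuples never need to be decompressed.
table.append(compress({"city": "Kyoto", "country": "Japan"}))
```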

8 citations




Book ChapterDOI
19 Nov 1998
TL;DR: The DSK method and the DSK(S) method are proposed to discover characteristic rules from a large amount of deduction results without having to store all of them, overcoming the problem that such query results can be difficult, and sometimes impossible, to store, view, or analyze.
Abstract: The ability of the deductive database to handle recursive queries is one of its most useful features. It opens up new possibilities for users to view and analyze data. This ability to handle recursive queries, however, still cannot be fully utilized because when recursive rules are involved, the number of deduced facts can become very large, making it difficult and sometimes impossible to store, view, or analyze the query results. In order to overcome this problem, we have proposed the DSK method and the DSK(S) method to discover characteristic rules from a large amount of deduction results without having to store all of them. In this paper, we propose two new methods, the DSK(T) method and the DSK(ST) method, which are faster than the DSK method and the DSK(S) method, respectively. In addition, we propose a new sampling method called magic sampling, which is used by the two methods to achieve the improvement in speed. Magic sampling works when linear recursive rules are involved and the magic set algorithm is used for deduction.
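
As an illustration of summarizing deduction results without materializing them all, the sketch below enumerates the facts of a linear recursive rule lazily and keeps only a fixed-size uniform sample, from which a characteristic summary is computed. Reservoir sampling stands in here for the paper's magic sampling; the DSK family of methods is not reproduced.

```python
# Sketch of summarizing a large set of deduced facts without storing all of
# them: deduced facts are produced lazily (here, ancestor pairs from a
# recursive rule) and a fixed-size reservoir sample is kept, from which a
# characteristic summary is computed.  Reservoir sampling stands in for the
# paper's magic sampling; the DSK methods themselves are not reproduced.

import random
from collections import Counter

PARENT = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "e")]

def ancestors():
    """Lazily enumerate ancestor(X, Y) facts from the linear recursive rule
    ancestor(X, Y) :- parent(X, Y) ; parent(X, Z), ancestor(Z, Y)."""
    known = set(PARENT)
    frontier = set(PARENT)
    yield from frontier
    while frontier:
        new = {(x, y2) for (x, y) in frontier for (y1, y2) in PARENT if y == y1}
        new -= known
        known |= new
        yield from new
        frontier = new

def reservoir_sample(facts, k=100):
    """Keep a uniform sample of k facts while streaming over the deduction."""
    sample = []
    for i, fact in enumerate(facts):
        if i < k:
            sample.append(fact)
        else:
            j = random.randint(0, i)
            if j < k:
                sample[j] = fact
    return sample

sample = reservoir_sample(ancestors(), k=5)
# A characteristic summary can then be estimated from the sample, e.g.
# which constants most often appear in the ancestor position.
print(Counter(x for (x, _) in sample))
```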

1 citation