scispace - formally typeset
Topic

Tuple

About: Tuple is a research topic. Over the lifetime, 6513 publications have been published within this topic receiving 146057 citations. The topic is also known as: tuple & ordered tuplet.


Papers
Proceedings ArticleDOI
01 Jun 1984
TL;DR: A mechanism is proposed in which the view is materialized at all times and how to quickly update the view in response to database changes is addressed.
Abstract: In relational databases, a view definition is a query against the database, and a view materialization is the result of applying the view definition to the current database. A view materialization over a database may change as relations in the database undergo modifications. In this paper a mechanism is proposed in which the view is materialized at all times. The problem which this mechanism addresses is how to quickly update the view in response to database changes. A structure is maintained which provides information useful in minimizing the amount of work caused by updates. Methods are presented for handling both general databases and the much simpler tree databases (also called acyclic databases). In both cases adding or deleting a tuple can be performed in polynomial time. For tree databases the degree of the polynomial is independent of the schema structure, while for cyclic databases the degree depends on the schema structure. The cost of a sequence of tuple additions (deletions) is also analyzed.

112 citations
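The incremental-maintenance idea above can be sketched in a few lines: instead of recomputing a materialized join view after every insertion, compute only the delta join of the new tuple against the other relation. This is a minimal illustration of the general technique, not the paper's actual structure; the relations `R`, `S`, and `view` are hypothetical.

```python
# Toy relations: R(id, x) and S(x, y); the view is their join on x.
R = {(1, 'a'), (2, 'b')}
S = {('a', 10), ('b', 20)}

# Materialize the view once, up front.
view = {(rid, x, y) for (rid, x) in R for (x2, y) in S if x == x2}

def insert_into_R(t):
    """Add tuple t to R and update the view with only the delta join,
    avoiding a full recomputation of the materialized view."""
    R.add(t)
    rid, x = t
    view.update((rid, x, y) for (x2, y) in S if x == x2)

insert_into_R((3, 'a'))
```

The work per insertion is proportional to the matching tuples in `S`, not to the size of the whole view, which is the payoff the abstract describes.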

Posted Content
TL;DR: Two new types of modelling entities that address situations where the existing time variables are inadequate are introduced and defined within the framework, which provides a foundation, using algebraic bind operators, for the querying of variable databases via existing query languages.
Abstract: While "now" is expressed in SQL as CURRENT_TIMESTAMP within queries, this value cannot be stored in the database. However, this notion of an ever-increasing current-time value has been reflected in some temporal data models by inclusion of database-resident variables, such as "now," "until-changed," "∞," "@" and "-." Time variables are very desirable, but their use also leads to a new type of database, consisting of tuples with variables, termed a variable database. This paper proposes a framework for defining the semantics of the variable databases of temporal relational data models. A framework is presented because several reasonable meanings may be given to databases that use some of the specific temporal variables that have appeared in the literature. Using the framework, the paper defines a useful semantics for such databases. Because situations occur where the existing time variables are inadequate, two new types of modeling entities that address these shortcomings, timestamps which we call now-relative and now-relative indeterminate, are introduced and defined within the framework. Moreover, the paper provides a foundation, using algebraic bind operators, for the querying of variable databases via existing query languages. The transition to variable databases presented here requires minimal change to the query processor. Finally, to underline the practical feasibility of variable databases, we show that database variables can be precisely specified and efficiently implemented in conventional query languages, such as SQL, and in temporal query languages, such as TSQL2.

112 citations
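The bind-operator idea can be illustrated with a small sketch: tuples may carry a database-resident "now" variable as an end timestamp, and a `bind` step substitutes the evaluation time so that an ordinary, variable-free query can run unchanged. The sentinel `NOW`, the `employees` relation, and the helper names are illustrative assumptions, not the paper's notation.

```python
from datetime import date

NOW = object()  # sentinel standing in for the database-resident "now"

employees = [
    ('alice', date(2020, 1, 1), NOW),                 # still-current tuple
    ('bob',   date(2018, 5, 1), date(2019, 9, 30)),   # closed tuple
]

def bind(relation, at):
    """Replace NOW with the evaluation time `at`, yielding a
    variable-free relation a conventional query processor can handle."""
    return [(name, start, at if end is NOW else end)
            for (name, start, end) in relation]

def valid_at(relation, t):
    """Names of tuples whose bound validity interval contains t."""
    return [name for (name, start, end) in bind(relation, t)
            if start <= t <= end]
```

Because the substitution happens in one place, the rest of the query pipeline needs no change, mirroring the paper's claim of minimal impact on the query processor.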

Journal ArticleDOI
TL;DR: The expanded closed world assumption is proposed, which discusses how to perform updates on databases containing set nulls, marked nulls, and simple conditional tuples, and addresses some issues of refining incompletely specified information.
Abstract: In this paper we consider approaches to updating databases containing null values and incomplete information. Our approach distinguishes between modeling incompletely known worlds and modeling changes in these worlds. As an alternative to the open and closed world assumptions, we propose the expanded closed world assumption. Under this assumption, we discuss how to perform updates on databases containing set nulls, marked nulls, and simple conditional tuples, and address some issues of refining incompletely specified information.

112 citations
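A marked null can be pictured as a named placeholder that may recur across tuples; refining it substitutes one concrete value at every occurrence, narrowing the set of worlds the database could describe. This toy sketch is an illustration of the concept only; the `Marked` class and the order relation are hypothetical, not the paper's formalism.

```python
class Marked:
    """A named placeholder (marked null) that can appear in several tuples."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f'@{self.name}'

n1 = Marked('n1')
# Two tuples share the same unknown city; a third is fully specified.
db = [('order1', n1), ('order2', n1), ('order3', 'paris')]

def refine(db, null, value):
    """Replace every occurrence of a marked null with a concrete value,
    refining the incompletely specified database in one step."""
    return [tuple(value if v is null else v for v in row) for row in db]

refined = refine(db, n1, 'london')
```

Because the null is marked (named) rather than anonymous, both orders are guaranteed to receive the same city, which an unmarked null could not express.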

Journal ArticleDOI
02 Aug 1993
TL;DR: In this paper, a generalization of Datalog based on generalizing databases with the addition of integer-order constraints to relational tuples is presented, which can express any Turing-computable function.
Abstract: We provide a generalization of Datalog based on generalizing databases with the addition of integer-order constraints to relational tuples. For Datalog queries with integer-order constraints we show that there is a closed-form evaluation. We also show that the tuple recognition problem can be done in PTIME in the size of the (generalized) database, assuming that the size of the constants in the query is logarithmic in the size of the database. Note that the absence of negation is critical. Datalog queries with integer-order constraints can express any Turing-computable function.

112 citations
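The tuple recognition problem mentioned above can be sketched concretely: a generalized tuple stores order constraints over integer variables instead of concrete values, and recognition checks whether a ground tuple satisfies every constraint. The constraint encoding below is an illustrative assumption, not the paper's representation.

```python
import operator

OPS = {'<': operator.lt, '<=': operator.le,
       '>': operator.gt, '>=': operator.ge, '=': operator.eq}

# Generalized tuple over variables (x, y): the constraints x > 5 AND x <= y
gen_tuple = [('x', '>', 5), ('x', '<=', 'y')]

def recognizes(gen, ground):
    """True iff the ground tuple (a dict of variable bindings) satisfies
    every constraint; operands may be variable names or integer constants."""
    val = lambda o: ground[o] if isinstance(o, str) else o
    return all(OPS[op](val(a), val(b)) for (a, op, b) in gen)
```

Checking a ground tuple is just a linear scan over the constraints, which is consistent with the abstract's PTIME claim for recognition.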

Dissertation
Xiaohua Hu1
03 Oct 1996
TL;DR: The method is able to identify the essential subset of nonredundant attributes that determine the discovery task, and can learn different kinds of knowledge rules efficiently from large databases with noisy data and in a dynamic environment and deal with databases with incomplete information.
Abstract: Knowledge discovery systems face challenging problems from real-world databases, which tend to be very large, redundant, noisy and dynamic. In this thesis, we develop an attribute-oriented rough set approach for knowledge discovery in databases. The method adopts the artificial intelligence "learning from examples" paradigm combined with rough set theory and database operations. The learning procedure consists of two phases: data generalization and data reduction. In data generalization, our method generalizes the data by performing attribute-oriented concept tree ascension; thus some undesirable attributes are removed and a set of tuples may be generalized to the same generalized tuple. The goal of data reduction is to find a minimal subset of interesting attributes that carries all the essential information of the generalized relation; thus the minimal subset of the attributes can be used rather than the entire attribute set of the generalized relation. By removing those attributes which are not important and/or essential, the rules generated are more concise and efficacious. Our method integrates a variety of knowledge discovery algorithms, such as DBChar for deriving characteristic rules, DBClass for classification rules, DBDeci for decision rules, DBMaxi for maximal generalized rules, DMBkbs for multiple sets of knowledge rules, and DBTrend for data trend regularities, which permit a user to discover various kinds of relationships and regularities in the data. This integration inherits the advantages of the attribute-oriented induction model and rough set theory. Our method makes several contributions to KDD. A generalized rough set model is formally defined with the ability to handle statistical information and to consider the importance of attributes and objects in the databases.
Our method is able to identify the essential subset of nonredundant attributes (factors) that determine the discovery task; it can learn different kinds of knowledge rules efficiently from large databases with noisy data in a dynamic environment, and can deal with databases with incomplete information. A prototype system, DBROUGH, was constructed under a Unix/C/Sybase environment. Our system implements a number of novel ideas. In our system, we use attribute-oriented induction rather than tuple-oriented induction, thus greatly improving the learning efficiency. By integrating rough set techniques into the learning procedure, the derived knowledge rules are particularly concise and pertinent, since only the attributes (factors) relevant and/or important to the learning task are considered. In our system, the combination of transition networks and concept hierarchies provides a clean mechanism for handling the dynamic characteristics of data in the databases. For applications with noisy data, our system can generate multiple sets of knowledge rules through a decision matrix to improve the learning accuracy. The experiments using the NSERC information system illustrate the promise of attribute-oriented rough set learning for knowledge discovery in databases. (Abstract shortened by UMI.)

110 citations
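The data-reduction phase described above can be sketched in the rough-set spirit: drop attributes one at a time as long as the remaining subset still assigns the same decision to every group of indiscernible tuples. The toy relation and the greedy strategy are illustrative assumptions, not the DBROUGH implementation.

```python
rows = [  # (attr_a, attr_b, attr_c, decision)
    ('hot',  'high', 'yes', 'play'),
    ('hot',  'low',  'yes', 'play'),
    ('cold', 'high', 'no',  'stay'),
    ('cold', 'low',  'no',  'stay'),
]

def consistent(attr_idx, rows):
    """True iff tuples that agree on the chosen attributes
    always share the same decision value."""
    seen = {}
    for r in rows:
        key = tuple(r[i] for i in attr_idx)
        if seen.setdefault(key, r[-1]) != r[-1]:
            return False
    return True

def greedy_reduct(rows, n_attrs):
    """Greedily remove attributes that are redundant for the decision,
    returning indices of a (not necessarily minimum) reduct."""
    keep = list(range(n_attrs))
    for i in list(keep):
        trial = [j for j in keep if j != i]
        if consistent(trial, rows):
            keep = trial  # attribute i carries no essential information
    return keep
```

On this toy relation the third attribute alone determines the decision, so the greedy pass discards the first two, which is exactly the kind of concise rule base the abstract argues for.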


Network Information
Related Topics (5)
- Graph (abstract data type): 69.9K papers, 1.2M citations, 86% related
- Time complexity: 36K papers, 879.5K citations, 85% related
- Server: 79.5K papers, 1.4M citations, 83% related
- Scalability: 50.9K papers, 931.6K citations, 83% related
- Polynomial: 52.6K papers, 853.1K citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  203
2022  459
2021  210
2020  285
2019  306
2018  266