Topic

Tuple

About: Tuple is a research topic. Over the lifetime, 6513 publications have been published within this topic receiving 146057 citations. The topic is also known as: tuple & ordered tuplet.


Papers
Journal ArticleDOI
TL;DR: It is shown that any knowledge compilation representations from a class that contain decision-DNNFs can be converted into equivalent Free Binary Decision Diagrams, also known as Read-Once Branching Programs, with only a quasi-polynomial increase in representation size.
Abstract: We prove exponential lower bounds on the running time of the state-of-the-art exact model counting algorithms: algorithms for exactly computing the number of satisfying assignments, or the satisfying probability, of Boolean formulas. These algorithms can be seen, either directly or indirectly, as building Decision-Decomposable Negation Normal Form (decision-DNNF) representations of the input Boolean formulas. Decision-DNNFs are a special case of d-DNNFs, where d stands for deterministic. We show that any knowledge compilation representations from a class (called DLDDs in this article) that contain decision-DNNFs can be converted into equivalent Free Binary Decision Diagrams (FBDDs), also known as Read-Once Branching Programs, with only a quasi-polynomial increase in representation size. Leveraging known exponential lower bounds for FBDDs, we then obtain similar exponential lower bounds for decision-DNNFs, which imply exponential lower bounds for model-counting algorithms. We also separate the power of decision-DNNFs from d-DNNFs and a generalization of decision-DNNFs known as AND-FBDDs. We then prove new lower bounds for FBDDs that yield exponential lower bounds on the running time of these exact model counters when applied to the problem of query evaluation in tuple-independent probabilistic databases: computing the probability of an answer to a query given independent probabilities of the individual tuples in a database instance. This approach to the query evaluation problem, in which one first obtains the lineage for the query and database instance as a Boolean formula and then performs weighted model counting on the lineage, is known as grounded inference. A second approach, known as lifted inference or extensional query evaluation, exploits the high-level structure of the query as a first-order formula. Although it has been widely believed that lifted inference is strictly more powerful than grounded inference on the lineage alone, no formal separation has previously been shown for query evaluation. In this article, we show such a formal separation for the first time. In particular, we exhibit a family of database queries for which polynomial-time extensional query evaluation techniques were previously known but for which query evaluation via grounded inference using the state-of-the-art exact model counters requires exponential time.

107 citations
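The grounded-inference pipeline described in the abstract above (obtain the query's lineage over the tuples, then do weighted model counting on it) can be illustrated with a minimal Python sketch. The relations, tuple probabilities, and lineage formula here are invented for illustration, and the counter is a naive exponential enumeration rather than any of the state-of-the-art model counters the paper analyzes.

```python
from itertools import product

# A tuple-independent probabilistic database: each tuple is an independent
# Boolean event carrying its own marginal probability (values invented).
tuple_prob = {"r1": 0.9, "r2": 0.5, "s1": 0.7, "s2": 0.4}

# Lineage of a hypothetical Boolean query over these tuples, e.g. a join
# whose answer is true iff (r1 AND s1) OR (r2 AND s2) holds.
def lineage(assignment):
    return (assignment["r1"] and assignment["s1"]) or \
           (assignment["r2"] and assignment["s2"])

def query_probability(lineage, tuple_prob):
    """Exact weighted model counting of the lineage by brute-force
    enumeration of all truth assignments (exponential in the number
    of tuples, which is the cost the paper's lower bounds address)."""
    names = list(tuple_prob)
    total = 0.0
    for bits in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, bits))
        weight = 1.0
        for name, present in assignment.items():
            weight *= tuple_prob[name] if present else 1.0 - tuple_prob[name]
        if lineage(assignment):
            total += weight
    return total

print(query_probability(lineage, tuple_prob))  # 0.704
```

A compilation-based counter would traverse a decision-DNNF or FBDD instead of enumerating assignments; the paper's point is that for certain queries no such compilation avoids the exponential blow-up, even though lifted (extensional) evaluation of the same queries runs in polynomial time.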

Journal ArticleDOI
TL;DR: A new temporal data model designed with the single purpose of capturing the time-dependent semantics of data is described, using the notion of snapshot equivalence to map temporal relation instances and temporal operators of one existing model to equivalent instances and operators of another.

106 citations
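The TL;DR above turns on snapshot equivalence. The following small Python sketch, which assumes a simple interval-stamped encoding rather than the paper's actual model, shows the idea: two temporal relation instances are equivalent if they induce the same ordinary relation at every time point.

```python
# Minimal sketch of snapshot equivalence, assuming each tuple carries a
# half-open validity interval [start, end). Encoding is illustrative only.
def snapshot(temporal_relation, t):
    """The ordinary (non-temporal) relation that holds at time t."""
    return {data for (data, start, end) in temporal_relation if start <= t < end}

def snapshot_equivalent(r1, r2, time_domain):
    """Two temporal instances are snapshot equivalent if they yield the
    same snapshot at every point of the (finite) time domain."""
    return all(snapshot(r1, t) == snapshot(r2, t) for t in time_domain)

# Two different interval encodings of the same time-varying information:
a = {(("alice", "dept1"), 1, 5)}
b = {(("alice", "dept1"), 1, 3), (("alice", "dept1"), 3, 5)}
print(snapshot_equivalent(a, b, range(0, 10)))  # True
```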

Journal ArticleDOI
TL;DR: This work presents a novel ER system, called DeepER, that achieves good accuracy and high efficiency as well as ease-of-use: it requires much less human-labeled data and needs no feature engineering, compared with traditional machine-learning-based approaches.
Abstract: Entity resolution (ER) is a key data integration problem. Despite more than 70 years of effort on all aspects of ER, there is still a high demand for democratizing ER - humans are heavily involved in labeling data, performing feature engineering, tuning parameters, and defining blocking functions. With the recent advances in deep learning, in particular distributed representations of words (a.k.a. word embeddings), we present a novel ER system, called DeepER, that achieves good accuracy, high efficiency, as well as ease-of-use (i.e., much less human effort). For accuracy, we use sophisticated composition methods, namely uni- and bi-directional recurrent neural networks (RNNs) with long short-term memory (LSTM) hidden units, to convert each tuple to a distributed representation (i.e., a vector), which can in turn be used to effectively capture similarities between tuples. We consider both the case where pre-trained word embeddings are available and the case where they are not; we present ways to learn and tune the distributed representations. For efficiency, we propose a locality sensitive hashing (LSH) based blocking approach that uses distributed representations of tuples; it takes all attributes of a tuple into consideration and produces much smaller blocks, compared with traditional methods that consider only a few attributes. For ease-of-use, DeepER requires much less human-labeled data and does not need feature engineering, compared with traditional machine learning based approaches which require handcrafted features and similarity functions along with their associated thresholds. We evaluate our algorithms on multiple datasets (including benchmarks, biomedical data, as well as multilingual data) and the extensive experimental results show that DeepER outperforms existing solutions.

106 citations
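A rough Python sketch of the two ingredients in the DeepER entry above: distributed tuple representations and LSH-based blocking. It replaces the paper's LSTM composition with a plain average of word vectors and uses random stand-in embeddings and random-hyperplane hashing, so every vector, token, and parameter below is an assumption for illustration, not DeepER's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Stand-in word embeddings (a real system would load pre-trained vectors).
vocab = {w: rng.normal(size=DIM)
         for w in ["john", "jon", "smith", "ann", "lee", "new", "york", "boston"]}

def tuple_vector(tup):
    """Distributed representation of a tuple: here simply the average of the
    word vectors of all attribute tokens (the paper composes them with RNNs)."""
    tokens = [tok for attr in tup for tok in attr.lower().split()]
    vecs = [vocab[tok] for tok in tokens if tok in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

# LSH blocking with random hyperplanes: tuples whose vectors land on the same
# side of every hyperplane share a key and become candidate matches.
N_PLANES = 4
planes = rng.normal(size=(N_PLANES, DIM))

def block_key(vec):
    return tuple((planes @ vec > 0).astype(int))

tuples = [("John Smith", "New York"), ("Jon Smith", "New York"), ("Ann Lee", "Boston")]
blocks = {}
for tup in tuples:
    blocks.setdefault(block_key(tuple_vector(tup)), []).append(tup)
print(blocks)  # candidate blocks keyed by LSH signature
```

With real embeddings, near-duplicate tuples such as the first two tend to fall into the same block, so the expensive pairwise matching model only needs to run within each block rather than over all pairs.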

Book ChapterDOI
01 Jan 1991
TL;DR: This work introduces "abstract maps," an analogical representation that inherently reflects the structure of the represented domain, and demonstrates their use in spatial reasoning; the scheme also facilitates "coarse" reasoning and the hierarchical organization of knowledge.
Abstract: There have been some straightforward efforts to extend Allen’s interval-based temporal logic to spatial dimensions by using Cartesian tuples of relations (Guesgen, 1989). We take a different approach based on a study of the kind of information that best relates two entities in 2-dimensional space qualitatively. The relevant spatial categories turn out to be “projection” and “orientation.” We define a small set of spatial relations and stress the importance of making their reference frames explicit. Furthermore, we introduce “abstract maps,” an analogical representation that inherently reflects the structure of the represented domain, and demonstrate their use in spatial reasoning. This scheme also facilitates “coarse” reasoning and the hierarchical organization of knowledge. These representational issues form the basis for an experimental system to develop “cognitive maps” from 2-D scanned layout plans of buildings.

106 citations
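For a flavour of projection-based qualitative spatial relations of the kind the abstract discusses (closest to the Guesgen-style Cartesian tuples of interval relations it contrasts itself with), here is a small Python sketch that classifies how two axis-aligned rectangles relate along each projection axis. The relation names and the rectangle abstraction are assumptions for illustration, not the chapter's definitions.

```python
# Qualitative relation between two axis-aligned rectangles, computed
# independently per projection axis by comparing the projected intervals.
def interval_relation(a_lo, a_hi, b_lo, b_hi):
    if a_hi < b_lo:
        return "before"
    if b_hi < a_lo:
        return "after"
    return "overlaps"

def spatial_relation(a, b):
    """a, b are rectangles (xmin, ymin, xmax, ymax); returns the qualitative
    relation of a to b as one interval relation per projection axis."""
    ax_lo, ay_lo, ax_hi, ay_hi = a
    bx_lo, by_lo, bx_hi, by_hi = b
    return {
        "x": interval_relation(ax_lo, ax_hi, bx_lo, bx_hi),
        "y": interval_relation(ay_lo, ay_hi, by_lo, by_hi),
    }

room = (0, 0, 4, 3)
corridor = (5, 0, 9, 3)
print(spatial_relation(room, corridor))  # {'x': 'before', 'y': 'overlaps'}
```

Note that such relations are only meaningful relative to an explicit reference frame, which is exactly the point the abstract stresses.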

Book ChapterDOI
TL;DR: A modal logic is presented that permits reasoning about behavioural properties of systems, along with various type systems that help in controlling agents' movements and actions in Klaim.
Abstract: Klaim (Kernel Language for Agents Interaction and Mobility) is an experimental language specifically designed to program distributed systems consisting of several mobile components that interact through multiple distributed tuple spaces. Klaim primitives allow programmers to distribute and retrieve data and processes to and from the nodes of a net. Moreover, localities are first-class citizens that can be dynamically created and communicated over the network. Components, both stationary and mobile, can explicitly refer to and control the spatial structures of the network. This paper reports on the experience gained in the design and development of Klaim. Its main purpose is to outline the theoretical foundations of the main features of Klaim and its programming model. We also present a modal logic that permits reasoning about behavioural properties of systems, and various type systems that help in controlling agents' movements and actions. Extensions of the language in the direction of object-oriented programming are also discussed, together with a description of the implementation efforts which have led to the current prototypes.

105 citations
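Klaim generalises Linda-style tuple-space coordination to multiple distributed tuple spaces with first-class localities. The Python sketch below shows only the generic single-space primitives (out, read, in with wildcard matching), not Klaim's distributed model; and unlike real Linda operations it returns None instead of blocking when no tuple matches.

```python
# Minimal, single-process sketch of Linda-style tuple-space primitives
# (the coordination model that Klaim extends with distribution and mobility).
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        """Deposit a tuple into the space."""
        self._tuples.append(tup)

    def _match(self, tup, pattern):
        # None in a pattern acts as a wildcard (formal field).
        return len(tup) == len(pattern) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def read(self, pattern):
        """Non-destructively return a tuple matching the pattern, or None."""
        return next((t for t in self._tuples if self._match(t, pattern)), None)

    def in_(self, pattern):
        """Withdraw (remove and return) a matching tuple, or None."""
        t = self.read(pattern)
        if t is not None:
            self._tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("temperature", "room1", 21.5))
print(ts.read(("temperature", "room1", None)))  # ('temperature', 'room1', 21.5)
print(ts.in_(("temperature", None, None)))      # withdraws the tuple
print(ts.read(("temperature", None, None)))     # None: it is gone
```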


Network Information
Related Topics (5)
Graph (abstract data type): 69.9K papers, 1.2M citations (86% related)
Time complexity: 36K papers, 879.5K citations (85% related)
Server: 79.5K papers, 1.4M citations (83% related)
Scalability: 50.9K papers, 931.6K citations (83% related)
Polynomial: 52.6K papers, 853.1K citations (81% related)
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    203
2022    459
2021    210
2020    285
2019    306
2018    266