Author

Amnon Lotem

Bio: Amnon Lotem is an academic researcher from the University of Maryland, College Park. The author has contributed to research on topics including Flow shop scheduling and Domain (software engineering), has an h-index of 10, and has co-authored 11 publications receiving 3,040 citations.

Papers
Journal ArticleDOI
TL;DR: An elegant and remarkably simple algorithm ("the threshold algorithm", or TA) is analyzed and shown to be optimal in a much stronger sense than FA: it is essentially optimal not just for some monotone aggregation functions but for all of them, and not just in a high-probability worst-case sense but over every database.

1,315 citations

Proceedings ArticleDOI
01 May 2001
TL;DR: An elegant and remarkably simple algorithm is analyzed and shown to be optimal in a much stronger sense than FA: it is essentially optimal not just for some monotone aggregation functions but for all of them, and not just in a high-probability sense but over every database.
Abstract: Assume that each object in a database has m grades, or scores, one for each of m attributes. For example, an object can have a color grade, that tells how red it is, and a shape grade, that tells how round it is. For each attribute, there is a sorted list, which lists each object and its grade under that attribute, sorted by grade (highest grade first). There is some monotone aggregation function, or combining rule, such as min or average, that combines the individual grades to obtain an overall grade. To determine objects that have the best overall grades, the naive algorithm must access every object in the database, to find its grade under each attribute. Fagin has given an algorithm (“Fagin's Algorithm”, or FA) that is much more efficient. For some distributions on grades, and for some monotone aggregation functions, FA is optimal in a high-probability sense. We analyze an elegant and remarkably simple algorithm (“the threshold algorithm”, or TA) that is optimal in a much stronger sense than FA. We show that TA is essentially optimal, not just for some monotone aggregation functions, but for all of them, and not just in a high-probability sense, but over every database. Unlike FA, which requires large buffers (whose size may grow unboundedly as the database size grows), TA requires only a small, constant-size buffer. We distinguish two types of access: sorted access (where the middleware system obtains the grade of an object in some sorted list by proceeding through the list sequentially from the top), and random access (where the middleware system requests the grade of an object in a list, and obtains it in one step). We consider the scenarios where random access is either impossible, or expensive relative to sorted access, and provide algorithms that are essentially optimal for these cases as well.
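The gist of TA can be captured in a short sketch. The Python code below is an illustrative reading of the algorithm as described in the abstract, not the authors' implementation; the interfaces (sorted_lists, random_access, aggregate) are hypothetical names chosen for the example.

```python
def threshold_algorithm(sorted_lists, random_access, aggregate, k):
    """Illustrative sketch of the threshold algorithm (TA).

    sorted_lists  -- m lists of (object_id, grade) pairs, each sorted by grade descending
    random_access -- random_access(obj, i) returns obj's grade in list i
    aggregate     -- monotone combining function over a sequence of m grades (e.g. min)
    k             -- number of top objects to return
    """
    m = len(sorted_lists)
    iters = [iter(lst) for lst in sorted_lists]
    seen = {}  # object_id -> overall grade (TA proper only needs to buffer the current top k)
    while True:
        last_grades = []
        for i, it in enumerate(iters):
            try:
                obj, grade = next(it)
            except StopIteration:
                # a list is exhausted, so every object has been seen at least once
                return sorted(seen.items(), key=lambda kv: kv[1], reverse=True)[:k]
            last_grades.append(grade)
            if obj not in seen:
                # random access fetches obj's grades in the other lists
                grades = [grade if j == i else random_access(obj, j) for j in range(m)]
                seen[obj] = aggregate(grades)
        # threshold: the best overall grade any not-yet-seen object could still achieve
        threshold = aggregate(last_grades)
        top = sorted(seen.items(), key=lambda kv: kv[1], reverse=True)[:k]
        if len(top) == k and top[-1][1] >= threshold:
            return top

# Toy run: two attribute lists, min as the aggregation function, top-1 requested.
lists = [[("a", 0.9), ("b", 0.8), ("c", 0.1)],
         [("b", 0.95), ("a", 0.7), ("c", 0.2)]]
grades = {("a", 0): 0.9, ("b", 0): 0.8, ("c", 0): 0.1,
          ("a", 1): 0.7, ("b", 1): 0.95, ("c", 1): 0.2}
print(threshold_algorithm(lists, lambda o, i: grades[(o, i)], min, 1))  # [('b', 0.8)]
```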

908 citations

Proceedings Article
31 Jul 1999
TL;DR: In the authors' tests, SHOP was several orders of magnitude faster than Blackbox and several times faster than TLplan, even though SHOP is coded in Lisp and the other planners are coded in C.
Abstract: SHOP (Simple Hierarchical Ordered Planner) is a domain-independent HTN planning system with the following characteristics.
• SHOP plans for tasks in the same order that they will later be executed. This avoids some goal-interaction issues that arise in other HTN planners, so that the planning algorithm is relatively simple.
• Since SHOP knows the complete world-state at each step of the planning process, it can use highly expressive domain representations. For example, it can do planning problems that require complex numeric computations.
• In our tests, SHOP was several orders of magnitude faster than Blackbox and several times faster than TLplan, even though SHOP is coded in Lisp and the other planners are coded in C.
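To make the "plan in execution order" idea concrete, here is a minimal Python sketch of SHOP-style forward decomposition. It illustrates the general idea only (in the spirit of later simplified reimplementations rather than the SHOP code itself); the state, operator, and method representations are hypothetical.

```python
def shop_plan(state, tasks, operators, methods):
    """Minimal sketch of SHOP-style forward decomposition (not the SHOP implementation).

    state     -- dict describing the current world state (hypothetical representation)
    tasks     -- list of tasks, each a tuple (name, arg1, arg2, ...)
    operators -- primitive task name -> function op(state, *args) returning the new
                 state, or None if the operator is not applicable
    methods   -- compound task name -> list of functions m(state, *args) returning a
                 list of subtasks, or None if the method is not applicable
    Returns a plan (list of primitive tasks) or None if none is found.
    """
    if not tasks:
        return []
    name, *args = tasks[0]
    rest = tasks[1:]
    if name in operators:
        # primitive task: apply it immediately, so the full state is always known
        # (dict(state) is a shallow copy; a real planner would copy nested state)
        new_state = operators[name](dict(state), *args)
        if new_state is not None:
            tail = shop_plan(new_state, rest, operators, methods)
            if tail is not None:
                return [tasks[0]] + tail
    for method in methods.get(name, []):
        # compound task: try each decomposition in turn
        subtasks = method(state, *args)
        if subtasks is not None:
            plan = shop_plan(state, subtasks + rest, operators, methods)
            if plan is not None:
                return plan
    return None

# Hypothetical toy domain: "travel" decomposes into a single "walk" operator.
operators = {"walk": lambda s, a, b: {**s, "loc": b} if s["loc"] == a else None}
methods = {"travel": [lambda s, a, b: [("walk", a, b)]]}
print(shop_plan({"loc": "home"}, [("travel", "home", "park")], operators, methods))
# -> [('walk', 'home', 'park')]
```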

499 citations

Proceedings Article
04 Aug 2001
TL;DR: The experimental results suggest that in some problem domains, the difficulty of writing SHOP knowledge bases derives from SHOP's total-ordering requirement--and that in such cases, SHOP2 can plan as efficiently as SHOP using knowledge bases simpler than those needed by SHOP.
Abstract: One of the more controversial recent planning algorithms is the SHOP algorithm, an HTN planning algorithm that plans for tasks in the same order that they are to be executed. SHOP can use domain-dependent knowledge to generate plans very quickly, but it can be difficult to write good knowledge bases for SHOP. Our hypothesis is that this difficulty is because SHOP's total-ordering requirement for the subtasks of its methods is more restrictive than it needs to be. To examine this hypothesis, we have developed a new HTN planning algorithm called SHOP2. Like SHOP, SHOP2 is sound and complete, and it constructs plans in the same order that they will later be executed. But unlike SHOP, SHOP2 allows the subtasks of each method to be partially ordered. Our experimental results suggest that in some problem domains, the difficulty of writing SHOP knowledge bases derives from SHOP's total-ordering requirement--and that in such cases, SHOP2 can plan as efficiently as SHOP using knowledge bases simpler than those needed by SHOP.
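As a tiny illustration of what partially ordered subtasks buy, the sketch below (a hypothetical representation, not SHOP2's) shows a task network with explicit ordering constraints and the selection of tasks that are currently unconstrained; this extra freedom is what SHOP2's methods have over SHOP's totally ordered task lists.

```python
def ready_tasks(task_ids, orderings):
    """Tasks with no pending predecessor under a partial order (illustrative only).

    task_ids  -- identifiers of the not-yet-handled subtasks
    orderings -- set of (before, after) pairs constraining some of them
    A SHOP2-style planner may pick any returned task to decompose or execute next,
    whereas a totally ordered task list leaves exactly one choice.
    """
    blocked = {after for before, after in orderings if before in task_ids}
    return [t for t in task_ids if t not in blocked]

# t1 and t2 are unordered relative to each other; both must precede t3.
print(ready_tasks(["t1", "t2", "t3"], {("t1", "t3"), ("t2", "t3")}))  # ['t1', 't2']
```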

146 citations

Proceedings Article
31 Jul 1999
TL;DR: The main objective is to show that a carefully chosen IP formulation significantly improves the "strength" of the LP relaxation, and that the resultant LPs are useful in solving the IP and the associated planning problems.
Abstract: Recent research has shown the promise of using propositional reasoning and search to solve AI planning problems. In this paper, we further explore this area by applying Integer Programming to solve AI planning problems. The application of Integer Programming to AI planning has a potentially significant advantage, as it allows quite naturally for the incorporation of numerical constraints and objectives into the planning domain. Moreover, the application of Integer Programming to AI planning addresses one of the challenges in propositional reasoning posed by Kautz and Selman, who conjectured that the principal technique used to solve Integer Programs--the linear programming (LP) relaxation--is not useful when applied to propositional search. We discuss various IP formulations for the class of planning problems based on STRIPS-style planning operators. Our main objective is to show that a carefully chosen IP formulation significantly improves the "strength" of the LP relaxation, and that the resultant LPs are useful in solving the IP and the associated planning problems. Our results clearly show the importance of choosing the "right" representation, and more generally the promise of using Integer Programming techniques in the AI planning domain.
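For readers who have not seen such an encoding, the block below sketches one generic 0-1 integer programming formulation of STRIPS planning over a fixed horizon T. It is a textbook-style state-change encoding given for orientation only, not necessarily any of the specific formulations studied in the paper; dropping the integrality constraints yields the LP relaxation whose strength the paper investigates.

```latex
% 0-1 variables, for steps t = 1..T:
%   x_{a,t} = 1  iff action a is executed at step t
%   y_{f,t} = 1  iff fluent f holds after step t   (y_{f,0} encodes the initial state)
\begin{align*}
\min\ & \textstyle\sum_{a,t} x_{a,t} \\
\text{s.t. }\ & y_{f,0} = 1 \iff f \in \mathit{Init} \\
& x_{a,t} \le y_{f,t-1} \quad \forall f \in \mathit{pre}(a) && \text{(preconditions hold)} \\
& x_{a,t} \le y_{f,t} \quad \forall f \in \mathit{add}(a), \qquad
  x_{a,t} \le 1 - y_{f,t} \quad \forall f \in \mathit{del}(a) && \text{(effects take hold)} \\
& y_{f,t} - y_{f,t-1} \le \textstyle\sum_{a:\, f \in \mathit{add}(a)} x_{a,t} && \text{(becoming true requires an adder)} \\
& y_{f,t-1} - y_{f,t} \le \textstyle\sum_{a:\, f \in \mathit{del}(a)} x_{a,t} && \text{(becoming false requires a deleter)} \\
& \textstyle\sum_{a} x_{a,t} \le 1 && \text{(sequential plan: at most one action per step)} \\
& y_{g,T} = 1 \quad \forall g \in \mathit{Goal}, \qquad
  x_{a,t},\, y_{f,t} \in \{0,1\} && \text{(relax to $[0,1]$ for the LP relaxation)}
\end{align*}
```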

120 citations


Cited by
Journal ArticleDOI
01 Aug 2011
TL;DR: Under the meta path framework, a novel similarity measure called PathSim is defined that is able to find peer objects in the network (e.g., find authors in the similar field and with similar reputation), which turns out to be more meaningful in many scenarios compared with random-walk based similarity measures.
Abstract: Similarity search is a primitive operation in database and Web search engines. With the advent of large-scale heterogeneous information networks that consist of multi-typed, interconnected objects, such as the bibliographic networks and social media networks, it is important to study similarity search in such networks. Intuitively, two objects are similar if they are linked by many paths in the network. However, most existing similarity measures are defined for homogeneous networks. Different semantic meanings behind paths are not taken into consideration. Thus they cannot be directly applied to heterogeneous networks. In this paper, we study similarity search that is defined among the same type of objects in heterogeneous networks. Moreover, by considering different linkage paths in a network, one could derive various similarity semantics. Therefore, we introduce the concept of meta path-based similarity, where a meta path is a path consisting of a sequence of relations defined between different object types (i.e., structural paths at the meta level). No matter whether a user would like to explicitly specify a path combination given sufficient domain knowledge, or choose the best path by experimental trials, or simply provide training examples to learn it, meta path forms a common base for a network-based similarity search engine. In particular, under the meta path framework we define a novel similarity measure called PathSim that is able to find peer objects in the network (e.g., find authors in the similar field and with similar reputation), which turns out to be more meaningful in many scenarios compared with random-walk based similarity measures. In order to support fast online query processing for PathSim queries, we develop an efficient solution that partially materializes short meta paths and then concatenates them online to compute top-k results. Experiments on real data sets demonstrate the effectiveness and efficiency of our proposed paradigm.
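The measure itself boils down to a normalized meta-path instance count. The sketch below is a minimal NumPy rendering of that idea for a symmetric meta path (e.g. author-paper-author), using a hypothetical incidence matrix W; it illustrates the measure only, not the paper's partial-materialization and top-k machinery.

```python
import numpy as np

def pathsim(W):
    """PathSim for the symmetric meta path encoded by W (e.g. an author-paper incidence matrix).

    M = W W^T counts meta-path instances between pairs of same-typed objects, and
    PathSim(x, y) = 2 * M[x, y] / (M[x, x] + M[y, y]).
    """
    M = W @ W.T
    diag = np.diag(M).astype(float)
    denom = diag[:, None] + diag[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom > 0, 2 * M / denom, 0.0)

# Toy example: 3 authors x 4 papers (rows = authors, columns = papers).
W = np.array([[1, 1, 0, 0],
              [1, 1, 1, 1],
              [0, 0, 0, 1]])
print(np.round(pathsim(W), 2))  # peers with similar productivity score higher than mere co-publishers
```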

1,583 citations

Journal ArticleDOI
TL;DR: PDDL2.1, as discussed by the authors, is a modelling language capable of expressing temporal and numeric properties of planning domains; it was developed for the third International Planning Competition (2002), the competitions having driven progress in planning since 1998.
Abstract: In recent years research in the planning community has moved increasingly towards application of planners to realistic problems involving both time and many types of resources. For example, interest in planning demonstrated by the space research community has inspired work in observation scheduling, planetary rover exploration and spacecraft control domains. Other temporal and resource-intensive domains including logistics planning, plant control and manufacturing have also helped to focus the community on the modelling and reasoning issues that must be confronted to make planning technology meet the challenges of application. The International Planning Competitions have acted as an important motivating force behind the progress that has been made in planning since 1998. The third competition (held in 2002) set the planning community the challenge of handling time and numeric resources. This necessitated the development of a modelling language capable of expressing temporal and numeric properties of planning domains. In this paper we describe the language, PDDL2.1, that was used in the competition. We describe the syntax of the language, its formal semantics and the validation of concurrent plans. We observe that PDDL2.1 has considerable modelling power -- exceeding the capabilities of current planning technology -- and presents a number of important challenges to the research community.

1,420 citations

Journal ArticleDOI
31 Aug 2004
TL;DR: It is shown that the data complexity of some queries is #P-complete, which implies that these queries do not admit any efficient evaluation methods, and an optimization algorithm is described that can efficiently compute most queries.
Abstract: We describe a system that supports arbitrarily complex SQL queries on probabilistic databases. The query semantics is based on a probabilistic model and the results are ranked, much like in Information Retrieval. Our main focus is efficient query evaluation, a problem that has not received attention in the past. We describe an optimization algorithm that can compute efficiently most queries. We show, however, that the data complexity of some queries is #P-complete, which implies that these queries do not admit any efficient evaluation methods. For these queries we describe both an approximation algorithm and a Monte-Carlo simulation algorithm.
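As a rough illustration of the Monte-Carlo fallback mentioned in the abstract, the sketch below estimates a query's probability on a tuple-independent probabilistic table by sampling possible worlds. It is a naive sampler for intuition only, not the paper's algorithm, and the tuples/query interfaces are hypothetical.

```python
import random

def estimate_query_probability(tuples, query, trials=10_000, seed=0):
    """Estimate Pr[query holds] over a tuple-independent probabilistic relation.

    tuples -- list of (row, probability) pairs; each row is included independently
    query  -- boolean function over a list of rows (one sampled possible world)
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        world = [row for row, p in tuples if rng.random() < p]
        if query(world):
            hits += 1
    return hits / trials

# Toy example: probability that some row has value "a".
table = [(("a",), 0.5), (("b",), 0.9), (("a",), 0.2)]
print(estimate_query_probability(table, lambda world: any(r[0] == "a" for r in world)))
# Exact answer: 1 - 0.5 * 0.8 = 0.6
```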

1,113 citations

Journal ArticleDOI
01 Mar 2005
TL;DR: In this paper, a branch-and-bound skyline (BBS) algorithm based on nearest-neighbor search is proposed, which is I/O optimal and performs a single access only to those nodes that may contain skyline points.
Abstract: The skyline of a d-dimensional dataset contains the points that are not dominated by any other point on all dimensions. Skyline computation has recently received considerable attention in the database community, especially for progressive methods that can quickly return the initial results without reading the entire database. All the existing algorithms, however, have some serious shortcomings which limit their applicability in practice. In this article we develop branch-and-bound skyline (BBS), an algorithm based on nearest-neighbor search, which is I/O optimal, that is, it performs a single access only to those nodes that may contain skyline points. BBS is simple to implement and supports all types of progressive processing (e.g., user preferences, arbitrary dimensionality, etc.). Furthermore, we propose several interesting variations of skyline computation, and show how BBS can be applied for their efficient processing.
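For context, the block below is a minimal Python sketch of the dominance test and the naive skyline computation it induces (assuming smaller is better on every dimension). BBS's contribution, per the abstract, is to avoid this full scan by expanding R-tree entries in ascending mindist order and pruning entries dominated by points already found; that indexing machinery is omitted here.

```python
def dominates(p, q):
    # p dominates q: p is at least as good on every dimension and strictly better on one
    # (here "better" means smaller).
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    result = []
    for p in points:
        if any(dominates(q, p) for q in result):
            continue                                          # p is dominated, discard it
        result = [q for q in result if not dominates(p, q)]   # p may dominate earlier picks
        result.append(p)
    return result

# Toy example: hotels as (price, distance_to_beach).
print(skyline([(100, 5), (80, 8), (120, 2), (110, 6), (90, 9)]))
# -> [(100, 5), (80, 8), (120, 2)]
```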

905 citations

Book ChapterDOI
06 Jul 2004
TL;DR: This paper shows how to use OWL-S in conjunction with Web service standards, and explains and illustrates the value added by the semantics expressed in OWL-S.
Abstract: Service interface description languages such as WSDL, and related standards, are evolving rapidly to provide a foundation for interoperation between Web services. At the same time, Semantic Web service technologies, such as the Ontology Web Language for Services (OWL-S), are developing the means by which services can be given richer semantic specifications. Richer semantics can enable fuller, more flexible automation of service provision and use, and support the construction of more powerful tools and methodologies. Both sets of technologies can benefit from complementary uses and cross-fertilization of ideas. This paper shows how to use OWL-S in conjunction with Web service standards, and explains and illustrates the value added by the semantics expressed in OWL-S.

896 citations