
Showing papers on "Ranking (information retrieval) published in 1998"


Proceedings Article
24 Jul 1998
TL;DR: RankBoost as discussed by the authors is an algorithm for combining preferences based on the boosting approach to machine learning; it can be applied in several settings, such as combining the results of different search engines, or the "collaborative filtering" problem of ranking movies for a user based on movie rankings provided by other users.
Abstract: We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the "collaborative-filtering" problem of ranking movies for a user based on the movie rankings provided by other users. In this work, we begin by presenting a formal framework for this general problem. We then describe and analyze an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning. We give theoretical results describing the algorithm's behavior both on the training data, and on new test data not seen during training. We also describe an efficient implementation of the algorithm for a particular restricted but common case. We next discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different web search strategies, each of which is a query expansion for a given domain. The second experiment is a collaborative-filtering task for making movie recommendations.
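The combining-preferences setting can be sketched concretely (an illustrative Python sketch of the pairwise framework only, not the RankBoost algorithm itself; the fixed weights below stand in for the weights boosting would learn):

```python
def rank_loss(scores, prefs):
    """Fraction of preference pairs (a, b), meaning 'a should rank
    above b', that a scoring function gets wrong."""
    return sum(1 for a, b in prefs if scores[a] <= scores[b]) / len(prefs)

def combine(rankers, weights, objects):
    """Score each object by a weighted sum of base ranking functions."""
    return {x: sum(w * r(x) for r, w in zip(rankers, weights))
            for x in objects}

# Two base rankers (e.g. two search engines) that partially disagree.
r1 = {"a": 3, "b": 2, "c": 1}.get
r2 = {"a": 1, "b": 3, "c": 2}.get
prefs = [("a", "b"), ("b", "c"), ("a", "c")]  # desired ordering: a > b > c

scores = combine([r1, r2], [0.7, 0.3], ["a", "b", "c"])
```

The learning problem RankBoost solves is choosing the weights (and base rankers) so that `rank_loss` is minimized on the given preference pairs.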

1,888 citations


Proceedings ArticleDOI
01 Jan 1998
TL;DR: The Maximal Marginal Relevance (MMR) criterion as mentioned in this paper aims to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and selecting appropriate passages for text summarization.
Abstract: This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization. The latter are borne out by the recent results of the SUMMAC conference in the evaluation of summarization systems. However, the clearest advantage is demonstrated in constructing non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection.
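In formula form, MMR greedily selects the document d maximizing lambda * Sim1(d, Q) - (1 - lambda) * max over already-selected d' of Sim2(d, d'). A minimal Python sketch (the function and the toy similarity tables below are illustrative assumptions, not taken from the paper):

```python
def mmr_rerank(query_sim, doc_sim, docs, lam=0.7, k=None):
    """Greedy Maximal Marginal Relevance re-ranking.
    query_sim[d]   : relevance of document d to the query (Sim1)
    doc_sim[d][d2] : similarity between documents d and d2 (Sim2)
    lam trades off relevance (lam=1) against novelty (lam=0)."""
    selected, remaining = [], list(docs)
    k = k if k is not None else len(docs)
    while remaining and len(selected) < k:
        def mmr(d):
            redundancy = max((doc_sim[d][s] for s in selected), default=0.0)
            return lam * query_sim[d] - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: d1 and d2 are near-duplicates, d3 is novel but less relevant.
query_sim = {"d1": 0.9, "d2": 0.85, "d3": 0.3}
doc_sim = {"d1": {"d2": 0.95, "d3": 0.1},
           "d2": {"d1": 0.95, "d3": 0.1},
           "d3": {"d1": 0.1, "d2": 0.1}}
order = mmr_rerank(query_sim, doc_sim, ["d1", "d2", "d3"], lam=0.5)
```

With lam = 0.5 the redundant near-duplicate d2 is pushed below the novel d3, which is exactly the diversity effect the paper reports for multi-document summarization.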

1,479 citations


Patent
09 Jan 1998
TL;DR: In this article, the authors propose a method to assign importance ranks to nodes in a linked database, such as any database of documents containing citations, the world wide web or any other hypermedia database.
Abstract: A method assigns importance ranks to nodes in a linked database, such as any database of documents containing citations, the world wide web or any other hypermedia database. The rank assigned to a document is calculated from the ranks of documents citing it. In addition, the rank of a document is calculated from a constant representing the probability that a browser through the database will randomly jump to the document. The method is particularly useful in enhancing the performance of search engine results for hypermedia databases, such as the world wide web, whose documents have a large variation in quality.
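The rank computation described above (a page's rank derived from the ranks of pages citing it, plus a constant random-jump term) can be approximated by simple power iteration. A hedged Python sketch; the 0.85 damping value and the dangling-node handling are conventional choices, not taken from the patent text:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: {node: [cited nodes]}. Returns importance ranks where a
    node's rank comes from the ranks of nodes citing it, plus a constant
    (1 - damping) / N 'random jump' term."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:          # v passes rank to pages it cites
                    new[w] += share
            else:                       # dangling node: spread uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

# A three-page citation cycle: by symmetry all ranks converge to 1/3.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a"]})
```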

939 citations


Patent
18 Dec 1998
TL;DR: In this paper, the authors present a software facility for identifying the items most relevant to a current query based on items selected in connection with similar queries; the facility produces a ranking value for each item by combining the relative frequencies with which users selected that item from the results of queries specifying each of the current query's terms, and identifies as most relevant those items with the highest ranking values.
Abstract: The present invention provides a software facility for identifying the items most relevant to a current query based on items selected in connection with similar queries. In preferred embodiments of the invention, the facility receives a query specifying one or more query terms. In response, the facility generates a query result identifying a plurality of items that satisfy the query. The facility then produces a ranking value for at least a portion of the items identified in the query result by combining the relative frequencies with which users selected that item from the query results generated from queries specifying each of the terms specified by the query. The facility identifies as most relevant those items having the highest ranking values.
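The ranking-value computation the patent describes might look roughly like this (a Python sketch; the summed combination and the data layout are assumptions, since the claim language does not fix them):

```python
def rank_by_selections(query_terms, select_freq, items):
    """select_freq[term][item]: relative frequency with which users
    selected `item` from results of past queries containing `term`.
    A ranking value combines (here: sums) those frequencies over the
    current query's terms; the highest value is deemed most relevant."""
    def value(item):
        return sum(select_freq.get(t, {}).get(item, 0.0)
                   for t in query_terms)
    return sorted(items, key=value, reverse=True)

# Hypothetical selection statistics for a two-term query.
select_freq = {"java": {"book1": 0.6, "book2": 0.2},
               "swing": {"book2": 0.7}}
ranked = rank_by_selections(["java", "swing"], select_freq,
                            ["book1", "book2", "book3"])
```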

527 citations


Proceedings ArticleDOI
01 Aug 1998
TL;DR: An investigation into the utility of document summarisation in the context of information retrieval, specifically the application of so-called query-biased (or user-directed) summaries, indicates that the use of query-biased summaries significantly improves both the accuracy and speed of user relevance judgements.
Abstract: This paper presents an investigation into the utility of document summarisation in the context of information retrieval, more specifically in the application of so called query biased (or user directed) summaries: summaries customised to reflect the information need expressed in a query. Employed in the retrieved document list displayed after a retrieval took place, the summaries' utility was evaluated in a task-based environment by measuring users' speed and accuracy in identifying relevant documents. This was compared to the performance achieved when users were presented with the more typical output of an IR system: a static predefined summary composed of the title and first few sentences of retrieved documents. The results from the evaluation indicate that the use of query biased summaries significantly improves both the accuracy and speed of user relevance judgements.

493 citations


Journal ArticleDOI
01 Apr 1998
TL;DR: It is demonstrated that it is surprisingly difficult to identify which techniques work best, and comment on the experimental methodology required to support any claims as to the superiority of one method over another.
Abstract: Ranked queries are used to locate relevant documents in text databases. In a ranked query a list of terms is specified, then the documents that most closely match the query are returned---in decreasing order of similarity---as answers. Crucial to the efficacy of ranked querying is the use of a similarity heuristic, a mechanism that assigns a numeric score indicating how closely a document and the query match. In this note we explore and categorise a range of similarity heuristics described in the literature. We have implemented all of these measures in a structured way, and have carried out retrieval experiments with a substantial subset of these measures. Our purpose with this work is threefold: first, in enumerating the various measures in an orthogonal framework we make it straightforward for other researchers to describe and discuss similarity measures; second, by experimenting with a wide range of the measures, we hope to observe which features yield good retrieval behaviour in a variety of retrieval environments; and third, by describing our results so far, to gather feedback on the issues we have uncovered. We demonstrate that it is surprisingly difficult to identify which techniques work best, and comment on the experimental methodology required to support any claims as to the superiority of one method over another.
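As a concrete instance of the similarity heuristics being categorised, here is one common family member, the cosine measure over raw term-frequency vectors (a Python sketch; the paper compares many weighting and normalisation variants of exactly this kind):

```python
import math
from collections import Counter

def cosine(query_terms, doc_terms):
    """One similarity heuristic: cosine of the angle between raw
    term-frequency vectors (no idf weighting in this variant)."""
    q, d = Counter(query_terms), Counter(doc_terms)
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values())) *
            math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def ranked_query(query_terms, docs):
    """Return documents in decreasing order of similarity."""
    return sorted(docs, key=lambda d: cosine(query_terms, d), reverse=True)

docs = [["fish"], ["cat", "dog", "cat"], ["dog"]]
ranked = ranked_query(["cat", "dog"], docs)
```

Swapping in a different term-weighting or length-normalisation scheme changes only the body of `cosine`, which is what makes a structured comparison of the heuristics feasible.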

416 citations


Proceedings ArticleDOI
01 Jun 1998
TL;DR: An algorithm that detects sub-optimality of a query execution plan during query execution and attempts to correct the problem is described, and it is reported that this can result in significant improvements in the performance of complex queries.
Abstract: For a number of reasons, even the best query optimizers can very often produce sub-optimal query execution plans, leading to a significant degradation of performance. This is especially true in databases used for complex decision support queries and/or object-relational databases. In this paper, we describe an algorithm that detects sub-optimality of a query execution plan during query execution and attempts to correct the problem. The basic idea is to collect statistics at key points during the execution of a complex query. These statistics are then used to optimize the execution of the query, either by improving the resource allocation for that query, or by changing the execution plan for the remainder of the query. To ensure that this does not significantly slow down the normal execution of a query, the Query Optimizer carefully chooses what statistics to collect, when to collect them, and the circumstances under which to re-optimize the query. We describe an implementation of this algorithm in the Paradise Database System, and we report on performance studies, which indicate that this can result in significant improvements in the performance of complex queries.

405 citations


Journal ArticleDOI
TL;DR: In this article, a cost-effectiveness methodology is constructed, which results in a particular formula that can be used as a criterion to rank projects, and the ranking criterion is sufficiently operational to be useful in suggesting what to look at when determining actual conservation priorities among endangered species.
Abstract: This paper is about the economic theory of biodiversity preservation. A cost-effectiveness methodology is constructed, which results in a particular formula that can be used as a criterion to rank projects. The ranking criterion is sufficiently operational to be useful in suggesting what to look at when determining actual conservation priorities among endangered species. At the same time, the formula is firmly rooted in a mathematically rigorous optimization framework, so that its theoretical underpinnings are clear. The underlying model, called the Noah's Ark Problem, is intended to be a kind of canonical form that hones down to its analytical essence the problem of best preserving diversity under a limited budget constraint.

400 citations


Patent
Jiong Wu1
02 Nov 1998
TL;DR: In this paper, a search query is applied to documents in a document repository wherein the documents are organized into a hierarchy and a search engine searches the hierarchy to return documents which match a query term either directly or indirectly.
Abstract: A search query is applied to documents in a document repository wherein the documents are organized into a hierarchy. A search engine searches the hierarchy to return documents which match a query term either directly or indirectly. A specific embodiment of the search engine organizes the query term into individual subterms and matches the subterms against documents, returning only those documents which indirectly match the entire search query term and directly match at least one of the query subterms.

342 citations


Patent
03 Nov 1998
TL;DR: In this article, a method and apparatus are provided for retrieving documents, such as those available via the World Wide Web, from a collection of documents based on information other than the contents of a desired document.
Abstract: A method and apparatus are provided for retrieving documents from a collection of documents based on information other than the contents of a desired document. The collection of documents, which may be a hypertext system or documents available via the World Wide Web, is indexed. In one embodiment, an indexing process of a search engine receives one or more specifications that identify documents, or document locations, and non-content information such as a tag word or code word. The indexing process searches the index to identify all documents in the index that match one or more of the specifications. If a match is found, the tag word is added to the index, and information about the matching document is stored in the index in association with the tag word. A search query is submitted to the search engine. The search query is automatically modified to add a reference to the tag word, such as a query term that will exclude any index entry for a document associated with the tag word. The search is executed against the index, and a set of search results is generated. Accordingly, the search results automatically exclude all documents associated with the tag word. These techniques may be used, for example, to implement a Web search service that produces more accurate search results or that prevents certain documents, such as pornographic materials, from appearing in search results.

292 citations


Proceedings ArticleDOI
01 Aug 1998
TL;DR: There was only a slight difference in performance between the original English queries and the best cross-language queries, i.e., structured queries with medical dictionary and general dictionary translation.
Abstract: In this study, the effects of query structure and various setups of translation dictionaries on the performance of cross-language information retrieval (CLIR) were tested. The document collection was a subset of the TREC collection, and as test requests the study used TREC's health related topics. The test system was the INQUERY retrieval system. The performance of translated Finnish queries against English documents was compared to the performance of original English queries against English documents. Four natural language query types and three query translation methods, using a general dictionary and a domain specific (= medical) dictionary, were studied. There was only a slight difference in performance between the original English queries and the best cross-language queries, i.e., structured queries with medical dictionary and general dictionary translation. The structuring of queries was done on the basis of the output of dictionaries.

Journal ArticleDOI
01 Apr 1998
TL;DR: The authors describe a server that provides linkage information for all pages indexed by the AltaVista search engine, can produce the entire neighbourhood of a set L of URLs up to a given distance, and envisage numerous other applications such as ranking, visualization, and classification.
Abstract: We have built a server that provides linkage information for all pages indexed by the AltaVista search engine. In its basic operation, the server accepts a query consisting of a set L of one or more URLs and returns a list of all pages that point to pages in L (predecessors) and a list of all pages that are pointed to from pages in L (successors). More generally the server can produce the entire neighbourhood (in the graph theory sense) of L up to a given distance and can include information about all links that exist among pages in the neighbourhood. Although some of this information can be retrieved directly from AltaVista or other search engines, these engines are not optimized for this purpose and the process of constructing the neighbourhood of a given set of pages is slow and laborious. In contrast our prototype server needs less than 0.1 ms per result URL. So far we have built two applications that use the Connectivity Server: a direct interface that permits fast navigation of the Web via the predecessor/successor relation, and a visualization tool for the neighbourhood of a given set of pages. We envisage numerous other applications such as ranking, visualization, and classification.

Book ChapterDOI
21 Sep 1998
TL;DR: The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking.
Abstract: This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the Cranfield test collection indicates that the presented model outperforms the vector space model with classical tf×idf and cosine length normalisation.
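For reference, the classical tf×idf weighting that the paper gives a probabilistic justification for can be stated in a few lines (a common textbook variant; the paper's exact formulation differs in detail):

```python
import math

def tfidf_score(query, doc, docs):
    """Classical tf×idf: a query term contributes its frequency in the
    document (tf) times log(N / df), where df is the number of documents
    containing the term (idf). Terms appearing in every document get
    weight log(1) = 0, discounting stopword-like terms."""
    n = len(docs)
    def idf(t):
        df = sum(1 for d in docs if t in d)
        return math.log(n / df) if df else 0.0
    return sum(doc.count(t) * idf(t) for t in query)

# Toy collection: only the first document contains the query term "a".
docs = [["a", "b", "a"], ["b", "c"], ["c", "c"]]
scores = [tfidf_score(["a"], d, docs) for d in docs]
```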

Patent
16 Jul 1998
TL;DR: A system and method for relative ranking and contextual summarization of search hits from multiple distributed, heterogeneous information resources based upon the original content of each hit is disclosed in this article.
Abstract: A system and method for relative ranking and contextual summarization of search hits from multiple distributed, heterogeneous information resources based upon the original content of each hit is disclosed. In particular, the system and method of the present invention improve upon metasearch engine techniques by downloading the original documents (text or multimedia) identified by standard search engines as relevant and using the original content of each “hit” to re-rank them relative to each other according to the original query pattern for the search, providing a uniform ranking methodology for the user. The present invention is also directed to an improved summarization process where the downloaded documents are re-summarized relative to each other according to the original query pattern for the search, providing a uniform summarization methodology for the user.

Proceedings Article
09 Jan 1998
TL;DR: An information extraction system was adapted to act as a post-filter on the output of an IR system to improve precision on routing tasks and make it easier to write IE grammars for multiple topics.
Abstract: The authors describe an approach to applying a particular kind of Natural Language Processing (NLP) system to the TREC routing task in Information Retrieval (IR). Rather than attempting to use NLP techniques in indexing documents in a corpus, they adapted an information extraction (IE) system to act as a post-filter on the output of an IR system. The IE system was configured to score each of the top 2000 documents as determined by an IR system and on the basis of that score to rerank those 2000 documents. One aim was to improve precision on routing tasks. Another was to make it easier to write IE grammars for multiple topics.

Journal ArticleDOI
01 Apr 1998
TL;DR: Inquirus as discussed by the authors is a meta search engine that works by downloading and analyzing the individual documents, instead of working with the list of documents and summaries returned by search engines, as current meta search engines typically do.
Abstract: World Wide Web (WWW) search engines (e.g. AltaVista, Infoseek, HotBot, etc.) have a number of deficiencies including: periods of downtime, low coverage of the WWW, inconsistent and inefficient user interfaces, out of date databases, poor relevancy ranking and precision, and difficulties with spamming techniques. Meta search engines have been introduced which address some of these and other difficulties in searching the WWW. However, current meta search engines retain some of these difficulties and may also introduce their own problems (e.g. reduced relevance because one or more of the search engines returns results with poor relevance). We present Inquirus, the NECI meta search engine, which addresses many of the deficiencies in current techniques. Rather than working with the list of documents and summaries returned by search engines, as current meta search engines typically do, the Inquirus meta search engine works by downloading and analyzing the individual documents. The Inquirus meta search engine makes improvements over existing search engines in a number of areas, e.g.: more useful document summaries incorporating query term context, identification of both pages which no longer exist and pages which no longer contain the query terms, advanced detection of duplicate pages, improved document ranking using proximity information, dramatically improved precision for certain queries by using specific expressive forms, and quick jump links and highlighting when viewing the full documents.

Patent
10 Jun 1998
TL;DR: In this paper, a knowledge base comprising a plurality of nodes of terminology, arranged hierarchically, that reflect associations among the terminology is used to generate hierarchical query feedback to facilitate the user in reformulating the query.
Abstract: An information retrieval system generates hierarchical query feedback to a user to facilitate the user in reformulating the query. The information retrieval system, which supports both text and theme queries, includes a knowledge base comprising a plurality of nodes of terminology, arranged hierarchically, that reflect associations among the terminology. For the hierarchical query feedback terms, the information retrieval system selects terminology that broadens and narrows the query terms by selecting parent nodes and child nodes, respectively, of the nodes for terminology that corresponds to the terms of the query. The information retrieval system also selects terminology that is generally related to the query terms by selecting nodes of the knowledge base that are cross linked to the nodes for terminology that corresponds to the terms of the query. Normalization processing, which generates canonical forms for query processing, and a content processing system, which generates themes for theme queries, are also disclosed.

Patent
10 Jul 1998
TL;DR: In this paper, a query is forwarded to one or more third party search engines, and the responses from the third-party search engine or engines are parsed in order to extract information regarding the documents matching the query.
Abstract: A computer implemented meta search engine and search method. In accordance with this method, a query is forwarded to one or more third party search engines, and the responses from the third party search engine or engines are parsed in order to extract information regarding the documents matching the query. The full text of the documents matching the query are downloaded, and the query terms in the documents are located. The text surrounding the query terms are extracted, and that text is displayed.

Proceedings ArticleDOI
01 Aug 1998
TL;DR: This paper implemented the model and ran a series of experiments to show that, in addition to the added functionality, the use of the structural information embedded in SGML documents can improve the effectiveness of document retrieval, compared to the case where no such information is used.
Abstract: In traditional information retrieval (IR) systems, a document as a whole is the target for a query. With increasing interest in structured documents like SGML documents, there is a growing need to build an IR system that can retrieve parts of documents, which satisfy not only content-based but also structure-based requirements. In this paper, we describe an inference-net-based approach to this problem. The model is capable of retrieving elements at any level in a principled way, satisfying certain containment constraints in a query. Moreover, while the model is general enough to reproduce the ranking strategy adopted by conventional document retrieval systems by making use of document and collection level statistics such as TF and IDF, its flexibility allows for incorporation of a variety of pragmatic and semantic information associated with document structures. We implemented the model and ran a series of experiments to show that, in addition to the added functionality, the use of the structural information embedded in SGML documents can improve the effectiveness of document retrieval, compared to the case where no such information is used. We also show that giving a pragmatic preference to a certain element type of the SGML documents can enhance retrieval effectiveness.

Journal Article
TL;DR: The Inquirus meta search engine makes improvements over existing search engines in a number of areas, e.g.: more useful document summaries incorporating query term context, and identification of both pages which no longer exist and pages which no longer contain the query terms.

Journal ArticleDOI
Candy Schwartz1
TL;DR: The shift to distributed search across multitype database systems could extend general networked discovery and retrieval to include smaller resource collections with rich metadata and navigation tools.
Abstract: This review looks briefly at the history of World Wide Web search engine development, considers the current state of affairs, and reflects on the future. Networked discovery tools have evolved along with Internet resource availability. World Wide Web search engines display some complexity in their variety, content, resource acquisition strategies, and in the array of tools they deploy to assist users. A small but growing body of evaluation literature, much of it not systematic in nature, indicates that performance effectiveness is difficult to assess in this setting. Significant improvements in general-content search engine retrieval and ranking performance may not be possible, and are probably not worth the effort, although search engine providers have introduced some rudimentary attempts at personalization, summarization, and query expansion. The shift to distributed search across multitype database systems could extend general networked discovery and retrieval to include smaller resource collections with rich metadata and navigation tools. © 1998 John Wiley & Sons, Inc.

Proceedings ArticleDOI
01 Aug 1998
TL;DR: The effects of query structures and query expansion (QE) on retrieval performance were tested with a best match retrieval system and, with weak structures and Boolean structured queries, QE was not very effective.
Abstract: The effects of query structures and query expansion (QE) on retrieval performance were tested with a best match retrieval system (INQUERY). Query structure means the use of operators to express the relations between search keys. Eight different structures were tested, representing weak structures (averages and weighted averages of the weights of the keys) and strong structures (e.g., queries with more elaborated search key relations). QE was based on concepts, which were first selected from a conceptual model, and then expanded by semantic relationships given in the model. The expansion levels were (a) no expansion, (b) a synonym expansion, (c) a narrower concept expansion, (d) an associative concept expansion, and (e) a cumulative expansion of all other expansions. With weak structures and Boolean structured queries, QE was not very effective. The best performance was achieved with one of the strong structures at the largest expansion level.

Proceedings ArticleDOI
11 May 1998
TL;DR: It is indicated that a global index organization might outperform a local index organization in a tightly coupled environment.
Abstract: We consider a digital library distributed in a tightly coupled environment. The library is indexed by inverted files and the vector space model is used as ranking strategy. Using a simple analytical model coupled with a small simulator, we study how query performance is affected by the index organization, the network speed, and the disks transfer rate. Our results, which are based on the Tipster/Trec3 collection, indicate that a global index organization might outperform a local index organization.

Journal ArticleDOI
TL;DR: The paper discusses the Hyperlink Vector Voting method which adds a qualitative dimension to its rankings by factoring in the number and descriptions of hyperlinks to the document.
Abstract: Traditional search engines do not consider document quality in ranking search results. The paper discusses the Hyperlink Vector Voting method which adds a qualitative dimension to its rankings by factoring in the number and descriptions of hyperlinks to the document.
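In spirit, Hyperlink Vector Voting lets each incoming hyperlink cast a vote for a page, weighted by how well the link's description matches the query (a rough Python sketch; the scoring details below are invented, as the abstract gives none):

```python
def hvv_score(query_terms, inlink_descriptions):
    """Each inlink 'votes' for the page; a vote counts for more when the
    link's anchor/description text shares terms with the query. This adds
    a qualitative, link-based dimension on top of content-only ranking."""
    qs = set(query_terms)
    return sum(1 + len(qs & set(desc)) for desc in inlink_descriptions)

# Two inlinks: one whose description matches the query, one irrelevant.
score = hvv_score(["python", "tutorial"],
                  [["great", "python", "tutorial"], ["useless", "page"]])
```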

Proceedings ArticleDOI
01 Jun 1998
TL;DR: This work introduces a similarity algebra that brings together relational operators and results of multiple similarity implementations in a uniform language and provides a generic cost model for evaluating cost of query plans in the similarity algebra and query optimization methods based on this model.
Abstract: The need to automatically extract and classify the contents of multimedia data archives such as images, video, and text documents has led to significant work on similarity based retrieval of data. To date, most work in this area has focused on the creation of index structures for similarity based retrieval. There is very little work on developing formalisms for querying multimedia databases that support similarity based computations and optimizing such queries, even though it is well known that feature extraction and identification algorithms in media data are very expensive. We introduce a similarity algebra that brings together relational operators and results of multiple similarity implementations in a uniform language. The algebra can be used to specify complex queries that combine different interpretations of similarity values and multiple algorithms for computing these values. We prove equivalence and containment relationships between similarity algebra expressions and develop query rewriting methods based on these results. We then provide a generic cost model for evaluating cost of query plans in the similarity algebra and query optimization methods based on this model. We supplement the paper with experimental results that illustrate the use of the algebra and the effectiveness of query optimization methods using the Integrated Search Engine (I.SEE) as the testbed.

Journal ArticleDOI
TL;DR: The authors review the issues in content-based visual query, describe the current MetaSeek implementation, and present the results of experiments evaluating it in comparison to a previous version of the system.
Abstract: MetaSeek is an image metasearch engine developed to explore the querying of large, distributed, online visual information systems. The current implementation integrates user feedback into a performance-ranking mechanism. MetaSeek selects and queries the target image search engines according to their success under similar query conditions in previous searches. The current implementation keeps track of each target engine's performance by integrating user feedback for each visual query into a performance database. We begin with a review of the issues in content-based visual query, then describe the current MetaSeek implementation. We present the results of experiments that evaluated the implementation in comparison to a previous version of the system and a baseline engine that randomly selects the individual search engines to query. We conclude by summarizing open issues for future research.

Patent
15 Jun 1998
TL;DR: In this article, a method and apparatus are disclosed for integration of campaign management and data mining, which can include building queries for a database or ranking criteria for records in a database that include a reference to a data mining model.
Abstract: Method and apparatus are disclosed for integration of campaign management and data mining. The method and apparatus disclose incorporating references to data mining models into the campaign management process. In some embodiments, this permits evaluating the data mining model for fewer than all of the records in a database, potentially saving computation time. The method and apparatus can include building queries for a database or ranking criteria for records in a database that include a reference to a data mining model.

Proceedings Article
01 Dec 1998
TL;DR: This work proposes an alternative and novel technique that produces sparse representations constructed from sets of highly related words; it significantly improves retrieval performance, is efficient to compute, and shares properties with the optimal linear projection operator and the independent components of documents.
Abstract: The task in text retrieval is to find the subset of a collection of documents relevant to a user's information request, usually expressed as a set of words. Classically, documents and queries are represented as vectors of word counts. In its simplest form, relevance is defined to be the dot product between a document and a query vector, a measure of the number of common terms. A central difficulty in text retrieval is that the presence or absence of a word is not sufficient to determine relevance to a query. Linear dimensionality reduction has been proposed as a technique for extracting underlying structure from the document collection. In some domains (such as vision) dimensionality reduction reduces computational complexity. In text retrieval it is more often used to improve retrieval performance. We propose an alternative and novel technique that produces sparse representations constructed from sets of highly related words. Documents and queries are represented by their distance to these sets, and relevance is measured by the number of common clusters. This technique significantly improves retrieval performance, is efficient to compute, and shares properties with the optimal linear projection operator and the independent components of documents.
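The contrast between the two relevance measures can be made concrete in a few lines. The vocabulary and word clusters below are made up for illustration; the paper constructs its word sets from the collection itself.

```python
import numpy as np

vocab = ["car", "auto", "engine", "bank", "loan"]
# Hypothetical clusters of highly related words (the paper learns these).
clusters = [{"car", "auto", "engine"}, {"bank", "loan"}]

def count_vector(words):
    # Classical representation: vector of word counts over the vocabulary.
    return np.array([words.count(w) for w in vocab])

def dot_relevance(doc_words, query_words):
    # Simplest classical relevance: dot product of count vectors,
    # i.e. a measure of the number of common terms.
    return int(count_vector(doc_words) @ count_vector(query_words))

def cluster_signature(words):
    # Sparse cluster-based representation: which word sets does
    # the text touch?
    return {i for i, c in enumerate(clusters) if set(words) & c}

def cluster_relevance(doc_words, query_words):
    # Relevance measured by the number of common clusters.
    return len(cluster_signature(doc_words) & cluster_signature(query_words))

doc = ["the", "auto", "engine", "stalled"]
query = ["car"]
# dot_relevance(doc, query) is 0 (no shared term), but
# cluster_relevance(doc, query) is 1: "car" and "auto" share a cluster.
```

This is exactly the failure mode the abstract points to: term presence alone misses the relevant document, while the cluster representation recovers it.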

01 Jan 1998
TL;DR: In this article, an adaptive model is proposed to predict the usefulness of a message based on the available message features and may be used to rank messages by expected importance or economic worth.
Abstract: The decision to examine a message at a particular point in time should be made rationally and economically if the message recipient is to operate efficiently. Electronic message distribution systems, electronic bulletin board systems, and telephone systems capable of leaving digitized voice messages can contribute to “information overload,” defined as the economic loss associated with the examination of a number of non- or less-relevant messages. Our model provides a formal method for minimizing expected information overload. The proposed adaptive model predicts the usefulness of a message based on the available message features and may be used to rank messages by expected importance or economic worth. The assumptions of binary and two-Poisson independent probabilistic distributions of message feature frequencies are examined, and methods of incorporating these distributions into the ranking model are described. Ways to incorporate user-supplied relevance feedback are suggested. Analytic performance measures are proposed to predict system quality. Other message handling models, including rule-based expert systems, are seen as special cases of the model. The performance is given for a set of UNIX shell programs which rank messages. Problems with the use of this formal model are examined, and areas for future research are suggested.
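Under the binary independence assumption mentioned above, ranking by expected worth reduces to a log-odds score over message features. The sketch below is one way to realize that assumption; the feature names and probability estimates are invented, and the paper's full model also covers two-Poisson distributions and relevance feedback.

```python
import math

def message_worth(features, p_useful, p_not_useful, prior_useful=0.5):
    """Log-odds that a message is worth examining, under a binary
    independence model of feature occurrence (illustrative sketch)."""
    score = math.log(prior_useful / (1.0 - prior_useful))
    for f in p_useful:
        pu, pn = p_useful[f], p_not_useful[f]
        if f in features:
            score += math.log(pu / pn)          # feature present
        else:
            score += math.log((1.0 - pu) / (1.0 - pn))  # feature absent
    return score

def rank_messages(messages, p_useful, p_not_useful):
    # messages: list of (name, feature_set); best expected worth first.
    return sorted(messages,
                  key=lambda m: message_worth(m[1], p_useful, p_not_useful),
                  reverse=True)

# Hypothetical estimates: P(feature | useful) and P(feature | not useful).
p_u = {"from_boss": 0.8, "mailing_list": 0.1}
p_n = {"from_boss": 0.1, "mailing_list": 0.7}
msgs = [("newsletter", {"mailing_list"}), ("urgent", {"from_boss"})]
rank_messages(msgs, p_u, p_n)   # "urgent" ranks above "newsletter"
```

Reading messages in this order minimizes the expected number of non-relevant messages examined before the relevant ones, which is exactly the overload cost the model targets.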

Proceedings ArticleDOI
01 Jan 1998
TL;DR: A new approach is presented for finding an optimal edge ranking of a tree, improving the time complexity to linear; the best previously known algorithm requires more than quadratic time.
Abstract: Given a tree, finding an optimal node ranking and finding an optimal edge ranking are interesting computational problems. The former problem already has a linear time algorithm in the literature. For the latter, only recently have polynomial time algorithms been revealed, and the best known algorithm requires more than quadratic time. In this paper we present a new approach for finding an optimal edge ranking of a tree, improving the time complexity to linear.
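For readers unfamiliar with the problem: an edge ranking labels each edge with a positive integer so that any path between two equally-ranked edges contains an edge of strictly higher rank. The linear-time algorithm itself is involved, but validity of a given labeling is easy to check with union-find, using the equivalent condition that after deleting all edges ranked above k, no component contains two edges ranked exactly k. The sketch below is a validity checker only, not the paper's algorithm.

```python
from collections import defaultdict

def is_valid_edge_ranking(n, edges, ranks):
    """Check an edge ranking of a tree on n vertices.
    edges[i] = (u, v); ranks[i] is the positive rank of edge i."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Process edges in increasing rank order, one rank value at a time.
    order = sorted(range(len(edges)), key=lambda i: ranks[i])
    i, m = 0, len(edges)
    while i < m:
        j, k = i, ranks[order[i]]
        while j < m and ranks[order[j]] == k:
            u, v = edges[order[j]]
            union(u, v)          # components now use all edges ranked <= k
            j += 1
        per_component = defaultdict(int)
        for t in range(i, j):    # count rank-k edges in each component
            per_component[find(edges[order[t]][0])] += 1
        if any(c > 1 for c in per_component.values()):
            return False         # two rank-k edges not separated by a higher rank
        i = j
    return True

# Path 0-1-2-3: ranking [1, 2, 1] is valid (the two rank-1 edges are
# separated by the rank-2 edge); [1, 1, 2] is not.
```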