
Showing papers on "Ranking (information retrieval) published in 2005"


Proceedings ArticleDOI
07 Aug 2005
TL;DR: RankNet, a neural-network implementation of a probabilistic pairwise approach to learning ranking functions via gradient descent, is introduced, and test results on toy data and on data from a commercial internet search engine are presented.
Abstract: We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data from a commercial internet search engine.
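The pairwise probabilistic cost described here can be sketched as a cross-entropy on the logistic of the score difference between two documents. A minimal sketch (the function name and default target are illustrative, not the paper's code):

```python
import math

def ranknet_cost(s_i, s_j, p_target=1.0):
    """Cross-entropy cost on the modelled probability that document i
    should rank above document j, where that probability is the
    logistic function of the score difference s_i - s_j."""
    p_model = 1.0 / (1.0 + math.exp(-(s_i - s_j)))
    return -(p_target * math.log(p_model)
             + (1.0 - p_target) * math.log(1.0 - p_model))
```

Minimizing this cost by gradient descent pushes the ranking function's score gap in the direction of the target preference; at equal scores the cost is ln 2.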

2,813 citations


Patent
01 Sep 2005
TL;DR: In this paper, a method and system for evaluating the reputation of a member of a social networking system is disclosed: attributes of the member's profile are analyzed and a ranking, rating, or score is assigned to a particular category of reputation.
Abstract: A method and system for evaluating the reputation of a member of a social networking system is disclosed. Consistent with an embodiment of the invention, one or more attributes associated with a social networking profile of a member of a social network are analyzed. Based on the analysis, a ranking, rating or score is assigned to a particular category of reputation. When requested, the ranking, rating or score is displayed to a user of the social network.

644 citations


Proceedings ArticleDOI
21 Aug 2005
TL;DR: A novel approach for using clickthrough data to learn ranked retrieval functions for web search results by using query chains to generate new types of preference judgments from search engine logs, thus taking advantage of user intelligence in reformulating queries.
Abstract: This paper presents a novel approach for using clickthrough data to learn ranked retrieval functions for web search results. We observe that users searching the web often perform a sequence, or chain, of queries with a similar information need. Using query chains, we generate new types of preference judgments from search engine logs, thus taking advantage of user intelligence in reformulating queries. To validate our method we perform a controlled user study comparing generated preference judgments to explicit relevance judgments. We also implemented a real-world search engine to test our approach, using a modified ranking SVM to learn an improved ranking function from preference data. Our results demonstrate significant improvements in the ranking given by the search engine. The learned rankings outperform both a static ranking function, as well as one trained without considering query chains.
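One classic way to turn clickthrough logs into training pairs for a ranking SVM is the "skip-above" heuristic: a clicked result is preferred over every unclicked result ranked above it. This is a simplified stand-in for the paper's query-chain judgments, which additionally span consecutive reformulated queries:

```python
def preference_pairs(results, clicked):
    """Given a ranked result list and the set of clicked documents,
    emit (preferred, non-preferred) pairs: each clicked document
    beats every unclicked document ranked above it."""
    pairs = []
    for rank, doc in enumerate(results):
        if doc in clicked:
            for skipped in results[:rank]:
                if skipped not in clicked:
                    pairs.append((doc, skipped))
    return pairs
```

Pairs like these become constraints of the form score(preferred) > score(non-preferred) when training the ranking SVM.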

530 citations


Proceedings ArticleDOI
15 Aug 2005
TL;DR: This paper proposes several context-sensitive retrieval algorithms based on statistical language models to combine the preceding queries and clicked document summaries with the current query for better ranking of documents.
Abstract: A major limitation of most existing retrieval models and systems is that the retrieval decision is made based solely on the query and document collection; information about the actual user and search context is largely ignored. In this paper, we study how to exploit implicit feedback information, including previous queries and clickthrough information, to improve retrieval accuracy in an interactive information retrieval setting. We propose several context-sensitive retrieval algorithms based on statistical language models to combine the preceding queries and clicked document summaries with the current query for better ranking of documents. We use the TREC AP data to create a test collection with search context information, and quantitatively evaluate our models using this test set. Experiment results show that using implicit feedback, especially the clicked document summaries, can improve retrieval performance substantially.
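The simplest fixed-coefficient variant of combining the current query with search context can be sketched as a linear interpolation of unigram language models. The dict representation and the value of alpha are illustrative; the paper develops several richer context-sensitive estimates:

```python
def interpolate_models(current, history, alpha=0.7):
    """Linearly interpolate the current-query language model with a
    model built from search context (previous queries and clicked
    document summaries). Both inputs are word -> probability dicts;
    alpha weights the current query."""
    words = set(current) | set(history)
    return {w: alpha * current.get(w, 0.0) + (1 - alpha) * history.get(w, 0.0)
            for w in words}
```

The interpolated model is then used in place of the raw query model when ranking documents.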

501 citations


Journal ArticleDOI
17 Aug 2005-Nature
TL;DR: The 'h-index', as discussed by the authors, sums up a scientist's publication record in a single number.
Abstract: ‘H-index’ sums up publication record.
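The index itself is simple to compute: h is the largest number such that h of the author's papers have at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """Return the h-index of a list of per-paper citation counts:
    the largest h such that h papers have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # the rank-th best paper still has >= rank citations
        else:
            break
    return h
```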

393 citations


Patent
Reiner Kraft1
21 Jul 2005
TL;DR: In this paper, a system and methods are described for implementing searches using contextual information associated with a Web page (or other document) that a user is viewing when a query is entered.
Abstract: Systems and methods are provided for implementing searches using contextual information associated with a Web page (or other document) that a user is viewing when a query is entered. The page includes a contextual search interface that has an associated context vector representing content of the page. When the user submits a search query via the contextual search interface, the query and the context vector are both provided to the query processor and used in responding to the query.

381 citations


Journal ArticleDOI
TL;DR: This article works within the hubs and authorities framework defined by Kleinberg and proposes new families of algorithms, and provides an axiomatic characterization of the INDEGREE heuristic which ranks each node according to the number of incoming links.
Abstract: The explosive growth and the widespread accessibility of the Web has led to a surge of research activity in the area of information retrieval on the World Wide Web. The seminal papers of Kleinberg [1998, 1999] and Brin and Page [1998] introduced Link Analysis Ranking, where hyperlink structures are used to determine the relative authority of a Web page and produce improved algorithms for the ranking of Web search results. In this article we work within the hubs and authorities framework defined by Kleinberg and we propose new families of algorithms. Two of the algorithms we propose use a Bayesian approach, as opposed to the usual algebraic and graph theoretic approaches. We also introduce a theoretical framework for the study of Link Analysis Ranking algorithms. The framework allows for the definition of specific properties of Link Analysis Ranking algorithms, as well as for comparing different algorithms. We study the properties of the algorithms that we define, and we provide an axiomatic characterization of the INDEGREE heuristic which ranks each node according to the number of incoming links. We conclude the article with an extensive experimental evaluation. We study the quality of the algorithms, and we examine how different structures in the graphs affect their performance.

323 citations


Proceedings ArticleDOI
10 May 2005
TL;DR: PopRank, a domain-independent object-level link analysis model, assigns a popularity propagation factor to each type of object relationship and includes efficient approaches to automatically decide these factors; experiments show that it achieves significantly better ranking results than naively applying PageRank on the object graph.
Abstract: In contrast with the current Web search methods that essentially do document-level ranking and retrieval, we are exploring a new paradigm to enable Web search at the object level. We collect Web information for objects relevant for a specific application domain and rank these objects in terms of their relevance and popularity to answer user queries. Traditional PageRank model is no longer valid for object popularity calculation because of the existence of heterogeneous relationships between objects. This paper introduces PopRank, a domain-independent object-level link analysis model to rank the objects within a specific domain. Specifically we assign a popularity propagation factor to each type of object relationship, study how different popularity propagation factors for these heterogeneous relationships could affect the popularity ranking, and propose efficient approaches to automatically decide these factors. Our experiments are done using 1 million CS papers, and the experimental results show that PopRank can achieve significantly better ranking results than naively applying PageRank on the object graph.
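A toy reading of popularity propagation with per-relation factors can be sketched as a PageRank-style iteration in which each edge's contribution is scaled by the factor of its relationship type. All names and the normalization scheme are illustrative; the real model also incorporates Web popularity and learns the factors from data:

```python
def poprank(nodes, edges, gamma, damping=0.85, iters=50):
    """edges: list of (src, dst, relation_type) triples; gamma maps a
    relation type to its popularity propagation factor. Scores flow
    like PageRank but are weighted by the factor of the relation
    they travel along."""
    score = {n: 1.0 / len(nodes) for n in nodes}
    out_weight = {n: 0.0 for n in nodes}
    for s, d, r in edges:
        out_weight[s] += gamma[r]
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for s, d, r in edges:
            if out_weight[s] > 0:
                # src's mass is split among its out-edges in
                # proportion to their propagation factors
                nxt[d] += damping * score[s] * gamma[r] / out_weight[s]
        score = nxt
    return score
```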

319 citations


Patent
04 Jan 2005
TL;DR: In this paper, a method and system for ranking relevancy of metadata associated with media on a computer network, such as multimedia and streaming media, include categorizing the metadata into sets of metadata.
Abstract: A method and system for ranking relevancy of metadata associated with media on a computer network, such as multimedia and streaming media, include categorizing the metadata into sets of metadata. The categories are broad categories relating to areas such as who, what, when, and where, such as artist, media type, and creation date, creation location. Weights are assigned to each set of metadata. Weights are related to technical information such as bit rate, duration, sampling rate, frequency of occurrence of a specific term, etc. A score is calculated for ranking the relevancy of each set of metadata. The score is calculated in accordance with the assigned weight and category. This score is available for search systems (e.g., search engines) and/or users to determine the relative ranking of search results.
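One simple reading of the weighting scheme is a weighted sum over metadata sets, with each set's assigned weight scaled by a per-category multiplier. The category boosts and the linear combination here are illustrative assumptions, not the patent's actual formula:

```python
# Hypothetical per-category multipliers for the who/what/when/where
# categories described in the patent.
CATEGORY_BOOST = {"who": 1.5, "what": 2.0, "when": 1.0, "where": 1.0}

def metadata_score(sets):
    """sets: list of (category, weight) pairs for one media item.
    The relevancy score combines each metadata set's assigned weight
    with its category's boost."""
    return sum(CATEGORY_BOOST.get(cat, 1.0) * w for cat, w in sets)
```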

286 citations


Proceedings ArticleDOI
14 Jun 2005
TL;DR: RankSQL is introduced, a system that provides a systematic and principled framework to support efficient evaluations of ranking (top-k) queries in relational database systems (RDBMS), by extending relational algebra and query optimization.
Abstract: This paper introduces RankSQL, a system that provides a systematic and principled framework to support efficient evaluation of ranking (top-k) queries in relational database systems (RDBMS), by extending relational algebra and query optimization. Previously, top-k query processing was studied in the middleware scenario or in RDBMS in a "piecemeal" fashion, i.e., focusing on specific operators or sitting outside the core of the query engine. In contrast, we aim to support ranking as a first-class database construct. As a key insight, the new ranking relationship can be viewed as another logical property of data, parallel to the "membership" property of the relational data model. While membership is essentially supported in RDBMS, the same support for ranking is clearly lacking. We address the fundamental integration of ranking in RDBMS in a way similar to how membership, i.e., Boolean filtering, is supported. We extend relational algebra by proposing a rank-relational model to capture the ranking property, and by introducing new and extended operators to support ranking as a first-class construct. Enabled by the extended algebra, we present a pipelined and incremental execution model of ranking query plans (which cannot be expressed traditionally) based on a fundamental ranking principle. To optimize top-k queries, we propose a dimensional enumeration algorithm to explore the extended plan space by enumerating plans along two dual dimensions: ranking and membership. We also propose a sampling-based method to estimate the cardinality of rank-aware operators, for costing plans. Our experiments show the validity of our framework and the accuracy of the proposed estimation model.

286 citations


Book ChapterDOI
29 May 2005
TL;DR: This work proposes a model for the exploitation of ontology-based KBs to improve search over large document repositories, which includes an ontology-based scheme for the semi-automatic annotation of documents, and a retrieval system.
Abstract: Semantic search has been one of the motivations of the Semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based KBs to improve search over large document repositories. Our approach includes an ontology-based scheme for the semi-automatic annotation of documents, and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with keyword-based search to achieve tolerance to KB incompleteness. Our proposal is illustrated with sample experiments showing improvements with respect to keyword-based search, and providing ground for further research and discussion.

Proceedings ArticleDOI
Hang Cui1, Renxu Sun1, Keya Li1, Min-Yen Kan1, Tat-Seng Chua1 
15 Aug 2005
TL;DR: This work presents two methods for learning relation mapping scores from past QA pairs: one based on mutual information and the other on expectation maximization, which significantly outperforms state-of-the-art density-based passage retrieval methods.
Abstract: State-of-the-art question answering (QA) systems employ term-density ranking to retrieve answer passages. Such methods often retrieve incorrect passages as relationships among question terms are not considered. Previous studies attempted to address this problem by matching dependency relations between questions and answers. They used strict matching, which fails when semantically equivalent relationships are phrased differently. We propose fuzzy relation matching based on statistical models. We present two methods for learning relation mapping scores from past QA pairs: one based on mutual information and the other on expectation maximization. Experimental results show that our method significantly outperforms state-of-the-art density-based passage retrieval methods by up to 78% in mean reciprocal rank. Relation matching also brings about a 50% improvement in a system enhanced by query expansion.
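The headline metric here, mean reciprocal rank, averages 1/rank of the first correct answer over all questions; a question for which no correct answer is retrieved contributes zero. A minimal sketch:

```python
def mean_reciprocal_rank(first_correct_ranks):
    """first_correct_ranks: for each question, the 1-based rank of the
    first correct answer passage, or None if none was retrieved."""
    total = sum(1.0 / r for r in first_correct_ranks if r is not None)
    return total / len(first_correct_ranks)
```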

Proceedings ArticleDOI
15 Aug 2005
TL;DR: Novel learning methods for estimating the quality of results returned by a search engine in response to a query and the usefulness of quality estimation for several applications, among them improvement of retrieval, detecting queries for which no relevant content exists in the document collection, and distributed information retrieval are presented.
Abstract: In this article we present novel learning methods for estimating the quality of results returned by a search engine in response to a query. Estimation is based on the agreement between the top results of the full query and the top results of its sub-queries. We demonstrate the usefulness of quality estimation for several applications, among them improvement of retrieval, detecting queries for which no relevant content exists in the document collection, and distributed information retrieval. Experiments on TREC data demonstrate the robustness and the effectiveness of our learning algorithms.
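The agreement signal at the heart of the method can be sketched as top-result overlap between the full query and each of its sub-queries. This is a simplified stand-in for the paper's learned estimator, which combines such agreement features in a learning framework:

```python
def overlap_quality(full_top, subquery_tops):
    """Estimate query quality as the mean fraction of each sub-query's
    top results that also appear among the full query's top results.
    High agreement suggests the results are robust; low agreement can
    flag queries with no relevant content in the collection."""
    full = set(full_top)
    overlaps = [len(full & set(top)) / len(top) for top in subquery_tops]
    return sum(overlaps) / len(overlaps)
```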

Patent
03 Jun 2005
TL;DR: In this paper, a method and system are presented that dynamically rank electronic messages based on their situational and inherent dimensions, as judged by a set of filters that evaluate the elemental metadata constituting a message and produce a priority value based on each filter's relevance and importance.
Abstract: A method and system that dynamically ranks electronic messages based on their situational and inherent dimensions, which are judged by a set of filters. These filters evaluate the different elemental metadata constituting a message and produce a priority value based on each filter's relevance and importance. The system iterates through queued messages, examines the structured content for expected attributes, statistically analyzes unstructured content, applies dynamically weighted rules and policies to deliver a priority ranking, and then displays the message and its vital attributes in accordance with the priority ranking. The system also adaptively learns and adjusts its weighted rules and policies to permit the priority ranking to change on a real-time or interval-based (possibly user-defined) schedule. The system includes a GUI for increasing reading and processing efficiency. The GUI performs supervised and unsupervised learning from the user's behaviors, and displays messages in accordance with their priority classification.

Journal ArticleDOI
TL;DR: A set of principles and a novel rank-by-feature framework that could enable users to better understand distributions in one (1D) or two dimensions (2D) and discover relationships, clusters, gaps, outliers, and other features and implemented in the Hierarchical Clustering Explorer.
Abstract: Interactive exploration of multidimensional data sets is challenging because: (1) it is difficult to comprehend patterns in more than three dimensions, and (2) current systems often are a patchwork of graphical and statistical methods leaving many researchers uncertain about how to explore their data in an orderly manner. We offer a set of principles and a novel rank-by-feature framework that could enable users to better understand distributions in one (1D) or two dimensions (2D), and then discover relationships, clusters, gaps, outliers, and other features. Users of our framework can view graphical presentations (histograms, boxplots, and scatterplots), and then choose a feature detection criterion to rank 1D or 2D axis-parallel projections. By combining information visualization techniques (overview, coordination, and dynamic query) with summaries and statistical methods, users can systematically examine the most important 1D and 2D axis-parallel projections. We summarize our Graphics, Ranking, and Interaction for Discovery (GRID) principles as: (1) study 1D, study 2D, then find features; (2) ranking guides insight, statistics confirm. We implemented the rank-by-feature framework in the Hierarchical Clustering Explorer, but the same data exploration principles could enable users to organize their discovery process so as to produce more thorough analyses and extract deeper insights in any multidimensional data application, such as spreadsheets, statistical packages, or information visualization tools.

Journal ArticleDOI
TL;DR: Two automated methods that learn relevant information from previous experience in a domain and use it to solve new problem instances are presented and compared; the results indicate a large reduction in search effort in those complex domains where structural information can be inferred.
Abstract: Despite recent progress in AI planning, many benchmarks remain challenging for current planners. In many domains, the performance of a planner can greatly be improved by discovering and exploiting information about the domain structure that is not explicitly encoded in the initial PDDL formulation. In this paper we present and compare two automated methods that learn relevant information from previous experience in a domain and use it to solve new problem instances. Our methods share a common four-step strategy. First, a domain is analyzed and structural information is extracted, then macro-operators are generated based on the previously discovered structure. A filtering and ranking procedure selects the most useful macro-operators. Finally, the selected macros are used to speed up future searches. We have successfully used such an approach in the fourth international planning competition IPC-4. Our system, Macro-FF, extends Hoffmann's state-of-the-art planner FF 2.3 with support for two kinds of macro-operators, and with engineering enhancements. We demonstrate the effectiveness of our ideas on benchmarks from international planning competitions. Our results indicate a large reduction in search effort in those complex domains where structural information can be inferred.

Proceedings ArticleDOI
15 Aug 2005
TL;DR: A novel ranking scheme named Affinity Ranking (AR) is proposed to re-rank search results by optimizing two metrics: diversity -- which indicates the variance of topics in a group of documents; and information richness -- which measures the coverage of a single document to its topic.
Abstract: In this paper, we propose a novel ranking scheme named Affinity Ranking (AR) to re-rank search results by optimizing two metrics: (1) diversity -- which indicates the variance of topics in a group of documents; (2) information richness -- which measures the coverage of a single document to its topic. Both of the two metrics are calculated from a directed link graph named Affinity Graph (AG). AG models the structure of a group of documents based on the asymmetric content similarities between each pair of documents. Experimental results in Yahoo! Directory, ODP Data, and Newsgroup data demonstrate that our proposed ranking algorithm significantly improves the search performance. Specifically, the algorithm achieves 31% improvement in diversity and 12% improvement in information richness relatively within the top 10 search results.
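A greedy re-ranking that trades information richness against a diversity penalty captures the flavor of the scheme. This is a simplified MMR-style stand-in; Affinity Ranking derives both the richness scores and the similarities from the Affinity Graph:

```python
def diversity_rerank(scores, sim, top_n):
    """Greedy re-ranking: repeatedly pick the document whose
    information-richness score, discounted by its similarity to any
    already-selected document, is highest. scores maps doc -> score;
    sim maps (candidate, selected) pairs -> similarity in [0, 1]."""
    selected = []
    remaining = set(scores)
    while remaining and len(selected) < top_n:
        def penalized(d):
            penalty = max((sim.get((d, s), 0.0) for s in selected),
                          default=0.0)
            return scores[d] * (1.0 - penalty)
        best = max(remaining, key=penalized)
        selected.append(best)
        remaining.remove(best)
    return selected
```

A near-duplicate of an already-selected document is heavily discounted, so the top of the list covers more distinct topics.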

Proceedings ArticleDOI
31 Oct 2005
TL;DR: This work proposes a novel concept-based query expansion technique, which allows disambiguating queries submitted to search engines and shows that this approach leads to gains in average precision figures.
Abstract: Despite the recent advances in search quality, the fast increase in the size of the Web collection has introduced new challenges for Web ranking algorithms. In fact, there are still many situations in which the users are presented with imprecise or very poor results. One of the key difficulties is the fact that users usually submit very short and ambiguous queries, and they do not fully specify their information needs. That is, it is necessary to improve the query formation process if better answers are to be provided. In this work we propose a novel concept-based query expansion technique, which allows disambiguating queries submitted to search engines. The concepts are extracted by analyzing and locating cycles in a special type of query relations graph. This is a directed graph built from query relations mined using association rules. The concepts related to the current query are then shown to the user who selects the one concept that he interprets is most related to his query. This concept is used to expand the original query and the expanded query is processed instead. Using a Web test collection, we show that our approach leads to gains in average precision figures of roughly 32%. Further, if the user also provides information on the type of relation between his query and the selected concept, the gains in average precision go up to roughly 52%.

Patent
Chandrasekhar Thota1
04 Aug 2005
TL;DR: In this paper, the authors proposed a method of ranking weblogs and blog items by creating a context rank around each blog feed, which represents a sum of a context weight, a track-back weight and a comment weight.
Abstract: A mechanism of ranking weblog or "blog" items is provided. More particularly, the subject ranking mechanisms can facilitate ranking blog feeds and blog items contained therein thus focusing and intelligently delivering content (e.g., blog items) to users. The subject innovation facilitates ranking the blog feeds and blog items by creating a context rank around each blog feed. The context rank represents a sum of a context weight, a track-back weight and a comment weight. Accordingly, this context rank can allow readers to sort blog items in the order of popularity or importance thus effectively reducing content noise.
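The ranking rule as described reduces to sorting items by the sum of three weights. A minimal sketch (the tuple layout is illustrative, not the patent's data model):

```python
def rank_blog_items(items):
    """items: list of (title, context_w, trackback_w, comment_w).
    The context rank is described as the sum of the context,
    track-back and comment weights; items are returned sorted by it,
    most popular first."""
    scored = [(c + t + m, title) for title, c, t, m in items]
    return [title for _, title in sorted(scored, reverse=True)]
```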

Proceedings ArticleDOI
10 May 2005
TL;DR: A ranking framework is proposed that models the generation of a stream of news articles, their clustering by topic, and the evolution of news stories over time; its ranking measures can be computed without a predefined sliding window of observation over the stream.
Abstract: According to a recent survey by Nielsen NetRatings, searching news articles is one of the most important activities online. Indeed, Google, Yahoo, MSN and many others have proposed commercial search engines for indexing news feeds. Despite this commercial interest, no academic research has focused on ranking a stream of news articles and a set of news sources. In this paper, we introduce this problem by proposing a ranking framework which models: (1) the process of generation of a stream of news articles, (2) the clustering of news articles by topic, and (3) the evolution of news stories over time. The proposed ranking algorithm ranks news information, finding the most authoritative news sources and identifying the most interesting events in the different categories to which news articles belong. All these ranking measures take time into account and can be obtained without a predefined sliding window of observation over the stream. The complexity of our algorithm is linear in the number of pieces of news still under consideration at the time of a new posting. This allows a continuous on-line process of ranking. Our ranking framework is validated on a collection of more than 300,000 pieces of news, produced in two months by more than 2,000 news sources belonging to 13 different categories (World, U.S., Europe, Sports, Business, etc.). This collection is extracted from the index of comeToMyHead, an academic news search engine available online.

Journal Article
TL;DR: In this article, spectral properties of the Laplacian of the features' measurement matrix are used to define a relevance function; the feature selection process is then based on a continuous ranking of features defined by a least-squares optimization process.
Abstract: The problem of selecting a subset of relevant features in a potentially overwhelming quantity of data is classic and found in many branches of science. Examples in computer vision, text processing and, more recently, bio-informatics are abundant. In text classification tasks, for example, it is not uncommon to have 10^4 to 10^7 features of the size of the vocabulary containing word frequency counts, with the expectation that only a small fraction of them are relevant. Typical examples include the automatic sorting of URLs into a web directory and the detection of spam email. In this work we present a definition of "relevancy" based on spectral properties of the Laplacian of the features' measurement matrix. The feature selection process is then based on a continuous ranking of the features defined by a least-squares optimization process. A remarkable property of the feature relevance function is that sparse solutions for the ranking values naturally emerge as a result of a "biased non-negativity" of a key matrix in the process. As a result, a simple least-squares optimization process converges onto a sparse solution, i.e., a selection of a subset of features which form a local maximum over the relevance function. The feature selection algorithm can be embedded in both unsupervised and supervised inference problems, and empirical evidence shows that the feature selections typically achieve high accuracy even when only a small fraction of the features are relevant.

Patent
13 May 2005
TL;DR: In this paper, the authors proposed a method for improving user search experience with a search engine by providing a way for associated users to create and share personalized lists of local search results and advertisements through endorsements of such local search result and/or ads.
Abstract: Methods and systems for improving user search experience with a search engine by providing a way for associated users to create and share personalized lists of local search results and/or advertisements through endorsements of such local search results and/or ads. Local search endorsements can be used to personalize the search engine's ranking of local search results by offering a way for users to re-rank the results for themselves and for those who trust them.


Proceedings ArticleDOI
15 Aug 2005
TL;DR: FLOE, a simple density analysis method for modelling the shape of the transformation required for a new query-independent feature, based on training data and without assuming independence between feature and baseline, is presented.
Abstract: A query independent feature, relating perhaps to document content, linkage or usage, can be transformed into a static, per-document relevance weight for use in ranking. The challenge is to find a good function to transform feature values into relevance scores. This paper presents FLOE, a simple density analysis method for modelling the shape of the transformation required, based on training data and without assuming independence between feature and baseline. For a new query independent feature, it addresses the questions: is it required for ranking, what sort of transformation is appropriate and, after adding it, how successful was the chosen transformation? Based on this we apply sigmoid transformations to PageRank, indegree, URL Length and ClickDistance, tested in combination with a BM25 baseline.
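The kind of saturating sigmoid applied to a static feature before adding it to the BM25 score can be sketched as follows. The parameter values are illustrative, not the paper's fitted ones; for features where smaller is better (ClickDistance, URL length), the feature would be inverted or given a negative exponent first:

```python
def static_weight(x, w=1.8, k=1.0, a=0.6):
    """Saturating sigmoid w * x^a / (k^a + x^a): monotone in the
    feature value x and bounded above by w, so one extreme value
    (e.g. a huge indegree) cannot swamp the BM25 baseline score
    it is added to."""
    return w * x ** a / (k ** a + x ** a)
```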

Journal ArticleDOI
TL;DR: A flexible ranking approach to identify interesting and relevant relationships in the semantic Web and the authors demonstrate the scheme's effectiveness through an empirical evaluation over a real-world data set.
Abstract: Industry and academia are both focusing their attention on information retrieval over semantic metadata extracted from the Web, and it is increasingly possible to analyze such metadata to discover interesting relationships. However, just as document ranking is a critical component in today's search engines, the ranking of complex relationships would be an important component in tomorrow's semantic Web engines. This article presents a flexible ranking approach to identify interesting and relevant relationships in the semantic Web. The authors demonstrate the scheme's effectiveness through an empirical evaluation over a real-world data set.

Journal ArticleDOI
TL;DR: A novel graph-representation model of a software component library (repository) called component rank model is proposed, which shows that SPARS-J gives a higher rank to components that are used more frequently, so software engineers looking for a component have a better chance of finding it quickly.
Abstract: Collections of already developed programs are important resources for efficient development of reliable software systems. In this paper, we propose a novel graph-representation model of a software component library (repository), called component rank model. This is based on analyzing actual usage relations of the components and propagating the significance through the usage relations. Using the component rank model, we have developed a Java class retrieval system named SPARS-J and applied SPARS-J to various collections of Java files. The result shows that SPARS-J gives a higher rank to components that are used more frequently. As a result, software engineers looking for a component have a better chance of finding it quickly. SPARS-J has been used by two companies, and has produced promising results.

Proceedings ArticleDOI
14 Jun 2005
TL;DR: The proposed quality estimator, derived through a careful analysis of a reasonable web user model, has the potential to alleviate the rich-get-richer phenomenon and help new and high-quality pages get the attention that they deserve.
Abstract: In a number of recent studies [4, 8] researchers have found that because search engines repeatedly return currently popular pages at the top of search results, popular pages tend to get even more popular, while unpopular pages get ignored by an average user. This "rich-get-richer" phenomenon is particularly problematic for new and high-quality pages because they may never get a chance to get users' attention, decreasing the overall quality of search results in the long run. In this paper, we propose a new ranking function, called page quality that can alleviate the problem of popularity-based ranking. We first present a formal framework to study the search engine bias by discussing what is an "ideal" way to measure the intrinsic quality of a page. We then compare how PageRank, the current ranking metric used by major search engines, differs from this ideal quality metric. This framework will help us investigate the search engine bias in more concrete terms and provide clear understanding why PageRank is effective in many cases and exactly when it is problematic. We then propose a practical way to estimate the intrinsic page quality to avoid the inherent bias of PageRank. We derive our proposed quality estimator through a careful analysis of a reasonable web user model, and we present experimental results that show the potential of our proposed estimator. We believe that our quality estimator has the potential to alleviate the rich-get-richer phenomenon and help new and high-quality pages get the attention that they deserve.

Proceedings ArticleDOI
02 Oct 2005
TL;DR: The results show that AKTiveRank will have great utility although there is potential for improvement, and a number of metrics are applied in an attempt to investigate their appropriateness for ranking ontologies.
Abstract: In view of the need to provide tools to facilitate the re-use of existing knowledge structures such as ontologies, we present in this paper a system, AKTiveRank, for the ranking of ontologies. AKTiveRank uses as input the search terms provided by a knowledge engineer and, using the output of an ontology search engine, ranks the ontologies. We apply a number of metrics in an attempt to investigate their appropriateness for ranking ontologies, and compare the results with a questionnaire-based human study. Our results show that AKTiveRank will have great utility, although there is potential for improvement.

Proceedings ArticleDOI
25 Sep 2005
TL;DR: An automated technique for feature location: helping developers map features to relevant source code based on execution-trace analysis that is less sensitive with respect to the quality of the input and more effective when used by developers unfamiliar with the target system is introduced.
Abstract: This paper introduces an automated technique for feature location: helping developers map features to relevant source code. Like several other automated feature location techniques, ours is based on execution-trace analysis. We hypothesize that these techniques, which rely on making binary judgments about a code element's relevance to a feature, are overly sensitive to the quality of the input. The main contribution of this paper is to provide a more robust alternative, whose most distinguishing characteristic is that it employs ranking heuristics to determine a code element's relevance to a feature. We believe that our technique is less sensitive with respect to the quality of the input and we claim that it is more effective when used by developers unfamiliar with the target system. We validate our claim by applying our technique to three systems with comprehensive test suites. A developer unfamiliar with the target system spent a limited amount of effort preparing the test suite for analysis. Our results show that under these circumstances our ranking-based technique compares favorably to a technique based on binary judgements.

Journal ArticleDOI
TL;DR: This paper investigates the multiple attribute decision making (MADM) problem with fuzzy preference information on alternatives and proposes an eigenvector method to rank them and three optimization models are introduced, which integrate subjective fuzzy preference relations and objective information in different ways.