Showing papers on "Ranking (information retrieval)" published in 1989


Journal ArticleDOI
TL;DR: This paper systematically investigates the problems and issues associated with the use of recall and precision as measures of retrieval system performance, and provides a comparative analysis of methods for defining precision in a probabilistic sense, in order to promote a better understanding of the issues involved in retrieval performance evaluation.
Abstract: Recall and precision are often used to evaluate the effectiveness of information retrieval systems. They are easy to define if there is a single query and if the retrieval result generated for the query is a linear ordering. However, when the retrieval results are weakly ordered, in the sense that several documents have an identical retrieval status value with respect to a query, some probabilistic notion of precision has to be introduced. Relevance probability, expected precision, and so forth, are some alternatives mentioned in the literature for this purpose. Furthermore, when many queries are to be evaluated and the retrieval results averaged over these queries, some method of interpolation of precision values at certain preselected recall levels is needed. The currently popular approaches for handling both a weak ordering and interpolation are found to be inconsistent, and the results obtained are not easy to interpret. Moreover, in cases where some alternatives are available, no comparative analysis that would facilitate the selection of a particular strategy has been provided. In this paper, we systematically investigate the various problems and issues associated with the use of recall and precision as measures of retrieval system performance. Our motivation is to provide a comparative analysis of methods available for defining precision in a probabilistic sense and to promote a better understanding of the various issues involved in retrieval performance evaluation.
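
To make the interpolation issue concrete, here is a minimal sketch (in Python, not from the paper) of one currently popular approach the authors critique: precision is interpolated at preselected recall levels as the maximum precision attained at any recall at or above that level, then averaged over queries. All function names and judgement data are illustrative.

```python
# Interpolated precision at preselected recall levels, averaged over queries.
def interpolated_precision(ranked_relevance, total_relevant, recall_levels):
    """ranked_relevance: list of 0/1 relevance flags in rank order for one query."""
    points = []  # (recall, precision) after each retrieved document
    hits = 0
    for i, rel in enumerate(ranked_relevance, start=1):
        hits += rel
        points.append((hits / total_relevant, hits / i))
    # Interpolate: max precision at any recall >= the requested level.
    return [max((p for r, p in points if r >= level), default=0.0)
            for level in recall_levels]

levels = [i / 10 for i in range(11)]            # 0.0, 0.1, ..., 1.0
per_query = [interpolated_precision([1, 0, 1, 1, 0, 0, 1], 4, levels),
             interpolated_precision([0, 1, 1, 0, 1, 0, 0], 3, levels)]
averaged = [sum(col) / len(col) for col in zip(*per_query)]
```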

464 citations


Journal ArticleDOI
TL;DR: The model provides a formal method for minimizing expected information overload; it predicts the usefulness of a message from the available message features and can be used to rank messages by expected importance or economic worth.
Abstract: The decision to examine a message at a particular point in time should be made rationally and economically if the message recipient is to operate efficiently. Electronic message distribution systems, electronic bulletin board systems, and telephone systems capable of leaving digitized voice messages can contribute to "information overload", defined as the economic loss associated with the examination of a number of non- or less-relevant messages. Our model provides a formal method for minimizing expected information overload. The proposed adaptive model predicts the usefulness of a message based on the available message features and may be used to rank messages by expected importance or economic worth. The assumptions of binary and two-Poisson independent probabilistic distributions of message feature frequencies are examined, along with methods of incorporating these distributions into the ranking model. Ways to incorporate user-supplied relevance feedback are suggested. Analytic performance m...
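
A hedged sketch of the kind of ranking model the abstract describes: messages are scored by the log odds of importance given binary feature occurrences, assuming feature independence (the two-Poisson variant is omitted). The feature names, probabilities, and prior below are invented for illustration.

```python
import math

def log_odds_important(features, p_given_imp, p_given_unimp, prior_imp):
    """Rank score: log odds that a message is important given its binary features."""
    score = math.log(prior_imp / (1 - prior_imp))
    for f in features:
        # Independence assumption: each feature contributes its own log ratio.
        score += math.log(p_given_imp[f] / p_given_unimp[f])
    return score

# Invented feature statistics for illustration only.
p_imp = {"from_boss": 0.6, "mentions_deadline": 0.5, "bulk_mail": 0.05}
p_unimp = {"from_boss": 0.1, "mentions_deadline": 0.1, "bulk_mail": 0.7}
inbox = {"msg1": ["from_boss", "mentions_deadline"], "msg2": ["bulk_mail"]}
ranked = sorted(inbox, key=lambda m: log_odds_important(
    inbox[m], p_imp, p_unimp, prior_imp=0.2), reverse=True)
```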

233 citations


Journal ArticleDOI
TL;DR: Three retrieval models for probabilistic indexing are described along with evaluation results for each, including the binary independence indexing (BII) model, which is a generalized version of the Maron and Kuhns indexing model.
Abstract: In this article three retrieval models for probabilistic indexing are described along with evaluation results for each. First is the binary independence indexing (BII) model, which is a generalized version of the Maron and Kuhns indexing model. In this model, the indexing weight of a descriptor in a document is an estimate of the probability of relevance of this document with respect to queries using this descriptor. Second is the retrieval-with-probabilistic-indexing (RPI) model, which is suited to different kinds of probabilistic indexing. For that we assume that each indexing scheme has its own concept of “correctness” to which the probabilities relate. In addition to the probabilistic indexing weights, the RPI model provides the possibility of relevance weighting of search terms. A third model that is similar was proposed by Croft some years ago as an extension of the binary independence retrieval model but it can be shown that this model is not based on the probabilistic ranking principle. The probabilistic indexing weights required for any of these models can be provided by an application of the Darmstadt indexing approach (DIA) for indexing with descriptors from a controlled vocabulary. The experimental results show significant improvements over retrieval with binary indexing. Finally, suggestions are made regarding how the DIA can be applied to probabilistic indexing with free text terms.
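
As a rough illustration only (not the exact BII or RPI formula), a retrieval status value might combine a document's probabilistic indexing weights with relevance weights of the query's search terms; the sketch below shows one such weighted-sum combination with invented numbers.

```python
# NOT the paper's formula: a generic weighted-sum combination of
# probabilistic indexing weights and search-term relevance weights.
def rsv(query_weights, doc_indexing_weights):
    """Sum of relevance weight x probabilistic indexing weight over query terms."""
    return sum(w * doc_indexing_weights.get(term, 0.0)
               for term, w in query_weights.items())

query = {"retrieval": 1.2, "indexing": 0.8}              # relevance weights (invented)
doc = {"retrieval": 0.7, "indexing": 0.4, "fuzzy": 0.9}  # indexing weights (invented)
print(rsv(query, doc))  # documents are ranked by this value
```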

196 citations


Journal ArticleDOI
01 Jun 1989
TL;DR: This work aims at developing criteria for when reoptimization is required, how these criteria can be implemented efficiently, and how reoptimization can be avoided by using a new technique called dynamic query evaluation plans.
Abstract: In most database systems, a query embedded in a program written in a conventional programming language is optimized when the program is compiled. The query optimizer must make assumptions about the values of the program variables that appear as constants in the query, the resources that can be committed to query evaluation, and the data in the database. The optimality of the resulting query evaluation plan depends on the validity of these assumptions. If a query evaluation plan is used repeatedly over an extended period of time, it is important to determine when reoptimization is necessary. Our work aims at developing criteria for when reoptimization is required, showing how these criteria can be implemented efficiently, and showing how reoptimization can be avoided by using a new technique called dynamic query evaluation plans. We experimentally demonstrate the need for dynamic plans and outline modifications to the EXODUS optimizer generator required for creating dynamic query evaluation plans.
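
A minimal sketch of the idea behind dynamic query evaluation plans: compile alternative sub-plans and defer the choice among them until run time, when the program variable's actual value, and hence the predicate's selectivity, is known. The estimator, threshold, and scan functions below are all illustrative, not the EXODUS mechanism.

```python
def index_scan(table, column, value):
    # Stands in for an index lookup; same result, different cost profile.
    return [row for row in table if row[column] == value]

def full_scan(table, column, value):
    return [row for row in table if row[column] == value]

def dynamic_plan(estimated_selectivity):
    # Decision point embedded in the plan, evaluated at run time
    # once the program variable's value is bound. Threshold is invented.
    if estimated_selectivity < 0.05:
        return index_scan   # few matches expected: use the index
    return full_scan        # many matches expected: scan sequentially

table = [{"dept": "cs"}, {"dept": "math"}, {"dept": "cs"}]
chosen = dynamic_plan(estimated_selectivity=0.5)
rows = chosen(table, "dept", "cs")
```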

182 citations


Journal ArticleDOI
TL;DR: This approach is not suited to log-linear probabilistic models and it needs large samples of relevance feedback data for its application, but it can handle very complex representations of documents and requests and it can be easily applied to multivalued relevance scales.
Abstract: We show that any approach to developing optimum retrieval functions is based on two kinds of assumptions: first, a certain form of representation for documents and requests, and second, additional simplifying assumptions that predefine the type of the retrieval function. Then we describe an approach for the development of optimum polynomial retrieval functions: request-document pairs (f_l, d_m) are mapped onto description vectors x(f_l, d_m), and a polynomial function e(x) is developed such that it yields estimates of the probability of relevance P(R | x(f_l, d_m)) with minimum squared errors. We give experimental results for the application of this approach to documents with weighted indexing as well as to documents with complex representations. In contrast to other probabilistic models, our approach yields estimates of the actual probabilities, it can handle very complex representations of documents and requests, and it can be easily applied to multivalued relevance scales. On the other hand, this approach is not suited to log-linear probabilistic models and it needs large samples of relevance feedback data for its application.
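
A hedged sketch of the fitting step: description vectors x are mapped to relevance estimates by a polynomial e(x) whose coefficients minimize squared error against binary relevance judgements. The feature vectors, judgements, and polynomial structure below are invented.

```python
import numpy as np

X = np.array([[0.9, 0.2], [0.1, 0.8], [0.7, 0.7], [0.2, 0.1]])  # description vectors
y = np.array([1.0, 0.0, 1.0, 0.0])                              # relevance judgements

def poly_features(X):
    # [1, x1, x2, x1^2, x2^2] per pair: one simple polynomial structure (no cross terms).
    return np.column_stack([np.ones(len(X)), X, X ** 2])

A = poly_features(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit of e(x)
estimate = poly_features(np.array([[0.8, 0.3]])) @ coef  # ~P(R | x), used for ranking
```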

160 citations


Journal ArticleDOI
A. A. Schäffer
TL;DR: An O(n log n) time algorithm to find an optimal ranking of an n-node tree is described.

143 citations


Proceedings ArticleDOI
01 Dec 1989
TL;DR: This paper describes the design of a direct manipulation user interface for Boolean information retrieval that presents a two-dimensional graphical representation of a user's natural language query which not only exposes heuristic query transformations performed by the system, but also supports query reformulation by the user via direct manipulation of the representation.
Abstract: This paper describes the design of a direct manipulation user interface for Boolean information retrieval. Intended to overcome the difficulties of manipulating explicit Boolean queries as well as the “black box” drawbacks of so-called natural language query systems, the interface presents a two-dimensional graphical representation of a user's natural language query which not only exposes heuristic query transformations performed by the system, but also supports query reformulation by the user via direct manipulation of the representation. The paper illustrates the operation of the interface as implemented in the AI-STARS full-text information retrieval system.

94 citations


Journal ArticleDOI
TL;DR: This article reports on exploratory experiments in evaluating and improving a thesaurus through studying its effect on retrieval, showing how adding non-BT relations to MeSH could improve document ranking if DISTANCE were also appropriately revised to treat these relations differently from BT relations.
Abstract: This article reports on exploratory experiments in evaluating and improving a thesaurus through studying its effect on retrieval. A formula called DISTANCE was developed to measure the conceptual distance between queries and documents encoded as sets of thesaurus terms. DISTANCE references MeSH (Medical Subject Headings) and assesses the degree of match between a MeSH-encoded query and document. The performance of DISTANCE on MeSH is compared to the performance of people in the assessment of conceptual distance between queries and documents, and is found to simulate human performance with surprising accuracy. The power of the computer simulation stems both from the tendency of people to rely heavily on broader-than (BT) relations in making decisions about conceptual distance and from the thousands of accurate BT relations in MeSH. One source of discrepancy between the algorithm's measurement of closeness between query and document and people's measurement is occasional inconsistency in the BT relations. Our experiments with adding non-BT relations to MeSH showed how these non-BT relations could improve document ranking, if DISTANCE were also appropriately revised to treat these relations differently from BT relations.
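
An illustrative sketch (not the paper's DISTANCE formula) of measuring conceptual distance through broader-than links: treating thesaurus terms as nodes and BT relations as edges, the distance between a query term and a document term is the length of the shortest connecting path. The tiny hierarchy and the averaging rule below are invented.

```python
from collections import deque

# Invented miniature BT hierarchy: term -> list of broader terms.
BT = {"aspirin": ["analgesic"], "ibuprofen": ["analgesic"],
      "analgesic": ["drug"], "antibiotic": ["drug"]}

def neighbours(term):
    ups = BT.get(term, [])
    downs = [t for t, broader in BT.items() if term in broader]
    return ups + downs

def path_length(a, b):
    # Breadth-first search over BT links in either direction.
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        term, d = frontier.popleft()
        if term == b:
            return d
        for n in neighbours(term):
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    return float("inf")

def distance(query_terms, doc_terms):
    # One simple aggregation: average, over query terms, of the closest doc term.
    return sum(min(path_length(q, t) for t in doc_terms)
               for q in query_terms) / len(query_terms)

print(distance({"aspirin"}, {"ibuprofen", "antibiotic"}))  # 2.0 via 'analgesic'
```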

93 citations


Proceedings ArticleDOI
01 Dec 1989
TL;DR: A study evaluating how easily enhanced queries can be acquired from users and how effectively this additional knowledge can be used in retrieval indicates that significant effectiveness benefits can be obtained through the acquisition of domain concepts related to query concepts.
Abstract: In some recent experimental document retrieval systems, emphasis has been placed on the acquisition of a detailed model of the information need through interaction with the user. It has been argued that these “enhanced” queries, in combination with relevance feedback, will improve retrieval performance. In this paper, we describe a study with the aim of evaluating how easily enhanced queries can be acquired from users and how effectively this additional knowledge can be used in retrieval. The results indicate that significant effectiveness benefits can be obtained through the acquisition of domain concepts related to query concepts, together with their level of importance to the information need.

79 citations


Journal ArticleDOI
TL;DR: In this article, column generation is used during the tree search procedure, combined with a ranking procedure which ensures that the exact optimal integer solution is obtained for the matrix decomposition problem in the context of satellite communication system optimization.

79 citations


Journal ArticleDOI
TL;DR: Strategies for secure query processing in multilevel-security database management systems are proposed, carried out by query modification, a technique that has been used for enforcing integrity constraints and providing view mechanisms.
Abstract: Strategies for secure query processing in multilevel-security database management systems are proposed. They are carried out by query modification, a technique that has been used for enforcing integrity constraints and providing view mechanisms. The technique consists of replacing the query the user presents with one that, when evaluated, will perform the desired function. In the case of a view mechanism, the names of views referenced in the query are replaced by the definitions of the views in terms of base relations. The basic strategy and two variants (adding environmental information and using graphs) are described. The performance of the strategies is examined.

Journal ArticleDOI
TL;DR: An updated ranking of economics departments, extended in scope and subject-area coverage, is presented.
Abstract: This is another and updated ranking of economics departments, extended in scope and subject area coverage.

Proceedings ArticleDOI
01 May 1989
TL;DR: A parallel document ranking algorithm suitable for use on databases of 1-1000 GB, resident on primary or secondary storage, is presented; based on inverted indexes, it has two advantages over a previously published parallel algorithm for retrieval based on signature files.
Abstract: In this paper we present a parallel document ranking algorithm suitable for use on databases of 1-1000 GB, resident on primary or secondary storage. The algorithm is based on inverted indexes, and has two advantages over a previously published parallel algorithm for retrieval based on signature files. First, it permits the employment of ranking strategies which cannot be easily implemented using signature files, specifically methods which depend on document-term weighting. Second, it permits the interactive searching of databases resident on secondary storage. The algorithm is evaluated via a mixture of analytic and simulation techniques, with a particular focus on how cost-effectiveness and efficiency change as the size of the database, number of processors, and cost of memory are altered. In particular, we find that if the ratio of the number of processors and/or disks to the size of the database is held constant, then the cost-effectiveness of the resulting system remains constant. Furthermore, for a given size of database, there is a number of processors which optimizes cost-effectiveness. Estimated response times are also presented. Using these methods, it appears that cost-effective interactive access to databases in the 100-1000 GB range can be achieved using current technology.
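
A much-simplified sketch of the central idea: the documents, and hence the inverted index, are partitioned across processors; each processor scores its own partition with a term-weighted inner product, and the per-partition results are merged into one ranking. The weights below are invented stand-ins for document-term weighting.

```python
from heapq import nlargest

# Each entry plays the role of one processor's inverted index over its
# own documents (document-partitioned: each doc lives in one partition).
partitions = [
    {"ranking": {"d1": 0.8, "d2": 0.3}, "parallel": {"d2": 0.9}},
    {"ranking": {"d3": 0.5}, "parallel": {"d3": 0.4, "d4": 0.7}},
]

def score_partition(index, query):
    scores = {}
    for term, weight in query.items():
        for doc, w in index.get(term, {}).items():
            scores[doc] = scores.get(doc, 0.0) + weight * w
    return scores

query = {"ranking": 1.0, "parallel": 0.5}
merged = {}
for index in partitions:               # done concurrently on a real machine
    merged.update(score_partition(index, query))
top = nlargest(3, merged.items(), key=lambda kv: kv[1])
```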

Proceedings ArticleDOI
01 May 1989
TL;DR: This paper describes the design of the information retrieval facilities of an integrated information system called EUROMATH, an example of a Knowledge Worker Support System designed specifically to support mathematicians in their research work.
Abstract: This paper describes the design of the information retrieval facilities of an integrated information system called EUROMATH. EUROMATH is an example of a Knowledge Worker Support System: it has been designed specifically to support mathematicians in their research work. EUROMATH is required to provide uniform retrieval facilities for searching in a user's personal data, in a shared database of structured documents and in public, bibliographic databases. The design of information retrieval facilities that satisfy these and other requirements posed several interesting design issues regarding the integration of various retrieval techniques. As well as a uniform query language, designed to be highly usable by the target user group, the retrieval facilities provide expert intermediary functions, i.e. sophisticated support for the retrieval of bibliographic data. This support is achieved using a model of the user, a model of the user's information need and a set of search strategies based on those used by human intermediaries. The expert intermediary facilities include extensive help facilities, automatic query reformulation and browsing of a variety of sources of query terms.

Proceedings ArticleDOI
01 Dec 1989
TL;DR: This article proposes using a process model to facilitate and improve query refinement in an online environment; incorporating this model into retrieval systems can result in the design of more “intelligent” and useful information retrieval systems.
Abstract: This article reports findings of empirical research that investigated information searchers' online query refinement process. Prior studies have recognized the information specialists' role in helping searchers articulate and refine queries. Using a semantic network and a Problem Behavior Graph to represent the online search process, our study revealed that searchers also refined their own queries in an online task environment. The information retrieval system played a passive role in assisting online query refinement, a process that nonetheless confirmed Taylor's four-level query formulation model. Based on our empirical findings, we proposed using a process model to facilitate and improve query refinement in an online environment. We believe incorporating this model into retrieval systems can result in the design of more “intelligent” and useful information retrieval systems.

Proceedings ArticleDOI
01 Dec 1989
TL;DR: This paper focuses on the query processing module of RIME, an experimental prototype of an intelligent information retrieval system designed to manage high-precision queries on a corpus of medical reports, which has a natural language interface.
Abstract: This paper focuses on the query processing module of RIME, an experimental prototype of an intelligent information retrieval system designed to manage high-precision queries on a corpus of medical reports. Though highly specific, this particular corpus is representative of an important class of applications: information retrieval among full-text specialized documents which constitute critical sources of information in several organizations (medicine, law, space industry…). This experience allowed us to design and implement an elaborate model for the semantic content of the documents, which is an extension of the Conceptual Dependency approach. The underlying retrieval model is inspired by the Logic model proposed by C.J. Van Rijsbergen, which has been considerably refined using an Extended Modal Logic. After presenting the context of the RIME project, we briefly describe the models designed for the internal representation of medical reports and queries. The main part of the paper is then devoted to the retrieval model and its application to the query processing module of RIME, which has a natural language interface. Processing a query involves two main phases: interpretation, which transforms the natural language query into a search expression, and evaluation, which retrieves the corresponding medical reports. We focus here on the evaluation phase and show its relationship with the underlying retrieval model. Evaluations from practical experiments are also given, along with indications about current developments of the project.

Proceedings ArticleDOI
06 Feb 1989
TL;DR: For efficient processing, a general cyclic query and the access plans generated for a given query are defined, and a cost model is developed to determine the cost of each access plan generated.
Abstract: Cyclic query processing issues in object-oriented databases are investigated. A data and cyclic query model is defined for an object-oriented database system, using a graph model. Then the efficient processing of a general object-oriented cyclic query is discussed. For efficient processing, a general cyclic query and the access plans generated for a given query are defined, and a cost model is developed to determine the cost of each access plan generated. The retrieval algorithms used for actual data retrieval are also investigated.

Journal ArticleDOI
TL;DR: A fuzzy retrieval method that enables one to infer weights and ranking outputs is formulated; although still based on two-valued indexing, it can be extended to Boolean queries by fuzzy logic, and hence to weighted Boolean systems, without assuming weighted indexing.
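
One standard fuzzy-logic reading of Boolean queries, roughly the kind of extension the TL;DR mentions: term weights in [0, 1] combine with AND as minimum, OR as maximum, and NOT as complement. The inferred weights below are invented.

```python
# Standard fuzzy-set connectives over term weights in [0, 1].
def AND(*ws): return min(ws)
def OR(*ws): return max(ws)
def NOT(w): return 1.0 - w

doc_weights = {"fuzzy": 0.8, "retrieval": 0.6, "boolean": 0.2}  # invented weights
w = doc_weights.get
# Query: fuzzy AND (retrieval OR boolean) AND NOT boolean
score = AND(w("fuzzy", 0.0), OR(w("retrieval", 0.0), w("boolean", 0.0)),
            NOT(w("boolean", 0.0)))   # 0.6 for this document
```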

Journal ArticleDOI
01 Oct 1989
TL;DR: Two studies from a screen icon testing program are reported: an appropriateness ranking study that screens several candidate designs, and a matching study that determines how well the icons work as a related set and how likely it is that individual icons will be confused with each other.
Abstract: Two studies from a screen icon testing program are reported. An appropriateness ranking study is a preliminary procedure that screens several candidate designs and results in a single image content for each icon. Subjects preferred the more concrete icons to the more abstract ones. Familiar image content was also preferred. The matching study determined how well the icons worked as a related set, and how likely it is that individual icons would be confused with each other. The icons for Clock, Drawing, and Voice score high on correct and low on incorrect. The symmetric and asymmetric confusions are identified and explained in terms of visual and conceptual similarity. There is a discussion of the methodology used.

Journal ArticleDOI
TL;DR: A sufficient condition is given for one mobility matrix to display more mobility than another in the sense of Kanbur and Stiglitz.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the expert system can improve the search efficiency of novice searchers without decreasing their search effectiveness.
Abstract: This dissertation explores techniques to improve full-text information retrieval by experienced computer users who are novice users of retrieval systems. An expert system which automatically reformulates Boolean user queries to improve search results is presented. The expert system differs from other intelligent database functions in two ways: it works with semantically and syntactically unprocessed text; and the expert system contains a knowledge base of domain independent search strategies. The passages retrieved are presented to the user in decreasing order of estimated relevancy. This combination of user interface features provides powerful, yet simple, access to full-text documents. Experimental results demonstrate that the expert system can improve the search efficiency of novice searchers without decreasing their search effectiveness. Further, an evaluation of the ranking algorithm confirms that, in general, the system presents potentially relevant passages to the user before irrelevant passages.

Journal ArticleDOI
Baruch Nevo
TL;DR: An improved variation of matching based on ranking is presented, together with a report on a study conducted by the author in which such a procedure was instituted.
Abstract: Matching models are offered as a possible solution to some of the major methodological problems presented in the research on validation of graphology. Past studies based on matching procedures are briefly summarized. An improved variation of matching based on ranking is presented, together with a report on a study conducted by the author in which such a procedure was instituted. Analyses show that raters can match, with a probability higher than chance, persons known to them with graphological reports of those persons.

Journal ArticleDOI
01 Apr 1989
TL;DR: The most striking result in working with Professor Gerald Salton over 20 years ago on the comparison between the SMART system and the MEDLARS system was the fact that whereas Boolean retrieval did very well or very poorly, theSMART system always seemed to find some of the relevant records.
Abstract: The most striking result in working with Professor Gerald Salton over 20 years ago on the comparison between the SMART system and the MEDLARS system [Salton69] was the fact that whereas Boolean retrieval (MEDLARS) did very well or very poorly, the SMART system always seemed to find some of the relevant records. All of us working at Cornell University during that time wanted to run a full-scale comparison between these systems to demonstrate what was to us the clear superiority of a ranking retrieval system, but this was not possible due to lack of funding and other problems.

Journal ArticleDOI
TL;DR: The language is one of several tools for the Binary Relationship Model being implemented by the Database Systems Group at Maryland and supports queries as well as view definitions.

Journal ArticleDOI
TL;DR: For nonalphabetic trees two different ranking problems are considered, and for each of them it is shown that the next best tree can be computed by a dynamic programming formula of low complexity order.
Abstract: The problem of ranking the K-best binary trees with respect to their weighted average leaves' levels is considered. Both the alphabetic case, where the order of the weights in the sequence $w_1, \cdots, w_n$ must be preserved in the leaves of the tree, and the nonalphabetic case, where no such restriction is imposed, are studied. For the alphabetic case a simple algorithm is provided for ranking the K-best trees based on a recursive formula of complexity $O(Kn^3)$. For nonalphabetic trees two different ranking problems are considered, and for each of them it is shown that the next best tree can be computed by a dynamic programming formula of low complexity order.
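
A small sketch of the quantity being ranked: the weighted average of the leaves' levels in a binary tree, i.e. the sum of $w_i$ times the level of leaf $i$, divided by the sum of the $w_i$. The K-best algorithms enumerate trees in order of this cost; the example tree below is invented.

```python
def weighted_leaf_levels(tree, depth=0):
    """Return (sum of w_i * level_i, sum of w_i) over the leaves.

    Trees are nested 2-tuples; a leaf is its weight w_i."""
    if not isinstance(tree, tuple):          # a leaf carrying weight w_i
        return tree * depth, tree
    (ls, lw), (rs, rw) = (weighted_leaf_levels(t, depth + 1) for t in tree)
    return ls + rs, lw + rw

tree = ((3, 1), (4, 2))        # leaves with weights 3, 1, 4, 2, all at level 2
total, weight = weighted_leaf_levels(tree)
print(total / weight)          # 2.0; ranking enumerates trees by this cost
```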

01 Jan 1989
TL;DR: Comparative searches using 130 queries and 20 full-text documents demonstrate the general effectiveness of the nearest neighbour model for paragraph-based searching, and it is shown that the output from a nearest neighbour search can be used to guide a reader to the most appropriate segment of an online full-text document.
Abstract: This paper discusses the searching of full-text documents to identify paragraphs that are relevant to a user request. Given a natural language query statement, a nearest neighbour search involves ranking the paragraphs comprising a full-text document in order of descending similarity with the query, where the similarity for each paragraph is determined by the number of keyword stems that it has in common with the query. This approach is compared with the more conventional Boolean search, which requires the user to specify the logical relationships between the query terms. Comparative searches using 130 queries and 20 full-text documents demonstrate the general effectiveness of the nearest neighbour model for paragraph-based searching. It is shown that the output from a nearest neighbour search can be used to guide a reader to the most appropriate segment of an online full-text document.
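
A minimal sketch of the nearest neighbour search just described: paragraphs are ranked by the number of keyword stems they share with the query. The crude suffix-stripping here is only a stand-in for a real stemming algorithm.

```python
def stems(text):
    # Crude stand-in for a stemmer: strip punctuation and a trailing 's'.
    words = [w.strip(".,;:") for w in text.lower().split()]
    return {w[:-1] if w.endswith("s") else w for w in words}

def rank_paragraphs(query, paragraphs):
    q = stems(query)
    sims = [(len(q & stems(p)), i) for i, p in enumerate(paragraphs)]
    return sorted(sims, reverse=True)     # descending similarity with the query

paragraphs = ["Ranking algorithms order documents by similarity.",
              "Boolean operators connect query terms explicitly."]
print(rank_paragraphs("ranking document similarity", paragraphs))
```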

Journal ArticleDOI
01 Oct 1989 - Politics
TL;DR: The link between performance indicators and resource allocation in British higher education has been forged, making it doubly important that attempts to construct league tables of university performance are competently conducted.
Abstract: RECENT TRENDS in government policy towards higher education in Britain have widely been interpreted as a reaction to the rapid expansion which followed the 1963 Robbins Report (Moore, 1987). The financial squeeze imposed on universities by the present government during the 1980s has, however, been consistent with the restrictions imposed elsewhere in the public sector. The cuts of 1981 can be understood in terms of the need to reduce overall government spending so as to reduce monetary growth and hence curb inflation. The second round of cuts, in 1986, is more closely related to the government's view that the frontiers of the public sector should be rolled back; higher education policy in the second half of the eighties has therefore borne a resemblance to the privatization movement. In his 1989 Lancaster speech, Kenneth Baker (Secretary of State for Education) heralded a further period of improved student access to higher education, but at the same time made it clear that the burden of financing the system will be gradually transferred away from the state. Public expenditure on higher education currently amounts to £4.4 billion per annum. Some £1.7 billion of this is spent directly on the universities. In the present political context, ‘value for money’ and ‘efficiency’ have become major goals. Efficiency is achieved where, given a constant set of inputs into the system, output is maximized. The drive towards greater efficiency is, of course, to be welcomed. Inevitably part of that drive involves an attempt to measure the success of various parts of the university system. Questions of resource allocation always come to the fore during periods of major expansion or contraction. While in the past this was in the absence of detailed information about the performance of individual institutions and departments, the tools of analysis are by now in place to provide such information. The link between performance indicators and resource allocation has been forged. All this makes it doubly important that attempts to construct league tables of university performance are competently conducted. For many years, Michael Dixon has been publishing rankings of universities based on graduate employability (for example, Dixon, 1982). Such measures provided some amusement in the 1970s, but few observers took them seriously. As Johnes et al. (1987) have shown, rankings of this kind are determined mainly by subject mix. More recently, Dixon (1989) has published league tables based on student non-completion (‘wastage’) rates. These measures, too, can be misleading, since a number of factors other than university quality (subject mix, course length, propensity of students to live at home) can affect student attrition. A little knowledge is clearly a dangerous thing, especially if it is accompanied by a lot of cash. If Michael Dixon's league tables do indeed simply reflect inter-university differences in subject mix, then they are at best useless as a decision-making tool but may still influence the decision-maker, consciously or subconsciously. Of course, the number and scope of performance indicators has increased substantially during the present decade. Apart from Dixon's employability indicators, measures based on unit costs, degree results, student attrition (or wastage), staff publications and citations, and the ability to attract external funding are easily obtainable. In 1986 the University Grants Committee published a set of department rankings based largely on peer review.
These have already substantially affected funding decisions, as well as other aspects of university life (such as the ability of departments to attract good-quality staff). In addition a number of recently published bibliometric studies provide quantitative measures of research output in university departments, notably Gillett (1987), Lloyd (1987), Rogers and Scratcherd (1986), Lamb (1986), Johnes (1987; 1988a), Davis (1986) and Crewe (1988). These build on the methodology developed by, amongst others, Meltzer (1949), Mans (1951) and Garfield (1964; 1970). More recently, and on the European side of the Atlantic, Martin and Irvine (1983) have enthusiastically supported these techniques but, these two ‘SPRU gurus’ apart, most contemporary

Proceedings ArticleDOI
22 Mar 1989
TL;DR: An improved algorithm is presented which computes in advance an upper bound on closeness, avoiding the exact computation of closeness in many instances and thus reducing both the number of documents to be evaluated and the number of inverted lists to be inspected.
Abstract: The use of best-match search strategies in information retrieval systems is discussed. In response to a given query, best-match searching requires the identification of those documents in the collection which are most similar to the query, with similarity being measured by an appropriate closeness function. The emphasis is on heuristics to efficiently locate the set of closest documents. The problem is introduced with reference to a straightforward search procedure that returns the best documents by manipulating inverted index entries. An improved algorithm is presented which computes in advance an upper bound on closeness, avoiding the exact computation of closeness in many instances and thus reducing both the number of documents to be evaluated and the number of inverted lists to be inspected. The algorithm is analyzed, and experimental results are reported.
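
A hedged sketch of the pruning idea: before computing a document's exact closeness, compare a cheap, precomputed upper bound against the worst score in the current top k; once no remaining document's bound can beat it, stop. The bound, closeness function, and data below are invented and much simpler than the paper's inverted-list formulation.

```python
import heapq

def best_match(query, docs, upper_bound, closeness, k=2):
    top = []  # min-heap of (score, doc) for the current best k
    # Examine documents in decreasing order of their upper bound.
    for doc in sorted(docs, key=lambda d: upper_bound(query, d), reverse=True):
        if len(top) == k and upper_bound(query, doc) <= top[0][0]:
            break                      # no remaining doc can enter the top k
        score = closeness(query, doc)  # the expensive exact computation
        heapq.heappush(top, (score, doc))
        if len(top) > k:
            heapq.heappop(top)
    return sorted(top, reverse=True)

# Invented toy collection: doc -> term weights.
docs = {"d1": {"ranking": 3, "search": 1}, "d2": {"search": 2}, "d3": {"ranking": 1}}
query = {"ranking", "search"}
def closeness(q, d): return sum(w for t, w in docs[d].items() if t in q)
def upper_bound(q, d): return sum(docs[d].values())  # never below the true score
print(best_match(query, docs, upper_bound, closeness))
```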

Journal ArticleDOI
TL;DR: SIBRIS (Sandwich Interactive Browsing and Ranking Information System) is an interactive text retrieval system which has been developed to support the browsing of library and product files at Pfizer Central Research.
Abstract: SIBRIS (Sandwich Interactive Browsing and Ranking Information System) is an interactive text retrieval system which has been developed to support the browsing of library and product files at Pfizer Central Research. Once an initial ranking has been produced, the system will allow the user to select any document displayed on the screen at any point during the browse and to use that as the basis for another search. Facilities have been included to enable the user to keep track of the browse and to facilitate backtracking, thus allowing the user to move away from the original query to wander in and out of different areas of interest.

Book
01 Jan 1989
TL;DR: This book presents the ranking technique for decision making, assesses its potential efficacy, and offers guidelines for its application as well as the basis for alternatives to it.
Abstract: Introduction. The problems of decision making. The relevance of decision analysis techniques currently available. The nature of the ranking technique. Evaluation of technical factors for ranking. Evaluation of economic factors for ranking. Evaluation of socio-political factors for ranking. Assessment of the potential efficacy of the ranking technique. The basis for alternatives to the ranking technique in the decision making process. Guidelines for the application of the ranking technique. Conclusions. Index.