
Showing papers on "Ranking (information retrieval) published in 1992"


Proceedings ArticleDOI
01 Jun 1992
TL;DR: These experiments, using the Cranfield 1400 collection, showed the importance of query expansion in addition to query reweighting, and showed that adding as few as 20 well-selected terms could result in performance improvements of over 100%.
Abstract: Researchers have found relevance feedback to be effective in interactive information retrieval, although few formal user experiments have been made. In order to run a user experiment on a large document collection, experiments were performed at NIST to complete some of the missing links found in using the probabilistic retrieval model. These experiments, using the Cranfield 1400 collection, showed the importance of query expansion in addition to query reweighting, and showed that adding as few as 20 well-selected terms could result in performance improvements of over 100%. Additionally it was shown that performing multiple iterations of feedback is highly effective.

441 citations
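
The abstract does not give the term-selection formula used at NIST. A standard choice in the probabilistic model this work builds on is the Robertson-Sparck Jones relevance weight; the sketch below ranks candidate expansion terms with it and is an illustration of the general approach, not the paper's exact procedure. The "point-5" smoothing and the top-k cutoff are assumptions.

```python
import math

def rsj_weight(r, n, R, N):
    """Robertson-Sparck Jones relevance weight ('point-5' form):
    r = relevant docs containing the term, n = docs containing it,
    R = known relevant docs, N = docs in the collection."""
    return math.log(((r + 0.5) / (R - r + 0.5)) /
                    ((n - r + 0.5) / (N - n - R + r + 0.5)))

def expansion_terms(term_stats, R, N, k=20):
    """Top-k candidate expansion terms by relevance weight;
    term_stats maps term -> (r, n)."""
    return sorted(term_stats,
                  key=lambda t: rsj_weight(*term_stats[t], R, N),
                  reverse=True)[:k]
```

A term concentrated in the relevant documents gets a large positive weight; a term spread mostly over non-relevant documents gets a negative one, so it never reaches the expansion list.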


Journal ArticleDOI
TL;DR: It is shown how the concept of relevance may be replaced by the condition of being highly rated by a similarity measure, making it possible to identify the stop words in a collection by automated statistical testing.
Abstract: A stop word may be identified as a word that has the same likelihood of occurring in those documents not relevant to a query as in those documents relevant to the query. In this paper we show how the concept of relevance may be replaced by the condition of being highly rated by a similarity measure. Thus it becomes possible to identify the stop words in a collection by automated statistical testing. We describe the nature of the statistical test as it is realized with a vector retrieval methodology based on the cosine coefficient of document-document similarity. As an example, this technique is then applied to a large MEDLINE subset in the area of biotechnology. The initial processing of this database involves a 310 word stop list of common non-content terms. Our technique is then applied and 75% of the remaining terms are identified as stop words. We compare retrieval with and without the removal of these stop words and find that of the top twenty documents retrieved in response to a random query docume...

281 citations
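
The abstract leaves the exact statistical test unspecified. The sketch below substitutes a chi-square test of independence between a term's occurrence and a document's being highly rated by the cosine coefficient: a term that fails to reject independence is flagged as a stop word. The significance threshold, the top-k cutoff, and the sparse count-dict representation are all assumptions.

```python
import math

def cosine(a, b):
    """Cosine coefficient between two sparse term-count vectors (dicts)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chi2_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / den if den else 0.0

def stopword_candidates(docs, query_doc, top_k=2, threshold=3.84):
    """Terms whose occurrence looks independent of being highly rated
    by cosine similarity to query_doc (chi2 below the 5% critical value,
    so independence cannot be rejected)."""
    ranked = sorted(docs, key=lambda d: cosine(query_doc, d), reverse=True)
    top, rest = ranked[:top_k], ranked[top_k:]
    out = []
    for t in {t for d in docs for t in d}:
        in_top = sum(1 for d in top if t in d)
        in_rest = sum(1 for d in rest if t in d)
        stat = chi2_2x2(in_top, len(top) - in_top,
                        in_rest, len(rest) - in_rest)
        if stat < threshold:
            out.append(t)
    return out
```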


Proceedings ArticleDOI
01 Jun 1992
TL;DR: A new automated assignment method called “n of 2n” achieves better performance than human experts by sending reviewers more papers than they actually have to review and then allowing them to choose part of their review load themselves.
Abstract: The 117 manuscripts submitted for the Hypertext '91 conference were assigned to members of the review committee, using a variety of automated methods based on information retrieval principles and Latent Semantic Indexing. Fifteen reviewers provided exhaustive ratings for the submitted abstracts, indicating how well each abstract matched their interests. The automated methods do a fairly good job of assigning relevant papers for review, but they are still somewhat poorer than assignments made manually by human experts and substantially poorer than an assignment perfectly matching the reviewers' own ranking of the papers. A new automated assignment method called “n of 2n” achieves better performance than human experts by sending reviewers more papers than they actually have to review and then allowing them to choose part of their review load themselves.

206 citations


Proceedings ArticleDOI
03 Feb 1992
TL;DR: An approach to knowledge mining by imprecise querying that utilizes conceptual clustering techniques is presented and an example of the algorithm's use in query processing is presented.
Abstract: Knowledge mining is the process of discovering knowledge that is hitherto unknown. An approach to knowledge mining by imprecise querying that utilizes conceptual clustering techniques is presented. The query processor has both a deductive and an inductive component. The deductive component finds precise matches in the traditional sense, and the inductive component identifies ways in which imprecise matches may be considered similar. Ranking on similarity is done by using the database taxonomy, by which similar instances become members of the same class. Relative similarity is determined by depth in the taxonomy. The conceptual clustering algorithm, its use in query processing, and an example are presented.

91 citations
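
The ranking idea, that similar instances fall under the same class and relative similarity is given by depth in the database taxonomy, can be sketched with a child-to-parent class map: two instances are as similar as the depth of the deepest class covering both. The map representation is an assumption; the paper's conceptual clustering algorithm itself is not reproduced here.

```python
def ancestors(taxonomy, cls):
    """Chain from cls up to the root, given a child -> parent map."""
    chain = [cls]
    while chain[-1] in taxonomy:
        chain.append(taxonomy[chain[-1]])
    return chain

def similarity(taxonomy, a, b):
    """Depth of the deepest class containing both a and b: a deeper
    common class means the instances are more similar."""
    seen = set(ancestors(taxonomy, a))
    for cls in ancestors(taxonomy, b):
        if cls in seen:
            return len(ancestors(taxonomy, cls)) - 1  # distance from root
    return 0
```

An imprecise query for a sedan would then rank an SUV (sharing the deeper class "car") above a boat (sharing only "vehicle").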


Proceedings ArticleDOI
01 Jun 1992
TL;DR: Both the pseudo-cosine and the standard vector space models can be viewed as special cases of a generalized linear model and both the necessary and sufficient conditions have been identified under which ranking functions such as the inner-product, cosine, pseudo-Cosine, Dice, covariance and product-moment correlation measures can be used to rank the documents.
Abstract: This paper analyzes the properties, structures and limitations of vector-based models for information retrieval from the computational geometry point of view. It is shown that both the pseudo-cosine and the standard vector space models can be viewed as special cases of a generalized linear model. More importantly, both the necessary and sufficient conditions have been identified, under which ranking functions such as the inner-product, cosine, pseudo-cosine, Dice, covariance and product-moment correlation measures can be used to rank the documents. The structure of the solution region for acceptable ranking is analyzed and an algorithm for finding all the solution vectors is suggested.

43 citations
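
Several of the ranking functions named above are different normalizations of the inner product, and they need not induce the same document ordering, which is why the paper's necessary and sufficient conditions matter. A minimal sketch of three of the measures, with a query where the inner product and the cosine disagree on which document ranks first:

```python
import math

def inner(x, y):
    """Inner product of two term-weight vectors."""
    return sum(a * b for a, b in zip(x, y))

def cosine(x, y):
    """Inner product normalized by both Euclidean norms."""
    nx, ny = math.sqrt(inner(x, x)), math.sqrt(inner(y, y))
    return inner(x, y) / (nx * ny) if nx and ny else 0.0

def dice(x, y):
    """Inner product normalized by the sum of squared norms."""
    den = inner(x, x) + inner(y, y)
    return 2 * inner(x, y) / den if den else 0.0

def rank(docs, query, measure):
    """Document indices ordered best-first under the given measure."""
    return sorted(range(len(docs)),
                  key=lambda i: measure(docs[i], query), reverse=True)
```

With query (1, 1) and documents (3, 0) and (1, 1), the inner product prefers the heavily weighted document while the cosine prefers the one pointing in the query's direction.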


Journal ArticleDOI
TL;DR: This paper describes a process whereby a morpho-syntactic analysis of phrases or user queries is used to generate a structured representation of text to evaluate the effectiveness or quality of the matching and scoring of phrases.
Abstract: The application of automatic natural language processing techniques to the indexing and the retrieval of text information has been a target of information retrieval researchers for some time. Incorporating semantic-level processing of language into retrieval has led to conceptual information retrieval, which is effective but usually restricted in its domain. Using syntactic-level analysis is domain-independent, but has not yet yielded significant improvements in retrieval quality. This paper describes a process whereby a morpho-syntactic analysis of phrases or user queries is used to generate a structured representation of text. A process of matching these structured representations is then described that generates a metric value or score indicating the degree of match between phrases. This scoring can then be used for ranking the phrases. In order to evaluate the effectiveness or quality of the matching and scoring of phrases, some experiments are described that indicate the method to be quite useful. Ultimately the phrase-matching technique described here would be used as part of an overall document retrieval strategy, and some future work towards this direction is outlined.

40 citations



Proceedings ArticleDOI
01 Jun 1992
TL;DR: A model for combining text and fact retrieval is described, which uses descriptions of the occurrence of terms in documents instead of precomputed indexing weights for text conditions, thus treating terms similarly to attributes.
Abstract: In this paper, a model for combining text and fact retrieval is described. A query is a set of conditions, where a single condition is either a text or fact condition. Fact conditions can be interpreted as being vague, thus leading to nonbinary weights for fact conditions with respect to database objects. For text conditions, we use descriptions of the occurrence of terms in documents instead of precomputed indexing weights, thus treating terms similarly to attributes. Probabilistic indexing weights for conditions are computed by introducing the notion of correctness (or acceptability) of a condition w.r.t. an object. These indexing weights are then used in retrieval for a probabilistic ranking of objects based on the retrieval-with-probabilistic-indexing (RPI) model, for which a new derivation is given here.

38 citations


Journal ArticleDOI
TL;DR: The probability ranking principle retrieves documents in decreasing order of their predictive probabilities of relevance and can be suboptimal with respect to expected utility when one of these conditions fails to hold.
Abstract: The probability ranking principle retrieves documents in decreasing order of their predictive probabilities of relevance. Gordon and Lenk (1991) demonstrated that this principle is optimal within a signal detection-decision theory framework, and that it maximizes the inquirer's expected utility for relevant documents. These results hold under three conditions: calibration, independent assessment of relevance by the inquirer, and certainty about the computed probabilities of relevance. We demonstrate that the probability ranking principle can be suboptimal with respect to expected utility when one of these conditions fails to hold.

37 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: Results of the 73 tests conducted by this project are included, covering variant term position algorithms, sentence boundaries, stopword counting, every-pairs testing, field selection, and combinations of algorithms including collection frequency, record frequency, and searcher weighting.
Abstract: Presents seven sets of laboratory results testing variables in term position ranking which produce a phrase effect by weighting the distance between proximate terms. Results of the 73 tests conducted by this project are included, covering variant term position algorithms, sentence boundaries, stopword counting, every-pairs testing, field selection, and combinations of algorithms including collection frequency, record frequency, and searcher weighting. The discussion includes the results of tests by Fagan and by Croft, the need for term stemming, proximity as a precision device, comparisons with Boolean retrieval, and the quality of test collections.

27 citations


Dissertation
01 Jul 1992
TL;DR: This thesis is aimed at investigating interactive query expansion within the context of a relevance feedback system that uses term weighting and ranking in searching online databases that are available through online vendors.
Abstract: This thesis is aimed at investigating interactive query expansion within the context of a relevance feedback system that uses term weighting and ranking in searching online databases that are available through online vendors. Previous evaluations of relevance feedback systems have been made in laboratory conditions and not in a real operational environment. The research presented in this thesis followed the idea of testing probabilistic retrieval techniques in an operational environment. The overall aim of this research was to investigate the process of interactive query expansion (IQE) from various points of view including effectiveness. The INSPEC database, on both Data-Star and ESA-IRS, was searched online using CIRT, a front-end system that allows probabilistic term weighting, ranking and relevance feedback. The thesis is divided into three parts. Part I of the thesis covers background information and appropriate literature reviews with special emphasis on the relevance weighting theory (Binary Independence Model), the approaches to automatic and semi-automatic query expansion, the ZOOM facility of ESA/IRS and the CIRT front-end. Part II is comprised of three Pilot case studies. It introduces the idea of interactive query expansion and places it within the context of the weighted environment of CIRT. Each Pilot study looked at different aspects of the query expansion process by using a front-end. The Pilot studies were used to answer methodological questions and also research questions about the query expansion terms. The knowledge and experience that was gained from the Pilots was then applied to the methodology of the study proper (Part III). Part III discusses the Experiment and the evaluation of the six ranking algorithms. The Experiment was conducted under real operational conditions using a real system, real requests, and real interaction. 
Emphasis was placed on the characteristics of the interaction, especially on the selection of terms for query expansion. Data were collected from 25 searches. The data collection mechanisms included questionnaires, transaction logs, and relevance evaluations. The results of the Experiment are presented according to their treatment of query expansion as main results and other findings in Chapter 10. The main results discuss issues that relate directly to query expansion, retrieval effectiveness, the correspondence of the online-to-offline relevance judgements, and the performance of the w(p-q) ranking algorithm. Finally, a comparative evaluation of six ranking algorithms was performed. The yardstick for the evaluation was provided by the user relevance judgements on the lists of the candidate terms for query expansion. The evaluation focused on whether there are any similarities in the performance of the algorithms and how those algorithms with similar performance treat terms. This abstract refers only to the main conclusions drawn from the results of the Experiment: (1) One third of the terms presented in the list of candidate terms were on average identified by the users as potentially useful for query expansion. (2) These terms were mainly judged as either variant expressions (synonyms) or alternative (related) terms to the initial query terms. However, a substantial portion of the selected terms were identified as representing new ideas. (3) The relationship of the 5 best terms chosen by the users for query expansion to the initial query terms was: (a) 34% had no relationship or other type of correspondence with a query term; (b) 66% of the query expansion terms had a relationship which made the term: (b1) a narrower term (70%), (b2) a broader term (5%), (b3) a related term (25%). (4) The results provide some evidence for the effectiveness of interactive query expansion.
The initial search produced on average 3 highly relevant documents at a precision of 34%; the query expansion search produced on average 9 further highly relevant documents at slightly higher precision. (5) The results demonstrated the effectiveness of the w(p-q) algorithm, for the ranking of terms for query expansion, within the context of the Experiment. (6) The main results of the comparative evaluation of the six ranking algorithms, i.e. w(p-q), EMIM, F4, F4modified, Porter and ZOOM, are that: (a) w(p-q) and EMIM performed best; and (b) the performance between w(p-q) and EMIM and between F4 and F4modified is very similar. (7) A new ranking algorithm is proposed as the result of the evaluation of the six algorithms. Finally, an investigation is by definition an exploratory study which generates hypotheses for future research; recommendations and proposals for future research are given. The conclusions highlight the need for more research on weighted systems in operational environments, for a comparative evaluation of automatic vs interactive query expansion, and for user studies in searching weighted systems.

Patent
Mukesh Dalal1, Dipayan Gangopadhyay1
04 Nov 1992
TL;DR: In this article, a technique for enhancing the execution of programs written in a logic-oriented programming language such as PROLOG is described, which ensures that searching within the type hierarchy takes precedence over searching of instances of types.
Abstract: Processing techniques for enhancing execution of programs written in a logic-oriented programming language such as PROLOG are disclosed. The techniques are particularly useful for programs having class predicates and subclass predicates which are definitive of a class/sub-class hierarchy, such as the case with PROLOG's application in object-oriented programming systems, expert systems, object-oriented databases, object-oriented deductive databases, knowledge representations, etc. The techniques ensure that searching within the type hierarchy takes precedence over searching of instances of types. Important to accomplishing this function is the pre-assigning of ranks to predicates and clauses within the program to be processed. Query processing on the program is then based upon the pre-assigned predicate and clause rankings. In particular, novel rules are substituted for conventional predicate and clause selection rules of PROLOG interpreters such that predicates and clauses are preferably processed in order of ranking. In addition, certain query processing simplification steps are introduced. The net effect is a technique which eliminates redundant and unnecessary searching at the instance level by taking advantage of information available in the type lattice.

Proceedings ArticleDOI
03 Feb 1992
TL;DR: The problem of time-constrained query evaluation in a single-user database management system (DBMS) is considered and CASE-DB, a real-time, single user, relational prototype DBMS that uses the relational algebra as its query language, is considered.
Abstract: The problem of time-constrained query evaluation in a single-user database management system (DBMS) is considered. CASE-DB is a real-time, single-user, relational prototype DBMS that uses the relational algebra as its query language. Given a nonaggregate query and a fragment chain for each input relation of the query, CASE-DB uses iterative query evaluation techniques to obtain a response first to a modified version of the query, and then to successively improved versions of the query. CASE-DB controls the risk of overspending the time quota at each step using a risk control technique. For periodically occurring queries, CASE-DB uses incremental query evaluation techniques that ensure that each operator in the query has at least one operand relation which contains the changes in the last period and is expected to be very small compared to the actual database relation.



Journal Article
01 Jan 1992-Scopus
TL;DR: In this article, a regression method is proposed to combine decisions of multiple character recognition algorithms and derive a consensus ranking, which is computed by a weighted sum of the rank scores produced by the individual classifiers and derived by a logistic regression analysis.
Abstract: A regression method is proposed to combine decisions of multiple character recognition algorithms. The method computes a weighted sum of the rank scores produced by the individual classifiers and derives a consensus ranking. The weights are estimated by a logistic regression analysis. Two experiments are discussed where the method was applied to recognize degraded machine-printed characters and handwritten digits. The results show that the combination outperforms each of the individual classifiers.
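
The combination scheme as described, a weighted sum of per-classifier rank scores with weights from a logistic regression, can be sketched with a small gradient-descent fit: the features are the rank scores each classifier gives a candidate class (e.g. number of classes minus rank) and the outcome is whether that class was correct. The training setup and hyperparameters below are assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Fit logistic-regression weights (bias stored last) by
    stochastic gradient descent on the log loss."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            err = sigmoid(z) - yi
            for j, xj in enumerate(xi):
                w[j] -= lr * err * xj
            w[-1] -= lr * err
    return w

def consensus_score(w, rank_scores):
    """Weighted sum of per-classifier rank scores for one candidate."""
    return sum(wj * s for wj, s in zip(w, rank_scores)) + w[-1]
```

With training data where only the first classifier's rank score predicts correctness, the fitted weights favor that classifier, so its opinion dominates the consensus ranking.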


Journal ArticleDOI
TL;DR: It is shown that for an n-node tree, one can compute an optimal ranking in O(log n) time using n^2/log n CREW PRAM processors.
Abstract: This paper places the optimal tree ranking problem in NC. A ranking is a labeling of the nodes with natural numbers such that if nodes u and v have the same label then there exists another node with a greater label on the path between them. An optimal ranking is a ranking in which the largest label assigned to any node is as small as possible among all rankings. An O(n) sequential algorithm is known. Researchers have speculated that this problem is P-complete. We show that for an n-node tree, one can compute an optimal ranking in O(log n) time using n^2/log n CREW PRAM processors. In fact, our ranking is super critical in that the label assigned to each node is absolutely as small as possible. We achieve these results by showing that a more general problem, which we call the super critical numbering problem, is in NC. No algorithm for the super critical tree ranking problem, approximate or otherwise, was previously known; the only known algorithm for optimal tree ranking was an approximate one.
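
The ranking property itself is easy to state in code: any two nodes sharing a label must be separated by a strictly larger label somewhere on the path between them. The checker below is a naive sequential sketch of that definition, nothing like the paper's parallel algorithm, and assumes the tree is given as a child-to-parent map.

```python
from itertools import combinations

def path(parent, u, v):
    """Nodes on the u-v path in a rooted tree (child -> parent map)."""
    up_u = [u]
    while up_u[-1] in parent:
        up_u.append(parent[up_u[-1]])
    up_v = [v]
    while up_v[-1] in parent:
        up_v.append(parent[up_v[-1]])
    on_u = set(up_u)
    i = next(k for k, x in enumerate(up_v) if x in on_u)  # LCA position
    lca = up_v[i]
    return up_u[:up_u.index(lca) + 1] + up_v[:i][::-1]

def is_ranking(parent, label):
    """True iff every pair of equal-labeled nodes has a node with a
    strictly greater label on the path between them."""
    for u, v in combinations(label, 2):
        if label[u] == label[v]:
            between = path(parent, u, v)[1:-1]
            if not any(label[x] > label[u] for x in between):
                return False
    return True
```

On the 3-node star rooted at a, labeling the leaves 1 and the root 2 is a valid (and optimal) ranking, while labeling all three nodes 1 is not.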

Journal ArticleDOI
TL;DR: A system for hypothesis elicitation and ranking formed by a net of computational elements obtained by modifying the classical neural model of Caianiello, which contains an elaboration layer and a query layer that enables the system to gather additional information.
Abstract: The article describes a system for hypothesis elicitation and ranking formed by a net of computational elements obtained by modifying the classical neural model of Caianiello. This neural structure was chosen on the basis both of knowledge representation and of parallel processing considerations. The two fundamental components of the system are an elaboration layer and, in case the available evidence is insufficient to trigger explanatory hypotheses, a query layer that enables the system to gather additional information. Algorithms that help in setting the crucial variable parameters of the net are described in the Appendix.

Proceedings ArticleDOI
08 Mar 1992
TL;DR: The authors formalize within fuzzy set theory a new model that allows the interpretation of a user query in which a linguistic descriptor is attached to each term.
Abstract: In information retrieval systems, the vagueness in user requests for information is mainly managed by the use of numeric weights. The authors present the formal definition of a retrieval model in which linguistic descriptors are used in the query language, both to express the importance that a term must have in the desired documents and to label the retrieved documents in relevance classes. By attaching a numeric weight to a term, a user provides a quantitative description of the importance of that term in the documents sought. While the introduction of weights reduces the vagueness in query formulation, the use of numeric weights requires a clear knowledge of their semantics and the translation of a fuzzy concept into a precise numeric value. Motivated by these problems, and starting from an existing weighted Boolean retrieval model, the authors formalize within fuzzy set theory a new model that allows the interpretation of a user query in which a linguistic descriptor is attached to each term.

Book
01 Jan 1992
TL;DR: The effectiveness of the system using word, stem, and root retrieval methods is presented using the recall and precision measures along with two nonparametric statistical tests, showing the superiority of the root retrieval method over the word retrieval method, and over the stem retrieval method at high recall levels.
Abstract: Experimentation with retrieval systems in Arabic language environments has been very limited. Arabization of available information retrieval systems has dealt mostly with internal representation of the Arabic data and translation of menus and system messages to Arabic. The problems of working with the Arabic language have not been confronted directly. Stemming algorithms have been widely used to enhance the retrieval behavior of information retrieval systems. In English based systems, stemming algorithms deal with the removal of suffixes to reduce the storage needed for the keyword list and to increase the recall factor by conflating word variants. In the Arabic language, both prefixes and suffixes are added to roots and stems to form related words. The number of affixes used in the Arabic language exceeds that used in English. Surface affix removal processes produce word stems while deep affix removal processes produce word roots. This research studies the effect of using words, stems, and roots of Arabic words as index terms on the effectiveness of the retrieval of Arabic bibliographic records. To run the experiment for these three different retrieval methods we used 355 Arabic bibliographic records covering computer and information science, and 29 queries. The test was conducted on an IBM/AT compatible microcomputer using the Microcomputer-based Arabic Information Retrieval System, Micro-AIRS. The effectiveness of the system using word, stem, and root retrieval methods is presented using the recall and precision measures along with two nonparametric statistical tests. The system evaluation results show the superiority of the root retrieval method over the word retrieval method, and over the stem retrieval method at high recall levels. They also show the superiority of the stem retrieval method over the word retrieval method at all recall levels.
The experiments with ranking methods using the Dice, cosine, and Jaccard similarity coefficients show that all three similarity coefficients produce exactly the same results when applied to binary weighted word counts.
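
The closing observation is plausible on inspection: on binary vectors each coefficient is a different normalization of the overlap |A ∩ B|, so for a fixed query and documents of equal length all three are monotone in the overlap and induce the same ranking. A sketch with term sets (assumed nonempty):

```python
import math

def dice(a, b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def cosine(a, b):
    """Cosine coefficient on binary vectors: |A ∩ B| / sqrt(|A||B|)."""
    return len(a & b) / math.sqrt(len(a) * len(b))

def jaccard(a, b):
    """Jaccard coefficient: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)
```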

01 Jan 1992
TL;DR: It is shown that the Relevance Density Method performs better for multimodal as well as single mode queries than an averaging method, and retrieval is substantially faster for the new method.
Abstract: A long-standing problem in information retrieval is how to treat queries that are best answered by two or more distinct sets of documents. Existing methods average across the words or terms in a user's query and consequently perform poorly with multimodal queries, such as: "Show me documents about French art and American jazz." We propose a new method, the Relevance Density Method, for selecting documents relevant to a user's query. The method can be used whenever the documents and the terms are represented by vectors in a multi-dimensional space, such that the vectors corresponding to documents and terms dealing with closely related topics are close to each other. We show that the Relevance Density Method performs better for multimodal as well as single-mode queries than an averaging method. In addition, we show that retrieval is substantially faster for the new method.
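
The failure mode of averaging is easy to reproduce in two dimensions: the centroid of two distant term vectors favors documents sitting between the modes rather than documents matching either topic. The best-single-term scorer below is only an illustrative alternative to averaging, not the paper's density estimate.

```python
import math

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny) if nx and ny else 0.0

def score_average(doc, term_vectors):
    """Match against the centroid of the query-term vectors: the
    averaging behaviour the paper argues against."""
    centroid = [sum(c) / len(term_vectors) for c in zip(*term_vectors)]
    return cosine(doc, centroid)

def score_best_term(doc, term_vectors):
    """Match against the closest single query term (illustrative only)."""
    return max(cosine(doc, t) for t in term_vectors)
```

With term vectors (1, 0) and (0, 1) standing in for two unrelated topics, averaging ranks the "in-between" document (1, 1) above a document squarely on one topic, while the best-term score does the opposite.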

Proceedings Article
01 Jan 1992
TL;DR: An expert system, Questions and Answers (Q&A), is developed that assists in formulating an initial strategy given concepts entered by the user and that determines if the strategy is well-formed, refining it when necessary.
Abstract: Inexperienced users of online medical databases often do not know how to formulate their queries for effective searches. Previous attempts to help them have provided some standard procedures for query formulation, but depend on the user to enter the concepts of a query properly so that the correct search strategy will be formed. Intelligent assistance specific to a particular query often is not given. Several systems do refine the initial strategy based on relevance feedback, but usually do not make an effort to determine how well-formed a query is before actually performing the search. As part of the Interactive Query Workstation (IQW), we have developed an expert system, Questions and Answers (Q&A), that assists in formulating an initial strategy given concepts entered by the user and that determines if the strategy is well-formed, refining it when necessary.

Book ChapterDOI
01 Jan 1992
TL;DR: A preprocessor is introduced that uses a relational system and semantic modeling to impose structure on text to show that document retrieval applications can be easily developed within the relational model.
Abstract: We introduce a preprocessor that uses a relational system and semantic modeling to impose structure on text. Our intent is to show that document retrieval applications can be easily developed within the relational model. We illustrate several operations that are typically found in information retrieval systems, and show how each can be performed in the relational model. These include keywording, proximity searches, and relevance ranking. We also include a discussion of an extension to relevance based on semantic modeling.


ReportDOI
30 Apr 1992
TL;DR: A similarity retrieval algorithm for use in retrieval by spatial similarity is proposed, and the experimental results obtained agree quite well with the intuitive ranking of the images in the collection.
Abstract: Image retrieval has been considered an important task in many application areas such as Geographic Information Systems and Computer-Aided Design. Facilitating retrieval of images based on their similarity to a specified image is a desirable feature of a retrieval scheme for an image database. Providing a suitable means for expressing spatial relationships in a query often improves the ease of specifying it. In this report, we propose a similarity retrieval algorithm for use in retrieval by spatial similarity. We also describe the generation of a test bed of images and the user interface development. The proposed method has been applied to a test bed of images comprising floor and furniture layout designs. Each layout design is generated as an image consisting of several objects such as sofa, chair, and table. The dissimilarity between images is based on the notion of distance. The Euclidean distance is computed between the centroids of the matching pairs of constituent objects in both images. The sum of all such distances plus a suitable penalty for non-matching objects is a quantitative measure of spatial similarity. The experimental results obtained using the spatial similarity algorithm agree quite well with our intuitive ranking of the images in the collection. Keywords: Image Databases, Spatial Databases.
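
The dissimilarity measure as described, Euclidean distances between centroids of matching object pairs plus a penalty per unmatched object, translates directly into code. The penalty value and the dict representation (object name mapped to centroid) are assumptions; math.dist requires Python 3.8+.

```python
import math

def dissimilarity(img_a, img_b, penalty=100.0):
    """Sum of Euclidean distances between centroids of objects present
    in both images, plus a fixed penalty per unmatched object.
    Images are dicts: object name -> (x, y) centroid."""
    shared = img_a.keys() & img_b.keys()
    d = sum(math.dist(img_a[o], img_b[o]) for o in shared)
    d += penalty * (len(img_a) + len(img_b) - 2 * len(shared))
    return d

def rank_by_similarity(query, images):
    """Image indices ordered most-similar-first (smallest distance)."""
    return sorted(range(len(images)),
                  key=lambda i: dissimilarity(query, images[i]))
```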

Proceedings Article
01 Jan 1992
TL;DR: Initial results have shown the multilevel ranking scheme to be highly competitive in precision and recall relative to other ranking strategies.
Abstract: A description of a general-purpose multilevel ranking information retrieval prototype is presented. Experiments with the TREC 92 collection of text and queries have been conducted without manual processing. Initial results have shown the multilevel ranking scheme to be highly competitive in precision and recall relative to other ranking strategies.

Journal ArticleDOI
TL;DR: The programs reviewed in this issue are the personal information managers 3by5/RediReference, askSam, Dayflo Tracker, and Ize, and Personal Librarian; the hypertext programs are Folio Views and the HyperKRS/HyperCard combination.
Abstract: In this article, the fifth in a series on microcomputer software for information storage and retrieval, test results of seven programs are presented and various properties and qualities of these programs are discussed. In this instalment of the series we discuss programs for information storage and retrieval which are primarily characterised by the properties of personal information managers (PIMs), hypertext programs, or best match and ranking retrieval systems. The programs reviewed in this issue are the personal information managers 3by5/RediReference, askSam, Dayflo Tracker, and Ize; Personal Librarian uses best match and ranking; the hypertext programs are Folio Views and the HyperKRS/HyperCard combination (askSam, Ize and Personal Librarian boast hypertext features as well). HyperKRS/HyperCard is only available for the Apple Macintosh. All other programs run under MS‐DOS; versions of Personal Librarian also run under Windows and some other systems. For each of the seven programs about 100 facts and test results are tabulated. The programs are also discussed individually.

Book ChapterDOI
24 Aug 1992
TL;DR: This work surveys the efficiency of the ‘fast’ parallel algorithms for the recognition and ranking of context-free languages on the Parallel Random Access Machine without write conflicts and presents several new results.
Abstract: We survey the efficiency of the ‘fast’ parallel algorithms for the recognition and ranking of context-free languages on the Parallel Random Access Machine without write conflicts. The efficiency of an algorithm is the total number of operations (the product of time and number of processors). Such efficiency depends heavily on the class of context-free grammars and on the meaning of ‘fast’: log(n), log²n or sublinear time. The slower the algorithm, the better its total efficiency. Several new results are presented in the paper. A new, simpler version of the log(n)-time parallel recognition of unambiguous cfl's is presented. The parallel complexity of the ranking and max-word problems for several classes of cfl's is related to the complexity of certain (⊕,⊗)-transitive closure problems, where (⊕,⊗)=(+,*) for the ranking problem of unambiguous languages and (⊕,⊗)=(max,concat) for the max-word problem. This simplifies the ranking and max-word algorithms and reduces the number of processors.

Patent
13 Mar 1992
TL;DR: The display system stores the passage time of each competitor (Ci) at each detector (Dk), classifies competitors by time at each detector, and, when a competitor reaches a detector, determines their instantaneous ranking by comparing their passage time with those of the competitors (Cj) already classified.
Abstract: The display system comprises memory means for storing the passage time of each competitor (Ci) at a detector (Dk); classification means (12) for establishing, for each detector (Dk), a list of competitors ordered by their times; and processing means (20) which, when a competitor (Ci) passes a detector (Dk): determine the instantaneous ranking of the competitor (Ci) reaching the detector (Dk) by comparing their passage time with the passage times of the set of competitors (Cj) already classified; calculate the time differences (R and A) between the passage time of the competitor (Ci) and the passage times of the competitors (Cj) in the immediately adjacent ranks; convert each time gap (R and A) to a distance by means of a conversion function (f); and display, in a predetermined display area (28), a symbol (30) representative of the competitor (Ci) and symbols (32, 34) representative of each of the competitors (Cj) in the immediately adjacent ranks, spacing said symbols (32, 34) according to the calculated distances.