
Showing papers on "Recommender system published in 1997"


Journal ArticleDOI
TL;DR: This special section describes five recommender systems, in which people provide recommendations as inputs that the system aggregates and directs to appropriate recipients, in some cases combining evaluations with content analysis.
Abstract: It is often necessary to make choices without sufficient personal experience of the alternatives. In everyday life, we rely on recommendations from other people. Recommender systems assist and augment this natural social process. In a typical recommender system people provide recommendations as inputs, which the system then aggregates and directs to appropriate recipients. In some cases the primary transformation is in the aggregation; in others the system's value lies in its ability to make good matches between the recommenders and those seeking recommendations. The developers of the first recommender system, Tapestry [1], coined the phrase "collaborative filtering," and several others have adopted it. We prefer the more general term "recommender system" for two reasons. First, recommenders may not explicitly collaborate with recipients, who may be unknown to each other. Second, recommendations may suggest particularly interesting items, in addition to indicating those that should be filtered out. This special section includes descriptions of five recommender systems. A sixth article analyzes incentives for provision of recommendations. Figure 1 places the systems in a technical design space defined by five dimensions. First, the contents of an evaluation can be anything from a single bit (recommended or not) to unstructured textual annotations. Second, recommendations may be entered explicitly, but several systems gather implicit evaluations: GroupLens monitors users' reading times; PHOAKS mines Usenet articles for mentions of URLs; and Siteseer mines personal bookmark lists. Third, recommendations may be anonymous, tagged with the source's identity, or tagged with a pseudonym. The fourth dimension, and one of the richest areas for exploration, is how to aggregate evaluations. GroupLens, PHOAKS, and Siteseer employ variants on weighted voting. Fab takes that one step further to combine evaluations with content analysis. ReferralWeb combines suggested links between people to form longer referral chains.
Finally, the (perhaps aggregated) evaluations may be used in several ways: negative recommendations may be filtered out, the items may be sorted according to numeric evaluations, or evaluations may accompany items in a display. Figures 2 and 3 identify dimensions of the domain space: the kinds of items being recommended and the people among whom evaluations are shared. Consider, first, the domain of items. The sheer volume is an important variable: detailed textual reviews of restaurants or movies may be practical, but applying the same approach to thousands of daily Netnews messages would not. Ephemeral media such as netnews (most news servers throw away articles after one or two weeks) place a premium on gathering and distributing evaluations quickly, while evaluations for 19th-century books can be gathered at a more leisurely pace. The last dimension describes the cost structure of the choices people make about the items: is it very costly to miss valuable items?
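The weighted-voting aggregation attributed above to GroupLens, PHOAKS, and Siteseer can be sketched minimally; the function and its per-recommender trust weights are illustrative assumptions, not any of these systems' actual formulas:

```python
def weighted_vote(evaluations, weights):
    """Aggregate per-recommender scores into one score per item.

    evaluations: {recommender: {item: score}}
    weights:     {recommender: trust weight} (hypothetical)
    """
    totals, norm = {}, {}
    for rec, scores in evaluations.items():
        w = weights.get(rec, 0.0)
        for item, score in scores.items():
            totals[item] = totals.get(item, 0.0) + w * score
            norm[item] = norm.get(item, 0.0) + w
    # normalize by total weight so heavily-rated items are comparable
    return {item: totals[item] / norm[item]
            for item in totals if norm[item] > 0}
```

Sorting items by these aggregated scores corresponds to the "sorted according to numeric evaluations" use described above.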

3,993 citations


Journal ArticleDOI
TL;DR: Develops a conceptual framework for text filtering practice and research, reviews present practice in the field, and describes user modeling techniques drawn from information retrieval, recommender systems, machine learning, and other fields.
Abstract: This paper develops a conceptual framework for text filtering practice and research, and reviews present practice in the field. Text filtering is an information seeking process in which documents are selected from a dynamic text stream to satisfy a relatively stable and specific information need. A model of the information seeking process is introduced and specialized to define text filtering. The historical development of text filtering is then reviewed and case studies of recent work are used to highlight important design characteristics of modern text filtering systems. User modeling techniques drawn from information retrieval, recommender systems, machine learning and other fields are described. The paper concludes with observations on the present state of the art and implications for future research on text filtering.
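The core process the paper defines, selecting documents from a dynamic stream to satisfy a relatively stable information need, can be sketched minimally; the term-overlap profile and threshold below are illustrative assumptions, not the paper's model:

```python
def filter_stream(documents, profile_terms, threshold=0.2):
    """Keep documents whose word-set overlap with a stable
    profile meets a threshold (Jaccard overlap; a stand-in for
    any real relevance model)."""
    profile = set(profile_terms)
    selected = []
    for doc in documents:
        words = set(doc.lower().split())
        union = words | profile
        overlap = len(words & profile) / len(union) if union else 0.0
        if overlap >= threshold:
            selected.append(doc)
    return selected
```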

142 citations




01 Jan 1997
TL;DR: Fab’s hybrid structure allows for automatic recognition of emergent issues relevant to various groups of users, and enables two scaling problems, pertaining to the rising number of users and documents, to be addressed.
Abstract: Fab is a recommendation system designed to help users sift through the enormous amount of information available in the World Wide Web. Operational since Dec. 1994, this system combines the content-based and collaborative methods of recommendation in a way that exploits the advantages of the two approaches while avoiding their shortcomings. Fab’s hybrid structure allows for automatic recognition of emergent issues relevant to various groups of users. It also enables two scaling problems, pertaining to the rising number of users and documents, to be addressed.
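A hybrid of content-based and collaborative signals, in the spirit of (but not identical to) Fab's published approach, can be sketched as a simple blend; the function name and the mixing parameter `alpha` are hypothetical:

```python
def hybrid_score(content_sim, peer_ratings, alpha=0.5):
    """Blend a content-based similarity (how well a page matches
    the user's profile) with the mean rating given by
    collaborative peers.  A generic hybrid sketch, not Fab's
    actual algorithm."""
    collab = sum(peer_ratings) / len(peer_ratings) if peer_ratings else 0.0
    return alpha * content_sim + (1 - alpha) * collab
```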

43 citations


Book ChapterDOI
01 Jan 1997
TL;DR: The system described in this paper (MORSE — movie recommendation system) makes personalised film recommendations based on what is known about users' film preferences, provided to the system by users rating the films they have seen on a numeric scale.
Abstract: The system described in this paper (MORSE — movie recommendation system) makes personalised film recommendations based on what is known about users' film preferences. These are provided to the system by users rating the films they have seen on a numeric scale. MORSE is based on the principle of social filtering. The accuracy of its recommendations improves as more people use the system and as more films are rated by individual users. MORSE is currently running on BT Laboratories' World Wide Web (WWW) server. A full evaluation, described in this paper, was carried out after over 500 users had rated on average 70 films each. Also described are the motivation behind the development of MORSE, its algorithm, and how it compares and contrasts with related systems.
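Social filtering of the kind MORSE performs can be sketched as a similarity-weighted average over other users' ratings; the similarity measure below is a generic stand-in, not MORSE's published algorithm:

```python
def predict_rating(user_ratings, others, film):
    """Predict the active user's rating for `film` from other
    users who rated it, weighted by taste similarity.

    user_ratings: {film: rating} for the active user
    others:       list of {film: rating} dicts for other users
    """
    def similarity(a, b):
        common = set(a) & set(b)
        if not common:
            return 0.0
        # inverse mean absolute difference on co-rated films
        diff = sum(abs(a[f] - b[f]) for f in common) / len(common)
        return 1.0 / (1.0 + diff)

    num = den = 0.0
    for other in others:
        if film in other:
            s = similarity(user_ratings, other)
            num += s * other[film]
            den += s
    return num / den if den else None
```

As the abstract notes, accuracy improves with more users and more ratings: both the neighbor pool and the co-rated overlap used by the similarity grow.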

37 citations


Journal ArticleDOI
31 Mar 1997
TL;DR: This study suggests an advanced model for information filtering which is based on a two-phase filtering process which relates the user to one or more stereotypes and operates the appropriate stereotypic rules.
Abstract: Computer users often experience the “lost in information space” syndrome. Information filtering suggests a solution based on restricting the amount of information made available to users. This study suggests an advanced model for information filtering which is based on a two-phase filtering process. The user profiling in the model is constructed on the basis of the user's areas of interest and on sociological parameters about him that are known to the system. The system maintains a database of known stereotypes that includes rules on their information retrieval needs and habits. During the filtering process, the system relates the user to one or more stereotypes and operates the appropriate stereotypic rules.
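The two-phase process described (relate the user to stereotypes, then operate those stereotypes' rules) can be sketched generically; the predicate-based rule representation here is a hypothetical simplification of the paper's stereotype database:

```python
def two_phase_filter(user, documents, stereotypes):
    """Phase 1: match the user against stereotypes via known
    attributes.  Phase 2: apply the matching stereotypes' rules
    to keep or drop each document.

    stereotypes: list of (match_fn, rule_fn) pairs, where
    match_fn(user) -> bool and rule_fn(doc) -> bool (True = keep).
    """
    active = [rule for match, rule in stereotypes if match(user)]
    return [doc for doc in documents if any(rule(doc) for rule in active)]
```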

31 citations


01 Jan 1997
TL;DR: In this article, the authors propose a recommender system for scientific computing, where the user specifies his problem in a natural, high level form along with computational objectives such as accuracy, time, cost, etc., and the domain specific PSE selects the resources (algorithm, parameters, platform) necessary to compute the problem solution.
Abstract: It has been predicted that, by the beginning of the next century, the available computational power will enable anyone with access to a computer to find an answer to any scientific problem that has a known or effectively computable solution. The concept of problem solving environments (PSEs) promises to contribute toward the realization of this prediction for multidisciplinary physical modeling. It provides students, scientists and engineers with systems that allow them to spend more time doing science and engineering rather than computing. The first goal of this thesis is to support programming in-the-large, where the user specifies his problem in a natural, high level form along with computational objectives (performance criteria such as accuracy, time, and cost), and the domain specific PSE selects the resources (algorithm, parameters, platform) necessary to compute the problem solution. The methodology proposed to realize this 'recommender' functionality consists of a knowledge discovery approach and case-based reasoning mechanisms. A kernel to aid in the rapid prototyping of recommender systems is designed, and the effectiveness of this methodology in two domains of scientific computing--elliptic partial differential equations and numerical quadrature--is demonstrated. The second goal of this thesis is to extend and implement the above methodology in the context of networked computing, where the libraries and machine resources are assumed geographically distributed over the world and connected through a global infrastructure such as the Internet. In this scenario, we demonstrate a collaborative methodology--a multi-agent approach that tracks the relative efficacies of recommender systems, using a notion of reasonableness.
Finally, the developed recommender systems are interfaced with a well known mathematical software repository system--the Guide to Available Mathematical Software (GAMS)--to facilitate intelligent search and retrieval of scientific software.
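The case-based 'recommender' functionality, selecting an algorithm from past problem-solving experience, can be sketched generically; the feature names, case representation, and distance measure below are hypothetical, not the thesis's actual kernel:

```python
def recommend_algorithm(problem_features, case_base):
    """Case-based reasoning sketch: retrieve the most similar
    past problem and reuse the algorithm that worked for it.

    problem_features: {feature: value} for the new problem
    case_base: list of ({feature: value}, algorithm_name) pairs
    """
    def distance(a, b):
        keys = set(a) | set(b)
        return sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)

    best_case = min(case_base,
                    key=lambda case: distance(problem_features, case[0]))
    return best_case[1]
```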

28 citations


Journal Article
TL;DR: Presents two systems: ProFile, an information filtering system based on an adaptation of the generalized probabilistic model of information retrieval, and ABIS, an intelligent agent for supporting users in filtering data from distributed and heterogeneous archives and repositories.
Abstract: With the development and diffusion of the Internet worldwide connection, a large amount of information is available to the users. Methods of information filtering and fetching are then required. This paper presents two approaches. The first concerns the information filtering system ProFile, based on an adaptation of the generalized probabilistic model of information retrieval. ProFile filters the netnews and uses a scale of 11 predefined values of relevance. ProFile allows the user to update the profile on-line and to check the discrepancy between the user's assessment and the system's prediction of relevance. The second concerns ABIS, an intelligent agent for supporting users in filtering data from distributed and heterogeneous archives and repositories. ABIS minimizes the user's effort in selecting from the huge amount of available documents. The filtering engine memorizes both user preferences and past situations. ABIS compares documents with the past situations and finds the similarity scores on the basis of a memory-based reasoning approach.
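ABIS's memory-based reasoning step (scoring a new document by its similarity to stored past situations) can be sketched as follows, under the assumption, not taken from the paper, that each past situation is stored as a term set with a relevance label:

```python
def memory_based_relevance(doc_terms, memory):
    """Score a new document by its most similar past situation.

    doc_terms: iterable of terms in the new document
    memory:    list of (term_set, relevance) pairs (hypothetical
               representation of ABIS's stored situations)

    Returns the best match's relevance weighted by its Jaccard
    similarity to the document.
    """
    doc = set(doc_terms)
    best_sim, best_rel = 0.0, 0.0
    for terms, relevance in memory:
        past = set(terms)
        union = doc | past
        sim = len(doc & past) / len(union) if union else 0.0
        if sim > best_sim:
            best_sim, best_rel = sim, relevance
    return best_sim * best_rel
```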

26 citations


Proceedings ArticleDOI
09 Sep 1997
TL;DR: Preliminary experiments show a 44% increase in the number of successful classifications in D-SIFTER as compared to the isolated environment called SIFTER (smart information filtering technology for electronic resources).
Abstract: The enormous growth of the World Wide Web has led to a vast amount of unsolicited information being transmitted to many users. Information filtering is a useful technique to combat such an undesired information overload. In this paper, we present a distributed environment for information classification, called D-SIFTER. D-SIFTER (distributed smart information filtering technology for electronic resources) consists of many networked filters communicating to achieve a collaborative classification of incoming documents. Our preliminary experiments show a 44% increase in the number of successful classifications in D-SIFTER as compared to the isolated environment called SIFTER (smart information filtering technology for electronic resources).
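Collaborative classification among networked filters can be sketched generically; the confidence-threshold fallback and weighted peer vote below are an illustrative protocol, not D-SIFTER's actual design:

```python
def collaborative_classify(doc, local_filter, peers, threshold=0.5):
    """If the local filter is confident, use its label; otherwise
    poll peer filters and take a confidence-weighted vote.

    Each filter maps doc -> (label, confidence in [0, 1]).
    """
    label, conf = local_filter(doc)
    if conf >= threshold:
        return label
    votes = {}
    for peer in peers:
        peer_label, peer_conf = peer(doc)
        votes[peer_label] = votes.get(peer_label, 0.0) + peer_conf
    return max(votes, key=votes.get) if votes else label
```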

6 citations


Book ChapterDOI
Elmar Haneke1
26 Feb 1997
TL;DR: The rapid growth of public information systems such as UseNet and the World-Wide-Web increases the need for information-filtering tools; approaches that automatically generate interest profiles suffer from producing profiles too complex for users to review.
Abstract: The rapid growth of public information systems, e.g. UseNet or the World-Wide-Web, increases the need for tools that filter the information. A significant problem in many information filtering agents is that the user is forced to define his interests explicitly, a task that is unacceptable for most users. Approaches which automatically generate interest profiles suffer from the disadvantage that the profiles are very complex; a review is therefore not practicable for the user.