Journal ArticleDOI

Advances in network information discovery and retrieval

01 Mar 1995-International Journal of Software Engineering and Knowledge Engineering (World Scientific Publishing Company)-Vol. 05, Iss: 01, pp 143-160
TL;DR: This paper surveys the beginnings of network information discovery and retrieval, how the Web has created a surprising level of integration of these systems, and where the current state of the art lies in creating globally accessible information spaces and supporting access to those information spaces.
Abstract: Access to information using the Internet has undergone dramatic change and expansion recently. The unrivaled success of the World Wide Web has altered the Internet from something approachable only by the initiated to something of a media craze — the information superhighway made manifest in the personal "home page". This paper surveys the beginnings of network information discovery and retrieval, how the Web has created a surprising level of integration of these systems, and where the current state of the art lies in creating globally accessible information spaces and supporting access to those information spaces.
Citations
Journal ArticleDOI
TL;DR: In this article, an ethic for agents on the web is proposed, based on the assumption that agents are a reality on the Web, and that there are no reasonable means of preventing their proliferation.
Abstract: As the Web continues to evolve, the sophistication of the programs that are employed in interacting with it will also increase in sophistication. Web agents, programs acting autonomously on some task, are already present in the form of spiders. Agents offer substantial benefits and hazards, and because of this, their development must involve not only attention to technical details, but also the ethical concerns relating to their resulting impact. These ethical concerns will differ for agents employed in the creation of a service and agents acting on behalf of a specific individual. An ethic is proposed that addresses both of these perspectives. The proposal is predicated on the assumption that agents are a reality on the Web, and that there are no reasonable means of preventing their proliferation.

91 citations

01 Mar 1996
TL;DR: This work uses ShaDe, an object-based coordination language, to build "coordination" services enacting declarative cooperation laws, and demonstrates the use of the language by building two coordination applications, namely a distributed auction system and a stock exchange system.
Abstract: ShaDe is an object-based coordination language. It offers a basic abstraction called the Object Space, which is similar to a tuple space except that it contains both objects and messages. ShaDe objects are active, i.e. they are units (places) of computation. Each object encapsulates a state in the form of a multiset of tuples and methods in the form of rewriting rules. The object space is a coordination medium supporting a number of inter-object associative communication mechanisms, namely unicast, multicast, and broadcast. The most interesting feature of ShaDe is that coordination is expressed by rules. We exploit this feature to build "coordination" services enacting declarative cooperation laws. We demonstrate the use of the language by building two coordination applications, namely a distributed auction system and a stock exchange system.
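The tuple-space idea at the heart of the abstract above can be sketched in a few lines. This is a minimal, hypothetical illustration of associative (pattern-based) retrieval from a shared space, not ShaDe's actual syntax; the names `ObjectSpace`, `put`, and `take`, and the auction tuples, are all assumptions for illustration.

```python
# A toy tuple space: tuples are stored in a shared medium and retrieved
# associatively by pattern, where None acts as a wildcard field.

class ObjectSpace:
    def __init__(self):
        self.tuples = []

    def put(self, tup):
        """Add a tuple (an object state or a message) to the space."""
        self.tuples.append(tup)

    def take(self, pattern):
        """Remove and return the first tuple matching `pattern`;
        a None field in the pattern matches any value."""
        for i, tup in enumerate(self.tuples):
            if len(tup) == len(pattern) and all(
                p is None or p == f for p, f in zip(pattern, tup)
            ):
                return self.tuples.pop(i)
        return None

# Usage: bids for the distributed-auction example in the abstract.
space = ObjectSpace()
space.put(("bid", "painting", 100))
space.put(("bid", "painting", 250))
match = space.take(("bid", "painting", None))  # wildcard on the amount
```

A coordination rule in the ShaDe style would be a rewriting rule over such tuples; here the matching logic alone conveys the associative-communication idea.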

44 citations


Cites background from "Advances in network information dis..."

  • ...how effective can be such applications, even if its current scope simply consists of large scale information retrieval and exchange operations [13]....


Proceedings ArticleDOI
03 Jun 1996
TL;DR: This paper proposes a completely agent-based solution, in which the different types of agents introduced (named user agent, machine agent, and manager) cooperate to decrease user load and gain efficiency in the retrieval process.
Abstract: The core of the problem of Information Gathering is how to generate a concise, high-quality response to the information needs of a user. However, this task is becoming difficult due to the explosion in the amount of electronic information. Agent-based solutions have become a popular approach for locating information in a growing, distributed, heterogeneous environment like the Internet. We first survey the existing non-agent and partially agent-based solutions; then, to overcome their drawbacks, we propose a completely agent-based solution, in which the different types of agents introduced (named user agent, machine agent, and manager) cooperate to decrease user load and gain efficiency in the retrieval process.
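The three-agent division of labor named in the abstract can be sketched as a simple pipeline: a user agent captures the query, a manager fans it out, and machine agents search their local collections. The class names and toy corpora below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the user-agent / machine-agent / manager roles.

class MachineAgent:
    """Searches one machine's local document collection."""
    def __init__(self, name, docs):
        self.name, self.docs = name, docs

    def search(self, term):
        return [(self.name, d) for d in self.docs if term in d]

class Manager:
    """Knows the machine agents and fans a query out to all of them."""
    def __init__(self, agents):
        self.agents = agents

    def gather(self, term):
        results = []
        for agent in self.agents:
            results.extend(agent.search(term))
        return results

class UserAgent:
    """Front end: forwards the user's information need to the manager."""
    def __init__(self, manager):
        self.manager = manager

    def ask(self, term):
        return self.manager.gather(term)

# Usage with two toy hosts.
host_a = MachineAgent("host-a", ["web retrieval survey", "agent design"])
host_b = MachineAgent("host-b", ["retrieval benchmarks"])
user = UserAgent(Manager([host_a, host_b]))
hits = user.ask("retrieval")
```

The point of the split is that the user agent never needs to know which machines exist; adding a host means registering one more machine agent with the manager.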

9 citations

Proceedings ArticleDOI
19 Jun 1996
TL;DR: The authors are developing the PageSpace, an architecture which enhances WWW middleware and Java with high-level coordination capabilities that allow effective and cheap distributed applications to be built.
Abstract: The World Wide Web can be used as a universal platform to design and implement distributed, multiuser applications. However, it needs enhanced middleware support, even when Internet languages like Java are used. We are developing the PageSpace, an architecture which enhances the WWW middleware and Java with high-level coordination capabilities which allow us to build effective and cheap distributed applications.

7 citations

Journal ArticleDOI
TL;DR: Various paradigms have been explored for the development of adaptive information agents, and the performance of these agents differs in terms of computational efficiency, classification effectiveness, learning autonomy, exploration capability, and explanatory power.
Abstract: With the exponential growth of the Internet, information seekers are faced with the so-called problem of information overload. Adaptive information agents have been developed to alleviate this problem. The main issues in the development of these agents are document representation, learning, and classification. Various paradigms have been explored for the development of adaptive information agents, and the performance of these agents differs in terms of computational efficiency, classification effectiveness, learning autonomy, exploration capability, and explanatory power. To develop a basic understanding of the pros and cons of these paradigms, some representative information agents are examined. Such a review also serves to identify a general framework for the development of the next generation of adaptive information agents.

5 citations

References
Proceedings Article
01 Dec 1991
TL;DR: As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush holds up an incentive for scientists for when the fighting has ceased, urging that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge.
Abstract: As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coordinated the activities of some six thousand leading American scientists in the application of science to warfare. In this significant article he holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For years inventions have extended man's physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson's famous address of 1837 on "The American Scholar," this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge.

3,464 citations


"Advances in network information dis..." refers background in this paper

  • ...[37] concerning write-only databases, these systems provide great promise in achieving information access of truly great breadth – Vannevar Bush’s dream made real [11]....


Book
01 Jul 1945
TL;DR: As the Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coordinated the activities of some six thousand leading American scientists in the application of science to warfare as mentioned in this paper.
Abstract: As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coordinated the activities of some six thousand leading American scientists in the application of science to warfare. In this significant article he holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For years inventions have extended man's physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson's famous address of 1837 on "The American Scholar," this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge.

3,142 citations

Journal ArticleDOI
Tim Berners-Lee1, Robert Cailliau1, Ari Luotonen1, Henrik Frystyk Nielsen1, Arthur Secret1 
TL;DR: The World Wide Web (W3) was developed to be a pool of human knowledge, allowing collaborators at remote sites to share their ideas and all aspects of a common project.
Abstract: Publisher Summary This chapter discusses the history and growth of World Wide Web (W3). The World-Wide Web was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project. Physicists and engineers at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, collaborate with many other institutes to build the software and hardware for high-energy physics research. The idea of the Web was prompted by positive experience of a small “home-brew” personal hypertext system used for keeping track of personal information on a distributed project. The Web was designed so that if it was used independently for two projects, and later relationships were found between the projects, then no major or centralized changes would have to be made, but the information could smoothly reshape to represent the new state of knowledge. This property of scaling has allowed the Web to expand rapidly from its origins at CERN across the Internet irrespective of boundaries of nations or disciplines.

1,065 citations

Journal ArticleDOI
TL;DR: The aims, data model, and protocols needed to implement the “web” and compares them with various contemporary systems are described.
Abstract: The World‐Wide Web (W3) initiative is a practical project designed to bring a global information universe into existence using available technology. This article describes the aims, data model, and protocols needed to implement the “web” and compares them with various contemporary systems.

595 citations

Journal ArticleDOI
TL;DR: Etzioni, Lesh, and Segal developed the Internet Softbot (software robot), which uses a UNIX shell and the World Wide Web to interact with a wide range of internet resources.
Abstract: The Internet Softbot (software robot) is a fully implemented AI agent developed at the University of Washington (Etzioni, Lesh, & Segal 1993). The softbot uses a UNIX shell and the World-Wide Web to interact with a wide range of internet resources. The softbot's effectors include ftp, telnet, mail, and numerous file manipulation commands. Its sensors include internet facilities such as archie, gopher, netfind, and many more. The softbot is designed to incorporate new facilities into its repertoire as they become available. The softbot's "added value" is three-fold. First, it provides an integrated and expressive interface to the internet. Second, the softbot dynamically chooses which facilities to invoke, and in what sequence. For example, the softbot might use netfind to determine David McAllester's e-mail address. Since it knows that netfind requires a person's institution as input, the softbot would first search bibliographic databases for a technical report by McAllester which would reveal his institution, and then feed that information to netfind. Third, the softbot fluidly backtracks from one facility to another based on information collected at run time. As a result, the softbot's behavior changes in response to transient system conditions (e.g., the UUCP gateway is down). In this article, we focus on the ideas underlying the softbot-based interface.
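The netfind example in the abstract is a small instance of precondition-driven chaining: one facility's required input becomes a subgoal for another facility. The sketch below is a hypothetical illustration; the lookup tables stand in for real internet facilities, and the names and toy data (including the e-mail address) are assumptions, not the softbot's actual behavior or output.

```python
# Toy stand-ins for two "facilities": a bibliographic index that maps a
# person to an institution, and a netfind-like directory keyed on both.
BIBLIOGRAPHY = {"McAllester": "MIT"}                # toy data, assumed
NETFIND = {("McAllester", "MIT"): "dam@mit.edu"}    # toy data, assumed

def find_email(person):
    """Chain facilities: satisfy netfind's institution precondition first."""
    institution = BIBLIOGRAPHY.get(person)   # subgoal: learn the institution
    if institution is None:
        return None                          # backtrack point: another facility could be tried here
    return NETFIND.get((person, institution))  # precondition met; run netfind

email = find_email("McAllester")
```

The real softbot plans these chains dynamically and backtracks across facilities at run time; the fixed two-step chain here only conveys the precondition-to-subgoal idea.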

553 citations