
Showing papers on "Web service published in 1995"


Proceedings Article
20 Aug 1995
TL;DR: Letizia is a user interface agent that assists a user browsing the World Wide Web by automating a browsing strategy consisting of a best-first search augmented by heuristics that infer user interest from browsing behavior.
Abstract: Letizia is a user interface agent that assists a user browsing the World Wide Web. As the user operates a conventional Web browser such as Netscape, the agent tracks user behavior and attempts to anticipate items of interest by doing concurrent, autonomous exploration of links from the user's current position. The agent automates a browsing strategy consisting of a best-first search augmented by heuristics inferring user interest from browsing behavior.
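
A minimal sketch of the strategy this describes, assuming hypothetical `get_links` and `interest_score` callbacks (neither is Letizia's actual API): links reachable from the user's current page are explored best-first, ranked by an interest score inferred from browsing behavior.

```python
import heapq

def best_first_explore(start_page, get_links, interest_score, budget=20):
    """Concurrent-exploration sketch: visit linked pages best-first,
    ranked by inferred user interest. get_links(page) and
    interest_score(page) are assumed callbacks, not Letizia's API."""
    frontier = [(-interest_score(p), p) for p in get_links(start_page)]
    heapq.heapify(frontier)
    seen, recommendations = {start_page}, []
    while frontier and len(recommendations) < budget:
        neg_score, page = heapq.heappop(frontier)
        if page in seen:
            continue
        seen.add(page)
        recommendations.append((page, -neg_score))
        for link in get_links(page):        # expand the most promising page
            if link not in seen:
                heapq.heappush(frontier, (-interest_score(link), link))
    return recommendations                  # anticipated items of interest
```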

1,503 citations


Patent
19 May 1995
TL;DR: In this paper, the authors present a unified, remote, graphical, transparent interface for Web users, working at a Web client, to a variety of managed networks, including multimedia and hypermedia capability.
Abstract: The present invention provides network management of a network or multiple networks, using a Web client, and including multimedia and hypermedia capability. The present invention provides a unified, remote, graphical, transparent interface for Web users, working at a Web client, to a variety of managed networks. The present invention receives requests from a Web client forwarded by a Web server and interacts with the managed networks and their associated objects to obtain information. The present invention then converts this information in real time to hypermedia document format in HTTP and HTML, and transmits this information to the Web client via the Web server, appearing to the client as information in a Web file. This permits a Web user to manage and access multiple networks via a single Web client, thus unifying the management interface for dissimilar managed networks and devices.

557 citations


Proceedings Article
11 Sep 1995
TL;DR: The authors have designed and are now implementing a high-level SQL-like language to support effective and flexible query processing, addressing the structure and content of WWW nodes and their varied sorts of data.
Abstract: The World-Wide Web (WWW) is an ever growing, distributed, non-administered, global information resource. It resides on the worldwide computer network and allows access to heterogeneous information: text, image, video, sound and graphic data. Currently, this wealth of information is difficult to mine. One can either manually, slowly and tediously navigate through the WWW or utilize indexes and libraries which are built by automatic search engines (called knowbots or robots). We have designed and are now implementing a high level SQL-like language to support effective and flexible query processing, which addresses the structure and content of WWW nodes and their varied sorts of data. Query results are intuitively presented and continuously maintained when desired. The language itself integrates new utilities and existing Unix tools (e.g. grep, awk). The implementation strategy is to employ existing WWW browsers and Unix tools to the extent possible.
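
Operationally, a query in such a language selects nodes by structure and content during a bounded traversal. The sketch below is a hypothetical Python rendering of that semantics, not the paper's actual syntax; `fetch` and the `predicate` are assumptions.

```python
from collections import deque

def www_select(start_urls, fetch, predicate, max_depth=2):
    """SELECT-like evaluation over WWW nodes: return pages reachable
    within max_depth links whose content satisfies the WHERE-style
    predicate. fetch(url) -> (text, links) is an assumed retriever."""
    results, seen = [], set(start_urls)
    queue = deque((url, 0) for url in start_urls)
    while queue:
        url, depth = queue.popleft()
        text, links = fetch(url)
        if predicate(url, text):            # content/structure condition
            results.append(url)
        if depth < max_depth:
            for link in links:
                if link not in seen:
                    seen.add(link)
                    queue.append((link, depth + 1))
    return results
```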

307 citations


Patent
07 Jun 1995
TL;DR: Distributed Integration Solution (DIS) as discussed by the authors is a set of capsule objects that perform programmable functions upon a received command from a web server control program agent: retrieving, through a database gateway coupled to a plurality of database resources and upon a single request made from a hypertext document, requested information from multiple geographically dispersed databases of different types, then performing calculations, formatting, and other services prior to reporting to the web browser or to other locations in a selected format (display, fax, printer, customer installations, or TV video subscribers), with account tracking.
Abstract: A World Wide Web browser makes requests to web servers on a network which receive and fulfill requests as an agent of the browser client, organizing distributed sub-agents as distributed integration solution (DIS) servers on an intranet network supporting the web server, which also has access agent servers accessible over the Internet. DIS servers execute selected capsule objects which perform programmable functions upon a received command from a web server control program agent: retrieving, from a database gateway coupled to a plurality of database resources upon a single request made from a hypertext document, requested information from multiple databases of different types, geographically dispersed; performing calculations, formatting, and other services prior to reporting to the web browser or to other locations, in a selected format, as in a display, fax, or printer, and to customer installations or to TV video subscribers, with account tracking.

293 citations


Journal ArticleDOI
TL;DR: The article examines extant Web access patterns with the aim of developing more efficient file-caching and prefetching strategies.
Abstract: To support continued growth, WWW servers must manage a multigigabyte (in some instances a multiterabyte) database of multimedia information while concurrently serving multiple request streams. This places demands on the servers' underlying operating systems and file systems that lie far outside today's normal operating regime. Simply put, WWW servers must become more adaptive and intelligent. The first step on this path is understanding extant access patterns and responses. The article examines extant Web access patterns with the aim of developing more efficient file-caching and prefetching strategies.
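
As an illustration of where such access-pattern analysis leads, here is a small sketch (not from the article) of an LRU file cache with pattern-driven prefetching; the `follows` table, mapping a document to its likely successor, is an assumption.

```python
from collections import OrderedDict

class LRUFileCache:
    """Sketch of file caching plus prefetching informed by access
    patterns. load_file and follows are assumed inputs."""

    def __init__(self, capacity, load_file, follows):
        self.capacity = capacity     # max documents held in memory
        self.load_file = load_file   # hypothetical disk reader
        self.follows = follows       # follows[doc] -> likely next request
        self.cache = OrderedDict()

    def get(self, doc):
        if doc in self.cache:
            self.cache.move_to_end(doc)      # mark as recently used
            data = self.cache[doc]
        else:
            data = self.load_file(doc)
            self._insert(doc, data)
        nxt = self.follows.get(doc)
        if nxt and nxt not in self.cache:    # prefetch the likely successor
            self._insert(nxt, self.load_file(nxt))
        return data

    def _insert(self, doc, data):
        self.cache[doc] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
```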

244 citations


Proceedings ArticleDOI
30 Jun 1995
TL;DR: This paper reports on techniques for finding good service providers without a priori knowledge of server location or network topology, and considers the use of two principal metrics for measuring distance in the Internet: hops, and round-trip latency.
Abstract: As distributed information services like the World Wide Web become increasingly popular on the Internet, problems of scale are clearly evident. A promising technique that addresses many of these problems is service (or document) replication. However, when a service is replicated, clients then need the additional ability to find a "good" provider of that service. In this paper we report on techniques for finding good service providers without a priori knowledge of server location or network topology. We consider the use of two principal metrics for measuring distance in the Internet: hops, and round-trip latency. We show that these two metrics yield very different results in practice. Surprisingly, we show data indicating that the number of hops between two hosts in the Internet is not strongly correlated to round-trip latency. Thus, the distance in hops between two hosts is not necessarily a good predictor of the expected latency of a document transfer. Instead of using known or measured distances in hops, we show that the extra cost at runtime incurred by dynamic latency measurement is well justified based on the resulting improved performance. In addition we show that selection based on dynamic latency measurement performs much better in practice than any static selection scheme. Finally, the difference between the distribution of hops and latencies is fundamental enough to suggest differences in algorithms for server replication. We show that conclusions drawn about service replication based on the distribution of hops need to be revised when the distribution of latencies is considered instead.
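
The paper's preferred policy, dynamic latency measurement, can be sketched in a few lines; the TCP-connect probe below is an assumed stand-in for whatever measurement tooling the authors used.

```python
import socket
import time

def rtt_probe(host, port=80, timeout=2.0):
    """Estimate round-trip latency by timing a TCP connect (seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")          # unreachable replicas sort last

def pick_replica(hosts):
    """Select the replica with the lowest measured round-trip latency,
    rather than the smallest hop count."""
    return min(hosts, key=rtt_probe)

# Hypothetical usage:
# best = pick_replica(["mirror1.example.org", "mirror2.example.org"])
```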

176 citations


Journal ArticleDOI
TL;DR: The authors are using the Web, coupled with their own local clinical data server and vocabulary server, to carry out rapid prototype development of clinical information systems, and have developed one such prototype system that can be run on most popular computing platforms from anywhere on the Internet.

127 citations


Proceedings ArticleDOI
01 Jan 1995

113 citations


Journal ArticleDOI
09 Dec 1995-BMJ
TL;DR: The world wide web provides a uniform, user friendly interface to the Internet, and opens up new possibilities for electronic publishing and electronic journals.
Abstract: The world wide web provides a uniform, user friendly interface to the Internet. Web pages can contain text and pictures and are interconnected by hypertext links. The addresses of web pages are recorded as uniform resource locators (URLs), transmitted by hypertext transfer protocol (HTTP), and written in hypertext markup language (HTML). Programs that allow you to use the web are available for most operating systems. Powerful on line search engines make it relatively easy to find information on the web. Browsing through the web--"net surfing"--is both easy and enjoyable. Contributing to the web is not difficult, and the web opens up new possibilities for electronic publishing and electronic journals.

68 citations


Journal ArticleDOI
30 Apr 1995
TL;DR: DeckScape is an experimental World-Wide Web browser based on a "deck" metaphor: a deck consists of a collection of Web pages, and multiple decks may exist on the screen at once.
Abstract: This paper describes DeckScape, an experimental World-Wide Web browser based on a “deck” metaphor. A deck consists of a collection of Web pages, and multiple decks may exist on the screen at once. As the user traverses links, new pages appear on top of the current deck. Retrievals are done using a background thread, so all visible pages in any deck are active at all times. Users can move and copy pages between decks, and decks can be used as a general-purpose way to organize material, such as hotlists, query results, and breadth-first expansions.
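
The deck metaphor maps onto a simple data structure. A sketch with hypothetical names (not DeckScape's code): traversing a link pushes the new page onto the current deck, and pages can be moved between decks used as hotlists or query results.

```python
class Deck:
    """A deck: an ordered collection of Web pages, top page visible."""

    def __init__(self, name):
        self.name = name
        self.pages = []              # bottom ... top

    def visit(self, page):
        self.pages.append(page)      # a traversed link lands on top

    def top(self):
        return self.pages[-1] if self.pages else None

    def move_page(self, page, other_deck):
        """Move a page to another deck (hotlist, query results, ...)."""
        self.pages.remove(page)
        other_deck.pages.append(page)

# Hypothetical usage:
browsing, hotlist = Deck("browsing"), Deck("hotlist")
browsing.visit("http://example.org/")
browsing.visit("http://example.org/papers")
browsing.move_page("http://example.org/papers", hotlist)
```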

60 citations


Journal Article
TL;DR: This chapter discusses the history and growth of the World Wide Web (W3), which was developed to be a pool of human knowledge that would allow collaborators in remote sites to share their ideas and all aspects of a common project.
Abstract: Publisher Summary This chapter discusses the history and growth of the World Wide Web (W3). The World-Wide Web was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project. Physicists and engineers at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, collaborate with many other institutes to build the software and hardware for high-energy physics research. The idea of the Web was prompted by the positive experience of a small "home-brew" personal hypertext system used for keeping track of personal information on a distributed project. The Web was designed so that if it was used independently for two projects, and relationships between the projects were found later, then no major or centralized changes would have to be made; the information could smoothly reshape to represent the new state of knowledge. This property of scaling has allowed the Web to expand rapidly from its origins at CERN across the Internet, irrespective of boundaries of nations or disciplines.

Proceedings Article
01 Jan 1995
TL;DR: Some of the advantages found for prototyping with Web-based applications, including security aspects, are illustrated.
Abstract: We have experimented with developing a prototype Surgeon's Workstation which makes use of the World Wide Web client-server architecture. Although originally intended merely as a means for obtaining user feedback for use in designing a "real" system, the application has been adopted for use by our Department of Surgery. As they begin to use the application, they have suggested changes and we have responded. This paper illustrates some of the advantages we have found for prototyping with Web-based applications, including security aspects.


Journal ArticleDOI
30 Apr 1995
TL;DR: A new Web cataloguing strategy based upon the automatic analysis of documents stored in a proxy server cache, built on a cache scanning mechanism, is presented, and it is shown that the resulting database is becoming an increasingly useful resource.
Abstract: This paper presents a new Web cataloguing strategy based upon the automatic analysis of documents stored in a proxy server cache. This could be an elegant method of Web cataloguing as it creates no extra network load and runs completely automatically. Naturally such a mechanism will only reach a subset of Web documents, but at an institute such as the Alfred Wegener Institute, due to the fact that scientists tend to make quite good search engines, the cache usually contains large numbers of documents related to polar and marine research. Details of a database for polar, marine and global change research, based upon a cache scanning mechanism are given, and it is shown that it is becoming an increasingly useful resource. A problem with any collection of information about Web documents is that it quickly becomes old. Strategies have been developed to maintain the database consistency with respect to changes on the Web, while attempting to keep network load to a minimum. This has been found to provide a better quality of response and it appears to be keeping information in the database current. Such strategies are of interest to anyone attempting to create and maintain a Web document location resource.
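
A sketch of the core scanning step, assuming the proxy stores cached pages as files on disk and that a keyword filter approximates the institute's subject areas (both assumptions, not the paper's implementation):

```python
import pathlib
import re

def scan_cache(cache_dir, topic_words):
    """Catalogue cached HTML documents matching any topic word:
    returns {file path: document title}."""
    pattern = re.compile("|".join(map(re.escape, topic_words)), re.I)
    catalogue = {}
    for path in pathlib.Path(cache_dir).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        if pattern.search(text):
            m = re.search(r"<title>(.*?)</title>", text, re.I | re.S)
            catalogue[str(path)] = m.group(1).strip() if m else path.name
    return catalogue

# Hypothetical usage:
# db = scan_cache("/var/cache/proxy", ["polar", "marine", "global change"])
```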

Journal ArticleDOI
TL;DR: WAVE as mentioned in this paper is a 3D interface for Web information visualization and browsing, which uses the mathematical theory of concept analysis to conceptually cluster objects and provides a formal mechanism that automatically classifies and categorizes documents, creating a conceptual information space.
Abstract: Due to the rapid growth of the World-Wide Web, resource discovery has become an increasing problem. As an answer to the demand for information management, a third generation of World-Wide Web tools will evolve: Information gathering and processing agents. This paper describes WAVE (Web Analysis and Visualization Environment), a 3D interface for World-Wide Web information visualization and browsing. It uses the mathematical theory of concept analysis to conceptually cluster objects. So-called “conceptual scales” for attributes, such as location, title, keywords, topic, size, or modification time, provide a formal mechanism that automatically classifies and categorizes documents, creating a conceptual information space. A visualization shell serves as an ergonomically sound user interface for exploring this information space.
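
The conceptual clustering rests on concept analysis' two derivation operators. A toy sketch (the document/attribute data is invented): applying the operators in both directions yields a formal concept, i.e. one cluster in the conceptual information space.

```python
def shared_attrs(docs, relation):
    """Attributes common to every document in docs."""
    sets = [relation[d] for d in docs]
    return set.intersection(*sets) if sets else set()

def matching_docs(attrs, relation):
    """Documents possessing every attribute in attrs."""
    return {d for d, a in relation.items() if attrs <= a}

# Assumed toy data: document -> conceptual-scale attributes
relation = {
    "doc1": {"topic:web", "type:paper"},
    "doc2": {"topic:web", "type:tool"},
    "doc3": {"topic:visualization", "type:tool"},
}

extent = matching_docs({"type:tool"}, relation)   # {'doc2', 'doc3'}
intent = shared_attrs(extent, relation)           # {'type:tool'}
print(sorted(extent), sorted(intent))             # one formal concept
```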

Journal ArticleDOI
TL;DR: The authors show how the system adapts to meet each user's needs and learns from user behaviour and examine how IndustryNet is enhancing the relationship between buyers/specifiers, manufacturers, and distributors.
Abstract: In describing a Web-based on-line information gathering tool, the authors show how the system adapts to meet each user's needs and learns from user behaviour. One system that has begun to make good on the early predictions is the IndustryNet marketplace on the Web. IndustryNet provides several services for its users. It is a news service, an on-line trade journal that provides instant reader service information, a searchable archive of new product announcements, a searchable archive of design application cases, and a provider of interactive access to manufacturer and distributor information. We outline the IndustryNet approach to conducting commerce on the Internet. We examine how IndustryNet is enhancing the relationship between buyers/specifiers, manufacturers, and distributors.

Journal ArticleDOI
30 Apr 1995
TL;DR: This paper presents a prototype environment that facilitates the publishing of documents on the Web by automatically generating meta-information about the document, communicating this to a local scalable architecture, e.g. WHOIS++, verifying the document's HTML compliance, maintaining referential integrity within the local database, and placing the document in a Web accessible area.
Abstract: This paper presents an environment for publishing information on the World-Wide Web (WWW). Previous work has pointed out that the explosive growth of the WWW is in part due to the ease with which information can be made available to Web users [23]. Yet this property can have negative impacts on the ability to find appropriate information as well as on the integrity of the information published. We present a prototype environment that facilitates the publishing of documents on the Web by automatically generating meta-information about the document, communicating this to a local scalable architecture, e.g. WHOIS++, verifying the document's HTML compliance, maintaining referential integrity within the local database, and placing the document in a Web accessible area. Additionally, maintenance and versioning facilities are provided. This paper first discusses an idealized publishing environment, then describes our implementation, followed by a discussion of salient issues and future research areas.
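
A sketch of the meta-information step only, under the assumption that documents are plain HTML (field choices are illustrative, not the prototype's actual schema):

```python
import re

def extract_meta(html_text, url):
    """Automatically derive meta-information for a document before
    publishing; a real environment would also verify HTML compliance
    and referential integrity."""
    title = re.search(r"<title>(.*?)</title>", html_text, re.I | re.S)
    kw = re.search(r'<meta\s+name="keywords"\s+content="(.*?)"',
                   html_text, re.I)
    return {
        "url": url,
        "title": title.group(1).strip() if title else "",
        "keywords": [k.strip() for k in kw.group(1).split(",")] if kw else [],
        "links": re.findall(r'href="([^"]+)"', html_text, re.I),
    }
```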

Journal Article
TL;DR: This article introduces business activity on the World-Wide Web and reports on two surveys of business organizations with World-Wide Web pages; the main problems holding back further development were said to be the development of secure sites and suitable payment systems, faster connection times, and wider access.
Abstract: This article introduces the issue of business activity on the World-Wide Web and reports on two surveys of business organizations with World-Wide Web pages. The first survey examined the WWW pages and categorized them by industry sector and by the nature of the use of the Web. The second survey used a brief e-mail questionnaire to those companies in the first survey that provided an e-mail contact address. The companies made use of the Web for publicity, advertising, customer support, and online selling. The major problems holding back further development were said to be: development of secure sites and suitable payment systems, faster connection times and wider access. The main future developments were seen to be: more interaction with users, more general content to be added, the addition of more products and services and increased use of multi-media.

Journal ArticleDOI
30 Apr 1995
TL;DR: The DCE Web toolkit demonstrates that a broad array of new services can be provided in a layer below HTTP, including security and location-independent hyperlinks, without modification of the HTTP protocol and with only minor changes to Web applications.
Abstract: New WWW services may be created either by extending Web protocols or by adding services in a lower layer. The DCE Web toolkit demonstrates that a broad array of new services can be provided in a layer below HTTP. Toolkit services include security, naming, and a transport-independent communications interface. Web applications can take advantage of these services by communicating their current protocols, such as HTTP, over the toolkit layer. The toolkit provides our prototype Web implementation with many new features, including security and location-independent hyperlinks, without modification of the HTTP protocol and with only minor changes to Web applications.

Proceedings Article
01 Jan 1995
TL;DR: A World Wide Web Common Gateway Interface package is described for accessing existing online interactive atlases of anatomy, which provides a parallel access path that has much broader potential for development of a distributed distance learning network in anatomy.
Abstract: A World Wide Web Common Gateway Interface package is described for accessing existing online interactive atlases of anatomy. The Web interface accesses the same 2-D and 3-D images of human neuroanatomy, knee anatomy and thoracic viscera that are currently accessed by a custom interactive atlas in distance learning courses. Although the Web interface is too slow to replace the existing atlas, it provides a parallel access path that has much broader potential for development of a distributed distance learning network in anatomy. By maintaining both access methods to the same information sources we continue to satisfy the fast interactivity needs for our local courses, while at the same time providing a migration path to the Web as the capabilities of Web browsers evolve.
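
The CGI contract such a package builds on is minimal: the Web server passes request parameters through environment variables and relays whatever the script writes to stdout. A sketch of one hypothetical endpoint (parameter names and paths are invented):

```python
#!/usr/bin/env python3
"""CGI sketch: return an atlas image selected by the query string."""
import os
import sys
from urllib.parse import parse_qs

params = parse_qs(os.environ.get("QUERY_STRING", ""))
region = params.get("region", ["knee"])[0]    # hypothetical parameter

image_path = f"/data/atlas/{region}.gif"      # hypothetical file layout
sys.stdout.write("Content-Type: image/gif\r\n\r\n")
sys.stdout.flush()
with open(image_path, "rb") as f:
    sys.stdout.buffer.write(f.read())
```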

Proceedings ArticleDOI
Jeff Sedayao1
05 Mar 1995
TL;DR: This paper attempts to characterize World Wide Web traffic patterns by reviewing the Web's HyperText Transfer Protocol (HTTP), with particular attention to latency factors.
Abstract: The World Wide Web (WWW) generates a significant and growing portion of traffic on the Internet. With the click of a mouse button, a person browsing on the WWW can generate megabytes of multimedia network traffic. WWW's growth and possible network impact merit a study of its traffic patterns, problems, and possible changes. This paper attempts to characterize World Wide Web traffic patterns. First, the Web's HyperText Transfer Protocol (HTTP) is reviewed, with particular attention to latency factors. User access patterns and file size distribution are then described. Next, the HTTP design issues are discussed, followed by a section on proposed revisions. Benefits and drawbacks to each of the proposals are covered. The paper ends with pointers toward more information on this area.

Book
01 Apr 1995
TL;DR: In Build a Web Site, net Genesis shows you how to exploit the power of Web protocols and standards so you can create and implement a successful Web site, extend its functionality, and maximize its commercial potential.
Abstract: From the Publisher: In Build a Web Site, net Genesis shows you how to exploit the power of Web protocols and standards so you can create and implement a successful Web site, extend its functionality, and maximize its commercial potential. Whether you are a budding, ambitious computer user or an experienced, Web-savvy programmer, you'll find: how best to create a home page on the web; program code to enhance your web site; expert advice on hardware, software, and information providers; programming tips to help you write powerful clients and servers; and annotated specifications for HTTP, HTML, and URL standards and protocols.

01 Jan 1995
TL;DR: The design of the user interface generated by LoganWeb in HTML is described, including the extensive use of hyperlinks to bring together related meeting information.
Abstract: Log files generated by electronic meeting software record the remarks typed by meeting participants and many other meeting events. In their raw format, these meeting logs are not convenient for the meeting participants to read and use as input to future meetings. LoganWeb is a tool which processes meeting log files and produces polymorphic meeting documents which contain a variety of summaries in human-readable form such as keyword indexes and participant summaries. LoganWeb generates polymorphic documents in the HTML format used for laying out documents on the World-Wide Web (Web). This allows exploitation of the powerful Web layout, hypertext and user interface facilities. Using a Web browser alongside the electronic meeting tool allows remotely located participants to consult valuable polymorphic documents from the current and past meetings. This paper describes the design of the user interface which is generated by LoganWeb in HTML. The design includes the extensive use of hyperlinks to bring together related meeting information. The powerful features of LoganWeb are illustrated by means of a meeting scenario which shows the main features of the meeting document tool and its user interface.
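
The transformation LoganWeb performs can be suggested by a small sketch, with assumed log and output formats (the real tool produces richer polymorphic summaries):

```python
import html

def keyword_index(remarks, keywords):
    """remarks: list of (participant, text) pairs from a meeting log.
    Returns an HTML keyword index grouping remarks by keyword."""
    out = ["<html><body><h1>Keyword index</h1>"]
    for kw in keywords:
        out.append(f"<h2>{html.escape(kw)}</h2><ul>")
        for who, text in remarks:
            if kw.lower() in text.lower():
                out.append(f"<li><b>{html.escape(who)}</b>: "
                           f"{html.escape(text)}</li>")
        out.append("</ul>")
    out.append("</body></html>")
    return "\n".join(out)

# Hypothetical usage:
# page = keyword_index([("ann", "ship the beta"), ("bob", "beta bugs")],
#                      ["beta"])
```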

Proceedings Article
07 Nov 1995
TL;DR: This paper describes earlier work on text browsing and its adaptation to Web browsing, and comments on how MultiSurf fits into the overall goal of developing large-scale information exploration systems.
Abstract: Current World Wide Web browsers, e.g., Mosaic and Netscape, support users primarily in the task of browsing the Internet. In some situations, users want to explore topics for which relevant information may reside both on a large local database and on the Web. The MultiSurf project seeks to deal with these situations by integrating text browsing of a local database with hypertext browsing of the Web. In the current implementation, local queries are passed to Web index server(s) for simultaneous search on the Internet. An index server matches query terms with remote documents. Local and remote information is then presented to the user in separate windows. The existence of index servers is made transparent to the user. Instead of opening the URL of a server explicitly and filling the form, users click on the keywords of interest in the text. MultiSurf composes these keywords into queries and passes them to the index servers. In addition to (hyper)text browsing, MultiSurf also supports visualization of the conceptual structure of a query session. This paper will describe our earlier work on text browsing and its adaptation to Web browsing. We will also discuss early impressions of the MultiSurf prototype and its functionality. We will comment on how MultiSurf fits into our overall goal of developing large-scale information exploration systems. Finally, we will describe a research strategy for integrating disparate systems through innovative user interfaces.

Dissertation
29 Sep 1995
TL;DR: This dissertation describes the development stages of a system which uses Web searching technology; the designer chose the data flow diagram method to show the design of this system.
Abstract: This dissertation is presented as a partial requirement for the degree of Master of Software System Technology at the University of Sheffield. The thesis, titled "Information Retrieval From The World Wide Web", was undertaken from May to September 1995. The aim of this project is to design a system which uses Web searching technology. When this system is completed, it can be used to index and search any document of interest stored on the Web. This application may benefit its users, especially the Department of Computer Science, University of Sheffield. The system is developed using a software engineering approach called the "incremental delivery strategy". The project began with a feasibility study of the techniques used in information retrieval and the components involved in the World Wide Web, and continued with requirements analysis. The designer chose the data flow diagram method to show the design of this system. After the design was done, the system was implemented in the C programming language, aided by the World Wide Web Library (known as libwww). This dissertation describes the development stages of this system. Problems faced and suggestions for recovery are presented as discussions, and other relevant information is attached as appendixes.

Proceedings ArticleDOI
04 May 1995
TL;DR: This position document discusses the problem of creating complex Web services from simpler ones, under client control, and proposes meta-scripts, which contain procedures for accessing and composing simpler services, as a solution.
Abstract: The World-Wide Web is the 'killer application' of the mid-1990's, and as a result it serves as the market place for various services. This position document discusses the problem of creating complex Web services from simpler ones, under client control. For example, one might want an annotated forecast of a company's stock price from three simple services, viz., stock price data, forecasting and annotation. We propose meta-scripts as a solution. A client accesses a meta-script, which contains procedures for accessing and composing simpler services. Meta-identifiers are used in the meta-script instead of static links. Developments in type-consistency mechanisms and various enabling services of the Web help in realization of the meta-script based approach. This model of service composition separates control from data manipulation, and gives this control to a client. As clients are empowered with the selection and coordination of service providers, this can lead to a free market for services.
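
The stock-forecast example maps directly onto a small composition. A sketch of what a meta-script does, with every service hypothetical: the client selects providers and pipes one service's output into the next.

```python
def meta_script(ticker, services):
    """Compose an annotated forecast from three simpler services,
    chosen by the client (all providers here are stand-ins)."""
    history = services["price"](ticker)            # raw data service
    forecast = services["forecast"](history)       # forecasting service
    return services["annotate"](ticker, forecast)  # annotation service

# Hypothetical usage with stub providers:
stubs = {
    "price": lambda t: [10.0, 10.5, 10.2],
    "forecast": lambda h: sum(h) / len(h),
    "annotate": lambda t, f: f"{t}: expected price {f:.2f}",
}
print(meta_script("ACME", stubs))   # -> ACME: expected price 10.23
```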

01 Jan 1995
TL;DR: The CoVis Geosciences Web Server as discussed by the authors is an educational Web resource designed according to the plans outlined in this article, which provides a truly vital and useful resource for classroom learning.
Abstract: In Part II, we will lay out a plan for an educational Web server that goes beyond what is currently available, providing a truly vital and useful resource for classroom learning. Finally, we will describe current plans for the CoVis Geosciences Web Server, an educational Web resource designed according to the plans outlined in this article.

Proceedings ArticleDOI
01 Jan 1995
TL;DR: The paper proposes the new concepts of demand stacking, virtual stackable objects, and pointer swizzling in the World-Wide Web.
Abstract: Object-stacking is a model for structuring object based systems. The main feature of object-stacking is that layers of objects with a uniform interface are constructed, and the functions of these objects are integrated. The effectiveness of object-stacking has been shown for file systems of distributed operating systems. The paper presents the application of the object-stacking model to the World-Wide Web, an information exploring/providing system on the Internet. Object-stacking gives powerful tools to information providers who use the World-Wide Web. The paper describes the implementation method of object-stacking in the World-Wide Web. The paper proposes the new concepts of demand stacking, virtual stackable objects, and pointer swizzling in the World-Wide Web.
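
The uniform-interface idea behind object-stacking can be sketched as layers that each wrap the one below (class names are assumptions, not the paper's): a caching layer and a logging layer stack over a base document object and compose freely.

```python
class WebObject:
    """Uniform interface that every stackable layer implements."""
    def get(self, name):
        raise NotImplementedError

class BaseStore(WebObject):
    def __init__(self, docs):
        self.docs = docs
    def get(self, name):
        return self.docs[name]

class CacheLayer(WebObject):
    """Layer that caches results from the layer below."""
    def __init__(self, below):
        self.below, self.cache = below, {}
    def get(self, name):
        if name not in self.cache:
            self.cache[name] = self.below.get(name)
        return self.cache[name]

class LogLayer(WebObject):
    """Layer that records accesses, then delegates downward."""
    def __init__(self, below):
        self.below, self.log = below, []
    def get(self, name):
        self.log.append(name)
        return self.below.get(name)

# Layers stack in any order because the interface is uniform:
stack = LogLayer(CacheLayer(BaseStore({"index.html": "<html>...</html>"})))
print(stack.get("index.html"))
```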

Journal ArticleDOI
Don H. Johnson1
TL;DR: The rudiments of accessing the Web and how to create your own information resources are described, focusing on signal processing resources and how the Web catalyzes signal processing research and development.
Abstract: The World Wide Web (WWW) offers much information useful to the signal processing community. Using the Web, information having a variety of different forms can be transferred in a cohesive fashion. The article describes the rudiments of accessing the Web and how to create your own information resources. The authors focus on signal processing resources and how the Web catalyzes signal processing research and development.

Journal ArticleDOI
B.J. Spear1
01 Jun 1995
TL;DR: This paper is based on the author's experience in designing pages for the UK Patent Office Search and Advisory Service and getting the pages mounted on the Web; the contents of the Web pages are of crucial importance in presenting the right image to the world.
Abstract: The World Wide Web is the hypertext/graphic images part of the Internet, and forms an ideal medium for organisations to advertise their goods and services and, if desired, to engage in electronic trading. The contents of the Web pages are therefore of crucial importance in presenting the right image to the world. This paper is based on the author's experience in designing pages for the UK Patent Office Search and Advisory Service and getting the pages mounted on the Web.