Showing papers on "Web page" published in 2000


Proceedings Article
01 Jan 2000
TL;DR: In this paper, PROMPT, a semi-automatic approach to ontology merging and alignment, is presented, motivated by the observation that manual alignment and merging constitute a large and tedious portion of the sharing process.
Abstract: Researchers in the ontology-design field have developed the content for ontologies in many domain areas. Recently, ontologies have become increasingly common on the World Wide Web where they provide semantics for annotations in Web pages. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. The processes of ontology alignment and merging are usually handled manually and often constitute a large and tedious portion of the sharing process. We have developed and implemented PROMPT, an algorithm that provides a semi-automatic approach to ontology merging and alignment. PROMPT performs some tasks automatically and guides the user in performing other tasks for which his intervention is required. PROMPT also determines possible inconsistencies in the state of the ontology, which result from the user’s actions, and suggests ways to remedy these inconsistencies. PROMPT is based on an extremely general knowledge model and therefore can be applied across various platforms. Our formative evaluation showed that a human expert followed 90% of the suggestions that PROMPT generated and that 74% of the total knowledge-base operations invoked by the user were suggested by PROMPT.
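
PROMPT's actual rules operate on a full knowledge model, but the flavor of a semi-automatic merge loop can be sketched in a few lines. In the illustrative Python sketch below (all names and the string-similarity heuristic are assumptions of this note, not taken from the paper), candidate class pairs are proposed automatically and the user confirms or rejects each one, mirroring the suggest-then-guide workflow the abstract describes.

```python
from difflib import SequenceMatcher

def suggest_merge_candidates(ontology_a, ontology_b, threshold=0.8):
    """Yield (class_a, class_b, score) pairs whose names look similar.

    A toy stand-in for the system's initial suggestions; the real
    algorithm uses a richer knowledge model than bare class names.
    """
    for a in ontology_a:
        for b in ontology_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                yield a, b, score

def interactive_merge(ontology_a, ontology_b):
    """Semi-automatic loop: propose merges, let the user decide."""
    merged, log = set(ontology_a) | set(ontology_b), []
    candidates = sorted(suggest_merge_candidates(ontology_a, ontology_b),
                        key=lambda t: -t[2])
    for a, b, score in candidates:
        answer = input(f"Merge '{a}' and '{b}' (similarity {score:.2f})? [y/n] ")
        if answer.lower().startswith("y"):
            merged.discard(b)      # keep one name for the merged class
            log.append((a, b))     # record the operation for later review
    return merged, log
```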

1,119 citations


Journal ArticleDOI
TL;DR: It is shown that Web page providers not only have to make the content informative and timely, but also need to design a speedy Web page, avoiding unnecessary pictorial data that might jeopardise the display time.

960 citations


Journal ArticleDOI
01 Jun 2000
TL;DR: The WebML language and its accompanying design method are fully implemented in a pre-competitive Web design tool suite, called ToriiSoft, supporting advanced features like multi-device access, personalization, and evolution management.
Abstract: Designing and maintaining Web applications is one of the major challenges for the software industry of the year 2000. In this paper we present Web Modeling Language (WebML), a notation for specifying complex Web sites at the conceptual level. WebML enables the high-level description of a Web site under distinct orthogonal dimensions: its data content (structural model), the pages that compose it (composition model), the topology of links between pages (navigation model), the layout and graphic requirements for page rendering (presentation model), and the customization features for one-to-one content delivery (personalization model). All the concepts of WebML are associated with a graphic notation and a textual XML syntax. WebML specifications are independent of both the client-side language used for delivering the application to users, and of the server-side platform used to bind data to pages, but they can be effectively used to produce a site implementation in a specific technological setting. WebML guarantees a model-driven approach to Web site development, which is a key factor for defining a novel generation of CASE tools for the construction of complex sites, supporting advanced features like multi-device access, personalization, and evolution management. The WebML language and its accompanying design method are fully implemented in a pre-competitive Web design tool suite, called ToriiSoft.

929 citations


Book ChapterDOI
02 Oct 2000
TL;DR: This work describes the Protégé-2000 knowledge model, which makes the import and export of knowledge bases from and to other knowledge-base servers easy, and demonstrates that many of the differences between the knowledge models of Protégé-2000 and the Resource Description Framework (RDF)--a system for annotating Web pages with knowledge elements--can be resolved by defining a new metaclass set.
Abstract: Knowledge-based systems have become ubiquitous in recent years. Knowledge-base developers need to be able to share and reuse knowledge bases that they build. Therefore, interoperability among different knowledge-representation systems is essential. The Open Knowledge-Base Connectivity protocol (OKBC) is a common query and construction interface for frame-based systems that facilitates this interoperability. Protégé-2000 is an OKBC-compatible knowledge-base-editing environment developed in our laboratory. We describe the Protégé-2000 knowledge model that makes the import and export of knowledge bases from and to other knowledge-base servers easy. We discuss how the requirements of being a usable and configurable knowledge-acquisition tool affected our decisions in the knowledge-model design. Protégé-2000 also has a flexible metaclass architecture which provides configurable templates for new classes in the knowledge base. The use of metaclasses makes Protégé-2000 easily extensible and enables its use with other knowledge models. We demonstrate that we can resolve many of the differences between the knowledge models of Protégé-2000 and the Resource Description Framework (RDF)--a system for annotating Web pages with knowledge elements--by defining a new metaclass set. Resolving the differences between the knowledge models in a declarative way enables easy adaptation of Protégé-2000 as an editor for other knowledge-representation systems.

754 citations


Journal ArticleDOI
TL;DR: It is found that improvements in the caching architecture of the World Wide Web are changing the workloads of Web servers, but major improvements to that architecture are still necessary.
Abstract: This article presents a detailed workload characterization study of the 1998 World Cup Web site. Measurements from this site were collected over a three-month period. During this time the site received 1.35 billion requests, making this the largest Web workload analyzed to date. By examining this extremely busy site and through comparison with existing characterization studies, we are able to determine how Web server workloads are evolving. We find that improvements in the caching architecture of the World Wide Web are changing the workloads of Web servers, but major improvements to that architecture are still necessary. In particular, we uncover evidence that a better consistency mechanism is required for World Wide Web caches.

743 citations


Patent
25 Apr 2000
TL;DR: In this paper, a method is proposed for enabling users to exchange group electronic mail by establishing individual profiles and criteria for determining personalized subsets within a group; users establish subscriptions to an electronic mailing list by specifying user profile data and acceptance criteria data to screen other users.
Abstract: A method for enabling users to exchange group electronic mail by establishing individual profiles and criteria, for determining personalized subsets within a group. Users establish subscriptions to an electronic mailing list by specifying user profile data and acceptance criteria data to screen other users. When a user subscribes, a web server establishes and stores an individualized recipient list including each matching subscriber and their degree of one-way or mutual match with the user. When the user then sends a message to the mailing list, an email server retrieves her 100% matches and then optionally filters her recipient list down to a message distribution list using each recipient's message criteria. The message is then distributed to matching users. Additionally, email archives and information contributions from users are stored in a database. A web server creates an individualized set of web pages for a user from the database, containing contributions only from users in his recipient list. In other embodiments, users apply one-way or mutual criteria matching and message profile criteria to other group forums, such as web-based discussion boards, chat, online clubs, USENET newsgroups, voicemail, instant messaging, web browsing side channel communities, and online gaming rendezvous.
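
As a rough illustration of the mutual-match idea (the class and function names here are hypothetical, not from the patent), a recipient list can be computed by requiring that the sender's acceptance criteria pass the recipient's profile and vice versa:

```python
from dataclasses import dataclass, field

@dataclass
class Subscriber:
    name: str
    profile: dict                                 # e.g. {"age": 34}
    criteria: dict = field(default_factory=dict)  # predicate per profile key

    def accepts(self, other: "Subscriber") -> bool:
        """True if every criterion this subscriber sets is met by `other`."""
        return all(pred(other.profile.get(key))
                   for key, pred in self.criteria.items())

def recipient_list(sender: Subscriber, subscribers: list):
    """Mutual match: sender accepts recipient AND recipient accepts sender."""
    return [s for s in subscribers
            if s is not sender and sender.accepts(s) and s.accepts(sender)]

# Usage: alice screens for age >= 30, bob for age < 50, carol screens nobody.
alice = Subscriber("alice", {"age": 34}, {"age": lambda v: v and v >= 30})
bob   = Subscriber("bob",   {"age": 41}, {"age": lambda v: v and v < 50})
carol = Subscriber("carol", {"age": 25})
print([s.name for s in recipient_list(alice, [alice, bob, carol])])  # ['bob']
```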

730 citations


Journal ArticleDOI
07 Dec 2000
TL;DR: The HP Labs' “Cooltown” project has been exploring opportunities through an infrastructure to support “web presence” for people, places and things, providing a model for supporting nomadic users without a central control point.
Abstract: The convergence of Web technology, wireless networks, and portable client devices provides new design opportunities for computer/communications systems. In the HP Labs' Cooltown project we have been exploring these opportunities through an infrastructure to support Web presence for people, places and things. We put Web servers into things like printers and put information into Web servers about things like artwork; we group physically related things into places embodied in Web servers. Using URLs for addressing, physical URL beaconing and sensing of URLs for discovery, and localized Web servers for directories, we can create a location-aware but ubiquitous system to support nomadic users. On top of this infrastructure we can leverage Internet connectivity to support communications services. Web presence bridges the World Wide Web and the physical world we inhabit, providing a model for supporting nomadic users without a central control point.

711 citations


Patent
03 Mar 2000
TL;DR: In this paper, a system is described that facilitates web-based information retrieval and display system, where a wireless phone or similar hand-held wireless device with Internet Protocol capability is combined with other peripherals to provide a portable portal into the Internet.
Abstract: A system is disclosed that facilitates web-based information retrieval and display system. A wireless phone or similar hand-held wireless device with Internet Protocol capability is combined with other peripherals to provide a portable portal into the Internet. The wireless device prompts a user to input information of interest to the user. This information is transmitted a query to a service routine (running on a Web server). The service routine then queries the Web to find price, shipping and availability information from various Web suppliers. This information is then available for use by various applications through an interface support framework.

643 citations


Proceedings Article
10 Sep 2000
TL;DR: An architecture for an incremental crawler is proposed that combines the best design choices; such a crawler can improve the "freshness" of the collection significantly and bring in new pages in a more timely manner.
Abstract: In this paper we study how to build an effective incremental crawler. The crawler selectively and incrementally updates its index and/or local collection of web pages, instead of periodically refreshing the collection in batch mode. The incremental crawler can improve the "freshness" of the collection significantly and bring in new pages in a more timely manner. We first present results from an experiment conducted on more than half a million web pages over 4 months, to estimate how web pages evolve over time. Based on these experimental results, we compare various design choices for an incremental crawler and discuss their trade-offs. We propose an architecture for the incremental crawler, which combines the best design choices.
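
The paper compares design choices rather than prescribing code, but a minimal sketch of the incremental idea, assuming a simple adaptive change-rate estimate per page (the estimator and constants below are invented for illustration, not the paper's architecture), looks like this:

```python
import heapq
import time

class IncrementalCrawler:
    """Toy incremental crawler: re-visit pages whose estimated change
    rate makes them most likely to be stale, instead of batch refreshes."""

    def __init__(self, fetch):
        self.fetch = fetch        # fetch(url) -> page content
        self.change_rate = {}     # url -> estimated changes per day
        self.queue = []           # min-heap of (next_visit_time, url)

    def add(self, url, est_rate=1.0):
        self.change_rate[url] = est_rate
        heapq.heappush(self.queue, (time.time(), url))

    def step(self, snapshots):
        """Visit the most-due page, update its rate estimate, reschedule."""
        _, url = heapq.heappop(self.queue)
        content = self.fetch(url)
        changed = snapshots.get(url) != content
        snapshots[url] = content
        # Revisit fast-changing pages sooner, stable ones later.
        rate = self.change_rate[url]
        rate = min(rate * 1.5, 48.0) if changed else max(rate / 1.5, 1 / 30)
        self.change_rate[url] = rate
        heapq.heappush(self.queue, (time.time() + 86400 / rate, url))
```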

592 citations


Proceedings ArticleDOI
29 Feb 2000
TL;DR: The paper describes the methodology and the software development of XWRAP, an XML-enabled wrapper construction system for semi-automatic generation of wrapper programs, and introduces and develops a two-phase code generation framework.
Abstract: The paper describes the methodology and the software development of XWRAP, an XML-enabled wrapper construction system for semi-automatic generation of wrapper programs. By XML-enabled we mean that the metadata about information content that are implicit in the original Web pages will be extracted and encoded explicitly as XML tags in the wrapped documents. In addition, the query based content filtering process is performed against the XML documents. The XWRAP wrapper generation framework has three distinct features. First, it explicitly separates tasks of building wrappers that are specific to a Web source from the tasks that are repetitive for any source, and uses a component library to provide basic building blocks for wrapper programs. Second, it provides a user friendly interface program to allow wrapper developers to generate their wrapper code with a few mouse clicks. Third and most importantly, we introduce and develop a two-phase code generation framework. The first phase utilizes an interactive interface facility to encode the source-specific metadata knowledge identified by individual wrapper developers as declarative information extraction rules. The second phase combines the information extraction rules generated at the first phase with the XWRAP component library to construct an executable wrapper program for the given Web source. We report the initial experiments on performance of the XWRAP code generation system and the wrapper programs generated by XWRAP.
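
A toy version of the declarative-rule idea (the rule format below is invented; XWRAP's actual rule language and component library are far richer) shows how source-specific extraction rules can be kept as data while the wrapper engine stays generic:

```python
import re
from xml.sax.saxutils import escape

# Hypothetical declarative rules: (xml_tag, regex with one capture group).
# They stand in for the source-specific knowledge a wrapper developer
# would encode interactively in the first code-generation phase.
RULES = [
    ("title", r"<h1[^>]*>(.*?)</h1>"),
    ("price", r"\$([0-9]+(?:\.[0-9]{2})?)"),
]

def wrap(html: str) -> str:
    """Generic engine: apply the rules to a page and emit explicit XML tags
    for content that was only implicit in the original HTML."""
    parts = ["<page>"]
    for tag, pattern in RULES:
        for match in re.findall(pattern, html, re.S):
            parts.append(f"  <{tag}>{escape(match.strip())}</{tag}>")
    parts.append("</page>")
    return "\n".join(parts)

print(wrap("<h1>Blue Widget</h1> ... only $19.99 today"))
```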

520 citations


Proceedings Article
01 Jan 2000
TL;DR: A joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives is described, based on a probabilistic factor decomposition.
Abstract: We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics are mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.
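
The abstract names a probabilistic factor decomposition without giving it; one plausible form (notation assumed here, not quoted from the paper) has a latent topic z generating both the terms w and the out-links or citations c of a document d:

```latex
P(w \mid d) = \sum_{z} P(w \mid z)\, P(z \mid d),
\qquad
P(c \mid d) = \sum_{z} P(c \mid z)\, P(z \mid d)
```

Because the same topic mixture P(z | d) drives both factors, principal topics fall out of the decomposition, and documents with high P(c | z) act as the authoritative documents within topic z.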

Journal ArticleDOI
TL;DR: The first impressions of web pages presented to users were investigated by using 13 different web pages, three types of scales and 18 participants, finding four important dimensions: beauty, mostly illustrations versus mostly text, overview and structure.
Abstract: The first impressions of web pages presented to users were investigated by using 13 different web pages, three types of scales and 18 participants. Multidimensional analysis of similarity and preference judgements found four important dimensions: beauty, mostly illustrations versus mostly text, overview and structure. Category scales indicated the existence of two factors related to formal aspects and to appeal of the objects, respectively. The best predictor for the overall judgement of the category scales was beauty. Property vector fitting of the multidimensional solutions with the category scales further indicated the importance of beauty for the preference space. Aspects of usability, product design and aesthetics are discussed.

Patent
24 Jan 2000
TL;DR: In this article, an Internet-based video feed management system employs a network of local video-propagation servers located in different localities for receiving the video feeds from different source locations, and a master authorization server for receiving and granting requests via Internet from requesting parties for access to any of the video streams transmitted to the video propagation servers.
Abstract: An Internet-based video feed management system controls, manages, and efficiently administers the commercial distribution of live video feeds from on-site video cameras as well as other sources of video feeds to production companies at other locations. The system employs a network of local video-propagation servers located in different localities for receiving the video feeds from the different source locations, and a master authorization server for receiving and granting requests via Internet from requesting parties for access to any of the video feeds transmitted to the video-propagation servers. The master server issues an access code to the requesting party and establishes a unique publishing point for the requested video feed from the video-propagation server handling the feed. The on-site video cameras can supply live video feeds to the requesting parties, or the video feeds can be transmitted to a video-propagation server for storage and later re-transmission. The master server is provided with a master feed list and a pricing table for computing billings to requesting parties, and payments to sources of video feeds. The master feed list is updated by feed listings input to the video-propagation servers. For live video feeds captured by different types of video cameras at the on-site locations, the system allows a requesting party to access the video camera for remote control on the Internet. A universal control panel GUI is provided for the browser of the requesting party, and is used to issue command codes corresponding to the respective video camera type. The system can be used to automatically generate video Web pages hosted on the master server and linked to the clients' Web sites. The master server allows the client to select from different display templates, and to upload their identification graphics for incorporation into the display template with the desired video feed, thereby obtaining a marked reduction in production costs for creating video Web pages for e-commerce, live events programming, etc.

Journal ArticleDOI
TL;DR: This paper considers how the semantic Web will provide intelligent access to heterogeneous and distributed information, enabling software products (agents) to mediate between user needs and available information sources.
Abstract: The Web has drastically changed the availability of electronic information, but its success and exponential growth have made it increasingly difficult to find, access, present and maintain such information for a wide variety of users. In reaction to this bottleneck many new research initiatives and commercial enterprises have been set up to enrich available information with machine-processable semantics. The paper considers how the semantic Web will provide intelligent access to heterogeneous and distributed information, enabling software products (agents) to mediate between user needs and available information sources. The paper discusses the Resource Description Framework, XML and other languages.

Journal ArticleDOI
TL;DR: The effects of webpage complexity and dynamic content on the hierarchy-of-effects were experimentally tested, and it is concluded that consumers' experiences with the web and their attitudes-towards-websites are important factors in assessing advertising effects.
Abstract: The purpose of this study was to replicate and extend a previous study (Stevenson, Bruner, and Kumar, 2000) by further exploring the advertising hierarchy-of-effects and its antecedents in the context of the world wide web. In doing this, the effects of webpage complexity and dynamic content (e.g., animated graphics and commercials) on the hierarchy-of-effects were experimentally tested using non-student subjects. The study concludes that consumers' experiences with the web and their attitudes-towards-websites are important factors in assessing advertising effects.

Patent
09 May 2000
TL;DR: In this paper, an advertisement system and method are provided for inserting into an end user communication message a background reference to an advertisement, which is usually stored at the message server or other location remote from the end user recipient.
Abstract: An advertisement system and method are provided for inserting into an end user communication message a background reference to an advertisement. In some embodiments, the background reference causes an advertisement image to be tiled, or watermarked, across an end user screen behind the text of an e-mail message or public posting. A message server inserts the background reference after receiving a message originally sent from an end user originator and before sending the message to be delivered to an end user recipient. When necessary, the message server will convert at least a portion of the message into a proper format, such as HTML, before inserting the background reference to an advertisement, which is preferably selected in accordance with end user recipient demographic information and/or ad exposure statistics. The advertisement itself, often a graphical file, is preferably not transmitted with the message, but is typically stored at the message server or other location remote from the end user recipient. Preferably, the message server maintains and refers to records on each end user recipient to allow for selective enablement of background reference insertion and overwriting based upon end user preferences. According to various “non-web” example embodiments, the message server transmits an SMTP, POP3 or NNTP message with an HTML portion for a respective HTML-compatible client. In other “web-based” example embodiments, the message server transmits the entire message in HTML to be used as a stand-alone web page or as a portion of a larger page employing frames or tables.
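
A minimal sketch of the background-reference insertion, assuming the plain-text-to-HTML conversion path the abstract mentions (the names and tiling markup below are illustrative; the patent covers several embodiments):

```python
from email.mime.text import MIMEText
from html import escape

def insert_background_ad(plain_body: str, ad_url: str) -> MIMEText:
    """Wrap a plain-text message in HTML whose background references an
    ad image hosted remotely; the image itself is not attached, matching
    the remote-storage approach described above. How `ad_url` is chosen
    (demographics, exposure statistics) is out of scope here."""
    html = (f'<html><body background="{escape(ad_url, quote=True)}">'
            f"<pre>{escape(plain_body)}</pre></body></html>")
    return MIMEText(html, "html")

msg = insert_background_ad("Meeting moved to 3pm.",
                           "https://ads.example.com/tile42.gif")
print(msg.as_string()[:120])
```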

Patent
27 Apr 2000
TL;DR: In this article, a search engine manages the indexing of web page contents and accepts user selection criteria to find and report hits that meet the search criteria, the search engine has an associated crawler function wherein display images of the web pages are rendered and stored as snapshots, preferably when the pages are indexed.
Abstract: A search engine manages the indexing of web page contents and accepts user selection criteria to find and report hits that meet the search criteria. The inventive search engine has an associated crawler function wherein display images of the web pages are rendered and stored as snapshots, preferably when the pages are indexed. The search engine reports search results by composing an html page with links to the corresponding page hits and containing snapshot reduced size graphic images showing the web pages as they appeared when fetched and stored as snapshots.

Patent
17 Mar 2000
TL;DR: In this paper, a search engine system assists users in locating web pages from which user-specified products can be purchased, based on a set of rules, according to the likelihood of including an online product offering.
Abstract: A search engine system assists users in locating web pages from which user-specified products can be purchased. Web pages located by a crawler program are scored, based on a set of rules, according to likelihood of including an online product offering. A query server accesses an index of the scored web pages to locate pages that are both responsive to a user's search query and likely to include a product offering. In one embodiment, the responsive web pages are listed on a composite search results page together with products that satisfy the query.
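
As a hedged illustration of rule-based scoring (these rules and weights are invented for this note, not taken from the patent), pages can be scored by surface cues that suggest a purchasable product:

```python
import re

# Assumed scoring rules: (pattern, weight). The patent's actual rule
# set and weighting are not given in the abstract.
RULES = [
    (r"add to cart|buy now", 3),
    (r"\$\s?\d+(?:\.\d{2})?", 2),   # a price appears on the page
    (r"in stock|ships in", 1),
]

def product_offer_score(page_text: str) -> int:
    """Score a page by likelihood of containing an online product offering."""
    text = page_text.lower()
    return sum(weight for pattern, weight in RULES if re.search(pattern, text))

print(product_offer_score("Blue Widget - $19.99 - Add to cart"))  # 5
```

The query server would then intersect this score with ordinary relevance ranking, listing only pages that are both responsive and likely to sell the product.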

Patent
Shawn Domenic Loveland1
11 Feb 2000
TL;DR: In this article, an electronic personal assistant (ePA) is defined as a set of applications providing dual interfaces for rendering services and data based upon the manner in which a user accesses the data.
Abstract: A system enables communication between server resources and a wide spectrum of end-terminals to enable access to the resources of both converged and non-converged networks via voice and/or electronically generated commands. An electronic personal assistant (ePA) incorporates generalizing/abstracting communications channels, data and resources provided through a converged computer/telephony system interface such that the data and resources are readily accessed by a variety of interface formats including a voice interface or data interface. A set of applications provides dual interfaces for rendering services and data based upon the manner in which a user accesses the data. An electronic personal assistant in accordance with an embodiment of the invention provides voice/data access to web pages, email, file shares, etc. A voice-based resource server authenticates a user by receiving vocal responses to one or more requests variably selected and issued by a speaker recognition-based authentication facility. Thereafter an application proxy is created.

ReportDOI
30 Jul 2000
TL;DR: This work presents SHOE, a web-based knowledge representation language that supports multiple versions of ontologies, in the terms of a logic that separates data from ontologies and allows ontologies to provide different perspectives on the data.
Abstract: We discuss the problems associated with managing ontologies in distributed environments such as the Web. The Web poses unique problems for the use of ontologies because of the rapid evolution and autonomy of web sites. We present SHOE, a web-based knowledge representation language that supports multiple versions of ontologies. We describe SHOE in the terms of a logic that separates data from ontologies and allows ontologies to provide different perspectives on the data. We then discuss the features of SHOE that address ontology versioning, the effects of ontology revision on SHOE web pages, and methods for implementing ontology integration using SHOE’s extension and version mechanisms.

Journal ArticleDOI
01 Jun 2000
TL;DR: Using empirical models and a novel analytic metric of "up-to-dateness", the rate at which Web search engines must re-index the Web to remain current is estimated.
Abstract: Recent experiments and analysis suggest that there are about 800 million publicly-indexable Web pages. However, unlike books in a traditional library, Web pages continue to change even after they are initially published by their authors and indexed by search engines. This paper describes preliminary data on and statistical analysis of the frequency and nature of Web page modifications. Using empirical models and a novel analytic metric of "up-to-dateness", we estimate the rate at which Web search engines must re-index the Web to remain current.
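
The paper's "up-to-dateness" metric is not reproduced here; as an illustration of the kind of analysis involved (a standard form, assumed rather than quoted), suppose a page changes as a Poisson process with rate \lambda and is re-indexed every T time units. The expected fraction of time its index entry is current is then:

```latex
F(\lambda, T) \;=\; \frac{1}{T}\int_{0}^{T} e^{-\lambda t}\,dt
             \;=\; \frac{1 - e^{-\lambda T}}{\lambda T}
```

F approaches 1 as T goes to 0 and decays like 1/(\lambda T) when re-indexing is infrequent, which is why the required re-index rate must track the observed modification frequency.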

Proceedings ArticleDOI
01 Apr 2000
TL;DR: This work designed and implemented new Web browsing facilities to support effective navigation on Personal Digital Assistants (PDAs) with limited capabilities: low bandwidth, small display, and slow CPU.
Abstract: We have designed and implemented new Web browsing facilities to support effective navigation on Personal Digital Assistants (PDAs) with limited capabilities: low bandwidth, small display, and slow CPU. The implementation supports wireless browsing from 3Com's Palm Pilot. An HTTP proxy fetches web pages on the client's behalf and dynamically generates summary views to be transmitted to the client. These summaries represent both the link structure and contents of a set of web pages, using information about link importance. We discuss the architecture, user interface facilities, and the results of comparative performance evaluations. We measured a 45% gain in browsing speed, and a 42% reduction in required pen movements.
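
A greatly simplified sketch of what such a proxy-side summary pass could extract (an assumption-laden stand-in; the paper's summaries also rank links by importance), using only the standard library:

```python
from html.parser import HTMLParser

class SummaryView(HTMLParser):
    """Collect a page's title and anchors: the raw material for the kind
    of link-and-content summary a proxy could transmit to a PDA."""
    def __init__(self):
        super().__init__()
        self.title, self.links, self._in, self._href = "", [], None, None
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in = "title"
        elif tag == "a":
            self._in, self._href = "a", dict(attrs).get("href", "")
    def handle_data(self, data):
        if self._in == "title":
            self.title += data
        elif self._in == "a":
            self.links.append((data.strip(), self._href))
    def handle_endtag(self, tag):
        self._in = None

p = SummaryView()
p.feed('<title>News</title><a href="/a">Top story</a><a href="/b">Weather</a>')
print(p.title, p.links)   # News [('Top story', '/a'), ('Weather', '/b')]
```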

Proceedings ArticleDOI
01 Jul 2000
TL;DR: Empirical tests of whether topical locality mirrors spatial locality of pages on the Web find that the likelihood of linked pages having similar textual content is high and that the similarity of sibling pages increases when the links from the parent are close together, demonstrating the foundations necessary for the success of many web systems.
Abstract: Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable World-Wide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content is high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and anchor text may be a useful discriminator among unseen child pages. These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.
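
The measurements behind such a study reduce to comparing term vectors of page pairs. A plain term-frequency cosine (the paper's exact weighting may differ) is enough to convey the method:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Term-frequency cosine similarity between two page texts, the kind
    of measure used to compare linked, sibling, and random page pairs."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Linked pages on the same topic should score higher than random pairs.
print(cosine_similarity("web search engines index pages",
                        "search engines crawl and index web pages"))
```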

Patent
05 Oct 2000
TL;DR: In this article, an Internet (280) radio for portable applications and uses such as in an automobile (184) is described, which allows access to a host of audio, visual and other information.
Abstract: An Internet (280) radio for portable applications and uses such as in an automobile (184). The Internet (280) radio allows access to a host of audio, visual and other information. Normal radio function is provided along with programmable content and channel selection (162), as well as automatic content and channel updating by location and style. Internet access is also provided. Direct or targeted advertising, as well as electronic commerce is supported. Connection to the Internet (280) is through wireless communications (210). Programmability is achieved off-line via a web page and remote computer (206). Customized information is also communicated to the radio such as stock quotes, travel information, advertising, and e-mail. Onboard global positioning (110) allows for channel (162) updating by location, traffic information, geographic advertising and available similar content.

Proceedings ArticleDOI
01 Aug 2000
TL;DR: A new methodology for visualizing navigation patterns on a Web site that clusters users according to the order in which they request Web pages, by learning a mixture of first-order Markov models with the Expectation-Maximization algorithm.
Abstract: We present a new methodology for visualizing navigation patterns on a Web site. In our approach, we first partition site users into clusters such that only users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model based (as opposed to distance based) and partitions users according to the order in which they request Web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. Our algorithm scales linearly with both number of users and number of clusters, and our implementation easily handles millions of users and thousands of clusters in memory. In the paper, we describe the details of our technology and a tool based on it called WebCANVAS. We illustrate the use of our technology on user-traffic data from msnbc.com.
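
A compact, self-contained version of the clustering step, EM over a mixture of first-order Markov chains, can be written directly (this sketch omits the paper's smoothing and scaling choices; the constants are assumptions):

```python
import random

def em_markov_mixture(sessions, n_clusters, n_states, iters=50, seed=0):
    """EM for a mixture of first-order Markov chains over page categories.

    `sessions` are lists of integer page-state ids, e.g. [0, 3, 3, 1].
    Returns (mixture weights, initial distributions, transition matrices),
    one entry per cluster.
    """
    rng = random.Random(seed)

    def norm(v):
        s = sum(v)
        return [x / s for x in v]

    # Random, normalized initialization of all parameters.
    pi = norm([rng.random() + 0.5 for _ in range(n_clusters)])
    init = [norm([rng.random() + 0.5 for _ in range(n_states)])
            for _ in range(n_clusters)]
    trans = [[norm([rng.random() + 0.5 for _ in range(n_states)])
              for _ in range(n_states)] for _ in range(n_clusters)]

    for _ in range(iters):
        # E-step: posterior responsibility of each cluster for each session.
        resp = []
        for s in sessions:
            lik = []
            for k in range(n_clusters):
                p = pi[k] * init[k][s[0]]
                for a, b in zip(s, s[1:]):
                    p *= trans[k][a][b]
                lik.append(p + 1e-300)   # guard against underflow to zero
            resp.append(norm(lik))
        # M-step: re-estimate weights and per-cluster Markov parameters.
        pi = norm([sum(r[k] for r in resp) for k in range(n_clusters)])
        for k in range(n_clusters):
            ic = [1e-6] * n_states
            tc = [[1e-6] * n_states for _ in range(n_states)]
            for r, s in zip(resp, sessions):
                ic[s[0]] += r[k]
                for a, b in zip(s, s[1:]):
                    tc[a][b] += r[k]
            init[k] = norm(ic)
            trans[k] = [norm(row) for row in tc]
    return pi, init, trans
```

Each user is then assigned to the cluster with the highest responsibility, and the per-cluster paths are what a tool like WebCANVAS visualizes.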

Patent
06 Jul 2000
TL;DR: In this paper, a visual web site analysis program, implemented as a collection of software components, provides a variety of features for facilitating the analysis, management and load-testing of Web sites.
Abstract: A visual Web site analysis program, implemented as a collection of software components, provides a variety of features for facilitating the analysis, management and load-testing of Web sites. A mapping component scans a Web site over a network connection and builds a site map which graphically depicts the URLs and links of the site. Site maps are generated using a unique layout and display methodology which allows the user to visualize the overall architecture of the Web site. Various map navigation and URL filtering features are provided to facilitate the task of identifying and repairing common Web site problems, such as links to missing URLs. A dynamic page scan feature enables the user to include dynamically-generated Web pages within the site map by capturing the output of a standard Web browser when a form is submitted by the user, and then automatically resubmitting this output during subsequent mappings of the site. An Action Tracker module detects user activity and behavioral data (link activity levels, common site entry and exit points, etc.) from server log files and then superimposes such data onto the site map. A Load Wizard module uses this activity data to generate testing scenarios for load testing the Web site.

Patent
18 Apr 2000
TL;DR: In this paper, a system for retrieving multimedia information is provided using a computer coupled to a computer-based network, such as the Internet, and particularly the World Wide Web (WWW), the system includes a web browser, a graphic user interface enabled through the web browser and an agent server for producing, training, and evolving first agents and second agents.
Abstract: A system for retrieving multimedia information is provided using a computer coupled to a computer-based network, such as the Internet, and particularly the World Wide Web (WWW). The system includes a web browser, a graphic user interface enabled through the web browser to allow a user to input a query representing the information the user wishes to retrieve, and an agent server for producing, training, and evolving first agents and second agents. Each of the first agents retrieves documents (Web pages) from the network at a different first network address and at other addresses linked from the document at the first network address. Each of the second agents executes a search on different search engines on the network in accordance with the query to retrieve documents at network addresses provided by the search engine. The system includes a natural language processor which determines the subject categories and important terms of the query, and of the text of each agent-retrieved document. The agent server generates and trains an artificial neural network in accordance with the natural language processed query, and embeds the trained artificial neural network in each of the first and second agents. During the search, the first and second agents process through their artificial neural network the subject categories and important terms of each document they retrieve to determine a retrieval value for the document. The graphic user interface displays to the user the addresses of the retrieved documents which are above a threshold retrieval value. The user manually, or the agent server automatically, selects which of the retrieved documents are relevant. Periodically, the artificial neural network of the first and second agents is expanded and retrained by the agent server in accordance with the selected relevant documents to improve their ability to retrieve documents which may be relevant to the query. Further, the agent server can evolve an artificial neural network based on the current artificial neural network, the retrieved documents, and their selected relevancy, by iteratively producing, training, and testing several generations of neural networks to produce an evolved agent. The artificial neural network of the evolved agent then replaces the current artificial neural network used by the agents to search the Internet. One or more concurrent searches of the Internet may be provided.

Journal ArticleDOI
TL;DR: It is suggested that Web-site designers consider the genres that are appropriate for their situation and attempt to reproduce or adapt familiar genres.
Abstract: The World Wide Web is growing quickly and being applied to many new types of communications. As a basis for studying organizational communications, Yates and Orlikowski (1992; Orlikowski & Yates, 1994) proposed using genres. They defined genres as "typified communicative actions characterized by similar substance and form and taken in response to recurrent situations" (Yates & Orlikowski, 1992, p. 299). They further suggested that communications in a new medium would show both reproduction and adaptation of existing communicative genres as well as the emergence of new genres. We studied these phenomena on the World Wide Web by examining 1000 randomly selected Web pages and categorizing the type of genre represented. Although many pages recreated genres familiar from traditional media, we also saw examples of genres being adapted to take advantage of the linking and interactivity of the new medium and novel genres emerging to fit the unique communicative needs of the audience. We suggest that Web-site designers consider the genres that are appropriate for their situation and attempt to reproduce or adapt familiar genres.

Journal ArticleDOI
TL;DR: Esrock et al. as mentioned in this paper reported on two sets of data that were collected to lay the groundwork for developing an empirically based typology of corporate World Wide Web sites and found that more than 85% of the sample had substantial content that addressed two or more publics.

Journal Article
Steve Lawrence1
TL;DR: Next-generation search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts.
Abstract: Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Next-generation search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity.