Showing papers in "IEEE Internet Computing in 1999"


Journal Article•DOI•
TL;DR: The authors review the state of the art in load balancing techniques on distributed Web-server systems, and analyze the efficiencies and limitations of the various approaches.
Abstract: Popular Web sites cannot rely on a single powerful server nor on independent mirrored-servers to support the ever-increasing request load. Distributed Web server architectures that transparently schedule client requests offer a way to meet dynamic scalability and availability requirements. The authors review the state of the art in load balancing techniques on distributed Web-server systems, and analyze the efficiencies and limitations of the various approaches.
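One of the policy families such a survey covers is connection-based dispatching. As a minimal sketch (class and server names are hypothetical, not from the paper), a least-connections dispatcher routes each client request to the back-end server with the fewest active connections:

```python
import random

class LeastConnectionsDispatcher:
    """Toy request dispatcher: route each request to the back-end
    server with the fewest active connections, breaking ties randomly.
    Illustrative only; real distributed Web-server systems combine
    DNS-based, dispatcher-based, and server-based scheduling."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def assign(self, request_id):
        low = min(self.active.values())
        candidates = [s for s, n in self.active.items() if n == low]
        server = random.choice(candidates)
        self.active[server] += 1
        return server

    def finish(self, server):
        self.active[server] -= 1

d = LeastConnectionsDispatcher(["web1", "web2"])
s1 = d.assign("r1")
s2 = d.assign("r2")   # the second request goes to the other server
```

With both servers idle, the two requests necessarily land on different servers; this is the dynamic-scheduling behavior that independent mirrored servers cannot provide.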

717 citations


Journal Article•DOI•
TL;DR: SLP provides for fully decentralized operation and scales from a small, unadministered network up to an enterprise network and has also been designed to support security extensibility, browsing operations, and operation over IPv6.
Abstract: As long as configuration remains difficult, network administration will be expensive, tedious, and troublesome and users-especially mobile users-will be unable to take full advantage of network services. SLP is an IETF Proposed Standard for enabling network-based service discovery and automatic configuration of clients. It provides for fully decentralized operation and scales from a small, unadministered network up to an enterprise network. The essential function of SLP is service discovery, but it has also been designed to support security extensibility, browsing operations, and operation over IPv6. The article describes SLP's operation.

520 citations


Journal Article•DOI•
TL;DR: The authors give an overview of Manet technology and current IETF efforts toward producing routing and interface definition standards that support it within the IP suite.
Abstract: Internet-based mobile ad hoc networking is an emerging technology that supports self-organizing, mobile networking infrastructures. The technology enables an autonomous system of mobile nodes, which can operate in isolation or be connected to the greater Internet. Mobile ad hoc networks (Manets) are designed to operate in widely varying environments, from forward-deployed military Manets with hundreds of nodes per mobile domain to applications of low-power sensor networks and other embedded systems. Before Manet technology can be easily deployed, however, improvements must be made in such areas as high-capacity wireless technologies, address and location management, interoperability and security. The authors give an overview of Manet technology and current IETF efforts toward producing routing and interface definition standards that support it within the IP suite.

259 citations


Journal Article•DOI•
TL;DR: The authors identify the main pitfalls awaiting the agent system developer and recommend ways to avoid or rectify them where possible.
Abstract: While the theoretical and experimental foundations of agent-based systems are becoming increasingly well understood, comparatively little effort has been devoted to understanding the everyday reality of carrying out an agent-based development project. As a result, agent system developers needlessly repeat the same mistakes. At best, this wastes resources; at worst, projects fail. The authors identify the main pitfalls awaiting the agent system developer and recommend ways to avoid or rectify them where possible.

217 citations


Journal Article•DOI•
TL;DR: The authors summarize WebComposition, a model for Web application development, then introduce the WebComposition Markup Language (WCML), an XML-based language that implements the model and embodies object-oriented principles such as modularity, abstraction, and encapsulation.
Abstract: Most Web applications are still developed ad hoc. One reason is the gap between established software design concepts and the low-level Web implementation model. We summarize work on WebComposition, a model for Web application development, then introduce the WebComposition Markup Language, an XML-based language that implements the model. WCML embodies object-oriented principles such as modularity, abstraction and encapsulation.

162 citations


Journal Article•DOI•
TL;DR: In this article, the authors discuss survivability: the capability of large-scale, distributed systems operating in unbounded network environments to deliver essential services and maintain essential properties in the face of attacks, failures, and accidents.
Abstract: Society is increasingly dependent upon large-scale, distributed systems that operate in unbounded network environments. Survivability helps ensure that such systems deliver essential services and maintain essential properties in the face of attacks, failures, and accidents.

158 citations


Journal Article•DOI•
TL;DR: The article focuses on the use of software agents to participate in and manage Internet-based auctions, which enable the exchange of goods much as stock exchanges manage the buying and selling of securities.
Abstract: Auctions on the Internet can involve not only consumers, but also businesses. They can form dynamically and enable the exchange of goods much as stock exchanges manage the buying and selling of securities. But because auctions have a wide scope and a short lifetime, the opportunistic behavior needed for successful interaction requires agents to both participate in and manage auctions. The article focuses on the use of software agents in such Internet based auctions.

146 citations


Journal Article•DOI•
R. Droms1•
TL;DR: The author describes the IETF's Dynamic Host Configuration working group's work on DHCP in detail, outlines the management of a DHCP service, and discusses new DHCP features, including the version being developed for IPv6.
Abstract: The TCP/IP suite has various protocols that must be carefully configured so that networked devices operate efficiently. Setting values by hand is time-consuming and error-prone; moreover, several trends are adding to the need for automated parameter configuration and administration. The Dynamic Host Configuration Protocol, accepted as a proposed standard by the Internet Engineering Task Force, offers a way to automatically configure network devices that use TCP/IP. These devices use DHCP to locate and contact servers, which return the appropriate configuration information as data. The DHCP servers act as agents for network administrators and automate the process of network address allocation and parameter configuration. Addresses can be assigned and individual addresses can be reassigned to new DHCP clients without explicit intervention by a network administrator. The IETF's Dynamic Host Configuration (DHC) working group is now at work adding new features to DHCP. The author describes the group's work on DHCP in detail, outlines the management of a DHCP service, and discusses new DHCP features, including the version being developed for IPv6.
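The address-allocation behavior described above can be sketched in a few lines. This toy model (names and pool contents are illustrative, not from the article) shows the essential lifecycle: a server leases addresses from a pool and reclaims them when leases expire, so addresses can be reassigned without administrator intervention:

```python
class ToyDhcpPool:
    """Toy model of DHCP dynamic address allocation. Illustrative
    sketch only; real DHCP adds the DISCOVER/OFFER/REQUEST/ACK
    exchange, relay agents, and many configuration options."""

    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)
        self.leases = {}              # client_id -> (address, expiry)
        self.lease_seconds = lease_seconds

    def request(self, client_id, now):
        if client_id in self.leases:              # renewal: keep address
            addr, _ = self.leases[client_id]
        else:
            if not self.free:
                return None                       # pool exhausted
            addr = self.free.pop(0)
        self.leases[client_id] = (addr, now + self.lease_seconds)
        return addr

    def expire(self, now):
        for cid, (addr, expiry) in list(self.leases.items()):
            if expiry <= now:                     # lease lapsed
                del self.leases[cid]
                self.free.append(addr)            # reclaim for reuse

pool = ToyDhcpPool(["10.0.0.10", "10.0.0.11"], lease_seconds=3600)
a = pool.request("client-a", now=0)
b = pool.request("client-b", now=0)
pool.expire(now=4000)                  # both leases lapse
c = pool.request("client-c", now=4000) # reuses a reclaimed address
```

The lease timer is what lets a finite address pool serve a changing client population, the key operational win the article describes.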

129 citations


Journal Article•DOI•
TL;DR: To meet the scalability and availability requirements of mass-market deployment of carrier-grade telephony services, the authors propose an architecture based on the decomposition of Internet gateway functionality.
Abstract: Network gateways are used to set up calls between the public switched telephone network (PSTN) and the Internet, but existing gateways support a relatively small number of lines. To meet the scalability and availability requirements of mass-market deployment of carrier-grade telephony services, the authors propose an architecture based on the decomposition of Internet gateway functionality. The media transformation function of today's H.323 gateways is separated from the gateway control function, and intelligence is centralized in a call agent. The Media Gateway Control Protocol is introduced; MGCP is an Internet draft currently under discussion by the IETF for standardizing the interface between a call agent and the media transformation gateway.

118 citations


Journal Article•DOI•
TL;DR: Internet service providers offering dial-up access and purveyors of enterprise networks supporting telecommuters face difficult challenges as an ever-increasing number of residential dial-up subscribers demand available modem (or ISDN) ports or threaten to take their business elsewhere.
Abstract: Internet service providers (ISPs) offering dial-up access and purveyors of enterprise networks supporting telecommuters face some difficult challenges. Ever-increasing residential dialup subscribers demand available modem (or ISDN) ports, or threaten to take their business elsewhere. To meet this demand, ISPs (dial providers) are deploying a large number of-complex, port-dense network access servers (NAS) to handle thousands of individual dial-up connections. At the same time, the miniaturization of stationary office essentials, such as the laptop computer and cellular telephone, has coupled with the need for maximum customer face time to create a workforce in perpetual motion. These "road warriors" require secure and reliable access to email and Web resources from hotels, airports, and virtual offices around the world. But dial providers must do more than simply offer an available modem port at the other end of a telephone call. They must protect against theft-of-service attacks by unscrupulous individuals with excess free time; they must verify subscribers' levels of access authorization; and for cost recovery, billing, and resource planning purposes, they may need to meter the connection time to the network. Furthermore, to provide maximum coverage to a growing roaming and mobile subscriber base, they may choose to pool their NAS resources while retaining control over their subscribers' access, usage, and billing information. All these services require coordination between the various administrative systems supported by the dial providers in partnership with each other.

110 citations


Journal Article•DOI•
TL;DR: Web cache replacement policy affects page load time; two new policies implemented in the Squid cache server show marked improvement over the standard mechanism.
Abstract: Web cache replacement policy choice affects network bandwidth demand and object hit rate, which affect page load time. Two new policies implemented in the Squid cache server show marked improvement over the standard mechanism.
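Squid's improved policies are refinements of the Greedy-Dual-Size family, which weighs retrieval cost against object size. The sketch below is an illustrative baseline Greedy-Dual-Size evictor, not the paper's exact algorithms: each object gets a value H = L + cost/size, and the aging factor L rises to the evicted victim's H so long-idle objects decay:

```python
class GreedyDualSizeCache:
    """Minimal Greedy-Dual-Size eviction sketch (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0              # inflation/aging factor
        self.items = {}           # key -> (size, H value)

    def access(self, key, size, cost=1.0):
        if key in self.items:
            self.items[key] = (size, self.L + cost / size)  # refresh H
            return True                                     # hit
        # Evict lowest-H objects until the new object fits.
        while self.used + size > self.capacity and self.items:
            victim = min(self.items, key=lambda k: self.items[k][1])
            vsize, v_h = self.items.pop(victim)
            self.used -= vsize
            self.L = v_h                                    # age the cache
        self.items[key] = (size, self.L + cost / size)
        self.used += size
        return False                                        # miss

c = GreedyDualSizeCache(capacity=100)
c.access("big", 80)       # H = 1/80: large objects get low value
c.access("small", 30)     # needs room: evicts "big", the lowest-H object
```

Because H is inversely proportional to size, the policy prefers keeping many small objects over one large one, which raises object hit rate, the metric the paper targets alongside bandwidth demand.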


Journal Article•DOI•
TL;DR: The authors have developed an algorithm for dynamically altering the organization of pages at sites where the main design objective is to give users fast access to requested data.
Abstract: Awkward arrangement of documents in an HTML tree can discourage users from staying at a Web site. The authors have developed an algorithm for dynamically altering the organization of pages at sites where the main design objective is to give users fast access to requested data. The algorithm reads information from the HTTP log file and computes the relative popularity of pages within the site. Based on popularity (defined as a relationship between number of accesses, time spent, and location of the page), the hierarchical relationships between pages are rearranged to maximize accessibility for popular pages.
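The three inputs the paper names (number of accesses, time spent, and page location) can be combined into a score for ranking pages. The weighting below is an assumption for illustration, not the paper's formula, and the page statistics are hypothetical:

```python
def popularity(accesses, avg_time_spent, depth):
    """Illustrative popularity score: pages that are visited often,
    read for a long time, and buried deep in the HTML tree score
    highest, making them candidates for promotion toward the root.
    The exact weighting is an assumption, not the paper's definition."""
    return accesses * avg_time_spent * (1 + depth)

# Hypothetical per-page statistics aggregated from an HTTP log file.
pages = {
    "/":            dict(accesses=900, avg_time_spent=5,  depth=0),
    "/docs/faq":    dict(accesses=400, avg_time_spent=60, depth=2),
    "/misc/legacy": dict(accesses=10,  avg_time_spent=8,  depth=2),
}

ranked = sorted(pages, key=lambda p: popularity(**pages[p]), reverse=True)
# A site-reorganization pass would then move high-ranking deep pages
# (here "/docs/faq") closer to the root for faster access.
```

Including depth in the score captures the paper's point that a popular page far from the root costs users the most navigation effort.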

Journal Article•DOI•
TL;DR: There is no standard way for service requests to trigger a workflow process and monitor it across platforms and between organizations, and Web protocols provide no inherent support for automated change notification, handoff of control, or initiation of human- and computer-executed activities.
Abstract: Many organizations are beginning to discover what workflow vendors already know-namely, that the real value of the Web lies not just in its documents and resources, but also in the activities surrounding them. Collaborative work involves not only handoff and routing of data between humans, but the coordination of activities among them and with automated agents as well. Workflow engines typically ensure that the information ends up on the right desktop along with the tools to accomplish a slated task. It is difficult to synchronize work and activity tracking within a technically diverse organization. Tools and formats typically differ among workgroups, as do skill levels and understanding among individual participants in a process. Browser-based user interfaces offer a mechanism to easily access distributed information and hand off documents and data over the Web, but at the expense of being able to effectively manage and track work activities. Web protocols provide no inherent support for automated change notification, handoff of control, or initiation of human- and computer-executed activities. In essence, there is no standard way for service requests to trigger a workflow process and monitor it across platforms and between organizations.

Journal Article•DOI•
TL;DR: The paper discusses the W3C Document Object Model (DOM) Level 1, which defines a standardized, language- and platform-neutral interface that serves as a foundation for developing applications that use Web documents in an object-oriented paradigm.
Abstract: Developers manipulating Web documents to provide user interaction need a standard interface to those documents. The paper discusses the W3C Document Object Model Level 1 which defines the standardized interface. The DOM Level 1 defines a language- and platform-neutral API for accessing, navigating and manipulating HTML and XML documents. As such, it is a foundation for the development of applications that use Web documents in an object-oriented paradigm.
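Because the DOM API is language-neutral, the same access/navigate/manipulate pattern appears across implementations. Python's standard-library `xml.dom.minidom`, a lightweight DOM implementation, can illustrate it:

```python
from xml.dom.minidom import parseString

# A toy XML document (contents are illustrative).
doc = parseString("<list><item>alpha</item><item>beta</item></list>")

items = doc.getElementsByTagName("item")   # navigate the tree
first_text = items[0].firstChild.data      # access node content

new_item = doc.createElement("item")       # manipulate: build a node...
new_item.appendChild(doc.createTextNode("gamma"))
doc.documentElement.appendChild(new_item)  # ...and attach it

count = len(doc.getElementsByTagName("item"))  # the tree now has 3 items
```

The same method names (`getElementsByTagName`, `createElement`, `appendChild`) are defined by DOM Level 1 and appear essentially unchanged in browser JavaScript, which is the portability the article emphasizes.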

Journal Article•DOI•
TL;DR: The AARIA project provides a demonstration of how the manufacturing complex can move toward mass customization by using the Internet as a natural platform for managing distributed operations and by using autonomous agents as the tools for efficiently reconfiguring available productive resources.
Abstract: Major market trends are driving the manufacturing complex from mass production, where the manufacturer tells the customer what to buy, to mass customization, where the customer tells the manufacturing complex what to make. The Internet supports this transformation with global communication between customers and manufacturers. However, the physical realities of manufacturing impose requirements for more than just communication. In some sense, manufacturing enterprises must actually exist over the Internet as an efficiently managed distributed enterprise. Software agents offer a means to achieve this link and thus a reliable global infrastructure for mass customization. The AARIA project provides a demonstration of how the manufacturing complex can move toward mass customization by using the Internet as a natural platform for managing distributed operations and by using autonomous agents as the tools for efficiently reconfiguring available productive resources. We begin by looking at the unique requirements manufacturing imposes on the infrastructure for virtual enterprises and describing the AARIA project components for meeting them. We then describe our scheduling technologies for efficient distributed resource management.

Journal Article•DOI•
H. Frazier1, H. Johnson•
TL;DR: The paper discusses Gigabit Ethernet, which extends the operating speed of the world's most widely deployed LAN to 1 billion bits per second while maintaining compatibility with the installed base of 10-Mbps and 100-Mbps equipment, in terms of the ISO seven-layer reference model.
Abstract: Gigabit Ethernet is the latest in the family of high-speed Ethernet technologies. It extends the operating speed of the world's most deployed LAN to 1 billion bits per second while maintaining compatibility with the installed base of 10 Mbps and 100 Mbps Ethernet equipment. The paper discusses the Gigabit Ethernet in terms of the ISO seven layer reference model.

Journal Article•
TL;DR: In this paper, the authors discuss the use of a personal ontology: an organization scheme based on a model of an office and its information, coupled with the proper tools for using it.
Abstract: Corporations can suffer from too much information, which is often inaccessible, inconsistent, and incomprehensible. The corporate solution entails knowledge management techniques and data warehouses. The paper discusses a promising alternative, the personal ontology: an organization scheme based on a model of an office and its information, coupled with the proper tools for using it.

Journal Article•DOI•
TL;DR: The article surveys Web technologies that seek to integrate aspects of object technology with the basic infrastructure of the Web, in the context of planning for general distributed computing environments.
Abstract: The World Wide Web is an increasingly important factor in planning for general distributed computing environments. This article surveys Web technologies that look to integrate aspects of object technology with the basic infrastructure of the Web.

Journal Article•DOI•
T. Narten1•
TL;DR: This paper considers how the neighbor discovery protocols provide address-resolution services and allow hosts to find and keep track of routers, determine when a neighbor becomes unreachable, and switch dynamically to backup routers should the ones they are using fail.
Abstract: The next-generation Internet Protocol, IPv6, includes autoconfiguration facilities that allow IPv6 hosts to plug into the network and start communicating with no special configuration required. These facilities address the requirements of hosts connecting to isolated standalone networks (such as home networks). The paper considers how the neighbor discovery protocols provide address-resolution services and allow hosts to find and keep track of routers, determine when a neighbor becomes unreachable, and switch dynamically to backup routers should the ones they are using fail.

Journal Article•DOI•
TL;DR: The author explains the technical rationale for design decisions that underlay the Mbone tools, describes the evolution of this work from early prototypes into Internet standards, and outlines the open challenges that remain and must be overcome to realize a ubiquitous multicast infrastructure.
Abstract: This survey describes the roots of IP Multicast, the evolution of the Internet Multicast Backbone, or Mbone, and the technologies that have risen around the Mbone to support large-scale Internet-based multimedia conferencing. The author explains the technical rationale for design decisions that underlay the Mbone tools, describes the evolution of this work from early prototypes into Internet standards, and outlines the open challenges that remain and must be overcome to realize a ubiquitous multicast infrastructure.

Journal Article•DOI•
TL;DR: Empirical and analytic characterizations of observed user session-level behavior and interactive behavior of the Multimedia Asynchronous Networked Individualized Courseware, or MANIC, which streams synchronized CM and HTML documents to remote users are presented.
Abstract: Considerable research has gone into investigating networking and operating system mechanisms to support the transfer and playout of stored continuous media, but there is very little information available about how users actually interact with such systems. Developing a user workload characterization can help in the design and evaluation of efficient CM resource allocation and access mechanisms. The authors developed an interactive Web-based, multimedia, client-server application, known as the Multimedia Asynchronous Networked Individualized Courseware, or MANIC, which streams synchronized CM (currently audio) and HTML documents to remote users. This article presents empirical and analytic characterizations of observed user session-level behavior (for example, the length of individual sessions) and interactive behavior (for example, the time between starting, stopping, and pausing the audio within a session). The data come from a full-semester senior-level course given by the University of Massachusetts to more than 200 students who used MANIC to listen to the stored audio lectures and to view the lecture notes.

Journal Article•DOI•
TL;DR: The authors propose a CGI (common gateway interface) solution for trusted user/developers and the Call Processing Language (CPL), a simple, robust, safe call processing language, for untrusted user/developers.
Abstract: The Internet offers an opportunity to enhance traditional telephony services, such as call forwarding, through interaction with e-mail, the Web, and directory services, as well as traditional media types. How do you effectively program these services? The authors propose a CGI (common gateway interface) solution for trusted user/developers (such as administrators) and the Call Processing Language (CPL), a simple, robust, safe call processing language, for untrusted user/developers.

Journal Article•DOI•
TL;DR: The authors discuss the current state of research on services for coordinating access to shared resources in networked multimedia sessions, which they refer to as group coordination, and present a new approach that integrates group coordination with extended multicast services.
Abstract: Current research on networked multimedia applications such as distributed interactive simulations concentrates on transport issues like multicast routing and presentation and session management, including session orchestration and quality-of-service support in media delivery. Session orchestration includes mechanisms to coordinate fair and exclusive access to shared resources whose semantics do not allow for concurrent usage, in order to prevent conflicts and inconsistencies in the shared workspace. The authors discuss the current state of research with regard to such services, which they refer to as group coordination, and present a new approach that integrates group coordination with extended multicast services.

Journal Article•DOI•
TL;DR: This work uses a logistic regression model to predict the cache worthiness of Web objects and shows how this can improve network performance using current technologies.
Abstract: Adaptive Web caching offers a way to improve network performance using current technologies. The challenge facing researchers is to determine what objects should be cached. As biostatisticians use regression models to predict the risk of disease development, the authors use a logistic regression model to predict the cache worthiness of Web objects.
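As an illustration of the technique (not the authors' model: their features and training data differ), a logistic regression over per-object features can be fit with plain gradient descent and then used to score cache worthiness:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient logistic regression. X rows are
    feature vectors; y is 1 if the object was re-referenced
    (cache-worthy), else 0. Illustrative sketch only."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical training data: [log of object size in KB, recent accesses].
X = [[1.0, 5], [4.0, 0], [0.5, 7], [3.5, 1], [2.0, 4], [4.5, 0]]
y = [1, 0, 1, 0, 1, 0]
w, b = fit_logistic(X, y)

def cache_worthiness(x):
    """Predicted probability that caching the object pays off."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

score_hot = cache_worthiness([0.8, 6])   # small, frequently accessed
score_cold = cache_worthiness([4.2, 0])  # large, rarely accessed
```

The output is a probability rather than a hard rule, so a cache can rank candidate objects and keep the highest-scoring ones, exactly the disease-risk analogy the abstract draws.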

Journal Article•DOI•
TL;DR: The paper discusses the SBD (Simulation Based Design) program which uses a CORBA-based infrastructure together with information agents to facilitate resource sharing and process automation in a virtual enterprise.
Abstract: Managing complex product development processes is challenging enough for traditional engineering organizations; it becomes more complex when developers are geographically dispersed. The paper discusses the SBD (Simulation Based Design) program which uses a CORBA-based infrastructure together with information agents to facilitate resource sharing and process automation in a virtual enterprise.

Journal Article•DOI•
C. Metz1•
TL;DR: The author surveys the history of this development from the Internet's original passenger-class-only, best-effort protocol suite, and concludes with a review of the current Internet Engineering Task Force's efforts in the Differentiated Services (DiffServ) working group.
Abstract: The concept of quality of service-that is, the network capability to provide a nondefault service to a subset of the aggregate traffic-has now entered the IP lexicon. The author surveys the history of this development from the Internet's original passenger-class-only, best-effort protocol suite. He concludes with a review of the current Internet Engineering Task Force's efforts in the Differentiated Services (DiffServ) working group.

Journal Article•DOI•
C. Metz1•
TL;DR: The author discusses the basics of using TCP for satellite transmission and describes the changes you can expect to see in the TCP protocol itself as a result of the increase in use of satellites for TCP/IP traffic.
Abstract: Communication satellites are now being used to transport TCP/IP traffic between distant locations, and to offer Internet access. Satellites have thus become the celestial link of the Internet, an "instant" infrastructure in the sky. The rapid growth of satellite communications is evolving the TCP/IP protocol suite in positive ways. In particular, enhancements to the Transmission Control Protocol (TCP) to address the challenges of satellite transmission will benefit all high-bandwidth TCP communications. TCP is the predominant unicast transport protocol used by Internet applications such as Telnet, FTP, and HTTP. The ability of TCP to maximize the link utilization of a satellite channel is being challenged by the inherent delays associated with space communications and some of TCP's own behaviors. The author discusses the basics of using TCP for satellite transmission and describes the changes you can expect to see in the TCP protocol itself as a result of the increase in use of satellites for TCP/IP traffic.
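The core arithmetic behind the satellite challenge is the bandwidth-delay product: with a geostationary round-trip time around half a second, TCP's classic 64-KB window (the maximum without the window-scaling option) caps throughput far below the channel rate. A quick worked example (the link figures are illustrative):

```python
# Throughput ceiling imposed by a fixed TCP window over a long-delay path:
#   max throughput = window size / round-trip time

rtt = 0.550                      # seconds, a typical geostationary RTT
max_window = 65_535              # bytes, largest window without scaling

throughput_cap = max_window / rtt          # bytes per second
cap_kbps = throughput_cap * 8 / 1000       # roughly 953 kbit/s ceiling

# To fill a faster channel, the window must cover the bandwidth-delay
# product; a hypothetical 10-Mbit/s transponder needs:
link_rate_bps = 10_000_000
needed_window = link_rate_bps / 8 * rtt    # bytes in flight to fill it
```

Since the needed window (hundreds of kilobytes) far exceeds 64 KB, satellite paths motivate the window-scaling, selective-acknowledgment, and congestion-control enhancements the article surveys.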

Journal Article•DOI•
TL;DR: The authors describe aspects of the evolution of telephony networks and services as they relate to the underlying changes in technology, including Internet telephony, intelligent networks, billing information systems, and electronic commerce.
Abstract: A PSTN phone can generate only a small set of signaling events and tones, and cannot receive or process signaling of any sophistication. Packet phones, on the other hand, can receive and process signaling automatically and also send signaling out of band as a separate set of IP packets. Evolution from plain old telephony services (POTS) to IP telephony therefore promises some pretty amazing new services (PANS). It will nevertheless take many years to transition to a purely packet-based environment. The authors describe aspects of the evolution of telephony networks and services as they relate to the underlying changes in technology. The following technologies are mentioned: Internet telephony; intelligent networks; billing information systems; electronic commerce.