
Showing papers on "The Internet" published in 2000


Journal ArticleDOI
16 Nov 2000-Nature
TL;DR: The p53 tumour-suppressor gene integrates numerous signals that control cell life and death, and its disruption has severe consequences, much as when a highly connected node in the Internet breaks down.
Abstract: The p53 tumour-suppressor gene integrates numerous signals that control cell life and death. As when a highly connected node in the Internet breaks down, the disruption of p53 has severe consequences.

6,605 citations


Journal ArticleDOI
TL;DR: A structural model based on the previous conceptual model of flow that embodies the components of what makes for a compelling online experience is developed and provides marketing scientists with operational definitions of key model constructs and establishes reliability and validity in a comprehensive measurement framework.
Abstract: Intuition and previous research suggest that creating a compelling online environment for Web consumers will have numerous positive consequences for commercial Web providers. Online executives note that creating a compelling online experience for cyber customers is critical to creating competitive advantage on the Internet. Yet, very little is known about the factors that make using the Web a compelling experience for its users, and about the key consumer behavior outcomes of this compelling experience. Recently, the flow construct has been proposed as important for understanding consumer behavior on the World Wide Web, and as a way of defining the nature of compelling online experience. Although widely studied over the past 20 years, quantitative modeling efforts of the flow construct have been neither systematic nor comprehensive. In large part, these efforts have been hampered by considerable confusion regarding the exact conceptual definition of flow. Lacking precise definition, it has been difficult to measure flow empirically, let alone apply the concept in practice. Following the conceptual model of flow proposed by Hoffman and Novak (1996), we conceptualize flow on the Web as a cognitive state experienced during navigation that is determined by (1) high levels of skill and control; (2) high levels of challenge and arousal; and (3) focused attention; and (4) is enhanced by interactivity and telepresence. Consumers who achieve flow on the Web are so acutely involved in the act of online navigation that thoughts and perceptions not relevant to navigation are screened out, and the consumer focuses entirely on the interaction. Concentration on the navigation experience is so intense that there is little attention left to consider anything else, and consequently, other events occurring in the consumer's surrounding physical environment lose significance. Self-consciousness disappears, the consumer's sense of time becomes distorted, and the state of mind arising as a result of achieving flow on the Web is extremely gratifying. In a quantitative modeling framework, we develop a structural model based on our previous conceptual model of flow that embodies the components of what makes for a compelling online experience. We use data collected from a large-sample, Web-based consumer survey to measure these constructs, and we fit a series of structural equation models that test related prior theory. The conceptual model is largely supported, and the improved fit offered by the revised model provides additional insights into the direct and indirect influences of flow, as well as into the relationship of flow to key consumer behavior and Web usage variables. Our formulation provides marketing scientists with operational definitions of key model constructs and establishes reliability and validity in a comprehensive measurement framework. A key insight from the paper is that the degree to which the online experience is compelling can be defined, measured, and related well to important marketing variables. Our model constructs relate in significant ways to key consumer behavior variables, including online shopping and Web use applications such as the extent to which consumers search for product information and participate in chat rooms. As such, our model may be useful both theoretically and in practice as marketers strive to decipher the secrets of commercial success in interactive online environments.

2,881 citations


Journal ArticleDOI
TL;DR: In this study, consumers recognized differences in size and reputation among Internet stores, and those differences influenced their assessments of store trustworthiness and their perception of risk, as well as their willingness to patronize the store.
Abstract: The study reported here raises some questions about the conventional wisdom that the Internet creates a “level playing field” for large and small retailers and for retailers with and without an established reputation. In our study, consumers recognized differences in size and reputation among Internet stores, and those differences influenced their assessments of store trustworthiness and their perception of risk, as well as their willingness to patronize the store. After describing our research methods and results, we draw some implications for Internet merchants.

2,751 citations


Journal ArticleDOI
TL;DR: In this article, a meta-analysis explores factors associated with higher response rates in electronic surveys reported in both published and unpublished research and concludes that response representativeness is more important than response rate in survey research.
Abstract: Response representativeness is more important than response rate in survey research. However, response rate is important if it bears on representativeness. The present meta-analysis explores factors associated with higher response rates in electronic surveys reported in both published and unpublished research. The number of contacts, personalized contacts, and precontacts are the factors most associated with higher response rates in the Web studies that are analyzed.

2,520 citations


Journal ArticleDOI
TL;DR: By offering a typology of Web survey designs, the intent of this article is to facilitate the task of evaluating and improving Web surveys.
Abstract: As we enter the twenty-first century, the Internet is having a profound effect on the survey research industry, as it is on almost every area of human enterprise. The rapid development of surveys on the World Wide Web (WWW) is leading some to argue that soon Internet (and, in particular, Web) surveys will replace traditional methods of survey data collection. Others are urging caution or even voicing skepticism about the future role Web surveys will play. Clearly, we stand at the threshold of a new era for survey research, but how this will play out is not yet clear. Whatever one's views about the likely future for survey research, the current impact of the Web on survey data collection is worthy of serious research attention. Given the rapidly growing interest in Web surveys, it is important to distinguish among different types of Web surveys. The rubric "Web survey" encompasses a wide variety of methods, with different purposes, populations, target audiences, etc. I present an overview of some of the key types of Web surveys currently being implemented and do so using the language of survey errors. In order to judge the quality of a particular survey (be it Web or any other type), one needs to do so within the context of its stated aims and the claims it makes. By offering a typology of Web survey designs, the intent of this article is to facilitate the task of evaluating and improving Web surveys. Web surveys represent a double-edged sword for the survey industry. On the one hand, the power of Web surveys is that they make survey data collection (as opposed to survey participation) available to the masses. Not only can researchers get access to undreamed of numbers of respondents at dramatically lower costs than traditional methods, but members of the general

2,170 citations


Journal ArticleDOI
TL;DR: The authors empirically analyze the characteristics of the Internet as a channel for two categories of homogeneous products-books and CDs-using a data set of over 8,500 price observations collected over a period of 15 months, comparing pricing behavior at 41 Internet and conventional retail outlets.
Abstract: There have been many claims that the Internet represents a new nearly "frictionless market." Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products-books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping, and shopping costs are included in the price. Additionally, we find that Internet retailers' price adjustments over time are up to 100 times smaller than conventional retailers' price adjustments-presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.
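The two dispersion measures contrasted in this abstract can be illustrated with a small, entirely hypothetical calculation: an unweighted spread of posted prices across retailers versus a market-share-weighted spread that discounts rarely visited stores. The store names, prices, and share figures below are invented and are not the paper's data.

```python
# Hypothetical posted prices and market-share proxies for four retailers.
prices = {"storeA": 22.95, "storeB": 19.45, "storeC": 24.10, "storeD": 20.00}
shares = {"storeA": 0.55, "storeB": 0.25, "storeC": 0.05, "storeD": 0.15}

# Unweighted dispersion: range of posted prices relative to the mean price.
vals = list(prices.values())
unweighted_spread = (max(vals) - min(vals)) / (sum(vals) / len(vals))

# Share-weighted dispersion: coefficient of variation weighted by market share,
# which discounts stores that post outlying prices but attract few buyers.
w_mean = sum(shares[s] * prices[s] for s in prices)
w_var = sum(shares[s] * (prices[s] - w_mean) ** 2 for s in prices)
weighted_cv = (w_var ** 0.5) / w_mean

print(f"unweighted range/mean: {unweighted_spread:.1%}")
print(f"share-weighted coefficient of variation: {weighted_cv:.1%}")
```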

2,109 citations


Journal ArticleDOI
TL;DR: Jenny Preece provides readers with an in-depth look at the design of effective online communities and details the enabling technologies behind some of the most successful online communities.
Abstract: From the Publisher: Learn the enabling technologies behind some of the most successful online communities. Although the Internet has grown considerably, people are still looking for more effective methods of communicating over it. This has become a hot topic among Web developers as they look for new enabling technologies. Well-respected author Jenny Preece provides readers with an in-depth look at the design of effective online communities. She evaluates these communities and then details the enabling technologies. Analysis is also included to explain what these technologies are capable of doing and what they actually should do. A companion Web site contains a forum for discussions on experiences in setting up and running online communities.

1,973 citations


Book
23 Jun 2000
TL;DR: A Guide to Modern Econometrics is a new textbook published by John Wiley and Sons that covers a wide range of topics in applied econometrics in a concise and intuitive way, with emphasis on empirical relevance and intuition.
Abstract: A Guide to Modern Econometrics is a new textbook published by John Wiley and Sons. It covers a wide range of topics in applied econometrics in a concise and intuitive way. Some distinctive features: Emphasis on empirical relevance and intuition, paying attention to the links between alternative approaches. Limited use of matrix algebra. Coverage of many modern topics from time-series, cross-section and panel data econometrics. Concisely and carefully written, so that the reader does not get lost in the details. Full length empirical illustrations are provided throughout, typically taken from the modern economics literature and using full-size data sets. Empirical illustrations taken from finance, labour economics, environmental economics, monetary economics, international economics and many more. Exercises added to all chapters, with a focus on intuition and interpretation of results. Several exercises involve the use of actual data. Data sets used for illustrations and exercises are available from the internet.

1,867 citations


Book
01 Jan 2000
TL;DR: The most up-to-date introduction to the field of computer networking, this book takes a top-down approach that starts at the application layer and works down the protocol stack; it also uses the Internet as the main example of networks, as discussed by the authors.
Abstract: From the Publisher: The most up-to-date introduction to the field of computer networking, this book's top-down approach starts at the application layer and works down the protocol stack. It also uses the Internet as the main example of networks. This all creates a book relevant to those interested in networking today. By starting at the application-layer and working down the protocol stack, this book provides a relevant introduction of important concepts. Based on the rationale that once a reader understands the applications of networks they can understand the network services needed to support these applications, this book takes a "top-down" approach that exposes readers first to a concrete application and then draws into some of the deeper issues surrounding networking. This book focuses on the Internet as opposed to addressing it as one of many computer network technologies, further motivating the study of the material. This book is designed for programmers who need to learn the fundamentals of computer networking. It also has extensive material making it of great interest to networking professionals.

1,793 citations


Journal ArticleDOI
TL;DR: A research framework based on the theory of planned behavior and the diffusion of innovations theory was used to identify the attitudinal, social and perceived behavioral control factors that would influence the adoption of Internet banking.
Abstract: A research framework based on the theory of planned behavior (Ajzen 1985) and the diffusion of innovations theory (Rogers 1983) was used to identify the attitudinal, social and perceived behavioral control factors that would influence the adoption of Internet banking. An online questionnaire was designed on the World Wide Web (WWW). Respondents participated through extensive personalized email invitations as well as postings to newsgroups and hyperlinks from selected Web sites.

1,745 citations



Journal ArticleDOI
TL;DR: This article examined audience uses of the Internet from a uses-and-gratifications perspective and found that contextual age, unwillingness to communicate, social presence, and Internet motives predict outcomes of Internet exposure, affinity and satisfaction.
Abstract: We examined audience uses of the Internet from a uses-and-gratifications perspective. We expected contextual age, unwillingness to communicate, social presence, and Internet motives to predict outcomes of Internet exposure, affinity and satisfaction. The analyses identified five motives for using the Internet and multivariate links among the antecedents and motives. The results suggested distinctions between instrumental and ritualized Internet use, as well as Internet use serving as a functional alternative to face-to-face interaction.

ReportDOI
14 Jul 2000
TL;DR: This paper presents two different experiments where one technology called Singular Value Decomposition (SVD) is explored to reduce the dimensionality of recommender system databases and suggests that SVD has the potential to meet many of the challenges of recommender systems, under certain conditions.
Abstract: We investigate the use of dimensionality reduction to improve performance for a new class of data analysis software called "recommender systems". Recommender systems apply knowledge discovery techniques to the problem of making product recommendations during a live customer interaction. These systems are achieving widespread success in E-commerce nowadays, especially with the advent of the Internet. The tremendous growth of customers and products poses three key challenges for recommender systems in the E-commerce domain. These are: producing high quality recommendations, performing many recommendations per second for millions of customers and products, and achieving high coverage in the face of data sparsity. One successful recommender system technology is collaborative filtering, which works by matching customer preferences to other customers in making recommendations. Collaborative filtering has been shown to produce high quality recommendations, but the performance degrades with the number of customers and products. New recommender system technologies are needed that can quickly produce high quality recommendations, even for very large-scale problems. This paper presents two different experiments where we have explored one technology called Singular Value Decomposition (SVD) to reduce the dimensionality of recommender system databases. Each experiment compares the quality of a recommender system using SVD with the quality of a recommender system using collaborative filtering. The first experiment compares the effectiveness of the two recommender systems at predicting consumer preferences based on a database of explicit ratings of products. The second experiment compares the effectiveness of the two recommender systems at producing Top-N lists based on a real-life customer purchase database from an E-Commerce site. Our experience suggests that SVD has the potential to meet many of the challenges of recommender systems, under certain conditions.
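A minimal sketch of the SVD-based dimensionality reduction described above, using NumPy on a toy ratings matrix; the matrix, the rank k, and the mean-filling step are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Toy user-by-item ratings matrix (0 = unrated); values are illustrative only.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4],
              [0, 1, 5, 4]], dtype=float)

# Fill unrated cells with each item's mean rating before factorization.
item_means = np.true_divide(R.sum(axis=0), (R != 0).sum(axis=0))
filled = np.where(R == 0, item_means, R)

# Truncated SVD: keep k latent dimensions and rebuild a low-rank approximation.
k = 2
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Predicted score for user 0 on item 2 (which was unrated above).
print(round(approx[0, 2], 2))
```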

Journal ArticleDOI
TL;DR: Four factors that are critical to Web site success in EC were identified: information and service quality, system use, playfulness, and system design quality.

Proceedings ArticleDOI
01 Jun 2000
TL;DR: The Representational State Transfer (REST) architectural style is introduced, developed as an abstract model of the Web architecture to guide the redesign and definition of the Hypertext Transfer Protocol and Uniform Resource Identifiers.
Abstract: The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia system. The modern Web architecture emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems. In this paper, we introduce the Representational State Transfer (REST) architectural style, developed as an abstract model of the Web architecture to guide our redesign and definition of the Hypertext Transfer Protocol and Uniform Resource Identifiers. We describe the software engineering principles guiding REST and the interaction constraints chosen to retain those principles, contrasting them to the constraints of other architectural styles. We then compare the abstract model to the currently deployed Web architecture in order to elicit mismatches between the existing protocols and the applications they are intended to support.
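As a rough illustration of the uniform-interface constraint REST describes, the sketch below manipulates a resource through generic HTTP methods on URIs; the host, resource paths, returned Location header, and use of the Python requests package are assumptions for illustration, not part of the paper.

```python
# Minimal sketch of REST's uniform interface: stateless requests that
# manipulate resources identified by URIs through generic methods.
# The host and resource paths are hypothetical; requires the `requests` package.
import requests

BASE = "http://api.example.com"

# Transfer a representation of a new resource as self-descriptive JSON.
resp = requests.post(f"{BASE}/orders", json={"item": "book", "qty": 1})
order_uri = resp.headers["Location"]               # server names the new resource

order = requests.get(order_uri).json()             # GET is safe and cacheable
requests.put(order_uri, json={**order, "qty": 2})  # full-replacement update
requests.delete(order_uri)                         # remove the resource
```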

Book
01 Jan 2000
TL;DR: A rich ethnography of Internet use is presented in this article, which offers a sustained account not just of being online, but of the social, political and cultural contexts which account for the contemporary Internet experience.
Abstract: This pathbreaking book is the first to provide a rigorous and comprehensive examination of Internet culture and consumption. A rich ethnography of Internet use, the book offers a sustained account not just of being online, but of the social, political and cultural contexts which account for the contemporary Internet experience. From cybercafes to businesses, from middle class houses to squatter settlements, from the political economy of Internet provision to the development of e-commerce, the authors have gathered a wealth of material based on fieldwork in Trinidad. Looking at the full range of Internet media -- including websites, email and chat -- the book brings out unforeseen consequences and contradictions in areas as varied as personal relations, commerce, nationalism, sex and religion. This is the first book-length treatment of the impact of the Internet on a particular region. By focusing on one place, it demonstrates the potential for a comprehensive approach to new media. It points to the future direction of Internet research, proposing a detailed agenda for comparative ethnographic study of the cultural significance and effects of the Internet in modern society. Clearly written for the non-specialist reader, it offers a detailed account of the complex integration between on-line and off-line worlds. An innovative tie-in with the book's own website provides copious illustrations amounting to over 2,000 web pages that bring the material right to your computer.

Journal ArticleDOI
TL;DR: The authors argued that the Internet by itself is not a main effect cause of anything, and that psychology must move beyond this notion to an informed analysis of how social identity, social interaction, and relationship formation may be different on the Internet than in real life.
Abstract: Just as with most other communication breakthroughs before it, the initial media and popular reaction to the Internet has been largely negative, if not apocalyptic. For example, it has been described as “awash in pornography”, and more recently as making people “sad and lonely.” Yet, counter to the initial and widely publicized claim that Internet use causes depression and social isolation, the body of evidence (even in the initial study on which the claim was based) is mainly to the contrary. More than this, however, it is argued that like the telephone and television before it, the Internet by itself is not a main effect cause of anything, and that psychology must move beyond this notion to an informed analysis of how social identity, social interaction, and relationship formation may be different on the Internet than in real life. Four major differences and their implications for self and identity, social interaction, and relationships are identified: one's greater anonymity, the greatly reduced i...

Journal ArticleDOI
TL;DR: A failure analysis was conducted to identify trends among user mistakes, and the article concludes with a summary of findings and a discussion of their implications.
Abstract: We analyzed transaction logs containing 51,473 queries posed by 18,113 users of Excite, a major Internet search service. We provide data on: (i) sessions — changes in queries during a session, number of pages viewed, and use of relevance feedback; (ii) queries — the number of search terms, and the use of logic and modifiers; and (iii) terms — their rank/frequency distribution and the most highly used search terms. We then shift the focus of analysis from the query to the user to gain insight into the characteristics of the Web user. With these characteristics as a basis, we then conducted a failure analysis, identifying trends among user mistakes. We conclude with a summary of findings and a discussion of the implications of these findings.
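The rank/frequency tabulation of search terms reported here is straightforward to illustrate; the sketch below computes it over a handful of invented queries rather than the study's Excite log.

```python
from collections import Counter

# A few made-up Excite-style query strings; the real study used 51,473 queries.
queries = ["free music downloads", "music lyrics", "yahoo chat",
           "free games", "music videos free"]

# Term rank/frequency distribution: rank 1 is the most frequently used term.
terms = Counter(t for q in queries for t in q.lower().split())
for rank, (term, freq) in enumerate(terms.most_common(), start=1):
    print(rank, term, freq)

# A per-query statistic analogous to "number of search terms per query".
avg_terms = sum(len(q.split()) for q in queries) / len(queries)
print("mean terms per query:", avg_terms)
```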

Journal ArticleDOI
P. Bender, Peter J. Black, M. Grob, Roberto Padovani, N. Sindhushyana, S. Viterbi
TL;DR: The network architecture, based on Internet protocols adapted to the mobile environment, is described, followed by a discussion of economic considerations in comparison to cable and DSL services.
Abstract: This article presents an approach to providing very high-data-rate downstream Internet access by nomadic users within the current CDMA physical layer architecture. A means for considerably increasing the throughput by optimizing packet data protocols and by other network and coding techniques are presented and supported by simulations and laboratory measurements. The network architecture, based on Internet protocols adapted to the mobile environment, is described, followed by a discussion of economic considerations in comparison to cable and DSL services.

Journal ArticleDOI
TL;DR: The Sequence Manipulation Suite is a collection of freely available JavaScript applications for molecular biologists, consisting of over 30 utilities for analyzing and manipulating sequence data.
Abstract: JavaScript is an object-based scripting language that can be interpreted by most commonly used Web browsers, including Netscape® Navigator® and Internet Explorer®. In conjunction with HTML form elements, JavaScript can be used to make flexible and easy-to-use applications that can be accessed by anyone connected to the Internet (3). The Sequence Manipulation Suite (http://www.ualberta.ca/~stothard/javascript/) is a collection of freely available JavaScript applications for molecular biologists. It consists of over 30 utilities for analyzing and manipulating sequence data, including the following:
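Since the suite itself runs as browser-side JavaScript, the following is only a Python illustration of one representative utility (the reverse complement of a DNA sequence); it is not code from the Sequence Manipulation Suite.

```python
# Sketch of one representative utility: reverse complement of a DNA sequence.
# The actual Sequence Manipulation Suite implements this and ~30 other tools
# in browser-side JavaScript; this Python version is only an illustration.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("ATGCGTTA"))  # -> TAACGCAT
```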

Journal ArticleDOI
TL;DR: In this article, the authors surveyed 277 undergraduate Internet users, a population considered to be high risk for pathological Internet use (PIU), to assess incidence of PIU as well as characteristics of the Internet and of users associated with PIU.

Journal ArticleDOI
28 Aug 2000
TL;DR: A general purpose traceback mechanism based on probabilistic packet marking in the network that allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs).
Abstract: This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or "spoofed", source addresses. In this paper we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed "post-mortem" -- after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backwards compatible and can be efficiently implemented using conventional technology.
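A simplified simulation of the probabilistic-marking idea: each router overwrites a mark field with its own address with some probability p, and the victim orders the marks it collects by distance to recover the path. This node-sampling sketch with an invented p is a simplification for illustration; the paper's actual scheme uses edge sampling with fragmented marks in the IP header.

```python
import random

MARK_PROB = 0.04  # marking probability p; the value here is illustrative

def forward(packet, router_addr):
    """Each router marks the packet with its own address with probability p,
    and otherwise increments the distance field of an existing mark."""
    if random.random() < MARK_PROB:
        packet["mark"] = router_addr
        packet["distance"] = 0
    elif "mark" in packet:
        packet["distance"] += 1
    return packet

# Simulate a flood traversing a fixed attack path toward the victim.
path = ["r1", "r2", "r3", "r4"]
samples = set()
for _ in range(20000):
    pkt = {}
    for r in path:
        forward(pkt, r)
    if "mark" in pkt:
        samples.add((pkt["distance"], pkt["mark"]))

# The victim orders collected marks by distance, farthest router first.
reconstructed = [addr for _, addr in sorted(samples, reverse=True)]
print(reconstructed)  # expected: ['r1', 'r2', 'r3', 'r4']
```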

Journal ArticleDOI
TL;DR: This article examines a variety of infrastructures that provide access to scientific knowledge and assesses their impact on the way that scientists will create, organize and integrate knowledge and illustrates how online content may become more adaptive and structured.
Abstract: Scientific knowledge is increasingly being stored online. A large number of infrastructures that provide access to scientific knowledge are now available on the Internet. They range from online journals to collaboratories and logic servers. This article examines a variety of such infrastructures and derives implications for their further evolution. It assesses their impact on the way that scientists will create, organize and integrate knowledge. In parallel, the article illustrates how online content may become more adaptive and structured. The text consists of individually marked sections that are assembled dynamically to the needs of each reader on different levels of detail.

Journal ArticleDOI
TL;DR: This survey examines the various definitions of trust in the literature and provides a working definition of trust for Internet applications and some influential examples of trust management systems.
Abstract: Trust is an important aspect of decision making for Internet applications and particularly influences the specification of security policy, i.e., who is authorized to perform actions as well as the techniques needed to manage and implement security to and for the applications. This survey examines the various definitions of trust in the literature and provides a working definition of trust for Internet applications. The properties of trust relationships are explained and classes of different types of trust identified in the literature are discussed with examples. Some influential examples of trust management systems are described.

Journal ArticleDOI
16 May 2000
TL;DR: The design of the NiagaraCQ system is presented along with experimental results on its performance and scalability; to keep the system scalable, additional techniques are employed, including incremental evaluation of continuous queries, use of both pull and push models for detecting heterogeneous data source changes, and memory caching.
Abstract: Continuous queries are persistent queries that allow users to receive new results when they become available. While continuous query systems can transform a passive web into an active environment, they need to be able to support millions of queries due to the scale of the Internet. No existing systems have achieved this level of scalability. NiagaraCQ addresses this problem by grouping continuous queries based on the observation that many web queries share similar structures. Grouped queries can share the common computation, tend to fit in memory and can reduce the I/O cost significantly. Furthermore, grouping on selection predicates can eliminate a large number of unnecessary query invocations. Our grouping technique is distinguished from previous group optimization approaches in the following ways. First, we use an incremental group optimization strategy with dynamic re-grouping. New queries are added to existing query groups, without having to regroup already installed queries. Second, we use a query-split scheme that requires minimal changes to a general-purpose query engine. Third, NiagaraCQ groups both change-based and timer-based queries in a uniform way. To ensure that NiagaraCQ is scalable, we have also employed other techniques including incremental evaluation of continuous queries, use of both pull and push models for detecting heterogeneous data source changes, and memory caching. This paper presents the design of the NiagaraCQ system and gives some experimental results on the system's performance and scalability.
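The grouping idea can be sketched as follows: continuous queries that differ only in a selection constant share one group, so the common scan and selection run once per source change, and a new query joins an existing group without re-grouping installed ones. The query representation below is a hypothetical simplification, not NiagaraCQ's internal form.

```python
from collections import defaultdict

# Hypothetical continuous queries: each watches one source and filters on one
# attribute, e.g. "notify me when a quote for symbol X arrives".
queries = [
    {"id": 1, "source": "quotes.xml", "attr": "symbol", "value": "IBM"},
    {"id": 2, "source": "quotes.xml", "attr": "symbol", "value": "INTC"},
    {"id": 3, "source": "news.xml",   "attr": "topic",  "value": "sports"},
]

# Group signature: queries differing only in the constant share one group.
groups = defaultdict(dict)
for q in queries:
    groups[(q["source"], q["attr"])][q["value"]] = q["id"]

def on_source_change(source, new_tuples):
    """Evaluate each matching group once and fan results out to member queries."""
    for (src, attr), members in groups.items():
        if src != source:
            continue
        for row in new_tuples:
            qid = members.get(row.get(attr))
            if qid is not None:
                print(f"deliver {row} to continuous query {qid}")

# Incremental group optimization: a new query joins an existing group without
# re-grouping the queries that are already installed.
groups[("quotes.xml", "symbol")]["MSFT"] = 4

on_source_change("quotes.xml", [{"symbol": "INTC", "price": 42.0}])
```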

Patent
02 Nov 2000
TL;DR: In this paper, a system and method for voice transmission over high level network protocols is presented, where variable compression based on silence detection takes advantage of the natural silences and pauses in human speech, thus reducing the delays in transmission caused by using HTTP/TCP.
Abstract: A system and method for voice transmission over high level network protocols. On the Internet and the World Wide Web, such high level protocols are HTTP/TCP. The restrictions imposed by firewalls and proxy servers are avoided by using HTTP level connections to transmit voice data. In addition, packet delivery guarantees are obtained by using TCP instead of UDP. Variable compression based on silence detection takes advantage of the natural silences and pauses in human speech, thus reducing the delays in transmission caused by using HTTP/TCP. The silence detection includes the ability to bookend the voice data sent with small portions of silence to ensure that the voice sounds natural. Finally, the voice data is transmitted to each client computer independently from a common circular list of voice data, thus ensuring that all clients will stay current with the most recent voice data. The combination of these features enables simple, seamless, and interactive Internet conferencing.
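A rough sketch of energy-based silence detection for variable compression: frames whose energy falls below a threshold are suppressed and only a count is sent, so the receiver can re-insert silence. The frame size, threshold, and encoding below are invented for illustration and are not the patented method.

```python
# Illustrative energy-based silence detection: low-energy frames are not
# transmitted; only a count of skipped frames is sent so the receiver can
# re-insert silence. Frame size and threshold are made-up values.
FRAME = 160           # samples per frame (20 ms at 8 kHz)
THRESHOLD = 500.0     # mean absolute amplitude below which a frame is "silence"

def encode(samples):
    out, silent_run = [], 0
    for i in range(0, len(samples), FRAME):
        frame = samples[i:i + FRAME]
        energy = sum(abs(s) for s in frame) / max(len(frame), 1)
        if energy < THRESHOLD:
            silent_run += 1            # suppress the frame, just count it
        else:
            if silent_run:
                out.append(("silence", silent_run))
                silent_run = 0
            out.append(("voice", frame))
    if silent_run:
        out.append(("silence", silent_run))
    return out

# A burst of speech-like amplitude followed by near-silence.
pcm = [3000] * 800 + [10] * 1600
print([(kind, n if kind == "silence" else len(n)) for kind, n in encode(pcm)])
```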

Book
14 Jun 2000
TL;DR: Introduction; Practicalities of Using CMC; An Ethical Framework; Introducing Online Methods; Online Focus Groups; The Online Interviewer; Power Issues in Internet Research; Language Mode and Analysis; Virtuality and Data; Future Directions.
Abstract: Introduction; Practicalities of Using CMC; An Ethical Framework; Introducing Online Methods; Online Focus Groups; The Online Interviewer; Power Issues in Internet Research; Language Mode and Analysis; Virtuality and Data; Future Directions.

Journal ArticleDOI
TL;DR: New research in reinforcement learning, information extraction and text classification that enables efficient spidering, the identification of informative text segments, and the population of topic hierarchies are described.
Abstract: Domain-specific internet portals are growing in popularity because they gather content from the Web and organize it for easy access, retrieval and search. For example, www.campsearch.com allows complex queries by age, location, cost and specialty over summer camps. This functionality is not possible with general, Web-wide search engines. Unfortunately these portals are difficult and time-consuming to maintain. This paper advocates the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific Internet portals. We describe new research in reinforcement learning, information extraction and text classification that enables efficient spidering, the identification of informative text segments, and the population of topic hierarchies. Using these techniques, we have built a demonstration system: a portal for computer science research papers. It already contains over 50,000 papers and is publicly available at www.cora.justresearch.com. These techniques are widely applicable to portal creation in other domains.
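The text-classification component can be sketched with a modern stand-in: a Naive Bayes classifier that places crawled pages into a small topic hierarchy. The categories and training snippets below are invented, and scikit-learn is used purely for illustration (it postdates this work and is not the paper's implementation).

```python
# A small sketch of the text-classification step: placing crawled pages into
# a topic hierarchy. Categories and training snippets are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "reinforcement learning agents reward policy",
    "markov decision process value iteration",
    "information extraction named entity hidden markov model",
    "wrapper induction extracting fields from web pages",
]
train_labels = ["ML/RL", "ML/RL", "NLP/IE", "NLP/IE"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_docs, train_labels)

print(clf.predict(["extracting author and title fields from research papers"]))
```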

Patent
24 Jul 2000
TL;DR: In this article, a cross-referencing resource, which may take the form of an independent HTTP server, an LDAP directory server or the existing Internet Domain Name Service (DNS), receives Internet request messages containing all or part of a universal product code and returns the Internet address at which information about the identified product, or the manufacturer of that product, may be obtained.
Abstract: Methods and apparatus for disseminating over the Internet product information produced and maintained by product manufacturers using existing universal product codes (bar codes) as access keys. A cross-referencing resource, which may take the form of an independent HTTP server, an LDAP directory server, or the existing Internet Domain Name Service (DNS), receives Internet request messages containing all or part of a universal product code and returns the Internet address at which information about the identified product, or the manufacturer of that product, may be obtained. By using preferred Web data storage formats which conform to XML, XLS, XLink, Xpointer and RDF specifications, product information may be seamlessly integrated with information from other sources. A “web register” module can be employed to provide an Internet interface between a shared sales Internet server and an otherwise conventional inventory control system, and operates in conjunction with the cross-referencing server to provide detailed product information to Internet shoppers who may purchase goods from existing stores via the Internet.
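The core lookup can be sketched as a mapping from a UPC's manufacturer prefix to the address serving that manufacturer's product records; every prefix, domain, and the simple 6/5 digit split below are invented for illustration and are not the patent's registry or resolution protocol.

```python
# Hypothetical sketch of the cross-referencing lookup the patent describes:
# the manufacturer prefix of a UPC resolves to an address where product
# information can be fetched. All prefixes and domains below are invented.
PREFIX_TO_URL = {
    "036000": "http://products.example-manufacturer-a.com",
    "012345": "http://catalog.example-manufacturer-b.com",
}

def resolve(upc: str) -> str:
    """Map a 12-digit UPC-A code to the URL serving its product record."""
    prefix, item = upc[:6], upc[6:11]          # company prefix + item number
    base = PREFIX_TO_URL.get(prefix)
    if base is None:
        raise LookupError(f"no cross-reference registered for prefix {prefix}")
    return f"{base}/item/{item}"

print(resolve("036000291452"))
```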

Proceedings ArticleDOI
14 May 2000
TL;DR: The proposed Nimrod/G grid-enabled resource management and scheduling system builds on the earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components.
Abstract: The availability of powerful microprocessors and high-speed networks as commodity components has enabled high-performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise or worldwide), there is a great challenge in integrating, coordinating and presenting them as a single resource to the user, thus forming a computational grid. Another challenge comes from the distributed ownership of resources, with each resource having its own access policy, cost and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod (D. Abramson et al., 1994, 1995, 1997, 2000) and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the GUSTO (GlobUS TOolkit) services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise or global levels, with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real testbed, namely the Globus testbed (GUSTO).
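The computational-economy idea can be illustrated with a toy deadline-and-budget-constrained selection: choose the cheapest mix of resources that can finish the remaining jobs in time. The resources, prices, and greedy policy below are invented and are not the Nimrod/G scheduling algorithm.

```python
# Toy sketch of deadline/budget-constrained scheduling in the spirit of a
# computational economy. Resources, prices and the greedy policy are invented.
resources = [  # (name, jobs it can finish per hour, cost per job)
    ("dept-cluster",    20, 0.0),
    ("campus-grid",     60, 0.5),
    ("remote-provider", 200, 2.0),
]

def plan(jobs_remaining, hours_to_deadline, budget):
    allocation = []
    for name, rate, price in sorted(resources, key=lambda r: r[2]):  # cheapest first
        if jobs_remaining <= 0:
            break
        capacity = rate * hours_to_deadline
        affordable = capacity if price == 0 else min(capacity, int(budget // price))
        take = min(jobs_remaining, affordable)
        if take > 0:
            allocation.append((name, take))
            jobs_remaining -= take
            budget -= take * price
    return allocation, jobs_remaining  # leftover > 0 means constraints cannot be met

print(plan(jobs_remaining=300, hours_to_deadline=2, budget=500.0))
```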