
Showing papers on "Utility computing published in 1998"


Journal ArticleDOI
TL;DR: This essay speculates on the impact of the next generation technological platform — the internetwork computing architecture (InterNCA) — on systems development, and proposes where the information systems research community should focus its efforts.
Abstract: This essay speculates on the impact of the next generation technological platform — the internetwork computing architecture (InterNCA) — on systems development. The impact will be deep and pervasive, and more substantial than when computing migrated from closed computer rooms to ubiquitous personal computers and flexible client-server solutions. Initially, by drawing upon the notion of a technological frame, the InterNCA, and how it differs from earlier technological frames, is examined. Thereafter, a number of hypotheses are postulated with regard to how the architecture will affect systems development content, scope, organization and processes. Finally, some suggestions for where the information systems research community should focus its efforts (if the call for relevance is not to be taken lightly) are proposed.

104 citations


Journal Article
TL;DR: This work discusses how information-based computing within computational grids will enable collective advances in knowledge, and illustrates the new capabilities of such applications by presenting projects now under way that use concepts implicit within grid environments.
Abstract: Computational grids provide access to distributed compute resources and distributed data resources, creating unique opportunities for improved access to information. When data repositories are accessible from any platform, applications can be developed that support nontraditional uses of computing resources. Environments thus enabled include knowledge networks, in which researchers collaborate on common problems by publishing results in digital libraries, and digital government, in which policy decisions are based on knowledge gleaned from teams of experts accessing distributed data repositories. In both cases, users access data that has been turned into information through the addition of metadata that describes its origin and quality. Information-based computing within computational grids will enable collective advances in knowledge [396]. In this view of the applications that will dominate in the future, application development will be driven by the need to process and analyze information, rather than the need to simulate a physical process. In addition to accessing specific data sets, applications will need to use information discovery interfaces [138] and dynamically determine which data sets to process. In Section 5.1, we discuss how these applications will evolve, and we illustrate their new capabilities by presenting projects now under way that use some concepts implicit within grid environments. Data-intensive applications that will require the manipulation of terabytes of data aggregated across hundreds of files range from comparisons of numerical simulation output, to analyses of satellite observation data streams, to searches for homologous structures.

102 citations
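
The chapter's emphasis on metadata-driven information discovery can be made concrete with a small sketch. The Java interface below illustrates one possible shape for an information discovery interface that lets an application query metadata about origin and quality and dynamically select which data sets to process; every name here is hypothetical and is not drawn from any particular grid middleware.

```java
import java.util.List;

// Hypothetical sketch of an information discovery interface: applications
// consult metadata (origin, quality, location) to decide which distributed
// data sets to process. All names are illustrative.
public interface InformationDiscovery {

    /** Metadata that turns raw data into information. */
    interface DatasetMetadata {
        String name();
        String origin();          // provenance: who produced the data, and how
        double qualityScore();    // e.g. an estimate of measurement error
        String location();        // where in the grid the data lives
    }

    /** Find candidate data sets matching a topic across distributed repositories. */
    List<DatasetMetadata> discover(String topic);

    /** Application-side policy: keep only data sets of sufficient quality. */
    static List<DatasetMetadata> selectUsable(List<DatasetMetadata> found, double minQuality) {
        return found.stream()
                    .filter(d -> d.qualityScore() >= minQuality)
                    .toList();
    }
}
```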


Book ChapterDOI
TL;DR: This paper presents and discusses the idea of Web-based volunteer computing, which allows people to cooperate in solving a large parallel problem by using standard Web browsers to volunteer their computers' processing power.
Abstract: This paper presents and discusses the idea of Web-based volunteer computing, which allows people to cooperate in solving a large parallel problem by using standard Web browsers to volunteer their computers' processing power. Because volunteering requires no prior human contact and very little technical knowledge, it becomes very easy to build very large volunteer computing networks. At its full potential, volunteer computing can make it possible to build world-wide massively parallel computing networks more powerful than any supercomputer. Even on a smaller, more practical scale, volunteer computing can be used within companies or institutions to provide supercomputer-like facilities by harnessing the computing power of existing workstations. Many interesting variations are possible, including networks of information appliances (NOIAs), paid volunteer systems, and barter trade of compute cycles. In this paper, we discuss these possibilities, and identify several issues that will need to be addressed in order to successfully implement them. We also present an overview of the current work being done in the Bayanihan volunteer computing project.

78 citations
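
As a rough illustration of the pattern the paper describes, the Java sketch below shows the worker half of a volunteer computing system: fetch a work unit, compute it, report the result, and tolerate disconnection at any time. The WorkServer and WorkUnit types are invented for this sketch and are not Bayanihan's actual API.

```java
// Hypothetical sketch of a volunteer worker loop; not Bayanihan's actual API.
public class VolunteerWorker {

    /** A unit of work downloaded from the coordinating server. */
    interface WorkUnit {
        String id();              // identifier used when reporting the result
        Object compute();         // the parallel subproblem to solve
    }

    /** The coordinator the volunteer talks to (hypothetical). */
    interface WorkServer {
        WorkUnit fetchWork() throws Exception;
        void submitResult(String unitId, Object result) throws Exception;
    }

    private final WorkServer server;

    VolunteerWorker(WorkServer server) { this.server = server; }

    /** Main volunteer loop: fetch, compute, report, repeat. */
    void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                WorkUnit unit = server.fetchWork();
                server.submitResult(unit.id(), unit.compute());
            } catch (Exception e) {
                // Volunteers may vanish at any moment, so the server must
                // reschedule lost work; the worker simply backs off and retries.
                try { Thread.sleep(5000); } catch (InterruptedException ie) { return; }
            }
        }
    }
}
```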


BookDOI
01 Jan 1998
TL;DR: This article reports the background and results of the Kyoto Meeting on Social Interaction and Communityware, organized in June 1998 to explore the potential of the community metaphor to generate new directions in research and practice.
Abstract: With the advance of global computer networks, a dramatic shift in computing metaphors has begun: from team to community. Given that the team metaphor has created various research fields, including groupware and distributed artificial intelligence, the community metaphor seems to have the potential to generate new directions in research and practice. Based on this motivation, we organized the Kyoto Meeting on Social Interaction and Communityware in June 1998. This article reports the background and results of the meeting.

73 citations


Journal ArticleDOI
TL;DR: Ninflet is designed to use Java features to implement key global computing capabilities, such as resource allocation, inter-Ninflet communication, security, checkpointing, object migration, and easy server management via HTTP.
Abstract: Ninflet is a Java-based global computing system that builds on our experiences with the Ninf system, which facilitated RPC-based computing of numerical tasks in a wide-area network. The goal of Ninflet is to become a new generation of concurrent object-oriented systems which harness abundant idle computing power, and also seamlessly integrate global as well as local network parallel computing. Ninflet is designed to make use of Java features to implement important features in global computing, such as resource allocation, inter-Ninflet communication, security, checkpointing, object migration, and easy server management via HTTP. © 1998 John Wiley & Sons, Ltd.

44 citations
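
The abstract lists checkpointing and object migration among Ninflet's features but does not show its interfaces. The Java sketch below illustrates one plausible shape for such a task abstraction; the names are invented, not Ninflet's real API.

```java
import java.io.Serializable;

// Hypothetical sketch of a checkpointable, migratable task abstraction of the
// kind a Java-based global computing system might expose.
public interface GlobalTask extends Serializable {

    /** Run (or resume) the task on whichever host currently holds it. */
    void execute();

    /**
     * Capture enough state to resume elsewhere. Because the task is
     * Serializable, a checkpoint can be the serialized object itself,
     * which also makes migration between hosts straightforward.
     */
    byte[] checkpoint();

    /** Restore state from a previous checkpoint before calling execute(). */
    void restore(byte[] checkpoint);
}
```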


Patent
30 Sep 1998
TL;DR: In this paper, a method and apparatus for automatically redistributing tasks to reduce the effect of a computer outage on a computer network is presented; the apparatus comprises at least one redundancy group of one or more computing systems, each made up of computing system partitions that hold replicated copies of a database schema.
Abstract: A method and apparatus for automatically redistributing tasks to reduce the effect of a computer outage on a computer network. The apparatus comprises at least one redundancy group made up of one or more computing systems, each comprising one or more computing system partitions. Each computing system partition includes a copy of a database schema that is replicated at every partition. The redundancy group monitors the status of the computing systems and the computing system partitions, and assigns tasks to the computing systems based on that monitored status.

26 citations
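
As a hedged illustration of the idea (not the patented mechanism itself), the Java sketch below shows a redundancy group that monitors member health and reassigns tasks away from failed systems; all types and names are invented for the example.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: a redundancy group that polls member systems
// and redistributes tasks held by unhealthy ones.
public class RedundancyGroup {

    interface ComputingSystem {
        boolean isHealthy();          // the monitored status
        void assign(String task);
    }

    private final List<ComputingSystem> members;
    private final Map<String, ComputingSystem> assignments = new HashMap<>();

    public RedundancyGroup(List<ComputingSystem> members) {
        this.members = members;
    }

    /** Assign a task to the first healthy member. */
    public void submit(String task) {
        for (ComputingSystem s : members) {
            if (s.isHealthy()) {
                s.assign(task);
                assignments.put(task, s);
                return;
            }
        }
        throw new IllegalStateException("no healthy computing system available");
    }

    /** Periodically invoked monitor: move tasks off failed systems. */
    public void monitor() {
        for (String task : new ArrayList<>(assignments.keySet())) {
            if (!assignments.get(task).isHealthy()) {
                assignments.remove(task);
                submit(task);             // reassign to a surviving member
            }
        }
    }
}
```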


Proceedings ArticleDOI
16 Apr 1998
TL;DR: The author discusses some software challenges for ubiquitous computing, whose promise is that the increased pervasiveness of computation will lead to less intrusive and more valuable services for the end user.
Abstract: The defining characteristic of ubiquitous computing is the attempt to break away from the traditional desktop computing paradigm and move computational power into the environment that surrounds the user. Increasing miniaturization brought on by advances in areas such as VLSI design means that we will soon be able to instrument our environments in order to realize the dream of ubiquitous computing. The promise of ubiquitous computing, however, is that the increased pervasiveness of computation will lead to less intrusive and more valuable services to the end user. For this dream to be realized we need to accompany hardware advances with advances in software technology. In this paper, the author discusses some software challenges for ubiquitous computing.

23 citations


Proceedings ArticleDOI
28 Jul 1998
TL;DR: This paper explores the history and future directions of data-intensive computing: the field defined by the technologies, middleware services, and architectures used to build useful high-speed, wide-area distributed systems.
Abstract: Modern scientific computing involves organizing, moving, visualizing, and analyzing massive amounts of data from around the world, as well as employing large-scale computation. The distributed systems that solve large-scale problems will always involve aggregating and scheduling many resources. Data must be located and staged, cache and network capacity must be available at the same time as computing capacity, etc. Every aspect of such a system is dynamic: locating and scheduling resources, adapting running application systems to availability and congestion in the middleware and infrastructure, responding to human interaction, etc. The technologies, the middleware services, and the architectures that are used to build useful high-speed, wide area distributed systems, constitute the field of data intensive computing. This paper explores some of the history and future directions of that field.

22 citations


Journal ArticleDOI
01 Jun 1998
TL;DR: The performance of two well-known algorithms is analyzed and optimized by the simulated execution of prototypes, i.e., of partially-implemented program designs, and the performance figures obtained by simulation are compared to the actual ones, measured by executing the implemented program in a real computing environment.
Abstract: The objective of this paper is to show the benefits of developing applications for heterogeneous networked computing environments using simulation tools. Firstly, an overview of the problem is given, discussing the advantages of developing on top of a high-performance simulator rather than directly on the real hardware. Then PS, an existing simulator of PVM applications, is described, and the development of a distributed matrix multiplication application in a workstation cluster is discussed as a case study. In the proposed example, the performance of two well-known algorithms is analyzed and optimized by the simulated execution of prototypes, i.e., of partially-implemented program designs. The problem of workload sharing in the presence of computing resource heterogeneity is tackled, and the performance figures obtained by simulation are compared to the actual ones, measured by executing the implemented program in a real computing environment.

16 citations
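
The workload-sharing problem mentioned above can be made concrete with a small example. The Java sketch below splits the rows of a matrix computation across heterogeneous nodes in proportion to their relative speeds; the speed figures are assumed inputs (in the paper they would come from simulation or measurement), and the code illustrates the general idea rather than the authors' algorithm.

```java
// Illustrative proportional workload partitioner for heterogeneous nodes.
public class HeterogeneousPartitioner {

    /** Returns how many rows each node should compute, proportional to its speed. */
    public static int[] partitionRows(int totalRows, double[] nodeSpeeds) {
        double totalSpeed = 0;
        for (double s : nodeSpeeds) totalSpeed += s;

        int[] rows = new int[nodeSpeeds.length];
        int assigned = 0;
        for (int i = 0; i < nodeSpeeds.length; i++) {
            rows[i] = (int) Math.floor(totalRows * nodeSpeeds[i] / totalSpeed);
            assigned += rows[i];
        }
        // Hand the rounding remainder to the fastest node.
        int fastest = 0;
        for (int i = 1; i < nodeSpeeds.length; i++)
            if (nodeSpeeds[i] > nodeSpeeds[fastest]) fastest = i;
        rows[fastest] += totalRows - assigned;
        return rows;
    }

    public static void main(String[] args) {
        // Example: 1000 rows over three workstations of unequal speed.
        int[] share = partitionRows(1000, new double[]{1.0, 2.0, 0.5});
        System.out.println(java.util.Arrays.toString(share));  // prints [285, 573, 142]
    }
}
```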



Journal ArticleDOI
TL;DR: This column extends the discussion of the unique challenges presented by adopting a much more personal model of mobile computing to consider arrays of deeply embedded computing devices that are not fundamentally associated with an individual person.
Abstract: The traditional view of mobile computing (such as it is) typically involves users moving through an environment, or set of environments, with their own personal computing devices. This model works as a rich extension of existing devices such as notebook computers and personal digital assistants. Without some degree of mobility, the problems associated with mobile computing are indistinguishable from those of traditional computing environments. Thus far, much of this field's research has focused on extending services developed for desktop computing environments to mobile devices and managing the challenges that result from unreliable or varying network connectivity. In a recent column, the author discussed the unique challenges presented by adopting a much more personal model of mobile computing, rather than simply considering the impact of desktop applications and services (see ibid., April-June 1998, p. 8-10). In this column, he extends that discussion to consider arrays of deeply embedded computing devices that are not fundamentally associated with an individual person. Collectively, this model and associated technologies will serve as the basis for developing a set of fundamentally new systems called smart spaces. Smart spaces incorporate embedded computing devices with sensor technology to provide automatic responses to environmental changes. Although some common examples consider the degenerate case of a single processing node (such as responsive desktops), a richer set of capabilities emerges when these nodes are composed to form larger systems.

Proceedings ArticleDOI
02 Dec 1998
TL;DR: This model classifies distributed processing systems into seven categories based on the location of data storage and the style of processing between client and server; its use in planning the infrastructure of a new system for one of the authors' customers is also described.
Abstract: When implementing an application system in a distributed computing environment, several architectural questions arise, such as how and where computing resources are distributed and how the communication among computing resources should be implemented. To simplify the process of making these choices, we have developed a distributed computing model. This model classifies distributed processing systems into seven categories based on the location of data storage and the style of processing between client and server. This paper describes our model and its use in planning the infrastructure of a new system for one of our customers.

Proceedings ArticleDOI
B.-A. Molin
08 Nov 1998
TL;DR: The main activities in the program are multidisciplinary research projects covering several universities and industries, and a distributed graduate school comprising 30 Ph.D. students today, with a planned increase to more than 50 students by the end of 1999.
Abstract: This paper describes PCC, a distributed multidisciplinary research program in personal computing and communication. The rich diversity of emerging services in the area of personal mobile distributed computing gives rise to new complex traffic patterns that impose new requirements on the infrastructure. These requirements call for the design of a completely new system architecture, integrating distributed computing concepts, wireless communication concepts, and high-performance communication concepts. With this in mind, PCC started its activities in June 1997. The main activities in the program are multidisciplinary research projects covering several universities and industries, and a distributed graduate school comprising 30 Ph.D. students today, with a planned increase to more than 50 students by the end of 1999.

01 Jan 1998
TL;DR: This paper describes research looking towards the next generation of software for such applications, centered on the idea of distributed computing with data, and proposes to take advantage of the CORBA standard for distributed, object-oriented computation.
Abstract: Statistical computing is part of a more general process, which can be called computing with data. Besides traditional statistical analysis, this involves acquiring, organizing, and visualizing data, often in large, structured datasets organized in database management systems and used for purposes beyond analysis. An important challenge for statistical computing (and statistics in general) is to increase the scope of our involvement in this diverse environment. At the same time, the computing environment itself is becoming more diverse in all respects: data and users are widely spread and using many different systems. We describe research looking towards the next generation of software for such applications, centered on the idea of distributed computing with data. By this we mean distributed in two fundamentally different, but related, senses. First, the data and the tasks users apply to the data are distributed geographically, over a heterogeneous network of computers and operating systems. Second, the programming environment we envision is distributed over a variety of languages and other software. We describe research towards a programming environment suitable for distributed computing with data. As a key to this environment, we propose to take advantage of the CORBA standard for distributed, object-oriented computation. This paper describes the background for our approach, the reasoning for the CORBA proposal, and some initial experiments in the new approach.
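
To make the CORBA proposal more concrete, the sketch below shows the kind of contract such a system implies: a service that locates remotely held data and applies statistical methods to it. In the authors' proposal the contract would be written in CORBA IDL and accessed through generated stubs; plain Java interfaces are used here only for brevity, and every name is illustrative.

```java
// Hedged sketch of a remote statistical computing contract; in a CORBA-based
// system this would be an IDL interface rather than plain Java.
public interface DataAnalysisService {

    /** A handle to a dataset that may live on another host entirely. */
    interface RemoteDataset {
        String name();
        double[] column(String variable);   // fetch one variable's values
    }

    /** Locate a dataset somewhere on the network by name. */
    RemoteDataset find(String datasetName);

    /** Apply a named statistical method (e.g. "mean") to remote data. */
    double[] apply(String method, RemoteDataset data, String... variables);
}
```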


01 Jan 1998
TL;DR: The development of a unique experimental facility for the exploration of large-scale ubiquitous computing interfaces and applications that are aware of the context of their use and can capture salient memories of collaborative experiences is proposed.
Abstract: We propose the development of a unique experimental facility for the exploration of large-scale ubiquitous computing interfaces and applications that are aware of the context of their use and can capture salient memories of collaborative experiences. Our research goal is to explore new paradigms for human-computer interaction through a prototype next generation ubiquitous computing interface. Our engineering goal is to build a computational and sensing infrastructure so pervasive that it can do for supporting and capturing collaborative activities what the worldwide computer network has done for e-mail and on-line communities. Like the telephone system or the road network, this interface must be available throughout a user’s work space to fully reveal how it will be used and the scientific issues raised. Current ubiquitous computing experiments are limited to the scale of a single room or just provide a minimal service (e.g., location) across a larger area. We propose developing a rich ubiquitous computing infrastructure for the distributed space occupied by the College of Computing at Georgia Tech. We will add sensing capabilities to the environment, including microphones, cameras, and position measurement systems. We will explore a variety of display elements, including speakers and handheld, wearable, and wall-mounted displays. Furthermore, we will install computational resources sufficient to process and generate all of the information that will be sensed in this environment. Scale is not the only hard issue when dealing with ubiquitous computing interfaces. Applications supported through ubiquitous technology must provide natural, multi-modal input and output. Users may speak, gesture, or manipulate in the context of the task they are performing, and the ubiquitous interface must make sense of that input. The interface needs to be as accessible as possible so as to reduce the cost of use and support novices. These applications must also be seen by the users as providing a value-added service, or they will not be used in everyday tasks no matter how accessible the interface. Projects that would use this experimental facility include: studies of systems that capture what is happening in an environment and make that information available to users (focusing on capture systems to enhance education); explorations of systems issues in delivering multimedia services in this type of ubiquitous computing environment; studies of how computers can be made aware of what is happening in an environment (computational perception); studies of software engineering issues for ubiquitous computing; and explorations of designs for new interfaces and interface paradigms for ubiquitous computing environments. Basic scientific questions include:

Dissertation
01 Jan 1998
TL;DR: This thesis investigates the design and implementation of a Java based middleware infrastructure system to enable wide area applications and illustrates the system's ability to support the development of several classes of application domains which include electronic commerce home service applications, failure resilient scientific applications, and traditional computer supported cooperative work applications.
Abstract: The growth in popularity of the World Wide Web has resulted in the development of a new generation of tools tailored to Internet computing activities. Prominent examples include the Java programming language and Java capable Web browsers. These Web spinoffs are having a profound impact on the field of distributed computing. Whereas distributed computing has traditionally focused on improving the functionality of local clusters of computers, technology is progressing such that wide area computing networks are now becoming a popular target environment for research in distributed computing. With wide area distributed computing environments, geographically distributed resources such as workstations, personal computers, supercomputers, graphic rendering engines, and scientific instruments will be available for use in a seamless fashion by parallel applications. Many envision that it will be possible to transport application code to remote sites in the wide area virtual computer where it may be executed in the presence of needed resources. The area of research devoted to bringing this vision to reality in the context of scientific applications is referred to as metacomputing. Metacomputing environments are useful for a variety of distributed and parallel applications, particularly those which need access to remote resources or applications that are able to effectively utilize a substantial number of computing resources that the Internet may easily provide. Moreover, other wide area distributed computing domains such as electronic commerce home service applications require more advanced capabilities than those provided by a standard web browser. Such capabilities must address issues such as heterogeneity, object sharing, and failure resilience in wide area environments. In this thesis, we investigate the design and implementation of a Java based middleware infrastructure system to enable wide area applications. We then provide an empirical evaluation of our prototype system for local area, wide area, and home service network environments. We also illustrate the system's ability to support the development of several classes of application domains which include electronic commerce home service applications, failure resilient scientific applications, and traditional computer supported cooperative work applications. Finally, we present a design for the monitoring and visual presentation of activities associated with wide area distributed computing.

Proceedings ArticleDOI
06 Jan 1998
TL;DR: The paper presents CoMMM, an object oriented application framework and runtime environment for DMMSs that wraps methods into CORBA objects and provides CORBA Common Facilities for publishing, finding and executing methods.
Abstract: In network computing, methods (scripts, programs, models, macros, etc.) are executed at remote sites in a transparent way. Distributed method management systems (DMMSs) for network computing allow easy dissemination of new methods over global networks by supporting the publishing of new methods as well as finding and executing existing methods. For consumers, DMMSs replace awkward telnet sessions or debugging cycles for downloaded public domain software with click 'n' go access to methods running in a suitable execution environment. For providers, DMMSs simplify the process of putting methods on the Web to a minimum. In scientific computing, methods are typically scripts for scientific computing packages like Mathematica or Matlab, compiled programs, or standalone Internet servers providing computational services. Methods can be applied to data sets available within the DMMS or anywhere on the Internet. The paper presents CoMMM, an object oriented application framework and runtime environment for DMMSs. CoMMM wraps methods into CORBA objects and provides CORBA Common Facilities for publishing, finding and executing methods.
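
The publish/find/execute cycle described above can be sketched as a single interface. The Java sketch below is hypothetical; CoMMM's real CORBA interfaces and Common Facilities are not reproduced in the abstract.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the three DMMS operations the paper describes.
public interface MethodRepository {

    /** Describes a published method: a script, program, model, or macro. */
    interface MethodDescriptor {
        String name();
        String executionEnvironment();   // e.g. "Mathematica", "Matlab", "native"
    }

    /** Provider side: make a new method available on the network. */
    void publish(MethodDescriptor method, byte[] implementation);

    /** Consumer side: discover existing methods matching a keyword query. */
    List<MethodDescriptor> find(String query);

    /** Consumer side: run a method remotely against named input data sets. */
    Map<String, Object> execute(MethodDescriptor method, Map<String, Object> inputs);
}
```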

Proceedings ArticleDOI
22 Oct 1998
TL;DR: The author discusses the progress of distributed computing technology, explains and compares the two approaches, outlines the benefits that can be harvested by embedded systems, and indicates areas for future work.
Abstract: In the 90s, as the World Wide Web appeared on the Internet and took it by storm, embedded systems designers have had the opportunity to advance the designs and features of their systems. This time, at the center of the development is distributed computing technology, which focuses on using the network as the computing resource. As a result, the technology promotes thin clients for telecommunication embedded systems to reduce total system costs. Currently, there are two approaches for achieving distributed computing over a network. The Inferno system by Lucent Technologies uses a homogeneous namespace, a transaction-oriented protocol, and a virtual machine to provide distributed computing. It is designed to work primarily on a private enterprise network. The other approach, led by Sun Microsystems, uses CORBA and Java to achieve interactive content on Internet Web pages, and uses RMI and JavaSpace as the distributed computing vehicle for embedded systems. The author discusses the progress of distributed computing technology, explains and compares the two approaches, outlines the benefits that can be harvested by embedded systems, and indicates areas for future work.
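
Of the two approaches compared, the Sun side can be illustrated with a minimal Java RMI client: the embedded thin client holds only a stub and delegates computation to a server on the network. The service URL and the SignalProcessor interface below are invented for the example; only the RMI mechanics (Naming.lookup and the Remote/RemoteException contract) are standard API.

```java
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Minimal thin-client sketch using Java RMI: the heavy computation lives on
// a network server, and the embedded device only invokes a remote stub.
public class ThinClientDemo {

    /** Hypothetical remote contract; the server does the real processing. */
    public interface SignalProcessor extends Remote {
        double[] filter(double[] samples) throws RemoteException;
    }

    public static void main(String[] args) throws Exception {
        // The client looks up the stub by name and delegates the work.
        SignalProcessor proc =
            (SignalProcessor) Naming.lookup("rmi://server.example.com/SignalProcessor");
        double[] filtered = proc.filter(new double[] {0.1, 0.4, 0.35});
        System.out.println("received " + filtered.length + " filtered samples");
    }
}
```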



Proceedings ArticleDOI
E. Drakopoulos
01 Jun 1998
TL;DR: The evaluation of the design alternatives is based on a multiclass priority queuing network model that is used to analyze system performance under various load conditions, identify potential bottlenecks and study design tradeoffs.
Abstract: The paper studies the performance of computing environment design alternatives for an application workload associated with a large software product generation process. A number of configurations based on shared memory multiprocessor computing platforms and network file servers are considered in the study. The various computing environment configurations are differentiated based on the location of the devices that provide physical storage for the input and output files of the software product generation process, and the characteristics of the computing platforms that are used to provide compute and file services. The evaluation of the design alternatives is based on a multiclass priority queuing network model that is used to analyze system performance under various load conditions, identify potential bottlenecks and study design tradeoffs.
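
The abstract does not reproduce the model's equations, but a standard building block for multiclass priority queuing analyses of this kind is the non-preemptive M/G/1 priority queue, whose mean class-k waiting time is given by Cobham's formula (a textbook result, not necessarily the exact model used in the paper; class 1 has the highest priority):

```latex
% \lambda_i is the class-i arrival rate, S_i its service time,
% and \rho_i = \lambda_i \mathbb{E}[S_i] its offered load.
\[
  W_k = \frac{W_0}{\left(1-\sigma_{k-1}\right)\left(1-\sigma_k\right)},
  \qquad
  W_0 = \frac{1}{2}\sum_{i} \lambda_i \,\mathbb{E}\!\left[S_i^{2}\right],
  \qquad
  \sigma_k = \sum_{i=1}^{k} \rho_i,\quad \sigma_0 = 0.
\]
```

Each class sees only the aggregate load of classes at or above its own priority, which is how such a model exposes bottlenecks under mixed workloads.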

01 Jan 1998
TL;DR: This experiment showed that with careful preparation by several faculty, a single topic can be used to enhance student understanding of the overall topic and its many cross-functional components, with a “real world” flavor.
Abstract: After the American Assembly of Collegiate Schools of Business (AACSB) adopted new accreditation standards in April 1991, some business schools revised curricula to integrate core areas and pursued cross-functional content delivery (AACSB, 1991, 1993). The implementation of curriculum integration across functions is complex and requires concerted effort from faculty. One opportunity to provide a significant cross-functional experience for students materialized in an End User Computing (EUC) course in which the students developed a Decision Support System (DSS) for small businesses considering adoption of a Cafeteria Plan benefits package. The Cafeteria Plan topic area links accounting/taxation, finance, human resource management, business communication and computer information systems. Students who major in a business area are typically introduced to these topics on a piece-meal basis in individual, stand-alone courses. This experiment showed that with careful preparation by several faculty, a single topic can be used to enhance student understanding of the overall topic and its many cross-functional components, with a “real world” flavor.

Book ChapterDOI
20 Jul 1998
TL;DR: Distributed object technology may be the best way to program flexible and maintainable parallel and distributed scientific software systems in the future.
Abstract: Scientific computing requires parallel and distributed computations for high performance, which introduces an additional level of complexity to the application development. Distributed object technology may be the best way to program flexible and maintainable parallel and distributed scientific software systems in the future.