Author

A. Watson

Bio: A. Watson is an academic researcher. The author has contributed to research in topics: Common Data Representation & Interoperable Object Reference. The author has an h-index of 1 and has co-authored 1 publication receiving 951 citations.

Papers

Cited by
Book Chapter
25 Aug 2002
TL;DR: It is argued that traditional approaches to handling resource variability in applications are inadequate, and an alternative architectural framework is described that is better matched to the needs of ubiquitous computing.
Abstract: Ubiquitous computing poses a number of challenges for software architecture. One of the most important is the ability to design software systems that accommodate dynamically-changing resources. Resource variability arises naturally in a ubiquitous computing setting through user mobility (a user moves from one computing environment to another), and through the need to exploit time-varying resources in a given environment (such as wireless bandwidth). Traditional approaches to handling resource variability in applications attempt to address the problem by imposing uniformity on the environment. We argue that those approaches are inadequate, and describe an alternative architectural framework that is better matched to the needs of ubiquitous computing. A key feature of the architecture is that user tasks become first class entities. User proxies, or Auras, use models of user tasks to set up, monitor and adapt computing environments proactively. The architectural framework has been implemented and is currently being used as a central component of Project Aura, a campus-wide ubiquitous computing effort.

614 citations
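
The abstract above is architectural rather than code-level, but its central idea, that user tasks are first-class entities which a user proxy uses to adapt the environment as resources vary, can be sketched in a few lines. The Java sketch below is purely illustrative and uses hypothetical names (UserTask, Supplier, choose); it is not Project Aura's actual API.

```java
// Illustrative sketch only, with invented names; not Project Aura's real API.
// It shows the idea from the abstract: a user task is a first-class object,
// and a proxy uses the task model to re-adapt when resources change.
import java.util.List;

public class AuraSketch {
    // A user task, described by the services it needs.
    record UserTask(String name, List<String> requiredServices) {}

    // A concrete way to supply a service at some resource cost.
    record Supplier(String service, String provider, int bandwidthKbps) {}

    // The proxy picks a supplier that fits the currently observed resources.
    static Supplier choose(String service, List<Supplier> candidates, int availableKbps) {
        return candidates.stream()
                .filter(c -> c.service().equals(service) && c.bandwidthKbps() <= availableKbps)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no supplier fits " + service));
    }

    public static void main(String[] args) {
        UserTask task = new UserTask("review-paper", List.of("video-chat"));
        List<Supplier> suppliers = List.of(
                new Supplier("video-chat", "full-motion-video", 2000),
                new Supplier("video-chat", "audio-only", 64));

        // Wireless bandwidth drops as the user moves; the proxy re-adapts.
        for (int kbps : new int[] {5000, 100}) {
            for (String svc : task.requiredServices()) {
                System.out.println(kbps + " kbps -> " + choose(svc, suppliers, kbps).provider());
            }
        }
    }
}
```

The design point mirrored here is that adaptation is driven by the task model rather than by the application imposing uniformity on the environment.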

Journal Article
TL;DR: MiLAN, a new middleware that allows applications to specify a policy for managing the network and sensors while the actual implementation of this policy is effected within MiLAN, is described, and its effectiveness is shown through the design of a sensor-based personal health monitor.
Abstract: Current trends in computing include increases in both distribution and wireless connectivity, leading to highly dynamic, complex environments on top of which applications must be built. The task of designing and ensuring the correctness of applications in these environments is similarly becoming more complex. The unified goal of much of the research in distributed wireless systems is to provide higher-level abstractions of complex low-level concepts to application programmers, easing the design and implementation of applications. A new and growing class of applications for wireless sensor networks require similar complexity encapsulation. However, sensor networks have some unique characteristics, including dynamic availability of data sources and application quality of service requirements, that are not common to other types of applications. These unique features, combined with the inherent distribution of sensors, and limited energy and bandwidth resources, dictate the need for network functionality and the individual sensors to be controlled to best serve the application requirements. In this article, we describe different types of sensor network applications and discuss existing techniques for managing these types of networks. We also overview a variety of related middleware and argue that no existing approach provides all the management tools required by sensor network applications. To meet this need, we have developed a new middleware called MiLAN. MiLAN allows applications to specify a policy for managing the network and sensors, but the actual implementation of this policy is effected within MiLAN. We describe MiLAN and show its effectiveness through the design of a sensor-based personal health monitor.

554 citations
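
The key design point in the MiLAN abstract, that applications state a policy while the middleware carries out network and sensor management, can be illustrated with a toy example. The Java sketch below uses invented names (Sensor, plan) and a deliberately simplistic additive quality model; it is not MiLAN's real interface.

```java
// A minimal sketch of the policy idea described above, with hypothetical
// names; not MiLAN's actual API. The application only states required
// quality per variable; the middleware decides which sensors to activate.
import java.util.*;

public class MilanStyleSketch {
    // A sensor contributes some quality for one variable at an energy cost.
    record Sensor(String name, String variable, double quality, double energyCost) {}

    // Policy: minimum acceptable quality per monitored variable.
    static Map<String, List<Sensor>> plan(Map<String, Double> policy, List<Sensor> sensors) {
        Map<String, List<Sensor>> active = new HashMap<>();
        for (var req : policy.entrySet()) {
            List<Sensor> picked = new ArrayList<>();
            double q = 0;
            // Greedy choice: cheapest sensors first until the quality target is
            // met (a simplistic additive quality model, for illustration only).
            for (Sensor c : sensors.stream()
                    .filter(s -> s.variable().equals(req.getKey()))
                    .sorted(Comparator.comparingDouble(Sensor::energyCost))
                    .toList()) {
                if (q >= req.getValue()) break;
                picked.add(c);
                q += c.quality();
            }
            active.put(req.getKey(), picked);
        }
        return active;
    }

    public static void main(String[] args) {
        var policy = Map.of("heart-rate", 0.9);  // the health monitor's requirement
        var sensors = List.of(
                new Sensor("ecg", "heart-rate", 1.0, 5.0),
                new Sensor("pulse-ox", "heart-rate", 0.6, 1.0));
        System.out.println(plan(policy, sensors));
    }
}
```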

Journal Article
TL;DR: This paper aims to present the state-of-the-art of Grid computing and attempts to survey the major international efforts in developing this emerging technology.
Abstract: The last decade has seen a substantial increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems, in the fields of science, engineering, and business, which cannot be effectively dealt with using the current generation of supercomputers. In fact, due to their size and complexity, these problems are often very numerically and/or data intensive and consequently require a variety of heterogeneous resources that are not available on a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer. This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently peer-to-peer or Grid computing. The early efforts in Grid computing started as a project to link supercomputing sites, but have now grown far beyond their original intent. In fact, many applications can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high-throughput computing, and of course distributed supercomputing. Moreover, due to the rapid growth of the Internet and Web, there has been a rising interest in Web-based distributed computing, and many projects have been started and aim to exploit the Web as an infrastructure for running coarse-grained distributed and parallel applications. In this context, the Web has the capability to be a platform for parallel and collaborative work as well as a key technology to create a pervasive and ubiquitous Grid-based infrastructure. This paper aims to present the state-of-the-art of Grid computing and attempts to survey the major international efforts in developing this emerging technology.

513 citations

Journal Article
V. M. Abazov, Brad Abbott, M. Abolins, Bobby Samir Acharya, +814 more (74 institutions)
TL;DR: The D0 experiment enjoyed a very successful data-collection run at the Fermilab Tevatron collider between 1992 and 1996 as discussed by the authors, and the detector has been upgraded to take advantage of improvements to the Tevatron and to enhance its physics capabilities.
Abstract: The D0 experiment enjoyed a very successful data-collection run at the Fermilab Tevatron collider between 1992 and 1996. Since then, the detector has been upgraded to take advantage of improvements to the Tevatron and to enhance its physics capabilities. We describe the new elements of the detector, including the silicon microstrip tracker, central fiber tracker, solenoidal magnet, preshower detectors, forward muon detector, and forward proton detector. The uranium/liquid-argon calorimeters and central muon detector, remaining from Run I, are discussed briefly. We also present the associated electronics, triggering, and data acquisition systems, along with the design and implementation of software specific to D0.

425 citations

Journal Article
TL;DR: This article argues for the benefits and feasibility of a generic yet tailorable approach to component-based systems-building that offers a uniform programming model that is applicable in a wide range of systems-oriented target domains and deployment environments.
Abstract: Component-based software structuring principles are now commonplace at the application level; but componentization is far less established when it comes to building low-level systems software. Although there have been pioneering efforts in applying componentization to systems-building, these efforts have tended to target specific application domains (e.g., embedded systems, operating systems, communications systems, programmable networking environments, or middleware platforms). They also tend to be targeted at specific deployment environments (e.g., standard personal computer (PC) environments, network processors, or microcontrollers). The disadvantage of this narrow targeting is that it fails to maximize the genericity and abstraction potential of the component approach. In this article, we argue for the benefits and feasibility of a generic yet tailorable approach to component-based systems-building that offers a uniform programming model that is applicable in a wide range of systems-oriented target domains and deployment environments. The component model, called OpenCom, is supported by a reflective runtime architecture that is itself built from components. After describing OpenCom and evaluating its performance and overhead characteristics, we present and evaluate two case studies of systems we have built using OpenCom technology, thus illustrating its benefits and its general applicability.

407 citations
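
The OpenCom abstract describes a component model with a reflective runtime that is itself built from components. As a rough illustration of what components, receptacles, and a kernel that binds and can inspect them might look like, here is a toy Java sketch; the names (Kernel, Logger, bind) are hypothetical, and the real OpenCom model is considerably richer.

```java
// Illustrative sketch, not OpenCom's actual API: a toy component model in
// which components expose interfaces, declare receptacles (dependencies),
// and a tiny "kernel" performs binding and can reflect on it at runtime.
import java.util.*;

public class ComponentModelSketch {
    interface Clock { long now(); }

    // A component providing the Clock interface.
    static class SystemClock implements Clock {
        public long now() { return System.currentTimeMillis(); }
    }

    // A component with a receptacle: it needs a Clock but does not create one.
    static class Logger {
        Clock clock;  // receptacle, filled in by the kernel
        void log(String msg) { System.out.println(clock.now() + " " + msg); }
    }

    // The kernel records bindings so they can be inspected or later rebound.
    static class Kernel {
        final Map<String, String> bindings = new LinkedHashMap<>();
        void bind(Logger logger, Clock clock, String from, String to) {
            logger.clock = clock;
            bindings.put(from + ".clock", to);
        }
    }

    public static void main(String[] args) {
        Kernel kernel = new Kernel();
        Logger logger = new Logger();
        kernel.bind(logger, new SystemClock(), "logger", "systemClock");
        logger.log("hello from a composed system");
        System.out.println("bindings: " + kernel.bindings);  // reflective view
    }
}
```

Keeping the binding step outside the components themselves is what makes the composition inspectable and adaptable at runtime, which is the property the abstract attributes to OpenCom's reflective architecture.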