Author

Alfred C. Weaver

Bio: Alfred C. Weaver is an academic researcher from the University of Virginia. The author has contributed to research topics including the Internet and multicast, has an h-index of 24, and has co-authored 125 publications receiving 2,938 citations.


Papers
Journal Article
TL;DR: The mass adoption of social-networking websites points to an evolution in human social interaction, enabled by a new Web that creates a riper breeding ground for social networking and collaboration.
Abstract: In the context of today's electronic media, social networking has come to mean individuals using the Internet and Web applications to communicate in previously impossible ways. This is largely the result of a culture-wide paradigm shift in the uses and possibilities of the Internet itself. The current Web is a much different entity than the Web of a decade ago. This new focus creates a riper breeding ground for social networking and collaboration. In an abstract sense, social networking is about everyone. The mass adoption of social-networking Websites points to an evolution in human social interaction.

424 citations

Journal Article
TL;DR: This article focuses on the two most popular biometric techniques, fingerprints and iris scans, which are increasingly used as a hedge against identity theft.
Abstract: In this age of digital impersonation, biometric techniques are being used increasingly as a hedge against identity theft. The premise is that a biometric - a measurable physical characteristic or behavioral trait - is a more reliable indicator of identity than legacy systems such as passwords and PINs. There are three general ways to identify yourself to a computer system, based on what you know, what you have, or who you are. Biometrics belong to the "who you are" class and can be subdivided into behavioral and physiological approaches. Behavioral approaches include signature recognition, voice recognition, keystroke dynamics, and gait analysis. Physiological approaches include fingerprints; iris and retina scans; hand, finger, face, and ear geometry; hand vein and nail bed recognition; DNA; and palm prints. In this article, we focus on the two most popular biometric techniques: fingerprints and iris scans.
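A minimal sketch of how such biometric verification typically works, assuming nothing from the article itself: a matcher scores the similarity between an enrolled template and a live sample, and an acceptance threshold trades false accepts against false rejects. The cosine-similarity matcher and the 0.8 threshold below are arbitrary illustrative choices.

```python
import math

def verify(enrolled: list[float], sample: list[float],
           threshold: float = 0.8) -> bool:
    """Accept the claimed identity if the match score clears the threshold."""
    # Toy matcher: cosine similarity between feature vectors. Real fingerprint
    # and iris systems compare minutiae and iris codes instead, but the
    # threshold decision has the same shape.
    dot = sum(a * b for a, b in zip(enrolled, sample))
    norm = (math.sqrt(sum(a * a for a in enrolled)) *
            math.sqrt(sum(b * b for b in sample)))
    score = dot / norm if norm else 0.0
    # Raising the threshold lowers false accepts but raises false rejects.
    return score >= threshold
```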

416 citations

08 Jun 1994
TL;DR: The coming of gigabit networks makes possible a single nationwide virtual computer comprising geographically distributed high-performance machines and workstations; the paper describes an approach to constructing and exploiting such “metasystems”.
Abstract: The coming of gigabit networks makes possible the realization of a single nationwide virtual computer comprised of a variety of geographically distributed high-performance machines and workstations. To realize the potential that the physical infrastructure provides, software must be developed that is easy to use, supports large degrees of parallelism in applications code, and manages the complexity of the underlying physical system for the user. This paper describes our approach to constructing and exploiting such “metasystems”. Our approach inherits features of earlier work on parallel processing systems and heterogeneous distributed computing systems. In particular, we are building on Mentat, an object-oriented parallel processing system developed at the University of Virginia. This report is a preliminary document. We expect changes to occur as the architecture and design of the system mature.

202 citations

Proceedings Article
26 Oct 2010
TL;DR: A comparison of 9 state-of-the-art keyword search systems contradicts the retrieval effectiveness purported by existing evaluations, reinforces the need for standardized evaluation, and motivates the creation of new algorithms and indexing techniques that scale to meet both current and future workloads.
Abstract: With regard to keyword search systems for structured data, research during the past decade has largely focused on performance. Researchers have validated their work using ad hoc experiments that may not reflect real-world workloads. We illustrate the wide deviation in existing evaluations and present an evaluation framework designed to validate the next decade of research in this field. Our comparison of 9 state-of-the-art keyword search systems contradicts the retrieval effectiveness purported by existing evaluations and reinforces the need for standardized evaluation. Our results also suggest that there remains considerable room for improvement in this field. We found that many techniques cannot scale to even moderately-sized datasets that contain roughly a million tuples. Given that existing databases are considerably larger than this threshold, our results motivate the creation of new algorithms and indexing techniques that scale to meet both current and future workloads.
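As one concrete example of the retrieval-effectiveness measures such evaluations rest on (a generic sketch, not the paper's framework), mean average precision rewards systems for ranking relevant answers near the top:

```python
def average_precision(ranked: list[str], relevant: set[str]) -> float:
    """Average of the precision values at each rank where a relevant hit occurs."""
    hits, total = 0, 0.0
    for rank, doc_id in enumerate(ranked, start=1):
        if doc_id in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(results: dict[str, list[str]],
                           judgments: dict[str, set[str]]) -> float:
    """MAP over all queries; 1.0 means every query was ranked perfectly."""
    return sum(average_precision(results[q], judgments[q])
               for q in results) / len(results)
```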

108 citations


Cited by
Journal Article
Gerard J. Holzmann
01 May 1997
TL;DR: The paper gives an overview of the design and structure of the SPIN verifier, reviews its theoretical foundation, and surveys significant practical applications.
Abstract: SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. The paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications.
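SPIN's input language is Promela, but the core technique it implements is explicit-state exploration. The toy sketch below (my illustration, not SPIN's code) shows that idea in miniature: breadth-first search over a model's reachable states, returning a counterexample trace if any state violates a safety property.

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """BFS over the state space; returns a counterexample path or None."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_bad(state):
            return path                      # shortest counterexample trace
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                              # safety property holds

# Toy model: two processes, each idle (0), trying (1), or critical (2),
# with no lock, so mutual exclusion is violated and a trace is found.
def successors(state):
    for i in (0, 1):
        if state[i] < 2:
            s = list(state)
            s[i] += 1
            yield tuple(s)

print(check_safety((0, 0), successors, lambda s: s == (2, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```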

4,159 citations

Proceedings Article
22 Feb 1999
TL;DR: A new replication algorithm that tolerates Byzantine faults, works in asynchronous environments like the Internet, and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude.
Abstract: This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine-fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3% slower than a standard unreplicated NFS.
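The paper's resilience bound is simple to state: tolerating f Byzantine replicas requires n = 3f + 1 replicas, and each protocol step collects a quorum of 2f + 1 matching messages, so any two quorums intersect in at least f + 1 replicas, at least one of which is correct. A small sketch of that arithmetic:

```python
def min_replicas(f: int) -> int:
    return 3 * f + 1        # smallest n that tolerates f Byzantine faults

def quorum_size(f: int) -> int:
    return 2 * f + 1        # matching messages to collect before acting

for f in range(1, 4):
    n, q = min_replicas(f), quorum_size(f)
    # Two quorums of size q among n replicas overlap in 2q - n = f + 1
    # replicas, so they always share at least one correct replica.
    print(f"f={f}: n={n}, quorum={q}, guaranteed overlap={2 * q - n}")
```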

3,562 citations

01 Jan 2002
TL;DR: This presentation complements an earlier foundational article, “The Anatomy of the Grid,” by describing how Grid mechanisms can implement a service-oriented architecture, explaining how Grid functionality can be incorporated into a Web services framework, and illustrating how the architecture can be applied within commercial computing as a basis for distributed system integration.
Abstract: In both e-business and e-science, we often need to integrate services across distributed, heterogeneous, dynamic “virtual organizations” formed from the disparate resources within a single enterprise and/or from external resource sharing and service provider relationships. This integration can be technically challenging because of the need to achieve various qualities of service when running on top of different native platforms. We present an Open Grid Services Architecture that addresses these challenges. Building on concepts and technologies from the Grid and Web services communities, this architecture defines a uniform exposed service semantics (the Grid service); defines standard mechanisms for creating, naming, and discovering transient Grid service instances; provides location transparency and multiple protocol bindings for service instances; and supports integration with underlying native platform facilities. The Open Grid Services Architecture also defines, in terms of Web Services Description Language (WSDL) interfaces and associated conventions, mechanisms required for creating and composing sophisticated distributed systems, including lifetime management, change management, and notification. Service bindings can support reliable invocation, authentication, authorization, and delegation, if required. Our presentation complements an earlier foundational article, “The Anatomy of the Grid,” by describing how Grid mechanisms can implement a service-oriented architecture, explaining how Grid functionality can be incorporated into a Web services framework, and illustrating how our architecture can be applied within commercial computing as a basis for distributed system integration—within and across organizational domains.
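One mechanism the abstract highlights, transient service instances with explicit lifetime management, can be sketched as soft-state registration: an instance disappears unless its lease is renewed. The registry below is a hypothetical illustration; the names and API are mine, not OGSA's.

```python
import time

class ServiceRegistry:
    """Soft-state registry: instances expire unless their lease is renewed."""

    def __init__(self):
        self._leases: dict[str, float] = {}   # instance name -> expiry time

    def create(self, name: str, lifetime_s: float) -> None:
        self._leases[name] = time.time() + lifetime_s

    def renew(self, name: str, lifetime_s: float) -> None:
        if name in self._leases:
            self._leases[name] = time.time() + lifetime_s

    def discover(self) -> list[str]:
        now = time.time()
        # Expired instances are garbage-collected rather than deleted
        # explicitly, which is the point of soft-state lifetime management.
        self._leases = {n: t for n, t in self._leases.items() if t > now}
        return sorted(self._leases)
```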

3,455 citations

Journal Article
01 Jun 1997
TL;DR: The Globus system is intended to achieve a vertically integrated treatment of application, middleware, and network, with the long-term goal of an adaptive wide area resource environment (AWARE): an integrated set of higher-level services that enable applications to adapt to heterogeneous and dynamically changing metacomputing environments.
Abstract: The Globus system is intended to achieve a vertically integrated treatment of application, middleware, and network. A low-level toolkit provides basic mechanisms such as communication, authentication, network information, and data access. These mechanisms are used to construct various higher-level metacomputing services, such as parallel programming tools and schedulers. The long-term goal is to build an adaptive wide area resource environment (AWARE), an integrated set of higher-level services that enable applications to adapt to heterogeneous and dynamically changing metacomputing environments. Preliminary versions of Globus components were deployed successfully as part of the I-WAY networking experiment.

3,450 citations

Book
01 Jan 2000
TL;DR: The most up-to-date introduction to the field of computer networking, this book takes a top-down approach that starts at the application layer and works down the protocol stack, using the Internet as the main example of a network.
Abstract: From the Publisher: The most up-to-date introduction to the field of computer networking, this book takes a "top-down" approach: it starts at the application layer and works down the protocol stack, on the rationale that once readers understand the applications of networks, they can understand the network services needed to support those applications. Readers are exposed first to a concrete application and then drawn into some of the deeper issues surrounding networking. The book focuses on the Internet, rather than addressing it as one of many computer network technologies, further motivating the study of the material. It is designed for programmers who need to learn the fundamentals of computer networking, and its extensive material also makes it of great interest to networking professionals.

1,793 citations