
Showing papers by "IBM" published in 2000


Journal ArticleDOI
TL;DR: The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.
Abstract: The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.

6,527 citations
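As a concrete illustration of the design stages the review enumerates (pattern representation, selection of training and test samples, classifier design and learning, performance evaluation), here is a minimal, generic statistical-classification pipeline in Python using scikit-learn. It is not tied to any particular method surveyed in the review, and the dataset and parameter choices are arbitrary.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Pattern classes and sensed data: handwritten digit images with class labels.
X, y = load_digits(return_X_y=True)

# Selection of training and test samples.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Pattern representation / feature normalization.
scaler = StandardScaler().fit(X_tr)

# Classifier design and learning (a simple statistical classifier).
clf = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(X_tr), y_tr)

# Performance evaluation.
print("accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
```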


Journal ArticleDOI
Shouheng Sun1, Christopher B. Murray1, Dieter Weller1, Liesl Folks1, Andreas Moser1 
17 Mar 2000-Science
TL;DR: Thermal annealing converts the internal particle structure from a chemically disordered face-centered cubic phase to the chemically ordered face-centered tetragonal phase and transforms the nanoparticle superlattices into ferromagnetic nanocrystal assemblies that can support high-density magnetization reversal transitions.
Abstract: Synthesis of monodisperse iron-platinum (FePt) nanoparticles by reduction of platinum acetylacetonate and decomposition of iron pentacarbonyl in the presence of oleic acid and oleyl amine stabilizers is reported. The FePt particle composition is readily controlled, and the size is tunable from 3- to 10-nanometer diameter with a standard deviation of less than 5%. These nanoparticles self-assemble into three-dimensional superlattices. Thermal annealing converts the internal particle structure from a chemically disordered face-centered cubic phase to the chemically ordered face-centered tetragonal phase and transforms the nanoparticle superlattices into ferromagnetic nanocrystal assemblies. These assemblies are chemically and mechanically robust and can support high-density magnetization reversal transitions.

5,568 citations


Journal ArticleDOI
16 May 2000
TL;DR: This work considers the concrete case of building a decision-tree classifier from training data in which the values of individual records have been perturbed and proposes a novel reconstruction procedure to accurately estimate the distribution of original data values.
Abstract: A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question. Since the primary task in data mining is the development of models about aggregated data, can we develop accurate models without access to precise information in individual data records? We consider the concrete case of building a decision-tree classifier from training data in which the values of individual records have been perturbed. The resulting data records look very different from the original records and the distribution of data values is also very different from the original distribution. While it is not possible to accurately estimate original values in individual data records, we propose a novel reconstruction procedure to accurately estimate the distribution of original data values. By using these reconstructed distributions, we are able to build classifiers whose accuracy is comparable to the accuracy of classifiers built with the original data.

3,173 citations
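The perturb-then-reconstruct idea can be sketched in a few lines: each value is released only after adding independent noise, and an estimate of the original value distribution (not of individual values) is then recovered from the noisy data. The iterative Bayes-style update below is a simplified stand-in for the paper's reconstruction procedure; the attribute, noise level, and constants are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "age" attribute; the per-record true values are considered private.
true_vals = rng.normal(40, 10, size=5000)

# Each record is released only after adding independent noise (randomization).
noise_sd = 15.0
released = true_vals + rng.normal(0, noise_sd, size=true_vals.shape)

# Reconstruct the *distribution* of original values from the perturbed data with a
# simple iterative Bayes estimate over histogram bins (a simplified version of the
# reconstruction idea, not the paper's exact procedure).
bins = np.linspace(0, 80, 41)
centers = 0.5 * (bins[:-1] + bins[1:])
f = np.full(len(centers), 1.0 / len(centers))      # uniform initial guess

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

for _ in range(100):
    # P(original bin | released value) under the current estimate f
    lik = gauss(released[:, None], centers[None, :], noise_sd) * f[None, :]
    post = lik / lik.sum(axis=1, keepdims=True)
    f = post.mean(axis=0)                           # updated bin probabilities

print("estimated mean of originals:", (centers * f).sum())   # close to 40 despite the noise
```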


Journal ArticleDOI
16 Mar 2000-Nature
TL;DR: In information processing, as in physics, the classical world view provides an incomplete approximation to an underlying quantum reality that can be harnessed to break codes, create unbreakable codes, and speed up otherwise intractable computations.
Abstract: In information processing, as in physics, our classical world view provides an incomplete approximation to an underlying quantum reality. Quantum effects like interference and entanglement play no direct role in conventional information processing, but they can--in principle now, but probably eventually in practice--be harnessed to break codes, create unbreakable codes, and speed up otherwise intractable computations.

3,080 citations


Journal ArticleDOI
01 Jun 2000
TL;DR: The study of the web as a graph yields valuable insight into web algorithms for crawling, searching and community discovery, and the sociological phenomena which characterize its evolution.
Abstract: The study of the web as a graph is not only fascinating in its own right, but also yields valuable insight into web algorithms for crawling, searching and community discovery, and the sociological phenomena which characterize its evolution. We report on experiments on local and global properties of the web graph using two Altavista crawls each with over 200 million pages and 1.5 billion links. Our study indicates that the macroscopic structure of the web is considerably more intricate than suggested by earlier experiments on a smaller scale.

2,973 citations
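A minimal sketch of the kind of macroscopic analysis described above: find the largest strongly connected component of a directed graph, together with the pages that can reach it and the pages reachable from it. The sketch uses a tiny synthetic graph and networkx, and is only meant to show the mechanics, not to reproduce the paper's crawl-scale measurements.

```python
import networkx as nx

# Toy directed graph standing in for a Web crawl (node = page, edge = hyperlink).
G = nx.gnp_random_graph(2000, 0.002, seed=1, directed=True)

# Largest strongly connected component (the "core" of the macroscopic structure).
core = max(nx.strongly_connected_components(G), key=len)
rep = next(iter(core))

reach_from_core = nx.descendants(G, rep) | {rep}   # core plus pages reachable from it
reach_to_core = nx.ancestors(G, rep) | {rep}       # core plus pages that can reach it

out_part = reach_from_core - core
in_part = reach_to_core - core
print("core:", len(core), "in:", len(in_part), "out:", len(out_part))
```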


Journal ArticleDOI
TL;DR: W4 employs a combination of shape analysis and tracking to locate people and their parts and to create models of people's appearance so that they can be tracked through interactions such as occlusions.
Abstract: W4 is a real-time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. W4 employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. W4 can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. W4 can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320×240 resolution images on a 400 MHz dual-Pentium II PC.

2,870 citations
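The foreground-region detection the abstract refers to can be illustrated with a heavily simplified per-pixel background model: learn a min/max intensity range for each pixel from background frames, then flag pixels that leave that range. W4's actual background model, shape analysis, and tracking are considerably more sophisticated; the data and thresholds below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gray-scale sequence: static background plus a bright blob that starts
# moving only after the background-training frames.
H, W, N = 120, 160, 30
frames = rng.normal(100, 3, size=(N, H, W))
for t in range(10, N):
    x0 = 10 + 4 * (t - 10)
    frames[t, 40:90, x0:x0 + 10] += 80

# Learn a per-pixel min/max background model from the first 10 (empty) frames,
# in the spirit of W4's background model but greatly simplified.
bg_min = frames[:10].min(axis=0)
bg_max = frames[:10].max(axis=0)
margin = 15.0

# A pixel in a later frame is foreground if it falls outside the learned range.
frame = frames[25]
foreground = (frame < bg_min - margin) | (frame > bg_max + margin)
print("foreground pixels flagged:", int(foreground.sum()))
```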


Journal ArticleDOI
30 Nov 2000-Nature
TL;DR: ‘mono-molecular’ electronics, in which a single molecule will integrate the elementary functions and interconnections required for computation, is proposed.
Abstract: The semiconductor industry has seen a remarkable miniaturization trend, driven by many scientific and technological innovations. But if this trend is to continue, and provide ever faster and cheaper computers, the size of microelectronic circuit components will soon need to reach the scale of atoms or molecules—a goal that will require conceptually new device structures. The idea that a few molecules, or even a single molecule, could be embedded between electrodes and perform the basic functions of digital electronics—rectification, amplification and storage—was first put forward in the mid-1970s. The concept is now realized for individual components, but the economic fabrication of complete circuits at the molecular level remains challenging because of the difficulty of connecting molecules to one another. A possible solution to this problem is ‘mono-molecular’ electronics, in which a single molecule will integrate the elementary functions and interconnections required for computation.

2,853 citations


Journal ArticleDOI
15 Dec 2000-Science
TL;DR: A simple, robust, chemical route to the fabrication of ultrahigh-density arrays of nanopores with high aspect ratios using the equilibrium self-assembled morphology of asymmetric diblock copolymers is shown.
Abstract: We show a simple, robust, chemical route to the fabrication of ultrahigh-density arrays of nanopores with high aspect ratios using the equilibrium self-assembled morphology of asymmetric diblock copolymers. The dimensions and lateral density of the array are determined by segmental interactions and the copolymer molecular weight. Through direct current electrodeposition, we fabricated vertical arrays of nanowires with densities in excess of 1.9 × 10¹¹ wires per square centimeter. We found markedly enhanced coercivities with ferromagnetic cobalt nanowires that point toward a route to ultrahigh-density storage media. The copolymer approach described is practical, parallel, compatible with current lithographic processes, and amenable to multilayered device fabrication.

2,106 citations


Journal ArticleDOI
David P. DiVincenzo1
TL;DR: In this article, five requirements for the physical implementation of quantum computation are discussed; these, plus two relating to the communication of quantum information, are extensively explored and related to the many schemes in atomic physics, quantum optics, nuclear and electron magnetic resonance spectroscopy, superconducting electronics, and quantum-dot physics for achieving quantum computing.
Abstract: After a brief introduction to the principles and promise of quantum information processing, the requirements for the physical implementation of quantum computation are discussed. These five requirements, plus two relating to the communication of quantum information, are extensively explored and related to the many schemes in atomic physics, quantum optics, nuclear and electron magnetic resonance spectroscopy, superconducting electronics, and quantum-dot physics, for achieving quantum computing.

1,754 citations


Journal ArticleDOI
14 Apr 2000-Science
TL;DR: The specific transduction, via surface stress changes, of DNA hybridization and receptor-ligand binding into a direct nanomechanical response of microfabricated cantilevers is reported, demonstrating the wide-ranging applicability of nanomechanical transduction to detect biomolecular recognition.
Abstract: We report the specific transduction, via surface stress changes, of DNA hybridization and receptor-ligand binding into a direct nanomechanical response of microfabricated cantilevers. Cantilevers in an array were functionalized with a selection of biomolecules. The differential deflection of the cantilevers was found to provide a true molecular recognition signal despite large nonspecific responses of individual cantilevers. Hybridization of complementary oligonucleotides shows that a single base mismatch between two 12-mer oligonucleotides is clearly detectable. Similar experiments on protein A-immunoglobulin interactions demonstrate the wide-ranging applicability of nanomechanical transduction to detect biomolecular recognition.

1,729 citations


Journal ArticleDOI
David P. DiVincenzo1
TL;DR: In this paper, the author provides an overview of the common objectives of the investigations reported in the remainder of this special issue and discusses the requirements for the physical implementation of quantum computation.
Abstract: After a brief introduction to the principles and promise of quantum information processing, the requirements for the physical implementation of quantum computation are discussed. These five requirements, plus two relating to the communication of quantum information, are extensively explored and related to the many schemes in atomic physics, quantum optics, nuclear and electron magnetic resonance spectroscopy, superconducting electronics, and quantum-dot physics, for achieving quantum computing. I. INTRODUCTION The advent of quantum information processing, as an abstract concept, has given birth to a great deal of new thinking, of a very concrete form, about how to create physical computing devices that operate in the hitherto unexplored quantum mechanical regime. The efforts now underway to produce working laboratory devices that perform this profoundly new form of information processing are the subject of this book. In this chapter I provide an overview of the common objectives of the investigations reported in the remainder of this special issue. The scope of the approaches, proposed and underway, to the implementation of quantum hardware is remarkable, emerging from specialties in atomic physics (1), in quantum optics (2), in nuclear (3) and electron (4) magnetic resonance spectroscopy, in superconducting device physics (5), in electron physics (6), and in mesoscopic and quantum dot research (7). This amazing variety of approaches has arisen because, as we will see, the principles of quantum computing are posed using the most fundamental ideas of quantum mechanics, ones whose embodiment can be contemplated in virtually every branch of quantum physics. The interdisciplinary spirit which has been fostered as a result is one of the most pleasant and remarkable features of this field. The excitement and freshness that has been produced bodes well for the prospect for discovery, invention, and innovation in this endeavor.

Journal ArticleDOI
Ran Canetti1
TL;DR: In this article, the authors present general definitions of security for multiparty cryptographic protocols, with focus on the task of evaluating a probabilistic function of the parties' inputs, and show that, with respect to these definitions, security is preserved under a natural composition operation.
Abstract: We present general definitions of security for multiparty cryptographic protocols, with focus on the task of evaluating a probabilistic function of the parties' inputs. We show that, with respect to these definitions, security is preserved under a natural composition operation. The definitions follow the general paradigm of known definitions; yet some substantial modifications and simplifications are introduced. The composition operation is the natural "subroutine substitution" operation, formalized by Micali and Rogaway. We consider several standard settings for multiparty protocols, including the cases of eavesdropping, Byzantine, nonadaptive and adaptive adversaries, as well as the information-theoretic and the computational models. In particular, in the computational model we provide the first definition of security of protocols that is shown to be preserved under composition.

Journal ArticleDOI
C. C. Tsuei1, John R. Kirtley1
TL;DR: The recent development of phase-sensitive tests, combined with the refinement of several other symmetry-sensitive techniques, has for the most part settled this controversy in favor of predominantly d-wave symmetry for a number of optimally hole- and electron-doped cuprates, as mentioned in this paper.
Abstract: Pairing symmetry in the cuprate superconductors is an important and controversial topic. The recent development of phase-sensitive tests, combined with the refinement of several other symmetry-sensitive techniques, has for the most part settled this controversy in favor of predominantly d-wave symmetry for a number of optimally hole- and electron-doped cuprates. This paper begins by reviewing the concepts of the order parameter, symmetry breaking, and symmetry classification in the context of the cuprates. After a brief survey of some of the key non-phase-sensitive tests of pairing symmetry, the authors extensively review the phase-sensitive methods, which use the half-integer flux-quantum effect as an unambiguous signature for d-wave pairing symmetry. A number of related symmetry-sensitive experiments are described. The paper concludes with a brief discussion of the implications, both fundamental and applied, of the predominantly d-wave pairing symmetry in the cuprates.

Journal ArticleDOI
TL;DR: In this article, the authors present a review of the literature on high Ku alternative media, both for longitudinal and perpendicular recording, with data on sputtered and evaporated thin FePt films, with coercivities exceeding 10000 Oe.
Abstract: High-Ku (uniaxial magnetocrystalline anisotropy) materials are generally attractive for ultrahigh-density magnetic recording applications as they allow smaller, thermally stable media grains. Prominent candidates are rare-earth transition metals (Co5Sm, ...) and tetragonal intermetallic compounds (L10 phases FePt, CoPt, ...), which have 20-40 times higher Ku than today's hexagonal Co-alloy based media. This allows for about 3 times smaller grain diameters, D, and a potential 10-fold areal density increase (∝ 1/D²), well beyond the currently projected 40-100 Gbits/in² mark. Realization of such densities will depend on a large number of factors, not all related to solving media microstructure problems. In particular, it is at present not known how to record into such media, which may require write fields on the order of 10-100 kOe. Despite this unsolved problem, there is considerable interest in high-Ku alternative media, both for longitudinal and perpendicular recording. Activities in this area will be reviewed and data on sputtered and evaporated thin FePt films, with coercivities exceeding 10,000 Oe, will be presented.
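The scaling argument in the abstract follows from two relations: thermal stability fixes the product of anisotropy and grain volume, and areal density scales inversely with the square of the grain diameter. In rough, schematic form:

```latex
% Thermal stability requires the energy barrier K_u V (V = grain volume) to exceed
% a fixed multiple of k_B T, so V scales as 1/K_u and the grain diameter D as K_u^{-1/3};
% a 20-40x larger K_u therefore allows roughly 3x smaller D. With areal density
% scaling as 1/D^2, that is about a factor-of-ten density gain.
\[
K_u V \gtrsim c\,k_B T \;\Longrightarrow\; D \propto K_u^{-1/3},
\qquad
\text{density} \propto \frac{1}{D^{2}} \;\Longrightarrow\;
\frac{\text{density}(D/3)}{\text{density}(D)} = 3^{2} \approx 10 .
\]
```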

Journal ArticleDOI
TL;DR: A filter-based fingerprint matching algorithm which uses a bank of Gabor filters to capture both local and global details in a fingerprint as a compact fixed length FingerCode and is able to achieve a verification accuracy which is only marginally inferior to the best results of minutiae-based algorithms published in the open literature.
Abstract: Biometrics-based verification, especially fingerprint-based identification, is receiving a lot of attention. There are two major shortcomings of the traditional approaches to fingerprint representation. For a considerable fraction of population, the representations based on explicit detection of complete ridge structures in the fingerprint are difficult to extract automatically. The widely used minutiae-based representation does not utilize a significant component of the rich discriminatory information available in the fingerprints. Local ridge structures cannot be completely characterized by minutiae. Further, minutiae-based matching has difficulty in quickly matching two fingerprint images containing a different number of unregistered minutiae points. The proposed filter-based algorithm uses a bank of Gabor filters to capture both local and global details in a fingerprint as a compact fixed length FingerCode. The fingerprint matching is based on the Euclidean distance between the two corresponding FingerCodes and hence is extremely fast. We are able to achieve a verification accuracy which is only marginally inferior to the best results of minutiae-based algorithms published in the open literature. Our system performs better than a state-of-the-art minutiae-based system when the performance requirement of the application system does not demand a very low false acceptance rate. Finally, we show that the matching performance can be improved by combining the decisions of the matchers based on complementary (minutiae-based and filter-based) fingerprint information.
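A rough sketch of the filterbank-plus-distance idea: filter an image with Gabor kernels at several orientations, pool the responses into one fixed-length vector, and match two images by the Euclidean distance between their vectors. The paper tessellates the region around the fingerprint core into sectors and uses the average absolute deviation of filter responses; the version below substitutes a plain rectangular grid and mean absolute response to keep it short, and all parameters are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=17):
    """Real Gabor kernel at orientation theta (generic parameters, not the paper's)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def fingercode_like(img, n_orient=8, grid=8):
    """Filter with a small Gabor bank and pool mean absolute response over a coarse
    grid of cells, producing one compact fixed-length vector per image."""
    feats = []
    for k in range(n_orient):
        resp = np.abs(fftconvolve(img, gabor_kernel(np.pi * k / n_orient), mode="same"))
        h, w = resp.shape
        cells = resp[: h - h % grid, : w - w % grid]
        cells = cells.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
        feats.append(cells.ravel())
    return np.concatenate(feats)

# Matching is just a Euclidean distance between the two fixed-length vectors.
rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))
score = np.linalg.norm(fingercode_like(a) - fingercode_like(b))
print("distance:", round(float(score), 3))
```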

Book
15 Jan 2000
TL;DR: RTSJ's features and the thinking behind the specification's design are explained, which aims to provide a platform-a Java execution environment and application program interface (API) that lets programmers correctly reason about the temporal behavior of executing software.
Abstract: New languages, programming disciplines, operating systems, and software engineering techniques sometimes hold considerable potential for real-time software developers. A promising area of interest-but one fairly new to the real-time community-is object-oriented programming. Java, for example, draws heavily from object orientation and is highly suitable for extension to real-time and embedded systems. Recognizing this fit between Java and real-time software development, the Real-Time for Java Experts Group (RTJEG) began developing the real-time specification for Java (RTSJ) in March 1999 under the Java Community Process. This article explains RTSJ's features and the thinking behind the specification's design. The goal of the RTJEG, of which the authors are both members, was to provide a platform-a Java execution environment and application program interface (API)-that lets programmers correctly reason about the temporal behavior of executing software.

Proceedings ArticleDOI
14 May 2000
TL;DR: This work proposes two efficient schemes, TESLA and EMSS, for secure lossy multicast streams, and offers sender authentication, strong loss robustness, high scalability and minimal overhead at the cost of loose initial time synchronization and slightly delayed authentication.
Abstract: Multicast stream authentication and signing is an important and challenging problem. Applications include the continuous authentication of radio and TV Internet broadcasts, and authenticated data distribution by satellite. The main challenges are fourfold. First, authenticity must be guaranteed even when only the sender of the data is trusted. Second, the scheme needs to scale to potentially millions of receivers. Third, streamed media distribution can have high packet loss. Finally, the system needs to be efficient to support fast packet rates. We propose two efficient schemes, TESLA and EMSS, for secure lossy multicast streams. TESLA (Timed Efficient Stream Loss-tolerant Authentication) offers sender authentication, strong loss robustness, high scalability, and minimal overhead at the cost of loose initial time synchronization and slightly delayed authentication. EMSS (Efficient Multi-chained Stream Signature) provides nonrepudiation of origin, high loss resistance, and low overhead, at the cost of slightly delayed verification.
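TESLA's core mechanism, delayed disclosure of keys from a one-way chain, can be sketched as follows: the sender commits to the end of a hash chain, MACs packet i under key K_i, and discloses K_i only in a later packet; receivers buffer packets, check a disclosed key against the commitment by hashing it forward, and only then verify the buffered MAC. The sketch omits TESLA's time-synchronization security condition and key-derivation step, and the packet format is invented.

```python
import hashlib, hmac

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Sender: build a one-way key chain K_n -> ... -> K_0 and publish K_0 as the commitment.
n = 8
chain = [b"\x13" * 32]                       # K_n (would be random in practice)
for _ in range(n):
    chain.append(H(chain[-1]))
chain.reverse()                              # chain[i] = K_i, with K_i = H(K_{i+1})
commitment = chain[0]

def send(i: int, payload: bytes):
    """Packet i carries a MAC under K_i, plus the key disclosed one interval late."""
    mac = hmac.new(chain[i], payload, hashlib.sha256).digest()
    disclosed = chain[i - 1] if i >= 1 else b""
    return payload, mac, disclosed

# Receiver: buffer packets; once K_i is disclosed later, (a) authenticate the key
# against the commitment by hashing it back to K_0, then (b) check the buffered MAC.
def key_ok(k: bytes, i: int) -> bool:
    for _ in range(i):
        k = H(k)
    return k == commitment

payload, mac, _ = send(3, b"stream chunk 3")     # buffered; not yet verifiable
_, _, k3 = send(4, b"stream chunk 4")            # packet 4 discloses K_3
assert key_ok(k3, 3)
assert hmac.compare_digest(hmac.new(k3, payload, hashlib.sha256).digest(), mac)
print("packet 3 authenticated after K_3 was disclosed")
```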

Journal ArticleDOI
Jonathan Z. Sun1
TL;DR: In this paper, the author examined the consequence of spin-current-induced angular momentum deposition in a monodomain Stoner-Wohlfarth magnetic body using the Landau-Lifshitz-Gilbert equation with a phenomenological damping coefficient.
Abstract: I examined the consequence of a spin-current-induced angular momentum deposition in a monodomain Stoner-Wohlfarth magnetic body. The magnetic dynamics of the particle are modeled using the Landau-Lifshitz-Gilbert equation with a phenomenological damping coefficient α. Two magnetic potential landscapes are studied in detail: one uniaxial, the other uniaxial in combination with an easy-plane potential term that could be used to model a thin-film geometry with demagnetization. Quantitative predictions are obtained for comparison with experiments.
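In schematic form, the dynamics being modeled are Landau-Lifshitz-Gilbert precession and damping plus a spin-transfer torque term driven by the spin current; the notation and prefactors below are simplified relative to the paper.

```latex
% Schematic monodomain dynamics: LLG precession, Gilbert damping, and a
% spin-transfer torque term (prefactors simplified relative to the paper).
\[
\frac{d\hat{\mathbf{m}}}{dt}
  = -\gamma\,\hat{\mathbf{m}}\times\mathbf{H}_{\mathrm{eff}}
    + \alpha\,\hat{\mathbf{m}}\times\frac{d\hat{\mathbf{m}}}{dt}
    + \gamma\,a_J\,\hat{\mathbf{m}}\times\bigl(\hat{\mathbf{m}}\times\hat{\mathbf{n}}_s\bigr),
\qquad
a_J \propto \frac{\hbar I}{2 e M_s V},
\]
% where m-hat is the unit magnetization, H_eff collects the uniaxial and easy-plane
% anisotropy fields, alpha is the phenomenological damping, I the spin-polarized
% current, n_s-hat the incoming spin polarization direction, and M_s V the total moment.
```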

Journal ArticleDOI
Thomas Erickson1, Wendy A. Kellogg1
TL;DR: A vision of knowledge communities, conversationally based systems that support the creation, management and reuse of knowledge in a social context, is developed and it is suggested that they have three characteristics—visibility, awareness, and accountability—which enable people to draw upon their experience and expertise to structure their interactions with one another.
Abstract: We are interested in designing systems that support communication and collaboration among large groups of people over computing networks. We begin by asking what properties of the physical world support graceful human-human communication in face-to-face situations, and argue that it is possible to design digital systems that support coherent behavior by making participants and their activities visible to one another. We call such systems “socially translucent systems” and suggest that they have three characteristics—visibility, awareness, and accountability—which enable people to draw upon their experience and expertise to structure their interactions with one another. To motivate and focus our ideas we develop a vision of knowledge communities, conversationally based systems that support the creation, management and reuse of knowledge in a social context. We describe our experience in designing and deploying one layer of functionality for knowledge communities, embodied in a working system called “Babble”, and discuss research issues raised by a socially translucent approach to design.

Posted Content
TL;DR: In this paper, the authors take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the schemes that result from implementing the random oracle by so called "cryptographic hash functions".
Abstract: We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.

Patent
Pat Allen Buckland1
30 Nov 2000
TL;DR: In this article, a method and system for managing data in a data processing system are disclosed, where data is stored in a first portion of the main memory of the system and information is then stored in the second portion of main memory.
Abstract: A method and system for managing data in a data processing system are disclosed. Initially, data is stored in a first portion of the main memory of the system. Responsive to storing the data in the first portion of main memory, information is then stored in a second portion of the main memory. The information stored in the second portion of main memory is indicative of the data stored in the first portion. In an embodiment in which the data processing system is implemented as a multi-node system such as a NUMA system, the first portion of the main memory is in the main memory of a first node of the system and the second portion of the main memory is in the main memory of a second node of the system. In one embodiment, storing information in the second portion of the main memory is achieved by storing a copy of the data in the second portion. If a fault in the first portion of the main memory is detected, the information in the second main memory portion is retrieved and stored to a persistent storage device. In another embodiment, storing information in the second portion of the main memory includes calculating a value based on the corresponding contents of other portions of the main memory using an algorithm such as checksum, parity, or ECC, and storing the calculated value in the second portion. In one embodiment, the main memory of at least one of the nodes is connectable to a persistent source of power, such as a battery, such that the main memory contents may be preserved if system power is disabled.
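The parity variant described in the abstract can be sketched directly: one node holds the XOR of the corresponding bytes of the other nodes' memory regions, so that any single node's contents can be rebuilt after a fault. Node names and contents below are invented for illustration, and the checksum/ECC variants are not shown.

```python
from functools import reduce

# Toy model: keep, in a "parity node", the XOR of the corresponding bytes of every
# other node's memory region, so any single node's contents can be rebuilt.
node_mem = {
    "node0": bytes([1, 2, 3, 4]),
    "node1": bytes([9, 9, 9, 9]),
    "node2": bytes([7, 0, 7, 0]),
}

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor_bytes, node_mem.values())

# Simulated fault on node1: rebuild its data from the survivors plus parity.
survivors = [v for k, v in node_mem.items() if k != "node1"]
recovered = reduce(xor_bytes, survivors, parity)
assert recovered == node_mem["node1"]
print("recovered node1 contents:", list(recovered))
```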

Book ChapterDOI
Victor Shoup1
14 May 2000
TL;DR: The RSA threshold signature scheme presented in this article is robust and unforgeable in the random oracle model, assuming the RSA problem is hard, and the signature share generation and verification is completely non-interactive.
Abstract: We present an RSA threshold signature scheme. The scheme enjoys the following properties: 1. it is unforgeable and robust in the random oracle model, assuming the RSA problem is hard; 2. signature share generation and verification is completely non-interactive; 3. the size of an individual signature share is bounded by a constant times the size of the RSA modulus.

Journal ArticleDOI
TL;DR: A new framework for distilling information from word lattices is described to improve the accuracy of the speech recognition output and obtain a more perspicuous representation of a set of alternative hypotheses.

Journal ArticleDOI
TL;DR: As people become more connected electronically, the ability to achieve a highly accurate automatic personal identification system is substantially more critical and organizations are looking to automated identity authentication systems to improve customer satisfaction and operating efficiency.
Abstract: For this reason, more and more organizations are looking to automated identity authentication systems to improve customer satisfaction and operating efficiency as well as to save critical resources (see Figure 1). Furthermore, as people become more connected electronically, the ability to achieve a highly accurate automatic personal identification system is substantially more critical [5]. Personal identification is the process of associating a particular individual with an identity. Identification can be in the form of verification (also known as authentication), which entails authenticating a claimed identity (“Am I who I claim I am?”), or recognition (also known as identification), which entails determining the identity of a given person from a database of persons known to the system (“Who am I?”). Knowledge-based and token-based automatic personal identification approaches have been the two traditional techniques widely used [8]. Token-based approaches use something you have to make a personal identification, such as a passport, driver’s license, ID card, credit card, or keys. Knowledge-based approaches use something you know to make a personal identification, such as a password or a personal identification number (PIN). Since these traditional approaches are not based on any inherent attributes of an individual to make a personal identification, they suffer from the

Book ChapterDOI
20 Aug 2000
TL;DR: A group signature scheme allows a group member to sign messages anonymously on behalf of the group, but in the case of a dispute the identity of a signature's originator can be revealed (only) by a designated entity.
Abstract: A group signature scheme allows a group member to sign messages anonymously on behalf of the group. However, in the case of a dispute, the identity of a signature's originator can be revealed (only) by a designated entity. The interactive counterparts of group signatures are identity escrow schemes or group identification schemes with revocable anonymity. This work introduces a new provably secure group signature and a companion identity escrow scheme that are significantly more efficient than the state of the art. In its interactive, identity escrow form, our scheme is proven secure and coalition-resistant under the strong RSA and the decisional Diffie-Hellman assumptions. The security of the noninteractive variant, i.e., the group signature scheme, relies additionally on the Fiat-Shamir heuristic (also known as the random oracle model).

Posted Content
TL;DR: It is argued that the right way to understand distributed protocols is by considering how messages change the state of knowledge of a system, and a hierarchy of knowledge states that a system may be in is presented.
Abstract: Reasoning about knowledge seems to play a fundamental role in distributed systems. Indeed, such reasoning is a central part of the informal intuitive arguments used in the design of distributed protocols. Communication in a distributed system can be viewed as the act of transforming the system's state of knowledge. This paper presents a general framework for formalizing and reasoning about knowledge in distributed systems. We argue that states of knowledge of groups of processors are useful concepts for the design and analysis of distributed protocols. In particular, distributed knowledge corresponds to knowledge that is "distributed" among the members of the group, while common knowledge corresponds to a fact being "publicly known". The relationship between common knowledge and a variety of desirable actions in a distributed system is illustrated. Furthermore, it is shown that, formally speaking, in practical systems common knowledge cannot be attained. A number of weaker variants of common knowledge that are attainable in many cases of interest are introduced and investigated.
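A toy possible-worlds example of the distinction the abstract draws: each of two processors observes only part of the state, so neither knows a fact individually, yet the fact is distributed knowledge because pooling their observations pins it down. The model and the fact are invented purely for illustration.

```python
from itertools import product

# Toy possible-worlds model for two processors. A world is (x, y); processor 0
# observes x, processor 1 observes y. The fact phi is "x == y".
worlds = list(product([0, 1], repeat=2))
actual = (1, 1)

def indist(agent, w, v):
    return w[agent] == v[agent]            # an agent only sees its own component

def knows(agent, fact, w):
    return all(fact(v) for v in worlds if indist(agent, w, v))

phi = lambda w: w[0] == w[1]

# Individually, neither processor knows phi at the actual world ...
print(knows(0, phi, actual), knows(1, phi, actual))            # False False

# ... but phi is *distributed* knowledge: pooling both observations leaves only
# the worlds indistinguishable from `actual` for BOTH agents.
pooled = [v for v in worlds if indist(0, actual, v) and indist(1, actual, v)]
print(all(phi(v) for v in pooled))                              # True
```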

Journal ArticleDOI
TL;DR: First-principles calculations of the current-voltage characteristics of a molecular device are reported and show that the shape of the I-V curve is largely determined by the electronic structure of the molecule, while the presence of single atoms at the molecule-electrode interface plays a key role in determining the absolute value of the current.
Abstract: We report first-principles calculations of the current-voltage (I-V) characteristics of a molecular device and compare with experiment. We find that the shape of the I-V curve is largely determined by the electronic structure of the molecule, while the presence of single atoms at the molecule-electrode interface plays a key role in determining the absolute value of the current. The results show that such simulations would be useful for the design of future microelectronic devices for which the Boltzmann-equation approach is no longer applicable.

Journal ArticleDOI
TL;DR: An overview of the research effort on volume holographic digital data storage is presented, highlighting new insights gained in the design and operation of working storage platforms, novel optical components and techniques, data coding and signal processing algorithms, systems tradeoffs, materials testing and tradeoff, and photon-gated storage materials.
Abstract: We present an overview of our research effort on volume holographic digital data storage. Innovations, developments, and new insights gained in the design and operation of working storage platforms, novel optical components and techniques, data coding and signal processing algorithms, systems tradeoffs, materials testing and tradeoffs, and photon-gated storage materials are summarized.

Proceedings ArticleDOI
12 Nov 2000
TL;DR: The results are twofold: it is shown that graphs generated using the proposed random graph models exhibit the statistics observed on the Web graph, and additionally, that natural graph models proposed earlier do not exhibit them.
Abstract: The Web may be viewed as a directed graph each of whose vertices is a static HTML Web page, and each of whose edges corresponds to a hyperlink from one Web page to another. We propose and analyze random graph models inspired by a series of empirical observations on the Web. Our graph models differ from the traditional G_{n,p} models in two ways: 1. Independently chosen edges do not result in the statistics (degree distributions, clique multitudes) observed on the Web. Thus, edges in our model are statistically dependent on each other. 2. Our model introduces new vertices in the graph as time evolves. This captures the fact that the Web is changing with time. Our results are twofold: we show that graphs generated using our model exhibit the statistics observed on the Web graph, and additionally, that natural graph models proposed earlier do not exhibit them. This remains true even when these earlier models are generalized to account for the arrival of vertices over time. In particular, the sparse random graphs in our models exhibit properties that do not arise in far denser random graphs generated by Erdős-Rényi models.
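The flavor of an evolving, edge-dependent model can be sketched with a simple "copying"-style process: each new page either copies an out-link of a randomly chosen earlier page or links uniformly at random. This is a simplified stand-in for the models analyzed in the paper, with arbitrary parameters, but it already produces the heavy-tailed in-degrees that independent G_{n,p} edges do not.

```python
import random
from collections import Counter

random.seed(0)

# Evolving "copying"-style process (a simplified sketch, not the paper's exact model):
# each new page emits d links, copying the destinations of a randomly chosen earlier
# page with probability 1 - alpha and picking a uniform random target otherwise.
d, alpha, n = 5, 0.2, 20000
out_edges = [[0] * d for _ in range(d + 1)]        # small seed graph

for v in range(d + 1, n):
    proto = random.randrange(v)
    targets = []
    for j in range(d):
        if random.random() < alpha:
            targets.append(random.randrange(v))    # fresh uniform link
        else:
            targets.append(out_edges[proto][j])    # copied link
    out_edges.append(targets)

indeg = Counter(t for targets in out_edges for t in targets)
# Heavy tail: a few pages accumulate very large in-degree, unlike a G_{n,p} graph.
print("max in-degree:", max(indeg.values()))
print("pages with in-degree >= 50:", sum(1 for v in indeg.values() if v >= 50))
```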

Journal ArticleDOI
16 Nov 2000-Nature
TL;DR: An explicit scheme is introduced in which the Heisenberg interaction alone suffices to implement exactly any quantum computer circuit, at a price of a factor of three in additional qubits, and about a factor of ten in additional two-qubit operations.
Abstract: Various physical implementations of quantum computers are being investigated, although the requirements that must be met to make such devices a reality in the laboratory at present involve capabilities well beyond the state of the art. Recent solid-state approaches have used quantum dots, donor-atom nuclear spins or electron spins; in these architectures, the basic two-qubit quantum gate is generated by a tunable exchange interaction between spins (a Heisenberg interaction), whereas the one-qubit gates require control over a local magnetic field. Compared to the Heisenberg operation, the one-qubit operations are significantly slower, requiring substantially greater materials and device complexity--potentially contributing to a detrimental increase in the decoherence rate. Here we introduce an explicit scheme in which the Heisenberg interaction alone suffices to implement exactly any quantum computer circuit. This capability comes at a price of a factor of three in additional qubits, and about a factor of ten in additional two-qubit operations. Even at this cost, the ability to eliminate the complexity of one-qubit operations should accelerate progress towards solid-state implementations of quantum computation.
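The factor-of-three qubit overhead comes from encoding one logical qubit in the total-spin-1/2 subspace of three physical spins. One commonly written choice of encoded basis is sketched below; the paper's own convention may differ in detail.

```latex
% One logical qubit lives in the total-spin-1/2 subspace of three physical spins,
% which is the source of the factor-of-three qubit overhead. A commonly written
% choice of encoded basis (conventions may differ from the paper's):
\[
|0_L\rangle = |S\rangle_{12}\,|{\uparrow}\rangle_3
            = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle-|{\downarrow\uparrow}\rangle\bigr)_{12}\,|{\uparrow}\rangle_3 ,
\qquad
|1_L\rangle = \sqrt{\tfrac{2}{3}}\,|T_{+}\rangle_{12}\,|{\downarrow}\rangle_3
            - \sqrt{\tfrac{1}{3}}\,|T_{0}\rangle_{12}\,|{\uparrow}\rangle_3 ,
\]
% with |T_+> = |up,up> and |T_0> = (|up,down> + |down,up>)/sqrt(2). Pairwise
% Heisenberg (exchange) pulses among the three spins then generate all encoded
% one-qubit rotations.
```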