
Showing papers by "IBM" published in 2006


Journal ArticleDOI
TL;DR: A comprehensive description of the primal-dual interior-point algorithm with a filter line-search method for nonlinear programming is provided, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix.
Abstract: We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.
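
To make the core idea concrete (in generic notation, not necessarily the paper's), the interior-point approach replaces the bound constraints with a logarithmic barrier and solves a sequence of barrier subproblems, while the filter line search accepts a trial step if it sufficiently reduces either the constraint violation or the barrier objective. Schematically, and in simplified form:

    \min_{x \in \mathbb{R}^n} \; \varphi_\mu(x) = f(x) - \mu \sum_{i=1}^{n} \ln x_i \qquad \text{subject to} \qquad c(x) = 0,

and a trial point x_k(\alpha) = x_k + \alpha\, d_k is acceptable if, roughly,

    \theta\bigl(x_k(\alpha)\bigr) \le (1-\gamma_\theta)\,\theta(x_k) \qquad \text{or} \qquad \varphi_\mu\bigl(x_k(\alpha)\bigr) \le \varphi_\mu(x_k) - \gamma_\varphi\,\theta(x_k),

where \theta(x) = \|c(x)\| is the constraint violation and \gamma_\theta, \gamma_\varphi \in (0,1) are small constants. The full method also tests acceptability against all previous filter entries, and the feasibility restoration phase mentioned above is entered when the backtracking line search cannot find an acceptable step size.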

7,966 citations


Journal ArticleDOI
TL;DR: This approach should enhance the ability to use microarray data to elucidate functional mechanisms that underlie cellular processes and to identify molecular targets of pharmacological compounds in mammalian cellular networks.
Abstract: Elucidating gene regulatory networks is crucial for understanding normal cell physiology and complex pathologic phenotypes. Existing computational methods for the genome-wide "reverse engineering" of such networks have been successful only for lower eukaryotes with simple genomes. Here we present ARACNE, a novel algorithm, using microarray expression profiles, specifically designed to scale up to the complexity of regulatory networks in mammalian cells, yet general enough to address a wider range of network deconvolution problems. This method uses an information theoretic approach to eliminate the majority of indirect interactions inferred by co-expression methods. We prove that ARACNE reconstructs the network exactly (asymptotically) if the effect of loops in the network topology is negligible, and we show that the algorithm works well in practice, even in the presence of numerous loops and complex topologies. We assess ARACNE's ability to reconstruct transcriptional regulatory networks using both a realistic synthetic dataset and a microarray dataset from human B cells. On synthetic datasets ARACNE achieves very low error rates and outperforms established methods, such as Relevance Networks and Bayesian Networks. Application to the deconvolution of genetic networks in human B cells demonstrates ARACNE's ability to infer validated transcriptional targets of the cMYC proto-oncogene. We also study the effects of misestimation of mutual information on network reconstruction, and show that algorithms based on mutual information ranking are more resilient to estimation errors. ARACNE shows promise in identifying direct transcriptional interactions in mammalian cellular networks, a problem that has challenged existing reverse engineering algorithms. This approach should enhance our ability to use microarray data to elucidate functional mechanisms that underlie cellular processes and to identify molecular targets of pharmacological compounds in mammalian cellular networks.
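
A minimal sketch of the pruning step at the heart of this approach, the data processing inequality (DPI): within every triplet of genes whose pairwise mutual information is significant, the weakest edge is declared indirect and removed, subject to a tolerance. The function below is illustrative only; it is not the ARACNE software, and it assumes the pairwise mutual-information matrix has already been estimated from the expression profiles.

    import numpy as np

    def dpi_prune(mi, tol=0.15):
        """Prune putatively indirect edges with the data processing inequality.

        mi  : symmetric (n x n) matrix of pairwise mutual information estimates,
              with entries already set to 0 where the MI was not significant
        tol : DPI tolerance; edge (i, j) is removed if it is weaker than both
              edges of some triangle (i, k), (k, j) by more than this fraction
        Returns a boolean adjacency matrix of the retained (putative direct) edges.
        """
        n = mi.shape[0]
        adj = mi > 0
        np.fill_diagonal(adj, False)
        keep = adj.copy()
        for i in range(n):
            for j in range(i + 1, n):
                if not adj[i, j]:
                    continue
                for k in range(n):
                    if k in (i, j) or not (adj[i, k] and adj[j, k]):
                        continue
                    # (i, j) is the weakest edge of triangle (i, j, k): mark it indirect
                    if mi[i, j] < (1.0 - tol) * min(mi[i, k], mi[j, k]):
                        keep[i, j] = keep[j, i] = False
                        break
        return keep

In the published algorithm the mutual information itself is estimated from the microarray profiles with a kernel estimator and thresholded for statistical significance before the DPI step is applied; the tol parameter here plays the role of ARACNE's DPI tolerance.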

2,533 citations


Journal ArticleDOI
TL;DR: Expander graphs were first defined by Bassalygo and Pinsker, and their existence was first proved by Pinsker in the early 1970s, as discussed by the authors.
Abstract: A major consideration we had in writing this survey was to make it accessible to mathematicians as well as to computer scientists, since expander graphs, the protagonists of our story, come up in numerous and often surprising contexts in both fields. But, perhaps, we should start with a few words about graphs in general. They are, of course, one of the prime objects of study in Discrete Mathematics. However, graphs are among the most ubiquitous models of both natural and human-made structures. In the natural and social sciences they model relations among species, societies, companies, etc. In computer science, they represent networks of communication, data organization, computational devices as well as the flow of computation, and more. In mathematics, Cayley graphs are useful in Group Theory. Graphs carry a natural metric and are therefore useful in Geometry, and though they are “just” one-dimensional complexes, they are useful in certain parts of Topology, e.g. Knot Theory. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems. The study of these models calls, then, for the comprehension of the significant structural properties of the relevant graphs. But are there nontrivial structural properties which are universally important? Expansion of a graph requires that it is simultaneously sparse and highly connected. Expander graphs were first defined by Bassalygo and Pinsker, and their existence first proved by Pinsker in the early ’70s. The property of being an expander seems significant in many of these mathematical, computational and physical contexts. It is not surprising that expanders are useful in the design and analysis of communication networks. What is less obvious is that expanders have surprising utility in other computational settings such as in the theory of error correcting codes and the theory of pseudorandomness. In mathematics, we will encounter e.g. their role in the study of metric embeddings, and in particular in work around the Baum-Connes Conjecture. Expansion is closely related to the convergence rates of Markov Chains, and so they play a key role in the study of Monte-Carlo algorithms in statistical mechanics and in a host of practical computational applications. The list of such interesting and fruitful connections goes on and on, with so many applications that we will not even attempt an exhaustive list here.
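
For reference, one standard way to make "simultaneously sparse and highly connected" precise (generic notation, not necessarily the survey's) is the edge-expansion constant of a d-regular graph G = (V, E):

    h(G) = \min_{\,S \subset V,\; 0 < |S| \le |V|/2} \frac{|E(S, V \setminus S)|}{|S|},

and a family of d-regular graphs \{G_n\} is an expander family if h(G_n) \ge \varepsilon > 0 uniformly in n. The discrete Cheeger inequalities relate h(G) to the spectral gap between d and the second-largest adjacency eigenvalue \lambda_2:

    \frac{d - \lambda_2}{2} \;\le\; h(G) \;\le\; \sqrt{2d\,(d - \lambda_2)}.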

2,037 citations


Journal ArticleDOI
05 Jan 2006-Nature
TL;DR: It is demonstrated that electrical charges on sterically stabilized nanoparticles determine BNSL stoichiometry; additional contributions from entropic, van der Waals, steric and dipolar forces stabilize the variety of BNSL structures.
Abstract: The assembly of nanoparticles of two different materials into a binary nanoparticle superlattice is a promising way of synthesizing a large variety of materials (metamaterials) with precisely controlled chemical composition and tight placement of the components. In theory only a few stable binary superlattice structures can assemble from hard spheres, potentially limiting this approach. But all is not lost because at the nanometre scale there are additional forces (electrostatic, van der Waals and dipolar) that can stabilize binary nanoparticulate structures. Shevchenko et al. now report the synthesis of a dozen novel structures from various combinations of metal, semiconductor, magnetic and dielectric nanoparticles. This demonstrates the potential of self-assembly in designing families of novel materials and metamaterials with programmable physical and chemical properties. Assembly of small building blocks such as atoms, molecules and nanoparticles into macroscopic structures—that is, ‘bottom up’ assembly—is a theme that runs through chemistry, biology and material science. Bacteria1, macromolecules2 and nanoparticles3 can self-assemble, generating ordered structures with a precision that challenges current lithographic techniques. The assembly of nanoparticles of two different materials into a binary nanoparticle superlattice (BNSL)3,4,5,6,7 can provide a general and inexpensive path to a large variety of materials (metamaterials) with precisely controlled chemical composition and tight placement of the components. Maximization of the nanoparticle packing density has been proposed as the driving force for BNSL formation3,8,9, and only a few BNSL structures have been predicted to be thermodynamically stable. Recently, colloidal crystals with micrometre-scale lattice spacings have been grown from oppositely charged polymethyl methacrylate spheres10,11. Here we demonstrate formation of more than 15 different BNSL structures, using combinations of semiconducting, metallic and magnetic nanoparticle building blocks. At least ten of these colloidal crystalline structures have not been reported previously. We demonstrate that electrical charges on sterically stabilized nanoparticles determine BNSL stoichiometry; additional contributions from entropic, van der Waals, steric and dipolar forces stabilize the variety of BNSL structures.

1,981 citations


Journal ArticleDOI
22 Sep 2006-Cell
TL;DR: Rna22, as discussed by the authors, identifies microRNA binding sites and their corresponding heteroduplexes, and then identifies the targeting microRNAs by finding putative microRNA binding sites in the sequence of interest.

1,888 citations


Proceedings ArticleDOI
16 Oct 2006
TL;DR: This paper recommends benchmarking selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open source, client-side Java benchmarks that improve over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements.
Abstract: Since benchmarks drive computer science research and industry product development, which ones we use and how we evaluate them are key questions for the community. Despite complex runtime tradeoffs due to dynamic compilation and garbage collection required for Java programs, many evaluations still use methodologies developed for C, C++, and Fortran. SPEC, the dominant purveyor of benchmarks, compounded this problem by institutionalizing these methodologies for their Java benchmark suite. This paper recommends benchmarking selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open source, client-side Java benchmarks. We demonstrate that the complex interactions of (1) architecture, (2) compiler, (3) virtual machine, (4) memory management, and (5) application require more extensive evaluation than C, C++, and Fortran which stress (4) much less, and do not require (3). We use and introduce new value, time-series, and statistical metrics for static and dynamic properties such as code complexity, code size, heap composition, and pointer mutations. No benchmark suite is definitive, but these metrics show that DaCapo improves over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements. This paper takes a step towards improving methodologies for choosing and evaluating benchmarks to foster innovation in system design and implementation for Java and other managed languages.

1,561 citations


Journal ArticleDOI
TL;DR: The authors argue for a services science discipline to integrate across academic silos and advance service innovation more rapidly, improving scientific understanding of modern services.
Abstract: The services sector has grown over the last 50 years to dominate economic activity in most advanced industrial economies, yet scientific understanding of modern services is rudimentary. Here, we argue for a services science discipline to integrate across academic silos and advance service innovation more rapidly.

1,089 citations


Journal ArticleDOI
TL;DR: An overview of biometrics is provided and some of the salient research issues that need to be addressed for making biometric technology an effective tool for providing information security are discussed.
Abstract: Establishing identity is becoming critical in our vastly interconnected society. Questions such as "Is she really who she claims to be?," "Is this person authorized to use this facility?," or "Is he in the watchlist posted by the government?" are routinely being posed in a variety of scenarios ranging from issuing a driver's license to gaining entry into a country. The need for reliable user authentication techniques has increased in the wake of heightened concerns about security and rapid advancements in networking, communication, and mobility. Biometrics, described as the science of recognizing an individual based on his or her physical or behavioral traits, is beginning to gain acceptance as a legitimate method for determining an individual's identity. Biometric systems have now been deployed in various commercial, civilian, and forensic applications as a means of establishing identity. In this paper, we provide an overview of biometrics and discuss some of the salient research issues that need to be addressed for making biometric technology an effective tool for providing information security. The primary contribution of this overview includes: 1) examining applications where biometrics can solve issues pertaining to information security; 2) enumerating the fundamental challenges encountered by biometric systems in real-world applications; and 3) discussing solutions to address the problems of scalability and security in large-scale authentication systems.

1,067 citations


Proceedings ArticleDOI
25 Jun 2006
TL;DR: A tree data structure for fast nearest neighbor operations in general n-point metric spaces (where the data set consists of n points) that shows speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.
Abstract: We present a tree data structure for fast nearest neighbor operations in general n-point metric spaces (where the data set consists of n points). The data structure requires O(n) space regardless of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant c, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in O(c^6 n log n) time. Furthermore, nearest neighbor queries require time only logarithmic in n, in particular O(c^12 log n) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.
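
The following is a simplified, self-contained Python sketch of the idea (illustrative only, not the authors' implementation, and closer to a nested-net view of the structure than to the exact cover tree invariants): each level i is a greedy 2^i-net of the data, every finer-level point has a parent within 2^i at the level above, and a nearest-neighbour query descends the levels while discarding nodes whose descendants provably cannot beat the current best candidate.

    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    class SimpleCoverHierarchy:
        """Illustrative sketch of the cover-tree idea (not the authors' code):
        level i holds a greedy 2**i-net of the (distinct) data points, nets are
        nested, and every point at level i-1 has a parent at level i within 2**i.
        Queries descend the levels, pruning nodes that cannot hide the answer."""

        def __init__(self, points, dist=euclidean):
            self.dist = dist
            self.points = list(dict.fromkeys(map(tuple, points)))  # de-duplicate
            root = self.points[0]
            spread = max((dist(root, p) for p in self.points[1:]), default=1.0)
            level = max(1, math.ceil(math.log2(spread))) if spread > 0 else 1
            net = [root]                      # the root covers every point
            self.levels = []                  # list of (level, children, finer_net)
            while True:
                radius = 2.0 ** (level - 1)
                finer = list(net)             # nets are nested
                for p in self.points:         # greedy construction of the finer net
                    if all(dist(p, q) > radius for q in finer):
                        finer.append(p)
                # link each finer-net point to its nearest coarser-net parent
                children = {i: [] for i in range(len(net))}
                for j, p in enumerate(finer):
                    i = min(range(len(net)), key=lambda k: dist(p, net[k]))
                    children[i].append(j)
                self.levels.append((level, children, finer))
                net, level = finer, level - 1
                if len(net) == len(self.points):   # every point has appeared
                    break

        def nearest(self, q):
            cand, best = [0], self.points[0]       # start from the single root
            for level, children, finer in self.levels:
                idx = [j for i in cand for j in children[i]]
                dists = [self.dist(q, finer[j]) for j in idx]
                d_min = min(dists)
                best = finer[idx[dists.index(d_min)]]
                # a finer-level node can hide descendants only within 2**level of
                # itself, so anything farther than d_min + 2**level is safe to drop
                cand = [j for j, d in zip(idx, dists) if d <= d_min + 2.0 ** level]
            return best

    pts = [(0.0, 0.0), (1.0, 0.5), (3.0, 4.0), (-2.0, 1.0)]
    tree = SimpleCoverHierarchy(pts)
    print(tree.nearest((2.5, 3.5)))   # -> (3.0, 4.0)

The real cover tree maintains its covering and separation invariants incrementally under insertion and deletion, which is what yields the O(n) space and the polynomial-in-c logarithmic query and construction bounds stated above.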

896 citations


Journal ArticleDOI
24 Mar 2006-Science
TL;DR: It is proposed that this entropically unfavorable process is offset by an enthalpy gain due to an increase in molecular contacts at dispersed nanoparticle surfaces as compared with the surfaces of phase-separated nanoparticles.
Abstract: Traditionally the dispersion of particles in polymeric materials has proven difficult and frequently results in phase separation and agglomeration. We show that thermodynamically stable dispersion of nanoparticles into a polymeric liquid is enhanced for systems where the radius of gyration of the linear polymer is greater than the radius of the nanoparticle. Dispersed nanoparticles swell the linear polymer chains, resulting in a polymer radius of gyration that grows with the nanoparticle volume fraction. It is proposed that this entropically unfavorable process is offset by an enthalpy gain due to an increase in molecular contacts at dispersed nanoparticle surfaces as compared with the surfaces of phase-separated nanoparticles. Even when the dispersed state is thermodynamically stable, it may be inaccessible unless the correct processing strategy is adopted, which is particularly important for the case of fullerene dispersion into linear polymers.

881 citations


Journal ArticleDOI
02 Mar 2006-Nature
TL;DR: The results show that the silicon nanowire growth is fundamentally limited by gold diffusion: smooth, arbitrarily long nanowires cannot be grown without eliminating gold migration.
Abstract: Silicon nanowires hold great promise as components of tiny electronic devices, but the usual method of growing them is poorly understood. New work shows that excessive cleanliness can actually stunt a nanowire's growth. They are made by the ‘vapour–liquid–solid’ method, in which a tiny liquid droplet of a metal such as gold absorbs silicon atoms from a gaseous precursor molecule. As the droplet saturates with silicon, it grows a solid, cylindrical silicon crystal whose diameter is determined by the size of the droplet. But in conditions of extreme cleanliness, gold atoms from the droplet can migrate over the surface of the growing nanowire, resulting in misshapen structures. Interest in nanowires continues to grow, fuelled in part by applications in nanotechnology1,2,3,4,5. The ability to engineer nanowire properties makes them especially promising in nanoelectronics6,7,8,9. Most silicon nanowires are grown using the vapour–liquid–solid (VLS) mechanism, in which the nanowire grows from a gold/silicon catalyst droplet during silicon chemical vapour deposition10,11,12,13. Despite over 40 years of study, many aspects of VLS growth are not well understood. For example, in the conventional picture the catalyst droplet does not change during growth, and the nanowire sidewalls consist of clean silicon facets10,11,12,13. Here we demonstrate that these assumptions are false for silicon nanowires grown on Si(111) under conditions where all of the experimental parameters (surface structure, gas cleanliness, and background contaminants) are carefully controlled. We show that gold diffusion during growth determines the length, shape, and sidewall properties of the nanowires. Gold from the catalyst droplets wets the nanowire sidewalls, eventually consuming the droplets and terminating VLS growth. Gold diffusion from the smaller droplets to the larger ones (Ostwald ripening) leads to nanowire diameters that change during growth. These results show that the silicon nanowire growth is fundamentally limited by gold diffusion: smooth, arbitrarily long nanowires cannot be grown without eliminating gold migration.

Proceedings ArticleDOI
20 Oct 2006
TL;DR: SKETCH is a language for finite programs with linguistic support for sketching and its combinatorial synthesizer is complete for the class of finite programs, guaranteed to complete any sketch in theory, and in practice has scaled to realistic programming problems.
Abstract: Sketching is a software synthesis approach where the programmer develops a partial implementation - a sketch - and a separate specification of the desired functionality. The synthesizer then completes the sketch to behave like the specification. The correctness of the synthesized implementation is guaranteed by the compiler, which allows, among other benefits, rapid development of highly tuned implementations without the fear of introducing bugs. We develop SKETCH, a language for finite programs with linguistic support for sketching. Finite programs include many high-performance kernels, including cryptocodes. In contrast to prior synthesizers, which had to be equipped with domain-specific rules, SKETCH completes sketches by means of a combinatorial search based on generalized boolean satisfiability. Consequently, our combinatorial synthesizer is complete for the class of finite programs: it is guaranteed to complete any sketch in theory, and in practice has scaled to realistic programming problems. Freed from domain rules, we can now write sketches as simple-to-understand partial programs, which are regular programs in which difficult code fragments are replaced with holes to be filled by the synthesizer. Holes may stand for index expressions, lookup tables, or bitmasks, but the programmer can easily define new kinds of holes using a single versatile synthesis operator. We have used SKETCH to synthesize an efficient implementation of the AES cipher standard. The synthesizer produces the most complex part of the implementation and runs in about an hour.
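
A toy Python illustration of the sketching idea (this is not the SKETCH language or its SAT-based synthesizer; the hole here is completed by naive enumeration over a finite domain, which is feasible only because the program is finite and tiny):

    def spec(x):
        """Specification: multiply by 8 (on 8-bit unsigned values)."""
        return (x * 8) & 0xFF

    def sketch(x, hole):
        """Partial implementation: shift left by an unknown amount (the hole)."""
        return (x << hole) & 0xFF

    def synthesize(domain=range(8), inputs=range(256)):
        """Brute-force 'synthesizer': try every hole value and keep the first one
        that makes the sketch agree with the spec on every (finite) input."""
        for hole in domain:
            if all(sketch(x, hole) == spec(x) for x in inputs):
                return hole
        return None

    print(synthesize())   # -> 3, i.e. x << 3 implements x * 8

SKETCH performs the analogous search symbolically, reducing the completion problem over all inputs at once to a boolean satisfiability query, which is why it scales to holes far larger than anything enumeration could handle.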

Journal ArticleDOI
TL;DR: The process steps and design aspects that were developed at IBM to enable the formation of stacked device layers are reviewed, including the descriptions of a glass substrate process to enable through-wafer alignment and a single-damascene patterning and metallization method for the creation of high-aspect-ratio capability.
Abstract: Three-dimensional (3D) integrated circuits (ICs), which contain multiple layers of active devices, have the potential to dramatically enhance chip performance, functionality, and device packing density. They also enable new microchip architectures and may facilitate the integration of heterogeneous materials, devices, and signals. However, before these advantages can be realized, key technology challenges of 3D ICs must be addressed. More specifically, the processes required to build circuits with multiple layers of active devices must be compatible with current state-of-the-art silicon processing technology. These processes must also show manufacturability, i.e., reliability, good yield, maturity, and reasonable cost. To meet these requirements, IBM has introduced a scheme for building 3D ICs based on the layer transfer of functional circuits, and many process and design innovations have been implemented. This paper reviews the process steps and design aspects that were developed at IBM to enable the formation of stacked device layers. Details regarding an optimized layer transfer process are presented, including the descriptions of 1) a glass substrate process to enable through-wafer alignment; 2) oxide fusion bonding and wafer bow compensation methods for improved alignment tolerance during bonding; and 3) a single-damascene patterning and metallization method for the creation of high-aspect-ratio (6:1) interlayer vias at densities of 10^8 vias/cm^2, with extremely aggressive wafer-to-wafer alignment (submicron) capability.

Journal ArticleDOI
TL;DR: Templated self-assembly of block copolymers as discussed by the authors provides a path towards the rational design of hierarchical device structures with periodic features that cover several length scales, and provides a promising route to control bottom-up self-organization processes through top-down lithographic templates.
Abstract: One of the key challenges in nanotechnology is to control a self-assembling system to create a specific structure. Self-organizing block copolymers offer a rich variety of periodic nanoscale patterns, and researchers have succeeded in finding conditions that lead to very long range order of the domains. However, the array of microdomains typically still contains some uncontrolled defects and lacks global registration and orientation. Recent efforts in templated self-assembly of block copolymers have demonstrated a promising route to control bottom-up self-organization processes through top-down lithographic templates. The orientation and placement of block-copolymer domains can be directed by topographically or chemically patterned templates. This templated self-assembly method provides a path towards the rational design of hierarchical device structures with periodic features that cover several length scales.

Journal ArticleDOI
TL;DR: In this article, the self-assembly of one-dimensional semiconductor nanowires is explored as a route to bringing new, high-performance nanowire devices to mainstream Si technology as an add-on.

Journal ArticleDOI
TL;DR: A metagenomic analysis of two lab-scale EBPR sludges dominated by the uncultured bacterium, “Candidatus Accumulibacter phosphatis,” sheds light on several controversies in EBPR metabolic models and provides hypotheses explaining the dominance of A. phosphatis.
Abstract: Enhanced biological phosphorus removal (EBPR) is one of the best-studied microbially mediated industrial processes because of its ecological and economic relevance. Despite this, it is not well understood at the metabolic level. Here we present a metagenomic analysis of two lab-scale EBPR sludges dominated by the uncultured bacterium, "Candidatus Accumulibacter phosphatis." The analysis sheds light on several controversies in EBPR metabolic models and provides hypotheses explaining the dominance of A. phosphatis in this habitat, its lifestyle outside EBPR and probable cultivation requirements. Comparison of the same species from different EBPR sludges highlights recent evolutionary dynamics in the A. phosphatis genome that could be linked to mechanisms for environmental adaptation. In spite of an apparent lack of phylogenetic overlap in the flanking communities of the two sludges studied, common functional themes were found, at least one of them complementary to the inferred metabolism of the dominant organism. The present study provides a much needed blueprint for a systems-level understanding of EBPR and illustrates that metagenomics enables detailed, often novel, insights into even well-studied biological systems.

Proceedings ArticleDOI
29 Sep 2006
TL;DR: This paper designs and implements a new Robust Rate Adaptation Algorithm (RRAA), which uses short-term loss ratio to opportunistically guide its rate change decisions, and an adaptive RTS filter to prevent collision losses from triggering rate decrease.
Abstract: Rate adaptation is a mechanism unspecified by the 802.11 standards, yet critical to the system performance by exploiting the multi-rate capability at the physical layer. In this paper, we conduct a systematic and experimental study on rate adaptation over 802.11 wireless networks. Our main contributions are two-fold. First, we critique five design guidelines adopted by most existing algorithms. Our study reveals that these seemingly correct guidelines can be misleading in practice, and thus incur significant performance penalties in certain scenarios. The fundamental challenge is that rate adaptation must accurately estimate the channel condition despite the presence of various dynamics caused by fading, mobility and hidden terminals. Second, we design and implement a new Robust Rate Adaptation Algorithm (RRAA) that addresses the above challenge. RRAA uses short-term loss ratio to opportunistically guide its rate change decisions, and an adaptive RTS filter to prevent collision losses from triggering rate decrease. Our extensive experiments have shown that RRAA outperforms three well-known rate adaptation solutions (ARF, AARF, and SampleRate) in all tested scenarios, with throughput improvement up to 143%.
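
A Python sketch of the loss-ratio-driven part of such a scheme (illustrative only; the window size and thresholds below are placeholders rather than the per-rate values derived in the paper, and the adaptive RTS filter is omitted):

    from collections import deque

    class LossRatioRateAdapter:
        """RRAA-style rate selection sketch (not the authors' implementation):
        estimate the short-term frame-loss ratio over a small window and compare
        it with two thresholds -- step the rate down when losses exceed the
        'maximum tolerable loss' and step it up when they fall below the
        'opportunistic rate increase' threshold."""

        RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]   # 802.11a/g rate set

        def __init__(self, window=40, p_mtl=0.40, p_ori=0.10):
            self.window = deque(maxlen=window)   # 1 = frame lost, 0 = delivered
            self.rate_idx = 0                    # start conservatively at the lowest rate
            self.p_mtl = p_mtl                   # step down above this loss ratio
            self.p_ori = p_ori                   # step up below this loss ratio

        def record(self, lost):
            self.window.append(1 if lost else 0)
            if len(self.window) == self.window.maxlen:
                self._maybe_change_rate()

        def _maybe_change_rate(self):
            loss = sum(self.window) / len(self.window)
            if loss > self.p_mtl and self.rate_idx > 0:
                self.rate_idx -= 1               # channel too lossy: slow down
            elif loss < self.p_ori and self.rate_idx < len(self.RATES_MBPS) - 1:
                self.rate_idx += 1               # channel looks good: speed up
            self.window.clear()                  # start a fresh estimation window

        @property
        def rate(self):
            return self.RATES_MBPS[self.rate_idx]

In the paper the two thresholds are derived per rate by comparing the expected throughput of adjacent rates, and the adaptive RTS filter selectively enables RTS/CTS so that losses caused by hidden-terminal collisions do not masquerade as channel degradation.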

Proceedings ArticleDOI
09 Dec 2006
TL;DR: The results show that the best architected policies can come within 1% of the performance of an ideal oracle, while meeting a given chip-level power budget, and are significantly better than static management, even if static scheduling is given oracular knowledge.
Abstract: Chip-level power and thermal implications will continue to rule as one of the primary design constraints and performance limiters. The gap between average and peak power actually widens with increased levels of core integration. As such, if per-core control of power levels (modes) is possible, a global power manager should be able to dynamically set the modes suitably. This would be done in tune with the workload characteristics, in order to always maintain a chip-level power that is below the specified budget. Furthermore, this should be possible without significant degradation of chip-level throughput performance. We analyze and validate this concept in detail in this paper. We assume a per-core DVFS (dynamic voltage and frequency scaling) knob to be available to such a conceptual global power manager. We evaluate several different policies for global multi-core power management. In this analysis, we consider various different objectives such as prioritization and optimized throughput. Overall, our results show that in the context of a workload comprised of SPEC benchmark threads, our best architected policies can come within 1% of the performance of an ideal oracle, while meeting a given chip-level power budget. Furthermore, we show that these global dynamic management policies perform significantly better than static management, even if static scheduling is given oracular knowledge.
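
As a concrete and deliberately simplified illustration of what such a global manager decides each interval, the Python sketch below chooses one DVFS mode per core by brute-force search so as to maximize predicted throughput under the chip power budget; the mode table and numbers are made up for illustration, and the paper's policies (and its oracle comparison) are more sophisticated than this:

    from itertools import product

    # Per-core DVFS modes as (label, power_watts, relative_throughput).
    # These numbers are placeholders, not measurements from the paper.
    MODES = [("low", 8.0, 0.6), ("mid", 14.0, 0.85), ("high", 22.0, 1.0)]

    def pick_modes(per_core_bips, budget_watts):
        """Choose one DVFS mode per core so that total predicted throughput is
        maximized while chip power stays within the budget (brute-force search,
        feasible only for small core counts; illustrative sketch)."""
        best_choice, best_bips = None, -1.0
        for choice in product(range(len(MODES)), repeat=len(per_core_bips)):
            power = sum(MODES[m][1] for m in choice)
            if power > budget_watts:
                continue
            bips = sum(per_core_bips[c] * MODES[m][2] for c, m in enumerate(choice))
            if bips > best_bips:
                best_choice, best_bips = choice, bips
        if best_choice is None:
            return None, 0.0          # no mode combination fits the budget
        return [MODES[m][0] for m in best_choice], best_bips

    # Example: 4 cores with different baseline throughputs, 60 W chip budget.
    print(pick_modes([1.2, 0.9, 1.5, 0.7], budget_watts=60.0))

A realistic policy would replace the brute-force enumeration with the heuristics the paper evaluates and would re-run the decision periodically as per-core power and performance predictions are updated.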

Proceedings Article
31 Jul 2006
TL;DR: The design and implementation of a system that enables trusted computing for an unlimited number of virtual machines on a single hardware platform and four designs for certificate chains to link the virtual TPM to a hardware TPM are presented, with security vs. efficiency trade-offs based on threat models.
Abstract: We present the design and implementation of a system that enables trusted computing for an unlimited number of virtual machines on a single hardware platform. To this end, we virtualized the Trusted Platform Module (TPM). As a result, the TPM's secure storage and cryptographic functions are available to operating systems and applications running in virtual machines. Our new facility supports higher-level services for establishing trust in virtualized environments, for example remote attestation of software integrity. We implemented the full TPM specification in software and added functions to create and destroy virtual TPM instances. We integrated our software TPM into a hypervisor environment to make TPM functions available to virtual machines. Our virtual TPM supports suspend and resume operations, as well as migration of a virtual TPM instance with its respective virtual machine across platforms. We present four designs for certificate chains to link the virtual TPM to a hardware TPM, with security vs. efficiency trade-offs based on threat models. Finally, we demonstrate a working system by layering an existing integrity measurement application on top of our virtual TPM facility.

Journal ArticleDOI
TL;DR: The large-scale concept ontology for multimedia (LSCOM) is the first of its kind designed to simultaneously optimize utility to facilitate end-user access, cover a large semantic space, make automated extraction feasible, and increase observability in diverse broadcast news video data sets.
Abstract: As increasingly powerful techniques emerge for machine tagging multimedia content, it becomes ever more important to standardize the underlying vocabularies. Doing so provides interoperability and lets the multimedia community focus ongoing research on a well-defined set of semantics. This paper describes a collaborative effort of multimedia researchers, library scientists, and end users to develop a large standardized taxonomy for describing broadcast news video. The large-scale concept ontology for multimedia (LSCOM) is the first of its kind designed to simultaneously optimize utility to facilitate end-user access, cover a large semantic space, make automated extraction feasible, and increase observability in diverse broadcast news video data sets.

Journal ArticleDOI
TL;DR: In this article, a number of implementable customer lifetime value (CLV) models that are useful for market segmentation and allocation of marketing resources for acquisition, retention, and cross-selling are presented.
Abstract: As modern economies become predominantly service-based, companies increasingly derive revenue from the creation and sustenance of long-term relationships with their customers. In such an environment, marketing serves the purpose of maximizing customer lifetime value (CLV) and customer equity, which is the sum of the lifetime values of the company’s customers. This article reviews a number of implementable CLV models that are useful for market segmentation and the allocation of marketing resources for acquisition, retention, and cross-selling. The authors review several empirical insights that were obtained from these models and conclude with an agenda of areas that are in need of further research.
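
A generic textbook-style formulation of CLV, useful for fixing ideas (this is a baseline form, not any one of the specific models reviewed in the article): with per-period margin m, retention probability r, discount rate d, and acquisition cost AC,

    \mathrm{CLV} = \sum_{t=1}^{T} m\,\frac{r^{t}}{(1+d)^{t}} - AC \;\xrightarrow{\;T \to \infty\;}\; \frac{m\,r}{1 + d - r} - AC .

The models the authors review refine this basic structure for segmentation and for allocating spending across acquisition, retention, and cross-selling.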

Journal ArticleDOI
19 May 2006-Science
TL;DR: This work used a scanning tunneling microscope to probe the interactions between spins in individual atomic-scale magnetic structures and observed excitations of the coupled atomic spins that can change both the total spin and its orientation.
Abstract: We used a scanning tunneling microscope to probe the interactions between spins in individual atomic-scale magnetic structures. Linear chains of 1 to 10 manganese atoms were assembled one atom at a time on a thin insulating layer, and the spin excitation spectra of these structures were measured with inelastic electron tunneling spectroscopy. We observed excitations of the coupled atomic spins that can change both the total spin and its orientation. Comparison with a model spin-interaction Hamiltonian yielded the collective spin configuration and the strength of the coupling between the atomic spins.
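
The "model spin-interaction Hamiltonian" referred to is of nearest-neighbour Heisenberg exchange type; schematically, in generic notation with coupling strength J and spin operators \hat{\mathbf{S}}_i for the N atoms in the chain,

    \hat{H} = J \sum_{i=1}^{N-1} \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_{i+1},

with the sign and magnitude of J determined by fitting the measured excitation spectra.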

Journal ArticleDOI
TL;DR: This survey summarizes different modeling and solution concepts of networking games, as well as a number of different applications in telecommunications that make use of, or can make use of, networking games.

Journal ArticleDOI
TL;DR: The variability of CMOS performance is described; it isn't likely to decrease, since smaller devices contain fewer atoms and consequently exhibit less self-averaging, but the situation may be improved by removing most of the doping.
Abstract: Recent changes in CMOS device structures and materials motivated by impending atomistic and quantum-mechanical limitations have profoundly influenced the nature of delay and power variability. Variations in process, temperature, power supply, wear-out, and use history continue to strongly influence delay. The manner in which tolerance is specified and accommodated in high-performance design changes dramatically as CMOS technologies scale beyond a 90-nm minimum lithographic linewidth. In this paper, predominant contributors to variability in new CMOS devices are surveyed, and preferred approaches to mitigate their sources of variability are proposed. Process-, device-, and circuit-level responses to systematic and random components of tolerance are considered. Exploratory, novel structures emerging as evolutionary CMOS replacements are likely to change the nature of variability in the coming generations.

Journal ArticleDOI
TL;DR: The four examples here document some of the early efforts to establish a new academic discipline and new profession.
Abstract: Computer scientists work with formal models of algorithms and computation, and someday service scientists may work with formal models of service systems. The four examples here document some of the early efforts to establish a new academic discipline and new profession.

Journal ArticleDOI
TL;DR: The Lieb-Robinson bound states that local Hamiltonian evolution in nonrelativistic quantum mechanical theories gives rise to the notion of an effective light cone with exponentially decaying tails, and several consequences of this result are discussed in the context of quantum information theory.
Abstract: The Lieb-Robinson bound states that local Hamiltonian evolution in nonrelativistic quantum mechanical theories gives rise to the notion of an effective light cone with exponentially decaying tails. We discuss several consequences of this result in the context of quantum information theory. First, we show that the information that leaks out to spacelike separated regions is negligible and that there is a finite speed at which correlations and entanglement can be distributed. Second, we discuss how these ideas can be used to prove lower bounds on the time it takes to convert states without topological quantum order to states with that property. Finally, we show that the rate at which entropy can be created in a block of spins scales like the boundary of that block.
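
In generic notation, the bound can be stated as follows: for observables A and B supported on regions separated by a distance L, evolution under a local Hamiltonian satisfies

    \bigl\| [\,A(t),\, B\,] \bigr\| \le c\,\|A\|\,\|B\|\; e^{-(L - v|t|)/\xi},

for constants c, \xi > 0 and a velocity v determined by the interaction strengths alone, so that outside the effective light cone L > v|t| the influence of A on B is exponentially small. (This is one common form of the statement; constants and prefactors vary between formulations.)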

Journal ArticleDOI
TL;DR: New PEG-based hydrogel materials have been synthesized by Click chemistry and shown to result in well-defined networks having significantly improved mechanical properties.

Proceedings ArticleDOI
10 Nov 2006
TL;DR: Experimental results show that the P2P spatial cloaking algorithm operated in the on-demand mode has lower communication cost and better quality of services than the proactive mode, but theOn-demand incurs longer response time.
Abstract: This paper tackles a major privacy threat in current location-based services where users have to report their exact locations to the database server in order to obtain their desired services. For example, a mobile user asking about her nearest restaurant has to report her exact location. With untrusted service providers, reporting private location information may lead to several privacy threats. In this paper, we present a peer-to-peer (P2P) spatial cloaking algorithm in which mobile and stationary users can entertain location-based services without revealing their exact location information. The main idea is that before requesting any location-based service, the mobile user will form a group from her peers via single-hop communication and/or multi-hop routing. Then, the spatial cloaked area is computed as the region that covers the entire group of peers. Two modes of operations are supported within the proposed P2P spatial cloaking algorithm, namely, the on-demand mode and the proactive mode. Experimental results show that the P2P spatial cloaking algorithm operated in the on-demand mode has lower communication cost and better quality of services than the proactive mode, but the on-demand mode incurs longer response time.
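
The cloaking step itself is simple to state; the Python sketch below computes one natural covering region, the axis-aligned bounding rectangle of the user and her peers (illustrative only -- peer discovery over single-hop/multi-hop communication, anonymity requirements, and the on-demand/proactive distinction from the paper are not modelled):

    def cloaked_region(own_location, peer_locations):
        """Sketch of the spatial-cloaking step: instead of the user's exact
        position, report an axis-aligned rectangle that covers the user and
        the peers gathered from the P2P group."""
        xs = [own_location[0]] + [p[0] for p in peer_locations]
        ys = [own_location[1]] + [p[1] for p in peer_locations]
        return (min(xs), min(ys)), (max(xs), max(ys))   # (lower-left, upper-right)

    # Example: the query sent to the location server carries only this rectangle.
    print(cloaked_region((10.0, 4.0), [(12.5, 6.0), (9.0, 7.5), (11.0, 3.0)]))

In schemes of this kind the location-based query is issued over the cloaked region, and the exact answer is typically refined on the client from the candidate results returned by the server.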

Patent
18 Aug 2006
TL;DR: In this article, a handheld electronic book reader, program product, and method incorporate enhanced annotation and/or usage tracking capabilities, such as context, comments, highlighting, and highlighting, which can be associated with various users, and displayed in connection with the display of an electronic document so as to indicate that different annotation data has been originated by different users.
Abstract: A handheld electronic book reader, program product, and method incorporate enhanced annotation and/or usage tracking capabilities. Support is provided for user creation of “contexts” for defined terms in an electronic document. Moreover, annotation data such as contexts, comments and highlighting may be associated with various users, and displayed in connection with the display of an electronic document so as to indicate that different annotation data has been originated by different users. In addition, from the standpoint of usage tracking, usage statistics for an electronic document displayed in a handheld electronic reader may be generated on a page-by-page basis, and/or in association with term definitions. Moreover, usage statistics for multiple users may be combined and analyzed. Through such analysis, the usage statistics may be used in the conduct of various beneficial actions such as revising an electronic document, revising a lesson plan with which an electronic document is associated, determining whether a user has read a selected portion of an electronic document, or determining whether a user needs supplemental assistance.

Journal ArticleDOI
TL;DR: In this paper, a measure-preserving reversible geometric integrator for the equations of motion is presented, which preserves the correct phase-space volume element and is demonstrated to perform well in realistic examples.
Abstract: The constant-pressure, constant-temperature (NPT) molecular dynamics approach is re-examined from the viewpoint of deriving a new measure-preserving reversible geometric integrator for the equations of motion. The underlying concepts of non-Hamiltonian phase-space analysis, measure-preserving integrators and the symplectic property for Hamiltonian systems are briefly reviewed. In addition, current measure-preserving schemes for the constant-volume, constant-temperature ensemble are also reviewed. A new geometric integrator for the NPT method is presented, is shown to preserve the correct phase-space volume element and is demonstrated to perform well in realistic examples. Finally, a multiple time-step version of the integrator is presented for treating systems with motion on several time scales.
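
Schematically, and in generic notation rather than the paper's: for a non-Hamiltonian flow \dot{x} = \xi(x), the phase-space compressibility \kappa(x) = \nabla_x \cdot \xi(x) determines a metric factor \sqrt{g(x)} through

    \frac{d}{dt}\ln\sqrt{g(x_t)} = -\,\kappa(x_t),

so that \sqrt{g(x)}\,dx is the invariant phase-space measure. A "measure-preserving" integrator is one whose discrete update reproduces this conservation exactly at every time step, which is the property the new NPT integrator is constructed to satisfy.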