
Showing papers by "Hewlett-Packard" published in 2005


Proceedings ArticleDOI
21 Aug 2005
TL;DR: Differences in the behavior of liberal and conservative blogs are found, with conservative blogs linking to each other more frequently and in a denser pattern.
Abstract: In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 "A-list" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern.
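The density comparison this abstract describes can be illustrated with a small sketch: count the citation links that stay inside a community and normalize by the number of possible ordered pairs. The blog names and links below are hypothetical placeholders, not the study's data.

```python
# Toy sketch of intra-community link density, as compared across the liberal
# and conservative blog communities in the paper. All data here is made up.

def link_density(links, community):
    """Fraction of possible directed intra-community links that exist."""
    members = set(community)
    intra = sum(1 for src, dst in links if src in members and dst in members)
    possible = len(members) * (len(members) - 1)  # ordered pairs
    return intra / possible if possible else 0.0

# Hypothetical citation links among four blogs in one community.
links = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "b"), ("d", "a")]
print(link_density(links, ["a", "b", "c", "d"]))  # 5 of 12 possible links
```

Computing this separately for each community (and for cross-community links) gives the kind of density contrast the paper reports.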

2,800 citations


Journal ArticleDOI
TL;DR: It is shown that lipid accumulation in the liver leads to subacute hepatic 'inflammation' through NF-κB activation and downstream cytokine production, which causes insulin resistance both locally in liver and systemically.
Abstract: We show that NF-κB and transcriptional targets are activated in liver by obesity and high-fat diet (HFD). We have matched this state of chronic, subacute 'inflammation' by low-level activation of NF-κB in the liver of transgenic mice, designated LIKK, by selectively expressing constitutively active IKK-β in hepatocytes. These mice exhibit a type 2 diabetes phenotype, characterized by hyperglycemia, profound hepatic insulin resistance, and moderate systemic insulin resistance, including effects in muscle. The hepatic production of proinflammatory cytokines, including IL-6, IL-1β and TNF-α, was increased in LIKK mice to a similar extent as induced by HFD in wild-type mice. Parallel increases were observed in cytokine signaling in liver and muscle of LIKK mice. Insulin resistance was improved by systemic neutralization of IL-6 or salicylate inhibition of IKK-β. Hepatic expression of the IκBα superrepressor (LISR) reversed the phenotype of both LIKK mice and wild-type mice fed an HFD. These findings indicate that lipid accumulation in the liver leads to subacute hepatic 'inflammation' through NF-κB activation and downstream cytokine production. This causes insulin resistance both locally in liver and systemically.

2,082 citations


Journal ArticleDOI
TL;DR: The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level, indicating that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring.
Abstract: This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.

1,777 citations


Patent
22 Jun 2005
Abstract: A method for forming a conductive material on a substrate includes laser annealing a selected portion of a blanket coated material to form a conductive region.

1,055 citations


Patent
17 Aug 2005
TL;DR: In this paper, a method for depositing a first material over at least a portion of a substrate (102) by use of one or more solution processes to form a first-material layer (108), including inorganic dielectric material, was proposed.
Abstract: A method, comprising: depositing (142) a first material over at least a portion of a substrate (102) by use of one or more solution processes to form a first material layer (108), at least a portion of said first material layer (108) comprising inorganic dielectric material; depositing (146) a second material over and/or in contact with said at least a portion of said first material layer (108) by use of one or more solution processes to form a second material layer (108), at least a portion of said second material layer (108) comprising organic dielectric material, to form at least a portion of a dielectric device layer; and altering (144, 148) said at least a portion of said first and/or said second material layer (108) at least in part.

1,013 citations


Journal ArticleDOI
TL;DR: The article describes two techniques, error context reporting and error localization, for helping the user to determine the reason that a false conjecture is false, and includes detailed performance figures on conjectures derived from realistic program-checking problems.
Abstract: This article provides a detailed description of the automatic theorem prover Simplify, which is the proof engine of the Extended Static Checkers ESC/Java and ESC/Modula-3. Simplify uses the Nelson--Oppen method to combine decision procedures for several important theories, and also employs a matcher to reason about quantifiers. Instead of conventional matching in a term DAG, Simplify matches up to equivalence in an E-graph, which detects many relevant pattern instances that would be missed by the conventional approach. The article describes two techniques, error context reporting and error localization, for helping the user to determine the reason that a false conjecture is false. The article includes detailed performance figures on conjectures derived from realistic program-checking problems.

878 citations


Journal ArticleDOI
27 Oct 2005-Nature
TL;DR: The discovery of the QCSE, at room temperature, in thin germanium quantum-well structures grown on silicon is very promising for small, high-speed, low-power optical output devices fully compatible with silicon electronics manufacture.
Abstract: Silicon chips dominate electronics while optical fibres dominate long-distance information transfer. Recent work, in search of the best of both worlds, has led to silicon devices capable of modulating light; these show promise but still rely on weak physical mechanisms found in silicon itself. Now a team working at Stanford University and at Hewlett-Packard's Palo Alto labs has developed thin germanium 'quantum well' nanostructures grown on silicon that generate a strong quantum-mechanical effect capable of turning light beams on and off. Their performance rivals the best seen in any material. This development may allow silicon/germanium chips to handle both electronics and optics, uniting computing and communications at the integrated chip level. Silicon is the dominant semiconductor for electronics, but there is now a growing need to integrate such components with optoelectronics for telecommunications and computer interconnections [1]. Silicon-based optical modulators have recently been successfully demonstrated [2,3]; but because the light modulation mechanisms in silicon [4] are relatively weak, long (for example, several millimetres) devices [2] or sophisticated high-quality-factor resonators [3] have been necessary. Thin quantum-well structures made from III-V semiconductors such as GaAs, InP and their alloys exhibit the much stronger quantum-confined Stark effect (QCSE) mechanism [5], which allows modulator structures with only micrometres of optical path length [6,7]. Such III-V materials are unfortunately difficult to integrate with silicon electronic devices. Germanium is routinely integrated with silicon in electronics [8], but previous silicon-germanium structures have also not shown strong modulation effects [9-13]. Here we report the discovery of the QCSE, at room temperature, in thin germanium quantum-well structures grown on silicon. The QCSE here has strengths comparable to that in III-V materials. Its clarity and strength are particularly surprising because germanium is an indirect gap semiconductor; such semiconductors often display much weaker optical effects than direct gap materials (such as the III-V materials typically used for optoelectronics). This discovery is very promising for small, high-speed [14], low-power [15-17] optical output devices fully compatible with silicon electronics manufacture.

789 citations


Proceedings Article
10 Apr 2005
TL;DR: This paper examines a theoretical thermodynamic formulation that uses information about steady-state hot spots and cold spots in the data center to develop real-world scheduling algorithms, and then develops an alternate approach to heat management through temperature-aware workload placement.
Abstract: Trends towards consolidation and higher-density computing configurations make the problem of heat management one of the critical challenges in emerging data centers. Conventional approaches to addressing this problem have focused at the facilities level to develop new cooling technologies or optimize the delivery of cooling. In contrast to these approaches, our paper explores an alternate dimension to address this problem, namely a systems-level solution to control the heat generation through temperature-aware workload placement. We first examine a theoretical thermodynamic formulation that uses information about steady-state hot spots and cold spots in the data center and develop real-world scheduling algorithms. Based on the insights from these results, we develop an alternate approach. Our new approach leverages the non-intuitive observation that the source of cooling inefficiencies can often be in locations spatially uncorrelated with its manifested consequences; this enables additional energy savings. Overall, our results demonstrate up to a factor of two reduction in annual data center cooling costs over location-agnostic workload distribution, purely through software optimizations without the need for any costly capital investment.
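The core idea of temperature-aware placement can be sketched as a greedy scheduler that always assigns the next job to the currently coolest node. This is a minimal illustration in the spirit of the abstract, not the paper's actual algorithm; the server names, temperatures, and per-job heat increment are invented.

```python
# Minimal greedy sketch of temperature-aware workload placement: each job goes
# to the coolest server, and placing a job is assumed to raise that server's
# temperature by a fixed amount. All numbers and names are hypothetical.

import heapq

def place_workloads(server_temps, jobs, heat_per_job=1.0):
    """Greedily place each job on the coolest server; returns {job: server}."""
    heap = [(temp, name) for name, temp in server_temps.items()]
    heapq.heapify(heap)
    placement = {}
    for job in jobs:
        temp, name = heapq.heappop(heap)  # coolest node right now
        placement[job] = name
        heapq.heappush(heap, (temp + heat_per_job, name))  # job heats the node
    return placement

servers = {"rack1": 22.0, "rack2": 25.0, "rack3": 21.0}
print(place_workloads(servers, ["j1", "j2", "j3"]))
```

A real scheduler would also model the spatially uncorrelated cooling inefficiencies the paper highlights, rather than treating each node's temperature in isolation.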

740 citations


Journal ArticleDOI
TL;DR: This article identifies key challenges facing optimistic replication systems---ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence---and provides a comprehensive survey of techniques developed for addressing these challenges.
Abstract: Data replication is a key technology in distributed systems that enables higher availability and performance. This article surveys optimistic replication algorithms. They allow replica contents to diverge in the short term to support concurrent work practices and tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular. Optimistic replication deploys algorithms not seen in traditional "pessimistic" systems. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen, and reaches agreement on the final contents incrementally. We explore the solution space for optimistic replication algorithms. This article identifies key challenges facing optimistic replication systems---ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence---and provides a comprehensive survey of techniques developed for addressing these challenges.

733 citations


Journal ArticleDOI
TL;DR: The author begins by discussing the image formation process and examines the demosaicking methods in three groups: the first group consists of heuristic approaches, the second group formulates demosaicking as a restoration problem, and the third group is a generalization that uses the spectral filtering model given in Wandell.
Abstract: The author begins by discussing the image formation process. The demosaicking methods are examined in three groups: the first group consists of heuristic approaches. The second group formulates demosaicking as a restoration problem. The third group is a generalization that uses the spectral filtering model given in Wandell.

616 citations


Patent
28 Oct 2005
TL;DR: In this paper, a plurality of permissions associated with a cloud customer is created; a first set of permissions describes actions performed on objects, while a second set describes actions to be performed by one or more users.
Abstract: A cloud computing environment having a plurality of computing nodes is described. A plurality of permissions associated with a cloud customer is created. A first set of permissions from the plurality of permissions is associated with one or more objects. Each of the first set of permissions describes an action performed on an object. A second set of permissions from the plurality of permissions is associated with one or more users. Each of the second set of permissions describes an action to be performed by one or more users.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the set of correlations that are constrained only by the no-signaling principle and determine the vertices of such correlations in the case that two observers each choose from two possible measurements with $d$ outcomes.
Abstract: It is well known that measurements performed on spatially separated entangled quantum systems can give rise to correlations that are nonlocal, in the sense that a Bell inequality is violated. They cannot, however, be used for superluminal signaling. It is also known that it is possible to write down sets of "superquantum" correlations that are more nonlocal than is allowed by quantum mechanics, yet are still nonsignaling. Viewed as an information-theoretic resource, superquantum correlations are very powerful at reducing the amount of communication needed for distributed computational tasks. An intriguing question is why quantum mechanics does not allow these more powerful correlations. We aim to shed light on the range of quantum possibilities by placing them within a wider context. With this in mind, we investigate the set of correlations that are constrained only by the no-signaling principle. These correlations form a polytope, which contains the quantum correlations as a (proper) subset. We determine the vertices of the no-signaling polytope in the case that two observers each choose from two possible measurements with $d$ outcomes. We then consider how interconversions between different sorts of correlations may be achieved. Finally, we consider some multipartite examples.
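The no-signaling condition the abstract studies can be checked numerically on a concrete example. The sketch below uses the PR ("superquantum") box for the simplest $d = 2$ case, $P(a,b|x,y) = 1/2$ when $a \oplus b = x \cdot y$ and $0$ otherwise, and verifies that each party's marginal is independent of the other party's measurement choice. This is an illustrative check, not code from the paper.

```python
# No-signaling check for the PR box: Alice's marginal P(a|x) must not depend
# on Bob's setting y, and symmetrically for Bob. Binary settings/outcomes.

def pr_box(a, b, x, y):
    """P(a, b | x, y) = 1/2 when a XOR b = x AND y, else 0."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def no_signaling(p):
    # Alice's marginals must be independent of y.
    for x in (0, 1):
        for a in (0, 1):
            m = [sum(p(a, b, x, y) for b in (0, 1)) for y in (0, 1)]
            if abs(m[0] - m[1]) > 1e-12:
                return False
    # Bob's marginals must be independent of x.
    for y in (0, 1):
        for b in (0, 1):
            m = [sum(p(a, b, x, y) for a in (0, 1)) for x in (0, 1)]
            if abs(m[0] - m[1]) > 1e-12:
                return False
    return True

print(no_signaling(pr_box))  # → True
```

The PR box sits at an extremal point of the no-signaling polytope yet lies outside the quantum set, which is exactly the gap the paper characterizes.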

Proceedings ArticleDOI
10 May 2005
TL;DR: The extension of RDF to Named Graphs provides a formally defined framework to be a foundation for the Semantic Web trust layer.
Abstract: The Semantic Web consists of many RDF graphs nameable by URIs. This paper extends the syntax and semantics of RDF to cover such Named Graphs. This enables RDF statements that describe graphs, which is beneficial in many Semantic Web application areas. As a case study, we explore the application area of Semantic Web publishing: Named Graphs allow publishers to communicate assertional intent, and to sign their graphs; information consumers can evaluate specific graphs using task-specific trust policies, and act on information from those Named Graphs that they accept. Graphs are trusted depending on: their content; information about the graph; and the task the user is performing. The extension of RDF to Named Graphs provides a formally defined framework to be a foundation for the Semantic Web trust layer.

Journal ArticleDOI
TL;DR: It is found that small world search strategies using a contact’s position in physical space or in an organizational hierarchy relative to the target can effectively be used to locate most individuals, but in the online student network, where the data is incomplete and hierarchical structures are not well defined, local search strategies are less effective.

Proceedings ArticleDOI
11 Jun 2005
TL;DR: Xenoprof is presented, a system-wide statistical profiling toolkit implemented for the Xen virtual machine environment that will facilitate a better understanding of performance characteristics of Xen's mechanisms allowing the community to optimize the Xen implementation.
Abstract: Virtual Machine (VM) environments (e.g., VMware and Xen) are experiencing a resurgence of interest for diverse uses including server consolidation and shared hosting. An application's performance in a virtual machine environment can differ markedly from its performance in a non-virtualized environment because of interactions with the underlying virtual machine monitor and other virtual machines. However, few tools are currently available to help debug performance problems in virtual machine environments. In this paper, we present Xenoprof, a system-wide statistical profiling toolkit implemented for the Xen virtual machine environment. The toolkit enables coordinated profiling of multiple VMs in a system to obtain the distribution of hardware events such as clock cycles and cache and TLB misses. The toolkit will facilitate a better understanding of performance characteristics of Xen's mechanisms allowing the community to optimize the Xen implementation. We use our toolkit to analyze performance overheads incurred by networking applications running in Xen VMs. We focus on networking applications since virtualizing network I/O devices is relatively expensive. Our experimental results quantify Xen's performance overheads for network I/O device virtualization in uni- and multi-processor systems. With certain Xen configurations, networking workloads in the Xen environment can suffer significant performance degradation. Our results identify the main sources of this overhead which should be the focus of Xen optimization efforts. We also show how our profiling toolkit was used to uncover and resolve performance bugs that we encountered in our experiments which caused unexpected application behavior.

Patent
29 Apr 2005
TL;DR: In this paper, a battery management system for managing current supplied by a battery to a load is presented, in which the system detects an input current and drives the load at a substantially constant voltage if the detected input current reaches a predetermined current threshold.
Abstract: A battery management system for managing current supplied by a battery to a load. The battery management system detects an input current and drives the load at a substantially constant voltage if the detected input current reaches a predetermined current threshold. In addition, the circuit limits the input current to the predetermined current threshold, thereby allowing the output voltage to decrease when the input current is being limited to the threshold by the circuit.

Journal ArticleDOI
TL;DR: In this article, the authors give an operational definition of the quantum, classical and total amounts of correlations in a bipartite quantum state, which can be defined via the amount of work (noise) that is required to erase (destroy) the correlations.
Abstract: We give an operational definition of the quantum, classical, and total amounts of correlations in a bipartite quantum state. We argue that these quantities can be defined via the amount of work (noise) that is required to erase (destroy) the correlations: for the total correlation, we have to erase completely, for the quantum correlation we have to erase until a separable state is obtained, and the classical correlation is the maximal correlation left after erasing the quantum correlations. In particular, we show that the total amount of correlations is equal to the quantum mutual information, thus providing it with a direct operational interpretation. As a by-product, we obtain a direct, operational, and elementary proof of strong subadditivity of quantum entropy.
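The abstract's central identity, that the total correlation equals the quantum mutual information $I(A{:}B) = S(A) + S(B) - S(AB)$, can be verified numerically for a simple case. The sketch below computes it for the Bell state $|\Phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$, where the answer is 2 bits; it is an illustration of the quantity being discussed, not the paper's proof.

```python
# Quantum mutual information of the Bell state |Φ+>: the joint state is pure
# (S(AB) = 0) and each reduced state is maximally mixed (S(A) = S(B) = 1 bit),
# so I(A:B) = 2 bits. Uses numpy for eigenvalues.

import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits; (near-)zero eigenvalues contribute nothing."""
    eig = np.linalg.eigvalsh(rho)
    eig = eig[eig > 1e-12]
    return float(-np.sum(eig * np.log2(eig)))

phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)            # |Φ+> = (|00> + |11>) / √2
rho_ab = np.outer(phi, phi)                  # pure joint density matrix
rho_a = np.einsum("ikjk->ij", rho_ab.reshape(2, 2, 2, 2))  # trace out B

# S(A) = S(B) because the joint state is pure.
mutual_info = 2 * von_neumann_entropy(rho_a) - von_neumann_entropy(rho_ab)
print(round(mutual_info, 6))  # → 2.0
```

In the paper's operational reading, these 2 bits correspond to the amount of local noise needed to erase the Bell state's correlations completely.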

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a scheme for quantum computation using spatially separated matter qubits and single-photon interference effects, which can be used to efficiently generate cluster states of many qubits, together with single-qubit operations and readout.
Abstract: We propose a practical, scalable, and efficient scheme for quantum computation using spatially separated matter qubits and single-photon interference effects. The qubit systems can be nitrogen-vacancy centers in diamond, Pauli-blockade quantum dots with an excess electron, or trapped ions with optical transitions, which are each placed in a cavity and subsequently entangled using a double-heralded single-photon detection scheme. The fidelity of the resulting entanglement is extremely robust against the most important errors such as detector loss, spontaneous emission, and mismatch of cavity parameters. We demonstrate how this entangling operation can be used to efficiently generate cluster states of many qubits, which, together with single-qubit operations and readout, can be used to implement universal quantum computation. Existing experimental parameters indicate that high-fidelity clusters can be generated with a moderate constant overhead.

Journal ArticleDOI
TL;DR: Internet-based applications and their resulting multitier distributed architectures have changed the focus of design for large-scale Internet computing.
Abstract: Internet-based applications and their resulting multitier distributed architectures have changed the focus of design for large-scale Internet computing. Internet server applications execute in a horizontally scalable topology across hundreds or thousands of commodity servers in Internet data centers. Increasing scale and power density significantly impacts the data center's thermal properties. Effective thermal management is essential to the robustness of mission-critical applications. Internet service architectures can address multisystem resource management as well as thermal management within data centers.

Proceedings ArticleDOI
19 Sep 2005
TL;DR: This paper describes a novel inference scheme that takes advantage of data describing historical, repeating patterns of "infection" to track information flow in blogspace, as well as a visualization system that allows for the graphical tracking of information flow.
Abstract: Beyond serving as online diaries, weblogs have evolved into a complex social structure, one which is in many ways ideal for the study of the propagation of information. As weblog authors discover and republish information, we are able to use the existing link structure of blogspace to track its flow. Where the path by which it spreads is ambiguous, we utilize a novel inference scheme that takes advantage of data describing historical, repeating patterns of "infection." Our paper describes this technique as well as a visualization system that allows for the graphical tracking of information flow.

Journal ArticleDOI
TL;DR: Christandl et al. as discussed by the authors proposed a class of qubit networks that admit perfect state transfer of any two-dimensional quantum state in a fixed period of time, and they further showed that such networks can distribute arbitrary entangled states between two distant parties, and can, by using such systems in parallel, transmit the higher-dimensional systems states across the network.
Abstract: We propose a class of qubit networks that admit perfect state transfer of any two-dimensional quantum state in a fixed period of time. We further show that such networks can distribute arbitrary entangled states between two distant parties, and can, by using such systems in parallel, transmit higher-dimensional system states across the network. Unlike many other schemes for quantum computation and communication, these networks do not require qubit couplings to be switched on and off. When restricted to $N$-qubit spin networks of identical qubit couplings, we show that $2\log_3 N$ is the maximal perfect communication distance for hypercube geometries. Moreover, if one allows fixed but different couplings between the qubits then perfect state transfer can be achieved over arbitrarily long distances in a linear chain. This paper expands and extends the work done by Christandl et al., Phys. Rev. Lett. 92, 187902 (2004).

Patent
10 Aug 2005
TL;DR: In this article, an interface is presented for enabling the computing device to control a voicemail system, which includes one or more display objects, wherein each display object is selectable by a user to enter a command input assigned to that display object.
Abstract: Embodiments described herein provide a method and technique for operating a computing device. An interface is displayed for enabling the computing device to control a voicemail system. The interface includes one or more display objects, wherein each display object is selectable by a user to enter a command input assigned to that display object. A selection is detected of any one of the one or more display objects, and the command input assigned to the display object is identified. A signal tone is generated corresponding to the command input. The signal tone may be transmitted across a network to the voicemail system to communicate a command to the voicemail system.

Book ChapterDOI
10 Feb 2005
TL;DR: This work investigates the problem of privacy-preserving access to a database, where records in the database are accessed according to their associated keywords and gives efficient solutions for various settings of KS.
Abstract: We study the problem of privacy-preserving access to a database. Particularly, we consider the problem of privacy-preserving keyword search (KS), where records in the database are accessed according to their associated keywords and where we care for the privacy of both the client and the server. We provide efficient solutions for various settings of KS, based either on specific assumptions or on general primitives (mainly oblivious transfer). Our general solutions rely on a new connection between KS and the oblivious evaluation of pseudorandom functions (OPRFs). We therefore study both the definition and construction of OPRFs and, as a corollary, give improved constructions of OPRFs that may be of independent interest.

Journal ArticleDOI
TL;DR: Heterogeneous (or asymmetric) chip multiprocessors present unique opportunities for improving system throughput, reducing processor power, and mitigating Amdahl's law.
Abstract: Heterogeneous (or asymmetric) chip multiprocessors present unique opportunities for improving system throughput, reducing processor power, and mitigating Amdahl's law. On-chip heterogeneity allows the processor to better match execution resources to each application's needs and to address a much wider spectrum of system loads - from low to high thread parallelism - with high efficiency.

Journal ArticleDOI
TL;DR: A betweenness centrality algorithm is used that can rapidly find communities within a graph representing information flows and is effective at identifying true communities, both formal and informal, within these scale-free graphs.
Abstract: We describe a method for the automatic identification of communities of practice from e-mail logs within an organization. We use a betweenness centrality algorithm that can rapidly find communities within a graph representing information flows. We apply this algorithm to an initial e-mail corpus of nearly 1 million messages collected over a 2-month span, and show that the method is effective at identifying true communities, both formal and informal, within these scale-free graphs. This approach also enables the identification of leadership roles within the communities. These studies are complemented by a qualitative evaluation of the results in the field.
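The betweenness-based idea in this abstract can be illustrated on a toy graph: score each edge by how many shortest paths cross it, then remove the highest-scoring edge, which tends to be a bridge between communities. The sketch below brute-forces this on a tiny made-up graph (two triangles joined by one edge); the paper's algorithm is a scalable variant of this, not this code.

```python
# Toy edge-betweenness computation: enumerate all shortest paths between every
# node pair and credit each edge 1/(number of shortest paths) per pair. The
# bridge edge between the two triangle "communities" scores highest.

from collections import deque
from itertools import combinations

def shortest_paths(adj, s, t):
    """All shortest paths from s to t, via BFS distance layering."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    if t not in dist:
        return []
    def extend(path):
        u = path[-1]
        if u == t:
            return [path]
        return [p for v in adj[u] if dist.get(v) == dist[u] + 1
                for p in extend(path + [v])]
    return extend([s])

def edge_betweenness(adj):
    score = {}
    for s, t in combinations(adj, 2):
        paths = shortest_paths(adj, s, t)
        for path in paths:
            for u, v in zip(path, path[1:]):
                edge = tuple(sorted((u, v)))
                score[edge] = score.get(edge, 0) + 1 / len(paths)
    return score

# Two triangles {a, b, c} and {d, e, f}, linked by the single bridge (c, d).
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
       "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"]}
scores = edge_betweenness(adj)
print(max(scores, key=scores.get))  # the bridge edge scores highest
```

Removing that top edge splits the graph into its two communities; repeating the score-and-remove loop is the standard Girvan-Newman procedure the paper's method builds on.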

Journal ArticleDOI
20 Oct 2005
TL;DR: This work presents a method for automatically extracting from a running system an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval to identify when an observed system state is similar to a previously-observed state.
Abstract: We present a method for automatically extracting from a running system an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval to identify when an observed system state is similar to a previously-observed state. This allows operators to identify and quantify the frequency of recurrent problems, to leverage previous diagnostic efforts, and to establish whether problems seen at different installations of the same site are similar or distinct. We show that the naive approach to constructing these signatures based on simply recording the actual "raw" values of collected measurements is ineffective, leading us to a more sophisticated approach based on statistical modeling and inference. Our method requires only that the system's metric of merit (such as average transaction response time) as well as a collection of lower-level operational metrics be collected, as is done by existing commercial monitoring tools. Even if the traces have no annotations of prior diagnoses of observed incidents (as is typical), our technique successfully clusters system states corresponding to similar problems, allowing diagnosticians to identify recurring problems and to characterize the "syndrome" of a group of problems. We validate our approach on both synthetic traces and several weeks of production traces from a customer-facing geoplexed 24 x 7 system; in the latter case, our approach identified a recurring problem that had required extensive manual diagnosis, and also aided the operators in correcting a previous misdiagnosis of a different problem.

Journal ArticleDOI
20 Oct 2005
TL;DR: This paper describes the Hibernator design, and presents evaluations of it using both trace-driven simulations and a hybrid system comprised of a real database server (IBM DB2) and an emulated storage server with multi-speed disks.
Abstract: Energy consumption has become an important issue in high-end data centers, and disk arrays are one of the largest energy consumers within them. Although several attempts have been made to improve disk array energy management, the existing solutions either provide little energy savings or significantly degrade performance for data center workloads. Our solution, Hibernator, is a disk array energy management system that provides improved energy savings while meeting performance goals. Hibernator combines a number of techniques to achieve this: the use of disks that can spin at different speeds, a coarse-grained approach for dynamically deciding which disks should spin at which speeds, efficient ways to migrate the right data to an appropriate-speed disk automatically, and automatic performance boosts if there is a risk that performance goals might not be met due to disk energy management. In this paper, we describe the Hibernator design, and present evaluations of it using both trace-driven simulations and a hybrid system comprised of a real database server (IBM DB2) and an emulated storage server with multi-speed disks. Our file-system and on-line transaction processing (OLTP) simulation results show that Hibernator can provide up to 65% energy savings while continuing to satisfy performance goals (6.5--26 times better than previous solutions). Our OLTP emulated system results show that Hibernator can save more energy (29%) than previous solutions, while still providing an OLTP transaction rate comparable to a RAID5 array with no energy management.

Proceedings Article
10 Apr 2005
TL;DR: This work presents a light weight monitoring system for measuring the CPU usage of different virtual machines including the CPU overhead in the device driver domain caused by I/O processing on behalf of a particular virtual machine.
Abstract: Virtual Machine Monitors (VMMs) are gaining popularity in enterprise environments as a software-based solution for building shared hardware infrastructures via virtualization. In this work, using the Xen VMM, we present a lightweight monitoring system for measuring the CPU usage of different virtual machines, including the CPU overhead in the device driver domain caused by I/O processing on behalf of a particular virtual machine. Our performance study attempts to quantify and analyze this overhead for a set of I/O intensive workloads.

Patent
28 Feb 2005
TL;DR: In this article, the authors present a method for migrating a virtual machine from a first node to a second node of a plurality of nodes in response to the analysis of application performance.
Abstract: In one embodiment, a method comprises executing a plurality of virtual machines on a plurality of nodes of a cluster computing system, wherein at least one application is executed within each of the plurality of virtual machines, generating data that is related to performance of applications in the virtual machines, analyzing, by a management process, the data in view of parameters that encode desired performance levels of applications, and migrating, by the management process, a virtual machine on a first node to a second node of the plurality of nodes in response to the analyzing.

Journal ArticleDOI
TL;DR: In this paper, a new route for distributed optical quantum information processing (QIP) based on generalized quantum non-demolition measurements is presented, providing a unified approach for quantum communication and computing.
Abstract: Quantum information processing (QIP) offers the promise of being able to do things that we cannot do with conventional technology. Here we present a new route for distributed optical QIP, based on generalized quantum non-demolition measurements, providing a unified approach for quantum communication and computing. Interactions between photons are generated using weak nonlinearities and intense laser fields—the use of such fields provides for robust distribution of quantum information. Our approach only requires a practical set of resources, and it uses these very efficiently. Thus it promises to be extremely useful for the first quantum technologies, based on scarce resources. Furthermore, in the longer term this approach provides both options and scalability for efficient many-qubit QIP.