Showing papers by "Hewlett-Packard" published in 2006


Journal ArticleDOI
TL;DR: A dynamic model of collaborative tagging is presented that predicts regularities in user activity, tag frequencies, kinds of tags used, bursts of popularity in bookmarking and a remarkable stability in the relative proportions of tags within a given URL.
Abstract: Collaborative tagging describes the process by which many users add metadata in the form of keywords to shared content. Recently, collaborative tagging has grown in popularity on the web, on sites that allow users to tag bookmarks, photographs and other content. In this paper we analyze the structure of collaborative tagging systems as well as their dynamic aspects. Specifically, we discovered regularities in user activity, tag frequencies, kinds of tags used, bursts of popularity in bookmarking and a remarkable stability in the relative proportions of tags within a given URL. We also present a dynamic model of collaborative tagging that predicts these stable patterns and relates them to imitation and shared knowledge.
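
The dynamics the paper models can be illustrated with a small urn-style simulation: a new bookmarker either imitates an existing tag (in proportion to its popularity) or contributes a fresh one. This is a minimal sketch in the spirit of the paper's model, not the authors' exact formulation; the imitation probability and tag names are assumptions.

    import random
    from collections import Counter

    def simulate_tagging(n_events=20000, p_imitate=0.9, seed=1):
        """Urn-style tagging: with probability p_imitate, copy an existing
        tag (chosen proportionally to its current frequency); otherwise
        introduce a fresh tag. Relative proportions stabilize over time."""
        rng = random.Random(seed)
        tags = ["tag0"]                         # first bookmark seeds the urn
        next_id = 1
        for _ in range(n_events):
            if rng.random() < p_imitate:
                tags.append(rng.choice(tags))   # imitation: rich get richer
            else:
                tags.append("tag%d" % next_id)  # fresh tag from shared knowledge
                next_id += 1
        return Counter(tags)

    counts = simulate_tagging()
    total = sum(counts.values())
    for tag, n in counts.most_common(5):
        print(tag, round(n / total, 3))         # top tags hold stable shares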

1,965 citations


Patent
10 Oct 2006
TL;DR: In this paper, a thin-film semiconductor and a method of its fabrication use induced crystallization and aggregation of a nanocrystal seed layer to form a merged-domain layer.
Abstract: A thin film semiconductor and a method of its fabrication use induced crystallization and aggregation of a nanocrystal seed layer to form a merged-domain layer. The nanocrystal seed layer is deposited onto a substrate surface within a defined boundary. A reaction temperature below a boiling point of a reaction solution is employed. A thin film metal-oxide transistor and a method of its production employ the thin film semiconductor as a channel of the transistor. The merged-domain layer exhibits high carrier mobility.

1,026 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that the main postulate of statistical mechanics, the equal a priori probability postulate, should be abandoned as misleading and unnecessary, and replaced by a general canonical principle whose physical content is fundamentally different: it refers to individual states rather than to ensemble or time averages.
Abstract: Statistical mechanics is one of the most successful areas of physics. Yet, almost 150 years since its inception, its foundations and basic postulates are still the subject of debate. Here we suggest that the main postulate of statistical mechanics, the equal a priori probability postulate, should be abandoned as misleading and unnecessary. We argue that it should be replaced by a general canonical principle, whose physical content is fundamentally different from the postulate it replaces: it refers to individual states, rather than to ensemble or time averages. Furthermore, whereas the original postulate is an unprovable assumption, the principle we propose is mathematically proven. The key element in this proof is the quantum entanglement between the system and its environment. Our approach separates the issue of finding the canonical state from finding out how close a system is to it, allowing us to go even beyond the usual Boltzmannian situation.
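
Schematically, the principle says (paraphrasing in adapted notation, not quoting the paper): for almost every pure state |phi⟩ of system plus environment consistent with a global constraint R,

    \rho_S(\phi) \;=\; \operatorname{Tr}_E |\phi\rangle\langle\phi| \;\approx\; \Omega_S \;=\; \operatorname{Tr}_E\!\left(\frac{\mathbb{1}_R}{d_R}\right)

where Tr_E traces out the environment, 1_R projects onto the subspace allowed by R, and d_R is that subspace's dimension. When R fixes only the total energy and the system couples weakly to a large environment, Omega_S reduces to the familiar canonical state proportional to e^{-H_S/kT}.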

876 citations


Proceedings ArticleDOI
01 Nov 2006
TL;DR: The design and evaluation of a set of primitives implemented in Xen to enforce performance isolation across virtual machines are presented; the evaluation indicates that these mechanisms are effective for a variety of workloads and configurations.
Abstract: Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers. One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics. However, such multiplexing must often be done while observing per-VM performance guarantees or service level agreements. Thus, one important requirement in this environment is effective performance isolation among VMs. In this paper, we address performance isolation across virtual machines in Xen [1]. For instance, while Xen can allocate fixed shares of CPU among competing VMs, it does not currently account for work done on behalf of individual VMs in device drivers. Thus, the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place. In this paper, we present the design and evaluation of a set of primitives implemented in Xen to address this issue. First, XenMon accurately measures per-VM resource consumption, including work done on behalf of a particular VM in Xen's driver domains. Next, our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU. Finally, ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits. Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations.

432 citations


Journal ArticleDOI
01 May 2006
TL;DR: This paper proposes achieving power efficiency at a larger scale by leveraging statistical properties of concurrent resource usage across a collection of systems (an "ensemble"), and discusses an implementation of this approach at the blade enclosure level that monitors and manages power across the individual blades in a chassis.
Abstract: One of the key challenges for high-density servers (e.g., blades) is the increased costs in addressing the power and heat density associated with compaction. Prior approaches have mainly focused on reducing the heat generated at the level of an individual server. In contrast, this work proposes power efficiencies at a larger scale by leveraging statistical properties of concurrent resource usage across a collection of systems ("ensemble"). Specifically, we discuss an implementation of this approach at the blade enclosure level to monitor and manage the power across the individual blades in a chassis. Our approach requires low-cost hardware modifications and relatively simple software support. We evaluate our architecture through both prototyping and simulation. For workloads representing 132 servers from nine different enterprise deployments, we show significant power budget reductions at performances comparable to conventional systems.
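
The statistical property being leveraged is that blades in an enclosure rarely peak simultaneously, so a budget provisioned for the observed ensemble peak is much smaller than the sum of per-blade peaks. A toy illustration with synthetic numbers (the distributions and blade count are invented, and a real enclosure needs a runtime enforcement mechanism for the rare violations):

    import random

    random.seed(0)
    # Synthetic per-blade power draws (watts) for a 16-blade enclosure,
    # sampled over 10,000 time steps.
    samples = [[random.gauss(180, 30) for _ in range(16)] for _ in range(10000)]

    sum_of_peaks = sum(max(s[i] for s in samples) for i in range(16))
    ensemble_peak = max(sum(s) for s in samples)

    print("sum of per-blade peaks: %.0f W" % sum_of_peaks)
    print("observed ensemble peak: %.0f W" % ensemble_peak)
    # The ensemble peak sits well below the sum of per-blade peaks, so an
    # enclosure-level budget near the ensemble peak frees provisioned power
    # and cooling; a throttling mechanism guards the rare excursions.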

421 citations


Journal ArticleDOI
TL;DR: In this article, the authors report triggered single-photon emission from gallium nitride quantum dots at temperatures up to 200 K, a temperature easily reachable with thermo-electric cooling.
Abstract: Fundamentally secure quantum cryptography has still not seen widespread application owing to the difficulty of generating single photons on demand. Semiconductor quantum-dot structures have recently shown great promise as practical single-photon sources, and devices with integrated optical cavities and electrical-carrier injection have already been demonstrated. However, a significant obstacle for the application of commonly used III–V quantum dots to quantum-information-processing schemes is the requirement of liquid-helium cryogenic temperatures. Epitaxially grown gallium nitride quantum dots embedded in aluminium nitride have the potential for operation at much higher temperatures. Here, we report triggered single-photon emission from gallium nitride quantum dots at temperatures up to 200 K, a temperature easily reachable with thermo-electric cooling. Gallium nitride quantum dots also open a new wavelength region in the blue and near-ultraviolet portions of the spectrum for single-photon sources.

417 citations


01 Jan 2006
TL;DR: This work examines the validity of prior ad hoc approaches to understanding power breakdown, quantifies several interesting trends important for power modeling and management in the future, and introduces Mantis, a nonintrusive method for modeling full-system power consumption and providing real-time power prediction.
Abstract: The increasing costs of power delivery and cooling, as well as the trend toward higher-density computer systems, have created a growing demand for better power management in server environments. Despite the increasing interest in this issue, little work has been done in quantitatively understanding power consumption trends and developing simple yet accurate models to predict full-system power. We study the component-level power breakdown and variation, as well as temporal workload-specific power consumption of an instrumented power-optimized blade server. Using this analysis, we examine the validity of prior ad hoc approaches to understanding power breakdown and quantify several interesting trends important for power modeling and management in the future. We also introduce Mantis, a nonintrusive method for modeling full-system power consumption and providing real-time power prediction. Mantis uses a one-time calibration phase to generate a model by correlating AC power measurements with user-level system utilization metrics. We experimentally validate the model on two server systems with drastically different power footprints and characteristics (a low-end blade and high-end compute-optimized server) using a variety of workloads. Mantis provides power estimates with high accuracy for both overall and temporal power consumption, making it a valuable tool for power-aware scheduling and analysis.
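
The calibration step amounts to fitting measured AC power against utilization metrics; a least-squares linear model of that flavor is sketched below. The metric set, sample values, and linear form are illustrative assumptions, not Mantis' exact model.

    import numpy as np

    # Calibration data: rows = [cpu_util, mem_accesses, disk_io, net_io]
    # (user-level utilization metrics), y = measured AC power in watts.
    X = np.array([[0.10, 0.2, 0.1, 0.0],
                  [0.50, 0.4, 0.3, 0.2],
                  [0.90, 0.7, 0.2, 0.5],
                  [0.30, 0.1, 0.8, 0.1],
                  [0.70, 0.9, 0.5, 0.6],
                  [0.20, 0.3, 0.2, 0.1]])
    y = np.array([152.0, 198.0, 243.0, 176.0, 229.0, 165.0])

    # Add an intercept column for idle power, then solve least squares.
    A = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict_power(metrics):
        """Real-time full-system power estimate from utilization metrics."""
        return coef[0] + np.dot(coef[1:], metrics)

    print(predict_power([0.6, 0.5, 0.4, 0.3]))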

377 citations


Proceedings Article
08 May 2006
TL;DR: Pip is an infrastructure for comparing actual behavior and expected behavior to expose structural errors and performance problems in distributed systems, and allows programmers to express, in a declarative language, expectations about the system's communications structure, timing, and resource consumption.
Abstract: Bugs in distributed systems are often hard to find. Many bugs reflect discrepancies between a system's behavior and the programmer's assumptions about that behavior. We present Pip, an infrastructure for comparing actual behavior and expected behavior to expose structural errors and performance problems in distributed systems. Pip allows programmers to express, in a declarative language, expectations about the system's communications structure, timing, and resource consumption. Pip includes system instrumentation and annotation tools to log actual system behavior, and visualization and query tools for exploring expected and unexpected behavior. Pip allows a developer to quickly understand and debug both familiar and unfamiliar systems. We applied Pip to several applications, including FAB, SplitStream, Bullet, and RanSub. We generated most of the instrumentation for all four applications automatically. We found the needed expectations easy to write, starting in each case with automatically generated expectations. Pip found unexpected behavior in each application, and helped to isolate the causes of poor performance and incorrect behavior.

373 citations


Journal ArticleDOI
TL;DR: In this paper, a superoscillatory function, a band-limited function oscillating faster than its fastest Fourier component, is taken as the initial state of a freely evolving quantum wavefunction ψ.
Abstract: A superoscillatory function—that is, a band-limited function f(x) oscillating faster than its fastest Fourier component—is taken to be the initial state of a freely-evolving quantum wavefunction ψ. The superoscillations persist for unexpectedly long times, but eventually disappear through the interaction of contributions to ψ with complex momenta that are exponentially disparate in magnitude; this is established by applying the asymptotics of integrals, supported by numerics. f(x) can alternatively be regarded as the wave generated by a diffraction grating, propagating paraxially and without evanescence as ψ in the space beyond. The persistence of superoscillations is then interpreted as the propagation of sub-wavelength structure farther into the field than the more familiar evanescent waves.
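
A standard concrete example of such a function from the superoscillations literature (shown for orientation; the paper's exact choice of f is not reproduced here) is

    f(x) \;=\; (\cos x + i a \sin x)^{N}, \qquad a > 1,

which is band-limited to Fourier components e^{ikx} with |k| ≤ N, yet near x = 0 behaves as f(x) ≈ e^{iNax}, oscillating a times faster than its fastest Fourier component. Taking such an f as the initial ψ and evolving it freely is the setting the asymptotic analysis addresses.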

367 citations


Patent
27 Oct 2006
TL;DR: In this paper, a load balancer receives a request from a client and determines whether at least one additional virtual machine should be started up to satisfy the request.
Abstract: A system has plural physical machines that contain virtual machines. A load balancer receives a request from a client. In response to the request, it is determined whether at least one additional virtual machine should be started up. In response to determining that at least one additional virtual machine should be started up, the load balancer sends at least one command to start up the at least one additional virtual machine in at least one of the physical machines.
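
A minimal sketch of the claimed control flow, with invented class names, thresholds, and load metrics (the patent does not specify these):

    from dataclasses import dataclass

    @dataclass
    class VM:
        load: float = 0.0
        def dispatch(self, req):            # pretend to serve the request
            self.load += 0.1
            return "served %s" % req

    @dataclass
    class Host:
        utilization: float = 0.0
        def start_vm(self):                 # command to boot one more VM
            self.utilization += 0.2
            return VM()

    def handle_request(req, vms, hosts, max_load=0.8):
        """Load balancer: start an extra VM when the pool is saturated."""
        if sum(vm.load for vm in vms) / len(vms) > max_load:
            host = min(hosts, key=lambda h: h.utilization)   # most headroom
            vms.append(host.start_vm())
        return min(vms, key=lambda vm: vm.load).dispatch(req)

    vms, hosts = [VM(load=0.9)], [Host(), Host(utilization=0.5)]
    print(handle_request("GET /", vms, hosts), len(vms))     # pool grew to 2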

361 citations


Posted Content
TL;DR: This paper presents a method incorporating a built-in decisional function into the protocols, and discusses the resulting efficiency of the schemes and the relevant security reductions, in the random oracle model, in the context of different pairings one can use.
Abstract: In recent years, a large number of identity-based key agreement protocols from pairings have been proposed. Some of them are elegant and practical. However, the security of this type of protocols has been surprisingly hard to prove. The main issue is that a simulator is not able to deal with reveal queries, because it requires solving either a computational problem or a decisional problem, both of which are generally believed to be hard (i.e., computationally infeasible). The best solution of security proof published so far uses the gap assumption, which means assuming that the existence of a decisional oracle does not change the hardness of the corresponding computational problem. The disadvantage of using this solution to prove the security for this type of protocols is that such decisional oracles, on which the security proof relies, cannot be performed by any polynomial time algorithm in the real world, because of the hardness of the decisional problem. In this paper we present a method incorporating a built-in decisional function in this type of protocols. The function transfers a hard decisional problem in the proof to an easy decisional problem. We then discuss the resulting efficiency of the schemes and the relevant security reductions in the context of different pairings one can use. We pay particular attention, unlike most other papers in the area, to the issues which arise when using asymmetric pairings.

Journal ArticleDOI
TL;DR: A quantum repeater protocol for long-distance quantum communication that creates entanglement between qubits at intermediate stations of the channel by using a weak dispersive light-matter interaction and distributing the outgoing bright coherent-light pulses among the stations.
Abstract: We describe a quantum repeater protocol for long-distance quantum communication. In this scheme, entanglement is created between qubits at intermediate stations of the channel by using a weak dispersive light-matter interaction and distributing the outgoing bright coherent-light pulses among the stations. Noisy entangled pairs of electronic spin are then prepared with high success probability via homodyne detection and postselection. The local gates for entanglement purification and swapping are deterministic and measurement-free, based upon the same coherent-light resources and weak interactions as for the initial entanglement distribution. Finally, the entanglement is stored in a nuclear-spin-based quantum memory. With our system, qubit-communication rates approaching 100 Hz over 1280 km with fidelities near 99% are possible for reasonable local gate errors.

Proceedings ArticleDOI
11 Jun 2006
TL;DR: The authors establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations, and present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.
Abstract: We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.
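
A branching-process simulation conveys why cascades stay small on average: if each recommendation converts with probability p and fans out to f friends, the process is subcritical whenever p·f < 1. Parameters below are invented, not the paper's fitted values.

    import random

    def cascade_size(p_buy=0.05, fanout=3, rng=random.Random(7)):
        """Branching cascade: a buyer recommends to `fanout` friends, each
        of whom buys (and recommends onward) with probability p_buy."""
        size, frontier = 1, 1
        while frontier:
            buyers = sum(rng.random() < p_buy
                         for _ in range(frontier * fanout))
            size += buyers
            frontier = buyers
        return size

    sizes = [cascade_size() for _ in range(100000)]
    print(max(sizes), sum(sizes) / len(sizes))
    # With p_buy * fanout < 1 the process is subcritical: cascades stay
    # small on average, matching the observation that recommendations
    # do not spread very far.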

Journal ArticleDOI
TL;DR: Coherent population trapping is demonstrated in single nitrogen-vacancy centers in diamond under optical excitation, showing that all-optical control of single spins is possible in diamond.
Abstract: Coherent population trapping is demonstrated in single nitrogen-vacancy centers in diamond under optical excitation. For sufficient excitation power, the fluorescence intensity drops almost to the background level when the laser modulation frequency matches the 2.88 GHz splitting of the ground states. The results are well described theoretically by a four-level model, allowing the relative transition strengths to be determined for individual centers. The results show that all-optical control of single spins is possible in diamond.

Patent
21 Mar 2006
TL;DR: In this article, the authors present a device client that supports customer care and distribution of update packages to electronic devices, making it possible to efficiently manage and update firmware and software in electronic devices.
Abstract: A device client that supports customer care and distribution of update packages to electronic devices makes it possible to efficiently manage and update firmware and software in electronic devices. A terminal management/device management server employs extensions to an industry standard device management protocol to update configuration information, to provision the electronic device, and to manage the electronic device, for example. The electronic device may receive update packages, and update agent(s) in the electronic device may update the firmware and/or software of the electronic device. A diagnostic client in the electronic device facilitates remote diagnosis and a traps client facilitates setting traps and retrieving collected information. A terminal management server may remotely invoke control actions within the electronic device using management objects not supported by the industry standard device management protocol. A user of the electronic device may use a self-care portal to administer self-care and to conduct diagnostics. A subsequent customer-care call may use such information collected during self-care.

Journal ArticleDOI
TL;DR: A selective review of relevant literatures demonstrates that concepts are not abstracted out of situations but instead are situated, and a taxonomy of situations is proposed in which grain size, meaningfulness, and tangibility distinguish the cumulative situations that structure cognition hierarchically.
Abstract: For decades the importance of background situations has been documented across all areas of cognition. Nevertheless, theories of concepts generally ignore background situations, focusing largely on bottom-up, stimulus-based processing. Furthermore, empirical research on concepts typically ignores background situations, not incorporating them into experimental designs. A selective review of relevant literatures demonstrates that concepts are not abstracted out of situations but instead are situated. Background situations constrain conceptual processing in many tasks (e.g., recall, recognition, categorization, lexical decision, color naming, property verification, property generation) across many areas of cognition (e.g., episodic memory, conceptual processing, visual object recognition, language comprehension). A taxonomy of situations is proposed in which grain size, meaningfulness, and tangibility distinguish the cumulative situations that structure cognition hierarchically.

Proceedings ArticleDOI
22 Apr 2006
TL;DR: The interface and content-parsing algorithms in Themail are described, along with results from a user study in which two main interaction modes with the visualization emerged: exploration of "big picture" trends and themes in email (haystack mode) and more detail-oriented exploration (needle mode).
Abstract: We present Themail, a visualization that portrays relationships using the interaction histories preserved in email archives. Using the content of exchanged messages, it shows the words that characterize one's correspondence with an individual and how they change over the period of the relationship. This paper describes the interface and content-parsing algorithms in Themail. It also presents the results from a user study where two main interaction modes with the visualization emerged: exploration of "big picture" trends and themes in email (haystack mode) and more detail-oriented exploration (needle mode). Finally, the paper discusses the limitations of the content parsing approach in Themail and the implications for further research on email content visualization.
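
The content-parsing idea, selecting words that are frequent in one interval of a correspondence but rare in others, can be sketched with a TF-IDF-style score. This illustrates the approach, not Themail's actual scoring function.

    import math
    from collections import Counter

    def distinctive_words(monthly_messages, top=3):
        """monthly_messages: {month: [message strings with one contact]}.
        Returns the words most characteristic of each month."""
        month_counts = {m: Counter(w for msg in msgs for w in msg.lower().split())
                        for m, msgs in monthly_messages.items()}
        n_months = len(month_counts)
        df = Counter(w for c in month_counts.values() for w in c)  # month freq
        result = {}
        for month, counts in month_counts.items():
            scored = {w: tf * math.log(n_months / df[w])           # tf-idf
                      for w, tf in counts.items()}
            result[month] = sorted(scored, key=scored.get, reverse=True)[:top]
        return result

    mail = {"2006-01": ["budget meeting monday", "budget draft attached"],
            "2006-02": ["ski trip photos", "photos from the trip"]}
    print(distinctive_words(mail))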

Patent
25 Oct 2006
TL;DR: In this article, a user self-care interface allows subscribers to interact with the device management system to identify problems in the mobile electronic devices, and to schedule updates, via a web-based interface.
Abstract: A device management system enables management of firmware, provisioning, and configuration information updates to a plurality of mobile electronic devices via a wireless communication network. A user self-care interface allows subscribers to interact with the device management system to identify problems in the mobile electronic devices, and to schedule updates, via a web-based interface. Delivery of updates to the mobile electronic devices is managed so as to maintain device management system loading at acceptable levels, within scheduling constraints.

Journal ArticleDOI
30 Nov 2006 (Nature)
TL;DR: The Antikythera Mechanism as discussed by the authors is a unique Greek geared device, constructed around the end of the second century BC. It is known that it calculated and displayed celestial information, particularly cycles such as the phases of the moon and a luni-solar calendar.
Abstract: The Antikythera Mechanism is a unique Greek geared device, constructed around the end of the second century BC. It is known [1-9] that it calculated and displayed celestial information, particularly cycles such as the phases of the moon and a luni-solar calendar. Calendars were important to ancient societies [10] for timing agricultural activity and fixing religious festivals. Eclipses and planetary motions were often interpreted as omens, while the calm regularity of the astronomical cycles must have been philosophically attractive in an uncertain and violent world. Named after its place of discovery in 1901 in a Roman shipwreck, the Antikythera Mechanism is technically more complex than any known device for at least a millennium afterwards. Its specific functions have remained controversial [11-14] because its gears and the inscriptions upon its faces are only fragmentary. Here we report surface imaging and high-resolution X-ray tomography of the surviving fragments, enabling us to reconstruct the gear function and double the number of deciphered inscriptions. The mechanism predicted lunar and solar eclipses on the basis of Babylonian arithmetic-progression cycles. The inscriptions support suggestions of mechanical display of planetary positions [9,14,15], now lost. In the second century BC, Hipparchos developed a theory to explain the irregularities of the Moon's motion across the sky caused by its elliptic orbit. We find a mechanical realization of this theory in the gearing of the mechanism, revealing an unexpected degree of technical sophistication for the period.
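
The Babylonian cycle underlying the eclipse prediction is the Saros: eclipses of similar geometry recur after 223 synodic months because the Moon's periods are nearly commensurate over that span,

    223 \times 29.53059\,\text{d (synodic)} \;\approx\; 242 \times 27.21222\,\text{d (draconic)} \;\approx\; 6585.3\,\text{d} \;\approx\; 18\,\text{yr}\;11\tfrac{1}{3}\,\text{d}.

The leftover third of a day displaces successive eclipses by about eight hours, which is why a triple-Saros cycle (the 54-year Exeligmos) also figures in mechanical eclipse schemes of this kind.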

Proceedings ArticleDOI
16 Sep 2006
TL;DR: This work assumes the flexibility to design a multi-core architecture from the ground up and seeks to address the following question: what should be the characteristics of the cores for a heterogeneous multi-processor for the highest area or power efficiency?
Abstract: Previous studies have demonstrated the advantages of single-ISA heterogeneous multi-core architectures for power and performance. However, none of those studies examined how to design such a processor; instead, they started with an assumed combination of pre-existing cores. This work assumes the flexibility to design a multi-core architecture from the ground up and seeks to address the following question: what should be the characteristics of the cores for a heterogeneous multi-processor for the highest area or power efficiency? The study is done for varying degrees of thread-level parallelism and for different area and power budgets. The most efficient chip multiprocessors are shown to be heterogeneous, with each core customized to a different subset of application characteristics — no single core is necessarily well suited to all applications. The performance ordering of cores on such processors is different for different applications; there is only a partial ordering among cores in terms of resources and complexity. This methodology produces performance gains as high as 40%. The performance improvements come with the added cost of customization.
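
The kind of search this enables can be sketched as a toy exhaustive enumeration: choose the core mix maximizing throughput under an area budget, assuming each core runs the application class it suits best. All core parameters below are invented for illustration.

    from itertools import combinations_with_replacement

    # Hypothetical core designs: (name, area, perf on app class A, on class B).
    CORES = [("tiny", 1, 1.0, 0.6), ("medium", 2, 1.5, 1.5), ("big", 4, 2.0, 3.2)]
    AREA_BUDGET = 8

    def best_mix(n_cores=4):
        best = (0.0, None)
        for mix in combinations_with_replacement(CORES, n_cores):
            if sum(c[1] for c in mix) > AREA_BUDGET:
                continue
            # Crude model: each core is assigned the application class it
            # handles best, and throughput is additive across cores.
            perf = sum(max(c[2], c[3]) for c in mix)
            best = max(best, (perf, tuple(c[0] for c in mix)))
        return best

    print(best_mix())
    # For these invented numbers the winner is a heterogeneous mix: one big
    # core for the class-B-heavy work plus small cores for the rest.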

Proceedings ArticleDOI
12 Jun 2006
TL;DR: Experimental results from a representative data center show that automatic thermal mapping can predict accurately the heat distribution resulting from a given workload distribution and cooling configuration, thereby removing the need for static or manual configuration of thermal load management systems.
Abstract: Recent advances have demonstrated the potential benefits of coordinated management of thermal load in data centers, including reduced cooling costs and improved resistance to cooling system failures. A key unresolved obstacle to the practical implementation of thermal load management is the ability to predict the effects of workload distribution and cooling configurations on temperatures within a data center enclosure. The interactions between workload, cooling and temperature are dependent on complex factors that are unique to each data center, including physical room layout, hardware power consumption and cooling capacity; this dictates an approach that formulates management policies for each data center based on these properties. We propose and evaluate a simple, flexible method to infer a detailed model of thermal behavior within a data center from a stream of instrumentation data. This data - taken during normal data center operation - includes continuous readings taken from external temperature sensors, server instrumentation and computer room air conditioning units. Experimental results from a representative data center show that automatic thermal mapping can predict accurately the heat distribution resulting from a given workload distribution and cooling configuration, thereby removing the need for static or manual configuration of thermal load management systems. We also demonstrate how our approach adapts to preserve accuracy across changes to cluster attributes that affect thermal behavior - such as cooling settings, workload distribution and power consumption.
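
In spirit, the inferred model maps a workload distribution and cooling configuration to predicted temperatures, learned from the instrumentation stream. A linear-regression sketch with synthetic data (a real data center's thermal topology needs richer features than this):

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic training stream: each row = per-rack CPU utilization for 4
    # racks plus one CRAC supply setting; target = inlet temp at rack 2.
    X = rng.uniform(0, 1, size=(500, 5))
    true_w = np.array([0.5, 2.0, 6.0, 1.0, 3.5])       # rack 2 dominates
    y = 18.0 + X @ true_w + rng.normal(0, 0.2, 500)

    A = np.hstack([np.ones((500, 1)), X])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict_inlet_temp(utilizations, crac_supply):
        """Predict rack-2 inlet temperature for a candidate workload
        distribution and cooling configuration before deploying it."""
        x = np.concatenate([utilizations, [crac_supply]])
        return w[0] + w[1:] @ x

    print(predict_inlet_temp(np.array([0.2, 0.9, 0.9, 0.1]), 0.5))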

Book ChapterDOI
17 Sep 2006
TL;DR: In this article, the design of ubiquitous computing systems in the urban environment is considered as integral to urban design, and the authors describe how they have combined scanning for discoverable Bluetooth devices with two such methods, gatecounts and static snapshots.
Abstract: We approach the design of ubiquitous computing systems in the urban environment as integral to urban design. To understand the city as a system encompassing physical and digital forms and their relationships with people's behaviours, we are developing, applying and refining methods of observing, recording, modelling and analysing the city, physically, digitally and socially. We draw on established methods used in the space syntax approach to urban design. Here we describe how we have combined scanning for discoverable Bluetooth devices with two such methods, gatecounts and static snapshots. We report our experiences in developing, field testing and refining these augmented methods. We present initial findings on the Bluetooth landscape in a city in terms of patterns of Bluetooth presence and Bluetooth naming practices.

Journal ArticleDOI
TL;DR: A neural algorithm is proposed, using a Newton-like approach to obtain an optimal solution to the constrained optimization problem; experiments with synthetic signals and real fMRI data demonstrate the efficacy and accuracy of the proposed algorithm.


Patent
17 Nov 2006
TL;DR: In this paper, a web application is analyzed to determine the filtering and acceptance characteristics of the web site, and a vocabulary of allowed symbols is created to be used in the building of attack strings.
Abstract: A web application is more efficiently analyzed by intelligently generating attack sequences to be used in the assessment. Rather than simply firing a canned list of static strings at a web application, the operation of the web application is analyzed to determine the filtering and acceptance characteristics of the web site. As this information is ascertained, a vocabulary of allowed symbols is created. This vocabulary is used in the building of attack strings and, as such, the number of attack strings fired at the web application is greatly reduced, as well as the number of false positives.
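
The pruning step can be sketched as follows: once probing establishes which special symbols the application accepts, any canned payload that depends on a filtered symbol is dropped before it is ever sent. The payloads and allowed set below are illustrative textbook examples, not the patent's mechanism.

    def build_attack_strings(templates, allowed_chars):
        """Keep only payloads whose special characters all survive the
        application's filters; the rest would be rejected upstream."""
        specials = set("<>'\"();=")
        return [t for t in templates
                if set(t) & specials <= allowed_chars]

    # Hypothetical probing result: the app strips '<' and '>'.
    allowed = {"'", '"', "(", ")", ";", "="}
    payloads = ["<script>alert(1)</script>", "' OR '1'='1", "\";alert(1);//"]
    print(build_attack_strings(payloads, allowed))
    # -> the quote-based strings remain; the tag-based string is pruned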

Proceedings ArticleDOI
20 Aug 2006
TL;DR: The problem of bimodal emotion recognition is described and the use of probabilistic graphical models when fusing the different modalities is advocated.
Abstract: Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. However, one necessary ingredient for natural interaction is still missing - emotions. This paper describes the problem of bimodal emotion recognition and advocates the use of probabilistic graphical models when fusing the different modalities. We test our audio-visual emotion recognition approach on 38 subjects with 11 HCI-related affect states. The experimental results show that the average person-dependent emotion recognition accuracy is greatly improved when both visual and audio information are used in classification.
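
The simplest probabilistic fusion consistent with this advocacy treats the modalities as conditionally independent given the affect state (a naive-Bayes combination; the paper's graphical models are richer). All probabilities below are invented.

    import numpy as np

    STATES = ["interest", "boredom", "frustration"]
    prior = np.array([1/3, 1/3, 1/3])

    def fuse(p_audio, p_video, prior=prior):
        """Naive-Bayes-style fusion: combine per-modality posteriors,
        assuming audio and video are independent given the affect state."""
        joint = prior * (p_audio / prior) * (p_video / prior)
        return joint / joint.sum()

    p_audio = np.array([0.5, 0.2, 0.3])   # classifier posterior from speech
    p_video = np.array([0.6, 0.3, 0.1])   # classifier posterior from face
    posterior = fuse(p_audio, p_video)
    print(dict(zip(STATES, posterior.round(3))))  # fused decision is sharper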

Journal ArticleDOI
01 Sep 2006
TL;DR: This paper discusses the different ways in which the middleware can leverage protocol descriptions, and focuses in particular on the notions of protocol compatibility, equivalence, and replaceability.
Abstract: In the area of Web services and service-oriented architectures, business protocols are rapidly gaining importance and mindshare as a necessary part of Web service descriptions. Their immediate benefit is that they provide developers with information on how to write clients that can correctly interact with a given service or with a set of services. In addition, once protocols become an accepted practice and service descriptions become endowed with protocol information, the middleware can be significantly extended to better support service development, binding, and execution in a number of ways, considerably simplifying the whole service life-cycle. This paper discusses the different ways in which the middleware can leverage protocol descriptions, and focuses in particular on the notions of protocol compatibility, equivalence, and replaceability. These characterize whether two services can interact based on their protocol definitions, whether a service can replace another in general or when interacting with specific clients, and what the set of possible interactions between two services is.
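
Once protocols are modeled as state machines over message exchanges, compatibility checks become mechanical: from every reachable joint state, every send one service can make must be receivable by the other. A toy sketch (not the paper's formalism; cycles are handled optimistically):

    def compatible(p, q, sp="s0", sq="s0", seen=None):
        """p, q: {state: {message: next_state}} with '+m' = send and
        '-m' = receive. Compatible if every send has a matching receive
        from every reachable joint state and the pair can progress."""
        seen = seen if seen is not None else set()
        if (sp, sq) in seen:
            return True                       # optimistic on cycles
        seen.add((sp, sq))
        if not p[sp] and not q[sq]:
            return True                       # both conversations completed
        progressed = False
        for msg, nxt in p[sp].items():
            if msg.startswith("+"):           # p sends, q must receive
                if "-" + msg[1:] not in q[sq]:
                    return False
                progressed = True
                if not compatible(p, q, nxt, q[sq]["-" + msg[1:]], seen):
                    return False
        for msg, nxt in q[sq].items():
            if msg.startswith("+"):           # q sends, p must receive
                if "-" + msg[1:] not in p[sp]:
                    return False
                progressed = True
                if not compatible(p, q, p[sp]["-" + msg[1:]], nxt, seen):
                    return False
        return progressed                     # neither can send: deadlock

    # Toy order protocol: the client's sends mirror the shop's receives.
    client = {"s0": {"+order": "s1"}, "s1": {"-invoice": "s2"}, "s2": {}}
    shop   = {"s0": {"-order": "s1"}, "s1": {"+invoice": "s2"}, "s2": {}}
    print(compatible(client, shop))           # True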

Proceedings ArticleDOI
05 Jul 2006
TL;DR: In this paper, a data center environmental control system that utilizes a distributed sensor network to manipulate conventional computer room air conditioning (CRAC) units within an air-cooled environment is presented.
Abstract: Increases in server power dissipation have placed significant pressure on traditional data center thermal management systems. Traditional systems utilize computer room air conditioning (CRAC) units to pressurize a raised floor plenum with cool air that is passed to equipment racks via ventilation tiles distributed throughout the raised floor. Temperature is typically controlled at the hot air return of the CRAC units away from the equipment racks. Due primarily to a lack of distributed environmental sensing, these CRAC systems are often operated conservatively resulting in reduced computational density and added operational expense. This paper introduces a data center environmental control system that utilizes a distributed sensor network to manipulate conventional CRAC units within an air-cooled environment. The sensor network is attached to standard racks and provides a direct measurement of the environment in close proximity to the computational resources. A calibration routine is used to characterize the response of each sensor in the network to individual CRAC actuators. A cascaded control algorithm is used to evaluate the data from the sensor network and manipulate supply air temperature and flow rate from individual CRACs to ensure thermal management while reducing operational expense. The combined controller and sensor network has been deployed in a production data center environment. Results from the algorithm will be presented that demonstrate the performance of the system and evaluate the energy savings compared with conventional data center environmental control architecture.
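
The cascaded scheme works as follows: an outer loop keeps the hottest rack-inlet sensor at its setpoint by adjusting the CRAC supply-air setpoint, which the CRAC's existing inner loop then tracks. A schematic of the outer loop (gains, limits, and temperatures are invented):

    def outer_loop(sensor_temps, supply_setpoint, target=25.0, k=0.5,
                   lo=12.0, hi=22.0):
        """One step of the cascaded controller's outer loop: nudge the CRAC
        supply-air setpoint so the hottest rack inlet converges to `target`.
        The CRAC's internal (inner) loop then tracks the new setpoint."""
        error = max(sensor_temps) - target       # worst-case rack inlet
        new_setpoint = supply_setpoint - k * error
        return min(hi, max(lo, new_setpoint))    # respect CRAC limits

    setpoint = 18.0
    for temps in [[24.1, 26.3, 25.2], [24.0, 25.6, 25.0], [23.8, 25.1, 24.7]]:
        setpoint = outer_loop(temps, setpoint)
        print(round(setpoint, 2))   # supply air cools until inlets settle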

Journal ArticleDOI
TL;DR: OurGrid is an open, free-to-join, cooperative Grid in which labs donate their idle computational resources in exchange for accessing other labs’ idle resources when needed, and employs a novel application scheduling technique that demands very little information.
Abstract: eScience is rapidly changing the way we do research. As a result, many research labs now need non-trivial computational power. Grid and voluntary computing are well-established solutions for this need. However, not all labs can effectively benefit from these technologies. In particular, small and medium research labs (which are the majority of the labs in the world) have a hard time using these technologies as they demand high-visibility projects and/or highly qualified computer personnel. This paper describes OurGrid, a system designed to fill this gap. OurGrid is an open, free-to-join, cooperative Grid in which labs donate their idle computational resources in exchange for accessing other labs’ idle resources when needed. It relies on an incentive mechanism that makes it in the best interest of participants to collaborate with the system, employs a novel application scheduling technique that demands very little information, and uses virtual machines to isolate applications and thus provide security. The vision is that OurGrid enables labs to combine their resources in a massive worldwide computing platform. OurGrid has been in production since December 2004. Any lab can join it by downloading its software from http://www.ourgrid.org .
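
The low-information scheduler described in OurGrid's companion publications is Workqueue with Replication (WQR): hand tasks to idle machines knowing nothing about task sizes or machine speeds, and once the bag empties, replicate unfinished tasks, letting the first copy win. A simplified event-driven sketch (real WQR bounds the replication degree and cancels losing replicas):

    import heapq

    def wqr_simulate(task_costs, machine_speeds):
        """Event-driven sketch of Workqueue with Replication: machines pull
        tasks blindly; when the bag is empty, an idle machine replicates an
        arbitrary unfinished task, and the first finished copy wins."""
        bag = list(range(len(task_costs)))         # unscheduled task ids
        unfinished = set(bag)
        events = []                                # (finish_time, machine, task)
        for m, speed in enumerate(machine_speeds): # initial pulls at t = 0
            if bag:
                t = bag.pop()
                heapq.heappush(events, (task_costs[t] / speed, m, t))
        now = 0.0
        while unfinished:
            now, m, t = heapq.heappop(events)
            unfinished.discard(t)                  # first copy wins
            nxt = bag.pop() if bag else next(iter(unfinished), None)
            if nxt is not None:                    # pull more work or replicate
                speed = machine_speeds[m]
                heapq.heappush(events, (now + task_costs[nxt] / speed, m, nxt))
        return now

    # Four equal tasks, two fast machines and one slow one: replication
    # rescues the task stranded on the slow machine (makespan 12, not 20).
    print(wqr_simulate(task_costs=[4, 4, 4, 4], machine_speeds=[1.0, 1.0, 0.2]))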

Proceedings ArticleDOI
Jeffrey C. Mogul
18 Apr 2006
TL;DR: Unpredictable software systems are hard to debug and hard to manage, so better tools and methods for anticipating, detecting, diagnosing, and ameliorating emergent misbehavior are needed.
Abstract: Complex systems often behave in unexpected ways that are not easily predictable from the behavior of their components; this is known as emergent behavior. As software systems grow in complexity, interconnectedness, and geographic distribution, we will increasingly face unwanted emergent behavior. Unpredictable software systems are hard to debug and hard to manage. We need better tools and methods for anticipating, detecting, diagnosing, and ameliorating emergent misbehavior. These tools and methods will require research into the causes and nature of emergent misbehavior in software systems.