
Showing papers published by "Hewlett-Packard" in 2010


Proceedings ArticleDOI
31 Aug 2010
TL;DR: It is shown that a simple model built from the rate at which tweets are created about particular topics can outperform market-based predictors, and that sentiments extracted from Twitter can further improve the forecasting power of social media.
Abstract: In recent years, social media has become ubiquitous and important for social networking and content sharing. And yet, the content that is generated from these websites remains largely untapped. In this paper, we demonstrate how social media content can be used to predict real-world outcomes. In particular, we use the chatter from Twitter.com to forecast box-office revenues for movies. We show that a simple model built from the rate at which tweets are created about particular topics can outperform market-based predictors. We further demonstrate how sentiments extracted from Twitter can be utilized to improve the forecasting power of social media.
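The paper's core predictor is essentially a regression of box-office revenue on the rate of tweet creation. As a rough illustration only (not the authors' data, coefficients, or exact feature set), ordinary least squares on made-up (tweet-rate, revenue) pairs looks like this:

```python
# Illustrative sketch: fit revenue against average tweet rate by
# ordinary least squares. All numbers below are invented.

def fit_line(xs, ys):
    """Slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical samples: tweets per hour vs. opening revenue in $M.
tweet_rate = [120.0, 300.0, 950.0, 1500.0]
revenue = [5.0, 11.0, 33.0, 50.0]

a, b = fit_line(tweet_rate, revenue)
predicted = a * 700.0 + b  # forecast for a movie tweeted about 700 times/hour
```

The abstract additionally notes that sentiment extracted from the tweets improves on this rate-only forecast.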

1,909 citations


Journal ArticleDOI
08 Apr 2010-Nature
TL;DR: Bipolar voltage-actuated switches, a family of nonlinear dynamical memory devices, can execute material implication (IMP), which is a fundamental Boolean logic operation on two variables p and q such that pIMPq is equivalent to (NOTp)ORq.
Abstract: The authors of the International Technology Roadmap for Semiconductors-the industry consensus set of goals established for advancing silicon integrated circuit technology-have challenged the computing research community to find new physical state variables (other than charge or voltage), new devices, and new architectures that offer memory and logic functions beyond those available with standard transistors. Recently, ultra-dense resistive memory arrays built from various two-terminal semiconductor or insulator thin film devices have been demonstrated. Among these, bipolar voltage-actuated switches have been identified as physical realizations of 'memristors' or memristive devices, combining the electrical properties of a memory element and a resistor. Such devices were first hypothesized by Chua in 1971 (ref. 15), and are characterized by one or more state variables that define the resistance of the switch depending upon its voltage history. Here we show that this family of nonlinear dynamical memory devices can also be used for logic operations: we demonstrate that they can execute material implication (IMP), which is a fundamental Boolean logic operation on two variables p and q such that pIMPq is equivalent to (NOTp)ORq. Incorporated within an appropriate circuit, memristive switches can thus perform 'stateful' logic operations for which the same devices serve simultaneously as gates (logic) and latches (memory) that use resistance instead of voltage or charge as the physical state variable.
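Material implication itself is easy to check in software; the novelty is that memristive switches compute it in place, latching the result as the resistance state of the q device. A minimal truth-functional sketch (ours, not the paper's circuit model):

```python
# Sketch of 'stateful' IMP logic. The real devices store each bit as a
# resistance; here a dict of booleans stands in for the switches.

def imp(p, q):
    """Material implication: p IMP q == (NOT p) OR q."""
    return (not p) or q

def stateful_imp(state, p_key, q_key):
    """Latch the result back into the q switch, which serves as both
    gate and latch in stateful logic."""
    state[q_key] = imp(state[p_key], state[q_key])

def nand(p, q):
    """NAND from IMP plus a FALSE (clear) operation:
    s <- 0; s <- q IMP s; s <- p IMP s  yields  s == NOT (p AND q)."""
    state = {"p": p, "q": q, "s": False}
    stateful_imp(state, "q", "s")
    stateful_imp(state, "p", "s")
    return state["s"]
```

Since NAND is functionally complete, IMP together with FALSE suffices to build any Boolean function.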

1,642 citations


Proceedings ArticleDOI
28 Apr 2010
TL;DR: This work presents ElasticTree, a network-wide power manager, which dynamically adjusts the set of active network elements -- links and switches -- to satisfy changing data center traffic loads, and demonstrates that for data center workloads, ElasticTree can save up to 50% of network energy, while maintaining the ability to handle traffic surges.
Abstract: Networks are a shared resource connecting critical IT infrastructure, and the general practice is to always leave them on. Yet, meaningful energy savings can result from improving a network's ability to scale up and down, as traffic demands ebb and flow. We present ElasticTree, a network-wide power manager, which dynamically adjusts the set of active network elements -- links and switches -- to satisfy changing data center traffic loads. We first compare multiple strategies for finding minimum-power network subsets across a range of traffic patterns. We implement and analyze ElasticTree on a prototype testbed built with production OpenFlow switches from three network vendors. Further, we examine the trade-offs between energy efficiency, performance and robustness, with real traces from a production e-commerce website. Our results demonstrate that for data center workloads, ElasticTree can save up to 50% of network energy, while maintaining the ability to handle traffic surges. Our fast heuristic for computing network subsets enables ElasticTree to scale to data centers containing thousands of nodes. We finish by showing how a network admin might configure ElasticTree to satisfy their needs for performance and fault tolerance, while minimizing their network power bill.

1,019 citations


Journal ArticleDOI
TL;DR: In this article, a method for predicting the long-term popularity of online content from early measurements of user access is presented and demonstrated on two content-sharing portals, YouTube and Digg, by modeling the accrual of views and votes on content offered by these services.
Abstract: We present a method for accurately predicting the long-term popularity of online content from early measurements of users' access. Using two content sharing portals, YouTube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of YouTube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while YouTube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.
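One way to read the method: on a logarithmic scale, later popularity tracks early popularity up to a roughly constant offset, so a single scale factor fitted on past submissions can extrapolate a new one. A toy sketch under that assumption, with made-up counts rather than the paper's data:

```python
import math

# Fit the average log-ratio between late and early counts on training
# items, then apply it to a new item's early count.

def fit_log_scale(early, late):
    """Least-squares offset b in: log N_late = log N_early + b."""
    return sum(math.log(l / e) for e, l in zip(early, late)) / len(early)

early = [100, 250, 40, 900]        # e.g. votes in the first two hours
late = [1300, 3100, 520, 11000]    # e.g. votes after 30 days

b = fit_log_scale(early, late)
estimate = math.exp(math.log(200) + b)  # forecast for an item at 200 early votes
```

The abstract's caveat applies directly here: the constant-offset assumption holds best when attention decays on a predictable schedule, and breaks down for evergreen content.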

910 citations


Journal ArticleDOI
TL;DR: In this paper, a non-periodic pattern of the grating surface is proposed to give full control over the phase front of reflected light while maintaining a high reflectivity, which could have a substantial impact on a number of applications that depend on low-cost, compact optical components.
Abstract: Sub-wavelength dielectric gratings have emerged recently as a promising alternative to distributed Bragg reflection dielectric stacks for broadband, high-reflectivity filtering applications. Such a grating structure composed of a single dielectric layer with the appropriate patterning can sometimes perform as well as 30 or 40 dielectric distributed Bragg reflection layers, while providing new functionalities such as polarization control and near-field amplification. In this Letter, we introduce an interesting property of grating mirrors that cannot be realized by their distributed Bragg reflection counterpart: we show that a non-periodic patterning of the grating surface can give full control over the phase front of reflected light while maintaining a high reflectivity. This new feature of dielectric gratings allows the creation of miniature planar focusing elements that could have a substantial impact on a number of applications that depend on low-cost, compact optical components, from laser cavities to CD/DVD read/write heads.

561 citations


Proceedings ArticleDOI
30 Nov 2010
TL;DR: This paper assesses how security, trust and privacy issues occur in the context of cloud computing and discusses ways in which they may be addressed.
Abstract: Cloud computing is an emerging paradigm for large scale infrastructures. It has the advantage of reducing cost by sharing computing and storage resources, combined with an on-demand provisioning mechanism relying on a pay-per-use business model. These new features have a direct impact on IT budgeting but also affect traditional security, trust and privacy mechanisms. Many of these mechanisms are no longer adequate, but need to be rethought to fit this new paradigm. In this paper we assess how security, trust and privacy issues occur in the context of cloud computing and discuss ways in which they may be addressed.

530 citations


Journal ArticleDOI
TL;DR: It is shown by experiment that all but one of these computation methods lead to biased measurements, especially under high class imbalance; this is of particular interest to those designing machine learning software libraries and to researchers focused on high class imbalance.
Abstract: Cross-validation is a mainstay for measuring performance and progress in machine learning. There are subtle differences in how exactly to compute accuracy, F-measure and Area Under the ROC Curve (AUC) in cross-validation studies. However, these details are not discussed in the literature, and incompatible methods are used by various papers and software packages. This leads to inconsistency across the research literature. Anomalies in performance calculations for particular folds and situations go undiscovered when they are buried in aggregated results over many folds and datasets, without ever a person looking at the intermediate performance measurements. This research note clarifies and illustrates the differences, and it provides guidance for how best to measure classification performance under cross-validation. In particular, there are several divergent methods used for computing F-measure, which is often recommended as a performance measure under class imbalance, e.g., for text classification domains and in one-vs.-all reductions of datasets having many classes. We show by experiment that all but one of these computation methods leads to biased measurements, especially under high class imbalance. This paper is of particular interest to those designing machine learning software libraries and researchers focused on high class imbalance.
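The distinction the note draws can be shown in a few lines: averaging per-fold F-measure versus computing F-measure once from the confusion-matrix counts pooled over folds. With a rare positive class, a fold that catches no positives drags the average down, while the pooled figure is unaffected. The counts below are invented:

```python
# Per-fold averaged F1 vs. F1 on pooled counts across CV folds.

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# (tp, fp, fn) for each fold; the middle fold caught no positives.
folds = [(8, 2, 2), (0, 1, 10), (9, 3, 1)]

f_avg = sum(f1(*f) for f in folds) / len(folds)   # average of per-fold F1
tp, fp, fn = (sum(c) for c in zip(*folds))
f_pool = f1(tp, fp, fn)                           # F1 on pooled counts
```

The paper's experiments identify pooling the counts over folds as the computation that avoids bias under high class imbalance.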

367 citations


Patent
12 Oct 2010
TL;DR: A DC-DC converter (110, 204, 310) is provided between one of multiple power sources and the load (106, 208, 306) that they power in an electronic device.
Abstract: An electronic device includes multiple power sources (102, 104, 202, 206, 302, 304) that can provide power to a load (106, 208, 306) in the electronic device. A DC-DC converter (110, 204, 310) is provided between one of the multiple power sources and the load.

346 citations


Journal ArticleDOI
TL;DR: This work probes within a functioning TiO2 memristor using synchrotron-based x-ray absorption spectromicroscopy and transmission electron microscopy (TEM), and observes that electroforming of the device generated an ordered Ti4O7 Magnéli phase within the initially deposited TiO2 matrix.
Abstract: Structures composed of transition metal oxides can display a rich variety of electronic and magnetic properties including superconductivity, multiferroic behavior, and colossal magnetoresistance.[1] An additional property of technological relevance is the bipolar resistance switching phenomenon[2-4] seen in many perovskites[5-7] and binary oxides[8] when arranged in metal/insulator/metal (MIM) structures. These devices exhibit electrically driven switching of the resistance by 1000x or greater and have recently been identified[9] as memristive systems, the fourth fundamental passive circuit element.[10,11] A full understanding of the atomic-scale mechanism and identification of the material changes within the oxide remains an important goal.[12] Here, we probe within a functioning TiO2 memristor using synchrotron-based x-ray absorption spectromicroscopy and transmission electron microscopy (TEM). We observed that electroforming of the device generated an ordered Ti4O7 Magnéli phase within the initially deposited TiO2 matrix. In a memristive system,[11] the flow of charge dynamically changes the material conductivity, which is "remembered" even with the removal of bias. While bipolar resistance switching of metal oxides has been observed since the 1960s,[2,4] only recently has the connection to the analytical theory of the memristor been made.[9] In an attempt to describe microscopically the source of the resistance change, many physical models have been put forth, including generation and dissolution of conductive channels,[3,6] electronic trapping and space-charge current limiting effects,[13] strongly correlated electron effects such as a metal-insulator transition,[14] and changes localized to the interface.[15] Identifying the correct model and quantifying its physical parameters has been difficult using primarily electrical characterization. Meanwhile, direct physical characterization[7]

333 citations


Proceedings ArticleDOI
26 Apr 2010
TL;DR: It is shown that stochastic models of user behavior on these sites allow predicting popularity based on early user reactions to new content, and that incorporating aspects of the web site design improves on predictions based on simply extrapolating from the early votes.
Abstract: Popularity of content in social media is unequally distributed, with some items receiving a disproportionate share of attention from users. Predicting which newly-submitted items will become popular is critically important for both companies that host social media sites and their users. Accurate and timely prediction would enable the companies to maximize revenue through differential pricing for access to content or ad placement. Prediction would also give consumers an important tool for filtering the ever-growing amount of content. Predicting popularity of content in social media, however, is challenging due to the complex interactions among content quality, how the social media site chooses to highlight content, and influence among users. While these factors make it difficult to predict popularity a priori, we show that stochastic models of user behavior on these sites allow predicting popularity based on early user reactions to new content. By incorporating aspects of the web site design, such models improve on predictions based on simply extrapolating from the early votes. We validate this claim on the social news portal Digg using a previously-developed model of social voting based on the Digg user interface.

330 citations


Journal ArticleDOI
TL;DR: The waterfall model is considered before the other models because it has had a profound effect on software development, and has additionally influenced many SDLC models prevalent today.
Abstract: This history column article provides a tour of the main software development life cycle (SDLC) models. (A lifecycle covers all the stages of software from its inception with requirements definition through to fielding and maintenance.) System development lifecycle models have drawn heavily on software and so the two terms can be used interchangeably in terms of SDLC, especially since software development in this respect encompasses software systems development. Because the merits of selecting and using an SDLC vary according to the environment in which software is developed as well as its application, I discuss three broad categories for consideration when analyzing the relative merits of SDLC models. I consider the waterfall model before the other models because it has had a profound effect on software development, and has additionally influenced many SDLC models prevalent today. Thereafter, I consider some of the mainstream models and finish with a discussion of what the future could hold for SDLC models.

Journal ArticleDOI
TL;DR: A single hybrid transistor can replace presently utilized complex and energy-consuming electronic circuits to emulate the synapse for spike signal processing, learning, and memory, which could provide a new pathway to construct neuromorphic circuits approaching the scale and functions of the brain.
Abstract: Signal processing, memory, and learning functions are established in the human brain by modifying ionic fluxes in neurons and synapses. Through a synapse, a potential spike signal in a presynaptic neuron can trigger an ionic excitatory postsynaptic current (EPSC) or inhibitory postsynaptic current (IPSC) that temporally lasts for 1–10 ms in a postsynaptic neuron. This enables the postsynaptic neuron to collectively process the EPSC or IPSC through 10–10 synapses to establish spatial and temporal correlated functions. The synaptic transmission efficacy can be modified by temporally correlated pre- and post-synaptic spikes via spike-timing-dependent plasticity (STDP). For example, if a postsynaptic spike is triggered momentarily after a presynaptic spike by a few milliseconds, the synaptic efficacy will be increased, resulting in long-term potentiation (LTP), but if the temporal order is reversed, the synaptic efficacy will be decreased, resulting in long-term depression (LTD). The synaptic efficacy can also be modified with reversed polarities in STDP in different types of synapses. STDP is essential to modify synapses in a neural network for learning and memory functions of the brain. Electronic materials, devices, and circuits have been explored extensively to emulate synapses, but to date they have not been able to match the synaptic functions in the brain. Synaptic transistors with nonvolatile analog memory were fabricated by integrating charge-storage or ferroelectric materials onto the gate structure of Si metal-oxide-semiconductor (MOS) transistors, but these devices cannot emulate the essential synaptic dynamic functions such as EPSC/IPSC or STDP. Electronic neuromorphic circuits have been designed and fabricated to supply EPSC/IPSC and STDP, but these nonlinear dynamic analog circuits require many transistors and several capacitors to emulate a single synapse.
The large capacitor size, complex architecture, and energy consumption of these synaptic circuits limited the number of synapses that could be integrated onto a single chip to about 10–10. The lack of a small, cheap device with the essential synaptic dynamic properties for signal processing, learning, and memory prohibits the circuits from approaching the scale and functions of the human brain that contains 10 synapses. We have designed and fabricated a synaptic transistor based on ionic/electronic hybrid materials by integrating a layer of ionic conductor and a layer of ion-doped conjugated polymer onto the gate of a Si-based transistor. In analogy to the synapse, a potential spike can trigger ionic fluxes with a temporal lapse of a few milliseconds in the polymer, which in turn spontaneously generates EPSC in the Si layer. Temporally correlated pre- and post-synaptic spikes can modify ions stored in the polymer, resulting in a nonvolatile strengthening or weakening of the device transmission efficacy with STDP. A single hybrid transistor can replace presently utilized complex and energy-consuming electronic circuits to emulate the synapse for spike signal processing, learning, and memory, which could provide a new pathway to construct neuromorphic circuits approaching the scale and functions of the brain. The synaptic transistor has a Si n-p-n source-channel-drain structure of a conventional MOS transistor, with the Si channel covered by a 3-nm-thick SiO2 insulating layer (Fig. 1a). A 70-nm-thick conjugated polymer layer of poly[2-methoxy-5-(2′-ethylhexyloxy)-p-phenylene vinylene] (MEH-PPV) and a 70-nm-thick ionic conductive layer of RbAg4I5 were sandwiched between the gate SiO2 insulator and an Al/Ti electrode. To emulate synaptic functions, presynaptic spikes were applied to the transistor gate, and postsynaptic currents, I, were measured from the source. Postsynaptic spikes were also applied to the source.
A spike was composed of a 1 ms-wide positive voltage pulse with an amplitude V+ = 3–5 V immediately followed by a 1 ms-wide negative voltage pulse with an amplitude V− = −3 to −5 V (Fig. 1a, Inset). After the spike, the transistor was operated at its rest state under a subthreshold condition by setting the gate voltage Vg = 0 V. A drain voltage Vd = 0.1 V was applied continuously. When a presynaptic spike with amplitudes of V+/V− = 4 V/−5 V was applied to the transistor gate, the typical I is

Proceedings Article
23 Aug 2010
TL;DR: The problem is formulated as a bipartite graph and the well-known web page ranking algorithm HITS is used to find important features and rank them high and demonstrates promising results on diverse real-life datasets.
Abstract: An important task of opinion mining is to extract people's opinions on features of an entity. For example, the sentence, "I love the GPS function of Motorola Droid" expresses a positive opinion on the "GPS function" of the Motorola phone. "GPS function" is the feature. This paper focuses on mining features. Double propagation is a state-of-the-art technique for solving the problem. It works well for medium-size corpora. However, for large and small corpora, it can result in low precision and low recall. To deal with these two problems, two improvements based on part-whole and "no" patterns are introduced to increase the recall. Then feature ranking is applied to the extracted feature candidates to improve the precision of the top-ranked candidates. We rank feature candidates by feature importance which is determined by two factors: feature relevance and feature frequency. The problem is formulated as a bipartite graph and the well-known web page ranking algorithm HITS is used to find important features and rank them high. Experiments on diverse real-life datasets show promising results.
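The ranking step is plain HITS on the bipartite graph: feature candidates act as authorities and the opinion words that modify them act as hubs, so a feature endorsed by many strong opinion words rises. A small sketch with invented data (the paper additionally combines this relevance score with feature frequency, which is omitted here):

```python
# HITS power iteration on a bipartite (feature, opinion-word) graph.

edges = [("gps", "love"), ("gps", "accurate"),
         ("screen", "love"), ("battery", "short")]

auth = {f: 1.0 for f, _ in edges}   # candidate features (authorities)
hub = {o: 1.0 for _, o in edges}    # opinion words (hubs)

for _ in range(20):
    auth = {f: sum(hub[o] for g, o in edges if g == f) for f in auth}
    hub = {o: sum(auth[f] for f, p in edges if p == o) for o in hub}
    s = sum(auth.values())
    auth = {f: v / s for f, v in auth.items()}
    s = sum(hub.values())
    hub = {o: v / s for o, v in hub.items()}

ranked = sorted(auth, key=auth.get, reverse=True)  # "gps" ranks first here
```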

Proceedings ArticleDOI
19 Jun 2010
TL;DR: This paper examines three primary innovations in DRAM chip microarchitecture that lead to a dramatic reduction in the energy and storage overheads for reliability; the third further penalizes the cost-per-bit metric by adding a checksum feature to each cache line.
Abstract: DRAM vendors have traditionally optimized the cost-per-bit metric, often making design decisions that incur energy penalties. A prime example is the overfetch feature in DRAM, where a single request activates thousands of bit-lines in many DRAM chips, only to return a single cache line to the CPU. The focus on cost-per-bit is questionable in modern-day servers where operating costs can easily exceed the purchase cost. Modern technology trends are also placing very different demands on the memory system: (i) queuing delays are a significant component of memory access time, (ii) there is a high energy premium for the level of reliability expected for business-critical computing, and (iii) the memory access stream emerging from multi-core systems exhibits limited locality. All of these trends necessitate an overhaul of DRAM architecture, even if it means a slight compromise in the cost-per-bit metric. This paper examines three primary innovations. The first is a modification to DRAM chip microarchitecture that retains the traditional DDRx SDRAM interface. Selective Bit-line Activation (SBA) waits for both RAS (row address) and CAS (column address) signals to arrive before activating exactly those bitlines that provide the requested cache line. SBA reduces energy consumption while incurring slight area and performance penalties. The second innovation, Single Subarray Access (SSA), fundamentally re-organizes the layout of DRAM arrays and the mapping of data to these arrays so that an entire cache line is fetched from a single subarray. It requires a different interface to the memory controller, reduces dynamic and background energy (by about 6X), incurs a slight area penalty (4%), and can even lead to performance improvements (54% on average) by reducing queuing delays. The third innovation further penalizes the cost-per-bit metric by adding a checksum feature to each cache line.
This checksum error-detection feature can then be used to build stronger RAID-like fault tolerance, including chipkill-level reliability. Such a technique is especially crucial for the SSA architecture where the entire cache line is localized to a single chip. This DRAM chip microarchitectural change leads to a dramatic reduction in the energy and storage overheads for reliability. The proposed architectures will also apply to other emerging memory technologies (such as resistive memories) and will be less disruptive to standards, interfaces, and the design flow if they can be incorporated into first-generation designs.

Proceedings ArticleDOI
04 Oct 2010
TL;DR: The algorithm, mClock, supports proportional-share fairness subject to minimum reservations and maximum limits on the IO allocations for VMs and indicates that these rich QoS controls are quite effective in isolating VM performance and providing better application latency.
Abstract: Virtualized servers run a diverse set of virtual machines (VMs), ranging from interactive desktops to test and development environments and even batch workloads. Hypervisors are responsible for multiplexing the underlying hardware resources among VMs while providing them the desired degree of isolation using resource management controls. Existing methods provide many knobs for allocating CPU and memory to VMs, but support for control of IO resource allocation has been quite limited. IO resource management in a hypervisor introduces significant new challenges and needs more extensive controls than in commodity operating systems. This paper introduces a novel algorithm for IO resource allocation in a hypervisor. Our algorithm, mClock, supports proportional-share fairness subject to minimum reservations and maximum limits on the IO allocations for VMs. We present the design of mClock and a prototype implementation inside the VMware ESX server hypervisor. Our results indicate that these rich QoS controls are quite effective in isolating VM performance and providing better application latency. We also show an adaptation of mClock (called dmClock) for a distributed storage environment, where storage is jointly provided by multiple nodes.
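mClock itself is tag-based and schedules individual requests; a much simpler static sketch still shows the contract of the three controls: IOPS shares proportional to weights, then clamped to each VM's reservation (floor) and limit (cap), with freed capacity redistributed. This is our simplification for illustration, not the authors' algorithm, and the VM names and numbers are invented:

```python
# Static proportional allocation with reservations and limits.
# vms maps name -> (weight, reservation, limit), all in IOPS.

def allocate(total, vms):
    alloc = {}
    active = dict(vms)
    while active:
        wsum = sum(w for w, _, _ in active.values())
        share = {n: total * w / wsum for n, (w, _, _) in active.items()}
        # Pin every VM whose proportional share violates its bounds.
        pinned = {n: max(r, min(l, share[n]))
                  for n, (_, r, l) in active.items()
                  if not r <= share[n] <= l}
        if not pinned:
            alloc.update(share)
            break
        alloc.update(pinned)
        total -= sum(pinned.values())
        for n in pinned:
            del active[n]
    return alloc

alloc = allocate(1000, {"desktop": (1, 300, 1000),
                        "backup": (1, 0, 200),
                        "batch": (2, 0, 1000)})
```

Here the desktop VM's 300-IOPS reservation and the backup VM's 200-IOPS limit both bind, and the batch VM absorbs the freed capacity.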

Patent
17 Nov 2010
TL;DR: In this article, a portable power supply device for a mobile computing device is provided, which comprises a retention structure to retain the mobile computing devices, a power source, and an inductive signal interface.
Abstract: A portable power supply device for a mobile computing device is provided. The portable power supply device comprises a retention structure to retain the mobile computing device, a power source, and an inductive signal interface. The inductive signal interface is used to inductively signal power from the power source to a corresponding inductive signal interface of the mobile computing device.

Proceedings ArticleDOI
06 Jun 2010
TL;DR: This paper characterizes the power-use profiles of database operators under different configuration parameters, and finds that within a single node intended for use in scale-out (shared-nothing) architectures, the most energy-efficient configuration is typically the highest performing one.
Abstract: Rising energy costs in large data centers are driving an agenda for energy-efficient computing. In this paper, we focus on the role of database software in affecting, and, ultimately, improving the energy efficiency of a server. We first characterize the power-use profiles of database operators under different configuration parameters. We find that common database operations can exercise the full dynamic power range of a server, and that the CPU power consumption of different operators, for the same CPU utilization, can differ by as much as 60%. We also find that for these operations CPU power does not vary linearly with CPU utilization. We then experiment with several classes of database systems and storage managers, varying parameters that span from different query plans to compression algorithms and from physical layout to CPU frequency and operating system scheduling. Contrary to what recent work has suggested, we find that within a single node intended for use in scale-out (shared-nothing) architectures, the most energy-efficient configuration is typically the highest performing one. We explain under which circumstances this is not the case, and argue that these circumstances do not warrant a retargeting of database system optimization goals. Further, our results reveal opportunities for cross-node energy optimizations and point out directions for new scale-out architectures.

Journal ArticleDOI
TL;DR: In this paper, the spectral properties of the charge-converted nitrogen-vacancy centers are investigated and the charge state control of nitrogen-vacancy centers close to the diamond surface is discussed.
Abstract: The conversion of neutral nitrogen-vacancy centers to negatively charged nitrogen-vacancy centers is demonstrated for centers created by ion implantation and annealing in high-purity diamond. Conversion occurs with surface exposure to an oxygen atmosphere at 465 °C. The spectral properties of the charge-converted centers are investigated. Charge state control of nitrogen-vacancy centers close to the diamond surface is an important step toward the integration of these centers into devices for quantum information and magnetic sensing applications.

Book ChapterDOI
23 Apr 2010
TL;DR: The Dynamic Priority (DP) parallel task scheduler for Hadoop allows users to control their allocated capacity by adjusting their spending over time and enforces service levels more accurately and also scales to more users with distinct service levels than existing schedulers.
Abstract: We present the Dynamic Priority (DP) parallel task scheduler for Hadoop. It allows users to control their allocated capacity by adjusting their spending over time. This simple mechanism allows the scheduler to make more efficient decisions about which jobs and users to prioritize and gives users the tool to optimize and customize their allocations to fit the importance and requirements of their jobs. Additionally, it gives users the incentive to scale back their jobs when demand is high, since the cost of running on a slot is then also more expensive. We envision our scheduler to be used by deadline or budget optimizing agents on behalf of users. We describe the design and implementation of the DP scheduler and experimental results. We show that our scheduler enforces service levels more accurately and also scales to more users with distinct service levels than existing schedulers.
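The mechanism's core, as we read it from the abstract, is proportional division of slot capacity by spending, with the effective per-slot price rising as aggregate spending grows. A sketch of that rule with invented users and numbers:

```python
# Capacity proportional to spending; the effective price of a slot
# rises with total demand, which is the incentive to scale back.

def allocate(slots, spending):
    total = sum(spending.values())
    price_per_slot = total / slots
    shares = {user: slots * s / total for user, s in spending.items()}
    return shares, price_per_slot

shares, price = allocate(100, {"alice": 2.0, "bob": 6.0, "carol": 2.0})
# bob contributes 60% of total spending, so he receives 60 of 100 slots
```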

Journal ArticleDOI
TL;DR: A molecular trap structure that can be formed to capture analyte molecules in solution for detection and identification based on surface enhanced Raman spectroscopy (SERS) based on gold-coated nanoscale polymer fingers made by nanoimprinting technique is demonstrated.
Abstract: Here we demonstrate a molecular trap structure that can be formed to capture analyte molecules in solution for detection and identification. The structure is based on gold-coated nanoscale polymer fingers made by a nanoimprinting technique. The nanofingers are flexible and their tips can be brought together to trap molecules, while at the same time the gold-coated fingertips form a reliable Raman hot spot for molecule detection and identification based on surface enhanced Raman spectroscopy (SERS). The molecule self-limiting gap size control between fingertips ensures ultimate SERS enhancement for sensitive molecule detection. Furthermore, these types of structures, resulting from top-down meeting self-assembly, can be generalized for other applications, such as plasmonics, meta-materials, and other nanophotonic systems.

Journal ArticleDOI
TL;DR: The results show that the Minnesota functionals, M05, M06, and M06-L give the best performance for the two diverse databases, which suggests that they deserve more attention for applications to catalysis.
Abstract: Thirty four density functional approximations are tested against two diverse databases, one with 18 bond energies and one with 24 barriers. These two databases are chosen to include bond energies and barrier heights which are relevant to catalysis, and in particular the bond energy database includes metal-metal bonds, metal-ligand bonds, alkyl bond dissociation energies, and atomization energies of small main group molecules. Two revised versions of the Perdew–Burke–Ernzerhof (PBE) functional, namely the RPBE and revPBE functionals, widely used for catalysis, do improve the performance of PBE against the two diverse databases, but give worse results than B3LYP (which denotes the combination of Becke's 3-parameter hybrid treatment with Lee–Yang–Parr correlation functional). Our results show that the Minnesota functionals, M05, M06, and M06-L give the best performance for the two diverse databases, which suggests that they deserve more attention for applications to catalysis. We also obtain notably good performance with the τ-HCTHhyb, ωB97X-D, and MOHLYP functional (where MOHLYP denotes the combination of the OptX exchange functional as modified by Schultz, Zhao, and Truhlar with half of the LYP correlation functional).

Book ChapterDOI
18 Oct 2010
TL;DR: A model of IT organization, a methodology for deriving it - based both on ethnography and data mining - and a suite of tools for representing and visualizing the model are presented, to help design changes to bring the organization from its current (AS-IS) state to a desired (TO-BE) state.
Abstract: Understanding IT organization is essential for ensuring a successful transformation phase in IT outsourcing deals. We present a model of IT organization, a methodology for deriving it - based both on ethnography and data mining - and a suite of tools for representing and visualizing the model and for helping design changes that bring the organization from its current (AS-IS) state to a desired (TO-BE) state, along with tools for comparing models of organizations based on qualities and characteristics that are expected to have a bearing on the success of the IT transformation step.

Proceedings ArticleDOI
28 Apr 2010
TL;DR: SPAIN ("Smart Path Assignment In Networks") provides multipath forwarding using inexpensive, commodity off-the-shelf (COTS) Ethernet switches, over arbitrary topologies, and is demonstrated to improve bisection bandwidth over both simulated and experimental data-center networks.
Abstract: Operators of data centers want a scalable network fabric that supports high bisection bandwidth and host mobility, but which costs very little to purchase and administer. Ethernet almost solves the problem - it is cheap and supports high link bandwidths - but traditional Ethernet does not scale, because its spanning-tree topology forces traffic onto a single tree. Many researchers have described "scalable Ethernet" designs to solve the scaling problem, by enabling the use of multiple paths through the network. However, most such designs require specific wiring topologies, which can create deployment problems, or changes to the network switches, which could obviate the commodity pricing of these parts. In this paper, we describe SPAIN ("Smart Path Assignment In Networks"). SPAIN provides multipath forwarding using inexpensive, commodity off-the-shelf (COTS) Ethernet switches, over arbitrary topologies. SPAIN pre-computes a set of paths that exploit the redundancy in a given network topology, then merges these paths into a set of trees; each tree is mapped as a separate VLAN onto the physical Ethernet. SPAIN requires only minor end-host software modifications, including a simple algorithm that chooses between pre-installed paths to efficiently spread load over the network. We demonstrate SPAIN's ability to improve bisection bandwidth over both simulated and experimental data-center networks.
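The core idea in the abstract, merging precomputed paths into loop-free trees that each become a VLAN, can be sketched as a greedy packing problem. This is an illustrative sketch of that idea, not the authors' exact algorithm:

```python
# Sketch of SPAIN's core idea (not the paper's exact algorithm): take a set
# of precomputed paths and greedily pack them into as few loop-free edge
# sets as possible; each resulting set could then be mapped to its own VLAN.

def is_acyclic(edges):
    """Check that an undirected edge set forms a forest, via union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding this edge would close a loop
        parent[ru] = rv
    return True

def pack_paths_into_vlans(paths):
    """paths: list of node sequences, e.g. ['h1', 's1', 's2', 'h2']."""
    vlans = []  # each VLAN is a set of undirected edges
    for path in paths:
        edges = {tuple(sorted(e)) for e in zip(path, path[1:])}
        for vlan in vlans:
            if is_acyclic(vlan | edges):  # merging keeps the VLAN loop-free
                vlan |= edges
                break
        else:
            vlans.append(set(edges))  # no compatible VLAN: start a new one
    return vlans
```

Two edge-disjoint paths between the same endpoints form a loop when combined, so they land in different VLANs, which is exactly how SPAIN exposes multiple paths to end hosts.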

Journal ArticleDOI
TL;DR: In this article, the spectral properties of charge-converted nitrogen-vacancy centers, created in high-purity diamond by ion implantation and annealing, were investigated.
Abstract: The conversion of neutral nitrogen-vacancy centers to negatively charged nitrogen-vacancy centers is demonstrated for centers created by ion implantation and annealing in high-purity diamond. Conversion occurs with surface exposure to an oxygen atmosphere at 465 °C. The spectral properties of the charge-converted centers are investigated. Charge state control of nitrogen-vacancy centers close to the diamond surface is an important step toward the integration of these centers into devices for quantum information and magnetic sensing applications.

Journal ArticleDOI
TL;DR: In this paper, the authors describe a theoretical mechanism that may ensure high-fidelity entanglement of photons, and thus could be used to construct a practical quantum repeater. The communication rate is shown to be a function of the maximum distance between any two adjacent quantum repeaters, rather than of the entire length of the network.
Abstract: Researchers describe a theoretical mechanism that may ensure high-fidelity entanglement of photons, and thus could be used to construct a practical quantum repeater. The communication rate is shown to be a function of the maximum distance between any two adjacent quantum repeaters, rather than of the entire length of the network.

Patent
09 Jun 2010
TL;DR: In this paper, a user interface is presented for initiating activities in an electronic device, which includes an element referred to as a "launch wave" which can be activated at substantially any time, even if the user is engaged with an activity, without requiring the user to first return to a home screen.
Abstract: In one embodiment, a user interface is presented for initiating activities in an electronic device. The user interface includes an element referred to as a “launch wave”, which can be activated at substantially any time, even if the user is engaged with an activity, without requiring the user to first return to a home screen. In various embodiments, the user can activate the launch wave by performing a gesture, or by pressing a physical button, or by tapping at a particular location on a touchscreen, or by activating a keyboard command. In one embodiment, activation of the launch wave and selection of an item from the launch wave can be performed in one continuous operation on a touch-sensitive screen, so as to improve the expediency and convenience of launching applications and other items.

Journal ArticleDOI
TL;DR: At low temperature, nitrogen-vacancy centers in bulk diamond are spectrally more stable, and it is expected that in the long term the bulk diamond approach will be better suited for on-chip integration of a photonic network.
Abstract: Optical microcavities and waveguides coupled to diamond are needed to enable efficient communication between quantum systems such as nitrogen-vacancy centers, which are already known to have long electron spin coherence lifetimes. This paper describes recent progress in realizing microcavities with low loss and small mode volume in two hybrid systems: silica microdisks coupled to diamond nanoparticles, and gallium phosphide microdisks coupled to single-crystal diamond. A theoretical proposal for a gallium phosphide nanowire photonic crystal cavity coupled to diamond is also discussed. Comparing the two material systems, silica microdisks are easier to fabricate and test. However, at low temperature, nitrogen-vacancy centers in bulk diamond are spectrally more stable, and we expect that in the long term the bulk diamond approach will be better suited for on-chip integration of a photonic network.

Journal ArticleDOI
TL;DR: Silicon nanowire sensors developed by using top-down fabrication that is CMOS (complementary metal-oxide-semiconductor) compatible for resistive chemical detection with fast response and high sensitivity for pH detection and the long term drifting effects were investigated.
Abstract: Silicon nanowire (SiNW) sensors have been developed by using top-down fabrication that is CMOS (complementary metal-oxide-semiconductor) compatible for resistive chemical detection with fast response and high sensitivity. Top-down fabrication by electron beam lithography and reactive ion etching of a silicon on insulator (SOI) substrate enables compatibility with the CMOS fabrication process, accurate alignment with other electrical components, flexible design of the nanowire geometry and good control of the electrical characteristics. The SiNW sensors showed a large operation range for pH detection (pH 4–10) with an average sensitivity of (ΔR/R)/pH = 2.6%/pH and a rise time of 8 s. A small pH level difference (ΔpH = 0.2) near neutral pH conditions (pH = 7) could be resolved with the SiNW sensors. The sensor response to the presence of alkali metal ions and the long term drifting effects were also investigated.
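The quoted sensitivity of (ΔR/R)/pH = 2.6%/pH makes the sensor's resolution easy to estimate. A back-of-the-envelope helper, assuming a linear response (the real device need not be perfectly linear across pH 4–10), and using a hypothetical 1 MΩ baseline resistance:

```python
# The abstract quotes an average sensitivity of (dR/R)/pH = 2.6% per pH unit.
# Linear approximation only; baseline resistance below is a made-up example.

SENSITIVITY = 0.026  # fractional resistance change per pH unit (from abstract)

def expected_delta_r(r_baseline_ohm, delta_ph, sensitivity=SENSITIVITY):
    """Predicted resistance change (ohms) for a given pH shift."""
    return r_baseline_ohm * sensitivity * delta_ph

# A 0.2 pH step (the smallest resolved in the paper) on a 1 Mohm wire:
print(expected_delta_r(1e6, 0.2))  # 5200.0 ohms, i.e. 0.52% of baseline
```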

Patent
14 Jul 2010
TL;DR: In this paper, a method, system and gateway for remotely accessing an MPLS VPN are provided, in which multiple virtual interfaces are established in an SSL VPN gateway, one virtual interface is bound with one VPN, different VPN users are differentiated according to authentication and authorization information of users, and the users are respectively bound with corresponding VPNs.
Abstract: A method, system and gateway for remotely accessing an MPLS VPN are provided. In the method, multiple virtual interfaces are established in an SSL VPN gateway, one virtual interface is bound with one VPN, different VPN users are differentiated according to authentication and authorization information of users, and the authentication and authorization information of the users is respectively bound with corresponding VPNs. When the SSL VPN gateway receives a packet sent by a user, an inner label and an outer label are added to the packet according to a VPN instance bound with the user; when receiving a response packet from a resource server, the SSL VPN gateway searches for a VPN instance according to the VPN label, and forwards the response packet to the user through the SSL connection according to the found VPN instance.
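The label handling the abstract describes, binding each authenticated user to a VPN, pushing inner and outer labels on ingress, and resolving the VPN instance from the inner label on the return path, can be sketched with a simple data model. Everything here (class and method names, the dict-based tables) is a hypothetical illustration, not the patent's implementation:

```python
# Sketch of the gateway logic from the abstract (hypothetical data model):
# bind an authenticated user to a VPN instance, push the inner (VPN) and
# outer (transport) labels on ingress, and look up the VPN instance from
# the inner label when forwarding the response back over SSL.

class SslVpnGateway:
    def __init__(self):
        self.user_to_vpn = {}   # from authentication/authorization info
        self.vpn_labels = {}    # vpn name -> (inner_label, outer_label)
        self.label_to_vpn = {}  # inner label -> vpn name

    def bind(self, user, vpn, inner_label, outer_label):
        """Bind a user's auth info to a VPN instance and its labels."""
        self.user_to_vpn[user] = vpn
        self.vpn_labels[vpn] = (inner_label, outer_label)
        self.label_to_vpn[inner_label] = vpn

    def ingress(self, user, payload):
        """User -> MPLS core: push inner label, then outer label."""
        vpn = self.user_to_vpn[user]
        inner, outer = self.vpn_labels[vpn]
        return {"labels": [outer, inner], "payload": payload}

    def egress(self, packet):
        """MPLS core -> user: resolve the VPN instance from the inner label."""
        outer, inner = packet["labels"]
        vpn = self.label_to_vpn[inner]
        return vpn, packet["payload"]
```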

Proceedings ArticleDOI
13 Nov 2010
TL;DR: These measurements motivate developing a 'managed' IO approach that uses adaptive algorithms to vary the IO system workload based on current levels and use areas; this approach achieves higher overall performance and less variability, both in a typical usage environment and with artificially introduced levels of 'noise'.
Abstract: Significant challenges exist for achieving peak, or even consistent, levels of performance when using IO systems at scale. They stem from sharing IO system resources across the processes of single large-scale applications and/or multiple simultaneous programs, causing internal and external interference, which in turn causes substantial reductions in IO performance. This paper presents measurements of interference effects for two different file systems at multiple supercomputing sites. These measurements motivate developing a 'managed' IO approach that uses adaptive algorithms to vary the IO system workload based on current levels and use areas. An implementation of these methods, deployed for the shared, general scratch storage system on Oak Ridge National Laboratory machines, achieves higher overall performance and less variability, both in a typical usage environment and with artificially introduced levels of 'noise'. The latter serves to clearly delineate and illustrate the potential problems arising from shared system usage and the advantages derived from actively managing it.
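One way such an adaptive scheme can work is as a feedback loop: observe per-writer throughput, back off the number of concurrent writers when contention drags throughput below a target, and grow back when there is headroom. The sketch below is illustrative of that idea only; the thresholds, step sizes, and class are assumptions, not the paper's algorithm:

```python
# Illustrative feedback-loop sketch in the spirit of 'managed' IO (details
# are assumptions, not the paper's algorithm): adapt how many processes
# write concurrently based on recently observed per-writer throughput.

class AdaptiveIOScheduler:
    def __init__(self, max_writers, target_mb_s, step=2):
        self.writers = max_writers       # current concurrent-writer budget
        self.max_writers = max_writers
        self.target = target_mb_s        # per-writer throughput goal
        self.step = step                 # how aggressively to back off

    def observe(self, measured_mb_s):
        """Feed back the latest per-writer throughput; return new budget."""
        if measured_mb_s < 0.8 * self.target:
            # likely interference: shrink the concurrent-writer pool
            self.writers = max(1, self.writers - self.step)
        elif measured_mb_s > 1.2 * self.target:
            # headroom available: cautiously grow back toward the maximum
            self.writers = min(self.max_writers, self.writers + 1)
        return self.writers
```

The asymmetric step (back off fast, recover slowly) is a common choice in such controllers to avoid oscillating under bursty interference.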