
Showing papers by "Carnegie Mellon University" published in 2009


Journal ArticleDOI
TL;DR: This work proposes a principled statistical framework for discerning and quantifying power-law behavior in empirical data by combining maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios.
Abstract: Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out.

8,753 citations
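
As an illustration of the fitting machinery described above, here is a minimal Python sketch of two core steps, the maximum-likelihood exponent estimate and the Kolmogorov-Smirnov distance, run on synthetic data. The fixed cutoff xmin, the sample size, and the exponent are assumptions for illustration; the full method also scans over xmin and computes a goodness-of-fit p-value, which this sketch omits.

    import numpy as np

    def fit_power_law(x, xmin):
        """MLE exponent for a continuous power law truncated at xmin."""
        tail = np.asarray(x, dtype=float)
        tail = tail[tail >= xmin]
        n = tail.size
        alpha = 1.0 + n / np.sum(np.log(tail / xmin))
        return alpha, n

    def ks_distance(x, xmin, alpha):
        """Kolmogorov-Smirnov distance between the empirical tail CDF and the fit."""
        tail = np.sort(np.asarray(x, dtype=float)[np.asarray(x) >= xmin])
        n = tail.size
        empirical = np.arange(1, n + 1) / n
        model = 1.0 - (tail / xmin) ** (1.0 - alpha)     # CDF of the fitted power law
        return np.max(np.abs(empirical - model))

    # Illustrative use on synthetic power-law data (alpha = 2.5, xmin = 1),
    # generated by inverse-CDF sampling.
    rng = np.random.default_rng(0)
    synthetic = (1.0 - rng.random(10000)) ** (-1.0 / 1.5)
    alpha_hat, n_tail = fit_power_law(synthetic, xmin=1.0)
    print(alpha_hat, ks_distance(synthetic, 1.0, alpha_hat))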


Journal ArticleDOI
TL;DR: The results from a proof-of-concept prototype suggest that VM technology can indeed help meet the need for rapid customization of infrastructure for diverse applications, and this article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them.
Abstract: Mobile computing continuously evolves through the sustained effort of many researchers. It seamlessly augments users' cognitive abilities via compute-intensive capabilities such as speech recognition and natural language processing. By thus empowering mobile users, we could transform many areas of human activity. This article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them. In this architecture, a mobile user exploits virtual machine (VM) technology to rapidly instantiate customized service software on a nearby cloudlet and then uses that service over a wireless LAN; the mobile device typically functions as a thin client with respect to the service. A cloudlet is a trusted, resource-rich computer or cluster of computers that's well-connected to the Internet and available for use by nearby mobile devices. Our strategy of leveraging transiently customized proximate infrastructure as a mobile device moves with its user through the physical world is called cloudlet-based, resource-rich, mobile computing. Crisp interactive response, which is essential for seamless augmentation of human cognition, is easily achieved in this architecture because of the cloudlet's physical proximity and one-hop network latency. Using a cloudlet also simplifies the challenge of meeting the peak bandwidth demand of multiple users interactively generating and receiving media such as high-definition video and high-resolution images. Rapid customization of infrastructure for diverse applications emerges as a critical requirement, and our results from a proof-of-concept prototype suggest that VM technology can indeed help meet this requirement.

3,599 citations


Journal ArticleDOI
TL;DR: A comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings, which analyzes and categorizes the multiple ways in which examples are gathered, as well as the various techniques for policy derivation.

3,343 citations


Journal ArticleDOI
TL;DR: In this article, an overview of the atmospheric degradation mechanisms for SOA precursors, gas-particle partitioning theory and analytical techniques used to determine the chemical composition of SOA is presented.
Abstract: Secondary organic aerosol (SOA) accounts for a significant fraction of ambient tropospheric aerosol and a detailed knowledge of the formation, properties and transformation of SOA is therefore required to evaluate its impact on atmospheric processes, climate and human health. The chemical and physical processes associated with SOA formation are complex and varied, and, despite considerable progress in recent years, a quantitative and predictive understanding of SOA formation does not exist and therefore represents a major research challenge in atmospheric science. This review begins with an update on the current state of knowledge on the global SOA budget and is followed by an overview of the atmospheric degradation mechanisms for SOA precursors, gas-particle partitioning theory and the analytical techniques used to determine the chemical composition of SOA. A survey of recent laboratory, field and modeling studies is also presented. The following topical and emerging issues are highlighted and discussed in detail: molecular characterization of biogenic SOA constituents, condensed phase reactions and oligomerization, the interaction of atmospheric organic components with sulfuric acid, the chemical and photochemical processing of organics in the atmospheric aqueous phase, aerosol formation from real plant emissions, interaction of atmospheric organic components with water, thermodynamics and mixtures in atmospheric models. Finally, the major challenges ahead in laboratory, field and modeling studies of SOA are discussed and recommendations for future research directions are proposed.

3,324 citations
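
As a pointer to the gas-particle partitioning theory surveyed in this review, the equilibrium fraction ξ_i of a semivolatile compound i found in the particle phase is commonly written in terms of its effective saturation concentration C*_i and the total organic aerosol mass concentration C_OA (both in μg m⁻³); this is the standard expression used in volatility-basis-set treatments, given here only as background rather than as the review's own formulation:

    \xi_i = \left( 1 + \frac{C_i^{*}}{C_{\mathrm{OA}}} \right)^{-1}

Compounds with C*_i much smaller than C_OA reside almost entirely in the particle phase, while those with C*_i much larger than C_OA remain mostly in the gas phase, which is why SOA yields depend on the ambient organic aerosol loading.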


Journal ArticleDOI
11 Dec 2009-Science
TL;DR: A unifying model framework describing the atmospheric evolution of OA that is constrained by high–time-resolution measurements of its composition, volatility, and oxidation state is presented, which can serve as a basis for improving parameterizations in regional and global models.
Abstract: Organic aerosol (OA) particles affect climate forcing and human health, but their sources and evolution remain poorly characterized. We present a unifying model framework describing the atmospheric evolution of OA that is constrained by high-time-resolution measurements of its composition, volatility, and oxidation state. OA and OA precursor gases evolve by becoming increasingly oxidized, less volatile, and more hygroscopic, leading to the formation of oxygenated organic aerosol (OOA), with concentrations comparable to those of sulfate aerosol throughout the Northern Hemisphere. Our model framework captures the dynamic aging behavior observed in both the atmosphere and laboratory: It can serve as a basis for improving parameterizations in regional and global models.

3,104 citations


Journal ArticleDOI
TL;DR: It is argued that evidence for novel size-dependent properties alone, rather than particle size, should be the primary criterion in any definition of nanoparticles when making decisions about their regulation for environmental, health and safety reasons.
Abstract: The regulation of engineered nanoparticles requires a widely agreed definition of such particles. Nanoparticles are routinely defined as particles with sizes between about 1 and 100 nm that show properties that are not found in bulk samples of the same material. Here we argue that evidence for novel size-dependent properties alone, rather than particle size, should be the primary criterion in any definition of nanoparticles when making decisions about their regulation for environmental, health and safety reasons. We review the size-dependent properties of a variety of inorganic nanoparticles and find that particles larger than about 30 nm do not in general show properties that would require regulatory scrutiny beyond that required for their bulk counterparts.

1,656 citations


Journal ArticleDOI
TL;DR: This tutorial overview shows how, in addition to answering the question of 'is there information about a variable of interest' (pattern discrimination), classifiers can be used to tackle other classes of questions, namely 'where is the information' and 'how is that information encoded' (pattern characterization).

1,582 citations
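
A minimal sketch of the 'is there information' use of a classifier described in this tutorial, using cross-validated decoding accuracy on synthetic data; the feature matrix, labels, and choice of a linear classifier are illustrative assumptions, not the tutorial's own pipeline.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression

    # Synthetic "voxel" data: 100 trials x 50 features, two conditions, weak signal.
    rng = np.random.default_rng(0)
    labels = np.repeat([0, 1], 50)
    data = rng.normal(size=(100, 50))
    data[labels == 1, :5] += 0.8          # information carried by the first five features

    # Pattern discrimination: above-chance cross-validated accuracy indicates
    # that the pattern carries information about the condition label.
    clf = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(clf, data, labels, cv=5).mean()
    print(f"cross-validated decoding accuracy: {accuracy:.2f}")

    # A crude form of pattern characterization: inspect which features drive the decision.
    weights = LogisticRegression(max_iter=1000).fit(data, labels).coef_.ravel()
    print("largest-weight features:", np.argsort(np.abs(weights))[-5:])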


Proceedings ArticleDOI
20 Jun 2009
TL;DR: This work proposes, crafted from a fundamental understanding of PCM technology parameters, area-neutral architectural enhancements that address these limitations and make PCM competitive with DRAM.
Abstract: Memory scaling is in jeopardy as charge storage and sensing mechanisms become less reliable for prevalent memory technologies, such as DRAM. In contrast, phase change memory (PCM) storage relies on scalable current and thermal mechanisms. To exploit PCM's scalability as a DRAM alternative, PCM must be architected to address relatively long latencies, high energy writes, and finite endurance. We propose, crafted from a fundamental understanding of PCM technology parameters, area-neutral architectural enhancements that address these limitations and make PCM competitive with DRAM. A baseline PCM system is 1.6x slower and requires 2.2x more energy than a DRAM system. Buffer reorganizations reduce this delay and energy gap to 1.2x and 1.0x, using narrow rows to mitigate write energy and multiple rows to improve locality and write coalescing. Partial writes enhance memory endurance, providing 5.6 years of lifetime. Process scaling will further reduce PCM energy costs and improve endurance.

1,568 citations
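
A back-of-the-envelope lifetime calculation in the spirit of the endurance argument above; the endurance, capacity, and sustained write-bandwidth numbers are assumptions chosen only to illustrate the formula, not the paper's parameters or its 5.6-year result.

    # Idealized endurance model: with perfect wear-leveling, every cell absorbs an
    # equal share of the write traffic, so lifetime = endurance * capacity / write rate.
    endurance_cycles = 1e8            # assumed write endurance per PCM cell
    capacity_bytes = 4 * 2**30        # assumed 4 GiB PCM main memory
    write_bytes_per_s = 1e9           # assumed sustained write traffic, 1 GB/s

    lifetime_s = endurance_cycles * capacity_bytes / write_bytes_per_s
    print(f"idealized lifetime: {lifetime_s / (3600 * 24 * 365):.1f} years")
    # Partial writes and write coalescing, as proposed in the paper, raise effective
    # lifetime by reducing the bytes actually written per memory update.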


01 Jan 2009
TL;DR: Boss is an autonomous vehicle that uses on-board sensors and a three-layer planning system combining mission, behavioral, and motion planning to drive in urban environments; it qualified first and won the DARPA Urban Challenge.
Abstract: Boss is an autonomous vehicle that uses on-board sensors (global positioning system, lasers, radars, and cameras) to track other vehicles, detect static obstacles, and localize itself relative to a road model. A three-layer planning system combines mission, behavioral, and motion planning to drive in urban environments. The mission planning layer considers which street to take to achieve a mission goal. The behavioral layer determines when to change lanes and precedence at intersections and performs error recovery maneuvers. The motion planning layer selects actions to avoid obstacles while making progress toward local goals. The system was developed from the ground up to address the requirements of the DARPA Urban Challenge using a spiral system development process with a heavy emphasis on regular, regressive system testing. During the National Qualification Event and the 85-km Urban Challenge Final Event, Boss demonstrated some of its capabilities, qualifying first and winning the challenge. © 2008 Wiley Periodicals, Inc.

1,275 citations


Journal ArticleDOI
TL;DR: In this paper, the perturbation approach originally introduced by Moller and Plesset, terminated at finite order, is compared from the point of view of requirements for theoretical chemical models.
Abstract: Some methods of describing electron correlation are compared from the point of view of requirements for theoretical chemical models. The perturbation approach originally introduced by Moller and Plesset, terminated at finite order, is found to satisfy most of these requirements. It is size consistent, that is, applicable to an ensemble of isolated systems in an additive manner. On the other hand, it does not provide an upper bound for the electronic energy. The independent electron-pair approximation is accurate to second order in a Moller-Plesset expansion, but inaccurate in third order. A series of variational methods is discussed which gives upper bounds for the energy, but which lacks size consistency. Finally, calculations on some small molecules using a moderately large Gaussian basis are presented to illustrate these points. Equilibrium geometries, dissociation energies, and energy separations between electronic states of different spin multiplicities are described substantially better by Moller-Plesset theory to second or third order than by Hartree-Fock theory.

1,217 citations
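
For readers who want the key formula behind "accurate to second order in a Moller-Plesset expansion", the second-order (MP2) energy correction can be written in the standard spin-orbital form, with i, j running over occupied and a, b over virtual Hartree-Fock spin orbitals of energies ε and antisymmetrized two-electron integrals ⟨ij||ab⟩ (textbook notation, not tied to this paper's presentation):

    E^{(2)} = \frac{1}{4} \sum_{ij}^{\mathrm{occ}} \sum_{ab}^{\mathrm{virt}} \frac{|\langle ij \| ab \rangle|^{2}}{\varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b}

Because the correction is a sum of independent excitation contributions, it is size consistent, but it is not variational, so the truncated series need not bound the true energy from above, which is exactly the trade-off discussed in the abstract.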


Journal ArticleDOI
TL;DR: The fundamentals of the technique are discussed, along with how it can be used to synthesize macromolecules with controlled molecular architecture, and how their self-assembly can create nanostructured functional materials.
Abstract: The simplicity and broad applicabilty of atom transfer radical polymerization make it a rapidly developing area of synthetic polymer chemistry. Here, the fundamentals of the technique are discussed, along with how it can be used to synthesize macromolecules with controlled molecular architecture, and how their self-assembly can create nanostructured functional materials.
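
As background for the "fundamentals of the technique" mentioned above, the control in ATRP is usually summarized by the activation-deactivation equilibrium between dormant alkyl halides R-X and active radicals P•, which sets the polymerization rate; this is the standard textbook rate expression, not something specific to this article:

    R_p = k_p [\mathrm{M}][\mathrm{P}^{\bullet}] = k_p \, K_{\mathrm{ATRP}} \, [\mathrm{M}] \, \frac{[\mathrm{R\text{-}X}][\mathrm{Cu^{I}/L}]}{[\mathrm{X\text{-}Cu^{II}/L}]}, \qquad K_{\mathrm{ATRP}} = \frac{k_{\mathrm{act}}}{k_{\mathrm{deact}}}

Keeping K_ATRP small keeps the radical concentration low, which suppresses irreversible termination and is what makes the controlled architectures discussed above possible.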

Proceedings ArticleDOI
07 Mar 2009
TL;DR: The PowerNap concept, an energy-conservation approach where the entire system transitions rapidly between a high-performance active state and a near-zero-power idle state in response to instantaneous load, is proposed and the Redundant Array for Inexpensive Load Sharing (RAILS) is introduced.
Abstract: Data center power consumption is growing to unprecedented levels: the EPA estimates U.S. data centers will consume 100 billion kilowatt hours annually by 2011. Much of this energy is wasted in idle systems: in typical deployments, server utilization is below 30%, but idle servers still consume 60% of their peak power draw. Typical idle periods, though frequent, last seconds or less, confounding simple energy-conservation approaches. In this paper, we propose PowerNap, an energy-conservation approach where the entire system transitions rapidly between a high-performance active state and a near-zero-power idle state in response to instantaneous load. Rather than requiring fine-grained power-performance states and complex load-proportional operation from each system component, PowerNap instead calls for minimizing idle power and transition time, which are simpler optimization goals. Based on the PowerNap concept, we develop requirements and outline mechanisms to eliminate idle power waste in enterprise blade servers. Because PowerNap operates in low-efficiency regions of current blade center power supplies, we introduce the Redundant Array for Inexpensive Load Sharing (RAILS), a power provisioning approach that provides high conversion efficiency across the entire range of PowerNap's power demands. Using utilization traces collected from enterprise-scale commercial deployments, we demonstrate that, together, PowerNap and RAILS reduce average server power consumption by 74%.
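
A minimal model of the energy argument, assuming a simple two-state server driven by instantaneous load; the power and timing numbers are illustrative assumptions, not measurements from the paper, but the sweep shows why minimizing transition time is the key optimization goal.

    def average_power(utilization, p_active, p_nap, busy_period_s, transition_s):
        """Average power of a two-state (active/nap) server.

        Each busy period of length busy_period_s is preceded and followed by a
        wake/sleep transition of length transition_s, billed at active power.
        """
        cycle = busy_period_s / utilization          # average busy + idle cycle length
        active_time = busy_period_s + 2 * transition_s
        nap_time = max(cycle - active_time, 0.0)
        return (active_time * p_active + nap_time * p_nap) / cycle

    # Assumed numbers: 30% utilization, 300 W active, 10 W nap, 50 ms busy bursts.
    for transition_ms in (0.1, 1, 10, 50):
        p = average_power(0.3, 300.0, 10.0, 0.05, transition_ms / 1000.0)
        print(f"transition {transition_ms:>5} ms -> average power {p:6.1f} W")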

Proceedings ArticleDOI
07 Dec 2009
TL;DR: A semantic output code classifier which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes and can often predict words that people are thinking about from functional magnetic resonance images of their neural activity, even without training examples for those words.
Abstract: We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.
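
A minimal sketch of the two-stage idea behind a semantic output code classifier: first regress from inputs to a vector of semantic properties, then label a novel example with the class whose known semantic code is nearest. The synthetic data, the toy generative assumption, ridge regression, and 1-nearest-neighbor decoding are all illustrative assumptions, not the paper's experimental setup.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Knowledge base: a semantic code (here 4 binary properties) for every class,
    # including a class that never appears in the training set.
    codes = {"bear": [1, 1, 0, 0], "dog": [1, 0, 1, 0],
             "hammer": [0, 0, 1, 1], "screwdriver": [0, 1, 0, 1]}
    train_classes = ["bear", "dog", "hammer"]          # "screwdriver" is held out

    # Toy generative assumption: the input pattern is a noisy linear image of the code.
    W = rng.normal(size=(4, 20))
    def sample(cls, n=40):
        return np.array(codes[cls]) @ W + 0.3 * rng.normal(size=(n, 20))

    X = np.vstack([sample(c) for c in train_classes])
    S = np.vstack([np.tile(codes[c], (40, 1)) for c in train_classes])
    stage1 = Ridge(alpha=1.0).fit(X, S)                # stage 1: input -> semantic code

    def predict(x):                                    # stage 2: nearest known code
        s_hat = stage1.predict(x.reshape(1, -1))[0]
        return min(codes, key=lambda c: np.linalg.norm(s_hat - np.array(codes[c])))

    test = sample("screwdriver", n=10)                 # zero-shot: class unseen in training
    print(sum(predict(x) == "screwdriver" for x in test), "of 10 correct")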

Proceedings ArticleDOI
16 Aug 2009
TL;DR: The variation due to fluctuating electricity prices is characterized and it is argued that existing distributed systems should be able to exploit this variation for significant economic gains.
Abstract: Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices for twenty-nine locations in the US and network traffic data collected on Akamai's CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs, by being cognizant of locational computation cost differences.
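
A toy illustration of the cost-aware routing argument, assuming hourly prices for a few hypothetical data centers and a latency-feasibility set per client region; the prices, locations, loads, and greedy policy are invented for illustration and are not the paper's data or algorithm.

    # Hourly electricity prices ($/MWh) at three hypothetical data centers.
    prices = {"east": [55, 60, 80, 95], "midwest": [40, 42, 45, 50], "west": [70, 35, 30, 65]}
    feasible = {"us-clients": ["east", "midwest", "west"]}   # sites within the latency budget
    load_mwh = {"us-clients": [10, 12, 15, 11]}              # energy needed per hour

    def cost(policy):
        return sum(prices[policy(region, h)][h] * load_mwh[region][h]
                   for region in load_mwh for h in range(4))

    static = lambda region, h: "east"                                            # price-oblivious
    dynamic = lambda region, h: min(feasible[region], key=lambda s: prices[s][h])  # price-aware

    print("static routing cost :", cost(static))     # 3515
    print("dynamic routing cost:", cost(dynamic))    # 1820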

Journal ArticleDOI
TL;DR: In this paper, the magnetic resonance coupling between source and load coils is achieved with lumped capacitors terminating the coils, and a circuit model is developed to describe the system with a single receiver and extended to describe two receivers.
Abstract: Wireless power transfer via magnetic resonant coupling is experimentally demonstrated in a system with a large source coil and either one or two small receivers. Resonance between source and load coils is achieved with lumped capacitors terminating the coils. A circuit model is developed to describe the system with a single receiver, and extended to describe the system with two receivers. With parameter values chosen to obtain good fits, the circuit models yield transfer frequency responses that are in good agreement with experimental measurements over a range of frequencies that span the resonance. Resonant frequency splitting is observed experimentally and described theoretically for the multiple receiver system. In the single receiver system at resonance, more than 50% of the power that is supplied by the actual source is delivered to the load. In a multiple receiver system, a means for tracking frequency shifts and continuously retuning the lumped capacitances that terminate each receiver coil so as to maximize efficiency is a key issue for future work.
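
For context, the single-receiver configuration described above is usually captured by a mutual-inductance circuit model of the following textbook form, with source coil (R1, L1, series capacitor C1, drive voltage Vs), receiver coil (R2, L2, C2) terminated in a load RL, and mutual inductance M between the coils; sign conventions aside, this is a standard form and not necessarily the paper's exact notation:

    \left(R_1 + j\omega L_1 + \frac{1}{j\omega C_1}\right) I_1 + j\omega M\, I_2 = V_s
    j\omega M\, I_1 + \left(R_2 + R_L + j\omega L_2 + \frac{1}{j\omega C_2}\right) I_2 = 0

Solving the pair for I_2 gives the load power as a function of frequency; adding a third coupled equation for a second receiver introduces the extra coupling terms responsible for the resonant frequency splitting mentioned above.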

Book
15 Jul 2009
TL;DR: Leoni takes a novel approach to the theory by treating Sobolev spaces as the natural development of monotone, absolutely continuous, and BV functions of one variable.
Abstract: Sobolev spaces are a fundamental tool in the modern study of partial differential equations. In this book, Leoni takes a novel approach to the theory by looking at Sobolev spaces as the natural development of monotone, absolutely continuous, and BV functions of one variable. In this way, the majority of the text can be read without the prerequisite of a course in functional analysis. The first part of this text is devoted to studying functions of one variable. Several of the topics treated occur in courses on real analysis or measure theory. Here, the perspective emphasizes their applications to Sobolev functions, giving a very different flavor to the treatment. This elementary start to the book makes it suitable for advanced undergraduates or beginning graduate students. Moreover, the one-variable part of the book helps to develop a solid background that facilitates the reading and understanding of Sobolev functions of several variables. The second part of the book is more classical, although it also contains some recent results. Besides the standard results on Sobolev functions, this part of the book includes chapters on BV functions, symmetric rearrangement, and Besov spaces. The book contains over 200 exercises.
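
For readers coming from outside the area, the central object of the book, stated in its standard form rather than the book's own notation, is the space

    W^{1,p}(\Omega) = \{\, u \in L^{p}(\Omega) : \partial_{x_i} u \in L^{p}(\Omega) \text{ in the weak sense, } i = 1, \dots, n \,\},
    \qquad \|u\|_{W^{1,p}} = \Big( \|u\|_{L^{p}}^{p} + \sum_{i=1}^{n} \|\partial_{x_i} u\|_{L^{p}}^{p} \Big)^{1/p}, \quad 1 \le p < \infty.

In one variable this reduces to functions with an absolutely continuous representative whose derivative lies in L^p, which is precisely the bridge to the monotone, absolutely continuous, and BV functions emphasized in the first part of the book.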

Proceedings ArticleDOI
30 Sep 2009
TL;DR: This paper analyzes the effect of replay attacks on a control system and proposes a countermeasure that guarantees a desired probability of detection by trading off either detection delay or LQG performance, either by decreasing control accuracy or increasing control effort.
Abstract: This paper analyzes the effect of replay attacks on a control system. We assume an attacker wishes to disrupt the operation of a control system in steady state. In order to inject an exogenous control input without being detected the attacker will hijack the sensors, observe and record their readings for a certain amount of time and repeat them afterwards while carrying out his attack. This is a very common and natural attack (intruders have been observed many times recording and replaying security videos while carrying out their attacks undisturbed) for an attacker who does not know the dynamics of the system but is aware that the system itself is expected to be in steady state for the duration of the attack. We assume the control system to be a discrete-time linear time-invariant Gaussian system applying an infinite horizon Linear Quadratic Gaussian (LQG) controller. We also assume that the system is equipped with a χ² failure detector. The main contributions of the paper, beyond the novelty of the problem formulation, consist in 1) providing conditions on the feasibility of the replay attack on the aforementioned system and 2) proposing a countermeasure that guarantees a desired probability of detection (with a fixed false alarm rate) by trading off either detection delay or LQG performance, either by decreasing control accuracy or increasing control effort.
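
A minimal simulation sketch of the detection problem for a scalar system, using a zero-mean authentication noise superimposed on the control input as one way to realize the accuracy-for-detectability trade-off described above; the system parameters, gains, noise levels, and windowed statistic are assumptions for illustration, and the scalar test below stands in for the full LQG/χ² machinery of the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    a, c = 1.0, 1.0            # assumed scalar plant: x+ = a*x + u + w,  y = c*x + v
    q, r = 0.1, 0.1            # process / measurement noise variances
    L, K = 0.9, 0.5            # assumed steady-state LQG and Kalman gains
    sigma_u = 0.3              # authentication (watermark) noise std; 0 disables it

    T, attack_start = 400, 200
    x, xhat, u_prev = 0.0, 0.0, 0.0
    recorded, window, stats = [], [], []

    for k in range(T):
        # plant and sensor
        x = a * x + u_prev + np.sqrt(q) * rng.normal()
        y_true = c * x + np.sqrt(r) * rng.normal()
        recorded.append(y_true)
        # replay attack: after attack_start, the defender sees old measurements
        y = recorded[k - attack_start] if k >= attack_start else y_true

        # estimator and a windowed chi-square-style statistic on the innovation
        xhat_pred = a * xhat + u_prev
        innov = y - c * xhat_pred
        xhat = xhat_pred + K * innov
        window.append(innov ** 2)
        stats.append(np.mean(window[-20:]))

        # LQG control plus zero-mean authentication noise (the watermark)
        u_prev = -L * xhat + sigma_u * rng.normal()

    print("mean detector statistic before attack:", np.mean(stats[50:attack_start]))
    print("mean detector statistic during replay:", np.mean(stats[attack_start + 20:]))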

Journal ArticleDOI
TL;DR: This tutorial focuses on the sense of touch within the context of a fully active human observer and describes an extensive body of research on “what” and “where” channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver.
Abstract: This tutorial focuses on the sense of touch within the context of a fully active human observer. It is intended for graduate students and researchers outside the discipline who seek an introduction to the rapidly evolving field of human haptics. The tutorial begins with a review of peripheral sensory receptors in skin, muscles, tendons, and joints. We then describe an extensive body of research on “what” and “where” channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver. We conclude with a brief discussion of other significant issues in the field, including vision-touch interactions, affective touch, neural plasticity, and applications.

Proceedings ArticleDOI
12 May 2009
TL;DR: This paper presents CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories and relax the collision-free feasibility prerequisite on input paths required by those strategies.
Abstract: Existing high-dimensional motion planning algorithms are simultaneously overpowered and underpowered. In domains sparsely populated by obstacles, the heuristics used by sampling-based planners to navigate “narrow passages” can be needlessly complex; furthermore, additional post-processing is required to remove the jerky or extraneous motions from the paths that such planners generate. In this paper, we present CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories. Our optimization technique both optimizes higher-order dynamics and is able to converge over a wider range of input paths relative to previous path optimization strategies. In particular, we relax the collision-free feasibility prerequisite on input paths required by those strategies. As a result, CHOMP can be used as a standalone motion planner in many real-world planning queries. We demonstrate the effectiveness of our proposed method in manipulation planning for a 6-DOF robotic arm as well as in trajectory generation for a walking quadruped robot.
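
A minimal 2-D illustration of the covariant update at the heart of this method: the trajectory is improved by the gradient of a smoothness-plus-obstacle objective, premultiplied by the inverse of the finite-difference smoothness metric A so that each update stays smooth. The obstacle model, weights, and step size are illustrative assumptions rather than the paper's formulation.

    import numpy as np

    n = 50                                           # interior waypoints, each a 2-D point
    start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    xi = np.linspace(start, goal, n + 2)[1:-1]       # (n, 2) straight-line initialization

    # Finite-difference matrix over the full trajectory, restricted to interior points;
    # A = K^T K is the tridiagonal smoothness metric used to precondition the gradient.
    D = np.diff(np.eye(n + 2), axis=0)
    K = D[:, 1:-1]
    A = K.T @ K
    A_inv = np.linalg.inv(A)

    obstacle_center, obstacle_radius = np.array([0.5, -0.05]), 0.2

    def obstacle_grad(points, weight=2.0):
        """Gradient of the penalty 0.5*weight*(radius - d)^2 for waypoints inside the disc."""
        diff = points - obstacle_center
        dist = np.linalg.norm(diff, axis=1, keepdims=True)
        push = -weight * (obstacle_radius - dist) * diff / np.maximum(dist, 1e-9)
        return np.where(dist < obstacle_radius, push, 0.0)

    eta = 10.0
    for _ in range(300):
        smooth_grad = A @ xi                         # gradient of the smoothness term
        smooth_grad[0] -= start                      # fixed-endpoint boundary terms
        smooth_grad[-1] -= goal
        grad = smooth_grad + obstacle_grad(xi)
        xi = xi - (1.0 / eta) * (A_inv @ grad)       # covariant (A^-1-preconditioned) step

    clearance = np.linalg.norm(xi - obstacle_center, axis=1).min()
    print(f"minimum distance from obstacle center after optimization: {clearance:.3f}")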

Journal ArticleDOI
TL;DR: An algorithm (mrFAST) is presented to comprehensively map next-generation sequence reads, which allows for the prediction of absolute copy-number variation of duplicated segments and genes, and can distinguish between different copies of highly identical genes.
Abstract: Despite their importance in gene innovation and phenotypic variation, duplicated regions have remained largely intractable owing to difficulties in accurately resolving their structure, copy number and sequence content. We present an algorithm (mrFAST) to comprehensively map next-generation sequence reads, which allows for the prediction of absolute copy-number variation of duplicated segments and genes. We examine three human genomes and experimentally validate genome-wide copy number differences. We estimate that, on average, 73-87 genes vary in copy number between any two individuals and find that these genic differences overwhelmingly correspond to segmental duplications (odds ratio = 135; P < 2.2 × 10⁻¹⁶). Our method can distinguish between different copies of highly identical genes, providing a more accurate assessment of gene content and insight into functional constraint without the limitations of array-based technology.
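
A toy version of the read-depth idea behind absolute copy-number prediction: once reads (including those with several equally good placements) are mapped comprehensively, the copy number of a segment can be estimated from its depth relative to the diploid genome-wide baseline. The counts below are invented, and real pipelines additionally correct for GC content and mappability, which this sketch ignores.

    import numpy as np

    # Simulated per-window read depths: most of the genome is diploid (copy number 2),
    # one segmental duplication is present at copy number 4 in this individual.
    rng = np.random.default_rng(0)
    depth = rng.poisson(lam=30, size=1000).astype(float)   # diploid background, ~30x
    depth[400:420] = rng.poisson(lam=60, size=20)          # duplicated segment, ~2x depth

    diploid_baseline = np.median(depth)                    # robust diploid baseline
    copy_number = 2.0 * depth / diploid_baseline

    segment_cn = copy_number[400:420].mean()
    print(f"estimated copy number of the duplicated segment: {segment_cn:.1f}")  # ~4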

Journal ArticleDOI
B. I. Abelev1, Madan M. Aggarwal2, Zubayer Ahammed3, B. D. Anderson4  +367 moreInstitutions (47)
TL;DR: In this article, the authors measured identified charged-particle spectra with the STAR time projection chamber at the BNL Relativistic Heavy Ion Collider (RHIC) and reported average transverse momenta, total particle production, particle yield ratios, strangeness, and baryon production rates as a function of collision system and centrality.
Abstract: Identified charged-particle spectra of π±, K±, p, and p̄ at midrapidity (|y| < 0.1), measured by the dE/dx method in the STAR (solenoidal tracker at the BNL Relativistic Heavy Ion Collider) time projection chamber, are reported for pp and d + Au collisions at √s_NN = 200 GeV and for Au + Au collisions at 62.4, 130, and 200 GeV. Average transverse momenta, total particle production, particle yield ratios, strangeness, and baryon production rates are investigated as a function of the collision system and centrality. The transverse momentum spectra are found to be flatter for heavy particles than for light particles in all collision systems; the effect is more prominent for more central collisions. The extracted average transverse momentum of each particle species follows a trend determined by the total charged-particle multiplicity density. The Bjorken energy density estimate is at least several GeV/fm³ for a formation time less than 1 fm/c. A significantly larger net-baryon density and a stronger increase of the net-baryon density with centrality are found in Au + Au collisions at 62.4 GeV than at the two higher energies. Antibaryon production relative to total particle multiplicity is found to be constant over centrality, but increases with the collision energy. Strangeness production relative to total particle multiplicity is similar at the three measured RHIC energies. Relative strangeness production increases quickly with centrality in peripheral Au + Au collisions, to a value about 50% above the pp value, and remains rather constant in more central collisions. Bulk freeze-out properties are extracted from thermal equilibrium model and hydrodynamics-motivated blast-wave model fits to the data. Resonance decays are found to have little effect on the extracted kinetic freeze-out parameters because of the transverse momentum range of our measurements. The extracted chemical freeze-out temperature is constant, independent of collision system or centrality; its value is close to the predicted phase-transition temperature, suggesting that chemical freeze-out happens in the vicinity of hadronization and the chemical freeze-out temperature is universal despite the vastly different initial conditions in the collision systems. The extracted kinetic freeze-out temperature, while similar to the chemical freeze-out temperature in pp, d + Au, and peripheral Au + Au collisions, drops significantly with centrality in Au + Au collisions, whereas the extracted transverse radial flow velocity increases rapidly with centrality. There appears to be a prolonged period of particle elastic scatterings from chemical to kinetic freeze-out in central Au + Au collisions. The bulk properties extracted at chemical and kinetic freeze-out are observed to evolve smoothly over the measured energy range, collision systems, and collision centralities.
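
The Bjorken estimate quoted above comes from the standard boost-invariant expression (textbook form), with dE_T/dy the transverse energy per unit rapidity at midrapidity, τ_0 the formation time, and πR² the transverse overlap area of the colliding nuclei:

    \varepsilon_{\mathrm{Bj}} = \frac{1}{\pi R^{2}\, \tau_{0}} \, \frac{dE_{T}}{dy}

With dE_T/dy of order several hundred GeV in central Au + Au collisions and τ_0 of order 1 fm/c or less, this expression yields the "at least several GeV/fm³" figure cited in the abstract; these inputs are rough orders of magnitude, not the paper's measured values.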

Proceedings ArticleDOI
06 Dec 2009
TL;DR: This paper describes PEGASUS, an open source Peta Graph Mining library which performs typical graph mining tasks such as computing the diameter of the graph, computing the radius of each node and finding the connected components, and describes a very important primitive for PEGASUS, called GIM-V (Generalized Iterated Matrix-Vector multiplication).
Abstract: In this paper, we describe PEGASUS, an open source Peta Graph Mining library which performs typical graph mining tasks such as computing the diameter of the graph, computing the radius of each node and finding the connected components. As the size of graphs reaches several Giga-, Tera- or Peta-bytes, the necessity for such a library grows too. To the best of our knowledge, PEGASUS is the first such library, implemented on the top of the Hadoop platform, the open source version of MapReduce. Many graph mining operations (PageRank, spectral clustering, diameter estimation, connected components etc.) are essentially a repeated matrix-vector multiplication. In this paper we describe a very important primitive for PEGASUS, called GIM-V (Generalized Iterated Matrix-Vector multiplication). GIM-V is highly optimized, achieving (a) good scale-up on the number of available machines, (b) linear running time on the number of edges, and (c) more than 5 times faster performance over the non-optimized version of GIM-V. Our experiments ran on M45, one of the top 50 supercomputers in the world. We report our findings on several real graphs, including one of the largest publicly available Web Graphs, thanks to Yahoo!, with 6.7 billion edges.
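
A minimal in-memory sketch of the GIM-V primitive: a generalized matrix-vector multiplication defined by three user-supplied operations (combine2, combineAll, assign). PEGASUS runs this on Hadoop; the toy driver below just iterates over an edge list in Python to show how connected components and PageRank both fit the same primitive. The operation names follow the paper's description, but the in-memory loop is an illustrative stand-in for the MapReduce implementation.

    def gim_v(edges, v, combine2, combine_all, assign, iterations=30):
        """Generalized Iterated Matrix-Vector multiplication over an edge list {(i, j, m_ij)}."""
        n = len(v)
        for _ in range(iterations):
            partial = [[] for _ in range(n)]
            for i, j, m_ij in edges:                 # combine2: pair matrix and vector elements
                partial[i].append(combine2(m_ij, v[j]))
            v = [assign(v[i], combine_all(partial[i])) for i in range(n)]  # combineAll + assign
        return v

    # Connected components: m_ij = 1, combine2 propagates labels, assign keeps the smaller one.
    edges = [(0, 1, 1), (1, 0, 1), (1, 2, 1), (2, 1, 1), (3, 4, 1), (4, 3, 1)]
    cc = gim_v(edges, list(range(5)),
               combine2=lambda m, vj: vj,
               combine_all=lambda xs: min(xs) if xs else float("inf"),
               assign=lambda vi, s: min(vi, s))
    print("component labels:", cc)                   # [0, 0, 0, 3, 3]

    # PageRank: m_ij = 1/outdegree(j), combineAll sums, assign applies the damping factor.
    out_deg = {0: 1, 1: 2, 2: 1, 3: 1, 4: 1}
    pr_edges = [(i, j, 1.0 / out_deg[j]) for (i, j, _) in edges]
    ranks = gim_v(pr_edges, [0.2] * 5,
                  combine2=lambda m, vj: m * vj,
                  combine_all=lambda xs: sum(xs),
                  assign=lambda vi, s: 0.15 / 5 + 0.85 * s)
    print("pageranks:", [round(r, 3) for r in ranks])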

Journal ArticleDOI
TL;DR: This paper provides a general description of NoC architectures and applications and enumerates several related research problems organized under five main categories: Application characterization, communication paradigm, communication infrastructure, analysis, and solution evaluation.
Abstract: To alleviate the complex communication problems that arise as the number of on-chip components increases, network-on-chip (NoC) architectures have been recently proposed to replace global interconnects. In this paper, we first provide a general description of NoC architectures and applications. Then, we enumerate several related research problems organized under five main categories: Application characterization, communication paradigm, communication infrastructure, analysis, and solution evaluation. Motivation, problem description, proposed approaches, and open issues are discussed for each problem from system, microarchitecture, and circuit perspectives. Finally, we address the interactions among these research problems and put the NoC design process into perspective.

Journal ArticleDOI
TL;DR: In this paper, recent developments in the synthesis of functional polymers with controlled architecture and site-specific functionality via controlled radical polymerization (CRP) techniques are reviewed.

Journal ArticleDOI
TL;DR: It is shown that A-ND represents the best of both worlds-zero bias and low variance-at the cost of a slow convergence rate; rescaling the weights balances the variance versus the rate of bias reduction (convergence rate).
Abstract: The paper studies average consensus with random topologies (intermittent links) and noisy channels. Consensus with noise in the network links leads to the bias-variance dilemma: running consensus for long reduces the bias of the final average estimate but increases its variance. We present two different compromises to this tradeoff: the A-ND algorithm modifies conventional consensus by forcing the weights to satisfy a persistence condition (slowly decaying to zero); and the A-NC algorithm where the weights are constant but consensus is run for a fixed number of iterations î, then it is restarted and rerun for a total of p̂ runs, and at the end averages the final states of the p̂ runs (Monte Carlo averaging). We use controlled Markov processes and stochastic approximation arguments to prove almost sure convergence of A-ND to a finite consensus limit and compute explicitly the mean square error (mse) (variance) of the consensus limit. We show that A-ND represents the best of both worlds, zero bias and low variance, at the cost of a slow convergence rate; rescaling the weights balances the variance versus the rate of bias reduction (convergence rate). In contrast, A-NC, because of its constant weights, converges fast but presents a different bias-variance tradeoff. For the same number of iterations î·p̂, shorter runs (smaller î) lead to high bias but smaller variance (larger number p̂ of runs to average over). For a static nonrandom network with Gaussian noise, we compute the optimal gain for A-NC to reach, in the shortest number of iterations î·p̂ and with high probability (1 − δ), (ε, δ)-consensus (ε residual bias). Our results hold under fairly general assumptions on the random link failures and communication noise.
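
A small simulation contrasting the two compromises described above on a fixed ring network with additive link noise; the graph, noise level, weight sequences, and run counts are illustrative choices, not the paper's parameter settings.

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 20, 0.1

    def noisy_consensus(x0, weights, iters):
        """Run x_i <- x_i + w_k * sum_j ((x_j + noise) - x_i) over ring neighbors."""
        x = x0.copy()
        for k in range(iters):
            w = weights(k)
            new = x.copy()
            for i in range(n):
                for j in ((i - 1) % n, (i + 1) % n):
                    new[i] += w * ((x[j] + sigma * rng.normal()) - x[i])
            x = new
        return x

    x0 = rng.normal(size=n) + 5.0
    target = x0.mean()

    # A-ND style: persistent, slowly decaying weights -> near-unbiased, low-variance limit.
    and_limit = noisy_consensus(x0, weights=lambda k: 0.3 / (k + 1) ** 0.6, iters=2000)

    # A-NC style: constant weights, short runs, Monte Carlo averaging over restarts.
    runs = [noisy_consensus(x0, weights=lambda k: 0.3, iters=50) for _ in range(40)]
    anc_estimate = np.mean(runs, axis=0)

    print("target average      :", round(target, 3))
    print("A-ND node estimates :", round(and_limit.mean(), 3), "+/-", round(and_limit.std(), 3))
    print("A-NC node estimates :", round(anc_estimate.mean(), 3), "+/-", round(anc_estimate.std(), 3))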

Journal ArticleDOI
TL;DR: Optimism is a significant predictor of positive physical health outcomes, including outcomes and markers assessed by objective measures of physical health.
Abstract: Background: Prior research links optimism to physical health, but the strength of the association has not been systematically evaluated.

Journal ArticleDOI
TL;DR: This article conducted a set of experiments in the U.S. and in India in which subjects worked on different tasks and received performance-contingent payments that varied in amount from small to very large relative to their typical levels of pay.
Abstract: Workers in a wide variety of jobs are paid based on performance, which is commonly seen as enhancing effort and productivity relative to non-contingent pay schemes. However, psychological research suggests that excessive rewards can, in some cases, result in a decline in performance. To test whether very high monetary rewards can decrease performance, we conducted a set of experiments in the U.S. and in India in which subjects worked on different tasks and received performance-contingent payments that varied in amount from small to very large relative to their typical levels of pay. With some important exceptions, very high reward levels had a detrimental effect on performance.

Journal ArticleDOI
TL;DR: An sBB software package named couenne (Convex Over- and Under-ENvelopes for Non-linear Estimation) is developed and used for extensive tests on several combinations of BT and branching techniques on a set of publicly available and real-world MINLP instances and is compared with a state-of-the-art MINLP solver.
Abstract: Many industrial problems can be naturally formulated using mixed integer non-linear programming (MINLP) models and can be solved by spatial Branch&Bound (sBB) techniques. We study the impact of two important parts of sBB methods: bounds tightening (BT) and branching strategies. We extend a branching technique originally developed for MILP, reliability branching, to the MINLP case. Motivated by the demand for open-source solvers for real-world MINLP problems, we have developed an sBB software package named couenne (Convex Over- and Under-ENvelopes for Non-linear Estimation) and used it for extensive tests on several combinations of BT and branching techniques on a set of publicly available and real-world MINLP instances. We also compare the performance of couenne with a state-of-the-art MINLP solver.

Proceedings ArticleDOI
11 Oct 2009
TL;DR: The key contributions of this paper are the principles of the FAWN architecture and the design and implementation of FAWN-KV--a consistent, replicated, highly available, and high-performance key-value storage system built on a FAWN prototype.
Abstract: This paper presents a new cluster architecture for low-power data-intensive computing. FAWN couples low-power embedded CPUs to small amounts of local flash storage, and balances computation and I/O capabilities to enable efficient, massively parallel access to data.The key contributions of this paper are the principles of the FAWN architecture and the design and implementation of FAWN-KV--a consistent, replicated, highly available, and high-performance key-value storage system built on a FAWN prototype. Our design centers around purely log-structured datastores that provide the basis for high performance on flash storage, as well as for replication and consistency obtained using chain replication on a consistent hashing ring. Our evaluation demonstrates that FAWN clusters can handle roughly 350 key-value queries per Joule of energy--two orders of magnitude more than a disk-based system.
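
A toy sketch of two of the ideas named above, consistent hashing to place keys on nodes and chain replication for consistency: writes go to the head of a key's chain and propagate toward the tail, while reads are served by the tail. The node names, hash function, and replication factor are illustrative assumptions; the real FAWN-KV additionally keeps per-node log-structured datastores on flash, which the plain dictionaries below only stand in for.

    import hashlib
    from bisect import bisect_right

    def ring_hash(s):                 # position on the consistent hashing ring
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, nodes, replicas=3):
            self.replicas = replicas
            self.points = sorted((ring_hash(n), n) for n in nodes)

        def chain(self, key):
            """The R successors of the key's ring position form its replica chain."""
            idx = bisect_right(self.points, (ring_hash(key), chr(0x10FFFF)))
            return [self.points[(idx + i) % len(self.points)][1] for i in range(self.replicas)]

    class Cluster:
        def __init__(self, nodes):
            self.ring = Ring(nodes)
            self.stores = {n: {} for n in nodes}        # stand-in for per-node log datastores

        def put(self, key, value):
            for node in self.ring.chain(key):           # head -> ... -> tail, in chain order
                self.stores[node][key] = value

        def get(self, key):
            return self.stores[self.ring.chain(key)[-1]].get(key)   # reads served by the tail

    cluster = Cluster([f"wimpy-node-{i}" for i in range(8)])
    cluster.put("photo:42", b"...")
    print(cluster.get("photo:42"), cluster.ring.chain("photo:42"))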

Book ChapterDOI
TL;DR: The state-of-the-art in engineering self-adaptive systems is explored and the critical challenges the community must address to enable systematic and well-organized engineering of self- Adaptive and self-managing software systems are identified.
Abstract: To deal with the increasing complexity of software systems and uncertainty of their environments, software engineers have turned to self-adaptivity. Self-adaptive systems are capable of dealing with a continuously changing environment and emerging requirements that may be unknown at design-time. However, building such systems cost-effectively and in a predictable manner is a major engineering challenge. In this paper, we explore the state-of-the-art in engineering self-adaptive systems and identify potential improvements in the design process. Our most important finding is that in designing self-adaptive systems, the feedback loops that control self-adaptation must become first-class entities. We explore feedback loops from the perspective of control engineering and within existing self-adaptive systems in nature and biology. Finally, we identify the critical challenges our community must address to enable systematic and well-organized engineering of self-adaptive and self-managing software systems.
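
A minimal sketch of the position that feedback loops should become first-class entities: the loop below makes the monitor-analyze-plan-execute stages explicit objects that can be inspected, composed, and reasoned about, rather than burying adaptation logic inside the managed system. The stage names and the toy "scale out on high latency" policy are illustrative assumptions, not a framework prescribed by the paper.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class FeedbackLoop:
        """An explicit, first-class control loop over a managed system."""
        monitor: Callable[[], dict]              # sense the environment / managed system
        analyze: Callable[[dict], bool]          # decide whether adaptation is needed
        plan: Callable[[dict], List[str]]        # choose adaptation actions
        execute: Callable[[List[str]], None]     # enact them on the managed system
        history: List[dict] = field(default_factory=list)

        def step(self):
            observations = self.monitor()
            self.history.append(observations)    # the loop itself is observable
            if self.analyze(observations):
                self.execute(self.plan(observations))

    # Toy managed system: scale out when observed latency exceeds a threshold.
    system = {"replicas": 1, "latency_ms": 250}
    loop = FeedbackLoop(
        monitor=lambda: dict(system),
        analyze=lambda obs: obs["latency_ms"] > 200,
        plan=lambda obs: ["add_replica"],
        execute=lambda actions: system.update(
            replicas=system["replicas"] + actions.count("add_replica"),
            latency_ms=system["latency_ms"] / 2),
    )
    loop.step()
    print(system, len(loop.history))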