
Showing papers by "University of Waterloo" published in 2014


Journal ArticleDOI
TL;DR: The major issues and challenges in microgrid control are discussed, and a review of state-of-the-art control strategies and trends is presented, along with a general overview of the main control principles (e.g., droop control, model predictive control, multi-agent systems).
Abstract: The increasing interest in integrating intermittent renewable energy sources into microgrids presents major challenges from the viewpoints of reliable operation and control. In this paper, the major issues and challenges in microgrid control are discussed, and a review of state-of-the-art control strategies and trends is presented; a general overview of the main control principles (e.g., droop control, model predictive control, multi-agent systems) is also included. The paper classifies microgrid control strategies into three levels: primary, secondary, and tertiary, where primary and secondary levels are associated with the operation of the microgrid itself, and tertiary level pertains to the coordinated operation of the microgrid and the host grid. Each control level is discussed in detail in view of the relevant existing technical literature.
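As background for the primary control level discussed above, droop control is conventionally written as a linear frequency/voltage droop characteristic (a standard textbook form, not a formula quoted from this paper):

$$ f = f^{*} - m_p \left( P - P^{*} \right), \qquad V = V^{*} - n_q \left( Q - Q^{*} \right) $$

where $f^{*}$, $V^{*}$, $P^{*}$, $Q^{*}$ are nominal setpoints and $m_p$, $n_q$ are droop coefficients that let parallel inverters share active and reactive power without explicit communication.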

2,358 citations


Journal ArticleDOI
TL;DR: It is argued that the ability of plant growth-promoting bacteria that produce 1-aminocyclopropane-1-carboxylate (ACC) deaminase to lower plant ethylene levels, often a result of various stresses, is a key component in the efficacious functioning of these bacteria.

1,522 citations


Journal ArticleDOI
TL;DR: This work reports a different strategy based on an inherently polar, high surface area metallic oxide cathode host and shows that it mitigates polysulphide dissolution by forming an excellent interface with Li2S and provides experimental evidence for surface-mediated redox chemistry.
Abstract: In lithium-sulfur batteries, many porous conductive carbon materials are proposed to confine soluble polysulfides, but the efficiency is generally low. Here, the authors use a Magneli phase of titanium oxide as the cathode host and electron conduit, which binds the lithium (poly)sulfides well, leading to excellent battery performance.

1,090 citations


Journal ArticleDOI
TL;DR: A comprehensive review of the most recent progress in synthesis, characterization, fundamental understanding, and performance of graphene and graphene oxide sponges is presented; the technical challenges are discussed, and several future research directions are suggested.
Abstract: This paper gives a comprehensive review about the most recent progress in synthesis, characterization, fundamental understanding, and the performance of graphene and graphene oxide sponges. Practical applications are considered including use in composite materials, as the electrode materials for electrochemical sensors, as absorbers for both gases and liquids, and as electrode materials for devices involved in electrochemical energy storage and conversion. Several advantages of both graphene and graphene oxide sponges such as three dimensional graphene networks, high surface area, high electro/thermo conductivities, high chemical/electrochemical stability, high flexibility and elasticity, and extremely high surface hydrophobicity are emphasized. To facilitate further research and development, the technical challenges are discussed, and several future research directions are also suggested in this paper.

966 citations


Journal ArticleDOI
Ning Lu1, Nan Cheng1, Ning Zhang1, Xuemin Shen1, Jon W. Mark1 
TL;DR: The wireless technologies and potential challenges in providing vehicle-to-x connectivity are discussed, and the state-of-the-art wireless solutions for vehicle-to-sensor, vehicle-to-vehicle, vehicle-to-Internet, and vehicle-to-road infrastructure connectivities are reviewed.
Abstract: Providing various wireless connectivities for vehicles enables the communication between vehicles and their internal and external environments. Such a connected vehicle solution is expected to be the next frontier for automotive revolution and the key to the evolution to next generation intelligent transportation systems (ITSs). Moreover, connected vehicles are also the building blocks of emerging Internet of Vehicles (IoV). Extensive research activities and numerous industrial initiatives have paved the way for the coming era of connected vehicles. In this paper, we focus on wireless technologies and potential challenges to provide vehicle-to-x connectivity. In particular, we discuss the challenges and review the state-of-the-art wireless solutions for vehicle-to-sensor, vehicle-to-vehicle, vehicle-to-Internet, and vehicle-to-road infrastructure connectivities. We also identify future research issues for building connected vehicles.

936 citations


Journal ArticleDOI
TL;DR: In this paper, a natural formulation for a massless colored cubic scalar theory is presented: an integral over the space of n marked points on a sphere whose integrand, in the Yang-Mills case, is the product of two factors, a combination of Parke-Taylor-like terms dressed with U(N) color structures and a Pfaffian; the scalar amplitude is obtained by replacing the Pfaffian with a second color factor.
Abstract: In a recent note we presented a compact formula for the complete tree-level S-matrix of pure Yang-Mills and gravity theories in arbitrary spacetime dimension. In this paper we show that a natural formulation also exists for a massless colored cubic scalar theory. In Yang-Mills, the formula is an integral over the space of n marked points on a sphere and has as integrand two factors. The first factor is a combination of Parke-Taylor-like terms dressed with U(N) color structures while the second is a Pfaffian. The S-matrix of a U(N) × U(N) cubic scalar theory is obtained by simply replacing the Pfaffian with a U(N) version of the previous U(N) factor. Given that gravity amplitudes are obtained by replacing the U(N) factor in Yang-Mills by a second Pfaffian, we are led to a natural color-kinematics correspondence. An expansion of the integrand of the scalar theory leads to sums over trivalent graphs that are directly related to the KLT matrix. Combining this and the Yang-Mills formula we find a connection to the BCJ color-kinematics duality as well as a new proof of the BCJ doubling property that gives rise to gravity amplitudes. We end by considering a special kinematic point where the partial amplitude simply counts the number of color-ordered planar trivalent trees, which equals a Catalan number. The scattering equations simplify dramatically and are equivalent to a special Y-system with solutions related to roots of Chebyshev polynomials. The sum of the integrand over the solutions gives rise to a representation of Catalan numbers in terms of eigenvectors and eigenvalues of the adjacency matrix of an A-type Dynkin diagram.
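The Catalan-number count mentioned at the end is easy to check numerically; the following is an illustrative sketch (the function names are ours, not the paper's):

```python
from math import comb

def catalan(k: int) -> int:
    """k-th Catalan number, C_k = binom(2k, k) / (k + 1)."""
    return comb(2 * k, k) // (k + 1)

def planar_trivalent_trees(n: int) -> int:
    """Number of color-ordered planar trivalent trees with n external legs,
    which is the Catalan number C_{n-2} (e.g., n = 4 gives the 2 familiar
    s- and t-channel graphs)."""
    return catalan(n - 2)

for n in range(4, 9):
    print(n, planar_trivalent_trees(n))  # -> 2, 5, 14, 42, 132
```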

611 citations


Journal ArticleDOI
TL;DR: Early observations suggest that activated carbon adsorption, ion exchange, and high-pressure membrane filtration may be effective in controlling these contaminants; however, branched isomers and the increasingly used shorter-chain PFAS replacement products may complicate the accurate assessment of PFAS behaviour through drinking water treatment processes.

576 citations


Proceedings ArticleDOI
11 Nov 2014
TL;DR: This paper investigates whether mutants are indeed a valid substitute for real faults, i.e., whether a test suite's ability to detect mutants is correlated with its ability to detect real faults that developers have fixed, and shows a statistically significant correlation between mutant detection and real fault detection, independently of code coverage.
Abstract: A good test suite is one that detects real faults. Because the set of faults in a program is usually unknowable, this definition is not useful to practitioners who are creating test suites, nor to researchers who are creating and evaluating tools that generate test suites. In place of real faults, testing research often uses mutants, which are artificial faults -- each one a simple syntactic variation -- that are systematically seeded throughout the program under test. Mutation analysis is appealing because large numbers of mutants can be automatically generated and used to compensate for low quantities or the absence of known real faults. Unfortunately, there is little experimental evidence to support the use of mutants as a replacement for real faults. This paper investigates whether mutants are indeed a valid substitute for real faults, i.e., whether a test suite's ability to detect mutants is correlated with its ability to detect real faults that developers have fixed. Unlike prior studies, these investigations also explicitly consider the conflating effects of code coverage on the mutant detection rate. Our experiments used 357 real faults in 5 open-source applications that comprise a total of 321,000 lines of code. Furthermore, our experiments used both developer-written and automatically-generated test suites. The results show a statistically significant correlation between mutant detection and real fault detection, independently of code coverage. The results also give concrete suggestions on how to improve mutation analysis and reveal some inherent limitations.
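The core bookkeeping behind mutation analysis is simple; the sketch below (our own illustration, with hypothetical names and data) shows how a suite's mutation score is computed from a mutant-kill matrix:

```python
from typing import Dict, Set

def mutation_score(kills: Dict[str, Set[str]], suite: Set[str]) -> float:
    """Fraction of mutants detected by `suite`.

    `kills` maps each mutant id to the set of tests that kill it; a mutant
    counts as detected if at least one of its killing tests is in the suite.
    """
    detected = sum(1 for tests in kills.values() if tests & suite)
    return detected / len(kills)

# Hypothetical kill matrix: m3 is an equivalent (never-killed) mutant.
kills = {"m1": {"t1", "t3"}, "m2": {"t2"}, "m3": set(), "m4": {"t3"}}
print(mutation_score(kills, {"t1", "t2"}))  # 0.5: m1 and m2 detected
```

The study then asks whether this score, computed per suite, correlates with the suite's detection of real, developer-fixed faults once coverage is accounted for.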

566 citations


Journal ArticleDOI
TL;DR: Kawai-Lewellen-Tye (KLT) orthogonality, as discussed by the authors, is the property that "Parke-Taylor" vectors constructed from the solutions to the scattering equations are mutually orthogonal with respect to the KLT bilinear form.
Abstract: Several recent developments point to the fact that rational maps from $n$-punctured spheres to the null cone of $D$-dimensional momentum space provide a natural language for describing the scattering of massless particles in $D$ dimensions. In this paper we identify and study equations relating the kinematic invariants $s_{ab}$ and the puncture locations $\sigma_c$, which we call the scattering equations. We provide an inductive algorithm in the number of particles for their solutions and prove a remarkable property which we call Kawai-Lewellen-Tye (KLT) orthogonality. In a nutshell, KLT orthogonality means that "Parke-Taylor" vectors constructed from the solutions to the scattering equations are mutually orthogonal with respect to the KLT bilinear form. We end with comments on possible connections to gauge theory and gravity amplitudes in any dimension and to the high-energy limit of string theory amplitudes.
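For reference, the scattering equations take the compact form standard in the literature following this work:

$$ \sum_{b \neq a} \frac{s_{ab}}{\sigma_a - \sigma_b} = 0, \qquad a = 1, \dots, n, $$

one equation for each puncture location $\sigma_a$, with $s_{ab} = (k_a + k_b)^2$ the kinematic invariants built from the massless momenta.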

538 citations


Journal ArticleDOI
TL;DR: Using the model predictive control technique, the optimal operation of the microgrid is determined over an extended horizon of evaluation and recourse, which allows a proper dispatch of the energy storage units.
Abstract: This paper presents the mathematical formulation of the microgrid's energy management problem and its implementation in a centralized Energy Management System (EMS) for isolated microgrids. Using the model predictive control technique, the optimal operation of the microgrid is determined using an extended horizon of evaluation and recourse, which allows a proper dispatch of the energy storage units. The energy management problem is decomposed into Unit Commitment (UC) and Optimal Power Flow (OPF) problems in order to avoid a mixed-integer non-linear formulation. The microgrid is modeled as a three-phase unbalanced system with presence of both dispatchable and non-dispatchable distributed generation. The proposed EMS is tested in an isolated microgrid based on a CIGRE medium-voltage benchmark system. Results justify the need for detailed three-phase models of the microgrid in order to properly account for voltage limits and procure reactive power support.
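Schematically, a receding-horizon EMS of this kind solves, at each step k, a dispatch problem of the following generic form (our simplified sketch, not the paper's exact three-phase formulation):

$$ \min \sum_{t=k}^{k+N-1} \sum_{i} C_i(P_{i,t}) \quad \text{s.t.} \quad \sum_{i} P_{i,t} + P^{dis}_{t} - P^{ch}_{t} = D_t, \qquad E_{t+1} = E_t + \eta_{ch} P^{ch}_{t} \, \Delta t - \frac{P^{dis}_{t} \, \Delta t}{\eta_{dis}}, $$

where $C_i$ is the generation cost of unit i, $D_t$ the demand, and $E_t$ the storage state of charge; only the first-step decisions are applied before the horizon rolls forward, which is what allows the storage units to be dispatched with recourse.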

537 citations


Journal ArticleDOI
19 Jun 2014, Nature
TL;DR: In this paper, an equivalence is proved between the onset of quantum contextuality and the possibility of universal quantum computation via "magic state" distillation, identifying contextuality as a critical resource for quantum computation.
Abstract: Quantum computers promise dramatic advantages over their classical counterparts, but the source of the power in quantum computing has remained elusive. Here we prove a remarkable equivalence between the onset of contextuality and the possibility of universal quantum computation via ‘magic state’ distillation, which is the leading model for experimentally realizing a fault-tolerant quantum computer. This is a conceptually satisfying link, because contextuality, which precludes a simple ‘hidden variable’ model of quantum mechanics, provides one of the fundamental characterizations of uniquely quantum phenomena. Furthermore, this connection suggests a unifying paradigm for the resources of quantum information: the non-locality of quantum theory is a particular kind of contextuality, and non-locality is already known to be a critical resource for achieving advantages with quantum communication. In addition to clarifying these fundamental issues, this work advances the resource framework for quantum computation, which has a number of practical applications, such as characterizing the efficiency and trade-offs between distinct theoretical and experimental schemes for achieving robust quantum computation, and putting bounds on the overhead cost for the classical simulation of quantum algorithms. Quantum computing promises advantages over classical computing for certain problems; now ‘quantum contextuality’ — a generalization of the concept of quantum non-locality — is shown to be a critical resource that gives the most promising class of quantum computers their power.

Journal ArticleDOI
TL;DR: Resilience management goes beyond risk management to address the complexities of large integrated systems and the uncertainty of future threats, especially those associated with climate change, as discussed in this paper.
Abstract: Resilience management goes beyond risk management to address the complexities of large integrated systems and the uncertainty of future threats, especially those associated with climate change.

Journal ArticleDOI
TL;DR: This chapter reviews the factors that affect the production of this phytohormone, the role of IAA in bacterial physiology and in plant–microbe interactions including phytostimulation and phytopathogenesis.
Abstract: Indole-3-acetic acid (IAA) is an important phytohormone with the capacity to control plant development in both beneficial and deleterious ways. The ability to synthesize IAA is an attribute that many bacteria, including both plant growth-promoters and phytopathogens, possess. There are three main pathways through which IAA is synthesized: the indole-3-pyruvic acid, indole-3-acetamide and indole-3-acetonitrile pathways. This chapter reviews the factors that affect the production of this phytohormone, the role of IAA in bacterial physiology and in plant–microbe interactions including phytostimulation and phytopathogenesis.

Journal ArticleDOI
27 Feb 2014
TL;DR: In this paper, the authors highlight some of the underlying issues in the modeling of risk-weighted assets (RWAs), and frame this discussion in the context of two recent regulatory documents referred to as Basel 3.5.
Abstract: Recent crises in the financial industry have shown weaknesses in the modeling of Risk-Weighted Assets (RWAs). Relatively minor model changes may lead to substantial changes in the RWA numbers. Similar problems are encountered in the Value-at-Risk (VaR) aggregation of risks. In this article, we highlight some of the underlying issues, both methodologically and through examples. In particular, we frame this discussion in the context of two recent regulatory documents we refer to as Basel 3.5.
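For readers outside risk management, Value-at-Risk at confidence level $\alpha$ is the standard quantile-based measure

$$ \mathrm{VaR}_{\alpha}(L) = \inf\{\, \ell \in \mathbb{R} : P(L \le \ell) \ge \alpha \,\} $$

for a loss random variable $L$. Because VaR is not subadditive in general, summing the VaRs of individual risks can either overstate or understate the VaR of the aggregate position, which is the root of the aggregation issues the article examines.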

Book ChapterDOI
TL;DR: In this article, the authors developed and applied molecular tools to determine the activity and role of microorganisms in sulfide-mineral-bearing systems and developed tools for assessing the toxicity of mine-waste effluent.
Abstract: Mining and mineral processing generates large volumes of waste, including waste rock, mill tailings, and mineral refinery wastes. The oxidation of sulfide minerals in the materials can result in the release of acidic water containing high concentrations of dissolved metals. Recent studies have determined the mechanisms of abiotic sulfide-mineral oxidation. Within mine wastes, the oxidation of sulfide minerals is catalyzed by microorganisms. Molecular tools have been developed and applied to determine the activity and role of these organisms in sulfide-mineral-bearing systems. Novel tools have been developed for assessing the toxicity of mine-waste effluent. Dissolved constituents released by sulfide oxidation may be attenuated through the precipitation of secondary minerals, including metal sulfate, oxyhydroxide, and basic sulfate minerals. Geochemical models have been developed to provide improved predictions of the magnitude and duration of environmental concerns. Novel techniques have been developed to prevent and remediate environmental problems associated with these materials.

Journal ArticleDOI
TL;DR: A detailed account of one systematic review team's experience in searching for grey literature and including it throughout the review is provided, along with the strengths and limitations of the approach.
Abstract: There is ongoing interest in including grey literature in systematic reviews. Including grey literature can broaden the scope to more relevant studies, thereby providing a more complete view of available evidence. Searching for grey literature can be challenging despite greater access through the Internet, search engines and online bibliographic databases. There are a number of publications that list sources for finding grey literature in systematic reviews. However, there is scant information about how searches for grey literature are executed and how it is included in the review process. This level of detail is important to ensure that reviews follow explicit methodology to be systematic, transparent and reproducible. The purpose of this paper is to provide a detailed account of one systematic review team's experience in searching for grey literature and including it throughout the review. We provide a brief overview of grey literature before describing our search and review approach. We also discuss the benefits and challenges of including grey literature in our systematic review, as well as the strengths and limitations to our approach. Detailed information about incorporating grey literature in reviews is important in advancing methodology as review teams adapt and build upon the approaches described.

Journal ArticleDOI
TL;DR: In this article, the authors discuss recent experiments that directly measure mobility at or near the surface of glassy polymers and indicate that enhanced mobility near the free surface can exceed bulk mobility by several orders of magnitude and extend for several nanometers into the bulk polymer.
Abstract: The past 20 years have seen a substantial effort to understand dynamics and the glass transition in thin polymer films. In this Perspective, we consider developments in this field and offer a consistent interpretation of some major findings. We discuss recent experiments that directly measure mobility at or near the surface of glassy polymers. These experiments indicate that enhanced mobility near the free surface can exceed bulk mobility by several orders of magnitude and extend for several nanometers into the bulk polymer. Enhanced mobility near the free surface allows a qualitative understanding of many of the observations of a reduced glass transition temperature Tg in thin films. For thin films, knowledge of Tg by itself is less useful than for bulk materials. Because of this, new experimental methods that directly measure important material properties are being developed.

Journal ArticleDOI
06 May 2014, Analyst
TL;DR: This Critical Review aims to summarize progress that might enable practical applications of aptamers for biological samples; detection methods that are more likely to work in a complex sample matrix are discussed in more detail.
Abstract: Aptamers are single-stranded nucleic acids that selectively bind to target molecules. Most aptamers are obtained through a combinatorial biology technique called SELEX. Since aptamers can be isolated to bind to almost any molecule of choice, can be readily modified at arbitrary positions, and possess predictable secondary structures, this platform technology shows great promise in biosensor development. Over the past two decades, more than one thousand papers have been published on aptamer-based biosensors. Given this progress, the application of aptamer technology in biomedical diagnosis is still at a quite preliminary stage. Most previous work involves only a few model aptamers to demonstrate the sensing concept with limited biomedical impact. This Critical Review aims to summarize progress that might enable practical applications of aptamers for biological samples. First, general sensing strategies based on the unique properties of aptamers are summarized. Each strategy can be coupled to various signaling methods. Among these, a few detection methods including fluorescence lifetime, flow cytometry, upconverting nanoparticles, nanoflare technology, magnetic resonance imaging, electronic aptamer-based sensors, and lateral flow devices have been discussed in more detail since they are more likely to work in a complex sample matrix. The current limitations of this field include the lack of high quality aptamers for clinically important targets. In addition, the aptamer technology has to be extensively tested in a clinical sample matrix to establish reliability and accuracy. Future directions are also speculated to overcome these challenges.

Journal ArticleDOI
TL;DR: It is concluded that this enzyme is directly responsible for the different behavior of tomato plants in response to salt stress and has the potential to facilitate plant growth on land that is not normally suitable for the majority of crops due to its high salt content.

Journal ArticleDOI
TL;DR: The present development of blind HU seems to be converging to a point where the lines between remote sensing-originated ideas and advanced SP and optimization concepts are no longer clear, and insights from both sides would be used to establish better methods.
Abstract: Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing (SP) for hyperspectral remote sensing [1], [2]. Blind HU aims at identifying materials present in a captured scene, as well as their compositions, by using high spectral resolution of hyperspectral images. It is a blind source separation (BSS) problem from a SP viewpoint. Research on this topic started in the 1990s in geoscience and remote sensing [3]-[7], enabled by technological advances in hyperspectral sensing at the time. In recent years, blind HU has attracted much interest from other fields such as SP, machine learning, and optimization, and the subsequent cross-disciplinary research activities have made blind HU a vibrant topic. The resulting impact is not just on remote sensing - blind HU has provided a unique problem scenario that inspired researchers from different fields to devise novel blind SP methods. In fact, one may say that blind HU has established a new branch of BSS approaches not seen in classical BSS studies. In particular, the convex geometry concepts - discovered by early remote sensing researchers through empirical observations [3]-[7] and refined by later research - are elegant and very different from statistical independence-based BSS approaches established in the SP field. Moreover, the latest research on blind HU is rapidly adopting advanced techniques, such as those in sparse SP and optimization. The present development of blind HU seems to be converging to a point where the lines between remote sensing-originated ideas and advanced SP and optimization concepts are no longer clear, and insights from both sides would be used to establish better methods.
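The problem setup underlying most of this literature is the linear mixing model, summarized here for context (standard in hyperspectral unmixing, not quoted from the paper):

$$ \mathbf{y}_n = \mathbf{A}\,\mathbf{s}_n + \mathbf{v}_n, \qquad \mathbf{s}_n \ge \mathbf{0}, \quad \mathbf{1}^{\top}\mathbf{s}_n = 1, $$

where $\mathbf{y}_n$ is the measured spectrum at pixel n, the columns of $\mathbf{A}$ are the unknown endmember signatures, $\mathbf{s}_n$ are the unknown abundances, and $\mathbf{v}_n$ is noise. Blind HU estimates $\mathbf{A}$ and the abundances jointly; the simplex geometry induced by the nonnegativity and sum-to-one constraints is exactly what the convex geometry approaches mentioned above exploit.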

Journal ArticleDOI
TL;DR: It is proved that constellations can be aligned in a similar fashion to vectors in multiple antenna systems and that space can be broken up into fractional dimensions.
Abstract: In this paper, we develop the machinery of real interference alignment. This machinery is extremely powerful in achieving the sum degrees of freedom (DoF) of single antenna systems. The scheme of real interference alignment is based on designing single-layer and multilayer constellations used for modulating information messages at the transmitters. We show that constellations can be aligned in a similar fashion as that of vectors in multiple antenna systems and space can be broken up into fractional dimensions. The performance analysis of the signaling scheme makes use of a recent result in the field of Diophantine approximation, which states that the convergence part of the Khintchine-Groshev theorem holds for points on nondegenerate manifolds. Using real interference alignment, we obtain the sum DoF of two model channels, namely the Gaussian interference channel (IC) and the X channel. It is proved that the sum DoF of the K-user IC is K/2 for almost all channel parameters. We also prove that the sum DoF of the X channel with K transmitters and M receivers is KM/(K + M - 1) for almost all channel parameters.
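In display form, the two headline results read

$$ \mathrm{DoF}_{\Sigma}^{\mathrm{IC}} = \frac{K}{2}, \qquad \mathrm{DoF}_{\Sigma}^{\mathrm{X}} = \frac{KM}{K + M - 1}, $$

each holding for almost all channel parameters. For example, a 3-user interference channel achieves 3/2 total DoF, strictly more than the single DoF obtainable by orthogonalizing users in time or frequency.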

Journal ArticleDOI
TL;DR: This consensus statement represents a set of recommendations developed following the 1st and 2nd International Symposia on the Female Athlete Triad and is intended to provide clinical guidelines for physicians, athletic trainers and other healthcare providers for the screening, diagnosis and treatment of the Female athlete Triad.
Abstract: The Female Athlete Triad is a medical condition often observed in physically active girls and women, and involves three components: (1) low energy availability with or without disordered eating, (2) menstrual dysfunction and (3) low bone mineral density. Female athletes often present with one or more of the three Triad components, and an early intervention is essential to prevent its progression to serious endpoints that include clinical eating disorders, amenorrhoea and osteoporosis. This consensus statement represents a set of recommendations developed following the 1st (San Francisco, California, USA) and 2nd (Indianapolis, Indiana, USA) International Symposia on the Female Athlete Triad. It is intended to provide clinical guidelines for physicians, athletic trainers and other healthcare providers for the screening, diagnosis and treatment of the Female Athlete Triad and to provide clear recommendations for return to play. The 2014 Female Athlete Triad Coalition Consensus Statement on Treatment and Return to Play of the Female Athlete Triad expert panel has proposed a risk stratification point system that takes into account magnitude of risk to assist the physician in decision-making regarding sport participation, clearance and return to play. Guidelines are offered for clearance categories, management by a multidisciplinary team and implementation of treatment contracts. This consensus paper has been endorsed by the Female Athlete Triad Coalition, an International Consortium of leading Triad researchers, physicians and other healthcare professionals, the American College of Sports Medicine and the American Medical Society for Sports Medicine.

Journal ArticleDOI
TL;DR: Nengo 2.0 is described, which is implemented in Python and uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.
Abstract: Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world’s largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4’s ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.
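To give a flavor of the "simple and extendable syntax", a minimal model looks roughly like the following (a sketch based on the public Nengo 2.0 Python API, not code from the paper):

```python
import numpy as np
import nengo

# A two-population "communication channel": a sine input is represented
# by ensemble `a` and relayed to ensemble `b`.
model = nengo.Network(label="communication channel")
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    nengo.Connection(a, b)
    probe = nengo.Probe(b, synapse=0.01)  # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)  # simulate one second
print(sim.data[probe].shape)  # (timesteps, 1)
```

The `with model:` block and plain Python objects are the extendable syntax the paper highlights; the same model description can be handed to different simulator backends.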

Journal ArticleDOI
TL;DR: In this article, the authors present griz P1 light curves of 146 spectroscopically confirmed Type Ia supernovae (SNe Ia; 0.03 < z < 0.65) discovered during the first 1.5 yr of the Pan-STARRS1 Medium Deep Survey.
Abstract: We present griz P1 light curves of 146 spectroscopically confirmed Type Ia supernovae (SNe Ia; 0.03 < z < 0.65) discovered during the first 1.5 yr of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the Hubble Space Telescope Calspec definition of the AB system. A Hubble diagram is constructed with a subset of 113 out of 146 SNe Ia that pass our light curve quality cuts. The cosmological fit to 310 SNe Ia (113 PS1 SNe Ia + 222 light curves from 197 low-z SNe Ia), using only supernovae (SNe) and assuming a constant dark energy equation of state and flatness, yields $w=-1.120^{+0.360}_{-0.206}\hbox{(Stat)} ^{+0.269}_{-0.291}\hbox{(Sys)}$. When combined with BAO+CMB(Planck)+H 0, the analysis yields $\Omega _{\rm M}=0.280^{+0.013}_{-0.012}$ and $w=-1.166^{+0.072}_{-0.069}$ including all identified systematics. The value of w is inconsistent with the cosmological constant value of –1 at the 2.3σ level. Tension endures after removing either the baryon acoustic oscillation (BAO) or the H 0 constraint, though it is strongest when including the H 0 constraint. If we include WMAP9 cosmic microwave background (CMB) constraints instead of those from Planck, we find $w=-1.124^{+0.083}_{-0.065}$, which diminishes the discord to <2σ. We cannot conclude whether the tension with flat ΛCDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 SN sample with ~three times as many SNe should provide more conclusive results.

Journal ArticleDOI
TL;DR: In this paper, a landscape management framework that incorporates all systems, across the spectrum of degrees of alteration, provides a fuller set of options for how and when to intervene, uses limited resources more effectively, and increases the chances of achieving management goals.
Abstract: The reality confronting ecosystem managers today is one of heterogeneous, rapidly transforming landscapes, particularly in the areas more affected by urban and agricultural development. A landscape management framework that incorporates all systems, across the spectrum of degrees of alteration, provides a fuller set of options for how and when to intervene, uses limited resources more effectively, and increases the chances of achieving management goals. That many ecosystems have departed so substantially from their historical trajectory that they defy conventional restoration is not in dispute. Acknowledging novel ecosystems need not constitute a threat to existing policy and management approaches. Rather, the development of an integrated approach to management interventions can provide options that are in tune with the current reality of rapid ecosystem change.

Journal ArticleDOI
TL;DR: This study identifies supply‐driven feedforward activation of ribosomal protein synthesis as the key regulatory motif maximizing amino acid flux, and autonomously guiding a cell to achieve optimal growth in different environments, with implications for endogenous and synthetic design of microorganisms.
Abstract: Bacteria must constantly adapt their growth to changes in nutrient availability; yet despite large-scale changes in protein expression associated with sensing, adaptation, and processing different environmental nutrients, simple growth laws connect the ribosome abundance and the growth rate. Here, we investigate the origin of these growth laws by analyzing the features of ribosomal regulation that coordinate proteome-wide expression changes with cell growth in a variety of nutrient conditions in the model organism Escherichia coli. We identify supply-driven feedforward activation of ribosomal protein synthesis as the key regulatory motif maximizing amino acid flux, and autonomously guiding a cell to achieve optimal growth in different environments. The growth laws emerge naturally from the robust regulatory strategy underlying growth rate control, irrespective of the details of the molecular implementation. The study highlights the interplay between phenomenological modeling and molecular mechanisms in uncovering fundamental operating constraints, with implications for endogenous and synthetic design of microorganisms.
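In their best-known phenomenological form (the linear relation established by earlier work that this study builds on; the notation here is ours), the growth laws read

$$ \lambda = \kappa_t \left( \phi_R - \phi_{R,0} \right), $$

linearly relating the growth rate $\lambda$ to the ribosomal proteome fraction $\phi_R$, with $\kappa_t$ the translational capacity and $\phi_{R,0}$ a growth-independent offset. The paper's claim is that this linear relation emerges from supply-driven feedforward regulation of ribosome synthesis rather than from any particular molecular implementation.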

Journal ArticleDOI
TL;DR: In this article, two brief interventions designed to mitigate the effects of a “chilly climate” women may experience in engineering, especially in male-dominated fields, were tested.
Abstract: In a randomized-controlled trial, we tested 2 brief interventions designed to mitigate the effects of a “chilly climate” women may experience in engineering, especially in male-dominated fields. Participants were students entering a selective university engineering program. The social-belonging intervention aimed to protect students’ sense of belonging in engineering by providing a nonthreatening narrative with which to interpret instances of adversity. The affirmation-training intervention aimed to help students manage stress that can arise from social marginalization by incorporating diverse aspects of their self-identity in their daily academic lives. As expected, gender differences and intervention effects were concentrated in male-dominated majors (20% women). In these majors, compared with control conditions, both interventions raised women’s school-reported engineering grade-point-average (GPA) over the full academic year, eliminating gender differences. Both also led women to view daily adversities as more manageable and improved women’s academic attitudes. However, the 2 interventions had divergent effects on women’s social experiences. The social-belonging intervention helped women integrate into engineering, for instance, increasing friendships with male engineers. Affirmation-training helped women develop external resources, deepening their identification with their gender group. The results highlight how social marginalization contributes to gender inequality in quantitative fields and 2 potential remedies.

Proceedings ArticleDOI
31 May 2014
TL;DR: It is found that there is a low to moderate correlation between coverage and effectiveness when the number of test cases in the suite is controlled for, and that stronger forms of coverage do not provide greater insight into the effectiveness of the suite.
Abstract: The coverage of a test suite is often used as a proxy for its ability to detect faults. However, previous studies that investigated the correlation between code coverage and test suite effectiveness have failed to reach a consensus about the nature and strength of the relationship between these test suite characteristics. Moreover, many of the studies were done with small or synthetic programs, making it unclear whether their results generalize to larger programs, and some of the studies did not account for the confounding influence of test suite size. In addition, most of the studies were done with adequate suites, which are rare in practice, so the results may not generalize to typical test suites. We have extended these studies by evaluating the relationship between test suite size, coverage, and effectiveness for large Java programs. Our study is the largest to date in the literature: we generated 31,000 test suites for five systems consisting of up to 724,000 lines of source code. We measured the statement coverage, decision coverage, and modified condition coverage of these suites and used mutation testing to evaluate their fault detection effectiveness. We found that there is a low to moderate correlation between coverage and effectiveness when the number of test cases in the suite is controlled for. In addition, we found that stronger forms of coverage do not provide greater insight into the effectiveness of the suite. Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness.
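The study's key methodological move is to control for suite size; one simple way to do that, shown in this illustrative sketch with hypothetical data, is to correlate coverage and effectiveness only among suites of the same size:

```python
# Requires Python 3.10+ for statistics.correlation (Pearson's r).
from collections import defaultdict
from statistics import correlation

suites = [
    # (num_tests, statement_coverage, fraction_of_mutants_killed)
    (10, 0.41, 0.30), (10, 0.48, 0.33), (10, 0.52, 0.36), (10, 0.60, 0.35),
    (50, 0.70, 0.58), (50, 0.74, 0.60), (50, 0.79, 0.66), (50, 0.83, 0.64),
]

# Group suites by size so that size cannot drive the correlation.
by_size = defaultdict(list)
for n, cov, eff in suites:
    by_size[n].append((cov, eff))

for n, pairs in sorted(by_size.items()):
    covs, effs = zip(*pairs)
    print(f"size={n}: r = {correlation(covs, effs):.2f}")
```

Without this grouping, larger suites have both higher coverage and higher effectiveness, inflating the apparent coverage-effectiveness correlation.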

Journal ArticleDOI
TL;DR: One-class classification (OCC) algorithms aim to build classification models when the negative class is either absent, poorly sampled or not well defined, as discussed by the authors; this unique situation constrains the learning of efficient classifiers by defining the class boundary with knowledge of the positive class only.
Abstract: One-class classification (OCC) algorithms aim to build classification models when the negative class is either absent, poorly sampled or not well defined. This unique situation constrains the learning of efficient classifiers by defining the class boundary with knowledge of the positive class only. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper, we present a unified view of the general problem of OCC by presenting a taxonomy of study for OCC problems, which is based on the availability of training data, the algorithms used and the application domains. We further delve into each of the categories of the proposed taxonomy and present a comprehensive literature review of the OCC algorithms, techniques and methodologies with a focus on their significance, limitations and applications. We conclude our paper by discussing some open research problems in the field of OCC and present our vision for future research.
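A common OCC baseline of the kind surveyed here is the one-class SVM; the sketch below (our illustration, assuming scikit-learn is available, with synthetic data) fits on positive-class samples only and then flags outliers:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # positive class only
X_test = np.vstack([rng.normal(0, 1, (5, 2)),            # in-distribution
                    rng.normal(6, 1, (5, 2))])           # novel / outliers

# nu upper-bounds the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)
print(clf.predict(X_test))  # +1 = inlier (positive class), -1 = outlier
```

This illustrates the defining constraint of OCC named above: the decision boundary is learned from the positive class alone, with no negative examples.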

Journal ArticleDOI
TL;DR: The structural and mechanical organization reported here provides cells with a mechanism to close the wound by cooperatively compressing the underlying substrate through focal adhesions.
Abstract: A fundamental feature of multicellular organisms is their ability to self-repair wounds through the movement of epithelial cells into the damaged area. This collective cellular movement is commonly attributed to a combination of cell crawling and "purse-string" contraction of a supracellular actomyosin ring. Here we show by direct experimental measurement that these two mechanisms are insufficient to explain force patterns observed during wound closure. At early stages of the process, leading actin protrusions generate traction forces that point away from the wound, showing that wound closure is initially driven by cell crawling. At later stages, we observed unanticipated patterns of traction forces pointing towards the wound. Such patterns have strong force components that are both radial and tangential to the wound. We show that these force components arise from tensions transmitted by a heterogeneous actomyosin ring to the underlying substrate through focal adhesions. The structural and mechanical organization reported here provides cells with a mechanism to close the wound by cooperatively compressing the underlying substrate.