
Showing papers by "Pennsylvania State University" published in 2016


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1  +1008 moreInstitutions (96)
TL;DR: This is the first direct detection of gravitational waves and the first observation of a binary black hole merger, and these observations demonstrate the existence of binary stellar-mass black hole systems.
Abstract: On September 14, 2015 at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory simultaneously observed a transient gravitational-wave signal. The signal sweeps upwards in frequency from 35 to 250 Hz with a peak gravitational-wave strain of $1.0 \times 10^{-21}$. It matches the waveform predicted by general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. The signal was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203 000 years, equivalent to a significance greater than $5.1\sigma$. The source lies at a luminosity distance of $410^{+160}_{-180}$ Mpc corresponding to a redshift $z = 0.09^{+0.03}_{-0.04}$. In the source frame, the initial black hole masses are $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$, and the final black hole mass is $62^{+4}_{-4} M_\odot$, with $3.0^{+0.5}_{-0.5} M_\odot c^2$ radiated in gravitational waves. All uncertainties define 90% credible intervals. These observations demonstrate the existence of binary stellar-mass black hole systems. This is the first direct detection of gravitational waves and the first observation of a binary black hole merger.
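The detection statistic quoted above, the matched-filter signal-to-noise ratio, reduces in white noise to a normalized inner product between the data and a template. A minimal numerical sketch (the toy chirp template, noise model, and sample rate below are invented for illustration; the real LIGO search uses banks of relativistic waveform templates and coloured detector noise):

```python
import numpy as np

# Toy matched filter in white noise (illustrative only).
rng = np.random.default_rng(0)

fs = 4096                              # assumed sample rate in Hz
t = np.arange(0, 0.25, 1 / fs)
# crude upward-sweeping chirp standing in for an inspiral waveform
template = np.sin(2 * np.pi * (35 + 400 * t) * t) * np.exp(-((t - 0.2) / 0.1) ** 2)

sigma = 1.0                            # white-noise standard deviation
norm = np.sqrt(template @ template)
data = 5 * sigma * template / norm + rng.normal(0, sigma, t.size)

# In white noise the matched-filter SNR is <d, h> / (sigma * sqrt(<h, h>)).
snr = (data @ template) / (sigma * norm)
print(f"recovered SNR ~ {snr:.1f}")    # near the injected value of 5
```

With the injected amplitude set to five noise standard deviations, the recovered SNR comes out near 5; a genuine signal recovered at SNR 24 stands correspondingly far above the noise.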

9,596 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy3  +970 moreInstitutions (114)
TL;DR: This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
Abstract: We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) on December 26, 2015 at 03:38:53 UTC. The signal was initially identified within 70 s by an online matched-filter search targeting binary coalescences. Subsequent off-line analyses recovered GW151226 with a network signal-to-noise ratio of 13 and a significance greater than $5\sigma$. The signal persisted in the LIGO frequency band for approximately 1 s, increasing in frequency and amplitude over about 55 cycles from 35 to 450 Hz, and reached a peak gravitational strain of $3.4^{+0.7}_{-0.9} \times 10^{-22}$. The inferred source-frame initial black hole masses are $14.2^{+8.3}_{-3.7} M_\odot$ and $7.5^{+2.3}_{-2.3} M_\odot$, and the final black hole mass is $20.8^{+6.1}_{-1.7} M_\odot$. We find that at least one of the component black holes has spin greater than 0.2. This source is located at a luminosity distance of $440^{+180}_{-190}$ Mpc corresponding to a redshift $0.09^{+0.03}_{-0.04}$. All uncertainties define a 90% credible interval. This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from general relativity.
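A quantity largely determined by the inspiral phase, and often quoted alongside such detections, is the chirp mass $\mathcal{M} = (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$. A quick back-of-the-envelope check from the median masses in the abstract (source frame, uncertainties ignored; not a reanalysis):

```python
# Chirp mass from the median component masses quoted above.
m1, m2 = 14.2, 7.5                                  # solar masses
mchirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
print(f"chirp mass ~ {mchirp:.1f} solar masses")    # roughly 8.9
```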

3,448 citations


Proceedings ArticleDOI
21 Mar 2016
TL;DR: This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Abstract: Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
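The attack family described here works from the network's forward derivative (the Jacobian of outputs with respect to inputs), perturbing only the few most salient features. A heavily simplified sketch on a linear model, where the Jacobian is just the weight matrix (model, data, step size, and feature budget are all invented; the paper's algorithm operates on deep networks and builds a full saliency map):

```python
import numpy as np

# Toy Jacobian-guided misclassification on a linear 3-class model.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 10))            # made-up 3-class linear classifier

def predict(x):
    return int(np.argmax(W @ x))

x = rng.normal(size=10)
source = predict(x)
target = (source + 1) % 3               # some class other than the source

# For a linear model the Jacobian of the logits w.r.t. the input is just W,
# so the most useful direction is the row difference W[target] - W[source]:
saliency = W[target] - W[source]
k = np.argsort(-np.abs(saliency))[:4]   # touch only the 4 most salient features

x_adv = x.copy()
for _ in range(100):                    # small steps until the label flips
    if predict(x_adv) != source:
        break
    x_adv[k] += 0.25 * np.sign(saliency[k])

print(source, "->", predict(x_adv))
```

The loop stops as soon as the prediction leaves the source class, mirroring the paper's observation that modifying a small fraction of input features often suffices.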

3,114 citations


Journal ArticleDOI
TL;DR: In this paper, the electronic and optical properties and the recent progress in applications of 2D semiconductor transition metal dichalcogenides with emphasis on strong excitonic effects, and spin- and valley-dependent properties are reviewed.
Abstract: The electronic and optical properties and the recent progress in applications of 2D semiconductor transition metal dichalcogenides with emphasis on strong excitonic effects, and spin- and valley-dependent properties are reviewed. Recent advances in the development of atomically thin layers of van der Waals bonded solids have opened up new possibilities for the exploration of 2D physics as well as for materials for applications. Among them, semiconductor transition metal dichalcogenides, MX2 (M = Mo, W; X = S, Se), have bandgaps in the near-infrared to the visible region, in contrast to the zero bandgap of graphene. In the monolayer limit, these materials have been shown to possess direct bandgaps, a property well suited for photonics and optoelectronics applications. Here, we review the electronic and optical properties and the recent progress in applications of 2D semiconductor transition metal dichalcogenides with emphasis on strong excitonic effects, and spin- and valley-dependent properties.

2,612 citations


Journal ArticleDOI
TL;DR: In this article, a deep convolutional neural network was used to identify 14 crop species and 26 diseases (or absence thereof) using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions.
Abstract: Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale.
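The workhorse operation of such a network is the 2D convolution, sliding learned filters over the image. A from-scratch sketch of that single building block (the actual study trained full deep architectures on the 54,306-image dataset; the kernel and image here are toys):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((5, 5))
img[:, 2:] = 1.0                             # a step edge, like a lesion boundary
edge = conv2d(img, np.array([[1.0, -1.0]]))  # horizontal-difference kernel
print(edge[0])                               # responds only at the edge column
```

In a trained network the kernels are learned rather than hand-picked, and many such layers are stacked with nonlinearities and pooling.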

2,150 citations


Proceedings ArticleDOI
22 May 2016
TL;DR: In this article, the authors introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs, which increases the average minimum number of features that need to be modified to create adversarial examples by about 800%.
Abstract: Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
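Distillation rests on the temperature-scaled softmax: training the distilled network on soft labels produced at a high temperature T, then deploying at T = 1, smooths the learned function and shrinks the input gradients that adversarial crafting exploits. A minimal sketch of the temperature softmax itself (logits invented; the full defense trains a second network on the high-temperature outputs):

```python
import numpy as np

def softmax_T(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                       # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([8.0, 2.0, 1.0])
hard = softmax_T(logits, T=1)          # sharply peaked at class 0
soft = softmax_T(logits, T=20)         # soft targets for distillation
print(hard.round(3), soft.round(3))
```

At T = 1 the output is nearly one-hot; at T = 20 the soft targets still rank the classes but retain relative-probability information, which is what the distilled network is trained to reproduce.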

2,130 citations


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate a highly reversible zinc/manganese oxide system in which optimal mild aqueous ZnSO4-based solution is used as the electrolyte, and nanofibres of a manganese oxide phase, α-MnO2, are used as a cathode.
Abstract: Rechargeable aqueous batteries such as alkaline zinc/manganese oxide batteries are highly desirable for large-scale energy storage owing to their low cost and high safety; however, cycling stability is a major issue for their applications. Here we demonstrate a highly reversible zinc/manganese oxide system in which optimal mild aqueous ZnSO4-based solution is used as the electrolyte, and nanofibres of a manganese oxide phase, α-MnO2, are used as the cathode. We show that a chemical conversion reaction mechanism between α-MnO2 and H+ is mainly responsible for the good performance of the system. This includes an operating voltage of 1.44 V, a capacity of 285 mAh g⁻¹ (MnO2), and capacity retention of 92% over 5,000 cycles. The Zn metal anode also shows high stability. This finding opens new opportunities for the development of low-cost, high-performance rechargeable aqueous batteries. Rechargeable aqueous batteries are attractive owing to their relatively low cost and safety. Here the authors report an aqueous zinc/manganese oxide battery that operates via a conversion reaction mechanism and exhibits a long-term cycling stability.
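The quoted voltage and capacity give a quick specific-energy figure on a cathode-mass basis only (a back-of-the-envelope number; a practical full cell with anode, electrolyte and packaging delivers substantially less):

```python
# Back-of-the-envelope specific energy from the figures quoted above.
voltage = 1.44               # V, average operating voltage
capacity = 285.0             # mAh per gram of MnO2
energy = voltage * capacity  # mWh/g, numerically equal to Wh/kg
print(f"~{energy:.0f} Wh/kg of MnO2")   # about 410
```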

1,965 citations


Journal ArticleDOI
TL;DR: Galaxy seeks to make data-intensive research more accessible, transparent and reproducible by providing a Web-based environment in which users can perform computational analyses and have all of the details automatically tracked for later inspection, publication, or reuse.
Abstract: High-throughput data production technologies, particularly 'next-generation' DNA sequencing, have ushered in widespread and disruptive changes to biomedical research. Making sense of the large datasets produced by these technologies requires sophisticated statistical and computational methods, as well as substantial computational power. This has led to an acute crisis in life sciences, as researchers without informatics training attempt to perform computation-dependent analyses. Since 2005, the Galaxy project has worked to address this problem by providing a framework that makes advanced computational tools usable by nonexperts. Galaxy seeks to make data-intensive research more accessible, transparent and reproducible by providing a Web-based environment in which users can perform computational analyses and have all of the details automatically tracked for later inspection, publication, or reuse. In this report we highlight recently added features enabling biomedical analyses on a large scale.

1,774 citations


Journal ArticleDOI
TL;DR: The eigenstate thermalization hypothesis (ETH) as discussed by the authors is a natural extension of quantum chaos and random matrix theory (RMT) that allows one to describe thermalization in isolated chaotic systems without invoking the notion of an external bath.
Abstract: This review gives a pedagogical introduction to the eigenstate thermalization hypothesis (ETH), its basis, and its implications to statistical mechanics and thermodynamics. In the first part, ETH is introduced as a natural extension of ideas from quantum chaos and random matrix theory (RMT). To this end, we present a brief overview of classical and quantum chaos, as well as RMT and some of its most important predictions. The latter include the statistics of energy levels, eigenstate components, and matrix elements of observables. Building on these, we introduce the ETH and show that it allows one to describe thermalization in isolated chaotic systems without invoking the notion of an external bath. We examine numerical evidence of eigenstate thermalization from studies of many-body lattice systems. We also introduce the concept of a quench as a means of taking isolated systems out of equilibrium, and discuss results of numerical experiments on quantum quenches. The second part of the review explores the i...
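The ETH ansatz at the heart of the review can be written compactly; in the standard notation, $\bar{E}$ is the mean energy of the pair of eigenstates, $\omega$ their energy difference, $S$ the thermodynamic entropy, $f_O$ a smooth function of its arguments, and $R_{mn}$ a random variable with zero mean and unit variance:

```latex
O_{mn} = O(\bar{E})\,\delta_{mn}
       + e^{-S(\bar{E})/2}\, f_O(\bar{E}, \omega)\, R_{mn},
\qquad \bar{E} \equiv \tfrac{1}{2}(E_m + E_n), \quad \omega \equiv E_n - E_m .
```

The diagonal term reproduces the microcanonical average; the exponentially suppressed off-diagonal term is what allows isolated chaotic systems to thermalize without an external bath.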

1,536 citations


Journal ArticleDOI
31 Mar 2016-Nature
TL;DR: A model coupling ice sheet and climate dynamics—including previously underappreciated processes linking atmospheric warming with hydrofracturing of buttressing ice shelves and structural collapse of marine-terminating ice cliffs—is calibrated against Pliocene and Last Interglacial sea-level estimates and applied to future greenhouse gas emission scenarios.
Abstract: Polar temperatures over the last several million years have, at times, been slightly warmer than today, yet global mean sea level has been 6-9 metres higher as recently as the Last Interglacial (130,000 to 115,000 years ago) and possibly higher during the Pliocene epoch (about three million years ago). In both cases the Antarctic ice sheet has been implicated as the primary contributor, hinting at its future vulnerability. Here we use a model coupling ice sheet and climate dynamics-including previously underappreciated processes linking atmospheric warming with hydrofracturing of buttressing ice shelves and structural collapse of marine-terminating ice cliffs-that is calibrated against Pliocene and Last Interglacial sea-level estimates and applied to future greenhouse gas emission scenarios. Antarctica has the potential to contribute more than a metre of sea-level rise by 2100 and more than 15 metres by 2500, if emissions continue unabated. In this case atmospheric warming will soon become the dominant driver of ice loss, but prolonged ocean warming will delay its recovery for thousands of years.

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy1  +976 moreInstitutions (107)
TL;DR: It is found that the final remnant's mass and spin, as determined from the low-frequency and high-frequency phases of the signal, are mutually consistent with the binary black-hole solution in general relativity.
Abstract: The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant’s mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of $10^{13}$ km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.

Journal ArticleDOI
04 Mar 2016
TL;DR: The reactive force field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties as mentioned in this paper, but it is often too computationally intense for simulations that consider the full dynamic evolution of a system.
Abstract: The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.
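The key idea, the bond-order formalism, makes connectivity a smooth function of geometry rather than a fixed input. A toy bond-order term of this general shape (parameters invented for illustration, not values from a fitted ReaxFF force field):

```python
import numpy as np

# Bond order decays smoothly from ~1 (bonded) to ~0 (broken) with
# interatomic distance, so bonds can form and break during a simulation.
def bond_order(r, r0=1.5, p1=-0.1, p2=6.0):
    return np.exp(p1 * (r / r0) ** p2)

print(bond_order(1.0))   # short distance: bond order close to 1
print(bond_order(3.0))   # long distance: bond order close to 0
```

Energy terms built on such bond orders go to zero continuously as atoms separate, which is what lets reactive events occur without any predefined bond list.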

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy3  +978 moreInstitutions (112)
TL;DR: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers as discussed by the authors.
Abstract: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers. In this paper we present full results from a search for binary black hole merger signals with total masses up to $100 M_\odot$ and detailed implications from our observations of these systems. Our search, based on general-relativistic models of gravitational wave signals from binary black hole systems, unambiguously identified two signals, GW150914 and GW151226, with a significance of greater than $5\sigma$ over the observing period. It also identified a third possible signal, LVT151012, with substantially lower significance, which has an 87% probability of being of astrophysical origin. We provide detailed estimates of the parameters of the observed systems. Both GW150914 and GW151226 provide an unprecedented opportunity to study the two-body motion of a compact-object binary in the large velocity, highly nonlinear regime. We do not observe any deviations from general relativity, and place improved empirical bounds on several high-order post-Newtonian coefficients. From our observations we infer stellar-mass binary black hole merger rates lying in the range $9$–$240$ Gpc$^{-3}$ yr$^{-1}$. These observations are beginning to inform astrophysical predictions of binary black hole formation rates, and indicate that future observing runs of the Advanced detector network will yield many more gravitational wave detections.

Journal ArticleDOI
TL;DR: The paper underlines that there are significant roots, in general and within the CIRP community in particular, which point towards CPPS, and outlines expectations for research on and implementation of CPS and CPPS.
Abstract: One of the most significant advances in the development of computer science, information and communication technologies is represented by the cyber-physical systems (CPS). They are systems of collaborating computational entities which are in intensive connection with the surrounding physical world and its on-going processes, providing and using, at the same time, data-accessing and data-processing services available on the Internet. Cyber-physical production systems (CPPS), relying on the latest, and the foreseeable further developments of computer science, information and communication technologies on one hand, and of manufacturing science and technology, on the other, may lead to the 4th industrial revolution, frequently noted as Industrie 4.0. The paper underlines that there are significant roots in general – and in particular to the CIRP community – which point towards CPPS. Expectations towards research in and implementation of CPS and CPPS are outlined and some case studies are introduced. Related new R&D challenges are highlighted.

Journal ArticleDOI
11 Aug 2016
TL;DR: PCOS can impact women’s reproductive health, leading to anovulatory infertility and higher rate of early pregnancy loss, and the risks of diabetes, cardiovascular disease, hypertension, metabolic syndrome, and endometrial cancer among PCOS patients are significantly increased.
Abstract: Polycystic ovary syndrome (PCOS) is characterized by a constellation of clinical symptoms that include irregular menses due to chronic oligo-ovulation, phenotypic features of hyperandrogenism, and obesity. The term “polycystic ovary” refers to ovarian morphology with increased ovarian stroma and a ring of cortical follicles. Core biochemical features include hyperandrogenism and insulin resistance. The pathogenesis of PCOS remains a topic of debate. Treatment of PCOS typically focuses on mitigating the impact of hyperandrogenism, insulin resistance, and chronic oligo-ovulation and restoring fertility when desired.

Journal ArticleDOI
TL;DR: This paper, presenting a first-time comprehensive review of EBB, discusses the current advancements in EBB technology and highlights future directions to transform the technology to generate viable end products for tissue engineering and regenerative medicine.

Journal ArticleDOI
TL;DR: In this article, the science case of an Electron-Ion Collider (EIC), focused on the structure and interactions of gluon-dominated matter, with the intent to articulate it to the broader nuclear science community, is presented.
Abstract: This White Paper presents the science case of an Electron-Ion Collider (EIC), focused on the structure and interactions of gluon-dominated matter, with the intent to articulate it to the broader nuclear science community. It was commissioned by the managements of Brookhaven National Laboratory (BNL) and Thomas Jefferson National Accelerator Facility (JLab) with the objective of presenting a summary of scientific opportunities and goals of the EIC as a follow-up to the 2007 NSAC Long Range Plan. This document is a culmination of a community-wide effort in nuclear science following a series of workshops on EIC physics over the past decades and, in particular, the focused ten-week program on “Gluons and quark sea at high energies” at the Institute for Nuclear Theory in Fall 2010. It contains a brief description of a few golden physics measurements along with accelerator and detector concepts required to achieve them. It has benefited profoundly from inputs by the user communities of BNL and JLab. This White Paper offers the promise to propel the QCD science program in the US, established with the CEBAF accelerator at JLab and the RHIC collider at BNL, to the next QCD frontier.

Journal ArticleDOI
TL;DR: In this article, the superconducting properties of NbSe2 as it approaches the monolayer limit are investigated by means of magnetotransport measurements, uncovering evidence of spin-momentum locking.
Abstract: The superconducting properties of NbSe2 as it approaches the monolayer limit are investigated by means of magnetotransport measurements, uncovering evidence of spin–momentum locking.

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1  +984 moreInstitutions (116)
TL;DR: The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity.
Abstract: On September 14, 2015, the Laser Interferometer Gravitational-wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterise the properties of the source and its parameters. The data around the time of the event were analysed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of $36^{+5}_{-4} M_\odot$ and $29^{+4}_{-4} M_\odot$ (for each parameter we report the median value and the range of the 90% credible interval). The dimensionless spin magnitude of the more massive black hole is bound to be less than $0.7$ (at 90% probability). The luminosity distance to the source is $410^{+160}_{-180}$ Mpc, corresponding to a redshift $0.09^{+0.03}_{-0.04}$ assuming standard cosmology. The source location is constrained to an annulus section of $590$ deg$^2$, primarily in the southern hemisphere. The binary merges into a black hole of $62^{+4}_{-4} M_\odot$ and spin $0.67^{+0.05}_{-0.07}$. This black hole is significantly more massive than any other known in the stellar-mass regime.

Journal ArticleDOI
TL;DR: In this article, the authors present a special research forum on "Grand Challenges" which are formulations of global problems that can be plausibly addressed through coordinated and collaborative effort through management research.
Abstract: “Grand challenges” are formulations of global problems that can be plausibly addressed through coordinated and collaborative effort. In this Special Research Forum, we showcase management research ...

Journal ArticleDOI
TL;DR: In this article, a detailed discussion of methods for addressing unobserved heterogeneity in highway accident data and analysis is presented, along with their strengths and weaknesses, as well as a summary of the fundamental issues and directions for future methodological work addressing this problem.

Posted Content
TL;DR: In this article, a black-box attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the targeted DNN.
Abstract: Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
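The attack loop can be sketched end to end on a toy problem: query an oracle for labels, fit a local substitute, then craft inputs against the substitute and rely on transferability (the oracle below is a made-up linear rule standing in for the remotely hosted DNNs attacked in the paper; the attacker never sees its internals, only its labels):

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])                 # hidden from the attacker

def oracle(X):
    return (X @ w_true > 0).astype(float)      # label queries only

# 1. Query the oracle on synthetically generated inputs.
X = rng.normal(size=(200, 2))
y = oracle(X)

# 2. Fit a logistic-regression substitute to the observed labels.
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)          # gradient step on log loss

# 3. Craft an adversarial input against the substitute; by transferability
#    it frequently fools the oracle as well.
x = np.array([1.0, 0.5])                       # the oracle labels this 1
x_adv = x - 1.5 * w / np.linalg.norm(w)        # step against substitute gradient
print(oracle(x[None])[0], "->", oracle(x_adv[None])[0])
```

Because the substitute's decision boundary approximates the oracle's, a perturbation computed purely locally carries over, which is the transferability property the paper exploits against hosted APIs.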

Journal ArticleDOI
13 Apr 2016
TL;DR: In this article, structural defects in two-dimensional transition metal dichalcogenides (TMDs) are reviewed, covering their atomic structures, the pathways by which they are generated during and after synthesis, and their effects on physico-chemical properties.
Abstract: Two-dimensional transition metal dichalcogenides (TMDs), an emerging family of layered materials, have provided researchers a fertile ground for harvesting fundamental science and emergent applications. TMDs can contain a number of different structural defects in their crystal lattices which significantly alter their physico-chemical properties. Having structural defects can be either detrimental or beneficial, depending on the targeted application. Therefore, a comprehensive understanding of structural defects is required. Here we review different defects in semiconducting TMDs by summarizing: (i) the dimensionalities and atomic structures of defects; (ii) the pathways to generating structural defects during and after synthesis and, (iii) the effects of having defects on the physico-chemical properties and applications of TMDs. Thus far, significant progress has been made, although we are probably still witnessing the tip of the iceberg. A better understanding and control of defects is important in order to move forward the field of Defect Engineering in TMDs. Finally, we also provide our perspective on the challenges and opportunities in this emerging field.

Journal ArticleDOI
TL;DR: Ease-of-use improvements include helper functions to standardize model parameters and compute their Jacobian-based standard errors, access to model components through standard R $ mechanisms, and improved tab completion from within the R Graphical User Interface.
Abstract: The new software package OpenMx 2.0 for structural equation and other statistical modeling is introduced and its features are described. OpenMx is evolving in a modular direction and now allows a mix-and-match computational approach that separates model expectations from fit functions and optimizers. Major backend architectural improvements include a move to swappable open-source optimizers such as the newly written CSOLNP. Entire new methodologies such as item factor analysis and state space modeling have been implemented. New model expectation functions including support for the expression of models in LISREL syntax and a simplified multigroup expectation function are available. Ease-of-use improvements include helper functions to standardize model parameters and compute their Jacobian-based standard errors, access to model components through standard R $ mechanisms, and improved tab completion from within the R Graphical User Interface.

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Matthew Abernathy1  +961 moreInstitutions (100)
TL;DR: The discovery of GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black-hole systems that inspiral and merge within the age of the Universe as mentioned in this paper.
Abstract: The discovery of the gravitational-wave source GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black-hole systems that inspiral and merge within the age of the Universe. Such black-hole mergers have been predicted in two main types of formation models, involving isolated binaries in galactic fields or dynamical interactions in young and old dense stellar environments. The measured masses robustly demonstrate that relatively "heavy" black holes (≳25M⊙) can form in nature. This discovery implies relatively weak massive-star winds and thus the formation of GW150914 in an environment with metallicity lower than ∼1/2 of the solar value. The rate of binary black-hole mergers inferred from the observation of GW150914 is consistent with the higher end of rate predictions (≳1Gpc−3yr−1) from both types of formation models. The low measured redshift (z∼0.1) of GW150914 and the low inferred metallicity of the stellar progenitor imply either binary black-hole formation in a low-mass galaxy in the local Universe and a prompt merger, or formation at high redshift with a time delay between formation and merger of several Gyr. This discovery motivates further studies of binary-black-hole formation astrophysics. It also has implications for future detections and studies by Advanced LIGO and Advanced Virgo, and gravitational-wave detectors in space.
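As a rough sanity check on the quoted numbers (an illustration added here, not from the paper): at low redshift the Hubble law D ≈ cz/H0 ties the measured z ∼ 0.1 to a distance of a few hundred megaparsecs, consistent with the ~400 Mpc scale reported for GW150914. The value H0 = 70 km/s/Mpc is an assumption of this sketch.

```python
# First-order (low-z) distance estimate from redshift via the Hubble law.
C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s/Mpc (assumed value)

def hubble_distance_mpc(z):
    """D ≈ c z / H0, valid only for small redshift."""
    return C_KM_S * z / H0

print(round(hubble_distance_mpc(0.09)))   # ~385 Mpc
```

At z ∼ 0.1 the relativistic corrections are at the ten-percent level, so this linear estimate is adequate for an order-of-magnitude check but not for the credible intervals quoted in the paper.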

Journal ArticleDOI
TL;DR: The field of second language acquisition (SLA) seeks to understand the processes by which school-aged children, adolescents, and adults learn and use, at any point in life, an additional language, including second, foreign, as discussed by the authors.
Abstract: THE PHENOMENON OF MULTILINGUALISM is as old as humanity, but multilingualism has been catapulted to a new world order in the 21st century. Social relations, knowledge structures, and webs of power are experienced by many people as highly mobile and interconnected—for good and for bad—as a result of broad sociopolitical events and global markets. As a consequence, today’s multilingualism is enmeshed in globalization, technologization, and mobility. Communication and meaning-making are often felt as deterritorialized, that is, lived as something “which does not belong to one locality but which organizes translocal trajectories and wider spaces” (Blommaert, 2010, p. 46), while language use and learning are seen as emergent, dynamic, unpredictable, open-ended, and intersubjectively negotiated. In this context, increasingly numerous and more diverse populations of adults and youth become multilingual and transcultural later in life, either by elective choice or by forced circumstances, or for a mixture of reasons. They must learn to negotiate complex demands and opportunities for varied, emergent competencies across their languages. Understanding such learning requires the integrative consideration of learners’ mental and neurobiological processing, remembering and categorizing patterns, and moment-to-moment use of language in conjunction with a variety of socioemotional, sociocultural, sociopolitical, and ideological factors. The field of second language acquisition (SLA) seeks (a) to understand the processes by which school-aged children, adolescents, and adults learn and use, at any point in life, an additional language, including second, foreign,

Journal ArticleDOI
04 Feb 2016-Nature
TL;DR: Materials, device architectures, integration strategies, and in vivo demonstrations in rats are reported for implantable, multifunctional silicon sensors for the brain, for which all of the constituent materials naturally resorb via hydrolysis and/or metabolic action, eliminating the need for extraction.
Abstract: Many procedures in modern clinical medicine rely on the use of electronic implants in treating conditions that range from acute coronary events to traumatic injury. However, standard permanent electronic hardware acts as a nidus for infection: bacteria form biofilms along percutaneous wires, or seed haematogenously, with the potential to migrate within the body and to provoke immune-mediated pathological tissue reactions. The associated surgical retrieval procedures, meanwhile, subject patients to the distress associated with re-operation and expose them to additional complications. Here, we report materials, device architectures, integration strategies, and in vivo demonstrations in rats of implantable, multifunctional silicon sensors for the brain, for which all of the constituent materials naturally resorb via hydrolysis and/or metabolic action, eliminating the need for extraction. Continuous monitoring of intracranial pressure and temperature illustrates functionality essential to the treatment of traumatic brain injury; the measurement performance of our resorbable devices compares favourably with that of non-resorbable clinical standards. In our experiments, insulated percutaneous wires connect to an externally mounted, miniaturized wireless potentiostat for data transmission. In a separate set-up, we connect a sensor to an implanted (but only partially resorbable) data-communication system, proving the principle that there is no need for any percutaneous wiring. The devices can be adapted to sense fluid flow, motion, pH or thermal characteristics, in formats that are compatible with the body's abdomen and extremities, as well as the deep brain, suggesting that the sensors might meet many needs in clinical medicine.

Journal ArticleDOI
TL;DR: In this article, the authors evaluate the existing literature on school climate and bring to light the strengths, weaknesses, and gaps in the ways researchers have approached the construct of school climate.
Abstract: The construct of school climate has received attention as a way to enhance student achievement and reduce problem behaviors. The purpose of this article is to evaluate the existing literature on school climate and to bring to light the strengths, weaknesses, and gaps in the ways researchers have approached the construct. The central information in this article is organized into five sections. In the first, we describe the theoretical frameworks to support the multidimensionality of school climate and how school climate impacts student outcomes. In the second, we provide a breakdown of the four domains that make up school climate, including academic, community, safety, and institutional environment. In the third, we examine research on the outcomes of school climate. In the fourth, we outline the measurement and analytic methods of the construct of school climate. Finally, we summarize the strengths and limitations of the current work on school climate and make suggestions for future research directions.