
Showing papers by "University of Notre Dame" published in 2018


Journal ArticleDOI
Clotilde Théry1, Kenneth W. Witwer2, Elena Aikawa3, María José Alcaraz4 +414 more authors (209 institutions)
TL;DR: The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities, and a checklist is provided with summaries of key points.
Abstract: The last decade has seen a sharp increase in the number of scientific publications describing physiological and pathological functions of extracellular vesicles (EVs), a collective term covering various subtypes of cell-released, membranous structures, called exosomes, microvesicles, microparticles, ectosomes, oncosomes, apoptotic bodies, and many other names. However, specific issues arise when working with these entities, whose size and amount often make them difficult to obtain as relatively pure preparations, and to characterize properly. The International Society for Extracellular Vesicles (ISEV) proposed Minimal Information for Studies of Extracellular Vesicles (“MISEV”) guidelines for the field in 2014. We now update these “MISEV2014” guidelines based on evolution of the collective knowledge in the last four years. An important point to consider is that ascribing a specific function to EVs in general, or to subtypes of EVs, requires reporting of specific information beyond mere description of function in a crude, potentially contaminated, and heterogeneous preparation. For example, claims that exosomes are endowed with exquisite and specific activities remain difficult to support experimentally, given our still limited knowledge of their specific molecular machineries of biogenesis and release, as compared with other biophysically similar EVs. The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities. Finally, a checklist is provided with summaries of key points.

5,988 citations


Journal ArticleDOI
TL;DR: Scolnic et al., as discussed by the authors, presented optical light curves, redshifts, and classifications for 365 spectroscopically confirmed Type Ia supernovae (SNe Ia) discovered by the Pan-STARRS1 (PS1) Medium Deep Survey.
Abstract: Author(s): Scolnic, DM; Jones, DO; Rest, A; Pan, YC; Chornock, R; Foley, RJ; Huber, ME; Kessler, R; Narayan, G; Riess, AG; Rodney, S; Berger, E; Brout, DJ; Challis, PJ; Drout, M; Finkbeiner, D; Lunnan, R; Kirshner, RP; Sanders, NE; Schlafly, E; Smartt, S; Stubbs, CW; Tonry, J; Wood-Vasey, WM; Foley, M; Hand, J; Johnson, E; Burgett, WS; Chambers, KC; Draper, PW; Hodapp, KW; Kaiser, N; Kudritzki, RP; Magnier, EA; Metcalfe, N; Bresolin, F; Gall, E; Kotak, R; McCrum, M; Smith, KW | Abstract: We present optical light curves, redshifts, and classifications for 365 spectroscopically confirmed Type Ia supernovae (SNe Ia) discovered by the Pan-STARRS1 (PS1) Medium Deep Survey. We detail improvements to the PS1 SN photometry, astrometry, and calibration that reduce the systematic uncertainties in the PS1 SN Ia distances. We combine the subset of 279 PS1 SNe Ia (0.03 < z < 0.68) with useful distance estimates of SNe Ia from the Sloan Digital Sky Survey (SDSS), SNLS, and various low-z and Hubble Space Telescope samples to form the largest combined sample of SNe Ia, consisting of a total of 1048 SNe Ia in the range of 0.01 < z < 2.3, which we call the Pantheon Sample. When combining Planck 2015 cosmic microwave background (CMB) measurements with the Pantheon SN sample, we find Ωm = 0.307 ± 0.012 and w = -1.026 ± 0.041 for the wCDM model. When the SN and CMB constraints are combined with constraints from BAO and local H0 measurements, the analysis yields the most precise measurement of dark energy to date: w0 = -1.007 ± 0.089 and wa = -0.222 ± 0.407 for the w0waCDM model. Tension with a cosmological constant previously seen in an analysis of PS1 and low-z SNe has diminished after an increase of 2× in the statistics of the PS1 sample, improved calibration and photometry, and stricter light-curve quality cuts. We find that the systematic uncertainties in our measurements of dark energy are almost as large as the statistical uncertainties, primarily due to limitations of modeling the low-redshift sample. This must be addressed for future progress in using SNe Ia to measure dark energy.
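For reference, and not taken from the paper itself: in the w0waCDM model quoted above, the dark-energy equation of state follows the standard Chevallier-Polarski-Linder parameterization,

\[ w(a) \;=\; w_0 + w_a\,(1 - a), \qquad a = \frac{1}{1+z}, \]

while the wCDM model corresponds to a constant w (i.e., w_a = 0) and a cosmological constant to w = -1.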

2,025 citations


Journal ArticleDOI
Daniel J. Benjamin1, James O. Berger2, Magnus Johannesson1, Magnus Johannesson3, Brian A. Nosek4, Brian A. Nosek5, Eric-Jan Wagenmakers6, Richard A. Berk7, Kenneth A. Bollen8, Björn Brembs9, Lawrence D. Brown7, Colin F. Camerer10, David Cesarini11, David Cesarini12, Christopher D. Chambers13, Merlise A. Clyde2, Thomas D. Cook14, Thomas D. Cook15, Paul De Boeck16, Zoltan Dienes17, Anna Dreber3, Kenny Easwaran18, Charles Efferson19, Ernst Fehr20, Fiona Fidler21, Andy P. Field17, Malcolm R. Forster22, Edward I. George7, Richard Gonzalez23, Steven N. Goodman24, Edwin J. Green25, Donald P. Green26, Anthony G. Greenwald27, Jarrod D. Hadfield28, Larry V. Hedges14, Leonhard Held20, Teck-Hua Ho29, Herbert Hoijtink30, Daniel J. Hruschka31, Kosuke Imai32, Guido W. Imbens24, John P. A. Ioannidis24, Minjeong Jeon33, James Holland Jones34, Michael Kirchler35, David Laibson36, John A. List37, Roderick J. A. Little23, Arthur Lupia23, Edouard Machery38, Scott E. Maxwell39, Michael A. McCarthy21, Don A. Moore40, Stephen L. Morgan41, Marcus R. Munafò42, Shinichi Nakagawa43, Brendan Nyhan44, Timothy H. Parker45, Luis R. Pericchi46, Marco Perugini47, Jeffrey N. Rouder48, Judith Rousseau49, Victoria Savalei50, Felix D. Schönbrodt51, Thomas Sellke52, Betsy Sinclair53, Dustin Tingley36, Trisha Van Zandt16, Simine Vazire54, Duncan J. Watts55, Christopher Winship36, Robert L. Wolpert2, Yu Xie32, Cristobal Young24, Jonathan Zinman44, Valen E. Johnson1, Valen E. Johnson18 
University of Southern California1, Duke University2, Stockholm School of Economics3, University of Virginia4, Center for Open Science5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, Research Institute of Industrial Economics11, New York University12, Cardiff University13, Northwestern University14, Mathematica Policy Research15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false positive claims among reported discoveries.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.

1,586 citations


Journal ArticleDOI
25 May 2018-Science
TL;DR: Research prospects for more sustainable routes to nitrogen commodity chemicals are reviewed, considering developments in enzymatic, homogeneous, and heterogeneous catalysis, as well as electrochemical, photochemical, and plasma-based approaches.
Abstract: BACKGROUND The invention of the Haber-Bosch (H-B) process in the early 1900s to produce ammonia industrially from nitrogen and hydrogen revolutionized the manufacture of fertilizer and led to fundamental changes in the way food is produced. Its impact is underscored by the fact that about 50% of the nitrogen atoms in humans today originate from this single industrial process. In the century after the H-B process was invented, the chemistry of carbon moved to center stage, resulting in remarkable discoveries and a vast array of products including plastics and pharmaceuticals. In contrast, little has changed in industrial nitrogen chemistry. This scenario reflects both the inherent efficiency of the H-B process and the particular challenge of breaking the strong dinitrogen bond. Nonetheless, the reliance of the H-B process on fossil fuels and its associated high CO2 emissions have spurred recent interest in finding more sustainable and environmentally benign alternatives. Nitrogen in its more oxidized forms is also industrially, biologically, and environmentally important, and synergies in new combinations of oxidative and reductive transformations across the nitrogen cycle could lead to improved efficiencies. ADVANCES Major effort has been devoted to developing alternative and environmentally friendly processes that would allow NH3 production at distributed sources under more benign conditions, rather than through the large-scale centralized H-B process. Hydrocarbons (particularly methane) and water are the only two sources of hydrogen atoms that can sustain long-term, large-scale NH3 production. The use of water as the hydrogen source for NH3 production requires substantially more energy than using methane, but it is also more environmentally benign, does not contribute to the accumulation of greenhouse gases, and does not compete for valuable and limited hydrocarbon resources. Microbes living in all major ecosystems are able to reduce N2 to NH3 by using the enzyme nitrogenase. A deeper understanding of this enzyme could lead to more efficient catalysts for nitrogen reduction under ambient conditions. Model molecular catalysts have been designed that mimic some of the functions of the active site of nitrogenase. Some modest success has also been achieved in designing electrocatalysts for dinitrogen reduction. Electrochemistry avoids the expense and environmental damage of steam reforming of methane (which accounts for most of the cost of the H-B process), and it may provide a means for distributed production of ammonia. On the oxidative side, nitric acid is the principal commodity chemical containing oxidized nitrogen. Nearly all nitric acid is manufactured by oxidation of NH3 through the Ostwald process, but a more direct reaction of N2 with O2 might be practically feasible through further development of nonthermal plasma technology. Heterogeneous NH3 oxidation with O2 is at the heart of the Ostwald process and is practiced in a variety of environmental protection applications as well. Precious metals remain the workhorse catalysts, and opportunities therefore exist to develop lower-cost materials with equivalent or better activity and selectivity. Nitrogen oxides are also environmentally hazardous pollutants generated by industrial and transportation activities, and extensive research has gone into developing and applying reduction catalysts. Three-way catalytic converters are operating on hundreds of millions of vehicles worldwide.
However, increasingly stringent emissions regulations, coupled with the low exhaust temperatures of high-efficiency engines, present challenges for future combustion emissions control. Bacterial denitrification is the natural analog of this chemistry and another source of study and inspiration for catalyst design. OUTLOOK Demands for greater energy efficiency, smaller-scale and more flexible processes, and environmental protection provide growing impetus for expanding the scope of nitrogen chemistry. Nitrogenase, as well as nitrifying and denitrifying enzymes, will eventually be understood in sufficient detail that robust molecular catalytic mimics will emerge. Electrochemical and photochemical methods also demand more study. Other intriguing areas of research that have provided tantalizing results include chemical looping and plasma-driven processes. The grand challenge in the field of nitrogen chemistry is the development of catalysts and processes that provide simple, low-energy routes to the manipulation of the redox states of nitrogen.

1,153 citations


Journal ArticleDOI
Bela Abolfathi1, D. S. Aguado2, Gabriela Aguilar3, Carlos Allende Prieto2 +361 more authors (94 institutions)
TL;DR: SDSS-IV is the fourth generation of the Sloan Digital Sky Survey and has been in operation since 2014 July. As discussed by the authors, this paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014-2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.

965 citations


Journal ArticleDOI
TL;DR: The Synthetic Minority Oversampling Technique (SMOTE) preprocessing algorithm is considered the "de facto" standard in the framework of learning from imbalanced data because of the simplicity of its design, as well as its robustness when applied to different types of problems.
Abstract: The Synthetic Minority Oversampling Technique (SMOTE) preprocessing algorithm is considered the "de facto" standard in the framework of learning from imbalanced data. This is due to the simplicity of the design of the procedure, as well as its robustness when applied to different types of problems. Since its publication in 2002, SMOTE has proven successful in a variety of applications from several different domains. SMOTE has also inspired several approaches to counter the issue of class imbalance, and has also significantly contributed to new supervised learning paradigms, including multilabel classification, incremental learning, semi-supervised learning, and multi-instance learning, among others. It is a standard benchmark for learning from imbalanced data. It is also featured in a number of different software packages -- from open source to commercial. In this paper, marking the fifteen-year anniversary of SMOTE, we reflect on the SMOTE journey, discuss the current state of affairs with SMOTE and its applications, and also identify the next set of challenges to extend SMOTE for Big Data problems.
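As a concrete illustration of the core idea behind SMOTE (not code from the paper; function name, parameters, and the toy data are assumptions), the minimal sketch below generates synthetic minority samples by interpolating between each minority point and one of its k nearest minority neighbours.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sketch(X_min, n_synthetic, k=5, seed=0):
    """Minimal SMOTE-style oversampling of a minority class X_min of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)   # +1 because each point is its own neighbour
    _, idx = nn.kneighbors(X_min)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))                      # pick a minority sample
        j = rng.choice(idx[i][1:])                        # one of its k minority neighbours
        gap = rng.random()                                # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

X_min = np.random.default_rng(0).normal(size=(20, 2))     # toy minority-class data
X_syn = smote_sketch(X_min, n_synthetic=40)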

905 citations


Journal ArticleDOI
TL;DR: This collection of GaN technology developments is not itself a road map but a valuable collection of global state-of-the-art GaN research that will inform the next phase of the technology as market driven requirements evolve.
Abstract: Gallium nitride (GaN) is a compound semiconductor that has tremendous potential to facilitate economic growth in a semiconductor industry that is silicon-based and currently faced with diminishing returns of performance versus cost of investment. At a material level, its high electric field strength and electron mobility have already shown tremendous potential for high frequency communications and photonic applications. Advances in growth on commercially viable large area substrates are now at the point where power conversion applications of GaN are at the cusp of commercialisation. The future for building on the work described here in ways driven by specific challenges emerging from entirely new markets and applications is very exciting. This collection of GaN technology developments is therefore not itself a road map but a valuable collection of global state-of-the-art GaN research that will inform the next phase of the technology as market driven requirements evolve. First generation production devices are igniting large new markets and applications that can only be achieved using the advantages of higher speed, low specific resistivity and low saturation switching transistors. Major investments are being made by industrial companies in a wide variety of markets exploring the use of the technology in new circuit topologies, packaging solutions and system architectures that are required to achieve and optimise the system advantages offered by GaN transistors. It is this momentum that will drive priorities for the next stages of device research gathered here.

788 citations


Journal ArticleDOI
TL;DR: This approach achieves state-of-the-art performance in terms of predictive accuracy and uncertainty quantification in comparison to other approaches in Bayesian neural networks, as well as techniques that include Gaussian processes and ensemble methods, even when the training data size is relatively small.

522 citations


Journal ArticleDOI
TL;DR: The motivation of this perspective paper is to summarize the state-of-the-art topology optimization methods for a variety of AM topics, in the hope of inspiring both researchers and engineers to meet the challenges with innovative solutions.
Abstract: Manufacturing-oriented topology optimization has been extensively studied over the past two decades, in particular for the conventional manufacturing methods, for example, machining and injection molding or casting. Both design and manufacturing engineers have benefited from these efforts because of the close-to-optimal and friendly-to-manufacture design solutions. Recently, additive manufacturing (AM) has received significant attention from both academia and industry. AM is characterized by producing geometrically complex components layer-by-layer, and greatly reduces the geometric complexity restrictions imposed on topology optimization by conventional manufacturing. In other words, AM can make near-full use of the freeform structural evolution of topology optimization. Even so, new rules and restrictions emerge due to the diverse and intricate AM processes, which should be carefully addressed when developing AM-specific topology optimization algorithms. Therefore, the motivation of this perspective paper is to summarize the state-of-the-art topology optimization methods for a variety of AM topics. At the same time, this paper also expresses the authors' perspectives on the challenges and opportunities in these topics. The hope is to inspire both researchers and engineers to meet these challenges with innovative solutions.

518 citations


Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam1, Federico Ambrogi1 +2238 more authors (159 institutions)
TL;DR: In this paper, the discriminating variables and the algorithms used for heavy-flavour jet identification during the first years of operation of the CMS experiment in proton-proton collisions at a centre-of-mass energy of 13 TeV, are presented.
Abstract: Many measurements and searches for physics beyond the standard model at the LHC rely on the efficient identification of heavy-flavour jets, i.e. jets originating from bottom or charm quarks. In this paper, the discriminating variables and the algorithms used for heavy-flavour jet identification during the first years of operation of the CMS experiment in proton-proton collisions at a centre-of-mass energy of 13 TeV, are presented. Heavy-flavour jet identification algorithms have been improved compared to those used previously at centre-of-mass energies of 7 and 8 TeV. For jets with transverse momenta in the range expected in simulated events, these new developments result in an efficiency of 68% for the correct identification of a b jet for a probability of 1% of misidentifying a light-flavour jet. The improvement in relative efficiency at this misidentification probability is about 15%, compared to previous CMS algorithms. In addition, for the first time algorithms have been developed to identify jets containing two b hadrons in Lorentz-boosted event topologies, as well as to tag c jets. The large data sample recorded in 2016 at a centre-of-mass energy of 13 TeV has also allowed the development of new methods to measure the efficiency and misidentification probability of heavy-flavour jet identification algorithms. The b jet identification efficiency is measured with a precision of a few per cent at moderate jet transverse momenta (between 30 and 300 GeV) and about 5% at the highest jet transverse momenta (between 500 and 1000 GeV).

454 citations


Journal ArticleDOI
TL;DR: It is argued that stronger and more innovative connections to data are required to address gaps in understanding, and that constraining predictions at ecologically relevant spatial and temporal scales will require a similar investment of effort and intensified inter-disciplinary communication.
Abstract: Numerous current efforts seek to improve the representation of ecosystem ecology and vegetation demographic processes within Earth System Models (ESMs). These developments are widely viewed as an important step in developing greater realism in predictions of future ecosystem states and fluxes. Increased realism, however, leads to increased model complexity, with new features raising a suite of ecological questions that require empirical constraints. Here, we review the developments that permit the representation of plant demographics in ESMs, and identify issues raised by these developments that highlight important gaps in ecological understanding. These issues inevitably translate into uncertainty in model projections but also allow models to be applied to new processes and questions concerning the dynamics of real-world ecosystems. We argue that stronger and more innovative connections to data, across the range of scales considered, are required to address these gaps in understanding. The development of first-generation land surface models as a unifying framework for ecophysiological understanding stimulated much research into plant physiological traits and gas exchange. Constraining predictions at ecologically relevant spatial and temporal scales will require a similar investment of effort and intensified inter-disciplinary communication.

Journal ArticleDOI
TL;DR: The need for better evaluation metrics is explained, the importance and unique challenges for deep robotic learning in simulation are highlighted, and the spectrum between purely data-driven and model-driven approaches is explored.
Abstract: The application of deep learning in robotics leads to very specific problems and research questions that are typically not addressed by the computer vision and machine learning communities. In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning. We explain the need for better evaluation metrics, highlight the importance and unique challenges for deep robotic learning in simulation, and explore the spectrum between purely data-driven and model-driven approaches. We hope this paper provides a motivating overview of important research directions to overcome the current limitations, and helps to fulfill the promising potentials of deep learning in robotics.

Proceedings ArticleDOI
01 Oct 2018
TL;DR: This paper presents an implementation of model predictive control (MPC) to determine ground reaction forces for a torque-controlled quadruped robot, capable of robust locomotion at a variety of speeds.
Abstract: This paper presents an implementation of model predictive control (MPC) to determine ground reaction forces for a torque-controlled quadruped robot. The robot dynamics are simplified to formulate the problem as convex optimization while still capturing the full 3D nature of the system. With the simplified model, ground reaction force planning problems are formulated for prediction horizons of up to 0.5 seconds, and are solved to optimality in under 1 ms at a rate of 20–30 Hz. Despite using a simplified model, the robot is capable of robust locomotion at a variety of speeds. Experimental results demonstrate control of gaits including stand, trot, flying-trot, pronk, bound, pace, a 3-legged gait, and a full 3D gallop. The robot achieved forward speeds of up to 3 m/s, lateral speeds up to 1 m/s, and angular speeds up to 180 deg/sec. Our approach is general enough to perform all these behaviors with the same set of gains and weights.
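As a rough illustration of the kind of convex problem described above, the sketch below solves a simplified, single-time-step force-distribution QP: find ground reaction forces that track a desired net wrench subject to friction-pyramid and unilateral-contact constraints. This is not the authors' implementation; the foot positions, friction coefficient, weights, and the use of cvxpy are all assumptions made for the example.

import numpy as np
import cvxpy as cp

mass, g = 9.0, 9.81
r_feet = np.array([[ 0.2,  0.1, -0.3],   # assumed foot positions relative to the CoM (4 stance feet)
                   [ 0.2, -0.1, -0.3],
                   [-0.2,  0.1, -0.3],
                   [-0.2, -0.1, -0.3]])
wrench_des = np.hstack([np.array([0.0, 0.0, mass * g]), np.zeros(3)])  # support weight, zero net torque

def skew(p):
    return np.array([[0, -p[2], p[1]], [p[2], 0, -p[0]], [-p[1], p[0], 0]])

f = cp.Variable((4, 3))                       # one 3D contact force per foot
net_force  = cp.sum(f, axis=0)
net_torque = sum(skew(r_feet[i]) @ f[i] for i in range(4))
wrench = cp.hstack([net_force, net_torque])

mu = 0.6
constraints = [f[:, 2] >= 0]                  # unilateral contact: no pulling on the ground
for i in range(4):
    constraints += [cp.abs(f[i, 0]) <= mu * f[i, 2],   # friction pyramid in x
                    cp.abs(f[i, 1]) <= mu * f[i, 2]]   # friction pyramid in y

prob = cp.Problem(cp.Minimize(cp.sum_squares(wrench - wrench_des) + 1e-3 * cp.sum_squares(f)),
                  constraints)
prob.solve()
print(f.value)   # optimal per-foot ground reaction forces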

Journal ArticleDOI
01 Apr 2018
TL;DR: There are increasing gaps between the computational complexity and energy efficiency required for the continued scaling of deep neural networks and the hardware capacity actually available with current CMOS technology scaling, in situations where edge inference is required.
Abstract: Deep neural networks offer considerable potential across a range of applications, from advanced manufacturing to autonomous cars. A clear trend in deep neural networks is the exponential growth of network size and the associated increases in computational complexity and memory consumption. However, the performance and energy efficiency of edge inference, in which the inference (the application of a trained network to new data) is performed locally on embedded platforms that have limited area and power budget, is bounded by technology scaling. Here we analyse recent data and show that there are increasing gaps between the computational complexity and energy efficiency required by data scientists and the hardware capacity made available by hardware architects. We then discuss various architecture and algorithm innovations that could help to bridge the gaps. This Perspective highlights the existence of gaps between the computational complexity and energy efficiency required for the continued scaling of deep neural networks and the hardware capacity actually available with current CMOS technology scaling, in situations where edge inference is required; it then discusses various architecture and algorithm innovations that could help to bridge these gaps.

Journal ArticleDOI
TL;DR: A hybrid model chemistry consisting of a nearsighted neural network potential, TensorMol-0.1, with screened long-range electrostatic and van der Waals physics is constructed; it achieves millihartree accuracy and scalability to tens of thousands of atoms on ordinary laptops.
Abstract: Traditional force fields cannot model chemical reactivity, and suffer from low generality without re-fitting. Neural network potentials promise to address these problems, offering energies and forces with near ab initio accuracy at low cost. However a data-driven approach is naturally inefficient for long-range interatomic forces that have simple physical formulas. In this manuscript we construct a hybrid model chemistry consisting of a nearsighted neural network potential with screened long-range electrostatic and van der Waals physics. This trained potential, simply dubbed “TensorMol-0.1”, is offered in an open-source Python package capable of many of the simulation types commonly used to study chemistry: geometry optimizations, harmonic spectra, open or periodic molecular dynamics, Monte Carlo, and nudged elastic band calculations. We describe the robustness and speed of the package, demonstrating its millihartree accuracy and scalability to tens-of-thousands of atoms on ordinary laptops. We demonstrate the performance of the model by reproducing vibrational spectra, and simulating the molecular dynamics of a protein. Our comparisons with electronic structure theory and experimental data demonstrate that neural network molecular dynamics is poised to become an important tool for molecular simulation, lowering the resource barrier to simulating chemistry.
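Schematically, the hybrid decomposition described above is E_total = E_NN(short-range environments) + E_electrostatic(screened) + E_vdW. The toy sketch below illustrates only the screened-Coulomb idea, with the short range smoothly switched off so a learned term can handle it; the switching form, function names, and numbers are assumptions, not the TensorMol-0.1 implementation.

import numpy as np
from scipy.special import erf

def screened_coulomb_energy(coords, charges, r_switch=4.0):
    """Long-range electrostatics with the short range smoothly damped (erf -> 1 at long range)."""
    e, n = 0.0, len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            e += charges[i] * charges[j] * erf(r / r_switch) / r
    return e

def total_energy(coords, charges, nn_short_range, dispersion):
    # Hybrid decomposition: learned short-range term plus physics-based long-range terms.
    return nn_short_range(coords) + screened_coulomb_energy(coords, charges) + dispersion(coords)

coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0], [0.0, 0.0, 9.0]])
charges = np.array([0.4, -0.8, 0.4])
# Placeholder callables stand in for the learned short-range and dispersion terms.
print(total_energy(coords, charges, lambda c: 0.0, lambda c: 0.0))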

Journal ArticleDOI
01 Aug 2018
TL;DR: This Perspective argues that electronics is poised to enter a new era of scaling – hyper-scaling – driven by advances in beyond-Boltzmann transistors, embedded non-volatile memories, monolithic three-dimensional integration and heterogeneous integration techniques.
Abstract: In the past five decades, the semiconductor industry has gone through two distinct eras of scaling: the geometric (or classical) scaling era and the equivalent (or effective) scaling era. As transistor and memory features approach 10 nanometres, it is apparent that room for further scaling in the horizontal direction is running out. In addition, the rise of data abundant computing is exacerbating the interconnect bottleneck that exists in conventional computing architecture between the compute cores and the memory blocks. Here we argue that electronics is poised to enter a new, third era of scaling — hyper-scaling — in which resources are added when needed to meet the demands of data abundant workloads. This era will be driven by advances in beyond-Boltzmann transistors, embedded non-volatile memories, monolithic three-dimensional integration and heterogeneous integration techniques. This Perspective argues that electronics is poised to enter a new era of scaling – hyper-scaling – driven by advances in beyond-Boltzmann transistors, embedded non-volatile memories, monolithic three-dimensional integration, and heterogeneous integration techniques.

Proceedings ArticleDOI
01 Oct 2018
Abstract: This paper introduces a new robust, dynamic quadruped, the MIT Cheetah 3. Like its predecessor, the Cheetah 3 exploits tailored mechanical design to enable simple control strategies for dynamic locomotion and features high-bandwidth proprioceptive actuators to manage physical interaction with the environment. A new leg design is presented that includes proprioceptive actuation on the abduction/adduction degrees of freedom in addition to an expanded range of motion on the hips and knees. To make full use of these new capabilities, general balance and locomotion controllers for Cheetah 3 are presented. These controllers are embedded into a modular control architecture that allows the robot to handle unexpected terrain disturbances through reactive gait modification and without the need for external sensors or prior environment knowledge. The efficiency of the robot is demonstrated by a low Cost of Transport (CoT) over multiple gaits at moderate speeds, with the lowest CoT of 0.45 found during trotting. Experiments showcase the ability to blindly climb up stairs as a result of the full system integration. These results collectively represent a promising step toward a platform capable of generalized dynamic legged locomotion.

Journal ArticleDOI
TL;DR: The aims and current foci of the HiTOP Consortium, a group of 70 investigators working together to study the empirical classification of psychopathology, are described; these pertain to continued research on the empirical organization of psychopathology constructs, the connection between personality and psychopathology, and the utility of empirically based psychopathology constructs in both research and the clinic.

Journal ArticleDOI
TL;DR: In this paper, the performance of the modified system is studied using proton-proton collision data at center-of-mass energy √s=13 TeV, collected at the LHC in 2015 and 2016.
Abstract: The CMS muon detector system, muon reconstruction software, and high-level trigger underwent significant changes in 2013–2014 in preparation for running at higher LHC collision energy and instantaneous luminosity. The performance of the modified system is studied using proton-proton collision data at center-of-mass energy √s=13 TeV, collected at the LHC in 2015 and 2016. The measured performance parameters, including spatial resolution, efficiency, and timing, are found to meet all design specifications and are well reproduced by simulation. Despite the more challenging running conditions, the modified muon system is found to perform as well as, and in many aspects better than, previously. We dedicate this paper to the memory of Prof. Alberto Benvenuti, whose work was fundamental for the CMS muon detector.

Journal ArticleDOI
01 Apr 2018
TL;DR: In this article, a density-functional-theory-based microkinetic model was developed to incorporate the effect of vibrational excitations in N2 to decrease dissociation barriers without influencing subsequent reaction steps.
Abstract: Correlations between the energies of elementary steps limit the rates of thermally catalysed reactions at surfaces. Here, we show how these limitations can be circumvented in ammonia synthesis by coupling catalysts to a non-thermal plasma. We postulate that plasma-induced vibrational excitations in N2 decrease dissociation barriers without influencing subsequent reaction steps. We develop a density-functional-theory-based microkinetic model to incorporate this effect, and parameterize the model using N2 vibrational excitations observed in a dielectric-barrier-discharge plasma. We predict plasma enhancement to be particularly great on metals that bind nitrogen too weakly to be active thermally. Ammonia synthesis rates observed in a dielectric-barrier-discharge plasma reactor are consistent with predicted enhancements and predicted changes in the optimal metal catalyst. The results provide guidance for optimizing catalysts for application with plasmas. Plasma catalysis holds promise for overcoming the limitations of conventional catalysis. Now, a kinetic model for ammonia synthesis is reported to predict optimal catalysts for use with plasmas. Reactor measurements at near-ambient conditions confirm the predicted catalytic rates, which are similar to those obtained in the Haber–Bosch process.
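A toy numerical illustration of the barrier-lowering idea described above is sketched below: vibrationally excited N2 populations are weighted against barriers reduced by a fraction of the vibrational energy. This is not the paper's DFT-based microkinetic model; the pre-exponential factor, barrier, efficacy parameter alpha, and vibrational temperature are assumptions chosen only for illustration.

import numpy as np

kB = 8.617e-5                    # Boltzmann constant, eV/K
T_gas, T_vib = 473.0, 3000.0     # assumed gas and vibrational temperatures (K)
Ea = 1.0                         # assumed thermal dissociation barrier (eV)
alpha = 0.5                      # assumed efficacy of vibrational energy at lowering the barrier
hw = 0.29                        # N2 vibrational quantum (~0.29 eV)
A = 1e13                         # assumed pre-exponential factor (1/s)

levels = np.arange(0, 10)
Ev = levels * hw
pop = np.exp(-Ev / (kB * T_vib))
pop /= pop.sum()                                   # Boltzmann vibrational distribution at T_vib

k_v = A * np.exp(-np.maximum(Ea - alpha * Ev, 0.0) / (kB * T_gas))   # per-level rate constants
k_eff = np.sum(pop * k_v)                          # population-weighted dissociation rate constant
k_thermal = A * np.exp(-Ea / (kB * T_gas))
print(f"enhancement factor ~ {k_eff / k_thermal:.1e}")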

Journal ArticleDOI
TL;DR: In this article, a generic interferometric synthetic aperture radar atmospheric correction model was developed whose features include global coverage, all-weather and all-time usability, correction maps available in near real time, and indicators to assess the correction performance and feasibility.
Abstract: For mapping Earth surface movements at larger scale and smaller amplitudes, many new synthetic aperture radar instruments (Sentinel-1A/B, Gaofen-3, ALOS-2) have been developed and launched from 2014–2017, and this trend is set to continue with Sentinel-1C/D, Gaofen-3B/C, and the RADARSAT Constellation planned for launch during 2018–2025. This poses more challenges for correcting interferograms for atmospheric effects, since the spatial-temporal variations of tropospheric delay may dominate over large scales and completely mask the actual displacements due to tectonic or volcanic deformation. To overcome this, we have developed a generic interferometric synthetic aperture radar atmospheric correction model whose notable features comprise (i) global coverage, (ii) all-weather, all-time usability, (iii) correction maps available in near real time, and (iv) indicators to assess the correction performance and feasibility. The model integrates operational high-resolution European Centre for Medium-Range Weather Forecasts (ECMWF) data (0.125° grid, 137 vertical levels, and 6-hr interval) and continuous GPS tropospheric delay estimates (every 5 min) using an iterative tropospheric decomposition model. The model's performance was tested using eight globally distributed Sentinel-1 interferograms, encompassing both flat and mountainous topographies, midlatitude and near polar regions, and monsoon and oceanic climate systems, achieving a phase standard deviation and displacement root-mean-square (RMS) of ~1 cm against GPS over wide regions (250 by 250 km). Indicators describing the model's performance, including (i) GPS network and ECMWF cross RMS, (ii) phase versus estimated atmospheric delay correlations, (iii) ECMWF time differences, and (iv) topography variations, were developed to provide quality control for subsequent automatic processing and provide insights into the confidence level with which the generated atmospheric correction maps may be applied.
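A schematic of the decomposition idea behind such a correction (not the actual model; the exponential stratified form, station heights, and delay values are assumptions for illustration) is sketched below: zenith delays at GPS stations are separated into an elevation-dependent stratified part and a residual turbulent part, with the residual left for spatial interpolation.

import numpy as np
from scipy.optimize import curve_fit

# Assumed example inputs: station heights (m) and zenith tropospheric delays (m)
heights = np.array([10., 150., 400., 900., 1500., 2200.])
ztd     = np.array([2.45, 2.41, 2.35, 2.24, 2.12, 2.00])

def stratified(h, L0, beta):
    """Elevation-dependent (stratified) delay modelled as an exponential decay with height."""
    return L0 * np.exp(-beta * h)

params, _ = curve_fit(stratified, heights, ztd, p0=[2.5, 1e-4])
turbulent_residual = ztd - stratified(heights, *params)   # to be interpolated spatially (e.g. IDW/kriging)
print(params, turbulent_residual)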

Journal ArticleDOI
TL;DR: This paper proposed a framework for automated phrase mining, AutoPhrase, which supports any language as long as a general knowledge base (e.g., Wikipedia) in that language is available, while benefiting from, but not requiring, a POS tagger.
Abstract: As one of the fundamental tasks in text analysis, phrase mining aims at extracting quality phrases from a text corpus and has various downstream applications including information extraction/retrieval, taxonomy construction, and topic modeling. Most existing methods rely on complex, trained linguistic analyzers, and thus likely have unsatisfactory performance on text corpora of new domains and genres without extra but expensive adaption. None of the state-of-the-art models, even data-driven models, is fully automated because they require human experts for designing rules or labeling phrases. In this paper, we propose a novel framework for automated phrase mining, AutoPhrase, which supports any language as long as a general knowledge base (e.g., Wikipedia) in that language is available, while benefiting from, but not requiring, a POS tagger. Compared to the state-of-the-art methods, AutoPhrase has shown significant improvements in both effectiveness and efficiency on five real-world datasets across different domains and languages. Besides, AutoPhrase can be extended to model single-word quality phrases.

Journal ArticleDOI
TL;DR: The authors posit that family firms often face a dilemma in their strategic decision-making: whether to maintain current socioemotional wealth or pursue prospective financial wealth. Applying such a mixed gamble perspective to acquisitions, they argue that family owners assess potential acquisitions with regard to their impact on both wealth dimensions.

Journal ArticleDOI
TL;DR: The Extreme Value Machine (EVM) is a novel, theoretically sound classifier that has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning.
Abstract: It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g., artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.
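As a simplified illustration of the EVT ingredient mentioned above (not the authors' EVM code; the margin-distance construction, tail size, and toy data are rough stand-ins), the sketch below fits a Weibull model to the smallest distances from one class's samples to points of other classes and uses it to score how "included" a query is.

import numpy as np
from scipy.stats import weibull_min
from scipy.spatial.distance import cdist

def fit_inclusion_model(class_points, other_points, tail=20):
    """Fit a Weibull to the tail of nearest 'negative' half-distances for each class point."""
    d = cdist(class_points, other_points)              # distances to samples of other classes
    tail_d = np.sort(d, axis=1)[:, :tail] / 2.0        # half-distances to the closest negatives
    return [weibull_min.fit(row, floc=0) for row in tail_d]   # (shape, loc, scale) per class point

def inclusion_probability(query, class_points, models):
    d = np.linalg.norm(class_points - query, axis=1)
    # Probability the query falls inside each point's extreme-value margin; keep the best one.
    return max(1.0 - weibull_min.cdf(di, *m) for di, m in zip(d, models))

rng = np.random.default_rng(0)
pos, neg = rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (50, 2))   # toy known class vs. "other" samples
models = fit_inclusion_model(pos, neg)
print(inclusion_probability(np.array([0.5, 0.0]), pos, models))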

Journal ArticleDOI
TL;DR: The identification of mutations in the propeller domains of Kelch 13 as the primary marker for artemisinin resistance in P. falciparum is described and two major mechanisms of resistance that have been independently proposed are explored: the activation of the unfolded protein response and proteostatic dysregulation of parasite phosphatidylinositol 3-kinase.
Abstract: Haldar and colleagues discuss markers and mechanisms of resistance to artemisinins and artemisinin-based combination therapies. They describe the identification of Plasmodium falciparum Kelch 13 as the primary and, to date, sole causative marker of artemisinin resistance in P. falciparum and explore two proposed resistance mechanisms. They emphasize continuing challenges to improve detection strategies and new drug development strategies.

Journal ArticleDOI
TL;DR: How dynamic instability is central to the assembly of many microtubule-based structures and to the robust functioning of the microtubule cytoskeleton is reviewed.
Abstract: Microtubules act as "railways" for motor-driven intracellular transport, interact with accessory proteins to assemble into larger structures such as the mitotic spindle, and provide an organizational framework to the rest of the cell. Key to these functions is the fact that microtubules are "dynamic." As with actin, the polymer dynamics are driven by nucleotide hydrolysis and influenced by a host of specialized regulatory proteins, including microtubule-associated proteins. However, microtubule turnover involves a surprising behavior, termed dynamic instability, in which individual polymers switch stochastically between growth and depolymerization. Dynamic instability allows microtubules to explore intracellular space and remodel in response to intracellular and extracellular cues. Here, we review how such instability is central to the assembly of many microtubule-based structures and to the robust functioning of the microtubule cytoskeleton.
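A minimal stochastic simulation of the growth/shrinkage switching described above is sketched below; the growth and shrinkage speeds and the catastrophe/rescue frequencies are assumptions chosen only for illustration, not measurements from the review.

import numpy as np

def simulate_dynamic_instability(t_end=600.0, dt=0.1, seed=1,
                                 v_grow=0.05, v_shrink=0.5,     # um/s, assumed
                                 f_cat=0.005, f_res=0.02):      # switching frequencies (1/s), assumed
    """Two-state model: a microtubule grows until a 'catastrophe' switches it to shrinkage,
    and shrinks until a 'rescue' switches it back (length floored at zero)."""
    rng = np.random.default_rng(seed)
    length, growing = 0.0, True
    lengths = []
    for _ in range(int(t_end / dt)):
        if growing:
            length += v_grow * dt
            if rng.random() < f_cat * dt:
                growing = False
        else:
            length = max(0.0, length - v_shrink * dt)
            if rng.random() < f_res * dt or length == 0.0:
                growing = True
        lengths.append(length)
    return np.array(lengths)

trace = simulate_dynamic_instability()
print(trace.max(), trace[-1])   # peak and final length of the simulated microtubule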

Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi +2240 more authors (157 institutions)
TL;DR: In this article, a measurement of the H→ττ signal strength is performed using events recorded in proton-proton collisions by the CMS experiment at the LHC in 2016 at a center-of-mass energy of 13TeV.

Journal ArticleDOI
TL;DR: In this paper, the critical design criteria of Hf0.5Zr0.5O2 (HZO)-based ferroelectric field effect transistor (FeFET) for nonvolatile memory application were established.
Abstract: We fabricate, characterize, and establish the critical design criteria of Hf0.5Zr0.5O2 (HZO)-based ferroelectric field effect transistors (FeFETs) for nonvolatile memory application. We quantify the V_TH shift from electron (hole) trapping in the vicinity of the ferroelectric (FE)/interlayer (IL) interface, induced by the erase (program) pulse, and the V_TH shift from polarization switching, to determine the true memory window (MW). The devices exhibit extrapolated retention up to 10 years at 85 °C and endurance up to 5×10^6 cycles, with failure initiated by IL breakdown. Endurance up to 10^12 cycles of partial polarization switching is shown in a metal–FE–metal capacitor, in the absence of an IL. A comprehensive metal–FE–insulator–semiconductor FeFET model is developed to quantify the electric field distribution in the gate stack, and an IL design guideline is established to markedly enhance the MW, retention characteristics, and cycling endurance.
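For orientation, a commonly cited first-order estimate (a textbook approximation, not a result quoted from this paper) relates the FeFET memory window to the coercive field E_c and the ferroelectric film thickness t_F:

\[ \mathrm{MW} \;\approx\; 2\,E_c\,t_F \]

so that, to first order, a thicker film or a higher coercive field widens the window, while the trapping-induced V_TH shifts quantified above act to reduce the usable window.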

Journal ArticleDOI
TL;DR: This Special Report presents a description of Geant4-DNA user applications dedicated to the simulation of track structures (TS) in liquid water and associated physical quantities (e.g., range, stopping power, mean free path…) and shows that the most recent sets of physics models available in Geant4-DNA enable more accurate simulation of stopping powers, dose point kernels, and W-values in liquid water.
Abstract: This Special Report presents a description of Geant4-DNA user applications dedicated to the simulation of track structures (TS) in liquid water and associated physical quantities (e.g., range, stopping power, mean free path…). These example applications are included in the Geant4 Monte Carlo toolkit and are available in open access. Each application is described and comparisons to recent international recommendations are shown (e.g., ICRU, MIRD), when available. The influence of physics models available in Geant4-DNA for the simulation of electron interactions in liquid water is discussed. Thanks to these applications, the authors show that the most recent sets of physics models available in Geant4-DNA (the so-called "option 4" and "option 6" sets) enable more accurate simulation of stopping powers, dose point kernels, and W-values in liquid water than the default set of models ("option 2") initially provided in Geant4-DNA. They also serve as reference applications for Geant4-DNA users interested in TS simulations.

Journal ArticleDOI
TL;DR: OCO-2 SIF generally had a better performance for predicting GPP than satellite-derived vegetation indices and a light use efficiency model, and the generally consistent slope of the relationship among biomes suggests a nearly universal rather than biome-specific SIF-GPP relationship.
Abstract: Solar-induced chlorophyll fluorescence (SIF) has been increasingly used as a proxy for terrestrial gross primary productivity (GPP). Previous work mainly evaluated the relationship between satellite-observed SIF and gridded GPP products both based on coarse spatial resolutions. Finer resolution SIF (1.3 km × 2.25 km) measured from the Orbiting Carbon Observatory-2 (OCO-2) provides the first opportunity to examine the SIF-GPP relationship at the ecosystem scale using flux tower GPP data. However, it remains unclear how strong the relationship is for each biome and whether a robust, universal relationship exists across a variety of biomes. Here we conducted the first global analysis of the relationship between OCO-2 SIF and tower GPP for a total of 64 flux sites across the globe encompassing eight major biomes. OCO-2 SIF showed strong correlations with tower GPP at both midday and daily timescales, with the strongest relationship observed for daily SIF at 757 nm (R2 = 0.72, p < 0.0001). Strong linear relationships between SIF and GPP were consistently found for all biomes (R2 = 0.57-0.79, p < 0.0001) except evergreen broadleaf forests (R2 = 0.16, p < 0.05) at the daily timescale. A higher slope was found for C4 grasslands and croplands than for C3 ecosystems. The generally consistent slope of the relationship among biomes suggests a nearly universal rather than biome-specific SIF-GPP relationship, and this finding is an important distinction and simplification compared to previous results. SIF was mainly driven by absorbed photosynthetically active radiation and was also influenced by environmental stresses (temperature and water stresses) that determine photosynthetic light use efficiency. OCO-2 SIF generally had a better performance for predicting GPP than satellite-derived vegetation indices and a light use efficiency model. The universal SIF-GPP relationship can potentially lead to more accurate GPP estimates regionally or globally. Our findings revealed the remarkable ability of finer resolution SIF observations from OCO-2 and other new or future missions (e.g., TROPOMI, FLEX) for estimating terrestrial photosynthesis across a wide variety of biomes and identified their potential and limitations for ecosystem functioning and carbon cycle studies.
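A minimal sketch of the kind of linear SIF-GPP fit reported above is given below; the arrays are placeholders standing in for daily tower GPP and coincident OCO-2 SIF retrievals and are not data from the study.

import numpy as np
from scipy import stats

# Placeholder example arrays (not study data): coincident SIF and tower GPP samples
sif = np.array([0.2, 0.5, 0.8, 1.1, 1.4, 1.8])      # SIF at 757 nm, W m-2 um-1 sr-1
gpp = np.array([1.5, 3.8, 5.9, 8.2, 10.1, 13.0])    # GPP, g C m-2 day-1

res = stats.linregress(sif, gpp)                    # ordinary least-squares linear fit
print(f"GPP ~ {res.slope:.2f} * SIF + {res.intercept:.2f},  R^2 = {res.rvalue**2:.2f}")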