
Showing papers published by Purdue University in 2017


Journal ArticleDOI
TL;DR: The atomic simulation environment (ASE) provides modules for performing many standard simulation tasks such as structure optimization, molecular dynamics, handling of constraints and performing nudged elastic band calculations.
Abstract: The Atomic Simulation Environment (ASE) is a software package written in the Python programming language with the aim of setting up, steering, and analyzing atomistic simulations. In ASE, tasks are fully scripted in Python. The powerful syntax of Python combined with the NumPy array library makes it possible to perform very complex simulation tasks. For example, a sequence of calculations may be performed with the use of a simple "for-loop" construction. Calculations of energy, forces, stresses and other quantities are performed through interfaces to many external electronic structure codes or force fields using a uniform interface. On top of this calculator interface, ASE provides modules for performing many standard simulation tasks such as structure optimization, molecular dynamics, handling of constraints and performing nudged elastic band calculations.
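
The scripted workflow described above can be illustrated with a short, self-contained sketch. It uses ASE's built-in EMT calculator (an inexpensive test potential) in place of an external electronic-structure code; the molecule names and force threshold are arbitrary choices for the example.

```python
# Minimal ASE sketch: a "for-loop" over structures, each relaxed with the
# built-in EMT test calculator attached through the uniform calculator interface.
from ase.build import molecule
from ase.calculators.emt import EMT
from ase.optimize import BFGS

for name in ["H2O", "CO2", "CH4"]:
    atoms = molecule(name)            # set up the structure
    atoms.calc = EMT()                # attach a calculator (stand-in for a DFT code)
    opt = BFGS(atoms, logfile=None)   # structure-optimization module
    opt.run(fmax=0.05)                # relax until the maximum force is below 0.05 eV/Angstrom
    print(name, atoms.get_potential_energy())
```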

2,282 citations


Journal ArticleDOI
TL;DR: The HiTOP promises to improve research and clinical practice by addressing the aforementioned shortcomings of traditional nosologies and provides an effective way to summarize and convey information on risk factors, etiology, pathophysiology, phenomenology, illness course, and treatment response.
Abstract: The reliability and validity of traditional taxonomies are limited by arbitrary boundaries between psychopathology and normality, often unclear boundaries between disorders, frequent disorder co-occurrence, heterogeneity within disorders, and diagnostic instability. These taxonomies went beyond evidence available on the structure of psychopathology and were shaped by a variety of other considerations, which may explain the aforementioned shortcomings. The Hierarchical Taxonomy of Psychopathology (HiTOP) model has emerged as a research effort to address these problems. It constructs psychopathological syndromes and their components/subtypes based on the observed covariation of symptoms, grouping related symptoms together and thus reducing heterogeneity. It also combines co-occurring syndromes into spectra, thereby mapping out comorbidity. Moreover, it characterizes these phenomena dimensionally, which addresses boundary problems and diagnostic instability. Here, we review the development of the HiTOP and the relevant evidence. The new classification already covers most forms of psychopathology. Dimensional measures have been developed to assess many of the identified components, syndromes, and spectra. Several domains of this model are ready for clinical and research applications. The HiTOP promises to improve research and clinical practice by addressing the aforementioned shortcomings of traditional nosologies. It also provides an effective way to summarize and convey information on risk factors, etiology, pathophysiology, phenomenology, illness course, and treatment response. This can greatly improve the utility of the diagnosis of mental disorders. The new classification remains a work in progress. However, it is developing rapidly and is poised to advance mental health research and care significantly as the relevant science matures.

1,635 citations


Journal ArticleDOI
Elena Aprile1, Jelle Aalbers2, F. Agostini, M. Alfonsi3, F. D. Amaro4, M. Anthony1, F. Arneodo5, P. Barrow6, Laura Baudis6, Boris Bauermeister7, M. L. Benabderrahmane5, T. Berger8, P. A. Breur2, April S. Brown2, Ethan Brown8, S. Bruenner9, Giacomo Bruno, Ran Budnik10, L. Bütikofer11, J. Calvén7, João Cardoso4, M. Cervantes12, D. Cichon9, D. Coderre11, Auke-Pieter Colijn2, Jan Conrad7, Jean-Pierre Cussonneau13, M. P. Decowski2, P. de Perio1, P. Di Gangi14, A. Di Giovanni5, Sara Diglio13, G. Eurin9, J. Fei15, A. D. Ferella7, A. Fieguth16, W. Fulgione, A. Gallo Rosso, Michelle Galloway6, F. Gao1, M. Garbini14, Robert Gardner17, C. Geis3, Luke Goetzke1, L. Grandi17, Z. Greene1, C. Grignon3, C. Hasterok9, E. Hogenbirk2, J. Howlett1, R. Itay10, B. Kaminsky11, Shingo Kazama6, G. Kessler6, A. Kish6, H. Landsman10, R. F. Lang12, D. Lellouch10, L. Levinson10, Qing Lin1, Sebastian Lindemann9, Manfred Lindner9, F. Lombardi15, J. A. M. Lopes4, A. Manfredini10, I. Mariș5, T. Marrodán Undagoitia9, Julien Masbou13, F. V. Massoli14, D. Masson12, D. Mayani6, M. Messina1, K. Micheneau13, A. Molinario, K. Morâ7, M. Murra16, J. Naganoma18, Kaixuan Ni15, Uwe Oberlack3, P. Pakarha6, Bart Pelssers7, R. Persiani13, F. Piastra6, J. Pienaar12, V. Pizzella9, M.-C. Piro8, Guillaume Plante1, N. Priel10, L. Rauch9, S. Reichard6, C. Reuter12, B. Riedel17, A. Rizzo1, S. Rosendahl16, N. Rupp9, R. Saldanha17, J.M.F. dos Santos4, Gabriella Sartorelli14, M. Scheibelhut3, S. Schindler3, J. Schreiner9, Marc Schumann11, L. Scotto Lavina19, M. Selvi14, P. Shagin18, E. Shockley17, Manuel Gameiro da Silva4, H. Simgen9, M. V. Sivers11, A. Stein20, S. Thapa17, Dominique Thers13, A. Tiseni2, Gian Carlo Trinchero, C. Tunnell17, M. Vargas16, N. Upole17, Hui Wang20, Zirui Wang, Yuehuan Wei6, Ch. Weinheimer16, J. Wulf6, J. Ye15, Yanxi Zhang1, T. Zhu1 
TL;DR: The first dark matter search results from XENON1T, a ∼2000-kg-target-mass dual-phase (liquid-gas) xenon time projection chamber in operation at the Laboratori Nazionali del Gran Sasso in Italy, are reported and a profile likelihood analysis shows that the data are consistent with the background-only hypothesis.
Abstract: We report the first dark matter search results from XENON1T, a ∼2000-kg-target-mass dual-phase (liquid-gas) xenon time projection chamber in operation at the Laboratori Nazionali del Gran Sasso in Italy and the first ton-scale detector of this kind. The blinded search used 34.2 live days of data acquired between November 2016 and January 2017. Inside the (1042±12)-kg fiducial mass and in the [5,40] keVnr energy range of interest for weakly interacting massive particle (WIMP) dark matter searches, the electronic recoil background was (1.93±0.25)×10^-4 events/(kg×day×keVee), the lowest ever achieved in such a dark matter detector. A profile likelihood analysis shows that the data are consistent with the background-only hypothesis. We derive the most stringent exclusion limits on the spin-independent WIMP-nucleon interaction cross section for WIMP masses above 10 GeV/c^2, with a minimum of 7.7×10^-47 cm^2 for 35-GeV/c^2 WIMPs at 90% CL.

1,061 citations


Proceedings ArticleDOI
14 Jun 2017
TL;DR: In this paper, the authors propose a novel deep neural network architecture that learns without any significant increase in the number of parameters, giving state-of-the-art performance on CamVid and comparable results on the Cityscapes dataset.
Abstract: Pixel-wise semantic segmentation for visual scene understanding not only needs to be accurate, but also efficient in order to find any use in real-time applications. Existing algorithms, even though accurate, do not use the parameters of the neural network efficiently; as a result they are large in terms of parameters and number of operations, and hence slow. In this paper, we propose a novel deep neural network architecture which allows it to learn without any significant increase in the number of parameters. Our network uses only 11.5 million parameters and 21.2 GFLOPs for processing an image of resolution 3 × 640 × 360. It gives state-of-the-art performance on CamVid and comparable results on the Cityscapes dataset. We also compare our network's processing time on an NVIDIA GPU and an embedded system device with existing state-of-the-art architectures for different image resolutions.
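
The parameter count quoted above is the kind of figure that is straightforward to verify in code. The block below is not the paper's architecture; it is a toy PyTorch encoder-decoder used only to show how a parameter count and the output shape for a 3 × 640 × 360 input are typically checked.

```python
# Toy encoder-decoder (illustrative only, not the published network).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),   # downsampling encoder stage
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 12, kernel_size=2, stride=2),     # decoder back to input resolution
)

n_params = sum(p.numel() for p in model.parameters())        # total learnable parameters
x = torch.randn(1, 3, 360, 640)                              # one 3 x 640 x 360 image (H=360, W=640)
with torch.no_grad():
    y = model(x)
print(f"parameters: {n_params / 1e6:.3f} M, output shape: {tuple(y.shape)}")
```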

1,015 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive updated analysis of early childhood development interventions across the five sectors of health, nutrition, education, child protection, and social protection, concluding that to make interventions successful, smart, and sustainable, they need to be implemented as multi-sectoral intervention packages anchored in nurturing care.

858 citations


Journal Article
TL;DR: Automatic differentiation (AD) is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs, as discussed by the authors; AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization.
Abstract: Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "auto-diff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
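
To make the program-level differentiation idea concrete, here is a minimal forward-mode AD sketch built on dual numbers; it illustrates the general technique the survey covers and is not code from the paper.

```python
# Forward-mode automatic differentiation with dual numbers: each value carries
# its derivative, and elementary operations propagate both via the chain rule.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)  # product rule

    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return 3 * x * x + sin(x)   # an ordinary program, differentiated without symbolic manipulation

x = Dual(1.5, 1.0)              # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)             # f(1.5) and f'(1.5) = 6*1.5 + cos(1.5)
```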

758 citations


Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam1, Ece Aşılar1  +2212 moreInstitutions (157)
TL;DR: A fully-fledged particle-flow reconstruction algorithm tuned to the CMS detector was developed and has been consistently used in physics analyses for the first time at a hadron collider as mentioned in this paper.
Abstract: The CMS apparatus was identified, a few years before the start of the LHC operation at CERN, to feature properties well suited to particle-flow (PF) reconstruction: a highly-segmented tracker, a fine-grained electromagnetic calorimeter, a hermetic hadron calorimeter, a strong magnetic field, and an excellent muon spectrometer. A fully-fledged PF reconstruction algorithm tuned to the CMS detector was therefore developed and has been consistently used in physics analyses for the first time at a hadron collider. For each collision, the comprehensive list of final-state particles identified and reconstructed by the algorithm provides a global event description that leads to unprecedented CMS performance for jet and hadronic τ decay reconstruction, missing transverse momentum determination, and electron and muon identification. This approach also allows particles from pileup interactions to be identified and enables efficient pileup mitigation methods. The data collected by CMS at a centre-of-mass energy of 8 TeV show excellent agreement with the simulation and confirm the superior PF performance at least up to an average of 20 pileup interactions.

719 citations


Journal ArticleDOI
TL;DR: In this paper, the authors touch on the key fields within structured light from the perspective of experts in those areas, providing insight into the current state and the challenges their respective fields face, as well as the exciting prospects for the future that are yet to be realized.
Abstract: Structured light refers to the generation and application of custom light fields. As the tools and technology to create and detect structured light have evolved, steadily the applications have begun to emerge. This roadmap touches on the key fields within structured light from the perspective of experts in those areas, providing insight into the current state and the challenges their respective fields face. Collectively the roadmap outlines the venerable nature of structured light research and the exciting prospects for the future that are yet to be realized.

639 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present opportunities for future research on OI, organised at different levels of analysis, discuss some of the contingencies at these different levels, and argue that future research needs to study OI - originally an organisational-level phenomenon - across multiple levels of analysis.
Abstract: This paper provides an overview of the main perspectives and themes emerging in research on open innovation (OI). The paper is the result of a collaborative process among several OI scholars – having a common basis in the recurrent Professional Development Workshop on ‘Researching Open Innovation’ at the Annual Meeting of the Academy of Management. In this paper, we present opportunities for future research on OI, organised at different levels of analysis. We discuss some of the contingencies at these different levels, and argue that future research needs to study OI – originally an organisational-level phenomenon – across multiple levels of analysis. While our integrative framework allows comparing, contrasting and integrating various perspectives at different levels of analysis, further theorising will be needed to advance OI research. On this basis, we propose some new research categories as well as questions for future research – particularly those that span across research domains that have so far developed in isolation.

623 citations


Journal ArticleDOI
01 Sep 2017-Science
TL;DR: It is demonstrated that under reaction conditions, mobilized Cu ions can travel through zeolite windows and form transient ion pairs that participate in an oxygen (O2)–mediated CuI→CuII redox step integral to SCR.
Abstract: Copper ions exchanged into zeolites are active for the selective catalytic reduction (SCR) of nitrogen oxides (NO x ) with ammonia (NH3), but the low-temperature rate dependence on copper (Cu) volumetric density is inconsistent with reaction at single sites. We combine steady-state and transient kinetic measurements, x-ray absorption spectroscopy, and first-principles calculations to demonstrate that under reaction conditions, mobilized Cu ions can travel through zeolite windows and form transient ion pairs that participate in an oxygen (O2)-mediated CuI→CuII redox step integral to SCR. Electrostatic tethering to framework aluminum centers limits the volume that each ion can explore and thus its capacity to form an ion pair. The dynamic, reversible formation of multinuclear sites from mobilized single atoms represents a distinct phenomenon that falls outside the conventional boundaries of a heterogeneous or homogeneous catalyst.

594 citations


Journal ArticleDOI
TL;DR: This meta-analysis identified 207 studies that had tracked changes in measures of personality traits during interventions, including true experiments and pre-post change designs, and found that patients presenting with anxiety disorders changed the most and patients being treated for substance use changed the least.
Abstract: The current meta-analysis investigated the extent to which personality traits changed as a result of intervention, with the primary focus on clinical interventions. We identified 207 studies that had tracked changes in measures of personality traits during interventions, including true experiments and pre-post change designs. Interventions were associated with marked changes in personality trait measures over an average time of 24 weeks (e.g., d = .37). Additional analyses showed that the increases replicated across experimental and nonexperimental designs, for nonclinical interventions, and persisted in longitudinal follow-ups of samples beyond the course of intervention. Emotional stability was the primary trait domain showing changes as a result of therapy, followed by extraversion. The type of therapy employed was not strongly associated with the amount of change in personality traits. Patients presenting with anxiety disorders changed the most, and patients being treated for substance use changed the least. The relevance of the results for theory and social policy is discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors describe the CMS trigger system, which consists of two levels designed to select events of potential physics interest from a GHz (MHz) interaction rate of proton-proton (heavy ion) collisions.
Abstract: This paper describes the CMS trigger system and its performance during Run 1 of the LHC. The trigger system consists of two levels designed to select events of potential physics interest from a GHz (MHz) interaction rate of proton-proton (heavy ion) collisions. The first level of the trigger is implemented in hardware, and selects events containing detector signals consistent with an electron, photon, muon, tau lepton, jet, or missing transverse energy. A programmable menu of up to 128 object-based algorithms is used to select events for subsequent processing. The trigger thresholds are adjusted to the LHC instantaneous luminosity during data taking in order to restrict the output rate to 100 kHz, the upper limit imposed by the CMS readout electronics. The second level, implemented in software, further refines the purity of the output stream, selecting an average rate of 400 Hz for offline event storage. The objectives, strategy and performance of the trigger system during the LHC Run 1 are described.

Journal ArticleDOI
TL;DR: It is shown that for any denoising algorithm satisfying an asymptotic criterion, called bounded denoisers, Plug-and-Play ADMM converges to a fixed point under a continuation scheme.
Abstract: The alternating direction method of multipliers (ADMM) is a widely used algorithm for solving constrained optimization problems in image restoration. Among many useful features, one critical feature of the ADMM algorithm is its modular structure, which allows one to plug in any off-the-shelf image denoising algorithm for a subproblem in the ADMM algorithm. Because of this plug-in nature, this type of ADMM algorithm has been coined “Plug-and-Play ADMM.” Plug-and-Play ADMM has demonstrated promising empirical results in a number of recent papers. However, it is unclear under what conditions, and with which denoising algorithms, convergence is guaranteed. Also, since Plug-and-Play ADMM uses a specific way to split the variables, it is unclear whether fast implementations can be made for common Gaussian and Poissonian image restoration problems. In this paper, we propose a Plug-and-Play ADMM algorithm with provable fixed-point convergence. We show that for any denoising algorithm satisfying an asymptotic criterion, called bounded denoisers, Plug-and-Play ADMM converges to a fixed point under a continuation scheme. We also present fast implementations for two image restoration problems on superresolution and single-photon imaging. We compare Plug-and-Play ADMM with state-of-the-art algorithms in each problem type and demonstrate promising experimental results of the algorithm.
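
The iteration described above can be sketched in a few lines. The block below uses the simplest quadratic data-fidelity term (denoising) so the x-update has a closed form, and SciPy's Gaussian filter stands in for the plug-in denoiser; the continuation scheme and bounded-denoiser conditions analyzed in the paper are not reproduced, and all parameter values are arbitrary.

```python
# Plug-and-Play ADMM sketch: the regularizer is implicit in whatever denoiser
# is plugged into the v-update; here a Gaussian filter is used as a stand-in.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm(y, rho=1.0, sigma=1.0, iters=30):
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (v - u)) / (1.0 + rho)    # closed-form x-update for 0.5*||y - x||^2
        v = gaussian_filter(x + u, sigma=sigma)  # "plug in" an off-the-shelf denoiser
        u = u + x - v                            # scaled dual-variable update
    return x

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0          # toy piecewise-constant image
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
restored = pnp_admm(noisy)
print("MSE noisy:", float(np.mean((noisy - clean) ** 2)),
      "MSE restored:", float(np.mean((restored - clean) ** 2)))
```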

Journal ArticleDOI
Khachatryan1, Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam  +2285 moreInstitutions (147)
TL;DR: In this paper, improved jet energy scale corrections, based on a data sample corresponding to an integrated luminosity of 19.7 fb^(-1) collected by the CMS experiment in proton-proton collisions at a center-of-mass energy of 8 TeV, are presented.
Abstract: Improved jet energy scale corrections, based on a data sample corresponding to an integrated luminosity of 19.7 fb^(-1) collected by the CMS experiment in proton-proton collisions at a center-of-mass energy of 8 TeV, are presented. The corrections as a function of pseudorapidity η and transverse momentum p_T are extracted from data and simulated events combining several channels and methods. They account successively for the effects of pileup, uniformity of the detector response, and residual data-simulation jet energy scale differences. Further corrections, depending on the jet flavor and distance parameter (jet size) R, are also presented. The jet energy resolution is measured in data and simulated events and is studied as a function of pileup, jet size, and jet flavor. Typical jet energy resolutions at the central rapidities are 15–20% at 30 GeV, about 10% at 100 GeV, and 5% at 1 TeV. The studies exploit events with dijet topology, as well as photon+jet, Z+jet and multijet events. Several new techniques are used to account for the various sources of jet energy scale corrections, and a full set of uncertainties, and their correlations, are provided. The final uncertainties on the jet energy scale are below 3% across the phase space considered by most analyses (p_T > 30 GeV and |η| < 5.0). In the barrel region an uncertainty below 1% for p_T > 30 GeV is reached, when excluding the jet flavor uncertainties, which are provided separately for different jet flavors. A new benchmark for jet energy scale determination at hadron colliders is achieved with 0.32% uncertainty for jets with p_T of the order of 165–330 GeV, and |η| < 0.8.

Proceedings ArticleDOI
04 Aug 2017
TL;DR: The Motif-based Approximate Personalized PageRank (MAPPR) algorithm is developed, which finds clusters containing a seed node with minimal motif conductance, a generalization of the conductance metric to network motifs; a theory of node neighborhoods for finding sets that have small motif conductance is also developed.
Abstract: Local graph clustering methods aim to find a cluster of nodes by exploring a small region of the graph. These methods are attractive because they enable targeted clustering around a given seed node and are faster than traditional global graph clustering methods because their runtime does not depend on the size of the input graph. However, current local graph partitioning methods are not designed to account for the higher-order structures crucial to the network, nor can they effectively handle directed networks. Here we introduce a new class of local graph clustering methods that address these issues by incorporating higher-order network information captured by small subgraphs, also called network motifs. We develop the Motif-based Approximate Personalized PageRank (MAPPR) algorithm that finds clusters containing a seed node with minimal motif conductance, a generalization of the conductance metric for network motifs. We generalize existing theory to prove the fast running time (independent of the size of the graph) and obtain theoretical guarantees on the cluster quality (in terms of motif conductance). We also develop a theory of node neighborhoods for finding sets that have small motif conductance, and apply these results to the case of finding good seed nodes to use as input to the MAPPR algorithm. Experimental validation on community detection tasks in both synthetic and real-world networks shows that our new framework MAPPR outperforms the current edge-based personalized PageRank methodology.
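
For concreteness, the sketch below evaluates triangle-motif conductance of a node set by brute force, following the cut/volume generalization described above; it illustrates the objective that MAPPR minimizes, not the MAPPR algorithm itself, and the example graph and seed set are arbitrary.

```python
# Triangle-motif conductance: the cut counts triangles with endpoints on both
# sides of the partition, the volume counts triangle endpoints inside a side.
from itertools import combinations
import networkx as nx

def triangle_motif_conductance(G, S):
    S = set(S)
    cut = vol_in = vol_out = 0
    for a, b, c in combinations(G.nodes, 3):
        if G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c):  # a triangle instance
            inside = (a in S) + (b in S) + (c in S)
            vol_in += inside
            vol_out += 3 - inside
            if 0 < inside < 3:       # this triangle is cut by the partition
                cut += 1
    return cut / min(vol_in, vol_out)

G = nx.karate_club_graph()
print(triangle_motif_conductance(G, {0, 1, 2, 3, 7, 13}))
```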

Journal ArticleDOI
TL;DR: In this paper, a meta-aggregative approach was used to analyze the results of 14 selected studies to further understand the link between teachers' pedagogical beliefs and their educational uses of technology.
Abstract: This review was designed to further our understanding of the link between teachers’ pedagogical beliefs and their educational uses of technology. The synthesis of qualitative findings integrates the available evidence about this relationship with the ultimate goal being to facilitate the integration of technology in education. A meta-aggregative approach was utilized to analyze the results of the 14 selected studies. The findings are reported in terms of five synthesis statements, describing (1) the bi-directional relationship between pedagogical beliefs and technology use, (2) teachers’ beliefs as perceived barriers, (3) the association between specific beliefs with types of technology use, (4) the role of beliefs in professional development, and (5) the importance of the school context. By interpreting the results of the review, recommendations are provided for practitioners, policy makers, and researchers focusing on pre- and in-service teacher technology training.

Journal ArticleDOI
TL;DR: Recent distributed denial-of-service attacks demonstrate the high vulnerability of Internet of Things (IoT) systems and devices, and addressing this challenge will require scalable security solutions optimized for the IoT ecosystem.
Abstract: Recent distributed denial-of-service attacks demonstrate the high vulnerability of Internet of Things (IoT) systems and devices. Addressing this challenge will require scalable security solutions optimized for the IoT ecosystem.

Posted Content
Yonit Hochberg1, Yonit Hochberg2, A. N. Villano3, Andrei Afanasev4  +238 moreInstitutions (98)
TL;DR: The white paper summarizes the workshop "U.S. Cosmic Visions: New Ideas in Dark Matter" held at University of Maryland on March 23-25, 2017.
Abstract: This white paper summarizes the workshop "U.S. Cosmic Visions: New Ideas in Dark Matter" held at University of Maryland on March 23-25, 2017.

Journal ArticleDOI
TL;DR: A rapid search in PubMed shows that using "flow cytometry immunology" as a search term yields more than 68 000 articles, the first of which is not about lymphocytes as mentioned in this paper.
Abstract: The marriage between immunology and cytometry is one of the most stable and productive in the recent history of science. A rapid search in PubMed shows that, as of July 2017, using “flow cytometry immunology” as a search term yields more than 68 000 articles, the first of which, interestingly, is not about lymphocytes. It might be stated that, after a short engagement, the exchange of the wedding rings between immunology and cytometry officially occurred when the idea to link fluorochromes to monoclonal antibodies came about. After this, recognizing different types of cells became relatively easy and feasible not only by using a simple fluorescence microscope, but also by a complex and sometimes esoteric instrument, the flow cytometer that is able to count hundreds of cells in a single second, and can provide repetitive results in a tireless manner. Given this, the possibility to analyse immune phenotypes in a variety of clinical conditions has changed the use of the flow cytometer, which was incidentally invented in the late 1960s to measure cellular DNA by using intercalating dyes, such as ethidium bromide. The epidemics of HIV/AIDS in the 1980s then gave a dramatic impulse to the technology of counting specific cells, since it became clear that the quantification of the number of peripheral blood CD4+ T cells was crucial to follow the course of the infection, and eventually for monitoring the therapy. As a consequence, the development of flow cytometers that had to be easy-to-use in all clinical laboratories helped to widely disseminate this technology. Nowadays, it is rare to find an immunological paper or read a conference abstract in which the authors did not use flow cytometry as the main tool to dissect the immune system and identify its fine and complex functions. Of note, recent developments have created the sophisticated technology of mass cytometry, which is able to simultaneously identify dozens of molecules at the single cell level and allows us to better understand the complexity and beauty of the immune system.

Journal ArticleDOI
TL;DR: In this paper, the authors present measurements of bulk properties of the matter produced in Au+Au collisions at √sNN = 7.7, 11.5, 19.6, 27, and 39 GeV using identified hadrons from the STAR experiment in the Beam Energy Scan (BES) Program at the Relativistic Heavy Ion Collider (RHIC).
Abstract: We present measurements of bulk properties of the matter produced in Au+Au collisions at √sNN = 7.7, 11.5, 19.6, 27, and 39 GeV using identified hadrons (π±, K±, p, and p̄) from the STAR experiment in the Beam Energy Scan (BES) Program at the Relativistic Heavy Ion Collider (RHIC). Midrapidity (|y| < 0.1) results for multiplicity densities dN/dy, average transverse momenta ⟨pT⟩, and particle ratios are presented. The chemical and kinetic freeze-out dynamics at these energies are discussed and presented as a function of collision centrality and energy. These results constitute the systematic measurements of bulk properties of matter formed in heavy-ion collisions over a broad range of energy (or baryon chemical potential) at RHIC.

Journal ArticleDOI
TL;DR: A review of the development, application, and current capabilities of infrared laser-absorption spectroscopy (IR-LAS) sensors for combustion gases can be found in this paper.

Journal ArticleDOI
TL;DR: Results are consistent with theory, including a peak conductance that is proportional to tunnel coupling, saturates at 2e^{2}/h, decreases as expected with field-dependent gap, and collapses onto a simple scaling function in the dimensionless ratio of temperature and tunnel coupling.
Abstract: We report an experimental study of the scaling of zero-bias conductance peaks compatible with Majorana zero modes as a function of magnetic field, tunnel coupling, and temperature in one-dimensional structures fabricated from an epitaxial semiconductor-superconductor heterostructure. Results are consistent with theory, including a peak conductance that is proportional to tunnel coupling, saturates at 2e^{2}/h, decreases as expected with field-dependent gap, and collapses onto a simple scaling function in the dimensionless ratio of temperature and tunnel coupling.

Journal ArticleDOI
07 Apr 2017-Science
TL;DR: Direct visualization of hot-carrier migration in methylammonium lead iodide (CH3NH3PbI3) thin films by ultrafast transient absorption microscopy is reported, demonstrating three distinct transport regimes.
Abstract: The Shockley-Queisser limit for solar cell efficiency can be overcome if hot carriers can be harvested before they thermalize. Recently, carrier cooling time up to 100 picoseconds was observed in hybrid perovskites, but it is unclear whether these long-lived hot carriers can migrate long distance for efficient collection. We report direct visualization of hot-carrier migration in methylammonium lead iodide (CH3NH3PbI3) thin films by ultrafast transient absorption microscopy, demonstrating three distinct transport regimes. Quasiballistic transport was observed to correlate with excess kinetic energy, resulting in up to 230 nanometers transport distance that could overcome grain boundaries. The nonequilibrium transport persisted over tens of picoseconds and ~600 nanometers before reaching the diffusive transport limit. These results suggest potential applications of hot-carrier devices based on hybrid perovskites.

Journal ArticleDOI
TL;DR: In this paper, the authors describe five sets of recent findings on subjective well-being (SWB): (a) the multidimensionality of SWB; (b) circumstances that influence long-term SWB; (c) cultural differences in SWB; (d) the beneficial effects of SWB on health and social relationships; and (e) interventions to increase SWB.
Abstract: Recent decades have seen rapid growth in the science of subjective well-being (SWB), with 14,000 publications a year now broaching the topic. The insights of this growing scholarly literature can be helpful to psychologists working both in research and applied areas. The authors describe 5 sets of recent findings on SWB: (a) the multidimensionality of SWB; (b) circumstances that influence long-term SWB; (c) cultural differences in SWB; (d) the beneficial effects of SWB on health and social relationships; and (e) interventions to increase SWB. In addition, they outline the implications of these findings for the helping professions, organizational psychology, and for researchers. Finally, they describe current developments in national accounts of well-being, which capture the quality of life in societies beyond economic indicators and point toward policies that can enhance societal well-being.

Journal ArticleDOI
TL;DR: The authors uncover a noncanonical thermogenic mechanism through which beige fat controls whole-body energy homeostasis via Ca2+ cycling; through this mechanism, beige fat functions as a 'glucose sink' and improves glucose tolerance independently of body weight loss.
Abstract: Uncoupling protein 1 (UCP1) plays a central role in nonshivering thermogenesis in brown fat; however, its role in beige fat remains unclear. Here we report a robust UCP1-independent thermogenic mechanism in beige fat that involves enhanced ATP-dependent Ca2+ cycling by sarco/endoplasmic reticulum Ca2+-ATPase 2b (SERCA2b) and ryanodine receptor 2 (RyR2). Inhibition of SERCA2b impairs UCP1-independent beige fat thermogenesis in humans and mice as well as in pigs, a species that lacks a functional UCP1 protein. Conversely, enhanced Ca2+ cycling by activation of α1- and/or β3-adrenergic receptors or the SERCA2b-RyR2 pathway stimulates UCP1-independent thermogenesis in beige adipocytes. In the absence of UCP1, beige fat dynamically expends glucose through enhanced glycolysis, tricarboxylic acid metabolism and pyruvate dehydrogenase activity for ATP-dependent thermogenesis through the SERCA2b pathway; beige fat thereby functions as a 'glucose sink' and improves glucose tolerance independently of body weight loss. Our study uncovers a noncanonical thermogenic mechanism through which beige fat controls whole-body energy homeostasis via Ca2+ cycling.

Journal ArticleDOI
TL;DR: It is shown that increased lipid unsaturation is a metabolic marker for ovarian CSCs and a target for CSC-specific therapy; mechanistically, it is demonstrated that nuclear factor κB (NF-κB) directly regulates the expression levels of lipid desaturases, and that inhibition of desaturases blocks NF-κB signaling.

Journal ArticleDOI
TL;DR: In this paper, a meta-analysis examined the relationship between social presence and students' satisfaction and perceived learning in online courses, and found that the strength of the relationship between social presence and satisfaction was moderated by the course length, discipline area, and the scale used to measure social presence.

Journal ArticleDOI
TL;DR: Criteria that must be met during design of ligand-targeted drugs (LTDs) to achieve the required therapeutic potency with minimal toxicity are summarized.
Abstract: Safety and efficacy constitute the major criteria governing regulatory approval of any new drug. The best method to maximize safety and efficacy is to deliver a proven therapeutic agent with a targeting ligand that exhibits little affinity for healthy cells but high affinity for pathologic cells. The probability of regulatory approval can conceivably be further enhanced by exploiting the same targeting ligand, conjugated to an imaging agent, to select patients whose diseased tissues display sufficient targeted receptors for therapeutic efficacy. The focus of this Review is to summarize criteria that must be met during design of ligand-targeted drugs (LTDs) to achieve the required therapeutic potency with minimal toxicity. Because most LTDs are composed of a targeting ligand (e.g., organic molecule, aptamer, protein scaffold, or antibody), spacer, cleavable linker, and therapeutic warhead, criteria for successful design of each component will be described. Moreover, because obstacles to successful drug design can differ among human pathologies, limitations to drug delivery imposed by the unique characteristics of different diseases will be considered. With the explosion of genomic and transcriptomic data providing an ever-expanding selection of disease-specific targets, and with tools for high-throughput chemistry offering an escalating diversity of warheads, opportunities for innovating safe and effective LTDs has never been greater.

Proceedings Article
01 Aug 2017
TL;DR: This paper introduces a framework that generalizes several LDP protocols proposed in the literature and yields a simple and fast aggregation algorithm, whose accuracy can be precisely analyzed, resulting in two new protocols that provide better utility than protocols previously proposed.
Abstract: Protocols satisfying Local Differential Privacy (LDP) enable parties to collect aggregate information about a population while protecting each user’s privacy, without relying on a trusted third party. LDP protocols (such as Google’s RAPPOR) have been deployed in real-world scenarios. In these protocols, a user encodes his private information and perturbs the encoded value locally before sending it to an aggregator, who combines values that users contribute to infer statistics about the population. In this paper, we introduce a framework that generalizes several LDP protocols proposed in the literature. Our framework yields a simple and fast aggregation algorithm, whose accuracy can be precisely analyzed. Our in-depth analysis enables us to choose optimal parameters, resulting in two new protocols (i.e., Optimized Unary Encoding and Optimized Local Hashing) that provide better utility than protocols previously proposed. We present precise conditions for when each proposed protocol should be used, and perform experiments that demonstrate the advantage of our proposed protocols.
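
The encode-perturb-aggregate pattern described above can be sketched with a unary-encoding protocol. The reporting probabilities p = 1/2 and q = 1/(e^ε + 1) are intended to match the Optimized Unary Encoding variant, but the domain size, ε, and data below are invented for the example, so this is an illustrative sketch rather than a reference implementation.

```python
# Unary-encoding LDP sketch: each user perturbs a one-hot vector locally; the
# aggregator debiases the counts to estimate the frequency of each value.
import math
import random
from typing import List

def perturb(value: int, domain_size: int, eps: float) -> List[int]:
    p, q = 0.5, 1.0 / (math.exp(eps) + 1.0)
    report = []
    for v in range(domain_size):
        keep = p if v == value else q            # probability of sending a 1 for this position
        report.append(1 if random.random() < keep else 0)
    return report

def aggregate(reports: List[List[int]], domain_size: int, eps: float) -> List[float]:
    p, q = 0.5, 1.0 / (math.exp(eps) + 1.0)
    n = len(reports)
    ones = [sum(r[v] for r in reports) for v in range(domain_size)]
    return [(c - n * q) / (n * (p - q)) for c in ones]   # unbiased frequency estimates

random.seed(0)
data = [random.choice([0, 0, 1, 2]) for _ in range(20000)]   # true frequencies roughly [0.5, 0.25, 0.25]
reports = [perturb(x, 3, eps=1.0) for x in data]
print([round(f, 3) for f in aggregate(reports, 3, eps=1.0)])
```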

Journal ArticleDOI
TL;DR: In this paper, a comprehensive review of the published literature concerning the fluid mechanics and heat transfer mechanisms of liquid drop impact on a heated wall is provided, divided into four parts, each centered on one of the main heat transfer regimes: film evaporation, nucleate boiling, transition boiling, and film boiling.