Journal ArticleDOI
TL;DR: Insight is gained into how van der Waals interaction and chemical binding contribute to the adsorption of Li2Sn species for anchoring materials with strong, medium, and weak interactions, and it is discovered that an excessively strong binding strength can cause decomposition of the Li2Sn species.
Abstract: Although the rechargeable lithium–sulfur battery system has attracted significant attention due to its high theoretical specific energy, its implementation has been impeded by multiple challenges, especially the dissolution of intermediate lithium polysulfide (Li2Sn) species into the electrolyte. Introducing anchoring materials, which can induce strong binding interaction with Li2Sn species, has been demonstrated as an effective way to overcome this problem and achieve long-term cycling stability and high-rate performance. The interaction between Li2Sn species and anchoring materials should be studied at the atomic level in order to understand the mechanism behind the anchoring effect and to identify ideal anchoring materials to further improve the performance of Li–S batteries. Using a first-principles approach with van der Waals interaction included, we systematically investigate the adsorption of Li2Sn species on various two-dimensional layered materials (oxides, sulfides, and chlorides) and study the de...
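As a point of reference for the anchoring discussion above, the quantity usually reported in such first-principles studies is the adsorption (binding) energy of the Li2Sn cluster on the substrate. The sketch below states the standard definition; the sign convention and symbols are assumptions, not quotations from the paper.

```latex
% Adsorption (binding) energy of a Li2Sn cluster on a 2D substrate, as commonly
% defined in first-principles studies (sign convention assumed, not the paper's):
\begin{equation}
E_{\mathrm{ads}} \;=\; E_{\mathrm{substrate}+\mathrm{Li}_2\mathrm{S}_n}
                 \;-\; E_{\mathrm{substrate}}
                 \;-\; E_{\mathrm{Li}_2\mathrm{S}_n},
\end{equation}
% where a more negative $E_{\mathrm{ads}}$ indicates stronger anchoring; the
% van der Waals correction enters through the total energies on the right-hand side.
```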

739 citations


Journal ArticleDOI
27 Nov 2015-Science
TL;DR: A synthetic lethality network focused on the secretory pathway based exclusively on mutations was created and revealed a genetic cross-talk governing Golgi homeostasis, an additional subunit of the human oligosaccharyltransferase complex, and a phosphatidylinositol 4-kinase β adaptor hijacked by viruses.
Abstract: Although the genes essential for life have been identified in less complex model organisms, their elucidation in human cells has been hindered by technical barriers. We used extensive mutagenesis in haploid human cells to identify approximately 2000 genes required for optimal fitness under culture conditions. To study the principles of genetic interactions in human cells, we created a synthetic lethality network focused on the secretory pathway based exclusively on mutations. This revealed a genetic cross-talk governing Golgi homeostasis, an additional subunit of the human oligosaccharyltransferase complex, and a phosphatidylinositol 4-kinase β adaptor hijacked by viruses. The synthetic lethality map parallels observations made in yeast and projects a route forward to reveal genetic networks in diverse aspects of human cell biology.

739 citations


Journal ArticleDOI
TL;DR: The origins, cross-disciplinary evolution, and definition of "thick description" are reviewed in this paper, where the author provides guidelines for presenting "thick description" in non-ethnographic studies.
Abstract: The origins, cross-disciplinary evolution, and definition of "thick description" are reviewed. Despite its frequent use in the qualitative literature, the concept of "thick description" is often confusing to researchers at all levels. The roots of this confusion are explored and examples of "thick description" are provided. The article closes with guidelines for presenting "thick description" in written reports. Key Words: Thick Description, Ethnography, Grounded Theory, Phenomenology, Thick Interpretation, Thick Meaning, and Qualitative Writing. One of the most important concepts in the lexicon of qualitative researchers is "thick description." In fact, the Subject Index of virtually every major textbook on qualitative methods published during the last three decades includes one or more entries under either "thick description" or "description, thick" (Bogdan & Biklen, 2003; Creswell, 1998; Denzin, 1989; Denzin & Lincoln, 2005; Lincoln & Guba, 1985; Marshall & Rossman, 1999; Patton, 1990, to name but a few). Despite the widespread use and acceptance of the term "thick description" in qualitative research, there appears to be some confusion over precisely what the concept means (Holloway, 1997; Schwandt, 2001). Personally, I can relate to this confusion on two levels. First, in my own qualitative research and writing over the years, I have at times struggled to fully understand the concept of "thick description." Second, in my experience teaching and supervising qualitative research, I find that students and colleagues struggle in their attempts to understand and practice "thick description" in their work. It was this set of struggles that led me to study the concept of "thick description" more closely, and to share my findings with the readership of The Qualitative Report (TQR). The goals of this Brief Note are to (a) clarify the origins of the concept of "thick description"; (b) trace its evolution across various disciplines; (c) define the concept comprehensively; (d) provide exemplars of "thick description" in the published literature; and (e) offer guidelines for presenting "thick description" in non-ethnographic studies. In meeting these goals, I hope to bring some clarity and consensus to our understanding and usage of the concept "thick description." Origins of "Thick Description": Though many researchers cite North American anthropologist Clifford Geertz's (1973) The Interpretation of Cultures when they introduce "thick description," the term and concept originate, as Geertz himself notes, with Gilbert Ryle, a British metaphysical philosopher at the University of Oxford. The root of the concept can be found in Ryle's (1949) The Concept of Mind, where he discussed in great detail "the description of intellectual work" (p. 305). The first presentation of the actual term "thick" description appears to come from two of Ryle's lectures published in the mid-1960s, titled Thinking and Reflecting and The Thinking of Thoughts: What is "Le Penseur" Doing? Both lectures were published in Ryle's (1971) Collected Papers, Volume II, Collected Essays 1929-1968, and can be easily located by the interested qualitative researcher. For Ryle (1971), "thick" description involved ascribing intentionality to one's behavior. He used the following example: "A single golfer, with six golf balls in front of him [sic], hitting each of them, one after another, towards one and the same green. He [sic] then goes and collects the balls, comes back to where he [sic] was before, and does it again. What is he doing?" (p. 474). The "thin" description of this behavior is that the golfer is repeatedly hitting a little round white object with a club-like device toward a green. The "thick" description interprets the behavior within the context of the golf course and the game of golf, and ascribes thinking and intentionality to the observed behavior. …

739 citations


Journal ArticleDOI
TL;DR: The evidence that laboratory criteria for diagnosing DIC are present in nearly three-fourths of patients who died underscores the critical role of these tests in this and other clinical settings, thus suggesting that their assessment shall be considered a routine part of COVID-19 patient monitoring.
Abstract: The aim of this article is to provide a brief overview on the most frequent laboratory abnormalities encountered in patients with COVID-2019 infection. An electronic search was performed in Medline (PubMed interface), Scopus and Web of Science, using the keywords "2019 novel coronavirus" or "2019-nCoV" or "COVID-19", without date or language restrictions. The title, abstract and full text (when available) of all articles identified according to these search criteria were scrutinized by the authors, and those describing significant laboratory abnormalities in patients with severe COVID-19 infection were finally selected. The references of identified documents were also crosschecked for detecting additional studies. A particular mention shall be made for procalcitonin and coagulation tests. The former test does not appear substantially altered in patients with COVID-19 at admission, but the progressive increase of its value seemingly mirrors a worse prognosis. This is not unexpected, whereby serum procalcitonin levels are typically normal in patients with viral infections (or viral sepsis), whilst its gradual increase probably mirrors bacterial superinfection, which may then contribute to drive the clinical course towards unfavorable progression. The measurement of other innovative sepsis biomarkers, such as presepsin, for example, would probably help in increasing the accuracy in identification of severe COVID-19 cases, as well as for improving the current approach used for mortality risk prediction. As concerns hemostasis tests, the evidence that laboratory criteria for diagnosing DIC are present in nearly three-fourths of patients who died underscores the critical role of these tests in this and other clinical settings, thus suggesting that their assessment shall be considered a routine part of COVID-19 patient monitoring.

739 citations


Journal ArticleDOI
TL;DR: The combination of electrochemical reaction rate measurements and density functional theory computation shows that the high activity of anomalous Ru catalyst in alkaline solution originates from its suitable adsorption energies to some key reaction intermediates and reaction kinetics in the HER process.
Abstract: Hydrogen evolution reaction (HER) is a critical process due to its fundamental role in electrocatalysis. Practically, the development of high-performance electrocatalysts for HER in alkaline media is of great importance for the conversion of renewable energy to hydrogen fuel via photoelectrochemical water splitting. However, both mechanistic exploration and materials development for HER under alkaline conditions are very limited. Precious Pt metal, which still serves as the state-of-the-art catalyst for HER, is unable to guarantee a sustainable hydrogen supply. Here we report an anomalously structured Ru catalyst that shows 2.5 times higher hydrogen generation rate than Pt and is among the most active HER electrocatalysts yet reported in alkaline solutions. The identification of new face-centered cubic crystallographic structure of Ru nanoparticles was investigated by high-resolution transmission electron microscopy imaging, and its formation mechanism was revealed by spectroscopic characterization and theoretical analysis. For the first time, it is found that the Ru nanocatalyst showed a pronounced effect of the crystal structure on the electrocatalytic activity tested under different conditions. The combination of electrochemical reaction rate measurements and density functional theory computation shows that the high activity of anomalous Ru catalyst in alkaline solution originates from its suitable adsorption energies to some key reaction intermediates and reaction kinetics in the HER process.

739 citations


Journal ArticleDOI
TL;DR: Although a substantial minority of PTSD cases remits within months after onset, mean symptom duration is considerably longer than previously recognized, and PTSD risk differs markedly across trauma types.
Abstract: Background: Although post-traumatic stress disorder (PTSD) onset-persistence is thought to vary significantly by trauma type, most epidemiological surveys are incapable of assessing this because they evaluate lifetime PTSD only for traumas nominated by respondents as their 'worst.' Objective: To review research on associations of trauma type with PTSD in the WHO World Mental Health (WMH) surveys, a series of epidemiological surveys that obtained representative data on trauma-specific PTSD. Method: WMH Surveys in 24 countries (n = 68,894) assessed 29 lifetime traumas and evaluated PTSD twice for each respondent: once for the 'worst' lifetime trauma and separately for a randomly-selected trauma with weighting to adjust for individual differences in trauma exposures. PTSD onset-persistence was evaluated with the WHO Composite International Diagnostic Interview. Results: In total, 70.4% of respondents experienced lifetime traumas, with exposure averaging 3.2 traumas per capita. Substantial between-trauma differences were found in PTSD onset but less in persistence. Traumas involving interpersonal violence had highest risk. Burden of PTSD, determined by multiplying trauma prevalence by trauma-specific PTSD risk and persistence, was 77.7 person-years/100 respondents. The trauma types with highest proportions of this burden were rape (13.1%), other sexual assault (15.1%), being stalked (9.8%), and unexpected death of a loved one (11.6%). The first three of these four represent relatively uncommon traumas with high PTSD risk and the last a very common trauma with low PTSD risk. The broad category of intimate partner sexual violence accounted for nearly 42.7% of all person-years with PTSD. Prior trauma history predicted both future trauma exposure and future PTSD risk. Conclusions: Trauma exposure is common throughout the world, unequally distributed, and differential across trauma types with respect to PTSD risk. Although a substantial minority of PTSD cases remits within months after onset, mean symptom duration is considerably longer than previously recognized.
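The burden figure quoted above follows from the verbal definition in the abstract (trauma prevalence multiplied by trauma-specific PTSD risk and persistence, aggregated across trauma types); the sketch below writes that out explicitly, with symbols chosen here for illustration rather than taken from the paper.

```latex
% Burden of PTSD aggregated across trauma types, following the verbal
% definition in the abstract (symbols and indexing are illustrative):
\begin{equation}
\mathrm{Burden} \;=\; \sum_{t} P_t \times R_t \times D_t,
\end{equation}
% where, for trauma type $t$, $P_t$ is the lifetime prevalence of exposure,
% $R_t$ is the conditional risk of PTSD onset, and $D_t$ is the mean duration
% (persistence) of PTSD in years; summing over the 29 assessed trauma types and
% scaling per 100 respondents yields the reported 77.7 person-years/100 respondents.
```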

739 citations


Proceedings ArticleDOI
15 Apr 2017
TL;DR: RACE as discussed by the authors is a dataset for benchmark evaluation of methods in the reading comprehension task, collected from the English exams for middle and high school Chinese students aged 12 to 18.
Abstract: We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students aged 12 to 18, RACE consists of nearly 28,000 passages and nearly 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students’ ability in understanding and reasoning. In particular, the proportion of questions that require reasoning is much larger in RACE than in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at https://github.com/qizhex/RACE_AR_baselines.

739 citations


Journal ArticleDOI
TL;DR: Linear regression models require only two subjects per variable (SPV) for adequate estimation of regression coefficients, standard errors, and confidence intervals.

739 citations



Journal ArticleDOI
26 Jan 2017-PLOS ONE
TL;DR: Many different psychological, contextual, sociodemographic and physical barriers specific to certain risk groups were identified, and knowledge gaps in understanding influenza vaccine hesitancy were mapped to derive directions for further research and inform interventions in this area.
Abstract: Background Influenza vaccine hesitancy is a significant threat to global efforts to reduce the burden of seasonal and pandemic influenza. Potential barriers of influenza vaccination need to be identified to inform interventions to raise awareness, influenza vaccine acceptance and uptake. Objective This review aims to (1) identify relevant studies and extract individual barriers of seasonal and pandemic influenza vaccination for risk groups and the general public; and (2) map knowledge gaps in understanding influenza vaccine hesitancy to derive directions for further research and inform interventions in this area. Methods Thirteen databases covering the areas of Medicine, Bioscience, Psychology, Sociology and Public Health were searched for peer-reviewed articles published between the years 2005 and 2016. Following the PRISMA approach, 470 articles were selected and analyzed for significant barriers to influenza vaccine uptake or intention. The barriers for different risk groups and flu types were clustered according to a conceptual framework based on the Theory of Planned Behavior and discussed using the 4C model of reasons for non-vaccination. Results Most studies were conducted in the American and European region. Health care personnel (HCP) and the general public were the most studied populations, while parental decisions for children at high risk were under-represented. This study also identifies understudied concepts. A lack of confidence, inconvenience, calculation and complacency were identified to different extents as barriers to influenza vaccine uptake in risk groups. Conclusion Many different psychological, contextual, sociodemographic and physical barriers that are specific to certain risk groups were identified. While most sociodemographic and physical variables may be significantly related to influenza vaccine hesitancy, they cannot be used to explain its emergence or intensity. Psychological determinants were meaningfully related to uptake and should therefore be measured in a valid and comparable way. A compendium of measurements for future use is suggested as supporting information.

738 citations


Journal ArticleDOI
TL;DR: Evidence demonstrates favorable validity, reliability, and responsiveness of a patient-reported outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE) in a large, heterogeneous US sample of patients undergoing cancer treatment.
Abstract: Importance To integrate the patient perspective into adverse event reporting, the National Cancer Institute developed a patient-reported outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE). Objective To assess the construct validity, test-retest reliability, and responsiveness of PRO-CTCAE items. Design, Setting, and Participants A total of 975 adults with cancer undergoing outpatient chemotherapy and/or radiation therapy enrolled in this questionnaire-based study between January 2011 and February 2012. Eligible participants could read English and had no clinically significant cognitive impairment. They completed PRO-CTCAE items on tablet computers in clinic waiting rooms at 9 US cancer centers and community oncology practices at 2 visits 1 to 6 weeks apart. A subset completed PRO-CTCAE items during an additional visit 1 business day after the first visit. Main Outcomes and Measures Primary comparators were clinician-reported Eastern Cooperative Oncology Group Performance Status (ECOG PS) and the European Organisation for Research and Treatment of Cancer Core Quality of Life Questionnaire (QLQ-C30). Results A total of 940 of 975 (96.4%) and 852 of 940 (90.6%) participants completed PRO-CTCAE items at visits 1 and 2, respectively. At least 1 symptom was reported by 938 of 940 (99.8%) participants. Participants’ median age was 59 years; 57.3% were female, 32.4% had a high school education or less, and 17.1% had an ECOG PS of 2 to 4. All PRO-CTCAE items had at least 1 correlation in the expected direction with a QLQ-C30 scale (111 of 124, P P r = 0.43 [0.10-.56]; all P ≤ .006). Conclusions and Relevance Evidence demonstrates favorable validity, reliability, and responsiveness of PRO-CTCAE in a large, heterogeneous US sample of patients undergoing cancer treatment. Studies evaluating other measurement properties of PRO-CTCAE are under way to inform further development of PRO-CTCAE and its inclusion in cancer trials.

Journal ArticleDOI
TL;DR: In this article, the authors show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters.
Abstract: A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
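The linear model described above is the standard first-order Taylor expansion of the network around its initial parameters; written out below for concreteness, with notation chosen here rather than quoted from the paper.

```latex
% First-order Taylor expansion of the network output around the initial
% parameters \theta_0, giving the linearized model discussed in the abstract:
\begin{equation}
f_{\mathrm{lin}}(x;\theta) \;=\; f(x;\theta_0)
  \;+\; \nabla_\theta f(x;\theta_0)^{\top}\,(\theta - \theta_0),
\end{equation}
% in the infinite-width limit, gradient-descent training of $f$ is governed by
% this model, which is linear in $\theta$ with features $\nabla_\theta f(x;\theta_0)$.
```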

Journal ArticleDOI
TL;DR: In this paper, the authors present OSMnx, a new tool to make the collection of data and creation and analysis of street networks simple, consistent, automatable and sound from the perspectives of graph theory, transportation, and urban design.
Abstract: Urban scholars have studied street networks in various ways, but there are data availability and consistency limitations to the current urban planning/street network analysis literature. To address these challenges, this article presents OSMnx, a new tool to make the collection of data and creation and analysis of street networks simple, consistent, automatable and sound from the perspectives of graph theory, transportation, and urban design. OSMnx contributes five significant capabilities for researchers and practitioners: first, the automated downloading of political boundaries and building footprints; second, the tailored and automated downloading and constructing of street network data from OpenStreetMap; third, the algorithmic correction of network topology; fourth, the ability to save street networks to disk as shapefiles, GraphML, or SVG files; and fifth, the ability to analyze street networks, including calculating routes, projecting and visualizing networks, and calculating metric and topological measures. These measures include those common in urban design and transportation studies, as well as advanced measures of the structure and topology of the network. Finally, this article presents a simple case study using OSMnx to construct and analyze street networks in Portland, Oregon.
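As an illustration of the workflow the article describes (download, construct, save, and analyze a street network), the sketch below follows OSMnx's documented interface for the Portland case study; exact function locations and signatures vary between OSMnx releases, so treat this as a sketch rather than a version-pinned recipe.

```python
# Minimal OSMnx workflow mirroring the article's Portland case study.
# Function names follow OSMnx's documented top-level API; newer releases may
# expose some of them under submodules (e.g., ox.stats.basic_stats).
import osmnx as ox

# Download and construct the drivable street network from OpenStreetMap.
G = ox.graph_from_place("Portland, Oregon, USA", network_type="drive")

# Save the network to disk (GraphML is one of the supported formats).
ox.save_graphml(G, "portland_drive.graphml")

# Analyze: project the graph, compute common metric/topological measures,
# and visualize the network.
G_proj = ox.project_graph(G)
stats = ox.basic_stats(G_proj)   # e.g., intersection counts, street lengths
ox.plot_graph(G_proj)
```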

Journal ArticleDOI
TL;DR: The ability to provide lifesaving treatments for AKI provides a compelling argument to consider therapy for AKI as much of a basic right as it is to give antiretroviral drugs to treat HIV in low-resource regions, especially because care needs only be given for a …

Journal ArticleDOI
TL;DR: Three different approaches are compared: “Sum” distributions, postulated undated events, and kernel density approaches; their suitability for visualizing the results from chronological and geographic analyses is considered for cases with and without useful prior information.
Abstract: Bayesian models have proved very powerful in analyzing large datasets of radiocarbon (14C) measurements from specific sites and in regional cultural or political models. These models require the prior for the underlying processes that are being described to be defined, including the distribution of underlying events. Chronological information is also incorporated into Bayesian models used in DNA research, with the use of Skyline plots to show demographic trends. Despite these advances, there remain difficulties in assessing whether data conform to the assumed underlying models, and in dealing with the type of artifacts seen in Sum plots. In addition, existing methods are not applicable for situations where it is not possible to quantify the underlying process, or where sample selection is thought to have filtered the data in a way that masks the original event distribution. In this paper three different approaches are compared: “Sum” distributions, postulated undated events, and kernel density approaches. Their implementation in the OxCal program is described and their suitability for visualizing the results from chronological and geographic analyses considered for cases with and without useful prior information. The conclusion is that kernel density analysis is a powerful method that could be much more widely applied in a wide range of dating applications.
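For readers unfamiliar with the kernel density idea favored in the conclusion, the sketch below contrasts a Gaussian KDE with a raw "Sum"-style histogram on an invented set of event dates. It is a generic illustration of the smoothing behavior, not the OxCal implementation, and all dates are hypothetical.

```python
# Generic illustration of kernel density estimation applied to dated events,
# compared with a naive histogram. The "dates" are invented for illustration
# and this is not the algorithm implemented in OxCal.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical posterior mean dates (cal BP) for 40 events, in two phases.
dates = np.concatenate([rng.normal(4500, 60, 25), rng.normal(4150, 40, 15)])

grid = np.linspace(3900, 4800, 500)   # evaluation grid in cal BP
kde = gaussian_kde(dates)             # bandwidth chosen by Scott's rule
density = kde(grid)                   # smooth estimate of event density

# A Sum-style alternative: a raw histogram, which tends to show spiky artifacts.
hist, edges = np.histogram(dates, bins=30, range=(3900, 4800), density=True)
```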

Journal ArticleDOI
TL;DR: The recent advances in electron detection and image processing are reviewed and the exciting new opportunities that they offer to structural biology research are illustrated.

Proceedings ArticleDOI
26 Jul 2015
TL;DR: This work proposes a deep learning based classification method that hierarchically constructs high-level features in an automated way and exploits a Convolutional Neural Network to encode pixels' spectral and spatial information and a Multi-Layer Perceptron to conduct the classification task.
Abstract: Spectral observations along the spectrum in many narrow spectral bands through hyperspectral imaging provide valuable information towards material and object recognition, which can be considered a classification task. Most of the existing studies and research efforts follow the conventional pattern recognition paradigm, which is based on the construction of complex handcrafted features. However, it is rarely known which features are important for the problem at hand. In contrast to these approaches, we propose a deep learning based classification method that hierarchically constructs high-level features in an automated way. Our method exploits a Convolutional Neural Network to encode pixels' spectral and spatial information and a Multi-Layer Perceptron to conduct the classification task. Experimental results and quantitative validation on widely used datasets showcase the potential of the developed approach for accurate hyperspectral data classification.
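A minimal sketch of the architecture described above: a small CNN encodes the spectral and spatial context of each pixel, and a multi-layer perceptron produces the class scores. The band count, patch size, layer widths, and number of classes are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a hyperspectral pixel classifier: a CNN encodes spectral (channel)
# and spatial (patch) context, and an MLP classifies the center pixel.
import torch
import torch.nn as nn

class HyperspectralNet(nn.Module):
    def __init__(self, num_bands: int = 103, patch: int = 5, num_classes: int = 9):
        super().__init__()
        self.cnn = nn.Sequential(                      # spectral-spatial encoder
            nn.Conv2d(num_bands, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.mlp = nn.Sequential(                      # per-pixel classifier
            nn.Flatten(),
            nn.Linear(128 * patch * patch, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_bands, patch, patch) patches centered on pixels to label
        return self.mlp(self.cnn(x))

model = HyperspectralNet()
scores = model(torch.randn(8, 103, 5, 5))   # -> (8, 9) class scores
```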

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Baseline accuracies for both face detection and face recognition from commercial and open source algorithms demonstrate the challenge offered by this new unconstrained benchmark.
Abstract: Rapid progress in unconstrained face recognition has resulted in a saturation in recognition accuracy for current benchmark datasets. While important for early progress, a chief limitation in most benchmark datasets is the use of a commodity face detector to select face imagery. The implication of this strategy is restricted variations in face pose and other confounding factors. This paper introduces the IARPA Janus Benchmark A (IJB-A), a publicly available media in the wild dataset containing 500 subjects with manually localized face images. Key features of the IJB-A dataset are: (i) full pose variation, (ii) joint use for face recognition and face detection benchmarking, (iii) a mix of images and videos, (iv) wider geographic variation of subjects, (v) protocols supporting both open-set identification (1∶N search) and verification (1∶1 comparison), (vi) an optional protocol that allows modeling of gallery subjects, and (vii) ground truth eye and nose locations. The dataset has been developed using 1,501,267 crowd-sourced annotations. Baseline accuracies for both face detection and face recognition from commercial and open source algorithms demonstrate the challenge offered by this new unconstrained benchmark.

Book
13 Aug 2019
TL;DR: In the last two decades, Bell's theorem has been a central theme of research from a variety of perspectives, mainly motivated by quantum information science, where the nonlocality of quantum theory underpins many of the advantages afforded by a quantum processing of information.
Abstract: Bell's 1964 theorem, which states that the predictions of quantum theory cannot be accounted for by any local theory, represents one of the most profound developments in the foundations of physics. In the last two decades, Bell's theorem has been a central theme of research from a variety of perspectives, mainly motivated by quantum information science, where the nonlocality of quantum theory underpins many of the advantages afforded by a quantum processing of information. The focus of this review is to a large extent oriented by these later developments. We review the main concepts and tools which have been developed to describe and study the nonlocality of quantum theory, and which have raised this topic to the status of a full sub-field of quantum information science.
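For concreteness, the quantitative statement behind Bell's theorem is often given in the CHSH form, reproduced below as a standard textbook example (not quoted from this review).

```latex
% CHSH form of Bell's theorem: for measurement settings a, a' on one side and
% b, b' on the other, any local hidden-variable theory obeys
\begin{equation}
S \;=\; \bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \;\le\; 2,
\end{equation}
% whereas quantum theory allows values up to $2\sqrt{2}$ (Tsirelson's bound),
% which is the sense in which its predictions cannot be reproduced locally.
```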

Journal ArticleDOI
TL;DR: A comprehensive survey of photo-CRP reactions can be found in this article, where a large number of methods are summarized and further classified into subcategories based on the specific reagents, catalysts, etc., involved.
Abstract: The use of light to mediate controlled radical polymerization has emerged as a powerful strategy for rational polymer synthesis and advanced materials fabrication. This review provides a comprehensive survey of photocontrolled, living radical polymerizations (photo-CRPs). From the perspective of mechanism, all known photo-CRPs are divided into either (1) intramolecular photochemical processes or (2) photoredox processes. Within these mechanistic regimes, a large number of methods are summarized and further classified into subcategories based on the specific reagents, catalysts, etc., involved. To provide a clear understanding of each subcategory, reaction mechanisms are discussed. In addition, applications of photo-CRP reported so far, which include surface fabrication, particle preparation, photoresponsive gel design, and continuous flow technology, are summarized. We hope this review will not only provide informative knowledge to researchers in this field but also stimulate new ideas and applications to further advance photocontrolled reactions.

Journal ArticleDOI
Peter A. R. Ade, Nabila Aghanim, Monique Arnaud, M. Ashdown, J. Aumont, Carlo Baccigalupi, A. J. Banday, R. B. Barreiro, J. G. Bartlett, N. Bartolo, E. Battaner, Richard A. Battye, K. Benabed, Alain Benoit, A. Benoit-Lévy, J.-P. Bernard, Marco Bersanelli, P. Bielewicz, J. J. Bock, Anna Bonaldi, Laura Bonavera, J. R. Bond, Julian Borrill, François R. Bouchet, M. Bucher, Carlo Burigana, R. C. Butler, Erminia Calabrese, Jean-François Cardoso, A. Catalano, Anthony Challinor, A. Chamballu, R.-R. Chary, H. C. Chiang, P. R. Christensen, Sarah E. Church, David L. Clements, S. Colombi, L. P. L. Colombo, C. Combet, B. Comis, F. Couchot, A. Coulais, B. P. Crill, A. Curto, F. Cuttaia, Luigi Danese, R. D. Davies, R. J. Davis, P. de Bernardis, A. de Rosa, G. de Zotti, Jacques Delabrouille, F.-X. Désert, Jose M. Diego, Klaus Dolag, H. Dole, S. Donzelli, Olivier Doré, Marian Douspis, A. Ducout, X. Dupac, George Efstathiou, F. Elsner, Torsten A. Enßlin, H. K. Eriksen, E. Falgarone, James R. Fergusson, Fabio Finelli, Olivier Forni, M. Frailis, A. A. Fraisse, E. Franceschi, A. Frejsel, S. Galeotta, S. Galli, K. Ganga, M. Giard, Y. Giraud-Héraud, E. Gjerløw, J. González-Nuevo, Krzysztof M. Gorski, Serge Gratton, A. Gregorio, Alessandro Gruppuso, Jon E. Gudmundsson, F. K. Hansen, Duncan Hanson, D. L. Harrison, Sophie Henrot-Versille, C. Hernández-Monteagudo, D. Herranz, S. R. Hildebrandt, E. Hivon, Michael P. Hobson, W. A. Holmes, Allan Hornstrup, W. Hovest, Kevin M. Huffenberger 
TL;DR: In this article, the authors present constraints on cosmological parameters using number counts as a function of redshift for a sub-sample of 189 galaxy clusters from the Planck SZ (PSZ) catalogue.
Abstract: We present constraints on cosmological parameters using number counts as a function of redshift for a sub-sample of 189 galaxy clusters from the Planck SZ (PSZ) catalogue. The PSZ is selected through the signature of the Sunyaev--Zeldovich (SZ) effect, and the sub-sample used here has a signal-to-noise threshold of seven, with each object confirmed as a cluster and all but one with a redshift estimate. We discuss the completeness of the sample and our construction of a likelihood analysis. Using a relation between mass $M$ and SZ signal $Y$ calibrated to X-ray measurements, we derive constraints on the power spectrum amplitude $\sigma_8$ and matter density parameter $\Omega_{\mathrm{m}}$ in a flat $\Lambda$CDM model. We test the robustness of our estimates and find that possible biases in the $Y$--$M$ relation and the halo mass function are larger than the statistical uncertainties from the cluster sample. Assuming the X-ray determined mass to be biased low relative to the true mass by between zero and 30%, motivated by comparison of the observed mass scaling relations to those from a set of numerical simulations, we find that $\sigma_8=0.75\pm 0.03$, $\Omega_{\mathrm{m}}=0.29\pm 0.02$, and $\sigma_8(\Omega_{\mathrm{m}}/0.27)^{0.3} = 0.764 \pm 0.025$. The value of $\sigma_8$ is degenerate with the mass bias; if the latter is fixed to a value of 20% we find $\sigma_8(\Omega_{\mathrm{m}}/0.27)^{0.3}=0.78\pm 0.01$ and a tighter one-dimensional range $\sigma_8=0.77\pm 0.02$. We find that the larger values of $\sigma_8$ and $\Omega_{\mathrm{m}}$ preferred by Planck's measurements of the primary CMB anisotropies can be accommodated by a mass bias of about 40%. Alternatively, consistency with the primary CMB constraints can be achieved by inclusion of processes that suppress power on small scales relative to the $\Lambda$CDM model, such as a component of massive neutrinos (abridged).

Journal ArticleDOI
TL;DR: It is suggested that other topics also need addressing, including better assessment of the wider benefits of intercropping in terms of multiple ecosystem services, collaboration with agricultural engineering, and more effective interdisciplinary research.
Abstract: Intercropping is a farming practice involving two or more crop species, or genotypes, growing together and coexisting for a time. On the fringes of modern intensive agriculture, intercropping is important in many subsistence or low-input/resource-limited agricultural systems. By allowing genuine yield gains without increased inputs, or greater stability of yield with decreased inputs, intercropping could be one route to delivering ‘sustainable intensification’. We discuss how recent knowledge from agronomy, plant physiology and ecology can be combined with the aim of improving intercropping systems. Recent advances in agronomy and plant physiology include better understanding of the mechanisms of interactions between crop genotypes and species – for example, enhanced resource availability through niche complementarity. Ecological advances include better understanding of the context-dependency of interactions, the mechanisms behind disease and pest avoidance, the links between above- and below-ground systems, and the role of microtopographic variation in coexistence. This improved understanding can guide approaches for improving intercropping systems, including breeding crops for intercropping. Although such advances can help to improve intercropping systems, we suggest that other topics also need addressing. These include better assessment of the wider benefits of intercropping in terms of multiple ecosystem services, collaboration with agricultural engineering, and more effective interdisciplinary research.

Book ChapterDOI
TL;DR: This chapter reviews and synthesizes our current understanding of the shocks that drive economic fluctuations, concluding that we are much closer to understanding them than we were 20 years ago.
Abstract: This chapter reviews and synthesizes our current understanding of the shocks that drive economic fluctuations. The chapter begins with an illustration of the problem of identifying macroeconomic shocks, followed by an overview of the many recent innovations for identifying shocks. It then reviews in detail three main types of shocks: monetary, fiscal, and technology. After surveying the literature, each section presents new estimates that compare and synthesize key parts of the literature. The penultimate section briefly summarizes a few additional shocks. The final section analyzes the extent to which the leading shock candidates can explain fluctuations in output and hours. It concludes that we are much closer to understanding the shocks that drive economic fluctuations than we were 20 years ago.

Journal ArticleDOI
19 Apr 2016-PLOS ONE
TL;DR: This paper aims to provide a new well-founded basis for unsupervised anomaly detection research by publishing the source code and the datasets, and reveals the strengths and weaknesses of the different approaches for the first time.
Abstract: Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, computational effort, the impact of parameter settings as well as the global/local anomaly detection behavior are outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks.
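As a minimal illustration of the kind of unsupervised detectors compared in such evaluations, the sketch below runs two common scikit-learn algorithms (one global, one local) on synthetic unlabeled data; it is not a reproduction of the paper's 19-algorithm benchmark.

```python
# Minimal sketch of unsupervised anomaly detection on unlabeled data, using two
# common detectors from scikit-learn. Data and contamination rate are invented.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Hypothetical dataset: mostly "normal" points plus a few scattered anomalies.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
anomalies = rng.uniform(low=-6.0, high=6.0, size=(15, 2))
X = np.vstack([normal, anomalies])

# Global detector: isolation forest scores points by how easily they are isolated.
iso = IsolationForest(contamination=0.03, random_state=0).fit(X)
iso_labels = iso.predict(X)        # +1 = inlier, -1 = anomaly

# Local detector: LOF compares each point's density to that of its neighbors.
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.03)
lof_labels = lof.fit_predict(X)    # +1 = inlier, -1 = anomaly

print("IsolationForest flagged:", int((iso_labels == -1).sum()))
print("LocalOutlierFactor flagged:", int((lof_labels == -1).sum()))
```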

Journal ArticleDOI
TL;DR: Numerical approaches are reviewed and implemented for depicting the cellular mechanics within the hydrogel as well as for predicting mechanical properties to achieve the desired hydrogel construct, considering cell density, distribution and material-cell interaction.
Abstract: Bioprinting is a process based on additive manufacturing from materials containing living cells. These materials, often referred to as bioink, are based on cytocompatible hydrogel precursor formulations, which gel in a manner compatible with different bioprinting approaches. The bioink properties before, during and after gelation are essential for its printability, comprising such features as achievable structural resolution, shape fidelity and cell survival. However, it is the final properties of the matured bioprinted tissue construct that are crucial for the end application. During tissue formation these properties are influenced by the amount of cells present in the construct, their proliferation, migration and interaction with the material. A calibrated computational framework is able to predict the tissue development and maturation and to optimize the bioprinting input parameters such as the starting material, the initial cell loading and the construct geometry. In this contribution relevant bioink properties are reviewed and discussed using the most popular bioprinting approaches as examples. The effect of cells on hydrogel processing and vice versa is highlighted. Furthermore, numerical approaches are reviewed and implemented for depicting the cellular mechanics within the hydrogel as well as for predicting mechanical properties to achieve the desired hydrogel construct, considering cell density, distribution and material-cell interaction.

Journal ArticleDOI
TL;DR: This implementation of the MDIQKD method sets a new distance record and achieves a distance that the traditional Bennett-Brassard 1984 QKD protocol could not reach with the same detection devices, even with ideal single-photon sources.
Abstract: A protocol for secure quantum communications has been demonstrated over a record-breaking distance of 404 km.

Book ChapterDOI
08 Sep 2018
TL;DR: In this article, a novel unsupervised domain adaptation (UDA) framework is proposed based on an iterative self-training procedure, where the problem is formulated as latent variable loss minimization and can be solved by alternately generating pseudo labels on target data and re-training the model with these labels.
Abstract: Recent deep networks achieved state of the art performance on a variety of semantic segmentation tasks. Despite such progress, these models often face challenges in real world “wild tasks” where large difference between labeled training/source data and unseen test/target data exists. In particular, such difference is often referred to as “domain gap”, and could cause significantly decreased performance which cannot be easily remedied by further increasing the representation power. Unsupervised domain adaptation (UDA) seeks to overcome such problem without target domain labels. In this paper, we propose a novel UDA framework based on an iterative self-training (ST) procedure, where the problem is formulated as latent variable loss minimization, and can be solved by alternately generating pseudo labels on target data and re-training the model with these labels. On top of ST, we also propose a novel class-balanced self-training (CBST) framework to avoid the gradual dominance of large classes on pseudo-label generation, and introduce spatial priors to refine generated labels. Comprehensive experiments show that the proposed methods achieve state of the art semantic segmentation performance under multiple major UDA settings.
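A minimal sketch of the class-balanced pseudo-labeling step described above: per-class confidence thresholds keep roughly the same proportion of confident pixels from each class, so that frequent classes do not dominate pseudo-label generation. Array shapes, the retention fraction, and the function interface are assumptions for illustration, not the authors' released code.

```python
# Sketch of one round of class-balanced self-training (CBST) for segmentation:
# pseudo-labels are kept only where the class probability exceeds a per-class
# threshold chosen so each class contributes a similar share of confident pixels.
import numpy as np

def class_balanced_pseudo_labels(probs: np.ndarray, keep_frac: float = 0.2,
                                 ignore_index: int = 255) -> np.ndarray:
    """probs: (N, C, H, W) softmax outputs of the source-trained model on
    unlabeled target images. Returns pseudo-label maps of shape (N, H, W),
    with low-confidence pixels set to `ignore_index`."""
    n, c, h, w = probs.shape
    hard = probs.argmax(axis=1)     # predicted class per pixel
    conf = probs.max(axis=1)        # its probability
    labels = np.full((n, h, w), ignore_index, dtype=np.int64)
    for cls in range(c):
        mask = hard == cls
        if not mask.any():
            continue
        # Per-class threshold: keep the top `keep_frac` most confident pixels
        # of this class, preventing large classes from dominating.
        thresh = np.quantile(conf[mask], 1.0 - keep_frac)
        labels[mask & (conf >= thresh)] = cls
    return labels

# In the full loop one would alternate: (1) generate pseudo-labels on target
# data with this function, (2) retrain the segmentation network on them
# (ignoring `ignore_index` pixels), and repeat while gradually raising keep_frac.
```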

Book ChapterDOI
02 Sep 2016
TL;DR: It is shown that blind temporal learning on large and densely encoded time series using deep convolutional neural networks is viable and a strong candidate approach for this task, especially at low signal-to-noise ratio.
Abstract: We study the adaptation of convolutional neural networks to the complex-valued temporal radio signal domain. We compare the efficacy of radio modulation classification using naively learned features against expert feature based methods which are widely used today, and we show significant performance improvements. We show that blind temporal learning on large and densely encoded time series using deep convolutional neural networks is viable and a strong candidate approach for this task, especially at low signal-to-noise ratio.
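A minimal sketch of a convolutional classifier over raw complex-valued (I/Q) time series of the kind discussed above, written in PyTorch; the 2x128 input format, layer widths, and 11-class output are illustrative assumptions, not the authors' architecture.

```python
# Sketch of a small convolutional classifier for raw radio time series.
# The signal is represented as 2 channels (I and Q) of length 128; the layer
# widths and the 11-class output are illustrative assumptions.
import torch
import torch.nn as nn

class RadioCNN(nn.Module):
    def __init__(self, num_classes: int = 11, seq_len: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (seq_len // 2), 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, seq_len) raw I/Q samples
        return self.classifier(self.features(x))

model = RadioCNN()
logits = model(torch.randn(4, 2, 128))   # -> (4, 11) class scores
```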

Posted Content
TL;DR: This paper proposes an automated benchmark for facial manipulation detection, and shows that the use of additional domain-specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers.
Abstract: The rapid progress in synthetic image generation and manipulation has now come to a point where it raises significant concerns for the implications towards society. At best, this leads to a loss of trust in digital content, but could potentially cause further harm by spreading false information or fake news. This paper examines the realism of state-of-the-art image manipulations, and how difficult it is to detect them, either automatically or by humans. To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. In particular, the benchmark is based on DeepFakes, Face2Face, FaceSwap and NeuralTextures as prominent representatives for facial manipulations at random compression level and size. The benchmark is publicly available and contains a hidden test set as well as a database of over 1.8 million manipulated images. This dataset is over an order of magnitude larger than comparable, publicly available, forgery datasets. Based on this data, we performed a thorough analysis of data-driven forgery detectors. We show that the use of additional domain-specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers.

Journal ArticleDOI
TL;DR: In this paper, the authors present a perspective on heterogeneous materials, a new class of materials possessing superior combinations of strength and ductility that are not accessible to their homogeneous counterpar...
Abstract: Here we present a perspective on heterogeneous materials, a new class of materials possessing superior combinations of strength and ductility that are not accessible to their homogeneous counterpar...