scispace - formally typeset

Proceedings ArticleDOI
24 Oct 2016
TL;DR: This paper outlines a framework to analyze and verify both the runtime safety and the functional correctness of Ethereum contracts by translation to F*, a functional programming language aimed at program verification.
Abstract: Ethereum is a framework for cryptocurrencies which uses blockchain technology to provide an open global computing platform, called the Ethereum Virtual Machine (EVM). EVM executes bytecode on a simple stack machine. Programmers do not usually write EVM code; instead, they can program in a JavaScript-like language, called Solidity, that compiles to bytecode. Since the main purpose of EVM is to execute smart contracts that manage and transfer digital assets (called Ether), security is of paramount importance. However, writing secure smart contracts can be extremely difficult: due to the openness of Ethereum, both programs and pseudonymous users can call into the public methods of other programs, leading to potentially dangerous compositions of trusted and untrusted code. This risk was recently illustrated by an attack on TheDAO contract that exploited subtle details of the EVM semantics to transfer roughly $50M worth of Ether into the control of an attacker. In this paper, we outline a framework to analyze and verify both the runtime safety and the functional correctness of Ethereum contracts by translation to F*, a functional programming language aimed at program verification.

551 citations
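The TheDAO exploit mentioned above hinged on reentrancy: a contract that sends Ether before updating its own balance can be re-entered by the recipient's fallback code. A minimal Python toy of that pattern (illustrative only; the names and amounts are invented and this is not actual EVM semantics):

```python
class Vault:
    """Toy model of the state-update-after-call bug behind reentrancy."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.paid_out = 0

    def withdraw(self, who, callback):
        amount = self.balances.get(who, 0)
        if amount > 0:
            self.paid_out += amount   # the "send" happens first...
            callback()                # ...handing control to the caller...
            self.balances[who] = 0    # ...before the balance is zeroed (bug)

vault = Vault({"attacker": 10})
reentries = []

def fallback():
    # The attacker's callback re-enters withdraw() twice more.
    if len(reentries) < 2:
        reentries.append(1)
        vault.withdraw("attacker", fallback)

vault.withdraw("attacker", fallback)
print(vault.paid_out)  # 30: three payouts from a balance of 10
```

The fix in real contracts is the checks-effects-interactions pattern: update the balance before making the external call.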


Journal ArticleDOI
TL;DR: This paper reviews the types, clinical approach and treatment of bone metastases, which typically indicate a short-term prognosis in cancer patients.
Abstract: Bone is a frequent site of metastases and typically indicates a short-term prognosis in cancer patients. Once cancer has spread to the bones it can rarely be cured, but it can often still be treated to slow its growth. The majority of skeletal metastases are due to breast and prostate cancer. Bone metastasis is actually much more common than primary bone cancer, especially in adults. The diagnosis is based on signs, symptoms and imaging. New classes of drugs and new interventions have given these patients a better quality of life and improved their life expectancy. A multidisciplinary approach is necessary to treat patients with bone metastasis. In this paper we review the types, clinical approach and treatment of bone metastases.

551 citations


Journal ArticleDOI
01 Sep 2015-Genetics
TL;DR: This work reports that direct injection of in vitro–assembled Cas9-CRISPR RNA (crRNA) trans-activating crRNA (tracrRNA) ribonucleoprotein complexes into the gonad of Caenorhabditis elegans yields HDR edits at a high frequency.
Abstract: Homology-directed repair (HDR) of breaks induced by the RNA-programmed nuclease Cas9 has become a popular method for genome editing in several organisms. Most HDR protocols rely on plasmid-based expression of Cas9 and the gene-specific guide RNAs. Here we report that direct injection of in vitro–assembled Cas9-CRISPR RNA (crRNA) trans-activating crRNA (tracrRNA) ribonucleoprotein complexes into the gonad of Caenorhabditis elegans yields HDR edits at a high frequency. Building on our earlier finding that PCR fragments with 35-base homology are efficient repair templates, we developed an entirely cloning-free protocol for the generation of seamless HDR edits without selection. Combined with the co-CRISPR method, this protocol is sufficiently robust for use with low-efficiency guide RNAs and to generate complex edits, including ORF replacement and simultaneous tagging of two genes with fluorescent proteins.

550 citations


Journal ArticleDOI
24 Jul 2015-Science
TL;DR: How molecules ranging from lantibiotics and microcins to indoxyl sulfate and immune-modulatory oligosaccharides and lipids could affect the health and physiology of the whole organism, depending on the composition of an individual's microbial community, is reviewed.
Abstract: Developments in the use of genomics to guide natural product discovery and a recent emphasis on understanding the molecular mechanisms of microbiota-host interactions have converged on the discovery of small molecules from the human microbiome. Here, we review what is known about small molecules produced by the human microbiota. Numerous molecules representing each of the major metabolite classes have been found that have a variety of biological activities, including immune modulation and antibiosis. We discuss technologies that will affect how microbiota-derived molecules are discovered in the future and consider the challenges inherent in finding specific molecules that are critical for driving microbe-host and microbe-microbe interactions and understanding their biological relevance.

550 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this paper, the optimal probability for correctly discriminating the outputs of an image restoration algorithm from real images was studied and it was shown that as the mean distortion decreases, this probability must increase (indicating worse perceptual quality).
Abstract: Image restoration algorithms are typically evaluated by some distortion measure (e.g. PSNR, SSIM, IFC, VIF) or by human opinion scores that quantify perceived quality. In this paper, we prove mathematically that distortion and perceptual quality are at odds with each other. Specifically, we study the optimal probability for correctly discriminating the outputs of an image restoration algorithm from real images. We show that as the mean distortion decreases, this probability must increase (indicating worse perceptual quality). Contrary to common belief, this result holds true for any distortion measure, and is not only a problem of the PSNR or SSIM criteria. However, as we show experimentally, for some measures it is less severe (e.g. distance between VGG features). We also show that generative-adversarial-nets (GANs) provide a principled way to approach the perception-distortion bound. This constitutes theoretical support to their observed success in low-level vision tasks. Based on our analysis, we propose a new methodology for evaluating image restoration methods, and use it to perform an extensive comparison between recent super-resolution algorithms.

550 citations
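PSNR, one of the distortion measures the paper above contrasts with perceptual quality, is easy to compute from the mean squared error; a minimal numpy sketch (the test images here are synthetic, not from the paper):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means lower distortion."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(distorted, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(clean + rng.normal(scale=10.0, size=clean.shape), 0, 255)
print(psnr(clean, noisy))
```

The paper's point is that driving a number like this up, for any such measure, eventually forces restored images to become statistically distinguishable from natural ones.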


Proceedings ArticleDOI
04 Feb 2019
TL;DR: The authors found that statistical NLI models may adopt three fallible syntactic heuristics: the lexical overlap heuristic, the subsequence heuristic and the constituent heuristic.
Abstract: A machine learning system can score well on a given test set by relying on heuristics that are effective for frequent example types but break down in more challenging cases. We study this issue within natural language inference (NLI), the task of determining whether one sentence entails another. We hypothesize that statistical NLI models may adopt three fallible syntactic heuristics: the lexical overlap heuristic, the subsequence heuristic, and the constituent heuristic. To determine whether models have adopted these heuristics, we introduce a controlled evaluation set called HANS (Heuristic Analysis for NLI Systems), which contains many examples where the heuristics fail. We find that models trained on MNLI, including BERT, a state-of-the-art model, perform very poorly on HANS, suggesting that they have indeed adopted these heuristics. We conclude that there is substantial room for improvement in NLI systems, and that the HANS dataset can motivate and measure progress in this area.

550 citations
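The lexical overlap heuristic described above can be illustrated with a HANS-style pair: every hypothesis word appears in the premise, yet the premise does not entail the hypothesis (these example sentences follow the paper's pattern but are written here for illustration):

```python
def lexical_overlap(premise, hypothesis):
    """True if every word of the hypothesis also appears in the premise."""
    words = lambda s: set(s.lower().replace(".", "").split())
    return words(hypothesis) <= words(premise)

premise = "The doctor near the lawyer visited the judge."
hypothesis = "The lawyer visited the judge."  # NOT entailed by the premise

# A model relying purely on the heuristic would wrongly predict entailment:
print(lexical_overlap(premise, hypothesis))  # True
```

HANS is built from many such pairs where the heuristic's prediction and the true label disagree, which is why heuristic-reliant models score far below chance on them.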


Journal ArticleDOI
TL;DR: Theoretical modelling reveals that the observed anisotropy is primarily related to the anisotropic phonon dispersion, whereas the intrinsic phonon scattering rates are found to be similar along the armchair and zigzag directions.
Abstract: Black phosphorus has been revisited recently as a new two-dimensional material showing potential applications in electronics and optoelectronics. Here we report the anisotropic in-plane thermal conductivity of suspended few-layer black phosphorus measured by micro-Raman spectroscopy. The armchair and zigzag thermal conductivities are ∼20 and ∼40 W m(-1) K(-1) for black phosphorus films thicker than 15 nm, respectively, and decrease to ∼10 and ∼20 W m(-1) K(-1) as the film thickness is reduced, exhibiting significant anisotropy. The thermal conductivity anisotropic ratio is found to be ∼2 for thick black phosphorus films and drops to ∼1.5 for the thinnest 9.5-nm-thick film. Theoretical modelling reveals that the observed anisotropy is primarily related to the anisotropic phonon dispersion, whereas the intrinsic phonon scattering rates are found to be similar along the armchair and zigzag directions. Surface scattering in the black phosphorus films is shown to strongly suppress the contribution of long mean-free-path acoustic phonons.

550 citations


Journal ArticleDOI
31 Aug 2017-Nature
TL;DR: The experimental manifestation of another type of skyrmion—the magnetic antiskyrmion—in acentric tetragonal Heusler compounds with D2d crystal symmetry is presented, which enlarges the family of magnetic skyrmions and paves the way to the engineering of complex bespoke designed skyrmionic structures.
Abstract: Magnetic skyrmions are topologically stable, vortex-like objects surrounded by chiral boundaries that separate a region of reversed magnetization from the surrounding magnetized material. They are closely related to nanoscopic chiral magnetic domain walls, which could be used as memory and logic elements for conventional and neuromorphic computing applications that go beyond Moore’s law. Of particular interest is ‘racetrack memory’, which is composed of vertical magnetic nanowires, each accommodating of the order of 100 domain walls, and that shows promise as a solid state, non-volatile memory with exceptional capacity and performance. Its performance is derived from the very high speeds (up to one kilometre per second) at which chiral domain walls can be moved with nanosecond current pulses in synthetic antiferromagnet racetracks. Because skyrmions are essentially composed of a pair of chiral domain walls closed in on themselves, but are, in principle, more stable to perturbations than the component domain walls themselves, they are attractive for use in spintronic applications, notably racetrack memory. Stabilization of skyrmions has generally been achieved in systems with broken inversion symmetry, in which the asymmetric Dzyaloshinskii–Moriya interaction modifies the uniform magnetic state to a swirling state. Depending on the crystal symmetry, two distinct types of skyrmions have been observed experimentally, namely, Bloch and Néel skyrmions. Here we present the experimental manifestation of another type of skyrmion—the magnetic antiskyrmion—in acentric tetragonal Heusler compounds with D2d crystal symmetry. Antiskyrmions are characterized by boundary walls that have alternating Bloch and Néel type as one traces around the boundary. A spiral magnetic ground state, which propagates in the tetragonal basal plane, is transformed into an antiskyrmion lattice state under magnetic fields applied along the tetragonal axis over a wide range of temperatures. Direct imaging by Lorentz transmission electron microscopy shows field-stabilized antiskyrmion lattices and isolated antiskyrmions from 100 kelvin to well beyond room temperature, and zero-field metastable antiskyrmions at low temperatures. These results enlarge the family of magnetic skyrmions and pave the way to the engineering of complex bespoke designed skyrmionic structures.

550 citations


Journal ArticleDOI
TL;DR: ARS-853 is described, a selective, covalent inhibitor of KRAS(G12C) that inhibits mutant KRAS-driven signaling by binding to the GDP-bound oncoprotein and preventing activation.
Abstract: KRAS gain-of-function mutations occur in approximately 30% of all human cancers. Despite more than 30 years of KRAS-focused research and development efforts, no targeted therapy has been discovered for cancers with KRAS mutations. Here, we describe ARS-853, a selective, covalent inhibitor of KRAS(G12C) that inhibits mutant KRAS–driven signaling by binding to the GDP-bound oncoprotein and preventing activation. Based on the rates of engagement and inhibition observed for ARS-853, along with a mutant-specific mass spectrometry–based assay for assessing KRAS activation status, we show that the nucleotide state of KRAS(G12C) is in a state of dynamic flux that can be modulated by upstream signaling factors. These studies provide convincing evidence that the KRAS G12C mutation generates a “hyperexcitable” rather than a “statically active” state and that targeting the inactive, GDP-bound form is a promising approach for generating novel anti-RAS therapeutics. Significance: A cell-active, mutant-specific, covalent inhibitor of KRAS(G12C) is described that targets the GDP-bound, inactive state and prevents subsequent activation. Using this novel compound, we demonstrate that the KRAS(G12C) oncoprotein rapidly cycles bound nucleotide and responds to upstream signaling inputs to maintain a highly active state. Cancer Discov; 6(3); 316–29. ©2016 AACR. See related commentary by Westover et al., p. 233. This article is highlighted in the In This Issue feature, p. 217.

550 citations


Posted Content
TL;DR: A user-friendly and tractable implementation of SEM that also reflects the ecological and methodological processes generating data, and extends this method to all current (generalized) linear, (phylogenetic) least-square, and mixed effects models, relying on familiar R syntax.
Abstract: Ecologists and evolutionary biologists are relying on an increasingly sophisticated set of statistical tools to describe complex natural systems. One such tool that has gained increasing traction in the life sciences is structural equation modeling (SEM), a variant of path analysis that resolves complex multivariate relationships among a suite of interrelated variables. SEM has historically relied on covariances among variables, rather than the values of the data points themselves. While this approach permits a wide variety of model forms, it limits the incorporation of detailed specifications. Here, I present a fully-documented, open-source R package piecewiseSEM that builds on the base R syntax for all current generalized linear, least-square, and mixed effects models. I also provide two worked examples: one involving a hierarchical dataset with non-normally distributed variables, and a second involving phylogenetically-independent contrasts. My goal is to provide a user-friendly and tractable implementation of SEM that also reflects the ecological and methodological processes generating data.

550 citations
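piecewiseSEM itself is an R package, but the "piecewise" idea (fit each structural equation as its own local regression rather than fitting one global covariance model) can be sketched in Python with simulated data; the path coefficients 0.8 and 0.6 below are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(scale=0.5, size=n)  # structural equation: x -> m
y = 0.6 * m + rng.normal(scale=0.5, size=n)  # structural equation: m -> y

def ols_slope(a, b):
    """OLS slope of b regressed on a, with an intercept term."""
    design = np.column_stack([np.ones_like(a), a])
    coef, *_ = np.linalg.lstsq(design, b, rcond=None)
    return coef[1]

# Each equation in the path model is estimated separately ("piecewise"),
# which is what allows swapping in GLMs or mixed models per equation:
path_xm = ols_slope(x, m)
path_my = ols_slope(m, y)
print(path_xm, path_my)
```

With enough data the two fitted slopes recover the generating coefficients; the package adds, on top of this, tests of the implied independence claims (d-separation) to evaluate the whole path diagram.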


Proceedings ArticleDOI
01 Oct 2019
TL;DR: A general training framework named self distillation, which notably enhances the performance of convolutional neural networks through shrinking the size of the network rather than aggrandizing it, and can also provide flexibility of depth-wise scalable inference on resource-limited edge devices.
Abstract: Convolutional neural networks have been widely deployed in various application scenarios. In order to extend the applications' boundaries to some accuracy-crucial domains, researchers have been investigating approaches to boost accuracy through either deeper or wider network structures, which brings an exponential increase in computational and storage cost, delaying the response time. In this paper, we propose a general training framework named self distillation, which notably enhances the performance (accuracy) of convolutional neural networks through shrinking the size of the network rather than aggrandizing it. Different from traditional knowledge distillation - a knowledge transfer methodology among networks, which forces student neural networks to approximate the softmax layer outputs of pre-trained teacher neural networks - the proposed self distillation framework distills knowledge within the network itself. The network is first divided into several sections. Then the knowledge in the deeper portion of the network is squeezed into the shallow ones. Experiments further demonstrate the generality of the proposed self distillation framework: the average accuracy improvement is 2.65%, ranging from 0.61% (ResNeXt) as minimum to 4.07% (VGG19) as maximum. In addition, it can also provide flexibility of depth-wise scalable inference on resource-limited edge devices. Our code has been released on GitHub.
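The distillation term in a framework like this is essentially a softened KL divergence between the deepest classifier's output (acting as teacher) and a shallow section's auxiliary classifier (acting as student). A minimal numpy sketch; the logits and the temperature are illustrative assumptions, not values from the paper:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=np.float64) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions with full support."""
    return float(np.sum(p * np.log(p / q)))

# Illustrative logits: the final, deepest classifier is the teacher for an
# auxiliary classifier attached to a shallow section of the same network.
deep_logits = [2.0, 0.5, -1.0]
shallow_logits = [1.5, 0.8, -0.5]
T = 3.0  # softening temperature (assumed hyperparameter)

distill_loss = kl_divergence(softmax(deep_logits, T), softmax(shallow_logits, T))
print(distill_loss > 0)  # True: the shallow head has not matched the teacher
```

Training drives this term toward zero for each shallow section, which is what lets a truncated (shallower) copy of the network still predict reasonably at inference time.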

Journal ArticleDOI
13 Sep 2018-Nature
TL;DR: A global modelling approach shows that in response to rises in global sea level, gains of up to 60% in coastal wetland areas are possible, if appropriate coastal management solutions are developed to help support wetland resilience.
Abstract: The response of coastal wetlands to sea-level rise during the twenty-first century remains uncertain. Global-scale projections suggest that between 20 and 90 per cent (for low and high sea-level rise scenarios, respectively) of the present-day coastal wetland area will be lost, which will in turn result in the loss of biodiversity and highly valued ecosystem services1-3. These projections do not necessarily take into account all essential geomorphological4-7 and socio-economic system feedbacks8. Here we present an integrated global modelling approach that considers both the ability of coastal wetlands to build up vertically by sediment accretion, and the accommodation space, namely, the vertical and lateral space available for fine sediments to accumulate and be colonized by wetland vegetation. We use this approach to assess global-scale changes in coastal wetland area in response to global sea-level rise and anthropogenic coastal occupation during the twenty-first century. On the basis of our simulations, we find that, globally, rather than losses, wetland gains of up to 60 per cent of the current area are possible, if more than 37 per cent (our upper estimate for current accommodation space) of coastal wetlands have sufficient accommodation space, and sediment supply remains at present levels. In contrast to previous studies1-3, we project that until 2100, the loss of global coastal wetland area will range between 0 and 30 per cent, assuming no further accommodation space in addition to current levels. Our simulations suggest that the resilience of global wetlands is primarily driven by the availability of accommodation space, which is strongly influenced by the building of anthropogenic infrastructure in the coastal zone, and such infrastructure is expected to change over the twenty-first century. Rather than being an inevitable consequence of global sea-level rise, our findings indicate that large-scale loss of coastal wetlands might be avoidable, if sufficient additional accommodation space can be created through careful nature-based adaptation solutions to coastal management.

Journal ArticleDOI
TL;DR: This review provides further support for practitioners to use subjective measures to monitor changes in athlete well-being in response to training; subjective measures reflected acute and chronic training loads with greater sensitivity and consistency than objective measures.
Abstract: Background Monitoring athlete well-being is essential to guide training and to detect any progression towards negative health outcomes and associated poor performance. Objective (performance, physiological, biochemical) and subjective measures are all options for athlete monitoring. Objective We systematically reviewed objective and subjective measures of athlete well-being. Objective measures, including those taken at rest (eg, blood markers, heart rate) and during exercise (eg, oxygen consumption, heart rate response), were compared against subjective measures (eg, mood, perceived stress). All measures were also evaluated for their response to acute and chronic training load. Methods The databases Academic search complete, MEDLINE, PsycINFO, SPORTDiscus and PubMed were searched in May 2014. Fifty-six original studies reported concurrent subjective and objective measures of athlete well-being. The quality and strength of findings of each study were evaluated to determine overall levels of evidence. Results Subjective and objective measures of athlete well-being generally did not correlate. Subjective measures reflected acute and chronic training loads with greater sensitivity and consistency than did objective measures. Subjective well-being was typically impaired with an acute increase in training load, and also with chronic training, while an acute decrease in training load improved subjective well-being. Summary This review provides further support for practitioners to use subjective measures to monitor changes in athlete well-being in response to training. Subjective measures may stand alone, or be incorporated into a mixed methods approach to athlete monitoring, as is current practice in many sport settings.

Journal ArticleDOI
TL;DR: It is found that NRF2 controls the expression of the key serine/glycine biosynthesis enzyme genes PHGDH, PSAT1 and SHMT2 via ATF4 to support glutathione and nucleotide production and it is shown that expression of these genes confers poor prognosis in human NSCLC.
Abstract: Tumors have high energetic and anabolic needs for rapid cell growth and proliferation, and the serine biosynthetic pathway was recently identified as an important source of metabolic intermediates for these processes. We integrated metabolic tracing and transcriptional profiling of a large panel of non-small cell lung cancer (NSCLC) cell lines to characterize the activity and regulation of the serine/glycine biosynthetic pathway in NSCLC. Here we show that the activity of this pathway is highly heterogeneous and is regulated by NRF2, a transcription factor frequently deregulated in NSCLC. We found that NRF2 controls the expression of the key serine/glycine biosynthesis enzyme genes PHGDH, PSAT1 and SHMT2 via ATF4 to support glutathione and nucleotide production. Moreover, we show that expression of these genes confers poor prognosis in human NSCLC. Thus, a substantial fraction of human NSCLCs activates an NRF2-dependent transcriptional program that regulates serine and glycine metabolism and is linked to clinical aggressiveness.

Journal ArticleDOI
TL;DR: A formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes and ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies.
Abstract: We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
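The decomposition described above can be written out explicitly. In the usual active-inference notation (sketched here from the standard formulation; Q̃ denotes the predictive posterior over outcomes o and hidden states s at a future time τ under policy π, and m the agent's prior preferences/model), the expected free energy of a policy splits as:

```latex
G(\pi) \;=\;
\underbrace{-\,\mathbb{E}_{\tilde{Q}}\!\big[\ln P(o_\tau \mid m)\big]}_{\text{negative extrinsic value (expected utility)}}
\;-\;
\underbrace{\mathbb{E}_{\tilde{Q}}\!\Big[D_{\mathrm{KL}}\big(Q(s_\tau \mid o_\tau,\pi)\,\big\|\,Q(s_\tau \mid \pi)\big)\Big]}_{\text{epistemic value (expected information gain)}}
```

Minimizing G therefore maximizes both terms at once: the first rewards outcomes matching prior preferences (exploitation), while the second, the expected KL between posterior and prior beliefs about states, rewards observations that are informative (exploration), which is how the scheme dissolves the exploration-exploitation dilemma.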

Journal ArticleDOI
TL;DR: In this paper, the authors provide the first comprehensive analysis of the labor market for Uber's driver-partners, finding that most of them had full- or part-time employment before joining Uber, and many continue in those positions after starting to drive with the Uber platform, which makes the flexibility to set their own hours especially valuable.
Abstract: Uber, the ride-sharing company launched in 2010, has grown at an exponential rate. Using both survey and administrative data, the authors provide the first comprehensive analysis of the labor market for Uber’s driver-partners. Drivers appear to be attracted to the Uber platform largely because of the flexibility it offers, the level of compensation, and the fact that earnings per hour do not vary much based on the number of hours worked. Uber’s driver-partners are more similar in terms of their age and education to the general workforce than to taxi drivers and chauffeurs. Most of Uber’s driver-partners had full- or part-time employment before joining Uber, and many continue in those positions after starting to drive with the Uber platform, which makes the flexibility to set their own hours especially valuable. Drivers often cite the desire to smooth fluctuations in their income as one of their reasons for partnering with Uber.

Journal ArticleDOI
TL;DR: The majority of participants overestimated intervention benefit and underestimated harm; clinicians should discuss accurate and balanced information about intervention benefits and harms with patients, providing the opportunity to develop realistic expectations and make informed decisions.
Abstract: Importance Unrealistic patient expectations of the benefits and harms of interventions can influence decision making and may be contributing to increasing intervention uptake and health care costs. Objective To systematically review all studies that have quantitatively assessed patients’ expectations of the benefits and/or harms of any treatment, test, or screening test. Evidence Review A comprehensive search strategy was used in 4 databases (MEDLINE, Embase, Cumulative Index to Nursing and Allied Health Literature, PsycINFO) up to June 2013, with no language or study type restriction. We also ran cited reference searches of included studies and contacted experts and study authors. Two researchers independently evaluated methodological quality and extracted participants’ estimates of benefit and harms and authors’ contemporaneous estimates. Findings Of the 15 343 records screened, 36 articles (from 35 studies) involving a total of 27 323 patients were eligible. Fourteen studies focused on a screen, 15 on treatment, 3 on a test, and 3 on treatment and screening. More studies assessed only benefit expectations (22 [63%]) than benefit and harm expectations (10 [29%]) or only harm (3 [8%]). Fifty-four outcomes (across 32 studies) assessed benefit expectations: of the 34 outcomes with overestimation data available, the majority of participants overestimated benefit for 22 (65%) of them. For 17 benefit expectation outcomes, we could not calculate the proportion of participants who overestimated or underestimated, although for 15 (88%) of these, study authors concluded that participants overestimated benefits. Expectations of harm were assessed by 27 outcomes (across 13 studies): underestimation data were available for 15 outcomes and the majority of participants underestimated harm for 10 (67%) of these. A correct estimation by at least 50% of participants only occurred for 2 outcomes about benefit expectations and 2 outcomes about harm expectations.
Conclusions and Relevance The majority of participants overestimated intervention benefit and underestimated harm. Clinicians should discuss accurate and balanced information about intervention benefits and harms with patients, providing the opportunity to develop realistic expectations and make informed decisions.
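The headline percentages in the Findings above follow directly from the reported counts:

```python
# Counts reported in the review's abstract.
benefit_overestimated, benefit_outcomes = 22, 34
harm_underestimated, harm_outcomes = 10, 15

print(round(100 * benefit_overestimated / benefit_outcomes))  # 65
print(round(100 * harm_underestimated / harm_outcomes))       # 67
```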

Journal ArticleDOI
TL;DR: In this article, the authors show that screening effects associated with ionic transport can be effectively eliminated by lowering the operating temperature of methylammonium lead iodide perovskite (CH3NH3PbI3) field-effect transistors.
Abstract: Despite the widespread use of solution-processable hybrid organic-inorganic perovskites in photovoltaic and light-emitting applications, determination of their intrinsic charge transport parameters has been elusive due to the variability of film preparation and history-dependent device performance. Here we show that screening effects associated with ionic transport can be effectively eliminated by lowering the operating temperature of methylammonium lead iodide perovskite (CH3NH3PbI3) field-effect transistors. Field-effect carrier mobility is found to increase by almost two orders of magnitude below 200 K, consistent with phonon scattering-limited transport. Under balanced ambipolar carrier injection, gate-dependent electroluminescence is also observed from the transistor channel, with spectra revealing the tetragonal to orthorhombic phase transition. This demonstration of CH3NH3PbI3 light-emitting field-effect transistors provides intrinsic transport parameters to guide materials and solar cell optimization, and will drive the development of new electro-optic device concepts, such as gated light-emitting diodes and lasers operating at room temperature.

Journal ArticleDOI
TL;DR: In this article, the authors adapted the Fear of COVID-19 Scale into Turkish and investigated the relationships between fear of COVID-19, psychological distress, and life satisfaction.
Abstract: The world is currently experiencing a pandemic of an infectious disease called COVID-19 which has drawn global intensive attention. While global attention is largely focusing on the effects of the coronavirus on physical health, the impacts of the coronavirus on psychological health cannot be overlooked. Therefore, this study aims to adapt the Fear of COVID-19 Scale into Turkish and investigate the relationships between fear of COVID-19, psychological distress, and life satisfaction. Data were collected by convenience sampling method, which allowed us to reach a total of 1304 participants, aged between 18 and 64 years, from 75 cities in Turkey. In the adaptation process of the Fear of COVID-19 Scale, confirmatory factor analysis, Item Response Theory, convergent validity, and reliability (Cronbach's α, McDonald's ω, Guttmann's λ6, and composite reliability) analyses were performed. Additionally, the mediating role of psychological distress on the relationship between fear of COVID-19 and life satisfaction was tested. The uni-dimensionality of the 7-item scale was confirmed on a Turkish sample. Item Response Theory revealed that all items were coherent and fit with the model. The results indicated that the Turkish version of the scale had satisfactory reliability coefficients. The fear of COVID-19 was found to be associated with psychological distress and life satisfaction. Results indicated that the Turkish version of the Fear of COVID-19 Scale had strong psychometric properties. This scale will allow mental health professionals to do research on the psychological impacts of COVID-19 in Turkey.

Proceedings Article
17 Feb 2017
TL;DR: A variant of the full NIST dataset is introduced, which is called Extended MNIST (EMNIST), which follows the same conversion paradigm used to create the MNIST dataset, and shares the same image structure and parameters as the original MNIST task, allowing for direct compatibility with all existing classifiers and systems.
Abstract: The MNIST dataset has become a standard benchmark for learning, classification and computer vision systems. Contributing to its widespread adoption are the understandable and intuitive nature of the task, its relatively small size and storage requirements and the accessibility and ease-of-use of the database itself. The MNIST database was derived from a larger dataset known as the NIST Special Database 19 which contains digits, uppercase and lowercase handwritten letters. This paper introduces a variant of the full NIST dataset, which we have called Extended MNIST (EMNIST), which follows the same conversion paradigm used to create the MNIST dataset. The result is a set of datasets that constitute more challenging classification tasks involving letters and digits, and that share the same image structure and parameters as the original MNIST task, allowing for direct compatibility with all existing classifiers and systems. Benchmark results are presented along with a validation of the conversion process through the comparison of the classification results on converted NIST digits and the MNIST digits.

Journal ArticleDOI
TL;DR: In this article, the authors examined the impact of a positive and policy-driven change in economic resources available in utero and during childhood, and found that access to food stamps in childhood leads to a significant reduction in the incidence of "metabolic syndrome" (obesity, high blood pressure, and diabetes) and an increase in economic self-sufficiency.
Abstract: A growing economics literature establishes a causal link between in utero shocks and health and human capital in adulthood. Most studies rely on extreme negative shocks such as famine and pandemics. We are the first to examine the impact of a positive and policy-driven change in economic resources available in utero and during childhood. In particular, we focus on the introduction of a key element of the U.S. safety net, the Food Stamp Program, which was rolled out across counties in the U.S. between 1961 and 1975. We use the Panel Study of Income Dynamics to assemble unique data linking family background and county of residence in early childhood to adult health and economic outcomes. The identification comes from variation across counties and over birth cohorts in availability of the food stamp program. Our findings indicate that the food stamp program has effects decades after initial exposure. Specifically, access to food stamps in childhood leads to a significant reduction in the incidence of "metabolic syndrome" (obesity, high blood pressure, and diabetes) and, for women, an increase in economic self-sufficiency. Overall, our results suggest substantial internal and external benefits of the safety net that have not previously been quantified.
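The identification strategy described above, variation across counties and over birth cohorts in program availability, resembles a two-way fixed-effects design. A minimal sketch on simulated data (the county rollout, sample size, and "true effect" of -0.5 are all illustrative assumptions, not the paper's data or estimates):

```python
import numpy as np

# Simulate a staggered county-level rollout and an adult outcome
rng = np.random.default_rng(0)
n_counties, n_cohorts, n = 30, 10, 3000
county = rng.integers(0, n_counties, n)
cohort = rng.integers(0, n_cohorts, n)
rollout = rng.integers(0, n_cohorts, n_counties)   # county adoption cohort
exposed = (cohort >= rollout[county]).astype(float)
y = -0.5 * exposed + rng.normal(size=n)            # simulated effect: -0.5

# OLS with county and birth-cohort fixed effects (one cohort dummy dropped)
X = np.column_stack([
    exposed,
    np.eye(n_counties)[county],
    np.eye(n_cohorts)[cohort][:, 1:],
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(float(beta[0]), 2))
```

The coefficient on `exposed` recovers the simulated effect because, conditional on the fixed effects, exposure varies only through the rollout timing, which is the source of identification the abstract describes.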

Journal ArticleDOI
18 Jul 2018-BMJ
TL;DR: Mortality due to cirrhosis has been increasing in the US since 2009, driven entirely by alcohol related liver disease, and people aged 25-34 have experienced the greatest relative increase in mortality.
Abstract: Objective To describe liver disease related mortality in the United States during 1999-2016 by age group, sex, race, cause of liver disease, and geographic region. Design Observational cohort study. Setting Death certificate data from the Vital Statistics Cooperative, and population data from the US Census Bureau compiled by the Centers for Disease Control and Prevention’s Wide-ranging Online Data for Epidemiologic Research (1999-2016). Participants US residents. Main outcome measure Deaths from cirrhosis and hepatocellular carcinoma, with trends evaluated using joinpoint regression. Results From 1999 to 2016 in the US annual deaths from cirrhosis increased by 65%, to 34 174, while annual deaths from hepatocellular carcinoma doubled to 11 073. Only one subgroup—Asians and Pacific Islanders—experienced an improvement in mortality from hepatocellular carcinoma: the death rate decreased by 2.7% (95% confidence interval 2.2% to 3.3%, P Conclusions Mortality due to cirrhosis has been increasing in the US since 2009. Driven by deaths due to alcoholic cirrhosis, people aged 25-34 have experienced the greatest relative increase in mortality. White Americans, Native Americans, and Hispanic Americans experienced the greatest increase in deaths from cirrhosis. Mortality due to cirrhosis is improving in Maryland but worsening in Kentucky, New Mexico, and Arkansas. The rapid increase in death rates among young people due to alcohol highlights new challenges for optimal care of patients with preventable liver disease.

Journal ArticleDOI
TL;DR: This article provides an introduction to RNA-Seq methods, including applications, experimental design, and technical challenges, and applies them to investigate different populations of RNA.
Abstract: RNA sequencing (RNA-Seq) uses the capabilities of high-throughput sequencing methods to provide insight into the transcriptome of a cell. Compared to previous Sanger sequencing- and microarray-based methods, RNA-Seq provides far higher coverage and greater resolution of the dynamic nature of the transcriptome. Beyond quantifying gene expression, the data generated by RNA-Seq facilitate the discovery of novel transcripts, identification of alternatively spliced genes, and detection of allele-specific expression. Recent advances in the RNA-Seq workflow, from sample preparation to library construction to data analysis, have enabled researchers to further elucidate the functional complexity of the transcriptome. In addition to polyadenylated messenger RNA (mRNA) transcripts, RNA-Seq can be applied to investigate different populations of RNA, including total RNA, pre-mRNA, and noncoding RNA, such as microRNA and long ncRNA. This article provides an introduction to RNA-Seq methods, including applications, experimental design, and technical challenges.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the role of the structural build-up properties of concrete and cement-based materials in a layer-by-layer construction technique and proposed a theoretical framework to optimize the building rate.
Abstract: Additive manufacturing and digital fabrication bring new horizons to concrete and cement-based material construction. 3D printing has inspired construction techniques that have recently been developed at laboratory scale for cement-based materials. This study aims to investigate the role of the structural build-up properties of cement-based materials in such a layer-by-layer construction technique. As construction progresses, the cement-based materials harden with time. The mechanical strength of the cement-based materials must be sufficient to sustain the weight of the layers subsequently deposited. It follows that comparing the mechanical strength, which evolves with time (i.e. structural build-up), with the loading due to subsequently deposited layers can be expected to provide the optimal rate of layer-by-layer construction. A theoretical framework has been developed to propose a method for optimizing the building rate, which is experimentally validated on a layer-wise built column.
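The comparison described above can be sketched as a simple criterion: the gravity-induced stress at the base of the printed element (rho·g·v·t for a constant vertical rise rate v) must stay below a strength that grows linearly with resting time, tau0 + Athix·t, a common model of structural build-up. All material parameters below are illustrative assumptions, not the paper's measured values:

```python
# Illustrative fresh-mortar parameters (assumed, not from the paper)
RHO = 2100.0     # density, kg/m^3
G = 9.81         # gravity, m/s^2
TAU0 = 2000.0    # initial yield stress, Pa
ATHIX = 0.5      # structuration (build-up) rate, Pa/s

def max_rise_rate(t: float) -> float:
    """Largest vertical build rate v (m/s) at elapsed time t such that
    the gravity stress rho*g*v*t stays below the strength tau0 + Athix*t."""
    return (TAU0 + ATHIX * t) / (RHO * G * t)

# The admissible rate decreases with time and tends to Athix / (rho * g):
for t in (60.0, 600.0, 3600.0):
    print(t, max_rise_rate(t))
print("asymptote:", ATHIX / (RHO * G))
```

This captures the trade-off in the abstract: print too fast and the lower layers yield under the weight above; the structuration rate Athix sets the long-run ceiling on the building rate.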

Posted Content
TL;DR: This paper reconciles the classical understanding and the modern practice within a unified performance curve that subsumes the textbook U-shaped bias-variance trade-off curve by showing how increasing model capacity beyond the point of interpolation results in improved performance.
Abstract: Breakthroughs in machine learning are rapidly changing science and society, yet our fundamental understanding of this technology has lagged far behind. Indeed, one of the central tenets of the field, the bias-variance trade-off, appears to be at odds with the observed behavior of methods used in the modern machine learning practice. The bias-variance trade-off implies that a model should balance under-fitting and over-fitting: rich enough to express underlying structure in data, simple enough to avoid fitting spurious patterns. However, in the modern practice, very rich models such as neural networks are trained to exactly fit (i.e., interpolate) the data. Classically, such models would be considered over-fit, and yet they often obtain high accuracy on test data. This apparent contradiction has raised questions about the mathematical foundations of machine learning and their relevance to practitioners. In this paper, we reconcile the classical understanding and the modern practice within a unified performance curve. This "double descent" curve subsumes the textbook U-shaped bias-variance trade-off curve by showing how increasing model capacity beyond the point of interpolation results in improved performance. We provide evidence for the existence and ubiquity of double descent for a wide spectrum of models and datasets, and we posit a mechanism for its emergence. This connection between the performance and the structure of machine learning models delineates the limits of classical analyses, and has implications for both the theory and practice of machine learning.

Proceedings ArticleDOI
10 Apr 2018
TL;DR: Wang et al. as mentioned in this paper proposed a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation, which is a content-based deep recommendation framework for click-through rate prediction.
Abstract: Online news recommender systems aim to address the information explosion of news and make personalized recommendation for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. To solve the above problem, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities as multiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users' diverse interests, we also design an attention module in DKN to dynamically aggregate a user's history with respect to current candidate news. Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN.
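The history-aggregation idea above can be sketched with simple dot-product attention: each clicked-news embedding is weighted by its relevance to the candidate news, then summed into a candidate-specific user vector. This is a simplified stand-in (the paper implements the attention scores with a neural network), and all embeddings here are random illustrative placeholders:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()  # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum()

def aggregate_history(history: np.ndarray, candidate: np.ndarray) -> np.ndarray:
    """Weight each clicked-news embedding by its (dot-product) relevance
    to the candidate news, then sum into one user representation."""
    weights = softmax(history @ candidate)   # one weight per clicked item
    return weights @ history

# Illustrative embeddings: 5 clicked news items, embedding dimension 8
rng = np.random.default_rng(1)
hist = rng.normal(size=(5, 8))
cand = rng.normal(size=8)
user_vec = aggregate_history(hist, cand)
print(user_vec.shape)  # (8,)
```

The key property is that the same click history yields a different user vector for each candidate item, which is how the attention module captures a user's diverse interests.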

Journal ArticleDOI
TL;DR: This review gathers studies of the community-level effects of heavy metal pollution, including heavy metal transfer from soils to plants, microbes, invertebrates, and to both small and large mammals (including humans).
Abstract: Heavy metals are released into the environment by both anthropogenic and natural sources. Highly reactive and often toxic at low concentrations, they may enter soils and groundwater, bioaccumulate in food webs, and adversely affect biota. Heavy metals also may remain in the environment for years, posing long-term risks to life well after point sources of heavy metal pollution have been removed. In this review, we compile studies of the community-level effects of heavy metal pollution, including heavy metal transfer from soils to plants, microbes, invertebrates, and to both small and large mammals (including humans). Many factors contribute to heavy metal accumulation in animals including behavior, physiology, and diet. Biotic effects of heavy metals are often quite different for essential and non-essential heavy metals, and vary depending on the specific metal involved. They also differ for adapted organisms, including metallophyte plants and heavy metal-tolerant insects, which occur in naturally high-metal habitats (such as serpentine soils) and have adaptations that allow them to tolerate exposure to relatively high concentrations of some heavy metals. Some metallophyte plants are hyperaccumulators of certain heavy metals and new technologies using them to clean metal-contaminated soil (phytoextraction) may offer economically attractive solutions to some metal pollution challenges. These new technologies provide incentive to catalog and protect the unique biodiversity of habitats that have naturally high levels of heavy metals.

Posted Content
TL;DR: This report presents a brief survey on development of DL approaches, including Deep Neural Network (DNN), Convolutional neural network (CNN), Recurrent Neural network (RNN) including Long Short Term Memory (LSTM) and Gated Recurrent Units (GRU), Auto-Encoder (AE), Deep Belief Network (DBN), Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL).
Abstract: Deep learning has demonstrated tremendous success in a variety of application domains in the past few years. This new field of machine learning has been growing rapidly and has been applied in most application domains, along with some new modalities of application, helping to open new opportunities. Different methods have been proposed across categories of learning approaches, including supervised, semi-supervised and unsupervised learning. Experimental results show state-of-the-art performance of deep learning over traditional machine learning approaches in the fields of Image Processing, Computer Vision, Speech Recognition, Machine Translation, Art, Medical imaging, Medical information processing, Robotics and control, Bio-informatics, Natural Language Processing (NLP), Cyber security, and many more. This report presents a brief survey of the development of DL approaches, including Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) including Long Short Term Memory (LSTM) and Gated Recurrent Units (GRU), Auto-Encoder (AE), Deep Belief Network (DBN), Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). In addition, we have included recent developments of advanced variant DL techniques based on the mentioned DL approaches. Furthermore, DL approaches that have been explored and evaluated in different application domains are also included in this survey. We have also covered recently developed frameworks, SDKs, and benchmark datasets that are used for implementing and evaluating deep learning approaches. Some surveys have been published on Deep Learning in Neural Networks [1, 38], along with a survey on RL [234]. However, those papers have not discussed the individual advanced techniques for training large-scale deep learning models or the recently developed methods of generative models [1].

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed perovskite solar cells with different architectures (planar, mesoporous, HTL-free), employing temperature dependent measurements (currentvoltage, light intensity, electroluminescence) of the ideality factor to identify dominating recombination processes that limit the open-circuit voltage (Voc).
Abstract: Metal halide perovskite absorber materials are about to emerge as a high-efficiency photovoltaic technology. At the same time, they are suitable for high-throughput manufacturing characterized by a low energy input and abundant low-cost materials. However, a further optimization of their efficiency, stability and reliability demands a more detailed optoelectronic characterization and understanding of losses including their evolution with time. In this work, we analyze perovskite solar cells with different architectures (planar, mesoporous, HTL-free), employing temperature dependent measurements (current–voltage, light intensity, electroluminescence) of the ideality factor to identify dominating recombination processes that limit the open-circuit voltage (Voc). We find that in thoroughly-optimized, high-Voc (≈1.2 V) devices recombination prevails through defects in the perovskite. On the other hand, irreversible degradation at elevated temperature is caused by the introduction of broad tail states originating from an external source (e.g. metal electrode). Light-soaking is another effect decreasing performance, though reversibly. Based on FTPS measurements, this degradation is attributed to the generation of surface defects becoming a new source of non-radiative recombination. We conclude that improving long-term stability needs to focus on adjacent layers, whereas a further optimization of efficiency of top-performing devices requires understanding of the defect physics of the nanocrystalline perovskite absorber. Finally, our work provides guidelines for the design of further dedicated studies to correctly interpret the diode ideality factor and decrease recombination losses.
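One of the measurements mentioned above, the light-intensity dependence of Voc, yields the ideality factor through the standard relation Voc = n·(kT/q)·ln(I) + const, so n is the slope of Voc versus ln(intensity) in units of the thermal voltage. A minimal sketch on synthetic data (the intensities, voltages, and the assumed n = 1.6 are illustrative, not the paper's measurements):

```python
import numpy as np

KB_T_OVER_Q = 0.02569  # thermal voltage kT/q at 298 K, in volts

def ideality_factor(suns: np.ndarray, voc: np.ndarray) -> float:
    """Ideality factor n from the slope of Voc vs ln(light intensity)."""
    slope, _intercept = np.polyfit(np.log(suns), voc, 1)
    return float(slope / KB_T_OVER_Q)

# Synthetic suns-Voc data generated with n = 1.6 (illustrative)
suns = np.array([0.1, 0.2, 0.5, 1.0])
voc = 1.10 + 1.6 * KB_T_OVER_Q * np.log(suns)
print(round(ideality_factor(suns, voc), 2))  # → 1.6
```

An extracted n near 1 points to band-to-band or surface recombination, while n approaching 2 indicates trap-assisted recombination in the bulk, which is why the paper uses this quantity to identify the dominant loss mechanism.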

Journal ArticleDOI
TL;DR: In this article, the authors bring together perspectives of various communities involved in the research and regulation of bioenergy deployment in the context of climate change mitigation: Land-use and energy experts, land use and integrated assessment modelers, human geographers, ecosystem researchers, climate scientists and two different strands of life-cycle assessment experts.
Abstract: Bioenergy deployment offers significant potential for climate change mitigation, but also carries considerable risks. In this review, we bring together perspectives of various communities involved in the research and regulation of bioenergy deployment in the context of climate change mitigation: Land-use and energy experts, land-use and integrated assessment modelers, human geographers, ecosystem researchers, climate scientists and two different strands of life-cycle assessment experts. We summarize technological options, outline the state-of-the-art knowledge on various climate effects, provide an update on estimates of technical resource potential and comprehensively identify sustainability effects. Cellulosic feedstocks, increased end-use efficiency, improved land carbon-stock management and residue use, and, when fully developed, BECCS appear as the most promising options, depending on development costs, implementation, learning, and risk management. Combined heat and power, efficient biomass cookstoves and small-scale power generation for rural areas can help to promote energy access and sustainable development, along with reduced emissions. We estimate the sustainable technical potential as up to 100 EJ: high agreement; 100-300 EJ: medium agreement; above 300 EJ: low agreement. Stabilization scenarios indicate that bioenergy may supply from 10 to 245 EJ yr-1 to global primary energy supply by 2050. Models indicate that, if technological and governance preconditions are met, large-scale deployment (>200 EJ), together with BECCS, could help to keep global warming below 2 °C above preindustrial levels; but such high deployment of land-intensive bioenergy feedstocks could also lead to detrimental climate effects and negatively impact ecosystems, biodiversity and livelihoods. The integration of bioenergy systems into agriculture and forest landscapes can improve land and water use efficiency and help address concerns about environmental impacts.
We conclude that the high variability in pathways, uncertainties in technological development and ambiguity in political decisions render forecasts on deployment levels and climate effects very difficult. However, uncertainty about projections should not preclude pursuing beneficial bioenergy options.