
Journal ArticleDOI
TL;DR: In this article, the authors quantify potential global impacts of different negative emissions technologies on various factors (such as land, greenhouse gas emissions, water, albedo, nutrients and energy) to determine the biophysical limits to, and economic costs of, their widespread application.
Abstract: To have a >50% chance of limiting warming below 2 °C, most recent scenarios from integrated assessment models (IAMs) require large-scale deployment of negative emissions technologies (NETs). These are technologies that result in the net removal of greenhouse gases from the atmosphere. We quantify potential global impacts of the different NETs on various factors (such as land, greenhouse gas emissions, water, albedo, nutrients and energy) to determine the biophysical limits to, and economic costs of, their widespread application. Resource implications vary between technologies and need to be satisfactorily addressed if NETs are to have a significant role in achieving climate goals.

974 citations


Journal ArticleDOI
TL;DR: A crucial part of statistical analysis is evaluating a model's quality and fit, or performance; with regression models in particular, this often involves selecting the best-fitting model among many competing models.
Abstract: A crucial part of statistical analysis is evaluating a model’s quality and fit, or performance. During analysis, especially with regression models, investigating the fit of models to data also often involves selecting the best fitting model amongst many competing models. Upon investigation, fit indices should also be reported both visually and numerically to bring readers in on the investigative effort.

973 citations


Journal ArticleDOI
TL;DR: Women who had midwife-led continuity models of care were less likely to experience regional analgesia, more likely to have a spontaneous vaginal birth, and more likely to be attended at birth by a known midwife; the trial evidence for the primary outcomes was graded as high quality.
Abstract: Background Midwives are primary providers of care for childbearing women around the world. However, there is a lack of synthesised information to establish whether there are differences in morbidity and mortality, effectiveness and psychosocial outcomes between midwife-led continuity models and other models of care. Objectives To compare midwife-led continuity models of care with other models of care for childbearing women and their infants. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (25 January 2016) and reference lists of retrieved studies. Selection criteria All published and unpublished trials in which pregnant women are randomly allocated to midwife-led continuity models of care or other models of care during pregnancy and birth. Data collection and analysis Two review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. The quality of the evidence was assessed using the GRADE approach. Main results We included 15 trials involving 17,674 women. We assessed the quality of the trial evidence for all primary outcomes (i.e. regional analgesia (epidural/spinal), caesarean birth, instrumental vaginal birth (forceps/vacuum), spontaneous vaginal birth, intact perineum, preterm birth (less than 37 weeks) and all fetal loss before and after 24 weeks plus neonatal death) using the GRADE methodology: all primary outcomes were graded as of high quality. For the primary outcomes, women who had midwife-led continuity models of care were less likely to experience regional analgesia (average risk ratio (RR) 0.85, 95% confidence interval (CI) 0.78 to 0.92; participants = 17,674; studies = 14; high quality), instrumental vaginal birth (average RR 0.90, 95% CI 0.83 to 0.97; participants = 17,501; studies = 13; high quality), preterm birth less than 37 weeks (average RR 0.76, 95% CI 0.64 to 0.91; participants = 13,238; studies = 8; high quality) and all fetal loss before and after 24 weeks plus neonatal death (average RR 0.84, 95% CI 0.71 to 0.99; participants = 17,561; studies = 13; high-quality evidence). Women who had midwife-led continuity models of care were more likely to experience spontaneous vaginal birth (average RR 1.05, 95% CI 1.03 to 1.07; participants = 16,687; studies = 12; high quality). There were no differences between groups for caesarean births or intact perineum. For the secondary outcomes, women who had midwife-led continuity models of care were less likely to experience amniotomy (average RR 0.80, 95% CI 0.66 to 0.98; participants = 3253; studies = 4), episiotomy (average RR 0.84, 95% CI 0.77 to 0.92; participants = 17,674; studies = 14) and fetal loss less than 24 weeks and neonatal death (average RR 0.81, 95% CI 0.67 to 0.98; participants = 15,645; studies = 11). Women who had midwife-led continuity models of care were more likely to experience no intrapartum analgesia/anaesthesia (average RR 1.21, 95% CI 1.06 to 1.37; participants = 10,499; studies = 7), have a longer mean length of labour (hours) (mean difference (MD) 0.50, 95% CI 0.27 to 0.74; participants = 3328; studies = 3) and more likely to be attended at birth by a known midwife (average RR 7.04, 95% CI 4.48 to 11.08; participants = 6917; studies = 7).
There were no differences between groups for fetal loss equal to/after 24 weeks and neonatal death, induction of labour, antenatal hospitalisation, antepartum haemorrhage, augmentation/artificial oxytocin during labour, opiate analgesia, perineal laceration requiring suturing, postpartum haemorrhage, breastfeeding initiation, low birthweight infant, five-minute Apgar score less than or equal to seven, neonatal convulsions, admission of infant to special care or neonatal intensive care unit(s), or in mean length of neonatal hospital stay (days). Due to a lack of consistency in measuring women's satisfaction and assessing the cost of various maternity models, these outcomes were reported narratively. The majority of included studies reported a higher rate of maternal satisfaction in midwife-led continuity models of care. Similarly, there was a trend towards a cost-saving effect for midwife-led continuity care compared to other care models. Authors' conclusions This review suggests that women who received midwife-led continuity models of care were less likely to experience intervention and more likely to be satisfied with their care, with at least comparable adverse outcomes for women or their infants, than women who received other models of care. Further research is needed to explore findings of fewer preterm births and fewer fetal deaths less than 24 weeks, and all fetal loss/neonatal death associated with midwife-led continuity models of care.

973 citations


Journal ArticleDOI
TL;DR: The main outcome measured was abstinence from smoking at longest follow-up, using the most rigorous definition of abstinence available and preferring biochemically validated rates where they were reported.
Abstract: Nicotine receptor partial agonists may help people to stop smoking by a combination of maintaining moderate levels of dopamine to counteract withdrawal symptoms (acting as an agonist) and reducing smoking satisfaction (acting as an antagonist). The primary objective of this review is to assess the efficacy and tolerability of nicotine receptor partial agonists, including cytisine, dianicline and varenicline, for smoking cessation. We searched the Cochrane Tobacco Addiction Group's specialised register for trials, using the terms ('cytisine' or 'Tabex' or 'dianicline' or 'varenicline' or 'nicotine receptor partial agonist') in the title or abstract, or as keywords. The register is compiled from searches of MEDLINE, EMBASE, PsycINFO and Web of Science using MeSH terms and free text to identify controlled trials of interventions for smoking cessation and prevention. We contacted authors of trial reports for additional information where necessary. The latest update of the specialised register was in December 2011. We also searched online clinical trials registers. We included randomized controlled trials which compared the treatment drug with placebo. We also included comparisons with bupropion and nicotine patches where available. We excluded trials which did not report a minimum follow-up period of six months from start of treatment. We extracted data on the type of participants, the dose and duration of treatment, the outcome measures, the randomization procedure, concealment of allocation, and completeness of follow-up. The main outcome measured was abstinence from smoking at longest follow-up. We used the most rigorous definition of abstinence, and preferred biochemically validated rates where they were reported. Where appropriate we pooled risk ratios (RRs), using the Mantel-Haenszel fixed-effect model. Two recent cytisine trials (937 people) found that more participants taking cytisine stopped smoking compared with placebo at longest follow-up, with a pooled RR of 3.98 (95% confidence interval (CI) 2.01 to 7.87). One trial of dianicline (602 people) failed to find evidence that it was effective (RR 1.20, 95% CI 0.82 to 1.75). Fifteen trials compared varenicline with placebo for smoking cessation; three of these also included a bupropion treatment arm. We also found one open-label trial comparing varenicline plus counselling with counselling alone. We found one relapse prevention trial, comparing varenicline with placebo, and two open-label trials comparing varenicline with nicotine replacement therapy (NRT). We also included one trial in which all the participants were given varenicline, but received behavioural support either online or by phone calls, or by both methods. This trial is not included in the analyses, but contributes to the data on safety and tolerability. The included studies covered 12,223 participants, 8100 of whom used varenicline. The pooled RR for continuous or sustained abstinence at six months or longer for varenicline at standard dosage versus placebo was 2.27 (95% CI 2.02 to 2.55; 14 trials, 6166 people, excluding one trial evaluating long-term safety). Varenicline at lower or variable doses was also shown to be effective, with an RR of 2.09 (95% CI 1.56 to 2.78; 4 trials, 1272 people). The pooled RR for varenicline versus bupropion at one year was 1.52 (95% CI 1.22 to 1.88; 3 trials, 1622 people). The RR for varenicline versus NRT for point prevalence abstinence at 24 weeks was 1.13 (95% CI 0.94 to 1.35; 2 trials, 778 people).
The two trials which tested the use of varenicline beyond the 12-week standard regimen found the drug to be well tolerated during long-term use. The main adverse effect of varenicline was nausea, which was mostly at mild to moderate levels and usually subsided over time. A meta-analysis of reported serious adverse events occurring during or after active treatment and not necessarily considered attributable to treatment suggests there may be a one-third increase in the chance of severe adverse effects among people using varenicline (RR 1.36, 95% CI 1.04 to 1.79; 17 trials, 7725 people), but this finding needs to be tested further. Post-marketing safety data have raised questions about a possible association between varenicline and depressed mood, agitation, and suicidal behaviour or ideation. The labelling of varenicline was amended in 2008, and the manufacturers produced a Medication Guide. Thus far, surveillance reports and secondary analyses of trial data are inconclusive, but the possibility of a link between varenicline and serious psychiatric or cardiovascular events cannot be ruled out. Cytisine increases the chances of quitting, although absolute quit rates were modest in two recent trials. Varenicline at standard dose increased the chances of successful long-term smoking cessation between two- and threefold compared with pharmacologically unassisted quit attempts. Lower-dose regimens also conferred benefits for cessation, while reducing the incidence of adverse events. More participants quit successfully with varenicline than with bupropion. Two open-label trials of varenicline versus NRT suggested a modest benefit of varenicline, but confidence intervals did not rule out equivalence. Limited evidence suggests that varenicline may have a role to play in relapse prevention. The main adverse effect of varenicline is nausea, but mostly at mild to moderate levels and tending to subside over time. Possible links with serious adverse events, including serious psychiatric or cardiovascular events, cannot be ruled out. Future trials of cytisine may test extended regimens and more intensive behavioural support. There is a need for further trials of the efficacy of varenicline treatment extended beyond 12 weeks.

973 citations


Journal ArticleDOI
TL;DR: NAFLD and NASH represent a large and growing public health problem; if obesity and DM continue to increase at current and historical rates, efforts to understand this epidemic and to mitigate the disease burden are needed.

973 citations


Journal ArticleDOI
TL;DR: A broadband photodetector using a layered black phosphorus transistor that is polarization-sensitive over a bandwidth from ∼400 nm to 3,750 nm is demonstrated, which might provide new functionalities in novel optical and optoelectronic device applications.
Abstract: The ability to detect light over a broad spectral range is central to practical optoelectronic applications and has been successfully demonstrated with photodetectors of two-dimensional layered crystals such as graphene and MoS2. However, polarization sensitivity within such a photodetector remains elusive. Here, we demonstrate a broadband photodetector using a layered black phosphorus transistor that is polarization-sensitive over a bandwidth from ∼400 nm to 3,750 nm. The polarization sensitivity is due to the strong intrinsic linear dichroism, which arises from the in-plane optical anisotropy of this material. In this transistor geometry, a perpendicular built-in electric field induced by gating can spatially separate the photogenerated electrons and holes in the channel, effectively reducing their recombination rate and thus enhancing the performance for linear dichroism photodetection. The use of anisotropic layered black phosphorus in polarization-sensitive photodetection might provide new functionalities in novel optical and optoelectronic device applications. The anisotropic optical properties of black phosphorus can be exploited to fabricate photodetectors with linear dichroism operating over a broad spectral range.

973 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review the performance of the LIGO instruments during this epoch, the work done to characterize the detectors and their data, and the effect that transient and continuous noise artefacts have on the sensitivity of the detectors to a variety of astrophysical sources.
Abstract: In 2009-2010, the Laser Interferometer Gravitational-wave Observatory (LIGO) operated together with international partners Virgo and GEO600 as a network to search for gravitational waves of astrophysical origin. The sensitivity of these detectors was limited by a combination of noise sources inherent to the instrumental design and its environment, often localized in time or frequency, that couple into the gravitational-wave readout. Here we review the performance of the LIGO instruments during this epoch, the work done to characterize the detectors and their data, and the effect that transient and continuous noise artefacts have on the sensitivity of LIGO to a variety of astrophysical sources.

973 citations


Journal ArticleDOI
Hye-Geum Kim, Eun-Jin Cheon, Dai-Seg Bai, Young Hwan Lee, Bon-Hoon Koo
TL;DR: The current neurobiological evidence suggests that HRV is impacted by stress and supports its use for the objective assessment of psychological health and stress.
Abstract: Objective Physical or mental imbalance caused by harmful stimuli can induce stress to maintain homeostasis. During chronic stress, the sympathetic nervous system is hyperactivated, causing physical, psychological, and behavioral abnormalities. At present, there is no accepted standard for stress evaluation. This review aimed to survey studies providing a rationale for selecting heart rate variability (HRV) as a psychological stress indicator. Methods Term searches in the Web of Science®, National Library of Medicine (PubMed), and Google Scholar databases yielded 37 publications meeting our criteria. The inclusion criteria were involvement of human participants, HRV as an objective psychological stress measure, and measured HRV reactivity. Results In most studies, HRV variables changed in response to stress induced by various methods. The most frequently reported factor associated with variation in HRV variables was low parasympathetic activity, which is characterized by a decrease in the high-frequency band and an increase in the low-frequency band. Neuroimaging studies suggested that HRV may be linked to cortical regions (e.g., the ventromedial prefrontal cortex) that are involved in stressful situation appraisal. Conclusion In conclusion, the current neurobiological evidence suggests that HRV is impacted by stress and supports its use for the objective assessment of psychological health and stress.
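The frequency-domain HRV quantities the review refers to (low- and high-frequency band power) can be made concrete in a few lines. The following is a minimal sketch, not code from the paper: it assumes the RR tachogram is resampled onto an even grid before Welch's method is applied, and uses the conventional band edges (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz); the function name lf_hf_power and the synthetic example are our own.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_power(rr_ms, fs=4.0):
    """Estimate LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) power from RR intervals.

    rr_ms: beat-to-beat RR intervals in milliseconds.
    fs:    resampling rate (Hz) for the evenly spaced tachogram.
    """
    rr_ms = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr_ms) / 1000.0                        # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)              # even time grid
    tachogram = interp1d(t, rr_ms, kind="cubic")(grid)   # resampled RR series
    tachogram -= tachogram.mean()                        # remove the DC component
    f, psd = welch(tachogram, fs=fs, nperseg=min(256, len(grid)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    return lf, hf, lf / hf

# Synthetic tachogram with a respiratory-frequency (0.25 Hz) HF modulation.
rng = np.random.default_rng(0)
beats = np.arange(300)
rr = 800 + 40 * np.sin(2 * np.pi * 0.25 * beats * 0.8) + rng.normal(0, 5, 300)
print(lf_hf_power(rr))
```

A stress response of the kind described above would typically show up here as a drop in the HF integral and a rise in the LF/HF ratio.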

972 citations


Journal ArticleDOI
TL;DR: In this article, the authors conduct a systematic literature review to identify and summarise the core features of inquiry-based learning and develop a synthesised inquiry cycle that combines the strengths of existing inquiry-based learning frameworks.

972 citations


Journal ArticleDOI
TL;DR: Radiologists in China and the United States distinguished COVID-19 from viral pneumonia on chest CT with high specificity but moderate sensitivity.
Abstract: Radiologists had high specificity but moderate sensitivity in differentiating coronavirus disease 2019 (COVID-19) from non-COVID-19 viral pneumonia at chest CT.

972 citations


Proceedings Article
14 Jun 2016
TL;DR: This paper shows how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way.
Abstract: The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
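The central idea, replacing a hand-designed update rule with a learned function, can be sketched compactly. In the paper the per-coordinate update and new state come from an LSTM m with meta-parameters phi, via g_t, h_{t+1} = m(grad f(theta_t), h_t; phi), and phi is trained by gradient descent on the summed optimizee loss. The stand-in rule below (a momentum step with hypothetical "learnable" coefficients) is only meant to show that interface, not the paper's architecture.

```python
import numpy as np

def learned_update(grad, state, phi):
    """Stand-in for the paper's coordinate-wise LSTM optimizer.

    The paper computes (update, new_state) = LSTM(grad, state; phi) and
    trains phi on the optimizee's loss; here a momentum-like rule with
    coefficients phi plays that role, purely to illustrate the interface.
    """
    new_state = phi["beta"] * state + grad
    return -phi["lr"] * new_state, new_state

def optimizee(theta):
    """A simple convex problem; returns the loss and its gradient."""
    return 0.5 * np.sum(theta ** 2), theta

theta = np.ones(10)
state = np.zeros_like(theta)
phi = {"lr": 0.1, "beta": 0.9}   # in the paper these would be LSTM weights
for _ in range(100):
    loss, grad = optimizee(theta)
    update, state = learned_update(grad, state, phi)
    theta = theta + update
print("final loss:", optimizee(theta)[0])
```

Meta-training would wrap this inner loop, backpropagating the summed losses through the unrolled updates to improve phi.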

Posted Content
TL;DR: An audio dataset of spoken words designed to help train and evaluate keyword spotting systems and suggests a methodology for reproducible and comparable accuracy metrics for this task.
Abstract: Describes an audio dataset of spoken words designed to help train and evaluate keyword spotting systems. Discusses why this task is an interesting challenge, and why it requires a specialized dataset that is different from conventional datasets used for automatic speech recognition of full sentences. Suggests a methodology for reproducible and comparable accuracy metrics for this task. Describes how the data was collected and verified, what it contains, previous versions and properties. Concludes by reporting baseline results of models trained on this dataset.

Journal ArticleDOI
08 Jul 2016-Science
TL;DR: A generalized framework for clustering networks on the basis of higher-order connectivity patterns provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges.
Abstract: Networks are a fundamental tool for understanding and modeling complex systems in physics, biology, neuroscience, engineering, and social science. Many networks are known to exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. However, higher-order organization of complex networks—at the level of small network subgraphs—remains largely unknown. Here, we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns. This framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges. The framework reveals higher-order organization in a number of networks, including information propagation units in neuronal networks and hub structure in transportation networks. Results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns.
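The simplest instance of the framework can be sketched for the triangle motif: weight each edge by the number of triangles it participates in, then apply an ordinary spectral cut to that motif adjacency matrix. This is a minimal illustration under our reading of the method; the function names are ours, and the paper's framework generalizes to arbitrary motifs with the accompanying optimality guarantees.

```python
import numpy as np

def triangle_motif_cut(A):
    """Two-way spectral cut based on triangle co-membership.

    A: symmetric 0/1 adjacency matrix of an undirected graph.
    W[i, j] = number of triangles containing edge (i, j); clustering W
    instead of A is the simplest case of motif-based clustering.
    """
    W = (A @ A) * A                                   # per-edge triangle counts
    d = W.sum(axis=1).astype(float)
    d[d == 0] = 1.0                                   # guard motif-isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ W @ D_inv_sqrt  # normalized motif Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1] < 0                             # split by sign of Fiedler vector

# Bowtie graph: two triangles sharing node 2. The cut separates the two
# triangles; the shared node sits at the boundary (Fiedler value ~0).
A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]:
    A[i, j] = A[j, i] = 1
print(triangle_motif_cut(A))
```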

Journal ArticleDOI
TL;DR: How the field of functional MRI should evolve is described to produce the most meaningful and reliable answers to neuroscientific questions.
Abstract: Functional neuroimaging techniques have transformed our ability to probe the neurobiological basis of behaviour and are increasingly being applied by the wider neuroscience community. However, concerns have recently been raised that the conclusions that are drawn from some human neuroimaging studies are either spurious or not generalizable. Problems such as low statistical power, flexibility in data analysis, software errors and a lack of direct replication apply to many fields, but perhaps particularly to functional MRI. Here, we discuss these problems, outline current and suggested best practices, and describe how we think the field should evolve to produce the most meaningful and reliable answers to neuroscientific questions.

Journal ArticleDOI
TL;DR: In high‐income countries mainly, improvements in prevention, acute treatment, and neurorehabilitation have led to a substantial decrease in the burden of stroke over the past 30 years.
Abstract: Stroke is the second leading cause of death and a major cause of disability worldwide. Its incidence is increasing as the population ages. In addition, more young people are affected by stroke in low- and middle-income countries. Ischemic stroke is more frequent, but hemorrhagic stroke is responsible for more deaths and disability-adjusted life-years lost. Incidence and mortality of stroke differ between countries, geographical regions, and ethnic groups. Mainly in high-income countries, improvements in prevention, acute treatment, and neurorehabilitation have led to a substantial decrease in the burden of stroke over the past 30 years. This article reviews the epidemiological and clinical data concerning stroke incidence and burden around the globe.

Journal ArticleDOI
TL;DR: An objective response was achieved in ten (10%) of 98 patients receiving nivolumab 3 mg/kg, one (33%) of three patients receiving nivolumab 1 mg/kg plus ipilimumab 1 mg/kg, 14 (23%) of 61 receiving nivolumab 1 mg/kg plus ipilimumab 3 mg/kg, and ten (19%) of 54 receiving nivolumab 3 mg/kg plus ipilimumab 1 mg/kg.
Abstract: Summary Background Treatments for small-cell lung cancer (SCLC) after failure of platinum-based chemotherapy are limited. We assessed safety and activity of nivolumab and nivolumab plus ipilimumab in patients with SCLC who progressed after one or more previous regimens. Methods The SCLC cohort of this phase 1/2 multicentre, multi-arm, open-label trial was conducted at 23 sites (academic centres and hospitals) in six countries. Eligible patients were 18 years of age or older, had limited-stage or extensive-stage SCLC, and had disease progression after at least one previous platinum-containing regimen. Patients received nivolumab (3 mg/kg bodyweight intravenously) every 2 weeks (given until disease progression or unacceptable toxicity), or nivolumab plus ipilimumab (1 mg/kg plus 1 mg/kg, 1 mg/kg plus 3 mg/kg, or 3 mg/kg plus 1 mg/kg, intravenously) every 3 weeks for four cycles, followed by nivolumab 3 mg/kg every 2 weeks. Patients were either assigned to nivolumab monotherapy or assessed in a dose-escalating safety phase for the nivolumab/ipilimumab combination beginning at nivolumab 1 mg/kg plus ipilimumab 1 mg/kg. Depending on tolerability, patients were then assigned to nivolumab 1 mg/kg plus ipilimumab 3 mg/kg or nivolumab 3 mg/kg plus ipilimumab 1 mg/kg. The primary endpoint was objective response by investigator assessment. All analyses included patients who were enrolled at least 90 days before database lock. This trial is ongoing; here, we report an interim analysis of the SCLC cohort. This study is registered with ClinicalTrials.gov, number NCT01928394. Findings Between Nov 18, 2013, and July 28, 2015, 216 patients were enrolled and treated (98 with nivolumab 3 mg/kg, three with nivolumab 1 mg/kg plus ipilimumab 1 mg/kg, 61 with nivolumab 1 mg/kg plus ipilimumab 3 mg/kg, and 54 with nivolumab 3 mg/kg plus ipilimumab 1 mg/kg). At database lock on Nov 6, 2015, median follow-up for patients continuing in the study (including those who had died or discontinued treatment) was 198·5 days (IQR 163·0–464·0) for nivolumab 3 mg/kg, 302 days (IQR not calculable) for nivolumab 1 mg/kg plus ipilimumab 1 mg/kg, 361·0 days (273·0–470·0) for nivolumab 1 mg/kg plus ipilimumab 3 mg/kg, and 260·5 days (248·0–288·0) for nivolumab 3 mg/kg plus ipilimumab 1 mg/kg. An objective response was achieved in ten (10%) of 98 patients receiving nivolumab 3 mg/kg, one (33%) of three patients receiving nivolumab 1 mg/kg plus ipilimumab 1 mg/kg, 14 (23%) of 61 receiving nivolumab 1 mg/kg plus ipilimumab 3 mg/kg, and ten (19%) of 54 receiving nivolumab 3 mg/kg plus ipilimumab 1 mg/kg. Grade 3 or 4 treatment-related adverse events occurred in 13 (13%) patients in the nivolumab 3 mg/kg cohort, 18 (30%) in the nivolumab 1 mg/kg plus ipilimumab 3 mg/kg cohort, and ten (19%) in the nivolumab 3 mg/kg plus ipilimumab 1 mg/kg cohort; the most commonly reported grade 3 or 4 treatment-related adverse events were increased lipase (none vs 5 [8%] vs none) and diarrhoea (none vs 3 [5%] vs 1 [2%]). No patients in the nivolumab 1 mg/kg plus ipilimumab 1 mg/kg cohort had a grade 3 or 4 treatment-related adverse event. Six (6%) patients in the nivolumab 3 mg/kg group, seven (11%) in the nivolumab 1 mg/kg plus ipilimumab 3 mg/kg group, and four (7%) in the nivolumab 3 mg/kg plus ipilimumab 1 mg/kg group discontinued treatment due to treatment-related adverse events. 
Two patients who received nivolumab 1 mg/kg plus ipilimumab 3 mg/kg died from treatment-related adverse events (myasthenia gravis and worsening of renal failure), and one patient who received nivolumab 3 mg/kg plus ipilimumab 1 mg/kg died from treatment-related pneumonitis. Interpretation Nivolumab monotherapy and nivolumab plus ipilimumab showed antitumour activity with durable responses and manageable safety profiles in previously treated patients with SCLC. These data suggest a potential new treatment approach for a population of patients with limited treatment options and support the evaluation of nivolumab and nivolumab plus ipilimumab in phase 3 randomised controlled trials in SCLC. Funding Bristol-Myers Squibb.

Proceedings Article
01 Jan 2017
TL;DR: In this article, adaptive instance normalization (AdaIN) is proposed to align the mean and variance of the content features with those of the style features, which enables arbitrary style transfer in real-time.
Abstract: Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.
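The AdaIN layer itself is compact enough to state directly. Below is a minimal NumPy sketch of the operation as the abstract describes it, AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y), with statistics taken per sample and per channel over the spatial dimensions; the function name and (N, C, H, W) array layout are our choices.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: give the content features the
    per-channel mean and standard deviation of the style features.

    content, style: feature maps of shape (N, C, H, W).
    """
    c_mean = content.mean(axis=(2, 3), keepdims=True)
    c_std = content.std(axis=(2, 3), keepdims=True) + eps
    s_mean = style.mean(axis=(2, 3), keepdims=True)
    s_std = style.std(axis=(2, 3), keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

# The output keeps the content's spatial layout but carries the style's
# channel statistics, which a decoder then renders as the stylized image.
rng = np.random.default_rng(0)
c = rng.normal(0.0, 1.0, (1, 3, 8, 8))
s = rng.normal(2.0, 0.5, (1, 3, 8, 8))
out = adain(c, s)
print(out.mean(axis=(2, 3)), out.std(axis=(2, 3)))  # approx. style mean/std
```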

Journal ArticleDOI
TL;DR: The appropriately graded prescription of high training loads should improve players’ fitness, which in turn may protect against injury, ultimately leading to greater physical outputs and resilience in competition, and a greater proportion of the squad available for selection each week.
Abstract: Background There is dogma that higher training load causes higher injury rates. However, there is also evidence that training has a protective effect against injury. For example, team sport athletes who performed more than 18 weeks of training before sustaining their initial injuries were at reduced risk of sustaining a subsequent injury, while high chronic workloads have been shown to decrease the risk of injury. Second, across a wide range of sports, well-developed physical qualities are associated with a reduced risk of injury. Clearly, for athletes to develop the physical capacities required to provide a protective effect against injury, they must be prepared to train hard. Finally, there is also evidence that under-training may increase injury risk. Collectively, these results emphasise that reductions in workloads may not always be the best approach to protect against injury. Main thesis This paper describes the ‘Training-Injury Prevention Paradox’ model; a phenomenon whereby athletes accustomed to high training loads have fewer injuries than athletes training at lower workloads. The model is based on evidence that non-contact injuries are not caused by training per se, but more likely by an inappropriate training programme. Excessive and rapid increases in training loads are likely responsible for a large proportion of non-contact, soft-tissue injuries. If training load is an important determinant of injury, it must be accurately measured up to twice daily and over periods of weeks and months (a season). This paper outlines ways of monitoring training load (‘internal’ and ‘external’ loads) and suggests capturing both recent (‘acute’) training loads and more medium-term (‘chronic’) training loads to best capture the player's training burden. I describe the critical variable, the acute:chronic workload ratio, as a best-practice predictor of training-related injuries. This provides the foundation for interventions to reduce players' risk and, thus, time-loss injuries. Summary The appropriately graded prescription of high training loads should improve players’ fitness, which in turn may protect against injury, ultimately leading to (1) greater physical outputs and resilience in competition, and (2) a greater proportion of the squad available for selection each week.
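The acute:chronic workload ratio is commonly operationalised (though the abstract does not spell out the windows) as a 7-day rolling mean load divided by a 28-day rolling mean load. A minimal sketch under that assumption, with a hypothetical daily-load series:

```python
import numpy as np

def acute_chronic_ratio(daily_load, acute_days=7, chronic_days=28):
    """Acute:chronic workload ratio (ACWR).

    daily_load: 1-D array of daily training loads (e.g., session-RPE x minutes).
    Acute load = rolling mean over the last 7 days; chronic load = rolling
    mean over the last 28 days; the ACWR is their ratio.
    """
    load = np.asarray(daily_load, dtype=float)

    def rolling_mean(x, w):
        return np.convolve(x, np.ones(w) / w, mode="valid")

    acute = rolling_mean(load, acute_days)[chronic_days - acute_days:]
    chronic = rolling_mean(load, chronic_days)
    return acute / chronic

# Four steady weeks at 400 AU/day, then a spike week at 700 AU/day.
load = np.r_[np.full(28, 400.0), np.full(7, 700.0)]
print(acute_chronic_ratio(load)[-1])   # ~1.47: a rapid rise in load
```

Ratios well above 1 flag exactly the "excessive and rapid increases in training loads" the paper associates with non-contact, soft-tissue injuries.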

Posted Content
TL;DR: It is shown empirically that, in addition to improving generalization, label smoothing improves model calibration, which can significantly improve beam search; however, if a teacher network is trained with label smoothing, knowledge distillation into a student network is much less effective.
Abstract: The generalization and learning speed of a multi-class neural network can often be significantly improved by using soft targets that are a weighted average of the hard targets and the uniform distribution over labels. Smoothing the labels in this way prevents the network from becoming over-confident and label smoothing has been used in many state-of-the-art models, including image classification, language translation and speech recognition. Despite its widespread use, label smoothing is still poorly understood. Here we show empirically that in addition to improving generalization, label smoothing improves model calibration which can significantly improve beam-search. However, we also observe that if a teacher network is trained with label smoothing, knowledge distillation into a student network is much less effective. To explain these observations, we visualize how label smoothing changes the representations learned by the penultimate layer of the network. We show that label smoothing encourages the representations of training examples from the same class to group in tight clusters. This results in loss of information in the logits about resemblances between instances of different classes, which is necessary for distillation, but does not hurt generalization or calibration of the model's predictions.
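The soft-target construction in the first sentence is a one-liner: with K classes and smoothing weight alpha, the target becomes y_ls = (1 - alpha) * y_onehot + alpha / K. A minimal sketch (function name and values are ours):

```python
import numpy as np

def smooth_labels(labels, num_classes, alpha=0.1):
    """Soft targets: a weighted average of the one-hot targets and the
    uniform distribution over labels, y_ls = (1 - alpha) * onehot + alpha / K.
    """
    onehot = np.eye(num_classes)[labels]
    return (1.0 - alpha) * onehot + alpha / num_classes

# With K = 5 and alpha = 0.1: the true class gets 0.92, every other 0.02.
print(smooth_labels(np.array([2]), num_classes=5, alpha=0.1))
```

Training against these targets is what penalises over-confident logits and, per the abstract, pulls same-class representations into tight clusters.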

Journal ArticleDOI
TL;DR: The authors survey institutional investors to understand their role in the corporate governance of firms and find that most investors use proxy advisors and believe that the information provided by such advisors improves their own voting decisions.
Abstract: We survey institutional investors to better understand their role in the corporate governance of firms. Consistent with a number of theories, we document widespread behind-the-scenes intervention as well as governance-motivated exit. These governance mechanisms are viewed as complementary devices, with intervention typically occurring prior to a potential exit. We further find that long-term investors and investors that are less concerned about stock liquidity intervene more intensively. Finally, we find that most investors use proxy advisors and believe that the information provided by such advisors improves their own voting decisions.

ReportDOI
01 Jun 2017
TL;DR: This paper reviews 35 methodologically rigorous studies that have demonstrated a positive link between teacher professional development, teaching practices, and student academic outcomes.
Abstract: This paper reviews 35 methodologically rigorous studies that have demonstrated a positive link between teacher professional development, teaching practices, and student academic outcomes. Key features of effective initiatives are identified, and full descriptions of these models are offered to inform education leaders and policymakers seeking to leverage professional development to improve student learning.

Journal ArticleDOI
TL;DR: In this Review, recent progress in the synthesis and electrochemical application of transition metal carbides and nitrides for energy storage and conversion is summarized, and the advantages and benefits of nanostructuring are highlighted.
Abstract: High-performance electrode materials are the key to advances in the areas of energy conversion and storage (e.g., fuel cells and batteries). In this Review, recent progress in the synthesis and electrochemical application of transition metal carbides (TMCs) and nitrides (TMNs) for energy storage and conversion is summarized. Their electrochemical properties in Li-ion and Na-ion batteries as well as in supercapacitors, and electrocatalytic reactions (oxygen evolution and reduction reactions, and hydrogen evolution reaction) are discussed in association with their crystal structure/morphology/composition. Advantages and benefits of nanostructuring (e.g., 2D MXenes) are highlighted. Prospects of future research trends in rational design of high-performance TMCs and TMNs electrodes are provided at the end.

Journal ArticleDOI
Jarrod R. McClean, Sergio Boixo, Vadim Smelyanskiy, Ryan Babbush, Hartmut Neven
TL;DR: In this article, the authors show that for a wide class of reasonable parameterized quantum circuits, the probability that the gradient along any reasonable direction is non-zero to some fixed precision is exponentially small as a function of the number of qubits.
Abstract: Many experimental proposals for noisy intermediate scale quantum devices involve training a parameterized quantum circuit with a classical optimization loop. Such hybrid quantum-classical algorithms are popular for applications in quantum simulation, optimization, and machine learning. Due to its simplicity and hardware efficiency, random circuits are often proposed as initial guesses for exploring the space of quantum states. We show that the exponential dimension of Hilbert space and the gradient estimation complexity make this choice unsuitable for hybrid quantum-classical algorithms run on more than a few qubits. Specifically, we show that for a wide class of reasonable parameterized quantum circuits, the probability that the gradient along any reasonable direction is non-zero to some fixed precision is exponentially small as a function of the number of qubits. We argue that this is related to the 2-design characteristic of random circuits, and that solutions to this problem must be studied. Gradient-based hybrid quantum-classical algorithms are often initialised with random, unstructured guesses. Here, the authors show that this approach will fail in the long run, due to the exponentially-small probability of finding a large enough gradient along any direction.
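The abstract's claim can be stated compactly. The display below is our paraphrase, with unspecified constants alpha, beta > 0 standing in for the paper's exact expressions, under the stated connection to 2-designs; the Chebyshev step makes explicit why vanishing variance makes any fixed-precision gradient component exponentially unlikely.

```latex
% Barren-plateau scaling (paraphrase): for random parameterized circuits
% whose distribution matches the first two moments of the Haar measure
% (a 2-design), each gradient component of the cost E has zero mean and
% exponentially small variance in the number of qubits n:
\mathbb{E}_{\theta}\big[\partial_k E(\theta)\big] = 0,
\qquad
\operatorname{Var}_{\theta}\big[\partial_k E(\theta)\big] \le \alpha\, e^{-\beta n},
% so by Chebyshev's inequality, for any fixed precision \epsilon > 0,
\Pr\big(\lvert \partial_k E(\theta) \rvert \ge \epsilon\big)
  \le \frac{\alpha\, e^{-\beta n}}{\epsilon^{2}}.
```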

Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this article, the authors propose a fully computational approach for modeling the structure of the space of visual tasks, yielding a taxonomic map for task transfer learning, together with a set of tools for computing and probing this taxonomical structure, including a solver that finds supervision policies for a given use case.
Abstract: Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable values; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up the complexity. We propose a fully computational approach for modeling the structure of the space of visual tasks. This is done via finding (first and higher-order) transfer learning dependencies across a dictionary of twenty-six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g. nontrivial emerged relationships, and exploit them to reduce the demand for labeled data. We provide a set of tools for computing and probing this taxonomical structure, including a solver users can employ to find supervision policies for their use cases.

Journal ArticleDOI
TL;DR: In this article, the authors presented an improved estimate of the occurrence rate of small planets orbiting small stars by searching the full four-year Kepler data set for transiting planets using their own planet detection pipeline and conducting transit injection and recovery simulations to empirically measure the search completeness of their pipeline.
Abstract: We present an improved estimate of the occurrence rate of small planets orbiting small stars by searching the full four-year Kepler data set for transiting planets using our own planet detection pipeline and conducting transit injection and recovery simulations to empirically measure the search completeness of our pipeline. We identified 156 planet candidates, including one object that was not previously identified as a Kepler Object of Interest. We inspected all publicly available follow-up images, observing notes, and centroid analyses, and corrected for the likelihood of false positives. We evaluated the sensitivity of our detection pipeline on a star-by-star basis by injecting 2000 transit signals into the light curve of each target star. For periods shorter than 50 days, we find Earth-size planets (1−1.5 R⊕) and super-Earths (1.5−2 R⊕) per M dwarf. In total, we estimate a cumulative planet occurrence rate of 2.5 ± 0.2 planets per M dwarf with radii 1−4 R⊕ and periods shorter than 200 days. Within a conservatively defined habitable zone (HZ) based on the moist greenhouse inner limit and maximum greenhouse outer limit, we estimate an occurrence rate of Earth-size planets and super-Earths per M dwarf HZ. Adopting the broader insolation boundaries of the recent Venus and early Mars limits yields a higher estimate of Earth-size planets and super-Earths per M dwarf HZ. This suggests that the nearest potentially habitable non-transiting and transiting Earth-size planets are 2.6 ± 0.4 pc and pc away, respectively. If we include super-Earths, these distances diminish to 2.1 ± 0.2 pc and pc.

Proceedings Article
21 Feb 2015
TL;DR: In this paper, the authors study the connection between the loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of variable independence, redundancy in network parametrization, and uniformity.
Abstract: We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum. The number of local minima outside that band diminishes exponentially with the size of the network. We empirically verify that the mathematical model exhibits similar behavior as the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between large- and small-size networks where for the latter poor quality local minima have nonzero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant as the global minimum often leads to overfitting.

Posted Content
TL;DR: In this paper, a deep neural network is presented that sequentially predicts the pixels in an image along the two spatial dimensions, modeling the discrete probability of the raw pixel values and encoding the complete set of dependencies in the image.
Abstract: Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.
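The factorization the abstract describes, p(x) = prod_i p(x_i | x_<i) with a discrete 256-way distribution per pixel, determines how sampling works regardless of architecture. A minimal sketch of raster-scan sampling with a toy stand-in for the conditional model (the paper's two-dimensional recurrent layers are not reproduced here):

```python
import numpy as np

def sample_image(cond_logits, height=8, width=8, seed=0):
    """Sample an image pixel-by-pixel in raster-scan order, following
    p(x) = prod_i p(x_i | x_<i) over discrete intensities 0..255.

    cond_logits(context) -> length-256 logit vector for the next pixel;
    in the paper this is a 2-D recurrent network, here any callable works.
    """
    rng = np.random.default_rng(seed)
    img = np.zeros((height, width), dtype=np.uint8)
    for r in range(height):
        for c in range(width):
            # Context: rows up to the current one (pixels not yet sampled
            # in the current row are still zero in this toy version).
            logits = cond_logits(img[: r + 1])
            p = np.exp(logits - logits.max())
            p /= p.sum()
            img[r, c] = rng.choice(256, p=p)
    return img

def toy_logits(context):
    """Toy conditional: brighter context makes brighter pixels likelier."""
    bias = context.mean() / 255.0
    return np.linspace(-1.0, 1.0, 256) * bias

print(sample_image(toy_logits))
```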

Journal ArticleDOI
TL;DR: In this article, the CT features of COVID-19 were compared with those of other viruses to familiarize radiologists with possible CT patterns; the authors found that CT is still limited for identifying specific viruses and distinguishing between viruses.
Abstract: OBJECTIVE. The objective of our study was to determine the misdiagnosis rate of radiologists for coronavirus disease 2019 (COVID-19) and evaluate the performance of chest CT in the diagnosis and management of COVID-19. The CT features of COVID-19 are reported and compared with the CT features of other viruses to familiarize radiologists with possible CT patterns. MATERIALS AND METHODS. This study included the first 51 patients with a diagnosis of COVID-19 infection confirmed by nucleic acid testing (23 women and 28 men; age range, 26-83 years) and two patients with adenovirus (one woman and one man; ages, 58 and 66 years). We reviewed the clinical information, CT images, and corresponding image reports of these 53 patients. The CT images included images from 99 chest CT examinations, including initial and follow-up CT studies. We compared the image reports of the initial CT study with the laboratory test results and identified CT patterns suggestive of viral infection. RESULTS. COVID-19 was misdiagnosed as a common infection at the initial CT study in two inpatients with underlying disease and COVID-19. Viral pneumonia was correctly diagnosed at the initial CT study in the remaining 49 patients with COVID-19 and two patients with adenovirus. These patients were isolated and received treatment. Ground-glass opacities (GGOs) and consolidation with or without vascular enlargement, interlobular septal thickening, and the air bronchogram sign are common CT features of COVID-19. The "reversed halo" sign and pulmonary nodules with a halo sign are uncommon CT features. The CT findings of COVID-19 overlap with the CT findings of adenovirus infection. There are differences as well as similarities in the CT features of COVID-19 compared with those of severe acute respiratory syndrome. CONCLUSION. We found that chest CT had a low rate of missed diagnosis of COVID-19 (3.9%, 2/51) and may be useful as a standard method for the rapid diagnosis of COVID-19 to optimize the management of patients. However, CT is still limited for identifying specific viruses and distinguishing between viruses.

Journal ArticleDOI
TL;DR: This survey provides the reader with comprehensive details on the use of space-based optical backhaul links in order to provide high capacity and low cost backhaul solutions.
Abstract: In recent years, free space optical (FSO) communication has gained significant importance owing to its unique features: large bandwidth, license-free spectrum, high data rate, easy and quick deployability, low power, and low mass requirements. FSO communication uses an optical carrier in the near-infrared band to establish either terrestrial links within the Earth's atmosphere or inter-satellite/deep space links or ground-to-satellite/satellite-to-ground links. It also finds its applications in remote sensing, radio astronomy, military, disaster recovery, last mile access, backhaul for wireless cellular networks, and many more. However, despite the great potential of FSO communication, its performance is limited by the adverse effects (viz., absorption, scattering, and turbulence) of the atmospheric channel. Out of these three effects, the atmospheric turbulence is a major challenge that may lead to serious degradation in the bit error rate performance of the system and make the communication link infeasible. This paper presents a comprehensive survey on various challenges faced by FSO communication systems for ground-to-satellite/satellite-to-ground and inter-satellite links. It also provides details of various performance mitigation techniques in order to have high link availability and reliability. The first part of this paper focuses on various types of impairments that pose a serious challenge to the performance of optical communication systems for ground-to-satellite/satellite-to-ground and inter-satellite links. The latter part of this paper provides the reader with an exhaustive review of various techniques both at the physical layer as well as at the other layers (link, network, or transport layer) to combat the adverse effects of the atmosphere. It also uniquely presents a recently developed technique using orbital angular momentum for utilizing the high-capacity advantage of the optical carrier in the case of space-based and near-Earth optical communication links. This survey provides the reader with comprehensive details on the use of space-based optical backhaul links in order to provide high-capacity and low-cost backhaul solutions.

Journal ArticleDOI
09 Mar 2017-Nature
TL;DR: In this paper, the authors present the experimental observation of a discrete time crystal in an interacting spin chain of trapped atomic ions and apply a periodic Hamiltonian to the system under many-body localization conditions, and observe a subharmonic temporal response that is robust to external perturbations.
Abstract: Spontaneous symmetry breaking is a fundamental concept in many areas of physics, including cosmology, particle physics and condensed matter. An example is the breaking of spatial translational symmetry, which underlies the formation of crystals and the phase transition from liquid to solid. Using the analogy of crystals in space, the breaking of translational symmetry in time and the emergence of a 'time crystal' was recently proposed, but was later shown to be forbidden in thermal equilibrium. However, non-equilibrium Floquet systems, which are subject to a periodic drive, can exhibit persistent time correlations at an emergent subharmonic frequency. This new phase of matter has been dubbed a 'discrete time crystal'. Here we present the experimental observation of a discrete time crystal, in an interacting spin chain of trapped atomic ions. We apply a periodic Hamiltonian to the system under many-body localization conditions, and observe a subharmonic temporal response that is robust to external perturbations. The observation of such a time crystal opens the door to the study of systems with long-range spatio-temporal correlations and novel phases of matter that emerge under intrinsically non-equilibrium conditions.