
Proceedings ArticleDOI
27 Jun 2016
TL;DR: The MegaFace dataset and challenge are assembled to evaluate face recognition at the million scale; both identification and verification performance are reported, along with performance with respect to pose and a person's age, and accuracy as a function of training data size (#photos and #people).
Abstract: Recent face recognition experiments on a major benchmark (LFW [15]) show stunning performance: a number of algorithms achieve near-perfect scores, surpassing human recognition rates. In this paper, we advocate evaluations at the million scale (LFW includes only 13K photos of 5K people). To this end, we have assembled the MegaFace dataset and created the first MegaFace challenge. Our dataset includes one million photos that capture more than 690K different individuals. The challenge evaluates performance of algorithms with increasing numbers of "distractors" (going from 10 to 1M) in the gallery set. We present both identification and verification performance, evaluate performance with respect to pose and a person's age, and compare as a function of training data size (#photos and #people). We report results of state-of-the-art and baseline algorithms. The MegaFace dataset, baseline code, and evaluation scripts are all publicly released for further experimentation.
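The identification protocol described above can be sketched as a rank-1 search against a gallery padded with distractors. This is a minimal illustration with random embeddings and a made-up noise model, not the paper's evaluation code:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 128

# Hypothetical embeddings: the probe and its true gallery mate share an identity direction.
identity = rng.normal(size=dim)
probe = identity + 0.1 * rng.normal(size=dim)
true_match = identity + 0.1 * rng.normal(size=dim)

def rank1_hit(probe, gallery, true_index):
    """Return True if the gallery entry most cosine-similar to the probe is the true match."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    return int(np.argmax(g @ p)) == true_index

# Rank-1 identification against galleries with growing numbers of distractors.
for n_distractors in (10, 1000, 100000):
    distractors = rng.normal(size=(n_distractors, dim))
    gallery = np.vstack([true_match[None, :], distractors])
    print(n_distractors, rank1_hit(probe, gallery, true_index=0))
```

As the distractor count grows, the chance that some random embedding beats the true match increases, which is exactly why million-scale galleries are a harder test than LFW-scale ones.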

841 citations


Journal ArticleDOI
Rattan Lal1
TL;DR: In this paper, the authors propose a strategy to minimize soil erosion, create positive soil organic carbon (SOC) and N budgets, enhance activity and species diversity of soil biota (micro, meso, and macro), and improve structural stability and pore geometry.
Abstract: Feeding the world population, 7.3 billion in 2015 and projected to increase to 9.5 billion by 2050, necessitates an increase in agricultural production of ~70% between 2005 and 2050. Soil degradation, characterized by decline in quality and decrease in ecosystem goods and services, is a major constraint to achieving the required increase in agricultural production. Soil is a non-renewable resource on human time scales with its vulnerability to degradation depending on complex interactions between processes, factors and causes occurring at a range of spatial and temporal scales. Among the major soil degradation processes are accelerated erosion, depletion of the soil organic carbon (SOC) pool and loss in biodiversity, loss of soil fertility and elemental imbalance, acidification and salinization. Soil degradation trends can be reversed by conversion to a restorative land use and adoption of recommended management practices. The strategy is to minimize soil erosion, create positive SOC and N budgets, enhance activity and species diversity of soil biota (micro, meso, and macro), and improve structural stability and pore geometry. Improving soil quality (i.e., increasing SOC pool, improving soil structure, enhancing soil fertility) can reduce risks of soil degradation (physical, chemical, biological and ecological) while improving the environment. Increasing the SOC pool to above the critical level (10 to 15 g/kg) is essential to set the restorative trends in motion. Site-specific techniques of restoring soil quality include conservation agriculture, integrated nutrient management, continuous vegetative cover such as residue mulch and cover cropping, and controlled grazing at appropriate stocking rates. The strategy is to produce “more from less” by reducing losses and increasing soil, water, and nutrient use efficiency.

841 citations


Journal ArticleDOI
30 Jul 2015-Nature
TL;DR: It is shown that large eruptions in the tropics and high latitudes were primary drivers of interannual-to-decadal temperature variability in the Northern Hemisphere during the past 2,500 years and cooling was proportional to the magnitude of volcanic forcing.
Abstract: Volcanic eruptions contribute to climate variability, but quantifying these contributions has been limited by inconsistencies in the timing of atmospheric volcanic aerosol loading determined from ice cores and subsequent cooling from climate proxies such as tree rings. Here we resolve these inconsistencies and show that large eruptions in the tropics and high latitudes were primary drivers of interannual-to-decadal temperature variability in the Northern Hemisphere during the past 2,500 years. Our results are based on new records of atmospheric aerosol loading developed from high-resolution, multi-parameter measurements from an array of Greenland and Antarctic ice cores as well as distinctive age markers to constrain chronologies. Overall, cooling was proportional to the magnitude of volcanic forcing and persisted for up to ten years after some of the largest eruptive episodes. Our revised timescale more firmly implicates volcanic eruptions as catalysts in the major sixth-century pandemics, famines, and socioeconomic disruptions in Eurasia and Mesoamerica while allowing multi-millennium quantification of climate response to volcanic forcing.

841 citations


Journal ArticleDOI
TL;DR: This review increases the understanding of tumor treatment with the promising use of nanotechnology by describing selected tumors, including breast, lung, colorectal and pancreatic tumors, and the applications of the relevant nanocarriers in these tumors.
Abstract: Nanotechnology has recently gained increased attention for its capability to effectively diagnose and treat various tumors. Nanocarriers have been used to circumvent the problems associated with conventional antitumor drug delivery systems, including their nonspecificity, severe side effects, burst release and damage to normal cells. Nanocarriers improve the bioavailability and therapeutic efficiency of antitumor drugs, while providing preferential accumulation at the target site. A number of nanocarriers have been developed; however, only a few of them are clinically approved for the delivery of antitumor drugs for their intended actions at the targeted sites. The present review is divided into three main parts: the first part introduces various nanocarriers and their relevance to the delivery of anticancer drugs; the second part encompasses targeting mechanisms and surface functionalization of nanocarriers; and the third part covers selected tumors, including breast, lung, colorectal and pancreatic tumors, and the applications of the relevant nanocarriers in these tumors. This review increases the understanding of tumor treatment with the promising use of nanotechnology.

841 citations


Posted Content
TL;DR: This paper proposes Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning the inverse mapping from data back into the latent space, and demonstrates that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
Abstract: The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
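The structural idea can be sketched in a few lines: unlike a plain GAN discriminator, a BiGAN discriminator scores joint (x, z) pairs, seeing (data, encoding) as "real" and (generation, latent) as "fake". Below, the generator, encoder, and discriminator are random linear stand-ins rather than trained networks, purely to show the data flow:

```python
import numpy as np

rng = np.random.default_rng(1)
x_dim, z_dim = 8, 2

# Toy linear generator G and encoder E (random weights, not trained parameters).
G = rng.normal(size=(x_dim, z_dim))   # G: z -> x
E = rng.normal(size=(z_dim, x_dim))   # E: x -> z

def discriminator(x, z, W, b):
    """Score a joint (x, z) pair -- the key structural difference from a plain GAN."""
    joint = np.concatenate([x, z])
    return 1.0 / (1.0 + np.exp(-(W @ joint + b)))

W = rng.normal(size=x_dim + z_dim)
b = 0.0

# "Real" pair: a data sample with its encoding.  "Fake" pair: a latent with its generation.
x_real = rng.normal(size=x_dim)
real_pair_score = discriminator(x_real, E @ x_real, W, b)

z_fake = rng.normal(size=z_dim)
fake_pair_score = discriminator(G @ z_fake, z_fake, W, b)

print(real_pair_score, fake_pair_score)
```

At the adversarial optimum, the abstract's claim is that E inverts G, so the encoder can be reused as a feature extractor.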

841 citations


Journal ArticleDOI
TL;DR: This review introduces a new development in theoretical quantum physics, the ``resource-theoretic'' point of view, which aims to be closely linked to experiment and to state exactly what result you can hope to achieve for what expenditure of effort in the laboratory.
Abstract: This review introduces a new development in theoretical quantum physics, the ``resource-theoretic'' point of view. The approach aims to be closely linked to experiment, and to state exactly what result you can hope to achieve for what expenditure of effort in the laboratory. This development is an extension of the principles of thermodynamics to quantum problems; but there are resources that would never have been considered previously in thermodynamics, such as shared knowledge of a frame of reference. Many additional examples and new quantifications of resources are provided.

841 citations


Journal ArticleDOI
TL;DR: There is more than a 50% probability that by 2030, national female life expectancy will break the 90 year barrier, a level that was deemed unattainable by some at the turn of the 21st century.

840 citations


Journal ArticleDOI
TL;DR: In this article, a quadratic transform technique is proposed for solving the multiple-ratio concave-convex FP problem, where the original nonconvex problem is recast as a sequence of convex problems.
Abstract: Fractional programming (FP) refers to a family of optimization problems that involve ratio term(s). This two-part paper explores the use of FP in the design and optimization of communication systems. Part I of this paper focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave–convex FP problem—in contrast to conventional FP techniques that mostly can only deal with the single-ratio or the max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as the fixed-point iteration and the weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
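A toy single-ratio instance may clarify the quadratic transform (this is an illustrative scalar example, not the paper's communication-system formulation): to maximize f(x) = A(x)/B(x), the surrogate g(x, y) = 2y·sqrt(A(x)) − y²·B(x) is alternately maximized over x (a convex subproblem) and over y (closed form y = sqrt(A(x))/B(x)), monotonically increasing f:

```python
import numpy as np

# Maximize f(x) = A(x)/B(x) with A(x) = x (concave) and B(x) = x^2 + 1 (convex), x >= 0.
# The true maximizer is x = 1 with f(1) = 0.5.
xs = np.linspace(1e-6, 5.0, 200001)

def f(x):
    return x / (x**2 + 1.0)

y = 1.0
for _ in range(50):
    # Surrogate for fixed y, maximized here by brute-force grid search
    # (in the paper's setting this step is a convex problem).
    g = 2.0 * y * np.sqrt(xs) - y**2 * (xs**2 + 1.0)
    x = xs[np.argmax(g)]
    y = np.sqrt(x) / (x**2 + 1.0)  # optimal y for fixed x

print(round(x, 3), round(f(x), 3))  # converges to x = 1.0, f = 0.5
```

The same alternation extends to sums of ratios, which is what makes the transform useful for multi-user SINR objectives.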

840 citations


Journal ArticleDOI
TL;DR: The subgroup with the poorest prognosis had significant enrichment of hypermutated tumors and a characteristic elevation in the expression of immune checkpoint molecules, suggesting immune-modulating therapies might also be potentially promising options for these patients.
Abstract: The incidence of biliary tract cancer (BTC), including intrahepatic (ICC) and extrahepatic (ECC) cholangiocarcinoma and gallbladder cancer, has increased globally; however, no effective targeted molecular therapies have been approved at the present time. Here we molecularly characterized 260 BTCs and uncovered spectra of genomic alterations that included new potential therapeutic targets. Gradient spectra of mutational signatures with a higher burden of the APOBEC-associated mutation signature were observed in gallbladder cancer and ECC. Thirty-two significantly altered genes, including ELF3, were identified, and nearly 40% of cases harbored targetable genetic alterations. Gene fusions involving FGFR2 and PRKACA or PRKACB preferentially occurred in ICC and ECC, respectively, and the subtype-associated prevalence of actionable growth factor-mediated signals was noteworthy. The subgroup with the poorest prognosis had significant enrichment of hypermutated tumors and a characteristic elevation in the expression of immune checkpoint molecules. Accordingly, immune-modulating therapies might also be potentially promising options for these patients.

840 citations


Proceedings Article
27 Sep 2018
TL;DR: In this paper, the same standard architecture that learns a texture-based representation on ImageNet is able to learn a shape-based representation instead when trained on "Stylized-ImageNet", a stylized version of ImageNet.
Abstract: Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes. Some recent studies suggest a more important role of image textures. We here put these conflicting hypotheses to a quantitative test by evaluating CNNs and human observers on images with a texture-shape cue conflict. We show that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies. We then demonstrate that the same standard architecture (ResNet-50) that learns a texture-based representation on ImageNet is able to learn a shape-based representation instead when trained on "Stylized-ImageNet", a stylized version of ImageNet. This provides a much better fit for human behavioural performance in our well-controlled psychophysical lab setting (nine experiments totalling 48,560 psychophysical trials across 97 observers) and comes with a number of unexpected emergent benefits such as improved object detection performance and previously unseen robustness towards a wide range of image distortions, highlighting advantages of a shape-based representation.
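The cue-conflict evaluation reduces to a simple statistic: among trials where the classifier picked either the shape category or the texture category, what fraction favored shape. The decisions below are made-up illustrative data, not results from the paper:

```python
# Shape bias on texture-shape cue-conflict images: among trials where the model
# chose either the shape or the texture category, the fraction choosing shape.
decisions = ["shape", "texture", "texture", "shape", "other", "texture"]

shape_hits = decisions.count("shape")
texture_hits = decisions.count("texture")
shape_bias = shape_hits / (shape_hits + texture_hits)
print(shape_bias)  # 2 shape vs 3 texture -> 0.4
```

A shape bias near 1.0 matches the human pattern the paper reports; standard ImageNet-trained CNNs land much closer to 0.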

840 citations


Posted Content
TL;DR: A thorough survey to fully understand Few-Shot Learning (FSL), and categorizes FSL methods from three perspectives: data, which uses prior knowledge to augment the supervised experience; model, which used to reduce the size of the hypothesis space; and algorithm, which using prior knowledgeto alter the search for the best hypothesis in the given hypothesis space.
Abstract: Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small. Recently, Few-Shot Learning (FSL) is proposed to tackle this problem. Using prior knowledge, FSL can rapidly generalize to new tasks containing only a few samples with supervised information. In this paper, we conduct a thorough survey to fully understand FSL. Starting from a formal definition of FSL, we distinguish FSL from several relevant machine learning problems. We then point out that the core issue in FSL is that the empirical risk minimized is unreliable. Based on how prior knowledge can be used to handle this core issue, we categorize FSL methods from three perspectives: (i) data, which uses prior knowledge to augment the supervised experience; (ii) model, which uses prior knowledge to reduce the size of the hypothesis space; and (iii) algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space. With this taxonomy, we review and discuss the pros and cons of each category. Promising directions, in the aspects of the FSL problem setups, techniques, applications and theories, are also proposed to provide insights for future research.

Journal ArticleDOI
TL;DR: The conceptual guidelines that have emerged for their classification and functional annotation based on expanding and more comprehensive use of large systems biology-based datasets are described.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: A unified approach to combining feature computation and similarity networks for training a patch matching system is presented; it improves accuracy over previous state-of-the-art results on patch matching datasets while reducing the storage requirement for descriptors.
Abstract: Motivated by recent successes on learning feature representations and on learning feature comparison functions, we propose a unified approach to combining both for training a patch matching system. Our system, dubbed Match-Net, consists of a deep convolutional network that extracts features from patches and a network of three fully connected layers that computes a similarity between the extracted features. To ensure experimental repeatability, we train MatchNet on standard datasets and employ an input sampler to augment the training set with synthetic exemplar pairs that reduce overfitting. Once trained, we achieve better computational efficiency during matching by disassembling MatchNet and separately applying the feature computation and similarity networks in two sequential stages. We perform a comprehensive set of experiments on standard datasets to carefully study the contributions of each aspect of MatchNet, with direct comparisons to established methods. Our results confirm that our unified approach improves accuracy over previous state-of-the-art results on patch matching datasets, while reducing the storage requirement for descriptors. We make pre-trained MatchNet publicly available.
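The two-stage deployment described above can be sketched as follows: run the feature tower once per patch, cache the descriptors, then run only the small similarity network per pair. The linear "networks" here are random stand-ins for the trained convolutional tower and fully connected metric layers:

```python
import numpy as np

rng = np.random.default_rng(2)

feat_W = rng.normal(size=(16, 64))   # feature tower stand-in: patch -> descriptor
metric_W = rng.normal(size=32)       # metric net stand-in: concatenated pair -> score

def extract(patch):
    return np.tanh(feat_W @ patch)

def similarity(d1, d2):
    return float(metric_W @ np.concatenate([d1, d2]))

patches = rng.normal(size=(100, 64))
descriptors = [extract(p) for p in patches]   # stage 1: computed once, stored

# stage 2: cheap pairwise scoring against the cached descriptors
scores = [similarity(descriptors[0], d) for d in descriptors[1:]]
best = int(np.argmax(scores)) + 1
print(best)
```

Because stage 1 is amortized, matching N patches against M costs N + M feature passes plus N·M cheap metric evaluations, rather than N·M full network passes.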

Journal ArticleDOI
TL;DR: The concept of resilience has long been used in ecology and psychology, both as a perceived (and typically positive) attribute of an object, entity or system and, more normatively, as a desired feature that should somehow be promoted or fostered; this article examines what the notion means for regional and local economies.
Abstract: Over the past few years a new buzzword has entered academic, political and public discourse: the notion of resilience, a term invoked to describe how an entity or system responds to shocks and disturbances. Although the concept has been used for some time in ecology and psychology, it is now invoked in diverse contexts, both as a perceived (and typically positive) attribute of an object, entity or system and, more normatively, as a desired feature that should somehow be promoted or fostered. As part of this development, the notion of resilience is rapidly becoming part of the conceptual and analytical lexicon of regional and local economic studies: there is increasing interest in the resilience of regional, local and urban economies. Further, resilience is rapidly emerging as an idea ‘whose time has come’ in policy debates: a new imperative of ‘constructing’ or ‘building’ regional and urban economic resilience is gaining currency. However, this rush to use the idea of regional and local economic resilience in policy circles has arguably run somewhat ahead of our understanding of the concept. There is still considerable ambiguity about what, precisely, is meant by the notion of regional economic resilience, about how it should be conceptualized and measured, what its determinants are, and how it links to patterns of long-run regional growth. The aim of this article is to address these and related questions on the meaning and explanation of regional economic resilience and thereby to outline the directions of a research agenda.

Journal ArticleDOI
TL;DR: TAVR with SAPIEN 3 in intermediate-risk patients with severe aortic stenosis is associated with low rates of mortality, stroke, and regurgitation at 1 year after implantation, and the composite outcome indicates significant superiority of TAVR compared with surgery.

Journal ArticleDOI
TL;DR: By s-shell pulsed resonant excitation of a Purcell-enhanced quantum dot-micropillar system, resonance fluorescence single photons are deterministically generated which, at π-pulse excitation, have an extraction efficiency of 66%, single-photon purity of 99.1%, and photon indistinguishability of 98.5%.
Abstract: This work was supported by the National Natural Science Foundation of China, the Chinese Academy of Sciences, and the National Fundamental Research Program. We acknowledge financial support by the State of Bavaria and the German Ministry of Education and Research (BMBF) within the projects Q.com-H and the Chist-era project SSQN. N. G. acknowledges support from the Danish Research Council for Technology and Production.

Proceedings Article
12 Feb 2016
TL;DR: A siamese adaptation of the Long Short-Term Memory network for labeled data comprised of pairs of variable-length sequences is presented; restricting subsequent operations to a simple Manhattan metric compels the sentence representations learned by the model to form a highly structured space whose geometry reflects complex semantic relationships.
Abstract: We present a siamese adaptation of the Long Short-Term Memory (LSTM) network for labeled data comprised of pairs of variable-length sequences. Our model is applied to assess semantic similarity between sentences, where we exceed state of the art, outperforming carefully handcrafted features and recently proposed neural network systems of greater complexity. For these applications, we provide word-embedding vectors supplemented with synonymic information to the LSTMs, which use a fixed size vector to encode the underlying meaning expressed in a sentence (irrespective of the particular wording/syntax). By restricting subsequent operations to rely on a simple Manhattan metric, we compel the sentence representations learned by our model to form a highly structured space whose geometry reflects complex semantic relationships. Our results are the latest in a line of findings that showcase LSTMs as powerful language models capable of tasks requiring intricate understanding.
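The Manhattan metric on top of the two sentence encodings is simply g(h1, h2) = exp(−‖h1 − h2‖₁), which lies in (0, 1]. The vectors below are stand-ins for the final LSTM hidden states of the two sentences:

```python
import numpy as np

def manhattan_similarity(h1, h2):
    """Similarity of two sentence encodings: exp of the negative L1 distance."""
    return float(np.exp(-np.sum(np.abs(h1 - h2))))

h1 = np.array([0.2, -0.5, 1.0])
h2 = np.array([0.1, -0.4, 0.8])
print(manhattan_similarity(h1, h1))  # identical encodings -> 1.0
print(round(manhattan_similarity(h1, h2), 4))  # exp(-0.4) ~ 0.6703
```

Because the metric is so simple, all the modeling burden falls on the shared LSTM encoder, which is what forces the learned space to be semantically structured.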

Journal ArticleDOI
TL;DR: In this article, the authors describe how regularization techniques can be used to efficiently estimate a parsimonious and interpretable network structure in psychological data, and demonstrate the method in an empirical example on post-traumatic stress disorder data.
Abstract: Recent years have seen an emergence of network modeling applied to moods, attitudes, and problems in the realm of psychology. In this framework, psychological variables are understood to directly affect each other rather than being caused by an unobserved latent entity. In this tutorial, we introduce the reader to estimating the most popular network model for psychological data: the partial correlation network. We describe how regularization techniques can be used to efficiently estimate a parsimonious and interpretable network structure in psychological data. We show how to perform these analyses in R and demonstrate the method in an empirical example on post-traumatic stress disorder data. In addition, we discuss the effect of the hyperparameter that needs to be manually set by the researcher, how to handle non-normal data, how to determine the required sample size for a network analysis, and provide a checklist with potential solutions for problems that can arise when estimating regularized partial correlation networks.
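The unregularized core of a partial correlation network is the inverted covariance matrix, rescaled so off-diagonal entries become partial correlations. (The tutorial adds lasso regularization on top of this step and works in R; the sketch below shows only the partial-correlation computation, on simulated data.)

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 2000, 4
latent = rng.normal(size=(n, 1))
data = latent + 0.5 * rng.normal(size=(n, p))  # variables linked via one common factor

# Partial correlations from the precision (inverse covariance) matrix:
# pcor_ij = -precision_ij / sqrt(precision_ii * precision_jj)
precision = np.linalg.inv(np.cov(data, rowvar=False))
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

print(np.round(partial_corr, 2))
```

Each off-diagonal entry is the association between two variables after conditioning on all others; regularization then shrinks small entries to exactly zero, yielding the sparse, interpretable network the tutorial describes.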

Journal ArticleDOI
TL;DR: PCN-222 significantly enhances photocatalytic conversion of CO2 into formate anion compared to the corresponding porphyrin ligand itself, which provides important insights into the design of MOF-based materials for CO2 capture and photoreduction.
Abstract: It is highly desirable to convert CO2 to valuable fuels or chemicals by means of solar energy, which requires CO2 enrichment around photocatalysts from the atmosphere. Here we demonstrate that a porphyrin-involved metal–organic framework (MOF), PCN-222, can selectively capture and further photoreduce CO2 with high efficiency under visible-light irradiation. Mechanistic information gleaned from ultrafast transient absorption spectroscopy (combined with time-resolved photoluminescence spectroscopy) has elucidated the relationship between the photocatalytic activity and the electron–hole separation efficiency. The presence of a deep electron trap state in PCN-222 effectively inhibits the detrimental, radiative electron–hole recombination. As a direct result, PCN-222 significantly enhances photocatalytic conversion of CO2 into formate anion compared to the corresponding porphyrin ligand itself. This work provides important insights into the design of MOF-based materials for CO2 capture and photoreduction.

Journal ArticleDOI
Richard J. Abbott, T. D. Abbott, Sheelu Abraham, and 1,347 more authors
TL;DR: In this article, the authors present 39 candidate gravitational wave events from compact binary coalescences detected by Advanced LIGO and Advanced Virgo in the first half of the third observing run (O3a), between 1 April 2019 15:00 UTC and 1 October 2019 15:00 UTC.
Abstract: We report on gravitational wave discoveries from compact binary coalescences detected by Advanced LIGO and Advanced Virgo in the first half of the third observing run (O3a) between 1 April 2019 15:00 UTC and 1 October 2019 15:00. By imposing a false-alarm-rate threshold of two per year in each of the four search pipelines that constitute our search, we present 39 candidate gravitational wave events. At this threshold, we expect a contamination fraction of less than 10%. Of these, 26 candidate events were reported previously in near real-time through GCN Notices and Circulars; 13 are reported here for the first time. The catalog contains events whose sources are black hole binary mergers up to a redshift of ~0.8, as well as events whose components could not be unambiguously identified as black holes or neutron stars. For the latter group, we are unable to determine the nature based on estimates of the component masses and spins from gravitational wave data alone. The range of candidate events which are unambiguously identified as binary black holes (both objects $\geq 3~M_\odot$) is increased compared to GWTC-1, with total masses from $\sim 14~M_\odot$ for GW190924_021846 to $\sim 150~M_\odot$ for GW190521. For the first time, this catalog includes binary systems with significantly asymmetric mass ratios, which had not been observed in data taken before April 2019. We also find that 11 of the 39 events detected since April 2019 have positive effective inspiral spins under our default prior (at 90% credibility), while none exhibit negative effective inspiral spin. Given the increased sensitivity of Advanced LIGO and Advanced Virgo, the detection of 39 candidate events in ~26 weeks of data (~1.5 per week) is consistent with GWTC-1.
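The effective inspiral spin quoted in the catalog is the mass-weighted sum of the spin components aligned with the orbital angular momentum. The masses and spins below are illustrative values, not parameters of any cataloged event:

```python
def chi_eff(m1, m2, chi1z, chi2z):
    """Effective inspiral spin: chi_eff = (m1*chi1z + m2*chi2z) / (m1 + m2)."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

print(chi_eff(30.0, 10.0, 0.4, -0.2))  # (12 - 2) / 40 = 0.25
```

A positive value, as found for 11 of the 39 events, means the net aligned spin points along the orbital angular momentum.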

Journal ArticleDOI
TL;DR: There was no clear evidence of a change in prevalence for autistic disorder or other ASDs between 1990 and 2010, and worldwide there was little regional variation in the prevalence of ASDs.
Abstract: Background Autism spectrum disorders (ASDs) are persistent disabling neurodevelopmental disorders clinically evident from early childhood. For the first time, the burden of ASDs has been estimated for the Global Burden of Disease Study 2010 (GBD 2010). The aims of this study were to develop global and regional prevalence models and estimate the global burden of disease of ASDs. Method A systematic review was conducted for epidemiological data (prevalence, incidence, remission and mortality risk) of autistic disorder and other ASDs. Data were pooled using a Bayesian meta-regression approach while adjusting for between-study variance to derive prevalence models. Burden was calculated in terms of years lived with disability (YLDs) and disability-adjusted life-years (DALYs), which are reported here by world region for 1990 and 2010. Results In 2010 there were an estimated 52 million cases of ASDs, equating to a prevalence of 7.6 per 1000 or one in 132 persons. After accounting for methodological variations, there was no clear evidence of a change in prevalence for autistic disorder or other ASDs between 1990 and 2010. Worldwide, there was little regional variation in the prevalence of ASDs. Globally, autistic disorders accounted for more than 58 DALYs per 100 000 population and other ASDs accounted for 53 DALYs per 100 000. Conclusions ASDs account for substantial health loss across the lifespan. Understanding the burden of ASDs is essential for effective policy making. An accurate epidemiological description of ASDs is needed to inform public health policy and to plan for education, housing and financial support services.
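The headline prevalence arithmetic is easy to check: 7.6 per 1000 is one in roughly 132 persons, and applied to a 2010 world population of about 6.9 billion (an approximate figure used here for illustration) it yields on the order of 52 million cases:

```python
prevalence = 7.6 / 1000
print(round(1 / prevalence))             # ~132 -> "one in 132 persons"
print(round(prevalence * 6.9e9 / 1e6))   # ~52 (million cases)
```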

Book ChapterDOI
08 Oct 2016
TL;DR: This work takes a natural step from image-level annotation towards stronger supervision: it asks annotators to point to an object if one exists, and incorporates this point supervision along with a novel objectness potential in the training loss function of a CNN model.
Abstract: The semantic image segmentation task presents a trade-off between test time accuracy and training time annotation cost. Detailed per-pixel annotations enable training accurate models but are very time-consuming to obtain; image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of \(12.9\,\%\) mIOU over image-level supervision. Further, we demonstrate that models trained with point-level supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget.
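The training objective can be sketched as cross-entropy applied only at the annotator-clicked pixels plus an objectness term; this is a simplified stand-in for the paper's loss, with tiny made-up tensors, where the objectness prior is used to supervise the background probability:

```python
import numpy as np

rng = np.random.default_rng(4)

probs = rng.dirichlet(np.ones(3), size=(4, 4))  # per-pixel class probabilities (class 0 = background)
points = {(1, 1): 2, (3, 0): 0}                  # clicked pixel -> class label
objectness = rng.uniform(size=(4, 4))            # prior P(pixel belongs to any object)

# Point supervision: cross-entropy only at the clicked pixels.
point_loss = -sum(np.log(probs[r, c, k]) for (r, c), k in points.items())

# Objectness term: push P(background) low where the prior says "object", high elsewhere.
p_bg = probs[:, :, 0]
obj_loss = -np.mean(objectness * np.log(1 - p_bg) + (1 - objectness) * np.log(p_bg))

loss = point_loss + obj_loss
print(float(loss))
```

The point term costs the annotator one click per object, while the objectness term supplies dense (if weak) supervision everywhere else, which is the trade-off the abstract quantifies.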

Posted Content
TL;DR: An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.
Abstract: Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at this https URL.
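The probing setup turns relational facts into cloze statements that any masked language model can score. The templates and the stub scorer below are illustrative; a real probe would rank candidates by a pretrained model's log-probabilities:

```python
templates = {
    "born-in": "[X] was born in [MASK].",
    "capital-of": "The capital of [X] is [MASK].",
}

def make_cloze(subject, relation):
    """Instantiate a relation template with a subject, leaving [MASK] for the object."""
    return templates[relation].replace("[X]", subject)

query = make_cloze("Dante", "born-in")
print(query)  # Dante was born in [MASK].

def toy_score(query, candidate):
    """Stub scorer: a pretrained LM would assign a log-probability here."""
    return -abs(len(candidate) - 7)  # arbitrary deterministic stand-in

candidates = ["Florence", "Paris", "Tokyo"]
best = max(candidates, key=lambda c: toy_score(query, c))
print(best)
```

The appeal, as the abstract notes, is that no schema engineering is needed: any relation expressible as a template becomes a query.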

Proceedings ArticleDOI
02 Apr 2019
TL;DR: The comparison between learning and SLAM approaches from two recent works is revisited, finding evidence that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations; the first cross-dataset generalization experiments are also conducted.
Abstract: We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast -- when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms -- defining tasks (e.g., navigation, instruction following, question answering), configuring, training, and benchmarking embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or 'merely' impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works and find evidence for the opposite conclusion -- that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} x {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.

Journal ArticleDOI
TL;DR: This trial found no evidence that just under an hour of sevoflurane anaesthesia in infancy increases the risk of adverse neurodevelopmental outcomes at two years of age compared with regional anaesthesia (RA).

Journal ArticleDOI
08 Dec 2015-JAMA
TL;DR: A systematic review of studies with information on the prevalence of depression or depressive symptoms among resident physicians published between January 1963 and September 2015 found no statistically significant differences between cross-sectional vs longitudinal studies, studies of only interns vs only upper-level residents, or studies of nonsurgical vs both nonsurgical and surgical residents.
Abstract: Importance Physicians in training are at high risk for depression. However, the estimated prevalence of this disorder varies substantially between studies. Objective To provide a summary estimate of depression or depressive symptom prevalence among resident physicians. Data Sources and Study Selection Systematic search of EMBASE, ERIC, MEDLINE, and PsycINFO for studies with information on the prevalence of depression or depressive symptoms among resident physicians published between January 1963 and September 2015. Studies were eligible for inclusion if they were published in the peer-reviewed literature and used a validated method to assess for depression or depressive symptoms. Data Extraction and Synthesis Information on study characteristics and depression or depressive symptom prevalence was extracted independently by 2 trained investigators. Estimates were pooled using random-effects meta-analysis. Differences by study-level characteristics were estimated using meta-regression. Main Outcomes and Measures Point or period prevalence of depression or depressive symptoms as assessed by structured interview or validated questionnaire. Results Data were extracted from 31 cross-sectional studies (9447 individuals) and 23 longitudinal studies (8113 individuals). Three studies used clinical interviews and 51 used self-report instruments. The overall pooled prevalence of depression or depressive symptoms was 28.8% (4969/17 560 individuals, 95% CI, 25.3%-32.5%), with high between-study heterogeneity (Q = 1247, τ² = 0.39, I² = 95.8%, P < .001). Prevalence estimates ranged from 20.9% (Q = 14.4, τ² = 0.04, I² = 79.2%) to 43.2% for the 2-item PRIME-MD (1349/2891 individuals, 95% CI, 37.6%-49.0%, Q = 45.6, τ² = 0.09, I² = 84.6%). There was an increased prevalence with increasing calendar year (slope = 0.5% increase per year, adjusted for assessment modality; 95% CI, 0.03%-0.9%, P = .04).
In a secondary analysis of 7 longitudinal studies, the median absolute increase in depressive symptoms with the onset of residency training was 15.8% (range, 0.3%-26.3%; relative risk, 4.5). No statistically significant differences were observed between cross-sectional vs longitudinal studies, studies of only interns vs only upper-level residents, or studies of nonsurgical vs both nonsurgical and surgical residents. Conclusions and Relevance In this systematic review, the summary estimate of the prevalence of depression or depressive symptoms among resident physicians was 28.8%, ranging from 20.9% to 43.2% depending on the instrument used, and increased with calendar year. Further research is needed to identify effective strategies for preventing and treating depression among physicians in training.
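The random-effects pooling used in this kind of meta-analysis can be illustrated with a small sketch of the DerSimonian-Laird estimator, which also produces the Q and τ² heterogeneity statistics quoted above. The three prevalences and variances fed in below are hypothetical, not data from this review.

```python
# Illustrative DerSimonian-Laird random-effects meta-analysis: pool per-study
# estimates, weighting each by the inverse of its within-study variance plus
# the estimated between-study variance tau².

def dersimonian_laird(estimates, variances):
    """Return (pooled_estimate, tau2, Q) for a random-effects meta-analysis."""
    k = len(estimates)
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean.
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)  # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    return pooled, tau2, Q

# Hypothetical study prevalences (as proportions) with sampling variances.
pooled, tau2, Q = dersimonian_laird([0.21, 0.29, 0.43], [0.001, 0.002, 0.003])
print(round(pooled, 3))
```

A positive τ², as here and in the review's results, signals real between-study heterogeneity, which is why the paper reports a range of prevalences by instrument rather than the pooled figure alone.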

Journal ArticleDOI
TL;DR: Treatment of Cushing's syndrome is essential to reduce mortality and associated comorbidities; the choice of second-line treatments, including medication, bilateral adrenalectomy, and radiation therapy, must be individualized to each patient.
Abstract: Objective: The objective is to formulate clinical practice guidelines for treating Cushing's syndrome. Participants: Participants include an Endocrine Society-appointed Task Force of experts, a methodologist, and a medical writer. The European Society for Endocrinology co-sponsored the guideline. Evidence: The Task Force used the Grading of Recommendations, Assessment, Development, and Evaluation system to describe the strength of recommendations and the quality of evidence. The Task Force commissioned three systematic reviews and used the best available evidence from other published systematic reviews and individual studies. Consensus Process: The Task Force achieved consensus through one group meeting, several conference calls, and numerous e-mail communications. Committees and members of The Endocrine Society and the European Society of Endocrinology reviewed and commented on preliminary drafts of these guidelines. Conclusions: Treatment of Cushing's syndrome is essential to reduce mortality and associated comorbidities.

Journal ArticleDOI
TL;DR: In this article, an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality is presented. The system uses neural representations to separate and recombine the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images.
Abstract: In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
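The separation of content and style the abstract describes rests on two losses: content is compared directly in a network's activation space, while style is compared through Gram matrices, the correlations between feature channels. A minimal numpy sketch, with random arrays standing in for real CNN feature maps:

```python
# Core quantities of neural style transfer: content loss on raw activations,
# style loss on Gram matrices (channel-by-channel feature correlations).
# Random arrays below are hypothetical stand-ins for CNN feature maps.
import numpy as np

def gram_matrix(features):
    """features: (channels, height*width) activations -> (C, C) Gram matrix."""
    return features @ features.T

def content_loss(f_generated, f_content):
    return 0.5 * np.sum((f_generated - f_content) ** 2)

def style_loss(f_generated, f_style):
    C, N = f_generated.shape
    G, A = gram_matrix(f_generated), gram_matrix(f_style)
    return np.sum((G - A) ** 2) / (4.0 * C ** 2 * N ** 2)

rng = np.random.default_rng(0)
f_content = rng.standard_normal((8, 64))  # stand-in content-image features
f_style = rng.standard_normal((8, 64))    # stand-in style-image features
f_gen = f_content.copy()                  # generated image initialised at content

print(content_loss(f_gen, f_content))     # 0.0: content perfectly preserved
print(style_loss(f_gen, f_style) > 0)     # True: style not yet matched
```

In the full method, the generated image's pixels are optimised to minimise a weighted sum of both losses, trading content fidelity against style match.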

Journal ArticleDOI
TL;DR: This study examines the capacity of the country and its population to continue school education online through distance learning, reviews the available platforms, indicates those adopted with government support (an online portal, TV School, and Microsoft Teams for public schools) along with alternatives usable for online education and live communication (Zoom, Slack, Google Meet, and the EduPage platform), and gives examples of their usage.
Abstract: The situation in general education in Georgia changed in the spring semester of 2020, when the first case of COVID-19 coronavirus infection was detected, rising to 211 local and more than 1.5 million infection cases worldwide by 8 April 2020. Georgia became one of 188 countries worldwide that suspended the education process. The paper studies the capacity of the country and its population to continue school education online through distance learning, reviews the different available platforms, and indicates those adopted with government support, such as an online portal, TV School, and Microsoft Teams for public schools, along with alternatives such as Zoom, Slack, Google Meet, and the EduPage platform that can be used for online education and live communication, giving examples of their usage. The authors conducted a case study in which the Google Meet platform was implemented for online education in a private school with 950 students, and present the usage statistics generated by the system for the first week of the online education process. The results confirm that the quick transition to online education was successful, and the experience gained can be used in the future. This experience and these studies can be useful for other countries that have not yet managed such a transition. The lessons learned from the 2020 pandemic will drive a generation of new laws, regulations, platforms, and solutions for future cases, when countries, governments, and populations will be better prepared than today.

Journal ArticleDOI
03 Jun 2020-BMJ
TL;DR: An unusually high proportion of the affected children and adolescents in this study had gastrointestinal symptoms and Kawasaki disease shock syndrome and were of African ancestry, suggesting that the ongoing outbreak of Kawasaki-like multisystem inflammatory syndrome in the Paris area might be related to SARS-CoV-2.
Abstract: Objectives To describe the characteristics of children and adolescents affected by an outbreak of Kawasaki-like multisystem inflammatory syndrome and to evaluate a potential temporal association with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Design Prospective observational study. Setting General paediatric department of a university hospital in Paris, France. Participants 21 children and adolescents (aged ≤18 years) with features of Kawasaki disease who were admitted to hospital between 27 April and 11 May 2020 and followed up until discharge by 15 May 2020. Main outcome measures The primary outcomes were clinical and biological data, imaging and echocardiographic findings, treatment, and outcomes. Nasopharyngeal swabs were prospectively tested for SARS-CoV-2 using reverse transcription-polymerase chain reaction (RT-PCR) and blood samples were tested for IgG antibodies to the virus. Results 21 children and adolescents (median age 7.9 (range 3.7-16.6) years) were admitted with features of Kawasaki disease over a 15 day period, with 12 (57%) of African ancestry. 12 (57%) presented with Kawasaki disease shock syndrome and 16 (76%) with myocarditis. 17 (81%) required intensive care support. All 21 patients had noticeable gastrointestinal symptoms during the early stage of illness and high levels of inflammatory markers. 19 (90%) had evidence of recent SARS-CoV-2 infection (positive RT-PCR result in 8/21, positive IgG antibody detection in 19/21). All 21 patients received intravenous immunoglobulin and 10 (48%) also received corticosteroids. The clinical outcome was favourable in all patients. Moderate coronary artery dilations were detected in 5 (24%) of the patients during hospital stay. By 15 May 2020, after 8 (5-17) days of hospital stay, all patients were discharged home. 
Conclusions The ongoing outbreak of Kawasaki-like multisystem inflammatory syndrome among children and adolescents in the Paris area might be related to SARS-CoV-2. In this study an unusually high proportion of the affected children and adolescents had gastrointestinal symptoms, Kawasaki disease shock syndrome, and were of African ancestry.