
Journal ArticleDOI
TL;DR: This research highlights the need to more fully understand the evolutionary drivers of infectious disease, particularly infectious disease-related diarrhoea.
Authors: Chelsea M. Rochman,* Cole Brookson, Jacqueline Bikker, Natasha Djuric, Arielle Earn, Kennedy Bucci, Samantha Athey, Aimee Huntington, Hayley McIlwraith, Keenan Munno, Hannah De Frond, Anna Kolomijeca, Lisa Erdle, Jelena Grbic, Malak Bayoumi, Stephanie B. Borrelle, Tina Wu, Samantha Santoro, Larissa M. Werbowski, Xia Zhu, Rachel K. Giles, Bonnie M. Hamilton, Clara Thaysen, Ashima Kaura, Natasha Klasios, Lauren Ead, Joel Kim, Cassandra Sherlock, Annissa Ho, and Charlotte Hung. Affiliations: Department of Ecology and Evolutionary Biology, University of Toronto, St. George Campus, Toronto, Ontario, Canada; Department of Earth Sciences, University of Toronto, St. George Campus, Toronto, Ontario, Canada; David H. Smith Conservation Research Program, Society for Conservation Biology, Washington, DC, USA

579 citations


Journal ArticleDOI
Zheyu Song, Yuanyu Wu, Jiebing Yang, Dingquan Yang, Xuedong Fang
TL;DR: Several common methods used to treat advanced gastric cancer are summarized, and the progress made in the treatment of gastric cancer is discussed in detail.
Abstract: Gastric cancer is one of the most common malignant tumors in the digestive system. Surgery is currently considered to be the only radical treatment. As surgical techniques improve and progress is made in traditional radiotherapy, chemotherapy, and the implementation of neoadjuvant therapy, the 5-year survival rate of early gastric cancer can reach >95%. However, the low rate of early diagnosis means that most patients have advanced-stage disease at diagnosis and so the best surgical window is missed. Therefore, the main treatment for advanced gastric cancer is the combination of neoadjuvant chemoradiotherapy, molecular-targeted therapy, and immunotherapy. In this article, we summarize several common methods used to treat advanced gastric cancer and discuss the progress made in the treatment of gastric cancer in detail. Only clinical practice and clinical research will allow us to prolong the survival time of patients and allow the patients to truly benefit by paying attention to individual patient characteristics.

579 citations


Journal ArticleDOI
TL;DR: Promising results, based on robust analysis of a larger meta-dataset, suggest that appropriate investment in agroecological research to improve organic management systems could greatly reduce or eliminate the yield gap for some crops or regions.
Abstract: Agriculture today places great strains on biodiversity, soils, water and the atmosphere, and these strains will be exacerbated if current trends in population growth, meat and energy consumption, and food waste continue. Thus, farming systems that are both highly productive and minimize environmental harms are critically needed. How organic agriculture may contribute to world food production has been subject to vigorous debate over the past decade. Here, we revisit this topic comparing organic and conventional yields with a new meta-dataset three times larger than previously used (115 studies containing more than 1000 observations) and a new hierarchical analytical framework that can better account for the heterogeneity and structure in the data. We find organic yields are only 19.2% (±3.7%) lower than conventional yields, a smaller yield gap than previous estimates. More importantly, we find entirely different effects of crop types and management practices on the yield gap compared with previous studies. For example, we found no significant differences in yields for leguminous versus non-leguminous crops, perennials versus annuals or developed versus developing countries. Instead, we found the novel result that two agricultural diversification practices, multi-cropping and crop rotations, substantially reduce the yield gap (to 9 ± 4% and 8 ± 5%, respectively) when the methods were applied in only organic systems. These promising results, based on robust analysis of a larger meta-dataset, suggest that appropriate investment in agroecological research to improve organic management systems could greatly reduce or eliminate the yield gap for some crops or regions.
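To make the headline number concrete: meta-analyses of this kind typically work with log response ratios of paired organic/conventional yields, and the gap is the back-transformed mean. A minimal Python sketch with invented study yields (the paper itself fits a hierarchical model to more than 1000 real observations):

import numpy as np

organic = np.array([4.1, 3.8, 5.0])        # t/ha, invented for illustration
conventional = np.array([5.0, 4.9, 5.9])
ln_ratio = np.log(organic / conventional)  # log response ratio per study
gap = 1.0 - np.exp(ln_ratio.mean())        # back-transformed mean yield gap
print(f"yield gap ~ {gap:.1%}")            # the paper reports 19.2% (+/-3.7%) on real data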

579 citations


Book ChapterDOI
08 Sep 2018
TL;DR: A novel approach is proposed that simultaneously solves the problems of counting, density map estimation and localization of people in a given dense crowd image and significantly outperforms state-of-the-art on the new dataset, which is the most challenging dataset with the largest number of crowd annotations in the most diverse set of scenes.
Abstract: With multiple crowd gatherings of millions of people every year in events ranging from pilgrimages to protests, concerts to marathons, and festivals to funerals; visual crowd analysis is emerging as a new frontier in computer vision. In particular, counting in highly dense crowds is a challenging problem with far-reaching applicability in crowd safety and management, as well as gauging political significance of protests and demonstrations. In this paper, we propose a novel approach that simultaneously solves the problems of counting, density map estimation and localization of people in a given dense crowd image. Our formulation is based on an important observation that the three problems are inherently related to each other making the loss function for optimizing a deep CNN decomposable. Since localization requires high-quality images and annotations, we introduce UCF-QNRF dataset that overcomes the shortcomings of previous datasets, and contains 1.25 million humans manually marked with dot annotations. Finally, we present evaluation measures and comparison with recent deep CNNs, including those developed specifically for crowd counting. Our approach significantly outperforms state-of-the-art on the new dataset, which is the most challenging dataset with the largest number of crowd annotations in the most diverse set of scenes.
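The decomposability observation above lends itself to a multi-resolution objective. A minimal sketch of that idea (not the authors' exact formulation; the scales and weights below are illustrative), assuming predicted and ground-truth density maps as 4-D tensors:

import torch
import torch.nn.functional as F

def composite_crowd_loss(pred, gt, scales=(1, 2, 4), weights=(1.0, 1.0, 1.0)):
    """pred, gt: (B, 1, H, W) predicted / ground-truth density maps.
    Coarse scales emphasize the global count; fine scales approximate
    per-person localization."""
    loss = 0.0
    for s, w in zip(scales, weights):
        p = F.avg_pool2d(pred, s) * (s * s)   # sum-pooling preserves mass
        g = F.avg_pool2d(gt, s) * (s * s)
        loss = loss + w * F.mse_loss(p, g)
    # counting term: squared error between total predicted and true counts
    return loss + F.mse_loss(pred.sum(dim=(1, 2, 3)), gt.sum(dim=(1, 2, 3)))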

579 citations


Journal ArticleDOI
TL;DR: A number of trials have shown that therapies correcting dysbiosis, including fecal microbiota transplantation and probiotics, are promising in IBD, but it has not yet been established how dysbiosis contributes to intestinal inflammation.
Abstract: Inflammatory bowel disease (IBD) is a chronic and relapsing inflammatory disorder of the gut. Although the precise cause of IBD remains unknown, the most accepted hypothesis of IBD pathogenesis to date is that an aberrant immune response against the gut microbiota is triggered by environmental factors in a genetically susceptible host. The advancement of next-generation sequencing technology has enabled identification of various alterations of the gut microbiota composition in IBD. While some results related to dysbiosis in IBD are different between studies owing to variations of sample type, method of investigation, patient profiles, and medication, the most consistent observation in IBD is reduced bacterial diversity, a decrease of Firmicutes, and an increase of Proteobacteria. It has not yet been established how dysbiosis contributes to intestinal inflammation. Many of the known IBD susceptibility genes are associated with recognition and processing of bacteria, which is consistent with a role of the gut microbiota in the pathogenesis of IBD. A number of trials have shown that therapies correcting dysbiosis, including fecal microbiota transplantation and probiotics, are promising in IBD.

579 citations


Journal ArticleDOI
TL;DR: A de novo transcriptome, assembled and annotated from RNA-sequencing profiles of a broad spectrum of tissues, is estimated to contain near-complete sequence information for 88% of axolotl genes; the authors find evidence that cirbp plays a cytoprotective role during limb regeneration, whereas manipulation of kazald1 expression disrupts regeneration.

579 citations


Journal ArticleDOI
TL;DR: The recent advances in green synthesis of silver nanoparticles, their application as antimicrobial agents and mechanism of antimicrobial mode of action are discussed.
Abstract: Since the discovery of the first antibiotic drug, penicillin, in 1928, a variety of antibiotic and antimicrobial agents have been developed and used for both human therapy and industrial applications. However, excess and uncontrolled use of antibiotic agents has caused a significant growth in the number of drug-resistant pathogens. Novel therapeutic approaches replacing the inefficient antibiotics are in high demand to overcome increasing microbial multidrug resistance. In recent years, ongoing research has focused on the development of nano-scale objects as efficient antimicrobial therapies. Among the various nanoparticles, silver nanoparticles have gained much attention due to their unique antimicrobial properties. However, concerns about the synthesis of these materials, such as the use of precursor chemicals and toxic solvents and the generation of toxic byproducts, have led to a new alternative approach, green synthesis. This eco-friendly technique incorporates the use of biological agents, plants or microbial agents as reducing and capping agents. Silver nanoparticles synthesized by green chemistry offer a novel and potential alternative to chemically synthesized nanoparticles. In this review, we discuss the recent advances in green synthesis of silver nanoparticles, their application as antimicrobial agents, and their antimicrobial mode of action.

579 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: A novel method is proposed that adapts a pre-trained neural network to novel categories by directly predicting the parameters from the activations; it achieves state-of-the-art classification accuracy on novel categories by a significant margin while keeping comparable performance on the large-scale categories.
Abstract: In this paper, we are interested in the few-shot learning problem. In particular, we focus on a challenging scenario where the number of categories is large and the number of examples per novel category is very limited, e.g. 1, 2, or 3. Motivated by the close relationship between the parameters and the activations in a neural network associated with the same category, we propose a novel method that can adapt a pre-trained neural network to novel categories by directly predicting the parameters from the activations. Zero training is required in adaptation to novel categories, and fast inference is realized by a single forward pass. We evaluate our method by doing few-shot image recognition on the ImageNet dataset, which achieves the state-of-the-art classification accuracy on novel categories by a significant margin while keeping comparable performance on the large-scale categories. We also test our method on the MiniImageNet dataset and it strongly outperforms the previous state-of-the-art methods.
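A simplified sketch of the activations-to-parameters idea, assuming a frozen feature extractor `backbone` that maps images to d-dimensional activations. The paper learns the mapping from activations to classifier parameters; the identity mapping below (mean activation as weight vector, i.e. weight imprinting) is a deliberate simplification for illustration:

import torch

def predict_novel_weights(backbone, support_images):
    """support_images: (k, C, H, W), the 1-3 examples of a novel category."""
    with torch.no_grad():
        acts = backbone(support_images)      # (k, d) activations
    w = acts.mean(dim=0)                     # predicted classifier parameters
    return w / w.norm()                      # unit-norm weight vector

def classify(backbone, image, weight_matrix):
    """weight_matrix: (num_classes, d), rows for base and novel categories."""
    feat = backbone(image.unsqueeze(0))      # (1, d)
    return (feat @ weight_matrix.t()).argmax(dim=1)

No training is needed at adaptation time and inference is a single forward pass, matching the properties claimed in the abstract.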

579 citations


Posted Content
TL;DR: Efficient Neural Architecture Search is a fast and inexpensive approach for automatic model design that establishes a new state-of-the-art among all methods without post-training processing and delivers strong empirical performances using much fewer GPU-hours.
Abstract: We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89%, which is on par with NASNet (Zoph et al., 2018), whose test error is 2.65%.
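A toy sketch of the ENAS control loop under stated assumptions: per-decision categorical logits stand in for the paper's LSTM controller, and `train_shared`/`validate` are stubs for training the shared child weights and measuring the validation reward:

import torch

NUM_DECISIONS, NUM_OPS = 4, 5                 # toy search space
ctrl_logits = torch.zeros(NUM_DECISIONS, NUM_OPS, requires_grad=True)
opt = torch.optim.Adam([ctrl_logits], lr=3.5e-4)
baseline = 0.0

def train_shared(arch):
    pass                                      # stub: update shared child weights

def validate(arch):
    return float((arch == 2).float().mean())  # stub reward favouring op 2

for step in range(200):
    dist = torch.distributions.Categorical(logits=ctrl_logits)
    arch = dist.sample()                      # one op index per decision
    train_shared(arch)                        # child step on the sampled subgraph
    reward = validate(arch)                   # e.g. validation accuracy
    baseline = 0.95 * baseline + 0.05 * reward
    loss = -(reward - baseline) * dist.log_prob(arch).sum()  # REINFORCE
    opt.zero_grad(); loss.backward(); opt.step()

Parameter sharing is what makes the loop cheap: `train_shared` updates one set of child weights reused by every sampled subgraph.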

579 citations


Journal ArticleDOI
Mansi M. Kasliwal, Ehud Nakar, Leo Singer, David L. Kaplan, David O. Cook, A. Van Sistine, R. M. Lau, Christoffer Fremling, Ore Gottlieb, Jacob E. Jencson, Scott M. Adams, U. Feindt, Kenta Hotokezaka, Sourav Ghosh, Daniel A. Perley, Po-Chieh Yu, Tsvi Piran, James R. Allison, G. C. Anupama, Arvind Balasubramanian, Keith W. Bannister, John Bally, Jennifer Barnes, Sudhanshu Barway, Eric C. Bellm, Varun Bhalerao, Deb Sankar Bhattacharya, Nadejda Blagorodnova, Joshua S. Bloom, Patrick Brady, Chris Cannella, Deep Chatterjee, S. B. Cenko, B. E. Cobb, Chris M. Copperwheat, A. Corsi, Kaushik De, Dougal Dobie, S. W. K. Emery, Phil Evans, Ori D. Fox, Dale A. Frail, C. Frohmaier, Ariel Goobar, Gregg Hallinan, Fiona A. Harrison, George Helou, Tanja Hinderer, Anna Y. Q. Ho, Assaf Horesh, Wing-Huen Ip, Ryosuke Itoh, Daniel Kasen, Hyesook Kim, N. P. M. Kuin, Thomas Kupfer, Christene Lynch, K. K. Madsen, Paolo A. Mazzali, Adam A. Miller, Kunal Mooley, Tara Murphy, Chow-Choong Ngeow, David A. Nichols, Samaya Nissanke, Peter Nugent, Eran O. Ofek, H. Qi, Robert M. Quimby, Stephan Rosswog, Florin Rusu, Elaine M. Sadler, Patricia Schmidt, Jesper Sollerman, Iain A. Steele, A. R. Williamson, Y. Xu, Lin Yan, Yoichi Yatsu, C. Zhang, Weijie Zhao
22 Dec 2017-Science
TL;DR: It is demonstrated that merging neutron stars are a long-sought production site forging heavy elements by r-process nucleosynthesis; the weak gamma rays observed are dissimilar to classical short gamma-ray bursts with ultrarelativistic jets.
Abstract: Merging neutron stars offer an excellent laboratory for simultaneously studying strong-field gravity and matter in extreme environments. We establish the physical association of an electromagnetic counterpart (EM170817) with gravitational waves (GW170817) detected from merging neutron stars. By synthesizing a panchromatic data set, we demonstrate that merging neutron stars are a long-sought production site forging heavy elements by r-process nucleosynthesis. The weak gamma rays seen in EM170817 are dissimilar to classical short gamma-ray bursts with ultrarelativistic jets. Instead, we suggest that breakout of a wide-angle, mildly relativistic cocoon engulfing the jet explains the low-luminosity gamma rays, the high-luminosity ultraviolet-optical-infrared, and the delayed radio and x-ray emission. We posit that all neutron star mergers may lead to a wide-angle cocoon breakout, sometimes accompanied by a successful jet and sometimes by a choked jet.

579 citations


Proceedings ArticleDOI
03 Apr 2017
TL;DR: This taxonomy captures major architectural characteristics of blockchains and the impact of their principal design decisions and is intended to help with important architectural considerations about the performance and quality attributes of blockchain-based systems.
Abstract: Blockchain is an emerging technology for decentralised and transactional data sharing across a large network of untrusted participants. It enables new forms of distributed software architectures, where agreement on shared states can be established without trusting a central integration point. A major difficulty for architects designing applications based on blockchain is that the technology has many configurations and variants. Since blockchains are at an early stage, there is little product data or reliable technology evaluation available to compare different blockchains. In this paper, we propose how to classify and compare blockchains and blockchain-based systems to assist with the design and assessment of their impact on software architectures. Our taxonomy captures major architectural characteristics of blockchains and the impact of their principal design decisions. This taxonomy is intended to help with important architectural considerations about the performance and quality attributes of blockchain-based systems.

Journal ArticleDOI
TL;DR: Persistent ER stress and protein misfolding-initiated ROS cascades and their significant roles in the pathogenesis of multiple human disorders, including neurodegenerative diseases, diabetes mellitus, atherosclerosis, inflammation, ischemia, and kidney and liver diseases are reviewed.
Abstract: The endoplasmic reticulum (ER) is a fascinating network of tubules through which secretory and transmembrane proteins enter unfolded and exit as either folded or misfolded proteins, after which they are directed either toward other organelles or to degradation, respectively. The ER redox environment dictates the fate of entering proteins, and the level of redox signaling mediators modulates the level of reactive oxygen species (ROS). Accumulating evidence suggests the interrelation of ER stress and ROS with redox signaling mediators such as protein disulfide isomerase (PDI)-endoplasmic reticulum oxidoreductin (ERO)-1, glutathione (GSH)/glutathione disulphide (GSSG), NADPH oxidase 4 (Nox4), NADPH-P450 reductase (NPR), and calcium. Here, we reviewed persistent ER stress and protein misfolding-initiated ROS cascades and their significant roles in the pathogenesis of multiple human disorders, including neurodegenerative diseases, diabetes mellitus, atherosclerosis, inflammation, ischemia, and kidney and liver diseases.

Journal ArticleDOI
03 Jun 2016-Science
TL;DR: Using measurements with very high time resolution, NASA's Magnetospheric Multiscale (MMS) mission has found direct evidence for electron demagnetization and acceleration at sites along the sunward boundary of Earth's magnetosphere where the interplanetary magnetic field reconnects with the terrestrial magnetic field, as discussed by the authors.
Abstract: Magnetic reconnection is a fundamental physical process in plasmas whereby stored magnetic energy is converted into heat and kinetic energy of charged particles. Reconnection occurs in many astrophysical plasma environments and in laboratory plasmas. Using measurements with very high time resolution, NASA's Magnetospheric Multiscale (MMS) mission has found direct evidence for electron demagnetization and acceleration at sites along the sunward boundary of Earth's magnetosphere where the interplanetary magnetic field reconnects with the terrestrial magnetic field. We have (i) observed the conversion of magnetic energy to particle energy; (ii) measured the electric field and current, which together cause the dissipation of magnetic energy; and (iii) identified the electron population that carries the current as a result of demagnetization and acceleration within the reconnection diffusion/dissipation region.

Posted Content
TL;DR: The results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization, and introduce a stochastic optimization algorithm, with convergence guarantees, to efficiently train group DRO models.
Abstract: Overparameterized neural networks can be highly accurate on average on an i.i.d. test set yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that hold on average but not in such groups). Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups. However, we find that naively applying group DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss also already has vanishing worst-case training loss. Instead, the poor worst-case performance arises from poor generalization on some groups. By coupling group DRO models with increased regularization---a stronger-than-typical L2 penalty or early stopping---we achieve substantially higher worst-group accuracies, with 10-40 percentage point improvements on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization. Finally, we introduce a stochastic optimization algorithm, with convergence guarantees, to efficiently train group DRO models.
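A minimal sketch of the online group DRO update described above: exponentiated-gradient weights over groups, then a q-weighted loss to backpropagate. Per-example losses and integer group ids are assumed; the stronger regularization the paper recommends would be applied on top:

import torch

def group_dro_loss(losses, group_ids, q, eta_q=0.01):
    """losses: (B,) per-example losses; group_ids: (B,) ints in [0, G);
    q: (G,) current distribution over the predefined groups (updated in place)."""
    G = q.numel()
    group_losses = torch.zeros(G)
    for g in range(G):
        mask = group_ids == g
        if mask.any():
            group_losses[g] = losses[mask].mean()
    with torch.no_grad():
        q *= torch.exp(eta_q * group_losses)   # up-weight high-loss groups
        q /= q.sum()
    return (q * group_losses).sum()            # robust loss to backpropagate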

Proceedings ArticleDOI
21 Jul 2017
TL;DR: In this article, the authors explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning, and train separate detectors for different scales.
Abstract: Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of a single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).
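A sketch of the scale-specific strategy under stated assumptions: `detectors` is a hypothetical mapping from scale factor to a model returning boxes and scores, and torchvision's NMS merges detections after they are mapped back to the original frame:

import torch
import torch.nn.functional as F
from torchvision.ops import nms

def detect_all_scales(image, detectors, scales=(0.5, 1.0, 2.0)):
    """image: (C, H, W). Small faces are found on up-sampled copies,
    large faces on down-sampled ones."""
    all_boxes, all_scores = [], []
    for s in scales:
        resized = F.interpolate(image.unsqueeze(0), scale_factor=s,
                                mode="bilinear", align_corners=False)
        boxes, scores = detectors[s](resized.squeeze(0))
        all_boxes.append(boxes / s)            # back to original coordinates
        all_scores.append(scores)
    boxes, scores = torch.cat(all_boxes), torch.cat(all_scores)
    keep = nms(boxes, scores, iou_threshold=0.3)
    return boxes[keep], scores[keep]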

Journal ArticleDOI
TL;DR: The major issues regarding this multi-step process are summarised, focussing in particular on challenges of the extraction of radiomic features from data sets provided by computed tomography, positron emission tomography, and magnetic resonance imaging.
Abstract: Radiomics is an emerging translational field of research aiming to extract mineable high-dimensional data from clinical images. The radiomic process can be divided into distinct steps with definable inputs and outputs, such as image acquisition and reconstruction, image segmentation, features extraction and qualification, analysis, and model building. Each step needs careful evaluation for the construction of robust and reliable models to be transferred into clinical practice for the purposes of prognosis, non-invasive disease tracking, and evaluation of disease response to treatment. After the definition of texture parameters (shape features; first-, second-, and higher-order features), we briefly discuss the origin of the term radiomics and the methods for selecting the parameters useful for a radiomic approach, including cluster analysis, principal component analysis, random forest, neural network, linear/logistic regression, and others. Reproducibility and clinical value of parameters should be firstly tested with internal cross-validation and then validated on independent external cohorts. This article summarises the major issues regarding this multi-step process, focussing in particular on challenges of the extraction of radiomic features from data sets provided by computed tomography, positron emission tomography, and magnetic resonance imaging.
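As a concrete taste of the feature-extraction step, here is a sketch computing a few of the first-order features named above from a segmented region; the `image` and `mask` arrays and the bin count are hypothetical:

import numpy as np
from scipy import stats

def first_order_features(image, mask, bins=64):
    """image: intensity array; mask: same-shape binary segmentation."""
    voxels = image[mask > 0].astype(float)
    hist, _ = np.histogram(voxels, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": voxels.mean(),
        "variance": voxels.var(),
        "skewness": stats.skew(voxels),
        "entropy": float(-(p * np.log2(p)).sum()),
    }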

Journal ArticleDOI
Rafael Yuste
TL;DR: As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.
Abstract: For over a century, the neuron doctrine--which states that the neuron is the structural and functional unit of the nervous system--has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.

Journal ArticleDOI
TL;DR: This trial showed significantly longer overall survival with a CDK4/6 inhibitor plus endocrine therapy than with endocrine Therapy alone among patients with advanced hormone-receptor-positive, HER2-negative breast cancer.
Abstract: Background An earlier analysis of this phase 3 trial showed that the addition of a cyclin-dependent kinase 4 and 6 (CDK4/6) inhibitor to endocrine therapy provided a greater benefit with r...


Book ChapterDOI
08 Sep 2018
TL;DR: An accurate and lightweight deep network for image super-resolution is proposed that implements a cascading mechanism upon a residual network and achieves performance comparable to that of state-of-the-art methods.
Abstract: In recent years, deep learning methods have been successfully applied to single-image super-resolution tasks. Despite their great performances, deep learning methods cannot be easily applied to real-world applications due to the requirement of heavy computation. In this paper, we address this issue by proposing an accurate and lightweight deep network for image super-resolution. In detail, we design an architecture that implements a cascading mechanism upon a residual network. We also present variant models of the proposed cascading residual network to further improve efficiency. Our extensive experiments show that even with much fewer parameters and operations, our models achieve performance comparable to that of state-of-the-art methods.
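A rough sketch of the cascading mechanism: each block's output is concatenated with all earlier features and fused by a 1x1 convolution. Channel counts and block depth are illustrative, not the paper's exact configuration:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(self.body(x) + x)

class CascadingGroup(nn.Module):
    def __init__(self, ch, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(ch) for _ in range(n_blocks))
        # 1x1 convs fuse the growing concatenation of features
        self.fuse = nn.ModuleList(
            nn.Conv2d(ch * (i + 2), ch, 1) for i in range(n_blocks))

    def forward(self, x):
        feats, out = [x], x
        for block, fuse in zip(self.blocks, self.fuse):
            feats.append(block(out))
            out = fuse(torch.cat(feats, dim=1))   # cascading connection
        return out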

Journal ArticleDOI
13 Feb 2018-PeerJ
TL;DR: The citation impact of OA articles is examined, corroborating the so-called open-access citation advantage: accounting for age and discipline, OA articles receive 18% more citations than average, an effect driven primarily by Green and Hybrid OA.
Abstract: Despite growing interest in Open Access (OA) to scholarly literature, there is an unmet need for large-scale, up-to-date, and reproducible studies assessing the prevalence and characteristics of OA. We address this need using oaDOI, an open online service that determines OA status for 67 million articles. We use three samples, each of 100,000 articles, to investigate OA in three populations: (1) all journal articles assigned a Crossref DOI, (2) recent journal articles indexed in Web of Science, and (3) articles viewed by users of Unpaywall, an open-source browser extension that lets users find OA articles using oaDOI. We estimate that at least 28% of the scholarly literature is OA (19M in total) and that this proportion is growing, driven particularly by growth in Gold and Hybrid. The most recent year analyzed (2015) also has the highest percentage of OA (45%). Because of this growth, and the fact that readers disproportionately access newer articles, we find that Unpaywall users encounter OA quite frequently: 47% of articles they view are OA. Notably, the most common mechanism for OA is not Gold, Green, or Hybrid OA, but rather an under-discussed category we dub Bronze: articles made free-to-read on the publisher website, without an explicit Open license. We also examine the citation impact of OA articles, corroborating the so-called open-access citation advantage: accounting for age and discipline, OA articles receive 18% more citations than average, an effect driven primarily by Green and Hybrid OA. We encourage further research using the free oaDOI service, as a way to inform OA policy and practice.
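The oaDOI service continues today as Unpaywall; a sketch of checking a single article's OA status via its public REST API (endpoint and field names as documented by Unpaywall at the time of writing; verify them before relying on this):

import requests

def oa_status(doi, email="you@example.com"):
    url = f"https://api.unpaywall.org/v2/{doi}"
    record = requests.get(url, params={"email": email}, timeout=10).json()
    # oa_status is one of: gold, green, hybrid, bronze, closed
    return record.get("is_oa"), record.get("oa_status")

print(oa_status("10.7717/peerj.4375"))  # the article summarized above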

Book
03 Jul 2020
TL;DR: This survey includes both the historically most relevant literature as well as the current state of the art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding, and end-to-end learning for autonomous driving.
Abstract: Recent years have witnessed enormous progress in AI-related fields such as computer vision, machine learning, and autonomous vehicles. As with any rapidly growing field, it becomes increasingly difficult to stay up-to-date or enter the field as a beginner. While several survey papers on particular sub-problems have appeared, no comprehensive survey on problems, datasets, and methods in computer vision for autonomous vehicles has been published. This monograph attempts to narrow this gap by providing a survey on the state-of-the-art datasets and techniques. Our survey includes both the historically most relevant literature as well as the current state of the art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding, and end-to-end learning for autonomous driving. Towards this goal, we analyze the performance of the state of the art on several challenging benchmarking datasets, including KITTI, MOT, and Cityscapes. In addition, we discuss open problems and current research challenges. To ease accessibility and accommodate missing references, we also provide a website that allows navigating topics as well as methods and provides additional information.

Journal ArticleDOI
TL;DR: Only a minority of participants with MDD received minimally adequate treatment: 1 in 5 people in high-income and 1 in 27 in low-/lower-middle-income countries.
Abstract: Background: Major depressive disorder (MDD) is a leading cause of disability worldwide. Aims: To examine (a) the 12-month prevalence of DSM-IV MDD; (b) the proportion aware that they have a problem needing treatment and who want care; (c) the proportion of the latter receiving treatment; and (d) the proportion of such treatment meeting minimal standards. Method: Representative community household surveys from 21 countries as part of the World Health Organization World Mental Health Surveys. Results: Of 51 547 respondents, 4.6% met 12-month criteria for DSM-IV MDD and of these 56.7% reported needing treatment. Among those who recognised their need for treatment, most (71.1%) made at least one visit to a service provider. Among those who received treatment, only 41.0% received treatment that met minimal standards. This resulted in only 16.5% of all individuals with 12-month MDD receiving minimally adequate treatment. Conclusions: Only a minority of participants with MDD received minimally adequate treatment: 1 in 5 people in high-income and 1 in 27 in low-/lower-middle-income countries. Scaling up care for MDD requires fundamental transformations in community education and outreach, supply of treatment and quality of services.
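The headline 16.5% figure follows directly from chaining the stages reported in the abstract:

# treatment cascade reported above: need recognition x contact x adequacy
recognised_need = 0.567   # of those with 12-month MDD
made_a_visit    = 0.711   # of those who recognised a need
adequate_care   = 0.410   # of those who received treatment
print(recognised_need * made_a_visit * adequate_care)  # ~0.165, i.e. 16.5%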

Journal ArticleDOI
17 Jul 2019
TL;DR: The spatiotemporal multi-graph convolution network (ST-MGCN), a novel deep learning model for ride-hailing demand forecasting, is proposed; it first encodes the non-Euclidean pair-wise correlations among regions into multiple graphs and then explicitly models these correlations using multi-graph convolution.
Abstract: Region-level demand forecasting is an essential task in ride-hailing services. Accurate ride-hailing demand forecasting can guide vehicle dispatching, improve vehicle utilization, reduce the wait-time, and mitigate traffic congestion. This task is challenging due to the complicated spatiotemporal dependencies among regions. Existing approaches mainly focus on modeling the Euclidean correlations among spatially adjacent regions, while we observe that non-Euclidean pair-wise correlations among possibly distant regions are also critical for accurate forecasting. In this paper, we propose the spatiotemporal multi-graph convolution network (ST-MGCN), a novel deep learning model for ride-hailing demand forecasting. We first encode the non-Euclidean pair-wise correlations among regions into multiple graphs and then explicitly model these correlations using multi-graph convolution. To utilize the global contextual information in modeling the temporal correlation, we further propose the contextual gated recurrent neural network, which augments the recurrent neural network with a context-aware gating mechanism to re-weight different historical observations. We evaluate the proposed model on two real-world large-scale ride-hailing demand datasets and observe consistent improvement of more than 10% over state-of-the-art baselines.
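A minimal sketch of one multi-graph convolution layer as described: region features are aggregated over several correlation graphs (e.g. neighborhood, functional similarity, connectivity) and the per-graph results are summed. The shapes and the symmetric normalization are illustrative assumptions:

import numpy as np

def normalize(A):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 of an adjacency matrix."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def multi_graph_conv(X, graphs, weights):
    """X: (N, F) region features; graphs: list of (N, N) adjacencies;
    weights: list of (F, F_out) parameter matrices, one per graph."""
    out = sum(normalize(A) @ X @ W for A, W in zip(graphs, weights))
    return np.maximum(out, 0.0)   # ReLU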

Journal ArticleDOI
TL;DR: In this article, rational inattention was used to model the decision maker's optimal strategy for discrete alternatives with imperfect information about their values, which results in choosing probabilistically in line with a modified multinomial logit model.
Abstract: Individuals must often choose among discrete alternatives with imperfect information about their values. Before choosing, they may have an opportunity to study the options, but doing so is costly. This costly information acquisition creates new choices such as the number of and types of questions to ask. We model these situations using the rational inattention approach to information frictions. We find that the decision maker's optimal strategy results in choosing probabilistically in line with a modified multinomial logit model. The modification arises because the decision maker's prior knowledge and attention allocation strategy affect his evaluation of the alternatives. When the options are a priori homogeneous, the standard logit model emerges.
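In the rational-inattention literature the modified logit takes the following form (notation assumed here: v_i is option i's value, \lambda the unit cost of information, and P_i^0 the prior-dependent unconditional choice probabilities):

P(i \mid v) = \frac{P_i^{0}\, e^{v_i/\lambda}}{\sum_{j} P_j^{0}\, e^{v_j/\lambda}}

When the options are a priori homogeneous, P_i^0 = 1/N for all i, the prior weights cancel, and the expression reduces to the standard multinomial logit, matching the abstract's final claim.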

Proceedings ArticleDOI
02 Mar 2019
TL;DR: The FSAF module robustly improves the baseline RetinaNet by a large margin under various settings, while introducing nearly free inference overhead, and the resulting best model can achieve a state-of-the-art 44.6% mAP, outperforming all existing single-shot detectors on COCO.
Abstract: We motivate and present feature selective anchor-free (FSAF) module, a simple and effective building block for single-shot object detectors. It can be plugged into single-shot detectors with feature pyramid structure. The FSAF module addresses two limitations brought up by the conventional anchor-based detection: 1) heuristic-guided feature selection; 2) overlap-based anchor sampling. The general concept of the FSAF module is online feature selection applied to the training of multi-level anchor-free branches. Specifically, an anchor-free branch is attached to each level of the feature pyramid, allowing box encoding and decoding in the anchor-free manner at an arbitrary level. During training, we dynamically assign each instance to the most suitable feature level. At the time of inference, the FSAF module can work independently or jointly with anchor-based branches. We instantiate this concept with simple implementations of anchor-free branches and online feature selection strategy. Experimental results on the COCO detection track show that our FSAF module performs better than anchor-based counterparts while being faster. When working jointly with anchor-based branches, the FSAF module robustly improves the baseline RetinaNet by a large margin under various settings, while introducing nearly free inference overhead. And the resulting best model can achieve a state-of-the-art 44.6% mAP, outperforming all existing single-shot detectors on COCO.
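A sketch of the online feature selection rule described above: each ground-truth instance is assigned to the pyramid level whose anchor-free branch currently incurs the smallest loss for it. `af_loss` (a focal plus IoU loss in the paper) is a hypothetical callable returning a scalar tensor:

import torch

def select_levels(instance_boxes, pyramid_outputs, af_loss):
    """instance_boxes: list of ground-truth boxes; pyramid_outputs: list of
    per-level anchor-free predictions. Returns the chosen level per box."""
    assignments = []
    for box in instance_boxes:
        losses = torch.stack([af_loss(level, box) for level in pyramid_outputs])
        assignments.append(int(losses.argmin()))   # most suitable level
    return assignments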


Journal ArticleDOI
TL;DR: A large number of patients with or at risk of diabetes and metabolic complications of preexisting diabetes, including diabetic ketoacidosis and h...
Abstract: Diabetes and Covid-19 Diabetes is associated with an increased risk of severe Covid-19. New-onset diabetes and metabolic complications of preexisting diabetes, including diabetic ketoacidosis and h...

Journal ArticleDOI
TL;DR: The realization that external environmental factors, such as dietary components and essential micronutrients, as well as the gastrointestinal microbiota, can shift the balance of H. pylori's activity between commensal and pathogen has provided direction to studies aimed at defining the full carcinogenic potential of this organism.

Proceedings Article
12 Feb 2016
TL;DR: This paper proposes a scalable factorization model to incorporate visual signals into predictors of people's opinions, which is applied to a selection of large, real-world datasets and makes use of visual features extracted from product images using (pre-trained) deep networks.
Abstract: Modern recommender systems model people and items by discovering or 'teasing apart' the underlying dimensions that encode the properties of items and users' preferences toward them. Critically, such dimensions are uncovered based on user feedback, often in implicit form (such as purchase histories, browsing logs, etc.); in addition, some recommender systems make use of side information, such as product attributes, temporal information, or review text. However one important feature that is typically ignored by existing personalized recommendation and ranking methods is the visual appearance of the items being considered. In this paper we propose a scalable factorization model to incorporate visual signals into predictors of people's opinions, which we apply to a selection of large, real-world datasets. We make use of visual features extracted from product images using (pre-trained) deep networks, on top of which we learn an additional layer that uncovers the visual dimensions that best explain the variation in people's feedback. This not only leads to significantly more accurate personalized ranking methods, but also helps to alleviate cold start issues, and qualitatively to analyze the visual dimensions that influence people's opinions.
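A sketch of the scoring function behind this model (published as VBPR), with illustrative dimensions: a conventional latent-factor term plus a visual term in which a learned matrix E projects frozen, pre-extracted CNN image features into a visual preference space:

import numpy as np

rng = np.random.default_rng(0)
K, K_VIS, F = 20, 10, 4096               # latent dims; F = CNN feature size
gamma_u = rng.normal(size=K)             # user latent factors
gamma_i = rng.normal(size=K)             # item latent factors
theta_u = rng.normal(size=K_VIS)         # user visual factors
E = rng.normal(size=(K_VIS, F)) * 0.01   # learned visual projection
f_i = rng.normal(size=F)                 # frozen deep-CNN image feature

def score(gamma_u, gamma_i, theta_u, E, f_i, beta_i=0.0):
    visual = theta_u @ (E @ f_i)         # visual dimensions of the opinion
    return beta_i + gamma_u @ gamma_i + visual

print(score(gamma_u, gamma_i, theta_u, E, f_i))

Training would maximize the BPR objective: for sampled triplets (user u, purchased item i, non-purchased item j), ascend log sigmoid(score_ui - score_uj) by stochastic gradient descent, which both improves personalized ranking and mitigates the cold-start issue the abstract mentions.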