
Journal ArticleDOI
07 Mar 2019-Cell
TL;DR: This review will highlight critical nodal points in VEGF biology, including recent developments in immunotherapy for cancer and multitarget approaches in neovascular eye disease.

1,179 citations


Journal ArticleDOI
TL;DR: No content summary is available for this entry; the indexed abstract consists solely of funding acknowledgments from agencies including ESA, CNES, NASA, DoE, STFC, and the National Science Foundation.
Abstract: ESA; CNES (France); CNRS/INSU-IN2P3-INP (France); ASI (Italy); CNR (Italy); INAF (Italy); NASA (USA); DoE (USA); STFC (UK); UKSA (UK); CSIC (Spain); MINECO (Spain); JA (Spain); RES (Spain); Tekes (Finland); AoF (Finland); CSC (Finland); DLR (Germany); MPG (Germany); CSA (Canada); DTU Space (Denmark); SER/SSO (Switzerland); RCN (Norway); SFI (Ireland); FCT/MCTES (Portugal); ERC (EU); PRACE (EU); Higher Education Funding Council for England; Science and Technology Facilities Council; Alfred P. Sloan Foundation; National Science Foundation; US Department of Energy Office of Science

1,178 citations


Journal ArticleDOI
TL;DR: It is shown that CQ mainly inhibits autophagy by impairing autophagosome fusion with lysosomes rather than by affecting the acidity and/or degradative activity of this organelle.
Abstract: Macroautophagy/autophagy is a conserved transport pathway in which targeted structures are sequestered by phagophores, which mature into autophagosomes, and then delivered into lysosomes for degradation.

1,178 citations


Journal ArticleDOI
TL;DR: This paper establishes that if the number of RF chains is twice the total number of data streams, the hybrid beamforming structure can realize any fully digital beamformer exactly, regardless of the number of antenna elements, and shows that such an architecture can approach the performance of a fully digital scheme with far fewer RF chains.
Abstract: The potential of using millimeter wave (mmWave) frequencies for future wireless cellular communication systems has motivated the study of large-scale antenna arrays for achieving highly directional beamforming. However, the conventional fully digital beamforming approach, which requires one radio frequency (RF) chain per antenna element, is not viable for large-scale antenna arrays due to the high cost and high power consumption of RF chain components at high frequencies. To address this hardware limitation, this paper considers a hybrid beamforming architecture in which the overall beamformer consists of a low-dimensional digital beamformer followed by an RF beamformer implemented using analog phase shifters. Our aim is to show that such an architecture can approach the performance of a fully digital scheme with far fewer RF chains. Specifically, this paper establishes that if the number of RF chains is twice the total number of data streams, the hybrid beamforming structure can realize any fully digital beamformer exactly, regardless of the number of antenna elements. For cases with fewer RF chains, this paper further considers the hybrid beamforming design problem for both the transmission scenario of a point-to-point multiple-input multiple-output (MIMO) system and a downlink multi-user multiple-input single-output (MU-MISO) system. For each scenario, we propose a heuristic hybrid beamforming design that achieves performance close to that of the fully digital beamforming baseline. Finally, the proposed algorithms are modified for the more practical setting in which only finite-resolution phase shifters are available. Numerical simulations show that the proposed schemes are effective even when phase shifters with very low resolution are used.
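The exact-realization claim rests on a simple identity: any complex coefficient of magnitude at most 2 is the sum of two unit-modulus phasors, so doubling the RF chains lets a pair of analog phase shifters synthesize each digital beamformer entry. A minimal Python sketch of that identity (an illustration of the underlying principle, not the authors' code):

```python
import numpy as np

def two_phase_shifters(c):
    """Write a complex gain c (scaled so |c| <= 2) as the sum of two
    unit-modulus phasors, i.e. two analog phase-shifter outputs."""
    r, theta = abs(c), np.angle(c)
    assert r <= 2.0, "scale the digital beamformer so every entry has |c| <= 2"
    alpha = np.arccos(r / 2.0)  # e^{i(theta+alpha)} + e^{i(theta-alpha)} = 2cos(alpha)e^{i theta}
    return np.exp(1j * (theta + alpha)), np.exp(1j * (theta - alpha))

p1, p2 = two_phase_shifters(0.7 - 1.1j)
assert np.isclose(p1 + p2, 0.7 - 1.1j)  # exact, independent of array size
```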

1,178 citations


Proceedings Article
27 Nov 2016
TL;DR: A deep-learning-based approach to collectively forecast the inflow and outflow of crowds in each and every region of a city, using the residual neural network framework to model the temporal closeness, period, and trend properties of crowd traffic.
Abstract: Forecasting the flow of crowds is of great importance to traffic management and public safety, and very challenging as it is affected by many complex factors, such as inter-region traffic, events, and weather. We propose a deep-learning-based approach, called ST-ResNet, to collectively forecast the inflow and outflow of crowds in each and every region of a city. We design an end-to-end structure of ST-ResNet based on unique properties of spatio-temporal data. More specifically, we employ the residual neural network framework to model the temporal closeness, period, and trend properties of crowd traffic. For each property, we design a branch of residual convolutional units, each of which models the spatial properties of crowd traffic. ST-ResNet learns to dynamically aggregate the output of the three residual neural networks based on data, assigning different weights to different branches and regions. The aggregation is further combined with external factors, such as weather and day of the week, to predict the final traffic of crowds in each and every region. Experiments on two types of crowd flows in Beijing and New York City (NYC) demonstrate that the proposed ST-ResNet outperforms six well-known methods.
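The data-driven aggregation of the three branches can be read as an element-wise weighted sum over the city grid; a toy numpy sketch under assumed shapes (all names and sizes here are illustrative, not the authors' code):

```python
import numpy as np

H, W = 32, 32                                            # assumed city-grid size
Xc, Xp, Xq = (np.random.randn(H, W) for _ in range(3))   # closeness/period/trend branch outputs
Wc, Wp, Wq = (np.random.rand(H, W) for _ in range(3))    # learned per-region fusion weights

fused = Wc * Xc + Wp * Xp + Wq * Xq   # weight each branch differently per region
prediction = np.tanh(fused)           # external factors would be folded in before squashing
```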

1,178 citations



Journal ArticleDOI
23 Sep 2015-Neuron
TL;DR: Analysis of de novo CNVs from the full Simons Simplex Collection replicates prior findings of strong association with autism spectrum disorders (ASDs) and confirms risk loci, including six CNV regions.

1,176 citations


Journal ArticleDOI
TL;DR: An important implication from the theory is that analytical skills will become less important, as AI takes over more analytical tasks, giving the “softer” intuitive and empathetic skills even more importance for service employees.
Abstract: Artificial intelligence (AI) is increasingly reshaping service by performing various tasks, constituting a major source of innovation, yet threatening human jobs. We develop a theory of AI job replacement.

1,176 citations


Journal ArticleDOI
TL;DR: Nayak et al. as mentioned in this paper revealed the existence of surface states of LaBi through the observation of three Dirac cones: two coexist at the corners and one appears at the centre of the Brillouin zone, by employing angle-resolved photoemission spectroscopy in conjunction with ab initio calculations.
Abstract: The rare-earth monopnictide LaBi exhibits exotic magneto-transport properties, including an extremely large and anisotropic magnetoresistance. Experimental evidence for topological surface states is still missing, although band inversions have been postulated to induce a topological phase in LaBi. In this work, we have revealed the existence of surface states of LaBi through the observation of three Dirac cones: two coexist at the corners and one appears at the centre of the Brillouin zone, by employing angle-resolved photoemission spectroscopy in conjunction with ab initio calculations. The odd number of surface Dirac cones is a direct consequence of the odd number of band inversions in the bulk band structure, thereby proving that LaBi is a topological, compensated semimetal, which is equivalent to a time-reversal invariant topological insulator. Our findings provide insight into the topological surface states of LaBi, its semi-metallicity, and related magneto-transport properties. The large magnetoresistance suggests an exotic topological phase in LaBi, but direct evidence has been missing. Here, Nayak et al. report the existence of surface states of LaBi through the observation of three Dirac cones, confirming it as a topological semimetal.

1,176 citations


Posted Content
TL;DR: In this paper, a multi-scale architecture, an adversarial training method, and an image gradient difference loss function are proposed to predict future frames from a video sequence, countering the blurry predictions produced by the standard mean squared error loss.
Abstract: Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a well-studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from knowledge of the next frames of videos, which does not require the complexity of tracking every pixel trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard mean squared error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset.
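Of the three strategies, the image gradient difference loss is the most self-contained: it compares the gradient magnitudes of the true and predicted frames so that edges stay sharp. A small sketch consistent with that description (a hypothetical helper, not the authors' code):

```python
import numpy as np

def gradient_difference_loss(y_true, y_pred, alpha=1.0):
    """Penalize mismatch between neighboring-pixel differences of the
    true and predicted frames; MSE alone tends to blur these edges."""
    gy_t, gx_t = np.abs(np.diff(y_true, axis=0)), np.abs(np.diff(y_true, axis=1))
    gy_p, gx_p = np.abs(np.diff(y_pred, axis=0)), np.abs(np.diff(y_pred, axis=1))
    return (np.abs(gy_t - gy_p) ** alpha).sum() + (np.abs(gx_t - gx_p) ** alpha).sum()
```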

1,175 citations


Posted ContentDOI
14 Mar 2019-bioRxiv
TL;DR: It is proposed that the Pearson residuals from "regularized negative binomial regression", where cellular sequencing depth is utilized as a covariate in a generalized linear model, successfully remove the influence of technical characteristics from downstream analyses while preserving biological heterogeneity.
Abstract: Single-cell RNA-seq (scRNA-seq) data exhibits significant cell-to-cell variation due to technical factors, including the number of molecules detected in each cell, which can confound biological heterogeneity with technical effects. To address this, we present a modeling framework for the normalization and variance stabilization of molecular count data from scRNA-seq experiments. We propose that the Pearson residuals from "regularized negative binomial regression", where cellular sequencing depth is utilized as a covariate in a generalized linear model, successfully remove the influence of technical characteristics from downstream analyses while preserving biological heterogeneity. Importantly, we show that an unconstrained negative binomial model may overfit scRNA-seq data, and overcome this by pooling information across genes with similar abundances to obtain stable parameter estimates. Our procedure omits the need for heuristic steps including pseudocount addition or log-transformation, and improves common downstream analytical tasks such as variable gene selection, dimensional reduction, and differential expression. Our approach can be applied to any UMI-based scRNA-seq dataset and is freely available as part of the R package sctransform, with a direct interface to our single-cell toolkit Seurat.
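The residual computation itself is one line once the regularized GLM has produced per-gene estimates; a sketch assuming mu and theta are already fitted (not the sctransform source):

```python
import numpy as np

def nb_pearson_residuals(counts, mu, theta):
    """Pearson residuals under a negative binomial model:
    variance = mu + mu**2 / theta (theta = inverse dispersion).
    In sctransform, mu comes from a regularized GLM with sequencing
    depth as a covariate; here both parameters are assumed given."""
    return (counts - mu) / np.sqrt(mu + mu ** 2 / theta)
```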

Posted Content
TL;DR: MetaQNN as discussed by the authors is a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task, where the learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay.
Abstract: At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.
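The agent's core loop is plain tabular Q-learning with ε-greedy exploration; a stripped-down sketch (toy action space and hyperparameters, and without the experience replay the paper also uses):

```python
import random

Q = {}                              # (state, action) -> estimated value
alpha, gamma, eps = 0.1, 1.0, 0.2   # illustrative hyperparameters
actions = ["conv3x3", "pool2x2", "fc", "terminate"]

def choose(state):
    if random.random() < eps:       # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit

def update(state, action, reward, next_state):
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```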

Proceedings ArticleDOI
07 Dec 2015
TL;DR: This work proposes a new CNN architecture to exploit unlabeled and sparsely labeled target domain data; it simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks.
Abstract: Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.
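One way to read the soft label distribution matching loss is as a cross-entropy between a target example's temperature-softened prediction and the average per-class "soft label" computed on source data; a hedged sketch of that reading (not the authors' exact formulation):

```python
import numpy as np

def soft_label_loss(target_logits, class_soft_label, T=2.0):
    """Cross-entropy of a temperature-softened target prediction against
    the average source soft label for the example's class."""
    z = target_logits / T                   # temperature softening
    p = np.exp(z - z.max()); p /= p.sum()   # numerically stable softmax
    return -np.sum(class_soft_label * np.log(p + 1e-12))
```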

Journal ArticleDOI
TL;DR: This review summarizes the fundamental processes and mechanisms of “green” synthesis approaches, especially for metal and metal oxide nanoparticles using natural extracts, and explores the role of biological components and essential phytochemicals (e.g., flavonoids, alkaloids, terpenoids, amides, and aldehydes) as reducing agents and solvent systems.
Abstract: In materials science, “green” synthesis has gained extensive attention as a reliable, sustainable, and eco-friendly protocol for synthesizing a wide range of materials/nanomaterials including metal/metal oxides nanomaterials, hybrid materials, and bioinspired materials. As such, green synthesis is regarded as an important tool to reduce the destructive effects associated with the traditional methods of synthesis for nanoparticles commonly utilized in laboratory and industry. In this review, we summarized the fundamental processes and mechanisms of “green” synthesis approaches, especially for metal and metal oxide [e.g., gold (Au), silver (Ag), copper oxide (CuO), and zinc oxide (ZnO)] nanoparticles using natural extracts. Importantly, we explored the role of biological components, essential phytochemicals (e.g., flavonoids, alkaloids, terpenoids, amides, and aldehydes) as reducing agents and solvent systems. The stability/toxicity of nanoparticles and the associated surface engineering techniques for achieving biocompatibility are also discussed. Finally, we covered applications of such synthesized products to environmental remediation in terms of antimicrobial activity, catalytic activity, removal of pollutants dyes, and heavy metal ion sensing.

Journal ArticleDOI
01 Apr 2018-Nature
TL;DR: Molten-salt-assisted chemical vapour deposition is used to synthesize a wide variety of two-dimensional transition-metal chalcogenides and elaborate how the salt decreases the melting point of the reactants and facilitates the formation of intermediate products, increasing the overall reaction rate.
Abstract: Investigations of two-dimensional transition-metal chalcogenides (TMCs) have recently revealed interesting physical phenomena, including the quantum spin Hall effect1,2, valley polarization3,4 and two-dimensional superconductivity 5 , suggesting potential applications for functional devices6–10. However, of the numerous compounds available, only a handful, such as Mo- and W-based TMCs, have been synthesized, typically via sulfurization11–15, selenization16,17 and tellurization 18 of metals and metal compounds. Many TMCs are difficult to produce because of the high melting points of their metal and metal oxide precursors. Molten-salt-assisted methods have been used to produce ceramic powders at relatively low temperature 19 and this approach 20 was recently employed to facilitate the growth of monolayer WS2 and WSe2. Here we demonstrate that molten-salt-assisted chemical vapour deposition can be broadly applied for the synthesis of a wide variety of two-dimensional (atomically thin) TMCs. We synthesized 47 compounds, including 32 binary compounds (based on the transition metals Ti, Zr, Hf, V, Nb, Ta, Mo, W, Re, Pt, Pd and Fe), 13 alloys (including 11 ternary, one quaternary and one quinary), and two heterostructured compounds. We elaborate how the salt decreases the melting point of the reactants and facilitates the formation of intermediate products, increasing the overall reaction rate. Most of the synthesized materials in our library are useful, as supported by evidence of superconductivity in our monolayer NbSe2 and MoTe2 samples21,22 and of high mobilities in MoS2 and ReS2. Although the quality of some of the materials still requires development, our work opens up opportunities for studying the properties and potential application of a wide variety of two-dimensional TMCs.

Journal ArticleDOI
TL;DR: Intravitreous aflibercept, bevacizumab, or ranibizumab improved vision in eyes with center-involved diabetic macular edema, but the relative effect depended on baseline visual acuity.
Abstract: Background: The relative efficacy and safety of intravitreous aflibercept, bevacizumab, and ranibizumab in the treatment of diabetic macular edema are unknown. Methods: At 89 clinical sites, we randomly assigned 660 adults (mean age, 61±10 years) with diabetic macular edema involving the macular center to receive intravitreous aflibercept at a dose of 2.0 mg (224 participants), bevacizumab at a dose of 1.25 mg (218 participants), or ranibizumab at a dose of 0.3 mg (218 participants). The study drugs were administered as often as every 4 weeks, according to a protocol-specified algorithm. The primary outcome was the mean change in visual acuity at 1 year. Results: From baseline to 1 year, the mean visual-acuity letter score (range, 0 to 100, with higher scores indicating better visual acuity; a score of 85 is approximately 20/20) improved by 13.3 with aflibercept, by 9.7 with bevacizumab, and by 11.2 with ranibizumab. Although the improvement was greater with aflibercept than with the other two drugs (P<0.001 for aflibercept vs. bevacizumab and P=0.03 for aflibercept vs. ranibizumab), the difference depended on baseline acuity: when initial visual-acuity loss was mild, the mean improvements were similar across groups (P>0.50 for each pairwise comparison), whereas when the initial letter score was less than 69 (approximately 20/50 or worse), the mean improvement was 18.9 with aflibercept, 11.8 with bevacizumab, and 14.2 with ranibizumab (P<0.001 for aflibercept vs. bevacizumab and P=0.003 for aflibercept vs. ranibizumab). Conclusions: Intravitreous aflibercept, bevacizumab, or ranibizumab improved vision in eyes with center-involved diabetic macular edema, but the relative effect depended on baseline visual acuity. When the initial visual-acuity loss was mild, there were no apparent differences, on average, among study groups. At worse levels of initial visual acuity, aflibercept was more effective at improving vision. (Funded by the National Institutes of Health; ClinicalTrials.gov number, NCT01627249.)

Proceedings Article
24 Apr 2017
TL;DR: This paper shows that a completely unsupervised sentence embedding, obtained by computing word embeddings with one of the popular methods on an unlabeled corpus like Wikipedia, representing each sentence by a weighted average of its word vectors, and then modifying them slightly using PCA/SVD, is a formidable baseline for textual similarity tasks.
Abstract: The success of neural network methods for computing word embeddings has motivated methods for generating semantic embeddings of longer pieces of text, such as sentences and paragraphs. Surprisingly, Wieting et al. (ICLR'16) showed that such complicated methods are outperformed, especially in out-of-domain (transfer learning) settings, by simpler methods involving mild retraining of word embeddings and basic linear regression. The method of Wieting et al. requires retraining with a substantial labeled dataset such as the Paraphrase Database (Ganitkevitch et al., 2013). The current paper goes further, showing that the following completely unsupervised sentence embedding is a formidable baseline: use word embeddings computed using one of the popular methods on an unlabeled corpus like Wikipedia, represent the sentence by a weighted average of the word vectors, and then modify them a bit using PCA/SVD. This weighting improves performance by about 10% to 30% in textual similarity tasks, and beats sophisticated supervised methods including RNNs and LSTMs. It even improves Wieting et al.'s embeddings. This simple method should be used as the baseline to beat in the future, especially when labeled training data is scarce or nonexistent. The paper also gives a theoretical explanation of the success of the above unsupervised method using a latent variable generative model for sentences, which is a simple extension of the model in Arora et al. (TACL'16) with new "smoothing" terms that allow for words occurring out of context, as well as high probabilities for function words such as "and" and "not" in all contexts.
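The whole method fits in a few lines: weight each word vector by a/(a + p(w)) (the paper's smooth inverse-frequency weighting), average, then remove the projection on the first singular vector. A sketch with assumed inputs (vec: word-to-vector map; p_word: unigram probabilities):

```python
import numpy as np

def sif_embeddings(sentences, vec, p_word, a=1e-3):
    """sentences: list of token lists. Returns one embedding row per sentence."""
    X = np.stack([
        np.mean([(a / (a + p_word[w])) * vec[w] for w in s], axis=0)
        for s in sentences])
    u = np.linalg.svd(X, full_matrices=False)[2][0]  # first right singular vector
    return X - np.outer(X @ u, u)                    # remove the common component
```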

Journal ArticleDOI
TL;DR: In this paper, a brief summary of the key issues for the methanol-to-olefins (MTO) reaction is given, including studies on the reaction mechanism, molecular sieve synthesis and crystallization mechanism, catalyst and its manufacturing scale-up, reactor selection and reactor scaleup, process demonstration, and commercialization.
Abstract: The methanol-to-olefins (MTO) reaction is an interesting and important reaction for both fundamental research and industrial application. The Dalian Institute of Chemical Physics (DICP) has developed an MTO technology that led to the successful construction and operation of the world’s first coal-to-olefin plant in 2010. This historical perspective gives a brief summary of the key issues for the process development, including studies on the reaction mechanism, molecular sieve synthesis and crystallization mechanism, catalyst and its manufacturing scale-up, reactor selection and reactor scale-up, process demonstration, and commercialization. Further challenges in the fundamental research and directions for future catalyst improvement are also suggested.

Posted Content
TL;DR: This work presents a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise.
Abstract: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high-fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
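In the standard notation of this literature, the two SDEs are the forward noising process and its reverse-time counterpart (with w̄ a reverse-time Wiener process), so a network trained to approximate the score ∇ₓ log pₜ(x) is all the reverse-time sampler needs:

```latex
dx = f(x,t)\,dt + g(t)\,dw
dx = \left[ f(x,t) - g(t)^2 \, \nabla_x \log p_t(x) \right] dt + g(t)\,d\bar{w}
```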

Journal ArticleDOI
TL;DR: In this article, a review on deactivation and regeneration of heterogeneous catalysts classifies deactivation by type (chemical, thermal, and mechanical) and by mechanism (poisoning, fouling, thermal degradation, vapor formation, vapor-solid and solid-solid reactions, and attrition/crushing).
Abstract: Deactivation of heterogeneous catalysts is a ubiquitous problem that causes loss of catalytic rate with time. This review on deactivation and regeneration of heterogeneous catalysts classifies deactivation by type (chemical, thermal, and mechanical) and by mechanism (poisoning, fouling, thermal degradation, vapor formation, vapor-solid and solid-solid reactions, and attrition/crushing). The key features and considerations for each of these deactivation types are reviewed in detail with reference to the latest literature reports in these areas. Two case studies on the deactivation mechanisms of catalysts used for cobalt Fischer-Tropsch synthesis and selective catalytic reduction are considered to provide additional depth in the topics of sintering, coking, poisoning, and fouling. Regeneration considerations and options are also briefly discussed for each deactivation mechanism.

Posted Content
15 Feb 2018
TL;DR: This work decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets.
Abstract: We note that common implementations of adaptive gradient algorithms, such as Adam, limit the potential benefit of weight decay regularization, because the weights do not decay multiplicatively (as would be expected for standard weight decay) but by an additive constant factor. We propose a simple way to resolve this issue by decoupling weight decay and the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). We also demonstrate that longer optimization runs require smaller weight decay values for optimal results and introduce a normalized variant of weight decay to reduce this dependence. Finally, we propose a version of Adam with warm restarts (AdamWR) that has strong anytime performance while achieving state-of-the-art results on CIFAR-10 and ImageNet32x32. Our source code will become available after the review process.
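The proposed change is local to the parameter update: the decay is applied to the weights directly rather than folded into the adaptive gradient. A sketch of one such step (Adam notation; hyperparameter values illustrative, not the paper's code):

```python
import numpy as np

def adamw_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999,
               eps=1e-8, wd=1e-2):
    """One Adam step with decoupled weight decay: note that wd multiplies
    theta outside the m_hat / sqrt(v_hat) adaptive term."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)   # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * theta)
    return theta, m, v
```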

Journal ArticleDOI
TL;DR: Alteration of the physiological mechanisms supporting SPW‐Rs leads to their pathological conversion, “p‐ripples,” which are a marker of epileptogenic tissue and can be observed in rodent models of schizophrenia and Alzheimer's Disease.
Abstract: Sharp wave ripples (SPW-Rs) represent the most synchronous population pattern in the mammalian brain. Their excitatory output affects a wide area of the cortex and several subcortical nuclei. SPW-Rs occur during "off-line" states of the brain, associated with consummatory behaviors and non-REM sleep, and are influenced by numerous neurotransmitters and neuromodulators. They arise from the excitatory recurrent system of the CA3 region and the SPW-induced excitation brings about a fast network oscillation (ripple) in CA1. The spike content of SPW-Rs is temporally and spatially coordinated by a consortium of interneurons to replay fragments of waking neuronal sequences in a compressed format. SPW-Rs assist in transferring this compressed hippocampal representation to distributed circuits to support memory consolidation; selective disruption of SPW-Rs interferes with memory. Recently acquired and pre-existing information are combined during SPW-R replay to influence decisions, plan actions and, potentially, allow for creative thoughts. In addition to the widely studied contribution to memory, SPW-Rs may also affect endocrine function via activation of hypothalamic circuits. Alteration of the physiological mechanisms supporting SPW-Rs leads to their pathological conversion, "p-ripples," which are a marker of epileptogenic tissue and can be observed in rodent models of schizophrenia and Alzheimer's Disease. Mechanisms for SPW-R genesis and function are discussed in this review.

Journal ArticleDOI
TL;DR: This article summarizes the ATTD consensus recommendations and represents the current understanding of how CGM results can affect outcomes.
Abstract: Measurement of glycated hemoglobin (HbA1c) has been the traditional method for assessing glycemic control. However, it does not reflect intra- and interday glycemic excursions that may lead to acute events (such as hypoglycemia) or postprandial hyperglycemia, which have been linked to both microvascular and macrovascular complications. Continuous glucose monitoring (CGM), either from real-time use (rtCGM) or intermittently viewed (iCGM), addresses many of the limitations inherent in HbA1c testing and self-monitoring of blood glucose. Although both provide the means to move beyond the HbA1c measurement as the sole marker of glycemic control, standardized metrics for analyzing CGM data are lacking. Moreover, clear criteria for matching people with diabetes to the most appropriate glucose monitoring methodologies, as well as standardized advice about how best to use the new information they provide, have yet to be established. In February 2017, the Advanced Technologies & Treatments for Diabetes (ATTD) Congress convened an international panel of physicians, researchers, and individuals with diabetes who are expert in CGM technologies to address these issues. This article summarizes the ATTD consensus recommendations and represents the current understanding of how CGM results can affect outcomes.

Proceedings ArticleDOI
Jifeng Dai, Kaiming He, Jian Sun
27 Jun 2016
TL;DR: This paper presents Multitask Network Cascades for instance-aware semantic segmentation, which consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects, and develops an algorithm for the nontrivial end-to-end training of this causal, cascaded structure.
Abstract: Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360 ms to test an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by-product, our method also achieves compelling object detection results which surpass the competitive Fast/Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won 1st place.

Journal ArticleDOI
07 Apr 2017-Science
TL;DR: In this paper, the authors combine Hi-C data with existing draft assemblies to generate chromosome-length scaffolds, creating genome assemblies of the mosquito disease vectors Aedes aegypti and Culex quinquefasciatus.
Abstract: The Zika outbreak, spread by the Aedes aegypti mosquito, highlights the need to create high-quality assemblies of large genomes in a rapid and cost-effective way. Here we combine Hi-C data with existing draft assemblies to generate chromosome-length scaffolds. We validate this method by assembling a human genome, de novo, from short reads alone (67× coverage). We then combine our method with draft sequences to create genome assemblies of the mosquito disease vectors Ae. aegypti and Culex quinquefasciatus, each consisting of three scaffolds corresponding to the three chromosomes in each species. These assemblies indicate that almost all genomic rearrangements among these species occur within, rather than between, chromosome arms. The genome assembly procedure we describe is fast, inexpensive, and accurate, and can be applied to many species.

Journal ArticleDOI
TL;DR: Theoretically, many-body localized (MBL) systems exhibit a new kind of robust integrability: an extensive set of quasilocal integrals of motion emerges, which provides an intuitive explanation of the breakdown of thermalization as mentioned in this paper.
Abstract: Thermalizing quantum systems are conventionally described by statistical mechanics at equilibrium. However, not all systems fall into this category, with many-body localization providing a generic mechanism for thermalization to fail in strongly disordered systems. Many-body localized (MBL) systems remain perfect insulators at nonzero temperature, which do not thermalize and therefore cannot be described using statistical mechanics. This Colloquium reviews recent theoretical and experimental advances in studies of MBL systems, focusing on the new perspective provided by entanglement and nonequilibrium experimental probes such as quantum quenches. Theoretically, MBL systems exhibit a new kind of robust integrability: an extensive set of quasilocal integrals of motion emerges, which provides an intuitive explanation of the breakdown of thermalization. A description based on quasilocal integrals of motion is used to predict dynamical properties of MBL systems, such as the spreading of quantum entanglement, the behavior of local observables, and the response to external dissipative processes. Furthermore, MBL systems can exhibit eigenstate transitions and quantum orders forbidden in thermodynamic equilibrium. An outline is given of the current theoretical understanding of the quantum-to-classical transition between many-body localized and ergodic phases and anomalous transport in the vicinity of that transition. Experimentally, synthetic quantum systems, which are well isolated from an external thermal reservoir, provide natural platforms for realizing the MBL phase. Recent experiments with ultracold atoms, trapped ions, superconducting qubits, and quantum materials, in which different signatures of many-body localization have been observed, are reviewed. This Colloquium concludes by listing outstanding challenges and promising future research directions.
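The "extensive set of quasilocal integrals of motion" is often summarized by an effective Hamiltonian diagonal in the l-bit operators τᵢᶻ, a standard form in the MBL literature (stated here for orientation, not quoted from the review):

```latex
H_{\mathrm{MBL}} = \sum_i h_i \tau_i^z
                 + \sum_{i<j} J_{ij}\, \tau_i^z \tau_j^z
                 + \sum_{i<j<k} J_{ijk}\, \tau_i^z \tau_j^z \tau_k^z + \cdots
```

with couplings decaying exponentially in distance; since every τᵢᶻ commutes with H, local memory of the initial state survives indefinitely, which is the intuitive picture behind the breakdown of thermalization.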

Proceedings ArticleDOI
14 Dec 2018
TL;DR: PSMNet as discussed by the authors is a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling, which aggregates context to form a cost volume, and a 3D CNN, which regularizes the cost volume using stacked hourglass networks in conjunction with intermediate supervision.
Abstract: Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks, lacking the means to exploit context information for finding correspondence in ill-posed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and 3D CNN. The spatial pyramid pooling module takes advantage of the capacity of global context information by aggregating context at different scales and locations to form a cost volume. The 3D CNN learns to regularize the cost volume using stacked multiple hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets. Our method ranked first in the KITTI 2012 and 2015 leaderboards before March 18, 2018. The code of PSMNet is available at: https://github.com/JiaRenChang/PSMNet.
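The cost volume at the heart of this pipeline pairs left features with disparity-shifted right features before the 3D CNN sees them; a compact PyTorch-style sketch of a concatenation cost volume (shapes assumed, not the released PSMNet code):

```python
import torch

def build_cost_volume(feat_l, feat_r, max_disp):
    """feat_l, feat_r: (B, C, H, W) feature maps. Returns a
    (B, 2C, max_disp, H, W) volume for 3D-CNN regularization."""
    B, C, H, W = feat_l.shape
    cost = feat_l.new_zeros(B, 2 * C, max_disp, H, W)
    for d in range(max_disp):
        cost[:, :C, d, :, d:] = feat_l[:, :, :, d:]      # left features
        cost[:, C:, d, :, d:] = feat_r[:, :, :, :W - d]  # right features shifted by d
    return cost
```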

Journal ArticleDOI
B. P. Abbott, Richard J. Abbott, T. D. Abbott, Matthew Abernathy, +978 more (112 institutions)
TL;DR: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers as discussed by the authors.
Abstract: The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from binary black hole mergers. In this paper we present full results from a search for binary black hole merger signals with total masses up to 100 M⊙ and detailed implications from our observations of these systems. Our search, based on general-relativistic models of gravitational wave signals from binary black hole systems, unambiguously identified two signals, GW150914 and GW151226, with a significance of greater than 5σ over the observing period. It also identified a third possible signal, LVT151012, with substantially lower significance, which has an 87% probability of being of astrophysical origin. We provide detailed estimates of the parameters of the observed systems. Both GW150914 and GW151226 provide an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime. We do not observe any deviations from general relativity, and place improved empirical bounds on several high-order post-Newtonian coefficients. From our observations we infer stellar-mass binary black hole merger rates lying in the range 9–240 Gpc⁻³ yr⁻¹. These observations are beginning to inform astrophysical predictions of binary black hole formation rates, and indicate that future observing runs of the Advanced detector network will yield many more gravitational wave detections.

Journal ArticleDOI
TL;DR: The implementation of control measures on January 23, 2020 was indispensable in reducing the eventual COVID-19 epidemic size, and the dynamic SEIR model was effective in predicting the epidemic peaks and sizes, a result corroborated by an AI approach trained on the 2003 SARS data.
Abstract: Background: The coronavirus disease 2019 (COVID-19) outbreak originating in Wuhan, Hubei province, China, coincided with chunyun, the period of mass migration for the annual Spring Festival. To contain its spread, China adopted unprecedented nationwide interventions on January 23, 2020. These policies included large-scale quarantine, strict controls on travel and extensive monitoring of suspected cases. However, it is unknown whether these policies have had an impact on the epidemic. We sought to show how these control measures impacted the containment of the epidemic. Methods: We integrated population migration data before and after January 23 and the most up-to-date COVID-19 epidemiological data into the Susceptible-Exposed-Infectious-Removed (SEIR) model to derive the epidemic curve. We also used an artificial intelligence (AI) approach, trained on the 2003 SARS data, to predict the epidemic. Results: We found that the epidemic in China should peak by late February, showing gradual decline by the end of April. A five-day delay in implementation would have increased the epidemic size in mainland China threefold. Lifting the Hubei quarantine would lead to a second epidemic peak in Hubei province in mid-March and extend the epidemic to late April, a result corroborated by the machine learning prediction. Conclusions: Our dynamic SEIR model was effective in predicting the COVID-19 epidemic peaks and sizes. The implementation of control measures on January 23, 2020 was indispensable in reducing the eventual COVID-19 epidemic size.
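The underlying SEIR dynamics are four coupled ODEs; a forward-Euler sketch with illustrative parameters (the paper's fitted, migration-adjusted values are not reproduced here):

```python
def simulate_seir(beta, sigma, gamma, N, E0, I0, days, dt=0.1):
    """Basic SEIR dynamics: S->E at rate beta*S*I/N, E->I at rate sigma*E,
    I->R at rate gamma*I; forward-Euler integration over `days`."""
    S, E, I, R = N - E0 - I0, float(E0), float(I0), 0.0
    traj = []
    for _ in range(int(days / dt)):
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
        traj.append((S, E, I, R))
    return traj

# e.g. simulate_seir(beta=0.5, sigma=1/5.2, gamma=1/7, N=5.8e7, E0=4000, I0=800, days=180)
```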

Proceedings ArticleDOI
06 Nov 2017
TL;DR: The semantic scene completion network (SSCNet) is introduced, an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum.
Abstract: This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG, a manually created large-scale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code are available at http://sscnet.cs.princeton.edu.
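A dilation-based 3D context module can be sketched as a stack of dilated 3D convolutions with a residual connection, widening the receptive field over the voxel grid without pooling (channel counts and depth here are assumptions, not the SSCNet configuration):

```python
import torch.nn as nn

class Dilated3DContext(nn.Module):
    """Stacked dilated 3D convolutions; padding == dilation keeps the
    voxel grid size fixed while the receptive field grows."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x) + x  # residual connection
```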