
Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this article, the scene graph generation problem is formulated as message passing between the primal node graph and its dual edge graph, allowing the joint inference model to take advantage of contextual cues to make better predictions about objects and their relationships.
Abstract: Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such structured scene representation from an input image. Our key insight is that the graph generation problem can be formulated as message passing between the primal node graph and its dual edge graph. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods on the Visual Genome dataset, as well as on support relation inference in the NYU Depth V2 dataset.
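The primal/dual message-passing idea can be sketched in a few lines of NumPy. This is a simplified stand-in, not the authors' model: the single linear message functions, the mean-pooling over edges, and the damped update in place of their GRUs are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 16                                    # object proposals, feature size
node_h = rng.normal(size=(N, D))                # primal graph: one state per object
edge_h = rng.normal(size=(N, N, D))             # dual graph: one state per ordered object pair
W_ne = rng.normal(scale=0.1, size=(2 * D, D))   # node-to-edge message weights (illustrative)
W_en = rng.normal(scale=0.1, size=(D, D))       # edge-to-node message weights (illustrative)

for _ in range(2):                              # a couple of refinement iterations
    # Each edge (i, j) receives a message built from its subject and object node states.
    subj = np.repeat(node_h[:, None, :], N, axis=1)      # subj[i, j] = node_h[i]
    obj = np.repeat(node_h[None, :, :], N, axis=0)       # obj[i, j]  = node_h[j]
    edge_msg = np.tanh(np.concatenate([subj, obj], axis=-1) @ W_ne)
    edge_h = 0.5 * (edge_h + edge_msg)          # damped update, standing in for a GRU

    # Each node aggregates messages from every edge it participates in.
    incoming = edge_h.mean(axis=0)              # edges where the node is the object
    outgoing = edge_h.mean(axis=1)              # edges where the node is the subject
    node_h = 0.5 * (node_h + np.tanh((incoming + outgoing) @ W_en))

# node_h and edge_h would then be decoded into object classes and predicates.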

992 citations


Journal ArticleDOI
TL;DR: In this article, the design principles leading to these properties are identified and discussed, in particular linear and mechanism-based metamaterials (such as origami-based and kirigami-based metamaterials), metamaterials harnessing instabilities and frustration, and topological and nonlinear metamaterials.
Abstract: Mechanical metamaterials exhibit properties and functionalities that cannot be realized in conventional materials. Originally, the field focused on achieving unusual (zero or negative) values for familiar mechanical parameters, such as density, Poisson's ratio or compressibility, but more recently, new classes of metamaterials — including shape-morphing, topological and nonlinear metamaterials — have emerged. These materials exhibit exotic functionalities, such as pattern and shape transformations in response to mechanical forces, unidirectional guiding of motion and waves, and reprogrammable stiffness or dissipation. In this Review, we identify the design principles leading to these properties and discuss, in particular, linear and mechanism-based metamaterials (such as origami-based and kirigami-based metamaterials), metamaterials harnessing instabilities and frustration, and topological metamaterials. We conclude by outlining future challenges for the design, creation and conceptualization of advanced mechanical metamaterials.

992 citations


Journal ArticleDOI
28 Feb 2018-Nature
TL;DR: The detection of a flattened absorption profile in the sky-averaged radio spectrum is reported; the profile is largely consistent with expectations for the 21-centimetre signal induced by early stars, but its best-fitting amplitude is more than a factor of two greater than the largest predictions.
Abstract: The 21-cm absorption profile is detected in the sky-averaged radio spectrum, but is much stronger than predicted, suggesting that the primordial gas might have been cooler than predicted. As the first stars heated hydrogen in the early Universe, the 21-cm hyperfine line—an astronomical standard that represents the spin-flip transition in the ground state of atomic hydrogen—was altered, causing the hydrogen gas to absorb photons from the microwave background. This should produce an observable absorption signal at frequencies of less than 200 megahertz (MHz). Judd Bowman and colleagues report the observation of an absorption profile centred at a frequency of 78 MHz that is about 19 MHz wide and 0.5 kelvin deep. The profile is generally in line with expectations, although it is deeper than predicted. An accompanying paper by Rennan Barkana suggests that baryons were interacting with cold dark-matter particles in the early Universe, cooling the gas more than had been expected. After stars formed in the early Universe, their ultraviolet light is expected, eventually, to have penetrated the primordial hydrogen gas and altered the excitation state of its 21-centimetre hyperfine line. This alteration would cause the gas to absorb photons from the cosmic microwave background, producing a spectral distortion that should be observable today at radio frequencies of less than 200 megahertz1. Here we report the detection of a flattened absorption profile in the sky-averaged radio spectrum, which is centred at a frequency of 78 megahertz and has a best-fitting full-width at half-maximum of 19 megahertz and an amplitude of 0.5 kelvin. The profile is largely consistent with expectations for the 21-centimetre signal induced by early stars; however, the best-fitting amplitude of the profile is more than a factor of two greater than the largest predictions2. This discrepancy suggests that either the primordial gas was much colder than expected or the background radiation temperature was hotter than expected. Astrophysical phenomena (such as radiation from stars and stellar remnants) are unlikely to account for this discrepancy; of the proposed extensions to the standard model of cosmology and particle physics, only cooling of the gas as a result of interactions between dark matter and baryons seems to explain the observed amplitude3. The low-frequency edge of the observed profile indicates that stars existed and had produced a background of Lyman-α photons by 180 million years after the Big Bang. The high-frequency edge indicates that the gas was heated to above the radiation temperature less than 100 million years later.

992 citations


Journal ArticleDOI
TL;DR: This Review focuses on recent findings that suggest that operational taxonomic unit-based analyses should be replaced with new methods that are based on exact sequence variants, methods for integrating metagenomic and metabolomic data, and issues surrounding compositional data analysis.
Abstract: Complex microbial communities shape the dynamics of various environments, ranging from the mammalian gastrointestinal tract to the soil. Advances in DNA sequencing technologies and data analysis have provided drastic improvements in microbiome analyses, for example, in taxonomic resolution, false discovery rate control and other properties, over earlier methods. In this Review, we discuss the best practices for performing a microbiome study, including experimental design, choice of molecular analysis technology, methods for data analysis and the integration of multiple omics data sets. We focus on recent findings that suggest that operational taxonomic unit-based analyses should be replaced with new methods that are based on exact sequence variants, methods for integrating metagenomic and metabolomic data, and issues surrounding compositional data analysis, where advances have been particularly rapid. We note that although some of these approaches are new, it is important to keep sight of the classic issues that arise during experimental design and relate to research reproducibility. We describe how keeping these issues in mind allows researchers to obtain more insight from their microbiome data sets.

992 citations


Journal ArticleDOI
TL;DR: The current state of the art of 5G IoT research, key enabling technologies, and the main research trends and challenges in 5G IoT are reviewed.

992 citations


Posted Content
TL;DR: This paper proposes a new algorithm that combines top-down and bottom-up approaches to natural language description through a model of semantic attention, and significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.
Abstract: Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.
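The semantic-attention step can be illustrated with a small NumPy sketch: attention weights over detected concept proposals are computed from the current RNN hidden state, and the attended mixture is fused back in before predicting the next word. The bilinear scoring, the additive fusion, and all dimensions are illustrative assumptions rather than the paper's exact input/output attention models.

import numpy as np

rng = np.random.default_rng(0)
D, K = 32, 6                              # hidden size, number of detected concepts
h = rng.normal(size=D)                    # current RNN hidden state (top-down signal)
concepts = rng.normal(size=(K, D))        # embeddings of detected word/attribute proposals (bottom-up)
W_att = rng.normal(scale=0.1, size=(D, D))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention weights: how relevant each concept proposal is to the current hidden state.
scores = concepts @ W_att @ h             # (K,)
alpha = softmax(scores)
context = alpha @ concepts                # (D,) weighted mixture of concept embeddings

# Fuse the attended concepts with the hidden state before predicting the next word.
fused = np.tanh(h + context)              # stand-in for the paper's fusion into the RNN
print(np.round(alpha, 3))                 # which concepts are attended to at this step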

991 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis, and provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

991 citations


Journal ArticleDOI
04 Mar 2016-Science
TL;DR: A composite, bifunctional catalyst affording two types of active sites with complementary properties circumvents conventional limitations on the Fischer-Tropsch synthesis of light olefins from syngas, achieving high selectivity while avoiding deactivation through carbon buildup.
Abstract: Although considerable progress has been made in direct synthesis gas (syngas) conversion to light olefins (C2(=)-C4(=)) via Fischer-Tropsch synthesis (FTS), the wide product distribution remains a challenge, with a theoretical limit of only 58% for C2-C4 hydrocarbons. We present a process that reaches C2(=)-C4(=) selectivity as high as 80% and C2-C4 94% at carbon monoxide (CO) conversion of 17%. This is enabled by a bifunctional catalyst affording two types of active sites with complementary properties. The partially reduced oxide surface (ZnCrO(x)) activates CO and H2, and C-C coupling is subsequently manipulated within the confined acidic pores of zeolites. No obvious deactivation is observed within 110 hours. Furthermore, this composite catalyst and the process may allow use of coal- and biomass-derived syngas with a low H2/CO ratio.

991 citations


Journal ArticleDOI
TL;DR: For example, the observed cores of many dark-matter dominated galaxies are both less dense and less cuspy than naively predicted in $\Lambda$CDM, as discussed by the authors, and the number of small galaxies and dwarf satellites in the Local Group is far below the predicted count of low-mass dark matter halos and subhalos within similar volumes.
Abstract: The dark energy plus cold dark matter ($\Lambda$CDM) cosmological model has been a demonstrably successful framework for predicting and explaining the large-scale structure of the Universe and its evolution with time. Yet on length scales smaller than $\sim 1$ Mpc and mass scales smaller than $\sim 10^{11} M_{\odot}$, the theory faces a number of challenges. For example, the observed cores of many dark-matter dominated galaxies are both less dense and less cuspy than naively predicted in $\Lambda$CDM. The number of small galaxies and dwarf satellites in the Local Group is also far below the predicted count of low-mass dark matter halos and subhalos within similar volumes. These issues underlie the most well-documented problems with $\Lambda$CDM: Cusp/Core, Missing Satellites, and Too-Big-to-Fail. The key question is whether a better understanding of baryon physics, dark matter physics, or both will be required to meet these challenges. Other anomalies, including the observed planar and orbital configurations of Local Group satellites and the tight baryonic/dark matter scaling relations obeyed by the galaxy population, have been less thoroughly explored in the context of $\Lambda$CDM theory. Future surveys to discover faint, distant dwarf galaxies and to precisely measure their masses and density structure hold promising avenues for testing possible solutions to the small-scale challenges going forward. Observational programs to constrain or discover and characterize the number of truly dark low-mass halos are among the most important, and achievable, goals in this field over the next decade. These efforts will either further verify the $\Lambda$CDM paradigm or demand a substantial revision in our understanding of the nature of dark matter.

991 citations


Proceedings ArticleDOI
08 Oct 2018
TL;DR: TVM as discussed by the authors is a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends, such as mobile phones, embedded devices, and accelerators.
Abstract: There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms - such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) - requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that are competitive with state-of-the-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs. We also demonstrate TVM's ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies.
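The graph-level operator fusion mentioned above can be illustrated with a toy pass that merges chains of elementwise operators into the preceding complex operator so they compile to a single kernel. This is deliberately not TVM's API, only a hedged sketch of the idea; the Op class and the greedy grouping rule are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Op:
    name: str
    kind: str                     # "complex" (e.g. conv2d) or "elementwise" (e.g. bias_add, relu)
    inputs: List[str] = field(default_factory=list)

def fuse_elementwise(ops):
    """Greedily fuse each elementwise op into the fusion group of its producer."""
    group_of, groups = {}, []
    for op in ops:
        producer = op.inputs[0] if op.inputs else None
        if op.kind == "elementwise" and producer in group_of:
            group_of[producer].append(op.name)
            group_of[op.name] = group_of[producer]
        else:
            g = [op.name]
            group_of[op.name] = g
            groups.append(g)
    return groups

ops = [
    Op("data", "complex"),
    Op("conv2d", "complex", ["data"]),
    Op("bias_add", "elementwise", ["conv2d"]),
    Op("relu", "elementwise", ["bias_add"]),
    Op("softmax", "complex", ["relu"]),
]
print(fuse_elementwise(ops))
# [['data'], ['conv2d', 'bias_add', 'relu'], ['softmax']]
# Each fused group would be lowered to one kernel, avoiding intermediate memory
# round-trips between conv2d, bias_add and relu.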

991 citations


Posted Content
Yin Zhou1, Oncel Tuzel1
TL;DR: VoxelNet is proposed, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network and learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists.
Abstract: Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented/virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird's eye view projection. In this work, we remove the need of manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to a RPN to generate detections. Experiments on the KITTI car detection benchmark show that VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a large margin. Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR.
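The voxel feature encoding step can be sketched as: quantize points into voxels, apply a shared per-point transform, and max-pool within each voxel. The grid size, feature width, and single linear layer below are illustrative assumptions, and the sketch omits the VFE layer's concatenation of the pooled feature back onto each point before a second layer.

import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-10, 10, size=(2000, 3))     # toy LiDAR point cloud (x, y, z)
voxel_size = 2.0
D = 16
W = rng.normal(scale=0.1, size=(3, D))            # shared per-point linear transform (illustrative)

# 1) Assign every point to a voxel by quantizing its coordinates.
voxel_idx = np.floor(points / voxel_size).astype(int)
voxels = {}
for p, v in zip(points, map(tuple, voxel_idx)):
    voxels.setdefault(v, []).append(p)

# 2) Voxel feature encoding: per-point features, then element-wise max over the voxel.
voxel_features = {
    v: np.maximum.reduce(np.tanh(np.stack(pts) @ W))   # one (D,) descriptor per non-empty voxel
    for v, pts in voxels.items()
}
print(len(voxel_features), "non-empty voxels, feature dim", D)
# The dense grid of these voxel features is what the network feeds to its RPN.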

Journal ArticleDOI
TL;DR: The Chicago Face Database is introduced, a free resource consisting of 158 high-resolution, standardized photographs of Black and White males and females between the ages of 18 and 40 years and extensive data about these targets and factors associated with researchers’ judgments of suitability.
Abstract: Researchers studying a range of psychological phenomena (e.g., theory of mind, emotion, stereotyping and prejudice, interpersonal attraction, etc.) sometimes employ photographs of people as stimuli. In this paper, we introduce the Chicago Face Database, a free resource consisting of 158 high-resolution, standardized photographs of Black and White males and females between the ages of 18 and 40 years and extensive data about these targets. In Study 1, we report pre-testing of these faces, which includes both subjective norming data and objective physical measurements of the images included in the database. In Study 2 we surveyed psychology researchers to assess the suitability of these targets for research purposes and explored factors that were associated with researchers' judgments of suitability. Instructions are outlined for those interested in obtaining access to the stimulus set and accompanying ratings and measures.

Journal ArticleDOI
TL;DR: This review provides an introduction into this expanding and complex field of research focusing on the biogenesis, nucleic acid cargo loading, content, release, and uptake of extracellular vesicles.
Abstract: Extracellular vesicles are a heterogeneous group of membrane-limited vesicles loaded with various proteins, lipids, and nucleic acids. Release of extracellular vesicles from its cell of origin occurs either through the outward budding of the plasma membrane or through the inward budding of the endosomal membrane, resulting in the formation of multivesicular bodies, which release vesicles upon fusion with the plasma membrane. The release of vesicles can facilitate intercellular communication by contact with or by internalization of contents, either by fusion with the plasma membrane or by endocytosis into “recipient” cells. Although the interest in extracellular vesicle research is increasing, there are still no real standards in place to separate or classify the different types of vesicles. This review provides an introduction into this expanding and complex field of research focusing on the biogenesis, nucleic acid cargo loading, content, release, and uptake of extracellular vesicles.

Journal ArticleDOI
10 Mar 2016-Nature
TL;DR: The results help to explain the trivial-to-topological transition in finite systems and to quantify the scaling of topological protection with end-mode separation.
Abstract: Majorana zero modes are quasiparticle excitations in condensed matter systems that have been proposed as building blocks of fault-tolerant quantum computers. They are expected to exhibit non-Abelian particle statistics, in contrast to the usual statistics of fermions and bosons, enabling quantum operations to be performed by braiding isolated modes around one another. Quantum braiding operations are topologically protected insofar as these modes are pinned near zero energy, with the departure from zero expected to be exponentially small as the modes become spatially separated. Following theoretical proposals, several experiments have identified signatures of Majorana modes in nanowires with proximity-induced superconductivity and atomic chains, with small amounts of mode splitting potentially explained by hybridization of Majorana modes. Here, we use Coulomb-blockade spectroscopy in an InAs nanowire segment with epitaxial aluminium, which forms a proximity-induced superconducting Coulomb island (a 'Majorana island') that is isolated from normal-metal leads by tunnel barriers, to measure the splitting of near-zero-energy Majorana modes. We observe exponential suppression of energy splitting with increasing wire length. For short devices of a few hundred nanometres, sub-gap state energies oscillate as the magnetic field is varied, as is expected for hybridized Majorana modes. Splitting decreases by a factor of about ten for each half a micrometre of increased wire length. For devices longer than about one micrometre, transport in strong magnetic fields occurs through a zero-energy state that is energetically isolated from a continuum, yielding uniformly spaced Coulomb-blockade conductance peaks, consistent with teleportation via Majorana modes. Our results help to explain the trivial-to-topological transition in finite systems and to quantify the scaling of topological protection with end-mode separation.
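The quoted factor-of-ten suppression per half micrometre fixes a characteristic decay length, assuming the splitting amplitude $A$ falls off exponentially with wire length $L$:

\[
A(L) \propto e^{-L/\xi}, \qquad \frac{A(L)}{A(L + 0.5\,\mu\mathrm{m})} \approx 10 \;\Rightarrow\; \xi = \frac{0.5\,\mu\mathrm{m}}{\ln 10} \approx 0.22\,\mu\mathrm{m}.
\]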

Journal ArticleDOI
TL;DR: These estimates can assist decision makers in understanding the magnitude of adverse health outcomes associated with prescription opioid use, such as overdose, abuse, and dependence, and can help them evaluate the cost-effectiveness of their choices.
Abstract: Importance:It is important to understand the magnitude and distribution of the economic burden of prescription opioid overdose, abuse, and dependence to inform clinical practice, research, and other decision makers. Decision makers choosing approaches to address this epidemic need cost information t

Journal ArticleDOI
TL;DR: This paper makes three contributions to clarify the ethical importance of algorithmic mediation, including a prescriptive map to organise the debate, and assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Abstract: In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

Journal ArticleDOI
TL;DR: A comprehensive review of starch retrogradation, including the definition of the process, the molecular mechanisms of how it occurs, and the measurement methods and factors that influence starch retrogradation, is provided in this paper.
Abstract: Starch retrogradation is a process in which disaggregated amylose and amylopectin chains in a gelatinized starch paste reassociate to form more ordered structures. Starch retrogradation has been the subject of intensive research over the last 50 years, mainly due to its detrimental effect on the sensory and storage qualities of many starchy foods. However, starch retrogradation is desirable for some starchy food products in terms of textural and nutritional properties. To better understand the effect of starch retrogradation on the quality of starchy foods, measurement methods of starch retrogradation and factors that influence starch retrogradation have been studied extensively. This article provides a comprehensive review of starch retrogradation including the definition of the process, molecular mechanisms of how it occurs, and measurement methods and factors that influence starch retrogradation. The review also discusses the effect of retrogradation on the in vitro enzyme digestibility of starch. Spectroscopic methods such as FTIR and Raman are considered to be very promising in characterizing starch retrogradation at a molecular level, although more studies are needed in the future.


Journal ArticleDOI
TL;DR: In the new, fifth release of STITCH, functionality to filter out the proteins and chemicals not associated with a given tissue has been implemented, along with a new network view that gives the user the ability to view binding affinities of chemicals in the interaction network.
Abstract: Interactions between proteins and small molecules are an integral part of biological processes in living organisms. Information on these interactions is dispersed over many databases, texts and prediction methods, which makes it difficult to get a comprehensive overview of the available evidence. To address this, we have developed STITCH ('Search Tool for Interacting Chemicals') that integrates these disparate data sources for 430 000 chemicals into a single, easy-to-use resource. In addition to the increased scope of the database, we have implemented a new network view that gives the user the ability to view binding affinities of chemicals in the interaction network. This enables the user to get a quick overview of the potential effects of the chemical on its interaction partners. For each organism, STITCH provides a global network; however, not all proteins have the same pattern of spatial expression. Therefore, only a certain subset of interactions can occur simultaneously. In the new, fifth release of STITCH, we have implemented functionality to filter out the proteins and chemicals not associated with a given tissue. The STITCH database can be downloaded in full, accessed programmatically via an extensive API, or searched via a redesigned web interface at http://stitch.embl.de.

Proceedings Article
01 Jan 2016
TL;DR: In this article, the authors apply recurrent neural networks (RNNs) to a new domain, namely recommender systems, and propose an RNN-based approach for session-based recommendations.
Abstract: We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.
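A minimal NumPy sketch of the session-based scoring idea: a recurrent state is updated with each clicked item and scored against all item embeddings, trained with a pairwise ranking loss. The simple tanh cell (standing in for the paper's GRU), the embedding sizes, and the BPR-style loss are illustrative assumptions rather than the paper's exact architecture.

import numpy as np

rng = np.random.default_rng(0)
n_items, D = 100, 16
item_emb = rng.normal(scale=0.1, size=(n_items, D))   # item embeddings (illustrative)
W_in = rng.normal(scale=0.1, size=(D, D))
W_h = rng.normal(scale=0.1, size=(D, D))

def step(h, item):
    """Simple recurrent update; the paper uses a GRU, this tanh cell is a stand-in."""
    return np.tanh(item_emb[item] @ W_in + h @ W_h)

session = [3, 17, 42]          # items clicked so far in the current session
h = np.zeros(D)
for it in session:
    h = step(h, it)

scores = item_emb @ h          # score every item as the next click
print("recommended next items:", np.argsort(-scores)[:5])

# Pairwise (BPR-style) ranking loss for one (positive, sampled-negative) pair, the
# kind of ranking objective the paper adapts to make RNN training viable here.
pos, neg = 57, int(rng.integers(n_items))
loss = -np.log(1.0 / (1.0 + np.exp(-(scores[pos] - scores[neg]))))
print("ranking loss:", round(float(loss), 3))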

Journal ArticleDOI
TL;DR: An evidence‐based guideline for the comprehensive management of osteoarthritis (OA) is developed as a collaboration between the American College of Rheumatology and the Arthritis Foundation, updating the 2012 ACR recommendations for the management of hand, hip, and knee OA.
Abstract: Objective To develop an evidence-based guideline for the comprehensive management of osteoarthritis (OA) as a collaboration between the American College of Rheumatology (ACR) and the Arthritis Foundation, updating the 2012 ACR recommendations for the management of hand, hip, and knee OA. Methods We identified clinically relevant population, intervention, comparator, outcomes questions and critical outcomes in OA. A Literature Review Team performed a systematic literature review to summarize evidence supporting the benefits and harms of available educational, behavioral, psychosocial, physical, mind-body, and pharmacologic therapies for OA. Grading of Recommendations Assessment, Development and Evaluation methodology was used to rate the quality of the evidence. A Voting Panel, including rheumatologists, an internist, physical and occupational therapists, and patients, achieved consensus on the recommendations. Results Based on the available evidence, either strong or conditional recommendations were made for or against the approaches evaluated. Strong recommendations were made for exercise, weight loss in patients with knee and/or hip OA who are overweight or obese, self-efficacy and self-management programs, tai chi, cane use, hand orthoses for first carpometacarpal (CMC) joint OA, tibiofemoral bracing for tibiofemoral knee OA, topical nonsteroidal antiinflammatory drugs (NSAIDs) for knee OA, oral NSAIDs, and intraarticular glucocorticoid injections for knee OA. Conditional recommendations were made for balance exercises, yoga, cognitive behavioral therapy, kinesiotaping for first CMC OA, orthoses for hand joints other than the first CMC joint, patellofemoral bracing for patellofemoral knee OA, acupuncture, thermal modalities, radiofrequency ablation for knee OA, topical NSAIDs, intraarticular steroid injections and chondroitin sulfate for hand OA, topical capsaicin for knee OA, acetaminophen, duloxetine, and tramadol. Conclusion This guideline provides direction for clinicians and patients making treatment decisions for the management of OA. Clinicians and patients should engage in shared decision-making that accounts for patients' values, preferences, and comorbidities. These recommendations should not be used to limit or deny access to therapies.

Journal ArticleDOI
TL;DR: This work calculates and maps recent change over 5 years in cumulative impacts to marine ecosystems globally from fishing, climate change, and ocean- and land-based stressors, and affirms the importance of addressing climate change to maintain and improve the condition of marine ecosystems.
Abstract: Human pressures on the ocean are thought to be increasing globally, yet we know little about their patterns of cumulative change, which pressures are most responsible for change, and which places are experiencing the greatest increases. Managers and policymakers require such information to make strategic decisions and monitor progress towards management objectives. Here we calculate and map recent change over 5 years in cumulative impacts to marine ecosystems globally from fishing, climate change, and ocean- and land-based stressors. Nearly 66% of the ocean and 77% of national jurisdictions show increased human impact, driven mostly by climate change pressures. Five percent of the ocean is heavily impacted with increasing pressures, requiring management attention. Ten percent has very low impact with decreasing pressures. Our results provide large-scale guidance about where to prioritize management efforts and affirm the importance of addressing climate change to maintain and improve the condition of marine ecosystems.

Posted Content
TL;DR: The proposed agent significantly outperforms the previous state of the art on Atari, averaging 880% expert human performance, and on a challenging suite of first-person, three-dimensional Labyrinth tasks it achieves a mean speedup in learning of 10× and averages 87% expert human performance.
Abstract: Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks leading to a mean speedup in learning of 10× and averaging 87% expert human performance on Labyrinth.
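A schematic sketch of the objective described above: one shared representation is trained with the main reinforcement-learning loss plus weighted auxiliary pseudo-reward losses. The specific auxiliary heads (pixel control, reward prediction), the loss forms, and the weights below are illustrative placeholders, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def shared_representation(obs, theta):
    """Shared encoder used by the main task and every auxiliary task."""
    return np.tanh(obs @ theta)

def main_rl_loss(z):              # stand-in for the actor-critic loss on extrinsic reward
    return float(np.mean(z ** 2))

def pixel_control_loss(z):        # stand-in for one auxiliary pseudo-reward loss
    return float(np.mean(np.abs(z)))

def reward_prediction_loss(z):    # stand-in for another auxiliary head
    return float(np.mean((z - 1.0) ** 2))

theta = rng.normal(scale=0.1, size=(8, 4))
obs = rng.normal(size=(32, 8))    # a batch of observations

# Total objective: extrinsic-reward loss plus weighted auxiliary losses, all
# backpropagated into the same shared representation.
z = shared_representation(obs, theta)
aux_weights = {"pixel_control": 0.1, "reward_prediction": 1.0}
total = (main_rl_loss(z)
         + aux_weights["pixel_control"] * pixel_control_loss(z)
         + aux_weights["reward_prediction"] * reward_prediction_loss(z))
print(round(total, 4))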

Journal ArticleDOI
TL;DR: It is shown that AF4 can serve as an improved analytical tool for isolating extracellular vesicles and addressing the complexities of heterogeneous nanoparticle subpopulations, and three nanoparticle subsets demonstrated diverse organ biodistribution patterns, suggesting distinct biological functions.
Abstract: The heterogeneity of exosomal populations has hindered our understanding of their biogenesis, molecular composition, biodistribution and functions. By employing asymmetric flow field-flow fractionation (AF4), we identified two exosome subpopulations (large exosome vesicles, Exo-L, 90–120 nm; small exosome vesicles, Exo-S, 60–80 nm) and discovered an abundant population of non-membranous nanoparticles termed ‘exomeres’ (~35 nm). Exomere proteomic profiling revealed an enrichment in metabolic enzymes and hypoxia, microtubule and coagulation proteins as well as specific pathways, such as glycolysis and mTOR signalling. Exo-S and Exo-L contained proteins involved in endosomal function and secretion pathways, and mitotic spindle and IL-2/STAT5 signalling pathways, respectively. Exo-S, Exo-L and exomeres each had unique N-glycosylation, protein, lipid, DNA and RNA profiles and biophysical properties. These three nanoparticle subsets demonstrated diverse organ biodistribution patterns, suggesting distinct biological functions. This study demonstrates that AF4 can serve as an improved analytical tool for isolating extracellular vesicles and addressing the complexities of heterogeneous nanoparticle subpopulations.

Journal ArticleDOI
TL;DR: Idiopathic pulmonary fibrosis appears to be increasing in incidence, and it requires early recognition and intervention with supportive care and pharmacologic agents to forestall its progression.
Abstract: Idiopathic pulmonary fibrosis appears to be increasing in incidence. It requires early recognition and intervention with supportive care and pharmacologic agents to forestall its progression. Lung transplantation may be curative, but the disease may recur in transplanted lungs.

Journal ArticleDOI
06 Jun 2017-PLOS ONE
TL;DR: The pot operon is important for the regulation of protein expression and biofilm formation in both encapsulated and NCC1 nonencapsulated Streptococcus pneumoniae; however, in contrast to encapsulated pneumococcal strains, polyamine acquisition via PotABCD is not required for MNZ67 murine colonization, persistence in the lungs, or full virulence in a model of OM.
Abstract: Streptococcus pneumoniae is commonly found in the human nasopharynx and is the causative agent of multiple diseases. Since invasive pneumococcal infections are associated with encapsulated pneumococci, the capsular polysaccharide is the target of licensed pneumococcal vaccines. However, there is an increasing distribution of non-vaccine serotypes, as well as nonencapsulated S. pneumoniae (NESp). Both encapsulated and nonencapsulated pneumococci possess the polyamine oligo-transport operon (potABCD). Previous research has shown inactivation of the pot operon in encapsulated pneumococci alters protein expression and leads to a significant reduction in pneumococcal murine colonization, but the role of the pot operon in NESp is unknown. Here, we demonstrate deletion of potD from the NESp NCC1 strain MNZ67 does impact expression of the key proteins pneumolysin and PspK, but it does not inhibit murine colonization. Additionally, we show the absence of potD significantly increases biofilm production, both in vitro and in vivo. In a chinchilla model of otitis media (OM), the absence of potD does not significantly affect MNZ67 virulence, but it does significantly reduce the pathogenesis of the virulent encapsulated strain TIGR4 (serotype 4). Deletion of potD also significantly reduced persistence of TIGR4 in the lungs but increased persistence of PIP01 in the lungs. We conclude the pot operon is important for the regulation of protein expression and biofilm formation in both encapsulated and NCC1 nonencapsulated Streptococcus pneumoniae. However, in contrast to encapsulated pneumococcal strains, polyamine acquisition via the pot operon is not required for MNZ67 murine colonization, persistence in the lungs, or full virulence in a model of OM. Therefore, NESp virulence regulation needs to be further established to identify potential NESp therapeutic targets.

Journal ArticleDOI
TL;DR: With prolonged follow-up, first-line pembrolizumab monotherapy continues to demonstrate an OS benefit over chemotherapy in patients with previously untreated, advanced NSCLC without EGFR/ALK aberrations, despite crossover from the control arm to pembrolizumab as subsequent therapy.
Abstract: PurposeIn the randomized, open-label, phase III KEYNOTE-024 study, pembrolizumab significantly improved progression-free survival and overall survival (OS) compared with platinum-based chemotherapy in patients with previously untreated advanced non–small-cell lung cancer (NSCLC) with a programmed death ligand 1 tumor proportion score of 50% or greater and without EGFR/ALK aberrations. We report an updated OS and tolerability analysis, including analyses adjusting for potential bias introduced by crossover from chemotherapy to pembrolizumab.Patients and MethodsPatients were randomly assigned to pembrolizumab 200 mg every 3 weeks (for up to 2 years) or investigator’s choice of platinum-based chemotherapy (four to six cycles). Patients assigned to chemotherapy could cross over to pembrolizumab upon meeting eligibility criteria. The primary end point was progression-free survival; OS was an important key secondary end point. Crossover adjustment analysis was done using the following three methods: simplified ...

Proceedings Article
12 Feb 2016
TL;DR: Holographic embeddings are proposed to learn compositional vector space representations of entire knowledge graphs and are shown to outperform state-of-the-art methods for link prediction on knowledge graphs and relational learning benchmark datasets.
Abstract: Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HOLE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator, HOLE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. Experimentally, we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction on knowledge graphs and relational learning benchmark datasets.
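The compositional operator at the heart of holographic embeddings is circular correlation, which can be computed in O(d log d) via the FFT and scored against a relation embedding with a sigmoid. The sketch below follows that standard HolE scoring form; the random vectors and the embedding size are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
d = 8
e_s, e_o = rng.normal(size=d), rng.normal(size=d)   # subject / object entity embeddings
r = rng.normal(size=d)                              # relation embedding

def circular_correlation(a, b):
    """(a * b)_k = sum_i a_i * b_((k + i) mod d), computed in O(d log d) via the FFT."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(r, e_s, e_o):
    """HolE plausibility score: sigmoid of the relation's match with the composed pair."""
    return 1.0 / (1.0 + np.exp(-r @ circular_correlation(e_s, e_o)))

# Sanity check against the direct O(d^2) definition of circular correlation.
direct = np.array([sum(e_s[i] * e_o[(k + i) % d] for i in range(d)) for k in range(d)])
assert np.allclose(direct, circular_correlation(e_s, e_o))
print(hole_score(r, e_s, e_o))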

Book
01 May 2017
TL;DR: It is argued that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient.
Abstract: The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.

Journal ArticleDOI
TL;DR: Characterization technologies at the nanoscale level for studying enzymes immobilized on surfaces are crucial for obtaining valuable qualitative and quantitative information, including morphological visualization of the immobilized enzymes, for assessing the efficacy of an immobilization technique, and for guiding the development of future enzyme immobilization strategies.
Abstract: The current demands of sustainable green methodologies have increased the use of enzymatic technology in industrial processes. Employment of enzyme as biocatalysts offers the benefits of mild reaction conditions, biodegradability and catalytic efficiency. The harsh conditions of industrial processes, however, increase propensity of enzyme destabilization, shortening their industrial lifespan. Consequently, the technology of enzyme immobilization provides an effective means to circumvent these concerns by enhancing enzyme catalytic properties and also simplify downstream processing and improve operational stability. There are several techniques used to immobilize the enzymes onto supports which range from reversible physical adsorption and ionic linkages, to the irreversible stable covalent bonds. Such techniques produce immobilized enzymes of varying stability due to changes in the surface microenvironment and degree of multipoint attachment. Hence, it is mandatory to obtain information about the structure of the enzyme protein following interaction with the support surface as well as interactions of the enzymes with other proteins. Characterization technologies at the nanoscale level to study enzymes immobilized on surfaces are crucial to obtain valuable qualitative and quantitative information, including morphological visualization of the immobilized enzymes. These technologies are pertinent to assess efficacy of an immobilization technique and development of future enzyme immobilization strategies.