Posted Content
TL;DR: This survey conducts a comprehensive review of the literature in graph embedding and proposes two taxonomies of graph embedding which correspond to what challenges exist in different graph embedding problem settings and how the existing work addresses these challenges in their solutions.
Abstract: Graph is an important data representation which appears in a wide diversity of real-world scenarios. Effective graph analytics provides users a deeper understanding of what is behind the data, and thus can benefit a lot of useful applications such as node classification, node recommendation, link prediction, etc. However, most graph analytics methods suffer from high computation and space costs. Graph embedding is an effective yet efficient way to solve the graph analytics problem. It converts the graph data into a low dimensional space in which the graph structural information and graph properties are maximally preserved. In this survey, we conduct a comprehensive review of the literature in graph embedding. We first introduce the formal definition of graph embedding as well as the related concepts. After that, we propose two taxonomies of graph embedding which correspond to what challenges exist in different graph embedding problem settings and how the existing work addresses these challenges in their solutions. Finally, we summarize the applications that graph embedding enables and suggest four promising future research directions in terms of computation efficiency, problem settings, techniques and application scenarios.
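For intuition only, here is a minimal, hedged sketch (not taken from the survey) of what "converting the graph data into a low-dimensional space" can mean in practice: a truncated-SVD factorization of a toy adjacency matrix. The graph and the dimension d are illustrative choices.

    import numpy as np

    # Toy undirected 4-node cycle graph: edges (0-1), (1-2), (2-3), (3-0)
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)

    d = 2                                  # target embedding dimension (illustrative)
    U, S, Vt = np.linalg.svd(A)            # factorize the adjacency matrix
    Z = U[:, :d] * np.sqrt(S[:d])          # one d-dimensional vector per node
    print(Z)                               # nodes with similar connectivity land close together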

691 citations


Proceedings Article
04 Dec 2017
TL;DR: This work proposes a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation and hint learning and shows consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.
Abstract: Despite significant accuracy improvement in convolutional neural networks (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer number of parameters, but with much reduced accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.
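To make the loss components above concrete, here is a hedged PyTorch sketch of the general ideas (a class-weighted soft-target cross-entropy and a teacher-bounded regression penalty). The function names, margin m, and temperature T are illustrative and not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def weighted_soft_ce(student_logits, teacher_logits, class_weights, T=1.0):
        # Sketch: cross-entropy on softened teacher outputs, weighted per class to counter imbalance
        # student_logits, teacher_logits: (N, C); class_weights: (C,)
        p_t = F.softmax(teacher_logits / T, dim=1)
        log_p_s = F.log_softmax(student_logits / T, dim=1)
        return -(class_weights * p_t * log_p_s).sum(dim=1).mean()

    def teacher_bounded_l2(student_reg, teacher_reg, target, m=0.0):
        # Sketch: penalize the student's regression only when it is worse than the teacher's (plus margin m)
        # student_reg, teacher_reg, target: (N, 4) box offsets, for example
        err_s = ((student_reg - target) ** 2).sum(dim=1)
        err_t = ((teacher_reg - target) ** 2).sum(dim=1)
        return torch.where(err_s + m > err_t, err_s, torch.zeros_like(err_s)).mean()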

691 citations


Journal ArticleDOI
TL;DR: The fundamental properties of TEs and their complex interactions with their cellular environment are introduced, which are crucial to understanding their impact and manifold consequences for organismal biology.
Abstract: Transposable elements (TEs) are major components of eukaryotic genomes. However, the extent of their impact on genome evolution, function, and disease remains a matter of intense interrogation. The rise of genomics and large-scale functional assays has shed new light on the multi-faceted activities of TEs and implies that they should no longer be marginalized. Here, we introduce the fundamental properties of TEs and their complex interactions with their cellular environment, which are crucial to understanding their impact and manifold consequences for organismal biology. While we draw examples primarily from mammalian systems, the core concepts outlined here are relevant to a broad range of organisms.

691 citations


Proceedings ArticleDOI
Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, D. Sculley
13 Aug 2017
TL;DR: Google Vizier is described, a Google-internal service for performing black-box optimization that has become the de facto parameter tuning engine at Google and is used to optimize many of the authors' machine learning models and other systems.
Abstract: Any sufficiently complex system acts as a black box when it becomes easier to experiment with than to understand. Hence, black-box optimization has become increasingly important as systems have become more complex. In this paper we describe Google Vizier, a Google-internal service for performing black-box optimization that has become the de facto parameter tuning engine at Google. Google Vizier is used to optimize many of our machine learning models and other systems, and also provides core capabilities to Google's Cloud Machine Learning HyperTune subsystem. We discuss our requirements, infrastructure design, underlying algorithms, and advanced features such as transfer learning and automated early stopping that the service provides.
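The paper describes a service rather than code, but the interaction pattern it implies, a study loop in which the optimizer suggests trials, clients evaluate them, and measurements are reported back, can be sketched generically. Everything below (function names, the random-search strategy, the toy objective) is hypothetical and is not the actual Vizier API.

    import random

    def suggest(history):
        # Placeholder black-box strategy: random search over one hypothetical parameter
        return {"learning_rate": 10 ** random.uniform(-5, -1)}

    def evaluate(params):
        # Stand-in for running an experiment and measuring its objective value
        return -abs(params["learning_rate"] - 0.01)

    history = []
    for trial in range(20):
        params = suggest(history)            # service proposes a configuration
        objective = evaluate(params)         # client runs the trial
        history.append((params, objective))  # measurement reported back to the study

    best = max(history, key=lambda h: h[1])
    print("best params:", best[0], "objective:", best[1])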

691 citations


Journal ArticleDOI
TL;DR: In this Viewpoint article, 18 experts in the field tell us what exhaustion means to them, ranging from complete lack of effector function to altered functionality to prevent immunopathology, with potential differences between cancer and chronic infection.
Abstract: 'T cell exhaustion' is a broad term that has been used to describe the response of T cells to chronic antigen stimulation, first in the setting of chronic viral infection but more recently in response to tumours. Understanding the features of and pathways to exhaustion has crucial implications for the success of checkpoint blockade and adoptive T cell transfer therapies. In this Viewpoint article, 18 experts in the field tell us what exhaustion means to them, ranging from complete lack of effector function to altered functionality to prevent immunopathology, with potential differences between cancer and chronic infection. Their responses highlight the dichotomy between terminally differentiated exhausted T cells that are TCF1- and the self-renewing TCF1+ population from which they derive. These TCF1+ cells are considered by some to have stem cell-like properties akin to memory T cell populations, but the developmental relationships are unclear at present. Recent studies have also highlighted an important role for the transcriptional regulator TOX in driving the epigenetic enforcement of exhaustion, but key questions remain about the potential to reverse the epigenetic programme of exhaustion and how this might affect the persistence of T cell populations.

691 citations


Journal ArticleDOI
TL;DR: The second Gaia data release (DR2) contains very precise astrometric and photometric properties for more than one billion sources, astrophysical parameters for dozens of millions, radial velocities for millions, variability information for half a million stars from selected variability classes, and orbits for thousands of solar system objects.
Abstract: Context. The second Gaia data release (DR2) contains very precise astrometric and photometric properties for more than one billion sources, astrophysical parameters for dozens of millions, radial velocities for millions, variability information for half a million stars from selected variability classes, and orbits for thousands of solar system objects. Aims. Before the catalogue was published, these data have undergone dedicated validation processes. The goal of this paper is to describe the validation results in terms of completeness, accuracy, and precision of the various Gaia DR2 data. Methods. The validation processes include a systematic analysis of the catalogue content to detect anomalies, either individual errors or statistical properties, using statistical analysis and comparisons to external data or to models. Results. Although the astrometric, photometric, and spectroscopic data are of unprecedented quality and quantity, it is shown that the data cannot be used without dedicated attention to the limitations described here, in the catalogue documentation and in accompanying papers. We place special emphasis on the caveats for the statistical use of the data in scientific exploitation. In particular, we discuss the quality filters and the consideration of the properties, systematics, and uncertainties from astrometry to astrophysical parameters, together with the various selection functions.

690 citations


Journal ArticleDOI
TL;DR: This article assesses the different machine learning methods that deal with the challenges presented by IoT data, considering smart cities as the main use case, and presents a taxonomy of machine learning algorithms explaining how different techniques are applied to the data in order to extract higher-level information.

690 citations


Journal ArticleDOI
TL;DR: This document represents a continuation of the National Lipid Association recommendations developed by a diverse panel of experts who examined the evidence base and provided recommendations regarding the following topics: lifestyle therapies and strategies to improve patient outcomes by increasing adherence and using team-based collaborative care.

690 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: This method presents two interesting insights: first, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection; and second, the complex relationship between different global saliency cues can be captured by deep networks and exploited in a principled manner rather than heuristically.
Abstract: This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency by using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring the high level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information are used as global features to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum of salient object regions. Our method presents two interesting insights. First, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited in a principled manner rather than heuristically. Quantitative and qualitative experiments on several benchmark data sets demonstrate that our algorithm performs favorably against the state-of-the-art methods.

690 citations


Journal ArticleDOI
09 Oct 2015-Science
TL;DR: “coordination-activity plots” are introduced that predict the geometric structure of optimal active sites on platinum (111) surface using a weighted average of surface coordination that includes second-nearest neighbors to assess optimal reactivity.
Abstract: A good heterogeneous catalyst for a given chemical reaction very often has only one specific type of surface site that is catalytically active. Widespread methodologies such as Sabatier-type activity plots determine optimal adsorption energies to maximize catalytic activity, but these are difficult to use as guidelines to devise new catalysts. We introduce “coordination-activity plots” that predict the geometric structure of optimal active sites. The method is illustrated on the oxygen reduction reaction catalyzed by platinum. Sites with the same number of first-nearest neighbors as (111) terraces but with an increased number of second-nearest neighbors are predicted to have superior catalytic activity. We used this rationale to create highly active sites on platinum (111), without alloying and using three different affordable experimental methods.
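The "weighted average of surface coordination that includes second-nearest neighbors" is the generalized coordination number. Written here from general knowledge of the method rather than quoted from the paper, for a surface site i with first-nearest neighbors nn(i):

    \overline{CN}(i) = \sum_{j \in \mathrm{nn}(i)} \frac{cn(j)}{cn_{\max}}

where cn(j) is the conventional coordination number of neighbor j and cn_max is the bulk maximum (12 for fcc metals such as platinum), so second-nearest neighbors enter through the coordination of the first shell.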

690 citations


Journal ArticleDOI
TL;DR: An ecological overview of the rare microbial biosphere is provided, including causes of rarity and the impacts of rare species on ecosystem functioning, and how rare species can have a preponderant role for local biodiversity and species turnover with rarity potentially bound to phylogenetically conserved features is discussed.
Abstract: Rare species are increasingly recognized as crucial, yet vulnerable components of Earth’s ecosystems. This is also true for microbial communities, which are typically composed of a high number of relatively rare species. Recent studies have demonstrated that rare species can have an over-proportional role in biogeochemical cycles and may be a hidden driver of microbiome function. In this review, we provide an ecological overview of the rare microbial biosphere, including causes of rarity and the impacts of rare species on ecosystem functioning. We discuss how rare species can have a preponderant role for local biodiversity and species turnover with rarity potentially bound to phylogenetically conserved features. Rare microbes may therefore be overlooked keystone species regulating the functioning of host-associated, terrestrial and aquatic environments. We conclude this review with recommendations to guide scientists interested in investigating this rapidly emerging research area.

Journal ArticleDOI
TL;DR: Marine20 as mentioned in this paper is an update to the internationally agreed marine radiocarbon age calibration curve that provides a non-polar global-average marine record of radioccarbon from 0 −55 cal kBP and serves as a baseline for regional oceanic variation.
Abstract: The concentration of radiocarbon (14C) differs between ocean and atmosphere. Radiocarbon determinations from samples which obtained their 14C in the marine environment therefore need a marine-specific calibration curve and cannot be calibrated directly against the atmospheric-based IntCal20 curve. This paper presents Marine20, an update to the internationally agreed marine radiocarbon age calibration curve that provides a non-polar global-average marine record of radiocarbon from 0–55 cal kBP and serves as a baseline for regional oceanic variation. Marine20 is intended for calibration of marine radiocarbon samples from non-polar regions; it is not suitable for calibration in polar regions where variability in sea ice extent, ocean upwelling and air-sea gas exchange may have caused larger changes to concentrations of marine radiocarbon. The Marine20 curve is based upon 500 simulations with an ocean/atmosphere/biosphere box-model of the global carbon cycle that has been forced by posterior realizations of our Northern Hemispheric atmospheric IntCal20 14C curve and reconstructed changes in CO2 obtained from ice core data. These forcings enable us to incorporate carbon cycle dynamics and temporal changes in the atmospheric 14C level. The box-model simulations of the global-average marine radiocarbon reservoir age are similar to those of a more complex three-dimensional ocean general circulation model. However, simplicity and speed of the box model allow us to use a Monte Carlo approach to rigorously propagate the uncertainty in both the historic concentration of atmospheric 14C and other key parameters of the carbon cycle through to our final Marine20 calibration curve. This robust propagation of uncertainty is fundamental to providing reliable precision for the radiocarbon age calibration of marine based samples. We make a first step towards deconvolving the contributions of different processes to the total uncertainty; discuss the main differences of Marine20 from the previous age calibration curve Marine13; and identify the limitations of our approach together with key areas for further work. The updated values for ΔR, the regional marine radiocarbon reservoir age corrections required to calibrate against Marine20, can be found at the data base http://calib.org/marine/.

Book
29 May 2015
TL;DR: Matrix concentration inequalities, as discussed by the authors, are a flexible, easy-to-use, and powerful family of results for studying random matrices, which now play a role in many areas of theoretical, applied, and computational mathematics.
Abstract: Random matrices now play a role in many areas of theoretical, applied, and computational mathematics. Therefore, it is desirable to have tools for studying random matrices that are flexible, easy to use, and powerful. Over the last fifteen years, researchers have developed a remarkable family of results, called matrix concentration inequalities, that achieve all of these goals. This monograph offers an invitation to the field of matrix concentration inequalities. It begins with some history of random matrix theory; it describes a flexible model for random matrices that is suitable for many problems; and it discusses the most important matrix concentration results. To demonstrate the value of these techniques, the presentation includes examples drawn from statistics, machine learning, optimization, combinatorics, algorithms, scientific computing, and beyond.
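As one representative example of the results the monograph surveys, the matrix Bernstein inequality (stated here from standard references; conventions may differ slightly from the monograph's): for independent, zero-mean, self-adjoint d x d random matrices X_1, ..., X_n with ||X_k|| <= L almost surely,

    \Pr\left[\lambda_{\max}\Big(\sum_{k=1}^{n} X_k\Big) \ge t\right]
        \le d \cdot \exp\left(\frac{-t^2/2}{\sigma^2 + Lt/3}\right),
    \qquad \sigma^2 = \Big\|\sum_{k=1}^{n} \mathbb{E}\big[X_k^2\big]\Big\|.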

Posted Content
TL;DR: OpenAI Gym as mentioned in this paper is a toolkit for reinforcement learning research that includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms.
Abstract: OpenAI Gym is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software.
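A minimal usage sketch of the common interface, using the classic Gym API (the environment name is an arbitrary choice; newer releases changed the reset/step return signatures):

    import gym

    env = gym.make("CartPole-v1")
    observation = env.reset()
    for _ in range(100):
        action = env.action_space.sample()                 # random policy as a placeholder
        observation, reward, done, info = env.step(action)  # classic 4-tuple step API
        if done:
            observation = env.reset()
    env.close()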

Posted Content
TL;DR: This work identifies two key properties related to the contrastive loss: alignment (closeness) of features from positive pairs, and uniformity of the induced distribution of the (normalized) features on the hypersphere.
Abstract: Contrastive representation learning has been outstandingly successful in practice. In this work, we identify two key properties related to the contrastive loss: (1) alignment (closeness) of features from positive pairs, and (2) uniformity of the induced distribution of the (normalized) features on the hypersphere. We prove that, asymptotically, the contrastive loss optimizes these properties, and analyze their positive effects on downstream tasks. Empirically, we introduce an optimizable metric to quantify each property. Extensive experiments on standard vision and language datasets confirm the strong agreement between both metrics and downstream task performance. Remarkably, directly optimizing for these two metrics leads to representations with comparable or better performance at downstream tasks than contrastive learning. Project Page: this https URL Code: this https URL , this https URL
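A hedged PyTorch sketch of the two optimizable metrics in their commonly used form (the exponent alpha and temperature t are the usual defaults, not necessarily the authors' exact code):

    import torch

    def align_loss(x, y, alpha=2):
        # x, y: L2-normalized features of positive pairs, shape (N, d)
        return (x - y).norm(p=2, dim=1).pow(alpha).mean()

    def uniform_loss(x, t=2):
        # Log of the average pairwise Gaussian potential on the hypersphere
        return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()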

Journal ArticleDOI
TL;DR: In this article, the authors explore the hedging capabilities of Bitcoin by applying the asymmetric GARCH methodology used in investigations of gold, and show that Bitcoin can clearly be used as a hedge against stocks in the Financial Times Stock Exchange Index.
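For reference, an asymmetric GARCH(1,1) specification of the GJR type, as commonly used in the gold-hedging literature (written from general knowledge, not taken from this article), models the conditional variance as:

    \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2
        + \gamma\,\varepsilon_{t-1}^2\,\mathbb{1}(\varepsilon_{t-1} < 0)
        + \beta\,\sigma_{t-1}^2,

where the gamma term lets negative return shocks raise volatility more than positive ones, which is the asymmetry the TL;DR refers to.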

Journal ArticleDOI
TL;DR: The authors developed the 36-item COVID Stress Scales (CSS) to measure these features, as they pertain to COVID-19, to better understand and assess COVID-19-related distress.

Journal ArticleDOI
TL;DR: BiGG Models is presented, a completely redesigned Biochemical, Genetic and Genomic knowledge base that contains more than 75 high-quality, manually-curated genome-scale metabolic models that will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.
Abstract: Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.
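The abstract mentions an application programming interface; assuming REST endpoints of the form bigg.ucsd.edu/api/v2/... (paths and returned fields should be checked against the current BiGG documentation), a query might look like:

    import requests

    # Assumed endpoint layout; verify against the BiGG Models API documentation
    base = "http://bigg.ucsd.edu/api/v2"

    models = requests.get(f"{base}/models").json()         # list available models
    model = requests.get(f"{base}/models/iJO1366").json()  # details for one curated E. coli model
    print(sorted(model.keys()))                            # inspect the returned fields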

Journal ArticleDOI
TL;DR: The global prevalence was higher in women and increased with age, peaking at the >95 age group among women and men in 2017, and a positive association was found between the age-standardised YLD rate and SDI at the regional and national levels.
Abstract: Objectives To report the level and trends of prevalence, incidence and years lived with disability (YLDs) for osteoarthritis (OA) in 195 countries and territories from 1990 to 2017 by age, sex and Socio-demographic index (SDI; a composite of sociodemographic factors). Methods Publicly available modelled data from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2017 were used. The burden of OA was estimated for 195 countries and territories from 1990 to 2017, through a systematic analysis of prevalence and incidence modelled data using the methods reported in the GBD 2017 Study. All estimates were presented as counts and age-standardised rates per 100 000 population, with uncertainty intervals (UIs). Results Globally, the age-standardised point prevalence and annual incidence rate of OA in 2017 were 3754.2 (95% UI 3389.4 to 4187.6) and 181.2 (95% UI 162.6 to 202.4) per 100 000, an increase of 9.3% (95% UI 8% to 10.7%) and 8.2% (95% UI 7.1% to 9.4%) from 1990, respectively. In addition, global age-standardised YLD rate in 2017 was 118.8 (95% UI 59.5 to 236.2), an increase of 9.6% (95% UI 8.3% to 11.1%) from 1990. The global prevalence was higher in women and increased with age, peaking at the >95 age group among women and men in 2017. Generally, a positive association was found between the age-standardised YLD rate and SDI at the regional and national levels. Age-standardised prevalence of OA in 2017 ranged from 2090.3 to 6128.1 cases per 100 000 population. United States (6128.1 (95% UI 5729.3 to 6582.9)), American Samoa (5281 (95% UI 4688 to 5965.9)) and Kuwait (5234.6 (95% UI 4643.2 to 5953.6)) had the three highest levels of age-standardised prevalence. Oman (29.6% (95% UI 24.8% to 34.9%)), Equatorial Guinea (28.6% (95% UI 24.4% to 33.7%)) and the United States (23.2% (95% UI 16.4% to 30.5%)) showed the highest increase in the age-standardised prevalence during 1990–2017. Conclusions OA is a major public health challenge. While there is remarkable international variation in the prevalence, incidence and YLDs due to OA, the burden is increasing in most countries. It is expected to continue with increased life expectancy and ageing of the global population. Improving population and policy maker awareness of risk factors, including overweight and injury, and the importance and benefits of management of OA, together with providing health services for an increasing number of people living with OA, are recommended for management of the future burden of this condition.

Journal ArticleDOI
TL;DR: The aim of this review is to recapitulate the clinical understanding of CSCR, with an emphasis on the most recent findings on epidemiology, risk factors, clinical and imaging diagnosis, and treatment options, and the novel mineralocorticoid pathway hypothesis.

Journal ArticleDOI
TL;DR: Acute kidney injury is common and is associated with poor outcomes, including increased mortality, among critically ill children and young adults, and a stepwise increase in 28‐day mortality was associated with worsening severity of acute kidney injury.
Abstract: BackgroundThe epidemiologic characteristics of children and young adults with acute kidney injury have been described in single-center and retrospective studies. We conducted a multinational, prospective study involving patients admitted to pediatric intensive care units to define the incremental risk of death and complications associated with severe acute kidney injury. MethodsWe used the Kidney Disease: Improving Global Outcomes criteria to define acute kidney injury. Severe acute kidney injury was defined as stage 2 or 3 acute kidney injury (plasma creatinine level ≥2 times the baseline level or urine output <0.5 ml per kilogram of body weight per hour for ≥12 hours) and was assessed for the first 7 days of intensive care. All patients 3 months to 25 years of age who were admitted to 1 of 32 participating units were screened during 3 consecutive months. The primary outcome was 28-day mortality. ResultsA total of 4683 patients were evaluated; acute kidney injury developed in 1261 patients (26.9%; 95% co...

Journal ArticleDOI
TL;DR: The 3D structure of a disease-relevant Aβ(1–42) fibril polymorph is determined by combining data from solid-state NMR spectroscopy and mass-per-length measurements from EM, forming a double-horseshoe–like cross–β-sheet entity with maximally buried hydrophobic side chains.
Abstract: Amyloid-β (Aβ) is present in humans as a 39- to 42-amino acid residue metabolic product of the amyloid precursor protein. Although the two predominant forms, Aβ(1–40) and Aβ(1–42), differ in only two residues, they display different biophysical, biological, and clinical behavior. Aβ(1–42) is the more neurotoxic species, aggregates much faster, and dominates in senile plaque of Alzheimer’s disease (AD) patients. Although small Aβ oligomers are believed to be the neurotoxic species, Aβ amyloid fibrils are, because of their presence in plaques, a pathological hallmark of AD and appear to play an important role in disease progression through cell-to-cell transmissibility. Here, we solved the 3D structure of a disease-relevant Aβ(1–42) fibril polymorph, combining data from solid-state NMR spectroscopy and mass-per-length measurements from EM. The 3D structure is composed of two molecules per fibril layer, with residues 15–42 forming a double-horseshoe–like cross–β-sheet entity with maximally buried hydrophobic side chains. Residues 1–14 are partially ordered and in a β-strand conformation, but do not display unambiguous distance restraints to the remainder of the core structure.

Proceedings ArticleDOI
01 Oct 2019
TL;DR: IIC as mentioned in this paper learns a neural network classifier from scratch, given only unlabeled data samples, and achieves state-of-the-art results in eight unsupervised clustering benchmarks spanning image classification and segmentation.
Abstract: We present a novel clustering objective that learns a neural network classifier from scratch, given only unlabelled data samples. The model discovers clusters that accurately match semantic classes, achieving state-of-the-art results in eight unsupervised clustering benchmarks spanning image classification and segmentation. These include STL10, an unsupervised variant of ImageNet, and CIFAR10, where we significantly beat the accuracy of our closest competitors by 6.6 and 9.5 absolute percentage points respectively. The method is not specialised to computer vision and operates on any paired dataset samples; in our experiments we use random transforms to obtain a pair from each image. The trained network directly outputs semantic labels, rather than high dimensional representations that need external processing to be usable for semantic clustering. The objective is simply to maximise mutual information between the class assignments of each pair. It is easy to implement and rigorously grounded in information theory, meaning we effortlessly avoid degenerate solutions that other clustering methods are susceptible to. In addition to the fully unsupervised mode, we also test two semi-supervised settings. The first achieves 88.8% accuracy on STL10 classification, setting a new global state-of-the-art over all existing methods (whether supervised, semi-supervised or unsupervised). The second shows robustness to 90% reductions in label coverage, of relevance to applications that wish to make use of small amounts of labels. github.com/xu-ji/IIC
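The mutual-information objective can be written in a few lines once the joint distribution of paired class assignments is formed; a hedged PyTorch sketch in the standard IIC style (a small epsilon is added for numerical stability; this is not the authors' exact code):

    import torch

    def iic_loss(z, z_prime, eps=1e-8):
        # z, z_prime: softmax class assignments for the two views of each sample, shape (N, C)
        P = (z.unsqueeze(2) * z_prime.unsqueeze(1)).mean(dim=0)  # joint distribution, (C, C)
        P = ((P + P.t()) / 2).clamp(min=eps)                     # symmetrize and floor
        Pi = P.sum(dim=1, keepdim=True)                          # marginal over rows
        Pj = P.sum(dim=0, keepdim=True)                          # marginal over columns
        return -(P * (P.log() - Pi.log() - Pj.log())).sum()      # negative mutual information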

Journal ArticleDOI
TL;DR: Successful fabrication of key electrical components on the flexible cellulose nanofibril paper with comparable performance to their rigid counterparts and clear demonstration of fungal biodegradation of the cellulose-nanofibril-based electronics suggest that it is feasible to fabricate high-performance flexible electronics using ecofriendly materials.
Abstract: The rapid evolution of consumer electronics means that out-of-date devices quickly end up in the scrap heap. Here, the authors fabricate electrical components using biodegradable and flexible cellulose nanofibril paper—a natural sustainable resource derived from wood.

Journal ArticleDOI
Maeve Henchion, Maria Hayes, Anne Maria Mullen, Mark A. Fenelon, Brijesh K. Tiwari
20 Jul 2017-Foods
TL;DR: This paper outlines some potential demand scenarios and provides an overview of selected existing and novel protein sources in terms of their potential to sustainably deliver protein for the future, considering drivers and challenges relating to nutritional, environmental, and technological and market/consumer domains.
Abstract: A growing global population, combined with factors such as changing socio-demographics, will place increased pressure on the world’s resources to provide not only more but also different types of food. Increased demand for animal-based protein in particular is expected to have a negative environmental impact, generating greenhouse gas emissions, requiring more water and more land. Addressing this “perfect storm” will necessitate more sustainable production of existing sources of protein as well as alternative sources for direct human consumption. This paper outlines some potential demand scenarios and provides an overview of selected existing and novel protein sources in terms of their potential to sustainably deliver protein for the future, considering drivers and challenges relating to nutritional, environmental, and technological and market/consumer domains. It concludes that different factors influence the potential of existing and novel sources. Existing protein sources are primarily hindered by their negative environmental impacts with some concerns around health. However, they offer social and economic benefits, and have a high level of consumer acceptance. Furthermore, recent research emphasizes the role of livestock as part of the solution to greenhouse gas emissions, and indicates that animal-based protein has an important role as part of a sustainable diet and as a contributor to food security. Novel proteins require the development of new value chains, and attention to issues such as production costs, food safety, scalability and consumer acceptance. Furthermore, positive environmental impacts cannot be assumed with novel protein sources and care must be taken to ensure that comparisons between novel and existing protein sources are valid. Greater alignment of political forces, and the involvement of wider stakeholders in a governance role, as well as development/commercialization role, is required to address both sources of protein and ensure food security.

Journal ArticleDOI
TL;DR: It is hypothesized that cannabidiol would inhibit cannabinoid agonist activity through negative allosteric modulation of CB1 receptors.
Abstract: Background and Purpose Cannabidiol has been reported to act as an antagonist at cannabinoid CB1 receptors. We hypothesized that cannabidiol would inhibit cannabinoid agonist activity through negative allosteric modulation of CB1 receptors. Experimental Approach Internalization of CB1 receptors, arrestin2 recruitment, and PLCβ3 and ERK1/2 phosphorylation, were quantified in HEK 293A cells heterologously expressing CB1 receptors and in the STHdhQ7/Q7 cell model of striatal neurons endogenously expressing CB1 receptors. Cells were treated with 2-arachidonylglycerol or Δ9-tetrahydrocannabinol alone and in combination with different concentrations of cannabidiol. Key Results Cannabidiol reduced the efficacy and potency of 2-arachidonylglycerol and Δ9-tetrahydrocannabinol on PLCβ3- and ERK1/2-dependent signalling in cells heterologously (HEK 293A) or endogenously (STHdhQ7/Q7) expressing CB1 receptors. By reducing arrestin2 recruitment to CB1 receptors, cannabidiol treatment prevented internalization of these receptors. The allosteric activity of cannabidiol depended upon polar residues being present at positions 98 and 107 in the extracellular amino terminus of the CB1 receptor. Conclusions and Implications Cannabidiol behaved as a non-competitive negative allosteric modulator of CB1 receptors. Allosteric modulation, in conjunction with effects not mediated by CB1 receptors, may explain the in vivo effects of cannabidiol. Allosteric modulators of CB1 receptors have the potential to treat CNS and peripheral disorders while avoiding the adverse effects associated with orthosteric agonism or antagonism of these receptors.

Journal ArticleDOI
TL;DR: The Stanford Network Analysis Platform (SNAP) as mentioned in this paper is a general-purpose, high-performance system that provides easy-to-use, high-level operations for analysis and manipulation of large networks.
Abstract: Large networks are becoming a widely used abstraction for studying complex systems in a broad set of disciplines, ranging from social-network analysis to molecular biology and neuroscience. Despite an increasing need to analyze and manipulate large networks, only a limited number of tools are available for this task. Here, we describe the Stanford Network Analysis Platform (SNAP), a general-purpose, high-performance system that provides easy-to-use, high-level operations for analysis and manipulation of large networks. We present SNAP functionality, describe its implementational details, and give performance benchmarks. SNAP has been developed for single big-memory machines, and it balances the trade-off between maximum performance, compact in-memory graph representation, and the ability to handle dynamic graphs in which nodes and edges are being added or removed over time. SNAP can process massive networks with hundreds of millions of nodes and billions of edges. SNAP offers over 140 different graph algorithms that can efficiently manipulate large graphs, calculate structural properties, generate regular and random graphs, and handle attributes and metadata on nodes and edges. Besides being able to handle large graphs, an additional strength of SNAP is that networks and their attributes are fully dynamic; they can be modified during the computation at low cost. SNAP is provided as an open-source library in C++ as well as a module in Python. We also describe the Stanford Large Network Dataset, a set of social and information real-world networks and datasets, which we make publicly available. The collection is a complementary resource to our SNAP software and is widely used for development and benchmarking of graph analytics algorithms.
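A small usage sketch with SNAP's Python module; the calls below follow the documented Snap.py interface as best recalled and should be treated as approximate:

    import snap

    # Calls per the Snap.py documentation as recalled; treat as approximate
    # Random undirected Erdos-Renyi graph with 1000 nodes and 5000 edges
    G = snap.GenRndGnm(snap.PUNGraph, 1000, 5000)
    print(G.GetNodes(), G.GetEdges())

    # Iterate over nodes and report high-degree ones
    for NI in G.Nodes():
        if NI.GetDeg() > 15:
            print("hub:", NI.GetId(), "degree:", NI.GetDeg())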

Journal ArticleDOI
25 Nov 2016-Science
TL;DR: This work uses atom-by-atom assembly to implement a platform for the deterministic preparation of regular one-dimensional arrays of individually controlled cold atoms, with a measurement and feedback procedure yielding defect-free arrays of more than 50 atoms in less than 400 milliseconds.
Abstract: The realization of large-scale fully controllable quantum systems is an exciting frontier in modern physical science. We use atom-by-atom assembly to implement a platform for the deterministic preparation of regular one-dimensional arrays of individually controlled cold atoms. In our approach, a measurement and feedback procedure eliminates the entropy associated with probabilistic trap occupation and results in defect-free arrays of more than 50 atoms in less than 400 milliseconds. The technique is based on fast, real-time control of 100 optical tweezers, which we use to arrange atoms in desired geometric patterns and to maintain these configurations by replacing lost atoms with surplus atoms from a reservoir. This bottom-up approach may enable controlled engineering of scalable many-body systems for quantum information processing, quantum simulations, and precision measurements.

Journal ArticleDOI
23 Jun 2016-Nature
TL;DR: This work reports the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer and explores the Schwinger mechanism of particle–antiparticle generation by monitoring the mass production and the vacuum persistence amplitude.
Abstract: A digital quantum simulation of a lattice gauge theory is performed on a quantum computer that consists of a few trapped-ion qubits; the model simulated is the Schwinger mechanism, which describes the creation of electron–positron pairs from vacuum. Quantum simulations promise to provide solutions to problems where classical computational methods fail. An example of a challenging computational problem is the real-time dynamics in gauge theories — field theories paramount to modern particle physics. This paper presents a digital quantum simulation of a lattice gauge theory on a quantum computer consisting of a few qubits comprising trapped calcium controlled by electromagnetic fields. The specific model that the authors simulate is the Schwinger mechanism, which describes the creation of electron–positron pairs from vacuum. As an early example of a particle-physics theory simulated with an atomic physics experiment, this could potentially open the door to simulating more complicated and otherwise computationally intractable models. Gauge theories are fundamental to our understanding of interactions between the elementary constituents of matter as mediated by gauge bosons [1,2]. However, computing the real-time dynamics in gauge theories is a notorious challenge for classical computational methods. This has recently stimulated theoretical effort, using Feynman’s idea of a quantum simulator [3,4], to devise schemes for simulating such theories on engineered quantum-mechanical devices, with the difficulty that gauge invariance and the associated local conservation laws (Gauss laws) need to be implemented [5,6,7]. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, by realizing (1 + 1)-dimensional quantum electrodynamics (the Schwinger model [8,9]) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism [10,11], describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron–positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields [12] in favour of exotic long-range interactions, which can be directly and efficiently implemented on an ion trap architecture [13]. We explore the Schwinger mechanism of particle–antiparticle generation by monitoring the mass production and the vacuum persistence amplitude. Moreover, we track the real-time evolution of entanglement in the system, which illustrates how particle creation and entanglement generation are directly related. Our work represents a first step towards quantum simulation of high-energy theories using atomic physics experiments—the long-term intention is to extend this approach to real-time quantum simulations of non-Abelian lattice gauge theories.
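For orientation, after eliminating the gauge fields via Gauss's law and a Jordan–Wigner transformation, the encoded spin model has, schematically, the form below (written from the standard lattice Schwinger-model literature; conventions and prefactors may differ from the paper's):

    H = w \sum_{n=1}^{N-1} \left[\sigma_n^{+}\sigma_{n+1}^{-} + \mathrm{h.c.}\right]
        + \frac{m}{2} \sum_{n=1}^{N} (-1)^{n}\,\sigma_n^{z}
        + J \sum_{n=1}^{N-1} L_n^{2},
    \qquad L_n = \epsilon_0 + \frac{1}{2}\sum_{l=1}^{n}\left[\sigma_l^{z} + (-1)^{l}\right].

Expanding the L_n^2 term produces the long-range sigma^z sigma^z couplings that the trapped-ion architecture implements directly.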

Journal ArticleDOI
TL;DR: A comprehensive study of all steps in BoVW and different fusion methods is provided, and a simple yet effective representation, called the hybrid supervector, is proposed by exploring the complementarity of different BoVW frameworks with improved dense trajectories.