
Journal ArticleDOI
TL;DR: A multidisciplinary approach is needed to enable the formation of a robust SEI for highly efficient energy storage systems.
Abstract: Lithium metal batteries (LMBs) are among the most promising candidates for high-energy-density devices for advanced energy storage. However, the growth of dendrites greatly hinders the practical application of LMBs in portable electronics and electric vehicles. Constructing a stable and efficient solid electrolyte interphase (SEI) is among the most effective strategies to inhibit dendrite growth and thus achieve superior cycling performance. In this review, the mechanisms of SEI formation and models of SEI structure are briefly summarized. The analysis methods used to probe the surface chemistry, surface morphology, electrochemical properties, and dynamic characteristics of the SEI layer are emphasized. The critical factors affecting SEI formation, such as electrolyte components, temperature, and current density, are comprehensively discussed. Efficient methods to modify the SEI layer, including new electrolyte systems and additives, ex-situ-formed protective layers, and electrode design, are summarized. Although these works afford new insights into SEI research, robust and precise routes for SEI modification with well-designed structure, as well as understanding of the connection between structure and electrochemical performance, are still inadequate. A multidisciplinary approach is needed to enable the formation of a robust SEI for highly efficient energy storage systems.

1,266 citations


Journal ArticleDOI
TL;DR: The renin-angiotensin-aldosterone system and members of the transforming growth factor-β family play an important role in activation of infarct myofibroblasts, and therapeutic modulation of the inflammatory and reparative response may hold promise for the prevention of postinfarction heart failure.
Abstract: In adult mammals, massive sudden loss of cardiomyocytes after infarction overwhelms the limited regenerative capacity of the myocardium, resulting in the formation of a collagen-based scar. Necrotic cells release danger signals, activating innate immune pathways and triggering an intense inflammatory response. Stimulation of toll-like receptor signaling and complement activation induces expression of proinflammatory cytokines (such as interleukin-1 and tumor necrosis factor-α) and chemokines (such as monocyte chemoattractant protein-1/chemokine (C-C motif) ligand 2 [CCL2]). Inflammatory signals promote adhesive interactions between leukocytes and endothelial cells, leading to extravasation of neutrophils and monocytes. As infiltrating leukocytes clear the infarct of dead cells, mediators repressing inflammation are released, and anti-inflammatory mononuclear cell subsets predominate. Suppression of the inflammatory response is associated with activation of reparative cells. Fibroblasts proliferate, undergo myofibroblast transdifferentiation, and deposit large amounts of extracellular matrix proteins that maintain the structural integrity of the infarcted ventricle. The renin–angiotensin–aldosterone system and members of the transforming growth factor-β family play an important role in activation of infarct myofibroblasts. Maturation of the scar follows, as a network of cross-linked collagenous matrix is formed and granulation tissue cells become apoptotic. This review discusses the cellular effectors and molecular signals regulating the inflammatory and reparative response after myocardial infarction. Dysregulation of immune pathways, impaired suppression of postinfarction inflammation, perturbed spatial containment of the inflammatory response, and overactive fibrosis may cause adverse remodeling in patients with infarction, contributing to the pathogenesis of heart failure. Therapeutic modulation of the inflammatory and reparative response may hold promise for the prevention of postinfarction heart failure.

1,266 citations


Proceedings ArticleDOI
09 May 2017
TL;DR: It is shown that, in comparison to other recently introduced large-scale datasets, TriviaQA has relatively complex, compositional questions, has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and requires more cross sentence reasoning to find answers.
Abstract: We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study.

1,266 citations


Journal ArticleDOI
J. Aasi, J. Abadie, B. P. Abbott, Richard J. Abbott, et al. (884 additional authors from 98 institutions)
TL;DR: In this paper, the authors review the performance of the LIGO instruments during this epoch, the work done to characterize the detectors and their data, and the effect that transient and continuous noise artefacts have on the sensitivity of the detectors to a variety of astrophysical sources.
Abstract: In 2009–2010, the Laser Interferometer Gravitational-Wave Observatory (LIGO) operated together with international partners Virgo and GEO600 as a network to search for gravitational waves (GWs) of astrophysical origin. The sensitivity of these detectors was limited by a combination of noise sources inherent to the instrumental design and its environment, often localized in time or frequency, that couple into the GW readout. Here we review the performance of the LIGO instruments during this epoch, the work done to characterize the detectors and their data, and the effect that transient and continuous noise artefacts have on the sensitivity of LIGO to a variety of astrophysical sources.

1,266 citations


Journal ArticleDOI
01 Jan 2020
TL;DR: In this paper, the authors propose a general design pipeline for GNN models and discuss the variants of each component, systematically categorize the applications, and propose four open problems for future research.
Abstract: Lots of learning tasks require dealing with graph data which contains rich relation information among elements. Modeling physics systems, learning molecular fingerprints, predicting protein interface, and classifying diseases demand a model to learn from graph inputs. In other domains such as learning from non-structural data like texts and images, reasoning on extracted structures (like the dependency trees of sentences and the scene graphs of images) is an important research topic which also needs graph reasoning models. Graph neural networks (GNNs) are neural models that capture the dependence of graphs via message passing between the nodes of graphs. In recent years, variants of GNNs such as graph convolutional network (GCN), graph attention network (GAT), graph recurrent network (GRN) have demonstrated ground-breaking performances on many deep learning tasks. In this survey, we propose a general design pipeline for GNN models and discuss the variants of each component, systematically categorize the applications, and propose four open problems for future research.

1,266 citations
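The message passing that GNN variants such as GCNs build on can be sketched in a few lines of NumPy. This is an illustrative sketch of the standard graph-convolution propagation rule, not code from the survey; all names are ours:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolutional layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) learnable weights. Adding I gives each node a
    self-loop; the symmetric degree normalization averages messages
    from neighbors before the linear transform and nonlinearity.
    """
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

Stacking several such layers lets each node aggregate information from progressively larger neighborhoods, which is the core mechanism the survey's design pipeline organizes.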


Proceedings ArticleDOI
De Cheng, Yihong Gong, Sanping Zhou, Jinjun Wang, Nanning Zheng
27 Jun 2016
TL;DR: A novel multi-channel parts-based convolutional neural network model under the triplet framework for person re-identification that significantly outperforms many state-of-the-art approaches, including both traditional and deep network-based ones, on the challenging i-LIDS, VIPeR, PRID2011 and CUHK01 datasets.
Abstract: Person re-identification across cameras remains a very challenging problem, especially when there are no overlapping fields of view between cameras. In this paper, we present a novel multi-channel parts-based convolutional neural network (CNN) model under the triplet framework for person re-identification. Specifically, the proposed CNN model consists of multiple channels to jointly learn both the global full-body and local body-parts features of the input persons. The CNN model is trained by an improved triplet loss function that serves to pull the instances of the same person closer, and at the same time push the instances belonging to different persons farther from each other in the learned feature space. Extensive comparative evaluations demonstrate that our proposed method significantly outperforms many state-of-the-art approaches, including both traditional and deep network-based ones, on the challenging i-LIDS, VIPeR, PRID2011 and CUHK01 datasets.

1,265 citations
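The triplet objective described above can be sketched as follows. This is the vanilla form for illustration; the paper's improved variant adds further terms, which are omitted here:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors: the anchor should be
    at least `margin` closer (in squared distance) to the positive
    (same person) than to the negative (different person)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

When the negative is already far enough away, the loss is zero and the triplet contributes no gradient; training therefore focuses on triplets that violate the margin.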


Journal ArticleDOI
TL;DR: In this article, the effect of different alkali elements in the post deposition treatment of CIGS solar cells was investigated, and a diode analysis revealed an improved diode quality for cells treated with heavier alkalis.
Abstract: We report on the use and effect of the alkali elements rubidium and caesium in the place of sodium and potassium in the alkali post deposition treatment (PDT) as applied to Cu(In,Ga)Se2 (CIGS) solar cell absorbers. In order to study the effects of the different alkali elements, we have produced a large number of CIGS solar cells with high efficiencies, resulting in a good experimental resolution to detect even small differences in performance. We examine the electrical device parameters of these fully functional devices and observe a positive trend in the I–V parameters when moving from devices without PDT to KF-, RbF-, and eventually to CsF-PDT. A diode analysis reveals an improved diode quality for cells treated with heavier alkalis. Furthermore, secondary ion mass spectrometry (SIMS) measurements reveal a competitive mechanism among the alkali elements in the CIGS absorber induced by the alkali post deposition treatment. (© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

1,264 citations


Posted Content
TL;DR: It is shown that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies.
Abstract: Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes. Some recent studies suggest a more important role of image textures. We here put these conflicting hypotheses to a quantitative test by evaluating CNNs and human observers on images with a texture-shape cue conflict. We show that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies. We then demonstrate that the same standard architecture (ResNet-50) that learns a texture-based representation on ImageNet is able to learn a shape-based representation instead when trained on "Stylized-ImageNet", a stylized version of ImageNet. This provides a much better fit for human behavioural performance in our well-controlled psychophysical lab setting (nine experiments totalling 48,560 psychophysical trials across 97 observers) and comes with a number of unexpected emergent benefits such as improved object detection performance and previously unseen robustness towards a wide range of image distortions, highlighting advantages of a shape-based representation.

1,264 citations


Proceedings Article
06 Aug 2017
TL;DR: In this paper, the authors employ simple evolutionary techniques at unprecedented scales to discover models for image classification problems, starting from trivial initial conditions and reaching accuracies of 94.6% and 77.0%, respectively.
Abstract: Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically. Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6% (95.6% for ensemble) and 77.0%, respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements.

1,263 citations
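The tournament-style evolution the paper runs at scale can be caricatured in a few lines. This is a toy sketch on a generic fitness function, not the paper's infrastructure; with a trained-model fitness and architecture mutations, the same loop becomes neuro-evolution:

```python
import random

def evolve(init, mutate, fitness, generations=1000, pop_size=20, seed=0):
    """Toy tournament evolution: repeatedly sample two individuals,
    then overwrite the less fit one with a mutated copy of the fitter one.
    Returns the best individual found in the final population."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        i, j = rng.sample(range(pop_size), 2)
        if fitness(pop[i]) < fitness(pop[j]):
            i, j = j, i                      # i is now the tournament winner
        pop[j] = mutate(pop[i], rng)         # loser replaced by mutated winner
    return max(pop, key=fitness)

# Toy usage: maximize -(x - 3)^2 over the reals.
best = evolve(init=lambda r: r.uniform(-10, 10),
              mutate=lambda x, r: x + r.gauss(0, 0.5),
              fitness=lambda x: -(x - 3) ** 2)
```

No human intervenes once the loop starts, mirroring the paper's emphasis that the output of evolution is a fully-trained model with no hand-tuning.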


Journal ArticleDOI
TL;DR: In this paper, the authors studied data from patients with sepsis and septic shock that were reported to the New York State Department of Health from April 1, 2014, to June 30, 2016.
Abstract:
Background: In 2013, New York began requiring hospitals to follow protocols for the early identification and treatment of sepsis. However, there is controversy about whether more rapid treatment of sepsis improves outcomes in patients.
Methods: We studied data from patients with sepsis and septic shock that were reported to the New York State Department of Health from April 1, 2014, to June 30, 2016. Patients had a sepsis protocol initiated within 6 hours after arrival in the emergency department and had all items in a 3-hour bundle of care for patients with sepsis (i.e., blood cultures, broad-spectrum antibiotic agents, and lactate measurement) completed within 12 hours. Multilevel models were used to assess the associations between the time until completion of the 3-hour bundle and risk-adjusted mortality. We also examined the times to the administration of antibiotics and to the completion of an initial bolus of intravenous fluid.
Results: Among 49,331 patients at 149 hospitals, 40,696 (82.5%) had the 3-hour...

1,263 citations


Proceedings Article
04 Dec 2017
TL;DR: In this paper, a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty was proposed for semantic segmentation and depth regression tasks, which can be interpreted as learned attenuation.
Abstract: There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
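The "learned attenuation" interpretation above has a compact form for regression; a minimal NumPy sketch of the aleatoric loss (following the paper's formulation, with the network predicting a log-variance alongside each output; function and argument names are ours):

```python
import numpy as np

def heteroscedastic_loss(y_true, y_pred, log_var):
    """Aleatoric regression loss with learned attenuation:
    L = mean( 0.5 * exp(-s) * (y - y_hat)^2 + 0.5 * s ),  s = log sigma^2.
    High predicted variance down-weights the residual, making the loss
    robust to noisy labels; the 0.5 * s term penalizes the degenerate
    solution of predicting infinite noise everywhere."""
    sq_err = (y_true - y_pred) ** 2
    return float(np.mean(0.5 * np.exp(-log_var) * sq_err + 0.5 * log_var))
```

Predicting `log_var` rather than the variance itself keeps the loss numerically stable, since the exponential guarantees a positive variance.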

Journal ArticleDOI
TL;DR: An electrocatalyst for hydrogen generation based on very small amounts of cobalt dispersed as individual atoms on nitrogen-doped graphene is reported, which is robust and highly active in aqueous media with very low overpotentials.
Abstract: Reduction of water to hydrogen through electrocatalysis holds great promise for clean energy, but its large-scale application relies on the development of inexpensive and efficient catalysts to replace precious platinum catalysts. Here we report an electrocatalyst for hydrogen generation based on very small amounts of cobalt dispersed as individual atoms on nitrogen-doped graphene. This catalyst is robust and highly active in aqueous media with very low overpotentials (30 mV). A variety of analytical techniques and electrochemical measurements suggest that the catalytically active sites are associated with the metal centres coordinated to nitrogen. This unusual atomic constitution of supported metals is suggestive of a new approach to preparing extremely efficient single-atom catalysts.

Journal ArticleDOI
TL;DR: This review highlights the existing link between oxidative stress and AD, and the consequences towards the Aβ peptide and surrounding molecules in terms of oxidative damage, along with the implication of metal ions in AD.
Abstract: Oxidative stress is known to play an important role in the pathogenesis of a number of diseases. In particular, it is linked to the etiology of Alzheimer's disease (AD), an age-related neurodegenerative disease and the most common cause of dementia in the elderly. Histopathological hallmarks of AD are intracellular neurofibrillary tangles and extracellular formation of senile plaques composed of the amyloid-beta peptide (Aβ) in aggregated form along with metal ions such as copper, iron or zinc. Redox active metal ions, as for example copper, can catalyze the production of Reactive Oxygen Species (ROS) when bound to the amyloid-β (Aβ). The ROS thus produced, in particular the hydroxyl radical which is the most reactive one, may contribute to oxidative damage on both the Aβ peptide itself and on surrounding molecules (proteins, lipids, …). This review highlights the existing link between oxidative stress and AD, and the consequences towards the Aβ peptide and surrounding molecules in terms of oxidative damage. In addition, the implication of metal ions in AD, their interaction with the Aβ peptide and redox properties leading to ROS production are discussed, along with both in vitro and in vivo oxidation of the Aβ peptide, at the molecular level.

Journal ArticleDOI
TL;DR: A Bell test is reported that simultaneously closes the most significant loopholes available to local realist explanations of quantum mechanics, using a well-optimized source of entangled photons, rapid setting generation, and highly efficient superconducting detectors.
Abstract: Local realism is the worldview in which physical properties of objects exist independently of measurement and where physical influences cannot travel faster than the speed of light. Bell's theorem states that this worldview is incompatible with the predictions of quantum mechanics, as is expressed in Bell's inequalities. Previous experiments convincingly supported the quantum predictions. Yet, every experiment requires assumptions that provide loopholes for a local realist explanation. Here, we report a Bell test that closes the most significant of these loopholes simultaneously. Using a well-optimized source of entangled photons, rapid setting generation, and highly efficient superconducting detectors, we observe a violation of a Bell inequality with high statistical significance. The purely statistical probability of our results to occur under local realism does not exceed 3.74×10^{-31}, corresponding to an 11.5 standard deviation effect.
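For context, the inequality being violated here is typically the CHSH form of Bell's inequality: local realism bounds |S| ≤ 2, while quantum mechanics reaches 2√2. A quick numerical check using the singlet-state polarization correlation E(a, b) = −cos 2(a − b) (a standard textbook convention; sign conventions vary between setups):

```python
import math

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
    with the singlet-state polarization correlation E = -cos 2(x - y)
    for analyzer angles x and y."""
    E = lambda x, y: -math.cos(2 * (x - y))
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Optimal angles give |S| = 2*sqrt(2) ~ 2.83, beyond the local bound of 2.
S = chsh(0, math.pi / 4, math.pi / 8, 3 * math.pi / 8)
```

Any experimentally measured |S| above 2 with sufficient statistical significance, as in this experiment, rules out local realist models (up to the loopholes the experiment closes).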

Posted Content
TL;DR: A new release of the MOTChallenge benchmark, which focuses on multiple people tracking, not only offers a significant increase in the number of labeled boxes but also provides multiple object classes besides pedestrians and the level of visibility for every single object of interest.
Abstract: Standardized benchmarks are crucial for the majority of computer vision applications. Although leaderboards and ranking tables should not be over-claimed, benchmarks often provide the most objective measure of performance and are therefore important guides for research. Recently, a new benchmark for Multiple Object Tracking, MOTChallenge, was launched with the goal of collecting existing and new data and creating a framework for the standardized evaluation of multiple object tracking methods. The first release of the benchmark focuses on multiple people tracking, since pedestrians are by far the most studied object in the tracking community. This paper accompanies a new release of the MOTChallenge benchmark. Unlike the initial release, all videos of MOT16 have been carefully annotated following a consistent protocol. Moreover, it not only offers a significant increase in the number of labeled boxes, but also provides multiple object classes besides pedestrians and the level of visibility for every single object of interest.

Journal ArticleDOI
01 Apr 2018
TL;DR: A new epigenetic biomarker of aging, DNAm PhenoAge, is developed that strongly outperforms previous measures in predicting a variety of aging outcomes, including all-cause mortality, cancers, healthspan, physical functioning, and Alzheimer's disease.
Abstract: Identifying reliable biomarkers of aging is a major goal in geroscience. While the first generation of epigenetic biomarkers of aging were developed using chronological age as a surrogate for biological age, we hypothesized that incorporation of composite clinical measures of phenotypic age that capture differences in lifespan and healthspan may identify novel CpGs and facilitate the development of a more powerful epigenetic biomarker of aging. Using an innovative two-step process, we develop a new epigenetic biomarker of aging, DNAm PhenoAge, that strongly outperforms previous measures in predicting a variety of aging outcomes, including all-cause mortality, cancers, healthspan, physical functioning, and Alzheimer's disease. While this biomarker was developed using data from whole blood, it correlates strongly with age in every tissue and cell tested. Based on an in-depth transcriptional analysis in sorted cells, we find that increased epigenetic age, relative to chronological age, is associated with increased activation of pro-inflammatory and interferon pathways, and decreased activation of transcriptional/translational machinery, DNA damage response, and mitochondrial signatures. Overall, this single epigenetic biomarker of aging is able to capture risks for an array of diverse outcomes across multiple tissues and cells, and provide insight into important pathways in aging.

Journal ArticleDOI
27 Jun 2017-JAMA
TL;DR: To estimate the recent prevalence and to investigate the ethnic variation of diabetes and prediabetes in the Chinese adult population, a nationally representative cross-sectional survey in 2013 in mainland China was conducted.
Abstract:
Importance: Previous studies have shown increasing prevalence of diabetes in China, which now has the world's largest diabetes epidemic.
Objectives: To estimate the recent prevalence and to investigate the ethnic variation of diabetes and prediabetes in the Chinese adult population.
Design, Setting, and Participants: A nationally representative cross-sectional survey in 2013 in mainland China, which consisted of 170,287 participants.
Exposures: Fasting plasma glucose and hemoglobin A1c levels were measured for all participants. A 2-hour oral glucose tolerance test was conducted for all participants without diagnosed diabetes.
Main Outcomes and Measures: Primary outcomes were total diabetes and prediabetes defined according to the 2010 American Diabetes Association criteria. Awareness and treatment were also evaluated. Hemoglobin A1c concentration of less than 7.0% among treated diabetes patients was considered adequate glycemic control. Minority ethnic groups in China with at least 1000 participants (Tibetan, Zhuang, Manchu, Uyghur, and Muslim) were compared with Han participants.
Results: Among the Chinese adult population, the estimated standardized prevalence of total diagnosed and undiagnosed diabetes was 10.9% (95% CI, 10.4%-11.5%); that of diagnosed diabetes, 4.0% (95% CI, 3.6%-4.3%); and that of prediabetes, 35.7% (95% CI, 34.1%-37.4%). Among persons with diabetes, 36.5% (95% CI, 34.3%-38.6%) were aware of their diagnosis and 32.2% (95% CI, 30.1%-34.2%) were treated; 49.2% (95% CI, 46.9%-51.5%) of patients treated had adequate glycemic control. Tibetan and Muslim Chinese had significantly lower crude prevalence of diabetes than Han participants (14.7% [95% CI, 14.6%-14.9%] for Han, 4.3% [95% CI, 3.5%-5.0%] for Tibetan, and 10.6% [95% CI, 9.3%-11.9%] for Muslim).
Conclusions and Relevance: Among adults in China, the estimated overall prevalence of diabetes was 10.9%, and that for prediabetes was 35.7%.
Differences from previous estimates for 2010 may be due to an alternate method of measuring hemoglobin A 1c .

Journal ArticleDOI
TL;DR: ASTRAL-III is a faster version of the ASTRAL method for phylogenetic reconstruction and can scale up to 10,000 species and removes low support branches from gene trees, resulting in improved accuracy.
Abstract: Evolutionary histories can be discordant across the genome, and such discordances need to be considered in reconstructing the species phylogeny. ASTRAL is one of the leading methods for inferring species trees from gene trees while accounting for gene tree discordance. ASTRAL uses dynamic programming to search for the tree that shares the maximum number of quartet topologies with input gene trees, restricting itself to a predefined set of bipartitions. We introduce ASTRAL-III, which substantially improves the running time of ASTRAL-II and guarantees polynomial running time as a function of both the number of species (n) and the number of genes (k). ASTRAL-III limits the bipartition constraint set (X) to grow at most linearly with n and k. Moreover, it handles polytomies more efficiently than ASTRAL-II, exploits similarities between gene trees better, and uses several techniques to avoid searching parts of the search space that are mathematically guaranteed not to include the optimal tree. The asymptotic running time of ASTRAL-III in the presence of polytomies is $O\left ((nk)^{1.726} D \right)$ where D=O(nk) is the sum of degrees of all unique nodes in input trees. The running time improvements enable us to test whether contracting low support branches in gene trees improves the accuracy by reducing noise. In extensive simulations, we show that removing branches with very low support (e.g., below 10%) improves accuracy while overly aggressive filtering is harmful. We observe on a biological avian phylogenomic dataset of 14K genes that contracting low support branches greatly improves results. ASTRAL-III is a faster version of the ASTRAL method for phylogenetic reconstruction and can scale up to 10,000 species. With ASTRAL-III, low support branches can be removed, resulting in improved accuracy.

Proceedings ArticleDOI
15 Jun 2019
TL;DR: In this paper, an implicit field is used to assign a value to each point in 3D space, so that a shape can be extracted as an iso-surface, and a binary classifier is trained to perform this assignment.
Abstract: We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder, called IM-NET, for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. IM-NET is trained to perform this assignment by means of a binary classifier. Specifically, it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value which indicates whether the point is outside the shape or not. By replacing conventional decoders by our implicit decoder for representation learning (via IM-AE) and shape generation (via IM-GAN), we demonstrate superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality. Code and supplementary material are available at https://github.com/czq142857/implicit-decoder.
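The decoder's contract is simple enough to sketch: a plain MLP over a point coordinate concatenated with a shape code, squashed to an inside/outside probability. The following is a toy NumPy version under assumed layer shapes, not the released IM-NET code:

```python
import numpy as np

def implicit_decoder(point, shape_code, params):
    """Map (3-D point, shape feature vector) -> occupancy value in (0, 1).

    `params` is a list of (W, b) layer weights: ReLU hidden layers and a
    sigmoid output. A mesh is recovered by evaluating the decoder on a
    grid of points and extracting the 0.5 iso-surface (e.g. with
    marching cubes)."""
    x = np.concatenate([point, shape_code])
    for W, b in params[:-1]:
        x = np.maximum(0.0, W @ x + b)       # hidden layers
    W, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))  # inside/outside probability
```

Because the field is defined at every continuous coordinate, the output resolution is decoupled from the network, which is what gives implicit decoders their visual-quality advantage over fixed voxel grids.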

Posted Content
TL;DR: This paper presents a novel co-attention model for VQA that jointly reasons about image and question attention in a hierarchical fashion via novel 1-dimensional convolutional neural networks (CNNs).
Abstract: A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via novel 1-dimensional convolutional neural networks (CNNs). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.

Journal ArticleDOI
TL;DR: The history of the development of fluorescent sensors, often referred to as chemosensors, is introduced, and some pioneering and representative works from about 40 groups around the world that have made substantial contributions to this field are highlighted.
Abstract: Fluorescent chemosensors for ions and neutral analytes have been widely applied in many diverse fields such as biology, physiology, pharmacology, and environmental sciences. The field of fluorescent chemosensors has been in existence for about 150 years. In this time, a large range of fluorescent chemosensors have been established for the detection of biologically and/or environmentally important species. Despite the progress made in this field, several problems and challenges still exist. This tutorial review introduces the history and provides a general overview of the development of research on fluorescent sensors, often referred to as chemosensors. This will be achieved by highlighting some pioneering and representative works from about 40 groups in the world that have made substantial contributions to this field. The basic principles involved in the design of chemosensors for specific analytes, problems and challenges in the field, as well as possible future research directions are covered. The outlook for the application of chemosensors in various established and emerging biotechnologies is very bright.

Journal ArticleDOI
TL;DR: Gold(I) complexes selectively activate π-bonds of alkenes in complex molecular settings, which has been attributed to relativistic effects as discussed by the authors, and are the most effective catalysts for the electrophilic activation of alkynes under homogeneous conditions.
Abstract: 1.1. General Reactivity of Alkyne-Gold(I) Complexes For centuries, gold had been considered a precious, purely decorative inert metal. It was not until 1986 that Ito and Hayashi described the first application of gold(I) in homogeneous catalysis.1 More than one decade later, the first examples of gold(I) activation of alkynes were reported by Teles2 and Tanaka,3 revealing the potential of gold(I) in organic synthesis. Now, gold(I) complexes are the most effective catalysts for the electrophilic activation of alkynes under homogeneous conditions, and a broad range of versatile synthetic tools have been developed for the construction of carbon–carbon or carbon–heteroatom bonds. Gold(I) complexes selectively activate π-bonds of alkynes in complex molecular settings,4−10 which has been attributed to relativistic effects.11−13 In general, no other electrophilic late transition metal shows the breadth of synthetic applications of homogeneous gold(I) catalysts, although on occasion less Lewis acidic Pt(II) or Ag(I) complexes can be used as an alternative,9,10,14,15 particularly in the context of the activation of alkenes.16,17 Highly electrophilic Ga(III)18−22 and In(III)23,24 salts can also be used as catalysts, although often higher catalyst loadings are required. In general, the nucleophilic Markovnikov attack on η2-[AuL]+-activated alkynes 1 forms trans-alkenyl-gold complexes 2 as intermediates (Scheme 1).4,5a,9,10,12,25−29 This activation mode also occurs in gold-catalyzed cycloisomerizations of 1,n-enynes and in hydroarylation reactions, in which the alkene or the arene act as the nucleophile. Scheme 1: Anti-Nucleophilic Attack on η2-[AuL]+-Activated Alkynes

Posted Content
TL;DR: It is found that transfer learning using sentence embeddings tends to outperform word level transfer with surprisingly good performance with minimal amounts of supervised training data for a transfer task.
Abstract: We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate performance on diverse transfer tasks. Two variants of the encoding models allow for trade-offs between accuracy and compute resources. For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance. Comparisons are made with baselines that use word level transfer learning via pretrained word embeddings, as well as baselines that do not use any transfer learning. We find that transfer learning using sentence embeddings tends to outperform word level transfer. With transfer learning via sentence embeddings, we observe surprisingly good performance with minimal amounts of supervised training data for a transfer task. We obtain encouraging results on Word Embedding Association Tests (WEAT) targeted at detecting model bias. Our pre-trained sentence encoding models are made freely available for download and on TF Hub.
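The few-shot transfer setting described above can be sketched with a toy example: a handful of labeled sentences per class, a sentence encoder, and a nearest-centroid classifier over cosine similarity. The `embed()` stub below is a hypothetical bag-of-words stand-in for a pretrained encoder (the paper's models are neural sentence encoders); only the minimal-data classification logic mirrors the described setup.

```python
import numpy as np

VOCAB: dict = {}   # illustrative vocabulary index, built on the fly
DIM = 16           # must exceed the toy vocabulary size

def embed(sentence):
    """Map a sentence to an L2-normalised vector. This is a hypothetical
    bag-of-words stub; a real pretrained sentence encoder would place
    semantically similar sentences near each other."""
    v = np.zeros(DIM)
    for tok in sentence.lower().split():
        v[VOCAB.setdefault(tok, len(VOCAB))] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def nearest_centroid(train, query):
    """Few-shot transfer: average the embeddings per label, then pick the
    label whose centroid has the highest cosine similarity to the query."""
    centroids = {}
    for label, sents in train.items():
        c = np.mean([embed(s) for s in sents], axis=0)
        centroids[label] = c / np.linalg.norm(c)
    q = embed(query)
    return max(centroids, key=lambda lab: float(centroids[lab] @ q))

train = {
    "pos": ["great movie", "loved this great film"],
    "neg": ["terrible movie", "hated this terrible film"],
}
print(nearest_centroid(train, "a great film"))  # "pos" with this toy data
```

With a strong pretrained encoder in place of the stub, the same two-examples-per-class setup is the kind of minimal-supervision transfer the abstract reports.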

Proceedings ArticleDOI
Kaiming He1, Jian Sun1
07 Jun 2015
TL;DR: This paper investigates the accuracy of CNNs under constrained time cost, and presents an architecture that achieves very competitive accuracy in the ImageNet dataset, yet is 20% faster than “AlexNet” [14] (16.0% top-5 error, 10-view test).
Abstract: Though recent advanced convolutional neural networks (CNNs) have been improving the image recognition accuracy, the models are getting more complex and time-consuming. For real-world applications in industrial and commercial scenarios, engineers and developers often face a constrained time budget. In this paper, we investigate the accuracy of CNNs under constrained time cost. Under this constraint, network architecture design becomes a trade-off among factors such as depth, number of filters, and filter size. With a series of controlled comparisons, we progressively modify a baseline model while preserving its time complexity. This is also helpful for understanding the importance of the factors in network designs. We present an architecture that achieves very competitive accuracy on the ImageNet dataset (11.8% top-5 error, 10-view test), yet is 20% faster than “AlexNet” [14] (16.0% top-5 error, 10-view test).
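The time-cost constraint in such controlled comparisons can be made concrete by counting multiply-accumulate operations per convolutional layer, roughly input channels × kernel size² × output channels × output map size². The sketch below uses hypothetical layer shapes; it illustrates the kind of depth-versus-filter-size trade-off the paper studies, not its actual architectures.

```python
def conv_cost(layers):
    """Approximate time complexity of a stack of conv layers, counted as
    multiply-accumulates: sum over layers of n_prev * s^2 * n * m^2,
    where n_prev = input channels, s = kernel size, n = output channels,
    m = output feature-map side. Constants differ per implementation."""
    total = 0
    for n_prev, s, n, m in layers:
        total += n_prev * s * s * n * m * m
    return total

# Hypothetical shapes: one 5x5 layer versus two stacked 3x3 layers of the
# same width. The deeper 3x3 stack is cheaper at equal channel counts,
# one reason depth can be traded for filter size under a fixed budget.
one_5x5 = [(64, 5, 64, 32)]
two_3x3 = [(64, 3, 64, 32), (64, 3, 64, 32)]
print(conv_cost(one_5x5), conv_cost(two_3x3))  # 104857600 75497472
```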

01 Jan 2016
TL;DR: In this article, the author points out that reports of nesting success that do not take into account the time span of observation for each nest usually understate losses, and sometimes the error can be very large.
Abstract: Reports of nesting success that do not take into account the time span of observation for each nest usually understate losses, and sometimes the error can be very large. More than a decade ago I pointed out this problem and proposed a way of dealing with it (Mayfield 1960:192-204; 1961). Since that time many field students have used the method, and it has proved especially helpful in combining fragments of data from many sources, as in the North American Nest-record Program at Cornell University. However, not every published report shows awareness of the problem, and letters of inquiry have shown that some people are deterred from dealing with it because of difficulty with details. Therefore, I offer these further suggestions to simplify the procedure as much as possible.
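The core of the Mayfield method is simple to compute: the daily survival rate is estimated as one minus the number of failures divided by the total nest-days of observation (exposure), and overall nesting success is that rate raised to the length of the nesting period. The sketch below uses hypothetical numbers, not data from the paper.

```python
def mayfield_success(exposure_days, losses, nest_period_days):
    """Mayfield estimator of nesting success: daily survival probability
    is 1 - (losses / exposure-days observed); overall success is that
    daily rate compounded over the full nesting period."""
    daily_survival = 1.0 - losses / exposure_days
    return daily_survival ** nest_period_days

# Hypothetical example: 500 total exposure-days across all nests,
# 25 failures, and a 25-day nesting period.
p = mayfield_success(500, 25, 25)
print(round(p, 3))  # 0.277
```

Note how a naive "apparent success" count over the same nests would overstate success, since nests that failed before being found never enter the sample; weighting by exposure-days is exactly what corrects this.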

Proceedings Article
17 Jan 2017
TL;DR: In this article, the authors make theoretical steps towards fully understanding the training dynamics of GANs and perform targeted experiments to verify their assumptions, illustrate their claims, and quantify the phenomena.
Abstract: The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first section introduces the problem at hand. The second section is dedicated to rigorously studying and proving the problems, including instability and saturation, that arise when training generative adversarial networks. The third section examines a practical and theoretically grounded direction towards solving these problems, while introducing new tools to study them.

Posted Content
TL;DR: In this paper, a new image density model based on the PixelCNN architecture is proposed for conditional image generation, which can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks.
Abstract: This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.
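The conditioning mechanism can be illustrated with the gated activation unit this line of work popularised: the layer output is a tanh "feature" path multiplied element-wise by a sigmoid "gate", with a projection of the conditioning vector h added to both paths. Below is a minimal numpy sketch; the shapes and weights are hypothetical, and real layers apply this to full 2-D feature maps produced by masked convolutions.

```python
import numpy as np

def gated_activation(x_f, x_g, h, V_f, V_g):
    """Gated activation: y = tanh(x_f + V_f h) * sigmoid(x_g + V_g h).
    x_f and x_g are the two halves of the layer's pre-activation;
    h is the conditioning vector (e.g. a class-label embedding)."""
    a = np.tanh(x_f + V_f @ h)                   # "feature" half
    g = 1.0 / (1.0 + np.exp(-(x_g + V_g @ h)))   # "gate" half
    return a * g                                 # element-wise gating

rng = np.random.default_rng(0)
C, H_DIM = 4, 3                                  # channels, conditioning size
x_f, x_g = rng.normal(size=C), rng.normal(size=C)
h = rng.normal(size=H_DIM)
V_f, V_g = rng.normal(size=(C, H_DIM)), rng.normal(size=(C, H_DIM))
y = gated_activation(x_f, x_g, h, V_f, V_g)
print(y.shape)  # (4,)
```

Because tanh is bounded in (-1, 1) and the sigmoid gate in (0, 1), the conditioned output stays bounded regardless of the conditioning vector, which makes this a convenient way to inject labels or embeddings at every layer.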

Journal ArticleDOI
TL;DR: The findings continue to support the importance of at least 60 min/day of MVPA for disease prevention and health promotion in children and youth, but also highlight the potential benefits of LPA and total PA.
Abstract: Moderate-to-vigorous physical activity (MVPA) is essential for disease prevention and health promotion. Emerging evidence suggests other intensities of physical activity (PA), including light-intensity activity (LPA), may also be important, but there has been no rigorous evaluation of the evidence. The purpose of this systematic review was to examine the relationships between objectively measured PA (total and all intensities) and health indicators in school-aged children and youth. Online databases were searched for peer-reviewed studies that met the a priori inclusion criteria: population (apparently healthy, aged 5–17 years), intervention/exposure/comparator (volumes, durations, frequencies, intensities, and patterns of objectively measured PA), and outcome (body composition, cardiometabolic biomarkers, physical fitness, behavioural conduct/pro-social behaviour, cognition/academic achievement, quality of life/well-being, harms, bone health, motor skill development, psychological distress, self-esteem)....

Journal ArticleDOI
Antonio F. Pardiñas1, Peter Holmans1, Andrew Pocklington1, Valentina Escott-Price1, Stephan Ripke2, Stephan Ripke3, Noa Carrera1, Sophie E. Legge1, Sophie Bishop1, D. F. Cameron1, Marian L. Hamshere1, Jun Han1, Leon Hubbard1, Amy Lynham1, Kiran Kumar Mantripragada1, Elliott Rees1, James H. MacCabe4, Steven A. McCarroll5, Bernhard T. Baune6, Gerome Breen7, Gerome Breen4, Enda M. Byrne8, Udo Dannlowski9, Thalia C. Eley4, Caroline Hayward10, Nicholas G. Martin11, Nicholas G. Martin8, Andrew M. McIntosh10, Robert Plomin4, David J. Porteous10, Naomi R. Wray8, Armando Caballero12, Daniel H. Geschwind13, Laura M. Huckins14, Douglas M. Ruderfer14, Enrique Santiago15, Pamela Sklar14, Eli A. Stahl14, Hyejung Won13, Esben Agerbo16, Esben Agerbo17, Thomas Damm Als17, Thomas Damm Als16, Ole A. Andreassen18, Ole A. Andreassen19, Marie Bækvad-Hansen16, Marie Bækvad-Hansen20, Preben Bo Mortensen17, Preben Bo Mortensen16, Carsten Bøcker Pedersen17, Carsten Bøcker Pedersen16, Anders D. Børglum16, Anders D. Børglum17, Jonas Bybjerg-Grauholm16, Jonas Bybjerg-Grauholm20, Srdjan Djurovic19, Srdjan Djurovic21, Naser Durmishi, Marianne Giørtz Pedersen17, Marianne Giørtz Pedersen16, Vera Golimbet, Jakob Grove, David M. Hougaard20, David M. Hougaard16, Manuel Mattheisen16, Manuel Mattheisen17, Espen Molden, Ole Mors16, Ole Mors22, Merete Nordentoft16, Merete Nordentoft23, Milica Pejovic-Milovancevic24, Engilbert Sigurdsson, Teimuraz Silagadze25, Christine Søholm Hansen16, Christine Søholm Hansen20, Kari Stefansson26, Hreinn Stefansson26, Stacy Steinberg26, Sarah Tosato27, Thomas Werge16, Thomas Werge28, Thomas Werge23, David A. Collier29, David A. Collier4, Dan Rujescu30, Dan Rujescu31, George Kirov1, Michael J. Owen1, Michael Conlon O'Donovan1, James T.R. Walters1 
TL;DR: A new genome-wide association study of schizophrenia is reported, and through meta-analysis with existing data and integration of genomic fine-mapping with brain expression and chromosome conformation data, 50 novel associated loci are identified, for 145 loci in total.
Abstract: Schizophrenia is a debilitating psychiatric condition often associated with poor quality of life and decreased life expectancy. Lack of progress in improving treatment outcomes has been attributed to limited knowledge of the underlying biology, although large-scale genomic studies have begun to provide insights. We report a new genome-wide association study of schizophrenia (11,260 cases and 24,542 controls), and through meta-analysis with existing data we identify 50 novel associated loci and 145 loci in total. Through integrating genomic fine-mapping with brain expression and chromosome conformation data, we identify candidate causal genes within 33 loci. We also show for the first time that the common variant association signal is highly enriched among genes that are under strong selective pressures. These findings provide new insights into the biology and genetic architecture of schizophrenia, highlight the importance of mutation-intolerant genes and suggest a mechanism by which common risk variants persist in the population.

Journal ArticleDOI
TL;DR: In this article, the authors discuss the threat posed by today's social bots, whose presence can endanger online ecosystems as well as our society, and how to deal with them.
Abstract: Today's social bots are sophisticated and sometimes menacing. Indeed, their presence can endanger online ecosystems as well as our society.