Journal ArticleDOI
TL;DR: The role of ML as an effective approach for solving problems in geosciences and remote sensing will be highlighted, and unique features of some of the ML techniques will be outlined with specific attention to the genetic programming paradigm.
Abstract: Learning incorporates a broad range of complex procedures. Machine learning (ML) is a subdivision of artificial intelligence based on the biological learning process. The ML approach deals with the design of algorithms to learn from machine readable data. ML covers main domains such as data mining, difficult-to-program applications, and software applications. It is a collection of a variety of algorithms (e.g. neural networks, support vector machines, self-organizing map, decision trees, random forests, case-based reasoning, genetic programming, etc.) that can provide multivariate, nonlinear, nonparametric regression or classification. The modeling capabilities of the ML-based methods have resulted in their extensive applications in science and engineering. Herein, the role of ML as an effective approach for solving problems in geosciences and remote sensing will be highlighted. The unique features of some of the ML techniques will be outlined, with specific attention to the genetic programming paradigm. Furthermore, nonparametric regression and classification illustrative examples are presented to demonstrate the efficiency of ML for tackling geosciences and remote sensing problems.

701 citations
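To make the kind of nonparametric, nonlinear regression described above concrete, here is a minimal, hedged Python sketch using a random forest (one of the ML algorithms the abstract lists) on synthetic stand-in data; the features, target function, and use of scikit-learn are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: nonparametric ML regression of the kind the abstract discusses,
# using a random forest on synthetic stand-in data (hypothetical features imitating,
# e.g., spectral bands predicting a geophysical variable).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(1000, 5))                              # e.g. 5 "spectral bands"
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=1000)   # nonlinear target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```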


Journal ArticleDOI
TL;DR: The ERS guidelines for the management of adult bronchiectasis describe the appropriate investigation and treatment strategies determined by a systematic review of the literature, using the GRADE approach to define the quality of the evidence and the level of recommendations.
Abstract: Bronchiectasis in adults is a chronic disorder associated with poor quality of life and frequent exacerbations in many patients. There have been no previous international guidelines. The European Respiratory Society guidelines for the management of adult bronchiectasis describe the appropriate investigation and treatment strategies determined by a systematic review of the literature. A multidisciplinary group representing respiratory medicine, microbiology, physiotherapy, thoracic surgery, primary care, methodology and patients considered the most relevant clinical questions (for both clinicians and patients) related to management of bronchiectasis. Nine key clinical questions were generated and a systematic review was conducted to identify published systematic reviews, randomised clinical trials and observational studies that answered these questions. We used the GRADE approach to define the quality of the evidence and the level of recommendations. The resulting guideline addresses the investigation of underlying causes of bronchiectasis, treatment of exacerbations, pathogen eradication, long term antibiotic treatment, anti-inflammatories, mucoactive drugs, bronchodilators, surgical treatment and respiratory physiotherapy. These recommendations can be used to benchmark quality of care for people with bronchiectasis across Europe and to improve outcomes.

701 citations



Journal ArticleDOI
TL;DR: In this paper, the authors developed a robust and sensitive sampling and analytical method with double-shot pyrolysis-gas chromatography/mass spectrometry and applied it to measure plastic particles ≥700 nm in human whole blood from 22 healthy volunteers.

701 citations


Journal ArticleDOI
TL;DR: A regularized unsupervised optimal transportation model that aligns the representations of the source and target domains, consistently outperforms state-of-the-art approaches, and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.
Abstract: Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted with data depicting the same classes, but described by another observation system. Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular since it allows training a unique classifier effective in all domains. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, which consistently outperforms state-of-the-art approaches. In addition, numerical experiments show that our approach leads to better performance on domain-invariant deep learning features and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.

701 citations
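The core mechanism described in the abstract above, aligning source and target representations with a regularized optimal transport plan, can be sketched as follows. This minimal numpy version uses entropic (Sinkhorn) regularization and a barycentric mapping of the source samples; it omits the class-based regularizer the paper adds, and all data and names are illustrative assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.05, n_iter=200):
    """Entropic-regularized OT plan between weight vectors a, b for cost matrix C."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]              # transport plan (coupling matrix)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(50, 2))             # source samples
Xt = rng.normal(2.0, 1.0, size=(60, 2))             # shifted target samples
a = np.full(len(Xs), 1.0 / len(Xs))                 # uniform empirical weights
b = np.full(len(Xt), 1.0 / len(Xt))
C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)
C = C / C.max()                                     # normalize cost so exp(-C/reg) stays well scaled

G = sinkhorn(a, b, C)                               # regularized transport plan
Xs_mapped = (G @ Xt) / G.sum(axis=1, keepdims=True) # barycentric mapping into the target domain
# A classifier trained on (Xs_mapped, source labels) can then be applied to target samples.
```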


Journal ArticleDOI
13 Oct 2016-Nature
TL;DR: This work determines genome-wide mutation patterns in ASCs of the small intestine, colon and liver of human donors with ages ranging from 3 to 87 years by sequencing clonal organoid cultures derived from primary multipotent cells.
Abstract: The gradual accumulation of genetic mutations in human adult stem cells (ASCs) during life is associated with various age-related diseases, including cancer. Extreme variation in cancer risk across tissues was recently proposed to depend on the lifetime number of ASC divisions, owing to unavoidable random mutations that arise during DNA replication. However, the rates and patterns of mutations in normal ASCs remain unknown. Here we determine genome-wide mutation patterns in ASCs of the small intestine, colon and liver of human donors with ages ranging from 3 to 87 years by sequencing clonal organoid cultures derived from primary multipotent cells. Our results show that mutations accumulate steadily over time in all of the assessed tissue types, at a rate of approximately 40 novel mutations per year, despite the large variation in cancer incidence among these tissues. Liver ASCs, however, have different mutation spectra compared to those of the colon and small intestine. Mutational signature analysis reveals that this difference can be attributed to spontaneous deamination of methylated cytosine residues in the colon and small intestine, probably reflecting their high ASC division rate. In liver, a signature with an as-yet-unknown underlying mechanism is predominant. Mutation spectra of driver genes in cancer show high similarity to the tissue-specific ASC mutation spectra, suggesting that intrinsic mutational processes in ASCs can initiate tumorigenesis. Notably, the inter-individual variation in mutation rate and spectra are low, suggesting tissue-specific activity of common mutational processes throughout life.

700 citations


Journal ArticleDOI
TL;DR: In this paper, two different classes of symmetry-protected nodal lines, in the absence and in the presence of spin-orbital coupling (SOC) respectively, are studied. In the first class, unlike previously studied nodal lines in the same symmetry class, each nodal line carries a Z2 monopole charge and can only be created (annihilated) in pairs.
Abstract: We theoretically study three-dimensional topological semimetals (TSMs) with nodal lines protected by crystalline symmetries. Compared to TSMs with point nodes, e.g., Weyl semimetals and Dirac semimetals, where the conduction and the valence bands touch at discrete points, in these TSMs the two bands cross at closed lines in the Brillouin zone. We propose two different classes of symmetry protected nodal lines in the absence and in the presence of spin-orbital coupling (SOC), respectively. In the former, we discuss nodal lines that are protected by a combination of inversion symmetry and time-reversal symmetry, yet, unlike previously studied nodal lines in the same symmetry class, each nodal line has a ${Z}_{2}$ monopole charge and can only be created (annihilated) in pairs. In the second class, with SOC, we show that a nonsymmorphic symmetry (screw axis) protects a four-band crossing nodal line in systems having both inversion and time-reversal symmetries.

700 citations


Journal ArticleDOI
18 Feb 2015-BMJ
TL;DR: Multidisciplinary biopsychosocial rehabilitation interventions were more effective than usual care and physical treatments in decreasing pain and disability in people with chronic low back pain; for work outcomes, multidisciplinary rehabilitation seems to be more effective than physical treatment but not more effective than usual care.
Abstract: Objective To assess the long term effects of multidisciplinary biopsychosocial rehabilitation for patients with chronic low back pain. Design Systematic review and random effects meta-analysis of randomised controlled trials. Data sources Electronic searches of Cochrane Back Review Group Trials Register, CENTRAL, Medline, Embase, PsycINFO, and CINAHL databases up to February 2014, supplemented by hand searching of reference lists and forward citation tracking of included trials. Study selection criteria Trials published in full; participants with low back pain for more than three months; multidisciplinary rehabilitation involved a physical component and one or both of a psychological component or a social or work targeted component; multidisciplinary rehabilitation was delivered by healthcare professionals from at least two different professional backgrounds; multidisciplinary rehabilitation was compared with a non-multidisciplinary intervention. Results Forty one trials included a total of 6858 participants with a mean duration of pain of more than one year who often had failed previous treatment. Sixteen trials provided moderate quality evidence that multidisciplinary rehabilitation decreased pain (standardised mean difference 0.21, 95% confidence interval 0.04 to 0.37; equivalent to 0.5 points in a 10 point pain scale) and disability (0.23, 0.06 to 0.40; equivalent to 1.5 points in a 24 point Roland-Morris index) compared with usual care. Nineteen trials provided low quality evidence that multidisciplinary rehabilitation decreased pain (standardised mean difference 0.51, −0.01 to 1.04) and disability (0.68, 0.16 to 1.19) compared with physical treatments, but significant statistical heterogeneity across trials was present. Eight trials provided moderate quality evidence that multidisciplinary rehabilitation improves the odds of being at work one year after intervention (odds ratio 1.87, 95% confidence interval 1.39 to 2.53) compared with physical treatments. Seven trials provided moderate quality evidence that multidisciplinary rehabilitation does not improve the odds of being at work (odds ratio 1.04, 0.73 to 1.47) compared with usual care. Two trials that compared multidisciplinary rehabilitation with surgery found little difference in outcomes and an increased risk of adverse events with surgery. Conclusions Multidisciplinary biopsychosocial rehabilitation interventions were more effective than usual care (moderate quality evidence) and physical treatments (low quality evidence) in decreasing pain and disability in people with chronic low back pain. For work outcomes, multidisciplinary rehabilitation seems to be more effective than physical treatment but not more effective than usual care.

700 citations


Journal ArticleDOI
TL;DR: This paper used unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity, which contains information about biological properties in its representations.
Abstract: In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction.

700 citations
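As a rough, toy-scale illustration of the masked-language-modeling setup the abstract describes (and not the authors' model, scale, or training data), the sketch below shows how amino-acid tokens, a small Transformer encoder, and a masked-token prediction loss fit together; all sizes, sequences, and names are assumptions, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

# Toy vocabulary over the 20 standard amino acids, plus padding and mask tokens.
AA = "ACDEFGHIKLMNPQRSTVWY"
vocab = {a: i + 2 for i, a in enumerate(AA)}   # 0 = pad, 1 = mask
PAD, MASK = 0, 1

class TinyProteinLM(nn.Module):
    def __init__(self, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(len(vocab) + 2, d_model, padding_idx=PAD)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, len(vocab) + 2)

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))   # contextual representation for each residue
        return self.head(h), h                 # masked-token logits, plus features for downstream use

seqs = ["MKTAYIAKQR", "GAVLIPFWMC"]            # toy sequences, not real proteins
tokens = torch.tensor([[vocab[a] for a in s] for s in seqs])
masked = tokens.clone()
masked[:, 3] = MASK                            # hide one position per sequence
logits, reprs = TinyProteinLM()(masked)
loss = nn.functional.cross_entropy(logits[:, 3, :], tokens[:, 3])   # predict the hidden residue
```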


Journal ArticleDOI
TL;DR: Ramucirumab plus FOLFIRI significantly improved overall survival compared with placebo plus FOLFIRI as second-line treatment for patients with metastatic colorectal carcinoma, and toxic effects were manageable.
Abstract: Summary Background Angiogenesis is an important therapeutic target in colorectal carcinoma. Ramucirumab is a human IgG-1 monoclonal antibody that targets the extracellular domain of VEGF receptor 2. We assessed the efficacy and safety of ramucirumab versus placebo in combination with second-line FOLFIRI (leucovorin, fluorouracil, and irinotecan) for metastatic colorectal cancer in patients with disease progression during or after first-line therapy with bevacizumab, oxaliplatin, and a fluoropyrimidine. Methods Between Dec 14, 2010, and Aug 23, 2013, we enrolled patients into the multicentre, randomised, double-blind, phase 3 RAISE trial. Eligible patients had disease progression during or within 6 months of the last dose of first-line therapy. Patients were randomised (1:1) via a centralised, interactive voice-response system to receive 8 mg/kg intravenous ramucirumab plus FOLFIRI or matching placebo plus FOLFIRI every 2 weeks until disease progression, unacceptable toxic effects, or death. Randomisation was stratified by region, KRAS mutation status, and time to disease progression after starting first-line treatment. The primary endpoint was overall survival in the intention-to-treat population. This study is registered with ClinicalTrials.gov, number NCT01183780. Findings We enrolled 1072 patients (536 in each group). Median overall survival was 13·3 months (95% CI 12·4–14·5) for patients in the ramucirumab group versus 11·7 months (10·8–12·7) for the placebo group (hazard ratio 0·844, 95% CI 0·730–0·976; log-rank p=0·0219). Survival benefit was consistent across subgroups of patients who received ramucirumab plus FOLFIRI. Grade 3 or worse adverse events seen in more than 5% of patients were neutropenia (203 [38%] of 529 patients in the ramucirumab group vs 123 [23%] of 528 in the placebo group, with febrile neutropenia incidence of 18 [3%] vs 13 [2%]), hypertension (59 [11%] vs 15 [3%]), diarrhoea (57 [11%] vs 51 [10%]), and fatigue (61 [12%] vs 41 [8%]). Interpretation Ramucirumab plus FOLFIRI significantly improved overall survival compared with placebo plus FOLFIRI as second-line treatment for patients with metastatic colorectal carcinoma. No unexpected adverse events were identified and toxic effects were manageable. Funding Eli Lilly.

700 citations


Journal ArticleDOI
TL;DR: In the past 20 years, impressive progress has been made both experimentally and theoretically in superconducting quantum circuits, which provide a platform for manipulating microwave photons as mentioned in this paper, and many higher-order effects, unusual and less familiar in traditional cavity quantum electrodynamics with natural atoms, have been experimentally observed.
Abstract: In the past 20 years, impressive progress has been made both experimentally and theoretically in superconducting quantum circuits, which provide a platform for manipulating microwave photons. This emerging field of superconducting quantum microwave circuits has been driven by many new interesting phenomena in microwave photonics and quantum information processing. For instance, the interaction between superconducting quantum circuits and single microwave photons can reach the regimes of strong, ultra-strong, and even deep-strong coupling. Many higher-order effects, unusual and less familiar in traditional cavity quantum electrodynamics with natural atoms, have been experimentally observed, e.g., giant Kerr effects, multi-photon processes, and single-atom induced bistability of microwave photons. These developments may lead to improved understanding of the counterintuitive properties of quantum mechanics, and speed up applications ranging from microwave photonics to superconducting quantum information processing. In this article, we review experimental and theoretical progress in microwave photonics with superconducting quantum circuits. We hope that this global review can provide a useful roadmap for this rapidly developing field.

Journal ArticleDOI
TL;DR: Some major analytical building blocks for the development of a theory of organizations are outlined and discussed in this article, and two literatures of agency theory are briefly discussed in light of these issues.
Abstract: The foundations are being put in place for a revolution in the science of organizations. Some major analytical building blocks for the development of a theory of organizations are outlined and discussed in this paper. This development of organization theory will be hastened by increased understanding of the importance of the choice of definitions, tautologies, analytical techniques, and types of evidence. The two literatures of agency theory are briefly discussed in light of these issues. Because accounting is an integral part of the structure of every organization, the development of a theory of organizations will be closely associated with the development of a theory of accounting. This theory will explain why organizations take the form they do, why they behave as they do, and why accounting practices take the form they do. Because such positive theories as these are required for purposeful decision making, their development will provide a better scientific basis for the decisions of managers, standard-setting boards, and government regulatory bodies.

Journal ArticleDOI
TL;DR: This review consolidates the data on the classical and state-of-the-art methods for isolation of EVs, including exosomes, highlighting the advantages and disadvantages of each method.
Abstract: Background. Extracellular vesicles (EVs) play an essential role in the communication between cells and transport of diagnostically significant molecules. A wide diversity of approaches utilizing different biochemical properties of EVs and a lack of accepted protocols make data interpretation very challenging. Scope of Review. This review consolidates the data on the classical and state-of-the-art methods for isolation of EVs, including exosomes, highlighting the advantages and disadvantages of each method. Various characteristics of individual methods, including isolation efficiency, EV yield, properties of isolated EVs, and labor consumption are compared. Major Conclusions. A mixed population of vesicles is obtained in most studies of EVs for all used isolation methods. The properties of an analyzed sample should be taken into account when planning an experiment aimed at studying and using these vesicles. The problem of adequate EVs isolation methods still remains; it might not be possible to develop a universal EV isolation method but the available protocols can be used towards solving particular types of problems. General Significance. With the wide use of EVs for diagnosis and therapy of various diseases the evaluation of existing methods for EV isolation is one of the key problems in modern biology and medicine.

Journal ArticleDOI
TL;DR: This article illustrates how a framework for a research study design can be used to guide and inform the novice nurse researcher undertaking a study using grounded theory.
Abstract: Background: Grounded theory is a well-known methodology employed in many research studies. Qualitative and quantitative data generation techniques can be used in a grounded theory study. Grounded th...

Posted Content
TL;DR: A Recurrent Convolutional Neural Network (RCNN) based on U-Net and a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net, named RU-Net and R2U-Net respectively, are proposed; they show superior performance on segmentation tasks compared to equivalent models, including U-Net and residual U-Net.
Abstract: Deep learning (DL) based semantic segmentation methods have been providing state-of-the-art performance in the last few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One deep learning technique, U-Net, has become one of the most popular for these applications. In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net models, which are named RU-Net and R2U-Net respectively. The proposed models utilize the power of U-Net, Residual Network, as well as RCNN. There are several advantages of these proposed architectures for segmentation tasks. First, a residual unit helps when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, it allows us to design a better U-Net architecture with the same number of network parameters and better performance for medical image segmentation. The proposed models are tested on three benchmark tasks: blood vessel segmentation in retina images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models including U-Net and residual U-Net (ResU-Net).
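A minimal PyTorch sketch of one recurrent residual convolutional unit of the kind the abstract describes (the same convolution applied recurrently for t steps inside a residual shortcut) is given below; it is an illustrative block under assumed channel sizes, not the authors' exact R2U-Net architecture.

```python
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """Applies the same convolution t times, feeding its output back in (recurrent conv layer)."""
    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)        # recurrence over the feature map
        return out

class RecurrentResidualBlock(nn.Module):
    """Two recurrent conv layers wrapped in a residual shortcut, in the spirit of R2U-Net units."""
    def __init__(self, in_channels, out_channels, t=2):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)   # match channel counts
        self.body = nn.Sequential(RecurrentConv(out_channels, t), RecurrentConv(out_channels, t))

    def forward(self, x):
        x = self.proj(x)
        return x + self.body(x)             # residual (feature accumulation) connection

# Example: a 4-channel feature map passed through one block
block = RecurrentResidualBlock(4, 16)
print(block(torch.randn(1, 4, 64, 64)).shape)   # torch.Size([1, 16, 64, 64])
```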

Journal ArticleDOI
TL;DR: Inspired by earlier works, this study applies deep learning models to detect COVID-19 patients from their chest radiography images and shows that the generated heatmaps contain most of the infected areas annotated by the authors' board-certified radiologist.

Journal ArticleDOI
TL;DR: Evidence from the published literature supporting associations between CYP2D6 and CYP2C19 polymorphisms and SSRI efficacy and safety is summarized, and dosing recommendations for fluvoxamine, paroxetine, citalopram, escitalopram, and sertraline based on CYP2D6 and/or CYP2C19 genotype are provided.
Abstract: Selective serotonin reuptake inhibitors (SSRIs) are primary treatment options for major depressive and anxiety disorders. CYP2D6 and CYP2C19 polymorphisms can influence the metabolism of SSRIs, thereby affecting drug efficacy and safety. We summarize evidence from the published literature supporting these associations and provide dosing recommendations for fluvoxamine, paroxetine, citalopram, escitalopram, and sertraline based on CYP2D6 and/or CYP2C19 genotype (updates at www.pharmgkb.org).

Proceedings Article
01 Jan 2020
TL;DR: GraphCL as discussed by the authors proposes a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data, which can produce graph representations of similar or better generalizability, transferrability, and robustness compared to state-of-the-art methods.
Abstract: Generalizable, transferrable, and robust representation learning on graph-structured data remains a challenge for current graph neural networks (GNNs). Unlike what has been developed for convolutional neural networks (CNNs) for image data, self-supervised learning and pre-training are less explored for GNNs. In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data. We first design four types of graph augmentations to incorporate various priors. We then systematically study the impact of various combinations of graph augmentations on multiple datasets, in four different settings: semi-supervised, unsupervised, and transfer learning as well as adversarial attacks. The results show that, even without tuning augmentation extents nor using sophisticated GNN architectures, our GraphCL framework can produce graph representations of similar or better generalizability, transferrability, and robustness compared to state-of-the-art methods. We also investigate the impact of parameterized graph augmentation extents and patterns, and observe further performance gains in preliminary experiments. Our codes are available at this https URL.
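The contrastive objective at the heart of the framework described above can be sketched with a normalized-temperature cross-entropy (NT-Xent) loss between embeddings of two augmented views of the same graphs; the GNN encoder and the graph augmentations are left abstract here, and the function and variable names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two batches of view embeddings z1, z2 of shape [N, d].
    Row i of z1 and row i of z2 are embeddings of two augmentations of the same graph."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                                              # [2N, d]
    sim = (z @ z.t()) / temperature                                             # cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))    # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])           # index of each positive
    return F.cross_entropy(sim, targets)

# Hypothetical usage with graph-level embeddings from some GNN encoder:
z_view1 = torch.randn(32, 128)   # encoder(augment_1(batch_of_graphs))
z_view2 = torch.randn(32, 128)   # encoder(augment_2(batch_of_graphs))
loss = nt_xent(z_view1, z_view2)
```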

Journal ArticleDOI
TL;DR: This work explores an array of prospective redesigns of plant systems at various scales aimed at increasing crop yields through improved photosynthetic efficiency and performance, and suggests some proposed redesigns are certain to face obstacles that will require alternate routes.
Abstract: The world’s crop productivity is stagnating whereas population growth, rising affluence, and mandates for biofuels put increasing demands on agriculture. Meanwhile, demand for increasing cropland competes with equally crucial global sustainability and environmental protection needs. Addressing this looming agricultural crisis will be one of our greatest scientific challenges in the coming decades, and success will require substantial improvements at many levels. We assert that increasing the efficiency and productivity of photosynthesis in crop plants will be essential if this grand challenge is to be met. Here, we explore an array of prospective redesigns of plant systems at various scales, all aimed at increasing crop yields through improved photosynthetic efficiency and performance. Prospects range from straightforward alterations, already supported by preliminary evidence of feasibility, to substantial redesigns that are currently only conceptual, but that may be enabled by new developments in synthetic biology. Although some proposed redesigns are certain to face obstacles that will require alternate routes, the efforts should lead to new discoveries and technical advances with important impacts on the global problem of crop productivity and bioenergy production.

Journal ArticleDOI
TL;DR: Evidence of the effectiveness of SLT for people with aphasia following stroke in terms of improved functional communication, reading, writing, and expressive language compared with no therapy is provided.
Abstract: Background Aphasia is an acquired language impairment following brain damage that affects some or all language modalities: expression and understanding of speech, reading, and writing. Approximately one third of people who have a stroke experience aphasia. Objectives To assess the effects of speech and language therapy (SLT) for aphasia following stroke. Search methods We searched the Cochrane Stroke Group Trials Register (last searched 9 September 2015), CENTRAL (2015, Issue 5) and other Cochrane Library Databases (CDSR, DARE, HTA, to 22 September 2015), MEDLINE (1946 to September 2015), EMBASE (1980 to September 2015), CINAHL (1982 to September 2015), AMED (1985 to September 2015), LLBA (1973 to September 2015), and SpeechBITE (2008 to September 2015). We also searched major trials registers for ongoing trials including ClinicalTrials.gov (to 21 September 2015), the Stroke Trials Registry (to 21 September 2015), Current Controlled Trials (to 22 September 2015), and WHO ICTRP (to 22 September 2015). In an effort to identify further published, unpublished, and ongoing trials we also handsearched the International Journal of Language and Communication Disorders (1969 to 2005) and reference lists of relevant articles, and we contacted academic institutions and other researchers. There were no language restrictions. Selection criteria Randomised controlled trials (RCTs) comparing SLT (a formal intervention that aims to improve language and communication abilities, activity and participation) versus no SLT; social support or stimulation (an intervention that provides social support and communication stimulation but does not include targeted therapeutic interventions); or another SLT intervention (differing in duration, intensity, frequency, intervention methodology or theoretical approach). Data collection and analysis We independently extracted the data and assessed the quality of included trials. We sought missing data from investigators. Main results We included 57 RCTs (74 randomised comparisons) involving 3002 participants in this review (some appearing in more than one comparison). Twenty-seven randomised comparisons (1620 participants) assessed SLT versus no SLT; SLT resulted in clinically and statistically significant benefits to patients' functional communication (standardised mean difference (SMD) 0.28, 95% confidence interval (CI) 0.06 to 0.49, P = 0.01), reading, writing, and expressive language, but (based on smaller numbers) benefits were not evident at follow-up. Nine randomised comparisons (447 participants) assessed SLT with social support and stimulation; meta-analyses found no evidence of a difference in functional communication, but more participants withdrew from social support interventions than SLT. Thirty-eight randomised comparisons (1242 participants) assessed two approaches to SLT. Functional communication was significantly better in people with aphasia that received therapy at a high intensity, high dose, or over a long duration compared to those that received therapy at a lower intensity, lower dose, or over a shorter period of time. The benefits of a high intensity or a high dose of SLT were confounded by a significantly higher dropout rate in these intervention groups. Generally, trials randomised small numbers of participants across a range of characteristics (age, time since stroke, and severity profiles), interventions, and outcomes. 
Authors' conclusions Our review provides evidence of the effectiveness of SLT for people with aphasia following stroke in terms of improved functional communication, reading, writing, and expressive language compared with no therapy. There is some indication that therapy at high intensity, high dose or over a longer period may be beneficial. High-intensity and high-dose interventions may not be acceptable to all.

Posted Content
TL;DR: In this article, a taxonomy of common programming pitfalls which may lead to security vulnerabilities in Ethereum smart contracts is presented, and a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage.
Abstract: Smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes, without the need of an external trusted authority. Since smart contracts handle and transfer assets of considerable value, besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering the assets. We study this problem in Ethereum, the most well-known and used framework for smart contracts so far. We analyse the security vulnerabilities of Ethereum smart contracts, providing a taxonomy of common programming pitfalls which may lead to vulnerabilities. We show a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage.

Journal ArticleDOI
TL;DR: In this article, the authors made a distinction between non-small-cell lung cancer (NSCLC) and small-cell lung cancer (SCLC), and showed that although overall mortality from lung cancer has been...
Abstract: Background Lung cancer is made up of distinct subtypes, including non–small-cell lung cancer (NSCLC) and small-cell lung cancer (SCLC). Although overall mortality from lung cancer has been...

Journal ArticleDOI
TL;DR: This work proposes to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors, and introduces a function that measures the compatibility between an image and a label embedding.
Abstract: Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as, e.g., class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.
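A minimal numpy sketch of the bilinear compatibility function the abstract describes, F(x, y) = theta(x)^T W phi(y), used to rank candidate classes by their attribute embeddings, is shown below; the dimensions, random data, and untrained W are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_attr, n_classes = 512, 85, 10

theta_x = rng.normal(size=d_img)                 # image feature theta(x)
phi = rng.normal(size=(n_classes, d_attr))       # per-class attribute vectors phi(y)
W = rng.normal(size=(d_img, d_attr)) * 0.01      # bilinear compatibility matrix (learned in practice)

scores = theta_x @ W @ phi.T                     # F(x, y) = theta(x)^T W phi(y) for every class
predicted_class = int(np.argmax(scores))         # rank classes by compatibility; top-scoring class wins
print(predicted_class, scores[predicted_class])
```

In the zero-shot setting described in the abstract, unseen classes can be scored the same way, since only their attribute vectors phi(y) are required.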

Proceedings ArticleDOI
27 Jun 2016
TL;DR: A novel Deep Supervised Hashing method to learn compact similarity-preserving binary codes for the huge body of image data; extensive experiments show the promising performance of the method compared with the state of the art.
Abstract: In this paper, we present a new hashing method to learn compact binary codes for highly efficient image retrieval on large-scale datasets. While the complex image appearance variations still pose a great challenge to reliable retrieval, in light of the recent progress of Convolutional Neural Networks (CNNs) in learning robust image representation on various vision tasks, this paper proposes a novel Deep Supervised Hashing (DSH) method to learn compact similarity-preserving binary code for the huge body of image data. Specifically, we devise a CNN architecture that takes pairs of images (similar/dissimilar) as training inputs and encourages the output of each image to approximate discrete values (e.g. +1/-1). To this end, a loss function is elaborately designed to maximize the discriminability of the output space by encoding the supervised information from the input image pairs, and simultaneously imposing regularization on the real-valued outputs to approximate the desired discrete values. For image retrieval, new-coming query images can be easily encoded by propagating through the network and then quantizing the network outputs to binary codes representation. Extensive experiments on two large scale datasets CIFAR-10 and NUS-WIDE show the promising performance of our method compared with the state-of-the-arts.
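A hedged sketch of the pairwise loss the abstract outlines is given below: similar pairs are pulled together, dissimilar pairs are pushed beyond a margin, and a relaxation term pulls the real-valued outputs toward +1/-1 before sign quantization; the margin, weighting, and tensor shapes are illustrative assumptions rather than the paper's exact settings.

```python
import torch

def dsh_pair_loss(b1, b2, similar, margin=4.0, alpha=0.01):
    """Pairwise DSH-style loss for real-valued network outputs b1, b2 of shape [N, k].
    `similar` is 1.0 for same-label pairs and 0.0 for different-label pairs."""
    dist2 = ((b1 - b2) ** 2).sum(dim=1)                                  # squared Euclidean distance
    pull = similar * dist2                                               # similar pairs: small distance
    push = (1.0 - similar) * torch.clamp(margin - dist2, min=0.0)        # dissimilar: beyond the margin
    quant = (b1.abs() - 1.0).abs().sum(dim=1) + (b2.abs() - 1.0).abs().sum(dim=1)  # pull outputs toward +/-1
    return (0.5 * (pull + push) + alpha * quant).mean()

# Hypothetical usage: b1, b2 are the CNN outputs for a batch of image pairs
b1, b2 = torch.randn(8, 48), torch.randn(8, 48)
similar = torch.randint(0, 2, (8,)).float()
loss = dsh_pair_loss(b1, b2, similar)
# At retrieval time, binary codes are obtained by sign quantization, e.g. torch.sign(b1).
```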

Journal ArticleDOI
TL;DR: DEPICT as mentioned in this paper is an integrative tool that employs predicted gene functions to systematically prioritize the most likely causal genes at associated loci, highlight enriched pathways, and identify tissues/cell types where genes from associated loci are highly expressed.
Abstract: The main challenge for gaining biological insights from genetic associations is identifying which genes and pathways explain the associations. Here we present DEPICT, an integrative tool that employs predicted gene functions to systematically prioritize the most likely causal genes at associated loci, highlight enriched pathways and identify tissues/cell types where genes from associated loci are highly expressed. DEPICT is not limited to genes with established functions and prioritizes relevant gene sets for many phenotypes.

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of 154 studies that apply deep learning to EEG, published between 2010 and 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring.
Abstract: Context Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective In this work, we review 154 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to (1) the data, (2) the preprocessing methodology, (3) the DL design choices, (4) the results, and (5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozens to several millions, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About [Formula: see text] of the studies used convolutional neural networks (CNNs), while [Formula: see text] used recurrent neural networks (RNNs), most often with a total of 3-10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was [Formula: see text] across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. Significance To help the community progress and share work more effectively, we provide a list of recommendations for future studies and emphasize the need for more reproducible research. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly. A planned follow-up to this work will be an online public benchmarking portal listing reproducible results.

Journal ArticleDOI
TL;DR: A dedicated effort to synthesize existing scientific knowledge across disciplines is underway and aims to provide a better understanding of the combined risks posed in the Mediterranean Basin, particularly for the most vulnerable southern Mediterranean societies, where fewer systematic observation schemes and impact models are available.
Abstract: Recent accelerated climate change has exacerbated existing environmental problems in the Mediterranean Basin that are caused by the combination of changes in land use, increasing pollution and declining biodiversity. For five broad and interconnected impact domains (water, ecosystems, food, health and security), current change and future scenarios consistently point to significant and increasing risks during the coming decades. Policies for the sustainable development of Mediterranean countries need to mitigate these risks and consider adaptation options, but currently lack adequate information — particularly for the most vulnerable southern Mediterranean societies, where fewer systematic observations schemes and impact models are based. A dedicated effort to synthesize existing scientific knowledge across disciplines is underway and aims to provide a better understanding of the combined risks posed.

Journal ArticleDOI
TL;DR: Recent advances in developing highly doped UCNPs are surveyed, the strategies that bypass the concentration quenching effect are highlighted, and new optical properties as well as emerging applications enabled by these nanoparticles are discussed.
Abstract: Lanthanide-doped upconversion nanoparticles (UCNPs) are capable of converting near-infra-red excitation into visible and ultraviolet emission. Their unique optical properties have advanced a broad range of applications, such as fluorescent microscopy, deep-tissue bioimaging, nanomedicine, optogenetics, security labelling and volumetric display. However, the constraint of concentration quenching on upconversion luminescence has hampered the nanoscience community to develop bright UCNPs with a large number of dopants. This review surveys recent advances in developing highly doped UCNPs, highlights the strategies that bypass the concentration quenching effect, and discusses new optical properties as well as emerging applications enabled by these nanoparticles.

Posted Content
TL;DR: This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models can learn to improve their predictions due to unlabeled examples, much like a semi-supervised algorithm would.
Abstract: In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corresponding test set. In this work, we advance this few-shot classification paradigm towards a scenario where unlabeled examples are also available within each episode. We consider two situations: one where all unlabeled examples are assumed to belong to the same set of classes as the labeled examples of the episode, as well as the more challenging situation where examples from other distractor classes are also provided. To address this paradigm, we propose novel extensions of Prototypical Networks (Snell et al., 2017) that are augmented with the ability to use unlabeled examples when producing prototypes. These models are trained in an end-to-end way on episodes, to learn to leverage the unlabeled examples successfully. We evaluate these methods on versions of the Omniglot and miniImageNet benchmarks, adapted to this new framework augmented with unlabeled examples. We also propose a new split of ImageNet, consisting of a large set of classes, with a hierarchical structure. Our experiments confirm that our Prototypical Networks can learn to improve their predictions due to unlabeled examples, much like a semi-supervised algorithm would.
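The simplest refinement the abstract describes, computing class prototypes from the labeled support set and then updating them with soft (distance-based) assignments of the unlabeled examples, can be sketched as follows; the distractor-handling variants are omitted and all embeddings here are synthetic placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def refine_prototypes(support, support_labels, unlabeled, n_classes):
    """Prototypes from labeled support embeddings, refined by soft assignments of unlabeled ones."""
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in range(n_classes)])
    # Soft assignment of each unlabeled embedding to prototypes (softmax over negative squared distance)
    d2 = ((unlabeled[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    w = softmax(-d2, axis=1)                                       # [n_unlabeled, n_classes]
    counts = np.bincount(support_labels, minlength=n_classes).astype(float)
    # Re-estimate each prototype from its labeled members plus the soft unlabeled mass
    new_protos = (protos * counts[:, None] + w.T @ unlabeled) / (counts + w.sum(0))[:, None]
    return new_protos

rng = np.random.default_rng(0)
support = rng.normal(size=(10, 64))              # 5 classes x 2 labeled embeddings (toy data)
labels = np.repeat(np.arange(5), 2)
unlabeled = rng.normal(size=(30, 64))
protos = refine_prototypes(support, labels, unlabeled, n_classes=5)
```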

Journal ArticleDOI
TL;DR: The most common neurologic complaints in COVID-19 are anosmia, ageusia, and headache, but other diseases, such as stroke, impairment of consciousness, seizure, and encephalopathy, have also been reported.
Abstract: Importance Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in December 2019, causing human coronavirus disease 2019 (COVID-19), which has now spread into a worldwide pandemic. The pulmonary manifestations of COVID-19 have been well described in the literature. Two similar human coronaviruses that cause Middle East respiratory syndrome (MERS-CoV) and severe acute respiratory syndrome (SARS-CoV-1) are known to cause disease in the central and peripheral nervous systems. Emerging evidence suggests COVID-19 has neurologic consequences as well. Observations This review serves to summarize available information regarding coronaviruses in the nervous system, identify the potential tissue targets and routes of entry of SARS-CoV-2 into the central nervous system, and describe the range of clinical neurological complications that have been reported thus far in COVID-19 and their potential pathogenesis. Viral neuroinvasion may be achieved by several routes, including transsynaptic transfer across infected neurons, entry via the olfactory nerve, infection of vascular endothelium, or leukocyte migration across the blood-brain barrier. The most common neurologic complaints in COVID-19 are anosmia, ageusia, and headache, but other diseases, such as stroke, impairment of consciousness, seizure, and encephalopathy, have also been reported. Conclusions and Relevance Recognition and understanding of the range of neurological disorders associated with COVID-19 may lead to improved clinical outcomes and better treatment algorithms. Further neuropathological studies will be crucial to understanding the pathogenesis of the disease in the central nervous system, and longitudinal neurologic and cognitive assessment of individuals after recovery from COVID-19 will be crucial to understand the natural history of COVID-19 in the central nervous system and monitor for any long-term neurologic sequelae.