
Showing papers by "Tel Aviv University" published in 2016


Journal ArticleDOI
TL;DR: Pembrolizumab is a humanized monoclonal antibody against programmed death 1 (PD-1) that has antitumor activity in advanced non-small-cell lung cancer (NSCLC), with increased activity in tumors that express PD-L1 as mentioned in this paper.
Abstract: Background: Pembrolizumab is a humanized monoclonal antibody against programmed death 1 (PD-1) that has antitumor activity in advanced non–small-cell lung cancer (NSCLC), with increased activity in tumors that express programmed death ligand 1 (PD-L1). Methods: In this open-label, phase 3 trial, we randomly assigned 305 patients who had previously untreated advanced NSCLC with PD-L1 expression on at least 50% of tumor cells and no sensitizing mutation of the epidermal growth factor receptor gene or translocation of the anaplastic lymphoma kinase gene to receive either pembrolizumab (at a fixed dose of 200 mg every 3 weeks) or the investigator’s choice of platinum-based chemotherapy. Crossover from the chemotherapy group to the pembrolizumab group was permitted in the event of disease progression. The primary end point, progression-free survival, was assessed by means of blinded, independent, central radiologic review. Secondary end points were overall survival, objective response rate, and safety. Results: Medi...

7,053 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
TL;DR: The Perseus software platform was developed to support biological and biomedical researchers in interpreting protein quantification, interaction and post-translational modification data and it is anticipated that Perseus's arsenal of algorithms and its intuitive usability will empower interdisciplinary analysis of complex large data sets.
Abstract: A main bottleneck in proteomics is the downstream biological analysis of highly multivariate quantitative protein abundance data generated using mass-spectrometry-based analysis. We developed the Perseus software platform (http://www.perseus-framework.org) to support biological and biomedical researchers in interpreting protein quantification, interaction and post-translational modification data. Perseus contains a comprehensive portfolio of statistical tools for high-dimensional omics data analysis covering normalization, pattern recognition, time-series analysis, cross-omics comparisons and multiple-hypothesis testing. A machine learning module supports the classification and validation of patient groups for diagnosis and prognosis, and it also detects predictive protein signatures. Central to Perseus is a user-friendly, interactive workflow environment that provides complete documentation of computational methods used in a publication. All activities in Perseus are realized as plugins, and users can extend the software by programming their own, which can be shared through a plugin store. We anticipate that Perseus's arsenal of algorithms and its intuitive usability will empower interdisciplinary analysis of complex large data sets.

5,165 citations


Journal ArticleDOI
TL;DR: Gaia as discussed by the authors is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach.
Abstract: Gaia is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach. Both the spacecraft and the payload were built by European industry. The involvement of the scientific community focusses on data processing for which the international Gaia Data Processing and Analysis Consortium (DPAC) was selected in 2007. Gaia was launched on 19 December 2013 and arrived at its operating point, the second Lagrange point of the Sun-Earth-Moon system, a few weeks later. The commissioning of the spacecraft and payload was completed on 19 July 2014. The nominal five-year mission started with four weeks of special, ecliptic-pole scanning and subsequently transferred into full-sky scanning mode. We recall the scientific goals of Gaia and give a description of the as-built spacecraft that is currently (mid-2016) being operated to achieve these goals. We pay special attention to the payload module, the performance of which is closely related to the scientific performance of the mission. We provide a summary of the commissioning activities and findings, followed by a description of the routine operational mode. We summarise scientific performance estimates on the basis of in-orbit operations. Several intermediate Gaia data releases are planned and the data can be retrieved from the Gaia Archive, which is available through the Gaia home page.

5,164 citations


Journal ArticleDOI
TL;DR: The first Gaia data release, Gaia DR1 as discussed by the authors, consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues.
Abstract: Context. At about 1000 days after the launch of Gaia we present the first Gaia data release, Gaia DR1, consisting of astrometry and photometry for over 1 billion sources brighter than magnitude 20.7. Aims: A summary of Gaia DR1 is presented along with illustrations of the scientific quality of the data, followed by a discussion of the limitations due to the preliminary nature of this release. Methods: The raw data collected by Gaia during the first 14 months of the mission have been processed by the Gaia Data Processing and Analysis Consortium (DPAC) and turned into an astrometric and photometric catalogue. Results: Gaia DR1 consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues - a realisation of the Tycho-Gaia Astrometric Solution (TGAS) - and a secondary astrometric data set containing the positions for an additional 1.1 billion sources. The second component is the photometric data set, consisting of mean G-band magnitudes for all sources. The G-band light curves and the characteristics of 3000 Cepheid and RR Lyrae stars, observed at high cadence around the south ecliptic pole, form the third component. For the primary astrometric data set the typical uncertainty is about 0.3 mas for the positions and parallaxes, and about 1 mas yr-1 for the proper motions. A systematic component of 0.3 mas should be added to the parallax uncertainties. For the subset of 94 000 Hipparcos stars in the primary data set, the proper motions are much more precise at about 0.06 mas yr-1. For the secondary astrometric data set, the typical uncertainty of the positions is 10 mas. The median uncertainties on the mean G-band magnitudes range from the mmag level to 0.03 mag over the magnitude range 5 to 20.7. Conclusions: Gaia DR1 is an important milestone ahead of the next Gaia data release, which will feature five-parameter astrometry for all sources. Extensive validation shows that Gaia DR1 represents a major advance in the mapping of the heavens and the availability of basic stellar data that underpin observational astrophysics. Nevertheless, the very preliminary nature of this first Gaia data release does lead to a number of important limitations to the data quality which should be carefully considered before drawing conclusions from the data.
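The abstract notes that the DR1 data can be retrieved from the Gaia Archive. As an illustration only (not part of the paper), the snippet below queries the archive with the community-maintained astroquery package; the table name gaiadr1.tgas_source and the column selection are assumptions that should be checked against the archive documentation.

```python
# Hypothetical example: retrieving a few TGAS sources from the Gaia Archive.
# Assumes astroquery is installed; table and column names are assumptions.
from astroquery.gaia import Gaia

query = """
SELECT TOP 10 source_id, ra, dec, parallax, pmra, pmdec
FROM gaiadr1.tgas_source
ORDER BY parallax DESC
"""
job = Gaia.launch_job(query)   # synchronous ADQL query against the Gaia Archive
print(job.get_results())
```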

2,174 citations


Journal ArticleDOI
TL;DR: Several new features are introduced into ConSurf, including automatic selection of the best evolutionary model used to infer the rates, the ability to homology-model query proteins, prediction of the secondary structure of query RNA molecules from sequence, the ability to view the biological assembly of a query (in addition to the single chain), mapping of the conservation grades onto 2D RNA models and an advanced view of the phylogenetic tree.
Abstract: The degree of evolutionary conservation of an amino acid in a protein or a nucleic acid in DNA/RNA reflects a balance between its natural tendency to mutate and the overall need to retain the structural integrity and function of the macromolecule. The ConSurf web server (http://consurf.tau.ac.il), established over 15 years ago, analyses the evolutionary pattern of the amino/nucleic acids of the macromolecule to reveal regions that are important for structure and/or function. Starting from a query sequence or structure, the server automatically collects homologues, infers their multiple sequence alignment and reconstructs a phylogenetic tree that reflects their evolutionary relations. These data are then used, within a probabilistic framework, to estimate the evolutionary rates of each sequence position. Here we introduce several new features into ConSurf, including automatic selection of the best evolutionary model used to infer the rates, the ability to homology-model query proteins, prediction of the secondary structure of query RNA molecules from sequence, the ability to view the biological assembly of a query (in addition to the single chain), mapping of the conservation grades onto 2D RNA models and an advanced view of the phylogenetic tree that enables interactively rerunning ConSurf with the taxa of a sub-tree.
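As a rough illustration of scoring conservation per alignment column (not ConSurf's phylogeny-aware, probabilistic rate estimation), the sketch below computes a normalized Shannon-entropy score from a toy multiple sequence alignment; the function name and scoring convention are ours.

```python
import numpy as np

def column_conservation(msa):
    """Crude per-position conservation scores from a protein MSA (1 = fully conserved).

    This entropy-based proxy is for illustration only; ConSurf estimates evolutionary
    rates within a probabilistic framework using the reconstructed phylogenetic tree.
    msa: list of equal-length aligned sequences (strings, '-' for gaps).
    """
    columns = np.array([list(seq) for seq in msa]).T
    scores = []
    for col in columns:
        residues = col[col != '-']
        if residues.size == 0:                      # all-gap column
            scores.append(0.0)
            continue
        _, counts = np.unique(residues, return_counts=True)
        p = counts / counts.sum()
        entropy = -(p * np.log2(p)).sum()
        scores.append(1.0 - entropy / np.log2(20))  # normalize by 20 amino acids
    return np.array(scores)

# Toy usage
print(column_conservation(["MKV-LL", "MKI-LL", "MRV-LV"]))
```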

2,159 citations


Journal ArticleDOI
TL;DR: The papers in this special section focus on the technology and applications supported by deep learning; in particular, convolutional neural networks have proven to be powerful tools for a broad range of computer vision tasks.
Abstract: The papers in this special section focus on the technology and applications supported by deep learning. Deep learning is a growing trend in general data analysis and has been termed one of the 10 breakthrough technologies of 2013. Deep learning builds on artificial neural networks by adding more layers, permitting higher levels of abstraction and improved predictions from data. To date, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains. In particular, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. Deep CNNs automatically learn mid-level and high-level abstractions obtained from raw data (e.g., images). Recent results indicate that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in natural images. Medical image analysis groups across the world are quickly entering the field and applying CNNs and other deep learning methodologies to a wide variety of applications.

1,428 citations


Journal ArticleDOI
TL;DR: Among patients receiving initial systemic treatment for HR-positive, HER2-negative advanced breast cancer, the duration of progression-free survival was significantly longer among those receiving ribociclib plus letrozole than among those who received placebo plus letrozole, with a higher rate of myelosuppression in the ribociclib group.
Abstract: Background: The inhibition of cyclin-dependent kinases 4 and 6 (CDK4/6) could potentially overcome or delay resistance to endocrine therapy in advanced breast cancer that is positive for hormone receptor (HR) and negative for human epidermal growth factor receptor 2 (HER2). Methods: In this randomized, placebo-controlled, phase 3 trial, we evaluated the efficacy and safety of the selective CDK4/6 inhibitor ribociclib combined with letrozole for first-line treatment in 668 postmenopausal women with HR-positive, HER2-negative recurrent or metastatic breast cancer who had not received previous systemic therapy for advanced disease. We randomly assigned the patients to receive either ribociclib (600 mg per day on a 3-weeks-on, 1-week-off schedule) plus letrozole (2.5 mg per day) or placebo plus letrozole. The primary end point was investigator-assessed progression-free survival. Secondary end points included overall survival, overall response rate, and safety. A preplanned interim analysis was performed on Januar...

1,232 citations



Proceedings ArticleDOI
27 Jun 2016
TL;DR: This work proposes an algorithm for single image dehazing that is linear in the size of the image, deterministic, and requires no training; it performs well on a wide variety of images and is competitive with other state-of-the-art methods.
Abstract: Haze limits visibility and reduces image contrast in outdoor images. The degradation is different for every pixel and depends on the distance of the scene point from the camera. This dependency is expressed in the transmission coefficients, which control the scene attenuation and the amount of haze in every pixel. Previous methods solve the single image dehazing problem using various patch-based priors. We, on the other hand, propose an algorithm based on a new, non-local prior. The algorithm relies on the assumption that colors of a haze-free image are well approximated by a few hundred distinct colors that form tight clusters in RGB space. Our key observation is that pixels in a given cluster are often non-local, i.e., they are spread over the entire image plane and are located at different distances from the camera. In the presence of haze these varying distances translate to different transmission coefficients. Therefore, each color cluster in the clear image becomes a line in RGB space, which we term a haze-line. Using these haze-lines, our algorithm recovers both the distance map and the haze-free image. The algorithm is linear in the size of the image, deterministic and requires no training. It performs well on a wide variety of images and is competitive with other state-of-the-art methods.
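To make the haze-line idea concrete, here is a minimal sketch (not the authors' implementation): it assumes the airlight is already known, clusters pixel directions around the airlight with k-means instead of the paper's fixed sampling of directions, and omits the regularization of the transmission map.

```python
import numpy as np
from sklearn.cluster import KMeans

def dehaze_haze_lines(img, airlight, n_clusters=500, t_min=0.1):
    """Minimal haze-line dehazing sketch, assuming the airlight is known.

    img: HxWx3 float array in [0, 1]; airlight: length-3 array.
    """
    h, w, _ = img.shape
    ia = img.reshape(-1, 3) - airlight             # translate so the airlight is the origin
    r = np.linalg.norm(ia, axis=1) + 1e-9          # radius: distance from the airlight
    directions = ia / r[:, None]                   # unit vectors define the haze-lines

    labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit_predict(directions)

    # Within each haze-line, the farthest pixel from the airlight is assumed haze-free (t = 1).
    r_max = np.zeros(n_clusters)
    np.maximum.at(r_max, labels, r)
    t = np.clip(r / r_max[labels], t_min, 1.0)     # per-pixel transmission estimate

    j = (img.reshape(-1, 3) - airlight) / t[:, None] + airlight
    return np.clip(j, 0, 1).reshape(h, w, 3), t.reshape(h, w)
```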

1,082 citations


Journal ArticleDOI
TL;DR: In this article, the science case of an Electron-Ion Collider (EIC), focused on the structure and interactions of gluon-dominated matter, with the intent to articulate it to the broader nuclear science community, is presented.
Abstract: This White Paper presents the science case of an Electron-Ion Collider (EIC), focused on the structure and interactions of gluon-dominated matter, with the intent to articulate it to the broader nuclear science community. It was commissioned by the managements of Brookhaven National Laboratory (BNL) and Thomas Jefferson National Accelerator Facility (JLab) with the objective of presenting a summary of scientific opportunities and goals of the EIC as a follow-up to the 2007 NSAC Long Range plan. This document is a culmination of a community-wide effort in nuclear science following a series of workshops on EIC physics over the past decades and, in particular, the focused ten-week program on “Gluons and quark sea at high energies” at the Institute for Nuclear Theory in Fall 2010. It contains a brief description of a few golden physics measurements along with accelerator and detector concepts required to achieve them. It has benefited profoundly from input by the user communities of BNL and JLab. This White Paper offers the promise to propel the QCD science program in the US, established with the CEBAF accelerator at JLab and the RHIC collider at BNL, to the next QCD frontier.

Journal ArticleDOI
TL;DR: In this trial involving patients without diabetes who had insulin resistance along with a recent history of ischemic stroke or TIA, the risk of stroke or myocardial infarction was lower among patients who received pioglitazone than among those who received placebo.
Abstract: Background: Patients with ischemic stroke or transient ischemic attack (TIA) are at increased risk for future cardiovascular events despite current preventive therapies. The identification of insulin resistance as a risk factor for stroke and myocardial infarction raised the possibility that pioglitazone, which improves insulin sensitivity, might benefit patients with cerebrovascular disease. Methods: In this multicenter, double-blind trial, we randomly assigned 3876 patients who had had a recent ischemic stroke or TIA to receive either pioglitazone (target dose, 45 mg daily) or placebo. Eligible patients did not have diabetes but were found to have insulin resistance on the basis of a score of more than 3.0 on the homeostasis model assessment of insulin resistance (HOMA-IR) index. The primary outcome was fatal or nonfatal stroke or myocardial infarction. Results: By 4.8 years, a primary outcome had occurred in 175 of 1939 patients (9.0%) in the pioglitazone group and in 228 of 1937 (11.8%) in the placebo group...

Posted Content
TL;DR: This paper proposes a unified framework that generalizes CNN architectures to non-Euclidean domains (graphs and manifolds) and learns local, stationary, and compositional task-specific features; the method is tested on standard tasks from the realms of image-, graph- and 3D shape analysis and is shown to consistently outperform previous approaches.
Abstract: Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, and computer graphics. In this paper, we propose a unified framework that generalizes CNN architectures to non-Euclidean domains (graphs and manifolds) and learns local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image-, graph- and 3D shape analysis and show that it consistently outperforms previous approaches.
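For readers unfamiliar with convolutions on graphs, the toy PyTorch layer below illustrates the general idea of a non-Euclidean convolution (aggregate features over a node's neighbourhood, then apply a shared linear map). It is a generic instance for illustration, not the mixture-model operator proposed in the paper.

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """Basic mean-aggregation graph convolution layer (illustrative sketch)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_nodes, in_dim) node features; adj: (num_nodes, num_nodes) adjacency
        # with self-loops, so each node also keeps its own feature in the average.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = adj @ x / deg                 # average features over each neighbourhood
        return torch.relu(self.lin(h))

# Toy usage: 4 nodes on a path graph with self-loops, 8-dimensional features.
adj = torch.eye(4)
adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = adj[2, 3] = adj[3, 2] = 1.0
x = torch.randn(4, 8)
print(SimpleGraphConv(8, 16)(x, adj).shape)   # torch.Size([4, 16])
```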

Journal ArticleDOI
25 Feb 2016-Nature
TL;DR: It is shown that m1A is enriched around the start codon upstream of the first splice site: it preferentially decorates more structured regions around canonical and alternative translation initiation sites, is dynamic in response to physiological conditions, and correlates positively with protein production.
Abstract: Gene expression can be regulated post-transcriptionally through dynamic and reversible RNA modifications. A recent noteworthy example is N6-methyladenosine (m6A), which affects messenger RNA (mRNA) localization, stability, translation and splicing. Here we report on a new mRNA modification, N1-methyladenosine (m1A), that occurs on thousands of different gene transcripts in eukaryotic cells, from yeast to mammals, at an estimated average transcript stoichiometry of 20% in humans. Employing newly developed sequencing approaches, we show that m1A is enriched around the start codon upstream of the first splice site: it preferentially decorates more structured regions around canonical and alternative translation initiation sites, is dynamic in response to physiological conditions, and correlates positively with protein production. These unique features are highly conserved in mouse and human cells, strongly indicating a functional role for m1A in promoting translation of methylated mRNA.

Journal ArticleDOI
17 Jun 2016-Science
TL;DR: A fully reversible, two-mode, single-molecule electrical switch with unprecedented levels of accuracy, stability, consistency, and reproducibility is demonstrated.
Abstract: Through molecular engineering, single diarylethenes were covalently sandwiched between graphene electrodes to form stable molecular conduction junctions. Our experimental and theoretical studies of these junctions consistently show and interpret reversible conductance photoswitching at room temperature and stochastic switching between different conductive states at low temperature at a single-molecule level. We demonstrate a fully reversible, two-mode, single-molecule electrical switch with unprecedented levels of accuracy (on/off ratio of ~100), stability (over a year), and reproducibility (46 devices with more than 100 cycles for photoswitching and ~10^5 to 10^6 cycles for stochastic switching).

Journal ArticleDOI
TL;DR: Non-inferiority was shown, and the results support the use of isavuconazole for the primary treatment of patients with invasive mould disease.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4, Baptiste Abeloos5, Rosemarie Aben6, Ossama AbouZeid7, N. L. Abraham8, Halina Abramowicz9, Henso Abreu10, Ricardo Abreu11, Yiming Abulaiti12, Bobby Samir Acharya13, Bobby Samir Acharya14, Leszek Adamczyk15, David H. Adams16, Jahred Adelman17, Stefanie Adomeit18, Tim Adye19, A. A. Affolder20, Tatjana Agatonovic-Jovin21, Johannes Agricola22, Juan Antonio Aguilar-Saavedra23, Steven Ahlen24, Faig Ahmadov4, Faig Ahmadov25, Giulio Aielli26, Henrik Akerstedt12, T. P. A. Åkesson27, Andrei Akimov, Gian Luigi Alberghi28, Justin Albert29, S. Albrand30, M. J. Alconada Verzini31, Martin Aleksa32, Igor Aleksandrov25, Calin Alexa, Gideon Alexander9, Theodoros Alexopoulos33, Muhammad Alhroob2, Malik Aliev34, Gianluca Alimonti, John Alison35, Steven Patrick Alkire36, Bmm Allbrooke8, Benjamin William Allen11, Phillip Allport37, Alberto Aloisio38, Alejandro Alonso39, Francisco Alonso31, Cristiano Alpigiani40, Mahmoud Alstaty1, B. Alvarez Gonzalez32, D. Álvarez Piqueras41, Mariagrazia Alviggi38, Brian Thomas Amadio42, K. Amako, Y. Amaral Coutinho43, Christoph Amelung44, D. Amidei45, S. P. Amor Dos Santos46, António Amorim47, Simone Amoroso32, Glenn Amundsen44, Christos Anastopoulos48, Lucian Stefan Ancu49, Nansi Andari17, Timothy Andeen50, Christoph Falk Anders51, G. Anders32, John Kenneth Anders20, Kelby Anderson35, Attilio Andreazza52, Andrei51, Stylianos Angelidakis53, Ivan Angelozzi6, Philipp Anger54, Aaron Angerami36, Francis Anghinolfi32, Alexey Anisenkov55, Nuno Anjos56 
Aix-Marseille University1, University of Oklahoma2, University of Iowa3, Azerbaijan National Academy of Sciences4, Université Paris-Saclay5, University of Amsterdam6, University of California, Santa Cruz7, University of Sussex8, Tel Aviv University9, Technion – Israel Institute of Technology10, University of Oregon11, Stockholm University12, King's College London13, International Centre for Theoretical Physics14, AGH University of Science and Technology15, Brookhaven National Laboratory16, Northern Illinois University17, Ludwig Maximilian University of Munich18, Rutherford Appleton Laboratory19, University of Liverpool20, University of Belgrade21, University of Göttingen22, University of Granada23, Boston University24, Joint Institute for Nuclear Research25, University of Rome Tor Vergata26, Lund University27, University of Bologna28, University of Victoria29, University of Grenoble30, National University of La Plata31, CERN32, National Technical University of Athens33, University of Salento34, University of Chicago35, Columbia University36, University of Birmingham37, University of Naples Federico II38, University of Copenhagen39, University of Washington40, University of Valencia41, Lawrence Berkeley National Laboratory42, Federal University of Rio de Janeiro43, Brandeis University44, University of Michigan45, University of Coimbra46, University of Lisbon47, University of Sheffield48, University of Geneva49, University of Texas at Austin50, Heidelberg University51, University of Milan52, National and Kapodistrian University of Athens53, Dresden University of Technology54, Novosibirsk State University55, IFAE56
TL;DR: In this article, a combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and fermions, are presented.
Abstract: Combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and fermions, are presented. The combination is based on the analysis of five production processes, namely gluon fusion, vector boson fusion, and associated production with a $W$ or a $Z$ boson or a pair of top quarks, and of the six decay modes $H \to ZZ, WW$, $\gamma\gamma, \tau\tau, bb$, and $\mu\mu$. All results are reported assuming a value of 125.09 GeV for the Higgs boson mass, the result of the combined measurement by the ATLAS and CMS experiments. The analysis uses the CERN LHC proton--proton collision data recorded by the ATLAS and CMS experiments in 2011 and 2012, corresponding to integrated luminosities per experiment of approximately 5 fb$^{-1}$ at $\sqrt{s}=7$ TeV and 20 fb$^{-1}$ at $\sqrt{s} = 8$ TeV. The Higgs boson production and decay rates measured by the two experiments are combined within the context of three generic parameterisations: two based on cross sections and branching fractions, and one on ratios of coupling modifiers. Several interpretations of the measurements with more model-dependent parameterisations are also given. The combined signal yield relative to the Standard Model prediction is measured to be 1.09 $\pm$ 0.11. The combined measurements lead to observed significances for the vector boson fusion production process and for the $H \to \tau\tau$ decay of $5.4$ and $5.5$ standard deviations, respectively. The data are consistent with the Standard Model predictions for all parameterisations considered.

Journal ArticleDOI
TL;DR: In this article, the authors developed and analyzed a global soil visible-near infrared (vis-NIR) spectral library, which is currently the largest and most diverse database of its kind, and showed that the information encoded in the spectra can describe soil composition and be associated with land cover and its global geographic distribution, which acts as a surrogate for global climate variability.

Journal ArticleDOI
20 Nov 2016
TL;DR: This work proposes a direct-detection coherent receiver that combines the advantages of coherent transmission and the cost-effectiveness of direct detection, and is more efficient in terms of spectral occupancy and energy consumption.
Abstract: Interest in short-reach links of the kind needed for inter-data-center communications has in recent years fueled the search for transmission schemes that are simultaneously high-performing and cost-effective. In this work we propose a direct-detection coherent receiver that combines the advantages of coherent transmission and the cost-effectiveness of direct detection. The working principle of the proposed receiver is based on the famous Kramers–Kronig (KK) relations, and its implementation requires transmitting a continuous-wave signal at one edge of the information-carrying signal spectrum. The KK receiver scheme allows digital postcompensation of linear propagation impairments and, as compared to other existing solutions, is more efficient in terms of spectral occupancy and energy consumption.
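The key signal-processing step can be sketched numerically: for a field consisting of a strong carrier plus a single-sideband signal, the Kramers–Kronig relations give the phase as the Hilbert transform of the log-magnitude, so the full field can be rebuilt from the photodiode intensity alone. The following is our own minimal simulation of that step, with arbitrary parameters, not the receiver design from the paper.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n = 4096
# Single-sideband "information" signal: positive-frequency content only, so that
# E(t) = A + s(t) is minimum phase when the carrier A dominates.
spec = np.zeros(n, dtype=complex)
spec[1:n // 8] = rng.normal(size=n // 8 - 1) + 1j * rng.normal(size=n // 8 - 1)
s = np.fft.ifft(spec)
s /= np.max(np.abs(s))            # normalize so |s| <= 1
A = 3.0                            # strong continuous-wave carrier
E = A + s                          # optical field seen by the photodiode

I = np.abs(E) ** 2                 # direct detection: intensity only
# KK phase retrieval: the phase of a minimum-phase signal is the Hilbert
# transform of its log-magnitude.
log_mag = 0.5 * np.log(I)
phase = np.imag(hilbert(log_mag))
E_rec = np.sqrt(I) * np.exp(1j * phase)
s_rec = E_rec - A                  # remove the carrier to recover the data signal

print(f"max reconstruction error: {np.max(np.abs(s_rec - s)):.3e}")
```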

Posted Content
TL;DR: Domain Transfer Network (DTN) as discussed by the authors employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves.
Abstract: We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domain, would remain unchanged. Other than the function f, the training data is unsupervised and consists of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.
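Below is a hedged PyTorch sketch of a compound generator loss of the kind described above. G, D, f, the 3-class labeling convention, and the weights alpha and beta are stand-ins for illustration and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def dtn_generator_loss(G, D, f, x_s, x_t, alpha=1.0, beta=1.0):
    """Sketch of a DTN-style compound generator loss (illustrative, not the authors' code).

    G: generator mapping f-representations to target-domain samples.
    D: multiclass discriminator with logits over {G(f(S)), G(f(T)), real T} = {0, 1, 2}.
    f: fixed perceptual function shared by both domains.
    alpha, beta: weights of the f-constancy and identity terms (hypothetical values).
    """
    g_s = G(f(x_s))                                   # source samples transferred to T
    g_t = G(f(x_t))                                   # target samples passed through G

    real = lambda x: torch.full((x.size(0),), 2, dtype=torch.long, device=x.device)
    # Adversarial term: G wants D to classify both generated batches as "real T" (class 2).
    l_gan = F.cross_entropy(D(g_s), real(g_s)) + F.cross_entropy(D(g_t), real(g_t))
    # f-constancy: the representation of a source sample should survive the transfer.
    l_const = F.mse_loss(f(g_s), f(x_s))
    # Identity regularizer: G should map target-domain samples to themselves.
    l_tid = F.mse_loss(g_t, x_t)
    return l_gan + alpha * l_const + beta * l_tid
```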

Journal ArticleDOI
TL;DR: The discovery that rumen microbiome components are tightly linked to cows' ability to extract energy from their feed, termed feed efficiency, is reported.
Abstract: Ruminants have the remarkable ability to convert human-indigestible plant biomass into human-digestible food products, due to a complex microbiome residing in the rumen compartment of their upper digestive tract. Here we report the discovery that rumen microbiome components are tightly linked to cows' ability to extract energy from their feed, termed feed efficiency. Feed efficiency was measured in 146 milking cows and analyses of the taxonomic composition, gene content, microbial activity and metabolomic composition were performed on the rumen microbiomes from the 78 most extreme animals. Lower richness of microbiome gene content and taxa was tightly linked to higher feed efficiency. Microbiome genes and species accurately predicted the animals' feed efficiency phenotype. Specific enrichment of microbes and metabolic pathways in each of these microbiome groups resulted in better energy and carbon channeling to the animal, while lowering methane emissions to the atmosphere. This ecological and mechanistic understanding of the rumen microbiome could lead to an increase in available food resources and environmentally friendly livestock agriculture.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4  +2828 moreInstitutions (191)
TL;DR: In this article, the performance of the ATLAS muon identification and reconstruction is evaluated using the first LHC dataset recorded at √s = 13 TeV in 2015 and compared to Monte Carlo simulations.
Abstract: This article documents the performance of the ATLAS muon identification and reconstruction using the first LHC dataset recorded at √s = 13 TeV in 2015. Using a large sample of J/ψ→μμ and Z→μμ decays from 3.2 fb−1 of pp collision data, measurements of the reconstruction efficiency, as well as of the momentum scale and resolution, are presented and compared to Monte Carlo simulations. The reconstruction efficiency is measured to be close to 99% over most of the covered phase space (|η| < 2.5). For |η| > 2.2, the pT resolution for muons from Z→μμ decays is 2.9%, while the precision of the momentum scale for low-pT muons from J/ψ→μμ decays is about 0.2%.

Proceedings ArticleDOI
14 Mar 2016
TL;DR: Item2vec as mentioned in this paper is an item-based collaborative filtering method based on skip-gram with negative sampling (SGNS) that produces embedding for items in a latent space.
Abstract: Many Collaborative Filtering (CF) algorithms are item-based in the sense that they analyze item-item relations in order to produce item similarities. Recently, several works in the field of Natural Language Processing (NLP) suggested learning a latent representation of words using neural embedding algorithms. Among them, the Skip-gram with Negative Sampling (SGNS), also known as word2vec, was shown to provide state-of-the-art results on various linguistics tasks. In this paper, we show that item-based CF can be cast in the same framework of neural word embedding. Inspired by SGNS, we describe a method we name item2vec for item-based CF that produces embeddings for items in a latent space. The method is capable of inferring item-item relations even when user information is not available. We present experimental results that demonstrate the effectiveness of the item2vec method and show it is competitive with SVD.
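Since item2vec reuses the SGNS machinery of word2vec with item sets playing the role of sentences, a minimal sketch can lean on an off-the-shelf implementation. The example below uses the gensim library (our illustration, assuming gensim 4.x parameter names); the large window is a simple way to approximate treating all items in a set as mutual contexts.

```python
from gensim.models import Word2Vec

# Each "sentence" is an (unordered) set of items consumed by one user / one session.
baskets = [
    ["item_a", "item_b", "item_c"],
    ["item_b", "item_c", "item_d"],
    ["item_a", "item_d", "item_e"],
]

# Skip-gram with negative sampling (SGNS), as in word2vec.
model = Word2Vec(
    sentences=baskets,
    vector_size=32,   # embedding dimensionality
    sg=1,             # skip-gram
    negative=5,       # negative sampling
    window=100,       # effectively "the whole basket"
    min_count=1,
    epochs=50,
)

print(model.wv.most_similar("item_b", topn=3))
```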

Journal ArticleDOI
TL;DR: The work carried out by the task force toward identifying challenges and opportunities in the development of technologies with potential for improving the clinical management and the quality of life of individuals with PD is summarized.
Abstract: The miniaturization, sophistication, proliferation, and accessibility of technologies are enabling the capture of more and previously inaccessible phenomena in Parkinson's disease (PD). However, more information has not translated into a greater understanding of disease complexity to satisfy diagnostic and therapeutic needs. Challenges include noncompatible technology platforms, the need for wide-scale and long-term deployment of sensor technology (among vulnerable elderly patients in particular), and the gap between the "big data" acquired with sensitive measurement technologies and their limited clinical application. Major opportunities could be realized if new technologies are developed as part of open-source and/or open-hardware platforms that enable multichannel data capture sensitive to the broad range of motor and nonmotor problems that characterize PD and are adaptable into self-adjusting, individualized treatment delivery systems. The International Parkinson and Movement Disorders Society Task Force on Technology is entrusted to convene engineers, clinicians, researchers, and patients to promote the development of integrated measurement and closed-loop therapeutic systems with high patient adherence that also serve to (1) encourage the adoption of clinico-pathophysiologic phenotyping and early detection of critical disease milestones, (2) enhance the tailoring of symptomatic therapy, (3) improve subgroup targeting of patients for future testing of disease-modifying treatments, and (4) identify objective biomarkers to improve the longitudinal tracking of impairments in clinical care and research. This article summarizes the work carried out by the task force toward identifying challenges and opportunities in the development of technologies with potential for improving the clinical management and the quality of life of individuals with PD. © 2016 International Parkinson and Movement Disorder Society.

Journal ArticleDOI
12 Jan 2016-JAMA
TL;DR: Kidney failure risk equations developed in a Canadian population showed high discrimination and adequate calibration when validated in 31 multinational cohorts, but the original risk equations overestimated risk in some non-North American cohorts.
Abstract: Importance Identifying patients at risk of chronic kidney disease (CKD) progression may facilitate more optimal nephrology care. Kidney failure risk equations, including such factors as age, sex, estimated glomerular filtration rate, and calcium and phosphate concentrations, were previously developed and validated in 2 Canadian cohorts. Validation in other regions and in CKD populations not under the care of a nephrologist is needed. Objective To evaluate the accuracy of the risk equations across different geographic regions and patient populations through individual participant data meta-analysis. Data Sources Thirty-one cohorts, including 721 357 participants with CKD stages 3 to 5 in more than 30 countries spanning 4 continents, were studied. These cohorts collected data from 1982 through 2014. Study Selection Cohorts participating in the CKD Prognosis Consortium with data on end-stage renal disease. Data Extraction and Synthesis Data were obtained and statistical analyses were performed between July 2012 and June 2015. Using the risk factors from the original risk equations, cohort-specific hazard ratios were estimated and combined using random-effects meta-analysis to form new pooled kidney failure risk equations. Original and pooled kidney failure risk equation performance was compared, and the need for regional calibration factors was assessed. Main Outcomes and Measures Kidney failure (treatment by dialysis or kidney transplant). Results During a median follow-up of 4 years of 721 357 participants with CKD, 23 829 cases of kidney failure were observed. The original risk equations achieved excellent discrimination (ability to differentiate those who developed kidney failure from those who did not) across all cohorts (overall C statistic, 0.90; 95% CI, 0.89-0.92 at 2 years; C statistic at 5 years, 0.88; 95% CI, 0.86-0.90); discrimination in subgroups by age, race, and diabetes status was similar. There was no improvement with the pooled equations. Calibration (the difference between observed and predicted risk) was adequate in North American cohorts, but the original risk equations overestimated risk in some non-North American cohorts. Addition of a calibration factor that lowered the baseline risk by 32.9% at 2 years and 16.5% at 5 years improved the calibration in 12 of 15 and 10 of 13 non-North American cohorts at 2 and 5 years, respectively (P = .04 and P = .02). Conclusions and Relevance Kidney failure risk equations developed in a Canadian population showed high discrimination and adequate calibration when validated in 31 multinational cohorts. However, in some regions the addition of a calibration factor may be necessary.

Journal ArticleDOI
Cristian Pattaro, Alexander Teumer1, Mathias Gorski2, Audrey Y. Chu3  +732 moreInstitutions (157)
TL;DR: A meta-analysis of genome-wide association studies for estimated glomerular filtration rate suggests that genetic determinants of eGFR are mediated largely through direct effects within the kidney and highlight important cell types and biological pathways.
Abstract: Reduced glomerular filtration rate defines chronic kidney disease and is associated with cardiovascular and all-cause mortality. We conducted a meta-analysis of genome-wide association studies for estimated glomerular filtration rate (eGFR), combining data across 133,413 individuals with replication in up to 42,166 individuals. We identify 24 new and confirm 29 previously identified loci. Of these 53 loci, 19 associate with eGFR among individuals with diabetes. Using bioinformatics, we show that identified genes at eGFR loci are enriched for expression in kidney tissues and in pathways relevant for kidney development and transmembrane transporter activity, kidney structure, and regulation of glucose metabolism. Chromatin state mapping and DNase I hypersensitivity analyses across adult tissues demonstrate preferential mapping of associated variants to regulatory regions in kidney but not extra-renal tissues. These findings suggest that genetic determinants of eGFR are mediated largely through direct effects within the kidney and highlight important cell types and biological pathways.

Proceedings ArticleDOI
16 Jul 2016
TL;DR: A novel deep learning method that improves the belief propagation algorithm by assigning trainable weights to the edges of the Tanner graph; this allows training on only a single codeword instead of an exponential number of codewords.
Abstract: A novel deep learning method for improving the belief propagation algorithm is proposed. The method generalizes the standard belief propagation algorithm by assigning weights to the edges of the Tanner graph. These weights are then trained using deep learning techniques. A well-known property of the belief propagation algorithm is the independence of the performance on the transmitted codeword. A crucial property of our new method is that our decoder preserves this property. Furthermore, this property allows us to train on only a single codeword instead of an exponential number of codewords. Improvements over the belief propagation algorithm are demonstrated for various high density parity check codes.
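The sketch below illustrates the core idea of one belief-propagation iteration with trainable edge weights in PyTorch. The parameterization (which messages get weighted, the matrix-based message layout) is our own simplification for illustration, not the authors' exact formulation; in practice the weights would be nn.Parameter tensors trained with a cross-entropy loss on the soft outputs.

```python
import torch

def weighted_bp_iteration(llr, H, w_edge, w_llr, C_prev):
    """One belief-propagation iteration with trainable edge weights (sketch).

    llr:    (n,) channel log-likelihood ratios
    H:      (m, n) 0/1 parity-check matrix (float tensor)
    w_edge: (m, n) trainable weights on check-to-variable messages (used on edges only)
    w_llr:  (n,) trainable weights on the channel LLRs
    C_prev: (m, n) check-to-variable messages from the previous iteration (zeros at start)
    """
    mask = H.bool()

    # Variable-to-check messages: weighted channel LLR plus weighted incoming check
    # messages, excluding the edge being updated (extrinsic information principle).
    weighted_in = w_edge * C_prev * H
    total = w_llr * llr + weighted_in.sum(dim=0)          # (n,)
    V = (total.unsqueeze(0) - weighted_in) * H            # (m, n)

    # Check-to-variable messages: tanh-product rule over the other edges of each check.
    t = torch.tanh(V / 2).clamp(-0.999, 0.999)
    t = torch.where(t.abs() < 1e-7, torch.full_like(t, 1e-7), t)  # avoid division by zero
    t = torch.where(mask, t, torch.ones_like(t))          # neutral element off the graph
    prod_all = t.prod(dim=1, keepdim=True)                # (m, 1) product per check node
    C = 2 * torch.atanh((prod_all / t).clamp(-0.999, 0.999)) * H

    # Soft output; thanks to the symmetry noted in the abstract, training can use
    # the all-zeros codeword only.
    soft = w_llr * llr + (w_edge * C * H).sum(dim=0)
    return C, soft
```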

Journal ArticleDOI
TL;DR: The presence of volatile glycine accompanied by methylamine and ethylamines in the coma of 67P/Churyumov-Gerasimenko measured by the ROSINA (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis) mass spectrometer demonstrates that comets could have played a crucial role in the emergence of life on Earth.
Abstract: The importance of comets for the origin of life on Earth has been advocated for many decades. Amino acids are key ingredients in the chemistry that leads to life as we know it. Many primitive meteorites contain amino acids, and it is generally believed that these are formed by aqueous alterations. In the collector aerogel and foil samples of the Stardust mission after the flyby at comet Wild 2, the simplest form of amino acids, glycine, has been found together with precursor molecules methylamine and ethylamine. Because of contamination issues with the samples, a cometary origin was deduced from the 13C isotopic signature. We report the presence of volatile glycine accompanied by methylamine and ethylamine in the coma of 67P/Churyumov-Gerasimenko measured by the ROSINA (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis) mass spectrometer, confirming the Stardust results. Together with the detection of phosphorus and a multitude of organic molecules, this result demonstrates that comets could have played a crucial role in the emergence of life on Earth.

Journal ArticleDOI
TL;DR: In this article, the authors reviewed predictors of loneliness in the older population as described in the current literature and a small qualitative study and conducted two focus groups were conducted asking older participants about the causes of loneliness.
Abstract: BACKGROUND: Older persons are particularly vulnerable to loneliness because of common age-related changes and losses. This paper reviews predictors of loneliness in the older population as described in the current literature and a small qualitative study. METHODS: Peer-reviewed journal articles were identified from psycINFO, MEDLINE, and Google Scholar from 2000-2012. Overall, 38 articles were reviewed. Two focus groups were conducted asking older participants about the causes of loneliness. RESULTS: Variables significantly associated with loneliness in older adults were: female gender, non-married status, older age, poor income, lower educational level, living alone, low quality of social relationships, poor self-reported health, and poor functional status. Psychological attributes associated with loneliness included poor mental health, low self-efficacy beliefs, negative life events, and cognitive deficits. These associations were mainly studied in cross-sectional studies. In the focus groups, participants mentioned environmental barriers, unsafe neighborhoods, migration patterns, inaccessible housing, and inadequate resources for socializing. Other issues raised in the focus groups were the relationship between loneliness and boredom and inactivity, the role of recent losses of family and friends, as well as mental health issues, such as shame and fear. CONCLUSIONS: Future quantitative studies are needed to examine the impact of physical and social environments on loneliness in this population. It is important to better map the multiple factors and ways by which they impact loneliness to develop better solutions for public policy, city, and environmental planning, and individually based interventions. This effort should be viewed as a public health priority.

Journal ArticleDOI
TL;DR: The inflammatory reaction to be expected following implantation of PLA is reviewed, and specific cases in which the inflammatory reaction can result in safety concerns are highlighted.