Showing papers from "Université de Sherbrooke" published in 2017
••
24 Apr 2017
TL;DR: In this paper, an LSTM-based meta-learner model is proposed to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime.
Abstract: Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.
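The core mechanism can be sketched as an LSTM-style update in which the learner's parameters play the role of the cell state. In the toy sketch below, the gate weights `W_f` and `W_i` are hypothetical stand-ins for the trained meta-learner, not the paper's parametrization:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def meta_update(theta, grad, loss, W_f, W_i):
    """One meta-learner step: the learner's parameters act as the LSTM
    cell state. A learned forget gate decides how much of the previous
    parameters to keep, and a learned input gate sets a per-parameter
    step size for the gradient. W_f and W_i are hypothetical gate
    weights standing in for the trained meta-learner."""
    # Per-parameter input features: current gradient, loss, parameter value.
    features = np.stack([grad, np.full_like(grad, loss), theta], axis=-1)
    f_t = sigmoid(features @ W_f)  # forget gate
    i_t = sigmoid(features @ W_i)  # input gate (learned step size)
    return f_t * theta - i_t * grad

rng = np.random.default_rng(0)
theta = rng.standard_normal(5)
grad = rng.standard_normal(5)
theta_new = meta_update(theta, grad, loss=1.2,
                        W_f=rng.standard_normal(3),
                        W_i=rng.standard_normal(3))
print(theta_new.shape)  # (5,)
```

Note that with the forget gate pinned near 1 and the input gate fixed to a constant learning rate, this reduces to ordinary gradient descent, which is why such a parametrization can learn both an update rule and an initialization.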
2,981 citations
••
TL;DR: A fast and accurate fully automatic method for brain tumor segmentation that is competitive with the state of the art in both accuracy and speed, and introduces a novel cascaded architecture that allows the system to more accurately model local label dependencies.
2,538 citations
01 Jan 2017
TL;DR: A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions; the adaptation can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
Abstract: We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.
The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages.
We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.
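The gradient reversal layer at the heart of this approach has a very small contract: identity on the forward pass, negated (and scaled) gradient on the backward pass. A minimal NumPy sketch of that contract (not the authors' code):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales the incoming gradient by
    -lam in the backward pass, so the feature extractor beneath it is
    trained to *maximize* the domain classifier's loss. A minimal
    sketch of the layer's contract, not the authors' implementation."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
g = np.array([0.1, 0.2, -0.3])
out = grl.forward(x)    # identical to x
back = grl.backward(g)  # values: [-0.05, -0.1, 0.15]
print(out, back)
```

In a full implementation the same two methods would be registered as a custom autograd operation between the feature extractor and the domain classifier, so standard backpropagation handles everything else.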
1,713 citations
••
TL;DR: The InTBIR Participants and Investigators have provided informed consent for the study to take place in Poland.
Abstract: Additional co-authors: Endre Czeiter, Marek Czosnyka, Ramon Diaz-Arrastia, Jens P Dreier, Ann-Christine Duhaime, Ari Ercole, Thomas A van Essen, Valery L Feigin, Guoyi Gao, Joseph Giacino, Laura E Gonzalez-Lara, Russell L Gruen, Deepak Gupta, Jed A Hartings, Sean Hill, Ji-yao Jiang, Naomi Ketharanathan, Erwin J O Kompanje, Linda Lanyon, Steven Laureys, Fiona Lecky, Harvey Levin, Hester F Lingsma, Marc Maegele, Marek Majdan, Geoffrey Manley, Jill Marsteller, Luciana Mascia, Charles McFadyen, Stefania Mondello, Virginia Newcombe, Aarno Palotie, Paul M Parizel, Wilco Peul, James Piercy, Suzanne Polinder, Louis Puybasset, Todd E Rasmussen, Rolf Rossaint, Peter Smielewski, Jeannette Soderberg, Simon J Stanworth, Murray B Stein, Nicole von Steinbuchel, William Stewart, Ewout W Steyerberg, Nino Stocchetti, Anneliese Synnot, Braden Te Ao, Olli Tenovuo, Alice Theadom, Dick Tibboel, Walter Videtta, Kevin K W Wang, W Huw Williams, Kristine Yaffe for the InTBIR Participants and Investigators
1,354 citations
••
German Cancer Research Center1, Université de Sherbrooke2, University Health Network3, University of Pittsburgh4, IMT Institute for Advanced Studies Lucca5, St. Jude Children's Research Hospital6, University of Toronto7, Zhejiang University of Technology8, Harvard University9, Utrecht University10, Université de Montréal11, National Research Council12, University of Washington13, University of Western Ontario14, École Polytechnique Fédérale de Lausanne15, ETSI16, Siemens17, University of Southern California18, King's College London19, University of Bordeaux20, Centre national de la recherche scientifique21, Copenhagen University Hospital22, University of Hamburg23, University of Basel24
TL;DR: The encouraging finding that most state-of-the-art algorithms produce tractograms containing 90% of the ground truth bundles (to at least some extent) is reported, however, the same tractograms contain many more invalid than valid bundles, and half of these invalid bundles occur systematically across research groups.
Abstract: Tractography based on non-invasive diffusion imaging is central to the study of human brain connectivity. To date, the approach has not been systematically validated in ground truth studies. Based on a simulated human brain data set with ground truth tracts, we organized an open international tractography challenge, which resulted in 96 distinct submissions from 20 research groups. Here, we report the encouraging finding that most state-of-the-art algorithms produce tractograms containing 90% of the ground truth bundles (to at least some extent). However, the same tractograms contain many more invalid than valid bundles, and half of these invalid bundles occur systematically across research groups. Taken together, our results demonstrate and confirm fundamental ambiguities inherent in tract reconstruction based on orientation information alone, which need to be considered when interpreting tractography and connectivity results. Our approach provides a novel framework for estimating reliability of tractography and encourages innovation to address its current limitations.
996 citations
••
TL;DR: The HMSC framework is operationalised as a hierarchical Bayesian joint species distribution model, implemented as R and Matlab packages that enable computationally efficient analyses of large data sets.
Abstract: Community ecology aims to understand what factors determine the assembly and dynamics of species assemblages at different spatiotemporal scales. To facilitate the integration between conceptual and statistical approaches in community ecology, we propose Hierarchical Modelling of Species Communities (HMSC) as a general, flexible framework for modern analysis of community data. While non-manipulative data allow for only correlative and not causal inference, this framework facilitates the formulation of data-driven hypotheses regarding the processes that structure communities. We model environmental filtering by variation and covariation in the responses of individual species to the characteristics of their environment, with potential contingencies on species traits and phylogenetic relationships. We capture biotic assembly rules by species-to-species association matrices, which may be estimated at multiple spatial or temporal scales. We operationalise the HMSC framework as a hierarchical Bayesian joint species distribution model, and implement it as R- and Matlab-packages which enable computationally efficient analyses of large data sets. Armed with this tool, community ecologists can make sense of many types of data, including spatially explicit data and time-series data. We illustrate the use of this framework through a series of diverse ecological examples.
588 citations
••
21 Jul 2017
TL;DR: A simplified convolutional neural network is proposed that combines local and global information through a multi-resolution 4×5 grid structure and implements a loss function inspired by the Mumford-Shah functional, which penalizes errors on the boundary and enables near real-time, high-performance saliency detection.
Abstract: Saliency detection aims to highlight the most relevant objects in an image. Methods using conventional models struggle whenever salient objects are pictured on top of a cluttered background while deep neural nets suffer from excess complexity and slow evaluation speeds. In this paper, we propose a simplified convolutional neural network which combines local and global information through a multi-resolution 4×5 grid structure. Instead of enforcing spatial coherence with a CRF or superpixels as is usually the case, we implemented a loss function inspired by the Mumford-Shah functional which penalizes errors on the boundary. We trained our model on the MSRA-B dataset, and tested it on six different saliency benchmark datasets. Results show that our method is on par with the state-of-the-art while reducing computation time by a factor of 18 to 100, enabling near real-time, high performance saliency detection.
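One way to realize a boundary-penalizing loss of this kind is to up-weight the per-pixel error at ground-truth boundary pixels. The sketch below is an illustration of that idea only, with a crude finite-difference boundary test and an arbitrary weight, not the paper's Mumford-Shah term:

```python
import numpy as np

def boundary_weighted_loss(pred, target, w_boundary=5.0):
    """Per-pixel cross-entropy, up-weighted on ground-truth boundary
    pixels. The boundary test is a simple finite difference and
    w_boundary is illustrative; the functional-based term in the
    paper is more principled."""
    eps = 1e-7
    # Boundary map: pixels where the binary target changes value.
    dy = np.abs(np.diff(target, axis=0, prepend=target[:1]))
    dx = np.abs(np.diff(target, axis=1, prepend=target[:, :1]))
    weights = 1.0 + w_boundary * ((dy + dx) > 0)
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float(np.mean(weights * bce))

target = np.zeros((4, 4))
target[1:3, 1:3] = 1.0                     # a 2x2 salient square
good = np.clip(target * 0.9 + 0.05, 0, 1)  # confident, mostly correct map
bad = 1.0 - good                           # inverted prediction
print(boundary_weighted_loss(good, target) < boundary_weighted_loss(bad, target))  # True
```

Because the penalty concentrates on the object contour, it discourages the blurry edges that per-pixel losses tolerate, without the cost of a CRF or superpixel post-processing step.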
505 citations
••
TL;DR: GSK2586881 was well-tolerated in patients with ARDS, and the rapid modulation of RAS peptides suggests target engagement, although the study was not powered to detect changes in acute physiology or clinical outcomes.
Abstract: Renin-angiotensin system (RAS) signaling and angiotensin-converting enzyme 2 (ACE2) have been implicated in the pathogenesis of acute respiratory distress syndrome (ARDS). We postulated that repleting ACE2 using GSK2586881, a recombinant form of human angiotensin-converting enzyme 2 (rhACE2), could attenuate acute lung injury. We conducted a two-part phase II trial comprising an open-label intrapatient dose escalation and a randomized, double-blind, placebo-controlled phase in ten intensive care units in North America. Patients were between the ages of 18 and 80 years, had an American-European Consensus Criteria consensus diagnosis of ARDS, and had been mechanically ventilated for less than 72 h. In part A, open-label GSK2586881 was administered at doses from 0.1 mg/kg to 0.8 mg/kg to assess safety, pharmacokinetics, and pharmacodynamics. Following review of data from part A, a randomized, double-blind, placebo-controlled investigation of twice-daily doses of GSK2586881 (0.4 mg/kg) for 3 days was conducted (part B). Biomarkers, physiological assessments, and clinical endpoints were collected over the dosing period and during follow-up. Dose escalation in part A was well-tolerated without clinically significant hemodynamic changes. Part B was terminated after 39 of the planned 60 patients following a planned futility analysis. Angiotensin II levels decreased rapidly following infusion of GSK2586881, whereas angiotensin-(1–7) and angiotensin-(1–5) levels increased and remained elevated for 48 h. Surfactant protein D concentrations were increased, whereas there was a trend for a decrease in interleukin-6 concentrations in rhACE2-treated subjects compared with placebo. No significant differences were noted in ratio of partial pressure of arterial oxygen to fraction of inspired oxygen, oxygenation index, or Sequential Organ Failure Assessment score. 
GSK2586881 was well-tolerated in patients with ARDS, and the rapid modulation of RAS peptides suggests target engagement, although the study was not powered to detect changes in acute physiology or clinical outcomes. ClinicalTrials.gov, NCT01597635. Registered on 26 January 2012.
497 citations
••
University of Alberta1, Montreal Heart Institute2, University Health Network3, McGill University4, McMaster University5, University of Waterloo6, University of Calgary7, Université de Sherbrooke8, University of Western Ontario9, St. Michael's Hospital10, Halifax11, Royal Jubilee Hospital12, University of British Columbia13, St. Boniface General Hospital14, Sunnybrook Health Sciences Centre15, University of Saskatchewan16, Durham University17, Laval University18, University of Colorado Boulder19, Université de Montréal20, St. John's University21
TL;DR: The 2017 HF guidelines provide updated guidance on diagnosis and management that should aid day-to-day decisions in caring for patients with HF, with attention to strategies and treatments to prevent HF, the organization of HF care, comorbidity management, and practical issues around the timing of referral and follow-up care.
465 citations
••
TL;DR: Several examples of type I and type II photosensitized oxidation reactions are provided to illustrate the complexity and the diversity of the degradation pathways of the most relevant biomolecules upon one-electron oxidation and singlet oxygen reactions.
Abstract: Here, 10 guidelines are presented for a standardized definition of type I and type II photosensitized oxidation reactions. Because of varied notions of reactions mediated by photosensitizers, a checklist of recommendations is provided for their definitions. Type I and type II photoreactions are oxygen-dependent and involve unstable species such as the initial formation of radical cations or neutral radicals from the substrates and/or singlet oxygen (1O2, 1∆g) by energy transfer to molecular oxygen. In addition, the superoxide anion radical (O2·−) can be generated by a charge-transfer reaction involving O2, or more likely indirectly as the result of O2-mediated oxidation of the radical anion of type I photosensitizers. In subsequent reactions, O2·− may add to and/or reduce a few highly oxidizing radicals that arise from the deprotonation of the radical cations of key biological targets. O2·− can also undergo dismutation into H2O2, the precursor of the highly reactive hydroxyl radical (·OH) that may induce delayed oxidation reactions in cells. In the second part, several examples of type I and type II photosensitized oxidation reactions are provided to illustrate the complexity and the diversity of the degradation pathways of the most relevant biomolecules upon one-electron oxidation and singlet oxygen reactions.
461 citations
••
TL;DR: Higher total fruit, vegetable, and legume intake was inversely associated with major cardiovascular disease, myocardial infarction, cardiovascular mortality, non-cardiovascular mortality, and total mortality in the models adjusted for age, sex, and centre (random effect).
••
University of Lübeck1, Technische Universität München2, University of Bern3, Pakistan Institute of Nuclear Science and Technology4, Imperial College London5, Katholieke Universiteit Leuven6, Université de Sherbrooke7, University Medical Center Freiburg8, Northeastern University (China)9, German Cancer Research Center10, Aalto University11, University of Helsinki12, Old Dominion University13, National Taiwan University of Science and Technology14, Chalmers University of Technology15, Johns Hopkins University16, École Polytechnique de Montréal17
TL;DR: This paper proposes a common evaluation framework for automatic stroke lesion segmentation from MRI, describes the publicly available datasets, and presents the results of the two sub-challenges: Sub-Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES).
••
TL;DR: A theoretical and statistical framework to determine DNA termini and phage packaging mechanisms using NGS data is developed and validated using a set of phages with well-established packaging mechanisms representative of the termini diversity.
Abstract: The worrying rise of antibiotic resistance in pathogenic bacteria is leading to a renewed interest in bacteriophages as a treatment option. Novel sequencing technologies enable description of an increasing number of phage genomes, a critical piece of information to understand their life cycle, phage-host interactions, and evolution. In this work, we demonstrate how it is possible to recover more information from sequencing data than just the phage genome. We developed a theoretical and statistical framework to determine DNA termini and phage packaging mechanisms using NGS data. Our method relies on the detection of biases in the number of reads, which are observable at natural DNA termini compared with the rest of the phage genome. We implemented our method with the creation of the software PhageTerm and validated it using a set of phages with well-established packaging mechanisms representative of the termini diversity, i.e. 5′cos (Lambda), 3′cos (HK97), pac (P1), headful without a pac site (T4), DTR (T7) and host fragment (Mu). In addition, we determined the termini of nine Clostridium difficile phages and six phages whose sequences were retrieved from the Sequence Read Archive. PhageTerm is freely available (https://sourceforge.net/projects/phageterm), as a Galaxy ToolShed and on a Galaxy-based server (https://galaxy.pasteur.fr).
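The statistical signal PhageTerm exploits, an excess of reads starting at natural termini relative to uniform coverage, can be illustrated with a toy z-score test (the published method is more sophisticated):

```python
import numpy as np

def detect_termini(read_starts, genome_len, z_thresh=6.0):
    """Flag positions where far more reads *begin* than a uniform-coverage
    model predicts -- the read-start bias that marks natural DNA termini.
    A toy z-score version of the statistical idea, not the published
    PhageTerm method."""
    counts = np.bincount(read_starts, minlength=genome_len).astype(float)
    z = (counts - counts.mean()) / (counts.std() + 1e-9)
    return np.flatnonzero(z > z_thresh)

rng = np.random.default_rng(1)
genome_len = 1000
starts = rng.integers(0, genome_len, size=5000)        # uniform background
starts = np.concatenate([starts, np.full(300, 250)])   # pile-up at a terminus
print(detect_termini(starts, genome_len))  # [250]
```

Real data would additionally require strand-aware counting and a proper significance model to distinguish cos, pac, DTR, and headful packaging signatures, as the paper describes.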
••
TL;DR: In this paper, the authors explore the application of LCA to the various areas of the buildings sector and discuss related challenges and research opportunities.
Abstract: Considering the vast and rapidly growing area of buildings, LCA research is being led in numerous different areas ranging from the building materials and components level to whole-building analysis. This review aims to explore the application of LCA to the various areas in the buildings sector. The areas of embodied energy and building certification systems have seen the most growth in recent years. Related challenges and research opportunities from these and other areas that require research are discussed. This paper also reviews the use of LCA in the buildings industry and reports the associated developments and future research opportunities. The research areas identified include comparison issues of LCA studies, differences between calculated and actual impacts, refurbishment analysis for whole buildings, system boundary selection procedures, standard data collection procedures, missing data, embodied energy indicators, deconstruction analysis, implementation of dynamic LCA, use of LCA in industry, and differences in results between LCA-integrated certification and LCA of buildings.
••
International Sleep Products Association1, United States Department of Agriculture2, Goddard Space Flight Center3, Katholieke Universiteit Leuven4, European Centre for Medium-Range Weather Forecasts5, Monash University6, University of Rome Tor Vergata7, University of Toulouse8, Netherlands Space Office9, Mississippi State University10, Jet Propulsion Laboratory11, Université de Sherbrooke12
TL;DR: In this paper, the authors present a review of the significant progress which has been made over the last decade in this field of research with a focus on L-band, and a discussion on possible applications to the SMOS and SMAP soil moisture retrieval approaches.
••
University of Milan1, Universidade Federal de Minas Gerais2, University of Insubria3, Max Planck Society4, National University of Cordoba5, University of Oldenburg6, Université de Sherbrooke7, Smithsonian Tropical Research Institute8, Leiden University9, Moscow State University10, University of Wisconsin–Eau Claire11, Federal University of Pernambuco12, VU University Amsterdam13, University of Sheffield14, University of Exeter15, College of African Wildlife Management16, University of Western Sydney17, University of Minnesota18, University of California, Davis19, University of Alaska Fairbanks20, University of New South Wales21, Chinese Academy of Sciences22, Stanford University23, Centre national de la recherche scientifique24, Gembloux Agro-Bio Tech25, Spanish National Research Council26
TL;DR: The CSR strategies of vascular plants can be compared quantitatively within and between biomes at the global scale and the strategy–environment relationships it elucidates will help to predict which kinds of species may assemble in response to changes in biogeochemical cycles, climate and land use.
Abstract: Competitor, stress-tolerator, ruderal (CSR) theory is a prominent plant functional strategy scheme previously applied to local floras. Globally, the wide geographic and phylogenetic coverage of available values of leaf area (LA), leaf dry matter content (LDMC) and specific leaf area (SLA) (representing, respectively, interspecific variation in plant size and conservative vs. acquisitive resource economics) promises the general application of CSR strategies across biomes, including the tropical forests hosting a large proportion of Earth's diversity.
We used trait variation for 3068 tracheophytes (representing 198 families, six continents and 14 biomes) to create a globally calibrated CSR strategy calculator tool and investigate strategy–environment relationships across biomes world-wide.
Due to disparity in trait availability globally, co-inertia analysis was used to check correspondence between a ‘wide geographic coverage, few traits’ data set and a ‘restricted coverage, many traits’ subset of 371 species for which 14 whole-plant, flowering, seed and leaf traits (including leaf nitrogen content) were available. CSR strategy/environment relationships within biomes were investigated using fourth-corner and RLQ analyses to determine strategy/climate specializations.
Strong, significant concordance (RV = 0.597; P < 0.0001) was evident between the 14-trait multivariate space and when only LA, LDMC and SLA were used.
Biomes such as tropical moist broadleaf forests exhibited strategy convergence (i.e. clustered around a CS/CSR median; C:S:R = 43:42:15%), with CS-selection associated with warm, stable situations (lesser temperature seasonality), with greater annual precipitation and potential evapotranspiration. Other biomes were characterized by strategy divergence: for example, deserts varied between xeromorphic perennials such as Larrea divaricata, classified as S-selected (C:S:R = 1:99:0%) and broadly R-selected annual herbs (e.g. Claytonia perfoliata; R/CR-selected; C:S:R = 21:0:79%). Strategy convergence was evident for several growth habits (e.g. trees) but not others (forbs).
The CSR strategies of vascular plants can now be compared quantitatively within and between biomes at the global scale. Through known linkages between underlying leaf traits and growth rates, herbivory and decomposition rates, this method and the strategy–environment relationships it elucidates will help to predict which kinds of species may assemble in response to changes in biogeochemical cycles, climate and land use.
••
TL;DR: A combination of prospective experimental and modeling research on precooling uniformity, responsive food inventory management systems, and cold chains in developing countries is proposed for the improvement of the cold chain at the global scale.
Abstract: The cold chain is responsible for the preservation and transportation of perishable foods in the proper temperature range to slow biological decay processes and deliver safe and high-quality foods to consumers. Studies show that the efficiency of the cold chain is often less than ideal, as temperature abuses above or below the optimal product-specific temperature range occur frequently, a situation that significantly increases food waste and endangers food safety. In this work, field studies on time-temperature conditions at each critical stage of the cold chain are reviewed to assess the current state of commercial cold chains. Precooling, ground operations during transportation, storage during display at retail and in domestic refrigerators, and commercial handling practices are identified and discussed as the major weaknesses in the modern cold chain. The improvement in efficiency achieved through the measurement, analysis, and management of time-temperature conditions is reviewed, along with the accompanying technical and practical challenges delaying the implementation of such methods. A combination of prospective experimental and modeling research on precooling uniformity, responsive food inventory management systems, and cold chains in developing countries is proposed for the improvement of the cold chain at the global scale.
••
TL;DR: The first machine learning method for ground truthing a video is proposed, based on a multi-resolution convolutional neural network with a cascaded architecture, for segmenting foreground moving objects pictured in surveillance videos.
••
TL;DR: The approach of prestoring a controllable amount of strain energy to obtain a strong and tunable photoinduced mechanical force in azobenzene LCN can be further explored for applications of light-driven polymer actuators.
Abstract: A new strategy for enhancing the photoinduced mechanical force is demonstrated using a reprocessable azobenzene-containing liquid crystalline network (LCN). The basic idea is to store mechanical strain energy in the polymer beforehand so that UV light can then be used to generate a mechanical force not only from the direct light to mechanical energy conversion upon the trans–cis photoisomerization of azobenzene mesogens but also from the light-triggered release of the prestored strain energy. It is shown that the two mechanisms can add up to result in unprecedented photoinduced mechanical force. Together with the malleability of the polymer stemming from the use of dynamic covalent bonds for chain crosslinking, large-size polymer photoactuators in the form of wheels or spring-like “motors” can be constructed, and, by adjusting the amount of prestored strain energy in the polymer, a variety of robust, light-driven motions with tunable rolling or moving direction and speed can be achieved. The approach of prestoring a controllable amount of strain energy to obtain a strong and tunable photoinduced mechanical force in azobenzene LCN can be further explored for applications of light-driven polymer actuators.
••
TL;DR: This work resolves magnon number states through spectroscopic measurements of a superconducting qubit with the hybrid system in the strong dispersive regime, enabling it to detect a change in the magnetic moment of the ferromagnet equivalent to a single spin flipped among more than 10^19 spins.
Abstract: Combining different physical systems in hybrid quantum circuits opens up novel possibilities for quantum technologies. In quantum magnonics, quanta of collective excitation modes in a ferromagnet, called magnons, interact coherently with qubits to access quantum phenomena of magnonics. We use this architecture to probe the quanta of collective spin excitations in a millimeter-sized ferromagnetic crystal. More specifically, we resolve magnon number states through spectroscopic measurements of a superconducting qubit with the hybrid system in the strong dispersive regime. This enables us to detect a change in the magnetic moment of the ferromagnet equivalent to a single spin flipped among more than 10^19 spins. Our demonstration highlights the strength of hybrid quantum systems to provide powerful tools for quantum sensing and quantum information processing.
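In the strong dispersive regime, each additional magnon shifts the qubit frequency by twice the dispersive coupling, so the number states appear as distinct spectral peaks once that shift exceeds the relevant linewidths. A toy sketch of this condition with illustrative numbers, not the experiment's parameters:

```python
def qubit_frequency(n_magnons, f_qubit=7.9e9, chi=1.5e6):
    """Qubit frequency conditioned on the magnon number state |n>:
    each magnon shifts it by 2*chi (sign convention arbitrary).
    f_qubit and chi are illustrative values, not the experiment's."""
    return f_qubit + 2 * chi * n_magnons

def number_states_resolvable(chi, linewidth):
    """Strong dispersive condition: per-magnon shift exceeds the linewidth."""
    return 2 * chi > linewidth

# One spectral peak per magnon number state |0>, |1>, |2>, ...
peaks_hz = [qubit_frequency(n) for n in range(3)]
print(number_states_resolvable(1.5e6, 0.5e6))   # True
print(number_states_resolvable(0.05e6, 0.5e6))  # False
```

Reading off which peak the qubit responds at then amounts to a projective measurement of the magnon number, which is what makes single-magnon (and hence single-spin-flip-equivalent) sensitivity possible.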
••
University of Calgary1, McGill University Health Centre2, Libin Cardiovascular Institute of Alberta3, University of British Columbia4, Montreal Children's Hospital5, Université du Québec à Trois-Rivières6, Université de Montréal7, Laval University8, McMaster University9, University of Alberta10, University of Toronto11, Ottawa Hospital Research Institute12, Hôpital Maisonneuve-Rosemont13, University of Western Ontario14, Memorial University of Newfoundland15, Centre for Addiction and Mental Health16, University of Ottawa17, McGill University18, University Health Network19, University of Saskatchewan20, University of Manitoba21, Concordia University Wisconsin22, St. Michael's Hospital23, Montreal General Hospital24, Heart and Stroke Foundation of Canada25, Dalhousie University26, Université de Sherbrooke27, Université du Québec à Montréal28, Montreal Heart Institute29, Population Health Research Institute30, Simon Fraser University31, St George's, University of London32, Centre Hospitalier Universitaire Sainte-Justine33, Children's Hospital of Eastern Ontario34
TL;DR: Hypertension Canada provides annually updated, evidence-based guidelines for the diagnosis, assessment, prevention, and treatment of hypertension, including 10 new guidelines for individuals with non-AOBP readings ≥ 140 mm Hg.
••
TL;DR: This paper investigates energy efficiency improvement for a downlink NOMA single-cell network by considering imperfect CSI, and proposes an iterative algorithm for user scheduling and power allocation to maximize the system energy efficiency.
Abstract: Non-orthogonal multiple access (NOMA) exploits successive interference cancellation technique at the receivers to improve the spectral efficiency. By using this technique, multiple users can be multiplexed on the same subchannel to achieve high sum rate. Most previous research works on NOMA systems assume perfect channel state information (CSI). However, in this paper, we investigate energy efficiency improvement for a downlink NOMA single-cell network by considering imperfect CSI. The energy efficient resource scheduling problem is formulated as a non-convex optimization problem with the constraints of outage probability limit, the maximum power of the system, the minimum user data rate, and the maximum number of multiplexed users sharing the same subchannel. Different from previous works, the maximum number of multiplexed users can be greater than two, and the imperfect CSI is first studied for resource allocation in NOMA. To efficiently solve this problem, the probabilistic mixed problem is first transformed into a non-probabilistic problem. An iterative algorithm for user scheduling and power allocation is proposed to maximize the system energy efficiency. The optimal user scheduling based on exhaustive search serves as a system performance benchmark, but it has high computational complexity. To balance the system performance and the computational complexity, a new suboptimal user scheduling scheme is proposed to schedule users on different subchannels. Based on the user scheduling scheme, the optimal power allocation expression is derived by the Lagrange approach. By transforming the fractional-form problem into an equivalent subtractive-form optimization problem, an iterative power allocation algorithm is proposed to maximize the system energy efficiency. Simulation results demonstrate that the proposed user scheduling algorithm closely attains the optimal performance.
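The fractional-to-subtractive transformation mentioned here is the classic Dinkelbach method: repeatedly maximize R(p) − q·P(p), then update q = R(p*)/P(p*) until q converges to the optimal energy efficiency. A toy single-user sketch with illustrative rate and power models; the paper's algorithm additionally handles user scheduling, multi-subchannel power allocation, and outage constraints:

```python
import math

def rate(p, gain=4.0, noise=1.0):
    """Achievable rate (bits/s/Hz) at transmit power p; toy model."""
    return math.log2(1 + gain * p / noise)

def total_power(p, p_circuit=0.5):
    """Transmit power plus a fixed circuit power."""
    return p + p_circuit

def dinkelbach(p_max=2.0, iters=30, grid=2000):
    """Maximize rate/power by iterating the subtractive problem
    max_p rate(p) - q * total_power(p), then updating q. The grid
    search stands in for the Lagrangian power-allocation step."""
    q = 0.0
    for _ in range(iters):
        best_p = max((i * p_max / grid for i in range(grid + 1)),
                     key=lambda p: rate(p) - q * total_power(p))
        q = rate(best_p) / total_power(best_p)
    return q, best_p

q_star, p_star = dinkelbach()
print(round(q_star, 3), round(p_star, 3))
```

The sequence of q values is monotonically non-decreasing and converges to the maximum rate-per-watt, which is why the subtractive reformulation can be solved iteratively instead of tackling the non-convex ratio directly.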
••
TL;DR: Results suggested that various proposed approaches to quantifying biological aging may not measure the same aspects of the aging process, and further systematic evaluation and refinement of measures of biological aging is needed to furnish outcomes for geroprotector trials.
Abstract: The geroscience hypothesis posits that therapies to slow biological processes of aging can prevent disease and extend healthy years of life. To test such "geroprotective" therapies in humans, outcome measures are needed that can assess extension of disease-free life span. This need has spurred development of different methods to quantify biological aging. But different methods have not been systematically compared in the same humans. We implemented 7 methods to quantify biological aging using repeated-measures physiological and genomic data in 964 middle-aged humans in the Dunedin Study (New Zealand; persons born 1972-1973). We studied 11 measures in total: telomere-length and erosion, 3 epigenetic-clocks and their ticking rates, and 3 biomarker-composites. Contrary to expectation, we found low agreement between different measures of biological aging. We next compared associations between biological aging measures and outcomes that geroprotective therapies seek to modify: physical functioning, cognitive decline, and subjective signs of aging, including aged facial appearance. The 71-cytosine-phosphate-guanine epigenetic clock and biomarker composites were consistently related to these aging-related outcomes. However, effect sizes were modest. Results suggested that various proposed approaches to quantifying biological aging may not measure the same aspects of the aging process. Further systematic evaluation and refinement of measures of biological aging is needed to furnish outcomes for geroprotector trials.
••
Population Health Research Institute1, Peking Union Medical College2, King Saud University3, University of the Philippines Manila4, Université de Sherbrooke5, Bayero University Kano6, UCSI University7, University of Cape Town8, University of La Frontera9, Public Health Foundation of India10, Qatar Airways11, Eduardo Mondlane University12, Northwestern University13
TL;DR: Marked regional differences in mortality in patients with heart failure persisted after multivariable adjustment for cardiac and non-cardiac factors; variations in mortality between regions could be the result of health-care infrastructure, quality and access, or environmental and genetic factors.
••
TL;DR: The articles in this Oxidatively Damaged DNA & Repair special issue detail the reactions by which intracellular DNA is oxidatively damaged, and the enzymatic repair reactions and pathways by which living organisms survive such assaults.
••
TL;DR: HPLC-based methods are appropriate for monitoring oxidative damage to DNA; the frequency of DNA lesions generated under severe oxidizing conditions, including exposure to ionizing radiation, is low: at best a few modifications per 10⁶ normal bases.
••
TL;DR: The Large Scale Movie Description Challenge (LSMDC), as discussed by the authors, is a dataset of 128,118 sentences aligned to video clips from 200 movies (around 150 h of video in total).
Abstract: Audio description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed ADs, which are temporally aligned to full length movies. In addition we also collected and aligned movie scripts used in prior work and compare the two sources of descriptions. We introduce the Large Scale Movie Description Challenge (LSMDC) which contains a parallel corpus of 128,118 sentences aligned to video clips from 200 movies (around 150 h of video in total). The goal of the challenge is to automatically generate descriptions for the movie clips. First we characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. Furthermore, we present and compare the results of several teams who participated in the challenges organized in the context of two workshops at ICCV 2015 and ECCV 2016.
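Generated clip descriptions in challenges like this are typically scored against reference ADs with n-gram overlap metrics. A toy unigram-precision function, as a simplified stand-in for BLEU-1 (the example sentences are made up, not LSMDC data):

```python
# Simplified BLEU-1-style scoring of a generated caption against a
# reference audio description (illustrative, not the challenge metric).
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate tokens that also appear in the reference,
    clipped by reference token counts."""
    cand = candidate.lower().split()
    if not cand:
        return 0.0
    ref_counts = Counter(reference.lower().split())
    matched = sum(min(n, ref_counts[tok]) for tok, n in Counter(cand).items())
    return matched / len(cand)

# hypothetical (generated caption, reference AD) pair
score = unigram_precision("someone opens the door", "someone slowly opens a door")
# 3 of 4 candidate tokens ("someone", "opens", "door") match -> 0.75
```

Real evaluations in the challenge workshops combine several such metrics (and human judgments) rather than a single unigram score.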
••
University of Toronto1, Alberta Children's Hospital2, University of Manitoba3, University of Alberta4, Mayo Clinic5, Dalhousie University6, McGill University7, Queen's University8, McMaster University9, University of Western Ontario10, Université de Sherbrooke11, Children's Hospital of Eastern Ontario12, Université de Montréal13, Janeway Children's Health and Rehabilitation Centre14, University of British Columbia15
TL;DR: This national data set provides a population-based disease incidence rate and demonstrates the protective effect of antithrombotic treatment in older children, as well as frequent long-term emerging deficits in neonates and in children with cardiac disorders.
••
TL;DR: This work reveals for the first time in humans a clear structural connectivity between the insula and the cingulate, parahippocampal, supramarginal and angular gyri as well as the precuneus and occipital regions.
Abstract: The insula is a complex structure involved in a wide range of functions. Tracing studies on nonhuman primates reveal a wide array of cortical connections in the frontal (orbitofrontal and prefrontal cortices, cingulate areas and supplementary motor area), parietal (primary and secondary somatosensory cortices) and temporal (temporal pole, auditory, prorhinal and entorhinal cortices) lobes. However, recent human tractography studies have not observed connections between the insula and the cingulate cortices, although these structures are thought to be functionally intimately connected. In this work, we try to unravel the structural connectivity between these regions and other known functionally connected structures, benefiting from a higher number of subjects and the latest state-of-the-art high angular resolution diffusion imaging (HARDI) tractography algorithms with anatomical priors. By performing an HARDI tractography analysis on 46 young normal adults, our study reveals a wide array of connections between the insula and the frontal, temporal, parietal and occipital lobes as well as limbic regions, with a rostro-caudal organization in line with tracing studies in macaques. Notably, we reveal for the first time in humans a clear structural connectivity between the insula and the cingulate, parahippocampal, supramarginal and angular gyri as well as the precuneus and occipital regions.
••
University of Bristol1, Medical Research Council2, Dartmouth College3, Pompeu Fabra University4, Erasmus University Medical Center5, Erasmus University Rotterdam6, Université de Sherbrooke7, University of California, Berkeley8, Emory University9, Norwegian Institute of Public Health10, National Institutes of Health11, University of Western Australia12, University Medical Center Groningen13, University of Paris14, North Carolina State University15, Columbia University16, University of California, San Francisco17, University of Washington18, University of Southampton19, International Agency for Research on Cancer20, Karolinska Institutet21, University of Michigan22, University of Memphis23, University of Southern Denmark24, North Carolina Central University25, Harvard University26, Kaiser Permanente27, United States Department of Health and Human Services28, Utrecht University29, University of South Carolina30, Stockholm County Council31, University of California, Davis32, Drexel University33, Duke University34, Johns Hopkins University35, Boston Children's Hospital36, Oslo University Hospital37, Southampton General Hospital38, Frederiksberg Hospital39, University of Copenhagen40
TL;DR: In this article, the association between pre-pregnancy maternal BMI and methylation at over 450,000 sites in newborn blood DNA was meta-analysed across 19 cohorts (9,340 mother-newborn pairs).
Abstract: Pre-pregnancy maternal obesity is associated with adverse offspring outcomes at birth and later in life. Individual studies have shown that epigenetic modifications such as DNA methylation could contribute. Within the Pregnancy and Childhood Epigenetics (PACE) Consortium, we meta-analysed the association between pre-pregnancy maternal BMI and methylation at over 450,000 sites in newborn blood DNA, across 19 cohorts (9,340 mother-newborn pairs). We attempted to infer causality by comparing the effects of maternal versus paternal BMI and incorporating genetic variation. In four additional cohorts (1,817 mother-child pairs), we meta-analysed the association between maternal BMI at the start of pregnancy and blood methylation in adolescents. In newborns, maternal BMI was associated with small (<0.2% per BMI unit (1 kg/m²), P < 1.06 × 10⁻⁷) methylation variation at 9,044 sites throughout the genome. Adjustment for estimated cell proportions greatly attenuated the number of significant CpGs to 104, including 86 sites common to the unadjusted model. At 72/86 sites, the direction of the association was the same in newborns and adolescents, suggesting persistence of signals. However, we found evidence for a causal intrauterine effect of maternal BMI on newborn methylation at just 8/86 sites. In conclusion, this well-powered analysis identified robust associations between maternal adiposity and variations in newborn blood DNA methylation, but these small effects may be better explained by genetic or lifestyle factors than by a causal intrauterine mechanism. This highlights the need for large-scale collaborative approaches and the application of causal inference techniques in epigenetic epidemiology.
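The per-CpG pooling across cohorts in consortia like PACE is commonly done by inverse-variance-weighted fixed-effect meta-analysis. A minimal sketch for a single CpG site, with made-up per-cohort effect estimates (not values from this study):

```python
# Inverse-variance-weighted fixed-effect meta-analysis for one CpG site.
import math

def fixed_effect_meta(betas, ses):
    """betas, ses: per-cohort effect estimates and standard errors.
    Returns the pooled effect, its standard error, and a two-sided
    p-value from the normal approximation."""
    weights = [1.0 / se ** 2 for se in ses]  # inverse-variance weights
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
    return beta, se, p

# hypothetical per-cohort estimates for a single CpG
# (% methylation change per BMI unit), three cohorts
beta, se, p = fixed_effect_meta([0.10, 0.15, 0.08], [0.05, 0.07, 0.04])
```

Cohorts with smaller standard errors (typically larger samples) get proportionally more weight, and the pooled standard error is always smaller than any single cohort's, which is what gives such consortia their power at genome-wide significance thresholds.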