
Showing papers by "University of Lincoln" published in 2018


Journal ArticleDOI
22 Jun 2018-Science
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.
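
For readers scanning the methods: the genetic correlation estimated pairwise across these 25 disorders is, in its generic textbook form, the genetic covariance between two traits normalised by their genetic variances (the notation below is ours, not the authors'):

```latex
r_g(X, Y) = \frac{\operatorname{cov}_g(X, Y)}{\sqrt{\operatorname{var}_g(X)\,\operatorname{var}_g(Y)}}
```

With phenotypes standardised to unit variance the denominator reduces to \sqrt{h^2_X h^2_Y}, which is why the heritability-based methods the abstract refers to can estimate r_g from GWAS summary statistics.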

1,357 citations


Journal ArticleDOI
14 Aug 2018-Sensors
TL;DR: A comprehensive review of research dedicated to applications of machine learning in agricultural production systems is presented, demonstrating how agriculture will benefit from machine learning technologies.
Abstract: Machine learning has emerged with big data technologies and high-performance computing to create new opportunities for data-intensive science in the multi-disciplinary agri-technologies domain. In this paper, we present a comprehensive review of research dedicated to applications of machine learning in agricultural production systems. The works analyzed were categorized in (a) crop management, including applications on yield prediction, disease detection, weed detection, crop quality, and species recognition; (b) livestock management, including applications on animal welfare and livestock production; (c) water management; and (d) soil management. The filtering and classification of the presented articles demonstrate how agriculture will benefit from machine learning technologies. By applying machine learning to sensor data, farm management systems are evolving into real-time artificial intelligence enabled programs that provide rich recommendations and insights for farmer decision support and action.

1,262 citations


Posted ContentDOI
Spyridon Bakas1, Mauricio Reyes, Andras Jakab2, Stefan Bauer3 +435 more · Institutions (111)
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
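
A note for readers unfamiliar with the challenge: BraTS segmentation entries are conventionally scored with the Dice similarity coefficient between predicted and expert-annotated sub-region masks. A minimal NumPy sketch of that metric (binary masks assumed; illustrative only, not the challenge's official evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks (1 = sub-region)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    return 2.0 * overlap / (pred.sum() + truth.sum() + eps)

# Toy example: two partially overlapping square masks
a = np.zeros((8, 8)); a[2:6, 2:6] = 1
b = np.zeros((8, 8)); b[3:7, 3:7] = 1
print(round(dice(a, b), 3))  # 0.562: 9 overlapping pixels against 16 + 16
```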

1,165 citations


Journal ArticleDOI
TL;DR: This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets.
Abstract: Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. This can not only reduce the scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Different from parallel imaging-based fast MRI, which utilizes multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist–Shannon sampling barrier to reconstruct MRI images with much less required raw data. This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets. In particular, a novel conditional Generative Adversarial Network-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
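
The architecture described above combines an adversarial term with image-domain and frequency-domain content losses. A hedged PyTorch-style sketch of such a composite objective follows; the loss weights and the generator/discriminator modules are placeholders for illustration, not the published DAGAN code:

```python
import torch
import torch.nn.functional as F

def dagan_style_loss(generator, discriminator, zero_filled, fully_sampled,
                     w_img=15.0, w_freq=0.1, w_adv=1.0):
    """Composite CS-MRI reconstruction loss in the spirit of DAGAN:
    image-domain MSE + frequency-domain MSE + adversarial term.
    Weights are illustrative, not the paper's tuned values."""
    recon = generator(zero_filled)               # U-Net-style generator output
    loss_img = F.mse_loss(recon, fully_sampled)  # pixel-wise content loss
    loss_freq = F.mse_loss(                      # enforce similarity in k-space too
        torch.view_as_real(torch.fft.fft2(recon)),
        torch.view_as_real(torch.fft.fft2(fully_sampled)))
    logits = discriminator(recon)                # non-saturating generator loss
    loss_adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return w_img * loss_img + w_freq * loss_freq + w_adv * loss_adv
```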

835 citations


Journal ArticleDOI
Andrew Shepherd1, Erik R. Ivins2, Eric Rignot3, Ben Smith4, Michiel R. van den Broeke, Isabella Velicogna3, Pippa L. Whitehouse5, Kate Briggs1, Ian Joughin4, Gerhard Krinner6, Sophie Nowicki7, Tony Payne8, Ted Scambos9, Nicole Schlegel2, Geruo A3, Cécile Agosta, Andreas P. Ahlstrøm10, Greg Babonis11, Valentina R. Barletta12, Alejandro Blazquez, Jennifer Bonin13, Beata Csatho11, Richard I. Cullather7, Denis Felikson14, Xavier Fettweis, René Forsberg12, Hubert Gallée6, Alex S. Gardner2, Lin Gilbert15, Andreas Groh16, Brian Gunter17, Edward Hanna18, Christopher Harig19, Veit Helm20, Alexander Horvath21, Martin Horwath16, Shfaqat Abbas Khan12, Kristian K. Kjeldsen10, Hannes Konrad1, Peter L. Langen22, Benoit S. Lecavalier23, Bryant D. Loomis7, Scott B. Luthcke7, Malcolm McMillan1, Daniele Melini24, Sebastian H. Mernild25, Sebastian H. Mernild26, Sebastian H. Mernild27, Yara Mohajerani3, Philip Moore28, Jeremie Mouginot3, Jeremie Mouginot6, Gorka Moyano, Alan Muir15, Thomas Nagler, Grace A. Nield5, Johan Nilsson2, Brice Noël, Ines Otosaka1, Mark E. Pattle, W. Richard Peltier29, Nadege Pie14, Roelof Rietbroek30, Helmut Rott, Louise Sandberg-Sørensen12, Ingo Sasgen20, Himanshu Save14, Bernd Scheuchl3, Ernst Schrama31, Ludwig Schröder16, Ki-Weon Seo32, Sebastian B. Simonsen12, Thomas Slater1, Giorgio Spada33, T. C. Sutterley3, Matthieu Talpe9, Lev Tarasov23, Willem Jan van de Berg, Wouter van der Wal31, Melchior van Wessem, Bramha Dutt Vishwakarma34, David N. Wiese2, Bert Wouters 
14 Jun 2018-Nature
TL;DR: This work combines satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that the Antarctic Ice Sheet lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, which corresponds to an increase in mean sea level of 7.6 ± 3.9 millimetres.
Abstract: The Antarctic Ice Sheet is an important indicator of climate change and driver of sea-level rise. Here we combine satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that it lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, which corresponds to an increase in mean sea level of 7.6 ± 3.9 millimetres (errors are one standard deviation). Over this period, ocean-driven melting has caused rates of ice loss from West Antarctica to increase from 53 ± 29 billion to 159 ± 26 billion tonnes per year; ice-shelf collapse has increased the rate of ice loss from the Antarctic Peninsula from 7 ± 13 billion to 33 ± 16 billion tonnes per year. We find large variations in and among model estimates of surface mass balance and glacial isostatic adjustment for East Antarctica, with its average rate of mass gain over the period 1992–2017 (5 ± 46 billion tonnes per year) being the least certain.
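
The mass-to-sea-level conversion quoted here can be sanity-checked with the standard rule of thumb that roughly 360 billion tonnes of ice raise global mean sea level by about 1 mm (a back-of-envelope sketch assuming an ocean area of about 3.62 × 10^8 km²; the paper's own calculation propagates uncertainties we ignore):

```python
OCEAN_AREA_KM2 = 3.62e8             # approximate global ocean surface area
GT_PER_MM = OCEAN_AREA_KM2 / 1e6    # 1 Gt of water = 1 km^3, so ~362 Gt per mm

mass_loss_gt = 2720                 # Antarctic loss 1992-2017, from the abstract
print(round(mass_loss_gt / GT_PER_MM, 1))  # ~7.5 mm, close to the reported 7.6 ± 3.9 mm
```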

725 citations


Book
27 Mar 2018
TL;DR: Imitation learning, as discussed by the authors, is the process of learning a desired behavior from demonstrations: it is often easier for a teacher to demonstrate the behavior than to attempt to manually engineer it.
Abstract: As robots and other intelligent agents move from simple environments and problems to more complex, unstructured settings, manually programming their behavior has become increasingly challenging and expensive. Often, it is easier for a teacher to demonstrate a desired behavior rather than attempt to manually engineer it. This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning. It covers the underlying assumptions, approaches, and how they relate; the rich set of algorithms developed to tackle the problem; and advice on effective tools and implementation. We intend this paper to serve two audiences. First, we want to familiarize machine learning experts with the challenges of imitation learning, particularly those arising in robotics, and the interesting theoretical and practical distinctions between it and more familiar frameworks like statistical supervised learning theory and reinforcement learning. Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning. We pay particular attention to the intimate connection between imitation learning approaches and those of structured prediction [Daumé III et al., 2009]. To structure this discussion, we categorize imitation learning techniques based on the following key criteria which drive algorithmic decisions: 1) The structure of the policy space. Is the learned policy a time-indexed trajectory (trajectory learning), a mapping from observations to actions (so-called behavioral cloning [Bain and Sammut, 1996]), or the result of a complex optimization or planning problem solved at each execution, as is common in inverse optimal control methods [Kalman, 1964, Moylan and Anderson, 1973]? 2) The information available during training and testing. In particular, is the learning algorithm privy to the full state that the teacher possesses? Is the learner able to interact with the teacher and gather corrections or more data? Does the learner have a (typically a priori) model of the system with which it interacts? Does the learner have access to the reward (cost) function that the teacher is attempting to optimize? 3) The notion of success. Different algorithmic approaches provide varying guarantees on the resulting learned behavior. These guarantees range from weaker (e.g., measuring disagreement with the agent's decision) to stronger (e.g., providing guarantees on the performance of the learner with respect to a true cost function, either known or unknown). We organize our work by paying particular attention to distinction (1): dividing imitation learning into directly replicating desired behavior (sometimes called behavioral cloning) and learning the hidden objectives of the desired behavior from demonstrations (called inverse optimal control or inverse reinforcement learning [Russell, 1998]). In the latter case, behavior arises as the result of an optimization problem solved for each new instance that the learner faces. In addition to method analysis, we discuss the design decisions a practitioner must make when selecting an imitation learning approach.
Moreover, application examples—such as robots that play table tennis [Kober and Peters, 2009], programs that play the game of Go [Silver et al., 2016], and systems that understand natural language [Wen et al., 2015]—illustrate the properties and motivations behind different forms of imitation learning. We conclude by presenting a set of open questions and pointing towards possible future research directions for machine learning.
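
As a concrete illustration of the behavioral-cloning branch of the taxonomy above, imitation can be reduced to supervised regression from observed states to the teacher's actions. The toy sketch below fits a linear policy to synthetic demonstrations; it illustrates the idea only and is not an algorithm from the monograph:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations: the teacher acts via a hidden linear policy plus noise
states = rng.normal(size=(500, 4))              # observed states
true_policy = np.array([0.5, -1.0, 0.2, 0.0])   # unknown teacher parameters
actions = states @ true_policy + 0.05 * rng.normal(size=500)

# Behavioral cloning = least-squares fit of a policy mapping states to actions
theta, *_ = np.linalg.lstsq(states, actions, rcond=None)

new_state = rng.normal(size=4)
print("cloned action:", new_state @ theta)      # imitates the teacher on a new state
```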

554 citations


Journal ArticleDOI
13 Sep 2018-Nature
TL;DR: A global modelling approach shows that in response to rises in global sea level, gains of up to 60% in coastal wetland areas are possible, if appropriate coastal management solutions are developed to help support wetland resilience.
Abstract: The response of coastal wetlands to sea-level rise during the twenty-first century remains uncertain. Global-scale projections suggest that between 20 and 90 per cent (for low and high sea-level rise scenarios, respectively) of the present-day coastal wetland area will be lost, which will in turn result in the loss of biodiversity and highly valued ecosystem services1-3. These projections do not necessarily take into account all essential geomorphological4-7 and socio-economic system feedbacks8. Here we present an integrated global modelling approach that considers both the ability of coastal wetlands to build up vertically by sediment accretion, and the accommodation space, namely, the vertical and lateral space available for fine sediments to accumulate and be colonized by wetland vegetation. We use this approach to assess global-scale changes in coastal wetland area in response to global sea-level rise and anthropogenic coastal occupation during the twenty-first century. On the basis of our simulations, we find that, globally, rather than losses, wetland gains of up to 60 per cent of the current area are possible, if more than 37 per cent (our upper estimate for current accommodation space) of coastal wetlands have sufficient accommodation space, and sediment supply remains at present levels. In contrast to previous studies1-3, we project that until 2100, the loss of global coastal wetland area will range between 0 and 30 per cent, assuming no further accommodation space in addition to current levels. Our simulations suggest that the resilience of global wetlands is primarily driven by the availability of accommodation space, which is strongly influenced by the building of anthropogenic infrastructure in the coastal zone, and such infrastructure is expected to change over the twenty-first century. Rather than being an inevitable consequence of global sea-level rise, our findings indicate that large-scale loss of coastal wetlands might be avoidable, if sufficient additional accommodation space can be created through careful nature-based adaptation solutions to coastal management.

550 citations


Journal ArticleDOI
TL;DR: Development in the next decade will see the adoption of user friendly biosensors for point-of-care and medical diagnosis as innovations are brought to improve the analytical performances and usability of the current designs.
Abstract: Gold nanoparticles (AuNPs) provide excellent platforms for the development of colorimetric biosensors as they can be easily functionalised, displaying different colours depending on their size, shape and state of aggregation. In the last decade, a variety of biosensors have been developed to exploit the extent of colour changes as nano-particles (NPs) either aggregate or disperse, in the presence of analytes. Of critical importance to the design of these methods is that the behaviour of the systems has to be reproducible and predictable. Much has been accomplished in understanding the interactions between a variety of substrates and AuNPs, and how these interactions can be harnessed as colorimetric reporters in biosensors. However, despite these developments, only a few biosensors have been used in practice for the detection of analytes in biological samples. The transition from proof of concept to market biosensors requires extensive long-term reliability and shelf life testing, and modification of protocols and design features to make them safe and easy to use by the population at large. Developments in the next decade will see the adoption of user friendly biosensors for point-of-care and medical diagnosis as innovations are brought to improve the analytical performances and usability of the current designs. This review discusses the mechanisms, strategies, recent advances and perspectives for the use of AuNPs as colorimetric biosensors.

410 citations


Journal ArticleDOI
TL;DR: This work demonstrates the use of a novel design approach of caplets with perforated channels to accelerate drug release from 3D printed tablets; the incorporation of short channels can be adopted in the designs of dosage forms, implants or stents to enhance the release rate of an eluting drug from polymer-rich structures.

261 citations


Journal ArticleDOI
TL;DR: Gaps and challenges in the evaluation and rollout of new diagnostics and biomarkers are highlighted, and areas needing further investment, including impact assessment and cost-benefit studies, are prioritised.
Abstract: Tuberculosis remains the leading cause of death from an infectious disease worldwide. Early and accurate diagnosis and detection of drug-sensitive and drug-resistant tuberculosis is essential for achieving global tuberculosis control. Despite the introduction of the Xpert MTB/RIF assay as the first-line rapid tuberculosis diagnostic test, the gap between global estimates of incidence and new case notifications is 4·1 million people. More accurate, rapid, and cost-effective screening tests are needed to improve case detection. Diagnosis of extrapulmonary tuberculosis and tuberculosis in children, people living with HIV, and pregnant women remains particularly problematic. The diagnostic molecular technology landscape has continued to expand, including the development of tests for resistance to several antituberculosis drugs. Biomarkers are urgently needed to indicate progression from latent infection to clinical disease, to predict risk of reactivation after cure, and to provide accurate endpoints for drug and vaccine trials. Sophisticated bioinformatic computational tools and systems biology approaches are being applied to the discovery and validation of biomarkers, with substantial progress taking place. New data have been generated from the study of T-cell responses and T-cell function, serological studies, flow cytometric-based assays, and protein and gene expression studies. Alternative diagnostic strategies under investigation as potential screening and triaging tools include non-sputum-based detection with breath-based tests and automated digital radiography. We review developments and key achievements in the search for new tuberculosis diagnostics and biomarkers. We highlight gaps and challenges in evaluation and rollout of new diagnostics and biomarkers, and prioritise areas needing further investment, including impact assessment and cost–benefit studies.

220 citations


Report SeriesDOI
21 Jun 2018
TL;DR: The state of the art in the application of RAS in Agri-Food production is reviewed and research and innovation needs are explored to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.
Abstract: Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over £108bn p.a., with 3.9m employees in a truly international industry and exports £20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment ("Transforming Food Production: from Farm to Fork"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.

Journal ArticleDOI
TL;DR: A systematic literature review and two meta-analyses aimed at answering the questions: Is working alliance actually poorer in VCP? And is outcome equivalence possible between VCP and face-to-face delivery?
Abstract: Videoconferencing psychotherapy (VCP)-the remote delivery of psychotherapy via secure video link-is an innovative way of delivering psychotherapy, which has the potential to overcome many of the regularly cited barriers to accessing psychological treatment. However, some debate exists as to whether an adequate working alliance can be formed between therapist and client, when therapy is delivered through such a medium. The presented article is a systematic literature review and two meta-analyses aimed at answering the questions: Is working alliance actually poorer in VCP? And is outcome equivalence possible between VCP and face-to-face delivery? Twelve studies were identified which met inclusion/exclusion criteria, all of which demonstrated good working alliance and outcome for VCP. Meta-analyses showed that working alliance in VCP was inferior to face-to-face delivery (standardized mean difference [SMD] = -0.30; 95% confidence interval [CI] [-0.67, 0.07], p = 0.11; with the lower bound of the CI extending beyond the noninferiority margin [-0.50]), but that target symptom reduction was noninferior (SMD = -0.03; 95% CI [-0.45, 0.40], p = 0.90; CI within the noninferiority margin [0.50]). These results are discussed and directions for future research recommended.
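
The noninferiority logic behind these conclusions is mechanical: an outcome is declared noninferior only when the entire confidence interval for the standardized mean difference lies inside the prespecified margin. A small sketch using the figures quoted above:

```python
def noninferior(ci_low: float, ci_high: float, margin: float = 0.50) -> bool:
    """Noninferiority holds only if the whole CI sits within (-margin, +margin)."""
    return ci_low > -margin and ci_high < margin

# Working alliance, VCP vs face-to-face: SMD = -0.30, 95% CI [-0.67, 0.07]
print(noninferior(-0.67, 0.07))   # False: the lower bound crosses the -0.50 margin

# Target symptom reduction: SMD = -0.03, 95% CI [-0.45, 0.40]
print(noninferior(-0.45, 0.40))   # True: the CI stays inside the margin
```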

Journal ArticleDOI
TL;DR: A stochastic feedback controller is derived that reproduces the encoded variability of the movement and the coupling of the degrees of freedom of the robot by using a probabilistic representation.
Abstract: Movement Primitives (MPs) are a well-established paradigm for modular movement representation and generation. They provide a data-driven representation of movements and support generalization to novel situations, temporal modulation, sequencing of primitives and controllers for executing the primitive on physical systems. However, while many MP frameworks exhibit some of these properties, there is a need for a unified framework that implements all of them in a principled way. In this paper, we show that this goal can be achieved by using a probabilistic representation. Our approach models trajectory distributions learned from stochastic movements. Probabilistic operations, such as conditioning, can be used to achieve generalization to novel situations or to combine and blend movements in a principled way. We derive a stochastic feedback controller that reproduces the encoded variability of the movement and the coupling of the degrees of freedom of the robot. We evaluate and compare our approach on several simulated and real robot scenarios.
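
The conditioning operation mentioned above is ordinary Gaussian conditioning applied to the weight distribution of a basis-function trajectory model. A minimal NumPy sketch for adapting a learned distribution to a via-point (generic basis features and toy numbers, not the authors' implementation):

```python
import numpy as np

# Trajectory model: y(t) = psi(t) @ w, with weights w ~ N(mu_w, Sigma_w) from demos
D = 10
mu_w, Sigma_w = np.zeros(D), np.eye(D)

def psi(t, centers=np.linspace(0, 1, D), h=0.02):
    b = np.exp(-(t - centers) ** 2 / (2 * h))   # Gaussian basis functions
    return b / b.sum()

# Condition on passing through y* = 1.3 at t = 0.5, with observation noise sigma_y
t_star, y_star, sigma_y = 0.5, 1.3, 1e-4
Psi = psi(t_star)
gain = Sigma_w @ Psi / (Psi @ Sigma_w @ Psi + sigma_y)   # Kalman-style gain
mu_cond = mu_w + gain * (y_star - Psi @ mu_w)
Sigma_cond = Sigma_w - np.outer(gain, Psi @ Sigma_w)

print(psi(t_star) @ mu_cond)   # ~1.3: the conditioned mean now hits the via-point
```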

Journal ArticleDOI
TL;DR: The proposed 3D supervoxel based learning method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.

Journal ArticleDOI
TL;DR: In patients with severe aortic stenosis, late gadolinium enhancement on cardiovascular magnetic resonance was independently associated with mortality; its presence was associated with a 2-fold higher late mortality.
Abstract: BACKGROUND: Aortic valve replacement (AVR) for aortic stenosis is timed primarily on the development of symptoms, but late surgery can result in irreversible myocardial dysfunction and additional risk. The aim of this study was to determine whether the presence of focal myocardial scar preoperatively was associated with long-term mortality. METHODS: In a longitudinal observational outcome study, survival analysis was performed in patients with severe aortic stenosis listed for valve intervention at 6 UK cardiothoracic centers. Patients underwent preprocedural echocardiography (for valve severity assessment) and cardiovascular magnetic resonance for ventricular volumes, function and scar quantification between January 2003 and May 2015. Myocardial scar was categorized into 3 patterns (none, infarct, or noninfarct patterns) and quantified with the full width at half-maximum method as percentage of the left ventricle. All-cause mortality and cardiovascular mortality were tracked for a minimum of 2 years. RESULTS: Six hundred seventy-four patients with severe aortic stenosis (age, 75±14 years; 63% male; aortic valve area, 0.38±0.14 cm2/m2; mean gradient, 46±18 mm Hg; left ventricular ejection fraction, 61.0±16.7%) were included. Scar was present in 51% (18% infarct pattern, 33% noninfarct). Management was surgical AVR (n=399) or transcatheter AVR (n=275). During follow-up (median, 3.6 years), 145 patients (21.5%) died (52 after surgical AVR, 93 after transcatheter AVR). In multivariable analysis, the factors independently associated with all-cause mortality were age (hazard ratio [HR], 1.50; 95% CI, 1.11-2.04; P=0.009, scaled by epochs of 10 years), Society of Thoracic Surgeons score (HR, 1.12; 95% CI, 1.03-1.22; P=0.007), and scar presence (HR, 2.39; 95% CI, 1.40-4.05; P=0.001). Scar independently predicted all-cause (26.4% versus 12.9%; P<0.001) and cardiovascular (15.0% versus 4.8%; P<0.001) mortality, regardless of intervention (transcatheter AVR, P=0.002; surgical AVR, P=0.026 [all-cause mortality]). Every 1% increase in left ventricular myocardial scar burden was associated with 11% higher all-cause mortality hazard (HR, 1.11; 95% CI, 1.05-1.17; P<0.001) and 8% higher cardiovascular mortality hazard (HR, 1.08; 95% CI, 1.01-1.17; P<0.001). CONCLUSIONS: In patients with severe aortic stenosis, late gadolinium enhancement on cardiovascular magnetic resonance was independently associated with mortality; its presence was associated with a 2-fold higher late mortality.
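
Because the scar-burden effect is reported per 1% increment, the implied hazard for larger increments follows by exponentiating the log-hazard (this assumes the log-linearity of a Cox-type model; a quick arithmetic check, not part of the study):

```python
hr_per_1pct = 1.11   # all-cause mortality HR per 1% scar burden, from the abstract

for extra_scar in (5, 10):
    print(f"+{extra_scar}% scar -> HR {hr_per_1pct ** extra_scar:.2f}")
# +5% scar -> HR 1.69
# +10% scar -> HR 2.84
```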

Journal ArticleDOI
TL;DR: Experimental results illustrate that the proposed control solution is characterized by improved robustness performance against various disturbances and uncertainties compared with traditional ADRC and integral model predictive control (MPC) approaches.
Abstract: The output voltage regulation problem of a PWM-based dc-dc buck converter under various sources of uncertainties and disturbances is investigated in this paper via an optimized active disturbance rejection control (ADRC) approach. Aiming at practical implementation, a new reduced-order generalized proportional integral (GPI) observer is first designed to estimate the lumped (possibly time-varying) disturbances within the dc-dc circuit. By integrating the disturbance estimation information raised by the reduced-order GPI observer into the output prediction, an optimized ADRC method is developed to achieve optimized tracking performance even in the presence of disturbances and uncertainties. It is shown that the proposed controller will guarantee the rigorous stability of the closed-loop system, for any bounded uncertainties of the circuit, by appropriately choosing the observer gains and the bandwidth factor. Experimental results illustrate that the proposed control solution is characterized by improved robustness performance against various disturbances and uncertainties compared with traditional ADRC and integral model predictive control (MPC) approaches.
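
To make the disturbance-rejection principle concrete, the heavily simplified sketch below runs a generic extended-state observer on a toy first-order plant and cancels the estimated lumped disturbance in the control law. It illustrates the ADRC idea only; it is not the paper's reduced-order GPI observer or its converter model, and all gains are hand-picked for the toy example:

```python
import numpy as np

dt, b0 = 1e-4, 50.0                  # step size and nominal input gain (illustrative)
beta1, beta2 = 2000.0, 1e6           # observer gains (double pole at -1000 rad/s)
kp, ref = 500.0, 1.0                 # outer-loop gain and output setpoint

x, x_hat, f_hat = 0.0, 0.0, 0.0      # plant state, state estimate, disturbance estimate
for k in range(20000):               # simulate 2 s of closed-loop operation
    f_true = 10.0 * np.sin(2 * np.pi * 5 * dt * k)   # unknown, time-varying disturbance
    u = (kp * (ref - x_hat) - f_hat) / b0            # cancel the estimated disturbance
    x += dt * (f_true + b0 * u)                      # plant: x' = f(t) + b0 * u
    e = x - x_hat                                    # observer innovation
    x_hat += dt * (f_hat + b0 * u + beta1 * e)       # extended-state observer update
    f_hat += dt * beta2 * e

print(round(x, 3))                   # settles near the 1.0 setpoint despite f(t)
```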

Journal ArticleDOI
TL;DR: The performance, simplicity, and robustness of the proposed method demonstrate its suitability for diabetic retinopathy screening applications; this is the first paper to show how to successfully transfer knowledge between datasets in the microaneurysm detection domain.

Journal ArticleDOI
Wade Abbott, Orly Alber1, Ed Bayer1, Jean-Guy Berrin, Alisdair B. Boraston2, Harry Brumer3, Ryszard Brzezinski4, Anthony J. Clarke5, Beatrice Cobucci-Ponzano6, Darrell Cockburn7, Pedro M. Coutinho8, Mirjam Czjzek9, Bareket Dassa1, Gideon J. Davies10, Vincent G. H. Eijsink11, Jens M. Eklöf3, Alfons K. G. Felice12, Elizabeth Ficko-Blean9, Geoff Pincher13, Thierry Fontaine14, Zui Fujimoto15, Kiyotaka Fujita16, Shinya Fushinobu17, Harry J. Gilbert18, Tracey M. Gloster19, Ethan D. Goddard-Borger20, Ian R. Greig21, Jan-Hendrik Hehemann22, Glyn R. Hemsworth23, Bernard Henrissat9, Masafumi Hidaka17, Ramon Hurtado-Guerrero24, Kiyohiko Igarashi17, Takuya Ishida17, Štefan Janeček25, Seino A. K. Jongkees17, Nathalie Juge26, Satoshi Kaneko27, Takane Katayama28, Motomitsu Kitaoka15, Naotake Konno29, Daniel Kracher12, Anna A. Kulminskaya30, Alicia Lammerts van Bueren31, Sine Larsen31, Junho Lee3, Markus Linder32, Leila LoLeggio33, Roland Ludwig12, Ana R. Luís34, Mirko M. Maksimainen35, Brian L. Mark36, Richard McLean37, Gurvan Michel9, Cedric Montanier, Marco Moracci6, Haruhide Mori38, Hiroyuki Nakai39, Wim Nerinckx40, Takayuki Ohnuma41, Richard W. Pickersgill42, Kathleen Piens43, Tirso Pons, Etienne Rebuffet, Peter J. Reilly44, Magali Remaud-Simeon45, Brian P. Rempel3, Kyle Robinson3, David R. Rose46, Juha Rouvinen47, Wataru Saburi38, Yuichi Sakamoto, Mats Sandgren43, Fathima Aidha Shaikh3, Yuval Shoham48, Franz J. St John49, Jerry Ståhlberg43, Michael D. L. Suits50, Gerlind Sulzenbacher51, Gerlind Sulzenbacher8, Tomomi Sumida, Ryuichiro Suzuki52, Birte Svensson53, Toki Taira27, Edward J. Taylor54, Takashi Tonozuka55, Breeanna R. Urbanowicz56, Gustav Vaaje-Kolstad11, Wim Van den Ende57, Annabelle Varrot58, Maxime Versluys57, Florence Vincent51, Florence Vincent8, David J. Vocadlo21, Warren W. Wakarchuk59, Tom Wennekes60, Rohan J. Williams61, Spencer J. Williams61, David Wilson62, Stephen G. Withers3, Katsuro Yaoi63, Vivian L. Y. Yip3, Ran Zhang3 
Weizmann Institute of Science1, University of Victoria2, University of British Columbia3, Université de Sherbrooke4, University of Guelph5, National Research Council6, Pennsylvania State University7, Aix-Marseille University8, University of Paris9, University of York10, Norwegian University of Life Sciences11, University of Vienna12, University of Adelaide13, Pasteur Institute14, National Agriculture and Food Research Organization15, Kagoshima University16, University of Tokyo17, Newcastle University18, University of St Andrews19, Walter and Eliza Hall Institute of Medical Research20, Simon Fraser University21, Max Planck Society22, University of Leeds23, University of Zaragoza24, Slovak Academy of Sciences25, Quadram Institute26, University of the Ryukyus27, Ishikawa Prefectural University28, Utsunomiya University29, Petersburg Nuclear Physics Institute30, University of Groningen31, Aalto University32, University of Copenhagen33, University of Lisbon34, University of Oulu35, University of Manitoba36, University of Lethbridge37, Hokkaido University38, Niigata University39, Ghent University40, Kindai University41, Queen Mary University of London42, Swedish University of Agricultural Sciences43, Iowa State University44, Institut national des sciences appliquées45, University of Waterloo46, University of Eastern Finland47, Technion – Israel Institute of Technology48, United States Department of Agriculture49, Wilfrid Laurier University50, Institut national de la recherche agronomique51, Akita Prefectural University52, Technical University of Denmark53, University of Lincoln54, Tokyo University of Agriculture and Technology55, University of Georgia56, Université catholique de Louvain57, University of Grenoble58, Ryerson University59, Utrecht University60, University of Melbourne61, Cornell University62, National Institute of Advanced Industrial Science and Technology63
TL;DR: CAZypedia was initiated in 2007 to create a comprehensive, living encyclopedia of the carbohydrate-active enzymes (CAZymes) and associated carbohydrate-binding modules involved in the synthesis, modification and degradation of complex carbohydrates.
Abstract: CAZypedia was initiated in 2007 to create a comprehensive, living encyclopedia of the carbohydrate-active enzymes (CAZymes) and associated carbohydrate-binding modules involved in the synthesis, modification and degradation of complex carbohydrates. CAZypedia is closely connected with the actively curated CAZy database, which provides a sequence-based foundation for the biochemical, mechanistic and structural characterization of these diverse proteins. Now celebrating its 10th anniversary online, CAZypedia is a successful example of dynamic, community-driven and expert-based biocuration. CAZypedia is an open-access resource available at URL http://www.cazypedia.org.

Journal ArticleDOI
TL;DR: Even if anthropogenic warming were constrained to less than 2 °C above pre-industrial, the Greenland and Antarctic ice sheets will continue to lose mass this century, with rates similar to those observed over the past decade.
Abstract: Even if anthropogenic warming were constrained to less than 2 °C above pre-industrial, the Greenland and Antarctic ice sheets will continue to lose mass this century, with rates similar to those observed over the past decade. However, nonlinear responses cannot be excluded, which may lead to larger rates of mass loss. Furthermore, large uncertainties in future projections still remain, pertaining to knowledge gaps in atmospheric (Greenland) and oceanic (Antarctica) forcing. On millennial timescales, both ice sheets have tipping points at or slightly above the 1.5–2.0 °C threshold; for Greenland, this may lead to irreversible mass loss due to the surface mass balance–elevation feedback, whereas for Antarctica, this could result in a collapse of major drainage basins due to ice-shelf weakening.

Proceedings ArticleDOI
21 Apr 2018
TL;DR: An analysis task involving news presented via Facebook reveals a diverse range of judgement forming strategies, with participants relying on personal judgements as to plausibility and scepticism around sources and journalistic style.
Abstract: In the so called 'post-truth' era, characterized by a loss of public trust in various institutions, and the rise of 'fake news' disseminated via the internet and social media, individuals may face uncertainty about the veracity of information available, whether it be satire or malicious hoax. We investigate attitudes to news delivered by social media, and subsequent verification strategies applied, or not applied, by individuals. A survey reveals that two thirds of respondents regularly consumed news via Facebook, and that one third had at some point come across fake news that they initially believed to be true. An analysis task involving news presented via Facebook reveals a diverse range of judgement forming strategies, with participants relying on personal judgements as to plausibility and scepticism around sources and journalistic style. This reflects a shift away from traditional methods of accessing the news, and highlights the difficulties in combating the spread of fake news.

Journal ArticleDOI
TL;DR: This work provides a novel example where computer-aided design was instrumental in modifying the performance of solid dosage forms and may serve as the foundation for a new generation of dosage forms with complicated geometric structures to achieve functionality that is usually achieved by a sophisticated formulation approach.

Journal ArticleDOI
TL;DR: In this article, an evaluation of stand-alone data mining models (i.e., reduced error pruning tree (REPT), M5P and instance-based learning (IBK)) and hybrid models (e.g., bagging-M5P, random committee REPT (RC-REPT) and random subspace REPT (RS-REPT)) for predicting suspended sediment loads (SSL) resulting from glacial melting at an Andean catchment in Chile was conducted.

Journal ArticleDOI
TL;DR: In 2017, the dominant greenhouse gases released into Earth's atmosphere (carbon dioxide, methane, and nitrous oxide) reached new record highs, as mentioned in this paper. The annual global average carbon dioxide concentration at Earth's surface for 2017 was 405.0 ± 0.1 ppm, 2.2 ppm greater than for 2016 and the highest in the modern atmospheric measurement record and in ice core records dating back as far as 800 000 years.
Abstract: In 2017, the dominant greenhouse gases released into Earth's atmosphere (carbon dioxide, methane, and nitrous oxide) reached new record highs. The annual global average carbon dioxide concentration at Earth's surface for 2017 was 405.0 ± 0.1 ppm, 2.2 ppm greater than for 2016 and the highest in the modern atmospheric measurement record and in ice core records dating back as far as 800 000 years. The global growth rate of CO2 has nearly quadrupled since the early 1960s. With ENSO-neutral conditions present in the central and eastern equatorial Pacific Ocean during most of the year and weak La Niña conditions notable at the start and end, the global temperature across land and ocean surfaces ranked as the second or third highest, depending on the dataset, since records began in the mid-to-late 1800s. Notably, it was the warmest non-El Niño year in the instrumental record. Above Earth's surface, the annual lower tropospheric temperature was also either second or third highest according to all datasets analyzed. The lower stratospheric temperature was about 0.2°C higher than the record cold temperature of 2016 according to most of the in situ and satellite datasets. Several countries, including Argentina, Uruguay, Spain, and Bulgaria, reported record high annual temperatures. Mexico broke its annual record for the fourth consecutive year. On 27 January, the temperature reached 43.4°C at Puerto Madryn, Argentina, the highest temperature recorded so far south (43°S) anywhere in the world. On 28 May in Turbat, western Pakistan, the high of 53.5°C tied Pakistan's all-time highest temperature and became the world-record highest temperature for May. In the Arctic, the 2017 land surface temperature was 1.6°C above the 1981-2010 average, the second highest since the record began in 1900, behind only 2016. The five highest annual Arctic temperatures have all occurred since 2007. Exceptionally high temperatures were observed in the permafrost across the Arctic, with record values reported in much of Alaska and northwestern Canada. In August, high sea surface temperature (SST) records were broken for the Chukchi Sea, with some regions as warm as +11°C, or 3° to 4°C warmer than the long-term mean (1982-present). According to paleoclimate studies, today's abnormally warm Arctic air and SSTs have not been observed in the last 2000 years. The increasing temperatures have led to decreasing Arctic sea ice extent and thickness. On 7 March, sea ice extent at the end of the growth season saw its lowest maximum in the 37-year satellite record, covering 8% less area than the 1981-2010 average. The Arctic sea ice minimum on 13 September was the eighth lowest on record and covered 25% less area than the long-term mean. Preliminary data indicate that glaciers across the world lost mass for the 38th consecutive year on record; the declines are remarkably consistent from region to region. Cumulatively since 1980, this loss is equivalent to slicing 22 meters off the top of the average glacier. Antarctic sea ice extent remained below average for all of 2017, with record lows during the first four months. Over the continent, the austral summer seasonal melt extent and melt index were the second highest since 2005, mostly due to strong positive anomalies of air temperature over most of the West Antarctic coast. In contrast, the East Antarctic Plateau saw record low mean temperatures in March. The year was also distinguished by the second smallest Antarctic ozone hole observed since 1988.
Across the global oceans, the overall long-term SST warming trend remained strong. Although SST cooled slightly from 2016 to 2017, the last three years produced the three highest annual values observed; these high anomalies have been associated with widespread coral bleaching. The most recent global coral bleaching lasted three full years, June 2014 to May 2017, and was the longest, most widespread, and almost certainly most destructive such event on record. Global integrals of 0-700-m and 0-2000-m ocean heat content reached record highs in 2017, and global mean sea level during the year became the highest annual average in the 25-year satellite altimetry record, rising to 77 mm above the 1993 average. In the tropics, 2017 saw 85 named tropical storms, slightly above the 1981-2010 average of 82. The North Atlantic basin was the only basin that featured an above-normal season, its seventh most active in the 164-year record. Three hurricanes in the basin were especially notable. Harvey produced record rainfall totals in areas of Texas and Louisiana, including a storm total of 1538.7 mm near Beaumont, Texas, which far exceeds the previous known U.S. tropical cyclone record of 1320.8 mm. Irma was the strongest tropical cyclone globally in 2017 and the strongest Atlantic hurricane outside of the Gulf of Mexico and Caribbean on record with maximum winds of 295 km/h. Maria caused catastrophic destruction across the Caribbean Islands, including devastating wind damage and flooding across Puerto Rico. Elsewhere, the western North Pacific, South Indian, and Australian basins were all particularly quiet. Precipitation over global land areas in 2017 was clearly above the long-term average. Among noteworthy regional precipitation records in 2017, Russia reported its second wettest year on record (after 2013) and Norway experienced its sixth wettest year since records began in 1900. Across India, heavy rain and flood-related incidents during the monsoon season claimed around 800 lives. In August and September, above-normal precipitation triggered the most devastating floods in more than a decade in the Venezuelan states of Bolivar and Delta Amacuro. In Nigeria, heavy rain during August and September caused the Niger and Benue Rivers to overflow, bringing floods that displaced more than 100 000 people. Global fire activity was the lowest since at least 2003; however, high activity occurred in parts of North America, South America, and Europe, with an unusually long season in Spain and Portugal, which had their second and third driest years on record, respectively. Devastating fires impacted British Columbia, destroying 1.2 million hectares of timber, bush, and grassland, due in part to the region's driest summer on record. In the United States, an extreme western wildfire season burned over 4 million hectares; the total costs of $18 billion tripled the previous U.S. annual wildfire cost record set in 1991.

Journal ArticleDOI
TL;DR: This paper focuses on neurodegenerative diseases, particularly Parkinson’s, as the development model, by creating a new database and using it for training, evaluating and validating the proposed systems.
Abstract: This paper presents a novel class of systems assisting diagnosis and personalised assessment of diseases in healthcare. The targeted systems are end-to-end deep neural architectures that are designed (trained and tested) and subsequently used as whole systems, accepting raw input data and producing the desired outputs. Such architectures are state-of-the-art in image analysis and computer vision, speech recognition and language processing. Their application in healthcare for prediction and diagnosis purposes can produce high accuracy results and can be combined with medical knowledge to improve effectiveness, adaptation and transparency of decision making. The paper focuses on neurodegenerative diseases, particularly Parkinson’s, as the development model, by creating a new database and using it for training, evaluating and validating the proposed systems. Experimental results are presented which illustrate the ability of the systems to detect and predict Parkinson’s based on medical imaging information.
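
"End-to-end" here means the network maps raw input directly to the diagnostic output, with no hand-crafted feature stage in between. A minimal PyTorch sketch of such a classifier for single-channel images (layer sizes and the two-class output are arbitrary placeholders, not the systems proposed in the paper):

```python
import torch
import torch.nn as nn

# Raw image in, diagnosis logits out: no separate feature-engineering stage
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),      # two classes, e.g. Parkinson's vs control
)

scans = torch.randn(8, 1, 64, 64)    # a batch of toy single-channel images
print(model(scans).shape)            # torch.Size([8, 2])
```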

Journal ArticleDOI
TL;DR: Investigation of prevalence, duration and risk factors of appendicular osteoarthritis in dogs under primary veterinary care in the UK found evidence for substantial impact of osteoarthritis on canine welfare at the individual and population level.
Abstract: Osteoarthritis is the most common joint disease diagnosed in veterinary medicine and poses considerable challenges to canine welfare. This study aimed to investigate prevalence, duration and risk factors of appendicular osteoarthritis in dogs under primary veterinary care in the UK. The VetCompass™ programme collects clinical data on dogs attending UK primary-care veterinary practices. The study included all VetCompass™ dogs under veterinary care during 2013. Candidate osteoarthritis cases were identified using multiple search strategies. A random subset was manually evaluated against a case definition. Of 455,557 study dogs, 16,437 candidate osteoarthritis cases were identified; 6104 (37%) were manually checked and 4196 (69% of sample) were confirmed as cases. Additional data on demography, clinical signs, duration and management were extracted for confirmed cases. Estimated annual period prevalence (accounting for subsampling) of appendicular osteoarthritis was 2.5% (CI95: 2.4-2.5%) equating to around 200,000 UK affected dogs annually. Risk factors associated with osteoarthritis diagnosis included breed (e.g. Labrador, Golden Retriever), being insured, being neutered, of higher bodyweight and being older than eight years. Duration calculation trials suggest osteoarthritis affects 11.4% of affected individuals' lifespan, providing further evidence for substantial impact of osteoarthritis on canine welfare at the individual and population level.
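
The headline prevalence follows from scaling the manually verified subsample back up to the full candidate pool; a back-of-envelope reproduction of the numbers quoted in the abstract:

```python
study_dogs = 455_557
candidates = 16_437
checked, confirmed = 6_104, 4_196

confirm_rate = confirmed / checked            # ~0.69, the "69% of sample"
estimated_cases = candidates * confirm_rate   # ~11,300 estimated true cases
print(f"{estimated_cases / study_dogs:.1%}")  # ~2.5%, matching the reported figure
```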

Journal ArticleDOI
TL;DR: This survey paper classifies fault diagnosis methods from the past five years into three categories based on decision centers and key attributes of employed algorithms: centralized approaches, distributed approaches, and hybrid approaches.
Abstract: Wireless sensor networks (WSNs) often consist of hundreds of sensor nodes that may be deployed in relatively harsh and complex environments. In view of hardware cost, sensor nodes always adopt relatively cheap chips, which makes these nodes error-prone or faulty in the course of their operation. Natural factors and electromagnetic interference could also influence the performance of the WSNs. When sensor nodes become faulty, they may have died, which means they cannot communicate with other members in the wireless network; they may be still alive but produce incorrect data; or they may be unstable, jumping between normal and faulty states. To improve data quality, shorten response time, strengthen network security, and prolong network lifespan, many studies have focused on fault diagnosis. This survey paper classifies fault diagnosis methods from the past five years into three categories based on decision centers and key attributes of employed algorithms: centralized approaches, distributed approaches, and hybrid approaches. As all these studies have specific goals and limitations, this paper compares them, lists their merits and limits, and proposes potential research directions based on established methods and theories.

Journal ArticleDOI
27 Jul 2018
TL;DR: In this paper, the authors survey and discuss AI techniques as enablers for long-term robot autonomy, current progress in integrating these techniques within long-running robotic systems, and the future challenges and opportunities for AI in longterm autonomy.
Abstract: Autonomous systems will play an essential role in many applications across diverse domains including space, marine, air, field, road, and service robotics. They will assist us in our daily routines and perform dangerous, dirty, and dull tasks. However, enabling robotic systems to perform autonomously in complex, real-world scenarios over extended time periods (i.e., weeks, months, or years) poses many challenges. Some of these have been investigated by subdisciplines of Artificial Intelligence (AI) including navigation and mapping, perception, knowledge representation and reasoning, planning, interaction, and learning. The different subdisciplines have developed techniques that, when re-integrated within an autonomous system, can enable robots to operate effectively in complex, long-term scenarios. In this letter, we survey and discuss AI techniques as “enablers” for long-term robot autonomy, current progress in integrating these techniques within long-running robotic systems, and the future challenges and opportunities for AI in long-term autonomy.

Journal ArticleDOI
TL;DR: Support is found for both temporal and contextual repeatability of cognitive performance, with mean R estimates ranging between 0.15 and 0.28, which highlights the widespread occurrence of consistent inter-individual variation in cognition across a range of taxa which, like behaviour, may be associated with fitness outcomes.
Abstract: Behavioural and cognitive processes play important roles in mediating an individual's interactions with its environment. Yet, while there is a vast literature on repeatable individual differences in behaviour, relatively little is known about the repeatability of cognitive performance. To further our understanding of the evolution of cognition, we gathered 44 studies on individual performance of 25 species across six animal classes and used meta-analysis to assess whether cognitive performance is repeatable. We compared repeatability (R) in performance (1) on the same task presented at different times (temporal repeatability), and (2) on different tasks that measured the same putative cognitive ability (contextual repeatability). We also addressed whether R estimates were influenced by seven extrinsic factors (moderators): type of cognitive performance measurement, type of cognitive task, delay between tests, origin of the subjects, experimental context, taxonomic class and publication status. We found support for both temporal and contextual repeatability of cognitive performance, with mean R estimates ranging between 0.15 and 0.28. Repeatability estimates were mostly influenced by the type of cognitive performance measures and publication status. Our findings highlight the widespread occurrence of consistent inter-individual variation in cognition across a range of taxa which, like behaviour, may be associated with fitness outcomes.This article is part of the theme issue 'Causes and consequences of individual differences in cognitive abilities'.
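
Repeatability (R) in this literature is an intraclass correlation: the share of total variance attributable to consistent differences among individuals. A minimal sketch estimating it from repeated trials via one-way ANOVA variance components (the generic estimator, not the meta-analytic model used in the paper):

```python
import numpy as np

def repeatability(scores: np.ndarray) -> float:
    """ANOVA-based intraclass correlation for an (individuals x trials) array:
    R = V_among / (V_among + V_within)."""
    n_ind, k = scores.shape
    ms_within = scores.var(axis=1, ddof=1).mean()       # within-individual MS
    ms_among = k * scores.mean(axis=1).var(ddof=1)      # among-individual MS
    v_among = max((ms_among - ms_within) / k, 0.0)
    return v_among / (v_among + ms_within)

rng = np.random.default_rng(1)
ability = rng.normal(size=(40, 1))                      # stable individual traits
trials = ability + rng.normal(scale=1.8, size=(40, 5))  # noisy task performance
print(round(repeatability(trials), 2))                  # a modest R, on the scale reported above
```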

Journal ArticleDOI
TL;DR: In this paper, the role of the cultural intelligence of expatriate managers in the processes of conventional knowledge transfer (CKT) and reverse knowledge transfer (RKT) in Multinational Companies (MNCs) was analyzed.

Journal ArticleDOI
TL;DR: By combining two soft touch sensors with the electroadhesive, an intelligent and shape-adaptive PneuEA material handling system has been developed and is expected to widen the applications of both soft gripper and electroadhesion technologies.
Abstract: Current soft pneumatic grippers cannot robustly grasp flat materials and flexible objects on curved surfaces without distorting them. Current electroadhesive grippers, on the other hand, are difficult to actively deform to complex shapes to pick up free-form surfaces or objects. An easy-to-implement PneuEA gripper is proposed by the integration of an electroadhesive gripper and a two-fingered soft pneumatic gripper. The electroadhesive gripper was fabricated by segmenting a soft conductive silicon sheet into a two-part electrode design and embedding it in a soft dielectric elastomer. The two-fingered soft pneumatic gripper was manufactured using a standard soft lithography approach. This novel integration has combined the benefits of both the electroadhesive and soft pneumatic grippers. As a result, the proposed PneuEA gripper was not only able to pick-and-place flat and flexible materials such as a porous cloth but also delicate objects such as a light bulb. By combining two soft touch sensors with the electroadhesive, an intelligent and shape-adaptive PneuEA material handling system has been developed. This work is expected to widen the applications of both soft gripper and electroadhesion technologies.