
Showing papers by the University of Hertfordshire, published in 2018


Journal ArticleDOI
Clotilde Théry, Kenneth W. Witwer, Elena Aikawa, María José Alcaraz, +414 more (209 institutions)
TL;DR: The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities, and a checklist is provided with summaries of key points.
Abstract: The last decade has seen a sharp increase in the number of scientific publications describing physiological and pathological functions of extracellular vesicles (EVs), a collective term covering various subtypes of cell-released, membranous structures, called exosomes, microvesicles, microparticles, ectosomes, oncosomes, apoptotic bodies, and many other names. However, specific issues arise when working with these entities, whose size and amount often make them difficult to obtain as relatively pure preparations, and to characterize properly. The International Society for Extracellular Vesicles (ISEV) proposed Minimal Information for Studies of Extracellular Vesicles (“MISEV”) guidelines for the field in 2014. We now update these “MISEV2014” guidelines based on evolution of the collective knowledge in the last four years. An important point to consider is that ascribing a specific function to EVs in general, or to subtypes of EVs, requires reporting of specific information beyond mere description of function in a crude, potentially contaminated, and heterogeneous preparation. For example, claims that exosomes are endowed with exquisite and specific activities remain difficult to support experimentally, given our still limited knowledge of their specific molecular machineries of biogenesis and release, as compared with other biophysically similar EVs. The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities. Finally, a checklist is provided with summaries of key points.

5,988 citations



Journal ArticleDOI
TL;DR: A thorough investigation into current LED-based indoor positioning systems is undertaken, comparing their performance across many aspects, such as test environment, accuracy, and cost.
Abstract: As the Global Positioning System (GPS) cannot provide satisfying performance in indoor environments, indoor positioning technology, which utilizes indoor wireless signals instead of GPS signals, has grown rapidly in recent years. Meanwhile, visible light communication (VLC) using light devices such as light emitting diodes (LEDs) has been deemed to be a promising candidate in the heterogeneous wireless networks that may collaborate with radio frequency (RF) wireless networks. In particular, light-fidelity has great potential for deployment in future indoor environments because of its high throughput and security advantages. This paper provides a comprehensive study of a novel positioning technology based on visible white LED lights, which has attracted much attention from both academia and industry. The essential characteristics and principles of this system are discussed in depth, and relevant positioning algorithms and designs are classified and elaborated. This paper undertakes a thorough investigation into current LED-based indoor positioning systems and compares their performance across many aspects, such as test environment, accuracy, and cost. It presents indoor hybrid positioning systems combining VLC and other systems (e.g., inertial sensors and RF systems). We also review and classify outdoor VLC positioning applications for the first time. Finally, this paper surveys major advances as well as open issues, challenges, and future research directions in VLC positioning systems.
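One common family of LED-based positioning algorithms estimates the receiver's location from received-signal-strength-derived distances to several LED anchors. The sketch below is illustrative only (it is not taken from the survey): it linearizes the range equations by subtracting the first anchor's equation and solves the result by least squares, with hypothetical anchor coordinates and noise-free distances.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2D receiver position from distances to >= 3 LED anchors.

    Subtracting the first anchor's range equation from the others turns the
    nonlinear circle equations into a linear system, solved by least squares.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # From (x - xi)^2 + (y - yi)^2 = di^2, subtract the i = 0 equation:
    #   2 (ai - a0) . p = d0^2 - di^2 + |ai|^2 - |a0|^2
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three ceiling LEDs (coordinates in metres) and exact distances to (2, 1).
leds = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_pos = np.array([2.0, 1.0])
dists = [float(np.linalg.norm(true_pos - np.array(l))) for l in leds]
print(trilaterate(leds, dists))  # -> [2. 1.]
```

In practice the distances would come from a VLC channel model (e.g. a Lambertian emission model) applied to measured signal strength, and noise would make the least-squares formulation with more than three anchors worthwhile.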

410 citations


Journal ArticleDOI
TL;DR: An approach to co-culture multiple different MPSs linked together physiologically on re-useable, open-system microfluidic platforms that are compatible with the quantitative study of a range of compounds, including lipophilic drugs is reported.
Abstract: Microphysiological systems (MPSs) are in vitro models that capture facets of in vivo organ function through use of specialized culture microenvironments, including 3D matrices and microperfusion. Here, we report an approach to co-culture multiple different MPSs linked together physiologically on re-useable, open-system microfluidic platforms that are compatible with the quantitative study of a range of compounds, including lipophilic drugs. We describe three different platform designs – “4-way”, “7-way”, and “10-way” – each accommodating a mixing chamber and up to 4, 7, or 10 MPSs. Platforms accommodate multiple different MPS flow configurations, each with internal re-circulation to enhance molecular exchange, and feature on-board pneumatically-driven pumps with independently programmable flow rates to provide precise control over both intra- and inter-MPS flow partitioning and drug distribution. We first developed a 4-MPS system, showing accurate prediction of secreted liver protein distribution and 2-week maintenance of phenotypic markers. We then developed 7-MPS and 10-MPS platforms, demonstrating reliable, robust operation and maintenance of MPS phenotypic function for 3 weeks (7-way) and 4 weeks (10-way) of continuous interaction, as well as PK analysis of diclofenac metabolism. This study illustrates several generalizable design and operational principles for implementing multi-MPS “physiome-on-a-chip” approaches in drug discovery.

309 citations


Journal ArticleDOI
Giovanna Tinetti, Pierre Drossart, Paul Eccleston, Paul Hartogh, +240 more (45 institutions)
TL;DR: The ARIEL mission as mentioned in this paper was designed to observe a large number of transiting planets for statistical understanding, including gas giants, Neptunes, super-Earths and Earth-size planets around a range of host star types using transit spectroscopy in the 1.25-7.8 μm spectral range and multiple narrow-band photometry in the optical.
Abstract: Thousands of exoplanets have now been discovered with a huge range of masses, sizes and orbits: from rocky Earth-like planets to large gas giants grazing the surface of their host star. However, the essential nature of these exoplanets remains largely mysterious: there is no known, discernible pattern linking the presence, size, or orbital parameters of a planet to the nature of its parent star. We have little idea whether the chemistry of a planet is linked to its formation environment, or whether the type of host star drives the physics and chemistry of the planet’s birth, and evolution. ARIEL was conceived to observe a large number (~1000) of transiting planets for statistical understanding, including gas giants, Neptunes, super-Earths and Earth-size planets around a range of host star types using transit spectroscopy in the 1.25–7.8 μm spectral range and multiple narrow-band photometry in the optical. ARIEL will focus on warm and hot planets to take advantage of their well-mixed atmospheres which should show minimal condensation and sequestration of high-Z materials compared to their colder Solar System siblings. Said warm and hot atmospheres are expected to be more representative of the planetary bulk composition. Observations of these warm/hot exoplanets, and in particular of their elemental composition (especially C, O, N, S, Si), will allow the understanding of the early stages of planetary and atmospheric formation during the nebular phase and the following few million years. ARIEL will thus provide a representative picture of the chemical nature of the exoplanets and relate this directly to the type and chemical environment of the host star. ARIEL is designed as a dedicated survey mission for combined-light spectroscopy, capable of observing a large and well-defined planet sample within its 4-year mission lifetime. 
Transit, eclipse and phase-curve spectroscopy methods, whereby the signal from the star and planet are differentiated using knowledge of the planetary ephemerides, allow us to measure atmospheric signals from the planet at levels of 10–100 parts per million (ppm) relative to the star and, given the bright nature of targets, also allow more sophisticated techniques, such as eclipse mapping, to give a deeper insight into the nature of the atmosphere. These types of observations require a stable payload and satellite platform with broad, instantaneous wavelength coverage to detect many molecular species, probe the thermal structure, identify clouds and monitor the stellar activity. The wavelength range proposed covers all the expected major atmospheric gases, from e.g. H2O, CO2, CH4, NH3, HCN and H2S through to the more exotic metallic compounds, such as TiO, VO, and condensed species. Simulations of ARIEL performance in conducting exoplanet surveys have been performed – using conservative estimates of mission performance and a full model of all significant noise sources in the measurement – using a list of potential ARIEL targets that incorporates the latest available exoplanet statistics. The conclusion at the end of the Phase A study is that ARIEL – in line with the stated mission objectives – will be able to observe about 1000 exoplanets depending on the details of the adopted survey strategy, thus confirming the feasibility of the main science objectives.

298 citations


Journal ArticleDOI
TL;DR: In this article, a detailed census of the properties (velocities, distances, luminosities and masses) and spatial distribution of a complete sample of ~8000 dense clumps located in the Galactic disk is presented.
Abstract: Abridged: ATLASGAL is an unbiased 870 micron submillimetre survey of the inner Galactic plane. It provides a large and systematic inventory of all massive, dense clumps in the Galaxy (>1000 Msun) and includes representative samples of all embedded stages of high-mass star formation. Here we present the first detailed census of the properties (velocities, distances, luminosities and masses) and spatial distribution of a complete sample of ~8000 dense clumps located in the Galactic disk. We derive highly reliable velocities and distances to ~97% of the sample and use mid- and far-infrared survey data to develop an evolutionary classification scheme that we apply to the whole sample. Comparing the evolutionary subsamples reveals trends for increasing dust temperatures, luminosities and line-widths as a function of evolution, indicating that the feedback from the embedded proto-clusters is having a significant impact on the structure and dynamics of their natal clumps. We find that 88 per cent are already associated with star formation at some level. We also find the clump mass to be independent of evolution, suggesting that the clumps form with the majority of their mass in-situ. We estimate the statistical lifetime of the quiescent stage to be ~5 x 10^4 yr for clump masses ~1000 Msun, decreasing to ~1 x 10^4 yr for clump masses >10000 Msun. We find a strong correlation between the fraction of clumps associated with massive stars and peak column density. The fraction is initially small at low column densities but reaches 100 per cent for column densities above 10^{23} cm^{-2}; there are no clumps with column densities above this value that are not already associated with massive star formation. All of the evidence is consistent with a dynamic view of star formation wherein the clumps form rapidly and are initially very unstable so that star formation quickly ensues.

232 citations


Journal ArticleDOI
TL;DR: Bacterial counts in PMD presented higher cell enrichment factors than generally observed for the PA fraction when compared to FL bacteria in the surrounding waters, and the community distinction between the three habitats was consistent across the large-scale sampling in the Western Mediterranean basin.

231 citations


Journal ArticleDOI
TL;DR: The authors reiterate that including GD reflects the essence of the ICD and will facilitate treatment and prevention for those who need it, and that the decision whether or not to include GD is based on clinical evidence and public health needs.
Abstract: The proposed introduction of gaming disorder (GD) in the 11th revision of the International Classification of Diseases (ICD-11) developed by the World Health Organization (WHO) has led to a lively debate over the past year. Besides the broad support for the decision in the academic press, a recent publication by van Rooij et al. (2018) repeated the criticism raised against the inclusion of GD in ICD-11 by Aarseth et al. (2017). We argue that this group of researchers fails to recognize the clinical and public health considerations, which support the WHO perspective. It is important to recognize a range of biases that may influence this debate; in particular, the gaming industry may wish to diminish its responsibility by claiming that GD is not a public health problem, a position which may be supported by arguments from scholars based in media psychology, computer games research, communication science, and related disciplines. However, just as with any other disease or disorder in the ICD-11, the decision whether or not to include GD is based on clinical evidence and public health needs. Therefore, we reiterate our conclusion that including GD reflects the essence of the ICD and will facilitate treatment and prevention for those who need it.

198 citations


Journal ArticleDOI
TL;DR: The document aims to offer an appraisal of all relevant literature up to July 2017, focusing on any key developments, and address important, practical clinical questions relating to the primary guideline objective.
Abstract: The overall objective of the guideline is to provide up-to-date, evidence-based recommendations for the management of lichen sclerosus (LS) in adults (18+ years), children (0-12 years) and young people (13-17 years). The document aims to: offer an appraisal of all relevant literature up to July 2017, focusing on any key developments; address important, practical clinical questions relating to the primary guideline objective; and provide guideline recommendations and, if appropriate, research recommendations. The guideline is presented as a detailed review with highlighted recommendations for practical use in primary care and in secondary care clinics, in addition to an updated Patient Information Leaflet (PIL; available on the BAD website, http://www.bad.org.uk/for-the-public/patient-information-leaflets).

196 citations


Journal ArticleDOI
TL;DR: Nine critical and achievable research priorities identified by the Network, needed in order to advance understanding of PUI, with a view towards identifying vulnerable individuals for early intervention are described.

190 citations


Journal ArticleDOI
TL;DR: In this article, a wideband wide-field spectral deconvolution framework (ddfacet) based on image plane faceting, that takes into account generic direction-dependent effects is presented.
Abstract: The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (ddfacet) based on image plane faceting, that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme, and discuss the various effects that need to be taken into account to solve the deconvolution problem (image plane normalization, position-dependent point spread function, etc.). We discuss two wideband spectral deconvolution algorithms based on hybrid matching pursuit and sub-space optimisation, respectively. A few interesting technical features incorporated in our imager are discussed, including baseline-dependent averaging, which has the effect of improving computing efficiency. The version of ddfacet presented here can account for any externally defined Jones matrices and/or beam patterns.

Journal ArticleDOI
TL;DR: This study investigated the different steps of colonization of polyolefin-based plastics and highlighted different trends between polymer types with distinct surface properties and composition, with the biodegradable AA-OXO and PHBV presenting higher colonization by active and specific bacteria compared to non-biodegradable polymers (PE and OXO).
Abstract: Plastics are ubiquitous in the oceans and constitute suitable matrices for bacterial attachment and growth. Understanding biofouling mechanisms is a key issue in assessing the ecological impacts and fate of plastics in the marine environment. In this study, we investigated the different steps of colonization of polyolefin-based plastics on the one hand, including conventional low-density polyethylene (PE), PE additivated with pro-oxidant (OXO), and artificially aged OXO (AA-OXO); and of a polyester, poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV), on the other hand. We combined measurements of physical surface properties of polymers (hydrophobicity and roughness) with microbiological characterization of the biofilm (cell counts, taxonomic composition, and heterotrophic activity) using a wide range of techniques, some of them used for the first time on plastics. Our experimental setup, using aquariums with natural circulating seawater for 6 weeks, allowed us to characterize the successive phases of primo-colonization, growth, and maturation of the biofilms. We highlighted different trends between polymer types with distinct surface properties and composition, the biodegradable AA-OXO and PHBV presenting higher colonization by active and specific bacteria compared to non-biodegradable polymers (PE and OXO). Succession of bacterial populations occurred during the three colonization phases, with hydrocarbonoclastic bacteria being highly abundant on all plastic types. This study brings original data that provide new insights into the colonization of non-biodegradable and biodegradable polymers by marine microorganisms.

Proceedings ArticleDOI
20 Mar 2018
TL;DR: A case study is analyzed in which a bug discovered in a Smart Contract library, and perhaps "unsafe" programming, allowed an attack on Parity, a wallet application, causing the freezing of about 500K Ethers.
Abstract: Smart Contracts have gained tremendous popularity in the past few years, to the point that billions of US Dollars are currently exchanged every day through such technology. However, since the release of the Frontier network of Ethereum in 2015, there have been many cases in which the execution of Smart Contracts managing Ether coins has led to problems or conflicts. Compared to traditional Software Engineering, a discipline of Smart Contract and Blockchain programming, with standardized best practices that can help solve the mentioned problems and conflicts, is not yet sufficiently developed. Furthermore, Smart Contracts rely on a non-standard software life-cycle, according to which, for instance, delivered applications can hardly be updated or bugs resolved by releasing a new version of the software. In this paper we advocate the need for a discipline of Blockchain Software Engineering, addressing the issues posed by smart contract programming and other applications running on blockchains. We analyse a case study where a bug discovered in a Smart Contract library, and perhaps "unsafe" programming, allowed an attack on Parity, a wallet application, causing the freezing of about 500K Ethers (about 150M USD in November 2017). In this study we analyze the source code of Parity and the library, and discuss how recognised best practices could mitigate, if adopted and adapted, such detrimental software misbehavior. We also reflect on the specificity of Smart Contract software development, which makes some of the existing approaches insufficient, and call for the definition of a specific Blockchain Software Engineering.

Journal ArticleDOI
TL;DR: In this article, the authors identify which baseline patient and clinical characteristics are associated with a better outcome, 6 weeks and 6 months after starting a course of physiotherapy for shoulder pain.
Abstract: Background/aim Shoulder pain is a major musculoskeletal problem. We aimed to identify which baseline patient and clinical characteristics are associated with a better outcome, 6 weeks and 6 months after starting a course of physiotherapy for shoulder pain. Methods 1030 patients aged ≥18 years referred to physiotherapy for the management of musculoskeletal shoulder pain were recruited and provided baseline data. 840 (82%) provided outcome data at 6 weeks and 811 (79%) at 6 months. 71 putative prognostic factors were collected at baseline. Outcomes were the Shoulder Pain and Disability Index (SPADI) and Quick Disability of the Arm, Shoulder and Hand Questionnaire. Multivariable linear regression was used to analyse prognostic factors associated with outcome. Results Parameter estimates (β) are presented for the untransformed SPADI at 6 months, a negative value indicating less pain and disability. 4 factors were associated with better outcomes for both measures and time points: lower baseline disability (β=−0.32, 95% CI −0.23 to −0.40), patient expectation of ‘complete recovery’ compared to ‘slight improvement’ as ‘a result of physiotherapy’ (β=−12.43, 95% CI −8.20 to −16.67), higher pain self-efficacy (β=−0.36, 95% CI −0.50 to −0.22) and lower pain severity at rest (β=−1.89, 95% CI −1.26 to −2.51). Conclusions Psychological factors were consistently associated with patient-rated outcome, whereas clinical examination findings associated with a specific structural diagnosis were not. When assessing people with musculoskeletal shoulder pain and considering referral to physiotherapy services, psychosocial and medical information should be considered. Study registration Protocol published at http://www.biomedcentral.com/1471-2474/14/192.
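The multivariable linear regression used to relate baseline factors to outcome can be sketched as follows. The data and coefficients below are synthetic stand-ins invented for illustration (this is not the study's dataset or its actual β values); the point is only how a fitted coefficient on a baseline factor, with the expected sign, falls out of an ordinary least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical stand-ins for two baseline prognostic factors (synthetic data):
# pain self-efficacy and baseline disability scores.
self_efficacy = rng.uniform(0, 60, n)
baseline_disability = rng.uniform(0, 100, n)

# Simulate a 6-month outcome score where higher self-efficacy predicts a
# lower (better) score; the coefficients here are invented for illustration.
outcome_6m = (40.0 - 0.36 * self_efficacy + 0.32 * baseline_disability
              + rng.normal(0, 5, n))

# Multivariable linear regression: design matrix with an intercept column,
# solved by ordinary least squares.
X = np.column_stack([np.ones(n), self_efficacy, baseline_disability])
beta, *_ = np.linalg.lstsq(X, outcome_6m, rcond=None)
print(beta)  # -> roughly [40, -0.36, 0.32]
```

A negative fitted coefficient (e.g. on self-efficacy) indicates that higher baseline values of that factor are associated with lower, i.e. better, outcome scores, which is how the β estimates in the abstract are read.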

Journal ArticleDOI
TL;DR: It was concluded that a neural network model with a backpropagation learning algorithm has an advantage over the other models in estimating the RUL for slow speed bearings, provided the proper network structure is chosen and sufficient data are provided.
Abstract: The acoustic emission (AE) technique can be successfully utilized for condition monitoring of various machining and industrial processes. To keep machines functioning at optimal levels, a fault prognosis model to predict the remaining useful life (RUL) of machine components is required. This model is used to analyze the output signals of a machine whilst in operation and accordingly helps to set an early alarm tool that reduces the untimely replacement of components and wasteful machine downtime. Recent improvements indicate the drive towards the incorporation of prognosis and diagnosis machine learning techniques in future machine health management systems. With this in mind, this work employs three supervised machine learning techniques: support vector machine regression, a multilayer artificial neural network model and Gaussian process regression, to correlate AE features with the corresponding natural wear of slow speed bearings throughout a series of laboratory experiments. Analysis of signal parameters such as the signal intensity estimator and root mean square was undertaken to discriminate individual types of early damage. It was concluded that the neural network model with a backpropagation learning algorithm has an advantage over the other models in estimating the RUL for slow speed bearings if the proper network structure is chosen and sufficient data are provided.
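The three-model comparison described above can be sketched with scikit-learn's implementations of the same techniques. The AE features and their relationship to wear below are synthetic and purely illustrative (the actual study used laboratory bearing data), but the comparison loop mirrors the approach: fit each regressor on the same features and compare prediction error on held-out data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Synthetic AE features: as a bearing wears, root mean square (RMS) and
# signal intensity estimator (SIE) rise while remaining useful life falls.
n = 400
rul = rng.uniform(0, 100, n)                        # hours remaining
rms = 5 + 0.08 * (100 - rul) + rng.normal(0, 0.4, n)
sie = 2 + 0.05 * (100 - rul) + rng.normal(0, 0.3, n)
X = np.column_stack([rms, sie])

X_tr, X_te, y_tr, y_te = train_test_split(X, rul, random_state=0)

models = {
    "SVR": make_pipeline(StandardScaler(), SVR(C=100.0)),
    "MLP (backprop)": make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0)),
    "GPR": GaussianProcessRegressor(alpha=0.5, normalize_y=True,
                                    random_state=0),
}
maes = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    maes[name] = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {maes[name]:.1f} h")
```

The hyperparameters here are arbitrary starting points; the study's conclusion that network structure and data volume drive the neural network's advantage corresponds to tuning `hidden_layer_sizes` and the training-set size in this sketch.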


Journal ArticleDOI
TL;DR: In this paper, a model for the concurrent formation of globular clusters (GCs) and supermassive stars (SMSs, ≳103 M⊙) is presented to address the origin of the HeCNONaMgAl abundance anomalies in GCs.
Abstract: We present a model for the concurrent formation of globular clusters (GCs) and supermassive stars (SMSs, ≳103 M⊙) to address the origin of the HeCNONaMgAl abundance anomalies in GCs. GCs form in converging gas flows and accumulate low-angular momentum gas, which accretes on to protostars. This leads to an adiabatic contraction of the cluster and an increase of the stellar collision rate. A SMS can form via runaway collisions if the cluster reaches sufficiently high density before two-body relaxation halts the contraction. This condition is met if the number of stars ≳106 and the gas accretion rate ≳105 M⊙ Myr−1, reminiscent of GC formation in high gas-density environments, such as – but not restricted to – the early Universe. The strong SMS wind mixes with the inflowing pristine gas, such that the protostars accrete diluted hot-hydrogen burning yields of the SMS. Because of continuous rejuvenation, the amount of processed material liberated by the SMS can be an order of magnitude higher than its maximum mass. This ‘conveyor-belt’ production of hot-hydrogen burning products provides a solution to the mass budget problem that plagues other scenarios. Additionally, the liberated material is mildly enriched in helium and relatively rich in other hot-hydrogen burning products, in agreement with abundances of GCs today. Finally, we find a super-linear scaling between the amount of processed material and cluster mass, providing an explanation for the observed increase of the fraction of processed material with GC mass. We discuss open questions of this new GC enrichment scenario and propose observational tests.

Journal ArticleDOI
TL;DR: The first life forms evolved in a highly reducing environment, which makes the concept of "reductive stress" somewhat redundant; the battle for iron between bacteria and animal hosts continues today, a much greater daily threat to survival than "oxidative stress" or "redox stress".

Journal ArticleDOI
TL;DR: The preclinical and clinical evidence that lays the foundations of the efficacy of ketamine in the treatment of suicidal ideation in mood disorders is reviewed, thereby approaching the essential question of understanding the neurobiological processes of suicide and the potential therapeutics.
Abstract: Despite the continuous advancement in neurosciences as well as in the knowledge of human behaviors pathophysiology, currently suicide represents a puzzling challenge. The World Health Organization (WHO) has established that one million people die by suicide every year, with the impressive daily rate of a suicide every 40 s. The weightiest concern about suicidal behavior is how difficult it is for healthcare professionals to predict. However, recent evidence in genomic studies has pointed out the essential role that genetics could play in influencing a person's suicide risk. Combining genomic and clinical risk assessment approaches, some studies have identified a number of biomarkers for suicidal ideation, which are involved in neural connectivity, neural activity, mood, as well as in immune and inflammatory response, such as the mammalian target of rapamycin (mTOR) signaling. This interesting discovery provides the neurobiological bases for the use of drugs that impact these specific signaling pathways in the treatment of suicidality, such as ketamine. Ketamine, an N-methyl-d-aspartate glutamate (NMDA) antagonist agent, has recently hit the headlines because of its rapid antidepressant and concurrent anti-suicidal action. Here we review the preclinical and clinical evidence that lay the foundations of the efficacy of ketamine in the treatment of suicidal ideation in mood disorders, thereby also approaching the essential question of the understanding of neurobiological processes of suicide and the potential therapeutics.

Journal ArticleDOI
TL;DR: It is concluded that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.
Abstract: During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Naive Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.
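The ensemble suggestion can be illustrated with scikit-learn: a stacking ensemble learns how to combine the four classifiers' outputs with a meta-learner, rather than counting majority votes. The dataset below is a synthetic stand-in (not the NASA, open source or commercial data used in the study), and the comparison is a sketch of the idea, not a reproduction of the paper's results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, StackingClassifier,
                              VotingClassifier)
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Synthetic stand-in for a defect dataset: 20 static-code features,
# roughly 20% defective modules.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base = [("rf", RandomForestClassifier(random_state=0)),
        ("nb", GaussianNB()),
        ("tree", DecisionTreeClassifier(random_state=0)),  # RPart analogue
        ("svm", SVC(probability=True, random_state=0))]

# Majority voting vs. a learned combination: the stacking meta-learner weighs
# each base classifier's probability estimates instead of counting votes.
vote = VotingClassifier(base, voting="hard").fit(X_tr, y_tr)
stack = StackingClassifier(base, final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)

for name, model in [("majority vote", vote), ("stacking", stack)]:
    recall = recall_score(y_te, model.predict(X_te))
    print(f"{name}: defect recall = {recall:.2f}")
```

Because each base classifier detects a different subset of defects, a learned combiner can exploit a classifier that is right when it dissents from the majority, which hard voting by construction cannot.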

Journal ArticleDOI
05 May 2018-BMJ Open
TL;DR: A very wide variation in medication error and error-related adverse event rates is reported in the studies, reflecting heterogeneity in the populations studied, study designs employed and outcomes evaluated.
Abstract: Objective To investigate the epidemiology of medication errors and error-related adverse events in adults in primary care, ambulatory care and patients’ homes. Design Systematic review. Data source Six international databases were searched for publications between 1 January 2006 and 31 December 2015. Data extraction and analysis Two researchers independently extracted data from eligible studies and assessed the quality of these using established instruments. Synthesis of data was informed by an appreciation of the medicines’ management process and the conceptual framework from the International Classification for Patient Safety. Results 60 studies met the inclusion criteria, of which 53 studies focused on medication errors, 3 on error-related adverse events and 4 on risk factors only. The prevalence of prescribing errors was reported in 46 studies: prevalence estimates ranged widely from 2% to 94%. Inappropriate prescribing was the most common type of error reported. Only one study reported the prevalence of monitoring errors, finding that incomplete therapeutic/safety laboratory-test monitoring occurred in 73% of patients. The incidence of preventable adverse drug events (ADEs) was estimated as 15/1000 person-years, the prevalence of drug–drug interaction-related adverse drug reactions as 7% and the prevalence of preventable ADEs as 0.4%. A number of patient, healthcare professional and medication-related risk factors were identified, including the number of medications used by the patient, increased patient age, the number of comorbidities, use of anticoagulants, cases where more than one physician was involved in patients’ care and care being provided by family physicians/general practitioners. Conclusion A very wide variation in medication error and error-related adverse event rates is reported in the studies, reflecting heterogeneity in the populations studied, study designs employed and outcomes evaluated.
This review has identified important limitations and discrepancies in the methodologies used and gaps in the literature on the epidemiology and outcomes of medication errors in community settings.

Journal ArticleDOI
TL;DR: In this paper, the Gaia DR2 distances of about 700 selected young stellar objects in the benchmark giant molecular cloud Orion A were used to infer its 3D shape and orientation, and they found that Orion A is not the fairly straight filamentary cloud that we see in (2D) projection, but instead a cometary-like cloud oriented toward the Galactic plane, with two distinct components: a denser and enhanced star-forming (bent) Head, and a lower density and star-formation quieter ∼75 pc long Tail.
Abstract: We use the Gaia DR2 distances of about 700 mid-infrared selected young stellar objects in the benchmark giant molecular cloud Orion A to infer its 3D shape and orientation. We find that Orion A is not the fairly straight filamentary cloud that we see in (2D) projection, but instead a cometary-like cloud oriented toward the Galactic plane, with two distinct components: a denser and enhanced star-forming (bent) Head, and a lower density and star-formation quieter ∼75 pc long Tail. The true extent of Orion A is not the projected ∼40 pc but ∼90 pc, making it by far the largest molecular cloud in the local neighborhood. Its aspect ratio (∼30:1) and high column-density fraction (∼45%) make it similar to large-scale Milky Way filaments (“bones”), despite its distance to the galactic mid-plane being an order of magnitude larger than typically found for these structures.

Journal ArticleDOI
TL;DR: Evidence is steadily and increasingly accumulating to confirm the poorer cognitive outcome for women than men with Alzheimer's disease, and sex-related differences in risk factors, resilience, cognitive reserve, and rates of deterioration have implications for clinical practice.
Abstract: Purpose of reviewWomen are more impacted by Alzheimer's disease than men – they are at significantly greater risk of developing Alzheimer's disease, and recent research shows that they also appear to suffer a greater cognitive deterioration than men at the same disease stage. The purpose of this art

Journal ArticleDOI
TL;DR: Findings from a neuroimaging study of Pavlovian fear reversal suggest there is an absence of ventromedial prefrontal cortex safety signaling in obsessive compulsive disorder, which potentially undermines explicit contingency knowledge and may help to explain the link between cognitive inflexibility, fear, and anxiety processing in compulsive disorders such as obsessiveCompulsive disorder.
Abstract: Compulsions are repetitive, stereotyped thoughts and behaviors designed to reduce harm. Growing evidence suggests that the neurocognitive mechanisms mediating behavioral inhibition (motor inhibition, cognitive inflexibility) reversal learning and habit formation (shift from goal-directed to habitual responding) contribute toward compulsive activity in a broad range of disorders. In obsessive compulsive disorder, distributed network perturbation appears focused around the prefrontal cortex, caudate, putamen, and associated neuro-circuitry. Obsessive compulsive disorder-related attentional set-shifting deficits correlated with reduced resting state functional connectivity between the dorsal caudate and the ventrolateral prefrontal cortex on neuroimaging. In contrast, experimental provocation of obsessive compulsive disorder symptoms reduced neural activation in brain regions implicated in goal-directed behavioral control (ventromedial prefrontal cortex, caudate) with concordant increased activation in regions implicated in habit learning (presupplementary motor area, putamen). The ventromedial prefrontal cortex plays a multifaceted role, integrating affective evaluative processes, flexible behavior, and fear learning. Findings from a neuroimaging study of Pavlovian fear reversal, in which obsessive compulsive disorder patients failed to flexibly update fear responses despite normal initial fear conditioning, suggest there is an absence of ventromedial prefrontal cortex safety signaling in obsessive compulsive disorder, which potentially undermines explicit contingency knowledge and may help to explain the link between cognitive inflexibility, fear, and anxiety processing in compulsive disorders such as obsessive compulsive disorder.

Journal ArticleDOI
TL;DR: CBTp has a small therapeutic effect on functioning at end-of-trial, although this benefit is not evident at follow-up, and there is no evidence that CBTp increases quality of life post-intervention.
Abstract: The effect of cognitive behavioural therapy for psychosis (CBTp) on the core symptoms of schizophrenia has proven contentious, with current meta-analyses finding at most only small effects. However, it has been suggested that the effects of CBTp in areas other than psychotic symptoms are at least as important and potentially benefit from the intervention. We meta-analysed RCTs investigating the effectiveness of CBTp for functioning, distress and quality of life in individuals diagnosed with schizophrenia and related disorders. Data from 36 randomised controlled trials (RCTs) met our inclusion criteria: 27 assessing functioning (1579 participants); 8 for distress (465 participants); and 10 for quality of life (592 participants). The pooled effect size for functioning was small but significant at end-of-trial (0.25; 95% CI 0.14 to 0.33); however, this became non-significant at follow-up (0.10; 95% CI -0.07 to 0.26). Although a small benefit of CBT was evident for reducing distress (0.37; 95% CI 0.05 to 0.69), this became non-significant when adjusted for possible publication bias (0.18; 95% CI -0.12 to 0.48). Finally, CBTp showed no benefit for improving quality of life (0.04; 95% CI -0.12 to 0.19). CBTp has a small therapeutic effect on functioning at end-of-trial, although this benefit is not evident at follow-up. Although CBTp produced a small benefit on distress, this was subject to possible publication bias and became non-significant when adjusted. We found no evidence that CBTp increases quality of life post-intervention.
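For context, pooled effect sizes of the kind quoted in this abstract are typically obtained by inverse-variance weighting of study-level estimates. A minimal sketch of fixed-effect pooling follows; the numbers are illustrative, not data from this review, and the authors' actual model (e.g. random-effects) may differ:

```python
import math

def pool_effects(effects, ses):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    effects: per-study standardized effect estimates
    ses: their standard errors
    Returns (pooled estimate, (95% CI lower, 95% CI upper)).
    """
    weights = [1.0 / se ** 2 for se in ses]  # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Illustrative (made-up) study effects, not data from this review:
est, (lo, hi) = pool_effects([0.30, 0.20, 0.25], [0.10, 0.12, 0.08])
```

Each study contributes in proportion to the inverse of its variance, so large, precise trials dominate the pooled estimate.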

Journal ArticleDOI
TL;DR: Initial recommendations on the minimum dataset required to consider an iPSC line of clinical grade are outlined in this report, which are likely to lead to revision of these guidelines on a regular basis.
Abstract: Use of clinical-grade human induced pluripotent stem cell (iPSC) lines as a starting material for the generation of cellular therapeutics requires demonstration of comparability of lines derived from different individuals and in different facilities. This requires agreement on the critical quality attributes of such lines and the assays that should be used. Working from established recommendations and guidance from the International Stem Cell Banking Initiative for human embryonic stem cell banking, and concentrating on those issues more relevant to iPSCs, a series of consensus workshops has made initial recommendations on the minimum dataset required to consider an iPSC line of clinical grade, which are outlined in this report. Continued evolution of this field will likely lead to revision of these guidelines on a regular basis.

Journal ArticleDOI
TL;DR: A practical framework for including these new dimensions in an already well-defined model of quality improvement has the potential to harness the growing quality improvement movement to shape a more sustainable health service, while improving patient outcomes.
Abstract: Sustainability can be considered a domain of quality in healthcare, extending the responsibility of health services to patients not just of today but of the future. The longer-term perspective highlights the impacts of our healthcare system on our environment and communities and in turn back onto population health. A sustainable approach will therefore expand the healthcare definition of value to measure health outcomes against environmental and social impacts alongside financial costs. We set out a practical framework for including these new dimensions in an already well-defined model of quality improvement. This has the potential to harness the growing quality improvement movement to shape a more sustainable health service, while improving patient outcomes. Early experience suggests that the new model may also provide immediate benefits, including additional motivation for clinicians to engage in quality improvement, directing their efforts towards high value interventions and enabling capture and communication of a wider range of impacts on patients, staff and communities.

Journal ArticleDOI
TL;DR: The aim of this paper is to briefly summarize and provide an update on major clozapine adverse effects, especially focusing on those that are severe and potentially life threatening, even if most of the latter are relatively uncommon.
Abstract: Clozapine, a dibenzodiazepine developed in 1961, is a multireceptorial atypical antipsychotic approved for the treatment of resistant schizophrenia. Since its introduction, it has remained the drug of choice in treatment-resistant schizophrenia, despite a wide range of adverse effects, as it is a very effective drug in everyday clinical practice. However, clozapine is not considered as a top-of-the-line treatment because it may often be difficult for some patients to tolerate as some adverse effects can be particularly bothersome (i.e. sedation, weight gain, sialorrhea etc.) and it has some other potentially dangerous and life-threatening side effects (i.e. myocarditis, seizures, agranulocytosis or granulocytopenia, gastrointestinal hypomotility etc.). As poor treatment adherence in patients with resistant schizophrenia may increase the risk of a psychotic relapse, which may further lead to impaired social and cognitive functioning, psychiatric hospitalizations and increased treatment costs, clozapine adverse effects are a common reason for discontinuing this medication. Therefore, every effort should be made to monitor and minimize these adverse effects in order to improve their early detection and management. The aim of this paper is to briefly summarize and provide an update on major clozapine adverse effects, especially focusing on those that are severe and potentially life threatening, even if most of the latter are relatively uncommon.

Journal ArticleDOI
15 Nov 2018-Nature
TL;DR: In this article, a low-amplitude periodic signal with a period of 233 days was found to arise from a super-Earth around Barnard's star, with a minimum mass of 3.2 times that of Earth orbiting near its snow line.
Abstract: Barnard’s star is a red dwarf, and has the largest proper motion (apparent motion across the sky) of all known stars. At a distance of 1.8 parsecs1, it is the closest single star to the Sun; only the three stars in the α Centauri system are closer. Barnard’s star is also among the least magnetically active red dwarfs known2,3 and has an estimated age older than the Solar System. Its properties make it a prime target for planetary searches; various techniques with different sensitivity limits have been used previously, including radial-velocity imaging4–6, astrometry7,8 and direct imaging9, but all ultimately led to negative or null results. Here we combine numerous measurements from high-precision radial-velocity instruments, revealing the presence of a low-amplitude periodic signal with a period of 233 days. Independent photometric and spectroscopic monitoring, as well as an analysis of instrumental systematic effects, suggest that this signal is best explained as arising from a planetary companion. The candidate planet around Barnard’s star is a cold super-Earth, with a minimum mass of 3.2 times that of Earth, orbiting near its snow line (the minimum distance from the star at which volatile compounds could condense). The combination of all radial-velocity datasets spanning 20 years of measurements additionally reveals a long-term modulation that could arise from a stellar magnetic-activity cycle or from a more distant planetary object. Because of its proximity to the Sun, the candidate planet has a maximum angular separation of 220 milliarcseconds from Barnard’s star, making it an excellent target for direct imaging and astrometric observations in the future. Analysis of 20 years of observations of Barnard’s star from seven facilities reveals a signal with a period of 233 days that is indicative of a companion planet.
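As a back-of-envelope check on the reported figures, the stellar reflex radial-velocity semi-amplitude implied by a 3.2 Earth-mass planet on a 233-day orbit can be estimated with the standard RV formula. The stellar mass of ~0.16 solar masses used below is an assumed textbook value for Barnard's star, not a figure from this abstract, and a circular, edge-on orbit is assumed:

```python
import math

# Physical constants and assumed inputs (stellar mass is an assumption,
# not stated in the abstract; ~0.16 Msun is a commonly quoted value).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_EARTH = 5.972e24       # kg

def rv_semi_amplitude(p_days, mp_earth, mstar_sun, e=0.0):
    """Stellar reflex RV semi-amplitude in m/s, assuming sin(i) = 1."""
    p = p_days * 86400.0
    mp = mp_earth * M_EARTH
    ms = mstar_sun * M_SUN
    return ((2 * math.pi * G / p) ** (1 / 3)
            * mp / ((ms + mp) ** (2 / 3) * math.sqrt(1 - e ** 2)))

k = rv_semi_amplitude(233.0, 3.2, 0.16)  # roughly 1 m/s
```

A signal of order 1 m/s is near the precision floor of current spectrographs, which is why combining 20 years of data from multiple instruments was needed.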

Journal ArticleDOI
TL;DR: In this article, a series of high-resolution cosmological zoom simulations of galaxy formation is used to investigate the relationship between the ultraviolet (UV) slope, beta, and the ratio of the infrared luminosity to UV luminosity (IRX) in the spectral energy distributions (SEDs) of galaxies.
Abstract: We utilise a series of high-resolution cosmological zoom simulations of galaxy formation to investigate the relationship between the ultraviolet (UV) slope, beta, and the ratio of the infrared luminosity to UV luminosity (IRX) in the spectral energy distributions (SEDs) of galaxies. We employ dust radiative transfer calculations in which the SEDs of the stars in galaxies propagate through the dusty interstellar medium. Our main goals are to understand the origin of, and scatter in, the IRX-beta relation; to assess the efficacy of simplified stellar population synthesis screen models in capturing the essential physics in the IRX-beta relation; and to understand systematic deviations from the canonical local IRX-beta relations in particular populations of high-redshift galaxies. Our main results follow. Galaxies that have young stellar populations with relatively cospatial UV and IR emitting regions and a Milky Way-like extinction curve fall on or near the standard Meurer relation. This behaviour is well captured by simplified screen models. Scatter in the IRX-beta relation is dominated by three major effects: (i) older stellar populations drive galaxies below the relations defined for local starbursts due to a reddening of their intrinsic UV SEDs; (ii) complex geometries in high-z heavily star-forming galaxies drive galaxies toward blue UV slopes owing to optically thin UV sightlines; (iii) shallow extinction curves drive galaxies downward in the IRX-beta plane due to lowered NUV/FUV extinction ratios. We use these features of the UV slopes of galaxies to derive a fitting relation that reasonably collapses the scatter back toward the canonical local relation. Finally, we use these results to develop an understanding of the location of two particularly enigmatic populations of galaxies in the IRX-beta plane: z~2-4 dusty star-forming galaxies, and z>5 star-forming galaxies.
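The "standard Meurer relation" referenced above ties FUV dust attenuation to the observed UV slope; in its commonly quoted form (Meurer et al. 1999), A_FUV = 4.43 + 1.99*beta. A minimal sketch follows; the conversion from A_FUV to IRX uses a simple energy-balance approximation (absorbed UV light re-emitted in the IR, bolometric correction of order unity), not this paper's radiative-transfer treatment:

```python
def a_fuv_meurer(beta):
    """FUV attenuation (mag) from the Meurer et al. (1999) starburst fit."""
    return 4.43 + 1.99 * beta

def irx_from_a_fuv(a_fuv):
    """Approximate IRX = L_IR / L_UV, assuming the attenuated UV luminosity
    is fully re-radiated in the IR (energy balance, bolometric correction ~1)."""
    return 10 ** (0.4 * a_fuv) - 1

# On this relation a slope of beta ~ -2.23 corresponds to A_FUV ~ 0,
# i.e. an essentially dust-free UV spectrum:
irx_dusty = irx_from_a_fuv(a_fuv_meurer(0.0))  # a red slope implies high IRX
```

Redder slopes (larger beta) climb the relation steeply, which is why the populations that deviate from it, through old stellar populations, complex geometry or different extinction curves, scatter so visibly in the IRX-beta plane.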