
Showing papers by "University College Dublin" published in 2016


Journal ArticleDOI
Daniel J. Klionsky, Kotb Abdelmohsen, Akihisa Abe, Joynal Abedin, +2519 more (695 institutions)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
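The guidelines' central distinction, that flux rather than autophagosome counts is the meaningful readout, maps onto the widely used LC3-II turnover assay, in which lysosomal degradation is blocked with an inhibitor and flux is read from the inhibitor-dependent increment. A minimal sketch with hypothetical densitometry values (an illustration, not code from the paper):

```python
# Illustrative sketch (not from the paper): estimating autophagic flux from an
# LC3-II turnover assay. Flux is inferred from the *difference* in LC3-II levels
# measured with and without a lysosomal inhibitor (e.g. bafilomycin A1), because
# a raw increase in LC3-II alone cannot distinguish increased autophagosome
# biogenesis from a block in degradation.

def lc3_flux(lc3_untreated: float, lc3_inhibited: float) -> float:
    """Flux proxy: LC3-II accumulated while degradation is blocked."""
    return lc3_inhibited - lc3_untreated

# Hypothetical densitometry values (arbitrary units, normalized to a loading control)
basal = lc3_flux(lc3_untreated=1.0, lc3_inhibited=2.5)    # flux proxy = 1.5
starved = lc3_flux(lc3_untreated=1.8, lc3_inhibited=5.0)  # flux proxy = 3.2

# Higher LC3-II in the starved sample alone is ambiguous; the larger
# inhibitor-dependent increment is what indicates genuinely increased flux.
print(f"basal flux proxy: {basal:.1f}, starvation flux proxy: {starved:.1f}")
```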

5,187 citations


Journal ArticleDOI
Bin Zhou, Yuan Lu, Kaveh Hajifathalian, James Bentham, +494 more (170 institutions)
TL;DR: In this article, the authors used a Bayesian hierarchical model to estimate trends in diabetes prevalence, defined as fasting plasma glucose of 7.0 mmol/L or higher, or history of diagnosis with diabetes, or use of insulin or oral hypoglycaemic drugs in 200 countries and territories in 21 regions, by sex and from 1980 to 2014.
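The case definition is stated precisely enough to write down as a predicate; a minimal sketch in Python (field names are illustrative, not from the paper):

```python
# Minimal sketch of the stated case definition of diabetes: fasting plasma
# glucose >= 7.0 mmol/L, OR a history of diagnosis with diabetes, OR use of
# insulin or oral hypoglycaemic drugs. Field names are illustrative.

def has_diabetes(fpg_mmol_per_l: float, diagnosed: bool, on_medication: bool) -> bool:
    return fpg_mmol_per_l >= 7.0 or diagnosed or on_medication

assert has_diabetes(7.2, False, False)       # raised fasting glucose alone qualifies
assert has_diabetes(5.4, True, False)        # prior diagnosis qualifies
assert not has_diabetes(5.4, False, False)   # none of the criteria met
```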

2,782 citations


Journal ArticleDOI
TL;DR: This review addresses the issue by providing the fundamental principles of these techniques, summarizing the core mathematical principles, and offering practical guidelines on tackling commonly encountered problems while running dynamic light scattering (DLS) and zeta potential (ZP) measurements.
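One of the core mathematical principles such a review rests on is the Stokes-Einstein relation, which converts the translational diffusion coefficient that DLS actually measures into a hydrodynamic diameter:

```latex
% Stokes-Einstein relation underlying DLS sizing: the diffusion coefficient D
% extracted from the autocorrelation of the scattered light yields the
% hydrodynamic diameter d_H (k_B: Boltzmann constant, T: absolute temperature,
% eta: solvent viscosity).
d_H = \frac{k_B T}{3 \pi \eta D}
```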

2,215 citations


Journal ArticleDOI
TL;DR: This updated version of mclust adds new covariance structures, dimension reduction capabilities for visualisation, model selection criteria, initialisation strategies for the EM algorithm, and bootstrap-based inference, making it a full-featured R package for data analysis via finite mixture modelling.
Abstract: Finite mixture models are being used increasingly to model a wide variety of random phenomena for clustering, classification and density estimation. mclust is a powerful and popular package which allows modelling of data as a Gaussian finite mixture with different covariance structures and different numbers of mixture components, for a variety of purposes of analysis. Recently, version 5 of the package has been made available on CRAN. This updated version adds new covariance structures, dimension reduction capabilities for visualisation, model selection criteria, initialisation strategies for the EM algorithm, and bootstrap-based inference, making it a full-featured R package for data analysis via finite mixture modelling.
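mclust itself is an R package, but its model-selection workflow, fitting Gaussian mixtures across numbers of components and covariance structures and choosing among them by BIC, can be sketched with scikit-learn as a rough Python analogue (not the package's own API):

```python
# Rough Python analogue of mclust's model-selection workflow (mclust itself is
# R); scikit-learn stands in here. Fit Gaussian mixtures over a grid of
# component counts and covariance structures, then pick the best model by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 0.5, (80, 2))])

candidates = [
    GaussianMixture(n_components=k, covariance_type=cov, random_state=0).fit(X)
    for k in range(1, 6)
    for cov in ("spherical", "diag", "tied", "full")  # analogous to mclust's model families
]
best = min(candidates, key=lambda m: m.bic(X))  # sklearn's BIC: lower is better
print(best.n_components, best.covariance_type)
```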

1,841 citations


Journal ArticleDOI
26 Jul 2016 - eLife
TL;DR: The height differential between the tallest and shortest populations was 19-20 cm a century ago, and has remained the same for women and increased for men a century later despite substantial changes in the ranking of countries.
Abstract: Being taller is associated with enhanced longevity, and higher education and earnings. We reanalysed 1472 population-based studies, with measurement of height on more than 18.6 million participants to estimate mean height for people born between 1896 and 1996 in 200 countries. The largest gain in adult height over the past century has occurred in South Korean women and Iranian men, who became 20.2 cm (95% credible interval 17.5–22.7) and 16.5 cm (13.3–19.7) taller, respectively. In contrast, there was little change in adult height in some sub-Saharan African countries and in South Asia over the century of analysis. The tallest people over these 100 years are men born in the Netherlands in the last quarter of 20th century, whose average heights surpassed 182.5 cm, and the shortest were women born in Guatemala in 1896 (140.3 cm; 135.8–144.8). The height differential between the tallest and shortest populations was 19-20 cm a century ago, and has remained the same for women and increased for men a century later despite substantial changes in the ranking of countries.

1,348 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide an updated recommendation for the usage of sets of parton distribution functions (PDFs) and the assessment of PDF and PDF+$\alpha_s$ uncertainties suitable for applications at the LHC Run II.
Abstract: We provide an updated recommendation for the usage of sets of parton distribution functions (PDFs) and the assessment of PDF and PDF+$\alpha_s$ uncertainties suitable for applications at the LHC Run II. We review developments since the previous PDF4LHC recommendation, and discuss and compare the new generation of PDFs, which include substantial information from experimental data from the Run I of the LHC. We then propose a new prescription for the combination of a suitable subset of the available PDF sets, which is presented in terms of a single combined PDF set. We finally discuss tools which allow for the delivery of this combined set in terms of optimized sets of Hessian eigenvectors or Monte Carlo replicas, and their usage, and provide some examples of their application to LHC phenomenology.
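Delivery as Monte Carlo replicas implies a simple usage pattern: the central prediction is the mean over replica members and the PDF uncertainty their standard deviation. A schematic sketch, with the per-replica values as a synthetic stand-in for numbers one would obtain by evaluating each member of the combined set (e.g. via LHAPDF):

```python
# Schematic use of a combined PDF set delivered as Monte Carlo replicas:
# central value = mean over replicas, PDF uncertainty = standard deviation over
# replicas. `observable_per_replica` is a synthetic stand-in for values computed
# by re-evaluating a prediction with each replica member.
import numpy as np

observable_per_replica = np.random.default_rng(1).normal(100.0, 3.0, size=100)

central = observable_per_replica.mean()
pdf_uncertainty = observable_per_replica.std(ddof=1)
print(f"prediction: {central:.1f} +/- {pdf_uncertainty:.1f} (PDF uncertainty)")
```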

1,098 citations


Journal ArticleDOI
TL;DR: In this article, the authors explored the financial asset capabilities of bitcoin using GARCH models and found that bitcoin can be classified as something in between gold and the American dollar on a scale from pure medium of exchange advantages to pure store of value advantages.
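The asymmetric GARCH methodology referenced in the summary can be outlined with the Python arch package; a minimal sketch on synthetic returns (the paper itself fits bitcoin return data):

```python
# Minimal sketch of fitting an asymmetric (GJR-)GARCH model of the kind used in
# gold/dollar comparisons, here on synthetic returns via the `arch` package;
# the paper applies such models to bitcoin returns.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(42)
returns = rng.standard_t(df=5, size=1000)  # synthetic, fat-tailed daily % returns

# o=1 adds the asymmetry (leverage) term that distinguishes GJR-GARCH from GARCH(1,1)
model = arch_model(returns, mean="Constant", vol="GARCH", p=1, o=1, q=1)
result = model.fit(disp="off")
print(result.summary())
```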

1,050 citations


Journal ArticleDOI
TL;DR: These recommendations provide stakeholders with an updated consensus on the pharmacological treatment of PsA and strategies to reach optimal outcomes in PsA, based on a combination of evidence and expert opinion.
Abstract: Background Since the publication of the European League Against Rheumatism recommendations for the pharmacological treatment of psoriatic arthritis (PsA) in 2012, new evidence and new therapeutic agents have emerged. The objective was to update these recommendations. Methods A systematic literature review was performed regarding pharmacological treatment in PsA. Subsequently, recommendations were formulated based on the evidence and the expert opinion of the 34 Task Force members. Levels of evidence and strengths of recommendations were allocated. Results The updated recommendations comprise 5 overarching principles and 10 recommendations, covering pharmacological therapies for PsA from non-steroidal anti-inflammatory drugs (NSAIDs), to conventional synthetic (csDMARD) and biological (bDMARD) disease-modifying antirheumatic drugs, whatever their mode of action, taking articular and extra-articular manifestations of PsA into account, but focusing on musculoskeletal involvement. The overarching principles address the need for shared decision-making and treatment objectives. The recommendations address csDMARDs as an initial therapy after failure of NSAIDs and local therapy for active disease, followed, if necessary, by a bDMARD or a targeted synthetic DMARD (tsDMARD). The first bDMARD would usually be a tumour necrosis factor (TNF) inhibitor. bDMARDs targeting interleukin (IL)12/23 (ustekinumab) or IL-17 pathways (secukinumab) may be used in patients for whom TNF inhibitors are inappropriate and a tsDMARD such as a phosphodiesterase 4-inhibitor (apremilast) if bDMARDs are inappropriate. If the first bDMARD strategy fails, any other bDMARD or tsDMARD may be used. Conclusions These recommendations provide stakeholders with an updated consensus on the pharmacological treatment of PsA and strategies to reach optimal outcomes in PsA, based on a combination of evidence and expert opinion.

802 citations


Journal ArticleDOI
TL;DR: This paper updates the 2009 Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) treatment recommendations for the spectrum of manifestations affecting patients with psoriatic arthritis (PsA).
Abstract: Objective To update the 2009 Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) treatment recommendations for the spectrum of manifestations affecting patients with psoriatic arthritis (PsA). Methods GRAPPA rheumatologists, dermatologists, and PsA patients drafted overarching principles for the management of PsA, based on consensus achieved at face-to-face meetings and via online surveys. We conducted literature reviews regarding treatment for the key domains of PsA (arthritis, spondylitis, enthesitis, dactylitis, skin disease, and nail disease) and convened a new group to identify pertinent comorbidities and their effect on treatment. Finally, we drafted treatment recommendations for each of the clinical manifestations and assessed the level of agreement for the overarching principles and treatment recommendations among GRAPPA members, using an online questionnaire. Results Six overarching principles had ≥80% agreement among both health care professionals (n = 135) and patient research partners (n = 10). We developed treatment recommendations and a schema incorporating these principles for arthritis, spondylitis, enthesitis, dactylitis, skin disease, nail disease, and comorbidities in the setting of PsA, using the Grading of Recommendations, Assessment, Development and Evaluation process. Agreement of >80% was reached for approval of the individual recommendations and the overall schema. Conclusion We present overarching principles and updated treatment recommendations for the key manifestations of PsA, including related comorbidities, based on a literature review and consensus of GRAPPA members (rheumatologists, dermatologists, other health care providers, and patient research partners). Further updates are anticipated as the therapeutic landscape in PsA evolves.

717 citations


Journal ArticleDOI
09 Jun 2016 - Nature
TL;DR: In this article, the authors analyse genome-wide data from 51 Eurasians from ~45,000-7,000 years ago and find that the proportion of Neanderthal DNA decreased from 3-6% to around 2%, consistent with natural selection against Neanderthal variants in modern humans.
Abstract: Modern humans arrived in Europe ~45,000 years ago, but little is known about their genetic composition before the start of farming ~8,500 years ago. Here we analyse genome-wide data from 51 Eurasians from ~45,000-7,000 years ago. Over this time, the proportion of Neanderthal DNA decreased from 3-6% to around 2%, consistent with natural selection against Neanderthal variants in modern humans. Whereas there is no evidence of the earliest modern humans in Europe contributing to the genetic composition of present-day Europeans, all individuals between ~37,000 and ~14,000 years ago descended from a single founder population which forms part of the ancestry of present-day Europeans. An ~35,000-year-old individual from northwest Europe represents an early branch of this founder population which was then displaced across a broad region, before reappearing in southwest Europe at the height of the last Ice Age ~19,000 years ago. During the major warming period after ~14,000 years ago, a genetic component related to present-day Near Easterners became widespread in Europe. These results document how population turnover and migration have been recurring themes of European prehistory.

702 citations


Journal ArticleDOI
25 Aug 2016 - Nature
TL;DR: This paper reported genome-wide ancient DNA from 44 ancient Near Easterners ranging in time between ~12,000 and 1,400 BC, from Natufian hunter-gatherers to Bronze Age farmers, showing that the earliest populations of the Near East derived around half their ancestry from a 'Basal Eurasian' lineage that had little if any Neanderthal admixture and that separated from other non-African lineages before their separation from each other.
Abstract: We report genome-wide ancient DNA from 44 ancient Near Easterners ranging in time between ~12,000 and 1,400 BC, from Natufian hunter–gatherers to Bronze Age farmers. We show that the earliest populations of the Near East derived around half their ancestry from a ‘Basal Eurasian’ lineage that had little if any Neanderthal admixture and that separated from other non-African lineages before their separation from each other. The first farmers of the southern Levant (Israel and Jordan) and Zagros Mountains (Iran) were strongly genetically differentiated, and each descended from local hunter–gatherers. By the time of the Bronze Age, these two populations and Anatolian-related farmers had mixed with each other and with the hunter–gatherers of Europe to greatly reduce genetic differentiation. The impact of the Near Eastern farmers extended beyond the Near East: farmers related to those of Anatolia spread westward into Europe; farmers related to those of the Levant spread southward into East Africa; farmers related to those of Iran spread northward into the Eurasian steppe; and people related to both the early farmers of Iran and to the pastoralists of the Eurasian steppe spread eastward into South Asia.

Journal ArticleDOI
TL;DR: In this article, the authors explore the hedging capabilities of Bitcoin by applying the asymmetric GARCH methodology used in investigation of gold and show that bitcoin can clearly be used as a hedge against stocks in the Financial Times Stock Exchange Index.

Journal ArticleDOI
Vardan Khachatryan, Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, +2283 more (141 institutions)
TL;DR: Combined fits to CMS UE proton–proton data at sqrt(s) = 7 TeV and to UE proton–antiproton data from the CDF experiment at lower sqrt(s) are used to study the UE models and constrain their parameters, thereby providing improved predictions for proton–proton collisions at 13 TeV.
Abstract: New sets of parameters ("tunes") for the underlying-event (UE) modeling of the PYTHIA8, PYTHIA6 and HERWIG++ Monte Carlo event generators are constructed using different parton distribution functions. Combined fits to CMS UE data at sqrt(s) = 7 TeV and to UE data from the CDF experiment at lower sqrt(s), are used to study the UE models and constrain their parameters, providing thereby improved predictions for proton-proton collisions at 13 TeV. In addition, it is investigated whether the values of the parameters obtained from fits to UE observables are consistent with the values determined from fitting observables sensitive to double-parton scattering processes. Finally, comparisons of the UE tunes to "minimum bias" (MB) events, multijet, and Drell-Yan (q q-bar to Z / gamma* to lepton-antilepton + jets) observables at 7 and 8 TeV are presented, as well as predictions of MB and UE observables at 13 TeV.

Journal ArticleDOI
TL;DR: Four new supervised methods to detect the number of clusters were developed and tested, and were found to outperform the existing methods using both evenly and unevenly sampled data sets. A subsampling strategy aiming to reduce sampling unevenness between subpopulations is also presented and tested.
Abstract: Inferences of population structure and more precisely the identification of genetically homogeneous groups of individuals are essential to the fields of ecology, evolutionary biology and conservation biology. Such population structure inferences are routinely investigated via the program structure implementing a Bayesian algorithm to identify groups of individuals at Hardy-Weinberg and linkage equilibrium. While the method is performing relatively well under various population models with even sampling between subpopulations, the robustness of the method to uneven sample size between subpopulations and/or hierarchical levels of population structure has not yet been tested despite being commonly encountered in empirical data sets. In this study, I used simulated and empirical microsatellite data sets to investigate the impact of uneven sample size between subpopulations and/or hierarchical levels of population structure on the detected population structure. The results demonstrated that uneven sampling often leads to wrong inferences on hierarchical structure and downward-biased estimates of the true number of subpopulations. Distinct subpopulations with reduced sampling tended to be merged together, while at the same time, individuals from extensively sampled subpopulations were generally split, despite belonging to the same panmictic population. Four new supervised methods to detect the number of clusters were developed and tested as part of this study and were found to outperform the existing methods using both evenly and unevenly sampled data sets. Additionally, a subsampling strategy aiming to reduce sampling unevenness between subpopulations is presented and tested. These results altogether demonstrate that when sampling evenness is accounted for, the detection of the correct population structure is greatly improved.
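The subsampling strategy described, drawing equal numbers of individuals from each subpopulation before re-running the clustering, is straightforward to express; a sketch assuming each individual carries a sampling-site label (the labels and sizes below are illustrative):

```python
# Sketch of the kind of subsampling strategy the paper tests: equalize sample
# sizes across subpopulations (keyed here by a sampling-site label, an
# assumption for illustration) before re-running clustering, so that uneven
# sampling does not bias the inferred number of clusters.
import random
from collections import defaultdict

def equalize(individuals, site_of, seed=0):
    by_site = defaultdict(list)
    for ind in individuals:
        by_site[site_of[ind]].append(ind)
    n = min(len(group) for group in by_site.values())  # smallest subpopulation size
    rng = random.Random(seed)
    return [ind for group in by_site.values() for ind in rng.sample(group, n)]

sites = {f"ind{i}": ("A" if i < 50 else "B") for i in range(60)}  # 50 vs 10: uneven
balanced = equalize(list(sites), sites)
assert sum(sites[i] == "A" for i in balanced) == sum(sites[i] == "B" for i in balanced)
```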

Journal ArticleDOI
TL;DR: Assessment of the effects of legislative smoking bans on morbidity and mortality from exposure to secondhand smoke (SHS), and on smoking prevalence and tobacco consumption, provides more robust support for the previous conclusion that the introduction of a legislative smoking ban leads to improved health outcomes through reduction in SHS exposure.
Abstract: Smoking bans have been implemented in a variety of settings, as well as being part of policy in many jurisdictions to protect the public and employees from the harmful effects of secondhand smoke (SHS). They also offer the potential to influence social norms and the smoking behaviour of those populations they affect. Since the first version of this review in 2010, more countries have introduced national smoking legislation banning indoor smoking.

Journal ArticleDOI
TL;DR: In this paper, the authors compare the science capabilities of different eLISA mission designs, including four-link (two-arm) and six-link configurations with different arm lengths, low-frequency noise sensitivities and mission durations.
Abstract: We compare the science capabilities of different eLISA mission designs, including four-link (two-arm) and six-link (three-arm) configurations with different arm lengths, low-frequency noise sensitivities and mission durations. For each of these configurations we consider a few representative massive black hole formation scenarios. These scenarios are chosen to explore two physical mechanisms that greatly affect eLISA rates, namely (i) black hole seeding, and (ii) the delays between the merger of two galaxies and the merger of the black holes hosted by those galaxies. We assess the eLISA parameter estimation accuracy using a Fisher matrix analysis with spin-precessing, inspiral-only waveforms. We quantify the information present in the merger and ringdown by rescaling the inspiral-only Fisher matrix estimates using the signal-to-noise ratio from nonprecessing inspiral-merger-ringdown phenomenological waveforms, and from a reduced set of precessing numerical relativity/post-Newtonian hybrid waveforms. We find that all of the eLISA configurations considered in our study should detect some massive black hole binaries. However, configurations with six links and better low-frequency noise will provide much more information on the origin of black holes at high redshifts and on their accretion history, and they may allow the identification of electromagnetic counterparts to massive black hole mergers.
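The Fisher matrix analysis referred to here follows the standard gravitational-wave parameter-estimation form: the matrix is built from waveform derivatives under the noise-weighted inner product, and its inverse approximates the parameter covariance in the high signal-to-noise limit:

```latex
% Standard Fisher-matrix forecast in GW parameter estimation: Gamma_{ij} is
% built from derivatives of the waveform h(theta) under the noise-weighted
% inner product (S_n: detector noise power spectral density), and the parameter
% covariance is approximated by its inverse at high signal-to-noise ratio.
\Gamma_{ij} = \left( \frac{\partial h}{\partial \theta^i} \,\middle|\, \frac{\partial h}{\partial \theta^j} \right),
\qquad
(a \mid b) = 4\,\mathrm{Re} \int_0^\infty \frac{\tilde{a}(f)\,\tilde{b}^{*}(f)}{S_n(f)}\,\mathrm{d}f,
\qquad
\mathrm{Cov}(\theta^i, \theta^j) \approx \left(\Gamma^{-1}\right)^{ij}
```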

Journal ArticleDOI
TL;DR: In this article, the authors explore the relationships between student engagement, co-creation and student-staff partnership before providing a typology of the roles students can assume in working collaboratively with staff.
Abstract: Against a backdrop of rising interest in students becoming partners in learning and teaching in higher education, this paper begins by exploring the relationships between student engagement, co-creation and student–staff partnership before providing a typology of the roles students can assume in working collaboratively with staff. Acknowledging that co-creating learning and teaching is not straightforward, a set of examples from higher education institutions in Europe and North America illustrates some important challenges that can arise during co-creation. These examples also provide the basis for suggestions regarding how such challenges might be resolved or re-envisaged as opportunities for more meaningful collaboration. The challenges are presented under three headings: resistance to co-creation; navigating institutional structures, practices and norms; and establishing an inclusive co-creation approach. The paper concludes by highlighting the importance of transparency within co-creation approaches and of changing mindsets about the potential opportunities and institutional benefits of staff and students co-creating learning and teaching.

Journal ArticleDOI
Roel Aaij, C. Abellán Beteta, Bernardo Adeva, Marco Adinolfi, +761 more (64 institutions)
TL;DR: An angular analysis of the B0 → K*0(→ K+π−)μ+μ− decay is presented in this paper, where the angular observables and their correlations are reported in bins of q2, the invariant mass squared of the dimuon system.
Abstract: An angular analysis of the B0 → K*0(→ K+π−)μ+μ− decay is presented. The dataset corresponds to an integrated luminosity of 3.0 fb−1 of pp collision data collected at the LHCb experiment. The complete angular information from the decay is used to determine CP-averaged observables and CP asymmetries, taking account of possible contamination from decays with the K+π− system in an S-wave configuration. The angular observables and their correlations are reported in bins of q2, the invariant mass squared of the dimuon system. The observables are determined both from an unbinned maximum likelihood fit and by using the principal moments of the angular distribution. In addition, by fitting for q2-dependent decay amplitudes in the region 1.1 < q2 < 6.0 GeV2/c4, the zero-crossing points of several angular observables are computed. A global fit is performed to the complete set of CP-averaged observables obtained from the maximum likelihood fit. This fit indicates differences with predictions based on the Standard Model at the level of 3.4 standard deviations. These differences could be explained by contributions from physics beyond the Standard Model, or by an unexpectedly large hadronic effect that is not accounted for in the Standard Model predictions.

Journal ArticleDOI
01 Mar 2016 - eLife
TL;DR: Using large-scale online assessment of psychiatric symptoms and neurocognitive performance in two independent general-population samples, it was found that deficits in goal-directed control were most strongly associated with a symptom dimension comprising compulsive behavior and intrusive thought.
Abstract: Prominent theories suggest that compulsive behaviors, characteristic of obsessive-compulsive disorder and addiction, are driven by shared deficits in goal-directed control, which confers vulnerability for developing rigid habits. However, recent studies have shown that deficient goal-directed control accompanies several disorders, including those without an obvious compulsive element. Reasoning that this lack of clinical specificity might reflect broader issues with psychiatric diagnostic categories, we investigated whether a dimensional approach would better delineate the clinical manifestations of goal-directed deficits. Using large-scale online assessment of psychiatric symptoms and neurocognitive performance in two independent general-population samples, we found that deficits in goal-directed control were most strongly associated with a symptom dimension comprising compulsive behavior and intrusive thought. This association was highly specific when compared to other non-compulsive aspects of psychopathology. These data showcase a powerful new methodology and highlight the potential of a dimensional, biologically-grounded approach to psychiatry research.

Journal ArticleDOI
TL;DR: This paper considers the question ‘what makes Big Data, Big Data?’, applying Kitchin’s taxonomy of seven Big Data traits to 26 datasets drawn from seven domains, each of which is considered in the literature to constitute Big Data.
Abstract: Big Data has been variously defined in the literature. In the main, definitions suggest that Big Data possess a suite of key traits: volume, velocity and variety (the 3Vs), but also exhaustivity, resolution, indexicality, relationality, extensionality and scalability. However, these definitions lack ontological clarity, with the term acting as an amorphous, catch-all label for a wide selection of data. In this paper, we consider the question ‘what makes Big Data, Big Data?’, applying Kitchin’s taxonomy of seven Big Data traits to 26 datasets drawn from seven domains, each of which is considered in the literature to constitute Big Data. The results demonstrate that only a handful of datasets possess all seven traits, and some lack volume and/or variety. Instead, there are multiple forms of Big Data. Our analysis reveals that the key definitional boundary markers are the traits of velocity and exhaustivity. We contend that Big Data as an analytical category needs to be unpacked, with the genus of Big Data further delineated and its various species identified. It is only through such ontological work that we will gain conceptual clarity about what constitutes Big Data, formulate how best to make sense of it, and identify how it might be best used to make sense of the world.
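The paper's method amounts to a boolean audit of each dataset against the seven traits; a toy sketch (the datasets and trait assignments below are hypothetical, not the paper's 26):

```python
# Toy version of the paper's audit: score datasets against Kitchin's seven Big
# Data traits and report which possess all seven. The dataset rows are
# hypothetical illustrations, not the paper's actual 26 datasets.
TRAITS = ["volume", "velocity", "variety", "exhaustivity",
          "resolution/indexicality", "relationality", "extensionality/scalability"]

datasets = {
    "national census": {"volume", "exhaustivity", "resolution/indexicality",
                        "relationality", "extensionality/scalability"},
    "smart-card transit log": set(TRAITS),
}

for name, traits in datasets.items():
    missing = [t for t in TRAITS if t not in traits]
    print(name, "-> all seven traits" if not missing else f"missing: {missing}")
```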

Journal ArticleDOI
TL;DR: In this largest meta-analysis of hypertensive patients, the nocturnal BP fall provided substantial prognostic information, independent of 24-hour SBP levels; heterogeneity was low for the systolic night-to-day ratio and reverse/reduced dipping and moderate for extreme dippers.
Abstract: The prognostic importance of the nocturnal systolic blood pressure (SBP) fall, adjusted for average 24-hour SBP levels, is unclear. The Ambulatory Blood Pressure Collaboration in Patients With Hypertension (ABC-H) examined this issue in a meta-analysis of 17 312 hypertensives from 3 continents. Risks were computed for the systolic night-to-day ratio and for different dipping patterns (extreme, reduced, and reverse dippers) relative to normal dippers. ABC-H investigators provided multivariate adjusted hazard ratios (HRs), with and without adjustment for 24-hour SBP, for total cardiovascular events (CVEs), coronary events, strokes, cardiovascular mortality, and total mortality. Average 24-hour SBP varied from 131 to 140 mm Hg and systolic night-to-day ratio from 0.88 to 0.93. There were 1769 total CVEs, 916 coronary events, 698 strokes, 450 cardiovascular deaths, and 903 total deaths. After adjustment for 24-hour SBP, the systolic night-to-day ratio predicted all outcomes: from a 1-SD increase, summary HRs were 1.12 to 1.23. Reverse dipping also predicted all end points: HRs were 1.57 to 1.89. Reduced dippers, relative to normal dippers, had a significant 27% higher risk for total CVEs. Risks for extreme dippers were significantly influenced by antihypertensive treatment (P<0.001): untreated patients had increased risk of total CVEs (HR, 1.92), whereas treated patients had borderline lower risk (HR, 0.72) than normal dippers. For CVEs, heterogeneity was low for systolic night-to-day ratio and reverse/reduced dipping and moderate for extreme dippers. Quality of included studies was moderate to high, and publication bias was undetectable. In conclusion, in this largest meta-analysis of hypertensive patients, the nocturnal BP fall provided substantial prognostic information, independent of 24-hour SBP levels.
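The dipping categories compared here derive from the systolic night-to-day ratio; a sketch using the conventional cutoffs (extreme <0.8, normal 0.8-0.9, reduced 0.9-1.0, reverse >=1.0), which the abstract assumes rather than restates, so treat the thresholds as an assumption:

```python
# Sketch of dipping-pattern classification from the systolic night-to-day ratio.
# Cutoffs follow the conventional definitions (extreme < 0.8, normal dipper
# 0.8-0.9, reduced 0.9-1.0, reverse >= 1.0); the abstract uses these categories
# without restating the thresholds, so they are an assumption here.

def dipping_pattern(night_sbp: float, day_sbp: float) -> str:
    ratio = night_sbp / day_sbp
    if ratio < 0.8:
        return "extreme dipper"
    if ratio < 0.9:
        return "normal dipper"
    if ratio < 1.0:
        return "reduced dipper"
    return "reverse dipper"

print(dipping_pattern(night_sbp=118.0, day_sbp=135.0))  # ratio ~0.87 -> normal dipper
```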

Journal ArticleDOI
TL;DR: The literature review that follows expands this paradigm and introduces emerging areas that should be prioritised for continued research, supporting a companion position statement paper that proposes recommendations for using this summary of information, and needs for specific future research.
Abstract: Lateral ankle sprains (LASs) are the most prevalent musculoskeletal injury in physically active populations. They also have a high prevalence in the general population and pose a substantial healthcare burden. The recurrence rates of LASs are high, leading to a large percentage of patients with LAS developing chronic ankle instability. This chronicity is associated with decreased physical activity levels and quality of life and with increasing rates of post-traumatic ankle osteoarthritis, all of which generate financial costs that are larger than many have realised. The literature review that follows expands this paradigm and introduces emerging areas that should be prioritised for continued research, supporting a companion position statement paper that proposes recommendations for using this summary of information and identifies needs for specific future research.

Journal ArticleDOI
TL;DR: The Fermi Gamma-ray Burst Monitor (GBM) is an excellent partner in the search for electromagnetic counterparts to gravitational-wave (GW) events as mentioned in this paper, with an instantaneous view of 70% of the sky.
Abstract: With an instantaneous view of 70% of the sky, the Fermi Gamma-ray Burst Monitor (GBM) is an excellent partner in the search for electromagnetic counterparts to gravitational-wave (GW) events. GBM observations at the time of the Laser Interferometer Gravitational-wave Observatory (LIGO) event GW150914 reveal the presence of a weak transient above 50 keV, 0.4 s after the GW event, with a false-alarm probability of 0.0022 (2.9σ). This weak transient lasting 1 s was not detected by any other instrument and does not appear to be connected with other previously known astrophysical, solar, terrestrial, or magnetospheric activity. Its localization is ill-constrained but consistent with the direction of GW150914. The duration and spectrum of the transient event are consistent with a weak short gamma-ray burst (GRB) arriving at a large angle to the direction in which Fermi was pointing, where the GBM detector response is not optimal. If the GBM transient is associated with GW150914, then this electromagnetic signal from a stellar mass black hole binary merger is unexpected. We calculate a luminosity in hard X-ray emission between 1 keV and 10 MeV of $1.8^{+1.5}_{-1.0} \times 10^{49}$ erg/s. Future joint observations of GW events by LIGO/Virgo and Fermi GBM could reveal whether the weak transient reported here is a plausible counterpart to GW150914 or a chance coincidence, and will further probe the connection between compact binary mergers and short GRBs.
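The quoted 2.9σ is the Gaussian-equivalent of the 0.0022 false-alarm probability; assuming a one-sided tail, the conversion is a one-liner:

```python
# Gaussian-equivalent significance of the reported false-alarm probability,
# assuming a one-sided tail; this gives ~2.85 sigma, consistent with the quoted
# 2.9 sigma up to rounding of the probability.
from scipy.stats import norm

print(f"{norm.isf(0.0022):.2f} sigma")  # ~2.85
```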

Journal ArticleDOI
TL;DR: In this paper, the authors reviewed international experience with curtailment of wind and solar energy on bulk power systems in recent years, with a focus on eleven countries in Europe, North America, and Asia.
Abstract: Greater penetrations of variable renewable generation on some electric grids have resulted in increased levels of curtailment in recent years. Studies of renewable energy grid integration have found that curtailment levels may grow as the penetration of wind and solar energy generation increases. This paper reviews international experience with curtailment of wind and solar energy on bulk power systems in recent years, with a focus on eleven countries in Europe, North America, and Asia. It examines levels of curtailment, the causes of curtailment, curtailment methods and use of market-based dispatch, as well as operational, institutional, and other changes that are being made to reduce renewable energy curtailment.

Journal ArticleDOI
27 Jan 2016 - PLOS ONE
TL;DR: Investigation of 3 aspects of minimization, as defined by the Childhood Trauma Questionnaire's MD scale, suggested that a minimizing response bias, as detected by the MD subscale, has a small but significant moderating effect on the CTQ's discriminative validity.
Abstract: Childhood maltreatment has diverse, lifelong impact on morbidity and mortality. The Childhood Trauma Questionnaire (CTQ) is one of the most commonly used scales to assess and quantify these experiences and their impact. Curiously, despite very widespread use of the CTQ, scores on its Minimization-Denial (MD) subscale, originally designed to assess a positive response bias, are rarely reported. Hence, little is known about this measure. If response biases are either common or consequential, current practices of ignoring the MD scale deserve revision. Therewith, we designed a study to investigate 3 aspects of minimization, as defined by the CTQ's MD scale: 1) its prevalence; 2) its latent structure; and finally 3) whether minimization moderates the CTQ's discriminative validity in terms of distinguishing between psychiatric patients and community volunteers. Archival, item-level CTQ data from 24 multinational samples were combined for a total of 19,652 participants. Analyses indicated: 1) minimization is common; 2) minimization functions as a continuous construct; and 3) high MD scores attenuate the ability of the CTQ to distinguish between psychiatric patients and community volunteers. Overall, results suggest that a minimizing response bias, as detected by the MD subscale, has a small but significant moderating effect on the CTQ's discriminative validity. Results also may suggest that some prior analyses of maltreatment rates or the effects of early maltreatment that have used the CTQ may have underestimated its incidence and impact. We caution researchers and clinicians about the widespread practice of using the CTQ without the MD scale, or collecting MD data but failing to assess and control for its effects on outcomes or dependent variables.

Journal ArticleDOI
TL;DR: An overview of the potential, recent advances, and challenges of optical security and encryption using free space optics is presented, highlighting the need for more specialized hardware and image processing algorithms.
Abstract: Information security and authentication are important challenges facing society. Recent attacks by hackers on the databases of large commercial and financial companies have demonstrated that more research and development of advanced approaches are necessary to deny unauthorized access to critical data. Free space optical technology has been investigated by many researchers in information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing that can be combined in many ways to make information encryption more secure and more difficult to attack. This roadmap article presents an overview of the potential, recent advances, and challenges of optical security and encryption using free space optics. The roadmap on optical security is comprised of six categories that together include 16 short sections written by authors who have made relevant contributions in this field. The first category of this roadmap describes novel encryption approaches, including secure optical sensing which summarizes double random phase encryption applications and flaws [Yamaguchi], the digital holographic encryption in free space optical technique which describes encryption using multidimensional digital holography [Nomura], simultaneous encryption of multiple signals [Perez-Cabre], asymmetric methods based on information truncation [Nishchal], and dynamic encryption of video sequences [Torroba]. Asymmetric and one-way cryptosystems are analyzed by Peng. The second category is on compression for encryption. In their respective contributions, Alfalou and Stern propose similar goals involving compressed data and compressive sensing encryption. The very important area of cryptanalysis is the topic of the third category with two sections: Sheridan reviews phase retrieval algorithms to perform different attacks, whereas Situ discusses nonlinear optical encryption techniques and the development of a rigorous optical information security theory. The fourth category with two contributions reports how encryption could be implemented at the nano- or micro-scale. Naruse discusses the use of nanostructures in security applications and Carnicer proposes encoding information in a tightly focused beam. In the fifth category, encryption based on ghost imaging using single-pixel detectors is also considered. In particular, the authors [Chen, Tajahuerce] emphasize the need for more specialized hardware and image processing algorithms. Finally, in the sixth category, Mosk and Javidi analyze in their corresponding papers how quantum imaging can benefit optical encryption systems. Sources that use few photons make encryption systems much more difficult to attack, providing a secure method for authentication.
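Double random phase encryption, whose applications and flaws the first roadmap section summarizes, has a compact numerical form: a random phase mask in the input plane and a second in the Fourier plane of a 4f system. A minimal NumPy sketch (a simulation of the classic scheme, not any contributor's implementation):

```python
# Minimal sketch of classic double random phase encoding (DRPE), the technique
# the roadmap's first section reviews: one random phase mask in the input plane,
# a second in the Fourier plane of a 4f system, simulated here with FFTs.
import numpy as np

rng = np.random.default_rng(7)
img = rng.random((64, 64))  # stand-in for the plaintext image

mask1 = np.exp(2j * np.pi * rng.random(img.shape))  # input-plane phase mask (key 1)
mask2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-plane phase mask (key 2)

cipher = np.fft.ifft2(np.fft.fft2(img * mask1) * mask2)  # white-noise-like field

# Decryption with the conjugate keys inverts the process (up to numerical error)
recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(mask2)) * np.conj(mask1)
assert np.allclose(recovered.real, img, atol=1e-10)
```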

Journal ArticleDOI
TL;DR: It is shown that grain boundaries notably accelerate the electron-hole recombination in MAPbI3, and a route to increased photon-to-electron conversion efficiencies through rational GB passivation is suggested.
Abstract: Advancing organohalide perovskite solar cells requires understanding of carrier dynamics. Electron–hole recombination is a particularly important process because it constitutes a major pathway of energy and current losses. Grain boundaries (GBs) are common in methylammonium lead iodine CH3NH3PbI3 (MAPbI3) perovskite polycrystalline films. First-principles calculations have suggested that GBs have little effect on the recombination; however, experiments defy this prediction. Using nonadiabatic (NA) molecular dynamics combined with time-domain density functional theory, we show that GBs notably accelerate the electron–hole recombination in MAPbI3. First, GBs enhance the electron–phonon NA coupling by localizing and contributing to the electron and hole wave functions and by creating additional phonon modes that couple to the electronic degrees of freedom. Second, GBs decrease the MAPbI3 bandgap, reducing the number of vibrational quanta needed to accommodate the electronic energy loss. Third, the phonon-ind...

Journal ArticleDOI
TL;DR: The Eukaryotic Linear Motif resource is a manually curated database of short linear motifs (SLiMs) that contains more than 240 different motif classes with over 2700 experimentally validated instances.
Abstract: The Eukaryotic Linear Motif (ELM) resource (http://elm.eu.org) is a manually curated database of short linear motifs (SLiMs). In this update, we present the latest additions to this resource, along with more improvements to the web interface. ELM 2016 contains more than 240 different motif classes with over 2700 experimentally validated instances, manually curated from more than 2400 scientific publications. In addition, more data have been made available as individually searchable pages and are downloadable in various formats.
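ELM motif classes are defined as regular expressions over protein sequence, so scanning for a class is a short operation; a sketch with an illustrative ELM-style pattern (the sequence and regex below are made up; the curated class definitions live at elm.eu.org):

```python
# Sketch of how ELM-style short linear motifs are matched: each motif class is
# a regular expression over the protein sequence. The pattern and sequence are
# purely illustrative; consult elm.eu.org for the curated class definitions.
import re

motif = re.compile(r"R.[RK]R")            # illustrative ELM-style regex
sequence = "MAAQRSKRSTDEVLKRPRRAS"        # made-up protein sequence

for m in motif.finditer(sequence):
    print(f"match '{m.group()}' at positions {m.start() + 1}-{m.end()}")
```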

Journal ArticleDOI
B. P. Abbott, Richard J. Abbott, T. D. Abbott, M. R. Abernathy, +1619 more (220 institutions)
TL;DR: In this article, the authors describe the low-latency analysis of the LIGO data and present the sky localization of the first observed compact binary merger, along with a summary of the follow-up observations reported by 25 teams of observers.
Abstract: A gravitational-wave (GW) transient was identified in data recorded by the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) detectors on 2015 September 14. The event, initially designated G184098 and later given the name GW150914, is described in detail elsewhere. By prior arrangement, preliminary estimates of the time, significance, and sky location of the event were shared with 63 teams of observers covering radio, optical, near-infrared, X-ray, and gamma-ray wavelengths with ground- and space-based facilities. In this Letter we describe the low-latency analysis of the GW data and present the sky localization of the first observed compact binary merger. We summarize the follow-up observations reported by 25 teams via private Gamma-ray Coordinates Network circulars, giving an overview of the participating facilities, the GW sky localization coverage, the timeline, and depth of the observations. As this event turned out to be a binary black hole merger, there is little expectation of a detectable electromagnetic (EM) signature. Nevertheless, this first broadband campaign to search for a counterpart of an Advanced LIGO source represents a milestone and highlights the broad capabilities of the transient astronomy community and the observing strategies that have been developed to pursue neutron star binary merger events. Detailed investigations of the EM data and results of the EM follow-up campaign are being disseminated in papers by the individual teams.

Journal ArticleDOI
TL;DR: Engineered MSCs are able to selectively transfer miR-let7c to damaged kidney cells; this effective antifibrotic function will pave the way for the use of MSCs for therapeutic delivery of miRNA targeted at kidney disease.