
Showing papers by "York University" published in 2016


Journal Article · DOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4, +2519 more · Institutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
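A note for readers skimming this entry: the flux-versus-number distinction above is commonly operationalized with the LC3 turnover assay, which the guidelines discuss at length. Schematically, comparing the lipidated marker LC3-II with and without a lysosomal inhibitor such as bafilomycin A1 gives

```latex
J_{\mathrm{autophagic\ flux}} \;\propto\; [\mathrm{LC3\text{-}II}]_{+\mathrm{BafA1}} \;-\; [\mathrm{LC3\text{-}II}]_{-\mathrm{BafA1}},
```

where a positive difference indicates that cargo was actually being delivered to and degraded in lysosomes; a count of autophagosomes alone cannot distinguish this from a trafficking block. (This schematic is a standard reading of the guidelines, not a formula quoted from them.)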

5,187 citations


Journal Article · DOI
TL;DR: A clinically oriented review and evidence-based recommendations regarding physical activity and exercise in people with type 1 diabetes, type 2 diabetes, gestational diabetes mellitus, and prediabetes are provided.
Abstract: The adoption and maintenance of physical activity are critical foci for blood glucose management and overall health in individuals with diabetes and prediabetes. Recommendations and precautions vary depending on individual characteristics and health status. In this Position Statement, we provide a clinically oriented review and evidence-based recommendations regarding physical activity and exercise in people with type 1 diabetes, type 2 diabetes, gestational diabetes mellitus, and prediabetes. Physical activity includes all movement that increases energy use, whereas exercise is planned, structured physical activity. Exercise improves blood glucose control in type 2 diabetes, reduces cardiovascular risk factors, contributes to weight loss, and improves well-being (1,2). Regular exercise may prevent or delay type 2 diabetes development (3). Regular exercise also has considerable health benefits for people with type 1 diabetes (e.g., improved cardiovascular fitness, muscle strength, insulin sensitivity, etc.) (4). The challenges related to blood glucose management vary with diabetes type, activity type, and presence of diabetes-related complications (5,6). Physical activity and exercise recommendations, therefore, should be tailored to meet the specific needs of each individual. Physical activity recommendations and precautions may vary by diabetes type. The primary types of diabetes are type 1 and type 2. Type 1 diabetes (5%–10% of cases) results from cellular-mediated autoimmune destruction of the pancreatic β-cells, producing insulin deficiency (7). Although it can occur at any age, β-cell destruction rates vary, typically occurring more rapidly in youth than in adults. Type 2 diabetes (90%–95% of cases) results from a progressive loss of insulin secretion, usually also with insulin resistance. Gestational diabetes mellitus occurs during pregnancy, with screening typically occurring at 24–28 weeks of gestation in pregnant women not previously known to have diabetes. Prediabetes is diagnosed when blood glucose levels are above the normal range but not high enough to be classified as …

1,532 citations


Journal Article · DOI
Sergey Alekhin, Wolfgang Altmannshofer1, Takehiko Asaka2, Brian Batell3, Fedor Bezrukov4, Kyrylo Bondarenko5, Alexey Boyarsky5, Ki-Young Choi6, Cristóbal Corral7, Nathaniel Craig8, David Curtin9, Sacha Davidson10, Sacha Davidson11, André de Gouvêa12, Stefano Dell'Oro, Patrick deNiverville13, P. S. Bhupal Dev14, Herbi K. Dreiner15, Marco Drewes16, Shintaro Eijima17, Rouven Essig18, Anthony Fradette13, Björn Garbrecht16, Belen Gavela19, Gian F. Giudice3, Mark D. Goodsell20, Mark D. Goodsell21, Dmitry Gorbunov22, Stefania Gori1, Christophe Grojean23, Alberto Guffanti24, Thomas Hambye25, Steen Honoré Hansen24, Juan Carlos Helo7, Juan Carlos Helo26, Pilar Hernández27, Alejandro Ibarra16, Artem Ivashko5, Artem Ivashko28, Eder Izaguirre1, Joerg Jaeckel29, Yu Seon Jeong30, Felix Kahlhoefer, Yonatan Kahn31, Andrey Katz32, Andrey Katz33, Andrey Katz3, Choong Sun Kim30, Sergey Kovalenko7, Gordan Krnjaic1, Valery E. Lyubovitskij34, Valery E. Lyubovitskij35, Valery E. Lyubovitskij36, Simone Marcocci, Matthew McCullough3, David McKeen37, Guenakh Mitselmakher38, Sven Moch39, Rabindra N. Mohapatra9, David E. Morrissey40, Maksym Ovchynnikov28, Emmanuel A. Paschos, Apostolos Pilaftsis14, Maxim Pospelov13, Maxim Pospelov1, Mary Hall Reno41, Andreas Ringwald, Adam Ritz13, Leszek Roszkowski, Valery Rubakov, Oleg Ruchayskiy17, Oleg Ruchayskiy24, Ingo Schienbein42, Daniel Schmeier15, Kai Schmidt-Hoberg, Pedro Schwaller3, Goran Senjanovic43, Osamu Seto44, Mikhail Shaposhnikov17, Lesya Shchutska38, J. Shelton45, Robert Shrock18, Brian Shuve1, Michael Spannowsky46, Andrew Spray47, Florian Staub3, Daniel Stolarski3, Matt Strassler33, Vladimir Tello, Francesco Tramontano48, Anurag Tripathi, Sean Tulin49, Francesco Vissani, Martin Wolfgang Winkler15, Kathryn M. Zurek50, Kathryn M. Zurek51 
Perimeter Institute for Theoretical Physics1, Niigata University2, CERN3, University of Connecticut4, Leiden University5, Korea Astronomy and Space Science Institute6, Federico Santa María Technical University7, University of California, Santa Barbara8, University of Maryland, College Park9, Claude Bernard University Lyon 110, University of Lyon11, Northwestern University12, University of Victoria13, University of Manchester14, University of Bonn15, Technische Universität München16, École Polytechnique Fédérale de Lausanne17, Stony Brook University18, Autonomous University of Madrid19, University of Paris20, Centre national de la recherche scientifique21, Moscow Institute of Physics and Technology22, Autonomous University of Barcelona23, University of Copenhagen24, Université libre de Bruxelles25, University of La Serena26, University of Valencia27, Taras Shevchenko National University of Kyiv28, Heidelberg University29, Yonsei University30, Princeton University31, University of Geneva32, Harvard University33, University of Tübingen34, Tomsk State University35, Tomsk Polytechnic University36, University of Washington37, University of Florida38, University of Hamburg39, TRIUMF40, University of Iowa41, University of Grenoble42, International Centre for Theoretical Physics43, Hokkai Gakuen University44, University of Illinois at Urbana–Champaign45, Durham University46, University of Melbourne47, University of Naples Federico II48, York University49, University of California, Berkeley50, Lawrence Berkeley National Laboratory51
TL;DR: It is demonstrated that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.
Abstract: This paper describes the physics case for a new fixed target facility at CERN SPS. The SHiP (search for hidden particles) experiment is intended to hunt for new physics in the largely unexplored domain of very weakly interacting particles with masses below the Fermi scale, inaccessible to the LHC experiments, and to study tau neutrino physics. The same proton beam setup can be used later to look for decays of tau-leptons with lepton flavour number non-conservation, $\tau \to 3\mu $ and to search for weakly-interacting sub-GeV dark matter candidates. We discuss the evidence for physics beyond the standard model and describe interactions between new particles and four different portals—scalars, vectors, fermions or axion-like particles. We discuss motivations for different models, manifesting themselves via these interactions, and how they can be probed with the SHiP experiment and present several case studies. The prospects to search for relatively light SUSY and composite particles at SHiP are also discussed. We demonstrate that the SHiP experiment has a unique potential to discover new physics and can directly probe a number of solutions of beyond the standard model puzzles, such as neutrino masses, baryon asymmetry of the Universe, dark matter, and inflation.
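For orientation, the four portals named above are conventionally written as the following effective operators (standard forms from the hidden-sector literature; the notation below is generic rather than quoted from the paper):

```latex
\begin{align*}
\text{vector portal:} &\quad \tfrac{\varepsilon}{2}\, F'_{\mu\nu} F^{\mu\nu} && \text{(kinetic mixing with a dark photon)}\\
\text{scalar portal:} &\quad (\mu S + \lambda S^{2})\, H^{\dagger} H && \text{(hidden scalar mixing with the Higgs)}\\
\text{neutrino portal:} &\quad y_{\alpha I}\, \bar{L}_{\alpha} \tilde{H} N_{I} && \text{(heavy neutral leptons)}\\
\text{axion portal:} &\quad \tfrac{a}{f_{a}}\, F_{\mu\nu} \tilde{F}^{\mu\nu} && \text{(axion-like particles)}
\end{align*}
```

Each operator couples a standard model current to a hidden-sector field, which is what makes the corresponding new particles weakly interacting and long-lived, and hence natural SHiP targets.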

842 citations


Journal Article · DOI
TL;DR: A comprehensive survey of molecular communication (MC) through a communication engineering lens is provided in this paper, which includes different components of the MC transmitter and receiver, as well as the propagation and transport mechanisms.
Abstract: With much advancement in the field of nanotechnology, bioengineering, and synthetic biology over the past decade, microscale and nanoscale devices are becoming a reality. Yet engineering a reliable communication system between tiny devices remains an open problem. At the same time, despite the prevalence of radio communication, there are still areas where traditional electromagnetic waves find it difficult or expensive to reach. Points of interest in industry, cities, and medical applications often lie in embedded and entrenched areas, accessible only by ventricles at scales too small for conventional radio waves and microwaves, or they are located in such a way that directional high frequency systems are ineffective. Inspired by nature, one solution to these problems is molecular communication (MC), where chemical signals are used to transfer information. Although biologists have studied MC for decades, it has only been researched for roughly 10 years through a communication engineering lens. A significant number of papers have been published to date, but owing to the need for interdisciplinary work, many of the results are preliminary. In this survey, the recent advancements in the field of MC engineering are highlighted. First, the biological, chemical, and physical processes used by an MC system are discussed. This includes different components of the MC transmitter and receiver, as well as the propagation and transport mechanisms. Then, a comprehensive survey of some of the recent works on MC through a communication engineering lens is provided. The survey ends with a technology readiness analysis of MC and future research directions.
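One of the propagation mechanisms such surveys cover, free diffusion, admits a closed-form channel impulse response; the sketch below evaluates it numerically. This is textbook diffusion physics used for illustration (the parameter values are arbitrary), not code from the survey.

```python
import numpy as np

def diffusion_impulse_response(t, d, D, Q=1.0):
    """Concentration at distance d (m), time t (s) after an impulsive release
    of Q molecules into free 3-D diffusion with coefficient D (m^2/s):
        c(d, t) = Q / (4*pi*D*t)^(3/2) * exp(-d^2 / (4*D*t))
    """
    t = np.asarray(t, dtype=float)
    return Q / (4.0 * np.pi * D * t) ** 1.5 * np.exp(-d ** 2 / (4.0 * D * t))

# Example: a small signaling molecule (D ~ 1e-10 m^2/s) sensed 10 um away.
t = np.linspace(1e-3, 2.0, 20000)
c = diffusion_impulse_response(t, d=10e-6, D=1e-10)
print("peak at t = %.3f s (theory d^2/(6D) = %.3f s)"
      % (t[np.argmax(c)], (10e-6) ** 2 / (6 * 1e-10)))
```

The long, heavy tail of this response is what makes intersymbol interference a central challenge in diffusion-based MC system design.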

762 citations


Journal Article · DOI
06 Sep 2016 · JAMA
TL;DR: Exposure to MRI during the first trimester of pregnancy compared with nonexposure was not associated with increased risk of harm to the fetus or in early childhood, and gadolinium MRI at any time during pregnancy was associated with an increased risk of a broad set of rheumatological, inflammatory, or infiltrative skin conditions and for stillbirth or neonatal death.
Abstract: Importance Fetal safety of magnetic resonance imaging (MRI) during the first trimester of pregnancy or with gadolinium enhancement at any time of pregnancy is unknown. Objective To evaluate the long-term safety after exposure to MRI in the first trimester of pregnancy or to gadolinium at any time during pregnancy. Design, Setting, and Participants Universal health care databases in the province of Ontario, Canada, were used to identify all births of more than 20 weeks, from 2003-2015. Exposures Magnetic resonance imaging exposure in the first trimester of pregnancy, or gadolinium MRI exposure at any time in pregnancy. Main Outcomes and Measures For first-trimester MRI exposure, the risk of stillbirth or neonatal death within 28 days of birth and any congenital anomaly, neoplasm, and hearing or vision loss was evaluated from birth to age 4 years. For gadolinium-enhanced MRI in pregnancy, connective tissue or skin disease resembling nephrogenic systemic fibrosis (NSF-like) and a broader set of rheumatological, inflammatory, or infiltrative skin conditions from birth were identified. Results Of 1 424 105 deliveries (48% girls; mean gestational age, 39 weeks), the overall rate of MRI was 3.97 per 1000 pregnancies. Comparing first-trimester MRI (n = 1737) to no MRI (n = 1 418 451), there were 19 stillbirths or deaths vs 9844 in the unexposed cohort (adjusted relative risk [RR], 1.68; 95% CI, 0.97 to 2.90) for an adjusted risk difference of 4.7 per 1000 person-years (95% CI, −1.6 to 11.0). The risk was also not significantly higher for congenital anomalies, neoplasm, or vision or hearing loss. Comparing gadolinium MRI (n = 397) with no MRI (n = 1 418 451), the hazard ratio for NSF-like outcomes was not statistically significant. The broader outcome of any rheumatological, inflammatory, or infiltrative skin condition occurred in 123 vs 384 180 births (adjusted HR, 1.36; 95% CI, 1.09 to 1.69) for an adjusted risk difference of 45.3 per 1000 person-years (95% CI, 11.3 to 86.8). Stillbirths and neonatal deaths occurred among 7 MRI-exposed vs 9844 unexposed pregnancies (adjusted RR, 3.70; 95% CI, 1.55 to 8.85) for an adjusted risk difference of 47.5 per 1000 pregnancies (95% CI, 9.7 to 138.2). Conclusions and Relevance Exposure to MRI during the first trimester of pregnancy compared with nonexposure was not associated with increased risk of harm to the fetus or in early childhood. Gadolinium MRI at any time during pregnancy was associated with an increased risk of a broad set of rheumatological, inflammatory, or infiltrative skin conditions and for stillbirth or neonatal death. The study may not have been able to detect rare adverse outcomes.
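For readers who want to reproduce the headline comparison: the snippet below recomputes the crude (unadjusted) relative risk of stillbirth or neonatal death from the counts quoted in the abstract, using a standard log-scale Wald interval. The paper's RR of 1.68 (95% CI, 0.97 to 2.90) is adjusted for confounders, so the crude value differs slightly.

```python
import math

# Counts from the abstract: first-trimester MRI vs. no MRI.
events_exposed, n_exposed = 19, 1737
events_unexposed, n_unexposed = 9844, 1418451

rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

# Wald 95% CI on the log scale for a risk ratio.
se = math.sqrt(1 / events_exposed - 1 / n_exposed
               + 1 / events_unexposed - 1 / n_unexposed)
lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
print(f"crude RR = {rr:.2f} (95% CI, {lo:.2f} to {hi:.2f})")
# -> crude RR = 1.58 (95% CI, 1.01 to 2.47), before adjustment.
```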

555 citations


Posted Content
TL;DR: In this article, the authors proposed spatiotemporal residual networks (ResNets) for action recognition in videos, which is a combination of two-stream convolutional networks and residual connections between appearance and motion pathways.
Abstract: Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping these with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.
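A minimal PyTorch sketch of the two ideas in this abstract: a residual connection injected from the motion stream into the appearance stream, and a temporal convolution initialized so that the block starts out as an identity (i.e., a pure residual connection in time). This is an illustrative reading of the abstract with invented names, not the authors' released architecture.

```python
import torch
import torch.nn as nn

class TemporalResidualBlock(nn.Module):
    """y = x + conv_t(x): a learnable convolution over adjacent feature maps
    in time. Zero-initializing the filter makes the block an exact identity
    at the start of training, so temporal modeling is learned gradually."""
    def __init__(self, channels, kernel_t=3):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels,
                              kernel_size=(kernel_t, 1, 1),
                              padding=(kernel_t // 2, 0, 0), bias=False)
        nn.init.zeros_(self.conv.weight)

    def forward(self, x):           # x: (N, C, T, H, W)
        return x + self.conv(x)

class CrossStreamResidual(nn.Module):
    """Residual injection from the motion pathway into the appearance
    pathway, allowing the two streams to interact."""
    def forward(self, appearance, motion):
        return appearance + motion

# Smoke test: batch of 2 clips, 64 channels, 8 frames, 14x14 feature maps.
x = torch.randn(2, 64, 8, 14, 14)
assert torch.equal(TemporalResidualBlock(64)(x), x)  # identity at init
```

Stacking such blocks deepens the temporal receptive field with network depth, matching the abstract's description of a slowly growing spatiotemporal receptive field.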

504 citations


Journal Article · DOI
TL;DR: The results dramatically improve the constraints on SIDM models and may allow the masses of both DM and dark mediator particles to be measured even if the dark sector is completely hidden from the standard model, which is illustrated for the dark photon model.
Abstract: Astrophysical observations spanning dwarf galaxies to galaxy clusters indicate that dark matter (DM) halos are less dense in their central regions compared to expectations from collisionless DM N-body simulations. Using detailed fits to DM halos of galaxies and clusters, we show that self-interacting DM (SIDM) may provide a consistent solution to the DM deficit problem across all scales, even though individual systems exhibit a wide diversity in halo properties. Since the characteristic velocity of DM particles varies across these systems, we are able to measure the self-interaction cross section as a function of kinetic energy and thereby deduce the SIDM particle physics model parameters. Our results prefer a mildly velocity-dependent cross section, from σ/m≈2 cm^{2}/g on galaxy scales to σ/m≈0.1 cm^{2}/g on cluster scales, consistent with the upper limits from merging clusters. Our results dramatically improve the constraints on SIDM models and may allow the masses of both DM and dark mediator particles to be measured even if the dark sector is completely hidden from the standard model, which we illustrate for the dark photon model.
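A quick back-of-the-envelope check of "mildly velocity-dependent": taking illustrative characteristic velocities of roughly 100 km/s for galaxies and 1500 km/s for clusters (assumed values for this sketch; the paper fits each system individually), the quoted cross sections imply an effective power law

```latex
\frac{\sigma}{m} \propto v^{-n}, \qquad
n \;=\; \frac{\ln(2/0.1)}{\ln(1500/100)} \;\approx\; \frac{3.0}{2.7} \;\approx\; 1.1,
```

consistent with the paper's description of a mildly velocity-dependent cross section.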

482 citations


Proceedings Article
05 Dec 2016
TL;DR: The novel spatiotemporal ResNet is introduced and evaluated using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.
Abstract: Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping these with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.

453 citations


Journal Article · DOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4, +2828 more · Institutions (191)
TL;DR: In this article, the performance of the ATLAS muon identification and reconstruction using the first LHC dataset recorded at s√ = 13 TeV in 2015 was evaluated using the Monte Carlo simulations.
Abstract: This article documents the performance of the ATLAS muon identification and reconstruction using the first LHC dataset recorded at √s = 13 TeV in 2015. Using a large sample of J/ψ→μμ and Z→μμ decays from 3.2 fb−1 of pp collision data, measurements of the reconstruction efficiency, as well as of the momentum scale and resolution, are presented and compared to Monte Carlo simulations. The reconstruction efficiency is measured to be close to 99% over most of the covered phase space (|η| < 2.5 and 5 < pT < 100 GeV). For |η| < 2.2, the pT resolution for muons from Z→μμ decays is 2.9% while the precision of the momentum scale for low-pT muons from J/ψ→μμ decays is about 0.2%.

440 citations


Journal Article · DOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4, +2812 more · Institutions (207)
TL;DR: In this paper, an independent b-tagging algorithm based on the reconstruction of muons inside jets, as well as the b-tagging algorithm used in the online trigger, is also presented.
Abstract: The identification of jets containing b hadrons is important for the physics programme of the ATLAS experiment at the Large Hadron Collider. Several algorithms to identify jets containing b hadrons are described, ranging from those based on the reconstruction of an inclusive secondary vertex or the presence of tracks with large impact parameters to combined tagging algorithms making use of multi-variate discriminants. An independent b-tagging algorithm based on the reconstruction of muons inside jets as well as the b-tagging algorithm used in the online trigger are also presented. The b-jet tagging efficiency, the c-jet tagging efficiency and the mistag rate for light flavour jets in data have been measured with a number of complementary methods. The calibration results are presented as scale factors defined as the ratio of the efficiency (or mistag rate) in data to that in simulation. In the case of b jets, where more than one calibration method exists, the results from the various analyses have been combined taking into account the statistical correlation as well as the correlation of the sources of systematic uncertainty.

362 citations


Journal Article · DOI
26 Apr 2016
TL;DR: Holobionts and hologenomes are incontrovertible, multipartite entities that result from ecological, evolutionary, and genetic processes at various levels; they constitute a wider vocabulary and framework for host biology in light of the microbiome.
Abstract: Given the complexity of host-microbiota symbioses, scientists and philosophers are asking questions at new biological levels of hierarchical organization: what is a holobiont and hologenome? When should this vocabulary be applied? Are these concepts a null hypothesis for host-microbe systems or limited to a certain spectrum of symbiotic interactions such as host-microbial coevolution? Critical discourse is necessary in this nascent area, but productive discourse requires that skeptics and proponents use the same lexicon. For instance, critiquing the hologenome concept is not synonymous with critiquing coevolution, and arguing that an entity is not a primary unit of selection dismisses the fact that the hologenome concept has always embraced multilevel selection. Holobionts and hologenomes are incontrovertible, multipartite entities that result from ecological, evolutionary, and genetic processes at various levels. They are not restricted to one special process but constitute a wider vocabulary and framework for host biology in light of the microbiome.

Journal Article · DOI
TL;DR: In this paper, a rolling window analysis is used to construct out-of-sample one-step-ahead forecasts of dynamic conditional correlations and optimal hedge ratios for emerging market stock prices.
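The TL;DR's rolling-window scheme can be illustrated compactly: re-estimate the hedge ratio on a trailing window using only past data, then apply it one step ahead. The sketch below uses a simple rolling covariance hedge ratio on synthetic returns; the paper itself uses dynamic conditional correlation (DCC-type GARCH) models, which this sketch does not implement.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000
futures = pd.Series(rng.normal(0, 0.010, n))                  # hedging instrument
stock = 0.8 * futures + pd.Series(rng.normal(0, 0.005, n))    # asset to hedge

window = 250  # trailing estimation window (~ one trading year)
# One-step-ahead hedge ratio beta_t = Cov(stock, futures) / Var(futures),
# shifted by one period so each beta uses information up to t-1 only.
beta = (stock.rolling(window).cov(futures)
        / futures.rolling(window).var()).shift(1)

hedged = stock - beta * futures   # out-of-sample hedged portfolio returns
print(f"variance unhedged: {stock[window:].var():.2e}, "
      f"hedged: {hedged[window:].var():.2e}")
```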

Journal Article · DOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4, +2862 more · Institutions (191)
TL;DR: The methods employed in the ATLAS experiment to correct for the impact of pile-up on jet energy and jet shapes, and for the presence of spurious additional jets, are described, with a primary focus on the large 20.3 fb−1 data sample.
Abstract: The large rate of multiple simultaneous proton-proton interactions, or pile-up, generated by the Large Hadron Collider in Run 1 required the development of many new techniques to mitigate the advers ...

Journal Article · DOI
TL;DR: Impacts of NO3-BVOC chemistry on air quality and climate are outlined, along with critical research needs to better constrain this interaction and improve the predictive capabilities of atmospheric models.
Abstract: Oxidation of biogenic volatile organic compounds (BVOC) by the nitrate radical (NO3) represents one of the important interactions between anthropogenic emissions related to combustion and natural emissions from the biosphere. This interaction has been recognized for more than 3 decades, during which time a large body of research has emerged from laboratory, field, and modeling studies. NO3-BVOC reactions influence air quality, climate and visibility through regional and global budgets for reactive nitrogen (particularly organic nitrates), ozone, and organic aerosol. Despite its long history of research and the significance of this topic in atmospheric chemistry, a number of important uncertainties remain. These include an incomplete understanding of the rates, mechanisms, and organic aerosol yields for NO3-BVOC reactions, lack of constraints on the role of heterogeneous oxidative processes associated with the NO3 radical, the difficulty of characterizing the spatial distributions of BVOC and NO3 within the poorly mixed nocturnal atmosphere, and the challenge of constructing appropriate boundary layer schemes and non-photochemical mechanisms for use in state-of-the-art chemical transport and chemistry–climate models. This review is the result of a workshop of the same title held at the Georgia Institute of Technology in June 2015. The first half of the review summarizes the current literature on NO3-BVOC chemistry, with a particular focus on recent advances in instrumentation and models, and in organic nitrate and secondary organic aerosol (SOA) formation chemistry. Building on this current understanding, the second half of the review outlines impacts of NO3-BVOC chemistry on air quality and climate, and suggests critical research needs to better constrain this interaction to improve the predictive capabilities of atmospheric models.
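For readers new to the area, the nocturnal chemistry at the center of this review follows a standard sequence (textbook atmospheric chemistry, summarized here rather than quoted from the review); NO3 is photolyzed rapidly in daylight, which is why this chemistry matters chiefly at night:

```latex
\begin{align*}
\mathrm{NO_2} + \mathrm{O_3} &\longrightarrow \mathrm{NO_3} + \mathrm{O_2} && \text{(NO}_3\text{ formation)}\\
\mathrm{NO_3} + \mathrm{NO_2} &\rightleftharpoons \mathrm{N_2O_5} && \text{(reservoir equilibrium)}\\
\mathrm{NO_3} + \mathrm{BVOC} &\longrightarrow \text{organic nitrates} \longrightarrow \text{SOA} && \text{(oxidation and aerosol formation)}
\end{align*}
```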

Journal Article · DOI
TL;DR: An extensive set of Monte Carlo simulations examining different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment found that a bootstrap estimator yielded approximately correct standard errors and confidence intervals with the correct coverage rates.
Abstract: Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naive model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
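A sketch of the winning recipe from these simulations: IPTW weights from a propensity model, a weighted Cox fit, and a bootstrap that re-runs the whole pipeline on each resample. It assumes the lifelines and scikit-learn libraries and a data frame with time, event, treated, and covariate columns; the column names and settings are illustrative, not the authors' code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def iptw_log_hr(df, covariates):
    """Propensity model -> ATE inverse-probability weights -> weighted Cox."""
    ps = (LogisticRegression(max_iter=1000)
          .fit(df[covariates], df["treated"])
          .predict_proba(df[covariates])[:, 1])
    w = np.where(df["treated"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    fit_df = df[["time", "event", "treated"]].assign(iptw=w)
    cph = CoxPHFitter()
    cph.fit(fit_df, duration_col="time", event_col="event",
            weights_col="iptw", robust=True)
    return cph.params_["treated"]          # log hazard ratio for treatment

def bootstrap_se(df, covariates, n_boot=200, seed=0):
    """Re-estimate the weights *and* the Cox model on each bootstrap resample;
    this full-pipeline bootstrap is the variant the study found to give
    approximately correct standard errors and coverage."""
    rng = np.random.default_rng(seed)
    reps = [iptw_log_hr(df.sample(len(df), replace=True,
                                  random_state=int(rng.integers(2**31 - 1))),
                        covariates)
            for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))
```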

Journal Article · DOI
TL;DR: In this article, the authors presented results for several light hadronic quantities obtained from simulations of $2+1$ flavor domain wall lattice QCD with large physical volumes and nearly physical pion masses at two lattice spacings.
Abstract: We present results for several light hadronic quantities ($f_{\pi}$, $f_{K}$, $B_{K}$, $m_{ud}$, $m_{s}$, $t_{0}^{1/2}$, $w_{0}$) obtained from simulations of $2+1$ flavor domain wall lattice QCD with large physical volumes and nearly physical pion masses at two lattice spacings. We perform a short, $\mathcal{O}(3)\%$, extrapolation in pion mass to the physical values by combining our new data in a simultaneous chiral/continuum "global fit" with a number of other ensembles with heavier pion masses. We use the physical values of $m_{\pi}$, $m_{K}$ and $m_{\Omega}$ to determine the two quark masses and the scale; all other quantities are outputs from our simulations. We obtain results with subpercent statistical errors and negligible chiral and finite-volume systematics for these light hadronic quantities, including $f_{\pi}=130.2(9)$ MeV; $f_{K}=155.5(8)$ MeV; the average up/down quark mass and strange quark mass in the $\overline{\mathrm{MS}}$ scheme at 3 GeV, $2.997(49)$ and $81.64(1.17)$ MeV respectively; and the neutral kaon mixing parameter, $B_{K}$, in the renormalization group invariant scheme, $0.750(15)$, and the $\overline{\mathrm{MS}}$ scheme at 3 GeV, $0.530(11)$.

Journal Article · DOI
TL;DR: A case is made for why ontologies can contribute to blockchain design; a traceability ontology is analyzed and some of its representations are translated to smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum blockchain platform.
Abstract: An interesting research problem in our age of Big Data is that of determining provenance. Granular evaluation of provenance of physical goods -- e.g. tracking ingredients of a pharmaceutical or demonstrating authenticity of luxury goods -- has often not been possible with today's items that are produced and transported in complex, inter-organizational, often internationally-spanning supply chains. Recent adoption of Internet of Things and Blockchain technologies gives promise of better supply chain provenance. We are particularly interested in the blockchain as many favoured use cases of blockchain are for provenance tracking. We are also interested in applying ontologies as there has been some work done on knowledge provenance, traceability, and food provenance using ontologies. In this paper, we make a case for why ontologies can contribute to blockchain design. To support this case, we analyze a traceability ontology and translate some of its representations to smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum blockchain platform.
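The constraint-enforcement idea is easy to see independently of the blockchain layer. The paper translates ontology representations into Solidity smart contracts on Ethereum; the Python below is only a conceptual model with invented names, where register plays the role of a contract method that rejects transactions violating traceability constraints.

```python
from dataclasses import dataclass, field

@dataclass
class TraceabilityLedger:
    """Append-only registry of lots and their provenance links."""
    lots: dict = field(default_factory=dict)   # lot_id -> set of input lot_ids

    def register(self, lot_id, inputs):
        if lot_id in self.lots:
            raise ValueError(f"lot {lot_id} already registered")   # immutability
        missing = set(inputs) - set(self.lots)
        if missing:
            raise ValueError(f"unknown input lots: {missing}")     # traceability
        self.lots[lot_id] = set(inputs)

    def provenance(self, lot_id):
        """Transitive closure of inputs: the full provenance trace of a lot."""
        seen, stack = set(), [lot_id]
        while stack:
            for parent in self.lots[stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

ledger = TraceabilityLedger()
ledger.register("raw-ingredient-A", set())
ledger.register("tablet-batch-7", {"raw-ingredient-A"})
print(ledger.provenance("tablet-batch-7"))   # {'raw-ingredient-A'}
```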

Journal Article · DOI
19 Jan 2016 · JAMA
TL;DR: Among family members of older patients with fee-for-service Medicare who died of lung or colorectal cancer, earlier hospice enrollment, avoidance of ICU admissions within 30 days of death, and death occurring outside the hospital were associated with perceptions of better end-of-life care.
Abstract: Importance Patients with advanced-stage cancer are receiving increasingly aggressive medical care near death, despite growing concerns that this reflects poor-quality care. Objective To assess the association of aggressive end-of-life care with bereaved family members' perceptions of the quality of end-of-life care and patients' goal attainment. Design, Setting, and Participants Interviews with 1146 family members of Medicare patients with advanced-stage lung or colorectal cancer in the Cancer Care Outcomes Research and Surveillance study (a multiregional, prospective, observational study) who died by the end of 2011 (median, 144.5 days after death; interquartile range, 85.0-551.0 days). Exposures Claims-based quality measures of aggressive end-of-life care (ie, intensive care unit [ICU] admission or repeated hospitalizations or emergency department visits during the last month of life; chemotherapy ≤2 weeks of death; no hospice or ≤3 days of hospice services; and deaths occurring in the hospital). Main Outcomes and Measures Family member–reported quality rating of "excellent" for end-of-life care. Secondary outcomes included patients' goal attainment (ie, end-of-life care congruent with patients' wishes and location of death occurred in preferred place). Results Of 1146 patients with cancer (median age, 76.0 years [interquartile range, 65.0-87.0 years]; 55.8% male), bereaved family members reported excellent end-of-life care for 51.3%. Family members reported excellent end-of-life care more often for patients who received hospice care for longer than 3 days (58.8% [352/599]) than those who did not receive hospice care or received 3 or fewer days (43.1% [236/547]) (adjusted difference, 16.5 percentage points [95% CI, 10.7 to 22.4 percentage points]). In contrast, family members of patients admitted to an ICU within 30 days of death reported excellent end-of-life care less often (45.0% [68/151]) than those who were not admitted to an ICU within 30 days of death (52.3% [520/995]) (adjusted difference, −9.4 percentage points [95% CI, −18.2 to −0.6 percentage points]). Similarly, family members of patients who died in the hospital reported excellent end-of-life care less often (42.2% [194/460]) than those who did not die in the hospital (57.4% [394/686]) (adjusted difference, −17.0 percentage points [95% CI, −22.9 to −11.1 percentage points]). Family members of patients who did not receive hospice care or received 3 or fewer days were less likely to report that patients died in their preferred location (40.0% [152/380]) than those who received hospice care for longer than 3 days (72.8% [287/394]) (adjusted difference, −34.4 percentage points [95% CI, −41.7 to −27.0 percentage points]). Conclusions and Relevance Among family members of older patients with fee-for-service Medicare who died of lung or colorectal cancer, earlier hospice enrollment, avoidance of ICU admissions within 30 days of death, and death occurring outside the hospital were associated with perceptions of better end-of-life care. These findings are supportive of advance care planning consistent with the preferences of patients.

Journal Article · DOI
Morad Aaboud, Alexander Kupco1, P. Davison2, Samuel Webb3, +2869 more · Institutions (194)
TL;DR: The luminosity determination for the ATLAS detector at the LHC during pp collisions at √s = 8 TeV in 2012 is presented in this article, where the evaluation of the luminosity scale is performed using several luminometers.
Abstract: The luminosity determination for the ATLAS detector at the LHC during pp collisions at √s = 8 TeV in 2012 is presented. The evaluation of the luminosity scale is performed using several luminometers ...

Journal Article · DOI
TL;DR: Qualitative findings suggest that participants view policies as a nuisance, ignoring them to pursue the ends of digital production, without being inhibited by the means.
Abstract: This paper addresses ‘the biggest lie on the internet’ with an empirical investigation of privacy policy (PP) and terms of service (TOS) policy reading behavior. An experimental survey (N=543) assessed the extent to which individuals ignore PP and TOS when joining a fictitious social networking site, NameDrop. Results reveal 74% skipped PP, selecting ‘quick join.’ For readers, average PP reading time was 73 seconds, and average TOS reading time was 51 seconds. Based on average adult reading speed (250-280 words per minute), PP should have taken 30 minutes to read, TOS 16 minutes. A regression analysis revealed information overload as a significant negative predictor of reading TOS upon signup, when TOS changes, and when PP changes. Qualitative findings further suggest that participants view policies as nuisance, ignoring them to pursue the ends of digital production, without being inhibited by the means. Implications were revealed as 98% missed NameDrop TOS ‘gotcha clauses’ about data sharing with the NSA and employers, and about providing a first-born child as payment for SNS access.
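The reading-time claims reduce to simple arithmetic, reproduced below. The implied policy lengths are back-calculated from the reported reading speeds and times, so they are approximations rather than figures quoted from the paper.

```python
WPM = 265   # midpoint of the cited 250-280 words-per-minute range

#            (required reading, min)  (average actual reading, s)
policies = {"PP":  (30, 73),
            "TOS": (16, 51)}

for name, (required_min, actual_s) in policies.items():
    words = required_min * WPM                 # implied document length
    words_read = actual_s / 60 * WPM           # words readable in actual time
    print(f"{name}: ~{words:,} words; ~{words_read:.0f} readable in "
          f"{actual_s}s ({words_read / words:.1%} of the document)")
# PP:  ~7,950 words; ~322 readable in 73s (4.1% of the document)
# TOS: ~4,240 words; ~225 readable in 51s (5.3% of the document)
```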

Journal Article · DOI
Xiaodi Li1, Jianhong Wu1
TL;DR: In this work, general and applicable results for uniform stability, uniform asymptotic stability and exponential stability of the systems with state-dependent delayed impulses are established by using the impulsive control theory and some comparison arguments.
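In generic notation (assumed here, since only the TL;DR is shown, and not necessarily the authors' exact formulation), a system with state-dependent delayed impulses takes the form

```latex
\begin{cases}
\dot{x}(t) = f(t, x_t), & t \neq t_k,\\[3pt]
x(t_k) = x(t_k^{-}) + I_k\!\bigl(x\bigl(t_k^{-} - \tau_k(x(t_k^{-}))\bigr)\bigr), & k \in \mathbb{N},
\end{cases}
```

where the jump applied at each impulse time t_k depends on the state at an earlier instant whose delay itself depends on the state; this coupling is what distinguishes the setting from fixed-delay impulses and motivates the comparison arguments mentioned in the TL;DR.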

Journal Article · DOI
TL;DR: In this paper, the authors comprehensively review the evolution of biochar from several lignocellulosic biomasses influenced by pyrolysis temperature and heating rate.
Abstract: Biofuels and biomaterials are gaining increased attention because of their ecofriendly nature and renewable precursors. Biochar is a recalcitrant carbonaceous product obtained from pyrolysis of biomass and other biogenic wastes. Biochar has found many notable applications in diverse areas because of its versatile physicochemical properties. Some of the promising biochar applications discussed in this paper include char gasification and combustion for energy production, soil remediation, carbon sequestration, catalysis, as well as development of activated carbon and specialty materials with biomedical and industrial uses. The pyrolysis temperature and heating rates are the limiting factors that determine the biochar properties such as fixed carbon, volatile matter, mineral phases, surface area, porosity and pore size distribution, alkalinity, electrical conductivity, cation-exchange capacity, etc. A broad investigation of these properties determining biochar application is rare in literature. With this objective, this paper comprehensively reviews the evolution of biochar from several lignocellulosic biomasses influenced by pyrolysis temperature and heating rate. Lower pyrolysis temperatures produce biochar with higher yields, and greater levels of volatiles, electrical conductivity and cation-exchange capacity. Conversely, higher temperatures generate biochar with a greater extent of aromatic carbon, alkalinity and surface area with microporosity. Nevertheless, this coherent review summarizes the valorization potentials of biochar for various environmental, industrial and biomedical applications.

Journal Article · DOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4, +2851 more · Institutions (208)
TL;DR: The results suggest that the ridge in pp collisions arises from the same or similar underlying physics as observed in p+Pb collisions, and that the dynamics responsible for the ridge has no strong √s dependence.
Abstract: ATLAS has measured two-particle correlations as a function of relative azimuthal-angle, $\Delta \phi$, and pseudorapidity, $\Delta \eta$, in $\sqrt{s}$=13 and 2.76 TeV $pp$ collisions at the LHC using charged particles measured in the pseudorapidity interval $|\eta|$<2.5. The correlation functions evaluated in different intervals of measured charged-particle multiplicity show a multiplicity-dependent enhancement at $\Delta \phi \sim 0$ that extends over a wide range of $\Delta\eta$, which has been referred to as the "ridge". Per-trigger-particle yields, $Y(\Delta \phi)$, are measured over 2<$|\Delta\eta|$<5. For both collision energies, the $Y(\Delta \phi)$ distribution in all multiplicity intervals is found to be consistent with a linear combination of the per-trigger-particle yields measured in collisions with less than 20 reconstructed tracks, and a constant combinatoric contribution modulated by $\cos{(2\Delta \phi)}$. The fitted Fourier coefficient, $v_{2,2}$, exhibits factorization, suggesting that the ridge results from per-event $\cos{(2\phi)}$ modulation of the single-particle distribution with Fourier coefficients $v_2$. The $v_2$ values are presented as a function of multiplicity and transverse momentum. They are found to be approximately constant as a function of multiplicity and to have a $p_{\mathrm{T}}$ dependence similar to that measured in $p$+Pb and Pb+Pb collisions. The $v_2$ values in the 13 and 2.76 TeV data are consistent within uncertainties. These results suggest that the ridge in $pp$ collisions arises from the same or similar underlying physics as observed in $p$+Pb collisions, and that the dynamics responsible for the ridge has no strong $\sqrt{s}$ dependence.
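In formulas, the decomposition described above (peripheral per-trigger yield plus a cos(2Δφ)-modulated combinatoric term) and the factorization test are, in the notation commonly used for such template fits:

```latex
Y(\Delta\phi) \;=\; F\, Y^{\mathrm{periph}}(\Delta\phi)
  \;+\; G\,\bigl[\,1 + 2\,v_{2,2}\cos(2\Delta\phi)\,\bigr],
\qquad
v_{2,2}\bigl(p_{\mathrm{T}}^{a}, p_{\mathrm{T}}^{b}\bigr) \;=\; v_{2}\bigl(p_{\mathrm{T}}^{a}\bigr)\, v_{2}\bigl(p_{\mathrm{T}}^{b}\bigr).
```

Factorization of v_{2,2} into single-particle coefficients is what justifies interpreting the ridge as a per-event cos(2φ) modulation of the single-particle distribution.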

Posted Content · DOI
21 Aug 2016 · bioRxiv
TL;DR: In this paper, a convolutional neural network (CNN) was used for classification of functional MRI data for the purposes of medical image analysis and Alzheimer's disease prediction, achieving a reproducible accuracy of 99.9% and 98.84% for the fMRI and MRI pipelines, respectively.
Abstract: To extract patterns from neuroimaging data, various techniques, including statistical methods and machine learning algorithms, have been explored to ultimately aid in Alzheimer's disease diagnosis of older adults in both clinical and research applications. However, identifying the distinctions between Alzheimer's brain data and healthy brain data in older adults (age > 75) is challenging due to highly similar brain patterns and image intensities. Recently, cutting-edge deep learning technologies have been rapidly expanding into numerous fields, including medical image analysis. This work outlines state-of-the-art deep learning-based pipelines employed to distinguish Alzheimer's magnetic resonance imaging (MRI) and functional MRI data from normal healthy control data for the same age group. Using these pipelines, which were executed on a GPU-based high performance computing platform, the data were strictly and carefully preprocessed. Next, scale and shift invariant low- to high-level features were obtained from a high volume of training images using convolutional neural network (CNN) architecture. In this study, functional MRI data were used for the first time in deep learning applications for the purposes of medical image analysis and Alzheimer's disease prediction. These proposed and implemented pipelines, which demonstrate a significant improvement in classification output when compared to other studies, resulted in high and reproducible accuracy rates of 99.9% and 98.84% for the fMRI and MRI pipelines, respectively.
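A minimal PyTorch sketch of the kind of CNN classifier implied by the pipeline description (2-D image slices in, a binary Alzheimer's-vs-control decision out). Layer sizes and names are invented for illustration; they do not reproduce the authors' architecture or preprocessing.

```python
import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    """Small CNN for binary classification of preprocessed 2-D brain slices."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (N, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, 2)    # Alzheimer's vs. healthy control

    def forward(self, x):                     # x: (N, 1, H, W) slices
        return self.classifier(self.features(x).flatten(1))

# One supervised training step on a dummy batch of eight 96x96 slices.
model = SliceCNN()
logits = model(torch.randn(8, 1, 96, 96))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
```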

Journal Article · DOI
TL;DR: In this article, the authors provide a thematically-driven review of CSR communication literature across five core sub-disciplines, identifying dominant views upon the audience and CSR purpose, as well as pervasive theoretical approaches and research paradigms.
Abstract: Growing recognition that communication with stakeholders forms an essential element in the design, implementation and success of corporate social responsibility (CSR) has given rise to a burgeoning CSR communication literature. However this literature is scattered across various sub-disciplines of management research and exhibits considerable heterogeneity in its core assumptions, approaches and goals. This article provides a thematically-driven review of the extant literature across five core sub-disciplines, identifying dominant views upon the audience of CSR communication (internal/external actors) and CSR communication purpose, as well as pervasive theoretical approaches and research paradigms manifested across these areas. The article then sets out a new conceptual framework - the 4Is of CSR communication research - that distinguishes between research on CSR Integration, CSR Interpretation, CSR Identity, and CSR Image. This typology of research streams organizes the central themes, opportunities and challenges for CSR communication theory development, and provides a heuristic against which future research can be located.

Journal Article · DOI
TL;DR: In this article, the authors explored the relationship between globalization and energy consumption for India by endogenizing economic growth, financial development and urbanization, and found that in the long run, the acceleration of globalization (measured in three dimensions) leads to a decline in energy demand in India.

Journal Article · DOI
Michael Fausnaugh1, Kelly D. Denney1, Aaron J. Barth2, Misty C. Bentz3, M. C. Bottorff4, Michael T. Carini5, K. V. Croxall1, G. De Rosa6, M. R. Goad7, Keith Horne8, Michael D. Joner9, Shai Kaspi10, Minjin Kim11, S. A. Klimanov, Christopher S. Kochanek1, D. C. Leonard12, Hagai Netzer13, Bradley M. Peterson1, K. Schnülle14, S. G. Sergeev, Marianne Vestergaard15, W. Zheng16, Ying Zu17, P. Arévalo18, C. Bazhaw3, G. A. Borman, Todd A. Boroson, W. N. Brandt19, A. A. Breeveld20, Brendon J. Brewer21, E. M. Cackett22, D. M. Crenshaw3, E. Dalla Bontà, A. de Lorenzo-Cáceres8, M. Dietrich23, Rick Edelson24, N. V. Efimova, Justin Ely6, Phil Evans7, A. V. Filippenko16, K. Flatland12, N. Gehrels25, S. Geier, J. M. Gelbord, L. Gonzalez12, V. Gorjian26, Catherine J. Grier1, Catherine J. Grier19, D. Grupe27, Patrick B. Hall28, S. Hicks5, D. Horenstein3, T. Hutchison4, Myungshin Im29, J. J. Jensen30, J. D. Jones3, Jelle Kaastra31, Brandon C. Kelly32, J. A. Kennea, Sang Chul Kim11, Kirk T. Korista33, G. A. Kriss34, J. C. Lee11, P. Lira35, F. MacInnis4, E. R. Manne-Nicholas3, S. Mathur1, I. M. McHardy36, C. Montouri37, R. Musso4, S. V. Nazarov, Ryan Norris3, J. A. Nousek19, D. N. Okhmat, A. Pancoast38, I. E. Papadakis39, J. R. Parks3, Liuyi Pei2, Richard W. Pogge1, J.-U. Pott14, S. E. Rafter40, H.-W. Rix14, D. A. Saylor3, J. S. Schimoia41, M. H. Siegel, M. Spencer9, D. A. Starkey8, H.-I. Sung11, K. G. Teems3, Tommaso Treu42, Tommaso Treu32, C. S. Turner3, Phil Uttley43, Carolin Villforth44, Y. Weiss10, Jong-Hak Woo29, H. Yan45, S. Young24 
TL;DR: In this paper, the authors used data obtained with the MODS spectrographs, built with funding from the National Science Foundation (NSF) and the NSF Telescope System Instrumentation Program (TSIP), with additional funds from the Ohio Board of Regents and the Ohio State University Office of Research.
Abstract: The LBT is an international collaboration among institutions in the United States, Italy and Germany. LBT Corporation partners are: The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia; The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University. This paper used data obtained with the MODS spectrographs built with funding from National Science Foundation (NSF) grant AST-9987045 and the NSF Telescope System Instrumentation Program (TSIP), with additional funds from the Ohio Board of Regents and the Ohio State University Office of Research. This paper made use of the modsIDL spectral data reduction pipeline developed in part with funds provided by NSF Grant AST-1108693. The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. KAIT and its ongoing operation were made possible by donations from Sun Microsystems, Inc., the Hewlett-Packard Company, AutoScope Corporation, Lick Observatory, the NSF, the University of California, the Sylvia and Jim Katzman Foundation, and the TABASGO Foundation. Research at Lick Observatory is partially supported by a generous gift from Google. Support for HST program number GO-13330 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. M.M.F., G.D.R., B.M.P., C.J.G., and R.W.P. are grateful for the support of the NSF through grant AST-1008882 to The Ohio State University. A.J.B. and L.P. have been supported by NSF grant AST-1412693. A.V.F. and W.-K.Z. are grateful for financial assistance from NSF grant AST-1211916, the TABASGO Foundation, and the Christopher R. Redlich Fund. M.C. Bentz gratefully acknowledges support through NSF CAREER grant AST-1253702 to Georgia State University. M.C. Bottorff acknowledges HHMI for support through an undergraduate science education grant to Southwestern University. K.D.D. is supported by an NSF Fellowship awarded under grant AST-1302093. R.E. gratefully acknowledges support from NASA under awards NNX13AC26G, NNX13AC63G, and NNX13AE99G. J.M.G. gratefully acknowledges support from NASA under award NNH13CH61C. P.B.H. is supported by NSERC. M.I. acknowledges support from the Creative Initiative program, No. 2008-0060544, of the National Research Foundation of Korea (NRFK) funded by the Korean government (MSIP). M.D.J. acknowledges NSF grant AST-0618209 used for obtaining the 0.91 m telescope at WMO. SRON is financially supported by NWO, the Netherlands Organization for Scientific Research. B.C.K. is partially supported by the UC Center for Galaxy Evolution. C.S.K. acknowledges the support of NSF grant AST-1009756. D.C.L. acknowledges support from NSF grants AST-1009571 and AST-1210311, under which part of this research (photometric observations collected at MLO) was carried out. We thank Nhieu Duong, Harish Khandrika, Richard Mellinger, J. Chuck Horst, Steven Armen, and Eddie Garcia for assistance with the MLO observations. P.L. acknowledges support from Fondecyt grant #1120328. A.P. acknowledges support from a NSF graduate fellowship, a UCSB Dean's Fellowship, and a NASA Einstein Fellowship. J.S.S. acknowledges CNPq, National Council for Scientific and Technological Development (Brazil) for partial support and The Ohio State University for warm hospitality. T.T. has been supported by NSF grant AST-1412315. T.T. and B.C.K. acknowledge support from the Packard Foundation in the form of a Packard Research Fellowship to T.T.; also, T.T. thanks the American Academy in Rome and the Observatory of Monteporzio Catone for kind hospitality. The Dark Cosmology Centre is funded by the Danish National Research Foundation. M.V. gratefully acknowledges support from the Danish Council for Independent Research via grant no. DFF–4002-00275. J.-H.W. acknowledges support by the National Research Foundation of Korea (NRF) grant funded by the Korean government (No. 2010-0027910). E.D.B. is supported by Padua University through grants 60A02-5857/13, 60A02-5833/14, 60A02-4434/15, and CPDA133894. K.H. acknowledges support from STFC grant ST/M001296/1. S.A.K. thanks Dr. I. A. Rakhimov, the Director of Svetloe Observatory, for his support and hospitality. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.

Journal Article · DOI
TL;DR: HDL-C level is unlikely to represent a CV-specific risk factor given similarities in its associations with non-CV outcomes, and complex associations exist between HDL-C levels and sociodemographic, lifestyle, comorbidity factors, and mortality.

Journal Article · DOI
TL;DR: Youth advocates, parents, clinicians, and coaches need to work together with the sport governing bodies to ensure healthy environments for play and competition that do not create long-term health issues yet support athletic competition at the highest level desired.
Abstract: Background: Early sport specialization is not a requirement for success at the highest levels of competition and is believed to be unhealthy physically and mentally for young athletes. It also disco...

Journal Article · DOI
Mark S. Schwartz1
TL;DR: In this paper, a revised EDM model is proposed that consolidates and attempts to bridge together the varying and sometimes directly conflicting propositions and perspectives that have been advanced, and the proposed model, called "Integrated Ethical Decision Making", is introduced in order to fill the gaps and bridge the current divide in EDM theory.
Abstract: Ethical decision-making (EDM) descriptive theoretical models often conflict with each other and typically lack comprehensiveness. To address this deficiency, a revised EDM model is proposed that consolidates and attempts to bridge together the varying and sometimes directly conflicting propositions and perspectives that have been advanced. To do so, the paper is organized as follows. First, a review of the various theoretical models of EDM is provided. These models can generally be divided into (a) rationalist-based (i.e., reason); and (b) non-rationalist-based (i.e., intuition and emotion). Second, the proposed model, called ‘Integrated Ethical Decision Making,’ is introduced in order to fill the gaps and bridge the current divide in EDM theory. The individual and situational factors as well as the process of the proposed model are then described. Third, the academic and managerial implications of the proposed model are discussed. Finally, the limitations of the proposed model are presented.