Showing papers by "University of South Carolina published in 2016"
••
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes.
For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target more than one autophagy-related protein by gene knockout or RNA interference. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, implying that not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
5,187 citations
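The flux-versus-abundance distinction emphasized above is often made operational with an LC3 turnover assay, one of the flux measurements the guidelines discuss. A minimal sketch of the arithmetic, assuming normalized LC3-II band intensities measured with and without a lysosomal inhibitor such as bafilomycin A1 (function and variable names are illustrative, not from the guidelines):

```python
def lc3_flux(lc3ii_basal: float, lc3ii_inhibited: float) -> float:
    """Estimate autophagic flux from an LC3 turnover assay.

    The difference between LC3-II measured with a lysosomal inhibitor
    (which traps autophagosomes) and without it reflects how much LC3-II
    was being degraded, i.e. flux through the pathway. A high basal
    LC3-II level alone cannot distinguish induction from blocked
    clearance; the inhibitor difference can.
    """
    return lc3ii_inhibited - lc3ii_basal

# Induced autophagy: basal 1.0 rises to 2.5 under inhibition -> high flux.
# Blocked clearance: basal already 2.5, barely rises to 2.6 -> low flux.
```

This mirrors the point in the abstract: more autophagosomes (a higher basal reading) does not equal more autophagy; only the inhibitor-dependent difference reports on degradative flux.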
••
TL;DR: The findings continue to support the importance of at least 60 min/day of MVPA for disease prevention and health promotion in children and youth, but also highlight the potential benefits of LPA and total PA.
Abstract: Moderate-to-vigorous physical activity (MVPA) is essential for disease prevention and health promotion. Emerging evidence suggests other intensities of physical activity (PA), including light-intensity activity (LPA), may also be important, but there has been no rigorous evaluation of the evidence. The purpose of this systematic review was to examine the relationships between objectively measured PA (total and all intensities) and health indicators in school-aged children and youth. Online databases were searched for peer-reviewed studies that met the a priori inclusion criteria: population (apparently healthy, aged 5–17 years), intervention/exposure/comparator (volumes, durations, frequencies, intensities, and patterns of objectively measured PA), and outcome (body composition, cardiometabolic biomarkers, physical fitness, behavioural conduct/pro-social behaviour, cognition/academic achievement, quality of life/well-being, harms, bone health, motor skill development, psychological distress, self-esteem)....
1,259 citations
••
Children's Hospital of Eastern Ontario, University of Ottawa, University of Alberta, Public Health Agency of Canada, Conference Board of Canada, University of British Columbia, Douglas Mental Health University Institute, Queen's University, Pennington Biomedical Research Center, McMaster University, McGill University, University of Wollongong, University of South Australia, University of South Carolina, University of Prince Edward Island, University of Calgary, Swansea University, University of Toronto, Camosun College
TL;DR: The Canadian 24-Hour Movement Guidelines for Children and Youth: An Integration of Physical Activity, Sedentary Behaviour, and Sleep provide evidence-informed recommendations for a healthy day (24 h), comprising a combination of sleep, sedentary behaviours, light-, moderate-, and vigorous-intensity physical activity.
Abstract: Leaders from the Canadian Society for Exercise Physiology convened representatives of national organizations, content experts, methodologists, stakeholders, and end-users who followed rigorous and transparent guideline development procedures to create the Canadian 24-Hour Movement Guidelines for Children and Youth: An Integration of Physical Activity, Sedentary Behaviour, and Sleep. These novel guidelines for children and youth aged 5-17 years respect the natural and intuitive integration of movement behaviours across the whole day (24-h period). The development process was guided by the Appraisal of Guidelines for Research Evaluation (AGREE) II instrument and systematic reviews of evidence informing the guidelines were assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach. Four systematic reviews (physical activity, sedentary behaviour, sleep, integrated behaviours) examining the relationships between and among movement behaviours and several health indicators were completed and interpreted by expert consensus. Complementary compositional analyses were performed using Canadian Health Measures Survey data to examine the relationships between movement behaviours and health indicators. A stakeholder survey was employed (n = 590) and 28 focus groups/stakeholder interviews (n = 104) were completed to gather feedback on draft guidelines. Following an introductory preamble, the guidelines provide evidence-informed recommendations for a healthy day (24 h), comprising a combination of sleep, sedentary behaviours, light-, moderate-, and vigorous-intensity physical activity. Proactive dissemination, promotion, implementation, and evaluation plans have been prepared in an effort to optimize uptake and activation of the new guidelines. Future research should consider the integrated relationships among movement behaviours, and similar integrated guidelines for other age groups should be developed.
1,114 citations
••
City College of New York, Mackenzie Presbyterian University, University of São Paulo, New York University, Beth Israel Deaconess Medical Center, University Medical Center Freiburg, University of Minnesota, University of Pennsylvania, University of Michigan, Air Force Research Laboratory, University of Calgary, Albert Einstein College of Medicine, University of Göttingen, University of New South Wales, University of Freiburg, University of South Carolina, MedStar National Rehabilitation Hospital, University of Florida
TL;DR: Evidence from relevant animal models indicates that brain injury by Direct Current Stimulation (DCS) occurs at predicted brain current densities that are over an order of magnitude above those produced by conventional tDCS.
874 citations
••
Yale University, Brown University, Veterans Health Administration, National Institutes of Health, University of Colorado Denver, University of Iowa, University of L'Aquila, Oregon Health & Science University, University of Arizona, University of Oxford, University of Cincinnati, University of Newcastle, Heidelberg University, University of South Carolina, University of Western Ontario, Tel Aviv University, OSF Saint Francis Medical Center
TL;DR: In this trial involving patients without diabetes who had insulin resistance along with a recent history of ischemic stroke or TIA, the risk of stroke or myocardial infarction was lower among patients who received pioglitazone than among those who received placebo.
Abstract: Background: Patients with ischemic stroke or transient ischemic attack (TIA) are at increased risk for future cardiovascular events despite current preventive therapies. The identification of insulin resistance as a risk factor for stroke and myocardial infarction raised the possibility that pioglitazone, which improves insulin sensitivity, might benefit patients with cerebrovascular disease. Methods: In this multicenter, double-blind trial, we randomly assigned 3876 patients who had had a recent ischemic stroke or TIA to receive either pioglitazone (target dose, 45 mg daily) or placebo. Eligible patients did not have diabetes but were found to have insulin resistance on the basis of a score of more than 3.0 on the homeostasis model assessment of insulin resistance (HOMA-IR) index. The primary outcome was fatal or nonfatal stroke or myocardial infarction. Results: By 4.8 years, a primary outcome had occurred in 175 of 1939 patients (9.0%) in the pioglitazone group and in 228 of 1937 (11.8%) in the placebo group...
771 citations
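The insulin-resistance eligibility screen above is a simple arithmetic cutoff. A hedged sketch using the conventional HOMA-IR formula (glucose in mg/dL, insulin in µU/mL; the trial's exact assay details are not reproduced here):

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uu_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (conventional form):
    HOMA-IR = fasting glucose [mg/dL] * fasting insulin [uU/mL] / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uu_ml / 405.0

def insulin_resistant(glucose: float, insulin: float, threshold: float = 3.0) -> bool:
    """Eligibility-style check: HOMA-IR strictly greater than 3.0."""
    return homa_ir(glucose, insulin) > threshold
```

For example, fasting glucose of 100 mg/dL with fasting insulin of 15 µU/mL gives HOMA-IR of about 3.7, above the trial's 3.0 cutoff.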
••
TL;DR: A scientifically based harmonized definition of MHO is proposed, which will hopefully contribute to more comparable data in the future and a better understanding of the MHO subgroup and its CVD prognosis.
Abstract: The prevalence of obesity has increased worldwide over the past few decades. In 2013, the prevalence of obesity exceeded 50% of the adult population in some countries from Oceania, North Africa, and the Middle East. Lower but still alarmingly high prevalence was observed in North America (≈30%) and in Western Europe (≈20%). These figures are of serious concern because of the strong link between obesity and disease. In the present review, we summarize the current evidence on the relationship of obesity with cardiovascular disease (CVD), discussing how both the degree and the duration of obesity affect CVD. Although in the general population, obesity and, especially, severe obesity are consistently and strongly related with higher risk of CVD incidence and mortality, the one-size-fits-all approach should not be used with obesity. There are relevant factors largely affecting the CVD prognosis of obese individuals. In this context, we thoroughly discuss important concepts such as the fat-but-fit paradigm, the metabolically healthy but obese (MHO) phenotype, and the obesity paradox in patients with CVD. Regarding the MHO phenotype and its CVD prognosis, available data have provided mixed findings, which could be partially due to whether or not analyses adjusted for key confounders such as cardiorespiratory fitness, and to the lack of consensus on the MHO definition. In the present review, we propose a scientifically based harmonized definition of MHO, which will hopefully contribute to more comparable data in the future and a better understanding of the MHO subgroup and its CVD prognosis.
712 citations
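As an illustration of how a harmonized MHO definition makes classifications comparable, here is a toy classifier. The BMI cutoff and the zero-risk-factor rule are common operationalizations used for illustration only; the review's actual proposed criteria should be consulted:

```python
def classify_phenotype(bmi: float, metabolic_criteria_met: int) -> str:
    """Toy phenotype classifier (thresholds are assumptions, not the paper's).

    'MHO'       : obese (BMI >= 30) with no metabolic abnormalities
    'MUO'       : obese with one or more metabolic abnormalities
    'non-obese' : BMI below 30
    """
    if bmi < 30.0:
        return "non-obese"
    return "MHO" if metabolic_criteria_met == 0 else "MUO"
```

The point of a harmonized definition is exactly that two studies running this kind of rule would bin the same person the same way, making pooled CVD-prognosis estimates comparable.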
••
TL;DR: Recent progress in the study of marine microbial surface colonization and biofilm development is synthesized and discussed, and questions are posed for targeted investigation of surface-specific community-level microbial features to advance understanding of surface-associated microbial community ecology and the biogeochemical functions of these communities.
Abstract: SUMMARY Biotic and abiotic surfaces in marine waters are rapidly colonized by microorganisms. Surface colonization and subsequent biofilm formation and development provide numerous advantages to these organisms and support critical ecological and biogeochemical functions in the changing marine environment. Microbial surface association also contributes to deleterious effects such as biofouling, biocorrosion, and the persistence and transmission of harmful or pathogenic microorganisms and their genetic determinants. The processes and mechanisms of colonization as well as key players among the surface-associated microbiota have been studied for several decades. Accumulating evidence indicates that specific cell-surface, cell-cell, and interpopulation interactions shape the composition, structure, spatiotemporal dynamics, and functions of surface-associated microbial communities. Several key microbial processes and mechanisms, including (i) surface, population, and community sensing and signaling, (ii) intraspecies and interspecies communication and interaction, and (iii) the regulatory balance between cooperation and competition, have been identified as critical for the microbial surface association lifestyle. In this review, recent progress in the study of marine microbial surface colonization and biofilm development is synthesized and discussed. Major gaps in our knowledge remain. We pose questions for targeted investigation of surface-specific community-level microbial features, answers to which would advance our understanding of surface-associated microbial community ecology and the biogeochemical functions of these communities at levels from molecular mechanistic details through systems biological integration.
696 citations
••
TL;DR: In this article, the authors examined the effect of firm-generated content (FGC) in social media on three key customer metrics: spending, cross-buying, and customer profitability.
Abstract: Given the unprecedented reach of social media, firms are increasingly relying on it as a channel for marketing communication. The objective of this study is to examine the effect of firm-generated content (FGC) in social media on three key customer metrics: spending, cross-buying, and customer profitability. The authors further investigate the synergistic effects of FGC with television advertising and e-mail communication. To accomplish their objectives, the authors assemble a novel data set comprising customers’ social media participation data, transaction data, and attitudinal data obtained through surveys. The results indicate that after the authors account for the effects of television advertising and e-mail marketing, FGC has a positive and significant effect on customers’ behavior. The authors show that FGC works synergistically with both television advertising and e-mail marketing and also find that the effect of FGC is greater for more experienced, tech-savvy, and social media–prone customers.
614 citations
••
TL;DR: This work provides users with simple methods for detecting and correcting problems in the image conversion process, and serves as an overview for developers who wish to either develop their own tools or adapt the open source tools created by the authors.
566 citations
••
TL;DR: This statement sets forth a new framework for NIPS that is supported by information from validation and clinical utility studies, and laboratories are encouraged to meet the needs of providers and their patients by delivering meaningful screening reports and to engage in education.
503 citations
••
TL;DR: The landscape of disaster resilience indicators is littered with a wide range of tools, scorecards, and indices that purport to measure disaster resilience in some manner, as mentioned in this paper; however, there is no dominant approach across these characteristics.
Abstract: The landscape of disaster resilience indicators is littered with a wide range of tools, scorecards, and indices that purport to measure disaster resilience in some manner. This paper examines the existing qualitative and quantitative approaches to resilience assessment in order to delineate common concepts and variables. Twenty-seven different resilience assessment tools, indices, and scorecards were examined. Four different parameters were used to distinguish among them: focus (on assets or baseline conditions), spatial orientation (local to global), methodology (top-down or bottom-up), and domain area (characteristics to capacities). There is no dominant approach across these characteristics. In a more detailed procedure, fourteen empirically based case studies that had actually implemented one of the aforementioned tools, indices, or scorecards were examined to look for overlaps in both concepts measured and variables used. The most common elements in all the assessment approaches can be divided into attributes and assets (economic, social, environmental, infrastructure) and capacities (social capital, community functions, connectivity, and planning). The greatest variable overlap in the case studies is with specific measures of social capital based on religious affiliation and civic organizations, and for health access (measured by the number of physicians). Based on the analysis, a core set of attributes/assets, capacities, and proxy measures is presented as a path forward, recognizing that new data may be required to adequately measure many of the dimensions of community disaster resilience.
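The attributes/assets-and-capacities structure described above is typically rolled up into a composite score. A toy aggregation, where the proxy names, bounds, and equal weighting are assumptions for illustration rather than the paper's method:

```python
def rescale(value: float, lo: float, hi: float) -> float:
    """Min-max rescale a proxy measure onto [0, 1]."""
    return (value - lo) / (hi - lo)

def composite_resilience(proxies: dict, bounds: dict) -> float:
    """Equal-weight mean of rescaled proxies; real indices differ in
    weighting, normalization, and hierarchical grouping of domains."""
    scaled = [rescale(v, *bounds[name]) for name, v in proxies.items()]
    return sum(scaled) / len(scaled)

# Hypothetical proxies echoing the overlaps the paper found
# (physician access, civic organizations).
score = composite_resilience(
    {"physicians_per_1k": 2.0, "civic_orgs_per_10k": 5.0},
    {"physicians_per_1k": (0.0, 4.0), "civic_orgs_per_10k": (0.0, 10.0)},
)
```

Design choices such as the bounds and weights are exactly where the 27 surveyed tools diverge, which is the paper's point about the lack of a dominant approach.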
••
TL;DR: The authors conducted three studies to explore how individual and national differences influence the relationship between social media use and customer–brand relationships, finding that engaging customers via social media is associated with stronger consumer–brand relationships and more word-of-mouth communication when consumers anthropomorphize the brand and when they avoid uncertainty.
••
TL;DR: In this paper, the authors investigated the linkages of customer engagement with traditional antecedents of brand loyalty and found that customer engagement enhances customers' service brand evaluation, brand trust, and brand loyalty.
Abstract: Customer engagement has recently emerged in both academic literature and practitioner discussions as a brand loyalty predictor that may be superior to other traditional loyalty antecedents. However, empirical inquiry on customer engagement is relatively scarce. As tourism and hospitality firms have widely adopted customer engagement strategies for managing customer–brand relationships, further understanding of this concept is essential. Using structural equation modeling, this study investigates the linkages of customer engagement with traditional antecedents of brand loyalty. Results based on 496 hotel and airline customers suggest that customer engagement enhances customers’ service brand evaluation, brand trust, and brand loyalty. The results show that service brand loyalty can be strengthened not only through the service consumption experience but also through customer engagement beyond the service encounter. This study contributes to the literature by providing an empirical evaluation of the relationship between customer engagement and traditional antecedents of brand loyalty.
••
TL;DR: Three-dimensional graphene foam incorporated with nitrogen defects serves as a metal-free catalyst for CO2 reduction, and density functional theory calculations confirm pyridinic N as the most active site for CO2 reduction, consistent with experimental results.
Abstract: The practical recycling of carbon dioxide (CO2) by the electrochemical reduction route requires an active, stable, and affordable catalyst system. Although noble metals such as gold and silver have been demonstrated to reduce CO2 into carbon monoxide (CO) efficiently, they suffer from poor durability and scarcity. Here we report three-dimensional (3D) graphene foam incorporated with nitrogen defects as a metal-free catalyst for CO2 reduction. The nitrogen-doped 3D graphene foam requires negligible onset overpotential (−0.19 V) for CO formation, and it exhibits superior activity over Au and Ag, achieving similar maximum Faradaic efficiency for CO production (∼85%) at a lower overpotential (−0.47 V) and better stability for at least 5 h. The dependence of catalytic activity on N-defect structures is unraveled by systematic experimental investigations. Indeed, the density functional theory calculations confirm pyridinic N as the most active site for CO2 reduction, consistent with experimental results.
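The ~85% Faradaic efficiency quoted above is the fraction of the total passed charge that went into making CO, a two-electron reduction. A sketch of the standard definition, not code from the paper:

```python
FARADAY = 96485.0  # Faraday constant, C per mole of electrons

def faradaic_efficiency(moles_product: float, electrons_per_molecule: int,
                        total_charge_c: float) -> float:
    """FE = z * n * F / Q: charge consumed forming the product divided by
    the total charge passed. For CO2 -> CO, z = 2 electrons per molecule."""
    return electrons_per_molecule * moles_product * FARADAY / total_charge_c
```

For instance, producing about 4.4e-4 mol of CO while passing 100 C corresponds to FE of roughly 0.85, the maximum the paper reports at -0.47 V.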
••
TL;DR: Considering the noted associations between various assessments of MC and multiple aspects of HRPF, the development of MC in childhood may both directly and indirectly augment HRPF and may serve to enhance the development of long-term health outcomes in children and adolescents.
••
Systems Research Institute, United States Environmental Protection Agency, Southern California Coastal Water Research Project, Natural Resources Conservation Service, National Oceanic and Atmospheric Administration, University of Wisconsin-Madison, Natural Resources Research Institute, University of South Carolina, Engineer Research and Development Center
TL;DR: Environmental toxicology, environmental chemistry, and risk-assessment expertise must interface with ecologists, engineers, and public health practitioners to engage the complexities of HAB assessment and management, to address the forcing factors for HAB formation, and to reduce the threats posed to inland surface water quality.
Abstract: In this Focus article, the authors ask a seemingly simple question: Are harmful algal blooms (HABs) becoming the greatest inland water quality threat to public health and aquatic ecosystems? When HAB events require restrictions on fisheries, recreation, and drinking water uses of inland water bodies, significant economic consequences result. Unfortunately, the magnitude, frequency, and duration of HABs in inland waters are poorly understood across spatiotemporal scales and differentially engaged among states, tribes, and territories. Harmful algal bloom impacts are not as predictable as those from conventional chemical contaminants, for which water quality assessment and management programs were primarily developed, because interactions among multiple natural and anthropogenic factors determine the likelihood and severity to which a HAB will occur in a specific water body. These forcing factors can also affect toxin production. Beyond site-specific water quality degradation caused directly by HABs, the presence of HAB toxins can negatively influence routine surface water quality monitoring, assessment, and management practices. Harmful algal blooms present significant challenges for achieving water quality protection and restoration goals when these toxins confound interpretation of monitoring results and environmental quality standards implementation efforts for other chemicals and stressors. Whether HABs presently represent the greatest threat to inland water quality is debatable, though in inland waters of developed countries they typically cause more severe acute impacts to environmental quality than conventional chemical contamination events. The authors identify several timely research needs.
Environmental toxicology, environmental chemistry, and risk-assessment expertise must interface with ecologists, engineers, and public health practitioners to engage the complexities of HAB assessment and management, to address the forcing factors for HAB formation, and to reduce the threats posed to inland surface water quality.
••
Cincinnati Children's Hospital Medical Center, University of Texas at Austin, Emory University, Wayne State University, University of Toronto, East Carolina University, Baylor College of Medicine, Eastern Virginia Medical School, Children's National Medical Center, University of Texas Southwestern Medical Center, University of Alabama at Birmingham, Nemours Foundation, Case Western Reserve University, Columbia University, University of Pennsylvania, Medical University of South Carolina, State University of New York System, University of South Carolina, Harvard University, University of South Alabama, University of Miami, University of Mississippi, Children's Memorial Hospital, Duke University, Georgia Regents University, University of Southern California
TL;DR: For high-risk children with sickle cell anaemia and abnormal TCD velocities who have received at least 1 year of transfusions and have no MRA-defined severe vasculopathy, hydroxycarbamide treatment can substitute for chronic transfusions to maintain TCD velocities and help to prevent primary stroke.
••
TL;DR: In this paper, an independent b-tagging algorithm based on the reconstruction of muons inside jets, as well as the b-tagging algorithm used in the online trigger, are also presented.
Abstract: The identification of jets containing b hadrons is important for the physics programme of the ATLAS experiment at the Large Hadron Collider. Several algorithms to identify jets containing b hadrons are described, ranging from those based on the reconstruction of an inclusive secondary vertex or the presence of tracks with large impact parameters to combined tagging algorithms making use of multi-variate discriminants. An independent b-tagging algorithm based on the reconstruction of muons inside jets as well as the b-tagging algorithm used in the online trigger are also presented. The b-jet tagging efficiency, the c-jet tagging efficiency and the mistag rate for light flavour jets in data have been measured with a number of complementary methods. The calibration results are presented as scale factors defined as the ratio of the efficiency (or mistag rate) in data to that in simulation. In the case of b jets, where more than one calibration method exists, the results from the various analyses have been combined taking into account the statistical correlation as well as the correlation of the sources of systematic uncertainty.
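The calibration scale factors described in the abstract are ratios of efficiencies measured in data to those in simulation. A minimal sketch with naive binomial uncertainties; the real ATLAS calibrations combine multiple methods and treat correlated systematics far more carefully:

```python
import math

def efficiency(tagged: int, total: int) -> tuple:
    """Tagging efficiency with a simple binomial uncertainty."""
    eps = tagged / total
    return eps, math.sqrt(eps * (1.0 - eps) / total)

def scale_factor(data_eff: tuple, mc_eff: tuple) -> tuple:
    """SF = eps_data / eps_MC, with relative errors added in quadrature
    (assumes the two measurements are uncorrelated, which real
    calibrations do not)."""
    sf = data_eff[0] / mc_eff[0]
    rel = math.hypot(data_eff[1] / data_eff[0], mc_eff[1] / mc_eff[0])
    return sf, sf * rel
```

A scale factor below one, for example, would mean simulation over-tags b jets relative to data, and analyses reweight simulated events accordingly.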
••
Université catholique de Louvain, University of Cambridge, Lamont–Doherty Earth Observatory, National Center for Atmospheric Research, British Antarctic Survey, Centre national de la recherche scientifique, University College London, University of Tokyo, Max Planck Society, University of Barcelona, Potsdam Institute for Climate Impact Research, Spanish National Research Council, Institut Français, University of South Carolina, Memorial University of Newfoundland, University of Bremen
TL;DR: In this article, the authors identify eleven interglacials in the last 800,000 years, a result that is robust to alternative definitions. The onset of an interglacial (glacial termination) seems to require a reducing precession parameter (increasing Northern Hemisphere summer insolation), but this condition alone is insufficient.
Abstract: Interglacials, including the present (Holocene) period, are warm, low land ice extent (high sea level), end-members of glacial cycles. Based on a sea level definition, we identify eleven interglacials in the last 800,000 years, a result that is robust to alternative definitions. Data compilations suggest that despite spatial heterogeneity, Marine Isotope Stages (MIS) 5e (last interglacial) and 11c (~400 ka ago) were globally strong (warm), while MIS 13a (~500 ka ago) was cool at many locations. A step change in strength of interglacials at 450 ka is apparent only in atmospheric CO2 and in Antarctic and deep ocean temperature. The onset of an interglacial (glacial termination) seems to require a reducing precession parameter (increasing Northern Hemisphere summer insolation), but this condition alone is insufficient. Terminations involve rapid, nonlinear, reactions of ice volume, CO2, and temperature to external astronomical forcing. The precise timing of events may be modulated by millennial-scale climate change that can lead to a contrasting timing of maximum interglacial intensity in each hemisphere. A variety of temporal trends is observed, such that maxima in the main records are observed either early or late in different interglacials. The end of an interglacial (glacial inception) is a slower process involving a global sequence of changes. Interglacials have been typically 10–30 ka long. The combination of minimal reduction in northern summer insolation over the next few orbital cycles, owing to low eccentricity, and high atmospheric greenhouse gas concentrations implies that the next glacial inception is many tens of millennia in the future.
••
TL;DR: It is demonstrated that the single-atom alloy (SAA) strategy applied to Pt reduces the binding strength of CO while maintaining catalytic performance, which is vital to other industrial reaction systems, such as hydrocarbon oxidation, electrochemical methanol oxidation, and hydrogen fuel cells.
Abstract: Platinum catalysts are extensively used in the chemical industry and as electrocatalysts in fuel cells. Pt is notorious for its sensitivity to poisoning by strong CO adsorption. Here we demonstrate that the single-atom alloy (SAA) strategy applied to Pt reduces the binding strength of CO while maintaining catalytic performance. By using surface sensitive studies, we determined the binding strength of CO to different Pt ensembles, and this in turn guided the preparation of PtCu alloy nanoparticles (NPs). The atomic ratio Pt:Cu = 1:125 yielded a SAA which exhibited excellent CO tolerance in H2 activation, the key elementary step for hydrogenation and hydrogen electro-oxidation. As a probe reaction, the selective hydrogenation of acetylene to ethene was performed under flow conditions on the SAA NPs supported on alumina without activity loss in the presence of CO. The ability to maintain reactivity in the presence of CO is vital to other industrial reaction systems, such as hydrocarbon oxidation, electrochemical methanol oxidation, and hydrogen fuel cells.
••
TL;DR: In this paper, the authors examined the effect of an organizational response to negative customer reviews on the perceptions and evaluations of prospective customers toward an online negative review and any accompanying hotel response, and found that the provision of an online response enhanced inferences that potential consumers draw regarding the business's trustworthiness and the extent to which it cares about its customers.
••
TL;DR: First- and second-order temporal approximation schemes based on the “Invariant Energy Quadratization” method are developed, in which all nonlinear terms are treated semi-explicitly, leading to a symmetric positive definite linear system to be solved at each time step.
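The IEQ mechanism can be sketched for a generic gradient flow; the symbols below are generic placeholders, not taken from the paper. For a free energy with a linear self-adjoint part and a nonlinear bulk term, one introduces an auxiliary variable that makes the nonlinearity quadratic:

```latex
\begin{aligned}
E(\phi) &= \tfrac{1}{2}(\phi,\mathcal{L}\phi) + \int_\Omega F(\phi)\,dx,
\qquad U := \sqrt{F(\phi) + B},\quad B > 0,\\
E(\phi,U) &= \tfrac{1}{2}(\phi,\mathcal{L}\phi) + \int_\Omega \bigl(U^2 - B\bigr)\,dx,\\
\phi_t &= -\mathcal{M}\mu,\qquad
\mu = \mathcal{L}\phi + H(\phi)\,U,\qquad
H(\phi) = \frac{F'(\phi)}{\sqrt{F(\phi)+B}},\\
U_t &= \tfrac{1}{2}\,H(\phi)\,\phi_t.
\end{aligned}
```

Evaluating H at the previous time level while treating (φ, U) implicitly lets U be eliminated, leaving a linear system for the new φ whose operator is symmetric positive definite, which is the property the TL;DR refers to; energy stability follows from the quadratic form of the modified energy.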
••
TL;DR: In this paper, the authors examined and synthesized extant literature pertaining to barriers to substance abuse and mental health treatment for persons with co-occurring mental health and substance use disorders (COD).
••
TL;DR: The authors examined whether the effects of procedural justice and competing variables (i.e., distributive justice and police effectiveness) on police legitimacy evaluations operate in the same manner across individual and situational differences.
Abstract: This study tests the generality of Tyler’s process-based model of policing by examining whether the effect of procedural justice and competing variables (i.e., distributive justice and police effectiveness) on police legitimacy evaluations operate in the same manner across individual and situational differences. Data from a random sample of mail survey respondents are used to test the “invariance thesis” (N = 1681). Multiplicative interaction effects between the key antecedents of legitimacy (measured separately for obligation to obey and trust in the police) and various demographic categories, prior experiences, and perceived neighborhood conditions are estimated in a series of multivariate regression equations. The effect of procedural justice on police legitimacy is largely invariant. However, regression and marginal results show that procedural justice has a larger effect on trust in law enforcement among people with prior victimization experience compared to their counterparts. Additionally, the distributive justice effect on trust in the police is more pronounced for people who have greater fear of crime and perceive higher levels of disorder in their neighborhood. The results suggest that Tyler’s process-based model is a “general” theory of individual police legitimacy evaluations. The police can enhance their legitimacy by ensuring procedural fairness during citizen interactions. The role of procedural justice also appears to be particularly important when the police interact with crime victims.
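The multiplicative interaction effects described above amount to regressions with product terms. A self-contained sketch on simulated data; the variables and coefficients are invented for illustration and do not reproduce the study's survey measures:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
proc_justice = rng.normal(size=n)                   # perceived procedural justice
victim = rng.integers(0, 2, size=n).astype(float)   # prior victimization (0/1)

# Simulate trust so the procedural-justice slope is steeper for victims,
# mirroring the moderation pattern the study reports (made-up coefficients).
trust = (0.5 * proc_justice + 0.3 * victim
         + 0.4 * proc_justice * victim
         + rng.normal(scale=0.5, size=n))

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), proc_justice, victim, proc_justice * victim])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)
# beta[3] estimates how much the procedural-justice slope differs for victims;
# a near-zero estimate across many such moderators is what "invariance" means.
```

In the study's terms, the invariance thesis holds for a moderator when the interaction coefficient is statistically indistinguishable from zero.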
••
TL;DR: In this paper, the authors examine how complex environments affect a firm's adoption of Corporate Social Responsibility (CSR) practices and propose that these effects will be weighted depending on their relative salience.
Abstract: Multinational enterprises (MNEs) operate in complex transnational organizational fields with multiple, diverse, and possibly conflicting institutional forces. This paper examines how such complex environments affect a firm's adoption of Corporate Social Responsibility (CSR) practices. To capture the effect of transnational fields, we consider the institutional influences of all country environments to which the firm is linked through its portfolio of operations and propose that these effects will be weighted depending on their relative salience. We identify a set of factors that make certain pressures more salient than others, including firm's economic dependence on a particular country, heterogeneity of institutional forces within the firm's transnational field, exposure to leading countries with more stringent CSR templates, and intensity and commitment to particular economic linkages (i.e., foreign direct investment versus international trade). Our hypotheses are tested and supported in a study of 710 US MNEs from 2007 to 2011 with global ties to over 100 countries.
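The salience-weighting idea in this abstract, where institutional pressures count more from countries the firm depends on most, can be sketched as a weighted average. The country scores and weights below are invented for illustration; the paper's actual measures and weighting scheme may differ.

```python
import numpy as np

# Hypothetical example: one MNE linked to three countries through its
# operations. "Pressure" is a per-country CSR stringency score, and the
# weights reflect the firm's economic dependence on each country (here,
# its share of firm sales). Both vectors are illustrative numbers.
country_pressure = np.array([0.9, 0.4, 0.6])   # CSR stringency per country
sales_share = np.array([0.6, 0.3, 0.1])        # economic-dependence weights

# Salience-weighted exposure: countries the firm depends on more
# contribute more to the aggregate institutional pressure it faces.
weighted_pressure = float(np.dot(country_pressure, sales_share))
print(round(weighted_pressure, 3))  # → 0.72
```

Under this scheme, a stringent-CSR country that accounts for 60% of sales dominates the firm's aggregate exposure, which is the intuition behind weighting pressures by relative salience.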
••
TL;DR: In this article, the authors developed a baseline model of residents' support for tourism and compared it with four competing models, each model contains the terms of the baseline model and additional relationships reflecting alternative theoretical possibilities.
Abstract: Social exchange theory (SET) has made significant contributions to research on residents’ support for tourism. Nevertheless, studies are based on an incomplete set of variables and are characterized by alternative, yet contradictory, and theoretically sound research propositions. Using key constructs of SET, this study develops a baseline model of residents’ support and compares it with four competing models. Each model contains the terms of the baseline model and additional relationships reflecting alternative theoretical possibilities. The models were tested using data collected from residents of the Niagara Region, Canada. Results indicated that in the best-fitting model, residents’ support for tourism was influenced by their perceptions of positive impacts. Residents’ power and their trust in government significantly predicted their life satisfaction and their perceptions of positive impacts. Personal benefits from tourism significantly influenced residents’ perceptions of the positive and negative impacts...
••
TL;DR: The results suggest that the ridge in pp collisions arises from the same or similar underlying physics as observed in p+Pb collisions, and that the dynamics responsible for the ridge has no strong √s dependence.
Abstract: ATLAS has measured two-particle correlations as a function of relative azimuthal-angle, $\Delta \phi$, and pseudorapidity, $\Delta \eta$, in $\sqrt{s}$=13 and 2.76 TeV $pp$ collisions at the LHC using charged particles measured in the pseudorapidity interval $|\eta|$<2.5. The correlation functions evaluated in different intervals of measured charged-particle multiplicity show a multiplicity-dependent enhancement at $\Delta \phi \sim 0$ that extends over a wide range of $\Delta\eta$, which has been referred to as the "ridge". Per-trigger-particle yields, $Y(\Delta \phi)$, are measured over 2<$|\Delta\eta|$<5. For both collision energies, the $Y(\Delta \phi)$ distribution in all multiplicity intervals is found to be consistent with a linear combination of the per-trigger-particle yields measured in collisions with less than 20 reconstructed tracks, and a constant combinatoric contribution modulated by $\cos{(2\Delta \phi)}$. The fitted Fourier coefficient, $v_{2,2}$, exhibits factorization, suggesting that the ridge results from per-event $\cos{(2\phi)}$ modulation of the single-particle distribution with Fourier coefficients $v_2$. The $v_2$ values are presented as a function of multiplicity and transverse momentum. They are found to be approximately constant as a function of multiplicity and to have a $p_{\mathrm{T}}$ dependence similar to that measured in $p$+Pb and Pb+Pb collisions. The $v_2$ values in the 13 and 2.76 TeV data are consistent within uncertainties. These results suggest that the ridge in $pp$ collisions arises from the same or similar underlying physics as observed in $p$+Pb collisions, and that the dynamics responsible for the ridge has no strong $\sqrt{s}$ dependence.
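The factorization test mentioned in this abstract, $v_{2,2} = v_2 \times v_2$, can be illustrated numerically. The sketch below builds a synthetic pair distribution with a known cos(2Δφ) modulation and recovers the single-particle $v_2$; it is a toy, not the ATLAS template-fit procedure, and the 10% modulation is an invented value.

```python
import numpy as np

# Toy pair distribution with a cos(2*dphi) ridge-like modulation.
# If each particle's azimuthal distribution is modulated by v2*cos(2*phi),
# the pair distribution carries a 2*v2^2*cos(2*dphi) term (v_{2,2} = v2^2).
v2_true = 0.10
dphi = np.linspace(-np.pi, np.pi, 1000, endpoint=False)
c = 1.0 + 2 * v2_true**2 * np.cos(2 * dphi)

# Extract the second Fourier harmonic of the pair distribution (v_{2,2}),
# then apply factorization to recover the single-particle v2.
v22 = np.mean(c * np.cos(2 * dphi)) / np.mean(c)
v2_extracted = np.sqrt(v22)
print(round(v2_extracted, 4))
```

The extracted value matches the input modulation; in the measurement, observing this factorization across transverse-momentum bins is what supports interpreting the ridge as a per-event single-particle cos(2φ) modulation.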
••
TL;DR: The first search for ν_μ → ν_e transitions by the NOvA experiment finds 6 events in the Far Detector, compared to a background expectation of 0.99 ± 0.11 (syst) events based on the Near Detector measurement.
Abstract: We report results from the first search for ν_μ → ν_e transitions by the NOvA experiment. In an exposure equivalent to 2.74×10²⁰ protons on target in the upgraded NuMI beam at Fermilab, we observe 6 events in the Far Detector, compared to a background expectation of 0.99 ± 0.11 (syst) events based on the Near Detector measurement. A secondary analysis observes 11 events with a background of 1.07 ± 0.14 (syst). The 3.3σ excess of events observed in the primary analysis disfavors 0.1π < δ_CP < 0.5π in the inverted mass hierarchy at the 90% C.L.
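The size of the quoted excess can be checked with a back-of-envelope Poisson tail calculation. This is only a rough cross-check: the published significance comes from a full likelihood treatment including systematic uncertainties, which this sketch ignores.

```python
import math
from statistics import NormalDist

# Counts quoted in the abstract (statistical treatment only)
observed = 6
background = 0.99

# P(N >= observed) for a Poisson-distributed background
p_tail = 1.0 - sum(math.exp(-background) * background**k / math.factorial(k)
                   for k in range(observed))

# One-sided Gaussian-equivalent significance
z = NormalDist().inv_cdf(1.0 - p_tail)
print(f"p = {p_tail:.2e}, Z ≈ {z:.2f} sigma")
```

This simple tail probability lands near the 3.3σ quoted in the abstract, which is expected since the background uncertainty is small relative to the excess.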
••
TL;DR: It is shown that controlling the defect configuration in graphene is critical to overcoming a fundamental limitation posed by quantum capacitance and opening new channels for ion diffusion.
Abstract: Defects are often written off as performance limiters. Contrary to this notion, it is shown that controlling the defect configuration in graphene is critical to overcoming a fundamental limitation posed by quantum capacitance and opening new channels for ion diffusion. Defect-engineered graphene flexible pouch capacitors with energy densities 500% higher than those of state-of-the-art supercapacitors are demonstrated.
••
TL;DR: In this article, it is demonstrated that solid Na metal can be electrochemically plated/stripped at ambient temperature with high efficiency (>99%) on both copper and inexpensive aluminum current collectors.